COMPUTER SCIENCE: Armchair victory: Computers that recognize everyday objects

JIANXIONG XIAO TYPES “CHAIR” INTO GOOGLE’S search engine and watches as hundreds of images populate his screen. He isn’t shopping — he is using the images to teach his computer what a chair looks like.

This is much harder than it sounds. Although computers have come a long way toward being able to recognize human faces in photos, they don’t do so well at understanding the objects in everyday 3-D scenes. For example, when Xiao, an assistant professor of computer science at Princeton, tested a “state-of-the-art” object recognition system on a fairly average-looking chair, the system identified the chair as a Schipperke dog.

The problem is that our world is filled with stuff that computers find distracting. Tables are cluttered with the day’s mail, chairs are piled with backpacks and draped with jackets, and objects are swathed in shadows. The human brain filters out these distractions, but computers falter when they encounter shadows, clutter and occlusion by other objects. Improving software for object recognition has many benefits, from better ways to analyze security-camera images to computer vision systems for robots. “We start with chairs because they are the most common indoor objects,” Xiao said, “but of course our goal is to dissect complex scenes.”

Xiao has developed an approach to teaching computers to recognize objects that he likes to call a “big 3-D data” approach because he feeds a large number of examples into the computer to teach it what an object — in this case a chair — looks like.

The chairs that he uses as training data are not pictures of chairs, nor are they the real thing. They are three-dimensional models of chairs, created with computer graphics (CG) techniques, that are available from 3D Warehouse, a free service that allows users to search and share 3-D models. With help from graduate student Shuran Song, Xiao scans the 3-D models with a virtual camera and depth sensor that maps the distance to each point on the chair, creating a depth map for each object. These depth maps are then converted into a collection of points, or a “point cloud,” that the computer can process and use to learn the shapes of chairs.
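Conceptually, the depth-map-to-point-cloud step works like the back-projection sketched below, which assumes a simple pinhole camera model; the function name, camera parameters and random depth map are illustrative placeholders, not the researchers’ actual code.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, in meters) into 3-D points using a
    pinhole camera with focal lengths (fx, fy) and principal point (cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no depth reading

# Stand-in for a 480 x 640 depth map rendered from a CG chair model
depth = np.random.uniform(0.5, 3.0, size=(480, 640))
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)   # (N, 3) array of x, y, z coordinates
```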

The advantage of using CG chairs rather than the real thing, Xiao said, is that the researchers can rapidly record the shape of each chair from hundreds of different viewing angles, creating a comprehensive database about what makes a chair a chair. They can also capture hundreds of chairs of various shapes — including office chairs, sofa chairs, kitchen chairs and the like. “Because it is a CG chair and not a real object, we can put the sensor wherever we need,” Xiao said.

For the technique to work, the researchers also must help the computer learn what a chair is not like. For this, the researchers use a 3-D depth sensor, like the one found in the Microsoft Kinect sensor for the Xbox 360 video game console, to capture depth information from real-world, non-chair objects such as toilets, tables and washbasins.

Like a child learning to do arithmetic by making a guess and then checking his or her answers, the computer studies examples of chairs and non-chairs to learn the key differences between them. “The computer can then use this knowledge to tell not only whether an object is a chair but also what type of chair it is,” Song said.

Once this repository of chair knowledge is built, the researchers put it to use to search for chairs in everyday scenes. To improve the accuracy of detection, the researchers built a virtual “sliding shapes detector” that sweeps slowly over the scenes, like a magnifying glass moving over a picture, only in three dimensions, looking for structures that the computer has learned are associated with different types of chairs.
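A toy version of that idea looks something like the sketch below: a fixed-size box steps through a voxelized scene and a previously trained classifier scores each window. The voxel grid, stride and scoring function here are invented placeholders; the published detector uses richer 3-D features and classifiers trained on the CG chair data.

```python
import numpy as np

def sliding_shapes(voxels, box=(30, 30, 30), stride=10, score_fn=None, threshold=0.5):
    """Slide a 3-D detection window over an occupancy grid and score each position.

    voxels   : 3-D array (X x Y x Z) of occupancy values from the depth data
    box      : size of the detection window, in voxels
    score_fn : trained classifier mapping a window to a chair-likeness score
    Returns (x, y, z, score) tuples for windows scoring above the threshold.
    """
    detections = []
    bx, by, bz = box
    X, Y, Z = voxels.shape
    for x in range(0, X - bx + 1, stride):
        for y in range(0, Y - by + 1, stride):
            for z in range(0, Z - bz + 1, stride):
                window = voxels[x:x + bx, y:y + by, z:z + bz]
                score = score_fn(window)
                if score > threshold:
                    detections.append((x, y, z, score))
    return detections

# Toy scene and scoring function standing in for the trained chair model
scene = (np.random.rand(100, 100, 60) > 0.9).astype(float)
hits = sliding_shapes(scene, score_fn=lambda w: w.mean(), threshold=0.12)
```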

Because the computer was trained on such a large and comprehensive database of examples, the program can detect chairs that are partially blocked by tables and other objects. It can also spot chairs that are piled with clutter, because the program can subtract the clutter away. And because detection relies on depth information rather than color, the cue that conveys shape in most photos, the program can ignore the effect of shadows on the objects.

In tests, the new method significantly outperformed the state-of-the-art systems on images, Xiao said. The same technology can also be generalized to other object categories, such as beds, tables and sofas. The researchers presented the work, titled “Sliding shapes for 3D object detection in RGB-D images,” at the 2014 European Conference on Computer Vision.

-By Catherine Zandonella

COMPUTER SCIENCE: Tools for the artist in all of us


FROM TRANSLATING FOREIGN LANGUAGES to finding information in minutes, computers have extended our productivity and capability. But can they make us better artists?

Researchers in the Department of Computer Science are working on ways to make it easier to express artistic creativity without the painstaking hours spent learning new techniques. “Computers are making it faster and easier for beginners to do a lot of things that are time-consuming,” said Jingwan (Cynthia) Lu, who earned her Ph.D. at Princeton in spring 2014. “I’m interested in using computers to handle some of the more tedious tasks involved in the creation of art so that humans can focus their talents on the creative process.”

The techniques that Lu is creating are far more versatile than the simple drawing and painting tools that come pre-installed on most computers, yet they are much easier to use than the software marketed to artists and designers. “Lu has created tools that enable artistic expression by leveraging the use of computation,” said Professor of Computer Science Adam Finkelstein, Lu’s dissertation adviser.

Last year, Lu introduced RealBrush, a project that permits people to paint on a computer using a variety of media, ranging from traditional paints to unconventional materials such as glittered lip gloss. The software contained a library of photographs of real paint strokes. As the artist painted on a tablet or touch screen, the software pieced together the stored paint strokes.

This year, Lu has introduced two new techniques that further her goal of making it easy to create art digitally:

decoBrush

decoBrush allows the user to create floral designs and other patterns such as those found as borders on invitations and greeting cards. Many design programs offer such borders but they come in set shapes and are not easy to customize, requiring a designer to painstakingly manipulate individual curves and shapes.

With decoBrush, users can create highly structured patterns simply by choosing a style from a gallery and then sketching curves to form the intended design or layout. The decoBrush software transforms the sketched paths into structured patterns in the style chosen. For example, a user might select a floral pattern and then sketch a heart, creating a heart with a floral border.

The challenge for Lu and her codevelopers was to guide the computer to learn existing decorative structured patterns and then apply automatic algorithms to replace the tedious process of manipulating the individual curves and shapes.

“Given a target path such as a sketch that the pattern should follow, the computer copies, alters and merges segments of existing pre-designed patterns, which we call ‘exemplars,’ to compose a new pattern,”  Lu said. “It does this by searching for candidate segments that have similar curviness to the target sketch that the user drew. The candidate segments are then copied and merged using a specialized texture synthesis algorithm that transforms the curves to align with each other seamlessly at the segment boundaries.”
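A toy version of the matching step Lu describes might compare the “curviness” of the user’s sketch against each exemplar segment before copying and merging, as in the sketch below. The turning-angle measure and brute-force search are simplifications chosen for illustration; the actual system merges segments with a specialized texture synthesis algorithm.

```python
import numpy as np

def turning_angles(path):
    """Approximate the curviness of a 2-D polyline by its turning angle at each vertex."""
    d = np.diff(path, axis=0)
    headings = np.arctan2(d[:, 1], d[:, 0])
    return np.diff(headings)

def best_matching_segment(sketch, exemplar_segments):
    """Return the exemplar segment whose curvature profile best matches the sketch."""
    target = turning_angles(sketch)
    best, best_cost = None, np.inf
    for segment in exemplar_segments:
        candidate = turning_angles(segment)
        n = min(len(target), len(candidate))
        cost = np.mean((target[:n] - candidate[:n]) ** 2)
        if cost < best_cost:
            best, best_cost = segment, cost
    return best

# A gently curved user sketch and two exemplar segments, one straight, one curved
t = np.linspace(0, np.pi, 50)
sketch = np.stack([t, 0.3 * np.sin(t)], axis=1)
straight = np.stack([t, np.zeros_like(t)], axis=1)
curved = np.stack([t, 0.25 * np.sin(t)], axis=1)
match = best_matching_segment(sketch, [straight, curved])   # picks the curved exemplar
```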

Lu constructed decoBrush with assistance from Connelly Barnes, who earned his doctorate from Princeton in 2011 and is now at the University of Virginia; undergraduate Connie Wan, Class of 2014; and Finkelstein. She also collaborated with Paul Asente and Radomir Mech of Adobe Research, where Lu interned for three summers and now works as a researcher. Lu presented decoBrush at the Association for Computing Machinery SIGGRAPH conference in August 2014.

RealPigment

A second project enables artists and novices to explore mixing of colors in digital painting, with the goal of making the digital results more faithful to the physical behaviors of paints.

Software programs for painting are not adept at combining colors, especially when they are simulating complex media such as oil paints or watercolors. One of the most common techniques for combining colors, alpha blending, estimates that yellow and blue make gray rather than green. Lu and her colleagues came up with a different method for figuring out how colors will blend using techniques borrowed from real-world (non-digital) painting.
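The gray-versus-green claim is easy to verify with a few lines of arithmetic: a 50/50 alpha blend simply averages the two RGB values, which lands squarely on gray.

```python
import numpy as np

yellow = np.array([255, 255, 0], dtype=float)
blue = np.array([0, 0, 255], dtype=float)

alpha = 0.5
blend = alpha * yellow + (1 - alpha) * blue   # standard alpha blending
print(blend)   # [127.5 127.5 127.5], a mid-gray, not the green a painter expects
```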

The researchers use color charts that artists make to find out what color arises when overlaying or mixing two colors of paint. Making these color charts involves painting rows of one color, and then overlaying them with columns each containing a different color. The resulting grid reveals how all pairs of colors will look when layered. Similar charts can be made for mixed rather than layered colors.

Lu’s approach is to feed these color charts into the computer to teach it how to combine colors in a specific medium, such as oil paints or watercolors. “The goal is to learn from existing charts to predict the result of compositing new colors,” Lu said. “We apply simplifying assumptions and prior knowledge about pigment properties to reduce the number of learning parameters, which allows us to perform accurate predictions with limited training data.”
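One minimal way to use such a chart computationally is sketched below: treat the measured grid as a lookup table and return the cell whose pair of base colors is nearest to the two colors being composited. This nearest-neighbor lookup is a deliberately crude stand-in for the learned pigment model Lu describes, and the synthetic chart values are placeholders for real paint measurements.

```python
import numpy as np

def predict_composite(top, bottom, row_colors, col_colors, chart):
    """Predict the result of layering `top` over `bottom` from a measured color chart.

    row_colors : (R, 3) base colors painted as rows
    col_colors : (C, 3) base colors overlaid as columns
    chart      : (R, C, 3) measured color of each row/column pairing
    """
    i = np.argmin(np.linalg.norm(row_colors - bottom, axis=1))
    j = np.argmin(np.linalg.norm(col_colors - top, axis=1))
    return chart[i, j]

# Tiny synthetic chart standing in for photographed paint samples
rng = np.random.default_rng(0)
rows = rng.uniform(0, 255, (4, 3))
cols = rng.uniform(0, 255, (4, 3))
chart = rng.uniform(0, 255, (4, 4, 3))
print(predict_composite(cols[2], rows[1], rows, cols, chart))
```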

Lu’s research was supported by a Siebel Fellowship and funding from Google. The project included Willa Chen, Class of 2013; Stephen DiVerdi of Google; Barnes and Finkelstein. The work was presented at the June 2014 International Symposium on Non-Photorealistic Animation and Rendering.

COMPUTER SCIENCE: Fierce, Fiercer, Fiercest: Software enables rapid creations


A NEW SOFTWARE PROGRAM MAKES IT EASY for novices to create computer-based 3-D models using simple instructions such as “make it look scarier.” The software could be useful for building models for 3-D printing and designing virtual characters for video games.

The program, called AttribIt, allows users to drag and drop building blocks of a 3-D shape from a menu. Next the user can adjust the characteristics of the model — making it “scarier” or “sleeker” for example — by sliding a bar at the bottom of the screen.

“We wanted to create a program that could be used by people who don’t have any training in computer graphics or design,” said Siddhartha Chaudhuri, a lecturer at Cornell University who co-wrote the software while a postdoctoral researcher at Princeton, together with Professor of Computer Science Thomas Funkhouser, University of Massachusetts-Amherst Assistant Professor Evangelos Kalogerakis and graduate student Stephen Giguere.

“The challenge was to build a tool that could create a model — such as an intricate animal with claws and ears — with only simple commands and common adjectives instead of the complex geometric commands found in most other 3-D design programs,” Funkhouser said.

AttribIt makes new objects by combining parts from repositories of previously made models, Chaudhuri explained. The parts in the repository have been ranked for their “scariness,” “gracefulness” and other everyday adjectives using machine learning algorithms trained on feedback from anonymous volunteers.

The rankings are based on crowd-sourced training data from Amazon Mechanical Turk, an online research platform. Random participants are asked to view two shapes and say which one is scarier. The AttribIt software then builds a model from these value judgments that predicts the relative “scariness” of any shape.

“For example, given a bunch of animal heads, the software assigns each a number which expresses how ‘scary’ it thinks that head is,” Chaudhuri said. “You can sort the animal heads by this predicted scariness to get a sequence that goes from bunnies to velociraptors.”
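In machine learning terms, this is learning a relative attribute from pairwise comparisons. The sketch below shows one standard way to do it, training a linear classifier on feature differences so that the learned weights score “scarier” shapes higher; the shape descriptors and simulated judgments are placeholders, not AttribIt’s actual features or training data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical shape descriptors for 50 animal heads
features = rng.normal(size=(50, 16))
hidden_scariness = features @ rng.normal(size=16)   # used only to simulate judgments

# Simulated crowd judgments: (i, j) means shape i was judged scarier than shape j
pairs = [(i, j) for i in range(50) for j in range(50)
         if i != j and hidden_scariness[i] > hidden_scariness[j]][:300]

# RankSVM-style trick: classify the sign of the feature difference
X = np.array([features[i] - features[j] for i, j in pairs] +
             [features[j] - features[i] for i, j in pairs])
y = np.array([1] * len(pairs) + [0] * len(pairs))
model = LogisticRegression(max_iter=1000).fit(X, y)

# The learned weights give every shape a score, sorting bunnies toward velociraptors
scores = features @ model.coef_.ravel()
ranking = np.argsort(scores)
```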

The researchers tested AttribIt on users who had no prior 3-D modeling experience, including Chaudhuri’s 11-year-old nephew. “People were very good at creating models in a very short amount of time,” Chaudhuri said.

In addition to creating 3-D models, the approach can be used in design tasks such as making a website look “more artistic.” The research was supported by funding from Google, Adobe, Intel and the National Science Foundation and was presented at the Association for Computing Machinery Symposium on User Interface Software and Technology in October 2013.

-By Catherine Zandonella

COMPUTER SCIENCE: Internet traffic moves smoothly with Pyretic

AT 60 HUDSON ST. IN LOWER MANHATTAN, a fortress-like building houses one of the Internet’s busiest exchange points. Packets of data zip into the building, are routed to their next destination, and zip out again, all in milliseconds. Until recently, however, the software for managing these networks required a great deal of specialized knowledge, even for network experts.

Now, computer scientists at Princeton have developed a programming language called Pyretic that makes controlling the flow of data packets easy and intuitive — and more reliable. The new language is part of a trend known as Software-Defined Networking, which gives a network operator direct control over the underlying switches that regulate network traffic.

“In order to make these networks work, we have to be able to program them effectively, to route traffic to the right places, and to balance the traffic load effectively across the network instead of creating traffic jams,” said David Walker, professor of computer science, who leads the project with Jennifer Rexford, the Gordon Y.S. Wu Professor of Engineering and professor of computer science. “Pyretic allows us to make sure packets of information get to where they are going as quickly, reliably and securely as possible.”

Pyretic is open-source software that uses the Python programming language and lowers the barrier to managing network switches, routers, firewalls and other components of a network. Since its initial release in April 2013, the community of developers who are using the language to govern networks has grown quickly.
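Pyretic’s key idea is that small policies compose into bigger ones. The toy Python sketch below illustrates that compositional style with match, forward, sequential and parallel operators; all of the class and function names here are invented for illustration and are not Pyretic’s actual API.

```python
# Toy illustration of compositional packet policies (not Pyretic's real API).
class Policy:
    def __call__(self, packet):          # a policy maps a packet to a list of output packets
        raise NotImplementedError
    def __rshift__(self, other):         # sequential composition: apply self, then other
        return Sequential(self, other)
    def __add__(self, other):            # parallel composition: apply both at once
        return Parallel(self, other)

class Match(Policy):
    def __init__(self, **fields): self.fields = fields
    def __call__(self, pkt):
        return [pkt] if all(pkt.get(k) == v for k, v in self.fields.items()) else []

class Forward(Policy):
    def __init__(self, port): self.port = port
    def __call__(self, pkt): return [dict(pkt, outport=self.port)]

class Sequential(Policy):
    def __init__(self, a, b): self.a, self.b = a, b
    def __call__(self, pkt): return [q for p in self.a(pkt) for q in self.b(p)]

class Parallel(Policy):
    def __init__(self, a, b): self.a, self.b = a, b
    def __call__(self, pkt): return self.a(pkt) + self.b(pkt)

# Send web traffic out port 1 and anything destined to 10.0.0.2 out port 2
policy = (Match(dstport=80) >> Forward(1)) + (Match(dstip="10.0.0.2") >> Forward(2))
print(policy({"dstip": "10.0.0.2", "dstport": 80}))   # matched by both rules
```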

Additional contributors include Associate Research Scholar Joshua Reich and graduate student Christopher Monsanto of Princeton’s Department of Computer Science as well as Nate Foster, an assistant professor of computer science at Cornell University. The project received support from the U.S. Office of Naval Research, the National Science Foundation and Google.

-By Catherine Zandonella

COMPUTER SCIENCE: Security check: A strategy for verifying software that could prevent bugs

IN APRIL 2014, INTERNET USERS WERE SHOCKED to learn of the Heartbleed bug, a vulnerability in the open-source software used to encrypt Internet content and passwords. The bug existed for two years before it was discovered.

Detection of vulnerabilities like Heartbleed is possible with a new approach pioneered by Andrew Appel, the Eugene Higgins Professor of Computer Science. With funding from the Defense Advanced Research Projects Agency (DARPA) and the National Science Foundation, Appel has developed a strategy for verifying software to ensure that it is performing correctly, and the technique could be applied to the Internet’s widely used encryption system, known as “Secure Sockets Layer.”

“The point is that formal program verification of correctness is now becoming feasible,” said Appel. “The downside of the approach is the expense. But for important and widely used software, it may be less expensive than the consequences of not doing it.”

-By Catherine Zandonella

Annual Report

RESEARCH BY THE NUMBERS

Fiscal year 2014

Sponsored Research Projects

$199.8 M Princeton University campus expenditures
$79.5 M U.S. Department of Energy’s Princeton Plasma Physics Laboratory (PPPL) expenditures
1,373 Total sponsored research projects
262 Projects supported by corporate or foundation funding

Technology Licensing Activities

106 Invention disclosures received
175 Patent applications filed
28 Patents issued
$142,980,000 Gross royalty income
25 Technologies licensed

Funding Sources for Sponsored Research Activities (Campus)


RESILIENT SHORES: After Sandy, climate scientists and architects explore how to co-exist with rising tides

AFTER THE WIND, RAIN AND WAVES of Hurricane Sandy subsided, many of the modest homes in the Chelsea Heights section of Atlantic City, New Jersey, were filled to their windows with murky water. Residents returned to find roads inundated by the storm surge. Some maneuvered through the streets by boat.

This mode of transport could become more common in neighborhoods like Chelsea Heights as coastal planners rethink how to cope with the increasing risk of hurricane-induced flooding over the coming decades. Rather than seeking to defend buildings and infrastructure from storm surges, a team of architects and climate scientists is exploring a new vision, with an emphasis on living with rising waters. “Every house will be a waterfront house,” said Princeton Associate Professor of Architecture Paul Lewis. “We’re trying to find a way that canals can work their way through and connect each house, so that kayaks and other small boats are able to navigate through the water.”

The researchers aim for no less than a reinvention of flood hazard planning for the East Coast. A new approach, led by Princeton Professor of Architecture Guy Nordenson, rejects the strict dividing line between land and water that coastal planners historically have imposed, favoring the development of “amphibious suburbs” and landscapes that can tolerate periodic floods. These resilient designs can be readily modified as technologies, conditions and climate predictions change.

To plan for future flood risks, Princeton climate scientists are using mathematical models of hurricanes to predict storm surge levels over the next century, taking into account the effects of sea level rise at different locations. Four design teams — from Princeton, Harvard University, the City College of New York and the University of Pennsylvania — are using these projections to guide resilience plans for specific sites along the coast: Atlantic City; Narragansett Bay in Rhode Island; New York City’s Jamaica Bay; and Norfolk, Virginia. [See Planning for resilience up and down the coast.]

The designs will serve as a guide for the U.S. Army Corps of Engineers’ North Atlantic Coast Comprehensive Study, a plan to reduce the risk of flood damage to coastal communities, which is due to Congress in January 2015. “The Army Corps understands that they have to revisit what it means to make structures that are resilient,” said Enrique Ramirez, a postdoctoral research associate in architecture at Princeton and the project’s manager. He serves as a liaison between the design teams and Army Corps officials in regional districts.

The idea for the project grew out of Nordenson’s work on a pre-Sandy project to develop creative proposals for adaptation to rising sea levels in New York Harbor. The project culminated in a book, On the Water: Palisade Bay, and a 2010 exhibition, Rising Currents, at the Museum of Modern Art in New York City. The proposals included repairing and lengthening existing piers, as well as planting wetlands and building up small islands inside the harbor. “It was forward thinking because we showed that there are benefits to building things in the water,” Nordenson said. Other Princeton contributors to On the Water were engineering professors James Smith and Ning Lin (then a graduate student) and climate scientist Michael Oppenheimer of the Woodrow Wilson School of Public and International Affairs.

Hurricane Sandy heightened the urgency of long-term coastal planning. While advising a New York State commission on future land use strategies, Nordenson began discussing a broader plan for the East Coast with Joseph Vietri of the U.S. Army Corps of Engineers and Nancy Kete of the Rockefeller Foundation. This discussion led to the Structures of Coastal Resilience project, which is funded by the Rockefeller Foundation and began in October 2013. The project is managed by Princeton’s Andlinger Center for Energy and the Environment and will extend resilient design concepts to other coastal regions, as well as integrate hurricane storm surge predictions with projections of local sea level rise.

One of the project’s goals is to encourage a reconsideration of the absolute flood zone boundaries on maps produced by the Federal Emergency Management Agency (FEMA), which determine building code requirements and insurance rates. Climate science shows that the geographical borders of flood risk should be based on the probabilities and outcomes of different storm events, not the placements of artificial levees that may be overtopped by high storm surges. Indeed, many of the homes and businesses ravaged by Hurricane Sandy were not located in flood hazard zones on FEMA’s maps. “Sandy really brought home the message that we have to do a lot better in the future,” said Oppenheimer, the Albert G. Milbank Professor of Geosciences and International Affairs. “Because while we sit here thinking about it, the risk is only increasing.”

The low-lying barrier island that is home to Atlantic City is particularly vulnerable to storm surges, especially in parts of the city, such as residential Chelsea Heights, that were built on wetlands. Researchers are exploring ways to make existing neighborhoods (Panel A) more resilient in the face of occasional storm surges. By raising houses, using roads as low levees and letting abandoned lots return to wetland conditions, these neighborhoods can become “amphibious suburbs” (Panel B). A similar approach can be applied to existing canal neighborhoods (Panel C), making them more resilient and tolerant of flooding (Panel D).

Smarter building codes are also needed, according to Lin, an assistant professor of civil and environmental engineering, who heads the effort to predict storm surge levels. Current building code books primarily address earthquake risks. “A tiny few chapters are for wind, and very few pages are for flooding,” Lin said. Large-scale, long-term projects such as levees and seawalls have been the standard approach to coastal protection. But the Coastal Resilience team puts forth a different view, one of coping with occasional flooding rather than fighting it. “We will never be able to prevent such hazards. We can only be prepared to reduce their impact,” Lin said.

Resilient designs call for supporting, revitalizing and in some cases reengineering natural features such as wetlands and beach dunes. This so-called “soft infrastructure” can reduce the impact of waves, improve water quality and create new recreational spaces for coastal residents and visitors. Rather than the exclusive construction of barriers, the project’s plans include “layered systems of natural and engineered structures that will respond in different ways to different hazards,” Nordenson said. “It is a more nuanced and more resilient approach.”

Flexible design is also an important component of the project. Ideally, the sizes and arrangements of structures will be adaptable as predictive models improve. Scientists continue to debate how climate change will affect the strength and frequency of storms. “But we are trying to take what we know right now and do the best job we can in accounting for the uncertainties in what we know, and use that to explore how we should be thinking about adaptation,” said Smith, the William and Edna Macaleer Professor of Engineering and Applied Science and chair of the Department of Civil and Environmental Engineering at Princeton.

Meteorological measurements show that the extreme winds of a swirling hurricane transfer energy to the ocean surface. The winds and the storm’s low air pressure cause a dome of water to rise, generating a surge of high water when the storm makes landfall. “When you think of the storm, you think of the wind and the rain. That’s what seems scary,” said Talea Mayo, a postdoctoral research associate who is working with Lin to model storm surges. But the coastal storm surge was the main cause of deaths and property damage from Hurricane Sandy.

To predict future storm surges, Lin and Mayo are using thousands of synthetic hurricanes modeled by Kerry Emanuel, an atmospheric scientist at the Massachusetts Institute of Technology. “Anytime you’re studying hurricanes, especially so far north, your historical data are really limited because there just aren’t enough events,” Mayo said. “So instead of basing our risk analysis on historical data, we use synthetic data.”

Hurricane damage 1944

Storms have caused significant damage to Atlantic City’s iconic boardwalk throughout its existence. Shown here is South Inlet during the Great Atlantic Hurricane of 1944. Image from the archive of the Coastal and Hydraulics Laboratory, Engineer Research and Development Center, Vicksburg.

Emanuel’s team uses existing models of global climate circulation patterns to generate 3,000 synthetic, physically possible storms for nine different climate change scenarios at each of the four study sites — a total of more than 100,000 storms. These hurricanes exist only in computer code, but their wind speeds, air pressure levels and patterns of movement are based on physical laws and information from recorded storms. Mayo and Lin plug these parameters into algorithms that work like sophisticated versions of high school physics problems: solve the equations for conservation of mass and momentum to estimate maximum water levels at each site. Variations in tide levels, coastline shapes and seafloor topographies add further layers of complexity.
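One simplified way to turn such an ensemble into planning numbers is sketched below: treat each simulated year’s maximum surge as a sample, then read off the water level exceeded with the desired annual probability. The Gumbel-distributed samples here are a stand-in for outputs of the hydrodynamic models driven by Emanuel’s synthetic hurricanes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder for simulated annual maximum surge heights (meters) at one site
annual_max_surge = rng.gumbel(loc=1.0, scale=0.5, size=10_000)

def return_level(samples, return_period_years):
    """Water level exceeded, on average, once per `return_period_years`."""
    annual_exceedance_prob = 1.0 / return_period_years
    return np.quantile(samples, 1.0 - annual_exceedance_prob)

for T in (100, 500, 2500):
    print(f"{T}-year flood level: {return_level(annual_max_surge, T):.2f} m")
```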

To make reasonable projections of future flood hazards, the models must also account for sea level rise. According to geoscientist Chris Little, an associate research scholar working with Oppenheimer, storm surges are a short-term version of sea level rise. “They both contribute to coastal flooding,” Little said. “Climate change will be felt through the superposition of changes in long- and short-term variations in sea level.”

And when it comes to sea level rise, local projections are crucial for planning efforts. A constellation of factors influences regional differences in sea levels, including the vertical movement of the Earth’s surface, changes in ocean circulation and the melting of glacial ice. Little and Oppenheimer were among the authors of a study published in June 2014 in the journal Earth’s Future, which used model-based and historical tide gauge data for sites around the globe to project local sea levels over the next two centuries.

“We live in a hotspot, where the local sea level rise has been higher in the past than the global mean, and we expect it to continue to be higher in the future,” Oppenheimer said — as much as 40 percent higher than the worldwide average. One reason for this is that the land along the East Coast is slowly sinking (by a millimeter or two each year), a legacy of the ice sheet that covered much of North America until about 12,000 years ago. The ice sheet depressed Earth’s crust over present-day Canada, causing the liquid mantle beneath to bulge southward. Now that the glaciers have melted, the mantle is being gradually redistributed, flowing out from under the East Coast of the United States.

Sea levels respond slowly to changes in climate, including the current warming trend, caused in part by increased carbon dioxide levels from human activity. Because future carbon emissions depend on human decisions, predictions of sea level rise come with built-in uncertainty. This project attempts to meet this challenge head-on: “A major purpose of the project is to think about doing a more thorough job of assessing the uncertainty in these flood zones,” Little said. “I think it’s difficult but worthwhile.”

Resilient designs call for planning and reengineering natural features such as salt marshes, submerged aquatic vegetation and wetlands, as in this imagined coastline for Staten Island, south of Manhattan.

Because of this uncertainty, climate scientists deal in probabilities. The Princeton team has projected flood levels for storms with return periods of 100, 500 and 2,500 years. A return period of 100 years is akin to a “100-year flood” — this means that in any given year there is a 1 percent chance of that flood level occurring. These forecasted flood risks are key to making smart building and design decisions in the face of climate change. “Every decision-maker is going to look and decide what risk is tolerable for their region in the context of how much it would cost to defend against that risk,” Oppenheimer said.
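The “100-year” label is easy to misread as once-a-century certainty. Under the usual simplifying assumption that years are independent, the short calculation below shows that a 1 percent annual chance amounts to roughly a one-in-four chance of at least one such flood over a 30-year span.

```python
annual_prob = 0.01   # "100-year flood": 1 percent chance in any given year
years = 30

# Probability of at least one such flood during the period, assuming independent years
p_at_least_one = 1 - (1 - annual_prob) ** years
print(f"{p_at_least_one:.1%}")   # about 26%
```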

The design teams are beginning to test their plans against the climate scientists’ predictions. Simulated local water levels will reveal which structures may be inundated by future storms and at what probabilities. These analyses may prompt the designers to adjust the heights of buildings, roads or beach dunes in their blueprints. And as the science improves, this process will repeat itself. “Over time, others can start to add things that we haven’t been able to include, like the relationship of the wind and the flood,” Nordenson said.

True resilience necessitates a change in outlook. In Atlantic City, the focus area for Lewis and the Princeton group, a narrow channel of water separates the Chelsea Heights neighborhood from the city’s famous boardwalk and high-rise casinos, where many residents work. “You have extensive areas of suburban neighborhoods that are built on wetlands,” said Lewis. “Two binary positions are retreat, where you return these to wetlands, and fortification, which is the seawall approach. And both of them are problematic.”

The team recognizes the social and economic importance of maintaining the neighborhood. But barricading it behind a seawall may be prohibitively expensive, not to mention unattractive. More important, metal or concrete seawalls can actually exacerbate flooding when areas behind them are inundated by heavy rain. Lewis and his team have a fundamentally different vision for places like Chelsea Heights: “We’re looking at developing an amphibious suburb,” he said. “We want water to come in. If there are berms [earthen seawalls] that are put in, they should be built with a series of valves.”

The plans for Chelsea Heights include raised homes and roads interspersed with canals and revitalized wetlands. Lewis hopes these ideas will be useful to policymakers and to the Army Corps of Engineers, which may apply the Princeton team’s concepts to Chelsea Heights and other similar communities along the New Jersey shore. By the end of this century, grassy suburban lawns may be transformed into salt marshes.

PLANNING FOR RESILIENCE UP AND DOWN THE COAST

Natural features play a pivotal role in the designs for two of the project’s other focal regions, New York’s Jamaica Bay and Rhode Island’s Narragansett Bay.

  • The plan for Jamaica Bay includes the use of local dredged materials to build up land for marsh terraces, which can serve to reduce wind fetch as well as improve water quality and encourage sediment deposition, according to Catherine Seavitt, an associate professor of landscape architecture at the City College of New York. In particular, her team hopes to expand the restoration of a native wetland grass, Spartina alterniflora, an effective attenuator of wind and waves that also provides valuable ecological habitat.
  • Michael Van Valkenburgh and Rosetta Elkin lead the Harvard design effort for Narragansett Bay. One of their plans involves relocating two critical reservoirs that supply drinking water to the city of Newport. The reservoirs are currently vulnerable to coastal flooding; the proposed project would use dredged material from the original reservoir to fill in and extend the existing maritime forest, now a rare ecosystem along the New England coast. The larger forest, designed by the team, would mitigate coastal erosion, attenuate wave action, and become a valuable recreational area for surrounding communities.
  • The plan for the project’s other site, the Norfolk, Virginia, area of Chesapeake Bay, calls for a more extensive reshuffling of settlement and infrastructure, according to Dilip da Cunha, an adjunct professor of landscape architecture at the University of Pennsylvania. Of the four sites, Norfolk is expected to see the most dramatic sea level rise, and is home to the world’s largest naval station and a vital commercial port. The UPenn team’s designs stem from the natural network of fractal-like interfaces where land and water meet. The plan seeks to bolster “fingers of higher ground” that will be more robust to gradual sea level rise as well as storm surges. “The higher grounds could be for housing, schools and other facilities, and the low grounds could accommodate various things, from marsh grasses to football fields,” da Cunha said. “Things that can take water in the case of a storm event, but will not endanger lives.”

-By Molly Sharlach

Poetry in Silico: Bringing digital tools to the study of poetry


ONCE UPON A MIDNIGHT DREARY, an English professor at Princeton sat in her office, musing over many volumes of forgotten lore about the right way to read a poem.

There were handbooks, essays, letters from one poet to another, and even newspaper articles dedicated to arguments over what rhythms should be used, which syllables should be stressed, and where the reader should pause — all elements of prosody, the study of poetic form.

Meredith Martin, an associate professor and expert on English poetry of the 19th and 20th centuries, had assembled the sources to explore how the thinking about these “rules” for reading poems had changed during the Victorian and early Modernist periods. These rules, found in versification manuals and grammar schoolbooks of the period, sometimes appeared as markings on the poem itself — typically accents on stressed syllables, little u-shaped marks called breves atop non-stressed syllables, and vertical lines to indicate pauses.

The problem Martin faced was how to search across her assembled sources. Although many of the works she had collected already had been digitized by initiatives such as Google Books, others were scattered across databases, and most important, were unsearchable. Letters with prosodic marks are not recognized by typical computer search techniques.
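Part of the difficulty is that a syllable carrying an accent, breve or macron is a different string of characters from its unmarked form, so a plain text search misses it. The snippet below illustrates the problem and one simple workaround, stripping combining marks with Unicode normalization; it is an illustration only, not the archive’s actual search pipeline.

```python
import unicodedata

marked = "Ōnce ŭpon ă mīdnight drēary"   # a line carrying scansion-like marks
query = "midnight"

print(query in marked)                    # False: "mīdnight" is not "midnight"

def strip_marks(text):
    """Decompose characters and drop combining marks (accents, breves, macrons)."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(query in strip_marks(marked))       # True
```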

Enlisting the help of computer scientists and librarians, Martin began in 2011 to build the Princeton Prosody Archive, a full-text searchable database of more than 10,000 digitized records published between 1750 and 1923. Currently in beta-testing, the Prosody Archive will be accessible to the public in 2015, with full access to the archive by 2017.

Meredith Martin. Photo by Denise Applewhite

“We are making these texts available in one place for the first time,” Martin said, “and enabling scholars to explore new analytical questions in the study of poetry.”

Bringing searchable books, maps and other historical texts online is a growing movement in the humanities at universities around the world, including Princeton. This fall, Princeton opened the Center for Digital Humanities, directed by Martin, to enable faculty and student researchers to harness the power of computing for research activities that once were only possible during laborious visits to musty archives and libraries.

A number of motivations drive the growing trend in “digital humanities,” defined broadly as the intersection of technology and the humanities. A generation raised on Google has come to expect instant online access to everything. Research budgets for physical trips to far-flung libraries have shrunk. But perhaps the biggest driver of the digital humanities movement is the potential to search widely, deeply and in new ways. From a desktop computer, a researcher can scan large numbers of tomes and search for trends using statistical tools, or select one for a close reading.

“The field of digital humanities has evolved rapidly and there are a lot of different opinions about what the term means,” said Clifford Wulfman, digital initiatives coordinator at the Princeton University Library. “I think of it as the field where the humanities and the more algorithmic and mathematical approaches meet, intersect and intermingle, and sometimes produce practical outcomes like tools that someone can use, but also give rise to new questions and deeper understanding.”

In the case of Martin’s research, the Prosody Archive helps her explore how a seemingly objective system for reading a poem can embody issues of national identity, class and patriotism. In her book, The Rise and Fall of Meter: Poetry and English National Culture 1860-1930 (Princeton University Press, 2012), Martin describes how 19th-century English scholars fixated on finding a distinctly English meter — a rhythmic pattern created by stressed and unstressed syllables — as part of a Victorian-era glorification of English culture.

Martin became interested in meter during college. “I was fascinated by the idea that the way in which you read a poem could completely change your relationship to the poetic text,” Martin said.

The Princeton Prosody Archive began as a quest by English professor Meredith Martin to bring order and search capability to a large collection of books and manuscripts that she had assembled while writing her book, The Rise and Fall of Meter (Princeton University Press, 2012).

In graduate school at the University of Michigan and then at Princeton, Martin explored how the study of meter had evolved in English poetry. What she found surprised her. Rather than discovering that poems were read in standard and unchanging ways, she found that the “right way to read a poem” changed over time and was the subject of contentious debate. “I found that prosody is incredibly culturally determined,” Martin said. “It has a lot to do with the reader and with his or her sense of language, and the relationship to his or her country.”

The Prosody Archive will allow Martin to make available to other scholars the sources that she used for her book. It also will enable the exploration of new questions, such as why certain poets emerged as the exemplars of their eras and how the emerging science of linguistics influenced debates about meter.

Housed at Princeton, the Prosody Archive’s computer architecture was developed by Travis Brown of the University of Maryland Institute for Technology in the Humanities, a leading digital humanities center. Support for the archive was provided by the Andrew W. Mellon Foundation. Eventually the Prosody Archive will include newspapers and journal articles in addition to printed books, which will be scanned at Princeton. An agreement with Google Books and the nonprofit book repository HathiTrust allowed Princeton to access many of the digitized books in the collection.

Assembling digitized texts and digitizing those that are not in HathiTrust is only the first step, however. When books and papers are scanned, many of the nuances that make physical books so rich become lost, from the dog-eared corners that indicate a well-loved passage to scribbles of inspiration in the margins. “These flourishes are not captured by today’s optical character recognition software, so they don’t show up in the digitized texts,” said Meagan Wilson, the Prosody Archive’s manager and a graduate student in English. To address this issue, the Princeton Prosody Archive will include page images along with each text.

Prosodic marks are also lost. Most prosodic texts use a notation known as scansion — a word that derives from the fact that the reader scans rather than reads the poem, as shown in the opening line of Edgar Allan Poe’s The Raven:

scansion

The digitized version contains the words but not the scansion marks — the accents and breves, for example. Other prosodic marks are even more difficult to capture. Periodically, musical notation is used as a method of scansion:

musical notation

Optical character recognition software doesn’t understand these symbols, and returns:

undecipherable text

For now, the Prosody Archive team will annotate the entries by hand, said Ben Johnston, a founding member of the digital humanities initiative and manager of Princeton’s Humanities Resource Center, which develops technology resources for teaching and research. “We have to go through the entries and indicate the notation — for example a musical note or an accent mark.”

Over the next three years, the team hopes to develop a computer model for encoding scansion as well as tools based on natural language processing techniques, said Martin. One such technique is topic modeling, which yields statistical analyses of word usage and could be employed in looking for prosody-related information. The addition of data visualization software will make the collection more useful to researchers.
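As a flavor of what topic modeling does, the sketch below runs scikit-learn’s latent Dirichlet allocation over a handful of made-up sentences standing in for the archive’s digitized texts, and prints the most characteristic words of each discovered topic.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder documents standing in for digitized prosody texts
docs = [
    "accent and stress mark the metrical feet of english verse",
    "the iambic pentameter line carries five stressed syllables",
    "grammar schoolbooks taught pupils to scan each line aloud",
    "scansion marks the stressed and unstressed syllables of a poem",
    "national identity shaped victorian debates over english meter",
]

vectorizer = CountVectorizer(stop_words="english").fit(docs)
counts = vectorizer.transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_words = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}:", ", ".join(top_words))
```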

As the archive develops, Martin is excited about its prospects, not only to broaden the study of poetics at Princeton and beyond, but also to make possible new ways to study digital texts. The opening of the new Center for Digital Humanities will enable students and faculty members to ponder — more quickly and in greater depth — over many a quaint and curious volume of forgotten lore.

-By Catherine Zandonella

Computer visions: A selection of research projects in Computer Science

Princeton’s Department of Computer Science has strong groups in theory, networks/systems, graphics/vision, programming languages, security/policy, machine learning, and computational biology. Find out what the researchers have been up to lately in these stories:

  • Armchair victory: Computers that recognize everyday objects
  • Tools for the artist in all of us
  • Fierce, Fiercer, Fiercest: Software enables rapid creations
  • Internet traffic moves smoothly with Pyretic
  • Security check: A strategy for verifying software that could prevent bugs

A RISKY PROPOSITION: Has global interdependence made us vulnerable?


RISK IS EVERYWHERE. There’s a risk, for example, that volcanic ash will damage aircraft engines. So when a volcano erupted in Iceland in April 2010, concerns about the plume of volcanic ash disrupted air travel across Europe for about a week. Travelers, from the Prince of Wales to Miley Cyrus, were forced to adjust their plans.

In the interconnected world of the 21st century, that risk also put Kenyan flower farm employees out of work because their crop couldn’t reach Europe, and forced Nissan to halt production of some models in Japan because certain parts weren’t available.

Welcome to global systemic risk, where virtually every person on Earth can be affected by disruption in interdependent systems as diverse as electricity transmission, computer networks, food and water supplies, transportation, health care, and finance. The risks are complicated and little understood.

A core group of about two dozen faculty members from across the University — along with postdoctoral research fellows, graduate students, undergraduates and outside researchers — has come together for a three-year research effort focused on developing a comprehensive and cohesive framework for the study of such risks.

The Global Systemic Risk research community, with financial support from the Princeton Institute for International and Regional Studies, is working to better understand the nature of risk, the structure of increasingly fragile systems and the ability to anticipate and prevent catastrophic consequences.

“You can’t isolate any of these systems,” said Miguel Centeno, the Musgrave Professor of Sociology and head of the research community. “They’re all complex systems complexly put together. We’ve been running this unique experiment for the past 50 years or so, and we’re all dependent on it continuing to work.”

Making systems stronger

The goals of the community, now in its second year, include research, course development, conferences and even a movie series that will give the public a chance to use popular disaster films as a point of entry to discuss the serious issues of systemic risk.

Thayer Patterson, a research fellow and a member of the group’s executive committee, said the work that emerges should be useful not just for academics, but also for policymakers, leaders in business and finance, and the public.

“This isn’t just an academic pursuit; it’s an intellectual exercise that has the potential for real consequences in terms of making our systems stronger and more robust to the inevitable shocks that they will experience,” Patterson said.

Policymakers, for example, may learn more about the ways dangerous unintended consequences can arise from seemingly sensible laws and regulations, Patterson said. And business leaders may better understand the importance of realistic risk assessment.

“We want to celebrate the risk takers and the innovators and the fruits of their labors,” Patterson said. “We are by no means doomsayers, but we hope to provide more information to people on the robustness and fragility of systems.”


What are the most fragile global systems? Centeno points to two that concern him the most: the Internet and global health.

While the Internet itself is fairly robust by design, Centeno said, many other crucial systems — such as electrical grids, financial institutions and transportation systems — rely on the Internet, and a catastrophic failure there could quickly have dangerous effects worldwide.

And the ease of global travel today raises the risk that disease can spread unchecked around the world before health authorities have an opportunity to react, he said.

“We now have the conditions under which we could create some kind of pandemic very quickly that we would not be able to resolve,” Centeno said.

Tackling research

The research community includes faculty members from 17 academic departments and five interdisciplinary programs at Princeton. Each brings his or her own background and approach to the topic.

Adam Elga, a professor of philosophy, said he has been interested in the topic of risk for several years and previously co-taught a course on the philosophy of extreme risk. That course piqued his interest in the idea of cascading failures, in which a series of small failures builds within a system and results in a catastrophic failure.
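A toy simulation makes the cascading-failure idea concrete: in the sketch below, nodes of a random network fail once too many of their neighbors have failed, so a handful of initial failures can sometimes ripple much further. The network model and thresholds are standard textbook simplifications, not a model of any particular real system.

```python
import random

def cascade_size(n=200, avg_degree=4, threshold=0.4, seed_failures=3, rng=None):
    """Simulate a simple threshold cascade on a random graph.

    A working node fails once more than `threshold` of its neighbors have failed.
    Returns the total number of failed nodes when the cascade stops.
    """
    rng = rng or random.Random(0)
    p = avg_degree / (n - 1)
    neighbors = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                neighbors[i].add(j)
                neighbors[j].add(i)

    failed = set(rng.sample(range(n), seed_failures))
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i in failed or not neighbors[i]:
                continue
            if len(neighbors[i] & failed) / len(neighbors[i]) > threshold:
                failed.add(i)
                changed = True
    return len(failed)

# Cascade sizes vary widely depending on where the initial failures happen to land
print([cascade_size(rng=random.Random(s)) for s in range(10)])
```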

Elga is conducting a series of experiments this year, with financial support from the research community, that examine how individuals assess risks in situations where there is a small, but real, risk of catastrophic failure.

Elga’s hypothesis is that many experiment participants will struggle to accurately account for the risk of catastrophic failure. In the same way, Elga said, policymakers and the public can be lulled into complacency by the fact that important global systems haven’t experienced catastrophic failure — even though the risk is real and the potential consequences devastating.

“There is some evidence that people aren’t going to be scared enough by the bad outcomes until they’ve already been hit by one,” Elga said. “But once you get hit by a really big one, it’s too late and the game is over.”

Elga said he has derived benefits from the research community beyond direct support for his work.

“It’s been stimulating to hear people from adjacent fields such as psychology, to talk to people who have thought about this from mathematical and engineering perspectives,” he said.

Another participant is Stanley Katz, a lecturer with the rank of professor in public and international affairs, who has begun applying ideas about risk from the research community in his study on philanthropy.

How does a major philanthropic donor, for example, decide between spending $100 million in Africa on bed nets, which have a known effect on the transmission of malaria, or spending the same sum on an unproven vaccine that could either be much more effective than bed nets or be a total failure? Such decisions, Katz said, are based, in part, on assessments of risk.

“The field hasn’t ordinarily been studied this way,” Katz said. “This is relatively new language, and this is one of the things that appeals to me about the project. I’m learning a lot from scholars in social sciences who are much more accustomed to working with the language of risk.”

No easy answers

Vu Chau, a member of the Class of 2015, is an undergraduate fellow with the research community and received funding for summer research on risk-related topics. The economics major is working to understand how policies the Federal Reserve implemented in response to the 2007-08 financial crisis affected emerging markets.

“Before the crisis, the common thinking was that we need only design policies and regulations that focus on individual agents such as banks, because the larger system would be safe if each of its components is safe,” Chau said. “However, the crisis taught us that even when individual parts act prudently and follow regulations, the whole system can fail under certain conditions. This is precisely why systemic risk is dangerous and deserves the kind of attention it is getting.”

Systemic risk is a topic that doesn’t lend itself to easy answers, Centeno said.

Warning systems — such as better measures of financial risk to avert another financial crisis — can be helpful but are limited. Regulations — such as environmental rules to slow global warming — can cause unintended problems.

Safety nets — such as redundant equipment on power grids — are expensive and can actually increase risky behavior.

Shut-off switches — such as quarantines to limit the spread of disease — are practically and ethically challenging.

“Maybe the way to approach this isn’t that we need a better financial system or a better food system,” Centeno said. “Maybe we need a better system as a whole. Increasingly, you can’t divide these domains.”

So while the research community won’t be able to solve the problems of systemic risk during its three-year term, Centeno said its role is both clear and important: “The task of a research community is to create interdisciplinary conversations about a set of problems or issues so you can better understand what you’re looking at. That’s what we’re trying to do.”

-By Michael Hotchkiss