Wednesday, September 30, 2009

The Neuro Revolution Will Not Be Televised

But there will be images—magnetic resonance images, for example

BY Mark Anderson // October 2009

The Neuro Revolution: How Brain Science Is Changing Our World

By Zack Lynch with Byron Laursen; St. Martin’s Press, 2009; 256 pp.; $25.99; ISBN 978-031-237-862-2

Neuroscience is by any account still in its early days, and some functions of the brain, such as emotions and the subconscious, appear especially unwilling to unlock their mysteries. But functional magnetic resonance imaging, just one of many promising neuro research technologies, has already shown scientists which regions of the brain fall in love and which become addicted to drugs. As the brain unfolds itself to neuroengineers, the consequences of this new understanding will play out for years to come, making this a good time to take stock.

In The Neuro Revolution: How Brain Science Is Changing Our World, Zack Lynch delivers on his subtitle through 210 mind-opening pages. There’s the discovery of 43 facial muscles whose microsecond twitches subconsciously reveal a criminal suspect’s emotions and truthfulness; memory bombs that produce short-term amnesia; electronic soporifics that induce a kind of temporary narcolepsy in enemy soldiers; and hedge funds such as MarketPsy Capital (which gained more than 35 percent in 2008 while the Dow Jones average dropped by an equal amount) that use neuroscience to help predict investor behavior.

Indisputably, learning how to manipulate the brain will result in vast, society-wide changes. But some are decades away. And as with any social upheaval, even those close at hand may be completely unpredictable. Who in 1995 foresaw a decade of Google, Facebook, and YouTube?

Lynch, who sits on the advisory board of MIT’s McGovern Institute for Brain Research and has briefed the U.S. Congress and the Department of Defense on cutting-edge neuroscience, knows his field. Nevertheless, early days are always filled with predictions that don’t come to pass: Where are our flying cars, food pills, and atomic-powered dishwashers?

If Lynch and coauthor Byron Laursen have taken on the impossible task of polishing the crystal ball, they at least succeed in clearly presenting what we know today. Nor is their view of the neuro revolution entirely upbeat. For example, they devote a page to a taut description—which could have been expanded to five or more—of how Russian special forces broke a 2002 Chechnya-related hostage crisis in a Moscow theater using still-undisclosed neurochemicals. The authors report a consensus suspicion that the Russians gassed both hostage takers and hostages with fentanyl (a synthesized opiate) and in the process killed a reported 33 terrorists and 129 hostages. All but one, they chillingly note, died of “respiratory depression.”

Yet for all that, the book’s narrative ultimately centers on a technology enthusiast’s vision of the future, one that may not sufficiently take into account the very emotions and motivations, often flawed and sometimes malicious, that have yet to yield themselves to science. Our future, unfortunately, plays out not according to the way we should be but according to the way we are.

About the Author
Mark Anderson is an author and science writer based in Northampton, Mass., who loves books. He recently reviewed Grace Hopper and the Invention of the Information Age.

According to Spectrum's wordsmith, Paul McFedries, "the neuro- prefix gets quite a workout" these days. Read his August column, "Brave Neuro World."


Mahlon Hoagland Dies

Mahlon Hoagland, a molecular biologist whose discoveries of transfer RNA and the mechanisms behind amino acid activation helped build the foundation of genetics, died in his home in Thetford, VT, on Friday. He was 87 years old.

As a young scientist in the 1950s and 1960s, Hoagland studied RNA and DNA alongside Paul Zamecnik at Harvard Medical School and Francis Crick at Cambridge University. He made his most significant contributions to biology in his 30s and largely dedicated the rest of his career to teaching, mentoring, and writing. According to several friends and colleagues, he was also a gifted artist.

"Hoagland's early work opened up the field of biochemistry," said Thoru Pederson, a molecular biologist at the University of Massachusetts Medical School and a long-time colleague. "But beyond his research, his most notable asset was his effectiveness at communicating biomedical sciences to the general public through teaching and writing."

Hoagland was born in 1921 and grew up in Southborough, Massachusetts. His father, Hudson Hoagland, was a prominent physiologist and cofounder of the Worcester Foundation for Experimental Biology. Eager to pick a career path that wouldn't put him in direct competition with his father, Hoagland studied biochemical sciences and pediatric surgery at Harvard University. During medical school, he joined the US Navy and served as a doctor in WWII. Hoagland also took time off to recover from tuberculosis, which he contracted from a baby he was treating, at the Trudeau Sanatorium in Saranac Lake, NY. When he returned to Boston in 1947 to complete his MD, he realized he was too weak to continue his residency and switched to research, working in the Huntington Laboratories at Massachusetts General Hospital.

In 1956, Hoagland, Zamecnik and Elizabeth Keller discovered the initial steps of protein synthesis (amino acid activation by the formation of aminoacyl adenylates from amino acids and ATP), publishing their results in the Journal of Biological Chemistry. Two years later, Hoagland and Zamecnik discovered transfer RNA, the adaptor that shuttles amino acids to messenger RNA. The presence of tRNA had been predicted by Crick a few years earlier, but Hoagland's study, also published in the Journal of Biological Chemistry, was the first to prove its existence.

Hoagland traveled to Cavendish Laboratory at Cambridge University in 1957 to work with Crick and Watson, using the newly discovered tRNA to try to unlock the genetic code. He returned to Harvard soon after and in 1967, he accepted a position as chair of the microbiology department at Dartmouth Medical School. Three years later, he left Dartmouth to take over his father's old position as director and president of the Worcester Foundation, strengthening the research institution's cell biology, endocrinology, neurobiology, and reproductive biology programs.

In the 1970s, Hoagland led one of the first congressional delegations of researchers lobbying for government support for the advancement of science. "His leadership played a key role in facilitating the government's support for science," said Alex Rich, a molecular biologist at the Massachusetts Institute of Technology and a long-time colleague. "He was lively, engaging. He could get anyone behind any cause."

After 15 years at the Worcester Foundation, Hoagland retired to Thetford, VT, in 1985. During his career, he was twice nominated for the Nobel Prize and received the Franklin Medal for life sciences in 1976. He published 62 scientific papers, which have been cited more than 2,500 times, according to ISI Web of Knowledge. He also wrote six books on molecular biology for the general public, including The Way Life Works, and won American Medical Writers Book Awards in 1982 and 1996.

According to friends and colleagues, Hoagland was a gifted wood sculptor, once creating a "beautiful wooden base for a wire model of tRNA" that Rich had put together, the researcher said. The structure was on display in the Worcester Foundation before the institute merged with the University of Massachusetts Medical School.

"On a personal level, Hoagland was ill-equipped for science," said Pederson. "He was plagued with self-doubt, although he has no reason to be; he made brilliant discoveries. But he loved teaching and working with his hands. He was quintessentially elegant and decent; a truly wonderful person."

Hoagland suffered from kidney failure and heart problems. He died in his home after nine days of fasting under the care of his family, carrying out his wish to die naturally. He is survived by three children, five stepchildren, four grandchildren, two great-grandchildren, and his ex-wife.

Editor's note (September 30): This afternoon, James Watson returned a call we placed to him on Monday to give us his remembrances of Hoagland. "[His] science was world-class," Watson said. "He and Paul Zamecnik deserved to win the Nobel Prize for their fundamental work on tRNA... He was a very old fashioned New Englander, a true gentleman, who did beautiful experiments." Watson served as a member of Hoagland's 1977 Delegation for Basic Biomedical Research, which lobbied Congress for more support for the sciences.


Sunday, September 27, 2009

Saturday, September 26, 2009

Satellites find water on the Moon

Composite image consisting of a subset of Moon Mineralogy Mapper data for the Orientale region. The image strip on the left is a colour composite of data from 28 separate wavelengths of light reflected from the moon. The blue to red tones reveal changes in rock and mineral composition, and the green colour is an indication of the abundance of iron-bearing minerals such as pyroxene. The image strip on the right is from a single wavelength of light. (Courtesy: NASA)
There is much more water on the Moon than previously thought, according to scientists who have analysed data gathered by three different space missions. Data from one mission show that water is retained by the Moon through chemical reactions, suggesting that water may also be present below the lunar surface. Significant amounts of water on the Moon would make it much easier to sustain human colonies.

Wetter than we thought
Ever since the Apollo missions brought back chunks of the Moon, scientists have been under the impression that there is very little (if any) water on our nearest neighbour. As well as being bone dry, these Moon rocks also showed no signs of ever interacting chemically with water. Later studies of the Moon's surface yielded tantalizing hints that water could be there, but these were not conclusive.

Most of what we know about the surface of the Moon is limited to its equatorial regions. That's where the Apollo missions landed, and it's also where subsequent Russian robotic missions gathered samples. Far less is known about the polar regions, where frozen water may be lurking – particularly in shady craters.

Damp days
New data from NASA's Deep Impact spacecraft reveal that water and hydroxyl (water less one hydrogen atom) molecules are present just about everywhere on the surface of the Moon. What's more, the concentration of these molecules goes up and down in a daily cycle, suggesting that they are formed during the day by chemical reactions between protons in the solar wind and Moon rocks. Deep Impact used its infrared spectrometer to survey the entire surface of the Moon and also found that the concentrations of water and hydroxyl were highest at the north pole.

Similar evidence for such surface water has also just been found by Roger Clark of the US Geological Survey, who has analysed data gathered in 1999 by the Visual and Infrared Mapping Spectrometer (VIMS) aboard the Cassini mission.

According to lunar expert Ian Crawford of Birkbeck, University of London, however, the most significant of the three findings was made by the Moon Mineralogy Mapper (M3; see the bottom of this post) on board India's Chandrayaan-1 satellite, which was launched 11 months ago. M3 maps the mineral content of the surface of the Moon using spectrometers covering the infrared to the ultraviolet.

Retaining water
"The M3 result shows that there are hydrated minerals on the Moon," explained Crawford. "This shows that the water is not just frozen on the surface, it requires some interaction between rocks and water". These interactions show that the Moon is retaining water that arrives on its surface via comets, meteorites and dust as well as the solar wind.

Crawford also believes that these three latest results suggest that there is enough water on the Moon to be useful to future lunar colonies.

We will learn even more about the Moon next week, when NASA's LCROSS probe crashes into a shady polar crater – hopefully kicking up ice and other debris that can then be analysed.

The next big challenge for Moon scientists, according to Crawford, will be to combine the results from all these missions to gain a better understanding of water on the Moon. In particular, he points out that ice on the Moon should contain a historical record of exactly what comets deliver to terrestrial planets. This could help us understand how Earth acquired its watery environment, which is crucial for life on this planet.



The Moon Mineralogy Mapper (M3) is one of two instruments that NASA is contributing to India's first mission to the Moon, Chandrayaan-1 (meaning "Lunar Craft" in ancient Sanskrit), which launched on October 22, 2008. M3 is a state-of-the-art imaging spectrometer that will provide the first map of the entire lunar surface at high spatial and spectral resolution, revealing the minerals of which it is made.

Scientists will use this information to answer questions about the Moon's origin and development and the evolution of terrestrial planets in the early solar system. Future astronauts will use it to locate resources, possibly including water, that can support exploration of the Moon and beyond.

LSD Returns--For Psychotherapeutics

Albert Hofmann, the discoverer of LSD, lambasted the countercultural movement for marginalizing a chemical that he regarded as an invaluable supplement to psychotherapy and spiritual practices such as meditation. “This joy at having fathered LSD was tarnished after more than ten years of uninterrupted scientific research and medicinal use when LSD was swept up in the huge wave of an inebriant mania that began to spread over the Western world, above all the United States, at the end of the 1950s,” Hofmann groused in his 1979 memoir LSD: My Problem Child.

For just that reason, Hofmann was jubilant in the months before his death last year, at the age of 102, when he learned that the first scientific research on LSD in decades was just beginning in his native Switzerland. “He was very happy that, as he said, ‘a long wish finally became true,’ ” remarks Peter Gasser, the physician leading the clinical trial. “He said that the substance must be in the hands of medical doctors again.”

The preliminary study picks up where investigators left off. It explores the possible therapeutic effects of the drug on the intense anxiety experienced by patients with life-threatening disease, such as cancer. A number of the hundreds of studies conducted on lysergic acid diethylamide-25 from the 1940s into the 1970s (many of poor quality by contemporary standards) delved into the personal insights the drug supplied that enabled patients to reconcile themselves with their own mortality. In recent years some researchers have studied psilocybin (the active ingredient in “magic mushrooms”) and MDMA (Ecstasy), among others, as possible treatments for this “existential anxiety,” but not LSD.

Gasser, head of the Swiss Medical Society for Psycholytic Therapy, which he joined after his own therapist-administered LSD experience, has only recently begun to discuss his research, revealing the challenges of studying psychedelics. The $190,000 study, approved by Swiss medical authorities, was almost entirely funded by the Multidisciplinary Association for Psychedelic Studies, a U.S. nonprofit that sponsors research toward the goal of making psychedelics and marijuana into prescription drugs. Begun in 2008, the study intends to treat 12 patients (eight who will receive LSD and four a placebo). Finding eligible candidates has been difficult—after 18 months only five patients had been recruited, and just four had gone through the trial’s regimen of a pair of all-day sessions. “Because LSD is not a usual treatment, an oncologist will not recommend it to a patient,” Gasser laments.

The patients who received the drug found the experience aided them emotionally, and none experienced panic reactions or other untoward events. One patient, Udo Schulz, told the German weekly Der Spiegel that the therapy with LSD helped him overcome anxious feelings after being diagnosed with stomach cancer, and the experience with the drug aided his reentry into the workplace.

The trials follow a strict protocol—“all LSD treatment sessions will begin at 11 a.m.”—and the researchers are scrupulous about avoiding mistakes that, at times, occurred during older psychedelic trials, when investigators would leave subjects alone during a drug session. Both Gasser and a female co-therapist are present throughout the eight-hour sessions that take place in quiet, darkened rooms, with emergency medical equipment close at hand. Before receiving LSD, subjects have to undergo psychological testing and preliminary psychotherapy sessions.

Another group is also pursuing LSD research. The British-based Beckley Foundation is funding and collaborating on a 12-person pilot study at the University of California, Berkeley, that is assessing how the drug may foster creativity and what changes in neural activity go along with altered conscious experience induced by the chemical. Whether LSD will one day become the drug of choice for psychedelic psychotherapy remains in question because there may be better solutions. “We chose psilocybin over LSD because it is gentler and generally less intense,” says Charles S. Grob, a professor of psychiatry at the University of California, Los Angeles, who conducted a trial to test psilocybin’s effects on anxiety in terminal cancer patients. Moreover, “it is associated with fewer panic reactions and less chance of paranoia and, most important, over the past half a century psilocybin has attracted far less negative publicity and carries far less cultural baggage than LSD.”

Others assert the importance of comparative pharmacology—how does LSD differ from psilocybin?—because of the extended period of research quiescence. Just because many types of so-called SSRI antidepressants exist, “it doesn’t mean that they are all identical,” observes Roland Griffiths, a Johns Hopkins University researcher who conducts trials with psilocybin. In any case, on the 40th anniversary of the Woodstock music festival, psychoactive substances that represented the apotheosis of the counterculture lifestyle are no longer just hippie elixirs.


Friday, September 25, 2009

Freak waves spotted in microwave cavity

Photograph of the interior of the Marburg microwave cavity showing the randomly placed cones: the plate is 26 × 36 cm and the cones are 1.5 cm tall. (Courtesy: Lev Kaplan)
Freak waves towering as much as 30 m above the surrounding seas have long been reported by mariners, and recent satellite studies have shown that they are more common than previously expected. Now, a team of physicists in Germany and the US has gained important insights into the possible origins of such waves by scattering microwaves in the laboratory.

The work suggests that rogue waves can emerge from linear interactions between waves – contradicting some theories, which assume that non-linear interactions are required. The team believes that its insights could be used to calculate a "freak index", which would give the probability of encountering freak waves at specific locations in the oceans.

The experiment was inspired by a measurement made eight years ago by a group that included one of the present team – Eric Heller of Harvard University. Electrons flowing on a semiconductor sheet were seen to focus into several narrow beams, rather than scatter in random directions as had been expected. The reason, according to Lev Kaplan of Tulane University, is that random impurities in the semiconductor acted like a "bad lens", directing the electrons (which act like waves) towards several focal points.

Random currents
Kaplan and Heller realized that random currents in the ocean could also act as bad lenses, focusing smaller waves into larger – and even freak – waves.

According to Kaplan, it would be very difficult to test the theory using water in a wave tank because such facilities are set up to study waves propagating in only one direction. Instead, they joined forces with Ruven Höhmann, Ulrich Kuhl and Hans-Jürgen Stöckmann at the University of Marburg to study the effect in microwaves.

The team in Germany injected microwaves into a cavity comprising two parallel metal plates. The distance between the plates was much less than the wavelength of the microwaves, making the waves "quasi 2D" – just like ocean waves. Scattering from random currents was simulated by placing a number of metal cones in random positions in the cavity.

Orders of magnitude more
The team monitored the microwave intensity throughout the cavity and noticed the emergence of "hot spots", where the intensity was five or more times greater than background levels. The team counted the number of such freak waves that occurred over a finite time and discovered that they were many orders of magnitude more common than if they had resulted from the random superposition of plane waves in the cavity. Random superposition had earlier been thought to govern the formation of freak waves in the ocean, which could explain why mariners and oceanographers seemed to differ on the frequency of such events.
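The random-superposition baseline can be made concrete: a sum of many plane waves with random phases produces an intensity that follows exponential statistics, so a hot spot of five times the mean intensity should occur with probability of only about exp(-5), roughly 0.7 percent, and far higher peaks become astronomically rare. The toy Monte Carlo below is purely illustrative, not the team's analysis; the 50-wave count and the five-fold threshold are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

n_waves = 50          # number of superposed plane waves (assumed)
n_samples = 100_000   # spatial sample points

# Random superposition: at each point, sum unit-amplitude waves with
# independent random phases, normalized so the mean intensity is ~1.
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, n_waves))
field = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_waves)

intensity = np.abs(field) ** 2
p_hot = np.mean(intensity >= 5.0)   # fraction of points at >= 5x the mean

# Exponential statistics predict P(I >= 5<I>) = exp(-5) ~ 0.0067
print(f"simulated fraction of hot spots: {p_hot:.4f}")
print(f"exponential prediction exp(-5):  {np.exp(-5):.4f}")
```

The Marburg cavity produced hot spots far more often than this baseline, which is what points to the "bad lens" focusing mechanism rather than chance superposition.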

Kaplan said that the randomly placed cones were behaving like a bad lens, which could occasionally focus the microwaves into a hot spot. The experiment is also the first to establish that freak waves can be generated via simple linear interactions between waves – the microwaves in the cavity only interact linearly. Previously, many oceanographers had believed that non-linear interactions – which become more prevalent in shallow water – were required to create freak waves.

Leonid Mezhov-Deglin of the Institute of Solid-State Physics of the Russian Academy of Sciences said that the microwave experiments should be of interest to physicists studying ocean and other surface waves. However, he cautioned that much more work was needed in the characterization of rogue ocean waves before they could be simulated accurately using microwaves.

Freak index
The experiment has also allowed Kaplan and colleagues to hone their "freak index", which defines the likelihood of encountering a rogue wave based on the average wave and current speeds and the angular spread of wave motion. This could help mariners to identify regions of the ocean where rogue waves could be a problem, but Kaplan points out that physicists will never be able to predict the formation of individual waves.


Thursday, September 24, 2009


Trivedi hopes the experiments will help explain how certain materials, under certain conditions, produce particular crystal growth patterns, such as these nickel-based superconductors.

AMES, Iowa – A research project 10 years in the making is now orbiting the Earth, much to the delight of its creator, Rohit Trivedi, a senior metallurgist at the U.S. Department of Energy’s Ames Laboratory. Equipment recently delivered to the International Space Station by the Space Shuttle Discovery will allow the Earth-bound Trivedi to conduct crystal growth experiments he first conceived more than a decade ago.

The equipment is actually a mini laboratory, known as DECLIC (DEvice for the study of Critical LIquids and Crystallization), that will allow Trivedi to study and even control crystal growth pattern experiments, in real time, from his laboratory in Wilhelm Hall on the Iowa State University campus in Ames. The goal is to use the microgravity environment on board the Space Station to determine how materials form crystals as they move from liquid to solid and what effect variations in growth conditions have on crystallization patterns.

“When materials ‘freeze’ there are specific crystalline growth patterns that appear,” Trivedi said, “and there are fundamental physics that govern these patterns. However, small effects can have significant influence on the patterns that form. Snowflakes, for example, form the same basic six-sided pattern, but because of minute variations, no two are exactly alike. These crystallization patterns play a critical role in governing the properties of a solidified material.”


While Trivedi, who is also an ISU distinguished professor of materials science and engineering, studies primarily metals, the material to be used in the DECLIC experiments is a transparent, wax-like substance called succinonitrile. With a relatively low melting point, 57 degrees Celsius, the material lends itself to study in the controlled confines of the Space Station, and its transparency will make it possible for researchers to view the crystal growth process as the material solidifies. However, the basic principles governing crystal growth will be the same.

So why conduct the experiment in low gravity? Trivedi hopes that the low gravity will “erase” the effects of convection, the natural circulation of fluid.

“On Earth, the small effects are masked by convection,” he said. “We hope that in a low-gravity environment, convection will be minimized so that we can more clearly see the importance of the small effects and see how the experimental data match our theoretical modeling.”

Much of that modeling has been done in collaboration with Trivedi’s colleague Alain Karma, a theoretical physicist at Northeastern University in Boston. The pair has also collaborated closely with the Centre National d'Etudes Spatiales (CNES), the French government space agency that, along with NASA, helped fund the work.

After preliminary testing in September, DECLIC is scheduled to be online in October, and the first set of experiments will run through February 2010, according to Trivedi. Through a connection with the computation center in Toulouse, France, Trivedi’s research group will be able to view video of the material as it solidifies. To pick up the necessary detail, Trivedi’s lab is outfitted with a big-screen, high-definition monitor. But the researchers won’t be just passive spectators.

“If we see something unusual, we can repeat the experiment, all in real time,” Trivedi said. “Likewise, if we don’t see much happening, we can alter the conditions and move on.”

All the video from the DECLIC experiments will be captured and stored for future reference by CNES in Toulouse, France. Trivedi’s research proposal was originally selected by NASA for funding back in 1998, receiving approximately $2 million in total through ISU’s Institute for Physical Research and Technology, and was later chosen as one of only six materials science projects for actual flight. To now be this close to seeing the project in operation is exciting for Trivedi.

“It’s been a long time since we started,” Trivedi said, “but it’s also given us time to finalize the experiments and work on the theoretical side. Now we’re just anxious to get experimental results to see if things behave as we expect.”

Trivedi’s research isn’t the only Ames Laboratory science in outer space. Materials developed at the Lab’s Materials Preparation Center are on board the Planck satellite as part of the instrument cooling system.

Ames Laboratory is a U.S. Department of Energy Office of Science laboratory operated for the DOE by Iowa State University. Ames Laboratory creates innovative materials, technologies and energy solutions. We use our expertise, unique capabilities and interdisciplinary collaborations to solve global challenges.


The digital story behind the mask

Mak Yong dancer in a 3D body scanner. Courtesy Info Com Development Centre (IDeC) of Universiti Putra Malaysia.
Capturing culture in digital form can lead to impressive demands for storage and processing. And grid technology has a role to play in providing those resources. For instance, a 10-minute recording of the movements of a Malay dancer performing the classical Mak Yong dance, using motion-capture equipment attached to the dancer’s body, can take over a week to render into a virtual 3D image of the dancer using a single desktop computer. Once this is done, though, every detail of the dance movement is permanently digitized, and hence preserved for posterity.

The problem, though, is that a complete Mak Yong dance carried out for ceremonial purposes could last a whole night, not just ten minutes. Rendering and storing all the data necessary for this calls for grid computing.
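The scale of the problem is easy to estimate from the figures above. The sketch below assumes rendering time grows linearly with recording length and that the job splits cleanly across grid nodes; both are simplifying assumptions, and the eight-hour ceremony length and node counts are hypothetical round numbers, not from the project.

```python
# Back-of-envelope scaling for the Mak Yong rendering workload.
DESKTOP_DAYS_PER_10_MIN = 7   # "over a week" for a 10-minute recording
FULL_DANCE_MIN = 8 * 60       # assume an all-night ceremony of ~8 hours

# Linear scaling: a 48x longer recording takes ~48x longer to render.
serial_days = DESKTOP_DAYS_PER_10_MIN * FULL_DANCE_MIN / 10
print(f"one desktop:    ~{serial_days:.0f} days ({serial_days / 365:.2f} years)")

# Ideal parallel speedup across N grid nodes (hypothetical sizes).
for nodes in (10, 100, 500):
    print(f"{nodes:4d} grid nodes: ~{serial_days / nodes:.1f} days")
```

Even with perfectly optimistic parallelism, a single desktop is out of the question for a full ceremony, which is the case for grid computing the article makes.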

Faridah Noor, an associate professor at the University of Malaya, is involved in an EUAsiaGrid initiative to promote the use of grid technologies in Asia. She sees great potential for grid-enabled e-culture to digitally preserve traditional dances and artifacts. She and her colleagues are working on several projects to capture even the most ephemeral cultural relics for posterity.

Take one extreme example: intricate masks carved by shamans of the Mah Meri tribe to help cure people of their ailments or to ward off evil. This tribe’s customs are dying out due to development of the coastal region where they live, and few young people seem keen to learn the old carving techniques. Even the trees that the shamans use to make the masks are disappearing, due to this development. But perhaps the biggest challenge to preserving this tradition is that the shamans deliberately throw the masks into the sea as part of the ritual, to cast away bad spirits.

Mak Yong dancer with motion sensors attached for recording movements during the dance. Courtesy Info Com Development Centre (IDeC) of Universiti Putra Malaysia.
The Museum of Asian Arts at the University of Malaya has managed to recover over 100 of these masks. But just preserving the masks does not amount to preserving the culture behind them. As Noor, who works in the area of sociolinguistics and ethnolinguistics, points out, “We have to capture the story behind the mask.” Each mask is made for an individual and his or her illness, so capturing the inspiration that guides the shaman while preparing the mask is as important as recording the way he carves the wood.

The benefits of being part of EUAsiaGrid are not just the access to know-how and resources for grid-enabled processing and storage of data. Through participation in the project, Noor has become aware of similar challenges being addressed by European researchers. “We notice that there are some associated technologies that we can benefit from,” says Noor. For example, she has made contact with researchers at the University of Heidelberg, who have tools that can help put all the digital information about the mask-making together into a coherent and easily accessible whole.

Mah Meri masks. Courtesy Museum of Asian Arts, University Of Malaya.
Be it the swaying of the dancer or the singing of the shaman, digital technology and grid computing provide some hope of rescuing vestiges of ancient cultures from the destructive side effects of modernization. Some day, young Malays may be able to peer into realistic virtual worlds to experience traditions that have vanished from the real one.

—Francois Grey reporting for EUAsiaGrid


Sunday, September 20, 2009

Sandia Researchers Construct Nanotube Device That Can Detect The Colors of the Rainbow

Sandia researcher Xinjian Zhou measures the electronic and optical properties of carbon nanotube devices in a probe station. The monitor shows the electrode layout on the device wafer; the nanotubes are positioned in the small horizontal gaps.

LIVERMORE, Calif. — Researchers at Sandia National Laboratories have created the first carbon nanotube device that can detect the entire visible spectrum of light, a feat that could soon allow scientists to probe single molecule transformations, study how those molecules respond to light, observe how the molecules change shapes, and understand other fundamental interactions between molecules and nanotubes.

Carbon nanotubes are long, thin cylinders composed entirely of carbon atoms. While their diameters are in the nanometer range (1 to 10 nm), they can be very long, up to centimeters in length.

The carbon-carbon bond is very strong, making carbon nanotubes very robust and resistant to any kind of deformation. To construct a nanoscale color detector, Sandia researchers took inspiration from the human eye, and in a sense, improved on the model.

When light strikes the retina, it initiates a cascade of chemical and electrical impulses that ultimately trigger nerve impulses. In the nanoscale color detector, light strikes a chromophore and causes a conformational change in the molecule, which in turn causes a threshold shift on a transistor made from a single-walled carbon nanotube.

“In our eyes the neuron is in front of the retinal molecule, so the light has to transmit through the neuron to hit the molecule,” says Sandia researcher Xinjian Zhou. “We placed the nanotube transistor behind the molecule—a more efficient design.”

Zhou and his Sandia colleagues François Léonard, Andy Vance, Karen Krafcik, Tom Zifer, and Bryan Wong created the device. The team recently published a paper, “Color Detection Using Chromophore-Nanotube Hybrid Devices,” in the journal Nano Letters.

The idea that carbon nanotubes are light sensitive has been around for a long time, but earlier efforts using an individual nanotube could detect light only in narrow wavelength ranges and only at laser intensities. The Sandia team found that their nanodetector was orders of magnitude more sensitive, down to about 40 W/m²—about 3 percent of the density of sunshine reaching the ground. “Because the dye is so close to the nanotube, a little change turns into a big signal on the device,” says Zhou.

Léonard says the project draws upon Sandia’s expertise in both materials physics and materials chemistry. He and Wong laid the groundwork with their theoretical research, with Wong completing the first-principles calculations that supported the hypothesis of how the chromophores were arranged on the nanotubes and how the chromophore isomerizations affected electronic properties of the devices.

To construct the device, Zhou and Krafcik first had to create a tiny transistor made from a single carbon nanotube. They deposited carbon nanotubes on a silicon wafer and then used photolithography to define electrical patterns to make contacts.

The final piece came from Vance and Zifer, who synthesized molecules to create three types of chromophores that respond to either the red, green, or orange bands of the visible spectrum. Zhou immersed the wafer in the dye solution and waited a few minutes while the chromophores attached themselves to the nanotubes.

This diagram depicts a representation of chromophores attaching to a transistor made from a single carbon nanotube.

The team reached their goal of detecting visible light faster than they expected—they thought the entire first year of the project would be spent testing UV light. Now, they are looking to increase the efficiency by creating a device with multiple nanotubes.

“Detection is now limited to about 3 percent of sunlight, which isn’t bad compared with a commercially available digital camera,” says Zhou. “I hope to add some antennas to increase light absorption.”

A device made with multiple carbon nanotubes would be easier to construct and the resulting larger area would be more sensitive to light. A larger size is also more practical for applications.

Now, they are setting their sights on detecting infrared light. “We think this principle can be applied to infrared light and there is a lot of interest in infrared detection,” says Vance. “So we’re in the process of looking for dyes that work in infrared.”

This research eventually could be used for a number of exciting applications, such as an optical detector with nanometer scale resolution, ultra-tiny digital cameras, solar cells with more light absorption capability, or even genome sequencing. The near-term purpose, however, is basic science.

“A large part of why we are doing this is not to invent a photo detector, but to understand the processes involved in controlling carbon nanotube devices,” says Léonard.

The next step in the project is to create a nanometer-scale photovoltaic device. Such a device on a larger scale could be used as an unpowered photo detector or for solar energy. “Instead of monitoring current changes, we’d actually generate current,” says Vance. “We have an idea of how to do it, but it will be a more challenging fabrication process.”

The Miracle Array: New Window on the Extreme Universe

When we gaze at the stars, the light we see is radiation from hot gas on the stellar surfaces, continuously heated by nuclear fusion at the centers of those stars. Recently, scientists at Los Alamos have developed a technique to survey the entire overhead sky for sources of much higher energy radiation, gamma rays at energies trillions of times higher than the energy of the starlight we see with our eyes.

Those gamma rays come from regions of violent activity, where unknown mechanisms accelerate matter to high energies. They come from the active centers of distant galaxies, where stars are being swallowed up by supermassive black holes, and solar-system-size clouds of hot ionized matter are being ejected in narrowly focused jets moving at nearly the speed of light. They also come from expanding nebulae left over from exploding stars (supernovae), wind nebulae streaming from compact neutron stars, and stellar-size objects that produce short, ultrabright bursts of gamma rays.

Understanding these rapidly changing regions and how they generate high-energy gamma rays presents us with some of the most-difficult problems in modern physics. For example, are these gamma-ray sources also generating cosmic rays, the very high energy charged nuclei that streak across the galaxy?

"The regions emitting gamma rays have intense gravitational, electric, and magnetic fields," says Brenda Dingus, current team leader of the Milagro project, the high-energy gamma-ray experiment at Los Alamos. "In our work at Milagro, we and our university collaborators have been surveying the sky for the very highest energy gamma rays, those with energies of 10- to 100-trillion electron volts (TeV) and higher. These gamma rays can tell us the most about these violent regions and thereby put constraints on our ideas about the acceleration mechanisms that might be taking place there."

Locating these gamma-ray sources might also help solve the century-old mystery of how and where cosmic rays are formed. Gamma rays, like visible light, travel to Earth in straight lines and therefore reveal their place of origin. In contrast, the electrically charged cosmic rays get deflected by the magnetic fields in our galaxy and arrive at Earth from all directions with no indication of where they come from.

"Cosmic rays form a large, uniform background. To detect a gamma-ray source, we must detect a significant number of gamma rays coming from a particular direction, a number larger than the fluctuations in the cosmic-ray background," continues Gus Sinnis, co-spokesperson for Milagro and leader of the Neutron Science and Technology Group. "Because gamma rays are likely 'fellow travelers' of cosmic rays, in the sense that they are produced with them or by them, a high-energy gamma-ray source is likely to be at or near a cosmic-ray source. If one finds the former, one is likely closing in on the latter."

Until the Milagro experiment, gamma-ray astronomers had great difficulty detecting sources of gamma rays with energies as high as 10 to 100 TeV.

To detect a source of gamma rays in that energy range, one must detect at least 10 to 20 such gamma rays per day coming from a particular direction.

That's an impossible task for satellite-borne detectors. Their collecting area is so small that almost no gamma rays at these energies would intercept the detector in an entire year.

The ground-based gamma-ray telescopes known as atmospheric Cherenkov telescopes are much larger, but they cannot detect gamma rays directly because gamma rays never reach the ground. Instead, gamma rays produce a shower of secondary particles when they enter Earth's atmosphere (see figure below), and the atmospheric Cherenkov telescopes pick up the (Cherenkov) light radiated in the wake of those secondary particles.

Those telescopes have been very successful and were the first to discover a cosmic source of TeV gamma rays (the Crab Nebula, shown in the opening illustration). However, their narrow field of view, combined with the requirement for moonless, cloudless nights, restricts their viewing to a discrete number of directions and a limited viewing time (50 hours per year) in each direction. They therefore have difficulty detecting sources that emit gamma rays above 10 TeV or that are spread over a wide area in the celestial sky.

These limitations on field of view and observation time can be overcome by a ground-based array of particle detectors spread over a large area. This large-area array can operate around the clock and simultaneously view the entire overhead sky. However, it will detect only the shower particles that survive to ground level, and of those, only the small fraction, typically less than 1 percent, that intercept the sparsely distributed detectors. The rest fall, you might say, between the cracks. Large-area arrays are therefore sensitive to the showers from gamma rays that are 100 TeV and above because those showers cover the largest areas and generate the largest numbers of particles: millions of particles and more.

Unfortunately, the number of 100-TeV gamma rays entering Earth's atmosphere is so low that the enormous Cygnus array at Los Alamos and the Chicago Air Shower Array, covering over 200,000 square meters in Dugway, Utah, failed to detect a significant number of 100-TeV showers coming from any particular direction. Thus, no sources were found.

The Los Alamos team concluded that a successful ground-based array would have to be sensitive to showers from much lower-energy gamma rays—down to 0.1 TeV. At that energy, the showers are millions of times more numerous, but each shower contains only thousands as opposed to millions of particles, and many particles would be absorbed before reaching the ground. The array would therefore have to detect almost all the particles that reach the ground, which meant creating an array with end-to-end detectors. That approach would be far too expensive.

The impasse provided an important moment for Los Alamos' Todd Haines. When still a graduate student at the University of California, Irvine, Haines had begun working on a new approach to detecting gamma-ray showers—an approach involving water.

Shower particles that intercepted a huge tank of water would streak through the water at nearly the speed of light, setting electrons in the water in motion. Those electrons would radiate light, and because light travels at only about three-quarters its normal speed in water, the light would trail behind the speeding particle and form a wide-angle (41-degree) wake, like the bow wave formed by a boat traveling faster than the speed of water waves or like a supersonic plane's shock wave, which creates the sonic boom. Any light detector within the wake would detect the shower particle.
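The 41-degree figure follows from the Cherenkov condition cos θ = 1/(nβ). A quick check with the textbook refractive index of water (n ≈ 1.33, a standard value not given in the article) and a fully relativistic particle (β ≈ 1):

```python
import math

n_water = 1.33  # refractive index of water (standard value)
beta = 1.0      # particle speed as a fraction of c (relativistic shower particle)

# Cherenkov condition: light wake opens at angle theta where cos(theta) = 1/(n*beta)
theta = math.degrees(math.acos(1.0 / (n_water * beta)))
print(round(theta, 1))  # ~41.2 degrees, matching the article's wide-angle wake
```

The wide angle is what makes the water technique practical: a single photomultiplier tube sees light from particles over a broad swath of the pond.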

Now, if an array of hundreds of light detectors (photomultiplier tubes, or PMTs) were immersed in a mammoth tank, an arrangement could be found that guaranteed that light from every particle entering the pond would be detected. In addition, the array could operate during the day if covered with a light-tight tent that kept out every ray of sunshine.

The idea was very promising, but with no funds to construct the water tank, it was just another new idea. Then one day Darragh Nagle, a Los Alamos nuclear physicist, burst into Haines' office and announced, "I know where we can build your project."

Nagle took Haines to Fenton Hill in the Jemez mountains west of Los Alamos. There an abandoned manmade pond, once used as a holding tank for an experimental geothermal well, now stood ready for another purpose.

Thus began Milagro (Spanish for "miracle"), in which a small pond was transformed into a new window on the universe, one that would view the most-extreme phenomena in our galaxy and regions nearby.

The Fenton Hill pond was 195 feet by 260 feet and 26 feet deep. It was smaller than desired but could be supplemented by small water tanks outside the pond—outriggers—each containing a PMT. The outriggers would be spaced over an area 10 times the size of the pond. For showers that intercepted the pond only partially, those outriggers would detect the central core of a shower, thereby enabling accurate reconstruction of the shower direction.

"The outriggers increased Milagro's sensitivity by a factor of two, and that made all the difference. We went from barely seeing the Crab Nebula, the brightest gamma-ray source in the northern sky, to discovering new sources of TeV gamma rays in our galaxy and finding a path towards proving that certain sources are cosmic-ray sources," says Sinnis.

From 2000 to 2008, Milagro detected over 200 billion showers from both gamma rays and cosmic rays, collected at the rate of 1,700 showers per second. Each was recorded electronically, analyzed on the spot, and characterized by statistical features that were saved for further analysis.
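Those totals pass a quick sanity check: at 1,700 showers per second, eight calendar years leave generous room for downtime while still exceeding 200 billion events.

```python
rate = 1700                        # showers per second
seconds = 8 * 365.25 * 24 * 3600   # 2000 through 2008
total = rate * seconds
print(f"{total:.2e}")              # ~4.3e11 -- "over 200 billion" holds even at ~50% livetime
```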

The analysis produced the most-sensitive survey of the TeV sky to date and led to several surprises. First were the five bright sources of TeV gamma rays (in the figure above, the sources with dark centers), with average gamma-ray energies of 12 TeV, standing out loud and clear above the uniform background of cosmic rays, and four less-prominent candidates.

Several of these sources overlapped the much lower energy gamma-ray sources that were discovered 5 to 10 years earlier by satellite-borne gamma-ray detectors. Ground-based atmospheric Cherenkov telescopes had not seen them, despite having searched for TeV gamma rays in the directions of these known sources.

One explanation was that some Milagro sources are spatially extended, up to a few degrees in diameter, making them difficult to detect with narrow-view telescopes. Of these, some turned out to be nearby pulsar wind nebulae, exotic nebulae left over from supernovae that have at their center a neutron star pulsar creating a wind of high-energy particles (see opening illustration showing the Crab Nebula).

A follow-up search with longer observation times by the HESS atmospheric Cherenkov telescope array in Namibia resulted in detection of one of Milagro's sources. The HESS result not only confirmed Milagro's measurement of the total flux of TeV gamma rays but also determined that this source was unusually bright at the very highest energies at which Milagro has its greatest sensitivity.

Another major Milagro discovery was a surprisingly large number of TeV gamma rays coming from an extended region of the Milky Way known as Cygnus. The expected number is calculated by assuming the gamma rays are produced by the uniform background of cosmic rays impinging on the interstellar matter and radiation in the Cygnus region. However, Cygnus contains an unusually high number of objects (supernova remnants, young stellar clusters, and stars called Wolf-Rayet stars that emit large winds) that could be cosmic-ray sources. "The excess of gamma rays detected by Milagro could be the smoking gun for a recent cosmic-ray source, an explosion of a star within the region in the past 10,000 years," explains Sinnis.

Milagro has taught us to "expect the unexpected" says Dingus. "When we open a new window on the universe, the biggest discoveries are not the ones predicted."

"With all its success, Milagro provided only a proof-of-principle for the basic technique of using Cherenkov radiation in water to detect shower particles," says Dingus. "Our experience over the last decade and our computer simulations of Milagro's performance convinced us that we could build a much more sensitive detector, and with support from the Laboratory-Directed Research and Development program, we've designed it."

Funding is being sought from the National Science Foundation and Mexican institutions for the new array: HAWC, for High Altitude Water Cherenkov array. It will be built on the shoulder of Sierra Negra, a volcano adjacent to Mexico's highest peak. At 13,500 feet, HAWC will be 4,000 feet higher than Milagro was.

At the higher altitude, the number of particles reaching the ground from a shower of a given energy is 5 times higher than at the Milagro altitude. The HAWC PMTs will be distributed one per tank in 900 water-filled tanks placed side by side over an area about 10 times that of the Milagro pond. Because each PMT will be in its own large tank, the light from each shower particle will be seen by only one PMT, which will allow more-accurate determination of the shower energy.

The increase in altitude, the larger area, and the optical isolation of the PMTs will increase the overall sensitivity 10 to 15 times—high enough to detect many new gamma-ray sources and to monitor the variability of these sources.

Atmospheric Cherenkov telescopes have detected, at distances of billions of light-years, extragalactic sources that flare in only a few minutes, but they have been able to monitor only a few sources for a small amount of time. HAWC will observe the TeV sky every day, and its higher sensitivity will increase the energy range over which these sources can be detected. Milagro found a few sources, but HAWC will add important details for understanding the physical mechanisms in nature's high-energy particle accelerators.

The enormous progress in gamma-ray astronomy over the past decade has fueled intense interest in future instruments. Large investments are planned for the next generation of atmospheric Cherenkov telescopes. Meanwhile, Los Alamos has blazed a new path that will culminate in HAWC, a highly sensitive all-sky survey instrument able to reveal the transient high-energy universe. Sinnis summarizes its promise this way: "With HAWC, we will be in a unique position to close in on the century-old question of the origin of cosmic rays."


Saturday, September 19, 2009

Memristor Minds: The future of artificial intelligence

Slime mould feeding on the surface of an almond. These cunning organisms could be the missing link in memory circuits (Image: Eye of Science/Science Photo Library)

EVER had the feeling something is missing? If so, you're in good company. Dmitri Mendeleev did in 1869 when he noticed four gaps in his periodic table. They turned out to be the undiscovered elements scandium, gallium, technetium and germanium. Paul Dirac did in 1929 when he looked deep into the quantum-mechanical equation he had formulated to describe the electron. Besides the electron, he saw something else that looked rather like it, but different. It was only in 1932, when the electron's antimatter sibling, the positron, was sighted in cosmic rays, that such a thing was found to exist.

In 1971, Leon Chua had that feeling. A young electronics engineer with a penchant for mathematics at the University of California, Berkeley, he was fascinated by the fact that electronics had no rigorous mathematical foundation. So like any diligent scientist, he set about trying to derive one.

And he found something missing: a fourth basic circuit element besides the standard trio of resistor, capacitor and inductor. Chua dubbed it the "memristor". The only problem was that as far as Chua or anyone else could see, memristors did not actually exist.

Except that they do. Within the past couple of years, memristors have morphed from obscure jargon into one of the hottest properties in physics. They've not only been made, but their unique capabilities might revolutionise consumer electronics. More than that, though, along with completing the jigsaw of electronics, they might solve the puzzle of how nature makes that most delicate and powerful of computers - the brain.

That would be a fitting pay-off for a story which, in its beginnings, is a triumph of pure logic. Back in 1971, Chua was examining the four basic quantities that define an electronic circuit. First, there is electric charge. Then there is the change in that charge over time, better known as current. Currents create magnetic fields, leading to a third variable, magnetic flux, which characterises the field's strength. Finally, magnetic flux varies with time, leading to the quantity we call voltage.

Four interconnected things, mathematics says, can be related in six ways. Charge and current, and magnetic flux and voltage, are connected through their definitions. That's two. Three more associations correspond to the three traditional circuit elements. A resistor is any device that, when you pass current through it, creates a voltage. For a given voltage a capacitor will store a certain amount of charge. Pass a current through an inductor, and you create a magnetic flux. That makes five. Something missing?

Indeed. Where was the device that connected charge and magnetic flux? The short answer was there wasn't one. But there should have been.
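The bookkeeping above is just combinatorics: four quantities pair up in exactly six ways, and the text assigns a relation to five of them. Listing the pairs makes the gap explicit (the labels follow the text's own accounting):

```python
from itertools import combinations

# The four circuit quantities and the relation tying each pair together:
relation = {
    frozenset({"q", "i"}):   "i = dq/dt (definition)",
    frozenset({"phi", "v"}): "v = dphi/dt (definition)",
    frozenset({"v", "i"}):   "resistor: v = R i",
    frozenset({"q", "v"}):   "capacitor: q = C v",
    frozenset({"phi", "i"}): "inductor: phi = L i",
    frozenset({"q", "phi"}): "memristor: phi = M q (the missing link)",
}

pairs = list(combinations(["q", "i", "phi", "v"], 2))
print(len(pairs))  # 6 -- four quantities, six distinct pairings
for pair in pairs:
    print(relation[frozenset(pair)])
```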

Chua set about exploring what this device would do. It was something that no combination of resistors, capacitors and inductors would do. Because moving charges make currents, and changing magnetic fluxes breed voltages, the new device would generate a voltage from a current rather like a resistor, but in a complex, dynamic way. In fact, Chua calculated, it would behave like a resistor that could "remember" what current had flowed through it before (see diagram). Thus the memristor was born.
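Chua's definition makes the device a resistor whose value depends on the total charge that has ever flowed through it. A toy simulation shows the "memory" effect; the linear memristance curve M(q) = r0 + k·q here is purely hypothetical, chosen only for illustration:

```python
def memristor_voltage(currents, dt=1e-3, r0=100.0, k=5e4):
    """Toy ideal memristor: v = M(q) * i, where q is the running
    integral of current. M(q) = r0 + k*q is a made-up memristance
    curve, purely for illustration."""
    q = 0.0
    volts = []
    for i in currents:
        q += i * dt                    # accumulated charge: the device's "memory"
        volts.append((r0 + k * q) * i)
    return volts

# The same instantaneous current produces different voltages
# depending on what flowed through the device earlier:
fresh = memristor_voltage([1e-3])[-1]        # no history
used = memristor_voltage([1e-3] * 1000)[-1]  # one second of prior current
```

A plain resistor would return identical voltages in both cases; the dependence on accumulated charge is exactly the resistance-with-memory behaviour Chua predicted.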

And promptly abandoned. Though it was welcome in theory, no physical device or material seemed capable of the resistance-with-memory effect. The fundamentals of electronics have kept Chua busy ever since, but even he had low expectations for his baby. "I never thought I'd see one of these devices in my lifetime," he says.

He had reckoned without Stan Williams, senior fellow at the Hewlett-Packard Laboratories in Palo Alto, California. In the early 2000s, Williams and his team were wondering whether you could create a fast, low-power switch by placing two tiny resistors made of titanium dioxide over one another, using the current in one to somehow toggle the resistance in the other on and off.

Nanoscale novelty
They found that they could, but the resistance in different switches behaved in a way that was impossible to predict using any conventional model. Williams was stumped. It took three years and a chance tip-off from a colleague about Chua's work before the revelation came. "I realised suddenly that the equations I was writing down to describe our device were very similar to Chua's," says Williams. "Then everything fell into place."

What was happening was this: in its pure state of repeating units of one titanium and two oxygen atoms, titanium dioxide is a semiconductor. Heat the material, though, and some of the oxygen is driven out of the structure, leaving electrically charged bubbles that make the material behave like a metal.

In Williams's switches, the upper resistor was made of pure semiconductor, and the lower of the oxygen-deficient metal. Applying a voltage to the device pushes charged bubbles up from the metal, radically reducing the semiconductor's resistance and making it into a full-blown conductor. A voltage applied in the other direction starts the merry-go-round revolving the other way: the bubbles drain back down into the lower layer, and the upper layer reverts to a high-resistance, semiconducting state.

The crucial thing is that, every time the voltage is switched off, the merry-go-round stops and the resistance is frozen. When the voltage is switched on again, the system "remembers" where it was, waking up in the same resistance state (Nature, vol 453, p 80). Williams had accidentally made a memristor just as Chua had described it.
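This mechanism maps neatly onto the linear ionic drift picture published with the device in that Nature paper. The sketch below uses representative parameter values, not ones from the paper:

```python
def tio2_memristor(currents, dt=1e-3, film_d=10e-9, mu=1e-14,
                   r_on=100.0, r_off=16000.0):
    """Linear ionic drift sketch of the TiO2 memristor: a doped
    (metallic) region of width w in series with the undoped
    (semiconducting) remainder of the film. Current drifts the
    boundary between them. All parameter values are illustrative."""
    w = 0.5 * film_d  # boundary starts mid-film
    resistances = []
    for i in currents:
        resistances.append(r_on * (w / film_d) + r_off * (1 - w / film_d))
        w += (mu * r_on / film_d) * i * dt  # charged vacancies drift with current
        w = min(max(w, 0.0), film_d)        # boundary cannot leave the film
    return resistances

rs = tio2_memristor([1e-4] * 2000)
# Resistance falls as positive current widens the doped region, and it
# simply stays wherever it was when the drive stops: the frozen memory state.
```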

Williams could also show why a memristor had never been seen before. Because the effect depends on atomic-scale movements, it only popped up on the nanoscale of Williams's devices. "On the millimetre scale, it is essentially unobservable," he says.

Nanoscale or no, it rapidly became clear just how useful memristors might be. Information can be written into the material as the resistance state of the memristor in a few nanoseconds using just a few picojoules of energy - "as good as anything needs to be", according to Williams. And once written, memristive memory stays written even when the power is switched off.

Memory mould
This was a revelation. For 50 years, electronics engineers had been building networks of dozens of transistors - the building blocks of memory chips - to store single bits of information, without knowing it was memristance they were attempting to simulate. Now Williams, standing on the shoulders of Chua, had shown that a single tiny component was all they needed.

The most immediate potential use is as a powerful replacement for flash memory - the kind used in applications that require quick writing and rewriting capabilities, such as in cameras and USB memory sticks. Like flash memory, memristive memory can only be written 10,000 times or so before the constant atomic movements within the device cause it to break down. That makes it unsuitable for computer memories. Still, Williams believes it will be possible to improve the durability of memristors. Then, he says, they could be just the thing for a superfast random access memory (RAM), the working memory that computers use to store data on the fly, and ultimately even for hard drives.

Were this an article about a conventional breakthrough in electronics, that would be the end of the story. Better memory materials alone do not set the pulse racing. We have come to regard ever zippier consumer electronics as a basic right, and are notoriously insouciant about the improvements in basic physics that make them possible. What's different about memristors?

Explaining that requires a dramatic change of scene - to the world of the slime mould Physarum polycephalum. In an understated way, this large, gloopy, single-celled organism is a beast of surprising intelligence. It can sense and react to its environment, and can even solve simple puzzles. Perhaps its most remarkable skill, though, was reported last year by Tetsu Saigusa and his colleagues at Hokkaido University in Sapporo, Japan: it can anticipate periodic events.

Here's how we know. P. polycephalum can move around by passing a watery substance known as sol through its viscous, gelatinous interior, allowing it to extend itself in a particular direction. At room temperature, the slime mould moves at a slothful rate of about a centimetre per hour, but you can speed this movement up by giving the mould a blast of warm, moist air.

You can also slow it down with a cool, dry breeze, which is what the Japanese researchers did. They exposed the gloop to 10 minutes of cold air, allowed it to warm up again for a set period of time, and repeated the sequence three times. Sure enough, the mould slowed down and sped up in time with the temperature changes.

But then they changed the rules. Instead of giving P. polycephalum a fourth blast of cold air, they did nothing. The slime mould's reaction was remarkable: it slowed down again, in anticipation of a blast that never came (Physical Review Letters, vol 100, p 018101).

It's worth taking a moment to think about what this means. Somehow, this single-celled organism had memorised the pattern of events it was faced with and changed its behaviour to anticipate a future event. That's something we humans have trouble enough with, let alone a single-celled organism without a neuron to call its own.

The Japanese paper rang a bell with Max Di Ventra, a physicist at the University of California, San Diego. He was one of the few who had followed Chua's work, and recognised that the slime mould was behaving like a memristive circuit. To prove his contention, he and his colleagues set about building a circuit that would, like the slime mould, learn and predict future signals.

The analogous circuit proved simple to derive. Changes in an external voltage applied to the circuit simulated changes in the temperature and humidity of the slime mould's environment, and the voltage across a memristive element represented the slime mould's speed. Wired up the right way, the memristor's voltage would vary in tempo with an arbitrary series of external voltage pulses. When "trained" through a series of three equally spaced voltage pulses, the memristor voltage repeated the response even when subsequent pulses did not appear.

Di Ventra speculates that the viscosities of the sol and gel components of the slime mould make for a mechanical analogue of memristance. When the external temperature rises, the gel component starts to break down and become less viscous, creating new pathways through which the sol can flow and speeding up the cell's movement. A lowered temperature reverses that process, but how the initial state is regained depends on where the pathways were formed, and therefore on the cell's internal history.

In true memristive fashion, Chua had anticipated the idea that memristors might have something to say about how biological organisms learn. While completing his first paper on memristors, he became fascinated by synapses - the gaps between nerve cells in higher organisms across which nerve impulses must pass. In particular, he noticed their complex electrical response to the ebb and flow of potassium and sodium ions across the membranes of each cell, which allow the synapses to alter their response according to the frequency and strength of signals. It looked maddeningly similar to the response a memristor would produce. "I realised then that synapses were memristors," he says. "The ion channel was the missing circuit element I was looking for, and it already existed in nature."

To Chua, this all points to a home truth. Despite years of effort, attempts to build an electronic intelligence that can mimic the awesome power of a brain have seen little success. And that might be simply because we were lacking the crucial electronic components - memristors.

So now we've found them, might a new era in artificial intelligence be at hand? The Defense Advanced Research Projects Agency certainly thinks so. DARPA is a US Department of Defense outfit with a strong record in backing high-risk, high-payoff projects - things like the internet. In April last year, it announced the Systems of Neuromorphic Adaptive Plastic Scalable Electronics Program, SyNAPSE for short, to create "electronic neuromorphic machine technology that is scalable to biological levels".

I, memristor
Williams's team from Hewlett-Packard is heavily involved. Late last year, in an obscure US Department of Energy publication called SciDAC Review, his colleague Greg Snider set out how a memristor-based chip might be wired up to test more complex models of synapses. He points out that in the human cortex synapses are packed at a density of about 10^10 per square centimetre, whereas today's microprocessors only manage densities 10 times less. "That is one important reason intelligent machines are not yet walking around on the street," he says.

Snider's dream is of a field he calls "cortical computing" that harnesses the possibilities of memristors to mimic how the brain's neurons interact. It's an entirely new idea. "People confuse these kinds of networks with neural networks," says Williams. But neural networks - the previous best hope for creating an artificial brain - are software working on standard computing hardware. "What we're aiming for is actually a change in architecture," he says.

The first steps are already being taken. Williams and Snider have teamed up with Gail Carpenter and Stephen Grossberg at Boston University, who are pioneers in reducing neural behaviours to systems of differential equations, to create hybrid transistor-memristor chips designed to reproduce some of the brain's thought processes. Di Ventra and his colleague Yuriy Pershin have gone further and built a memristive synapse that they claim behaves like the real thing.

The electronic brain will be some time in coming, though. "We're still getting to grips with this chip," says Williams. Part of the problem is that the chip is just too intelligent - rather than a standard digital pulse it produces an analogue output that flummoxes the standard software used to test chips. So Williams and his colleagues have had to develop their own test software. "All that takes time," he says.

Chua, meanwhile, is not resting on his laurels. He has been busy extending his theory of fundamental circuit elements, asking what happens if you combine the properties of memristors with those of capacitors and inductors to produce compound devices called memcapacitors and meminductors, and then what happens if you combine those devices, and so on.

"Memcapacitors may be even more useful than memristors," says Chua, "because they don't have any resistance." In theory at least, a memcapacitor could store data without dissipating any energy at all. Mighty handy - whatever you want to do with them. Williams agrees. In fact, his team is already on the case, producing a first prototype memcapacitor earlier this year, a result that he aims to publish soon. "We haven't characterised it yet," he says. With so many fundamental breakthroughs to work on, he says, it's hard to decide what to do next. Maybe a memristor could help.


Seeing The Surface

The Lunar Reconnaissance Orbiter detected surface temperatures on the moon’s south pole during the day (left) and night (right). Regions sheltered from the sun remain cold enough to harbor water ice or other volatiles.

The Lunar Reconnaissance Orbiter satellite has imaged the moon’s craggy craters in great detail and identified new possible markers of water ice, NASA scientists reported September 17 at a press briefing.

Launched June 18, 2009, and charged with getting an improved topographical map of the moon, LRO orbits about 50 kilometers (31 miles) above the moon’s surface. Cameras aboard LRO could image a car if it were sitting on the lunar surface, said Richard Vondrak, LRO project scientist at NASA Goddard Space Flight Center in Greenbelt, Md.

So far, the data coming back from LRO’s seven instruments “exceed our wildest expectations,” Vondrak said. “We’re looking at the moon now with new eyes.”

Altitude measurements give scientists a detailed look at the topography of the lunar south pole, shown here. Red regions are high altitude, and blue regions are low altitude.

Early images have turned up fresh craters, boulders and smooth sites that would be good for landings, should humans or robots return to the moon’s surface. Also important for future expeditions, LRO’s equipment measured the types and amounts of damaging radiation at various points near the moon.

With infrared radiation detectors, LRO found that temperatures deep in some permanently shaded regions never exceed about 35 kelvins (-238° Celsius). Vondrak said that these bitterly cold regions at the lunar south pole “are perhaps the coldest part of the solar system.” Such cold temperatures could allow volatiles, such as water ice, to survive.

Instruments aboard LRO also found hallmarks of hydrogen—a potential marker of water—in unexpected places. Signs of hydrogen turned up in cold, permanently shaded regions of the moon, as scientists expected, but also in warmer places.

“There’s still an awful lot to be done,” says Michael Wargo, chief lunar scientist at NASA Headquarters in Washington, D.C. “And the maps will only get better.”


Friday, September 18, 2009

Planck Snaps its First Images of Ancient Cosmic Light

One of Planck's first images is shown as a strip superimposed over a two-dimensional projection of the whole sky as seen in visible light. The strip covers 360 degrees of sky and, at its widest, is about 15 degrees across. The prominent horizontal band is light from our Milky Way galaxy.

The Planck image shows how the sky looks at millimeter-long wavelengths. Red areas are brighter, blue areas are darker. The large red strips show the Milky Way. The small bright and dark spots far from the galactic plane are from the cosmic microwave background -- relic radiation left over from the birth of our universe.

Planck is measuring the sky at nine wavelengths of light, one of which is shown here.

Image credits: ESA, LFI & HFI Consortia, Background optical image: Axel Mellinger

PASADENA, Calif. – The Planck mission has captured its first rough images of the sky, demonstrating the observatory is working and ready to measure light from the dawn of time. Planck – a European Space Agency mission with significant NASA participation – will survey the entire sky to learn more about the history and evolution of our universe.

The space telescope started surveying the sky regularly on Aug. 13 from its vantage point far from Earth. Planck is in orbit around the second Lagrange point of our Earth-sun system, a relatively stable spot located 1.5 million kilometers (930,000 miles) away from Earth.

"We are beginning to observe ancient light that has traveled more than 13 billion years to reach us," said Charles Lawrence, the NASA project scientist for the mission at NASA's Jet Propulsion Laboratory in Pasadena, Calif. "It's tremendously exciting to see these very first data from Planck. They show that all systems are working well and give a preview of the all-sky images to come."

A new image can be seen online.

Following launch on May 14, the satellite's subsystems were checked out in parallel with the cool-down of its instruments' detectors. The detectors are looking for temperature variations in the cosmic microwave background, which consists of microwaves from the early universe. The temperature variations are a million times smaller than one degree. To achieve this precision, Planck's detectors have been cooled to extremely low temperatures, some of them very close to the lowest temperature theoretically attainable.

Instrument commissioning, optimization and initial calibration were completed by the second week of August.

During the "first-light" survey, which took place from Aug. 13 to 27, Planck surveyed the sky continuously. It was carried out to verify the stability of the instruments and the ability to calibrate them over long periods to the exquisite accuracy needed. The survey yielded maps of a strip of the sky, one for each of Planck's nine frequencies. Preliminary analysis indicates that the quality of the data is excellent.

Routine operations will now continue for at least 15 months without a break. In this time, Planck will be able to gather data for two full independent all-sky maps. To fully exploit the high sensitivity of Planck, the data will require a great deal of delicate calibration and careful analysis. The mission promises a treasure trove of data that will keep cosmologists and astrophysicists busy for decades to come.

Planck is a European Space Agency mission, with significant participation from NASA. NASA's Planck Project Office is based at JPL. JPL contributed mission-enabling technology for both of Planck's science instruments. European, Canadian, U.S. and NASA Planck scientists will work together to analyze the Planck data. More information is online.


Hans Rosling

Hans Rosling knows that statistics can change the world—if he can only get the right people to pay attention to them. To make that happen, he has spearheaded the development of Trendalyzer, a software package that sends stolid data into fluid motion by creating animations of economic, social, and health statistics evolving through time. Nations race across the screen through decades of progress in a few seconds, allowing undetected trends and buried connections to leap out at viewers. The dramatic animations are already changing the perspectives of political leaders, entrepreneurs, and activists around the globe.

Rosling’s passion for statistics was born in his early career as a physician in Mozambique, where he discovered a new paralytic disease called konzo. By carefully sifting through medical data from the afflicted regions, he identified malnutrition and inadequately processed cassava—a tuber used as food in tropical countries—as the cause of the disease, allowing for prevention through better food preparation. In March, Trendalyzer was acquired by Google, which will make it freely accessible to a global audience. Rosling’s latest mission is to make publicly funded health, social, and economic statistics from the United Nations and governments freely available as well. Combining that information with the software needed to interpret it, he contends, will encourage entrepreneurship and drive public policies that combat poverty and disease. Due in part to his advocacy, the U.N. recently opened its online global database free of charge. DISCOVER spoke with Rosling—professor of international health at the Karolinska Institute in Stockholm, Sweden, and a 20-year veteran researcher of disease, poverty, and hunger in Africa—about how he set statistics loose on the world stage, what he has learned from his decades studying global development, and why he is obsessed with making public data truly public.

You spent 20 years studying disease, hunger, and poverty in Africa. How did that shape your view of economic development?
I’ve done a lot of practical anthropology, living in villages with people and realizing how difficult it is to get out of poverty. When in poverty, people use their skill to avoid hunger. They can’t use it for progress. To get away from poverty, you need several things at the same time: school, health, and infrastructure—those are the public investments. And on the other side, you need market opportunities, information, employment, and human rights.

What inspired you to create statistical software rather than staying focused on your public-health research?
While teaching a course on global development at Uppsala University in Sweden, I realized our students didn’t have a fact-based worldview. They talked about “we” and “them.” They thought there were two groups of countries: the Western world, with small families and long lives, and the third world, with large families and short lives. I explained that we have a continuum of life conditions in the world—we can’t put countries into two groups. But when I showed them graphs of this [with time as one axis], it didn’t impact them.

Then in 1994, I got the idea to show each country as a bubble, with economic factors on one axis and child survival on the other. My son started writing the code that made the bubbles move through time, and his wife joined as designer. When you show time as an x-axis, you violate the way we think. But when you show time as graphic movement, as animation, people suddenly understand. Our animation really got the students’ attention. Show the income distribution of the United States and China over time, and in 15 to 20 seconds I can make people understand things that textbooks and years of study haven’t. This is a discovery in perceptional psychology, of how to show trends in society.

Who will use Trendalyzer?
So far, we have had a major hit with two target groups: children under 12 and heads of state. What they have in common is that you have only 5 to 10 seconds to impress them. Leaders at very high levels in government were ignorant, but they were very interested in learning. This software came at a point when it was needed. The cold war is over, globalization is here; everyone wants to understand what is happening with the world. It’s difficult to predict who will use any new technology.

Why do you consider it so crucial to make health and economic data from public agencies and governments accessible to all?
Data allow your political judgments to be based on fact, to the extent that numbers describe realities. In Mexico, the government decided that every village with more than a certain poverty rate and infant mortality level should get electricity. The statistical agency was told to identify these villages, which they did. But they also showed that the villages were so remote, up in the mountains, that if you were going to put electricity there, you might as well also put it in the two or three neighboring villages. Once you run electricity up the mountain, it makes economic sense to cover everyone. For very little additional cost, the whole area could have it.

Without data, you could argue in parliament about which villages should get electricity. But by analyzing the data, the government statistician shaped policy into being more cost-effective and causing less friction. That’s just one example. Good analysis is very useful when you want to convert a political decision into an investment. It can also go the other way and drive policy. You need to show where children are dying in the United States—in Appalachia and the southern and rural areas—so the public can make a serious decision about it: Do we want Appalachia to have a higher child mortality than Malaysia?

Statistics from the U.N. and government agencies are readily available for purchase, but you argue strongly for dropping fees completely. Why is this so important?
Public statistics are owned by taxpayers. These data, which cost about $10 billion in tax money to collect, belong to everyone. And governments are selling them. The World Bank gets statistics for free from the world. They put them together and sell them back to the world for $275 per copy. This hampers entrepreneurs, activists, and politicians from getting access to public statistics. The money is not the only cost: It is cumbersome to pay, it takes time to get the data, and you are not allowed to make the data available to others.

Businesses realize that statistics should be free. And there is very strong support from middle-income countries—China, South Africa, Brazil, Mexico. They desperately need statistics because their countries are changing so rapidly and they want to trade. Their entrepreneurs can’t afford to pay for data.

What is the most surprising thing about viewing global progress through Trendalyzer software, as compared with looking at the more familiar charts and tables of economic data?
Western Europe and the United States have a stagnant view of the rest of the world. It’s like the view England used to have of the United States when it was a colony. But the United States emerged as the world’s power. Now Asia is regaining its position as the world’s power. The world will be normal again; it will be an Asian world, as it always was except for these last thousand years. They are working like hell to make that happen, whereas we are consuming like hell. But because of our preconceived ideas, we don’t fully understand these global trends until we look at the data.

The concept of the Western world and the developing world is the main obstacle to understanding. Most people know only two types of countries, Western and third world, whereas I know 200 types of countries. I know each country’s gross national product, educational level, child mortality, main export products, and so on. We have a continuum of life conditions in the world. The life expectancy in Vietnam today is the same as it was in the United States in 1975. That made Al Gore jump onto the stage and say: “I didn’t know that. I didn’t have the slightest idea.” And that was Al Gore—you can imagine other politicians.

Now that Google has taken over the development of Trendalyzer, what is your next project?
An initiative called Beyond MDG (Beyond the Millennium Development Goals). We want to know: How can we better measure and communicate the conditions of the poorest 1 to 2 billion people in the world? Instead of talking about the third world or developing countries, we’re talking about these specific fellow human beings. Compared with the burden of disease, disasters are a minor health problem. The tsunami in the Indian Ocean caused the equivalent of one month of children’s pneumonia deaths in the world. There is a tsunami every month that could be cured by penicillin, for which there are no images and no reporting.

What are the biggest challenges in global health and poverty right now?
The 1 to 2 billion poorest in the world, who don’t have food for the day, suffer from the worst disease: globalization deficiency. The way globalization is occurring could be much better, but the worst thing is not being part of it. For those people, we need to support good civil societies and governments. We need to make public investments and private markets work together. We also need fair trade. You can’t have one country subsidize produce that is a matter of life and death for another country. If Niger can export its cotton and grow its economy, that’s much better than giving it aid.

But when we give economic aid, what is the best way to do it?
First, it must be steady. Listen to serious politicians from the poorest countries, and they say that they cannot implement necessary programs with aid because it is so unpredictable. Suddenly and without warning, it would be taken away. Second, it must be oriented toward the needs of the poor, not the perceptions of the rich. Jeffrey Sachs did a calculation that basic health care in Africa costs $30. You have a person there with $10, you give him $2, and then you ask why it doesn’t work. We have to be realistic.

You also talk about developing clear goals and means for development. Can you explain that concept?
The goals are paradise. Get the means in order, and the goals will follow. We know kids should go to school—make it possible. Subsidize teacher salaries steadily. I asked a child in Africa, “How do you stay so healthy?” She replied, “Grandmother can read.” Getting girls into school—we think that makes a girl healthier five years later. What really matters is that it makes her a better grandmother 45 years later. Support governments that want to put law and order in their countries. If they aren’t brutal, support them. We can’t get into nitty-gritty detail with the policies of other countries. Focus on the means, and let the goals come when they come.

When it comes to eliminating poverty, you say, “The seemingly impossible is possible.” What makes you so optimistic?
Length of life is improving. Today we take for granted that death belongs to old age. We have controlled the major diseases, and now life expectancy is up to 70 or 80 years in Asia, the Arab world, and most of Latin America. There are places where you still have low life expectancy, but overall there has been a major change. And people of all religions accept family planning. They have two to three children per family. Twenty-five years ago, the global agenda was the population explosion. Now it’s solved. Education is there. We have a higher and higher proportion of children who go to school and become a part of the modern world. And we have economic growth in the world, on average, so we can supply our material needs much better than in the past. A good world for everyone is not a given, but it’s within our reach.


Music and the Mind

Thursday, September 17, 2009

Norm Borlaug: the man who fed the world

Norman Borlaug speaking at the Ministerial Conference and Expo on Agricultural Science and Technology in Sacramento, California, in 2003 (Image: US Department of Agriculture)

They don't make 'em like Norm Borlaug anymore. The father of the green revolution finally lost his long battle with cancer over the weekend at the age of 95. I wasn't surprised: he was looking frail when I saw him last year in Ciudad Obregón, Mexico, where he had launched the revolution.

That afternoon he managed a spirited speech, in fluent Mexican Spanish, to local farmers. But later, when I was allowed to ask him questions, he was flagging. He complained that using crops for biofuel was pushing up world food prices and hurting the poor. "We had other kinds of alternative energy but we stopped developing it," he fumed. "But now I don't have enough energy to keep talking."

He was a giant of the scientific and technological revolution of the 20th century. He probably saved more lives than the more famous names behind polio vaccines or DNA: Norm Borlaug ended famine in much of the world.

What an epitaph. "I personally cannot live comfortably in the midst of abject hunger and poverty and human misery," Borlaug famously said. Some people go into science thinking they might help save the world. Norm's your proof that it's possible.


Antarctica's hidden plumbing revealed

The first complete map of the lakes beneath Antarctica's ice sheets reveals the continent's secret water network is far more dynamic than we thought. This could be acting as a powerful lubricant beneath glaciers, contributing to sea level rise.

Unlike previous lake maps, which are confined to small regions, Ian Joughin at the University of Washington in Seattle and colleagues mapped 124 subglacial lakes across Antarctica using lasers on NASA's ICESat satellite (see map).

The team also observed the lakes draining and filling. While interior lakes tended to be static, many coastal lakes changed significantly. Some even appear to be connected by channels under the ice hundreds of kilometres long. For instance, when upstream lakes under the Recovery glacier drained 3 cubic kilometres of water, lakes downstream gained a similar amount (Journal of Glaciology, vol 55, p 573).

Water flowing under glaciers can act as a lubricant, causing land ice to accelerate into the sea and add to rising sea levels. "The implications for the flow of ice are potentially quite significant," says Andy Smith of the British Antarctic Survey in Cambridge, UK. Those lakes with no clear drainage channels are of particular interest, he says, because they could be spreading a thin film of lubricating water under glaciers.