Saturday, June 19, 2010

Discover Interview: The Math Behind the Physics Behind the Universe

Shing-Tung Yau is a force of nature. He is best known for conceiving the math behind string theory—which holds that, at the deepest level of reality, our universe is built out of 10-dimensional, subatomic vibrating strings. But Yau’s genius runs much deeper and wider: He has also spawned the modern synergy between geometry and physics, championed unprecedented teamwork in mathematics, and helped foster an intellectual rebirth in China.

Despite growing up in grinding poverty on a Hong Kong farm, Yau made his way to the University of California at Berkeley, where he studied with Chinese geometer Shiing-Shen Chern and the master of nonlinear equations, Charles Morrey. Then at age 29 Yau proved the Calabi conjecture, which posits that six-dimensional spaces lie hidden beneath the reality we perceive. These unseen dimensions lend rigor to string theory by supplementing the four dimensions—three of space and one of time—described in Einstein’s general relativity.

Since then Yau has held positions at the Institute for Advanced Study, Stanford University, and Harvard (where he currently chairs the math department), training two generations of grad students and embarking on far-flung collaborations that address topics ranging from the nature of dark matter to the formation of black holes. He has won the Fields Medal, a MacArthur Fellowship, and the Wolf Prize.

Through it all, Yau has remained bluntly outspoken. In China he has called for the resignation of academia’s old guard so new talent can rise. In the United States he has critiqued what he sees as rampant errors in mathematical proofs by young academics. Yau has also strived to speak directly to the public; his book The Shape of Inner Space, coauthored with Steve Nadis, is scheduled for publication this fall. He reflected on his life and work with DISCOVER senior editor Pamela Weintraub at his Harvard office over four days in February.

You’ve described your father as an enormous intellectual influence on you. Can you tell me about him?
He went to Japan to study economics, but he came back to help the Chinese defend themselves before the Japanese invaded in 1937. By the end of the war he was distributing food and clothes to the poor for the U.N. After the revolution in 1949, he worried about getting in trouble with the Communists, so he brought the whole family to Hong Kong. We were very poor—at first we were almost starving—but my father had a large group of students constantly at home to talk about philosophy and literature. I was 10, 11, 12 years old, and I grew accustomed to abstract reasoning. My father made us memorize long essays and poems. At the time I didn’t understand what they meant, but I remembered them and later made use of them.

Did part of you ever rebel?
I read most of the Kung Fu novels in secret. I quit school for more than half a year. I’d wake up and say I was going, but I’d spend the whole day exploring the mountains and then come back—but I did the homework that my father assigned to me at home.

I heard you led a gang at one point.
I had a group of friends under me. I’d go around, and sometimes we ended up in fistfights with some other groups. So?

How did you go from that rough-and-tumble young man to the focused person you are now?
In the early 1960s my father was chairman of the department of literature and philosophy at Hong Kong College. The college president wanted to make a deal with the Taiwanese government to send in spies. My father refused to go along and resigned. That created a big money problem because he had eight children by then. My father had to run around among different, distant colleges to support the family. Back in China he’d lent a friend some money, and after the Communists took over, the friend moved to Macau, a city near Hong Kong, and ran his own schools. So he told my father, “I cannot return your money, but your daughter can come to my school, and I’ll give her free room and board and free tuition.” So my older sister went to Macau to study and got some flu, some funny disease, we never knew exactly what. She came back and she was treated, but she died in 1962. Then my elder brother got a brain disease; at the time we didn’t know what it was. My father had all kinds of burdens on his shoulders and then he got a disease, which I believe was cancer, but we didn’t know much in those days. My mother was running around trying to get funding to help my father. Finally we raised some money, but it was too late. He died after two months in the hospital in 1963, in the middle of my studies in the ninth grade. We could no longer afford our apartment, so we were kicked out. That’s when I realized I would have to make decisions for myself.

What did you do then?
After a while the government leased us some land, and we built a small house thanks to money from friends, but it was in a village far from school. The other kids looked down on us for being poor, and I had to ask the school president to allow me to pay tuition at the end of the year, when my government fellowship came through. It was humiliating. But I studied hard and did very well, especially in math. Then a former student of my father started a primary school in a town closer to school. He said I could help teach math and stay there at night. I had to take care of myself, I had to wash things and all of that, but I learned how to survive.

What happened once you made your way to college?
I had fallen in love with math early on, but at the Chinese University of Hong Kong I realized that mathematics was built on standard axioms and logic. Soon I had arranged to take tests for the required math courses without actually attending while sitting in on more advanced classes, and no one seemed to mind. In my second year, Stephen Salaff, a young mathematician from U.C. Berkeley, came to teach in Hong Kong. He liked to talk to the students in the American way: He gave lectures and then he asked students questions. In many cases it turned out I could help him more than he helped me, because there were problems he couldn’t solve during class. Salaff suggested I apply to graduate school early. I was admitted to Berkeley and even got a fellowship. I borrowed some money from friends and flew to San Francisco in September 1969.

What did you think of California when you arrived?
The first thing that impressed me was the air. In Hong Kong it’s humid, hot, but in California it was cool and clear. I thought it was like heaven. A friend of Salaff’s came to the airport to pick me up and took me to the YMCA, where I shared a big room with four or five people. I noticed that everybody was watching baseball on TV. We didn’t have a TV at home. My neighbor who was sleeping there was a huge black man. He was talking in a language I had never heard before. He said, “Man, where the hell you come from?” It was fun, but I had to look for an apartment. I was walking around the street when I met another Chinese student from Hong Kong and we decided to share, but we couldn’t afford a place. We looked around and found another Chinese student, from Taiwan, so there’s three of us and it’s still not enough. Then we found an Alaskan also studying math, also on the street. So four of us went in together and the rent for each was $60 a month. My fellowship gave me $300 a month, and I sent half of it home.

What about your math studies?
There were many holes in my knowledge so I’d wake up early and start class at 8 a.m. I took three classes for credit, and the rest I audited. I brought my own lunch so even at lunchtime I was in class. I was especially excited about topology because I thought it could help reveal the structure of space. Einstein used geometry in his equation to give us the local picture: how space curved around our solar system or a galaxy. But the Einstein equation didn’t give the overall picture, the global structure of the whole universe. That’s where topology came in.

What is topology? Is it like geometry?
Geometry is specific and topology is general. Topologists study larger patterns and categories of shapes. For example, in geometry, a cube and a sphere are distinct. But in topology they are the same because you can deform one into the other without cutting through the surface. The torus, a doughnut-shaped surface with a hole in the middle, is a different form. It is clearly distinct from the sphere because you cannot deform a torus into a sphere no matter how you twist it.
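One concrete invariant behind this distinction is the Euler characteristic, V − E + F, a number that stays the same under any deformation that topology allows. A minimal sketch in Python, with the caveat that the particular meshes below are my own illustrative choices, not anything from the interview:

```python
def euler_characteristic(vertices, edges, faces):
    """V - E + F: a topological invariant of a surface mesh."""
    return vertices - edges + faces

# A cube has 8 vertices, 12 edges, 6 faces. Its Euler characteristic
# is 2 -- the same as any mesh of the sphere, which is why topology
# treats the cube and the sphere as the same shape.
print(euler_characteristic(8, 12, 6))    # 2

# A 4x4 grid wrapped around a torus has 16 vertices, 32 edges,
# 16 faces. Its Euler characteristic is 0, distinguishing the
# torus from the sphere no matter how either is deformed.
print(euler_characteristic(16, 32, 16))  # 0
```

However finely you re-mesh either surface, these two numbers never change, which is the sense in which topology sees "larger patterns" while ignoring geometric detail.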

Does that mean geometry and topology are really two perspectives on the same thing?
Yes. It is like Chinese literature. A poem might describe a farewell between lovers. But in the language of the poem, instead of a man and woman, there is a willow tree, where the leaves are soft and hanging down. The way the branch is hanging down is like the feeling of the man and the woman wanting to be together. Geometry gives us a structure of that willow tree that is solid and extensive. Topology describes the overall shape of the tree without the details—but without the tree to start with, we would have nothing.

It has always amazed me to observe how different groups of people look at the same subject. My friends in physics look at space-time purely from the perspective of real physics, yet the general theory of relativity describes space-time in terms of geometry, because that’s how Einstein looked at the problem.


Tuesday, May 25, 2010

Sunday, May 23, 2010

Eco House Agent – Encouraging the Utilization of Eco-Friendly Homes

The Eco House Agent is an online resource providing information about implementing “eco-friendly” devices in homes. The main goal of Eco House Agent is to help people make their houses eco-friendly, reduce their use of carbon fuels, and become carbon neutral. While the vast majority of people perceive becoming carbon neutral as a lifestyle-altering commitment requiring a great deal of dedication, it is a process that, when done effectively, will not drastically reduce the convenience of daily life.

Eco House Agent provides simple tips for homeowners such as walking instead of driving to local shopping centers, turning off lights, washing clothes at low temperatures, taking showers instead of baths, and turning appliances off instead of on standby. Eco House Agent also suggests resources that can be installed and implemented in your house, including solar power, photovoltaics, wind power, rainwater harvesting, insulation, and going “off the grid”.

According to Eco House Agent, given the growing number of governmental incentives for reducing your carbon footprint, the time to implement these new strategies is now: “Soon we will be forced to reduce our Carbon footprint. The government is looking to introduce environmental policies to encourage people to be more ‘Carbon Neutral’. The Carbon Credit Scheme will attempt to reduce the amount of carbon households produce. A Carbon Credit will be given for units of energy. The government will reward those who use less Carbon, penalising less energy efficient households.”

The Eco House Agent teaches readers how to install new eco-friendly sources of energy, such as solar power, photovoltaics, and wind power; how to harvest rainwater; and how glazing, insulation, and damp treatment can benefit both your wallet and the environment.

The website also offers a forum where users can “post all your green thoughts on Solar Power, Photovoltaics, Insulation, Wind Power & Rainwater Harvesting and energy saving, carbon neutral house ideas, helping us all to reduce our carbon footprint and have eco friendly houses.”

The following topics are touched upon in detail on the Eco House Agent website:

Solar power energy: Explaining the importance and usefulness of harnessing light from the sun, Eco House Agent also talks about solar hot water heaters and how they are an ideal alternative to ordinary oil and gas hot water heaters. Also, solar power can be used to charge the batteries in laptops, cell phones, and iPods, as well as standalone rechargeable batteries.

Recycling: The recycling section explains the importance of recycling and the methods and situations in which it can come in handy and benefit both you and the environment. The Salvo recycling centre is mentioned as an excellent source of reusable materials such as doors, tiles, radiators, windows, timber, and furniture.

Finally, another notable section of Eco House Agent explains the benefits of having an “organic baby”. According to the website, “The decision to have children is arguably the most life-changing decision you may ever make, not only to yourself but also to the planet. You only have to look at some of the statistics associated with having children and the overpopulation of the planet to also make it one of the most guilt inducing decisions you’ve made.”


Wednesday, May 12, 2010

Alex Ross

Sunday, May 9, 2010

Anil Seth: identifying the root of consciousness

Anil Seth in Brighton, where he has helped set up the Sackler Centre for Consciousness Science. Photograph: Andy Hall for the Observer

Consciousness is the last outpost of pure mystery in our scientific understanding of the brain. We are learning ever more about the brain's physiology and how it controls our bodies, but the idea of where "we" exist, how we develop that sense of self and how it can be explained in terms of the activity of brain cells, all of that is still largely the domain of philosophers rather than scientists.

Anil Seth, co-director of the Sackler Centre for Consciousness Science at the University of Sussex, wants to turn that around. The recently opened institute will include neuroscientists, psychiatrists, roboticists, philosophers and a hypnotist. Using brain-scanners and computer algorithms, they will measure, model and characterise what consciousness might be at a physiological level. Seth and his co-director Hugo Critchley then want to take the findings into the clinic, using these ideas to investigate whether altered states of consciousness might explain (and help treat) psychiatric conditions.

Why have scientists been so reluctant to study consciousness until now?
A hundred years ago, consciousness was at the heart of psychology, and it was only excluded following the advent of behaviourism, which focused scientific efforts only on what could be observed objectively — behaviour, not experience. But now we recognise it's OK to take people's descriptions of their conscious experiences as proper scientific data.

The study of consciousness may also have been retarded by people worrying about what the philosopher David Chalmers called the "hard problem". This says, let's say we can understand everything about how the brain works, we know how you generate behaviour and perceptions... but we would still have no idea why there was anything like experience generated by this stuff. In other words, why is there consciousness in the universe at all?

Nowadays, more of us realise that we don't need to answer that "why?" question to make a lot of progress. Consciousness exists, we know when we're conscious and when we're not, and what we're conscious of. We can start to study those differences in the same way physicists have made progress without worrying about why there's a universe in the first place.

We know quite a lot about which brain mechanisms are necessary: you can get rid of quite large parts of the brain without seeming to affect consciousness. For example, you can lose large parts of the cerebellum and it doesn't seem to affect your conscious experience. But if you lose small parts of the brain, say parts of the thalamus, you lose consciousness forever.

Is consciousness something you can localise to parts of the brain or is it more likely that the senses network together to create it?
Consciousness, since it's generated by the brain, is not likely to be localisable to one region. It's likely to be a distributed process that's going to largely depend on the thalamocortical system, which is a big chunk of the brain but by no means all of it.

There is this idea that, to study something scientifically, you need to have a really explicit definition of it before you get going. But I don't think that's true. With consciousness, you can define it with various levels of specificity. You can distinguish between conscious level — the scale between being completely asleep or in a coma and being completely aware and awake, say — and conscious content, which would be the actual components of a given experience. So, if you were looking at a cup of tea, the cup of tea would be part of your conscious content. Things that are relevant to conscious level might not be relevant to conscious content. There's another important distinction between primary consciousness – the raw components of an experience – and what people call higher-order or reflexive consciousness, or even self-consciousness. This is the part of our experience that maps onto our concept of "I". There is an experiencing subject for all these experiences we're having.

There hasn't always been as much communication between psychiatry and neuroscience as one might have expected. That's changing now. One reason is that psychiatrists are increasingly interested in the possibility of finding biomarkers for psychiatric disorders. Right now, psychiatric disorders are classified on the basis of symptoms presented in the clinic. There is, in most cases, no other reliable way of making a psychiatric diagnosis. That difficulty maps to treatments as well, which are often based primarily on alleviating symptoms. By thinking of psychiatric disorders as disturbances of conscious experience, and trying to understand the mechanisms that might generate particular patterns you see, you have a new way to diagnose and treat them.

One example comes from schizophrenia, where one of the symptoms is this misattribution of thoughts and actions, so that the person thinks they are being controlled by something else – by the TV or aliens. One possible explanation for that is, our normal experience of thinking and behaving is unproblematic because we can predict the sensory consequences of our own actions. A thought is just like an action that stays in the brain, so if we can predict what's going to happen when we have a thought or perform an action, then we know that they're not caused by anything else.

But if our predictions are awry, possibly because our internal timing mechanisms are screwed up, we might not be able to predict the consequences of our own actions so the brain is then forced to find some other cause for these things that are happening.

So it's possible that underlying some of the symptoms seen in schizophrenia, there might be a disorder of making fine time judgments or predictions.

One phenomenon we're studying is depersonalisation, a fascinating condition where the world or the self loses its subjective reality. There's evidence that those brain areas responsible for integrating external perceptions with internal ones are less active in people with depersonalisation. We want to extend this work into clinical contexts such as the early stages of schizophrenia.

In terms of how the world works, ontologically, consciousness must be part of the natural world. Otherwise something dualistic is going on: there's something about consciousness that's different from the rest of the universe, something that is not part of the natural world. Consciousness is dependent on the laws of physics, chemistry and biology; we may not know all of those laws yet, but we're not going to need anything else.

The right level at which to explain the phenomenon is a different question. I'm less confident that the right level to explain how brains generate consciousness is going to be at the level of this neurotransmitter or this molecule or something like that. It may turn out that the best explanation comes at a higher level.


Tuesday, April 27, 2010

Spacecraft Spots Active Volcanoes on Venus

Venus is alive.

Researchers using data from the European Space Agency’s Venus Express spacecraft said they spotted three active volcanoes that recently poured red-hot lava onto the planet’s already broiling surface.

The discovery, announced in a paper published Friday online in Science, suggests that Venus — like the Earth — is periodically resurfaced by lava flows, explaining why it seems devoid of craters.

“We estimate the flows to be younger than 2.5 million years, and probably much younger, likely 250,000 years or less, indicating that Venus is actively resurfacing,” the authors write. They were led by Suzanne E. Smrekar of the Jet Propulsion Laboratory in Pasadena.

Venus is only slightly smaller than Earth, but it seems to have evolved rather differently. It is swaddled in dense clouds of carbon dioxide. The pressure at the surface is 93 times the atmospheric pressure on Earth, and the temperature is almost 900 degrees Fahrenheit — enough to melt lead.

The planet shows no sign of plate tectonics, the continental shifts and rumbles that keep the Earth’s surface fresh and erase impact craters, but satellite mappers have detected nine so-called hot spots resembling the Hawaiian islands, which are higher and hotter than the surrounding Venusian plains.

The Venus Express examined three of these smoldering humps with its Visible and Infrared Thermal Imaging Spectrometer, or Virtis, which can see through the thick clouds on Venus and measure the brightness of surface rocks.

The humps range from about half a mile to a mile high. Rocks in these regions, known as Imdr Regio, Dione Regio and Themis Regio, were anomalously bright compared with their surroundings, suggesting that they were relatively young and unweathered by the corrosive Venusian environment.




Magnifying the Quantum World

In the 1870s, when Max Planck was still a young German university student, his professor Philipp von Jolly discouraged him from continuing to pursue physics, reportedly saying that nothing was left to discover in the field except for a few minor details.

Undaunted, Planck became a professor of physics at the University of Berlin, and by 1900 had developed a theory that would turn physics upside-down: Electromagnetic energy could only be emitted in discrete packets, or “quanta.” The field of quantum mechanics was born, and its ramifications continue to echo through physics today. Indeed, modern quantum researchers aren’t just filling in minor details; they’re still expanding, in leaps and bounds, our knowledge of how the world fundamentally works.

Planck’s breakthrough came out of his studies of “black bodies,” idealized objects that perfectly absorb and then re-emit electromagnetic radiation. In reality, nothing can absorb light so perfectly, but many real-world objects, like a hunk of iron, absorb and emit electromagnetic radiation much as a black body does. As an iron ingot is heated, it begins to emit electromagnetic radiation, energy that travels on a spectrum of frequencies. When it’s quite hot, the ingot turns red, and as its temperature rises further, the ingot will progressively turn orange, then yellow, then white. These are only the frequencies we can see; the ingot, of course, is emitting invisible electromagnetic radiation too, in frequencies like infrared.

Planck studied this “black-body spectrum,” and precisely measured how changing temperature affected the radiation a black body emitted. In his work, he came to realize that the emitted radiation didn’t smoothly increase with temperature, but in fact changed in sudden steps.

Planck never quite understood the implications of his discovery, but Einstein and other physicists soon began to see its reach. Their conclusion: Everything in the universe—energy, light, particles, and all the macroscopic objects they form and influence—is somehow quantized, and subject to strange probabilistic behavior that defies classical explanations. In the quantum world, objects can be in multiple places at the same time, can simultaneously harbor mutually exclusive states, and can pop in and out of existence spontaneously. Even Richard Feynman, the Nobel Prize-winning physicist who arguably had a better grasp of quantum mechanics than anyone else in the 20th century, quipped that no one really understood it.
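The shifting color of the heated ingot falls out of Planck's law directly. As a quick numerical sketch (Planck's law itself is standard physics, but the constants, wavelength grid, and example temperatures here are my own illustrative choices), the spectrum's peak moves to shorter wavelengths as temperature climbs:

```python
import numpy as np

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of an ideal black body (Planck's law)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / np.expm1(H * C / (wavelength_m * K * temp_k))

def peak_wavelength_nm(temp_k):
    """Find the peak of the spectrum on a fine wavelength grid."""
    wl = np.linspace(100e-9, 5000e-9, 200_000)
    return wl[np.argmax(planck_radiance(wl, temp_k))] * 1e9

# Hotter bodies peak at shorter wavelengths (Wien's displacement
# law, peak ~ 2.898e-3 / T): a 3000 K ingot peaks in the infrared
# near visible red, a 6000 K surface peaks in visible light.
for t in (1000, 3000, 6000):
    print(f"{t} K -> peak near {peak_wavelength_nm(t):.0f} nm")
```

The peaks land where Wien's displacement law predicts, which is why an ingot glows red before orange, and why the ~6000 K sun peaks in the middle of the visible band.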

Quantum phenomena are most dramatic in extremes that humans can’t tolerate or perceive, like near-absolute-zero temperatures, or in hard vacuum, or at the scale of atomic nuclei. But this doesn’t mean quantum principles don’t apply to larger objects. Chad Orzel, a physics professor at Union College, blogs about a March study that may document the first observation of a certain type of quantum behavior in an object visible to the naked eye.

“Visible” may be a stretch. The object in question is a fork-shaped device fabricated from aluminum nitride and sandwiched between sheets of aluminum. It’s about 40 microns long, or roughly the width of a human hair. You wouldn’t be able to see it at all if you looked at it from the side, because it’s just one micron thick. Still, since quantum behavior is typically observed at the scale of a single atom or subatomic particle, this research represents an astonishing leap: The device is composed of about 10 trillion atoms.

Orzel says the researchers, led by Aaron O’Connell, cooled the object to its “quantum ground state,” at 0.025 degrees above absolute zero. At warmer temperatures, the device has some resonance, meaning it mechanically oscillates back and forth a bit like a tuning fork. At this experiment’s chilly temperatures, the device still oscillates, but only due to unavoidable quantum effects that cannot be subtracted. Hence, it resides in its quantum ground state. Now, classical physics would say that as the device’s temperature gradually increases, its resonance should smoothly increase, too. O’Connell’s team was able to show instead that the object resonated in discrete intervals. Its resonance was, in other words, quantized.
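A back-of-the-envelope calculation, offered here as my own sketch rather than anything from the paper, shows why such extreme cooling matters: a mechanical mode sits in its quantum ground state only when its energy quantum hf is much larger than the thermal energy kT, so at 0.025 kelvin only oscillators well above roughly half a gigahertz can be frozen out this way.

```python
H = 6.626e-34   # Planck constant, J*s
K = 1.381e-23   # Boltzmann constant, J/K
T = 0.025       # kelvin, the temperature quoted above

# The frequency at which one quantum of oscillation (h*f) equals
# the thermal energy (k*T). Modes far above this frequency are
# effectively frozen into their ground state at this temperature.
thermal_crossover_hz = K * T / H
print(f"{thermal_crossover_hz / 1e9:.2f} GHz")  # ~0.52 GHz
```

A microwave-frequency resonator in the gigahertz range (an assumption about the device, consistent with the qubit coupling described below) clears this threshold comfortably, while an audio-frequency tuning fork never could at any achievable temperature.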

The resonance itself is not visible; instead this had to be measured indirectly by coupling the resonator to another device, a loop of wire the team fabricated to act as a “qubit,” a register for a single unit of quantum information. This arrangement allowed the researchers to induce resonance in two different ways. They could energize the qubit, which in turn caused the resonator to oscillate, or they could apply microwave radiation to the resonator, and observe the oscillatory response in the qubit. In each case, the resonance occurred only at frequencies predicted exactly by quantum theory.

Greg Fish, a science writer and computer science graduate student, says that this research could also lead to innovations in quantum computing. Because the state of a quantum resonator can’t be directly detected, we run into a classic conundrum: If an object can be in one of two states, and if we don’t know its state, then it can be said to be in both possible states at the same time. For a computer, this ambiguity can lead to tremendous number-crunching power, because it means that instead of a binary 1 or 0 as in a classical computing “bit,” a qubit can simultaneously embody multiple probabilistic states. And since there are many more than just two possible probabilities, a quantum computer could theoretically process much more information in a given interval of time than a classical computer could. The O’Connell team’s project, therefore, can be seen as a test run for a quantum computer integrating many resonators like the one in their device. In theory, Fish says, a quantum computer can be as much as 50,000 times faster than a modern supercomputer at solving some types of mathematical problems.
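The "both states at once" idea above can be made concrete with a little arithmetic. As a hedged sketch (the textbook description of a qubit as a pair of probability amplitudes, not anything specific to the O'Connell device):

```python
import numpy as np

# A qubit is described by two complex amplitudes; the squared
# magnitudes give the probabilities of measuring 0 or 1.
qubit = np.array([1.0, 1.0]) / np.sqrt(2)  # equal superposition
probs = np.abs(qubit) ** 2
print(probs)  # equal chance of 0 and 1; probabilities sum to 1

# The state of n qubits needs 2**n amplitudes to describe
# classically, which is where the "number-crunching power" comes
# from: the state space grows exponentially with each added qubit.
for n in (1, 10, 30):
    print(n, "qubits ->", 2 ** n, "amplitudes")
```

Thirty qubits already require over a billion amplitudes to track classically, which is the exponential headroom the article's comparison to supercomputers is gesturing at.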

In just over a century, quantum theory has moved from being an abstract curiosity to a powerful driver of technological development, with implications not just for theoretical physicists but for nearly every branch of science. As new developments are unveiled, watch for further discussion and analysis.


Sunday, April 25, 2010

Ice Fishing For Neutrinos From the Middle of the Galaxy

About 25 million years ago, Earth parted in the southeast corner of Siberia. Since then, countless rivers have converged on the gaping continental rift, creating the vast body of water known as Lake Baikal. Surrounded by mountains, this 400-mile-long inland sea has remained isolated from other lakes and oceans, leading to the evolution of unusual flora and fauna, more than three-quarters of which are found nowhere else on the planet. Russians regard it as their own Galápagos. The lake contains 20 percent of the world’s unfrozen freshwater—or just a little less during the severe Siberian winter when, despite its enormous size and depth, Baikal freezes over.

On one such winter’s day, I found myself on the lake near the town of Listvyanka, which is nestled in a crook of the shoreline. I was in an old van that was trying to head west, not along a coastal road—for there was none—but over the ice. The path, however, was blocked by a ridge. It looked like a tectonic fault: Two sections of the lake’s solid surface had slammed together and splintered, throwing up jagged chunks of ice. The driver, a Russian with a weather-beaten face, peered from underneath his peaked cap, looking for a break in the ridge. When he spied a few feet of smooth ice, he got out and prodded it with a metal rod, only to shake his head as it crumbled: not thick enough to support the van. We kept driving south, farther and farther from shore, in what I was convinced was the wrong direction. The van shuddered and lurched, its tires crunching on patches of fresh snow and occasionally slithering on ice. The ridge continued as far as the eye could see. Suddenly we stopped. In front of us was a dangerous-looking expanse littered with enormous pieces of ice that rose from the lake’s frozen surface like giant shards of broken glass.

The driver seemed to be contemplating going around them to look for thick ice that would let us reach our destination, an underwater observatory operating in one of the deepest parts of the lake. But if he did that, we’d get even farther from the shore, and it would take just one punctured tire to strand us. The sun was little more than an hour from setting, and the temperature was falling. I couldn’t ask the driver if he had a radio or a phone to call for help, since he did not speak a word of English and the only Russian phrase I knew was do svidaniya. The last thing I wanted to say to him at this point was “Good-bye.”

Thankfully, he decided to turn around. We drove along until we came upon vehicle tracks that went over some ice covering the ridge. The driver swung the van westward and cleared the ridge, and soon we were racing across the lake at a speed that turned every frozen lump into a speed bump. The van’s front rose and fell sickeningly, rattling the tools strewn around on the front seat. I worried that the ice would give way and we would plunge into the frigid waters below. But it remained solid, and the van, despite its appearance, was in fine mechanical fettle, its shock absorbers holding firm. In the distance I spied a dark spot on the otherwise white expanse. As we approached, the spot grew to its full size, revealing itself as a three-foot-high Christmas tree. We still had 20 miles to cover, and the sun would soon disappear below the icy horizon. But now that we had found the Christmas tree, I knew we were fine.

I had first seen the tree two days earlier, with Nikolai (Kolja) Budnev, a physicist from Irkutsk State University, and Bertram Heinze, a German geologist. We were headed to the site of the Lake Baikal neutrino observatory, which lay deep beneath the ice. We had just driven onto the lake from the shore near Listvyanka when Heinze asked, “When does the ice start breaking?”

“Sometime in early March,” Budnev answered. My heart skipped a beat. It was already late March, and we were on the ice in an old, olive-green military jeep. “Sorry, sometime in early April,” Budnev corrected himself. Phew.

For more than two decades now, Russian and German physicists have camped on the frozen surface of Lake Baikal from February to April, installing and maintaining instruments to search for the elusive subatomic particles called neutrinos. Artificial eyes deep below the surface of the lake look for dim flashes of blue light caused by a rare collision between a neutrino and a molecule of water. I was told that human eyes would be able to see these flashes too—if our eyes were the size of watermelons. Indeed, each artificial eye is more than a foot in diameter, and the Baikal neutrino telescope, the first instrument of its kind in the world, has 228 eyes patiently watching for these messengers from outer space.

The telescope, which is located a few miles offshore, operates underwater all year round. Cables run from it to a shore station where data are collected and analyzed. It is a project on a shoestring budget. Without the luxury of expensive ships and remote-controlled submersibles, scientists wait for the winter ice to provide a stable platform for their cranes and winches. Each year they set up an ice camp, haul the telescope up from a depth of 0.7 mile, carry out routine maintenance, and lower it back into the water. And each year they race against time to complete their work before the sprigs of spring begin to brush away the Siberian winter and the lake’s frozen surface starts to crack.

What is it about the neutrino that makes scientists brave such conditions? Neutrinos—some of them dating back to right after the Big Bang—go through matter, traveling unscathed from the time they are created and carrying information in a way no other particle can. The universe is opaque to ultraenergetic photons, or gamma rays, which are absorbed by the matter and radiation that lie between their source and Earth. But neutrinos, produced by the same astrophysical processes that generate high-energy photons, barely interact with anything along the way. For instance, neutrinos stream out from the center of the sun as soon as they are produced, whereas a photon needs thousands of years to work its way out from the core to the sun’s brilliant surface.
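The contrast drawn here can be checked with a back-of-the-envelope random walk: a photon that scatters every centimeter or so inside the sun travels a total path of roughly R²/l before escaping, while a neutrino exits in a straight line. A minimal sketch, assuming a uniform mean free path of 1 cm (the real value varies greatly with depth, so this is only an order-of-magnitude illustration):

```python
# Back-of-the-envelope estimate of a photon's random-walk escape time
# from the sun's core, versus a neutrino's straight-line exit.
R_SUN = 6.96e8            # solar radius, meters
MFP = 0.01                # assumed photon mean free path, meters (~1 cm)
C = 3.0e8                 # speed of light, m/s
SECONDS_PER_YEAR = 3.156e7

def photon_escape_years(radius=R_SUN, mfp=MFP):
    """A random walk of step l covers net distance R after (R/l)**2 steps,
    so the total path traveled is R**2 / l, traversed at speed c."""
    return radius**2 / mfp / C / SECONDS_PER_YEAR

def neutrino_escape_seconds(radius=R_SUN):
    """A neutrino barely interacts, so it streams out at (nearly) c."""
    return radius / C

print(f"photon: ~{photon_escape_years():,.0f} years")
print(f"neutrino: ~{neutrino_escape_seconds():.1f} seconds")
```

With these assumed numbers the photon takes on the order of five thousand years, consistent with the "thousands of years" quoted above, while the neutrino is out in about two seconds.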

Neutrinos therefore represent a unique window into an otherwise invisible universe, even offering clues about the missing mass called dark matter, whose presence can be inferred only by its gravitational influence on stars and galaxies. Theory suggests that over time the gravity wells created by Earth, the sun, and the Milky Way would have sucked in an enormous number of dark-matter particles. Wherever they gather in great concentrations, these particles should collide with one another, spewing out (among other things) neutrinos. It is as if a giant particle accelerator at our galaxy’s center were smashing dark-matter particles together, generating neutrinos and beaming them outward, some toward us...



Thursday, February 4, 2010

Intuition and Order in Xenakis’s Orient-Occident

In the late 1950s and early '60s, Xenakis worked with the GRM (Groupe de recherches musicales) in Paris to produce several pieces for magnetic tape, including Orient-Occident. The piece is often overshadowed by works like Concrete PH and Bohor, which are rightly considered more groundbreaking entries in Xenakis's early oeuvre (particularly Bohor, with its unique source material, rather violent dynamics, and rich sound palette). However, Orient-Occident is a firm testament to Xenakis's visceral immediacy and a clear example of his early experiments in connecting micro- and macrocomposition. For Xenakis it was important that materials, method, and form be inextricably linked. What is most interesting about Orient-Occident is that, unlike in many of his other works, this connection seems to have been forged primarily through intuition rather than through rigorous systems of control.
In 1960, Xenakis was commissioned to write the soundtrack to Orient-Occident, a film by Enrico Fulchignoni. The film depicted civilizations from around the world, forming a historical perspective without any explicit narrative thread. The soundtrack was approximately 22 minutes long. To what extent the sounds complemented the images, or how the form may have been influenced by the visuals, is hard to surmise, but we do know that the director gave Xenakis very little guidance on the music (Harley, 2002). In 1962, Xenakis trimmed the soundtrack down to a little less than 11 minutes to form a concert piece. Exactly what he cut and how he shaped the material into an independent work is undocumented (the date of revision varies between 1962 and 1968, depending on the source).

Of the pieces he was working on at the time of the commission, Concrete PH, Analogique B, and the acoustic piece Pithoprakta best demonstrate Xenakis’s interest in using what Agostino Di Scipio terms “control structures” to connect timbre with overall form (1998). Xenakis’s preferred method for this control was stochastics, which is derived from the calculus of probabilities. Significantly, Xenakis would use recordings of both Concrete PH and Pithoprakta as source material in Orient-Occident.

Concrete PH, written in 1958 as a counterpart to Varèse's Poème électronique and premiered alongside that piece in the Philips Pavilion at the World's Fair in Brussels, is primarily a study in density and the emergent properties of the resulting sound. The piece has a single sound source: the crackling of burning charcoal, a tape recording of which was cut into hundreds of minuscule pieces and arranged in various densities according to theories of probability. Xenakis called perceivable properties of sound created by means of control over individual elements "second-order sonorities" (1971: 47). The spectrum of small moments in time resembles the larger formal structures, a relationship akin to that of fractals (Di Scipio, 1997, 1998).
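The procedure described here, arranging hundreds of tape splices according to a time-varying density, can be sketched as a nonhomogeneous Poisson process. The density curve and all numbers below are invented for illustration; they are not Xenakis's actual values:

```python
import random

def splice_onsets(duration, density_curve, seed=0):
    """Sample onset times (in seconds) for tape splices from a Poisson
    process whose rate (splices per second) follows density_curve(t)."""
    rng = random.Random(seed)
    # Peak rate for the thinning method, found by coarse sampling.
    peak = max(density_curve(duration * x / 100) for x in range(101))
    onsets, t = [], 0.0
    while True:
        t += rng.expovariate(peak)          # candidate events at the peak rate
        if t >= duration:
            break
        if rng.random() < density_curve(t) / peak:  # keep with prob rate/peak
            onsets.append(t)
    return onsets

# An illustrative density arc: sparse crackle building to a dense midpoint,
# then thinning out again over a two-minute span (2 to 40 splices/second).
def arc(t):
    return 2 + 38 * min(t / 60, (120 - t) / 60)

events = splice_onsets(120, arc)
```

The thinning step (keep a candidate with probability rate/peak) is a standard way to sample a variable-rate Poisson process; the resulting onset list could then drive splice placement in any editing environment.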

Analogique B, a tape piece written in 1959 and later combined with the acoustic work Analogique A to make Analogique A + B, was Xenakis's first attempt at granular synthesis, based on Dennis Gabor's idea that all sounds can be represented as micro-temporal pieces of sound called "grains." Xenakis used "books" containing many "screens," each of which depicted a moment in time and specified the density, frequency, and amplitude of the grains at that moment (1971). These parameters, as well as the arrangement of screens and books, were determined stochastically.
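The screen-and-book scheme lends itself to a compact sketch: each screen fixes a grain density and ranges for frequency and amplitude, and the sequence of screens is drawn stochastically. Everything below (the two screens, the transition weights, the slice duration) is invented for illustration, not taken from Xenakis's tables:

```python
import random

# A "screen" fixes grain density plus frequency/amplitude ranges for one
# slice of time; a "book" is a stochastic sequence of screens.
SCREENS = {
    "sparse": dict(density=8,  freq=(200, 800),  amp=(0.1, 0.4)),
    "dense":  dict(density=60, freq=(500, 4000), amp=(0.3, 0.9)),
}
# Markov-style transition weights between screens (illustrative values).
TRANSITIONS = {"sparse": [("sparse", 0.6), ("dense", 0.4)],
               "dense":  [("sparse", 0.3), ("dense", 0.7)]}

def render_book(n_screens, slice_dur=0.5, seed=1):
    """Return a list of grains (onset_sec, freq_hz, amp) across n_screens."""
    rng = random.Random(seed)
    state, grains = "sparse", []
    for i in range(n_screens):
        s = SCREENS[state]
        for _ in range(s["density"]):
            grains.append((i * slice_dur + rng.random() * slice_dur,
                           rng.uniform(*s["freq"]),
                           rng.uniform(*s["amp"])))
        names, weights = zip(*TRANSITIONS[state])
        state = rng.choices(names, weights)[0]
    return grains

grains = render_book(16)
```

Each grain tuple could be rendered as a brief enveloped sine tone; the audible density contrast between the two screen types is what the analysis below calls a granular texture.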

The ensemble piece Pithoprakta contains a passage of innumerable overlapping pizzicati and glissandi which, owing to the complex nature of the sound, create a second-order sonority. This section was modeled on the Brownian motion of gas molecules, a stochastic process. Many, if not all, of the percussive elements in Orient-Occident are derived from a recording of this piece (Harley, 2002: 38).
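A glissando texture modeled on Brownian motion can be sketched as a set of independent random walks in pitch, one per string part. The part count, register, and step size below are illustrative assumptions, not values taken from the score:

```python
import random

def glissando_cloud(n_parts=46, n_steps=40, step_cents=25, seed=2):
    """Each part's pitch follows an independent random walk (Brownian-style
    motion), starting from a pitch drawn across an assumed string register."""
    rng = random.Random(seed)
    cloud = []
    for _ in range(n_parts):
        pitch = rng.uniform(36, 84)   # starting MIDI pitch, C2 to C6
        path = [pitch]
        for _ in range(n_steps):
            pitch += rng.gauss(0, step_cents / 100)  # step in semitones
            path.append(pitch)
        cloud.append(path)
    return cloud

cloud = glissando_cloud()
```

Plotting each path as pitch against time would give the familiar image of Xenakis's graphic sketches for the passage: many jittering lines whose aggregate, not any single part, is what the ear perceives.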

The concert version of Orient-Occident lasts slightly less than eleven minutes and features many blocks of diverse sounds both overlapped and more abruptly juxtaposed, as shown in this spectrogram of the piece.

After a brief introduction of rather focused spectral energy in the low/mid range, Xenakis begins building to the first climax of the piece, introducing wide-spectrum sound sources that effectively open up the sound. Points of arrival are created by quick shifts in color, most notably following the climactic sections. Around the 1:32 mark, bright bowed tam-tam snippets usher in noisy percussive material covering the entire frequency axis (ca. 1:35). I have described this section, with its hammered metal sounds, as "industrial."

What follows are low rumblings and short sounds of bowed metal. There are also metal pipes, which produce a more focused spectrum, often perceived as quasi-pitched. Both types of bowed metal objects are heavily treated with reverberation. The bowed metal tones are spliced into very small segments and arranged densely (ca. 4:35).

This can be seen as a transition building into the next and main climax of the piece, which features percussive material from Pithoprakta, water droplets, and snippets from Concrete PH, which supplies bright, full-spectrum sounds. The percussive material has been electronically slowed down, creating a very different effect from that of its original context. Here it is used to create metered rhythms or "perceptible patterns rather than statistical 'clouds'" (Harley, 2002). The bird-like sounds and sliding sine-tone sounds heard in the background (e.g., 9:04 and 9:15) are demodulated radio frequencies (very high radio frequencies brought down to the audible range), subjected to reverberation and articulated by glissandi (via changing tape speed). These sounds are relatively static (no melodic activity) and are almost always used in combination with concrete sources. A denouement with quiet beeps reminiscent of sonar leads into a coda featuring a lovely, diffuse sound palette.

Although much more loosely constructed than many of Xenakis's other works, and seemingly assembled in an intuitive, collage-like manner, Orient-Occident relies heavily on the cohesiveness created by its constituent elements. Repetition at various time scales is a main structural tool. Some repetition is nearly literal, as in various quasi-pitched elements (bowed metal) and certain percussive material in both the "industrial" and "drumming" sections. Some is more subtle, as in the use of bowed cardboard near the beginning of the piece (1:12) and again later (7:46). One section, from ca. 1:11 to 1:20, forms a sort of microcosm of the entire work: within it the listener hears bowed tam-tam, bowed metal, bowed cardboard, and percussive material, all of which will be more fully developed later in the piece. This passage can be seen as standing in a metaphorical fractal relationship with the rest of the work. The different types of repetition provide points of reference, or at least a sense of familiarity.

In the “drumming” section, the primary percussive attack is immediately repeated. Each time this two-attack phrase is repeated, it is also lengthened (and expanded in the stereo field). The drumming section itself recalls the other rhythmic (industrial) section from earlier in the piece. The “sonar” pings near the end of the piece (ca. 9:07) clearly recall the similar two-attack phrases from the drumming section.

Many of the bowed metal sounds in Orient-Occident are spectrally focused enough to be perceived as pitched material (Figure 2). The notes labeled D#4 and D4 in the graph are related both in the proximity of their frequencies and in their use as transitions into denser structures. Additionally, the pitch very near B5 heard early in the piece (which acts as a clear break between thicker textures) is later recalled by the oscillating pitches labeled A5/A#5 and B5/C6. These pitches provide a familiar, clear, and focused sound to balance the more chaotic sections of the piece.

The most obvious of these texture-based passages is the one containing the quotes from Concrete PH. These samples are combined with other concrete sources to create a very dense section that verges on becoming a second-order sonority. It can be argued that both this section and the "industrial" percussive section offer a sort of slowed-down second-order sonority, almost a microscopic view of a granular texture. Another example of slowed-down grains is found in the "sonar" beeps, which are not dissimilar to the electronic sounds of Analogique B. Di Scipio discusses the merits of this idea in his analysis of Analogique A + B (2005). As in the present work, the successive "screens" or grains in Analogique A + B do not sound close enough in time to truly create a unique emergent sound, but the process is evident. Whether these passages were meant to be interpreted this way is debatable, but it is certainly worth considering in light of how much thought Xenakis was devoting to granularity at the time.

Perhaps the most "successful" example of a second-order sonority comes at the end of the work ("successful" in the sense that the second-order sonority is more readily appreciable than its constituent sounds). The "coda," from ca. 10:00 until the end of the work, is a beautiful, hazy, texture-based sound. No one sound dominates the overall sonority, and there are very few clearly articulated sounds in this section, owing in part to the liberal use of reverberation. There is also a noticeable lack of pitched material. Regardless of the methods used (which seem to lack any rigorous control), this section is the most convincing second-order sonority in the piece.

Throughout Orient-Occident, Xenakis showcases the various layering and collage techniques he was developing at the GRM. Though the piece was originally written as a soundtrack, its revised concert form truly stands on its own formal merits (whether controlled or intuitive). Orient-Occident is a fantastic example of Xenakis's early forays into stochastics and granularity, concerns that would inform many of his later works (as well as the work of many other 20th-century composers). Although some of the texture-based passages are not explicit examples of second-order sonorities, the concept of texture-dominated sonorities is vividly present in many sections of the piece. Xenakis was also exploring transitions between sound-types, and a similar focus on "change of state" figures heavily in many later works, both acoustic and electronic (see Charisma for an example of a work based on the juxtaposition of sonority-blocks). Ultimately, the balance between sound and structure in this piece echoes the duality at the heart of all of Xenakis's music: raw and visceral while at the same time structured and controlled.

Harley, James. 2002. "The Electroacoustic Music of Iannis Xenakis." Computer Music Journal 26, no. 1: 33–57.

Di Scipio, Agostino. 1998. "Compositional Models in Xenakis's Electroacoustic Music." Perspectives of New Music 36, no. 2: 201–243.

Di Scipio, Agostino. 1997. "The Problem of 2nd-Order Sonorities in Xenakis's Electroacoustic Music." Organised Sound 2, no. 3: 165–178.

Di Scipio, Agostino. 2005. "Formalization and Intuition in Analogique A et B (with Some Remarks on the Historical-Mathematical Sources of Xenakis)." In Proceedings of the International Symposium Iannis Xenakis, edited by A. Georgaki and M. Solomos, 95–108. Athens.

Xenakis, Iannis. 1971. Formalized Music: Thought and Mathematics in Composition. Stuyvesant, NY: Pendragon Press.

Manning, Peter. 1993. Electronic and Computer Music, 2nd ed. Oxford: Oxford University Press.

Software utilized in analysis:
Audacity (ver. 1.2.6).

Sonic Visualiser (ver. 1.5). Developed at the Centre for Digital Music, Queen Mary, University of London.


Monday, January 25, 2010

Defining an Algorithm for Inventing from Nature

"Time after time we have rushed back to nature's cupboard for cures to illnesses," noted the United Nations in declaring 2010 the International Year of Biodiversity. Billions of years of evolution have equipped natural organisms with an incredible diversity of genetically encoded wealth, which, given our own biological nature, holds great potential for understanding our physiology and advancing our medicine. Natural products such as penicillin and aspirin are used daily to treat disease, yeast and corn yield biofuels, and viruses can deliver therapeutic genes into the body. Some of the most powerful tools for understanding biology, such as the polymerase chain reaction (PCR), which enables DNA to be amplified and analyzed starting from tiny samples, or the green fluorescent protein (GFP), which glows green and thus enables proteins and processes to be visualized in living cells, are bioengineering applications of genes that occur in specialized organisms in specific ecological niches. But how exactly do these tools make it from the wild to the benchtop or bedside?

Many bioengineering applications of natural products take place long after the basic science discovery of the product itself. For example, Osamu Shimomura, who first isolated GFP from jellyfish in the 1960s, and who won a share of the 2008 Nobel Prize in Chemistry, once explained: "I don't do my research for application, or any benefit. I just do my research to understand why jellyfish luminesce." Around 30 years later, Douglas Prasher, Martin Chalfie, and Roger Tsien and their colleagues isolated the gene for GFP, expressed it, and began altering the gene, enabling countless new kinds of study. Bioengineering can emerge from the conscious exploration of nature, although sometimes with long latency. Every gene product is a potential tool for perturbing or observing a biological process, as long as bioengineers proactively imagine and explore the significance of each finding in order to convert natural products into tools.

Conversely, many bioengineering needs are probably satisfied, at least in part, by a process found somewhere in nature--whether it's making magnetic nanoparticles, or sensing heat, or synthesizing structural polymers, or implementing complex computations. The question in basic science often boils down to how generally important a process is across ecological diversity, but a bioengineer only needs one example of something to begin copying, utilizing, and modifying it.

If we can build more direct connections between bioengineering and the fields of ecology and basic organismal sciences--converging at a place you might call "econeering"--we could together meet urgent bioengineering needs more quickly, and direct resources toward basic science discovery. Scientists could deploy these basic science discoveries more rapidly for human bioengineering benefit.

Recently we've begun to examine some of the emerging principles of econeering, as we and others pioneer a new area--the use of natural reagents to mediate control of biological processes using light, sometimes called "optogenetics."

As an example: Opsins are light-sensitive proteins that can, among other things, naturally alter the voltage of cells when they're illuminated with light. They're almost like tiny, genetically encoded solar cells. Many opsins are found in organisms that live in extreme environments, like salty ponds. The opsins help these organisms sense light and convert it into biologically useful forms of energy, an evolutionarily early sort of photosynthesis.

Plant biologists, bacteriologists, protein biochemists, and other scientists have widely studied opsins at the basic science level since the 1970s. Their goal has been to find out how these compact light-powered machines work. It was clear to one of us (Boyden) around a decade ago that opsins could, if genetically expressed in cells that signal via electricity (such as neurons or heart cells), be used to alter the electrical activity of those cells in response to pulses of light.

Such tools could thus be a huge benefit to neuroscience. They could enable scientists to assess the causal role of a specific cell type or neural activity pattern in a behavior or pathology, and make it easier to study how other excitable cells, such as heart, immune, and muscle cells, play roles in organ and organism function. Furthermore, given the emerging importance of neuromodulation therapy tools, such as deep brain stimulation (DBS), opsins could enable novel therapies for correcting aberrant activity in the nervous system.

What might be called the "example phase" of this econeering field began about 10 years ago, when several papers suggested that these molecules might be used safely and efficaciously in mammalian cells. For example, foundational papers in 1999 (by Okuno and colleagues) and 2003 (by Nagel and colleagues) revealed and characterized opsins from archaebacteria and algae with properties appropriate for expression and operation in electrically excitable mammalian cells. Even within these papers, basic science examples began to lead directly to bioengineering insights, demonstrating in the case of the Nagel paper that an opsin could be expressed and successfully operate in a mammalian cell line. In 2005 and 2007, we and our colleagues, in a collaboration between basic scientists and bioengineers, showed that these molecules, when genetically expressed in neurons, could be used to mediate light-driven activation of neurons and light-driven quieting of neurons. In the few years since, these tools have found use in activities ranging from accelerating drug screening, to investigating how neural circuits implement sensation, movement, cognition, and emotion, to analyzing the pathological circuitry of, and development of novel therapies for, neural disorders.

Now this econeering quest is entering what could be called the "classification phase," as we acquire enough data to predict which ecological resources will yield tools optimal for specific bioengineering goals. For example, in a paper from our research group published in Nature on January 7, 2010, we screened natural opsins from species in every kingdom of life except animals. With enough examples in hand, distinct classes of opsins emerged, each with different functional properties.

We found that opsins from species of fungi were more easily driven by blue light than opsins from species of archaebacteria, which were more easily driven by yellow or red light. Together, the two classes enable two sets of neurons to be perturbed by two different colors of light. This finding not only enables very powerful separate perturbation of two intermeshed neural populations--important for determining how they work together--but also raises the possibility of altering activity in two different cell types, opening up new clinical possibilities for correcting aberrant brain activity. Building on data from, and conversations with, many basic scientists, we then began mutating these genes to explore the classes more thoroughly, creating artificial opsins to help us identify the boundary between the classes. Understanding these boundaries not only gave us clarity about the space of bioengineering possibility but also told us where to look further in nature if we wanted to augment a specific bioengineering property.

In the current model of econeering, the "example phase" and the "classification phase" both provide opportunities for productive interactions between bioengineers and ecologists or organismal scientists. During the example phase described above, both basic scientists and bioengineers tested out candidate reagents to see what was useful, and later many groups initiated hunts for new examples. During the classification phase, more systematic synthetic biology and genomic strategies enabled more thorough assessment of the properties of classes of reagents.

Interestingly, something similar has been happening recently with GFP, as classes of fluorescent protein emerge with distinct properties: it has long been known that mutating the original jellyfish GFP can yield blue and yellow fluorescent proteins, but not red ones. A decade ago, an example of a red fluorescent protein from coral was revealed; through bioengineering, this example has since yielded a new class of fluorescent molecules with colors such as tomato and plum. So it is possible that the cycle described here--find an example, define a class, repeat--might represent a generally useful econeering process, one of luck optimization intermeshed with scientific and engineering skill.

Did the opsin community do "better" than the fluorescent protein community, in speeding up the conversion of basic science insight into bioengineering application? Well, one of the opsins that we screened in this month's paper was first characterized in the early 1970s, and it was better at changing the voltage of a mammalian cell than perhaps half of the other opsins we screened. So one could argue that a decent candidate reagent had hidden in plain sight for almost 40 years!

Although these two specific fields have benefited from basic scientists and bioengineers working together, a more general way to speed up econeering would be to hold working summits that bring together ecology-minded and organismal scientists with bioengineers at a much larger scale, to explore which natural resources could be more deeply investigated and which bioengineering needs could be probed further. Then interfaces, both monetary and intellectual, could facilitate the active flow of insights and reagents between these fields. The next step could involve teaching people in each field the skills of their counterparts: how many bioengineers would relish the ability to hunt down and characterize species in the ocean or desert? How many organismal biologists and ecologists would benefit from trying out applications in specific areas of medical need?

To fulfill the vision of econeering, we should devise technologies for assessing the functions of biological subsystems fully and quickly, perhaps even enabling rapid basic science and bioengineering assessments to be done in one fell swoop. Devices for point-of-discovery phenotyping that allow for gene or gene pathway cloning, heterologous expression, and functional screening--and maybe even downstream methodologies such as in-the-field directed evolution--would allow the rapid assessment of the physiology of the products of genes or interacting sets of gene products. (Note well: the gene sequence is important, but only the beginning; gene sequences are not sufficient by themselves to fully understand the function of a gene product in a complex natural or bioengineering context.)

Bioinformatic visualization tools could be useful: can we scan ecology with a bioengineering lens, revealing areas of evolutionary space that haven't been investigated (at either the example or class level)? What are the areas of bioengineering need where examples from nature might be useful in inspiring solutions?

Ideally, an econeering toolbox will emerge that will let us confront some of our greatest unmet needs--not just brain disorders, but needs in complex spaces such as energy, antibiotic resistance, desalination, and climate. If we can better understand, invent from, and improve the preservation of our natural resources, we'll be poised to equip ourselves with a billion years of natural bioengineering. This will give us a great advantage in tackling the big problems of our time--and help future generations tackle theirs.


Sunday, January 10, 2010

Liu Fang & Michael O'Toole

Philip Glass: Company, pipa and guitar duo by Liu Fang & Michael O'Toole

Universe In Its Infancy

Take a look at our universe's infancy. This is a simulation of the cosmos shortly after the big bang, as the density of dark and ordinary matter fluctuates before condensing into an arrangement more like that of today.

The simulation was performed using a universe simulator developed at Argonne National Laboratory, Illinois, on a graphics supercomputer called Eureka. The San Diego Supercomputer Center in California also collaborated on the project. It modelled a volume of the universe around 1 billion light years cubed and took over 4 million CPU hours to complete.


Friday, January 8, 2010