Thursday, January 31, 2013

Love potions

I have just read in Richard Kieckhefer’s Magic in the Middle Ages several recipes for medieval aphrodisiacs. I think it is a great shame that these have fallen out of use (I assume), and so want to recommend that we revive them. You can try them all at home.
1. To arouse a woman’s lust, soak wool in the blood of a bat and put it under her pillow as she sleeps.
2. The testicles of a stag or bull will arouse a woman to sexual desire. (The recipe doesn’t specify how you get them.)
3. Putting ants’ eggs into her bath will arouse her violently (to desire, that is).
4. Write “pax + pix + abyra + syth + samasic” on a hazel stick and hit a woman with it three times on the head, then immediately kiss her, and you will be assured of her love. I’ve got a feeling that this one might really work.
5. I know, it isn’t fair that the traffic is all one way. So to arouse a man to passion, mix a herb with earthworms and put it in his food. OK, so it didn’t work for Roald Dahl’s Mr and Mrs Twit, but you never know.

Saturday, January 26, 2013

Will we ever understand quantum theory?

And finally for now… a piece for BBC Future's "Will We Ever...?" column on the conundrums of quantum theory (more to come on this, I think).

___________________________________________________________

Quantum mechanics must be one of the most successful theories in science. Developed at the start of the twentieth century, it has been used to calculate with incredible precision how light and matter behave – how electrical currents pass through silicon transistors in computer circuits, say, or the shapes of molecules and how they absorb light. Much of today’s information technology relies on quantum theory, as do some aspects of chemical processing, molecular biology, the discovery of new materials, and much more.

Yet the weird thing is that no one actually understands quantum theory. The quote popularly attributed to physicist Richard Feynman is probably apocryphal, but still true: if you think you understand quantum mechanics, then you don’t. That point was proved by a poll among 33 leading thinkers at a conference on the foundations of quantum theory in Austria in 2011. This group of physicists, mathematicians and philosophers was given 16 multiple-choice questions about the meaning of the theory, and their answers displayed little consensus. For example, about half believed that all the properties of quantum objects are (at least sometimes) fixed before we try to measure them, whereas the other half felt that these properties are crystallized by the measurement itself.

That’s just the sort of strange question that quantum theory poses. We’re used to thinking that the world already exists in a definite state, and that we can discover what that state is by making measurements and observations. But quantum theory (‘quantum mechanics’ is often regarded as a synonym, although strictly that refers to the mathematical methods developed to study quantum objects) suggests that, at least for tiny objects such as atoms and electrons, there may be no unique state before an observation is made: the object exists simultaneously in several states, called a superposition. Only during the measurement is a ‘choice’ made about which of these possible states the object will possess: in quantum-speak, the superposition is ‘collapsed by measurement’. Before measurement, all we can say is that there is a certain probability that the object is in state A, or B, and so on. It’s not that, before measuring, we don’t know which of these options is true – the fact is that the choice has not yet been made.
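In the standard notation (a textbook statement, not part of the original article), a two-outcome superposition and the probabilities it assigns to each measurement result – the so-called Born rule – look like this:

```latex
% A superposition written as a weighted sum of two states, A and B
|\psi\rangle = \alpha\,|A\rangle + \beta\,|B\rangle , \qquad |\alpha|^2 + |\beta|^2 = 1
% Born rule: the probability of each outcome when a measurement is made
P(A) = |\alpha|^2 , \qquad P(B) = |\beta|^2
```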

This is probably the most unsettling of all the conundrums posed by quantum theory. It disturbed Albert Einstein so much that he refused to accept it all his life. Einstein was one of the first scientists to embrace the quantum world: in 1905 he proposed that light is not a continuous wave but comes in ‘packets’, or quanta, of energy, called photons, which are in effect ‘particles of light’. Yet as his contemporaries, such as Niels Bohr, Werner Heisenberg and Erwin Schrödinger, devised a mathematical description of the quantum world in which certainties were replaced by probabilities, Einstein protested that the world could not really be so fuzzy. As he famously put it, “God does not play dice.” (Bohr’s response is less famous, but deserves to be better known: “Einstein, stop telling God what to do.”)

Schrödinger figured out an equation that, he said, expressed all we can know about a quantum system. This knowledge is encapsulated in a so-called wavefunction, a mathematical expression from which we can deduce, for example, the chances of a quantum particle being here or there, or being in this or that state. Measurement ‘collapses’ the wavefunction so as to give a definite result. But Heisenberg showed that we can’t answer every question about a quantum system exactly. There are some pairs of properties for which an increasingly precise measurement of one of them renders the other ever fuzzier. This is Heisenberg’s uncertainty principle. What’s more, no one really knows what a wavefunction is. It was long considered to be just a mathematical convenience, but now some researchers believe it is a real, physical thing. Some think that collapse of the wavefunction during measurement is also a real process, like the bursting of a bubble; others see it as just a mathematical device put into the theory “by hand” – a kind of trick. The Austrian poll showed that these questions about whether or not the act of measurement introduces some fundamental change to a quantum system still cause deep divisions among quantum thinkers, with opinions split quite evenly in several ways.
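For reference (again, standard physics rather than anything drawn from the article itself), Schrödinger’s time-dependent equation for the wavefunction and Heisenberg’s uncertainty relation for position and momentum can be written as:

```latex
% Time-dependent Schroedinger equation: H is the Hamiltonian (total-energy) operator
i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\,\psi
% Heisenberg's uncertainty principle for position x and momentum p
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```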

Bohr, Heisenberg and their collaborators put together an interpretation of quantum mechanics in the 1920s that is now named after their workplace: the Copenhagen interpretation. This argued that all we can know about quantum systems is what we can measure, and this is all the theory prescribes – that it is meaningless to look for any ‘deeper’ level of reality. Einstein rejected that, but nearly two-thirds of those polled in Austria were prepared to say that Einstein was definitely wrong. However, only 21 percent felt that Bohr was right, with 30 percent saying we’ll have to wait and see.

Nonetheless, their responses revealed the Copenhagen interpretation as still the favourite (42%). But there are other contenders, one of the strongest being the Many Worlds interpretation formulated by Hugh Everett in the 1950s. This proposes that every possibility expressed in a quantum wavefunction corresponds to a physical reality: a particular universe. So with every quantum event – two particles interacting, say – the universe splits into alternative realities, in each of which a different possible outcome is observed. That’s certainly one way to interpret the maths, although it strikes some researchers as obscenely profligate.

One important point to note is that these debates over the meaning of quantum theory aren’t quite the same as popular ideas about why it is weird. Many outsiders figure that they don’t understand quantum theory because they can’t see how an object can be in two places at once, or how a particle can also be a wave. But these things are hardly disputed among quantum theorists. It’s been rightly said that, as a physicist, you don’t ever come to understand them in any intuitive sense; you just get used to accepting them. After all, there’s no reason at all to expect the quantum world to obey our everyday expectations. Once you accept this alleged weirdness, quantum theory becomes a fantastically useful tool, and many scientists just use it as such, like a computer whose inner workings we take for granted. That’s why most scientists who use quantum theory never fret about its meaning – in the words of physicist David Mermin, they “shut up and calculate” [Physics Today, April 1989, p9], which is what he felt the Copenhagen interpretation was recommending.

So will we ever get to the bottom of these questions? Some researchers feel that at least some of them are not really scientific questions that can be decided by experiment, but philosophical ones that may come down to personal preference. One of the most telling questions in the Austrian poll was whether there will still be conferences about the meaning of quantum theory in 50 years’ time. Forty-eight percent said “probably yes”, and only 15 percent “probably no”. Twelve percent said “I’ll organize one no matter what”, but that’s academics for you.

Stormy weather ahead

Next up, a kind of book review for Prospect. In my experience as a footballer playing on an outdoor pitch through the winter, the three-day forecasts are actually not that bad at all.

________________________________________________________________

Isn't it strange how we like to regard weather forecasting as a uniquely incompetent science – as though this subject of vital economic and social importance can attract only the most inept researchers, armed with bungling, bogus theories?

That joke, however, is wearing thin. With Britain’s, and probably the world’s, weather becoming more variable and prone to extremes, an inaccurate forecast risks more than a soggy garden party, potentially leaving us unprepared for life-threatening floods or ruined harvests.

Perhaps this new need to take forecasting seriously will eventually win it the respect it deserves. Part of the reason we love to harp on about Michael Fish’s disastrously misplaced reassurance over the Great Storm of 1987 is that there has been no comparable failure since. As meteorologists and applied mathematicians Ian Roulstone and John Norbury point out in their account of the maths of weather prediction, Invisible in the Storm (Princeton University Press, 2013), the five-day forecast is, at least in Western Europe, now more reliable than the three-day forecast was when the Great Storm raged. There has been a steady improvement in accuracy over this period and, popular wisdom to the contrary, prediction has long been far superior to simply assuming that tomorrow’s weather will be the same as today’s.

Weather forecasting is hard not in the way that fundamental physics is hard. It’s not that the ideas are so abstruse, but that the basic equations are extremely tough to solve, and that lurking within them is a barrier to prediction that must defeat even the most profound mind. Weather is intrinsically unknowable more than two weeks ahead, because it is an example of a chaotic system, in which imperceptible differences in two initial states can blossom into grossly different eventual outcomes. Indeed, it was the work of the American meteorologist Edward Lorenz in the 1960s, using a set of highly simplified equations to determine patterns of atmospheric convection, that first alerted the scientific community to the notion of chaos: the inevitable divergence of all but identical initial states as they evolve over time.
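Lorenz’s reduced model of convection comes down to three coupled equations, and its sensitivity to initial conditions is easy to demonstrate. The sketch below is my own illustration (using the standard parameter values Lorenz adopted), not anything taken from the book: it integrates two copies of the system whose starting points differ by one part in a million and prints how quickly they drift apart.

```python
# Lorenz's 1963 convection model: two nearly identical initial states diverge.
import numpy as np

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Right-hand side of the three Lorenz equations."""
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(v, dt=0.01):
    """One fourth-order Runge-Kutta integration step."""
    k1 = lorenz(v)
    k2 = lorenz(v + 0.5 * dt * k1)
    k3 = lorenz(v + 0.5 * dt * k2)
    k4 = lorenz(v + dt * k3)
    return v + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

a = np.array([1.0, 1.0, 20.0])        # one initial state of the toy 'atmosphere'
b = a + np.array([1e-6, 0.0, 0.0])    # an imperceptibly different one

for step in range(1, 3001):
    a, b = rk4_step(a), rk4_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.2e}")
# The separation grows roughly exponentially until the two runs are no more
# alike than two randomly chosen states: prediction has broken down.
```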

It’s not obvious that weather should be susceptible to mathematical analysis in the first place. Winds and rains and blazing heat seem prone to caprice, and it’s no wonder they were long considered a matter of divine providence. Only in the nineteenth century, flushed with confidence that the world is a Newtonian mechanism, did anyone dare imagine weather prediction could be a science. In the 1850s Louis-Napoléon demanded to know why, if his celebrated astronomer Urbain Le Verrier could mathematically predict the existence of the planet Neptune, he and his peers couldn’t anticipate the storms destroying his ships. Le Verrier, as well as the Beagle’s captain Robert FitzRoy, understood that charts of barometric air pressure offered a rough and ready way of predicting storms and temperatures, but those methods were qualitative, subjective and deeply untrustworthy.

And so weather prediction lapsed back into disrepute until the Norwegian physicist Vilhelm Bjerknes (‘Bee-yerk-ness’) insisted that it is “a problem in mechanics and physics”. Bjerknes asserted that it requires ‘only’ an accurate picture of the state of the atmosphere now, coupled to knowledge of the laws by which one state evolves into another. Although almost a tautology, that made the problem rational, and Bjerknes’s ‘Bergen school’ of meteorology pioneered the development of weather forecasting in the face of considerable scepticism.

The problem was, however, identified by French mathematician Henri Poincaré in 1903: “it may happen that small differences in the initial conditions produce very great ones in the final phenomena.” Then, he wrote, “prediction becomes impossible.” This was an intimation of the phenomenon now called chaos, and it unravelled the clockwork Newtonian universe of perfect predictability. Lorenz supplied the famous intuitive image: the butterfly effect, the flap of a butterfly’s wings in Brazil that unleashes a tornado in Texas.

Nonetheless, it is Newton’s laws of motion that underpin meteorology. Leonhard Euler applied them to moving fluids by imagining the mutual interactions of little fluid ‘parcels’, a kind of deformable particle that avoids having to start from the imponderable motions of the individual atoms and molecules. Euler thus showed that fluid flow could be described by just four equations.
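In one standard form (a textbook statement, not a quotation from the book), those four equations for an incompressible, inviscid fluid are the three components of the momentum balance plus the condition of mass conservation:

```latex
% Euler's equations for an incompressible, inviscid fluid
% Momentum balance for the velocity field u (three components), with
% density rho, pressure p and gravitational acceleration g:
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \mathbf{g}
% Conservation of mass (incompressibility):
\nabla \cdot \mathbf{u} = 0
```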

Yet solving these equations for the entire atmosphere was utterly impossible. So Bjerknes formulated the approach now central to weather modelling: to divide the atmosphere into pixels and compute the relevant quantities – air temperature, pressure, humidity and flow speed – in each pixel. That vision was pursued by the ingenious British mathematician Lewis Fry Richardson in the 1920s, who proposed solving the equations pixel by pixel using computers – not electronic devices but, as the word was then understood, human calculators. Sixty-four thousand individuals, he estimated (optimistically), should suffice to produce a global weather forecast. The importance of forecasting for military operations, not least the D-Day crossing, was highlighted in the Second World War, and it was no surprise that this was one of the first applications envisaged for the electronic computers, such as the University of Pennsylvania’s ENIAC, whose development the war stimulated.

But number-crunching alone would not get you far on those primitive devices, and the reality of weather forecasting long depended on the armoury of heuristic concepts familiar from television weather maps today, devised largely by the Bergen school: isobars of pressure, highs and lows, warm and cold fronts, cyclones and atmospheric waves, a menagerie of concepts for diagnosing weather much as a doctor diagnoses from medical symptoms.

In attempting to translate the highly specialized and abstract terminology of contemporary meteorology – potential vorticity, potential temperature and so on – into prose, Roulstone and Norbury have set themselves an insurmountable challenge. These mathematical concepts can’t be expressed precisely without the equations, with the result that this book is far and away too specialized for general readers, even with the hardest maths cordoned into ‘tech boxes’. It is a testament to the ferocity of the problem that some of the most inventive mathematicians, including Richardson, Lorenz, John von Neumann and Jule Charney (an unsung giant of meteorological science), have been drawn to it.

But one of the great strengths of the book is the way it picks apart the challenge of making predictions about a chaotic system, showing what improvements we might yet hope for and what factors confound them. For example, forecasting is not always equally hard: the atmosphere is sometimes ‘better behaved’ than at others. This is evident from the way prediction is now done: by running a whole suite (ensemble) of models that allow for uncertainties in initial conditions, and serving up the results as probabilities. Sometimes the various simulations might give similar results over the next several days, but at other times they might diverge hopelessly after just a day or so, because the atmosphere is in a particularly volatile state.
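A cartoon of how such an ensemble works, using a toy chaotic model in place of a real atmospheric simulation (a sketch of my own, not an operational system): perturb the best-guess initial state many times, run each copy forward, and read a probability off the spread of outcomes.

```python
# Toy 'ensemble forecast': many runs from slightly perturbed initial states,
# with the result reported as a probability. Purely illustrative.
import numpy as np

def toy_model(x, steps=200, r=3.9):
    """A chaotic logistic map standing in for the real forecast model."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

rng = np.random.default_rng(0)
best_guess = 0.31                                        # imperfect analysis of 'today'
members = best_guess + 1e-4 * rng.standard_normal(50)    # 50 perturbed copies

outcomes = np.array([toy_model(x) for x in members])
p_wet = (outcomes > 0.5).mean()           # fraction of members ending up 'wet'
print(f"Chance of a 'wet' outcome: {100 * p_wet:.0f}%")
print(f"Ensemble spread: {outcomes.std():.2f}")   # a large spread signals a volatile state
```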

Roulstone and Norbury point out that the very idea of a forecast is ambiguous. If it rightly predicts rain two days hence, but gets the exact location, time or intensity a little wrong, how good is that? It depends, of course, on what you need to know – on whether you are, say, a farmer, a sports day organizer, or an insurer. Some floods and thunderstorms, let alone tornados, are highly localized: below the pixel size of most weather simulations, yet potentially catastrophic.

The inexorable improvement in forecasting skill is partly a consequence of greater computing power, which allows more details of atmospheric circulation and atmosphere-land-sea interactions to be included and pixels to become smaller. But the gains also depend on having enough data about the current state of the atmosphere to feed into the model. It’s all very well having a very fine-grained grid for your computer models, but at present we have less than 1 percent of the data needed fully to set the initial state of all those pixels. The rest has to come from ‘data assimilation’, which basically means filling in the gaps with numbers calculated by earlier computer simulations. Within the window of predictability – perhaps out to ten days or so – we can still anticipate that forecasts will get better, but this will require more sensors and satellites as well as more bits and bytes.

If we can’t predict the weather beyond a fortnight, how can we hope to forecast future climate change, especially when the longer timescales also necessitate a drastic reduction in spatial resolution? But the climate sceptic’s sneer that the fallibility of weather forecasting renders climate modelling otiose is deeply misconceived. Climate is ‘average weather’, and as such it has different determinants, such as the balance of heat entering and leaving the atmosphere, the large-scale patterns of ocean flow, the extent of ice sheets and vegetation cover. Nonetheless, short-term weather can impact longer-term climate, not least in the matter of cloud formation, which remains one of the greatest challenges for climate prediction. Conversely, climate change will surely alter the weather; there’s a strong possibility that it already has. Forecasters are therefore now shooting at a moving target. They might yet have to brave more ‘Michael Fish’ moments in the future, but if we use those to discredit them, it’s at our peril.

eBay chemistry

Ah, quite a bit of stuff to post tonight. Here is the first: my latest Crucible column for Chemistry World. I have a fantasy of kitting out my cellar this way one day, although the Guy Fawkes aspect of that idea would be sure to get me banished to the garden shed instead.

__________________________________________________________________

Benzyl alcohol? Dimethyl formamide? No problem. Quickfit glassware? Choose your fitting. GC mass spectrometer? Perhaps you’d like the HP 5972 5890 model for a mere £11,500. With eBay, you could possibly kit out your lab for a fraction of the cost of buying everything new.

There are risks, of course. The 90-day warranty on that mass spectrometer is scant comfort for such an investment, and you might wonder at the admission that it is “seller refurbished”. But you can probably get your hands on almost any bit of equipment for a knockdown price, if you’re willing to take the gamble.

You don’t have to rely on the hustle of eBay. Several companies now do a brisk online trade in used lab equipment. International Equipment Trading has been selling used and refurbished instruments for “independent laboratories, small and large industries, research institutions and universities around the globe” since 1979, offering anything from electron microscopes to NMR spectrometers. The cumbersomely named GoIndustry DoveBid (“Go-Dove”) is an international “surplus asset management” company used by many British technology companies for auctioning off equipment after site closures. And LabX, founded in 1995, is establishing itself as a major clearing house for used labware, serving markets ranging from semiconductor microelectronic manufacturing to analytical chemistry and medical diagnostic labs. Companies like this don’t become successful without scrupulous attention to quality, reliability and customer satisfaction – they aren’t cowboys.

Yet these transactions probably represent the tip of the iceberg as far as used and redundant lab kit goes. Can there be a chemistry department in the developed world that doesn’t have analytical instruments standing abandoned in corners and basements, perfectly functional but a little outdated? Or jars of ageing reagents cluttering up storerooms? Or drawers full of forgotten glassware, spatulas, heatproof mats? This stuff doubtless bit painfully into grants when it was first bought, but now that investment is ignored. The gear is likely to end up one day in a skip.

With universities struggling to accommodate cuts and a keen awareness of the need to recycle, this wastage seems criminal. But there seems to be little concerted effort to do much about it. Acquiring second-hand equipment is actively discouraged in some universities – partly because of understandable concerns about its quality, but often because the bureaucracy involved in setting up an ‘approved’ purchase is so slow and complicated that no one bothers, especially for little items. (One researcher used to buy sealant for bargain-basement prices until his department forbade it.) Inter-departmental recycling could be especially valuable for chemical reagents, since you might typically have to order far more than you really need. Auctioning them is another matter, however – selling chemicals requires a license, and one insider calls this a “legal minefield”.

But universities rarely have any organized system for sharing and redistributing equipment internally, and so “lots of kit sits there doing nothing”, says one chemist I spoke to, admitting that this applies to his own lab. He also points out that the EPSRC’s scheme for funding upgrades of small-scale equipment for early-career researchers seems to include no plans for reusing the old kit. This, he says, “could be worth well over £1m, and there are many universities overseas who would love to get hold of it, and wouldn’t be concerned about fixing it themselves.”

It’s a measure of the slightly disreputable taint of the second-hand equipment market that several of the researchers I spoke to requested anonymity. Early in his career, said one, he saved a lot of money buying in this way. “For a young academic it makes sense”, he says – you can get instrumentation for perhaps a twentieth of what it would cost new, such as high-pressure liquid chromatography pumps for a few hundred pounds instead of several thousand. In equipment auctions “the prices can start at nearly nothing”, according to a chemist who helped auctioneers sell off equipment from his previous employer, the pharmaceuticals company Exelgen, when it closed its site in North Cornwall in 2009. He says some second-hand equipment is bought up by the original manufacturer simply to maintain a good market for their new products. Not everything is a bargain, however: some used gear can sell for “nearly the price of new equipment as people get into bidding”, he says. On top of that you have the auctioneer’s fees, VAT, and perhaps carriage costs for equipment needing specialized transportation, not to mention the inconvenience of having to check out the goods first. So you need to know what you’re doing. “We have bought a fair bit of equipment this way”, says another researcher, “but most items require repairs, a service or at the very least some DIY to get them going. But if you happen to have a student who enjoys playing around with kit or computers, you can save quite a lot of money.” Happy hunting!

Tuesday, January 22, 2013

The thermodynamics of images

This was a somewhat challenging topic I took on for my latest column for BBC Future.

On an unrelated matter, my talk on Curiosity at the Perimeter Institute in December is now online. The Q&A is here.

And while I am doing non-sequiturs, I am deeply troubled by the news that the Royal Institution has put its Albemarle St building up for sale to cover the debts incurred in the excessively lavish refurbishment (don’t get me started). Amol Rajan in the Independent is dead right: it would be monstrous if this place were lost to science, and Faraday’s lecture theatre became a corporate office. It must be saved!

______________________________________________________________

One of the unforeseen boons of research on artificial intelligence is that it has revealed much about our own intelligence. Some aspects of human perception and thought can be mimicked easily, indeed vastly surpassed, by machines, while others are extremely hard to reproduce. Take visual processing. We can give a satellite an artificial eye that can photograph your backyard from space, but making machines that can interpret what they ‘see’ is still very challenging. That realization should make us appreciate our own virtuosity in making sense of a visual field crammed with objects, some overlapping, occluded, moving, or viewed at odd angles or in poor light.

This ability to deconstruct immense visual complexity is usually regarded as an exquisite refinement of the neural circuitry of the human brain: in other words, it’s all in the head. It’s seldom asked what are the rules governing the visual stimulus in the first place: we tend to regard this as simply composed of objects whose identity and discreteness we must decode. But a paper published in the journal Physical Review Letters stands the problem of image analysis on its head by asking what are the typical statistical features of natural images. In other words, what sort of problem is it, really, that we’re solving when we look at the world?

Answering that question involves a remarkable confluence of scientific concepts. There is today a growing awareness that the science of information – how data is encoded, inter-converted and transported, whether in computers, genes or the quantum states of atoms – is closely linked to the field of thermodynamics, which was originally devised to understand how heat flows in engines and other machinery. For example, any processing of information – changing a bit in a computer’s binary memory from a 1 to a 0, say – generates heat.
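The canonical expression of that link is Landauer’s principle, which I’ll state here for concreteness (it is standard thermodynamics, not something spelled out in the paper discussed below): erasing a single bit of information at temperature T must dissipate at least a minimum quantity of heat.

```latex
% Landauer's bound: minimum heat released when one bit of information is
% erased at absolute temperature T (k_B is Boltzmann's constant)
Q_{\min} = k_B \, T \, \ln 2
```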

A team at Princeton University led by William Bialek now integrates these ideas with concepts from image processing and neuroscience. The consequences are striking. Bialek and his colleagues Greg Stephens, Thierry Mora and Gasper Tkacik find that in a pixellated monochrome image of a typical natural scene, some groups of black and white pixels are more common than other, seemingly similar ones. And they argue that such images can be assigned a kind of ‘temperature’ which reflects the way the black and white pixels are distributed across the visual field. Some types of image are ‘hotter’ than others – and in particular, natural images seem to correspond to a ‘special’ temperature.

One way to describe a (black and white) image is to break it down into ‘waves’ of alternating light and dark patches. The longest wavelength would correspond to an all-white or all-black image, the shortest to black and white alternating for every adjacent pixel. The finer the pixels, the more detail you capture. It is equivalent to breaking down a complex sound into its component frequencies, and a graph of the intensity of each component plotted against its wavelength is called a power spectrum. One of the characteristics of typical natural images, such as photos of people or scenery, is that they all tend to have the same kind of power spectrum – that’s a way of saying that, while the images might show quite different things, the ‘patchiness’ of light and dark is typically the same. It’s not always so, of course – if we look at the night sky, or a blank wall, there’s very little variation in brightness. But the power spectra reveal a surprising statistical regularity in most images we encounter.
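That kind of power spectrum is straightforward to estimate with a Fourier transform. The sketch below is my own illustration, run on a synthetic random image as a stand-in for a photograph; for real natural images the measured power typically falls off roughly as the inverse square of the spatial frequency.

```python
# Radially averaged power spectrum of a greyscale image.
import numpy as np

def power_spectrum(img):
    """Return the average power in rings of equal spatial frequency."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    ny, nx = img.shape
    y, x = np.indices(img.shape)
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)   # distance from zero frequency
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums[1:nx // 2] / counts[1:nx // 2]

rng = np.random.default_rng(1)
img = rng.standard_normal((256, 256))   # white-noise stand-in (its spectrum is flat)
print(power_spectrum(img)[:5])          # swap in a real photo to see the steep fall-off
```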

What’s more, these power spectra have another common characteristic, called scale invariance. This means that pretty much any small part of an image is likely to have much the same kind of variation of light and dark pixels as the whole image. Bialek and colleagues point out that this kind of scale-invariant patchiness is analogous to what is found in physical systems at a so-called critical temperature, where two different states of the system merge into one. A fluid (such as water) has a critical temperature at which its liquid and gas states become indistinguishable. And a magnet such as iron has a critical temperature at which it loses its north and south magnetic poles: the magnetic poles of its constituent atoms are no longer aligned but become randomized and scrambled by the heat.

So natural images seem to possess something like a critical temperature: they are poised between ‘cold’ images that are predominantly light or dark, and ‘hot’ images that are featureless and random. This is more than a vague metaphor – for a selection of woodland images, the researchers show that the distributions of light and dark patches have just the same kinds of statistical behaviours as a theoretical model of a two-dimensional magnet near its critical temperature.

Another feature of a system in such a critical state is that it has access to a much wider range of possible configurations than it does at either lower or higher temperatures. For images, this means that each one is essentially unique – they share few specific features, even if statistically they are similar. Bialek and colleagues suspect this might be why data files encoding natural images are hard to compress: the fine details matter in distinguishing one image from another.

What are the fundamental patterns from which these images are composed? When the researchers looked for the most common types of pixel patches – for example, 4x4 groups of pixels – they found something surprising. Fully black or white patches are very common, but as the patches take on increasingly complex arrangements of white and black pixels, not all are equally likely: there are certain forms that are significantly more likely than others. In other words, natural images seem to have some special ‘building blocks’ from which they are constituted.
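Tallying those patches is simple in principle. Here is a rough sketch of the sort of counting involved (my own, run on a synthetic binary image rather than the natural photographs analysed in the paper):

```python
# Count how often each distinct 4x4 black/white patch occurs in a binary image.
from collections import Counter
import numpy as np

def patch_counts(binary_img, size=4):
    """Tally every size-by-size patch, encoded as a tuple of 0s and 1s."""
    counts = Counter()
    h, w = binary_img.shape
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            patch = tuple(binary_img[i:i + size, j:j + size].ravel())
            counts[patch] += 1
    return counts

rng = np.random.default_rng(2)
img = (rng.random((128, 128)) > 0.5).astype(int)   # stand-in image; use a real one here
counts = patch_counts(img)
for patch, n in counts.most_common(3):             # the three most frequent patches
    print(np.array(patch).reshape(4, 4), "occurrences:", n)
```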

If that’s so, Bialek and colleagues think the brain might exploit this fact to aid visual perception by filtering out ‘noise’ that occurs naturally on the retina. If the brain were to attune groups of neurons to these privileged ‘patches’, then it would be easier to distinguish two genuinely different images (made up of the ‘special’ patches) from two versions of the same image corrupted by random noise (which would include ‘non-special’ patches). In other words, natural images may offer a ready-made error-correction scheme that helps us interpret what we see.

Reference: G. J. Stephens, T. Mora, G. Tkacik & W. Bialek, Physical Review Letters 110, 018701 (2013).

Thursday, January 17, 2013

History and myth

This is my Crucible column for the January issue of Chemistry World: more on why scientists rarely make good historians.

_____________________________________________________________

The history of chemistry is a discipline founded by chemists. After all, in the days before the history of science was a recognized field of academic toil, who else would have been interested in the origins of chemistry except those who now help it bear fruit? Marcellin Berthelot was one of the first to take alchemy seriously, translating ancient manuscripts and arguing that its apparent fool’s quest for gold led to useful discoveries. J. R. Partington, often considered a founding father of the modern study of chemical history, was a research chemist at Manchester and then at Queen Mary College, where another chemist, Frank Sherwood Taylor, founded the history-of-chemistry journal Ambix in 1937.

Things are different today – and not everyone is happy about it. The history of science has become as professionalized as any branch of science itself, and is therefore likewise answerable to standards of specialized expertise that leave scant room for the amateur. As a result, some chemists who enjoy exploring the lives and works of their predecessors can feel excluded from their own past, undermined and over-ruled by historians who have their own methods, norms and agendas and yet who have perhaps never held a test-tube. Conversely, those historians may end up despairing at the over-simplified narratives that practising chemists want to tell, at their naïve attachment to the founding myths of their discipline and their determination to filter the past through the lens of the present. In short, chemists and historians of chemistry don’t always see eye to eye.

That much is clear from the comments of Peter Morris in the latest issue of Ambix [1], from the editorship of which he has just stepped down after a decade in the position. (His successor is Jennifer Rampling of the University of Cambridge.) Morris is measured and diplomatic in his remarks, but his role has evidently not been an easy one. “It is unfortunate that the last three or four decades have witnessed a separation (but not yet a divorce) between historians and chemist-historians”, he says, defining the latter as practising chemists who write history. This separation is evident from the way that, while articles in Ambix come mostly from historians, several chemistry journals, such as Angewandte Chemie and the Journal of Chemical Education, sometimes publish (or at least once did) historical pieces from chemist-historians. The editors of such journals, says Morris, rarely ask historians to write such pieces, perhaps because they don’t know any, or perhaps because they “are fearful that the professionals will transgress against the standard foundation history accepted by the scientists.”

That’s a killer punch. In other words, Morris is saying that chemists fiercely defend their myths against those who dare weigh them against the evidence. For example, Morris says that an article in Ambix [2] challenging the stock account of how Wöhler’s synthesis of urea vanquished vitalism and began organic chemistry has probably done little to dislodge this widely held belief. Some chemists doubtless still prefer the fairy tale offered in Bernard Jaffe’s Crucibles: The Story of Chemistry: “About one hundred and fifty years ago an epoch-making event took place in the laboratory of a young German still in his twenties…”

Chemists aren’t unique among scientists in displaying a certain antipathy to ‘outsiders’ deigning to dismantle their cherished fables. But at face value, it seems odd that a group who recognize a culture of expertise and value facts should resist the authority of those who actually go back to the sources and examine the past. Why should this be? In part, it merely reflects the strong Whiggish streak that infuses science, according to which the purpose of history is not to understand the past so much as to explain how we got to the present. This attitude, says Morris, is evident in the way that many chemist-historians will accept only the chemical literature as the authoritative text on history – not the secondary literature that contextualizes such (highly stylized) accounts, not the social, political or economic setting. And while for historians it is often highly revealing to examine what past scientists got wrong, for scientists those are just discredited ideas and therefore so much rubbish to be swept aside.

But, as Morris stresses, not all chemist-historians think this way, and what sometimes hinders them is simply a lack of historical training: of how to assemble a sound historical argument. The trouble is, they may not be interested in acquiring it. “Many chemists, although by no means all, are loathe to take instruction from historians, whom they perceive as being non-chemists”, he says. They might write jargon-strewn, ploddingly chronological papers with no thesis or argument, and refuse to alter a word on the advice of historians. That kind of intellectual arrogance will only widen the divide.

Morris expresses optimism that “with good will and mutual understanding” the breach can be healed. Let’s hope so, because every chemistry student can benefit from some understanding of their subject’s evolution, and they deserve more than comforting myths.

1. P. J. T. Morris, Ambix 59, 189-196 (2012).
2. P. J. Ramberg, Ambix 47, 170-195 (2000).

Tuesday, January 15, 2013

What's it all about, Albert?

Here’s the pre-edited version of a story for Nature News about a fun poll of specialists on what quantum theory means. It seems quite possible that this material will spawn some further pieces about current work on quantum foundations, not least the ‘reconstruction’ projects that attempt to rebuild the theory from scratch using a few simple axioms.

_____________________________________________________________________

New poll reveals diverse views about foundational questions in physics

Quantum theory was first devised over a hundred years ago, but even experts still have little idea what it means, according to a poll at a recent meeting reported in a preprint on the physics arXiv server [1].

The poll of 33 key thinkers on the fundamentals of quantum theory shows that opinions on some of the most profound questions are fairly evenly split over several quite different answers.

For example, votes were roughly evenly split between those who believe “physical objects have their properties well defined prior to and independent of measurement” in some cases, and those who believe they never do. And despite the famous idea that observation of quantum systems plays a key role in determining their behaviour, 21 percent felt that “the observer should play no fundamental role whatsoever.”

Nonetheless, “I was actually surprised that there was so much agreement on some questions”, says Anton Zeilinger of the University of Vienna, who organized the meeting in Austria in July 2011 at which the poll was taken.

The meeting, supported by the Templeton Foundation, brought together physicists, mathematicians and philosophers interested in the meanings of quantum theory. Zeilinger, together with Maximilian Schlosshauer of the University of Portland in Oregon and Johannes Kofler of the Max Planck Institute of Quantum Optics in Garching, Germany, devised the poll, in which attendees were given 16 multiple-choice questions on key foundational issues in quantum theory.

Disagreements over the theory’s interpretation have existed ever since it was first developed, but Zeilinger and colleagues believe this may be the first poll of the full range of views held by experts. A previous poll at a 1997 meeting in Baltimore asked attendees the single question of which interpretation of quantum theory they favoured most [2].

Probably the most famous dispute about what quantum theory means was that between Einstein and his peers, especially the Danish physicist Niels Bohr, on the question of whether the world was fundamentally probabilistic rather than deterministic, as quantum theory seemed to imply. One of the few issues in the new poll on which there was something like a consensus was that Einstein was wrong – and one of the few answers that polled zero votes was “There is a hidden determinism [in nature]”.

Bohr, along with Werner Heisenberg, offered the first comprehensive interpretation of quantum theory in the 1920s: the so-called Copenhagen interpretation. This proposed that the physical world is unknowable and in some sense indeterminate, and the only meaningful reality is what we can access experimentally. As at the earlier Baltimore meeting, the Austrian poll found the Copenhagen interpretation to be favoured over others, but only by 42 percent of the voters. However, 42 percent also admitted that they had switched interpretation at least once. And whereas a few decades ago the options were very few, says Schlosshauer, “today there are more ‘sub-views’.”

Perhaps the most striking implication of the poll is that, while quantum theory is one of the most successful and quantitatively accurate theories in science, interpreting it is as fraught now as it was at the outset. “Nothing has really changed, even though we have seen some pretty radical new developments happening in quantum physics, from quantum information theory to experiments that demonstrate quantum phenomena for ever-larger objects”, says Schlosshauer. “Some thought such developments would push people one way or the other in their interpretations, but I don't think there’s much evidence of that happening.”

However, he says there was pretty good agreement on some questions. “More than two-thirds believed that there is no fundamental limit to quantum theory – that it should be possible for objects, no matter how big, to be prepared in quantum superpositions like Schrödinger’s cat. So the era where quantum theory was associated only with the atomic realm appears finally over.”

Other notable views were that 42 percent felt it would take 10-25 years to develop a useful quantum computer, while 30 percent placed the estimate at 25-50 years. And the much debated role of measurement in quantum theory – how and why measurements affect outcomes – split the votes many ways, with 24 percent regarding it as a severe difficulty and 27 percent as a “pseudoproblem”.

Zeilinger and colleagues don’t claim that their poll is rigorous or necessarily representative of all quantum researchers. John Preskill, a specialist in quantum information theory at the California Institute of Technology, suspects that “a broader poll of physicists might have given rather different results.” [There is an extended comment from Preskill on the poll here]

Are such polls useful? “I don’t know”, says Preskill, “but they’re fun.” “Perhaps the fact that quantum theory does its job so well and yet stubbornly refuses to answer our deeper questions contains a lesson in itself”, says Schlosshauer. Maybe the most revealing answer was that 48 percent believed there will still be conferences on quantum foundations in 50 years’ time.

References
1. Schlosshauer, M., Kofler, J. & Zeilinger, A. preprint http://www.arxiv.org/abs/1301.1069 (2013).
2. Tegmark, M. Fortschr. Phys. 46, 855 (1998).

Tuesday, January 01, 2013

Give graphene a bit of space

Here’s a piece published in last Saturday’s Guardian. I see little has changed in Comment is Free, i.e. “A worthless uninformed negative article. You don't know what you are talking about. Why do you get paid for writing rubbish like this?” One even figured that the article is “anti-science.” Another decided that, because he feels steel-making is still not really a science (tell that to those now doing first-principles calculations on metal alloys), the whole article is invalidated. But this is par for the course. Back in the real world, Laurence Eaves rightly points out that his recent articles in Science and Nature Nanotech with Geim and Novoselov show that there’s hope of a solution to the zero-band-gap problem. Whether it will be economical to make microprocessors this way is a question still far off, but I agree that there’s reason for some optimism. If carbon nanotubes are any guide, however, it’s going to be a long and difficult road. Some apparently regard it as treasonous to say so, but I'm pretty sure that Andre Geim, for one, would prefer to get on with the hard work without the burden of unreasonable expectation on his shoulders. And I know that the folks at IBM are keeping those expectations very modest and cautious when it comes to graphene.

______________________________________________________________

Wonder materials are a peculiarly modern dream. Until the nineteenth century we had to rely almost entirely on nature for the fabrics from which we built our world. Not until the 1850s was steel-making a science, and the advent of the first synthetic polymers – celluloid and vulcanised rubber – around the same time, followed later by bakelite, ushered in the era of synthetic materials. As The Man in the White Suit (1951) showed, there were mixed feelings about this mastery of manmade materials: the ads might promise strength and durability, but the economy relies on replacement. When, four years later, synthetic diamond was announced by General Electric, some felt that nature had been usurped.

Yet the ‘miracle material’ can still grab headlines and conjure up utopian visions, as graphene reveals. This ultra-tough, ultra-thin form of carbon, just one atom thick and made of sheets of carbon atoms linked chicken-wire fashion into arrays of hexagons, has been sold as the next big thing: the future of electronics and touch-screens, a flexible fabric for smart clothing and the electrodes of energy-storage devices. It’s a British discovery (well, sort of), and this time we’re not going to display our habitual dilatoriness when it comes to turning bright ideas into lucrative industries. George Osborne has announced £22m funding for commercialising graphene, the isolation of which won the 2010 Nobel prize in physics for two physicists at the University of Manchester.

It would be madness to carp about that. But let’s keep it in perspective: this investment will be a drop in the ocean if a pan-European graphene project currently bidding for a €1 bn pot from the European Union, to be decided early in 2013, is successful. All the same, it’s serious money, and those backing graphene have got a lot to live up to.

It’s not obvious that they will. With an illustrious history of materials innovation, Britain is well placed to put this carbon gossamer to work – not least, Cambridge boasts world-leading specialists in the technology of flexible, polymer-based electronics and display screens, one of the areas in which graphene looks most likely to make a mark. But overseas giants such as Samsung and Nokia are already staking out that territory, and China is making inroads too.

Perhaps more to the point, graphene might not be all it is talked up to be. No matter how hard the Manchester duo Andre Geim and Konstantin Novoselov stress that the main attraction so far is the remarkable physics of the substance and not its potential uses, accusations of hype have been flung at those touting this wonder material. The idea that all our microchips will soon be based on carbon rather than silicon circuits looks particularly dodgy, since it remains all but impossible to switch a graphene transistor (the central component of integrated circuits) fully off. Such transistors leak, leading one expert to call graphene “an extremely bad material that an electronics designer would not touch with a ten-foot pole”. Even optimists don’t forecast the graphene computer any time soon.

But here graphene is perhaps a victim of its own success: it’s such strange, interesting stuff that there’s almost a collective cultural wish to believe it can do anything. That’s the curse of the ‘miracle material’, and we have plastics to blame for it.

For plastics were the first of these protean substances. Before that, materials tended to have specific, specialized uses, their flaws all too evident. Steel was strong but heavy, stone hard but brittle. Leather and wood rotted. But plastics? Stronger than steel, hard, soft, eternal, biodegradable, insulating, conductive, sticky, non-stick, they tethered oil rigs and carried shopping. They got us used to the idea that a single fabric can be all things to all people. As a result, a new material is expected to multi-task. High-temperature superconductors, which nabbed a Nobel in 1987, would give us maglev trains and loss-free power lines. Carbon nanotubes (a sort of tubular graphene discovered in 1991) would anchor a Space Elevator and transform microelectronics. These things haven’t materialized, partly because it is really, really hard to secure a mass market overnight for high-tech, expensive new materials, especially when that means displacing older, established materials. They are instead finding their own limited niches. Graphene will too. But miracle materials? They don’t really exist.