Monday, October 22, 2007

Lucky Jim runs out of luck (at last)

[This is also posted on the Prospect blog.]

Jim Watson seems to be genuinely taken aback by the furore his recent comments on race and IQ have aroused. He looks a little like the teenage delinquent who, after years of being a persistent neighbourhood pest, finds himself suddenly hauled in front of a court and threatened with being sent to a detention centre. Priding himself on being a social irritant, he never imagined anyone would deal with him seriously.

The truth is that there is more than metaphor in this image. Watson has throughout his career combined the intelligence of a first-rate scientist and the influence of a Nobel laureate with the emotional maturity of a spoilt schoolboy. There is nothing particularly remarkable about that – it is not hard to find examples of immaturity among public figures – but the scientific community seems to find it particularly difficult to know how to accommodate such cases. For better or worse, there are plenty of niches for emotionally immature show-offs in politics and the media – the likes of Boris Johnson, Ann Widdecombe, Jeremy Clarkson and Ann Coulter all, in their own ways, manage it with aplomb. (It is not a trait unique to right-wingers, but somehow they seem to do it more memorably.) But although they can sometimes leave po-faced opponents spluttering, the silliness is usually too explicit to be mistaken for anything else.

Science, on the other hand, has tended to be blind to this facet of human variety, so that the likes of Watson come instead to be labelled “maverick” or “controversial”, which of course is precisely what they want. The scientific press tends to handle these figures with kid gloves, pronouncing gravely on the propriety of their “colourful” remarks, as though these are sober individuals who have made a bad error of judgement. Henry Porter is a little closer to the mark in the Observer, where he calls Watson an ‘elderly loon’ – the degree of ridicule is appropriate, except that Watson is no loon, and it has been a widespread mistake to imagine that his comments are a sign of senescence.

The fact is that Watson has always considered it great sport to say foolish things that will offend people. He is of the tiresome tribe that likes to display what they deem ‘political incorrectness’ as a badge of pride, forgetting that they would be ignored as bigoted boors if they did not have power and position. It is abundantly clear that behind the director of the Cold Spring Harbor Laboratory still stands the geeky young man depicted behind a model of DNA in the 1950s, whose (eminently deserved) Nobel has protected him from a need to grow up. “He was given licence to say anything that came into his mind and expect to be taken seriously,” said Harvard biologist E. O. Wilson (himself no stranger to controversy, but an individual who exudes far more wisdom and warmth than Watson ever has).

That’s a pitfall for all Nobel laureates, of course, and many are tripped by it. But few have embraced the licence with as much delight as Watson. For example, there was this little gem over a decade ago: “If you are really stupid, I would call that a disease. The lower 10 per cent who really have difficulty, even in elementary school, what’s the cause of it? A lot of people would like to say, ‘Well, poverty, things like that.’ It probably isn't. So I’d like to get rid of that, to help the lower 10 per cent.” Or this one: “Whenever you interview fat people, you feel bad, because you know you’re not going to hire them.”

Watson has been called “extraordinarily naïve” to have made his remarks about race and intelligence and expect to get away with them. But it is not exactly naivety – he probably just assumed that, since he has said such things in the past without major incident, he could do so again. Indeed, he almost did get away with it, until the Independent decided to make it front-page news.

Watson has apologized “unreservedly” for his remarks, which he says were misunderstood. This is mostly a public-relations exercise – it is not clear that there is a great deal of scope for misunderstanding, and evidently Watson now has a genuine concern that he will be dismissed from his post at Cold Spring Harbor. At least by admitting that there is “no scientific basis” for a belief that Africans are somehow “genetically inferior”, he has provided some ammunition to counter the opportunistic use of his remarks by racist groups. But it is inevitable that those groups will now make him a martyr, forced to recant in the manner of Galileo for speaking an unpalatable truth. (The speed with which support for Watson’s comments has come crawling out of the woodwork even in august forums such as Nature’s web site is disturbing.)

The more measured dismay that some, including Richard Dawkins, have voiced over the suppression of free speech implied by the cancellation of some of Watson’s intended UK talks is understandable, although it seems not unreasonable (indeed, it seems rather civil) for an institution to decide it does not especially want to host someone who has just expressed casual racist opinions. More to the point, it is not clear what ‘free speech’ is being suppressed here – it is not as though Watson wants to make a case that black people are less intelligent than other races and is being prevented from doing so. (In fact it is no longer clear what Watson wanted to say at all; the most likely interpretation is that he simply let a groundless prejudice slip out in an attempt to boost his ‘bad boy’ reputation, and that he now regrets it.) In a funny sort of way, Watson would be less deserving of scorn if he were now defending his remarks on the basis of the ‘evidence’ he alluded to. In that event, any kind of censorship would indeed be misplaced.

Beneath the sound and fury, however, we should remember that Watson’s immense achievements as a scientist do not oblige us to take him seriously in any other capacity. Those achievements are orthogonal to his bully-boy bigotry, and they put no distance at all between Watson and the pub boor.

The real casualty in all this is genetics research, for Watson’s comments past and present can only seem (and in fact not just seem) to validate claims that this research is in the hands of scientists with questionable judgement and sense of responsibility.

Friday, October 19, 2007

Swiss elections get spooky
[This is my latest column for muse@nature.com.]

High-profile applications of quantum trickery raise the question of what to call these new technologies. One proposal is unlikely to catch on.

The use of quantum cryptography in the forthcoming Swiss general elections on 21 October may be a publicity stunt, but it highlights the fact that the field of quantum information is now becoming an industry.

The invitation here is to regard Swiss democracy as being safeguarded by the fuzzy shroud of quantum physics, which can in principle provide a tamper-proof method of transmitting information. The reality is that just a single canton – Geneva – is using commercial quantum-cryptography technology already trialled by banks and financial institutions, and that it is doing so merely to send tallies from a vote-counting centre to the cantonal government’s repository.

The votes themselves are being delivered by paper ballot – which, given the controversies over electronic voting systems, is probably still the most secure way to collect them. In any event, with accusations of overt racism in the campaigning of the right-wing Swiss People’s Party (SVP), hacking of the voting system is perhaps the least of the worries in this election.

But it would be churlish to portray this use of quantum cryptography as worthless. There is no harm in using a high-profile event to advertise the potential benefits of the technology. If nothing else, it will get people asking what quantum cryptography is.

The technique doesn’t actually make transmitted data invulnerable to tampering. Instead, it makes it impossible to interfere with the transmission without leaving a detectable trace. Some quantum cryptographic schemes use the quantum-mechanical property of entanglement, whereby two or more quantum particles are woven together so that they become a single system. Then you can’t do something to one particle without affecting the others with which it is entangled.

Entanglement isn’t essential for quantum encryption – the first such protocol, devised by physicists Charles Bennett and Gilles Brassard in 1984, instead relies on a property called quantum indeterminacy, denoting our fundamental inability to describe some quantum systems exactly. Entanglement, however, is the key to a popular scheme devised by Artur Ekert in 1991. Here, the sender and receiver each receive one of a pair of entangled particles, and can decode a message by comparing their measurements of the particles’ quantum states. Any eavesdropping tends to randomize the relationship between these states, and is therefore detectable.
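
To see how the basis trick works, here is a toy simulation of the 1984 Bennett–Brassard scheme in Python – my own illustrative sketch, with made-up parameters, not a real cryptographic implementation. Bits encoded in a randomly chosen basis come out randomized when read in the wrong one, so an intercept-and-resend eavesdropper leaves a tell-tale error rate of about 25 per cent in the ‘sifted’ key:

import random

def bb84(n=2000, eve=False):
    errors = kept = 0
    for _ in range(n):
        a_bit, a_basis = random.getrandbits(1), random.getrandbits(1)
        bit, basis = a_bit, a_basis             # the photon in flight
        if eve:                                 # intercept-and-resend attack
            e_basis = random.getrandbits(1)
            if e_basis != basis:
                bit = random.getrandbits(1)     # wrong basis: outcome is random
            basis = e_basis                     # photon re-sent in Eve's basis
        b_basis = random.getrandbits(1)
        b_bit = bit if b_basis == basis else random.getrandbits(1)
        if b_basis == a_basis:                  # sifting: keep matching-basis rounds
            kept += 1
            errors += (b_bit != a_bit)
    return errors / kept

print(bb84(eve=False))   # ~0.0: a clean channel
print(bb84(eve=True))    # ~0.25: eavesdropping shows up as errors

In the entanglement-based schemes the same logic applies, except that the give-away is a breakdown of the correlations between the two halves of each entangled pair.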

Quantum cryptography is just one branch of the emerging discipline of quantum information technology, in which phenomena peculiar to the quantum world, such as entanglement, are used to manipulate information. Other applications include quantum computing, in which quantum particles are placed in superposition states – mixtures of the classical states that would correspond to the binary 1s and 0s of ordinary computers – to vastly boost the power and capacity of computation. Quantum teleportation – the exact replication of quantum particles at locations remote from the originals – also makes use of entanglement.
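
That ‘capacity’ claim can be made concrete: describing n qubits classically takes 2^n complex amplitudes, which is in effect what a quantum computer manipulates at once. A minimal numerical sketch of my own (using numpy, not any real quantum-computing library) puts three qubits into an equal superposition of all eight classical states:

import numpy as np

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                  # start in the state |000>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
for q in range(n):                              # put each qubit in superposition
    op = np.array([[1.0]])
    for k in range(n):
        op = np.kron(op, H if k == q else np.eye(2))
    state = op @ state
print(np.abs(state)**2)   # uniform weight on all 2**n = 8 basis states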

The roots of these new areas of quantum physics lie in the early days of quantum theory, when its founders were furiously debating what quantum theory implied about the physical world. Albert Einstein, whose Nobel-winning explanation of the photoelectric effect was one of the cornerstones of quantum mechanics, doubted that quantum particles could really have the fuzzy properties ascribed to them by the theory, to which one could do no more than assign probabilities.

In 1935 Einstein and his colleagues Boris Podolsky and Nathan Rosen proposed a thought experiment that they hoped would show quantum theory to be an incomplete account of physical reality. They showed how it seemed to predict what Einstein called ‘spooky action at a distance’ that operated instantaneously between two particles.

But we now know that this action at a distance is real – it is the result of quantum entanglement. What Einstein considered a self-evident absurdity is simply the way the world is. What’s more, entanglement and superpositions are now recognized as being key to the way our deterministic classical world, where events have definite outcomes, emerges from the murky haze of quantum probabilities.

Bennett was one of the pioneers who showed that these quantum effects aren’t just abstract curiosities, but can be exploited in applications. For this, he will surely get a Nobel prize some time soon.

So far, most researchers have been happy to talk about ‘quantum cryptography’, ‘quantum computing’ and so forth, vaguely gathered under the umbrella phrase of quantum information. But is that a good name for a technology? Charles Tahan, a physicist at the University of Cambridge who is working on these technologies, thinks not. In a recent preprint, he proposes to draw inspiration from Einstein and call it all ‘spookytechnology’.

This, says Tahan, would refer to “all functional devices, systems and materials whose utility relies in whole or in part on higher order quantum properties of matter and energy that have no counterpart in the classical world.” By higher-order, Tahan means things like entanglement and superposition. He argues that his definition is broad enough to contain more than quantum information technology, but not so broad as to be meaningless.

In that respect, Tahan points to the shortcomings of ‘nanotechnology’, a field that is not really a field at all but instead a ragbag of many areas of science and technology ranging from electronics to biomedicine.

But Tahan's label will never stick, because it violates one of the most fundamental prohibitions in scientific naming: don’t be cute. No scientist is going to want to tell people that he or she is working in a field that sounds as though it was invented by Casper the Friendly Ghost. True, the folksy ‘buckyballs’ gained some currency as a term for the fullerene carbon molecules (despite Nature’s best efforts) – but its usage remains a little marginal, and it has thankfully never caught on for ‘buckytubes’, which everyone instead calls carbon nanotubes.

Attempts to label nascent fields rarely succeed, for names have a life of their own. ‘Nanotechnology’, when coined in 1974, had nothing like the meaning it has today. ‘Spintronics’, the field of quantum electronics that in some sense lies behind this year’s physics Nobel, is arguably a slightly ugly and brutal amalgam of electronics and the quantum property of electrons called spin – yet somehow it works.

Certainly, names need to be catchy: laboured plunderings of Greek and Latin are never popular. But catchiness is extremely hard to engineer. So somehow I don’t think we’re going to see the Geneva elections become a landmark in spookytechnology.

Thursday, October 18, 2007

How tortoises turn right-side up
[This is a story I’ve just written for Nature’s news site. But the deadline was such that we couldn’t include the researchers’ nice pics of tortoises and turtles doing their stuff. So here are some of them. The first is an ideal monostatic body, and a tortoise that approximates it. The second is a flat turtle righting itself by using its neck as a pivot. The last two are G. elegans shells, which are nearly monostatic.]

Study finds three ways that tortoises avoid getting stuck on their backs.

Flip a tortoise or a turtle over, and it’ll find its feet again. Two researchers have now figured out how they do it — they use a clever combination of shell shape and leg and neck manoeuvres.

As Franz Kafka’s Gregor discovered in Metamorphosis, lying on your back can be bad news if you’re cockroach-shaped. Both cockroaches and tortoises are potentially prone to getting stuck on their rounded backs, their feet flailing in the air.

For tortoises, this is more than an accidental hazard: belligerent males often try to flip opponents over during fights for territorial rights. Gábor Domokos of Budapest University of Technology and Economics and Péter Várkonyi of Princeton University in New Jersey took a mathematical look at real animals to see whether they had evolved sensible shapes to avoid getting stuck [1].

The ideal answer would seem to be to have a shell that can’t get stuck at all — one that will spontaneously roll back under gravity, like the wobbly children's toys that “won’t fall down”. Domokos and Várkonyi have investigated the rolling mechanics of idealized shell shapes, and show that in theory, such self-righting shells do exist. They would be tall domes with a cross-section like a half-circle slightly flattened on one side.

The shells of some tortoises, such as the star tortoise Geochelone elegans, come very close to this shape. They can still get stuck because of small imperfections in the shell shape, but it takes only a little leg-wagging to make the tortoise tip over and right itself. The researchers call tall shells that have a single stable resting orientation (on the tortoise's feet) monostatic, denoted as group S1.
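
The counting behind these shape classes can be illustrated with a two-dimensional toy – my own sketch, not the authors’ three-dimensional analysis; indeed, in 2D a convex shape of uniform density always has at least two stable balance points, which is why genuinely monostatic bodies need the third dimension. For a convex polygonal cross-section, resting on an edge is a stable equilibrium exactly when the centre of mass projects perpendicularly into the interior of that edge:

import numpy as np

def stable_edges(verts):
    # Count the edges of a convex polygon on which it can rest stably:
    # the centre of mass must project perpendicularly into the edge's interior.
    verts = np.asarray(verts, dtype=float)
    c = verts.mean(axis=0)      # vertex average: a crude centre of mass
    count = 0
    for i in range(len(verts)):
        p, q = verts[i], verts[(i + 1) % len(verts)]
        t = np.dot(c - p, q - p) / np.dot(q - p, q - p)
        count += (0 < t < 1)    # perpendicular foot lies inside the segment
    return count

print(stable_edges([(0, 0), (1, 0), (1, 1), (0, 1)]))    # square: 4
print(stable_edges([(0, 0), (10, 0), (11, 1), (1, 1)]))  # flat sliver: 2,
# like the flat shells: stable only belly-down or on its back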

The tall and the squat

So, tall shells are generally good for righting with minimal effort, and confer good protection against the jaws of predators. Could this be the best answer for all turtles and tortoises? No real chelonian has a perfectly monostatic shell, which Várkonyi says is probably because tall shells could have disadvantages too: you could be rolled over by wind, for instance. Also, he says, it takes quite a bit of fine-tuning to achieve a truly monostatic shape.

Flatter shells have other advantages: they can, for example, be better for swimming or for use as spade-like implements for digging. The side-necked turtle and the pancake tortoise are flat like this, with two stable resting positions (S2): right side up and on their back.

For such flat shells, righting requires more than a bit of thrashing around. These animals tend to have long necks, which they extend and use as a pivot while pushing with their legs. The longer the neck, the easier it is for the creature to right itself, in the same way that a long lever can be pushed down with less effort than a short one.
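
The lever arithmetic is simple: for a fixed righting torque, the force the animal must exert falls in inverse proportion to the length of the neck acting as the lever arm. A quick illustration with made-up numbers of my own (not measurements from the study):

torque_needed = 0.5                        # N·m of righting torque (hypothetical)
for neck_length in (0.05, 0.10, 0.20):     # lever arm in metres
    force = torque_needed / neck_length
    print(neck_length, force)              # 10, 5, 2.5 N: double the neck, halve the push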

Stuck in the middle

In between these two extremes of tall and flat are shells that are moderately domed, as found in Terrapene box turtles. Surprisingly, these have three stable positions (S3): on the back, on the front or halfway between, where the shell rests on its curved side.

Turtles of the S3 class use a combination of both strategies: bobbing of their head or feet tips the shell from the back-down position to the sideways position, and from there the creature can use its neck and feet to pivot over into the belly-down state.

The work is sure to be of interest to tortoise keepers and kids with turtle pets. But it's unlikely that this tortoise-rolling work is going to suggest new ways to help robots pick themselves up — engineers already have a number of quite simple ways of ensuring that. "You can just put ballast in the bottom," Várkonyi admits.

Reference
1. Domokos, G. & Várkonyi, P. L. Proc. R. Soc. B, doi:10.1098/rspb.2007.1188.

Tuesday, October 16, 2007

We’ll never know how we began

[This is the pre-edited text of my Crucible column for the November issue of Chemistry World.]

Oddly, it is easier to explore the origin of the universe than the origin of life on Earth. ‘Easier’ is a relative term here, because the construction of the Large Hadron Collider at CERN in Geneva makes clear the increasing extravagance needed to push back the curtain ever closer to the singularity of the Big Bang. But we can now reconstruct the origin of our universe from about 10^-30 seconds onwards, and the LHC may take us back into the primordial quark-gluon plasma and the symmetry-breaking transition of the Higgs field that created particle masses.

Yet all this is possible precisely because there is so little room for contingency in the first instants of the Big Bang. The further back we go, the less variation we are likely to find between our universe and another one hypothetically sprung from a cosmic singularity – most of what happened then is constrained by physics. So while the LHC might produce some surprises, it could instead simply confirm what we expected.

The origin of life is totally different. There isn’t really any theory that can tell us about it. It might have happened in many different ways, depending on circumstances of which we know rather little. In this sense, it is a genuinely historical event, immune to first-principles deduction in the same way as are the shapes of the early continents or the events of the Hundred Years War. What we know about the former is largely a matter of extrapolating backwards from the present-day situation, and then searching for geological confirmation. We can do the same for the history of life, constructing phylogenetic trees from comparisons of extant organisms and supplementing that with data from the fossil record. But that approach can tell us little about what life was like before it was really life at all.

For the Hundred Years War there is ample documentary evidence. But for life’s origin around 3.8 billion years ago, the geological ‘documents’ tell us very little indeed. Life left its imprint in the rocks once it was fully fledged, but there is no real data on how it got going.

It is a testament to the tenacity and boldness of scientists that they have set out to explore the question anyway. In 1863 Charles Darwin concluded that there was little point in doing so: “It is mere rubbish”, he wrote, “thinking at present on the origin of life.” But he evidently had a change of heart, since eight years later he could be found musing on his “warm little pond” filled with a broth of prebiotic compounds. By the time Alexander Oparin and J. B. S. Haldane speculated about the formation of organic molecules in primitive atmospheres in the 1920s, experimentalists had already shown that substances such as formaldehyde and the amino acid glycine could be cooked up from carbon oxides, ammonia and water.

There was, then, a long tradition behind the ground-breaking experiment of Harold Urey and Stanley Miller at Chicago in 1953. They, however, were the first to use a reducing mixture, and that is why they found such a rich mélange of organics in their brew. Despite geological evidence suggesting that the early terrestrial atmosphere was mildly oxidizing, Miller remained convinced until his recent death that this was the only plausible way life’s building blocks could have been made – some say his stubbornness on this issue ended up hindering progress in the field.

In some ways, the recent study by Paul von Ragué Schleyer of the University of Georgia and his coworkers of the prebiotic synthesis of the nucleic acid base adenine from hydrogen cyanide (D. Roy et al., Proc. Natl Acad. Sci. USA doi:10.1073/pnas.0708434104) is a far cry from Urey and Miller’s makeshift ‘bake and shake’ experiment. It uses state-of-the-art quantum chemical calculations to deduce the mechanism of this reaction, first reported by John Oró and coworkers in Texas in 1960, which produces one of the building blocks of life from five molecules of a single, simple ingredient.
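
The overall stoichiometry is strikingly economical – formally, five molecules of hydrogen cyanide condense into one of adenine, 5 HCN → C5H5N5 – although the mechanism the calculations map out proceeds through a sequence of intermediates rather than a single concerted step.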

But in another sense, the work might be read as an indication that the field initiated by Urey and Miller is close to having run its course in its present form. The most one could have asked of their approach – and it has amply fulfilled this demand – is that it alleviate George Wald’s objection in 1954 that “one only has to contemplate the magnitude of this task to concede that the spontaneous generation of a living organism is impossible.” There are now more or less plausibly ‘prebiotic’ ways to make most of the key molecular ingredients of proteins, RNA, DNA, carbohydrates and other complex biomolecules. There are ingenious ways of linking them together, in defiance of the deconstructive hydrolysis that dilute solution seems to threaten, ranging from surface catalysis on minerals to the use of electrochemical gradients at hot springs. There are theories of cascading complexification through autocatalytic cycles, and the whole framework of the RNA World (the answer to the chicken-and-egg problem of DNA’s dependence on proteins) seems increasingly well motivated.

And yet there is no more evidence than there was fifty years ago that this is how it all happened. Time has kicked over the tracks. The chemical origin of life has become a discipline of immense experimental and theoretical refinement, as this new paper testifies – and yet it all remains guesswork, barely constrained by hard evidence from the Hadaean eon of our planet. The true history is obliterated, and we may never glimpse it.

Sunday, October 07, 2007

Time to rethink the Outer Space Treaty
[This article on Nature’s news site formed part of the journal’s “Sputnik package”.]

An agreement forged 40 years ago can’t by itself keep space free of weaponry.

Few anniversaries have been celebrated with such mixed feelings as the launch of Sputnik-1 half a century ago. That beeping little metal orb, innocuously named “fellow traveller of Earth”, signalled the beginning of satellite telecommunications, global environmental monitoring, and space-based astronomy, as well as the dazzling saga of human journeys into the cosmos. But the flight of Sputnik was also a pivotal moment in the Cold War, a harbinger of intercontinental nuclear missiles and space-based surveillance and spying.

That’s why it seems surprising that another anniversary this year has gone relatively unheralded. In 1967, 90 nations signed the Outer Space Treaty (OST), in theory binding themselves to an agreement on the peaceful uses of space that prohibited the deployment there of weapons of mass destruction. Formally, the treaty remains in force; in practice, it is looking increasingly vulnerable as a protection against the militarization of space.

Updating and reinvigorating the commitments of the OST seems to be urgently needed, but this currently stands little chance of being realized. Among negotiators and diplomats there is now a sense of gloom, a feeling that the era of large-scale international cooperation and legislation on security issues (and perhaps more widely) may be waning.

Last year was the tenth anniversary of the Comprehensive Test Ban Treaty (CTBT), and next year the fortieth anniversary of the Nuclear Non-Proliferation Treaty. But the world’s strongest nuclear power, the United States, refuses to ratify the CTBT, while some commentators believe the world is entering a new phase of nuclear proliferation. No nuclear states have disarmed during the time of the NPT’s existence, despite the binding commitment of signatory states “to pursue negotiations in good faith on effective measures relating to nuclear disarmament”.

In this arena, the situation does seem to be in decline. For example, the US appears set on developing a new generation of nuclear weapons and deploying a ballistic missile defence system, and it withdrew from the Anti-Ballistic Missile Treaty in 2002. China and Israel have also failed to ratify the CTBT, while other nuclear powers (India, Pakistan) have not even signed it. North Korea, which withdrew from the NPT in 2003, now claims to have nuclear weapons.

Given how poorly we have done so close to home, what are the prospects for outer space? “For the past four decades”, says Sergei Ordzhonikidze, Director-General of the United Nations Office at Geneva, “the 1967 Outer Space Treaty has been the cornerstone of international space law. The treaty was a great historic achievement, and it still is. The strategic – and at the same time, noble and peaceful – idea behind [it] was to prevent the extension of an arms race into outer space.”

Some might argue that those goals were attained and that there has been no arms race in space. But a conference [1] convened in Geneva last April by the United Nations Institute for Disarmament Research suggested that the situation is increasingly precarious, and indeed that military uses of space are well underway and likely to expand.

Paradoxically, the thawing of the Cold War is one reason why the OST is losing its restraining power. During a confrontation between two nuclear superpowers, it is rather easy to see (and game theory confirms) that cooperation on arms limitation is in the national interest. But as Sergey Batsanov, Director of the Geneva Office of the Pugwash group for peaceful uses of science, pointed out in the UN meeting, “after the end of the Cold War, disarmament and non-proliferation in their traditional forms could no longer be considered as vital instruments for maintaining the over-all status quo.” Batsanov suggests we are now in a transitional phase of geopolitics in which new power structures are emerging and there is in consequence a “crisis in traditional international institutions, and the erosion, or perhaps evolution, of norms of international law (such as the inviolability of borders and non-interference in another state’s internal affairs).”

It’s not hard to see what he is alluding to there. Certainly, it seems clear that the US plans for maintaining “space superiority” – the “freedom to attack as well as the freedom from attack” – do much to harm international efforts on demilitarization of space. The tensions created with Russia by US plans to site missile defence facilities in eastern Europe are just one example of that. James Armor, Director of the US National Security Space Office, indicates that, following the “emergence of space-enabled transitional warfare” using satellite reconnaissance in Operation Desert Storm in Iraq in 1991, military space capabilities have now become “seamlessly integrated into the overall US military structure”.

But it would be unwise and unfair to imply that the United States is a lone ‘rogue agent’. China has given a clear display of military capability in space; as Xu Yansong of the National Space Administration of the People’s Republic of China explained at the UN conference, China’s space activities are aimed not only at “utilizing outer space for peaceful purposes” but “protecting China’s national interests and rights, and comprehensively building up the national strength” – which could be given any number of unsettling interpretations. Yet China, like Russia, has been supportive of international regulation of space activities, and it’s not clear how much of this muscle-flexing is meant to create a bargaining tool.

The real point is that the OST is an agreement forged in a different political climate from that of today. Its military commitments amount to a prohibition of nuclear weapons and other “weapons of mass destruction” in space, and the use of the Moon and other celestial bodies “exclusively for peaceful purposes.” That’s a long way from prohibiting all space weapons. As Kiran Nair of the Indian Air Force argued, “the OST made certain allowances for military uses of outer space [that] were exploited then, and are exploited now and will continue to be so until a balanced agreement on the military utilization of outer space is arrived at.”

What’s more, there was no explicit framework in the OST for consultations, reviews and other interactions that would sustain the treaty and ensure its continued relevance. And as Batsanov says, now there are more players in the arena, and a wider variety of potential threats.

Both Russia and China have called for a new treaty, and earlier this year President Putin announced the draft of such a document. But we don’t necessarily need to ditch the OST and start anew. Indeed, the treaty has already been the launch pad for various other agreements, for example on liability for damage caused by space objects and on the rescue of astronauts. It makes sense to build on structures already in place.

The key to success, however, is to find a way of engaging all the major players. In that respect, the United States still seems the most recalcitrant: its latest National Space Policy, announced in October 2006, states that the OST is sufficient and that the US “will oppose the development of new legal regimes or other restrictions that seek to prohibit or limit US access to or use of space.” In other words, only nuclear space weaponry is to be considered explicitly out of bounds. Armor made the prevailing Hobbesian attitude clear at the Geneva meeting: “In my view, attempts to create regimes or enforcement norms that do not specifically include and build upon military capabilities are likely to be stillborn, sterile and ultimately frustrating efforts.” Whatever framework he envisages, it’s not going to look much like the European Union.

But it needn’t be a matter of persuading nations to be more friendly and less hawkish. There are strong arguments for why pure self-interest in terms of national security (not to mention national expenditure) would be served by the renunciation of all plans to militarize space – just as was the case in 1967. Rebecca Johnson of the Acronym Institute for Disarmament Diplomacy pointed out that after the experience in Iraq, US strategists are “coming to see that consolidating the security of existing assets is more crucial than pursuing the chimera of multi-tiered invulnerability.” The recent Chinese anti-satellite test, far from being a red flag to a bullish military, might be recognized as an indication that no one stays ahead in this race for long, and the US knows well that arms races are debilitating and expensive.

The danger with the current Sputnik celebrations is that they might cast the events in 1957 as pure history, which has now given us a world of Google Earth and the International Space Station. The fact is that Sputnik and its attendant space technologies reveal a firm link between the last world war, with its rocket factories manned by slaves and its culmination in the instant destruction of two cities, and the world we now inhabit. The OST is not merely a legacy of Sputnik but the only real international framework for the way we use space. Unless it can be given fresh life and relevance, we have no grounds for imagining that the military space race is over.

Reference
1. Celebrating the Space Age: 50 Years of Space Technology, 40 Years of the Outer Space Treaty (United Nations Institute for Disarmament Research, Geneva, 2007).

Wednesday, October 03, 2007

Yet more memory of water

This month’s issue of Chemistry World carries a letter from Martin Chaplin and Peter Fisher in response to my column discussing the special issue of Homeopathy on the ‘memory of water’. Mark Peplow asked if I wanted to respond, but I told him that he should regard publication of my response as strictly optional. In the event, he rightly chose to use the space to include another letter on the topic. So here for the record is Martin and Peter’s letter, and my response. I suppose I could be a little annoyed by the misrepresentation of what I said at the end of their letter, but I’m happy to regard it as miscomprehension.

From Martin Chaplin and Peter Fisher

We put together the ‘Memory of water’ issue of the journal Homeopathy, the subject of Philip Ball’s recent column (Chemistry World, September 2007, p38), to show the current state of play. It contained all the current scientific views representing the different experimental and theoretical approaches to the ‘memory of water’ phenomena. Some may be important and others less so, but now the different areas of the field can be fairly judged. The papers mostly demonstrated the similar theme that water preparations may have unexpected properties, contain unexpected solutes and show unexpected changes with time; all very worthy of investigation. Although not the main purpose of the papers, we show the problems as much as the potential of these changed properties in relation to homeopathy.

Ball skirts over the unexpected experimental findings that he finds ‘puzzling’, so ignoring the very heart of the phenomena we are investigating and misinterpreting the issue. He backs up his argument with statements concerning pure water and silicate solutions that are clearly not relevant to the present discussion. Also, he uses Irving Langmuir to prop up his argument. This is fitting as Langmuir dismissed the Jones-Ray effect (http://www.lsbu.ac.uk/water/explan5.html#JR), whereby the surface tension of water is now known to be reduced by low concentrations of some ions, as this disagreed with his own theories. Finally Ball finishes with the amazing view that he knows the structure of water in such solutions with great confidence; I wish he would share that knowledge with the rest of us.

M F Chaplin CChem FRSC, London, UK

P Fisher, Editor, Homeopathy, Luton, UK

Response from Philip Ball

I have discussed elsewhere some of the experimental papers to which Chaplin and Fisher refer (see http://www.nature.com/news/2007/070806/full/070806-6.html and www.philipball.blogspot.com). Some of those observations are intriguing, but each raises its own unique set of questions and concerns, and they couldn’t possibly all be discussed in my column. Langmuir’s ideas feature nowhere in my argument; I simply point out that he coined the term ‘pathological science.’ If the issues I raise about silicate self-organization are not relevant to the discussion, why do Anick and Ives mention them in their paper? And I never stated that I or anyone else knows the structure of water or aqueous solutions with great confidence; I merely said that there are some things we do know with confidence about water’s molecular-scale structure (such as the timescale of hydrogen-bond making and breaking in the pure liquid), and they should not be ignored.

Monday, October 01, 2007

What’s God got to do with it

There’s a curious article in the September issue of the New Humanist by Yves Gingras, a historian and sociologist of science at the University of Quebec. Gingras is unhappy that scientists are using references to God to sell their science (or rather, their books), thereby “wrap[ping] modern scientific discoveries in an illusory shroud that insinuates a link between cutting-edge science and solutions to the mysteries of life, the origins of the universe and spirituality.” But who are these unscrupulous bounders? Well… Paul Davies, and… and Paul Davies, and… ah, and Frank Tipler. Well yes, Tipler. My colleagues and I decided recently that we should introduce the notion of the Tipler Point, being the point beyond which scientists lose the plot and start rambling about the soul/immortality/parallels between physics and Buddhism. A Nobel prize is apt to take you several notches closer to the Tipler Point, though clearly it’s not essential. And such mention of Buddhism brings us to Fritjof Capra, and if we’re going to admit him to the ranks of ‘scientists’ who flirt with mysticism then the game is over and we might as well bring in Carl Jung and Rudolf Steiner.

Gingras suggests that the anthropic principle is “bizarre and clearly unscientific”, and that it has affinities with intelligent design. Now, I’m no fan of the anthropic principle (see here), but I will concede that it is actually an attempt to do the very opposite of what intelligent design proposes – to obviate the need to interpret the incredible fine-tuning of the physical universe as evidence of design. The fact is that this fine-tuning is one of the most puzzling issues in modern physics, and if I were a Christian of the sort who believes in a Creator (not all have that materialist outlook), I’d seize on this as a pretty strong indication that my beliefs are on to something. The Templeton Foundation, another of Gingras’s targets, has hosted some thoughtful meetings on the theme of fine-tuning, and while I’m agnostic about the value and/or motives of the Templeton Foundation, I don’t obviously see a need to knock them for raising the question.

Paul Davies has indeed hit a lucrative theme in exploring theological angles of modern cosmology, but he does so in a measured and interesting way in which I don’t at all recognize Gingras’s description of “X-files science” or an “oscillation between science and the paranormal.” Frankly, I’m not sure Gingras is on top of his subject – when, as I expected resignedly, he fishes out Stephen Hawking’s famous “mind of God” allusion, he seems to see it as a serious suggestion, and not simply as an attempt by an excellent scientist but indifferent writer to inject a bit of pizzazz into his text. Hawking’s reference is obviously theologically naïve, and gains supposed gravitas only because of the oracular status that Hawking has, for rather disturbing reasons, been accorded.

Still, I suppose I will also be deemed guilty of peddling religious pseudo-science for daring to look, in my next book, at the theological origins of science in the twelfth century…