Friday, May 23, 2014

A chance of snow

Here’s my latest piece for BBC Future.


So what happened to the severe winter that we in the UK were warned about last November? Instead there was less snow than in recent years, but it rained incessantly. Sure, the winter was unusually bitter – if you live in Buffalo, not Bristol. Why can’t the forecasters get it right?

Well, here’s the thing: the UK’s Met Office didn’t forecast anything like this at all. They said simply that “temperatures are likely to remain near or slightly below average for the time of year, but otherwise fairly normal conditions for early winter are most likely”.

But more importantly, the Met Office has tried repeatedly to explain that “it’s not currently scientifically possible to provide a detailed forecast over these long timescales”. As far as snow is concerned, they said last November, “because there are so many factors involved, generally that can only be discussed in any detail in our five day forecasts.” To which commendable honesty you might be inclined to respond “a fat lot of good.”

Yet hope is at hand. In a new paper published in Geophysical Research Letters, a team of Met Office scientists say that, thanks to advances in modelling the North Atlantic weather system, “key aspects of European and North American winter climate… are highly predictable months ahead”.

So what’s changed? Well, in part it all depends on what you mean by a weather forecast. You, I or Farmer Giles might want to know if it is going to rain or snow next Thursday, so that we can anticipate how to get to work or whether to gather in the herds. But the laws of physics have set a fundamental limit on how far in advance that kind of detail can be predicted. The weather, crucially dependent on the turbulent flows of atmospheric circulation, is a chaotic system, which means that the tiniest differences in the state of the system right now can lead to completely different outcomes several days down the line.
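That sensitivity to initial conditions can be seen in even the simplest nonlinear systems. Here is a minimal sketch in Python using the logistic map, a textbook toy model of chaos (nothing to do with real weather physics), in which two starting states differing by one part in a million soon become completely unrelated:

```python
# A toy illustration of sensitive dependence on initial conditions,
# using the logistic map (a standard chaos demo, not a weather model).
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate x -> r*x*(1-x), returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # differs by one part in a million

# Early on the two trajectories are indistinguishable; a few dozen
# steps later they bear no relation to each other.
print(abs(a[3] - b[3]))                                  # still tiny
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))   # order one
```

The gap between the trajectories roughly doubles at every step, which is exactly why a forecast's useful horizon grows only slowly no matter how precisely you measure today's weather.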

Beyond about ten days in the future, it is therefore mathematically impossible to forecast details such as local rainfall, no matter how much you know about the state of the weather system today. All the same, advances in satellite observations and computer modelling mean that forecasting has got better over the past few decades. No matter how cynically you like to see it, the five-day forecast for Western Europe is now demonstrably more accurate than the three-day forecast was in the late 1980s.

Despite this limited window of predictability, some aspects of weather – such as general average temperature – can be forecast much further in advance, because they may depend on features of the climate system that change more slowly and predictably, such as ocean circulation. That’s what lies behind the Met Office’s current optimism about winter forecasting in the North Atlantic region.

This area has always been difficult, because the state of the atmosphere doesn’t seem to be as closely linked to ocean circulation as it is in the tropics, where medium-range forecasting is already more reliable. But it’s precisely because the consequences of a colder-than-usual winter may be more severe at these higher latitudes – from disruption to transport to risk of hypothermia – that the shortcomings of winter weather forecasts are more keenly felt there.

The most important factor governing the North Atlantic climate on the seasonal timescale is an atmospheric phenomenon called the North Atlantic Oscillation (NAO). This is a difference in air pressure between the low-pressure region over Iceland and the high-pressure region near the Azores, which controls the strength of the westerly jet stream and the tracks of Atlantic storms. The size of this pressure difference, and the locations of the two regions, see-saw back and forth every few years, but with no regular periodicity. It’s a little like the better-known El Niño climate oscillation in the tropical Pacific Ocean, but is confined largely to the atmosphere, whereas El Niño – which is more regular and now fairly predictable – involves changes in sea surface temperatures.

If we could predict the fluctuations of the NAO more accurately, this would give a sound basis for forecasting likely winter conditions. For example, when the difference between the high- and low-pressure “poles” of the NAO is large, we can expect high winds and storminess. The Met Office team reports that a computer model of the entire global climate called Global Seasonal Forecast System 5 (GloSea5) can now provide good predictions of the strength of the NAO one to four months in advance. They carried out “hindcasts” for 1993-2012, using the data available several months before each winter to make a forecast of the winter state of the NAO and then comparing that with the actual outcome. The results lead the team to claim that “useful levels of seasonal forecast skill for the surface NAO can be achieved in operational dynamical forecast systems”.
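The "skill" in such hindcast studies boils down to how well the predicted NAO index correlates with the observed one. Here is a bare-bones sketch of that comparison; the index values below are invented for illustration and are not the Met Office's GloSea5 data:

```python
# How hindcast "skill" is scored: correlate predicted winter NAO index
# values against observed ones. The numbers are invented for
# illustration; they are NOT the GloSea5 results.
from math import sqrt

def pearson_correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

predicted = [-0.8, 0.3, 1.1, -0.2, 0.9, -1.4]  # hypothetical forecasts
observed = [-0.5, 0.6, 0.9, -0.6, 1.3, -1.0]   # hypothetical outcomes
skill = pearson_correlation(predicted, observed)
print(round(skill, 2))  # values near 1 mean the forecasts track reality
```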

What is it that makes the notoriously erratic NAO well predicted by this model? The researchers can’t fully answer this question yet, but they say that the fluctuations seem to be at least partly governed by the El Niño cycle. The NAO is also linked to the state of the Arctic sea ice and to a roughly two-year cycle in the winds of the tropical lower stratosphere. If the model can get those things right, it is likely to forecast the NAO pretty well too.

If we can make good predictions of general winter climate, says the Met Office team, then this should help with assessing – months in advance – the risks of dangerously high winter winds and of transport chaos, as well as predicting variations in wind energy and setting fuel-pricing policies. All of which, you must admit, is probably in the end more useful than knowing whether to expect a white Christmas.

Paper: A. A. Scaife et al., Geophysical Research Letters 41, 2514-2519 (2014)

Thursday, May 22, 2014

Forgotten prophet of the Internet

Here is my review of Alex Wright’s book on Paul Otlet, published in Nature this week.


Cataloguing the World:
Paul Otlet and the Birth of the Information Age
Alex Wright
Oxford University Press, New York, 2014
ISBN 978-0-19-993141-5
384 pages, $27.95

The internet is often considered to be one of the key products of the computer age. But as Alex Wright, a former staffer at the New York Times, shows in this meticulously researched book, it has a history that predates digital technology. The organization of information has challenged us for as long as we have had libraries, and in the late nineteenth century the Belgian librarian Paul Otlet conceived schemes for the collection, storage, automated retrieval and remote distribution of the sum total of human knowledge that have clear analogies with the way information today is archived and networked on the web. Wright makes a persuasive case that Otlet – a largely forgotten figure today – deserves to be ranked among the inventors of the internet.

It is possible to push the analogies too far, however, and to his credit Wright attempts to locate Otlet’s work within a broader narrative about the encyclopaedic collation and cataloguing of information. Compendia of knowledge date back at least to Pliny’s Natural History and the cut-and-paste collections of Renaissance scholars such as Conrad Gesner, although these were convenient (and highly popular) digests of typically uncited sources. Otlet, in contrast, sought to collect everything – newspapers, books, pamphlets – and to devise a system for categorizing the contents akin to (indeed, a rival of) the Dewey decimal system. Wright tells a rather poignant story of the elderly, perhaps somewhat senile Otlet stacking up jellyfish on a beach and then placing on top an index card bearing the number 59.33: the code for Coelenterata in his Universal Decimal Classification.

But the real focus of this story is not the antecedents of the internet at all. It concerns the dreams that many shared around the fin de siècle, and again after the First World War, of a utopian world order that united all nations. This was Otlet’s grander vision, to which his collecting and cataloguing schemes were merely instrumental. His efforts to create a repository of all knowledge, called the Palais Mondial (World Palace), were conducted with his friend Henri La Fontaine, the Belgian politician and committed internationalist who was awarded the Nobel peace prize in 1913. The two men imagined setting up an “intellectual parliament” for all humanity. In part, their vision paved the way for the League of Nations and subsequently the United Nations – although Otlet was devastated when the Paris Peace Conference in 1919 elected to establish the former in Geneva in neutral Switzerland rather than in Brussels, where his Palais Mondial was situated. But in part, their objective amounted to something far more grandiose, utopian and strange.

While world government was desired by many progressive, left-leaning thinkers during the inter-war period, such as H. G. Wells (whom Otlet read), Otlet’s own plans often seemed detached from mundane realities, which left leaders and politicians unconvinced and doomed Otlet to constant frustration and ultimate failure. When Henry James dismisses the scheme Otlet concocted with a Norwegian-American architect to construct an immense “World City”, you can’t help feeling he has put his finger on the problem: “The World is a prodigious & portentous & immeasurable affair… so far vaster in complexity than you or me”.

Wright overlooks the real heritage of these ideas of Otlet’s. They veered into mystical notions of transcendence of the human spirit, influenced by Theosophy, and Otlet seems to have imagined that learning could be transmitted not by careful study of documents but by a kind of symbolic visual language condensed into posters and displays. The complex of buildings called the Mundaneum that he planned with the architect Le Corbusier was full of sacred symbolism, as much a temple as a library/university/museum. Here Otlet’s predecessor is not Gesner but the Italian philosopher Tommaso Campanella, who in 1602 described a utopian “City of the Sun” in which knowledge was imbibed by the citizens from great, complex paintings on the city walls. This aspect of Otlet’s dreams makes them as much backward-looking to Neoplatonism and Gnosticism as forward-looking to the information age and the internet.

But the future was there too, for example in Otlet’s advocacy of the miniaturization of documents (on microfilm) and his plans for automatic systems that could locate information like steampunk search engines. He considered that his vast collection of information at the proposed Mundaneum (the real structure never actually amounted to more than a corner of the Palais Mondial, from which he was rudely ejected in 1924 by the Belgian government) might be broadcast to users worldwide by radio, and stored in a kind of personal workstation called a Mondotheque, equipped with microfilm reader, telephone, television and record player.

All this can be correlated with the software and hardware of today. But Wright recognizes that the comparison only goes so far. In particular, Otlet’s vision was consistent with the social climate of his day: centralized, highly managed and hierarchical, quite unlike the distributed, self-organized peer-to-peer networks concocted by anti-establishment computer wizards in the 1960s and 70s. And while our ability now to access an online scan of Newton’s Principia would have delighted Otlet, the fact that so much more of our network traffic involves cute cats and pornography would have devastated him.

The poor man was devastated enough. After losing government support in 1934, Otlet managed to cling to a corner of the Palais Mondial until much of his collection was destroyed by the Nazis in 1940. He salvaged a little, and it mouldered for two decades in various buildings in Brussels. What remains now sits securely but modestly in the Mundaneum in Mons – not a grand monument but a former garage. But there is another Mundaneum in Brussels: a conference room given that name in Google’s European bureau. It is a fitting tribute, and Wright has offered another.

Quantum or not?

Here’s the original text of my article on D-Wave’s “quantum computers” (discuss) in the March issue of La Recherche.


Google has a quantum computer, and you could have one too. If you have $10m to spare, you can buy one from the Canadian company D-Wave, based in Burnaby, British Columbia. The aerospace and advanced technology company Lockheed Martin has also done so.

But what exactly is it that you’d be buying? Physically, the D-Wave 2 is almost a caricature: a black box the size of a large wardrobe. But once you’ve found the on-switch, what will the machine do?

After all, it has only 512 bits – or rather, qubits (quantum bits) – which sounds feeble compared with the billions of bits in your iPad. But these are bits that work according to quantum rules, and so they are capable of much, much more. At least, that is what D-Wave, run by engineer and entrepreneur Geordie Rose, claims. But since the company began to launch its commercial products in 2011, it has been at the centre of a sometimes rancorous dispute about whether they are truly quantum computers at all, and whether they can really do things that today’s classical computers cannot.

One of the problems is that D-Wave seemed to come out of nowhere. Top academic and commercial labs around the world are struggling to juggle more than a handful of qubits at a time, and they seem to be very far from having any kind of computer that can solve useful problems. Yet the Canadian company just appeared to pop up with black boxes for sale. Some scepticism was understandable. What’s more, the D-Wave machines use a completely different approach to quantum computing from most other efforts. But some researchers wonder if they are actually exploiting quantum principles at all – or even if they are, whether this has any advantages over conventional (classical) computing, let alone over more orthodox routes to quantum computers.

While there is now wider acceptance that something ‘quantum’ really is going on inside D-Wave’s black boxes, the issue of what they can achieve remains in debate. But that debate has opened up questions that are broader and more interesting than simply whether there has been a bit of over-zealous salesmanship. At root the issues are about how best to harness the power of quantum physics to revolutionize computing, and indeed about what it even means to do so, and how we can know for sure if we have done so. What truly makes a computer quantum, and what might we gain from it?

Defying convention

Quantum computing was first seriously mooted in the 1980s. Like classical computing, it manipulates data in binary form, encoded in 1’s and 0’s. But whereas a classical bit is either in a 1 or 0 state independently of the state of other bits, quantum bits can be placed in mixtures of these states that are correlated (entangled) with one another. Quantum rules then enable the qubits to be manipulated using shortcuts, while classical computers have to slog through many more logical steps to get the answer.
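For readers who like to see the bookkeeping: an n-qubit register is described by 2^n complex amplitudes rather than a single n-bit value, which is both the source of the power and the reason classical simulation becomes hopeless. A toy sketch (illustrative arithmetic only, not a real quantum simulator), using the entangled two-qubit Bell state as an example:

```python
# Amplitude bookkeeping for qubits (illustrative, not a real simulator).
# An n-qubit state needs 2**n complex amplitudes; an n-bit classical
# register is just one of 2**n values.
import math

# The Bell state: equal amplitudes on |00> and |11>, none on |01> or |10>.
# Measuring one qubit fixes what the other will read, the hallmark of
# entanglement between qubits.
amp = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]  # order: |00>,|01>,|10>,|11>
probs = [a * a for a in amp]
print(probs)  # half the time '00', half the time '11', never '01' or '10'

# The state-vector size explodes with qubit count:
for nq in (8, 32, 512):
    print(nq, "qubits ->", 2 ** nq, "amplitudes")
```

The last loop hints at why a 512-qubit machine cannot simply be checked by classical simulation: no conceivable computer can store that many amplitudes explicitly.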

Much of the difficulty lies in keeping groups of qubits in their delicate quantum states for long enough to carry out the computation. They rapidly lose their mutual coordination (become “decoherent”) if they are too disturbed by the thermal noise of their environment. So qubits must be well isolated from their surroundings and kept very cold. Like some other research groups, D-Wave makes qubits from tiny rings of superconducting material, where roughly speaking the 1’s and 0’s correspond to electrical currents circulating in opposite directions. They’re kept at a temperature of a few hundredths of a degree above absolute zero, and most of the volume in the black box is needed to house the cooling equipment. But while the other labs have prototypes with perhaps half a dozen qubits that are nowhere near the marketplace, D-Wave is out there selling their machines.

According to the Canadian company’s researchers, they’ve got further because D-Wave has chosen an unconventional strategy. One of the key problems with quantum computing is that quantum rules are not entirely predictable. Whereas a classical logic gate will always give a particular output for a particular input of 1’s and 0’s, quantum physics is probabilistic. Even if the unpredictability is very small, it rapidly multiplies for many qubits. As a result, most quantum computer architectures will rely on a lot of redundancy: encoding the information many times so that errors can be put right.

D-Wave’s approach, called quantum annealing, allegedly circumvents the need for all that error correction (see Box 1). It means that the circuits aren’t built, like classical computers and most other designs for quantum computers, from ‘logic gates’ that take particular binary inputs and convert them to particular outputs. Instead, the circuits are large groups of simultaneously interacting qubits, rather like the atoms of a magnet influencing one another’s magnetic orientation.


Box 1: Quantum annealing

Computing by quantum annealing means looking for the best solution to a problem by searching simultaneously across the whole ‘landscape’ of possible solutions. It’s therefore a kind of optimization process, exemplified by the Travelling Salesman problem, in which the aim is to find the most efficient route that visits every node in a network. The best solution can be considered the ‘lowest-energy’ state – the so-called ground state – of the collection of qubits. In classical computing, that can be found using the technique of simulated annealing, which means jumping around the landscape at random looking for lower-lying ground. By allowing for some jumps to slightly increase the energy, it’s possible to avoid getting trapped in local, non-optimal dips. Quantum annealing performs an analogous process, except that rather than hopping classically over small hills and cols, the collective state of the qubits can tunnel quantum-mechanically through these barriers. What’s more, it works by starting with a flat landscape and gradually raising up the peaks, all the while keeping the collective state of the qubits pooled in the ground state.

Optimization problems like the Travelling Salesman belong to a class called NP-hard problems. Working out how the amino acid chain of a protein folds up into its most stable shape is another example, of immense importance in molecular biology. These are very computationally intense challenges for classical computers, which generally have to simply try out each possible solution in turn.
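To make the classical half of that comparison concrete, here is a minimal simulated-annealing sketch for a toy Travelling Salesman instance: six cities arranged on a circle, so the optimal tour is known in advance to be the circle itself. This is the classical baseline described above, not D-Wave's quantum procedure, and every detail (the cooling schedule, the move set) is illustrative:

```python
# Classical simulated annealing on a toy Travelling Salesman instance.
# Six cities on a unit circle: the shortest tour is the circle itself.
import math
import random

random.seed(0)

N = 6
cities = [(math.cos(2 * math.pi * k / N), math.sin(2 * math.pi * k / N))
          for k in range(N)]

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % N]])
               for i in range(N))

tour = list(range(N))
random.shuffle(tour)       # start from a random, probably bad, tour
best = tour[:]
T = 1.0                    # "temperature" controlling uphill jumps
for _ in range(20000):
    # Propose a move: reverse a random segment of the tour (a 2-opt move).
    i, j = sorted(random.sample(range(N), 2))
    cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
    d = tour_length(cand) - tour_length(tour)
    # Always accept downhill moves; accept uphill ones with Boltzmann
    # probability, so the search can escape local minima while T is high.
    if d < 0 or random.random() < math.exp(-d / T):
        tour = cand
        if tour_length(tour) < tour_length(best):
            best = tour[:]
    T *= 0.9995            # gradually cool the system

print(best, round(tour_length(best), 3))
```

With more cities the same code runs unchanged, but the time needed to find good tours grows steeply; that scaling regime is exactly where quantum annealing hopes to help.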

Quantum annealers, explains physicist Alexandre Zagoskin of Loughborough University in England, a cofounder of D-Wave who left the company in 2005, are analog devices: less like digital logic-gate computers and more like slide rules, which calculate with continuous quantities. Logic-gate computers are ‘universal’ in the sense that they can simulate each other: you can perform the same computation using electrical pulses or ping-pong balls in tubes. “The obsession with universal quantum computing created unrealistic expectations, overhype, disillusionment and fatigue, and it keeps many theorists developing software for non-existing quantum ‘Pentiums’”, says Zagoskin. “In the foreseeable future we can make, at best, quantum slide rules like quantum annealers.”


That makes a quantum annealer well suited to solving some problems but not others. “Instead of trying to build a universal computer”, says computer scientist Catherine McGeoch of Amherst College in Massachusetts, “D-Wave is going for a sort of ‘quantum accelerator chip’ that aims to solve one particular class of optimization problem. But this is exactly the class of problem where theoreticians think the important quantum speedups would be found, if they exist. And they are important computational problems in practice.” As Zagoskin puts it, D-Wave might be a few-trick device, but “if the trick is useful enough, and the performance-to-price ratio is good, who cares?”

Aside from avoiding error correction, quantum annealing (QA) allegedly has other advantages. Daniel Lidar, scientific director of the University of Southern California–Lockheed Martin Quantum Computing Center in Los Angeles, which uses D-Wave’s latest commercial machine D-Wave 2, explains that it relaxes the need for qubits to be switched fast, compared to the logic-gate approach. That in turn means that less heat is generated, and so it’s not such a struggle to keep the circuits cool when they contain many qubits. Mark Johnson, D-Wave’s chief scientific officer, sees various other practical benefits of the approach too. “We believe a quantum annealing processor can be built at a useful scale using existing technology, whereas one based on the gate model of quantum computing cannot”, he says. There are many reasons for this, ranging from greater resistance against decoherence to the less taxing demands on the materials, on control of the environment, and on interfacing with users.

However, there are arguments about whether QA actually solves optimization tasks faster than classical algorithms. “No good reason has ever been given”, says John Smolin of IBM’s T. J. Watson Research Center in Yorktown Heights, New York, where much of the pioneering theory of quantum computation was developed in the 1980s.

Put it to the test

Perhaps the best way to resolve such questions is to put D-Wave’s machines to the test. McGeoch, with Cong Wang at Simon Fraser University in Burnaby, has pitted them against various classical algorithms for solving NP-hard optimization problems. They primarily used a D-Wave circuit called Vesuvius-5, with 439 working qubits, and found that it could find answers in times that were at least as good as, and in some cases up to 3,600 times faster than, the classical approaches. The speed-up got better as the number of elements in the problem (the number of destinations for the salesman, say) increased, up to the maximum of 439.

But not everyone is persuaded. Not only is D-Wave playing to its strengths here, but the speed-up is sometimes modest at best, and there’s no guarantee that faster classical algorithms don’t exist. Smolin still doubts that D-Wave’s devices “have solved any problem faster than a fair comparison with conventional computers.” True, he says, it’s harsh to compare D-Wave’s brand new machine with those coming from a 50-year, trillion-dollar industry. But that after all is the whole point. “History has shown that silicon-based computers always catch up in the end”, he says. “Currently, D-Wave is not actually faster at its own native problem than classical simulated annealing – even a relatively naive program running on standard hardware, written by me, more or less keeps up. If I spent $10 million on custom hardware, I expect I could beat the running time achieved by D-Wave by a very large amount.” And the real question is whether D-Wave can maintain an advantage as the problems it tackles are scaled up. “There is no evidence their larger machines are scaling well”, Smolin says.

Besides, Lidar notes, it remains a big challenge to express computational problems in a way that D-Wave can handle. “Most optimization problems such as protein folding involve a large amount of preprocessing before they can be mapped to the current D-Wave hardware”, he says. “The pre-processing problem may itself be computationally hard.”

Quite aside from the putative speed advantages, how can we tell if D-Wave’s machines are using quantum rules? There’s some suggestive evidence of that. In 2011 Johnson and colleagues at D-Wave reported that an 8-qubit system on the 128-qubit chip of D-Wave 1 showed signs that it was conducting true quantum annealing, because the experimental results didn’t fit with what the qubits were predicted to do if they were behaving just like classical bits. Lidar and his colleagues subsequently conducted more exacting tests of D-Wave 1, showing that even 108 coupled qubits find their ground state in a way that doesn’t fit with the predictions of classical simulated annealing. Perhaps surprisingly, a ‘quantum’ signature of the behaviour remains even though the timescale needed to find the optimal solution was considerably longer than that over which thermal noise can scramble some of the quantum organization. “In this sense QA is more robust against noise than the gate model”, says Lidar.

But he stresses that “these types of experiments can only rule out certain classical models and provide a consistency check with other quantum models. There’s still the possibility that someone will invent another classical model that can explain the data, and such models will then have to be ruled out one at a time.” So none of these tests, he explains, is yet a “smoking gun” for quantum annealing.

Besides, says Zagoskin, the more qubits you have, the less feasible it becomes to simulate (using classical computers) the behaviour of so many coherent qubits to see how they should act. To anticipate how such a quantum computer should run, you need a quantum computer to do the calculations. “The theory lags badly behind the experimental progress, so that one can neither predict how a given device will perform, nor even quantify the extent of its ‘quantumness’”, he says.

Joseph Fitzsimons of the Center for Quantum Technologies of the National University of Singapore finds Lidar’s tests fairly persuasive, but adds that “this evidence is largely indirect and not yet conclusive.” Smolin is less sanguine. “My opinion is that the evidence is extremely weak”, he says. The whole question of what it means to “be quantum” is a deep and subtle one, he adds – it’s not just a matter of showing you have devices that work by quantum rules, but of showing that they give some real advantage over classical devices. “No one is denying that the individual superconducting loops in the D-Wave machine are superconducting, and it is well accepted that superconductivity is a manifestation of a quantum effect.” But quantum rules also govern the way transistors in conventional computers function, “and one doesn’t call those quantum computers.”

How would you know?

To assess the performance of a quantum computer, one needs to verify that the solutions it finds are correct. Sometimes that’s straightforward, as for example with factorization of large numbers – the basis of most current encryption protocols, and one of the prime targets for quantum computation. But for other problems, such as computer simulation of protein folding, it’s not so easy to see if the answer is correct. “This raises the troubling prospect that in order to accept results of certain quantum computations we may need to implicitly trust that the device is operating correctly”, says Fitzsimons.
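Factorization shows the asymmetry neatly: checking a claimed factorization takes a single multiplication, while finding the factors from scratch requires a search that blows up with the size of the number. A sketch with deliberately tiny numbers:

```python
# Verification vs. search: checking a claimed factorization is instant,
# while finding the factors takes work that grows quickly with size.
def verify_factorization(p, q, n):
    # Polynomial-time check: one multiplication and two sanity tests.
    return p * q == n and p > 1 and q > 1

def factor_by_trial_division(n):
    # Brute-force search, exponential in the number of digits of n.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

n = 1009 * 2003  # a small semiprime; real crypto moduli have hundreds of digits
print(verify_factorization(1009, 2003, n))   # True, checked instantly
print(factor_by_trial_division(n))           # recovers (1009, 2003) the slow way
```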

One alternative, he says, is to use so-called interactive proofs, where the verifier forms a judgement about correctness on the basis of a small number of randomly chosen questions about how good the solution is. Fitzsimons and his collaborators recently demonstrated such an interactive proof of quantum effects in a real physical “quantum computer” comprised of just four light-based qubits. But he says that these methods aren’t applicable to D-Wave: “Unfortunately, certain technological limitations imposed by the design of D-Wave’s devices prevent direct implementation of any of the known techniques for interactive verification.”

For NP-hard problems, this question of verification goes to the heart of one of the most difficult unresolved problems in mathematics: there is as yet no rigorous proof that such problems can’t be solved faster by classical computers – perhaps we just haven’t found the right algorithm. In that case, showing that quantum computers crack these problems faster doesn’t prove that they use (or depend on) quantum rules to do it.

Part of this problem also comes down to the lack of any agreement on how quantum computers might achieve speed-up in the first place. While early proposals leant on the notion that quantum computers would be carrying out many computations in parallel, thanks to the ability of qubits to be in more than one state at once, this idea is now seen as too simplistic. In fact, there seems to be no unique explanation for what might make quantum computation faster. “Having a physical quantum computer of interesting size to experiment on has not produced answers to any of these open theoretical questions”, says McGeoch. “Instead experiments have only served to focus and specialize our questions – now they are more numerous and harder to answer. I suppose that’s progress, but sometimes it feels like we’re moving backwards in our understanding of what it all means.”

None of this seems about to derail the commercial success of D-Wave. “We plan to develop processors with more qubits”, says Johnson. “Of course there are more dimensions to processor performance, such as the choice of connecting qubits, or the time required to set up a problem or to read out a solution. We’re working to improve processor performance along these lines as well.”

There’s certainly no problem of sheer size. The superconducting integrated-circuit chip at the core of D-Wave’s devices is “somewhat smaller than my thumbnail”, says Johnson. Besides, the processor itself takes up a small fraction of this chip, and has a feature size “equivalent to what the semiconductor industry had achieved in the very late 1990s”. “Our expectation is that we will not run out of room on our chip for the foreseeable future”, he says. And D-Wave’s bulky cooling system is capable of keeping a lot more than 512 qubits cold. “D-Wave's technology seems highly scalable in terms of the number of qubits they place and connect on a chip”, says Lidar. “Their current fridge can support up to 10,000 qubits.” So those black boxes look set to go on getting more powerful – even if it is going to get even harder to figure out exactly what is going on inside them.

Wednesday, May 21, 2014

Longitude redux

Nesta responded graciously to my complaints about the Longitude Prize 2014, calling me up to discuss the issues and inviting me to come to the launch on Monday at the BBC. It would have been churlish to refuse.

It was a suitably glitzy affair, introduced by the BBC’s Director General Tony Hall and featuring contributions from Martin Rees and Brian Cox. The event doubled as a (very well earned) celebration of the 50th anniversary of Horizon, the BBC’s flagship science documentary programme. The six challenges selected by the Longitude Committee – a truly impressive collection of folks – were as follows:

- Food: how can we ensure everyone has nutritious sustainable food?
- Flight: how can we fly without damaging the environment?
- Paralysis: how can we restore movement to those with paralysis?
- Antibiotics: how can we prevent the rise of resistance to antibiotics?
- Water: how can we ensure everyone has access to safe and clean water?
- Dementia: how can we help people with dementia live independently for longer?

Mostly a fairly predictable selection, then, with a few choices that one might not have anticipated. Which is to say that it is a good and entirely worthy list. The idea now is that, once the winner has been selected from this list, the prize functions as a “challenge prize”, along the lines of the X-Prizes or the challenges promoted by some of the crowd-sourcing companies now in existence, about which I wrote here. Everyone, from multinational companies to garden-shed inventors, will be able to submit their solution, and one of them – if their solution is deemed adequate – will receive the £10m prize money. The criteria for success and for ranking the submissions have yet to be thrashed out, and will of course depend on the challenge.

Martin Rees, who chairs the committee, points out that the prize money is around a thousandth of the annual UK R&D budget, and says one might hope that the results, by stimulating innovative thinking, will have a disproportionately big impact. This seems quite possible, and I do hope he is right. There is surely a role for challenge prizes like this in fostering innovation and problem-solving.

All of the presenters at the BBC event – each of them a regular on Horizon – did a great job of outlining why the challenge they had been assigned was important. These presentations will be fleshed out more fully in a Horizon special tomorrow.

Why, then, did I come away feeling even more vindicated in my criticisms?

It was because the core of my concerns – let's put behind us the initial publicity which looked as though it had been written by the fictional PR company Perfect Curve, and also the dodgy history with which the prize is framed – is the whole premise of selecting the final prize challenge by public vote. Much was made of how this will "get the public involved", how it will democratize science and stimulate wide interest. To object to this aspect of the project could seem elitist, as though to suggest that we should go back to deciding science policy via faceless committees of "experts" behind closed doors.

I'm all for public engagement, and I have much sympathy with Athene Donald, who responded to my criticisms by saying that "scientists should not be arrogant when it comes to public good, thinking they know what's right and what's wrong… Scientists should not pay lip-service to "public engagement with science" and yet not allow the public actually to engage in anything that matters." Yet this seems to me entirely the wrong way to go about such engagement. I feel there is simply a category error here – the assumption that, if popular entertainment can be "democratized", science can be too. Scientists usually go to pains to point out that science is precisely not a democratic process: we don't have public votes to decide which theories are right. The challenge itself, like science as a whole, should be open to ideas from all comers, judged on merit. Martin Rees pointed out that, unlike art or literary prizes, this one can be judged objectively: one can formulate solid, supportable and perhaps even quantifiable grounds for selecting a winner from the submissions. That is largely true. So why introduce a subjective element into setting up the challenge in the first place? Why assemble an expert panel to pick the shortlist but not the final winner? This seems to me a sop to public sentiment, likely to end up being patronizing (yes you, the common people, can decide!) rather than empowering.

But my real complaint isn’t about whether the voting procedure is fit for purpose – that is, about the issue of who gets a vote. It is about having a voting procedure at all. I find that deeply objectionable, and the launch further deepened that feeling.

The idea that the needs and dignity of thousands of people with paralysis or dementia must be placed in a horse race against the needs of billions of people without access to safe water, or that the very reasonable desire of wealthier citizens to fly without contributing to global warming should compete with the risks of malnutrition faced by millions in poorer nations because of inadequate food supplies, seems to me to be in bad taste. It feels like – indeed, it is disturbingly close to – asking for a public vote on whether the limited NHS coffers should be used to help people with kidney failure or people with cancer. Choices have to be made, for sure, and they are hard – but for that very reason, we shouldn't be turning them into a beauty pageant.

As I said to one of the committee in an email after my piece was posted, it feels rather as though we are being asked to vote on which of our family members to save from a fire. The hope, no doubt, is that by introducing the contenders, others (such as rich philanthropists) might be enticed into putting up money for some of the non-victors too. But if that's so, it is a clumsy and insensitive way to go about it, running the risk of telling people with paralysis or dementia (say) that the public doesn't actually care that much about them. Given that the prize money is by some measures so small (Martin Rees suggested that big companies would be competing for the prestige, not the cash), is it really not possible to find £10m for each of them? If any of these challenges were solved this way, the savings in healthcare costs, economic losses and so forth would repay the investment many, many times over. (There's another debate to be had about whether some of these problems – food, water, antibiotics, say – are ones that lend themselves to solution at a single stroke of technological genius, given how multi-faceted they are. But I'm willing to believe that the prize might at least elicit some useful contributions.)

The consequences of the “voting” format were made painfully apparent at the event itself. After a rather moving presentation in which a woman with a spinal-column injury explained what a difference an “artificial exoskeleton” had made to her life in enabling her to stand up – simply to address others eye to eye, let alone relieving the pain of constant sitting – another presenter then said “But mine is an even more important challenge!”, or words to that effect. He obviously didn’t intend for a moment to belittle his “opponent” or to sound at all callous – this is simply the dynamic inevitably created by the whole public-voting gambit.

So, despite the title that Prospect chose for my piece, I don’t think the Longitude 2014 Prize is a waste of time. It could have some valuable consequences, and I truly hope it does. But I think that it is, in some ways, worse than a "waste of time". Given how many good intentions and how much good thinking lie behind this project, it is all the more tragic that it has been lumbered with a format that is inappropriate, misconceived and, to my mind, offensive.

[Some version of this is likely to appear soon on the Prospect blog. I also discussed it this morning with Adam Rutherford for BBC radio. But that’s my say on the matter – I don’t want to be constantly sniping from the sidelines, and hope that the prize will now solicit some useful ideas.]

Friday, May 16, 2014

Computers and creativity

"Can a computer write Shakespeare?" Trevor Cox's nice Radio 4 programme yesterday was inevitably able only to scratch the surface of that question (which, I should add, was being asked outside of the boring probabilistic sense, explored with characteristic panache in Borges' The Library of Babel). For my part, I hugely enjoyed discussing with Tom Service the works of the "computer composer" Iamus. Tom had been pretty dismissive when I wrote about that project in the Guardian last year. I was disappointed that he'd only heard the "early work" Hello World!, but it seems the later compositions have not shifted his views much. Yet crucially, this is no reactionary objection to the intrusion of soulless computers into music – on the contrary, Tom thinks that the Iamus team haven't pushed the technology far enough, and that they are making the mistake of trying to make music that sounds as if humans composed it. That error, he feels, is only compounded by composing for traditional instruments, so that one gets the expressivity of the performer complicating the issue. Why not, he asked, generate entirely new sounds using electronics?

I have some sympathy with these suggestions – perhaps Iamus has been too constrained by a stylistic template to create any genuinely new soundscapes. But in a way I feel that is the whole point. I can’t help thinking that here Tom is indulging a prejudice that says if it is composed by a computer then it should sound somehow futuristic and far-out, like the Radiophonic Workshop at their craziest. But why shouldn’t computers be allowed to compose for piano/chamber ensemble/orchestra? We don’t demand that all contemporary human composers abandon these traditional sonic resources. And the Malaga team who devised Iamus specifically set out to see if a computer could be induced to compose music that couldn’t easily be distinguished from that of a human, without being merely a crude pastiche. How could we make the comparison if all we had was a vista of bleeps?

I also think Tom might be being a tad unfair to suggest that it's entirely the programmers, not Iamus, who are "composing". Gustavo Diaz-Jerez didn't give away an awful lot about exactly how the evolutionary algorithm works, but it has become clear to me that the input from the human programmers is minimal: a musical seed that bears virtually no recognizable relation to the final product. Nor are they assessing or selecting anything on the way. Asserting that, by writing the software, they are the real composers here seems a little like saying that the clever folks who developed Word are the real authors of my books. (I'm damned if they're going to get any of my pitiful royalties.)
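Since the Malaga team's actual algorithm (the Melomics system) is unpublished, here is only a toy sketch of how evolutionary composition of this general kind can work – the fitness rule, pitch ranges and all the names below are my own inventions for illustration, not anything from the Iamus project:

```python
import random

# Toy illustration of evolutionary composition (NOT the actual Melomics
# algorithm, which is unpublished). A melody is a list of MIDI pitches;
# the starting population is random, and the programmer supplies only a
# fitness rule - never a single note of the result.

def fitness(melody):
    # Hypothetical rule: reward small melodic steps, penalize repetition
    # and large leaps.
    score = 0
    for a, b in zip(melody, melody[1:]):
        step = abs(a - b)
        if step == 0:
            score -= 1          # monotony
        elif step <= 4:
            score += 2          # singable interval
        else:
            score -= step // 4  # big leaps cost a little
    return score

def mutate(melody, rate=0.2):
    # Nudge some pitches up or down by a semitone or two.
    return [p + random.choice([-2, -1, 1, 2]) if random.random() < rate else p
            for p in melody]

def evolve(generations=200, pop_size=30, length=16):
    random.seed(0)
    # The random seed population bears no recognizable relation
    # to the final product.
    pop = [[random.randint(48, 72) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Even in this crude sketch, the point holds: the programmer sets the rules of the game but makes no note-by-note choices along the way, which is the sense in which the "composing" happens somewhere other than in the programmer's head.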

The central issue, however, is something else. Tom seemed to feel that, by using human performers, Iamus was somehow cheating – of course it sounds passionate and committed, because the performers are injecting that into the notes! But wait – has there ever been any music composed for which this is not the case? (Well yes of course, but such experiments – like Varèse's electronic works and musique concrète – are the exceptions.) And it is precisely here that we hit the irony. We hear Bach's Cello Suites and think "What a great genius! What sensitivity! What emotion!" And we too easily forget that, while this is true (my God, how true!), we hardly have the same response when we hear Wendy Carlos doing Switched-On Bach on the Moog synthesizer. Even now we may overlook the essential role of the performer, without whom Bach is notation on paper. It only becomes great music when the genius of the composer (in this case) is given sympathetic expression by a skilled interpreter. Why do we give Bach that benefit but feel that all a computer should be allowed is Wendy Carlos?

And it doesn't stop there. Even Pablo Casals could, in the end, only make acoustic signals. That sounds sacrilegious, I know, but what else is it but vibrations in air – until it falls on the sympathetic ear? It only moves us because we have the resources to be moved: the logical, auditory and emotional resources. It is our minds that turn notes into music, and that is a tremendous skill which sometimes we deny with dismaying insistence ("oh, I don't know anything about music"). This is what I wanted to get at with my comments on the romanticization of genius. It is tempting to turn the performer into a mere conduit, and ourselves into the passive receiver, and attribute all the creative process to the composer or artist. At worst, this becomes a delusion that we are somehow "communing" with the artist's mind – as Tom pointed out after the recording, even Beethoven didn't believe that! Without wishing to deny the artist the primary role, creativity can only be a collaboration. Otherwise, wouldn't Bach be like a pill that, once swallowed, has the same effect on everyone – the "pharmaceutical model" of music so masterfully dismissed by the music psychologist John Sloboda?

This is why experiments like Iamus are so interesting. Margaret Boden expressed it better than I did at the end of Trevor’s programme. By removing one “mind” from the equation, they allow us to take apart the pieces of that process and hopefully to thereby understand them better. For, whatever else Iamus can do, its creators evidently don’t claim that it has a “mind” or some kind of autonomous intention. And so the issue becomes that of how we actively construct what we experience out of the materials we are given. That “we” may include the performer too, who is undoubtedly exercising creativity: OK, I have been given these notes, what can I do with them that has some meaning? The performer must find a form. The listener must find one too, and these may or may not overlap, although I suspect that to a considerable degree they do, simply because performer and listener are likely to have built their musical minds from very similar stimuli.

Kandinsky attributed to the artist an almost magical ability to elicit specific emotions from the onlooker. As a synaesthete, he expressed this in musical terms, even though his medium was colour; he surely imagined that music itself could do the same thing. “Colour is the keyboard”, he wrote, “the eyes are hammers, the soul is the piano with many strings. The artist is the hand that plays, touching one key after another purposively, to cause vibrations of the soul.” But few other artists have such delusions of absolute control over the effects their compositions will have. Stravinsky more or less denied anything of the sort. They have at best only a crude set of knobs for dialling in the listener’s/viewer’s response, because every mind has been shaped differently. In a cumbersomely mechanistic picture I imagine the artist making a kind of grid that, placed on the audience’s perceptions, depresses different levers depending on who has them where. It’s in the meeting of grid and levers (and in music the performer reshapes the grid a little) that creativity is determined. As computers get better at making interesting and effective grids, we might learn something new about the levers: why certain grids have certain effects, say.

Of course, those levers are connected to the heart, the tear ducts, the limbic and motor systems and so on. That’s where it gets interesting: can a computer create a grid that will make me cry – not as bad, ersatz movie music does, but as Bach does? When, or if, that happens – well, that’s when I really have to start wondering if computers are creative.

Saturday, May 10, 2014

A prize turkey?

I really don’t want to seem curmudgeonly about this. But when I was forwarded the announcement of the forthcoming launch of the Longitude 2014 Prize at the BBC on 19 May, I had to read it several times just to get some rough idea of what this prize is supposed to be all about. Then I followed up on the details, and it just got worse. I won’t totally rule out the possibility that something worthwhile might come of it all, but even if it does (and I’m not optimistic), the marketing is disastrous. It almost seems as though no one really wants to admit the truth of what the project is all about. And so I fear that my piece on the Prospect blog, of which the pre-edited version follows below, is a little cross.


Ah, the wisdom of crowds. Or is that the madness? I’m not sure any more. Do we trust the crowd to find its collective way to the perfect answer to a challenge? Or do we fear that it will tip into irrational herding behaviour and lose its grip on reality?

And do we really care? For mad or wise, the crowd is where it’s at. You know, democracy, the voice of the people, all that. So never mind I’m a Celebrity and Strictly Come Dancing – why not let the masses decide science policy?

"I'm thinking of something - Britain's Got Talent, you know, you switch on the TV and you watch the dog jumping over the pole, or whatever it is”, says David Cameron, showing that he has his finger on the pulse – or at least, that he has some vague notion that, you know, these days there’s this sort of interactive voting thing that’s popular with the masses. “Let's actually get the nation engaged on what the biggest problems are in science and in our lives that we need to crack, with a multi-million pound prize to then help us do that."

Oh, you may mock. But there’s some serious thought behind Cameron’s announcement last year of the so-called Longitude 2014 Prize. Longitude? I’ll come back to that. So you see, “It is vital” (according to the announcement of the prize on the Sciencewise website) “that in the 21st Century the challenges set are not simply those framed by academics or business leaders, but rather that the Committee responsible for overseeing the Prize understands the issues, priorities and views of the full range of stakeholders including the general public. This will be consistent with the Government’s commitment to open and transparent policy making.” You don’t get more open than delegating such policy-making to everyone.

So that's all good. But who is this Sciencewise through which the good news is being channelled? You have to do a bit of digging there. This organization "provides co-funding and specialist advice and support to Government departments and agencies to develop and commission public dialogue activities in emerging areas of science and technology". It is managed on behalf of the Department for Business, Innovation and Skills (BIS) by Ricardo-AEA in partnership with the British Science Association and the community participation charity Involve. So wait, who then is Ricardo-AEA? More Googling reveals that it is a private consultancy.

No matter, back to the Sciencewise announcement. “The project” – that’s Longitude 2014 Prize, do pay attention – “has been divided into phases and the current dialogue project is for the first phase, scoping and framing. Framing here refers to setting out how the project to identify challenges will run and what the areas for the challenges will be. By involving the public in this early scoping phase we can be confident that the issues and challenges set by Longitude 14 [ah, that has a nice ring to it] will be consistent with issues that are of public concern… The Longitude 14 prize will serve to inform policy that aims to encourage businesses, universities and others to find a solution to some of the major societal challenges of the day… As the project moves from the scoping to a public debate, voting, and challenge setting phases, a range of tools will be used to ensure the public are engaged and excited by the project.”

Have I landed in a scene from W1A, the glorious spoof on management-speak and corporate-think now infecting the BBC? Or are we really to understand that, after due scoping and framing, the public are going to vote on the question of what businesses, universities and others (which others?) should be spending their money on, with much the same mindset as they watch, you know, dogs jumping over poles or something?

OK, let's get a little balance. Any initiative that has as its chairman Sir Martin Rees, Astronomer Royal and ex-president of the Royal Society, who can smell a rotten egg from fifty paces, can't be all bad (although one wonders how much direct input Rees has been allowed so far). He will head an "illustrious committee", managed by the innovation charity Nesta. And we should admit that paternalistic "we know what's best for you" government doesn't have a great record for deciding what is important in science innovation either: the UK has done poorly at capitalizing on the creativity of its scientists. The current decision to pour money into research on the new "wonder material" graphene, pioneered in Manchester, smacks slightly of a panicky determination not to let this history repeat itself.

But if our alternatives are either to delegate decisions to faceless bureaucrats behind closed doors in Whitehall, or to throw the vote open by aping reality TV, we are not doing a lot for the image of democracy in action.

Can we just remember that the original Longitude Prize of 1714, on which this current project is allegedly modelled, was not itself the result of a group vote for the most pressing of technological issues of its time? The difficulty of determining longitude at sea was already widely recognized by the authorities of the time as a serious problem; the "open-source" nature of the prize was all about the solution, not about identifying the problem in the first place. And cracking that problem was primarily about securing naval supremacy and expanding trade and colonial power. If you had asked the population, they might have been more concerned about sanitation, basic healthcare (even the concept is of course anachronistic), or their lack of voting rights on anything at all.

Besides, no one won the Longitude Prize. (In fact, as science historian Rebekah Higgitt has argued, it’s not clear that there was ever really a “prize” as such at all.) Despite Cameron’s claim that it was awarded to the clockmaker John Harrison, he was never officially given that honour. After tireless campaigning to have his achievements recognized, he finally managed to wring the equivalent money out of a reluctant Parliament, but the Board of Longitude stressed that this was a bountiful gesture to acknowledge Harrison’s efforts, not the “prize” itself. Prospective contenders for the reincarnated award might not be encouraged by this history.

What I object to most of all, however, is not the ridiculous language in which this prize has been dressed, not the poor history with which it has been framed, not the paltry million quid or so that is at stake, not even the question of who chooses the objective. It is the whole notion of a competition to find the biggest challenge our technologies face. There is no single grand challenge into which we must pour millions. It’s a whole lot worse than that. The climate is changing, and to solve that alone we will need a whole raft of technological, economic and social measures. Our antibiotics are becoming useless. We lack cures for some of the most widespread and debilitating diseases on the planet. Billions of people lack access to safe drinking water. This is not rocket science (please don't let the decision be that we must get on and populate Mars...) – we know perfectly well what the problems are, and how serious they are. We don’t need to dress them up for a beauty pageant so that we can crown a winner. We should just get on with the job.

Wednesday, May 07, 2014

Instruments: a postscript

How gratifying to see such an interest taken in my discussion with Stephen Curry on the aesthetics of scientific instruments. Kenan Malik has just posted our exchange on his fine blog Pandemonium. I should also say that Rebekah Higgitt has rightly pointed out that most of the objects I showed below were never intended for lab use, but were only ever meant for display – she says that Jim Bennett of the Museum of the History of Science in Oxford has argued that if an object is in a museum, it has probably never been used. A nice (if not foolproof) rule of thumb! I fully admit that I chose those images for their prettiness, not their potential utility – the little “hour cannon” in particular was an obvious piece of frippery. All the same, the mere fact that scientific instruments were being made for display by the rich and powerful says something interesting about the market that existed then, not to mention about the role that such instruments served – in part they were toys, but also demonstrations of the owner’s taste and authority. Can you imagine an NMR spectrometer made “for display” today? I’m not sure that “executive toy” gadgets are quite the right comparison.

This crossover between scientific instrument and marvellous gadget is explored in the splendid book Devices of Wonder by Barbara Maria Stafford and Frances Terpak (which, curses, I am now itching to find among my piles of books). Automata obviously fall into this category: simultaneously a form of entertainment, a demonstration of the maker’s skill (many were watchmakers), and an embodiment of the Cartesian notion of body as machine. Perhaps it is in this regard that the instruments of science have changed since the seventeenth century: back then, they were inclined as much to be an illustration of a theory as they were a means of testing it. They were – some were – ‘presentation devices’, so that elegance enhanced their persuasive power. There’s more to be said on this – I’d like to examine the issues for the nineteenth century in particular.

Monday, May 05, 2014

More objects of desire

In the Guardian online, Stephen Curry has provided a thoughtful response to my brief blog in which I implied that modern scientific instruments are soulless grey boxes in comparison to the gorgeous devices that were enjoyed by the likes of Galileo and Robert Hooke. My comment was something of a gut response to perusing the wonderful website of the Museo Galileo in Florence, where just about every instrument on display is a ravishing creation. That made me realise, however, that even in the nineteenth century many scientific instruments were crafted with an artistry that far exceeds what is strictly necessary. I would happily have them on my mantelpiece. So what happened?

Stephen explains that this lack of obvious aesthetic appeal in much of today’s kit doesn’t preclude researchers like him from having a response to their equipment that can be “immediate and visceral”. He describes the tactile satisfaction that he has derived from working with machines that are engineered with grace and precision. It is a delightful account of how even apparently prosaic devices can elicit a feeling of connection, even affection, for those who use them. I’m very glad to have stimulated an account like this. Anyone who talks of “science as a craft” is a man after my own heart.

Yet I can’t help thinking that my question remains. Galileo’s instruments can be appreciated as objects of wonder and desire by anyone who sees them, not just by those accustomed to their use. Why, I think we must still ask, were they put together not just with care and precision but with an apparent wish to make them beautiful?

And, to turn the question around, why should we care if they were? Would there really be any gain in adorning today’s scientific instruments with wood panelling and mother-of-pearl inlay? What would be the point?

I’m glad Stephen’s article has forced me to think about these things more deeply than I did when I posted my cri de coeur. I should say that there are of course others who are far better placed than I am to provide answers, such as Jim Bennett at the Museum of the History of Science in Oxford and Frank James at the Royal Institution. But these, such as they are, are my thoughts.

First, there is obviously a selection effect at work here of the kind that all historians and curators are familiar with. What tends to get preserved is not a representative cross-section of what is around at any time, but rather, what is deemed to be worth preserving. No doubt there was a host of unremarkable flasks and bottles and crucibles that were destroyed because no one thought them worth holding on to.

Second, there were of course no specialized scientific-instrument manufacturers in the early modern period. When investigators like Galileo and Boyle wanted something made that they could not make themselves, they would go to metalsmiths, carpenters, potters and the like, who inevitably would have brought their own craft aesthetic to the objects they made.

And when specialist manufacturers did begin to appear, such as the instrument-maker Richard Reeve in London, they were catering to a particular clientele that their products reflected. Reeve was making microscopes and so forth for wealthy dilettantes like Samuel Pepys, who would have expected to be buying something elegant and refined, not coldly functional.

But this touches on the third and perhaps most salient point: what, and who, these instruments were for. Even for Galileo, the scientific experiment was still at least as much a demonstration as it was an exploration: it was a way of showing that your ideas were right. (It has been suggested, albeit somewhat inconclusively, that Galileo may have slightly adjusted his figures to suit his ideas, since methods of timing for phenomena like free fall or rolling down a plane were not yet sufficiently accurate to really distinguish between candidate mathematical formulae for describing them.) And in the earliest of the early modern era, during the late Renaissance, scientific instruments were objects of power. They were used by the virtuosi to delight and entertain their noble patrons, and thereby to imply a command of the occult forces of nature. For such a display, it was important that a device be impressive to look at: elegance was the key attribute of the courtly natural philosopher.

And this is, in a sense, still the case: scientific instruments are not made simply to do a job, but employ a particular visual rhetoric with an agenda in mind. OK, homemade instrumentation often has an improvised Heath Robinson quality, and this is often the kind of instrument that I like best – as I argued here, it can thereby reflect the scientist's own thought processes. But when an instrument is manufactured, even when it is mass-produced, there is another determinant of its appearance. It has – even the most anonymous of spectrometers – been designed, and that design is geared towards a particular end. For one thing, it becomes susceptible to fashion – we can all distinguish an instrument from the 1950s (chunky, retro-Space Age) from one made in the 1990s (sleek, minimalist). But more importantly, I would submit that, just as the instruments of the seventeenth century obeyed a rhetoric of virtuosic mastery of nature, today they must convey objectivity, the hallmark of modern science. That's to say, modern instruments don't just look bland and uninspiring because they are made without love (and they are certainly not made without skill) – they look that way because they are trying to reflect what is deemed to be the proper way to do science. It must be impersonal, free of frippery or excess. A blank casing, functional dials and knobs, sober colours, no decoration: to look otherwise would invite suspicions that it was a toy, not a means of doing good science.

So while I accept Stephen’s assertion that the utilitarian nature of modern scientific instruments doesn’t necessarily preclude their being given satisfying and even elegant designs, I think we need to recognize that there is an aesthetic shaping the way they look that says something about the character of modern scientific research – it has to maintain the correct deportment, which means looking suitably “sciency” and neutral. Does that make the slightest difference to the nature of research itself? It’s not obvious that it will, but I am struck by how my blog seemed to touch a nerve with various other folks, so perhaps some researchers do feel that their equipment is a little too functional to offer much inspiration.

In any case, this is now a good excuse for a little more scientific instrument porn. Oh how indecently I covet these things!