Monday, February 25, 2013

Scribblings of a scribe

My article on cursive handwriting in Prospect seems to be creating some debate, which pleases me. I volunteered in my last comment there to offer up a sample of my own handwriting, as used for taking notes at speed. Here it is; fire away.

Thursday, February 21, 2013

The demon-haunted microworld

Here's a bigger and illustrated version of an article published in Aeon magazine. It is part of a larger project to be revealed soon.


If small means invisible, we fear what it holds

When the Dutch cloth merchant Antony van Leeuwenhoek looked at a drop of pond water through his home-made microscope in the 1670s, he didn’t just see tiny ‘animals’ swimming in there. He saw a new world: too small for the eye to register even the faintest hint, teeming with invisible life. The implications were as much theological as they were scientific.

Invisibility comes in many forms, but smallness is the most concrete. Light ignores very tiny things as ocean waves ignore sand grains. During the seventeenth century, when the microscope was invented, the discovery of such objects posed a profound problem: if we humans were God’s ultimate purpose, why would he create anything that we couldn’t discern?

The microworld was puzzling, but also wondrous and frightening. There was nothing especially new about the idea of invisible worlds and creatures – belief in immaterial spirits, angels and demons was still widespread. But their purpose was well understood: they were engaged in the Manichean struggle for men’s souls. If that left one uneasy in a universe where there was more than meets the eye, at least the moral agenda was clear.

But Leeuwenhoek’s ‘animalcules’ and their ilk indulged their opaque, wriggly ways everywhere one looked: in moisture, air, body fluids. In human semen – Leeuwenhoek studied his own, transferred with jarring haste from the marital bed – there were tadpole-like ‘animalcules’ writhing like eels. In 1687 the German mathematician Johann Sturm suggested that disease is caused by breathing in such invisible animals in the air. The Jesuit priest Athanasius Kircher proposed that the plague might be caused by the microscopic ‘seeds’ of virulent worms that enter the body through the nose and mouth – just a step away, it seemed, from a germ theory of contagion, although the impossibility of seeing bacteria and viruses with the microscopes of the time obstructed that leap until Louis Pasteur and Robert Koch made it in the late nineteenth century.

Pestilence was everywhere, unseen and impossible to fend off – just like medieval demons. The narrator in Daniel Defoe’s Journal of the Plague Year (1722) attests he has heard that if a person with the plague breathes on glass, “there might living Creatures be seen by a Microscope of strange, monstrous and frightful Shapes, such as Dragons, Snakes, Serpents and Devils, horrible to behold.” He admits some doubts about whether this is true, but the message is clear: the invisible microworld is labelled “Here be dragons”.

Little has changed. Electron microscopes now reveal miniature viral monsters like science-fiction aliens, with arachnoid legs and crystal heads from which they inject genetic venom into cells. MRSA bacteria lurk unseen on hospital door handles and bed sheets. We sprinkle anti-bacterial fluids like holy water to fend off these invisible fiends.

The invisible world

The idea that matter might be composed of particles and processes too small to see – the atoms of Democritus, the whirling vortices of Descartes and the corpuscles of Newton – has a long history. But this fine-grained nature of matter only came to seem like an ‘invisible world’ when the advent of the microscope enabled us, first, to appreciate the intricacy with which it was wrought, and second, to identify life amidst the grains. When Galileo used one of the first microscopes to study insects, he was astonished and repelled, writing to his friend Federico Cesi in 1624 that
“I have observed many tiny animals with great admiration, among which the flea is quite horrible, the mosquito and the moth very beautiful… In short, the greatness of nature, and the subtle and unspeakable care with which she works is a source of unending contemplation.”

This wonder at nature’s invisible intricacy was echoed by Robert Hooke, whose 1665 book Micrographia put microscopy on the map. Crucially, Hooke’s volume was not merely descriptive: he included large, gorgeous engravings of what he saw through the lens, skilfully prepared by his own hand. The rhetorical power of the illustrations was impossible to resist. Here were fantastical gardens discovered in mould, snowflakes like fronds of living ice, and most shockingly, insects such as fleas got up in articulated armour like lobsters, and a fly that gazes into the lens with 14,000 little eyes, arranged in perfect order on two hemispheres.

This was surely a demonstration of the infinite scope of God’s creative power. “There may be as much curiosity of contrivance in every one of these Pearls”, Hooke wrote, “as in the eye of a Whale or Elephant, and the almighty’s Fiat could as easily cause the existence of the one as the other; and as one day and a thousand years are the same with him, so may one eye and ten thousand.”
In comparison, the finest contrivances of man – a needle’s tip, a razor’s edge, a printed full stop – looked crude and clumsy under the microscope.

What excited Hooke and his contemporaries most was that the microscope seemed to offer the possibility of uncovering not just the invisible structures of nature but, in consequence, its hidden mechanisms. Where in previous ages natural philosophers had attributed the cause of processes to invisible, occult forces and emanations – vague and insensible agencies – the mechanistic philosophers of the seventeenth century argued that nature worked like a machine, filled with levers, hooks, mills, pins and other familiar devices too small to be seen. Now at last these structures might be revealed. Henry Power, whose Experimental Philosophy advertised the virtues of the microscope a year before Hooke’s, wrote that we could expect to see at last “the magnetical effluviums of the loadstone [magnet], the solary atoms of light, the springy particles of air.” Hooke too insisted that “Those effects of Bodies, which have been commonly attributed to Qualities, and those confess’d to be occult, are performed by the small Machines of Nature.” He never quite found them; but there was no shortage of other marvels.

Life writ small

Micrographia recorded life in this microscopic realm too, but none that could not be discerned, with effort, by the eye alone: “eels” in vinegar and mites in cheese. Leeuwenhoek’s discoveries, reported in 1676 and verified by Hooke a year later, brought home the full force of a teeming, invisible microworld. The anxieties about scales of perception that run through Swift’s Gulliver’s Travels make it clear how unsettling this was. In the land of the gigantic Brobdingnagians, Gulliver is disgusted by their bodies when seen so close up: “Their skins appeared so coarse and uneven, so variously coloured when I saw them near, with a mole here and there as broad as a trencher, and hairs hanging from it thicker than pack-threads.” Among the common folk he is repelled by the immense lice crawling on their clothes, possessing “snouts with which they rooted like swine.”

Even with refinements of the microscope in the nineteenth century that enabled scientists to peer into the invisible world with unprecedented resolution, there remained questions about what might be happening down there. In 1896 the pioneering British psychiatrist Henry Maudsley proclaimed that
“The universe, as it is within [man’s] experience, may be unlike the universe as it is within other living experience, and no more like the universe outside his experience, which he cannot think, than the universe of a mite is like his universe.”

Maudsley’s avowal of ignorance was an attack on the ready assumptions of some scientists, such as the chemist William Crookes, that invisible realms were peopled with beings like us. But this lack of knowledge could equally supply licence for the most exotic of speculations. The beginnings of molecular science engendered an appreciation that life as it was known could have a minimal possible size. But when the ‘indivisible’ atom began to display a finer-grained structure of subatomic particles, and light waves proved to have much finer oscillations in the form of X-rays, no one could rule out the possibility of an entire hierarchy of material existences on smaller scales. The physicist George Johnstone Stoney, who gave the name to subatomic electrons discovered in 1897, declared that the physical universe is really an infinite series of worlds within worlds. Another physicist, the Irishman Edmund Fournier d’Albe, developed these ideas in Two New Worlds (1907), where he envisaged an “infra-world” at a scale below that which microscopes could register, peopled like Leeuwenhoek’s drop of water with creatures (“infra-men”) that “eat, fight, and love, and die, and whose span of life, to judge from their intense activity, is probably filled with as many events as our own.” The human body, he estimated, could play host to around 10^40 of these infra-men, experiencing joys and woes “without the slightest net effect on our own consciousness”.

As is ever the case with scientific advance, the new and unfamiliar are popularly interpreted by reference to the old and prosaic. Littleness has been a consistent theme in the folklore of demons and faeries. Mischievous imps and fairies that interfere in domestic matters were a stock feature of folk tradition, and if these beings were not necessarily invisibly small, their diminutive stature enabled them to pass unseen. One might be tempted to imagine that by the late nineteenth century such beliefs reached no further than rural backwaters – but that would be to underestimate the grip of the invisible world on the imagination. Nowhere is this better illustrated than in the ‘demon’ of James Clerk Maxwell, perhaps the most profound physicist of the nineteenth century.

Maxwell’s idea was a response to the gloomy prediction of a ‘cosmic heat death’ of the universe. In 1851 William Thomson (later Lord Kelvin) pointed out that the second law of thermodynamics, which can be expressed as the condition that heat energy must always flow from hot to cold, must eventually create a universe of uniform temperature, from which no useful work can be extracted and in which nothing really happens.

As a devout Christian, Maxwell could not accept that God would let this happen. He believed that the second law is statistical rather than fundamental: temperature gradients get dissipated because it is far more likely that faster, ‘hotter’ molecules will mingle with slower ones, rather than by chance congregating into a ‘hot’ patch. But what if there were, as Maxwell put it in 1867, a “finite being”, small enough to ‘see’ each molecule and able to keep track of it, who could open and shut a trapdoor in a wall dividing a gas-filled vessel? This being could let through fast-moving molecules in one direction so as to congregate the heat in one compartment, separating hot from cold and creating a temperature gradient that could be tapped to do work.
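The statistical sorting Maxwell described is easy to caricature in a few lines of code. The sketch below is a toy model rather than a physical simulation: molecular speeds are drawn from an arbitrary exponential distribution, and the ‘demon’ simply refuses passage to any molecule on the wrong side of a speed threshold. All names and numbers are illustrative, not drawn from Maxwell’s own treatment.

```python
import random

def demon_sort(n_molecules=10000, threshold=1.0, seed=0):
    """Toy model of Maxwell's demon: molecules with random speeds start
    spread over two compartments; the demon opens the trapdoor only for
    fast molecules moving left->right and slow ones moving right->left,
    so heat 'congregates' on the right."""
    rng = random.Random(seed)
    # Speeds drawn from an exponential distribution (mean 1.0), a crude
    # stand-in for a thermal spread of molecular speeds.
    left = [rng.expovariate(1.0) for _ in range(n_molecules // 2)]
    right = [rng.expovariate(1.0) for _ in range(n_molecules // 2)]
    for _ in range(5 * n_molecules):  # molecules strike the door at random
        if rng.random() < 0.5 and left:
            i = rng.randrange(len(left))
            if left[i] > threshold:       # fast molecule: let it through
                right.append(left.pop(i))
        elif right:
            i = rng.randrange(len(right))
            if right[i] <= threshold:     # slow molecule: let it back
                left.append(right.pop(i))
    mean = lambda xs: sum(xs) / len(xs)
    return mean(left), mean(right)

cold, hot = demon_sort()
print(f"mean speed left (cold): {cold:.2f}, right (hot): {hot:.2f}")
```

Run it and the mean speed (a proxy for temperature) in the right-hand compartment climbs above that in the left, which is exactly the gradient the demon was hired to create.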

Maxwell didn’t intend his creature to be called a demon. That label was applied by Thomson, who defined it as “an intelligent being endowed with free will, and fine enough tactile and perceptive organization to give him the faculty of observing and influencing individual molecules of matter.” Maxwell was not pleased. “Call him no more a demon but a valve”, he grumbled – albeit a ‘valve’ with intelligence and autonomy, or as Maxwell once put it “a doorkeeper, very intelligent and exceedingly quick.”

Several of his contemporaries had little doubt that these ‘demons’ were to be taken literally. Thomson himself took pains to stress that the demon was plausible, calling it “a being with no preternatural qualities, [which] differs from real animals only in extreme smallness and agility.” Maxwell’s friend, the Scottish physicist Peter Guthrie Tait, evidently believed they might exist, and he enlisted them for an extraordinary cause. In 1875 Tait and fellow Scot Balfour Stewart, an expert on the theory of heat, published a book called The Unseen Universe in which they attempted to show that “the presumed incompatibility of Science and Religion does not exist.” There must be, they wrote, “an invisible order of things which will remain and possess energy when the present system has passed away.” Tait and Stewart were aware of the apparent conflict between the Christian doctrine of the immortality of the soul and the second law of thermodynamics, which seemed to enforce an eventual universe of insensate stasis. “The dissipation of energy must hold true”, they admitted, “and although the process of decay may be delayed by the storing up of energy in the invisible universe, it cannot be permanently arrested.” Maxwell’s demon gave them a way out. “Clerk-Maxwell’s demons”, they wrote, “could be made to restore energy in the present universe without spending work” – and as a result, “immortality is possible.”

Modern studies have shown that Maxwell’s demon cannot after all evade the second law, since even it has to dissipate heat as part of the process of gathering information about molecular speeds. The conceit is now generally regarded as an amusing thought experiment: it is forgotten that, in Maxwell’s day, invisibly small demons going about their micro-business seemed possible, even likely.
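The modern resolution can be made quantitative. By Landauer’s principle, erasing each bit of the demon’s molecular record must dissipate at least k_B T ln 2 of heat, which is enough to pay back the work the sorting could extract. The snippet below simply evaluates that bound at room temperature; the function name is my own.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def landauer_limit(temperature_kelvin):
    """Minimum heat (in joules) dissipated when one bit of the demon's
    memory is erased, by Landauer's principle: k_B * T * ln 2."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature each erased bit costs roughly 3e-21 J.
print(f"{landauer_limit(300):.3e} J per bit at 300 K")
```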

Nano nightmares

The demonization of invisible beings is as strong as ever, now adapted to the fantasies of our age: viruses are “alien invaders”, we go to “war” on “superbugs” with super-powers, repelling them like vampires with “magic bullets”. Children are taught that invisible “germs” are the omnipresent enemy, and they are enlisted, as imps and demons once were, to instil safe behaviour. It is a case, microbiologist Abraham Baron declared in 1959, of “man against germs”. When he explained that “we share the world with an incredibly vast host of invisible things”, it was a warning and not an expression of wonder. In his 1912 study of the hazards of dust, physician Robert Hessler cautioned that “It is the invisible we have to guard against.”

This fear of the malevolent designs of imperceptibly small entities was evident in the early reception of nanotechnology, which seemed to be supplementing this gallery of invisible horrors. Among scientists, nanotechnology was a loosely defined collection of attempts to visualize and manipulate matter on scales ranging from ångströms (the size of atoms) to hundreds of nanometres (the size of small bacteria). But in public discourse it became dominated by a single entity, which nanotechnologists were allegedly aiming to construct: the nanoscale robot or nanobot. This, it was said, would be an autonomous device that would patrol the bloodstream for pathogenic invaders, or construct materials and molecules from the atoms up. It was, in other words, a human avatar on an invisible scale.

What if nanobots ran amok, as robots are (in fiction) almost predestined to do? A rogue robot might be a menace, but it is a comprehensible one, a kind of superhuman being. A rogue nanobot, capable of replicating like bacteria and of pulling matter apart atom by atom, would be an unthinkable threat. Hidden from sight, it could reduce anything in seconds to a formless mass of atoms, which would then be reconstituted into replica nanobots: an amorphous ‘grey goo’. The terror of this imagery was crudely but effectively exploited by Michael Crichton in his novel Prey (2002).

If the image is frightful, it is also familiar. Invisible powers have long been held capable of animating clay, creating the fearsome Golem, or of disintegrating and deliquescing matter and flesh (think now of the Ebola virus). What’s more, the nanobot connects with long-standing images of the exploration of new worlds, most notably the submarine Nautilus in which Captain Nemo explores the hidden deep sea in Jules Verne’s 20,000 Leagues Under the Sea. Once again, it seems we must remake the invisible microworld in our own image before we can explore its promise and peril. This was most explicit in the 1966 movie Fantastic Voyage (based on a short story by Isaac Asimov) and the parodic 1987 remake Innerspace, in which humans are shrunk to a scale that allows them to navigate through the human body.

The extreme miniaturization that has its ultimate expression in nanotechnology has not yet given birth to an invisible nemesis, and shows no sign of doing so. What it, in conjunction with the manipulation of invisible rays such as Marconi’s ‘wireless’ emanations, has done is create an age of technological invisibility, in which things happen with no mechanism in sight, indeed even without our volition, embedded in an omnipresent field of information. Items in stores speak to barriers and computers; miniaturized sensors control our cars and refine our household environment; libraries leap into our pockets. Dust, a metaphor for worthless matter while it was the smallest thing that could (just) be seen with the unaided eye, has become “smart dust”, a nanotechnological promise of particles laced with invisible circuitry, programmed with the intelligence to self-assemble as we will them: to make a Golem, perhaps, rebranded now as a ‘reconfigurable robot’.

It has become a commonplace that these advances would have seemed in earlier times to be magical. Less often acknowledged is how traditional reactions to invisibility can help us comprehend and negotiate the cultural changes that ensue. The boundaries between rationality and insanity can no longer be policed in behavioural terms. Is the person gesticulating and talking out loud in the street communing with demons of the mind, or with a friend? Is the person fretting over the invisible threats of nearby radio masts succumbing to some modern version of the mal aria theory of contagion, or do they have a point? We entrust our digital secrets to the intangible Cloud, and assume that this nebulous entity can be summoned to regurgitate them at will. With invisibly small technology harnessed to the invisible ether, we have in a real sense animated the world.

The curse of cursive

Here’s a piece published in Prospect this month. I’m not holding my breath about whether it is going to make the slightest impact on the blinkered way this aspect of education is approached – this is one of those issues so ingrained that one rarely gets much more than a dumb stare if it is raised with teachers. I’d love someone to tell me just why cursive is so important in educational terms. They haven’t yet.


There’s something deeply peculiar about the way we teach children to play the violin. It’s a very difficult skill for them to master – getting their fingers under control, holding the bow properly, learning how to move it over the strings without scratching and slipping. But just as they are finally getting there, are beginning to feel confident, to hit the right notes, to sound a little bit like the musicians they hear, we break the news to them: we’ve taught them to play left-handed, but now it’s time to do it like grown-ups do, the other way around.

All right, I’m fibbing. Of course we don’t teach violin that way. It would be absurd. What would be the point of making it so hard? We wouldn’t do anything so eccentric for something as important as learning a musical instrument, would we? No – but that’s how we teach children to write.

It’s best not to examine the analogy too deeply, but you see the point. The odd thing is that, when most parents watch their child’s hard-earned gains in forming letters like those printed in their story books crumble under the demand that they now relearn the art of writing ‘joined up’ (“and don’t forget the joining tail!”), leaving their calligraphy a confused scrawl of extraneous cusps and wiggles desperately seeking a home, they don’t ask what on earth the school thinks it is doing. They smile, comforted that their child is starting to write like them.

As he or she probably will. The child may develop the same abominable scribble that gets letters misdirected and medical prescriptions perilously misread. In his impassioned plea for the art of good handwriting, Philip Hensher puts his finger on the issue (while apparently oblivious to it):
“You longed to do ‘joined-up writing’, as we used to call the cursive hand when we were young. I looked forward to the ability to join one letter to another as a mark of huge sophistication. Adult handwriting was unreadable, true, but perhaps that was the point.”

The real point is, of course, not that illegibility but that sophistication. When I questioned my friend, a primary teacher, about the value of teaching cursive, she was horrified. “But otherwise they’d have baby writing!” she exclaimed. I pointed out that my handwriting is printed (the so-called ‘manuscript’ form). “Oh no, yours is fine”, she – not the placatory sort – allowed. I didn’t ask whether all the books on my shelves were printed in ‘baby writing’ too.

I did also once ask my daughter’s teachers what they thought they were doing by teaching her cursive. When they realised this was not a rhetorical question but a literal one, there was bemusement and panic. “It’s just what we do”, one said. “We always have”. Another ventured the answer I’d anticipated – the children will be able to write faster – and then added that she thought she’d seen some research somewhere showing that some children find that the flowing movements help to imprint the shape of whole words more clearly in their mind. This was evidently not a question they had faced before.

We tend to forget, unless we have small children, that learning to write isn’t easy. It would make sense, then, to keep it as simple as possible. If we are going to teach our children two different ways of writing in their early years (quite apart from distinguishing capitals and lower-case), you’d think we would ensure we have a very good reason for doing so. I suspect that most primary teachers could not adduce one.

It’s not just about writing, but reading too. “As a reading specialist, it seems odd to me that early readers, just getting used to decoding manuscript, would be asked to learn another writing style,” says Randall Wallace, a specialist in reading and writing skills at Missouri State University.

There are, from time to time, proposals to stop teaching cursive handwriting – but these are usually motivated by the conviction that handwriting is passé in the digital age. The outraged response is that handwriting is an art, that there is an intrinsic value in beautifully formed script, and that to lose it would be a step towards barbarism.

Here I’m with Hensher: we should value skill with a pen. Our handwriting is an expression of our personality and humanity – not in some pseudoscientific graphological sense, but in the same way as is our clothing, our voice, our conversation. Yet these arguments are never really about cursive per se: they are about the good versus the indifferent in handwriting. It is implicitly assumed that the acme of good handwriting is beautiful cursive.

Now, I admire the elegant copperplate of the Victorians as much as anyone. But no one writes like that any more, since no one is taught to. How can we insist that to drop cursive will be to drop beauty and elegance, given that most people’s cursive handwriting is so abysmal? “It has always seemed ironic that, even after we sign a document, we have to print our signature underneath it for clarity”, says Wallace.

Surely, though, in something as fundamental to education as writing, there must be scientific evidence that will settle this matter? Let’s dispatch the most obvious red herring straight away: you will not write faster in cursive than in print. Once you need to write fast (which you don’t at primary school), you’ll join up anyhow if and when that helps. I know this to be so, because I missed the school years in which cursive was ground into my peers, and yet I never suffered from lack of speed. But don’t take my word for it – research shows that there is no speed advantage to cursive [1].

Are there any other advantages, then? Champions of cursive will always unearth tenuous arguments from dusty corners of the literature. Cursive makes it easier to learn how to write words; in cursive, b and d are not confused, and children don’t write backwards letters; the blending of sounds is made more apparent by the joining of letters; cursive helps the left-handed. None of these claims counts for very much. (There is equal reason, for example, to think that the continuous movement of the pen from left to right makes cursive especially hard for left-handers.) On the merits of learning cursive versus manuscript, Steve Graham, a leading expert in writing development at Arizona State University, avers that “I don't think the research suggests an advantage for one over the other.”

A survey in the US in 1960 found that the decision to teach cursive in elementary schools was “based mainly on tradition and wide usage, not on research findings” [2]. One school director said that public expectancy and teachers’ training were the main reasons, and that “we doubt that there is any significant advantage in cursive writing.” According to Wallace, nothing has changed. “The reasons to reject cursive handwriting as a formal part of the curriculum far outweigh the reasons to keep it”, he says.

It’s not necessarily cursive per se that’s the problem, but the practice of teaching children two different systems, perhaps in the space of as many years, without good reason. Research seems to show that it may not much matter how children learn to write, so long as it is consistent. Wallace argues that any style will do if it is “flexible enough to be perceived as similar to printed text and simple enough to last through the school years” [3].

Were there to be a choice between cursive and manuscript, one can’t help wondering why we would demand that five-year-olds master all those curlicues and tails, and why we would want to make them form letters so different from those in their reading books. But that’s a smaller matter than forcing them to struggle through one of their hardest early-learning tasks twice, with two different sets of rules, apparently because of nothing more than the arbitrary and tautological belief that only the kind of writing you had to (re)learn can be ‘grown-up’ and ‘beautiful’. After all, what’s the point of conducting research on educational methods if in the end you’re going to say “But this is how we’ve always done it”?

1. S. Graham et al., J. Educ. Res. 91(5), 290 (1998).
2. P. J. Groff, Elementary School J. 61(2), 97 (1960).
3. R. R. Wallace & J. H. Schomer, Education 114(3) (1994).

Monday, February 18, 2013

Folk tales show how culture spreads

Here’s another Nature News piece – there’s evidently a lot of language about (and more in the pipeline…).


It’s harder to transmit stories than genes across linguistic barriers

Have you heard the story of the good and bad sisters? They leave home; the good sister is kind to the people and animals she meets, and is rewarded in gold. The bad sister is haughty and greedy, and is rewarded with a box of snakes.

This is a familiar folk tale in European culture. But how similar your version is to mine depends on how far apart we live and how ethnically and linguistically different our cultures are, according to a study by a team of researchers in Australia and New Zealand published in the Proceedings of the Royal Society B [1]. They have identified what makes the transmission of cultural traits and artefacts, such as folk tales, similar to and different from the transmission of genes.

Like genes, say psychologist Quentin Atkinson of the University of Auckland and his collaborators, folk tales get passed from group to group – and the more distant two groups are, the less similarity their genes and stories possess.

“The geographic gradients we found are similar in scale to what we see in genetics, suggesting that there may be parallel processes responsible for mixing genetic and cultural information”, says Atkinson.

“But the mechanisms aren’t identical”, he adds. “The effect of ethnolinguistic boundaries is much stronger for the folktales than for genes.” This fits with recent studies looking at other aspects of culture, such as song [2]. “Our findings support predictions that cultural variation should be more pronounced between groups than genetic variation”, says Atkinson.

“This supports the view that our cultures act almost like distinct biological species”, says evolutionary biologist Mark Pagel of the University of Reading, a specialist in cultural transmission. “Our cultural groups draw pretty tight boundaries around themselves, and can absorb genetic immigrants without absorbing their cultures.”

Atkinson and his colleagues figured that the ubiquity of folk tales would make them a good proxy for cultural exchange. “Folktales can be transmitted over the world”, says folklore specialist Hans-Jörg Uther of the University of Göttingen in Germany. “The plot can stay the same while characters and other attributes change to match the cultural traits of the region.”

The researchers used the statistical tools of population genetics to investigate variations between versions of ‘The kind and the unkind girls’ across many European cultures, from Armenian to Scottish, Basque and Icelandic.

“This tale is widely known, and we were able to locate a large, well-documented collection that spanned all of Europe”, says Atkinson: about 700 variants in all. “For example, some stories involve two cousins or brothers rather than daughters, in others it is a daughter and servant girl.” The researchers built on well established methods of enumerating these differences.

If folk tales simply spread by diffusion, like ink blots spreading through paper, one would expect to see smooth gradients in these variations as a function of distance. But instead the team found that ethnolinguistic differences between cultures create significant barriers.

These barriers are greater than those for gene flow. You could say that the attitude is “I’ll sleep with you, but I prefer my stories to yours.”
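A toy calculation makes the contrast concrete. In the sketch below, similarity between two groups’ versions of a tale decays exponentially with distance, and each ethnolinguistic boundary crossed multiplies it by an extra penalty; the decay length and penalty factor are invented for illustration, not the study’s fitted values.

```python
import math

def similarity(distance_km, boundaries_crossed,
               decay_km=2000.0, boundary_penalty=0.5):
    """Toy model of the paper's finding: similarity falls off smoothly
    with distance, and drops by an extra factor for each ethnolinguistic
    boundary crossed. All parameter values here are illustrative."""
    return math.exp(-distance_km / decay_km) * (boundary_penalty ** boundaries_crossed)

# Same distance apart, but a language boundary in between halves the similarity:
print(similarity(1000, 0))  # neighbours sharing a language
print(similarity(1000, 1))  # neighbours across a linguistic divide
```

For genes, on this picture, the boundary penalty would sit much closer to 1: distance still matters, but the border barely does.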

Uther finds the work interesting, but he is “a little bit sceptical about comparing variants while neglecting their historical context and mode of performance.” He suspects that, as digital archives of folk tales become increasingly available, they will provide a valuable tool for making comparative and evolutionary studies of culture more quantitative.

1. Ross, R. M., Greenhill, S. J. & Atkinson, Q. D. Proc. R. Soc. B doi:10.1098/rspb.2012.3065 (2013).
2. Rzeszutek, T., Savage, P. E. & Brown, S. Proc. R. Soc. B 279, 1606-1612 (2011).

Rooting out the mother tongue

Catching up after winter bugs… here is an article for Nature News from a week or so back.


Computer algorithm reconstructs ancient languages

On Fiji, a star is a kalokalo. For the Pazeh, an aboriginal people of Taiwan, it is mintol, and for the Melanau people of Borneo, biten. All these words are thought to come from the same root – but what was it?

An automated computer algorithm devised by researchers in Canada and California now offers an answer – in this case, bituqen. The program can reconstruct extinct ‘root’ languages from modern ones, a process that has previously been done painstakingly ‘by hand’ using rules of how linguistic sounds tend to change over time.

Statistician Alexandre Bouchard-Côté of the University of British Columbia and his coworkers say that, by making the reconstruction of ancestral languages much simpler, their method should make it easier to test hypotheses about how languages evolve. They report their technique in the Proceedings of the National Academy of Sciences USA [1].

Automated computer methods like this have been attempted before, but the authors say these were rather intractable and prescriptive. The method of Bouchard-Côté and colleagues can factor in a large number of languages to improve the quality of reconstruction, and it uses rules that handle possible sound changes in flexible, probabilistic ways.

The method requires a list of words in each language, with their meanings, and a map of the phylogenetic ‘language tree’ showing how each language is related to the others. These trees are routinely constructed today by linguists using techniques borrowed from evolutionary biology.

The algorithm can automatically identify cognate words (ones with the same root) in the lexicons. It then applies rules known to govern sound changes to deduce the likely root of each set of cognates. For example, phonemes that are always paired will tend to get condensed into one if this doesn’t lose any semantic information.

The algorithm involves millions of parameters, whose values are found by an automated shuffling process that seeks the simplest fit to the data. It’s a little like cracking a code, based on a series of encoded phrases, by trying out possible bits of cipher and working your way towards one that gives a plausible solution to all the phrases.
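That search can be sketched with a toy example (my own, not the authors' algorithm): treat the unknown root as a string, and hill-climb by random mutations towards a candidate that minimizes the total edit distance to the attested cognates. The real method replaces raw edit distance with probabilistic models of sound change, and fits millions of parameters jointly rather than one word at a time.

```python
import random

def levenshtein(a, b):
    # standard dynamic-programming edit distance between two strings
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def cost(root, cognates):
    # total 'explanation cost' of a candidate root: here, summed edit distance
    return sum(levenshtein(root, w) for w in cognates)

def mutate(word, alphabet, rng):
    # one random substitution, insertion or deletion: the 'shuffle'
    ops = ["sub", "ins"] if len(word) < 2 else ["sub", "ins", "del"]
    op = rng.choice(ops)
    w = list(word)
    if op == "ins":
        w.insert(rng.randrange(len(w) + 1), rng.choice(alphabet))
    elif op == "sub":
        w[rng.randrange(len(w))] = rng.choice(alphabet)
    else:
        del w[rng.randrange(len(w))]
    return "".join(w)

def reconstruct(cognates, steps=2000, seed=1):
    rng = random.Random(seed)
    alphabet = sorted(set("".join(cognates)))
    # start from the attested form that already fits best
    best = min(cognates, key=lambda w: cost(w, cognates))
    best_cost = cost(best, cognates)
    for _ in range(steps):
        cand = mutate(best, alphabet, rng)
        c = cost(cand, cognates)
        if c <= best_cost:  # accept improvements and sideways moves
            best, best_cost = cand, c
    return best
```

Crude as it is, the toy captures the shape of the procedure: propose a small random change, keep it if it explains the data at least as well, and repeat until nothing better turns up.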

The researchers tested their approach on 637 Austronesian languages spoken primarily on islands in Southeast Asia and the Pacific, including Malaysia, the Philippines and Indonesia. Manual methods have previously been used to reconstruct the protolanguage of this large group, thought to have come originally from Taiwan.

Bouchard-Côté and colleagues found that the predictions of their algorithm matched those of the manual method in 85 percent of cases (including bituqen). “Our system only uses a subset of the factors taken into consideration by a linguist, so we feel most of the discrepancies reflect things to be improved in our method”, admits Bouchard-Côté.

“It looks as though this method could be a very useful labor-saving device in some cases”, says linguist Don Ringe of the University of Pennsylvania. But he cautions that methods which are “correct or nearly correct in about 85% of the cases will never be good enough. Our reconstructions might be no better than an approximation, and if we settle for what look like approximations even to us, we might be plain wrong.”

Bouchard-Côté and colleagues used the method to test a hypothesis about language evolution first proposed in 1955 [2], which states that sounds that are particularly important for distinguishing words from each other are more resistant to change. Any such pattern is almost impossible to spot for just a few languages, but it emerged clearly from the data set of 637 languages.

There had previously been some scepticism about this so-called ‘functional load hypothesis’, and Ringe says that “the demonstration that there might be something to it after all is interesting.”

He adds that “it’s refreshing to find colleagues in other disciplines tackling a problem that historical linguists actually care about.”

1. Bouchard-Côté, A., Hall, D., Griffiths, T. L. & Klein, D. Proc. Natl Acad. Sci. USA doi:10.1073/pnas.1204678110 (2013).
2. Martinet, A., Économie des Changements Phonétiques (Maisonneuve & Larose, Paris, 1955).

Monday, February 11, 2013

On Growth and Form

Here’s an article on D’Arcy Thompson’s classic On Growth and Form, published in the “In Retrospect” slot of this week’s Nature. I am nearly through, at last, with the writing and editing of a forthcoming special issue of Interdisciplinary Science Reviews on Thompson and his influence, out in March (if we're lucky).


On Growth and Form
D’Arcy Wentworth Thompson
Cambridge University Press, 1917; revised and expanded version 1942.

Like Newton’s Principia, D’Arcy Thompson’s On Growth and Form is a book more often cited than read. They are both hefty tomes – Thompson’s revised edition in 1942 weighed in at over 1,000 pages, to the considerable alarm of Cambridge University Press when the long-awaited revision finally arrived.

And both books stand apart from their age. Each contains ideas ahead of its time, yet seems at the same time rooted in earlier traditions. On Growth and Form was first written in 1917, when the Modern Synthesis of Neodarwinian biology was still one or two decades away and genes themselves a nascent concept. How much of it remained relevant even by the time of the second edition, let alone today?

Thompson’s agenda is captured in the book’s epigraph by statistician Karl Pearson (published in Nature in 1901): “I believe the day must come when the biologist will – without being a mathematician – not hesitate to use mathematical analysis when he requires it.” Thompson presents mathematical physical science as a shaping agency that may supersede natural selection, showing how the structures of the living world often echo those in inorganic nature.

Thompson’s conversion to a mathematical biology was, like the man himself, idiosyncratic. The son of a Cambridge classicist, he went to Edinburgh to study medicine before switching to zoology at Cambridge – the same trajectory as Darwin. There he supplemented his income by teaching Greek, but, failing to secure an appointment at Trinity, he returned to Scotland as (in effect) a marine biologist at the University of Dundee. His frustration at the ‘Just So’ explanations of morphology offered by Darwinians burst out in an 1894 paper presented at the British Association meeting, “Some difficulties of Darwinism”, in which he argued that physical forces, not heredity, may govern biological form.

On Growth and Form elaborated at length on this theme. It was written by 1915 but delayed by the First World War, and by the time it was published Thompson had moved a few miles down the bleakly beautiful North Sea coast to the University of St Andrews. “We want to see how, in some cases at least”, he wrote, “the forms of living things, and of the parts of living things, can be explained by physical considerations, and to realise that in general no organic forms exist save such as are in conformity with physical and mathematical laws.”

Thompson’s demonstration of this claim takes him through a formidable range of topics: to name a few, the scaling laws of growth, flight and locomotion, the shapes of cells, bubbles and soap films, geometrical compartmentalization and honeycombs, corals, banded minerals, radiolarians, mollusc shells, antlers and horns, plant shapes, bone microstructure, skeletal mechanics, and the morphological comparison of species. Perhaps the central motif is the logarithmic spiral, which appears on the slate plaque commemorating Thompson’s former residence in St Andrews. He saw it first in the whorled form of foraminifera, and again in seashells, horns and claws, insect flight paths and phyllotaxis. This, to Thompson, seemed to imply some general principle of growth operating throughout nature: evidence of the universality of form and the reduction of a diverse array of phenomena to a few mathematical governing principles.

How much of this has stuck? It’s common for evolutionary and developmental biologists today to genuflect to his breadth and imagination while remaining sceptical that he told us much of lasting value about why living things look the way they do.

Thompson was reacting against the Darwinism of his age and not ours: against the first flush of enthusiasm, when it seemed adequate to account for every biological feature with a reflexive plea to adaptation. Thompson’s insistence that biological form had to make sense in engineering terms was a necessary reminder, but did not fundamentally challenge the idea that natural selection was evolution’s scalpel – it merely imposed constraints on the form that might emerge. “I have tried to make [the book] as little contentious as possible”, he wrote when he sent the manuscript of On Growth and Form to the publisher in 1915. “That is to say where it undoubtedly runs counter to conventional Darwinism, I do not rub this in, but leave the reader to draw the obvious moral for himself.”

Thompson believed that evolution could sometimes advance in a leap rather than a shuffle – still a hotly discussed issue today. But although On Growth and Form does not offer anything inconsistent with Neodarwinian genetics, the debate that Thompson tried to initiate about contingency versus necessity in biological form has never been resolved, and in some ways has not really yet been engaged. There are still biologists who believe that almost every feature of an organism must be adaptive. There are still unresolved questions about how deterministic the course of evolution is.

This is one reason to keep On Growth and Form in the canon. Another is the modern appreciation of self-organization as a means of developing complex form and pattern from simple physical rules. Here D’Arcy Thompson is not as easy to enlist as a prophet as one might expect. Many of the systems he looked at, such as the formation of the chemical precipitation patterns called Liesegang rings, the striped markings of animals, and the formation of polygonal crack networks, are now firmly recognized as paradigmatic examples of spontaneous self-organization in complex systems. But Thompson is often foxed by such things, giving them only a glancing mention while either confessing that he has no real explanation or assuming that it must be a simple one. He says of Liesegang rings, “For a discussion of the raison d’être of this phenomenon, the student will consult the textbooks of physical and colloid chemistry.” The student would have found little there in 1917, and some aspects of this chemistry are still being clarified.

The tradition from which On Growth and Form emerged was a rather different one: the biophysics and biomechanics of anatomists such as Wilhelm His and Wilhelm Roux. It is probably this strand that most securely ties Thompson to the present, for much of cell biology now centres on issues of how the mechanics of cell structures determine the fates and forms of tissues and the transport of constituents. Recent years have seen this somewhat undervalued aspect of biophysics becoming more integrated into the rest of molecular biology, as we come to realise how much mesoscale mechanics modulate gene and protein behaviour.

The precise legacy of On Growth and Form is, therefore, mixed. But much of the admiration expressed by fans like Stephen Jay Gould and Peter Medawar stems from a more general consideration: Thompson’s breadth of scholarship, coupled to his elegance of description. One doesn’t forget, reading his masterpiece, that he was a classicist as well as a scientist. There is more than a little of the antiquarian collector of curiosities in his persona. And at a time when science was succumbing to the specialization that has now become something of a liability, Thompson showed the value of synoptic thinkers who were prepared to risk being quite wrong here and there for the sake of an inspirational vision. These people are always mavericks, our Lovelocks, Mandelbrots, Goulds and Wolframs, and they still tend to present their ideas with a broad brush in books, rather than a conventional succession of closely argued papers. They excite strong responses, and sometimes they are exasperating. But science must make sure they do not, in an age of Big Science, tenure battles and funding crises, become extinct.

The D'Arcy Thompson Zoology Museum at the University of Dundee Museum Collections is currently displaying the first works acquired through a grant from the Art Fund to build a collection of art inspired by Thompson’s work.

Get used to it

Here’s my latest piece for BBC Future, pre-editing. More to follow shortly.


Post-modernism has been pronounced dead even before many of us made our peace with modernism. Picasso we can handle now; James Joyce’s Ulysses gets grudging genuflection, even if few people actually want to read it. But mention Arnold Schoenberg’s atonal music and you’ll still set many music-lovers snarling about an “ungodly racket”. The Austrian composer’s dissonant chords, unleashed more than a century ago, are denounced as unnatural, a violation of what music is meant to be.

This aversion to ‘dissonance’ has been lent some apparent support by theories of music cognition which propose that we have an innate preference for consonance: for musical tones that sit together comfortably like the soothing harmonies of Mozart. But a team of psychologists in Melbourne, Australia, led by Neil McLachlan have just taken a hatchet to that idea. Their findings support Schoenberg’s contention that consonance and dissonance are merely matters of convention: every culture develops its own rather arbitrary rules for what sounds ‘right’ and ‘wrong’. The Australian team shows that perceptions of dissonance can be shifted with even just a little training.

This might surprise no one who takes a close interest in so-called ‘world music’, meaning anything outside the cultural hegemony of the Western tradition. There are plenty of cultures that enjoy listening to chords and harmonies that might jar the ear of anyone brought up on Berlioz or Bacharach, from the metallic timbres and unusual scales of Indonesian gamelan to the semitone intervals (two notes a semitone apart, like C-C#) of some Bosnian folksong.

Nonetheless, the notion that consonant chords fall more smoothly on the human ear is deeply rooted. Pythagoras claimed that the most perfect harmonies are those in which the component tones have sound frequencies related in simple mathematical ways. A musical pitch consists of a sound wave with a particular frequency – the number of acoustic waves excited each second. Pythagoras noted that combinations of notes thought to be pleasing and consonant – for example, in modern terms an octave or C-G – have frequencies related by ratios of small whole numbers, in those cases 1:2 and 2:3.

This seems to imply that Mozart’s consonances are merely observing a law of nature: they are dictated by acoustic physics. This idea was refined in the nineteenth century, when the German physiologist and physicist Hermann von Helmholtz took into account the fact that musical instruments don’t generate ‘pure’ notes with a single frequency, but complex notes in which the ‘fundamental’ frequency that we register is supplemented by a whole succession of overtones, which are whole-number multiples of the fundamental frequency.

Helmholtz argued that consonance depends on how well all of these overtones fit together for all the notes of a chord. If two pure tones with just very slightly different frequencies are played together, their acoustic waves interfere with one another to cause an effect called beating, in which we hear not two separate tones but a single tone that is rising and falling in loudness. If the frequency difference is very small, the beats are very fast, creating a rattling or grating sound called ‘roughness’ that seems genuinely unpleasant. Helmholtz worked out the amount of beating for all the pairings of notes in the Western scale and argued that there is less roughness for traditionally consonant pairs, which have fundamental frequencies in simple ratios.
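The beating effect is just trigonometry: the sum of two close sine waves equals a tone at their average frequency whose amplitude swells and fades at the difference frequency. Here is a quick numerical check of that identity – a sketch of the physics described above, with arbitrarily chosen frequencies, not values from any of the studies discussed:

```python
import math

# Two pure tones close in frequency: sin(2*pi*f1*t) + sin(2*pi*f2*t)
# equals 2*cos(pi*(f1-f2)*t) * sin(pi*(f1+f2)*t): a tone at the average
# frequency whose loudness rises and falls (the 'beats') |f1 - f2| times
# per second.
f1, f2 = 440.0, 444.0  # Hz: these should beat 4 times a second

def two_tones(t):
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

def beat_form(t):
    return 2 * math.cos(math.pi * (f1 - f2) * t) * math.sin(math.pi * (f1 + f2) * t)

# verify the identity numerically at a few sample times
for t in (0.0, 0.0007, 0.013, 0.21):
    assert abs(two_tones(t) - beat_form(t)) < 1e-9
```

When the difference frequency gets large enough, the ear stops hearing discrete beats and registers the grating 'roughness' described above.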

But it’s not that simple. For one thing, in the West notes are not tuned to have these simple ratios. The conventional equal-tempered scale, with equal steps between each successive note, is a compromise that slightly distorts frequency ratios compared to their ‘ideal’ Pythagorean tuning. Yet we don’t seem to mind. What’s more, Helmholtz’s idea implies that our sense of dissonance should depend on what instruments the notes are played on, since different instruments produce different overtones. But that’s not so. And the differences in roughness turn out to be insignificant for note pairs traditionally considered pleasing (C-F, say) and jarring (C-F#).
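The size of that compromise is easy to compute. An equal-tempered interval of n semitones has frequency ratio 2^(n/12); this sketch (illustrative values of mine, not from the article) compares a few intervals with their simple-ratio counterparts, measuring the gap in cents (1200 cents to the octave):

```python
import math

# Equal temperament spaces the octave into 12 equal frequency steps, so an
# interval of n semitones has ratio 2**(n/12). Compare with the 'ideal'
# simple whole-number ratios mentioned in the text.
just_ratios = {
    "octave": (12, 2 / 1),
    "perfect fifth": (7, 3 / 2),
    "perfect fourth": (5, 4 / 3),
}

for name, (semitones, just) in just_ratios.items():
    tempered = 2 ** (semitones / 12)
    # deviation from the just ratio, in cents
    cents = 1200 * math.log2(tempered / just)
    print(f"{name}: tempered {tempered:.5f}, just {just:.5f}, off by {cents:+.2f} cents")
```

The tempered fifth misses the 3:2 ratio by only about two cents – a distortion far too small, evidently, for most listeners to mind.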

For all these and other reasons, the question of whether dissonance is innate or learnt has remained hotly debated. One of the key questions is whether our perceptions shift as our musical experience evolves. Some studies have claimed that very young infants show a preference for traditional consonance, but it’s hard to rule out other influences in these studies, especially the possibility that we start learning, from encountering music at an early age (even in the womb), what is ‘normal’.

McLachlan and his colleagues have subjected these ideas to careful testing. Using a group of 66 volunteers from the university and music conservatory of Melbourne, with a range of musical training from none to lots, they have devised a suite of tests to look for the roles of Helmholtz-style overtone matching, and of learning, in our judgements of dissonance.

In the first test, they found that the subjects’ ratings of the dissonance of two-note chords were not significantly different whether the notes were pure, single-frequency tones or included various combinations of overtones. If beating were the cause of these judgements, the complex tones should have elicited a stronger sense of dissonance.

So much for Helmholtz. The team also tested and dismissed a somewhat related theory proposed in 1898 by the German philosopher Carl Stumpf, who argued that if the harmonics of the notes in a chord have several coincident frequencies, the brain interprets these as the overtones of a single pitch: it fuses the notes into one.

McLachlan has previously suggested that we ‘hear’ chords in a complex, two-stage process (N. M. McLachlan, Journal of the Acoustical Society of America 130, 2845–2854 (2011)). First, we pick out a single most salient pitch. Then, a long-term memory of the ‘quality’ of that chord – think of the instant recognition we have of a simple major or minor chord, even if we don’t know those terms – fills in the rest. Dissonance then arises as a sense of discomfort or unease when we don’t have a good ‘chord template’ to work from, and so experience expectations inconsistent with what we actually hear.

If that’s so, musical training should reduce the sense of dissonance, because this supplies a wider, more varied range of common ‘chord templates’. That’s more or less what the researchers found, but with a curious addition: a little musical training seems to increase the perception of dissonance for less-common chords, whereas this effect vanishes with more training. It seems that non-musicians lose any real sense of right or wrong when on unfamiliar harmonic territory, while slightly trained musicians develop the somewhat rigid right/wrong distinction familiar in young learners, which relaxes with experience.

As these findings lend some support to McLachlan’s learning model of dissonance, they imply that perhaps we can learn to love what jars at first. In a final set of experiments, the Melbourne team showed that this is so. They asked the non-musicians to take ten daily sessions in which they trained to match the component pitches of certain two-note chords to single test pitches. This improved their ability to process the chords, and after the ten days they rated these chords as less dissonant than when they began.

These findings are sure to stir up more debate about why we find some music more dissonant than other music – and you can be sure they won’t be the last word. In the meantime, perhaps you should give Schoenberg another listen – or ten.

Reference: McLachlan, N., Marco, D., Light, M. & Wilson, S., Journal of Experimental Psychology: General, advance online publication doi:10.1037/a0030830 (2013). Paper here.

Monday, February 04, 2013

Painting with light

The Hayward Gallery’s new exhibition Light Show opened in London last week, and it is as spectacular as I’d expected. Reviews have been generally good so far. I have an essay on light technology and its uses in art in the exhibition catalogue, and have just posted an augmented version of that on my website here. The exhibition runs until 28 April, and is well worth a visit if you’re in town.

On a not unrelated issue, I have an article in New Scientist this week on art and image-making using caustics and related optics. I was hoping this might be accompanied by an online gallery of ‘caustic art’ images, which would have been spectacular – but sadly this didn’t prove possible. Those images, however, and others are in an extended version of the article that I have also posted on my website here.