Saturday, October 31, 2009

Can you ken?

I’ve just heard a lovely programme on BBC Radio 4 about the late, great Ken Campbell. Catch it while you can. I saw Ken perform several times, and it always left me giggling throughout – as did this programme. The phrase ‘true original’ gets overused, but there was no one for whom it is more apt. Ken had a consistent fascination with science, which is understandable in someone whose constant demand was ‘astound me’. He developed some wonderful routines around such exploits as his conversations with David Deutsch about the multiverse, and there’s a great moment in the radio show where Ken’s precision with words collides with the glibness of scientists’ habitual speech patterns: he was delighted, on asking someone at CERN if they were worried that they might generate an entire new universe in their particle collisions, to be told that it was ‘unlikely’. Ken captured the essence of the speculative extremes of physical theory when he would talk of how it required one not exactly to believe in such things as parallel universes, but simply to suppose them. ‘Can I suppose it?’, I remember him saying, immense eyebrows gesticulating. ‘Well yes, I can suppose it.’

I was thrilled to get Ken involved in the series of talks I arranged with Harriet Coles at Nature at the V&A Museum as part of the ‘Creating Sparks’ BA Festival in 2000. He was doing his Newton/Fatio routine at the time, and it quickly became clear that we were never going to achieve much in the way of tailoring to the themes of the series – he was going to do just what he was going to do. We didn’t much care, knowing that whatever he did would supply a great night – which it did (despite a few grumbles from people who’d been expecting a ‘science’ talk). When we chatted to Ken over tea afterwards, my wife and I figured that he would be exhausting to live with – and from the accounts of his daughter Daisy, that seems to have been the case, although it is nice (and a little surprising) to hear that it was mostly fun too. Ken was interested in the Paracelsus theatre project I was putting together at the time, and I always had the suspicion that he would have made a great Paracelsus. Even then I saw it as the ideal role for either him or the clown performer Gerry Flanagan, and I’m still filled with glee that I got Gerry to do it nearly a decade later. And while working with Gerry was a joy, I suspect working with Ken would have been a kind of inspirational nightmare. But it was wonderful to have briefly come into his erratic orbit.

Tuesday, October 27, 2009

Fear of music?

My piece on the cognition of atonal music has appeared in Prospect. I’m happy to say that it’s part of the site’s free content, but here it is below in extended, pre-edited form. I’m glad to see that it is drawing some comments. Not everyone is going to like what it says, for sure. But Tali Makell gets the wrong end of the stick – I’m not attacking the music of Schoenberg and his school. I’m a fan of most of this music, and think Berg’s Lyric Suite is a masterpiece. And of course I never said that Schoenberg et al. used total serialism. It’s absolutely right that they used some of tonality’s organizing principles, and to great effect. But there are problems with that in itself, as Roger Scruton has pointed out, since some of these structural principles are a direct consequence of tonality itself, and lose their meaning when taken out of that context. Besides, my view is that Schoenberg’s serialism was here acting as a convenient compositional tool that made it a little easier for the composers to use atonality – basically it was a scheme that reduced the effort needed to avoid tonality, and not one that actually brought any profundity or musicality in itself. These composers put that back in in other ways – with dynamics and so forth. There was nothing inevitable about Schoenberg’s method, and the justifications for it given by Adorno, and indeed by Schoenberg himself, are largely bogus. But that doesn’t make serialism intrinsically ‘bad’ as a way of composing. It’s when serialism becomes total that the problems really start (both musically and ideologically).

I hope all of this will be made clearer in my forthcoming book The Music Instinct. I am aware, however, that I have not addressed either here or there the defence of serialism made by Morag Josephine Grant in her book Serial Music, Serial Aesthetics (CUP). Here she attacks Fred Lerdahl’s critique of serialism (and other modern compositional methods), which is based on his linguistic approach to music cognition. Lerdahl believes that music needs to have an audible ‘grammar’: as he says, ‘The musical surface must be available for hierarchical structuring by the listening grammar’. In short, if we can’t construct hierarchies of pitch, rhythm and so forth, then the ‘musical surface’ is shallow: everything just sounds like everything else. This is the thrust of my Prospect piece, and I think it is true: for music to be more than a collection of mere sounds, there needs to be some audible way of organizing it. Grant complains that Lerdahl is forcing music into a straitjacket: she says his argument ‘that musical language, like spoken language, is generative in structure excludes the possibility of other, non-hierarchical methods of achieving musical coherence’.

But Grant is not, as it might seem, rejecting the need for music to acknowledge cognition. Rather, she asserts that this was precisely the concern of the serialists. They, however (she says), felt that our perception was culturally conditioned, and that we have ‘the ability to develop or uncover previously suppressed abilities’. One of those is the recognition of the tone row itself: ‘the use of the row is itself a constraint, not just on the composer, but in the aid of comprehensibility as well.’ Lerdahl, Grant says, overlooks this: serialism imposes its system precisely in order to aid cognition.

And this is where Grant’s thesis utterly falls apart. For it has been shown quite clearly now that serialism’s ‘system’ is not one that can be cognized. It exists only on paper; listeners simply don’t hear it. And the reason for this is that it is simply not the kind of system that the mind intuits: we don’t listen to music by remembering arbitrary sequences of notes, but rather – as Lerdahl says – we organize those notes into hierarchical patterns: hierarchies of pitch, rhythm, melodic contour and so forth.

Can musical coherence be achieved by non-hierarchical methods? I’m not sure anyone knows, but certainly Grant provides no evidence of this – it is an act of faith. And what, precisely, are those methods, if we must exclude the tone row itself as one of them? She doesn’t say. She does, however, say that ‘the intense concentration on the tiniest of fluctuations’ is ‘central to the hearing of serial music’. I’m not sure what this means: fluctuations in what? The fluctuations in rhythm are either rather traditional, as in Berg, or are so extreme (as in Boulez) that rhythm has no meaning. Fluctuations in pitch, as in deviations from the tone row, would not be heard even if they were permitted. Grant also says that serialism ‘has the ability to create structures specific to each of its utterances’. Again, make of that what you will. If it means that each composition, each tone row, creates its own private language, then we know what Wittgenstein had to say about private languages. Why can’t Grant just explain how we are meant to find coherence in serial music, other than by its utilization of traditional techniques in parameters other than pitch?

I suppose one might interpret her remarks as saying that serialism encourages us to focus on each event as an entity in itself, not as something embedded in a hierarchical grammar. That’s possible. It might be interesting. And it is what I refer to below as sound art. If that’s the intention, it seems to me to be an explicit admission that ‘coherence’ isn’t the aim at all. And to my mind, coherence is the one characteristic that music should possess – I don’t care if you ditch tonality, or rhythm, or melody, or harmony, so long as what remains is in some manner coherent.

Incidentally, and in the hope of not sounding horribly patronizing, I have a lot of time for what Grant is up to more broadly. Defending Stockhausen is a noble cause in itself, even if I don’t buy it, and anyone who does so while listening to Scottish folk music and rap wins my vote.

Anyway, here’s the piece:


Writer Joe Queenan recently caused a minor rumpus in the austere world of contemporary classical music by complaining about how painful much of it is. He called Luciano Berio’s 1968 Sinfonia “35 minutes of non-stop torture”, Stockhausen’s Kontra-Punkte (1953) “a cat running up and down the piano”, Krzysztof Penderecki’s Threnody for the Victims of Hiroshima (1960) “belligerent bees buzzing in the basement”, and Harrison Birtwistle’s latest opera The Minotaur “funereal caterwauling”. A hundred years after Schoenberg, he said, “the public still doesn’t like anything after Transfigured Night, and even that is a stretch.”

Inevitably, Queenan was lambasted as a reactionary philistine. Performances of ‘modern’ works like this are well attended, his critics said. And while Queenan took pains to distance himself from the conservative concert-goers who demand a steady diet of Mozart and Brahms, his comments were denounced as the same old clichés. The problem is that, like most clichés, they become such by frequent use. Sure, these works will find audiences in London’s highbrow venues, but the fact remains that Stockhausen and Penderecki, whose works now are as old as ‘Rock Around the Clock’, have not been assimilated into the classical canon in the way that Ravel and Stravinsky have. When someone like Queenan has earnestly tried and failed to appreciate this ‘new’ music, it’s fair to ask what the problem is.

David Stubbs considers this important question in his new book Fear of Music (Zero, 2009) but doesn’t come close to answering it. His speculative suggestion – that music lacks an ‘original object’ that, in visual art, can become the subject of veneration or trade – clearly has little force, given that it must surely apply equally to Beethoven and Berio. Indeed, Stubbs’ analysis is part of the problem rather than part of the solution. Like economists trying to understand market crashes, he wants to place all the motive forces outside the system: his gaze never fixes on the music itself. To Stubbs, your responses to music are a function of your context and perspective, not of the music. His comparisons of visual and musical art assume an equivalence that allows no possibility of their being cognitively distinct processes.

He is in good company. Social theorist Theodor Adorno’s advocacy of Schoenberg’s atonal modernism was politically motivated: tonality was the bastion of bourgeois complacency. To the hardline modernists of the 1950s and 60s, any hint of tonality was a form of recidivism to be denounced with Maoist vigour; Pierre Boulez refused for a time even to speak to tonal composers. American composer Milton Babbitt’s provocative 1958 essay ‘Who Cares if You Listen?’ argued that it was time for ‘serious’ composers to withdraw from public engagement altogether, while offering nothing in the way of explanation for the public’s antipathy to ‘difficult’ music (his included) except a belief that they were too ill-informed to understand it. After giving a lecture on the music of Boulez and Elliott Carter, the eminent pianist and critic Charles Rosen responded thus to a question from the audience about whether composers have a responsibility to write music that the public can understand: “On such occasions I normally reply politely to all questions, no matter how foolish, but this time I answered that the question did not seem to me interesting but that the obvious resentment that inspired it was very significant indeed.”

No one can deny that audiences are conservative, whether they be Parisians rioting at the première of the Rite of Spring in 1913 or punks lobbing bottles at art-rockers Suicide on tour with the Clash. And since questions like this one are often a coded demand that composers start writing ‘real music’ like Mozart did, Rosen can be forgiven some impatience. Stubbs is justifiably indignant that even fans of conceptual art will parrot trite witticisms about the ‘cacophony’ of much experimental music.

But the understanding of the cognitive mechanisms of music that has emerged in the past several decades implies that it is not enough to tell ingrates bemused by Stockhausen to knuckle down and try harder. Many musicologists accept a definition of music as ‘organized sound’ (ironically, since this was the description used by avant-garde electronic pioneer Edgar Varèse for his own musique concrète, a paradigm of all that is seen as distressing about ‘modern’ music). Yet sound does not become organized merely because the composer has used a system to arrange it. Sound is structured into music not on paper, nor even in the mind of the composer, but in the mind of the listener. So music is sound in which the organization is audibly perceptible, not just that in which it is theoretically present.

Our brains use rules of thumb, both learnt and innate, to arrange an acoustic signal into a coherent entity: to pick out key, melody and harmony, to identify rhythm and metre, and to create a sense of structure and logic. The traditional music of just about every culture on earth builds in elements that assist this decoding process. When we encounter unfamiliar music, we may need to adjust our decoding rules, or learn new ones, before we can truly hear it at all.

Chief among these rules are the ‘Gestalt principles’ identified by a group of German-based psychologists in the early twentieth century. Initially identified in visual processing, these principles help us make good guesses at how to interpret complex sensory stimuli. We make assumptions about continuity, for example: the aeroplane that flies into a cloud is the same one that flies out the other side. We group together objects that look similar, or that are close together. Although the Gestalt principles are not foolproof, they make the world more comprehensible. Both in sound and in vision, the ability to interpret sensory data this way must have had clear evolutionary benefits.

In music, this means that melodies that move in small pitch steps tend to sound unified and ‘good’, while ones with large pitch jumps are liable to seem fragmented and harder to make out. Traditional melodies in diverse cultures do indeed proceed mostly in small rather than large pitch steps. Regular rhythms also contribute to coherence, while erratic ones are apt to confuse us.

The composer’s job is to manipulate the expectations that these principles produce – enough to avoid predictability and create a lively musical surface, but not so much as to lose coherence. Out of the interplay between expectation and reality comes much of music’s capacity to excite and move us. But what happens if these rules are seriously undermined? In Boulez’s Structures I or Stockhausen’s Klavierstück VII, say, there is no discernible rhythm, and the ‘melody line’, if one can call it that, is as jagged as the Dolomites. In this situation, we can develop no expectations about the music, and this absence of an audible relationship between one note and the next cuts off a key channel of musical affect. What remains may be a temporarily diverting sound, but the indulgent listener risks becoming like the sentimental audiences about whom nineteenth-century music theorist Eduard Hanslick complained, wallowing in the sonic surface while oblivious to the musical details.

And yet how can Structures lack structure? It is one of the most ‘structured’ pieces of music ever written! It was composed using the technique of ‘integral serialism’, in which musical parameters such as pitch, dynamics and rhythm are prescribed along the lines of Schoenberg’s ‘twelve-tone’ method, introduced in the 1920s. In Schoenberg’s original formulation, this approach dictated only the choice of pitches, and it was meant to eliminate all vestige of tonality – the anchoring of a piece of music to a tonic centre, which enables us to assign it a particular key. Schoenberg considered that tonal music – which meant all Western music until that point – had become tired and formulaic, and serialism was supposed to offer a systematic way of composing atonally.

The composer first chose a tone row: all twelve notes of the chromatic scale, lined up in a particular order. This was the composition’s basic musical gene: the piece was made up of repetitions of the tone row in strict order, sounded simultaneously or in succession. Individual notes could be immediately repeated, and could be used in any octave. And various mathematical permutations of the tone rows, such as reverse order, were also permitted.
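As an illustrative aside (a minimal sketch of my own, in Python, with hypothetical helper names; nothing of the sort appears in the piece), the bookkeeping is easy to spell out: a tone row is just an ordering of the twelve pitch classes, and the permitted transformations (transposition, retrograde, inversion) all preserve the property that no pitch class occurs more often than any other.

```python
# Illustrative sketch only: build a twelve-tone row and its standard transformations,
# and check that each form still uses every pitch class exactly once.
import random

PITCH_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def random_tone_row(seed=None):
    """Choose a tone row: the twelve chromatic pitch classes in some order."""
    rng = random.Random(seed)
    row = list(range(12))
    rng.shuffle(row)
    return row

def transpose(row, interval):
    """Shift every pitch class by the same interval (mod 12)."""
    return [(p + interval) % 12 for p in row]

def retrograde(row):
    """State the row in reverse order."""
    return row[::-1]

def inversion(row):
    """Mirror every interval about the row's first note."""
    first = row[0]
    return [(2 * first - p) % 12 for p in row]

def as_names(row):
    return " ".join(PITCH_NAMES[p] for p in row)

if __name__ == "__main__":
    prime = random_tone_row(seed=1)
    forms = [("P0", prime),
             ("P5", transpose(prime, 5)),
             ("R0", retrograde(prime)),
             ("I0", inversion(prime)),
             ("RI0", retrograde(inversion(prime)))]
    for label, form in forms:
        # Each form contains all twelve pitch classes exactly once, so no note
        # can acquire tonic status merely by repetition.
        assert sorted(form) == list(range(12))
        print(label, as_names(form))
```

The labels P, R, I and RI follow the conventional shorthand for the prime, retrograde, inversion and retrograde-inversion forms of the row.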

The twelve-tone method ensures that no note is used more often than any other, so that none can acquire the status of a tonic merely by repetition. By the 1950s serialism had become, in many leading schools of classical composition, the only ‘respectable’ way to compose; anything hinting at tonality was considered passé and bourgeois. Yet Schoenberg not only failed to justify his horror of tonality – a composer like Béla Bartók displayed remarkable dissonant invention without abandoning it – but more importantly, he never truly came to terms with what its abandonment implied for both composer and listener. Since atonal music has no tonal ‘home’, there is nowhere to depart from or return to, so that beginnings, endings, and the entire matter of large-scale structure become problematic. As Roger Scruton says, ‘When the music goes everywhere, it also goes nowhere.’

Tonality is also one of the pillars of music comprehension. Far from being a decadent Western device, it is used in just about every musical tradition in the world (it does not rely on Western scales). Cognitive studies have shown how tonality provides a sense of location in pitch space and a way to organize the sequence of notes. It is the removal of this, far more than any considerations of harmony and dissonance, that many listeners find disconcerting in serialism.

This is not to say that atonality in general, and serialism in particular, is doomed to sound aimless and incomprehensible. There are plenty of other parameters, such as rhythm, dynamics and timbre, that a composer can deploy to create coherent structures. Schoenberg often did so masterfully, and Alban Berg’s Lyric Suite (1925-6) is so beautifully wrought that one would hardly know it was a twelve-tone composition at all. But as integral serialism and other techniques progressively and systematically subverted other means of providing audible organization, so it was unsurprising that audiences found the music ever harder to ‘understand’. The serialist’s rules are not ones that can be heard – even specialists in this music can rarely hear tone rows as such. Boulez’s serial piece Le Marteau sans Maître was widely acclaimed when premiered in 1955, but it wasn’t until 22 years later that anyone else could figure out how it was serial: no one could deduce, let alone hear, the organizational ‘structure’. One can hardly blame audiences for suspecting that what is left is musically rather sparse.

This is not to imply that music must return to tonal composition, with its cadences and modulations (although that is to some degree happening anyway). But ‘experimental’ music can only qualify as such if, like any experiment, it includes the possibility of failure. If musical composition takes no account of cognition – if indeed it denies that cognition has any role to play, or determinedly frustrates it – then composers cannot complain when their music is unloved.

Sadly, although these difficulties afflict only one strand of modern classical music, the fact that it was once dominant means that all the rest tends to get tarred with the same brush. Its critics often fail to differentiate music lacking clear cognitive ‘coherence systems’ from that which has new ones. What Javanese gamelan experts Alton and Judith Becker say of non-Western music pertains also to much contemporary experimental music: “it has become increasingly clear that the coherence systems of other musics may have nothing to do with either tonality or thematic development… What is different in a different musical system may be perceived, then, as noise. Or it may not be perceived at all. Or it may be perceived as a ‘bad’ or ‘simple-minded’ variant of my own system.” Often the only thing that stands in the way of comprehension, even enjoyment, is a refusal to adapt, to realise that it is no good trying to hear all music the way we hear Mozart or Springsteen. We need, in the parlance of the field, to find other ‘listening strategies’. György Ligeti’s works, for instance, can be appreciated as some of the most thrilling and inventive of the twentieth century once we realise that they handle time differently. Musicologist Jonathan Kramer describes this as ‘vertical’ rather than ‘horizontal’ time: musical events do not relate to one another in succession, like call and response, but are stacked up into sonic textures that slowly mutate and take on almost tangible forms.

It would arguably benefit all concerned if some experimental music, like much of Stockhausen’s or Boulez’s oeuvre and certainly the ambient noises of John Cage’s notoriously ‘silent’ 4’33”, were viewed instead as ‘sound art’, a term coined by Canadian composer Dan Lander and anticipated by the Italian futurist Luigi Russolo’s 1913 manifesto The Art of Noises. That way, one is not led to expect from these compositions what we expect of music. For if music is not acknowledged as a mental process, sound is all that remains.


Added note: The comments continue on the Prospect site, and make interesting reading. Of course, there are the inevitable blogosphere crazies. Will Orzo thinks he should tell me about this field called music cognition, in which people study other people’s responses to music. Thanks Will – hey, maybe I should use some of that work in my book on music cognition! Seriously, though, anyone who actually knows this field, as opposed to having looked it up on Wikipedia, would see straight away that this is precisely what I’m drawing on in my claims about how atonalism is perceived, especially the work of Fred Lerdahl, Carol Krumhansl and David Huron. If I am regurgitating anyone’s opinions, it is theirs. If you want references, look up my article in Nature last year (453, 160).

As for Joe Schmoe – anyone figure out who he’s ranting against? Sometimes it seems to be Stubbs, sometimes Adorno, sometimes Babbitt, sometimes me. An angry man. But incoherently so.

Tuesday, October 20, 2009

The bioethics of human cloning

It’s a funny business, bioethics. I’d never really looked into what it is that people called bioethicists do before now – but the more I do so, the more it feels that their job is basically to offer personal opinions with a professionalized aura. There is nothing intrinsically wrong with that – practising scientists tend to spend too little time thinking hard about ethical issues (beyond the basic stuff of plagiarism, fabricating data and so forth), so it is good that someone does it. And when this kind of ‘op ed’ discourse is conducted with considered philosophical rigour, and/or informed by a humane and open-minded perspective, it seems potentially to have a lot to recommend it. But from what I’ve seen so far, it strikes me as a very mixed bag.

I’ve recently been grappling with the views of Laurie Zoloth on human cloning. Zoloth is no ringside commentator, but wields considerable clout as the Director of the Center for Bioethics, Science and Society at Northwestern University. And that is what makes her position so surprising.

Her position is laid out in ‘Born again: faith and yearning in the cloning controversy’, a chapter in Cloning and the Future of Human Embryo Research, ed. P. Lauritzen (OUP, 2001). This essay is also available online (in more or less verbatim form) here. The title goes a long way towards articulating her position. Human cloning, she believes, is all about a yearning to avoid death, and thus a narcissistic impulse to produce a copy of oneself.

Now, this is very odd. You can imagine this being the impression of someone outside the mainstream of the debate, particularly someone intuitively opposed to the idea. And there are good reasons to feel unease at the idea of reproductive cloning in humans, even in the face of the cogent arguments that Ronald Green puts forward, in the same book, in its favour. But Green makes it clear that the principal motivation for reproductive cloning is as another variant of assisted conception, alongside conventional IVF. It might be used, for example, in cases where couples wanted to have a genetically related child but either the man or the woman had no gametes at all. One can see the arguments: is it, for example, really any more potentially confusing for a child to know fully about their genetic heritage (and parentage) than to know only half of that equation in the case of anonymous gamete donation? And cloning would also offer female couples who want to conceive a child the same potential advantage over sperm donation. Of course, that opens up another can of worms in some eyes, and all the more so if one considers male homosexual couples using surrogacy for gestation. But I’m not going to argue (here) about whether such reproductive cloning is justified; the point is that the parental wish for a genetically related child, rather than for a ‘copy’ of oneself, is the motive force behind arguments supporting it. Sure, it’s possible to contend that the wish for any genetic relation to a child is itself narcissistic – and this then is a kind of narcissism displayed by the majority of the human race, and universally accepted as a ‘natural’ human desire.

Zoloth also takes on the issue of cloning performed to ‘replace’ a dead child. She movingly describes a situation in which she could appreciate this wish, but in which she could also see that the better response was to confront the anguish of the loss. Notwithstanding the fact that there is no law or intervention that prevents parents from conceiving another child ‘normally’ in response to such a tragedy, I think few would dispute that attempts to ‘replace’ a dead child are never a healthy thing. But both here and in the case of efforts to cheat one’s own mortality through cloning, the simple fact is that such actions are deluded in any event from a scientific point of view. This isn’t, as Zoloth implies, a case of science offering the temptation and bioethicists advising us to resist. Any scientist worthy of the description who knows the first thing about genetics will be the first to point out that genetically identical individuals are not in any meaningful sense ‘the same’. In her comments here, Zoloth tacitly endorses the myth of genetic determinism that scientists are always at pains to dismantle.

It seems extraordinary that a leading bioethicist would labour under these misconceptions. Naturally, if you’re temperamentally opposed to human cloning then it makes strategic sense to pick the worst possible reason for doing it in order to argue the case against. But one might question the ethics of doing so, if done intentionally. If done unintentionally, the issue is then one of competence.

What I’ve noticed in several of the critiques of the new reproductive technologies and of human embryo research is a shameful evasion of plain speaking. Time and again, one can see where the argument is inevitably heading, but the critic will not spell that out, for what one can only assume is fear of saying something unpopular. Instead, they take refuge in woolly, wise-sounding rhetoric that masks the real message. They present their criticisms – and some are, without doubt, well motivated – but decline to explain what their alternative would be. So then, for example, Zoloth says that ‘advanced reproductive technology’ relies on the notion of infertility as a disease, which must then of course be ‘cured’. This is a valid criticism: there are real dangers of setting up a situation in which people consider it their ‘right’ to have any medical treatment that will offer them the chance of conceiving a child – and in the process having their condition pathologized. But to simply say this and leave it at that is to dismiss the plight of infertility altogether – to imply that ‘you just have to learn to live with it’, or perhaps, ‘you’ll just have to adopt then’ (from a greatly diminished pool, for both social and medical reasons). Zoloth nowhere acknowledges that infertility has always been seen as a problem – not just today but in the times to which she looks for the ‘wisdom of ages’. How can it possibly be that someone who makes so much of her Jewish heritage seems oblivious to the ‘disgrace’ that Rachel felt when she could bear Jacob no children (until God relented, that is)? (Mind you, Zoloth’s theology seems to hold other surprises – for can it really be the case that, as she suggests, Noah is now the object of rabbinical criticism for thinking only of his wife and children and not arguing with God about the injustice of destroying the rest of his community? Sounds like a good point to me, but can it really be the case that Jewish theologians now think one should be prepared to pick arguments with God? Wild.)

So then, the unspoken text of Zoloth’s essay is that infertility is a bad roll of the dice that you have to learn to put up with. At least, I think that’s what it is. She doesn’t put it like that, of course. She puts it like this: we must look for a ‘refinement of imperfection, not the a priori obliteration of imperfection. In this, we could serve to remind [sic] of something else: of the blinding power of human love, which sees and knows, right through the brokenness’. Got that?

If I’m right in my interpretation, this doesn’t seem to offer a great deal of empathy for people who encounter infertility. Oh, but that’s mild – for elsewhere Zoloth says that ‘The hunger of the infertile is ravenous, desperate’. There are more offensive things you can say to infertile people, but not by very much.

Well then, sic indeed – for there are times when you have to wonder quite what has happened to her prose and grammar. I wondered at first whether there might be a first-language issue here – if so, all criticisms are retracted – but it doesn’t look that way. Rather, one has to suspect the old post-modernist problem of language becoming a casualty of a reluctance to be truly understood. Sometimes this tension creates an utter car-crash of metaphors. For example, in telling us how parents must reject the desire for a ‘close-as-can-be-replica’ (see above), she says they must ‘learn to have the stranger, not the copy, live by our side as though out of our side’. As though what? What else can ‘out of our side’ evoke if not, after all, a clone? And indeed, the first of all clones, Eve made from Adam’s rib! Why plunge us into that thicket? Does she really mean to? Please, what is going on? (Notice here that ‘copy’ = product of one parent’s genome alone; ‘stranger’ = product of both parents’ genomes. There is some odd asymptotic calculus here, quite aside from the fact that a ‘copy’ of one parent’s genome is surely then far more of a ‘stranger’ to the other parent.)

Then how about this: ‘We need to reflect on the meaning not only of the performance gesture of cloning but also the act of the imagination that surrounds the act in popular culture.’ Now, here’s a statement I do actually endorse; but what a tortured way to put it. Indeed, that is the very aim, in a sense, of the book for which I’m reading all this stuff. And it’s therefore with some gladness of heart that I see Zoloth giving me material to work on. ‘The whole point of ‘making babies’’, she says, ‘is not the production, it is the careful rearing of persons, the promise to have bonds of love that extend far beyond the initial ask and answer of the marketplace.’ How true. And how interesting, then, that for her the hypothetical cloned human (and, to pursue the logic, already the IVF baby) becomes not a person who can be born and reared with love but a mere product of the marketplace. That assumption, that prejudice, is just the thing that interests me.

Wednesday, October 14, 2009

Google Books suits me fine

It seems that Google Books is one of the talking points of the Frankfurt book fair this year. Angela Merkel has waded into the fray to condemn the enterprise, citing its (potential) violation of copyright. As an author, I ought to be right behind such denouncement of this fiendish ploy to make our words freely available to all.

Maybe I’m naïve, but so far I think that Google Books is a rather wonderful thing. For a start, I’m not aware that there is any way to actually download and print the stuff – and who on earth is going to want to read it in this format online? And it seems that none of the books is provided in its entirety – there are pages missing, which would be infuriating if you do plan to read the lot. But more to the point, so far Google Books has encouraged me to actually buy some books that I’d not have bought otherwise: in the course of my research, I can find titles that I’d never known existed, get a good idea of their contents and make the decision about purchase via the online secondhand sellers. My only other option, if I’d discovered the books at all, would have been to make a trip into the British Library, by which point I’d have probably ended up reading them there rather than bothering to buy them. (Besides, if a book is truly relevant and interesting, I want to own it – and thanks to the wonders of the internet, it’s generally possible to do that for little more than the [inflated] cost of postage. Hopefully bookshops are benefiting from this too.)

And in the course of completing the endnotes section for my latest book, I’ve found Google Books a godsend. Inevitably there are quotes in my text that either I’ve not annotated correctly in my notes or for which I’ve never quite tracked down the original citation in the first place. With a text search in Google Books, I can locate them instantly in the books I have at home, rather than having to flick endlessly through the pages trying to find where the damned things were. Or I can, say, go straight to the original old texts by the likes of Walter Pater and discover the quote in its original context rather than at several removes. None of this does anything to deprive writers of sales, and indeed many of the relevant books are (at least in my case) old and out of copyright (and print), the authors long dead. As a result, I completed the endnotes in a couple of days, when they dragged on forever with my previous book. It was a telling indication of the way the technology has advanced, for the better, in just a couple of years. Personally, I’m looking forward to the Library of Babel that is Google Books expanding indefinitely.

Tuesday, October 13, 2009

Shaking hands with robots

[This is my forthcoming Material Witness column for Nature Materials, which seemed sufficiently low-tech to warrant inclusion here.]

Should robots pretend to be human? The plots of many science fiction novels and movies – most famously, Philip K. Dick’s Do Androids Dream of Electric Sheep?, filmed by Ridley Scott as Blade Runner – hinge on the consequences of that deception. Blade Runner opens with a ‘replicant’ undergoing the ‘Voight-Kampff’ test, in which physiological functions betray human-like emotional responses to a series of questions. This is a version of the test proposed by Alan Turing in a seminal 1950 paper pondering the question of whether machines can think [1].

But human-like thought (or its appearances) is only one aspect of the issue of robotic deception. There would be no need to test Blade Runner’s replicants if they had been made of gleaming chrome, or exhibited the jerky motions of a puppet or the stilted diction of an old-fashioned voice synthesizer. To seem truly human, a robot has to perform accurate mimesis on many (perhaps too many) fronts [2].

Today we might insist on a conceptual distinction between such mimicry and the real thing. But this was precisely what Turing set out to challenge in the realm of mind: if you can’t make the distinction empirically, in what sense can you say it exists? And in former times, that applied also to the other characteristics of humanoid machines. In the Cartesian world of the eighteenth century, when many considered humans to be merely elaborate mechanisms, it was not clear that the intricate automata which entertained salon society by writing and playing music and games were rigidly demarcated from humanity. Descartes himself rejected any such boundary, implying that automata were in a limited sense alive. In his Discourse on Method (1637) he even proposed a primitive version of the Turing test, based on the ability to use language and adapt behaviour to circumstance.

One of the most famous automata of that age was a mechanical flute player made by the virtuoso French engineer Jacques de Vaucanson, who unveiled it to wide acclaim in 1738. Not only did it sound right but its breathing mimicked human mechanics, and its right arm was upholstered with real skin [3]. This feat is brought to mind by a preprint by John-John Cabibihan at the National University of Singapore and colleagues, in which the mechanical properties of candidate ‘robot skin’ polymers (silicone and polyurethane) are tested for their likeness to human skin [4]. Can we make a robot hand feel human, the researchers ask? Not yet, at least with these materials, they conclude – in the process showing what a delicate task that is (a part of the feel of human skin, for example, comes from its hysteretic response to touch).

Underlying the research is the notion that people will be socially more at ease interacting with robots that seem ‘believable’ – we will feel queasy shaking hands if the touch is wrong. That’s supported by experience [5], but also in itself raises challenging questions about the proper limits of such illusion [6]. Arguably there are times when we should maintain an evident boundary between robot and person.

References
1. Turing, A. Mind 59, 433-460 (1950).
2. Negrotti, M. Naturoids (World Scientific, Singapore, 2002).
3. Stafford, B. M. Artful Science p.191-195 (MIT Press, Cambridge, Ma., 1994).
4. Cabibihan, J.-J., Pattofatto, S., Jomâa, M., Benallal, A. & Carrozza, M. C. Preprint http://www.arxiv.org/abs/0909.3559 (2009).
5. Fong, T., Nourbakhsh, I. & Dautenhahn, K. Robotics Autonomous Syst. 42, 143-166 (2003).
6. Sharkey, N. Science 322, 1800-1801 (2008).

Monday, October 05, 2009

What toys can tell us

[My latest Muse for Nature news…]

Sometimes all you need to do scientific research is string, sealing wax and a bit of imagination.

When Agnes Gardner King went visiting her uncle William one November day in 1887, she found him playing. He was, she wrote, ‘armed with a vessel of soap and glycerine prepared for blowing soap bubbles, and a tray with a number of mathematical figures made of wire.’ He’d dip these wire figures into the soap solution and see what shapes the soap films made as they adhered to the wire. ‘With some scientific end in view he is studying these films’, wrote Agnes [1].

Her uncle was William Thomson, better known as Lord Kelvin, one of the greatest scientists of the Victorian age. His ‘scientific end’ was to deduce the rules that govern soap-film intersections, so that he might figure out how to divide up three-dimensional space into cells of equal size and shape with the minimal wall area. It was the kind of problem that attracted Kelvin: simple to state, relevant to the world about him, and amenable to experiment using little more than ‘toys’.

This kind of study is brought to mind by a paper in the Proceedings of the National Academy of Sciences USA by George Whitesides and colleagues at Harvard University. In an effort to understand how polymer molecules fold and flex, they have built strings of beads and shaken them in a tray [2]. There are three types of bead: large spherical or cylindrical beads of Teflon and nylon, and small ‘spacer’ beads of poly(methyl methacrylate).

They are designed to mimic real polymers in which different monomer groups interact via forces of attraction and repulsion. When agitated on a flat surface to mimic thermal molecular motion, the Teflon and nylon beads develop negative and positive electrostatic charges respectively, and so like beads repel while unlike beads attract.
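To make the idea concrete, here is a minimal sketch of my own (not the model in the paper, and with made-up charge values and an arbitrary energy convention): assign the Teflon, nylon and spacer beads nominal charges, then score a candidate fold by summing the interactions of the bead pairs it brings into contact.

```python
# Toy sketch of the beads-on-a-string idea described above (my own illustration,
# not the paper's model): Teflon beads charge negative, nylon beads positive,
# small spacer beads are treated as neutral. Like charges repel, unlike charges
# attract, so a candidate fold can be scored by the contacts it creates.

CHARGE = {"T": -1, "N": +1, "s": 0}  # Teflon, nylon, spacer (nominal values)

def pair_energy(a, b):
    """Negative (favourable) for unlike charges, positive for like charges."""
    return CHARGE[a] * CHARGE[b]

def fold_energy(sequence, contacts):
    """Score a fold, given as a list of (i, j) index pairs brought into contact."""
    return sum(pair_energy(sequence[i], sequence[j]) for i, j in contacts)

if __name__ == "__main__":
    chain = "TsNsTsNsTsN"                  # alternating Teflon/nylon beads with spacers
    hairpin = [(0, 10), (2, 8), (4, 6)]    # fold the chain back on itself: T-N contacts
    clump = [(0, 4), (4, 8), (2, 6)]       # push like beads together: T-T and N-N contacts
    print("hairpin energy:", fold_energy(chain, hairpin))  # -3, favourable
    print("clump energy:  ", fold_energy(chain, clump))    # +3, unfavourable
```

The real beads do this ‘computation’ physically, by agitation on the tray, and with cylinder lengths adding a further matching rule – none of which this sketch attempts to capture.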

This simple ‘beads-on-a-string’ model of polymers replicates, in toy form, a mathematical description of polymers used to understand their conformational behaviour [3], such as the way the polypeptide chains of proteins fold into their compact, catalytically active ‘native’ structure. With some modification – using cylindrical beads of various lengths, so that optimal pairing of oppositely charged beads happens when they have the same length – the model can be used to look at how RNA molecules fold up using the principles of complementary base-pairing between the bases that form the ‘sticky’ monomers.

The beauty of it is that the experiments are literally child’s play (the interpretation requires a little more sophistication). Even the simplest formulations of the mathematical theory are tricky to solve – but the beads, say Whitesides and colleagues, act as an ‘analog computer’ that generates solutions, allowing them rapidly to develop and test hypotheses about how folding depends on the monomer sequence.

Whitesides has used this philosophy before, making macroscopic objects with faces coated with thin films that confer different types of mutual interaction so as to explore processes of molecular-scale self-assembly driven by selective intermolecular forces [4]. This sort of collective behaviour of many interacting parts can give rise to complex, often unexpected structures and dynamics, and is difficult to describe with rigorous mathematical theories.

It’s really a reflection of the way chemists have thought about atoms and molecules ever since John Dalton used wooden balls to represent them around 1810: as hard little entities with a characteristic size and shape. Chemists still routinely use plastic models to intuit how molecules fit together. And these investigations have long gone beyond the pedagogical to become truly experimental. The great crystallographer Desmond Bernal studied the disorderly packing of atoms in liquids using ball-bearings, and, with chalk-dusted balls of Plasticene squeezed inside a football bladder, repeated the 1727 experiment by Stephen Hales on packing of polyhedral cells that was itself a precursor to Kelvin’s investigations. (Bernal called them, apologetically, ‘rather childish experiments’ [5]).

In more recent years, model systems of beads and grains have been used as analogues of the most unlikely and complex of phenomena, from earthquakes and exotic electronic behaviour [6] to the phyllotactic growth of flower-heads [7]. Aside from the obvious issue of how closely these ‘toys’ mimic the theory (let alone how well the theory mimics reality), these approaches stand at risk of offering phenomenology without true insight: with an ‘analytical’ solution to the equations, it can be easier to discern the key physics at play. But when applied judiciously, they show that creativity and imagination can trump mathematical prowess or number-crunching muscle. And they also help underline the universality of physical theory, in which, as Ralph Waldo Emerson said, ‘The sublime laws play indifferently through atoms and galaxies.’ [8]

References
1. King, A. G. Kelvin the Man p.192 (Hodder & Stoughton, London, 1925).
2. Reches, M., Snyder, P. W. & Whitesides, G. M. Proc. Natl Acad. Sci. USA advance online publication 10.1073/pnas.0905533106 (2009).
3. Lifshitz, I. M., Grosberg, A. Y. & Khokhlov, A. R. Rev. Mod. Phys. 50, 683-713 (1978).
4. Bowden, N., Terfort, A., Carbeck, J. & Whitesides, G. M. Science 276, 233-235 (1997).
5. Bernal, J. D. Proc. R. Inst. Great Britain 37, 355-393 (1959).
6. Bak, P. How Nature Works (Oxford University Press, Oxford, 1997).
7. Douady, S. & Couder, Y. Phys. Rev. Lett. 68, 2098-2101 (1992).
8. Emerson, R. W. The Conduct of Life, p.202 (J. M. Dent & Sons, London, 1908).