Friday, June 24, 2011

Movie characters mimic each other's speech patterns

Here’s my latest news story for Nature News.
****************************************************
Script writers have internalized the unconscious social habits of everyday conversations.

Quentin Tarantino's 1994 film Pulp Fiction is packed with memorable dialogue — 'Le Big Mac', say, or Samuel L. Jackson's biblical quotations. But remember this exchange between the two hitmen, played by Jackson and John Travolta?

Vincent (Travolta): "Antwan probably didn't expect Marsellus to react like he did, but he had to expect a reaction."
Jules: "It was a foot massage, a foot massage is nothing, I give my mother a foot massage."

Computer scientists Cristian Danescu-Niculescu-Mizil and Lillian Lee of Cornell University in Ithaca, New York, see the way Jules repeats the word 'a' used by Vincent as a key example of 'convergence' in language. "Jules could have just as naturally not used an article," says Danescu-Niculescu-Mizil. "For instance, he could have said: 'He just massaged her feet, massaging someone's feet is nothing, I massage my mother's feet.'"

The duo show in a new study that such convergence, which is thought to arise from an unconscious urge to gain social approval and to negotiate status, is common in movie dialogue. It "has become so deeply embedded into our ideas of what conversations 'sound like' that the phenomenon occurs even when the person generating the dialogue [the scriptwriter] is not the recipient of the social benefits", they say.

“For the last forty years, researchers have been actively debating the mechanism behind this phenomenon”, says Danescu-Niculescu-Mizil. His study, soon to be published in workshop proceedings [1], cannot yet say whether the ‘mirroring’ tendency is hard-wired or learnt, but it does show that the tendency relies neither on the spontaneous prompting of another individual nor on a genuine desire for his or her approval.

“This is a convincing and important piece of work, and offers valuable support for the notion of convergence”, says philologist Lukas Bleichenbacher at the University of Zurich in Switzerland, a specialist on language use in the movies.

The result is all the more surprising given that movie dialogue is generally recognized to be a stylized, over-polished version of real speech, serving needs such as character and plot development that don’t feature in everyday life. “The method is innovative, and kudos to the authors for going there”, says Howie Giles, a specialist in communication at the University of California at Santa Barbara.

"Fiction is really a treasure trove of information about perspective-taking that hasn't yet been fully explored," agrees Molly Ireland, a psychologist at the University of Texas at Austin. "I think it will play an important role in language research over the next few years."

But, Giles adds, "I see no reason to have doubted that one would find the effect here, given that screenwriters mine everyday discourse to make their dialogues appear authentic to audiences".

That socially conditioned speech becomes an automatic reflex has long been recognized. “People say ‘oops’ when they drop something”, Danescu-Niculescu-Mizil explains. “This probably arose as a way to signal to other people that you didn't do it intentionally. But people still say ‘oops’ even when they are alone! So the presence of other people is no longer necessary for the ‘oops’ behaviour to occur – it has become an embedded behavior, a reflex.”

He and Lee wanted to see if the same was true for conversational convergence. To do that, they needed the seemingly unlikely situation in which the person generating the conversation could not expect any of the supposed social advantages of mirroring speech patterns. But that’s precisely the case for movie script-writers.

So the duo looked at the original scripts of about 250,000 conversational exchanges in movies, and analysed them to identify nine previously recognized classes of convergence.
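
To make the measurement concrete, here is a toy sketch (my own illustration, not the authors’ code) of one way to quantify convergence for a single marker class, the articles ‘a’, ‘an’ and ‘the’: ask whether a reply becomes more likely to contain such a marker when the turn it answers contains one.

```python
# Toy convergence measure for one marker class (articles). Positive values
# mean replies echo the marker more often when the prompt contains it.

ARTICLES = {"a", "an", "the"}

def has_marker(utterance, markers=ARTICLES):
    return any(token.strip(".,!?") in markers
               for token in utterance.lower().split())

def convergence(exchanges, markers=ARTICLES):
    """exchanges: list of (prompt, reply) utterance pairs."""
    baseline = sum(has_marker(r, markers) for _, r in exchanges) / len(exchanges)
    triggered = [r for p, r in exchanges if has_marker(p, markers)]
    if not triggered:
        return 0.0  # no prompt contains the marker; nothing to measure
    conditional = sum(has_marker(r, markers) for r in triggered) / len(triggered)
    return conditional - baseline  # P(marker in reply | marker in prompt) - P(marker in reply)

# The Pulp Fiction exchange quoted above, plus two invented filler pairs:
pairs = [
    ("he had to expect a reaction",
     "it was a foot massage, a foot massage is nothing"),
    ("what now", "walk the earth"),
    ("english, do you speak it", "say what again"),
]
print(convergence(pairs))  # 1.0 - 2/3 = 0.33: replies echo the article
```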

They found that such convergence is common in the movie dialogues, although less so than in real life – or, standing proxy for that here, in actual conversational exchanges held on Twitter. In other words, the writers have internalized the notion that convergence is needed to make dialogue ‘sound real’. “The work makes a valid case for the use of ‘fictional’ data”, says Bleichenbacher.

Not all movies showed the effect to the same extent. “We find that in Woody Allen movies the characters exhibit very low convergence”, says Danescu-Niculescu-Mizil – a reminder, he adds, that “a movie does not have to be completely natural to be good.”

Giles remarks that, rather than simply showing that movies absorb the unconscious linguistic habits of real life, there is probably a two-way interaction. “Audiences use language devices seen regularly in the movies to shape their own discourse”, he points out. In particular, people are likely to see what types of speech ‘work well’ in the movies in enabling characters to gain their objectives, and copy that. “One might surmise that movies are the marketplace for seeing what’s on offer, what works, and what needs purchasing and avoiding in buyers’ own communicative lives”, Giles says.

Danescu-Niculescu-Mizil hopes to explore another aspect of this blurring of fact and fiction. “We are currently exploring using these differences to detect ‘faked’ conversations”, he says. “For example, I am curious to see whether some of the supposedly spontaneous dialogs in so-called ‘reality shows’ are in fact all that real.”

1. C. Danescu-Niculescu-Mizil & L. Lee, Proc. 2nd Workshop on Cognitive Modeling and Computational Linguistics, Portland, Oregon, 76-87 (Association for Computational Linguistics, 2011). Available as a preprint here.

I received some interesting further comments on the work from Molly Ireland, which I had no space to include fully. They include some important caveats, so here they are:

I think it's important to keep in mind, as the authors point out, that fiction can't necessarily tell us much about real-life dialog. Scripts can tell us quite a bit about how people think about real-life dialog though. Fiction is really a treasure trove of information about perspective-taking that hasn't been fully explored in the past. Between Google Books and other computer science advances (like the ones showcased in this paper), it's become much easier to gain access to millions of words of dialog in novels, movies, and plays. I think fiction will play an important role in language and perspective-taking research over the next few years.

On to their findings: I'm not surprised that the authors found convergence between fictional characters, for a couple of reasons. They mention Martin Pickering and Simon Garrod's interactive alignment model in passing. Pickering and Garrod basically argue that people match a conversation partner's language use because it's easier to reuse language patterns that you've just processed than it is to generate a completely novel utterance. Their argument is partly based on syntactic priming research that shows that people match the grammatical structures of sentences they've recently been presented with – even when they're alone in a room with nothing but a computer. So first of all, we know that people match recently processed language use in the absence of the social incentives that the authors mention (e.g., affection or approval).

Second, all characters were written by the same author (or the same 2-3 authors in some scripts). People have fairly stable speaking styles. So even in the context of scriptwriting, where authors are trying to write distinct characters with different speaking styles, you would expect two characters written by one author with one relatively stable function word fingerprint to use function words similarly (although not identically, if the author is any good).

The authors argue that self-convergence would be no greater than other-convergence if these cold, cognitive features of language processing [the facts that people tend to (a) reuse function words from previous utterances and (b) consistently sound sort of like themselves, even when writing dialog for distinct characters] were driving their findings. That would only be true if authors failed to alter their writing style at all between characters. Adjusting one's own language style when imagining what another person might say probably isn't conscious. It's probably an automatic consequence of taking another person's perspective. An author would have to be a pretty poor perspective-taker for all of his characters to sound exactly like he sounds in his everyday life.

Clearly I'm skeptical about some of the paper's claims, but I would be just as skeptical about any exploration into a new area of research using an untested measure of language convergence (including my own research). I think that the paper's findings regarding sex differences in convergence and differences between contentious and neutral conversations could turn out to be very interesting and should be looked at more closely – possibly in studies involving non-experts. I would just like to look into alternate explanations for their findings before making any assumptions about their results.

Thursday, June 23, 2011

Einstein and his precursors

From time to time, Nature used to receive (and doubtless still does) crank letters claiming that Einstein was not the first to derive E=mc2, but that this equation was first written down, after a fashion, by one Friedrich Hasenöhrl, an Austrian physicist with a perfectly respectable, if unremarkable, pedigree and career who was killed in the First World War. This was a favourite ploy of those cranks whose mission in life was to discredit Einstein’s theory of relativity – so much so that I had two such folks discuss it in my novel The Sun and Moon Corrupted.

But not until now, while reading Alan Beyerchen’s Scientists Under Hitler (Yale University Press, 1977), did I realise where this notion originated. The idea was put about by Philipp Lenard, the Nobel prizewinner and virulently anti-Semitic German physicist and member of the Nazi party. Lenard put forward the argument in his 1929 book Grosse Naturforscher (Great Natural Researchers), in which he sought to establish that all the great scientific discoveries had been made by people of Aryan-Germanic stock (including Galileo and Newton). Lenard was deeply jealous of Einstein’s international fame, and as a militaristic, Anglophobic nationalist he found Einstein’s pacifism and internationalism abhorrent. It’s a little comical that this nasty little man felt the need to find an alternative to Einstein at all, given that he was violently (literally) opposed to relativity and a staunch believer in the aether.

In virtually all respects Lenard fits the profile of the scientific crank (bitter, jealous, socially inadequate, feeling excluded), and he offers a stark (that’s a pun) reminder that a Nobel prize is no guarantee even of scientific wisdom, let alone any other sort. So there we are: all those crank citations of the hapless Hasenöhrl – a popular device of the devotees of Viktor Schauberger, the Austrian forest warden whose bizarre ideas about water and vortices led him to be conscripted by the Nazis to make a ‘secret weapon’ – have their basis in Nazi ‘Aryan physics’.

Friday, June 17, 2011

Quantum life

I have a feature in this week’s Nature on quantum biology, and more specifically, on the phenomenon of quantum coherence in photosynthesis. Inevitably, lots of material from the draft had to be cut, and it was a shame not to be able to make the point (though I’m sure I won’t be the first to have made it) that ‘quantum biology’ properly begins with Schrödinger’s 1944 book What is Life? (Actually one can take it back still further, to Niels Bohr: see here.) Let me, though, just add here the full version of the box on Ian McEwan’s Solar, since I found it very interesting to hear from McEwan about the genesis of the scientific themes in the novel.
_______________________________________________________________________________

The fact is, no one understands in detail how plants work, though they pretend they do… How your average leaf transfers energy from one molecular system to another is nothing short of a miracle… Quantum coherence is key to the efficiency, you see, with the system sampling all the energy pathways at once. And the way nanotechnology is heading, we could copy this with the right materials… Quantum coherence in photosynthesis is nothing new, but now we know where to look and what to look at.

These words are lifted not from a talk by any of the leaders in this nascent field but from the pages of Solar, a 2010 novel by the British writer Ian McEwan. A keen observer of science, who has previously scattered it through his novels Enduring Love and Saturday and has spoken passionately about the dangers of global warming, McEwan likes to do his homework. Solar describes the tragicomic exploits of quantum physicist, Nobel laureate and philanderer Michael Beard as he misappropriates an idea to develop a solar-driven method to split water into its elements. The key, as the young researcher who came up with the notion explains, is quantum coherence.

“I wanted to give him a technology still on the lab bench”, says McEwan. He came across Graham Fleming’s research on photosynthetic quantum coherence in Nature or Science (he forgets which, but looks regularly at both), and decided that this was what he needed. After ‘rooting around’, he felt there was justification for supposing that a bright postdoc might have had the idea in 2000. It remained to fit that in with Beard’s supposed work in quantum physics. This task was performed with the help of Cambridge physicist Graham Mitchison, who ‘reverse-engineered’ Beard’s Nobel citation, which appears in Solar’s appendix: “Beard’s theory revealed that the events that take place when radiation interacts with matter propagate coherently over a large scale compared to the size of atoms.”

Wednesday, June 15, 2011

The Anglican atheist

To be honest, I already suspected that Philip Pullman, literary darling of militant atheists (no doubt to his chagrin), is more religious than me, a feeble weak-tea religious apologist. But it is nice to have that confirmed in the New Statesman. Actually, ‘religious’ is not the right word, since Pullman is indeed (like me) an atheist. I had thought that ‘religiose’ would do it, but it does not – it means excessively and sentimentally religious, which Pullman emphatically isn’t. The word I want would mean ‘inclined to a religious sensibility’. Any candidates?

Pullman is writing in response to a request from Rowan Williams to explain what he means in calling himself a ‘Church of England atheist’. Pullman does so splendidly. Religion was clearly a formative part of his upbringing, and he considers that he cannot simply abandon that – he is attached to what Martin Rees has called the customs of his tribe, that being the C of E. But Pullman is an atheist because he sees no sign of God in the world. He admits that he can’t be sure about this, in which case he should strictly call himself an agnostic. But I’ve always been unhappy with that view of agnosticism, even though it is why Jim Lovelock considers atheism logically untenable (nobody really knows!). To me, atheism is an expression of belief, or if you like, disbelief, not a claim to have hard evidence to back it up. (I’m not sure what such evidence would even look like…)

What makes Pullman so thoughtful and unusual among atheists (and clearly this is why Rowan Williams feels an affinity with him) is that he is interested in religion: “Religion is something that human beings do and human activity is fascinating.” I agree totally, and that is one reason why I wrote Universe of Stone: I found it interesting how religious thought influenced and even motivated other modes of thought, particularly philosophical enquiry about the world. And this is what is so bleak about the view of people like Sam Harris and Harry Kroto, both of whom have essentially told me that they are utterly uninterested in why and how people are religious. They just wish people weren’t. They see religion as a collection of erroneous or unsupported beliefs about the physical world, and have no apparent interest in the human sensibilities that sometimes find expression in religious terms. This is a barren view, yes, but also a dangerous one, because it seems to instil a lack of interest in how religions arise and function in society. For Harris, it seems, there would be peace in the Middle East if there were no religion in the world. I am afraid I can find that view nothing other than childish, and it puzzles me that Richard Dawkins, who I think shares some of Pullman’s ‘in spite of himself’ attraction to religion and has a more nuanced position, is happy to keep company with such views.

Pullman is wonderfully forthright in condemning the stupidities and bigotries that exist in the Anglican Church – its sexism and no doubt (though he doesn’t mention it) its homophobia. “These demented barbarians”, he says, “driven by their single idea that God is obsessed by sex as they are themselves, are doing their best to destroy what used to be one of the great characteristics of the Church of England, namely a sort of humane liberal tolerance.” Well yes, though one might argue that this was a sadly brief phase. And of course, for the idea that God is as obsessed with sex as we are, one must ultimately go back to St Augustine, whose loathing of the body was a strong factor in his more or less single-handed erection (sorry) of original sin at the centre of the Christian faith. But according to some religious readers of Universe of Stone, I lack the religious sensibility to appreciate what Augustine and his imitators, such as Bernard of Clairvaux, were trying to express with their bigotry.

Elsewhere in the same issue of New Statesman, Terry Eagleton implies that it is wrong to harp on about such things because religion (well, Christianity) must be judged on the basis of its most sophisticated theology rather than on how it is practised. Eagleton would doubtless consider Pullman’s vision of a God who might be usurped and exiled, or gone to focus on another corner of the universe, or old and senile, theologically laughable. For God is not some bloke with a cosmic crown and a wand, wandering around the galaxies. I’m in the middle here (again?). Certainly, insisting as Harris does that you are only going to pick fights with the religious literalists who take the Bible as a set of rules and a description of cosmic history, and have never given a moment’s thought to the kind of theology Rowan Williams reads, is the easy option. But so, in a way, is insisting that religion can’t be blamed for the masses who practise a debased form of it. That would be my criticism of Karen Armstrong too, who presents a reasonable and benign, indeed even a wise view of Christianity that probably the majority of its adherents wouldn’t recognize as their own belief system. Religion must be judged by what it does, not just what it says. But the same is true, I fear, of science.

Oh dear, and you know, I was being so good in keeping silent as Sam Harris’s book was getting resoundingly trashed all over the place.

Sunday, June 12, 2011

Go with the Flow

Nicholas Lezard has always struck me as a man with the catholic but highly selective tastes (in literature if not in standards of accommodation) that distinguish the true connoisseur. Does my saying this have anything to do with the fact that he has just singled out my trilogy on pattern formation in the Guardian? How can you even think such a thing? But truly, it is gratifying to have this modest little trio of books noticed in such a manner. I can even live with the fact that Nicholas quotes a somewhat ungrammatical use of the word “prone” from Flow (he is surely literary enough to have noticed, but too gentlemanly to mention it).

Monday, June 06, 2011

Musical intelligence

In the latest issue of Nature I have interviewed the composer Eduardo Reck Miranda about his experimental soundscapes, pinned to a forthcoming performance of one of them at London’s South Bank Centre. Here’s the longer version of the exchange.
_______________________________________________

Eduardo Reck Miranda is a composer based at the University of Plymouth in England, where he heads the Interdisciplinary Centre for Computer Music Research. He studied computer science as well as music composition, and is a leading researcher in the field of artificial intelligence in music. He also worked on phonetics and phonology at the Sony Computer Science Laboratory in Paris. He is currently developing human-machine interfaces that can enable musical performance and composition for therapeutic use with people with extreme physical disability.

Miranda’s compositions combine conventional instruments with electronically manipulated sound and voice. His piece Sacra Conversazione, composed between 2000 and 2003, consists of five movements in which string ensemble pieces are combined with pre-recorded ‘artificial vocalizations’ and percussion. A newly revised version will be performed at the Queen Elizabeth Hall, London, on 9 June as part of a programme of electronic music, Electronica III. Nature spoke to him about the way his work combines music with neurology, psychology and bioacoustics.

In Sacra Conversazione you are aiming to synthesize voice-like utterances without semantic content, by using physical modelling and computer algorithms to splice sounds from different languages in physiologically plausible ways. What inspired this work?

The human voice is a wonderfully sophisticated musical instrument. But in Sacra Conversazione I focused on the non-semantic communicative power of the human voice, which is conveyed mostly by the timbre and prosody of utterances. (Prosody refers to the acoustical traits of vocal utterances characterized by their melodic contour, rhythm, speed and loudness.)

Humans seem to have evolved some sort of ‘prosodic fast lane’ for non-semantic vocal information in the auditory pathways of the brain, from the ears to regions that process emotion, such as the amygdala. There is evidence that non-semantic content of speech is processed considerably faster than semantic content. We can very often infer the emotional content and intent of utterances before we process their semantic, or linguistic, meaning. I believe that this aspect of our mind is one of the pillars of our capacity for music.

You say that some of the sounds you used would be impossible to produce physiologically, and yet retain an inherent vocal quality. Do you know why that is?

Let me begin by explaining how I began to work on this piece. I started by combining single utterances from a number of different languages – over a dozen, as diverse as Japanese, English, Spanish, Farsi, Thai and Croatian – to form hundreds of composite utterances, or ‘words’, as if I were creating the lexicon for a new artificial language. I carefully combined utterances by speakers of similar voice and gender and I used sophisticated speech-synthesis methods to synthesise these new utterances. It was a painstaking job.

I was surprised that only about 1 in 5 of these new ‘words’ sounded natural to me. The problem was in the transitions between the original utterances. For example, while the transition from, say, Thai utterance A to Japanese utterance B did not sound right, the transition from that same Thai utterance to Japanese utterance C was acceptable. I came to believe that the main reason is physiological. When we speak, our vocal mechanism needs to coordinate a number of different muscles simultaneously. I suspect that even though we may be able to synthesise physiologically implausible utterances artificially, the brain would be reluctant to accept them.

Then I moved on to synthesize voice using a physical model of the vocal tract. I used a model with over 20 variables, each of which roughly represents a muscle of the vocal tract (see E. R. Miranda, Leonardo Music Journal 15, 8-16 (2005)). I found it extremely difficult to co-articulate the variables of the model to produce decent utterances, which explains why speech technology for machines is still very much reliant on splicing and smoothing methods. On the other hand, I was able to produce surreal vocalizations that, while implausible for humans to produce, retain a certain degree of coherence because of the physiological constraints embedded in the model.
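
[For readers wondering what those ‘splicing and smoothing methods’ amount to, here is a toy illustration of my own (a sketch, not Miranda’s pipeline): concatenative speech synthesis joins recorded segments end to end, using a short crossfade to mask the seam.

```python
import numpy as np

def splice(seg_a, seg_b, sample_rate=16000, fade_ms=30):
    """Join two utterance waveforms with a linear crossfade -- the basic
    'splicing and smoothing' move of concatenative speech synthesis.
    seg_a, seg_b: 1-D NumPy arrays of audio samples at the same rate."""
    n = int(sample_rate * fade_ms / 1000)   # samples in the crossfade region
    fade_out = np.linspace(1.0, 0.0, n)     # ramp the first segment down...
    fade_in = 1.0 - fade_out                # ...while ramping the second up
    overlap = seg_a[-n:] * fade_out + seg_b[:n] * fade_in
    return np.concatenate([seg_a[:-n], overlap, seg_b[n:]])
```

A physical vocal-tract model, by contrast, has to drive all of its coupled ‘muscle’ variables through plausible trajectories at every instant, which is why Miranda found it so much harder to control.]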

Much of the research in music cognition uses the methods of neuroscience to understand the perception of music. You appear to be more or less reversing this approach, using music to try to understand processes of speech production and cognition. What makes you think this is possible?

The choice of research methodology depends on the aims of the research. The methods of cognitive neuroscience are largely aimed at testing hypotheses. One formulates a hypothesis to explain a certain aspect of cognition and then designs experiments aimed at testing it.

My research, however, is not aimed at describing how music perception works. Rather, I am interested in creating new approaches to musical composition informed by research into speech production and cognition. This requires a different methodology, which is more exploratory: do it first and reflect upon the outcomes later.

I feel that cognitive neuroscience research methods force scientists to narrow the concept of music, whereas I am looking for the opposite: my work is aimed at broadening the concept of music. I do not think the two approaches are incompatible: each could certainly inform and complement the other.

What have you learnt from your work about how we make and perceive sound?

One of the things I’ve learnt is that perception of voice – and, I suspect, auditory perception in general – seems to be very much influenced by the physiology of vocal production.

Much of your work has been concerned with the synthesis and manipulation of voice. Where does music enter into it, and why?

Metaphorically speaking, synthesis and manipulation of voice are only the cogs, nuts and bolts. Music really happens when one starts to assemble the machine. It is extremely hard to describe how I composed Sacra Conversazione, but inspiration played a big role. Creative inspiration is beyond the capability of computers, yet finding its origin is the Holy Grail of the neurosciences. How can the brain draw and execute plans on our behalf implicitly, without telling us?

What are you working on now?

Right now I am orchestrating raster plots of spiking neurons and the behaviour of artificial life models for Sound to Sea, a large-scale symphonic piece for orchestra, church organ, percussion, choir and mezzo soprano soloist. The piece was commissioned by my university, and will be premiered in 2012 at the Minster Church of St Andrew in Plymouth.

Do you feel that the evolving understanding of music cognition is opening up new possibilities in music composition?

Yes, to a limited extent. Progress will probably emerge from the reverse: new possibilities in musical composition contributing to the development of such understanding.

What do you hope audiences might feel when listening to your work? Are you trying to create an experience that is primarily aesthetic, or one that challenges listeners to think about the relationship of sound to language? Or something else?

I would say both. But my primary aim is to compose music that is interesting to listen to and catches the imagination of the audience. I would prefer my music to be appreciated as a piece of art rather than as a challenging auditory experiment. However, if the music makes people think about, say, the relationship of sound to language, I would be even happier. After all, music is not merely entertainment.

Although many would regard your work as avant-garde, do you feel part of a tradition that explores the boundaries of sound, voice and music? Arnold Schoenberg, for example, aimed to find a form of vocalization pitched between song and speech, and indeed the entire operatic form of recitative is predicated on a musical version of speech.

Absolutely. The notion of avant-garde disconnected from tradition is too naïve. If anything, to be at the forefront of something you need the stuff in the background. Interesting discoveries and innovations do not happen in a void.

Sunday, June 05, 2011

Are we all doomed?

That’s the question that New Statesman put to a range of folks, including me. My answer was truncated in the magazine, which is fair enough but somewhat gave the impression that I fully bought into Richard Gott’s Copernican principle. In fact I consider it to be an amusing as well as a thought-provoking idea, but not obviously more than what I depict it as in the second paragraph of my full answer below. So here, for what it’s worth, is the complete answer.
__________________________________________________________________________
There is a statistical answer to this. If you assume, as common sense suggests you should, that there is nothing special about us as humans, then it is unlikely we are among the first or last people ever to exist. A conservative guess at the trajectory of future population growth then implies that humanity has between 5,000 and 8 million years left. Whether that’s a sentence of doom or a reprieve is a matter of taste.
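
For anyone who wants the arithmetic behind those numbers, this is Gott’s ‘delta t’ argument: if the moment at which you happen to observe a phenomenon is equally likely to fall anywhere in its total lifespan, then with 95% confidence you are not seeing its first or last 2.5%. Taking humanity’s past to be roughly 200,000 years, a sketch of the standard derivation runs:

```latex
% With probability 0.95 the elapsed fraction f of humanity's lifespan
% satisfies 0.025 < f < 0.975. The future duration is
% t_future = t_past * (1 - f)/f, so
\[
  \frac{t_{\text{past}}}{39} \;<\; t_{\text{future}} \;<\; 39\, t_{\text{past}},
\]
% and with t_past of about 2 x 10^5 years,
\[
  5{,}100 \ \text{years} \;\lesssim\; t_{\text{future}} \;\lesssim\; 7.8 \times 10^{6} \ \text{years}
\]
% -- the 'between 5,000 and 8 million years' quoted above.
```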

Alternatively, you might choose to say that we know absolutely nothing about our ‘specialness’ in this respect, and so this is just an argument that manufactures apparent knowledge out of ignorance. If you prefer this point of view, it forces us to confront our current apocalyptic nightmares. Will nuclear war, global warming, superbugs, or a rogue asteroid finish us off within the century? The last of these, at least, can be assigned fairly secure (and long) odds. As for the others, prediction is a mug’s game (which isn’t to say that all those who’ve played are mugs). I’d recommend enough pessimism to take seriously the tremendous challenges we face today, and enough optimism to think it’s worth the effort.