

Human nature is back

This Prospect article by the RSA’s Matthew Taylor reviews an impressive amount of socially relevant psychology research. “Human nature is back”, announces Taylor, showing how the “useful shortcut” of the rational actor is now ready to be replaced by an empirically informed model of man as a social, emotional being. Conclusions include:

if we want to live an ethical life we do not have to pore over self-help books, but instead choose the social context that is most likely to prompt us to automatic altruism. Blinkered by the idea of humans as entirely driven by self-interest, we believe that altruistic acts must require conscious effort, perhaps as a result of exhortation from leaders. But if we are living balanced lives and enjoy mutual trust with people, behaving well comes naturally.


…susceptibility to social influence is hard-wired in us and not simply a characteristic of those lacking willpower. It may not be as catchy as the original slogan, but “tough on crime, even tougher on the causes of crime” is where the evidence points.


social institutions and cultural taboos are ways in which “generations hand down… vital tacit knowledge about human nature.”…[they] have developed to protect us from our psychological frailties, encouraging us to act long term and be socially responsible. These devices include the family, the church and civic organisations. As we become richer, we mistakenly think we do not need them.

It’s a rich brew of research evidence and political ideas. Perhaps even enough to give us hope, as Taylor claims that “new ideas about human nature can contribute to a more substantive meeting of minds between left and right”

Real fake emotions

LA Noire is the new game from Rockstar Games, the notorious publisher of Grand Theft Auto. This Guardian article describes it as “a new era for interactive entertainment”, where the gameplay is not about hand-eye coordination but about emotional perception: being able to judge body language and facial “tells”.

The thing is, what will the “true” meaning of the facial expressions in the game be based on? Will the correct judgements be based on the game designers’ interpretation of what different facial expressions mean? If so, how can we trust that they have the correct interpretation? It isn’t straightforward to read meaning from expressions. Even people who think they are experts at it can be wrong, and many of the clues popularly associated with deception, such as gaze aversion, don’t truly help you tell truth from lies (see the work of Aldert Vrij, at Portsmouth).

It might be that we end up playing a game where you learn to interpret what Rockstar Games believes about what people look like when they lie, rather than practise any real-world emotional perception.

The destructive myth of talent

The commonly held but empirically unsupported notion that some uniquely ‘‘talented’’ individuals can attain superior performance in a given domain without much practice appears to be a destructive myth that could discourage people from investing the necessary efforts to reach expert levels of performance.

From the highly readable, Ericsson, K. A., & Ward, P. (2007). Capturing the naturally occurring superior performance of experts in the laboratory. Current Directions in Psychological Science, 16(6), 346.

Why Sherry Turkle is so wrong

Review of Sherry Turkle’s Alone Together: Why we expect more from technology and less from each other (2011, Basic Books)

(Attention conservation notice: a rambling 1800 word book review in which I am rude about Sherry Turkle and psychoanalysis, and I tell you how to think properly about the psychology of technology)

This book annoyed me so much that by page 12 I wasn’t sure I could manage the other 293. In the end I read the introduction and the conclusion, skimming the rest. Turkle’s argument is interesting and important; I just couldn’t face the supposed evidence she announced she was going to bring out in the body of the book.

Psychoanalysts are conspiracy theorists of the soul, and nowhere is that clearer than in Turkle’s reasoning about technology. Page after page of anecdotes are used to introduce the idea that communications technologies such as email, Facebook and Twitter offer an illusion of intimacy but in fact drive us into a new solitude. This might be true, and it’s an important idea to entertain, but pause for a moment to think how you would establish whether it really was the case or not.

For Turkle, the evidence is all around, discerned by her keen psychoanalytically trained psychologist’s eye. A young woman chats to her grandmother on Skype for an hour a week – a touching example of a relationship deepened and sustained? No! Unbeknownst to the grandmother, the young woman uses that hour to catch up on her emails, leaving her unsatisfied with the Skype conversation, with vague feelings of guilt and a failure to connect. Turkle combines stories like these of people she’s met with sweeping generalisations about how “we” feel – increasingly disconnected, overwhelmed and unable to tell where the boundary between work and home life is. Text messages, originally a substitute for the phone call you couldn’t make, “very quickly…became the connection of choice”, she announces. Really? For everyone?

Throughout, Turkle seems to assume that this new age of communications technology has accelerated us into an age of dislocation and disconnection. This may be so, but a few anecdotes about people’s unsatisfactory relationships and yearning for deeper intimacy and authenticity don’t establish it. Here is the news: it was ever so. Now people wonder if their Facebook friends are true friends; previously we wondered if our friends on the team, or in the pub, were our true friends. Now we wish for romantic relationships without betrayal and inconvenience; previously this is what we wished for too. Ambiguity, failure and fear of disconnection are not a novel part of online relationships, they are part of the human condition, and it is mighty irksome that Turkle assumes the novelty of these things. She is seeing what she wants to see in the world around her.

There is also an inherent conservatism in her assumption that things were better before this anarchy of technology was loosed upon the world – the assumption that not only were things better before, but that this was the way they were “supposed to be”. The comic thing is that her historical benchmark is just as arbitrary – as if phone calls were a good and proper means of communication, a ceremony of innocence drowned by the destructive forces of text messaging and Skype. When the phone was invented there was a moral panic about what this technology would do to relationships, just as there was a moral panic when printed books became widespread. There’s no reason why we shouldn’t invent a new form of communication, such as the text message, and have it come to fill a niche in the ecology of how we relate to each other. People haven’t stopped making phone calls; they have augmented the way they communicate with text messages, not substituted texting for phoning.

Reading the book it is hard to shake the impression that everything Turkle says comes in a slightly dismayed and hysterical tone: “Oh no! The kids are using text messaging!” “Oh no! People underestimate the distracting effect of checking their email!” “Oh no! The kids find face-to-face conversations threatening, the little dears can’t live in the real world!”

Again: it was ever so. And of course, with anything new, you can always find some genuinely misled and bewildered people. Turkle has some striking examples of people who wish for relationships – both romantic and sexual – with robots. This shows, she says, that we are in the ‘robotic moment’. It is not that robots are ready for our desires, but that our desires are now ready for the idea of intimacy with robots. A young woman yearns for a robot lover, wanting to trade her human boyfriend for a “no risk relationship”; an elderly woman says that her robot dog “won’t die suddenly and abandon you and make you very sad”; and there is the genuinely astounding argument of David Levy’s “Love and Sex with Robots”, which proposes that soon we’ll be fighting for the right to marry robots in the same way we fought for the right to marry people of the same sex. Are we only discussing these possibilities, asks Turkle, because we are failing each other in human relationships?

The impression I get is of a very earnest anthropologist, speaking to the young people of an alien tribe, ready to be shocked and titillated by their revelations. Do the people speaking to Turkle really believe what they say, or are they egged on by her credulity, just as tribespeople compete to tell the anthropologist ever more outrageous things? Yes, yes, I would prefer a robot lover. Yes, yes, real men are a disappointment – irritating, changeable – and the simulation of intimacy would be better than a risk on authentic intimacy.

My problem is not that people are seeking to escape human frailty and ambiguity with robots, but that Turkle seems to assume there was once a time when people didn’t try to escape human frailty and ambiguity. It isn’t that we are newly dissatisfied with our relationships, or that we are newly struggling for authenticity. Rather, the old struggle has found a new form; the eternal uncertainties we have about ourselves and each other are given a new light by technology.

Turkle has an important point disguised by a boring pessimism. “Relationships with robots are ramping up; relationships with people are ramping down”, she says. “Of every technology, we must ask, Does it serve our human purposes?” This latter point is vitally important. The idea that Turkle has proven that human relationships are “ramping down” due to current communications technology is the distraction. This is just a generational cry of despair, common to every age, arising when one age group realises they don’t understand or don’t like how their children behave.

True, we must ask how technology can be built to enhance our relationships, and true, intimacy and authenticity are endangered – but it was always so, and Turkle’s speculations of doom help only to muddy the waters.

I find myself wondering why Turkle has this pervasive pessimism about our ability to sensibly navigate these new technologies. Perhaps it is related to the stance she seems to adopt towards the characters that populate her anecdotes, treating them as subjects under her microscope, an amorphous mass of “them”, rather than as unique individuals with stories and weaknesses just like all of us. This may just be my knee-jerk dislike of psychoanalysts, but her stance towards these characters in her argument always felt condescending and arrogant – as if she alone possessed the objective stance, as if only she, with her psychoanalytic training, was expert enough to discern their loneliness and feel what they themselves didn’t know they felt. Again, the tone reminded me of the naive anthropologist – aren’t they strange?! Isn’t their confusion fascinating?!

I would have had more faith in Turkle’s reasoning if she had talked more about her own experience, rather than relating these anecdotes from people she met at conferences and at Parisian dinner parties.

Turkle’s underlying assumption is that technology is a thing separate from, or that gets in between, authentic relationships. (There’s a comparison to those who diagnose an addiction to the internet, as if the internet were a substance, when it is just a medium.) In fact, technology is part of relationships because it is part of our minds (see Andy Clark’s book Natural Born Cyborgs for an exploration of this idea). Technology cannot get in the way of some kind of natural detection of reality because we never have direct contact with reality – it is always mediated by culture, history, language, expectations, and the whole architecture of our minds for understanding the world. As every psychologist should know, the idea of “virtual reality” is a misnomer, because reality has always been virtual.

A concrete example of this confusion is when Turkle assumes that she (alone) can tell the real (flesh and blood) encounters from the fake (technologically mediated) encounters. “The ties we form through the internet are not, in the end, the ties that bind”, she says solemnly. This is a ridiculous generalisation, and must be confusing to all those who met over the internet, or have had relationships deepened because of it. Can you imagine how ridiculous Turkle would sound if she’d made such a generalisation about another medium? “The ties formed through writing are not the ties that bind”, “The ties formed by those speaking French are not the ties that bind”. Nonsense! Again Turkle has been distracted by her pessimism and her conservatism. The problem of human bonds is not a new one – we’ve always struggled to find rapprochement with each other – and the internet doesn’t change that. It does give the problem interesting new dimensions, and I’ve no doubt that we’ll struggle collectively with these new dimensions for decades, but I don’t see Turkle doing anything to make clear the outlines of the problem or advance any solutions.

New technology is easy to think about, partly because the novel always stands out against the background of the old, and partly because it is easier to think about the material aspects of things, and the material aspects of technology can be ubiquitous (like text messages and email) or particularly entrancing (like robots). But let me give an alternative vision to Turkle’s Cassandra wail. Rather than technology, a far more real threat to intimacy and authenticity in the modern world is the continuous parade of advertising which tries to hawk material goods with the promise that they can give access to transcendent values. Cars which give freedom, cameras which give friendship, diamonds which give love and clothes that give confidence. Here is a cultural force, with a massive budget and the active intention to make us dissatisfied with our possessions, our lifestyles, our bodies and our relationships. How about we worry a bit more about that, and less about the essentially democratic technologies of communication?

The Narrative Escape

My ebook “The Narrative Escape” was published last week by 40k books. ‘The Narrative Escape’ is a long essay about morality, psychology and stories, and is available in Kindle format (this means you can get it for your iPhone or iPad, or in PDF too). From the ebook blurb:

We instinctively tell stories about our experiences, and get lost in stories told by other people. This is an essay about our story-telling minds. It is about the psychological power of stories, and about what the ability to enjoy stories tells us about the fundamental nature of mind.

My argument in ‘The Narrative Escape’ begins by exploring Stanley Milgram’s famous experiments on obedience, looking at them as an example of moral decision making – particularly for that minority who choose to disobey in the experiment. A fascinating thing about these experiments is that although they tell us a lot about what makes people obey authority, they leave mysterious that quality which makes people resist tyrannical authority. I then go on to contrast this moral disobedience with conventional psychological investigations of morality (for example the work of Lawrence Kohlberg). By using descriptions of moral dilemmas to ask people about their moral reasoning, this research, I argue, misses something essential about real-world moral choices. This element is the ability to realise that you are acting according to someone else’s version of what is right and wrong, and to step outside of their definition of the situation. This is the “narrative escape” of the title. The essay also talks about dreams, stories and story-telling, and other topics which I hope will be of interest.

There is also an interview with me available here, which discusses the ebook and some other more and less related topics.

The essay is available in Italian as “La Fuga Narrativa”. Link for the English edition.
…And coming soon in Portuguese, I’m told!

World without end

It is possible to see why, despite all the poverty and the hardships and dependence, the agricultural society of the early Middle Ages – and of the later Middle Ages too in many regions – should have been relatively unreceptive to the militant eschatology of the unprivileged. To an extent which can hardly be exaggerated, peasant life was shaped and sustained by custom and communal routine. In the wide northern plains peasants were commonly grouped together in villages; and there the inhabitants of a village followed an agricultural routine which had been developed by the village as a collectivity. Their strips of land lay closely interwoven in the open fields, and in ploughing, sowing and reaping they must often have worked as a team. Each peasant had the right to use the ‘common’ to a prescribed extent and all the livestock grazed there together. Social relationships within the village were regulated by norms which, though they varied from village to village, had the sanction of tradition and were always regarded as inviolable. And this was true not only of relationships between villagers themselves but of the relationship between each villager and his lord. In the course of long struggles between conflicting interests each manor had developed its own laws which, once established by usage, prescribed the rights and obligations of each individual. To this ‘custom of the manor’ the lord himself was subject; and the peasants were commonly most vigilant in ensuring that he did in fact abide by it. Peasants could be very resolute in defending their traditional rights and even on occasion in extending them. They could afford to be resolute, for the population was sparse and labour much in demand; this gave them an advantage which to some extent offset the concentration of landed property and of armed force in the hands of their lords. As a result the manorial regime was by no means a system of uncontrolled exploitation of labour.
If custom bound the peasants to render dues and services, it also fixed the amounts. And to most peasants it gave at least that basic security which springs from the hereditary and guaranteed tenancy of a piece of land.
The position of the peasant in the old agricultural society was much strengthened, too, by the fact that – just like the noble – he passed his life firmly embedded in a group of kindred. The large family to which the peasant belonged consisted of blood-relatives by male and female descent and their spouses, all of them bound together by their ties with the head of the group – the father (or, failing him, mother) of the senior branch of the family. Often this kinship-group was officially recognised as the tenant of the peasant holding, which remained vested in it so long as the group survived. Such a family, sharing the same ‘pot, fire and loaf’, working the same unpartitioned fields, rooted in the same piece of earth for generations, was a social unit of great cohesiveness – even though it might itself be riven at times by bitter internal quarrels. And there is no doubt that the individual peasant gained much from belonging to such a group. Whatever his need, and even if he no longer lived with the family, he could always claim succour from his kinsfolk and be certain of receiving it. If the ties of blood bound, they also supported every individual.
The network of social relationships into which a peasant was born was so strong and was taken so much for granted that it precluded any very radical disorientation. So long as that network remained intact peasants enjoyed not only a certain material security but also – which is even more relevant – a certain sense of security, a basic assurance which neither constant poverty nor occasional peril could destroy. Moreover such hardships were themselves taken for granted, as part of a state of affairs which seemed to have prevailed from all eternity. Horizons were narrow, and this was as true of social and economic as of geographical horizons. It was not simply that contact with the wide world beyond the manor boundaries was slight – the very thought of any fundamental transformation of society was scarcely conceivable. In an economy which was uniformly primitive, where nobody was very rich, there was nothing to arouse new wants; certainly nothing which could stimulate men to grandiose phantasies of wealth and power.

Norman Cohn, ‘The Pursuit of the Millennium: Revolutionary Millenarians and Mystical Anarchists of the Middle Ages’ (1957/2004, pp. 55-56).

Homeopathy Generics

Simon Singh approached his debate with homeopathy-promoting MP David Tredinnick all wrong this morning. He dived into a critique of the studies Tredinnick presented, thus allowing him to maintain the advantage of framing the debate and losing most of the audience with discussion of statistics and control groups [1].

Instead, he should have laughed at the MP and said gently something like “It is undoubtedly true that homeopathy does work, the only question is about why it works. All the evidence suggests that the effect is due to a combination of the power of individuals’ beliefs about homeopathy and the healing benefits of a meaningful relationship with a physician. For every one study that says, like David Tredinnick’s three, that homeopathy has benefits beyond those of placebo, there are fifty which suggest that homeopathic medicines are inert and all the properties ascribed to them are properties of belief and relationships. Because of this, we need to ask if we want to allow a misguided homeopathy industry to charge us for medicines which we know to be snake oil, and whether there is not some less expensive and less deceitful way we can access the powerful healing effects that placebos such as homeopathy provide.”

On that last point, I’ve had an idea. Homeopathy is fake medicine, and obviously this has lots of benefits. All the power of placebos! Minimal risk and side-effects! Safe to use in combination with conventional medicine! The only downside I can see is that only patients you allow to remain misinformed can benefit, and that the homeopathy industry has all this rigmarole involved in the preparation and delivery of the product that necessarily makes it expensive. So why not sell fake homeopathic medicine? I don’t see how homeopaths could object if the medical establishment turns their strategy back on them. We could even use their experimental methods to replicate the successful results they’ve found with homeopathic treatment for our fake-homeopathic treatment. Just as you can buy generic pharmaceuticals which have the same chemical composition as branded ones at a fraction of the price, why can’t we buy homeopathy generics which are equally inert? Doctors would be free to prescribe them, saving the NHS money and simultaneously allowing patients access to all the wonderful benefits of placebo.

[1] Not that discussion of statistics and control groups is a bad thing, or a guaranteed way to lose your audience. I just think Singh lost his because of the way he discussed statistics and control groups, and because it wasn’t essential to the wider issues.

Narrative Extraction

I don’t predict what is going to happen when I watch a film. It isn’t like I can’t, it just doesn’t occur to me. When the bad guy turns out to be a good guy (or vice versa) my friends will say “Well that was obviously going to happen”. But it wasn’t obvious to me.

If you had described the salient facts to me, given me a plot summary, I would be able to make the correct prediction, I’m sure, but something about the way my brain works stops me making the leap from the level of experience to the level of description. I am stuck just experiencing the events of the film, and not representing them in a way that would allow me to draw obvious conclusions.

Let’s call this ability ‘narrative extraction’.

I’ve got some smarts, sure, but I think I’ve got a deductive kind of smarts. This is the kind of smarts that can take a set of facts, or axioms, and crunch through the consequences until you get to the inevitable result. I’m good at maths and most logic puzzles. I think narrative extraction requires a different kind of smarts. It is the ability to pick an appropriate set of facts, or an appropriate method of description, which will provide you with an answer that serves your purposes.

For the film example you need to do more than just experience the characters: you need to classify them by their types, the film by genre, the plot by template, and from all that infer what would be the most likely thing to happen in an exciting film.

Moral reasoning requires the same kind of smarts. There’s a famous test of moral reasoning by Kohlberg, where children are presented with vignettes (“Your wife is sick and you cannot afford medicine. Should you break into the pharmacy and steal it?” type things). Kohlberg ranked children’s moral reasoning, giving the most credit to reasoning which invoked logical deductions from abstract moral frameworks.

Gilligan, in her book “In a different voice” has a powerful critique of Kohlberg’s system, on the grounds that it gave credit to one kind of reasoning – abstract logical deduction or calculus (e.g. “Stealing is wrong, but letting your wife die is worse, so I should therefore steal the medicine”) – and not to another kind of more contextually sensitive reasoning (e.g. “If I break into the pharmacy then I might get caught and then I won’t be able to help my wife, so I should find another way of getting the medicine”). This sensitivity to what is not in the question – what is not explicitly stated – is a part of narrative extraction.

There is another important, perhaps more primary, way in which narrative extraction is required for moral judgement. Kohlberg’s vignettes are not just logic problems, which can be convergently or divergently solved; they are also descriptions of the world. Thus they do one of the major tasks of moral reasoning for you – that of going from the nebulous world of experience to the concrete world of categories and actions.

As soon as you describe the world you massively constrain the scope for moral reasoning. You can still make the wrong judgement, but you have made moral reasoning possible by the act of description using moral categories.

Milgram demonstrated scientifically the banality of evil: that normal people could do inhuman things. Did those people who thought they were delivering lethal electric shocks make an incorrect moral judgement? Did they weigh “doing what you are told” against “the life of an innocent” and choose the former? My intuition is that they did not, not explicitly. Yes, they made the wrong choice (we too would probably have made the wrong choice), but I believe that they were so caught up in the moment, in the emotion of the situation, that they did not move to the necessary level of description. We, reading this in comfort, are given the moral categories, and the right choice is so obvious that we have difficulty empathising with their situation. The narrative extraction has been done for us, so the right thing seems obvious. But it isn’t.

Better thinking through chemistry

This chapter was due for inclusion in The Rough Guide Book of Brain Training, but was cut – probably because the advice it gives is so unsexy!

The idea of cognitive enhancers is an appealing one, and its attraction is obvious. Who wouldn’t want to take a pill to make them smarter? It’s the sort of vision of the future we were promised on kids TV, alongside jetpacks and talking computers.

Sadly, this glorious future isn’t here yet. The original and best cognitive enhancer is caffeine (“creative lighter fluid” as one author called it), and experts agree that there isn’t anything else available to beat it. Lately, sleep researchers have been staying up and getting excited about a stimulant called modafinil, which seems to temporarily eliminate the need for sleep without the jitters or comedown of caffeine. Modafinil isn’t a cognitive enhancer so much as something that might help with jetlag, or let you stay awake when you really should be getting some kip.

Creative types have had a long romance with alcohol and other more illicit narcotics. The big problem with this sort of drug (aside from the oft-documented propensity for turning people into terrible bores) is that your brain adapts to, and tries to counteract, the effects of foreign substances that affect its function. This produces the tolerance that is a feature of most prolonged drug use – whereby the user needs more and more to get the same effect – and also the withdrawal that characterises drug addiction. You might think this is a problem only for junkies but, if you are a coffee or tea drinker, just pause for a moment and reflect on any morning when you’ve felt stupid and unable to function until your morning cuppa. It might be for this reason that the pharmaceutical industry is not currently focusing on developing drugs for creativity. Plans for future cognitive enhancers focus on more mundane, workplace-useful skills such as memory and concentration. Memory-boosters would likely be most useful to older adults, especially those with worries about failing memories, rather than younger adults.

Although there is no reason in principle why cognitive enhancers couldn’t be found which fine-tune our concentration or hone our memories, the likelihood is that, as with recreational drugs, tolerance and addiction would develop. These enhancing drugs would need to be taken in moderate doses and have mild effects – just as many people successfully use caffeine and nicotine for their cognitive effects on concentration today. Even if this allowed us to manage the consequences of the brain trying to achieve its natural level, there’s still the very real possibility that use of the enhancing drugs would need to be fairly continuous – just as it is with smokers and drinkers of tea and coffee. And even then our brains would learn to associate the drug with the purpose for which it is taken, which means it would get harder and harder to perform that purpose without the drug, as with the coffee drinker who can’t start work until he’s had his coffee. Furthermore, some reports suggest that those with high IQ who take cognitive enhancers are most likely to mistake the pleasurable effect of the substance in question for a performance benefit, while actually getting worse at the thing they’re taking the drug for.

The best cognitive enhancer may well be simply making best use of the brain’s natural ability to adapt. Over time we improve anything we practice, and we can practice almost anything. There are a hundred better ways to think and learn – some of them are in this book. By practicing different mental activities we can enhance our cognitive skills without drugs. The effects can be long lasting, the side effects are positive, and we won’t have to put money in the pockets of a pharmaceutical company.

Link to more about The Rough Guide book of Brain Training
Three excellent magazine articles on cognitive enhancers, from: The New Yorker, Wired and Discover

Cross-posted at

The Rough Guide to Brain Training (Moore & Stafford, 2010)

The Rough Guide to Brain Training is a puzzle book which includes essays and vignettes by myself. The book has 100 days of puzzles which will challenge your mental imagery, verbal fluency, numeracy, working memory and reasoning skills. There are puzzles that will look familiar, like sudoku, and some new ones I’ve never seen before. Fortunately the answers are included at the back. Gareth made these puzzles. I find them really hard.

I have 10 short essays in the book, covering topics such as evidence-based brain training, how music affects the developing brain, optimal brain nutrition and what the brains of the future will look like. As well as the essays, I wrote numerous short vignettes, helpful hints and surprising facts from the world of psychology and neuroscience (did you know that squids have doughnut-shaped brains? That you share 50% of your genes with a banana? That signals travel between brain cells at up to 200mph, which is fast compared to a cycle courier, but slow compared to a fibre optic cable). Throughout the book I try to tell it straight about what is, isn’t and might be true about brain training. I read the latest research and I hope I tell a sober, but optimistic, message about the potential for us to change how we think over our lifetimes (and the potential to protect our minds against cognitive decline in older age). I also used my research to provide a sprinkling of evidence-based advice for those who are trying to improve a skill, study for an exam or simply remember things better.

Writing the book was a great opportunity for me to dig into the research on brain training. It is a topic I’d always meant to investigate properly, but hadn’t gotten around to. The claims of those pushing commercial brain training products always seemed suspicious, but the general idea – that our brains change based on practice and experience – seemed plausible. In fact, this idea has been one of the major trends of the last fifty years of neuroscience research. It has been a big surprise to neuroscientists as experiment after experiment has shown exactly how malleable (aka ‘plastic’) the structure and function of the brain is. The resolution of this paradox – the general plausibility of brain training alongside my suspicion of specific products – lies in the vital issue of control groups. Although experience changes our brains, and although it is now beyond doubt that a physically and mentally active life can prevent cognitive decline across the lifespan, it isn’t at all clear what kinds of activities are necessary, or sufficient, for general mental sharpness. Sure, after practicing something you’ll get better at it. And doing something is better than doing nothing, but the crucial question is: is doing something you pay for better than doing something else that is free? The holy grail of brain training would be a simple task which you could practice (and copyright! and sell!!) and which would have benefits for all mental skills. Nobody has shown that such a task or set of tasks exists, so while you could buy a puzzle book, you could also go for a jog or go to the theatre with friends. Science wouldn’t be able to say for certain which activity would have the most benefits for your mental sharpness as an individual – although the smart money is probably on going jogging. It is to the credit of the editors at Rough Guides that they let me say this in the introduction to the Rough Guide to Brain Training!

There wasn’t room in the book for all the references I used while writing it. This was a great sadness to me, since I believe that unless you include the references for a claim you’re just spouting off, relying on a dubious authority, rather than really talking about science. So, to make up for this, and by way of an apology, I’ve put the references here. It will be harder to track specific claims from this general list than it would be with in-text citations, so if you do have a query, please get in touch and I promise I will point you to the evidence for any claims I make in the book.

Additionally, I’ll be posting here a few things from the cutting room floor – text that I wrote for the book which didn’t make it into the final draft. Watch this space, and if you do get your hands on a copy of the Rough Guide to Brain Training, get in touch and let me know what you think.

Amazon link (only £5.24!)
Scientific references and links used in researching the book
Cross-posted at

Ad Nauseam

I am reading Ad Nauseam: A Survivor’s Guide to American Consumer Culture, edited by Carrie McLaren and Jason Torchinsky. The book is a funny, smart and sometimes shocking collection of articles from Stay Free magazine and its blog. I first came across Stay Free when I was researching the psychology of advertising and was impressed by their sophisticated take on how adverts affect consumers’ decision making. They discuss in Ad Nauseam how advertising is often misunderstood, with people relying on an intuitive ‘Advertising doesn’t affect me’ view or swinging to the opposite extreme of the ‘Sinister Advertisers Manipulate Consumers with their Mind Control Tricks’ position. Both positions distract from the very real, but not magical, power of advertising.

The book has a great discussion of Wilson Bryan Key’s Subliminal Seduction, the book that launched the idea that subliminal, and often sexual, figures are embedded in random features of adverts, such as in ice-cube shadows. The idea of these ’embeds’ is nonsense, of course, but great fun to look for and a great distraction from the real persuasive content of the advert. The book also has a chapter on the origins of modern advertising practice in 19th-century pharmaceutical advertising (the manufacturing of ailments for which ready-made ‘cures’ can be sold has been covered by Vaughan before, in relation to mental health). Packed with critical analysis of the advertising industry, more informative history and some shocking examples of how consumerism has worked its way into many aspects of our daily lives, this book is essential intellectual self-defence, managing to be critical and aware without ever being sanctimonious or hysterical.

Cross-posted at

Do you dream of being chased?

Last night I had two dreams in which I was being chased (once by a Tour de France cyclist in Venice, once by a giant snake in a field, since you ask). I was thinking that being-chased dreams are probably my brain rehearsing escape behaviours – a night-time training programme built in by evolution. Thinking more on it, I realised that I have never had a chasing dream, only being-chased dreams. Is this because being-chased is more adaptive to rehearse, or because of something peculiar to my idiosyncratic psychology? Let’s find out – please vote using the poll below:

Choose two answers


Tall Stories

Reprint of the text from my article in Prospect magazine, 4th July 2009, Issue 160

If someone tells you something that isn’t true, they may not be lying. At least not in the conventional sense. Confabulation, a rare disorder resulting from severe brain damage, causes its sufferers to relentlessly invent and believe fictions—both mundane and fantastical—about their lives. If asked where she has just been, a patient might say that she was in the laundry room (when she wasn’t) or that she’s been visiting Scotland with her sister (who’s been dead for 20 years), or even that she isn’t in the room where you’re talking to her, but in one exactly like it, further down the corridor. And could you fetch her hand cream please? These stories aren’t maintained for long periods, but are sincerely believed.

While it only affects a tiny minority of those with brain damage, confabulation tells us something important: that spontaneous, fluid, even riotous creativity is a natural part of the design of the mind. The damage associated with confabulation—usually to the frontal lobes—adds nothing to the brain’s makeup. Instead it releases a capacity for fiction that lies dormant inside all of us. Anyone who has seen children at play knows that the desire to make up stories is deeply embedded in human nature. And it can be cultivated too, most clearly by anarchic improvisers like Paul Merton.

Chris Harvey John taught me “improv” at London’s Spontaneity Shop. He can step on stage in front of 200 people to perform a totally unscripted hour-long show. There’ll be no rehearsal, no discussion of characters or plot. Instead, he and the other actors invent a play from scratch, based entirely on their unplanned reactions to each other. This seemingly effortless, throwaway attitude is the opposite of what we normally assume about the creative process: that it is hard work. Artists are often talked about in reverent, mystical tones. Art does connect with deep and mysterious human forces, but that doesn’t mean it is only available to a select few who, through luck or special training, are allowed to invent things.

Psychological research increasingly shows that inventiveness is fundamental to the normal operation of the mind. Aikaterini Fotopoulou is a research psychologist at the Institute of Cognitive Neuroscience, London, who specialises in confabulation. She regards it as a failure of the psychological mechanisms responsible for memory. “These inventions are really memory constructions,” she says. “When people confabulate they are failing to check the origin of the material that they build into their memories. You or I can usually tell the difference between a memory of something we’ve done and a memory of something we’ve just heard about, and distinguish both from stray thoughts or hopes. Confabulators can’t do this. Material that, for emotional or other reasons, comes to mind can at times be indiscriminately assumed to be a memory of what really happened.”

There’s a clue to confabulation in the responses of other patients with damage to the frontal lobes. These patients, who may have suffered violent head injuries or damage from illnesses such as strokes or Alzheimer’s, don’t necessarily confabulate but will often have problems with planning and motivation. They can seem heavily dependent on their external environment. Some, for example, indiscriminately respond to the things they see, regardless of whether it is appropriate in the context. The French neurologist Lhermitte demonstrated this “environmental dependency” in the 1980s when he laid a syringe on a table in front of a patient with frontal lobe damage and then turned around and took down his trousers. Without hesitation the patient injected him in the buttocks. This was a completely inappropriate action for the patient, but in terms of the possible actions made available by the scene in front of her, it was the obvious thing to do.

In those patients with frontal damage who do confabulate, however, the brain injury makes them rely on their internal memories—their thoughts and wishes—rather than true memories. This is of course dysfunctional, but it is also creative in some of the ways that make improvisation so funny: producing an odd mix of the mundane and impossible. When a patient who claims to be 20 years old is asked why she looks about 50, she replies that she was pushed into a ditch by her brothers and landed on her face. Asked about his good mood, another patient called Harry explains that the president visited him at his office yesterday. The president wanted to talk politics, but Harry preferred to talk golf. They had a good chat.

Improvisers tap into these same creative powers, but in a controlled way. They learn to cultivate a “dual mind,” part of which doesn’t plan or discriminate and thus unleashes its inventive powers, while the other part maintains a higher level monitoring of the situation, looking out for opportunities to develop the narrative.

Both improvisation and confabulation show that the mind is inherently sense-making. Just as a confabulator is unfortunately driven to invent possible stories from the fragments of their memories and thoughts, so an improviser looks at the elements of a scene and lets their unconscious mind provide them with possible actions that can make sense of it. On stage, this allows them to create entrancing stories. But this capacity for invention is inside all of us. As audience or performers, we are all constantly inventing.

Me in a dream

And I dreamt that, for totally mundane reasons, I needed to change my clothes and as I took off the black t-shirt I was wearing I noticed a flash of red folded-up in the black of the t-shirt cloth. And in the dream I remember thinking to myself “What’s that? Oh, of course, it must be the red snood I wear” (A snood is a kind of scarf, and I do indeed often wear one, which is red). So, still in the dream, I started to peel apart the red and black cloth, as you do with clothes you have taken off all in one go. And the red cloth, it turned out, was not my snood, but instead a red t-shirt which I was wearing underneath my black t-shirt and which, I could see – or maybe ‘know’ in the way that you just know some things in dreams – was some kind of socialist / trade union t-shirt from the mid 1980s.

So far, so boring. This seems even more ordinary and unremarkable than most people’s dreams, which have extraordinary and remarkable content yet still manage to bore in the daylight telling. But listen to this – this ordinary story of a boring dream has a message about the nature of the mind, because, you see, I don’t own any red t-shirts that I wear underneath a black t-shirt.

There’s a theory that dreams result from random activations in our brain, which trigger ideas and images and which some story-telling aspect of our minds then tries to weave meaning around. Dreams reveal the mind trying to make sense of noise, this theory goes.

Now, notice what happened in my boring dream. I – the voice I experience as “I” – was trying to make sense of things, and I came up with a story about the flash of red: that it was my snood. In fact, this is the most plausible story, certainly more plausible than the red t-shirt story. If my mind were a unity, then the red snood story adopted by my internal voice would have been the same story adopted by the part of my mind generating the dream experience. But it wasn’t. The dream world delivered me a different story, that of the red t-shirt, and told that story to me not in terms of an internal voice, but in terms of a direct experience.

Conclusions? That my mind has at least two substantive parts, both of which are capable of reasoning about the world, of making sense of it and telling stories, but which speak a different language and make different inferences from the same data.

The technology of our wiser, wider, selves

At the end of this summer I gave a talk in the Treehouse Gallery about technology and thinking. In particular, I told three stories about three pieces of technology which, I argued, fundamentally affect the way we think.

First, I told a personal story about a few months last year when I was without a mobile phone. I found, like others, that this frustration had unexpected benefits (‘Life became slower, and slightly more rewarding‘). I paid more attention to what I was doing and who I was with. I committed to plans easily, both socially and psychically – when I was somewhere I knew I wasn’t going to dial my way out to another arrangement. I made the best of where I was. If I was at a loose end I looked in my immediate environment for things to do and people to talk to, rather than using it as an opportunity to catch up on my email and text message conversations. I don’t know if it was a particularly profound change, but I felt and acted differently because of the lack of a piece of technology (now, of course, I have jumped with both feet back into the world of mobile telephony and the same piece of technology is making me feel and act differently again – its presence rather than its absence is making me feel connected to the wider world, alive with a constant stream of opportunities and messages).

The second story I told was of written language, and in particular some research done in the early twentieth century by the Soviet psychologist Alexander Luria. My point here was to widen the idea of technology. It is easy to forget that written language is a human invention, something that didn’t need to exist but does. Spoken language is a human universal, and will arise wherever humans can communicate in groups. Written language is a historical event, something that was separately created three or four times in human history and which followed a contingent trajectory. Elements of written language we take for granted are no more inevitable than writing as a whole. We invented silent reading – in the medieval period the norm was to read aloud. We invented punctuation and even gaps between the words – early documents reveal their absence. These things had to arise and become embedded in the culture of writing.

In my talk I mentioned Walter Ong’s fantastic “Orality and Literacy: The Technologizing of the Word” in which he talks about the particular habits of thought that are associated with cultures which rely on oral tradition and with those which rely on written language. Writing frees thought from the heavy constraints of memory. Ong details the characteristics that oral language will tend to have, in the service of being memorable; characters will tend to be vivid and extreme, of high emotion and grand actions (think myths and legendary heroes), and oral knowledge will be expressed with rhyme, repetition and cliche. In contrast, written language can be nuanced, detailed and even boring. Written language can support lengthy analysis and unstructured lists. Furthermore, because written language is separated in time and space from its audience it will tend to be explicit and comprehensive in its explanations, rather than relying on the immediate audience feedback and common reference (“like this!”) that spoken language can. Literacy has a tendency towards the abstract, the objectively distanced and the divisive, whereas orality will have a tendency towards the concrete, the subjectively immediate, the holistic and the conservative (since patterns are hard to establish in memory and fragile once there, the rule is ‘don’t innovate unless you have to’).

Ong discusses Luria’s investigations with literate and illiterate farm workers in Uzbekistan and Kirghizia in the 1930s. Luria’s investigations showed that questions that seem simple to those from literate cultures involve a whole bundle of learnt habits of thought and ‘unnatural’ assumptions that come along with the acquisition of literacy. Luria asked one illiterate man to name the ‘odd one out’ from “a hammer, a saw, a log and a hatchet” and was told “If one has to go, I’d throw out the hatchet, it does the same job as the saw”. The individual questioned saw the objects in terms of activities, not in terms of the abstract category “tools” (the obvious division for a literate mind). When it was explained that three of the objects are tools, the individual still didn’t give the desired answer: “Yes, but we still need the wood”. The mind patterned by orality seeks functional wholes, not divisions, claims Ong.

Another of Luria’s questions is a logical syllogism: “In the far north, where there is snow, all bears are white. Novaya Zemlya is in the far north and there is always snow there. What colour are the bears there?”. A literate mind, which is used to answering such questions (indeed, used to being sat down and examined for hours at a time for no immediately apparent purpose) knows to seek the logical structure of the question and answer it within its own terms. Not so the illiterate mind. Luria received answers such as “I don’t know, go and look” and “I’ve seen black bears; every locality has its own animals”. A request to define a tree received similar replies: “Why should I? Everybody knows what a tree is”.

I offer this discussion of Ong and Luria to draw out how fundamental the changes brought about by literacy are for our thought, and how invisible they are in normal circumstances – how much we take for granted tendencies like the pretence of objectivity and abstraction from the immediate and concrete.

My third and final example continued the task of widening the idea of what technology is and can do. I told the story of a clinical case study of a man with memory loss known as S.S. (I took this story from a chapter by Margaret O’Connor and colleagues in the book “Broken Memories” by Campbell and Conway). S.S. suffered from a form of brain damage that prevented him from creating new memories for episodes in his life. Although he could remember who he was, and events from his pre-injury life, he had no memory for events that happened post-injury. The film Memento gives an extreme illustration of this condition, known as anterograde amnesia. I’ve told S.S.’s story elsewhere, in a chapter I wrote for Christian Nold’s book ‘Emotional Cartography‘.

The question O’Connor and colleagues set out to investigate is that of S.S.’s emotional state. S.S. had a buoyant, upbeat personality, and seemed friendly and cheerful, despite his injury. Conventional tests of mood, which are really formalised sets of direct questions (“Do you cry often?” etc.), confirmed this impression. Indirect tests of personality and mental state sounded a warning though – they seemed to suggest that S.S. had profound underlying anxiety and a preoccupation with decay and his own helplessness. These wouldn’t be unusual feelings for someone in his position – a relatively young man reduced from being the president of a company and head of the family to being out of work and housebound. The authors ask: “was S.S. really depressed or not?”. My interest in the case is in the way in which, it seems likely, S.S.’s deeper underlying anxieties failed to manifest in his moment-to-moment consciousness. S.S., I argue, lacked a particular piece of cognitive technology which most of us take for granted – a reliable memory for the episodes of our life. We can use this episodic memory as a workspace to integrate weak or fleeting feelings, to store and work out thoughts which may be in contradiction to our momentary surroundings or immediate inclinations. S.S. didn’t have this memory, so his reasonable feelings of anxiety were prevented from ever getting a hold in his consciousness. As they arose they would be repeatedly swept away by his optimistic demeanour.

This example was offered, in part, to illustrate that technology isn’t just something we think about but something we think with (I am following the line of thought set out by Andy Clark in his Natural-Born Cyborgs here).

So I ended my talk with a question for the audience: ‘If we could invent any technology to help society deal with the future, what would we invent?’. The subsequent discussion never really cohered, I think, in part, because I’d been too successful (!) in broadening our conception of what technology could be. In these terms religions and cultures are technologies of thought too, and discussion circled around the idea that perhaps inventions of this kind are what we need in the future.

I realised, following this discussion, that I was interested in the effect of technology on thinking in a very different sense. The appeal of technology, for me, is that technology embodies in a physical object particular tendencies for thought, and, moreover, that the spread of technology offers a participatory, bottom-up model of how a kind of thinking can spread through the population without government policies, laws or other top-down corporate decisions. Episodic memory, written language and mobile phones spread without centralised control and worked their particular effects on our thinking one person at a time (but have now had profound societal effects).

One example of a simple piece of technology which affects behaviour is given in Howard Rachlin’s book ‘The Science of Self-Control’. He discusses an experiment in which smokers were divided into three groups. One group was sent away to smoke as normal. The second was given an educational class about the health effects of smoking and encouraged in a number of ways to cut down. The third group, and the one that interests me most, was not told to cut down at all, but simply to record how much they smoked using a system in which, with each cigarette, they tore a tab off a piece of card. Amazingly, the group that cut down smoking the most was not the group trying to cut down, but the group which was given an effective way of recording and monitoring their smoking. Rachlin offers the convincing analysis that the problem a smoker faces is one common to many situations – that of ‘complex ambivalence’ between single actions which are desirable in isolation, and larger patterns of actions which are undesirable in aggregate. Each cigarette is individually tempting, and carries no particularly large marginal cost, but overall ‘being a smoker’ has a large financial and health cost. You can see that the same pattern of temptation applies to many situations: having a drink versus being drunk or being ‘a drunk’, taking a single flight for a particular reason versus being a frequent flyer, for example. The power of the card system is that it provides a mechanism by which people can integrate individual choices into a larger pattern, so that they may make decisions based on their preferences within a larger temporal window. In effect, the card system allows or encourages us to prioritise our longer-term interests over the short-termist within all of us. In contrast, most new technologies seem to prioritise the opposite self in us – the short-termist over the self that practices delay of gratification.

Another example of a simple piece of technology that can affect our day-to-day living is an energy meter which displays the momentary electricity consumption of your house (like this one). In theory there is already a mechanism for reducing household energy consumption (known as “your electricity bill”), but these energy meters have been shown to reduce consumption by 30%. Because the feedback between your actions and energy use is immediate – turn on a light and see the meter reading jump by 120 watts – it becomes clear how to effectively cut down your electricity consumption. This will be obvious to all well-trained cyberneticists – ‘How can you have control without feedback?’

After the talk, Vinay pointed out to me that the environmental movement has never got to grips with the implications of Amdahl’s Law, a principle in computer programming for optimising the speed of your code by improving the part which causes the largest hold-up. In other words, the principle is a guide to how you should direct your efforts in trying to save a limited resource, in this case time. Without similar guidance – proper context for our energy use – the environmental movement is tempted to fall back on generalised hair-shirt Luddism (build nothing, do nothing, everything is bad). Without feedback our choice is between total rejection of consumption and total indulgence.
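
Amdahl’s Law itself is one line of arithmetic: if a fraction p of the total cost lies in the part you improve, and you make that part s times cheaper, the overall improvement is 1 / ((1 − p) + p/s). A minimal sketch, with made-up household-energy numbers purely for illustration:

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall improvement when a fraction p of the total cost
    is made s times cheaper (Amdahl's Law)."""
    return 1.0 / ((1.0 - p) + p / s)

# Hypothetical numbers: halving a use that is 60% of the total
# beats near-eliminating a use that is only 5% of it.
print(amdahl_speedup(0.60, 2.0))   # ~1.43x overall improvement
print(amdahl_speedup(0.05, 1e9))   # ~1.05x, however hard you try
```

The law’s blunt moral: improving a part that is only a small fraction of the whole caps your overall gain at 1/(1 − p), no matter how dramatic the improvement to that part – which is why feedback about where the big fractions are matters so much.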

In keeping with this latent Luddism, technology and technological solutions are often seen as the enemy of the environmental movement. This is an understandable defence against a techno-utopianism which can be a form of denial about the seriousness of climate change, offering a false promise of business as usual for the consumer society. In my talk I wanted to focus on how technology could be part of the solution in a positive way, to help us deal with energy descent and the move to a sustainable global society – to be part of dealing with change, not of denying it. I wanted to ask what kinds of technology we should be encouraging: energy meters? lifetime-guaranteed products? reusable containers? what else? And what kinds of technology we should be discouraging: everything ‘disposable’? things which titillate and encourage our most superficial, immediate and grossly consumptive personalities? what else?

Technologies can encourage and reinforce elements of our selves. Because technological objects are solid things they can be anchors for behaviour which won’t be easily swept away by changes in mood or fashion. We need to find technologies which constrain our worst instincts and encourage our best. I have a liberal’s faith in human nature: that, given the opportunity, we can be rational social creatures who recognise our best long-term interests rather than being enslaved to momentary passions and immediate rewards. We can find technologies that encourage this long view. Technology can help us realise our wiser, wider, better selves rather than our greedy, selfish, myopic selves.

What would you invent?

Quote #242

I seem to be, to my surprise, a member of a large profession. There are some 20,000 psychologists in this country alone, nearly all of whom have become so in my adult lifetime. They are all prosperous. Most of them seem to be busily applying psychology to problems of life and personality. They seem to feel, many of them, that all we need to do is to consolidate our scientific gains. Their self-confidence astonishes me. For these gains seem to me puny, and scientific psychology seems to me ill-founded. At any time the whole psychological applecart might be upset. Let them beware

J.J. Gibson (1967). Autobiography. In: Reed, E. & Jones, R. (Eds.) Reasons for Realism (p. 21)

Second-order action slips

An action slip is when, due to a failure of attention, you accidentally perform an action out of context or out of sequence. For example, you pour milk on your toast or you forget to add the tea bag when making a cup of tea. Making action slips is common. Lately I have realised that I also make ‘second-order’ action slips. These are where I perform the correct action, or the correct sequence of actions, but in the state of absent-mindedness whereby I might be more likely to make an action slip. I catch myself in the middle of some mundane and appropriate behaviour and with a start think to myself “Oh no, what have I done!”. Usually this is during sudden, irreversible actions which would be bad if done out of context, such as urinating on things (ok for toilets, bad for most other things), getting into the shower and turning it on full (ok if clothes off, bad otherwise), or pouring boiling water on things (ok for making hot drinks, bad for most foods, pets and family members). Of course with this kind of action I have, so far, always managed to do the right thing, but something about the consequences, and my lack of attention, causes a brief moment of panic. A chasm of intentional vertigo opens up as I ask myself exactly what I’m doing and how I know it is the right thing to do.

File under ‘perils of metacognition’?

Technology and mental states

Tanya Gold gave up computers and mobile phones for a week. She reports ‘Life seemed slower, and slightly more rewarding’.

These electronic toys are skilled at making you believe you are achieving things – working or interacting with those strange things I think are called other people. They give you the illusion of occupation and purpose. But it is false. You do nothing. You fritter and buzz and beep and shout “I’m in Swindon!”, all the way to the grave.

But she picked her mobile phone back up, and logged back on to Facebook, I’m sure. Maybe, as Oliver Burkeman says, we like feeling busy and the self-importance (and distraction) that it brings.

I also like being busy, and without a certain amount of freneticism I don’t get as much done. But I also like the mental breathing space of not having a mobile phone, or not feeling like I need to check my email. I think technology can make us smarter and happier, and if people constantly twitter or check their email or whatever, I think it is probably because they like things like that. But there is a trade-off, a state of mind that is lost when you adopt the continuous partial attention mode. The conundrum is how to get the benefits of energy without the costs of loss of focus (or, from the other perspective, how to keep the benefits of calm while still being in touch and efficient). Answers on a postcard please…

The psychology of coffee

I do not do research on why people have a favourite coffee mug. I do research on fundamental mechanisms of learning and decision making, and how they are built into our brains. I was on the Today programme discussing the psychology of coffee last week and I mentioned favourite mugs (you can listen to what I said here, or read it in this Telegraph article which quotes me from that programme). I was asked to be on the Today programme because of an article I wrote in 2003, Psychology in the Coffee Shop. This was a light review and opinion piece about all the ways in which psychological theory intersects with the experience of drinking a cup of coffee. It is this article that comes up as the first hit if you google “psychology” and “coffee”.

This is my opinion, briefly, on favourite mugs: coffee and tea contain caffeine, which promotes dopamine release. Dopamine is a neurotransmitter, a chemical messenger in the brain, known to be intimately connected with learning and reward. The dopamine release brought about by a caffeinated drink hacks our natural learning mechanisms, causing them to seek to identify and repeat whatever is consistently associated with that dopamine release. This is why rituals, such as favourite coffee mugs, develop.
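
The learning mechanism I’m gesturing at can be sketched with the classic Rescorla–Wagner rule, in which a cue’s associative strength grows in proportion to the prediction error on each cue–reward pairing. This is a toy illustration only – the learning rate and reward values are made up, and it is not a model of caffeine specifically:

```python
def cue_strength(pairings: int, reward: float = 1.0, alpha: float = 0.3) -> float:
    """Associative strength of a cue (e.g. the favourite mug) after
    repeated cue-reward pairings, by the Rescorla-Wagner rule."""
    v = 0.0
    for _ in range(pairings):
        v += alpha * (reward - v)   # update by the prediction error
    return v

# The cue comes to predict the reward more and more strongly.
for n in (1, 5, 20):
    print(n, round(cue_strength(n), 3))
```

After a handful of pairings the cue’s predictive strength approaches the reward value, which is one way of putting the claim that the ritual, and not just the drink, comes to carry the reward.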

Before appearing on the Today programme I did ask myself if I should really be speaking to the media about something which is really no more than an entertaining opinion. I decided I should, partly because my research does cover the wider topics of learning and the development of preferences, partly because although it is just an opinion it is my professional, theory-motivated, opinion as a psychologist, and partly because I wanted my grandmother to be able to listen to me on radio 4.

I’ve been surprised by how much interest there is in the “why you have a favourite mug” aspect of what I’ve said. Several people have got in touch to ask about “my research into how coffee tastes out of favourite mugs”, or to find out how I “proved that coffee tastes better from your favourite mug”.
I have done no research into whether coffee does or does not taste better in your favourite mug. I am taking this as an accepted fact, for which I have offered a theoretical explanation. I regard the taste of the coffee from a favourite mug as something people can verify for themselves, without needing a psychologist to tell them. We all know that the drink is chemically the same from whatever mug it is served in, and yet people develop preferences. This is because taste and enjoyment are not merely about objective measurements, such as temperature, chemical composition and whatnot, but about psychological factors as well, such as the history of learning experiences that each individual has had.

Arguably, it might be something of a waste of public money if I spent my professional life asking people about their favourite coffee mugs. It is not clear that things such as this are interesting in themselves, or that anyone needs to have their choice of beverage receptacle validated by the latest research in psychological science. Despite the impression formed by some in the media, this is not what psychologists do. We investigate the fundamental principles of the operation of the mind, how they are played out in behaviour and how they are based in the brain. Sometimes we even make some progress in our understanding, and are then in a position to give a deeper perspective on some phenomenon with which everyone is familiar. This, I hope, is the case with the favourite coffee mug example.

Email: the technology and psychology of continuous partial attention

I gave a talk on Wednesday at UFI in Sheffield entitled “Email: the technology and psychology of continuous partial attention”, which was a brief little intro to some of my thoughts about the psychology of email use (the phrase ‘continuous partial attention’ I owe to Linda Stone, whose thoughts on the matter are far more considered than mine). Here is the abstract from my talk:

What did you interrupt to read this? Chances are you were in the middle of something, or maybe several things, which you put on hold to find out what I’m going to talk about. I’m a research psychologist with an interest in technology, learning and communication. In my talk I’ll tell you why email has such a compulsive hold on people’s attention, how to spot true email addiction, why technology which helps you know less actually makes you smarter, and how there’s both good and bad in the multitasking habit. Now – what were you doing again?

A game of you

I asked the audience to imagine that I was running a game show. I announced that I would go along every row, starting at the front, and give each member a chance to say “cooperate” or “defect.” Each time someone said “defect” I would award a euro only to her. Each time someone said “cooperate” I would award ten cents to her and to everyone else in the audience. And I asked that they play this game solely to maximize their individual total score, without worrying about friendship, politeness, the common good, etc. I said that I would stop at an unpredictable point after at least twenty players had played.

Like successive motivational states within a person, each successive player had a direct interest in the behavior of each subsequent player; and had to guess her future choices somewhat by noticing the choices already made. If she realized that her move would be the most salient of these choices right after she made it, she had an incentive to forego a sure euro, but only if she thought that this choice would be both necessary and sufficient to make later players do likewise.

In this kind of game, knowing the other players’ thoughts and characters– whether they are greedy, or devious, for instance—will not help you choose, as long as you believe them to be playing to maximize their monetary gains. This is so because the main determinant of their choices will be the pattern of previous members’ play at the moment of these choices. Retaliation for a defection will not occur punitively– a current player has no reason to reward or punish a player who will not play again — but what amounts to retaliation will happen through the effect of this defection on subsequent players’ estimations of their prospects and their consequent choices. These would seem to be the same considerations that bear on successive motivational states within a person, except that in this interpersonal game the reward for future cooperations is flat (ten cents per cooperation, discounted negligibly), rather than discounted in a hyperbolic curve depending on each reward’s delay.

Perceiving each choice as a test case for the climate of cooperation turns the activity into a positive feedback system—cooperations make further cooperations more likely, and defections make defections more likely. The continuous curve of motivation is broken into dichotomies, resolutions that either succeed or fail.

George Ainslie, A Selectionist Model of the Ego: Implications for Self-Control (also see pp93-94 in Breakdown of will)
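The payoff structure Ainslie describes is easy to make concrete. Here is a minimal sketch of the accounting; the payoff amounts come from the passage above, while the audience size and the particular choice sequences are my own illustrative assumptions:

```python
# Sketch of the audience game described above.
# From the passage: "defect" pays 1.00 euro to the chooser alone;
# "cooperate" pays 0.10 euro to the chooser AND to every audience member.
# Audience size and the example choice patterns are illustrative assumptions.

def total_payoffs(choices, audience_size):
    """Return each audience member's total winnings given a sequence of choices."""
    payoffs = [0.0] * audience_size
    for player, choice in enumerate(choices):
        if choice == "defect":
            payoffs[player] += 1.00       # one euro, chooser only
        else:                             # cooperate: ten cents to everyone
            for i in range(audience_size):
                payoffs[i] += 0.10
    return payoffs

# With 30 players who all cooperate, everyone ends up with 3.00 euros;
# if everyone defects, each player gets only 1.00.
all_coop = total_payoffs(["cooperate"] * 30, 30)
all_defect = total_payoffs(["defect"] * 30, 30)

# A lone early defector among 29 cooperators does best of all (1.00 + 2.90),
# which is exactly why each player must gamble on whether her own
# cooperation will keep the climate of cooperation going.
lone = total_payoffs(["defect"] + ["cooperate"] * 29, 30)
```

The numbers make Ainslie's point visible: universal cooperation beats universal defection three to one, but at every seat the locally dominant move is still "defect", unless the player believes her choice will tip subsequent players one way or the other.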

Orality and academia

Academia is a quintessentially literate culture. Studying is reading, the outputs of research are papers and books. You are judged on your ability to express yourself in writing, and when you do this you must reference the written works of others. This is what defines scholarship. More than this, the habits of thought are patterned by literacy. Literate thought is analytic, dissective. The business of science is that of ordering into lists, breaking into parts, assigning subordinate and superordinate.

And yet there is also a non-literate part to being an academic, a part closely aligned with the praxis of the discipline. This is giving lectures, discussing in seminars, attending conferences. The paper outputs of academia can disguise this component, but it is essential.

In a higher-education system dominated by production-line and consumption values, by a receptacle-model of education (students as containers, education as a substance), it is easy to denigrate the ‘live’ oral component of academia, but in doing so we deny students contact with an essential part of scholarly life. Additionally we deny ourselves a cognitive model which can augment literate thinking.

Walter Ong (Orality and Literacy, 1982/2002) has written convincingly about the psychological dynamics of oral vs literate culture. Literate culture encourages finished works, whereas the knowledge of oral cultures is always live, an act of telling rather than knowing. As such it is part of a commons, rather than copyrighted (and plagiarised). Oral knowledge is situational, empathetic and participatory, rather than abstracted and objectively distanced; it is contested rather than autonomous and tends towards holism rather than the progressive analytic deconstruction of literate thought. Oral thought is thematic, compared to literate thought’s ability to dictate strict chronologies and linear narratives / list structures (just think of the memory constraints on an oral culture which lacks written aids to see why pure abstract lists are impossible).

It is clear that academia needs to take elements from both sides of these distinctions. Oral cognition can be impressionistic rather than precise, conservative rather than innovative, and amnesic rather than hypermnemonic. Nonetheless the lived aspects of oral thought are a vital part of disciplinary practice, and exposure to them is essential if students are to get a true view of their subjects.

A similar case is made by Kevin McCarron, an English literature lecturer and part-time stand-up comedian, who argues for the importance of improvisation in teaching. He sees an overreliance on preparation (the script or text of the class) as getting in the way of the (living) interaction of student and teacher:

(article in the Times Higher Education)

“If we don’t put ourselves under pressure, nothing interesting or exciting is going to happen. How could it? In fact, what we’ve done is spent three hours the previous night making sure that it doesn’t happen.

“Then we have the gall to offer these hours of preparation as morally sound. Self-protection is being offered to the world as a moral value. That preparation has been done to protect the teacher from the students. Teachers spend hours and hours preparing because they are terrified of being caught out.”

apostrophe creep

Apostrophe’s insert themselves in my writing at inappropriate places. I know the rules of their use, I promise (for possessives, not plurals; permissible to indicate omission, etc) but they just seem to come out. My explicit knowledge isn’t enough to make my procedural knowledge, as expressed through the faster-than-deliberate-thought action of typing, obey. I hypothesise that although my deliberative consciousness has learnt the rules of apostrophe use, which are defined at the syntactic and semantic level, my procedural motor system is more vulnerable to low-level statistical features of writing — such as that apostrophes often come immediately before an ‘s’ at the end of a word. Presumably some words have a sequence of letters which triggers my ‘end of word’ pattern (for example the stem ‘apostrophe’, which ends, like many words, in an e) and when I go to add an ‘s’ the ‘use apostrophe’ pattern is also, although inappropriately, triggered. This is galling because I hate reading writing with inappropriately placed apostrophes, but I also find it interesting. What is interesting is that typing, like speaking, is a complex action which is overlearnt but flexible, which obeys conscious goals, but for which the bulk of the details of enaction are unconscious. This unconscious realm isn’t merely motor, not just how I type or speak the words, but reaches up to include what words I type or say, even what precise meanings I come out with. Hence the phrase “How can I know what I think until I see what I say”.

Chimpanzee Dentistry

From McGrew, W. C., & Tutin, C. E. (1972). Chimpanzee dentistry. J Am Dent Assoc, 85(6), 1198-204. cited in Moerman, D. E. (2002). Meaning, medicine, and the ‘placebo effect’. Cambridge University Press: New York.


Don’t belive the neurohype (excerpt)

Vaughan is so spot on that I’m going to excerpt him here:

For example, an experiment might find that fear is associated with amygdala activation. But it’s impossible to say the reverse, that every time the amygdala is activated, the person is fearful.

Here’s an analogy. On average, people from New York may be more impatient than people from other cities.

If you predicted that all people from New York were impatient on the basis of this, you’d be grossly mistaken so many times that it would make your prediction invalid.

In fact, taking the average attributes of populations and applying them to individuals is stereotyping, and we avoid it because it is so often wrong as to cause us to misjudge people.

Alternatively, if you met an impatient person and therefore concluded that they must live in New York, you’d be equally inaccurate.
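The asymmetry in the excerpt is just the gap between P(activation given fear) and P(fear given activation), which Bayes’ rule makes explicit. A toy calculation, with every number invented purely for illustration:

```python
# Toy illustration of the reverse-inference problem. All numbers are
# hypothetical: suppose fear almost always activates the amygdala,
# but plenty of other mental states activate it too.
p_fear = 0.10             # prior: fraction of scans where the subject is fearful
p_act_given_fear = 0.90   # amygdala active when fearful
p_act_given_not = 0.40    # amygdala also active in many non-fear states

# Total probability of seeing amygdala activation at all.
p_act = p_act_given_fear * p_fear + p_act_given_not * (1 - p_fear)

# Bayes' rule: probability the subject is fearful, given activation.
p_fear_given_act = p_act_given_fear * p_fear / p_act
```

With these made-up figures, fear lights up the amygdala 90% of the time, yet activation implies fear only 20% of the time (0.09 / 0.45): exactly the New-Yorkers-are-impatient fallacy, run in reverse.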

The Constructive Character of Remembering

We must, then, consider what does actually happen more often than not when we say that we remember. The first notion to get rid of is that memory is primarily or literally reduplicative, or reproductive. In a world of constantly changing environment, literal recall is extraordinarily unimportant. It is with remembering as it is with the stroke in a skilled game. We may fancy that we are repeating a series of movements learned a long time before from a text-book or from a teacher. But motion study shows that in fact we build up the stroke afresh on a basis of the immediately preceding balance of postures and the momentary needs of the game. Every time we make it, it has its own characteristics.

Bartlett, F. C. (1995). Remembering: A Study in Experimental and Social Psychology. Cambridge University Press. (My emphasis.)

against commuting, for plastic surgery

In Pursuit of Happiness: Empirical Answers to Philosophical Questions
Pelin Kesebir and Ed Diener
Perspectives on Psychological Science
March 2008 – Vol. 3 Issue 2, pages 117–125

Since the early studies showing that lottery winners were not happier than controls and that even paralyzed accident victims revert approximately to their initial levels of happiness (e.g., Brickman, Coates, & Janoff-Bulman, 1978), the hedonic treadmill theory—the idea that our emotional systems adjust to almost anything that happens in our lives, good or bad—has been embraced by psychologists as a guiding principle in happiness research. In affiliation with the hedonic treadmill model, the set-point theory posits that major life events, such as marriage, the death of a child, or unemployment, affect a person’s happiness only temporarily, after which the person’s happiness level regresses to a default determined by genotype (Lykken & Tellegen, 1996). The implication of these assertions is that no matter how hard we try to be happier, adaptation on the one hand and our temperament on the other will ensure that our venture will remain just a futile rat race with an illusory goal.

Our conviction is that the time is ripe for a revision of hedonic adaptation theories. Accumulating evidence reveals that, even though adaptation undeniably occurs to some extent and personal aspirations do rise and adjust, people do not adapt quickly and/or completely to everything (Diener, Lucas, & Scollon, 2006). Lucas, Clark, Georgellis, and Diener (2003, 2004), for example, have observed in a 15-year longitudinal study that individuals who experienced unemployment or widowhood did not, on average, fully recover and return to their earlier life satisfaction levels. Other studies have shown that people hardly, if ever, adapt to certain elements in their lives such as noise, long commutes, or interpersonal conflict (Haidt, 2006), whereas other events such as plastic surgery may have long-lasting positive effects on one’s psychological well-being (Rankin, Borah, Perry, & Wey, 1998).

perception as the potential for sensation

From O’Regan, J. K., & Noë, A. (2002). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24(05), 939-973:

Particularly interesting is the work being done by Lenay (1997), using an extreme simplification of the echolocation device, in which a blind or blindfolded person has a single photoelectric sensor attached to his or her forefinger, and can scan a simple environment (e.g., consisting of several isolated light sources) by pointing. Every time the photosensor points directly at a light source, the subject hears a beep or feels a vibration. Depending on whether the finger is moved laterally, or in an arc, the subject establishes different types of sensorimotor contingencies: lateral movement allows information about direction to be obtained, movement in an arc centered on the object gives information about depth. Note several interesting facts. First, users of such a device rapidly say that they do not notice vibrations on their skin or hear sounds, rather they “sense” the presence of objects outside of them. Note also that at a given moment during exploration of the environment, subjects may be receiving no beep or vibration whatsoever, and yet “feel” the presence of an object before them. In other words, the experience of perception derives from the potential to obtain changes in sensation, not from the sensations themselves. Note also that the exact nature or body location of the stimulation (beep or vibration) has no bearing on perception of the stimulus – the vibration can be applied on the finger or anywhere else on the body. This again shows that what is important is the sensorimotor invariance structure of the changes in sensation, not the sensation itself.

Lenay, C. (1997) Le mouvement des boucles sensori-motrices aux représentations cognitives et langagières. Paper presented at the Sixième Ecole d’Eté de l’Association pour la Recherche Cognitive.

Rock climbing hacks! (now with added speculation)

I’m going to tell you about an experience that I often have rock-climbing and then I’m going to offer you some speculation as to the cognitive neuroscience behind it. If you rock-climb I’m sure you’ll find my description familiar. If you’re also into cognitive neuroscience perhaps you can tell me if you think my speculation is plausible.

Rock-climbing is a sort of three-dimensional kinaesthetic puzzle. You’re on the side of a rock-wall, and you have to go up (or down) by looking around you for somewhere to move your hands or feet. If you can’t see anything then you’re stuck and just have to count the seconds before you run out of strength and fall off. What often happens to me when climbing is that I look as hard as I can for a hold to move my hand up to and I see nothing. Nothing I can easily reach, nothing I can nearly reach and not even anything I might reach if I was just a bit taller or if I jumped. I feel utterly stuck and begin to contemplate the imminent defeat of falling off.

But then I remember to look for new footholds.

Sometimes I’ve already had a go at this and haven’t seen anything promising, but in desperation I move one foot to a new hold, perhaps one that is only an inch or so further up the wall. And this is when something magical happens. Although I am now only able to reach an inch further, I can suddenly see a new hold for my hand, something I’m able to grip firmly and use to pull myself to freedom and triumph (or at least somewhere higher up to get stuck). Even though I looked with all my desperation at the wall above me, this hold remained completely invisible until I moved my foot an inch — what a difference that inch made.

Psychologists have something they call affordances (Gibson, 1977, 1986), which are features of the environment which seem to ‘present themselves’ as available for certain actions. Chairs afford being sat on, hammers afford hitting things with. The term captures an observation that there is something very obviously action-orientated about perception. We don’t just see the world, we see the world full of possibilities. And this means that the affordances in the environment aren’t just there, they are there because we have some potential to act (Stoffregen, 2003). If you are frail and afraid of falling then a handrail will look very different from if you are a skateboarder, or a freerunner. Psychology typically divides the jobs the mind does up into parcels: ‘perception’, (then) ‘decision making’, (then) ‘action’. But if you take the idea of affordances seriously it gives the lie to this neat division. Affordances exist because action (the ‘last’ stage) affects perception (the ‘first’ stage). Can we experimentally test this intuition — is there really an effect of action on perception? One good example is Oudejans et al (1996), who asked baseball fielders to judge where a ball would land, either just watching it fall or while running to catch it. A model of the mind that didn’t involve affordances might think that it would be easier to judge where a ball would land if you were standing still; after all, it’s usually easier to do just one thing rather than two. This, however, would be wrong. The fielders were more accurate in their judgements — perceptual predictions basically — when running to catch the ball, in effect when they could base their judgements on the affordances of the environment produced by their actions, rather than when passively observing the ball.

The connection with my rock-climbing experience is obvious: although I can see the wall ahead, I can only see the holds ahead which are actually within reach. Until I move my foot and bring a hold within range it is effectively invisible to my affordance-biased perception (there’s probably some attentional-narrowing occurring due to anxiety about falling off too, (Pijpers et al, 2006); so perhaps if I had a ladder and a gin and tonic I might be better at spotting potential holds which were out of reach).

There’s another element which I think is relevant to this story. Recently neuroscientists have discovered that the brain deals differently with perceptions occurring near body parts. They call the area around limbs ‘peripersonal space’ (for a review see Rizzolatti & Matelli, 2003). {footnote}. Surprisingly, this space is malleable, according to what we can affect — when we hold tools the area of peripersonal space expands from our hands to encompass the tools too (Maravita et al, 2003). Lots of research has addressed how sensory inputs from different modalities are integrated to construct our brain’s sense of peripersonal space. One delightful result showed that paying visual attention to an area of skin enhanced touch-perception there. The interaction between vision and touch was so strong that providing subjects with a magnifying glass improved their touch perception even more! (Kennett et al, 2001; discussed in Mind Hacks, hack #58). I couldn’t find any direct evidence that unimodal perceptual accuracy is enhanced in peripersonal space compared to just outside it (if you know of any, please let me know), but how’s this for a reasonable speculation — the same mechanisms which create peripersonal space are those which underlie the perception of affordances in our environment. If peripersonal space is defined as an area of cross-modal integration, and is also malleable according to action-possibilities, it isn’t unreasonable to assume that an action-orientated enhancement of perception will occur within this space.

What does this mean for the rock-climber? Well it explains my experience, whereby holds are ‘invisible’ until they are in reach. This suggests some advice to follow next time you are stuck halfway up a climb: You can’t just look with your eyes, you need to ‘look’ with your whole body; only by putting yourself in different positions will the different possibilities for action become clear.

(references and footnote below the fold)

My intuition is that this is the area around which we feel ‘an aura’ if someone reaches towards us; this is completely unsubstantiated speculation however


Gibson, J. J. (1977). The theory of affordances. In R. E. Shaw & J. Bransford (Eds.), Perceiving, Acting, and Knowing. Hillsdale, NJ: Erlbaum.

Gibson, J. J. (1986). The ecological approach to visual perception. Lawrence Erlbaum Associates Inc, US.

Kennett, S., Taylor-Clarke, M., & Haggard, P. (2001). Noninformative vision improves the spatial resolution of touch in humans, Current Biology, 11(15), 1188-1191.

Maravita, A., Spence, C., & Driver, J. (2003). Multisensory integration and the body schema: close to hand and within reach, Current Biology, 13(13), 531-539.

Oudejans, R. R., Michaels, C. F., Bakker, F. C., & Dolne, M. A. (1996). The relevance of action in perceiving affordances: perception of catchableness of fly balls., J Exp Psychol Hum Percept Perform, 22(4), 879-91.

Pijpers, J. R. R., Oudejans, R. R. D., Bakker, F. C., & Beek, P. J. (2006). The role of anxiety in perceiving and realizing affordances, Ecological Psychology, 18(3), 131.

Rizzolatti, G., & Matelli, M. (2003). Two different streams form the dorsal visual system: anatomy and functions, Experimental Brain Research, 153(2), 146-157.

Stoffregen, T. A. (2003). Affordances as properties of the animal-environment system, Ecological Psychology, 15(2), 115-134.

Crossposted at