Spotted in a doorway round the corner from the Union Pool, in Williamsburg, Brooklyn:
I am now back in Sheffield, England
Photo taken approximately here, last Sunday
As important as it is to change the light bulbs, it is more important to change the laws
Thank you, Al Gore. The quote is from his 2008 TED talk calling for a ‘new hero generation’ to deal with climate change.
I don’t predict what is going to happen when I watch a film. It isn’t like I can’t, it just doesn’t occur to me. When the bad guy turns out to be a good guy (or vice versa) my friends will say “Well that was obviously going to happen”. But it wasn’t obvious to me.
If you had described the salient facts to me, given me a plot summary, I would be able to make the correct prediction, I’m sure, but something about the way my brain works stops me making the leap from the level of experience to the level of description. I am stuck just experiencing the events of the film, and not representing them in a way that would allow me to draw obvious conclusions.
Let’s call this ability ‘narrative extraction’.
I’ve got some smarts, sure, but I think I’ve got a deductive kind of smarts. This is the kind of smarts that can take a set of facts, or axioms, and crunch through the consequences until you get to the inevitable result. I’m good at maths and most logic puzzles. I think narrative extraction requires a different kind of smarts. It is the ability to pick an appropriate set of facts, or an appropriate method of description, which will provide you with an answer that serves your purposes.
For the film example you need to do more than just experience the characters: you need to classify them by type, the film by genre and the plot by template, and from all that infer what would most likely happen next in an exciting film.
Moral reasoning requires the same kind of smarts. There’s a famous test of moral reasoning by Kohlberg, where children are presented with vignettes (“Your wife is sick and you cannot afford medicine. Should you break into the pharmacy and steal it?” type things). Kohlberg ranked children’s moral reasoning, giving the most credit to moral reasoning which invoked logical deductions from abstract moral frameworks.
Gilligan, in her book “In a Different Voice”, has a powerful critique of Kohlberg’s system, on the grounds that it gave credit to one kind of reasoning – abstract logical deduction or calculus (e.g. “Stealing is wrong, but letting your wife die is worse, so I should therefore steal the medicine”) – and not to another kind of more contextually sensitive reasoning (e.g. “If I break into the pharmacy then I might get caught and then I won’t be able to help my wife, so I should find another way of getting the medicine”). This sensitivity to what is not in the question – what is not explicitly stated – is a part of narrative extraction.
There is another important, perhaps more primary, way in which narrative extraction is required for moral judgement. Kohlberg’s vignettes are not just logic problems, which can be convergently or divergently solved, they are also descriptions of the world. Thus they do one of the major tasks of moral reasoning for you – that of going from the nebulous world of experience to the concrete world of categories and actions.
As soon as you describe the world you massively constrain the scope for moral reasoning. You can still make the wrong judgement, but you have made moral reasoning possible by the act of description using moral categories.
Milgram demonstrated scientifically the banality of evil, that normal people could do inhuman things. Did those people who thought they were delivering lethal electric shocks make an incorrect moral judgement? Did they weigh “doing what you are told” against “the life of an innocent” and choose the former? My intuition is that they did not, not explicitly. Yes they made the wrong choice (we too would probably have made the wrong choice), but I believe that they were so caught up in the moment, in the emotion of the situation, that they did not move to the necessary level of description. We, reading this in comfort, are given the moral categories and the right choice is so obvious that we have difficulty empathising with their situation. The narrative extraction has been done for us, so the right thing seems obvious. But it isn’t.
“You have asked me what I would do and what I would not do. I will tell you what I will do and what I will not do. I will not serve that in which I no longer believe, whether it call itself my home, my fatherland or my church: and I will try to express myself in some mode of life or art as freely as I can, and as wholly as I can, using for my defence the only arms I allow myself to use . . . silence, exile, and cunning.”
James Joyce, in A Portrait of the Artist as a Young Man
Susan Neiman’s “Moral Clarity – a guide for grown-up idealists” (2009) is a passionate and literary book about moral reasoning and the achievements of the Enlightenment (especially Kant). The book contains fantastic and acute re-readings of the myths of Job and Odysseus, as well as plenty of examples of Neiman’s own moral clarity – she has a great analyst’s knack of being able to articulate clearly and succinctly exactly what was so pernicious about many of the arguments and actions of the neocon government under Bush. Recommended.
“The Enlightenment gave reason pride of place, not because it expected absolute certainty, but because it sought a way to live without it” (p218)
I am on study leave. I’ll be in Berkeley, California, in February and some of March, and then from mid-March I’ll be in New York. I am contactable by email.
This chapter was due for inclusion in The Rough Guide Book of Brain Training, but was cut – probably because the advice it gives is so unsexy!
The idea of cognitive enhancers is an appealing one, and its attraction is obvious. Who wouldn’t want to take a pill to make them smarter? It’s the sort of vision of the future we were promised on kids’ TV, alongside jetpacks and talking computers.
Sadly, this glorious future isn’t here yet. The original and best cognitive enhancer is caffeine (“creative lighter fluid” as one author called it), and experts agree that there isn’t anything else available to beat it. Lately, sleep researchers have been staying up and getting excited about a stimulant called modafinil, which seems to temporarily eliminate the need for sleep without the jitters or comedown of caffeine. Modafinil isn’t a cognitive enhancer so much as something that might help with jetlag, or let you stay awake when you really should be getting some kip.
Creative types have had a long romance with alcohol and other more illicit narcotics. The big problem with this sort of drug (aside from the oft-documented propensity for turning people into terrible bores) is that your brain adapts to, and tries to counteract, the effects of foreign substances that affect its function. This produces the tolerance that is a feature of most prolonged drug use – whereby the user needs more and more to get the same effect – and also the withdrawal that characterises drug addiction. You might think this is a problem only for junkies but, if you are a coffee or tea drinker, just pause for a moment and reflect on any morning when you’ve felt stupid and unable to function until your morning cuppa. It might be for this reason that the pharmaceutical industry is not currently focusing on developing drugs for creativity. Plans for future cognitive enhancers focus on more mundane, workplace-useful skills such as memory and concentration. Memory-boosters would likely be most useful to older adults, especially those with worries about failing memories, rather than younger adults.
Although there is no reason in principle why cognitive enhancers couldn’t be found which fine-tune our concentration or hone our memories, the likelihood is that, as with recreational drugs, tolerance and addiction would develop. These enhancing drugs would need to be taken in moderate doses and have mild effects – just as many people successfully use caffeine and nicotine for their cognitive effects on concentration today. Even if this allowed us to manage the consequences of the brain trying to achieve its natural level, there’s still the very real possibility that use of the enhancing drugs would need to be fairly continuous – just as it is with smokers and drinkers of tea and coffee. And even then our brains would learn to associate the drug with the purpose for which it is taken, which means it would get harder and harder to perform that purpose without the drugs, as with the coffee drinker who can’t start work until he’s had his coffee. Furthermore, some reports suggest that those with high IQ who take cognitive enhancers are most likely to mistake the pleasurable effect of the substance in question for a performance benefit, while actually getting worse at the thing they’re taking the drug for.
The best cognitive enhancer may well be simply making best use of the brain’s natural ability to adapt. Over time we improve anything we practice, and we can practice almost anything. There’s a hundred better ways to think and learn – some of them are in this book. By practicing different mental activities we can enhance our cognitive skills without drugs. The effects can be long lasting, the side effects are positive, and we won’t have to put money in the pockets of a pharmaceutical company.
Link to more about The Rough Guide Book of Brain Training
Three excellent magazine articles on cognitive enhancers, from: The New Yorker, Wired and Discover
Cross-posted at mindhacks.com
The Rough Guide to Brain Training is a puzzle book which includes essays and vignettes written by me. The book has 100 days of puzzles which will challenge your mental imagery, verbal fluency, numeracy, working memory and reasoning skills. There are puzzles that will look familiar, like sudoku, and some new ones I’ve never seen before. Fortunately the answers are included at the back. Gareth made these puzzles. I find them really hard.
I have 10 short essays in the book, covering topics such as evidence-based brain training, how music affects the developing brain, optimal brain nutrition and what the brains of the future will look like. As well as the essays, I wrote numerous short vignettes, helpful hints and surprising facts from the world of psychology and neuroscience (did you know that squids have doughnut-shaped brains? That you share 50% of your genes with a banana? That signals travel between brain cells at up to 200mph, which is fast compared to a cycle courier, but slow compared to a fibre optic cable). Throughout the book I try to tell it straight about what is, isn’t and might be true about brain training. I read the latest research and I hope I tell a sober, but optimistic, message about the potential for us to change how we think over our lifetimes (and the potential to protect our minds against cognitive decline in older age). I also used my research to provide a sprinkling of evidence-based advice for those who are trying to improve a skill, study for an exam or simply remember things better.
Writing the book was a great opportunity for me to dig into the research on brain training. It is a topic I’d always meant to investigate properly, but hadn’t gotten around to. The claims of those pushing commercial brain training products always seemed suspicious, but the general idea – that our brains change based on practice and experience – seemed plausible. In fact, this idea has been one of the major trends of the last fifty years of neuroscience research. It has been a big surprise to neuroscientists as experiment after experiment has shown exactly how malleable (aka ‘plastic’) the structure and function of the brain is. The resolution of this paradox, between the general plausibility of brain training and my suspicion of specific products, lies in the vital issue of control groups. Although experience changes our brains, and although it is now beyond doubt that a physically and mentally active life can prevent cognitive decline across the lifespan, it isn’t at all clear what kinds of activities are necessary for general mental sharpness. Sure, after practicing something you’ll get better at it. And doing something is better than doing nothing, but the crucial question is: is doing something you pay for better than doing something else that is free? The holy grail of brain training would be a simple task which you could practice (and copyright! and sell!!) and which would have benefits for all mental skills. Nobody has shown that such a task or set of tasks exists, so while you could buy a puzzle book, you could also go for a jog or go to the theatre with friends. Science wouldn’t be able to say for certain which activity would have the most benefits for your mental sharpness as an individual – although the smart money is probably on going jogging. It is to the credit of the editors at the Rough Guides that they let me say this in the introduction to the Rough Guide to Brain Training!
There wasn’t room in the book for all the references I used while writing it. This was a great sadness to me, since I believe that unless you include the references for a claim, you’re just spouting off, relying on a dubious authority, rather than really talking about science. So, to make up for this, and by way of an apology, I’ve put the references here. It will be harder to track specific claims from this general list than it would be with in-text citations, so if you do have a query, please get in touch and I promise I will point you to the evidence for any claims I make in the book.
Additionally, I’ll be posting here a few things from the cutting room floor – text that I wrote for the book which didn’t make it into the final draft. Watch out for those, and if you do get your hands on a copy of this Rough Guide to Brain Training, get in touch and let me know what you think.
Amazon link (only £5.24!)
Scientific references and links used in researching the book
Cross-posted at mindhacks.com
An experimental analysis shifts the determination of behavior from autonomous man to the environment—an environment responsible both for the evolution of the species and for the repertoire acquired by each member. Early versions of environmentalism were inadequate because they could not explain how the environment worked, and much seemed to be left for autonomous man to do. But environmental contingencies now take over functions once attributed to autonomous man, and certain questions arise. Is man then “abolished”? Certainly not as a species or as an individual achiever. It is the autonomous inner man who is abolished, and that is a step forward. But does man not then become merely a victim or passive observer of what is happening to him? He is indeed controlled by his environment, but we must remember that it is an environment largely of his own making. The evolution of a culture is a gigantic exercise in self-control. It is often said that a scientific view of man leads to wounded vanity, a sense of hopelessness, and nostalgia. But no theory changes what it is a theory about; man remains what he has always been. And a new theory may change what can be done with its subject matter. A scientific view of man offers exciting possibilities. We have not yet seen what man can make of man.
B. F. Skinner, last words of Beyond Freedom and Dignity (1971)
They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety
Benjamin Franklin, from notes for a proposition at the Pennsylvania Assembly, February 17, 1775
Selfishness is not living as one wishes to live, it is asking others to live as one wishes to live.
Oscar Wilde, in ‘The Soul of Man Under Socialism’ (1891)
I am reading Ad Nauseam: A Survivor’s Guide to American Consumer Culture, edited by Carrie McLaren and Jason Torchinsky. The book is a funny, smart and sometimes shocking collection of articles from Stay Free Magazine and its blog. I first came across Stay Free when I was researching the psychology of advertising and was impressed by their sophisticated take on how adverts affect consumers’ decision making. They discuss in Ad Nauseam how advertising is often misunderstood, with people relying on an intuitive ‘Advertising doesn’t affect me’ view or swinging to the opposite extreme of the ‘Sinister Advertisers Manipulate Consumers with their Mind Control Tricks’ position. Both positions distract from the very real, but not magical, power of advertising.
The book has a great discussion of Wilson Bryan Key’s Subliminal Seduction, the book that launched the idea that subliminal, and often sexual, figures are embedded in random features of adverts such as in ice cube shadows. The idea of these ‘embeds’ is nonsense, of course, but great fun to look for and a great distraction from the real persuasive content of the advert. The book also has a chapter on the origins of modern advertising practice in 19th-century pharmaceutical advertising (the manufacturing of ailments for which ready-made ‘cures’ can be sold has been covered by Vaughan on mindhacks.com before, in relation to mental health). Packed with critical analysis of the advertising industry, more informative history and some shocking examples of how consumerism has worked its way into many aspects of our daily lives, this book is essential intellectual self-defence, managing to be critical and aware without ever being sanctimonious or hysterical.
Cross-posted at mindhacks.com
Last night I had two dreams in which I was being chased (once by a Tour de France cyclist in Venice, once by a giant snake in a field, since you ask). I was thinking that being-chased dreams are probably my brain rehearsing escape behaviours – a night-time training programme built in by evolution. Thinking more on it, I realised that I have never had a chasing dream, only being-chased dreams. Is this because being-chased is more adaptive to rehearse, or because of something peculiar to my idiosyncratic psychology? Let’s find out – please vote using the poll below:
[poll id=”3″]
Inspired by badscience:
Reprint of the text from my article in Prospect magazine, 4th July 2009, Issue 160
If someone tells you something that isn’t true, they may not be lying. At least not in the conventional sense. Confabulation, a rare disorder resulting from severe brain damage, causes its sufferers to relentlessly invent and believe fictions—both mundane and fantastical—about their lives. If asked where she has just been, a patient might say that she was in the laundry room (when she wasn’t) or that she’s been visiting Scotland with her sister (who’s been dead for 20 years), or even that she isn’t in the room where you’re talking to her, but in one exactly like it, further down the corridor. And could you fetch her hand cream please? These stories aren’t maintained for long periods, but are sincerely believed.
While it only affects a tiny minority of those with brain damage, confabulation tells us something important: that spontaneous, fluid, even riotous creativity is a natural part of the design of the mind. The damage associated with confabulation—usually to the frontal lobes—adds nothing to the brain’s makeup. Instead it releases a capacity for fiction that lies dormant inside all of us. Anyone who has seen children at play knows that the desire to make up stories is deeply embedded in human nature. And it can be cultivated too, most clearly by anarchic improvisers like Paul Merton.
Chris Harvey John taught me “improv” at London’s Spontaneity Shop. He can step on stage in front of 200 people to perform a totally unscripted hour-long show. There’ll be no rehearsal, no discussion of characters or plot. Instead, he and the other actors invent a play from scratch, based entirely on their unplanned reactions to each other. This seemingly effortless, throwaway attitude is the opposite of what we normally assume about the creative process: that it is hard work. Artists are often talked about in reverent, mystical tones. Art does connect with deep and mysterious human forces, but that doesn’t mean it is only available to a select few who, through luck or special training, are allowed to invent things.
Psychological research increasingly shows that inventiveness is fundamental to the normal operation of the mind. Aikaterini Fotopoulou is a research psychologist at the Institute of Cognitive Neuroscience, London, who specialises in confabulation. She regards it as a failure of the psychological mechanisms responsible for memory. “These inventions are really memory constructions,” she says. “When people confabulate they are failing to check the origin of the material that they build into their memories. You or I can usually tell the difference between a memory of something we’ve done and a memory of something we’ve just heard about, and distinguish both from stray thoughts or hopes. Confabulators can’t do this. Material that, for emotional or other reasons, comes to mind can at times be indiscriminately assumed to be a memory of what really happened.”
There’s a clue to confabulation in the responses of other patients with damage to the frontal lobes. These patients, who may have suffered violent head injuries or damage from illnesses such as strokes or Alzheimer’s, don’t necessarily confabulate but will often have problems with planning and motivation. They can seem heavily dependent on their external environment. Some, for example, indiscriminately respond to the things they see, regardless of whether it is appropriate in the context. The French psychiatrist Lhermitte demonstrated this “environmental dependency” in the 1980s when he laid a syringe on a table in front of a patient with frontal lobe damage and then turned around and took down his trousers. Without hesitation the patient injected him in the buttocks. This was a completely inappropriate action for the patient, but in terms of the possible actions made available by the scene in front of her, it was the obvious thing to do.
In those patients with frontal damage who do confabulate, however, the brain injury makes them rely on their internal memories—their thoughts and wishes—rather than true memories. This is of course dysfunctional, but it is also creative in some of the ways that make improvisation so funny: producing an odd mix of the mundane and impossible. When a patient who claims to be 20 years old is asked why she looks about 50, she replies that she was pushed into a ditch by her brothers and landed on her face. Asked about his good mood, another patient called Harry explains that the president visited him at his office yesterday. The president wanted to talk politics, but Harry preferred to talk golf. They had a good chat.
Improvisers tap into these same creative powers, but in a controlled way. They learn to cultivate a “dual mind,” part of which doesn’t plan or discriminate and thus unleashes its inventive powers, while the other part maintains a higher level monitoring of the situation, looking out for opportunities to develop the narrative.
Both improvisation and confabulation show that the mind is inherently sense-making. Just as a confabulator is unfortunately driven to invent possible stories from the fragments of their memories and thoughts, so an improviser looks at the elements of a scene and lets their unconscious mind provide them with possible actions that can make sense of it. On stage, this allows them to create entrancing stories. But this capacity for invention is inside all of us. As audience or performers, we are all constantly inventing.
the brakes slipped in the wet
somebody messed up
the dam burst
the reinforcements never came
the supports didn’t hold
i forgot to write the address down
you never called

the brakes slipped in the wet
the backups didn’t run
the first aid box was empty
the safety catch slipped on this gun

the worse came true
we weren’t prepared for this
this wasn’t supposed to happen
but it did

the lifeboats weren’t ready
we weren’t warned
the fire-exits were obstructed
the alarm didn’t go off

somebody should have said something
and somebody should have checked
but it wasn’t me

this wasn’t supposed to happen
but
it did
And I dreamt that, for totally mundane reasons, I needed to change my clothes and as I took off the black t-shirt I was wearing I noticed a flash of red folded-up in the black of the t-shirt cloth. And in the dream I remember thinking to myself “What’s that? Oh, of course, it must be the red snood I wear” (A snood is a kind of scarf, and I do indeed often wear one, which is red). So, still in the dream, I started to peel apart the red and black cloth, as you do with clothes you have taken off all in one go. And the red cloth, it turned out, was not my snood, but instead a red t-shirt which I was wearing underneath my black t-shirt and which, I could see – or maybe ‘know’ in the way that you just know some things in dreams – was some kind of socialist / trade union t-shirt from the mid 1980s.
So far, so boring. This seems even more ordinary and unremarkable than most people’s dreams, which have extraordinary and remarkable content yet still manage to bore in the daylight telling. But listen to this – this ordinary story of a boring dream has a message about the nature of the mind, because, you see, I don’t own any red t-shirts that I wear underneath a black t-shirt.
There’s a theory that dreams result from random activations in our brain, which trigger ideas and images and which some story-telling aspect of our minds then tries to weave meaning around. Dreams reveal the mind trying to make sense of noise, this theory goes.
Now, notice what happened in my boring dream. I – the voice I experience as “I” – was trying to make sense and I came up with a story about the flash of red, that it was my snood. In fact, this is the most plausible story, certainly more plausible than the red t-shirt story. If my mind was a unity then the red snood story adopted by my internal voice would have been the same story adopted by the part of my mind generating the dream experience. But it wasn’t. The dream world delivered me a different story, that of the red t-shirt, and told that story to me, not in terms of an internal voice, but in terms of a direct experience.
Conclusions? That my mind has at least two substantive parts, both of which are capable of reasoning about the world, of making sense of it and telling stories, but which speak a different language and make different inferences from the same data.
First they ignore you, then they laugh at you, then they fight you, then you win.
Mahatma Gandhi, attributed (but disputed)
At the end of this summer I gave a talk in the Treehouse Gallery about technology and thinking. In particular, I told three stories about three pieces of technology which, I argued, fundamentally affect the way we think.
First, I told a personal story about a few months last year when I was without a mobile phone. I found, like others, that this frustration had unexpected benefits (‘Life became slower, and slightly more rewarding‘). I paid attention to what I was doing and who I was with more. I committed to plans easily, both socially and psychically – when I was somewhere I knew I wasn’t going to dial my way out to another arrangement. I made the best of where I was. If I was at a loose end I looked in my immediate environment for things to do and people to talk to, rather than using it as an opportunity to catch up on my email and text message conversations. I don’t know if it was a particularly profound change, but I felt different and acted different because of the lack of a piece of technology (now, of course, I have jumped with both feet back into the world of mobile telephony and the same piece of technology is making me feel and act differently again – its presence rather than its absence is making me feel connected to the wider world, alive with a constant stream of opportunities and messages).
The second story I told was of written language, and in particular some research done by an early twentieth-century psychologist called Luria. My point here was to widen the idea of technology. It is easy to forget that written language is a human invention, something that didn’t need to exist but does. Spoken language is a human universal, and will arise wherever humans can communicate in groups. Written language is a historical event, something that was separately created three or four times in human history and which followed a contingent trajectory. Elements of written language we take for granted are no more inevitable than writing as a whole. We invented silent reading – in the medieval period the norm was to read aloud. We invented punctuation and even gaps between the words – early documents reveal their absence. These things had to arise and become embedded in the culture of writing.
In my talk I mentioned Walter Ong’s fantastic “Orality and Literacy: The Technologizing of the Word”, in which he talks about the particular habits of thought that are associated with cultures which rely on oral tradition and with those which rely on written language. Writing frees thought from the heavy constraints of memory. Ong details the characteristics that oral language will tend to have, in the service of being memorable: characters will tend to be vivid and extreme, of high emotion and grand actions (think myths and legendary heroes); oral knowledge will be expressed with rhyme, repetition and cliché. In contrast, written language can be nuanced, detailed and even boring. Written language can support lengthy analysis and unstructured lists. Furthermore, because written language is separated in time and space from its audience it will tend to be explicit and comprehensive in its explanations, rather than relying on the immediate audience feedback and common reference (“like this!”) that spoken language can draw on. Literacy has a tendency towards the abstract, the objectively distanced and the divisive, whereas orality will have a tendency towards the concrete, the subjectively immediate, the holistic and the conservative (since patterns are hard to establish in memory and fragile once there, the rule is ‘don’t innovate unless you have to’).
Ong discusses Luria’s investigations with literate and illiterate farm workers in Uzbekistan in the 1930s. These investigations showed that questions that seem simple to those from literate cultures involve a whole bundle of learnt habits of thought and ‘unnatural’ assumptions that come along with the acquisition of literacy. Luria asked one illiterate to name the ‘odd one out’ from “a hammer, a saw, a log and a hatchet” and was told “If one has to go, I’d throw out the hatchet, it does the same job as the saw”. The individual questioned saw the objects in terms of activities, not in terms of the abstract category “tools” (the obvious division for a literate). When it is explained that three of the objects are tools, the individual still doesn’t give the desired answer: “Yes, but we still need the wood”. The mind patterned by orality seeks functional wholes, not division, claims Ong.
Another question of Luria’s is a logical syllogism: “In the far north, where there is snow, all bears are white. Novaya Zemlya is in the far north and there is always snow there. What colour are the bears there?” A literate mind, which is used to answering such questions (indeed, used to being sat down and examined for hours at a time for no immediately apparent purpose) knows to seek the logical structure of the question and answer it within its own terms. Not so the illiterate mind. Luria receives answers such as “I don’t know, go and look” and “I’ve seen black bears, every locality has its own animals”. A request to give the definition of a tree receives similar replies: “Why should I? Everybody knows what a tree is”.
I offer this discussion of Ong and Luria to draw out how fundamental the changes brought about by literacy are for our thought, how invisible they are in normal circumstances, and how much we take for granted tendencies like the pretence of objectivity and abstraction from the immediate and concrete.
My third and final example continued the task of widening the idea of what technology is and can do. I told the story of a clinical case study of a man with memory loss known as S.S. (I took this story from a chapter by Margaret O’Connor and colleagues in the book “Broken Memories” by Campbell and Conway). S.S. suffered from a form of brain damage that prevented him creating new memories for episodes in his life. Although he could remember who he was, and events from his pre-injury life, he had no memory for events that happened post-injury. The film Memento gives an extreme illustration of this condition, known as anterograde amnesia. I’ve told S.S.’s story elsewhere, in a chapter I wrote for Christian Nold’s book ‘Emotional Cartography’.
The question O’Connor and colleagues set out to investigate is that of S.S.’s emotional state. S.S. had a buoyant, upbeat, personality, and seemed friendly and cheerful, despite his injury. Conventional tests of mood, which are really formalised sets of direct questions (“Do you cry often?” etc), confirmed this impression. Indirect tests of personality and mental state sounded a warning though – they seemed to suggest that S.S. had profound underlying anxiety and a preoccupation with decay and his own helplessness. These wouldn’t be unusual feelings for someone to have in his position – a relatively young man reduced from being the president of a company and head of the family to being out of work and housebound. The authors ask “was S.S. really depressed or not?”. My interest in the case is around the way in which, it seems likely, S.S.’s deeper underlying anxieties failed to manifest in his moment-to-moment consciousness. S.S., I argue, lacked a particular piece of cognitive technology which most of us take for granted – a reliable memory for the episodes of our life. We can use this episodic memory as a workspace to integrate weak or sometimes fleeting feelings, to store and work out thoughts which may be in contradiction to our momentary surroundings or immediate inclinations. S.S. didn’t have this memory, so his reasonable feelings of anxiety were prevented from ever getting a hold in his consciousness. As they arose they would be repeatedly swept away by his optimistic demeanour.
This example was offered, in part, to illustrate that technology isn’t just something we think about but something we think with (I am following the line of thought set out by Andy Clark in his Natural-Born Cyborgs here).
So I ended my talk with a question for the audience: ‘If we could invent any technology to help society deal with the future, what would we invent?’ The subsequent discussion never really cohered, I think, in part, because I’d been too successful (!) in broadening our conception of what technology could be. In these terms religions and cultures are technologies of thought too, and discussion circled around the idea that perhaps inventions of this kind are what we need in the future.
I realised, following this discussion, that I was interested in the effect of technology on thinking in a very different sense. The appeal of technology, for me, is that technology embodies in a physical object particular tendencies for thought, and, moreover, that the spread of technology offers a participatory, bottom-up model of how a kind of thinking can spread through the population without government policies, laws or other top-down corporate decisions. Episodic memory, written language and mobile phones spread without centralised control and worked their particular effects on our thinking one person at a time (but have now had profound societal effects).
One example of a simple piece of technology which affects behaviour is given in Howard Rachlin’s book ‘The Science of Self-Control’. He discusses an experiment in which smokers were divided into three groups. One group was sent away to smoke as normal. The second was given an educational class about the health effects of smoking and encouraged in a number of ways to cut down. The third group, and the one that interests me most, was not told to cut down at all, but simply to record how much they smoked using a system in which, with each cigarette, they tore a tab off a piece of card. Amazingly, the group that cut down smoking the most was not the group trying to cut down, but the group which was given an effective way of recording and monitoring their smoking. Rachlin offers the convincing analysis that the problem a smoker faces is one common to many situations – that of ‘complex ambivalence’ between single actions which are desirable in isolation, and larger patterns of actions which are undesirable in aggregate. Each cigarette is individually tempting, and carries no particularly large marginal cost, but overall ‘being a smoker’ has a large financial and health cost. You can see that the same pattern of temptation applies to many situations: having a drink versus being drunk or being ‘a drunk’, taking a single flight for a particular reason and being a frequent flyer, for example. The power of the card system is that it provides a mechanism by which people can integrate individual choices into a larger pattern, so that they may make decisions based on their preferences within a larger temporal window. In effect, the card system allows or encourages us to prioritise our longer-term interests over the short-termist within all of us. In contrast, most new technologies seem to prioritise the opposite self in us – the short-termist over the self that practices delay of gratification.
Another example of a simple piece of technology that can affect our day to day living is an energy meter which displays the momentary electricity consumption of your house (like this one). In theory there is already a mechanism for reducing household energy consumption (known as “your electricity bill”), but these energy meters have been shown to reduce consumption by 30%. Because the feedback between your actions and energy use is immediate – turn on a light and see the meter reading jump by 120 watts – it becomes clear how to effectively cut down your electricity consumption. This will be obvious to all well trained cyberneticists – ‘How can you have control without feedback?’
After the talk, Vinay pointed out to me that the environmental movement has never got to grips with the implications of Amdahl’s Law, a principle in computer programming for optimising the speed of your code by improving the part which causes the largest hold-up. In other words, the principle is a guide to how you should direct your efforts in trying to save a limited resource, in this case time. Without similar guidance – proper context for our energy use – the environmental movement is tempted to fall back on generalised hair-shirt ludditism (build nothing, do nothing, everything is bad). Without feedback our choice is between total rejection of consumption and total indulgence.
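For the computationally minded, the principle can be stated in a few lines. This is just my sketch for illustration (the function name is mine, not anything from the talk): if a part accounts for a fraction p of the total cost and you speed that part up by a factor s, the whole only gets faster by 1 / ((1 - p) + p/s), which is why effort is best spent on the largest contributor.

    def amdahl_speedup(p, s):
        """Overall speedup when a fraction p of the total cost is made s times faster."""
        return 1.0 / ((1.0 - p) + p / s)

    # Speeding up a part that is 10% of the total, even a thousandfold, gains only ~1.11x overall;
    # halving the cost of a part that is 80% of the total gains ~1.67x.
    print(amdahl_speedup(0.1, 1000))  # ~1.11
    print(amdahl_speedup(0.8, 2))     # ~1.67

The same logic applies to any limited resource: measure where the big costs actually fall, and direct effort there first.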
In keeping with this latent ludditism, technology and technological solutions are often seen as the enemy of the environmental movement. This is an understandable defence against a techno-utopianism which can be a form of denial about the seriousness of climate change, offering the false promise of business as usual for the consumer society. In my talk I wanted to focus on how technology could be part of the solution in a positive way, to help us deal with energy-descent and the move to a sustainable global society, to be part of dealing with change, not of denying it. I wanted to ask what kinds of technology we should be encouraging: energy meters? lifetime-guaranteed products? reusable containers? what else? And what kinds of technology we should be discouraging: everything ‘disposable’? things which titillate and encourage our most superficial, immediate and grossly consumptive personalities? what else?
Technologies can encourage and reinforce elements of our selves. Because technological objects are solid things they can be anchors for behaviour which won’t be easily swept away by changes in mood or fashion. We need to find technologies which constrain our worst instincts and encourage our best. I have a liberal’s faith in human nature: that, given the opportunity, we can be rational social creatures who recognise our best long-term interests rather than being enslaved to momentary passions and immediate rewards. We can find technologies that encourage this long view. Technology can help us realise our wiser, wider, better selves rather than our greedy, selfish, myopic selves.
What would you invent?
…I may venture to affirm of the rest of mankind, that they are nothing but a bundle or collection of different perceptions, which succeed each other with an inconceivable rapidity, and are in a perpetual flux and movement. Our eyes cannot turn in their sockets without varying our perceptions. Our thought is still more variable than our sight; and all our other senses and faculties contribute to this change: nor is there any single power of the soul, which remains unalterably the same, perhaps for one moment. The mind is a kind of theatre, where several perceptions successively make their appearance; pass, repass, glide away, and mingle in an infinite variety of postures and situations. There is properly no simplicity in it at one time, nor identity in different, whatever natural propension we may have to imagine that simplicity and identity. The comparison of the theatre must not mislead us. They are the successive perceptions only, that constitute the mind; nor have we the most distant notion of the place where these scenes are represented, or of the materials of which it is composed.
David Hume, in A Treatise of Human Nature, Book I, Part 4, Section 6, ‘Of Personal Identity’