February 13th, 2013
I screwed up. My latest column for BBC Future is about why cyclists enrage motorists. My argument is that cyclists offend the ‘moral order’ of the roads, evoking in motorists a feeling of outrage over perceived rule breaking.
Unfortunately, I included some loose words in my article that implied things I don’t believe and wasn’t arguing. Exhibit A:
Then along comes a cyclist, who seems to believe that the rules aren’t made for them, especially the ones that hop onto the pavement, run red lights, or go the wrong way down one-way streets.
This wrongly suggests both that I think the typical cyclist breaks the law (they don't), and/or that motorists are enraged by cyclists' law breaking. This is not the case. Rather, I am arguing that motorists are enraged by cyclists' perceived rule breaking, where I mean rule in the sense of 'convention'. Cyclists habitually, legally, and sensibly break conventions of car driving, such as waiting in queued traffic, moving at the speed limit or not undertaking.
Exhibit A has now been changed in the article to the more pleasing:
Then along come cyclists, innocently following what they see are the rules of the road, but doing things that drivers aren’t allowed to: overtaking queues of cars, moving at well below the speed limit or undertaking on the inside.
So, my bad, and apologies for this. I should have been a lot clearer than I was. I'm just grateful that a few people understood what I was getting at (if you read the whole article, I hope the correct interpretation is supported by the rest of the phrasing I use). The amount and vehemence of feedback has been quite surprising. Lots of people thought I was a frustrated driver who hated cyclists. In fact, the bike is my main form of transport. I've ridden nearly every day for over ten years (and been hit by a car once). For this article I was trying not to sound like the self-righteous cycling proto-fascist I sometimes feel like. I obviously succeeded. Perhaps too well.
Other people thought I was claiming that this was the only factor affecting road users' attitudes. I don't think this. Obviously selective memory (for bad cyclists or drivers), in-group/out-group effects and the asymmetry in vulnerability all play a role. I did write a version of the article which laid out the conceptual space more clearly, but I decided it was boring to read, and really I wanted to talk about evolutionary game theory and make a novel – and, I thought, interesting – claim.
I sometimes think I should get "Telling the truth, just not the whole truth" translated into Latin so I can use it as the motto for the column. Every time I write one, someone comes back to me with something I missed out. If I tried to be comprehensive I'd end up with a textbook, instead of an 800-word magazine column. I don't want to write textbooks, so I'm reasonably happy with leaving things out, but I do worry that there is a line you cross when telling some of the truth amounts to a deception or distortion of the whole truth. I'm trying, each time, not to cross that line. Feedback on how to manage this is welcome.
There were many other comments of all shades. You can 'enjoy' some of them on the BBC Future Facebook page here. If you did leave a comment by email, Facebook or Twitter, I'm sorry I couldn't respond to all of them. I hope this post clarifies things a bit.
October 25th, 2012
I have made myself a new website for my day job. I used WordPress, and it was fantastically convenient. I'm also pretty happy with how it looks. Feedback welcome.
June 6th, 2012
The exploration-exploitation trade-off is a fundamental dilemma whenever you learn about the world by trying things out. The dilemma is between choosing what you know and getting something close to what you expect (‘exploitation’) and choosing something you aren’t sure about and possibly learning more (‘exploration’). For example, suppose you are in a restaurant and you look at the menu:
- Fish and Chips
- Chole Poori
- Paneer Uttappam
- Khara Dosa
Assuming for the sake of example that you're not very familiar with Sri Lankan food, you've now got a choice. You can 'exploit' – go with the fish and chips, which will probably be alright – or you can 'explore' – try something you haven't had before and see what you get. Obviously, which you decide to do will depend on many things: how hungry you are, how good the restaurant reviews are, how adventurous you are, how often you reckon you'll be coming back, etc. What's important is that the study of the best way to make these kinds of choices – called reinforcement learning – has shown that optimal learning requires that you sometimes make some bad choices. This means that sometimes you have to avoid the action you think will be most rewarding, and take an action which you think will be less rewarding. The rationale is that these 'sub-optimal' actions are necessary for your long-term benefit – you need to go off track sometimes to learn more about the environment. The exploration-exploitation dilemma is really a trade-off: enjoy more now vs learn more now and enjoy later. You can't avoid it; all you can do is position yourself somewhere along the spectrum.
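The restaurant dilemma maps directly onto the classic 'multi-armed bandit' problem from reinforcement learning. Below is a minimal sketch of the standard epsilon-greedy strategy, with made-up 'tastiness' values for the menu above – an illustration of the general idea, not anything from our paper:

```python
import random

def epsilon_greedy_bandit(true_rewards, epsilon=0.1, n_trials=1000, seed=1):
    """Epsilon-greedy agent on a simple bandit: with probability epsilon
    pick a random option (explore), otherwise pick the option with the
    best running estimate of reward (exploit)."""
    rng = random.Random(seed)
    n = len(true_rewards)
    estimates = [0.0] * n  # running mean reward per option
    counts = [0] * n
    for _ in range(n_trials):
        if rng.random() < epsilon:
            choice = rng.randrange(n)  # explore: try any dish
        else:
            choice = max(range(n), key=lambda i: estimates[i])  # exploit
        reward = true_rewards[choice] + rng.gauss(0, 0.1)  # noisy payoff
        counts[choice] += 1
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
    return estimates

# Hypothetical tastiness of the four menu items above; fish and chips
# (index 0) is the safe bet, but not the best dish on this made-up menu.
estimates = epsilon_greedy_bandit([0.5, 0.6, 0.9, 0.55])
best = estimates.index(max(estimates))
```

Setting epsilon to zero gives pure exploitation, and the agent may never discover the best dish; raising it buys more learning at the cost of more bad meals – which is exactly the trade-off.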
Because the trade-off is fundamental we would expect to be able to see it in all learning domains, not just restaurant food choices. In work just published, we’ve been using a new task to look at how actions are learnt. Using a joystick we asked people to explore the space of all possible movements, giving them a signal when they made a particular target movement. This task – which we’re pretty keen on – gives us a lens to look at the relation between how people explore the possible movements they can make and which particular movements they learn to rely on to generate predictable outcomes (which we call ‘actions’).
Using data gathered from this task, it is possible to see the exploitation-exploration trade-off in action. With each target people get 10 attempts to try to identify the right movement to make. Obviously some successful movements will be more efficient than others, because it is possible to hit the target after going all “round the houses” first, adding lots of extraneous movements and taking longer than needed. If you had a success like this you could repeat it exactly (‘exploit’), or try and cut out some of the extraneous movement and risk missing the target (‘explore’). Obviously this refinement of action through trial and error is of critical interest to anyone who cares about how we learn skilled movements.
I calculated an average performance score for the first 50% and second 50% of attempts (basically a measure of distance travelled before hitting the target – so lower scores mean better performance). I also calculated how variable these performance scores were in the first 50% and second 50%. Normally we would expect people who perform best in the first half of a test to perform best in the second half (depressingly, people who start out ahead usually stay there!). But this analysis showed up something interesting: a strong correlation between variability in the first half and performance in the second half. You can see this in the graph.
This shows that people who are most inconsistent when they start to learn perform best towards the end of learning. Usually inconsistency is a bad sign, so it is somewhat surprising that it predicts better performance later on. The obvious interpretation is in terms of the exploration-exploitation trade-off. The inconsistent people are trying out more things at the beginning, learning more about what works and what doesn't. This provides them with the foundation to perform well later on. This pattern holds when comparing across individuals, but it also holds when comparing across trials (so for the same individual, later performance is better for targets on which they were most inconsistent early in learning).
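To give a feel for the kind of analysis involved, here is a toy sketch with simulated data standing in for the real scores. The numbers and effect sizes are invented; they merely mimic the qualitative pattern described above (early variability predicting late performance):

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient, standard library only."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(0)
early_variability, late_performance = [], []
for _ in range(200):  # 200 simulated participants
    explore = rng.uniform(0.0, 1.0)  # hidden tendency to explore early on
    # early attempts: more exploration means more scattered scores
    early = [10 + rng.gauss(0, 1 + 3 * explore) for _ in range(10)]
    early_variability.append(statistics.stdev(early))
    # late performance: lower is better, improves with early exploration
    late_performance.append(8 - 3 * explore + rng.gauss(0, 1))

r = pearson_r(early_variability, late_performance)
```

Because lower scores mean better performance here, the predicted signature is a negative correlation: the more variable a participant's early attempts, the lower (better) their later score.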
You can read about this, and more, in our new paper, which is open access over at PLoS One: 'A novel task for the investigation of action acquisition'.
January 28th, 2012
Previously I blogged about an experiment which used the time it takes people to make decisions to try and elucidate something about the underlying mechanisms of information processing (Stafford, Ingram & Gurney, 2011). This post is about the companion paper to that experiment, reporting some computational modelling inspired by the experiment (Stafford & Gurney, 2011).
The experiment contained a surprising result, or at least a result that I claim should surprise some decision theorists. We had asked people to make a simple judgement – to name out loud the ink colour of a word stimulus, the famous Stroop task (Stroop, 1935). We found that two factors which affected the decision time had independent effects – the size of the effect of each factor was not affected by the other factor. (The factors were the strength of the colour, in terms of how pale vs deep it was, and how the word was related to the colour: matching it, contradicting it or being irrelevant.) This type of result is known as "additive factors", because the factors add independently of each other; on a graph of results this looks like parallel lines.
There’s a long tradition in psychology of making an inference from this pattern of experimental results to saying something about the underlying information processing that must be going on. Known as the additive factors methodology (Donders, 1868–1869/1969; Sternberg, 1998), the logic is this: if we systematically vary two things about a decision and they have independent effects on response times, then the two things are operating on separate loci in the decision making architecture – thus proving that there are separate loci in the decision making architecture. Therefore, we can use experiments which measure only outcomes – the time it takes to respond – to ask questions about cognitive architecture; i.e. questions about how information is transformed and combined as it travels between input and output.
The problem with this approach is that it commits a logical fallacy. True separate information processing modules can produce additive factors in response data (A -> B), but that doesn't mean that additive factors in response time data imply separate information processing modules (B -> A). My work involved taking a widely used model of information processing in the Stroop task (Cohen et al, 1990) and altering it so it contained discrete processing stages, or not. This allowed me to simulate response times in a situation where I knew for certain the architecture – because I'd built the information processing system. The result was surprising. Yes, a system of discrete stages could generate the pattern of data I'd observed experimentally and reported in Stafford, Ingram & Gurney (2011), but so could a single-stage system in which all information was continuously processed in parallel, with no discrete information processing modules. Even stranger, both of these kinds of systems could be made to produce either additive or non-additive factors without changing their underlying architecture.
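A deliberately contrived toy model makes the logical point concrete. The single-stage system below has one accumulator and one drift rate, into which both factors feed jointly – no discrete stages anywhere – yet it produces perfectly additive factors. This is an illustration of the fallacy only, not the Cohen et al. model used in the paper, and all the parameter values are made up:

```python
def mean_rt(drift, threshold=100.0, t0=50.0):
    """Time for a single continuous accumulator to climb to threshold."""
    return t0 + threshold / drift

def drift_rate(cost_a, cost_b, base=0.01):
    """One processing stage: both factors jointly set a single drift."""
    return 1.0 / (base + cost_a + cost_b)

# Made-up 'processing costs' for two factors, each with two levels.
factor_a = {"pale": 0.05, "deep": 0.02}           # colour strength
factor_b = {"conflict": 0.04, "congruent": 0.01}  # word condition

rts = {(a, b): mean_rt(drift_rate(ca, cb))
       for a, ca in factor_a.items() for b, cb in factor_b.items()}

# Effect of colour strength at each word condition: identical (3 time
# units here) -- perfectly additive factors, parallel lines, from a
# model with no separate processing stages at all.
effect_at_conflict = rts[("pale", "conflict")] - rts[("deep", "conflict")]
effect_at_congruent = rts[("pale", "congruent")] - rts[("deep", "congruent")]
```

The trick is simply that the two costs combine additively before the single reciprocal non-linearity, so their effects pass straight through to mean RT. That is the whole point: additive response times do not pin down the architecture.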
The conclusion is straightforward. Although inferring different processing stages (or ‘modules’) from additive factors in data is a venerable tradition in psychology, and one that remains popular (Sternberg, 2011), it is a mistake. As Henson (2011) points out, there’s too much non-linearity in cognitive processing, so that you need additional constraints if you want to make inferences about cognitive modules.
Thanks to Jon Simons for spotting the Sternberg and Henson papers, and so inadvertently prompting this bit of research blogging.
Cohen, J. D., Dunbar, K., and McClelland, J. L. (1990). On the control of automatic processes – a parallel distributed-processing account of the Stroop effect. Psychol. Rev. 97, 332–361.
Donders, F. (1868–1869/1969). "Over de snelheid van psychische processen. Onderzoekingen gedaan in het physiologisch laboratorium der Utrechtsche hoogeschool," in Attention and Performance, Vol. II, ed. W. G. Koster (Amsterdam: North-Holland).
Henson, R. N. (2011). How to discover modules in mind and brain: The curse of nonlinearity, and blessing of neuroimaging. A comment on Sternberg (2011). Cognitive Neuropsychology, 28(3-4), 209-223. doi:10.1080/02643294.2011.561305
Stafford, T. and Gurney, K. N. (2011), Additive factors do not imply discrete processing stages: a worked example using models of the Stroop task, Frontiers in Psychology, 2:287.
Stafford, T., Ingram, L., and Gurney, K. N. (2011), Pieron’s Law holds during Stroop conflict: insights into the architecture of decision making, Cognitive Science 35, 1553–1566.
Sternberg, S. (1998). “Discovering mental processing stages: the method of additive factors,” in An Invitation to Cognitive Science: Methods, Models, and Conceptual Issues, 2nd Edn, eds D. Scarborough, and S. Sternberg (Cambridge, MA: MIT Press), 702–863.
Sternberg, S. (2011). Modular processes in mind and brain. Cognitive Neuropsychology, 28(3-4), 156-208. doi:10.1080/02643294.2011.557231
Stroop, J. (1935). Studies of interference in serial verbal reactions. J. Exp. Psychol. 18, 643–662.
January 28th, 2012
I’ve had a pair of papers published recently and I thought I’d have a go at putting simply what the research reported in them shows.
The first is called 'Pieron's Law holds during Stroop conflict: insights into the architecture of decision making'. It reports a variation on the famous Stroop task. The Stroop task involves naming the ink colour of various words, words which can themselves be the names of colours. So you find yourself looking at the word GREEN in red ink and your job is to say "red". If the word matches the ink colour people respond faster and more accurately; if the word doesn't match, they are slower and less accurate. What we did was vary the strength of the colour component of the stimulus – e.g. we used more and less intense red 'ink' (actually we presented the stimuli on a computer screen, so the ink was pixel values). There's a well-established relationship between stimulus strength and responding – the 'Pieron's Law' of the title – showing how response time decreases with increasing stimulus strength.
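Pieron's Law is usually written as RT = t0 + k·I^(−β), where I is stimulus intensity and t0 is an irreducible minimum response time. A quick sketch, with illustrative parameter values that are not fitted to any data:

```python
def pieron_rt(intensity, t0=0.3, k=0.1, beta=0.8):
    """Pieron's Law: mean response time (seconds) falls as a power
    function of stimulus intensity, levelling off at a floor of t0.
    Parameter values here are illustrative, not fitted."""
    return t0 + k * intensity ** (-beta)

# Repeatedly doubling the intensity of the 'ink': responses get
# faster, but with diminishing returns as RT approaches the floor.
rts = [pieron_rt(i) for i in (0.2, 0.4, 0.8, 1.6)]
```

The diminishing-returns shape is the signature to look for: big speed-ups at the pale end of the intensity range, small ones at the deep end.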
So our experiment simply took two well-known psychological findings and combined them in a single experiment. The result is interesting because it can help us arbitrate between different theories of how decisions are made. One popular theory of decision making is that all the information relevant to the decision is optimally combined to produce the swiftest and most accurate response (Bogacz, 2007). There's lots of support for this theory, including evidence from looking at responses of humans making simple judgements, recordings from the brain cells of monkeys, and deep connections to statistical theory. It's without doubt that the brain can and does integrate information optimally in some circumstances. What is interesting to me is that this optimal information integration perspective is completely at odds with the most successful research programme in post-war psychology: the heuristics and biases approach. This body of evidence suggests that human decision making is very non-optimal, with all sorts of systematic errors creeping into the way people combine information to make a decision. The explanation for these errors is that we process information using heuristics, mental shortcuts which give a good answer most of the time and cut down on the amount of effort we have to expend in deciding ("do what you did last time" is probably the most common decision heuristic).
My experiment connects to these ideas because it asked people to make a simple judgement (the colour of the ink), like the experiments supporting an optimal information integration perspective on decision making, but the judgement requested was just marginally more complex, because we manipulated both Stroop condition (whether the word and ink matched) and colour strength. If you are a straight-down-the-line optimal information integration theorist then you must believe that evidence about the decision based on the word is combined with evidence about the decision based on the colour to make a single 'amount of evidence' variable which drives the decision. In the paper I call this the 'common metric' hypothesis. The logic is a bit involved (see the paper), but a consequence of this hypothesis is that the size of the effect of the word condition should vary across the colour strength condition, and vice versa. In other words, you should see an interaction. Visually, the lines on the graph of results would be non-parallel.
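One simple way to see why a common evidence variable predicts an interaction: if mean response time is roughly inversely related to the summed evidence, then equal changes in word evidence matter less when colour evidence is already high. The numbers below are invented purely for illustration and simplify the logic spelled out in the paper:

```python
def rt_common_metric(word_evidence, colour_evidence, threshold=1.0, t0=0.3):
    """If word and colour evidence pour into one shared 'amount of
    evidence' variable, mean RT is roughly threshold / total evidence."""
    return t0 + threshold / (word_evidence + colour_evidence)

# Made-up evidence values: the word pushes against the decision
# (conflict) or for it (congruent); the colour is pale or deep.
conflict, congruent = -0.3, 0.3
pale, deep = 1.0, 2.0

word_effect_at_pale = (rt_common_metric(conflict, pale)
                       - rt_common_metric(congruent, pale))
word_effect_at_deep = (rt_common_metric(conflict, deep)
                       - rt_common_metric(congruent, deep))
# The word effect shrinks as colour evidence grows: non-parallel
# lines, i.e. an interaction.
```

Under this toy version of the common metric hypothesis the Stroop effect should be much larger for pale colours than for deep ones – which, as the next paragraph shows, is not what we found.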
Here’s what we found:
What you’re looking at is a graph of response times (the y-axis) for different colour strengths (the x-axis). The three lines are the three Stroop conditions: when the word matches the colour (‘congruent’), when it doesn’t match (‘conflict’) and when there is no word (‘control’). The result: there is no interaction between these two factors – the lines are parallel.
The implication is that you don't need to move very far from simple perceptual decision making before human decision making starts to look non-optimal – or at least non-optimal in the sense of combining information from different sources. This is important because of the widespread celebration of decision making as informationally optimal. Reconciling this research programme with the wider heuristics and biases approach is important work, and fits more generally with an honourable tradition in science of finding "boundary conditions" where one way the world works gives way to another.
Coming up next: Inferring from behavioural results to underlying cognitive architecture – it's not as simple as we were told (Stafford & Gurney, 2011).
Bogacz, R. (2007). Optimal decision-making theories: linking neurobiology with behaviour. Trends in Cognitive Sciences, 11(3), 118–125.
Stafford, T., Ingram, L. and Gurney, K.N.(2011), Pieron’s Law holds during Stroop conflict: insights into the architecture of decision making, Cognitive Science 35, 1553–1566.
Stafford, T. and Gurney, K. N. (2011), Additive factors do not imply discrete processing stages: a worked example using models of the Stroop task, Frontiers in Psychology, 2:287.
January 25th, 2012
(Local news warning: just details of a talk I’m giving)
Psychology in the Pub is a Sheffield event which happens in the Showroom Cinema Bar. I'm giving a talk there on the 15th of March and I've just written the blurb. Here it is, for your enjoyment:
Thinking Meat: Understanding brain and mind
You’re brain weighs the same as half a brick and has the consistency of warm butter. Yet such a mundane object allows you to have every thought you’ve ever had, every feeling, dream or hope. This talk will be an introduction to what I view as the central puzzle of psychology: how the brain creates the mind. I’ll discuss fundamental insights from the study of perception and action and suggest how these provide important clues for understanding all of human psychology. The talk will feature: Lego Robots! ‘Subliminal messages’! Britney Spears! Pirates! And a no-holds-bared personal revelation from the speaker
The content will be similar to the talk I gave in Manchester recently, which you can hear here.
May 13th, 2011
This is a plot of the number of citations turned up by a simple "Web of Knowledge" search for papers containing the words "dopamine" and "reinforcement learning", against year of publication. The rise, dating from approximately the time of publication of the first computational theory of phasic dopamine function, is rapid. There are, as far as I know, two computational theories of phasic dopamine function: one from Schultz, Dayan and Montague (1997) and one from our team here in Sheffield (Redgrave and Gurney, 2006).
Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275, 1593-1599.
Redgrave, P., & Gurney, K. (2006). The short-latency dopamine signal: a role in discovering novel actions? Nature Reviews Neuroscience, 7(12), 967-75.
February 23rd, 2011
I've been invited to give a talk at the York Centre for Complex Systems Analysis. I'll be speaking on the 13th of May, a Friday, under the title "Inferring cognitive architectures from high-resolution behavioural data". It'll be an overview of what it is exactly that I try to do as part of my work.
Abstract: I will give an overview of some of the work done in our lab, the Adaptive Behaviour Research Group (http://www.abrg.group.shef.ac.uk/ ) in the Department of Psychology, University of Sheffield. Across human, non-human animal, simulation and robotics platforms we investigate the neural circuits that allow intelligent behaviour, bringing to bear psychological, neuroscientific and computational perspectives. We are particularly interested in the action selection problem – that of deciding what to do next (and of doing it). This talk will focus on my own work looking at 3 paradigms where we have collected high-resolution behavioural data in humans – mistakes made by expert touch typists, eye-movements during visual search and a novel paradigm for investigating the learning of new motor skills. I will make comments on how we analyse such data in order to make inferences about the underlying architecture of human decision making.
October 25th, 2010
My ebook "The Narrative Escape" was published last week by 40k books. 'The Narrative Escape' is a long essay about morality, psychology and stories, and is available in Kindle format (this means you can get it for your iPhone, iPad or as a PDF too). From the ebook blurb:
We instinctively tell stories about our experiences, and get lost in stories told by other people. This is an essay about our story-telling minds. It is about the psychological power of stories, and about what the ability to enjoy stories tells us about the fundamental nature of mind.
My argument in 'The Narrative Escape' begins by exploring Stanley Milgram's famous experiments on obedience, looking at them as an example of moral decision making – particularly for that minority who choose to disobey in the experiment. A fascinating thing about these experiments is that although they tell us a lot about what makes people obey authority, they leave mysterious that quality that makes people resist tyrannical authority. I then go on to contrast this moral disobedience with conventional psychological investigations of morality (for example the work of Lawrence Kohlberg). In using descriptions of moral dilemmas to ask people about their moral reasoning, this research, I argue, misses something essential about real-world moral choices. This element is the ability to realise that you are acting according to someone else's version of what is right and wrong, and to step outside of their definition of the situation. This is the "narrative escape" of the title. The essay also talks about dreams, stories and story-telling and other topics which I hope will be of interest.
There is also an interview with me available here, which discusses the ebook and some other more and less related topics.
The essay is available in Italian as “La Fuga Narrativa”
Amazon.com Link for the English edition.
…And coming soon in Portuguese, I’m told!
April 11th, 2010
Spotted in a doorway round the corner from the Union Pool, in Williamsburg, Brooklyn:
I am now back in Sheffield, England
February 27th, 2010
I don’t predict what is going to happen when I watch a film. It isn’t like I can’t, it just doesn’t occur to me. When the bad guy turns out to be a good guy (or vice versa) my friends will say “Well that was obviously going to happen”. But it wasn’t obvious to me.
If you had described the salient facts to me, given me a plot summary, I would be able to make the correct prediction, I’m sure, but something about the way my brain works stops me making the leap from the level of experience to the level of description. I am stuck just experiencing the events of the film, and not representing them in a way that would allow me to draw obvious conclusions.
Let’s call this ability ‘narrative extraction’.
I’ve got some smarts, sure, but I think I’ve got a deductive kind of smarts. This is the kind of smarts that can take a set of facts, or axioms, and crunch through the consequences until you get to inevitable result. I’m good at maths and most logic puzzles. I think narrative extraction requires a different kind of smarts. It is the ability to pick an appropriate set of facts or an appropriate method of description which will provide you with an answer which serves your purposes.
For the film example you need to do more than just experience the characters, you need to classify them by their types, the film by genre, the plot by template and from all that infer what would be the most likely thing for an exciting film.
Moral reasoning requires the same kind of smarts. There's a famous test of moral reasoning by Kohlberg, where children are presented with vignettes ("Your wife is sick and you cannot afford medicine. Should you break into the pharmacy and steal it?" type things). Kohlberg ranked children's moral reasoning, giving the most credit to moral reasoning which invoked logical deductions from abstract moral frameworks.
Gilligan, in her book "In a Different Voice", has a powerful critique of Kohlberg's system, on the grounds that it gave credit to one kind of reasoning – abstract logical deduction or calculus (e.g. "Stealing is wrong, but letting your wife die is worse, so I should therefore steal the medicine") – and not to another kind of more contextually sensitive reasoning (e.g. "If I break into the pharmacy then I might get caught and then I won't be able to help my wife, so I should find another way of getting the medicine"). This sensitivity to what is not in the question – what is not explicitly stated – is a part of narrative extraction.
There is another important, perhaps more primary, way in which narrative extraction is required for moral judgement. Kohlberg's vignettes are not just logic problems, which can be convergently or divergently solved; they are also descriptions of the world. Thus they do one of the major tasks of moral reasoning for you – that of going from the nebulous world of experience to the concrete world of categories and actions.
As soon as you describe the world you massively constrain the scope for moral reasoning. You can still make the wrong judgement, but you have made moral reasoning possible by the act of description using moral categories.
Milgram demonstrated scientifically the banality of evil, that normal people could do inhuman things. Did those people who thought they were delivering lethal electric shocks make an incorrect moral judgement? Did they weigh "doing what you are told" against "the life of an innocent" and choose the former? My intuition is that they did not, not explicitly. Yes, they made the wrong choice (we too would probably have made the wrong choice), but I believe that they were so caught up in the moment, in the emotion of the situation, that they did not move to the necessary level of description. We, reading this in comfort, are given the moral categories, and the right choice is so obvious that we have difficulty empathising with their situation. The narrative extraction has been done for us, so the right thing seems obvious. But it isn't.
January 28th, 2010
This chapter was due for inclusion in The Rough Guide Book of Brain Training, but was cut – probably because the advice it gives is so unsexy!
The idea of cognitive enhancers is an appealing one, and its attraction is obvious. Who wouldn’t want to take a pill to make them smarter? It’s the sort of vision of the future we were promised on kids TV, alongside jetpacks and talking computers.
Sadly, this glorious future isn't here yet. The original and best cognitive enhancer is caffeine ("creative lighter fluid" as one author called it), and experts agree that there isn't anything else available to beat it. Lately, sleep researchers have been staying up and getting excited about a stimulant called modafinil, which seems to temporarily eliminate the need for sleep without the jitters or comedown of caffeine. Modafinil isn't a cognitive enhancer so much as something that might help with jetlag, or let you stay awake when you really should be getting some kip.
Creative types have had a long romance with alcohol and other more illicit narcotics. The big problem with this sort of drug (aside from the oft-documented propensity for turning people into terrible bores) is that your brain adapts to, and tries to counteract, the effects of foreign substances that affect its function. This produces the tolerance that is a feature of most prolonged drug use – whereby the user needs more and more to get the same effect – and also the withdrawal that characterises drug addiction. You might think this is a problem only for junkies but, if you are a coffee or tea drinker, just pause for a moment and reflect on any morning when you've felt stupid and unable to function until your morning cuppa. It might be for this reason that the pharmaceutical industry is not currently focusing on developing drugs for creativity. Plans for future cognitive enhancers focus on more mundane, workplace-useful skills such as memory and concentration. Memory-boosters would likely be most useful to older adults, especially those with worries about failing memories, rather than younger adults.
Although there is no reason in principle why cognitive enhancers couldn't be found which fine-tune our concentration or hone our memories, the likelihood is that, as with recreational drugs, tolerance and addiction would develop. These enhancing drugs would need to be taken in moderate doses and have mild effects – just as many people successfully use caffeine and nicotine for their cognitive effects on concentration today. Even if this allowed us to manage the consequences of the brain trying to achieve its natural level, there's still the very real possibility that use of the enhancing drugs would need to be fairly continuous – just as it is with smokers and drinkers of tea and coffee. And even then our brains would learn to associate the drug with the purpose for which it is taken, which means it would get harder and harder to perform that purpose without the drugs, as with the coffee drinker who can't start work until he's had his coffee. Furthermore, some reports suggest that those with high IQ who take cognitive enhancers are most likely to mistake the pleasurable effect of the substance in question for a performance benefit, while actually getting worse at the thing they're taking the drug for.
The best cognitive enhancer may well be simply making best use of the brain's natural ability to adapt. Over time we improve anything we practice, and we can practice almost anything. There are a hundred better ways to think and learn – some of them are in this book. By practicing different mental activities we can enhance our cognitive skills without drugs. The effects can be long lasting, the side effects are positive, and we won't have to put money in the pockets of a pharmaceutical company.
Link to more about The Rough Guide book of Brain Training
Three excellent magazine articles on cognitive enhancers, from: The New Yorker, Wired and Discover
Cross-posted at mindhacks.com
January 14th, 2010
The Rough Guide to Brain Training is a puzzle book which includes essays and vignettes by myself. The book has 100 days of puzzles which will challenge your mental imagery, verbal fluency, numeracy, working memory and reasoning skills. There are puzzles that will look familiar, like sudoku, and some new ones I'd never seen before. Fortunately the answers are included at the back. Gareth made these puzzles. I find them really hard.
I have 10 short essays in the book, covering topics such as evidence-based brain training, how music affects the developing brain, optimal brain nutrition and what the brains of the future will look like. As well as the essays, I wrote numerous short vignettes, helpful hints and surprising facts from the world of psychology and neuroscience (did you know that squids have doughnut-shaped brains? That you share 50% of your genes with a banana? That signals travel between brain cells at up to 200mph, which is fast compared to a cycle courier, but slow compared to a fibre optic cable?). Throughout the book I try to tell it straight about what is, isn’t and might be true about brain training. I read the latest research and I hope I tell a sober, but optimistic, message about the potential for us to change how we think over our lifetimes (and the potential to protect our minds against cognitive decline in older age). I also used my research to provide a sprinkling of evidence-based advice for those who are trying to improve a skill, study for an exam or simply remember things better.
Writing the book was a great opportunity for me to dig into the research on brain training. It is a topic I’d always meant to investigate properly, but hadn’t gotten around to. The claims of those pushing commercial brain training products always seemed suspicious, but the general idea – that our brains change based on practice and experience – seemed plausible. In fact, this idea has been one of the major trends of the last fifty years of neuroscience research. It has been a big surprise to neuroscientists as experiment after experiment has shown exactly how malleable (aka ‘plastic’) the structure and function of the brain is. The resolution of this paradox, between the general plausibility of brain training and my suspicion of specific products, lies in the vital issue of control groups. Although experience changes our brains, and although it is now beyond doubt that a physically and mentally active life can prevent cognitive decline across the lifespan, it isn’t at all clear what kinds of activities are necessary or essential for general mental sharpness. Sure, after practicing something you’ll get better at it. And doing something is better than doing nothing, but the crucial question is: is doing something you pay for better than doing something else that is free? The holy grail of brain training would be a simple task which you could practice (and copyright! and sell!!) and which would have benefits for all mental skills. Nobody has shown that such a task or set of tasks exists, so while you could buy a puzzle book, you could also go for a jog or go to the theatre with friends. Science wouldn’t be able to say for certain which activity would have the most benefits for your mental sharpness as an individual – although the smart money is probably on going jogging. It is to the credit of the editors at the Rough Guides that they let me say this in the introduction to the Rough Guide to Brain Training!
There wasn’t room in the book for all the references I used while writing it. This was a great sadness to me, since I believe that unless you include the references for a claim, you’re just spouting off, relying on a dubious authority, rather than really talking about science. So, to make up for this, and by way of an apology, I’ve put the references here. It will be harder to track specific claims from this general list than it would be with in-text citations, so if you do have a query, please get in touch and I promise I will point you to the evidence for any claims I make in the book.
Additionally, I’ll be posting here a few things from the cutting room floor – text that I wrote for the book which didn’t make it into the final draft. Watch out for those, and if you do get your hands on a copy of the Rough Guide to Brain Training, get in touch and let me know what you think.
Amazon link (only £5.24!)
Scientific references and links used in researching the book
Cross-posted at mindhacks.com
December 13th, 2009 § § permalink
Last night I had two dreams in which I was being chased (once by a Tour de France cyclist in Venice, once by a giant snake in a field, since you ask). I was thinking that being-chased dreams are probably my brain rehearsing escape behaviours – a night-time training programme built in by evolution. Thinking more on it, I realised that I have never had a chasing dream, only being-chased dreams. Is this because being-chased is more adaptive to rehearse, or because of something peculiar to my idiosyncratic psychology? Let’s find out: please vote using the poll below.
November 26th, 2009 § § permalink
Inspired by badscience:
November 21st, 2009 § § permalink
Reprint of the text from my article in Prospect magazine, 4th July 2009, Issue 160
If someone tells you something that isn’t true, they may not be lying. At least not in the conventional sense. Confabulation, a rare disorder resulting from severe brain damage, causes its sufferers to relentlessly invent and believe fictions—both mundane and fantastical—about their lives. If asked where she has just been, a patient might say that she was in the laundry room (when she wasn’t) or that she’s been visiting Scotland with her sister (who’s been dead for 20 years), or even that she isn’t in the room where you’re talking to her, but in one exactly like it, further down the corridor. And could you fetch her hand cream please? These stories aren’t maintained for long periods, but are sincerely believed.
While it only affects a tiny minority of those with brain damage, confabulation tells us something important: that spontaneous, fluid, even riotous creativity is a natural part of the design of the mind. The damage associated with confabulation—usually to the frontal lobes—adds nothing to the brain’s makeup. Instead it releases a capacity for fiction that lies dormant inside all of us. Anyone who has seen children at play knows that the desire to make up stories is deeply embedded in human nature. And it can be cultivated too, most clearly by anarchic improvisers like Paul Merton.
Chris Harvey John taught me “improv” at London’s Spontaneity Shop. He can step on stage in front of 200 people to perform a totally unscripted hour-long show. There’ll be no rehearsal, no discussion of characters or plot. Instead, he and the other actors invent a play from scratch, based entirely on their unplanned reactions to each other. This seemingly effortless, throwaway attitude is the opposite of what we normally assume about the creative process: that it is hard work. Artists are often talked about in reverent, mystical tones. Art does connect with deep and mysterious human forces, but that doesn’t mean it is only available to a select few who, through luck or special training, are allowed to invent things.
Psychological research increasingly shows that inventiveness is fundamental to the normal operation of the mind. Aikaterini Fotopoulou is a research psychologist at the Institute of Cognitive Neuroscience, London, who specialises in confabulation. She regards it as a failure of the psychological mechanisms responsible for memory. “These inventions are really memory constructions,” she says. “When people confabulate they are failing to check the origin of the material that they build into their memories. You or I can usually tell the difference between a memory of something we’ve done and a memory of something we’ve just heard about, and distinguish both from stray thoughts or hopes. Confabulators can’t do this. Material that, for emotional or other reasons, comes to mind can at times be indiscriminately assumed to be a memory of what really happened.”
There’s a clue to confabulation in the responses of other patients with damage to the frontal lobes. These patients, who may have suffered violent head injuries or damage from illnesses such as strokes or Alzheimer’s, don’t necessarily confabulate but will often have problems with planning and motivation. They can seem heavily dependent on their external environment. Some, for example, indiscriminately respond to the things they see, regardless of whether it is appropriate in the context. The French psychiatrist Lhermitte demonstrated this “environmental dependency” in the 1980s when he laid a syringe on a table in front of a patient with frontal lobe damage and then turned around and took down his trousers. Without hesitation the patient injected him in the buttocks. This was a completely inappropriate action for the patient, but in terms of the possible actions made available by the scene in front of her, it was the obvious thing to do.
In those patients with frontal damage who do confabulate, however, the brain injury makes them rely on their internal memories—their thoughts and wishes—rather than true memories. This is of course dysfunctional, but it is also creative in some of the ways that make improvisation so funny: producing an odd mix of the mundane and impossible. When a patient who claims to be 20 years old is asked why she looks about 50, she replies that she was pushed into a ditch by her brothers and landed on her face. Asked about his good mood, another patient called Harry explains that the president visited him at his office yesterday. The president wanted to talk politics, but Harry preferred to talk golf. They had a good chat.
Improvisers tap into these same creative powers, but in a controlled way. They learn to cultivate a “dual mind,” part of which doesn’t plan or discriminate and thus unleashes its inventive powers, while the other part maintains a higher level monitoring of the situation, looking out for opportunities to develop the narrative.
Both improvisation and confabulation show that the mind is inherently sense-making. Just as a confabulator is unfortunately driven to invent possible stories from the fragments of their memories and thoughts, so an improviser looks at the elements of a scene and lets their unconscious mind provide them with possible actions that can make sense of it. On stage, this allows them to create entrancing stories. But this capacity for invention is inside all of us. As audience or performers, we are all constantly inventing.
June 14th, 2009 § § permalink
For what it is worth twitter.com/tomstafford
I’m still trying to work out what it is good for
April 26th, 2009 § § permalink
The Emotional Cartography book launch was on Friday and went off a treat. Since I had, unusually for me, planned my talk by writing it out in full, I have reproduced it below. This is more-or-less what I said:
There is a saying that those who want to enjoy laws and sausages should not find out how they are made. I think the same is true about facts. Or rather I think that anyone who wants to believe in simple honest facts, objective lumps of knowledge which are true and eternal, ought to stay away from the places where facts are produced. When you see how facts are made you ought to gain, I believe, a healthy scepticism about how they are used.
I am a scientist — an experimental psychologist — and I work in a University. In the University, in the Faculty of Science, we like to think we are the factory of facts. Yet, it still surprises me that some of my colleagues still believe in simple honest facts, even after years in the workrooms squeezing the meat of messy reality into the offal tubes of truth. Many times I’ve been faced down by these colleagues who refuse to believe that some complex social or political dilemma is really problematic. “Just find out the facts” they say. “When we know the facts, what to do — about schools, Israel-Palestine, whatever — will become obvious.”
Curious that anyone who has seen facts being made can still believe that on their own they’ll help!
I’m reminded of another quote, this one by Matt Cartmill, Professor of Biology at Duke University. He said:
“As an adolescent I aspired to lasting fame, I craved factual certainty, and I thirsted for a meaningful vision of human life – so I became a scientist. This is like becoming an archbishop so that you can meet girls.”
Now don’t get me wrong, facts do exist. We look to the stars and ask if things can be known — can things be known? And things can be known. There is right and wrong, true and untrue. Facts, in this sense, do exist. But they aren’t enough for a balanced intellectual diet.
I think facts are seductive because they take a lot of technical skill to produce. If you want to make even a basic truth which will hang together long enough to survive being passed around, you need a lot of disciplinary training. You need expensive and complex measuring equipment, you need esoteric statistical techniques and you need to make the right comparisons. All this takes time and money and a lot of discipline-specific experience. No wonder scientists are proud of their facts, and the facts themselves invoke some envy and respect.
The problem is that facts always — always! — come with a set of presumptions, they always come along with a view of the world that they promote.
If you’ve read Thomas Pynchon’s ‘Gravity’s Rainbow’, his sprawling riotous novel of wartime paranoia, you might know one of his Proverbs for Paranoids: “If you can get them asking the wrong questions, you don’t have to worry about the answers.”
To stop this getting too abstract, and to bring it back to the book that we’re gathered here to launch, let’s talk about maps.
Everyone interested in maps should read Denis Wood’s “The Power of Maps”. I believe this so strongly I have forced myself to only say this once, so that was it.
I read this book while I was working on a thing called the Sheffield Greenmap. Greenmaps are community mapping projects designed to mark environmental and community sites in a local area. Denis Wood talks about the selective accuracy of maps, how they show one part of the world, and can seduce you into thinking (because of the professionalism of their accuracy) that their representation is the way to view the world. But, he said, “Accuracy is not the issue. Selectivity is the issue”. Perhaps because I was involved with the Sheffield Greenmap project, these words of his resonated with me very strongly. Maps are choices about how to view the world. When I looked at maps of my hometown, maps which I would happily point to and say “this is Sheffield”, I saw the one-way roads marked, the petrol stations. The base map I used for the greenmap of the area around the University was the University’s own map. Running up the hill from the University in Sheffield is a road with parks on either side of it. When I came to look at the University map, I noticed, for the first time, that only one of the parks was marked. The road, you see, was also a socio-demographic division between the leafy suburb of the university and the estates next door. For the University, the park next to the estates didn’t exist (or at least not as a place for students and University visitors to go).
The greenmap project made me look at the maps of Sheffield and ask why what was on them was on them. Why the roads and the petrol stations, why not the scenic routes through the park, the community cafes and the places you could lock your bike up?
Accuracy wasn’t the issue. Selectivity was.
Wood discusses the ‘general purpose’ map as a particularly insidious example of interests using maps to mask their interests. The generality, the lack of explicit purpose, in a map disguises that it represents the end of a careful and directed process of selection. Like the scientific facts, the beauty of the technical process can blind you to the bias inherent in the construction.
I was invited to contribute to Emotional Cartography because of Mind Hacks, a book I wrote with Matt Webb and a few valiant contributors in 2004. Mind Hacks is a collection of 100 do-it-yourself experiments that you can try at home and which reveal something about the moment-to-moment workings of your mind and brain. Our ambition with the book was to perform a smash and grab on the goodies of cognitive neuroscience, to open source some of the fascinating science that has been done, turning it into demonstrations which anyone could try. We wanted to make some of what was known about the mind available to be re-purposed by other people. So designers, artists, educators and whoever could notice and reuse various principles of how our minds make sense of the world.
Lately I’ve come to think — and this was inspired by writing the chapter in Emotional Cartography — that the view of the mind we took in Mind Hacks was limited. And this limitation reflects that of academic psychology as a whole. We focussed on the perceptions, thoughts and feelings of isolated individuals, rather than of people in their full social context, in interaction with others.
This is why I’m excited about Emotional Cartography. It takes the idea of open-sourcing the production and consumption of facts to the social level.
First, emotional cartography. We’re a visuo-spatial species. We love sights, spaces, exploring with our eyes. We reify this prioritisation into maps, which are themselves inherently visuo-spatial. If you believe that maps are a kind of technology for thinking with, which I do (my chapter in the book is about this), then this in turn makes it easier to think about the kinds of things which are easiest to show on a map. Maps of physical space then make this selection bias invisible, by pretending to be natural.
Emotional Cartography makes another kind of information mappable, and this opens up the space to think with and debate about what that mapping makes explicit. For example: why are people anxious or excited in this place? Is that something we should do something about?
The other reason I’m excited about emotional cartography is because, truly, it opens up a space for emotional cartographies — a refocussing on the process of mapping and remapping. By open-sourcing mapping it allows mapping to be a process rather than a product, and this powerfully opens up space for people to take part in the negotiation of the representation of their own geographical spaces. Rather than one true map of a locale, there are many maps, and these maps can be a medium for the mappers to meet and discuss their feelings, the places where they live, and the interrelation of these two things.
So let me finish by congratulating Christian on his idea and all his hard work which resulted in this book, let me congratulate the other authors for their contributions and let me commend to you the practice of emotional cartography because, as should be obvious by now, in all areas of life, including map making, I believe it is far more satisfying to be a participant than a mere consumer.
April 24th, 2009 § § permalink
March 27th, 2009 § § permalink
March 25th, 2009 § § permalink
March 24th, 2009 § § permalink