perception as the potential for sensation

From O’Regan, J. K., & Noë, A. (2002). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24(5), 939-973:

Particularly interesting is the work being done by Lenay (1997), using an extreme simplification of the echolocation device, in which a blind or blindfolded person has a single photoelectric sensor attached to his or her forefinger, and can scan a simple environment (e.g., consisting of several isolated light sources) by pointing. Every time the photosensor points directly at a light source, the subject hears a beep or feels a vibration. Depending on whether the finger is moved laterally, or in an arc, the subject establishes different types of sensorimotor contingencies: lateral movement allows information about direction to be obtained, movement in an arc centered on the object gives information about depth. Note several interesting facts. First, users of such a device rapidly say that they do not notice vibrations on their skin or hear sounds, rather they “sense” the presence of objects outside of them. Note also that at a given moment during exploration of the environment, subjects may be receiving no beep or vibration whatsoever, and yet “feel” the presence of an object before them. In other words, the experience of perception derives from the potential to obtain changes in sensation, not from the sensations themselves. Note also that the exact nature or body location of the stimulation (beep or vibration) has no bearing on perception of the stimulus – the vibration can be applied on the finger or anywhere else on the body. This again shows that what is important is the sensorimotor invariance structure of the changes in sensation, not the sensation itself.

Lenay, C. (1997) Le mouvement des boucles sensori-motrices aux représentations cognitives et langagières. Paper presented at the Sixième Ecole d’Eté de l’Association pour la Recherche Cognitive.


Rock climbing hacks! (now with added speculation)

I’m going to tell you about an experience that I often have rock-climbing and then I’m going to offer you some speculation as to the cognitive neuroscience behind it. If you rock-climb I’m sure you’ll find my description familiar. If you’re also into cognitive neuroscience perhaps you can tell me if you think my speculation is plausible.

Rock-climbing is a sort of three-dimensional kinaesthetic puzzle. You’re on the side of a rock wall, and you have to go up (or down) by looking around you for somewhere to move your hands or feet. If you can’t see anything then you’re stuck and just have to count the seconds before you run out of strength and fall off. What often happens to me when climbing is that I look as hard as I can for a hold to move my hand up to and I see nothing. Nothing I can easily reach, nothing I can nearly reach and not even anything I might reach if I was just a bit taller or if I jumped. I feel utterly stuck and begin to contemplate the imminent defeat of falling off.

But then I remember to look for new footholds.

Sometimes I’ve already had a go at this and haven’t seen anything promising, but in desperation I move one foot to a new hold, perhaps one that is only an inch or so further up the wall. And this is when something magical happens. Although I am now only able to reach an inch further, I can suddenly see a new hold for my hand, something I’m able to grip firmly and use to pull myself to freedom and triumph (or at least somewhere higher up to get stuck). Even though I looked with all my desperation at the wall above me, this hold remained completely invisible until I moved my foot an inch — what a difference that inch made.

Psychologists have something they call affordances (Gibson, 1977, 1986), which are features of the environment which seem to ‘present themselves’ as available for certain actions. Chairs afford being sat on, hammers afford hitting things with. The term captures the observation that there is something very obviously action-orientated about perception. We don’t just see the world, we see the world full of possibilities. And this means that the affordances in the environment aren’t just there, they are there because we have some potential to act (Stoffregen, 2003). If you are frail and afraid of falling then a handrail will look very different from how it looks if you are a skateboarder, or a freerunner. Psychology typically divides the jobs the mind does up into parcels: ‘perception’, (then) ‘decision making’, (then) ‘action’. But if you take the idea of affordances seriously it gives the lie to this neat division. Affordances exist because action (the ‘last’ stage) affects perception (the ‘first’ stage). Can we test this intuition experimentally? Is there really an effect of action on perception? One good example is Oudejans et al. (1996), who asked baseball fielders to judge where a ball would land, either just watching it fall or while running to catch it. A model of the mind that didn’t involve affordances might predict that it would be easier to judge where a ball will land if you were standing still; after all, it’s usually easier to do just one thing rather than two. This, however, would be wrong. The fielders were more accurate in their judgements — perceptual predictions basically — when running to catch the ball, in effect when they could base their judgements on the affordances of the environment produced by their actions, rather than when passively observing the ball.

The connection with my rock-climbing experience is obvious: although I can see the wall ahead, I can only see the holds ahead which are actually within reach. Until I move my foot and bring a hold within range it is effectively invisible to my affordance-biased perception (there’s probably some attentional narrowing occurring due to anxiety about falling off too (Pijpers et al., 2006); so perhaps if I had a ladder and a gin and tonic I might be better at spotting potential holds which were out of reach).

There’s another element which I think is relevant to this story. Recently neuroscientists have discovered that the brain deals differently with perceptions occurring near body parts. They call the area around limbs ‘peripersonal space’ (for a review see Rizzolatti & Matelli, 2003) {footnote}. Surprisingly, this space is malleable, according to what we can affect — when we hold tools the area of peripersonal space expands from our hands to encompass the tools too (Maravita et al., 2003). Lots of research has addressed how sensory inputs from different modalities are integrated to construct our brain’s sense of peripersonal space. One delightful result showed that paying visual attention to an area of skin enhanced touch-perception there. The interaction between vision and touch was so strong that providing subjects with a magnifying glass improved their touch perception even more! (Kennett et al., 2001; discussed in Mind Hacks, hack #58). I couldn’t find any direct evidence that unimodal perceptual accuracy is enhanced in peripersonal space compared to just outside it (if you know of any, please let me know), but how’s this for a reasonable speculation — the same mechanisms which create peripersonal space are those which underlie the perception of affordances in our environment. If peripersonal space is defined as an area of cross-modal integration, and is also malleable according to action-possibilities, it isn’t unreasonable to assume that an action-orientated enhancement of perception will occur within this space.

What does this mean for the rock-climber? Well it explains my experience, whereby holds are ‘invisible’ until they are in reach. This suggests some advice to follow next time you are stuck halfway up a climb: You can’t just look with your eyes, you need to ‘look’ with your whole body; only by putting yourself in different positions will the different possibilities for action become clear.

(references and footnote below the fold)

My intuition is that this is the area around which we feel ‘an aura’ if someone reaches towards us; this is completely unsubstantiated speculation however


Gibson, J. J. (1977). The theory of affordances. In R. E. Shaw & J. Bransford (Eds.), Perceiving, Acting, and Knowing. Hillsdale, NJ: Erlbaum.

Gibson, J. J. (1986). The ecological approach to visual perception. Lawrence Erlbaum Associates Inc, US.

Kennett, S., Taylor-Clarke, M., & Haggard, P. (2001). Noninformative vision improves the spatial resolution of touch in humans, Current Biology, 11(15), 1188-1191.

Maravita, A., Spence, C., & Driver, J. (2003). Multisensory integration and the body schema: close to hand and within reach, Current Biology, 13(13), 531-539.

Oudejans, R. R., Michaels, C. F., Bakker, F. C., & Dolne, M. A. (1996). The relevance of action in perceiving affordances: perception of catchableness of fly balls. Journal of Experimental Psychology: Human Perception and Performance, 22(4), 879-891.

Pijpers, J. R. R., Oudejans, R. R. D., Bakker, F. C., & Beek, P. J. (2006). The role of anxiety in perceiving and realizing affordances, Ecological Psychology, 18(3), 131.

Rizzolatti, G., & Matelli, M. (2003). Two different streams form the dorsal visual system: anatomy and functions, Experimental Brain Research, 153(2), 146-157.

Stoffregen, T. A. (2003). Affordances as properties of the animal-environment system, Ecological Psychology, 15(2), 115-134.

Crossposted at

psychology science

Cognitive Dissonance Reduction

Following on from my earlier post about the way psychologists look at the world, let me tell you a story which I think illustrates very well the tendency academic psychologists have for reductionism. It’s a story about a recent paper on the phenomenon of cognitive dissonance, and about a discussion of that paper by a group of psychologists that I was lucky enough to be part of.

Cognitive Dissonance is a term which describes an uncomfortable feeling we experience when our actions and beliefs are contradictory. For example, we might believe that we are environmentally conscious and responsible citizens, but might take the action of flying to Spain for the weekend. Our beliefs about ourselves seem to be in contradiction with our actions. Leon Festinger, who proposed dissonance theory, suggested that in situations like this we are motivated to reduce dissonance by adjusting our beliefs to be in line with our actions.

Obviously after-the-event it is a little too late to adjust our actions, so our beliefs are the only remaining point of movement. In the flying to Spain example you might be motivated by cognitive dissonance to change what you believe about flying: maybe you come to believe that flying isn’t actually that bad for the environment, or that focussing on personal choices isn’t the best way to understand environmental problems, or you could even go all the way and decide that you’re not an environmentally responsible person.

The classic experiment of dissonance theory involved recruiting male students to take part in a crushingly boring experiment. The boring part was an hour of two trivial tasks — loading spools into a tray, turning pegs a quarter-turn in a peg-board. At the end of this, after the students thought the experiment was over, came the interesting part for us. The students were offered either $1 or $20 to tell the next participant in the experiment (actually the female accomplice of the experimenter) that the experiment she was about to do was really enjoyable. After telling this lie, the participants were then interviewed about how enjoyable they really found the experiment. What would you expect from this procedure? Now one view would predict that the students paid $20 would enjoy the experiment more. This is certainly what behaviourist psychology would predict — a larger reward should produce a bigger effect (with the effect being a shift from remembering the task as boring, which it was, to remembering it as enjoyable, which getting $20 presumably was). But cognitive dissonance theory suggests that the opposite would happen. Those paid $20 would have no need to change their beliefs about the task. They lied about how enjoyable the task was to the accomplice, something which presumably contradicted their beliefs about themselves as nice and trustworthy people, but they did it for a good reason, the $20. Now consider the group paid only $1. They lied about how enjoyable the task was, but looking around for a reason they cannot find one — what kind of person would lie to an innocent for only $1? So, the theory goes, they would experience dissonance between their actions and their beliefs and reduce this by adjusting their beliefs: they would come to believe that they actually did enjoy the boring task, and this is the reason that they told the accomplice that it was enjoyable. And, in fact, this is what happened.

At this point I want you to notice two things about cognitive dissonance. Firstly, it requires the existence of quite sophisticated mental machinery to operate. Not only do you need to have abstract beliefs about the world and yourself, you need to have some mechanism which detects when these beliefs are in contradiction with each other or with your actions, and which can (unconsciously) selectively adjust beliefs to reduce this contradiction. The second thing to notice is that all this sophisticated mental machinery is postulated to exist from changes in behaviour, it is never directly measured. We don’t have any evidence that the change in attitudes really does result from an uncomfortable internal state (‘dissonance’) or that any such dissonance does result from an unconscious perception of the contradiction between beliefs and actions.

So, to the recent paper and to reductionism. The paper, by Louisa Egan and colleagues at Yale [ref below] is titled ‘The Origins of Cognitive Dissonance’, and represents one kind of reductive strategy that psychologists might employ when considering a theory like cognitive dissonance. The experiments in the paper (summarised here and here) both involved demonstrating cognitive dissonance in two groups which do not have the sophisticated mental machinery normally considered necessary for cognitive dissonance — four-year-old children, and monkeys. The reductionism of the paper, which the authors are quite explicit about, is to show that something like cognitive dissonance can occur in these two groups despite their lack of elaborate explicit beliefs. Unlike the students in Festinger’s classic experiment we can’t suppose that the children or the monkeys have thoughts about their thoughts in the same way that dissonance theory suggests.

To demonstrate this the authors employed an experimental method that could be used with subjects who did not have language, but would still allow them to observe the core phenomenon of dissonance theory — the adjusting of attitudes in line with previous actions. The method worked like this. For each participant — be they a child or a monkey — the experimenters identified three items (stickers for the children, coloured M&M’s for the monkeys) which the participant preferred equally. In other words, if we call the three items A, B and C then the child or monkey liked all of the items the same amount. Then the experimenter forced the participating child or monkey to choose between two of the items (let’s say A and B), so that they only got one. Next the child or monkey was offered a choice between item C and the item they did not choose before. So, if the first choice was between A and B and the participant chose A, then the next choice would be between B and C. What does dissonance theory predict for this kind of situation? Well, originally the three items are equally preferred — that’s how the items are selected. After someone is forced to make a first choice, between A and B, cognitive dissonance supposedly comes into play. The participant now has a reason to adjust their attitudes, and the way they do this is to downgrade their evaluation of the unchosen item. This is known as being happy with what you got: “I must not like B as much, because I chose A”. So on the second choice (B vs C) the participants are more likely to choose C (more likely than chance, and more likely than a control group that goes straight to the ‘second’ choice). This prediction is exactly what the experimenters found, in both children and monkeys, and the startling thing is that this occurred despite the fact that we know that neither group was explicitly talking to themselves in the way I outlined the dissonance theory prediction above (“I must not like B as much…etc”).
Obviously something like cognitive dissonance can be produced by far simpler mental machinery than that usually invoked to explain it, conclude the experimenters. In this way, the paper is a call to reduce the level at which we try to explain cognitive dissonance.
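For concreteness, here is a toy simulation of the paradigm (my own sketch, not anything from the paper). It assumes a bare-bones dissonance mechanism: all three items start equally valued, choices are made on noisily evaluated values, and the unchosen item from the first pair is downgraded by a fixed amount. The `downgrade` and `noise` numbers are invented purely for illustration.

```python
import random

def choose(prefs, options, noise=0.3):
    """Pick whichever option has the higher noisily-evaluated preference."""
    return max(options, key=lambda o: prefs[o] + random.gauss(0, noise))

def trial():
    prefs = {"A": 1.0, "B": 1.0, "C": 1.0}    # equally preferred at the start
    first = choose(prefs, ["A", "B"])          # forced first choice
    unchosen = "B" if first == "A" else "A"
    prefs[unchosen] -= 0.2                     # the assumed dissonance step:
    return choose(prefs, [unchosen, "C"])      # ...downgrade what you rejected

random.seed(1)
n = 10_000
c_rate = sum(trial() == "C" for _ in range(n)) / n
control = sum(choose({"B": 1.0, "C": 1.0}, ["B", "C"]) == "C" for _ in range(n)) / n
print(f"C chosen: {c_rate:.2f} after a forced first choice, vs {control:.2f} in the control")
```

Any mechanism that knocks the rejected item’s value down, however slightly, pushes the second choice towards C, while the control group, which skips the first choice, stays at chance — which is the qualitative pattern the paper reports.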

How far should you go when trying to reduce the level of theory-complexity that is needed to explain something? Psychologists know the answer to this immediately — as far as possible! So when our happy band of psychologists got to discussing the Egan paper it wasn’t long before someone came up with a new suggestion, a further reduction.

What if, it was suggested, there was nothing like dissonance going on in the Egan et al experiments? After all, there was no direct measurement of anxiety or discomfort, so why suppose that dissonance occurred at all — perhaps, if we can come up with a plausible alternative, we can do away with dissonance altogether. Imagine this, see if you find it plausible: all of us, including monkeys and children, possess a very simple cognitive mechanism which saves us energy by remembering our choices and, when similar situations arise, applying our old choices to new situations, thus cutting down on decision time. That sounds plausible, and it would explain the Egan et al results if you accept that the result of the first, A vs B, decision is not just “choosing A” but is also “not choosing B”. So, when you get to the second choice, B vs C, you are more likely to choose C because you are simply re-applying the previous decision of “not choosing B”, rather than performing some complicated re-evaluation of your previously held attitudes à la cognitive dissonance theory.
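This ‘decisional inertia’ suggestion is even easier to caricature in code. In the sketch below (mine, and deliberately extreme) nothing is re-evaluated at all: the only machinery is a memory of past rejections and a rule never to re-pick a rejected item.

```python
import random

def inertia_trial():
    """No attitude change at all: just remember rejections and re-apply them."""
    first = random.choice(["A", "B"])          # equally liked, so a coin flip
    rejected = {"B" if first == "A" else "A"}  # the only thing stored
    unchosen = next(iter(rejected))
    options = [unchosen, "C"]
    viable = [o for o in options if o not in rejected]   # the inertia rule
    return random.choice(viable) if viable else random.choice(options)

random.seed(0)
rate = sum(inertia_trial() == "C" for _ in range(10_000)) / 10_000
print(f"C chosen: {rate:.2f}")
```

A strict rule gives C every single time; applying the rule only with some probability would give anything between chance and certainty, which is closer to what real data would look like. The point is just that the observed bias needs no re-evaluation of attitudes.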

At this point in the discussion the psychologists in the room were feeling pretty pleased with themselves — we’d started out with cognitive dissonance, reduced the level of complexity of mental processes required to explain the phenomenon (the Egan et al result) and then we’d taken things one step further and reduced the complexity of the phenomenon itself. We then discussed how widely the ‘decisional inertia’ reinterpretation could be applied to supposed cognitive dissonance phenomena. Obviously we’d have only been really satisfied with the reinterpretation if it applied more widely than just to this one set of experiments under consideration.

But further treats were in store. What if we could reduce things again, what if we could make even simpler the processes involved? We’d already started to convince ourselves that the experimental results could be produced by simple cognitive processes rather than complex cognitive processes, perhaps we could come up with a theory about how the experimental results can be produced without any cognitive processes at all! Now that would be really reductive.

Here’s what was suggested, not as definitely what was happening, but as a possibility for what could potentially be happening — and remember, if you are sharing a table with reductionists then they will prefer the simple theory by default because it is simpler. You will need to persuade them of the reasons to accept any complex theory before they abandon the simple one. Imagine that there is no change at all going on in the preferences of the monkeys and the children. Instead, imagine — o the simplicity! — that any participant in the experiment merely has a set of existing preferences. These preferences don’t even have to be mental; by preferences all I mean is consistent behaviours towards the items in question (stickers for the children, M&Ms for the monkeys). From here, via a bit of statistical theory, we can produce the experimental result without any recourse to change in preferences, cognitive dissonance or indeed anything mental. Here’s how. Whenever you measure anything you get inaccuracies. This means that your end result reflects two things: the true value and some random ‘noise’ which either raises or lowers the result away from the true value. Now think about the Egan et al experiment. The experimenters picked three items, A, B and C, which the children or monkeys ‘preferred equally’, but what did this mean? It meant only that when the experimenters measured preference their result was the same for items A, B and C. And we know, as statistically-savvy psychologists, that those results don’t reflect the true preferences for A, B and C, but instead reflect the true preferences plus some noise. In reality, we can suppose, the children and monkeys actually do prefer each item differently from the others. Furthermore this might even be how they make their choice. So when they are presented with A vs B and choose A it may be because, on average, they preferred A all along. Now watch closely what happens next.
The experimental participants are given a second choice which depends on their first choice. If at first they chose A over B then the second choice is B vs C. But if they chose B over A then the second choice is A vs C. We know the results: they then choose C more than the unchosen option from the first choice, be it A or B, but now we have another theory as to why this might be. What could be happening is merely that, after the mistaken equivalence of A, B and C, the true preferences of the monkey or child are showing through, and the selective presentation of options on the second choice is making it look like they are changing their preferences in line with dissonance theory. Because the unchosen option from the first choice is more likely to have a lower true preference value (that, after all, may be why it was the unchosen option), it is consequently less likely to be preferred in the second choice, not because preferences have changed, but because it was less preferred all along. In the control condition, where no first choice is presented, there is no selective presentation of A and B and so the effect of the true values for preferences for A and B will tend to average out rather than produce a preferential selection of C.
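This statistical story can also be checked by simulation (again my own sketch, with invented numbers). True preferences are fixed and never change; the experimenter’s screening only sees true value plus measurement noise, so some genuinely unequal triads slip through as ‘equally preferred’; and both choices then simply follow the fixed true preferences.

```python
import random

def artifact_trial(tolerance=0.5):
    """Fixed true preferences plus noisy measurement; no attitude change anywhere."""
    true = {k: random.gauss(0, 1) for k in "ABC"}                # stable preferences
    measured = {k: true[k] + random.gauss(0, 1) for k in "ABC"}  # noisy screening
    if max(measured.values()) - min(measured.values()) > tolerance:
        return None                # triad doesn't look 'equally preferred'; discard it
    first = "A" if true["A"] > true["B"] else "B"                # choice tracks true prefs
    unchosen = "B" if first == "A" else "A"
    return "C" if true["C"] > true[unchosen] else unchosen       # so does the second choice

random.seed(2)
results = [r for r in (artifact_trial() for _ in range(200_000)) if r is not None]
rate = sum(r == "C" for r in results) / len(results)
print(f"C beats the previously unchosen item in {rate:.2f} of trials, with no preference change")
```

The bias falls straight out of the selection: the second choice always pits C against the loser of A vs B, and by symmetry C avoids being the worst of three equally-screened items about 2/3 of the time, while a control that goes straight to B vs C would sit at 1/2.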

Now obviously the next step with this theory would be to test if it is true, and check some details which might suggest how likely this is. Did Egan et al assess the reliability of their initial preference evaluation? Did they test preferences and then re-test them at a later date to see if they were reliable? These and many other things could persuade us that such an explanation might or might not be very likely. The important thing, for now, is that we’ve come up with an explanation that seems as simple as it could possibly be and still explain the experimental results.

For psychologists, reductionism is a value as well as a habit. We seek to use established simple features of the mind to explain as many things as possible before we get carried away with theories which rely on novel or complex possibilities. The reductionist position isn’t the right one in every situation, but it is an essential guiding principle in our investigations of how the mind works.

Link to the earlier post about how psychologists think.

Cross-posted at


Egan L. C., Santos L.R., Bloom P. (2007). The Origins of Cognitive Dissonance: Evidence from Children and Monkeys. Psychological Science, 18, 978-983.

Festinger, L., & Carlsmith, J. M. (1959). Cognitive consequences of forced compliance. Journal of Abnormal and Social Psychology, 58, 203-211.

academic psychology

How do psychologists think?

I believe that the important thing about psychology is the habits of thought it teaches you, not the collection of facts you might learn. I teach on the psychology degree at the University of Sheffield and, sure, facts are important here — facts about experiments, about the theories which prompted them and about the conclusions which people draw from them — but more important are the skills which you acquire during the process of learning the particular set of facts. Skills like finding information and articulating yourself clearly in writing. Those two things are common to all degrees. But lately I’ve been wondering what skills are most emphasised on a psychology degree? And I’ve been thinking that the answer to this is the same as to the question ‘how do psychologists think?’. How does the typical psychologist[*] approach a problem? I’ve been making a list and this is what I’ve got so far:

1. Critical — Psychologists are skeptical, they need to be convinced by evidence that something is true. Their default is disbelief. This relates to…

2. Scholarly — Psychologists want to see references. By including references in your work you do two very important things. Firstly you acknowledge your debt to the community of scholars who have thought about the same things you are writing about, and, secondly, you allow anyone reading your work to go and check the facts for themselves.

3. Reductionist — Psychologists prefer simple explanations to complex ones. Obviously what counts as simple isn’t always straightforward, and depends on what you already believe, but in general psychologists don’t like to believe in new mental processes or phenomena if they can produce explanations using existing processes or phenomena.

I am sure there are others. One of the problems with habits of thought is that you don’t necessarily notice when you have them. Can anyone offer any suggested additions to my inchoate list?

* I’m using the label ‘psychologists’ here to refer to my kind of psychologists — academic psychologists. How and if what I say applies to the other kinds of psychologists (applied, clinical, etc) I’ll leave as an exercise to the reader.

Cross-posted at

psychology science

A primitive darkness creepeth in

Google video of Richard Dawkins railing against the march of unreason here

Apart from the cheap and badly written philosophy of science, did you notice how most of this is psychology – cold reading, the Barnum effect, double-blind controlled trials, probability theory?

Update: Charlie Brooker’s review in the Guardian is hilarious


Cosma Shalizi on IQ

Such a good link it gets a post all to itself!

Part one, Part two


Beat the winter blues the Velten way

The Velten mood induction procedure consists of reading a series of statements which start neutral and get progressively more and more positive, or more and more negative. So you end up with things like “Things look good. Things look great!” for the positive Velten, and “I want to go to sleep and never wake up.” for the negative. When I was writing Mind Hacks I wanted to get hold of the full list of Velten statements, but couldn’t find them. Now, for your education and delight (or despair), I’ve got them and put them here:

positive Velten
negative Velten

The great thing about the Velten is that it really works. I think the correct analogy is to watching a play – you know it is a fiction, the characters can even point out that it is a fiction, yet you are still emotionally involved in the story. (I think this is a fact of fundamental importance to understanding the nature of consciousness.)


something rotten in the statistics of analysis

Wicherts, J. M., Borsboom, D., Kats, J., & Molenaar, D. (2006). The poor availability of psychological research data for reanalysis. American Psychologist, 61(7).

The origin of the present comment lies in a failed attempt to obtain, through e-mailed requests, data reported in 141 empirical articles recently published by the American Psychological Association (APA). Our original aim was to reanalyze these data sets to assess the robustness of the research findings to outliers. We never got that far. In June 2005, we contacted the corresponding author of every article that appeared in the last two 2004 issues of four major APA journals. Because their articles had been published in APA journals, we were certain that all of the authors had signed the APA Certification of Compliance With APA Ethical Principles, which includes the principle on sharing data for reanalysis. Unfortunately, 6 months later, after writing more than 400 e-mails–and sending some corresponding authors detailed descriptions of our study aims, approvals of our ethical committee, signed assurances not to share data with others, and even our full resumes-we ended up with a meager 38 positive reactions and the actual data sets from 64 studies (25.7% of the total number of 249 data sets). This means that 73% of the authors did not share their data. (PsycINFO Database Record (c) 2006 APA, all rights reserved)


look into my eye

This is a picture of the back of one of my eyes:


The light patch in the centre is the blindspot, where the optic nerve fibres gather to exit the retina (on their journey to the rest of the brain). Obviously I was a little disappointed that I couldn’t make out individual rod and cone photoreceptors, like wot I seen in textbooks, but apparently the dark patch off to the left is the fovea, with the darkening being caused by the increased photoreceptor density.


The Costs of Pleasure and the Benefits of Pain

The Opponent-Process Theory of Acquired Motivation holds that, if I’ve got it right, any innate releaser will be habituated to. The removal of the stimulus involves an opposite reaction (for pleasurable stimuli, pain; for painful stimuli, pleasure at their removal). Habituation results in the exaggeration of this opponent-process evoked reaction, and so stimuli which might previously have been avoided have the capacity to become innately rewarding, thus widening the space of stimuli that can reinforce behaviour.

Another feature of the opponent-process theory warrants comment. It is obviously a puritan’s theory. It argues for the existence of psychological mechanisms for the automatic or autonomic control of affect, such that repeated pleasures lose a lot of their pleasantness and make one potentially capable of new sources of suffering; in the same vein, repeated aversive events lose a lot of their unpleasantness and make one potentially capable of new sources of pleasure. The philosophical implications of such a theory should be obvious.
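These dynamics can be sketched as a pair of interacting processes. The code below is my own minimal caricature with made-up constants, not Solomon’s model: a primary a-process that tracks the stimulus directly, and an opposing b-process that charges slowly, leaks slowly, and (the habituation assumption) gains strength with every exposure; felt affect is simply a minus b.

```python
def opponent_process(n_exposures=10, on=50, off=100,
                     gain=0.01, decay=0.98, strengthen=1.2):
    """Leaky-integrator caricature of the opponent-process account."""
    pleasures, afters = [], []
    for _ in range(n_exposures):
        b = 0.0                            # opponent process starts discharged
        affect = []
        for t in range(on + off):
            a = 1.0 if t < on else 0.0     # a-process follows the stimulus exactly
            b = (b + gain * a) * decay     # b charges while the stimulus is on, leaks slowly
            affect.append(a - b)           # net felt affect
        pleasures.append(sum(affect[:on]) / on)  # average pleasure during the stimulus
        afters.append(min(affect))               # depth of the opposite after-state
        gain *= strengthen                       # habituation: b strengthens with use
    return pleasures, afters

pleasures, afters = opponent_process()
print(f"1st exposure: pleasure {pleasures[0]:.2f}, after-state {afters[0]:.2f}")
print(f"10th exposure: pleasure {pleasures[-1]:.2f}, after-state {afters[-1]:.2f}")
```

With repetition the pleasure felt during the stimulus shrinks while the withdrawal-like after-state deepens — exactly the puritanical pattern described above.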

Solomon, Richard L.


the stroop-test and car-crashes



the dreaming brain

From Dreaming and the brain: Toward a cognitive neuroscience of conscious states by J. Allan Hobson, Edward F. Pace-Schott, and Robert Stickgold in Behavioral and Brain Sciences (2000), 23: 793-842

3.1.3. Selective deactivation of the dorsolateral prefrontal cortex in REM sleep.
Relevant to the cognitive deficits in self-reflective awareness, orientation, and memory during dreaming was the H215O PET finding of significant deactivation, in REM, of a vast area of dorsolateral prefrontal cortex (Braun et al. 1997; Maquet et al. 1996). A similar decrease in cerebral blood flow to frontal areas during REM has been noted by Madsen et al. (1991a) using single photon emission computed tomography (SPECT) and by Lovblad et al. (1999) using fMRI. Dorsolateral prefrontal deactivation during REM, however, was not replicated by an FDG PET study (Nofzinger et al. 1997) and this discrepancy, therefore, remains to be clarified by other FDG as well as H215O studies. (A potential cause of this discrepancy arising from differences between FDG and H215O methods is discussed further in sect. Nevertheless, it seems likely that considerable portions of executive and association cortex active in waking may be far less active in REM, leading Braun et al. (1997) to speculate that


questionnaire data

I always promised myself that I’d never do any research involving questionnaires.

Well, times change and we’ve all done things for money which we might not have done otherwise. So I’ve been running these huge postal-questionnaire surveys and gathering hundreds of thousands of data points and wondering what sense can be made of the morass of information.

Why the previous distaste for questionnaires? Well, true to the behaviourist roots which I share with all experimental psychologists, I don’t have a lot of faith that people’s answers to questionnaire questions bear much relation to the thing that we, asking the questions, are interested in. The vagaries of personal interpretation, context, ambiguities in wording, and differences in perspective between researcher and respondent add so much noise – why should I believe that the average response on a particular question reflects anything more than the willingness of the average respondent to tick that part of the response scale on that question?

(By the way, this is a common, useful and potentially unhealthy aspect of the experimental psychologist’s trade: a complete distrust of people’s professed desires and beliefs. Just because they said they’ll vote Labour / chose that job for that reason / are a kind and conscientious person / etc., you don’t actually believe them, do you??)

Anyway, does that mean that my 200,000 data points are a load of junk? It means, at least, that I think most of the survey data reported in the news is a load of junk. 75% of people think this. 2 in 3 people think that. Etc., etc. Junk. So, why not my stuff?

Well, it’s all about statistics and differences. Admittedly the point someone marks on a questionnaire may bear little relation to the thing the question refers to, but we can demonstrate that there is consistency in how people answer certain questions. Further than this, there are systematic differences in how different groups of people answer questions. By looking at differences, we can stop worrying about the response to the questionnaire as an indicator of wider meaning, and focus on the existence of differences between different people’s responses as indicators of wider meaning. Sure, if someone asks “How worried are you about water pollution” then my response is pretty meaningless, whether I indicate 1 (not at all) or 7 (extremely). If I ask 200 people, then the average response is still pretty meaningless. But if I find that the 100 Guardian readers give a statistically higher response than 100 Telegraph readers then that says something about the world. Anyway, maybe this was obvious to the social psychologists all along, but if it was they never told me.
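To make the point concrete, here is a toy simulation (the group means, the noise level, and the reader labels are all invented): the absolute scale positions mean nothing, but a reliable difference between two groups can still be detected with a standard two-sample test.

```python
import math
import random
import statistics

random.seed(42)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def likert(mean, n, sd=1.5):
    """Simulate n noisy 1-7 Likert responses centred on `mean`
    (hypothetical underlying attitude), clipped to the scale."""
    return [min(7, max(1, round(random.gauss(mean, sd)))) for _ in range(n)]

# Assumed (made-up) underlying group attitudes on the same question
guardian = likert(5.0, 100)
telegraph = likert(4.3, 100)

t = welch_t(guardian, telegraph)
# With ~100 per group, |t| > ~1.97 is significant at p < .05
print(f"t = {t:.2f}, reliable difference: {abs(t) > 1.97}")
```

Neither group's average tells you anything about the world on its own; the reliable gap between them is what carries the meaning.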


happiness and desire

In the first place, it seems clear that people’s self-report of how happy they are is a fairly valid measure of their happiness. It correlates highly with the perception of family and friends, and with the incidence of relevant pathologies and behaviours – in short, people who think they are happy also look and act like happy people are supposed to. They tend to be extroverted, they have stable relationships, they live healthy and productive lives. So far so good.

Although, if I were an extrovert with stable relationships and a healthy and productive life, I think I’d be happy too! Any proof that this is causation, not just correlation?

But there might be some interesting downsides as well. For instance, one of the most widely accepted definitions of happiness is that it is a state in which one does not desire anything else. Happy people tend not to value material possessions highly, are less affected by advertising and propaganda, and are not as driven by desire for power and achievement. Why would they be? They are happy already, right? The prospect of a society of happy people should be enough to send shivers down the spine of our productive system, built on ever-escalating consumption, on never-satisfied desire.

Would like to see the references for this. Seems just a little too convenient for the liberal world-view to me. I bet happy people would value their material possessions highly if there was a threat that they would be taken away – likewise I’d be less happy and value material possessions more if I didn’t possess any. I don’t see why you can’t have a kind of happiness which is based on activity (including consumption), rather than on a lack of desires (contentment?).

Will academic psychology be of any help in providing answers to these impending choices?
…Among the things we learned is that people who are engaged in challenging activities with clear goals tend to be happier than those who lead relaxing, pleasurable lives. The less one works just for oneself, the larger the scope of one’s relationships and commitments, the happier a person is likely to be.

This much, I think, is well supported by the evidence. But why can’t shopping be a challenging activity with clear goals? I think Csikszentmihalyi in this paragraph is contradicting the assertion in the paragraph I’ve quoted just before it. True, relationships, commitments and a lack of selfishness suggest that shopping is probably not the best route to happiness, but happy people will still desire stuff, I’m sure.

Mihaly Csikszentmihalyi (2002). The Future of Happiness. In J. Brockman (Ed.), The next fifty years, pp. 85-92. New York: Vintage


do i turn the wheel or does the wheel turn me?

There is a human bias to underestimate the role we play in creating our own circumstances (this is part of the ‘Fundamental Attribution Error’). I wonder also if there is an opposite bias to underestimate the effect that our circumstances have on us. If there is, what is it called?

Either way, I think both (putative) biases can be explained by perceptual selectivity and an adapted mind. It’s easier and more useful to notice how our circumstances affect things than how unchanging aspects of ourselves do. Contrariwise, it’s hard to notice the slow changes that our circumstances work on ourselves.


Stay Free!

My new favourite blog is the blog of Stay Free Magazine, a must for ‘Media criticism, consumer culture, and Brooklyn curiosities’. Top recent posts include this piece about a public art project which involved covering all the adverts and advertisers’ slogans in Vienna for two weeks; this one, on an archive of propaganda music (‘The Happy Listeners Guide to Mind Control’); and this well-put and much-needed bit of commentary on the thesis of Malcolm Gladwell’s new book Blink. That last post led me to this article from Stay Free magazine proper (yes! they have an online archive) about those who study ‘consumer behaviour’, which contains this choice quote:

Funny how such a studied observer of consumer behavior could overlook a pretty basic truth–any company spending that much money, time, and energy on my psyche must not have a product worth buying. That is, my so-called needs only bear such intense scrutiny when the differences between deodorants don’t matter.

(Compare with Gladwell, here)

psychology quotes

Quote #101

Introspective psychology and analytical philosophy of the self, of perception and of will, do not seem to take into account that in any well-made machine one is ignorant of the working of most of the parts – the better they work, the less are we conscious of them. Thus it is very unlikely that introspection will reveal those intermediate processes which are most important

Kenneth Craik, ‘The Nature Of Explanation’ (1943)

psychology quotes

Quotes #94 & #95

Two more from Steven Pinker’s ‘How The Mind Works’:


Each of the major engineering problems solved by the mind is unsolvable without built-in assumptions about the laws that hold in that arena of interaction with the world


Beliefs and desires are the explanatory tools of our own intuitive psychology, and intuitive psychology is still the most useful and complete science of behaviour there is. To predict the vast majority of human acts – going to the refrigerator, getting on the bus, reaching into one’s wallet – you don’t need to crank through a mathematical model, run a computer simulation of a neural network, or hire a professional psychologist; you can just ask your grandmother.


“I say Jung Man, there’s a place you can go”

Anyone who wants to know the human psyche will learn next to nothing from experimental psychology. He would be better advised to abandon exact science, put away his scholar’s gown, bid farewell to his study, and wander with human heart through the world. There in the horrors of prisons, lunatic asylums and hospitals, in drab suburban pubs, in brothels and gambling hells, in the salons of the elegant, the Stock Exchanges, socialist meetings, churches, revivalist gatherings and ecstatic sects, through love and hate, through the experience of passion in every form in his own body, he would reap richer stores of knowledge than text-books a foot thick could give him, and he will know how to doctor the sick with a real knowledge of the human soul.

Carl Jung

The first line is true, the last line is false – everything in between is poetry


what can functional neuroimaging tell the experimental psychologist?

This is perhaps of interest to those of us who worry about such things:

Henson, R. (2005). What can functional neuroimaging tell the experimental psychologist? The Quarterly Journal of Experimental Psychology, 58A(2), 193-233.

I argue here that functional neuroimaging data – which I restrict to the haemodynamic techniques of fMRI and PET – can inform psychological theorizing, provided one assumes a ‘systematic’ function–structure mapping in the brain. In this case, imaging data simply comprise another dependent variable, along with behavioural data, that can be used to test competing theories. In particular, I distinguish two types of inference: function-to-structure deduction and structure-to-function induction. With the former inference, a qualitatively different pattern of activity over the brain under two experimental conditions implies at least one different function associated with changes in the independent variable. With the second type of inference, activity of the same brain region(s) under two conditions implies a common function, possibly not predicted a priori. I illustrate these inferences with imaging studies of recognition memory, short-term memory, and repetition priming. I then consider in greater detail what is meant by a ‘systematic’ function–structure mapping and argue that, particularly for structure-to-function induction, this entails a one-to-one mapping between functional and structural units, although the structural unit may be a network of interacting regions and care must be taken over the appropriate level of functional/structural abstraction. Nonetheless, the assumption of a systematic function–structure mapping is a ‘working hypothesis’ that, in common with other scientific fields, cannot be proved on independent grounds and is probably best evaluated by the success of the enterprise as a whole. I also consider statistical issues such as the definition of a qualitative difference and methodological issues such as the relationship between imaging and behavioural data. I finish by reviewing various objections to neuroimaging, including neophrenology, functionalism, and equipotentiality, and by observing some criticisms of current practice in the imaging literature.

In which this pleasing analogy is noted:

‘the use of functional imaging to understand the brain’ [is like] ‘trying to understand how a car engine works, using only a thermal sensor on a geostationary satellite’ (original source unknown; apologies for plagiarism)

Henson is not convinced. Or to put it another way, he is convinced of the utility of neuroimaging for psychologists. It’s an interesting, and almost conversational, read. I suspect that the ‘systematic function–structure mapping’ assumption is probably like the adaptationist position in evolutionary biology: you can’t prove it, you’re certain it must sometimes be wrong and misleading, but it does useful work for you, so you might as well use it.

One ‘best-practice’ caveat the paper mentions about imaging is

…a minimal requirement for deducing the presence of a different function (F2) is an interaction in which one region shows a reliably greater change in activity across conditions than at least one other region.

Which, I think, is saying: if you have a notional function (which you hope is engaged by your challenge task but not by your baseline task), then you do not demonstrate it (or localise it) just by selecting a region which survived your SPM statistical tests of difference – you’ve only found a region which responds more in this one task. Henson (I think) is saying that you need to include region as a variable in the analysis, and show that some tasks increase activation in region A more than in region B, and vice versa.
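In other words, the evidence you want is a region-by-task interaction, a difference of differences, not a single suprathreshold contrast. A minimal sketch (the region names and activation values are toy numbers of mine, not Henson's):

```python
# Hypothetical mean activations (arbitrary units) for two regions
# under a baseline task and a challenge task.
activation = {
    ("A", "baseline"): 1.0, ("A", "challenge"): 2.5,
    ("B", "baseline"): 1.1, ("B", "challenge"): 1.2,
}

def interaction(act):
    """Difference-of-differences: does region A's task effect exceed
    region B's? In practice this value would be tested for statistical
    reliability; a reliable non-zero value is the region-by-task
    interaction that licenses the functional inference."""
    effect_a = act[("A", "challenge")] - act[("A", "baseline")]
    effect_b = act[("B", "challenge")] - act[("B", "baseline")]
    return effect_a - effect_b

print(f"{interaction(activation):.1f}")  # prints 1.4
```

Showing only that region A passes threshold in the challenge condition would not distinguish "region A implements the function" from "everything responds a bit more in the harder task"; the comparison against region B does.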


the drawing power of crowds

Milgram, Bickman and Berkowitz (1969) found that if one person stood in a Manhattan street gazing at a sixth-floor window, 20% of pedestrians looked up; if five people stood gazing, 80% looked up.

Argyle, M., & Cook, M. (1976). Gaze and Mutual Gaze. New York: Cambridge University Press.

Milgram, S., Bickman, L., & Berkowitz, L. (1969). Note on the drawing power of crowds of different size. Journal of Personality and Social Psychology, 13


for all your classical and operant conditioning needs…

Between about 1920 and 1970 a good proportion of psychology involved studying how animals learn associations – conditioning. A lot of this stuff is still true (for what it is) and relevant, but has never made it into electronic archives. If I ever want to know something about _any_ detail of classical or operant conditioning, no matter how small, I assume it must have been done by someone. Fundamentally, the hypothetical result I’m thinking of will be out there already; I just need to find out where.

There’s a very senior professor in my department. I go and ask him. If he doesn’t know, he refers me to this book:

Mackintosh, N.J. (1974). The Psychology of Animal Learning. Academic Press.


neural process control as internal foraging

Discussing work on the neurobiology of decision mechanisms (in which they find that the signal used to integrate the evidence in favour of a particular action is represented in the same area that is used to represent the action itself), Shadlen & Gold speculate:

This intention-based architecture seems to take the hard work of consciousness away from the homunculus. However, another, equally mysterious mechanism seems to be required. If sensory information flows to circuits where it can exert leverage on intentions, plans, and rules, what controls the flow? Which intentions, plans, and rules are under consideration at any moment? The need for a homunculus has apparently been replaced by the need for a traffic cop.

We speculate that, unlike for the homunculus, we already have insights into the brain mechanisms that serve as traffic cop. These are the same mechanisms that allow an animal to explore its environment; that is, to forage. Foraging is about connecting data in the environment to a prediction of reward through complex behavior (Gallistel, 2000). However, in principle, the mechanisms of foraging, like the mechanisms of decision-making, do not need to be tied to overt behaviors. The same principles that apply to visits to flowers could direct the parietal lobe to query the visual cortex for evidence needed to answer a question about motion. More generally, foraging might be related to the leaps our brains make to replace one percept with another (e.g., binocular rivalry), to escape one behavioral context for another, or to explore new ideas. For cognitive neuroscientists, these ideas inspire research on how reward expectation influences sensory-motor and higher processing in association areas of the brain. For the philosopher of mind, these ideas provide an inkling of how properties of the brain give rise to agency and, perhaps, free will.

Shadlen MN, Gold JI (2004) The neurophysiology of decision-making as a window on cognition. In: The Cognitive Neurosciences, 3rd edition. (Gazzaniga MS, ed): MIT Press.


psychoactive salad

Over at The Straight Dope, a reader writes:

Various health and yoga websites claim that iceberg lettuce contains chemicals similar to laudanum, morphine, or other opiates. There are also reports of people being admitted to hospitals after injecting themselves with lettuce extracts, and papers about smoking lettuce. I have found no information about the chemical constitution of lettuce that mentions morphine or opiates. Are there such things in supermarket lettuce? –Curious Lettuce Eater, via e-mail

Cecil replies:

You’re thinking: How can iceberg lettuce be a drug? It barely qualifies as a food. Little do you know. While the stuff from the supermarket isn’t likely to do much, lettuce generally speaking does contain psychoactive compounds. Enough to get you high? Hard to say. Judging from available evidence, the stuff might do nothing, give you a buzz, or kill you. Here’s what we know:

Read the rest at The Straight Dope


The Tangled Wing

The second edition of Melvin Konner’s The Tangled Wing: Biological Constraints on the Human Spirit is reviewed here, and the book has a website here, which includes the notes Konner used to write the book (yay!) and his afterword about the dangers of sociobiology:

The contents of this book are known to be dangerous.

I do not mean that in the sense that all ideas are potentially dangerous. Specifically, ideas about the biological basis of behavior have encouraged political tendencies and movements later regretted by all decent people and condemned in school histories. Why, then, purvey such ideas?

Because some ideas in behavioral biology are true – among them, to the best of my knowledge, the ones in this book – and the truth is essential to wise action. But that does not mean that these ideas cannot be distorted, nor that evil acts cannot arise from them. I doubt, in fact, that what I say can prevent such distortion. Political and social movements arise from worldly causes, and then seize whatever congenial ideas are at hand. Nonetheless, I am not comfortable in the company of scientists who are content to search for the truth and let the consequences accumulate as they may. I therefore recount here a few passages in the dismal, indeed shameful history of the abuse of behavioral biology, in some of which scientists were willing participants.

(read more)


the ideal christmas presents, for monkeys

My friend Stephen sends me this by email, saying:

‘… it turns out that Psychology CAN sometimes be interesting – who would have thought it?

Searching for inspiration about toys that are “gender neutral” for use in Theory of Mind stories today, I came across the abstract below. It seems that some researchers got a grant to give toys to a bunch of monkeys and see which ones the boy monkeys liked and which ones the girl monkeys liked. It turned out that the boy monkeys preferred the stereotypical boy-toys (a car and ball) and the girl monkeys preferred the stereotypical girl-toys (a doll and a pot), which is one in the eye for theories that say boys and girls are only different because we socialise them that way.

P.S. Is it just me, or is anything that involves monkeys (even when it’s psychology) inherently brilliant?’

And the paper is:

Alexander, G. M., & Hines, M. (2002). Sex differences in response to children’s toys in nonhuman primates (Cercopithecus aethiops sabaeus). Evolution and Human Behavior, 23(6), 467-479.
Abstract: Sex differences in children’s toy preferences are thought by many to arise from gender socialization. However, evidence from patients with endocrine disorders suggests that biological factors during early development (e.g., levels of androgens) are influential. In this study, we found that vervet monkeys (Cercopithecus aethiops sabaeus) show sex differences in toy preferences similar to those documented previously in children. The percent of contact time with toys typically preferred by boys (a car and a ball) was greater in male vervets (n=33) than in female vervets (n=30) (P<.05), whereas the percent of contact time with toys typically preferred by girls (a doll and a pot) was greater in female vervets than in male vervets (P<.01). In contrast, contact time with toys preferred equally by boys and girls (a picture book and a stuffed dog) was comparable in male and female vervets. The results suggest that sexually differentiated object preferences arose early in human evolution, prior to the emergence of a distinct hominid lineage. This implies that sexually dimorphic preferences for features (e.g., color, shape, movement) may have evolved from differential selection pressures based on the different behavioral roles of males and females, and that evolved object feature preferences may contribute to present day sexually dimorphic toy preferences in children.

I emailed him about blogging it and he said

‘After all, just because there might be something about stereotypical boy-toys that inclines boys towards them (and the same for girls with girl-toys) doesn’t mean we ought to encourage and reinforce gender differences through socialisation – I don’t want to promote some kind of wicked is-to-ought fallacy here!’


My research interests

I’ve just updated my page on the Adaptive Behaviour Research Group webpages at the University of Sheffield (here). It seems that, like scars, I have accumulated a collection of interests within psychology. My A-level history teacher was right, I am an intellectual butterfly, flitting from topic to topic (I’m still hoping that she meant it in a nice way).

My (updated) research interests

  • The cognitive neuroscience of decisions; the interaction of cortical and subcortical sites of attentional and action selection; response selection in the Stroop task
  • Learning theory approaches to attitude psychology
  • Social attitudes; social networks & social influence
  • Philosophy of science; especially the role(s) of computational modelling in psychology

    Currently, I am working with SUBR:IM (Sustainable Urban Brownfield Regeneration: Integrated Management), looking at residents’ perceptions of risk from pollution, their feelings of trust in government behaviour, and their desires for the long-term management of brownfield sites.


Why Susie Sells Seashells by the Seashore

More attacks on the notion of deliberate agency. Again, emphasis in the abstract is mine. Looking at the data, the effects are far larger than I expected them to be (but I expected them to be pretty small).

Makes me wonder if my motives for liking Sheffield are anything to do with my surname (but not for long).

Why Susie Sells Seashells by the Seashore: Implicit Egotism and Major Life Decisions

Brett W. Pelham, Matthew C. Mirenberg, and John T. Jones

Because most people possess positive associations about themselves, most people prefer things that are connected to the self (e.g., the letters in one’s name). The authors refer to such preferences as implicit egotism. Ten studies assessed the role of implicit egotism in 2 major life decisions: where people choose to live and what people choose to do for a living. Studies 1-5 showed that people are disproportionately likely to live in places whose names resemble their own first or last names (e.g., people named Louis are disproportionately likely to live in St. Louis). Study 6 extended this finding to birthday number preferences. People were disproportionately likely to live in cities whose names began with their birthday numbers (e.g., Two Harbors, MN). Studies 7-10 suggested that people disproportionately choose careers whose labels resemble their names (e.g., people named Dennis or Denise are overrepresented among dentists). Implicit egotism appears to influence major life decisions. This idea stands in sharp contrast to many models of rational choice and attests to the importance of understanding implicit beliefs.


different languages, different dyslexias

Readers of Chinese use different parts of the brain from readers of English, write Brian Butterworth and Joey Tang

This Guardian article is interesting in its own right – the different phonological and visuo-spatial requirements of reading English vs reading Chinese, ‘Chinese dyslexia may be caused by a different genetic anomaly than English dyslexia’, etc. – and also because it is an example of two scientists turning their research into popular news form themselves – bravo them, and bravo the Guardian for letting them do it.