Reed fails to sell arms fairs

Times Online, December 30th 2007:

The professional publisher Reed Elsevier has failed to sell off its controversial arms fairs by the end of this year as planned.

The tiny but highly profitable division was put up for sale in June after key customers and authors took offence at Reed’s involvement in shows such as DSEi (Defence Systems & Equipment International), London’s main arms fair, where some exhibitors were ejected this year for trying to promote leg irons.

Sir Crispin Davis, Reed’s chief executive, was criticised at last year’s annual meeting by antiwar campaigners. F&C Asset Management and the Joseph Rowntree Charitable Trust also sold their shares in the company in protest.

Bids for the division, which includes the Abu Dhabi Idex fair, came in at close to £30m, but failed to progress. The sale is being handled by PricewaterhouseCoopers.

Reed said recently that there was “very active interest” in the portfolio.


How the bee got its dance

There are only two species that have language: humans, and honeybees. Other animals communicate, but it's only us two that have language. And language means grammar: some abstract structure which conveys meaning according to the arrangement of symbols within that structure.

Our language is vastly more sophisticated than the honeybees'. Their language is something called a waggle dance, which conveys information about a food source between an individual who has returned to the hive from foraging and her fellow workers. She performs her waggle dance, and the length and orientation of different parts of the dance indicate the quality of the food source and its direction in relation to the current position of the sun. The dance has structure, and that structure conveys the meaning of the components within it — it's a language, a primitive language, but still the only thing that looks close to ours in the animal kingdom.

Why is that? Why is the only other grammar found not in a fellow primate, nor even a fellow mammal, but in an insect?

Here’s my theory — language is a system with unparalleled power to communicate information. But this means that it also has unparalleled ability to deceive, which is one of the basic hazards of communication systems. If you can use language to convey a very specific message, you can also use it to make very specific deceptions, for example tricking someone into believing the food is in one direction while you go and enjoy it in another. Because of the unprecedented capacity of language to deceive, it sits at the top of a steep evolutionary mountain. Any species which is evolving language must have some protection against the threat of deception; otherwise the only defense is to ignore language-based communications altogether (in which case you don’t get any benefit, and so language evolution never gets off the ground).

The human defense against deception-through-language is based on our other cognitive abilities — the ability to reason about whom to trust and when to trust them. Co-evolving these capacities with language is one strategy which allows the evolution of language. The honeybees have used another, circumventing the threat of deception by making deception evolutionarily pointless: honeybees in a hive are all genetically identical, so although language inherently contains the capacity to deceive, in honeybees there is no reason for deception to evolve and diminish the benefits of communicating through language.

Update: I was wrong about honeybees being genetically identical, but I don’t think it demolishes the argument. Quoting N.: “Honey bees are not genetically identical. Worker sisters share 3/4 of their genes with each other, but would only share 1/2 their genes with any offspring they might produce, so they can better propagate their genes by helping the queen produce more sisters.” Wikipedia link here


Quote #213

Consciousness is a fascinating but elusive phenomenon; it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written about it.

Stuart Sutherland in the International Dictionary of Psychology (1989)

psychology science

Cognitive Dissonance Reduction

Following on from my earlier post about the way psychologists look at the world, let me tell you a story which I think illustrates very well the tendency academic psychologists have for reductionism. It’s a story about a recent paper on the phenomenon of cognitive dissonance, and about a discussion of that paper by a group of psychologists that I was lucky enough to be part of.

Cognitive Dissonance is a term which describes the uncomfortable feeling we experience when our actions and beliefs are contradictory. For example, we might believe that we are environmentally conscious and responsible citizens, but take the action of flying to Spain for the weekend. Our beliefs about ourselves seem to be in contradiction with our actions. Leon Festinger, who proposed dissonance theory, suggested that in situations like this we are motivated to reduce dissonance by adjusting our beliefs to be in line with our actions.

Obviously after-the-event it is a little too late to adjust our actions, so our beliefs are the only remaining point of movement. In the flying to Spain example you might be motivated by cognitive dissonance to change what you believe about flying: maybe you come to believe that flying isn’t actually that bad for the environment, or that focussing on personal choices isn’t the best way to understand environmental problems, or you could even go all the way and decide that you’re not an environmentally responsible person.

The classic experiment of dissonance theory involved recruiting male students to take part in a crushingly boring experiment. The boring part was an hour of two trivial tasks — loading spools into a tray, and turning pegs a quarter-turn in a peg-board. At the end of this, after the students thought the experiment was over, came the interesting part. The students were offered either $1 or $20 to tell the next participant in the experiment (actually a female accomplice of the experimenter) that the experiment she was about to do was really enjoyable. After telling this lie, the participants were then interviewed about how enjoyable they had really found the experiment.

What would you expect from this procedure? One view would predict that the students paid $20 would report enjoying the experiment more. This is certainly what behaviourist psychology would predict — a larger reward should produce a bigger effect (with the effect being a shift from remembering the task as boring, which it was, to remembering it as enjoyable, which getting $20 presumably was). But cognitive dissonance theory suggests that the opposite would happen. Those paid $20 would have no need to change their beliefs about the task. They lied about how enjoyable the task was to the accomplice, something which presumably contradicted their beliefs about themselves as nice and trustworthy people, but they did it for a good reason: the $20. Now consider the group paid only $1. They lied about how enjoyable the task was, but looking around for a reason they cannot find one — what kind of person would lie to an innocent for only $1? So, the theory goes, they would experience dissonance between their actions and their beliefs and reduce this by adjusting their beliefs: they would come to believe that they actually did enjoy the boring task, and that this was why they told the accomplice it was enjoyable. And, in fact, this is what happened.

At this point I want you to notice two things about cognitive dissonance. Firstly, it requires the existence of quite sophisticated mental machinery to operate. Not only do you need to have abstract beliefs about the world and yourself, you need to have some mechanism which detects when these beliefs are in contradiction with each other or with your actions, and which can (unconsciously) selectively adjust beliefs to reduce this contradiction. The second thing to notice is that all this sophisticated mental machinery is inferred from changes in behaviour; it is never directly measured. We don’t have any evidence that the change in attitudes really does result from an uncomfortable internal state (‘dissonance’), or that any such dissonance results from an unconscious perception of the contradiction between beliefs and actions.

So, to the recent paper and to reductionism. The paper, by Louisa Egan and colleagues at Yale [ref below] is titled ‘The Origins of Cognitive Dissonance’, and represents one kind of reductive strategy that psychologists might employ when considering a theory like cognitive dissonance. The experiments in the paper (summarised here and here) both involved demonstrating cognitive dissonance in two groups which do not have the sophisticated mental machinery normally considered necessary for cognitive dissonance — four-year-old children, and monkeys. The reductionism of the paper, which the authors are quite explicit about, is to show that something like cognitive dissonance can occur in these two groups despite their lack of elaborate explicit beliefs. Unlike the students in Festinger’s classic experiment, we can’t suppose that the children or the monkeys have thoughts about their thoughts in the way that dissonance theory suggests.

To demonstrate this the authors employed an experimental method that could be used with subjects who do not have language, but which would still let them observe the core phenomenon of dissonance theory — the adjusting of attitudes in line with previous actions. The method worked like this. For each participant — be they a child or a monkey — the experimenters identified three items (stickers for the children, coloured M&M’s for the monkeys) which the participant preferred equally. In other words, if we call the three items A, B and C, then the child or monkey liked all of the items the same amount. Then the experimenter forced the participating child or monkey to choose between two of the items (let’s say A and B), so that they only got one. Next the child or monkey was offered a choice between item C and the item they did not choose before. So, if the first choice was between A and B and the participant chose A, then the next choice would be between B and C.

What does dissonance theory predict for this kind of situation? Well, originally the three items are equally preferred — that’s how the items were selected. After someone is forced to make a first choice, between A and B, cognitive dissonance supposedly comes into play. The participant now has a reason to adjust their attitudes, and the way they do this is to downgrade their evaluation of the unchosen item. This is known as liking what you got, or “I must not like B as much, because I chose A”. So on the second choice (B vs C) the participants are more likely to choose C (more likely than chance, and more likely than a control group that goes straight to the ‘second’ choice). This prediction is exactly what the experimenters found, in both children and monkeys, and the startling thing is that this occurred despite the fact that we know neither group was explicitly talking to themselves in the way I outlined the dissonance theory prediction above (“I must not prefer B as much… etc”). Obviously something like cognitive dissonance can be produced by far simpler mental machinery than that usually invoked to explain it, conclude the experimenters. In this way, the paper is a call to reduce the level at which we try to explain cognitive dissonance.

How far should you go when trying to reduce the level of theory-complexity that is needed to explain something? Psychologists know the answer to this immediately — as far as possible! So when our happy band of psychologists got to discussing the Egan paper it wasn’t long before someone came up with a new suggestion, a further reduction.

What if, it was suggested, there was nothing like dissonance going on in the Egan et al experiments? After all, there was no direct measurement of anxiety or discomfort, so why suppose that dissonance occurred at all? Perhaps, if we can come up with a plausible alternative, we can do away with dissonance altogether. Imagine this, and see if you find it plausible: all of us, monkeys and children included, possess a very simple cognitive mechanism which saves us energy by remembering our choices and, when similar situations arise, re-applying our old choices to new situations, thus cutting down on decision time. That sounds plausible, and it would explain the Egan et al results if you accept that the result of the first, A vs B, decision is not just “choosing A” but also “not choosing B”. So, when you get to the second choice, B vs C, you are more likely to choose C because you are simply re-applying the previous decision of “not choosing B”, rather than performing some complicated re-evaluation of your previously held attitudes à la cognitive dissonance theory.
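
This decisional-inertia idea is easy to sketch in code. The little simulation below is my own illustration, not anything from the paper, and the `inertia_strength` parameter is an invented free parameter: an agent that merely caches its rejections and re-applies them some of the time will choose C more often than chance on the second choice, with no re-evaluation of preferences at all.

```python
import random

def second_choice(inertia_strength=0.6, rng=random):
    """One simulated participant under a pure decisional-inertia rule.

    No preferences change at all: the agent caches "not that one" decisions
    and re-applies them with probability `inertia_strength` (a hypothetical
    parameter); otherwise it chooses at random.
    """
    # First choice, A vs B: the items are equally preferred, so it's a coin flip.
    first = rng.choice(["A", "B"])
    rejected = "B" if first == "A" else "A"
    # Second choice: the previously rejected item vs C.
    if rng.random() < inertia_strength:
        return "C"  # re-apply the cached rejection of the other item
    return rng.choice([rejected, "C"])  # otherwise a fresh coin flip

def rate_choosing_c(n=100_000, seed=1, **kwargs):
    """Fraction of simulated participants who pick C on the second choice."""
    rng = random.Random(seed)
    return sum(second_choice(rng=rng, **kwargs) == "C" for _ in range(n)) / n
```

With `inertia_strength=0.6` the simulated participants choose C about 80% of the time, and with it set to 0 (no cached decisions, like the control group) they choose C at exactly chance. That is the dissonance-like pattern, produced without any attitude change.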

At this point in the discussion the psychologists in the room were feeling pretty pleased with themselves — we’d started out with cognitive dissonance, reduced the level of complexity of the mental processes required to explain the phenomenon (the Egan et al result), and then taken things one step further and reduced the complexity of the phenomenon itself. We then discussed how widely the ‘decisional inertia’ reinterpretation could be applied to supposed cognitive dissonance phenomena. Obviously we’d only be really satisfied with the reinterpretation if it applied more widely than just to the one set of experiments under consideration.

But further treats were in store. What if we could reduce things again; what if we could make the processes involved even simpler? We’d already started to convince ourselves that the experimental results could be produced by simple cognitive processes rather than complex ones; perhaps we could come up with a theory about how the experimental results could be produced without any cognitive processes at all! Now that would be really reductive.

Here’s what was suggested — not as definitely what was happening, but as a possibility for what could be happening. And remember, if you are sharing a table with reductionists then they will prefer the simple theory by default, because it is simpler; you will need to give them reasons to accept any complex theory before they abandon the simple one. Imagine that there is no change at all going on in the preferences of the monkeys and the children. Instead, imagine — o the simplicity! — that any participant in the experiment merely has a set of existing preferences. These preferences don’t even have to be mental; by preferences all I mean are consistent behaviours towards the items in question (stickers for the children, M&M’s for the monkeys). From here, via a bit of statistical theory, we can produce the experimental result without any recourse to changes in preference, cognitive dissonance or indeed anything mental.

Here’s how. Whenever you measure anything you get inaccuracies. This means that your end result reflects two things: the true value, plus some random ‘noise’ which either raises or lowers the result away from the true value. Now think about the Egan et al experiment. The experimenters picked three items, A, B and C, which the children or monkeys ‘preferred equally’ — but what did this mean? It meant only that when the experimenters measured preference, their result was the same for items A, B and C. And we know, as statistically savvy psychologists, that those results don’t reflect the true preferences for A, B and C, but instead reflect the true preferences plus some noise. In reality, we can suppose, the children and monkeys actually do prefer each item differently from the others. Furthermore, this might even be how they make their choices: when they are presented with A vs B and choose A, it may be because, on average, they preferred A all along. Now watch what happens next.
The experimental participants are given a second choice which depends on their first choice. If at first they chose A over B then the second choice is B vs C; but if they chose B over A then the second choice is A vs C. We know the results: they then choose C more often than the unchosen option from the first choice, be it A or B, but now we have another theory as to why this might be. What could be happening is merely that, after the mistaken equivalence of A, B and C, the true preferences of the monkey or child are showing through, and the selective presentation of options on the second choice makes it look as if they are changing their preferences in line with dissonance theory. Because the unchosen option from the first choice is likely to have a lower true preference value (that, after all, may be why it was the unchosen option), it is consequently less likely to be preferred in the second choice — not because preferences have changed, but because it was less preferred all along. In the control condition, where no first choice is presented, there is no selective presentation of A and B, and so the effects of the true preference values for A and B tend to average out rather than produce a preferential selection of C.
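
This purely statistical account can also be checked by simulation. In the sketch below (again my own illustration, not from the paper), each simulated participant has fixed, unequal true preferences that never change; the items only looked equally preferred because of measurement noise. The selective second choice alone produces the "dissonance" result:

```python
import random

def simulate(n=100_000, seed=0):
    """Fixed true preferences plus a selective second choice; nothing changes."""
    rng = random.Random(seed)
    chose_c_experimental = 0
    chose_c_control = 0
    for _ in range(n):
        # True preferences differ, even though the noisy measurement made
        # A, B and C appear equally preferred when the items were selected.
        pref_a, pref_b, pref_c = rng.random(), rng.random(), rng.random()
        # Experimental group: the first choice (A vs B) follows true
        # preference, and its loser is then pitted against C.
        loser = min(pref_a, pref_b)
        chose_c_experimental += pref_c > loser
        # Control group: straight to a B vs C choice, no first choice.
        chose_c_control += pref_c > pref_b
    return chose_c_experimental / n, chose_c_control / n
```

The experimental group chooses C about two-thirds of the time, because C only has to beat the worse of A and B, and the chance that one of three independent draws exceeds the minimum of the other two is 2/3. The control group chooses C at chance. That is exactly the pattern Egan et al report, with nothing mental changing at all.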

Now obviously the next step with this theory would be to test whether it is true, and to check some details which might suggest how likely it is. Did Egan et al assess the reliability of their initial preference evaluation? Did they test preferences and then re-test them at a later date to see if they were stable? These and many other things could persuade us that such an explanation is more or less likely. The important thing, for now, is that we’ve come up with an explanation that seems as simple as it could possibly be while still explaining the experimental results.

For psychologists, reductionism is a value as well as a habit. We seek to use established simple features of the mind to explain as many things as possible before we get carried away with theories which rely on novel or complex possibilities. The reductionist position isn’t the right one in every situation, but it is an essential guiding principle in our investigations of how the mind works.

Link: earlier post about how psychologists think.

Cross-posted at


Egan L. C., Santos L.R., Bloom P. (2007). The Origins of Cognitive Dissonance: Evidence from Children and Monkeys. Psychological Science, 18, 978-983.

Festinger, L. and Carlsmith, J. M. (1959). Cognitive consequences of forced compliance. Journal of Abnormal and Social Psychology, 58, 203-211.

academic psychology

How do psychologists think?

I believe that the important thing about psychology is the habits of thought it teaches you, not the collection of facts you might learn. I teach on the psychology degree at the University of Sheffield and, sure, facts are important here — facts about experiments, about the theories which prompted them and about the conclusions which people draw from them — but more important are the skills which you acquire during the process of learning the particular set of facts. Skills like finding information and articulating yourself clearly in writing. Those two things are common to all degrees. But lately I’ve been wondering what skills are most emphasised on a psychology degree? And I’ve been thinking that the answer to this is the same as to the question ‘how do psychologists think?’. How does the typical psychologist[*] approach a problem? I’ve been making a list and this is what I’ve got so far:

1. Critical — Psychologists are skeptical: they need to be convinced by evidence that something is true. Their default is disbelief. This relates to…

2. Scholarly — Psychologists want to see references. By including references in your work you do two very important things. Firstly you acknowledge your debt to the community of scholars who have thought about the same things you are writing about, and, secondly, you allow anyone reading your work to go and check the facts for themselves.

3. Reductionist — Psychologists prefer simple explanations to complex ones. Obviously what counts as simple isn’t always straightforward, and depends on what you already believe, but in general psychologists don’t like to believe in new mental processes or phenomena if they can produce explanations using existing processes or phenomena.

I am sure there are others. One of the problems with habits of thought is that you don’t necessarily notice when you have them. Can anyone offer any suggested additions to my inchoate list?

* I’m using the label ‘psychologists’ here to refer to my kind of psychologists — academic psychologists. Whether and how what I say applies to the other kinds of psychologists (applied, clinical, etc.) I’ll leave as an exercise for the reader.

Cross-posted at


We live in small worlds

When I go to the kinds of places that I go to, I tend to see people I know. Sometimes I don’t know the people I meet in these places, but we find out we have mutual friends or acquaintances. Then we laugh and we say “Small world!”. Small worlds are nice. They make a large and sometimes hostile world seem friendly and controllable. Small worlds mean there is a handle on every person I meet — not a stranger, but a friend-of-a-friend I haven’t met yet. I like being surprised when I walk into a pub in a strange city and meet someone I know from university, or go on a demonstration and meet someone I met at a party once. This kind of thing happens so often that it is sometimes really hard to believe that there are six billion individuals on the planet. If that were true, why do I keep bumping into the same five hundred or so?

The answer, of course, is one part our ability to ignore what we aren’t interested in (even people) and another part our ability to move in tremendously limited circles, circumscribed by habit, personality and class. The first factor means that we spend a lot of time not noticing the fifty people in a bar, say, who we don’t know, and far more noticing the one person we do. The second factor means that our choices of location are very, very far from random. The shops we use, the evenings we spend, the paths we walk are patterned, always and pervasively, by the kind of person we are. Because of this patterning, we tend to run into a circuit of people who are like us — people with the same or overlapping orbits of habits, preferences and class.

The two factors combine when we manage to ignore our own influence in determining who we meet ‘by chance’. Even in foreign cities our habits, preferences and class will make selections on where we go, allowing us to almost unconsciously ignore certain options (“that restaurant looks tacky”, “that restaurant looks pretentious”, etc) so that when we arrive in a place that we tell ourselves was completely random, we are in fact already set up to bump into other people who are like ourselves, and hence have those small world meetings that we love so much.

The dark side of the small world is when we allow our choices to colour who we meet so strongly that we stop noticing that other kinds of people exist. If we never meet people with other attitudes or backgrounds then we lose a contrast against which we can gauge our own attitudes and background. Suddenly the whole world is full of people just like us (vegans, libertarians, motorists, whatever). From here it is hard to avoid self-righteousness, and then irrelevance.