Diet choices

Do our individual actions matter when faced with global climate change?

There’s an argument that the collective action required to combat climate change is undermined by focussing on individual consumption choices. Encouraging people to think that responsibility for the environment involves choosing the right kind of teabags, drinking straws or washing powder distracts from the real culprits for emissions (corporations?) and the most effective levers of change (legislation?).

This view is bolstered by the blunt logic that a single individual’s behaviour won’t affect the collective outcome. If I reduce my emissions, but nobody else does, then my efforts will have been in vain. If everybody else reduces their emissions, it similarly wouldn’t matter whether I did too.

Diet seems like a prime example of a highly individualised choice. The idea that each of us can and should choose what to eat, and can do so for personal reasons, based on everything from taste to ethics, is widespread. A plant-based diet is associated with lower emissions than a meat-and-dairy one, so seems appealing to anyone worried about climate and carbon emissions.

But, sings the critical chorus, if you change your diet, are you just allowing yourself to be distracted from the structural causes of climate change, seduced by an illusion that you can solve collective problems through fashionable lifestyle choices?

Dietary choice is also an example of something highly cultural, as well as highly individual. And food culture is changing. The number of people eating plant-based diets is increasing, as are the options for anyone who wants to eat meat- and/or dairy-free. Individual dietary choices take place within this context, and contribute to it.

So here is my question: Knowing what we know about the extent of our carbon emissions, and the reduction required in them, how consequential will individual changes in diet be? Are the ~5% of the UK population who are vegan substantially affecting UK emissions? If not, what percentage would need to be vegan to have a substantial impact?

So, speculating wildly (this is not rhetorical; I am mostly ignorant about actual carbon emission sources and targets), if a timeline of our carbon emissions and target looks like this:

What does the contribution of human diet look like? Something like this?

And what are the possible projections if different proportions of meals are plant-based rather than meat-and-dairy?

The answers interest me because they seem to suggest a bridge between seeing diet as a solely individual choice – and so one which suffers from the brute logic of the collective action problem – and seeing diet as a part of the collective response to climate change. Certainly effective action on climate change requires more government action, but it is also interesting to know how large an effect this particular individual choice could have.
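For a sense of the arithmetic involved, here is a back-of-envelope sketch in Python. To be clear: every number in it (total footprint, diet’s share of it, the saving from going plant-based) is a made-up placeholder of mine, not a sourced figure – the point is the shape of the calculation, not its output.

```python
# Back-of-envelope: how much could diet shift national emissions?
# ALL NUMBERS BELOW ARE ILLUSTRATIVE ASSUMPTIONS, not sourced figures.

UK_TOTAL_MTCO2E = 450.0   # assumed annual UK footprint, MtCO2e
DIET_SHARE = 0.20         # assumed fraction of the footprint due to food
VEGAN_SAVING = 0.50       # assumed relative saving of a plant-based diet
                          # versus a meat-and-dairy one

def national_saving(vegan_fraction):
    """Fractional cut in total emissions if `vegan_fraction` of the
    population switches to a plant-based diet."""
    return DIET_SHARE * VEGAN_SAVING * vegan_fraction

for frac in (0.05, 0.25, 0.50, 1.0):
    cut = national_saving(frac)
    print(f"{frac:>4.0%} vegan -> {cut:.1%} of total emissions "
          f"({cut * UK_TOTAL_MTCO2E:.0f} MtCO2e/yr)")
```

Under these made-up numbers, the current ~5% vegan population trims total emissions by about half a percent – small, but not nothing, and the effect scales linearly with the vegan fraction.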


Christian Reynolds has related work, which shows generational changes in diet, and a corresponding change in associated emissions: The greenhouse gas emission impacts of generational and temporal change on the UK diet

Further clues may be in here Zero Carbon: Rethinking the Future, from CAT.

politics psychology systems

Collective intelligence in twitter discussions

The UCU strike has shown how effective twitter can be. University staff from around the country have shared support, information and analysis. There has been a palpable feeling of collective intelligence at work. When the first negotiated agreement was released (at 7.15 on a Monday evening) my impression was that most people didn’t know what to make of it. I didn’t know what to make of it. Pensions are complex, and the headline feature – retention of a Defined Benefit scheme – seemed positive. Overnight on twitter, sentiment coalesced around the hashtag #NoCapitulation, and at 10am on the Tuesday union members around the country held branch meetings – all 64 of which resoundingly rejected the agreement. The subsequent – substantially improved – offer suggests that this was the right thing for union members to do, and the speed and unanimity with which they did it wouldn’t have been possible without the twitter discussion that happened overnight.

So why, on this occasion, does twitter work as a platform for collective intelligence? Often enough twitter seems to be a platform which supports idiocy, narcissism and partisan bickering. The case of UCU strike twitter contrasts with other high volume / high urgency discussions, such as the aftermath of disasters, where twitter is as likely to be used to spread fake news and political point scoring as it is for useful information and insightful analysis.

Collective intelligence: what helps, what hurts

There is a literature on collective decision making which highlights a few things that need to hold for a group discussion to be more productive than individuals just making up their own minds.

  • Arguments must be exchanged. First off, and a factor which should hearten committed rationalists everywhere, the exchange of arguments – not just information – seems to be key to productive groups (“studies that have manipulated the amount of interaction or that have examined the content of interactions have found that the exchange of arguments is critical for these improvements to occur”, Mercier, 2016).

  • Agreed purpose. Productive groups need to have a shared idea of what they are trying to achieve. If, for example, half of a group like solving problems and half like having arguments, their contributions to the discussion will, sooner or later, push in different directions (van Veelen & Ufkes, 2017; Sperber & Mercier, 2017).

  • Diversity, in viewpoints. The literature on the effect of diversity on collective intelligence is mixed. Too much diversity between participants may hinder group discussions (Woolley et al, 2015), and demographic diversity alone certainly isn’t sufficient for the wisdom of crowds to emerge (de Oliveira & Nisbett, 2018). Instead, what is needed is enough ‘viewpoint diversity’ to produce a cognitive division of labour without impairing group cohesion. A corollary is that the more group cohesion you have, the greater your opportunity to harness group diversity.

Bang & Frith’s fantastic 2017 review of group decision making also highlights some traps which successful group decisions must avoid:

  • Herding. Herding is excessive agreement. This can happen when group members lack independent information or hold overly similar viewpoints. It can also be caused by group members desiring to align with the group for its own sake, or believing that others have better knowledge. The result is the same: an information cascade in which a popular viewpoint attracts adherents because it is popular, thereby appearing more correct and attracting still more adherents, and so on in a vicious circle.

  • Group decision biases. One of these, according to Bang & Frith, is ‘shared information bias’: a bias towards discussing the things everyone already knows rather than sharing new information or discussing aspects of the decision which aren’t yet common to the group.

  • Competing sub-goals. As well as lacking a shared purpose in discussion, group decision making can be derailed by status issues (think showing off, excessive pride preventing admission of error, etc), accountability issues (such as people avoiding unpopular opinions if they will be punished should that position turn out to be in error) and ‘social loafing’ (the textbook phenomenon whereby people try less hard in larger groups, effectively free-riding on others’ contributions).
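The herding trap can be made concrete with a toy information-cascade model (my construction, not from Bang & Frith). Each player holds one private binary signal and chooses publicly by majority over everything they can see – all previous public choices plus their own signal. Once the first few public choices line up, private signals stop mattering:

```python
def cascade(signals):
    """Each player sees all previous public choices plus one private
    signal, and goes with the majority of that evidence (their own
    signal breaks ties). Returns the list of public choices."""
    choices = []
    for s in signals:
        votes = sum(choices) + s        # how much evidence says "1"
        n = len(choices) + 1            # total pieces of evidence seen
        if votes * 2 > n:
            choices.append(1)
        elif votes * 2 < n:
            choices.append(0)
        else:
            choices.append(s)           # tie: follow own signal
    return choices

# Two early "1"s lock everyone in, whatever their private signals say:
print(cascade([1, 1, 0, 0, 0, 0]))   # [1, 1, 1, 1, 1, 1]
print(cascade([0, 0, 1, 1, 1, 1]))   # [0, 0, 0, 0, 0, 0]
```

The third player’s private “0” is outvoted by the two public “1”s, so she chooses “1” too – and from then on every later player faces an even bigger public majority. That is the vicious circle in miniature.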

The #USSstrike discussion on twitter

Before trying to apply the factors identified in the literature on collective intelligence / group decision making to the #USSstrike, let’s throw up a quick list of factors which seem plausible candidates for why twitter was the site of a productive conversation this time. Once we have a list of candidates, we can see how they map onto the features identified in the literature as necessary conditions for useful group decision making.

So, the #USSstrike twitter conversation may have been productive because:

  • twitter discussion built on top of existing networks (academics have local connections to colleagues at their own institutions, as well as disciplinary connections at other institutions across the country.)

  • twitter discussion built on top of IRL discussions on picket lines (lots of opportunity to chat on picket lines).

  • common interest (participants in the conversation are invested in understanding the issue, and want the same thing – a positive outcome to the dispute – even if they don’t agree on what that actually means).

  • niche interest (most of the population is not that interested in academic pensions, which means fewer trolls, troublemakers and idle speculators).

  • participants have training in critically evaluating sources (i.e. hopefully have good filters for unreliable information, recognise important facts)

  • participants have experience discussing substantive issues in public, using twitter daily – as it is at its best – as a platform for information synthesis and recommendation.

Combining these lists we get some traction on why academic twitter was suddenly able to transform into a vehicle for productive collective intelligence on pensions (and maybe how we can help keep it that way).

In short, our three criteria for productive group decisions were met:

  • Arguments were exchanged: arguments are the daily tools of academics; of course we exchanged arguments, not just information.

  • Our purpose was agreed: the nature of the dispute did that for us. Those in the discussion had a common purpose: to understand an issue with high stakes. Not only do we face the same pension cuts, but the logic of collective bargaining and action puts us all on the same side.

  • Diverse viewpoints were represented: maybe it is less clear this criterion was met, but perhaps we can thank the fact that academics from all disciplines have been discussing the dispute for at least some boost in the diversity of backgrounds and assumptions that participants bring to the discussion.

The three decision traps – herding, bias and competing sub-goals – are all warnings for the future. We seem to have avoided them for the moment, but there are plenty of individual behaviours which can encourage them. Most of us, with notable exceptions, are guilty of some social loafing. Blindly following others (leading to herding) seems a particular risk given that the logic of collective action is an important part of Union identity. I also note that bad manners – such as abusing people who make mistakes or adopt alternative viewpoints – effectively punish viewpoint diversity, with a corresponding decrement in our capacity for collective intelligence.

As a student of decision making, I have found the dispute exhilarating to take part in, and I’ll watch the next rounds (and the corresponding twitter discussion) with interest.

My quick primer on the UCU strike action is here.


Bang, D., & Frith, C. D. (2017). Making better decisions in groups. Royal Society Open Science, 4(8), 170193.

Mercier, H. (2016). The argumentative theory: Predictions and empirical evidence. Trends in Cognitive Sciences, 20(9), 689-700.

de Oliveira, S., & Nisbett, R. E. (2018). Demographically diverse crowds are typically not much wiser than homogeneous crowds. Proceedings of the National Academy of Sciences, 115(9), 2066-2071.

Woolley, A. W., Aggarwal, I., & Malone, T. W. (2015). Collective intelligence and group performance. Current Directions in Psychological Science, 24(6), 420-424.

politics systems

Cognitive Democracy

This leads us to argue that democracy will be better able to solve complex problems than either markets or hierarchy, for two reasons. First, democracy embodies a commitment to political equality that the other two macro-institutions do not. Clearly, actual democracies achieve political equality more or less imperfectly. Yet if we are right, the better a democracy is at achieving political equality, the better it will be, ceteris paribus, at solving complex problems. Second, democratic argument, which people use either to ally with or to attack those with other points of view, is better suited to exposing different perspectives to each other, and hence capturing the benefits of diversity, than either markets or hierarchies.

From ‘Cognitive Democracy‘ by Henry Farrell and Cosma Shalizi (2012, ‘unpublished’ article in preparation)


a post-creole continuum

From wikipedia: “William Stewart, in 1965, proposed the terms acrolect and basilect as sociolinguistic labels for the upper and lower boundaries respectively of a post-creole speech continuum”….
“In certain speech communities, a continuum exists between speakers of a creole language and a related standard language. There are no discrete boundaries between the different varieties and the situation in which such a continuum exists involves considerable social stratification”.

And so:

18 different ways of rendering the phrase “I gave him one” in Guyanese English (from Bell, R.T. (1976), Sociolinguistics: Goals, Approaches, and Problems, Batsford).


idiocy systems

Games which teach kids systems thinking

Procedural thinking may be the 21st century’s most essential yet endangered way of thinking. Of course the best way of teaching it to your kids is to live in the 1980s and buy them a BBC Micro, but that is getting harder and harder in these days of touchscreens and it being 30 years too late. Now children’s games designers Exploit™ have introduced a new range of children’s games for exactly the purpose of teaching procedural thinking skills to your kids. Each game in the new range is designed to be played by children and adults together and involves rules of age-appropriate complexity. Standard play of these games should allow the player with the most foresight and self-control to win most of the time (ie the adult). Within each ruleset, however, is hidden a loop-hole which, if discovered, should allow the unscrupulous player crushing victory after crushing victory. The thrill of discovering and using these loop-holes will train your kids in the vital skills of systems analysis, procedural thinking and game theory. Parents can either play in “carrot” mode, feigning ignorance of each game’s loop-hole and thus allowing their children the joy of discovery; or they can play in “stick” mode, exploiting the loop-hole for their own ends and using their child’s inevitable defeat, amidst cries of “it’s not fair!”, as encouragement for them to engage their own ludic counter-measures.

academic politics science systems

Trust in science

I’ve been listening to the CBC series (2009) “How to Think about Science” (listen here, download here). The first episode starts with Simon Schaffer, co-author of The Leviathan and the Air-Pump. Schaffer argues that scientists are believed because they organise trust well, rather than because they organise skepticism well (which is more in line with the conventional image of science). Far from questioning everything, as we are told science teaches, scientists are successful as experts because of the extended network of organisations, techniques and individuals that allows scientists, in short, to know who to trust.

Schaffer also highlights the social context of the birth of science, focussing on the need for expertise — for something to place trust in — at a time of military, political and ideological conflict. Obviously, our need for certainty is as great in current times.

Understanding of the processes of science, Schaffer asserts, is required for a true understanding of the products of science, and public understanding of both is required for an informed and empowered citizenry.

This last point puts the debate about open access scientific journals in a larger and more urgent perspective. In this view, open access is far more than a merely internal matter to academia, or even merely a simple ethical question (the public fund scientists, so the publications of scientists should be free to the public). Rather, open access is foundational to the institution of trusted knowledge that science (and academia more widely) purports to be. The success of early science lay in establishing the credibility of mechanisms for ‘remote witnessing’ of phenomena. The closed-access publishing system threatens to undermine the credibility of scientific knowledge. Once you recognise that scientific knowledge is not the inviolable product of angelic virtue on the part of science, you concede that the truth of scientific propositions is not enough — we need to take seriously the institutions of trust that allow science to be believed. The status of an expert who cannot be questioned is a flattering one, but it relies on short-term cachet. If we care about science and the value of scholarship more widely then open access publishing is an urgent priority.


intellectual self-defence systems

choice is not preference

There is a beauty to the arrangement whereby a cake is shared by one of us dividing it and the other choosing which part they want. The person dividing doesn’t know which part they’ll get, so they have every incentive to make fair shares. They say that John Rawls took this as inspiration for his philosophy of how a just society should be organised (but I don’t know enough about that).

But the cake-cutting example only works for a world where the cake is homogeneous and the two cake-eaters have identical preferences (in this case, to have as much as possible). Imagine a world where the cake has a fruit half and a nut half, say, and we have two cake-eaters, A and B. A likes fruit and nut equally; she doesn’t care. B is allergic to nuts. Now the game of “one cuts, one chooses” doesn’t work. If A cuts she will slice the cake in half and be happy with whichever half she’s left with, but B better hope that one of A’s halves is entirely fruit, otherwise she’ll be forced to choose between two bits of cake, neither of which she can fully eat. A is at no risk of losing out; B is at substantial risk. If B cuts first, she might consider cutting the cake into a nut half and a fruit half, but then she has to hope A chooses the nut half. Or she might cut the cake into mixed halves and put up with a portion she can’t fully eat (but at least ensuring A only gets half the cake). The game-theoretic solution is probably for B to cut a larger, nut-plus-small-amount-of-fruit, half and a smaller, just-fruit, half. A will choose the larger half. A definitely wins; B loses out.

The solution whereby A and B both have half, and both enjoy their halves equally (ie B gets the fruit half), is simple, but unreachable via this sharing game.
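The cake game can be sketched in a few lines of Python. The utilities and the particular cut proportions below are illustrative choices of mine, not anything canonical:

```python
def u_A(piece):                    # A likes fruit and nut equally
    return piece["fruit"] + piece["nut"]

def u_B(piece):                    # B is allergic to nuts
    return piece["fruit"]

def play(cut, chooser_u):
    """`cut` is a pair of pieces; the chooser takes whichever piece
    she values more (ties go to the first), the cutter keeps the other.
    Returns (chooser's piece, cutter's piece)."""
    p1, p2 = cut
    if chooser_u(p1) >= chooser_u(p2):
        return p1, p2
    return p2, p1

# B cuts a fruit half and a nut half: A is indifferent and may take
# the fruit half, leaving B with inedible nuts.
fruit_half = {"fruit": 0.5, "nut": 0.0}
nut_half = {"fruit": 0.0, "nut": 0.5}
a_piece, b_piece = play((fruit_half, nut_half), u_A)
print("B's utility:", u_B(b_piece))   # 0.0 if A happens to take the fruit

# B's safer cut: a big nut-plus-some-fruit piece and a small all-fruit
# piece. A takes the bigger piece; B keeps a small but edible share.
big = {"fruit": 0.2, "nut": 0.5}
small = {"fruit": 0.3, "nut": 0.0}
a_piece, b_piece = play((big, small), u_A)
print("B's utility:", u_B(b_piece))   # 0.3, less than an even split
```

Whatever cut B proposes, her guaranteed utility is below the 0.5 she would get from the simple “B takes the fruit half” allocation – which is exactly the unreachable solution above.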

I’m reminded of an experiment I think I read about in George Ainslie’s Breakdown of Will (I don’t have the book to hand to check, so apologies for inaccuracies. We can pretend it is a thought experiment and I think it still makes the point). There’s a long cage with a lever at one end that opens a food door at the other. If you are a pig it takes 15 seconds, say, to run from the lever to the door. After 20 seconds the door closes, so you get to eat your fill for 5 seconds. One pig on her own gets regular opportunities to feed, as well as plenty of exercise running back and forth. Now imagine a big pig and a small pig. The big pig is a bully and always pushes the small pig off any food. In a cage with normal feeding arrangements the big pig gets all the food (poor small pig!). But in this bizarre long cage with the lever-for-food arrangement, a funny thing happens. The big pig ends up as a lever-pressing slave for the small pig, who gets to eat most of the food.

To see why, we need a game-theory analysis like with the cake example. If the little pig pressed the lever, the big pig would start eating the food and the little pig wouldn’t be able to budge her. There’s no incentive for the little pig to press the lever; she doesn’t get any food either way! The big pig, however, has a different choice: if she presses the lever then she can charge down to the food and knock the little pig out of the way, getting 5 seconds of food. It’s worth it for the big pig, but the outcome is that she does all the running and only gets a quarter of the food.

This surprising result is nonetheless a ‘behaviourally stable strategy’, to bastardise a phrase from evolutionary game theory.
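A toy payoff table makes the pigs’ incentives explicit. The 20 seconds of food per press and the 15-second run come from the story above; everything else is my simplification:

```python
# Payoff sketch for the two-pig lever game. Assumed numbers: each
# press makes food available for 20 s, the run from lever to trough
# takes 15 s, and the big pig always displaces the small pig.

FOOD_SECONDS = 20
RUN_SECONDS = 15

def payoffs(big_presses, small_presses):
    """Return (big pig's food, small pig's food) in seconds of eating."""
    if not (big_presses or small_presses):
        return (0, 0)                        # nobody presses: no food
    if big_presses and not small_presses:
        # the small pig eats alone until the big pig arrives and
        # shoves her off for the remainder
        return (FOOD_SECONDS - RUN_SECONDS, RUN_SECONDS)
    # if the small pig presses (alone or alongside the big pig),
    # the big pig is there to hog the trough
    return (FOOD_SECONDS, 0)

print(payoffs(True, False))    # (5, 15): big pig presses, small pig feasts
print(payoffs(False, True))    # (20, 0): small pig pressing gains her nothing
print(payoffs(False, False))   # (0, 0): stand-off
```

Pressing is weakly dominated for the small pig (she gets 0 seconds of food whether she presses or not), while the big pig still prefers 5 seconds of food to none – hence the lever-pressing slavery.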

Bottom line: minimally complex environments and heterogeneities in agents’ abilities and preferences break simple fairness games. In anything like the real world, as Tom Slee so convincingly shows, choice is not preference.

politics sheffield systems

Small Worlds, article in Now Then

Now Then is an independent Sheffield-based arts and community magazine. They are monthly, good chaps and have an out of date website. It is part of the Opus Productions media empire. For the first issue of the magazine, last year, I wrote them an article about something that has interested me for a long time: small worlds. Specifically, I’d been thinking about social networks and what the Watts and Strogatz small-world result has to tell us about them. The article is now here, should you wish to read it. It is pretty upbeat. I think if I had more room and less inclination to be positive I would have included something about how we tend to organise our social worlds so that it seems, from the ‘ground level’, that we are talking to everyone important, when in actual fact we are ignoring — completely estranged from — most of the people we are physically close to, insulated in comforting small worlds.

See also bridging and bonding social capital, ‘60 Million People You’d Never Talk To Voting For Other Guy’, We Live in Small Worlds

science systems

A kettle from Dublin

The rust inside this kettle shows an emergent pattern that is typical of the self-organising dynamics of reaction-diffusion systems.

One example of self-organising dynamics is the topographic map of ocular dominance columns in the visual cortex. These intricate maps display a fascinating interplay of regularity and irregularity. Such patterns have been modelled by computational neuroscientists using the Kohonen algorithm and its variants.
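For the curious, the flavour of the Kohonen algorithm can be conveyed with a toy 1-D self-organising map: units compete for each input, and the winner drags its neighbours along with it, so neighbouring units come to represent neighbouring inputs. This is a minimal sketch with arbitrary parameters of my choosing, not a model of cortical maps:

```python
# Toy 1-D Kohonen self-organising map. Learning rate, radius schedule
# and unit count are arbitrary illustrative choices.

def train_som(n_units=10, epochs=200):
    weights = [0.5 + 0.01 * i for i in range(n_units)]   # start bunched up
    inputs = [x / 10 + 0.05 for x in range(10)]          # 0.05 .. 0.95
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs) + 0.01           # decaying rate
        radius = 3 if epoch < epochs // 2 else 1         # shrinking radius
        for x in inputs:
            # the unit whose weight is nearest the input wins...
            winner = min(range(n_units), key=lambda i: abs(weights[i] - x))
            # ...and it and its neighbours all move toward the input
            for i in range(max(0, winner - radius),
                           min(n_units, winner + radius + 1)):
                weights[i] += lr * (x - weights[i])
    return weights

weights = train_som()
print([round(w, 2) for w in weights])   # units spread over the input range
```

The weights start bunched around 0.5 and end up spread across the whole input range, typically in order along the map – a 1-D caricature of a topographic map unfolding.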

Thanks for the picture Cat!

psychology systems

A game of you

I asked the audience to imagine that I was running a game show. I announced that I would go along every row, starting at the front, and give each member a chance to say “cooperate” or “defect.” Each time someone said “defect” I would award a euro only to her. Each time someone said “cooperate” I would award ten cents to her and to everyone else in the audience. And I asked that they play this game solely to maximize their individual total score, without worrying about friendship, politeness, the common good, etc. I said that I would stop at an unpredictable point after at least twenty players had played

Like successive motivational states within a person, each successive player had a direct interest in the behavior of each subsequent player; and had to guess her future choices somewhat by noticing the choices already made. If she realized that her move would be the most salient of these choices right after she made it, she had an incentive to forego a sure euro, but only if she thought that this choice would be both necessary and sufficient to make later players do likewise.

In this kind of game, knowing the other players’ thoughts and characters – whether they are greedy, or devious, for instance – will not help you choose, as long as you believe them to be playing to maximize their monetary gains. This is so because the main determinant of their choices will be the pattern of previous members’ play at the moment of these choices. Retaliation for a defection will not occur punitively – a current player has no reason to reward or punish a player who will not play again – but what amounts to retaliation will happen through the effect of this defection on subsequent players’ estimations of their prospects and their consequent choices. These would seem to be the same considerations that bear on successive motivational states within a person, except that in this interpersonal game the reward for future cooperations is flat (ten cents per cooperation, discounted negligibly), rather than discounted in a hyperbolic curve depending on each reward’s delay.

Perceiving each choice as a test case for the climate of cooperation turns the activity into a positive feedback system—cooperations make further cooperations more likely, and defections make defections more likely. The continuous curve of motivation is broken into dichotomies, resolutions that either succeed or fail.

George Ainslie, A Selectionist Model of the Ego: Implications for Self-Control (also see pp93-94 in Breakdown of will)
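The positive-feedback structure of Ainslie’s game is easy to simulate. In the sketch below, the players’ rule – cooperate only if the last three moves were all cooperations – is an assumption of mine, one crude stand-in for “perceiving each choice as a test case for the climate of cooperation”:

```python
# Ainslie's audience game: defecting pays EUR 1 to yourself;
# cooperating pays EUR 0.10 to every player. 1 = cooperate, 0 = defect.

def run_game(n_players, history_window=3, seed_moves=(1, 1, 1)):
    """Each player cooperates iff every move in the recent window was a
    cooperation (an assumed rule). Returns the players' moves."""
    moves = list(seed_moves)
    for _ in range(n_players):
        recent = moves[-history_window:]
        moves.append(1 if all(recent) else 0)
    return moves[len(seed_moves):]

def payout(moves, my_index):
    """Total euros for one player: EUR 1 if they defected, plus EUR 0.10
    for every cooperation by anyone (including themselves)."""
    mine = 1.0 if moves[my_index] == 0 else 0.0
    return mine + 0.10 * sum(moves)

coop_run = run_game(20)                        # good start: all cooperate
broken = run_game(20, seed_moves=(1, 1, 0))    # one defection poisons it
print(coop_run)
print(broken)
print(payout(coop_run, 0), payout(broken, 0))  # 2.0 vs 1.0
```

With a good start, cooperation is self-sustaining and the 0.10-per-cooperation trickle (2 euros over 20 players) beats the defector’s single euro; a single early defection flips the entire run to defection. Cooperations make further cooperations more likely, defections make defections more likely, exactly as the quote says.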

politics systems

The tragedy of the commons

The phrase ‘tragedy of the commons’ was first popularised in an article about population control.

The rebuttal to the invisible hand in population control is to be found in a scenario first sketched in a little-known pamphlet in 1833 by a mathematical amateur named William Forster Lloyd (1794–1852). We may well call it “the tragedy of the commons,” using the word “tragedy” as the philosopher Whitehead used it: “The essence of dramatic tragedy is not unhappiness. It resides in the solemnity of the remorseless working of things.” He then goes on to say, “This inevitableness of destiny can only be illustrated in terms of human life by incidents which in fact involve unhappiness. For it is only by them that the futility of escape can be made evident in the drama.”

Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.

When men mutually agreed to pass laws against robbing, mankind became more free, not less so. Individuals locked into the logic of the commons are free only to bring on universal ruin; once they see the necessity of mutual coercion, they become free to pursue other goals.

The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.

Hardin, G. (1968). The Tragedy of the Commons. Science, 162(3859), 1243-1248.
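Hardin’s calculus can be put into toy numbers (mine, purely illustrative): each animal pays its owner in full, while the cost of overgrazing is spread across every animal on the commons.

```python
# Toy commons: grazing yield per animal falls once the pasture is
# overstocked. CAPACITY and the decay rate are illustrative assumptions.

CAPACITY = 10

def yield_per_animal(n):
    """Yield per animal, falling by 10% for each animal over capacity."""
    return max(0.0, 1.0 - 0.1 * max(0, n - CAPACITY))

def my_payoff(my_animals, total_animals):
    """A herdsman keeps the full yield of his own animals."""
    return my_animals * yield_per_animal(total_animals)

# Five herdsmen with two animals each: the commons is exactly at capacity.
before_mine = my_payoff(2, 10)             # my payoff now: 2.0
after_mine = my_payoff(3, 11)              # if I add one: 3 * 0.9 = 2.7
total_before = 10 * yield_per_animal(10)   # herd total now: 10.0
total_after = 11 * yield_per_animal(11)    # herd total after: 9.9
print(before_mine, after_mine, total_before, total_after)
```

Adding the eleventh animal is profitable for its owner (2.7 beats 2.0) even though it lowers the herd’s total yield: each herdsman’s private ledger says “add another”, and ruin is the sum of the ledgers.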

science systems

schelling on the purposes of modelling

Simplified models of artificial situations can be offered for either of two purposes. One is ambitious: these are “basic models” – first approximations that can be elaborated to simulate with higher fidelity the real situations we want to examine. The second is modest: whether or not these models constitute a “starting set” on which better approximations can be built, they illustrate the kind of analysis that is needed, some of the phenomena to be anticipated, and some of the questions worth asking.

The second, more modest, accomplishment is my only aim in the preceding demonstrations. The models were selected for easy description, easy visualization, and easy algebraic treatment. But even these artificial models invite elaboration. In the closed model [of self-sorting of a fixed population across two sub-groups (‘rooms’) according to individuals’ preferences for a group mean age closest to their own], for example, we could invoke a new variable, perhaps “density”, and get a new division between the two rooms at a point where the greater attractiveness of the age level is balanced by the greater crowding. To do this requires interpreting “room” concretely rather than abstractly, with some physical dimension of some facility in short supply. (A child may prefer to be on the baseball squad which has older children, but not if he gets to play less frequently; a person may prefer to travel with an older group, but not if it reduces his chances of a window seat; a person may prefer the older discussion group, but not if it means a more crowded room, more noise, fewer turns at talking, and less chance of being elected chairman.) As we add dimensions to the model, and the model becomes more particular, we can be less confident that our model is of something we shall ever want to examine. And after a certain amount of heuristic experiments with building blocks, it becomes more productive to identify the actual characteristics of the phenomena we want to study, rather than to explore general properties of self-sorting on a continuous variable. Nursing homes, tennis clubs, bridge tournaments, social groupings, law firms, apartment buildings, undergraduate colleges, and dancing classes may display a number of similar phenomena in their membership; and there may be a number of respects in which age, I.Q., walking speed, driving speed, income, seniority, body size, and social distinction motivate similar behaviours.
But the success of analysis eventually depends as much on identifying what is peculiar to one of them as on the insight developed by studying what is common to them.

Schelling, T. (1978/2006). Micromotives and Macrobehaviour, pp183-184.
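Schelling’s closed two-room model is simple enough to simulate directly. This sketch fills in details he leaves open (sequential moves, ties broken towards room 0, a cap on rounds): each person defects to whichever room has a mean age nearer their own, and we iterate until no one wants to move.

```python
# Toy version of Schelling's closed self-sorting model: a fixed
# population splits across two rooms by preference for the room whose
# mean age is closest to their own. Details of the dynamics are my
# assumptions, not Schelling's specification.

def room_mean(ages, assignment, r, skip):
    members = [ages[j] for j in range(len(ages))
               if assignment[j] == r and j != skip]
    return sum(members) / len(members) if members else None

def sort_rooms(ages, rounds=50):
    half = len(ages) // 2
    assignment = [0] * half + [1] * (len(ages) - half)   # arbitrary split
    for _ in range(rounds):
        moved = False
        for i, age in enumerate(ages):
            best, best_gap = assignment[i], None
            for r in (0, 1):
                m = room_mean(ages, assignment, r, skip=i)
                if m is None:
                    continue                  # never join an empty room
                if best_gap is None or abs(age - m) < best_gap:
                    best, best_gap = r, abs(age - m)
            if best != assignment[i]:
                assignment[i] = best          # move to the closer room
                moved = True
        if not moved:
            break                             # stable: no one wants to move
    return assignment

ages = [10, 40, 12, 42, 14, 44, 16, 46]
rooms = sort_rooms(ages)
for r in (0, 1):
    print("room", r, sorted(a for a, rm in zip(ages, rooms) if rm == r))
```

Starting from a mixed split, the best-response dynamics typically pull the young and the old into separate rooms – macro-level sorting that no individual chose as a goal.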

books politics systems

Questions for economists

Tim Harford wrote ‘The Undercover Economist’ and also writes the ‘Dear Economist’ column for the Financial Times. His book is excellent — a very readable introduction to economic theory and how it applies to various facets of everyday life. I was going to write him a letter, but then I found out that he’d sold half a million copies of his book and so, reckoning that he’d be too busy to write back to me, I am posting my thoughts here. This is partly for my own benefit as a note-to-self and partly because I’d be very happy to get answers from anyone or everyone on the questions I ask. Useful references are an acceptable substitute for wordy explanations.

Dear Undercover Economist,

On development — can everyone be rich? Won’t there always have to be someone to work the fields / clean the toilets / serve the coffee? Technologists answer: automation will remove much of life’s drudgery. Environmentalists retort: resources put limits on growth. Economists: imagine a world where every economy is ‘developed’. In that world we would expect to find people wealthy according to their talents (because talents define scarcity). My question: in that world, what will the utterly talentless be paid to do with their time? Presumably we’ll still be forcing them to clean toilets, because the toilet-cleaning robots will be too expensive (they need to be, in order to pay the wages of the very-expensive-to-hire toilet-cleaning-robot designers).

Information asymmetry: Akerlof (1970) has a description of how information asymmetry can prevent a viable market existing. Harford’s discussion blames information asymmetry for the fact that you can’t get a decent meal in tourist areas, but I am wondering if the effects are far more wide-reaching than this. Big organisations will have an information advantage over individual consumers (on some things), as will anyone who devotes their entire economic energy to a single domain (eg selling avocados) over someone who is time-poor (eg the typical avocado buyer). Coupled with a dynamic economic environment, couldn’t those with an informational advantage effectively manipulate those with an informational disadvantage? In other words, I’d be willing to bet that in a static market even an extremely informationally-deprived / cognitively challenged agent will work out the best deal, given enough time. But if the best deal keeps changing (and those with the information advantage keep changing it to suit their ends) the chances of the individual agent aren’t so good. File under benefits of collectivisation / market failure?
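Here is a sketch of the unravelling dynamic – an Akerlof-flavoured toy of my own construction, not his actual model. Buyers can only price the average quality still on sale, so the best sellers exit, which drags the average down, which drives out the next tier, and so on:

```python
# Toy market-for-lemons unravelling. Sellers know their car's quality q
# and will only sell at a price >= q; buyers value a car at 1.2x its
# quality but can only observe the AVERAGE quality still for sale.
# The 1.2 premium and quality range are illustrative assumptions.

def unravel(qualities, buyer_premium=1.2, rounds=20):
    for_sale = sorted(qualities)
    for _ in range(rounds):
        if not for_sale:
            break
        # buyers bid the expected value of what's left on the market
        bid = buyer_premium * sum(for_sale) / len(for_sale)
        # sellers whose car is worth more than the bid withdraw it
        still = [q for q in for_sale if q <= bid]
        if len(still) == len(for_sale):
            break                    # market stabilises
        for_sale = still
    return for_sale

cars = list(range(100, 1001, 100))   # qualities 100 .. 1000
print(unravel(cars))                 # [100]
```

Only the worst car survives, even though every single trade (a buyer valuing a car at 1.2x what its seller does) would have been mutually beneficial. The information asymmetry alone destroys the market.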

Efficiency of the market leads to loss of diversity (because all inefficient solutions are squeezed out). Diversity has its own value, both in system robustness (see ecosystems) and in terms of human experience (belonging to a specific place, variety being the spice of life, etc). So how do we incorporate the value of this diversity into market systems? I would submit that diversity is something that exists above the single-agent view of things — an example of an emergent phenomenon (see below). (Previously on idiolect: Why is capitalism boring?)

Markets don’t have foresight. Do free marketeers admit that this is one of the functions of government? For example, imagine agents who like to consume some finite resource. Presumably a ‘free market’ will be the most efficient way to organise their consumption. Efficient consumption of the resource leads to its disappearance. Then what? In The Undercover Economist (p237) Harford says that in markets ‘mistakes cannot happen’ because any experiments with resources stay small scale. I would submit that while this is true at the micro level, with respect to efficiency — in other words, I agree that markets tend to efficiency — it is not true at the macro level, with respect to whole-system health.

An objection to this is that markets do have foresight because the individual agents have foresight – so they will incorporate the anticipated future (eg anticipated future resource availability) into their cost functions. But what if agents do not have the information, or the motivation, to worry about the future? Does my concern just resolve down to the existence, or not, of the tragedy of the commons? Perhaps. I think the key is the existence of a discontinuity between agent-level information and collective-level information; ie the issue is really about emergence, of which the tragedy of the commons is a specific example.
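To make the worry concrete, here is a toy commons (a sketch with entirely made-up numbers: a logistically regrowing stock and agents who each take a fixed amount per round). Greedy per-round consumption exhausts the resource; a harvest below the sustainable yield leaves both the stock and the cumulative take far healthier:

```python
def run_commons(harvest_per_agent, n_agents=10, stock=100.0,
                growth=0.25, capacity=100.0, rounds=200):
    """Shared stock regrows logistically; each round every agent takes a
    fixed amount (or whatever is left). Returns (final stock, total harvested)."""
    total = 0.0
    for _ in range(rounds):
        take = min(stock, harvest_per_agent * n_agents)
        stock -= take
        total += take
        stock += growth * stock * (1 - stock / capacity)   # logistic regrowth
        stock = min(stock, capacity)
    return stock, total

greedy_stock, greedy_total = run_commons(harvest_per_agent=2.0)   # 20/round
modest_stock, modest_total = run_commons(harvest_per_agent=0.5)   # 5/round
```

With these parameters the maximum sustainable yield is growth × capacity / 4 = 6.25 per round: the greedy collective takes 20 per round and collapses the stock within a dozen rounds, while the modest collective’s 5 per round is sustained for all 200 rounds — so the ‘inefficient’ agents end up harvesting far more in total.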

Side note: if you are a market economist you are a de facto fan of emergence. Aggregate effects which emerge from mass individual action = emergence. Disconnection between individual goals (eg profit) and collective outcomes (efficiency). Etc. Economics is interesting precisely because there are non-obvious relations between agents and outcomes.

Side note the second: there is an essential similarity between economics and cognitive psychology – a focus on information processing. Further, market economics recognises the power of distributed information processing, as does the connectionist school of cognitive psychology. This is the reason I talk about agents, rather than consumers. I believe the same principles will apply not just to the economic and social sciences, but also to the cognitive sciences (remember Minsky’s “Society of Mind”). A question: can we usefully apply the idea of a distributed, free, ‘informational economy’ to understanding neural coding? (Remember Glimcher’s “Neuroeconomics”)


Heuristics for decentralised thinking

(Mitchel Resnick, ‘Turtles, Termites and Traffic Jams: Explorations in Massively Parallel Microworlds’, MIT Press, 1994, p134ff)

1. Positive feedback can be good

  • Take-off effects

2. Randomness can help create order

  • combined with positive feedback -> self-causation
  • shakes off local minima (annealing)
  • prevents the worst excesses of exploitation in the exploration-exploitation dilemma

3. & 4. Emergence

  • Need to distinguish between levels
  • Not all properties of a system have to be explicitly built in
  • Emergent objects may have different properties than their subunits

5. The Environment is Active

  • Intelligent behaviour doesn’t just rely on agents
  • The environment is dynamic and a source of complexity

Shalizi review of TT&TJ
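Point 2 — randomness shaking a system off local minima, the annealing idea — can be shown in a few lines. A minimal sketch, not from the book (the landscape and every parameter here are my own invented example): pure downhill search gets stuck in a local minimum, while the same search plus slowly cooling noise hops the barrier to the global one.

```python
import math
import random

def f(x):
    """A bumpy 1-D landscape: global minimum near x ≈ -1.3,
    a local minimum near x ≈ 3.8."""
    return 0.1 * x * x + math.sin(x)

def hill_climb(x, step=0.01):
    """Pure exploitation: move only downhill. Gets stuck in local minima."""
    while True:
        if f(x - step) < f(x):
            x -= step
        elif f(x + step) < f(x):
            x += step
        else:
            return x

def anneal(x, n_steps=20000, t0=2.0, t1=0.01, seed=42):
    """Exploitation plus noise: sometimes accept uphill moves, with a
    probability that shrinks as the 'temperature' cools."""
    rng = random.Random(seed)
    best = x
    for i in range(n_steps):
        t = t0 * (t1 / t0) ** (i / n_steps)      # geometric cooling schedule
        x_new = x + rng.uniform(-0.5, 0.5)
        delta = f(x_new) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = x_new
            if f(x) < f(best):
                best = x
    return best

stuck = hill_climb(4.0)    # descends into the local minimum near 3.8
escaped = anneal(4.0)      # the early noise lets it hop the barrier
```

The deterministic climber halts at the nearest dip; the noisy one, for a fixed seed, wanders over the hump while the temperature is high and then settles near the deeper minimum as it cools.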



    If you think about the distribution of events, some will have very low probabilities. So low, in fact, that on average they occur less than once. But of course events either happen or they don’t, so an average of less than once tends to get approximated to not-at-all if you think in probabilistic terms. But if you look at the class of all those things with average frequency less than one, there’s a good chance of one, or some, of them happening. And when they do, they happen at a frequency far greater than the prediction of the average (by necessity: the average is between zero and one, but if an event occurs at all its count is an integer, one at minimum).
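The arithmetic behind this is worth making explicit. A hedged example (the event count and rate are invented, and I’m assuming independent Poisson-distributed events for simplicity): take 100 rare events, each expected to happen 0.05 times. Each one, individually, rounds to ‘never happens’ — yet the class as a whole almost certainly produces something:

```python
import math

# 100 independent rare events, each with an expected count of 0.05
# ("on average, less than once") — modelled as Poisson for simplicity
n_events, rate = 100, 0.05

p_single = 1 - math.exp(-rate)            # P(one given event happens at all) ≈ 0.049
p_some = 1 - math.exp(-rate * n_events)   # P(at least one of the class happens) ≈ 0.993

# and when a rare event does happen, its count is at least 1 —
# twenty times its unconditional average of 0.05
mean_count_given_occurs = rate / p_single   # E[N | N >= 1] ≈ 1.03
```

So each event is individually negligible, the class is collectively near-certain, and any event that does occur exceeds its own average by a factor of twenty.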

    I was thinking these thoughts while watching this demo of ‘The Galton Machine’, which illustrates the central limit theorem and, peripherally, provides an example of a system which can be approximately described by one ‘rule’ (a Gaussian distribution) but follows quite different mechanistic rules (you see, it’s always about minds and brains round here). Look at the extremes of the distribution. Soon enough a single ball will fall somewhere there, and when it does it will far exceed the predicted (average) frequency of its occurring.
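The Galton machine itself is easy to simulate — a sketch with invented sizes (2000 balls, 12 rows of pegs): the only mechanistic rule is a fair left/right bounce at each peg, yet the bin counts come out binomial, i.e. approximately Gaussian. And each extreme bin is expected to catch 2000/2¹² ≈ 0.49 balls — ‘on average, less than once’ — yet balls do land there:

```python
import random

def galton(n_balls=2000, n_rows=12, seed=7):
    """Drop balls through rows of pegs; each peg bounces the ball
    left or right with probability 1/2. Returns counts per final bin."""
    rng = random.Random(seed)
    bins = [0] * (n_rows + 1)
    for _ in range(n_balls):
        # a ball's final bin is just the number of rightward bounces it took
        pos = sum(rng.random() < 0.5 for _ in range(n_rows))
        bins[pos] += 1
    return bins

bins = galton()
expected_per_tail_bin = 2000 / 2**12   # ≈ 0.49: 'on average, less than once'
```

The middle bins pile up in the familiar bell shape while the tail bins hold a ball or two at most — or, some runs, none at all.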

    It occurred to me that all this was analogous to an interesting piece in the Guardian, an extract from Worst Cases: Terror and Catastrophe in the Popular Imagination, by Lee Clarke, published by the University of Chicago Press. Clarke says that thinking about probabilities lets you get away with considering only what is most likely to happen, whereas thinking about possibilities lets you plan for rare events of serious magnitude.

    The trouble is that when it comes to real worst cases – actual disasters – there are no “average events”. How could we talk about a normal distribution of extreme events? If we imagine the future in terms of probabilities, then risks look safe. That’s because almost any future big event is unlikely.

    counterfactuals …help us see how power and interest mould what is considered legitimate to worry about. One lesson is that we cannot necessarily trust high-level decision-makers to learn from their mistakes. They could. But they often have an interest in not learning.

    Our governments and institutions almost always have a vested interest in not engaging in possibilistic thinking, and in offering us instead the reassuring palliative of probabilistic scenarios.

    I wonder if this is part of the story of why modern government seems to be about the management of existing trends, rather than the envisioning of alternative (possibly radically alternative) future states that society could exist in.


    A human network syndrome?

    Dan wrote me a comment on my post on modelling local economies and the effect of shops which generate more income but send profits outside the local economy. It’s quite long so I’ve put most of it below the fold. Some context may be found in this post I’ve linked to before, by Dan, about the current redevelopment plans for Burngreave, Sheffield. Even if you’re not interested in redevelopment policy, there’s stuff about the utility and use of simulations that is of general interest.

    Some abbreviations I’m not sure he defines: LM3 = Local Multiplier 3, a measure developed by the NEF which gauges how much of the money spent in the local economy stays in the local economy. NEF = the New Economics Foundation. ABM = Agent-Based Modelling. ODPM = Office of the Deputy Prime Minister.

    Anyway, Dan says:

    This is all a bit like wading through underbrush at the moment. One day in the future, the concepts we’re trying to get at may emerge from the murk, but for now….

    1. The value of modeling
    2. A human network syndrome?
    3. Capitalism, network breakdown


    S(t)imulating the local economy

    Following on from the happiness maths and the associated notes about the value of toy models, here is a toy economic model and some notes about what it might mean for the regeneration of local economies (also known as ‘are you sure you want to knock down those shops and build a supermarket?’). Comments on both the economics and the epistemology very welcome…
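For a feel of the numbers, here is my understanding of the NEF’s LM3 measure mentioned above, as a minimal sketch (the 60%/20% local-spend shares are invented for illustration): follow a sum of money through three rounds of spending and count how much of it lands locally each round.

```python
def lm3(initial_spend, local_share):
    """Local Multiplier 3: round 1 is the money arriving; rounds 2 and 3
    follow the fraction re-spent locally at each step. Ranges from 1
    (everything leaks out immediately) to 3 (everything stays local)."""
    r1 = initial_spend
    r2 = r1 * local_share     # what the recipients re-spend locally
    r3 = r2 * local_share     # what those recipients re-spend locally
    return (r1 + r2 + r3) / initial_spend

corner_shop = lm3(100, 0.6)   # most takings re-spent in the area -> 1.96
supermarket = lm3(100, 0.2)   # most profits sent outside the area -> 1.24
```

The point of the toy: the same £100 spent at the leakier shop does a bit over half as much local work, which is the intuition behind ‘are you sure you want to knock down those shops?’.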


    The Happiness Maths

    We know that momentary happiness is some kind of function of experience, partly of how that experience compares to previous experiences. We also know that people have hedonic baselines – a basic level of happiness to which they return, irrespective of changes in their quality of life. People win the lottery, and – obviously – they’re delighted. And then in a few months they’re as happy or as miserable as they ever were. Or they lose their legs, and – obviously – they’re devastated. And then they adjust and end up as happy or as miserable as they ever were.

    So, here’s a simple model that explains those phenomena, and maybe does some other interesting things as well:

    Momentary Happiness is defined by the difference between your current experience and an average of your previous experiences (with more recent previous experiences weighted more heavily in the average)

    The rest of this post is dedicated to exploring a mathematical formulation of this model, and seeing what it implies, what it misses out and how it could be improved. There’s also one eye on the question “How can experience be best manipulated to produce the maximum total happiness?”. If you are not interested in fun with maths, or the role of formal models in aiding thinking, then you might want to give up here.
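A minimal sketch of that definition (my own toy numbers; alpha, the adaptation rate, is an invented parameter): the baseline is an exponentially weighted average of past experience, so a permanent jump in experience produces a spike of happiness that decays back to zero — hedonic adaptation for free.

```python
def happiness_trace(experiences, alpha=0.1):
    """Momentary happiness = current experience minus an exponentially
    weighted average of past experiences (recent past weighted most).
    alpha sets how quickly the baseline adapts."""
    baseline = experiences[0]
    trace = []
    for e in experiences:
        trace.append(e - baseline)
        baseline = (1 - alpha) * baseline + alpha * e   # baseline catches up
    return trace

# life at level 1, then a 'lottery win' lifts experience to 5 permanently
life = [1.0] * 20 + [5.0] * 80
h = happiness_trace(life)
# h spikes to 4.0 at the win, then decays geometrically back towards 0
```

Note what the model misses as well as what it captures: happiness here depends only on relative change, so any constant quality of life — palace or prison – eventually scores zero.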


    A group is its own worst enemy

    Read this! Clay Shirky on ‘A Group is its Own Worst Enemy’. As well as containing gems like this, on the power of out-group prejudice:

    groups often gravitate towards members who are the most paranoid and make them leaders, because those are the people who are best at identifying external enemies.

    And, on why geocities came before weblogs:

    It took a long time to figure out that people talking to one another, instead of simply uploading badly-scanned photos of their cats, would be a useful pattern.

    He also has some interesting reflections on the basic patterns groups reproduce (he says sex/flirtation, outgroup animosity and religious veneration are the top three), on why structure is needed to protect groups from themselves (it is members of the group, operating within the deliberate remit of the group’s initial intention that often cause its collapse, not ‘outsiders’) and a whole lot of other stuff about social software.

    And, basically, I’m not too wrapped up in the software bit of social software; I’m more interested in the social. How can we catalyse well-functioning groups?

    Clay says that you need some kind of privileged group of core users, or some kind of barriers to entry – in an internet forum a lack of these things leads to a one-person-one-vote tyranny of the majority [please read the article before getting upset about any anti-democratic sentiments you perceive here].

    However, for non-internet groups, I wonder if our problem isn’t a lack of limits on commitment, rather than a lack of barriers to entry based on a minimum level of commitment. I’ve seen a lot of social and political groups which get overly swayed by the minority who have the time to commit totally and obsessively to the group – it doesn’t make for a rounded decision-making process.

    I’m in danger of starting an epic string of posts on the interrelation of group structure and group function. If anyone would like to head me off at the pass and recommend some reading/ideas to get my head corrected on this first, I’d welcome it…


    Job Security

    I’ve only ever heard two kinds of arguments, or implied arguments, about laws which enforce job security:

    (1) From the worker-perspective: they are good. Job security improves quality of life, peace of mind, etc.

    (2) From the company-perspective: they are bad. Job security hinders flexibility, stops rapid responses to changing economic conditions, etc.

    Now I’ve only been passively absorbing information on this, and I’m not claiming to have made anything like an active search for alternative arguments, but I suspect that the fact that I haven’t encountered a third kind of argument reflects the fact that it isn’t made as often, or as clearly:

    (3) From the company-perspective: they are good. Commitment to job security encourages companies to seek out sustainable and stable business strategies.

    Not just good, but good for everyone! I wonder if the argument isn’t made because it is a two-time-point / second-order kind of argument, rather than a simple ‘if X happened I could/couldn’t do Z’ kind of argument. You need to represent a more profound kind of counterfactual to think about this third line of reasoning: not just that if the company needed to downsize it couldn’t, but that because downsizing is harder, the company would alter its circumstances so as to be less likely to need to downsize.


    how to work with models

    Economist Paul Krugman writes ‘How I work’, and along the way covers some psychology-relevant thoughts on the use of models (as recommended by the Yale Perception and Cognition Lab).

    He also articulates one of my main reasons for having a weblog: ‘We just don’t see what we can’t formalize’.


    describing systems / systems for describing

    Systems theory, like catastrophe theory before it, is a descriptive theory, not a predictive one. Which means that it’s harder to say if it’s any use (and, indeed, once you have made any discoveries within that framework, you can always re-phrase them using the language of the old framework).

    Given this, we’d expect systems theory to be most useful in fields which suffer most from a lack of adequate epistemological tools. Which is why, I guess, I’m convinced of the necessity of some kind of systems thinking in cognitive neuroscience and in social psychology.

    And why, maybe, to date the best systems-theory work in psychology has been in developmental psychology.


    The Invisible College says some considered, interesting, important things on Echoes in the Invisible College. I so want to be able to see five years into the future to find out what conclusions all the current thinking about the influence of social structure on group cognition will produce…