There is a beauty to the arrangement whereby a cake is shared by one of us dividing it and the other choosing which part they want. The person dividing doesn’t know which part they’ll get, so they have every incentive to make the shares fair. They say that John Rawls took this as inspiration for his philosophy of how a just society should be organised (but I don’t know enough about that).

But the cake-cutting example only works in a world where the cake is homogeneous and the two cake-eaters have identical preferences (in this case, to have as much as possible). Imagine a world where the cake has a fruit half and a nut half, say, and two cake-eaters, A and B. A likes fruit and nut equally; she doesn’t care. B is allergic to nuts. Now the game of “one cuts, one chooses” doesn’t work. If A cuts, she will slice the cake in half and be happy with whichever half she’s left with, but B had better hope that A makes a half which is entirely fruit, otherwise she’ll be forced to choose between two bits of cake, some of which she can’t eat. A is at no risk of losing out; B is at substantial risk. If B cuts first, she might consider cutting the cake into a nut half and a fruit half, but then she has to hope A chooses the nut half. Or she might cut the cake into mixed halves and put up with a portion she can’t fully eat (but ensuring A only gets half the cake). The game-theoretic solution is probably for B to cut a larger half (all the nuts plus a small amount of fruit) and a smaller, just-fruit half. A will choose the larger half. A definitely wins; B loses out.

The solution whereby A and B each get half, and both enjoy their halves equally (i.e. B gets the fruit half), is simple, but unreachable via this sharing game.
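You can check B’s options with a toy model. The utility numbers here are hypothetical (a cake of one unit of fruit plus one unit of nuts; A values all cake equally, B values only the fruit), and ties are broken in the worst way for B, since she can’t count on A’s indifference going her way:

```python
# Toy payoff check for the fruit-and-nut cake, with B as cutter.
# Hypothetical utilities: the cake is (1 fruit, 1 nut); A's utility is
# total cake, B's is fruit only (she can't eat nuts).

def u_A(piece):            # piece = (fruit, nut) amounts
    return piece[0] + piece[1]

def u_B(piece):
    return piece[0]

def b_cuts(p1, p2):
    """B cuts; A takes whichever piece she values more.

    Ties in A's valuation are broken in the worst way for B:
    an indifferent A may happen to take the piece B wanted.
    """
    a_takes = p1 if (u_A(p1), u_B(p1)) >= (u_A(p2), u_B(p2)) else p2
    b_gets = p2 if a_takes is p1 else p1
    return u_A(a_takes), u_B(b_gets)   # (A's payoff, B's payoff)

print(b_cuts((0.0, 1.0), (1.0, 0.0)))    # fruit/nut halves: A is indifferent
                                         # and may take the fruit -> (1.0, 0.0)
print(b_cuts((0.5, 0.5), (0.5, 0.5)))    # mixed halves: B guarantees -> (1.0, 0.5)
print(b_cuts((0.25, 1.0), (0.75, 0.0)))  # big nutty half vs small fruit
                                         # half -> (1.25, 0.75)
```

B’s best guaranteed payoff (0.75, from the nutty-half gambit) falls short of the simple fair split where she gets all the fruit (1.0), which is exactly the outcome the game can’t deliver.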

I’m reminded of an experiment I think I read about in George Ainslie’s Breakdown of Will (I don’t have the book to hand to check, so apologies for inaccuracies; we can treat it as a thought experiment and I think it still makes the point). There’s a long cage with a lever at one end that opens a food door at the other. If you are a pig it takes 15 seconds, say, to run from the lever to the door. After 20 seconds the door closes, so you get to eat your fill for 5 seconds. One pig on her own gets regular opportunities to feed, as well as plenty of exercise running back and forth. Now imagine a big pig and a small pig. The big pig is a bully and always pushes the small pig off any food. In a cage with normal feeding arrangements the big pig gets all the food (poor small pig!). But in this bizarre long cage with the lever-for-food arrangement, a funny thing happens: the big pig ends up as a lever-pressing slave for the small pig, who gets to eat most of the food.

To see why, we need a game-theoretic analysis like the one for the cake. If the little pig pressed the lever, the big pig would start eating the food and the little pig wouldn’t be able to budge her. There’s no incentive for the little pig to press the lever; she doesn’t get any food either way. The big pig, however, faces a different choice: if she presses the lever then she can charge down to the food and knock the little pig out of the way, getting 5 seconds of eating. Pressing is worth it for the big pig, but the outcome is that she does all the running and only gets a quarter of the food.
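The payoffs can be laid out in a small sketch, using the numbers from the story (door open for 20 seconds, a 15-second run from lever to trough):

```python
# Toy payoff table for the pig-and-lever cage, using the numbers from the
# story: the door stays open 20 s and the run from lever to trough takes 15 s.

DOOR_OPEN, RUN_TIME = 20, 15

def payoffs(presser):
    """Return (big pig's food, small pig's food) in seconds of eating."""
    if presser == "small":
        # The big pig waits at the trough and hogs it the whole time; the
        # small pig arrives after her run to find the big pig in the way.
        return DOOR_OPEN, 0
    if presser == "big":
        # The small pig eats alone until the big pig arrives and shoves
        # her off for the remaining seconds.
        return DOOR_OPEN - RUN_TIME, RUN_TIME
    return 0, 0  # nobody presses, nobody eats

for who in ("small", "big", "nobody"):
    print(who, "presses ->", payoffs(who))
```

Pressing gets the big pig 5 seconds of food against 0 for not pressing, so she presses; the small pig gets nothing whether she presses or not, so she waits at the trough. Hence the big pig does the work for a quarter of the food.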

This surprising result is nonetheless a ‘behaviourally stable strategy’, to bastardise a phrase from evolutionary game theory.

Bottom line: even minimally complex environments, and heterogeneities in agents’ abilities and preferences, break simple fairness games. In anything like the real world, as Tom Slee so convincingly shows, choice is not preference.