
tom

Learning, Motor Skill, and Long-Range Correlations

Nourrit-Lucas et al (2015) compare expert and novice performance on a ski-simulator, a complex task which is often used by human movement scientists. Acquiring skilled performance on the ski-simulator requires you to learn a particular form of control as you shift your weight from side to side (van der Pol form of damped oscillations).
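For readers who don't know the van der Pol form mentioned above, here is a minimal sketch of a van der Pol oscillator using simple Euler integration. The parameter values and initial conditions are illustrative only, not those fitted by the authors:

```python
# Euler integration of a van der Pol oscillator:
#   x'' - mu * (1 - x^2) * x' + x = 0
# mu, dt and the initial conditions here are illustrative only.

def van_der_pol(mu=1.0, x0=0.5, v0=0.0, dt=0.01, steps=20000):
    x, v = x0, v0
    trace = []
    for _ in range(steps):
        a = mu * (1 - x * x) * v - x   # nonlinear "damping" term
        x += v * dt
        v += a * dt
        trace.append(x)
    return trace

trace = van_der_pol()
# After transients die out the motion settles on a limit cycle with
# amplitude close to 2, whatever the starting point - the kind of
# self-sustained side-to-side oscillation the ski-simulator demands.
late_amplitude = max(abs(x) for x in trace[-5000:])
```

The interesting property is that the oscillation is self-stabilising: perturb it and it returns to the same characteristic cycle.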

Their main result involved comparing the autocorrelations (which they call serial correlations) of participants’ performance. The participants were instructed to move from side to side as fast, and as widely, as possible, and these movements were motion tracked. The period of the oscillations was extracted and the autocorrelation at different lags calculated (other complexity measures were also calculated, which I ignore here). The autocorrelations for novices were positive for lag 1 and possibly for other short lags, but dropped to zero for longer-range lags (5-30). Experts’ autocorrelations were higher at shorter lags and did not drop to zero for any of the lags examined (showing positive long-range correlations in performance).
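A lag-k autocorrelation is easy to compute; this is my own sketch of the standard estimator, not the authors' code. The toy "drift" series illustrates the point made below: a slowly drifting influence on performance keeps correlations positive at long lags, while independent trial-to-trial noise does not:

```python
import random

# Standard sample autocorrelation of a series at a given lag.
def autocorrelation(series, lag):
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(2000)]  # independent trials
drift = []
level = 0.0
for e in noise:
    level = 0.99 * level + e   # slowly drifting component (AR(1))
    drift.append(level)

short_lag_noise = autocorrelation(noise, 1)   # near zero
long_lag_drift = autocorrelation(drift, 30)   # still clearly positive
```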

Figure 3, Nourrit-Lucas et al (2015)

Nourrit-Lucas et al put an impressive interpretation on their result. It undermines, they say, the idea that motor learning is merely a process of simplification, unification or selection of a single efficient motor programme. Instead, they say, “Expert performance seems characterized by a more complex and structured dynamics than that of novices.”

They link this interpretation to the idea of degeneracy, in which learning is the coordination of a complex network so that multiple functional units become linked so as to all support given outcomes: “This enrichment of neural networks could explain the property of robustness of motor skills, essentially revealed in retention tests, but also the properties of generalizability and transfer”.

They cite modelling by Delignieres & Marmelat (2013) which links the level of degeneracy to the extent of long-range correlations. Whilst they admit that other complex networks are also capable of producing the long-range correlations observed, I would go further and say that a “simple-unitary” model of motor learning might also produce long-range correlations if there were some additional structured noise on performance (e.g. drift in attention or some such). Novices, of course, would also have this influence on their performance, but perhaps it is swamped by the larger variability of their yet-to-be-optimised motor system. I don’t see why the analysis of Nourrit-Lucas et al excludes this interpretation, but I may be missing something.

I also note that their result contrasts with that of van Beers et al (2013), who showed that lag 1 autocorrelations in experts at an aiming task tended towards zero. They interpreted this as evidence of optimal learning (i.e. neither under- nor over-correction of performance based on iterated error feedback). The difference may be explained by the fact that van Beers’ task used an explicit target whilst Nourrit-Lucas’ task lacked any explicit target (merely asking participants to, in effect, “do their best” in making full and fast oscillations).

The most impressive element of the Nourrit-Lucas study is not emphasised in the paper – the expert group was recruited from a group that was trained on the task 10 years previously. In Nourrit-Lucas et al (2013) she shows that despite the ten-year gap the characteristic movement pattern of experts (that damped van der Pol oscillation) is retained – a truly impressive lab demonstration of the adage that you “don’t forget how to ride a bike [or equivalently complex motor task]”.

REFERENCES

Delignieres, D., & Marmelat, V. (2013). Degeneracy and long-range correlations. Chaos: An Interdisciplinary Journal of Nonlinear Science, 23, 043109.

Nourrit-Lucas, D., Tossa, A. O., Zélic, G., & Delignières, D. (2015). Learning, Motor Skill, and Long-Range Correlations. Journal of motor behavior, (ahead-of-print), 1-8.

Nourrit-Lucas, D., Zelic, G., Deschamps, T., Hilpron, M., & Delignieres, D. (2013). Persistent coordination patterns in a complex task after 10 years delay: How validate the old saying “Once you have learned how to ride a bicycle, you never forget!” Human Movement Science, 32, 1365–1378. doi:10.1016/j.humov.2013.07.005

van Beers, R. J., van der Meer, Y., & Veerman, R. M. (2013). What autocorrelation tells us about motor variability: Insights from dart throwing. PloS one, 8(5), e64332.

The Moral Arc

I am seeking suggestions for things to read on a specific topic, which I am struggling to articulate. I would like to read an analysis of how individuals understand their own moral development. Moral philosophers have accounts of what is moral, how it should be understood. This lacks the first person perspective I want to explore – I want to read something that takes seriously the subjective moral life as it is, not as it should be. Experimental philosophers have accounts of differences in people’s responses to moral dilemmas. This is too static – I want to read something that takes seriously our ability to change morally, and particularly to be agents of our own changes in belief. Biographies, particularly of spiritual or political figures, have first person accounts of moral change – why people lost their faith, or changed faith, in deities, parties or principles – but these don’t allow the comparison across people that I’d like.

I wonder if such a book exists. Something like “In a Different Voice”, but with more emphasis on adult development, or The Intellectual Life of the British Working Classes, with a specific focus on moral change.

The motivation is to escape the implicit model of many psychological accounts, which portray people as passive information processors; at their worst as stimulus-response machines, but even at their best as mere suboptimal rational agents. I’d like to think more about people as active moral agents – as having principles which are consciously developed, seriously considered, subject to revision, passionately defended and debated. Then, of course, the trick is to design empirical psychology research which, because it takes this perspective seriously, allows this side of people to manifest rather than denying or denigrating it.

Habits as action sequences: hierarchical action control and changes in outcome value

Dezfouli, Lingawi and Balleine (2014) advocate hierarchical reinforcement learning (hierarchical RL) as a framework for understanding important features of animal action learning.

Hierarchical RL and model-free RL are both capable of coping with complex environments where outcomes may be delayed until a sequence of actions is completed. In these situations simple model-based (goal-directed) RL does not scale. The key difference between hierarchical and model-free RL is that in model-free RL actions are evaluated at each step, whereas in hierarchical RL they are evaluated at the end of an action sequence.

The authors note two features of the development of habits. The concatenation of actions, such that sequences can be units of selection, is predicted by hierarchical RL. The insensitivity of actions to the devaluation of their outcomes is predicted by model-free RL. Here they report experiments, and draw on prior modelling work, to show that hierarchical RL can also lead to outcome-devaluation insensitivity. This encompasses the two features of habit learning under a common mechanism, and renders a purely model-free RL account of action learning redundant. Instead, model-free RL is subsumed within a hierarchical RL controller, which is involved in early learning of action components but later devolves oversight (hence the insensitivity to devaluation).
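A toy sketch of that caching idea (my illustration, not the authors' model): a sequence-level controller stores a single value for the whole action sequence and updates it only from the terminal outcome, so a change in outcome value is invisible at test until the devalued outcome has been re-experienced:

```python
# Sequence-level value caching and devaluation insensitivity (toy sketch).
# The learning rate and trial counts are illustrative only.

ALPHA = 0.1  # learning rate

def train(value, reward, trials):
    # Simple delta-rule update from the terminal outcome of each sequence.
    for _ in range(trials):
        value += ALPHA * (reward - value)
    return value

seq_value = train(0.0, reward=1.0, trials=200)   # extensive training
# The outcome is now devalued, but the cached sequence value is
# untouched at test: the controller never evaluated component actions.
value_at_test = seq_value                         # still near 1.0
seq_value_after = train(seq_value, reward=0.0, trials=200)  # relearns only with experience
```

A step-wise (model-free) controller with access to updated outcome values would adjust sooner; the point of the sketch is just that value attached to a whole sequence cannot respond to mid-sequence changes.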

Hierarchical RL leads to two kinds of action errors, planning errors and action slips (for which they distinguish two types).

Planning errors result from ballistic control, meaning that intervening changes in outcome do not affect the action sequence.
Action slips are also due to ‘open-loop control’, i.e. due to a lack of outcome evaluation for component actions. The first kind is where ballistic control means an action sequence is completed despite a reward being delivered mid-sequence (rendering completion of the sequence irrelevant; see refs 30 and 31 in the original). The second kind of action slip is the ‘capture error’ or ‘strong habit intrusion’, in which a well-rehearsed completion of a sequence runs off from initial action(s) which were intended as part of a different sequence.

I don’t see a fundamental difference between the first type of action slip and the planning error, but that may be my failing.

They note that model-free RL does not predict the specific timing of errors (hierarchical RL predicts errors due to devaluation in the middle of sequences, and habitual intrusions at the joins in sequences, see Botvinick & Bylsma, 2005), and doesn’t predict action slips (as Dezfouli et al define them).

EXPT 1

They use a two stage decision task to show insensitivity to intermediate outcomes in a sequence, in humans.

Quoting Botvinick & Weinstein (2014)’s description of the result, because their own is less clear:
“they observed that when subjects began a trial with the same action that they had used to begin the previous trial, in cases where that previous trial had ended with a reward, subjects were prone to follow up with the same second-step action as well, regardless of the outcome of the first action. And when this occurred, the second action was executed with a brief reaction time, compared to trials where a different second-step action was selected.”

The first action, because it was part of a successful sequence, was reinforced (more likely to be chosen, and quicker), despite the occasions when the intermediate outcome – the one that resulted from that first action – was not successful.

EXPT 2

Rats tested in extinction recover goal-directed control over their actions (as indicated by outcome devaluation having the predicted effect). This is predicted by a normative analysis where habits should only exist when their time/effort saving benefits outweigh the costs.

The authors note that this is “consistent with a report showing that the pattern of neuronal activity, within dorso-lateral striatum that marks the beginning and end of the action sequences during training, is diminished when the reward is removed during extinction [37]”

Discussion

They review evidence for a common locus (the striatum of the basal ganglia) and a common mechanism (dopamine signals) for action valuation and sequence learning, including:
“evidence suggests that the administration of a dopamine antagonist disrupts the chunking of movements into well-integrated sequences in capuchin monkeys [44], which can be reversed by co-administration of a dopamine agonist [45]. In addition, motor chunking appears not to occur in Parkinson’s patients [46] due to a loss of dopaminergic activity in the sensorimotor putamen, which can be restored in patients on L-DOPA [47].”

My memory of this literature is that the evidence on chunking in Parkinson’s is far from convincing or consistent, so I would take these two results with a pinch of salt.

Their conclusion: “This hierarchical view suggests that the development of action sequences and the insensitivity of actions to changes in outcome value are essentially two sides of the same coin, explaining why these two aspects of automatic behaviour involve a shared neural structure.”

REFERENCES

Botvinick, M. M., & Bylsma, L. M. (2005). Distraction and action slips in an everyday task: Evidence for a dynamic representation of task context. Psychonomic bulletin & review, 12(6), 1011-1017.

Botvinick, M., & Weinstein, A. (2014). Model-based hierarchical reinforcement learning and human action control. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1655), 20130480.

Dezfouli, A., Lingawi, N. W., & Balleine, B. W. (2014). Habits as action sequences: hierarchical action control and changes in outcome value. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1655), 20130482.

Limits on claims of optimality

Jarvstad et al (2014) provide a worked illustration showing that it is not straightforward to declare perceptuo-motor decision making optimal, or even more optimal when compared to cognitive decisions.

They note that, in contrast to cognitive-level decisions, perceptuo-motor decisions have been described as optimal or near optimal (Seydell et al, 2008; Trommershäuser et al, 2006). But they also note that perceptuo-motor and cognitive decisions differ in their performance conditions and in the criteria by which they are assessed as optimal. Jarvstad et al (2013) demonstrated that when these differences are eliminated, claims about differences between the domains are harder to substantiate.

In this paper, Jarvstad et al (2014) compare two reaching tasks to explore the notional optimality of human perceptuo-motor performance. They show that minor changes in task parameters can affect whether participants are classified as behaving optimally or not (even if these changes in task parameters do not affect the level of performance of an optimal agent). Specifically, they adjusted the size and distance of the reaching targets in their experiment, without qualitatively altering the experiment (and without altering the instructions or protocol at all).

The bound below which performance is classified as sub-optimal depends on a number of factors. The ease of the task affects it (for easier tasks observed performance will be closer to optimal), but so do the variability of an optimal agent’s performance and the accuracy with which that performance can be estimated. Jarvstad et al conclude that, for this task at least, it is not straightforward to know how changes in an experiment will affect the bounds within which a subject is classified as optimal. They say (p.413):

That statements about optimality are specific and conditional in this way – that is, a behaviour is optimal given a task of this difficulty, and given these capacity constraints included in the optimal agent— may be appreciated by many, however the literatures typically do not make this explicit, and many claims are simply unsustainable once this fact is taken into account.
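A quick illustration of why the benchmark moves with task parameters (my sketch, not Jarvstad et al's actual model): give an "optimal agent" fixed Gaussian motor noise and it hits a wide target far more often than a narrow one, so the same observed hit rate can sit comfortably near the optimal benchmark in one condition and far below it in another:

```python
import random

random.seed(1)

def hit_rate(target_halfwidth, motor_sd=1.0, shots=20000):
    # Monte Carlo estimate of an optimal agent's hit probability:
    # aim at the centre, endpoint scattered by irreducible motor noise.
    hits = sum(abs(random.gauss(0, motor_sd)) <= target_halfwidth
               for _ in range(shots))
    return hits / shots

optimal_wide = hit_rate(target_halfwidth=2.0)    # easy task, ~0.95
optimal_narrow = hit_rate(target_halfwidth=0.5)  # hard task, ~0.38
```

The same subject scoring, say, 70% would look near-optimal against the narrow-target benchmark and badly sub-optimal against the wide one, even though nothing about the subject changed.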

REFERENCES

Jarvstad, A., Hahn, U., Warren, P. A., & Rushton, S. K. (2014). Are perceptuo-motor decisions really more optimal than cognitive decisions?. Cognition, 130(3), 397-416.

Seydell, A., McCann, B. C., Trommershäuser, J., & Knill, D. C. (2008). Learning stochastic reward distributions in a speeded pointing task. The Journal of Neuroscience, 28, 4356–4367.

Trommershäuser, J., Landy, M. S., & Maloney, L. T. (2006). Humans rapidly estimate expected gain in movement planning. Psychological Science, 17, 981–988.

Perceptuo-motor, cognitive, and description-based decision-making seem equally good

Jarvstad et al (2013) show that when perceptuo-motor and ‘cognitive’ decisions are assessed in the same way there are no marked differences in performances.

The context for this is the difference between studies of perceptual-motor and perceptual decision making (which have emphasised the optimality of human performance) and studies of more cognitive choices (for which the ‘heuristics and biases’ tradition has purported to demonstrate substantial departures from optimality).

Jarvstad and colleagues note that experiments in these two domains differ in several important ways. One is the difference between basing decisions on probabilities derived from descriptions versus derived from experience (which has its own literature; Hertwig & Erev, 2009). Another is that perceptual-motor tasks often involve extensive training, with feedback, whereas cognitive decision-making tasks are often one-shot and/or without feedback.

The definition of optimality employed also varies across the domains. Perceptual-motor tasks usually compare performance to that of an optimal agent, often modelled as incorporating some constraints on task performance (e.g. motor noise). Cognitive tasks have often sought to compare performance to the standard of rational utility maximisers, designing the choices in the experiments precisely to demonstrate violations of the axioms on which rational choice rests (e.g. transitivity).

In short, claiming a difference in decision making across these two domains may be premature if other influences on both task performance and task assessment are not comparable.

To carry out a test of performance in the two domains, Jarvstad et al ran the following experiment. They compared a manual aiming task (A) and a numerical arithmetic task (B). During a learning phase they assessed variability on the two tasks (i.e. the frequency and range of error in physical (A) or numerical (B) distance). Both kinds of stimuli varied in the ease with which the required response could be successfully produced (i.e. they varied in difficulty). They also elicited explicit judgements of the stimuli that participants thought would match set levels of success (e.g. a 50% or a 75% chance of getting it right).

During a decision phase they asked participants to choose between pairs of stimuli with different rewards (upon success) and different difficulties. Importantly, the difficulties were chosen – using the data provided by the learning phase – so as to match certain explicit probabilities (such as might be provided in a traditional decision-from-description experiment on risky choice). They also tested such decisions from explicit probabilities directly, in a task labelled ‘C’.

The results show that all three tasks had a comparable proportion of decisions which were optimal, in the sense of maximising the chance of reward (Fig 3A). For all three tasks, more optimal decisions were made on those decisions which were more consequential (i.e. which had a bigger opportunity cost and which, consequently, were presumably easier to discriminate between; Fig 3B – shown).

Fig 3B, Jarvstad et al (2013)

Using individual participant data, it is possible to recover – via model fitting – the subjective weights for the value and probability functions. These show an underweighting of low objective probabilities in the perceptual-motor task (Fig 4D) and an overweighting of low objective probabilities in the classical probability-from-description task (Fig 4F). This is in line with previous literature reporting a divergence between the domains in the way low-probability events are treated (Hertwig et al, 2004). However, Jarvstad et al use the explicit judgements obtained in the learning phase to show that the apparent discrepancy in weighting results from differences in the subjective probability function (i.e. how likely success is judged to be in the perceptual-motor domain) rather than in the weighting given to this probability. If probability estimations are held constant, then similar weightings of low-probability events are found across the domains.
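To make the over/underweighting idea concrete, here is the standard one-parameter Tversky-Kahneman weighting function, w(p) = p^g / (p^g + (1-p)^g)^(1/g). This is an illustration only; the functional forms Jarvstad et al actually fitted may differ:

```python
# One-parameter probability weighting function (Tversky & Kahneman form).
# g < 1 overweights small probabilities; g > 1 underweights them.

def weight(p, g):
    num = p ** g
    den = (p ** g + (1 - p) ** g) ** (1 / g)
    return num / den

overweighted = weight(0.05, 0.6)   # classic description-based pattern, > 0.05
underweighted = weight(0.05, 1.4)  # the perceptual-motor pattern, < 0.05
```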

They also show that an individual’s performance on a task is better predicted by their performance on a task in a different domain than by the average performance in that domain – i.e. individual differences matter more than task differences for the nature and extent of divergence from optimality.

REFERENCES

Jarvstad, A., Hahn, U., Rushton, S. K., & Warren, P. A. (2013). Perceptuo-motor, cognitive, and description-based decision-making seem equally good. Proceedings of the National Academy of Sciences, 110(40), 16271-16276.

Hertwig R, Barron G, Weber EU, Erev I (2004) Decisions from experience and the effect of rare events in risky choice. Psychol Sci 15(8):534–539.

Hertwig R, Erev I (2009) The description-experience gap in risky choice. Trends Cogn Sci 13(12):517–523.

The publication and reproducibility challenges of shared data

Poldrack and Poline’s new paper in TICS (2015) asserts pretty clearly that the field of neuroimaging is behind on open science. Data and analysis code are rarely shared, despite the clear need: studies are often underpowered, and there are multiple possible analytic paths.

They offer some guidelines for best practice around data sharing and re-analysis:

  • Recognise that researcher error is not fraud
  • Share analysis code, as well as data
  • Distinguish ‘Empirical irreproducibility’ (failure to replicate a finding on the original researchers’ own terms) from ‘interpretative irreproducibility’ (failure to endorse the original researchers’ conclusions based on a difference of, e.g., analytic method)

They also offer three useful best-practice guidelines for any researchers who are thinking of blogging a reanalysis based on other researchers’ data (as Russ Poldrack has himself):

  • Contact the original authors before publishing to give them right of reply
  • Share your analysis code, along with your conclusions
  • Allow comments

And there are some useful comments about authorship rights for research based on open data. Providing the original data alone should not entitle you to authorship on subsequent papers (unless you have also contributed significant expertise to a re-analysis). Rather, it would be better if the researchers contributing data to an open repository publish a data paper which can be cited by anyone performing additional analyses.

REFERENCE

Poldrack, R. A., & Poline, J. B. (2015). The publication and reproducibility challenges of shared data. Trends in Cognitive Sciences, 19(2), 59–61.

Light offsets as reinforcing as light onsets

Further support that surprise and not novelty supports sensory reinforcement comes from the evidence that light offsets are more-or-less as good reinforcers as light onsets (Glow, 1970; Russell and Glow, 1974). But in the case of light offset, where is the “novel” stimulus that acts as a reinforcer (by supposedly triggering dopamine)? In this case it is even more clear that it is the unexpectedness of the event (surprise), not the novelty of the stimulus (which is absent), that is at play.

From Barto, A., Mirolli, M., & Baldassarre, G. (2013). Novelty or surprise?. Frontiers in psychology, 4.

REFERENCES

Glow P. (1970). Some acquisition and performance characteristics of response contingent sensory reinforcement in the rat. Aust. J. Psychol. 22, 145–154 10.1080/00049537008254568

Russell A., Glow P. (1974). Some effects of short-term immediate prior exposure to light change on responding for light change. Learn. Behav. 2, 262–266 10.3758/BF03199191

Animal analogs of human biases and suboptimal choice

Zentall (2015) summarises a rich literature on experiments showing that analogues of canonical human biases exist in animals. Specifically, he takes the phenomena of

  • justification of effort: rewards which require more effort are overweighted
  • sunk cost fallacy: past effort is weighted in evaluation of future rewards
  • less-is-more effect: high value rewards are valued less if presented along with a low value reward
  • risk neglect: overweighting of low probability but high value rewards
  • base rate neglect: e.g. over-reliance on events which are likely to be false positives

The demonstration of all these phenomena in animals (pigeons and dogs, in Zentall’s own research) presents a challenge to explanations of these biases in human choice. It suggests they are unlikely to be the result of cultural conditioning, social pressure or experience, or elaborate theories (such as theories of probability or cosmic coincidence in the case of suboptimal choice regarding probabilities, see Blanchard, Wilke and Hayden, 2014).

Zentall suggests that these demonstrations compel us to consider that suboptimal choice in the laboratory can only exist because of some adaptive value in the wild – with either common mechanisms underlying multiple biases, or the independent evolution of each bias/heuristic in a separate module. At the end of the paper he presents some loose speculations on the possible adaptive benefit of each of the discussed biases.

Three interesting recent results from Zentall’s lab concern risky choice in pigeons:

1. Laude et al (2014) showed that for individual pigeons there was a correlation between degree of suboptimal choice on a gambling task (overweighting of rare but large rewards) and impulsivity as measured by a delay discounting task. As well as seeming to show ‘individual differences’ in pigeon personality, it suggests the possibility of some common factors in these two kinds of choices (choices which experimental human work has found to be dissociable in various ways)

2. Zentall and Stagner (2011) show that conditioned reinforcers (stimuli which predict reward) are critical in the gambling task (for pigeons). Without these intermediate stimuli, when actions lead directly to reward (still under the same probabilities of outcome), pigeons choose optimally. Zentall suggests that a thought experiment on the human case confirms the generality of this result. Would slot machines be popular without the spinning wheels? Or (my suggestion) the lottery without the ball machine? My speculation is that the promise of insight into the causal mechanism governing the outcome is important. We know that human and non-human animals are guided by intrinsic motivation as well as the promise of material rewards (i.e. as well as being extrinsically motivated). Rats, for example, will press a lever to turn a light on or off, in the absence of the food reward normally used to train lever pressing (Kish, 1955). One plausible explanation for results like this is that our learning systems are configured to seek control or understanding of the world – to be driven by mere curiosity – in order to generate exploratory actions which will, in the long term, have adaptive benefit. Given this, it makes sense if situations where there is the possibility of causal insight – as with the intermediate stimuli in the gambling task – can inspire actions which are less focussed on exploiting known probabilities (i.e. are ‘exploratory’, in some loose sense), even if the promise of causal insight is illusory and the exploratory actions are, as defined by the experiment, futile and suboptimal.

3. Pattison, Laude and Zentall (2013) showed that pigeons who were given the opportunity for social interaction (with other pigeons) were less likely to choose the improbable large-reward action over a lower expected value but more certain reward. Zentall’s suggestion is that the experience of social interaction diminishes the perceived magnitude of the improbable reward, making it seem a less attractive choice (which makes sense if neglect of the probability and focus on the magnitude is part of the dynamic driving suboptimal choice in this gambling task). Whatever the reason, the result is a reminder that the choices of animals – human and non-human – cannot be studied in isolation from the experience and environment of the organism. This may sound obvious, but discussions of problematic choices (think gambling, or drug use) often conceptualise behaviours as compelled, part of an immutable biological (addiction as disease) or chemical (drugs as inevitably producing catastrophic addiction) destiny. This result, and others (remember Rat Park), give the lie to that characterisation.

REFERENCES

Blackburn, M., & El-Deredy, W. (2013). The future is risky: Discounting of delayed and uncertain outcomes. Behavioural processes, 94, 9-18.

Blanchard, T. C., Wilke, A., & Hayden, B. Y. (2014). Hot-hand bias in rhesus monkeys. Journal of Experimental Psychology: Animal Learning and Cognition, 40(3), 280-286.

Kish, G. B. (1955). Learning when the onset of illumination is used as the reinforcing stimulus. Journal of Comparative and Physiological Psychology, 48(4), 261–264.

Laude, J.R., Beckmann, J.S., Daniels, C.W., Zentall, T.R., 2014. Impulsivity affects gambling-like choice by pigeons. J. Exp. Psychol. Anim. Behav. Process. 40, 2–11.

Pattison, K.F., Laude, J.R., Zentall, T.R., 2013. Social enrichment affects suboptimal, risky, gambling-like choice by pigeons. Anim. Cogn. 16, 429–434.

Zentall, T.R., Stagner, J.P., 2011. Maladaptive choice behavior by pigeons: an animal analog of gambling (sub-optimal human decision making behavior). Proc. R. Soc. B: Biol. Sci. 278, 1203–1208.

Zentall, T. R. (2015). When animals misbehave: analogs of human biases and suboptimal choice. Behavioural processes, 112, 3-13.

Discounting of delayed and uncertain outcomes

Blackburn & El-Deredy (2013) provide a nice review of the literature on temporal (delay) and probabilistic discounting. They note these features of similarity and difference:

  • Both follow hyperbolic (rather than exponential) discount functions
  • Not correlated: impulsivity might be interpreted as steeper temporal discounting, and shallower probabilistic discounting, but steep temporal discounters don’t appear to be shallow probabilistic discounters
  • Magnitude effect opposite: for probabilistic rewards, larger rewards are more steeply discounted, for delayed rewards, larger rewards are more shallowly discounted
  • Sign effect the same: gains vs losses affect discounting similarly for temporal and probabilistic discounting (gains are more steeply discounted).

To this list their experiments add a dissociable effect of ‘uncertain outcomes’ (all or nothing) vs ‘uncertain amounts’ (graded reward).
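The two discount functions in the first bullet have standard textbook forms, V = A/(1 + kD) (hyperbolic) and V = A·e^(−kD) (exponential). A quick sketch, with illustrative parameter values, of why the hyperbolic form matters behaviourally: it produces the well-known preference reversals that the exponential form cannot:

```python
import math

def hyperbolic(amount, k, delay):
    return amount / (1 + k * delay)

def exponential(amount, k, delay):
    return amount * math.exp(-k * delay)

# Preference reversal under hyperbolic discounting: the smaller-sooner
# reward wins when both options are close, but the larger-later reward
# wins when both are pushed 10 units further into the future.
near_small = hyperbolic(50, k=0.5, delay=1)    # ~33.3
near_large = hyperbolic(100, k=0.5, delay=5)   # ~28.6
far_small = hyperbolic(50, k=0.5, delay=11)    # ~7.7
far_large = hyperbolic(100, k=0.5, delay=15)   # ~11.8
```

With exponential discounting the ratio of the two values is fixed by the delay difference alone, so no such reversal occurs.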

Blackburn, M., & El-Deredy, W. (2013). The future is risky: Discounting of delayed and uncertain outcomes. Behavioural processes, 94, 9-18.

Permanent Zero

‘Email overload’ is one of those phrases everyone thinks they know the meaning of: “I get too many emails!”. Last autumn I met Steve Whittaker, who has a reasonable claim to have actually coined the phrase, way back in 1996. He explained to me that the point wasn’t to say that we get too much email, but that email is used for too many different things. We’re using it to send messages, receive messages, get notifications, schedule tasks, chat, delegate tasks, archive information, and so on forever.

Shifting the focus from email as a number of individual messages (too many!) to email as a set of functions (still too many!) lets you see why the ‘Inbox Zero’ idea doesn’t quite work. Inbox Zero appeals to my sense of being in control of my email, and it is better for me than not having a righteous scheduling system for my email, but it doesn’t split apart the multiple functions for which I use email.

Now, for you today, I’d like to share my newest strategy for managing my email, which is inspired by Whittaker’s ‘Email overload’ distinction.

The first thing to do is to separate off the single largest function of email – receiving messages – from the others. You need to stop emails arriving in your inbox, leaving you free to send and search without distraction. Create a filter and have all incoming mail moved to a separate folder. Now stare in satisfaction at “You have no new email!” in your inbox. Schedule a time to go to your received-mail folder and kill as many emails as you can, using your favourite Inbox Zero strategies (pro tip: if you send emails at 4.30 you minimise the chances of someone replying that day). Now your workflow which only involves sending messages and dealing with old messages isn’t tangled up with the distraction of receiving new messages.

Next, separate off all email that isn’t personal correspondence. Set a second filter which removes all email without your email address in the ‘to’ or ‘cc’ fields. These are circulars. You can scan the titles and delete en masse.

If you are using gmail, you can import these filters (after editing to make relevant adjustments).
remove from inbox, unless sent to ‘exception’ address
remove all circulars
Right click to ‘save as’, they won’t show up in a browser. Note that my new folders begin with ‘A_’ so they are top of my alphabetised folder list.
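For reference, Gmail imports and exports filters as an Atom XML file (via Settings > Filters > Import/Export). A minimal sketch of the first filter, the one that archives everything not sent directly to you, might look like the following. The address and the ‘A_Received’ label are placeholders, so substitute your own before importing:

```xml
<?xml version='1.0' encoding='UTF-8'?>
<feed xmlns='http://www.w3.org/2005/Atom'
      xmlns:apps='http://schemas.google.com/apps/2006'>
  <title>Mail Filters</title>
  <entry>
    <category term='filter'/>
    <title>Mail Filter</title>
    <content/>
    <!-- Match anything NOT addressed to the exception address -->
    <apps:property name='hasTheWord' value='-to:YOUR_EXCEPTION_ADDRESS'/>
    <!-- 'Archive' = skip the inbox -->
    <apps:property name='shouldArchive' value='true'/>
    <!-- File it under a label that sorts to the top of the folder list -->
    <apps:property name='label' value='A_Received'/>
  </entry>
</feed>
```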

Peak grain

Here, a graph of population size in England, 850-1550; a “speculative reconstruction” from Dyer’s “Making A Living in the Middle Ages”:


Note the exuberant growth of 1150-1300. What a hundred and fifty years to be alive! The population more than doubled! Towns, cities, commerce, a relentless pace of change unlike anything that had come before.

This growth slowed even before famine (1315-22) and plague (1348-50) caused such precipitous drops in population. Dyer isn’t clear why growth came to an end: perhaps crop yields collapsed, after a century of intensive farming – a generational shift in the ability to extract energy (and one more thing that makes the time analogous to our own).

And after 1350, what a world to live in. How did it feel? An end of days? The old regimes collapsing with new men free to make a new order amid the ruins? In 1381 a two-month cry of freedom, Englishmen demanding an end to aristocracy and autonomous government by villages under the king. Where did that come from? And what remained of it after Wat Tyler and John Ball’s heads were on spikes?

Reference: Dyer, C. (2002). Making a living in the middle ages: the people of Britain 850-1520. Yale University Press.

Values vs Finances

I attended a University meeting recently, an open forum to discuss our strategy and vision. My small group spent most of its time talking about the conflict between values and finances. Values we might aspire to – things like helping fight climate change – and finances – the constraints from ‘the bottom line’, the need to recognise the costs of different actions. Something about how the group settled on this dichotomy disturbed me. It wasn’t that there weren’t intelligent people in the group, who made good points, but I left with the inarticulate feeling that there was something wrong with the framing of the discussion we had. I’ve been thinking about it for over a week, and I’m now a bit closer to figuring out some of the problems with the idea that values come into conflict with finances.

The first problem with this false opposition is that it positions values as a luxury, something we can only afford to think about if we service the necessity of finances. Rather, values are the necessity – and prior to any consideration of finances. How can you decide on any action unless you know what you want, and what you value? This is impossible for a person, or an institution. Sure, we have some givens – Universities teach and do research – but I’d argue they reflect implicit values which we need to articulate. Only once we know what values we share can we then start to decide what we want to do, and only then can we start to cost those actions.

The second problem with putting finances in opposition to values is that it reifies an abstract notion and gives the false impression that ‘finances’ are somehow simple and concrete. In fact, even if the University unwisely adopted the corporate directive to maximise profits, that would not unpack into a clear decision strategy. Over the complex space of possible timescales, and possible strategies, and possible changes in the environment, it isn’t clear at all what actions will maximise profits. You need a sense of your mission even if you are trying to maximise profits – which we aren’t.

My sense is that in the discussion people referred to ‘finances’ as a proxy for external constraints. We’d like to teach for free, but lecturers and buildings cost money etc. My objection to vaguely referring to ‘finances’ is that it stops detailed discussion of specific external constraints – not all of which are financial (for example we’d like to recruit the best research staff from around the world, but visa restrictions hamper this).

My third and final issue with the opposition of values and finances is that it positions values as flexible – things we’ll set within whatever wiggle room finances affords us – but finances as fixed. But Universities are big enough players to change the environment within which they operate. We all are, especially through the power of collective action. Fees, funding, visa restrictions are all negotiable. We, as a society, and as a University which should play a role in shaping society, decide on how these things work. We should articulate our values and take part in doing that. I reject a fatalistic submission to the way the world is – which is often what homage to finances reflects. A ‘there is no alternative’ nihilism which promotes passivity.

Reflections on No Picnic

[A reconstruction of what I wanted to say, and what I actually did say, at the launch of the book ‘No Picnic’ on 27th May 2014. Hardcopies of the book and commentaries – including this one – are available by PayPalling £5 to webmaster@einekleine.com]

I’ve just left a University meeting where someone made an impassioned protest about the number of duties academics have. They were still despairing about the amount of work we’re asked to do, as I left to get to my bike so I could cycle here.

On the way I passed a new development of luxury student flats named “impact”. A cruel pun on the need to justify research, I wondered?

I work as an experimental psychologist, and so, as I rolled down the hill, my thoughts returned to the research that occupies so much of my time, research I’ve been doing on learning and learning curves.

But as I arrived at No Picnic these thoughts also fell away and I turned to think about failure.

My failure.

You see, I was originally part of the Furnace Park project. In the book, Matt says some kind words about me not being able to continue being involved because I had a newborn daughter. And it’s true, I do have a daughter and that does fill up your time. But the truth is that it wasn’t just that which meant that I dropped out of the project. Really it was a question of priorities. I was focused on my research on learning curves, about writing grants and publishing papers, with a limited amount of work time. Furnace Park just…fell off the edge of the things I could do.

So I was thinking about my failure to be involved, and about the instrumentalism – the need for results – which structured my time so that I decided I couldn’t afford to be involved.

And instrumentalism turned my thoughts to my first academic job. You see I’m a recovering social psychologist, and my first job after my PhD was on a project looking at brownfield land. Brownfield land is previously used land, like Furnace Park. Previously used land can be polluted, but possible harm from that pollution is always a risk, rather than a certainty, and people think about risks in funny ways – hence my involvement as a psychologist.

One thing we looked at was who the public trusted to tell them about risk. Was it the media, local government, pressure groups or scientists? We found that the expertise of the person giving the information was nearly irrelevant – people trusted information from people they thought were on their side, regardless of whether they were qualified to judge the risks.

One day, as part of this project, I was on a site visit to a housing estate which had been built on or near polluted land. The residents of the estate were understandably upset when they discovered the extent of the pollution and were pressing for a clean-up – a clean-up of great expense and uncertain efficacy. I was being driven around the site by the chief planning officer at the local council.

“They say to me, Tom,”, he said, “they say to me ‘how much is a human life worth, eh? How much is a human life worth?'”

“What I don’t tell them is that according to us it is exactly four hundred and seventy five thousand pounds”

Instrumentalism!

Another thing I learnt from that project is that it is a myth that brownfield sites are barren and greenfield sites are always more important to protect because of the richness of the habitat. As you can see from places like Furnace Park, although left unused – often because unused – brownfield sites can become vibrant ecologies.

Thinking of this turned my mind to something Vaclav Havel once said. He was a Czech dissident in the days of the Soviet Union. He wrote samizdat – typed and illicitly copied essays which were clandestinely circulated. In those days you had to know the right people to get hold of his writing (perhaps like the No Picnic book). In the 90s I could buy his writings in a book. Now you can find them all on the internet.

In one of his essays Havel writes about the value of art which isn’t aligned with the objectives of the state – purposeless culture. He says that, like the ecologies of the natural world, these ecologies of culture must be conserved and cultivated. You never know, he argued, where the thing you need most is going to come from. You never know when you’ll need to draw on the resources and wisdom stored in such a niche.

I couldn’t find that passage flicking through my copy of “Living in Truth” however.

Another passage that stuck in my mind concerns Havel’s writing on what he called the Post Totalitarian System. These, he said, were societies, both East and West, where the need for direct repression has passed. Here, he said, every person’s attention was kept nailed to the floor of their self-interest. Control was maintained by material comforts, and the fear of sticking out.

I couldn’t find that passage either. Perhaps it is in his “Letters to Olga”.

Instead, I found this passage, from his essay “Politics and Conscience”:

“As all I have said suggests, it seems to me that all of us, East and West, face one fundamental task from which all else should follow. That task is one of resisting vigilantly, thoughtfully, and attentively, but at the same time with total dedication, at every step and everywhere, the irrational momentum of anonymous, impersonal, and inhuman power – the power of ideologies, systems, apparat, bureaucracy, artificial languages, and political slogans. We must resist its complex and wholly alienating pressure, whether it takes the form of consumption, advertising, repression, technology, or cliché”

And that is the end of my meander in thought from the University, to learning, to instrumentalism, to ecology, to dissident publishing, and so to No Picnic. The book reminded me of the importance of spaces outside of the narrow instrumentalism that rules so much of my life, and it is a true testimony to a particular place, at a particular moment, with particular people. I look forward to reading it again.

An Open Letter to Andrew Dodman

Dear Andrew Dodman,
Director of Human Resources,
University of Sheffield

Our University has a problem with inequality. Standard undergraduate student fees have leapt three-fold since 2012, with the average debts of a graduating student around £44,000 [1]. Our Vice Chancellor took home £370,000 last year, whilst the University benefits from the zero-hours and short-term contracts of many staff who are intimately involved in the administrative and intellectual life of the institution. In the middle, academics with open-ended contracts, of whom I am one, have suffered years of below-inflation pay rises [2].

This is the context for the current University and College Union (UCU) action short of a strike – a boycott on assessment by Union members, voted for by the largest turnout in the Union’s history, in support of protecting pensions – another area in which unjustified cuts are planned which will profit those who have, and squeeze those who have not. Directly, these plans will reduce pensions for current staff; they will also impact the students and wider public to whom the University is obligated, who will get less from demoralised, under-rewarded and over-managed University lecturers.

You have announced that all University of Sheffield staff participating in the action will be docked 25% pay. I would like to request that all the savings made from cutting my pay are redistributed to my students in the form of a fee rebate. The University shouldn’t profit from action staff are taking in the name of a fair reward for working here, and students deserve some compensation.

Yours

Tom

Tom Stafford
Lecturer in Psychology and Cognitive Science
University of Sheffield

[1] http://www.independent.co.uk/student/news/73-of-todays-students-will-still-be-paying-off-their-tuition-fees-in-their-50s-9249258.html

[2] http://www.timeshighereducation.co.uk/news/v-c-pay-105000-rise-for-head-as-staff-denied-living-wage/2010736.article

Teaching critical thinking: What if? What if not?

I am teaching a course where I ask students to critically review papers reporting psychology experiments. It is making me question how you try to teach a skill as fundamental as critical thinking. I am trying to do it by example, walking the students through my reading of each paper, showing them what I do and consider when looking at it, and then providing them with a model answer (which is identical in structure to the final coursework I will require of them).

Reading a few practice questions students have handed in, I’m struck that there is a habit I want my students to acquire, which they haven’t quite got yet, and which I don’t even know the name of. Which is why I write this here, to ask you – Kind Interwebs – if you know what it is I am about to talk about and how I can convey it best to people taking my course.

This habit is that of asking the skeptical follow up questions to every proposal they make. So, for example, most undergraduate psychology students will have up their sleeve a well rehearsed list of possible flaws in experiments. Things like: was the sample representative of the population? Are there confounds in the experiment which prevent you cleanly inferring the causation?

It is nice, but inadequate, to write a review of a paper listing flaws like this. Doing so does not constitute a useful or interesting critical review (or, on my course, a gradeworthy one).

What I would like is to encourage my students to go the extra mile and, having spotted a potential flaw, assess it for plausibility, and consider what it would mean if the flaw was a significant one (i.e. how does it limit our interpretation of the current experiment, and what does it mean for future possible experiments?).

Here’s a concrete example: one paper I am teaching is one of my own, which looked at how students’ use of a course wiki predicted final exam score. A student suggested that because students knew their wiki use was being monitored, demand effects may have played a role (demand effects are a classic psychology experiment confound: participants distort their behaviour according to what they think you want to find). Now this is fair enough, but there are a number of follow-up questions.

What is required for this to be the case?
That students were able and willing to alter their final exam score based on their wiki use, but not because of it, perhaps. This seems implausible.

What is implied if it is the case? If the demand effect did hold, would it even mean that the wiki use wasn’t effective? For example, we might decide that since students can’t easily score higher on exams at a whim, even an effect via demand was an effect worth having.

How could I test if it is the case? Demand effects may hold, but how could we tell if they do hold?

What if it isn’t the case? Imagine two worlds, with and without demand effects. What are the crucial differences between them, and what implications do these differences have for our interpretation of the experiment, or for further research? If there are no major differences, maybe we don’t need to worry about demand effects.

I pick demand effects because I wanted to use a specific example, but my aim is to encourage students to deploy these questions about every possible flaw or improvement that they suggest. My question today, though, is: is there a general principle which students could follow to guide them in asking these kinds of skeptical follow-up questions? It seems like there isn’t anything too domain-specific about this, so even if you aren’t an expert in psychology experiments you could semi-independently develop this skill of probing the logical structure of claims about an interpretation. It also seems that, cognitively, such thinking puts a heavy demand on your working memory, since it consists of layers and iterations of hypotheticals and counter-factuals. This makes it extra hard; is there any way to make it easier?

If there is no general principle, it may be that I (and my students) are stuck with going through worked examples. I’m seeking short cuts up the mountain.

Quote #301

We’ve been led into a culture that has been engineered to leave us tired, hungry for indulgence, willing to pay a lot for convenience and entertainment, and most importantly, vaguely dissatisfied with our lives so that we continue wanting things we don’t have. We buy so much because it always seems like something is still missing

David Cain, in Your Lifestyle Has Already Been Designed

Quote #300: Graeber on the popular appeal of the right

One of the perennial complaints of the progressive left is that so many working-class Americans vote against their own economic interests—actively supporting Republican candidates who promise to slash programs that provide their families with heating oil, who savage their schools and privatize their Medicare. To some degree the reason is simply that the scraps the Democratic Party is now willing to throw its “base” at this point are so paltry it’s hard not to see their offers as an insult: especially when it comes down to the Bill Clinton– or Barack Obama–style argument “we’re not really going to fight for you, but then, why should we? It’s not really in our self-interest when we know you have no choice but to vote for us anyway.” Still, while this may be a compelling reason to avoid voting altogether—and, indeed, most working Americans have long since given up on the electoral process—it doesn’t explain voting for the other side.

The only way to explain this is not that they are somehow confused about their self-interest, but that they are indignant at the very idea that self-interest is all that politics could ever be about. The rhetoric of austerity, of “shared sacrifice” to save one’s children from the terrible consequences of government debt, might be a cynical lie, just a way of distributing even more wealth to the 1 percent, but such rhetoric at least gives ordinary people a certain credit for nobility. At a time when, for most Americans, there really isn’t anything around them worth calling a “community,” at least this is something they can do for everybody else.

The moment we realize that most Americans are not cynics, the appeal of right-wing populism becomes much easier to understand. It comes, often enough, surrounded by the most vile sorts of racism, sexism, homophobia. But what lies behind it is a genuine indignation at being cut off from the means for doing good.

Take two of the most familiar rallying cries of the populist right: hatred of the “cultural elite” and constant calls to “support our troops.” On the surface, it seems these would have nothing to do with each other. In fact, they are profoundly linked. It might seem strange that so many working-class Americans would resent that fraction of the 1 percent who work in the culture industry more than they do oil tycoons and HMO executives, but it actually represents a fairly realistic assessment of their situation: an air conditioner repairman from Nebraska is aware that while it is exceedingly unlikely that his child would ever become CEO of a large corporation, it could possibly happen; but it’s utterly unimaginable that she will ever become an international human rights lawyer or drama critic for The New York Times. Most obviously, if you wish to pursue a career that isn’t simply for the money—a career in the arts, in politics, social welfare, journalism, that is, a life dedicated to pursuing some value other than money, whether that be the pursuit of truth, beauty, charity—for the first year or two, your employers will simply refuse to pay you. As I myself discovered on graduating college, an impenetrable bastion of unpaid internships places any such careers permanently outside the reach of anyone who can’t fund several years’ free residence in a city like New York or San Francisco—which, most obviously, immediately eliminates any child of the working class. What this means in practice is that not only do the children of this (increasingly in-marrying, exclusive) class of sophisticates see most working-class Americans as so many knuckle-dragging cavemen, which is infuriating enough, but that they have developed a clever system to monopolize, for their own children, all lines of work where one can both earn a decent living and also pursue something selfless or noble. 
If an air conditioner repairman’s daughter does aspire to a career where she can serve some calling higher than herself, she really only has two realistic options: she can work for her local church, or she can join the army.

This was, I am convinced, the secret of the peculiar popular appeal of George W. Bush, a man born to one of the richest families in America: he talked, and acted, like a man that felt more comfortable around soldiers than professors. The militant anti-intellectualism of the populist right is more than merely a rejection of the authority of the professional-managerial class (who, for most working-class Americans, are more likely to have immediate power over their lives than CEOs), it’s also a protest against a class that they see as trying to monopolize for itself the means to live a life dedicated to anything other than material self-interest. Watching liberals express bewilderment that they thus seem to be acting against their own self-interest—by not accepting a few material scraps they are offered by Democratic candidates—presumably only makes matters worse.

David Graeber (2013), The Democracy Project: A History. A Crisis. A Movement., p123-125

link to June 2014

I’ve been listening to my tapes again

It’s now clear that CDs were a massive backwards move for personal music. My CDs are scratched, my CD players have conked out, one by one. I sit surrounded by dead storage media, while my tapes and tape players play as loud and clear as ever.

The popularity of low-quality mp3s reveals the myth of fidelity that helped us buy into the CD hype. As with virtual reality, we let ourselves be fooled into believing that the primary thing that matters is high resolution (kbps, frames per second, pixels per inch, 3D, etc). CDs might have some sound quality edge over tape, but in terms of immersion, quality is irrelevant. Fluidity of action drives immersion in VR. With music, the relationship we have to the production and consumption is primary; the history of obtaining, retaining, playing and enjoying.

With music, CDs accelerated us along the path that eroded music-as-object. This creates a vacuum in the emotional life of our music collections. The CD gives you shuffle, destroying the higher order of sequencing in favour of the individuality of tracks. You can skip in an instant, removing the distance tapes impose via the effort of holding down the fast-forward key. CDs are fickle towards their digital memories, all too ready to give up to scratching, skipping or fatal “NO DISC” load failures. Frankly, less than 20 years after I bought my first CD, too many of them don’t f****ing work.

The tapes still work. I recognise my handwriting on the track listing, anticipate the start of each track from the end of the one that invariably came before. Certain artists are forever bound in my memory by accident of being taped onto opposite sides of the same tape. My hand knows the weight of a tape. Somewhere in my motor cortex a dedicated network of neurons stores the pattern which allows me to stab STOP/EJECT, slip out a C90, spin it around between thumb and index finger, reinsert, slam shut the holder and stab play, all within half a second.

Music-as-object limits our choices. With a tape, if you want to skip more tracks you have to wait longer for the tape to wind forward. If you want to change your selection you need to stand up and find another tape you want to listen to. If you want to make a mix tape, you’ve only got 45 minutes a side, say, within which to do it.

The tape gives freedom through constraint in a way that is a release for anyone who has sat in front of Spotify, mouse over the search bar, thinking “a million million songs at my fingertips and I can’t think of anything I want to listen to”. Once, I could only listen to the music I had on tape (and a radio, without any pause or replay). Now I can spend 10 minutes listening to the first thirty seconds of 20 songs from a selection wider than the sky. It’s like a music diet consisting entirely of crisps.

Sometimes less is more.

ASMR: Don’t ask

A couple of years ago a nice man called Rhodri asked me about ASMR. I didn’t know anything about the phenomenon, but I was willing to comment as an experimental psychologist. The interesting thing to me was that this is a subjective experience that many people seemed to recognise, but it had no official name (until people started calling it ASMR and finding each other on the internet). “Could this be a real thing?” asked Rhodri. “Sure”, I said, it’s perfectly possible that something could be real (common across people, not based on imagination or lies) and yet scientifically invisible. Maybe, I thought, now someone will look into this phenomenon and find ways of measuring it.

Since then, as far as I am aware, there hasn’t been any research on ASMR, but interest in the phenomenon grows and grows. I wrote a column about ASMR for the BBC. There’s even a wikipedia page, and yours truly is currently quoted near the top. Because of these I regularly have people with ASMR and assorted journalist types contacting me for my opinions on ASMR.

It makes me kind of sad to say, but I actually have no further opinions on ASMR. I don’t have anything extra to say than I said to Rhodri and in my column. I don’t follow the research on ASMR (if there is any now), and I have never done any research on ASMR. I only opened my big mouth in the first place because the thing that interests me is how subjective experience is turned into social facts. As an experimental psychologist, that’s what I do, and ASMR is an example of something that might be a real subjective experience that we can observe in the process of being turned into a socially accepted fact. That’s the thing that is interesting to me.

Regretfully, I have to refuse all opportunities to talk about ASMR itself because I literally have nothing more to say. Sorry.

Reason is no mere slave

“Human beings are not the perfectly rational creatures they would be if they strove for truth and consistency at all times. Nevertheless, if we can be motivated by a desire to eliminate inconsistency in our beliefs and actions, reason is no mere slave. We may use reason to enable us to satisfy our needs, but reason then develops its own motivating force”

Peter Singer (1981). The Expanding Circle: Ethics and Sociobiology, p. 143

On the epistemic costs of implicit bias

“…if you live in a society structured by racial categories that you disavow, either you must pay the epistemic cost of failing to encode certain sorts of base-rate or background information about cultural categories, or you must expend epistemic energy regulating the inevitable associations to which that information – encoded in ways to guarantee availability – gives rise”

Gendler, T. S. (2011). On the epistemic costs of implicit bias. Philosophical studies, 156(1), 33-63.

ur-quote on addiction and freewill

The craving for a drink in real dipsomaniacs, or for opium or chloral in those subjugated, is of a strength of which normal persons can form no conception. ‘Were a keg of rum in one corner of a room and were a cannon constantly discharging balls between me and it, I could not refrain from passing before that cannon in order to get the rum’; ‘If a bottle of brandy stood at one hand and the pit of hell yawned at the other, and I were convinced that I should be pushed in as sure as I took one glass, I could not refrain’: such statements abound in dipsomaniacs’ mouths.

William James, Principles of Psychology (New York: Henry Holt and Company, 1890), p. 543. Via Laurence. Thanks Laurence!

links for summer 2013

the extended self in interface design

As users become more familiar with an environment they situate themselves more profoundly. We believe that insights concerning the way agents become closely coupled with their environments have yet to be fully exploited in interface design

Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: toward a new foundation for human-computer interaction research. ACM Transactions on Computer-Human Interaction (TOCHI), 7(2), 174-196.