Categories
psychology

Tools, substitutes or companions: three metaphors for thinking about technology

This was reposted from the Cyberselves blog, which has died. Original date: 2018-02-05

Here are three metaphors for how we think about digital and robotic technologies:

First, as tools. Passive instruments which extend our own power. Hammers enhance your hitting, video calling extends your presence, algorithmic trading merely implements the rules you designed for trading. Tools seem like passive objects, without their own desires, but a moment’s thought will tell you that even passive objects have psychological effects (that’s why we say ‘to a man with a hammer everything looks like a nail’).

A second metaphor is to think of technologies as substitutes. This is the metaphor which dominates robotics – and the ever-repeated image of the humanoid robot, whether doing human labour (and potentially putting humans out of work), or rising up and waging a war against humans to replace them. Here’s an interesting post from Marginal Revolution, which pours cold water on self-driving trucks, explicitly because it rejects the idea that all the functions of a truck driver can be replaced by technology:

truck drivers don’t just drive trucks. They also secure loads, including determining what to load first and last and how to tie it all down securely. They act as agents for the trucking company. They verify that what they are picking up is what is on the manifest. They are the early warning system for vehicle maintenance. They deal with the government and others at weighing stations. When sleeping in the cab, they act as security for the load. If the vehicle breaks down, they set up road flares and contact authorities. If the vehicle doesn’t handle correctly, the driver has to stop and analyze what’s wrong – blown tire, shifting load, whatever. [and on]

But there is another metaphor for technology, that of working companions. This is a metaphor where technology complements human abilities, rather than merely extending them (like tools), or replacing them (like substitutes). Ironically, the quote above is a comment on an article which takes the companion metaphor as its premise, not the replacement metaphor (“Could Self-Driving Trucks Be Good for Truckers?“). Clive Thompson, in the compelling first chapter of his book Smarter Than You Think, labels human-technology teams ‘centaurs’. For Thompson the question “Who is better at chess – humans or computers?” is simply the wrong question. The best chess, the most interesting chess, can be played by computer-human teams which fluidly interact and can draw on the strengths of both:

In essence, a new form of chess intelligence was emerging. You could rank the teams like this: (1) a chess grand master was good; (2) a chess grand master playing with a laptop was better. But even that laptop-equipped grand master could be beaten by (3) relative newbies, if the amateurs were extremely skilled at integrating machine assistance. “Human strategic guidance combined with the tactical acuity of a computer,” Kasparov concluded, “was overwhelming.”

Better yet, it turned out these smart amateurs could even outplay a supercomputer on the level of Deep Blue… They did it using their own talents and regular Dell and Hewlett-Packard computers, of the type you probably had sitting on your desk in 2005, with software you could buy for sixty dollars

Read an excerpt here.

The technologies of the future will be more exciting, more dangerous, more mind-altering than either tools or substitutes. How we relate to our new companions will require an exercise of the imagination, as much as anything else. Letting our thinking be captured by restricted metaphors for these new technologies will only hold us back.

Related: How To Become A Centaur at mindhacks.com 2018-02-07

Categories
politics psychology systems

Collective intelligence in twitter discussions

The UCU strike has shown how effective twitter can be. University staff from around the country have shared support, information and analysis. There has been a palpable feeling of collective intelligence at work. When the first negotiated agreement was released (at 7.15 on a Monday evening) my impression was that most people didn’t know what to make of it. I didn’t know what to make of it. Pensions are complex, and the headline feature – retention of a Defined Benefit scheme – seemed positive. Overnight on twitter sentiment coalesced around the hashtag #NoCapitulation and at 10am on the Tuesday union members around the country held branch meetings – all 64 of which resoundingly rejected the agreement. The subsequent – substantially improved – offer suggests that this was the right thing for union members to do, and the speed and unanimity with which they did it wouldn’t have been possible without the twitter discussion that happened over night.

So why, on this occasion, does twitter work as a platform for collective intelligence? Often enough twitter seems to be a platform which supports idiocy, narcissism and partisan bickering. The case of UCU strike twitter contrasts with other high volume / high urgency discussions, such as the aftermath of disasters, where twitter is as likely to be used to spread fake news and political point scoring as it is for useful information and insightful analysis.

Collective intelligence: what helps, what hurts

There is a literature on collective decision making, which highlights a few things which need to hold for a group discussion to be more productive than individuals just making up their own mind.

  • Arguments must be exchanged. First off, and a factor which should hearten committed rationalists everywhere, the exchange of arguments – not just information – seems to be key to productive groups (“studies that have manipulated the amount of interaction or that have examined the content of interactions have found that the exchange of arguments is critical for these improvements to occur”, Mercier, 2016).

  • Agreed purpose. Productive groups need to have a shared idea of what they are trying to achieve. If, for example, half of a group like solving problems and half like having arguments, their contributions to the discussion will, sooner or later, push in different directions (van Veelen & Ufkes, 2017; Sperber & Mercier, 2017).

  • Diversity, in viewpoints. The literature on the effect of diversity on collective intelligence is mixed. Too much diversity between participants may hinder group discussions (Woolley et al, 2015) and demographic diversity alone certainly isn’t sufficient for the wisdom of crowds to emerge (de Oliveira & Nisbett, 2018). Instead, what is needed is enough ‘viewpoint diversity’ to produce a cognitive division of labour without impairing group cohesion. A corollary is that the more group cohesion you have, the greater your opportunity to harness group diversity.

Bang & Frith’s fantastic 2017 review on group decision making also highlights some traps which successful group decisions must avoid:

  • Herding. Herding is excessive agreement. This can happen when group members lack independent information or have overly similar viewpoints. It can also be caused by group members desiring to align with the group for its own sake, or believing that others have better knowledge. The result is the same: an information cascade, in which a popular viewpoint attracts adherents because it is popular, which makes it appear more correct, which attracts further adherents, and so on in a vicious circle.

  • Group decision biases. One of these, according to Bang & Frith, is ‘shared information bias’: a bias to discuss the things everyone already knows about, rather than to share information or discuss aspects of the decision which aren’t yet common to the group.

  • Competing sub-goals. As well as lacking a shared purpose in discussion, group decision making can be derailed by status issues (think showing off, or excessive pride preventing admission of error), accountability issues (such as people avoiding unpopular opinions if they will be punished should that position turn out to be in error) and ‘social loafing’ (the textbook phenomenon whereby people try less hard in larger groups, effectively free-riding on others’ contributions).
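The herding trap in particular can be made concrete with a toy simulation. In the classic information-cascade setup (this is a generic sketch, not a model of the twitter discussion), each agent receives a noisy private signal but also sees everyone else’s public choices, and once the public evidence leans far enough one way it swamps private information:

```python
import random

def run_cascade(n_agents=100, signal_accuracy=0.7, seed=0):
    """Simulate a simple information cascade (herding).

    The true state is 1. Each agent receives a private signal which is
    correct with probability `signal_accuracy`, but weighs earlier
    agents' public choices against it. Once the public evidence leads
    by more than one choice, agents rationally ignore their own signal
    and copy the crowd, after which no new information enters.
    """
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        signal = 1 if rng.random() < signal_accuracy else 0
        # Net public evidence: how many more agents chose 1 than 0.
        lead = sum(1 if c == 1 else -1 for c in choices)
        if lead > 1:
            choice = 1       # cascade on 1: private signal ignored
        elif lead < -1:
            choice = 0       # cascade on 0: private signal ignored
        else:
            choice = signal  # no cascade yet: follow own signal
        choices.append(choice)
    return choices

choices = run_cascade()
print("agreement in final 50 choices:", len(set(choices[-50:])) == 1)
```

Note that a cascade on the wrong answer is possible even though most private signals are correct: popularity substitutes for correctness, which is exactly the vicious circle described above.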

The #USSstrike discussion on twitter

Before trying to apply the factors identified from the literature on collective intelligence / group decision making to the #USSstrike, let’s throw up a quick list of factors which seem plausible candidates for why twitter was the site of a productive conversation this time. Once we have a list of candidates, we can see how they map to the features identified in the literature as necessary conditions for useful group decision making.

So, the #USSstrike twitter conversation may have been productive because:

  • twitter discussion built on top of existing networks (academics have local connections to colleagues at their own institutions, as well as disciplinary connections at other institutions across the country.)

  • twitter discussion built on top of IRL discussions on picket lines (lots of opportunity to chat on picket lines).

  • common interest (participants in the conversation are invested in understanding the issue, and want the same thing – a positive outcome to the dispute – even if they don’t agree on what that actually means).

  • niche interest (most of the population is not that interested in academic pensions, which means fewer trolls, troublemakers and idle speculators).

  • participants have training in critically evaluating sources (i.e. hopefully have good filters for unreliable information, recognise important facts)

  • participants have experience discussing substantive issues in public, daily using twitter – as it is at its best – as a platform for information synthesis and recommendation

Combining these lists we get some traction on why academic twitter was suddenly able to transform into a vehicle for productive collective intelligence on pensions (and maybe how we can help keep it that way).

In short, our three criteria for productive group decisions were met:

  • Arguments were exchanged: arguments are the daily tools of academics – of course we exchanged arguments, not just information

  • Our purpose was agreed: the nature of the dispute did that for us. Those in the discussion shared a common purpose – to understand an issue with high stakes. Not only do we face the same pension cuts, but the logic of collective bargaining and action puts us all on the same side

  • Diverse viewpoints were represented: maybe it is less clear this criterion was met, but perhaps we can thank the fact that academics from all disciplines have been discussing the dispute for at least some boost in the diversity of backgrounds and assumptions that participants bring to the discussion.

The three decision traps – herding, bias and competing sub-goals – are all warnings for the future. We seem to have avoided them for the moment, but there are plenty of individual behaviours which can encourage them. Most of us, with notable exceptions, are guilty of some social loafing. Blindly following others (leading to herding) seems a particular risk, given that the logic of collective action is an important part of Union identity. I also note that bad manners, such as abusing people who make mistakes or adopt alternative viewpoints, work to effectively punish viewpoint diversity, with a corresponding decrement in our capacity for collective intelligence.

As a student of decision making the dispute has been exhilarating to take part in and I’ll watch with interest the next rounds (and the corresponding twitter discussion).

My quick primer on the UCU strike action is here .

References

Bang, D., & Frith, C. D. (2017). Making better decisions in groups. Royal Society Open Science, 4(8), 170193.

Mercier, H. (2016). The argumentative theory: Predictions and empirical evidence. Trends in Cognitive Sciences, 20(9), 689-700.

de Oliveira, S., & Nisbett, R. E. (2018). Demographically diverse crowds are typically not much wiser than homogeneous crowds. Proceedings of the National Academy of Sciences, 115(9), 2066-2071.

Woolley, A. W., Aggarwal, I., & Malone, T. W. (2015). Collective intelligence and group performance. Current Directions in Psychological Science, 24(6), 420-424.

Categories
intellectual self-defence politics psychology

Facebook’s persuasion architecture and human reason

Facebook is a specific, known, threat to democracy, not a general unknown threat to our capacity for rationality

Zeynep Tufekci has a TED talk ‘We’re building a dystopia just to make people click on ads’. In it she talks about the power of Facebook as a ‘persuasion architecture’ and she makes several true, useful, points about why we should be worried about the influence of social media platforms, platforms which have as their raison-d’être the segmentation of audiences so they can be sold ads.

But there’s one thing I want to push back on. Tufekci’s argument draws some of its rhetorical power from a false model of how persuasion works. This is a model in which persuasion by technology or advertising somehow subverts normal rational processes, intervening on our free choice in some sinister way ‘without our permission’. I’m not saying she would explicitly endorse this model, but it seems latent in the way she describes Facebook, so I thought it worth bringing into the light, pausing just for a moment to look at what we really mean when we warn about persuasion by advertising.

Here’s Tufekci’s most worrying example: targeted Facebook ads aimed at mobilising, or demobilising voters, which are effective enough in changing voter turnout to swing an election. She reports an experiment which tested a fairly standard ‘social proof’ intervention, in which some people (the control group) saw a “get out and vote” message on Facebook, and others (the intervention group) saw the same message but with extra information about which of their friends had voted. People who saw this second message were more likely to vote (0.4% more likely). Through the multiplier effect of the social networks they were embedded in, the researchers estimate that 340,000 extra people voted who otherwise wouldn’t have.
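As a back-of-envelope sketch of how a small individual nudge scales up (the audience size and the uniform lift here are hypothetical illustrations of the arithmetic, not figures from the study):

```python
# Hypothetical numbers, for illustration of the arithmetic only.
audience = 60_000_000   # people shown the social-proof message
direct_lift = 0.004     # each 0.4% more likely to vote

direct_votes = audience * direct_lift
print(f"direct extra votes: {direct_votes:,.0f}")  # 240,000

# The researchers' 340,000 estimate also counts contagion: friends of
# the nudged users who voted because they saw their friends had voted.
```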

Now 340,000 votes is a lot, enough to swing an election, but it would be a mistake to think that these people were coerced or tricked into acting out of character by the advert. These were people who might have voted anyway, and the advert was a nudge.

Think of it like this. Imagine you offer someone an apple and they say yes. Did you trick them into desiring fruit? In what sense did you make them want an apple? If you offer apples to millions of people you may convert hundreds of thousands into apple-eaters, but you haven’t woven any special magic. At one end, the people who really like apples will have one already. At the other, people who hate apples won’t ever say yes. For people who are in between, something about your offer may speak to them and they’ll accept. A choice doesn’t have to originate entirely from within a person, completely without reference to the options presented to them, to be a reasonable, free, choice.

No model of human rationality is harmed by the offer of these apples.

Our choices are always codetermined by ourselves and our environment. Advertising is part of the environment, but it isn’t a privileged part — it doesn’t override our beliefs, habits or values. It affects them, but no more so and in no different a way than everything else which affects us. This is easy to see when it is offers of apples, but something about advertising obscures the issue.

Take the limit case — some political candidate figures out the perfect target audience for their message and converts 100% of that audience from non-voters into voters with a Facebook advert. Would we care? What would that advert — and those voters — look like? They would be people who might vote for the candidate anyway, and who could be persuaded to vote for someone else by all the normal methods of persuasion that we already admit into the marketplace of ideas / clubhouse of democracy. They wouldn’t vote for a candidate they didn’t sincerely believe in, and the advert wouldn’t mean that their vote couldn’t be changed at some later point, whether by another advert, by new information, by arguing with a friend, or whatever.
There are still plenty of reasons to worry about Facebook:

  • Misinformation —how it can embed and lend velocity to lies.
  • Lack of transparency — both in who is targeting, who is targeted and why.
  • Lack of common knowledge —consensus politics is hard if we don’t all live in the same informational worlds.

Tufekci covers these factors. My position is that it hasn’t been shown that there is anything special about Facebook as a ‘persuasion architecture’ beyond these. Yes, we should worry about something with the size and influence of Facebook, but we already have frameworks for thinking about ‘persuasional harm’ — falsehoods are not a legitimate basis for persuasion, for example, so we are particularly concerned to hunt down fake news; or, it is worrying when one interest group controls a particular media form, such as newspapers. Yes Facebook persuades, but it doesn’t do so in a way that is itself pernicious. Condemning it in general terms would be misplaced: a harm to any coherent model of citizens as reasonable agents, and a distraction from the specific and novel threats that Facebook and related technologies constitute to democracy.

Categories
psychology

Max Bazerman’s question: If you had to make this decision again in a year…

If you had to make this decision again in a year, what information would you want, and can you get more of it now?

One challenge executives face when reviewing a recommendation is the WYSIATI assumption: What you see is all there is. Because our intuitive mind constructs a coherent narrative based on the evidence we have, making up for holes in it, we tend to overlook what is missing. Devesh, for instance, found the acquisition proposal compelling until he realized he had not seen a legal due diligence on the target company’s patent portfolio—perhaps not a major issue if the acquisition were being made primarily to gain new customers but a critical question when the goal was to extend the product line.

To force yourself to examine the adequacy of the data, Harvard Business School professor Max Bazerman suggests asking the question above. In many cases, data are unavailable. But in some cases, useful information will be uncovered.

From Before You Make That Big Decision… by Daniel Kahneman, Dan Lovallo & Olivier Sibony in Harvard Business Review. The idea is similar to Gary Klein’s idea of the pre-mortem. Both, in the style of ‘What Would Jesus Do?’ type questions, ask you to take a perspective which is less involved in the decision immediately in front of you, to facilitate exploration of the counterfactual space around the way things are (or as you imagine them to be), and to return with questions you didn’t think to ask previously.

Categories
psychology

Learning, Motor Skill, and Long-Range Correlations

Nourrit-Lucas et al (2015) compare expert and novice performance on a ski-simulator, a complex task which is often used by human movement scientists. Acquiring skilled performance on the ski-simulator requires you to learn a particular form of control as you shift your weight from side to side (van der Pol form of damped oscillations).

Their main result involved comparing the autocorrelations (which they call serial correlations) of participants’ performance. The participants were instructed to move from side to side as fast, and widely, as possible and these movements were motion tracked. The period of the oscillations was extracted and the autocorrelation for different lags calculated (other complexity measures were also calculated, which I ignore here). The autocorrelations for novices were positive for lag 1 and possibly for other short lags, but dropped to zero for longer lags (5-30). Experts’ autocorrelations were higher for shorter lags and did not drop to zero for any of the lags examined (showing positive long-range correlations in performance).

Figure 3, Nourrit-Lucas et al (2015)
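The autocorrelation measure at the heart of this analysis can be sketched in a few lines (a minimal version; the paper also computes other complexity measures, which are ignored here). For an uncorrelated series, the autocorrelation should hover near zero at every lag:

```python
import numpy as np

def autocorrelation(x, max_lag=30):
    """Autocorrelation of a 1-D series at lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.sum(x * x)
    return np.array([np.sum(x[:-k] * x[k:]) / denom
                     for k in range(1, max_lag + 1)])

# Toy stand-in for a sequence of oscillation periods: white noise
# around a mean period, i.e. performance with no memory at all.
rng = np.random.default_rng(1)
periods = 1.5 + 0.05 * rng.standard_normal(500)
acf = autocorrelation(periods)
print("lag 1:", round(float(acf[0]), 3), " lag 30:", round(float(acf[-1]), 3))
```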

Nourrit-Lucas et al put an impressive interpretation on their result. It undermines, they say, the idea that motor learning is merely a process of simplification, unification or selection of a single efficient motor programme. Instead, they say, “Expert performance seems characterized by a more complex and structured dynamics than that of novices.”

They link this interpretation to the idea of degeneracy, in which learning is the coordination of a complex network so that multiple functional units become linked to all support given outcomes. “This enrichment of neural networks could explain the property of robustness of motor skills, essentially revealed in retention tests, but also the properties of generalizability and transfer”

They cite modelling by Delignieres & Marmelat (2013) which links level of degeneracy to extent of long-range correlations. Whilst they admit that other complex networks are also capable of producing the long-range correlations observed, I would go further and say that a “simple-unitary” model of motor learning might also produce long-range correlations if there were some additional structured noise on performance (e.g. drift in attention, or some such). Novices, of course, would also have this influence on their performance, but perhaps it is swamped by the larger variability of their yet-to-be-optimised motor system. I don’t see why the analysis of Nourrit-Lucas excludes this interpretation, but I may be missing something.
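That alternative is easy to check numerically. In this sketch (a toy illustration, not the authors’ model), memoryless trial-to-trial noise plus a slow random-walk drift, standing in for wandering attention, yields positive correlations even at long lags:

```python
import numpy as np

def autocorr(x, lag):
    """Autocorrelation of series x at a single lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.sum(x[:-lag] * x[lag:]) / np.sum(x * x))

rng = np.random.default_rng(0)
n = 1000
white = rng.standard_normal(n)                   # memoryless performance noise
drift = np.cumsum(0.1 * rng.standard_normal(n))  # slow drift, e.g. attention
series = white + drift

print("white noise alone, lag 30:", round(autocorr(white, 30), 2))   # near zero
print("noise plus drift, lag 30: ", round(autocorr(series, 30), 2))  # clearly positive
```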

I also note that their result contrasts with that of van Beers et al (2013), who showed that lag 1 autocorrelations in experts at an aiming task tended towards zero. They interpreted this as evidence of optimal learning (ie neither under- nor over-correction of performance based on iterated error feedback). The difference may be explained by the fact that van Beers’ task used an explicit target whilst Nourrit-Lucas’ task lacked any explicit target (merely asking participants to, in effect, “do their best” in making full and fast oscillations).

The most impressive element of the Nourrit-Lucas study is not emphasised in the paper – the expert group are recruited from a group that was trained on the task 10 years previously. In Nourrit-Lucas et al (2013) she shows that despite the ten year gap the characteristic movement pattern of experts (that damped van der Pol oscillation) is retained – a truly impressive lab demonstration of the adage that you “don’t forget how to ride a bike [or equivalently complex motor task]”.

REFERENCES

Delignieres, D., & Marmelat, V. (2013). Degeneracy and long-range correlations. Chaos: An Interdisciplinary Journal of Non-linear Science, 23, 043109.

Nourrit-Lucas, D., Tossa, A. O., Zélic, G., & Delignières, D. (2015). Learning, Motor Skill, and Long-Range Correlations. Journal of motor behavior, (ahead-of-print), 1-8.

Nourrit-Lucas, D., Zelic, G., Deschamps, T., Hilpron, M., & Delignieres, D. (2013). Persistent coordination patterns in a complex task after 10 years delay: How validate the old saying “Once you have learned how to ride a bicycle, you never forget!” Human Movement Science, 32, 1365–1378. doi:10.1016/j.humov.2013.07.005

van Beers, R. J., van der Meer, Y., & Veerman, R. M. (2013). What autocorrelation tells us about motor variability: Insights from dart throwing. PloS one, 8(5), e64332.

Categories
psychology

Limits on claims of optimality

Jarvstad et al (2014) provide a worked illustration showing that it is not straightforward to declare perceptuo-motor decision making optimal, or even more optimal when compared to cognitive decisions.

They note that, in contrast to cognitive-level decisions, perceptuo-motor decisions have been described as optimal or near optimal (Seydell et al, 2008; Trommershäuser et al, 2006). But they also note that perceptuo-motor and cognitive decisions differ in their performance conditions and in the criteria by which performance is assessed as optimal. Jarvstad et al (2013) demonstrated that when these differences are eliminated, claims about differences between domains are harder to substantiate.

In this paper, Jarvstad et al (2014) compare two reaching tasks to explore the notional optimality of human perceptuo-motor performance. They show that minor changes in task parameters can affect whether participants are classified as behaving optimally or not (even if these changes in task parameters do not affect the level of performance of an optimal agent). Specifically, they adjusted the size and distance of the reaching targets in their experiment, without qualitatively altering the experiment (nor the instructions and protocol at all).

The bound below which performance is classified as sub-optimal depends on a number of factors. The ease of the task affects this (for easier tasks observed performance will be closer to optimal), but so do the variability of an optimal agent’s performance and the accuracy of estimation around an optimal agent’s performance. Jarvstad et al conclude that, for this task at least, it is not straightforward to know how changes in an experiment will affect the bounds within which a subject is classified as optimal. They say (p.413):

That statements about optimality are specific and conditional in this way – that is, a behaviour is optimal given a task of this difficulty, and given these capacity constraints included in the optimal agent— may be appreciated by many, however the literatures typically do not make this explicit, and many claims are simply unsustainable once this fact is taken into account.
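A toy simulation makes the point concrete (a hypothetical target-hitting task, not the authors’ reaching experiment). Here the sub-optimality bound is taken as the 5th percentile of an optimal agent’s own hit rates; the same agent, with the same motor noise, faces a different bound depending only on how easy the task is:

```python
import numpy as np

def optimal_agent_bound(target_width, motor_sd=1.0,
                        n_blocks=10_000, trials=100, pct=5, seed=0):
    """Lower bound on hit rates compatible with optimality.

    Simulate an optimal agent: it aims at the target centre, and its
    endpoints are scattered by Gaussian motor noise. Return the pct-th
    percentile of its hit rates over blocks of `trials` trials;
    observed performance below this would be classed sub-optimal.
    """
    rng = np.random.default_rng(seed)
    endpoints = rng.normal(0.0, motor_sd, size=(n_blocks, trials))
    hit_rates = (np.abs(endpoints) < target_width / 2).mean(axis=1)
    return float(np.percentile(hit_rates, pct))

# Identical agent, identical noise - only the target width changes:
for width in (1.0, 2.0, 4.0):
    print(f"target width {width}: bound = {optimal_agent_bound(width):.2f}")
```

The bound rises (and the band of performance consistent with optimality narrows) as the task gets easier, which is why changing target size or distance can reclassify the same behaviour.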

REFERENCES

Jarvstad, A., Hahn, U., Warren, P. A., & Rushton, S. K. (2014). Are perceptuo-motor decisions really more optimal than cognitive decisions?. Cognition, 130(3), 397-416.

Seydell, A., McCann, B. C., Trommershäuser, J., & Knill, D. C. (2008). Learning stochastic reward distributions in a speeded pointing task. The Journal of Neuroscience, 28, 4356–4367.

Trommershäuser, J., Landy, M. S., & Maloney, L. T. (2006). Humans rapidly estimate expected gain in movement planning. Psychological Science, 17, 981–988.

Categories
psychology

Perceptuo-motor, cognitive, and description-based decision-making seem equally good

Jarvstad et al (2013) show that when perceptuo-motor and ‘cognitive’ decisions are assessed in the same way there are no marked differences in performance.

The context for this is the difference between studies of perceptual-motor and perceptual decision making (which have emphasised the optimality of human performance) and studies of more cognitive choices (for which the ‘heuristics and biases’ tradition has purported to demonstrate substantial departures from optimality).

Jarvstad and colleagues note that experiments in these two domains differ in several important ways. One is the difference between basing decisions on probabilities derived from descriptions versus derived from experience (which has its own literature; Hertwig & Erev, 2009). Another is that perceptual-motor tasks often involve extensive training, with feedback, whereas cognitive decision making tasks are often one-shot and/or without feedback.

The definition of optimality employed also varies across the domains. Perceptual-motor tasks usually compare performance to that of an optimal agent, often modelled incorporating some constraints on task performance (e.g. motor noise). Cognitive tasks have often sought to compare performance to the standard of rational utility maximisers, designing choices in the experiments precisely to demonstrate violation of axioms on which rational choice rests (e.g. transitivity).

In short, claiming a difference in decision making across these two domains may be premature if other influences on both task performance and task assessment are not comparable.

To carry out a test of performance in the two domains, Jarvstad et al carried out the following experiment. They compared a manual aiming task (A) and a numerical arithmetic task (B). During a learning phase they assessed variability on the two tasks (ie frequency and range of error in physical (A) or numerical (B) distance). Both kinds of stimuli varied in the ease with which the required response could be successfully produced (ie they varied in difficulty). They also elicited explicit judgements of stimuli that participants judged would match set levels of success (e.g. stimuli they thought they would have a 50% or 75% chance of getting right).

During a decision phase they asked participants to choose between pairs of stimuli with different rewards (upon success) and different difficulties. Importantly, the difficulties were chosen – using the data provided by the learning phase – so as to match certain explicit probabilities (such as might be provided in a traditional decision-from-description experiment on risky choice). They also tested such decisions from explicit probabilities, in a task labelled ‘C’.

The results show that all three tasks had a comparable proportion of decisions which were optimal, in the sense of maximising the chance of reward (Fig 3A). For all three tasks, more optimal decisions were made on those decisions which were more consequential (ie which had a bigger opportunity cost and which, consequently, were presumably easier to discriminate between; Fig 3B – shown).

Fig 3B, Jarvstad et al (2013)

Using individual participant data, it is possible to recover – via model fitting – the subjective weights for value and probability functions. These show an underweighting of low objective probabilities in the perceptual-motor task (Fig 4D) and an overweighting of low objective probabilities in the classical probability-from-description task (Fig 4F). This is in line with previous literature reporting a divergence between the domains in the way low probability events are treated (Hertwig et al, 2004). However, Jarvstad et al use the explicit judgements obtained in the learning phase to show that the apparent discrepancy in weighting results from differences in the subjective probability function (ie how likely success is judged in the perceptual-motor domain) rather than in the weighting given to this probability. If probability estimations are held constant, then similar weightings of low probability events are found across the domains.
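The shape of the weighting functions being fitted can be illustrated with the standard one-parameter form from Tversky & Kahneman (1992) (used here only to show what over- and underweighting of low probabilities look like, not the paper’s exact model):

```python
import numpy as np

def tk_weight(p, gamma):
    """One-parameter probability weighting function (Tversky & Kahneman, 1992).

    gamma < 1 overweights small probabilities (w(p) > p for small p);
    gamma > 1 underweights them (w(p) < p for small p);
    gamma = 1 is linear, i.e. weights equal objective probabilities.
    """
    p = np.asarray(p, dtype=float)
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

p = 0.05  # a low objective probability
print("gamma = 0.6 (overweighting): ", float(tk_weight(p, 0.6)))
print("gamma = 1.4 (underweighting):", float(tk_weight(p, 1.4)))
```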

They also show that an individual’s performance on a task is better predicted by their performance on a task in a different domain than by the average performance in that domain – ie individual differences are more important than task differences in the nature and extent of divergence from optimality.

REFERENCES

Jarvstad, A., Hahn, U., Rushton, S. K., & Warren, P. A. (2013). Perceptuo-motor, cognitive, and description-based decision-making seem equally good. Proceedings of the National Academy of Sciences, 110(40), 16271-16276.

Hertwig R, Barron G, Weber EU, Erev I (2004) Decisions from experience and the effect of rare events in risky choice. Psychol Sci 15(8):534–539.

Hertwig R, Erev I (2009) The description-experience gap in risky choice. Trends Cogn Sci 13(12):517–523.

Categories
psychology science

The publication and reproducibility challenges of shared data

Poldrack and Poline’s new paper in TICS (2015) asserts pretty clearly that the field of neuroimaging is behind on open science. Data and analysis code are rarely shared, despite the clear need: studies are often underpowered, and there are multiple possible analytic paths.

They offer some guidelines for best practice around data sharing and re-analysis:

  • Recognise that researcher error is not fraud
  • Share analysis code, as well as data
  • Distinguish ‘Empirical irreproducibility’ (failure to replicate a finding on the original researchers’ own terms) from ‘interpretative irreproducibility’ (failure to endorse the original researchers’ conclusions based on a difference of, e.g., analytic method)

They also offer three useful best practice guidelines for any researchers who are thinking of blogging a reanalysis based on other researchers' data (as Russ has done himself):

  • Contact the original authors before publishing to give them right of reply
  • Share your analysis code, along with your conclusions
  • Allow comments

And there are some useful comments about authorship rights for research based on open data. Providing the original data alone should not entitle you to authorship of subsequent papers (unless you have also contributed significant expertise to a re-analysis). Rather, it would be better if researchers contributing data to an open repository published a data paper which can be cited by anyone performing additional analyses.

REFERENCE

Poldrack, R. A., & Poline, J. B. (2015). The publication and reproducibility challenges of shared data. Trends in Cognitive Sciences, 19(2), 59–61.

Categories
psychology

Light offsets are as reinforcing as light onsets

Further support that surprise and not novelty supports sensory reinforcement comes from the evidence that light offsets are more-or-less as good reinforcers as light onsets (Glow, 1970; Russell and Glow, 1974). But in the case of light offset, where is the “novel” stimulus that acts as a reinforcer (by supposedly triggering dopamine)? In this case it is even more clear that it is the unexpectedness of the event (surprise), not the novelty of the stimulus (which is absent), that is at play.

From Barto, A., Mirolli, M., & Baldassarre, G. (2013). Novelty or surprise? Frontiers in Psychology, 4.

REFERENCES

Glow, P. (1970). Some acquisition and performance characteristics of response contingent sensory reinforcement in the rat. Aust. J. Psychol., 22, 145–154. doi:10.1080/00049537008254568

Russell, A., & Glow, P. (1974). Some effects of short-term immediate prior exposure to light change on responding for light change. Learn. Behav., 2, 262–266. doi:10.3758/BF03199191

Categories
psychology

Animal analogs of human biases and suboptimal choice

Zentall (2015) summarises a rich literature on experiments showing that analogues of canonical human biases exist in animals. Specifically, he takes the phenomena of

  • justification of effort: rewards which require more effort are overweighted
  • sunk cost fallacy: past effort is weighted in evaluation of future rewards
  • less-is-more effect: high value rewards are valued less if presented along with a low value reward
  • risk neglect: overweighting of low probability but high value rewards
  • base rate neglect: e.g. over-reliance on events which are likely to be false positives

The demonstration of all these phenomena in animals (often pigeons, as well as dogs, in Zentall's own research) presents a challenge to explanations of these biases in human choice. It suggests they are unlikely to be the result of cultural conditioning, social pressure or experience, or elaborate theories (such as theories of probability or cosmic coincidence in the case of suboptimal choice regarding probabilities; see Blanchard, Wilke and Hayden, 2014).

Zentall suggests that these demonstrations compel us to consider that suboptimal choice in the laboratory can only exist because of some adaptive value in the wild – whether through common mechanisms underlying multiple biases, or through the independent evolution of each bias/heuristic in separate modules. At the end of the paper he presents some loose speculations on the possible adaptive benefit of each of the discussed biases.

Three interesting recent results from Zentall's lab concern risky choice in pigeons:

1. Laude et al (2014) showed that for individual pigeons there was a correlation between the degree of suboptimal choice on a gambling task (overweighting of rare but large rewards) and impulsivity as measured by a delay discounting task. As well as seeming to show ‘individual differences’ in pigeon personality, this suggests the possibility of some common factors in these two kinds of choices (choices which experimental work with humans has found to be dissociable in various ways).

2. Zentall and Stagner (2011) show that conditioned reinforcers (stimuli which predict reward) are critical in the gambling task (for pigeons). Without these intermediate stimuli, when actions lead directly to reward (still under the same probabilities of outcome), pigeons choose optimally. Zentall suggests that a thought experiment on the human case confirms the generality of this result. Would slot machines be popular without the spinning wheels? Or (my suggestion) the lottery without the ball machine? My speculation is that the promise of insight into the causal mechanism governing the outcome is important. We know that human and non-human animals are guided by intrinsic motivation as well as the promise of material rewards (i.e. as well as being extrinsically motivated). Rats, for example, will press a lever to turn a light on or off, in the absence of the food reward normally used to train lever pressing (Kish, 1955). One plausible explanation for results like this is that our learning systems are configured to seek control or understanding of the world – to be driven by mere curiosity – in order to generate exploratory actions which will, in the long term, have adaptive benefit. Given this, it makes sense if situations which hold out the possibility of causal insight – as with the intermediate stimuli in the gambling task – can inspire actions which are less focussed on exploiting known probabilities (i.e. are ‘exploratory’, in some loose sense), even if the promise of causal insight is illusory and the exploratory actions are, as defined by the experiment, futile and suboptimal.

3. Pattison, Laude and Zentall (2013) showed that pigeons who were given the opportunity for social interaction (with other pigeons) were less likely to choose the improbable large reward over the lower expected value but more certain reward. Zentall's suggestion is that the experience of social interaction diminishes the perceived magnitude of the improbable reward, making it seem like a less attractive choice (which makes sense if neglect of the probability and a focus on the magnitude is part of the dynamic driving suboptimal choice in this gambling task). Whatever the reason, the result is a reminder that the choices of animals – human and non-human – cannot be studied in isolation from the experience and environment of the organism. This may sound obvious, but discussions of problematic choices (think gambling, or drug use) often conceptualise behaviours as compelled, part of an immutable biological (addiction as disease) or chemical (drugs as inevitably producing catastrophic addiction) destiny. This result, and others (remember Rat Park), give the lie to that characterisation.

REFERENCES

Blanchard, T. C., Wilke, A., & Hayden, B. Y. (2014). Hot-hand bias in rhesus monkeys. Journal of Experimental Psychology: Animal Learning and Cognition, 40(3), 280-286.

Kish, G. (1955). Learning when the onset of illumination is used as the reinforcing stimulus. Journal of Comparative and Physiological Psychology, 48(4), 261–264.

Laude, J.R., Beckmann, J.S., Daniels, C.W., Zentall, T.R., 2014. Impulsivity affects gambling-like choice by pigeons. J. Exp. Psychol. Anim. Behav. Process. 40, 2–11.

Pattison, K.F., Laude, J.R., Zentall, T.R., 2013. Social enrichment affects suboptimal, risky, gambling-like choice by pigeons. Anim. Cogn. 16, 429–434.

Zentall, T.R., Stagner, J.P., 2011. Maladaptive choice behavior by pigeons: an animal analog of gambling (sub-optimal human decision making behavior). Proc. R. Soc. B: Biol. Sci. 278, 1203–1208.

Zentall, T. R. (2015). When animals misbehave: analogs of human biases and suboptimal choice. Behavioural processes, 112, 3-13.

Categories
psychology

Discounting of delayed and uncertain outcomes

Blackburn & El-Deredy (2013) provide a nice review of the literature on temporal (delay) and probabilistic discounting. They note these features of similarity and difference:

  • Both follow hyperbolic (rather than exponential) discount functions
  • Not correlated: impulsivity might be interpreted as steeper temporal discounting and shallower probabilistic discounting, but steep temporal discounters don’t appear to be shallow probabilistic discounters
  • Magnitude effect opposite: for probabilistic rewards, larger rewards are more steeply discounted; for delayed rewards, larger rewards are more shallowly discounted
  • Sign effect the same: gains vs losses affect discounting similarly for temporal and probabilistic discounting (gains are more steeply discounted).

To this list their experiments add a dissociable effect of ‘uncertain outcomes’ (all or nothing) vs ‘uncertain amounts’ (graded reward).
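For illustration, the hyperbolic forms noted in the first bullet can be sketched as follows (a standard textbook parameterisation, not necessarily the one Blackburn and El-Deredy fit; the discount rates k and h are invented):

```python
def hyperbolic_delay(amount, delay, k):
    """Hyperbolic temporal discounting: V = A / (1 + k*D)."""
    return amount / (1 + k * delay)

def hyperbolic_prob(amount, p, h):
    """Hyperbolic probability discounting over the odds against winning,
    theta = (1 - p) / p (Rachlin's form): V = A / (1 + h*theta)."""
    theta = (1 - p) / p
    return amount / (1 + h * theta)

# Hyperbolic (unlike exponential) discounting predicts preference
# reversals: relative values change as both rewards draw nearer in time.
print(hyperbolic_delay(100, 30, 0.05))  # 100 units at 30 days, k=0.05 -> 40.0
```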

Blackburn, M., & El-Deredy, W. (2013). The future is risky: Discounting of delayed and uncertain outcomes. Behavioural processes, 94, 9-18.

Categories
psychology

ASMR: Don’t ask

A couple of years ago a nice man called Rhodri asked me about ASMR. I didn’t know anything about the phenomenon, but I was willing to comment as an experimental psychologist. The interesting thing to me was that this is a subjective experience that many people seemed to recognise, but it had no official name (until people started calling it ASMR and finding each other on the internet). “Could this be a real thing?” asked Rhodri. “Sure,” I said – it’s perfectly possible that something could be real (common across people, not based on imagination or lies) and yet scientifically invisible. Maybe, I thought, now someone will look into this phenomenon and find ways of measuring it.

Since then, as far as I am aware, there hasn’t been any research on ASMR, but interest in the phenomenon grows and grows. I wrote a column about ASMR for the BBC. There’s even a Wikipedia page, and yours truly is currently quoted near the top. Because of these I regularly have people with ASMR and assorted journalist types contacting me for my opinions on ASMR.

It makes me kind of sad to say, but I actually have no further opinions on ASMR. I don’t have anything extra to say beyond what I said to Rhodri and in my column. I don’t follow the research on ASMR (if there is any now), and I have never done any research on ASMR. I only opened my big mouth in the first place because the thing that interests me is how subjective experience is turned into social facts. As an experimental psychologist, that’s what I do, and ASMR is an example of something that might be a real subjective experience that we can observe in the process of being turned into a socially accepted fact. That’s the thing that is interesting to me.

Regretfully, I have to refuse all opportunities to talk about ASMR itself because I literally have nothing more to say. Sorry.

Categories
psychology

Cognitive Science Cinema

I’ve been trying to think of documentaries on cognitive science topics. This is what I’ve got so far. Can you help?

Categories
psychology science

Surely the hoo-har about replication could only concern a non-cumulative science?

There’s a hoo-har in psychology right now about replication. Spurred on by some high-profile fraud cases, awareness of the structural biases surrounding publication, and perennial rumblings about statistical malpractice, many are asking if the effects reported in the literature are real. There are some laudable projects aimed at improving best practice in science – journals of null results, pre-registration for experiments, the Center for Open Science (see previous link) – but it occurs to me that all of this ignores an important bit of context. At the risk of stating the obvious: you need to build in support for replications only to the extent that these do not happen as part of normal practice.

Cumulative science inherently supports replication. For most of science, what counts as news builds on what has been done before – not just in an abstract theoretical sense, but in the sense that it relies on those results being true to make the experiments work. Since I’m a psychologist, and my greatest expertise is in my own work, I’ll give you an example from this recent paper. It’s a study of action learning, but we use a stimulus control technique from colour psychophysics (and by ‘we’, I really mean Martin, who did all the hard stuff). As part of preparing the experiment we replicated some results using stimuli of this type. Only because this work had been done (thanks Petroc!) could we design our experiment; and if this work hadn’t replicated, we would have found out in the course of preparing for our study of action learning. Previously in my career I’ve had occasion to do direct replications, and I’ve almost always found the effect reported. I haven’t always agreed with the interpretation of why the effect happens, or I’ve found that my beliefs about the effect from just reading the literature were wrong, but the effect has been there.

It is important that replication is possible, but I’ve been bemused that there has been such a noise about creating space for additional formal replications. It makes me wonder what people believe about psychology. If a field were one where news was made by collecting isolated interesting phenomena, then there would be more need for structures to support formal replication. Should I take the reverse lesson from this – that the extent to which people call for structures to support formal replication is evidence of the lack of cumulative science in psychology?

Categories
Me psychology

Mea culpa musings (angry cyclist edition)

I screwed up. My latest column for BBC Future is about why cyclists enrage motorists. My argument is that cyclists offend the ‘moral order’ of the roads, evoking in motorists a feeling of outrage over perceived rule breaking.

Unfortunately, I included some loose words in my article that implied things I don’t believe and wasn’t arguing. Exhibit A:

Then along comes a cyclist, who seems to believe that the rules aren’t made for them, especially the ones that hop onto the pavement, run red lights, or go the wrong way down one-way streets.

This wrongly suggests both that I think the typical cyclist breaks the law (they don’t), and that motorists are enraged by cyclists’ law breaking. This is not the case; rather, I am arguing that motorists are enraged by cyclists’ perceived rule breaking, where I mean rule in the sense of ‘convention’. Cyclists habitually, legally, and sensibly break conventions of car-driving such as waiting in queued traffic, moving at the speed limit or not undertaking.

Exhibit A has now been changed in the article to the more pleasing:

Then along come cyclists, innocently following what they see are the rules of the road, but doing things that drivers aren’t allowed to: overtaking queues of cars, moving at well below the speed limit or undertaking on the inside.

So, my bad and apologies for this. I should have been a lot clearer than I was. I’m just grateful that a few people understood what I was getting at (if you read the whole article I hope the correct interpretation is supported by the rest of the phrasing I use). The amount and vehemence of feedback has been quite surprising. Lots of people thought I was a frustrated driver who hated cyclists. In fact, the bike is my main form of transport. I’ve ridden nearly every day for over ten years (and been hit by a car once). For this article I was trying not to sound like the self-righteous cycling proto-fascist I feel like sometimes. I obviously succeeded. Perhaps too well.

Other people thought I was claiming that this was the only factor affecting road-users’ attitudes. I don’t think this. Obviously selective memory (for bad cyclists or drivers), in-group/out-group effects and the asymmetry in vulnerability all play a role. I did write a version of the article which laid out the conceptual space a bit more clearly, but I decided it was boring to read, and really I wanted to talk about evolutionary game theory and make a novel – and, I thought, interesting – claim.

I sometimes think I should get “Telling the truth, just not the whole truth” translated into Latin so I can use it as the motto for the column. Every time I write one, someone comes back to me with something I missed out. If I tried to be comprehensive I’d end up with a textbook, instead of an 800-word magazine column. I don’t want to write textbooks, so I’m reasonably happy with leaving things out, but I do worry that there is a line you cross when telling some of the truth amounts to a deception or distortion of the whole truth. I’m trying, each time, not to cross that line. Feedback on how to manage this is welcome.

There were many other comments of all shades. You can ‘enjoy’ some of them on the BBC Future facebook page here. If you did leave a comment on email/facebook/twitter I’m sorry I couldn’t respond to all of them. I hope this post clarifies things a bit.

Categories
psychology science

Bootstrap: corrected

So, previously on this blog (here, and here) I was playing around with the bootstrap as a way of testing if two samples are drawn from different underlying distributions, by simulating samples with known differences and throwing different tests at the samples. The problem was that I was using the wrong bootstrap test. Tim was kind enough to look at what I’d done and point out that I should have concatenated my two sets of numbers and then pulled two samples from that combined set, calculated the mean difference, and then used that statistic to construct a probability distribution function against which I could compare my measured statistic (i.e. the difference of means) to perform a hypothesis test (viz. ‘what are the chances that I could have got this difference of means if the two distributions are not different?’). For people who prefer to think in code, the corrected bootstrap is at the end of this post.

Using the correct bootstrap method, this is what you get:

So what you can see is that, basically, the bootstrap is little improvement over the t-test. Perhaps a marginal amount. As Cosma pointed out, the ex-gaussian / reaction time distributions I’m using look pretty normal at lower sample sizes, so it isn’t too surprising that the t-test is robust. Using the median rather than the mean damages the sensitivity of the bootstrap (contra my previous, erroneous, results). My intuition is that the mean, as a statistic, is influenced by the whole distribution in a way the median isn’t, so it is a better summary statistic (statisticians, you can tell me if this makes sense). The mean test is far more sensitive, but, as discussed previously, this is because it has an unacceptably high false alarm rate which is insufficiently penalised by d-prime.

Update: Cosma’s notes on the bootstrap are here and recommended if you want the fundamentals and are already degree-level comfortable with statistical theory.

Corrected bootstrap function:

function H=bootstrap(s1,s2,samples,alpha,method)
% Two-tailed bootstrap hypothesis test for a difference between two
% samples. Returns H=1 if the observed difference in means lies outside
% the central (1-alpha) interval of the null distribution.
% method==1 uses the mean as the test statistic, otherwise the median.

difference=mean(s2)-mean(s1);

% Pool the samples: under the null hypothesis both come from the same
% distribution, so we resample (with replacement) from the pooled set.
sstar=[s1 s2];

a=zeros(1,samples);
for i=1:samples

    boot1=sstar(ceil(rand(1,length(s1))*length(sstar)));
    boot2=sstar(ceil(rand(1,length(s2))*length(sstar)));

    if method==1
        a(i)=mean(boot1)-mean(boot2);
    else
        a(i)=median(boot1)-median(boot2);
    end

end

CI=prctile(a,[100*alpha/2,100*(1-alpha/2)]);

H = CI(1)>difference | CI(2)<difference;

Categories
psychology science

Bootstrap update

Update: This post used an incorrect implementation of the bootstrap, so the conclusions don’t hold. See this correction

Mike suggested that I alter the variance of the underlying distributions. This makes total sense, since it matches what we are usually trying to do in psychological research – detect a small difference in a lot of noise. So I made the underlying distributions look a lot like reaction time distributions, with a 30ms difference between them. The code is

    t0=200;                                  % baseline response time (ms)
    s1=t0+25*(randn(1,m)+exp(randn(1,m)));   % skewed, reaction-time-like sample
    s2=t0+25*(randn(1,m)+exp(randn(1,m)))+d; % the same distribution, shifted by d

Where m is the sample size, and d is either 0 or 30. For a very large sample, the distributions look like this:

After a discussion with Jim I looked at the hit rate and false alarm rate separately. For the simple comparison of means, the false alarm rate stays around 0.5 (as you’d predict). For the other tests it drops to about 0.05. The simple comparison of means is so sensitive to a true difference, however, that its dprime can still be superior to that of the other tests. Which suggests to me that dprime is not a good summary statistic, rather than that we should do testing simply by comparing the sample means.
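For reference, dprime combines hit and false alarm rates via the standard signal-detection formula, d' = z(hit rate) − z(false alarm rate). A minimal Python sketch (my own illustration, not the simulation code used here) shows how a test with a wildly high false alarm rate can still match a well-behaved test on dprime:

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """d' = z(hit rate) - z(false alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A trigger-happy test with a 0.5 false alarm rate...
print(dprime(0.95, 0.5))   # ~1.64
# ...gets the same d' as a conventional test with far fewer false alarms:
print(dprime(0.50, 0.05))  # ~1.64
```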

So I reran the procedure I described before, but with higher variance on the underlying samples.

The results are very similar. The bootstrap using the mean as the test statistic is worse than the t-test. The bootstrap using the median is clearly superior. This surprises me. I had been told that the bootstrap was superior for nonparametric distributions. In this case it seems as if using the mean as a test statistic eliminates the potential superiority of bootstrapping.

This is still a work in progress, so I will investigate further and may have to update this conclusion as the story evolves.

Categories
psychology quotes

Quote #291: Sutherland on Consciousness

Consciousness is a fascinating but elusive phenomenon; it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written on it.

Stuart Sutherland, in The International Dictionary of Psychology entry on Consciousness

Categories
psychology

An accommodation with the authority of common sense

In Science in Action, Bruno Latour talks of the birth of the modern science of geology, championed by Charles Lyell. He discusses Lyell’s attempt to professionalise and win respect for geology, and the need to find funds to support research in the new discipline. One solution to the need for funds is to appeal directly to the public, in Lyell’s case by writing a book that the landed gentry might read and so be convinced to donate to the cause of geology:

If geology is successful in reshaping the earth’s history, size, composition and age, by the same token, it is also extremely shocking and unusual. You start the book in a world created by God’s will 6000 years ago, and you end it with a few poor Englishmen lost in the eons of time, preceded by hundreds of Floods and hundreds of thousands of different species. The shock might be so violent that the whole of England would be up in arms against geologists, bringing the whole discipline into disrepute. On the other hand, if Lyell softens the blow too much, then the book is not about new facts, but is a careful compromise between commonsense and the geologists’ opinion. This negotiation is all the more difficult if the new discipline runs not only against the Church’s teachings but also against Lyell’s own beliefs, as is the case with the advent of humanity into earth history which Lyell preferred to keep recent and miraculous despite his other theories. How is it possible to say simultaneously that it is useful for everyone, but runs against everyone’s beliefs? How is it possible to convince the gentry and at the same time to destroy the authority of common sense? How is it possible to assert that it is morally necessary to develop geology while agonising in private in the meantime on the position of humanity in Nature?

(p.149)

Replace 19th century geology with ‘cognitive sciences’, and gentry with ‘public’, and the essential tension is still there. The new brain science seeks attention and kudos, and in doing this must reach an uncomfortable accommodation with the ‘authority of common sense’. Psychologists and neuroscientists want to be heard in the public domain, but they will get so much more attention if they flatter received wisdom rather than attempt to overturn strongly held intuitions about human freedom, reasoning and morality.

Categories
academic Me psychology

Fundamentals of learning: the exploration-exploitation trade-off

The exploration-exploitation trade-off is a fundamental dilemma whenever you learn about the world by trying things out. The dilemma is between choosing what you know and getting something close to what you expect (‘exploitation’) and choosing something you aren’t sure about and possibly learning more (‘exploration’). For example, suppose you are in a restaurant and you look at the menu:

  • Fish and Chips
  • Chole Poori
  • Paneer Uttappam
  • Khara Dosa

Assuming for the sake of example that you’re not very good with Sri Lankan food, you’ve now got a choice. You can ‘exploit’ – go with the fish and chips, which will probably be alright – or you can ‘explore’ – try something you haven’t had before and see what you get. Obviously which you decide to do will depend on many things: how hungry you are, how good the restaurant reviews are, how adventurous you are, how often you reckon you’ll be coming back, etc. What’s important is that the study of the best way to make these kinds of choices – called reinforcement learning – has shown that optimal learning requires that you sometimes make some bad choices. This means that sometimes you have to avoid the action you think will be most rewarding, and take an action which you think will be less rewarding. The rationale is that these ‘sub-optimal’ actions are necessary for your long term benefit – you need to go off track sometimes to learn more about the environment. The exploration-exploitation dilemma is really a trade-off: enjoy more now vs learn more now and enjoy later. You can’t avoid it; all you can do is position yourself somewhere along the spectrum.
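To make the trade-off concrete, here is a Python sketch of the simplest reinforcement learning strategy for it, epsilon-greedy choice (the menu ‘values’ are invented for illustration):

```python
import random

def epsilon_greedy(values, epsilon=0.1):
    """Pick an option: exploit the best-known value with probability
    1 - epsilon, otherwise explore an option chosen at random."""
    if random.random() < epsilon:
        return random.randrange(len(values))               # explore
    return max(range(len(values)), key=values.__getitem__)  # exploit

# Estimated enjoyment of the four menu items (made-up numbers):
values = [0.6, 0.2, 0.3, 0.5]  # fish and chips currently looks safest
choice = epsilon_greedy(values, epsilon=0.2)
```

Over repeated visits you would also update the values from the rewards you receive; epsilon sets where you sit along the explore-exploit spectrum.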

Because the trade-off is fundamental we would expect to be able to see it in all learning domains, not just restaurant food choices. In work just published, we’ve been using a new task to look at how actions are learnt. Using a joystick we asked people to explore the space of all possible movements, giving them a signal when they made a particular target movement. This task – which we’re pretty keen on – gives us a lens to look at the relation between how people explore the possible movements they can make and which particular movements they learn to rely on to generate predictable outcomes (which we call ‘actions’).

Using data gathered from this task, it is possible to see the exploitation-exploration trade-off in action. With each target people get 10 attempts to try to identify the right movement to make. Obviously some successful movements will be more efficient than others, because it is possible to hit the target after going all “round the houses” first, adding lots of extraneous movements and taking longer than needed. If you had a success like this you could repeat it exactly (‘exploit’), or try and cut out some of the extraneous movement and risk missing the target (‘explore’). Obviously this refinement of action through trial and error is of critical interest to anyone who cares about how we learn skilled movements.

I calculated an average performance score for the first 50% and second 50% of attempts (basically a measure of distance travelled before hitting the target – so lower scores mean better performance). I also calculated how variable these performance scores were in the first 50% and second 50%. Normally we would expect people who perform best in the first half of a test to perform best in the second half (depressingly, people who start out ahead usually stay there!). But this analysis showed up something interesting: a strong correlation between variability in the first half and performance in the second half. You can see this in the graph.

This shows that people who are most inconsistent when they start to learn perform best towards the end of learning. Usually inconsistency is a bad sign, so it is somewhat surprising that it predicts better performance later on. The obvious interpretation is in terms of the exploration-exploitation trade-off. The inconsistent people are trying out more things at the beginning, learning more about what works and what doesn’t. This provides them with the foundation to perform well later on. This pattern holds when comparing across individuals, but it also holds comparing across trials (so for the same individual, later performance is better for targets on which they were most inconsistent early in learning).
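The shape of this analysis can be sketched in Python with simulated data (invented purely to display the reported pattern – this is not our actual data or analysis code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # hypothetical number of participants

# Invented data built to show the reported pattern: people who are more
# variable early on end up with lower (better) performance scores later.
early_variability = rng.uniform(0.0, 1.0, n)
late_score = 10 - 5 * early_variability + rng.normal(0.0, 1.0, n)

# Correlation between first-half variability and second-half performance;
# for this simulated data it comes out strongly negative.
r = np.corrcoef(early_variability, late_score)[0, 1]
print(r)
```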

You can read about this, and more, in our new paper, which is open-access over at PLoS One A novel task for the investigation of action acquisition.

Categories
academic psychology

New paper: A novel task for the investigation of action acquisition

Our new paper, A novel task for the investigation of action acquisition, has been published in PLoS One today. The paper describes a new paradigm we’ve been using to investigate how actions are learnt.

It’s a curious fact that although psychologists have thoroughly investigated how actions are valued (i.e. how you figure out how good or bad a thing is to do), and how actions are trained (i.e. shaped and refined over time), the same effort has not gone into investigating how a behaviour is first identified and stored as a part of our repertoire. We hope this task provides a useful tool for opening up this area for investigation.

As well as the basic description of the task, the paper also contains a section outlining how the form of learning the task makes available for inspection is different from the forms of learning made available by other ‘action learning’ tasks (such as, for example, operant conditioning tasks). In addition to serving an under-investigated area of learning research, the task also has a number of practical benefits. It is scalable in difficulty, suitable for repeated measures designs (meaning you can do it again and again – it isn’t something you learn once and then can’t be tested on any more), as well as being adaptable for different species (meaning you can test humans and non-human animals on the task).

The paper is based on work done as part of the EU robotics project I’m on (‘I’M-CLeVeR‘) and on Tom Walton’s PhD thesis, The Discovery of Novel Actions

Categories
psychology quotes

Psychology’s missing link

Affordance links perception to action, as it links a creature to its environment. It links both to cognition, because it relates to meaning. Meaning is in the world, as much as in the mind, because meaning involves the appropriateness of an organism’s actions to its surroundings

Eleanor Gibson, in Gibson, E. J. (1988). Exploratory behavior in the development of perceiving, acting, and the acquiring of knowledge. Annual review of psychology, 39(1), 1–42.

Categories
academic psychology sheffield

We’re hiring!

The Department of Psychology at the University of Sheffield is hiring! Due to recent departures and a forthcoming expansion we have 6 academic posts to fill, for lecturers, senior lecturers/readers and chairs. Perhaps you, or someone you know, is looking for a job or a change – here’s why you should apply to work with us:

The Department: One of the very best Psychology departments in the UK for research, consistently rated ‘excellent’ (i.e. the top score) in the Research Assessment Exercises over the last 20 years. In the last RAE the department ranked 6th in the UK in terms of Research Power (i.e., quality × quantity of research activity). We have a strong tradition of interdisciplinary research and you’d be joining at a great time to renew that tradition of cognitive science. We have smart and enthusiastic undergraduate students, 80% of whom have AAA at A-level (i.e. the top grades). We have one of the largest numbers of postgraduate students of any UK psychology department, which includes taught masters courses (I teach on this one) and PhD students. The academic faculty are dedicated and collegiate, small enough in numbers to be friendly, large enough to be a resource for you in your research. We have one of the best staff-student ratios of any UK psychology department… All this, and you get me as a colleague.

The University: Times Higher Education University of the Year 2011, and one of the best universities in the world. The University of Sheffield has academic departments covering all major disciplines and is a ‘research-intensive’ university, meaning you wouldn’t spend all your time teaching.

The City of Sheffield. Ah, Sheffield! More parkland within the city limits than any other UK city. 7 trees for every person. The so-called “largest village in England”, a city renowned for its friendliness, for its sporting links, creative industries and generally too many good things to list here. And it’s in the middle of the country, so you can get about easily – two hours from the capital, three from Bristol, four from Edinburgh. And cheap – I live in a house which makes my London friends who can’t afford a flat sick with jealousy. I can walk to work, or round to friends’ houses. I’m talking quality of life here, people.

So, please pass the word around that we’re looking for psychologists of all types to apply for these positions. If you want to get in touch I’m happy to talk informally to anyone who is thinking about applying. Not that I have any significant power over the hiring decision, but I’m happy to spill the beans over what we’re looking for and what the department is like. You can contact me by phone or email.

(In sad, but unrelated, news, we lost our Professor of Developmental Psychology earlier this week. These job adverts are obviously quite separate from this sudden gap we have in Developmental Psychology, about which no plans have yet been made.)

Categories
advertising politics psychology

Media Violence, Unconscious Imitation, and Freedom of Speech

I really enjoyed the ideas discussed in Susan Hurley’s 2006 article “Bypassing Conscious Control: Media Violence, Unconscious Imitation, and Freedom of Speech“. The basic argument is that if we realised that we tend to automatically and unconsciously absorb and imitate patterns of behaviour that we observe, then our views of freedom of expression would be quite different from what they are. Although the presentation of the empirical psychology is sophisticated, the language does tend to slip into conceding that there is a domain of unconscious, automatic influences on behaviour and a separate realm of conscious, deliberative, choice. This is a failure to recognise, in my opinion, that for all behaviour it is causation all the way down (or all the way through, perhaps). But this quibble aside, the article gives evidential and philosophical reasons for us to be more concerned than we appear to be about the mental environment our culture promotes.

I was sad to find out that we won’t be hearing any more from Prof Hurley: Obituary by Andy Clark.

Susan L. Hurley (2006). Bypassing Conscious Control: Media Violence, Unconscious Imitation, and Freedom of Speech. In S. Pockett, W. Banks & S. Gallagher (eds.), Does Consciousness Cause Behavior? MIT Press.

Categories
advertising psychology

What if an evil corporation knew all about you?

Facebook have announced their first share offer. There was a fairly nuanced discussion on the BBC’s Today programme, which contained the useful maxim: if the service is free then you are the product. We pour personal information about ourselves – our locations, likes, friends and activities – into Facebook, and Facebook sells that bit of us to advertisers. John Humphrys managed a grumble about whether we could trust a corporation with all that personal information, but nobody in the discussion seemed able to raise much by way of concrete reasons not to give Facebook that information about yourself; they just had vague worries. Elsewhere, Cory has talked about the privacy bargain we make with corporations, and the dangers of making that bargain unknowingly or carelessly, but I want to leave that aside for a moment. Imagine a world where everyone was aware of exactly what Facebook were doing – i.e. selling information about our desires to advertisers. In this case, the vague worry about Facebook crystallises around a psychological question – can we be manipulated by corporations that know our desires? Imagine, if you will, that Facebook is the equivalent of the malevolent demon of Cartesian philosophy, still absolutely evil in intent, but different in that it can only control you through precisely targeted marketing messages, not through direct control of your senses. Would you still sign up for a Facebook account? Say the Facebook Demon finds out you like lemons. Lemon Products Inc advertise you Lemon Perfume, LemonTech advertise you a lemon squeezer and Just Lemons Inc. offer you 10% off the price of lemons in their stores. Is this a bad world? The answer is only yes if you believe in the power of advertisers to make us do things we don’t want.

Categories
Me psychology science

It isn’t simple to infer cognitive modules from behaviour

Previously I blogged about an experiment which used the time it takes people to make decisions to try and elucidate something about the underlying mechanisms of information processing (Stafford, Ingram & Gurney, 2011). This post is about the companion paper to that experiment, reporting some computational modelling inspired by the experiment (Stafford & Gurney, 2011).

The experiment contained a surprising result, or at least a result that I claim should surprise some decision theorists. We asked people to make a simple judgement – to name out loud the ink colour of a word stimulus, the famous Stroop Task (Stroop, 1935). We found that two factors which affected the decision time had independent effects – the size of the effect of each factor was not affected by the other factor. (The factors were the strength of the colour, in terms of how pale vs deep it was, and how the word was related to the colour: matching it, contradicting it or being irrelevant.) This type of result is known as “additive factors”, because the effects add independently of each other; on a graph of results this looks like parallel lines.

There’s a long tradition in psychology of making an inference from this pattern of experimental results to saying something about the underlying information processing that must be going on. Known as the additive factors methodology (Donders, 1868–1869/1969; Sternberg, 1998), the logic is this: if we systematically vary two things about a decision and they have independent effects on response times, then the two things are operating on separate loci in the decision making architecture – thus proving that there are separate loci in the decision making architecture. Therefore, we can use experiments which measure only outcomes – the time it takes to respond – to ask questions about cognitive architecture; i.e. questions about how information is transformed and combined as it travels between input and output.
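The additive-factors pattern itself is easy to illustrate with a toy simulation: if two factors each add a fixed cost to separate, serial processing stages, their effects on total response time sum independently and the condition means form parallel lines. This is only a sketch of the logic, not the model from the paper; all the stage durations and costs below are made up for illustration.

```python
# Toy illustration of additive factors: two serial stages, each slowed
# by one experimental factor. Total RT = stage1 + stage2, so the two
# factor effects add independently ("parallel lines" in a plot).

def simulated_rt(colour_strength_cost, word_condition_cost):
    """Hypothetical RT (ms): a perceptual stage plus a response stage."""
    perceptual_stage = 300 + colour_strength_cost   # slowed by paler colours
    response_stage = 200 + word_condition_cost      # slowed by conflicting words
    return perceptual_stage + response_stage

colour_costs = {"deep": 0, "mid": 40, "pale": 80}
word_costs = {"congruent": -30, "control": 0, "conflict": 60}

table = {(c, w): simulated_rt(cc, wc)
         for c, cc in colour_costs.items()
         for w, wc in word_costs.items()}

# Additivity: the conflict-minus-congruent effect is identical at every
# colour strength -- the signature additive-factors pattern.
effects = [table[(c, "conflict")] - table[(c, "congruent")]
           for c in colour_costs]
print(effects)  # -> [90, 90, 90]
```

The fallacy discussed below runs the inference the other way: observing this parallel pattern in data does not, by itself, establish that the underlying system has separate stages.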

The problem with this approach is that it commits a logical fallacy. True separate information processing modules can produce additive factors in response data (A -> B), but that doesn’t mean that additive factors in response time data imply separate information processing modules (B -> A). My work involved taking a widely used model of information processing in the Stroop task (Cohen et al, 1990) and altering it so it contained discrete processing stages, or not. This allowed me to simulate response times in a situation where I knew for certain the architecture – because I’d built the information processing system. The result was surprising. Yes, a system of discrete stages could generate the pattern of data I’d observed experimentally and reported in Stafford, Ingram & Gurney (2011), but so could a single-stage system in which all information was continuously processed in parallel, with no discrete information processing modules. Even stranger, both of these kinds of system could be made to produce either additive or non-additive factors without changing their underlying architecture.

The conclusion is straightforward. Although inferring different processing stages (or ‘modules’) from additive factors in data is a venerable tradition in psychology, and one that remains popular (Sternberg, 2011), it is a mistake. As Henson (2011) points out, there’s too much non-linearity in cognitive processing, so that you need additional constraints if you want to make inferences about cognitive modules.

Thanks to Jon Simons for spotting the Sternberg and Henson papers, and so inadvertently prompting this bit of research blogging

References

Cohen, J. D., Dunbar, K., and McClelland, J. L. (1990). On the control of automatic processes – a parallel distributed-processing account of the Stroop effect. Psychol. Rev. 97, 332–361.

Donders, F. (1868–1869/1969). “Over de snelheid van psychische processen. Onderzoekingen gedaan in het physiologisch laboratorium der Utrechtsche hoogeschool,” in Attention and Performance, Vol. II, ed. W. G. Koster (Amsterdam: North-Holland).

Henson, R. N. (2011). How to discover modules in mind and brain: The curse of nonlinearity, and blessing of neuroimaging. A comment on Sternberg (2011). Cognitive Neuropsychology, 28(3-4), 209-223. doi:10.1080/02643294.2011.561305

Stafford, T. and Gurney, K. N.(2011), Additive factors do not imply discrete processing stages: a worked example using models of the Stroop task, Frontiers in Psychology, 2:287.

Stafford, T., Ingram, L., and Gurney, K. N. (2011), Pieron’s Law holds during Stroop conflict: insights into the architecture of decision making, Cognitive Science 35, 1553–1566.

Sternberg, S. (1998). “Discovering mental processing stages: the method of additive factors,” in An Invitation to Cognitive Science: Methods, Models, and Conceptual Issues, 2nd Edn, eds D. Scarborough, and S. Sternberg (Cambridge, MA: MIT Press), 702–863.

Sternberg, S. (2011). Modular processes in mind and brain. Cognitive Neuropsychology, 28(3-4), 156-208. doi:10.1080/02643294.2011.557231

Stroop, J. (1935). Studies of interference in serial verbal reactions. J. Exp. Psychol. 18, 643–662.

Categories
Me psychology science

An experimental test of ‘optimal’ decision making

I’ve had a pair of papers published recently and I thought I’d have a go at putting simply what the research reported in them shows.

The first is called ‘Pieron’s Law holds during Stroop conflict: insights into the architecture of decision making‘. It reports a variation on the famous Stroop task. The Stroop task involves naming the ink colour of various words, words which can themselves be the name of colours. So you find yourself looking at the word GREEN in red ink and your job is to say “red”. If the word matches the ink colour people respond faster and more accurately; if the word doesn’t match, they are slower and less accurate. What we did was vary the strength of the colour component of the stimulus – e.g. we used more and less intense red ‘ink’ (actually we presented the stimuli on a computer screen, so the ‘ink’ was pixel values). There’s a well-established relationship between stimulus strength and responding – the ‘Pieron’s Law’ of the title – showing how response time decreases with increasing stimulus strength.
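Pieron’s Law is usually written as RT = R0 + k·I^(−β): response time falls as a power function of stimulus intensity I, levelling off at an irreducible minimum R0. A minimal sketch of that relationship (the parameter values here are illustrative, not fitted to our data):

```python
# Pieron's Law: mean response time falls as a power function of
# stimulus intensity, RT = R0 + k * I**(-beta).
# Parameter values are made up for illustration.

def pierons_law(intensity, r0=300.0, k=200.0, beta=0.5):
    """Predicted mean RT (ms) for a stimulus of the given intensity (> 0)."""
    return r0 + k * intensity ** (-beta)

# Stronger (more intense) colours yield faster responses:
for intensity in (0.25, 1.0, 4.0):
    print(intensity, pierons_law(intensity))  # 700.0, 500.0, 400.0 ms
```

Note the asymptote: however intense the stimulus, RT never drops below R0, the time taken by the non-decision parts of the response.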

So our experiment simply took two well-known psychological findings and combined them in a single experiment. The result is interesting because it can help us arbitrate between different theories of how decisions are made. One popular theory of decision making is that all the information relevant to the decision is optimally combined to produce the swiftest and most accurate response (Bogacz, 2007). There’s lots of support for this theory, including evidence from looking at responses of humans making simple judgements, recordings from the brain cells of monkeys and deep connections to statistical theory. It’s without doubt that the brain can and does integrate information optimally in some circumstances. What is interesting to me is that this optimal information integration perspective is completely at odds with the most successful research programme in post-war psychology: the heuristics and biases approach. This body of evidence suggests that human decision making is very non-optimal, with all sorts of systematic errors creeping into the way people combine information to make a decision. The explanation for these errors is that we process information using heuristics, mental shortcuts which give a good answer most of the time and cut down on the amount of effort we have to expend in deciding (“do what you did last time” is probably the most common decision heuristic).

My experiment connects to these ideas because it asked people to make a simple judgement (the colour of the ink), like the experiments supporting an optimal information integration perspective on decision making, but the judgement requested was just marginally more complex, because we manipulated both Stroop condition (whether the word and ink matched) and colour strength. If you are a straight-down-the-line optimal information decision theorist then you must believe that evidence about the decision based on the word is combined with evidence about the decision based on the colour to make a single ‘amount of evidence’ variable which drives the decision. In the paper I call this the ‘common metric’ hypothesis. The logic is a bit involved (see the paper), but a consequence of this hypothesis is that the size of the effect of the word condition should vary across the colour strength condition, and vice versa. In other words, you should see an interaction. Visually, the lines on the graph of results would be non-parallel.

Here’s what we found:

What you’re looking at is a graph of response times (the y-axis) for different colour strengths (the x-axis). The three lines are the three Stroop conditions: when the word matches the colour (‘congruent’), when it doesn’t match (‘conflict’) and when there is no word (‘control’). The result: there is no interaction between these two factors – the lines are parallel.

The implication is that you don’t need to move very far from simple perceptual decision making before human decision making starts to look non-optimal – or at least non-optimal in the sense of combining information from different sources. This is important because of the widespread celebration of decision making as informationally optimal. Reconciling this research programme with the wider heuristics and biases approach is important work, and fits more generally with an honourable tradition in science of finding “boundary conditions” where one way the world works gives way to another way.

Coming up next: inferring from behavioural results to underlying cognitive architecture – it’s not as simple as we were told (Stafford & Gurney, 2011).

References:

Bogacz, R. (2007). Optimal decision-making theories: linking neurobiology with behaviour. Trends in Cognitive Sciences, 11(3), 118–125.

Stafford, T., Ingram, L. and Gurney, K.N.(2011), Pieron’s Law holds during Stroop conflict: insights into the architecture of decision making, Cognitive Science 35, 1553–1566.

Stafford, T. and Gurney, K. N. (2011), Additive factors do not imply discrete processing stages: a worked example using models of the Stroop task, Frontiers in Psychology, 2:287.

Categories
People I know psychology sheffield

What Sheffield’s sharing (bit.ly hack day report)

Yesterday was my research group’s first hackday. It’s a concept I borrowed from the software geeks, but which I thought we could use a bit of in psychological science. The plan was for the whole lab to get together and spend the day working on the same dataset, to see what we could come up with after a day of intense work.

Inspiration was provided by visiting data wizard Mike Dewar, who works with the link shortening service bit.ly. Mike was able to give us a slice of bit.ly data – all the shared links which the people of Sheffield had clicked on in a week. The leap from tech/internet business to psychology department isn’t so weird when you think about it. We’re both interested in taking high volume measurements of behaviour and trying to understand what is really going on (for us, inside the mind, for bit.ly, with the users behind the clicks).

We got together in one room and Mike guided us though some of the nuances of analysing the data. After a few busy hours, and along with those essential hackday accompaniments – takeaway food and cola (open source of course) – we had a snapshot of the kind of sites that people in Sheffield shared with each other.

This plot shows the trend of the week’s clicks for the top ten shared sites for Sheffield (with total click rate on the y-axis, and time on the x-axis). The scale is a bit small (click to expand), so here in a list is Sheffield’s top ten shared links for the analysed week:

1. Facebook (of course)
2. BBC (public service broadcasting FTW)
3. YouTube
4. GiveMeFootball
5. Celebuzz
6. Guardian
7. Google
8. Linksynergy
9. southyorkshire
10. swfc

Perhaps not a surprise, but we can see that people are sharing information on Facebook, on news sites and about celebrities and football. And I note that the Owls win the Sheffield link-sharing derby! You can also see the daily peaks in click activity (at lunchtime? Or just after lunch, perhaps!). With a bit more time we could delve into what times people prefer to click on different types of links (news vs business vs gossip would be an interesting comparison), and how the activity of a particular link changes over time, as it spreads out along social networks, passing from person to person, and a thousand other things. So think of this as a work-in-progress report. I’ll come back to you if we generate anything else.
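The core of the aggregation is simple: count clicks per domain from a log of (timestamp, URL) click records. A minimal sketch, with made-up records – the real bit.ly data arrived in its own format, and the full analysis also binned clicks by time of day:

```python
# Sketch: rank shared sites by click count from a log of click records.
# The records below are invented examples, not real bit.ly data.
from collections import Counter
from urllib.parse import urlparse

clicks = [
    ("2011-07-04T12:01:00", "http://www.facebook.com/somepage"),
    ("2011-07-04T12:03:00", "http://www.bbc.co.uk/news/uk-123"),
    ("2011-07-04T13:10:00", "http://www.facebook.com/otherpage"),
    ("2011-07-04T13:12:00", "http://www.youtube.com/watch?v=abc"),
]

# Extract the domain from each clicked URL and tally.
domain_counts = Counter(urlparse(url).netloc for _, url in clicks)

for domain, n in domain_counts.most_common(3):
    print(domain, n)
```

The same `Counter` could be keyed by (domain, hour-of-day) tuples to get at the lunchtime-peak question in the plot above.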

Thanks to Mike and bit.ly for allowing us to play with their data, and to C, Maria, Donny, Tom, Martin and Stu for taking part.

Categories
psychology sheffield

Can you touch-type? Would you like to get to know your brain better?

We’re looking for touch-typing data-geeks who’d like to have their brainwaves recorded, all in the name of science. All you have to do is be able to touch-type, and be willing to come to see us at the Department of Psychology, University of Sheffield for an hour and a half. We’ll record your neural activity while you show off your typing skills for us. Afterwards, we’ll provide you with your brainwave data, and the behavioural data of what you actually typed.

We collect a record of what your brain is doing using a 128-channel EEG net, which works by recording electrical activity at the scalp. This electrical activity changes depending on the activity of your brain cells – as they produce the billions of electrochemical signals that are the basis for your every thought and action. We’ll be analysing this data ourselves, because we’re interested in typing as an example of complex skill performance, but we’d also like to give everyone who helps us out the chance to take away their individual data. We’re really curious to see how people outside of the Psychology department might use it. EEG data contains lots of oscillations and lots of spreading and merging waves of activity. As well as telling us something about when and how certain brain regions become active, this means it can also be used to generate cool pictures and sounds! If you’re comfortable with processing numbers and would like to try out your skills on some numbers that come direct from your most intimate organ, please get in touch!
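To give a flavour of what you could do with the data: one of the simplest analyses of a single EEG channel is to estimate its power spectrum with an FFT, which makes those oscillations visible. The signal below is synthetic (a 10 Hz “alpha”-band sine wave plus noise), standing in for one of the 128 recorded channels, and the sampling rate is an assumed typical value, not necessarily what our net uses:

```python
# Sketch: power spectrum of one (synthetic) EEG channel via the FFT.
import numpy as np

fs = 250                      # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)  # 10 seconds of samples
rng = np.random.default_rng(0)

# Synthetic channel: a 10 Hz "alpha" oscillation buried in noise.
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

freqs = np.fft.rfftfreq(t.size, d=1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

peak_freq = freqs[np.argmax(power)]
print(peak_freq)  # the dominant oscillation, close to 10 Hz
```

The same spectrum, computed in a sliding window, is one route to the “cool pictures” mentioned above: a spectrogram of your brain at the keyboard.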

It’s t dot stafford at sheffield dot ac dot uk or @tomstafford

Categories
intellectual self-defence psychology

For a theatre of the mind

“Neuro” is fashionable these days, from neuroethics to behaviour change (one of which is philosophy, and the other of which is psychology, but both are promoted on their connection with neuroscience). Something which is under-discussed is that psychology has a rich set of fundamental, and different, perspectives on how we ought to think about the mind and brain. These compete with, and complement, each other. The one you adopt will dramatically affect how you read a situation and the “psychological” solutions you are inspired to propose. Probably most people are aware of the neuroscience perspective, and the associated worldview of the mind as a piece of biochemical machinery. From this we get drugs for schizophrenia and brain scans for lie detection. This is the view of the mind which is ascendant. Probably, also, most people are vaguely aware of the Freudian perspective, that dark territory of the undermind with its repressed monsters and tragic struggles. From here we get recovered memory therapy and self-esteem workshops for young offenders. Although people will be aware of these perspectives, will they also be aware of the contradictions between them, and the complements, or the fact that both are viewed by some professionals in psychology as optional, or even harmful, ways of thinking about the mind? And what about the chorus of other perspectives, not all necessarily contradictory, but all catalysing insights into mind and behaviour: evolution, cybernetics, cognitivism, situationism, narrative approaches, dynamic systems theory. Each of these will not just give you different answers, but promote entirely different classes of questions as the central task of psychology.

I’d love to work on a theatre or exhibition piece about conceptions of the mind, something which dramatised the different understandings of mind. I think it could be a refreshing change from a lot of “art-science” pieces about psychology, which unthinkingly accept the cog-neuro consensus of Anglo-US psychology and/or see their purpose as bludgeoning the public with a bunch of information they have decided “people should know”. Something about perspectives, rather than facts, would inherently lend itself to dramatic interpretations, and open a space for people to engage with how they understand psychological science, rather than being threatened, as is so common, with what scientists think they should understand.