He also articulates one of my main reasons for having a weblog: we just don’t see what we can’t formalize.
How weird is this: I was looking at a review paper of the development of visual acuity in human infants and I plotted the average acuity measurements for the first three years of life. It’s a straight line. A very straight line.
Not only is it testament to the experimental rigour of the studies included in the review, but it’s also pretty developmentally odd. I mean, what else develops linearly? Height doesn’t. Vocabulary use doesn’t. Number conservation doesn’t.
Does anything else develop linearly? There must be so many non-linear processes involved in the neural development of vision it’s a marvel it comes out linear with respect to age.
Courage, M.L., and Adams, R.J. (1990). Visual acuity assessment from birth to three years using the acuity card procedure: Cross-sectional and longitudinal samples. Optometry and Vision Science, 67(9), 713-718.
Acuity was measured with black and white gratings of different spatial frequencies and is shown here in cpd, cycles per degree of visual angle (this is an inverse function of the Snellen rating, eg 20/20).
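For illustration, here’s a sketch of how you might check just how straight that line is. The acuity values below are invented (and deliberately made perfectly linear), not the review’s data, and the Snellen conversion assumes the common approximation that 20/20 corresponds to roughly 30 cpd:

```python
import numpy as np

# Hypothetical acuity measurements (cycles per degree) over the first
# three years of life -- invented numbers for illustration only
ages_months = np.array([1, 6, 12, 18, 24, 30, 36])
acuity_cpd = np.array([1.25, 5.0, 9.5, 14.0, 18.5, 23.0, 27.5])

# Fit a straight line: acuity = slope * age + intercept
slope, intercept = np.polyfit(ages_months, acuity_cpd, 1)

# R^2 tells us how straight the line really is
predicted = slope * ages_months + intercept
ss_res = np.sum((acuity_cpd - predicted) ** 2)
ss_tot = np.sum((acuity_cpd - np.mean(acuity_cpd)) ** 2)
r_squared = 1 - ss_res / ss_tot

def cpd_to_snellen(cpd):
    """Snellen denominator, assuming 20/20 is about 30 cpd."""
    return 20 * 30 / cpd  # e.g. 3 cpd -> 20/200

print(f"slope {slope:.2f} cpd/month, R^2 {r_squared:.4f}")
print(f"27.5 cpd is roughly 20/{cpd_to_snellen(27.5):.0f}")
```

With real cross-sectional data the interest would be in how close R² gets to 1 across the whole age range.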
I mean, Christ, I’m not the first to say it, but some psychological research is just so obvious I want to bite out my own eyes. To pick unfairly, and at random, something I came across today, why exactly are we doing research like this? –
“Women’s and Men’s Personal Goals During the Transition to Parenthood” (Salmela-Aro et al, 2000)
Abstract: To investigate how women’s and men’s personal goals change during the transition to parenthood, the authors studied 348 women (152 primiparous and 196 multiparous) and 277 of their partners at 3 times: early in pregnancy, 1 month before the birth, and 3 months afterward. At each measurement, participants completed the Personal Project Analysis questionnaire (B. R. Little, 1983). The results showed that during pregnancy women became more interested in goals related to childbirth, the child’s health, and motherhood and less interested in achievement-related goals. After the birth women were more interested in family- and health-related issues. These changes were more substantial among the primiparous than among the multiparous mothers. Although the men’s personal goals changed during the transition to parenthood, these changes were less substantial than those found among the women.
Can this be as pointless as it sounds? Women worry more about impending motherhood while pregnant, and less about other things. Hold the front page.
Now there are a few arguments you can make for researching ‘common sense’.
You confirm 99% of it, but you falsify 1% of it, and that’s the important bit.

Common sense is just a set of circumstance-variable prejudices. Not only does ‘common sense’ contain multiple, often erroneous and/or contradictory, positions, but it’s easy for people to say ‘that’s just common sense’ after the fact.

It might be obvious that something is so, but exactly how is it so? Women deprioritise career goals during pregnancy – obviously. But how much do they do this? What is the variation? How does this change across demographics? Across cultures? Across generations? (This said, if this is the main justification for the research then there is a fairly major problem with the communication of it.)
But despite this, I think we’ve missed a fairly major distinction between description and explanation here. Psychological science needs more of the latter. An explanation provides a connection between different levels of description, or between different phenomena. Granted, you have to sort your descriptions to some degree first before you can do this, but come on, people.
And don’t think that I’m just talking about social psychology here. The brainporn fetish of cognitive neuroscience is just as much to blame. The next time I see a functional imaging study that demonstrates that a task involving mental activity requires various different bits of the cortex I shall weep.
The added difficulty for social psychology is that most of the concepts involved have already been explored, with far greater finesse and insight than science can ever manage, by millennia of cultural activity. If you’re going to do some research here you need to bring some added value. Here’s my provisional list of those cases in which this might be possible:
When we know something to be true, but we need to know exactly how true it is – the extent, the variability, the limits of the effect and the interaction with other factors.

When we know something to be true, and science can show that it isn’t (eg graphology; the belief that people wouldn’t electrocute others just because they are ordered to by a scientist). [and related to this]

When the common sense perception of individuals is persistently biased (eg self-rating of ability to detect lies, judgements about how fast queues move, perception of sleep duration among insomniacs, etc).

When we know something to be true, but we don’t know how or why it is true (enter, stage right, cognitive neuroscience and recourse to explanatory primitives from lower levels of description).
Katariina Salmela-Aro, Jari-Erik Nurmi, Terhi Saisto and Erja Halmesmäki (2000). Women’s and Men’s Personal Goals During the Transition to Parenthood. Journal of Family Psychology, 14(2), 171-186.
Systems theory, like catastrophe theory before it, is a descriptive theory, not a predictive theory. Which means that it’s harder to say if it’s any use (and, indeed, you can always re-phrase any discoveries made within that framework using the language of the old framework, once you have made them).
Given this, we’d expect systems theory to have the most utility in fields which are suffering most from a lack of adequate epistemological tools. Which is why, I guess, I’m convinced of the necessity of some kind of systems thinking in cognitive neuroscience and in social psychology.
And why, maybe, the best systems theory work in psychology to date has been in developmental psychology.
In a review by Steven Poole of Edelman and Tononi’s Consciousness: How Matter Becomes Imagination I found this:
…they claim that Schopenhauer called the problem of consciousness the “world knot”, and adopt this lovely image as their catchphrase. But that is not what Schopenhauer said. What he calls the “world knot”, in On the Fourfold Root of the Principle of Sufficient Reason, is “the identity of the subject of willing with that of knowing”. Edelman and Tononi give a remarkably rich and provocative hypothesis of the subject of knowing, but the will soars free, as yet untethered by physical explanation.
Great image – very Norse – and the identity of the subject of will with the subject of knowing is definitely a biggie, both for the psychology and the metaphysics of consciousness.
There’s a difference between scientific explanations and explanations involving science.
Something for the fMRI crowd to look out for, methinks. I’d like to know a little more than ‘function X activates region Y of the brain’, please.
A time-lapse 3-D movie that compresses 15 years of human brain maturation, ages 5 to 20, into seconds shows gray matter – the working tissue of the brain’s cortex – diminishing in a back-to-front wave, likely reflecting the pruning of unused neuronal connections during the teen years. Cortex areas can be seen maturing at ages in which relevant cognitive and functional developmental milestones occur. The sequence of maturation also roughly parallels the evolution of the mammalian brain, suggest Drs. Nitin Gogtay, Judith Rapoport, NIMH, and Paul Thompson, Arthur Toga, UCLA, and colleagues, whose study is published online during the week of May 17, 2004 in The Proceedings of the National Academy of Sciences.
Yummy, mpeg here
hypothesis: ideological isolation is impossible without social isolation
falsifying counter-examples anyone?
A book review in the New Yorker by Christopher Caldwell of Barry Schwartz’s ‘The Paradox of Choice’
Schwartz looks at the particular patterns of our irrationality, relying on the sort of research pioneered by two Israeli-American psychologists, Daniel Kahneman and the late Amos Tversky. It turns out, for instance, that people will often consciously choose against their own happiness. Tversky and a colleague once asked subjects whether they’d prefer to be making thirty-five thousand dollars a year while those around them were making thirty-eight thousand or thirty-three thousand while those around them were making thirty thousand. They answered, in effect, that it depends on what the meaning of the word ‘prefer’ is. Sixty-two per cent said they’d be happier in the latter case, but eighty-four per cent said they’d choose the former.
Research in the wake of Kahneman and Tversky has unearthed a number of conundrums around choice. For one thing, choice can be ‘de-motivating.’ In a study conducted several years ago, shoppers who were offered free samples of six different jams were more likely to buy one than shoppers who were offered free samples of twenty-four. This result seems irrational – surely you’re more apt to find something you like from a range four times as large – but it can be replicated in a variety of contexts. Students who are offered six topics they can write about for extra credit, for instance, are more likely to write a paper than students who are offered thirty.
Nor is the ‘paradox of choice’ limited to the shopping aisle. It helps explain why so many people at age thirty are still flailing about, trying to choose a career – and why so many marriageable singles wind up alone. You await a spouse who combines the kindness of your mom, the wit of the smartest person you met in grad school, and the looks of someone you dated in 1983 (as she was in 1983) . . . and you wind up spending middle age by yourself, watching the Sports Channel at 2 a.m. in a studio apartment strewn with pizza boxes.
[and, after discussing one solution, that of limiting choice, Caldwell discusses the extent to which consumers are already using their freedom of choice to choose, in effect, a limiting of their choices]
Robert Reich, in his recent book ‘The Future of Success,’ notes that modern consumers, like corporations, respond to the marketplace by ‘outsourcing’ choice. They hire experts – critics, in the old way of looking at things. While many experts, such as interior decorators, offer personalized service and charge a mint, the masses have access to choosing services that are essentially free. That, in effect, is what a ‘brand’ is.
One function of certain New Economy innovations is to make choosing easier by automating it. TiVo, in theory, allows television addicts to lose themselves in ever more programming choices, but it can also be used as a filter, a means of allowing viewers to dispense with choosing altogether. Internet grocery services, such as Peapod, allow shoppers to fill out a template that protects them from having to rechoose every week. In practical terms, the Peapod shopper is confronted with far fewer new brands and choices than was a suburban housewife pushing her cart down a grocery aisle during the Kennedy Administration.
[although this does look like one kind of solution, I’m yet to be convinced that this is the way forward – the consumption of a tailored set of limited choices customised to my desires seems very limiting for the potential of human growth, certainly bad in terms of social capital (lots of bonding, no bridging, in the terminology) and pretty sinister in its implications for social control as well
but Caldwell is off to other terrain for the end of the review…]
…the phenomenon – sometimes called the ‘hedonic treadmill’ – can also explain why disaster, whether bankruptcy or incapacitation, seldom burdens our spirits for very long.
Strangely, we lose sight of our human resilience when we make big choices. People are consistently puzzled that so many things they had dreaded – from getting fired to being ditched by a spouse – ‘turned out for the best.’ Gilbert and Wilson even speculate (in a diplomatic way) that our inability to forecast this adaptive capacity spurs some people to a belief in God. ‘Because people are largely unaware that their internal dynamics promote such positive change,’ they write, ‘they look outward for an explanation.’ A tendency to overestimate the joy we’ll get from buying baubles and winning honors is only half of a complex predisposition. The other half is our enormous capacity for happiness, even in the absence of such things. The surprise isn’t how often we make bad choices; the surprise is how seldom they defeat us.
[all book reviews should be like this!]
Via steveberlinjohnson.com (in this edited excerpt Steve Johnson is quoting Antonio Damasio)
On the face of it, the idea that the speed of modern life will lead to cognitive overload is a familiar complaint: cultural critics like David Shenk and the late Neil Postman have warned of the dangers of an accelerated society. But Damasio has a twist: he’s not saying that the brain can’t keep up with society – he’s saying that part of the brain can’t keep up with society, while another part, thus far, has been game to go along for the ride.
“We really have two systems that are totally integrated and work perfectly well with each other, but that are very different in their time constants. One is the emotional system, which is the basic regulatory system that works very slowly, with time scales of a second or more. Then you have the cognitive system, which is much faster, because of the way it’s wired, and because a lot of the fiber systems are totally myelinated – which means it works much faster. So you can do a lot of reasoning, a lot of recognition of objects, remembering names, in just a few hundredths of a second. And in fact it has been suggested that we’re optimizing those times – that we’re working faster and faster…
[however] there is no evidence whatsoever that the emotional system is going to speed up… In fact, I think that it’s pretty clear that the emotional system, because it is a body regulatory system, is going to stay at those same slow time constants. There’s this constant limit, which is that the fibers are unmyelinated. So the conduction is very slow.”
In a sense, this is an engineering problem: the system that builds somatic markers – the system that encodes the stream of consciousness with value – works more slowly than the system that feeds it data to encode. The result is not a short-circuit of our cognitive machinery. (We can in fact process all that data, and perhaps more.) The danger comes from the emotional system shorting out.
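As a crude back-of-the-envelope illustration of why myelination matters for those time constants – the velocities below are generic textbook ballpark figures, not numbers from Damasio:

```python
# Rough conduction velocities (textbook ballpark figures, m/s)
MYELINATED_V = 60.0    # large myelinated fibre
UNMYELINATED_V = 1.0   # unmyelinated C-fibre

def conduction_delay_ms(distance_m, velocity_m_per_s):
    """Time (ms) for a spike to travel distance_m at the given velocity."""
    return distance_m / velocity_m_per_s * 1000.0

# Compare a ~1 metre path, e.g. from cortex down through
# body-regulatory circuits
fast = conduction_delay_ms(1.0, MYELINATED_V)
slow = conduction_delay_ms(1.0, UNMYELINATED_V)
print(f"myelinated: {fast:.0f} ms, unmyelinated: {slow:.0f} ms")
```

Tens of milliseconds versus a full second over the same distance – roughly the two-orders-of-magnitude gap Damasio describes between the cognitive and emotional systems.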
Young children (eg age 8) say they prefer savannah landscapes over other types of natural landscape (Balling & Falk, 1982). Older children and adults don’t exhibit this preference. The evolutionary psychology interpretation of this is that there is an innate preference for the environment within which modern humans evolved, but that this preference is over-ridden by lifetime development of aesthetic preferences which are influenced by your personality and environment.
Or, put another way, you’re born with a feel for the plains of east Africa, but as you get older you can grow to love the flats of Peckham.
In lots of ways this seems like a typical piece of evolutionary psychology. It could be true – and if it was true it might be interesting – but there’s no reason why it has to be true. Has the experiment been replicated? Has it been replicated cross-culturally? Has it been replicated when controlling for scene complexity and for the adaptive value of the landscape (ie the prospect-refuge affordances)? The answers to these questions seem to be either ‘no’ or ‘not a lot’ (ie not very well). Obviously I could be wrong, and some more delving into the literature might turn up some more references.
It also seems to be crying out for a replication with pre-linguistic infants using a preferential-looking paradigm…
 I think my further reading would begin here:
Appleton, J. 1996. The Experience of Landscape. Revised edition. New York, Wiley.
Orians, G.H. & Heerwagen, J.H. 1992. Evolved responses to landscapes. In Barkow, J.H., Cosmides, L. & Tooby, J. (eds) The Adapted Mind. Oxford, Oxford University Press, pp. 555-579.
The idea that brain science is somehow going to do something which will “exceed the wildest dreams of poets and philosophers” is very light and very ignorant. It is, however, a characteristic idea of our time.
Which is true, as far as it goes, but he also says:
Equally, our minds work on the basis of myriad assumptions. If these are exposed as the deterministic workings of mere chemistry, then we might not even be able to get through the day, never mind the next million years.
which Paul Myers characterises well as
Well. I guess we’d better stop studying the brain then, shouldn’t we? Who knows, we might actually learn things about how it works that don’t involve angels or ghosts, and then people will get depressed.
The great thing about consciousness is its sheer obstinacy in the face of contradictory evidence. We don’t need to worry too much about the existential dangers of too much scientific information (the damage has already been done in that respect). We do need to worry about the social use, misuse and abuse of scientific information… But that’s another story.
Via http://www.cns.caltech.edu/~carlos/coolpapers/, a list of ‘cool scientific papers’:
Jennifer Linden (neuroscientist) wrote:
Here are my paper suggestions. It’s been years since I read these papers, but I still remember them — the experiment is truly beautiful. The background: Normally, each eye innervates a single tectum in the frog, with no competition between the eyes. The experiment: What happens when you implant a third eye in a frog, so that two eyes are forced to innervate the same tectum? The result: ocular dominance stripes, in an animal which normally doesn’t have them. A beautifully clean demonstration that ocular dominance stripes arise from competition between the two eyes, rather than from some kind of pre-established pattern of innervation.
Constantine-Paton, M. and Law, M.I., “Eye-specific termination bands in tecta of three-eyed frogs”, Science 202 : 639-641 (1978)
Law, M.I. and Constantine-Paton, M., “Anatomy and physiology of experimentally produced striped tecta”, Journal of Neuroscience 1 : 741-759 (1981)
From the Natural History Museum picture library, a sensory homunculus:
This model shows what a man’s body would look like if each part grew in proportion to the area of the cortex of the brain concerned with its sensory perception.
A post from Onemonkey, Context is everything, on the perpetual construction and reconstruction of memory and the research findings of Susan Engel.
From the outset our so-called episodic memories act in ways far removed from the faithful verbatim recording we often feel we’ve experienced….[Children] only really relive the past when prompted to do so, and rely heavily on confirmation and elaboration from the adult or their peers. And when they tell these stories to others and later to themselves, the world is seen through [the distorting lens of their limited linguistic capacities].
This seemed important:
Environmental enrichment prevents effects of dark-rearing in the rat visual cortex
Nature Neuroscience, March 2004 Volume 7 Number 3 pp 215 – 216
Alessandro Bartoletti, Paolo Medini, Nicoletta Berardi & Lamberto Maffei
Abstract: Environmental enrichment potentiates neural plasticity, enhancing acquisition and consolidation of memory traces. In the sensory cortices, after cortical circuit maturation and sensory function acquisition are completed, neural plasticity declines and the critical period ‘closes’. In the visual cortex, this process can be prevented by dark-rearing, and here we show that environmental enrichment can promote physiological maturation and consolidation of visual cortical connections in dark-reared rats, leading to critical period closure.
Presumably because environmental enrichment encourages cannibalisation of the proto-visual areas by other functions. Another tale of activity dependent neural development…
If you enjoyed Relevant History on word spacing (and you should have) or me on the invention of perspective you must read Walter Ong’s Orality and Literacy. I can’t do the erudition, sweep and profundity of the book justice, but here are a few quotes. You’ll have to excuse me if I leap to the conclusions rather than précis the arguments here. My thoughts are preceded by a !, everything else is a quote, paraphrase or summary of Ong.
Walter Ong (1982,2002) Orality and Literacy – The Technologizing of the Word. Routledge, New York.
Writing makes words appear to be things. In oral culture words have no residue, they are just potential. They only exist in transience. The visual form of words gives you control over them. Without stable form they are spectres – always actions, always transient, always willed – intrinsically agenic. (p14)
When an often-told story is not actually being told, all that exists of it is the potential in certain human beings to tell it (p11)
In oral cultures you only know what you can recall – in literate cultures you know what you can look up. Formulas and themes are central to oral culture, for they provide structure for works which rely upon human memory to persist. By removing this constraint, literacy unleashes chaos on knowledge.
Writing separates the knower from the known and thus sets up conditions for ‘objectivity’ p45
oral societies live very much in a present which keeps itself in equilibrium or homeostasis by sloughing off memories which no longer have present relevance
Table summarising Chapter 3: Some psychodynamics of orality
| oral culture | literate culture |
| words as actions | words as objects |
| narrative | facts & lists |
| ever-present | past and future looking |
| empathetic & participatory | objectively distanced |
| restricted code | elaborated code |
! It is impossible not to note at this point how the features of oral culture are those idealised by the environmental and new-age movements.
! While literate knowledge is abundant, oral knowledge (‘ancient wisdom’) is concentrated. Single items – koans, kata, poems, rituals, icons – physically embody depth of information and can reveal it to the individual through study of that single thing (compare: knowledge is explicit, in multiple sources, and the individual can collect that information by acquiring, ie reading, those sources).
The ‘restricted’ linguistic codes of primarily oral cultures are just as specific and expressive, but much content is embedded in the context the language is used in. The ‘elaborated codes’ of text-based culture have their meaning rooted within the language itself.
! Note the paradoxical nature of creation myths of pre-literate cultures (eg Norse or Ancient Greek) – pre-literate cultures are ever-present. They do not see the need for creation ex nihilo, so the writing down of these myths makes them seem nonsensical (ie illogical). The great, later, religions – with their sacred texts – are only possible because of the development of literacy and the grafting of a new literate form on top of passing oral cultures.
Is the idea of a Jaynesian software rewrite of self-consciousness subsumed within the idea of a transition from oral to literate culture? (p28)
By separating the knower from the known…writing makes possible increasingly articulate introspectivity, opening the psyche as never before not only to the external objective world quite distinct from itself but also to the interior self against whom the objective world is set (p104)
Because spoken language is necessarily shared, it promoted groupness. Language is only what can be mutually understood. Reading is done individually. Literate culture promotes individuality and introspection. In writing the audience is always imagined (simulated) rather than actual.
Self-analysis requires a certain demolition of situational thinking. It calls for isolation of the self, around which the entire lived world swirls for each individual person, removal of the center of every situation from that situation enough to allow the centre, the self, to be examined and described.
By removing words from the world of sound where they first had their origin in active human interchange and relegating them definitively to visual surface, and by otherwise exploiting visual space for the management of knowledge, print encourages human beings to think of their own interior conscious and unconscious resources as more and more thing-like, impersonal and religiously neutral. Print encouraged the mind to sense that its possessions were held in some sort of inert mental space (p129)
! Writing is a cognitive technology for transforming meaning across sensory codes. It takes what defines human uniqueness and subsumes it to work in our most powerful modality. This raises the question of what kinds of operation are best subserved by audition. I suggest associative recall (the chaining of items in sequences) – note that the order of the alphabet is learned auditorily but employed visually (in indexes)!
What’s amazing is that our cognitive abilities have coped so well with such a radical technological hybridisation. It’s as astounding as the fact that we can live in cities of millions when we evolved to live in tribes of hundreds.
The present-day phenomenological sense of existence is richer in its conscious and articulate reflection than anything that preceded it. But it is salutary to recognise that this sense depends on the technologies of writing and print, deeply interiorised, made part of our own psychic resources. The tremendous store of historical, psychological and other knowledge which can go into sophisticated narrative and characterisation today could be accumulated only through the use of writing and print (and now electronics). But these technologies of the word do not merely store what we know. They style what we know in ways which made it quite inaccessible and indeed unthinkable in an oral culture.
! So, back to my original compulsion – how was inner life experienced before the ascendancy of individual perspective? At the very least the articulation of that experience couldn’t have occurred in the way it does now. If a tree falls in the forest and nobody hears, does it make a sound? If a quale cannot be articulated, is it experienced?
I feel like I’ve reached the end point of this question’s productivity. Which isn’t to say that it is answered, but rather that the question will have to be changed to go forward.
Not only do different people call different structures in the brain by different names, depending on which classificatory scheme they use and which species they mainly investigate, but also the different structures are all hierarchically organised, so that any given structure is probably also part of several supra-structures and will contain a number of sub-structures.
Help is at hand.
This is a basic crib sheet for the terminology: prefixes, directional terms, etc.
BrainInfo is great for definitions of areas, showing where they are in the hierarchy, what else they are called and what else they contain.
And the Whole Brain Atlas is another great resource for orientating yourself.
A study of a class of Quebec medical students has prompted researchers to ask whether a hidden curriculum exists in the structure of medical education that inhibits rather than facilitates moral reasoning. The study appears in the April 1 edition of the Canadian Medical Association Journal (CMAJ 2003;168:840-4).
Using a French variant of the Kohlberg moral reasoning scale…
The authors say that in the results they did not observe the increase in the development of moral reasoning that was expected with maturation and involvement in university studies: “We found a significant decrease in weighted average scores after three years of medical education.”
I’d love to see the appropriate controls for all other kinds of further education. Reminds me a bit of the anecdata about selfish (aka ‘rational’) behaviour increasing as economics students progress in their studies.
So I was in a meeting at the OU the other day, talking to this developmental psychology professor, and we got onto the topic of status hierarchies. If you go around a class of children and ask everyone who is popular and who is unpopular you can classify the children into accepted (i.e. liked), rejected (i.e. disliked) and controversial. Then you have another group of children – the neglected – who simply aren’t mentioned by anyone else. They don’t appear on the social radar at all! Sadly, although kids in the first three categories tend to move around – the rejected can become accepted, the accepted controversial, etc – the neglected category is by far the most stable. And worse than that, belonging to that category is strongly associated with poor academic performance, behavioural problems and low self-esteem.
So far, so standard sociometry, you say. What this professor said that interested me was that with children’s status hierarchies you can disrupt them most successfully by removing the kids at the bottom (the rejected, that is, not the neglected). If you just remove the kid at the top then the second most popular kid becomes the most popular, and so on down the hierarchy. Remove the kid at the bottom and there is a tendency for the whole thing to reassemble. A whole new micro-social order is created.
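The accepted/rejected/controversial/neglected classification can be sketched in code. This loosely follows the standard sociometric scheme (à la Coie & Dodge) of standardising ‘liked most’ and ‘liked least’ nomination counts, but the ±1 thresholds and the example counts are my own simplifications:

```python
from statistics import mean, pstdev

def standardise(xs):
    """Convert raw nomination counts to z-scores across the class."""
    m, s = mean(xs), pstdev(xs)
    return [(x - m) / s if s else 0.0 for x in xs]

def classify(liked, disliked):
    """Classify children from 'liked most'/'liked least' nomination counts.

    social preference = liked - disliked (in z-scores)
    social impact     = liked + disliked (in z-scores)
    Thresholds are simplified for illustration.
    """
    lz, dz = standardise(liked), standardise(disliked)
    labels = []
    for l, d in zip(lz, dz):
        preference, impact = l - d, l + d
        if preference > 1:
            labels.append("accepted")       # liked, not disliked
        elif preference < -1:
            labels.append("rejected")       # disliked, not liked
        elif impact > 1:
            labels.append("controversial")  # both liked and disliked
        elif impact < -1:
            labels.append("neglected")      # off the social radar
        else:
            labels.append("average")
    return labels

# Four children: very liked, very disliked, both, mentioned by nobody
print(classify([8, 1, 6, 0], [0, 7, 5, 0]))
```

The full procedure adds extra conditions on the raw z-scores, but this is the shape of it: the neglected child is defined not by negative nominations but by the absence of any.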
He also said that the same thing was true for pecking order in chickens.
I love the way this inverts the way you might think about the importance of people in a hierarchy with respect to the definition of that hierarchy. The guys at the top are immediately replaceable. It’s the guys at the bottom who define the social structure. Speaking to Matt about this, he suggested that it was all about the referent classes that the micro-society uses to define itself against. Everyone uses the people at the bottom as the standard they set themselves apart from, the object of their scorn which they use to demonstrate their position on the social ladder. (At least that’s what I think he meant.)
Politically the moral is exciting – if you want to change society, don’t replace the leaders, get rid of the oppressed.
I’d like to hear from anyone who can easily reference this, by the way.
Autism as the defining symptom of the internet-age and/or the internet as a bridge through which the space of society is widened to include individuals who aren’t neurotypical: read “Autism & The Internet” or “It’s The Wiring, Stupid” by Harvey Blume.
Bruce Mazlish is… in The Fourth Discontinuity: The Co-evolution of Humans and Machines (1993)… argues that human history has been marked by four discontinuities, each considered unbridgeable while it prevailed. The first discontinuity was between humanity and cosmos. This was overcome by Copernican astronomy, which located earth within a universe of stars, planets, and other galactic phenomena. The second discontinuity was between human and beast. This, in turn, was bridged by Darwin. The third discontinuity pertains to the distinction between ego and instinct, the presumably autonomous individual and the unconscious. Freud showed this to be a permeable membrane at best.
The last discontinuity is between human and machine. What with smart machines, and cybernetic models of the human mind, Mazlish sees that discontinuity as giving way in our own time. The computer opens a Northwest Passage between natural and artificial intelligence, the organism and the mechanism. The last of the discontinuities that make humanity special, a creation unto itself, is being scaled.
Except, of course, that the true discontinuity is not between human and machine but between life and non-life. Blume’s point is still true at heart – that a neurological view is a neurofunctional view, which is a type of mechanism. But
With neurology comes neurobabble. As Americans we will certainly not refuse the chance to simplify and babble-ize any paradigm that comes our way.
If only it were just Americans!
I find the use of the label ‘autistic’ to include everyone on the autistic spectrum disturbing. Most clinically defined autistics probably don’t even use language, let alone the internet. Grouping clinical and sub-clinical populations is a linguistic dilution which confuses the issue and marginalises clinical cases. It confuses because it continues the zeitgeist for medicalising and/or pathologising everything.
High-functioning ‘autistics’ are able to talk about their patterns of ability/disability. The average person is able to empathise with the way the profile is presented, and the average parent is able to spend money on ‘curing’, treating or preventing autism in their child. We start to think of autism as a quirk of personality, or to expect savantism in every autistic – something that is unfair to autistics who won’t conform to our misled prejudices and hence disappoint, or who are cast in roles that don’t suit them.
Linguistic reservations aside, Blume’s essay has lots of truth in it and is engaging and thought-provoking.
Still on the look-out for cognitive neuroscience bloggers, I’ve found Brain Waves, Cog News, Psychscape, Brainworld and, also, Cognitive Engineering, who reports on hearing a radio evangelist invoke the Stroop effect to explain the temptation of evil. Apparently:
“just like you need great concentration and will-power to prevent reading the name of the word, you need great strength to prevent the temptations and influences of Satan.”
I hear the nascent field of neuro-theology beckoning!
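For anyone who hasn’t met the Stroop effect: naming the ink colour of a colour word is slow and error-prone when the word itself names a different colour. A minimal sketch of how the two trial types are usually constructed – the colour set and field names here are my own hypothetical choices, not from any particular study:

```python
import random

COLOURS = ["red", "green", "blue", "yellow"]  # hypothetical stimulus set

def make_trials(n_per_condition, seed=0):
    """Build congruent trials (word matches ink colour) and
    incongruent trials (word and ink colour differ), shuffled."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_per_condition):
        word = rng.choice(COLOURS)
        trials.append({"word": word, "ink": word, "condition": "congruent"})
    for _ in range(n_per_condition):
        word = rng.choice(COLOURS)
        ink = rng.choice([c for c in COLOURS if c != word])  # force a mismatch
        trials.append({"word": word, "ink": ink, "condition": "incongruent"})
    rng.shuffle(trials)
    return trials

trials = make_trials(20)
```

Responses to the incongruent trials are reliably slower – reading the word is automatic and has to be suppressed, which is presumably the part the evangelist liked.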
I’m still worrying about the development of individual perspective. The evolutionary psychologist in me is absolutely skeptical of the idea that our cultural development might have created a radical change in individual self-consciousness (a Jaynesian rewrite of the software of self-perception).
Can we be confident that a change in the historical expression of individual perspective is a different thing from the fundamental experience of individual perspective in history? (If only Nagel had written What Is It Like To Be A Pre-Renaissance Artist instead). By assuming there is a fundamental human experience to look for, am I already too deeply enmeshed in the fascistic reductionism of biological psychology?
So, ultimately, this is the question: can our experience of ourselves as agents be altered by our culture? Or, perhaps more usefully, how can our culture affect our experience of ourselves? Next week: cross-cultural evidence. But this week: developmental evidence.
Young children speak to themselves all the time. In fact, the majority of their utterances are self-directed rather than other-directed. This is strange for language – an ability which can only be learnt in a social context. Laura Berk has written a good introduction to this, Why Children Talk to Themselves.
As they get older the tendency for self-talk diminishes – it becomes internal speech. A child who is learning not to speak aloud can be developmentally reverted by being made to perform some tricky task; they then need self-instruction to help guide their actions. (I seem to remember some experiment in which the participants – either adults or children, I can’t remember now – were forbidden from self-instructing aloud and their performance decreased. Need to chase the details.)
So, obviously, externalising thoughts as speech does cognitive work – we can then operate on those representations, and this reflection is in turn easier if those representations are placed in our short-term auditory memory with the force that comes from being made physical sounds rather than merely sub-vocally articulated.
Inner speech seems analogous in some way to the development of silent reading – it’s so natural now that it’s hard to spot that it needed to be invented. The question is whether inner speech would be discovered independently by each child during development, or whether it would need to be discovered culturally. Imagine if, as an adult society, we had never developed the use of inner speech. We would be forced to rely on actual social interaction to perform the cognitive work of internal speech.
This might be a reason why privacy was a non-concept before the modern period – the cognitive costs were too high. It might also explain why adults so often tell off children, and each other, for talking to themselves – the historical legacy of a culture that had to learn to internalise speech.
I’ve added to the links on the left – I’m trying to compile a list of bloggers with an interest in the cognitive sciences. Andrew Brown is hilarious, with wide interests including evolution and consciousness. Carl Zimmer is authoritative on evolution and biology. Steven Johnson has just published Mind Wide Open. Ade is a friend of a friend whom I shamelessly google-stalked, and she linked to Casper, who also seems to have some things to say about cogsci. I just stumbled across Jef Allbright while looking for something else and he seems to have enjoyably eclectic interests.
So, does anyone know anyone else for the list?
What does this mean?
Whereas 20 per cent of submitted manuscripts are rejected by physics journals, this rate reaches 80 per cent in psychology.
Adair, J.G. & Vohra, N. (2003). The explosion of knowledge, references, and citations: Psychology’s unique response to a crisis. American Psychologist, 58(1), 15-23.
Perhaps it means that psychology journals have higher standards? Or that psychologists submitting papers have lower standards? Both seem unlikely, especially since the authors and the reviewers are often the same people. Does an incoherent intellectual culture make the business of publishing research more problematic? Answers on a postcard please…
For some reason, today it became important to know when silent reading developed. In the middle ages, and in antiquity, the custom was always to read aloud. Monastic libraries would have been full of mutterings, and it must have been impossible to read private letters in public.
Then I found this essay and everything started to go ‘zing!’ inside my head. It seems the diffusion of the custom of word spacing (strange to think that something as unconscious as putting spaces between words needed invention. Strangetothinkthatsomethingasunconsciousasputtingspacesbetweenwordsneededinvention) was instrumental in allowing the development of silent copying and silent reading.
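Just how much parsing work those missing spaces push onto the reader can be sketched as a toy segmentation problem – the greedy longest-match strategy and the little lexicon here are my own illustration, not anything from the essay:

```python
# A toy lexicon covering the example sentence above.
LEXICON = {"strange", "to", "think", "that", "something", "as",
           "unconscious", "putting", "spaces", "between", "words",
           "needed", "invention"}

def segment(text, lexicon=LEXICON):
    """Split an unspaced string by repeatedly taking the longest
    prefix that appears in the lexicon."""
    words = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try longest match first
            if text[i:j] in lexicon:
                words.append(text[i:j])
                i = j
                break
        else:
            raise ValueError(f"cannot segment at position {i}")
    return words
```

The scribes’ readers were effectively running this loop by eye (and, before silent reading, by voice) – word spacing moved the work to the page.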
And- fantastic!- you can trace the historical development of this by looking at changes in the rules of monastic orders:
Reading likewise became a silent activity, as evidenced by changing interpretation of the rule of silence. Before about the 10th century, “oral group reading and composition [were] in practice no more considered a breach of silence than were confession or the recitation of prayers. Cluniac monks were judged to have violated their vows of silence only when a word they spoke was not written in the text.” (383) But later, “silence” comes to mean real silence.
And, again from Relevant History, silent reading lets us interact with the written word in new ways:
Books that were meant to be read silently differed from those meant to be read aloud: they were more visually complex, and their design could incorporate metadata and visual cross-references that wouldn’t make sense in books that were read aloud. What other scholars have referred to as paratexts – e.g., “tables of contents, alphabetical glosses, subject indexes, running headings” (408) – only really work in books that you interact with visually rather than orally.
All these changes, marking “the transformation from an oral monastic culture to a visual scholastic one between the end of the 12th and the beginning of the 14th centuries” (405) are first confined to the ecclesiastical worlds, but from the 14th century they spread in lay literate culture.
And then silent reading changes the nature of self-consciousness, another part in the development of the primacy of individual perspective. Is it coincidence that the widespread adoption of silent reading is coincident with the beginning of what we know of as the Modern period?
…private reading becomes a space for “individual critical thinking” that encourages “the development of skepticism and intellectual heresy.” (399) Likewise, spiritual literature in the 14th century was meant to be read alone, turning reading itself into a kind of meditation (that incidentally involved the highest of the senses, sight).
The privacy afforded by silent reading had the same effects in lay society that it did in scholastic circles. It made easier the cultivation of individual opinions and subversive thoughts…It also made religious feeling into a more private matter.
How did people see the world before they invented perspective, sometime in the 15th century? It would be comforting to assume that the world was the same, even though people could only imagine representations of the world which to us look awkward, childlike. If they saw the world as we do now, why was perspective not obvious sooner? Why couldn’t they see?
How did people experience the world before individual inner life was legitimised as a social object? Before we have a concept for self-consciousness, can it exist in the same way? Language might not condition our fundamental perception of the world (all the evidence I’ve seen persuades me to reject the Sapir-Whorf hypothesis), but it might condition the reflexive use of cognition. In other words, until the concept exists out there, it can’t be operated upon by our words or thoughts. What effect would this have on our feelings and thoughts about our feelings and thoughts?
So many questions…
Remember the Guardian article from November 2002 by David Lodge. He talks about the development of the novel, and about how the ‘interiority of experience’ came to be a focus of literature after Descartes’ cogito put consciousness at the foundation of philosophy:
Ian Watt, in…The Rise of the Novel, suggests that "both the philosophical and the literary innovations must be seen as parallel manifestations of a larger change – that vast transformation of Western civilization since the Renaissance which has replaced the unified world picture of the Middle Ages with another very different one – one which presents us, essentially, with a developing but unplanned aggregate of particular individuals having particular experiences at particular times and in particular places."
Watt observed that whereas earlier narrative literature usually recycled familiar stories, novelists were the first storytellers to pretend that their stories had never been told before, that they were entirely new and unique, as is each of our own lives according to the empirical, historical, and individualistic concept of human life. They did this partly by imitating empirical forms of narrative like autobiography, confessions, letters, and early journalism. Defoe and Richardson are obvious examples.
Remember Baumeister‘s How the Self Became a Problem. He observes that in Western culture people could expect to spend 25 hours of every day in other people’s company: to eat, sleep, shit, make love, play and work with others present. Privacy was a concept that just wouldn’t make sense.
Remember Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind, claiming that pre-Homeric humans were not conscious: they experienced will as external direction coming from gods or leaders – sort of hive-living robots. Fascinating stuff, as he looks at the literature and archaeology to try and trace the development of modern consciousness. Madness. Genius.
I’ve just finished The Nurture Assumption by Judith Rich Harris (1998, Bloomsbury). Fantastic book – and I think it says something about psychology that something so seminal can be published by a writer of textbooks rather than a professional academic. Or perhaps, as Nicol suggested, it would take a writer of textbooks to be able to synthesise across fields without the blinkers of disciplinary indoctrination normally acquired by specialist professionals. I’d love to know more about the professional response to the book. She must have really annoyed some people.
I put my notes online. The take-home message is this: it is a cultural myth that parenting style influences how children turn out. The nature-nurture dichotomy is a false one, because it suggests that, aside from nature/genetics, it is only parents who have an influence on how children develop (thank Freud for that one). Genetics makes children similar to parents. Being socialised by a peer group with the same values as the parents makes children similar to parents. Parents don’t make children similar to parents.
Think language: the children of immigrants take on the language of their host country as their native tongue, not the language of their parents.
Think twin studies: we all know the stories about twins reared apart who in adulthood are amazingly similar. You don’t hear the flip side mentioned so often: twins reared together are no more similar. Having the same parental upbringing doesn’t add anything to the existing effect of genetics.
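The logic of that comparison can be made explicit with the classic Falconer estimates from behavioural genetics, which decompose a trait’s variance into genes (h²), shared family environment (c²) and everything else (e²). The correlations below are illustrative numbers of my own, not figures from the book:

```python
def falconer(r_mz, r_dz):
    """Classic ACE decomposition from twin correlations.
    h2: additive genetic variance (heritability)
    c2: shared family environment - the 'parenting' term
    e2: non-shared environment plus measurement error."""
    h2 = 2 * (r_mz - r_dz)   # identical twins share twice the genes of fraternals
    c2 = 2 * r_dz - r_mz     # whatever MZ similarity genes can't account for
    e2 = 1 - r_mz            # whatever makes even identical twins differ
    return h2, c2, e2

# Illustrative: if MZ twins correlate .50 on a personality measure
# and DZ twins .25, the shared-environment term comes out at zero.
h2, c2, e2 = falconer(0.50, 0.25)
```

When c² comes out near zero – as it repeatedly does for personality measures – growing up in the same home with the same parents has added nothing beyond the genetic resemblance, which is exactly Harris’s point.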
There is no scientific evidence that parents have any effect beyond providing genes and a socio-economic peer group, which is most likely congruent with their own socialisation.
That’s the important thing, I think: no scientific evidence. We have to remember that what is real isn’t the same as what is scientifically demonstrable. But if psychology wants to be a science it needs to rely on the scientifically demonstrable, and I think Judith Rich Harris shows that the discipline has spent too long chasing confirmation of a folk myth rather than doing properly controlled studies. ‘Group Socialisation Theory’, as JRH calls the alternative hypothesis that peers are more influential than parents, is another good example of using evolutionary theory as an integrative framework within psychology. Not that you should naively apply evolutionary theory to all aspects of psychology, but nothing in biology makes sense except in the light of evolution. And psychology is ultimately biological, so you need to use the same principles – i.e. trying to work out function – to understand either.