Anomalous Data

It’s definitely an anomaly, but what kind of anomaly is it? In the British Journal of Psychology this month there is a meta-analysis of studies of electrodermal response to being stared at – or, to put it another way, a review of the evidence on whether we have a sixth sense to detect when people are looking at us. The review looked at two different paradigms and, quoting the conclusion:

We conclude that for both data sets that there is a small, but significant effect. This result corresponds to the recent findings of studies on distant healing and the 'feeling of being stared at'. Therefore, the existence of some anomaly related to distant intentions cannot be ruled out. The lack of methodological rigour in the existing database prohibits final conclusions and calls for further research, especially for independent replications on larger data sets. There is no specific theoretical conception we know of that can incorporate this phenomenon into the current body of scientific knowledge. Thus, theoretical research allowing for and describing plausible mechanisms for such effects is necessary. [My emphasis].

So that’s okay then – just casually suggest that the entire physicalist basis of western science may be in error. Surely this result should either be better supported and in a better journal, or not published at all.

Thoughts anyone?

Schmidt, S., Schneider, R., Utts, J. & Walach, H. (2004). Distant intentionality and the feeling of being stared at: Two meta-analyses. British Journal of Psychology, 95, 235–247.

3 replies on “Anomalous Data”

Alex, good point well made. I guess I had a knee-jerk reaction against publishing ammunition for 'the other side', but ultimately scientific scrutiny should be open, and if things get misunderstood and misused then that's regrettable – but not as regrettable as not discussing or investigating these things.

That said, I was told in an email by someone else:

My suspicion is that the small effect they found across the studies was erroneous. When they restricted the analysis to the studies they rated as ‘high quality’ – no significant effect was found. The problem for the authors was that the 7 or so studies they rated as ‘high-quality’ were all their own, conducted in the same lab. They couldn’t rightly go ahead and dismiss everybody’s work but their own, and that’s probably why they went with the conservative conclusion of a small, but significant effect across studies rated ‘medium’ & ‘high quality’.
Interesting – that's an unusual double bind they find themselves in: in order not to privilege their own work over the rest of the field, they have to 'settle' for an effect, rather than selectively choosing findings to end up with a null effect (which they might have been more comfortable espousing)!

In my opinion, the journal that considers the work holds some responsibility to make clear that the finding is in fact somewhat opaque. As I think about it, the second piece of text you highlight and the sentence that follows it are crucial, and I am somewhat uncomfortable with them. They feel too strong – a statement that there is a phenomenon, rather than that there is some evidence to support one.

I’ve never done meta-analyses, so I don’t know where one draws the line and says the balance of evidence proves this is real – would a “small but significant effect” come adrift if a new null study was published (or an old one had been file-drawered, never to be seen)? These are certainly muddier waters than a single study, which can stand in the tall grass until it’s either replicated, and gains credibility, or non-replicated, and falls down. I think I still stand by what I say, but the claims are rather strong.
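The file-drawer worry above can be made concrete. A minimal sketch, using entirely made-up effect sizes and standard errors (not the numbers from the Schmidt et al. paper), shows how a standard inverse-variance fixed-effect pooled estimate shrinks once a single precise null study is added to the pool:

```python
import math

def pooled_effect(effects, ses):
    """Inverse-variance fixed-effect pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in ses]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, se

# Hypothetical effect sizes and standard errors for four studies
effects = [0.10, 0.15, 0.08, 0.12]
ses = [0.05, 0.06, 0.05, 0.07]

est, se = pooled_effect(effects, ses)
print(f"pooled = {est:.3f}, 95% CI = ({est - 1.96 * se:.3f}, {est + 1.96 * se:.3f})")

# Now suppose one precise null study comes out of the file drawer and re-pool
est2, se2 = pooled_effect(effects + [0.0], ses + [0.03])
print(f"with null study: pooled = {est2:.3f}, "
      f"95% CI = ({est2 - 1.96 * se2:.3f}, {est2 + 1.96 * se2:.3f})")
```

Because each study is weighted by the inverse of its variance, one tight null study can drag the pooled estimate a long way towards zero – which is exactly why a small pooled effect over a handful of studies feels so provisional.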

Oh, and my site can be a pain with log-ins. I assume this is a Blogspot hosting issue, as I don’t care if Father Christmas wants to drop me a line early.
