Breaking out of slightly-better-than-chance effect

Contribution: 
Conference proceedings / talk
Keywords: 
iIIRG
deception
interview
game theory
meta-analysis
Reference: 
Levine, T. (2013). Breaking out of slightly-better-than-chance effect. Paper presented at the iIIRG 2013, Maastricht, NL.
Summary / Abstract: 

Notes: Meta-analysis of deception-detection interviews: about 54% accuracy. Nothing seems to matter (channel of communication, experts vs. students, method, age, IQ). Across many studies the accuracy results are reasonably normally distributed; the ends of the bell curve are populated by studies with small samples and a lot of noise. It turns out that people are about as good at detecting deception as they are at predicting random future events (Bem: 53% accuracy). In the Levine (2007, 2009, 2012) articles (some are in this repository) they tried to get people to cheat. (In the questions I asked him how successful they were beyond the baseline, and he said that they weren't.)
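The small-sample point can be illustrated with a quick simulation (a minimal sketch, not from the talk; the 54% true accuracy and the per-study sample sizes are illustrative assumptions): even if every study measures the same underlying accuracy, the small-n studies produce the extreme observed values that fill the tails of the distribution.

```python
import numpy as np

# Sketch: simulate many studies that all measure the same true accuracy (~54%).
# Observed study-level accuracies still spread out, and the extreme values
# come almost entirely from the small-sample studies.
rng = np.random.default_rng(0)
true_accuracy = 0.54                                   # assumed population-level accuracy
sizes = rng.choice([20, 40, 80, 160, 320], size=500)   # hypothetical study sample sizes

observed = np.array([rng.binomial(n, true_accuracy) / n for n in sizes])

for n in [20, 40, 80, 160, 320]:
    subset = observed[sizes == n]
    print(f"n={n:3d}: mean={subset.mean():.3f}, sd={subset.std():.3f}, "
          f"range=({subset.min():.2f}, {subset.max():.2f})")
# The smallest studies show the widest spread, i.e. they populate the bell-curve tails,
# while the overall mean stays close to the assumed 54%.
```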

Content and context help in detecting deception (75% accuracy across 7 experiments; the interviewers were told the exact methodology, so they were very familiar with the context, but they were not told who cheated). Under the same conditions, Levine et al. (article under review) got experts to score 90-100% accuracy and students 94% on average, so roughly the same, and wildly above the baseline. There are competing hypotheses as to why experts don't do better than students. But so far (11 experiments with different students and experts) this seems to be an outlier method.

Author: Timothy Levine, Korea University
