fMRI lie detection and the Wonder Woman problem

Wired Science has covered a legal case where fMRI brain scan ‘lie detection’ data was offered as evidence. While the lawyer was initially hopeful, the judge ruled it inadmissible on the basis that judgements of witness credibility are for the jury to make, based on their impression of the witness.

It is not clear from the reports exactly why fMRI evidence should not, in principle, contribute to the jury’s judgement of witness credibility along with other evidence, but arguments usually centre on the reliability of the technology, assessed against an evaluation known as the Frye or Daubert test, which asks whether the technology is ‘generally accepted’ by the scientific community.

The tests are closely related, and the basis of both is the 1923 Frye v. United States court case, which involved, interestingly enough, an unsuccessful attempt to admit evidence from an early lie detector that used a measure of blood pressure.

Even more interestingly, the inventor of the ‘lie detector’ in this case was psychologist William Moulton Marston, who is more famous as the creator of Wonder Woman. It is no coincidence that the superheroine has a Lasso of Truth that wraps around the body and compels the person caught in it to tell the truth.

Marston’s device was the forerunner of the polygraph test, which is admissible only in some state courts in the USA and generally falls foul of the Frye and Daubert ‘general acceptance’ criteria.

fMRI lie detection also fails to make the grade. Although studies have found that in some instances the technique can detect lies better than chance, the experiments have produced variable results and have relied on situations that aren’t necessarily good matches for everyday lying (such as asking participants to lie about a playing card they saw), leading some neuroscientists to call for a suspension of its use.

However, the issue is not as clear-cut as it seems, and Frederick Schauer from the University of Virginia School of Law makes a convincing case in a forthcoming article for the Cornell Law Review that scientific standards of evidence should not be applied wholesale to courts of law.

Most of the arguments from neuroscientists focus on the scenario where someone ‘might be sent to prison’ on the basis of fMRI evidence, but Schauer notes that this applies to only a tiny proportion of court cases and that evidence should be evaluated according to its context.

Schauer argues that if the decision were genuinely about sending someone to prison, the highest standards of reliability should apply, but lawyers regularly introduce less reliable evidence as part of a bigger picture.

For example, when a lawyer asks ‘would an upstanding member of the community like this really be likely to kill his business partner?’, everyone accepts that this is not a highly reliable guide to whether someone is a murderer, but as part of a collection of evidence it might help show that the prosecution cannot prove ‘beyond reasonable doubt’ that the accused is guilty. Numerous other types of similarly weak circumstantial evidence might also be presented.

This, Schauer says, is where technology like fMRI lie detection could play a part. If it is 60% reliable and is simply a small part of a larger picture, it seems daft not to allow it when similarly ‘unreliable’ evidence is admitted all the time. As he notes, “Although slight evidence ought not to be good enough for scientists, it is a large part of the law.”
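
To get a feel for why 60% reliability is weak on its own, here is a rough back-of-the-envelope sketch (not from Schauer’s article; the numbers and the simple Bayesian framing are assumptions chosen purely for illustration) of how much such a detector would actually shift the probability that a witness is lying.

```python
# Back-of-the-envelope sketch (illustrative assumptions only): how much a
# detector that is right 60% of the time shifts belief, using Bayes' rule.

def posterior_lying(prior, sensitivity, specificity):
    """Probability the witness is lying, given the detector flags a lie.

    prior        - probability of lying before the test
    sensitivity  - P(detector flags a lie | lying)            (assumed 0.6)
    specificity  - P(detector gives the all-clear | truthful) (assumed 0.6)
    """
    true_positive = prior * sensitivity
    false_positive = (1 - prior) * (1 - specificity)
    return true_positive / (true_positive + false_positive)

# With no other information (a 50/50 prior), a flagged lie takes us to 60%.
print(posterior_lying(prior=0.5, sensitivity=0.6, specificity=0.6))   # 0.6
# With a more sceptical prior of 20%, it only moves us to about 27%.
print(posterior_lying(prior=0.2, sensitivity=0.6, specificity=0.6))   # ~0.27
```

On these assumed numbers, even a flagged lie only moves the probability modestly, which fits the idea of treating it as one strand among many rather than as proof in itself.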

Furthermore, in civil cases the burden of proof is different and cases may be decided ‘on the balance of probabilities’ rather than the more stringent ‘beyond reasonable doubt’. Additionally, lawyers may want to submit fMRI evidence not as evidence for deciding the case but as evidence for awarding damages.

In these cases, Schauer argues, applying the standards of science to legal cases without judging the context would be as bad as applying legal standards to science – like trying to settle a scientific question by inviting two people with opposing views and deciding who seems more credible.

The commercial fMRI lie-detector companies are currently trying as hard as they can to get evidence from their not-very-effective technology accepted in a court case for the first time. Eventually it will probably happen, but most likely on some minor point in the bigger picture.

When it happens it will be widely hyped, and the danger will not be that such evidence is allowed, but that it will be over-interpreted and misunderstood, in the same way that other scientific evidence is widely misinterpreted.

Indeed, if we needed a warning about the dangers of this, it was provided by a recent case in India where unproven EEG ‘lie detection’ technology was accepted as key evidence in the conviction of a woman for murder.

Link to Wired Science on the attempt to admit fMRI lie detection.
Link to Wired Science on the evidence being rejected.

5 thoughts on “fMRI lie detection and the Wonder Woman problem”

  1. There is also a great piece from 1997 in the journal History of the Human Sciences on Marston and Wonder Woman: Geoffrey C. Bunn, “The lie detector, Wonder Woman and liberty: the life and work of William Moulton Marston”

  2. I think the main problem with using this technology in court is that the jurors may not understand how or why the evidence is not very reliable. MRI scans appear to be the ultimate in lie detectors – you are “seeing inside someone’s brain” (though people who follow the literature know that does not mean you are reading the person’s mind). It is easier to understand why a lie detector or blood pressure monitoring device is not reliable than why an MRI scan is not reliable.

  3. We can’t blame the court if they don’t want to accept the results from the fMRI; anything new that is introduced to a traditional setting has little chance of being accepted. It still has to pass several rounds of debate, criticism and all things negative, but hopefully fMRI results will be accepted as credible because they would really be a big help.
