Louise Whiteley
Thoughtcrime does not entail death. Thoughtcrime IS death. ~George Orwell’s Nineteen Eighty-Four.
Unlike the contestants on Big Brother, the citizens in Orwell’s novel tend to hold their tongues, but ultimately surveillance of their actions is guaranteed to uncover any deviant thoughts they might entertain. Recently, neuroscientists have started to decipher the thoughts of individuals even in the absence of incriminating actions, as they lie still and silent in a brain scanner.
Functional Magnetic Resonance Imaging (fMRI) uses the magnetic properties of blood to infer the amount of oxygen in brain tissue, and thus indirectly the level of neural activity. This allows scientists to test hypotheses about where in the brain various mental processes are carried out, and, once particular brain areas or patterns of activity have functional ‘labels’, to work out which processes are active when the subject cannot, or will not, say what they are thinking. Researchers have been able to identify regions in the brain that produce a higher fMRI signal when subjects tell a lie, and media reports now speculate about a future in which this could be used to judge whether someone is dissembling in court. This future is in fact disturbingly close – a commercial company called ‘No Lie MRI’ is already “working to have its testing allowed as evidence” in the US, and the emerging field of neuroethics is calling for an urgent debate about the potential uses of such research in civil society and the military.
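The "decoding" step described above can be caricatured in a few lines of code. Everything in this sketch is invented for illustration (the four-voxel "signatures", the noise level, the nearest-centroid rule): real analyses use thousands of voxels and more sophisticated classifiers, but the principle is the same — learn which activity pattern goes with which mental state, then label a new pattern by similarity.

```python
# Toy sketch of fMRI pattern decoding. All numbers are invented for
# illustration; this is not how a real analysis pipeline works.
import numpy as np

rng = np.random.default_rng(1)

def make_trials(template, n=100, noise=0.8):
    """Simulate noisy activity patterns scattered around a characteristic template."""
    return template + rng.normal(0, noise, size=(n, template.size))

# Pretend these 4-voxel patterns characterise two mental states.
pattern_add = np.array([1.0, 0.2, 0.9, 0.1])
pattern_subtract = np.array([0.1, 1.0, 0.2, 0.8])

# "Training" data: labelled trials for each state.
centroid_add = make_trials(pattern_add).mean(axis=0)
centroid_sub = make_trials(pattern_subtract).mean(axis=0)

def decode(trial):
    """Nearest-centroid decoder: label a trial by the closer class mean."""
    if np.linalg.norm(trial - centroid_add) < np.linalg.norm(trial - centroid_sub):
        return "add"
    return "subtract"

# Decode held-out trials the classifier has never seen.
test_add = make_trials(pattern_add)
test_sub = make_trials(pattern_subtract)
hits = sum(decode(t) == "add" for t in test_add) + \
       sum(decode(t) == "subtract" for t in test_sub)
accuracy = hits / 200
print(f"decoding accuracy: {accuracy:.0%}")
```

Even this crude decoder beats chance on noisy simulated data, which is essentially the claim made for the real experiments: above-chance, not infallible, prediction of a hidden mental state.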
One danger is that fMRI data would be treated as indubitable evidence rather than as a fallible basis for inference. In fact, it suffers from many of the problems of accuracy and interpretation that dog the much-maligned polygraph test. This raises questions about how the judicial process, and the media, can be made sufficiently au fait with its limitations, but these questions are not unique. Think of the vacillation surrounding DNA evidence in the Madeleine McCann case, or the flawed estimate of the probability of two successive cot deaths that led to Sally Clark’s wrongful conviction. With fMRI images, which highlight cross-sections of the brain, the problem is exacerbated by the fact that they are often implicitly treated like photographs, as if they show rather than tell. Far from being snapshots of deception, such images are in fact just visualisations of a statistical test.
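The point that an fMRI "image" is a visualisation of a statistical test can be made concrete with a toy simulation. The grid size, effect size, and threshold below are invented for illustration; a real analysis involves far more voxels and corrections for multiple comparisons.

```python
# Toy illustration (not a real fMRI analysis): the coloured blobs in an
# "activation image" are simply voxels whose signal difference between two
# conditions passes a statistical threshold. All parameters are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated signal for a 10x10 grid of "voxels" over 20 trials per condition.
lie = rng.normal(0.0, 1.0, size=(20, 10, 10))
truth = rng.normal(0.0, 1.0, size=(20, 10, 10))
lie[:, 4:6, 4:6] += 1.5  # a small patch responds more strongly on "lie" trials

# Voxelwise two-sample t-test between conditions.
t, p = stats.ttest_ind(lie, truth, axis=0)

# The "image" is just the boolean map of voxels below the p threshold.
activation_map = p < 0.001
print(activation_map.sum(), "of", activation_map.size, "voxels highlighted")
```

Change the threshold and the "photograph" changes with it, which is exactly why such images tell rather than show.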
So should our concerns take on a new flavour? Why the need for a neuro-ethics? One could argue that fMRI investigation would constitute a novel invasion of privacy – reading thoughts in the absence of incriminating ‘actions’ such as speech, sweating, or facial expression. The coercion that would likely be required to get someone to lie in a scanner and cooperate with calibrating a supposedly passive fMRI test is perhaps not that different to the use of bribes or threats to extract verbal information. And as spies can train themselves to outwit a polygraph, we could perhaps train ourselves to outfox a scanner. Maybe the correct distinction is not thoughts vs. actions, but whether a particular manifestation of our thoughts can be observed unaided or not.
Moving beyond deception, a recent study led by Professor John-Dylan Haynes used fMRI signals to predict whether an individual intended to add or subtract two numbers, before the numbers appeared on a computer screen. Some commentators worry about a Minority Report future, in which people could be arrested before actually committing a crime. It is worth remembering that in Philip K. Dick's story, this system fails for the very reason it is able to exist – the 'minority report' is a prediction by one of three psychics that disagrees with the other two, avoiding the determinism that would make intervention moot. A marker of civilised society (and one subtly eroded by our increasing ability to identify genetic and neural predispositions) is our freedom to choose whether to translate our thoughts into actions, and it is questionable whether neural evidence for 'intention' should ever be admissible in court or the workplace.
Proponents of the legal use of such tools have insisted that they would be used primarily to prove innocence. This is intended to reassure, but reflects the pernicious belief that if you have nothing to hide, invasions of privacy should give you nothing to fear. To hold this belief requires a reasonable match between what the state deems it necessary for the citizen to hide, and what the citizen considers wrong. As New Labour legislation spawns a cornucopia of new offences, we should worry about losing this luxury. And as neuroscience takes its first small steps towards mind-reading, it is important to revisit crucial questions about evidence and privacy, and to worry about whether having 'nothing to hide' might eventually become a concept applicable only to the newborn or angelic amongst us.
¤ ¤ ¤ ¤ ¤
Source:
http://www.martinfrost.ws/htmlfiles/oct2008/thoughtcrime-scanning.html