By Mark Joseph Stern | National Post
Nine days before death row inmate Earl Washington’s scheduled execution, his lawyers informed the state of Virginia that it was about to murder an innocent man.
Forensic analysis of semen introduced at trial had convinced the jury that Washington, whose mental abilities matched those of a 10-year-old, had brutally raped and murdered a young woman in 1982. Washington’s lawyers uncovered evidence that the analysis was faulty. The state halted the impending execution, and following a gubernatorial pardon, Washington was released from prison in 2001. He had been there for 17 years.
How could forensic evidence, widely seen as factual and unbiased, nearly send an innocent person to his death? The answer is profoundly disturbing — and suggests that for every Earl Washington freed, untold more are sent to their deaths. Far from an infallible science, forensics is a decades-long experiment in which undertrained lab workers jettison the scientific method in favor of speedy results that fit prosecutors’ hunches. No one knows exactly how many people have been wrongly imprisoned—or executed—due to flawed forensics. But the number, most experts agree, is horrifyingly high. The U.S. National Academy of Sciences, the country’s most respected scientific organization, has revealed how deeply, fundamentally unscientific forensics is. A complete overhaul of forensic evidence analysis is desperately needed. Without it, the number of falsely convicted will only keep growing.
Behind the myriad technical defects of modern forensics lie two extremely basic scientific problems. The first is a pretty clear case of cognitive bias: A startling number of forensics analysts are told by prosecutors what they think the result of any given test will be. This isn’t mere prosecutorial mischief; analysts often ask for as much information about the case as possible—including the identity of the suspect—claiming it helps them know what to look for. Even the most upright analyst is liable to be subconsciously swayed when she already has a conclusion in mind. Yet few forensics labs follow the typical blind experiment model to eliminate bias. Instead, they reenact a small-scale version of Inception, in which analysts are unconsciously convinced of their conclusion before their experiment even begins.
The second flaw that plagues forensics is even more alarming: For decades, nobody knew how accurate forensic analyses were, or whether they were accurate at all. There’s no central agency that evaluates each test for precision or reliability before approving its use, and most were developed with barely a gesture toward the scientific method and with little input from the scientific community. Nor did the creators of forensics tests publish their methods in peer-reviewed scientific journals. And why should they? Without a government agency overseeing the field, forensic analysts had no incentive to subject their tests to stricter scrutiny. Groups such as the Innocence Project have continually put pressure on the U.S. Department of Justice — which almost certainly should have supervised crime labs from the start — to regulate forensics. But until recently, no agency had been willing to wade into the decentralized mess that hundreds of labs across the country had unintentionally created.
It might sound astonishing that undependable forensic tests have been able to slip through the cracks for so many decades. But in light of their origin and use, it’s really no surprise at all. Unlike medical diagnostic tools—which undergo rigorous testing by government agencies—forensic analyses are developed exclusively for law enforcement. Almost everybody will need a cancer screening some day, but you will probably never be subjected to a semen analysis unless you’re accused of rape. So long as forensics remains the sole province of law enforcement, police and prosecutors have no incentive to screen their tests for accuracy once they’ve been adopted. Exposing a bad test might mean exposing a wrongful prosecution, and few prosecutors are eager to admit that they might be sending innocent people to prison.
In 2009, a U.S. National Academy of Sciences committee embarked on a long-overdue quest to study typical forensics analyses with an appropriate level of scientific scrutiny — and the results were deeply chilling. Aside from DNA analysis, not a single forensic practice held up to rigorous inspection. The committee condemned common methods of fingerprint and hair analysis, questioning their accuracy, consistent application, and general validity. Bite-mark analysis — frequently employed in rape and murder cases, including capital cases — was subject to special scorn; the committee questioned whether bite marks could ever be used to positively identify a perpetrator. Ballistics and handwriting analysis, the committee noted, are also based on tenuous and largely untested science. The report amounted to a searing condemnation of the current practice of forensics and an ominous warning that death row may be filled with innocents.
Read the full article at: nationalpost.com