
Should Fingerprint Evidence Be Used in Court?

Peru 2010/10/22 19:53:35
Since 1999, nearly 40 judges have considered whether fingerprint evidence meets the Daubert
test, the Supreme Court’s standard for the admissibility of expert
evidence in federal court, or the equivalent state standard. Every single judge who has considered the issue
has determined that fingerprinting passes the test.

And yet, Judge Pollak’s first opinion concluded, after surveying the evidence, that “fingerprint identification techniques have not been tested in a
manner that could be properly characterized as scientific.” All in all,
he found fingerprint identification techniques “hard to square” with Daubert,
which asks judges to serve as gatekeepers to ensure that the expert
evidence used in court is sufficiently valid and reliable. Daubert
invites judges to examine whether the proffered expert evidence has
been adequately tested, whether it has a known error rate, whether it
has standards and techniques that control its operation, whether it has
been subject to meaningful peer review, and whether it is generally
accepted by the relevant community of experts. Pollak found that
fingerprinting flunked the Daubert test, meeting only one of the
criteria, that of general acceptance. Surprising though it may sound,
Pollak’s judgment was correct. Although fingerprinting retains
considerable cultural authority, there has been woefully little careful
empirical examination of the key claims made by fingerprint examiners.
Despite nearly 100 years of routine use by police and prosecutors,
central assertions of fingerprint examiners have simply not yet been
either verified or tested in a number of important ways.


Consider the following


Fingerprint examiners lack objective standards for
evaluating whether two prints “match.” There is simply no uniform
approach to deciding what counts as a sufficient basis for making an
identification. Some fingerprint examiners use a “point-counting” method
that entails counting the number of similar ridge characteristics on
the prints, but there is no fixed requirement about how many points of
similarity are needed. Six points, nine, twelve? Local practices vary,
and no established minimum or norm exists. Others reject point-counting
for a more holistic approach. Either way, there is no generally
agreed-on standard for determining precisely when to declare a match.
Although fingerprint experts insist that a qualified expert can
infallibly know when two fingerprints match, there is, in fact, no
carefully articulated protocol for ensuring that different experts reach
the same conclusion.
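
To see how much rides on that missing standard, consider a minimal sketch of a point-counting decision rule. The function, the shared-point count, and the thresholds below are hypothetical illustrations (the threshold values echo the six, nine, and twelve points mentioned above); the point is simply that the verdict flips with the threshold, and no agreed-upon threshold exists.

```python
# Hypothetical sketch of a point-counting decision rule. The threshold is
# the crux: different agencies have used different values, and none is
# empirically grounded.

def declare_match(shared_ridge_points: int, required_points: int) -> bool:
    """Declare an identification if enough ridge characteristics correspond."""
    return shared_ridge_points >= required_points

# The same pair of prints, judged under different local conventions:
shared = 9  # hypothetical number of corresponding ridge characteristics
for required in (6, 9, 12):
    verdict = "identification" if declare_match(shared, required) else "inconclusive"
    print(f"threshold {required}: {verdict}")
```

Nothing in the technique itself tells the examiner which threshold to apply, which is why two qualified examiners can look at the same pair of prints and reach different conclusions.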


Although it is known that different individuals can
share certain ridge characteristics, the chance of two individuals
sharing any given number of identifying characteristics is not known.
How likely is it that two people could have four points of resemblance,
or five, or eight? Are the odds of two partial prints from different
people matching one in a thousand, one in a hundred thousand, or one in a
billion? No fingerprint examiner can honestly answer such questions,
even though the answers are critical to evaluating the probative value
of the evidence of a match. Moreover, with the partial, potentially
smudged fingerprints typical of forensic identification, the chance that
two prints will appear to share similar characteristics remains equally
uncertain.
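
To make the gap concrete, here is a sketch of the kind of calculation a validated statistical model would have to support. It assumes, purely for illustration, that ridge characteristics occur independently and with a single made-up frequency per feature; neither assumption has been empirically established, which is precisely the problem described above.

```python
# Illustrative only: what a random-match-probability calculation would look
# like IF ridge characteristics were independent and equally common. Neither
# assumption has been validated for latent prints, and the per-feature
# frequency of 0.1 is made up for the example.

def naive_random_match_probability(matching_features: int,
                                   per_feature_frequency: float) -> float:
    """Chance that an unrelated print shows this many corresponding features,
    under the (unvalidated) independence assumption."""
    return per_feature_frequency ** matching_features

for points in (4, 5, 8):
    p = naive_random_match_probability(points, 0.1)
    print(f"{points} points of resemblance -> naive probability {p:.0e}")
```

The point of the sketch is not the numbers it prints but the inputs it requires: without measured feature frequencies and a tested model of how features co-occur, no examiner can honestly attach a probability to a declared match.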


The potential error rate for fingerprint
identification in actual practice has received virtually no systematic
study. How often do real-life fingerprint examiners find a match when
none exists? How often do experts erroneously declare two prints to come
from a common source? We lack credible answers to these questions.
Although some FBI proficiency tests show examiners making few or no
errors, these tests have been criticized, even by other fingerprint
examiners, as unrealistically easy. Other proficiency tests show more
disturbing results: In one 1995 test, 34 percent of test-takers made an
erroneous identification. Especially when an examiner evaluates a
partial latent print—a print that may be smudged, distorted, and
incomplete—it is impossible on the basis of our current knowledge to
have any real idea of how likely she is to make an honest mistake. The
real-world error rate might be low or might be high; we just don’t know.
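
As a rough illustration of why a handful of proficiency results settles so little, here is a sketch of a standard confidence-interval calculation for an error rate. The test size of 30 and the count of 10 errors are assumed for illustration only; they are not taken from the 1995 test or any other actual test.

```python
import math

def wilson_interval(errors: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for an error rate of errors/n."""
    p_hat = errors / n
    denom = 1 + z ** 2 / n
    center = (p_hat + z ** 2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2))
    return center - margin, center + margin

# Hypothetical proficiency test of 30 examiners in which 10 (about a third) erred:
low, high = wilson_interval(errors=10, n=30)
print(f"Observed error rate ~33%; plausible range roughly {low:.0%} to {high:.0%}")
```

Even under these favorable assumptions, a single small test leaves the true error rate poorly pinned down; only realistic, repeated testing under casework-like conditions could do better.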


Fingerprint examiners routinely testify in court
that they have “absolute certainty” about a match. Indeed, it is a
violation of their professional norms to testify about a match in
probabilistic terms. This is truly strange, for fingerprint
identification must inherently be probabilistic. The right question for
fingerprint examiners to answer is: How likely is it that any two people
might share a given number of fingerprint characteristics? However, a
valid statistical model of fingerprint variation does not exist. Without
either a plausible statistical model of fingerprinting or careful
empirical testing of the frequency of different ridge characteristics, a
satisfying answer to this question is simply not possible. Thus, when
fingerprint experts claim certainty, they are clearly overreaching,
making a claim that is not scientifically grounded. Even if we assume
that all people have unique fingerprints (an inductive claim, impossible
itself to prove), this does not mean that the partial fragments on
which identifications are based cannot sometimes be, or appear to be,
identical.
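
One standard way to make “probabilistic” precise, used for other forensic evidence such as DNA, is the likelihood ratio. The notation below is illustrative and is not drawn from the article or from examiner practice:

```latex
% Likelihood-ratio framing of a reported correspondence E between a latent
% print and a suspect's exemplar (illustrative notation):
\mathrm{LR} = \frac{P(E \mid \text{prints come from the same finger})}
                   {P(E \mid \text{prints come from different fingers})}
```

Reporting “absolute certainty” amounts to treating the denominator as zero; yet the denominator, the chance that different fingers could produce the observed degree of correspondence, is exactly the quantity that has never been measured for partial, smudged latent prints.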


Defenders of fingerprint identification
emphasize that the technique has been used, to all appearances
successfully, for nearly 100 years by police and prosecutors alike. If
it did not work, how could it have done so well in court? Even if
certain kinds of scientific testing have never been done, the technique
has been subject to a full century of adversarial testing in the
courtroom. Doesn’t this continuous, seemingly effective use provide
persuasive evidence about the technique’s validity? This argument has a
certain degree of merit; obviously, fingerprinting often does “work.”
For example, when prints found at a crime scene lead the police to a
suspect, and other independent evidence confirms the suspect’s presence
at the scene, this corroboration indicates that the fingerprint expert
has made a correct identification.


However, although the routine and successful police
use of fingerprints certainly does suggest that they can offer a
powerful form of identification, there are two problems with the
argument that fingerprint identification’s courtroom success proves its
merit. First, until very recently fingerprinting was challenged in court
very infrequently. Though adversarial testing was available in theory,
in practice, defense experts in fingerprint identification were almost
never used. Most of the time, experts did not even receive vigorous
cross-examination; instead, the accuracy of the identification was
typically taken for granted by prosecutor and defendant alike. So
although adversarial testing might prove something if it had truly
existed, the century of courtroom use should not be seen as a century’s
worth of testing. Second, as Judge Pollak recognizes in his first
opinion in Llera Plaza, adversarial testing through cross-examination is
not the right criterion for judges to use in deciding whether a
technique has been tested under Daubert. As Pollak writes, “If
‘adversarial’ testing were the benchmark—that is if the validity of a
technique were submitted to the jury in each instance—then the
preliminary role of the judge in determining the scientific validity of a
technique would never come into play.”


The history of fingerprinting suggests that without
adversarial testing, limitations in research and problematic
assumptions may long escape the notice of experts and judges alike.

So what’s the bottom line: Is fingerprinting
reliable or isn’t it? The point is that we cannot answer that question
on the basis of what is presently known, except to say that its
reliability is surprisingly untested. It is possible, perhaps even
probable, that the pursuit of meaningful proficiency tests that actually
challenge examiners with difficult identifications, more sophisticated
efforts to develop a sound statistical basis for fingerprinting, and
additional empirical study will combine to reveal that latent
fingerprinting is indeed a reliable identification method. But until
this careful study is done, we ought, at a minimum, to treat fingerprint
identification with greater skepticism, for the gold standard could
turn out to be tarnished brass.


Recognizing how much we simply do not know about
the reliability of fingerprint identification raises a number of
additional questions. First, given the lack of information about the
validity of fingerprint identification, why and how did it come to be
accepted as a form of legal evidence? Second, why is it being challenged
now? And finally, why aren’t the courts (with the exception of Judge
Pollak the first time around) taking these challenges seriously?


A long history


Fingerprint evidence was accepted as a legitimate
form of legal evidence very rapidly, and with strikingly little careful
scrutiny. Consider, for example, the first case in the United States in
which fingerprints were introduced in evidence: the 1910 trial of Thomas
Jennings for the murder of Clarence Hiller. The defendant was linked to
the crime by some suspicious circumstantial evidence, but there was
nothing definitive against him. However, the Hiller family had just
finished painting their house, and on the railing of their back porch,
four fingers of a left hand had been imprinted in the still-wet paint.
The prosecution wanted to introduce expert testimony concluding that
these fingerprints belonged to none other than Thomas Jennings.


Four witnesses from various bureaus of
identification testified for the prosecution, and all concluded that the
fingerprints on the rail were made by the defendant’s hand. The judge
allowed their testimony, and Jennings was convicted. The defendant
argued unsuccessfully on appeal that the prints were improperly
admitted. Citing authorities such as the Encyclopedia Britannica and a
treatise on handwriting identification, the court emphasized that
“standard authorities on scientific subjects discuss the use of
fingerprints as a system of identification, concluding that experience
has shown it to be reliable.” On the basis of these sources and the
witnesses’ testimony, the court concluded that fingerprinting had a
scientific basis and admitted it into evidence.


What was striking in Jennings, as well as in the cases
that followed it, was that courts largely failed to ask any difficult
questions of the new identification technique. Just how confident could
fingerprint identification experts be that no two fingerprints were
really alike? How often might examiners make mistakes? How reliable was
their technique for determining whether two prints actually matched? How
was forensic use of fingerprints different from police use? The judge
did not analyze in detail either the technique or the experts’ claims to
knowledge; instead, he believed that the new technique worked
flawlessly based only on interested participants’ say-so. The Jennings
decision proved quite influential. In the years following, courts in
other states admitted fingerprints without any substantial analysis at
all, relying instead on Jennings and other cases as precedent.


From the beginning, fingerprinting greatly
impressed judges and jurors alike. Experts showed juries blown-up visual
representations of the fingerprints themselves, carefully marked to
emphasize the points of similarity, inviting jurors to look down at the
ridges of their own fingers with new-found respect. The jurors saw, or
at least seemed to see, nature speaking directly. Moreover, even in the
very first cases, fingerprint experts attempted to distinguish their
knowledge from other forms of expert testimony by declaring that they
offered not opinion but fact, claiming that their knowledge was special,
more certain than other claims of knowledge. But they never established
conclusively that all fingerprints are unique or that their technique
was infallible even with less-than-perfect fingerprints found at crime
scenes.


In all events, just a few years after Jennings was
decided, the evidential legitimacy of fingerprints was deeply
entrenched, taken for granted as accepted doctrine. Judges were as
confident about fingerprinting as was Pudd’nhead Wilson, a character in
an 1894 Mark Twain novella, who believed that “ ‘God’s finger print
language,’ that voiceless speech and the indelible writing,” could
provide “unquestionable evidence of identity in all cases.”
Occasionally, Pudd’nhead Wilson itself was cited as an authority by
judges.


Why was fingerprinting accepted so rapidly and with
so little skepticism? In part, early 20th-century courts simply weren’t
in the habit of rigorously scrutinizing scientific evidence. Moreover,
the judicial habit of relying on precedent created a snowballing effect:
Once a number of courts accepted fingerprinting as evidence, later
courts simply followed their lead rather than investigating the merits
of the technique for themselves. But there are additional explanations
for the new technique’s easy acceptance. First, fingerprinting and its
claims that individual distinctiveness was marked on the tips of the
fingers had inherent cultural plausibility. The notion that identity and
even character could be read from the physical body was widely shared,
both in popular culture and in certain more professional and scientific
arenas as well. Bertillonage, for example, the measurement system
widely used by police departments across the globe, was based on the
notion that if people’s bodies were measured carefully, they inevitably
differed one from the other. Similarly, Lombrosian criminology and
criminal anthropology, influential around the turn of the century, held
as a basic tenet that born criminals differed from normal, law-abiding
citizens in physically identifiable ways. The widespread belief in
nature’s infinite variety meant that just as every person was different,
just as every snowflake was unique, every fingerprint must be
distinctive too, if only it was examined in sufficient detail. The idea
that upon the tips of fingers were minute patterns, fixed from birth and
unique to the carrier, made cultural sense; it fit with the order of
things.


One could argue, from the vantage point of 100
years of experience, that the reason fingerprinting seemed so plausible
at the time was because its claims were true, rather than because it fit
within a particular cultural paradigm or ideology. But this would be
the worst form of Whig history. Many of the other circulating beliefs of
the period, such as criminal anthropology, are now quite
discredited. The reason fingerprinting was not subject to scrutiny by
judges was not because it obviously worked; in fact, it may have become
obvious that it worked in part precisely because it was not subject to
careful scrutiny.


Moreover, fingerprint examiners’ strong claim of
certain, incontestable knowledge made fingerprinting appealing not only
to prosecutors but to judges as well. In fact, there was an especially
powerful fit between fingerprinting and what the legal system hoped
science could provide. In the late 19th century, legal
commentators and judges saw in expert testimony the potential for a
particularly authoritative mode of evidence, a kind of knowledge that
could be and should be far superior to that of mere
eyewitnesses, whose weaknesses and limitations were beginning to be
better understood.


Expert evidence held out the promise of offering a
superior method of proof—rigorous, disinterested, and objective. But in
practice, scientific evidence almost never lived up to these hopes.
Instead, at the turn of the century, as one lawyer griped, the testimony
of experts had become “the subject of everybody’s sneer and the object
of everybody’s derision. It has become a newspaper jest. The public has
no confidence in expert testimony.” Experts perpetually disagreed. Too
often, experts were quacks or partisans, and even when they were
respected members of their profession, their evidence was usually
inconsistent and conflicting. Judges and commentators were angry and
disillusioned by the actual use of expert evidence in court, and often
said so in their opinions. (In this respect, there are noticeable
similarities between the 19th-century reaction to expert testimony and
present-day responses.)


There should be serious efforts to test and
validate fingerprinting methodologies and to develop difficult and
meaningful proficiency tests for practitioners.

Even if experts did not become zealous partisans,
the very fact of disagreement was a problem. It forced juries to choose
between competing experts, even though the whole reason for the expert
in the first place was that the jury lacked the expertise to make a
determination for itself. Given this context, fingerprinting seemed to
offer something astonishing. Fingerprinting—unlike the evidence of
physicians, chemists, handwriting experts, surveyors, or
engineers—seemed to offer precisely the kind of scientific certainty
that judges and commentators, weary of the perpetual battles of the
expert, yearned for. Writers on fingerprinting routinely emphasized that
fingerprint identification could not be erroneous. Unlike so much other
expert evidence, which could be and generally was disputed by other
qualified experts, fingerprint examiners seemed always to agree.
Generally, the defendants in fingerprinting cases did not offer
fingerprint experts of their own. Because no one challenged
fingerprinting in court, either its theoretical foundations or, for the
most part, the operation of the technique in the particular instance, it
seemed especially powerful.

Read More: http://www.issues.org/20.1/mnookin.html

Opinions

  • Common Sense Conservative 2010/12/02 21:24:48
    Yes. Anything should be used to find out who committed the crime.
  • Peru 2010/10/22 19:54:37 (edited)
    No. For the reasons stated in the text and for other reasons, I conclude that fingerprints found at the scene of the crime prove absolutely nothing.
