
AI and Facial Recognition - Detroit is leading the way. It’s still not enough.

  • Writer: Rob Padgett
  • Dec 18, 2024
  • 6 min read

Eric C. Williams

Managing Director @ Detroit Justice Center | Founder @ Eric C. Williams, PLLC | October 7, 2024



What law enforcement doesn't want you to know.


This past July, a number of Michigan civil rights organizations secured a settlement with the City of Detroit in the wrongful arrest case of Robert Williams. [1] Mr. Williams was arrested outside his house in front of his wife, two young children, and neighbors for a theft he didn’t commit. His 30-hour ordeal made him the first publicly reported instance of a wrongful arrest based on a facial recognition technology (FRT) match. He would not be the last. However, following the settlement of his case, there is at least a chance there won’t be many more in Detroit. Sadly, the same can’t be said for the rest of the country.


My hometown of Detroit is one of America’s Blackest cities. Detroit is also one of America’s most surveilled cities. Given FRT’s well-documented tendency to misidentify people of color, women, and older people, the repeated misidentifications[2] shouldn’t surprise anyone.[3] However, what might surprise people is that in most instances, a person arrested on the basis of a facial recognition match will never know the role that facial recognition, or any other AI-enhanced surveillance,[4] played in their arrest.

To fully appreciate the impact of keeping this information from defendants, you have to understand how FRT works, the technology’s inherent limitations, and how law enforcement uses the technology. In general terms,[5] systems like the one used by DPD extract facial features from a “probe image” (in Detroit, usually taken from a Project Green Light camera) to create a template or “faceprint.” This template is then compared to other templates of (usually) known individuals in a database (in Michigan, the Statewide Network of Agency Photos). Finally, the system generates a “match score” indicating how similar the probe template is to each template in the enrollment database. The match score is typically a value between 0 and 1, where 0 indicates the lowest similarity and 1 indicates the highest.
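To make that pipeline concrete, here is a minimal sketch of the template-matching step in Python. This is not DPD’s or any vendor’s actual algorithm (those are proprietary); it assumes cosine similarity over hypothetical 128-dimensional templates and made-up enrollment names, which is a common, representative way such scores are computed:

```python
import numpy as np

def match_score(probe: np.ndarray, candidate: np.ndarray) -> float:
    """Score the similarity of two face templates on the 0-1 scale described above.

    Cosine similarity is a common choice for comparing embedding vectors;
    real vendor scoring functions are proprietary and may differ.
    """
    cos = np.dot(probe, candidate) / (np.linalg.norm(probe) * np.linalg.norm(candidate))
    return (cos + 1) / 2  # map cosine's [-1, 1] range onto the [0, 1] scale

# Hypothetical 128-dimensional "faceprints" standing in for real templates.
rng = np.random.default_rng(seed=42)
probe_template = rng.normal(size=128)                      # template from the probe image
enrolled = {f"person_{i}": rng.normal(size=128) for i in range(10_000)}  # enrollment database

scores = {name: match_score(probe_template, t) for name, t in enrolled.items()}
top_lead = max(scores, key=scores.get)
print(f"Best match: {top_lead} (score {scores[top_lead]:.3f})")
```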


But here is the catch: the threshold for what counts as a “match” isn’t some objective number based on science; it’s selected by the software provider or the user. This means that if a user like the Detroit Police Department (DPD) wants to cast a wide net, it uses a lower number to qualify as a “match.” If the user wants a narrower search with fewer “matches,” a higher number is used. In other words, facial recognition matches are as much a product of the software provider’s defaults and the police department’s need for a suspect as anything else.
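Continuing the sketch above (and reusing its hypothetical scores), the very same search produces very different candidate lists depending on where that threshold is set. The 0.55 and 0.90 cutoffs here are arbitrary values chosen purely for illustration:

```python
def candidate_leads(scores: dict[str, float], threshold: float) -> list[str]:
    """Return every enrolled identity whose match score clears the chosen cutoff."""
    hits = [name for name, s in scores.items() if s >= threshold]
    return sorted(hits, key=scores.get, reverse=True)

# The same search, two different nets:
wide_net = candidate_leads(scores, threshold=0.55)  # permissive cutoff: many "matches"
narrow = candidate_leads(scores, threshold=0.90)    # strict cutoff: few or none
print(f"Leads at 0.55: {len(wide_net)}, leads at 0.90: {len(narrow)}")
```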


For Black folks, the margin for error in match scores can be multiplied by poor-quality probe images, flawed training data, human bias in deployment of the cameras, and human bias in interpreting data.[6] In addition, these issues are frequently amplified by police policies that fixate on the FRT match and/or create digital lineups with photos the FRT has decided look like the suspect, without other evidence linking that person to the crime.


I’m not saying facial recognition is junk science, like bite mark analysis or microscopic hair comparison. However, it’s pretty clear that FRT “matches” are not as accurate as the police and prosecutors publicly claim. In fact, under certain circumstances, one can even imagine that FRT’s shortcomings might amount to reasonable doubt. Perhaps that’s why law enforcement tends to go out of its way to avoid mentioning the role of FRT in an arrest. Neither law enforcement nor prosecutors want to deal with defending the product of a technology with such well-documented flaws.


A recent article in The Washington Post examined documentation provided by 15 states regarding their use of FRT over the past four years. The Post found that “authorities routinely failed to inform defendants about their use of the software — denying them the opportunity to contest the results of an emerging technology that is prone to error, especially when identifying people of color.” Actually, “failed to inform” is an understatement. The article noted that the use of FRT as the basis for arrest was often deliberately omitted or obscured by euphemisms such as “identified through investigative means” or “utilization of investigative databases.” 

eu·phe·mism /ˈyo͞ofəˌmiz(ə)m/ noun: a mild or indirect word or expression substituted for one considered to be too harsh or blunt when referring to something unpleasant or embarrassing. “downsizing” as a euphemism for cuts. --Oxford Languages

The failure to inform a criminal defendant of the role of FRT in an arrest helps prosecutors avoid legitimate questions about reliability. It is comparable to hiding the fact that a witness received a lighter sentence in return for testimony. Under Brady v. Maryland, prosecutors must disclose all material evidence that could help the defense in a criminal case, including evidence that exonerates the accused, impeaches a witness, lessens the punishment, or supports a valid defense. Brady would certainly seem to mandate disclosure that an arrest was precipitated by technology and processes that, in the words of Minnesota District Court Judge Gordon Andrew, “do not reliably and consistently produce accurate results.”


Which brings me back to the settlement in Williams v. City of Detroit. The settlement implements changes in DPD policies on the use and disclosure of FRT in investigations going forward, disclosure of the use of FRT in past cases, and reform of the way lineups are conducted. I’m listing them in detail because they are probably the most stringent FRT policies in the country:


  • A lineup may never be conducted based solely on an FRT investigative lead without further independent and reliable evidence linking a suspect to a crime. An FRT lead, combined with a lineup identification, may never be a sufficient basis for seeking an arrest warrant. Before seeking an arrest warrant, a detective must document their independent investigative steps establishing probable cause (other than the FRT lead and any lineup procedure) and obtain sign-off from two supervisory officials.

  • When requesting and conducting an FRT search, investigators and analysts must complete detailed forms that document critical information about the FRT search—including the quality of the input photo and how many other photos of the individual identified as a lead were in the photo database that was searched but did not show up as a potential lead. In any investigation in which FRT was used and charges are eventually filed against anyone, the DPD must provide the FRT forms to the prosecutor on the case. Thus, information about the use of FRT in any investigation may be available to defense counsel in discovery as potentially exculpatory information.

  • Eyewitness lineups may not incorporate the same photograph of a possible suspect that FRT identified as an investigative lead.

  • Witnesses performing lineup identifications may not be told that FRT identified anyone as an investigative lead.

  • Witnesses must report how confident they are in any identification.

  • Following best practices for reducing false identifications, lineups must be conducted “sequentially”, meaning that a witness is shown only one photo at a time, instead of seeing all of the photos and potentially selecting the one that looks most like the suspect even if it is not truly a match.


The settlement also requires additional training on FRT and eyewitness identification for all DPD officers and more specific training for investigating officers using FRT.

This isn’t a perfect outcome. I’d prefer that DPD eliminated its AI-enhanced surveillance infrastructure, including FRT. I’d also like to see case law, or better yet legislation, explicitly requiring disclosure of FRT’s role in a criminal investigation. However, a victory is a victory. The Detroit Police Department (willingly or not) has taken a step forward in protecting innocent Detroiters and the integrity of our criminal justice system. (And believe me, the latter needs all the help it can get.) It would be nice to see the rest of the country follow.


[1] I have to mention the attorneys on this case, hardworking and underappreciated lawyers making a real difference. They include my friends Michael Steinberg at the University of Michigan Law School Civil Rights Litigation Initiative, and ACLU of Michigan attorneys Philip Mayor, Dan Korobkin, and Ramis Wadood. Also, kudos to student attorneys Julia Kahn, Nethra Raman, and Collin Christner.


[2] Detroit is responsible for almost half of our nation’s facial recognition-based wrongful arrests.


[3] I have real issues with the use of facial recognition in Detroit. I don’t think it’s out of line to wonder if a law enforcement program that was more likely to misidentify White people would be used in, say, Boise, Idaho.


[4] Surveillance companies claim the capabilities AI brings to law enforcement also include facial analysis, demographic analysis, and emotion analysis. 🙄


[5] For a more detailed explanation, see https://partnershiponai.org/paper/facial-recognition-systems/


 
 
