That’s out of the 94 approved requests from officers seeking to use the technology over the last six months, the memo states. The only question from the committee: why isn’t it being used more?
“I think the first couple of months that we had it, (we were) just trying to get the word out there and talking to the [Crimes Against Persons] unit because all the offenses that we could use it for reside within the CAPERS division,” Major Brian Lamberson with DPD’s criminal intelligence unit said. “[Requests] were kind of slow coming in … but over time I think we're starting to see them pick up a lot more.”
The Public Safety Committee approved the adoption of the facial recognition technology last May, with then-police Chief Eddie Garcia assuring the council that the system would be a “game changer” for detectives. While other North Texas cities such as Arlington and Fort Worth beat Dallas to the facial recognition punch, officials said the delay offered DPD the opportunity to build a privacy safety net into the program.
The software was developed by Clearview AI and is used by the department only for violent offenses or imminent threats to public safety. A detective must request that an image be run through the software; if the request is approved, an FBI-trained analyst runs the search, and a second analyst is charged with combing through the results.
According to the Dallas Police Department, four facial recognition requests were denied, either because the offense involved did not meet the severity threshold of the program’s intended use or because a supervisor had not approved the request.
“I have always had a lot of concerns about privacy, whether it is data or other things. This feels very comfortable for me. This feels like efficiency and just the next step," Council member Cara Mendelsohn said last spring when the program was approved.
Since that initial approval, though, new concerns have emerged about Clearview AI’s ethics and privacy protections.
Misidentifications and Political Targeting
Last fall, the Netherlands fined Clearview AI $33.7 million, accusing the company of building an “illegal database” of faces pulled from the internet and social media that can be matched against images submitted by law enforcement. The Dutch Data Protection Authority warned that it is illegal for Dutch companies to use the service.

This decision, which Clearview AI officials described to the Associated Press as “unlawful, devoid of due process and is unenforceable,” came just weeks after the company settled a lawsuit in an Illinois court that consolidated complaints from across the U.S. The settlement, estimated to cost as much as $50 million, resolved complaints that the social media scraping behind the facial recognition software amounts to a privacy violation.
“The use of dragnet surveillance is not consensual. They're not anything anyone's opting into,” Will Owen, communications director of the Surveillance Technology Oversight Project, told the Observer. “Companies like Clearview AI are just taking our images and building their databases.”
Dragnet surveillance refers to the practice of surveillance by broad and widespread data collection, rather than focusing on a specific suspect.
Clearview AI has become increasingly popular among police forces in recent years, Owen said. He finds that concerning from a basic surveillance perspective, but also in light of recent reporting that Clearview’s founders were aware their technology could be used to identify immigrants or political targets.
Government records show that since 2020, U.S. Immigration and Customs Enforcement has paid millions in contracts to Clearview AI, and a recent Mother Jones article outlined the ways in which Clearview is helping the Trump administration carry out its crackdown on immigration. That being said, the software also aided in federal investigations following the Jan. 6 Capitol insurrection.
Even setting all that aside, Owen worries that facial recognition isn’t accurate enough for policing. While the Dallas Police Department says image analysts receive training to avoid misidentification and bias, both remain common problems with the technology itself. Facial recognition systems have routinely been shown to be less accurate at identifying Black and brown faces than white ones. Other factors, such as whether the person in an image is cisgender, can further increase the likelihood that the artificial intelligence makes a mistake.
“Facial recognition, at large, has expanded greatly across the United States, and it is very unregulated and highly biased in the way it's deployed. Facial recognition in law enforcement drives over policing of immigrant communities,” Owen said. “(Clearview AI’s) founder is very explicit in how the technology can be used to drive the anti-immigrant policies of the Trump administration. So I fear that its use is only going to expand further in the current political climate.”