Journal article: Facial recognition technologies and the rise of algorithmic policing in America

In the first part of a two-part series for Anthropology Today, Professor Roberto J. González critically examines the rise of predictive policing platforms and facial recognition technologies and their impact on US law enforcement practices. 

In the article, González explores the development of policing technologies by companies such as PredPol, Palantir, and Clearview AI. PredPol and Palantir develop place-based predictive technologies that identify crime hotspots, as well as people-based systems that target individuals deemed “at risk” of committing crimes. Although marketed as tools for improving law enforcement practices, these systems often rely on biased data, creating the potential for what the Stop LAPD Spying Coalition has termed a “racist feedback loop”: increased police presence and arrests in specific neighborhoods reinforce the biases embedded in the original data.

The article also examines the controversial practices of companies like Clearview AI, which scrape billions of photos from social media platforms without user consent to build their facial recognition databases. While such companies often claim high accuracy in identifying individuals, González raises concerns about the risk of false positives, in which innocent people may be wrongfully detained or arrested because of flawed algorithmic matches.

González argues that despite concerns over privacy, surveillance, and social control, the commercialization of these algorithmic policing tools continues to grow. As venture capital investment fuels their expansion, the allure of these technologies continues to outpace public debate on their societal impact.


Read the full article by Prof. Roberto J. González here.