
Facial Recognition & the Use of Artificial Intelligence for Predictive Policing: Is it All that Bad?

Varad Mohan,

Research Intern,

Indian Society of Artificial Intelligence and Law.


 

Artificial intelligence has undoubtedly made several contributions to the advancement of humanity, but is it going too far? Calls to ban research into algorithms that assist police through facial recognition have been gaining traction recently. Let us examine what lies at the heart of this issue and what implications this technology may have in the context of policing.


Technology and its Analysis

Facial recognition, in the simplest terms, is technology that can identify individuals from images and videos. It has been around for a while but has recently garnered a lot of attention because of its potential implications. Especially amidst a global shift in perspective on policing and the role of the state, facial recognition may not be the best way forward, as it may do more harm than good.

The central argument against the use of facial recognition in policing is that the technology is inherently flawed because it is corrupted by racial biases. At its core, criminal profiling based on facial recognition finds its roots in criminology's positivist school of thought. Cesare Lombroso (1835-1909), regarded as the father of modern criminology, was a chief proponent of using physiognomy to identify criminals. Physiognomy involves analyzing individuals' physical features and linking them to criminality. The theory is intuitively problematic, and modern criminologists agree: Lombroso's work has been widely discredited as unreliable (Little, 2019). Facial recognition, unfortunately, rests on similar principles. The idea is that artificial intelligence can analyze commonalities in facial features and match them with individuals to identify criminals caught on footage.
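
To make the idea concrete, the sketch below shows the embedding-based matching approach that most modern facial recognition systems build on: a model converts each face into a numeric vector, and identification reduces to a nearest-match search. This is a minimal illustration assuming a generic embedding model; the random vectors, names, and threshold are stand-ins, not any real vendor's system.

```python
# Minimal sketch of embedding-based face matching (illustrative only).
# Assumption: a trained neural network maps each face image to a
# fixed-length vector ("embedding") so that images of the same person
# land close together; here random vectors stand in for model outputs.
import numpy as np

rng = np.random.default_rng(0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_suspect(probe, gallery, threshold=0.6):
    """Return the gallery identity most similar to the probe embedding,
    or None if no similarity clears the decision threshold."""
    best_id, best_score = None, threshold
    for identity, reference in gallery.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Simulated gallery of known embeddings; the probe is a noisy capture
# of one of them, e.g. a face cropped from surveillance footage.
gallery = {name: rng.normal(size=128) for name in ("alice", "bob", "carol")}
probe = gallery["bob"] + rng.normal(scale=0.2, size=128)
print(match_suspect(probe, gallery))  # -> "bob"
```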


The Problems

Prima facie, this technology appears extremely beneficial. Unfortunately, the picture in this case is not as clear as it might seem.

  1. Unreliability – The images and videos used for such analysis cannot be expected to be of high quality. The primary source for policing will quite obviously be surveillance footage. Since most surveillance footage is not nearly sharp enough to capture subtle facial features, the technology is going to be highly unreliable.

  2. Easily Fooled – Unfortunately, artificial intelligence is vulnerable to deception. A simple change of attire from casual to formal can sway an algorithm's decision about a person's guilt. Additionally, basic accessories such as glasses can easily throw off algorithms designed to identify facial features.

  3. Garbage In, Garbage Out – Arguably the most important component in creating artificial intelligence is the collection of data. At the end of the day, an algorithm can only work with the data it has been given. Historically, minorities have been treated unfairly, and the data disproportionately exhibits that racial bias. Any algorithm trained on such data will therefore inevitably be racially biased: it will be more likely to identify a minority as a threat simply because the data suggests that minorities are more likely to commit crimes. The toy sketch after this list illustrates the effect.

  4. Racial Bias – Police in France, Australia, and the USA rely on a French company's (IDEMIA) algorithm for facial recognition. Testing has established that the algorithm misidentifies people with darker skin ten times more frequently than Caucasian people (Simonite, 2019).
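
The "garbage in, garbage out" problem in point 3 can be made concrete with a toy sketch. All numbers below are invented; the point is that a naive model which learns arrest rates from skewed historical records will faithfully reproduce the skew as a "risk score", even when underlying behaviour is identical across groups.

```python
# Toy illustration of "garbage in, garbage out" (point 3 above).
# All numbers are invented. Group B was over-policed historically, so
# it is over-represented among arrests even though underlying offending
# rates are assumed identical across groups.
from collections import Counter

# Hypothetical historical records as (group, arrested) pairs.
records = ([("A", True)] * 50 + [("A", False)] * 950
           + [("B", True)] * 150 + [("B", False)] * 850)

def learned_risk(records):
    """'Train' by computing the arrest rate per group -- the only
    signal a naive model has is what the biased data tells it."""
    totals, arrests = Counter(), Counter()
    for group, arrested in records:
        totals[group] += 1
        arrests[group] += arrested
    return {g: arrests[g] / totals[g] for g in totals}

print(learned_risk(records))
# {'A': 0.05, 'B': 0.15}: the model now rates group B three times as
# "risky", purely as an artifact of how the input data was collected.
```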

Is it all bad?

If it is all that bad, why are we trying to improve facial recognition technology in the first place? The argument here is that facial recognition has far more benefits than drawbacks. The technology is already used to identify known criminals caught on tape and has proven accurate in several cases. It also helps authorities identify victims of human trafficking and senior citizens who have been separated from their caretakers (Marr, 2019). Undoubtedly, facial recognition technology has the potential to be a great help in making society more secure.

Predictive Policing

We have already seen that despite its benefits, facial recognition technology is riddled with flaws, and these flaws get amplified when the technology is used to identify potential offenders. Such algorithms inevitably flag minorities as potential offenders because the data sets they rely on encode exactly those skewed probabilities. This invariably leads to the false identification of minorities as potential offenders, which in turn further propagates racial bias, as the sketch below illustrates. The technology is extremely unreliable at this stage, and governments and corporations are taking cognizance of that. Tech giants such as Microsoft, IBM, and Amazon are refusing to supply the authorities with their facial recognition algorithms until adequate regulations are put in place (Greene, 2020).
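
The feedback dynamic at work here can be sketched in a few lines. In the toy simulation below (with invented numbers, assuming a model that reallocates patrols in proportion to recorded crime), both groups offend at the same true rate, yet the skewed attention inherited from biased data never washes out.

```python
# Toy simulation of the predictive-policing feedback loop described
# above. All numbers are invented. Both groups offend at the same true
# rate, but group B inherits a larger share of police attention from
# biased historical data -- and the loop never corrects it.
TRUE_RATE = 0.05                  # identical actual offending rate
attention = {"A": 0.3, "B": 0.7}  # initial patrol shares from biased data

for step in range(5):
    # More patrols in an area means more incidents get observed and
    # logged, so recorded crime scales with attention, not behaviour.
    recorded = {g: TRUE_RATE * attention[g] for g in attention}
    # The model is refit on the new records, reallocating attention in
    # proportion to recorded crime -- which just echoes the last round.
    total = sum(recorded.values())
    attention = {g: recorded[g] / total for g in recorded}
    print(step, {g: round(a, 2) for g, a in attention.items()})
# Every round prints {'A': 0.3, 'B': 0.7}: the inherited bias never
# washes out, even though true behaviour is identical across groups.
```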

Conclusion

It is abundantly clear that facial recognition is currently not equipped to handle predictive policing. There are glaring vulnerabilities and flaws that need to be addressed, as well as a lack of regulation governing the implementation of such technology. If this technology is ever to be deployed, extensive measures must first be taken to perfect it, because the stakes are immense. However, it is unlikely that such a technology can ever truly work because of the biases inherent in it. Unfortunately, the only purpose it seems to serve is to accelerate and amplify institutionalized racism (Vincent, 2020). Just as Lombroso's work was discredited, the best way forward would be to discredit facial recognition technology for predictive policing and learn from our mistakes.

Bibliography

Greene, Jay. 2020. Microsoft won’t sell police its facial-recognition technology, following similar moves by Amazon and IBM. WashingtonPost.com. [Online] Washington Post, June 12, 2020. [Cited: July 04, 2020.] https://www.washingtonpost.com/technology/2020/06/11/microsoft-facial-recognition/.

Little, Becky. 2019. What Type of Criminal Are You? 19th-Century Doctors Claimed to Know by Your Face. History.com. [Online] History, August 08, 2019. [Cited: July 04, 2020.] https://www.history.com/news/born-criminal-theory-criminology.

Marr, Bernard. 2019. Facial Recognition Technology: Here Are The Important Pros And Cons. Forbes.com. [Online] Forbes, August 19, 2019. [Cited: July 04, 2020.] https://www.forbes.com/sites/bernardmarr/2019/08/19/facial-recognition-technology-here-are-the-important-pros-and-cons/#debd17214d16.

Simonite, Tom. 2019. The Best Algorithms Struggle to Recognize Black Faces Equally. Wired.com. [Online] Wired, July 22, 2019. [Cited: July 05, 2020.] https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/.

Vincent, James. 2020. AI experts say research into algorithms that claim to predict criminality must end. TheVerge.com. [Online] The Verge, June 24, 2020. [Cited: July 04, 2020.] https://www.theverge.com/2020/6/24/21301465/ai-machine-learning-racist-crime-prediction-coalition-critical-technology-springer-study.

