
Amazon's Anti-Racism Stand over Facial Recognition Systems

Kshitij Naik,

Associate Editor,

The Indian Learning.


 

Facial recognition has become widespread. Solutions built on facial recognition algorithms range from everyday conveniences, such as unlocking smartphones, to serious applications like finding missing persons and assisting forensic investigations. The technology is also widely deployed as a security measure by law enforcement agencies and private institutions: schools use it to identify dangerous parents or drug dealers, and casinos use it to spot cheaters and advantage gamblers. The applications of face recognition are vast, and new ones are being developed with each passing day. What we need to understand, however, is that all of this software must identify individuals and distinguish one person from another, and it can make mistakes.

Understanding Face Recognition

To understand how face recognition systems can fail, we need to look briefly at the history of face recognition software and at how these systems actually recognise people's faces.

One of the first facial recognition algorithms was developed by Woodrow Wilson Bledsoe in the 1960s; because of the limited computing power of the time, his system required facial data to be entered manually. A more sophisticated algorithm, developed in the 1970s, recognised faces based on 21 facial markers such as lip thickness and the shape of the nose. At its core, a facial recognition system uses biometrics to map facial features from a photograph or video and then compares that information against a database of known faces to find a match. Most companies source this database from what is already available on the market instead of building their own, which raises serious concerns about privacy and about bias in the data. The algorithms in use today are based on the same principles, but thanks to advances in artificial intelligence and neural networks, far more sophisticated systems are now available that can process large amounts of facial data in one go and compare photographs against a database much faster. Even so, the underlying data is often faulty, and the algorithms themselves can make mistakes in the process.
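To make the matching step concrete, here is a minimal sketch in Python. It is only an illustration, not any particular commercial system: the `find_match` helper, the toy embeddings and the 0.8 threshold are assumptions for this example, and real systems derive embeddings from a trained neural network rather than from random numbers.

```python
# A minimal sketch of the "compare against a database of known faces" step.
# In a real system, a neural network maps each face image to an embedding
# vector; here the embeddings are stand-in numpy arrays for illustration.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_match(probe: np.ndarray, database: dict, threshold: float = 0.8):
    """Return the best-matching identity, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, known_embedding in database.items():
        score = cosine_similarity(probe, known_embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Toy database of "known faces" (random vectors standing in for real embeddings).
rng = np.random.default_rng(0)
database = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}

# A probe close to "alice" should match; a random stranger should not.
probe = database["alice"] + rng.normal(scale=0.05, size=128)
print(find_match(probe, database))                 # ('alice', ~0.99)
print(find_match(rng.normal(size=128), database))  # (None, 0.8)
```

The important point is that the output is a similarity score measured against a threshold, which is why a database that under-represents some groups of faces makes both false matches and missed matches more likely for those groups.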

The Problem

The problem with facial recognition is the same one any other algorithm may have: it needs data, and a large amount of it, in order to function efficiently. Processing the data is not the problem; sourcing it is. Most companies use open-source datasets or data already available on the market instead of developing their own database. The trouble is that this data is largely biased: it may under-represent people of a certain colour or gender, and as a result the algorithm becomes biased against a particular race or gender, because it has never seen anything similar to the face in front of it, or has not been trained to recognise it, and so misidentifies the person. In a 2018 study of the gender and racial biases embedded in commercial facial recognition systems, MIT researcher Joy Buolamwini and Timnit Gebru, then a Microsoft researcher, found that the worst-performing system, IBM's, was 34.4 percentage points less accurate at classifying the gender of dark-skinned women than of light-skinned men. Buolamwini had first noticed the issue while working on a college project: the face recognition algorithm she had developed did not recognise her own face, probably because the data used to train it did not contain enough similar images and was biased. This showed clearly that existing facial recognition systems carried bias and were not as accurate as the companies claimed.
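The kind of disaggregated evaluation behind those percentage-point figures can be sketched in a few lines. This is an illustrative reconstruction, not the Gender Shades code, and the records below are made-up placeholders rather than real study data: the idea is simply to measure accuracy per demographic subgroup and report the gap.

```python
# Illustrative sketch of a disaggregated accuracy audit (in the spirit of
# Gender Shades): measure a classifier's accuracy per subgroup and report
# the gap between the best- and worst-served groups.
from collections import defaultdict

# Each record: (subgroup label, true gender, gender predicted by the system).
# These four records are placeholders, not real data.
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),   # a misclassification
    ("darker-skinned female", "female", "female"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, truth, predicted in records:
    total[group] += 1
    correct[group] += (truth == predicted)

accuracy = {g: correct[g] / total[g] for g in total}
for group, acc in accuracy.items():
    print(f"{group}: {acc:.0%}")

gap = max(accuracy.values()) - min(accuracy.values())
print(f"accuracy gap: {gap * 100:.1f} percentage points")
```

An overall accuracy number hides exactly this kind of gap, which is why the audit reports results per subgroup rather than in aggregate.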

The other problem is "function creep". What is function creep? It is the term used to describe the expansion of a process or system, where data collected for one purpose is used for another, entirely unauthorised purpose. Function creep raises serious ethical questions, yet it is practised by various companies and government agencies, for surveillance, to train their systems, and so on. For example, facial biometrics should be used only to unlock a particular device or avail a particular service, and for nothing else. Unfortunately, that is not the case: facial recognition has recently been in the spotlight as a tool of mass surveillance for the police and other law enforcement agencies. One of the major concerns people around the world share is that face recognition could be used by law enforcement to monitor them round the clock, a serious threat to their privacy, similar to what China has been doing.
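One common technical safeguard against function creep is purpose binding: biometric data is tagged with the purposes it was collected for, and every access must declare its purpose. The sketch below is a hypothetical illustration of that idea; the `BiometricRecord` type, the purpose names and the `access` helper are all assumptions for this example, not any real product's design.

```python
# Minimal sketch of purpose binding: biometric data is tagged with the
# purposes it was collected for, and every access must name its purpose.
from dataclasses import dataclass, field

@dataclass
class BiometricRecord:
    subject_id: str
    embedding: bytes                       # the stored face template
    allowed_purposes: set = field(default_factory=set)

class FunctionCreepError(Exception):
    """Raised when data is accessed for a purpose it was not collected for."""

def access(record: BiometricRecord, purpose: str) -> bytes:
    if purpose not in record.allowed_purposes:
        raise FunctionCreepError(
            f"{purpose!r} not among consented purposes {record.allowed_purposes}"
        )
    return record.embedding

record = BiometricRecord("user-42", b"\x01\x02", {"device-unlock"})
access(record, "device-unlock")            # fine: the consented purpose

try:
    access(record, "police-surveillance")  # never consented: function creep
except FunctionCreepError as err:
    print("blocked:", err)
```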

Facial recognition: a threat to privacy?

By now we know that facial recognition algorithms can be biased and faulty because of the data used to train them. In recent years, however, Silicon Valley giants like Microsoft and Google have been called out for selling facial recognition technology to security agencies while knowing that the underlying data could be biased against particular communities. This raised various concerns about privacy and about how people's data was being used without their knowledge; most people using face-recognition-based products had no idea that data from their devices, or from the services they were availing, was being used by these organisations for law enforcement. For example, Ring, Amazon's newly acquired subsidiary that makes outdoor motion-detection cameras and video-enabled doorbells, signed deals with various law enforcement agencies to use 'data', or video footage, from its cameras in criminal investigations. Owners of these home devices could refuse to cooperate with law enforcement agencies, but they were never really informed of this right.

A fight to stop Amazon from selling face recognition to the police

In a recent move, tech companies like Amazon, Microsoft and IBM decided to temporarily stop selling their facial recognition technology to the police until there is legislation on the subject. Amazon's Rekognition is a cloud-based (SaaS) computer vision platform that uses machine learning to recognise people, things, activities and more in a given image, and it has been used widely by American government agencies, especially police departments. The American Civil Liberties Union (ACLU) of Washington delivered over 150,000 petition signatures, along with a letter from the company's shareholders, demanding that Amazon stop selling its services to the police, both because such a system could be used as a surveillance tool and because such machine learning algorithms can be highly biased. It was only after the recent George Floyd protests against racial discrimination, and pressure from various civil rights activists and other stakeholders, that Amazon decided to stop selling facial recognition systems to the police, following IBM's decision to do so.
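For context on what Rekognition actually exposes to its customers, here is a short sketch of calling it through AWS's boto3 SDK. The image file names are placeholders, and the example assumes valid AWS credentials are configured; it is an illustration of the public API, not of how any police agency used it.

```python
# Sketch of calling Amazon Rekognition via the boto3 SDK: detect labels
# (people, things, activities) in one image, then compare faces across
# two images. File names are placeholders; AWS credentials are assumed.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("scene.jpg", "rb") as f:   # placeholder image
    scene = f.read()

# What is in the picture?
labels = rekognition.detect_labels(Image={"Bytes": scene}, MaxLabels=5)
for label in labels["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")

with open("probe.jpg", "rb") as f:   # placeholder image
    probe = f.read()

# Is the face in probe.jpg the same person as a face in scene.jpg?
result = rekognition.compare_faces(
    SourceImage={"Bytes": probe},
    TargetImage={"Bytes": scene},
    SimilarityThreshold=80,  # matches below this score are dropped
)
for match in result["FaceMatches"]:
    print(f"match with similarity {match['Similarity']:.1f}%")
```

Note that the similarity threshold is chosen by the customer, which is one reason critics worried about how reliably such a system would perform in police hands.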

Various researchers and civil rights activists had been trying to bring this to Amazon's attention for a long time, yet it was only after the George Floyd protests that Amazon acted on their calls. The real question is whether this was merely a political move: the 102-word announcement Amazon made was vague and did not specify whether the law enforcement agencies it mentioned also include agencies such as US Immigration and Customs Enforcement or the Department of Homeland Security.

Amazon tries to discredit the research while other tech companies try to learn

After Raji and Buolamwini published the 'Gender Shades' research on the racial bias of face recognition systems from companies including Amazon, Kairos, IBM, Face++ and Microsoft, IBM was the first company to reach out to them to understand how it could solve its bias problem. Amazon's reaction was the complete opposite. When Raji and Buolamwini expanded the scope of their research to Rekognition, they found huge technical inaccuracies in the system: Rekognition was 31.4 percentage points less accurate at classifying the gender of dark-skinned women than that of light-skinned men. Instead of acknowledging the results, Amazon tried to discredit the research in blog posts; the research was subsequently defended by 80 AI researchers.

After Amazon decided not to sell its face-recognition technology to law enforcement agencies, Raji said, "It's incredible that Amazon's actually responding within this current conversation around racism."

The takeaway

What we need to understand is that these emerging technologies give humans great power, power they could never have imagined having. But with great power comes great responsibility: the responsibility to use technology ethically, and in a way that it is not weaponised against communities. Somewhere, we will have to draw the line where technologies such as this one do more harm to humans than good.


