When was FRT first introduced in Delhi? What are the concerns with using the technology on a massive scale?

The story so far: Right to Information (RTI) responses received by the Internet Freedom Foundation, a digital rights organisation based in New Delhi, reveal that the Delhi Police treats matches with over 80% similarity generated by its facial recognition technology (FRT) system as positive results.

Why is Delhi Police using facial recognition technology?

The Delhi Police initially obtained FRT to trace and identify missing children. According to RTI responses received from the Delhi Police, the procurement was authorised under a 2018 direction of the Delhi High Court in Sadhan Haldar vs NCT of Delhi. However, in 2018 itself, the Delhi Police submitted to the Delhi High Court that the accuracy of the technology it had procured was only 2% and “not good”.

Things took a turn after multiple reports surfaced that the Delhi Police was using FRT to monitor the anti-CAA protests in 2019. In 2020, the Delhi Police stated in an RTI reply that although it had obtained FRT under the Sadhan Haldar direction, which related specifically to finding missing children, it was using FRT for police investigations. This broadening of scope is a clear example of ‘function creep’, where a technology or system gradually expands from its initial purpose to fulfil wider functions. According to available information, the Delhi Police has since used FRT for investigative purposes, notably during the 2020 Northeast Delhi riots, the 2021 Red Fort violence and the 2022 Jahangirpuri riots.

What is facial recognition?

Facial recognition is an algorithm-based technology that creates a digital face map by identifying and mapping an individual’s facial features, which is then matched against a database to which the operator has access. It can be used for two purposes. First, 1:1 identity verification, where the face map is matched against the person’s own photograph in a database to prove their identity; for example, 1:1 verification is used to unlock phones, though it is increasingly being used to secure access to government benefits and schemes. Second, 1:n identity recognition, where the face map is taken from a photo or video and matched against the entire database to identify the person in the photo or video. Law enforcement agencies such as the Delhi Police usually procure FRT for 1:n identification.
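The difference between the two modes can be sketched in code. This is a minimal, hypothetical illustration: real FRT systems use learned face embeddings from a neural network, whereas the vectors, record names and threshold below are made-up stand-ins.

```python
import math

# Hypothetical face "maps": fixed-length embedding vectors that an FRT
# model would produce. The numbers here are invented stand-ins.
database = {
    "record_A": [0.1, 0.9, 0.3],
    "record_B": [0.8, 0.2, 0.5],
    "record_C": [0.4, 0.4, 0.7],
}

def similarity(a, b):
    """Cosine similarity between two embeddings, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify_1_to_1(probe, claimed_id, threshold=0.8):
    """1:1 verification: compare the probe against a single enrolled
    record and accept or reject the claimed identity."""
    return similarity(probe, database[claimed_id]) >= threshold

def identify_1_to_n(probe):
    """1:n identification: score the probe against every record in the
    database and return candidates ranked by match score, best first."""
    scores = {name: similarity(probe, emb) for name, emb in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Note the asymmetry: 1:1 verification answers a yes/no question about one claimed identity, while 1:n identification searches the whole database and returns a ranked candidate list, which is why the latter raises distinct surveillance concerns.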

In 1:n identification, the FRT generates a probability or match score between the suspect to be identified and the available database of identified criminals. A list of possible matches is generated, ranked by their likelihood of being the correct match, with the corresponding match scores. However, it is ultimately a human analyst who selects the final match from the list generated by the FRT. According to the Internet Freedom Foundation’s Panoptic Project, which tracks the spread of FRT in India, there are at least 124 government-authorised FRT projects in the country.

Why is the use of FRT harmful?

India has seen rapid deployment of FRT in recent years, by both the Union and State governments, without any law to regulate its use. The use of FRT presents two sets of issues: misidentification due to the inaccuracy of the technology, and mass surveillance due to misuse of the technology. Extensive research on the technology has found that its accuracy rates fall sharply depending on race and gender. This can result in a false positive, where a person is misidentified as someone else, or a false negative, where a person is not verified as themselves. A false positive can lead to bias against the individual who is misidentified. In 2018, the American Civil Liberties Union found that Amazon’s facial recognition technology, Rekognition, incorrectly identified 28 members of the U.S. Congress as people who had been arrested for a crime; of the 28, a disproportionate number were people of colour. Also in 2018, researchers Joy Buolamwini and Timnit Gebru found that facial recognition systems had higher error rates when identifying women and people of colour, with the highest error rates when identifying Black women. The use of this technology by law enforcement authorities has already led to three people in the U.S. being wrongfully arrested.

A false negative, on the other hand, may exclude an individual from essential schemes that use FRT as a means of securing access. An example of such exclusion is the failure of biometric-based authentication under Aadhaar, which has resulted in many people being excluded from essential government services, which in turn has led to starvation.

However, even if accurate, this technology can cause irreversible harm as a tool to facilitate state-sponsored mass surveillance. India currently does not have a data protection law or any FRT-specific regulation to protect against misuse. In this legal vacuum, there are no safeguards to ensure that authorities use FRT only for the purposes for which they have been authorised, as the Delhi Police’s case shows. FRT can enable the constant surveillance of an individual, violating their fundamental right to privacy.

What did Delhi Police’s RTI 2022 responses reveal?

The RTI replies, dated July 25, 2022, were shared by the Delhi Police after the Internet Freedom Foundation filed an appeal before the Central Information Commission, the information having been repeatedly refused by the Delhi Police. In its response, the Delhi Police revealed that matches above 80% similarity are treated as positive results, while matches below 80% similarity are treated as false positives that require additional “corroborative evidence”. It is unclear why 80% was chosen as the threshold between positive and false positive, and no justification has been offered for the claim that a match above 80% is sufficient to assume the result is correct. Second, categorising results below 80% as false positives rather than as negatives indicates that the Delhi Police may still investigate such results further. Thus, people who share similar facial features, such as members of extended families or communities, may end up being targeted. This could result in the targeting of communities that have historically been over-policed and have faced discrimination at the hands of law enforcement authorities.
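The decision rule described in the RTI reply can be sketched as a single thresholding step. The 80% figure comes from the reply itself; the behaviour at exactly 80%, and the sample scores below, are assumptions for illustration.

```python
def classify_match(score, threshold=0.80):
    """Apply the threshold described in the RTI reply: matches above 80%
    similarity are treated as positive results, while those below are
    treated as 'false positives' that still invite corroborative
    evidence, rather than being discarded as negatives.
    (The boundary case at exactly 80% is not specified in the reply;
    strictly-greater-than is assumed here.)"""
    if score > threshold:
        return "positive"
    return "false positive (corroborative evidence required)"

# Illustrative scores from a hypothetical candidate list
for score in (0.93, 0.81, 0.62):
    print(f"{score:.2f} -> {classify_match(score)}")
```

The key point the sketch makes visible is that no score leads to an outright “negative”: every candidate either counts as a match or remains open to further investigation.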

The responses also mention that the Delhi Police is matching the photographs/videos against photographs collected under Sections 3 and 4 of the Identification of Prisoners Act, 1920, which has since been replaced by the Criminal Procedure (Identification) Act, 2022. This Act allows broader categories of data to be collected from a wider section of people, i.e., “convicts and other persons for the purposes of identification and investigation of criminal cases”. It is feared that the law will lead to the overbroad collection of personal data, contrary to internationally recognised best practices for the collection and processing of data. This revelation raises multiple concerns, as the use of facial recognition can lead to wrongful arrests and mass surveillance resulting in privacy violations. Delhi is not the only city where such surveillance is underway. Multiple cities, including Kolkata, Bengaluru, Hyderabad, Ahmedabad and Lucknow, are rolling out “Safe City” programmes that deploy surveillance infrastructure to reduce gender-based violence, in the absence of any regulatory legal framework to act as a safeguard.

Anushka Jain is an Associate Policy Advisor and Gyan Prakash Tripathi is a Policy Intern at the Internet Freedom Foundation, New Delhi

THE GIST
RTI responses received by the Internet Freedom Foundation reveal that the Delhi Police treats matches with over 80% similarity generated by its facial recognition technology system as positive results. Facial recognition is an algorithm-based technology that creates a digital face map by identifying and mapping an individual’s facial features, which is then matched against a database to which the operator has access.

The Delhi Police initially obtained FRT for the purpose of tracing and identifying missing children, as directed by the Delhi High Court in Sadhan Haldar vs NCT of Delhi.

Extensive research on FRT has found that its accuracy rates fall sharply depending on race and gender. This can result in a false positive, where a person is misidentified as someone else, or a false negative, where a person is not verified as themselves. The technology can also be used as a tool to facilitate state-sponsored mass surveillance.
