Has the horse bolted? Dealing with legal and practical challenges of facial recognition


1. Introduction

New technological systems are widespread across many sectors of society and affect our everyday life. Among these technologies, biometric systems are probably the most intrusive, as they involve the collection, categorisation and recognition of data related to the human body. The prime example is facial recognition, which targets people's most distinctive feature: their face. Facial recognition technology is already widely deployed around the world, with applications ranging from Face-ID features on smartphones to law enforcement, each posing different challenges and questions.

2. The EU legal framework on facial recognition

As artificial intelligence (hereinafter "AI") technologies, facial recognition systems have been under the scrutiny of the European institutions in recent years, with unclear results so far.

The first attempt to set out the legal framework for facial recognition systems was the 2020 White Paper on AI. Although the White Paper cannot be considered a legally binding document, its provisions are useful to understand the initial European Union (hereinafter "EU") approach to biometric systems. Facial recognition was indeed already categorised as a high-risk AI application with important implications for fundamental rights. Therefore, the conclusion of the White Paper, in line with the EU data protection rules, was that the use of AI in facial recognition systems should be duly justified, proportionate and subject to adequate safeguards.[1]

One year after the publication of the White Paper, on 21 April 2021, the European Commission (hereinafter "EC") published its first proposal for a regulation on AI, the AI Act. Opting for a "risk-based" approach, the Commission classifies AI applications into four categories: unacceptable risk (Title II); high risk (Title III); limited risk (Title IV); minimal risk (Title IX).

Before outlining the provisions related to facial recognition systems, the EC provides a notion of biometric identification systems. It draws a clear distinction between "real-time" and "post" systems: while the former involve the use of live material, such as camera footage, in the latter the data collected are identified and compared after a significant delay.[2] Although facial recognition systems fall under both categories, the EC focuses on the former.

In particular, "real-time" remote biometric identification systems have been classified as prohibited AI systems, with some exceptions. As stated in Art. 5, the Draft AI Act allows Member States to use or authorise such technologies in publicly accessible spaces for law enforcement only for specific purposes: «the targeted search for specific potential victims of crime, including missing children; the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack; the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence»[3] punishable by a custodial sentence or a detention order for a maximum period of at least three years and allowing for the issuing of a European Arrest Warrant [Art. 2(2) of Council Framework Decision 2002/584/JHA].

In addition, law enforcement authorities should take into account the following key elements before using "real-time" remote biometric identification systems: «the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system; the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences».[4]

Although the use of facial recognition systems is thus strongly limited, three critical flaws should be pointed out.

First and foremost, there is no prohibition on selling such technologies outside EU territory. This means that EU companies may sell their products to oppressive regimes where facial recognition is considered "the new normal".

Secondly, the prohibition covers only "real-time" systems: "post" remote biometric identification, although classified as high-risk, remains permitted. Thirdly, the prohibition does not prevent actors from using this technology for purposes other than law enforcement, such as crowd control or public health.[5]

In its examination of the AI Act, the European Parliament (hereinafter "EP") has also set out its position. In its resolution of 6 October 2021 on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters, the EP called for a ban on the use of facial recognition in public spaces and on private facial recognition databases, such as Clearview AI, which will be discussed more extensively in the next section. While acknowledging the importance of facial recognition systems for law enforcement in identifying victims of crime, the EP highlighted the power asymmetry between public actors using AI tools and their private suppliers. Recalling the opinion of the European Data Protection Board (EDPB),[6] the Parliament expressed great concern over the use of private facial recognition databases by Member States' authorities, given its non-compliance with the EU data protection regime.[7]

This vision was later confirmed in a draft report published by the Special Committee on Artificial Intelligence in a Digital Age (AIDA).[8] In the document, which sets out the EU Roadmap for AI, there is a clear indication of how facial recognition should be used. While AI-based acquisition of biometric data can be appropriate and beneficial for individuals and the general public, the draft report acknowledges that the same technology poses crucial ethical and legal challenges. In particular, AIDA's concerns focused on: private entities that collect data without supervision, regarded as a threat to fundamental rights and specifically to the right to privacy; and EU companies that have sold their biometric systems to authoritarian regimes outside EU territory.[9]

The Council of the European Union, on the other hand, follows a different approach. In a compromise text published by the Slovenian Presidency on 29 November 2021, the permitted use of facial recognition systems was extended to (private) actors who do not cooperate with law enforcement authorities, and the list of objectives allowing public authorities to use "real-time" remote biometric identification was expanded.[10]

As shown, facial recognition is viewed differently by the various EU institutions. While the EP and bodies such as the EDPB call for strong limitations (especially on private actors), the EC and, even more so, the Council have been less strict. All the cards are still on the table.

3. A practical and dangerous example: Clearview AI

The attention paid to the matter is justified by the general concern that has arisen over the use of AI facial recognition systems in many countries around the world, with outcomes that are rarely reassuring.

In the United States, in particular, several facial recognition software tools have been used by law enforcement agencies for years to help investigators identify suspects whose photos have been captured by security cameras, or in any other way, while committing crimes.

The first generation of software used by law enforcement agencies took advantage of driving licence pictures and mugshots. These photo databases were processed by algorithms performing what is often called "hashing": detecting biometric measurements such as the distance between the pupils, the distance between the eyes and the nose, the relative length of the head, and so on. The process creates a geometric map, a "geography" of the face, which is then used to find matches between the person to be recognised and the photos in the database.
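To make the matching logic more concrete, the following is a minimal sketch in Python of a landmark-distance comparison of the kind just described. The landmark names, the normalisation by head length and the threshold are illustrative assumptions, not any vendor's actual algorithm.

```python
import numpy as np

def face_signature(lm: dict[str, np.ndarray]) -> np.ndarray:
    """Build a geometric signature from detected facial landmarks.

    `lm` maps landmark names to (x, y) coordinates. Distances are
    normalised by head length, so the signature is scale-invariant.
    """
    dist = lambda a, b: float(np.linalg.norm(lm[a] - lm[b]))
    head = dist("chin", "forehead")                    # relative head length
    return np.array([
        dist("left_pupil", "right_pupil") / head,      # inter-pupillary distance
        dist("left_eye_outer", "nose_tip") / head,     # eye-to-nose distance
        dist("right_eye_outer", "nose_tip") / head,
    ])

def best_match(probe: np.ndarray, gallery: dict[str, np.ndarray],
               threshold: float = 0.05) -> str | None:
    """Return the closest gallery identity, or None if no stored
    signature is within `threshold` of the probe's signature."""
    name, d = min(((n, float(np.linalg.norm(probe - sig)))
                   for n, sig in gallery.items()), key=lambda t: t[1])
    return name if d < threshold else None
```

The threshold is what turns a ranked list of candidates into a claimed "match"; where it is set, and by whom, is precisely the kind of design choice the law currently leaves unregulated.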

These early systems were largely superseded by the arrival of a new important player: Clearview AI, founded in 2017 and widely adopted from 2019 onwards. To date, it has been used by more than 2,200 law enforcement agencies, private companies and individuals, including police departments, the US Customs and Border Protection, universities and even the FBI.[11] The main difference between Clearview AI and its predecessors lies in the fact that Clearview AI gathers data from all open-source images available online. Social networks provide Clearview AI with an immense number of pictures that are uploaded voluntarily by users and often come with a name attached. The entire Clearview AI database is estimated to contain over 10 billion data points, including faces as well as other parts of the body, allowing recognition through tattoos and other bodily details.

The process of scraping data from the internet is not prohibited: for the time being, there are no clear rules on what can be scraped online. It is undeniable that data collection through scraping underpinned the very development of the internet and, more recently, of all technologies and research projects relying on big data. It is questionable, though, whether resorting to such a practice for the development of facial recognition systems, without explicit consent, should be allowed, given that pictures on the internet are often uploaded by third parties without the knowledge of the person depicted. Social media applications, for their part, provide privacy settings that are difficult for users to manage and that sometimes do not allow every picture to be made private, letting people, for example, preview the main profile picture through search engines.

The use of Clearview AI and other facial recognition systems by the police has been brought into the spotlight following cases of manifest misuse and inefficiency that led to wrongful arrests.

In 2020, a man named Robert Williams was arrested in Detroit for a theft from a luxury store that had taken place almost a year earlier. Williams was accused on the basis of a match produced by a facial recognition system whose database included a picture from Williams's old driving licence. His picture was then included in a photo line-up shown to a security contractor who had not even been present at the crime scene at the time of the theft; she had only seen the surveillance footage. She identified Williams, and shortly afterwards he was arrested and jailed.

Williams was soon declared innocent, but his case, which did not remain an isolated one, is emblematic of just how many factors should be taken into account when using facial recognition technologies.

First and above all, facial recognition has been proved to be biased and to produce far less accurate results when recognising people belonging to minority groups.[12] Although it aims at producing more objective results than humans, research has shown that people of colour or belonging to other minorities, women, and people aged 18 to 30 are recognised with a significantly lower accuracy rate.[13] This may be due to the way the AI is trained, and to the fact that bad lighting can make darker skin tones even harder to scan.

Shadows, angles and bad poses, on the other hand, represent a challenge for almost every facial recognition process. Newer-generation technologies are now able to take different perspectives and lighting conditions into account when scanning faces; they can even handle the presence of glasses and face masks. Nonetheless, the practice of manipulating pictures to make them more easily machine-readable has been reported, with tweaks that range from adjusting the colours of the image to pasting open eyes onto faces. Moreover, while first-generation systems drew at least one reference point from a dataset of consistent, frontal, well-lit pictures, with Clearview AI both terms of the comparison may be substandard pictures requiring major manipulation before they can be hashed.
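By way of illustration, the following is a sketch of the kind of routine image "tweaks" mentioned above, using the open-source OpenCV library; the steps and parameters are generic assumptions about typical preprocessing, not the pipeline of any specific vendor.

```python
import cv2

def preprocess_for_matching(path: str, size: tuple[int, int] = (160, 160)):
    """Normalise a substandard photo before feature extraction."""
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)        # discard colour information
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalised = clahe.apply(gray)                       # compensate for poor lighting
    return cv2.resize(equalised, size)                  # normalise resolution
```

Even benign steps like these alter the evidence before it is compared; the more aggressive manipulations reported in practice, such as pasting in open eyes, go well beyond normalisation.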

The entire identification process is not required to follow any guideline or official rule, and it is conducted in a completely uncontrolled and autonomous way within the police departments adopting these systems. No existing rules specify what kind of manipulation may be carried out or what checks and balances should be put in place. Clearview AI and other facial recognition companies consistently state that their software is meant to be used only as a first step in the investigation, and that every match must be supported by other evidence. And yet, Williams's case shows that none of these further checks was made before he was arrested. No state or federal rule establishes a duty to inform the suspect, at any point in the proceedings, that his face was originally recognised by an AI system.

Machines' biases then tend to translate into human biases. In a world that is increasingly machine-guided, where AI has generally proved more efficient than the average human, it is becoming harder to strike a balance between the trust we place in machines and the constant need for accountability and transparency that calls for a human countercheck. A human operator called to confirm or reject an identity match will suffer from an objectivity bias: the more the human trusts the algorithm because it has succeeded in the past, the less effective the control becomes. The risk is that this countercheck becomes a sheer formality.

Furthermore, an AI relying on a dataset as large as Clearview AI's has a higher match rate, but also an increased error rate compared to first-generation facial recognition systems. The 99.85% accuracy rate declared by the company is often challenged by doubts about the rigour of the corresponding tests.[14]
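A back-of-the-envelope calculation illustrates why gallery size matters here: even an excellent per-comparison false-match rate, compounded over billions of images, makes at least one spurious candidate near-certain. The one-in-a-million rate below is an assumption chosen for illustration, not a measured figure, and the model treats comparisons as independent for simplicity.

```python
# Probability of at least one false match when a probe face is compared
# against every image in a gallery of a given size (rate is illustrative).
per_comparison_fmr = 1e-6  # assumed false-match rate for a single comparison

for gallery_size in (10_000, 1_000_000, 10_000_000_000):
    p_any = 1 - (1 - per_comparison_fmr) ** gallery_size
    print(f"{gallery_size:>14,} images -> P(at least one false match) = {p_any:.4f}")
```

Under these assumptions, a 10,000-image mugshot database yields a false match about 1% of the time, while a 10-billion-image gallery yields one virtually every time, leaving it to the human operator to sift the true match from the impostors.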

Ethical issues of all kinds have been raised over the possibility that this tool could be used on a large scale to identify and reject immigrants, or to identify and subsequently punish activists seen taking part in protests such as the Black Lives Matter demonstrations.

4. Conclusion

A complete ban on this technology seems unrealistic today: it is already widespread throughout our society, especially in law enforcement. It would be very difficult to shut the stable door after the horse has bolted. Moreover, despite having focused mainly on the criticalities of facial recognition, we cannot deny that good outcomes could also arise from a virtuous use of it.

What should be done instead is to outline clear rules both for vendors and for users.

As for the vendors, they should follow precise rules in designing the algorithms at the core of biometric identification systems. First, the AI behind facial recognition should receive correct and proper input, i.e., "good data": the dataset should include different populations, racial minorities and genders. This leads to the second point: avoiding a biased AI. Indeed, as explained earlier, facial recognition systems today make several mistakes and lead to discriminatory outcomes. Thirdly, the criteria behind a match should be transparent and verifiable: which data were used in the hashing process? What was the path that led to the match? Was the accuracy rate tested by an independent and reliable study?
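By way of example, the kind of independent verification suggested above could start from something as simple as computing a matcher's accuracy separately for each demographic group in a labelled test set. A minimal sketch follows, in which the data layout and the `matcher` callable are hypothetical:

```python
from collections import defaultdict

def per_group_accuracy(test_set, matcher):
    """Compute identification accuracy per demographic group.

    test_set: iterable of (image, true_identity, group) triples;
    matcher: callable returning the predicted identity for an image.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for image, true_identity, group in test_set:
        totals[group] += 1
        if matcher(image) == true_identity:
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}
```

A large gap between groups would signal exactly the kind of disparate accuracy documented by the NIST study cited above, and it is the sort of result an independent test should report group by group rather than as a single aggregate figure.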

As for the practical use of the tool, it seems unacceptable that the criminal procedural laws of the various States already using it do not address facial recognition at all, whereas every other identification technique has been regulated since the advent of modern proceedings. It seems indeed counterintuitive that, while facial composites and identity parades are duly regulated by law,[15] facial recognition tools, which some scholars have defined as a "perpetual line-up",[16] are not even mentioned.

The law should provide for a well-defined set of checks and balances. Their necessity is openly acknowledged by the vendor companies themselves, partly in an effort to free themselves from responsibility; the law should take a position and re-balance responsibilities between the vendor and the law enforcement operator using the tool, so as to avoid pointless reciprocal blaming. An effective human countercheck should be imposed by law.

Facial recognition is a powerful yet intrusive method, and it should not be used for every crime: not every offence is serious enough to justify the risks it entails. The most serious risk is sliding into a scenario where facial recognition is considered standard practice and used indiscriminately. It is necessary to establish a threshold of seriousness for the investigated crime, expressed as a minimum statutory punishment, below which facial recognition may not be used. The Draft AI Act already does this, but the threshold it sets does not seem particularly demanding, allowing numerous crimes to fall within the list.

Lastly, the person under investigation, as well as every other person involved in the proceedings, must be informed that an AI played a role in the identification, in order to calibrate an effective defence strategy.

_____________________

[1] See European Commission, White paper on Artificial Intelligence – A European approach to excellence and trust, COM (2020) 65 final, 2020.

[2] See European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM/2021/206 final, Recital 8.

[3]  Ivi, Art. 5.

[4] Ibid.

[5] See M. Veale – F.Z. Borgesius, Demystifying the Draft EU Artificial Intelligence Act, July 2021, 8-9.

[6] See European Data Protection Board, EDPB & EDPS call for ban on use of AI for automated recognition of human features in publicly accessible spaces, and some other uses of AI that can lead to unfair discrimination, 21 June 2021.

[7] See European Parliament resolution of 6 October 2021 on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters, (2020/2016(INI)).

[8] Special Committee on Artificial Intelligence in a Digital Age, Draft report on artificial intelligence in a digital age, (2020/2266(INI)), 2 November 2021, recital 62.

[9] Ibid.

[10] See Council of European Union, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts – Presidency compromise text, 2021/0106(COD), 29 November 2021.

[11] Following a data leak, the website BuzzFeed News published, in February 2020, a list of Clearview AI's clients. Many organisations later denied any connection with the company. See R. Mac – C. Haskins – L. McDonald, Clearview's Facial Recognition App Has Been Used By The Justice Department, ICE, Macy's, Walmart, And The NBA, 2020, available at: https://www.buzzfeednews.com/article/ryanmac/clearview-ai-fbi-ice-global-law-enforcement.

[12] See P. Grother – M. Ngan – K. Hanaoka, Face Recognition Vendor Test (FRVT), 2019.

[13] Ibid.

[14] Clearview AI refers to a test conducted in 2019 by an allegedly independent review panel, which reported an accuracy rate consistent across all racial and demographic groups. See H. Lippmann – N. Cassimatis – A. Renn, Clearview AI Accuracy Test Report, October 2019, available at: https://www.documentcloud.org/documents/6772775-Clearveiw-Ai-Accuracy-Test-Oct-2019.html.

Among others, the American Civil Liberties Union (ACLU) challenges the methodology and the results of this test, as well as the panel's expertise. In 2020, the ACLU took Clearview AI to court in the State of Illinois, accusing the company of violating the privacy rights of Illinois residents under the Illinois Biometric Information Privacy Act (BIPA). See ACLU v. Clearview AI, Complaint, available at: https://www.aclu.org/legal-document/aclu-v-clearview-ai-complaint.

[15] E.g. Artt. 213 and 361 of the Italian Code of Criminal Procedure.

[16] See C. Garvie – A. Bedoya – J. Frankle, The Perpetual Line-up. Unregulated face recognition in America, Georgetown Law – Center on Privacy & Technology, 18 October 2016.
