Divergent Approaches to Regulating Live Facial Recognition
Dr. Asress Adimi Gikay, a Senior Lecturer in AI, Disruptive Innovation, and Law at Brunel University London, explores the contrasting approaches to regulating live facial recognition (LFR) in the UK and the European Union.
Recent suggestions by UK officials to use facial recognition to search national passport databases to combat shoplifting have sparked debate, particularly as London, a city with a high concentration of CCTV cameras, grapples with AI safety concerns. While UK police have been using LFR for several years, with reported successes in apprehending suspects, the European Parliament's version of the proposed EU AI Act sought to ban the technology outright, though recent negotiations suggest a potential compromise.
Dr. Gikay argues that the EU should consider an incremental approach to regulating LFR, similar to the UK model, which allows for adjustments based on evidence of actual harm. This approach, as detailed in Dr. Gikay’s forthcoming article in the Cambridge Law Journal, emphasizes evidence-based regulation, taking into account public perception, the technology’s benefits, and the efficacy of existing safeguards.
Why the EU’s Approach Should Be Incremental
The incrementalist approach advocates for regulating LFR and similar technologies by progressively modifying existing legal frameworks in response to demonstrable risks and harms, rather than relying solely on theoretical assessments.
Dr. Gikay’s proposed theory, as elaborated in the aforementioned article, emphasizes four key components: sectoralism, reliance on existing legal structures, evidence-based regulation, and flexibility. This post specifically focuses on the importance of evidence-based regulation, drawing on the UK’s experience with LFR.
Evidence-Based Regulation
The EU’s stance on LFR appears to lack a comprehensive evaluation of the technology’s potential benefits and risks, public opinion, and the capacity of law enforcement to use it responsibly. The UK’s experience offers valuable insights in this regard.
Public Support and Technological Advantages
Surveys in the UK reveal considerable public support for police use of facial recognition technology, particularly in criminal investigations and public safety efforts. This aligns with reported instances where LFR has aided in apprehending suspects, including those involved in violent crimes and retail offenses, demonstrating its practical value.
Concerns persist about the accuracy of facial recognition systems, particularly potential bias against certain demographic groups. However, studies of the systems used by UK police have found statistically insignificant differences in accuracy across demographic groups. Importantly, existing legal frameworks and safeguards in the UK mitigate the risk of harm stemming from any inaccuracies.
Safe and Proportionate Application
Despite concerns about potential misuse, there have been no reported cases of serious harm arising from police deployment of LFR in the UK. This contrasts with the US, where wrongful arrests highlight pitfalls linked primarily to police misconduct rather than to inherent flaws in the technology. Notably, several US states are now revising their initial bans on facial recognition, opting for regulated use instead.
The UK’s robust legal framework, including the Human Rights Act, the Equality Act, and data protection legislation, ensures the responsible use of LFR. Mandatory equality impact assessments, compliance with the privacy requirements of the European Convention on Human Rights, and strict data retention policies help mitigate potential risks.
Furthermore, the UK police operate under a national code of practice that outlines detailed procedures for deploying LFR, emphasizing proportionality and prohibiting its use in sensitive locations like hospitals and schools.
While concerns about privacy intrusion and surveillance expansion are understandable, these are addressed through legal limitations on the duration, purpose, and context of LFR deployment in the UK. Safeguards are in place to prevent arbitrary surveillance and ensure the protection of individual rights.
Despite criticisms, the UK legal framework provides a basis for the legitimate use of facial recognition technology by law enforcement, with avenues for redress in the event of wrongful actions. Existing legal mechanisms permit the gathering of information for crime prevention, including through new technologies, and allow police to be held accountable for any harm caused by the misapplication of LFR.
A Need for the EU to Reconsider
The European Commission’s initial draft of the AI Act proposed a more permissive approach to LFR for law enforcement, allowing its use in specific circumstances with strict safeguards. However, the subsequent push for a complete ban, while well-intentioned, appears to be driven more by fear-mongering than by concrete evidence.
The UK’s experience demonstrates that a balanced approach, as opposed to an outright ban or overly restrictive measures, can allow for the beneficial use of LFR while addressing potential risks.
Towards Measured Regulation
While the UK system might have areas for improvement, such as clearer guidelines on the types of crimes warranting LFR deployment, a complete overhaul is unnecessary. Instead, an incremental approach allows for targeted adjustments based on empirical evidence of harm.
The EU’s current trajectory risks depriving society of the potential benefits of LFR based on speculative fears rather than concrete evidence. A measured approach, learning from the UK’s experience and prioritizing evidence-based regulation, offers a more balanced and ultimately more effective path forward.