Regulating facial recognition and other biometric technologies

August 31, 2022 by Fay

The Author

George King

This blog on regulating facial recognition is the third installment of our series on facial recognition. Don’t miss our first blog by AboutFace PI Fay Bound Alberti, about what history can teach us about technological innovation, or our second by guest author Dr Sharrona Pearl, on human facial recognition, face blindness and super-recognisers.

Our faces are unique and intimately connected to our sense of self and identity. Most of us are able to recognise a very large number of faces and take this quintessentially human ability for granted.

But this important skill is no longer limited to humans. Algorithms can do it too. Specific measurements, such as the distance between our eyes, nose, mouth, ears and so on, can be automatically captured and fed into AI systems. These systems are capable of identifying us within a database or picking us out from a crowd.
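
To make that pipeline concrete, here is a minimal sketch of the matching step, assuming faces have already been converted into numerical feature vectors (embeddings). The names, vector size and similarity threshold are invented for illustration; a real system would produce the embeddings with a trained model rather than random numbers.

```python
import numpy as np

# Hypothetical 128-dimensional face embeddings. Random vectors stand in
# for three enrolled identities; a real system would extract these
# feature vectors from face images with a trained model.
rng = np.random.default_rng(seed=0)
database = {name: rng.normal(size=128) for name in ("alice", "bob", "carol")}

def cosine_similarity(a, b):
    # Similarity of two embeddings, 1.0 meaning identical direction.
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def identify(probe, database, threshold=0.6):
    """One-to-many search: return the best-matching enrolled identity,
    or None if no stored face is similar enough."""
    best = max(database, key=lambda name: cosine_similarity(probe, database[name]))
    return best if cosine_similarity(probe, database[best]) >= threshold else None

# A probe image of "alice": her stored embedding plus a little noise.
probe = database["alice"] + rng.normal(scale=0.1, size=128)
print(identify(probe, database))  # expected: alice
```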

Biometric (‘biological measurement’) data is the term for any data derived from measuring our physical characteristics, and this includes our faces, fingerprints, walking style (gait) and tone of voice. Biometric technologies can be used to recognise and identify us, but they are also being used to categorise and make inferences about us.

These technologies were previously almost exclusively used within policing. However, they are now being used by a growing number of private and public actors, including employers, schools and retailers, not only to identify us but also to categorise us.

This raises a number of legal, ethical and societal concerns. Our human rights, such as our rights to privacy, free expression, free association and free assembly, are potentially at risk.

Discrimination and Bias

There are also issues of bias and discrimination. Some biometric technologies – particularly facial recognition – function less accurately for people with darker skin. But even if the technology could be improved to accurately match faces from all racial groups, ethical problems would persist.

Discrimination and bias can also arise from the social context of policing and surveillance. Facial recognition may be disproportionately used against marginalised communities. Shops may disproportionately add people of colour to ‘watchlists’. Simply making the tech more accurate is not enough to make it harmless or acceptable.

To disentangle these challenges and investigate potential reforms, the Ada Lovelace Institute undertook a three-year programme of public engagement, legal analysis and policy research exploring the governance needed to ensure biometrics are used with public legitimacy.

Through in-depth public engagement research, we found serious public concerns about the impact on rights and freedoms.

Negative Impact on Rights and Freedoms

We began by conducting the first nationally representative survey on UK public attitudes towards facial recognition technology, Beyond Face Value. Respondents were given a brief definition of the technology and answered questions about its use in a range of contexts, such as policing, schools, companies, supermarkets, airports and public transport.

The survey found that a majority of people (55%) want the UK Government to impose restrictions on police use of facial recognition and that nearly half the public (46%) want the right to opt out. This figure was higher for people from minority ethnic groups (56%), for whom the technology is less accurate.

The Citizens’ Biometrics Council, a demographically diverse group of 50 members of the UK public, heard from experts about how biometric technologies are used, the ethical questions they raise and the current state of regulatory oversight. After deliberating on the issues, the Council concluded that there is a need for a strong legal framework to ensure that biometrics are used in a way that is responsible, trustworthy and proportionate.

However, an independent legal review, led by Matthew Ryder QC, has found that the legal protections in place are inadequate. The review shows that existing legislation and oversight mechanisms are fragmented, unclear, ineffective and failing to keep pace.

The review was commissioned by the Institute in 2020, after the House of Commons Science and Technology Select Committee called for ‘an independent review of options for the use and retention of biometric data’.

Building on the independent legal review and our public engagement research, we published a policy report setting out a series of recommendations for policymakers to take forward. A recording of our launch event is available on our website.

Policy Recommendations

Firstly, there is an urgent need for new, primary legislation to govern the use of biometric technologies. The oversight and enforcement of this legislation should sit within a new regulatory function, specific to biometrics, which is national, independent and adequately resourced.

This regulatory function should be equipped to make two types of assessment:

  • It should assess all biometric technologies against scientific standards of accuracy, reliability and validity.
  • It should assess proportionality in context, prior to use, for technologies that are used in the public sector, in public services or in publicly accessible spaces, or that make significant decisions about a person.

Finally, we are also calling for an immediate moratorium on the use of biometric technologies for one-to-many identification in publicly accessible spaces (e.g. live facial recognition) and for categorisation in the public sector, public services and publicly accessible spaces, until comprehensive legislation is passed.

Biometric technologies impact our daily lives in powerful ways, and are proliferating without an adequate legal framework. Policymakers need to take action to prevent harms and ensure that these technologies work for people and society.

This blog on regulating facial recognition was written by George King. George is a Communications Manager at the Ada Lovelace Institute, with a focus on external relations and engagement. Prior to joining Ada, George worked at the Royal College of Psychiatrists as Communications Officer in their External Affairs team, working across press and public affairs. He has worked for a range of research-based organisations, including the Francis Crick Institute.

Connect with George King (@George_W_King) and the Ada Lovelace Institute (@AdaLovelaceInst) on Twitter.

Filed Under: biometrics, ethics, faces, facial recognition, guest blog

Facial recognition technology, history and the meanings of the face

July 27, 2022 by Fay

The Author

Fay Bound Alberti

This blog by Fay Bound Alberti was originally published on 17 February 2020 by the Foundation for Science and Technology.

Facial recognition is increasingly commonplace, yet controversial. A technology capable of identifying or verifying an individual from a digital image or video frame, it has an array of public and private uses – from smartphones to banks, airports, shopping centres and city streets. Live facial recognition will be used by the Metropolitan Police from 2020, having been trialled since 2016. And facial recognition is big business. One study published in June 2019 estimated that by 2024 the global market would generate $7 billion of revenue.

The proliferation and spread of facial recognition systems has its critics. In 2019, the House of Commons Science and Technology Committee called for a moratorium on their use in the UK until a legislative framework is introduced. The concerns raised were ethical and philosophical as much as practical.

This is the context in which a ‘Facial Recognition and Biometrics – Technology and Ethics’ discussion was convened by the Foundation for Science and Technology at the Royal Society on 29 January 2020. Discussants included Baroness Kidron OBE; Carly Kind, Director of the Ada Lovelace Institute; James Dipple-Johnstone of the Information Commissioner’s Office; Professor Carsten Maple, Professor of Cyber Systems Engineering at the University of Warwick; and Matthew Ryder QC of Matrix Chambers. Their presentations are referred to below and are available here.

Like any technology, facial recognition has advantages and disadvantages. Speedy and relatively easy to deploy, it has uses in law enforcement, health, marketing and retail. But each of these areas has distinct interests and motivations, and these are reflected in public attitudes. There is greater acceptance of facial recognition to reduce crime than when it is used to pursue profit, as discussed by Carly and Matthew.

This tension between private and public interest is but one aspect of a complex global landscape, in which the meanings and legitimacy of the state come into play. We can see this at work in China, one of the regions where the sector is growing fastest. China deploys an extensive video surveillance network of more than 600 million cameras, apparently as part of its drive towards a ‘social credit’ system that assesses the value of citizens, a plot twist reminiscent of the movie ‘Rated’ (2016), in which every adult has a visible ‘star’ rating.

This intersection between fact and fiction is relevant in other ways. Despite considerable economic and political investment in facial recognition systems, their results are variable. Compared to other biometric data – fingerprint, iris, palm vein and voice authentication – facial recognition has one of the highest false acceptance and rejection rates. It is also skewed by ethnicity and gender. A study by the US National Institute of Standards and Technology found that algorithms sold on the market misidentified members of some groups – especially women and people of colour – up to 100 times more frequently than others.
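
To make those two error rates concrete, here is a minimal sketch of how a false acceptance rate and a false rejection rate fall out of match scores at a decision threshold. The scores are invented for illustration; demographic skew shows up when these rates differ across groups at the same threshold.

```python
import numpy as np

# Invented similarity scores for illustration. "Genuine" comparisons pair
# two images of the same person; "impostor" comparisons pair different people.
genuine_scores = np.array([0.91, 0.85, 0.78, 0.64, 0.88, 0.72])
impostor_scores = np.array([0.32, 0.45, 0.61, 0.28, 0.50, 0.39])

def far_frr(genuine, impostor, threshold):
    """False acceptance rate (impostors wrongly matched) and
    false rejection rate (genuine users wrongly unmatched)."""
    far = float(np.mean(impostor >= threshold))
    frr = float(np.mean(genuine < threshold))
    return far, frr

# Raising the threshold trades false acceptances for false rejections.
for t in (0.5, 0.6, 0.7):
    far, frr = far_frr(genuine_scores, impostor_scores, t)
    print(f"threshold={t:.1f}  FAR={far:.2f}  FRR={frr:.2f}")
```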

It is unsurprising that technology betrays the same forms of bias that exist in society. As Carsten identified, we need to understand facial recognition, like other forms of biometrics, not in isolation but as part of complex systems influenced by other factors. The challenge for regulators is not only the reliability of facial recognition, but also the speed of change. It is a difficult job for those charged with regulating, like James, who has urged greater collaboration between policy-makers, innovators, the public and legislators.

From a historical perspective, these issues are not new. There is often a time lag between the speed of research innovation and the pace of ethical understanding or regulatory and policy frameworks. It is easy for perceived positive outcomes (e.g. public protection) to be framed emotively in the media, drowning out negative outcomes (e.g. the enforcement of social inequity). Ethical values also differ between people and countries, and the psychological and cultural perception of facial recognition matters.

We can learn much about the emergence, development and regulation of facial recognition systems by considering how innovative technologies have been received and implemented in the past, whether the printing press in the sixteenth century or the telephone in the nineteenth. Whatever legitimate or imagined challenges are brought by new technologies, it is impossible to uninvent them. So it is important to focus on their known and potential effects, including how they might alleviate or exacerbate systemic social problems. History shows that it is the sophistication of policy and regulatory response – that includes consulting with the public and innovators – that determines success.

Historical context is equally critical to understanding the cultural meanings of facial recognition. In the eighteenth century, the pseudoscience of physiognomy suggested that character and emotional aptitude could be detected via facial characteristics, in ways that are discomfortingly similar to the ‘emotion detection’ claims of some facial recognition systems. In the twenty-first century it has similarly and erroneously been claimed that sexuality or intelligence can be read in the face. Faces, it is presumed, tell the world who we are.

But technology is never neutral. And not all people have publicly ‘acceptable’ faces, or the faces they had at birth. Facial discrimination is a core element of the #VisibleHate campaign.

By accident or illness, surgery or time, faces have the capacity to change and transform. Sometimes this is deliberate. Facial recognition technologies can be evaded and confused – by masks, by camouflage (like CV Dazzle), by cosmetic and plastic surgery.

I work on the history of face transplants, an innovative and challenging form of surgical intervention reserved for the most severe forms of facial damage. Those undergoing face transplants do so for medical rather than social reasons, though that line can be blurred by contemporary concerns for appearance. Whether recipients’ sense of identity and self-hood is transformed by a new face is a subject of ongoing debate. Yet the capacity for radical transformation of the face exists.

Facial recognition technology not only raises questions about the ethical, legal, practical and emotional use of biometric evidence, but also presumes the face is a constant, individual unit of identity. What happens, on an individual and a social level, if that is not the case?

Author Bio

Prof Fay Bound Alberti, Professor of Modern History, UKRI Future Leaders Fellow and Director of Interface

Filed Under: biometrics, facial recognition, history
