
Artificial Intelligence and Facial Discrimination

October 6, 2022 by Fay


The Author

Phyllida Swift

This blog on artificial intelligence and facial discrimination is the fourth and final installment of our series on facial recognition. Don’t miss our first blog by AboutFace PI Fay Bound Alberti, about what history can teach us about technological innovation, our second by guest author Dr Sharrona Pearl, on human facial recognition, face blindness and super-recognisers, or our third by George King at the Ada Lovelace Institute, on regulating facial recognition technologies.


Over the past couple of years, here at Face Equality International we have received increasing numbers of requests from academics, policymakers, government bodies and businesses to input into commentary and research on artificial intelligence, in particular the ethical considerations around the effect of AI technologies on the facial difference community. The most obvious technology of concern is facial recognition, with its potential for bias, exclusion and censorship. All of these issues have a growing evidence base, yet there is little progress on, or acknowledgement of, that evidence from technology companies, regulators, or businesses adopting AI into their practice.

At Face Equality International (FEI), we campaign as an Alliance of global organisations to end the discrimination and indignity experienced by people with facial disfigurements (FD) around the globe. We do this by positioning face equality as a social justice issue, rather than simply a health issue, which is all too often the case.

For any equality organisation, the public dialogue on how AI has been shown to replicate and reinforce human bias against marginalised groups is deeply concerning. Granted, it is reassuring to see increased recognition of these risks in society, but social justice movements rightly fear that generations of advancement could be undone at the hands of unregulated AI.

As it stands, AI is largely unregulated. A regulatory framework is in development for Europe, but ‘the second half of 2024 is the earliest time the regulation could become applicable to operators with the standards ready and the first conformity assessments carried out.’

Back in March, I was invited to share a statement at an event attached to the United Nations Human Rights Council led by Gerard Quinn, the UN Special Rapporteur on the Rights of Persons with Disabilities. This came off the back of a thematic report into the impact of AI on the disabled community. The themes in this blog follow similar lines to that statement, in less formal terms.

AI and the disabled community

It’s unsurprising that the most apparent AI-related threat to us is facial recognition software. For an already marginalised and mistreated community, AI poses the threat of further degrading treatment. For instance, we already see constant abuse and hate speech on social media, where people with facial differences are referred to as ‘sub-human’, ‘monster’, or ‘that thing’. Yet algorithms often fail to recognise such slurs as derogatory to the facial difference (FD) community, a community that should be protected under disability policies.

Social media also poses the problem of censorship through AI: on several occasions we have seen photos of people with disfigurements blurred out and marked as ‘sensitive’, ‘violent’ or ‘graphic’ content. When reported, platforms and their human moderators are still failing to remove these warnings.

There is growing evidence of the extent of the harm caused by AI software in disadvantaging certain groups, such as when Google Photos grouped a photo series of black people into a folder titled ‘gorillas’. Several FD community members have reported having their photos blurred out and marked as sensitive, graphic or violent on social media, effectively censoring the facial difference community and inhibiting their freedom of expression to post photos of their faces online.

We know from research that many people make assumptions about someone’s character and ability based on the way they look. A study in America by Rankin and Borah found that photos of people with disfigurements were rated as significantly ‘less honest, less employable, less intelligent, less trustworthy’ – the list goes on – when compared to the same photos with the disfigurement removed.


AI, Dehumanisation, and Negative Bias

Sadly, we’re seeing these assumptions play out in AI-led hiring practices too, where language choice, facial expression and even clothing have been shown to disadvantage candidates by negatively affecting their scores. In a notorious Amazon example, a machine had taught itself to search for candidates using particular word choices to describe themselves and their activities, which ended up favouring male candidates, who more commonly used those words. How can we expect someone with facial palsy, for example, to pass tests based on ‘positive’ facial expressions?

We have heard several cases of passport gates failing to work for people with facial disfigurements, and the same goes for applying for passports and ID online. Essentially, this is because the various software tests required to submit photos are not recognising people’s faces as human faces when they are put through. For an already all too often dehumanised community, this is simply not good enough.

Non-recognition of people with disfigurements was recorded by the World Bank, which found that a man with Down’s Syndrome was denied a photo ID card because the technology failed to recognise his non-standard face as a face. The same was apparent for people with albinism.

There are often alternative routes to verify identity outside of facial recognition, for instance when problems arise with smartphone apps which rely on facial recognition to access bank accounts or similar services. Systems which ask the user to perform an action – such as blinking – can cause difficulties for people with some conditions, such as Moebius Syndrome or scarring. Some apps offer an alternative route for people unable to use the automatic system, but this goes against the principle of inclusive design and may be more cumbersome for people with facial differences. As is often talked about in disability spaces, the additional admin required of someone with a disability or disfigurement can take an emotional toll. Self-advocacy of this kind can be a life-long occupation.

Ethical AI?

So the problem for us is not necessarily in proving that there is a glitch in the system; it lies in making ourselves known to the technological gatekeepers, those with the power to turn the tide on this ever-evolving issue, whilst building coalitions with fellow organisations pushing for ethical AI, such as Disability Ethical? AI.

Princeton University computer science professor Olga Russakovsky has said: “A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities. We’re a fairly homogeneous population, so it’s a challenge to think broadly about world issues.”

What’s interesting to note is that when we have asked our communities – through polls and forms promoted across social media and via our membership – to relay their concerns about the growing use of AI across every aspect of society, the response has been rather limited. There is usually a consistent dialogue between us and our online communities when discussing issues that affect the FD community, but when it comes to AI there has been far less of a response.

A ‘Transparency Void’

After further investigation, our team believes this could be for a number of reasons. Firstly, AI is so broad a technological term that it conjures up distant, futuristic notions of robots driving our cars and taking over the planet – which is very much what I thought of when this topic first landed on my own desk.

The second potential reason could be what we’ve started to refer to in our commentary as a ‘transparency void’: it is far less obvious when a machine is creating barriers, bias or discrimination against an individual on the grounds that they are facially diverse than it is when a human gives away cues in their language, eye contact and behaviour. In a recent Advisory Council meeting, a member spoke of the frustration of trying to navigate automated phone lines with set questions when your facial difference also affects speech. How does one get through to an actual human when there is no option to pass certain automated tests?

AI discrimination will continue to place the burden on the victim of the discrimination to challenge the decision, rather than on the (often well-resourced) entity using the technology. Existing research shows that the number of cases brought in relation to breaches of employment law legislation is just a tiny fraction of those which occur, so this is not an effective enforcement mechanism.

A Rapidly Escalating Issue

This is perhaps the most insidious threat to furthering the face equality movement. Who do we hold accountable when AI discriminates based on facial appearance? We know for sure that it is already happening, and therein lies another fear for us at FEI: many members of the FD community will already be experiencing disadvantages at the hands of AI without realising it, and without comprehending how quickly this issue is escalating, as AI is used in recruitment, security, identification, policing, insurance and financial assessments, and across our online spaces. These are not emerging technologies; AI is already here with us in force, and it is growing exponentially.

It seems the crux of the issue lies in narrow data sets. In simple terms, the faces that AI is used to seeing are only certain types of faces: ‘normative’, non-diverse, non-facially different faces.

We at FEI want to get to the source of the problem and prevent further damage. It is our understanding, as a social justice organisation rather than a tech company, that the best way to do this is to lend ourselves to meaningful, robust and ethical consultation and involvement of our community. Whether it’s a question of supporting companies to widen the pool of faces and diversify their data sets, or of continuing to feed into research and policy consultation, we are committed to making our cause, and the people we aim to serve, known to the companies that so often ignore them.

Author Bio

Phyllida Swift

Phyllida is CEO at Face Equality International. Phyllida was involved in a car accident in Ghana in 2015, sustaining facial scarring. She then set out to reshape the narrative around scars and facial differences in the public eye, championing positive, holistic representation that didn’t sensationalise or further ‘other’ the facial difference community. She started out by sharing her story as a media volunteer for Changing Faces, before taking on a role as Campaigns Officer, and later Manager. During that time, she led the award-winning, Home Office-funded disfigurement hate crime campaign, along with working on multiple Face Equality Days, ‘Portrait Positive’ and ‘I Am Not Your Villain’. She shared her own experiences of how societal attitudes and poor media representation affected her as a young woman with facial scarring in her TEDx talk in 2018. Phyllida sits on the AboutFace Lived Experience Advisory Panel (LEAP).

Further reading

view all

March 10, 2023 | 4 MIN READ

The making of a blueprint. How historical, qualitative research should inform face transplant policy and practice.

January 23, 2023 | 4 MIN READ

Before and After? What the humanities bring to medical images

January 23, 2023 | 4 MIN READ

Diminishing their Voices: Face Transplants, Patients, and Social Media

January 23, 2023 | 4 MIN READ

Robert Chelsea and the First African American Face Transplant: Two Years On

January 23, 2023 | 4 MIN READ

History has Many Faces: researching histories of facial surgery

January 23, 2023 | 4 MIN READ

When face transplants fail: Carmen Tarleton and the world’s second retransplant

January 23, 2023 | 4 MIN READ

Drag Face: exploring my identity through masculine performance

January 23, 2023 | 4 MIN READ

Future Faces

January 23, 2023 | 4 MIN READ

Reflecting on Reflections

January 23, 2023 | 4 MIN READ

Owning My Face

January 27, 2023 | 4 MIN READ

Portrait of an Angry Man – or not?

January 23, 2023 | 4 MIN READ

Picturing Death: Dealing with Post-Mortem Images

Filed Under: facial recognition, guest blog, Visible Facial Difference

Regulating facial recognition and other biometric technologies

August 31, 2022 by Fay


The Author

George King

This blog on regulating facial recognition is the third installment of our series on facial recognition. Don’t miss our first blog by AboutFace PI Fay Bound Alberti, about what history can teach us about technological innovation, or our second by guest author Dr Sharrona Pearl, on human facial recognition, face blindness and super-recognisers.



Our faces are unique and intimately connected to our sense of self and identity. Most of us are able to recognise a very large number of faces and take this quintessentially human ability for granted.

But this important skill is no longer limited to humans. Algorithms can do it too. Specific measurements, such as the distance between our eyes, nose, mouth, ears and so on, can be automatically captured and fed into AI systems. These systems are capable of identifying us within a database or picking us out from a crowd.
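The matching step described above can be sketched in a few lines. This is a rough, purely illustrative model – not any particular vendor’s system – in which a face has already been reduced to a short numeric feature vector (an ‘embedding’); the names, vectors and threshold below are all invented:

```python
import math

# Hypothetical sketch: a face is reduced to a short feature vector
# ("embedding"). Real systems derive these with deep networks from
# measurements such as distances between facial landmarks; the
# numbers and names here are invented purely for illustration.

def euclidean(a, b):
    """Distance between two embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database, threshold=0.6):
    """Return the closest enrolled identity, or None if nothing in
    the database is within the match threshold."""
    best_name, best_dist = None, float("inf")
    for name, embedding in database.items():
        d = euclidean(probe, embedding)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

enrolled = {
    "alice": [0.11, 0.52, 0.33],
    "bob":   [0.80, 0.10, 0.45],
}

print(identify([0.12, 0.50, 0.31], enrolled))  # a close vector matches "alice"
print(identify([0.99, 0.99, 0.99], enrolled))  # too far from everyone: None
```

The threshold is where the trade-offs discussed in this series live: loosen it and strangers are wrongly matched; tighten it and genuine users – or faces the training data under-represents – fail to match at all.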

Biometric (‘biological measurement’) data is the term for any data derived from measuring our physical characteristics, and this includes our faces, fingerprints, walking style (gait) and tone of voice. Biometric technologies can be used to recognise and identify us, but they are also being used to categorise and make inferences about us.

These technologies were previously almost exclusively used within policing. However, they are now being used by a growing number of private and public actors, including employers, schools and retailers to identify but also to categorise.

This raises a number of legal, ethical and societal concerns. Our human rights, such as our rights to privacy, free expression, free association and free assembly, are potentially at risk.

Discrimination and Bias

There are also issues of bias and discrimination. Some biometric technologies – particularly facial recognition – function less accurately for people with darker skin. But even if the technology could be improved to accurately match faces from all racial groups, ethical problems would persist.

Discrimination and bias can also arise from the social context of policing and surveillance. Facial recognition may be disproportionately used against marginalised communities. Shops may disproportionately add people of colour to ‘watchlists’. Simply making the tech more accurate is not enough to make it harmless or acceptable.

To disentangle these challenges and investigate potential reforms, the Ada Lovelace Institute undertook a three-year programme of public engagement, legal analysis and policy research exploring the governance needed to ensure biometrics are used with public legitimacy.

Through in-depth public engagement research, we found serious public concerns about the impact on rights and freedoms.

Negative Impact on Rights and Freedoms

We began by conducting the first nationally representative survey on UK public attitudes towards facial recognition technology, Beyond Face Value. Respondents were given a brief definition of the technology and answered questions about its use in a range of contexts, such as policing, schools, companies, supermarkets, airports and public transport.

The survey found that a majority of people (55%) want the UK Government to impose restrictions on police use of facial recognition and that nearly half the public (46%) want the right to opt out. This figure was higher for people from minority ethnic groups (56%), for whom the technology is less accurate.

The Citizens’ Biometrics Council, a demographically diverse group of 50 members of the UK public, heard from experts about how biometric technologies are used, the ethical questions they raise and the current state of regulatory oversight. After deliberating on the issues, the Council concluded that a strong legal framework is needed to ensure that biometrics are used in a way that is responsible, trustworthy and proportionate.

However, an independent legal review, led by Matthew Ryder QC, has found that the legal protections in place are inadequate. The review shows that existing legislation and oversight mechanisms are fragmented, unclear, ineffective and failing to keep pace.

The review was commissioned by the Institute in 2020, after the House of Commons Science and Technology Select Committee called for ‘an independent review of options for the use and retention of biometric data’.

Building on the independent legal review and our public engagement research, we published a policy report setting out a series of recommendations for policymakers to take forward. A recording of our launch event is available on our website.

Policy Recommendations

Firstly, there is an urgent need for new, primary legislation to govern the use of biometric technologies. The oversight and enforcement of this legislation should sit within a new regulatory function, specific to biometrics, which is national, independent and adequately resourced.

This regulatory function should be equipped to make two types of assessment:

  • It should assess all biometric technologies against scientific standards of accuracy, reliability and validity.
  • It should assess proportionality in context, prior to use, for those that are used in the public sector, public services and publicly accessible spaces, or those that make significant decisions about a person.

Finally, we are also calling for an immediate moratorium on the use of biometric technologies for one-to-many identification in publicly accessible spaces (e.g. live facial recognition) and for categorisation in the public sector, public services and publicly accessible spaces, until comprehensive legislation is passed.

Biometric technologies impact our daily lives in powerful ways, and are proliferating without an adequate legal framework. Policymakers need to take action to prevent harms and ensure that these technologies work for people and society.

This blog on regulating facial recognition was written by George King. George is a Communications Manager at the Ada Lovelace Institute, with a focus on external relations and engagement. Prior to joining Ada, George worked at the Royal College of Psychiatrists as Communications Officer in their External Affairs team, working across press and public affairs. He has worked for a range of research-based organisations, including the Francis Crick Institute.

Connect with George King (@George_W_King) and the Ada Lovelace Institute (@AdaLovelaceInst) on Twitter.


Filed Under: biometrics, ethics, faces, facial recognition, guest blog

Facial recognition technology, history and the meanings of the face

July 27, 2022 by Fay


The Author

Fay Bound Alberti


This blog by Fay Bound Alberti was originally published on 17 February 2020 by Foundation for Science and Technology.

Facial recognition is increasingly commonplace, yet controversial. A technology capable of identifying or verifying an individual from a digital image or video frame, it has an array of public and private uses – from smartphones to banks, airports, shopping centres and city streets. Live facial recognition will be used by the Metropolitan Police from 2020, having been trialled since 2016. And facial recognition is big business. One study published in June 2019 estimated that by 2024 the global market would generate $7 billion of revenue.

The proliferation and spread of facial recognition systems has its critics. In 2019, the House of Commons Science and Technology Committee called for a moratorium on their use in the UK until a legislative framework is introduced. The concerns raised were ethical and philosophical as much as practical.

This is the context in which a ‘Facial Recognition and Biometrics – Technology and Ethics’ discussion was convened by the Foundation for Science and Technology at the Royal Society on 29 January 2020. Discussants included the Baroness Kidron, OBE, Carly Kind, Director of the Ada Lovelace Institute, James Dipple-Johnstone, Information Commissioner’s Office, Professor Carsten Maple, Professor of Cyber Systems Engineering at the University of Warwick, and Matthew Ryder QC, Matrix Chambers. Their presentations are referred to below, and available here.

Like any technology, facial recognition has advantages and disadvantages. Speedy and relatively easy to deploy, it has uses in law enforcement, health, marketing and retail. But each of these areas has distinct interests and motivations, and these are reflected in public attitudes. There is greater acceptance of facial recognition to reduce crime than when it is used to pursue profit, as discussed by Carly and Matthew.

This tension between private and public interest is but one aspect of a complex global landscape, in which the meanings and legitimacy of the state come into play. We can see this at work in China, one of the global regions with the fastest growth in the sector. China deploys an extensive video surveillance network of more than 600 million cameras. This is apparently part of its drive towards a ‘social credit’ system that assesses the value of citizens, a plot twist reminiscent of the movie ‘Rated’ (2016), in which every adult has a visible ‘star’ rating.

This intersection between fact and fiction is relevant in other ways. Despite considerable economic and political investment in facial recognition systems, their results are variable. Compared to other biometric data – fingerprint, iris, palm vein and voice authentication – facial recognition has one of the highest false acceptance and rejection rates. It is also skewed by ethnicity and gender. A study by the US National Institute of Standards and Technology found that algorithms sold on the market misidentified members of some groups – especially women and people of colour – up to 100 times more frequently than others.
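False acceptance and rejection rates are the standard error metrics for biometric systems: the share of impostors wrongly matched, and of genuine users wrongly turned away. A minimal, purely illustrative sketch of the arithmetic (all counts invented, not real benchmark results):

```python
# Purely illustrative: false acceptance rate (FAR) and false rejection
# rate (FRR) computed from labelled comparison outcomes. All counts
# below are invented example numbers, not real benchmark results.

def far_frr(impostor_accepted, impostor_total, genuine_rejected, genuine_total):
    far = impostor_accepted / impostor_total    # impostors wrongly accepted
    frr = genuine_rejected / genuine_total      # genuine users wrongly rejected
    return far, frr

far, frr = far_frr(impostor_accepted=3, impostor_total=1000,
                   genuine_rejected=25, genuine_total=1000)
print(f"FAR={far:.1%}, FRR={frr:.1%}")  # FAR=0.3%, FRR=2.5%
```

The demographic skew the NIST study describes means these rates are not uniform: a system can report a low overall error rate while the rate for particular groups is many times higher, which is why disaggregated evaluation matters.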

It is unsurprising that technology betrays the same forms of bias that exist in society. As Carsten identified, we need to understand facial recognition, like other forms of biometrics, not in isolation but as part of complex systems influenced by other factors. The challenge for regulators is not only the reliability of facial recognition, but also the speed of change. This makes for a difficult job for those charged with regulation, like James, who has urged greater collaboration between policy-makers, innovators, the public and the legislators.

From a historical perspective, these issues are not new. There is often a time lag between the speed of research innovation and the pace of ethical understandings or regulatory and policy frameworks. It is easy for perceived positive outcomes (e.g. public protection) to be framed emotively in the media while drowning out negative outcomes (e.g. the enforcement of social inequity). Ethical values also differ between people and countries, and the psychological and cultural perception of facial recognition matters.

We can learn much about the emergence, development and regulation of facial recognition systems by considering how innovative technologies have been received and implemented in the past, whether the printing press in the sixteenth century or the telephone in the nineteenth. Whatever legitimate or imagined challenges are brought by new technologies, it is impossible to uninvent them. So it is important to focus on their known and potential effects, including how they might alleviate or exacerbate systemic social problems. History shows that it is the sophistication of policy and regulatory response – that includes consulting with the public and innovators – that determines success.

Historical context is equally critical to understanding the cultural meanings of facial recognition. In the 18th century, the pseudoscience physiognomy suggested that character and emotional aptitude could be detected via facial characteristics, in ways that are discomfortingly similar to the ‘emotion detection’ claims of some facial recognition systems. In the 21st century it has similarly and erroneously been claimed that sexuality or intelligence could be read in the face. Faces, it is presumed, tell the world who we are.

But technology is never neutral. And not all people have publicly ‘acceptable’ faces, or the faces they had at birth. Facial discrimination is a core element of the #VisibleHate campaign.

By accident or illness, surgery or time, faces have the capacity to change and transform. Sometimes this is deliberate. Facial recognition technologies can be occluded and confused – by masks, by camouflage (like CV Dazzle), by cosmetic and plastic surgery.

I work on the history of face transplants, an innovative and challenging form of surgical intervention reserved for the most severe forms of facial damage. Those undergoing face transplants do so for medical rather than social reasons, though that line can be blurred by contemporary concerns for appearance. Whether recipients’ sense of identity and self-hood is transformed by a new face is a subject of ongoing debate. Yet the capacity for radical transformation of the face exists.

Facial recognition technology not only raises questions about the ethical, legal, practical and emotional use of biometric evidence, but also presumes the face is a constant, individual unit of identity. What happens, on an individual and a social level, if that is not the case?

Author Bio


Prof Fay Bound Alberti, Professor of Modern History, UKRI Future Leaders Fellow and Director of Interface


Filed Under: biometrics, facial recognition, history

Facial Recognition: From Face Blindness to Super Recognisers

July 27, 2022 by Fay


The Author

Sharrona Pearl

This blog is part of our series on facial recognition. Check out our first blog on this theme, by PI Fay Bound Alberti, on facial recognition software, history, and the meanings of the face, and what we can learn about this technological innovation by looking at the past. Today’s blog is written by Sharrona Pearl, and explores the scale of human face recognition, from face blindness to super recognisers.

Facial Recognition: From Face Blindness to Super Recognisers

Face recognition is a wonderful and complicated neurological process. We are learning more about it every day. But it’s also a deeply cultural and social and emotional and human one. Faces are, as I’ve argued in my books and articles, a key part of how we make sense of others, build relationships, communicate, make judgements. Recognizing faces helps with interpreting emotion. It tells us something about where people are looking and what they might be looking at. All this gives us cues about how to interact with others and our surroundings. Recognizing faces can help us recognize social cues about how to act and what to do in a given situation, and with a given person. We spend quite some time on how we present our own faces, and we imagine we know all sorts of things about others based on their faces. The face is both a thing and a collection of things and feelings and ideas. It has a history and that history is changing. Face recognition, and the invention of the face recognition spectrum, is part of that history. The naming that emerged with these categories of “face blindness” and “super recognition” helped people understand something about themselves and how they make sense of others. They gave name to experience, and in so doing, created new kinds of experiences. This is true of all categories, but there is special resonance to the face. Because our faces are, or at least we think they are, who we are.

Face Recognition

A lot of things have to happen in the brain for us to recognize faces. It’s actually less extraordinary that some people can’t do it than that so many of us can. People, in general, are pretty good at recognizing faces; a face may often seem familiar even when we can’t remember names or context. For some people, it just doesn’t happen as well, and for others, it doesn’t happen at all. And it’s one of those things where it seems impossible to understand how it works in others. As a person who recognises faces pretty well, face blindness just doesn’t make sense to me. There are all kinds of metaphors and explanations and attempts: imagine you were shown a picture with a pile of lego of different lengths and colours. Imagine the picture is then taken away. You would certainly know and recall that it contained lego, and maybe even some broad features of colour and shape. But you are unlikely to remember precisely the order and configuration of each piece. That, maybe, is what face blindness is like: people can remember that they saw a face, with eyes and a nose and a mouth. But which eyes, what nose, whose mouth disappear as soon as the face is gone. Voices, to those with an ear or who have developed this adaptation, may offer consistent clues. Gait is often ingrained and can be linked to a particular person. Hairstyle and shape, distinctive piercings and moles and tattoos and glasses all contain lasting resonance. Face blind people rely heavily on such markers. But many of these markers can change, sometimes with no notice. That leaves face blind people without reliable ways of recognizing others; a change of hairstyle or clothing may mean that someone who was identifiable in the morning becomes impossible to distinguish in the afternoon, no matter how hard they look.

For about 1-2% of the population, no amount of staring at a face will help.  No amount of training will help.  Colour blind people cannot be taught to see colour.  Face blind people cannot be trained to see faces.  Profoundly face blind people simply will not recall the features of a face.  Others can do it slightly better and so on and so on, with the bulk of the population mostly able to mostly recognize most faces.  Relationships help.  Repeated exposure helps.  Paying attention – for most people – helps.  Sharing a powerful moment helps.  And, while memory and face recognition are broadly unrelated at the extremes, memory, taken in conjunction with everything else, helps.  The top 1-2% of the population can do it better than anyone else.  And everyone else can’t be trained to do that either.  How odd it would be to condemn a colour blind person for not being able to distinguish red and blue.  And how odd it would be to castigate someone for failing to recognize a face, or, indeed, for recognizing only some faces rather than essentially all of them. 

Face Blindness and Super Recognition

When face blindness was first described, scientists thought that it was a pathology, a disability that some had to a greater or lesser degree. And everyone else could just recognize faces more or less equally well. That changed pretty dramatically when Harvard post-doctoral fellow Richard Russell and his team clinically identified super recognition in 2009. If there is a bottom 1-2%, they theorized, there is likely a top 1-2%. Those are the supers. And while they aren't perfect at face recognition, whatever that means, they perform at the top end of the scale compared to the rest of the tested world. Which means that it's still a pretty big bucket, and that the top 1-2% of the top 1-2% are going to look significantly different from even other super recognizers. As I discuss throughout my forthcoming book, supers are extremely good at a wide variety of face recognition skills: matching images to people or other images; aging people over time; and recalling people in different contexts and identifying them as the same. While super recognizers do not have a photographic memory for faces, and they actually do sometimes forget a face, they do so much less often than everyone else. Really, super recognizers recall faces independent of the depth of the interaction or relationship they have had with someone. Most of us recognize the faces of those we know and love better than anyone else. We are more likely to recall those with whom we have shared a powerful experience. For supers, even a brief or fleeting interaction with someone, or no real interaction at all, is often enough. That doesn't mean they recognize all faces always: it does mean they recall most faces better than others do. Even those they have encountered only briefly, and without meaningful exchanges or relationships.

Diagnosis or Explanation?

Facial recognition does not exist in a vacuum. We all recognize faces differently in different contexts and with different cues, and that holds even for those who can't do it at all. In some ways, actually, it's easier now to get by without recognizing faces, especially if you spend a lot of your time interacting with others online. (As I write this, we are two years into a global pandemic in which faces are often masked, and perhaps universally unrecognizable, in public, and most encounters with others are on digital platforms that provide names below the faces. As most of the world got Zoomed out, a small group of people quietly celebrated the ability to always know to whom they were talking. For face blind people, Zoom was an unlooked-for, unasked-for gift that gave the elusive possibility of recognition.) Recognizing people has never been harder and never been easier. But also: there are a lot more faces now. And we encounter them a lot more, both in person and through media.

We can know people don't recognize faces from their testimony. But, and this is important, there are a lot of ways to be bad at recognizing faces. So: is face blindness a prescriptive term or a descriptive one? Is it a diagnosis or an explanation? The answer is, of course, yes: it is all these things. Face blindness, or prosopagnosia, is a very specific term generated in a specific moment in history, with specific reasons, narratives, and causes. It has to do with specific stuff in a specific part of the brain. And maybe some of those people in the past had that stuff in their brains, and maybe not. It's interesting, of course, to speculate as to whether some of these case studies with associated recognition challenges were actually examples of face blindness. Many people have made precisely those speculations. Others refuse to.

I, as ever, say yes and. 

Author Bio

Sharrona Pearl

Sharrona Pearl is Associate Professor of Medical Ethics and History at Drexel University. A historian and theorist of the face and body, Pearl has published widely on Victorian history of medicine, media and religion, and critical race, gender, and disability studies. Her current book, from which this material is drawn, is on the face recognition spectrum, from face blindness to super recognition, and is forthcoming from Johns Hopkins University Press. This book is the third in her face trilogy, following Face/On: Face Transplants and the Ethics of the Other and About Faces: Physiognomy in Nineteenth-Century Britain. She is currently writing a book on "The Mask" under contract with Bloomsbury Academic. Pearl maintains an active freelance profile, with bylines in a variety of newspapers and magazines including The Washington Post, Lilith, and Real Life Magazine. Say hi on Twitter @sharronapearl.

Further reading

  • The making of a blueprint. How historical, qualitative research should inform face transplant policy and practice.
  • Before and After? What the humanities bring to medical images
  • Diminishing their Voices: Face Transplants, Patients, and Social Media
  • Robert Chelsea and the First African American Face Transplant: Two Years On
  • History has Many Faces: researching histories of facial surgery
  • When face transplants fail: Carmen Tarleton and the world’s second retransplant
  • Drag Face: exploring my identity through masculine performance
  • Future Faces
  • Reflecting on Reflections
  • Owning My Face
  • Portrait of an Angry Man – or not?
  • Picturing Death: Dealing with Post-Mortem Images

