
By Leo Hynett

Technology

Facial Recognition Tech: The Death of Anonymity

The social repercussions of facial recognition are being felt around the world as use of the technology spreads.

May 11, 2021


The use of facial recognition technology (FRT) is becoming common around the world. It has many applications, from securing mobile devices and authenticating payments to helping police locate people on wanted lists.

With FRT available as a convenient way to unlock mobile devices, facial recognition is something many people choose to use daily. Much of the debate, however, concerns uses of FRT that people have not consented to. There are concerns that, if people can be identified wherever they go, they may not feel comfortable attending a protest or a place of worship.

Under observation

Facial recognition systems use computer algorithms to pick out distinctive facial details and convert them into representative data, which can then be compared against existing records. The comparison is made either against a single stored image of a user (a one-to-one match, the kind used to unlock smartphones) or against a whole database (a one-to-many match, such as searching for a face in a police database). Many of these systems were improved during the pandemic and now work better at identifying faces obscured by masks.
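As a rough sketch of how those two modes work (a minimal illustration, not any vendor's actual pipeline), both reduce to comparing numeric ‘embeddings’ of faces. In the Python below, the embeddings are assumed to come from a trained feature extractor, which is not shown, and the 0.8 threshold is an arbitrary illustrative value:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 means the two face embeddings point in the same direction
    # (very similar faces); values near 0 mean little resemblance.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def one_to_one_match(probe: np.ndarray, enrolled: np.ndarray,
                     threshold: float = 0.8) -> bool:
    # Smartphone-unlock style: compare the live face to one enrolled face.
    return cosine_similarity(probe, enrolled) >= threshold

def one_to_many_search(probe: np.ndarray, gallery: dict,
                       threshold: float = 0.8) -> list:
    # Police-database style: compare the live face against every face on
    # file and return the IDs of everyone who clears the threshold.
    return [person_id for person_id, embedding in gallery.items()
            if cosine_similarity(probe, embedding) >= threshold]
```

Where that threshold sits is the key design decision: lower it and more true matches are found, but more innocent lookalikes are flagged too.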

With campaigners even fundraising to stop FRT in the UK, it is clear that many members of the public are worried about the unchecked rise of this surveillance technology. Misidentification is a frequent concern, but so is the technology working exactly as intended:

‘Even if the technology is accurate, it allows governments to monitor people’s habits and movements, creating potential chilling effects on freedoms of expression, association, and assembly. It can also be used to single out individuals in discriminatory or arbitrary ways, including for their ethnicity or religion.’


Flaws in the technology

The recent documentary Coded Bias follows MIT Media Lab researcher Joy Buolamwini’s work challenging the biases that exist within our algorithms, an issue she first encountered when one of her own designs struggled to recognise her face until she put on a white mask.

Facial recognition software is best at recognising white male faces and worst at recognising women of colour. This is largely down to the datasets the algorithms are trained on: algorithms created in Asia, for example, performed better at identifying Asian faces. While this is only the beginning of challenging this coded bias, ‘these results are an encouraging sign that more diverse training data may produce more equitable outcomes, should it be possible for developers to use such data.’

There are many concerns about the impact of these recognition biases, especially in police use. Researchers for the National Institute of Standards and Technology found that ‘algorithms falsely identified African-American and Asian faces 10 to 100 times more than Caucasian faces’. Increasing the number of faces an algorithm is trained on can improve accuracy, but only up to a point, and a larger database to search against can be actively counter-productive: the more faces it contains, the more likely it becomes that two people look similar enough for the algorithm to make a false-positive identification.
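A back-of-the-envelope calculation shows why database size matters. If each individual comparison has some small false-match probability p, and the comparisons are treated as independent (a simplifying assumption that real systems only approximate), the chance of at least one false positive grows rapidly with the number of faces searched:

```python
def p_any_false_positive(p: float, n: int) -> float:
    # Probability that at least one of n independent comparisons,
    # each with false-match rate p, returns a false positive.
    return 1 - (1 - p) ** n

# Even a one-in-a-million per-comparison error rate adds up:
for n in (1, 1_000, 100_000, 1_000_000):
    print(f"database of {n:>9,} faces: {p_any_false_positive(1e-6, n):.4%}")
```

Under these illustrative numbers, a search against a million-face database has roughly a 63% chance of flagging at least one innocent person, even though each individual comparison errs only once in a million.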

Algorithms and AI are not entirely neutral: they are made by humans, and any existing biases of their creators (even unconscious ones) can make their way into the algorithms themselves. ORCAA (O’Neil Risk Consulting & Algorithmic Auditing) audits algorithms to identify such biases and recommends steps to remedy them. It operates on the understanding that ‘an algorithm isn’t good or bad per se – it is just a tool.’ Its auditors explore the reasons behind an algorithm’s use and the consequences of that use, going ‘beyond traditional accuracy metrics to ask how the algorithm is used to make decisions, and who is affected.’


Limiting freedoms

In China, facial recognition cameras generate more than 6.8 million records daily. This includes tracking jaywalkers and displaying photos of rule-breakers on billboards in cities. While UK and US uses of FRT centre on security and serious law-enforcement cases, Chinese systems log the facial data of people as young as nine and use the technology for comparatively minor infractions.

In Myanmar, the military junta has access to ‘a new public camera system equipped with facial recognition and license plate recognition’, which Human Rights Watch has called a ‘serious threat to basic rights in the country’. FRT poses a risk to freedom of expression and the freedom to protest: the loss of anonymity puts activists and protestors at greater risk, leaving many afraid to voice their political opinions.

When it comes to UK policing, the Metropolitan Police have confirmed that ‘any “match” identified by the technology is manually reviewed before police officers decide on the next stages.’ This safeguard, however, depends on acknowledging that the technology is fallible. In the wake of Black Lives Matter protests across the UK, there is particular concern about the use of FRT given its lower accuracy, and more frequent false positives, on non-white skin tones.


Conclusion

The biases currently inherent in the technology mean we need to be cautious about the use of FRT. Human review of algorithmic matches is vital: the technology is imperfect and cannot be trusted to make identifications on its own.

Facial recognition is part of many people’s daily lives, whether they have consented to it or not. We are no longer able to remain anonymous, and this has many repercussions for our freedoms, making the use of facial recognition technology as much a political issue as a technological one.

About the Author: Leo Hynett

Leo Hynett is a contributing Features Writer, with a particular interest in Culture, the Arts and LGBTQ+ Politics.
