The dangers of facial recognition | Uber workers bring race discrimination claim
Artificial Intelligence (AI) capabilities have evolved dramatically in recent decades, reaching levels unimaginable only a generation ago. AI can transform and improve many areas of human life, including efficiencies in the workplace, but many are still apprehensive about its use.
One key concern is that algorithmic decision-making could lead to discrimination. While not a new issue – indeed, articles on racial and gender bias in AI date back as far as 2017 – it’s one that has recently resurfaced due to Uber’s “ineffective” facial recognition system, which has prompted workers to bring an Employment Tribunal claim for race discrimination.
"Cold and callous"
The claim, spearheaded by the App Drivers and Couriers Union, has not yet been heard by the Central London Employment Tribunal but focuses on alleged unfair dismissal and race discrimination suffered by Black Uber drivers. It states that the facial recognition software used by the company fails more often for drivers who are people of colour.
The software, which is provided by Microsoft, prompts workers to provide a photograph of themselves in real time. If the photograph doesn’t match the registered photo saved on file, workers may be let go. This process was introduced by Uber in March 2020 to verify the identity of drivers and couriers.
Henry Chango Lopez, general secretary of the Independent Workers’ Union of Great Britain, which is backing the action, said: “Uber’s continued use of a facial recognition algorithm that is ineffective on people of colour is discriminatory.” He added: “Hundreds of drivers and couriers who served through the pandemic have lost their jobs without any due process or evidence of wrongdoing.”
The driver in the test case worked on the Uber platform from 2016 until 2021 and had his account terminated after the system failed to recognise him. Others have shared similar experiences; a Nigerian driver who worked on the Uber Eats platform was locked out of the app in March after several failed attempts using the facial verification software, resulting in a loss of income.
These stories have provoked unions and MPs to demand a fairer termination process, which would give drivers the right to plead their case before dismissal, as well as the right to appeal the decision and receive union representation. One MP described the company’s current system as “extremely cold and callous”.
This claim – which argues that the system doesn’t achieve its intended accuracy, an issue which disproportionately disadvantages people of colour – has received support from the Equality and Human Rights Commission (EHRC) and the Worker Info Exchange. Studies conducted by the US National Institute of Standards and Technology, the Alan Turing Institute and the Massachusetts Institute of Technology have all reported errors with facial recognition software misidentifying people based on skin colour.
A proportionate means of achieving a legitimate aim?
From an employment law perspective, race is one of the nine protected characteristics under the Equality Act 2010, and Section 9 of the Act defines it to include colour, nationality and ethnic or national origins.
Under Section 19 of the Equality Act 2010, indirect discrimination will occur when a seemingly neutral policy or practice that is applied across the board (in this case, the requirement for drivers to verify their identity using the app’s facial recognition software) has a disproportionately negative impact on a group sharing a protected characteristic (in this case, people of colour).
Employers can defend claims for indirect discrimination if they can show that the policy or practice was “a proportionate means of achieving a legitimate aim”. This is known as objective justification, and there are four factors that a Tribunal will consider:
- Was the employer’s aim in taking the steps complained of sufficiently important to justify the less favourable treatment or disadvantage suffered? (Only once the legitimate aim has been clearly identified is it possible to consider the issue of proportionality).
- Is there any rational connection between the aim and the disadvantage suffered?
- Were the means chosen by the employer more than was necessary to accomplish its objective? In other words, were there less discriminatory ways the employer could have achieved its aim?
- Do the steps complained of strike a fair balance between the need to accomplish the aim and the detriment suffered?
This is ultimately what Uber’s case will rest on.
Whilst the claim has yet to be decided, other organisations that use the same or similar facial recognition software for employees – including Facebook, Amazon, IBM and Axon – have chosen to distance themselves from this technology altogether.
Need expert advice?
If you’re worried that any of your HR policies or working practices might be discriminatory, it’s always safest to seek specialist advice to avoid falling foul of the law. Whether you need guidance on an immediate issue, help drawing up a new, legally compliant policy or just an expert review of what you currently have in place, our Employment Law and HR experts can help.
For more information on our range of services and to discuss your specific needs, call 0345 226 8393 or request your free consultation using the button below.