Thursday, October 10, 2024


Tackle racial bias in AI to save labour market, say experts

A delivery driver for Uber Eats received financial compensation after alleging the company’s facial recognition system discriminated against him based on race

By: Nadeem Badshah

BRITAIN’S labour landscape risks turning into the “Wild West” if bosses fail to tackle racial bias in Artificial Intelligence (AI), experts have warned.

It comes after an Uber Eats delivery driver received a payout for race discrimination after ethnically-biased facial recognition technology led to his suspension from the app.

Pa Edrissa Manjang was removed from the app’s drivers platform after a failed facial recognition check. Uber Eats told him it had found “continued mismatches” in the photos of his face he had taken to access the platform. An employment tribunal case in March involving Manjang, from Oxfordshire, sparked concerns about using AI to screen employees and the potential for discrimination.

Kate Bell, the TUC’s assistant general secretary who leads the union’s work on AI, told Eastern Eye: “AI is already making life-changing decisions about the way millions work – including how people are hired, performance-managed and fired.

“But UK employment law is way behind the curve – leaving many workers vulnerable to exploitation and discrimination. We urgently need new employment legislation, so workers and employers know where they stand.

“Without proper regulation of AI, our labour market risks turning into a wild west. That is why the TUC is working with a range of stakeholders to draft an AI and Employment Bill.”

Meanwhile, the Information Commissioner’s Office is investigating whether AI systems are showing racial bias when dealing with job applications.

Regulators are concerned that AI tools could produce outcomes that disadvantage certain groups if they are not represented accurately or fairly in the datasets they are trained and tested on.

Jasvir Singh CBE

Jasvir Singh CBE, a barrister, said it was troubling that a growing body of research shows AI systems exhibit racial bias.

He told Eastern Eye: “In Mr Manjang’s case, the technology was in effect denying that he existed.

“Facial recognition software can be useful in some circumstances, but there needs to be more work done to ensure that any glitches in technology can be ironed out to prevent any racial bias or discrimination from taking place.

“Until that happens, we may see more cases of this nature coming to employment tribunals and the courts in general, and the courts will need to develop a better understanding about the impact of AI in everyday life in order to tackle such issues properly and justly.”

The Equality and Human Rights Commission (EHRC) and the App Drivers and Couriers Union (ADCU) provided funding for Manjang’s case.

Baroness Kishwer Falkner, chairwoman of the EHRC, said: “AI is complex, and presents unique challenges for employers, lawyers and regulators.

“It is important to understand that as AI usage increases, the technology can lead to discrimination and human rights abuses. We are particularly concerned that Mr Manjang was not made aware that his account was in the process of deactivation, nor provided any clear and effective route to challenge the technology. More needs to be done to ensure employers are transparent and open with their workforces about when and how they use AI. When such companies rely on automation to help manage their staff they need to guard against unlawful discrimination.”

An Uber spokesperson said: “Our realtime ID check is designed to help keep everyone who uses our app safe and includes robust human review to make sure that we’re not making decisions about someone’s livelihood in a vacuum, without oversight.”

Elsewhere, new research from recruitment tech firm Tribepad suggested that 89 per cent of candidates believe recruiters show bias when hiring. Some 16 per cent felt there was racial bias, while 33 per cent believed that diversity data is used by prospective employers in a way that benefits them.

Dean Sadler, CEO of Tribepad, said: “These new findings paint a mixed picture. In some cases perceived bias is on the up – yet candidates do seem to be more trusting that employers are using diversity data for good. But it’s still not enough.

“We need a world where it’s not about where you’ve come from, what you look like, or your family situation, but the opportunities, skills and aptitude to land you a job.

“Biases can be so ingrained, and unconsciously so, making it difficult to change mindsets, but it can be done.”

Last year, Bank of England officials warned that racist and sexist AI bots pose a risk to the financial system.

A report said self-teaching algorithms could pick up biases from datasets and wider society which could then be used to discriminate against customers or staff in the workplace.

The government recently published guidance on responsible AI in recruitment to help employers.

It outlines what organisations should consider, including: what problems they are trying to solve and how AI can help address them; how they will communicate the use of the technology to potential job applicants; whether the AI systems on the market can produce the desired outputs; and whether employees will need training or additional resources to use the system.
