BANK OF ENGLAND officials have warned that discriminatory artificial intelligence (AI) bots could endanger the financial system.
The report from the bank’s fintech hub highlighted the risks associated with AI bots perpetuating racist and sexist biases, raising concerns about their capacity to discriminate against both customers and employees, reported The Telegraph.
The report underscores the susceptibility of self-teaching algorithms to absorbing biases from the datasets they are trained on and the broader societal context.
Authored by analyst Kathleen Blake, the report emphasises the disruptive issues these biases may create for financial institutions, insurers, and the overall financial system.
Blake pointed out that AI-driven discrimination could “exacerbate” financial stability risks by eroding trust within the system.
The use of “biased or unfair AI” poses not only reputational but also legal risks for companies, Blake added, which could in turn attract scrutiny from regulators.
Several noteworthy AI-related incidents were cited in the report, including an algorithm employed by Apple and Goldman Sachs to assess credit card applications, which reportedly offered women lower credit limits than men.
The New York State Department of Financial Services investigated the issue in 2021, concluding that while the disparity was not deliberate, it exposed significant deficiencies in customer service and transparency.
Another instance discussed was Amazon’s experience with a recruitment algorithm, which, as Blake highlighted, unfairly penalised female applicants.
This discriminatory outcome was attributed to the algorithm’s training on resumes submitted over a ten-year period, reflecting the prevailing male dominance in the industry, the report said.
Consequently, the algorithm was scrapped in 2018 over concerns of sexism, particularly because it penalised CVs containing the word “women’s,” as in “women’s chess club captain.”
In recent months, the government has raised alarms about the potential misuse of AI in creating bio-weapons and the loss of control over such software.
The Department for Science, Innovation and Technology said in a statement that humanity stands at a pivotal juncture in history, stressing the importance of confronting the challenges AI poses rather than turning a blind eye to them.