Sunday, April 28, 2024

News

Concerns mount over UK’s preparedness for deepfake-affected general elections

Former Justice Secretary Sir Robert Buckland is advocating for stronger governmental action against what he perceives as a pressing threat to UK democracy

As part of the government’s efforts to bolster electoral integrity, they have launched the Defending Democracy Taskforce, chaired by Home Office security minister Tom Tugendhat (Photo by BEN STANSALL/AFP via Getty Images)

By: Kimberly Rodrigues

Artificial Intelligence (AI) has been utilised globally to interfere with elections, sparking concerns among senior politicians and security services about potential risks in the UK.

Former justice secretary Sir Robert Buckland is urging the government to take stronger action against what he sees as a pressing threat to UK democracy.

As chair of the Northern Ireland select committee, the Conservative MP is particularly alarmed by the emergence of deepfakes: convincing audio and video clips that depict politicians saying things they never said, the BBC reported.

According to Buckland, the peril from AI-generated misinformation isn’t a far-off futuristic scenario but an immediate reality. “The future is here. It’s happening,” he said, urging UK policymakers to take proactive measures domestically and internationally.

Recalling the pause in campaigning during the 2017 election after the Manchester Arena bombing, Buckland fears a comparable disruption at the upcoming general election, due by January 2025.

However, the government claims to be proactive in safeguarding elections against foreign interference.

As part of its efforts to bolster electoral integrity, the government has launched the Defending Democracy Taskforce, chaired by Home Office security minister Tom Tugendhat.

Many of the targeted threats are not novel. Misinformation and underhanded tactics have been a persistent aspect of election campaigns worldwide. Techniques like photoshopped images, memes, and altered audio of politicians have existed for decades.

However, what’s new, as highlighted in the annual report of the National Cyber Security Centre (NCSC), an arm of GCHQ, is the widespread availability of potent generative AI tools capable of creating highly convincing fakes.

The surge in large language models such as ChatGPT, alongside advances in text-to-speech and text-to-video software, is viewed by some as a boon for would-be election disruptors, from mischief-makers in their bedrooms to hostile state actors.

“Large language models will almost certainly be used to generate fabricated content, AI-created hyper-realistic bots will make the spread of disinformation easier and the manipulation of media for use in deepfake campaigns will likely become more advanced,” warns the NCSC in its report.

During its party conference in September, the Labour Party got a glimpse of the potential challenges when an audio clip surfaced on social media appearing to show leader Sir Keir Starmer verbally abusing aides. Despite being swiftly dismissed as a fake, the clip garnered 1.5 million views.

In November, a fake audio clip featuring London mayor Sadiq Khan advocating for rescheduling Armistice Day due to a pro-Palestinian march spread widely across social media platforms.

Khan cautioned about the risks of unregulated deepfakes and the threat they pose to democracy, after the Metropolitan Police concluded that no offence had been committed.

For Buckland and others apprehensive about this issue, the worst-case scenario involves a deepfake emergence of a party leader just before polling day in a tightly contested election.

This is exactly what happened in Slovakia’s general election in September, when a fake audio clip surfaced featuring Michal Šimečka, leader of the liberal Progressive Slovakia party, apparently discussing election manipulation.

Šimečka went on to lose the election to the populist pro-Moscow Smer-SSD party.

Reflecting on this, Tugendhat remarked in a recent speech, “Who knows how many votes it changed—or how many were convinced not to vote at all?”

AI-generated images and audio have influenced recent elections and referendums globally, such as in Argentina, where right-wing libertarian Javier Milei emerged victorious.

Buckland emphasises the need for robust regulation, and urges the government to accelerate plans to reinforce Ofcom’s oversight of misinformation.

Furthermore, as part of a group of Tory MPs, Buckland has co-signed a letter addressed to Science Secretary Michelle Donelan demanding clearer guidance for social media firms to facilitate compliance with newly enacted national security laws, aimed at countering foreign interference.

Last week, Donelan told a group of Labour, Tory, and SNP MPs that the government was taking the AI threat extremely seriously.

As a member of the Defending Democracy Taskforce, Donelan dismissed the possibility of new laws but stressed the UK’s collaboration with social media companies and international allies, including the US, in countering this threat.

During her appearance before the science and technology committee, she said, “I expect that by the next general election we will have robust mechanisms in place that will be able to tackle these topics.”

Regarding measures to curb deepfakes from undermining democracy, some advocate for making them illegal (the government has already enacted legislation to prohibit the sharing of pornographic deepfakes in England and Wales).

However, others, like Donelan, argue that employing technology for detecting and neutralising fake content forms part of the solution.

Jan Nicola Beyer, research coordinator at the Democracy Reporting International think tank, describes determining whether a clip is definitively fake as an ongoing “cat and mouse game.”

He said, “The detection mechanisms get better, but in the moment they get better, the generative AI models get better in order to generate even more convincing and even harder to detect content.”

He highlighted audio content as particularly difficult to debunk.

Emphasising the role of fact checkers and media, Beyer stressed the importance of calling out probable fakes while providing evidence for their assessment. Equally crucial, in his view, is preventing their viral spread.

Major tech companies are actively developing systems to safeguard elections globally in 2024. Beyer also recommended that platforms ensure only trustworthy material is suggested to users, and advocated the “demonetisation” of unreliable sources.

Ken McCallum, the director general of MI5, which is working with the government against foreign election interference, warned against fixating on a single risk, suggesting that deepfakes might not be the core issue.

“And then if you’ve got creative adversaries, they decide not to play that card and do something quite different,” he said.

“So, I wouldn’t want to make some sort of strong prediction that [deepfakes] will feature in the forthcoming election, but we would be not doing our jobs properly if we didn’t really think through the possibility.”

According to a security source, while deepfakes might pose a long-term threat, the more immediate concern revolves around AI’s utilisation to create more compelling “spearphishing” emails. These deceptive emails entice individuals to click on links that lead to their computers being compromised.

This tactic was employed by Russian intelligence back in 2016, aiming to obtain the emails of the chair of Hillary Clinton’s presidential campaign.

The obtained emails were subsequently leaked online during a closely contested election she ultimately lost.

With the upcoming US election expected to be similarly fiercely contested next November, some UK security officials privately hope that foreign spies might prioritise events in the US, reducing their capacity to interfere in a simultaneous UK election.

Another fear expressed by senior national security figures is that excessive emphasis on the risk of deepfakes and AI meddling in politics could spread fear, undermining trust in the political process.

Regardless of whether deepfakes become a significant issue, experts fear a social media environment flooded with synthetic images and text, leaving voters struggling to discern what is real.

This could lead to a situation where unscrupulous politicians exploit the ambiguity, termed the “liar’s dividend” by researchers.

Buckland echoed warnings about the “liar’s dividend,” emphasising its corrosive impact on trust in information, which can lead people to stop believing anything.

“Also, those who want to undermine the process will simply say attempts to deal with deepfakes are censorship rather than something more legitimate designed to protect the sanctity of the truth,” he said.

As the next general election approaches, the media, tech giants, security services, and political parties all face the challenge posed by this evolving landscape.
