TECH firms OpenAI and Microsoft have joined an international coalition led by the UK to support research into making advanced artificial intelligence systems safer and more reliable.
The companies will back the AI Security Institute’s Alignment Project, which focuses on ensuring AI systems behave as intended and do not cause harm as their capabilities grow. The commitment was announced at the AI Impact Summit in India on Friday (20).
The new commitments bring total funding for the project to more than £27 million. OpenAI will provide £5.6m, with further support coming from Microsoft and other international partners.
According to a statement, the funding will support research into AI alignment, an area that looks at how advanced systems can remain under human control.
The government confirmed that grants have now been awarded to about 60 research projects across eight countries. A second round of funding is expected to open later this year.
Deputy prime minister David Lammy said AI offered major opportunities but warned that safety must be built in from the beginning.
“We will always be clear-eyed on the need to ensure safety is baked into it from the outset,” he said, adding that the support of OpenAI and Microsoft would be “invaluable” in taking the work forward.
AI minister Kanishka Narayan said trust remained one of the biggest obstacles to wider use of artificial intelligence.
“We can only unlock the full power of AI if people trust it,” he said, adding that alignment research “tackles this head-on”.
Alignment research aims to guide AI systems so they act as expected, even as they become more powerful. Experts say that without such work, future systems could behave in ways that are difficult to predict or control.
Mia Glaese, vice-president of research at OpenAI, said alignment efforts must keep pace with rapid advances in AI. “The hardest problems won’t be solved by any one organisation working in isolation,” she said, adding that the project would help strengthen a wider research effort focused on keeping systems “reliable and controllable”.
The Alignment Project is run by the UK’s AI Security Institute and is supported by a range of international partners, including Microsoft, research bodies and public funders.