THE HOME Office is using an AI-based tool, known as Identify and Prioritise Immigration Cases (IPIC), to suggest actions for immigration cases, including enforcement actions against both adult and child migrants.
Critics argue the system risks over-automating decisions, as caseworkers may “rubberstamp” the algorithm’s recommendations instead of critically assessing each case. Migrant advocacy groups, including Privacy International, claim the system could “encode injustices” and are calling for its removal, The Guardian reported.
The government maintains that the AI system, operational since 2019-2020, enhances efficiency by guiding caseworkers on prioritising actions, but insists a human official reviews each recommendation.
The system is currently used in cases involving removal action among the approximately 41,000 asylum seekers, the newspaper reported.
Privacy International, however, has highlighted that those affected are not informed that AI influences their case processing.
Documents released following a lengthy information request by Privacy International reveal that IPIC gathers a range of personal details, including biometric data, ethnicity, health markers, and criminal records, to evaluate cases for enforcement priority.
Fizza Qureshi, head of the Migrants’ Rights Network, raised concerns that the system could foster racial bias due to the data inputted and lead to increased surveillance, reported The Guardian.
Privacy International lawyer Jonah Mendelsohn warned that the tool affects “hundreds of thousands” of people without any transparency about its role in their individual cases. He emphasised the need for algorithmic accountability as the Home Office moves towards a “digital by default” approach by 2025.