Pramod Thomas has been a senior correspondent with Asian Media Group since 2020, bringing 19 years of journalism experience across business, politics, sports, communities, and international relations. His career spans both traditional and digital media platforms, with eight years specifically focused on digital journalism. This blend of experience positions him well to navigate the evolving media landscape and deliver content across various formats. He has worked with national and international media organisations, giving him a broad perspective on global news trends and reporting standards.
A FORMER City compliance officer has been awarded over £500,000 after an employment tribunal found he was unfairly dismissed for whistleblowing.
Bharat Bhagani, who worked for Goldenway Global Investments, a British subsidiary of a Hong Kong-based group, raised concerns about potential espionage and financial misconduct.
During the hearing at a central London tribunal, Bhagani claimed he was ordered to secure a UK visa for an individual later identified as a Chinese espionage agent. After alerting authorities, he faced questioning before the alleged spy was deported.
Matters escalated when Bhagani resisted pressure from Hong Kong executives to facilitate the appointment of two directors to the London office, believing such moves would breach British financial regulations.
Taking his concerns to the Financial Conduct Authority (FCA), Bhagani also flagged potential money laundering activities within the company.
The tribunal, led by Judge Mark Emery, found Bhagani's actions to be justified and in the public interest. Judge Emery noted that Bhagani's evidence regarding the alleged spy went largely unchallenged, and his belief in the company's attempts to recruit a Chinese agent was deemed reasonable.
Goldenway Global Investments, which has since lost its authorisation to operate in the UK, swiftly terminated Bhagani's employment following his disclosures to the FCA. The tribunal ruled this dismissal unfair, concluding that the Hong Kong leadership viewed Bhagani as a threat to their operations.
The judge's ruling highlighted the company's fear that Bhagani's continued presence and disclosures would have caused significant regulatory issues. The tribunal also accepted Bhagani's belief that his dismissal was linked to expediting the transfer of potentially illicit funds.
As a result of the ruling, Bhagani has been awarded approximately £564,000 in compensation for unfair dismissal.
In January, the Chinese Embassy in the UK issued a statement strongly denying the espionage allegations, calling them "completely based on hearsay evidence" and "created out of nothing".
“The so-called ‘Chinese espionage agent’ related to an employment dispute case, is completely based on hearsay evidence and also is created out of nothing. We firmly oppose any malicious slander against China,” the statement said.
Pinterest responds to complaints over AI-generated ‘slop’
- Users can now restrict AI-generated visuals across select categories.
- Pinterest will make “AI-modified” content labels more visible.
- The update aims to restore trust amid growing user backlash.
Pinterest has rolled out new controls allowing users to reduce the amount of AI-generated content in their feeds, following widespread criticism over an influx of synthetic images across the platform.
The company confirmed on Thursday that users can now personalise their experience by limiting generative imagery within specific categories such as beauty, art, fashion, and home décor. The move comes as many long-time users voiced frustration that their feeds were increasingly dominated by low-quality AI visuals, often referred to online as “AI slop.”
Pinterest, which serves as a hub for creative inspiration and shopping ideas, has faced growing scrutiny from both users and media outlets questioning whether its algorithmic changes have diluted the quality and authenticity of its content.
New personalisation settings and clearer labels
The new controls can be found under the “Refine your recommendations” section in the app’s Settings menu. Users can opt for reduced exposure to AI-generated posts in certain categories, with more options expected to be added later based on feedback.
In addition, Pinterest said it will make its existing “AI-modified” labels more prominent. These labels appear on posts identified through image metadata or Pinterest’s detection systems as being partially or fully AI-generated.
The platform is also encouraging user feedback. When users encounter Pins they find less appealing due to synthetic imagery, they can use the three-dot menu to flag them and adjust their preferences accordingly.
The update has started rolling out across Pinterest’s website and Android app, with iOS support to follow in the coming weeks.
Balancing creativity with user trust
Matt Madrigal, Pinterest’s Chief Technology Officer, said the company’s focus remains on maintaining an authentic, inspiring experience for its community.
“With our new GenAI controls, we’re empowering people to personalise their Pinterest experience more than ever, striking the right balance between human creativity and innovation,” Madrigal said.
Pinterest’s move comes as research cited by the company suggests that AI-generated visuals now account for more than half of all online content. By giving users direct control over how much of that material they see, Pinterest hopes to preserve its reputation as a platform driven by genuine creativity rather than automated output.