Highlights
- The ChatGPT caricature trend requires users to share detailed personal and professional information with the AI tool.
- Security experts warn that uploaded data is stored and used to train AI models, with unclear long-term uses.
- OpenAI's privacy policy allows broad sharing of submitted content with affiliates and service providers.
The AI-created caricatures are generated by entering a prompt into ChatGPT: "Create a caricature of me and my job based on everything you know about me."
While the results prove popular on social media platforms, creating them requires users to share substantial personal information with the AI tool.
Jake Moore, global cybersecurity adviser at ESET, told Forbes: "Every time these trends pop up, people are often quicker to jump on the bandwagon than to question what might actually lie behind them."
When users upload photos and personal details to ChatGPT, the platform "collects all of this information and analyses it," with data getting "stored and used to train these impressive models alongside unclear long term uses."
Matt Conlon, CEO and co-founder at security firm Cytidel, warned that users are "actively feeding increasingly detailed personal information into generative models to improve the output."
This behaviour "raises serious data privacy concerns and could increase the risk of identity theft in the future," with no guarantee uploaded information can be fully removed or controlled.
Data control measures
Chris Linnell, associate director of data privacy at consultancy Bridewell, warned of risks in "normalising the sharing of photos and detailed personal or professional information with AI tools, without fully considering how that data might be used or retained."
According to OpenAI's privacy policy, submitted content may be used to provide services, improve products and conduct research, with broad sharing permitted amongst affiliates and service providers.
OpenAI offers memory control features, with "reference saved memories" and "reference chat history" settings available. These remain off by default for UK and EU users.
Users can manage specific saved memories, switch to Temporary Chat for memory-free sessions, or avoid uploading real photos.
Oliver Simonnet, lead cybersecurity researcher at CultureAI, advised users to "keep prompts generic, and review data retention policies and settings."
While the ChatGPT caricature trend offers some entertainment, experts warn that the personal data shared may remain long after the trend fades. Users are urged to consider the long-term risks of sharing personal information with AI tools, rather than focusing only on the short-term fun.