Highlights
- Sharma led Anthropic's safeguards team examining AI misuse, misalignment and catastrophe prevention.
- Resignation follows release of Opus 4.6, a more powerful version of Anthropic's Claude chatbot.
- Half of xAI's original 12-strong founding team have also quit Elon Musk's company since 2023.
Safety research legacy
Sharma led Anthropic's safeguards team, launched a year ago to tackle AI security research including examining "model misuse and misalignment" and exploring AI "catastrophe prevention."
His team also investigated risks posed by overly sycophantic AI bots and how chatbot reliance could disempower human users.
Ethan Perez, an AI safety lead at Anthropic, told The Telegraph that Sharma's work had been "critical to helping us and other AI labs achieve a much higher level of safety than we otherwise would have."
A published poet who describes himself as a "poet, mystic and ecstatic DJ and facilitator," Sharma signed off his resignation letter with a poem by the American poet William Stafford.
Industry departures
Sharma's resignation adds to a wave of AI safety departures across Silicon Valley. Elon Musk's xAI has also lost senior figures: co-founder Tony Wu and executive Jimmy Ba both announced exits this week, meaning half of xAI's original 12-member founding team have left since the company's 2023 launch.
Anthropic founder Dario Amodei has repeatedly warned that powerful AI could eliminate half of all white-collar jobs and that tools of "almost unimaginable power" are "imminent."
Despite these warnings, Anthropic continues to launch increasingly powerful AI systems while raising more than $20 billion and reportedly preparing for a public listing.