Highlights:
- OpenAI said to be working on a tool that produces music from text and audio prompts
- Juilliard students reportedly helping annotate musical scores for model training
- The move could spark new disputes with record labels and rival platforms
OpenAI explores a new music creation tool
OpenAI, best known for ChatGPT and its generative video model Sora 2, is reportedly developing a new tool designed to produce music from short text or audio prompts.
According to The Information, the company has partnered with music students from New York’s Juilliard School to help annotate musical scores – though the school itself has denied any formal involvement. The students’ participation is believed to be part of an effort to teach the system how to interpret musical structures such as rhythm, harmony, and instrumentation.
The tool, still unnamed, could allow users to generate musical accompaniments, background scores, or instrumental layers tailored to specific moods, tempos, or visuals, much as users currently create AI-generated videos or images from short prompts.
From MuseNet to a more mature sound
This isn’t OpenAI’s first venture into music. The company’s earlier projects, MuseNet (2019) and Jukebox (2020), offered a glimpse into machine-generated composition and vocals, though both were relatively experimental.
MuseNet could produce multi-instrumental pieces in various styles but was limited to MIDI outputs. Jukebox went further, creating full vocal tracks, but its sound quality lagged behind more recent models developed by startups such as Suno and Udio.
The new project appears to be a more ambitious attempt to merge those early experiments with OpenAI’s broader multimedia ambitions, following the path of its Sora video model.
A crowded and contentious field
If launched, OpenAI’s tool will enter an already competitive and legally fraught landscape. Platforms such as Suno, Udio, and Google’s Music Sandbox have gained traction with musicians and hobbyists, but they’ve also drawn sharp criticism from the music industry.
Both Universal Music Group and Warner Music Group have filed lawsuits against Suno and Udio, alleging copyright violations. Those disputes mirror ongoing cases against OpenAI over its use of copyrighted material in training datasets, suggesting the company’s potential entry into music generation could invite further scrutiny.
Streaming services are already struggling to manage a flood of AI-generated tracks, many of which are not properly labelled and are sometimes marketed as human-made. The arrival of another major player could intensify those challenges.
Redefining what counts as music
The rise of machine-generated songs raises questions about creativity, ownership, and authenticity in modern music. Supporters argue these tools make music production more accessible and flexible, while critics warn that they risk devaluing the work of real artists.
If OpenAI’s project proceeds, it will test whether audiences are ready to embrace music made with minimal human input or whether listeners will begin to draw clearer lines around what they consider truly “human” art.
Either way, the company’s latest experiment suggests a future where music, like photography and film, becomes a programmable medium shaped as much by code as by creativity.