OpenAI has blocked users from creating videos of Martin Luther King Jr. on its Sora app after the late civil rights leader’s estate objected to what it called “disrespectful depictions” spreading online.
Since Sora’s launch three weeks ago, users have produced and shared hyper-realistic deepfake videos of King making crude or racist remarks and appearing in fabricated scenes, including clips of him stealing from a store, fleeing police and reinforcing racial stereotypes.
Late Thursday, OpenAI and King’s estate released a joint statement announcing that AI-generated videos of King would be restricted as the company “strengthens guardrails for historical figures.”
OpenAI said it recognizes “strong free speech interests” in allowing AI depictions of public figures, but agreed that estates should have final say over how likenesses are used.
App Faces Backlash Over Deepfakes
The Sora app, still in invite-only testing, lets users generate lifelike video content by combining recorded footage of themselves with AI-generated scenes. While users can choose whether others may make “cameo” videos of them, the app initially allowed anyone to generate videos featuring celebrities and historical figures without consent.
That feature enabled users to create fake clips of Princess Diana, John F. Kennedy, Malcolm X, Kurt Cobain and others — prompting growing concern from intellectual property experts, artists and disinformation researchers.
Kristelia García, a Georgetown Law professor specializing in intellectual property, said that OpenAI acting only after King’s estate complained reflects a “forgiveness, not permission” attitude.
“The AI industry seems to move really quickly, and first-to-market appears to be the currency of the day, certainly over a contemplative, ethics-minded approach,” García told NPR in an email.
She added that state-by-state differences in right-of-publicity and defamation laws mean AI companies may face “little legal downside to just letting things ride unless and until someone complains.”
Under California law, heirs or estates of public figures retain control of a celebrity’s likeness for up to 70 years after death.
OpenAI Adjusts Policy After Criticism
After backlash, OpenAI CEO Sam Altman said the company is revising its policy to require explicit opt-in permission from rights holders before their likenesses can be used in AI videos.
Still, families of deceased celebrities have condemned Sora’s rollout for allowing vulgar or exploitative depictions. Zelda Williams, daughter of actor Robin Williams, pleaded on Instagram for users to stop making AI-generated clips of her father. “Please, just stop sending me AI videos of my dad,” she wrote. “It’s NOT what he’d want.”
Bernice King, daughter of Martin Luther King Jr., shared a similar plea on X, writing simply: “Please stop.”
Broader Industry Concerns
Hollywood studios and talent agencies have also voiced alarm over Sora’s use of likenesses without consent, comparing it to how ChatGPT was initially trained on copyrighted material before OpenAI struck licensing deals with publishers.
The controversy adds to ongoing legal battles over how AI companies use creative and personal works without authorization.
As OpenAI refines its policies, experts say the Sora controversy underscores a broader challenge: balancing innovation and free expression against ethical and legal protections for real people, living and dead.