OpenAI’s Sora 2 has taken the internet by storm, allowing users to generate realistic, AI-driven videos with astonishing detail. From creating fictional landscapes to animating historical events, the technology demonstrates a breathtaking leap in generative AI. But as its popularity grows, so does a troubling trend—AI deepfakes of deceased celebrities appearing online with lifelike accuracy.
These videos, stamped with Sora 2’s watermark, have raised serious ethical and legal concerns. If OpenAI claims to restrict the creation of AI-generated depictions of public figures, why are dead celebrities still appearing in these videos? And more importantly, should likeness protections extend to those who are no longer alive?
The Sora 2 Revolution: Boundless Imagination Meets Ethical Limits
Sora 2 represents the next evolution in AI-generated video. It allows users to create fully rendered scenes by simply typing prompts—ranging from cinematic clips to stylized animations. The accessibility and quality of Sora 2 have made it a viral sensation among creators, marketers, and storytellers.
However, the line between creativity and exploitation has blurred. What began as an impressive creative tool has quickly evolved into a platform capable of resurrecting the dead in digital form—without consent, control, or context.
According to a report by Ars Technica, OpenAI clearly stated in its release notes that Sora 2 restricts depictions of public figures, including celebrities and politicians. The company explained that only individuals who have opted in to appear as “cameos” can be digitally represented. These participants are allowed to review and approve how their likeness is used.
Yet, this opt-in system has a fatal flaw: dead celebrities can’t consent.
A Loophole Exploited: How Deceased Celebrities Return Through AI
Since the deceased cannot agree to OpenAI’s cameo policy, creators have found ways around these restrictions. By tweaking prompts or modifying likenesses slightly, users can generate eerily accurate videos of long-gone icons. These creations often spread rapidly across social media, gathering millions of views.
Examples have included a comedic cooking skit featuring Michael Jackson, a skateboarding physicist resembling Stephen Hawking, and even an unsettling recreation of Martin Luther King Jr.’s “I Have a Dream” speech, where the AI version stutters and forgets lines.
While some users defend these clips as tributes or harmless humor, others see them as a disturbing violation of legacy and dignity.
Family Members and Fans Push Back
The backlash has been swift. Zelda Williams, daughter of the late actor Robin Williams, publicly condemned the trend. In an Instagram Story (archived by Deadline), she expressed outrage over creators using her father’s likeness in AI videos without permission.
“Living actors deserve the right to consent to their likeness, but so do the dead,” she said. “This technology is reanimating people without their will or soul.”
Her statement echoes growing public unease. Many estates of deceased stars have strict image rights protections, which are often licensed for official merchandise or biographical films. But AI deepfakes bypass traditional rights management, creating a gray zone between homage and identity theft.
Legal Gray Areas: Who Owns a Face After Death?
In the United States, the “right of publicity” grants individuals control over the commercial use of their name and likeness. However, these protections vary by state—and often expire upon death.
For instance:
- California extends posthumous publicity rights for up to 70 years after death.
- New York offered no posthumous protection at all until 2021, when it enacted a post-mortem right of publicity for deceased performers and personalities.
- Many countries have no legal framework addressing AI-based likeness use.
This lack of uniformity allows AI tools like Sora 2 to operate in legal ambiguity. Technically, users creating videos of deceased figures might not be breaking the law—but the ethical implications remain severe.
Legal experts warn that OpenAI could face lawsuits if estates claim reputational or emotional damages from unauthorized depictions. Given OpenAI’s past legal disputes over AI-generated content involving celebrities, this controversy could spark new challenges for the company.
OpenAI’s Response and the Role of Watermarking
To promote transparency, OpenAI embeds digital watermarks in every Sora 2 video, signaling that it is AI-generated. While this helps prevent misinformation, it does little to stop the spread of ethically questionable material.
Once uploaded, these clips are rapidly shared, downloaded, and remixed—making removal nearly impossible. Watermarks can be cropped or blurred, allowing misleading content to circulate freely.
OpenAI insists it is actively refining safeguards to detect and block misuse. But the speed of user innovation often outpaces enforcement, leaving the platform in a constant game of ethical catch-up.
The Broader Implications for AI Ethics
The debate over Sora 2 deepfakes isn’t just about famous faces—it’s about human dignity in the age of machine creativity. As AI becomes capable of recreating anyone, alive or dead, society must define clear boundaries.
Ethicists argue that digital resurrection without consent undermines personal identity and erodes public trust in media. Viewers may soon struggle to distinguish reality from fabrication, especially as AI-generated content becomes more photorealistic.
Some experts advocate for global AI likeness regulations, requiring consent databases or posthumous licensing systems. Others propose digital watermark registries that link every AI-generated image to verified metadata, ensuring traceability and accountability.
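A watermark registry of the kind proposed above could, in its simplest form, map a fingerprint of each generated file to verified provenance metadata. The following Python sketch illustrates the idea with a content hash as the fingerprint; the registry, function names, and metadata fields are all hypothetical, since no such public system exists today.

```python
import hashlib

# Hypothetical registry mapping a video file's SHA-256 digest to
# provenance metadata (which generator made it, whether likeness
# consent was verified). Purely illustrative; not a real service.
REGISTRY = {}

def register_video(data: bytes, generator: str, consent_verified: bool) -> str:
    """Record provenance for an AI-generated video; return its digest."""
    digest = hashlib.sha256(data).hexdigest()
    REGISTRY[digest] = {
        "generator": generator,
        "consent_verified": consent_verified,
    }
    return digest

def check_provenance(data: bytes):
    """Look up a video by content hash; None means no registered provenance."""
    return REGISTRY.get(hashlib.sha256(data).hexdigest())

# A clip registered at generation time can be traced later, even if a
# visible watermark has been cropped out of the frames.
clip = b"\x00\x01example-video-bytes"
register_video(clip, generator="example-model", consent_verified=False)
print(check_provenance(clip))
print(check_provenance(b"unregistered-bytes"))  # no provenance on record
```

One known weakness of this naive approach: an exact content hash breaks as soon as a clip is re-encoded or remixed, which is why real provenance efforts such as C2PA rely on signed metadata embedded in the file rather than external hash lookup alone.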
Public Reaction: Awe, Amusement, and Alarm
Reactions to Sora 2’s capabilities have been mixed. Many users admire the platform’s artistic potential and view AI recreations as modern tributes. Fan-generated videos, such as “Elvis performing a new song” or “Marilyn Monroe in today’s fashion,” receive millions of likes.
Yet critics argue that the same technology can be weaponized—spreading misinformation, fake endorsements, or defamatory portrayals of people who can’t defend themselves. The emotional impact on families, friends, and fans of the deceased cannot be ignored.
As one commentator put it:
“AI isn’t just bringing people back to life—it’s rewriting their story.”
Frequently Asked Questions:
What is OpenAI’s Sora 2?
OpenAI’s Sora 2 is an advanced AI video generator that creates realistic, high-quality videos directly from text prompts. Users can generate lifelike animations, short films, or realistic human movements—sometimes even replicating real people.
Why is Sora 2 facing criticism?
Sora 2 has drawn criticism for enabling users to create deepfake videos of deceased celebrities. These unauthorized digital recreations raise ethical questions about consent, legacy, and respect for the dead.
Does OpenAI allow videos of public figures?
No. OpenAI’s policy prohibits creating videos of public figures, including politicians and celebrities, unless they have opted in through its “cameo” consent system. However, because deceased individuals cannot opt in, the safeguard effectively fails to cover them.
Why are people creating deepfakes of dead celebrities?
Some users exploit loopholes in Sora 2’s system to recreate famous figures who can’t consent, such as historical icons or late entertainers. These videos often go viral due to curiosity or nostalgia.
Are AI-generated videos of dead celebrities legal?
Legality varies by country. In some U.S. states, posthumous likeness rights are protected for decades after death. In others, there are no clear laws, leaving the issue in a legal gray area.
What are the ethical concerns behind these deepfakes?
AI recreations of dead celebrities can distort their legacy, misrepresent their beliefs, or cause emotional distress to surviving family members. Many argue that these digital resurrections violate human dignity.
Has OpenAI responded to these concerns?
Yes. OpenAI has reiterated that it blocks the creation of public figure deepfakes and watermarks all Sora 2 videos. Still, users often find creative ways to bypass these restrictions.
Conclusion
OpenAI’s Sora 2 has redefined what’s possible in digital creativity, allowing anyone to bring imagination to life through AI-generated video. Yet, its rise has exposed deep ethical and emotional fault lines surrounding the digital resurrection of deceased celebrities. While some view these recreations as artistic tributes, others see them as invasions of privacy and disrespect to the dead. The absence of clear global laws and the loopholes in AI consent systems have left creators, families, and platforms struggling to draw the line between innovation and exploitation.
