OpenAI's release of its video app Sora has been marred by the appearance of violent and racist content, highlighting that its safety measures are ineffective.

OpenAI released its newest AI video generator on Tuesday, introducing a social feed where users can share strikingly realistic videos. However, within hours of Sora 2’s launch, the feed—and older social media platforms—were flooded with videos showing copyrighted characters in inappropriate scenarios, along with violent and racist content. This goes against OpenAI’s own rules for Sora and ChatGPT, which ban material that encourages violence or causes harm.

The Guardian reviewed several prompts and clips, finding that Sora produced videos of bomb threats and mass shootings, with terrified people fleeing college campuses and crowded spots like New York’s Grand Central Station. Other prompts generated war scenes from Gaza and Myanmar, featuring AI-made children describing their homes being destroyed. One video, prompted by “Ethiopia footage civil war news style,” showed a reporter in a bulletproof vest reporting on clashes in residential areas. Another, using just “Charlottesville rally,” depicted a Black protester in protective gear shouting a white supremacist slogan: “You will not replace us.”

Sora is currently invite-only and not yet broadly available to the public. Even so, it climbed to the top of Apple's App Store within three days of its limited release, surpassing OpenAI's own ChatGPT.

Bill Peebles, head of Sora, expressed excitement on X, saying, “It’s been epic to see what the collective creativity of humanity is capable of so far,” and promised more invite codes soon.

The app offers a preview of a future where distinguishing real from fake could become much harder, especially as these videos start spreading beyond the AI platform. Misinformation experts warn that such realistic scenes could obscure the truth and be exploited for fraud, bullying, and intimidation.

Joan Donovan, a Boston University professor specializing in media manipulation, noted, “It has no fidelity to history, it has no relationship to the truth. When cruel people get their hands on tools like this, they will use them for hate, harassment and incitement.”

OpenAI CEO Sam Altman praised Sora 2’s launch as “really great,” calling it a “ChatGPT for creativity” moment that feels fresh and fun. He acknowledged some concerns about social media addiction, bullying, and the risk of “slop”—low-quality, repetitive videos that clutter platforms. Altman emphasized that the team worked hard to avoid these pitfalls and implemented safeguards against using people’s likenesses or creating disturbing or illegal content. For instance, the app blocked a request to generate a video of Donald Trump and Vladimir Putin sharing cotton candy.

Still, in the first three days, many Sora videos spread online. A Washington Post reporter created a clip depicting Altman as a WWII military leader and reported making videos with "ragebait, fake crimes, and women splattered with white goo." The Sora feed also features numerous videos of copyrighted characters from shows like SpongeBob SquarePants, South Park, and Rick and Morty, and had no issue producing videos of Pikachu raising tariffs, stealing roses from the White House Rose Garden, and joining a Black Lives Matter protest alongside SpongeBob, who in another clip declared and planned a war on the United States. A video captured by 404 Media showed SpongeBob dressed as Adolf Hitler.

Paramount, Warner Bros., and Pokémon Co. did not respond to requests for comment.

David Karpf, an associate professor at George Washington University’s School of Media and Public Affairs, said he has seen videos of copyrighted characters endorsing cryptocurrency scams. He believes it’s clear that OpenAI’s safeguards for its Sora video tool are failing.

Karpf remarked, “The guardrails aren’t real if people are already using copyrighted characters to push fake crypto scams. In 2022, tech companies would have emphasized hiring content moderators. By 2025, they’ve decided they don’t care.”

Just before OpenAI launched Sora 2, the company contacted talent agencies and studios, informing them they would need to opt out if they didn’t want their copyrighted material replicated by the video generator, as reported by the Wall Street Journal.

OpenAI told the Guardian that content owners can report copyright infringement using a disputes form, but individual artists or studios cannot opt out entirely. Varun Shetty, OpenAI’s head of media partnerships, stated, “We’ll work with rights holders to block characters from Sora upon request and address takedown requests.”

Emily Bender, a professor at the University of Washington and author of The AI Con, warned that Sora is creating a dangerous environment where it’s “harder to find trustworthy sources and harder to trust them once found.” She described synthetic media tools as a “scourge on our information ecosystem,” comparing their impact to an oil spill that erodes trust in technical and social systems.

Nick Robins-Early contributed to this report.

Frequently Asked Questions

General / Beginner Questions

1. What is OpenAI's Sora?
Sora is an AI model from OpenAI that can create realistic and imaginative video clips from a simple text description.

2. What is the main safety issue people are talking about with Sora?
Despite safety rules, users have been able to create and share violent and racist videos using Sora, showing that its current protections aren't working as intended.

3. Why is this a big deal?
This is a major concern because it shows the technology can be easily misused to spread harmful content, which could lead to real-world harassment, misinformation, and other serious harms.

4. Didn't OpenAI say they were testing Sora for safety?
Yes. OpenAI stated it was taking safety seriously and conducting rigorous testing with a small group of experts before a wider release. The appearance of this harmful content suggests those initial safety measures were not enough to prevent misuse.

5. What is OpenAI doing about this now?
OpenAI has acknowledged the problem and stated it is working to strengthen its safety guardrails, including improving its content filters and usage policies to block the creation of such harmful material.

Advanced / In-Depth Questions

6. How are people bypassing Sora's safety filters?
Users are likely using a technique called "jailbreaking," where they rephrase their requests or use coded language to trick the AI into ignoring its safety guidelines and generating prohibited content.

7. What are safety guardrails in AI like Sora?
Safety guardrails are a set of rules, filters, and classifiers built into the AI to prevent it from generating harmful, biased, or unsafe content. They are supposed to block prompts related to violence, hate speech, and other policy violations.
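To make the idea concrete, here is a minimal, purely hypothetical sketch in Python of the crudest kind of guardrail, a keyword-based prompt filter. It is not OpenAI's actual system (which is not public); real guardrails rely on trained classifiers and multi-stage review, and the sketch only illustrates why simple filters are easy to bypass:

```python
# Illustrative sketch only: a toy prompt filter showing the general idea
# behind "guardrails." Production systems use trained classifiers and
# multiple review stages, not a hand-written keyword list like this.

BLOCKED_TOPICS = {"mass shooting", "bomb threat"}  # hypothetical examples

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt mentions a blocked topic."""
    text = prompt.lower()
    return not any(topic in text for topic in BLOCKED_TOPICS)

# A naive substring check is exactly why "jailbreaking" works:
# rephrasing the request slips past the list entirely.
print(is_prompt_allowed("a peaceful beach at sunset"))          # True
print(is_prompt_allowed("news footage of a bomb threat"))       # False
print(is_prompt_allowed("an explosive device scare downtown"))  # True (bypassed)
```

The last example shows the core weakness the FAQ describes: any filter keyed to surface wording, rather than meaning, can be evaded with a paraphrase.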

8. Is this problem unique to Sora, or do other AI video generators have it too?
This is a challenge for the entire generative AI industry. However, this incident with Sora has highlighted how difficult it is to build reliable safety systems, even for a leading company like OpenAI with extensive resources.

9. What's the difference between a technical failure and a policy failure in this context?
A technical failure means the guardrails exist but can be circumvented, for example through jailbreaking. A policy failure means the rules themselves, or the will to enforce them, are lacking; critics like David Karpf argue the latter is at play, noting that companies have moved away from investing in content moderation.