
Gone are the days when you could spot a fake online by obvious flaws, like a sloppily edited photo. Today we are inundated with AI-generated content and deepfakes, from counterfeit celebrity endorsements to fabricated emergency alerts, and the technology has blurred the line between reality and fiction to the point that telling what's authentic is genuinely difficult.
The problem is escalating fast with OpenAI's Sora 2 model and its accompanying viral social app, which has quickly earned a reputation as one of the internet's most deceptive platforms. Described by some as a "deepfake fever dream," the app serves a TikTok-style feed devoted entirely to fabricated content, and it keeps getting better at making fantasy look real, with serious real-world consequences.
If you find it hard to tell genuine content from AI-generated content, you are not alone. Here are some practical tips to help you cut through the flood of AI creations and get at the truth.
On technical merit, videos made with Sora stand out against competitors such as Midjourney's V1 video model and Google's Veo 3: high resolution, synchronized audio and striking creativity. Sora's standout "cameo" feature lets users drop another person's likeness into nearly any AI-generated scene, producing remarkably lifelike videos.
The broad concern is that Sora makes it easy to create harmful deepfakes, spread misinformation and erode the line between truth and falsehood. Public figures and celebrities are especially vulnerable, which is why organizations like SAG-AFTRA have urged OpenAI to strengthen its protective measures.
Reliably recognizing AI-generated content remains an unsolved problem for technology companies, social media platforms and the public alike. Still, there are indicators that can help you determine whether a video was made with Sora.
Every video downloaded from the Sora iOS app carries a watermark: a white, cloud-like Sora logo that bounces around the edges of the frame, much as TikTok watermarks its videos. It's the quickest visual cue that a clip is AI-generated.
Watermarks are useful clues, but they are not proof. Static watermarks can be cropped out, and even moving ones like Sora's can be scrubbed with specialized apps, so a missing watermark doesn't mean a video is real.
Examining a video's metadata might sound daunting, but it's a genuinely useful way to check authenticity. Metadata is information embedded automatically when content is created, and it records details about how an image or video was made.
OpenAI attaches C2PA content credentials to Sora videos, so running one through a metadata-inspection tool such as the Content Authenticity Initiative's Verify tool should show that it was "issued by OpenAI," confirming the video is AI-generated.
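To make the idea concrete, here is a minimal Python sketch of the kind of check a verification tool performs once it has extracted a provenance manifest from a file. The field names (`issuer`, `claim_generator`) are illustrative stand-ins, not the exact Content Credentials schema:

```python
import json

def looks_ai_generated(manifest_json: str) -> bool:
    """Inspect a provenance manifest (as JSON) for signs of AI generation.

    NOTE: "issuer" and "claim_generator" are hypothetical field names used
    for illustration; real C2PA manifests use a more detailed schema.
    """
    manifest = json.loads(manifest_json)
    issuer = manifest.get("issuer", "")
    generator = manifest.get("claim_generator", "")
    # A credential issued by OpenAI, or a claim generated by Sora,
    # indicates AI-generated content.
    return "OpenAI" in issuer or "Sora" in generator

# Example: a manifest claiming OpenAI as issuer flags as AI-generated.
sample = json.dumps({"issuer": "OpenAI", "claim_generator": "Sora"})
print(looks_ai_generated(sample))  # → True
```

In practice you would rely on a dedicated verifier rather than rolling your own, since real credentials are cryptographically signed and must be validated, not just read.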
If you're on a Meta-owned platform like Instagram or Facebook, you may get some help: Meta has internal systems that flag suspected AI content. They aren't flawless, but posts they catch are clearly labeled as AI-generated.
Ultimately, the most reliable confirmation is disclosure from the creator, and many social media platforms now let users label their own posts as AI-generated.
There is no foolproof way to tell a real video from an AI-generated one at a glance. The best protection is simple: don't take anything you see online at face value without a critical second look.
