OpenAI’s New Product Makes Incredibly Realistic Fake Videos

Episode 716, Feb 26, 09:00 PM

A security expert weighs in on Sora, OpenAI’s new text-to-video generator, and the risks it could pose, especially during an election year.

OpenAI, the company behind the chatbot ChatGPT and the image generator DALL-E, unveiled its newest generative AI product last week, called Sora, which can produce extremely realistic video from just a text prompt. In one example released by the company, viewers follow a drone’s-eye view of a couple walking hand-in-hand through snowy Tokyo streets. In another, a woman tosses and turns in bed as her cat paws at her. Unless you’re an eagle-eyed AI expert, it’s nearly impossible to distinguish these artificial videos from those shot by a drone or a smartphone.

Unlike previous OpenAI products, Sora won't be released right away. The company says that, for now, its latest AI will be available only to researchers, and that it will gather input from artists and videographers before releasing Sora to the wider public.

But the fidelity of the videos prompted a polarized response on social media. Some marveled at how far the technology had come, while others expressed alarm at the unintended consequences of releasing such a powerful product to the public, especially during an election year.

Rachel Tobac, an ethical hacker and CEO of SocialProof Security, joins guest host Sophie Bushwick to talk about Sora and what it could mean for the rest of us.

Transcripts for each segment will be available on sciencefriday.com the week after the show airs.

Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.