ByteDance's Seedance 2.0 Shakes Hollywood With Quad-Modal AI Video Generation
A New Standard in AI Video Generation
ByteDance's Seedance 2.0, unveiled on February 12, became the most-discussed AI tool on the internet within 72 hours. The model accepts text prompts, reference images, video clips, and audio files simultaneously — a quad-modal architecture that generates clips of up to 20 seconds at 1080p resolution, with director-level control over motion, lighting, framing, and character consistency. TechCrunch reports that the resulting videos are strikingly close to real studio productions.
Hollywood Fires Back
Within a day of launch, users generated viral clips featuring synthesized celebrities, Disney characters, and scenes nearly indistinguishable from licensed IP. The Motion Picture Association stated that the model engaged in unauthorized use of U.S. copyrighted works on a massive scale. The Walt Disney Company sent ByteDance a cease-and-desist letter on February 13 alleging training on Disney works without compensation, while Paramount Skydance accused ByteDance of blatant infringement of properties including Star Trek and South Park. Deadpool screenwriter Rhett Reese wrote on social media: "it's likely over for us."
ByteDance's Response and Availability
ByteDance pledged to strengthen safeguards following the backlash. The model is currently accessible only in China; international users are awaiting a rollout through CapCut, with no confirmed global release date.