Runway launches Characters, a GWM-1 video agent API for custom conversational avatars
Original: Introducing Runway Characters
Runway announced Runway Characters on March 9, 2026 as a real-time video agent API for building conversational avatars. The company says the product is powered by GWM-1 and can generate a speaking, listening digital character from a single reference image, without fine-tuning.
According to Runway, developers can choose photorealistic or stylized appearances and then control voice, personality, knowledge, and actions through the API. The company highlights facial expressions, eye movement, lip-sync, and gesture control as part of the real-time interaction loop, framing the launch as a shift from text chat toward video-first interfaces.
- One image can define a character's appearance.
- The API exposes controls for voice, personality, knowledge, and actions.
- Runway says the system can maintain quality across longer conversations.
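Runway has not published the Characters API schema in this announcement, so the sketch below is purely illustrative: every endpoint, field name, and value is an assumption, chosen only to mirror the controls the post describes (a single reference image, plus voice, personality, knowledge, and actions). It shows roughly what a character definition payload might look like.

```python
# Hypothetical sketch only: Runway has NOT published the Characters API
# schema. All field names below are assumptions modeled on the controls
# mentioned in the launch post, not the real API.
import json


def build_character_config(image_url: str) -> dict:
    """Assemble an illustrative character definition from one reference image."""
    return {
        "reference_image": image_url,          # one image defines appearance
        "style": "photorealistic",             # or "stylized", per the post
        "voice": {"id": "warm-neutral", "language": "en"},
        "personality": "Concise, friendly support agent.",
        "knowledge": {"sources": ["https://example.com/kb"]},  # enterprise KB
        "actions": ["create_ticket", "order_lookup"],          # real-time actions
    }


config = build_character_config("https://example.com/agent.png")
print(json.dumps(config, indent=2))
```

In a real integration, a payload like this would presumably be sent to Runway's developer platform; consult the actual documentation at dev.runwayml.com for the genuine schema.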
Runway is positioning Characters for enterprise deployment rather than just demos. The launch post calls out customer support, learning and development, and brand experiences as primary use cases. In those workflows, a character can connect to enterprise knowledge bases, create support tickets, or trigger actions such as order handling in real time. Partners including BBC and Silverside are already using the product, according to the company.
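The post does not specify how a character's actions (ticket creation, order handling) are wired to enterprise systems. As a hedged sketch, one common pattern for this kind of integration is a dispatcher that routes named action events from the avatar session to registered handlers; the registration mechanism and handler names below are hypothetical, not Runway's interface.

```python
# Illustrative only: this action-dispatch pattern is an assumption about
# how an integrator MIGHT route the "create ticket" / "order handling"
# actions the post mentions. It is not Runway's published interface.
from typing import Callable

HANDLERS: dict[str, Callable[[dict], dict]] = {}


def action(name: str):
    """Register a handler for a named character action (hypothetical)."""
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register


@action("create_ticket")
def create_ticket(payload: dict) -> dict:
    # A real deployment would call the ticketing system's API here.
    return {"status": "created", "subject": payload.get("subject", "")}


def dispatch(name: str, payload: dict) -> dict:
    """Route an action event from the avatar session to its handler."""
    if name not in HANDLERS:
        return {"status": "unhandled", "action": name}
    return HANDLERS[name](payload)


result = dispatch("create_ticket", {"subject": "Order never arrived"})
print(result)
```

The registry-plus-dispatch shape keeps business logic (ticketing, order systems) decoupled from whatever event format the avatar runtime actually emits.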
The rollout is split between developers and consumers. Enterprise teams can access the API through Runway's developer platform at dev.runwayml.com starting immediately, while consumers get preset avatars in the Runway web app. The significance is less about a single avatar demo and more about Runway turning its world-model research into a programmable product for customer-facing AI experiences.