
Runway unveils new hyperrealistic AI video model Gen-3 Alpha


New York City-based Runway ML, also known as Runway, was among the earliest startups to focus on realistic, high-quality generative AI video models.

But following the debut of its Gen-1 model in February 2023 and Gen-2 in June 2023, the company has since seen its star eclipsed by other highly realistic AI video generators, namely OpenAI’s still-unreleased Sora model and Luma AI’s Dream Machine model, released last week.

That changes today, however, as Runway hits back in the generative AI video wars in a big way: it announced Gen-3 Alpha, which it describes in a blog post as the “first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training” and “a step towards building General World Models,” or AI models that can “represent and simulate a wide range of situations and interactions, like those encountered in the real world.” See sample videos made with Gen-3 Alpha by Runway throughout this article.

Gen-3 Alpha allows users to generate detailed, highly realistic video clips up to 10 seconds in length, with high precision and a range of emotional expressions and camera movements.


According to an email from a Runway spokesperson to VentureBeat: “This initial rollout will support 5 and 10-second generations, with noticeably faster generation times. A 5-second clip takes 45 seconds to generate, and a 10-second clip takes 90 seconds to generate.”

No precise release date has been given for the model, with Runway so far showing only demo videos on its website and its account on X, and it is unclear whether the model will be available through Runway’s free tier or require a paid subscription, which starts at $15 per month or $144 per year.

After this article was published, VentureBeat interviewed Runway co-founder and chief technology officer (CTO) Anastasis Germanidis, who confirmed the new Gen-3 Alpha model would be available in “days” to paying Runway subscribers first, with the free tier slated to get the model at a yet-to-be-announced point in the future.

A Runway spokesperson echoed this pronouncement, emailing VentureBeat to say: “Gen-3 Alpha will be available in the coming days, available to paid Runway subscribers, our Creative Partners Program, and Enterprise users.”

On LinkedIn, Runway user Gabe Michael said he expected to receive access later this week.

On X, Germanidis wrote that Gen-3 Alpha “will soon be available in the Runway product, and will power all the existing modes that you’re used to (text-to-video, image-to-video, video-to-video), and some new ones that are only now possible with a more capable base model.”

Germanidis also wrote that since releasing Gen-2 in 2023, Runway learned “video diffusion models are nowhere close to saturating performance gains from scaling, and that those models, in learning the task of predicting video, build really powerful representations of the visual world.”

Diffusion is the process by which an AI model is trained to reconstruct visuals (still or moving) of given concepts from pixelated “noise,” having learned those concepts from annotated image/video and text pairs.
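
To make the idea more concrete, here is a minimal, illustrative sketch of a diffusion-style sampling loop in Python. This is not Runway’s code: the toy denoiser and all names below are hypothetical stand-ins for the large trained neural network a real system would use.

```python
# Toy sketch of diffusion-style sampling: start from pure noise and
# repeatedly remove a little of it, guided by a (here, trivial) denoiser.
# Illustration of the concept only -- not Runway's actual model or code.
import numpy as np

def toy_denoiser(noisy_image, step, total_steps):
    # A real denoiser is a neural network trained on annotated
    # image/video-text pairs; here we simply nudge pixels toward a
    # flat gray "concept" so the loop structure is visible.
    target = np.full_like(noisy_image, 0.5)
    blend = step / total_steps
    return noisy_image * (1 - blend) + target * blend

def sample(shape=(8, 8), total_steps=50, seed=0):
    rng = np.random.default_rng(seed)
    image = rng.normal(size=shape)  # begin with pure random noise
    for step in range(1, total_steps + 1):
        # Each pass removes a bit more noise from the current estimate.
        image = toy_denoiser(image, step, total_steps)
    return image

if __name__ == "__main__":
    print(sample().round(2))
```

In a real video diffusion model, the denoiser is conditioned on a text prompt and operates over entire clips rather than a single small array, which is what makes scaling such models so computationally demanding.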

Runway says in its blog post that Gen-3 Alpha is “trained jointly on videos and images” and “was a collaborative effort from a cross-disciplinary team of research scientists, engineers, and artists,” though specific datasets have not been disclosed. This follows the trend among most other leading AI media generators, which also decline to say precisely what data their models were trained on and whether any of it was procured through paid licensing deals or simply scraped from the web.

Critics argue AI model makers should be paying the original creators of their training data through licensing deals and have even filed copyright infringement lawsuits to this effect, but AI model companies by and large take the stance that they are legally allowed to train on any publicly posted data.

Runway’s spokesperson emailed VentureBeat the following response when asked what training data was used for Gen-3 Alpha: “We have an in-house research team that oversees all of our training and we use curated, internal datasets to train our models.”

Interestingly, Runway also notes that it has already been “collaborating and partnering with leading entertainment and media organizations to create custom versions of Gen-3,” which “allows for more stylistically controlled and consistent characters, and targets specific artistic and narrative requirements, among other features.”

No specific organizations are mentioned, but the filmmakers behind acclaimed and award-winning films such as Everything Everywhere All at Once and The People’s Joker have previously disclosed that they used Runway to create effects for portions of their films.

Runway’s Gen-3 Alpha announcement includes a form through which other organizations interested in getting their hands on custom versions of the new model can apply. No price has been publicly posted for how much training a custom model costs.

Meanwhile, it’s clear that Runway is not giving up the fight to be a dominant player in the fast-moving generative AI video creation space.

