Gen-3 Alpha is a new neural network for video generation


Runway released its first AI video-generation model last year, but it was soon overshadowed by other powerful projects. The company has now responded to its competitors with Gen-3 Alpha. According to Runway, the new video-generation neural network is capable of "representing and simulating various situations and interactions encountered in the real world." It can create video clips up to 10 seconds long with fine-grained control over details such as camera movement and characters' emotions. A release date has not yet been announced.



Users can generate videos in Text-to-Video mode. Several additional editing tools are also available:
- Motion Brush - a brush for marking objects in the image that the neural network should animate.
- Advanced Camera Control - adjusts the camera position and enables camera fly-through effects.
- Director Mode - an interface for conveniently managing generation parameters and instantly previewing results, with editing capabilities.
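To make the workflow above concrete, here is a minimal sketch of what assembling a text-to-video request for such a service might look like. The function name, field names, and model identifier are illustrative assumptions, not Runway's documented API; only the 10-second clip limit comes from the announcement.

```python
import json

def build_text_to_video_request(prompt: str, duration_s: int = 10) -> dict:
    """Assemble a request body for a hypothetical text-to-video endpoint."""
    # Gen-3 Alpha clips are stated to run up to 10 seconds.
    if not 1 <= duration_s <= 10:
        raise ValueError("duration_s must be between 1 and 10 seconds")
    return {
        "model": "gen-3-alpha",      # assumed model identifier
        "mode": "text_to_video",     # assumed mode name
        "prompt": prompt,
        "duration_seconds": duration_s,
    }

payload = build_text_to_video_request(
    "a slow camera fly-through of a rain-soaked neon city", duration_s=8
)
print(json.dumps(payload, indent=2))
```

In a real integration, this payload would be sent to the provider's generation endpoint and the resulting clip polled for asynchronously; the exact shape depends on the API Runway eventually publishes.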


Runway emphasizes that Gen-3 Alpha is equipped with a visual content moderation system and complies with safety rules designed to prevent the generation of illegal content.


In addition to the features listed above, Runway offers a service in which the company's engineers train the neural network on additional datasets. This will be useful for film companies and animation studios that plan to use generative neural networks in their work.
