We explore large-scale training of generative models on video data. Specifically, we train text-conditional diffusion models jointly on videos and images of variable durations, resolutions, and aspect ratios. We leverage a transformer architecture that operates on spacetime patches of video and image latent codes. Our largest model, Sora, is capable of generating a minute of high-fidelity video. Our results suggest that scaling video generation models is a promising path towards building general-purpose simulators of the physical world.
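The core data representation here is the spacetime patch: a latent video tensor is carved into small blocks spanning both time and space, each flattened into a token for the transformer. The report does not specify the exact operation, so the following is a minimal sketch of one plausible patchification, assuming a latent tensor of shape `(T, H, W, C)` and hypothetical patch sizes `pt`, `ph`, `pw`:

```python
import numpy as np

def to_spacetime_patches(latents, pt, ph, pw):
    """Split a latent video tensor (T, H, W, C) into a flat sequence
    of spacetime-patch tokens, each of dimension pt*ph*pw*C.

    This is an illustrative sketch, not Sora's actual implementation;
    pt/ph/pw (temporal/height/width patch sizes) are assumed parameters.
    """
    T, H, W, C = latents.shape
    assert T % pt == 0 and H % ph == 0 and W % pw == 0, "dims must divide evenly"
    # carve each axis into (num_patches, patch_size) pairs
    x = latents.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
    # bring the three patch-index axes to the front, patch contents to the back
    x = x.transpose(0, 2, 4, 1, 3, 5, 6)
    # flatten: one row per spacetime patch, one column per latent value
    return x.reshape(-1, pt * ph * pw * C)

# Example: 8 latent frames of 32x32 with 4 channels, 2x4x4 patches
tokens = to_spacetime_patches(np.zeros((8, 32, 32, 4)), pt=2, ph=4, pw=4)
# yields 4*8*8 = 256 tokens, each of dimension 2*4*4*4 = 128
```

Because the token count is just the product of per-axis patch counts, the same scheme handles variable durations, resolutions, and aspect ratios: a shorter or narrower video simply produces a shorter token sequence.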