
Movie Gen: A Cast of Media Foundation Models

Authors :
Polyak, Adam
Zohar, Amit
Brown, Andrew
Tjandra, Andros
Sinha, Animesh
Lee, Ann
Vyas, Apoorv
Shi, Bowen
Ma, Chih-Yao
Chuang, Ching-Yao
Yan, David
Choudhary, Dhruv
Wang, Dingkang
Sethi, Geet
Pang, Guan
Ma, Haoyu
Misra, Ishan
Hou, Ji
Wang, Jialiang
Jagadeesh, Kiran
Li, Kunpeng
Zhang, Luxin
Singh, Mannat
Williamson, Mary
Le, Matt
Yu, Matthew
Singh, Mitesh Kumar
Zhang, Peizhao
Vajda, Peter
Duval, Quentin
Girdhar, Rohit
Sumbaly, Roshan
Rambhatla, Sai Saketh
Tsai, Sam
Azadi, Samaneh
Datta, Samyak
Chen, Sanyuan
Bell, Sean
Ramaswamy, Sharadh
Sheynin, Shelly
Bhattacharya, Siddharth
Motwani, Simran
Xu, Tao
Li, Tianhe
Hou, Tingbo
Hsu, Wei-Ning
Yin, Xi
Dai, Xiaoliang
Taigman, Yaniv
Luo, Yaqiao
Liu, Yen-Cheng
Wu, Yi-Chiao
Zhao, Yue
Kirstain, Yuval
He, Zecheng
He, Zijian
Pumarola, Albert
Thabet, Ali
Sanakoyeu, Artsiom
Mallya, Arun
Guo, Baishan
Araya, Boris
Kerr, Breena
Wood, Carleigh
Liu, Ce
Peng, Cen
Vengertsev, Dimitry
Schonfeld, Edgar
Blanchard, Elliot
Juefei-Xu, Felix
Nord, Fraylie
Liang, Jeff
Hoffman, John
Kohler, Jonas
Fire, Kaolin
Sivakumar, Karthik
Chen, Lawrence
Yu, Licheng
Gao, Luya
Georgopoulos, Markos
Moritz, Rashel
Sampson, Sara K.
Li, Shikai
Parmeggiani, Simone
Fine, Steve
Fowler, Tara
Petrovic, Vladan
Du, Yuming
Publication Year :
2024

Abstract

We present Movie Gen, a cast of foundation models that generates high-quality, 1080p HD videos with different aspect ratios and synchronized audio. We also show additional capabilities such as precise instruction-based video editing and generation of personalized videos based on a user's image. Our models set a new state-of-the-art on multiple tasks: text-to-video synthesis, video personalization, video editing, video-to-audio generation, and text-to-audio generation. Our largest video generation model is a 30B parameter transformer trained with a maximum context length of 73K video tokens, corresponding to a generated video of 16 seconds at 16 frames per second. We show multiple technical innovations and simplifications on the architecture, latent spaces, training objectives and recipes, data curation, evaluation protocols, parallelization techniques, and inference optimizations that allow us to reap the benefits of scaling pre-training data, model size, and training compute for training large-scale media generation models. We hope this paper helps the research community to accelerate progress and innovation in media generation models. All videos from this paper are available at https://go.fb.me/MovieGenResearchVideos.
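The abstract relates the 73K-token context length to a 16-second clip at 16 frames per second. As a hedged sketch of how such a figure can arise, the arithmetic below assumes an 8x temporal and 8x spatial latent compression plus a 2x2 patchification at a 768x768 generation resolution; these factors and the resolution are illustrative assumptions for this sketch, not details stated in this record.

```python
# Hedged sketch: reproduce a ~73K-token context from "16 seconds at
# 16 frames per second". Compression factors, patch size, and the
# 768x768 resolution are assumptions made for illustration only.

def video_token_count(seconds, fps, height, width,
                      t_compress=8, s_compress=8, patch=2):
    """Tokens a latent-space transformer would see for one video clip."""
    frames = seconds * fps                    # raw frames: 16 * 16 = 256
    latent_t = frames // t_compress           # assumed 8x temporal compression
    latent_h = height // s_compress // patch  # assumed 8x spatial compression
    latent_w = width // s_compress // patch   #   plus 2x2 patchification
    return latent_t * latent_h * latent_w

tokens = video_token_count(seconds=16, fps=16, height=768, width=768)
print(tokens)  # 73728, i.e. roughly the 73K video tokens quoted above
```

Under these assumed factors the count comes out to 73,728 tokens, consistent in order of magnitude with the 73K figure; other compression/resolution choices would change the exact number.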

Details

Database :
arXiv
Publication Type :
Report
Accession Number :
edsarx.2410.13720
Document Type :
Working Paper