- cross-posted to:
- technology@lemmy.zip
- stablediffusion@lemmit.online
On Tuesday at Google I/O 2024, Google announced Veo, a new AI video-synthesis model that can create HD videos from text, image, or video prompts, similar to OpenAI’s Sora. It can generate 1080p videos lasting over a minute and edit videos from written instructions, but it has not yet been released for broad use.
This is the best summary I could come up with:
Veo’s example videos include a cowboy riding a horse, a fast tracking shot down a suburban street, kebabs roasting on a grill, a time-lapse of a sunflower opening, and more.
Conspicuously absent are any detailed depictions of humans, which have historically been tricky for AI image and video models to generate without obvious deformations.
Google says that Veo builds upon the company’s previous video-generation models, including Generative Query Network (GQN), DVD-GAN, Imagen-Video, Phenaki, WALT, VideoPoet, and Lumiere.
While the demos seem impressive at first glance (especially compared to Will Smith eating spaghetti), Google acknowledges that AI video generation remains difficult.
But the company is confident enough in the model that it is working with actor Donald Glover and his studio, Gilga, to create an AI-generated demonstration film that will debut soon.
Initially, Veo will be accessible to select creators through VideoFX, a new experimental tool available on Google’s AI Test Kitchen website, labs.google.
The original article contains 701 words, the summary contains 150 words. Saved 79%. I’m a bot and I’m open source!