ByteDance unveiled two new video generators at an event in Shenzhen last week — PixelDance and Seaweed.
PixelDance focuses on AI-driven character animation, generating 10-second videos of characters with lifelike human movements such as walking, turning, picking up objects, and interacting with their environment. The model keeps character appearance, proportions, and scene details consistent across varying camera angles and shots, a common sticking point for users of existing models. With a single text prompt, users can direct complex camera movements like 360-degree pans, zooms, and tracking shots.
Seaweed offers similar features but stretches video generation to 30 seconds, with the ability to create up to 2 minutes of consistent shots. Both models are in an invite-only testing phase and available to only a limited number of users; however, they could be made publicly available next month.
ByteDance isn't the only one entering the generative AI video race…
Last week, Meta unveiled Movie Gen, a generative AI tool that uses text inputs to automatically generate videos and audio up to 16 seconds in length, as well as edit existing footage and still images. Because the audio added to videos is also AI-generated, it can match the imagery with ambient noise, sound effects, and background music.
Meta says Movie Gen can also create custom videos from images or change elements of an existing video. For example, the company showed a still headshot of a woman transformed into a video of her sitting in a pumpkin patch sipping a drink.
As impressive as Movie Gen is, Meta's chief product officer, Chris Cox, wrote on Threads that the company isn't ready to release it anytime soon: generation is still expensive and takes too long.
ByteDance's and Meta's new generative AI video models will eventually compete with OpenAI's Sora, Kuaishou's Kling AI, Pika Labs, and other pioneers in the space.