Adobe announced an upgrade to its Firefly video model on Thursday, and introduced new third-party artificial intelligence (AI) models that will be available on its platform. The California-based software maker says it is improving motion in its Firefly video model to make it more natural and smooth. The company is also adding new video controls that let users produce more consistent video output. In addition, Adobe introduced four new third-party models that are being added to Firefly Boards.
In a blog post, the company detailed the new features and tools that Adobe Firefly users will soon receive. These features will only be accessible to paid users, and some of them are currently exclusive to the web app.
Adobe’s Firefly video model already produces videos with realistic, physics-based movement. Now, the company is enhancing it to deliver smoother, more natural motion. The improvement applies to both 2D and 3D content, not only to characters but also to elements such as floating bubbles, rustling leaves, and flowing clouds.
The recently released Firefly app is also getting support for new third-party models. Adobe is introducing image and video models from two additional partners, which will soon be added to Firefly Boards. Meanwhile, Luma AI’s Ray2 and the Pika 2.2 AI model, which are already available on the boards, will soon gain video generation capability (currently, they can only be used to generate images).
Coming to the new video controls, Adobe has added tools that reduce the need for manual, inline edits. The first tool lets users upload a video as a reference, and Firefly will follow its composition in the generated output.
Another new addition is the Style Presets tool. Users generating AI videos can now pick a preset style, such as claymation, line art, or 2D, and Firefly will follow that style directive in the final output. Keyframe cropping is also available at the generation stage: users can upload the first and last frames of a video, and Firefly will generate a video that matches their format and aspect ratio.
In addition, Adobe is introducing a new tool in beta, dubbed Generate Sound Effects. The tool lets users create custom audio using a voice or text prompt and add it to AI-generated videos. When using their own voice, users can also control the timing and intensity of the sound, as Firefly generates the custom audio based on the voice’s energy and rhythm.
Finally, the company is also introducing a Text to Avatar feature that turns a script into an avatar-led video. Users will be able to pick their preferred avatar from Adobe’s preset library, customize the background, and even select the tone of the generated speech.