Facebook parent Meta announces AI video generator Make-a-Video
The parent company of Facebook, Meta, has introduced a new tool that lets users create videos using artificial intelligence. The videos can then be shared on Facebook.
The latest addition to the push for AI-generated art is a tool called Make-A-Video, which is not yet available to the public but will soon let users generate videos from a text prompt.
The videos last little more than five seconds and have no soundtrack, but they represent a significant leap for AI-generated art, from still images to video clips.
The tool was developed by a team of machine learning engineers at Meta, who published a paper on their findings on arXiv, the preprint repository operated by Cornell University.
The company has also released sample videos produced with the program, along with the written prompts used to generate them.
“This is fairly incredible forward movement. It is a lot more difficult to generate video than it is to generate still images, because the system not only needs to correctly generate each pixel, but it also needs to predict how they’ll change over time,” Mark Zuckerberg, Meta’s chief executive officer and co-founder of Facebook, said in a statement.
“Make-A-Video provides a solution to this problem by including an additional layer of unsupervised learning. This gives the system the ability to comprehend motion in the real world and apply this comprehension to the process of conventional text-to-image production. In the future, we intend to present this as a demonstration. Please take some time to enjoy the films in the meantime.”
The success of Meta’s AI model is expected to encourage other businesses and institutions to boost their investments in AI-generated video.
Late last month, artwork created by a New York City artist using a technique known as latent diffusion was granted the first known copyright registration of its kind.
According to a statement published on their Instagram account, Kris Kashtanova was granted a copyright for a graphic novel titled Zarya of the Dawn, which was created with the help of the commercial AI art generator Midjourney. UPI confirmed the copyright through public records.
Kashtanova’s registration is the first known to involve models driven by latent diffusion. AI-generated artwork may have been registered with the United States Copyright Office before, but this is the first known instance involving the technique.
Unlike earlier models, Meta’s model is trained without human supervision, learning from unlabeled video footage as well as paired images and captions.
To generate video, it starts from visual static and progressively denoises it until the image described in the prompt emerges.
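The denoising idea can be illustrated with a toy sketch. This is not Meta’s actual model: the real system uses a learned neural denoiser, while here a simple blend toward a stand-in target image plays that role, just to show how repeated denoising steps turn pure static into a coherent picture.

```python
import numpy as np

# Toy illustration of iterative denoising (NOT Meta's implementation):
# a stand-in "denoiser" nudges pure noise toward a target image over
# many small steps, mimicking how diffusion models refine static into
# a picture matching the prompt.

rng = np.random.default_rng(0)

# Stand-in for "the image described by the prompt" (4x4 grayscale).
target = np.linspace(0.0, 1.0, 16).reshape(4, 4)

# Start from pure visual static.
frame = rng.standard_normal((4, 4))

for step in range(50):
    # A real model predicts the noise to remove at each step;
    # here we simply blend a fraction of the way toward the target.
    frame = frame + 0.2 * (target - frame)

# After enough steps the static has been denoised into the target.
error = float(np.abs(frame - target).max())
print(error < 0.01)
```

In a real diffusion model the blend step is replaced by a network trained to predict and subtract noise, conditioned on the text prompt; the looping structure, however, is the same.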
“Make-A-Video research builds on recent progress in text-to-image generation technology designed to enable text-to-video generation,” Meta said on the Make-A-Video website.
“The system learns what the world looks like by pairing photographs with descriptions of them, and it watches unlabeled videos to figure out how the world moves. With just a few words or lines of text, Make-A-Video brings your imagination to life by generating quirky, one-of-a-kind videos.”
News source: UPI