On Thursday, Meta (META.O) introduced an artificial intelligence model called Meta Motivo, which controls the body movements of human-like digital agents. The company says the model could improve the Metaverse experience by addressing common problems with avatar motion, producing more natural, human-like movement in virtual environments and expanding the potential for fully embodied agents.
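To make the idea of controlling an agent's body concrete, here is a minimal sketch of the kind of control loop such a model could sit inside. Everything in it is an illustrative assumption rather than Meta Motivo's actual design: the dimensions, the frozen linear policy standing in for a large pretrained network, and the faked simulator transition.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM = 64   # proprioceptive state of the humanoid (joint angles, velocities)
ACT_DIM = 22   # joint targets the policy outputs each step
TASK_DIM = 16  # latent "prompt" describing the desired behavior

# Stand-in for a pretrained behavioral model: a frozen linear policy
# mapping (observation, task embedding) -> action. A real system would
# use a large neural network trained on motion data.
W = rng.standard_normal((ACT_DIM, OBS_DIM + TASK_DIM)) * 0.1

def policy(obs: np.ndarray, task: np.ndarray) -> np.ndarray:
    """Return an action for the avatar given its state and a behavior prompt."""
    return np.tanh(W @ np.concatenate([obs, task]))

# One behavior prompt (e.g. "walk forward") reused at every step.
task_embedding = rng.standard_normal(TASK_DIM)

obs = np.zeros(OBS_DIM)
for step in range(5):
    action = policy(obs, task_embedding)
    # In a real setup a physics simulator would apply the action and
    # return the next observation; here we fake that transition.
    obs = 0.9 * obs + 0.1 * rng.standard_normal(OBS_DIM)
    print(f"step {step}: action norm = {np.linalg.norm(action):.3f}")
```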
Meta has invested heavily in AI, augmented reality, and Metaverse technologies, raising its 2024 capital expenditure forecast to a record $37 billion to $40 billion. In keeping with its open approach, the company has also made many of its AI models freely available to developers, encouraging the creation of tools that could ultimately benefit its platforms and services.
Alongside Meta Motivo, Meta introduced the Large Concept Model (LCM), a new approach to language modeling. Where traditional language models predict the next token in a sequence, the LCM predicts the next concept: a high-level idea represented by a full sentence in a multimodal and multilingual embedding space. The aim is to decouple reasoning from language representation, so that the same reasoning step is not tied to any single language or modality.
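As a rough illustration of the token-versus-concept distinction, the sketch below operates on whole-sentence embeddings: it encodes a sentence into a vector, predicts the next vector, and decodes by nearest neighbor. The tiny corpus, random embeddings, and fixed linear predict_next_concept map are hypothetical stand-ins for the trained multilingual encoder and concept model, not the LCM's actual components.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 32  # dimensionality of the shared sentence-embedding space

# Toy "sentence encoder": in the real system this would be a multilingual,
# multimodal encoder mapping whole sentences to fixed-size vectors.
corpus = [
    "The sky darkened over the harbor.",
    "Rain began to fall on the boats.",
    "Fishermen hurried to tie down their nets.",
    "By morning the storm had passed.",
]
embeddings = {s: rng.standard_normal(EMB_DIM) for s in corpus}

def encode(sentence: str) -> np.ndarray:
    return embeddings[sentence]

def decode(vec: np.ndarray) -> str:
    # Nearest-neighbor decoding from the embedding space back to a sentence.
    return max(corpus, key=lambda s: float(embeddings[s] @ vec))

# Stand-in for the trained concept model: predicts the *next sentence
# embedding* from the current one (here, just a fixed random linear map).
M = rng.standard_normal((EMB_DIM, EMB_DIM)) / np.sqrt(EMB_DIM)

def predict_next_concept(vec: np.ndarray) -> np.ndarray:
    return M @ vec

context = encode("The sky darkened over the harbor.")
next_concept = predict_next_concept(context)
print("predicted next sentence:", decode(next_concept))
```

The point of the sketch is the unit of prediction: the model never sees individual tokens, only one vector per sentence, which is what lets the reasoning step sit above any particular surface language.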
Meta also revealed other AI tools, including Video Seal, which embeds an invisible watermark in videos. The watermark is imperceptible to viewers but can later be detected, making video content traceable without visible markers and adding a layer of security and authenticity.
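Video Seal relies on a learned embedder and extractor designed to survive compression and editing; as a far simpler illustration of invisible watermarking in general, the toy sketch below hides a bit string in the least-significant bits of a frame's pixels, changing each affected pixel by at most one intensity level. The function names and the LSB scheme are assumptions for illustration, not Video Seal's method.

```python
import numpy as np

def embed_watermark(frame: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit string in the least-significant bits of the first pixels.

    A toy LSB scheme for illustration only; unlike a learned watermark,
    it would not survive re-encoding or editing of the video.
    """
    marked = frame.copy().reshape(-1)
    marked[: bits.size] = (marked[: bits.size] & 0xFE) | bits
    return marked.reshape(frame.shape)

def extract_watermark(frame: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bits back out of the marked frame."""
    return frame.reshape(-1)[:n_bits] & 1

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)  # one tiny video frame
message = rng.integers(0, 2, size=16, dtype=np.uint8)         # 16-bit payload

marked = embed_watermark(frame, message)
recovered = extract_watermark(marked, message.size)

assert np.array_equal(recovered, message)
# Each pixel changes by at most one intensity level, so the mark is invisible.
print("max pixel change:", int(np.abs(marked.astype(int) - frame.astype(int)).max()))
```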
These advancements highlight Meta’s commitment to pushing the boundaries of AI, virtual reality, and the Metaverse, with the potential to create new, immersive experiences for users and developers alike.