Meta V-JEPA 2: Meta Introduces Advanced 1.2 Billion-Parameter World Model for Visual Understanding and Prediction, Enabling Robot Interaction With Unfamiliar Objects and Environments
Meta has introduced V-JEPA 2, a new advanced 1.2 billion-parameter world model designed for visual understanding and prediction. Trained primarily on video, V-JEPA 2 enables robots to interact with unfamiliar objects and environments by imagining the consequences of an action before taking it.

Meta AI has launched a new world model, V-JEPA 2, that delivers state-of-the-art performance in visual understanding and prediction of the physical world. The Meta Video Joint Embedding Predictive Architecture 2 (V-JEPA 2) is a 1.2 billion-parameter model that improves on the first model, V-JEPA, shared in 2024. The new model enables robots to interact with unfamiliar (unseen) objects and environments in order to complete a task. Trained primarily on video, the Meta V-JEPA 2 model imagines the potential consequences before taking action, supporting understanding, prediction and planning.
Meta V-JEPA 2 Released, Trained on Video for Robot Interactions and Action Predictions
(SocialLY brings you all the latest breaking news, fact checks and information from the social media world, including Twitter (X), Instagram and YouTube. The above post contains publicly available embedded media, directly from the user's social media account, and the views appearing in the social media post do not reflect the opinions of LatestLY.)