News
Meta unveils V-JEPA 2, a groundbreaking AI model trained on video to observe, predict, and reason like humans, bringing smarter, self-learning robots closer to reality.
Dubbed a “world model,” Meta’s new V-JEPA 2 AI model uses visual understanding and physical intuition to enhance reasoning ...
Meta has launched V-JEPA 2, an open-source AI that uses spatial reasoning learned from video to predict physical events, aiding real-world tasks like self-driving cars.
Meta’s new V-JEPA 2 AI helps robots predict and plan using nothing but raw video data. These machines used V-JEPA 2 to pick up unfamiliar objects, reach for targets, and place items in new locations. This marks a step forward in enabling robots to function in unpredictable environments.
V-JEPA 2 is an AI world model that works differently from large language models – it learns independently. Meta sees this as the future.
V-JEPA 2 is an extension of the V-JEPA model that Meta released last year, which was trained on over 1 million hours of video. This training data is supposed to help robots or other AI agents ...
Meta has unveiled V-JEPA 2, a clever bit of AI that gives robots something approaching common sense about the physical world.
Meta today introduced V-JEPA 2, a 1.2-billion-parameter world model trained primarily on video to support robotic systems.
Meta challenges rivals with V-JEPA 2, its new open-source AI world model. By learning from video, it aims to give robots physical common sense for advanced, real-world tasks.