Mark Zuckerberg's Meta has introduced the Perception Language Model (PLM), an open and reproducible vision-language model capable of handling challenging visual tasks. The Meta PLM can watch a video, produce a detailed description of a subject's action, and indicate where that action takes place. The company said the new AI model could help the open-source community build more capable computer vision systems.

Meta PLM (Perception Language Model) Launched for Handling Visual Tasks
