YouTube is rolling out an expanded “likeness detection” tool to protect government officials, journalists and political candidates from unauthorised AI impersonations, including deepfakes. Originally designed for creators in the YouTube Partner Program, the feature scans for AI-generated content that mimics a person’s likeness, allowing verified users to review it and request removal if it violates the platform’s privacy policies. The pilot programme prioritises free expression, exempting parody and satire, and does not enforce automatic takedowns. Identity verification is mandatory, and the collected data will not be used to train Google’s AI models. YouTube is also supporting legislation such as the NO FAKES Act as concerns over AI misuse continue to grow.
YouTube 'AI Likeness' Detection Expanded for Journalists, Politicians
We’re expanding likeness detection on @YouTube to government officials, journalists and political candidates. This tool provides a new powerful way to manage unauthorized AI-impersonation — like deepfakes — and request removal if it violates our privacy guidelines.
— News from Google (@NewsFromGoogle) March 10, 2026
(SocialLY brings you all the latest breaking news, fact checks and information from the social media world, including Twitter (X), Instagram and YouTube. The above post contains publicly available embedded media, taken directly from the user's social media account, and the views appearing in the social media post do not reflect the opinions of LatestLY.)