AI Can Steal Your Fingerprints From Selfies: Why Peace Sign Photos May Put Your Identity at Risk
What looks like a harmless peace sign selfie could now become a serious cybersecurity threat. Security experts are warning that advances in artificial intelligence are making it easier for hackers to extract fingerprints from ordinary photos shared online.
According to a report by the South China Morning Post, Chinese security expert Li Chang recently demonstrated how AI tools can harvest fingerprint data from selfies where fingers are clearly visible. During a Chinese workplace reality show, Li used a celebrity’s peace sign photo to reveal how much biometric information could be recovered from a simple image.
Li explained that if fingertips directly face the camera from within 1.5 metres, AI-powered enhancement tools can capture highly detailed fingerprint patterns. Even photos taken from up to 3 metres away may still expose partial fingerprint details that hackers can use.
Using photo-editing software and AI enhancement technology, Li sharpened blurry fingerprint images into clearer biometric data. The warning has sparked fresh concerns over digital privacy because fingerprints and facial features are permanent identifiers that cannot easily be changed once compromised.
Cybersecurity experts say stolen biometric data could potentially be used for identity theft, financial fraud, or unauthorised access to secure systems.
The concerns come amid growing fears over AI-powered cybercrime. Recent findings from Google's threat intelligence group revealed that cybercriminals and state-backed actors from countries including China, North Korea, and Russia are increasingly using advanced AI models to scale attacks, create malware, and improve hacking operations.
“Threat actors are using AI to boost the speed, scale, and sophistication of their attacks,” said John Hultquist, chief analyst at Google’s threat intelligence division.
Meanwhile, AI safety concerns intensified after Anthropic reportedly chose not to publicly release its latest AI model, Mythos, due to fears it could be weaponised against governments, financial systems, and critical infrastructure.
Experts now advise users to avoid posting high-resolution selfies with clearly visible fingertips online, especially on public social media accounts, as AI-driven biometric theft becomes an emerging global security risk.
(The above story first appeared on LatestLY on May 16, 2026 03:22 PM IST.)