Toronto, April 16: The Chinese government has deployed a powerful artificial intelligence tool against its own population, using AI-powered facial recognition software to track Uighur Muslims in public spaces.
It is the first known example of a government intentionally using artificial intelligence for racial profiling, experts told the New York Times. The facial recognition technology has been integrated into the government's surveillance networks and deployed specifically against the Uighurs, a largely Muslim ethnic minority concentrated in China's Xinjiang autonomous region. China has been accused of mass detention and human rights abuses against the Uighur population, but this is the first report to detail the extensive surveillance network the Chinese state has deployed against them.
The NYT reports that the technology has been deployed in parts of China well beyond Xinjiang, including the cities of Hangzhou and Wenzhou and the province of Fujian. The software tracks Uighurs' movements and converts them into data points, building a record of where individuals have been over an extended period.
The report says that Chinese police departments and technology companies described the practice as "minority identification," though the phrase is understood to be a euphemism for a tool designed exclusively to identify people of Uighur ethnicity. The Chinese A.I. companies behind the software include Yitu, Megvii, SenseTime, and CloudWalk, each valued at more than $1 billion.
With its Uighur monitoring tool, China has positioned itself at the forefront of facial recognition technology powered by machine learning. The NYT quoted Chinese tech investor Kai-Fu Lee on the leaps and bounds made in this field. Lee said China has an advantage in developing A.I. because its leaders are less fussed by "legal intricacies" or "moral consensus." "We are not passive spectators in the story of A.I. — we are the authors of it," Lee wrote last year. "That means the values underpinning our visions of an A.I. future could well become self-fulfilling prophecies."