Christchurch Terror Attack: Facebook Concedes AI Tools 'Were Not Perfect' to Detect Horrific Live Video

Auckland, March 21: Social media giant Facebook on Thursday conceded that its artificial intelligence (AI) tools "were not perfect" enough to detect the horrific live video streamed by Brenton Tarrant, the 28-year-old Australian who massacred 50 Muslims worshipping at two mosques in New Zealand's Christchurch last Friday.

In a statement issued earlier in the day, Facebook promised to strengthen its AI systems to ensure that such graphic content is removed immediately, before it circulates on its platform. The company, however, ruled out imposing a time delay on Facebook Live, similar to the broadcast delay sometimes used by TV stations.

"There are millions of Live broadcasts daily, which means a delay would not help address the problem due to the sheer number of videos," Guy Rosen, Facebook's Vice President of Integrity, said in a statement.

"More importantly, given the importance of user reports, adding a delay would only further slow down videos getting reported, reviewed and first responders being alerted to provide help on the ground," Rosen added.

With a GoPro camera strapped to his head, the gunman broadcast graphic footage of the New Zealand shooting via Facebook Live for 17 minutes; the video was later shared millions of times on other social media platforms, including Twitter and YouTube.

Fifty people were killed and dozens injured in the shootings at the Al Noor Mosque and the Linwood Avenue Masjid in Christchurch on March 15, when the gunman opened indiscriminate fire on worshippers.

The circulation of the video on social media platforms attracted widespread criticism from different quarters.

In a letter to the CEOs of Facebook, Twitter, YouTube and Microsoft, House Homeland Security Committee Chairman Bennie Thompson asked the technology companies to brief the US Congress on March 27 regarding their response to the dissemination of the video on their platforms.

Thompson also warned the technology companies that unless they do better at removing violent content, Congress could consider policies to bar such content on social media.

Facebook on Thursday said it was exploring how AI could help it react faster to this kind of content in a live-streamed video.

"AI has made massive progress over the years and in many areas, which has enabled us to proactively detect the vast majority of the content we remove. But it's not perfect.

"This particular video did not trigger our automatic detection systems," Rosen said, referring to the New Zealand attack video.

(With IANS inputs)