World News | Singaporeans Face Rising Number of Harmful Content Encounters on Social Media
Singapore, Jul 25 (PTI) Singaporeans are facing a rise in the number of harmful content encounters on social media platforms, a government survey has found.
While cyberbullying and sexual content remained the most common, there was a significant climb in content that incited racial or religious tension, as well as violent content, the Ministry of Digital Development and Information (MDDI) said on Thursday.
Social media also carried more harmful content than other platforms, such as messaging apps, search engines and gaming platforms, Channel News Asia reported, citing the survey.
MDDI, previously known as the Ministry of Communications and Information, conducted the annual Online Safety Poll in April.
It surveyed 2,098 Singapore respondents aged 15 and above to understand their experiences with harmful online content and the actions they took to address it.
It included social media services designated by the Infocomm Media Development Authority (IMDA) under the Code of Practice for Online Safety, namely Facebook, HardwareZone, Instagram, TikTok, X and YouTube.
About three-quarters (74 per cent) of those polled encountered harmful content online, an increase from 65 per cent last year.
Two-thirds of respondents (66 per cent) encountered the content on the said social media platforms, up from 57 per cent last year.
In comparison, 28 per cent came across such content on other platforms, such as messaging websites and apps, search engines, email, news websites, gaming platforms and app stores, said MDDI. This was similar to last year's level.
Cyberbullying and sexual content remained the most common types of harmful content on social media, with 45 per cent of respondents encountering them, according to the survey as reported by the Channel.
However, there was a “notable increase” from last year in encounters with content that incited racial or religious tension (13 per cent increase) and violent content (19 per cent increase), said MDDI.
Close to 60 per cent of respondents came across harmful content on Facebook, while 45 per cent encountered it on Instagram. Both platforms are owned by Meta.
“While the prevalence of harmful content on these platforms may be partially explained by their bigger user base compared to other platforms, it also serves as a reminder of the bigger responsibility these platforms bear,” the Channel quoted MDDI as saying.
When taking action against harmful social media content, only a quarter of respondents reported to the platform. About one-third blocked the offending account or user.
Eight in 10 of those who tried making reports experienced issues with the reporting process, noted MDDI.
These included the platforms not removing the content in question or disabling the account responsible, not providing an update on the outcome, and also allowing the removed content to be posted again.
However, six in 10 respondents simply ignored the harmful content without taking further action.
Commonly cited reasons included respondents not seeing the need to do anything, being unconcerned about the issue, or believing that making a report would not make a difference.
“Given the complex, dynamic and multi-faceted nature of online harms, the government, industry, and people must work together to build a safer online environment,” MDDI was quoted as saying.
Amendments to the Broadcasting Act took effect in February last year, allowing the government to quickly disable access to egregious content on the designated social media platforms.
The Code of Practice for Online Safety also came into effect in July last year, requiring the platforms to take steps to minimise children's exposure to inappropriate content.
The platforms are due to submit their first online safety compliance reports by the end of this month, said MDDI.
“It will provide greater transparency to help users understand the effectiveness of each platform in addressing online harms. The IMDA will evaluate their compliance and assess if any requirements need to be tightened,” said the ministry.
“Beyond the Government's legislative moves, the survey findings showed that there is room for all stakeholders, especially designated social media services, to do more to reduce harmful online content and to make the reporting process easier and more effective,” it said.
The ministry also urged users to do their part to act proactively against harmful online content by reporting to the respective platforms.
Workshops, webinars, and family activities are also being organised as part of the IMDA's Digital for Life movement, to provide users with knowledge and tools to keep themselves and their children safe online, said MDDI.
(The above story is verified and authored by Press Trust of India (PTI) staff. PTI, India's premier news agency, employs more than 400 journalists and 500 stringers to cover almost every district and small town in India. The views appearing in the above post do not reflect the opinions of LatestLY)