New Delhi, Mar 23 (PTI) Microblogging platform Koo on Thursday announced the launch of new proactive content moderation features, designed to give users a safer social media experience.

The new features developed in-house are capable of proactively detecting and blocking any form of nudity or child sexual abuse materials in less than 5 seconds, labelling misinformation and hiding toxic comments and hate speech on the platform, Koo said in a release.

Twitter-rival Koo said it is committed to providing a safe and positive experience for its users as an inclusive platform built with a language-first approach.

"In order to provide users with a wholesome community and meaningful engagement, Koo has identified a few areas which have a high impact on user safety, namely Child Sexual Abuse Materials and nudity; toxic comments and hate speech; misinformation and disinformation; and impersonation, and is working to actively remove their occurrence on the platform," it said.

The new features are an important step towards achieving this goal.

Mayank Bidawatka, co-founder of Koo, said the platform's mission is to create a friendly social media space for healthy discussions.

"While moderation is an ongoing journey, we will always be ahead of the curve in this area with our focus on it. Our endeavour is to keep developing new systems and processes to proactively detect and remove harmful content from the platform and restrict the spread of viral misinformation. Our proactive content moderation processes are probably the best in the world," Bidawatka said.

Koo's in-house 'No Nudity Algorithm' proactively and instantaneously detects and blocks any attempt by a user to upload a picture or video containing child sexual abuse materials, nudity or sexual content. Detection and blocking happen in under 5 seconds.

Users posting sexually explicit content are immediately blocked from posting content, being discovered by other users, being featured in trending posts, or engaging with other users in any manner.

The safety features also actively detect and hide or remove toxic comments and hate speech in less than 10 seconds, so they are not available for public viewing.

Content containing excessive blood, gore or acts of violence is overlaid with a warning for users.

Koo's in-house 'MisRep Algorithm' scans the platform for profiles that use the content, photos, videos or descriptions of well-known personalities, in order to detect impersonating profiles and block them. On detection, the pictures and videos of well-known personalities are immediately removed from the profiles, and such accounts are flagged for monitoring of bad behaviour in the future.

Koo's in-house 'Misinfo and Disinfo Algorithm' actively scans, in real time, all viral and reported fake news, drawing on public and private sources of fake news, to detect and label misinformation and disinformation on a post. This minimises the spread of viral misinformation on the platform.

(The above story is verified and authored by Press Trust of India (PTI) staff. PTI, India's premier news agency, employs more than 400 journalists and 500 stringers to cover almost every district and small town in India. The views appearing in the above post do not reflect the opinions of LatestLY)