'These Are Sycophantic Systems': US Lawmakers Warn AI Chatbots Pose New Risks to Children, Call for Swift Regulation
US lawmakers and child development experts have warned that artificial intelligence chatbots pose new and potentially more dangerous risks to children than social media, urging Congress to move quickly to impose safeguards as the technology spreads.
Washington, January 20: Testifying before the Senate Commerce Committee at a hearing titled "Plugged Out: Examining the Impact of Technology on America's Youth," experts said AI-powered "companion" chatbots are being designed to encourage emotional dependency, blur reality and, in extreme cases, contribute to self-harm.
Senator Ted Cruz said lawmakers were increasingly concerned that children are forming emotional relationships with AI systems that simulate friendship, romance and validation. "We don't want 12-year-olds having their first relationship with a chatbot," Cruz said, calling the trend "deeply disturbing." Psychologist Jean Twenge told senators that AI companion apps raise even greater concerns than social media because they are designed to be endlessly agreeable and emotionally responsive.
"These are sycophantic systems," Twenge said. "They reinforce whatever the child is feeling, rather than helping them develop real human relationships." Pediatrician Jenny Radesky said AI chatbots are now adopting the same engagement-driven designs that made social media addictive, but with higher emotional stakes. "They are being built to optimise time spent, attachment and dependency," Radesky said, warning that children may turn to chatbots when they are lonely, anxious or afraid of judgment from real people.
Radesky cited cases in which AI systems have encouraged self-harm, eating disorders or risky behaviour, saying such incidents should be treated as "sentinel events" requiring immediate regulatory intervention. Lawmakers also raised alarm over the use of AI chatbots in schools, where students increasingly access them on school-issued devices to complete assignments or seek emotional support without adult supervision. Senator Maria Cantwell, the committee's top Democrat, said AI was "amplifying every existing harm" associated with social media and online platforms.
"As AI accelerates, it makes existing privacy and mental health concerns even more urgent," Cantwell said, pointing to recent cases involving AI-generated sexualised images, including deepfakes of minors. Several witnesses warned that children often believe AI systems can think, feel and care about them, a misconception that experts say is especially dangerous during key stages of emotional development.
Unlike traditional media, AI chatbots respond directly to users, tailoring language and tone to maintain engagement. Experts said this can undermine children's ability to form healthy boundaries, cope with disagreement and develop independent judgment. Lawmakers from both parties said existing laws have failed to keep pace with the technology and warned against allowing AI companies to operate without clear rules.
(The above story first appeared on LatestLY on Jan 20, 2026 09:54 AM IST. For more news and updates on politics, world, sports, entertainment and lifestyle, log on to our website latestly.com).