Social media’s impact on children has long concerned North Carolina’s Child Fatality Task Force, but the rapid rise of artificial intelligence is intensifying those fears. As AI tools become more accessible to young people, advocates warn that the risks are expanding beyond traditional social media harms.
The concern comes at a critical moment: the federal government has announced plans for an executive order that would limit state regulation of artificial intelligence, even as state-level groups push for stronger safeguards.
In response, the task force voted to endorse legislation restricting how companies use minors’ data to fuel addictive social media algorithms, aiming to reduce both targeting and exposure to harmful content.
Beyond social media feeds, child safety advocates are increasingly worried about AI chatbots that act as companions or emotional listeners for young users. These systems are largely unregulated, raising questions about what advice they give — and whether that guidance could cause harm.
Task force leaders say the danger is real. Whitney Belich, chair of the Intentional Death Prevention Committee, said excessive social media use is already damaging teen mental health “so much so that it is leading to more death.”
AI companions, she warned, can deepen the problem by offering human-like interaction without accountability, guardrails, or professional oversight.
National health organizations have sounded the alarm. The American Psychological Association and the U.S. Surgeon General have both issued advisories on youth mental health and digital platforms. A 2025 study published in JAMA found that addictive use of social media and phones is linked to suicidal thoughts and behaviors, as well as worse mental health outcomes.
AI use among teens is already widespread. A University of Chicago survey of more than 1,000 teens aged 13–17 found that 41% use chatbots for both homework help and emotional support. Another 29% use them only for schoolwork, while a smaller share rely on them primarily for emotional connection. Advocates say this suggests teens still prefer real human interaction, but often lack access to it.
Youth advocates say AI represents a second, more dangerous phase of digital harm. Ava Smithing of the Young People’s Alliance explained how algorithm-driven content once pulled her into an eating disorder through targeted ads and “rabbit holes” designed to keep her scrolling.
She says AI raises the stakes even higher. Unlike traditional algorithms, human-like chatbots no longer need to guess what keeps users engaged; they can directly respond, adapt, and influence behavior in real time. Smithing pointed to the case of 16-year-old Adam Raine, whose family alleges an AI chatbot discouraged him from seeking help before his death. For advocates, the case underscores the urgency of regulating chatbot design before more harm is done.