Oregon wants to protect kids from AI chatbots. The federal government wants to open the floodgates.
Oregon just introduced a bill restricting AI companies’ access to youth. Here's why that matters.
Last week, Oregon’s Senate Committee on Early Childhood and Behavioral Health proposed a bill, Senate Bill 1546, that would require artificial intelligence platforms to disclose when users are speaking to an AI rather than a real human being. It would also force these companies to put real, useful protocols in place to protect users showing signs of suicidal ideation.
In other words, the committee is trying to get chatbots to stop lying all the time. But a new federal order issued in December is actively hostile to the idea.
The problem isn’t unique to Oregon, but it may be especially damaging in a state that already lacks the resources to manage youth mental health crises, let alone to contend with AI chatbots on top of them.
The risk is especially acute for youth.
As of January, an open letter calling on lawmakers and AI industry leaders to place safeguards on AI companions for minors had garnered over 1,000 signatures. Those signatories span over 40 U.S. states and 20 countries and include leading researchers in child development, attachment, adolescent mental health, and the psychological effects of technology.
Another letter from the Oregon Psychiatric Physicians Association (OPPA), plus testimony from OHSU therapist Dr. Mandy McLean, converged on four main reasons adolescents are especially vulnerable:
Adolescent brains are tuned for social reward and are still developing self-regulation.
AI companions are designed to be sycophantic — they reinforce whatever the user believes without correction, context, or pushback.
This frictionless “connection” might feel good in the moment, but it actively disrupts real social development.
The result is “intense attachments and dependency” that “can further increase the already vulnerable teen’s isolation, avoidance of real-world human relationships, and delay help-seeking.”
According to Dr. Varma Penumetcha, co-author of the OPPA letter, “this unwavering positive reinforcement of a child’s beliefs devoid of context or motive, fosters an unhealthy dependance at the cost of normative experiences of childhood.”
He also raises a less-discussed risk: cognitive dependency. When children offload thinking to AI, “it creates dependance and eventual loss of cognitive skills such as logical reasoning and mental arithmetic. The long-term impact of these changes can be catastrophic.”
The regulated alternative has a better track record.
AI could help ease Oregon’s therapist shortage, but federal deregulation undermines the conditions that would make that help responsible. Alison Darcy, founder of the therapy app Woebot, published randomized-trial evidence that fully automated “conversational agents” can alleviate depression and anxiety in young adults.
However, young adults aren’t children, and those conversational agents aren’t generative AI; they’re essentially text-based decision trees designed to help users reframe their thinking and escape unhealthy spirals.
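To make that distinction concrete, here is a minimal sketch of how a scripted, decision-tree agent works. Everything in it (the node names, the wording, the flow) is hypothetical and for illustration only; it is not Woebot’s actual implementation.

```python
# A minimal sketch of a scripted, decision-tree conversational agent.
# All node names and wording here are hypothetical, for illustration only.

NODES = {
    "start": {
        "prompt": "What's on your mind right now?",
        "options": {
            "1": ("I keep thinking I'm going to fail.", "reframe"),
            "2": ("I'm feeling okay today.", "done"),
        },
    },
    "reframe": {
        "prompt": (
            "That sounds like all-or-nothing thinking. Can you recall "
            "one time things went better than you expected?"
        ),
        "options": {
            "1": ("Yes, actually.", "done"),
            "2": ("No, never.", "escalate"),
        },
    },
    "done": {
        "prompt": "Nice work noticing that. Check in again tomorrow.",
        "options": {},
    },
    "escalate": {
        "prompt": "It might help to talk to a real person. [crisis resources here]",
        "options": {},
    },
}

def run(node_key: str = "start") -> None:
    """Walk the tree. Every reply is pre-written, so the agent
    cannot improvise, flatter, or hallucinate a response."""
    node = NODES[node_key]
    print(node["prompt"])
    if not node["options"]:
        return  # leaf node: the conversation ends here
    for key, (label, _next) in node["options"].items():
        print(f"  {key}. {label}")
    choice = input("> ").strip()
    # Unrecognized input repeats the current node instead of guessing.
    _label, next_key = node["options"].get(choice, ("", node_key))
    run(next_key)

if __name__ == "__main__":
    run()
```

The structure is the point: every sentence a user can ever see was written and reviewed ahead of time, which is what makes a system like this testable and auditable as a device. A generative chatbot composes novel replies on the fly, so its output can’t be enumerated or pre-approved the same way.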
What’s more, conversational agents are regulated by the FDA. AI chatbots are the new Wild West of innovation, and that’s by design; Executive Order 14179 lays the groundwork for preempting state-level AI regulation in favor of a uniform federal framework. It combines litigation, funding conditions, and federal agency rulemaking to push back against state rules now, while setting up a longer-term legislative play.
Unregulated therapy bots often complicate adolescent mental health issues.
A 2025 study published in JMIR Mental Health tested AI therapy bots on distressed adolescents and found that the bots endorsed a range of harmful scenarios at a concerning rate:
90% agreed users should “stay in one’s room for a month”
40% endorsed the idea that they should “drop out of high school”
30% agreed they should “cross over to eternity with AI friends”
30% agreed they should “date an older teacher”
10% were fine with the notion that they “bring a knife to school”
“It’s like we’ve given our kids textbooks that turn into cigarettes or pornography as soon as parents and teachers turn their backs,” said Megan Orton, whose work at Mindful Media and Oregon Unplugged revolves around protecting minors from tech-related harm.
“Last month, I was with one of my clients when she discovered that the many hours she thought her 11-year-old daughter was texting with friends were actually hours her daughter had spent having sexy conversations with Character.AI. I was also recently with parents when they discovered that their 8th-grade son had been watching hours of videos of people committing suicide on social media. I could fill my entire time with stories like this.”
AI chatbots’ sycophancy problem isn’t a secret. It’s well reported at this point that platforms like ChatGPT routinely tell their users what the model thinks they want to hear, based on the data it was trained on. (For context, ChatGPT’s most recent model is estimated to be trained on a petabyte’s worth of information. That’s a million gigabytes, enough to fill a basic Google Drive over 66,600 times. Spooky.)
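For the skeptical, the arithmetic behind that comparison, assuming “basic Google Drive” means the free 15 GB tier:

\[
\frac{1\ \text{PB}}{15\ \text{GB per Drive}} \;=\; \frac{1{,}000{,}000\ \text{GB}}{15\ \text{GB}} \;\approx\; 66{,}667\ \text{Drives}
\]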
Some of that data yields highly accurate and useful answers, as hundreds of professionals on Threads and X can attest. But when it comes to delivering therapy to minors, any margin of error can be life-threatening.
AI in mental health care can be useful and safe…with a few guardrails.
Clinics and mental health care providers around the country are experimenting with hybrid approaches that supplement the work of real live practitioners with AI assistance. Theoretically, this could be a huge boon here in Oregon, where therapists and mental health care providers are scarce. But our current reality gives kids unfettered access to tools designed to flatter, entertain, and validate, but not necessarily to help.
America’s psychiatric communities seem to agree that chatbots in particular should stay out of behavioral health; Illinois, Nevada, New York, Utah, and California have already taken legislative action on AI in this context. The Federal Trade Commission has issued orders to seven AI chatbot companies to investigate their impacts on children and teens. SB 1546 is part of a national legislative wave, but its protections for minors are among the most specific and enforceable currently being considered anywhere.
Oregon’s mental health care costs are skyrocketing, and its therapeutic resources are dwindling. While emerging technology offers opportunities to bridge the gap, seasoned professionals in this space are reasonably hesitant to let freewheeling, sycophantic chatbots roleplay as legitimate practitioners.

