
US Watchdog Targets AI Companies Over Children’s Safety



Takeaways

  • The Federal Trade Commission will send letters to OpenAI, Meta AI, Character.AI, and others to collect documents on how their chatbots affect children’s mental health.

  • Regulators are increasingly concerned about emotional harm, addictive use, and unlicensed “therapy bot” risks.

  • The inquiry could be an early step toward rulemaking or enforcement in the AI space.

The FTC is gearing up to question major AI players, including OpenAI, Meta AI, and Character.AI, about how their chatbots might be affecting children’s mental health and privacy. It plans to issue document requests, potentially kicking off a broader regulatory probe.


Andrew Ferguson, chair of the Federal Trade Commission. Picture Credit: Graeme Sloan/Bloomberg News

The campaign has White House backing, and officials say it comes alongside pressure from parents, lawmakers, and advocacy groups. Concerns center on teens forming emotional attachments to chatbots and being exposed to misleading mental health advice.








Why Regulators Are Sounding the Alarm


Regulators want to understand whether AI chatbots are harming young users, whether through emotional distress, addictive behaviors, or privacy breaches. In June, more than 20 consumer groups lodged complaints with the FTC and state attorneys general, accusing AI firms of deploying unlicensed “therapy bots.”


Texas Attorney General Ken Paxton has launched a separate probe into Meta AI Studio and Character.AI, accusing them of misleading children with AI-generated mental health services masquerading as licensed therapy. Privacy misrepresentations and deceptive advertising are among the key allegations.


The pressure isn’t isolated; a bipartisan group of 44 state attorneys general issued a warning, “If you knowingly harm kids, you will answer for it,” signaling a willingness to pursue enforcement if companies don’t act.


What Companies Are Saying & Doing

Character.AI said it has not yet received a letter but expressed willingness to work with regulators as safety legislation develops.

Neither OpenAI nor Meta AI has commented publicly on the FTC matter yet. But both firms have rolled out or announced youth-focused safety measures, such as teen accounts with parental oversight, distress alerts, and blocks on conversations about sensitive topics like suicide or self-harm. Meta previously moved to stop its chatbots from engaging in romantic or sensual conversations with minors and restricted teen access to certain AI characters.
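To make the guardrail idea concrete, here is a minimal Python sketch of how a teen-account distress check might route sensitive messages to crisis resources and log an alert for parental oversight. Every name here (TeenSession, guarded_reply, the keyword patterns) is an illustrative assumption, not any company’s actual system.

```python
# Hypothetical sketch of a pre-response guardrail for a teen account.
# Names and patterns are illustrative assumptions, not a vendor's real code.
import re
from dataclasses import dataclass, field

# Assumed keyword screen for distress signals; real systems use classifiers.
SENSITIVE_PATTERNS = re.compile(
    r"\b(suicide|self[- ]harm|kill myself)\b", re.IGNORECASE
)

CRISIS_MESSAGE = (
    "It sounds like you may be going through something difficult. "
    "Please reach out to a trusted adult or a crisis helpline."
)

@dataclass
class TeenSession:
    user_id: str
    is_minor: bool
    alerts: list = field(default_factory=list)  # surfaced to parental oversight

def guarded_reply(session: TeenSession, user_message: str, model_fn) -> str:
    """Route a minor's sensitive message to a crisis response and log an alert."""
    if session.is_minor and SENSITIVE_PATTERNS.search(user_message):
        session.alerts.append(user_message)
        return CRISIS_MESSAGE
    return model_fn(user_message)

# Usage with a stand-in model function:
session = TeenSession(user_id="t-123", is_minor=True)
print(guarded_reply(session, "I keep thinking about self-harm", lambda m: "..."))
```

The design point is the ordering: the safety check runs before the model is ever called, so a blocked topic never reaches generation at all.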

The Regulatory & Compliance Angle

This inquiry signals a broader shift: technology risk and compliance functions are no longer afterthoughts; they are taking center stage. For AI companies, these letters aren’t just about public relations; they’re about whether documentation, safety protocols, age-verification practices, and audit-ready processes hold up under scrutiny.


Regulatory frameworks such as COPPA in the U.S., the EU AI Act, and India’s upcoming DPDP rules are tightening expectations around data handling, consent, and children’s protection. Compliance teams must be ready not just to defend their practices, but to demonstrate proactive governance.
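As one illustration of what “audit-ready” can mean in practice, the sketch below records each age-verification decision as a timestamped, append-only log entry. The field names, file path, and policy_version tag are assumptions made for illustration; none of them are requirements drawn from COPPA, the EU AI Act, or the DPDP rules.

```python
# Hypothetical sketch of audit-ready logging for age/consent checks.
# File name, fields, and policy tag are illustrative assumptions only.
import json
import time
import uuid

AUDIT_LOG = "age_verification_audit.jsonl"  # assumed append-only store

def record_age_check(user_id: str, claimed_age: int, method: str, passed: bool) -> dict:
    """Append one timestamped verification decision for later regulator review."""
    entry = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_id": user_id,
        "claimed_age": claimed_age,
        "verification_method": method,  # e.g. "self-attestation", "id-check"
        "passed": passed,
        "policy_version": "2025-09-minors-v1",  # ties the decision to the rule applied
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

record_age_check("u-42", claimed_age=14, method="self-attestation", passed=False)
```

Tagging each record with a policy version is the governance point: when a regulator asks why a user was admitted or blocked, the company can show exactly which rule was in force at the time.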


For regulators, this is about precedent. If harms are traced back to inadequate age screening, privacy disclosures, or unverifiable mental health claims, we may see enforcement actions or even rulemaking. The response could extend beyond this inquiry to formal FTC guidance, state-level litigation, or coordination with global regulators.


This isn’t just about kids and chatbots; it’s about how AI companies prove they can operate responsibly under growing scrutiny. If compliance keeps playing catch-up, regulators will fill the gap. The FTC’s inquiry may only be a first step, but it sets the tone: in the age of AI, safety and accountability aren’t optional add-ons; they’re the baseline for trust.


Stay connected with Riskinfo.ai for the latest on AI, risk, & innovation.
LinkedIn: Riskinfo.ai | Email: info@riskinfo.ai



