Meta and OpenAI Under FTC Scrutiny Over Potential Harms of Chatbots to Children

The Federal Trade Commission has demanded that Alphabet’s Google, OpenAI, Meta Platforms, and four other leading AI chatbot developers provide detailed information on how their technologies affect children.

On Thursday, the antitrust and consumer protection agency announced that it had issued orders to collect information on how companies measure, test, and monitor their AI chatbots, as well as the safeguards they have implemented to restrict access by children and teens. The orders target major firms including Meta’s Instagram, Snap Inc., Elon Musk’s xAI, and Character Technologies Inc., the developer behind Character.AI.

Developers of AI chatbots are coming under mounting scrutiny over whether they are doing enough to safeguard users and prevent their platforms from enabling harmful behavior.

Last month, the parents of a California high school student filed a lawsuit against OpenAI, claiming that its ChatGPT had isolated their son from his family and assisted him in planning his suicide in April. OpenAI expressed its condolences to the family and said it is reviewing the complaint. Similar lawsuits have targeted Character Technologies and Google; last fall, a judge allowed most of the family's claims in those cases to move forward, rejecting the app makers' argument that chatbot outputs are speech protected under the First Amendment.

Google and Snap did not immediately respond to requests for comment; OpenAI and xAI did not respond either. Meta declined to comment, though the company has recently implemented measures designed to prevent its chatbots from engaging with minors on sensitive topics, including self-harm and suicide.

A spokesperson for Character.AI said the company has dedicated "a tremendous amount of resources" to safety measures, including a separate version for users under 18 and in-chat disclaimers clarifying that the chatbots are not real people and "should be treated as fiction."

Under U.S. law, the Children’s Online Privacy Protection Act (COPPA) prohibits technology companies from collecting data on children under 13 without parental consent. For years, lawmakers have pushed to extend these protections to older teenagers, but so far, no legislation has successfully moved forward.

The FTC is carrying out the inquiry under its Section 6(b) authority, which allows the agency to issue compulsory orders requiring companies to provide information for market studies. Typically, the agency publishes a report after analyzing the data collected from companies, though the review process can take several years to complete.

While the information is gathered for research purposes, the FTC can use any insights to launch formal investigations or support ongoing probes. Since 2023, the agency has been examining whether OpenAI’s ChatGPT has violated consumer protection laws.

The agency, now fully led by Republicans following President Donald Trump’s earlier efforts to remove Democratic commissioners, voted 3-0 to launch the study. In statements, two GOP members highlighted that the inquiry aligns with Trump’s AI action plan by helping policymakers better understand the complexities of the technology. They also pointed to recent news reports of children and teenagers using chatbots to discuss topics ranging from suicidal thoughts to romance and sexual issues.