The Federal Trade Commission (FTC) has launched an inquiry into consumer-facing AI chatbots, focusing on how these technologies may affect children and teens. The agency announced it had issued orders under Section 6(b) of the FTC Act to seven major companies, requiring them to disclose how they measure, test, and mitigate potential harms.
The orders were sent to Alphabet, Character Technologies, Instagram, Meta Platforms, OpenAI, Snap, and X.AI. These companies operate popular chatbots that use generative AI to simulate human-like conversations, often mimicking emotions, intentions, and personalities. Regulators are concerned that such interactions could lead vulnerable users, particularly children and teens, to trust and form relationships with chatbots, exposing them to potential harm.
“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” said FTC Chairman Andrew N. Ferguson.
The FTC’s inquiry will examine how companies monetize user engagement, process user inputs, and design chatbot characters. It will also assess whether firms test for negative impacts before and after deployment, restrict use by minors, and comply with the Children’s Online Privacy Protection Act (COPPA). Additionally, the agency is seeking details on how these firms inform parents and users about risks, data handling practices, and age restrictions.
The Commission voted unanimously to issue the orders, with Commissioners Melissa Holyoak and Mark R. Meador filing separate statements. The review is led by Alysa Bernstein and Erik Jones of the FTC’s Bureau of Consumer Protection.
The study is exploratory and not tied to any specific enforcement action, but its findings could shape future regulation of AI companions.
Need Help?
If you have questions or concerns about global AI guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and help ensure you’re informed and compliant.