Freedom Online Coalition Issues Joint Statement on Responsible AI Governance

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 10/02/2024
In News

UPDATE — AUGUST 2025: Since the FOC released its 2024 joint statement on responsible AI use, the bloc has moved from principles toward implementation. In May 2025 it created a Responsible AI in Government Task Force, now drafting best-practice guidance for bias audits, transparency, and human-rights safeguards in high-risk government deployments such as law enforcement and healthcare. This effort aligns with broader global frameworks: the Council of Europe’s AI and Human Rights Convention, the EU AI Act, and the UN General Assembly’s March 2025 resolution urging safeguards for state use of AI.

At the national level, many FOC members—including the U.S., UK, Canada, and Japan—have been updating procurement standards and oversight rules for public-sector AI systems. In the U.S., for instance, Biden’s 2023 AI Executive Order has been followed by NIST’s AI Risk Management Framework v1.1 (spring 2025) to better align with global standards, including FOC principles. UNESCO has also expanded its “Ethics of AI” recommendations with new state-level implementation guidance.

ORIGINAL NEWS STORY:

Freedom Online Coalition Issues Joint Statement on Responsible AI Governance


The Freedom Online Coalition (FOC), a group of 37 member states committed to upholding human rights in the digital age, recently issued a joint statement calling for responsible government practices in the development and deployment of artificial intelligence (AI). The statement was released following a meeting of the coalition, whose members include the United States, the United Kingdom, Canada, Germany, and Japan. As AI adoption expands in public services, law enforcement, and justice systems, the coalition warned that unchecked deployment could threaten human rights. While AI can make government services more efficient, the FOC said that poor oversight may lead to discrimination, bias, or misuse.


Balancing Innovation with Human Rights


The FOC statement described AI as a tool with “dual potential”—capable of driving progress or deepening inequality. “Safe, secure, and trustworthy AI systems offer immense opportunities,” the statement read. “But without safeguards, they can undermine human rights and fundamental freedoms.” The coalition identified privacy violations, algorithmic bias, and surveillance misuse among key threats. It also warned about deepfakes and other harmful content, particularly their impact on marginalized groups such as women, racial minorities, and people with disabilities. Governments, the FOC said, must ensure that AI used in law enforcement or social services undergoes strict testing for fairness, accuracy, and transparency before deployment.


Ensuring Accountability and Transparency


The coalition urged governments to conduct thorough risk assessments before deploying AI, especially in high-risk areas like healthcare and policing. These reviews should include stakeholder consultation and consider the potential impact on human rights. To maintain accountability, the FOC encouraged public feedback systems where citizens can report bias or harm caused by AI decisions. Governments should also provide ways for individuals to request human review of outcomes and seek redress when rights are violated. Transparency was another central demand. The FOC said governments must publicly disclose where and how AI is used in public services. Clear reporting, it added, builds public trust and reinforces ethical standards.


A Global Framework for Responsible AI


The joint statement aligns with the UN Guiding Principles on Business and Human Rights and the OECD’s AI Principles. The FOC said these global standards provide a foundation for effective AI governance. By following them, governments can reduce bias, strengthen oversight, and promote fairness in AI deployment. The coalition concluded by calling on nations to adopt policies that uphold human rights as AI reshapes economies and societies.


Need Help?


If you have questions or concerns about AI guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
