Freedom Online Coalition Issues Joint Statement on Responsible AI Governance

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 10/02/2024
In News

The Freedom Online Coalition (FOC), a group of 37 member states committed to upholding human rights in the digital age, recently issued a joint statement calling for responsible government practices in the development and deployment of artificial intelligence (AI). The statement was released following a meeting of the coalition, which includes countries like the United States, the United Kingdom, Canada, Germany, and Japan, among others.

As AI continues to advance rapidly, governments worldwide are increasingly adopting AI systems for various purposes, including public service delivery, law enforcement, and judicial decisions. While these systems offer significant potential to enhance efficiency and foster sustainable development, the FOC highlighted the risks that AI could pose to human rights if not deployed with proper safeguards.

The FOC’s joint statement emphasized the dual nature of AI—its ability to create positive change while also posing substantial risks. AI, if used responsibly, can improve public services and advance the 2030 Agenda for Sustainable Development. However, the absence of adequate protections can lead to violations of human rights, the amplification of biases, and the erosion of democratic values.

“We recognize that safe, secure, and trustworthy AI systems offer immense opportunities for governments to improve public service delivery and foster inclusive development,” the statement read. “However, without adequate safeguards, AI systems can undermine human rights and fundamental freedoms.”

The FOC pointed out specific risks associated with AI, including privacy violations, algorithmic bias, the misuse of AI for surveillance, and the creation of harmful content like deepfakes. Such risks are especially concerning for vulnerable populations, such as marginalized racial and ethnic groups, women, the LGBTQ+ community, and people with disabilities, who may experience disproportionate harm from biased AI systems.

The FOC placed particular emphasis on the risks AI poses in the public sector. When AI is used by governments in sensitive areas like law enforcement, judicial decision-making, and social services, there is a heightened risk of exacerbating inequality and discrimination. The coalition stressed the need for governments to ensure that AI systems used in these contexts are rigorously tested for fairness, accuracy, and transparency before deployment.

Biased AI tools in the public sector, according to the statement, can “create new forms of marginalization and vulnerability,” particularly if used without proper safeguards. As AI technologies become more integrated into governmental functions, the FOC called for robust frameworks to protect individuals from the unintended consequences of AI-driven decisions.

The FOC reaffirmed its commitment to ensuring that AI systems are designed, developed, and deployed in ways that uphold international human rights standards. The coalition’s joint statement drew from existing frameworks, including the UN Guiding Principles on Business and Human Rights and the Organisation for Economic Co-operation and Development’s (OECD) AI Principles, to guide responsible AI governance.

One of the key calls to action in the statement was for governments to conduct thorough risk assessments before deploying AI systems, particularly in high-risk contexts such as healthcare, law enforcement, and justice. These assessments should consider the potential human rights impacts of AI technologies and take measures to mitigate risks. Governments are urged to engage with stakeholders throughout the AI system’s development and deployment to ensure its responsible use.

The coalition also stressed the importance of ongoing monitoring and evaluation of AI systems to identify and address issues related to bias, fairness, and safety. Governments are encouraged to establish feedback mechanisms to allow the public and affected stakeholders to report any problems with AI systems in real time.

In addition to preventive measures, the Freedom Online Coalition called for governments to provide effective remedies for individuals negatively impacted by AI systems. This includes ensuring that people have access to timely human review of AI decisions and establishing clear protocols for redress when human rights violations occur as a result of AI use.

The coalition also urged governments to be transparent about their AI practices by publicly disclosing how AI systems are being used and ensuring that the public can provide feedback on high-risk AI deployments. This transparency is essential to maintaining public trust in AI technologies and ensuring that they are used ethically and responsibly.

The Freedom Online Coalition concluded its joint statement with a call to action for all governments to adopt responsible AI governance practices. The coalition emphasized that, as AI continues to reshape societies and economies, it is critical for governments to take proactive steps to ensure that AI benefits all people while safeguarding human rights and dignity.

Need Help?

If you have questions or concerns about AI guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
