Council of Europe Publishes Landmark AI Treaty Ensuring Human Rights and Ethical Standards

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 08/01/2024
In News

The Council of Europe has published an overview of the groundbreaking “Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law,” the first international legally binding instrument designed to govern the lifecycle of AI systems. This landmark AI treaty, adopted in May 2024, aims to ensure that AI activities align with human rights, democracy, and the rule of law while supporting technological progress and innovation.


The Framework Convention was developed through a collaborative process involving the 46 member states of the Council of Europe, observer states like Canada and the United States, and numerous international representatives from civil society, academia, and industry. The treaty emerged from the work of the Committee on Artificial Intelligence (CAI), which succeeded the ad hoc Committee on Artificial Intelligence (CAHAI) in 2022.


The primary purpose of the Framework Convention is to address the legal and ethical challenges posed by the rapid advancement of AI technologies. It establishes a set of fundamental principles that activities within the AI lifecycle must comply with, including human dignity, individual autonomy, equality, non-discrimination, respect for privacy, transparency, accountability, reliability, and safe innovation. These principles serve as a guide for ensuring that AI systems do not infringe on human rights and freedoms.


A significant feature of the treaty is its provision for “red lines” or bans on certain AI applications that may pose significant risks to human rights, democracy, or the rule of law. The Framework Convention covers the use of AI systems by public authorities and private actors, including those acting on behalf of the public sector. Parties to the treaty are given flexibility in how they comply with its principles, either by directly adhering to the Convention’s provisions or by implementing alternative measures consistent with their international obligations.


While the Framework Convention does not apply to national defense matters, it requires that activities related to national security respect international law and democratic processes. The treaty includes specific provisions for risk and impact assessments, documentation, and procedural safeguards to protect individuals from potential abuses by AI systems. These measures are intended to ensure that AI technologies are used responsibly and transparently, providing people with the ability to challenge decisions made by AI systems that significantly impact their rights.


The implementation of the Framework Convention is monitored by a follow-up mechanism called the Conference of the Parties, which comprises official representatives of the parties to the Convention. This body is responsible for assessing the implementation of the treaty’s provisions and making recommendations to ensure compliance. The Conference also facilitates cooperation with relevant stakeholders, including public hearings on aspects of the Convention’s implementation.

Need Help?

If you have questions or concerns about AI guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing, and AI Governance News by subscribing to our newsletter.