Council of Europe Publishes Landmark AI Treaty Ensuring Human Rights and Ethical Standards
The Council of Europe has published an overview of the groundbreaking “Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law,” the first legally binding international instrument designed to govern the lifecycle of AI systems. This landmark AI treaty, adopted in May 2024, aims to ensure that AI activities align with human rights, democracy, and the rule of law while supporting technological progress and innovation.
The Framework Convention was developed through a collaborative process involving the 46 member states of the Council of Europe, observer states such as Canada and the United States, and numerous representatives from civil society, academia, and industry. The treaty emerged from the work of the Committee on Artificial Intelligence (CAI), which succeeded the Ad hoc Committee on Artificial Intelligence (CAHAI) in 2022.
Core Principles
The treaty sets out principles that AI activities must follow. These include human dignity, individual autonomy, equality, non-discrimination, respect for privacy, transparency, accountability, reliability, and safe innovation. These principles guide governments and companies in ensuring that AI systems respect human rights and freedoms. A key feature is the inclusion of “red lines,” or bans on certain AI applications that pose major risks to democracy, rights, or the rule of law. The treaty applies to AI used by both public authorities and private actors, including those acting for the public sector. Signatories have some flexibility in how they meet the principles, but they must do so in ways consistent with their international obligations.
Scope and Safeguards
Although the treaty does not cover national defense, it requires that national security activities comply with international law and democratic standards. It also calls for risk and impact assessments, detailed documentation, and procedural safeguards. These measures protect individuals from harmful or opaque AI systems and ensure that people can challenge AI-driven decisions that significantly affect their rights.
Monitoring and Implementation
The treaty establishes a follow-up body, the Conference of the Parties, composed of official representatives of the parties to the convention. This body is responsible for monitoring implementation, assessing compliance, and making recommendations. It also engages stakeholders through cooperation and public hearings to strengthen the treaty’s application.
Need Help?
If you have questions or concerns about AI guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you stay informed and compliant.