Amnesty International’s new report, “Coded Injustice: Surveillance and Discrimination in Denmark’s Automated Welfare State,” has revealed significant human rights violations stemming from Denmark’s deployment of algorithmic fraud-control systems in its welfare programs. The findings raise critical concerns about privacy, equality, and the disproportionate targeting of marginalized groups, challenging Denmark’s image as a leader in digital governance.
Denmark’s welfare system, administered by Udbetaling Danmark (UDK) in partnership with Arbejdsmarkedets Tillægspension (ATP), uses up to 60 AI and machine-learning models to identify individuals suspected of fraudulently receiving benefits. These models analyze data on millions of residents, including residency status, family composition, and income. Amnesty’s research highlights how these systems unfairly target low-income individuals, migrants, and racialized groups under the guise of efficiency.
The study also underscores that the algorithms rely on flawed datasets that reflect systemic biases embedded in Danish society. By treating “unusual” family or residency patterns as suspicious and using parameters such as “foreign affiliation,” the system disproportionately flags minorities and migrants, violating their rights to equality and non-discrimination.
Amnesty’s investigation outlines how Denmark’s automated welfare state infringes on key human rights. Among the most alarming findings are:
- Privacy Invasion: Mass surveillance through data collection, including information on family dynamics, employment, and citizenship, forces welfare applicants to forfeit their right to privacy.
- Discrimination: Algorithms frequently target marginalized groups based on characteristics like race, ethnicity, and migration status, perpetuating historical injustices.
- Digital Exclusion: Vulnerable populations, including older adults and individuals with disabilities, face barriers in accessing welfare services due to the system’s reliance on digital platforms.
These issues are compounded by a lack of transparency and oversight. Amnesty criticizes the Danish government for failing to notify flagged individuals that they are under algorithmic scrutiny, denying them a meaningful opportunity to challenge or appeal decisions.
The report situates Denmark’s welfare state within a broader global trend of digitizing public services. Amnesty draws parallels to similar scandals, including the Dutch childcare benefits debacle, in which algorithmic bias caused widespread harm to minority communities.
Despite public backlash over discriminatory systems like the “Gladsaxe Model” and the STAR algorithm, Denmark has continued to embrace data-driven governance, raising questions about its commitment to human rights.
Amnesty’s report emphasizes Denmark’s obligations under international human rights law and the recently enacted EU AI Act, which classifies such fraud-control systems as “high-risk” and subjects them to strict transparency and fairness requirements. It urges Denmark to immediately halt its use of discriminatory fraud-control algorithms, commission independent audits, and align with the Act’s requirements well ahead of the 2030 compliance deadline for existing public-sector systems, ensuring robust protections for affected communities.
In its response, UDK denied that its practices constitute social scoring or violate EU and national laws. However, Amnesty asserts that Denmark must provide detailed evidence and independent assessments to justify its systems.
Need Help?
If you have questions or concerns about AI guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.