UPDATE — SEPTEMBER 2025: Since Amnesty International’s March 2024 report on algorithmic fraud-control in Denmark’s welfare system, the issue has remained highly contentious both domestically and at the EU level.
Danish Response: Udbetaling Danmark (UDK) has repeatedly denied that its models constitute social scoring or breach Danish or EU law, insisting that human caseworkers always review flagged cases. In spring 2024, the Danish Parliament debated Amnesty’s findings, but the governing coalition declined to suspend the algorithms, promising instead “ongoing internal monitoring.” In June 2024, the Parliamentary Ombudsman launched an inquiry into high-profile systems like the “Gladsaxe Model,” examining discrimination concerns and citizens’ rights to notification. Preliminary findings are due in late 2025.
EU Oversight: The EU AI Act, adopted in June 2024, explicitly classifies welfare-fraud detection systems as high-risk, requiring transparency, human oversight, and nondiscrimination testing; high-risk systems already deployed by public authorities must comply by the Act’s August 2030 deadline. In July 2025, the European Data Protection Board (EDPB) issued guidance confirming that such systems require fundamental rights impact assessments (FRIAs) before use. Separately, the EU Agency for Fundamental Rights (FRA) included Denmark in a 2025 comparative study, warning of systemic algorithmic discrimination in welfare systems.
Civil Society & International Pressure: Danish digital rights groups such as IT-Pol and European Digital Rights (EDRi) have echoed Amnesty’s call for independent audits, warning that Denmark could face infringement proceedings under the AI Act. The UN Special Rapporteur on extreme poverty and human rights, in an April 2025 report, cited Denmark as an example of a “digital welfare state” that risks undermining socioeconomic rights when AI is deployed without adequate safeguards.
Current Status: Denmark has not suspended its welfare fraud algorithms, but scrutiny is intensifying. Watchdogs expect the Ombudsman’s late-2025 review to be pivotal. If it validates Amnesty’s findings, Denmark may face stronger reform demands or even EU-driven action in 2026. For now, rights groups warn that migrants, minorities, and low-income households remain disproportionately exposed to errors and bias in Denmark’s automated welfare state.
ORIGINAL NEWS POST:
Denmark’s Automated Welfare System Under Fire for Surveillance and Discrimination
Amnesty International’s new report, “Coded Injustice: Surveillance and Discrimination in Denmark’s Automated Welfare State,” has revealed significant human rights violations stemming from Denmark’s deployment of algorithmic fraud-control systems in its welfare programs. The findings raise critical concerns about privacy, equality, and the disproportionate targeting of marginalized groups, challenging Denmark’s image as a leader in digital governance.
Denmark’s welfare system, managed by Udbetaling Danmark (UDK) and administered by Arbejdsmarkedets Tillægspension (ATP), uses more than 60 AI and machine-learning models to identify individuals suspected of fraudulently receiving benefits. These models analyze data on millions of residents, including residency status, family composition, and income. Amnesty’s research highlights how these systems unfairly target low-income individuals, migrants, and racialized groups under the guise of efficiency.
The study also underscores that the algorithms rely on flawed datasets that reflect systemic biases embedded in Danish society. By singling out “unusual” family or residency patterns and using metrics like “foreign affiliation,” the system disproportionately flags minorities and migrants, violating their rights to equality and non-discrimination.
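To make that mechanism concrete, the hypothetical Python sketch below shows how a risk score that weights a proxy attribute such as “foreign affiliation” can flag otherwise identical households differently. The feature names, weights, and threshold here are invented for illustration and are not drawn from UDK’s actual models.

```python
# Hypothetical illustration only: features, weights, and threshold are
# invented for this sketch, not taken from any real UDK model.
from dataclasses import dataclass

@dataclass
class Case:
    income: float              # annual income in DKK
    household_size: int
    foreign_affiliation: bool  # proxy attribute of the kind Amnesty criticizes
    unusual_residency: bool    # e.g., address changes scored as "atypical"

def risk_score(c: Case) -> float:
    """Toy linear risk score: weighting a proxy attribute bakes
    group membership directly into the output."""
    score = 0.0
    score += 0.3 if c.income < 200_000 else 0.0
    score += 0.2 if c.household_size >= 5 else 0.0
    score += 0.4 if c.foreign_affiliation else 0.0
    score += 0.3 if c.unusual_residency else 0.0
    return score

FLAG_THRESHOLD = 0.6  # cases at or above this are queued for investigation

cases = [
    Case(180_000, 5, True, False),   # low-income family with the proxy set
    Case(180_000, 5, False, False),  # identical family without it
]
for c in cases:
    verdict = "flagged" if risk_score(c) >= FLAG_THRESHOLD else "not flagged"
    print(c, "->", verdict)  # only the household with the proxy is flagged
```

The point is structural: even without an explicit ethnicity field, a single proxy feature correlated with migration background is enough to reproduce the disparate flagging Amnesty documents.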
Amnesty’s investigation outlines how Denmark’s automated welfare state infringes on key human rights. Among the most alarming findings are:
- Privacy Invasion: Mass surveillance through data collection, including information on family dynamics, employment, and citizenship, forces welfare applicants to forfeit their right to privacy.
- Discrimination: Algorithms frequently target marginalized groups based on characteristics like race, ethnicity, and migration status, perpetuating historical injustices.
- Digital Exclusion: Vulnerable populations, including older adults and individuals with disabilities, face barriers in accessing welfare services due to the system’s reliance on digital platforms.
These issues are compounded by Denmark’s lack of transparency and oversight. Amnesty criticized the Danish government for failing to notify flagged individuals that they are subject to algorithmic scrutiny, denying them the ability to appeal decisions effectively.
The report situates Denmark’s welfare state within a broader global trend of digitized public services. Amnesty draws parallels to similar scandals, including the Dutch childcare benefits debacle, where algorithmic bias led to widespread harm among minority communities.
Despite public backlash over discriminatory systems like the “Gladsaxe Model” and the STAR algorithm, Denmark has continued to embrace data-driven governance, raising questions about its commitment to human rights.
Amnesty’s report emphasizes Denmark’s obligations under international human rights law and the recently enacted EU AI Act. It urges Denmark to immediately halt its use of discriminatory fraud-control algorithms and implement independent audits. The EU AI Act classifies such systems as “high-risk,” mandating strict transparency and fairness standards. Amnesty recommends Denmark align with these requirements ahead of the 2030 compliance deadline to ensure robust protections for affected communities.
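As a sketch of what such an independent, nondiscrimination-focused audit might check, the hypothetical Python snippet below computes per-group flag rates and their ratio, a standard disparate-impact measure. The group labels and records are illustrative, not real UDK data.

```python
# Hypothetical audit check: records are synthetic, not real UDK data.
# Computes per-group flag rates and their ratio, a standard
# disparate-impact measure used in algorithmic fairness audits.
from collections import defaultdict

def flag_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """records: (group, was_flagged) pairs -> per-group flag rate."""
    flagged, total = defaultdict(int), defaultdict(int)
    for group, was_flagged in records:
        total[group] += 1
        flagged[group] += was_flagged
    return {g: flagged[g] / total[g] for g in total}

# Synthetic outcomes: 30% of one group flagged vs. 10% of the other.
records = ([("migrant", True)] * 30 + [("migrant", False)] * 70
           + [("non-migrant", True)] * 10 + [("non-migrant", False)] * 90)
rates = flag_rates(records)
print(rates)  # {'migrant': 0.3, 'non-migrant': 0.1}
print(f"disparity ratio: {rates['migrant'] / rates['non-migrant']:.1f}x")  # 3.0x
```

A ratio well above 1.0, as here, is exactly the kind of disparity that the AI Act’s nondiscrimination testing and the independent audits Amnesty demands are meant to surface.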
In its response, UDK denied that its practices constitute social scoring or violate EU and national laws. However, Amnesty asserts that Denmark must provide detailed evidence and independent assessments to justify its systems.
Need Help?
If you have questions or concerns about AI guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.