Denmark’s Automated Welfare System Under Fire for Surveillance and Discrimination

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 11/18/2024
In News

UPDATE — SEPTEMBER 2025:

Since Amnesty International’s November 2024 report on algorithmic fraud control in Denmark’s welfare system, the issue has remained highly contentious both domestically and at the EU level.

Danish Response: Udbetaling Danmark (UDK) has repeatedly denied that its models constitute social scoring or breach Danish or EU law, insisting that human caseworkers always review flagged cases. In spring 2025, the Danish Parliament debated Amnesty’s findings, but the governing coalition declined to suspend the algorithms, promising instead “ongoing internal monitoring.” In June 2025, the Parliamentary Ombudsman launched an inquiry into high-profile systems such as the “Gladsaxe Model,” examining discrimination concerns and citizens’ right to notification.

EU Oversight: The EU AI Act, which entered into force in August 2024, explicitly classifies welfare-fraud detection systems as high-risk, requiring transparency, human oversight, and nondiscrimination testing, with a 2030 compliance deadline for public-sector systems already in use. In July 2025, the European Data Protection Board (EDPB) issued guidance confirming that such systems require fundamental rights impact assessments (FRIAs) before use. Separately, the EU Fundamental Rights Agency (FRA) included Denmark in a 2025 comparative study, warning of systemic algorithmic discrimination in welfare systems.

Civil Society & International Pressure: Danish digital rights groups (IT-Pol) and European Digital Rights (EDRi) have echoed Amnesty’s call for independent audits, warning Denmark could face infringement proceedings under the AI Act. The UN Special Rapporteur on Extreme Poverty, in an April 2025 report, cited Denmark as an example of “digital welfare states” that risk undermining socioeconomic rights when AI is deployed without adequate safeguards.

Current Status: Denmark has not suspended its welfare fraud algorithms, but scrutiny is intensifying. Watchdogs expect the Ombudsman’s late-2025 review to be pivotal. If it validates Amnesty’s findings, Denmark may face stronger reform demands or even EU-driven action in 2026.

ORIGINAL NEWS POST:

Denmark’s Automated Welfare System Under Fire for Surveillance and Discrimination

Amnesty International’s new report, Coded Injustice: Surveillance and Discrimination in Denmark’s Automated Welfare State, warns that Denmark’s growing use of algorithmic fraud-control tools is harming human rights. The report raises concerns about privacy, equality, and the treatment of marginalized groups, challenging Denmark’s reputation as a digital governance leader.

Algorithms Target Vulnerable Groups

Denmark’s welfare agencies, Udbetaling Danmark (UDK) and Arbejdsmarkedets Tillægspension (ATP), use more than 60 AI and machine-learning models to detect suspected benefit fraud. These tools analyze data from millions of residents, including income, family structure, and residency details. Amnesty says the systems often flag low-income households, migrants, and racialized groups, even when there is no clear evidence of wrongdoing.

The organization argues that many of these models reflect deep-rooted biases in Danish society. Metrics such as “foreign affiliation” or “unusual residency patterns” lead to higher scrutiny of people from minority backgrounds. As a result, many residents face investigations that undermine their right to equality.

Human Rights Concerns

Amnesty’s report outlines several major risks linked to Denmark’s automated welfare system.

  • Privacy Invasion: The models rely on large-scale data collection. Applicants must share personal details about work, family life, citizenship, and housing. Amnesty says this level of monitoring forces people to give up basic privacy protections.

  • Discrimination: The report describes how algorithms frequently single out racial and ethnic minorities. These patterns mirror earlier discrimination cases in Europe, such as the Dutch childcare benefits scandal, where automated tools harmed many minority families.

  • Digital Exclusion: Older adults, people with disabilities, and others who struggle with digital systems face barriers to accessing support. As more welfare services shift online, these groups risk losing essential benefits.

Lack of Transparency and Oversight

Amnesty also criticizes Denmark for giving residents little information about how these systems work. UDK does not consistently notify people when an algorithm flags them, which makes it harder for individuals to challenge decisions or request a review. This gap in transparency increases the chance of error, especially for those already struggling with the system.

Despite backlash against tools like the “Gladsaxe Model” and the STAR algorithm, the Danish government continues to expand its use of data-driven oversight. Amnesty says this raises questions about Denmark’s commitment to fairness in public administration.

Calls for Reform

The report urges Denmark to halt discriminatory fraud-control algorithms and conduct independent audits. It also highlights Denmark’s obligations under international law and the EU AI Act. The Act classifies welfare-fraud detection as high-risk and requires strong transparency and fairness measures. Amnesty argues that Denmark must align with these standards well before the 2030 compliance deadline.

UDK has denied claims of illegal conduct. It says human caseworkers always review flagged cases and stresses that the systems do not amount to social scoring. Amnesty counters that Denmark must provide clear evidence and independent evaluations to back those claims.

Need Help?

If you have questions or concerns about AI guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their audit experts can offer valuable insight and ensure you’re informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing, and AI Governance News by subscribing to our newsletter.