UPDATE — SEPTEMBER 2025:
Since the November 2024 investigation into Försäkringskassan’s AI-driven fraud screening, Sweden has shifted from denial to guarded remediation under growing domestic and EU pressure. In early 2025 the Ministry of Health and Social Affairs asked the Inspectorate for Social Security (ISF) to re-examine the system. ISF’s interim note in May and a fuller report in June warned of possible indirect discrimination, flagged likely conflicts with the EU AI Act’s requirements for high-risk systems, and noted potential overlap with the Act’s ban on social-scoring practices. In July, Försäkringskassan conceded “shortcomings in transparency and risk management,” added temporary manual oversight on top of the automated scoring, and commissioned external algorithmic fairness audits due by year-end, but it has not suspended the system.
Opposition parties in the Riksdag have pushed for an immediate shutdown, while the government has opted for corrective measures and closer supervision. Meanwhile, Brussels is informally watching the case as a bellwether for welfare-sector AI: Sweden must prove conformity as the AI Act phases in through 2026 or risk infringement action. As of now, the tool remains in use under scrutiny; audit results and any enforcement signals from EU bodies later in 2025 will determine whether Sweden proceeds with a reengineered, compliant model or is forced to pull the plug.
ORIGINAL NEWS STORY:
Swedish Welfare Agency’s AI System Faces Accusations of Discrimination and Bias
The Swedish Social Insurance Agency (Försäkringskassan) is under fire after an investigation revealed that its artificial intelligence (AI) system unjustly flagged marginalized groups for welfare fraud inspections. Amnesty International has called for the immediate discontinuation of the system, citing violations of equality, privacy, and social security rights.
Investigation Finds Disparate Impact
An investigation by Lighthouse Reports and Svenska Dagbladet found that the AI system disproportionately flagged women, people with foreign backgrounds, low-income individuals, and applicants without university degrees. The system assigns risk scores to welfare recipients and directs higher-scoring cases to fraud investigators.
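To make the reported mechanism concrete, the sketch below shows how threshold-based risk routing typically works. It is hypothetical: Försäkringskassan has not disclosed its model, its input features, or its cutoff, so the class names, scores, and threshold here are invented purely for illustration.

```python
# Hypothetical sketch of threshold-based risk routing. Försäkringskassan's
# actual model, inputs, and cutoff are undisclosed; everything here is
# invented to illustrate the mechanism described above.
from dataclasses import dataclass

@dataclass
class WelfareCase:
    case_id: str
    risk_score: float  # assumed output of an undisclosed scoring model, in [0, 1]

def route_cases(cases: list[WelfareCase], threshold: float = 0.8) -> list[WelfareCase]:
    """Send cases scoring at or above the threshold to fraud investigators."""
    return [case for case in cases if case.risk_score >= threshold]

cases = [WelfareCase("A-1", 0.92), WelfareCase("A-2", 0.35), WelfareCase("A-3", 0.81)]
print([case.case_id for case in route_cases(cases)])  # ['A-1', 'A-3']
```

The core criticism follows from this design: if the scoring model assigns systematically higher scores to certain demographic groups, those groups cross the investigation threshold more often regardless of their actual fraud rates.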
According to the investigation, the system operates on an assumption of potential fraud rather than neutral eligibility review. Many flagged individuals experienced delays and added legal hurdles before receiving benefits.
Amnesty Raises Human Rights Concerns
Amnesty International reviewed the findings and described the system as discriminatory and dehumanizing. David Nolan, Senior Investigative Researcher at Amnesty Tech, said the algorithm penalizes people based on gender, background, income, and education level. He warned that this approach violates fundamental rights.
Amnesty also criticized the agency for treating flagged applicants with immediate suspicion, rather than as individuals entitled to social support.
Lack of Transparency and Independent Testing
Försäkringskassan refused to release detailed information about how the system works. As a result, investigators relied on aggregate data obtained from Sweden’s Inspectorate for Social Security (ISF).
Using six statistical fairness measures, including demographic parity and predictive parity, analysts found consistent evidence of bias. These results suggest that the system systematically disadvantages certain groups.
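The sketch below illustrates two of the six named measures on invented data: demographic parity compares the rate at which each group is flagged, while predictive parity compares the precision of flags within each group. The figures are toy values, not the investigators' aggregate data from ISF.

```python
# Toy illustration of two of the six fairness measures named in the
# investigation. All values below are invented; the raw data behind the
# analysis has not been published.

def flag_rate(flagged: list[bool]) -> float:
    """Demographic parity compares this across groups: P(flagged | group)."""
    return sum(flagged) / len(flagged)

def flag_precision(flagged: list[bool], is_fraud: list[bool]) -> float:
    """Predictive parity compares this across groups: P(fraud | flagged, group)."""
    hits = [fraud for was_flagged, fraud in zip(flagged, is_fraud) if was_flagged]
    return sum(hits) / len(hits)

# Hypothetical groups with identical true fraud rates (1 case in 4).
group_a_flagged = [True, True, True, False]
group_a_fraud   = [True, False, False, False]
group_b_flagged = [True, False, False, False]
group_b_fraud   = [True, False, False, False]

print(flag_rate(group_a_flagged), flag_rate(group_b_flagged))  # 0.75 vs 0.25
print(flag_precision(group_a_flagged, group_a_fraud),
      flag_precision(group_b_flagged, group_b_fraud))          # ~0.33 vs 1.0
```

In this toy data both groups contain the same share of actual fraud, yet one is flagged three times as often and its flags are far less precise: a gap on both measures, which is the pattern the investigators reported across protected groups.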
Prior Warnings Were Dismissed
The ISF raised similar concerns in a 2018 report, which found that the algorithm failed to ensure equal treatment. Försäkringskassan rejected those findings at the time and argued that the analysis lacked validity.
Critics say the agency’s response shows a long-standing failure to address discrimination risks in automated decision-making.
Potential Conflict With the EU AI Act
The system may conflict with the EU AI Act, which entered into force in August 2024. The law places strict requirements on high-risk AI systems used in public services and bans social scoring practices.
Observers warn that Sweden could face consequences similar to the Netherlands’ childcare benefits scandal, where biased algorithms wrongly accused thousands of families of fraud.
Nolan warned that continued use of the system could lead to a major rights failure. He said there is sufficient evidence to suggest violations of equality and non-discrimination principles.
Growing Scrutiny of Welfare AI Systems
The controversy in Sweden follows similar findings by Amnesty International in Denmark and France. These cases have raised broader concerns about the use of AI in welfare administration across Europe.
As governments expand automated systems in public services, rights groups continue to call for transparency, strong governance, and safeguards to protect vulnerable populations.
Need Help?
Keeping up with all the AI ordinances, regulations, and laws around the world that could impact you and your business is a challenge, so don’t hesitate to reach out to BABL AI. Their Audit Experts can answer your concerns and questions while offering valuable insight.


