UPDATE — SEPTEMBER 2025: Since the November 2024 investigation into Försäkringskassan’s AI-driven fraud screening, Sweden has shifted from denial to guarded remediation under growing domestic and EU pressure. In early 2025 the Ministry of Health and Social Affairs asked the Inspectorate for Social Security (ISF) to re-examine the system. ISF’s interim note in May and a fuller June report warned of possible indirect discrimination, flagged likely conflicts with the EU AI Act’s high-risk requirements, and noted potential overlap with the Act’s ban on social-scoring-like practices. In July, Försäkringskassan conceded “shortcomings in transparency and risk management,” added temporary manual oversight atop automated scoring, and commissioned external algorithmic fairness audits due by year-end, but it has not suspended the system.
Opposition parties in the Riksdag have pushed for an immediate shutdown, while the government has opted for corrective measures and closer supervision. Meanwhile, Brussels is informally watching the case as a bellwether for welfare-sector AI: Sweden must prove conformity as the AI Act phases in through 2026 or risk infringement action. As of now, the tool remains in use under scrutiny; audit results and any enforcement signals from EU bodies later in 2025 will determine whether Sweden proceeds with a reengineered, compliant model—or is forced to pull the plug.
ORIGINAL NEWS STORY:
Swedish Welfare Agency’s AI System Faces Accusations of Discrimination and Bias
The Swedish Social Insurance Agency (Försäkringskassan) is under fire following an investigation revealing that its artificial intelligence (AI) system unjustly flagged marginalized groups for welfare fraud inspections. Amnesty International has called for the immediate discontinuation of the system, citing violations of equality, privacy, and social security rights.
An investigation by Lighthouse Reports and Svenska Dagbladet uncovered that Försäkringskassan’s AI system disproportionately flagged women, individuals with foreign backgrounds, low-income earners, and people without university degrees. The AI assigns risk scores to applicants, directing those with high scores to fraud investigators under an assumption of “criminal intent.”
David Nolan, Senior Investigative Researcher at Amnesty Tech, criticized the system, saying, “The Swedish Social Insurance Agency’s intrusive algorithms discriminate against people based on their gender, ‘foreign background,’ income level, and level of education. This is a clear violation of rights.”
The investigation found that flagged individuals often faced delays and legal hurdles in accessing their welfare entitlements. Amnesty International, which reviewed the findings, described the system as dehumanizing, with flagged individuals treated with immediate suspicion.
Attempts to access detailed information about the system’s inner workings were blocked by Försäkringskassan. However, investigative teams analyzed aggregate data obtained from Sweden’s Inspectorate for Social Security (ISF). By testing that data against six statistical fairness metrics, including demographic parity and predictive parity, the teams confirmed systemic bias in the algorithm.
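To illustrate what two of the named metrics measure, here is a minimal sketch in Python. The function names and the numbers are hypothetical, chosen for illustration; this is not the investigators’ code or the actual Försäkringskassan data. Demographic parity compares how often each group is flagged; predictive parity compares how often a flag turns out to be correct.

```python
# Illustrative sketch only: two fairness metrics from the investigation,
# computed on hypothetical flag/outcome counts (not real agency data).

def demographic_parity_ratio(flagged_a, total_a, flagged_b, total_b):
    """Ratio of flag rates between group A and group B (1.0 = parity)."""
    return (flagged_a / total_a) / (flagged_b / total_b)

def predictive_parity_gap(true_pos_a, flagged_a, true_pos_b, flagged_b):
    """Difference in precision (share of flags that were genuine fraud)
    between groups (0.0 = parity)."""
    return true_pos_a / flagged_a - true_pos_b / flagged_b

# Hypothetical numbers: group A is flagged twice as often as group B,
# yet flags in group A are no more likely to be correct.
print(demographic_parity_ratio(200, 1000, 100, 1000))  # 2.0: A flagged 2x as often
print(predictive_parity_gap(20, 200, 10, 100))         # 0.0: identical precision
```

A pattern like the one in these hypothetical numbers, where one group is flagged far more often without the extra flags being any more accurate, is the kind of disparity such metrics are designed to surface.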
The ISF had already raised concerns in a 2018 report, concluding that the algorithm failed to ensure equal treatment. Despite this, Försäkringskassan dismissed the findings, arguing that the analysis lacked merit.
The system may contravene the EU AI Act, which came into force in August 2024. The Act mandates strict governance and transparency rules for high-risk AI systems and bans tools used for social scoring. Critics warn that Sweden risks a scandal similar to the Netherlands’ childcare benefits fiasco, where biased algorithms falsely accused thousands of families of fraud.
Nolan emphasized the risks, stating, “If the system continues, Sweden may sleepwalk towards a scandal similar to the Netherlands. There is enough evidence to suggest that the system violates the right to equality and non-discrimination. Therefore, the system must be immediately discontinued.”
This controversy follows Amnesty International’s recent reports highlighting the dangers of biased AI systems in Denmark and France. The organization has consistently advocated for stronger AI regulations across the European Union, emphasizing human rights protections.
Amnesty’s findings in Sweden highlight the growing concerns over algorithmic discrimination in public services. As AI becomes more prevalent in welfare systems, critics stress the need for transparent governance and rigorous safeguards to prevent harm to vulnerable populations.
Need Help?
Keeping up with all the AI ordinances, regulations, and laws around the world is difficult, and they could impact you and your business. Don’t hesitate to reach out to BABL AI. Their Audit Experts can answer your questions and concerns while offering valuable insight.