Swedish Welfare Agency’s AI System Faces Accusations of Discrimination and Bias

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 12/02/2024
In News

The Swedish Social Insurance Agency (Försäkringskassan) is under fire following an investigation revealing that its artificial intelligence (AI) system unjustly flagged marginalized groups for welfare fraud inspections. Amnesty International has called for the immediate discontinuation of the system, citing violations of equality, privacy, and social security rights.

An investigation by Lighthouse Reports and Svenska Dagbladet uncovered that Försäkringskassan’s AI system disproportionately flagged women, individuals with foreign backgrounds, low-income earners, and people without university degrees. The AI assigns risk scores to applicants, directing those with high scores to fraud investigators under an assumption of “criminal intent.”

David Nolan, Senior Investigative Researcher at Amnesty Tech, criticized the system, saying, “The Swedish Social Insurance Agency’s intrusive algorithms discriminate against people based on their gender, ‘foreign background’, income level, and level of education. This is a clear violation of rights.”

The investigation found that flagged individuals often faced delays and legal hurdles in accessing their welfare entitlements. Amnesty International, which reviewed the findings, described the system as dehumanizing, with flagged individuals treated with immediate suspicion.

Attempts to access detailed information about the system’s inner workings were blocked by Försäkringskassan. However, investigative teams analyzed aggregate data obtained from Sweden’s Inspectorate for Social Security (ISF). Testing against six statistical fairness metrics, including demographic parity and predictive parity, confirmed systemic bias in the algorithm.
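
For readers unfamiliar with these metrics, the sketch below shows how two of them, demographic parity and predictive parity, can be checked from aggregate counts. This is an illustrative Python example with hypothetical numbers, not the investigators’ actual code or the investigation’s data: demographic parity asks whether groups are flagged at similar rates, while predictive parity asks whether flags are similarly accurate across groups.

```python
# Illustrative sketch only -- hypothetical counts, not data from the investigation.
from dataclasses import dataclass

@dataclass
class GroupStats:
    name: str
    total: int          # applicants in this group
    flagged: int        # applicants flagged for fraud investigation
    flagged_true: int   # flagged applicants later confirmed as fraud

def demographic_parity_gap(groups):
    """Demographic parity: flag rates should be roughly equal across groups.
    Returns the largest difference in flag rate between any two groups."""
    rates = [g.flagged / g.total for g in groups]
    return max(rates) - min(rates)

def predictive_parity_gap(groups):
    """Predictive parity: precision (the share of flags that are correct)
    should be roughly equal across groups. Returns the largest difference."""
    precisions = [g.flagged_true / g.flagged for g in groups]
    return max(precisions) - min(precisions)

# Hypothetical groups: group_b is flagged three times as often, yet its flags
# are far less likely to be correct -- the pattern both metrics would surface.
groups = [
    GroupStats("group_a", total=10_000, flagged=300, flagged_true=150),
    GroupStats("group_b", total=10_000, flagged=900, flagged_true=180),
]

print(f"demographic parity gap: {demographic_parity_gap(groups):.3f}")  # 0.060
print(f"predictive parity gap:  {predictive_parity_gap(groups):.3f}")  # 0.300
```

A system that over-flags one group while producing less accurate flags for it would fail both checks, which is the kind of systemic pattern the investigation reported.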

The ISF had already raised concerns in a 2018 report, concluding that the algorithm failed to ensure equal treatment. Despite this, Försäkringskassan dismissed the findings, arguing that the analysis lacked merit.

The system may contravene the EU AI Act, which came into force in August 2024. The Act mandates strict governance and transparency rules for high-risk AI systems and bans tools used for social scoring. Critics warn that Sweden risks a scandal similar to the Netherlands’ childcare benefits fiasco, where biased algorithms falsely accused thousands of families of fraud.

Nolan emphasized the risks, stating, “If the system continues, Sweden may sleepwalk towards a scandal similar to the Netherlands. There is enough evidence to suggest that the system violates the right to equality and non-discrimination. Therefore, the system must be immediately discontinued.”

This controversy follows Amnesty International’s recent reports highlighting the dangers of biased AI systems in Denmark and France. The organization has consistently advocated for stronger AI regulations across the European Union, emphasizing human rights protections.

Amnesty’s findings in Sweden highlight the growing concerns over algorithmic discrimination in public services. As AI becomes more prevalent in welfare systems, critics stress the need for transparent governance and rigorous safeguards to prevent harm to vulnerable populations.

Need Help?

Keeping up with all the AI ordinances, regulations, and laws around the world that could impact you and your business is a challenge, so don’t hesitate to reach out to BABL AI. Their Audit Experts can answer your questions and concerns while offering valuable insight.
