We like to think that injustice makes itself known loudly. When something goes wrong in a public system, alarms should ring, and someone should take responsibility—or be held accountable if they don’t. But in 2020 in Gothenburg, injustice arrived quietly, dressed up as efficiency.
For the first time, the city used an algorithm to assign students to schools. After all, figuring out school zones and admissions is a huge administrative headache for any municipality. What could be better than a machine to optimize distances, preferences, and capacity? The system was meant to serve public efficiency: it was presented as neutral, streamlined, and objective.
But something went terribly wrong. Hundreds of children were assigned to schools miles from their homes—across rivers and fjords, over major highways, in neighborhoods they had never visited and had no connection to. Parents stared at the decisions in disbelief. Had anyone checked whether a 13-year-old could safely walk that route in winter? What logic guided these choices? Were their stated preferences simply ignored? No one in the school administration seemed able—or willing—to explain what had happened or fix the mistakes.
I watched this unfold as a researcher in technology and a former lawyer, but also as a mother. My then 12-year-old son was one of the children affected by the algorithm. Our frustration grew as the school administration failed to respond. Calmly, they told us we could appeal if we had a problem with our placement—as if it were a matter of personal taste. As if the issue was individual dissatisfaction, not a system-wide failure. Around kitchen tables across the city, the same confusion and anger simmered. Something was wrong, and the scale of the problem was becoming clearer by the day.
It took nearly a year for city auditors to confirm what many of us had suspected: the algorithm had been given flawed instructions. It calculated distances “as the crow flies,” not the actual walking routes. Gothenburg has a major river running through it. Failing to account for that meant children faced hour-long commutes. For many, reaching the other side of the river by walking or cycling—as the law says is the proper way to get to school—was simply impossible.
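To make concrete how misleading a straight-line measurement can be in a city split by water, here is a minimal sketch in Python. The coordinates and the walking-route figure are invented for illustration, not taken from the city's data: two addresses can sit about a kilometre apart on the map while the nearest bridge makes the actual walk several times longer.

```python
import math

def crow_flies_km(lat1, lon1, lat2, lon2):
    """Great-circle ("as the crow flies") distance in kilometres."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Invented coordinates: a home and a school on opposite banks of a river,
# roughly one kilometre apart in a straight line.
home = (57.700, 11.950)
school = (57.709, 11.950)

straight_line = crow_flies_km(*home, *school)

# A real walking route must detour to a bridge or ferry and back again.
# The figure below is a placeholder, not a measured route.
walking_route = 6.5

print(f"Straight-line distance: {straight_line:.1f} km")
print(f"Plausible walking route: {walking_route:.1f} km")
```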
After an outcry from families, procedures were improved for the following school year. But for roughly 700 children already affected by the faulty algorithm, nothing changed. They would spend their entire junior high years in the “wrong” schools.
The official line was that individual appeals were enough. But that misses the point. Algorithms don’t just make isolated decisions; they create systems of decisions. When 100 children are wrongly placed in schools on the opposite riverbank, they take the spots meant for others. Those children are then pushed to different schools, displacing others in turn. Like dominoes, the errors cascade. By the fifth or sixth displacement, the injustice becomes almost impossible to detect, let alone challenge and prove in court.
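A toy simulation shows how the dominoes fall. Everything below is invented—five schools with one seat each, five students whose first choice is the nearest school—but a single student wrongly sent to school A is enough to displace every student down the chain:

```python
# Each school has one remaining seat; each student lists schools in
# order of preference and takes the first one with a free seat.
schools = {"A": 1, "B": 1, "C": 1, "D": 1, "E": 1}

students = {
    "student 1": ["A", "B", "C", "D", "E"],
    "student 2": ["B", "C", "D", "E"],
    "student 3": ["C", "D", "E"],
    "student 4": ["D", "E"],
    "student 5": ["E"],
}

def place(prefs, seats):
    for school in prefs:
        if seats[school] > 0:
            seats[school] -= 1
            return school
    return None  # no seat left anywhere

seats = dict(schools)
seats["A"] -= 1  # one child wrongly placed in school A takes its last seat

for name, prefs in students.items():
    assigned = place(prefs, seats)
    print(f"{name}: wanted {prefs[0]}, got {assigned}")
```

Every student ends up one school further from their first choice, and the last one has no seat at all—yet none of them can trace the problem back to the original error from their own case file.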
This algorithmic injustice isn’t an abstract problem, nor is it unique to Sweden. It painfully echoes recent scandals across Europe. One is the Post Office scandal in the UK, where the Horizon IT system falsely accused hundreds of post office operators of theft, leading to prosecutions, bankruptcies, and even imprisonment. For years, the system’s output was treated as nearly infallible. Human testimony was bent to the authority of the machine. Another example is the childcare benefits scandal in the Netherlands, where a system used by the Dutch tax authority wrongly flagged thousands of parents as fraudsters. Families were plunged into debt. Many lost their homes. Children were taken into foster care. In both cases, the algorithmic failures continued for many years, as the automated systems operated behind a veil of technical complexity and institutional defensiveness. Mistakes piled up. Harm got worse. Accountability lagged behind.
Back in Gothenburg in 2020, I realized that simply appealing my son’s placement wouldn’t be enough. You can’t fix a systemic problem with individual fixes. So, as part of a research project, I sued the city to see what happens when algorithms go to court. I didn’t challenge just my son’s specific placement—I challenged the legality of the entire decision-making system and everything it produced. I argued that the algorithm’s design broke the law.
Since I couldn’t access the system—my repeated requests to see the algorithm were ignored—I couldn’t show it to the court. Instead, I carefully analyzed hundreds of placements, using addresses and school choices to figure out how the system must have worked, and presented that as evidence.
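The sketch below shows the shape of such an analysis—not the one actually filed with the court, and every number in it is invented. For each placement, compare the school that is nearest by straight-line distance with the one nearest by walking route; if assignments keep matching the straight-line ranking wherever the two rankings disagree, the system almost certainly measured "as the crow flies."

```python
# Invented placement records: for each student, the distance (in km) to
# each candidate school by straight line and by walking route, plus the
# school the system actually assigned.
records = [
    ({"North": 0.9, "South": 1.4}, {"North": 6.2, "South": 1.6}, "North"),
    ({"North": 1.1, "South": 1.3}, {"North": 7.0, "South": 1.5}, "North"),
    ({"North": 2.4, "South": 1.0}, {"North": 2.6, "South": 1.1}, "South"),
]

crow_hits = route_hits = 0
for crow, route, assigned in records:
    if assigned == min(crow, key=crow.get):
        crow_hits += 1
    if assigned == min(route, key=route.get):
        route_hits += 1

print(f"Assignments matching the nearest school by straight line: {crow_hits}/{len(records)}")
print(f"Assignments matching the nearest school by walking route: {route_hits}/{len(records)}")
```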
The city’s defense was shockingly simple. They claimed the system was just a “support tool.” They said they did nothing wrong and offered no proof: no technical documents, no code, no explanation of how things worked.
And, to my surprise, they didn’t have to. The court put the burden of proof on me. The judges said it was my job to show the system was illegal. My analysis of the decisions wasn’t enough. Without direct evidence of the code, I couldn’t meet the standard of proof. The case was thrown out. In other words: prove what’s inside the black box, or lose.
This—more than the original administrative failure—keeps me up at night. We know algorithms sometimes fail. That’s exactly why we have courts: to force disclosure, to examine, and to fix things. But when legal procedures stay stuck in the past, and when judges don’t have the tools, skills, or authority to question algorithmic systems, injustice wins. While public agencies use opaque systems on a large scale, citizens facing life-changing outcomes are told to appeal—one by one—without ever seeing the code behind it.
The lessons from the Post Office and the Dutch child benefit scandals echo what I found in Gothenburg. When courts trust technology instead of questioning it, and when the burden of proof falls on those harmed rather than those who built and used the system, algorithmic injustice doesn’t just appear—it can last for years. Even when the technology itself is simple, like in Gothenburg—where the mistake was using straight-line distance instead of actual walking routes—citizens still faced a black box they had to uncover to challenge it. In this case, it was a glass box wrapped in many layers of black paper.
It’s time to demand that our courts open the black boxes of algorithmic decision-making. We need to shift the burden of proof to the party that actually has access to the algorithm, and create legal rules for effective, system-wide fixes. Until we update our legal procedures to match the realities of a digital society, we’ll keep stumbling from scandal to scandal. When injustice is delivered quietly by code, accountability must answer loudly.
Charlotta Kronblad researches digital transformation at the University of Gothenburg.
Frequently Asked Questions
FAQs based on the article “I took an algorithm to court in Sweden. The algorithm won” by Charlotta Kronblad.
Beginner-Level Questions
1. What is this article about?
It’s about a real legal case in which a Swedish researcher challenged a government algorithm in court. The algorithm had assigned her son—and hundreds of other children—to distant schools, and she argued that the whole decision-making system was unlawful. The court ruled against her; in effect, the algorithm won.
2. Why did the algorithm win?
The court placed the burden of proof on the author. Because the city never disclosed the code or any technical documentation, she could not prove that the system was illegal, and the case was thrown out. The algorithm “won” not because it was shown to be correct, but because no one had to show how it worked.
3. Can you really take an algorithm to court?
Not directly. You can’t sue a piece of software. But you can challenge the decisions it produces by suing the government agency or company that used it. In this case, the author sued the City of Gothenburg over the school placements generated by its automated system.
4. What kind of decision did the algorithm make?
It assigned students to schools. In 2020, Gothenburg used an algorithm for the first time to place children based on distance, stated preferences, and school capacity. Because it measured distance “as the crow flies” rather than along actual walking routes, hundreds of children were placed in schools far from home, some on the other side of the river.
5. Is this a common problem?
Yes. More and more governments and companies use algorithms to make decisions about benefits, loans, hiring, and even criminal sentencing. When the rules are too simple, people with unusual circumstances often get unfairly rejected.
Intermediate-Level Questions
6. Why did the author think the algorithm was wrong?
She argued that the algorithm’s design broke the law. It calculated distances as the crow flies instead of along real walking routes, ignoring the river that cuts through Gothenburg, so many of the resulting placements could not reasonably be reached on foot or by bike. And because the flaw was systemic, she argued that individual appeals could never fix it.
7. What was the court’s reasoning for siding with the algorithm?
The court accepted the city’s framing of the system as a mere “support tool” and put the burden of proof on the author. Her analysis of hundreds of placements was not considered sufficient; without direct evidence of the code, which the city never disclosed, she could not meet the standard of proof, and the case was dismissed.
8. Does this mean algorithms are always right in court?
No. If an algorithm is biased, uses bad data, or breaks the law, its decisions can still be challenged. But as this case shows, as long as the burden of proof falls on the people affected rather than on those who built and deployed the system, proving that in court is very hard.