
How a Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud

Dutch tax authorities used algorithms to automate an austere and punitive war on low-level fraud—the results were catastrophic.

Last month, Dutch Prime Minister Mark Rutte and his entire cabinet resigned after a year and a half of investigations revealed that, since 2013, 26,000 innocent families had been wrongly accused of social benefits fraud, partly because of a discriminatory algorithm.


Forced to pay back money they didn’t owe, many families were driven to financial ruin, and some were torn apart. Others were left with lasting mental health issues; people of color were disproportionately the victims. 

Relentless investigative reporting and a string of parliamentary hearings, both before and after the mass resignations, made the role of algorithms and automated systems in the scandal clear: an austere and punitive war on low-level fraud had been automated, leaving little room for accountability or basic human compassion. Worse still, the automated system discriminated on the basis of nationality, flagging people with dual nationalities as likely fraudsters. 

The childcare benefits scandal (kinderopvangtoeslagaffaire in Dutch) is a cautionary tale of the havoc that black box algorithms can wreak, especially when they are weaponized to target society’s most vulnerable. It’s a problem that is not unique to the Netherlands: the Australian government faced its own “robodebt” scandal when its automated system for flagging benefits fraud stole nearly $1 billion from hundreds of thousands of innocent people. That case, too, came down to a poorly designed algorithm without human oversight, an extension of cruel austerity politics with untold collateral damage. 


Here’s how the scandal unfolded. 

Parents generally have to pay for childcare in the Netherlands. However, based on a parent’s income, the Dutch state reimburses a portion of the costs at the end of each month.
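
To give a sense of how the benefit works in practice, here is a minimal sketch of an income-based reimbursement of this kind. Every threshold and percentage in it is hypothetical, invented purely for illustration; the real benefit is set by official government rate tables.

```python
# Hypothetical numbers for illustration only: the real Dutch childcare
# benefit uses official rate tables. The point is simply that the monthly
# reimbursement scales with income, so lower-income parents depend on it most.
def monthly_reimbursement(monthly_childcare_cost: float, annual_income: float) -> float:
    if annual_income < 30_000:
        rate = 0.90   # hypothetical: low incomes get most of the cost back
    elif annual_income < 60_000:
        rate = 0.60
    else:
        rate = 0.30
    return monthly_childcare_cost * rate

print(monthly_reimbursement(1_200, 25_000))   # 1080.0
print(monthly_reimbursement(1_200, 70_000))   # 360.0
```

The detail that matters for what follows is that the benefit is largest for the lowest earners, so a retroactive demand to repay years of it falls hardest on the families least able to absorb it.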

The fear of people gaming welfare systems is far from new and not particular to the Netherlands, but the rise of xenophobic far-right populism has placed it center stage in the national political discourse. Anti-immigrant politics have become increasingly normalized, and immigrants are often painted as a threat to the Dutch welfare state. 

As a result, a hardline stance on benefits fraud has become nearly uniform across the political spectrum (even among many left-wing parties) over the past decade.

“Who pays the bill?” asked Geert Wilders, leader of the anti-immigrant Dutch Party for Freedom (the second largest party in the country), during a speech in 2008. “It’s the people of the Netherlands, the people who work hard, who properly save money and properly pay their taxes. The regular Dutch person does not receive money as a gift. Henk and Ingrid pay for Mohammed and Fatima.” 


What followed was essentially a take-no-prisoners war on benefits and welfare fraud. Like many nations, the Netherlands has long automated aspects of its welfare system, paired with human oversight and review. From 2013 on (though these techniques may have been used earlier), authorities used algorithms to build risk profiles of residents supposedly more likely to commit fraud, then used automated systems with little oversight to scan benefits applicants and flag likely fraudsters, who were then forced to pay back money they never actually owed.

Before the increased use of automated systems, the decision to cut off a family from benefits payments would have to go through extensive review, said Marlies van Eck, an assistant professor at Radboud University who researches automated decision making in government agencies and who previously worked for the national benefits department. Now, such choices have increasingly been left to algorithms, or algorithms themselves have acted as their own form of review.

“Suddenly, with technology in reach, benefits decisions were made in a really unprecedented manner,” she said. “In the past, if you worked for the government with paper files, you couldn’t suddenly decide from one moment to the next to stop paying people benefits.” 


After years of denial, an investigation from the Dutch Data Protection Authority found that these algorithms were inherently discriminatory because they took into account variables such as whether someone had a second nationality. 
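
The tax authority’s actual model, feature set, and weights have never been made public, but a minimal sketch makes the DPA’s finding concrete: if a “second nationality” flag carries any weight at all in a risk score, two otherwise identical applicants end up on different sides of the fraud-review threshold. All weights and thresholds below are hypothetical, invented purely for illustration.

```python
# Illustrative only: the real system's features and weights are not public.
# This toy scorer shows how giving "second nationality" any weight at all
# pushes one of two otherwise identical applicants over a review threshold
# while the other stays below it.
from dataclasses import dataclass

@dataclass
class Applicant:
    income: float              # annual income in euros
    missing_paperwork: bool    # e.g. a form returned without a signature
    second_nationality: bool   # the variable the DPA found discriminatory

def risk_score(a: Applicant) -> float:
    score = 0.0
    if a.income < 25_000:      # hypothetical weight
        score += 0.2
    if a.missing_paperwork:    # hypothetical weight
        score += 0.2
    if a.second_nationality:   # any non-zero weight here builds in disparate treatment
        score += 0.2
    return score

FLAG_THRESHOLD = 0.5           # hypothetical cutoff for "likely fraudster"

pair = [
    Applicant(income=22_000, missing_paperwork=True, second_nationality=False),
    Applicant(income=22_000, missing_paperwork=True, second_nationality=True),
]
for applicant in pair:
    verdict = "flagged" if risk_score(applicant) >= FLAG_THRESHOLD else "not flagged"
    print(applicant.second_nationality, verdict)   # False not flagged / True flagged
```

The point is not that the real model looked like this, only that any model that encodes nationality as a risk factor will, by construction, treat dual nationals as riskier.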

As devastating as the scandal is, it treads familiar territory. The Netherlands continues to pilot discriminatory predictive-policing technology that perpetuates ethnic profiling, for example. 

Marc Schuilenburg is a professor of sociology and criminology at Vrije Universiteit Amsterdam and the author of the book Hysteria: Crime, Media, and Politics. Having spent a significant portion of his career studying the use of predictive policing algorithms, he argues that the child benefits scandal has to be seen within the context of this cultural and political shift towards punitive populism. 

“The toeslagenaffaire [benefits scandal] is not an isolated problem,” Schuilenburg told Motherboard over Zoom. “It fits into a long tradition in the Netherlands of security policies that are designed to make clear that the days of tolerance are over, and that we are locked into this fight to the death with crimes such as welfare fraud. This fits into this whole notion of populist hysteria which I discuss in my book.” 


“You see that these policies are spoken of in terms of war, in a hysterical military vocabulary—‘there is a war against fraud,’” he continued. “Through this language and these policies this brand of punishment populism prevails.” 

For those classified as fraudsters by the automated system, there was little recourse, because proper follow-up investigations were rare. In some cases, even something as simple as a forgotten signature left families with the effectively irremovable label of having committed fraud. Once that label was there, they were forced to retroactively pay the government back for all the childcare benefits they had received, which for many amounted to thousands of euros, and in some cases even tens of thousands. 

An investigation from Dutch daily newspaper Trouw also found that parents accused of fraud were given the label of “intent / gross negligence”, meaning they weren’t even eligible for a payment plan to gradually pay off debts that were themselves false. 

Victims were locked out of other benefits as well, such as the housing allowance and healthcare allowance. Many were forced to file for bankruptcy. The results were catastrophic. 


“I believe in the States you have this saying ‘you’re one paycheck away from being homeless?’” van Eck told Motherboard over the phone. “Yeah, well that’s basically what we saw in this affair.” 

“If you miss two or three months of payments, especially for the child care benefits, you may have to quit your job,” she explained. If someone quit their job to care for their children as a result, she said, they’d end up having financial difficulties.  “There was this huge snowball effect because everything is connected with each other in the Netherlands. It was horrible.” 

In one of the more egregious examples of the lack of humanity in the authorities’ approach, a report from Trouw revealed that the tax office had baselessly applied the Pareto principle to its punishments, assuming without evidence that 80 percent of the parents investigated for fraud were guilty and 20 percent were innocent. 

The victims of the overzealous tax authorities were disproportionately people of color, highlighting how algorithms can perpetuate discriminatory structures and institutional racism. 


According to Nadia Benaissa, a policy advisor at the digital rights group Bits of Freedom, fraud detection systems that use variables like nationality can create problematic feedback loops, much as predictive policing algorithms built on flawed assumptions can become self-fulfilling prophecies that lead to the over-policing of minority groups. 

Crucially, she said, we should place blame on the human individuals behind the creation and use of the algorithm rather than reify the technology as being the main driver. 

“Systems and algorithms are human-made, and do exactly what they’ve been instructed to do,” she said. “They can act as an easy and sort of cowardly way to take the blame off yourself. What’s important to understand is that often algorithms can use historical data without any proper interpretation of the context surrounding that data. With that pattern, you can only expect social problems like institutional racism to increase. The result is a sort of feedback loop that increases social problems that already exist.” 
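
To make the feedback loop Benaissa describes concrete, here is a toy simulation that is not based on any real system: two groups commit fraud at exactly the same low rate, but the group with slightly more historical flags receives most of the scrutiny each year, and every new flag feeds back into the next year’s allocation. All numbers are invented for illustration.

```python
# Toy simulation of a self-reinforcing scrutiny loop (not any real system).
# Both groups behave identically; only the recorded history differs at the start.
import random

random.seed(0)
TRUE_FRAUD_RATE = 0.02                        # identical for both groups by construction
flags = {"group_a": 10, "group_b": 12}        # small, arbitrary initial skew in past flags
BUDGET = 1000                                 # investigations available per year

for year in range(1, 6):
    # Most of the budget goes to whichever group "looks" riskier on paper.
    riskier = max(flags, key=flags.get)
    allocation = {g: int(BUDGET * (0.8 if g == riskier else 0.2)) for g in flags}
    for group, n in allocation.items():
        hits = sum(random.random() < TRUE_FRAUD_RATE for _ in range(n))
        flags[group] += hits                  # every hit feeds back into the "history"
    print(f"year {year}: {flags}")
# After a few rounds, group_b has far more recorded cases than group_a, even
# though the underlying behavior of the two groups is identical.
```

Because next year’s attention is driven by recorded cases rather than by underlying behavior, the initial skew compounds, and the system reads the growing gap as confirmation that it was right to focus where it did.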

While some efforts to increase algorithmic transparency have been made recently (such as an algorithm register from the municipality of Amsterdam), many of the automated systems in use in society remain opaque, said van Eck, even for researchers.  

“Transparency is certainly a major issue. As a researcher, it’s difficult because these algorithms remain invisible,” van Eck said. “If I want to know something, I have trouble finding a person who can talk to me about it. And, if they just buy a software system, it could be that nobody actually knows how it works. Transparency is important not just for citizens, but also on the administrative level.” 

Beyond transparency, safeguards and accountability are especially important when algorithms are given enormous power over people's livelihoods, but as of now little of that exists in the Netherlands. And, in the meantime, smart algorithms and automated systems continue to take over a larger and larger share of administrative procedures. 

For now, the families wrongly accused of fraud are waiting to be given €30,000 each in compensation, but that won’t be enough to make up for the divorces, broken homes, and the psychological toll that resulted from the affair. 

Meanwhile, despite the gravity of the scandal, the resignation of Mark Rutte and his cabinet is largely symbolic. Though he resigned, he continues to lead a caretaker government and will be on the ballot in the national elections scheduled for next month. Rutte’s conservative VVD party is expected to win handily, meaning that both he and many of his ministers will likely return to their posts. 

At the end of every government scandal, the words “never again” are thrown around a lot, but the combination of few strong ethical safeguards for algorithms and an increasingly automated government leaves those words with little meaning.