Algorithms in Government: A Magic Formula or a Divisive Force?

It is easy to be seduced by the power of algorithms to deliver public services. Do you want to target beneficiaries of government programs and services precisely and accurately?

Well, there’s an algorithm for that. Ditto for real-time monitoring of resources, personalization of government interactions, fraud and corruption prevention, anticipation (if not outright prediction) of events and behavior, and more. In such instances, algorithms can seem like a magic formula to crack some of government’s most persistent problems.

Then again, experience has taught us that algorithms can be divisive and destructive, whether in the hands of governments, government-affiliated partners, or forces hostile to public sector actors. Algorithms have been used to sow distrust in public information and government machinery (including elections), and they have been held responsible for perpetuating discrimination in the delivery of services and for unfavorably profiling segments of the population. People have even (rightly) blamed algorithms for everyday injustices, such as their children being denied college admission or being refused bail by judges reliant on automated systems.

Clearly, algorithms are only as good as the intent behind them. But in the algorithmic world, even purported good deeds can produce socially and politically disruptive outcomes that the algorithm’s implementers may not have understood or foreseen.

‘F**k the algorithm’ recently became the colorful rallying cry of protesters against the UK government’s decision, during the pandemic, to award A-level grades based on an algorithm rather than actual exams. Almost 40% of students received lower grades than they had anticipated, and many took to the streets and the courts for redress, forcing the government to retract the grades. Subsequent reviews suggested that the algorithm might have been biased, both reinforcing prejudices embedded in historical data and favoring smaller schools. Critics also took issue with the limited engagement and accountability mechanisms the government provided for students and parents.
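To see how such bias can arise mechanically, consider a minimal, hypothetical sketch of the kind of standardization the reviews described. This is not the actual UK model; the cutoff value, data shapes, and function names below are all illustrative assumptions. The key idea: small cohorts keep their teacher-assessed grades, while larger cohorts are re-graded to fit the school’s historical grade distribution, regardless of what the current students achieved.

```python
def standardize(students, historical_shares, cutoff=15):
    """Hypothetical grade standardization (illustrative only).

    students: list of (name, teacher_grade, teacher_rank), rank 1 = best.
    historical_shares: list of (grade, share) from the school's past
    results, best grade first, with shares summing to 1.0.
    """
    if len(students) < cutoff:
        # Small cohorts: teacher assessments stand unchanged, which
        # tended to favor small (often private) schools.
        return {name: grade for name, grade, _ in students}

    # Large cohorts: order students by teacher rank, then impose the
    # school's historical distribution. A strong student at a
    # historically low-performing school is pulled down regardless
    # of individual ability.
    ordered = sorted(students, key=lambda s: s[2])
    awarded, result = 0, {}
    for grade, share in historical_shares:
        quota = round(share * len(ordered))
        for name, _, _ in ordered[awarded:awarded + quota]:
            result[name] = grade
        awarded += quota
    # Any students left over after rounding get the lowest grade.
    lowest_grade = historical_shares[-1][0]
    for name, _, _ in ordered[awarded:]:
        result[name] = lowest_grade
    return result


# Twenty students all assessed "A" by their teachers, at a school whose
# history is mostly Cs: only the top two keep an A under this scheme.
cohort = [(f"student_{i}", "A", i) for i in range(1, 21)]
history = [("A", 0.10), ("B", 0.30), ("C", 0.60)]
print(standardize(cohort, history))
```

Under a scheme like this, the historical data, not the student, does most of the grading, which is precisely the dynamic that drove the backlash.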

The Dutch government faced a similar reversal when, in 2020, a court ruled that a digital welfare fraud detection system called Systeem Risicoindicatie (SyRI) was unlawful because it did not comply with the right to privacy under the European Convention on Human Rights. The law had passed in 2014 without a single dissenting vote in parliament and ostensibly contained numerous provisions to discourage ‘fishing expeditions’ and to ensure that any harm to individuals whose data was processed by the system was proportionate to the allegations of fraud. The court, however, found these provisions inadequate and faulted the law and the system on many grounds, including a lack of transparency, individuals’ inability to track or challenge the use of their data, the risk of discrimination, unsatisfactory attention to purpose limitation and data minimization, and insufficient independent oversight.

Complicating matters for governments, particularly those in developing countries eager to introduce or expand the use of algorithms in the public sector, is the fact that most of the experience and lessons learned so far reflect the realities of developed countries, whose technical, human, institutional, and infrastructural capacity exceeds that of developing countries. The issues of greatest interest in advanced economies are likely to differ from those in developing countries, whether because of differing stages of algorithmic maturity or, more likely, because of differing priorities and policy objectives.

This spotlight presents preliminary observations drawn from a high-level review of two cases (in Izmir, Turkey, and Belgrade, Serbia) and an analysis of secondary material; cases centered on security have not been considered. The observations focus on data governance-related design and implementation issues specific to developing country governments that are considering algorithmic decision-making services.