World stumbling zombie-like into a digital welfare dystopia

As humankind moves, perhaps inexorably, towards the digital “welfare” future, it needs to alter course significantly and rapidly to avoid stumbling zombie-like into a digital welfare dystopia. This is the warning message in the conclusions of a report by Philip Alston, UN Special Rapporteur on extreme poverty and human rights (my emphasis):

There is no shortage of analyses warning of the dangers for human rights of various manifestations of digital technology and especially artificial intelligence. But these studies focus overwhelmingly on the traditional civil and political rights such as the right to privacy, non-discrimination, fair trial rights, and the right to freedom of expression and information. With a handful of exceptions, none has adequately captured the full array of threats represented by the emergence of the digital welfare state. The vast majority of states spend very large amounts of money on different forms of social protection, or welfare, and the allure of digital systems that offer major cost savings along with personnel reductions, greater efficiency, and fraud reduction, not to mention the kudos associated with being at the technological cutting edge, is irresistible. There is little doubt that the future of welfare will be integrally linked to digitization and the application of AI.

But as humankind moves, perhaps inexorably, towards the digital welfare future it needs to alter course significantly and rapidly to avoid stumbling zombie-like into a digital welfare dystopia. Such a future would be one in which: unrestricted data matching is used to expose and punish the slightest irregularities in the record of welfare beneficiaries (while assiduously avoiding such measures in relation to the well-off); ever more refined surveillance options enable around-the-clock monitoring of beneficiaries; conditions are imposed on recipients that undermine individual autonomy and choice in relation to sexual and reproductive choices, and in relation to food, alcohol and drugs and much else; and highly punitive sanctions are able to be imposed on those who step out of line.

It will reasonably be objected that this report is unbalanced, or one-sided, because the dominant focus is on the risks rather than on the many advantages potentially flowing from the digital welfare state. The justification is simple. There are a great many cheerleaders extolling the benefits, but all too few counselling sober reflection on the downsides. Rather than seeking to summarize the analysis above, a number of additional observations are in order.

First, digital welfare state technologies are not the inevitable result of ‘scientific’ progress, but instead reflect political choices made by humans. Assuming that technology reflects pre-ordained or objectively rational and efficient outcomes risks abandoning human rights principles along with democratic decision-making.

Second, if the logic of the market is consistently permitted to prevail it inevitably disregards human rights considerations and imposes “externalities on society, for example when AI systems engage in bias and discrimination … and increasingly reduce human autonomy”.

Third, the values underpinning and shaping the new technologies are unavoidably skewed by the fact that there is “a diversity crisis in the AI sector across gender and race”. Those designing AI systems in general, as well as those focused on the welfare state, are overwhelmingly white, male, well-off, and from the Global North. No matter how committed they might be to certain values, the assumptions and choices made in shaping the digital welfare state will reflect certain perspectives and life experiences. The way to counteract these biases and to ensure that human rights considerations are adequately taken into account is to ensure that the “practices underlying the creation, auditing, and maintenance of data” are subjected to very careful scrutiny.

Fourth, predictive analytics, algorithms and other forms of AI are highly likely to reproduce and exacerbate biases reflected in existing data and policies. In-built forms of discrimination can fatally undermine the right to social protection for key groups and individuals. There therefore needs to be a concerted effort to identify and counteract such biases in designing the digital welfare state. This in turn requires transparency, and broad-based inputs into policy-making processes. The public, and especially those directly affected by the welfare system, need to be able to understand and evaluate the policies that are buried deep within the algorithms.

Fifth, especially but not only in the Global North, the technology industry is heavily oriented towards designing and selling gadgets for the well-off such as driverless and flying cars and electronic personal assistants for multi-tasking businessmen [sic]. In the absence of fiscal incentives, government regulation, and political pressures, it will devote all too little attention to facilitating the creation of a welfare state that takes full account of the humanity and concerns of the less well-off in any society.

Sixth, to date astonishingly little attention has been paid to the ways in which new technologies might transform the welfare state for the better. Instead of obsessing about fraud, cost savings, sanctions, and market-driven definitions of efficiency, the starting point should be on how existing or even expanded welfare budgets could be transformed through technology to ensure a higher standard of living for the vulnerable and disadvantaged, to devise new ways of caring for those who have been left behind, and more effective techniques for addressing the needs of those who are struggling to enter or re-enter the labour market. That would be the real digital welfare state revolution.

_____________________

Featured Image: Zombie Crowd by Björn Söderqvist

One comment

  1. Those prophets of doom should really take a short course on the realities and limitations of AI.

    But, of course, we know from history that a good dictatorship does not really need AI. Even the Assyrians ran a great “terror state” with just bows and arrows, cuneiform writing and a couple of chariots. Besides, terror is much more effective when the violence is a little blind.
