Government and Artificial Intelligence

This month the US President's National Science and Technology Council [NSTC] issued an important report on 'Preparing for the Future of Artificial Intelligence'.

It is not its final word on the matter - another Report is due on the effect of AI-driven automation on jobs and the economy - but the general tone is, as you would expect in a technologically-driven culture, cautiously positive about AI's contribution to economic growth (at least in the US). The undertone, though, is one of concern about social cohesion and fairness and, above all, about appropriate regulatory regimes.

The Report is a good corrective to some of the speculative fantasies about AI. Although nothing is entirely certain in this field, it pushes the arrival of AI that matches or exceeds general human intelligence back beyond the next five Presidential terms. It reminds us that what is now at issue is not the threat of an emergent Artificial General Intelligence [AGI] but the application of narrow AI that exceeds human performance in a whole range of particular tasks. AI is not replacing humans, it is replacing the things that humans do - and that means jobs. Humans are still around but potentially unemployed.

There is perhaps more nervousness about something that more immediately affects the mainstream media's potential critiques of government competence. Joblessness is just a matter of periodic statistics (an elite neglect of the human dimension of employment that has perhaps helped fuel the Sanders and Trump revolts this year). Deaths caused in car and aircraft accidents, where a maturing technology is learning by doing, must be the nightmare that keeps investors, managers and politicians awake at night. Worries about safety seem paramount in an already anxious political culture.

AI is seen in this Report as so important to the US economy that AI education is to be embedded in general computer education. What the US needs is for its educated middle classes to see this technological innovation as positive, under conditions where the negative impact of automation will fall hardest on the least educated. If the Establishment thinks it has a problem with populism now, it would be as nothing compared to the problem it might have by the time of the next Presidential Election. The elite is thus forced into a progressive redistribution agenda for its own survival.

The Report also raises the dangerous power of the algorithms involved in AI, which may result in injustices and a lack of accountability. What is interesting here is the admission that 'there are inherent challenges in trying to understand and predict the behaviour of advanced AI systems'. The answer offered may be a rather weak one - reliance on ethics. Ethics are as liable to human perceptual error as the initial creation of the algorithms themselves. In a political system that has not always had strong ethical boundaries, the application of AI to weapons systems might cause a good deal of concern to non-Americans!

To be fair, the Report faces this problem of weapons-based AI head-on (in its Recommendation 23), but the instinct to use non-proliferation to deny anyone else the same tools, in the belief that the US is somehow ethically superior, lurks beneath the surface here. We are not yet going to take a strong view on the effects of AI, if only because the content of the follow-up Report is critical. The NSTC Report is a starting point, but it raises far more questions than it has answers, and we remain convinced that we may be underestimating AI's potential for disruption to society and democracy.