Last Saturday, August 16th, Dr. Amy Zalman, only a matter of weeks into her role as President and CEO of the World Future Society, gave a talk, followed by a very stimulating discussion, to a growing group of London-based futurists convened by David Wood.
To put this into context, futurism differs from strategy, our own prime concern. Where the strategist juggles knowledge of the past, current realities and future expectations into a plan of action and reaction against fast-moving opponents, the futurist looks ahead to the probabilities and possibilities that might affect strategy beyond the immediate struggle.
Many in the audience were engineers and scientists used to hypotheses that must be tested against evidenced data, so some may have been surprised to hear Amy refer to professional futurism in more humanistic terms and to the futurist, amongst many other analogues, as an artist.
The clue to this leap lies in a reference we have already made to a book on strategy by Professor Lawrence Freedman – or rather to its closing chapters, where he emphasises the difficulty of planned forward strategy under conditions of complexity, where every action creates layer upon layer of reaction.
Some futurists entertain the science-fiction fantasy that some amazing AGI (artificial general intelligence) will provide humanity with the tool to overcome this problem of complexity – but any ‘humanist’ knows why this is a pipe dream.
Our AGI, unless it were synonymous with God (with us as its paradoxical creator), not only cannot know all possible facts – the flaw in Big Data thinking – but cannot know all possible uses of such data by independent forms of consciousness such as, well, us. Rival consciousnesses are not predictable and can, indeed, choose to be non-rational as a rational act of resistance.
In a nightmare scenario, this AGI-God, tasked with giving a picture of all possible scenarios and planning for the future, might be minded to eliminate the most incalculable of variables – its own creator and all organic life – to ensure the accuracy of its predictions. Thus would Nietzsche be proved wrong: the creator of God would be killed by God.
But, beyond these fantasies, what can futurists do to help strategic thinking, especially when they leave the territory of possible technological prediction and move into that of cultural and political foresight? This is where Amy Zalman is perhaps helping us by referring to the discipline as an art.
The future is a known unknown – it will happen (short of our own extermination, and even then it will continue regardless), but we have no idea exactly what will happen in an hour, let alone a hundred years. The culturally received future is always somewhere on a continuum between science fantasy and propaganda.
The ‘art’ lies along this continuum – with David Brin’s madcap if well-argued ideas at one end and the use of the space race as power politics at the other. A professional, ethical and useful Aristotelian ground lies somewhere in the middle: the art of arguing for probabilities and assessing the possibilities, potentialities and risks of this or that course of action in the light of probability.
Is this different from strategy? Perhaps not, as we practise it at TPPR, but the conscious attempt to play out highly variable long-term scenarios, and to have them critiqued by those with practical, even cynical, experience of the human condition, might well save us all a lot of blood and gold in the long run.
Cautiously, lest futurists run away with themselves in an institutional desire for a share of the social and funding cake, we should listen to what they have to say and help the engineers and scientists parse their fears and fantasies with the logical unreason (yes, I wrote that!) of actual human behaviour.