Two recent events drew my attention to the importance of democratising information about AI in governance and popular culture. The first was the congressional hearing of Facebook CEO Mark Zuckerberg (following the Cambridge Analytica story), during which he at one point had to explain rather basic principles of Facebook's revenue model. The second was an EU parliament panel (October 2017) organised by STOA (Science and Technology Options Assessment) on AI, aimed at preparing audiences for new technologies and their potential impact. I will go into more detail on the latter and show that there is still a wide chasm between the expert view on the nature and potential of AI and the view held by non-experts such as EU parliamentarians and society at large. The need for further demystification and democratisation of AI is thus quite apparent if the field wants to earn a broad platform of trust and support from the general public.
The EU session consisted of a panel of four speakers (Peter J Bentley, Miles Brundage, Olle Häggström and Thomas Metzinger) and was moderated by cognitive psychologist Steven Pinker. The session succeeded in presenting the four experts' different views on the challenges and opportunities of AI, ranging from the dystopian to the utopian on one dimension and from the optimistic to the sceptical on the other. I most appreciated the opening words of moderator Steven Pinker, who gave a flood of quantitative examples showing that progress has, over time, led to improvements in quality of life, yet there is a trend toward negativity bias and denial of that progress. His position: it is fine to worry about potential threats such as nuclear war and climate change, but please not about some 'runaway AI'. Computer scientist Peter J Bentley was keen to point out the hard work that goes into AI research and productisation aimed at very specific tasks, and so dismissed the idea that any superintelligence will disrupt our ecosystem any time soon. Miles Brundage approached the subject from the perspective of the societal implications of artificial intelligence, arguing that conditional optimism is appropriate: as long as technical and governance challenges are addressed and we beware of anthropomorphisms, the development of AI can lead to Pareto improvements and an ethical leisure society. With the next two speakers, the debate shifted to a more cautionary narrative. Professor of mathematical statistics Olle Häggström works from an assumption of rational optimism (if we commit to doing the right things, we can expect the right outcomes); however, instrumental goals such as an AI's self-preservation would have to be addressed by setting up a Friendly AI framework that aligns its values with ours, and this must be done before AI reaches a superintelligent level. Häggström finds this a very difficult project, but one worth taking seriously.
Finally, professor of theoretical philosophy Thomas Metzinger warned against research on artificial consciousness and pleaded for a code of ethics that comes not out of industry but out of political institutions, together with a commitment to evidence-based risk management.
The spread of questions during the Q&A illustrated the chasm between the scope of AI and its understanding. One question that stood out was "What does AI mean for quantum computing?". This is by no means a silly question; on the contrary. But it took first prize for managing to ask about something whose potential is barely understood today in terms of something that is very hard to define in the first place: AI. Other questions revolved around the ability to use AI to identify terrorists, how to use AI to counter radical movements, how to provide safety nets around AI to avoid the proliferation of issues seen on the darknet, and finally whether interpreters should fear for their future. Right...
The domain of AI is not well defined, and it is dangerous to aggregate all manifestations of AI under one umbrella. There is no 'AI'. There are, rather, many different tools and techniques that are designed (through hard work) to solve specific problems. Now if we consider AI as a tag, then by assimilating topics tagged with AI, every individual will form an opinion on AI of their own. This is a quite natural but dangerous form of definition by tagging, as it is guaranteed to do at least one of the following things: 1) it will shape a different meaning for everyone; 2) it will suffer from tag reference bias once media and opinion makers start tagging AI in contexts that suit their purposes; 3) it will lead to meaning overload, where AI is attributed to almost any thinkable topic or concept, making it harder to have a focused discussion. This situation unequivocally leads to out-of-context discussions, misunderstandings, misattributions, confusion and wrong policies. All these consequences are probably much worse than the sudden awakening of some superintelligence out to destroy humanity. Of course, when you look up artificial intelligence in any encyclopedia, you will typically find half a definition, and that half is defined as the opposite of something else. Wikipedia, for instance, defines it as intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. Now there is already something to be said about artificial versus natural: where does one stop and the other begin? And then what about intelligence? That is a slippery slope. No wonder debates on the subject need extra attention from the moderator to retain focus and to add value and insight. In this particular parliamentary debate, where the audience was exposed to several aspects of the impact of AI, it became apparent that the questions were as diverse as the audience.
Questions about the relation of AI to quantum computing, about the rights of an AI agent in the context of the sex industry, about the impact on work, ... When questions span such a wide range of domains, it is either because the subject matter is extremely wide in scope and powerful, or because it is poorly defined and dressed in a veil of obscurity that instils fear and uncertainty in the wider audience, and in particular in policy makers. So what can be done about that? By all means the debate must continue; there is clearly increasing interest in the overall subject of AI. But more effort should be spent on demystifying the subject so that the audience becomes more comfortable with it within a common framework of understanding. Demystification should include clarifying what can be done today, how it is done, and to what purpose; the impact on overall ecosystems should be clarified as well. From that it should become clear that AI today consists of a collection of tools to solve particular problems, and that it is hard to develop those tools and hard to solve the problems they address. Excessive concern about the potentially devastating consequences of an AI overdrive will temper considerably once the definition of AI in its real-world context is better explained, and will more likely give way to an increased appreciation of the domain.