There are many articles that point to the risks of AI. These risks are real, but many of these articles rely on scaremongering and sensationalism. If we take a medium- to long-term view, we need to think differently about the risks of AI.
Here is why:
a) We do not take the risks of AI seriously because they are generational
i.e. things that become commonplace in a decade or two are unfamiliar to older generations, much like the reluctance of many older people today to use smartphones and similar technology. Our generation simply cannot fully understand the risks, since we will not be the primary generation living with them.
b) AI is discussed in the context of science fiction (Star Wars, Star Trek, Terminator, 2001: A Space Odyssey, etc.). Many people may not take AI risks seriously because they see AI as a fantasy from Hollywood movies.
c) It is difficult to fully visualise and anticipate transformative technologies; the agricultural and industrial revolutions were similarly hard to foresee in their time. For this reason, AI risks may not be taken seriously.
d) AI can make mistakes no human would make – for example, generating an image of a horse with five legs. Again, these risks lie outside the realm of understanding of most people.
e) Beyond the Turing test: AI may find it difficult to understand human intent. The current understanding of AI is based on the “computational theory of mind,” of which the best-known example is the Turing test. But the Turing test offers evidence only of information processing – it does not ask whether the machine itself can think. Wittgenstein proposed an alternative theory of intelligence based on language.
According to Wittgenstein, language is a tool, and the use and the actions into which language is woven can be modelled as rules. Wittgenstein used the term “language-game” to designate forms of language simpler than the entirety of a language itself, “consisting of language and the actions into which it is woven” and connected by family resemblance (Familienähnlichkeit).
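The idea that meaning is rule-governed use can be illustrated with Wittgenstein’s own builder’s language-game from the Philosophical Investigations, in which a builder calls “block”, “pillar”, “slab”, or “beam” and an assistant responds by fetching the corresponding stone. The sketch below is a minimal, hypothetical model of that game; the rule table and function names are illustrative assumptions, not any established formalism:

```python
# A minimal sketch of Wittgenstein's "builder's language-game":
# meaning as rule-governed use, not as internal computation.
# The vocabulary and action names below are illustrative assumptions.

# The rules of this particular game: each call is woven into one action.
RULES = {
    "block!": "fetch block",
    "pillar!": "fetch pillar",
    "slab!": "fetch slab",
    "beam!": "fetch beam",
}

def respond(call: str) -> str:
    """The assistant 'understands' a call only as the action that
    the rules of this game tie it to."""
    action = RULES.get(call.lower())
    if action is None:
        # A call outside the game has no meaning within it.
        return "no move in this game"
    return action

print(respond("Slab!"))   # fetch slab
print(respond("Hello?"))  # no move in this game
```

The point of the sketch is that “Slab!” has meaning only within the game whose rules connect it to an action; the same utterance outside the game is simply not a move. This contrasts with the Turing-test framing, which evaluates outputs without asking what game the words are part of.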
The above offers a different perspective from which to think about the risks and challenges of artificial intelligence.