Even just five years back, Artificial Intelligence (AI) was still the stuff of science fiction, confined to research labs and tech giants’ showcases. It was pretty similar to the auto show concept car – there to be admired but not to be touched. Today, though, AI is everywhere. Whether it’s virtual assistants scheduling our meetings, facial recognition software or increasingly autonomous cars, AI is making itself felt in all our private and professional lives. One thing is certain – AI is here to stay. But quite what its impact over the next five years will be, nobody can really say.
Assuming that the economic models for artificial intelligence prove viable and, thus, that funding is available, five important factors are in the mix if AI is to fulfill its true potential. What are they? Processing power capacity, availability of representative data, development of more powerful algorithms, adaptation of user interfaces, and, last but not least, a willingness to get the right policies in place.
Moore’s Law, which predicted that the number of transistors in a dense integrated circuit would double approximately every two years from the early 1970s, is now bang up against the kind of physical limits that could block the progress of the all-consuming power of AI. That’s unless there’s a major technological breakthrough. Let’s not forget, after all, that the ultimate form of intelligence is carbon- as opposed to silicon-based …
That said, even if processing power remains as it is today, incredible speed gains are being made through the simultaneous use of AI on multiple microprocessors. Think about graphic cards and their GPUs (the chip that performs the actual calculations needed to render and display images). These single-chip processors were originally designed for video games but their capacity now to handle parallelization of multi-data processing means they’re lending themselves to the use of complex AI algorithms and neural networks.
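To make that concrete, here’s a minimal sketch of why neural-network arithmetic parallelizes so well: each output of a matrix-vector product (the core operation of a network layer) can be computed independently of the others. A thread pool here stands in, very loosely, for a GPU’s thousands of cores; all names and numbers are purely illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

# A neural-network layer boils down to a matrix-vector product.
# Every output row is independent of the others, which is exactly
# the kind of workload a GPU can run simultaneously across cores.

def dot(row, vec):
    return sum(r * v for r, v in zip(row, vec))

def parallel_matvec(matrix, vec, workers=4):
    # Each row's dot product is dispatched as an independent task.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda row: dot(row, vec), matrix))

weights = [[1, 0], [0, 1], [2, 3]]
inputs = [10, 5]
print(parallel_matvec(weights, inputs))  # → [10, 5, 35]
```

The point isn’t the speed of this toy (threads in Python won’t beat a loop here); it’s the shape of the problem: thousands of identical, independent calculations, which is precisely what GPU hardware was built for.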
Medium term, any headway we can make with the development of processing power is more likely to be held back by the limits of engineering than by fundamental physics.
As for data, the fuel in AI’s combustion engine: its collection, access and ownership could become a real challenge when it comes to competition law. These could also become issues of national sovereignty and protectionist policies.
Microsoft’s buyout of LinkedIn, for example, led to protests from Salesforce, which feared that the deal could give Microsoft an unfair advantage over competitors through access to the social media platform’s trove of data about companies and their employees. At state level, Russia has blocked LinkedIn’s activities on the basis that the platform contravenes local laws requiring firms to store data on Russian citizens within the nation’s borders. The EU, too, has recently adopted strict laws on the protection of personal data.
As a counterpoint, though, the influence of the Open Data movement is growing fast in the public sphere. But the use of this open data for commercial ends by foreign companies, without even indirect compensation, could call into question its continued financing by the taxpayer.
Businesses, too, are questioning the benefits they can get from pooling their own data (even after de-identification) with that of third parties, some of whom could be competitors, to improve the relevance of their AI platforms. Most of these doubts, though, will likely fall away once AI platform operators prove, as Cloud software developers have succeeded in doing, that they can guarantee robust protection of confidential personal data.
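As a toy illustration of what de-identification before pooling might involve (a sketch only; the field names and salting scheme are invented for the example), direct identifiers can be replaced with salted hashes, so that records from the same source stay joinable without being traceable back to a person by anyone who lacks the secret salt:

```python
import hashlib

# Hypothetical pseudonymization step before sharing data with an
# AI platform: the salt stays with the data owner, never with the
# pooled dataset, so the mapping cannot be reversed by recipients.

SECRET_SALT = b"keep-this-out-of-the-shared-dataset"

def pseudonymize(identifier: str) -> str:
    # Same input always maps to the same token, so joins still work.
    return hashlib.sha256(SECRET_SALT + identifier.encode()).hexdigest()[:16]

record = {"email": "jane@example.com", "purchases": 12}
shared = {"user": pseudonymize(record["email"]), "purchases": record["purchases"]}
print(shared)  # no raw email leaves the building
```

Real-world de-identification is of course harder than this (indirect identifiers, linkage attacks), which is exactly why businesses want operators to prove their guarantees first.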
AI’s most significant advances for specialized tasks such as voice recognition or image processing, for which there’s plenty of available data, come from the use of Deep Learning neural networks.
One of the most stunning applications of neural networks is that of Generative Adversarial Networks. These, in very simple terms, enable two AIs to learn by ‘talking’ to one another. If we were to transpose this method to text generation, we’d be getting a whole lot closer to the Turing Test – and to showing how a machine’s ability to exhibit intelligent behavior could be equivalent to, or indistinguishable from, that of a human. This kind of complex machine learning, though, represents a big user challenge: its results are all but impossible to explain. That means there’s a real risk that the public might reject AI on the grounds that it might recommend, or even carry out, unjustifiable and far-reaching actions.
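To give a feel for the adversarial idea, here is a deliberately toy, one-dimensional caricature (not a real GAN; there are no neural networks here, and the update rules are invented for illustration): two single-parameter ‘players’ improve against each other until the generator’s output is indistinguishable from the real data.

```python
import random

# "Real" data clusters around a hidden value. The generator learns
# a single parameter g (its output); the discriminator learns a
# single parameter d (its working notion of what "real" looks like).
# Each side updates against the other's current behavior.

random.seed(0)
REAL_MEAN = 4.0  # unknown to the generator

def real_sample():
    return REAL_MEAN + random.gauss(0, 0.1)

g, d = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    # Discriminator step: pulled toward real samples, pushed to
    # keep its distance from the generator's current fakes.
    d += lr * ((real_sample() - d) - 0.5 * (g - d))
    # Generator step: move output toward whatever the discriminator
    # currently treats as real.
    g += lr * (d - g)

print(round(g, 1))  # g ends up close to the hidden real mean
```

The tell-tale GAN dynamic is visible even here: neither player is told the answer, yet the tug-of-war drives the generator onto the real distribution – and nothing in the final parameters ‘explains’ how it got there.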
However accurate the algorithm, AI’s success with users all hinges on the introduction of interfaces that foster collaboration between human beings and machines. The whole question of our ability to understand and control the algorithms we’re producing implies relevant ways of displaying and investigating the information they process. The validity of the suggestions they come up with has to be checked, and if need be invalidated, by the human user to avoid any risk of uncontrolled automation.
Conversation is a rather efficient way of accessing, sharing and developing knowledge. So it seems logical that the first means of interacting with an artificial intelligence, and of hastening its learning, has to be language, whether spoken or written.
The embedding of specialized software in the form of chatbots for instant messaging looks set to be a promising area. Virtual agents are already taking over from human operators to reply to common questions by giving answers or carrying out simple actions.
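A minimal sketch of such a virtual agent (the intents, keywords and replies are all invented for the example) can be as simple as keyword-based intent matching, with a hand-off to a human operator as the fallback:

```python
# Toy virtual agent: match a message against known intents and
# reply with canned answers; anything unrecognized goes to a human.

INTENTS = {
    "opening hours": ("hours", "open", "close"),
    "password reset": ("password", "reset", "login"),
}
REPLIES = {
    "opening hours": "We are open 9am-6pm, Monday to Friday.",
    "password reset": "You can reset your password at account settings.",
}
FALLBACK = "Let me transfer you to a human operator."

def answer(message: str) -> str:
    msg = message.lower()
    for intent, keywords in INTENTS.items():
        # First intent whose keyword appears in the message wins.
        if any(k in msg for k in keywords):
            return REPLIES[intent]
    return FALLBACK

print(answer("How do I reset my password?"))
```

Production chatbots replace the keyword table with trained intent classifiers, but the overall shape – classify, answer or act, escalate when unsure – is the same.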
In many cases, though, a single click calling up a standard page will remain more effective than some long explanation, with AI merely serving up the basic content.
AI’s arrival on the scene is bound to provoke ethical debate about privacy and policy. The concerns it’s set to raise range from possible job losses to universal basic income payments or some kind of ‘robot tax’. The strengthening of laws on personal data and debates over data sovereignty look likely to involve state intervention. So whether progress with these platforms is bolstered or hampered all hangs on the future decisions that get made.
Basically, to make a value judgment on the policies that emerge from this, we’re going to need to be able to accurately measure the concrete impact of AI on all the areas of life it affects. We need to look at the productivity gains it promises. Will AI wipe out certain roles, or might it actually add value to existing ones? The answers, and the policies that ensue, will all be different depending on whether AI is set to totally automate tasks or to provide a means of boosting worker efficiency. Right now it’s still much too early to make any kind of judgment call. Better we keep a watching brief on how things progress than bring in snap, flimsy regulatory policies that might stifle progress.
How things develop in these five areas will all determine the performance and role that AI will play in our economy and society in the coming years. As with any disruptive technology, diverging trends will make themselves felt before models are finally shaped and practices take effect.
When it comes down to it, let’s not forget one thing. Not even the most sophisticated form of AI has the remotest chance of predicting what’s ahead…