With the recent news about Facebook and Cambridge Analytica, we are rightly concerned about the power of algorithms to shape political debate and, more generally, our lives. The social credit model in China shows another way in which AI could influence all aspects of society. Based on these and other views, most policy makers in the West take a negative view of AI and the power of algorithms in society. In this post, I present a different, more optimistic view: AI could be part of the solution to the problems of Algorithmocracy and filter bubbles. I discussed some of the ideas below last week, when I spoke at the Economist Innovation Summit in London. Note that the views presented in this article are mine alone and are not related to any organization I am associated with. The scope of the article is confined to the impact on policy and democracy; it does not cover other aspects of the filter bubble (for example, product recommendations).
Filter bubbles and Algorithmocracy
The term Algorithmocracy has been proposed by Eli Pariser to capture the idea behind filter bubbles: computers will be able to crunch so much data that decisions will no longer need to be made through a form of representative democracy; instead, they will be derived from publicly held data points. What you say on Facebook (or whatever has replaced Facebook) will define the decisions that are made.
Farnam Street elaborates on the idea of filter bubbles and Algorithmocracy:
Algorithms create “a unique universe of information for each of us … which fundamentally alters the way we encounter ideas and information.” (Eli Pariser). Filter bubbles create echo chambers. We assume that everyone thinks like us, and this makes us forget other perspectives. This happens because the Internet tends to give us what we want based on our past (data) preferences. The algorithms act as a one-way mirror, reflecting and amplifying our views. Personalization via algorithms is bad for democracy because democracy requires citizens to see things from one another’s point of view. We also lose track of facts and rely instead on opinions.
The risks of populist opinion and limited viewpoints hijacking democracy were well known from the outset. Today, the percentage of people who say it is essential to live in a liberal democracy is declining. The problem is explained in America Is Living James Madison’s Nightmare (The Atlantic). The Founders designed a government that would resist mob rule, but they did not anticipate how strong the mob could become. Direct democracies risk being hijacked by populist opinion; as the article says: “Had every Athenian citizen been a Socrates, every Athenian assembly would still have been a mob.” Hence, we have several safeguards to protect democracy, such as representative democracy, plurality in the media, etc. The power of mob opinion was compounded by the introduction of media formats that were not based on text. This problem is discussed eloquently in one of my favourite books of all time, Neil Postman’s Amusing Ourselves to Death.
Neil Postman considers the eighteenth century, the “Age of Reason”, the pinnacle of rational argument because of the medium of debate (i.e. the written word). The introduction of media such as television changed that dynamic by shifting the emphasis to presentation rather than content.
With social media and the Internet, we have shifted the debate (if you can still call it that!) to warp speed. The debate is also dominated by mob rule and passions rather than rational deliberation. More to the point, social media creates bubbles and echo chambers in which citizens only see their own views reflected back at them, thereby preventing any rational discussion. In the long term, this could change our society profoundly. We all worry about the Orwellian scenario (1984), i.e. an external entity that controls us. But the bigger issue may be the one raised by Aldous Huxley in Brave New World, where we voluntarily hand over control to an external entity (in this case AI) due to our own ‘infinite capacity for distraction’, as Huxley put it. (see Huxley vs Orwell)
Here are some observations:
- Filter bubbles are a human problem, not an algorithm problem, because algorithms reflect human preferences. We are blaming AI for the failings of people. At best, filter bubbles could be seen as a limitation of supervised learning algorithms, but fundamentally the data drives the algorithm and humans drive the data
- Cambridge Analytica and its like are not legal even under current legislation. These are also not, strictly speaking, a problem of an algorithm
- The limitations of the media format, as explained above, stem from the format itself, which encourages a rapid response over a thoughtful one. Once again, this is not related to the algorithm
- Techniques to overcome clickbait are also part of current efforts to overcome the limits of social media (and are not related to algorithms per se)
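The first observation, that the data drives the algorithm and humans drive the data, can be made concrete with a toy simulation. The sketch below is purely illustrative (it is not any real platform's logic): a neutral recommender that simply shows more of what gets clicked, when fed the clicks of a user with only a mild preference for one topic, progressively narrows that user's exposure.

```python
import random

# Hypothetical sketch of a filter-bubble feedback loop. The recommender has
# no agenda: it recommends topics in proportion to past engagement. The user
# is only mildly biased towards topic "A", yet the loop amplifies that bias.
random.seed(42)

topics = ["A", "B", "C"]
clicks = {t: 1 for t in topics}  # start from uniform exposure

def recommend():
    # Show topics in proportion to how often they were clicked before
    total = sum(clicks.values())
    weights = [clicks[t] / total for t in topics]
    return random.choices(topics, weights=weights)[0]

def user_clicks(topic):
    # The user engages 60% of the time with "A", 40% with anything else
    return random.random() < (0.6 if topic == "A" else 0.4)

for _ in range(2000):
    shown = recommend()
    if user_clicks(shown):
        clicks[shown] += 1

share_a = clicks["A"] / sum(clicks.values())
print(f"Share of engagement captured by topic A: {share_a:.0%}")
```

A 60/40 preference is hardly extreme, but because every click feeds back into the next recommendation, topic A tends to crowd out the others: the human drives the data, and the data drives the algorithm.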
So, the best way of promoting a return to a thoughtful and balanced discussion may be through education: not in the academic sense, but in the sense of diverse and validated viewpoints and the promotion of systems thinking
Hence, the question is: can AI be part of such an informed, education-led solution to overcome Algorithmocracy and filter bubbles?
In Thinking, Fast and Slow, Kahneman points out the pitfalls of cognitive biases. Cognitive biases are mental shortcuts which are often not accurate. AI that can be trained to understand and interpret cognitive biases could provide a ‘slow’ approach: more thoughtful, more nuanced, considering all options. Already, an algorithmic approach is providing a more objective approach to recruiting using AI
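As a thought experiment, such a 'slow' pass could take the form of a deliberate re-ranking step that trades a little relevance for viewpoint diversity, countering the confirmation-bias loop instead of feeding it. The sketch below is purely illustrative: the articles, stance scores, and relevance scores are invented, and a real system would have to infer them with trained models.

```python
# Hypothetical 'fast vs slow' feed ranking. Stance is a made-up score in
# [-1, 1] (against vs for policy X); relevance is a made-up engagement score.
articles = [
    {"title": "Op-ed strongly for policy X", "stance": 0.9, "relevance": 0.95},
    {"title": "Analysis mildly for X", "stance": 0.4, "relevance": 0.90},
    {"title": "Neutral explainer on X", "stance": 0.0, "relevance": 0.70},
    {"title": "Analysis mildly against X", "stance": -0.4, "relevance": 0.60},
    {"title": "Op-ed strongly against X", "stance": -0.9, "relevance": 0.50},
]

def fast_feed(items, k=3):
    # 'System 1': pure relevance, i.e. more of what the user clicked before
    return sorted(items, key=lambda a: -a["relevance"])[:k]

def slow_feed(items, k=3, diversity_weight=0.5):
    # 'System 2': greedily build the feed, rewarding items whose stance is
    # far from what has already been chosen, so opposing views surface
    chosen, pool = [], list(items)
    while pool and len(chosen) < k:
        def score(a):
            if not chosen:
                return a["relevance"]
            novelty = min(abs(a["stance"] - c["stance"]) for c in chosen)
            return a["relevance"] + diversity_weight * novelty
        best = max(pool, key=score)
        chosen.append(best)
        pool.remove(best)
    return chosen

print([a["stance"] for a in fast_feed(articles)])
print([a["stance"] for a in slow_feed(articles)])
```

With these made-up numbers, the fast feed contains only pro-X items, while the slow feed includes at least one opposing view: the algorithm itself becomes the mechanism for exposing the reader to other perspectives.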
This will work if we do not overload AI with our own biases.
Overall, we worry about the biases of AI, but we do not talk about projecting our own biases onto AI. Take religion. All religion is inherently faith based, and an acceptance of faith implies a suspension of reason. From an AI perspective, religion hence does not ‘compute’. Religion is a human choice (bias). But if AI rejects that bias, then AI risks alienating vast swathes of humanity.
Hence, each of our biases contributes to the filter bubble. But if we design the algorithm itself to overcome cognitive biases (it is worth looking at this list of cognitive biases), AI can become part of the solution by creating awareness, a form of education
In this post, I have presented a more granular and balanced view of AI and democracy, and shown how AI could be part of the solution to overcome the problems of Algorithmocracy and filter bubbles. Overall, I remain an AI optimist, a position that is not easy to adopt! AI can, however, be seen as a way of overcoming the current scenario of social media driven filter bubbles.