This is an op-ed about the future of the internet. While speculative, it is an attempt to demonstrate how Artificial Intelligence at scale in a human world could have disastrous impacts without AI regulation and AI ethics to protect us.
GPT-3 stands for Generative Pre-trained Transformer 3. As you likely already know, GPT-3 is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2), created by the Microsoft-funded OpenAI (which was supposed to be a nonprofit firm).
2021 has been an NLP-explosion year in terms of Artificial Intelligence activity. As a natural language processor and generator, GPT-3 is a language engine trained on vast amounts of existing content and code, from which it learns patterns and syntax. It can produce unique outputs based on prompts, questions and other inputs.
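The "autoregressive" part simply means the model predicts one token at a time, feeding each prediction back in as its own input. A toy sketch of that loop, with a tiny hand-written probability table standing in for GPT-3's 175-billion-parameter network (the table and words are illustrative assumptions, not real model weights):

```python
import random

# A tiny stand-in for a trained language model: for each token, the
# probabilities of the tokens that tend to follow it. GPT-3 learns
# billions of such statistical patterns from web-scale text.
bigram_model = {
    "the":      {"internet": 0.6, "future": 0.4},
    "internet": {"is": 1.0},
    "is":       {"algorithmic": 0.5, "fake": 0.5},
}

def generate(prompt, max_tokens=3, seed=0):
    """Autoregressive generation: sample the next token from the
    model's distribution, append it, and repeat."""
    random.seed(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = bigram_model.get(tokens[-1])
        if dist is None:  # the model has no learned continuation
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the internet"))
```

GPT-3 works the same way in outline, except the "table" is a deep neural network conditioned on thousands of preceding tokens rather than just the last word.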
You could argue that the internet is already a fake place, crammed with spam and misinformation, with an Ads-driven SEO incentive structure that creates behavior modification at scale. It breeds a clickbait culture and dopamine-loop addicts on TikTok, and a bro-bashing culture of trolls on Reddit or Twitter. Now, as if the current state of things weren't bad enough (an internet that is already more algorithmic than human), it's bound to get worse.
The NLP explosion of GPT-3-like technologies will make it possible to automate content at scale, and currently we have no idea how this could impact the internet. GPT-3 can generate landing pages, produce human-like podcasts, and clone most forms of human content found online, outputs that would be more or less indistinguishable from a human creator's.
Internet platforms like YouTube, Facebook, TikTok and others already have a hard time moderating their content. Imagine if a GPT-3-like technology were set loose on the internet and, in a short span of time, say a few years, created more content than humans are capable of producing. The internet is already overwhelmed with attention-grabbing spam and an attention economy that only makes Ad giants like Facebook and Google more profitable through a rigged system of algorithms.
The incentives of current algorithms have already led to a perverse internet, dopamine-loop-induced feed-scrolling zombies, and a generation or two that have grown up with the internet and mobile phones instead of a normal social life. We've engineered a generation built to consume on the internet, and the next generation of AI and GPT-3-like technologies will only make it worse. The on-boarding to the metaverse, both corporate and consumer, could be and likely will be GPT-3 created.
The temptation to use AI to produce content at scale (mediocre or otherwise) will be too strong. The abuse of GPT-3 isn't a likelihood; it's a certainty in an internet and technology system of algorithms that is a race to the bottom. Content farms, article spinning, a failing digital media industry, independent creators trying to make a living: it's a zoo of content, misinformation, spam and mindless posts out there. As stories replace feeds, in some sense the quality of content is getting even worse. Now what if a lot of it were machine generated and weren't about real humans at all?
Furthermore, with regard to bias in AI, Stanford researchers have warned that GPT-3, as a foundational starting point for new AI innovation, could have widespread biases and other flaws that would be perpetuated. With huge, bleeding-edge NLP models that others will build upon, perhaps we should heed their warnings.
A multidisciplinary group of Stanford University professors and students wants to start a serious discussion about the increasing use of large, frighteningly smart "foundation" AI models such as OpenAI's GPT-3 natural language model.
GPT-3 is probably a foundational layer for the future because it was developed using huge quantities of training data and compute to reach state-of-the-art general-purpose performance. Percy Liang has brought this to the attention of the AI community: foundation models create "a single point of failure, so any defects, any biases which these models have, any security vulnerabilities . . . are just blindly inherited by all the downstream tasks," he says.
Liang leads a new group assembled by Stanford's institute for Human-Centered Artificial Intelligence (HAI) called the Center for Research on Foundation Models (CRFM). In 2018, researchers at Google built the transformational BERT (Bidirectional Encoder Representations from Transformers) natural language model, which now plays a role in nearly all of Google's search functions. Other companies took BERT and built new models on top of it. Now the same is happening with Facebook's RoBERTa (Robustly Optimized BERT Pretraining Approach) and OpenAI's GPT-3. These new AI models will entrench a dangerous homogeneity.
Sadly, in the innovation explosion of NLP and new AI developments, the profit motive encourages companies to punch the gas (without implementing proper AI ethics and regulation) on emerging tech instead of braking for reflection and study, says Fei-Fei Li, who was the director of Stanford’s AI Lab from 2013 to 2018 and now codirects HAI.
What if GPT-3 and copycat models created content so convincing, across various formats, that it amounted to a really good imitation of human work? How would you feel about such an internet? What if you listened to podcasts or watched videos that induced an AI hypnosis, content that didn't correspond to the opinions of actual people but was simply manufactured at scale for profit? You say AI for Good, but GPT-3 will most likely usher in an AI-for-Spam era of the internet, and the NLP explosion could be very inconvenient for the Ad-based internet. The quality of information could be degraded further, if that were even possible.
Algorithms haven't created a good or orderly internet. Instead, Google's SEO engine has incentivized certain behaviors and steered how we produce content online: toward attention-grabbing headlines, content that can be monetized with advertisements, and popularity-driven content, instead of content true to our real opinions, values or educational potential. GPT-3 will simply add another layer on top, so humans won't have to do much of the grunt work, but the product will be that much more bad content online.
The vanity induction of Ad giants and the corresponding misinformation (since they do not invest properly in moderating content) mean the quality of the internet in the 2020s will become deeply corrupted. GPT-3-like AI could add so much more mediocre content that mistrust of the internet reaches a critical mass, much as we already use tools like Facebook, Instagram, WhatsApp or even Netflix less out of mistrust of the product.
AI firms around the world are replicating what OpenAI's GPT-3 can do, and it will be a technology that's virtually impossible to moderate, because current AI regulations and AI ethics aren't strict enough. GPT-3 can be used to produce many types of content for commercial profit, and sooner or later it will be. This won't just automate many human jobs and activities; it will create an even more artificial internet.
NLP-type technologies promise even more than what we currently see is possible, on an internet where the majority of content was still created by people, the pre-NLP-bot internet. After around 2023, I think a lot of content online will start to be designed by GPT-3-type NLP systems. This could turn the current internet into a place where the sheer volume and lower quality of content produce potentially even more toxic consequences.
Now obviously this will impact the livelihoods of a lot of social media marketers and copywriters, automate some tasks of salespeople, and continue to disrupt the media at large. But it will also make the internet fundamentally less human. The world already creates 7.5 million new blog posts a day; what if there were ten times that? The competition for eyeballs, and the disruption to even our interest in reading content online, would be immense. The vulnerability of the internet to AI is greater than we realize today in 2021.
Microsoft, with OpenAI, could if it chose create another completely artificially created and maintained enterprise internet for the B2B ecosystem, a bit like Salesforce is doing with its business streaming network, Salesforce+. The potential for disruption is significant, and the prize in a hotly contested, Ads-driven attention economy is enormous, with the pressure likely leading to multiple misuses of this NLP automation at scale. That could lead to an exodus of people from certain platforms on the current internet. A disillusionment point at scale, due to the lower quality and higher volume of content, is highly likely.
What if you suddenly found out your favorite TikTok or Instagram creator wasn't even a real person, but simply a digital persona designed by AI and manufactured by GPT-3 to draw your attention for profit? Clearly GPT-3 is leading us to the deep-fake matrix of an internet so warped it won't even feel human any longer.
The incentive for corporations to abuse the NLP explosion for commercial profit will be too high to protect the internet from how human attention will be captured and corrupted. It's going to be a content war of mediocrity as AI gets its hands on the wheel of the so-called human internet, with content farms and clickbait empires leading the way. It's not as if Google's algorithms don't already run the show. What happens when powerful recommendation engines control even the news feeds, with content not produced by actual people?
So GPT-3-like technologies are most likely going to lead us to a GID (a global internet dystopia). We just don't know exactly what content form it will take. How will human attention be easiest to exploit? Which companies will create the right mix of AI-human hybrid content? How will GPT-3 and the NLP explosion be introduced into apps, new platforms and the future of the internet? Microsoft is very ambitious about letting OpenAI help automate the future of coding.
AI is not all positive and good, because the business models the internet is based upon are not necessarily empowering, equitable, or about education or social justice. The Silicon Valley Ads-based internet will probably fail one day. We just don't know the mechanics of its demise yet.
As ReadWrite reminds us, 20 years ago we joked that you had to be careful which truths you believed that had been pulled from the web. In 2021, misinformation is even more rampant. With AI-created content, misinformation is far more likely to be weaponized to create behavior modification at scale, not just for profit but for cyberattacks, foreign terrorism, data theft and so forth.
Sooner or later GPT-3 Artificial Intelligence will indeed be able to code and make video games, and then where will people go? The GID could lead us into the metaverse, the supposed technological matrix that's the ultimate in data capture and the monetization of our humanity. It doesn't matter how strict OpenAI's controls are; other companies are cloning what it has achieved, especially in China. When a tool is created that can create content like a human being, it cannot be caged from the algorithmic world it may one day rule.
We have to accept the possibility that AI could disrupt people from their jobs and livelihoods, but also augment the internet into a creation of its own making. In the weaponization and militarization of AI, what if the medium is the message? What if GPT-3 ruins the internet as we know it, and what replaces it is a quantum-based internet that's led not by Silicon Valley but by another power altogether?
What kind of future do you suppose that will be? The dangers of GPT-3 like augmentation of the internet may actually be more dangerous than helpful because of what the internet does to people. Already the internet has become far more harmful at scale than we ever would have imagined in our techno-optimism of the 1990s.
We already know the future of digital journalism is probably lost in an Ad-based internet, though Facebook and Google pretend they are good actors in this. Tell that to the journalists who can no longer find work. I'm afraid that GPT-3 will disrupt a lot of jobs on today's internet as well.
Think of all the bloggers, copywriters, social media associates, YouTubers, Instagram influencers, content farmers, content spinners and media organizations that will be disrupted and go extinct. It's a bloodbath of internet jobs, and they will be replaced by more mediocre AI-generated content. The problem is that the quality of this content will only improve as GPT-3-like technologies become more sophisticated, thanks to high-quality training on human content production and the algorithmic incentive world Google has created for us.
Where does a marketing professional stand in such a world? A world where Ads can be implemented by AI that writes the ads and knows the best copy and kinds of Ads to produce the required result. Most lower-level marketing and sales jobs will become extinct.
So GPT-3 is the automation of the internet, and it will displace a lot of jobs. What so-called journalists do on LinkedIn is curate news in a feed, with a small overview paragraph and some (sponsored partner) links. GPT-3 could be doing that today. There's nothing stopping those LinkedIn News headlines from being entirely generated with GPT-3.
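To make that concrete, here is a minimal sketch of how such a news-curation blurb might be automated with a GPT-3-style completion API. The helper name, prompt wording and engine name are all illustrative assumptions, not a description of any real LinkedIn pipeline:

```python
def summarize_prompt(headline: str, article_text: str) -> str:
    """Build a completion-style prompt asking a GPT-3-class model
    to write a one-paragraph news-curation blurb."""
    return (
        "Write a one-paragraph overview of this news story for a "
        "professional audience.\n"
        f"Headline: {headline}\n"
        f"Article: {article_text}\n"
        "Overview:"
    )

prompt = summarize_prompt(
    "Chip shortage drags on",
    "Automakers report continued production cuts this quarter.",
)

# With the 2021-era OpenAI SDK, the actual call would look roughly
# like this (requires an API key; the engine choice is illustrative):
#   import openai
#   response = openai.Completion.create(
#       engine="davinci", prompt=prompt, max_tokens=80)
#   blurb = response.choices[0].text
```

Everything around the model call is trivial glue code, which is exactly why this kind of curation work is so easy to automate at scale.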
In a world of clickbait, Ads and dopamine-loop entrapment, let's not joke around: AI will eventually be the Chief Editor, lead writer and CMO of most companies. GPT-3 is coming for the internet, and there's nothing we can do to stop it. In just ten years, the internet will be nearly unrecognizable. I'm the Last Futurist, and this was an op-ed about what's likely to happen to the future of the internet with GPT-3-like technologies.
While we monetize AI, we need to be careful that we are creating a better world and an internet we can be proud of for our children. While GPT-3 is an incredible breakthrough, having studied the literature surrounding this topic, I believe its side effects could currently be more problematic than its benefits.
The opinions of this Op-Ed are solely the opinions of the author and do not represent Data Science Central in any way. They are for entertainment purposes only and do not represent the consensus of the academic, expert or AI community.
With the development of new layers of AI in content creation, it's not clear that we are doing the due diligence, risk mitigation or AI regulation needed to protect future users, consumers, citizens and patients, their data, their mental health and their discretionary time, with dignity, privacy and safety concerns first.