Why Are Metaverses So Hard To Build?
Facebook has been in the news lately, possibly in an attempt to change the narrative after October’s disastrous outages. This time, the news comes with its announcement that it is changing its name to Meta later this year, largely in line with previous communications that the company was pivoting away from social media in its current incarnation and toward the exploration of extended reality via its own vehicle, the Metaverse.
Certainly, the concept of virtual reality is nothing new. Ray Bradbury’s now-famous story The Veldt was one of many early science fiction works that explored the notion of a computer-mediated artificial reality, and Philip K. Dick can be credited with establishing the genre throughout his works. VR gained more immediacy with the writings of William Gibson, Bruce Sterling, Pat Cadigan, and Vernor Vinge, among many others. It was Neal Stephenson’s Snow Crash in 1992 that crystallized the modern conception of virtual reality and introduced the term Metaverse: a collection of computer-based worlds that people could jack into, bandwidth-driven worlds where pizza drivers could become modern-day samurai.
With all of that (and despite the explosion of CGI-fueled blockbusters and video games about what VR was about to become), the Metaverse itself remains a remarkably stubborn and elusive target. Facebook is far from the first company to attempt to scale the ramparts of such worlds, though it may be that the time is approaching (if perhaps not quite here yet) when the Metaverse is ready to emerge.
So what’s taking so long? There are several reasons. There have been metaverses before: The Sims, SimCity, Second Life, and Minecraft are all examples of multiplayer interactive world simulations. The Sims continued a long line of multi-agent games stretching back to the first text-based Multi-User Dungeons (MUDs), and SimCity started out primarily as a “real-time” economics simulation. Second Life further refined the concept of avatars, as did the growing presence of shooters and action games like Doom and Tomb Raider, in which gamers piloted increasingly realistic avatars through an environment.
These games took advantage of the emergence of increasingly sophisticated graphics processing units (GPUs). In 2000, these games and simulations were limited and laughably crude. By 2021, the graphics were approaching cinematic quality, and the interactions were both subtle and impressive, though game-quality systems remain relatively expensive.
Virtual reality involves creating a simulation of a world that multiple users can affect. The related concept of augmented reality (AR) has a somewhat different history, one focused primarily on commercial uses. At first glance, it appears simpler: just create an overlay on top of the real world that provides metadata people can interact with. Google’s Glass was an early version of this, eyeglasses that projected a HUD-like screen and allowed for video recording, among other things.
Ironically, the reaction to wearers of Glass was exactly the one Stephenson had predicted twenty years before: they were treated as gargoyles, voyeuristic stalkers who could and did record everything around them. Restaurants, theaters, and nightclubs banned the glasses, people became very uneasy around their wearers, and in some areas legislation outlawing them was beginning to emerge when Google quietly ended the program. Despite this, headsets from the Oculus VR goggles to Microsoft’s HoloLens emerged around the same time, to, at best, an indifferent reception sales-wise.
It was the augmented reality game Pokemon Go that changed the equation, which is ironic in that it didn’t involve immersive VR at all. People interacted with it using their phones, with 3D images of Nintendo’s Pokemon characters overlaid on a real-world background pulled from the users’ video cameras. What this told a lot of creative teams was that perhaps the real draw of AR had more to do with information than with graphics.
In the last couple of years, several related concepts have emerged but seemingly failed to catch on. For a while, everyone was talking about the Internet of Things (IoT), but while it has been a boon to security, HVAC, and lighting companies, it turned out that most people had no need for an Internet-connected coffeepot. Sensor grids found extensive use in agriculture and supply chain management, but these were mostly closed systems dealing with a single vendor and its products. Computer image recognition went into machine learning classification systems; voice recognition found its way into phones but was more often turned off than used. Digital twin technology made great promises to revolutionize manufacturing, but that promise for the most part failed to materialize.
The biggest problem in turning all of this into a global web, ultimately, wasn’t processor speed or bandwidth, digital fidelity, or machine learning recognition. At the end of the day, it all comes down to … data standards.
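The data-standards problem is easy to see in miniature. A hypothetical sketch (both vendor payloads and all field names are invented for illustration): two vendors encode the same avatar position with incompatible schemas, so every cross-world integration ends up needing a hand-written translation layer.

```python
# Hypothetical illustration: two vendors describe the same avatar
# position with incompatible schemas (different nesting, units, and
# axis conventions), forcing per-vendor normalization code.

# Vendor A: metres, y-up, nested coordinates.
vendor_a = {"avatar": {"pos": {"x": 1.0, "y": 2.0, "z": 3.0}}, "units": "m"}

# Vendor B: centimetres, z-up, flat list stored as (x, z, y).
vendor_b = {"avatarPosition": [100.0, 300.0, 200.0], "unit": "cm"}

def normalize_a(payload):
    """Map vendor A's payload to a shared (x, y, z) tuple in metres, y-up."""
    p = payload["avatar"]["pos"]
    return (p["x"], p["y"], p["z"])

def normalize_b(payload):
    """Map vendor B's payload: convert cm to m and swap z-up to y-up."""
    x, z, y = payload["avatarPosition"]
    return (x / 100.0, y / 100.0, z / 100.0)

# Only after normalization do the two payloads agree.
assert normalize_a(vendor_a) == normalize_b(vendor_b) == (1.0, 2.0, 3.0)
```

Multiply this by every object type, every physics property, and every identity system in a shared world, and the scale of the standards problem becomes clear.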
More, next week.
To subscribe to the DSC Newsletter, go to Data Science Central and become a member today. It’s free!
Data Science Central Editorial Calendar
DSC is looking for editorial content specifically in these areas for November, with these topics having higher priority than other incoming articles.
This email, and all related content, is published by Data Science Central, a division of TechTarget, Inc.
275 Grove Street, Newton, Massachusetts, 02466 US
Copyright 2021 TechTarget, Inc. All rights reserved. Designated trademarks, brands, logos and service marks are the property of their respective owners.