
Why We Can’t Trust AI to Run The Metaverse

  • New study highlights key metrics for measuring trust.
  • Web 2.0 trust issues will bleed over to the metaverse.
  • One solution is to change how we measure trustworthiness.

Artificial intelligence is at the core of the metaverse, but how trustworthy is it? Although AI trustworthiness has been studied extensively in Web 2.0, those findings cannot simply be extended to the metaverse, which requires more sophisticated metrics to assess system performance and user experience. A recent study from a multinational team of researchers argues that, because we currently lack a set of tested trustworthiness metrics, we should not put our trust in AI to run the metaverse [1].

The Current State of AI Trustworthiness

Today’s large-scale AI integration means that AI has access to vast amounts of user data; AI can leverage that data to uncover sensitive user behavior, like visits to certain websites or personal buying habits. To address these concerns, many agencies, corporations, and government bodies have studied Trustworthy AI (TAI), including the European Commission, the United States Department of Defense, and the FAANG companies (Meta (Facebook), Apple, Amazon, Netflix, and Alphabet (Google)).

TAI metrics measure the degree of protection a system offers and how trustworthy it is. However, these metrics typically place less emphasis on user-centered factors. Despite the several thousand research papers indexed in Google Scholar, there is no widely accepted formula for metric selection, which means we face serious issues in evaluating TAI in future iterations of the web.

Study Metrics

The study, titled “Towards User-Centered Metrics for Trustworthy AI in Immersive Cyberspace,” discusses three key metrics: fairness, privacy, and robustness.

Fairness refers to fair access to technology. For example, the digital divide (the growing gap between those who have access to technology and those who do not) restricts fair access, especially among elderly, disabled, low-income, and rural populations [2].

In Web 2.0, fairness is also a major concern in Search Engine Ranking (SER) and recommendation systems (RecSys). SER, which weighs factors like content quality and page relevance to serve results to a user, has well-documented fairness problems: some sites have been favored over others in search results, distorting objective rankings and degrading user trust [3]. RecSys learns and predicts a user’s interest in items; because the number of recommendation slots is fixed, fairness becomes an issue whenever there is an incentive to fill those slots with items that carry higher commercial benefit.
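To make the exposure problem concrete, here is a minimal Python sketch of one common way fairness can be quantified for a fixed-size recommendation list: compare the position-discounted exposure each item provider receives in the top-k slots. The function names, log-discount formula, and toy data are illustrative assumptions, not metrics taken from the study.

```python
import math
from collections import defaultdict

def exposure_by_provider(ranked_items, provider_of):
    """Sum position-discounted exposure (1 / log2(rank + 1)) per provider.

    ranked_items: item ids in the order they are shown (the top-k list).
    provider_of:  dict mapping item id -> provider id.
    """
    exposure = defaultdict(float)
    for rank, item in enumerate(ranked_items, start=1):
        exposure[provider_of[item]] += 1.0 / math.log2(rank + 1)
    return dict(exposure)

def exposure_disparity(exposure):
    """Max/min ratio of provider exposure; 1.0 means perfectly even."""
    values = list(exposure.values())
    return max(values) / min(values)

# Hypothetical top-5 list where a commercially favored provider "A"
# occupies the most visible positions.
provider_of = {"i1": "A", "i2": "A", "i3": "B", "i4": "A", "i5": "B"}
ranked = ["i1", "i2", "i3", "i4", "i5"]

exposure = exposure_by_provider(ranked, provider_of)
print(exposure)                      # {'A': ~2.06, 'B': ~0.89}
print(exposure_disparity(exposure))  # ~2.32: provider A gets far more exposure
```

A disparity well above 1.0 in a list like this is the kind of signal a fairness metric would flag: the fixed slots are being allocated unevenly across providers regardless of user interest.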

Privacy is, according to the researchers, “fundamental yet hard to define explicitly.” However, it is generally measured by metrics that focus on the exposure of private information. Privacy has long been in the spotlight of global policy debates, due to the vast amounts of personal data collected and distributed by mobile phones and other online devices [4]. This data can be used for nefarious purposes, such as governments tracking users of specific apps or websites.
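As one illustration of an exposure-focused privacy measure, the sketch below applies the standard Laplace mechanism from differential privacy, which bounds how much any single user’s data can influence a released statistic. The query, counts, and epsilon values are hypothetical; the study does not prescribe this particular mechanism.

```python
import numpy as np

def private_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with Laplace noise calibrated for epsilon-DP.

    A count query has sensitivity 1: adding or removing one person
    changes it by at most 1. Smaller epsilon -> larger noise scale ->
    less exposure of any one individual's presence in the data.
    """
    scale = sensitivity / epsilon
    return true_count + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical sensitive query: how many users visited a given site today?
true_count = 1342
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps}: released count = {private_count(true_count, eps):.1f}")
```

The epsilon parameter makes the privacy/utility trade-off explicit and auditable, which is exactly the kind of quantifiable exposure metric the researchers have in mind.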

Robustness, in the domain of artificial intelligence, refers to AI that has been mathematically verified and validated with empirical testing [5]. AI in Web 2.0 has significant problems in this area: enormous volumes of data are continuously generated online, requiring constant retraining, which is slow and can fail after sudden data shifts. Because AI headed for the metaverse is not yet robust in this sense, predicting its future behavior may be difficult or impossible.
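A minimal empirical probe of robustness, in the spirit described above, is to measure how much a model’s accuracy degrades under a simulated data shift. The toy model, the synthetic shift, and the “robustness ratio” below are assumptions for illustration, not the verification procedure from [5].

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy binary classification data standing in for continuously generated traffic.
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Simulate a sudden data shift: rescale and offset the test features.
X_shifted = X_test * 1.5 + 0.8

clean_acc = model.score(X_test, y_test)
shifted_acc = model.score(X_shifted, y_test)
print(f"clean accuracy:   {clean_acc:.3f}")
print(f"shifted accuracy: {shifted_acc:.3f}")
print(f"robustness ratio: {shifted_acc / clean_acc:.3f}")  # 1.0 = no degradation
```

A model whose ratio collapses under even a mild shift like this would need retraining before its future behavior could be trusted, which is the gap the researchers highlight.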

Governance of Trustworthiness

The governance of AI trustworthiness is, for the most part, controlled by service providers and a smattering of third-party companies and government entities. One of the major challenges in this area of research is how a fair, private, and robust AI environment can be measured in a standardized, systematic, and transparent way. Leaving this issue to individual service providers doesn’t work, because there is no guarantee of an equitable trustworthiness standard. In addition, third-party companies and government entities may not have the resources to handle the sheer volume of data involved. Solving this is vital going forward, the study authors state, because the metaverse will integrate AI into daily life on an unprecedented scale. They suggest that the best option may be to build a collaborative, autonomous governance platform (meta-TAI) to assess performance.

In conclusion, the researchers note that “AI will become an indispensable driver of immersive cyberspace” and that user trust is essential if the metaverse is ever to be widely adopted. The key is to shift from measures of system trustworthiness to a user-centered paradigm that considers cognitive and affective factors.

References

[1] Towards User-Centered Metrics for Trustworthy AI in Immersive Cyberspace

[2] Digital Divide

[3] Search Engines and Ethics

[4] Protecting Privacy in an AI-Driven World

[5] Robust AI and Robust Law (Part I: Robust AI)
