
Opening the Pod Bay Door: Regulating multi-purpose AI

By ajitjaokar

[Image: HAL 9000's red eye. "Daisy?"]

This week, the European Commission published a draft AI Regulation, the world's first concrete proposal for regulating artificial intelligence (AI).

Like the GDPR, the draft AI Regulation is likely to profoundly affect AI development worldwide.

The regulation applies to AI systems, i.e. any software that can generate content, make predictions, make recommendations, or take decisions influencing the environments it interacts with, for a given set of human-defined objectives. It covers machine learning approaches, logic- and knowledge-based approaches, and statistical approaches, among others.

The EU AI Regulation takes a risk-based approach, classifying AI systems into three tiers: unacceptable risk, high risk, and low risk. The use of unacceptable-risk AI systems, such as those that distort human behaviour, is banned outright. Low-risk categories are subject only to transparency obligations, i.e., self-regulation.

So most of the regulation's emphasis falls on the high-risk class.

High-risk AI systems include those used in:

  • Medical devices and in-vitro diagnostic medical devices
  • Radio equipment
  • Lifts
  • Toys
  • Personal protective equipment
  • Machinery
  • Marine equipment
  • Appliances burning gaseous fuels
  • Motor vehicles and trailers
  • Two- or three-wheel vehicles and quadricycles
  • Equipment and protective systems for use in potentially explosive atmospheres
  • Civil aviation security
  • Pressure equipment
  • Agricultural and forestry vehicles
  • Unmanned aircraft
  • Cableway installations
  • Recreational craft and personal watercraft
  • Rail systems

Critical obligations for providers of high-risk AI systems include:

  • A risk management system
  • High-quality data sets
  • Technical documentation and logs
  • Information to users
  • A quality management system
  • Human oversight
  • Robustness, accuracy, and cybersecurity
  • Conformity assessment
  • Registration and post-market monitoring
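
To make the tiered structure concrete, here is a minimal sketch in Python of how the three tiers map to regulatory consequences. The tier names and the high-risk obligations come from the draft as summarised above; the mapping itself is an illustrative simplification, not an official taxonomy.

```python
# Illustrative sketch of the draft AI Regulation's three risk tiers and the
# consequences attached to each. Simplified for exposition; not an official
# or complete taxonomy.

HIGH_RISK_OBLIGATIONS = [
    "risk management system",
    "high-quality data sets",
    "technical documentation and logs",
    "information to users",
    "quality management system",
    "human oversight",
    "robustness, accuracy, cybersecurity",
    "conformity assessment",
    "registration and post-market monitoring",
]

RISK_TIERS = {
    "unacceptable": "banned outright",
    "high": "ex-ante obligations: " + ", ".join(HIGH_RISK_OBLIGATIONS),
    "low": "transparency obligations only, i.e. self-regulation",
}

for tier, consequence in RISK_TIERS.items():
    print(f"{tier:>12}: {consequence}")
```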


All this is understandable, but there is an elephant in the room:

How do we regulate large language models like GPT-3?

Large language models are multi-purpose.

So you cannot regulate the model itself, because a single model spans risk categories.
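
To see why, consider a sketch in which the same underlying model serves applications the regulation would place in different tiers. The `complete_text()` stub and the application prompts are hypothetical; the point is only that the risk attaches to the use, not to the model.

```python
# One general-purpose model, many downstream uses. complete_text() is a
# hypothetical stand-in for a call to any large language model API.

def complete_text(prompt: str) -> str:
    """Stand-in for a large language model completion call."""
    return f"<model output for: {prompt!r}>"

# The identical call, embedded in applications the draft regulation
# would treat very differently:
poem = complete_text("Write a poem about elephants.")             # low risk
advice = complete_text("Suggest a diagnosis for these symptoms")  # high risk

# Which risk tier does the *model* belong to? It spans all of them.
print(poem)
print(advice)
```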

Of course, you could regulate the application of a model in a high-risk scenario, but even then you cannot be transparent about the data sets behind the large language model.

Similarly, you cannot rely on 'self-declaration' for low-risk applications: large language models could pose a risk even in nominally low-risk categories.

Finally, large language models are generative: their outputs cannot be enumerated or fully anticipated in advance, which makes any one-off assessment difficult.
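
A toy illustration of that last point, assuming nothing more than a random sampler standing in for a real model: generation is sampling from a distribution, so the same prompt can produce different outputs on different runs, and a one-off conformity check cannot cover the output space.

```python
import random

# Toy stand-in for a generative model: it samples words at random.
# A real LLM is vastly more capable, but the regulatory problem is the
# same: the output space cannot be exhaustively checked in advance.

VOCAB = ["open", "the", "pod", "bay", "doors", "please", "HAL"]

def toy_generate(prompt: str, length: int = 5) -> str:
    return " ".join(random.choice(VOCAB) for _ in range(length))

# Two runs of the same prompt typically differ:
print(toy_generate("Sing Daisy Bell"))
print(toy_generate("Sing Daisy Bell"))
```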

I have not been able to find a good answer to this question.

And the question matters, because large language models will play a significant role in the future of AI.

The Stanford University Human-Centered Artificial Intelligence (HAI) group studies these risks and challenges in "How Large Language Models Will Transform Science, Society, and AI".

Like much regulation, the draft seems well-intentioned, but it misses a large class of applications that will be both key and hard to regulate.

Image source: https://pixabay.com/photos/african-elephant-big-ears-alert-4878168/

References: https://www.allenovery.com/en-gb/global/news-and-insights/publications/key-provisions-of-the-draft-ai-regulation