What to Do About the New AI Regulation?

When a technology as sophisticated, complex, and risky as AI takes our lives by storm, a clearly defined set of rules for its use is paramount. Previously, public concern focused mostly on the inappropriate use of personal data. As AI becomes a key technology in many businesses and services, attention is rightfully shifting towards the problem of AI bias. While bias in AI systems is unlikely to ever be fully eliminated, regulation can impose meaningful limits on how such systems are used.

This is why, in April 2021, the European Commission put forward its proposal for a Regulation on Artificial Intelligence (the draft AI regulation). Three years in the making, it is the first comprehensive piece of EU legislation on AI use.

While the legislative process will most likely take a few more years to conclude, it's time for every AI service provider to start assessing how compatible their current AI initiatives are with the incoming rules. Once the regulation enters into force, companies will have a further two years to implement the appropriate measures.

In this article, we will briefly revisit the main points of the draft AI regulation, examine the European Data Protection Board's (EDPB) recent criticism of the draft, and discuss what companies can do to minimize non-compliance risks.

The regulation

The draft AI regulation categorizes AI systems by their risk:

Unacceptable-risk AI systems. This category includes three types of systems whose use is prohibited outright:

  • Remote, real-time biometric identification systems in public places. Essentially, this prohibits companies and governments from using facial recognition systems for law enforcement purposes, subject to a range of exceptions that we will return to below. 
  • Systems that exploit people's psychological or physical vulnerabilities to distort their behavior and impair their decision-making. This is one of the most vaguely defined categories in the entire draft and calls for further clarification. 
  • Social scoring systems that use AI to assess an individual’s trustworthiness based on personal characteristics and social behavior. 

High-risk AI systems. This category includes systems that assess a person's creditworthiness or job performance and systems that assist with management and recruitment. Such systems will have to meet the strictest set of requirements, including risk management, data quality monitoring, human oversight, and record keeping. In essence, organizations will have to prove that an AI system is explainable, consistent, and transparent by conducting regular conformity assessments. 

Limited- and minimal-risk AI systems. The likes of AI-powered chatbots, inventory management tools, and biometric categorization systems will also have to be transparent. In a nutshell, organizations will be required to inform users that they are interacting with a machine or that media content has been generated or altered with the help of AI. One way these tiers and their obligations might be modeled is sketched below.
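
To make the tiering concrete, here is a minimal Python sketch of how an organization might catalog its AI systems against these tiers and look up the obligations attached to each. It is purely illustrative: the class, field, and obligation names are our own paraphrase, not anything the draft prescribes.

```python
from dataclasses import dataclass
from enum import Enum, auto


class RiskTier(Enum):
    """Risk tiers defined by the draft AI regulation."""
    UNACCEPTABLE = auto()  # prohibited outright
    HIGH = auto()          # strict conformity requirements
    LIMITED = auto()       # transparency obligations
    MINIMAL = auto()       # no specific obligations


# Obligations per tier, paraphrased from the draft; the wording is ours.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: [
        "risk-management system",
        "data quality monitoring",
        "human oversight",
        "record keeping",
        "regular conformity assessments",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with AI"],
    RiskTier.MINIMAL: [],
}


@dataclass
class AISystem:
    """A hypothetical inventory entry for one AI system in an organization."""
    name: str
    tier: RiskTier

    def obligations(self) -> list[str]:
        return OBLIGATIONS[self.tier]


# A recruitment-assistance tool falls into the high-risk tier under the draft.
screening_model = AISystem("candidate-screening model", RiskTier.HIGH)
print(screening_model.obligations())
```

An inventory like this is only a starting point, but it forces the first question every compliance effort has to answer: which risk tier does each of our systems fall into?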

The EDPB’s Joint Opinion on the regulation

In June 2021, the EDPB published its Joint Opinion on the draft. While the board reacted positively to the draft on the whole, it also requested several important modifications.

Data protection to be at the core 

First, the EDPB strongly suggests that the draft's risk-based approach be aligned with the GDPR's principles of personal data protection. In other words, the EDPB calls for using the established GDPR framework for personal data protection as a precondition for deploying high-risk AI systems. Given the evidently close relationship between the two regimes, this is a valid point. 

No exceptions for unacceptable-risk AI systems 

The EDPB also argues that the possible implications of unacceptable-risk AI systems (for example, a flawed facial recognition system) are disproportionate to the potential benefits of the technology. The draft's list of exceptions for the use of unacceptable-risk AI systems is extensive, and numerous credible researchers have pointed out that it creates a range of loopholes. 

User vs provider responsibility 

The draft places almost all responsibility for high-risk AI systems on the providers of the technology rather than on its users (the organizations that deploy it). The EDPB highlights that, in practice, developers of AI systems have little to no control over how these systems are used, and calls for part of that responsibility to be transferred to users.

What does it all mean for business?

For many organizations, the most pressing question is when the regulation might come into force. Given that it took six years for the GDPR to go from a draft to a fully-fledged regulation, and that AI further compounds the complexity of personal data protection, the draft may take even longer to become enforceable. This, however, is no reason to put off preparation.

Over the past two years, regulatory bodies in different parts of the world have made efforts to restrict the use of automated systems. For example, the Canadian government has been requiring detailed algorithmic impact assessments of automated decision-making systems for a year now.

The inevitability of this new regulatory reality urges organizations to create dedicated risk-management programs, reporting mechanisms, data-privacy protocols, and the like. Essentially, every organization that uses an AI system falling into the high-risk category will need an AI governance system and a dedicated committee responsible for compliant AI use. Currently, the most effective way to prepare for the incoming regulation is to run trial conformity assessments of the kind that will become routine once the regulation comes into force. Doing so will reveal missing risk-mitigation practices and help with standardization and documentation.
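
As a rough illustration of what a trial conformity assessment could look like in practice, the sketch below tracks which controls have documented evidence and which remain gaps. The control list is our paraphrase of the draft's high-risk requirements, and the names and evidence paths are invented; an actual assessment would follow whatever template the final regulation mandates.

```python
from dataclasses import dataclass, field

# High-risk controls paraphrased from the draft; this checklist and its
# structure are illustrative assumptions, not an official template.
REQUIRED_CONTROLS = [
    "risk-management system in place",
    "training data quality monitored and documented",
    "human oversight defined for every decision path",
    "records and logs retained for audit",
    "system behavior documented for transparency",
]


@dataclass
class ConformityAssessment:
    """Tracks evidence gathered for one high-risk AI system."""
    system_name: str
    # Maps a control to where its supporting evidence lives (hypothetical paths).
    evidence: dict[str, str] = field(default_factory=dict)

    def gaps(self) -> list[str]:
        """Controls with no documented evidence: the items left to remediate."""
        return [c for c in REQUIRED_CONTROLS if c not in self.evidence]


assessment = ConformityAssessment("credit-scoring model")
assessment.evidence["records and logs retained for audit"] = "audit-logs/credit-scoring"
print(assessment.gaps())  # everything still missing before the rules take effect
```

Even a toy exercise like this, run against a real system, tends to surface missing documentation and unassigned responsibilities long before a regulator does.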

Conclusion

The draft AI regulation is rather ambiguous, but that is hardly surprising, given that accounting for every facet of AI is an enormously complex task. The EDPB's uncompromising approach to tackling these ambiguities is justifiable. It may well take years for legislators to work out feasible frameworks for realizing the full potential of AI at scale without troubling ethical implications.

In any case, given the far-reaching impact of AI, companies need to start looking into the relevant compliance requirements as soon as possible. Moreover, the ever-changing nature of the technology will inevitably lead to continual reassessment and evolution of the regulations, requiring organizations to become increasingly flexible.