
The EU’s AI Act: A measured approach to innovation and regulation

  • ajitjaokar 

AI regulatory measures often stir mixed reactions. We reached a regulatory milestone this week with the final passing of the AI Act in the European Union.

The locus of emphasis has now shifted elsewhere – specifically to oversight approaches, to appointments from member states, and to the AI Office. The AI Act divides AI systems into four main categories based on their potential risk to society (unacceptable, high, limited, and minimal risk). High-risk systems are subject to more stringent rules in the EU. 
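The four-tier structure can be illustrated with a short sketch. The tier names follow the Act, but the example use cases and the lookup logic below are purely illustrative assumptions on my part, not legal guidance:

```python
# Illustrative sketch of the AI Act's four risk tiers.
# Tier names follow the Act; the example use cases and the
# mapping below are simplified assumptions, not legal advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g. social scoring)"
    HIGH = "stringent obligations (e.g. recruitment screening)"
    LIMITED = "transparency duties (e.g. chatbots)"
    MINIMAL = "largely unregulated (e.g. spam filters)"

# A toy lookup from use case to tier (illustrative only)
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case, defaulting to MINIMAL."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
```

The point of the tiering is that obligations scale with risk: a recruitment-screening system attracts far heavier duties than a spam filter.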

There are various relevant timelines. Member states have to appoint members to the oversight body (the AI Office) within 12 months. The bans on restricted practices will apply from November. The obligations on high-risk systems will come into force from May 2025. High-risk systems will be managed by the national authorities, supported by the central AI Office in the European Commission. 

It’s important to note what is not there. Specifically, the once-mooted proposals for regulating AI at the level of model parameters are absent – which is good for innovation. 

There are a number of reasons to be optimistic about the AI Act. 

Earlier this year, the EU’s Artificial Intelligence Office was announced as the body tasked with enforcing the AI Act. EU member states will nominate experts to this committee. The emphasis has thus shifted to the AI Office and its nominated members. Many issues, such as the risks of AGI and copyright, will be handled by this body, presumably on a case-by-case basis. Specifically, as I understand it, there is no blanket restriction on the number of parameters. The creation of the AI Office is a reasonable compromise for a number of reasons. 

Firstly, vendors themselves will benefit from clarity. This is one of the main positives of the AI Act.

Secondly, even in high-risk cases, we could have technical or process-led solutions such as graph neural networks and humans in the loop. Note that education and recruitment are classified as high-risk in some cases (because of the possibility of algorithms passing judgment on people). In such cases, LLM output can be anchored to enterprise domain knowledge in the form of a knowledge graph, combined with a human-in-the-loop strategy.
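The anchoring strategy above can be sketched in a few lines. This is a minimal illustration, assuming a toy dictionary standing in for a real knowledge graph; the function and variable names are my own hypothetical choices:

```python
# Minimal sketch of anchoring LLM output to verified enterprise
# knowledge, with a human-in-the-loop fallback. The knowledge
# "graph" here is a toy dict of verified facts; in practice this
# would be a graph query against curated domain knowledge.

KNOWLEDGE_BASE = {
    "bereavement_policy": "Refund requests must be made before travel.",
}

def grounded_answer(topic: str, llm_draft: str) -> tuple[str, bool]:
    """Return (answer, needs_human_review).

    If the topic exists in the verified knowledge base, answer from
    it directly; otherwise escalate the unverified LLM draft to a
    human reviewer instead of risking a hallucinated policy.
    """
    fact = KNOWLEDGE_BASE.get(topic)
    if fact is not None:
        return fact, False       # verified answer, no review needed
    return llm_draft, True       # unverified draft, flag for review
```

The design choice is that anything concerning a policy is served only from verified sources, while everything else is routed through a human before it reaches a customer.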

The risks to consumers are real, as we see from the case of Air Canada, which unsuccessfully argued that its chatbot was responsible for its own actions (Air Canada ordered to pay customer who was misled by airline’s chatbot). The chatbot had apparently ‘made up’ a bereavement policy. In other words, where technical solutions exist, they will need to be implemented to avoid hallucination and provide explainable solutions.

As I have said previously, the emphasis now shifts to the AI Office. The European AI Office will be the center of AI expertise across the EU, with the objective of implementing the AI Act, working with AGI, fostering the development and use of trustworthy AI, and nurturing international cooperation.

Some areas of the AI Office’s remit include the following (source: the European Commission):

  • Investigating possible infringements of rules, including evaluations to assess model capabilities, and requesting providers to take corrective action
  • Preparing guidance and guidelines, implementing and delegated acts, and other tools to support effective implementation of the AI Act and monitor compliance with the regulation
  • Providing advice on best practices and enabling ready-access to AI sandboxes, real-world testing, and other European support structures for AI uptake
  • Encouraging innovative ecosystems of trustworthy AI to enhance the EU’s competitiveness and economic growth
  • At an institutional level, the AI Office works closely with the European Artificial Intelligence Board formed by Member State representatives and the European Centre for Algorithmic Transparency (ECAT) of the Commission.
  • The AI Office may also partner with individual experts and organizations. It will also create fora for cooperation among providers of AI models and systems, including general-purpose AI, and similarly for the open-source community, to share best practices and contribute to the development of codes of conduct and codes of practice.
  • The AI Office will also oversee the AI Pact, which allows businesses to engage with the Commission and other stakeholders such as sharing best practices and joining activities. This engagement will start before the AI Act becomes applicable and will allow businesses to plan ahead and prepare for the implementation of the AI Act. All this will be part of the European AI Alliance, a Commission initiative, to establish an open policy dialogue on AI.


1) Views are mine alone and not associated with any organization I work with.

2) The above is my understanding – the AI Act is still very new and may evolve as the focus shifts to the oversight team. 
