Decoding the EU Artificial Intelligence Act - KPMG Global (2024)


Understanding the AI Act’s impact and how you can respond.



Artificial Intelligence (AI) offers new benefits to society and businesses and is poised to reshape workplaces and key industries. The push to harness the transformative potential of AI and automation is underway. However, as AI proliferates in business and daily life, concerns about ethical use and risk are emerging. Trust remains an issue: in the Trust in artificial intelligence global study, three in five people expressed wariness about AI systems, and 71 percent expect regulatory measures.

In response, the European Union (EU) has made significant strides with a provisional agreement on the groundbreaking Artificial Intelligence Act (AI Act), which is anticipated to set a new global standard for AI regulation. Envisioned to become law in 2024, with most AI systems needing to comply by 2026, the AI Act takes a risk-based approach to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability.

The EU's AI Act aims to strike a delicate balance, fostering AI adoption while upholding individuals' rights to responsible, ethical, and trustworthy AI use. This paper explores the potential impact of the AI Act on organizations, delving into its structure, obligations, compliance timelines, and suggesting an action plan for organizations to consider.

Decoding the EU AI Act

Discover the potential impact of the AI Act on your organization.


EU AI Act Overview


AI holds immense promise to expand the horizon of what is achievable and to impact the world for our benefit — but managing AI’s risks and potential known and unknown negative consequences will be critical. The AI Act is set to be finalized in 2024 and aims to ensure that AI systems are safe, respect fundamental rights, foster AI investment, improve governance, and encourage a harmonized single EU market for AI.




The AI Act's definition of AI is anticipated to be broad and include various technologies and systems. As a result, organizations are likely to be significantly impacted by the AI Act. Most of the obligations are expected to take effect in early 2026. However, prohibited AI systems will have to be phased out six months after the AI Act comes into force. The rules for governing general-purpose AI are expected to apply in early 2025.1




The AI Act applies a risk-based approach, dividing AI systems into different risk levels: unacceptable, high, limited and minimal risk.2
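To make the four tiers concrete, the sketch below models them as they might appear in an internal compliance inventory. The use-case examples are commonly cited illustrations, not a legal determination; classifying a real system requires analysis against the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # permitted, but subject to the strictest obligations
    LIMITED = "limited"            # mainly transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Illustrative mapping from example use cases to tiers (hypothetical,
# for inventory purposes only -- not legal advice).
EXAMPLE_CLASSIFICATION = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}
```

An inventory keyed this way makes it straightforward to query which systems fall into the tiers that trigger obligations.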


High-risk AI systems are permitted but subject to the most stringent obligations. These obligations will affect not only users but also so-called ‘providers’ of AI systems. The term ‘provider’ in the AI Act covers organizations that develop AI systems, including those that develop systems strictly for internal use. Note that an organization can be both a user and a provider.


Providers will likely need to ensure compliance with strict standards concerning risk management, data quality, transparency, human oversight, and robustness.


Users are responsible for operating these AI systems within the AI Act’s legal boundaries and according to the provider's specific instructions. This includes obligations on the intended purpose and use cases, data handling, human oversight and monitoring.




New provisions have been added to address the recent advancements in general-purpose AI (GPAI) models, including large generative AI models.3 These models can be used for a variety of tasks and can be integrated into a large number of AI systems, including high-risk systems, and are increasingly becoming the basis for many AI systems in the EU. To account for the wide range of tasks AI systems can accomplish and the rapid expansion of their capabilities, it was agreed that GPAI systems, and the models they are based on, may have to adhere to transparency requirements. Additionally, high-impact GPAI models, which possess advanced complexity, capabilities, and performance, will face more stringent obligations. This approach will help mitigate systemic risks that may arise due to these models' widespread use.4




Existing Union laws, for example, on personal data, product safety, consumer protection, social policy, and national labor law and practice, continue to apply, as well as Union sectoral legislative acts relating to product safety. Compliance with the AI Act will not relieve organizations from their pre-existing legal obligations in these areas.




Organizations should take the time to create a map of the AI systems they develop and use and categorize their risk levels as defined in the AI Act. If any of their AI systems fall into the limited, high or unacceptable risk category, they will need to assess the AI Act’s impact on their organization. It is imperative to understand this impact — and how to respond — as soon as possible.
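The mapping exercise described above can be sketched as a simple record structure: one entry per AI system, tagged with the organization's role and an assessed risk tier. All names and the helper function here are hypothetical, intended only to illustrate how such an inventory could flag systems needing further assessment.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI-system inventory."""
    name: str
    role: str       # "provider", "user", or "both"
    purpose: str
    risk_tier: str  # "unacceptable" | "high" | "limited" | "minimal"

def systems_needing_assessment(inventory):
    """Return the systems in tiers that trigger obligations under the Act."""
    regulated = {"unacceptable", "high", "limited"}
    return [s for s in inventory if s.risk_tier in regulated]

inventory = [
    AISystemRecord("resume screener", "provider", "recruitment", "high"),
    AISystemRecord("spam filter", "user", "email hygiene", "minimal"),
]
# Only the resume screener is flagged for further assessment.
```

Keeping the role field per system reflects the point above that one organization can be a provider for some systems and a user of others.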


Related content

  • Trusted AI: Accelerating the value of AI with confidence.
  • Trust in artificial intelligence: 2023 global study on the shifting public perceptions of AI.
  • Privacy in the new world of AI: How to build trust in AI through privacy.
  • Generative AI models – the risks and potential rewards in business: What the rise of ChatGPT, DALL•E 2, Bard et al. could mean for your organization.

Get in touch

David Rowlands

Global Head of AI

KPMG International

Laurent Gobbi

Global Trusted AI & Tech Risk Leader

KPMG in France


1 European Commission. (December 12, 2023). Artificial Intelligence – Questions and Answers.

2 European Council. (December 9, 2023). Artificial Intelligence Act Trilogue: Press conference – Part 4.

3 European Parliament. (March 2023). General-purpose artificial intelligence.

4 European Commission. (December 12, 2023). Artificial Intelligence – Questions and Answers.
