Counsel Corner: What is the EU AI Act, and How Can GCs Prepare?

The EU AI Act will influence how GCs at high-growth companies think about risk, especially as more companies integrate generative AI into their products. Learn about the EU AI Act and how to prepare yourself and your company for the new regulatory environment.

Authors

  • Team L Suite

Privacy & Cybersecurity

Our Counsel Corner series brings together the top legal minds from our community to discuss complex challenges that GCs face when growing their companies and navigating their careers. To get insights on how GCs can prepare for impending AI legislation from the European Union, we sat down with three experts from the L Suite community:

  • Michael Meehan, General Counsel at Diveplane
  • Patrick Van Eecke, Partner at Cooley
  • Travis LeBlanc, Partner at Cooley

Read below for their tips on what you can do today to start preparing for the potential legal implications of the EU AI Act.


On June 14, 2023, the European Parliament adopted its negotiating position on the Artificial Intelligence Act. The next step is negotiations between the European Parliament, the European Council, and the European Commission. The negotiations will kick off on June 21, 2023, and a deal is expected to be reached by November 2023.

GCs are used to an ever-changing tech landscape, and the EU AI Act has the potential to influence how GCs working in high-growth companies think about risk, especially as companies begin to integrate generative-AI systems into their products.

But what exactly is the EU AI Act? And how can GCs prepare themselves and their companies to scale and grow in an environment with new regulations around AI? While the regulation aims to promote trust and ethical use of AI across the European Union, its effects will be felt worldwide as it is likely to introduce new requirements and considerations that companies everywhere will need to navigate.

You won’t be able to avoid the EU AI Act

European regulations have a massive effect on worldwide standards. So much so that there’s even a phrase to describe it: the Brussels Effect. You won’t be able to avoid the fact that the EU is going to shape worldwide AI policy, whether that’s through the EU AI Act or other legislation that’s coming down the pipeline.

This is true even if you think that you won’t be affected because you don’t serve European customers (and have no plans to). The EU AI Act has such broad and far-reaching effects that anything your company does in the future related to AI will likely be affected by legislation passed by the European Parliament.

“Even though the EU AI Act hasn’t been adopted yet, Brazil initiated a similar piece of legislation which is very close to the European draft legislation, so it’s already having a global effect. It’s important for GCs to note that European laws are not just applicable to companies based in Europe, but also applicable to companies based outside of Europe, even if they never want to come to Europe.

Of course, you can say you don’t want to touch the European market at all, but this is, of course, not realistic. So it means that as a US company, whether you like it or not, the moment you deal with Europeans – by which I mean European companies or European end users – you will be affected by what happens with the EU AI Act.” -Patrick Van Eecke, Partner at Cooley

“GCs should be concerned about the fact that the EU is about to establish the foundational framework for the regulation of artificial intelligence. I would analogize what we're seeing with the EU AI Act to what happened with the GDPR for privacy regulation. Europeans got out first with GDPR and the United States was not able to develop a framework for a variety of reasons. GDPR is now the foundational framework for privacy regulation globally, including in the US. European AI laws will have a measurable impact on how US standards develop.

If we use the GDPR as an analogy, what we're seeing with the AI Act is Europeans doing the same thing concerning AI. We should assume that everything that we've seen as GDPR matures will happen with the AI Act as it matures. Indeed, the data protection authorities that are issuing fines to Meta are the same people who are going to be doing the initial enforcement for the AI Act. At this point, every tech GC in the United States should be thinking about how to motivate the US Government to begin developing regulations on AI as quickly as possible." -Travis LeBlanc, Partner at Cooley

Understand high-risk vs low-risk AI

GCs must understand the difference between high-risk and low-risk AI and how those categories are defined in the EU AI Act. Knowing how these systems are viewed will help you assess how your company will be evaluated moving forward.

“All GCs need to know the three risk categories and where their use of AI falls. The law classifies AI systems into three risk categories: unacceptable risk, high risk, and low risk. Unacceptable risk AI systems are those that pose a serious threat to fundamental rights and freedoms; these are systems that can be used to discriminate against individuals or to manipulate their behavior. Members of the European Parliament in their review included proposed bans on biometric surveillance, emotion recognition, and predictive policing AI systems.

High-risk AI systems are those that pose a significant risk to fundamental rights and freedoms. These include systems that are used to make decisions about people's lives or rights, such as whether they should be granted a loan or a job. High-risk AI systems must comply with several requirements and must be developed and used under the principles of good AI practice, including transparency, accountability, and fairness. They also must be subjected to a conformity assessment procedure, which is a process that ensures that the system meets the requirements of the law, and must be registered in a dedicated, central EU database.

On the other hand, low-risk AI systems are those that do not pose a significant risk to fundamental rights and freedoms. An example of a low-risk AI system would be an AI system that controls how a non-player character in a computer game walks around.” -Michael Meehan, General Counsel at Diveplane

Unlike GDPR, EU AI regulations will deeply affect product development

With GDPR, many companies had to simply document their data processes and report on how they were collecting data – they didn’t necessarily have to think about GDPR during product development. However, with the EU AI Act, your product and any potential integrations with AI will be deeply affected on a product level by any potential EU regulations.

“With GDPR, you've got a lot of rules that you need to comply with, but many of those rules are paper-based exercises where you have to fill out the right forms or have a nice privacy statement. It is something that's not really that intrusive on the product or the software that you develop. However, with the AI Act, it is going to intrude and have a direct impact on how you develop artificial intelligence systems, especially those that are deemed high-risk.

While we're at the preliminary stage, GCs will need to realize that it's not enough to create a beautiful but isolated AI system. At that point, if you decide to go to the European market, it’s too late. It's at the design phase of your product that you’ll have to think deeply about the implications of those AI rules because if you don't, you'll never pass whatever assessments are needed to interact with European customers.” -Patrick Van Eecke, Partner at Cooley

The best preparation is following current laws and regulations

There’s already a lot of precedent for GCs about how they should be thinking about privacy, data protection, and AI. The best way for you and your company to think about how you can move forward successfully is by adhering closely to any applicable laws.

“Follow the laws as they develop. You may end up needing an expert to refine your approach once the EU AI Act comes into effect, but if you prepare yourself beforehand, you can also start preparing your team and company for what lies ahead. Have teams map their data. Data mapping, which in its simplest form amounts to labeling the fields of your data, can be critical to assessing what data you are processing, and can start you on the journey of assessment and risk analysis. Also have teams record the provenance of your data. Assessing data provenance is a practice of knowing the facts around the collection of your data. In its simplest form, it can be an assessment of how you collected data from your customers, where it was collected, when it was collected, and what consent you got at the time. Like data mapping, this may simplify assessment and risk analysis once the EU AI Act is in force." - Michael Meehan, General Counsel at Diveplane
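To make the data-mapping and provenance advice concrete, here is a minimal illustrative sketch in Python. The field names, categories, and structure are hypothetical examples, not anything prescribed by the EU AI Act; a real program would be tailored with counsel.

```python
from dataclasses import dataclass

@dataclass
class FieldRecord:
    """One entry in a simple data map: what a field is and where it came from."""
    name: str          # field name as stored in your systems
    label: str         # human-readable description of the field
    category: str      # hypothetical grouping, e.g. "contact" or "financial"
    source: str        # how the data was collected (provenance)
    collected_at: str  # when it was collected (ISO date)
    consent: str       # consent or legal basis recorded at collection time

# A hypothetical data map for a customer table.
data_map = [
    FieldRecord("email", "Customer email address", "contact",
                "signup form", "2023-01-15", "marketing opt-in"),
    FieldRecord("credit_score", "Third-party credit score", "financial",
                "vendor feed", "2023-03-02", "contractual necessity"),
]

def fields_by_category(records: list[FieldRecord], category: str) -> list[str]:
    """List field names in a given category, e.g. to scope a risk review."""
    return [r.name for r in records if r.category == category]
```

Even a labeled inventory this simple lets a team answer the first questions a risk assessment asks: `fields_by_category(data_map, "financial")` surfaces the fields most likely to feed a high-risk use case.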

“My one piece of advice would be to not get so infatuated with the AI Act that you disregard the existing laws in Europe and the United States that already apply to AI applications. We've already seen efforts from the data protection authorities in Italy, Spain, and France to either ban or investigate ChatGPT, and that's because they believe the GDPR already applies to AI systems. Many uses of AI are already governed by existing law, such as employment law, so I would not assume that you are operating in a world without applicable legal regimes right now for the use of AI.” -Travis LeBlanc, Partner at Cooley

"Non-compliance would lead to fines of up to €40 million or up to 7% of the company’s total worldwide annual turnover for the preceding financial year. The AI Act is expected to be adopted by the end of this year and to enter into force at the beginning of 2024, after which companies will have 2 years to become compliant." - Patrick Van Eecke, Partner at Cooley



About The L Suite

Called “the gold standard for legal peer groups” and “one of the best professional growth investments an in-house attorney can make,” The L Suite is an invitation-only community for in-house legal executives. Over 2,000 members have access to 300+ world-class events per year, a robust online platform where leaders ask and answer pressing questions and share exclusive resources, and industry- and location-based salary survey data.

For more information, visit lsuite.co.