
An In-House Counsel’s Guide to AI Compliance

In this guide, we’ll provide an overview of AI compliance and why it matters before reviewing the top five areas of AI compliance to consider.

Authors

  • Joy Batra

    General Counsel, US

    Base.com


What Is AI Compliance?

An extension of your company’s existing compliance efforts, AI compliance involves using AI in a way that allows you to meet your obligations across multiple dimensions, such as legal, regulatory, ethical, and reputational. AI compliance means integrating AI so that it is both compliant on its own and supportive of compliant growth in the rest of your business.

Most commonly, AI compliance will involve managing risk in domains such as privacy, IP, and data governance, while also managing the additional risks AI introduces, such as bias, lack of explainability or oversight, cybersecurity, and a rapidly evolving legal framework. AI compliance means ensuring your system is well-structured from the outset and continuously monitoring whether the technology’s decision-making is working fairly and safely.

Why Is AI Compliance Important?

AI compliance is essential to make sure any new technologies being integrated into the business are set up for long-term success, as opposed to just short-term efficiency or capitalizing on hype. Noncompliance can result in litigation, regulatory action, financial penalties, business disruption, and loss of customer and partner trust.

AI compliance is important in all industries that may touch the technology or its outputs, but is particularly sensitive in highly regulated industries like healthcare and financial services.


AI Compliance Best Practices for In-House Counsel

Here are the top five areas to consider when building a robust AI compliance program in-house.

Contracts

Building AI compliance into contracts starts with five core dimensions:

  • Reps & Warranties: To properly allocate risk, these should address lawful use of AI training data, appropriate licensing or ownership of AI-generated outputs, and compliance with applicable law across AI, IP, privacy, and data protection in all relevant jurisdictions.

  • IP: Clarify who will own or license the AI-generated outputs and what they can be used for. There should be clear parameters on which data can be used and how, in order to avoid waiving the right to a fair use defense that might otherwise apply.

  • Privacy: Review which geographies the counterparty will store data in, whether any personally identifiable information (PII) will be used, and how it will be protected if so, in order to comply with current privacy frameworks that apply to AI, such as GDPR and CCPA.

  • Indemnification: Consider addressing risks of: (1) infringement or third-party claims arising from data used in training, (2) privacy claims arising from improper use of personal data, particularly across multiple jurisdictions, and (3) litigation or regulatory action stemming from bias and safety issues. Align on whether vendor and customer will indemnify each other for any problems that emerge when a vendor’s AI is used to create other products and services.

  • Ongoing Compliance Obligations: Agree on any ongoing commitments to responsible AI development, industry standards such as ISO/IEC 42001:2023, periodic audits and testing, and potential improvements on bias and explainability.

Marketing

When it comes to describing AI usage compliantly, companies need to walk the line between adequately disclosing their reliance on AI and not exaggerating it. Several states, most notably California, have adopted AI transparency laws that require disclosures such as whether AI was used to create content or make decisions, and what data was used to train the tool. Even in the absence of transparency mandates, disclosing the use of AI is now commonplace in risk factors and SEC filings, and, to some extent, in consumer-facing communications.

The market is likely to move toward greater transparency even where it is not strictly required, with more companies publicly sharing their safety checks, training data overviews, and system cards, as OpenAI and Anthropic do. General counsel should plan for increased transparency expectations ahead, while also being careful not to oversell their company’s AI usage. Misleading AI claims remain a large area of exposure, as does regulatory scrutiny of “AI-washing”.

Regulatory and Litigation

Staying up to date on regulatory and litigation exposure is an essential part of an AI compliance effort. On the domestic regulatory front, a divergence between state and federal postures is quickly emerging, with the federal government taking a deregulatory stance in its AI Action Plan, whose stated missions are accelerating AI innovation and building American AI infrastructure.

At the state level, over 550 AI-related bills have been proposed across 45 states, with California and Illinois leading the way on state-level AI regulation. There are generally three types of state AI laws so far: those focused on bias, those focused on transparency, and those applying existing laws to AI. Even in the absence of new AI-specific legislation, state attorneys general are likely to bring claims under preexisting legal frameworks, in particular UDAP/UDAAP, privacy, civil rights, and competition laws.

On the litigation front, special attention should be paid to court decisions, with notable summary judgments recently emerging on fair use and copyright matters.

Internationally, jurisdictions vary in their openness to AI, with the EU AI Act being the most robust regulatory framework so far. Companies may need to limit AI use by user geography, which is reminiscent of multi-jurisdictional approaches to privacy after GDPR.

Documentation

Depending on a company’s resources and likelihood of litigation, it can be useful to document how each model works, what data it uses, how decisions are made using its output, and what limitations the model has, particularly when evaluating a new vendor. It can also be helpful to document any processes for evaluating the AI training data sources used, as well as steps taken to avoid using data that is pirated or runs afoul of legal, contractual, or technical constraints. This may also include periodically diligencing and documenting external partners’ data sources. Strong documentation is reinforced by favoring models with built-in explainability, which is especially useful for communicating the company’s position to regulators and judges and is likely to become a market standard.
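
For teams that want to make this concrete, the documentation can live in a structured record per model. Below is a minimal sketch in Python; the field names and example values are illustrative assumptions, not a prescribed schema.

```python
# A minimal sketch of a per-model documentation record. Field names and
# example values are illustrative assumptions, not a required schema.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str                       # model or vendor tool being documented
    vendor: str                     # "internal" for in-house models
    purpose: str                    # what decisions its output informs
    training_data_sources: list[str] = field(default_factory=list)
    data_diligence_notes: str = ""  # steps taken to verify lawful sourcing
    known_limitations: list[str] = field(default_factory=list)
    explainability: str = ""        # e.g., "feature attributions available"
    last_reviewed: str = ""         # ISO date of last periodic review

# Hypothetical example entry.
record = ModelRecord(
    name="resume-screener-v2",
    vendor="ExampleVendor",
    purpose="Ranks inbound applications; humans make final decisions",
    training_data_sources=["licensed job-posting corpus"],
    known_limitations=["not validated for non-US resumes"],
    last_reviewed="2025-01-15",
)
```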

Some lawyers further recommend that companies perform AI impact assessments, involving legal, product, risk, and technical representatives, before implementing an AI use case. This is often done in privacy contexts and can be useful in demonstrating that the company took “reasonable precautions” as part of a litigation or regulatory defense. Other key factors to consider before implementing are human supervision, model explainability, and data security.
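
As one way to record such an assessment, the sketch below gates implementation on sign-offs from each function and on the three factors above. The reviewer roles and checklist questions are assumptions for illustration, not a standard.

```python
# A minimal sketch of a pre-implementation impact assessment gate.
# Reviewer roles and checklist questions are illustrative assumptions.
REVIEWERS = ("legal", "product", "risk", "technical")

CHECKLIST = {
    "human_supervision": "Is a human reviewing consequential outputs?",
    "explainability": "Can the model's decisions be explained if challenged?",
    "data_security": "Is input and output data protected in transit and at rest?",
}

def assessment_complete(signoffs: dict[str, bool], answers: dict[str, bool]) -> bool:
    """True only if every reviewer signed off and every checklist item passed."""
    return (all(signoffs.get(r, False) for r in REVIEWERS)
            and all(answers.get(q, False) for q in CHECKLIST))

# Example: all reviewers approve and all checklist items pass.
signoffs = {r: True for r in REVIEWERS}
answers = {q: True for q in CHECKLIST}
print(assessment_complete(signoffs, answers))  # True: cleared to implement
```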

Governance

AI compliance needs strong, cross-functional support at the top. A cross-functional committee that evaluates the use of AI at the company level can set the culture of compliance and the overall risk appetite. Here, the company can align on which use cases are acceptable versus high risk. (For example, scheduling candidate interviews may be an acceptable use case, whereas hiring and firing decisions are often considered high risk and would benefit from human oversight.) This group can evaluate ongoing AI usage and identify who is accountable for any issues that may arise from it.

Depending on the use case, accountability may fall to the Chief Information Security Officer, Chief Risk Officer, Data Protection Officer, an AI Governance or Ethics Lead, Product and Engineering teams, or Legal and Compliance. Ideally, this cross-functional committee can also review and track which AI tools are being used, for what purpose, their respective risk levels, and the designated point of contact for each one, though this may not always be practical depending on the company’s resources. Risks to discuss at this level include data quality, cybersecurity, and legal and regulatory exposure.
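
A lightweight version of that tracking can be a simple registry. The sketch below is a minimal example; the risk tiers, tool names, and owners are illustrative assumptions.

```python
# A minimal sketch of the AI tool registry described above. The risk
# tiers, tool names, and owners are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RegisteredTool:
    name: str
    purpose: str
    risk_level: str   # e.g., "acceptable" or "high"
    owner: str        # designated point of contact

registry = [
    RegisteredTool("interview-scheduler", "Schedules candidate interviews",
                   risk_level="acceptable", owner="Product"),
    RegisteredTool("resume-screener-v2", "Ranks inbound applications",
                   risk_level="high", owner="AI Governance Lead"),
]

# Surface high-risk tools for the committee's periodic review.
for tool in registry:
    if tool.risk_level == "high":
        print(f"Review: {tool.name} (contact: {tool.owner})")
```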

Building AI Compliance Over Time

Adding AI to your company’s tech stack is not a one-time process, but general counsel can foster AI compliance in the long run by following a few general principles. When adding a new tool, after vendor diligence and impact assessments, run small-scale pilots before larger deployments when possible. Once an AI tool is in place, regularly check back in to ensure its use still meets acceptable risk levels, especially for higher-risk applications such as personnel decisions.

Note that performance and compliance can drift over time, and biases or unintended impacts may not surface until well after deployment. To support ongoing compliance, continue to build a culture of education and compliance within the team that can help surface new opportunities and risks as soon as they emerge.
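
One way to catch that drift is to compare a tracked metric against its pilot baseline on a regular cadence and escalate when it moves too far. The sketch below is a minimal illustration; the metric, baseline, and tolerance are assumptions, not recommended values.

```python
# A minimal sketch of a periodic drift check: compare a tracked metric
# (e.g., an approval or error rate) against its pilot baseline and flag
# the tool for committee review if it has moved too far. The tolerance
# and example values are illustrative assumptions.
def drift_alert(baseline: float, current: float, tolerance: float = 0.05) -> bool:
    """Return True if the metric has drifted beyond the tolerance."""
    return abs(current - baseline) > tolerance

# Example: a selection rate measured at 0.42 during the pilot now reads 0.31.
if drift_alert(baseline=0.42, current=0.31):
    print("Drift detected: escalate to the AI governance committee")
```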
