So, You Want to Use an LLM? Legal Considerations for AI Adoption in Your Business
The L Suite and Orrick explore how to navigate the legal complexities of AI adoption in your business, from privacy to implementation best practices.
Navigating AI governance can feel like issue-spotting in law school, where we were constantly identifying issues without clear-cut answers.
Unlike those academic exercises, however, oversights in your organization's AI governance can carry real-world consequences.
For instance, if engineers use AI assistants to generate core product code that cannot be protected by copyright, the value of your company's intellectual property could be at risk. Similarly, if your sales team processes sensitive client data or confidential business information through an LLM without a proper enterprise license agreement, that information could be compromised, leading to significant legal challenges for your company.
While issuing blanket bans on employee AI use is neither practical nor advisable, companies should nevertheless proceed with caution. Recently, I hosted a webinar on this topic for The L Suite with Laura Belmont, General Counsel at Civis Analytics. Here is an overview of our insights on responsibly managing this revolutionary technology.
Key Takeaways:
Assess AI tools through three lenses: internal operations (only company data is at issue), client service support (client data is involved), and product integration (AI features are included in your product/service offerings).
Create a practical AI governance framework starting with procurement processes, security reviews, education, and clear protocols for onboarding AI tools. Pay special attention to how users will access the model (web interface vs. API vs. cloud platform).
Develop role-specific, digestible guidance for employees. For example, create targeted one-pagers for different teams outlining clear dos and don'ts in addition to more comprehensive policies.
Update employees regularly on emerging risks, clear escalation paths, and open channels for tool requests. This helps foster a company-wide culture of responsible use without legal needing to become the "AI police."
AI Data Flows and Key Risk Areas
Belmont noted that the AI requests she receives tend to fall into one of three buckets:
Internal operations: Teams wanting to use AI tools for operational efficiencies involving company data.
Client service support: Inputting client data into AI models (for example, using ChatGPT's analytics functions to generate insights from sales or marketing data).
Product integration: Incorporating AI features into products or services sold to clients.
When weighing the risks (and potential rewards) of these use cases, you can't treat LLM integrations like traditional software — the data flowing through AI systems poses unique risks.
Many AI providers try to sidestep the issue of proprietary data sensitivity with vague statements in their terms of service like "don't input personal data." But let's be realistic: If your team uses these tools for business purposes, that probably involves some level of personal data.
Cross-border data flows add another layer of complexity. If you're using a China-based model with U.S. personal data or feeding European customer data into a U.S.-based system, you should consider cross-border data-transfer rules.
Then there's the subprocessor question. If you process data for clients, examine whether your AI provider agreements align with your downstream obligations. This might mean updating your subprocessor lists and potentially giving clients the opportunity to object to the use of specific AI tools.
Concerning Outputs
AI outputs pose well-documented challenges, from hallucinations to factual inaccuracies. These issues become critical when outputs are customer-facing. To mitigate these risks, organizations need robust review processes, including "human-in-the-loop" procedures and clear disclaimers.
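To make the "human-in-the-loop" idea concrete, here is a minimal sketch of a review gate in Python. Every name in it (DraftOutput, approve, publish) is hypothetical, invented for illustration; the point is simply that nothing AI-generated reaches a customer without a recorded human sign-off.

```python
from dataclasses import dataclass
from typing import Optional

# Minimal sketch of a human-in-the-loop gate: AI-generated text is held in a
# review state and can only be released once a named reviewer approves it.
# All names here are illustrative, not any particular product's API.

@dataclass
class DraftOutput:
    text: str
    approved: bool = False
    reviewer: Optional[str] = None

def approve(draft: DraftOutput, reviewer: str) -> None:
    """Record a human reviewer's sign-off on the draft."""
    draft.approved = True
    draft.reviewer = reviewer

def publish(draft: DraftOutput) -> str:
    """Refuse to release AI output that no human has reviewed."""
    if not draft.approved:
        raise PermissionError("Customer-facing AI output requires human review.")
    return draft.text

draft = DraftOutput(text="Generated summary for the client...")
approve(draft, reviewer="jdoe")  # hypothetical reviewer ID
print(publish(draft))
```

In practice, the same gate might live in a content-management workflow or a ticketing queue rather than application code; what matters is that the approval step is mandatory and auditable.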
Ownership of outputs also presents several thorny issues. While enterprise versions of AI tools typically grant output ownership to the customer, free versions often claim ownership rights for the model provider (more on this in a moment).
Code generation requires special attention: Software engineers were among the earliest adopters of LLMs. However, since AI-generated code may lack explicit copyright protection, using these tools for core product development could undermine the value of your company’s intellectual property.
A Step-by-Step Guide to LLM Legal Review
If you're feeling overwhelmed about AI, you're not alone — there’s a lot to consider. When a team member approaches you with an AI-related request, here's how we recommend starting your legal analysis:
Start with procurement. Incorporate AI questions into your procurement process. If you don't have a formal process, now's the time to create one.
Assess each tool's security. Security considerations should be top of mind when selecting vendors. You may need to add AI-specific questions to your standard security questionnaire.
Consider access methods. How users will access the model is another key consideration. There is a significant difference in risk between using a web interface, accessing the model through an API, and implementing it through a cloud AI platform like AWS Bedrock. Cloud platforms often offer additional security measures and may limit how much data passes through to the model (see the sketch after this list).
Conduct an AI risk assessment. By systematically evaluating the ethical, legal, and operational implications of AI deployment, organizations can mitigate potential negative impacts, such as data breaches, biased decision-making, and compliance issues.
Map data protection needs. When it comes to privacy, consider inputs and outputs. Ensure compliance with privacy regulations, from the GDPR to state privacy laws. Consider whether a data processing agreement would be appropriate.
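To make the access-method distinction in step 3 concrete, here is a minimal sketch of the cloud-platform route, assuming AWS credentials and Bedrock model access are already configured. The model ID and prompt are illustrative; the request shape follows Anthropic's published format for Bedrock.

```python
import json

import boto3

# Minimal sketch: invoking a model through a cloud AI platform (AWS Bedrock)
# rather than a consumer web interface. Assumes AWS credentials and Bedrock
# model access are already configured; the model ID and prompt are illustrative.

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 500,
        "messages": [
            {"role": "user", "content": "Summarize these internal meeting notes."}
        ],
    }),
)

print(json.loads(response["body"].read())["content"][0]["text"])
```

For legal review, the salient difference is contractual as much as technical: prompts and outputs here travel under your existing cloud agreement and its data commitments, rather than under the consumer terms of a public chat product.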
Enterprise vs. Free Tools
The free, consumer-facing versions of popular LLMs often come with terms that are less favorable to companies. They typically allow the provider to use prompts and outputs to train the provider’s models (or develop other products and services). That could jeopardize confidential information, trade secrets, and personal data.
Enterprise versions generally provide more favorable terms for companies. For instance, OpenAI's business terms limit data usage to providing the service, while Anthropic's commercial terms prohibit training models on customer content. Some providers, like Cohere, offer opt-out options for commercial customers who don't want their data used for training.
Data Processing Considerations
When reviewing AI provider agreements, pay particular attention to data processing terms. Many providers include provisions allowing "secondary uses" of data that could expose your organization to risk. Others say they will use inputs and outputs, including any personal data, to train the provider’s models. Companies should consider entering into a data processing agreement with the LLM provider to delineate each party's responsibilities and obligations regarding data privacy, security, and compliance with regulations such as the GDPR or state privacy laws.
Building a Culture of Responsible AI Use
Successful AI governance isn't just about policies and contracts — it's also about people. Companies should focus on building a culture of responsible use through:
Clear escalation paths for questions and concerns.
Open feedback channels about tool needs and gaps.
Regular communication about evolving best practices.
Technical guardrails where appropriate (a minimal example follows this list).
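As one illustration of a technical guardrail, here is a minimal sketch of a pre-prompt filter that scrubs obvious personal data before a prompt leaves the company. The regexes are deliberately simple and the function name is our own; a production deployment would lean on a vetted data-loss-prevention tool rather than a handful of patterns.

```python
import re

# Illustrative guardrail: scrub obvious personal data from a prompt before it
# leaves the company. Regexes like these catch only simple patterns (emails,
# US-style SSNs, phone numbers) and are no substitute for a vetted DLP tool.

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched personal data with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact("Follow up with jane.doe@example.com, SSN 123-45-6789."))
# -> Follow up with [REDACTED EMAIL], SSN [REDACTED SSN].
```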
To make compliance more intuitive, policies should be accessible and approachable (I've seen too many 10-page policies sitting unread in shared drives). Create targeted guidance for different audiences. For engineering teams, we've had success with one-pagers that clearly outline dos and don'ts for using AI in code development. Marketing teams need guidance on generating content while protecting the brand, with an emphasis on voice consistency. For sales teams, develop straightforward protocols for using AI tools during client interactions, with clear instructions on data handling and example use cases.
The rapid pace of AI development also means employee education needs to be an ongoing conversation, not a one-time presentation. Regular updates help teams stay current with emerging risks — like the rise of malware in open-source LLMs. Consider creating an environment where teams understand the rules around AI use as well as the reasoning behind them. When employees understand the broader implications for security, privacy, and IP, they're more likely to make smart decisions independently.
Staying Ahead of the AI Curve
AI is growing at a dizzying pace, and the regulatory landscape remains volatile. The EU AI Act has set a precedent we'll likely see elsewhere. In the U.S., state legislators have proposed hundreds of laws to regulate aspects of AI use, from mandatory disclosure of AI-generated content to restrictions on using AI in hiring. Staying abreast of this rapidly changing environment is paramount (and joining communities like The L Suite can be incredibly helpful).
In addition to staying current with regulatory news, enterprise AI use cases can shift on a dime — so flexibility is critical. Keep an eye on how tools are used in practice: What starts as a simple internal tool can quickly expand to client-facing applications, potentially triggering additional compliance requirements or bumping up against usage restrictions in your agreements.
Belmont shared an example of a recent curveball: When her team wanted to integrate a specific AI model into their client-facing platform, they discovered that the model’s terms had changed to prohibit its use for lobbying. Because they wanted to build a tool that could be used by all their clients, including advocacy clients, they added a drop-down menu of multiple AI model choices to their platform, including one whose terms carried no such restriction.
Stories like this highlight why AI implementation requires vigilance and adaptability. While we can't predict how technology will evolve, we can build thoughtful frameworks that allow organizations to harness AI's benefits while protecting against its risks.
If you're looking to connect with other legal professionals about how they're thinking about AI and LLMs, The L Suite can offer helpful resources. Apply to join today.
About The L Suite
Called “the gold standard for legal peer groups” and “one of the best professional growth investments an in-house attorney can make,” The L Suite is an invitation-only community for in-house legal executives. Over 4,000 members have access to 300+ world-class events per year, a robust online platform where leaders ask and answer pressing questions and share exclusive resources, and industry- and location-based salary survey data.
For more information, visit lsuite.co.