Counsel Corner: Managing the Risks & Rewards of AI: A Look at Organizational Best Practices

[Image: Audience members at TechGC VCGC 10.22]

Integrating generative AI into product development and engineering tasks has opened a Pandora’s box of opportunities and risks. For legal teams bridging innovation and compliance, collaboration with product and engineering is vital.

Authors

  • Team L Suite

Privacy & Cybersecurity

Our Counsel Corner series brings together the top legal minds from our community to discuss complex challenges that GCs face when growing their companies and navigating their careers. In this edition, we share insights on how legal teams can best approach this evolving landscape from experts in the field, including Barath Chari, Partner @ WSGR; Francine Godrich, GC @ Focusrite PLC; and Philip Grimason, GC @ Synalogik Innovative Solutions Limited.

Strategies for Collaboration with Product and Engineering Teams

Generative AI holds immense potential for innovation, but it also raises legal and ethical concerns, which is why many legal teams are taking a proactive approach to strike a balance between the two. Focusrite PLC has approached this with a simple policy: use with caution. Since most generative AI models store the information they receive, never feed them commercially sensitive information, and always validate the responses they produce.
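
To make a “use with caution” rule concrete in engineering workflows, some teams put a lightweight screen in front of any external model call. The sketch below is illustrative only and is not Focusrite’s tooling: the patterns are invented examples, and model_client stands in for whichever vendor SDK is in use.

```python
import re

# Illustrative patterns for commercially sensitive material; a real policy
# would maintain this list centrally (internal code names, customer IDs, etc.).
SENSITIVE_PATTERNS = [
    r"\bconfidential\b",
    r"\binternal\s+use\s+only\b",
    r"\b\d{3}-\d{2}-\d{4}\b",  # SSN-style identifiers
]

def is_safe_to_submit(prompt: str) -> bool:
    """Return False if the prompt appears to contain sensitive material."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in SENSITIVE_PATTERNS)

def submit_prompt(prompt: str, model_client) -> str:
    """Gate every outbound prompt; model_client stands in for any vendor SDK."""
    if not is_safe_to_submit(prompt):
        raise ValueError("Prompt blocked: possible commercially sensitive content.")
    response = model_client.generate(prompt)
    # Per the policy, the caller must still validate the response before relying on it.
    return response
```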

When it comes to collaboration and relationship building between these teams, it’s important to show that you are allies in innovation and not an impediment. Make it a point to remind your teams that you have an open door. As Francine Godrich says...

“It goes without saying that we’d be only too happy to remove the door to our office so that anyone who needs advice can get it. In fact, we have tried to remove the door but it turned out the door frame was part of the supporting structure of the building.”

It’s also worth being ready to listen, to learn about these technologies, and to ask for demos:

“Don’t be afraid to look foolish in asking to see what’s ‘under the hood’ and how everything works. Make clear that you are interested in helping them understand legal risks in concrete terms and to work with them on technical mitigation strategies.” — Barath Chari

Navigating the Copyright Landscape

Since generative AI can produce content resembling existing copyrighted material, preventive measures are essential. And because the legal landscape includes pitfalls like unwitting IP infringement, and questions such as “Who owns the IP created by AI?” have yet to receive final determinations, it becomes critical to establish ground rules for engaging with these platforms.

Here are some ideas on how to deal with AI and copyright without ruling out the use of AI:

If building AI software/platforms:

Ensure the integrity of algorithms and materials used for AI content creation. Only use data/materials that you own, that you have permission to use, or that are in the public domain.

When seeking licenses, clarify your intent for data use and obtain necessary rights, such as the ability to modify and develop derivative content.

For the creation of new copyrightable pieces, adjust the algorithm to make significant alterations to the input data. Ensure you have the right to adapt the original material, or that it is in the public domain.
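
One way to operationalize these sourcing rules is a provenance gate on the training corpus, where every record must carry documented rights metadata before it is admitted. The following sketch is a generic illustration; the field names (provenance, derivatives_permitted) are invented for the example.

```python
from enum import Enum

class Provenance(Enum):
    OWNED = "owned"
    LICENSED = "licensed"            # with rights to modify and create derivatives
    PUBLIC_DOMAIN = "public_domain"
    UNKNOWN = "unknown"

ADMISSIBLE = {Provenance.OWNED, Provenance.LICENSED, Provenance.PUBLIC_DOMAIN}

def admit_to_training_set(record: dict) -> bool:
    """Gate each record on documented rights metadata before it enters the corpus."""
    try:
        provenance = Provenance(record.get("provenance", "unknown"))
    except ValueError:
        provenance = Provenance.UNKNOWN  # undocumented sources go to legal review
    if provenance is Provenance.LICENSED and not record.get("derivatives_permitted", False):
        # The license must cover the rights you actually need, e.g. derivative works.
        return False
    return provenance in ADMISSIBLE
```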

If using AI software/platforms created by third parties:

Secure a Robust Contract: make sure it has clear warranties and indemnities against IP infringement.

Review Licenses and Policies: understand the third party’s stance on data ownership and permissions.

Examine License Terms and Restrictions: do this especially if generative AI is being used to craft new works. Potential restrictions to watch out for:

  • Advertising & Copywriting
  • Distribution
  • Commercialization
  • Monetization
  • Copyrights

IP Insurance: this may be necessary to cover the cost of potential IP infringement claims if AI-derived content plays a large part in your business.

Addressing Bias and Reputational Risks

Unconscious bias and the generation of offensive content are serious concerns that legal teams must address to avoid reputational damage. Practical training sessions that use real-time examples from AI systems like ChatGPT have proven effective in educating staff.

Additionally, “human in the loop” policies, under which AI-generated content is reviewed and validated before public release, serve as an added safeguard. Just as a company wouldn’t let a copywriter publish marketing copy without approvals, and a studio wouldn’t release a work without vetting rights, product teams should have a process in place to evaluate and validate outputs prior to release.

It’s also worth exploring features that allow users to flag undesired output, which can then be reviewed and adjusted as necessary.
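
As a rough sketch of how these two safeguards might fit together in code (the names GeneratedItem, Status, and the functions here are invented for illustration; a real system would persist items and route them through a proper review queue):

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PENDING = "pending"     # awaiting human review
    APPROVED = "approved"   # cleared for public release
    FLAGGED = "flagged"     # reported by a user after release

@dataclass
class GeneratedItem:
    content: str
    status: Status = Status.PENDING
    notes: list = field(default_factory=list)

def approve(item: GeneratedItem, reviewer: str) -> None:
    """A named human reviewer signs off before anything ships."""
    item.status = Status.APPROVED
    item.notes.append(f"approved by {reviewer}")

def flag(item: GeneratedItem, reason: str) -> None:
    """End users can send published output back for re-review."""
    item.status = Status.FLAGGED
    item.notes.append(f"flagged: {reason}")

def publish(item: GeneratedItem) -> str:
    """Refuse to release anything a human has not approved."""
    if item.status is not Status.APPROVED:
        raise PermissionError("AI-generated content cannot ship without human approval.")
    return item.content
```

The key property is that publish refuses anything a human has not approved, mirroring the marketing-copy analogy above.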

Consider the following questions to pose to your teams:

  • Is this information from an open AI data set? If so, be extremely specific with your prompt to pull the precise data you need, rather than asking open-ended questions that could lead to troublesome or biased responses.
  • Is the response up to date and relevant? ChatGPT’s training data typically runs about two years behind, so its responses may be outdated or wrong in light of present knowledge and current events.
  • How did the AI arrive at its answer or conclusion? If sources cannot be obtained, use extreme caution, or consider not using it at all for that particular application until sources can be verified.

Ensuring Transparency and Compliance

Legal teams need to work closely with product and engineering departments to develop clear disclosures about the AI’s involvement in content creation, including watermarks, disclaimers, and other in-product notifications. The goal is to maintain transparency while abiding by data protection laws like GDPR and evolving AI-related regulations. Training programs related to intellectual property rights help the team understand the complexities of content ownership and legality.
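
At its simplest, an in-product disclosure can be attached programmatically wherever generated content leaves the system. The snippet below is a minimal, hypothetical illustration; the actual wording and metadata fields would be set by legal and product together.

```python
AI_DISCLOSURE = (
    "This content was produced with the assistance of generative AI "
    "and reviewed by our team."
)

def with_disclosure(content: str) -> dict:
    """Attach a human-readable disclaimer and machine-readable provenance metadata."""
    return {
        "body": f"{content}\n\n{AI_DISCLOSURE}",
        "metadata": {"ai_assisted": True, "disclosure_version": "1.0"},  # illustrative fields
    }
```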

“Our research focus is on developing and leveraging explainable AI techniques using fully anonymized data. We aim to be able to explain, to a level satisfactory for evidencing, exactly why our systems have made the recommendations they have. For generated content, a highly templated approach is used to make the content predictable and repeatable.” — Philip Grimason
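
A templated approach of the kind Grimason describes can be as simple as a fixed template with validated slots, so the surrounding language never varies and every output is repeatable. This generic sketch assumes invented field names and is not a description of Synalogik’s system:

```python
from string import Template

# The fixed template keeps the surrounding language constant; upstream systems
# only supply values for the named slots, which are validated before rendering.
REPORT_TEMPLATE = Template(
    "Entity $entity_id matched $match_count records. "
    "Recommended action: $action. Confidence: $confidence."
)

ALLOWED_ACTIONS = {"review", "escalate", "no action"}

def render_report(entity_id: str, match_count: int, action: str, confidence: float) -> str:
    """Render a report from validated slot values only."""
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"Unexpected action: {action!r}")
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("Confidence must be between 0 and 1.")
    return REPORT_TEMPLATE.substitute(
        entity_id=entity_id,
        match_count=match_count,
        action=action,
        confidence=f"{confidence:.0%}",
    )
```

For example, render_report("A-123", 4, "review", 0.92) always yields the same sentence shape, which is what makes outputs predictable and auditable.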

Allocating Responsibility and Accountability for AI-Generated Content

Addressing the intricate issue of accountability in AI systems is crucial as AI continues to evolve. Some organizations employ explicit policies that outline user responsibility for understanding the input, output, and risks involved in AI systems, further categorized based on the type of AI in use (focused vs. generative).

In contrast, others prioritize consumer expectations, emphasizing that ultimate accountability lies with the organization, not with individual developers or AI vendors. Internal frameworks should guide the do’s and don’ts for developers, backed by contractual protections from vendors, making accountability a collective effort involving multiple departments.

At Focusrite PLC, Francine uses the following list of do’s and don’ts, followed by a simple final step: anything that is publicly available must be validated.

  • DO check for any bias in the data sets relevant to the use.
  • DO verify any code or system configuration.
  • DON’T use a system that draws data relevant to people or people trends unless the information is fully anonymized.
  • DON’T input company-sensitive data into systems unless doing so has been agreed by the company.

"I love a policy as long as 1. it isn’t too long, 2. is easy to understand, and 3. is practical. Our policy is clear that anyone using an AI system is responsible for understanding what they are inputting to the system, what it is generating by way of response, and what is the risk involved.” — Francine Godrich

Although generative AI is challenging the traditional boundaries of legal compliance, the balance between innovation and compliance is attainable through collaboration and proactive policies. Legal teams, naturally, are at the forefront of this critical process and can continue to pave the best possible path forward through transparency, accountability, and compliance.


About The L Suite

Called “the gold standard for legal peer groups” and “one of the best professional growth investments an in-house attorney can make,” The L Suite is an invitation-only community for in-house legal executives. Over 2,000 members have access to 300+ world-class events per year, a robust online platform where leaders ask and answer pressing questions and share exclusive resources, and industry- and location-based salary survey data.

For more information, visit lsuite.co.