
Counsel Corner: 6 Rules for Taking on Generative AI as a GC

As high-growth companies embrace the latest tech in generative AI, their GCs face new questions around how to manage risk while enabling innovation. Five legal experts share their insights.

Authors

  • Team L Suite


Our Counsel Corner series brings together the top legal minds from our community to discuss complex challenges that GCs face when growing their companies and navigating their careers. This edition tackles the emerging legal challenges around Generative AI. 

Is your company considering using ChatGPT or other generative AI to augment your products or services? As GC, you know that there are many interesting developments taking place in this space, and that your company will likely want to embrace the latest technology to stay competitive.

But what are the risks? And how can you set up your team to manage risk appropriately while also giving them space to innovate?

To answer these questions, we sat down with five legal experts in our community:

  • Janis Foo, General Counsel at Collective[i]
  • Adam Glick, Former Head of Legal at Intercom
  • Germaine Gabriel, Associate General Counsel at FullStory
  • Barath Chari, Partner at Wilson Sonsini Goodrich & Rosati 
  • Scott McKinney, Partner at Wilson Sonsini Goodrich & Rosati 

Read below for their insights on how GCs can build effective policies around AI, manage risks appropriately, and build processes that allow their teams to innovate even when using new technologies subject to constantly evolving policies and legislation.

Build your data and privacy policies before tackling AI

If your company already has a robust vendor vetting process in place, you may think that you’re already covered when it comes to assessing risks from generative AI tools. However, keep in mind that because this technology is so new and has so many use cases, you’ll need to build out separate policies that explicitly outline how your company is allowed to use data. Before you begin to think about building company policies and guardrails for AI tools, you need to nail the basics first. Does your company have a privacy program? Do you have existing guidelines around data ethics, and do you understand your internal and external customer data flows?

If these are still open questions for you, tackle these critical issues before you take on AI implementation.

Once you have robust privacy and data policies built out, you can use them as a jumping-off point for amending your vendor vetting process and building in special considerations for generative AI tools.

“In a traditional procurement process, you’ll have a vendor with maybe one or two use cases. There are just so many uses for something like ChatGPT, with even more use cases that we haven’t even thought of yet. I have to think about it more in terms of ‘What are the risks and benefits of different uses, and what data, specifically, would be shared with these tools?’ Thinking about it through that lens helps our team understand how we approach the risks and what guidelines we need to implement.” – Janis Foo, General Counsel at Collective[i].

As is the case with all company policies, your team will require robust, ongoing training on how to properly assess tools and use cases so that legal can step in when needed during the vetting process.

"Sometimes companies might be thoughtful about getting a policy in place. However, the most crucial thing is to support the policy with employee training about actually following the guidelines across different use cases and departments. If it’s just a sheet of paper or a word doc that sits untouched, it's not worth anything.” - Scott McKinney, Partner at Wilson Sonsini Goodrich & Rosati.

As part of this process, make sure you’ve mapped your customer data flows and have built solid relationships across your organization so that team members openly communicate with you about concerns and roadblocks as you implement your AI policies.

“As you think about implementing AI policies, it’s important that Legal talks to team members to understand how they want to use Generative AI, and what data they want to use. Those conversations will allow Legal to determine whether the appropriate contractual and privacy safeguards are in place, and if there are data ownership or IP implications. It’s always crucial to understand the life cycle of the data.” – Germaine Gabriel, Associate General Counsel at FullStory.

Don’t underestimate the risks or complexity of sharing data with third parties

Most generative AI tools use your data inputs to build their models and improve their products. In many cases, you may not have a full view of a third party’s data policies or how they’re storing data.

“The tricky part of this technology is that I see it from both sides. We are both a provider and consumer of it. As a provider, I know that we are very conscious of respecting our clients’ data and developing this technology responsibly. When evaluating other providers, I’m always looking to ensure that they have similar standards and policies, because I know how valuable the technology is when it’s created and managed with potential risks for abuse in mind.” – Janis Foo, General Counsel at Collective[i].

As a GC, try to gather as much information as possible about a third party’s data policies, including whether the data will be anonymized or aggregated, and if so, how. You’ll need this information to properly assess risks and to include clear language in your contracts around data usage for generative AI purposes.

“When you’re sharing customer data with a third party, it might not always be aggregated and anonymized. You will have to expressly communicate this to customers and ask them for the rights to share information with third parties for specifically outlined purposes. As a customer, I’d be concerned if I didn’t know what rights those providers have to my information. For example, depending on the industry your business operates in, customer data might include PII. This can make data sharing with third parties much more complicated.” – Barath Chari, Partner at Wilson Sonsini Goodrich & Rosati.

Double-check AI output, and know who (or what) is exposed to it

Because AI is still a relatively new space, it’s critical to understand that the output from the tools your team uses could be infringing, damaging, misleading, or completely wrong. Build safeguards and sanity checks into your processes around AI use so that customers – and your company – are protected from scenarios in which infringing or false AI-generated output is put to use.

“The technology is amazing, so you can very easily be tricked into thinking that what you’re getting out of it is great. And a lot of times, it is great, but always keep in mind that output can be completely wrong or made up. Having a level of wariness across the board for all of the use cases – marketing, product, engineering, legal – is really important, since it’s always possible that what you’re getting back is infringing, buggy, or has a fatal data security flaw.” – Scott McKinney, Partner at Wilson Sonsini Goodrich & Rosati.

Additionally, knowing where your customers are – and which types of AI-generated data they’ll be exposed to – is key. Even if you’re only operating within the U.S., impending state-by-state legislation means that you need nimble policies that can change when appropriate. If you have European customers, GDPR means that you’ll need to be even more careful with how you’re using AI.

“The U.S. has seen movement on AI legislation, both on the state and federal levels in the last few years. In Europe, under GDPR, people have a right not to be subjected to automated decision making in certain instances, and more recently the EU is considering the Artificial Intelligence Act. Numerous other countries are also implementing or have implemented AI policies and strategies. An overarching theme in the legislative landscape is the need to foster AI innovation, while protecting consumers and the public. For companies to comply with AI guidelines and laws, that might mean revamping public disclosures to help consumers understand how your organization uses AI, and it definitely means drafting internal guidelines to help your team understand the rules of the road.” – Germaine Gabriel, Associate General Counsel at FullStory.

Over-communicate with customers

Your customers should never be surprised by how their data is being used. With new AI tools on the horizon, customers may not fully understand the ways their data might be processed or how your company’s AI policies might subject their organization or customers to more (or new) risks.

It’s more important than ever to communicate clearly with customers and ask for permission before using their data as input for generative AI tools. If you’ve already begun to use AI, update your existing policies with clear language about that use, and don’t forget that you can also reach out to customers at any time to ask for consent to use their data for a particular purpose, such as training an AI-powered beta version of a feature.

“We have a duty to our customers to make sure we safeguard their data appropriately. That duty may be communicated in our contracts, in our DPAs, and in other public disclosures. It is common practice to ask a customer for permission to use their data if the correct privileges are otherwise absent from your agreements. Having a firm understanding of what your customers expect is happening with their data, and adhering to that, is important for any data-conscious and responsible company.” – Germaine Gabriel, Associate General Counsel at FullStory.

Lean on your network

Even though AI is new, you don’t have to manage it alone when developing and implementing policies. Look to other leaders in your space, along with outside counsel recommended by people in your network (or, if you’re a TechGC member, on The BrainTrust). This will enable you to meet GCs and legal experts who’ve already done the legwork around AI and privacy.

“When researching and developing policies, in addition to consulting other legal leaders, I would work closely with your law firms and have them provide guidance that takes into consideration your business model, your company’s use of AI, and how other similarly situated peer companies are crafting their policies. They have a significant amount of expertise and can help articulate some of the implications to your business that you might not have considered.” – Adam Glick, Former Head of Legal at Intercom.

Don’t forget to also use your closest network – your teammates – to tackle AI and privacy. Keeping up with industry developments is nearly impossible for a single person, so get your team on board and have experts in various departments at your company stay on top of significant changes and updates in the AI landscape.

“One thing that’s been incredibly helpful is that we have internal Slack channels, which the entire company is on, that focus on OpenAI and other sub-processors. When someone on the team reads news or something that may potentially affect us, they share it with the entire company, so it’s not just me doing all the research. The whole company is providing each other with information on AI news; I think it would actually be impossible for me to keep up with everything on my own.” – Janis Foo, General Counsel at Collective[i].

Things are moving fast; you’ll make mistakes and face uncertainty if you want to grow

Your job as GC is to manage risk while leaving room for your company to grow rapidly. Achieving this will require you to face some level of uncertainty and find an appropriate middle ground that both you and leadership agree upon. If you’re too cautious, you may be left behind while others innovate. If you approve unfettered use of AI, you’re likely to set yourself up for major risks.

“There are times when GCs have unrealistic ideas about what they can prohibit around AI use. Having a policy that says ‘our engineers can’t use X’ likely won’t be productive, because people across your company will turn to these tools regardless. So it’s really important to have elastic policies in place, backed by training, so that employees understand what the real risks are along with responsible usage. On the other hand, allowing unfettered use is an unwise idea. Just because a tool is well-known or backed by a large company doesn’t mean you’re free from major risks.” – Barath Chari, Partner at Wilson Sonsini Goodrich & Rosati.

“Artificial Intelligence (our own app included) is rapidly becoming table stakes. While it’s important to assess the risks (and for vendors who provide AI to be transparent), it’s equally important to assess the risks of not using the technology, which can be even more devastating.” – Janis Foo, General Counsel at Collective[i].

Finally, don’t forget that if your company integrates AI, you'll be a leader in the space whose actions will affect how AI is perceived across the industry. What do you – and your company – want to be known for? Would you feel comfortable with your data practices being exposed on the front page of the New York Times? 

These are important questions to remember as you develop and implement your data and AI policies. They will help you ultimately decide what level of risk is acceptable for you and your team.

“It is critical to build a program to ensure responsible AI governance. Take time to think about all of the use cases for AI across your company – whether it’s part of your product development efforts, protecting your IP, or modifying commercial contracts to create the appropriate safeguards – because there are so many wide-ranging implications for your business. AI can be used in many productive ways, but be aware of when it can create exposure and liability as it’s implemented throughout your organization.” – Adam Glick, Former Head of Legal at Intercom.


About The L Suite

Called “the gold standard for legal peer groups” and “one of the best professional growth investments an in-house attorney can make,” The L Suite is an invitation-only community for in-house legal executives. Over 2,000 members have access to 300+ world-class events per year, a robust online platform where leaders ask and answer pressing questions and share exclusive resources, and industry- and location-based salary survey data.

For more information, visit lsuite.co.