6 Contract Considerations for Generative Artificial Intelligence Providers

11.20.2023

Generative artificial intelligence (“GenAI”) has taken the world by storm, and businesses across every industry are considering ways to incorporate this new technology into their products and services.

One obvious and popular use case is using GenAI to power a customer-facing chatbot on a company’s website or SaaS platform. There are less obvious use cases as well – companies are using GenAI to provide services that create a simulated “customer” to test new marketing campaigns and products, ensure brand and content consistency across all online platforms, and monitor equipment and provide preventative maintenance recommendations. The potential use cases seem endless.

All companies that incorporate GenAI into their products and services need to consider various contractual provisions, both on the vendor side and the customer side. These contracts may take the form of “standard terms and conditions” agreed to via click-through mechanisms or negotiated contracts.

Our first article discussed contract considerations for customers of GenAI solutions. Below is a discussion of six key contract provisions that GenAI providers should consider when incorporating GenAI into their products and services.

1. Third-Party Terms from Frontier Model Providers

Very few companies in the world are training and developing their own frontier models that power GenAI. The time, money, volume of data, compute power, human capital, and other resources required to develop these models limit this group of companies to a small handful, most of which are household names. As such, most providers that are incorporating GenAI into their products and services are integrating via API with one of these companies to provide the underlying GenAI model, which may be fine-tuned by the provider for the particular use case.
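
For illustration only, below is a minimal sketch of this integration pattern, assuming the OpenAI Python SDK as one example of a frontier model API; the model name, system prompt, and wrapper function are hypothetical placeholders, and other frontier model providers expose broadly similar APIs.

```python
# Minimal sketch: the provider's product layer wraps a third-party frontier
# model behind its own function. The OpenAI Python SDK is used only as an
# example; the model name, system prompt, and wrapper are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable


def answer_customer_question(question: str) -> str:
    """Provider-side wrapper: customers interact with the provider's product,
    not with the frontier model directly, so upstream terms attach here."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model identifier
        messages=[
            {"role": "system", "content": "You are a support assistant for ExampleCo."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

Because every customer request passes through this provider-controlled layer, any use restrictions or other obligations imposed by the frontier model provider effectively apply at this point in the stack, which is why the flow-down analysis below matters.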

Because most GenAI providers are leveraging third-party technology, it is important for providers to review their contracts with the companies that provide the base model to ensure that: (i) the provider complies with all of the terms contained in the contract for use of the service; (ii) the provider’s use case is not prohibited by such terms; (iii) the provider flows through to its customers any required use restrictions or other contractual obligations; and (iv) the provider does not grant rights to its customers, or provide representations, warranties or indemnities to its customers, that are broader in scope than those the frontier model provider grants to it.

For example, Google’s Generative AI Additional Terms of Service contains the following restriction: “You may not use the Services to develop machine learning models or related technology.” A GenAI provider could be in breach of Google’s terms if it intended to use output generated from Google’s GenAI products to develop or train its own AI model. Further, this is a restriction that the provider should impose upon its customers in its customer contracts to ensure downstream compliance with Google’s Generative AI Additional Terms of Service.

Key takeaway: GenAI providers should closely review the terms and conditions of the companies that provide the GenAI frontier models. Providers should also flow down to their customers the use restrictions, representations, warranties, and covenants with which they must comply.

2. Training Data and Prompts

Many companies have collected vast amounts of valuable data in the normal course of their business operations. Given the wave of GenAI technology, many leaders of these companies may now be thinking, “Wow, this is great. We have all of this valuable data, let’s use it to train or fine-tune an AI model to help us with [insert business problem].”

Because such data likely contains personal information as defined under the complex web of US and foreign data privacy laws, business leaders would be well served to shift away from the mindset that such data is “their data” and toward the mindset that it is “others’ data” for which the company is merely a custodian, holding only those rights consented to by the applicable individuals. For most companies, it is unlikely that the data was historically collected with the express right to use it to train AI models. A company that uses such personal information to train an AI model without the express right to do so risks violating data privacy laws and inviting investigation by government agencies, including the Federal Trade Commission (“FTC”).

The risk here is far from theoretical or academic. The FTC has investigated numerous companies for using personal information to train AI models without proper permissions and has even required some companies to destroy their data, AI algorithms and AI models, a remedy known as “algorithmic disgorgement”. Further, numerous class actions have been filed against AI providers under various privacy laws on the same basic theory that the companies did not obtain proper consent to use the data to train AI models.

Companies that intend to use data to train or fine-tune AI models should analyze the issue from two perspectives: data the company currently possesses and data it will acquire in the future. With respect to data the company currently possesses, the company needs to consider whether such data was acquired with express consent to use it to train AI models. If not, the company should consider obtaining such consent prior to using the data for training. With respect to data the company will acquire in the future, the company should amend its form contracts and Privacy Policy to state that the other party consents to the use of its data to train AI models.
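
As a practical illustration of the first step, below is a minimal sketch of filtering an existing dataset down to records that carry express consent to AI-model training before any fine-tuning occurs; the field name ai_training_consent and the CSV layout are hypothetical and would need to map to how the company actually records consent.

```python
# Minimal sketch: exclude records that were not collected with express consent
# to AI-model training. The column name "ai_training_consent" and the CSV
# layout are hypothetical.
import csv


def load_consented_training_rows(path: str) -> list[dict]:
    """Keep only rows flagged as consented to AI training; drop everything else."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    consented = [r for r in rows if r.get("ai_training_consent", "").strip().lower() == "true"]
    print(f"Kept {len(consented)} consented rows; excluded {len(rows) - len(consented)} without consent.")
    return consented
```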

The same concept also applies to training data and prompts that the GenAI customer provides to the provider or inputs into the GenAI tool. The applicable contract that governs the customer’s access to and use of the GenAI tool should contain a license from the customer to the GenAI provider that permits the GenAI provider to use the customer-provided training data and prompts for all purposes required by the GenAI provider.

Key takeaway: If a GenAI provider desires to use any personal information or customer-provided data to train or fine-tune AI models, it should: (i) consider whether data it currently possesses was collected with the express consent to use such data to train AI models; (ii) if the answer to (i) is “no,” obtain such consent; (iii) ensure that all personal information it obtains in the future is acquired under updated contracts and Privacy Policies that state the company will use such data to train AI models; and (iv) ensure that its contract governing the access to and use of the GenAI tool contains all required licenses from the customer to the provider.

3. Liability for Outputs (Not Including Infringement)

It is now generally understood that GenAI output is not perfect. Traditional software is said to “fail loudly” because an operation will fail if there is an error in the code. Unlike traditional software, GenAI output can be “confidently wrong.” Much text-generating GenAI is trained to sound convincing, but not necessarily to be factually accurate. Most have now heard of the “ChatGPT Lawyer,” who filed a brief written by OpenAI’s ChatGPT without human oversight. Things did not end well for the ChatGPT Lawyer when it was discovered that ChatGPT had cited fictitious cases in support of the brief’s legal arguments.

In customer-facing contracts, GenAI providers should be aggressive in disclaiming liability for the output of their GenAI tools for several reasons. First, as mentioned above, the technology is not currently at a point where any GenAI provider can be confident that all output is factually accurate and complies with laws and industry standards. For this reason, it is likely that the provider’s contract with the frontier model provider also contains aggressive disclaimers. Further, the output can be largely dependent upon the training data and prompts provided by the customer. Moreover, potential liability can be heightened if a GenAI customer uses, or relies upon, output in a manner that was not intended by the provider. Lastly, many GenAI features are currently being provided as free value-adds to the customer (this is especially true of the common chatbot use case), and providers generally avoid liability for products, services and features that do not generate revenue.

A quick example illustrates the concern. Assume that a financial services company incorporates a GenAI-powered chatbot into its website and online portals. The chatbot vastly improves the customer experience and helps acquire new customers who visit the website. A customer types the following question into the chatbot: “Is it a taxable event if I withdraw money out of my account?” Of course, the financial services company does not want to be liable if the chatbot provides an inaccurate response to this question and the customer relies on such output as professional advice.

GenAI providers would be well served to include aggressive disclaimers in their contracts to make clear to the customer that all output is provided “as-is.” In addition, GenAI providers should be transparent with customers when they are interacting with a GenAI feature. Courts and regulators will have little sympathy for a company that implies or suggests that a customer is interacting with a human when, in fact, they are not. Companies can be transparent via click-through and pop-up disclaimers that the customer must read and agree to prior to interacting with the GenAI feature.
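
As one illustration of such a technological mechanism, below is a minimal sketch of a server-side gate that withholds chatbot responses until the user has affirmatively accepted an AI disclosure; Flask is used only for illustration, and the endpoint paths, disclosure text, and ask_genai() helper are hypothetical.

```python
# Minimal sketch: the chatbot endpoint refuses to respond until the user has
# accepted an explicit AI disclosure. Flask, the endpoint paths, and the
# ask_genai() helper are illustrative placeholders.
from flask import Flask, jsonify, request, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"  # placeholder

DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "Responses are provided 'as-is' and are not professional advice."
)


@app.post("/chat/acknowledge")
def acknowledge():
    # Record that this user has read and accepted the disclosure.
    session["ai_disclosure_accepted"] = True
    return jsonify(accepted=True, disclosure=DISCLOSURE)


@app.post("/chat/message")
def chat_message():
    # Refuse to generate output until the disclosure has been accepted.
    if not session.get("ai_disclosure_accepted"):
        return jsonify(error="Please accept the AI disclosure first.", disclosure=DISCLOSURE), 403
    user_text = request.get_json(force=True).get("message", "")
    return jsonify(reply=ask_genai(user_text))


def ask_genai(prompt: str) -> str:
    # Placeholder for the provider's actual GenAI integration.
    return "Simulated AI response to: " + prompt
```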

Key takeaway: GenAI providers should: (i) aggressively disclaim liability for GenAI output in their customer contracts; (ii) be transparent when customers are interacting with GenAI (as opposed to humans); and (iii) implement technological mechanisms (click-throughs, pop-ups, etc.) that the customer must read and agree to when accessing or using a GenAI feature.

4. Intellectual Property Related Representations, Warranties, and Indemnities

It is no secret that GenAI is going to pressure-test the current intellectual property regimes in the US and abroad. A full discussion of intellectual property issues arising from GenAI is outside the scope of this article, but a small sample of the issues includes:

  • Is GenAI output protectable by copyright and other intellectual property rights?
  • Is using copyrighted works to train AI models without permission copyright infringement or fair use?
  • Is using a GenAI tool that was trained on copyrighted works without permission infringement?
  • Are these activities direct copyright infringement? Or are they vicarious or contributory copyright infringement?
  • If such activities are infringement, who is liable? The GenAI developer? The GenAI provider deploying the tool? The user?
  • Are all GenAI outputs necessarily unauthorized derivative works of the data used to train the GenAI model?

These GenAI considerations bring new meaning to some standard representations, warranties and indemnities that are routinely found in technology commercial contracts. For example, it is commonplace for technology providers of all types to represent and warrant that their technology does not infringe third-party intellectual property rights. Further, the market generally expects that technology providers will defend and indemnify customers from third-party claims alleging that the provider’s technology infringes third-party intellectual property rights. The scope of these provisions can vary significantly, but many commercial technology contracts contain some variation of them. Depending on their precise wording, these provisions could be interpreted to apply to the output of the GenAI technology, an interpretation that the provider almost certainly did not intend prior to the GenAI wave.

Most GenAI providers, outside of the small group of tech behemoths that provide the frontier models, will want to avoid this liability: its potential scope is very difficult to quantify under the current legal landscape, and it could be costly if the issue were to arise. The risk, right now, is largely a systemic risk for GenAI providers that leverage third-party technology to power GenAI, and such providers are usually in no better position to protect against it than their customers.

GenAI providers should analyze this issue with respect to current contracts and go-forward contracts. With respect to current contracts, GenAI providers may consider proposing an amendment to the current contract before the customer accesses or uses any GenAI features. The amendment could make clear that these representations, warranties, and indemnities do not apply to GenAI output. With respect to go-forward contracts, the task is more straightforward – all contracts should make clear that these risk allocation provisions do not apply to GenAI output.

As noted above, some of the technology companies at the bleeding edge of this technology (e.g., OpenAI, Microsoft, IBM) are now offering their customers some indemnity protection from third-party claims that allege output from their GenAI products infringes third-party intellectual property rights. These companies are able and willing to provide this indemnity protection for a few reasons: (i) deep pockets and robust insurance coverage allow them to cover the liability if it comes to fruition; (ii) they are taking a calculated financial risk that assuaging customer fears over intellectual property litigation is worth the increased adoption of their GenAI products (and associated increased revenues); and (iii) they are in a much better position to determine which data is used to train the model. Virtually all other companies that are integrating with one of these services to provide a GenAI product or feature do not have this same set of circumstances. Accordingly, companies should proceed with caution when covering GenAI output in their non-infringement representations, warranties and indemnities. If the provider is so inclined, it could offer indemnity protection to its customers that is identical in scope to that provided by the frontier model provider and conditioned upon the provider receiving indemnity protection from the frontier model provider.

Key takeaway: Although major technology companies are now providing some indemnity protection against third-party claims that allege GenAI output infringes third-party intellectual property rights, most GenAI providers should review the representations, warranties and indemnities in current and go-forward contracts to ensure such provisions do not cover GenAI output.

5. Ownership of GenAI Output and Improvements to GenAI Models

Aside from infringement issues addressed above, another critical GenAI intellectual property-related issue is ownership of intellectual property rights that cover GenAI output and improvements made to GenAI models as customer training data and prompts are processed by the model.

Whether GenAI output is covered by intellectual property rights is a complicated and fact-specific question, but the answer with respect to most current GenAI use cases is likely “no.” Accordingly, if the customer demands that it “owns” the output and the GenAI provider agrees that the customer can do whatever it pleases with the output, the GenAI provider can include an assignment clause stating that the provider assigns to the customer all intellectual property rights covering the output. However, the provider should make clear that it is not representing or warranting that it does, in fact, have any intellectual property rights to assign in the first place. Of course, if the provider intends to use the output for its own purposes, or if there is a chance the output contains pre-existing intellectual property, it needs to consider whether such an assignment should be included at all.

As a practical matter, confidentiality terms and use restrictions are a better way to allocate GenAI output usage rights than assignment and licensing of intellectual property rights. If the provider desires to limit the customer’s use of GenAI output, it should include such express restrictions in the contract and not rely on ownership of intellectual property rights. Conversely, if the provider intends to use the GenAI output for any purpose, such rights and permissions should be included in the contract.

Another important issue to consider is ownership of the improved GenAI model. Artificial intelligence models “learn” and improve their performance as they process more and more data. Ownership of intellectual property rights covering the improved model and associated usage rights then become a critical consideration. A GenAI provider’s customer contracts should contain language that provides: (i) the GenAI provider owns all intellectual property rights covering the improved model; (ii) the GenAI provider is free to use the improved model for any business purpose; and (iii) the improved model is not included within customer’s confidential information.

Key takeaway: GenAI providers should ensure their contracts: (i) retain GenAI output usage rights, if required; (ii) contain GenAI output use restrictions imposed upon their customers, if required; and (iii) grant the GenAI provider ownership of all intellectual property rights covering the improved GenAI model.

6. Compliance with Law

Just about every commercial contract contains some sort of covenant, representation or warranty addressing compliance with applicable laws. It may be difficult for providers to remove this concept entirely from their customer contracts, but this is yet another provision that takes on new meaning in the GenAI context.

GenAI, and artificial intelligence in general, is not yet regulated in a broad or general sense. However, it is only a matter of time before new artificial intelligence-specific laws and regulations are enacted at every level of US and foreign government. Also, current laws and regulations do apply to various aspects of artificial intelligence. A small sample of current laws and regulations that impact artificial intelligence includes: (i) President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence; (ii) data privacy laws, including the GDPR, the US state privacy laws and US sector-specific laws such as HIPAA; (iii) the FTC Act; and (iv) the Computer Fraud and Abuse Act. This list does not include the many proposed laws and regulations currently making their way through the various government entities.

Accordingly, GenAI providers will need to keep a close eye on all legal and regulatory developments to ensure that they maintain compliance with such laws and regulations and do not expose themselves to breach of contract claims from their customers. To help mitigate the contractual risks, GenAI providers may consider qualifying their contractual compliance obligations to all laws and regulations that are in effect at the time the contract is entered into. Alternatively, providers may consider retaining a termination right that permits them to terminate the applicable contract if there is a change in law that materially impacts the provider’s ability to provide the GenAI solution or the cost associated with doing so.

Key takeaway: GenAI providers should carefully monitor current and future laws and regulations that apply to the development and deployment of GenAI solutions and consider retaining a right to terminate their customer contracts if there is a change in law that impacts their ability to provide the GenAI solution or the cost associated with doing so.


Above is a non-exhaustive list of issues that providers of GenAI solutions should consider prior to contracting for, and providing, GenAI solutions. Providers should regularly monitor these issues because they are changing by the day, given the rapid pace of innovation and the new litigation working its way through the court systems. If you find these considerations relevant to your business and need guidance in navigating them, don't hesitate to contact one of our dedicated attorneys in our Artificial Intelligence practice.
