AI Contracting: Practical Legal Guidance for Growing Businesses
By now, just about every company is implementing artificial intelligence in its business in some form. Whether your company is purchasing AI tools to enhance operations or developing AI-enabled products for your customers, understanding the contractual landscape is essential to protecting your business. This article provides high-level, practical guidance on navigating AI contracts from both customer and vendor perspectives.
Start with Due Diligence
Before entering any AI contract, conducting thorough due diligence on both the vendor and the AI tool is critical, even for trial use. The results of this investigation may shape the terms you require or cause you to walk away from an unacceptable deal. Growing companies should consider the following key questions early in the process:
- What is the AI tool, and how will it be used? “Artificial intelligence” is famously hard to define. Understanding the specific technology and use case is foundational. AI risk varies based on whether the tool is used for back-office functions, customer-facing applications, hiring decisions, or automated decision-making. Certain technologies and uses, such as agentic AI, trigger heightened regulatory scrutiny and create greater legal exposure. Ensure the contract defines what the AI system is and what the AI system does, including use cases, functionality, and limitations. Be wary of ambiguous marketing descriptions like "predictive" or "autonomous," which can inflate expectations and complicate performance disputes down the road.
- Who is the vendor, and how stable is its business? Many vendors in this market are startups built on third-party platforms like OpenAI; the phrase “OpenAI wrapper” is commonly used to describe them. Basic due diligence, including reviewing the vendor's capitalization, investor backing, and financial condition, is prudent. Verify whether the provider uses third-party models, APIs, or datasets, as this affects risk allocation and licensing rights. If the vendor folds or faces significant litigation, your company could be left exposed.
- What data will be used? You need clarity on what training data the vendor used to build the model, whether that includes scraped content, and how your company's inputs and outputs will be handled. This affects both intellectual property and data privacy considerations.
Intellectual Property: A Central Battleground
Intellectual property provisions are often the most negotiated terms in AI agreements, and for good reason.
For Customers: Protect Your Inputs and Secure Rights to Outputs
- Clarify ownership of inputs. Your company should retain ownership of all data, prompts, and content you submit to an AI tool. Equally important, you need contractual restrictions on how the vendor can use this data. Do not rely on standard NDAs or confidentiality terms alone. Many vendors seek broad licenses to use customer inputs to train and improve their models for the benefit of all their users, not just you. Ensure the rights and licenses granted to the vendor are appropriately narrow.
- Negotiate rights to outputs. The legal status of AI-generated content is unsettled. Some vendors will agree that you own outputs; others will only disclaim their own rights without transferring ownership. At minimum, secure a broad license to use outputs for all intended purposes, and understand whether the vendor retains any rights to use your outputs for its own purposes.
- Consider all types of data. The standard “input data” and “output data” definitions are oversimplified for many AI technologies and use cases. Most AI arrangements involve numerous types of data. Additional data categories may include training data, derived data, usage data, model improvements, logs, and audit trails. Ensure all types of data are defined and all rights covering such data are allocated.
For Vendors: Protect Core Technology and Limit Exposure
- Reserve rights to independent assets. Outputs may incorporate elements from your training data or models. Carve out pre-existing and independently developed materials from any customer ownership provisions.
- Obtain necessary licenses for improvement. If you need customer data to improve your models, negotiate clear, broad license rights upfront and ensure clarity regarding your ownership of all model improvements. Ambiguity here creates disputes later. Consider fee increases if a customer desires to “opt out” of having its data used for model training.
- Consider output disclaimers. Given the legal uncertainty around AI-generated content, consider disclaiming representations regarding IP rights in outputs: "Provider offers no representation or warranty, express or implied, related to intellectual property or other rights in outputs, and customer uses outputs at its own risk with regard to all such rights." This can be especially important if the customer is going to rely on the AI output for significant decisions.
Data Privacy and Security: Higher Stakes with AI
AI systems often process large volumes of data, including sensitive information and personal data, which heightens data breach and privacy risks.
- Determine whether personal information is involved. Evaluate all data types, including inputs, outputs, and training data, to assess whether personal information is implicated. This triggers compliance requirements under privacy laws and may require execution of data protection agreements, business associate agreements (for Protected Health Information subject to HIPAA), or approved data transfer mechanisms (for cross-border transfers).
- Establish clear data handling terms. Address where data is stored, who has access, retention and deletion policies, and the vendor's security certifications. Request evidence of independent security audits or industry-recognized certifications where available. If the provider uses customer data to train or improve models, confirm opt-in requirements, anonymization standards, and your right to revoke such use.
- Plan for security incidents. Contracts should include robust notification requirements and indemnification provisions for data breaches. Given the threat landscape, breach notification and defense/indemnity clauses are essential. Importantly, mandate notification for security incidents involving model or data integrity, not just traditional data breaches. Also require the provider to implement technical and organizational safeguards to prevent model leakage, where proprietary customer data could influence outputs provided to other users.
Warranties, Representations, and Disclaimers
AI contracts require careful attention to what promises each party is making and what protections each party is seeking.
For Customers
- Push for warranties that the vendor has all necessary rights and licenses to use training data and third-party technology, that the AI solution will comply with specifications and applicable laws, and that the vendor's use of your data complies with agreed restrictions.
- Be skeptical of broad "as-is" disclaimers, particularly regarding output accuracy. If the vendor makes claims in marketing materials about the AI's capabilities, attempt to include measurable performance metrics in the contract with termination rights if targets are not met. Where AI outputs feed into high-impact decisions, such as employment, healthcare, or financial determinations, consider requiring the provider to conduct periodic bias audits, implement model drift monitoring, and provide transparency into how outputs are generated.
For Vendors
- Include appropriate disclaimers regarding output accuracy. AI can produce inaccurate or biased results due to issues with training data or customer inputs that are beyond vendor control.
- Consider specific disclaimers for third-party IP in outputs: "Provider does not represent or warrant that outputs will be free of content that infringes third-party rights, including without limitation privacy and intellectual property rights."
Indemnification: Who Bears the Risk?
The indemnification provisions in AI agreements are among the most consequential and negotiated.
- Customers should seek broad IP indemnification. Customers should push for indemnities covering claims that the AI tool, its outputs, or training data infringe third-party intellectual property rights. Many vendors will provide infringement indemnification for the service itself but resist extending coverage to training data and outputs. Press for coverage; this is where significant exposure lies. Critically, ensure the indemnification language extends to "model outputs and predictions," not just "software." Contracts drafted with traditional software or SaaS language in mind may leave AI-specific risks uncovered.
- Scrutinize exclusions carefully. Vendors frequently include exclusions that can gut indemnification obligations. For example, vendors may include terms excluding claims arising from modifications, combinations with third-party products, or commercial use of outputs. Because AI inherently involves combining customer inputs with the vendor's technology, broad exclusions can eliminate protection entirely.
- Vendors should also require indemnification. Vendors should push for indemnities covering claims arising from customer inputs, misuse of the AI tool, and customer-provided training data.
Limitation of Liability: Caps and Carveouts
Liability provisions consist of two components: exclusion of consequential damages and a cap on direct damages. Both require careful attention in AI agreements.
- Negotiate meaningful caps. The direct damages cap should be sufficient to disincentivize vendor breach. Consider the financial impact if the vendor stops providing services or breaches the agreement.
- Carve out critical obligations. Certain obligations should be excluded from caps or subject to higher "super caps," particularly intellectual property indemnification, confidentiality and data security breaches, gross negligence, and willful misconduct. Without carveouts, your indemnification rights may be meaningless.
Plan for Model Changes and Exit Rights
AI vendors frequently seek unilateral amendment rights, allowing them to change contractual terms at any time with minimal notice. This practice is more common with AI tools than with traditional software. Beyond contract amendments, AI services present unique continuity risks: vendors may deprecate models, make significant algorithm changes, or remove features with little warning.
- Negotiate static terms where possible. If the vendor insists on amendment rights, seek a meaningful notice period before changes take effect, the right to terminate and receive a refund if you object to material changes, and an order of precedence placing any linked or referenced terms at the bottom.
- Require advance notice of model changes. Push for contractual commitments that the vendor will provide reasonable notice before deprecating models, making significant algorithm changes, or removing features that your business relies upon.
- Secure robust exit rights. Confirm continuity protections, including transition assistance obligations if you need to offboard or migrate to another provider. Ensure you can retrieve all data and outputs in usable formats upon termination. AI systems can create data lock-in that makes switching providers difficult without proper exit planning.
Regulatory Compliance: A Moving Target
The regulatory landscape for AI is evolving. State laws like the Colorado AI Act impose specific obligations on deployers of high-risk AI systems, including impact assessments, disclosures, and appeal processes. International frameworks like the EU AI Act create further layers of complexity.
- Laws apply regardless of your physical location. Many of these laws have “extraterritorial” reach, meaning your company need not be physically located in the applicable jurisdiction to be subject to its laws.
- Build compliance requirements into contracts. Require vendors to comply with all applicable current and future laws. Consider allocating specific compliance obligations, particularly where you may depend on vendor cooperation to meet disclosure or impact assessment requirements.
- Address law changes. Include provisions for amendments to accommodate new legal requirements without renegotiating the entire agreement.
Practical Takeaways
Growing companies entering AI contracts should prioritize the following actions:
- Conduct thorough due diligence on vendors and AI tools before any use, including trial deployments.
- Allocate IP rights to inputs, outputs, and customizations, with appropriate restrictions on vendor use.
- Define all relevant data categories in the agreement. Standard “input” and “output” definitions are often oversimplified and fail to capture all relevant data types.
- Secure robust indemnification for IP infringement claims, with careful attention to exclusions.
- Address data privacy and security, including breach notification and incident response.
- Negotiate meaningful liability provisions with appropriate carveouts for high-risk obligations.
- Plan for regulatory compliance with current and future AI laws.
- Establish internal AI governance with clear policies, training, and oversight mechanisms.
The AI contracting landscape presents significant opportunities and risks. Taking a thoughtful, proactive approach to these agreements positions your company to capture the benefits of AI while managing legal exposure. If you have questions about AI contracting or need assistance reviewing or negotiating an AI agreement, contact a member of our Commercial & Technology Agreements team to discuss how we can help.
This content is made available for educational purposes only and to give you general information and a general understanding of the law, not to provide specific legal advice. By using this content, you understand there is no attorney-client relationship between you and the publisher. The content should not be used as a substitute for competent legal advice from a licensed professional attorney in your state.