AI-Specific Issues In SaaS Agreements
Artificial intelligence has fundamentally reshaped the SaaS landscape, with many vendors now incorporating AI or machine learning into their platforms for predictive analytics, automation, or content generation. While this offers customers powerful new capabilities, it also introduces unique legal risks and contractual challenges that must be explicitly addressed in SaaS agreements.
Data Use and Model Training
AI systems require large volumes of data to improve their performance. SaaS vendors often seek to use customer data – including content, metadata, and behavioral data – to train or fine-tune their models. This practice creates three major legal questions:
- Consent and Ownership: Does the customer grant the vendor the right to use its data for training purposes? If so, is the data anonymized or aggregated? Customers often assume their data will not be used to benefit competitors or other customers.
- Confidentiality: Can proprietary data, trade secrets, or sensitive personal information inadvertently become part of a training dataset, where it might resurface in AI outputs for other clients?
- Compliance: Under privacy laws like the GDPR, AI training may constitute a new purpose for data processing. Vendors must ensure they have a lawful basis, and customers must be comfortable with this processing.
Practice Tip: For customers, the safest route is to include a clause prohibiting the use of customer data for AI training unless explicitly authorized. For vendors, transparency is key – clearly disclosing when and how customer data will be used for improving AI models.
AI-Generated Output
The core question for AI-generated output is who owns the intellectual property in what the AI produces. Under U.S. copyright law, works that are purely machine-generated may not qualify for copyright protection. This raises concerns about exclusivity and originality: customers often assume they “own” whatever the AI produces, but the legal landscape is murky. A well-drafted SaaS agreement should:
- Define the ownership and permitted use of AI outputs.
- Include a warranty that outputs do not infringe on third-party IP.
- Clarify whether outputs can be reused or re-generated for other customers.
Illustrative Example: A client using an AI-powered design tool sought advice on its agreement. The vendor’s terms stated that “all outputs are provided as-is and may be similar to outputs generated for other customers”. The client, a fashion brand, was understandably concerned about receiving designs that could also be provided to competitors. To address this, the client negotiated a clause prohibiting the vendor from reusing certain customized outputs and requiring a representation that the AI would not knowingly generate designs substantially similar to third-party works.
Hallucinations and Accuracy
Hallucinations and accuracy are another critical issue. Generative AI can produce incorrect, biased, or even defamatory content. Who is responsible if an AI-powered SaaS platform provides false information that a customer relies on? Most vendors will attempt to disclaim responsibility, stating that outputs are for “informational purposes only”. As counsel, you should:
- Push for vendor representations that the AI was trained on data sources that are lawful and free of malicious or infringing content.
- Negotiate for service-level commitments or human review processes for critical use cases.
- Consider requiring indemnification for claims arising from defective outputs, especially if they are tied to vendor-controlled models.
Bias and Fairness
Bias and fairness are also major concerns, particularly for SaaS solutions used in HR, finance, or public decision-making. Algorithms trained on biased data can produce discriminatory results, exposing both the vendor and customer to liability. Contracts should require the vendor to:
- Implement testing for bias and fairness.
- Provide transparency into the factors driving AI decisions (explainability).
- Comply with emerging AI regulations such as the EU AI Act or state-level AI laws in the U.S.
Regulatory Compliance
With frameworks like the EU AI Act categorizing certain AI systems as high-risk, SaaS agreements must address:
- The vendor’s obligation to comply with applicable AI regulations.
- The customer’s role in ensuring lawful use, especially if the customer configures or fine-tunes the AI.
- Responsibilities for recordkeeping, human oversight, and impact assessments.
Insurance and Liability Caps
Vendors often seek to exclude or limit liability for AI outputs. Customers should consider whether carveouts are needed for:
- IP infringement from AI-generated content.
- Data breaches caused by AI misconfiguration.
- Regulatory penalties stemming from algorithmic bias.
Practice Tip: Given how quickly AI regulations are evolving, it is important to build flexibility into the agreement. Consider including a “Regulatory Change Clause,” requiring the vendor to adjust its AI features or compliance practices as laws develop, without imposing excessive costs on the customer.
Summary
Addressing AI in SaaS agreements requires a multi-layered approach. This includes:
- Clearly defining data rights and training permissions.
- Addressing ownership and reliability of AI outputs.
- Allocating risk for errors, bias, or infringement.
- Ensuring compliance with emerging AI laws and ethical standards.
AI may feel like a “black box,” but as lawyers, our role is to bring clarity and accountability into the contract. By proactively addressing these issues, we can protect our clients while still enabling them to benefit from the transformative potential of AI-powered SaaS.