AI Liability Frameworks for Agency Contracts: Protecting Your Business and Your Clients
A boutique AI agency built a content moderation system for a social media startup. The system worked well during testing, but after deployment, it began systematically flagging posts by users writing in African American Vernacular English as "toxic." The startup faced a class action lawsuit alleging racial discrimination. The startup's lawyers turned to the agency's contract looking for indemnification. The contract had a standard limitation of liability clause capped at the project fee of $85,000, but said nothing about AI-specific liability, nothing about bias claims, and nothing about the allocation of responsibility between the agency that built the system and the client that deployed it. The agency's insurance didn't cover algorithmic discrimination claims. Both the agency and the startup were exposed.
AI liability is one of the most consequential issues facing agencies today, and most agencies are woefully unprepared for it. Traditional software development contracts were not designed for systems that make autonomous decisions, learn from data, and can cause harm in ways that are difficult to predict or detect. If your agency is still using boilerplate contract language from the pre-AI era, you're operating without a safety net.
This guide covers the liability landscape for AI agencies and provides a practical framework for structuring contracts that allocate risk fairly and protect your business.
The AI Liability Landscape
AI liability is fundamentally different from traditional software liability because AI systems behave in ways that their creators did not explicitly program. A traditional software bug can be traced to a specific line of code. An AI failure often emerges from the interaction of training data, model architecture, deployment context, and real-world conditions that no one fully anticipated.
This creates a liability gap. Existing legal frameworks (product liability, professional negligence, contract law) don't map cleanly onto AI systems. Courts and regulators are actively working to fill this gap, but in the meantime, agencies need to protect themselves contractually.
The EU's proposed AI Liability Directive outlined a framework for civil liability claims related to AI systems, though its final form and status remain in flux. Under the proposal, a presumption of causation arises when an AI system causes harm and the deployer has failed to comply with the AI Act's requirements. In practice, this would mean that if your client deploys an AI system that doesn't meet regulatory requirements, and someone is harmed, the burden shifts to the deployer to rebut the presumption that the AI caused the harm.
US liability law is developing through case law rather than comprehensive legislation. Courts are applying existing tort, product liability, and consumer protection frameworks to AI systems, but the outcomes are inconsistent. Some courts treat AI outputs as products subject to strict liability; others treat them as services subject to negligence standards.
The key question for agencies is: When an AI system you built causes harm, are you liable as the creator, is the client liable as the deployer, or is liability shared? The answer depends on the facts of each case, the jurisdiction, and, critically, what your contract says.
Liability Risk Categories for AI Agencies
Understanding the types of liability you face helps you structure contracts to address each one.
Accuracy and Performance Liability
This covers situations where the AI system doesn't perform as promised or expected.
- The model's accuracy degrades after deployment and causes incorrect decisions
- The model performs poorly for specific populations or use cases that weren't adequately tested
- The model produces outputs that the client relies on to their detriment
Who's typically liable: This depends on whether the performance issue was foreseeable, whether the agency tested adequately, and whether the client maintained the system as instructed. Contract terms should clarify performance expectations, testing obligations, and maintenance responsibilities.
Bias and Discrimination Liability
This covers situations where the AI system produces discriminatory outcomes.
- The model treats protected groups differently, causing disparate impact
- The model uses proxies for protected characteristics without the client's knowledge
- The model amplifies existing biases in the training data
Who's typically liable: Emerging case law and regulations tend to hold the deployer (the client) primarily responsible for discrimination because they control the deployment context and have obligations to affected individuals. However, the developer (the agency) can be liable if they failed to conduct adequate bias testing or if they knew about bias risks and didn't disclose them. Agencies should contractually require clients to conduct their own fairness assessments for their specific deployment context.
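To make a contractual bias-testing obligation concrete, here is a minimal sketch of one common check, the four-fifths (disparate impact) rule. The group labels, outcome data, and the 0.8 threshold are illustrative assumptions; a real fairness assessment would cover far more than a single ratio.

```python
# Illustrative disparate-impact check (four-fifths rule).
# Group labels, data, and the 0.8 threshold are assumptions for
# this sketch, not a substitute for a full fairness assessment.

def selection_rate(outcomes):
    """Fraction of favorable (1) model decisions in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# 1 = favorable model decision, 0 = unfavorable
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
print("passes four-fifths rule:", ratio >= 0.8)
```

A check like this is cheap to run at every delivery milestone, which is exactly why contracts can reasonably require it: the agency documents that the test was performed, and the client documents that they saw the result.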
Data Liability
This covers situations involving the data used to build and operate the AI system.
- Training data was collected without adequate consent
- Data was used beyond its authorized purpose
- Personal data was exposed through model inversion or other attacks
- Data sovereignty requirements were violated
Who's typically liable: Data liability is typically shared. The client is responsible for ensuring the data they provide was lawfully collected. The agency is responsible for handling the data securely during development and for not using it beyond the project scope. Both parties need contractual clarity about data governance responsibilities.
Intellectual Property Liability
This covers situations where the AI system creates or infringes intellectual property.
- The model was trained on copyrighted material without authorization
- The model generates content that infringes third-party copyrights
- Client trade secrets are encoded in model weights that are used for other projects
- The agency claims ownership of model weights that the client believes they own
Who's typically liable: IP liability depends heavily on the contract terms. Agencies should clarify who owns each component (the model architecture, the trained weights, the training data, and the generated outputs) and allocate IP infringement liability accordingly.
Security Liability
This covers situations where the AI system is compromised or causes security incidents.
- Adversarial attacks cause the model to produce harmful outputs
- The model is used as an attack vector to access client systems
- Model weights or training data are exfiltrated
Who's typically liable: Security liability typically follows responsibility. If the agency is responsible for the model's security architecture, they bear liability for security failures in that architecture. If the client is responsible for the deployment environment, they bear liability for security failures in that environment. The contract should clearly delineate security responsibilities.
Consequential Harm Liability
This covers situations where the AI system's outputs cause downstream harm to third parties.
- An AI medical diagnostic system misses a condition, leading to patient harm
- An AI lending system denies credit inappropriately, causing financial harm
- An AI content moderation system fails to catch harmful content, leading to real-world consequences
Who's typically liable: Consequential harm is the most contentious liability area. Traditional contract law allows parties to limit consequential damages, but courts may not enforce these limitations when the harm is severe or when the limitation is unconscionable. Agencies should understand that liability caps may not protect them from all consequential harm claims.
Structuring Your Liability Framework
Element 1: Clear Scope of Responsibility
The contract should explicitly define what the agency is responsible for and what the client is responsible for. Ambiguity is your enemy.
Agency responsibilities typically include:
- Designing and implementing the model according to agreed specifications
- Conducting testing and validation as specified in the statement of work
- Documenting the model's capabilities, limitations, and known risks
- Delivering the model in a state that meets agreed acceptance criteria
- Providing training on the model's proper use and maintenance
Client responsibilities typically include:
- Providing training data that was lawfully collected and is fit for purpose
- Deploying the model in accordance with the agency's documentation and recommendations
- Maintaining the model in production, including monitoring and retraining
- Ensuring regulatory compliance in their specific industry and jurisdiction
- Communicating the model's limitations to end users and affected individuals
Element 2: Performance Warranties and Disclaimers
Be precise about what you're warranting and what you're not.
Appropriate warranties for AI projects:
- The model will meet the performance metrics specified in the acceptance criteria when evaluated on the agreed test dataset
- The agency will conduct the testing specified in the statement of work, including any fairness testing
- The agency will document known limitations and risks identified during development
- The model will be delivered free of known security vulnerabilities at the time of delivery
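A warranty tied to "performance metrics specified in the acceptance criteria" is only enforceable if those criteria are checkable. As a sketch, an acceptance gate might compare measured metrics against contractual thresholds; the metric names and values below are hypothetical examples, not recommendations.

```python
# Illustrative acceptance gate: compare measured metrics against
# contractually agreed thresholds. Metric names and values are
# hypothetical examples, not recommendations.

AGREED_THRESHOLDS = {
    "accuracy": 0.92,                # minimum on the agreed test set
    "false_positive_rate": 0.05,     # maximum allowed
    "disparate_impact_ratio": 0.80,  # minimum across tested groups
}

# Metrics where a *lower* measured value is better.
LOWER_IS_BETTER = {"false_positive_rate"}

def acceptance_report(measured):
    """Return {metric: passed} for every agreed threshold."""
    report = {}
    for metric, threshold in AGREED_THRESHOLDS.items():
        value = measured[metric]
        if metric in LOWER_IS_BETTER:
            report[metric] = value <= threshold
        else:
            report[metric] = value >= threshold
    return report

measured = {"accuracy": 0.94, "false_positive_rate": 0.07,
            "disparate_impact_ratio": 0.85}
report = acceptance_report(measured)
for metric, passed in report.items():
    print(f"{metric}: {'PASS' if passed else 'FAIL'}")
print("model accepted:", all(report.values()))
```

The point of the sketch is contractual, not technical: when the acceptance criteria can be evaluated mechanically on the agreed test dataset, there is far less room for dispute about whether the warranty was met.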
Appropriate disclaimers:
- The model's performance may degrade over time as data distributions change
- The model may produce incorrect or biased outputs in situations not represented in the training data
- The agency does not warrant that the model will perform equivalently across all populations or use cases not specifically tested
- The agency does not warrant continued performance after the client modifies the model, its configuration, or its deployment environment
Element 3: Liability Allocation
Structure liability allocation to reflect actual control and responsibility.
Indemnification clauses should be mutual and specific. The agency should indemnify the client for claims arising from the agency's failure to meet its documented responsibilities. The client should indemnify the agency for claims arising from the client's deployment decisions, data quality, and regulatory compliance.
Liability caps are standard but should be calibrated to the risk profile of the project. A $50,000 project that involves low-risk analytics might have a liability cap at 1x the project fee. A $500,000 project that involves high-risk automated decision-making might require a higher cap or separate insurance coverage.
Carve-outs from liability caps are important for certain risk categories. Consider excluding from the cap: IP infringement, data breaches caused by gross negligence, and willful misconduct. Some jurisdictions don't allow caps on certain types of liability (like personal injury), so consult local counsel.
Insurance requirements should be specified in the contract. Both parties should maintain appropriate insurance coverage, including professional liability insurance, cyber liability insurance, and potentially AI-specific liability insurance (which is becoming available from several carriers).
Element 4: Disclosure and Transparency Obligations
Many AI liability claims could be avoided if the client fully understood the model's limitations. Build disclosure obligations into your contracts.
- Pre-deployment disclosure: The agency must deliver a model card or equivalent documentation that discloses the model's intended use, known limitations, fairness assessment results, and deployment recommendations
- Risk disclosure: The agency must communicate any material risks identified during development, including fairness concerns, performance limitations, and security vulnerabilities
- Ongoing disclosure: If the agency becomes aware of issues affecting deployed models (e.g., vulnerabilities in underlying libraries), they must notify the client promptly
These disclosure obligations protect the agency by creating a documented record that the client was informed of relevant risks. If the client deploys the model despite known limitations, the agency's liability exposure is reduced.
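One lightweight way to make the pre-deployment disclosure deliverable concrete is a structured model card. The fields below are a hypothetical minimum for illustration; real deliverables should follow whatever documentation standard the contract specifies.

```python
# Illustrative minimal model card as structured data. Field names and
# contents are hypothetical; the contract should fix the actual schema.
import json

model_card = {
    "model_name": "content-risk-classifier",
    "version": "1.0.0",
    "intended_use": "Flag posts for human moderator review; "
                    "not for fully automated removal decisions.",
    "known_limitations": [
        "Not evaluated on dialects outside the test corpus",
        "Performance degrades on posts shorter than 10 tokens",
    ],
    "fairness_assessment": {
        "groups_tested": ["group_a", "group_b"],
        "disparate_impact_ratio": 0.85,
    },
    "deployment_recommendations": [
        "Monitor false-positive rate weekly",
        "Retrain when input distribution shifts materially",
    ],
}

print(json.dumps(model_card, indent=2))
```

A machine-readable card like this doubles as evidence: a versioned, timestamped file showing exactly what the client was told before they accepted the model.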
Element 5: Acceptance and Handoff Procedures
A clear acceptance process creates a contractual boundary between development liability and deployment liability.
- Define specific acceptance criteria including performance metrics, fairness metrics, documentation deliverables, and security requirements
- Require the client to formally accept the model before deployment, acknowledging that they have reviewed the documentation and understand the limitations
- Establish a warranty period during which the agency will address defects discovered after acceptance
- After the warranty period, transition responsibility for the model to the client (unless an ongoing support agreement is in place)
Element 6: Change Management and Retraining
Models change after delivery, and those changes affect liability.
- Specify who is authorized to retrain or modify the model after delivery
- If the client retrains the model with new data, the agency should not be liable for outcomes resulting from the retrained model
- If the agency provides ongoing retraining services, define the performance and fairness standards that must be met after each retraining cycle
- Document the chain of modifications so that liability for any given model version can be traced to the party that created it
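The modification chain can be kept auditable with something as simple as a hash-stamped log. This sketch (standard library only, with hypothetical field names) records who changed which model artifact and when, so any deployed version can be traced to the responsible party.

```python
# Illustrative modification log: each entry ties a model artifact's
# SHA-256 fingerprint to the party responsible for that version.
# Fields and workflow are a sketch, not a prescribed format.
import hashlib
from datetime import datetime, timezone

def fingerprint(artifact_bytes):
    """Content hash identifying an exact model version."""
    return hashlib.sha256(artifact_bytes).hexdigest()

modification_log = []

def record_change(artifact_bytes, changed_by, description):
    """Append an auditable entry for a new model version."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "changed_by": changed_by,  # e.g. "agency" or "client"
        "description": description,
        "artifact_sha256": fingerprint(artifact_bytes),
    }
    modification_log.append(entry)
    return entry

# Delivery by the agency, then a client-side retrain.
record_change(b"model-weights-v1", "agency", "Initial delivery")
record_change(b"model-weights-v2", "client", "Retrained on Q3 data")

for entry in modification_log:
    print(entry["changed_by"], entry["artifact_sha256"][:12],
          "-", entry["description"])
```

Because the fingerprint is derived from the artifact itself, a disputed production model can be hashed and matched against the log to establish which party produced the version that caused the harm.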
Practical Contract Provisions
Here are specific provisions to include or update in your agency contracts.
AI-specific definitions. Define terms like "model," "training data," "inference," "model drift," and "bias" in your contracts. Legal disputes often hinge on the meaning of technical terms, and standard legal dictionaries don't cover AI terminology.
Data governance schedule. Include a schedule that specifies the data the agency will receive, how it will be stored and processed, who has access, and how it will be returned or destroyed at the end of the engagement.
Testing and validation schedule. Include a schedule that specifies the tests the agency will conduct, the metrics that will be measured, and the acceptance thresholds for each metric.
Model documentation requirements. Specify the documentation the agency will deliver, including model cards, risk assessments, fairness assessments, and deployment guides.
Regulatory compliance allocation. Specify which party is responsible for compliance with which regulations. Don't leave this ambiguous.
Dispute resolution. Include provisions for resolving disputes about model performance, fairness, or liability. Consider requiring mediation or expert determination before litigation, given the technical complexity of AI disputes.
Building Liability Awareness Across Your Agency
Liability management is not just a legal function. It requires awareness across your entire team.
- Train your delivery team on the liability implications of their technical decisions. The engineer who decides not to test for age-based bias is creating a liability risk. The data scientist who doesn't document a known limitation is creating a disclosure risk.
- Include liability considerations in project reviews. At key milestones, ask: "What liability risks exist? Are they documented? Are they allocated in the contract?"
- Build a lessons-learned database. When liability issues arise, document them and share the lessons across the agency. This institutional knowledge is invaluable for improving your contracts and your practices.
- Engage legal counsel proactively. Don't wait for a dispute to engage a lawyer. Have your contract templates reviewed by counsel who understands AI liability, and consult them when you encounter unusual risk profiles.
Your Next Steps
This week: Pull out your current contract template and evaluate it against the framework described above. How many of the six elements are adequately addressed?
This month: Engage legal counsel to update your contract template with AI-specific liability provisions. Focus on the areas with the biggest gaps.
This quarter: Train your team on liability awareness. Conduct a workshop that walks through real-world liability scenarios and discusses how your contract provisions would apply.
AI liability law is evolving rapidly, and the contracts you sign today may be interpreted under legal frameworks that don't yet exist. The best protection is a combination of clear contractual terms, thorough documentation, responsible development practices, and appropriate insurance. Build all four pillars, and your agency will be positioned to weather the inevitable liability storms ahead.