The landscape of AI vendor liability is undergoing a fundamental shift, putting businesses that deploy AI systems in an uncomfortable position. Federal courts are pioneering legal theories that hold AI vendors directly accountable for discriminatory outcomes, while vendor contracts grow more aggressive in shifting liability to customers. The result is a “liability squeeze” that leaves businesses responsible for AI failures they cannot fully audit, control, or understand.
The Mobley Precedent: Vendors as Legal Agents
The Mobley case fundamentally altered AI liability frameworks. In July 2024, a federal judge allowed the discrimination lawsuit to proceed against the vendor as an “agent” of the companies using its automated screening tools. This marked the first time a federal court applied agency theory to hold an AI vendor directly liable for discriminatory hiring decisions.
The legal reasoning is both straightforward and profound in its implications. When AI systems perform functions traditionally handled by employees — such as screening job applicants — the vendor has been “delegated responsibility” for that function. Under this theory, the vendor wasn’t merely providing software; it was acting as the employer’s agent in making hiring decisions.
The case achieved nationwide class action certification, covering all applicants over the age of 40 rejected by the vendor’s AI screening system. The named plaintiff’s experience illustrates the scale at which AI discrimination can operate: he applied to over 100 jobs through the system and was rejected within minutes each time. Unlike individual human bias, a single biased algorithm can multiply discrimination across hundreds of employers and thousands of applicants.
Contract Risk-Shifting Acceleration
While courts expand vendor liability, the contracting landscape tells a different story.
Market analysis from legal tech platforms reveals systematic risk-shifting patterns in vendor contracts. A recent study found that 88% of AI vendors impose caps on their own liability, often limiting damages to monthly subscription fees. In addition, only 17% provide warranties for regulatory compliance, a significant departure from standard SaaS practices. And broad indemnification clauses routinely require customers to hold vendors harmless for discriminatory outcomes and other harms.
This creates dynamics where vendors develop and deploy AI systems knowing legal responsibility will ultimately rest with customers. Businesses using biased algorithms may find themselves sued for discrimination while discovering their vendor contracts prevent recourse for underlying defects.
The Practical Impact
Consider a mid-sized retailer using an AI-powered applicant tracking system. Under the Mobley precedent, both the retailer and the AI vendor could face discrimination claims. However, the vendor’s contract likely contains:

- a liability cap limiting damages to the monthly subscription fees;
- no warranty of regulatory compliance; and
- an indemnification clause requiring the retailer to hold the vendor harmless for discriminatory outcomes.