AI Liability Gradient

What is the AI Liability Gradient?

TL;DR

The AI Liability Gradient is an analytical framework introduced by Richard Ewing in Built In that maps the relationship between AI agent autonomy and organizational liability.

As AI agents become more autonomous, liability exposure increases non-linearly: each zone removes a human checkpoint, so the organization absorbs responsibility for more of the system's behavior.

The gradient has four zones:

Zone 1: Assistive AI (low autonomy, low liability) — AI suggests, humans decide and act. Liability is minimal because humans maintain full control. Example: code completion, spell check.

Zone 2: Augmentive AI (moderate autonomy, moderate liability) — AI generates, humans review. Liability exists if human review is inadequate. Example: AI-generated code deployed after review, AI-written content published after editing.

Zone 3: Autonomous AI (high autonomy, high liability) — AI decides and acts within constraints. Liability shifts to the organization for the quality of constraints. Example: automated trading systems, AI customer service.

Zone 4: Agentic AI (full autonomy, extreme liability) — AI plans, decides, and acts independently. Liability peaks here because the organization is responsible for every action the agent takes. Example: AI agents making purchase decisions, deploying code, or communicating with customers.

The key insight: liability doesn't scale linearly with autonomy; it grows much faster. Moving from Zone 2 to Zone 3 doubles autonomy but quadruples potential liability, as the sketch below illustrates.
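To make the scaling concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that autonomy doubles with each zone and that liability grows with the square of autonomy; the zone names come from the gradient, but the numbers and the liability_multiplier function are hypothetical, not part of Ewing's framework.

    # Hypothetical illustration of the AI Liability Gradient's scaling claim.
    # Assumption: autonomy doubles per zone and liability grows with the
    # square of autonomy ("double autonomy, quadruple liability").

    ZONES = {
        1: ("Assistive", 1),    # (zone name, hypothetical autonomy units)
        2: ("Augmentive", 2),
        3: ("Autonomous", 4),
        4: ("Agentic", 8),
    }

    def liability_multiplier(autonomy: int) -> int:
        # Hypothetical model: liability grows with autonomy squared.
        return autonomy ** 2

    for zone, (name, autonomy) in ZONES.items():
        print(f"Zone {zone} ({name}): autonomy x{autonomy}, "
              f"liability x{liability_multiplier(autonomy)}")
    # Zone 2 -> Zone 3: autonomy 2 -> 4 (doubles), liability 4 -> 16 (quadruples)

Under these toy assumptions, every zone transition doubles autonomy and quadruples the liability multiplier; the point is the shape of the curve, not the specific numbers.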

Why It Matters

The AI Liability Gradient provides a framework for boards and legal teams to assess the risk of AI deployments. Most organizations are deploying Zone 3-4 agents without Zone 3-4 governance.

Frequently Asked Questions

What is the AI Liability Gradient?

A framework by Richard Ewing showing that organizational liability grows faster than linearly as AI agent autonomy increases, from assistive through agentic AI.

What zone should my organization target?

Start at Zone 2 (augmentive) with strong human review processes. Move to Zone 3 only with robust guardrails, monitoring, and governance. Zone 4 requires board-level risk acceptance.


Need Expert Help?

Richard Ewing is a Product Economist and AI Capital Auditor. He helps companies translate technical complexity into financial clarity.
