AI Economics · 8 min read

The Rise of the AI Economist: Why Product Managers Must Evolve or Perish

Traditional software has zero marginal cost. AI features carry massive, compounding variable costs. If product managers don't learn to engineer margins, they will bankrupt their companies.

By Richard Ewing

The End of Zero Marginal Cost Software

For the last twenty years, product managers have operated under a financial delusion disguised as a business model: zero marginal cost. When you build a traditional SaaS feature—say, a new analytics dashboard or a better export tool—it costs money to build (R&D) and money to maintain (technical debt). But once it's deployed, the cost to serve that feature to the next user is functionally zero. Whether 10 users click "Export to CSV" or 10,000 users click it, your AWS bill barely registers the change.

Because the marginal cost was zero, product managers were trained to optimize for one thing above all else: **Engagement**. Get the user to click more. Get them to stay longer. Get them to use the product every single day. If you drive engagement, you drive retention. If you drive retention, you drive LTV. It was a beautiful, elegant formula that built trillion-dollar companies.

But generative AI just broke the formula. When you build an AI feature—say, an LLM-powered "Chat with your Data" bot—the economics fundamentally flip. Every time a user interacts with that feature, your application makes an API call to a foundation model. That call costs money. It might be a fraction of a cent, or it might be ten cents, depending on the context window and the model. Suddenly, the marginal cost is not zero. It is highly variable, entirely unpredictable, and dangerously scalable.

This is the era of **Synthetic COGS** (Cost of Goods Sold). And if you are still building products using the engagement-first playbook of 2021, you are driving your company toward insolvency at the speed of compute.

---

The Generative Margin Squeeze

Imagine you sell a B2B SaaS product for $50 per user per month. In the old world, your gross margins were probably 85%. You spent maybe $7 a month on hosting, database reads, and CDN delivery per user. The rest was gross profit to fund R&D, sales, and marketing.

Now, to keep up with the competition, you launch an "AI Copilot" inside your app. It's brilliant. Your users love it. In fact, they love it so much that your Power Users start using it 50 times a day. Every time they use it, they pass 8,000 tokens of context to GPT-4o. At current pricing, that costs you roughly $0.04 per query.

50 queries a day × 20 working days = 1,000 queries a month. 1,000 queries × $0.04 = $40.00 in API costs.

Your user is paying you $50 a month. Your AI API bill for that user is $40. Your traditional AWS bill is $7. Your gross margin just plummeted from 85% to 6%.

This is what I call **Power User Liability**. In the AI era, your most engaged customers are your most expensive liabilities. They are actively destroying your unit economics. And because you likely sold them a flat-rate subscription, you have no mechanism to capture the value they are extracting. If you scale this feature successfully—if you actually achieve the "high engagement" you were trained to seek—you will bankrupt the company. You have built a machine that converts venture capital directly into Nvidia revenue, with your SaaS company acting as a low-margin compute reseller in the middle.
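That arithmetic is worth writing down. Here is a minimal sketch of the unit-economics math above; the per-query cost, usage pattern, and hosting figure are the illustrative assumptions from this scenario, not measured pricing:

```python
# Back-of-the-envelope Synthetic COGS for a single power user.
# All inputs are the illustrative assumptions from the scenario above.
COST_PER_QUERY = 0.04    # ~8,000 tokens of GPT-4o context per call
QUERIES_PER_DAY = 50
WORKING_DAYS = 20
HOSTING_COST = 7.00      # traditional infra cost per user per month
SUBSCRIPTION = 50.00     # flat-rate price per user per month

ai_cogs = COST_PER_QUERY * QUERIES_PER_DAY * WORKING_DAYS  # $40.00
gross_profit = SUBSCRIPTION - ai_cogs - HOSTING_COST       # $3.00
gross_margin = gross_profit / SUBSCRIPTION                 # 0.06

print(f"AI COGS: ${ai_cogs:.2f}/mo | gross margin: {gross_margin:.0%}")
```

The uncomfortable property of this model is that `ai_cogs` scales linearly with engagement while `SUBSCRIPTION` stays flat, which is exactly the Power User Liability described above.

---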

Enter the AI Economist

The traditional Product Manager is obsolete. The role was designed for an era where building was the constraint and distribution was free. Today, building is free (thanks to AI code generation), but execution is expensive.

To survive this shift, the Product Manager must evolve into an **AI Economist**. An AI Economist doesn't just ask, "Will the user value this feature?" They ask, "Can we serve this feature at a profitable unit margin at scale?" They don't just optimize for engagement; they optimize for **ROAI (Return on Artificial Intelligence)**.

Here is the mandate of the AI Economist:

1. Stop Building 'Happy Path' Generative Features

Traditional PMs see an OpenAI API key and immediately try to build a monolithic chat interface. "Just send the whole database schema to the LLM and let it figure it out!" The AI Economist looks at the same problem and builds a **Deterministic Control Layer**. They don't send every query to the most expensive foundation model. They build a classification layer (often using a cheap, local Small Language Model) that asks: "Does this query actually require deep reasoning?" If the user is just asking for a password reset, the system routes it to a traditional deterministic script for $0.00. If they ask a complex analytical question, it routes to the expensive model.
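A minimal sketch of such a control layer, assuming hypothetical intents, handler names, and per-call costs, with a keyword heuristic standing in for the cheap SLM classifier:

```python
import re

# Toy deterministic control layer: route each query to the cheapest
# handler that can serve it. Patterns, handlers, and costs are
# illustrative assumptions, not a prescribed architecture.
DETERMINISTIC_INTENTS = {
    r"\breset (my )?password\b": "send_password_reset_link",
    r"\bcancel (my )?subscription\b": "start_cancellation_flow",
}

def needs_deep_reasoning(query: str) -> bool:
    # Stand-in for a cheap local SLM classifier; a real system would
    # run a small model here instead of keyword matching.
    return any(w in query.lower() for w in ("why", "analyze", "compare"))

def route(query: str) -> tuple[str, float]:
    """Return (handler, estimated_cost_in_dollars) for a query."""
    for pattern, handler in DETERMINISTIC_INTENTS.items():
        if re.search(pattern, query.lower()):
            return handler, 0.00            # scripted path: free
    if needs_deep_reasoning(query):
        return "frontier_model", 0.04       # expensive foundation model
    return "small_local_model", 0.001       # cheap default tier

print(route("please reset my password"))   # ('send_password_reset_link', 0.0)
```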

2. Master Margin Engineering

Margin Engineering is the architectural practice of designing systems specifically to protect gross profitability. The AI Economist works directly with engineering to implement:

- **Semantic Caching**: If User B asks a question that User A asked an hour ago, don't run the inference again. Serve the cached answer.
- **The Evergreen Ratio**: Measure the percentage of AI requests served from cache versus live inference. A healthy AI product needs an Evergreen Ratio of at least 40% to maintain SaaS-like margins.
- **Dynamic Routing**: Automatically downgrading to cheaper models when the system detects low-complexity tasks or when usage quotas are approaching.
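Here is a hedged sketch of a semantic cache that also tracks the Evergreen Ratio. The cosine-similarity threshold and the in-memory list are illustrative; a production system would use a vector database and a real embedding model:

```python
import numpy as np

class SemanticCache:
    """Minimal semantic cache; lookup() takes a query embedding
    produced by whatever embedding model you already use."""

    def __init__(self, threshold: float = 0.95):
        self.entries: list[tuple[np.ndarray, str]] = []
        self.hits = 0       # requests served from cache
        self.misses = 0     # requests that went to live inference
        self.threshold = threshold

    def lookup(self, query_vec: np.ndarray) -> str | None:
        for vec, answer in self.entries:
            sim = float(vec @ query_vec) / (
                np.linalg.norm(vec) * np.linalg.norm(query_vec))
            if sim >= self.threshold:
                self.hits += 1
                return answer            # cached answer, $0 inference
        self.misses += 1
        return None

    def store(self, query_vec: np.ndarray, answer: str) -> None:
        self.entries.append((query_vec, answer))

    @property
    def evergreen_ratio(self) -> float:
        """Share of requests deflected from live inference;
        the target named above is at least 0.40."""
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```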

3. Price the Compute, Not Just the Software

The flat-rate SaaS subscription is dead for heavy AI products. The AI Economist understands that you cannot sell variable-cost compute wrapped in fixed-price subscriptions. They are reinventing pricing architecture: hybrid models with base platform fees plus token-based credits, outcome-based pricing where the customer pays for successful resolutions, and hard usage caps that degrade gracefully.
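One way to make that concrete: a hedged sketch of a hybrid invoice with a base fee, bundled credits, and a hard cap. Every number here is an illustrative assumption:

```python
BASE_FEE = 50.00          # flat platform fee per seat per month
INCLUDED_CREDITS = 500    # AI queries bundled into the base fee
PRICE_PER_CREDIT = 0.06   # priced above the ~$0.04 COGS per query
HARD_CAP_CREDITS = 5_000  # past this, degrade gracefully to a cheap tier

def monthly_invoice(queries_used: int) -> float:
    """Base fee plus metered overage, capped at the hard limit."""
    billable = min(queries_used, HARD_CAP_CREDITS)
    overage = max(0, billable - INCLUDED_CREDITS)
    return BASE_FEE + overage * PRICE_PER_CREDIT

# The 1,000-query power user from earlier now pays
# 50.00 + 500 * 0.06 = $80.00 instead of a flat, margin-destroying $50.
print(monthly_invoice(1_000))  # 80.0
```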

4. Audit Shadow AI

The most dangerous AI costs aren't the ones you plan for; they are the ones you don't see. Developers hardcoding API keys into local scripts. Support teams using unsanctioned AI tools that leak proprietary data. The AI Economist leads the **Shadow AI Audit**, hunting down unmanaged compute costs and bringing them under a centralized, deterministic governance layer.
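A Shadow AI Audit usually starts with something mundane: scanning repositories for hardcoded provider keys. A minimal sketch, with patterns covering two common key prefixes (extend them for the providers you actually use):

```python
import re
from pathlib import Path

KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),   # OpenAI-style secret keys
    re.compile(r"AKIA[0-9A-Z]{16}"),        # AWS access key IDs
]

def scan_for_keys(root: str) -> list[tuple[str, int]]:
    """Return (file, line_number) pairs that look like hardcoded keys."""
    findings = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if any(p.search(line) for p in KEY_PATTERNS):
                findings.append((str(path), lineno))
    return findings

if __name__ == "__main__":
    for file, lineno in scan_for_keys("."):
        print(f"possible hardcoded key: {file}:{lineno}")
```

---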

The Turing Tax and the Boardroom Mandate

Every company building AI is currently paying what I call the **Turing Tax**. It is the premium you pay for using cutting-edge, general-purpose intelligence to solve narrow, specific business problems.

Right now, boards of directors and CFOs are looking at their cloud bills in sheer panic. The AI hype cycle of 2024 got the budget approved. The AI reality of 2026 is that the CFO is demanding to see the ROI.

When the CFO walks into the product review meeting and asks why AWS costs are up 300% but revenue is only up 12%, the traditional Product Manager will talk about "monthly active users" and "customer delight." They will be fired.

The AI Economist will walk into that same meeting, open their **AUEB (AI Unit Economics Benchmark)** dashboard, and say: "Our blended Cost of Predictivity is $0.02 per query. We have isolated our Power User Liability through tiered rate-limiting, and our Semantic Cache is currently deflecting 42% of inference costs. We project gross margins will stabilize at 71% next quarter." They will be promoted to Chief Product Officer.

---

The Masterclass: Your Transition Plan

If you are a Product Manager, Engineering Leader, or startup Founder, you are standing at a career crossroads. The skills that got you here—agile methodology, story mapping, A/B testing—are commodities. They are table stakes. The scarce skill in the market today is the ability to architect, govern, and monetize probabilistic software without destroying unit economics.

This is why I have launched **Track 28: The AI Economist Masterclass** within the Synthetic Enterprise Cognition curriculum. We are not teaching prompt engineering. We are teaching capital allocation. You will learn:

- How to calculate your specific **Synthetic COGS** and build pricing models that guarantee margin preservation.
- How to architect **Deterministic Control Layers** that govern rogue AI and prevent hallucination cascades.
- How to execute a **Shadow AI Audit** and present the findings to your board of directors.
- How to calculate the **Evergreen Ratio** and work with your engineering team to implement semantic caching.

The era of the "Happy Builder" is over. We are entering the era of the AI Economist. You can either learn to engineer the margins, or you can watch your product collapse under the weight of its own compute bill. The choice is yours.

[Explore The AI Economist Masterclass Curriculum Here](/vault/curriculum)


More in AI Economics

Canonical Frameworks

Cost of Predictivity

The Cost of Predictivity measures the variable cost of AI accuracy. Unlike traditional software with near-zero marginal costs, AI features have significant variable costs that scale with both usage AND accuracy requirements. As AI correctness increases, cost scales exponentially — not linearly. This is the fundamental economic challenge of AI products.

Traditional software follows a simple cost model: high fixed development cost, near-zero marginal cost per user. Build the feature once, serve it to millions for pennies. AI products break this model entirely. Every AI query costs compute. Every inference requires GPU cycles. Every improvement in accuracy requires either more sophisticated prompts (more tokens = more cost), retrieval-augmented generation (vector DB queries + embedding generation), or fine-tuned models (massive training costs amortized over queries). The cost structure looks more like a manufacturing business than a software business.

The exponential curve is the killer. Moving from 80% accuracy to 90% accuracy might cost 2x. Moving from 90% to 95% might cost 5x. Moving from 95% to 99% often costs 10-20x. This is because the easy cases are solved by the base model, and each additional percentage point of accuracy requires increasingly sophisticated (and expensive) techniques to handle edge cases.

This creates what Richard Ewing calls the AI Margin Collapse Point: the usage volume at which AI feature costs exceed the revenue they generate. Many AI features that work beautifully in prototype (low volume, don't need high accuracy) become economically devastating in production (high volume, users demand high accuracy).

The AI Unit Economics Benchmark (AUEB) calculator at richardewing.io/tools/aueb helps companies calculate their Cost of Predictivity and identify their specific margin collapse point before it hits their P&L.
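Under those illustrative multipliers, the margin collapse point falls directly out of the curve. A sketch, assuming the cost steps compound (2x, then 5x, then roughly 10x) from an assumed $0.01 baseline at 80% accuracy:

```python
BASE_COST = 0.01  # assumed $/query at 80% accuracy
# Cumulative multipliers implied by the step costs above:
# 80->90 = 2x, 90->95 = 5x, 95->99 = ~10x, compounded.
COST_MULTIPLIER = {0.80: 1, 0.90: 2, 0.95: 10, 0.99: 100}

def cost_per_query(accuracy: float) -> float:
    return BASE_COST * COST_MULTIPLIER[accuracy]

def margin_collapse_volume(revenue_per_month: float, accuracy: float) -> float:
    """Monthly query volume at which AI COGS consumes all revenue."""
    return revenue_per_month / cost_per_query(accuracy)

# A $50/month seat at 95% accuracy collapses at 50 / 0.10 = 500 queries:
print(margin_collapse_volume(50.0, 0.95))  # 500.0
```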


Feature Bloat Calculus

Feature Bloat Calculus is the economic formula for determining when a feature's maintenance cost exceeds its value contribution. It quantifies the hidden tax of feature accumulation — the compounding cost that makes every new feature harder and more expensive to build. The formula considers three cost components:

1. **Direct Maintenance Cost**: The engineering hours spent maintaining the feature (bug fixes, compatibility updates, dependency management, test maintenance). This is typically 2-5% of original development cost per quarter.
2. **Opportunity Cost**: What else could those maintenance engineers be building? If 3 engineers spend 20% of their time maintaining a low-value feature, that's 0.6 FTE that could be building high-value new capabilities.
3. **Complexity Tax**: This is the compounding factor that most organizations miss entirely. Every feature in the codebase makes every other feature harder to maintain and every new feature harder to build. Adding feature #101 to a system doesn't just add feature #101's maintenance cost — it increases the maintenance cost of features #1-100.

The Complexity Tax follows a roughly quadratic curve. A system with 50 features has approximately 1,225 potential interaction points (n × (n-1) / 2). A system with 100 features has 4,950 potential interaction points. Doubling features doesn't double complexity — it quadruples it.

Feature Bloat Calculus quantifies this by comparing a feature's total cost (direct + opportunity + complexity) against its value contribution (revenue attribution, user engagement, strategic importance). When total cost exceeds value, the feature has "negative carry" — it's costing more to keep than it's worth. Features with negative carry should be evaluated through the Kill Switch Protocol for potential deprecation. The highest-negative-carry features should be killed first, as they free up the most capacity per removal.
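The quadratic claim and the negative-carry test are easy to check in code. A minimal sketch, with hypothetical cost inputs:

```python
def interaction_points(n_features: int) -> int:
    """Potential pairwise interactions: n * (n - 1) / 2."""
    return n_features * (n_features - 1) // 2

def negative_carry(direct: float, opportunity: float,
                   complexity_tax: float, value: float) -> float:
    """Positive result means the feature costs more than it is worth
    and is a candidate for the Kill Switch Protocol."""
    return (direct + opportunity + complexity_tax) - value

# Doubling features roughly quadruples interaction complexity:
assert interaction_points(50) == 1_225
assert interaction_points(100) == 4_950
```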


Richard Ewing

The AI Economist — Quantifying engineering economics for technology leaders, PE firms, and boards.