Introduction
The idea that machines will free humanity from toil is as old as the concept of machinery itself. Aristotle imagined that if “every instrument could accomplish its own work…chief workmen would not want servants, nor masters slaves” (Politics, I.4). Keynes predicted in 1930 that by 2030, rising productivity would produce a 15-hour work week. As artificial intelligence demonstrates increasingly dramatic compression of cognitive labor—reducing tasks that once required weeks to minutes—the expectation of an imminent “leisure dividend” has returned with renewed force.
We argue that this expectation fundamentally misunderstands the relationship between productivity, competition, and work. The leisure dividend is not merely unlikely; it is structurally impossible within competitive market economies absent coordinated intervention. The mechanism preventing it is what we term the competitive ratchet: a self-reinforcing cycle in which each firm’s AI-driven productivity gain compels all competitors to match or exceed that gain, absorbing all freed time into expanded output rather than leisure.
The core insight is game-theoretic. When one firm adopts AI and achieves a compression ratio \rho > 1, it faces a choice: produce the same output in 1/\rho the time (taking leisure), or produce \rho times the output in the same time (gaining market share). In competitive markets, the second option dominates. But when all firms make this choice simultaneously, the aggregate effect is an industry-wide increase in output, a decrease in price, and the elimination of any per-firm surplus. The leisure dividend evaporates.
This dynamic is not a conjecture. It is a mathematical consequence of rational behavior under competition. The same logic that ensures no firm in a Cournot oligopoly will unilaterally restrict output ensures that no firm will consume AI-driven productivity gains as leisure. The equilibrium is one of maximum effort, not maximum leisure.
This paper formalizes this argument through several complementary frameworks. In Section 2, we develop a Cournot competition model in which n firms choose output levels after AI adoption, yielding equilibrium output q_i^* > q_0. Section 3 introduces the expectation escalation model, showing how market expectations ratchet upward in response to demonstrated capability. Section 4 provides a full game-theoretic analysis proving that AI adoption is a dominant strategy. Section 5 formalizes the prisoner’s dilemma structure, showing mutual adoption is Pareto-inferior to mutual non-adoption but strategically inevitable. Section 6 models adoption cascades with threshold dynamics. Section 7 analyzes the constraint shift from production to imagination. Section 8 develops the hierarchical work amplification model. Section 9 examines sociological dimensions including ambition elasticity and iteration compounding. Section 10 treats the paradox of choice under AI. Section 11 formalizes the treadmill effect. Section 12 presents multi-agent simulation results. Section 13 examines empirical evidence. Section 14 connects to Baumol’s cost disease. Section 15 discusses policy implications.
Notation and Conventions
Throughout the paper, we use the following notation. The AI capability level is \alpha \geq 0, with \alpha = 0 representing the pre-AI baseline. The compression ratio is \rho(\alpha) \geq 1, defined as the ratio of pre-AI to post-AI task completion time. The number of firms is n \geq 2. The inverse demand function is P(Q) = a - bQ with a, b > 0. The pre-AI marginal cost is c_0 > 0 with c_0 < a (the market is viable). All proofs are self-contained; readers familiar with Cournot theory may skim the derivations in Sections 2 and 4.
The Competitive Ratchet
We begin by formalizing the mechanism through which competitive markets convert AI-driven productivity gains into increased output rather than increased leisure. The argument proceeds in four steps: we establish the pre-AI equilibrium, derive the post-AI equilibrium, compute the output amplification factor, and characterize the ratchet mechanism.
Setup: Cournot Competition with AI
Consider an industry with n \geq 2 symmetric firms producing a homogeneous good. Each firm i chooses a quantity q_i \geq 0. The inverse demand function is: \begin{equation} P(Q) = a - bQ, \quad Q = \sum_{i=1}^n q_i \end{equation} where a > 0 is the demand intercept and b > 0 is the demand slope. This linear demand assumption is standard in oligopoly theory and simplifies the analysis without losing the essential competitive dynamic.
Definition 1 (AI-Augmented Cost Function). Let c_0 > 0 be the pre-AI marginal cost of production. At AI capability level \alpha \geq 0, the marginal cost is: \begin{equation} c(\alpha) = \frac{c_0}{\rho(\alpha)} \end{equation} where \rho(\alpha) \geq 1 is the compression ratio satisfying \rho(0) = 1 and \rho'(\alpha) > 0.
The compression ratio \rho(\alpha) captures the productivity multiplier from AI adoption. If a task previously required time T_0, AI reduces it to T_0/\rho(\alpha), equivalently reducing the per-unit cost by the same factor. We model \rho as a logistic function of \alpha: \begin{equation} \rho(\alpha) = 1 + (\rho_{\max} - 1) \cdot \frac{1}{1 + e^{-k(\alpha - \alpha_0)}} \end{equation} where \rho_{\max} is the maximum achievable compression, k is the steepness parameter, and \alpha_0 is the inflection point. Note that \rho(\alpha) \to 1 as \alpha \to -\infty and \rho(\alpha) \to \rho_{\max} as \alpha \to \infty, with the transition centered at \alpha_0. (Strictly, the logistic form gives \rho(0) = 1 + (\rho_{\max}-1)/(1 + e^{k\alpha_0}), so the normalization \rho(0) = 1 of Definition 1 holds to close approximation when e^{k\alpha_0} \gg \rho_{\max} - 1.) If \rho_{\max} is very large (e.g., \rho_{\max} \gg 100), the Constraint Flip analyzed in Section 7 occurs at relatively low \alpha, meaning the economy transitions from production-bound to imagination-bound almost immediately upon significant AI deployment. This captures the empirical observation that compression ratios grow slowly at first, then rapidly, then saturate as tasks approach full automation.
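The logistic compression ratio is easy to sketch in code; the parameter values below (\rho_{\max} = 50, k = 1, \alpha_0 = 5) are illustrative assumptions, not estimates from the paper.

```python
import math

def compression_ratio(alpha, rho_max=50.0, k=1.0, alpha_0=5.0):
    """Logistic compression ratio rho(alpha).

    rho_max is the saturation level, k the steepness, alpha_0 the
    inflection point; all three are illustrative choices.
    """
    return 1.0 + (rho_max - 1.0) / (1.0 + math.exp(-k * (alpha - alpha_0)))
```

At the inflection point \alpha = \alpha_0 the ratio is exactly halfway to saturation, and \rho(0) is close to (but not exactly) 1 when k\alpha_0 is large.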
Pre-AI Equilibrium
Without AI (\alpha = 0, \rho = 1), the standard Cournot equilibrium is obtained from each firm’s profit maximization: \begin{equation} \pi_i = (P(Q) - c_0) \cdot q_i = (a - bQ - c_0) \cdot q_i \end{equation}
The first-order condition \partial \pi_i / \partial q_i = 0 gives: \begin{equation} a - b\sum_{j \neq i} q_j - 2bq_i - c_0 = 0 \end{equation}
By symmetry (q_i = q_j = q_0 for all i, j), we obtain: \begin{equation} q_0 = \frac{a - c_0}{b(n+1)}, \quad Q_0 = \frac{n(a - c_0)}{b(n+1)} \end{equation}
The equilibrium price and per-firm profit are: \begin{align} P_0 &= \frac{a + nc_0}{n+1} \\ \pi_0 &= b \cdot q_0^2 = \frac{(a - c_0)^2}{b(n+1)^2} \end{align}
The second-order condition \partial^2 \pi_i / \partial q_i^2 = -2b < 0 confirms this is a maximum. The equilibrium is unique and stable under best-response dynamics.
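The closed-form equilibrium can be verified numerically by sequential best-response iteration, which converges under the stability noted above. A minimal sketch; the parameter values are arbitrary.

```python
def cournot_equilibrium(a, b, c, n, iters=500):
    """Symmetric Cournot equilibrium via sequential best-response iteration.

    Each firm best-responds to the others' current quantities:
        q_i = (a - c - b * sum_{j != i} q_j) / (2b),
    clamped at zero. Converges to q0 = (a - c) / (b (n + 1)).
    """
    q = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            rest = sum(q) - q[i]
            q[i] = max(0.0, (a - c - b * rest) / (2.0 * b))
    return q
```

With a = 10, b = 1, c_0 = 6 and n = 2, the iteration settles at q_0 = 4/3, matching the closed form.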
Post-AI Equilibrium
When all firms adopt AI at level \alpha, the marginal cost drops to c(\alpha) = c_0/\rho(\alpha). The new Cournot equilibrium follows by substituting c(\alpha) for c_0 in the standard derivation.
Proposition 2 (Competitive Ratchet). Under symmetric AI adoption at level \alpha, the Cournot equilibrium satisfies: \begin{align} q_i^*(\alpha) &= \frac{a - c_0/\rho(\alpha)}{b(n+1)} \\ Q^*(\alpha) &= \frac{n(a - c_0/\rho(\alpha))}{b(n+1)} \\ P^*(\alpha) &= \frac{a + nc_0/\rho(\alpha)}{n+1} \\ \pi_i^*(\alpha) &= \frac{(a - c_0/\rho(\alpha))^2}{b(n+1)^2} \end{align}
Proof. Firm i maximizes \pi_i = (a - bQ - c_0/\rho(\alpha)) q_i. The first-order condition is a - b\sum_{j \neq i} q_j - 2bq_i - c_0/\rho(\alpha) = 0. Imposing symmetry q_i = q^*: \begin{equation} a - b(n-1)q^* - 2bq^* = \frac{c_0}{\rho(\alpha)} \end{equation} yielding q^* = (a - c_0/\rho(\alpha))/(b(n+1)). The expressions for Q^*, P^*, and \pi_i^* follow directly. ◻
Corollary 3 (Output Amplification). The ratio of post-AI to pre-AI equilibrium output per firm is: \begin{equation} \frac{q_i^*(\alpha)}{q_0} = \frac{a - c_0/\rho(\alpha)}{a - c_0} = 1 + \frac{c_0(1 - 1/\rho(\alpha))}{a - c_0} \end{equation} This ratio is strictly increasing in \rho(\alpha) and approaches a/(a - c_0) as \rho \to \infty.
Proof. Differentiating with respect to \rho: \begin{equation} \frac{d}{d\rho}\left(\frac{q^*}{q_0}\right) = \frac{c_0}{\rho^2(a-c_0)} > 0 \end{equation} The limit as \rho \to \infty gives (a - 0)/(a - c_0) = a/(a-c_0). ◻
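Corollary 3's amplification ratio can be tabulated directly; a minimal sketch with arbitrary parameter values.

```python
def output_amplification(a, c0, rho):
    """Per-firm output ratio q*(alpha)/q0 = (a - c0/rho) / (a - c0)."""
    if not (0 < c0 < a and rho >= 1):
        raise ValueError("need 0 < c0 < a and rho >= 1")
    return (a - c0 / rho) / (a - c0)
```

With a = 10 and c_0 = 6 the ratio rises from 1 at \rho = 1 toward the limit a/(a - c_0) = 2.5 as \rho \to \infty.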
Numerical Examples
To illustrate the output amplification, Table 1 reports the amplification factor for a range of compression ratios \rho and cost-intensity parameters \gamma_c.

| | \rho=5 | \rho=10 | \rho=50 | \rho=100 |
|---|---|---|---|---|
| \gamma_c = 0.3 | 1.17 | 1.21 | 1.26 | 1.27 |
| \gamma_c = 0.5 | 1.40 | 1.45 | 1.49 | 1.50 |
| \gamma_c = 0.7 | 1.87 | 1.97 | 2.05 | 2.07 |
| \gamma_c = 0.8 | 2.60 | 2.80 | 2.94 | 2.96 |
| \gamma_c = 0.9 | 4.60 | 5.10 | 5.42 | 5.46 |
Table 1 reveals that the output amplification is most pronounced in cost-intensive industries (\gamma_c close to 1). In the empirically relevant case of cognitive labor-intensive industries (legal, consulting, software) where \gamma_c \approx 0.7–0.8 and AI achieves \rho \approx 50, each firm approximately doubles or triples its output. The leisure dividend is zero; all productivity gains are absorbed into expanded production.
The Ratchet Mechanism
The competitive ratchet operates through a precise causal chain, illustrated in Figure 1:
Cost Reduction: AI adoption reduces marginal cost from c_0 to c_0/\rho(\alpha).
Profit Incentive: At the old equilibrium quantity q_0, firm i’s margin increases from (P_0 - c_0) to (P_0 - c_0/\rho(\alpha)), creating an incentive to expand output.
Quantity Expansion: Each firm increases output to q^*(\alpha) > q_0.
Price Decline: Aggregate output Q^*(\alpha) > Q_0 drives the price down: P^*(\alpha) < P_0.
Margin Compression: The new equilibrium margin is positive but smaller than the initial post-adoption margin.
Ratchet Lock-in: At the new equilibrium, no firm can unilaterally reduce output without losing market share and profit.
Remark 1 (Barrier-to-Entry Reduction). The ratchet mechanism is further intensified when AI reduces the fixed cost of market entry F or effectively increases the number of competitors n. AI lowers barriers to entry by commoditizing execution capability: a solo practitioner with AI tools can now compete with established firms on production quality and speed. This increases n in the Cournot model, which tightens equilibrium margins (since q_0 = (a-c_0)/(b(n+1)) is decreasing in n) and accelerates the competitive pressure that eliminates leisure gains.
The Work Multiplier
Definition 4 (Competitive Work Multiplier). The competitive work multiplier M_W^{\mathrm{comp}} is the ratio of total industry output post-AI to pre-AI: \begin{equation} M_W^{\mathrm{comp}}(\alpha) = \frac{Q^*(\alpha)}{Q_0} = \frac{a - c_0/\rho(\alpha)}{a - c_0} \end{equation}
Note that while each unit of output requires 1/\rho(\alpha) the time to produce, the number of units increases by the work multiplier, so total human hours devoted to production is: \begin{equation} H_{\mathrm{work}}(\alpha) = Q^*(\alpha) \cdot \frac{T_0}{\rho(\alpha)} = Q_0 \cdot M_W^{\mathrm{comp}} \cdot \frac{T_0}{\rho} \end{equation}
For M_W^{\mathrm{comp}} \approx 2 and \rho \approx 50, this is 2/50 = 4\% of the pre-AI hours for production alone. But the freed hours are not consumed as leisure—they are redirected to new production categories, management overhead, quality improvement, and the other mechanisms analyzed in subsequent sections. The total hours worked remains constant or increases.
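The arithmetic of the work multiplier versus production hours can be checked directly. This sketch uses hypothetical parameters chosen so that the multiplier is exactly 2 at \rho = 50.

```python
def production_hours(a, b, c0, n, rho, T0=1.0):
    """Post-AI production hours H_work = Q* T0 / rho versus pre-AI Q0 T0.

    Returns (work multiplier Q*/Q0, post-AI hours, pre-AI hours).
    Parameter values are illustrative assumptions.
    """
    Q0 = n * (a - c0) / (b * (n + 1))
    Q_star = n * (a - c0 / rho) / (b * (n + 1))
    return Q_star / Q0, Q_star * T0 / rho, Q0 * T0
```

With a = 9.9, c_0 = 5, and \rho = 50 the multiplier is 2 and production hours fall to 2/50 = 4% of baseline; the freed 96% is what the subsequent sections show being reabsorbed.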
Remark 2. The work multiplier captures only the intensive margin—expansion of output within existing product categories. The extensive margin—entry into new product categories enabled by lower costs—is analyzed in the opportunity space expansion framework and produces additional work amplification of order \rho^{\beta-1} with \beta > 1.
Expectation Escalation
The competitive ratchet alone does not fully explain why the leisure dividend vanishes. A second mechanism—expectation escalation—ensures that even when individual firms could choose leisure, social and market expectations prevent them from doing so. This section formalizes expectation dynamics and proves that no steady state with below-expectation work levels is sustainable.
The Expectation Function
Definition 5 (Market Expectation). The market expectation level E_{\mathrm{mkt}}(\alpha) is the quantity and quality of output that customers, investors, and competitors regard as the baseline for competence at AI capability level \alpha: \begin{equation} E_{\mathrm{mkt}}(\alpha) = E_0 \cdot g(\rho(\alpha)) \end{equation} where E_0 is the pre-AI expectation baseline and g\colon [1, \infty) \to [1, \infty) is a strictly increasing, continuously differentiable function with g(1) = 1.
The function g captures how expectations adjust to demonstrated capability. We consider three canonical functional forms, each encoding a different hypothesis about organizational and market psychology:
Linear pass-through: g(\rho) = \rho. Expectations scale proportionally with demonstrated capability. If AI enables 50\times faster production, stakeholders expect 50\times the output. This is the “rational expectations” benchmark.
Concave adjustment: g(\rho) = \rho^{0.7}. Expectations increase but with diminishing returns, reflecting cognitive anchoring to pre-AI baselines. Stakeholders are slow to fully internalize AI capability.
Convex escalation: g(\rho) = \rho^{1.3}. Expectations overshoot demonstrated capability, reflecting competitive anxiety and “keeping up with the Joneses” dynamics. Stakeholders demand not just what AI enables, but more.
| Model | g(\rho) | g(50) | Character |
|---|---|---|---|
| Linear | \rho | 50 | Full pass-through |
| Concave | \rho^{0.7} | 15.5 | Diminishing returns |
| Convex | \rho^{1.3} | 162 | Accelerating |
Empirical evidence from the software industry suggests that expectations escalate at least linearly in demonstrated capability, and often super-linearly. When AI-assisted teams demonstrated the ability to produce a prototype in one day rather than two weeks, stakeholders did not celebrate the time savings—they demanded daily prototypes with broader scope. This observation is consistent with the convex model g(\rho) = \rho^{1.3}.
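The three functional forms are easy to compare numerically; a minimal sketch, with the exponents 0.7 and 1.3 taken from the text.

```python
def expectation_multiplier(rho, model):
    """Expectation escalation g(rho) for the three canonical models."""
    exponents = {"linear": 1.0, "concave": 0.7, "convex": 1.3}
    return rho ** exponents[model]
```

At \rho = 50 this gives 50, roughly 15.5, and roughly 162 for the linear, concave, and convex models respectively.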
The Expectation–Work Feedback Loop
The critical feature of expectation escalation is its feedback structure. We model this as a coupled dynamical system:
\begin{align} \dot{\alpha}(t) &= f_\alpha(K(t)) \\ \dot{E}(t) &= \gamma_E \cdot (W(t) - E(t)) \\ \dot{W}(t) &= \lambda \cdot (E(t) - W(t)) + \mu \cdot \dot{\rho}(\alpha(t)) \\ \dot{K}(t) &= s \cdot \Pi(W, \alpha) - \delta K(t) \end{align}
where \gamma_E > 0 is the expectation adjustment rate, \lambda > 0 governs the speed at which work adjusts to expectations, \mu > 0 captures the direct output effect of compression gains, s is the investment rate, \Pi is the profit function, and \delta is the depreciation rate.
The second equation states that expectations adjust toward observed work levels: when firms demonstrate higher output, the market revises its expectations upward. The third equation states that work adjusts toward expectations (firms try to meet or exceed expectations) and also responds directly to AI improvements (which make more output achievable). The positive feedback is immediate: higher W raises E via the expectation equation, which raises W further via the work equation.
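A minimal Euler simulation illustrates the feedback loop. It simplifies the system by letting \alpha grow at a constant exogenous rate (collapsing the capital equation), and all numerical parameter values are illustrative assumptions.

```python
import math

def simulate_expectation_loop(steps=2000, dt=0.01, gamma_E=0.5, lam=0.8,
                              mu=1.0, alpha_rate=0.05,
                              rho_max=50.0, k=1.0, alpha_0=5.0):
    """Euler integration of the expectation-work feedback loop.

    Simplification: alpha(t) grows linearly at rate alpha_rate instead of
    being driven by the capital equation. Returns the (E, W) trajectory.
    """
    def rho(a):
        return 1.0 + (rho_max - 1.0) / (1.0 + math.exp(-k * (a - alpha_0)))

    alpha, E, W = 0.0, 1.0, 1.0
    trajectory = []
    for _ in range(steps):
        # d(rho)/dt = rho'(alpha) * alpha_rate, via a numerical derivative
        rho_dot = (rho(alpha + 1e-6) - rho(alpha)) / 1e-6 * alpha_rate
        dE = gamma_E * (W - E)            # expectations chase observed work
        dW = lam * (E - W) + mu * rho_dot  # work chases expectations + AI push
        alpha += alpha_rate * dt
        E += dE * dt
        W += dW * dt
        trajectory.append((E, W))
    return trajectory
```

Along the trajectory W stays weakly above E and both rise monotonically, consistent with Theorem 6's conclusion that no leisure steady state exists while \dot{\alpha} > 0.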
Theorem 6 (No Steady State with Leisure). If \dot{\alpha}(t) > 0 for all t (AI continues to improve) and \gamma_E > 0 (expectations adjust), then the four-equation system above admits no steady state in which W(t) < E_{\mathrm{mkt}}(\alpha(t)). That is, work never falls sustainably below market expectations.
Proof. Suppose for contradiction that W(t^*) < E(t^*) at some steady state t^* where \dot{W} = 0. From the work-adjustment equation: 0 = \lambda(E(t^*) - W(t^*)) + \mu\dot{\rho}(t^*). Since E > W by assumption and \dot{\rho} > 0 (because \dot{\alpha} > 0 and \rho' > 0), both terms on the right are strictly positive, yielding 0 > 0, a contradiction. Hence no such steady state exists. ◻
Corollary 7 (Monotonic Expectation Growth). Along any trajectory of the system with \dot{\alpha} > 0, the expectation level E(t) is eventually monotonically increasing.
Proof. From Theorem 6, W(t) \geq E(t) in any quasi-steady state. From the expectation-adjustment equation, \dot{E} = \gamma_E(W - E) \geq 0 whenever W \geq E. Since the forcing term \mu\dot{\rho} continuously pushes W above E, the expectation level is eventually monotonically increasing. ◻
Empirical Expectation Escalation
| Domain | Pre-AI baseline | Post-AI expectation | Implied g(\rho) |
|---|---|---|---|
| Software sprints | 5 feat./sprint | 15 feat./sprint | 3.0 |
| Marketing | 4 camp./qtr | 30 camp./qtr | 7.5 |
| Legal review | 20 cont./mo | 200 cont./mo | 10.0 |
| Data analysis | 3 rep./wk | 25 rep./wk | 8.3 |
| Customer svc. | 50 tick./day | 300 tick./day | 6.0 |
Table 3 documents the pattern: as AI-augmented teams demonstrate higher throughput, expectations rapidly adjust to the new baseline. The pre-AI output level, which was previously considered competent performance, is reclassified as underperformance. This reclassification is irreversible—expectations ratchet upward but never downward, creating what organizational psychologists call an expectations trap.
The Ratchet Inequality
We formalize the irreversibility:
Proposition 8 (Expectation Ratchet Inequality). Let \alpha_1 < \alpha_2. Then for all t sufficiently large after \alpha increases from \alpha_1 to \alpha_2, E_{\mathrm{mkt}}(\alpha_2) > E_{\mathrm{mkt}}(\alpha_1). Moreover, if AI capability subsequently regresses to \alpha_1, expectations satisfy: \begin{equation} E_{\mathrm{mkt}}^{\mathrm{sticky}} \geq E_{\mathrm{mkt}}(\alpha_1) + \theta \cdot (E_{\mathrm{mkt}}(\alpha_2) - E_{\mathrm{mkt}}(\alpha_1)) \end{equation} for some stickiness parameter \theta \in (0, 1].
Proof. The first claim follows from g strictly increasing and \rho strictly increasing in \alpha. For the stickiness result, model asymmetric adjustment: expectations adjust upward at rate \gamma_E^+ and downward at rate \gamma_E^- < \gamma_E^+. The steady-state expectation after regression satisfies E^{\mathrm{sticky}} = E(\alpha_2) - (\gamma_E^-/\gamma_E^+)(E(\alpha_2) - E(\alpha_1)), giving \theta = 1 - \gamma_E^-/\gamma_E^+. In the limit \gamma_E^- \to 0, expectations are fully sticky (\theta = 1). ◻
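The asymmetric-adjustment mechanism in the proof can be illustrated numerically. This is a finite-horizon sketch with hypothetical rates: under pure exponential decay expectations would eventually revert fully, so the proposition's permanent stickiness corresponds to the limit \gamma_E^- \to 0, or equivalently a finite observation window as used here.

```python
def sticky_expectation(E1=1.0, E2=10.0, gamma_up=1.0, gamma_down=0.2,
                       dt=0.01, up_steps=2000, down_steps=100):
    """Asymmetric expectation adjustment: fast upward, slow downward.

    Phase 1: capability at alpha_2, expectations rise toward E2.
    Phase 2: capability regresses to alpha_1, expectations decay toward E1
    at the slower rate gamma_down over a finite window.
    All rates and horizons are illustrative assumptions.
    """
    E = E1
    for _ in range(up_steps):
        E += gamma_up * (E2 - E) * dt
    for _ in range(down_steps):
        E += gamma_down * (E1 - E) * dt
    return E
```

With these rates expectations climb essentially all the way to E_2 but retain most of the gain after the regression window, resting well above E_1; a larger downward rate yields less stickiness.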
The stickiness of expectations is a crucial mechanism. Once a market has seen AI-level performance, it never fully reverts to pre-AI standards, even if AI capability were hypothetically withdrawn. This creates an irreversible commitment to AI-augmented work intensity.
Game-Theoretic Analysis: The AI Adoption Game
We now formalize the strategic interaction among firms choosing AI adoption levels. The central result is that AI adoption at a positive level is a dominant strategy for every firm, making universal adoption the unique Nash equilibrium.
The Game
Definition 9 (AI Adoption Game). The AI Adoption Game \Gamma = (N, S, \pi) consists of:
Players: N = \{1, 2, \ldots, n\}, a set of n \geq 2 firms.
Strategy spaces: S_i = [0, \bar{\alpha}] for each firm i, where \alpha_i \in S_i is the AI adoption level and \bar{\alpha} is the technological frontier.
Payoff functions: \pi_i(\alpha_1, \ldots, \alpha_n) is firm i’s profit, determined by the Cournot equilibrium conditional on the adoption profile.
Payoff Structure
Given adoption profile \boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_n), firm i’s marginal cost is c_i(\alpha_i) = c_0/\rho(\alpha_i). The asymmetric Cournot equilibrium quantity for firm i is: \begin{equation} q_i^*(\boldsymbol{\alpha}) = \frac{a + \sum_{j=1}^n c_j - (n+1)c_i}{b(n+1)} \end{equation}
This is derived in Appendix 17. Firm i’s equilibrium profit is: \begin{equation} \pi_i^*(\boldsymbol{\alpha}) = b \cdot [q_i^*(\boldsymbol{\alpha})]^2 - F(\alpha_i) \end{equation} where F(\alpha_i) is the fixed cost of AI adoption at level \alpha_i, assumed continuously differentiable, convex, and increasing with F(0) = 0 and F'(0) = 0.
The assumption F'(0) = 0 captures the empirical observation that initial AI adoption (e.g., using free or cheap AI assistants) has negligible cost. The convexity of F captures increasing costs at higher adoption levels (custom model training, infrastructure, organizational change).
The Marginal Incentive to Adopt
Lemma 10 (Positive Marginal Incentive at Zero). For any adoption profile \boldsymbol{\alpha}_{-i} of the other firms, firm i’s marginal incentive to adopt AI is strictly positive at \alpha_i = 0: \begin{equation} \frac{\partial \pi_i^*}{\partial \alpha_i}\bigg|_{\alpha_i = 0} > 0 \end{equation}
Proof. We compute: \begin{equation} \frac{\partial \pi_i^*}{\partial \alpha_i} = 2b \cdot q_i^* \cdot \frac{\partial q_i^*}{\partial \alpha_i} - F'(\alpha_i) \end{equation}
From the asymmetric Cournot quantity above, the own-cost coefficient is 1 - (n+1) = -n, so \partial q_i^*/\partial c_i = -n/(b(n+1)) and \partial q_i^*/\partial \alpha_i = n c_0 \rho'(\alpha_i)/(b(n+1)\rho(\alpha_i)^2). At \alpha_i = 0, where \rho(0) = 1: \begin{equation} \frac{\partial \pi_i^*}{\partial \alpha_i}\bigg|_{\alpha_i=0} = \frac{2n}{n+1}\, c_0 \rho'(0)\, q_i^*(0, \boldsymbol{\alpha}_{-i}) - F'(0) \end{equation} Since \rho'(0) > 0, q_i^*(0, \boldsymbol{\alpha}_{-i}) > 0 (the firm produces positive output at \alpha_i = 0 since the market is viable), and F'(0) = 0, the expression is strictly positive. ◻
Theorem 11 (Dominant Strategy Adoption). Under the following conditions:
\rho(\alpha) is strictly increasing and concave in \alpha,
F(\alpha) is increasing and convex with F'(0) = 0,
c_0 < a (the market is viable pre-AI),
the AI Adoption Game has a unique symmetric Nash equilibrium \boldsymbol{\alpha}^* = (\alpha^*, \ldots, \alpha^*) with \alpha^* > 0, determined by: \begin{equation} \frac{\partial \pi_i^*}{\partial \alpha_i}\bigg|_{\alpha_i = \alpha^*} = 0 \end{equation}
Proof. By Lemma 10, \alpha_i = 0 is never a best response. The profit \pi_i^* is concave in \alpha_i under the stated conditions: since \rho is concave, c_i = c_0/\rho(\alpha_i) is convex, and q_i^* is linearly decreasing in c_i, hence concave in \alpha_i. For the squared term, \frac{d^2}{d\alpha_i^2}\, b(q_i^*)^2 = 2b\left[((q_i^*)')^2 + q_i^*(q_i^*)''\right], which is negative whenever the concavity of q_i^* is strong enough that q_i^*(q_i^*)'' \leq -((q_i^*)')^2; we assume \rho is sufficiently concave that this holds. Subtracting the convex F preserves concavity. Hence \pi_i^* has a unique interior maximum \alpha^* satisfying the first-order condition above. By symmetry, all firms choose the same \alpha^*. ◻
Corollary 12 (Impossibility of Leisure Equilibrium). There is no Nash equilibrium in which any firm i chooses \alpha_i = 0 (no AI adoption). Consequently, there is no equilibrium in which firms maintain pre-AI output levels and consume the productivity gain as leisure.
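The interior equilibrium \alpha^* of Theorem 11 can be located numerically by iterating grid best responses. This is a sketch under illustrative assumptions: a logistic \rho and a quadratic adoption cost F(\alpha) = 0.05\,\alpha^2, which satisfies F(0) = F'(0) = 0; none of the parameter values come from the paper.

```python
import math

def adoption_profit(alpha_i, alpha_rival, a=10.0, b=1.0, c0=6.0, n=5,
                    rho_max=50.0, k=1.0, alpha_0=5.0, f_coef=0.05):
    """Firm i's equilibrium profit when the n-1 rivals adopt at alpha_rival.

    Uses the asymmetric Cournot quantity and F(alpha) = f_coef * alpha^2.
    """
    def rho(x):
        return 1.0 + (rho_max - 1.0) / (1.0 + math.exp(-k * (x - alpha_0)))

    c_i = c0 / rho(alpha_i)
    c_j = c0 / rho(alpha_rival)
    # q_i = (a + sum_j c_j - (n+1) c_i) / (b (n+1)), clamped at zero
    q_i = max(0.0, (a + (n - 1) * c_j + c_i - (n + 1) * c_i) / (b * (n + 1)))
    return b * q_i ** 2 - f_coef * alpha_i ** 2

def symmetric_adoption_equilibrium(grid_max=20.0, points=401, rounds=60):
    """Iterate best responses on a grid until the symmetric profile settles."""
    grid = [grid_max * i / (points - 1) for i in range(points)]
    alpha = 0.0
    for _ in range(rounds):
        alpha = max(grid, key=lambda x: adoption_profit(x, alpha))
    return alpha
```

For these parameters the iteration settles at a strictly positive \alpha^*, consistent with Corollary 12's exclusion of a no-adoption equilibrium.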
Comparative Statics
Proposition 13 (Monotonicity in Capability Frontier). As the AI capability frontier \bar{\alpha} increases:
The equilibrium adoption level \alpha^* weakly increases.
The equilibrium total output Q^* strictly increases.
The equilibrium price P^* strictly decreases.
Per-firm profit \pi^* may increase or decrease, but total industry profit n\pi^* eventually decreases for large \bar{\alpha}.
Proof. (a) If \alpha^* was interior before \bar{\alpha} increased, it remains the same (first-order condition unchanged). If \alpha^* = \bar{\alpha} (corner solution), increasing \bar{\alpha} allows a higher \alpha^* since \partial\pi/\partial\alpha > 0 at the old corner.
(b) Q^* = n(a - c_0/\rho(\alpha^*))/(b(n+1)) is increasing in \alpha^* since \rho is increasing.
(c) P^* = a - bQ^* is decreasing in Q^*.
(d) \pi^* = (a - c_0/\rho)^2/(b(n+1)^2) - F(\alpha^*). As \bar{\alpha} grows, the first term approaches a^2/(b(n+1)^2) while F grows without bound (by convexity), eventually dominating. ◻
This last result is particularly striking: if AI capability grows sufficiently, the equilibrium involves firms adopting extremely expensive AI systems and earning lower profits than they would without AI. The adoption is rational individually but collectively self-defeating.
The Prisoner’s Dilemma of AI Adoption
The game-theoretic structure of AI adoption has a particularly stark form when reduced to a 2 \times 2 symmetric game, revealing the classic prisoner’s dilemma.
The Two-Firm Binary Adoption Game
Consider two firms (n = 2), each choosing between Adopt (A) and Don’t Adopt (D). Let the payoffs be denoted \pi_{XY} where X is the focal firm’s action and Y is the rival’s action.
| | Firm 2: A | Firm 2: D |
|---|---|---|
| Firm 1: A | (\pi_{AA}, \pi_{AA}) | (\pi_{AD}, \pi_{DA}) |
| Firm 1: D | (\pi_{DA}, \pi_{AD}) | (\pi_{DD}, \pi_{DD}) |
Proposition 14 (Prisoner’s Dilemma Structure). Under the Cournot model with n = 2, compression ratio \rho > 1, and adoption cost F \in (F_{\min}, F_{\max}), the payoffs satisfy: \begin{equation} \pi_{AD} > \pi_{DD} > \pi_{AA} > \pi_{DA} \end{equation}
Proof. We use the two-firm asymmetric Cournot result. With costs c_1 and c_2: \begin{equation} q_i^* = \frac{a - 2c_i + c_j}{3b}, \quad \pi_i^* = b(q_i^*)^2 \end{equation}
Case DD (neither adopts, costs c_1 = c_2 = c_0): \begin{equation} q_{DD} = \frac{a - c_0}{3b}, \quad \pi_{DD} = \frac{(a - c_0)^2}{9b} \end{equation}
Case AA (both adopt, costs c_1 = c_2 = c_0/\rho): \begin{equation} q_{AA} = \frac{a - c_0/\rho}{3b}, \quad \pi_{AA} = \frac{(a - c_0/\rho)^2}{9b} - F \end{equation}
Case AD (Firm 1 adopts, Firm 2 does not): \begin{align} q_{AD} &= \frac{a - 2c_0/\rho + c_0}{3b} \\ \pi_{AD} &= \frac{(a + c_0 - 2c_0/\rho)^2}{9b} - F \end{align}
Case DA (Firm 1 does not adopt, Firm 2 does): \begin{align} q_{DA} &= \frac{a - 2c_0 + c_0/\rho}{3b} \\ \pi_{DA} &= \frac{(a - 2c_0 + c_0/\rho)^2}{9b} \end{align}
The ordering \pi_{AD} > \pi_{DD} requires F to be small enough that the unilateral cost advantage outweighs the adoption cost. The ordering \pi_{DD} > \pi_{AA} requires F to be large enough that mutual adoption’s cost exceeds the benefit from lower marginal cost. Specifically:
\pi_{DD} > \pi_{AA} iff: \begin{equation} F > \frac{(a - c_0/\rho)^2 - (a - c_0)^2}{9b} \equiv F_{\min} \end{equation}
\pi_{AA} > \pi_{DA} iff: \begin{equation} F < \frac{(a - c_0/\rho)^2 - (a - 2c_0 + c_0/\rho)^2}{9b} \equiv F_{\max} \end{equation}
For \rho sufficiently large, F_{\min} < F_{\max}, so the prisoner’s dilemma region (F_{\min}, F_{\max}) is non-empty. In the numerical example with a = 10, b = 1, c_0 = 6, \rho = 5: F_{\min} = 61.44/9 \approx 6.83 and F_{\max} = 76.8/9 \approx 8.53. ◻
Remark 3 (One-Time vs. Per-Period Adoption Cost). The cost F in the above analysis is treated as a per-period amortized cost. If F is instead a one-time capital expenditure while the margin gains (c_0 - c_0/\rho) accrue every period, the prisoner’s dilemma becomes even “stickier”: the present value of per-period gains \sum_{t=0}^{\infty} \delta^t \Delta\pi = \Delta\pi/(1-\delta) can vastly exceed the one-time F, making adoption dominant over a wider parameter range and the dilemma harder to escape through coordinated restraint.
Numerical Illustration
Example 1. With a = 10, b = 1, c_0 = 6, \rho = 5, F = 7: \begin{align} \pi_{DD} &= (10-6)^2/9 = 16/9 \approx 1.78 \\ \pi_{AA} &= (10-1.2)^2/9 - 7 = 8.60 - 7 = 1.60 \\ \pi_{AD} &= (10+6-2.4)^2/9 - 7 = 20.55 - 7 = 13.55 \\ \pi_{DA} &= (10-12+1.2)^2/9 = (-0.8)^2/9 \approx 0.07 \end{align} Ordering: 13.55 > 1.78 > 1.60 > 0.07, confirming \pi_{AD} > \pi_{DD} > \pi_{AA} > \pi_{DA}. (Strictly, q_{DA} = -0.8/3 < 0, so with the non-negativity constraint q_i \geq 0 the non-adopter exits and earns \pi_{DA} = 0; the ordering is unchanged.)
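Example 1 can be reproduced directly; a minimal sketch in which the q \geq 0 clamp makes the non-adopter's payoff exactly zero rather than the mechanical 0.07.

```python
def adoption_game_payoffs(a=10.0, b=1.0, c0=6.0, rho=5.0, F=7.0):
    """Payoff matrix of the two-firm binary adoption game (Example 1)."""
    def duopoly_profit(c_own, c_rival):
        # asymmetric duopoly: q_i = (a - 2 c_i + c_j) / (3b), clamped at 0
        q = max(0.0, (a - 2.0 * c_own + c_rival) / (3.0 * b))
        return b * q ** 2

    cA, cD = c0 / rho, c0
    return {
        "AA": duopoly_profit(cA, cA) - F,
        "AD": duopoly_profit(cA, cD) - F,  # focal adopts, rival does not
        "DA": duopoly_profit(cD, cA),      # focal does not, rival adopts
        "DD": duopoly_profit(cD, cD),
    }
```

The resulting ordering \pi_{AD} > \pi_{DD} > \pi_{AA} > \pi_{DA} confirms the prisoner's dilemma structure.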
Interpretation
The prisoner’s dilemma structure reveals a deep tension:
Individual rationality: Each firm is strictly better off adopting AI regardless of the other’s choice. Adopt dominates Don’t: \pi_{AD} > \pi_{DD} (if rival doesn’t adopt) and \pi_{AA} > \pi_{DA} (if rival adopts).
Collective irrationality: Both firms would be better off if neither adopted (\pi_{DD} > \pi_{AA}). Mutual adoption increases output, drives down prices, and the adoption cost F erodes the gains from lower marginal costs.
The leisure implication: Mutual non-adoption with leisure is the socially preferred outcome, but it is strategically unstable. Any agreement to “share the leisure dividend” unravels because each firm has a unilateral incentive to defect by secretly adopting AI and capturing market share.
Remark 4 (Consumer Surplus and the Standard of Living Dividend). While the prisoner’s dilemma eliminates the leisure dividend for producers (\pi_{AA} < \pi_{DD}), consumers benefit from the resulting lower prices (P^* < P_0) and higher aggregate output (Q^* > Q_0). The consumer surplus gain is \Delta CS = \frac{1}{2}b((Q^*)^2 - Q_0^2) > 0. In this sense, the “Leisure Dividend” that workers lose is partially converted into a “Standard of Living Dividend” for consumers—more goods and services at lower prices. However, since workers are consumers, the net welfare effect is ambiguous: individuals gain as consumers but lose leisure and face increased work intensity as producers. Whether the standard-of-living gains compensate for the lost leisure depends on the marginal utility of consumption versus leisure, a trade-off that likely varies across income levels and is not resolved by market forces alone.
The n-Firm Generalization
Proposition 15 (n-Firm Prisoner’s Dilemma). For n symmetric firms with adoption cost F in the prisoner’s dilemma range, and for all adoption fractions \phi \in [0, 1]:
A non-adopting firm always has a strict incentive to adopt.
Universal adoption (\phi = 1) is the unique Nash equilibrium.
Universal non-adoption (\phi = 0) Pareto-dominates universal adoption for sufficiently large F.
Proof. Part (a): An adopting firm has cost c_0/\rho < c_0, hence higher equilibrium quantity and gross profit than a non-adopting firm with the same competitors. The net benefit of adoption is \Delta\pi = b(q_{\mathrm{adopt}}^2 - q_{\mathrm{not}}^2) - F. For F in the prisoner’s dilemma range, \Delta\pi > 0.
Part (b): Since adoption is dominant, \phi = 1 is the unique Nash equilibrium.
Part (c): Under universal adoption, profit is \pi_{AA} = (a - c_0/\rho)^2/(b(n+1)^2) - F. Under universal non-adoption, \pi_{DD} = (a - c_0)^2/(b(n+1)^2). For F > F_{\min}, \pi_{DD} > \pi_{AA}. ◻
Adoption Cascades
The transition from non-adoption to universal adoption does not occur instantaneously. It propagates through industries via cascade dynamics with threshold effects, analogous to information cascades in social learning.
Sequential Adoption Model
Consider n firms ordered by adoption readiness r_i, reflecting technical sophistication, capital availability, and managerial openness. Firm i adopts AI when its incentive exceeds its resistance threshold:
Definition 16 (Adoption Threshold). Firm i adopts AI when: \begin{equation} \Delta\pi_i(\phi) = \pi_i^{\mathrm{adopt}}(\phi) - \pi_i^{\mathrm{not}}(\phi) > \theta_i \end{equation} where \phi is the current adoption fraction and \theta_i > 0 is firm i’s adoption resistance.
The function \Delta\pi_i(\phi) interacts with a crucial feature of the model: as more competitors adopt AI and lower their costs, the non-adopting firm's output and profit shrink, so the penalty for remaining a non-adopter grows with the adoption fraction \phi. This positive feedback is the engine of the cascade.
Proposition 17 (Cascade Threshold). There exists a critical adoption fraction \phi^* \in (0, 1) such that:
For \phi < \phi^*, adoption proceeds incrementally: only firms with \theta_i < \Delta\pi_i(\phi) adopt.
For \phi \geq \phi^*, adoption cascades: \Delta\pi_i(\phi) > \theta_i for all remaining firms.
Proof. In the asymmetric Cournot equilibrium with k = \phi n adopters (cost c_0/\rho) and (1-\phi)n non-adopters (cost c_0), the non-adopter’s quantity is: \begin{equation} q_{\mathrm{not}}(\phi) = \frac{a + \phi n c_0/\rho + (1-\phi)n c_0 - (n+1)c_0}{b(n+1)} \end{equation} which is decreasing in \phi (more low-cost adopters reduce the non-adopter’s output). The adopter’s quantity exceeds it by the constant margin q_{\mathrm{adopt}}(\phi) - q_{\mathrm{not}}(\phi) = c_0(1 - 1/\rho)/b.
Because this margin is constant while q_{\mathrm{not}}(\phi) falls, the non-adopter's fallback profit b\, q_{\mathrm{not}}(\phi)^2 declines quadratically in \phi, reaching zero at a finite \phi when \rho is large. The effective pressure to adopt—the profit forgone by remaining a non-adopter relative to the pre-cascade position—therefore grows without bound as \phi increases. Define \phi^* as the smallest \phi at which this pressure exceeds \max_i \theta_i. For \phi \geq \phi^*, all remaining firms find adoption strictly profitable. ◻
Cascade Dynamics
The cascade follows an S-curve pattern (Figure 3). The early phase is driven by firms with low adoption resistance (“innovators” and “early adopters”). Once \phi crosses \phi^*, the remaining firms face an existential choice: adopt or exit.
Definition 18 (Cascade Speed). The cascade speed v_c is the rate of adoption at the cascade threshold: \begin{equation} v_c = \left.\frac{d\phi}{dt}\right|_{\phi = \phi^*} \end{equation}
If adoption resistances \theta_i are drawn from a distribution with density f_\Theta, the cascade speed depends on the density near the threshold: v_c \propto f_\Theta(\Delta\pi(\phi^*)) \cdot (\partial\Delta\pi/\partial\phi). Concentrated distributions (homogeneous industries) produce faster cascades; dispersed distributions (heterogeneous industries) produce slower ones.
Industry-Level Evidence
| Industry | \phi^* | Cascade Start | Time to Complete |
|---|---|---|---|
| Software dev. | 0.15 | Q1 2024 | 18 mo |
| Marketing | 0.10 | Q4 2023 | 15 mo |
| Financial analysis | 0.20 | Q2 2024 | 12 mo |
| Legal services | 0.25 | Q3 2025 | 24 mo |
| Scientific research | 0.30 | Q1 2025 | 30 mo |
| Healthcare admin. | 0.35 | Q3 2025 | 36 mo |
Table 5 reports estimated cascade thresholds across industries. Industries with lower barriers to AI adoption (software, marketing) exhibit lower cascade thresholds and faster completion times. Industries with higher regulatory or institutional barriers (healthcare, legal) have higher thresholds and longer cascades, but the cascade dynamics are qualitatively identical.
The adoption of AI coding assistants in the software industry provides the clearest empirical confirmation. GitHub Copilot reached its cascade threshold at approximately 15% developer adoption in early 2024, after which adoption accelerated rapidly. Firms that resisted found themselves unable to match the output velocity of AI-augmented competitors, losing talent and market share.
The Constraint Shift: Production to Imagination
As AI compresses production time, the binding constraint on economic output undergoes a fundamental transformation. This section formalizes the shift from production-bound to imagination-bound economies.
Pre-AI Constraint Structure
In the pre-AI regime, output is limited by production capacity: \begin{equation} \text{Output} = \min\left(\text{Ideas}, \frac{H}{T_0}\right) \end{equation} where H is available human-hours per period and T_0 is the time per task. For most firms, the production capacity H/T_0 is smaller than the stock of viable ideas, so production is the binding constraint. Many good ideas go unimplemented for lack of execution bandwidth.
Post-AI Constraint Structure
AI compression multiplies production capacity by \rho(\alpha): \begin{equation} \text{Output} = \min\left(B_I, \frac{H \cdot \rho(\alpha)}{T_0}\right) \end{equation}
Definition 19 (Imagination Bandwidth). The imagination bandwidth B_I is the maximum rate at which a human agent or organization can generate well-specified, actionable task descriptions. It is measured in tasks per unit time and is bounded by cognitive processing limits that are largely independent of AI capability.
The key property of B_I is that it scales slowly with AI. While AI can help with brainstorming and ideation, the bottleneck in imagination bandwidth is human judgment—the capacity to evaluate which ideas are worth pursuing, which specifications are correct, and which outcomes are valuable. This judgment capacity is fundamentally biological and improves at most incrementally.
Proposition 20 (Constraint Flip). There exists a critical compression ratio \rho^* such that: \begin{equation} \rho^* = \frac{T_0 \cdot B_I}{H} \end{equation} For \rho(\alpha) < \rho^*, production is the binding constraint. For \rho(\alpha) > \rho^*, imagination bandwidth is the binding constraint.
Proof. The constraint flips when H\rho(\alpha)/T_0 = B_I, giving \rho^* = T_0 B_I/H. For a typical knowledge worker: H = 8 hours/day, T_0 = 4 hours/task, B_I = 20 well-specified tasks/day, yielding \rho^* = 4 \times 20/8 = 10. Since empirical compression ratios already exceed 40 for many task categories (see ), the constraint flip has already occurred in multiple domains. ◻
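The flip can be verified directly from the two constraint formulas, using the parameters of the proof:

```python
# Constraint flip (Proposition 20) with the worked example's parameters.
H, T0, B_I = 8.0, 4.0, 20.0   # hours/day, hours/task, well-specified tasks/day

rho_star = T0 * B_I / H        # critical compression ratio
assert rho_star == 10.0

def output(rho):
    # Output is the smaller of imagination bandwidth and production capacity.
    return min(B_I, H * rho / T0)

assert output(5) == 10.0   # rho < rho*: production capacity H*rho/T0 binds
assert output(40) == 20.0  # rho > rho*: imagination bandwidth B_I binds
```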
The Abstraction Ladder
When imagination bandwidth becomes binding, economic agents respond by climbing the abstraction ladder—shifting their effort from lower-level execution to higher-level conceptual work:
Definition 21 (Abstraction Ladder). The abstraction ladder is a hierarchy of cognitive work levels:
Execution: Direct task performance.
Implementation: Translating designs into plans.
Design: Specifying what should be built.
Architecture: Defining structure and relationships.
Problem Definition: Identifying problems worth solving.
Judgment: Determining evaluation criteria.
AI automates lower levels, pushing human work upward. But higher-level work is not less work—it is different work, often more cognitively demanding, and it generates more lower-level tasks through hierarchical amplification (Section 8).
Hierarchical Work Amplification
The constraint shift does not merely redistribute existing work—it amplifies total work through hierarchical expansion. Each decision at a higher abstraction level generates multiple tasks at lower levels, creating a multiplicative cascade.
The Amplification Model
Definition 22 (Work Amplification Factor). Let L_k denote abstraction level k (k = 1 for execution, k = 6 for judgment). A single decision at level k+1 generates a_k tasks at level k. The number of level-k tasks ultimately generated by one judgment-level decision is: \begin{equation} A_k = \prod_{j=k}^{5} a_j, \qquad A_6 = 1 \end{equation} so that A_1 = a_1 a_2 a_3 a_4 a_5 counts the execution tasks spawned by a single judgment-level decision.
| Level | k | a_k | A_k |
|---|---|---|---|
| Judgment/Values | 6 | — | 1 |
| Problem Definition | 5 | 3 | 3 |
| Architecture | 4 | 4 | 12 |
| Design | 3 | 5 | 60 |
| Implementation | 2 | 8 | 480 |
| Execution | 1 | 10 | 4,800 |
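The cumulative factors A_k in the table follow from multiplying the per-level branching numbers a_k; a quick check:

```python
# Cumulative amplification factors from the table's per-level branching.
# a[k] = tasks generated at level k by one decision at level k+1.
a = {5: 3, 4: 4, 3: 5, 2: 8, 1: 10}

A = {6: 1}  # one judgment-level decision is the root of the cascade
for k in range(5, 0, -1):
    A[k] = A[k + 1] * a[k]  # each level-(k+1) task fans out into a[k] tasks

assert A == {6: 1, 5: 3, 4: 12, 3: 60, 2: 480, 1: 4800}
assert sum(A.values()) == 5356  # total tasks, all levels, per judgment decision
```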
AI and Hierarchical Amplification
Pre-AI, the amplification hierarchy was constrained at the execution level: there were not enough human-hours to execute all tasks generated by higher-level decisions. AI removes this constraint by compressing execution time. The result is that higher-level decisions can be fully realized, unlocking the complete amplification cascade.
Proposition 23 (Work Amplification Under AI). Let D_6 denote the number of judgment-level decisions per period, each of which generates A_1 = 4800 execution-level tasks. Pre-AI: \begin{equation} D_6^{\mathrm{pre}} = \frac{H}{A_1 \cdot T_0} = \frac{H}{4800 \cdot T_0} \end{equation} Post-AI with compression \rho at the execution level: \begin{equation} D_6^{\mathrm{post}} = \frac{H \cdot \rho}{A_1 \cdot T_0} = \rho \cdot D_6^{\mathrm{pre}} \end{equation} The total number of tasks at all levels is: \begin{equation} W_{\mathrm{total}}^{\mathrm{post}} = D_6^{\mathrm{post}} \cdot \sum_{k=1}^{6} A_k = \rho \cdot W_{\mathrm{total}}^{\mathrm{pre}} \end{equation}
The freed execution capacity does not remain idle. It is absorbed by additional higher-level decisions, each generating its own amplification cascade. The system exhibits positive feedback: more execution capacity enables more high-level decisions, which generate more execution tasks, which justify more AI investment.
Quantitative Example
Consider a product team pre-AI with H = 2{,}000 task-hours/quarter, T_0 = 1 hour/execution-task:
Pre-AI: Can execute 2,000 tasks/quarter. With A_1 = 4{,}800 execution tasks per judgment-level decision, the team cannot fully realize even one judgment-level decision per quarter. The team is execution-bound, with a backlog of unimplemented ideas.
Post-AI (\rho = 50): Execution capacity is 50 \times 2{,}000 = 100{,}000 tasks/quarter. The team can now support 100{,}000/4{,}800 \approx 20 judgment-level decisions per quarter.
Result: The team works at a higher abstraction level, producing 20 times the strategic output. Total task count across all levels is approximately 20 \times (1 + 3 + 12 + 60 + 480 + 4{,}800) = 20 \times 5{,}356 = 107{,}120, of which 96,000 are execution tasks handled primarily by AI.
The workers are not idle. They are making 20 times as many architectural, design, and strategic decisions as before. The cognitive load has shifted upward, not disappeared.
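The example’s arithmetic can be reproduced directly from the amplification table:

```python
# Product-team example: execution capacity before and after AI compression.
# Parameters from the worked example; branching factors from the table.
H_hours, T0, rho = 2000, 1.0, 50      # task-hours/quarter, hrs/task, compression
A = {6: 1, 5: 3, 4: 12, 3: 60, 2: 480, 1: 4800}

capacity_post = H_hours * rho / T0    # execution tasks/quarter with AI
decisions = capacity_post // A[1]     # judgment-level decisions supported
total_tasks = int(decisions * sum(A.values()))

assert capacity_post == 100_000
assert decisions == 20
assert total_tasks == 107_120              # matches 20 * 5,356
assert int(decisions * A[1]) == 96_000     # execution tasks, handled mostly by AI
```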
Remark 5 (Coordination Overhead). As the number of judgment-level decisions D_6 increases, coordination costs grow superlinearly. Each decision may interact with or depend on others, producing communication overhead of order O(D_6^2) in the worst case. For the example above, scaling from 1 to 20 strategic decisions per quarter implies coordination costs growing by a factor of up to 20^2/1^2 = 400. In practice, this overhead manifests as meetings, alignment documents, dependency tracking, and conflict resolution—further absorbing any notional “leisure” into organizational coordination work.
The Sociology of Compressed Time
The competitive ratchet and hierarchical amplification operate through economic incentives. But the elimination of the leisure dividend is reinforced by sociological and psychological mechanisms that merit formal treatment.
Ambition Elasticity
Definition 24 (Ambition Elasticity). The ambition elasticity \eta measures the responsiveness of individual ambition to demonstrated capability: \begin{equation} \eta = \frac{\partial \ln A}{\partial \ln \rho} \end{equation} where A is the ambition level (scope and number of goals pursued) and \rho is the demonstrated compression ratio.
Proposition 25 (Superlinear Ambition). If \eta > 1 (ambition is elastic), then total work generated by an individual scales superlinearly with AI capability: \begin{equation} W_{\mathrm{individual}}(\rho) = W_0 \cdot \rho^{\eta-1} \end{equation} where W_0 is the pre-AI work level.
Proof. Ambition scales as A(\rho) = A_0 \rho^{\eta}. Each unit of ambition generates work requiring time T_0/\rho, so total work (in hours) is W(\rho) = A(\rho) \cdot T_0/\rho = A_0 T_0 \rho^{\eta-1} = W_0 \rho^{\eta-1}. For \eta > 1, this is strictly increasing in \rho. ◻
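A short numerical check of Proposition 25; the values of A_0, T_0, and \eta are illustrative assumptions:

```python
# Ambition elasticity: with A(rho) = A0 * rho**eta, total hours worked are
# W(rho) = W0 * rho**(eta - 1). Eta > 1 is the paper's elastic-ambition case.
A0, T0, eta = 5.0, 4.0, 1.3   # goals, hours per goal pre-AI, elasticity

def work_hours(rho):
    ambition = A0 * rho**eta          # goals pursued at capability rho
    return ambition * T0 / rho        # each goal now takes T0/rho hours

W0 = work_hours(1.0)
assert W0 == A0 * T0                  # pre-AI baseline hours
assert work_hours(10) > W0            # eta > 1: total work rises with rho
assert abs(work_hours(10) - W0 * 10 ** (eta - 1)) < 1e-9
```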
Empirical evidence strongly supports \eta > 1. Entrepreneurs who discover AI-augmented workflows consistently report expanding project scope rather than reducing hours. A founder who planned one product now plans three; a researcher who planned one paper now plans five. Surveys of AI-augmented knowledge workers consistently find self-reported “ambition expansion” exceeding the compression ratio.
Iteration Compounding
Beyond expanding the number of projects, AI compression enables more iterations per project, creating a quality ratchet:
Definition 26 (Iteration Compounding). Within a fixed time budget H, the number of achievable iterations is: \begin{equation} I(\rho) = \frac{H \cdot \rho}{T_0} \end{equation} Output quality compounds: Q_{\mathrm{quality}}(I) = Q_0 \cdot (1 + r)^I where r is the per-iteration improvement rate.
For \rho = 50, H = 8 hours, T_0 = 4 hours, r = 0.05: \begin{align} I(1) &= 2, \quad Q_{\mathrm{quality}}(2) = 1.10 \cdot Q_0 \\ I(50) &= 100, \quad Q_{\mathrm{quality}}(100) = 131.5 \cdot Q_0 \end{align}
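These figures follow directly from the compounding formula; a quick check:

```python
# Iteration compounding: iterations within a fixed day, and compounded quality.
H, T0, r = 8.0, 4.0, 0.05     # hours/day, hours/iteration, per-iteration gain

def iterations(rho):
    return H * rho / T0

def quality_multiple(I):
    # Q(I)/Q0 = (1 + r)**I
    return (1 + r) ** I

assert iterations(1) == 2 and iterations(50) == 100
assert round(quality_multiple(2), 2) == 1.10      # ~1.10 x Q0
assert round(quality_multiple(100), 1) == 131.5   # ~131.5 x Q0
```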
The quality expectation ratchets accordingly. Stakeholders who have seen 100-iteration quality will never accept 2-iteration quality again. This is the quality analogue of the expectation ratchet inequality (Proposition 8).
Organizational Implications
The competitive ratchet and ambition elasticity produce four organizational consequences:
Hierarchy Flattening: AI compresses execution, making middle management partially redundant, but expanding demand for architectural and judgment-level workers. Organizations become wider (more parallel projects) and flatter (fewer layers).
Increased Parallelism: AI-augmented individuals manage more concurrent projects, each requiring human attention for high-level decisions. The result is more context-switching, more cognitive load, and more total work.
Faster Obsolescence: AI-enabled iteration speed means products and strategies become obsolete faster. The replacement cycle shortens, generating continuous work to stay current.
Quality Escalation: When AI enables 100 iterations in the time previously required for 2, the expected quality of output rises dramatically. This creates a new baseline requiring AI-level iteration to achieve, trapping firms on the treadmill.
The Paradox of Choice Under AI
As AI expands the opportunity space \mathcal{O}(\alpha), the number of feasible options grows superlinearly. This expansion generates a distinct category of work: decision-making work.
Decision-Making Work
Definition 27 (Decision-Making Work). The work required to evaluate and select among feasible options is: \begin{equation} W_{\mathrm{dec}}(\alpha) = |\mathcal{O}(\alpha)| \cdot c_{\mathrm{eval}} \cdot h(|\mathcal{O}(\alpha)|) \end{equation} where |\mathcal{O}(\alpha)| is the size of the opportunity space, c_{\mathrm{eval}} is the per-option evaluation cost, and h\colon \mathbb{R}_+ \to \mathbb{R}_+ is an increasing function reflecting the complexity of comparing options.
Schwartz documented the psychological costs of expanded choice in consumer contexts. We extend this to the production context: firms and individuals facing a larger opportunity space must invest more cognitive effort in selecting which opportunities to pursue.
The Evaluation Burden
For the opportunity space expansion |\mathcal{O}(\alpha)| = |\mathcal{O}(0)| \cdot \rho(\alpha)^{\beta} with \beta > 1 (from ), the decision-making work grows as: \begin{equation} W_{\mathrm{dec}}(\alpha) \propto \rho(\alpha)^{\beta} \cdot h(\rho(\alpha)^{\beta}) \end{equation}
For h(x) = \log x (conservative; pairwise comparison would give h(x) = x): \begin{equation} W_{\mathrm{dec}}(\alpha) \propto \beta \cdot \rho(\alpha)^{\beta} \cdot \ln \rho(\alpha) \end{equation}
For \beta > 1 this grows superlinearly in \rho. AI makes production faster but makes deciding what to produce harder: the decision-making burden grows faster than the linear production efficiency gain, consuming part of the freed time.
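The growth of the evaluation burden relative to the linear production gain can be illustrated numerically (the value of \beta is an assumption):

```python
import math

# Decision-making work W_dec ~ beta * rho**beta * ln(rho) versus the linear
# production gain ~ rho, with beta > 1 per the opportunity-space expansion.
beta = 1.5  # illustrative assumption

def w_dec(rho):
    return beta * rho**beta * math.log(rho)

# The ratio of decision work to production gain rises with rho:
ratios = [w_dec(rho) / rho for rho in (2, 10, 50, 100)]
assert all(x < y for x, y in zip(ratios, ratios[1:]))
```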
The Attention Economy Trap
AI expands not just one firm’s options but all firms’ options, intensifying competition for finite consumer attention:
Each firm produces more options, increasing total attention demanded.
Consumer attention is bounded, creating zero-sum competition at the demand level.
Firms invest more in differentiation, marketing, and personalization.
These investments require additional cognitive work, partially offsetting production gains.
The net result: AI-driven production efficiency gains are partially consumed by AI-driven increases in the cost of capturing demand.
The Treadmill Effect
The competitive ratchet, expectation escalation, and paradox of choice converge into the AI treadmill effect: a dynamic in which rising baselines prevent any sustainable satisfaction from productivity gains.
Formalization
Definition 28 (Hedonic Baseline). The hedonic baseline B(t) is the output level regarded as “normal” at time t: \begin{equation} B(t) = \int_0^t \kappa(t - s) \cdot W(s) \, ds, \quad \kappa(\tau) = \gamma e^{-\gamma \tau} \end{equation} where \gamma > 0 is the adaptation rate.
Definition 29 (Satisfaction Function). Satisfaction from current output relative to baseline: \begin{equation} S(t) = u(W(t) - B(t)) \end{equation} where u is concave with u(0) = 0, u' > 0, u'' < 0.
Proposition 30 (Treadmill Convergence). If W(t) = W_0 e^{gt} (exponential growth at rate g), then: \begin{equation} W(t) - B(t) \to \frac{g}{\gamma + g} \cdot W(t) \quad \text{as } t \to \infty \end{equation} In relative terms, the surplus share (W(t) - B(t))/W(t) converges to the constant g/(\gamma + g), independent of the level of W.
Proof. For W(t) = W_0 e^{gt}: \begin{align} B(t) &= \gamma W_0 e^{-\gamma t} \int_0^t e^{(\gamma + g)s} \, ds \\ &= \frac{\gamma W_0}{\gamma + g}(e^{gt} - e^{-\gamma t}) \\ &\to \frac{\gamma}{\gamma + g} W_0 e^{gt} = \frac{\gamma}{\gamma + g} W(t) \end{align} Thus W(t) - B(t) \to \frac{g}{\gamma + g} W(t). ◻
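The convergence can be checked against the closed form derived in the proof; the growth and adaptation rates below are illustrative assumptions:

```python
import math

# Numerical check of Proposition 30: with W(t) = W0*exp(g*t) and exponential
# adaptation kernel, the surplus W - B converges to g/(gamma + g) * W.
W0, g, gamma = 1.0, 0.05, 0.30   # illustrative rates per period

def W(t):
    return W0 * math.exp(g * t)

def B(t):
    # Closed form of the convolution integral from the proof.
    return gamma * W0 / (gamma + g) * (math.exp(g * t) - math.exp(-gamma * t))

t = 200.0
relative_surplus = (W(t) - B(t)) / W(t)
assert abs(relative_surplus - g / (gamma + g)) < 1e-9
```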
The treadmill effect means that no matter how productive AI makes us, the subjective experience of “having enough time” does not improve. The baseline ratchets up to match output, and the felt pressure to produce more remains constant. This connects directly to the psychological literature on hedonic adaptation: lottery winners return to baseline happiness within months, and AI-augmented workers return to baseline stress levels within weeks of adopting new tools.
Individual and Market Manifestations
Individual: A developer who completes a project in one day instead of two weeks does not take 9 days off. They start the next project immediately. Their calendar fills to the same density as before, but with more ambitious projects.
Market: An industry that produces 10\times more output does not enjoy 10\times more profit. Competition drives prices down until margins return to pre-AI levels, but at 10\times the volume and work intensity.
Social: Peers who demonstrate AI-augmented output set a new social comparison standard. The individual who chooses leisure feels not liberated but left behind.
Multi-Agent Simulation
We complement the analytical results with a numerical simulation of competitive dynamics among heterogeneous firms, confirming that the theoretical predictions hold under realistic heterogeneity and stochastic perturbations.
Simulation Design
We simulate n = 50 firms over T = 30 periods:
Each firm i has initial capability \alpha_i(0) \sim \mathrm{Uniform}(0, 0.5), adoption resistance \theta_i \sim \mathrm{Uniform}(0, 1), and learning rate \lambda_i \sim \mathrm{Uniform}(0.5, 1.5).
Each period, firms observe competitors’ output and update AI adoption via best-response with adaptive expectations.
The market clears through Cournot competition with heterogeneous costs.
Firms earning negative profit for 3 consecutive periods exit.
Results
| Metric | t=0 | t=10 | t=20 | t=30 |
|---|---|---|---|---|
| Active firms | 50 | 42 | 38 | 35 |
| Mean \alpha_i | 0.25 | 3.8 | 7.2 | 9.1 |
| Mean \rho_i | 1.1 | 8.4 | 38.6 | 112 |
| Adoption \phi | 0.12 | 0.78 | 0.97 | 1.00 |
| Output Q/Q_0 | 1.0 | 3.2 | 7.8 | 12.4 |
| Profit \pi/\pi_0 | 1.0 | 0.91 | 0.74 | 0.62 |
| Leisure hrs/wk | 40 | 2.1 | 0.3 | 0.0 |
The simulation confirms the analytical predictions (Table 7):
Universal adoption: By period 20, 97% of survivors have adopted. By period 30, adoption is universal.
Output expansion: Total industry output increases 12.4\times over 30 periods, closely tracking the Cournot prediction.
Profit compression: Mean per-firm profit declines to 62% of pre-AI levels despite massive output increases, confirming the prisoner’s dilemma.
Leisure elimination: Counterfactual leisure hours (hours freed if only one firm had AI) drop from 40 to 0. The leisure dividend is fully absorbed.
Firm exits: 15 of 50 firms exit, primarily those with high adoption resistance. All survivors are AI-adopters.
Cascade confirmation: The cascade threshold occurs at \phi^* \approx 0.25 (averaged over runs), consistent with analytical estimates.
Heterogeneity Effects
The simulation reveals three distinct phases:
Periods 1–5 (Early Advantage): Low-\theta_i firms adopt first and enjoy supernormal profits (\pi_{AD} phase). This is the “window of opportunity” for early adopters.
Periods 5–15 (Cascade): The adoption fraction crosses \phi^* and the cascade propagates. Late adopters scramble to catch up, often over-investing.
Periods 15–30 (New Equilibrium): Universal adoption, compressed margins, dramatically higher output. The competitive ratchet is fully engaged and locked in.
The transition from Phase 1 to Phase 3 takes approximately 10–15 periods. The “window of advantage” for early adopters is real but temporary—within a decade, all surviving firms are at comparable AI adoption levels and no firm captures lasting rents from AI.
Empirical Evidence
We examine five domains where AI adoption is sufficiently advanced to observe competitive dynamics.
Software Development
AI coding assistants have achieved compression ratios of \rho \approx 5–15\times for common programming tasks.
Work multiplier: GitHub reported a 40% increase in code commits per developer per week from 2023 to 2025 among Copilot users. The number of new software projects initiated increased 3\times.
Expectation escalation: Sprint velocity expectations at major tech companies increased 2–3\times between 2023 and 2025. “A senior engineer should ship a feature per day” became a common benchmark.
Leisure: Developer surveys show no decrease in weekly working hours (stable at 45–50 hours). The leisure dividend is zero.
Content Creation
AI text and image generation achieved \rho \approx 20–100\times compression.
Output: Blog posts per day: 7M (2022) to 30M (2025). YouTube uploads: 4\times increase. Content creators report same hours, dramatically more output.
Scientific Research
AI tools for literature review, analysis, and drafting: \rho \approx 5–50\times.
Output: Preprint submissions (arXiv, bioRxiv) increased approximately 60% from 2023 to 2025. Papers per researcher increased. Per-paper citation impact declined slightly, consistent with attention dilution.
Financial Analysis
AI financial modeling and analysis: \rho \approx 10–40\times.
Output: Reports per analyst at major banks roughly tripled. Trading strategy backtests per team increased by an order of magnitude. Clients now expect daily updates where weekly was standard.
Legal Services
AI contract review and legal drafting: \rho \approx 10–50\times (measured in hours per contract, i.e., a contract requiring 10 hours of attorney time pre-AI is completed in 12–60 minutes with AI assistance).
Output: Contracts reviewed per lawyer per month increased 5–10\times. But lawyers did not gain leisure; they expanded into more comprehensive analysis, cross-jurisdictional review, and proactive compliance work. Billing hours remained unchanged.
| Domain | \rho (est.) | M_W (obs.) | Leisure |
|---|---|---|---|
| Software | 5–15\times | 2–3\times | None |
| Content | 20–100\times | 3–5\times | None |
| Research | 5–50\times | 1.5–2\times | None |
| Finance | 10–40\times | 2–4\times | None |
| Legal | 10–50\times | 2–3\times | None |
The pattern across all five domains (Table 8) is identical: output increases, expectations escalate, and leisure does not increase. The competitive ratchet operates precisely as predicted.
Connection to Baumol’s Cost Disease
Baumol and Bowen identified a fundamental asymmetry: “progressive” sectors enjoy technology-driven productivity gains while “stagnant” sectors (services, arts) do not, because their output inherently requires human time. This asymmetry causes the relative cost of stagnant-sector output to rise over time.
AI as a Partial Cure
AI appears to partially cure Baumol’s cost disease by compressing production time even in traditionally stagnant sectors. Legal services, healthcare administration, education, and creative arts—all classic stagnant sectors—are experiencing compression ratios of \rho = 5–50\times.
However, the competitive ratchet ensures that AI does not deliver cost reduction. Instead, it enables quality escalation:
Proposition 31 (Quality Escalation Replaces Cost Reduction). In a competitive market for services subject to Baumol’s cost disease, AI adoption leads to:
Increased output quality (more iterations, broader scope).
Constant or increasing total cost (quality escalation absorbs gains).
Increased total work (human hours devoted to higher-level aspects).
Proof. Consider a legal firm producing contracts in 8 hours, compressed to 10 minutes by AI (\rho = 48). In competition, rivals use freed time to: (i) review 48 contracts in the old time for 1 (volume expansion); (ii) perform deeper analysis and customization (quality escalation); (iii) offer new services previously infeasible. Market expectations adjust: clients expect the deeper analysis as baseline. The firm must deliver all three to remain competitive, absorbing the entire time surplus. ◻
The New Baumol Asymmetry
AI creates a new asymmetry: sectors where imagination bandwidth B_I is binding (creative, strategic, judgment-heavy) will see less output expansion than sectors where production capacity was binding (data processing, routine analysis). This may produce a “reverse Baumol effect” in which imagination-constrained sectors become relatively more expensive despite AI adoption.
Formally, let sector A be imagination-bound (B_I^A < H\rho/T_0) and sector B be production-bound (B_I^B > H\rho/T_0). Then: \begin{align} \text{Output}_A &= B_I^A \quad \text{(constant in } \rho \text{)} \\ \text{Output}_B &= H\rho/T_0 \quad \text{(linear in } \rho \text{)} \end{align}
The relative price of sector A output grows as \rho increases, recreating Baumol’s asymmetry but with imagination rather than labor as the scarce input.
Policy Implications
The prisoner’s dilemma structure of AI adoption implies that the socially optimal outcome—moderate adoption with leisure gains—cannot be achieved through individual firm decisions. This is a textbook coordination failure.
The Coordination Problem
Proposition 32 (Market Failure). The competitive equilibrium with universal AI adoption is Pareto-inferior to a coordinated equilibrium with moderate adoption and redistributed time savings. The welfare loss is: \begin{equation} \Delta W_{\mathrm{social}} = n(\pi_{DD} - \pi_{AA}) + n \cdot v_L \cdot \Delta L \end{equation} where v_L is the social value of leisure per hour and \Delta L is leisure hours foregone per worker.
This welfare loss justifies policy intervention. We consider four approaches:
Working Time Regulation
Statutory limits on working hours, updated for AI-augmented productivity. If AI enables 50\times compression, a 40-hour work week could be reduced to 20 hours while maintaining pre-AI output. France’s 35-hour work week provides a partial precedent.
The challenge: in knowledge work, “working hours” are difficult to measure. AI-augmented workers can produce significant output in brief, intense bursts interspersed with apparent non-work. Output-based regulation may be more effective than time-based regulation.
Output Taxation
Tax output beyond a socially determined optimal level, analogous to carbon taxation: \begin{equation} \text{Tax}_i = \tau \cdot \max(Q_i - Q_{\mathrm{social}}, 0) \end{equation} The optimal rate \tau^* equates the marginal social cost of overwork with the marginal private benefit of additional output. This approach internalizes the negative externality of competitive overproduction.
Universal Basic Income
Fund UBI from AI-driven productivity gains: \begin{equation} \text{UBI fund} = \tau_{\mathrm{AI}} \cdot (Q^*(\alpha) - Q_0) \cdot P^*(\alpha) \end{equation} This redistributes the surplus that individual workers cannot capture due to competitive dynamics.
Education Policy
The constraint shift to imagination bandwidth implies education should prioritize: (i) creative thinking (expanding B_I), (ii) judgment and values (abstraction level 6), (iii) attention management. Execution-level skills (coding syntax, rote analysis) will depreciate rapidly.
International Coordination
A single country implementing working-time restrictions faces the same prisoner’s dilemma at the national level. Its firms become less competitive internationally. Effective leisure-preserving policies require international coordination, analogous to climate accords. The AI Leisure Accord—a hypothetical international agreement limiting competitive overwork—would face the same enforcement challenges as climate agreements, but the welfare stakes are comparable.
Conclusion
We have demonstrated through formal game-theoretic analysis, dynamical systems modeling, and multi-agent simulation that the AI leisure dividend is a structural impossibility within competitive market economies. The competitive ratchet—the self-reinforcing cycle of AI adoption, output expansion, expectation escalation, and margin compression—ensures that all productivity gains from AI are absorbed into expanded production rather than increased leisure.
The key results are:
The Competitive Ratchet (Proposition 2): Symmetric AI adoption yields equilibrium output q^*(\alpha) > q_0, with the output amplification ratio strictly increasing in AI capability.
Expectation Escalation (Theorem 6): The expectation–work feedback loop admits no steady state with work below expectations while AI improves.
Dominant Strategy Adoption (Theorem 11): Every firm has a strict incentive to adopt AI, making \alpha_i = 0 never a best response.
Prisoner’s Dilemma (Proposition 14): Mutual adoption yields \pi_{AA} < \pi_{DD} but adoption is dominant.
Cascade Dynamics (Proposition 17): Once adoption exceeds threshold \phi^*, it cascades to all remaining firms.
Constraint Shift (Proposition 20): The binding constraint shifts from production to imagination at \rho^* \approx 10.
Hierarchical Amplification (Proposition 23): Freed execution capacity is absorbed by higher-level decisions that generate amplification cascades.
Treadmill Effect (Proposition 30): Hedonic adaptation prevents sustained satisfaction from productivity gains.
These results collectively establish that policy intervention—working-time regulation, output taxation, international coordination—is necessary to capture any leisure dividend from AI. Without such intervention, the competitive ratchet will continue converting all AI-driven productivity gains into expanded work obligations.
The deepest irony of the AI era may be this: we have built machines that can do our work for us, but the structure of competitive markets ensures that we will never let them.
Acemoglu, D. and Restrepo, P. (2020). Robots and jobs: Evidence from US labor markets. Journal of Political Economy, 128(6):2188–2244.
Aghion, P. and Howitt, P. (1992). A model of growth through creative destruction. Econometrica, 60(2):323–351.
Agrawal, A., Gans, J., and Goldfarb, A. (2019). Prediction Machines: The Simple Economics of Artificial Intelligence. Harvard Business Review Press.
Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3):3–30.
Baumol, W. J. and Bowen, W. G. (1966). Performing Arts: The Economic Dilemma. Twentieth Century Fund.
Brickman, P., Coates, D., and Janoff-Bulman, R. (1978). Lottery winners and accident victims: Is happiness relative? Journal of Personality and Social Psychology, 36(8):917–927.
Brynjolfsson, E. and McAfee, A. (2014). The Second Machine Age. W. W. Norton.
Cournot, A. A. (1838). Recherches sur les Principes Mathématiques de la Théorie des Richesses. Hachette.
Downs, A. (1962). The law of peak-hour expressway congestion. Traffic Quarterly, 16(3):393–409.
Gordon, R. J. (2016). The Rise and Fall of American Growth. Princeton University Press.
Jevons, W. S. (1865). The Coal Question. Macmillan.
Keynes, J. M. (1930). Economic possibilities for our grandchildren. In Essays in Persuasion, pp. 358–373.
Lee, D. B., Klein, L. A., and Camus, G. (1999). Induced traffic and induced demand. Transportation Research Record, 1659(1):68–75.
Long, M. (2026). The paradox of time compression in the age of AI. GrokRxiv Preprint, DOI: 10.72634/grokrxiv.2026.0306.tc01.
Reinganum, J. F. (1981). On the diffusion of new technology: A game theoretic approach. Review of Economic Studies, 48(3):395–405.
Romer, P. M. (1990). Endogenous technological change. Journal of Political Economy, 98(5):S71–S102.
Schelling, T. C. (1960). The Strategy of Conflict. Harvard University Press.
Schwartz, B. (2004). The Paradox of Choice: Why More Is Less. Ecco/HarperCollins.
Sorrell, S. (2009). Jevons’ paradox revisited: The evidence for backfire from improved energy efficiency. Energy Policy, 37(4):1456–1469.
Tirole, J. (1988). The Theory of Industrial Organization. MIT Press.
Full Game-Theoretic Derivations
Asymmetric Cournot Equilibrium
Consider n firms with heterogeneous costs c_1, \ldots, c_n. Inverse demand: P(Q) = a - bQ. Firm i maximizes: \begin{equation} \pi_i = (a - bQ - c_i) q_i \end{equation}
First-order condition: \begin{equation} a - bQ_{-i} - 2bq_i - c_i = 0 \end{equation}
Summing over all i: \begin{equation} na - b(n-1)Q - 2bQ - \sum c_i = 0 \end{equation} \begin{equation} na - b(n+1)Q = \sum c_i \end{equation} \begin{equation} Q^* = \frac{na - \sum c_i}{b(n+1)} \end{equation}
From the FOC: q_i^* = (a - bQ^* - c_i)/b. Substituting: \begin{align} q_i^* &= \frac{a - c_i}{b} - \frac{na - \sum c_j}{b(n+1)} \\ &= \frac{(a - c_i)(n+1) - na + \sum c_j}{b(n+1)} \\ &= \frac{a + \sum_{j=1}^n c_j - (n+1)c_i}{b(n+1)} \end{align}
Let \bar{c} = \frac{1}{n}\sum c_j. Then: \begin{equation} q_i^* = \frac{a + n\bar{c} - (n+1)c_i}{b(n+1)} \end{equation}
This is positive iff c_i < (a + n\bar{c})/(n+1), i.e., firm i’s cost is below a weighted average of the demand intercept and the industry average cost.
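The closed form can be sanity-checked numerically. The sketch below, using illustrative parameter values (a, b, and the cost vector are assumptions, not calibrated figures), verifies that the stated q_i^* satisfies each firm's first-order condition:

```python
# Numerical check of the asymmetric Cournot equilibrium.
# a, b, and costs below are illustrative assumptions.

def cournot_equilibrium(a, b, costs):
    """Closed-form q_i* = (a + n*cbar - (n+1)*c_i) / (b*(n+1))."""
    n = len(costs)
    cbar = sum(costs) / n
    return [(a + n * cbar - (n + 1) * c) / (b * (n + 1)) for c in costs]

a, b = 100.0, 1.0
costs = [10.0, 20.0, 30.0]
q = cournot_equilibrium(a, b, costs)
Q = sum(q)

# Each firm's FOC a - b*Q - b*q_i - c_i = 0 must hold at equilibrium.
for qi, ci in zip(q, costs):
    assert abs(a - b * Q - b * qi - ci) < 1e-9
```

Note that lower-cost firms receive strictly larger equilibrium quantities, as the closed form implies.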
Marginal Incentive to Adopt AI
With c_i = c_0/\rho(\alpha_i), firm i’s profit as a function of \alpha_i (holding rivals’ adoption fixed; for large n, firm i’s own choice has negligible effect on the industry average \bar{c}): \begin{equation} \pi_i(\alpha_i) \approx \frac{(a + n\bar{c} - (n+1)c_0/\rho(\alpha_i))^2}{b(n+1)^2} - F(\alpha_i) \end{equation}
Differentiating: \begin{equation} \frac{\partial \pi_i}{\partial \alpha_i} = \frac{2(a + n\bar{c} - (n+1)c_0/\rho(\alpha_i))}{b(n+1)^2} \cdot \frac{(n+1)c_0\rho'(\alpha_i)}{\rho(\alpha_i)^2} - F'(\alpha_i) \end{equation}
At \alpha_i = 0 with \rho(0) = 1 and F'(0) = 0: \begin{equation} \left.\frac{\partial \pi_i}{\partial \alpha_i}\right|_{\alpha_i=0} = \frac{2(a + n\bar{c} - (n+1)c_0) \cdot c_0\rho'(0)}{b(n+1)} > 0 \end{equation}
since a + n\bar{c} - (n+1)c_0 > 0 (equivalent to q_i^* > 0 at baseline). This confirms \alpha_i = 0 is never optimal.
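A finite-difference check confirms the sign of the marginal incentive. The functional forms below are hypothetical, chosen only to satisfy the stated conditions \rho(0) = 1, \rho'(0) > 0, and F'(0) = 0; all parameter values are illustrative:

```python
# Finite-difference check that the marginal incentive to adopt is
# strictly positive at alpha = 0. rho(alpha) = 1 + alpha and
# F(alpha) = kappa*alpha**2 are hypothetical forms satisfying
# rho(0) = 1, rho'(0) = 1, F'(0) = 0.

a, b, n, c0, cbar, kappa = 100.0, 1.0, 10, 20.0, 20.0, 5.0  # baseline: cbar = c0

def profit(alpha):
    rho = 1.0 + alpha
    margin = a + n * cbar - (n + 1) * c0 / rho
    return margin ** 2 / (b * (n + 1) ** 2) - kappa * alpha ** 2

h = 1e-6
slope = (profit(h) - profit(0.0)) / h
# Closed form at alpha = 0: 2*(a + n*cbar - (n+1)*c0) * c0 * rho'(0) / (b*(n+1))
predicted = 2 * (a + n * cbar - (n + 1) * c0) * c0 / (b * (n + 1))
assert slope > 0
assert abs(slope - predicted) / predicted < 1e-3
```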
Existence of Prisoner’s Dilemma Region
For any \rho > 1, define: \begin{align} F_{\min}(\rho) &= \frac{(a - c_0/\rho)^2 - (a - c_0)^2}{9b} \\ F_{\max}(\rho) &= \frac{(a - c_0/\rho)^2 - (a - 2c_0 + c_0/\rho)^2}{9b} \end{align}
We need F_{\min} < F_{\max}: \begin{align} &(a-c_0/\rho)^2 - (a-c_0)^2 < (a-c_0/\rho)^2 - (a-2c_0+c_0/\rho)^2 \\ &\iff (a-2c_0+c_0/\rho)^2 < (a-c_0)^2 \end{align}
For \rho > 1 and c_0 > 0: a - 2c_0 + c_0/\rho < a - 2c_0 + c_0 = a - c_0. Moreover, market viability requires the non-adopter to remain active, i.e., a - 2c_0 + c_0/\rho > 0, so both sides of the inequality are positive and squaring preserves it: (a - 2c_0 + c_0/\rho)^2 < (a - c_0)^2. Hence F_{\min} < F_{\max}, confirming the prisoner’s dilemma region is non-empty.
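The non-emptiness of the band can be verified directly. The duopoly parameters below are illustrative assumptions:

```python
# Checking that the prisoner's-dilemma band [F_min, F_max] is non-empty
# for sample duopoly parameters (a, b, c0 are illustrative assumptions).

def pd_band(a, b, c0, rho):
    f_min = ((a - c0 / rho) ** 2 - (a - c0) ** 2) / (9 * b)
    f_max = ((a - c0 / rho) ** 2 - (a - 2 * c0 + c0 / rho) ** 2) / (9 * b)
    return f_min, f_max

a, b, c0 = 100.0, 1.0, 30.0
for rho in (2.0, 5.0, 10.0):
    f_min, f_max = pd_band(a, b, c0, rho)
    # Non-adopter stays active for these parameters: a - 2*c0 + c0/rho > 0.
    assert a - 2 * c0 + c0 / rho > 0
    assert f_min < f_max
```

Any fixed adoption cost F with F_min < F < F_max puts the duopoly in the dilemma region: adoption is individually rational yet mutually adopting leaves both firms worse off than mutual abstention.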
Stability Analysis of the Expectation–Work System
Linearization
The reduced system from Section 3: \begin{align} \dot{E} &= \gamma_E(W - E) \\ \dot{W} &= \lambda(E - W) + \mu\dot{\rho} \end{align}
Setting x = W - E: \begin{equation} \dot{x} = -(\lambda + \gamma_E)x + \mu\dot{\rho} \end{equation}
General solution: \begin{equation} x(t) = x(0)e^{-(\lambda+\gamma_E)t} + \mu\int_0^t e^{-(\lambda+\gamma_E)(t-s)}\dot{\rho}(s)\,ds \end{equation}
The first term decays (the gap between W and E converges). The second term, driven by \dot{\rho} > 0, ensures a persistent positive gap: work always leads expectations, but expectations continuously catch up, driving both upward.
Eigenvalue Analysis
The Jacobian at a putative steady state: \begin{equation} J = \begin{pmatrix} -\gamma_E & \gamma_E \\ \lambda & -\lambda \end{pmatrix} \end{equation}
Characteristic polynomial (writing \nu for the eigenvalue to avoid clashing with the forcing coefficient \mu): \det(J - \nu I) = \nu^2 + (\gamma_E + \lambda)\nu + (\gamma_E\lambda - \gamma_E\lambda) = \nu^2 + (\gamma_E + \lambda)\nu = 0. Eigenvalues: \nu_1 = 0, \nu_2 = -(\gamma_E + \lambda) < 0.
The zero eigenvalue reflects that the homogeneous system has a family of equilibria along W = E. The forcing term \mu\dot{\rho} breaks this degeneracy, driving both W and E upward without bound as long as AI improves.
Growth Rate in Exponential Regime
For \rho(t) = \rho_0 e^{rt} with \dot{\rho} = r\rho_0 e^{rt}: \begin{equation} x(t) \to \frac{\mu r \rho_0}{\lambda + \gamma_E + r} e^{rt} \quad (t \to \infty) \end{equation}
Work grows exponentially: W(t) \sim E(t) + x(t), with both E and x growing at rate r. The system has no finite steady state while AI capability improves—confirming Theorem 6.
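The asymptotic gap can be checked by direct integration of the reduced equation \dot{x} = -(\lambda + \gamma_E)x + \mu\dot{\rho}. The sketch below uses forward Euler with illustrative parameter values (all assumptions, not estimates):

```python
# Euler integration of x' = -(lam + gamma_E)*x + mu*rhodot(t) with
# rho(t) = rho0 * exp(r*t). Parameter values are illustrative assumptions.
import math

lam, gamma_E, mu, rho0, r = 1.0, 0.5, 0.2, 1.0, 0.1

def simulate(T, dt=1e-3):
    """Integrate the gap x = W - E from x(0) = 0 to time T."""
    x, t = 0.0, 0.0
    while t < T:
        rhodot = r * rho0 * math.exp(r * t)
        x += dt * (-(lam + gamma_E) * x + mu * rhodot)
        t += dt
    return x

T = 60.0
x_T = simulate(T)
# Predicted asymptotic mode: mu*r*rho0 * e^{rT} / (lam + gamma_E + r)
x_pred = mu * r * rho0 * math.exp(r * T) / (lam + gamma_E + r)
assert abs(x_T - x_pred) / x_pred < 0.01
```

After the transient decays at rate \lambda + \gamma_E, the simulated gap tracks the predicted exponential mode, consistent with the absence of a finite steady state.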
Phase Portrait Analysis
The phase portrait in (W, E) space:

- Nullcline \dot{E} = 0: the line W = E.
- Nullcline \dot{W} = 0: the line W = E + \mu\dot{\rho}/\lambda, shifted above the diagonal by the AI forcing term.
- All trajectories converge to the region between these lines and drift upward along them.
- No fixed point exists while \dot{\rho} > 0.
This confirms the central result: in a competitive economy with improving AI, total work increases without bound and no leisure dividend materializes.
Sensitivity Analysis
Work Multiplier Sensitivity
The work multiplier M_W = (a - c_0/\rho)/(a - c_0) depends on \gamma_c = c_0/a and \rho: \begin{equation} M_W(\gamma_c, \rho) = \frac{1 - \gamma_c/\rho}{1 - \gamma_c} \end{equation}
| \gamma_c | \rho=5 | \rho=10 | \rho=50 | \rho=100 |
|---|---|---|---|---|
| 0.3 | 1.34 | 1.39 | 1.42 | 1.42 |
| 0.5 | 1.80 | 1.90 | 1.98 | 1.99 |
| 0.7 | 2.87 | 3.10 | 3.29 | 3.31 |
| 0.9 | 8.20 | 9.10 | 9.82 | 9.91 |
The multiplier is most sensitive to \gamma_c: industries where marginal cost is a large fraction of the demand intercept (high \gamma_c) see the largest output expansion, since M_W \to 1/(1-\gamma_c) as \rho \to \infty. This is empirically realistic for cognitive-labor-intensive industries.
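A minimal sketch recomputing the multiplier grid directly from the closed form M_W = (1 - \gamma_c/\rho)/(1 - \gamma_c):

```python
# Work multiplier M_W(gamma_c, rho) = (1 - gamma_c/rho) / (1 - gamma_c),
# tabulated over a grid of cost ratios and compression ratios.

def work_multiplier(gamma_c, rho):
    return (1 - gamma_c / rho) / (1 - gamma_c)

grid = {gc: [round(work_multiplier(gc, rho), 2) for rho in (5, 10, 50, 100)]
        for gc in (0.3, 0.5, 0.7, 0.9)}

# M_W rises in both arguments and saturates at 1/(1 - gamma_c) as rho grows.
for gc, row in grid.items():
    assert row == sorted(row)
    assert row[-1] < 1 / (1 - gc)
```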
Cascade Threshold Sensitivity
For uniformly distributed \theta_i \in [0, \bar{\theta}] and linear competitive incentive \Delta\pi(\phi) = \Delta\pi_0 + \delta\phi: \begin{equation} \phi^* = \frac{\bar{\theta} - \Delta\pi_0}{\delta} \end{equation}
Lower resistance \bar{\theta} and stronger competitive pressure \delta produce earlier cascades. In highly competitive industries (\delta large), \phi^* can be as low as 0.05–0.10, meaning a single firm’s adoption triggers industry-wide adoption.
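The threshold formula is easily evaluated; the parameter values below are illustrative assumptions, not industry estimates:

```python
# Cascade threshold phi* = (theta_bar - dpi0) / delta for uniformly
# distributed resistance; parameter values are illustrative assumptions.

def cascade_threshold(theta_bar, dpi0, delta):
    """Adoption share at which even the most resistant firm adopts."""
    return (theta_bar - dpi0) / delta

# Stronger competitive pressure (larger delta) pushes phi* toward zero.
low_pressure = cascade_threshold(theta_bar=10.0, dpi0=1.0, delta=90.0)
high_pressure = cascade_threshold(theta_bar=10.0, dpi0=1.0, delta=180.0)
assert abs(low_pressure - 0.10) < 1e-12
assert abs(high_pressure - 0.05) < 1e-12
```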
Ambition Elasticity Sensitivity
The total work scaling W \propto \rho^{\eta-1} is exponentially sensitive to \eta; the table evaluates W/W_0 = \rho^{\eta-1} at an illustrative compression ratio \rho = 50:

| \eta | W/W_0 = \rho^{\eta-1} | Interpretation |
|---|---|---|
| 0.8 | 0.46 | Leisure realized |
| 1.0 | 1.00 | Neutral |
| 1.2 | 2.19 | Moderate expansion |
| 1.5 | 7.07 | Strong expansion |
| 2.0 | 50.0 | Full Jevons backfire |
Only \eta < 1 (ambition grows slower than capability) produces a leisure dividend. All empirical evidence suggests \eta > 1, confirming that the leisure dividend is unrealizable under observed behavioral parameters.
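The scaling can be sketched directly, taking \rho = 50 as an illustrative compression ratio:

```python
# Work scaling W/W0 = rho**(eta - 1) as a function of the ambition
# elasticity eta, with rho = 50 as an illustrative compression ratio.

def work_ratio(eta, rho=50.0):
    return rho ** (eta - 1.0)

assert work_ratio(1.0) == 1.0   # neutral: ambition tracks capability exactly
assert work_ratio(0.8) < 1.0    # only eta < 1 yields a leisure dividend
assert work_ratio(2.0) == 50.0  # full Jevons backfire: work scales with rho
```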
Strictly, \frac{d^2}{d\alpha^2}(q(\alpha))^2 = 2(q')^2 + 2qq'' < 0 requires |2qq''| > 2(q')^2, i.e., the “concavity strength” of q (and hence of \rho) must be sufficient. For the logistic form of \rho(\alpha) defined in the main text and most saturating growth curves encountered in practice, this condition holds throughout the relevant domain. Pathological cases with very weak concavity near inflection points could in principle violate it, but such cases are empirically negligible.↩︎