AI Thesis
The five-layer AI stack, Jevons paradox, AI agents, and why inference scaling matters.
AI Stack Framework
Jensen Huang's five-layer framework for understanding where value accumulates across the AI technology stack, from energy to applications.
Chips Layer
How GPUs became the dominant compute unit for AI workloads, and why Nvidia's CUDA ecosystem creates deep, self-reinforcing lock-in.
Infrastructure Layer
How the three hyperscalers — AWS, Azure, and Google Cloud — are building the physical and software infrastructure that AI runs on.
Models Layer
The frontier AI model companies — OpenAI, Anthropic, xAI — their economics, growth trajectories, and how investors gain exposure.
Applications Layer
Why the AI application layer defies broad generalisation and requires individual evaluation of each company's value creation potential.
Jevons Paradox
How efficiency improvements in AI — cheaper models, lower inference costs — lead to more compute demand, not less, mirroring a 19th-century observation about coal consumption.
AI Agents
Why the move from chat interfaces to agentic AI systems with tool access amounts to a structural change in economic output per employee.
Inference Scaling
Why inference — running trained models to serve user requests — is where AI's recurring revenue will come from, and the dynamics driving its continued growth.