A Factor-Based Framework for Modeling AI's Impact on U.S. Labor Markets
by Jed Miller
Unveiling a theoretical model that captures the complex dynamics of AI's labor market transformation. Building on Anthropic's Economic Index research, this framework models employment impact through four interconnected components: displacement, creation, market maturity, and demand effects. Working upward from the U.S. industry level, the result is a more nuanced understanding of how AI is reshaping, and will continue to reshape, labor markets.
How Does Claude 4 Think? — Sholto Douglas & Trenton Bricken
by Dwarkesh Podcast
Sholto and Trenton's return to Dwarkesh's podcast delivers a masterclass in current AI capabilities and the technical architecture underlying modern language models. Their discussion of reinforcement learning from verifiable rewards finally "working" at scale provides crucial context for understanding why we're seeing such rapid progress in coding and mathematics - domains where clear feedback loops enable effective training. But the mechanistic interpretability insights prove most fascinating: seeing neural circuits perform multi-step reasoning, from medical diagnosis to mathematical computation, while simultaneously revealing when models resort to pure confabulation. Their exploration of sparse autoencoders uncovering 30 million distinct features in Claude 3 Sonnet - including abstractions like "code vulnerabilities" that fire across seemingly unrelated contexts - illuminates just how alien yet structured these systems truly are. The implications for my AI Labor Market Index work are profound: if we can now trace the actual computational pathways models use for complex reasoning, we're moving beyond black-box predictions toward genuine understanding of how AI transforms cognitive work.
Why so many IT projects go so horribly wrong
by The Economist
The Economist's analysis of Flyvbjerg's research reveals IT projects' distinctive risk profile – fewer overruns than other megaprojects but catastrophic failures when they do derail, with mean overruns exceeding 450%. This "fat tail" phenomenon stems from IT's intangibility, organizational complexity, and rushed planning phases. The article's warnings about AI deployment echo my approach to the Labor Market Index architecture – emphasizing modular components, clear constraints, and deliberate planning to avoid the all-too-common pattern where technical ambition outpaces organizational readiness, turning promising innovations into cautionary tales.
2027 Intelligence Explosion: Month-by-Month Model
by Dwarkesh Podcast
Scott Alexander and Daniel Kokotajlo's month-by-month AI-2027 scenario (https://ai-2027.com) presents a detailed roadmap to potential superintelligence through AI systems capable of accelerating their own development, raising profound questions about humanity's future that resonate with my "Beyond the Horizons" essay. Their prediction of increasingly capable AI agents gradually transforming research, economics, and governance offers a concrete timeline for the subtle erosion of human agency I explored, while challenging us to consider whether we can maintain meaningful influence in systems that may soon exceed our comprehension.
The Government Knows AGI is Coming
by The Ezra Klein Show
A sobering discussion with Ben Buchanan, former top AI adviser in the Biden White House, about the rapid timeline for artificial general intelligence and our institutional unpreparedness. Buchanan offers a uniquely authoritative perspective on the policy challenges of AGI development, balancing technical understanding with governance realities.
RJ Scaringe: Reimagining Transportation's Future Beyond EVs
by Rich Roll Podcast
A refreshing departure from my usual AI-focused content, this conversation with Rivian's founder offers invaluable insights on systems-level thinking and long-term sustainability. Scaringe articulates a compelling vision for transportation that extends beyond electrification, addressing production methods, supply chains, and end-of-life considerations. His perspective on innovation cycles and incremental progress provides a thoughtful counterbalance to the rapid-transformation narratives common in the AI space, reminding us that meaningful change often happens through persistent, methodical evolution rather than overnight disruption.
Which Economic Tasks are Performed with AI?
by Anthropic Research Team
A groundbreaking empirical study providing the first large-scale analysis of AI usage across economic tasks. The research reveals that current AI interactions are predominantly collaborative, with 57% augmenting human capabilities rather than automating them, and that usage is concentrated in technology and content creation roles. The study offers crucial insights into how AI is incrementally transforming work, bridging theoretical discussions of technological change with concrete, data-driven evidence.
[Chart: Human-AI Interaction Patterns, based on Anthropic's 2024 Economic Impact Research]
Beyond the Horizon: A Meditation on Human Agency in an AI World
by Jed Miller
A response to Gradual Disempowerment by Jan Kulveit, Raymond Douglas, et al., exploring the tension between AI optimization and human agency through the lens of a Colombian coffee farmer. While acknowledging the challenges of maintaining human influence in increasingly AI-driven systems, this essay argues for the critical importance of embedding human agency in technological progress.
Data Estates in the AI Era: Architecting for Scale and Innovation
by Jed Miller
Exploring how enterprise architecture principles guide the evolution of modern data estates, enabling AI innovation while maintaining scalable, robust foundations for next-generation technology solutions.
AI Computing Hardware - Past, Present, and Future
by Last Week in AI
A comprehensive exploration of AI's hardware foundations, from the evolution of GPUs to cutting-edge semiconductor fabrication. The episode masterfully breaks down complex topics like extreme ultraviolet lithography, high bandwidth memory, and the implications of export controls on AI chip development. Particularly valuable for understanding how advances in computing architecture shape AI's capabilities and future potential.
Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development
by Jan Kulveit, Raymond Douglas, et al.
A compelling analysis of how even gradual AI advancement could lead to humanity losing meaningful influence over key societal systems. While my Silicon Valley Tragicomedy essay explores AI's trajectory through the lens of Hegelian dialectics and historical patterns of technological revolution, this paper offers a contrasting perspective focused on incremental systemic risks. It challenges common sudden-takeover scenarios, examining instead how gradual progress might erode human agency through interconnected economic, cultural, and political changes. This thought-provoking counterpoint offers a crucial framework for technology leaders seeking to ensure AI development enhances rather than diminishes human flourishing.
The Hidden Environmental Cost of AI
by Hard Fork, The New York Times
Start at 34 minutes for a sobering examination of artificial intelligence's environmental footprint. The discussion delves into the stark reality of AI's energy consumption and carbon emissions, highlighting how the race for more powerful AI models comes with significant environmental costs. A crucial perspective for anyone working in AI to consider as we balance technological advancement with environmental responsibility.
A Tragicomedy in Silicon Valley: Hegelian Dialectics, Marxism, AI, and Jevons Paradox
by Jed Miller
Exploring how historical patterns of technological revolution can illuminate our current AI transformation, examining the dialectics of change and what it means for our collective future.
Fei-Fei Li on Spatial Intelligence and Human-Centered AI
by Possible Podcast
A thought-provoking discussion that reframes AI development through a distinctly human-centric lens, emphasizing the critical importance of diverse perspectives in training large language models. Dr. Li masterfully illustrates how AI exists within a broader ecosystem that reflects our societal values and challenges, while offering an optimistic vision of AI's potential to tackle humanity's greatest challenges, including reducing our dependence on fossil fuels through advances in fusion energy research.
Machines of Loving Grace: How AI Could Transform the World for the Better
by Dario Amodei
A masterful analysis that meticulously explores AI's potential to compress a century of scientific progress into a decade, while thoughtfully acknowledging both structural challenges and human factors in achieving this transformation. Amodei presents a compelling framework for understanding how intelligence intersects with physical and societal constraints, offering a vision that's both ambitious and grounded in realistic possibilities for advancing human welfare.
How Transformers Make LLMs Work
by Grant Sanderson
The T in GPT! An engaging and accessible explanation of transformer architecture that makes the fundamental concepts behind large language models surprisingly fun to understand.