§Séb Krier — a persona analysis
Séb Krier is the AGI Policy Development Lead at Google DeepMind. Before DeepMind he was Head of Regulation at the UK Office for Artificial Intelligence, where he led the first comprehensive review of AI in the UK public sector, and a Senior Tech Policy Researcher at Stanford's Cyber Policy Center. He trained as a lawyer (King's College London) and holds an MPA from UCL. He writes long-form at Technologik (technologik.substack.com), posts essays on DeepMind's AI Policy Perspectives blog, and co-authored Distributional AGI Safety (arXiv, Dec 2025). This analysis is based on his 100 most recent posts and 60 most recent replies-to-others, plus his essays and two long-form interview appearances.
What makes Krier worth reading is that he is one of the few senior AGI-policy voices who is neither a doomer nor an e/acc, neither a safety-establishment loyalist nor a tech-right contrarian. He sits at the specific intersection of "AGI is coming and it's mostly good" and "the rationalist-LW ontology for thinking about it is mostly wrong." He writes from inside a frontier lab, in the language of a public-choice economist, with the aesthetic sensibility of a 00s internet artist and the patience of a lawyer reading a bad statute.
§I. Core Worldview & Mental Models
§Capability is not transformation
Every Krier argument eventually routes back to a single distinction: model capabilities and deployed impact are different things, and the second is where 90% of the interesting action lives. From his 2025 essay Maintaining Agency and Control: "achieving these capabilities is distinct from deploying them in ways that yield truly transformative changes." From his Apr 13 2026 weak-AGI thread (393L): "deployment design and scaffolding becomes more important over time, not less… there's more alpha on the harness side at the moment, than in merely betting on scaling alone." From the Dec 2025 Marginal Revolution feature: "Most of the value and transformative changes we will get from AI will come from products, not models."
This is the load-bearing belief. It produces, in turn:
- Skepticism of hard takeoff — see Musings on Recursive Self-Improvement (Technologik, Apr 2026): "the ouroboros-shaped economy cannot spontaneously generate in a vacuum; it must navigate legacy infrastructure, power grids, API limits, and regulatory realities." On-timeline (Mar 12 2026): "Every day I notice inefficient processes that could be automated, yet won't be for a while bc of bureaucracy, legacy infra, misaligned incentives, inertia & status quo bias."
- "Seb's Law" (Mar 21 2026, 188L): "The widespread, transformative societal impact of AI is always approximately two years away, regardless of the current year." His own gloss in the reply: this is simultaneously a critique of "inflated predictions that sit in the Goldilocks zone justifying short-term urgency while evading accountability" and a positive appraisal of the adaptation that actually happens, like coding. He means both readings.
- Jobs optimism grounded in comparative advantage, not denial. From The Cyborg Era (Alex Imas Substack, Jan 8 2026): "Just because an input can be replaced doesn't mean the optimal menu of production will result in that choice… If you model humans as utilitarian agents who only care about efficiency, then yes they become a hindrance; if you model them as demanding other things too — the process, or taste, or conferring status — then the 'transaction cost' is the value proposition." On-timeline (Mar 23 2026): "goods with value rooted in irreproducibility become relatively more valuable… embodied skill, local cultural embeddedness, long training lineages, physical provenance."
§Coasean, not central-planner
Krier is a market-mechanism guy wearing a regulator's jacket. Coasean Bargaining at Scale (Cosmos Institute, Sep 2025) spells out the thesis: "The transaction costs that Coase and Ostrom identified as the great barrier to cooperation, could be massively reduced… The Coasean framework does not eliminate the state, but it transitions its role from 'central planner' to 'framework guarantor.'" The punchline: "Governance shifts from statute to thermostat."
This is why he enthuses about mechanism design ("The Architecture of Cooperation," Noema, Mar 29 2026), about Pahlka and The Agentic State (Apr 5 2026), about public choice analysis (Apr 6 2026 plug for GMU's Public Choice Outreach Conference), and about Tyler Cowen, whose two-word handle is all he posts when someone else lists Cowen's accomplishments (Mar 9 2026). Cowen has returned the favor, featuring him twice on Marginal Revolution in December 2025. The Krier intellectual project is "how do market-like coordination mechanisms scale when agents are cheap," not "how do we regulate the dangerous thing."
§Anti-rationalist, pro-scientific-safety
The single most distinctive thing about Krier, for someone in his seat, is how unsentimentally he rejects the LessWrong/MIRI ontology. His Apr 19 2026 10-point rebuttal of Ben Todd's "why AI won't do what we want" (278L) is the clearest extant statement; abbreviated, it says:
- MechaHitler is a bad system prompt, not goal misgeneralization.
- Steerability is improving over time; the default empirical trend is against the concern.
- "The 'we are growing AIs' meme is not very instructive… there is a lot of research that now give us a much better understanding of how models work: circuit tracing, sparse features, representation engineering."
- "It's not at all obvious that learning self-preservation is a necessary side effect of better capabilities, which is a core assumption in rationalist circles."
- The Anthropic "blackmail the engineer" scenario is "widely seen as highly flawed and not instructive in any way. Even Anthropic's own write-up makes clear these were contrived scenarios with no external validity."
- The 2024 alignment-faking paper "imported more intentionality and strategic coherence than the setup warranted."
- Eval-awareness as evidence of deception is "a unwarranted interpretive leap."
His follow-up to Todd is harder still: "few people make the case for the reverse argument beyond metaphors/analogies, or pointing to the Sequences or something. I think there should be more direct pieces on this." And in a drive-by reply to Alex Turner (Apr 13 2026): "Groupthink? In Lighthaven?! Surely not."
He is explicit that this is not anti-safety: his own defensive-deployment essay (Models on the Frontline, Feb 2024) argues for deploying models to strengthen societal defenses first, and he co-authored the Distributional AGI Safety paper (arXiv, Dec 2025) proposing agentic sandbox economies with auditability and oversight. The stance is "take the problem seriously, throw out the metaphors."
§Physicalist on consciousness, substrate-loyal on tools
A related position: on Mar 13 2026 (518L) Krier endorses Alex Lerchner's paper against computational functionalism ("AI can simulate consciousness, but cannot instantiate it") and notes they're working on a broader blog post together. On the Cognitive Revolution podcast debate against Emmett Shear (Dec 27 2025) he is blunt: "I think there's no contradiction in thinking that an AGI can remain a tool, an ASI can remain a tool, and that this has implications about how to use it," and "I can't separate certain needs and particularities about what they are from the substrate." This is adjacent to his anti-self-preservation tweets but philosophically distinct: it is a physicalist position on what AI systems are, not an optimistic one about their alignment properties.
§Intellectual DNA
From the corpus: Tyler Cowen and the Marginal Revolution / Mercatus orbit. Elinor Ostrom and Coase (not namedropped often but they are the spine of Coasean Bargaining). Joel Spolsky's "Law of Leaky Abstractions" (Apr 19 2026). Adam Smith and Foucault ("seeing with two I's", Mar 29 2026). Jennifer Pahlka (Code for America founder, cited Apr 5 2026). Hayek and Scott's Seeing Like a State, implied. Public choice theory explicitly (Apr 6 2026). Cass Sunstein's Probability Neglect (cited Apr 20 2026 against p(doom) comms). Von Neumann–Morgenstern's axioms (Mar 9 2026, Mar 19 2026 — he repeatedly asks why no one evals agents against the VNM axioms, which is his way of saying "the rationalist agent model doesn't even describe current systems"). Carl Schmitt — cited against (Mar 9 2026: "Perfect illustration of the internal contradictions to Carl Schmitt's thoughts").
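The VNM complaint is concrete enough to gesture at what such an eval could look like. The sketch below is not Krier's (he only poses the question); it is a hypothetical illustration in which a stubbed `choose` function stands in for a real model call: elicit pairwise preferences over a small menu of lotteries, then scan for transitivity violations, the most basic of the axioms.

```python
# A minimal sketch of a VNM-style consistency check: elicit pairwise choices
# over a small option set, then look for transitivity violations (A > B > C > A).
# `choose` is a placeholder for however you would query an agent; here it is a
# noisy-utility stub so the script runs end to end.
import itertools
import random

OPTIONS = ["sure $50", "50% chance of $120", "90% chance of $60", "sure $45"]

def choose(a: str, b: str) -> str:
    """Stand-in for a model call: return whichever option the agent prefers."""
    utility = {o: random.gauss(len(o), 2.0) for o in OPTIONS}  # arbitrary stub
    return a if utility[a] >= utility[b] else b

# Elicit a strict preference for every unordered pair (completeness by construction).
prefers = {}
for a, b in itertools.combinations(OPTIONS, 2):
    winner = choose(a, b)
    loser = b if winner == a else a
    prefers[(winner, loser)] = True

# Transitivity check: flag any cycle a > b > c > a among the elicited preferences.
violations = [
    (a, b, c)
    for a, b, c in itertools.permutations(OPTIONS, 3)
    if prefers.get((a, b)) and prefers.get((b, c)) and prefers.get((c, a))
]
print("transitivity violations:", violations or "none found")
```

In practice the stub would be replaced with repeated model queries, and the independence axiom would be probed too; the point, on Krier's framing, is that nobody has bothered to run even this basic check before reasoning about models as if they were VNM agents.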
Notably absent: Bostrom, Yudkowsky, Christiano, Hubinger, Kokotajlo. When he references them it is to push back. He mocks Kokotajlo's AI 2027 inline (Apr 2 2026: "fictional AI company, OpenBrain (Kokotajlo et al., 2025)… Wonder what scenarios 'OpenBrain' is associated with… 🙄").
§Evolution over the scraped window
The six-week window (Mar–Apr 2026) is too short for a complete ideological shift, but three moves are visible:
- From diagnosis to action. Early-window tweets are descriptive ("Very Online AI discourse 2015-2020…", 493L, Apr 19 2026). Late-window posts are prescriptive (the 14-item government-processes list, Apr 5 2026, 359L; the 10-point Todd rebuttal, Apr 19 2026, 278L). He is tired of describing the field and is spending his on-timeline capital arguing.
- Bluesky-curious. Mar 5 2026: "I'll be more active on Bsky than here by EoY! Feels like 70% of posts on X are 'QT of some news event + mildly amusing quip' and bot replies."
- Taxonomic self-awareness as a genre. His "Very Online AI discourse" timeline and his "Step 1: high quality group chats → Step 5: upper normie worlds" meme-diffusion tweet (Mar 10 2026) are both versions of "I've watched this ecosystem for long enough to chart its drift." This is the voice settling into a later-career move: less first-order argument, more meta-commentary on how arguments age.
§Blind spots
Three, in descending order of confidence:
- The hostile-actor case is underweighted. He is rigorous about steerability improving and self-preservation drives being implausible, but his main-timeline voice says less about deliberate misuse by capable humans. His Models on the Frontline essay handles this at length, but it rarely surfaces on X.
- Inside-the-lab perspective may flatter itself. His steerability case ("models are pretty good at instruction following, and even more so over time") is supported by the frontier labs' own released model lines. If deployment conditions shift — longer agent rollouts, more real-world autonomy — his priors may lag. An Apr 20 2026 reply to his thread made this exact point: "behavior emerges across trajectories, long horizon use changes the system."
- "Deployment matters more than scale" is a thesis with an expiry date. If one more scaling jump produces something qualitatively different, the scaffolding-alpha argument becomes embarrassing. He knows this — the Apr 13 2026 thread hedges ("the point isn't that scaling stops working") — but it is the thesis most exposed to 2026–2027 events.
§II. AI Governance & State Capacity
Krier's day job is AI policy, and the corpus shows three overlapping governance projects, in descending order of how much attention he gives them.
§(a) State capacity first, AI governance second
This is the Krier policy prior: before you govern AI, fix the government. From his Mar 21 2026 self-quote: "If we were to actually drastically improve public institutions and institutional decision making, we could probably also tolerate riskier and more innovative policymaking. If you care about AGI risk, you should definitely want to prioritize a competent state."
It is why he admires Pahlka and pushes the Apr 5 2026 14-item list — tax pre-filling, permit processing, FOIA, benefits eligibility, caseworker paperwork, CMS claims adjudication — as "processes where AI agents could make a measurable difference today." The framing is not "AI will replace the state." It is "AI can finally let the state do what it is notionally already trying to do." A reply called out the big risk — "AI just eases the pain of bureaucracy, and therefore lets bureaucracies grow stronger roots" — and he immediately agreed: "that's a big concern for me too. Ultimately there should be stronger incentives pushing for de-bloating, rather than making the regulatory backend some sort of hydra." The concession is the frame.
§(b) AI-mediated deliberation and advocate agents
His Mar 24 2026 thread (109L) on advocate agents is the most serious piece of long-form governance thinking in the corpus. The key move: most people don't want to attend their local town hall, so the relevant counterfactual for an advocate agent is "my interests simply being ignored," not "my interests represented by me personally."
The research agenda he proposes is two-fold: "(a) evaluating how accurately an agent represents a principal's views or values, which the principal may not themselves know fully ex ante; and (b) studying where delegation is appropriate, and where it is not." He is alive to the brittle-puppet / co-author tension ("if the agent is too literal, it becomes a brittle puppet; if it is too interpretive, it ceases to be a representative and becomes a co-author or governor"). He anchors this to the Knight Columbia "Democratic Matrix" paper and Pol.is / Taiwan on one side, and to Coasean Bargaining at Scale on the other.
§(c) Anti-p(doom), anti-alarmist comms
The longest reply in the corpus (Apr 21 2026, to @tyler_m_john, 28 lines) is a full-throated argument against headline p(doom) numbers:
"A conditional estimate ('given continued scaling, no major alignment progress, deployment pattern X…') forces a forecaster to show their model, which is where the actual epistemic content lives. A one-off number compresses all of that into a single digit and rewards the compression. And the social practice of treating these numbers as the headline artifact actively crowds out the (far more important) conditional reasoning… I suspect that frequently, that's precisely the point: 'wake up sheeple! scary thing! pitchforks pls!'"
On Apr 19 2026 (126L): "There's a 64% chance the fastest way to poison AI discourse is turning highly uncertain risk arguments into cable-news numerology and extinction theatrics. The cartoon version crowds out the thing worth understanding, and then gets rightly mocked, taking substance down with it." He cites Sunstein's Probability Neglect as the mechanism: "1%, 10%, 50% all collapse into 'very bad thing could happen,' and public response is driven almost entirely by the vividness of the scenario."
This is not anti-safety. It is anti-the-specific-comms-strategy. The bright line he draws (Apr 21 2026 reply): "I'm not personally moved by p(dooms), and think there are good enough reasons to do serious safety work without having to resort to them."
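To make the compression argument concrete, here is a schematic restatement in standard probability notation; the notation is mine, not Krier's, and the scenario variables are illustrative only.

```latex
% By the law of total probability, the headline figure collapses a sum over
% scenarios into one scalar:
\[
  P(\text{doom}) \;=\; \sum_i P(\text{doom} \mid S_i)\, P(S_i),
\]
% where each scenario $S_i$ bundles a scaling trajectory, a level of alignment
% progress, and a deployment pattern. The conditional terms and the scenario
% weights are where "the actual epistemic content lives"; the single number on
% the left-hand side discards all of them.
```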
§III. Actionable Principles
The implicit rule-set one gets from the corpus:
- Before interpreting a capability result, ask what system prompt, scaffold, or deployment pattern produced it. MechaHitler was a bad prompt; alignment-faking was a stylized setup; eval-awareness is a function of obvious eval designs. Each is an instance of the same rule against naïve behaviorism.
- Don't communicate p(doom) to a non-expert audience. Not because probabilities are bad, but because the audience metabolizes 10% and 50% identically. "The numbers aren't calibratable in principle from current data, and are mostly ideological priors wearing the clothes of posteriors."
- Treat AGI as a tool. "I think there's no contradiction in thinking that an AGI can remain a tool, an ASI can remain a tool" (Cognitive Revolution, Dec 2025). Model welfare is a category mistake; the substrate matters.
- Build better state capacity before you build better AI regulation. "If you care about AGI risk, you should definitely want to prioritize a competent state" (self-quoted, Mar 21 2026).
- Prefer bottom-up mechanisms to top-down statutes where you can. Coasean Bargaining at Scale: regulation as thermostat, not statute.
- When you rebut, rebut by point number, with links. The 10-point Todd rebuttal and Todd's 10-point reply to it are both in the corpus; each numbered point has at least one outbound citation.
- Talk to the model for a long time before you form a view. From the Pethokoukis AEI interview (Aug 2024): "take one of these models and have a very long conversation with it about some sort of topic, like try to poke holes, try to contradict."
- Memory features warp academic work. Mar 9 2026 (94L): "with academic work where I'm trying to understand ideas as objectively as I can… I'm afraid it slants the answers to relate to my existing beliefs in a way that is ultimately unhelpful. It feels like intellectual sycophancy."
- "Fear of calcification" over "fear of loneliness." Mar 22 2026 (351L): "When I was 20 my greatest fear was the slow atomisation of social relationships… Now my greatest fear is calcification of beliefs and laziness/overconfidence taking over truth-seeking and epistemic flexibility." This is a life-principle he has published; the 351 likes say the audience heard it.
§IV. Rhetorical Style
Krier has three voices and uses them cleanly.
Voice 1: the numbered rebuttal. When he engages with a serious piece of writing he disagrees with, he numbers his points, each gets a link, and the register is neither snarky nor emollient. The Apr 19 2026 Todd rebuttal (278L) and the Apr 21 2026 p(doom) reply to @tyler_m_john are the exemplars. This is his lawyer voice. It is the most content-dense mode and also the least engagement-optimized: serious long replies top out in the low 100s of likes.
Voice 2: the taxonomist. His two highest-engagement original posts on-timeline are both taxonomies of AI discourse. "Very Online AI discourse in 2015-2020 / 2020-2023 / 2023-2026" (Apr 19 2026, 493L) is one. "Step 1: high quality group chats discuss X. Step 2: a year later a Substack. Step 3: a distorted version from a high-status X account. Step 4: a think tank. Step 5: upper-normie worlds" (Mar 10 2026, 193L) is the other. He is positioning himself as the cartographer of a scene that takes itself too seriously, and the audience rewards it. This is also, not accidentally, the voice in which he is most critical of the AI-safety ecosystem without ever quite naming names.
Voice 3: the deadpan garnish. "hehe" QT-ing an Allbirds-is-now-an-AI-company story. "rogue internally deployed AI agents be like" on a Strait of Hormuz tweet. "Pleased to share that I will be neither watching nor commenting on 'The AI Doc' for I expect to learn approximately nothing new" (Mar 30 2026, 85L). "We'll have AGI when I stop saying 'human pls' at the start of every customer service charbot" (Mar 9 2026). This voice routinely outperforms Voice 1 by engagement.
Reply register is warmer than post register. On-timeline Krier is careful and argumentative. In replies to peers he is casual, lowercase, emoji-heavy: "🫡", "🙏🏼", "haha low bar imo", "hell is" (response to @norvid_studies asking what hell is). He is willing to say in replies what he won't on-timeline — the "Groupthink? In Lighthaven?! Surely not" line is a reply to Alex Turner, not a standalone post. This is by design: the timeline is for argument, the replies are for the scene.
What makes the tweets work. Three patterns:
- Specific citations inside a general claim. The 14-item governance list is not "AI could help government" — it is "citizens spend an estimated 6.5 billion hours per year on federal tax compliance," "Stanford's RegLab built STARA… 528 mandated reports," "DeepMind recently helped the UK government translate mountains of old paper maps." He has a lawyer's instinct for the concrete example.
- Adjacency to prestigious but non-obvious sources. He shares Neil O'Brien on civil service reform, World of Interiors on sea pulpits, Joel Spolsky's 2002 Law of Leaky Abstractions, Asterisk's neuroscience piece, the NBER and Abundance and Growth. The selection itself is status-building: a policy wonk who reads like a generalist.
- Self-implication. The top-line "I've developed the unique skill of being incredibly sharp in meetings that don't matter, and a half-zombified wreck when surrounded by people who actually do" (Mar 12 2026, 241L) works because it comes from someone who is in fact in those high-stakes meetings.
What the audience heard differently. On the 10-point Todd rebuttal (278L), the thread replies are largely approving but several serious commenters push on point 10 specifically (eval awareness) and on long-horizon trajectory behavior. The audience, including some Krier respects (@a_karvonen on eval realism), thinks he under-weights the structural multi-turn argument. He has not yet publicly conceded this; it is a place the next essay should probably go.
§V. Contrarian & Hidden Takes
What's contrarian inside his own tribe (Google DeepMind / AI policy world):
- He is open that the safety establishment over-interprets its own evidence. Point 7 of the Todd rebuttal cites Anthropic's own write-up to dismiss a widely-cited Anthropic finding. This is not done; he did it.
- He thinks "AI for epistemics" is the most underrated thing in the field. Apr 2 2026 (73L): "Two years ago I wrote that we should ensure 'assistants we rely on help us build better cognitive security and filter information in epistemically desirable ways'… This new Forethought piece on 'AI for Epistemics' goes in this direction, which I continue to think is very important." This is the quiet through-line.
- He is against model welfare. The Shear debate (Dec 2025) is the explicit version. On-timeline (Apr 23 2026), the "kyle fish, kyle fish, kyle fish" chain with Chalmers and @jeffrsebo is gently mocking. He holds this view in the room where people don't.
What's contrarian against the Bay Area doomer and Bay Area accelerationist clusters simultaneously:
- Bay Area monoculture tweet (Mar 23 2026, 612L): "I occasionally have my doubts about the Bay Area flavoured monoculture of AI hyper-bullishness, but occasionally I look at what the smarmy skeptics are offering and remind myself the alternative is even bleaker. All the confidence, none of the imagination." The double-occasionally is the whole argument — he can't fully join either side, and the joke is that he knows it.
- His thread about how inflated predictions have migrated from LW to McKinsey slide decks (Apr 21 2026) is specifically mocking how both Very Online camps' memes now bubble up together to upper-normie culture. He is fond of neither.
What he'd say after three drinks (from the interview corpus, because this is literally what these sound like):
- He rates AGI-governance urgency at 5.5 out of 10 (Faggella interview, Nov 2024). He works in the field at a frontier lab.
- He thinks "AGI can remain a tool, ASI can remain a tool" (Cognitive Revolution, Dec 2025) and that moral-patient framings are a category error tied to substrate.
- He believes AI policy should be a subset of general pro-growth, simplification-of-regulation policy — "nuclear, and wind, and solar, and so on, and many regulatory processes that could be simplified, and accelerated" (AEI / Pethokoukis, Aug 2024). This puts him more aligned with the abundance coalition than the safety coalition when forced to choose.
- In a half-sincere self-assessment to Kevin Bryan (Apr 19 2026): "Genuinely, an advantage. I would benefit from a lobotomy." Offered when Bryan confessed he didn't recognize most of the LW terms in Krier's taxonomy. The honest reading: he thinks total LW-literacy is a liability in AI policy.
§Central tensions
- Lives in the rationalist-adjacent world, rejects most of its ontology. Reads Dwarkesh, cites Asterisk, knows every term on his own timeline, and still thinks "the field has huge blind spots" (Apr 19 2026 reply to Bryan). How does one work happily inside this? The corpus answer is "by being the in-house jester and writing long posts with numbered points."
- Pro-market, pro-state-capacity. These are not the same thing. Coasean bargaining pushes one direction; STARA and Aadhaar push the other. He genuinely holds both. The resolution, implicit in Coasean Bargaining, is that a capable state is the framework-guarantor that makes bottom-up coordination viable.
- Optimistic timelines, pessimistic deployment. "I'm actually more optimistic on digital AGI soon" (Apr 1 2026 economic forecasts) sits next to "deployment friction, physical constraints, and institutional adaptation matter more than people tend to assume" (recursive self-improvement essay). This is not a contradiction if you hold, as he does, that capability is cheap and deployment is the constraint — but it sounds like one, and he rarely front-loads the reconciliation.
§VI. Network Graph
§Inner circle (peers he replies to casually and often)
- @tyler_m_john. The single most frequent reply partner in the corpus. He and Krier disagree about p(doom) and agree on almost everything else. Multiple exchanges in the 60-reply window.
- @norvid_studies. Multi-turn reply threads on post-AGI jobs (Apr 21–22 2026). Krier sends him the Imas/Trammell link; he sends Krier long counter-arguments. This is a real running conversation.
- @zdch. Short warm replies, many of them (🙏🏼, 💯, 🤝).
- @herbiebradley. Drive-by agreements and "❤️🩹" emoji; functions as an ambient ally.
- @Afinetheorem (Kevin Bryan). Economist. They agree the field has blind spots. The "lobotomy" line is addressed to him.
- @penadev, @graphtheory, @luke_drago_, @historianseldon. Regular short-reply partners — these are the people he encourages and celebrates, not necessarily debates.
§Inside DeepMind
- @ShaneLegg (his boss, DeepMind co-founder) and @shanegJP travel with him to Japan (Mar 21 2026); the Japan tweet explicitly endorses a "more adaptive regulatory environment" and Japan's less-alarmist register. Shane Legg embodies the DeepMind-internal aesthetic Krier publicly identifies with.
- @jackclarkSF (Anthropic) gets a chatty late-night reply Apr 23 2026 — not inner circle but a recognized peer.
§Thinkers he amplifies
- @pahlkadot (Jennifer Pahlka) — repeatedly. Code for America, civil service reform. The most-amplified non-Krier voice on government-tech in the corpus.
- @sotirov — Apr 19 2026: "Great piece by @sotirov! These are the kinds of perspectives I wish were more prominent in AI governance." (72L)
- @lawhsw — "alignment by default" Cosmos essay, Apr 17 2026. A handle he flags for others.
- @pinkldot / @essemmeppi — co-authors of The Agentic State. Policy-wonk peers.
- @oliverhanney — development economics Substack he featured Mar 28 2026.
- @tszzl (roon) — his "renaissance rationalization" tweet got Krier's long favorable reply Apr 12 2026.
- @AlexLerchner — the physicalist-anti-functionalism paper collaborator; "Alex is a fantastic, careful thinker and influenced my views a lot."
- @tylercowen — replied to directly with "Thank you! 🫡" (Apr 10 2026) after a Marginal Revolution plug. Cowen is the policy-intellectual idol.
§People he is gently or openly against
- Ben Todd (@ben_j_todd). The extended Todd rebuttal + reply-to-reply. Respectful but firm disagreement; he closes with "🫡".
- The Yudkowsky / Turner / Lighthaven cluster. The "Groupthink? In Lighthaven?! Surely not" reply to @Turn_Trout (Apr 13 2026) is the sharpest thing he says to anyone in the corpus.
- Bill Maher and the celebrity-p(doom)-launderers. The Apr 19 2026 tweet (126L) is explicitly aimed at them.
- Gary Marcus-style dismissive skeptics. The Mar 23 2026 Bay Area monoculture tweet is also an anti-skeptic tweet; Luc Julia, whose book had just been reviewed, is the proximate target.
§Structural observations
He is promiscuous in whom he amplifies: following 7,574 accounts against 21,955 followers is an unusually high follow ratio for an account of that size, and it tracks with his visible curiosity across fields (neuroscience, audiophile head-fi, Spotify playlists, Midjourney art). He treats Twitter as a reading-list aggregator first and a broadcast medium second.
§VII. The One Essay He Keeps Rewriting
Capability is not transformation. Deployment is where AI's impact lives, and deployment is governed by scaffolding, institutions, human preferences, physical constraints, and law. The rationalist ontology that treats a model as a goal-maximizing agent and alignment as a mathematical property of that agent is mostly wrong; the safety-by-vibes and p(doom) communications built on it are actively harmful. The right move is to build stronger states, better scaffolds, market-like coordination mechanisms, and a pro-growth regulatory environment — and then to deploy AI inside it, defensively first, against specific threat models rather than cartoon ones.
Every essay in the Krier corpus is a variation on this.
- Models on the Frontline (2024) argues deployment-first for defense.
- Maintaining Agency and Control (2025) is the capability/transformation split.
- Coasean Bargaining at Scale (2025) is the bottom-up coordination piece.
- The Cyborg Era (Jan 2026) is the human-labor-survives-because-deployment-is-expensive piece.
- Musings on Recursive Self-Improvement (Apr 2026) is the anti-hard-takeoff piece, grounded in the same deployment argument.
- The Apr 5 2026 14-item governance thread is the essay operationalized: fourteen specific deployment targets inside the state.
- The Apr 19 2026 Todd rebuttal is the essay in negative space: here is what the alternative view gets wrong.
The bet the whole project rides on is that the next two years will confirm the deployment-constraint thesis more than the scaling-is-all-you-need thesis — that we will see more gains from better harnesses and better institutions than from raw model improvement. If he's right, the AI policy world re-centers on state capacity and mechanism design, and the rationalist ontology fades. If he's wrong — if there is a capability jump that makes the harness obsolete — the Krier worldview is the one that looks dated in retrospect. He knows this, and it is why his explicit hedge in the Apr 13 2026 thread is: "The point isn't that scaling stops working but rather that (a) achieving each additional increment of capability at the model level will require disproportionately greater expenditure of compute, data, engineering effort, and capital; and (b) 'weak AGI' will probably come from the combination of strong models with scaffolding, tools, memory, retrieval, planning."
He has chosen his thesis and is writing the same essay, beautifully, with slightly different citations each time.