
Séb Krier — a persona analysis

Séb Krier is the AGI Policy Development Lead at Google DeepMind. Before DeepMind he was Head of Regulation at the UK Office for Artificial Intelligence, where he led the first comprehensive review of AI in the UK public sector, and a Senior Tech Policy Researcher at Stanford's Cyber Policy Center. He trained as a lawyer (King's College London) and holds an MPA from UCL. He writes long-form at Technologik (technologik.substack.com), posts essays on DeepMind's AI Policy Perspectives blog, and co-authored Distributional AGI Safety (arXiv, Dec 2025). This analysis is based on his 100 most recent posts and 60 most recent replies-to-others, plus his essays and two long-form interview appearances.

What makes Krier worth reading is that he is one of the few senior AGI-policy voices who is neither a doomer nor an e/acc, neither a safety-establishment loyalist nor a tech-right contrarian. He sits at the specific intersection of "AGI is coming and it's mostly good" and "the rationalist-LW ontology for thinking about it is mostly wrong." He writes from inside a frontier lab, in the language of a public-choice economist, with the aesthetic sensibility of a 00s internet artist and the patience of a lawyer reading a bad statute.


I. Core Worldview & Mental Models

Capability is not transformation

Every Krier argument eventually routes back to a single distinction: model capabilities and deployed impact are different things, and the second is where 90% of the interesting action lives. From his 2025 essay Maintaining Agency and Control: "achieving these capabilities is distinct from deploying them in ways that yield truly transformative changes." From his April 13 2026 weak-AGI thread (393L): "deployment design and scaffolding becomes more important over time, not less… there's more alpha on the harness side at the moment, than in merely betting on scaling alone." From the Dec 2025 Marginal Revolution feature: "Most of the value and transformative changes we will get from AI will come from products, not models."

This is the load-bearing belief. It produces, in turn:

Coasean, not central-planner

Krier is a market-mechanism guy wearing a regulator's jacket. Coasean Bargaining at Scale (Cosmos Institute, Sep 2025) spells out the thesis: "The transaction costs that Coase and Ostrom identified as the great barrier to cooperation, could be massively reduced… The Coasean framework does not eliminate the state, but it transitions its role from 'central planner' to 'framework guarantor.'" The punchline: "Governance shifts from statute to thermostat."

This is why he enthuses about mechanism design ("The Architecture of Cooperation," Noema, Mar 29 2026), about Pahlka and The Agentic State (Apr 5 2026), about public choice analysis (Apr 6 2026 plug for GMU's Public Choice Outreach Conference), and about Tyler Cowen, whose two-word handle is all he posts when someone else lists Cowen's accomplishments (Mar 9 2026). Cowen has returned the favor, featuring him twice on Marginal Revolution in December 2025. The Krier intellectual project is "how do market-like coordination mechanisms scale when agents are cheap," not "how do we regulate the dangerous thing."

Anti-rationalist, pro-scientific-safety

The single most distinctive thing about Krier, for someone in his seat, is how unsentimentally he rejects the LessWrong/MIRI ontology. His Apr 19 2026 10-point rebuttal of Ben Todd's "why AI won't do what we want" (278L) is the clearest extant statement; abbreviated, it says:

His follow-up to Todd is harder still: "few people make the case for the reverse argument beyond metaphors/analogies, or pointing to the Sequences or something. I think there should be more direct pieces on this." And in a drive-by reply to Alex Turner (Apr 13 2026): "Groupthink? In Lighthaven?! Surely not."

He is explicit that this is not anti-safety: his own defensive-deployment essay (Models on the Frontline, Feb 2024) argues for deploying models to strengthen societal defenses first, and he co-authored the Distributional AGI Safety paper (arXiv, Dec 2025) proposing agentic sandbox economies with auditability and oversight. The stance is "take the problem seriously, throw out the metaphors."

Physicalist on consciousness, substrate-loyal on tools

Related but distinct: on Mar 13 2026 (518L) Krier endorses Alex Lerchner's paper against computational functionalism ("AI can simulate consciousness, but cannot instantiate it") and notes they're working on a broader blog post together. On the Cognitive Revolution podcast debate against Emmett Shear (Dec 27 2025) he is blunt: "I think there's no contradiction in thinking that an AGI can remain a tool, an ASI can remain a tool, and that this has implications about how to use it," and "I can't separate certain needs and particularities about what they are from the substrate." This is adjacent to his anti-self-preservation tweets but philosophically distinct: it is a physicalist position on what AI systems are, not an optimistic one about their alignment properties.

Intellectual DNA

From the corpus: Tyler Cowen and the Marginal Revolution / Mercatus orbit. Elinor Ostrom and Coase (not namedropped often but they are the spine of Coasean Bargaining). Joel Spolsky's "Law of Leaky Abstractions" (Apr 19 2026). Adam Smith and Foucault ("seeing with two I's", Mar 29 2026). Jennifer Pahlka (Code for America founder, cited Apr 5 2026). Hayek and Scott's Seeing Like a State, implied. Public choice theory explicitly (Apr 6 2026). Cass Sunstein's Probability Neglect (cited Apr 20 2026 against p(doom) comms). Von Neumann–Morgenstern's axioms (Mar 9 2026, Mar 19 2026 — he repeatedly asks why no one evals agents against the VNM axioms, which is his way of saying "the rationalist agent model doesn't even describe current systems"). Carl Schmitt — cited against (Mar 9 2026: "Perfect illustration of the internal contradictions to Carl Schmitt's thoughts").
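To make the VNM point concrete, here is a minimal sketch, not from Krier, of the kind of eval he keeps asking for: elicit pairwise preferences from an agent and check whether they satisfy even the ordinal subset of the von Neumann–Morgenstern axioms (completeness and transitivity), before worrying about continuity and independence over lotteries. The `choose` function and the option set are hypothetical placeholders; a real harness would wrap a call to the agent under test.

```python
from itertools import combinations, permutations

# Toy option set; in a real eval these would be domain-relevant choices.
OPTIONS = ["a weekend in Lisbon", "new headphones", "a donation to GiveDirectly"]


def choose(a: str, b: str) -> str:
    """Hypothetical stand-in for the agent under test: return the preferred option."""
    # A real harness would wrap an LLM/agent call here; this stub picks
    # deterministically so the script runs end to end.
    return min(a, b)


def check_ordinal_vnm(options: list[str]) -> tuple[bool, str]:
    """Check completeness and transitivity of the agent's revealed pairwise preferences."""
    prefers = {}
    for a, b in combinations(options, 2):
        winner = choose(a, b)
        if winner not in (a, b):
            return False, f"incomplete: no preference expressed over ({a!r}, {b!r})"
        prefers[(a, b)] = prefers[(b, a)] = winner
    for a, b, c in permutations(options, 3):
        # Transitivity: if a is preferred to b and b to c, a should be preferred to c.
        if prefers[(a, b)] == a and prefers[(b, c)] == b and prefers[(a, c)] != a:
            return False, f"intransitive cycle across ({a!r}, {b!r}, {c!r})"
    return True, "complete and transitive on this option set"


if __name__ == "__main__":
    print(check_ordinal_vnm(OPTIONS))
```

A fuller version would also probe the lottery axioms (continuity and independence) and stability across prompt framings; how often such checks fail on current systems is exactly his point about the rationalist agent model.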

Notably absent: Bostrom, Yudkowsky, Christiano, Hubinger, Kokotajlo. When he references them it is to push back. He mocks Kokotajlo's AI 2027 inline (Apr 2 2026: "fictional AI company, OpenBrain (Kokotajlo et al., 2025)… Wonder what scenarios 'OpenBrain' is associated with… 🙄").

Evolution over the scraped window

The six-week window (Mar–Apr 2026) is too short for a complete ideological shift, but three moves are visible:

  1. From diagnosis to action. Early-window tweets are descriptive ("Very Online AI discourse 2015-2020…", 493L, Apr 19 2026). Late-window posts are prescriptive (the 14-item government-processes list, Apr 5 2026, 359L; the 10-point Todd rebuttal, Apr 19 2026, 278L). He is tired of describing the field and is spending his on-timeline capital arguing.
  2. Bluesky-curious. Mar 5 2026: "I'll be more active on Bsky than here by EoY! Feels like 70% of posts on X are 'QT of some news event + mildly amusing quip' and bot replies."
  3. Taxonomic self-awareness as a genre. His "Very Online AI discourse" timeline and his "Step 1: high quality group chats → Step 5: upper normie worlds" meme-diffusion tweet (Mar 10 2026) are both versions of "I've watched this ecosystem for long enough to chart its drift." This is the voice settling into a later-career move: less first-order argument, more meta-commentary on how arguments age.

Blind spots

Three, in descending order of confidence:


II. AI Governance & State Capacity

Krier's day job is AI policy, and the corpus shows three overlapping governance projects, in descending order of how much attention he gives them.

(a) State capacity first, AI governance second

This is the Krier policy prior: before you govern AI, fix the government. From his Mar 21 2026 self-quote: "If we were to actually drastically improve public institutions and institutional decision making, we could probably also tolerate riskier and more innovative policymaking. If you care about AGI risk, you should definitely want to prioritize a competent state."

It is why he admires Pahlka and pushes the Apr 5 2026 14-item list — tax pre-filling, permit processing, FOIA, benefits eligibility, caseworker paperwork, CMS claims adjudication — as "processes where AI agents could make a measurable difference today." The framing is not "AI will replace the state." It is "AI can finally let the state do what it is notionally already trying to do." A reply called out the big risk — "AI just eases the pain of bureaucracy, and therefore lets bureaucracies grow stronger roots" — and he immediately agreed: "that's a big concern for me too. Ultimately there should be stronger incentives pushing for de-bloating, rather than making the regulatory backend some sort of hydra." The concession is the frame.

(b) AI-mediated deliberation and advocate agents

His Mar 24 2026 thread (109L) on advocate agents is the most serious piece of long-form governance thinking in the corpus. The key move: most people don't want to attend their local town hall, so the relevant counterfactual for an advocate agent is "my interests simply being ignored," not "my interests represented by me personally."

The research agenda he proposes is two-fold: "(a) evaluating how accurately an agent represents a principal's views or values, which the principal may not themselves know fully ex ante; and (b) studying where delegation is appropriate, and where it is not." He is alive to the brittle-puppet / co-author tension ("if the agent is too literal, it becomes a brittle puppet; if it is too interpretive, it ceases to be a representative and becomes a co-author or governor"). He anchors this to the Knight Columbia "Democratic Matrix" paper and Pol.is / Taiwan on one side, and to Coasean Bargaining at Scale on the other.
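Part (a) of that agenda is easy to prototype, at least in caricature. A minimal, invented sketch of a representation-fidelity check follows; nothing in it comes from Krier's thread, and `agent_position`, the issue battery, and the answers are hypothetical placeholders. The idea: give the advocate agent the same small battery of local-governance questions the principal has already answered, and measure agreement.

```python
# Hypothetical issue battery with the principal's own stated answers.
PRINCIPAL_ANSWERS = {
    "Should the council approve the new housing development?": "yes",
    "Should the library extend weekend opening hours?": "yes",
    "Should on-street parking fees increase?": "no",
}


def agent_position(question: str, principal_profile: str) -> str:
    """Hypothetical stand-in for the advocate agent answering on the principal's behalf."""
    # A real harness would condition an LLM/agent on the principal's profile here.
    return "yes"


def representation_fidelity(answers: dict[str, str], principal_profile: str) -> float:
    """Fraction of questions where the agent's answer matches the principal's."""
    matches = sum(
        agent_position(question, principal_profile) == answer
        for question, answer in answers.items()
    )
    return matches / len(answers)


if __name__ == "__main__":
    score = representation_fidelity(PRINCIPAL_ANSWERS, "hypothetical principal profile")
    print(f"agreement rate: {score:.0%}")
```

His own caveat, that the principal "may not themselves know" their views "fully ex ante," is exactly why a static answer key understates the problem; and part (b), deciding where delegation is appropriate at all, is an institutional question that does not reduce to a score like this.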

(c) Anti-p(doom), anti-alarmist comms

The longest reply in the corpus (Apr 21 2026, to @tyler_m_john, 28 lines) is a full-throated argument against headline p(doom) numbers:

"A conditional estimate ('given continued scaling, no major alignment progress, deployment pattern X…') forces a forecaster to show their model, which is where the actual epistemic content lives. A one-off number compresses all of that into a single digit and rewards the compression. And the social practice of treating these numbers as the headline artifact actively crowds out the (far more important) conditional reasoning… I suspect that frequently, that's precisely the point: 'wake up sheeple! scary thing! pitchforks pls!'"

On Apr 19 2026 (126L): "There's a 64% chance the fastest way to poison AI discourse is turning highly uncertain risk arguments into cable-news numerology and extinction theatrics. The cartoon version crowds out the thing worth understanding, and then gets rightly mocked, taking substance down with it." He cites Sunstein's Probability Neglect as the mechanism: "1%, 10%, 50% all collapse into 'very bad thing could happen,' and public response is driven almost entirely by the vividness of the scenario."

This is not anti-safety. It is anti-the-specific-comms-strategy. The bright line he draws (Apr 21 2026 reply): "I'm not personally moved by p(dooms), and think there are good enough reasons to do serious safety work without having to resort to them."


III. Actionable Principles

The implicit rule-set one gets from the corpus:


IV. Rhetorical Style

Krier has three voices and uses them cleanly.

Voice 1: the numbered rebuttal. When he engages with a serious piece of writing he disagrees with, he numbers his points, each gets a link, and the register is neither snarky nor emollient. The Apr 19 2026 Todd rebuttal (278L) and the Apr 21 2026 p(doom) reply to @tyler_m_john are the exemplars. This is his lawyer voice. It is the most content-dense mode and also the least engagement-optimized: serious long replies top out in the low 100s of likes.

Voice 2: the taxonomist. His two biggest-engagement original essays on-timeline are both taxonomies of AI discourse. "Very Online AI discourse in 2015-2020 / 2020-2023 / 2023-2026" (Apr 19 2026, 493L) is one. "Step 1: high quality group chats discuss X. Step 2: a year later a Substack. Step 3: a distorted version from a high-status X account. Step 4: a think tank. Step 5: upper-normie worlds" (Mar 10 2026, 193L) is the other. He is positioning himself as the cartographer of a scene that takes itself too seriously, and the audience rewards it. This is also, not accidentally, the voice in which he is most critical of the AI-safety ecosystem without ever quite naming names.

Voice 3: the deadpan garnish. "hehe" QT-ing an Allbirds-is-now-an-AI-company story. "rogue internally deployed AI agents be like" on a Strait of Hormuz tweet. "Pleased to share that I will be neither watching nor commenting on 'The AI Doc' for I expect to learn approximately nothing new" (Mar 30 2026, 85L). "We'll have AGI when I stop saying 'human pls' at the start of every customer service chatbot" (Mar 9 2026). This voice routinely outperforms Voice 1 on engagement.

Reply register is warmer than post register. On-timeline Krier is careful and argumentative. In replies to peers he is casual, lowercase, emoji-heavy: "🫡", "🙏🏼", "haha low bar imo", "hell is" (response to @norvid_studies asking what hell is). He is willing to say in replies what he won't on-timeline — the "Groupthink? In Lighthaven?! Surely not" line is a reply to Alex Turner, not a standalone post. This is by design: the timeline is for argument, the replies are for the scene.

What makes the tweets work. Three patterns:

What the audience heard differently. On the 10-point Todd rebuttal (278L), the thread replies are largely approving but several serious commenters push on point 10 specifically (eval awareness) and on long-horizon trajectory behavior. The audience, including some Krier respects (@a_karvonen on eval realism), thinks he under-weights the structural multi-turn argument. He has not yet publicly conceded this; it is a place the next essay should probably go.


V. Contrarian & Hidden Takes

What's contrarian inside his own tribe (Google DeepMind / AI policy world):

What's contrarian against the Bay Area doomer and Bay Area accelerationist clusters simultaneously:

What he'd say after three drinks (from the interview corpus, because this is literally what these sound like):

Central tensions


VI. Network Graph

Inner circle (peers he replies to casually and often)

Inside DeepMind

Thinkers he amplifies

People he is gently or openly against

Structural observations

He is promiscuous in whom he amplifies: 7,574 following against 21,955 followers is an unusually high following-to-follower ratio for a 22k-follower account, and it tracks with his visible curiosity across fields (neuroscience, audiophile head-fi, Spotify playlists, Midjourney art). He treats Twitter as a reading-list aggregator first and a broadcast medium second.


VII. The One Essay He Keeps Rewriting

Capability is not transformation. Deployment is where AI's impact lives, and deployment is governed by scaffolding, institutions, human preferences, physical constraints, and law. The rationalist ontology that treats a model as a goal-maximizing agent and alignment as a mathematical property of that agent is mostly wrong; the safety-by-vibes and p(doom) communications built on it are actively harmful. The right move is to build stronger states, better scaffolds, market-like coordination mechanisms, and a pro-growth regulatory environment — and then to deploy AI inside it, defensively first, against specific threat models rather than cartoon ones.

Every essay in the Krier corpus is a variation on this.

The bet the whole project rides on is that the next two years will confirm the deployment-constraint thesis more than the scaling-is-all-you-need thesis — that we will see more gains from better harnesses and better institutions than from raw model improvement. If he's right, the AI policy world re-centers on state capacity and mechanism design, and the rationalist ontology fades. If he's wrong — if there is a capability jump that makes the harness obsolete — the Krier worldview is the one that looks dated in retrospect. He knows this, and it is why his explicit hedge in the Apr 13 2026 thread is: "The point isn't that scaling stops working but rather that (a) achieving each additional increment of capability at the model level will require disproportionately greater expenditure of compute, data, engineering effort, and capital; and (b) 'weak AGI' will probably come from the combination of strong models with scaffolding, tools, memory, retrieval, planning."

He has chosen his thesis and is writing the same essay, beautifully, with slightly different citations each time.