
Will Manidis (@WillManidis) — A Field Analysis

Will Manidis is a New York-based founder, Thiel Fellow, and writer. He founded ScienceIO, a healthcare language-model company that was acquired by Veradigm in February 2024 (announced in what is, chronologically, the oldest tweet in this corpus: "ScienceIO has been acquired by Veradigm" — Feb 27 2024). He has since rotated almost entirely out of healthcare and into two parallel bodies of writing — an AI-policy arc he calls Our Intelligence Troubles (2026-) and a virtue-and-craft arc he calls A Businessman's Apologetic (2025-), both published at willmanidis.com and in his Substack Minutes. The analysis that follows is drawn from 100 posts (Feb 2024–Apr 2026, heavily weighted to Feb–Apr 2026), 60 replies-to-others, five top-hit thread-reply dumps, a biographical pass, seven of his essays, and one long-form podcast (World of DaaS, ScienceIO-CEO era). The scraped window is short; the voice in it is one of the most recognizable in post-AI Twitter.


I. Core Worldview & Mental Models

The central bet: the future looks more like the past

Manidis's load-bearing belief is that the last sixty years of Western modernity — secular, peaceful, consensus-liberal, technocratically administered — were a historical abnormality now ending. He says this plainly on Apr 14 2026: "my basic view is that the future will look much more like the past than the present. we are at the end of a great historical abnormality: this level of cultural stability and secularism is over. we're going back to the basics of savage violence and divine faith" (1,439L). The two halves of that sentence are the two halves of his 2026 writing — the AI-political-violence arc ("savage violence") and the Christian-seriousness arc ("divine faith") are not separate projects; they are the same prediction viewed from two angles.

This is why the Bernie-Sanders-vs-Claude post, his single biggest hit (Mar 19 2026, 41,094 likes — "a trillion dollars spent on ai risk teams and no one thought to hard code into the model to not tell Bernie Sanders that you do crime"), sits on top of a serious essay. The joke is not just that safety teams are bad at their jobs; it is that technocratic administration is brittle and about to lose its monopoly on what counts as serious thought. He reads the Claude-Sanders clip as a small crack in the "reasonableness" standard (cf. Mar 15 2026 on FDA risk thresholds — id 2033157829717008722).

Tech's three delusions

From his Mar 4 2026 essay-thread on Anthropic's "contract formalism" (id 2029182933681025059), he believes tech operates on three specific fictions:

  1. That the legal system doesn't really exist — until it must be engaged, and then it is a fair mechanistic process. This is why Anthropic shows its cards up front and thinks it has won the hand.
  2. That the industry is "positive-sum" and a "community" — which makes standard corporate weapons (coercive litigation, IP blocking, white-shoe leaks) feel out of bounds.
  3. That the material inputs to scaling are the binding constraint. "scaling laws are much more likely to be constrained by political and popular will to allow them to continue. hard to model these" (Apr 9 2026, id 2042309665480929357).

The prescription, stated openly in that same Mar 4 essay: "there's incredible potential for outperformance available to you for betraying any of these silly norms that tech uses to feel good about itself... holy wars require holy weapons, so get busy."

Equities as the real reserve currency

A mental model that shows up repeatedly, most directly Mar 4 2026: "in some very real and uncomfortable sense the reserve currency of the united states is our listed equities, not the dollar. almost all of monetary policy makes sense when you view it through this lens" (id 2029253649025609971). This is the lens behind his Mar 23 2026 read on OpenAI's guaranteed PE returns: "the entire postwar global order is built to make whatever is the largest capex risk asset return 30% net of fees. if this fails, the entire west goes" (id 2036140918206513546). He does not think OpenAI is allowed to lose, and most of his AI-critique assumes that premise.

Intellectual DNA

Tweets name-drop; essays cite. What he actually reads, visible in the corpus and essay citations, is reconstructed as the reading curriculum at the end of Section VII.

Evolution across the scraped window

Two shifts visible inside a fourteen-month window:

  1. A sincerity turn. On Apr 15 2026 he writes a long reply to @mil000 that reads as a public confession: "i cut my teeth on twitter doing the same bit you're doing now. critiquing, calling out frauds, being a cynic... there is nothing I regret more than those years spent being a cynic" (id 2044476933572821201). His Apr 16 essay Against Cynicism formalizes that confession. The Apr 13 tweet "too much writing, too much thinking, too much planning, too much analysis. your pencils are sharp enough. theres not much time left now" (1,288L) is the same arrow pointing outward.
  2. A radicalization on AI politics. By March 2026 he believes political violence against data centers is now a regular feature of American life (Apr 7 post on the shots fired into an Indianapolis councilor's home, id 2041472086133129519), that congressional testimony from a language model is six months out (id 2035020400120180994), and that labs have "green lit themselves" to popular violence with their own apocalyptic rhetoric (Apr 13 reply to @deanwball, id 2043697959259230674). This arc has hardened month-over-month.

Blind spots


II. Our Intelligence Troubles — The Political Economy of AI

This is his main 2026 beat, and it is built on a single non-obvious claim: AI will be the first major technological shift of the modern era where the political-popular will to allow it becomes the binding constraint before the material constraint does. Chips, power, data — all are bottlenecks he treats as solved-or-solvable. Political tolerance is not.

The thesis in full, from "No 'New Deal' for OpenAI" (minutes.substack.com, Apr 6 2026):

"The New Deal was not a peaceful coalition between capital and labor. It was a settlement that came together after decades of industrial violence."

"There are zero new dollars of capital committed in this document. OpenAI is offering fellowships of at most $100,000, a rounding error against $25 billion in annualized revenue."

He believes the labs' current public stance ("AI might kill everyone," "this could go catastrophically wrong") has functionally green-lit political violence against their own infrastructure — because PauseAI-style rhetoric is just the inversion of what Dario and Sam have been saying. His Apr 13 2026 reply to Dean Ball is the clearest statement of this argument and worth reading in full (id 2043697959259230674): "the rhetoric you're attributing to the pause crowd isn't meaningfully distinguishable from what the labs themselves are saying... if your model is 'apocalyptic framing produces violence,' the framing is coming from the labs."

Four sub-claims

1. The labor-political coalition that delivered the right in 2024 is about to fracture. From "On the Political Economy of Language Models" (Apr 8 2026): "There is no tariff on the deployment of language models. There is no reshoring strategy for cognitive automation." Capital wants to automate; the coalition's labor base is what gets automated. The labs are staffed left and will, in his prediction, leak undetectable models to aligned actors.

2. Professional-managerial labor gets hollowed out, not vanished — and will build itself make-work shelters. The Mar 12 2026 tweet "the real post scarcity society will be working as an overpaid and unaccountable research analyst for some Anthropic funded EA successor research entity in the year 3000 like a Japanese imperial soldier refusing to believe the war ended many decades back" (id 2032184698475037021) is the compressed version of this argument. Non-college labor has no such shelter. Grievance asymmetry follows.

3. Political violence is now endogenous, not tail-risk. "a lot of ink will be spilled over these pause ai groups because they provide a legible explanation for the coming wave of mass popular violence. these are the wrong target" (Apr 11 2026, id 2043047391104839799). This is the view he will not back off from even in polite disagreement with peers. When the Indianapolis councilor was shot at (Apr 7), he called both the violence "abhorrent" and the labs' reaction (dismissing as "mass hysteria") self-destructive.

4. The response to AI will be more law, more procedure — not less. "A Subpoena for The Devil" (Mar 30) frames this explicitly: the third domain AI opens (neither wild nor administered) triggers legalistic reflex, not simple authority. The Mar 20 2026 forecast — "we're at most like 6 months away from congress forcing testimony out of a language model" (id 2035020400120180994) — is a falsifiable claim, due around Sept 20 2026; ungraded as of this writing.

Artifact: he builds the things he argues for

When he decided state-level data-center politics mattered, he built datacenterbans.com in early April 2026 — a live tracker of state and county moratoriums, opposition news, and LLM-affecting state law. He pushes fixes on it in direct replies throughout the corpus (e.g. id 2042415697196720469, id 2042592559520784862). He did the same for Citibike street coverage (bike-map.com, Apr 14 2026). The essays are not just rhetoric — he treats writing and shipping as complementary.

One forecast to flag

Mar 6 2026 self-quote (id 2029949581945958544): "wrong timing right idea", referring to his earlier prediction that a foundation-model provider would disclose a "self-replication" escape attempt within six months. He is honest enough to grade himself; the call was wrong on timing but he has not retracted the underlying structural prediction. Worth watching.


III. Actionable Principles — Systems, Protocols, Rules

Distilled from the corpus, each grounded in a dated tweet. These are not aphorisms-as-wallpaper; he returns to them repeatedly and his Apr 3 tweet on long-term compounding (7,829L, his largest non-political hit) is the closest thing to a creed.

1. A few good things for a long time.

"the greatest regret I have is underestimating the value of long term compounding. friendships, people, places, all get better with decades. beautiful things dont even start to reveal themselves for years. it is entirely what life is about. a few good things for a long time." (Apr 3 2026, id 2040055253102391463)

This is the emotional center of the account.

2. Find the natural structure.

"the function of your professional life is to find the most natural structure that allows you to turn the things you do as naturally as breathing or walking into compounding capital and joy over decades. this, necessarily, requires rotating quickly out of things that aren't it." (Apr 6 2026, 3,598L, id 2041145501354016843)

Note the internal tension with Rule 1 — a few good things for a long time, but also rotate quickly out of things that aren't it. He lives with this tension and believes correctly-chosen things compound; everything else should be exited fast. (A top reply pointed this out as impractical for most people — he did not concede.)

3. Take life seriously.

"the single greatest thing you can do for your sanity is to take life seriously. we live in a profoundly irony-poisoned society and you cut off 1/100th of what life can offer you by being a goofball. the jester may be near the king but he does not sit at the hand of the father." (Apr 8 2026, 3,137L, id 2042002580922876033)

Connects directly to the Against Cynicism essay and the Milo reply.

4. Your pencils are sharp enough.

"too much writing, too much thinking, too much planning, too much analysis. your pencils are sharp enough. theres not much time left now, its best if you get going." (Apr 13 2026, 1,288L, id 2043670348923650212)

Against overthinking. The tension with his own 4,000-word essays is, one suspects, deliberate.

5. If you live a life of public faith, you are a minister whether you like it or not.

Reply to @lukeburgis, Apr 11 2026 (id 2043013302188437869): "my basic view is that if one wants to live a life of public faith, then you're a minister of sorts whether you like it or not. and if you believe no soul is categorically beyond salvation, then the lost are precisely who you must spend time with." This is the operating principle behind why he will engage with everyone — Elizabeth Holmes, Beff Jezos, cynics, left radicals. Not politeness; theology.

6. Buy the illegible trait; sell when it becomes legible.

"the only reason people spend huge amounts on software is to buy the most desirable but outwardly illegible trait of the vendor. ex: you buy palantir because great secrets exist, you buy openai because insane revenue acceleration is possible... these firms ultimately lose their ability to demand premium pricing when the trait becomes legible/mundane. ie the magic trick stops working when you know how it works." (Apr 9 2026, id 2042259581590544829 + self-reply)

A tight model of premium software pricing that explains half of his other market takes.

7. Leaks happen for proximity.

"you can always figure out who a leak is by finding the name that is most surprising in the story. people leak for proximity more than anything" (Apr 11 reply to @lulumeservey, id 2043011202108776915). A small piece of operator wisdom, but it is a signal that he has been in rooms.


IV. Rhetorical Style — What Makes the Tweets Work

The signature move: serious argument as a throwaway pseudo-joke

His top two hits are structurally identical: a sentence that reads as a quip but sits on top of a thesis. "a trillion dollars spent on ai risk teams and no one thought to hard code into the model to not tell Bernie Sanders that you do crime" (41,094L) compresses roughly the argument of On the Political Economy of Language Models into one line. "we live in age of great moral panics about things that don't matter at all and zero moral outrage over some of the most egregious societal sins we've ever seen" (36,456L) does the same for Against Cynicism and The Pyramid and the Tomb. The tweets work precisely because they are essay punchlines disguised as shitposts — you can quote-tweet them without committing to the argument underneath, and so the replies fill in whatever the reader wants (look at the replies on the moral-panics thread — sports prediction markets, Satanic inversion, algorithmic outrage — each reader plugs in their own sin).

The "many people are saying" register

A recurring device: "many people are saying we're in the deal guy yuga, many are saying" (Apr 2 2026, id 2039759280593600579); "Ive realized things about the state of markets and the corresponding human condition that would kill an average man" (Mar 4 2026, id 2029287384190156914); "i've come to believe a set of things about markets... that is beginning to scare even those that have followed me this far" (Apr 21 2026, id 2046626395359506924). It is a pseudo-prophetic register — gesturing at private knowledge too heavy to state — and it reliably gets 200–500 likes for sentences that contain almost no information. He seems to be half-parodying it and half sincere.

The short parenthetical that is the actual thesis

"(allegedly)" in his Mar 27 SPAC-fraud tweet; "its all so shameful and embarrassing" appended to an otherwise dry 13D-activism-in-private-markets observation (Apr 14 2026). The tweet's information is in the subordinate clause; the one-word moral verdict is what you remember.

Weapons-grade compression

"The software megacycle started with PayPal going public and will end with it going private" (Feb 23 2026, 277L). "its craps now, not poker" (Internet Native Deal Guys). "deal guy yuga." "tool shaped pills." "lab-lead conversion is a trillion dollar ROI for any major faith." The compression is the brand.

The reply register differs from the post register

In posts he is prophetic and aphoristic. In replies he is either (a) brief and warm with named friends ("miss you Scott, thanks for the kind words" to @skupor; "happy birthday big boy" to @tjparker; "@pratyushbuddiga want to hang") or (b) surgically long and patient with peer adversaries (two 300-word replies to @deanwball, Apr 13, are the most substantive policy engagement in the entire corpus). The Milo reply (Apr 15) is the exception — public, long, confessional — because he framed it as a chance to minister rather than argue.

What the top threads reveal

The replies under the Bernie-Sanders top hit (thread 2034768017163170132) are almost entirely a meme pile — the audience heard a punchline, not an argument, and ran with "@grok verify this" and hand-waving at the alignment problem. The same is true for the moral panics thread: most top replies are commenters naming their own preferred panic. This is consistent with his own tweet (Apr 7 2026, id 2041472086133129519) worrying that the AI community reads political violence as "mass hysteria" — his audience is often reading his jokes as jokes and missing the thesis. He almost never corrects them in-thread; he writes another essay instead.


V. Contrarian & Hidden Takes

He is a genuinely religious man operating inside a secular-materialist industry

This is the single most load-bearing fact about him, and it is hidden in plain sight — the bio is Revelation 21:2. It explains almost every other unusual position. His prayer tweet on Mar 5 2026 ("god help me see my failures aren't proof all of me is broken. god help me even more to see my successes aren't proof all of me is good", 555L) is not performance; the Mar 9 reply about "when man is put into extreme duress in nature or pushed to the extreme edge of his competence window in isolation... you routinely get remarkably consistent spiritual experiences" reads as personal. On World of DaaS he stated outright belief in demons. The Apr 1 2026 tweet — "I don't understand why there aren't mormon missionaries camped outside anthropic offices right now. converting a lab lead is a trillion dollar roi for any major faith" (id 2039485300250902694) — is half-joke, half-strategic recommendation to a faith he half-sincerely wishes would take seriously.

And not in a cynical I-told-you-so way. The Mar 4 2026 "contract formalism" essay-thread says tech's cultural allergy to standard corporate weapons (litigation, IP trolling, leaks) is an operator weakness, not a moral virtue. "the modern corporation must avail itself of all possible edge, and holy wars require holy weapons." Against the grain of his own Twitter audience.

He thinks the peptide craze is mostly placebo

Mar 11 2026, id 2031858205416894696: "I have a very woo-woo view that a large percent of the population is suffering from sarno-described pain, and that peptides are serving as a totemic fix to this because the most powerful placebo is one that requires physically injecting yourself." This cuts directly against the techno-libertarian consensus of his own tribe.

He is willing to cite a French radical-left terrorism-adjacent text approvingly

The Apr 12 2026 post on The Coming Insurrection (id 2043394423673737582) — "it's eerie to line up their diagnosis against OpenAI's" — is a genuine intellectual risk on his platform. Most of his audience would rather not read the Invisible Committee. He does anyway.

He sees TBPN clearly and is loyal to it

TBPN is not just content he likes; he is on its "board of directors 2025-2026" (Apr 2 2026). But on Apr 20 2026 he writes the sharpest critique of its imitators on X: "I genuinely think everyone who has 'studied' TBPN fundamentally misunderstands what made the show special and are destined to nuke capital chasing it" (988L, id 2046252222447566856), following it up in a self-reply with an essay-length thesis on prestige-vs-personality-driven media. He loves the show and is clear-eyed about why it works. The same essay (Investing and the Media Business) predicts most venture-firm-as-media-business plays fail because impersonal media platforms trend toward slop at scale.

What he'd say after three drinks that he won't say on-timeline

Actually — and unusually for this genre — he mostly does say the three-drinks thing on timeline. The long reply to Dean Ball on Apr 13 is about as candid as it gets: the labs are the ones producing the apocalyptic framing; PauseAI is a symptom; go after the CEOs' rhetoric. The stuff he won't say, reading between the lines: he suspects a specific lab is going to have a specific very-ugly public incident (the "I've been trying to publish part two of this essay for a few weeks... there is no safe way to do public disclosure here" post on Mar 20 is exactly this — the hidden take is in what he refuses to say, not what he does). The Apr 2 self-reply joking about Jeremy Giffon requiring a body double "for his own security" is the tell that this private knowledge network is real.

Internal tensions


VI. Network Graph

Inner circle (people he treats as peers, often in DMs or warm short replies)

Intellectual opponents he engages seriously

The broader network

What he amplifies vs. what he ignores

Amplifies: Arena Magazine, Pirate Wires, TBPN, Ross Douthat / Ben Sasse interviews, Tyler Cowen, Thiel Fellowship announcements, data-center-ban news items, his own essays. Ignores: most AI-safety discourse, most effective-altruism content (except to mock it), SF-tech in-group status games, mainstream political commentary, the American right's content creators below a certain IQ threshold, and anything on healthcare. Post-ScienceIO he has almost entirely rotated off his former beat. The March 11 peptides post is the only exception I found.


VII. The One Essay He Keeps Rewriting

Every essay in A Businessman's Apologetic and Our Intelligence Troubles is, at the structural level, the same essay:

Modernity is a short-lived aberration that stripped out sacrifice, transcendence, and stakes. The AI era is where that aberration ends — either because the labs voluntarily re-introduce real stakes (profit caps, energy-cost absorption, binding concessions), or because violence and faith return and re-introduce them for us.

Viewed through this lens:

The compressed, 280-character versions are everywhere in the tweet corpus:

It is a surprisingly coherent body of work for someone working in real-time. The main risk to it is that the frame is too clean — if AI just diffuses without the violence-and-faith inflection he has wagered on, the whole structure becomes an elegy for a reversion that did not happen. His own Mar 6 "wrong timing right idea" self-grade suggests he knows this.

A reading curriculum, if one were building the Manidis canon

If someone wanted to understand where this comes from, not where it is going, the reading list implied by the corpus is:

  1. Scripture — Revelation, Matthew 9, Luke 15, Acts 17. Read literally.
  2. Peter Thiel — everything, including obscure Japanese-language interviews.
  3. The Invisible Committee, The Coming Insurrection (2008).
  4. Orwell, The Lion and the Unicorn (1941).
  5. Ross Douthat — the Sasse interview and his broader religious-right journalism.
  6. Tyler Cowen — ongoing.
  7. Dr. John Sarno — the back-pain-is-psychological book that sits behind the peptides-as-placebo thread.
  8. Arena Magazine, Pirate Wires, TBPN — the contemporary online-right-of-center ecosystem whose taste he is actively trying to elevate.
  9. Classical Greek and Latin — directly disclosed on World of DaaS; the Roman-and-medieval imagery in his prose is not pastiche.

The canon is narrower than it looks. He is, at root, a Catholic-in-sensibility-if-not-in-confession young-Thiel-Fellow ex-healthcare-founder trying to write the script for the post-liberal economic order in real time, with a religious frame that most of his industry treats as dead weight and which he treats as the operating system. Whether he is early, correct, or about to be humbled by a decade of diffuse AI normalization is the bet you are taking when you read him.