
Ed Zitron (@edzitron)

Edward Benjamin Zitron is the CEO of EZPR, the writer of Where's Your Ed At, host of Better Offline, a Business Insider columnist, and the author of the forthcoming Why Everything Stopped Working. He is a former games journalist and tech PR operator who now makes his living pulling apart the tech industry's stories about itself. At scrape time he had 110,047 followers on X; The Guardian reported in Jan 2026 that his newsletter had more than 80,000 subscribers and his podcast was in the Top 20 tech charts.

The voice is specific: a PR person who turned against PR logic, a tech obsessive who thinks the tech industry is desecrating the computer, and a finance-adjacent AI skeptic whose actual units of analysis are not "AI capability" but cash flow, megawatts, token burn, backlog, depreciation, and the question of who is pretending not to know those things.

The scraped window is narrow - May 5 to May 11 2026 - but unusually coherent. Almost every post is one facet of the same argument: generative AI is a subsidized, circular, physically constrained, accounting-obscured bubble, and the people selling it are either lying, credulous, or too financially exposed to ask basic questions.


I. Core Worldview & Mental Models

The demand story is guilty until proved innocent

Zitron's core belief is that tech markets have learned to manufacture the appearance of demand before demand exists. In the current cycle, the trick is "AI demand." His May 8 premium-newsletter tweet states the thesis in its cleanest form: "The AI bubble is entirely circular, with 90% of revenues flowing through OpenAI and Anthropic" (May 8 2026, 379L/64RT). The follow-up says the quiet part more aggressively: outside those two companies, "AI demand barely exists" (May 8, 40L/1RT).

That is not a vibe-based anti-AI claim. It is a customer-concentration claim. In AI's Circular Psychosis (wheresyoured.at, May 2026), he argues that OpenAI and Anthropic are not merely the category leaders; they are the category's revenue, compute demand, and backlog wearing a trench coat. In Am I Meant To Be Impressed? (wheresyoured.at, May 2026), he makes the same argument from hyperscaler capex: Microsoft, Google, Amazon, and Meta are spending sums large enough to require a real market, while the visible revenues are either tiny, hidden, annualized, or dependent on OpenAI and Anthropic paying bills they cannot pay without more capital.

The mental model is simple and repeated: if a business is real, you should be able to name the customer, the revenue, the margin, the cost to serve, the power source, the construction timeline, and the path from invoice to cash. If you cannot, he assumes the "business" is a story being told to investors.

Physical reality beats press-release reality

The second load-bearing model is that infrastructure announcements are not infrastructure. A data center is not real when a CEO announces gigawatts, or when a county approves a site, or when a company says "contracted power." It is real when power is available, GPUs are installed, tenants are paying, and the economics work after depreciation and debt service.

That is why the May 10 Kevin O'Leary data-center tweet got 2085L/164RT: "Never gets built." He argues the proposed 9GW project is unprecedented, would take five to ten years if possible, and lacks both capital and conviction. The ranked replies mostly treat this as a local-environmental and fraud story, but Zitron's own follow-up is the stronger forecast: "Mr. Dogshit will be hyping quantum before 2028" (May 10, 188L). The joke is that the asset class will rotate before the asset exists.

The same model explains the Colossus-1 posts. Anthropic renting xAI's entire 300MW facility is funny to him because it reverses the hype narrative. If xAI had real internal compute demand, why could it hand the facility to a competitor? His May 6 post asks exactly that: "Where does AI compute demand actually exist?" (May 6, 33L/2RT, self-reply). In replies to others he gets more concrete: "Would be shocked if they make it to 90MW" (reply to @ShanuMathew93, May 10), and to @rdd147, "this would require first for somebody to build a 1gw data center" (May 10).

This is where his best work is: he turns a futuristic claim into an operations checklist.

LLMs are not minds, and pretending otherwise is fraud-adjacent

The technical ontology is deliberately unsentimental. He does not treat LLMs as proto-minds, coworkers, interns, or "autistic geniuses." When Sam Altman called 5.5 "an autistic genius," Zitron's hit reply was: "No it's a large language model" (May 9, 2566L/55RT). His immediate self-reply - "Also just fucking rude to autistic people?" (May 9, 643L) - shows the two layers of the objection. First, the claim anthropomorphizes a statistical system. Second, it borrows human disability language to add mystique to a product.

The consciousness-marketing tweet is even cleaner. Bloomberg Opinion framed "leaning into" the idea that Claude and ChatGPT might be conscious as a smart move for AI companies. Zitron's entire response: "So...lying?" (May 10, 2497L/204RT). The thread replies heard exactly what he wanted them to hear: snake oil, securities hype, "creative interpretation of the facts." There is no subtle position underneath. He thinks consciousness talk is marketing camouflage.

The Guardian interview (Jan 2026) supplies the long version: he calls LLMs "intelligent in the same way a pair of dice are intelligent." In the tweets, that becomes "No it's a large language model."

The moral crime is contempt for the user

Zitron's most-engaged tweet in the scrape is not a balance-sheet analysis. It is a sentence about a chart: "No Y axis just shows how much contempt Anthropic has for the audience" (May 5, 3656L/158RT). This is the moral center of the account. Bad metrics are not just sloppy. They express contempt. Vague run rates, annualized numbers, unlabelled charts, "contracted power," "precommitted capacity," and "AI revenue" without margins are all treated as insults to the public.

This comes from the same place as the Rot Economy frame that public profiles tie to his rise: modern tech companies optimize extraction and narrative rather than user value. The AI bubble is not a new thing to him; it is the most capital-intensive version of an old thing.

Intellectual DNA

His cited and implied curriculum is not a standard AI-safety or academic-economics canon.

Blind spots

The biggest blind spot is that he sometimes treats missing disclosure as near-proof of missing business. Often that is the right instinct; markets do hide weak numbers behind vague language. But it can make the analysis vulnerable when a real private customer, delayed build, or internal use case exists but is not visible yet.

The second blind spot is timing. He has a strong model of why something is unsustainable, but unsustainable systems can keep refinancing themselves long after the critic has identified the flaw. The account occasionally sounds as if the market must care soon because the facts are obvious. That is not how bubbles, credit cycles, or platform monopolies reliably behave.

The third blind spot is rhetorical: the sharper the insult, the more the audience rewards it. The fully researched newsletter promo on May 6 got 373L/100RT; the no-y-axis dunk got 3656L/158RT. His readers want the audit, but they reward the verdict.


II. AI, Data Centers & Bubble Mechanics

The AI industry is two subsidized companies and their landlords

Zitron's current AI thesis has four steps:

  1. OpenAI and Anthropic create the visible demand.
  2. Microsoft, Google, Amazon, Oracle, CoreWeave, and other neoclouds build or finance capacity around that demand.
  3. The same hyperscalers and investors fund OpenAI and Anthropic so they can pay the bills.
  4. Analysts then point to the resulting revenue/backlog as proof that AI demand is real.

That is why he keeps returning to circularity. On May 7 he wrote that IREN's future revenue is "Microsoft for OpenAI or NVIDIA feeding itself money" and concluded: "AI compute demand is an illusion" (223L/25RT). On May 5, after reports that Anthropic had committed to spend $200 billion on Google cloud and chips, he wrote that the market reaction showed "people without object permanence": Anthropic "cannot afford to pay these bills" because Google had just given it $10 billion (749L/79RT).

The argument is not that OpenAI and Anthropic have no revenue. It is that their revenue cannot support their infrastructure promises, and the infrastructure vendors are counting future revenue from customers who need continuous subsidy.
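The circularity test described above can be sketched as a toy check: map payments as edges, then ask what fraction of a vendor's "AI revenue" comes from customers the vendor is not itself financing. All figures below are invented placeholders for illustration, not Zitron's numbers or any company's actual filings.

```python
# Toy version of the counterparty-mapping move: how much of a payee's
# revenue survives once you exclude customers the payee also funds?
# All dollar amounts and the "Acme Corp" customer are invented.

def external_revenue_share(payments, funding):
    """payments: {(payer, payee): dollars} of AI revenue.
    funding: set of (funder, fundee) pairs (equity or credit support).
    Returns, per payee, the fraction of revenue from customers
    that the payee does NOT finance."""
    totals, external = {}, {}
    for (payer, payee), amount in payments.items():
        totals[payee] = totals.get(payee, 0.0) + amount
        if (payee, payer) not in funding:  # vendor is not funding this customer
            external[payee] = external.get(payee, 0.0) + amount
    return {p: external.get(p, 0.0) / t for p, t in totals.items()}

payments = {
    ("OpenAI", "Microsoft"): 9.0,     # cloud bills (illustrative)
    ("Anthropic", "Google"): 6.0,
    ("Acme Corp", "Microsoft"): 1.0,  # a hypothetical independent customer
}
funding = {("Microsoft", "OpenAI"), ("Google", "Anthropic")}

shares = external_revenue_share(payments, funding)
print(shares)  # Microsoft's non-circular share is 1/10; Google's is 0
```

On these made-up inputs, the vendor with a single independent customer keeps a sliver of "real" revenue, and the vendor whose only customer it also funds keeps none, which is the shape of the step-four objection.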

Capacity constraints are not bullish if they do not produce profit

In a normal software company, capacity constraints can indicate demand. Zitron's argument is that AI capacity constraints often indicate something worse: the product is too expensive to serve, while the company still cannot generate enough margin to justify more buildout.

The Claude/xAI deal is his perfect object lesson. On May 6 he wrote that Anthropic was "loosening rate limits across the board by renting the entirety of Musk's Colossus-1 data center" while giving away "$8-$13.50 for every $1 of sub revenue" (1278L/42RT). A few minutes earlier he asked what happens to Colossus-2 if xAI is giving away Colossus-1 (239L/11RT). The business question is not "is Claude popular?" It is "what does popularity cost?"

This is why he is impatient with generic "compute constrained" narratives. He does not accept "we could have made more revenue with more capacity" unless the company also shows the cost of that revenue, who is paying, and whether the added capacity improves or worsens margins.
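The "what does popularity cost?" test above reduces to one line of arithmetic. The 8x-13.5x range comes from the quoted tweet; everything else here is an invented illustration, not a claim about Anthropic's actual books.

```python
# Sketch of the cost-to-serve objection: if serving $1 of subscription
# revenue costs $8-$13.50 (the tweet's quoted range), relieving a
# capacity constraint multiplies the loss rather than the profit.

def marginal_loss(added_revenue, cost_per_revenue_dollar):
    """Cash impact of serving one more dollar of subsidized demand."""
    return added_revenue - added_revenue * cost_per_revenue_dollar

for ratio in (8.0, 13.5):
    # Each extra revenue dollar loses $7 to $12.50 at these ratios
    print(f"ratio {ratio}: {marginal_loss(1.0, ratio):+.2f} per $1")
```

Under this frame, "compute constrained" is only bullish if the ratio is below 1; otherwise more capacity means a bigger negative number, which is the point of the Colossus-1 posts.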

Token economics are the coming reality check

The tweets on token economics are a quieter but important layer of the worldview. He replies to @jrichlive: "token economics have not remotely 'turned positive' nor are they showing signs that they will" (May 7). When challenged through @BenBajarin and @jrichlive, he asks for concrete examples: "Who are the customers of inference? Can you give me some non-oAI/Anthropic/mag7 examples with spend?" (reply to @BenBajarin, May 7).

The long version is AI's Economics Don't Make Sense (wheresyoured.at, Apr 2026). The thesis is that subscriptions hide variable compute cost. Users experience "messages" or "requests"; providers experience token burn. Once Microsoft moves Copilot toward usage-based billing and Anthropic/OpenAI enterprise plans expose token costs, the magic disappears.

That same frame explains his May 5 post about Codex limits: "that's because the limits are inflated until may 31" (157L/9RT). He reads generous limits as customer-acquisition subsidy, not a stable feature.

The model-capability debate is secondary

He is not mainly arguing that LLMs can do nothing. In the Digital Disruption interview (Oct 2025), he gives the more nuanced version: it makes easy work easier and hard work harder. The tweet voice usually skips that caveat because his enemy is not narrow utility. His enemy is totalizing economic mythology: AI will replace labor, write all software, justify trillions in capex, and make every company more productive.

That is why he can dismiss a WSJ article asking LLMs which jobs AI will replace with "Who gives a shit" (May 10, 2619L/92RT). The problem is not that the exercise is imperfect; it is that it is epistemically worthless. The thread replies echo him: "We asked an AI model" is lazy journalism, a magic-eight-ball article, random numbers. He is training the audience to treat AI self-description as contaminated evidence.


III. Actionable Principles / Systems & Protocols

1. Make them show the axis, the denominator, and the dollar amount

If a chart has no Y-axis, the correct interpretation is not "interesting trend." It is "contempt." The May 5 Anthropic finance-sector chart tweet is the canonical protocol: no axis, no trust. The same standard applies to annualized run rates, unspecified "AI revenue," vague productivity percentages, and "contracted" language.

Supporting tweet: "No Y axis just shows how much contempt Anthropic has for the audience" (May 5, 3656L/158RT).

2. Convert every AI story into "who pays whom?"

Zitron's first move is not to ask whether the model is impressive. It is to map counterparties. Microsoft pays OpenAI? OpenAI pays Microsoft? Google funds Anthropic, Anthropic buys Google TPUs, Google books backlog? NVIDIA backstops IREN, IREN buys NVIDIA GPUs? He treats circular counterparties as the skeleton key.

Supporting tweet: "OpenAI and Anthropic account for $748 billion of Microsoft, Google and Amazon's revenue backlogs" (May 6 self-thread, 17L/2RT).

3. Ask whether the customer can pay without new capital

The question under every Anthropic/OpenAI post is solvency. On May 8, commenting on Anthropic renting from Akamai, he asks: "Is ANY data center capacity coming online? Why is Anthropic so desperate..." (136L/8RT). On May 5 he says Anthropic cannot pay Google's bills because Google just funded it. That is his durable test: if the customer needs its vendor or investors to finance the invoice, the revenue is not healthy.

Supporting tweet: "Probably should be more concerned if they can pay" (May 8, 96L/2RT).

4. Treat "announced" capacity as fiction until it is powered and used

He repeatedly audits whether data centers have actually been built. "At this point how much AI capacity is actually getting built?" (May 8, 235L/26RT). On May 6, he says recently announced projects keep turning out to be "barely under construction," stuck, or missing meaningful updates (321L/31RT).

Supporting tweet: "Suddenly everybody's talking about data centers not getting built now!" (May 6, 1252L/50RT).

5. Separate product usefulness from subsidy

An AI product can be useful and still be economically fake if the price does not expose the cost. This is why he cares about Copilot, Claude Code, and rate limits. The service people like may be the subsidized version, not the sustainable product.

Supporting tweet: "at what point does this 'become worth it' because it sure isn't right now" (May 6, 166L/18RT).

6. Watch for distress signals, not press releases

His distress signals are: rate-limit cuts, usage-based billing, discounted API credits, outages, unbuilt capacity, private-credit deals, loan-target cuts, neocloud misses, and companies hiding segment revenue. The tweets treat these as early tremors.

Supporting tweets: "Softbank Cuts OpenAI Loan Target By 80% To $6 Billion" gets "Uh oh!" (May 8, 1537L/90RT); Anthropic API-credit discounts prompt "LLMs are deeply unprofitable so discounts feel like they're a bad idea!" (May 7, 128L self-reply).

7. If a journalist asks a model about itself, ignore the article

He has unusually low tolerance for AI journalism that treats LLM output as evidence. "Who gives a shit" is not just contempt; it is a media-literacy rule. Models trained on AI hype are not neutral witnesses about AI's future labor impact.

Supporting tweet: "Who gives a shit" on WSJ asking models which jobs AI will replace (May 10, 2619L/92RT).


IV. Rhetorical Style / What Makes The Tweets Work

Verdict first, evidence behind the curtain

The highest-performing posts are tiny verdicts.

The pattern is that the long-form work creates a reservoir of authority, then the tweet spends it in one line. Readers do not need the whole spreadsheet every time. They know there is one somewhere.

Profanity functions as classification

"Dogshit" is not just decoration in this corpus. It is a category label for claims that fail basic reality tests. CoreWeave earnings are "hot dogshit" (May 7, 314L/11RT). IREN is "more dogshit from the neoclouds" (May 7, 223L/25RT). Kevin O'Leary becomes "Mr. Dogshit" (May 10, 2085L/164RT). The repetition turns a crude word into a sorting mechanism: this is not a hard problem, this is garbage.

That said, the insults work because they are surrounded by specifics. Under the O'Leary post he talks about 1GW feasibility, 90MW skepticism, and quantum hype. Under the jrichlive exchange he asks what data supports positive token economics. The public tweet is a punchline; the replies are a deposition.

The question mark is a weapon

Several posts are one-character or near-one-character interrogations: "?" at Jim Cramer comparing OnlyFans and Nvidia (May 8, 166L); "?" to unusual_whales on in-home mini data centers (May 5, 204L); "What" to @fchollet (reply, May 11). The move says: the claim is so malformed that the burden is entirely on the speaker.

It is a PR person's inversion of PR. Instead of polishing the narrative, he makes the narrative restate itself under fluorescent light.

The self-thread is where the thesis decompresses

His promotional posts often underperform the dunks, but they contain the actual architecture. The May 6 Am I Meant To Be Impressed? post got 373L/100RT, while the self-thread laid out the backlog dependency, circular financing, and hyperscaler-capex thesis in detail. Individual replies in that thread got only 16 to 111 likes. That divergence matters: his public influence comes from compression, but his credibility comes from readers believing the decompressed version exists.

Reply voice: warmer with peers, sharper with fuzzy claims

The replies-to-others file changes the read. The broadcast voice is maximalist and contemptuous. The reply voice can be curious and collegial.

But when someone makes an AI-economics claim, the reply voice tightens into cross-examination. To @jrichlive: "what data are you relying on to say that token economics have 'turned positive'?" (May 7). To @BenBajarin and @jrichlive: "Can you give me some non-oAI/Anthropic/mag7 examples with spend?" (May 7).

The distinction is important. He is not generically hostile. He is hostile to unsupported abstraction.

What the audience actually hears

The top thread replies are mostly not technical. They hear permission to call the thing bullshit.

This is the tradeoff of Zitron's style. He brings accounting details to a mass audience by laundering them through disgust. The disgust travels farther than the details.


V. Contrarian & Hidden Takes / Evolution & Tensions

He is not an AI doomer; he is an AI-business doomer

The lazy read is "AI hater." The better read is: he thinks generative AI is over-described as intelligence, underpriced as software, overbuilt as infrastructure, and misrepresented as labor replacement. That is not the same as fearing a god-machine. In fact, he has open contempt for mystifying language. "No it's a large language model" is anti-doomer as much as anti-booster.

His hidden contrarianism is against both sides of AI metaphysics. He rejects booster consciousness-talk, but he also rejects safety-adjacent grandeur when it becomes marketable mythology. He is much more interested in whether Anthropic has enough capacity, whether Microsoft can keep subsidizing Copilot, and whether a data center has power.

The after-three-drinks take: AI reveals contempt for labor

The Guardian interview says the quiet part more directly than the tweets. Zitron argues that the AI era has shown "how many people are excited to replace human beings." On X, this appears as contempt for job-replacement journalism and executive fantasy. The May 10 WSJ dunk is not only about bad methodology. It is about a media and executive class eager to treat work as a list of automatable outputs.

The Digital Disruption transcript makes this explicit through the "intern" discussion. He argues that executives see jobs as units of work and miss the human meanings of mentorship, learning, and systems knowledge. This is the moral underside of his software-economics posts: software engineering is not just code; writing is not just word count; an intern is not just output in a body.

He loves technology more than the people selling it

A useful tell: in long form, he lights up about Anker batteries, right-to-repair laptops, Waymo, handheld PCs, and even brief moments with Vision Pro. In the tweets, this leaks through as consumer-level irritation rather than pure cynicism: he complains to @AnkerOfficial that a projector update made setup take "20+" minutes instead of two (May 9).

That tweet matters more than it looks. The standard is not "new technology bad." The standard is "did the product get better for the user?" If not, he treats the company as having violated the compact.

The PR contradiction is real, but not accidental

WIRED's Oct 2025 profile frames the obvious tension: Zitron runs a PR firm and has represented AI or automation-adjacent companies while attacking AI hype. This could be hypocrisy, but it is also why the critique has teeth. He knows the machinery of flattering founders, planting narratives, and smoothing over weak claims. The account often reads like a flack prosecuting other flacks for malpractice.

The unresolved tension is that he sometimes wants a clean moral line through a messy professional life. His answer is practical: he works with companies he believes in and says he does not want to pitch generative AI. Critics may not find that sufficient. The audience mostly does not care, because the account's value is not purity; it is adversarial translation.

Evolution: from product rot to balance-sheet rot

Across public profiles and his own essays, the arc is:

  1. Tech products degraded because growth incentives ate user value.
  2. Generative AI became the ultimate growth story after 2021.
  3. The AI story required hidden subsidies, vague metrics, and massive capex.
  4. The next phase is not only product disappointment but financial contagion through private credit, neoclouds, hyperscalers, and data-center debt.

Inside the scraped May window, the evolution is compressed but visible. May 5-6 is mostly "AI demand story is a lie" and Copilot/token economics. May 7-8 turns to IREN, CoreWeave, SoftBank, OpenAI financing, and Anthropic discounts. May 10-11 shifts toward public-facing absurdity: consciousness marketing, WSJ job-replacement filler, Kevin O'Leary's 9GW project, and private credit.

He is moving from "this product doesn't work as advertised" to "this financing structure is going to hit ordinary balance sheets, pensions, utilities, water, and local governments."


VI. Network Graph

Recurring conversational peers

The replies-to-others corpus is unusually revealing because it is not full of fan management. It is mostly technical cross-talk, media chatter, and argument.

Information feeds and validators

He repeatedly amplifies or reacts to accounts and sources that produce financial or infrastructure breadcrumbs.

Antagonists

The antagonists are not just AI companies. They are hype intermediaries.

What he ignores

He mostly ignores model-leaderboard discourse unless it is attached to spending or deception. He is not spending the scraped week arguing benchmark deltas, prompt tricks, alignment taxonomies, or AGI definitions except to puncture them. He also pays little attention to small everyday AI users except as examples of subsidized consumption. The real subjects are executives, investors, reporters, cloud vendors, and local governments holding the bag.


VII. The One Essay He Keeps Rewriting

The one essay is: the tech industry has replaced value with growth narratives, and generative AI is the most expensive narrative yet.

Each version changes the object; the argument stays the same.

The essay's structure is also consistent:

  1. Quote a euphoric claim.
  2. Strip out the adjective.
  3. Ask what number would have to be true.
  4. Ask who pays.
  5. Ask whether the payer can pay.
  6. Translate the answer into contempt.

That is why his best tweets feel like conclusions rather than takes. "So...lying?" is not a joke looking for an argument. It is the final line of the argument after the spreadsheet has already been built.

Reading curriculum

The curriculum behind Zitron is less "read these philosophers" than "learn to read the documents nobody wants you to read."

His public persona is loud, but the deeper pattern is procedural. He wants the reader to stop asking whether AI feels futuristic and start asking whether the invoice clears.