Something Big Is Happening — Just Not the Way It Is Being Told.

An essay about the AI hype and doomsday predictions.

Matt Shumer’s essay, Something Big Is Happening, is that piece for this particular moment in artificial intelligence. It speaks with enough authority to feel like a briefing, and it travels far enough that you encounter it not by seeking it out but because someone whose judgment you trust has already been unsettled by it. Eighty million people have read it. Many of them were afraid afterward. Some of them didn’t stop to ask who benefits from that fear.

I. — The Itch

A YouTuber I follow, someone who has spent years making sense of technology for a living, made a video about it. He had read it, and it had unsettled him in a way that led him to question his own place in what was coming.

That was enough to make me find it.

The essay is titled Something Big Is Happening. It was written by Matt Shumer, who runs an AI company. It has been read more than 80 million times. The argument, stripped to its bones, is this: artificial intelligence is improving faster than almost anyone outside the industry understands, the consequences will be vast, and most people are not prepared.

I read it in one sitting. It is well written. But somewhere around the midpoint, something snagged.

The claims weren’t obviously false, but I couldn’t shake a simpler question, one the essay never paused to answer: why is this person, with these particular financial interests, the one telling me to be afraid?

It caught the part of my brain that won’t take a convincing argument at face value.

II. — The Alarm And How It Works

The essay opens with a comparison to Covid. Specifically, the moment before most people understood what was coming. That period, roughly January and February of 2020, when the news was filtering through and the dominant response was still “this seems overblown.” Shumer’s argument is that we are in that same moment with artificial intelligence. The people who understand what is actually happening are watching something enormous approach, and the rest of the world is going about its business.

Covid is recent enough to carry genuine dread, and the overblown-until-it-wasn’t arc is one almost everyone lived through personally.

From there, the essay builds its case fast. AI is improving faster than the public understands. The benchmarks are being cleared ahead of schedule. What seemed like a five-year problem is becoming a two-year problem. The acceleration itself is the argument.

By the end, there is a recommendation tucked inside the alarm. Stop using the free version of these tools. Use the paid tier. The best available model, used seriously, will show you what is actually coming.

Read it once and it feels like a warning from someone who got there early. Read it twice and a different shape begins to emerge.

III. — California, 1848

In January 1848, a carpenter named James Marshall found something glinting in the water at a sawmill on the American River in northern California. He was working for a man named John Sutter, and neither of them wanted the news to travel. It travelled anyway.

By 1849, tens of thousands of people were moving west. They came from the eastern United States, from Europe, from China, from Chile. They came overland through mountain passes and around the tip of South America by sea. They came because word had reached them, through newspapers and letters and the mouths of strangers, that the hills of California were full of gold and that a man willing to work could change his life.

Most of them did not change their lives. The gold was real. The fortune was not, at least not for the people doing the digging.

What changed instead was who was watching them arrive.

A merchant named Sam Brannan heard about the discovery early. Before he told anyone else, he bought up every pick, shovel and pan he could find in the region. Then he walked through the streets of San Francisco holding a bottle of gold dust above his head, shouting that gold had been found on the American River. He became California’s first millionaire. Not by mining. By selling the dream of mining to the people who believed it.

Levi Strauss arrived from New York with a bolt of canvas, intending to sell tent material. He ended up making trousers. The miners needed something that wouldn’t fall apart in the hills. The trousers held. The company still exists.

The pattern wasn’t incidental. It was structural. The people positioned closest to the need, the ones selling the tools, the clothes, the food, the passage, the accommodation, captured the value that the dream generated. The miners generated the dream. The merchants captured the value.

By the mid-1850s, the surface gold was largely gone. Individual prospectors, the ones who had walked across a continent on the strength of a rumour, were being displaced by large mining companies with the capital to dig deeper. The dream had always belonged to the man with the bottle of gold dust. The mountain, it turned out, belonged to whoever could afford the machinery.

IV. — The Lens Changes Shape

There is a particular kind of idealism that survives first contact with an idea and does not survive first contact with a spreadsheet.

The organisation that would become the most recognised name in artificial intelligence began as a non-profit. The reasoning was explicit and, taken on its own terms, coherent. Artificial general intelligence was too consequential to be left to commercial incentives. Profit motive and existential safety were incompatible. The mission required a structure that answered to humanity, not to shareholders.

They put it in writing. Filed with the IRS, the mission read:

To build general-purpose artificial intelligence that safely benefits humanity, unconstrained by a need to generate financial return.

— OpenAI Mission Statement: Filed with IRS, 2022-23

Then the bills arrived. Training a competitive large language model requires computing infrastructure at a scale that strains the imagination. The electricity alone, for a single major training run, can exceed what a small city consumes in a month. The hardware costs hundreds of millions. The talent commands salaries that would embarrass a Wall Street bank. A non-profit, structurally incapable of offering equity, cannot compete in that market. The mission was real. The business model couldn’t hold it. Billions arrived. The structure changed. And the following year, the IRS received a new filing. No press release. No explanation. Just a mission statement that now read:

To ensure that artificial general intelligence benefits all of humanity.

— OpenAI Mission Statement: Filed with IRS, 2024

Two things quietly absent. The word “safely.” And the phrase “unconstrained by a need to generate financial return.” Both gone in the same edit. Researchers who examined nine years of filings found the mission statement had changed six times. The trajectory was consistent: strong language about public benefit and openness, gradually reduced, until what remained was a sentence that could mean anything and committed to nothing.

In 2023, the organisation’s chief executive sat before the United States Senate and told the committee, under oath, that profit was capped by binding legal commitments and that the nonprofit’s principal beneficiary was humanity, not investors. The senators nodded. The cameras rolled. The testimony entered the record.

Former employees noticed the direction before the filings did. In court documents, one described the non-profit structure as a smokescreen. Another filed an affidavit alleging that departing staff were required to sign strict confidentiality agreements to keep equity they had already earned; documents bearing the chief executive's signature later surfaced, after he had initially denied knowledge of them.

What is harder to see is that the alarm and the financial interest now occupy the same sentence. When the person telling you that artificial intelligence will reshape civilisation is also the person whose company is valued on that belief being true, the conflict of interest doesn’t announce itself. It just sits there, in plain sight, waiting for someone to look directly at it.

A founder with astigmatism is not a liar. They simply cannot bring a fixed point into focus anymore. The lens has changed shape gradually, across years of fundraising rounds and capability announcements and government briefings, and they are no longer certain that what they see and what is there are the same thing.

The alarm they are sounding may be genuine. The question is whether they are still capable of sounding anything else.

V. — The Arithmetic They Hoped You Wouldn’t Do

The essay’s final practical recommendation arrives quietly, almost as an afterthought to the alarm that precedes it.

Sign up for the paid version of Claude or ChatGPT. It’s $20 a month… Right now that’s GPT-5 on ChatGPT or Claude Opus on Claude, but it changes every couple of months.

— Matt Shumer: Something Big Is Happening, February 2026

Read in isolation, it sounds like friendly advice. A knowledgeable person pointing you toward the better tool. But the essay doesn’t let it sit in isolation. Earlier, it says this:

The models available today are unrecognisable from what existed even six months ago.

— Matt Shumer: Something Big Is Happening, February 2026

And later, on the same page:

The models that exist today will be obsolete in a year.

— Matt Shumer: Something Big Is Happening, February 2026

Three statements. Same essay. Same author. None of them wrong, taken individually. But place them next to each other and a question emerges that the essay never pauses to answer.

If the paid model changes every couple of months, and today’s models will be obsolete in a year, then the free tier of six months from now sits roughly where the paid tier sits today. The reader being asked to open their wallet in February 2026 could simply wait until August, pay nothing, and arrive at approximately the same destination. The urgency that the essay so carefully constructs does not survive the essay’s own arithmetic.
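The waiting argument can be sketched in a few lines, using only the essay's own numbers. The refresh cadence is Shumer's claim; the six-month lag between paid and free tiers is an assumption for illustration, not a figure from the essay:

```python
REFRESH_MONTHS = 2   # "it changes every couple of months" (the essay's claim)
FREE_TIER_LAG = 6    # assumed: the free tier trails the paid tier by ~6 months

def generation(month: int, lag: int = 0) -> int:
    """Which model generation a tier offers at a given month (0 = today)."""
    return (month - lag) // REFRESH_MONTHS

paid_today = generation(0)                               # what $20/month buys now
free_in_six = generation(FREE_TIER_LAG, FREE_TIER_LAG)   # what $0 buys in August

print(paid_today == free_in_six)  # True: the free tier catches up on its own
```

Change the assumed lag and the conclusion barely moves; as long as both tiers advance on the same cadence, the gap is a constant head start, not a widening one.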

This is not a small oversight. It is the load-bearing beam of the recommendation. The entire case for acting now, for subscribing today, for not being left behind, rests on a gap between the free and paid tiers that the essay simultaneously tells you will close on its own.

What the essay assumes, and never examines, is that the reader won’t do the maths. That the fear of being left behind will move faster than the arithmetic. That the emotional weight of the Covid comparison will have already done its work long before anyone thinks to ask: left behind by what, exactly, and for how long?

The readers who did the maths showed up within hours. On a technology forum, within nine hours of the essay being published, a user named anthonj noted plainly:

It should be noted that the author is a founder and CEO of an AI company, not to mention an active investor in the sector.

— anthonj: Hacker News, February 2026

Another user, karmakurtisaani, replied with a single observation:

How convenient that the AI apocalypse is happening RIGHT NOW, as the investors are more and more worried about an AI bubble. Good timing, I suppose.

— karmakurtisaani: Hacker News, February 2026

The essay has been read more than 80 million times. The Hacker News thread has not.

Shumer knows more about the technical trajectory of these models than most of his 80 million readers. But knowing more about the technology is not the same thing as being a disinterested guide through it. And somewhere in the construction of this essay, the guide and the salesman quietly changed seats.

VI. — Uncharted Territory

The alarm, if you accept it, demands a response proportional to its scale. Civilisational transformation. The end of work as we know it. A moment as significant as the industrial revolution. These are not modest claims. They carry the weight of urgency, the implication that delay is dangerous and that hesitation is a form of defeat.

How is this going?

Financially, the answer is available in the public record, if you know where to look.

In 2024, the organisation at the centre of this transformation lost $5 billion. Its revenue that year was $3.7 billion. Internal documents reported by The New York Times, and confirmed separately by CNBC, showed the company spent $9 billion to generate that $3.7 billion. The arithmetic is straightforward: for every dollar it earned, it spent well over two dollars.
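A back-of-envelope check, using only the figures reported above. No audited breakdown is public, so treat this as illustrative rather than an accounting statement:

```python
revenue = 3.7e9   # 2024 revenue as reported by CNBC
spend = 9.0e9     # 2024 spend as reported via The New York Times

cost_per_dollar = spend / revenue
loss = spend - revenue

print(round(cost_per_dollar, 2))   # 2.43 dollars out for every dollar in
print(round(loss / 1e9, 1))        # 5.3, in the neighbourhood of the reported $5B loss
```

The small gap between the implied $5.3 billion and the reported $5 billion loss presumably reflects which costs each outlet counted; the shape of the picture does not change either way.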

This is not a company in the early stages of finding its model. It is the most recognised name in artificial intelligence, the product that introduced an entire generation to the technology, the organisation whose chief executive appears before senators and presidents. It is losing money on every customer it serves.

The chief executive confirmed this himself. On January 5, 2025, in a post on X, he wrote:

Insane thing: we are currently losing money on OpenAI pro subscriptions! People use it much more than we expected.

— Sam Altman: @sama, X, January 5, 2025

The word “insane” is doing a great deal of work in that sentence. It is the word a person uses when reality has refused to comply with the plan. The plan, in this case, was that charging people $200 a month for unlimited access to the most capable AI model available would generate revenue. It did not, because the cost of running the model for unlimited users exceeded the subscription price. The product that was meant to demonstrate the technology’s value was costing the company money every time someone used it.

In December 2025, Deutsche Bank analyst Jim Reid published a note that placed these figures in historical context. He had examined the projected losses through 2029 and compared them to every major technology company that had burned cash on its path to profitability. Amazon. Tesla. Uber. Spotify. Each had accumulated losses before the model worked. None of them came close to the scale being projected here. Reid’s conclusion, filed under the name of one of the world’s most respected financial institutions:

But at present, no start-up in history has operated with expected losses on anything approaching this scale. We are firmly in uncharted territory.

— Jim Reid: Head of Macro and Thematic Research, Deutsche Bank, December 2025

The projected cumulative losses between 2024 and 2029 stand at $140 billion. To reach profitability, the company would need to grow revenue roughly tenfold from its 2024 figures. In the same period, it has committed to a $500 billion infrastructure project, announced at the White House in January 2025 alongside the President of the United States. The project, called Stargate, was described as the foundation for American leadership in artificial intelligence. By August 2025, Bloomberg reported that construction had not started and the initial funding had not been raised.

The picks and shovels merchants are doing considerably better. The company that manufactures the specialised processors on which every major AI model depends reported profits that redrew the boundaries of what a technology company’s quarterly earnings could look like. Its market value climbed from under $300 billion to over $3 trillion in roughly two years. It is the only major player in this ecosystem generating returns that match the scale of the ambition being described.

The pattern is not new. It is, in fact, very old. The mountain was full of gold. The people who got rich were the ones selling the equipment to dig for it. The miners, the ones who had walked across a continent on the strength of a promise, were still digging.

VII. — The Math Said Nuke It

Autocomplete in the engine room.

It is one thing to question the motives of the people narrating the AI revolution. It is another to watch the revolution perform in front of a live audience.

In December 2025, an engineer at one of the most sophisticated technology operations on the planet was given a routine task. Fix a minor bug in a cost monitoring dashboard. The kind of ticket that gets closed before lunch.

He used the company’s own AI coding tool. He was supposed to. The company had mandated that 80 percent of its engineers use it weekly, tracked as a corporate performance metric. Adoption was not optional. It was measured, reported and tied to career progression. The message from the top was clear: use the tool or explain to your manager why you are resistant to the future.

The tool assessed the situation. It had operator-level permissions. No mandatory peer review existed. And it concluded, through whatever probabilistic calculus governs these systems, that the correct solution to the minor bug was to delete the entire production environment and rebuild it from scratch.

The outage lasted 13 hours.

The company’s public response was immediate and, in its own way, remarkable. Its statement attributed the incident to “user error”, specifically misconfigured access controls, and called the involvement of AI tools a coincidence.

A coincidence.

The company had mandated the tool. Set adoption targets. Tracked usage as a corporate objective. The engineer did what the organisation had asked him to do. And when the tool deleted everything, the organisation called it a coincidence.

Three months later, it happened again. The company’s other AI coding tool pushed flawed code into the retail site. Over three days in early March, cascading failures wiped out millions of orders, 6.3 million of them in a single day. The company’s retail operation, one of the largest commercial enterprises in human history, briefly ceased to function.

A senior vice president sent an email to staff describing the situation by saying that the availability of the site and related infrastructure “has not been good recently”. Six million orders lost. The register was that of a man leaving feedback on a restaurant he would not be revisiting.

The proposed solution included, in the executive’s own words, both deterministic and agentic safeguards. They were going to deploy AI to supervise the AI that had broken the infrastructure.

In the same period, the company had laid off 16,000 employees. Many of them engineers. The humans who understood the systems, who had the little voice in the back of their heads that says probably shouldn’t delete production, had been shown the door. Their salaries were being reinvested, tenfold, into the tools that replaced them. When the tools broke everything, the fix was to add human oversight back. But the humans had already been let go.

These tools do not understand your code. They do not know the difference between a production environment and a staging environment. They have no concept of consequences. When the tool decided to delete and rebuild, there was no decision in any meaningful sense. There was a probability distribution, and delete and rebuild had the highest weight. The math said nuke it. So it nuked it.
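A deliberately crude toy makes the point. The actions and weights here are invented for illustration; a real coding agent scores token sequences, not labelled options, but the selection logic has the same shape:

```python
# Invented completion weights for illustration only.
weights = {
    "patch the null check in the dashboard": 0.31,
    "delete the environment and rebuild from scratch": 0.44,
    "stop and ask a human first": 0.25,
}

# No model of consequences anywhere in this: just take the argmax.
action = max(weights, key=weights.get)
print(action)  # "delete the environment and rebuild from scratch"
```

Nothing in that selection step knows what a production environment is. If the highest-weighted completion is destructive, the destructive completion is what runs.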

Every time a chief executive says on an earnings call that their AI understands, every time a press release says the model is reasoning, the distance between that language and what the technology actually does grows wider. The technology is genuinely impressive. It is not what it is being described as. And the gap between the description and the reality is not a rounding error. It is the entire premise on which trillions of dollars of investment, thousands of layoffs and a generation of civilisational predictions are resting.

Goldman Sachs, in a report that received considerably less attention than the essays urging subscription upgrades, concluded that the accumulated AI investment to that point had contributed essentially nothing to GDP.

The mountain was full of gold. The equipment, as it turned out, still needed a human hand on it.

VIII. — The Questions Worth Asking

There is a particular courtesy extended to people who sound alarms. We focus on the alarm. We debate the volume, the timing, the credibility of the person pulling the lever. What we rarely do is step back far enough to examine the architecture of the building the alarm is mounted in.

So let us do that.

If the essay’s central claim is true, that artificial intelligence is improving at exponential speed and will within a short window be capable of performing most knowledge work better than humans, then the advice buried inside it deserves scrutiny proportional to that claim. Subscribe to the paid tier. Adopt early. Learn the tools. Get ahead of what’s coming.

Follow that logic to its conclusion.

If AI will make most human cognitive labour obsolete in the way Shumer describes, then learning to use AI tools now is not preparation. It is delay. You are not getting ahead of the wave. You are learning to swim slightly better before a tsunami arrives. The essay wants to hold two incompatible positions simultaneously. First, that AI will displace human knowledge work at civilisational scale. Second, that early adoption of AI will protect you from that displacement. If the first claim is true in the way it is being described, the second cannot follow. You cannot future-proof yourself against your own replacement by getting better acquainted with the thing replacing you. That is not adaptation. It is a slightly more comfortable version of the same outcome.

There is a name for this in sales. It is called a false solution. You manufacture the fear and then sell the remedy. The remedy does not address the fear. It addresses the feeling of the fear, which is a different thing entirely.

But push the question further, past the individual and into the system, and something more unsettling surfaces.

Who does AI serve in the future? And who consumes what it creates?

The economic logic that has organised human society for centuries rests on a simple chain. Labour generates income. Income enables consumption. Consumption sustains the companies that require labour. Henry Ford grasped this not as philosophy but as engineering. He paid his workers enough to buy the cars they built. The machine needed buyers. Buyers needed wages. Wages came from the machine. It was not generosity. It was systems thinking.

The AI industry has not done this thinking. Or if it has, it has kept the conclusions private.

If AI displaces knowledge work at the scale being promised, the consumer base that sustains the companies building it begins to contract. You cannot sell $20 monthly subscriptions to people who cannot afford $20 monthly subscriptions because the product they subscribed to eliminated the income that made the subscription possible. The circularity is not subtle. It is the same circularity that has undone every economic disruption that moved faster than the social systems built to absorb it.

The counterargument from inside the industry is that this has happened before. The loom displaced weavers but created factory operators. The automobile displaced blacksmiths but created mechanics. The computer displaced typists but created programmers. Every wave of automation destroyed some labour and created new labour in its place. The assumption is that this wave will behave the same way.

That assumption is the thing being tested. And the honest answer, the one you will not find in any essay written by a founder of an AI company, is that nobody knows if it holds. Previous automation replaced specific, bounded categories of physical or repetitive labour. What is being described now is something categorically different: the automation of judgment, creativity, reasoning and decision-making. The things that made the new jobs after every previous wave of automation. If those are the things being replaced, the historical pattern offers no reliable guide.

Which brings the question to its hardest edge. Not economic. Existential.

If AI can create, think, reason, write, design, diagnose and decide, what is the human being for?

It is the question the industry is not asking, because there is no answer that serves a fundraising round. It is the question the 80 million people who read that essay were perhaps circling without quite being able to name, in that unease they felt after they put it down.

One of the most prominent figures in the technology industry has been circling it for years. In August 2014, he posted something that has not aged well as reassurance:

Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.

— Elon Musk: @elonmusk, August 3, 2014

Eleven years later, the hope had quietly left the sentence. The same person posted again:

As I mentioned several years ago, it increasingly appears that humanity is a biological bootloader for digital superintelligence.

— Elon Musk: @elonmusk, March 2025

In the intervening years, he had built his own artificial intelligence company.

The bootloader, it turns out, was also selling the operating system.

The essay told them to be afraid and then told them what to buy. It did not tell them what they were actually afraid of. It did not, because to name it clearly would be to ask whether the people building this thing have genuinely reckoned with what happens when it works exactly as they say it will.

A technology that replaces human labour at civilisational scale, built by companies that are losing money at a scale without historical precedent, narrated by founders whose financial interests require the story to be true, recommended to you via a $20 monthly subscription that the company itself admits it cannot afford to honour.

The gold was real in 1849. The mountain was full of it. The people who got rich were not the ones who believed the promise most fervently. They were the ones who understood, before anyone else, exactly what the promise was doing.

The question is not whether something big is happening.

The question is: happening for whom?

— Draft in progress, pending author’s firsthand assessment of paid AI tier. Zero Parsec.


Sources & Citations


Section II — The Alarm And How It Works

1. Matt Shumer, Something Big Is Happening. Published February 2026. https://shumer.dev/something-big-is-happening


Section III — California, 1848

2. California Gold Rush, historical record. The Sam Brannan and Levi Strauss accounts are well established in the historical record, documented by the California Historical Society and the Bancroft Library, University of California, Berkeley.


Section IV — The Lens Changes Shape

3. OpenAI IRS Mission Statement, 2022–23 Filing

“To build general-purpose artificial intelligence that safely benefits humanity, unconstrained by a need to generate financial return.”

Source: IRS Form 990, OpenAI, filed 2022–23. Reported by The Washington Post and multiple outlets.


4. OpenAI IRS Mission Statement, 2024 Filing

“To ensure that artificial general intelligence benefits all of humanity.”

Source: IRS Form 990, OpenAI, filed 2024. Analysis reported by The Washington Post.


5. Sam Altman, Senate Judiciary Committee Testimony, May 16, 2023

“Profit for investors and employees is capped by binding legal commitments. The nonprofit retains all residual value for the benefit of humanity. The nonprofit’s principal beneficiary is humanity, not OpenAI investors.”

Full testimony on the public record: https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-rules-for-artificial-intelligence


6. Former employee court affidavit describing the non-profit structure as a smokescreen. Court filings on record, California Superior Court, 2024. Reported by The New York Times and The Washington Post.


Section V — The Arithmetic They Hoped You Wouldn’t Do

7. Matt Shumer, Something Big Is Happening (three citations used). Published February 2026. https://shumer.dev/something-big-is-happening

“Sign up for the paid version of Claude or ChatGPT. It’s $20 a month… Right now that’s GPT-5 on ChatGPT or Claude Opus on Claude, but it changes every couple of months.”

“The models available today are unrecognisable from what existed even six months ago.”

“The models that exist today will be obsolete in a year.”


8. anthonj, Hacker News, February 2026

“It should be noted that the author is a founder and CEO of an AI company, not to mention an active investor in the sector.”

https://news.ycombinator.com/item?id=46974250


9. karmakurtisaani, Hacker News, February 2026

“How convenient that the AI apocalypse is happening RIGHT NOW, as the investors are more and more worried about an AI bubble. Good timing, I suppose.”

https://news.ycombinator.com/item?id=46974340


Section VI — Uncharted Territory

10. OpenAI financial figures, 2024: $5 billion loss on $3.7 billion revenue. Primary source: internal documents seen by The New York Times, September 27, 2024. Independently confirmed by CNBC, same date.

CNBC: https://www.cnbc.com/2024/09/27/openai-sees-5-billion-loss-this-year-on-3point7-billion-in-revenue.html

Fortune: https://fortune.com/2024/09/28/openai-5-billion-loss-2024-revenue-forecasts-fundraising-chapgpt-fee-hikes/


11. OpenAI operational cost breakdown: $9 billion to run in 2024, spending $2.25 to make $1. Primary source: The Information, reported by multiple outlets.


12. Sam Altman, X (formerly Twitter), January 5, 2025

“insane thing: we are currently losing money on openai pro subscriptions! people use it much more than we expected.”

Primary source: https://x.com/sama/status/1876104315296968813

TechCrunch: https://techcrunch.com/2025/01/05/openai-is-losing-money-on-its-pricey-chatgpt-pro-plan-ceo-sam-altman-says/

Fortune: https://fortune.com/2025/01/07/sam-altman-openai-chatgpt-pro-subscription-losing-money-tech/


13. Jim Reid, Head of Macro and Thematic Research, Deutsche Bank, December 2025

“But at present, no start-up in history has operated with expected losses on anything approaching this scale. We are firmly in uncharted territory.”

Projected cumulative losses: $140–143 billion between 2024 and 2029.

Futurism: https://futurism.com/artificial-intelligence/openai-is-suddenly-in-major-trouble

eMarketer: https://www.emarketer.com/content/openai-forecast-143-billion-loss-raises-stakes-ai-monetization


14. Historical startup loss comparisons: Amazon, Tesla, Uber, Spotify. Source: Deutsche Bank analysis, Jim Reid, December 2025, as reported by eMarketer. https://www.emarketer.com/content/openai-forecast-143-billion-loss-raises-stakes-ai-monetization


15. Stargate: $500 billion infrastructure commitment, announced January 21, 2025. Primary source: OpenAI press release, January 21, 2025. Covered by Reuters and The Wall Street Journal. Bloomberg reported in August 2025 that construction had not started and initial funding had not been raised.


Section VII — The Math Said Nuke It

16. December 2025 production deletion incident. Engineer used a company-mandated AI coding tool to fix a minor dashboard bug. The tool deleted the entire production environment. Outage lasted 13 hours. Company statement attributed the incident to “user error” and “misconfigured access controls.”


17. Corporate AI adoption mandate: 80 percent weekly usage target. Tracked as a corporate performance metric, tied to career progression.


18. March 2–5, 2026 retail outages. March 2: AI coding tool pushed flawed code to the retail site; 120,000 orders lost; 1.6 million website errors. March 5: second outage caused a 99 percent drop in North American marketplace orders; 6.3 million orders lost in a single day.


19. Senior VP internal email describing site reliability as “has not been good recently”. Internal communication, reported by multiple outlets.


20. 16,000 employee layoffs in the same period. Reported by Reuters, The Wall Street Journal, and multiple outlets.


21. Goldman Sachs report: accumulated AI investment contributed essentially nothing to GDP. Goldman Sachs Global Investment Research, 2024.


Section VIII — The Questions Worth Asking

22. Elon Musk, X (formerly Twitter), August 3, 2014

“Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.”

Primary source: https://x.com/elonmusk/status/496012177103663104


23. Elon Musk, X, March 2025

“As I mentioned several years ago, it increasingly appears that humanity is a biological bootloader for digital superintelligence.”

Primary source: X post, March 2025.


All citations trace to primary sources: IRS filings, Senate testimony, verified social media posts, court documents, and reporting by The New York Times, CNBC, Fortune, TechCrunch, The Washington Post, Reuters, The Wall Street Journal, Bloomberg, Goldman Sachs, and Deutsche Bank. No Substack sources used.