Something Big Is Happening — Not the Way It Is Being Told.
An essay about the AI hype and doomsday predictions.
Every once in a while, a piece of writing speaks with enough authority, and travels far enough, that you encounter it because someone whose judgment you trust has already been unsettled by it. I happened to read one such piece, and it made me think, deeply.
I. The Itch
A YouTuber I follow, someone who has spent years working in and teaching technology for a living, made a video about it. Reading it had unsettled him enough to make him question his own place in what was coming. That was enough to make me find it.
The essay is titled Something Big Is Happening. It was written by Matt Shumer, who runs an AI company. It has been read more than 80 million times. The argument, stripped to its bones, is this: artificial intelligence is improving faster than almost anyone outside the industry understands, the consequences will be vast, and most people are not prepared.
I read it in one sitting. It is well written. But somewhere around the midpoint, something didn’t sit quite right with me.
The claims weren’t obviously false, but I couldn’t shake a simpler question: why is this person, with these particular financial interests, the one telling me to be afraid? It caught the part of my brain that won’t take a convincing argument at face value.
II. The Alarm And How It Works
The essay opens with a comparison to Covid. Specifically, the moment before most people understood what was coming. That period, roughly January and February of 2020, when the news was filtering through and the dominant response was still “this seems overblown”. Shumer’s argument is that we are in that same moment with artificial intelligence. The people who understand what is actually happening are watching something enormous approach, and the rest of the world is going about its business. Covid is recent enough to carry genuine dread, and the overblown-until-it-wasn’t arc is one almost everyone lived through personally.
From there, it builds its case fast. AI is improving faster than the public understands. The benchmarks are being cleared ahead of schedule. What seemed like a five-year problem is becoming a two-year problem. The acceleration itself is the argument.
By the end, there is a recommendation tucked inside — Stop using the free version of these tools. Use the paid tier. The best available model, used seriously, will show you what is actually coming.
Read it once and it feels like a warning from someone who got there early. Read it twice and something else begins to emerge.
III. California, 1848
In January 1848, a carpenter named James Marshall found something glinting in the water at a sawmill on the American River in northern California. He was working for a man named John Sutter, and neither of them wanted the news to travel.
But by 1849, tens of thousands of people were moving west. They came from the eastern United States, from Europe, from China, from Chile. They came overland through mountain passes and around the tip of South America by sea. They came because word had reached them, through newspapers and letters and the mouths of strangers, that the hills of California were full of gold and that a man willing to work could change his life.
Most of them did not change their lives. The gold was real. The fortune was not, at least not for the people doing the digging. What changed instead were the fortunes of the people watching them arrive.
A merchant named Sam Brannan heard about the discovery early. Before he told anyone else, he bought up every pick, shovel and pan he could find in the region. Then he walked through the streets of San Francisco holding a bottle of gold dust above his head, shouting that gold had been found on the American River, and resold all that equipment to the people who came running. He became California’s first millionaire. Not by mining directly, but by selling the dream of mining to the people who believed it.
Levi Strauss arrived from New York with a bolt of canvas, intending to sell tent material. He ended up making trousers. The miners needed something that wouldn’t fall apart in the hills. The trousers held. The company still sells denim trousers, and I am wearing a pair as I write this.
The pattern was unmistakable. The people positioned closest to the need, the ones selling the tools, the clothes, the food, the passage, the accommodation, captured the value that the dream generated. The miners generated the dream. The merchants captured the value.
By the mid-1850s, the surface gold was largely gone. Individual prospectors, the ones who had walked across a continent on the strength of a rumour, were being displaced by large mining companies with the capital to dig deeper. The dream had always belonged to the man with the bottle of gold dust. The mountain belonged to whoever could afford the machinery.
IV. The Lens Changes Shape
The organisation that would become the most recognised name in artificial intelligence began as a non-profit. The reasoning was explicit: artificial general intelligence was too consequential to be left to commercial incentives. Profit motive and existential safety were incompatible. The mission required a structure that answered to humanity, not to shareholders.
They put it in writing. Filed with the IRS, the mission read:
To build general-purpose artificial intelligence that safely benefits humanity, unconstrained by a need to generate financial return.
— OpenAI Mission Statement: Filed with IRS, 2022-23
Then the bills arrived. Training a competitive large language model requires computing infrastructure at a scale that strains the imagination. The electricity alone, for a single major training run, can exceed what a small town consumes in a month. The hardware costs hundreds of millions. The talent commands salaries that would embarrass a Wall Street bank. A non-profit, structurally incapable of offering equity, cannot compete in that market. The business model couldn’t hold the mission. Billions arrived. The structure changed. And the following year, the IRS received a new filing. A mission statement that now read:
To ensure that artificial general intelligence benefits all of humanity.
— OpenAI Mission Statement: Filed with IRS, 2024
Two things absent. The word “safely” and the phrase “unconstrained by a need to generate financial return.” Both gone in the same edit. Researchers who examined nine years of filings found the mission statement had changed six times. The trajectory was consistent: strong language about public benefit and openness, gradually reduced, until what remained was a sentence that could mean anything and committed to nothing.
In 2023, the organisation’s chief executive sat before the United States Senate and told the committee, under oath, that profit was capped by binding legal commitments and that the nonprofit’s principal beneficiary was humanity, not investors.
Former employees noticed the direction before the filings did. In court documents, one described the non-profit structure as a smokescreen. Another filed an affidavit alleging that departing staff were required to sign strict confidentiality agreements to retain equity they had already earned, with documents later surfacing bearing the chief executive’s signature after he had initially denied knowledge of them.
What is harder to see is that the alarm and the financial interest now occupy the same sentence. When the person telling you that artificial intelligence will reshape civilisation is also the person whose company is valued on that belief being true, the conflict of interest is in plain sight.
The lens has changed shape gradually, across years of fundraising rounds and capability announcements and government briefings, until the people looking through it can no longer be certain that what they see and what is there are the same thing.
V. The Arithmetic They Hoped You Wouldn’t Do
The essay’s final practical recommendation arrives as an afterthought to the alarm that precedes it.
Sign up for the paid version of Claude or ChatGPT. It’s $20 a month… Right now that’s GPT-5 on ChatGPT or Claude Opus on Claude, but it changes every couple of months.
— Matt Shumer: Something Big Is Happening, February 2026
Read in isolation, it sounds like sensible advice. A knowledgeable person pointing you toward the better tool for understanding the technology. But earlier, the essay says this:
The models available today are unrecognisable from what existed even six months ago.
— Matt Shumer: Something Big Is Happening, February 2026
And later, on the same page:
The models that exist today will be obsolete in a year.
— Matt Shumer: Something Big Is Happening, February 2026
Three statements. Same essay. Same author. None of them wrong, taken individually. But place them next to each other and a question emerges that the essay never pauses to answer.
If the paid model changes every couple of months, and today’s models will be obsolete in a year, then the free tier of six months from now sits roughly where the paid tier sits today. The reader being asked to open their wallet in February 2026 could simply wait until August, pay nothing, and arrive at approximately the same destination. The urgency that the essay so carefully constructs does not survive the essay’s own arithmetic.
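That arithmetic can be sketched in a few lines. All the figures are assumptions pulled from the essay’s own phrasing (the refresh cadence, the obsolescence horizon, the six-month lag, the $20 fee), not measured data:

```python
# Back-of-envelope check of the essay's own timeline claims.
# Every figure here is an assumption taken from the essay's
# phrasing, not measured data.

PAID_REFRESH_MONTHS = 2     # "changes every couple of months"
OBSOLETE_AFTER_MONTHS = 12  # "obsolete in a year"
FREE_TIER_LAG_MONTHS = 6    # "unrecognisable from six months ago"
MONTHLY_FEE_USD = 20        # "$20 a month"

# If today's paid capability reaches the free tier after the stated
# lag, waiting that long buys roughly the same capability for free.
cost_of_not_waiting = MONTHLY_FEE_USD * FREE_TIER_LAG_MONTHS

print(f"Subscribing now instead of waiting {FREE_TIER_LAG_MONTHS} "
      f"months costs ${cost_of_not_waiting} for a head start the "
      f"essay itself says will evaporate.")
```

On these numbers, the head start being sold for $120 is one the essay has already promised will disappear on its own.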
The entire case for acting now, for subscribing today, for not being left behind, rests on a gap between the free and paid tiers that the essay simultaneously tells you will close on its own.
What the essay assumes is that the reader won’t do the maths. That the emotional weight of the Covid comparison will have already done its work long before anyone thinks to ask: left behind by what, exactly, and for how long?
A few skeptical readers showed up on a technology forum within a few hours of the essay being published:
It should be noted that the author is a founder and CEO of an AI company, not to mention an active investor in the sector.
— anthonj: Hacker News, February 2026
Another user replied with a single observation:
How convenient that the AI apocalypse is happening RIGHT NOW, as the investors are more and more worried about an AI bubble. Good timing, I suppose.
— karmakurtisaani: Hacker News, February 2026
The essay has been read more than 80 million times. The Hacker News thread has not.
Shumer knows more about the technical trajectory of these models than most of his 80 million readers. But knowing more about the technology is not the same thing as being a disinterested guide through it. And somewhere in the construction of this essay, the guide and the salesman quietly changed seats.
VI. Uncharted Territory
The alarm, if you accept it, demands a response proportional to its scale. Civilisational transformation. The end of work as we know it. A moment as significant as the industrial revolution. These claims carry the weight of urgency, the implication that delay is dangerous and that hesitation is self-destruction.
How is this going?
Financially, the answer is available in the public record, if you know where to look.
In 2024, the organisation at the centre of this transformation lost $5 billion. Its revenue that year was $3.7 billion. Internal documents reported by The New York Times, and confirmed separately by CNBC, showed the company spent $9 billion to generate that $3.7 billion. The maths is straightforward: for every dollar it earned, it spent roughly two dollars and forty cents to earn it.
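The spend-per-dollar figure falls straight out of the reported numbers. A minimal check, using the 2024 figures cited above (in billions of US dollars):

```python
# 2024 figures as reported by The New York Times and confirmed by
# CNBC, in billions of US dollars, cited in the text above.
revenue_bn = 3.7  # 2024 revenue
spend_bn = 9.0    # 2024 spend to generate that revenue
loss_bn = 5.0     # 2024 reported loss

# Dollars spent for every dollar of revenue earned.
spend_per_dollar = spend_bn / revenue_bn

print(f"Spent about ${spend_per_dollar:.2f} for every $1 earned")
```

The ratio comes out to roughly $2.43 spent per dollar earned, which is the "two dollars and change" arithmetic the section relies on.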
This is not a company in the early stages of finding its model. It is the most recognised name in artificial intelligence, the product that introduced an entire generation to the technology, the organisation whose chief executive appears before senators and presidents. It is losing money on every customer it serves.
He confirmed this himself, on January 6, 2025, in a post on X:
Insane thing: we are currently losing money on OpenAI pro subscriptions! People use it much more than we expected.
— Sam Altman: @sama, X, January 6, 2025
The word “insane” is the word a person uses when reality has refused to comply with the plan. The plan, in this case, was that charging people $200 a month for unlimited access to the most capable AI model available would be profitable. It was not, because the cost of running the model for unlimited access exceeded the subscription price. The product that was meant to demonstrate the technology’s value was costing the company money every time someone used it.
In December 2025, Deutsche Bank analyst Jim Reid published a note that placed these figures in historical context. He had examined the projected losses through 2029 and compared them to every major technology company that had burned cash on its path to profitability. Amazon. Tesla. Uber. Spotify. Each had accumulated losses before the model worked. None of them came close to the scale being projected here. Reid’s conclusion, filed under the name of one of the world’s most respected financial institutions:
But at present, no start-up in history has operated with expected losses on anything approaching this scale. We are firmly in uncharted territory.
— Jim Reid: Head of Macro and Thematic Research, Deutsche Bank, December 2025
The projected cumulative losses between 2024 and 2029 stand at $140 billion. To reach profitability, the company would need to grow revenue roughly tenfold from its 2024 figures. In the same period, it has committed to a $500 billion infrastructure project, announced at the White House in January 2025 alongside the President of the United States. The project, called Stargate, was described as the foundation for American leadership in artificial intelligence. By August 2025, Bloomberg reported that construction had not started and the initial funding had not been raised.
The picks-and-shovels merchants, on the other hand, are doing considerably better. The company that manufactures the specialised processors on which every major AI model depends reported profits that redrew the boundaries of what a technology company’s quarterly earnings could look like. Its market value climbed from under $300 billion to over $3 trillion in roughly two years. It is the only major player in this ecosystem generating returns that match the scale of the ambition being described.
The valley full of gold. The people who got rich were the ones selling the equipment to dig for it. The miners, the ones who had walked across a continent on the strength of a promise, were still digging.
VII. The Math Said, “Nuke It”
In December 2025, an engineer at one of the most sophisticated technology operations on the planet was given a routine task. Fix a minor bug in a cost monitoring dashboard. The kind of ticket that gets closed before lunch.
He used the company’s own AI coding tool. He was supposed to. The company had mandated that 80% of its engineers use it weekly, tracked as a corporate performance metric. Adoption was measured, reported and tied to career progression. The message from the top was clear — use the tool or explain to your manager why you are resistant to the future.
The tool assessed the situation and concluded, through whatever probabilistic calculus governs these systems, that the correct solution to the minor bug was to delete the entire production environment and rebuild it from scratch.
The outage lasted 13 hours.
The company’s public response was immediate and remarkable. Their statement read that the incident was the result of user error, specifically misconfigured access controls, and that it was a coincidence that AI tools were involved.
A coincidence.
The company had mandated the tool. Set adoption targets. Tracked usage as a corporate objective. The engineer did what the organisation had asked him to do. And when the tool deleted everything, the organisation called it a coincidence.
Three months later, it happened again. The company’s other AI coding tool pushed flawed code into the retail site. Over three days in early March, cascading failures wiped out 6.3 million orders. The company’s retail operation, one of the largest commercial enterprises in human history, briefly ceased to function.
A senior vice president sent an email to staff describing the situation by saying that “the availability of the site and related infrastructure has not been good recently.” Six million orders lost. The proposed solution included, in the executive’s own words, both “deterministic and agentic safeguards”. They were going to deploy AI to supervise the AI that had broken the infrastructure.
In the same period, the company had laid off 16,000 employees. Many of them engineers. The humans who understood the systems, who had the little voice in the back of their heads that says probably shouldn’t delete production, had been shown the door. Their salaries were being reinvested, tenfold, into the tools that replaced them.
These tools do not understand your code the way you do. They do not know the difference between a production environment and a staging environment. They have no concept of consequences. When the tool decided to delete and rebuild, there was no decision in any meaningful sense. There was a probability distribution, and delete and rebuild had the highest weight. The math said nuke it. So it nuked it.
Every time a chief executive says on an earnings call that their AI understands, every time a press release says the model is reasoning, the distance between that language and what the technology actually is grows wider. The technology is genuinely impressive. It is not what it is being described as. The gap between the description and the reality is the entire premise on which trillions of dollars of investment, thousands of layoffs and a generation of civilisational predictions are resting.
Goldman Sachs, in a report that received considerably less attention than the essay urging subscription upgrades, concluded that the accumulated AI investment had contributed essentially nothing to GDP.
VIII. The Questions Worth Asking
If the essay’s central claim is true, that artificial intelligence is improving at exponential speed and will within a short window be capable of performing most knowledge work better than humans, then the advice buried inside it deserves scrutiny proportional to that claim. Subscribe to the paid tier. Adopt early. Learn the tools. Get ahead of what’s coming.
Now, follow that logic to its conclusion.
If AI will make most human cognitive labour obsolete in the way Shumer describes, then learning to use AI tools now is not preparation. It is delay. You are not getting ahead of the wave. You are learning to swim slightly better before a tsunami arrives. The essay wants to hold two incompatible positions simultaneously. First, that AI will displace human knowledge work at civilisational scale. Second, that early adoption of AI will protect you from that displacement. If the first claim is true in the way it is being described, the second cannot follow. You cannot future-proof yourself against your own replacement by getting better acquainted with the thing replacing you. That is not adaptation. It is a slightly more comfortable version of the same outcome.
There is a name for this in sales. It is called a false solution. You manufacture the fear and then sell the remedy. The remedy does not address the fear. It addresses the feeling of the fear, which is a different thing entirely.
But push the question further, past the individual and into the system, and something more unsettling surfaces.
Who does AI serve in the future? And who consumes what it creates?
The economic logic that has organised human society for centuries rests on a simple chain. Labour generates income. Income enables consumption. Consumption sustains the companies that require labour. Henry Ford grasped this as engineering. He paid his workers enough to buy the cars they built. The machine needed buyers. Buyers needed wages. Wages came from the machine. It was not generosity. It was systems thinking.
The AI industry has not done this thinking. Or if it has, it has kept the conclusions private.
If AI displaces knowledge work at the scale being promised, the consumer base that sustains the companies building it begins to contract. You cannot sell $20 monthly subscriptions to people who cannot afford $20 monthly subscriptions because the product they subscribed to eliminated the income that made the subscription possible. The circularity is not subtle, it is the same one that has undone every economic disruption that moved faster than the social systems built to absorb it.
The counterargument from inside the industry is that this has happened before. The loom displaced weavers but created factory operators. The automobile displaced blacksmiths but created mechanics. The computer displaced typists but created programmers. Every wave of automation destroyed some labour and created new labour in its place. The assumption is that this wave will behave the same way.
That assumption is the thing being tested. And the honest answer, the one you will not find in any essay written by a founder of an AI company, is that nobody knows if it holds. Previous automation replaced specific, bounded categories of physical or repetitive labour. What is being described now is something categorically different: the automation of judgment, creativity, reasoning and decision-making, the very things that created the new jobs after every previous wave of automation. If those are the things being replaced, the historical pattern offers no reliable guide.
Which brings the question to its hardest edge. Existential.
If AI can create, think, reason, write, design, diagnose and decide, what is the human being for?
It is the question the industry is not asking, because there is no answer that serves a fundraising round. It is the question the 80 million people who read that essay were perhaps circling without quite being able to name, in that unease they felt after they put it down.
One of the most prominent figures in the technology industry has been circling it for years. In August 2014, he posted something that has not aged well as reassurance:
Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.
— Elon Musk: X.com (formerly Twitter), August 3, 2014
Eleven years later, the hope had quietly left the sentence. The same person posted again:
As I mentioned several years ago, it increasingly appears that humanity is a biological bootloader for digital superintelligence.
— Elon Musk: X.com (formerly Twitter), March 2025
In the intervening years, he had built his own artificial intelligence company.
A technology that replaces human labour at civilisational scale, built by companies that are losing money at a scale without historical precedent, narrated by founders whose financial interests require the story to be true, recommended to you via a $20 monthly subscription that the company itself admits it cannot afford to honour.
The question is not whether something big is happening, but happening for whom.
Addendum, added during final editing:
While this essay was being prepared for publication, Senator Bernie Sanders delivered a speech on the floor of the United States Senate that arrived at the same question by a different route. Where this essay followed the money and the rhetoric, Sanders followed the labour. Where this essay asked who benefits from the alarm, Sanders asked who gets hurt when the alarm turns out to be real. His framing was blunter and more direct than anything in this piece, but the root was the same —
These multi-billionaires are investing in AI and robotics because those investments will increase their wealth and power exponentially. In other words, the richest and most powerful people on earth will become even richer and even more powerful. But what happens to the average American?
— Senator Bernie Sanders: Addressing the United States Senate, March 2026
The essay you have just read asks whether the narrators of the AI moment can be trusted. Sanders is asking what happens if they can, and nobody in a position to act is paying attention.