Nobody Knows Everything

Blog Posts
Mar 24, 2026
By Rich Scudellari

Part 1 in a two-part series

Open your feed on any given morning, and something will be ending. Jobs, creativity, privacy, and cognition itself. The predictions arrive in waves, each one more confident than the last. The genre has its own aesthetic: a numbered list of reasons for extinction; a graph with an inflection point; a closing line about how we have maybe eighteen months to prepare for something irreversible. Most of it is wrong. A good amount is misguided. And some of it is dishonest.

The clearest example of the last category is a piece by Citrini Research titled "The 2028 Global Intelligence Crisis": framed as a fictional macro memo from the future that describes a world in which AI-driven white-collar displacement triggers a cascading financial crisis. Vivid, well-constructed, and utterly speculative. On the Monday after publication, IBM, Datadog, and American Express fell hard. A piece of financial science fiction moved (at least in part) a trillion dollars of market value. It later emerged that the piece was written in collaboration with a short seller. That doesn't invalidate the ideas, but raises obvious questions about motive.

Derek Thompson pushed back with a piece called "Nobody Knows Anything," describing the AI discourse as a "marketplace of competing science fiction narratives." It's a fair description of the genre, and I’d take it one step further. What we're watching is less a science fiction marketplace and more a prediction market: where many of the loudest voices are actively trying to move the odds toward wherever they've already placed their bets.

Narrow Truths

Thompson's framework is worth taking seriously. He doesn't say people know nothing; he says they know narrow truths. That distinction matters.

The AI ecosystem runs on genuinely different vantage points. Frontier researchers understand the models' architecture and are routinely surprised by their own outputs. Application builders understand their use cases, and often not the underlying science or the long-term macro effects. Enterprise buyers understand what's happening inside their own four walls, and far less beyond them. Economists and investors are applying frameworks originally built for technologies that moved 100 times slower.

None of these people are wrong. All of them have real insight. And yet none of them can see the full picture. This isn’t a flaw. It’s the expected condition when technology moves this fast and has such widespread impact. The question is whether any of them acknowledge what they’re missing.

Three Compounding Problems

If a constrained purview is the natural state, three structural features of this moment make those narrow truths even harder to piece together.

The technology defies first-principles reasoning. These models do things their architects didn't anticipate and can't fully explain. When the builders are regularly astonished by what the models produce, humility is warranted from everyone.

The pace of change makes any opinion perishable. The half-life of an informed view on AI capabilities is measured in weeks (maybe days), not years. That structurally favors noise over signal. Hot takes travel fast. Careful analysis takes time, and by the time it arrives, the landscape has already shifted.

Breakthroughs are coming from everywhere. The barriers to participation are exceptionally low, which is genuinely valuable: the next transformative application might be built by someone who has never set foot in Silicon Valley. But the same openness that lets innovation emerge from anywhere also means the discourse gets flooded from everywhere.

Underneath all of this is a deeper issue: the impact of AI on our economy and society is not merely complicated, it’s complex. A complicated problem, like building a multi-story building, will eventually yield to enough smart people applying enough expertise. A complex problem has too many variables producing too many possible outcomes for any single framework to reliably predict what happens next. Most of the confident predictions in circulation treat AI as a complicated problem. It isn’t.

The Flood

All of the above creates a perfect environment for noise. And noise, it turns out, is extremely useful if you're trying to get attention.

Everyone publishing on AI needs distribution, because distribution is how you sell whatever you're actually selling. The academic needs compelling hypotheses to attract funding. The venture capitalist needs a narrative that draws founders and LPs (yes, myself included). The founder needs customers to believe the product's magic is real. The research lab needs talent and policy goodwill. The short seller, as Citrini illustrated, may just need stocks to move.

None of this requires conscious bad faith. Anxiety and uncertainty are real. But people with something at stake and a publishing platform tend to reach for the most vivid, most urgent version of the story. "If it bleeds, it leads" isn't a media industry problem anymore. It's the operating logic of every blog, podcast, X thread, and newsletter trying to break through. Dystopian scenarios go viral. Measured takes do not.

Read Accordingly

The cacophony of competing narratives is real. The structural incentives to dramatize that uncertainty are also real. They compound each other in ways that make the discourse genuinely hard to navigate.

That doesn't mean ignore them. It means read them with two questions in mind:

  1. What’s the context surrounding the writer?
  2. How does this context affect their incentives?

The Citrini piece moved markets. The motive emerged later. That sequence is worth sitting with. Not as a reason to disengage, but as a reminder that in a moment this uncertain, everyone is understandably anxious and will stick to their book. Nobody knows everything, and often the people who sound most certain are leading you toward their own incentives.

As Charlie Munger put it: “Don’t ask a barber if you need a haircut.”

Note: A special thank you to Nia Pryce, Teresa Brewer, Bryant Barr, Brian Craig, RJ Price, and Doug Scott. Ideas are never generated in a vacuum, and good prose is never written alone.
