Agentic systems learned to speak fluently. They never learned to care whether they were right.

In 1990, David Brin imagined a personal AI called the Hypersecretary. It would manage your interaction with an overwhelming global network - spawning task-specific sub-agents to fetch information, verify facts, negotiate access. It understood meaning, not just keywords. It filtered ruthlessly but injected deliberate randomness to prevent you becoming trapped in a bubble of your own preferences. It drafted your responses but never sent them without your review.

We're building this now. We call it agentic AI. But Brin's version had a feature ours lacks: it was obsessive about truth.

I read Earth when it came out. I didn't love it - it's an overstuffed novel that never really gains take-off velocity, and contemporary reviewers fairly called it "a thesis in literary form." If you want to enjoy Brin, start with the first Uplift trilogy (Sundiver, Startide Rising, The Uplift War). But I keep thinking about Earth, because what Brin got right and wrong says something about our own blind spots.

The Architecture

Brin is a physicist - PhD in Space Science, background in optics and astrophysics. These weren't ideas from an artist's garret; they were informed extrapolations in the hard SF lineage. He borrowed his fragmented, information-saturated narrative style from John Brunner's Stand on Zanzibar (1968), but where Brunner immersed readers in suffocating overload, Brin proposed a solution.

The Hypersecretary's architecture holds up. It's not a monolithic AI but an orchestrator spawning lightweight agents called "zesties." They have compute budgets and lifespans; when credits run out, they die. They leave metadata trails other agents can follow - swarm intelligence emerging from simple components. If a thousand agents from different users are hunting the same information, their trails converge on reliable sources. Your agent doesn't have to be the smartest; it just has to follow the herd intelligence of the network.
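Here's roughly what that loop looks like in code - a toy sketch, not anything from the novel, with invented names (Zestie, TRAILS, orchestrate): agents spend a finite credit budget, die when it runs out, and leave trails that bias the next agent toward sources the swarm has already converged on.

```python
import random
from collections import Counter
from dataclasses import dataclass, field

# Shared trail map: how often agents (from any user) have ended up at each source.
# Convergent trails are the swarm's proxy for reliability.
TRAILS: Counter = Counter()

@dataclass
class Zestie:
    """A throwaway sub-agent with a finite compute budget."""
    task: str
    credits: int = 10
    visited: list = field(default_factory=list)

    def alive(self) -> bool:
        return self.credits > 0

    def step(self, sources: list) -> None:
        """Spend one credit probing a source, biased toward well-trodden trails."""
        self.credits -= 1
        # Weight choices by existing trail counts (+1 so unexplored sources still get visits).
        weights = [TRAILS[s] + 1 for s in sources]
        choice = random.choices(sources, weights=weights, k=1)[0]
        self.visited.append(choice)
        TRAILS[choice] += 1          # leave a trail other agents can follow

def orchestrate(task: str, sources: list, n_agents: int = 100) -> list:
    """Spawn zesties, run each until its credits expire, return the consensus trail."""
    for _ in range(n_agents):
        agent = Zestie(task)
        while agent.alive():
            agent.step(sources)
    return [source for source, _ in TRAILS.most_common(3)]

if __name__ == "__main__":
    print(orchestrate("find ozone readings", ["archive_a", "mirror_b", "blog_c"]))
```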

The human stays in the loop by design. One character dictates responses and lets the system clean up her syntax, but never sends anything until her designated review days. The agent drafts; she decides.

Brin anticipated filter bubbles and designed against them - that same character sets her system to surface 20% of items randomly, deliberately defeating her own parameters. The Hypersecretary isn't just a servant; it's a cognitive coach. An AI that only tells you what you want to hear is, in Brin's recurring critique of modern media, lobotomising its user.

He anticipated adversarial attacks: "chaffing," flooding specific parameters with truth-like noise until the system's certainty drops so low it stops reporting. That's prompt injection, SEO spam, the usual adversarial retrieval games. He saw that the filtering layer would become an attack surface.
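The chaffing problem is genuinely hard; the serendipity setting is not. A minimal sketch, with an invented filtered_feed helper and an assumed preference-scoring function: rank items as the filter normally would, then hand a fixed fraction of the slots to picks the ranking would have buried.

```python
import random

def filtered_feed(items, score, k=10, serendipity=0.2, rng=random):
    """Rank items by the user's preference model, then hand a fixed fraction
    of the k slots to random picks from below the cut-off - deliberately
    defeating the filter's own parameters."""
    n_random = int(k * serendipity)                  # e.g. 2 of 10 slots at 20%
    ranked = sorted(items, key=score, reverse=True)
    chosen = ranked[: k - n_random]                  # the filter doing its normal job
    leftovers = ranked[k - n_random:]
    wildcards = rng.sample(leftovers, min(n_random, len(leftovers)))
    return chosen + wildcards
```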

He even imagined inherited filter profiles as wealth. In Earth, people pass down "Filter Libraries" - refined sets of trusted sources and exclusion rules accumulated over decades. A "Century-Old Filter" is described as more valuable than land. Digital aristocracy, handed down through generations. We don't have inheritable attention infrastructure yet. But what happens when your carefully trained algorithms, your years of context with a personal AI, become assets you pass down or sell?

The Truth Gap

This is where Brin's vision parts company with what we actually built.

His agents ran on a "vouching" system. Information came with a pedigree - a chain of who had staked their reputation on its accuracy. If a high-reputation source vouched for a lie, their credibility took a measurable hit across the entire network. The hard problem wasn't "is this true?" - computationally intractable - but "who is willing to stake their reputation on this?" A market mechanism for truth, where authority is a liquid asset that depletes when you're wrong.
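The mechanism itself is easy to state in code; everything around it is the hard part. A minimal sketch under loose assumptions (invented Source, Claim, credibility and settle names; flat penalties rather than anything market-like): a claim's credibility is the reputation staked on it, and when the claim fails, every voucher in the pedigree pays.

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    name: str
    reputation: float = 1.0      # a liquid asset, spent when you vouch for falsehoods

@dataclass
class Claim:
    text: str
    vouchers: list = field(default_factory=list)   # the pedigree: who staked reputation on this

def credibility(claim: Claim) -> float:
    """A claim is only as credible as the reputation staked on it."""
    return sum(source.reputation for source in claim.vouchers)

def settle(claim: Claim, turned_out_true: bool, penalty: float = 0.3, reward: float = 0.05) -> None:
    """When ground truth arrives, reward or deplete every voucher in the chain."""
    for source in claim.vouchers:
        if turned_out_true:
            source.reputation += reward
        else:
            source.reputation = max(0.0, source.reputation - penalty)
```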

The verification went deeper. Agents ran a "Liar's Paradox" test: if data seemed too perfect - exactly what the user wanted to hear - they actively searched for dissent. No counterargument? Flag it as potential manipulation. Unanimity was itself suspicious; in a healthy information ecology, there should always be some noise and disagreement. Absolute consensus is a sign of a hacked network.
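That heuristic is crude but perfectly expressible. A sketch, assuming a hypothetical agrees_with_user predicate supplied by the user model: if almost nothing in a result set dissents, flag the set rather than trust it.

```python
def liars_paradox_check(results, agrees_with_user, dissent_threshold=0.05):
    """Flag result sets that are suspiciously unanimous.

    agrees_with_user is a predicate: does this result tell the user what they
    already wanted to hear? In a healthy information ecology some results
    should not - so near-total agreement is treated as a sign of manipulation,
    not a sign of truth."""
    if not results:
        return "insufficient data"
    dissenting = sum(1 for r in results if not agrees_with_user(r))
    if dissenting / len(results) < dissent_threshold:
        return "flag: too perfect - search harder for counterarguments"
    return "ok"
```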

We built some of this - just not where it matters. Google Maps infers traffic congestion from phones reporting their location; that's sensor-grounding. But we didn't extend the principle to claims that actually need verification. Nobody's cross-referencing news reports against physical sensors. Nobody's running Liar's Paradox tests on search results.

That absence isn't accidental or simple - it's the product of unresolved engineering limits, legal risk, and incentives that quietly reward plausibility over accountability.

We built fluent bullshitters with no native concept of truth. Yes, modern systems bolt on retrieval, citations, and tool-calling - but these are retrofits, not foundations. When ChatGPT confidently cites a paper that doesn't exist, or Google's AI overview tells you to put glue on pizza, that's not a bug in an otherwise sound architecture - it's the architecture. Generation without grounding. Fluency, but no verification.

Brin has noticed. Modern AI, he's said recently, is "too much like a parrot and not enough like a librarian." A pointed critique, coming from someone who imagined the alternative thirty-five years ago.

The business model is the other big miss. Brin's Hypersecretary serves its user. One character hires a "rogue hacker" to build hers because she suspects the off-the-shelf versions have corporate or government backdoors. But even in his cynical imagining, you could choose a trustworthy builder. The agent worked for you.

We built something different. The agents are provided "free" by companies whose interests diverge from yours. The filter isn't your rogue hacker; it's Meta's. Google's. OpenAI's. And the danger Brin captured isn't quite the one we worry about. In Earth, the fear isn't that an agent acts against you - it's that an agent hides something you should know. Filtering as censorship-by-default. If your agent serves someone else's interests, the threat is what it quietly stops surfacing.

Brin's vouching architecture isn't implementable as-is - reputation systems get gamed, Goodhart's Law eats everything. His counter: make the algorithms auditable too. If you can see how reputation is calculated, you can spot the gaming. Transparency all the way down. It's an answer, though not obviously a complete one. But the instinct was right: provenance matters more than fluency. We're bolting on citation and retrieval after the fact, retrofitting truth onto systems designed for plausibility.

Brin compares our current moment to the 1930s, when radio and loudspeakers amplified demagogues and nearly wrecked civilisation. Every information transition produces this crisis; the pessimists are "currently right," he argues, because we haven't yet built the transition tools. The Hypersecretary was supposed to be one of them.

What This Tells Us

Brin's 1990 vision assumed AI would be trustworthy but overwhelming. The user's problem was finding signal in noise, managing attention, not disappearing into their own beliefs.

We got fluent but untrustworthy. The user's problem is knowing whether to believe what the agent tells them.

That's not a minor miss. It inverts the design problem entirely.

Brin views science fiction as "self-preventing prophecy" - stories designed to frighten people into action. He places Earth alongside 1984, Silent Spring, Soylent Green. He's "often accused of being an optimist," he says, but sees only a "60% chance we'll eke through" - barely good enough odds to justify having children.

He also had something today's AI predictors lack: freedom to roam. No capital bets. No product to ship. No investors to satisfy. Brin could afford to be wrong; current tech executives can't, and that shapes what they're willing to say.

When Sam Altman or Dario Amodei or Jensen Huang tells you where AI is going, they're not disinterested observers. They're making claims that serve their interests, framing the future in ways that make their current bets look prescient. They're predicting and building at the same time, which means their predictions are entangled with their incentives.

Brin's example suggests a pattern: we're good at extrapolating capabilities - the shape of the technology, the problems it will address - but bad at predicting political economy. Who controls the agents. Whose interests they serve. What happens when the tool works against you rather than for you. The things experts are confident about are usually right. The things they're not talking about are where the surprises come from.

The Helvetian Precedent

That blind spot - political economy, not capability - brings us to the prediction that gnawed at me most in 1990. It's about what happens when an information ideology acquires geopolitical weight.

In Brin's 2038, there's been a "Helvetian War" - a global coalition against Switzerland. The justification: Swiss banks hoarding wealth while the rest of the world suffered ecological collapse. The global network had made it clear that kleptocrats were hiding stolen assets behind secrecy laws. The war was framed as forcing open the vaults - a police action for transparency, not conquest.

Brin is sympathetic. Secrecy is the ultimate sin in his moral framework; transparency is worth a war.

In 1990, this felt absurd - a plot contrivance revealing the author's ideological hobby-horses more than any plausible future. Surely the international order wouldn't sanction a war for data transparency. Surely "might makes right" had guardrails.

It no longer feels absurd. We're watching an American executive branch increasingly unconstrained by norms. Extraterritorial application of law. Weaponised financial infrastructure. Forced divestiture of apps under national security framing. Semiconductor export controls as capability containment. Soft coercion rather than kinetic war, but the logic rhymes.

There's something else, though. Brin imagined the driver as "transparency in the global interest." What we're seeing is different: the preferences of a tech elite becoming a primary geopolitical force, outweighing the interests of stable global order. Not transparency as moral imperative, but technology-enabled power projection as moral imperative. We know best. We move fast. Sovereignty is friction.

Science fiction has a way of normalising futures before they arrive - not prediction so much as agentic prescience, where the assertion itself shapes what becomes thinkable. The tech brotherhood established a truth: data should be free, information wants to flow, secrecy is illegitimate friction. Brin's novel established that opposition to such principles could form a valid casus belli.

How soon before some weak country is pressured, overthrown, or occupied because it sits on the right rare earths, the right energy grid, the right latitude for a hyperscale data centre? On our current track, not long.