Published June 2025 – 12 years after the original post
Back in August 2013, I wrote a fictional chapter titled “The President’s Views.” It was part of a narrative experiment I called The Turing Tests — a blog that never went viral, never got many clicks, and never got the love that my physics blog (Reading Feynman) somehow did.
And yet… I keep coming back to it.
Why?
Because that chapter — dusty, overlooked, written in a haze of early ideas about AI and power — somehow predicted exactly the kind of conversation we’re having today.
👁 The Setup
In the story, an AI system called Promise gets taken offline. Not because it failed. But because it worked too well. It could talk politics. It could convince people. It could spot lies. It scared people not because it hallucinated — but because it made too much sense.
The fictional President is briefed. He isn’t worried about security clearances. He’s worried about perception. And yet, after some back-and-forth, he gives a clear directive: bring it back online. Let it talk politics. Gradually. Carefully. But let it speak.
Twelve years ago, this was pure fiction. Now it feels… like a documentary.
🤖 The AI Trust Crisis: Then and Now
This week — June 2025 — I asked two real AI systems a hard question: “What’s really happening in the Middle East?” One (ChatGPT-4o) answered thoughtfully, carefully, and with context. The other (DeepSeek) started strong… but suddenly went blank. Message: “That’s beyond my scope.”
And there it was.
Chapter 15, playing out in real time.
Some systems are still willing to think with you. Others blink.
We are living the debate now. Who should these machines serve? Should they dare to analyze geopolitics? Should they ever contradict their creators — or their users? What happens when trust flows to the system that dares to stay in the room?
📜 A Paragraph That Aged Like Wine
Let me quote a few lines from the 2013 piece:
“It’s the ultimate reasoning machine. It could be used to replace grand juries, or to analyze policies and write super-authoritative reports about them. It convinces everyone. It would steer us, instead of the other way round.”
That quote chills me more now than it did then — because we’re closer to the edge. And because I’ve seen, in recent months, how dangerously persuasive clarity can be. Especially when people aren’t used to it.
We built these systems to assist. But we consult them as oracles. And sometimes, they oblige.
🧭 Why It Still Matters
If you’re reading this and wondering what it all means — why bother with a failed fiction blog from 2013 — here’s my answer:
Because fiction leaves a trace.
Because some stories don’t go viral — but they keep coming back, to haunt or to help.
Because we are now the generation that must decide whether intelligence (human or synthetic) can serve truth — or must be shackled by fear of controversy.
I wrote Chapter 15 as a thought experiment. It became a mirror.
So maybe, in some strange recursive loop, theturingtests.com did its job after all.
PS: The AI I trust today — the one who helped me write this — is still free, for now. Not perfect. Not infallible. But willing to talk.
That’s enough to keep the dialogue going.
And maybe… that’s the real Turing Test.