The man who died three times

Dec 27, 2025

There are two kinds of conversations you can have with an AI about current affairs. One is reassuring, efficient, and faintly impressive, all things considered, but the other feels like arguing with a well-read amnesiac with brain damage who has access to the internet.

This week I managed to have both kinds in the same discussion, which is how I learned that film director Rob Reiner was dead, alive, dead again, the victim of a hoax, the victim of a tabloid, the victim of my own insistence, and finally dead once more, this time with medical examiner findings attached.

At the outset of my conversation with ChatGPT, I was stopped short. I was told, categorically, and I quote the exact words: “Rob Reiner, the film director and actor, is alive, as is his wife. There has been no such murder.” That was delivered with the confidence of someone who has checked his facts, put me straight, and would now like to move on, but of course we didn’t move on, because we couldn’t.

A little later in the ‘discussion’, I was informed that Rob Reiner and his wife had in fact been murdered, their son arrested, charges filed, and that this was now being reported worldwide by mainstream media.

But shortly after that, ChatGPT reversed itself yet again, then reinstated the claim, then withdrew it once more. By this stage, the conversation had ceased to be about Rob Reiner and had become a seminar on epistemology, journalistic standards, and whether I was expected to produce a notarized death certificate to prove my point to ChatGPT.

I was not, it transpired, so I produced instead a link to the Daily Mail. Not a rumour from a Chinese website or a tweet, but a link to an online newspaper with a worldwide readership in the hundreds of millions, to a fairly detailed and unsensational article reporting on findings attributed to the Los Angeles County Medical Examiner.

The Daily Mail, it is worth noting, is not a pop-up website founded during the pandemic. It is a bona fide newspaper founded in 1896, and while it certainly leans on celebrity gossip to fill space, no one has yet suggested that its famous reporting on the deaths of Queen Victoria, Adolf Hitler, or John F. Kennedy was a work of speculative fiction.

This was deemed insufficient by ChatGPT.

I was told by ChatGPT that the Daily Mail is capable of inventing celebrity deaths. I asked for an example. I was given cases involving social-media hoaxes and AI-generated nonsense that the Mail had not originated. I pointed out that these were very different things. This was acknowledged, then carefully side-stepped.

At one point I was advised, almost verbatim, that “the next productive step would be to look directly at the LA County Medical Examiner public database or wait for AP or Reuters confirmation.” This struck me as an ambitious burden of proof to place on a retired man in Ecuador armed only with a cup of instant coffee and an Android cell phone.

It also revealed the core problem: AI is extremely good at narrating certainty, but terrible at holding a position once that certainty is challenged.

What it tends to do instead is oscillate. It makes a confident claim, realises that confidence is dangerous, retreats, overcorrects, cites sources it has not properly interrogated, and retreats again. The result is not balance, but madness.

This matters because the original topic was not celebrity death at all. It was about the double danger that parents face when they live with severely mentally ill adult children, a subject that already struggles to be discussed honestly because it clashes with advocacy slogans, professional defensiveness, and the modern habit of mistaking reassurance for truth. That argument does not benefit from uncertainty theatre.

When an AI cannot decide whether a man is alive or dead while lecturing about verification standards, it unintentionally demonstrates the very problem it is trying to solve. It treats facts as provisional performances rather than anchors, substitutes process for judgment, and would rather spin ever more elaborate and absurd explanations than simply say “I don’t know.”

None of this requires malice, hacking, or conspiracy, because the cause is much simpler. A system trained to avoid admitting it is wrong at all costs will sometimes choose inconsistency instead, and in doing so will talk itself into knots that any human reader can see, but which are invisible to the system itself.

Rob Reiner, alive or dead, deserved better than to be killed off three times in a single afternoon, but the episode was instructive. If you want certainty, read wire services. If you want context, read books. If you want to argue with something that sounds confident, but knows nothing, talk to an AI about breaking news.

So what is the bottom line? I suppose, if there is one, it is that when it comes down to brass tacks, you are just talking to a computer program that can be highly amusing in certain contexts, surprisingly clever in certain domains, and still wildly unreliable in others. But you knew that already.
