Terry Mattingly -- Rational Sheep

Life beyond screens: Thinking about an AI apocalypse

In which Ross Douthat hears some scary prophecies from AI insider Daniel Kokotajlo

tmatt
May 18, 2025


In the world of Bible basics, these words from the book of Exodus will be found in quite a few Top 10 lists: “Thou shalt not bear false witness against thy neighbor.”

The Bible has lots of things to say about lying. You can look it up. It’s clear that lying is a sin.


This brings us to an increasingly relevant question: Can a computer lie? That leads directly to another hot-button question: Can a computer sin?

Well, a computer can be programmed — by sinful, fallen human beings — to twist facts or manipulate data to confuse or deceive people. But are these sins committed by the computers or the programmers?

This isn’t a new question. If you are of a certain age, flash back to 1968 and recall the voice of the HAL 9000 computer saying, “I’m sorry, Dave, I’m afraid I can’t do that.”

The ultimate question is whether, at some point, flaws in the programming will merge with some kind of super Artificial Intelligence and the computers will, you know, start working their way through the whole Ten Commandments.

I have never been a Terminator franchise fan who sees Skynet every time I ask my smartphone (or my car’s computer) to get me from Point A to Point B. But I have done my share of thinking about the can-computers-sin equation ever since doing the case study that I included in the Rational Sheep post, “Asking AI gods: What does ‘Rational Sheep’ mean?” If you missed that one, here is some background:

Asking some new questions about “good” religion vs. “bad” religion tensions [in mainstream newsrooms] made me think about AI. What would happen if the AI gods programmed inaccurate or biased information about religion into their digital machines? What if — when scanning “libraries” in the online world — they simply avoided religious issues and information altogether?

To probe this, I decided to ask the online gods this self-serving question: “What is Rational Sheep?” I also asked, “What does Rational Sheep mean?”, but that didn’t change the results.

You may recall that I received a spot-on reply from xAI’s Grok program.

Things were totally different at ChatGPT, which offered this creative — sort of — response:

Rational Sheep is a term that is often used in discussions of decision-making, particularly in contexts related to economics, psychology, and social behavior. It refers to individuals who, despite potentially having access to the information needed to make informed or "rational" decisions, follow the majority or popular opinion without critically evaluating it. … This can happen in a lot of areas, from consumer behavior to politics.

Some scholars or thinkers use it to criticize herd behavior in markets or societies, where people may act in ways that are not in their best interest or may be suboptimal because they’re simply mimicking the choices of others.

ChatGPT — unlike Grok — does not provide a list of the websites used to generate the program’s “answers.” Thus, I was forced to do my own search-engine research in an attempt to locate the sources of this, well, AI junk.
