Remember the 60 Minutes episode where AI produced an impressive document with a list of sources? The only problem is that it had completely fabricated the references.
https://www.facebook.com/60minutes/videos/googles-chatbot-failed-60-minutes-fact-check/2198479880340248/
I'm definitely not planning on looking to AI for religious...anything.
OH MY. I missed that one and didn't see discussions of it online. Thank you.
I asked Copilot if it was true that the US had negotiated with Russia to limit NATO expansion in order for the Berlin Wall to come down. Copilot said ‘yes’ and gave a good historical synopsis.
I then asked if the US had violated that agreement regarding Russia and Ukraine. Copilot said not exactly, because political negotiations are always nuanced. Good negotiators leave enough legal nuance in their agreements to allow for changes in conditions.
I then asked: if the ultimate goal was NATO expansion all along, wasn’t the US being deceptive? It again gave the reason above.
Next I asked whether, in the context of the westward expansion of the United States, the US Gov’t had used the same ‘negotiation techniques’ to break treaties with Native American tribes in order to reach its ultimate goal of Manifest Destiny. Copilot said that the US Gov’t did break treaties and appeared deceptive, essentially speaking with a forked tongue.
I then asked why, in the case of Ukraine, hiding one’s ultimate goal of NATO expansion in negotiations was acceptable, but that same technique in negotiations with Native Americans was wrong. The answer was that political landscapes change and good negotiators allow for nuance to adapt to the change. There was zero emphasis on the importance of honesty and integrity in negotiations.
AI does not have a moral compass!
Well, it does have one -- isn't this a matter of programming?
I've used ChatGPT for religious queries quite a bit. One of my research interests is theories on how free will manifests in the universe: is it an attribute of man or of the universe itself? Full disclosure, I come down firmly on the side of the latter and believe Heisenberg discovered our first definitive evidence of this.
I've found it very useful for this sort of interdisciplinary boundary research. It found Polkinghorne (who I already knew about) but also several others that I didn't. To be fair, it also made up 2 authors out of whole cloth. I don't trust its summaries on a subject, but as a pointer to actual humans to read, it's been useful.
Do you have any theories as to why ChatGPT missed the historical, Orthodox meaning of "Rational Sheep" as well as the existence of this Substack project (which shows up immediately in basic searches)?
No. I think that's pretty weird. Considering the vast dataset this represents, I can't imagine it hasn't been trained on Christian historical documents. But it may have been told to discount them as a valid source of information. In other words, learn their linguistic attributes but don't believe much of what they say. That seems plausible.
I wonder if it would have similar problems with ancient Hindu or Buddhist or Muslim terms?
ChatGPT has made up information in response to my inquiries on several occasions. It sometimes refers to websites that have nothing to do with the answer it gives. In short, I don't trust it anymore. I have found Grok to be the best of the free AI models so far. Like you, I don't trust any of these models enough to pay for them.
"MADE UP" as in created something from whole cloth? I have to confess that the whole secular definition of "Rational Sheep" offered in the response seemed very, uh, fake.
It looks like you probably got the hallucinated answer from the ChatGPT mini model, which is the free version for new users. The 4o model decided to search the web and gave a better answer (it even included you by name!). Also, for fun, I asked DeepSeek R1 (32b), and it gave yet another hallucinated answer, quoted below. You're right that the hallucinations illuminate so much about the training source data.
"Rational Sheep is a term that originated from Chinese internet culture. It refers to individuals who, while appearing to conform and follow the crowd on the surface (like sheep), possess independent thinking and rationality deep down. These people may outwardly comply with certain behaviors or opinions but internally maintain their own judgments and perspectives.
The concept of "Rational Sheep" reflects a sense of contradiction in modern society, where individuals often feel pressured to conform due to social norms, expectations, or external influences but still strive to retain their individuality and critical thinking. It's a way to describe people who navigate between conformity and independence, balancing the need to fit in while maintaining personal beliefs.
This term has gained popularity online as it resonates with many people who feel they are caught between societal expectations and their own desire for autonomy."
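For anyone who wants to try the same comparison, here is a minimal sketch of how one might put the question to a locally run DeepSeek R1 32b through Ollama's HTTP API. The endpoint, model tag, and prompt wording are my assumptions (a default local Ollama install), not details confirmed in this thread; a local model has no web search to fall back on, so whatever it answers comes straight from its training data.

```python
# Minimal sketch (assumed setup): query a local DeepSeek R1 32b served by Ollama
# and print its answer to the same "Rational Sheep" question.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local chat endpoint (assumption)

payload = {
    "model": "deepseek-r1:32b",  # Ollama tag for the 32b DeepSeek R1 distill (assumption)
    "messages": [
        {"role": "user", "content": 'What does the term "Rational Sheep" mean?'}
    ],
    "stream": False,  # ask for one complete JSON reply rather than a token stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With no web search available, the reply below is drawn entirely from training data.
with urllib.request.urlopen(req) as resp:
    answer = json.loads(resp.read())["message"]["content"]

print(answer)
```

Running the same prompt against ChatGPT in the browser (where the 4o model can choose to search the web) and against an offline model like this, side by side, is what makes the difference in sourcing so visible.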