It Seems to Me (ISTM) is a reader-supported publication. I posted the following story a few hours ago, based on searches on Google and ChatGPT. It jarred me, because I thought I had read the story before we all had computers, when I was still a student in the 1960s. Apparently I was right, and ChatGPT was making things up. If you already read the earlier post, skip down to the bold-faced Update.
That's an interesting observation by Ernie & Federico. Evidently ChatGPT is still prone to error.
Regarding its answer to your question about religion, I don't consider it politically correct at all. Its answer was in line with its purpose. It wasn't created to be a god or to be worshipped; it's a tool.
And while we're on the subject of Asimov (which is actually the reason I came here in the first place), his Three Laws of Robotics would seem to negate any god-like tendencies of ChatGPT:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
I asked ChatGPT if it was God & It Lied to Me.
The Last Question is actually a short story by Isaac Asimov, published in 1956. https://en.m.wikipedia.org/wiki/The_Last_Question
If only I had opened this email earlier, I could’ve been first to post the information about “the last question.”