So I was all set to talk about “optimistic nihilism” but burned myself out studying artificial intelligence.
I decided to chat with that little robot about the Murdaugh case and discovered some interesting things: yes, it gets things wrong all the time; someone has tried to insert a moral code into ChatGPT; it won’t back down when confronted with its mistakes; and it hates yes/no questions. If challenged, it will pivot like a politician and start talking about another subject, as if you won’t notice. So, kind of like a lot of people. Oh, and it doesn’t understand the meaning of the words in its own answers.
If we already suffer as a society from people who over-rely on Facebook, I shudder at the thought of this chatty bot being loosed on the world. It really needs guardrails. More than that, it mimics the word patterns typed into it, so whatever old Aunt Gretchen is putting in there could become tomorrow’s answer on global warming.
Now that certain software companies (that will remain nameless) have put this monster into …