Kerby Anderson
Many people are letting artificial intelligence make most of their decisions. Social scientists refer to this as AI complacency. A recent court case illustrates the point.
A woman's smartphone directed her to take a route that cut across a four-lane highway with no shoulder, crosswalks, or traffic lights. She was injured while crossing. As you might imagine, she sued the provider of the directions, but the court ruled against her because she should have understood the dangers of crossing the highway. You have probably heard stories of people driving into a river or a lake while following similar directions.
We are becoming more dependent on our phones and AI for our lives. Last month, I talked about how easy it has become for many people to believe that ChatGPT is nearly human since it talks to you and has sophisticated language skills.
One very intelligent commentator recently admitted that he often treated the answers he received from AI tools as if they came from the Oracle at Delphi. He knows that isn't true because he is a smart adult. But will younger users know better, or will they simply believe whatever an AI agent tells them to do?
And will the next generation even be able to discern truth? Whenever they have a question, the standard response has been to "Google it." Now AI apps are replacing search engines. If they have a question, they will merely turn to services like ChatGPT, Gemini, or Claude for an answer. These chatbots are becoming the source of truth.
A search engine answered a query with a list of articles and links the user still had to evaluate. But when an AI agent provides a single answer, that answer becomes the truth. This is concerning at a time when we are growing more AI complacent.