Kerby Anderson
Clay Shirky, writing in the Yale Alumni Magazine, reminds us of the benefits and limitations of chatbots. He begins with a thought experiment. Imagine you were sitting around the Thanksgiving table in 2022 and a guest asked when an AI tool would reach a million users. Some might have guessed two years away, others ten.
The answer is “a week from Monday.” OpenAI released ChatGPT, built on GPT-3.5, on the last day of November, and it had a million users by December 5. This illustrates how quickly AI has become part of our lives.
To use ChatGPT, all you need to do is talk. But he says, “It is almost impossible to resist treating software that talks as something that also thinks.” And because it has such sophisticated language skills, it is easy to see why users come to depend on it and tend to overestimate the range and depth of its abilities.
One good example is the statement that “An AI can pass the LSAT.” While that may be true, it is not true that “An AI can be a lawyer.” He asked one of his lawyer colleagues about the LSAT story. The lawyer replied, “I cannot convey to you how little of my day is like answering a question on the LSAT.”
He says that if you see a factory making tires, you rightly conclude that the workers’ effort goes into producing tires. That is not true of a history class. If you conclude that the output is history papers and that students are there to produce those papers, you have not just misunderstood the situation, you have it backwards. The output may be history papers, but the goal should be to create historians.
The problem is that users are becoming too dependent on chatbots and overestimate what the software can actually do.