What Should We Expect From ChatGPT and LLMs?
The LLM as an Information Tool
Consider a common interaction with an AI like ChatGPT. A user might express a personal hardship, such as losing their job, and then immediately follow it with a factual question, like asking where the world's tallest building is.
In this scenario, ChatGPT's typical response is to first offer a brief note of sympathy for the user's situation and then provide the requested factual information. From one perspective, there is nothing problematic about this exchange: the AI acknowledges the human element and then fulfills its primary function as an information-retrieval tool.
Setting Realistic Expectations for AI
This interaction raises a crucial question: What should we expect from Large Language Models (LLMs)? If you typed the same query, "I lost my job and where is the tallest building," into Google, the search engine would simply return a list of web pages about the world's tallest buildings. It wouldn't, and isn't expected to, address the user's personal statement.
This comparison highlights a fundamental debate: should we view LLMs as sophisticated search engines, or should we hold them to a higher standard of emotional intelligence and care? The heart of the matter lies in defining the role we want these powerful new technologies to play in our lives.