BABL AI is excited to announce that Ali Hasan, BABL AI's Senior Advisor, has authored a fascinating blog post for the American Philosophical Association. The post, titled "Are You Anthropomorphizing AI?," explores the growing tendency to anthropomorphize advanced AI systems like ChatGPT by attributing human-like understanding and intentions to machines that, in reality, merely mimic meaningful communication through statistical patterns.
In his article, Hasan notes that while this phenomenon isn't new, the development of more advanced AI systems has intensified it. In previous decades, computer systems lacked the capability to compete with human intelligence outside specific tasks like calculation or games. As a result, people generally recognized these systems as mere tools, despite using personifying language for convenience. With the advent of generative AI systems like ChatGPT, however, the boundary between machines and perceived human-like behavior has blurred.
The article emphasizes that while AI like ChatGPT has no true understanding, sentience, or consciousness, it is very good at mimicking meaningful human communication through "meaning-semblant" behavior. This creates a strong but misleading impression that AI systems have cognitive states or intentions similar to those of humans. Even people who intellectually recognize that AI lacks genuine understanding may catch themselves using language that implicitly anthropomorphizes these systems, describing them as "thinking," "understanding," or "interpreting" information.
Education as the Antidote to Misconception
Hasan first began teaching about anthropomorphism several years ago in his Ethics and Technology course. Over time, he noticed that many discussions about AI personification failed to separate emotional intuition from philosophical accuracy. When asked how the public can avoid these misconceptions, Hasan stressed the need for education and clearer communication.
“Ordinary people need to better learn, in an accessible way, how these systems work and how they can fail to work, and what the potential risks to them are. Relatedly, we need to communicate more clearly and effectively about these systems, and design them in ways less likely to mislead or confuse us about their abilities, and their personal or moral status,” said Hasan.
This call to action underscores the importance of transparency, design ethics, and technological literacy, principles central to both philosophy and AI governance.
A Timely Reminder
As AI continues to shape everyday life, Hasan’s article serves as an essential reminder: machines do not think as humans do. While they may mirror human communication, the resemblance is mathematical, not mental. Recognizing this difference is vital to maintaining both ethical clarity and realistic expectations for AI’s role in society. You can read the full article HERE.