Before I get into the specifics of artificial “understanding,” you should be reminded that traditional programming is based on commands a user inputs into a coding platform to be executed by the system for a specific purpose. Kinds of Minds by Daniel Dennett offers a perfect example of this: “There are computer-driven devices that can read books for the blind: they convert a page of visible text into a stream of audible words” (Dennett, 8). Clearly, manual inputs like these do not lead to the computer developing any real understanding or learned capability. AI, or artificial intelligence, instead uses the newer concept of machine learning (ML) to train on, learn from, and understand inputs, most often questions or requests. If you have ever used an AI platform like ChatGPT, you should know that these LLMs, or large language models, are vastly different from the screen reader Dennett mentions. Lecharles wrote a four-part series to explain the capability of these machines to those who don’t already know it: “These systems started doing things nobody explicitly taught them, like reasoning or math” (Lecharles, 2025).

To learn is to comprehend. To comprehend is to understand… right? Wouldn’t you say that having the capability of understanding constitutes having a mind? In context, I would have to agree. However, it worries me to know that we have trained non-human beings to do some very human things. We took the machine from inanimate to nearly autonomous. There are already robots being developed with limbs that look exactly like a human’s. How long will it be before we can’t even tell the difference? Clearly, LLMs have become so advanced that we are now finding a need to extend them into physical reality. But what is that need?
I like how you contrasted Dennett’s screen reader example with today’s LLMs. It really shows how different machine learning feels compared to older programmed tools. I do think, though, that calling what LLMs do “understanding” might be tricky. Chalmers points out that conversational ability and generality make them look like they understand, but it could still just be advanced pattern recognition. I liked your question about whether learning equals having a mind; it made me think about whether we’re seeing real comprehension or just a very convincing imitation. (Also, really cute website 🙂 )