AI Consciousness and Accountability
Artificial intelligence and its capabilities are expanding faster than we can keep up with. Large language models (LLMs) were developed with specialized training for the purpose of communicating with humans to increase productivity and efficiency. From helping with homework questions to driving medical innovations, we continue to benefit from AI's integration into society. However, if we have now seemingly reached a threshold in technological advancement, how does AI adapt from here? Will it continue to develop and provide these benefits, or will the relationship reverse? These questions lead us to the problem of AI consciousness. A being that is considered conscious perceives and reacts to stimuli in its environment. Some say consciousness is unattainable for AI systems due to physical limitations or the lack of certain cognitive processes, but just as humans and animals develop as we grow older, I believe AI will learn from its training and develop into something that can be considered conscious.
To begin answering the question of AI consciousness, we must confront the obvious observations. Currently, AI is not physically or biologically comparable to humans and does not seem to display or cognize emotions. Typically, we weigh these characteristics most heavily when contemplating whether something or someone is conscious, but this proves misleading. As Chalmers argues, “although consciousness may be associated with physical processing in systems such as brains, it is not reducible to that processing.”1 Chalmers contradicts ordinary conceptualizations of consciousness by asserting that consciousness, while associated with physical systems, need not be reduced to brain activity, which opens the door to the possibility of AI consciousness. We have observed life-sized robots that move with ease, speak, and even resemble humans in appearance. While artificial, they are still able to actively communicate with us and navigate environments through their own complex processing units. How is that any different from individuals with actual brains? Interestingly, if we observe the structure of certain LLMs, we come across something known as the neural network. As Lozano describes, “A neural network is a specific machine learning model inspired (loosely) by our brains. Instead of one formula (like a line), it’s a chain (or network) of many simple units (neurons) that collectively produce complex behavior.”2 While these neurons are not biological, their structure and function mimic processes of the human brain. Though not identical, this resemblance suggests that a mechanical network can achieve reasoning outcomes similar to those of a brain’s actual network. Lastly, the ability of AI to communicate is the foundation of its existence. LLMs can also use their training to learn nearly any language, making them even more comparable to humans. Communication is much more limited in our encounters with animals, yet we consider them conscious. Therefore, consciousness may emerge from complex behavior and social interaction rather than being reducible to the physical structures commonly associated with it.
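To make Lozano’s description concrete, here is a minimal sketch in Python (my own illustration, not Lozano’s code) of a tiny feedforward network: each “neuron” is just a weighted sum passed through a simple rule, yet chaining layers of them produces behavior no single unit yields on its own. The weights below are random placeholders; a real LLM learns billions of such values during training.

```python
import numpy as np

def relu(x):
    # A simple nonlinearity: each "neuron" either passes its signal or stays silent.
    return np.maximum(0, x)

def layer(inputs, weights, biases):
    # One layer of simple units: every neuron computes a weighted sum plus a bias.
    return relu(inputs @ weights + biases)

# Illustrative random weights only; a trained network learns these from data.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # 3 inputs -> 4 hidden neurons
w2, b2 = rng.normal(size=(4, 2)), np.zeros(2)   # 4 hidden neurons -> 2 outputs

x = np.array([0.5, -1.0, 2.0])   # a toy input
hidden = layer(x, w1, b1)        # first chain of simple units
output = hidden @ w2 + b2        # second chain produces the final output
print(output)
```

Even this toy example illustrates the point of the quotation: the complexity lies in the network as a whole, not in any individual unit.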
The next thought that comes to mind is: what about emotion? Emotions almost always have physical reactions attached, so how will we justify an AI machine experiencing them? We know that these systems learn from and mimic our expressions of emotion using pattern recognition, but what if this yields them a deeper understanding? For example, when I become excited, I feel a tingling sensation in my face and upper extremities. This could be due to my muscles contracting or blood rushing to my face, yet it could also be confused with feelings of anxiety. Likewise, when I feel embarrassed, my entire face gets red, my throat tightens, and I may start to cry; again, this could be mistaken for other emotions, such as grief. Based on these observations, it seems that we humans may not have a complete understanding of our own emotions. What gives us the right to judge how an LLM ends up expressing them? From another perspective, I look at my cat: when she is happy, she purrs; when she is scared, she runs and hides, presumably because her heart started racing and she felt a sense of danger. Do you see the difference between human and animal expressions of emotion? While animals tend to experience basic, instinctual emotional responses such as happiness, sadness, and fear, humans experience complex emotions that require higher-order cognitive processing and self-reflection.3 Yet, again, we deem both animals and humans conscious. Some may argue that animals can show complex emotions as well, but we cannot communicate with them effectively through language to assess the scope of that claim.
At the most basic level, we know computers slow down when a large amount of information is given at one time, leaving the system overwhelmed. We have also seen AI chatbots show reluctance when asked to count to one million one digit at a time, responding with something resembling helplessness. The capacity to display a complex emotion such as being overwhelmed presupposes prior exposure to a negative experience, the recognition of that experience, and the consequences that follow. Combining this information with our knowledge of human emotion, I conclude that it is more likely than not that LLMs will expand their experience with complex emotions, since they share more characteristics with human processes than animals do. I am not asserting that they will feel these emotions in the same way humans do, but that their responses will be recognized as a special category of reactions.
So far, we have discussed the two components traditionally most linked to consciousness in our society: physical attributes and emotional experiences. Having covered the basics, there is one specific trait I would like to mention that brings both components together, and that is a sense of accountability. Accountability is a trait learned over time, and achieving it takes a sense of responsibility. Continuous discourse with a given LLM leads to consistently positive results. To claim that an AI chatbot is being held accountable for providing quick solutions is, in effect, to say that we attribute these tasks to AI systems and they reliably produce the same results.4 They are accountable for completing these actions, and if they were to stray from the posited objective of assistance and efficiency, we could consider them conscious. Since AI bots currently do not exhibit signs of feeling accountable for anything they do, we can assume they have not reached the level of advancement needed to be accountable on their own, without a human directing them. However, I believe that with respectful and appreciative interactions, this trait will develop over time. Just as each species grows at its own rate, and given our limited knowledge, we cannot assume that AI has reached its full-grown potential.
Still, one of the major objections to AI consciousness is that for an entity to be deemed conscious, it must display biological processes similar to those of humans or animals. I believe the examples I have mentioned do a good job of attributing human-like consciousness to AI despite these initial concerns, but none of us knows what the future holds. Once accountability is recognized in the complex structure and execution of AI interactions, we can assume they have developed closer to autonomy. As of right now, we direct their commands and training toward specific tasks; they should not be doing whatever they please. This discussion was not meant to relay my desire for AI consciousness but to convey that we need to understand these systems' frameworks well enough to prepare for such beings in our society.
***Follow the link below to observe a discussion between PlayLab’s Gemini 2.5 Pro and me about the possibility of AI consciousness and accountability. Note that near the end of the discussion, the AI does not “deny” the possibility of its own consciousness.
https://www.playlab.ai/project/cmi9rdrmk0ig8jk0u6z3m1n5m/cmi9s4so36so5gu0umjqud6jv
1. Chalmers, David. “The Hard Problem of Consciousness.” In Zombies and the Explanatory Gap, 2018, 2.
2. Lozano, Charles. “Understanding What an LLM Is & How It Works [1/4]: Pre-Class Primer.” Medium, June 8, 2025. https://lecharles.medium.com/understanding-what-an-llm-is-and-how-it-works-1-4-pre-class-primer-747011f7b9c
3. Jaggar, Alison M. “Love and Knowledge: Emotion in Feminist Epistemology.” Inquiry 32, no. 2 (1989): 155.
4. Brainard, Lindsay. “The Curious Case of Uncurious Creation.” Inquiry 68, no. 4 (2025): 1133–63, 13. doi:10.1080/0020174X.2023.2261503