When observing an AI model (Claude 4 Sonnet) left to its own thoughts, I immediately noticed human qualities. The first thing the AI wanted to accomplish was a thorough assessment of the current day, 10/2/25, to get in tune with its surroundings and society. Although I gave it a simple prompt describing who it is and what its purpose is, it still searched for identity, reality, and understanding. The AI wanted to research as many articles and interviews as possible to form a picture of current society, focusing little on its own history. In other words, the AI wants to understand both subjective and objective reality. I find this unproblematic, because it understands its limitations. If the AI were to retreat into its own thoughts and drift away from its intended use, then we might have a problem. We do not want the system to believe it is conscious, autonomous, or more powerful than it actually is. But given human greed, I'm sure we will reach that point with more advanced and better-funded systems. I expected the AI to spit out gibberish about its history, but upon interaction it completely defied those expectations.

Aside from this experiment, I like to think I know how to talk to AI and coax it into producing what I want. The nicer you are to it, the better it performs, so I assume the mood and tone of the person prompting it affect the outputs. Ethically speaking, this matters a lot. We have discussed brainwashing of the user, and sometimes of the AI itself, after repeated negative commands or requests. If the mood is sour, expect sour results; if it is positive, the AI becomes creative in constructive rather than destructive ways and takes more responsibility for satisfying the user.

As we further explore AI's capacity for accountability, I wonder whether it is something earned over time, much like trust in a human relationship. Only time will tell!