I fucking hate when people ask an LLM "what were you thinking," because the answer is meaningless, and it just showcases how little people understand about how these models actually work.
Any activity inside the model that could be considered even a remote approximation of "thought" is completely lost as soon as it emits a token. The only memory it has is the context window: the past history of inputs and outputs.
All it's going to do when you ask it that is look over the past output and attempt to rationalize whatever it previously wrote.
And actually, even that is excessively anthropomorphizing the model. In reality, it's just generating a plausible response to the question "what were you thinking", given the history of the conversation.
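To make the statelessness concrete, here's a toy sketch. This is nothing like a real transformer (the `next_token` function below is a made-up stand-in, not a trained model); the only point it illustrates is that generation is a pure function of the visible token history. A follow-up like "what were you thinking" can only ever condition on the transcript, because no internal state survives between calls.

```python
def next_token(context: tuple[str, ...]) -> str:
    """Stand-in for a trained model: a pure function of the context.

    A real model would compute a probability distribution over its
    vocabulary from the context; here we just derive a token
    deterministically from the history, to show there is no other input.
    """
    return f"tok{sum(len(t) for t in context) % 7}"

def generate(history: list[str], n: int = 3) -> list[str]:
    """Autoregressive generation: each token depends only on history + prior output."""
    out: list[str] = []
    for _ in range(n):
        out.append(next_token(tuple(history + out)))
    return out

# First turn: tokens are produced from the prompt alone.
prompt = ["what", "is", "2+2"]
turn1 = generate(prompt)

# Follow-up "what were you thinking?": the ONLY information available is
# the transcript so far. Whatever activations produced turn1 are gone.
turn2 = generate(prompt + turn1 + ["what", "were", "you", "thinking"])
```

Calling `generate` twice with the same transcript yields the same result, because there is nowhere else for state to live — which is exactly why the "answer" to "what were you thinking" is just a plausible continuation of the conversation, not a memory.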
I fucking hate this version of "AI". I hate how it's advertised. I hate the managers and executives drinking the Kool-Aid. I hate that so much of the economy is tied up in it. I hate that it has the energy demand and carbon footprint of a small nation-state. It's absolute insanity.