We Are Not in the Driver's Seat: How Post-Hoc Storytelling Shapes Minds—Human and Machine Alike
A Guest Post from Professor Michael Wagner
Dear Readers,
It is my pleasure to introduce today’s guest post by Michael G. Wagner, professor and department head at Drexel University’s Antoinette Westphal College of Media Arts and Design, where his research explores how emerging technologies reshape teaching and learning. Professor Wagner is a seasoned technology educator whose 30+ years of experience span high school classrooms and university lecture halls alike. He is also the author of The Augmented Educator on Substack and the host of a YouTube channel covering immersive audio, digital media, and game design.
This post emerged from a thoughtful exchange on Substack Notes in reference to part of my recent article, “Critical AI Literacy: What Is It and Why Do We Need It?” In that discussion, Michael challenged the commonly accepted view that “AI doesn’t think like us,” by highlighting a deeper issue: we still understand far too little about how the human brain functions—and likewise, how AI systems truly operate—to draw definitive conclusions about their differences.
I share his caution. While I maintain that assuming a meaningful divide between human cognition and the statistical architecture of large language models can be useful—particularly in encouraging users to remain active, reflective, and aware of their own contributions to the thinking process—I also recognize the value in keeping an open mind. If the gap between our minds and these systems turns out to be narrower than we think, the implications could be profound.
As a seasoned researcher and educator, Michael brings a grounded and research-informed perspective to this question—one that opens up space for further inquiry rather than foreclosing it. I invited him to expand on his ideas in this post, and I’m grateful he accepted. His reflections offer both conceptual clarity and an invitation to keep exploring what we think we know about both ourselves and the systems we’re building.
Please join me in welcoming Professor Michael Wagner.
We Are Not in the Driver's Seat: How Post-Hoc Storytelling Shapes Minds—Human and Machine Alike
By Michael Wagner
Recent advances in "chain-of-thought reasoning" have dramatically improved AI capabilities by having models work through problems step by step, much as humans appear to do. Systems like ChatGPT o1, DeepSeek R1, and Claude 3.7 Sonnet show impressive performance in mathematics, logic, and creative reasoning, laying out explicit step-by-step reasoning—a skill previously considered uniquely human. This rapid progress raises an important question: are these systems truly reasoning, or have we just created more convincing imitations of human thought?
This question has ignited intense debate among researchers across disciplines, from AI developers to cognitive scientists and philosophers. The discussion extends far beyond technical specifications, occupying central positions in cognitive science, philosophy of mind, and AI research. Prominent thinkers like Douglas Hofstadter, Judea Pearl, and Gary Marcus have passionately argued that human cognition operates through mechanisms fundamentally distinct from those driving current AI systems.
Underlying this discussion is a significant gap in our current understanding of cognition. Constructivist epistemology, the widely accepted theory that knowledge arises from actively building and using mental models, implies that human reasoning relies on detailed internal representations. These encompass not just verbalized 'chains-of-thought' but also imagery, sensorimotor feedback, and other embodied or affective elements.
This perspective does indeed provide strong support for the idea that human cognition is fundamentally unlike current AI reasoning. Yet neuroscientific evidence reveals an intriguing paradox: despite these fundamental differences, both systems show similar patterns in creating post-hoc explanations for decisions that originate at deeper processing levels and go on to shape subsequent choices.
This apparent contradiction reflects cognition's complex, layered nature, but the two perspectives are not necessarily incompatible. We may simply have to hold both truths at once: humans employ deeply embodied mental models absent in current AI systems, while both—humans and AI alike—construct and rely on comparable post-hoc narratives in their reasoning processes.
The Passenger Seat Perspective
For decades, neuroscientists have documented a curious phenomenon: when we decide to perform an action—such as reaching for a pen or responding to a question—our brains prepare for this activity approximately 0.3 to 0.5 seconds before we become consciously aware of making the decision to do so. This "readiness potential," first identified in the 1960s by researchers Hans Kornhuber and Lüder Deecke, reveals something profound about human cognition: the conscious narrative we construct about our decisions occurs after our neural machinery has already started the process.
From a practical perspective, this delay appears reasonable. Our brains must process vast amounts of sensory information, integrate it with existing memories and experiences, and synchronize multiple neural systems before presenting a coherent picture to our conscious awareness. This processing overhead requires time—more time than we might intuitively expect. What's surprising is not that there's a delay, but its duration and implications. The fact that our brains take nearly half a second to create our conscious awareness contradicts our feeling of experiencing the world instantly.
This phenomenon was further confirmed in Benjamin Libet's now-famous experiments, where participants reported when they became aware of their decision to move, while researchers simultaneously measured brain activity. The consistent finding that neural signals precede conscious awareness by up to half a second challenges our intuitive sense that consciousness directs all our actions. Instead, it suggests that our conscious mind might be more interpreter than commander, explaining choices that deeper brain processes have already set in motion.
As the popular science channel “Kurzgesagt - In a Nutshell” explains in one of their recent videos, this delay means we're essentially "living in the past," experiencing the world as it was half a second ago rather than in real-time. At this point, it is important to note that this striking realization doesn't negate free will. Instead, it suggests a reframing of our relationship with consciousness. As Kurzgesagt aptly puts it: "We are not in the driving seat, we are in the passenger seat telling the driver what to do."
Our conscious mind may not initiate every action, but it provides direction, preferences, and values that shape future decisions. Rather than seeing consciousness as the immediate controller, we might therefore understand it as providing strategic guidance to our deeper cognitive systems—setting intentions and monitoring outcomes while the actual mechanics of decision-making often occur below awareness.
The Voice in Our Head
This means that our conscious decisions effectively establish parameters for our brains, after which neural processes autonomously execute actions and present us with a coherent narrative of what has happened. For most people, this narrative manifests as an "inner monologue"—a running verbal commentary that helps us make sense of our experiences and decisions. This internal voice seems to explain our choices in real time, yet it operates as a skilled storyteller, constructing plausible accounts of decisions already set in motion by deeper cognitive processes.
It must be noted that this argument comes with an important caveat: not everyone processes their thoughts through the same mental framework. While many people assume universal access to a "voice in the head" that narrates intentions and ideas, research indicates that a significant minority thinks primarily in images or abstract concepts rather than verbal narration. For these individuals, thoughts might simply emerge into awareness without the running commentary that others experience; yet these thoughts, too, surface only after the underlying processes have already unfolded beneath awareness.
What is striking is that this capacity for post-hoc explanation mirrors how reflective large language models reason. They generate explanations that appear logical but emerge from hidden computational processes. Both our conscious mind and AI's chain-of-thought create stories to make sense of underlying processes we can't directly observe. While this similarity doesn't erase the fundamental differences between human and machine cognition, it does invite us to reconsider what we mean by "reasoning" as we examine AI's explanatory capabilities.
From Inner Monologue to Chain-of-Thought
The similarities between our inner monologue and AI's "chain-of-thought" reasoning are striking and deserve closer examination. When we observe modern AI systems working through complex problems, we see them articulating step-by-step reasoning that resembles our own verbal thinking processes. For example, when asked to solve a multi-step math problem, an AI might write:
"To find the answer, I'll first identify the variables. The problem gives us a train traveling at 60 mph that leaves at 2 PM, and a second train traveling at 75 mph that leaves at 3 PM. I need to determine when the second train will overtake the first train. First, I'll calculate how far the first train travels in one hour..."
This step-by-step walkthrough mirrors how a human might verbalize their approach to the same problem, breaking it into manageable components and working through them sequentially. Yet beneath this surface-level narration lies a statistical process—a massive neural network making predictions based on patterns extracted from training data. And this is not unlike how our own subconscious processes operate before consciousness provides its interpretation.
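To make the contrast concrete, here is a minimal sketch (using the hypothetical figures from the example above) of the bare arithmetic that the narrated steps describe. The point is how little the underlying computation resembles the fluent explanation layered on top of it.

```python
# Bare arithmetic behind the narrated "chain of thought" above.
# Figures are the hypothetical ones from the example: the first train
# travels at 60 mph and leaves at 2 PM; the second travels at 75 mph
# and leaves at 3 PM.

speed_1 = 60  # mph, departs 2 PM
speed_2 = 75  # mph, departs 3 PM (one hour later)

head_start = speed_1 * 1            # miles the first train covers before the second starts
closing_speed = speed_2 - speed_1   # mph by which the second train gains on the first

hours_to_overtake = head_start / closing_speed  # 60 / 15 = 4 hours after 3 PM
print(f"The second train overtakes the first {hours_to_overtake:.0f} hours after 3 PM, i.e., at 7 PM.")
```

Of course, a language model does not execute anything like these few lines; the sketch is only a reminder that a tidy explanation and the mechanics it describes can be very different things.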
In both cases, what we witness is a form of narrative construction. Our conscious mind assembles a coherent story from neural processes we cannot directly access, while an AI's chain-of-thought creates a human-readable explanation from calculations occurring across millions of parameters.
Neither narrative necessarily reflects the actual decision-making process that produced the result—they are both interpretations designed to make complex underlying mechanics comprehensible, either to ourselves or to human observers. This doesn't mean these narratives are meaningless; rather, they serve as useful interfaces between opaque computational systems (whether biological or artificial) and the need for explicit understanding.
Strikingly, these parallels in narrative construction challenge our conceptualization of both human and artificial reasoning. If human consciousness functions as a storyteller crafting narratives about decisions already made by nonconscious systems, then perhaps the distinction between human and AI reasoning becomes less categorical and more a matter of degree.
But while these similarities are undeniable, the crucial truth is that our understanding of human cognition is unfinished. Despite centuries of philosophical inquiry and decades of neuroscience, fundamental questions persist to this day:
How does conscious awareness emerge from neural networks?
Why do thinking styles vary so dramatically between individuals?
Where do intentions reside before entering consciousness?
Given these uncertainties, we should be more modest in comparing human and AI reasoning abilities. When researchers claim that machine learning operates in ways utterly distinct from human thought, they may overestimate our understanding of our own cognitive processes. By recognizing this limitation, we open a space for more nuanced discussions about how AI systems might complement, rather than imitate, human cognition.
Implications for Educators
For educators navigating the integration of AI tools in learning environments, these parallels can offer valuable insights. If both humans and AI systems construct post-hoc narratives to explain processes occurring at deeper levels, how might this shape our approach to teaching with AI?
First, it suggests that when evaluating AI-generated explanations, we should remember that the apparent "reasoning" presented may be more akin to a well-crafted story than a transparent window into the system's actual processing. Like our own retrospective justifications, AI’s chains-of-thought can produce plausible-sounding explanations that don’t truly reflect their internal reasoning process.
This understanding can inform critical thinking exercises where students compare their own problem-solving approaches with those generated by AI. Rather than assuming either represents the "true" reasoning process, teachers might encourage students to examine both as constructed narratives—useful frameworks for understanding complex calculations, but not complete or accurate depictions of the underlying cognitive work.
Consider this classroom activity: In a geometry lesson, students could solve a proof and write out their step-by-step reasoning. Then, they could compare their explanation with an AI-generated solution to the same problem. The teacher might ask: "What steps did you think about but not write down? What might the AI be 'thinking' that doesn't show up in its explanation? How do the explanations differ even when reaching the same conclusion?" This exercise helps students recognize the constructed nature of all explanations while deepening their mathematical understanding.
This perspective helps educators balance appreciation for what AI can contribute to learning environments with an understanding of its limitations. It shows that the similarities in post-hoc explanation don't erase the fundamental differences in how humans and AI systems process information. While AI may produce impressive chains-of-thought, it still lacks the embodied, emotional, and contextually nuanced understanding that shapes human reasoning and learning.
A More Reflective Understanding
As we live in a world increasingly shaped by artificial intelligence, acknowledging the limits of our self-understanding creates space for more nuanced engagement with emerging technologies. This doesn’t diminish human cognition or suggest consciousness is merely an illusion. Rather, it invites us to approach AI with an openness to the possibility that some of its processes resemble aspects of our own mental operations. It recognizes that the stories we tell ourselves about how we reason may sometimes be as constructed as those generated by AI. And this should lead us to a deeper appreciation of the beautiful complexity inherent in all forms of intelligence.
AI Disclosure Statement: Claude 3.7 Sonnet assisted in drafting and refining the prose of this article while ChatGPT o1 Pro Mode was used for red-teaming. Images were created with the help of Leonardo.ai. All concepts, ideas, arguments, and theoretical frameworks presented were developed through human reasoning by the author.
Key Takeaways for Educators
Humans and reflective large language models both construct explanations after key processes have already taken place.
This doesn't mean AI "understands" like humans do, but it suggests an intriguing parallel in how both produce narratives and make decisions.
In teaching contexts, treat AI-generated explanations as potentially helpful but not definitive insights into the machine's inner workings.
Design activities that have students compare their own reasoning processes with AI-generated explanations to develop critical thinking about both.
Help students recognize that knowledge emerges through active interpretation rather than passive reception—a core constructivist insight applicable to both human and AI reasoning.
Thank you to Michael for this thought-provoking post. If you are interested in submitting a guest post to AI EduPathways, please reach out to me on LinkedIn or over email at mike@litpartners.ai.
Many thanks for the thought-provoking article. Interesting observation about human and LLM reasoning ... Both are maps of unseen and unseeable terrains of processes.
I have been wondering how reasoning LLMs can be useful for students. I have also thought about using LLM reasoning as a counterpoint or comparison with learners' own reasoning when solving problems like the geometry example alluded to in the post ... regardless of how faithful the CoT actually is.
But letting students know that there is a difference between the map and the territory, and perhaps speculating on how it differs, might be an interesting meta-exercise ... presumably for higher-level learners.
Especially if we have some concrete examples like Anthropic recently shared in one of their research articles. https://www.anthropic.com/research/tracing-thoughts-language-model
I agree with many, like Yann LeCun and the 4E cognition group, that LLMs are not the way to human-like thinking or AGI, if a model of human thinking is the end goal.
Thanks for the opportunity to contribute to AI EduPathways! It was a lot of fun to write this article.