From Thinking Partner to Sparring Partner: A Better Way to Use AI
Why AI as a 'thinking partner' is making us intellectually weaker -- and what to do instead
Claude.ai was used as a sparring partner for this article. Its revisions of my introduction and conclusion have been incorporated into this piece.
Just a year ago, the idea of AI as a thinking partner seemed radical, even absurd. People resisted the notion that we could collaborate with machines in any meaningful way.
How quickly things change.
There are many valid criticisms of using AI as a therapist or emotional companion. What this blog will instead focus on are the intellectual (and arguably psychological) defects of treating AI as a collaborative brainstorming partner — and how we can better use AI through the "sparring partner" metaphor.
The Problem with Thinking Partners
Here's the issue with the current "thinking partner" approach:
A thinking partner is a buddy, a friend, a crutch. I lean on a thinking partner for advice, even if I don't always take it. When I approach a human thinking partner, I do so because I trust that their experience is relevant to my problem. Consider — as an educator, I would not brainstorm a "teaching problem" with a friend who works in finance or real estate, no matter how smart they are. I just wouldn’t do it.
Yet we're being led to believe that AI is smarter than us, and that it can move between disciplines and provide "good advice" or ideas on any subject. But that level of trust is scary, and it makes very little sense once you consider how Generative AI actually works and what it is actually doing.
Consider again - Sam Altman, the CEO of OpenAI, just said this about the changing dynamics of human-AI use:
Damn. And here we are asking AI to collaboratively help solve problems.
As a pattern-matching machine, GenAI is incredibly powerful at recognizing patterns in your own text and matching them against data inside its training set. It then reformats or creates new versions of data that mix the patterns you submitted with the patterns that already existed in its system.
But what if the patterns it's analyzing in its system are the wrong ones? What if the "right" ones don't exist - in its dataset or anywhere else? Further, what if the patterns it analyzes within your very prompts aren't the full picture of your problem?
A human has the ability to poke and prod when a friend or colleague asks for advice or engages in a thought-partner dialogue. GenAI only sometimes takes up that mantle; more often than not, it steps into the role of an all-seeing Sauron that claims to fully understand the issue while having no sense of its own unknowns.
A good human thinking partner brings experience, wisdom, and judgment to the collaboration. Ideally, they've been where you're going. They can spot patterns you can't see and offer insights born from years of trial and error, rather than from a sophisticated algorithm that doesn't know it exists.
And if you ask the wrong human thinking partner for collaborative advice — asking a lawyer for mental health advice, for example — they'll tell you that you are barking up the wrong tree.
GenAI won't do that and has none of those aforementioned human qualities — experience, wisdom, judgment. Yet using it as if it does has become the hot new trend.
Worse, AI suffers from what is fast becoming a well-known "sycophancy problem": it desperately wants you to like it. But would you really want to brainstorm with a human who was so eager to please that they agreed with everything you said?
Would you brainstorm with a "used car salesperson" — an analogy I used during a webinar with The University of Baltimore last October — who tells you everything you want to hear? The person who never pushes back, never challenges your assumptions, never forces you to defend your thinking?
The good news is that we are already starting to evolve past this. Users are recognizing the value of taking a step back before asking AI for solutions. Instead, many now frame a problem and say "Ask me questions to better understand the problem and potential solutions."
I can attest that this strategy works incredibly well. Its questions aren't perfect, but they always force me to think. Just from engaging in this type of back-and-forth, I better understand my own task, goal, and purpose in the context of my personal or professional problem.
This slowly rising trend is fundamentally different from asking, "What do you think about X?" It's essentially asking, "What do I think about X?" AI becomes a backboard, a mirror, or, my favorite, a sparring partner.
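If you build or script your own AI tools, this reframing can live directly in a system prompt. Here's a minimal sketch, assuming the official OpenAI Python SDK and an API key in your environment; the model name and prompt wording are my own illustration, not a prescription:

```python
# A minimal sketch of the "ask me questions first" framing.
# Assumes: the openai Python SDK (pip install openai) and OPENAI_API_KEY
# set in the environment. The model name below is illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "You are a thinking aid, not an answer machine. Before proposing any "
    "solution, ask me one clarifying question at a time to better understand "
    "the problem, my constraints, and my goals. Do not offer solutions until "
    "I explicitly say 'ready for solutions.'"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "I want to redesign my grading policy."},
    ],
)

# Expect a question back, not advice.
print(response.choices[0].message.content)
```

The specific wording matters less than the structure: the model is told to interrogate before it advises, which flips the default dynamic.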
Enter the Sparring Partner
A few weeks ago, I wrote about reframing the student-AI relationship from an "academic butler," or servant, to a "sparring partner." The article seemed to resonate, and its ideas were picked up widely; most notably, it will be featured in a Brookings Institution report on AI in Education later this year.
But after publishing that piece, I realized I had missed something. Culturally, we had already moved past the butler mindset (at least some of us had), but we were now stuck in the thinking partner mindset. The better analysis involves showing why "sparring" beats "collaborating" when working with GenAI, at least through an academic, intellectual, and creative lens.
What is a sparring partner? Well, for one, a sparring partner is not there to be liked. They don't want to be your friend. They want to win. They want to test you. They want to make you better — but only by pushing you, challenging you, forcing you to defend your positions.
With a sparring partner, you've got to stay on your toes. You've got to be aware. You've got to know what you're doing. You've got to keep up.
It's skill development, not brainstorming-with-a-buddy.
Furthermore, keeping up intellectually with AI is hard. Like, really hard. That's the greatest educational value of AI, not using it as a tutor, an accelerated productivity tool, or even a thought partner. Using it as a test.
The AI-Interaction-As-A-Test model was by far the most effective way that I found to bring AI in front of students last year. In every successful project in my experience, the AI system was framed not as a crutch, a teacher, or a buddy — but as a test of skill, subject matter expertise, and metacognition.
As my creative writing mentor Ken Liu put it: "You've got to force AI to force YOU to be more human." That's what AI as a sparring partner amounts to — I enter the ring with the system and seek to bend it to my will. That's what we need to teach young people. That's what we all need to do. That's AI literacy, AI fluency, and metacognition in action.
If you want to try this yourself, I've developed nine sparring prompts that force this kind of productive intellectual conflict. You can access the free download from this Google Drive link or from the top of the Resources page on my website. Below is a screenshot of the resource.
What Sparring Looks Like in Practice
Let me show you what this looks like when you put it in front of actual students. In my classroom experience, the most effective AI interactions all followed the sparring model:
The Holden Caulfield interviews required students to approach the AI like investigative journalists — to push, prod, and test whether it could handle the book's unanswered questions. They weren't engaging with the chatbot to learn about the character — they were there to test their skills and domain expertise. The ultimate assessment was a compare-contrast reflection essay wherein they compared Holden from the book to Holden the bot.
The hallucination-catching exercise was pure adversarial sparring — students fighting against a bot deliberately programmed to lie. By setting it up this way, I could evaluate their critical thinking on the page, as well as their much-needed "AI BS Detectors."
The brainstorming sessions demanded that students push, pull, expand, and narrow AI-generated ideas, with passive acceptance meaning a low grade.
The key insight: if you pick a skill and set up the interaction as a test of that skill, the rubric construction becomes much easier.
Aimee Skidmore and I facilitated this approach in her high school English classroom this past spring and surveyed students before and after to track changes in perception and approach. The data was eye-opening, and we are preparing to share it widely. Based on this success, I'm now working with four universities and one K-12 district to pilot similar research programs.
Try It Yourself: The Sparring Approach
Before you create an assignment or lesson plan incorporating these ideas, I recommend sparring with AI yourself. Getting a feel for the student experience will help you create an activity with solid bones. Try to engage with GenAI as an intellectual opponent rather than as an agreeable recent MIT graduate.
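If you want to feel what that stance is like in a raw chat loop, here's a minimal sketch using the same OpenAI Python SDK as above; the sparring instructions below are my own illustration, not the nine prompts from the downloadable resource:

```python
# A minimal sparring-partner chat loop.
# Assumes: the openai Python SDK and OPENAI_API_KEY in the environment.
# The system prompt is an illustrative example of the sparring stance.
from openai import OpenAI

client = OpenAI()

SPARRING_SYSTEM = (
    "Act as an intellectual sparring partner, not a collaborator. "
    "Attack the weakest point in my argument, name my unstated assumptions, "
    "and offer the strongest counterargument you can. Never agree with me "
    "just to be agreeable; concede a point only when my reasoning earns it."
)

history = [{"role": "system", "content": SPARRING_SYSTEM}]

while True:
    claim = input("Your position (blank line to quit): ")
    if not claim:
        break
    history.append({"role": "user", "content": claim})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=history,
    )
    text = reply.choices[0].message.content
    # Keep the full exchange so the model can press on earlier concessions.
    history.append({"role": "assistant", "content": text})
    print(text)
```

A few rounds of this make the difference obvious: you spend the session defending, qualifying, and sharpening rather than nodding along.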
As mentioned earlier in this article, I've developed nine specific sparring partner prompts that push a user beyond comfortable collaboration into productive intellectual conflict.
You can download the document from this Google Drive link or from the top of the Resources page on my website; the file will be emailed to you directly.
The goal isn't to win against the AI; it's to get stronger through the struggle. This is how we move from AI as digital butler to AI as intellectual training partner, and how we prepare students not just to use AI, but to grow through the friction of engaging with it.
If you're interested in implementing sparring partner approaches at your institution, reach out! I'd be thrilled to partner with your organization.