How We Frame Machines

Comparative Transcript Analysis: A Video Guide and Samples

What Essays Can Teach Us About Evaluating AI Use

Mike Kentz
Apr 03, 2025

Yesterday’s post about VibraGrade was, in fact, an April Fool’s joke. There is no platform for grading emoji use (thank God). That said, based on the replies I received, it’s clear that while the tech isn’t real, the fear it pokes at very much is. Some of us do feel like we're barreling toward a future where grading might hinge on metrics that miss the point. And that’s exactly why we need to ask better questions.


The conversation around AI in education has matured over the past year—but we’re still not asking the right questions.

When I first started tracking the discourse on Substack and LinkedIn, at conferences, and in the media, it was almost entirely focused on cheating. That concern was warranted, but it felt limiting: educators felt they were being reduced to classroom cops rather than developers of creativity and cognition.

AI cheating is still a very real issue, but educators now seem more focused on developing expansive teaching methods and assessment protocols than on “playing police.” We’re not out of the woods yet, but it’s inspiring to see so many creative solutions being developed across the board.

At the same time, some educators pushed back on the myth of the “AI tutor.” That skepticism turned out to be right: generative AI is a poor teacher when dropped in front of a student with no structure or scaffolding. The technology still opens some new pathways for learning, but the idea that AI could act as a full-service tutor looks less likely by the day.

And then there was ethics. At the outset, this conversation tended to focus on three questions:
What should students use AI for, if anything?
When in the learning process should they use it?
Why use it at all?

Each of these invites subjective answers. Some say AI should never be used. Others think it’s fine for brainstorming. I’ve heard some writing teachers say, ‘Use it to help you revise your writing, but not to write the first draft.’ Then others, on the opposite end of the spectrum, argue that students should use AI to overcome the tyranny of the blank page but never use it for revisions.

Can you imagine being a student right now?

Furthermore, in a world where students can access AI anytime, "what," "why," and "when" end up as nothing more than suggestions. What does appear to fall within our locus of control, more than we perhaps realized, is the "how."

"How" is the lever we can actually teach, guide, and assess. And done well, it can serve multiple purposes:
– Help prevent shallow or dishonest use.
– Build metacognition, creativity, and critical thinking.
– Protect human cognition in the long term.

But we face two big challenges:

  1. AI use is subjective and context-specific. There is no universal “right way” to engage with AI. What’s effective depends on the task, the thinker, the goal, and even the bot, in some cases.

  2. We don’t have benchmarks. For one, we are still in the relative Stone Age of broad cultural AI use. It’s early days. For two, most current studies focus on prompt engineering, not cognition. They aim to maximize AI’s output—not to measure human thinking, reflection, learning, or growth.

So what can we do?

We take a page from how we teach writing.

The essay has never had a single perfect formula. It’s subjective, abstract, and open-ended—just like a chat with AI. Yet we’ve created essay rubrics and instructional methods by analyzing examples, teaching rhetorical moves, and building shared language over time. It has taken hundreds, arguably thousands, of years (composition was first taught as a standalone subject in the early 1800s, embedded into the Harvard curriculum in the 1870s, and only emerged as a “scholarly research discipline” in the 1970s), but over time we have been able to agree on some core, foundational principles of “good writing.”

(Even then, if you gave the same rubric to two seasoned writing teachers and asked them to grade the same essay, there would still be some disagreement.)

We can do the same with AI.

By treating AI chats like we treat essays—examining them closely, comparing weak and strong interactions, and using rubrics—we can teach students not only how to “use AI” but how to think in conversation with it.

This is what I call Comparative Transcript Analysis.


Below this paywall, I’ve included:

  • A short video guide walking through my approach

  • Sample student transcript comparisons from my 9th-grade English class

  • Tips for how to lead this kind of analysis in your classroom

A final note: This method is not about “teaching AI.” It’s about teaching thinking. Just like close reading essays, analyzing AI chats builds metacognition. That it also improves AI outputs is a bonus. But the real goal is developing the kind of learners our world needs next.
