New Research Study: The Field Guide to Effective GenAI Use
A Museum of Real Use: Six Educators Annotate Their AI Use—and a Method Emerges for Benchmarking the Chats
Today, I am excited to share the publication of a new experimental research study in the WAC Clearinghouse: The Field Guide to Effective GenAI Use.
The Field Guide presents three core arguments:
Ethical or effective AI use can only be determined on a relative basis
AI transcript analysis needs to become an entirely new academic field of study
We can benchmark student chats and evaluate AI use across a variety of skills and subject matters. Benchmarks allow for a new assessment structure that adapts to the AI era while preserving the cognitive processes we hold dear.
In the exhibit, you will also find six annotated chat transcripts from real educators. Alongside me, the contributors include Dr. Lance Cummings, Jason Gulya, Doan Winkel, Dr. Kara Kennedy, and Dr. Nick Potkalitsky — voices many of you already follow and trust. They teach across disciplines, institutions, and pedagogical styles, and each was willing to make their own process visible.
Below is a description of the exhibit with links and explanations. I invite you to read and critique the approaches on the page — including the Foreword and first four chapters of the exhibit, which place this methodology within the context of traditional pedagogy and instructional methods.
I hope you enjoy it. And please do not hesitate to reach out with questions, thoughts, ideas, and commentary.
The Field Guide to Effective GenAI Use
The marginalia tell the story. The prompts are only half the conversation.
For the past year, I’ve been working through a puzzle many educators are still trying to solve. When students use generative AI in their work, how should we evaluate it? How do we know what good use looks like?
Many educators are asking students to turn in prompts as part of a bibliography or informal discussion point. Others have taken the step of counting interactions as part of a formative process grade. These efforts are well-placed and forward-thinking, but there is another step we can take. The interaction itself can be treated as an artifact of student thinking. It can be more than a reflection tool or a process checkpoint. It can be a central component of the grade.
Our next challenge is to self-analyze and develop meaningful benchmarks for AI use across contexts. This research exhibit aims to take the first major step in that direction.
With the right approach, a transcript becomes something else:
A window into student decision-making
A record of how understanding evolves
A conversation that can be interpreted and assessed
An opportunity to evaluate content understanding
Our collaborative exhibit demonstrates the steps of self-analysis that can move us toward ethical and effective AI use, and toward treating the chat itself as both an assessment tool and an artifact of student thinking.
The Exhibit
Now live in the WAC Clearinghouse, this experimental exhibit features:
Six real educators sharing unedited transcripts of their own AI use
Margin annotations that explain what they were thinking as they prompted, read, and reflected
Two kinds of commentary: one that speaks to the user on the page, and one that explains the author’s own internal process
Transcripts include use cases such as:
Generating rubrics and learning activities for a high school English class (Mike Kentz)
Preparing study abroad materials and curriculum (Dr. Lance Cummings)
Shifting a college essay assignment into a project-based learning activity (Jason Gulya)
Preparing for a mathematics tutoring session (Dr. Kara Kennedy)
Drafting a curriculum for an experiential entrepreneurship class (Doan Winkel)
Designing a college-level writing curriculum for high school students (Dr. Nick Potkalitsky)
You can find links to each of their transcripts—and their annotations—on the landing page of the exhibit.
To be clear, these transcripts are not meant to be templates or model answers. They are authentic artifacts of how teachers actually use AI, with an added layer of reflection and consideration.
Prefer to watch a walkthrough?
I’ve created a short video guide to help you navigate the exhibit and understand how each piece fits together.
It covers the origins of the project, the key goals of the methodology, and what you’ll find inside each transcript.
The Breakthrough: Conceptual Benchmarking
For those who have followed my writing on “grading the chats,” this may feel familiar. But this Field Guide introduces something new.
Comparative Transcript Analysis makes it possible to benchmark AI use.
Here’s what that means:
Two transcripts are placed side-by-side
Each shows a different kind of interaction with AI
Through close reading and comparison, students begin to see why one transcript is more effective than the other
This creates a conceptual benchmark—a way to assess subjective use in relative terms
This approach goes beyond completion grades or checkbox rubrics. It allows instructors to evaluate process, not just outcome. It turns AI interaction into something legible. It also invites students to reflect on their own choices.
This Field Guide is a first step toward something larger: a shared practice of analyzing AI transcripts together, and eventually, a library of benchmarked interactions that institutions can draw from.
The Method Is Grounded in Research
This work draws from multiple fields:
Writing and Transfer Theory — to support knowledge application across contexts
Media Theory (McLuhan) — to frame the transcript itself as the artifact
Cognitive Distance — to sharpen self-awareness through annotation and time
Conceptual Benchmarking — to evaluate subjective tasks through comparison
Dialogic Interaction — to treat prompting as a conversation, not a command
Together, these strands support a new approach to AI literacy—one that is reflective, comparative, and assessable.
Why This Exhibit Matters
If I were teaching a class on AI literacy in 2025, this would be the first assignment.
It provides:
Real examples from real educators
Honest reflections on their thinking and regrets
A method students and teachers can use immediately
It’s a reading experience.
It’s a mirror.
It’s a prototype of what responsible AI use might look like in public.
Over time, I imagine a future where annotated transcripts are collected and curated. Schools and universities could draw from a shared library of real examples—not polished templates, but genuine conversations that show process, reflection, and revision. These transcripts would live not as static samples but as evolving benchmarks.
This Field Guide is the first move in that direction.
You can explore the exhibit here:
🔗 The Field Guide to Effective GenAI Use
Let me know what stands out. And if you end up annotating your own transcript, I’d love to see it.
Mike Kentz is an educator, writer, and founder of AI Literacy Partners, where he helps schools, universities, and organizations navigate the challenges of generative AI with clarity and integrity.
AI literacy is his game. Comparative transcript analysis is the method.
If you're looking to bring this work into your school, workshop, or curriculum, reach out.
You can find him here:
🔗 Website
🔗 LinkedIn
🔗 Bluesky
📬 Or just reply to this post on Substack and say hello.