How Educators Are Designing AI Assessments That Actually Work

Helping Students Think Critically, Engage Deeply, and Take Ownership of AI

Mike Kentz
Feb 04, 2025

A few weeks ago, I shared stories from educators who are rethinking assessments to highlight the value of students’ interactions with AI. We explored ways to structure these assessments to reduce grading burdens, create productive friction, and inspire students to care about how they engage with AI.

Today, I’m excited to share more stories. The first half of this post is freely available, and the second half is for paid subscribers who want to go deeper. I’m putting this behind a paywall not to exclude, but to create a space for those who are serious about exploring these ideas together. If that’s you, I’d love for you to join.

Before diving into these examples, let’s discuss what it takes to set students up for success when working with AI. Success, of course, is subjective and depends on the task and context, but two factors are critical: student mindset and teacher method.

In this post, I’ll focus on the first: cultivating the right student mindset.


Students as Experts

One of the most important lessons I’ve learned is that students need to see themselves as the experts when working with AI—not the other way around. This idea received research-based support from a recent study showing that student learning increases when students are positioned as the teacher and the AI as the learner. In that setup, the student is the expert, and the bot simply provides a playground on which to demonstrate that expertise.

In all three major assessments I have run as a high school English teacher, I framed the student as the expert. We were not using AI because it knew more than us; we were using AI because it provided a valuable testing ground for probing our own knowledge and thinking.

Here’s some of the language I used:

  • You know the material better than the AI. When we explored The Catcher in the Rye, I reminded them, “You know Holden Caulfield better than anyone. You are the expert in everything ‘Holden.’ Your job is to push the bot beyond surface-level responses and see if it can give you something meaningful. Maybe it will, maybe it won’t, but through your expertise, we will find out.”

  • You know the goals of your project better than the AI. When using AI to help brainstorm project-based learning tasks, I’d say, “You know better than anyone what kind of project you want to make. AI does not know better than you. But, by using it as a brainstorm partner and focusing on producing a high-quality interaction at the same time, you may find that it can expand your creativity and your project-planning process. Truth is, I don’t know, but let’s find out.”

In this framing, AI was not an expert. It was a possible expander of creativity, knowledge, insight, and reflection. To further encourage meaningful engagement, I leaned on my journalism background for additional framing.

  • You are an investigative reporter. You have to dig. “ChatGPT might give you boring or nonsensical ideas at first,” I warned. “You may have to push it, refine your questions, and dig deeper. Even if it doesn’t solve everything, the process of ‘interviewing’ it will prepare you for the task at hand. Pushing yourself to have a meaningful engagement will make the entire process more enjoyable and interesting.”

The goal here is to reposition AI as a tool that is subordinate to the student—not an authority to passively follow. This lets educators use AI as a learning space and develop AI literacy without framing AI tools as “tutors.”

This reframing also keeps agency, control, curiosity, independence, and choice in the hands of the students rather than the AI. They are driving the car, not the bot.

This is not the only way to use AI as a learning tool, but it’s one that continues to nudge its way to the surface over time. In fact, these types of interactions often produce rich, fertile ground for assessing student expertise and knowledge, as you will see below.


What’s Next?

Here are some stories of educators who’ve devised their own approaches to fostering this kind of learning. I’ve bolded key takeaways that stood out to me when I first read them.

Let’s dive in.


Dr. Cheri Kittrell, Professor of Psychology, State College of Florida, Manatee-Sarasota

Dr. Cheri Kittrell created an assessment framework that helped students review difficult learning objectives and use generative AI to bring their knowledge up to an applied level of learning. Students started by completing an adaptive learning assignment (using Smartbook, proprietary McGraw-Hill software that presents increasingly difficult questions about the chapter reading). After completing their Smartbook assignment, students collected data from their assignment report on the “most challenging learning objectives” they had in the chapter. These varied from student to student.

This next section is for paid subscribers—because depth matters. I want to create a space where we can move beyond quick takes and into real, meaningful conversations about AI in education. Your support helps build a thoughtful, engaged community, free from the noise of the attention economy. If that sounds like something you want to be part of, I’d love to have you join.

