The AI Era Demands Curriculum Redesign: Stories from the Frontlines of Change
Sharing concrete examples of process-based assessment methods in the age of AI
Student use of AI is increasing, and traditional language-based assessments no longer provide a clear window into student thinking. As Chris Dede and Lydia Cao put it, “Our current curriculum and high-stakes tests emphasize skills at which AI excels.” Simply put, the educational system must adapt: not because AI is flawless or even desirable in every instance, but because it’s here, students are using it, we can’t reliably detect it, and we owe it to them to prepare them for a future where AI systems are ubiquitous and their use is expected in the workplace.
For educators, this means rethinking curriculum and assessment frameworks in ways that mitigate the risks of AI while leveraging its potential to foster deeper thinking and problem-solving. One promising solution is evaluating student interactions with AI. This concept has faced some skepticism due to workload and feasibility concerns; transcripts can, after all, be quite long. But those concerns stem from a misconception about the approach itself.
I personally advocate that educators construct assessments with this model only once a semester, if that, especially in these early stages as we test and gather data on what works and what doesn’t. It’s not about grading 100 student chats a day; it’s about selectively and strategically finding opportunities to evaluate student thinking within broad-based problem-solving exercises where AI becomes a “backboard” for student thinking.
Furthermore, educators can reduce workload by placing time and/or prompt limits on the graded chat. For instance, last March, when I asked my students to use ChatGPT to brainstorm a plan of attack for a broad-based project-based learning activity, I told them I would evaluate a maximum of seven prompts. They could continue using it afterwards, but I would limit my evaluation to their first seven inputs.
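If your students submit transcripts as plain text (copied and pasted from the chat window), a few lines of script can trim each submission to the graded window before you start reading. Here is a minimal sketch — my own illustration, not a tool from this article — that assumes each student turn begins with a marker like “You:”; adjust the marker to whatever your students’ exports actually look like.

```python
# trim_chat.py — keep only the first N student prompts from a pasted transcript.
# ASSUMPTION: each student turn starts with a line beginning "You:".
# Adjust PROMPT_MARKER to match your students' export format.

PROMPT_MARKER = "You:"
MAX_PROMPTS = 7  # grade only the first seven inputs


def trim_transcript(text: str, max_prompts: int = MAX_PROMPTS) -> str:
    """Return everything up to (and including) the AI reply that follows
    the Nth student prompt; later turns fall outside the graded window."""
    kept_lines = []
    prompt_count = 0
    for line in text.splitlines():
        if line.startswith(PROMPT_MARKER):
            prompt_count += 1
            if prompt_count > max_prompts:
                break  # the (N+1)th prompt starts here; stop reading
        kept_lines.append(line)
    return "\n".join(kept_lines)


if __name__ == "__main__":
    import pathlib
    import sys

    raw = pathlib.Path(sys.argv[1]).read_text(encoding="utf-8")
    print(trim_transcript(raw))
```

Running `python trim_chat.py student_chat.txt` shows you exactly what falls inside the graded window before you sit down to evaluate it.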
This approach did not just reduce the grading load; it also incentivized students to think harder about their AI use. Without being instructed to do so, many students opened separate Word documents to draft, proofread, and revise their prompts before entering them into the chat. They slowed down and engaged in deep metacognitive thinking about how best to describe their purpose and goals, and how to articulate a question to the system in the hopes of getting useful help with a large problem.
This is the type of friction that so many leading thinkers have been calling for since last spring. My students who paused, reflected, wrote, and rewrote before engaging with AI scored better on that portion of the assignment than those who approached it as an informal chat or treated AI as a tool that would simply do their bidding without first engaging in a meaningful thought process.
In that case, I approached their prompts as writing samples. The evaluation had little to do with their ability to “get AI to produce something good,” but rather with their ability to articulate themselves in a thoughtful and meaningful manner.
If prompts can be construed as writing samples, and if good writing is good thinking, then student prompts can further be categorized as “thinking samples.” Using this language with your students ahead of time can help them understand the “goal” of a graded chat. “I want to see your thinking,” you might say, before separately modeling for them how that might look; that piece of the framework is in the first PDF on my home page.
And if there is one problem that educators should be focused on above all, it is the fact that AI systems are making student thinking invisible. This method is one way to make student thinking visible again.
The Stories
Over the last six months, I’ve worked with educators who are pushing boundaries in curriculum redesign through my free professional development program, CRAFT (Collaborative Reform for AI-Focused Teaching). Together, we’ve explored innovative ways to assess student thinking, and below, I’ve compiled stories from educators across disciplines who are experimenting with this methodology in ways that align with their specific content, skills, and classroom contexts.
Their experiences highlight the adaptability of this approach and its potential to revolutionize how we teach and assess in the AI age. This is the first in a series of posts sharing these examples. I hope you enjoy reading their stories.
Doan Winkel, Associate Professor - Kahl Chair in Entrepreneurship, John Carroll University
Doan Winkel created an assessment framework that let students practice customer interviewing skills while enabling him to quickly identify specific areas for skill development, both for the entire class and for individual students. Students used a customer-interview CustomGPT Doan developed. Each student told the CustomGPT what product or service they were focusing on, who the typical customer was, and what specific problem the product or service solved for that customer persona. From there, the CustomGPT role-played as that customer persona, and the student’s job was to ask questions that would extract quality information to inform their product development. Students then posted their chat transcripts and annotated them, identifying areas they could improve.
Doan approached the student chat transcripts with an eye toward 1) the quality of their questions in eliciting useful information about the “customer’s” experience with the problem, and 2) the depth of reflection in their annotations.
"This methodology enabled students to practice a skill that creates a ton of anxiety in students, so when the time came for them to interview actual human customers, they felt much more confident in their skills, and would thus have much more productive interviews, thus wasting less time and resources working on solutions that the customers didn't want,” he said.
“It was immensely helpful for me to see the chat transcripts so I could identify for the class as a whole, and for specific students, areas for improvement. Long-term, I'm excited about using CustomGPTs for my students to practice critical entrepreneurial skills in a low-anxiety interaction. Overall, educators can consider this approach for anytime students need to practice a specific skill. A teacher can collaborate with ChatGPT to design a custom GPT students can use to develop proficiency in a skill."
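For readers who want to prototype something like Doan’s exercise outside of the CustomGPT builder, the same role-play setup can be sketched in a few lines against the OpenAI Python SDK. To be clear, this is a minimal sketch of the general technique; the persona fields, model choice, and system instructions below are my own illustrative assumptions, not Doan’s actual configuration.

```python
# persona_interviewee.py — rough sketch of a customer-persona role-play bot,
# in the spirit of Doan's Customer Interviewer CustomGPT. The persona fields
# and instructions below are illustrative assumptions, not his actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The student supplies these three pieces, just as in Doan's exercise.
product = "a meal-prep subscription for college students"
customer = "a busy sophomore living in a dorm"
problem = "no time or equipment to cook healthy meals"

system_prompt = (
    f"You are role-playing as {customer}. The student is developing "
    f"{product}, which aims to solve this problem: {problem}. "
    "Stay in character. Answer only what you are asked, the way a real "
    "customer would: be vague when questions are vague, and give rich, "
    "specific detail only when the student asks sharp, open-ended questions."
)

messages = [{"role": "system", "content": system_prompt}]

print("Interview the customer (Ctrl-C to stop):")
while True:
    question = input("You: ")
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Customer: {answer}")
```

The pedagogical leverage lives in the system prompt: instructing the persona to reward sharp, open-ended questions with rich detail and to answer vague questions vaguely is what turns the transcript into evidence of interviewing skill.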
Jason Gulya, English Professor, Berkeley College
Instead of assigning a traditional essay, Jason Gulya designed a “Reading Portfolio” exercise for his Literature students. He asked students to choose a story, free-write to generate ideas, mark up a passage, and more. Then he assigned a graded chat: he provided students with a custom chatbot designed to push back against their ideas, asked them to debate it using everything they had produced for the portfolio, and had them submit the entire transcript.
“When evaluating the transcript, I focused on my students' use of critical thinking, their ability to think deeply about the reading material, and their ability to read the chatbot's responses carefully and to write clearly,” he told me. “I learned far more about how my students think and reflect than I often do from traditional essays.”
"Curious about how to adapt this methodology for your classroom? Sign up for my newsletter to get updates on innovative tools for AI-era education and early access to resources like the potential Grade the Chats Manual."
Bruce Clark, Associate Professor of Marketing, D’Amore-McKim School of Business at Northeastern University
“Wow, did this work.”
This was Bruce Clark’s conclusion after asking students to argue with a Generative AI chatbot and then evaluating the quality of their interactions along the way. You can read more about his framework and experience on his Medium website here.
A few excerpts from his piece:
“I was also very satisfied that I was seeing how students thought through (or didn’t) a problem. When I asked them why they chose what they did from the AI’s output, the two most common responses were (a) it was the weakest objection and (b) it was the most interesting objection. Many came up with creative ways to overcome their LLM’s objection. (One, I suspect, may have used an LLM to come up with ideas. Sigh.) And to my delight, students who had been virtually silent in class blossomed in an interaction with their LLM. I gleefully read the chat from one heretofore quiet student who tore apart his LLM.”
“This exercise was the hit of the summer: fourteen of sixteen rated the exercise either ‘useful’ or ‘very useful’ on a five-point scale. Informal feedback suggested it was also highly motivating. Students were fascinated by the conversations. One student did the entire exercise twice just to see how it would differ if she changed her prompts. Two shared with me how happy they were when they felt they ‘beat’ their AI. In retrospect, there is a gamification aspect of this that I had not appreciated in advance. This was the LLM equivalent of a ‘boss battle’ in many online games, and students wanted to beat the boss.”
Bruce also wrote about an attempt involving this approach that did not fare so well. In it, he focused on using AI as a tutor and a coach and found that students did not enjoy or appreciate the experience. You can read more about it here.
Keep Moving Forward
From my perspective, even the “failures” are “good.” By attempting these new techniques, we find out what works and what doesn’t. In a perfect world, we would have much more time and space to test updated assessment strategies with our students in more controlled environments. That would be the case if access to AI systems were more restricted by regulation, for example.
Unfortunately, that is not the case, which means we need to get going with what we have, and accept that not every attempt will hit the mark. I applaud and appreciate Bruce and all the other educators who are willing to try new things and then share the results.
This approach is still in its early stages, but the potential is clear: a future where assessments aren’t just about the answers students produce, but about the process and problem-solving they demonstrate along the way.
I’ll be sharing more stories like these in the coming weeks and months. You’ll hear from art teachers, history teachers, writing professors, psychology professors, and more. Follow along to see how you might tweak your existing assessment methodologies to adapt to the AI era.
If you’re ready to explore this methodology further, there are plenty of ways to get involved:
Sign up for my newsletter to stay updated on curriculum redesign and future resources like the Grade the Chats Manual.
Join the waitlist for the next CRAFT Program, launching in February-March.
Or, reach out directly (mike@zainetek-edu.com) if you’re curious about testing this methodology in your classroom.
Together, we can shape the future of education in the AI era. Let’s make student thinking visible again.