16 Comments

This framework seems great. I'm excited to give it a try this school year!

author

Awesome, Brandon! Feel free to join the CRAFT Program, which will bring together educators across disciplines and levels to experiment with this approach in a collaborative way!

Sign-up is at the link below - it’s free!

www.zainetek-edu.com/products

Jun 7 · Liked by Mike Kentz

I am a designer and am finishing up an asynchronous online course that provides video-based instruction on using Copilot to guide students in identifying research questions and finding core topics and sources for a scaffolded final paper. We want students to submit their chat history, and I am looking for an easy way for the instructor (who is an adjunct and was not involved with the course design, so I don't want to place a heavy AI burden on them) to review it.

I am curious whether you could share the chat exemplars you provided to students for higher- and lower-quality chats. I'd love to use them with our students and invite them to evaluate them, and then later to evaluate their own chats prior to submitting.

author

Hi Amanda, I am working on putting together a free workbook of materials for the approach that will include some exemplars and non-exemplars. They will be available on my website www.aiforschools.info in the next two weeks. I will post an alert on my blog as well. Thanks for reaching out!

Great! I am thinking we will require students to summarize their prompts along with a brief reflection (1-2 sentences) on the value/relevance of the response they got, and to save any useful chat responses both for their own purposes and to submit as documentation of their work.

We are using Copilot because it provides links to scholarly resources, and student conversations are (I believe) protected within our institution's Microsoft account.

author

That sounds right up my alley! If you are interested, I am also putting together a cohort of educators who are interested in collaborating around this concept later this summer. The idea is to map out all of the ins and outs of each subject area, discipline, task, objective, activity, format, method, mode, and so on. Would love to have you join!

It all depends on how one hovers, eh? :) I want to respond thoughtfully to this. Wow. I love the pushback. I think I might use this convo as the basis for a post and get your name out to my subscribers. I don't have a lot (200), but those I have read regularly and are well placed in education. I'd love to see some subscribe to you.

author

Haha! I'm sorry, I hope my comment did not come across as confrontational. I have been sitting with this idea for so long that I have tried to think through every angle and flaw, and so am very ready for discussion and to "debate" its merits.

I'd very much appreciate that! I wonder if it is worthy of a podcast episode. I am thinking of launching one soon. Let me know how I can help or contribute!

Thanks, Terry!

When I first started using GPT-3.5 I knew next to nothing about prompting. I soon learned not to expect much if a prompt was wacky. I'd been reading about hallucinations, and once the bot put Jane Eyre in Act II of Hamlet. So I began to end every conversation with a request: Please select the three best prompts and tell me why they were good. Then select the three worst prompts and explain. This led to long discussions with the bot about what makes a "good prompt" and how good prompts helped the bot respond. In the end, I'd say the bot taught me how to use the machine. I'd be interested in learning from you whether this strategy has any utility.
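A minimal sketch of that end-of-chat reflection request, assuming the OpenAI Python client; the model name and the accumulated message history are placeholders:

```python
# Terry's end-of-chat reflection request as a reusable helper.
# Assumes the OpenAI Python client (openai >= 1.0); the model name
# and the accumulated `messages` history are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REFLECTION_PROMPT = (
    "Please select the three best prompts in this conversation and "
    "tell me why they were good. Then select the three worst prompts "
    "and explain."
)

def close_with_reflection(messages: list[dict]) -> str:
    """Append the reflection request to an existing chat and return the critique."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages + [{"role": "user", "content": REFLECTION_PROMPT}],
    )
    return response.choices[0].message.content
```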

I also learned something about myself and my purpose. The effort I put into making a chat plan and crafting a sequence of prompts is directly related to how intensely I want to find something out. For example, people are always asking me if I would ask the bot about a real estate question, or how to read an MRI, etc. When I'm working without a personal investment in the outcome, I'm less patient and less apt to be satisfied with the result. But the benefit isn't mine to begin with. When I'm dying to know something (tell me more, O bot, about Heidegger's tool-as-being, say), then I turn on the cognitive burners and invest effort. It pays off.

The one element that troubles me in your design is student agency. There is a teacher-hovering quality to it and a fairly heavy reliance on letter grades to motivate. In my experience this approach could diminish the intensity of students who may not have the drive to find things out that one needs in order to use a bot well, which is especially important for kids. Adults working professionally for a salary likely have a completely different motivational orientation and may actually work harder to prompt and chat productively. The grade is a paltry motivation compared to a paycheck.

I think you could remedy this by assessing progress through a simple pass/redo, using feedback to improve, and grading only summative performance. I think the more choices you can give students in questions, purposes for chats, and design of approach, the more powerful their motivation will be.

I really admire your obvious commitment to your students and your colleagues. Using Holden in this context is very cool. I look forward to your next post. Thanks for this.

author
Jun 4 · edited Jun 4

This is really interesting, Terry!!

I definitely want to test out your approach of asking GPT for feedback on my own and my students' prompts. The only issue I have with it is that I usually tell my kids "don't ask AI for answers or opinions," since it doesn't really have any. (Instead I say, "Ask for ideas, suggestions, or pros and cons, and have it explain why it made those suggestions.") But still, it is a really interesting way to analyze prompts. It's almost like you are asking AI to do a close reading of itself. So for example, we could analyze a chat ourselves as a class and come to some group or individual conclusions about what worked, what didn't work, and why -- and then ask AI to do the same under the same parameters, and then compare the two results... whew! I know some of my kids would have their minds blown by that kind of process!
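A sketch of what that comparison step might look like, assuming the OpenAI Python client; the rubric text, file path, and model name are illustrative placeholders:

```python
# Sketch: hand the model a saved chat transcript plus the same rubric the
# class used, so the AI's close reading can be compared with the class's.
# Assumes the OpenAI Python client; rubric, path, and model are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

CLASS_RUBRIC = (
    "Do a close reading of this chat transcript. For each prompt, explain "
    "what worked, what didn't work, and why, citing specific lines."
)

def analyze_transcript(path: str) -> str:
    """Return the model's rubric-based analysis of a saved transcript."""
    transcript = Path(path).read_text()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": f"{CLASS_RUBRIC}\n\n{transcript}"}],
    )
    return response.choices[0].message.content
```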

You are so right about effort being related to how badly you want the project/AI chat to work out. I have seen and experienced this many times. I think, broadly, the message to kids in relation to this is: "OK, sometimes you are not going to want to have a chat that is this intense. That's fine. But just make sure that those chats don't go any deeper than the surface level, don't ask AI for advice or opinions, and are not used as ways to shortcut important thinking." In other words, you can have those "simple" chats -- nothing wrong with that -- just be aware of it.

Thoughts?

I also think you make a really good point about the hovering issue. This approach does take some of the novelty, freedom, and excitement out of the AI interaction process.

But I would respond to it in two ways:

1) The hovering doesn't kill creativity. It actually teaches it. Every time I give my kids feedback on one of their iterations or conversations, the feedback involves "how to think about the problem they are trying to address." It's never -- "Use few-shot prompting because I said so." Does that make sense? It's basically a creative writing approach to chat transcripts.

2) I would also argue that of all times this is actually the time when we should hover the most! I don't mean that to be blithe, but in truth I think this is such a pivotal moment for teachers to get "in there" -- meaning in the chats -- to ensure we don't lose an entire generation to AI (like we did with social media).

That said, the hovering point is well-taken. I think over the years I (or we) will have to get creative in figuring out when and where to scale back the monitoring.

Last, I completely agree about feedback to improve on a whole-class basis. I did this once this year when I saw trends in the chat transcripts (the same way I do with essay feedback). We did a whole-class discussion and note-taking session on the gaps, knowing that they would do it once more that year and could try to apply the feedback then. However, I have not figured out how to do a pass/redo -- the problem with a re-do is that AI produces different responses every time, even if you use the same exact prompt. So there's really no way to re-simulate the exact same chat a second time (as far as I am aware). You may have a better idea of how to do this, though.
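The variability comes from sampling: at nonzero temperature the model draws each token stochastically, so identical prompts yield different completions. A minimal sketch of that behavior, assuming the OpenAI Python client; the model name is a placeholder, and the optional seed parameter makes runs more repeatable but is best-effort only:

```python
# Sketch: why the exact same prompt rarely reproduces the exact same chat.
# At temperature > 0 the model samples tokens stochastically, so each run
# can differ. Assumes the OpenAI Python client; model name is a placeholder.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Describe Holden Caulfield in one sentence."}]

for run in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=messages,
        temperature=1.0,       # stochastic sampling: expect different outputs
        # seed=42,             # optional: best-effort (not guaranteed) repeatability
    )
    print(f"Run {run + 1}: {response.choices[0].message.content}")
```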

Thank you for this comment and your support! You have really stretched my thinking here and I appreciate it!!

Fascinating! I've used question-asking strategies and response strategies, but never together like that. Especially with AI. Great food for thought!

This is incredible, Mike. Truly!!! You are working on the bleeding edge here. Let me know how I can assist in getting the word out.

author

Thank you!

I hope that the conferences and webinars will help. In the meantime, if you hear of anyone focused on assessments, let ‘em know!

This one is very much still in the lab, but I think it holds promise.

I'd say four cycles of implementation is beyond the lab in today's context.

Impressive conference and webinar cycle. Keep going, Mike!!!

The big breakthrough is right around the corner!!!

This is great--way better than what I was using to assess my students' ChatGPT chats!

I may be creating a hybrid version, using this as inspiration and adding my own spin!

Thanks for sharing this!

I love how this approach focuses on close reading and gets students to develop metacognition and critical thinking without ever using those words. The idea that you are graded on your efforts and outputs with AI reframes everything beautifully.
