Walk into any faculty meeting today and say the word “AI,” and you’ll likely get one of three reactions: a worried glance, a polite nod, or a visible eye-roll.
Play is exactly what is needed. Thanks for this post!
Hey Mike, Well-timed. I'm looking for a framework for my students to tinker with generative AI this fall. I like this one, so I may very well borrow it!
Go for it! Would love to hear how it goes!
Nice. Mike.
I took the lead at my university in integrating AI into all of my students' learning. The resulting learning and skill demonstration were an order of magnitude better.
I built tools (apps) that other teachers could use to design AI curriculum for their students (see https://jump-ahead-learning.vercel.app/; note that I am still honing it).
I was showing faculty the impact on skills and learning and how to use the tools I've developed. I was reminding them that the ultimate goal is to have the biggest impact on the education of our students, and that universities and colleges are in trouble: we need to do everything we can to prove the value of a college education.
But I didn't design your process.
Wondering if you'd like to collaborate on a tool that will facilitate the training process you've defined.
Kevin
Hey Kevin,
Demonstration of value is a key kickstarter for momentum-building among faculty, and the optimism that comes with it is a great incentive. However, it carries its own drawbacks: it sometimes makes AI look like magic and undersells the risks and dangers. Delivering the rhetoric of "we have no choice" is also risky, even if it is true and I agree, because it makes faculty feel obligated, and when people engage begrudgingly they rarely learn in a meaningful way.
Navigating the tightrope between providing a balanced view of the pros and cons and demonstrating value to build interest and momentum is one of the hardest parts of conducting PD in this area.
Your tool looks fascinating. Yes, I would be interested in collaborating. My email is mike@litpartners.ai. Look forward to chatting!
Mike
Hey Mike,
Thank you for sharing! This is great material for me to share with my colleagues as we continue to bring AI and AI literacy into the classrooms in our district.
I needed to teach my students to read and write and think. That was hard enough for them and me. I didn't need to suddenly waste our limited time on unproven, biased, hallucinating software. Kids need to learn and develop their minds, not play with the shiny new software. Sorry, not sorry.
Hey Will,
I couldn't agree more. This was my stance when GPT-3.5 was first released. I was teaching high school English at the time and felt an immense amount of frustration about it. In fact, I still feel frustrated. As an example, I personally don't understand why we, as a culture, are accepting the idea that age limits for AI should be 13+. A 14-year-old is nowhere near ready to engage with these tools in a thoughtful way that protects their learning journey and their cognitive development.
My thinking with respect to "learning about AI" changed when I realized my students were using it at home and there was little I could do to stop it, police it, or ban it. Our detection platforms were obviously flawed, savvy students were getting better and better at masking their AI use, and tools were embedded into social media platforms like Snapchat. In fact, my students weren't using ChatGPT in 2023-2024; they were using Snapchat AI. Many didn't even know about ChatGPT, though that has changed in a dramatic way this year.
In that respect, I don't see "learning AI" as a waste of time. In fact, I see it as the only logical solution to a very big problem. AI is probably bad for our brains (depending on how it's used), tools are freely available at home, the temptation for students to use it as a shortcut is massive, we can't detect it reliably, and students are increasingly hearing that corporations won't hire them if they don't use AI. I can dislike AI all I want, but it won't change those realities, at least not in the near term.
As a result, I started my learning journey with the goal of educating my students about it. The goal was to give them experiences that would remove the "magic" of AI experiences and develop a more active, critical, and engaged mindset when using AI tools. I told the story here (https://www.youtube.com/watch?v=DRyWDNjRWaw) and wrote about the pedagogical method here (https://mikekentz.substack.com/p/a-new-assessment-design-framework).
With that in mind, I pose a question to you. How, as an educator, should I preserve the integrity of the learning journey at this time without learning about AI in the first place? If you say ban it, how will you detect it? If you say the eye test, well, I've got a bridge in Brooklyn to sell you. And last, what's the alternative? Judge our students through a moral and ethical lens for using it? I can't see that ending well.
Sorry to be blunt, but I have yet to see a practical solution offered by the #resist movement that navigates these problems. Furthermore, it may not seem like it, but I do understand your stance, and we actually feel very much the same way. I suppose the difference is that we have chosen to react differently. No judgment, just my perception.
Last, are you still in the classroom? If so, I'd love to hear how you are handling these problems with your students. It was quite a pickle for me from November 2022 to June 2024. I welcome all new ideas and approaches that may "thread the needle" through a very difficult dilemma.
Thanks for engaging,
Mike
Hi Mike, Thanks for your message. I agree with you about 14-year-olds. I taught full time at the high school and middle school levels from 2006 to 2024, and I saw kids reading less every year. They don't have stamina for it. I don't see how they will catch up on reading. I think it's clear that phones and other screens are a big part of this, and now we have AI.
I worked really hard on my Master's and worked every year to be as good a teacher as possible. I see more than ever that good writing requires good thinking, and I reject AI precisely because it results in less thinking. I don't see how that will be good for students long term.
I admit I am also offended by how people think AI can just write for us and it's no big deal. I also feel like it belittles all the work we do in the classroom. It's like they are saying, "Who needs English teachers?"
Frankly, I am also unwilling to suddenly learn an entirely new subject: AI. I retired from full-time teaching last June in part because I wasn't going to comply and use AI. I am glad I was close enough to retire; I was 62. If I had been younger, I probably would have stayed and tried to deal with it, but I still wouldn't have allowed my students to use it.
In the last two years, I made my sixth graders do all essay writing in class. I even made them write rough drafts by hand. They didn’t love it, but that didn’t bother me.
The AP English classes still have hand-written essays. I wonder if scores will start going down as students write more and more essays with AI throughout the year.
If I were still in the classroom full time, teaching high school and college again, I would probably check more of their sources and then bust them if they used AI. If they didn't quote or paraphrase correctly, I might be able to get them on that.
I had my sixth graders read negative articles about AI, partly because of my opinion, but also to give them some balance. I know they weren't getting that anywhere else. Will it matter? Probably not, but I tried.
I know it’s not going away, but I am enjoying seeing AI fans trying to defend it as there are more and more reports of it hallucinating and making mistakes. It feels like a bubble that just might pop. I can only hope.
Take care
Maybe faculty are confident in their students' creativity and intelligence. Or maybe they want "freedom" that comes from reading authors you can name, versus mashups. Or maybe "freedom" means they object to billionaire AI tech dudes standing by an authoritarian, lips sealed, while he censors history and what students can read or find in the library?
Hey Janet,
That certainly would be a nice reality to live in, but in my view, it is being increasingly reduced to rubble.
The mashups you describe are everywhere and are likely only to proliferate. I absolutely agree that students have a great deal of creativity and intelligence, but they are also human. They experience temptation the same way anyone else does. They procrastinate, are sometimes bored with an assignment, or are sometimes simply too fatigued to engage with an assignment or subject area in the way we want them to.
As a result, they use and lean on AI in ways that we likely would not want them to. As faculty, we can certainly lay out the dangers and risks of using AI (there are many), but without modelling active and critical engagement with AI systems, students are left in the dark.
Students are also desperate for guidance when it comes to AI use (statistics below). Not just because they are aware that it could be a major part of the future of work, but because they want to use it well - meaningfully - and not hurt themselves in the long run. They understand the dangers exist, but are looking to adults to show them how to navigate them.
1) https://www.govtech.com/education/k-12/survey-k-12-students-want-more-guidance-on-using-ai?utm_source=chatgpt.com
2) https://campustechnology.com/articles/2024/08/28/survey-86-of-students-already-use-ai-in-their-studies.aspx?utm_source=chatgpt.com
In my view, we as educators owe it to them to learn how to do that, and subsequently share meaningful engagement strategies with students to help them protect their cognitive development and navigate the thorny world into which they have been thrust.
And with respect to the censorship of history, I agree it is a very serious problem. To me, that is another reason to edify students with respect to AI use now via a critical and actively engaged lens - rather than leave them to their own devices.
None of this is meant to be confrontational. As I have written and spoken about many times before, AI is extremely dangerous - for more reasons than I can name here. I think of it like cigarettes, alcohol, or driving a car. We don't let students get behind the wheel of a car without direct instruction. We recognize the dangers of cigarettes and alcohol and put appropriate age limits and restrictions around their use.
We need to get to the same place with AI, but we won't get there by leaving the education of our young people to Sam Altman. We need to pick up the mantle and be a part of the conversation. We need to learn how to drive the car, so we can teach young people the safety mechanisms that will save their lives.
In this way, learning about AI is actually a subversive act. For those who are frustrated about the state of the world and the power that AI companies have over our culture and politics, I say, "Know thy enemy." Yes, those companies will benefit financially from your usage in the early days. But over the long run, as you embed a deep and critical understanding of these systems, those oligarchs will lose power: the next generation of consumers will come of age fully aware of the risks, able to break down power structures and compose new regulations and requirements, whatever they may be, because of what we taught them.
I sense that my post may have come across as a criticism of teachers who are against AI. If that is the case, I apologize. The goal was to speak directly to the PD leaders and administrators who are seeking ways to engage faculty with AI so we can protect our students. That said, I recognize that my rhetoric may have suggested otherwise.
I hope this response can be received in the spirit in which it is intended. We all want to change the world; we just disagree on how to do it. If we can recognize that we are largely in a similar boat, just paddling in different directions, we might better develop meaningful communications that get us on the same page.
Thanks for engaging,
Mike
Thanks Mike.
I don't think it's entirely rubble quite yet. While I no longer teach, I am in conversation with high school and college students who are excited about reading, writing, and science and not at all into AI, so I have not given up hope that human intelligence and creativity will prevail.
I certainly hope today's students come to realize that the oligarchs would be happier if they didn't read (ban the books!), think, discuss thorny issues, or write something in their own voices.
In earlier times we were concerned about cut-and-paste plagiarism and paper mills, especially since I taught online (Master's and PhD levels). I wrote about and taught with experiential, collaborative, project-based, service-, and active-learning approaches, yes, in online courses. It is hard to spit out or copy a team project that involves original research and documentation of the process. See this handbook about designing, teaching, and assessing collaborative learning: "Learning to Collaborate, Collaborating to Learn."
I think the same issues are present today, and I still advocate for this kind of instruction, whether online, offline, or hybrid. Students can learn the subject matter in addition to the ability to communicate, make decisions, compare and contrast different sources, collect original perspectives through observations or interviews, come up with original solutions to a problem, and then present and defend their work. These are skills students will need at work and as members of society.
In case you missed it, you might also like to see: "Finding Your Voice in a Ventriloquist’s World – AI and Writing."
https://www.routledge.com/Learning-to-Collaborate-Collaborating-to-Learn-Engaging-Students-in-the-Classroom-and-Online/Salmons/p/book/9781620368053
https://scholarlykitchen.sspnet.org/2025/01/28/guest-post-finding-your-voice-in-a-ventriloquists-world-ai-and-writing/#comments