Three Sleepless Nights…And One Embarrassing Moment
Ethan Mollick’s “getting up to speed” with AI is missing something
Before I start, I would like to express gratitude to all my subscribers and anyone who has opened this newsletter, engaged with it through restacks and likes, or shared it with friends and colleagues. These small gestures of validation allow me to attract more paid subscribers and devote the requisite time that it takes to write ideas and connect with the broader community of AI + Education readers and thinkers. Each engagement has an impact, so thank you for your time and support.
"Mistakes are the portals of discovery." — James Joyce
In mid-July, I spent a week at the North Carolina School of Science and Mathematics for a Data Science Summer Institute created and hosted by Taylor Gibson. If you are an educator curious about Data Science, I highly recommend the program – now in its third year.
On the third evening, Taylor and I found ourselves playing shuffleboard at a local brewery. We got to talking about AI Literacy – what it truly means and what one must experience to be considered “literate” in this sometimes-baffling new technology.
I mentioned Ethan Mollick’s “three sleepless nights” requirement, his idea that this is necessary to grasp the full impact of AI on society. Taylor paused and added, “It’s more than that. You also need to screw up to realize how closely you have to check it. It’s like you need three sleepless nights…and one embarrassing moment.”
That line stuck with me. Failures and even embarrassment are essential steps in the learning journey. We know this in other fields, but have we applied it to learning how to work safely and effectively with AI? Is it part of AI Literacy?
Failure and the Ego
Let’s dig deeper. How do mistakes and embarrassment act as catalysts for growth?
Failures challenge our ego, which is often the greatest motivator for change. When our pride is tied to a particular way of thinking or doing, we resist change because it threatens our sense of identity or competence. But when failure confronts our ego, it creates a moment of vulnerability, opening us up to reflection and growth.
This reflection, especially when shared, leads to meaningful progress.
From Risk to Failure to Human Connection
In the context of AI, it’s crucial to recognize that failures and embarrassments stem from a willingness to take risks.
Some educators advocate for a cautious “wait and see” approach regarding AI’s role in education. While there's wisdom in caution, consider this: to truly understand the path forward, we need data. To acquire data, we must run experiments, which inherently involves risk.
The root of adaptation is embracing discomfort. Knowing you might stumble, yet trying anyway, is key.
To foster this “comfort with discomfort,” we must normalize mistakes. How? By sharing them openly. This shows that mistakes are less daunting than we often believe.
Sharing mistakes requires vulnerability, honesty, and transparency—qualities that strengthen human connections. Any classroom teacher can attest that sharing mistakes with students is one of the fastest ways to build trust. Trust powers collaborative learning: without it, the delivery of knowledge, wisdom, or insights falls flat.
Failures leave us with scars, and we often wear those scars with pride. They signify risks taken in pursuit of adventure, experience, or knowledge. After all, nothing ventured, nothing gained.
So what…you want me to purposely fail?
To be clear, I am not arguing that teachers (or anybody) should purposely embarrass themselves. That’s not possible anyway. Embarrassment, by its very nature, includes an element of surprise.
But I would be skeptical of anybody who says they have not made a mistake when using AI. All that tells me is that the person has not used it very much, or they are lying.
Here’s my advice: First, experiment. Be thoughtful and as safe as possible but accept that risk and failure are necessary during times of change. You’ll have to make quick decisions, some of which won’t work out. But you won’t progress by delaying action.
Second, when you make mistakes—share them! That’s the only way we’ll learn. Over time, with enough small mistakes, we’ll figure out how to avoid the big ones.
So, in the spirit of transparency, let me share an embarrassing AI moment from my classroom last year.
Dead Links
It was October 2023. I was in the midst of building a curriculum for a new book I planned to start teaching my Honors students in only a couple of days – Born a Crime by Trevor Noah. I had been excited to add this book to our annual list, but, due to a mix of laziness, procrastination, and a deeper-than-usual analysis of my students' final project for our previous book – an AI-infused project that involved interviewing a chatbot – I had left it to the last minute.
Feeling confident after completing a Prompt Engineering Course on Coursera, I turned to ChatGPT with a research project idea on South African apartheid to accompany the first three chapters. In just a few hours, I generated materials, including research topics, directions, and a starter guide with links, templates, and rubrics. Normally, this would take 5-10 hours, but I was “done” in under three.
A few days later, I handed out the materials, confident my students had everything they needed for a robust research project. For any non-teacher out there, the satisfaction of a well-built activity or curriculum is unmatched.
However, within minutes, a student raised their hand: “Mr. Kentz, these links you gave us are messed up…”
I assumed it was a simple tech issue, but when I checked, three of the eight links were dead or irrelevant. I had only skimmed the first few and trusted ChatGPT too much with the others. This was during the peak of my AI bullishness.
As a teacher, this was embarrassing. I’m supposed to model critical thinking and thoroughness, yet I handed out faulty materials. On this day, it was obvious to my students that I had generated the materials with AI and had not checked them.
I still cringe at the memory – but I can tell you one thing. I always check the links now. Since then, I have saved more examples than I can count of ChatGPT producing links that are either irrelevant to its own summary or dead.
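For readers who want to build the same habit, the check can even be automated. Here is a minimal sketch of a link checker in Python – the function names and the injectable `fetch` parameter are my own illustrative choices, not anything from the original materials. It flags any URL that is unreachable or returns an error status, so you can review a handout's links before students ever see them.

```python
# Minimal link checker: flags URLs that are unreachable or return an
# HTTP error. The `fetch` parameter is injectable so the logic can be
# exercised without live network access. Names are illustrative.
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError


def default_fetch(url, timeout=10):
    """Return the HTTP status code for a URL, or None if unreachable."""
    req = Request(url, headers={"User-Agent": "link-checker/0.1"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return resp.status
    except HTTPError as e:
        return e.code  # server responded, but with an error status
    except URLError:
        return None  # DNS failure, refused connection, timeout, etc.


def find_dead_links(urls, fetch=default_fetch):
    """Return the subset of `urls` that are dead (no response or status >= 400)."""
    dead = []
    for url in urls:
        status = fetch(url)
        if status is None or status >= 400:
            dead.append(url)
    return dead
```

Note that a "live" link can still be irrelevant to the summary the model attached to it – that part of the review still needs a human.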
I do not use Perplexity very often. Maybe it is better. But it is still shocking to me how often ChatGPT provides a summary and a link that are completely disconnected from one another. This reinforces my notion that LLMs are best approached as a very capable stranger. You wouldn’t blindly trust an articulate alien, would you?
Failure and Vulnerability
Here’s the rub: Would I really understand this lesson without having made the mistake? Probably not. I had to take a risk and fail to learn that LLMs require careful review.
Moreover, if we don’t share these moments, we’ll miss out on valuable learning opportunities. Shining a light on our failures, no matter how embarrassing, helps others avoid the same pitfalls. It’s the most human thing we can do.
And don’t underestimate the role of ego in change. My desire not to be embarrassed again has arguably been the strongest motivator in pushing me to critically review LLM-generated content.
So to Taylor Gibson’s point: Gaining AI Literacy—or “getting up to speed,” as Mollick says—requires more than just three sleepless nights. We all need at least one embarrassing moment, too. If you’ve already had yours, share it with someone. You never know, you might help someone else avoid the same mistake.