Stranger Danger: Effective AI Use via Critical Thinking and Humanities Skills
Also, some emotional reactions to OpenAI's Voice Features
We should anthropomorphize AI—not in the sense of considering AI as a human companion or friend, but in the sense of talking to a brilliant stranger.
We all know, through evolution and experience, to be moderately skeptical of strangers. With a human being, what lies beneath the surface matters, and until you know someone you cannot know their true identity, motives, or purpose. It’s important to get to know someone before you trust them.
Approaching an LLM with a similar level of “Stranger Danger” in mind is appropriate. But to do that, you have to view this technology as you would a human: layered, with a purpose that cannot be easily discerned and motivations that are not always clear.
That tends to be a controversial statement. But in my book, AI checks all those boxes: it can shift identities at the snap of a finger, its purpose is at times unnervingly difficult to discern, and it certainly appears to operate in layers. I am not saying it is sentient; I am saying that anthropomorphizing AI as a stranger can be an effective strategy for responsible communication.
Two weeks ago, I would have told you it was going to be very hard to educate the broader populace to approach AI with the appropriate level of stranger danger while still taking advantage of its capabilities. It’s not enough to be wary of this “person”; you also have to think about how to work with it, like a neighbor or colleague who just isn’t going away. Moreover, AI is brilliant, so maybe you can use it well.
But the biggest impediment to embedding this nuance in the broader consciousness is that human beings have evolved to automatically attach value to the creation of language. This is a concept I picked up from Chris Dede at the Harvard Graduate School of Education. In a speech at The Learning Ideas Conference, he pointed out that it’s in our evolution and our DNA to view the creation of language – especially sophisticated language – as important in some way.
Before we had words, we grunted. When the first word was formed, it would have been a major leap in communication and evolution. Then someone spoke the first sentence, used the first synonym, employed figurative language for the first time, and on down the line. From these experiences, we learned that effective sequencing of words is important and valuable.
Now consider AI. LLMs generate language at a staggering pace, with the sophistication of our brightest minds. This leads many to fall into the trap of attaching value to AI-generated language. Crucially, that doesn’t necessarily mean they believe it is accurate; it means they attribute worth to it. The distinction is subtle yet profound: the problem isn’t hallucinations or factual errors, it’s perceived value. We instinctively assume the output is worthwhile and useful.
It’s in our DNA and our evolution, and it’s going to take years to evolve past it.
This cognitive bias remains unnamed because we lack the data and understanding to pin it down, but it is a new challenge we need to address. Unfortunately, we haven’t yet been utterly embarrassed by AI as a society, and embarrassment is what traditionally supplies the ego-based incentives that motivate most large-scale change.
Don’t shoot the messenger. It’s just the way we are.
Case in point: it took January 6th, itself a product of widespread belief in conspiracy theories spread on social media, for our culture to muster enough political appetite to even consider laws that protect young people from the ills of social media. And even then, some of those laws barely have any teeth.
So, what level of embarrassment or catastrophe will it take for us to react to AI similarly? The thought alone makes me shudder.
Pre-Voice Solution
My solution to this impending problem has been to lean into close reading strategies and push for a massive increase in investment in the Humanities. Working with Large Language Models is a conversation (it’s in the name: “language”). It requires good communication skills, written or verbal, and those are Humanities skills.
But close reading Steinbeck or practicing academic essay writing at a faster or broader clip is not enough to stave off this impending doom. You have to close read your own interactions with the bots, to find the sweet spot where Stranger Danger mixes with a cognizance of AI’s brilliance and with collaborative strategies that are both helpful and impactful.
Furthermore, you have to be hyper-critical not only of its outputs but also of your own inputs. We do not naturally think to be critical of ourselves. As Benedict Cumberbatch’s character puts it in the explosive series Eric, “Everybody wants to change the world, but no one thinks of changing themselves.”
I count myself among those people, but here’s my suggestion: annotate your own prompts as if you were grading a student’s writing. The practice often produces insightful 'a-ha' moments. You will realize that you are not practicing what you preach, or what you aim to preach, even if you are an anti-AI’er or a doomer. I see it in my own communications all the time.
It’s not enough to “tell” kids that AI is dangerous and that they should be thoughtful about their communication with it; we have to do it ourselves. Show them. Show them your transcripts, or generate transcripts from the perspective of a fictional student and ask them to annotate the exchange against a series of guiding questions. This was, by far, the most effective strategy I used in my classroom last year to teach my students about responsible and effective use.
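For teachers who want to scale that exercise, here is a minimal sketch, in Python, of one way to turn a fictional transcript into an annotation worksheet. The transcript format, the guiding questions, and the `worksheet` helper are all hypothetical choices for illustration, not a standard tool:

```python
# Sketch: turn a fictional chat transcript into a printable annotation
# worksheet. Transcript format and guiding questions are illustrative.

GUIDING_QUESTIONS = [
    "What is the student actually asking for here?",
    "Does the prompt show skepticism of the AI's previous answer?",
    "Is the student verifying claims, or accepting them at face value?",
]

def worksheet(transcript: list[dict]) -> str:
    """Interleave each student turn with space for annotations."""
    lines = []
    for i, turn in enumerate(transcript, start=1):
        speaker = turn["role"].upper()  # roles are "student" or "ai"
        lines.append(f"[{i}] {speaker}: {turn['text']}")
        if turn["role"] == "student":
            for q in GUIDING_QUESTIONS:
                lines.append(f"    - {q}")
                lines.append("      Annotation: ______________________")
        lines.append("")
    return "\n".join(lines)

# Example: a short fictional exchange.
demo = [
    {"role": "student", "text": "Write my essay on the Dust Bowl."},
    {"role": "ai", "text": "Here is a five-paragraph essay..."},
]
print(worksheet(demo))
```

Note the deliberate design choice: only the student’s turns get annotation space, because the point of the exercise is to critique the inputs, not just the outputs.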
The point is, it’s not enough to model usage or teach them how to write an initial prompt. We have to model the critiquing of usage. That is the next step. Peel back the layers, because depth is the only thing that will save us.
Voice
The voice feature is causing another wave of existential dread here at AI EduPathways. I’m not sure how many more of these I can handle.
It’s making me reconsider my stance on anthropomorphizing AI—even with the focus on the “stranger” angle. I still believe the “Stranger Danger” argument is valid, but I worry that an even broader swathe of people will resist the effort required to interact with AI on this level now that the AI voices sound increasingly human. These voices trigger every cognitive bias we have. It’s an avalanche, and we’re without an escape route.
But there is a silver lining. The voice feature still underscores the importance of communication, though it may shift the focus from writing and reading to speaking and listening. Any writer or writing teacher will tell you that writing and speaking are closely linked—that’s why great writers can give speeches without notes.
So, we can still model how to critique our usage and critique AI interactions in general. In a future dominated by voice interactions, this means sharing audio files and giving feedback based on what we hear.
Regardless of the mode of communication, we’ve got to map responsible communication, teach it to our students, require them to use our map, and then give them feedback – just like any other coaching, training, or mentoring relationship.
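To make that concrete, the “map” could start as something as simple as a shared rubric. Here is a minimal sketch, again in Python, with criteria I am inventing purely for illustration; a real rubric would be drafted with students:

```python
# Sketch: a "grading the chats" rubric as a simple checklist.
# The criteria below are invented examples, not a vetted standard.

RUBRIC = {
    "skepticism": "Questions or verifies at least one AI claim",
    "clarity": "States the task and audience explicitly",
    "iteration": "Revises a prompt after a weak response",
    "self-critique": "Annotates their own inputs, not just the outputs",
}

def score_chat(observed: set[str]) -> str:
    """Report which rubric criteria a chat (or audio) transcript met."""
    met = [k for k in RUBRIC if k in observed]
    missed = [k for k in RUBRIC if k not in observed]
    report = [f"Met {len(met)}/{len(RUBRIC)} criteria."]
    report += [f"  + {k}: {RUBRIC[k]}" for k in met]
    report += [f"  - {k}: {RUBRIC[k]}" for k in missed]
    return "\n".join(report)

# Example: feedback on a chat where two criteria were observed.
print(score_chat({"clarity", "iteration"}))
```

The same checklist works whether the artifact is a written transcript or an audio file; only the evidence-gathering step changes.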
This is the new role of the teacher. This is “grading the chats,” spoken in a different language.
Thinking 10x Harder
Like all of OpenAI’s attempts to corner the market, this new feature is likely to have flaws that may turn people away. Still, this new update only reinforces for me what Dr. Norbert Wiener said in the 1960s: “We are going to have to think 10x harder than we ever have before to ensure that AI does not take over.”
That’s a tough ask, a tall order, and an uphill climb. But I’ve never been interested in the easy way out.