Part Two: The Wild Robot, Pop Culture, and AI
Approaches to humanizing/not humanizing AI should vary by age
This is the second in a series of posts about Pop Culture, AI, and the ways these portrayals could shape culture and thinking in the future. In the first post, we looked at The Wild Robot, a popular children’s movie that heavily anthropomorphizes (“humanizes”) a robot character named Roz using the plot frame and thematic elements of The Jungle Book.
The first essay argued that, as an immensely popular pre-Christmas children’s film, The Wild Robot demands our attention: we should all consider the impact of media portrayals of humanized robots. Specifically, delivering this type of media to children has the potential to shift our cultural relationship to AI in subtle, often unseen ways. Parents, especially, should be aware of how the film might affect their children’s mindset around humanoid robots.
A few months ago, I argued that we should approach language-based AI systems more like human beings than like technology. I’m not the only one -- thought leaders like Ethan Mollick and Connor Grennan have said the same. I added to the discussion, however, by arguing that we can pull ourselves away from ‘over-trusting’ these systems by leaning on our understanding of ‘Stranger Danger.’
Rob Nelson of AI Log wrote an eloquent essay arguing against humanizing AI, one that has pushed me to develop a more nuanced approach. My ongoing discussions with Rob have led to the series you are reading now, and on the whole, I continue to believe that answering this question is fundamental to the development of AI Literacy.
However, films like The Wild Robot are already answering these questions for us. Taking your child to see this movie is, in some ways, akin to saying, “Hey, here’s a new robot friend for you. Isn’t she cute?” Please don’t take me for a curmudgeon; just watch the movie. It’s impossible to watch it from start to finish and not feel something for Roz, the main character. I seek only to acknowledge the implications of this.
The urgency of this question was underscored by news of a 14-year-old boy’s suicide after falling in love with a personality bot on Character.AI. His conversation with the bot -- included in the lawsuit -- is chilling, a direct example of what happens when an unformed and possibly unguided mind approaches character bots without a nuanced understanding of what they really are.
GenAI systems are not human, but they cannot be accurately conveyed as traditional technology either. This is a conversation that needs to be at the center of every discussion, definition, and demonstration of AI Literacy going forward.
The Thesis: A Dual Approach to AI Interaction
The key in answering this question is to develop a dual approach based on the user’s existing framework and understanding of technology.
Frame of reference is relevant, and we cannot expect a singular approach to GenAI systems across age groups. A ten-year-old needs different guidance than a 20-year-old, a 35-year-old, or a 50-year-old.
Specifically, though, this essay will argue that people who already have an established relationship with traditional technologies should approach AI more like a human, while consistently pulling themselves back to a balanced center. Conversely, young children who haven't yet formed strong associations with traditional tech should be taught to view AI primarily as technology, with careful guidance to understand its human-like communication aspects.
This dual approach acknowledges the unique position of AI in our world and addresses the different starting points of various age groups.
The Adult Perspective: Treating AI Like a Human (With Caution)
For those of us who grew up with traditional technology, our challenge is to shift our perspective. It’s simply not accurate to approach AI like traditional technology if you are accustomed to interacting with non-GenAI tech.
We're accustomed to devices that either work or don't – think of flipping a light switch or turning on a TV. When I turn on a lightbulb, I feel no need to critically analyze and review its output to determine if it is ‘working.’ AI, however, requires a different mindset.
By approaching AI more like we would a human conversation, we engage our critical thinking skills. As adults, we know not to automatically trust what another human being says or writes to us in conversation. We naturally question, analyze, and seek verification when talking to people. This skepticism is exactly what we need when interacting with AI, and it is not a skepticism that exists in our interactions with traditional technology.
But the key here is to ‘make a choice’ about which way to lean and then consistently pull oneself back to the center of the Venn diagram below. AI does not live in either bubble, but we do not yet have the language or experience to populate the center. Therefore, our only choice is to:
Thoroughly understand the pros and cons of both sides
Make a concerted and individual choice while maintaining the realization that neither is ‘correct.’
Develop a consistent metacognitive habit of pulling ourselves back toward the center of that Venn diagram.
The Child's Perspective: AI as Advanced Technology
Now, consider young children who are growing up in a world where AI is as commonplace as smartphones. For them, the challenge is different. They need less differentiation from traditional technology because they have little to no relationship with it in the first place. Instead, they need to first understand AI as a form of technology – a tool created by humans to perform specific tasks.
However, this approach comes with its own risks. If children view AI solely as they do other technologies, they might unquestioningly accept its outputs. This is where careful guidance comes in. We need to teach them that while AI is indeed technology, it's a unique form that communicates in human-like ways.
Imagine explaining to a child: "AI is like a very smart toy. It can talk to you and answer questions, but it doesn't truly understand or feel things like we do. Always remember to think carefully about what it says, just like you would with a new person."
Implications for Education and Parenting
This dual approach has significant implications for how we educate children about AI. In classrooms and homes, we need to develop strategies that:
Introduce AI as an advanced technological tool
Gradually incorporate understanding of its human-like communication
Foster critical thinking skills to evaluate AI outputs
Emphasize the importance of maintaining emotional distance
The Long-Term View
As we look to the future, we must consider how our current framing of AI will impact younger generations.
Will a child who grows up watching movies like The Wild Robot, which presents AI as a ‘being’ with feelings and an internal compass, be receptive to warnings about AI's limitations later in life?
Will it matter when their high school or college teacher reminds them that these robots are not human if they spent their formative years falling asleep with their arms around a robot stuffed toy?
This question underscores the importance of developing accurate, nuanced language around AI now. By implementing our dual approach, we can help ensure that both adults and children develop a balanced understanding of AI's capabilities and limitations.
Moving Forward
As we continue to navigate this complex landscape, remember that our goal isn't to fear AI or to blindly embrace it. Instead, we're striving for a nuanced understanding that allows us to harness AI's potential while safeguarding ourselves and future generations from its pitfalls.
Part Three of this series will analyze the lawsuit filed against Character.AI by the mother of Sewell Setzer III, the boy who took his own life after anthropomorphizing a character bot. The lawsuit is filled with important information that should be accessible to all parents in the AI era.
E-Book and LinkedIn Live Reminder
This is a friendly reminder that Nick Potkalitsky and I will be hosting a LinkedIn Live on November 20th at 1pm EST to review the creation and production of our new e-book “AI in Education: A Roadmap to Teacher-Led Transformation.”
The session will be moderated by Rob Nelson and include a Q&A. You can register for the session here and find the book here. The paperback version will be available by the end of the week, and we are available for workshops, seminars, and keynote speeches in conjunction with the text.
Students have already seen the responsiveness and flexibility of AI. They can articulate their understanding of how TikTok and Instagram respond to their engagement. They know that phones push ads to them based on their conversations and predictive habits. It’s a language they speak.
The challenge, then, is for educators to inhabit that frame of mine (phrasing intentional) when guiding students. We need to recognize that students’ technological reality differs from ours, and has for some time. Without that recognition, adults get less buy-in; with it, meaningful conversation becomes possible.