
I'm finishing an essay arguing against this sort of assignment for reasons that extend my anti-anthropomorphizing LLMs frame to specific educational uses. Part (only part!) of my reason is that the interactions themselves are so uninteresting compared to reading the text. But maybe I'm just using the wrong chatbots? I've spent the most time with Khanmigo, Martin Puchner’s custom GPTs, and DeepAI. The lumps of text I get from them are only occasionally wrong, but always bland. What "personality bots" based on literary or historical figures should I be pretending to converse with?

Very much enjoying our working the same problems from different angles, Mike. If we want to generate traffic, we should start calling each other names and arguing that the future of education depends on whether or not we use chatbots to teach. On the other hand, maybe agreeably disagreeing will take off as the vibe for the new academic year.

Author · Aug 12 (edited)

Hey Rob,

Haha! My answer is too long for a comment thread, but I'll try to be concise (and likely fail).

1) You're not wrong. Many of my students had very bland conversations with the bots. That's okay, in my opinion, because it provided me an opportunity to give feedback on how to ask better questions.

2) Hence my conclusion: it's not the bot that drives the conversation, it's the user. Other students had conversations that were more fascinating (to them and to me) than anything we had ever done in class. It seemed to me that there was an equation for a good personality-bot conversation, something like: "Creativity + Determination (Intrinsic Motivation) == Interaction Worthy of Analysis." These experiences also helped me develop communication, critical thinking, and creativity skills in my students.

3) Many of my students concluded from this experience that personality bots were dumb or boring (to your point about blandness). I viewed that as a good thing. My main motivation for creating this assessment in the first place was to make sure they didn't fall in love with robot girlfriends(!) The assessment was designed to force deep analysis, which in many cases succeeded in revealing the flaws and limitations of the technology. This fostered a healthy relationship with the technology for some of them.

One final point: Messaging to students along the way, "It's not the bot that drives the conversation, it's the user," helped me to reinforce the idea of maintaining agency and ownership in all interactions with LLMs.

Hope that answers your questions. Certainly there is more than one way to skin a cat.


Appreciate the thoughtful reply. As I expected, we are not far off on the essentials here. We're both committed to truly evaluating the potential value, and we absolutely agree on making students a big part of that evaluation. I'll admit that as I prepare to experiment with an LLM this fall with my students, the task of thinking through how to instruct it is forcing me to think differently about how I teach. I'm not sure there'll be a payoff with the LLM itself, but the effort of working through how to incorporate a new tool into my practice is having a positive impact. That reinforces your point about the user driving the conversation.

Author

Yeah, I agree. AI has forced me to rethink so many things. On the whole that has been a good thing. On the margins it feels... wrong. It's almost an extrinsic vs. intrinsic motivation thing. Am I updating the way I do things because I want my classroom to be better? Or am I doing it because the forces of nature and capitalism have decreed that I have to? Even bigger: does it matter? These are philosophical questions with no answers, but worth considering.

FYI, sharing the article below in an effort to reiterate my point about not falling in love with chatbots. I rest a little easier after running this project. I feel ever-so-slightly more confident that the students who did this project now *know* the nature of these personality chatbots - they've explored the upper and lower bounds and are at least somewhat aware of how *stupid* they can be. Maybe they are slightly more prepared for the reality into which they will be thrust. Maybe...

https://www.theverge.com/24216748/replika-ceo-eugenia-kuyda-ai-companion-chatbots-dating-friendship-decoder-podcast-interview


This is a rich and layered piece. I have just a few probes to explore the space between the bot and the student in a self-regulated project. This is where the slippage is in much of the classroom research I’m seeing.

The term I encourage you to examine is “skill.” Beware: it’s a thought trap. The paper is peppered with “skills” to the degree that the word becomes a catch-all for nuances that could be important. Probe 1: Questioning is a skill only within genres of discourse, and even then it is a technique or practice or even a role. Lawyers and doctors ask and respond to non-generic questions; literary critics and historians are in the same boat. Questioning isn’t a skill but a performance, contingent on responsive ears. Probe 2: Critical thinking differs in quality depending on context: epistemologically, axiologically, ethically, creatively.

Taken together, I’m suggesting that AI-mentored tasks ought to be more fine-grained and socially embedded in classroom culture. Invite students to experiment with questions embedded in a role play: a musician, an athlete, a mathematician, a poet. Four students play the same role in their own idiosyncratic ways and then analyze each chat. They trade roles and go again.

Very nice work, Mike. Looking forward to more.
