5 Comments
Apr 4 (edited) · Liked by Mike Kentz

Cool stuff. I think this reframing is very productive. This is the way!!!

Just to pick your brain a bit:

I am wondering how you are thinking about students inputting content into applications like ChatGPT. Once input, that content can become part of a larger training corpus, as students "opt out" of privacy, and ultimately of control over their work, in exchange for various services. I personally have felt very conflicted about these exchanges. I know that ChatGPT offers the option to turn off training-data usage, but then again, we have been burned many times over by Big Tech's claims about privacy.

I personally am waiting for the next round of products that offer a few more assurances about security and privacy before really digging in. I am working with another author on a piece about SchoolAI, a product that seems to be moving in this direction. I am hoping to pilot it in class next week, so I will let you know how it goes. That said, despite those assurances, the application is built on top of OpenAI's architecture.

Here is the rub: our kids need the best AI to have the best pedagogical experiences, only a few companies have that AI on tap, and so we must trust those companies to some extent. This gap in trust between administrators and Big Tech AI is slowing down integration and implementation nationwide.

Author

I ran into this exact same problem a few weeks ago!

I wanted my students to use ChatGPT as a brainstorming partner for a project-based learning activity at the end of reading Romeo and Juliet (I think I told you about this plan). Of course I could not ask them to pay for GPT-4, and when they signed up for GPT-3.5 I took an extra level of care to make sure that their data was not compromised.

The bug that got me was that GPT-3.5 didn't save conversations. Kids chatted with the bot, closed the window, and then came back to find all the information gone! This was problematic not just for them -- they wanted to continue the conversation -- but for me too, because I wanted to grade the prompts in their conversations for evidence of prompt engineering and critical thinking.

After speaking with their parents, a few of them changed the data-sharing settings so that the engine would save their chats. But for the others, the experience was frustrating.

I think you make a great point, and I'm interested to hear about the SchoolAI experience. The kids definitely did not get nearly as much as they could have from GPT-3.5 -- but at the same time, I do not trust tech companies at all with our data, especially our kids' data (they are even more valuable consumers than I am!).


Hmmm... interesting. This is a major issue, something we should definitely do some writing on. Just setting out the challenges and obstacles for others to inspect, as an object of study in search of potential solutions, would be a helpful first step. It could be a nice series of articles, really.


But isn't there a risk that the student just trains it on their voice and has it generate imitatively?

Mind you, I've become more convinced that AI leaves spandrels, or obfuscates with "rambling purple prose", so it's not the worst problem.


Thank you for exploring this!

One thing that comes to mind is that the examples cited as successes are cases where people had a tremendous amount of prior text by highly practiced writers. That seems categorically different from people who do not consider themselves writers and/or are early in their practice (especially given that copying is often part of learning). I'd also be curious what its error rate is, how it calculates it, and how it presents it -- I know Turnitin did a lot of playing with numbers to make certain claims.
