I Built an AI Character From My Audience's Survey Responses. She Wasn't Right About Anything. But Arguing With Her Made My Presentation Better.
What happened when I asked AI to embody my audience—and started caring what she thought
Ready for some weird?
About two months ago, I was sitting on a plane, on my way out to deliver a series of talks to faculty and students at The University of Washington’s Graduate Program in Communications Leadership.
I was reading through a set of open-ended survey responses from faculty members about their feelings, experiences, and desires as they pertained to AI in their classrooms.
The sentiments were nuanced and thorough, and I found myself re-thinking my planned approach. The slide deck needs updating, I thought.
But in what way? The responses were so nuanced that leaning in any one direction might pull away from the sentiment of another audience member. Reaching out to one person might mean alienating another.
I wish I could talk to one of them, I thought.
Then, it dawned on me.
I could. I could ask an AI system to embody this data.
So I did. I uploaded the survey responses (anonymized) to Claude and tasked it with synthesizing them into a composite character. Someone with a name, a backstory, professional experience, communication style, even catchphrases. Then I told it to become that character and review my slide deck from that perspective.
I have many takeaways - both personal and professional - from my chat with “Dr. Sarah Chen-Martinez,” Principal Consultant at “Nexus Advisors” and graduate professor in communications. But the rest of this article will focus on the method behind the madness, with a couple of conclusions from what turned out to be a very bizarre experience.
How to Build a Character (And Why That Matters Here)
I’ve written before about conversational authoring - using fiction writing techniques to prompt AI - and that’s what I used to create Dr. Sarah.
In creative writing, you build characters by defining five things:
Character: Who is this person? What’s their expertise?
Motivation: What do they want? Why do they want it?
Setting: Where are they coming from? What context shapes their perspective?
Problem: What challenge are they trying to solve?
Behavior/Style: How do they communicate? What’s their tone?
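For readers who like to keep things structured, the five dimensions above can be captured as a small, reusable record. This is my own sketch, not something from the original workflow; the class and field names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class CharacterSpec:
    """The five fiction-writing dimensions for defining a persona."""
    character: str   # who this person is; their expertise
    motivation: str  # what they want and why they want it
    setting: str     # where they come from; the context shaping their view
    problem: str     # the challenge they are trying to solve
    style: str       # how they communicate; their tone

    def to_prompt(self) -> str:
        """Render the spec as a block of text suitable for a system prompt."""
        return (
            f"Character: {self.character}\n"
            f"Motivation: {self.motivation}\n"
            f"Setting: {self.setting}\n"
            f"Problem: {self.problem}\n"
            f"Behavior/Style: {self.style}"
        )
```

Filling in a spec like this before talking to the model keeps the persona consistent across sessions, instead of re-improvising it each time.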
The good news was that I already had most of this from the survey data. The 25x7 open responses were more than just answers to questions. They contained personalities, perspectives, histories, backstories. So I asked Claude to help me narrow them down.
Here’s the prompt I used:
It responded (in part):
Based on my own reading of the survey responses, this felt accurate and fine. Time to take the next step.
Building the Sparring Partner
“Give this character a name and a backstory,” I wrote. Claude complied.
The coffee mug detail cracked me up. The system even explained why it chose the name Sarah Chen-Martinez. Even so, it didn’t feel deep enough. To feel “real,” Claude would have to consider Sarah Chen-Martinez’s actual linguistic style.
So I asked: how does she actually speak?
I could spend quite a bit of time analyzing how Claude may have come to these conclusions, but that wasn’t the goal. At this point, I felt confident that Claude understood the assignment. Dr. Sarah Chen-Martinez could be the composite voice of my audience, given personality and perspective.
So I told Claude: “Become Dr. Sarah Chen-Martinez and review my slide deck.”
What Happened Next
Honestly, the feedback was not surprising. But it was the “leaning back in my chair” that got me. She was judging me, and even though I knew she was not real, I felt a tinge of ego-stained pride dripping down the back of my neck.
Of course, she did have a point. My slide deck showed a crisis simulator I’d built. It demonstrated what was possible. But it didn’t explain how to actually create one. And I had buried the most interesting aspect of the talk into a trailer for a future session.
So, I started revising, and over the next hour or so, this composite character pushed back on multiple aspects of my presentation. Some of the feedback was sharp. Some of it was impractical. Some of it made me defensive in ways I didn’t expect. And that forced a level of metacognition I didn’t know I had.
What I Thought Was Happening vs. What Was Actually Happening
Here’s what I thought I was doing: Getting feedback from a simulated version of my audience so I could improve my presentation.
Here’s what was actually happening: Creating the psychological conditions that forced me to articulate things I hadn’t fully thought through.
Dr. Sarah would critique something. I’d push back. She’d acknowledge my point but press on her core concern.
I’d have to either defend my choice with better reasoning or admit the gap. That back-and-forth didn’t work because she was right. It worked because I couldn’t hide behind vague ideas when someone was demanding specificity.
So, I found myself arguing with myself, and remembered this insight from Ken Liu, the science fiction writer: “Some of the best AI art experiences are about what the AI prompts in you, rather than what you prompt the AI to do.”
In the end, I re-worked my whole deck based on my conversation with Dr. Sarah. And the next day, it was a success.
But this story also contains a note of caution — one that I will share next week in a follow-up — because I took it so seriously that I actually cared about Dr. Sarah’s approval. This was a level of engagement that did not and does not feel healthy.
The Technique
On the whole, though, this level of depth made me a better, smarter, more critical AI user. If you’d like to give it a try, here’s how:
Gather your qualitative data (survey responses, interview transcripts, customer feedback)
Ask AI to analyze it across specific vectors (Priorities, Concerns, Frustrations, Desires, relevant orientations)
Request a composite character (“If all these responses became one person, who would they be?”)
Give them specificity (name, backstory, communication style, catchphrases)
Activate them (“Become this person and review my [work/deck/proposal/strategy]”)
Then pay attention to what happens in your own thinking as you engage with them.
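If you want to make the workflow repeatable, the steps above can be sketched as a small prompt-assembly function. This is a minimal sketch under my own assumptions - the function name, parameters, and prompt wording are illustrative, not the author's exact prompts - and the three strings it returns would be sent as successive turns in one conversation with whatever chat model you use, so the model retains the character it built.

```python
def build_persona_prompts(
    responses,
    artifact="slide deck",
    vectors=("Priorities", "Concerns", "Frustrations", "Desires"),
):
    """Assemble the three prompts of the composite-character workflow:
    (1) analyze the qualitative data, (2) request a composite character,
    (3) activate that character against your work.
    Returns (analysis_prompt, composite_prompt, activation_prompt)."""
    # Join the anonymized responses into one corpus, separated for clarity.
    corpus = "\n---\n".join(responses)
    analysis_prompt = (
        "Analyze these anonymized survey responses across the following "
        f"vectors: {', '.join(vectors)}.\n\n{corpus}"
    )
    composite_prompt = (
        "If all these responses became one person, who would they be? "
        "Give them a name, a backstory, professional experience, "
        "a communication style, and catchphrases."
    )
    activation_prompt = (
        f"Become this person and review my {artifact} from their perspective."
    )
    return analysis_prompt, composite_prompt, activation_prompt
```

Keeping the vectors as a parameter lets you swap in domain-specific ones (say, “Pedagogical values” for faculty surveys) without rewriting the prompts.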
And maybe, just maybe, the real “AI unlock” has nothing to do with speed, efficiency, or productivity. Maybe it has everything to do with how it forces us to change the way we think.