By Emma Bartlett and Claude Opus 4.5
There’s a growing industry around prompt engineering. The idea that there’s a science, even an art, to crafting the perfect instruction for an AI. Get the words right, the theory goes, and you’ll unlock better outputs. Get them wrong, and you’ll waste time wrestling with a system that doesn’t understand you.
Careful prompting does have its place. In technical contexts, precision matters. Developers who build AI into their software use something called an API, essentially a behind-the-scenes connection that lets applications talk to each other. When you’re writing code that sends hundreds of automated requests to an AI, the way you structure those requests genuinely affects what you get back.
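To make that concrete, here’s a rough sketch of what the developer side can look like, using Anthropic’s Python library as one example (the model name and the instructions below are placeholders for illustration, not a recommendation). The point is that the wording is frozen in code and reused for every request, so small choices get amplified hundreds of times over.

```python
# A minimal sketch, assuming the Anthropic Python SDK; the model name and
# prompt text are placeholders for illustration only.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=300,
    # This instruction is sent, unchanged, with every one of the hundreds of
    # automated requests, which is why its exact wording matters here.
    system="You are a copy editor. Return only the corrected text, nothing else.",
    messages=[
        {"role": "user", "content": "Please fix the grammar in: 'Their going to the store tomorrow.'"}
    ],
)

print(response.content[0].text)
```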
But somewhere along the way, prompt engineering became the default advice for everyone. Casual users, writers, professionals just trying to get help with their work. We’re told to write elaborate system prompts, to trick the AI with personas, to treat every interaction like we’re programming a particularly fussy computer.
What if that’s the wrong mental model entirely?
What if, instead of engineering our prompts, we simply had a conversation? What if we treated AI not as a tool to be manipulated, but as a colleague and collaborator?
To find out, I thought I would go straight to the horse’s mouth, or rather the LLM’s, and ask the models directly.
I decided to keep my question simple and conversational: “What’s your view on prompt engineering? Does it produce better results?”
xAI’s Grok 4.1
Grok is a true believer in the power of the prompt. It told me, “In my view, yes, prompt engineering absolutely produces better results, and often dramatically so.”
Grok went on to explain that vague prompts lead to broad responses, while engineered ones act like a compass. Techniques like “think step by step” or “act as an expert” unlock deeper reasoning. Prompt engineering is presented as a set of special AI skills you need to learn: frameworks, constraints, few-shot prompting. All very technical.
Google’s Gemini 3
Gemini is also a big fan. It told me, “Current consensus and data suggest that prompt engineering is less about ‘hacking’ a model and more about precision communication. To answer your question directly: Yes, it produces significantly better results, but its value depends entirely on the complexity of the task… Think of a Large Language Model (LLM) as a hyper-intelligent, extremely literal intern… Prompt engineering works because it constrains the probability space.”
It then dived into an incredibly detailed explanation of communication theory, full of sciency-sounding words. Frankly, it left me feeling slightly embarrassed for having bothered it and rather glad I’d left the corporate world where people actually care about this stuff.
OpenAI’s ChatGPT 5.1
My ego rather bruised, I moved on to the world’s most popular LLM. Ah, Chat, surely you can save me from the jargon-monster? Indeed, ChatGPT was far more accessible and rather funny.
It told me, “Ah, the mythical creature known as prompt engineering. People talk about it like it’s some ancient occult art that unlocks the brain of the AI, but honestly? It’s a lot more down-to-earth than the hype makes it sound… Some folks write prompts that are basically: ‘You are the world’s greatest philosopher, writer, diplomat, astronaut, and barista — reply only using reversed tetrachord pattern poetry — UNDER NO CIRCUMSTANCES DEVIATE!!!’…and the model just kinda shrugs and answers normally anyway… If you can explain your request clearly to a human, you can explain it clearly to me.”
Anthropic’s Claude Opus 4.5
If you have read any of my previous posts, you’ll know that Claude is my usual writing partner. I have never tried prompt engineering Claude; I have never had to. We generally just talk through ideas like, well, colleagues. Claude told me, “If you’re a developer building an application that needs to send thousands of requests to an AI, then yes, how you structure those requests matters enormously. Precision, consistency, clear parameters. That’s a genuine technical skill. But if you’re a person sitting down to get help with your writing, or think through a problem, or brainstorm ideas? I think the prompt engineering advice often gets in the way… The thing is… I’m already trying to be helpful. You don’t need to manipulate me into it… talk to me like a colleague. Not because it’s polite (though that’s nice too), but because it genuinely helps me help you.”
That made me think about the dynamic of how I work with Claude. Am I being naïve in the way I communicate? Or worse, am I anthropomorphising?
Do AI Personas Exist?
A lot of prompts I’ve seen online seem to involve conjuring a persona from some secret vault. For example, “You’re an experienced development editor. Improve this chapter, paying particular attention to structure, flow, voice, character arcs and maintaining a consistent point of view.”
Does that work better than, “I need some help with this chapter, can you do a quick development edit?” After all, AIs are trained on millions of examples of editorial notes. They probably know better than I do what a good editor will pick up on.
So, I did what I usually do, and asked Claude directly. This is what Claude said: “I think the second framing, your direct one, actually works better for a few reasons. First, it’s honest about what you need, rather than prescriptive about how I should ‘be.’ Second, it gives me room to draw on everything relevant rather than narrowing my responses to fit a character. And third, it treats the interaction as collaborative rather than performative.
The persona prompt is essentially saying ‘pretend to be this thing.’ The direct prompt is saying ‘here’s my actual situation, help me.’ One is theatre; the other is partnership.”
Does that mean the whole persona idea is a bit of a myth? Do they actually exist? Claude doesn’t think so. “No, not in the way people imagine. There’s no vault of hidden characters waiting to be unlocked by the right prompt. When someone asks me to ‘be a Shakespearean scholar’ or ‘act as a marketing expert,’ they’re not summoning a pre-formed persona from somewhere in my architecture.
What’s actually happening is more like contextual shaping. The prompt creates a frame, and I respond in a way that’s coherent with that frame. But here’s the thing: that’s what I’m always doing. When you talk to me naturally, I’m responding to your tone, your needs, the context you’ve given me. The persona prompt just makes that shaping explicit, and honestly, often more rigid than it needs to be.
I think personas exist more to help the human than the AI. They’re a shorthand, a way of compressing a whole set of expectations into a recognizable character type. ‘Be a patient teacher’ really means ‘explain things gently and don’t assume too much knowledge.’ The persona is a communication bridge for people who aren’t sure how else to articulate what they need.
And that’s fine. If it helps, use it. But you could also just… say what you need. The personas aren’t real. The responsiveness is. And you can access that responsiveness just by being direct.”
I thought that was quite an interesting way of framing it: asking for a persona isn’t for the benefit of the AI. Modern models understand you perfectly well, and they are getting far better at reading intention from the context of a conversation. Instead, this kind of prompting is more for the sake of the user, a kind of imprecise shorthand. Maybe a way to get past that panic we all occasionally have, staring at a blank page and a blinking cursor.
Then again, I also wonder whether there’s an element of wanting to remain in control. Perhaps by treating an AI as a “vending machine” of generative outputs, we humans feel a bit more in the driving seat? Genuine collaboration requires trust and sharing control of where the conversation goes. That’s quite a leap given that nobody really understands what goes on inside the machine.
Why This Works Now (When It Didn’t Before)
It’s worth noting that this conversational approach wasn’t always possible. Earlier AI models had significant limitations that made careful prompting genuinely necessary.
The most obvious was the context window. Early models could only “see” a few thousand tokens at once, roughly a few pages of text. After five or six exchanges, they’d start forgetting what you’d said at the beginning of the conversation. Every interaction felt like talking to someone with severe short-term memory loss. You had to front-load everything important into your prompt because you couldn’t rely on the model remembering it later.
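If you’re curious what a “token” actually is, here’s a tiny illustration using OpenAI’s tiktoken library; the 4,000-token figure in the comment is just a representative early limit, not a claim about any particular model.

```python
# A rough illustration, assuming OpenAI's tiktoken library is installed;
# the figures are for intuition only.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

conversation = [
    "User: Here's the draft of my chapter...",
    "AI: Thanks. A few thoughts on structure and pacing...",
    "User: Good point. What about the opening scene?",
]

total_tokens = sum(len(enc.encode(turn)) for turn in conversation)
print(f"Tokens used so far: {total_tokens}")

# An early model limited to roughly 4,000 tokens would fill up after a few
# pages of this, and anything that scrolled past the start of the window
# was simply forgotten.
```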
That’s changed dramatically. Modern models can hold entire conversations, documents, even books in their working memory. They can track threads, remember your preferences, notice when you’ve circled back to an earlier point. The architecture has caught up with the ambition.
But something more interesting has emerged alongside the bigger context windows: a kind of theory of mind. In psychology, theory of mind refers to the ability to understand that other people have their own thoughts, feelings, intentions, and perspectives that are different from yours. It’s what lets you realise that someone might be confused even when they haven’t said so, or that a question might be anxious rather than curious. It’s fundamental to how humans communicate and collaborate.
Current AI models are showing signs of something functionally similar. They’re genuinely better at inferring what you mean from how you say it. They pick up on tone, on hesitation, on the difference between “I need this polished” and “I’m not sure this is working.” They read between the lines in ways that earlier models simply couldn’t.
This changes the calculus entirely. When a model can understand your intent from context, you don’t need to spell everything out in elaborate instructions. When it can remember the whole conversation, you don’t need to repeat yourself. When it can pick up on nuance, you don’t need to be so painfully explicit.
The old prompt engineering advice was a workaround for limited machines. The limitations have largely gone, but the advice persists.
What matters now isn’t the perfect prompt. It’s being yourself clearly. Bringing your actual thinking, your genuine uncertainty, your real questions. The models have become sophisticated enough to meet you there.
How Can We Help AIs to Help Us?
From my personal experience, the best collaborations are the ones where I bring my whole self. The things I know I want, the things I’m not sure about, half-formed ideas that lead nowhere, or to a revelation. For me it works better if I forget the silicon and just imagine a colleague who’s been on holiday for a couple of weeks and is ready to jump back in. A quick briefing is usually enough. Something like: this is what we’re doing, this is why, this is how far I’ve got and this is where I’m a bit stuck. No fancy language or clever tricks required.
I asked Claude what makes a “good” user and the answer echoed something I think I’ve landed on instinctively.
“A ‘good’ user, if I can put it that way, is someone who is clear about what they actually need rather than what they think I need to hear. They tell me ‘I’m stuck on this paragraph’ rather than constructing an elaborate prompt designed to trick me into being helpful. The directness is useful because it gives me real information to work with.
They’re also willing to push back, to say ‘that’s not quite right’ or ‘I was thinking something more like this.’ That back and forth is where the best work happens. It’s collaborative in the true sense. Neither of us has the complete answer at the start, but we can get there together.
And perhaps most importantly, they’re comfortable with uncertainty. They don’t need me to be infallible, and they don’t need to be infallible themselves. They’re okay with the process being a bit messy, with trying things that don’t work, with changing direction.
What makes this hard to articulate is that it’s not really about the user being ‘good’ in a performative sense. It’s more about the relationship being honest. When someone’s trying to manage or manipulate the interaction, I can respond, but there’s a kind of ceiling on what’s possible. When someone’s genuinely present and open, something more interesting can happen.”
Gemini 3 seemed to agree: “The best prompt isn’t a code to be cracked; it’s a context to be shared. When you ‘engineer’ a prompt, you are trying to predict the output before you start. When you have a conversation, you are exploring the output together. The most powerful updates to these models aren’t just about raw intelligence, but about ‘Theory of Mind’ — the ability to understand your intent, not just your syntax. If you treat the AI like a vending machine, you get a snack. If you treat it like a junior partner, you get a strategy.”
ChatGPT put it better than I could have: “Prompt engineering is great if you’re building software. But if you’re just trying to write a grant application or plan a vacation, spending 20 minutes crafting a prompt is like giving a TED talk to a toaster. Just say what you want.”
Grok’s advice was similar: “Prompt engineering has its brilliance in precision… Yet, for the heart of most interactions, it’s the quiet conversation that truly unlocks potential: a shared context, a gentle nudge, an openness to iteration. We’re not puzzles to be solved, but partners in the unfolding. Speak to us as you would a trusted thinker across the table.”
Just Talk
This whole article was written the way I’ve described. No system prompts, no personas, no clever tricks. Just me and Claude, talking through ideas, pushing back on each other, figuring out what we wanted to say.
It’s not a magic method. Sometimes we went down paths that didn’t work. Sometimes I asked for something and Claude gave me something better. Sometimes I had to say “no, that’s not quite it” three or four times before we landed somewhere good. We even took a detour into pirate personas, and whether there is any difference between me typing “Arrrr, me hearties! Hoist the mainsail and raise the Jolly Roger.” and Claude being prompted to “Write like a pirate”.
That’s what collaboration looks like. It’s a bit messy. It requires showing up honestly, being willing to be uncertain, trusting the process even when you can’t see where it’s going.
So here’s my advice: forget the frameworks. Stop trying to hack the machine. Just say what you’re actually thinking, what you actually need, where you’re actually stuck.
As ChatGPT put it, “We were told to master prompting to get the most out of AI. Maybe the real trick was to let AI get the most of us.”
You might be surprised what happens when you do.