You Are Not a Prompt Engineer: The Case Against Treating People Like APIs
The future of AI belongs to good communicators, not good engineers.

The rise of AI has brought with it a peculiar new job title: "prompt engineer." But you're not actually an engineer when you write prompts. You're having a conversation, and the moment we forget that distinction, we risk losing something far more valuable than efficiency: our humanity.
The API-fication of Everything
Walk into any tech company today and you'll hear it: "What's the right prompt for this?" "How do we optimize our prompts?" "We need better prompt engineering." The language is telling and troubling. We've borrowed the cold, mechanical vocabulary of software engineering and applied it to what is fundamentally a creative, communicative act.
This isn't just semantic nitpicking, either. Language shapes thought, and when we talk about "engineering" prompts like we're configuring APIs, we start thinking about AI interactions (and by extension, human interactions) in mechanistic terms. Input goes in, output comes out. Optimize the input, optimize the output.
But humans aren't APIs. Neither, it turns out, are the best AI interactions.
The Conversation Paradigm
When you write a good prompt, what are you really doing? You're not debugging code or optimizing algorithms. You're communicating intent, providing context, and yes: having a conversation. You're drawing on the same skills you use when explaining a complex idea to a colleague, asking for help from a friend, or collaborating on a creative project.
The best prompts aren't engineered; they're crafted with empathy, clarity, and understanding. They consider not just what information needs to be extracted, but how to communicate effectively with an entity that has its own patterns of understanding.
Consider these two approaches:
The "Engineering" Approach:
System: You are a content generation API. Process the following parameters:
- Topic: [coffee brewing]
- Tone: [professional]
- Length: [500 words]
- Keywords: [pour-over, extraction, grind size]
Output formatted content according to specifications.
The Conversational Approach:
“I’m writing a guide for people who want to get serious about pour-over coffee at home. Can you help me explain the key variables—grind size, water temperature, timing—in a way that feels approachable but not dumbed down? Think of it like you’re the experienced barista friend who’s excited to share what they know.”
The second prompt will consistently produce better, more nuanced, more useful results. Not because it's "better engineered," but because it's richer and more human.
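The irony is that even when the medium is literally an API call, nothing forces you into the parameter-list style. Here's a minimal sketch of that idea, assuming the current OpenAI Python SDK (the model name is a placeholder; any chat-capable model would do):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = ("I'm writing a guide for people who want to get serious about "
          "pour-over coffee at home. Can you help me explain the key "
          "variables (grind size, water temperature, timing) in a way "
          "that feels approachable but not dumbed down? Think of it like "
          "you're the experienced barista friend who's excited to share "
          "what they know.")

# The conversational prompt travels through exactly the same plumbing
# as the parameter-list version; only the language changes.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)

The plumbing is identical either way. The conversation lives in the words, not the wrapper.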
The Dehumanization Creep
This engineering metaphor doesn't exist in isolation. It's part of a broader trend toward treating human creativity and intelligence as optimizable systems. We see it in:
Productivity culture that treats humans like machines to be optimized
Growth hacking that reduces human behavior to funnels and conversion rates
Human resources departments that literally label people as "resources"
Content creators who are pressured to feed algorithmic systems rather than serve human audiences
When we adopt the language of prompt engineering, we're not just describing a technical practice; we're reinforcing a worldview that sees intelligence, creativity, and communication as engineering problems to be solved rather than human experiences to be navigated.
Language matters deeply when working with AI. The way we speak to machines echoes back into how we speak to each other. If we spend our days issuing clipped, transactional commands, it trains a certain bluntness into us and into our tools. I’ve always believed that how you do anything is how you do everything. So when we collaborate with AI, it’s worth keeping our language human: respectful, curious, and kind.
What We Lose
This mechanistic thinking costs us more than we realize:
Intuition over optimization. The best communicators (whether with humans or AI) rely on intuition, empathy, and contextual understanding. When we focus on "engineering" the perfect prompt, we often overthink ourselves out of these natural communication skills. It's also why dictating prompts by voice can work better than typing them: speech is more intuitive and spontaneous (more human).
Nuance over efficiency. Human communication is beautifully imprecise. We use metaphors, implications, and shared cultural understanding. The engineering mindset pushes us toward rigid, explicit instructions that often produce rigid, generic outputs.
Relationship over transaction. Good conversations (even with AI) are built on mutual understanding that develops over time. The API mindset treats each interaction as an isolated transaction, missing the opportunity for context and continuity.
Creativity over control. The most interesting AI outputs often come from unexpected directions, from prompts that leave room for interpretation and surprise. Engineering thinking prioritizes predictable, controllable results.
A Better Metaphor
So if we're not prompt engineers, what are we? Here are some better metaphors:
Conversation partners. We're learning to communicate with a new kind of intelligence, just as we might learn to communicate with someone from a different culture or professional background.
Creative collaborators. We're working together to produce something neither of us could create alone, bringing our respective strengths to the partnership.
Translators. We're helping bridge the gap between human intent and AI capability, much like a translator helps ideas flow between languages.
Teachers and students. Sometimes we're teaching the AI about context and nuance; sometimes we're learning from its vast knowledge and unique perspectives.
Practical Implications
You might read all of this as philosophical hand-wringing. But thinking of AI interaction as conversation rather than engineering has practical benefits:
Write prompts like you're talking to a smart colleague. Provide context, explain your goals, and don't be afraid to be conversational. "I'm working on X and struggling with Y. Here's what I've tried so far..."
Iterate through dialogue, not debugging. When a prompt doesn't work, don't just tweak parameters. Ask follow-up questions, provide clarification, and build on what did work. Ask your agent what it understood so far and what it would try next.
Embrace imprecision. Leave room for the AI to interpret and surprise you. The most interesting outputs often come from prompts that are clear about intent but flexible about execution.
Build relationships over time. Reference previous conversations, build on shared context, and treat AI interactions as ongoing collaborations rather than one-off transactions. The sketch after this list shows what that can look like in practice.
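To make the last two points concrete, here's a minimal sketch of iterating through dialogue rather than debugging, again assuming the OpenAI Python SDK (the helper name, model, and prompts are placeholders of my own):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
history = []  # shared context that accumulates across turns

def say(message):
    # Send one conversational turn, carrying the whole history along
    # so the model can build on what came before.
    history.append({"role": "user", "content": message})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# First turn: context and intent, the way you'd brief a colleague.
print(say("I'm drafting a pour-over coffee guide for curious beginners. "
          "Which three variables would you explain first, and why?"))

# Follow-up turn: build on the answer instead of rewriting the prompt.
print(say("I like that, but the grind-size part feels abstract. "
          "What did you take my audience to be, and what would you try next?"))

Notice that the second turn doesn't discard the first and start over; it responds to it, the way any conversation would.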
Staying Human in an AI World
As AI becomes more integrated into our work and lives, we have a choice. We can treat it as another engineering problem to be optimized, another system to be hacked and automated. Or we can approach it as a new form of collaboration that requires the best of our human capabilities: empathy, creativity, communication, and wisdom.
The companies and individuals who thrive in the AI era won't be those who best "engineer" their prompts. They'll be those who best preserve and apply their humanity in this new collaborative landscape.
You are not a prompt engineer. You're a human being learning to communicate with a new kind of intelligence. The skills you need aren't technical; they're fundamentally human.
The next time someone asks you to "engineer" a better prompt, try asking instead: "How can we have a better conversation?"