Designing for Agents, Not Just Humans
Design isn’t just for humans anymore. AI agents are stepping into our products not as features, but as participants. This is your guide to designing shared intelligence, where experience is co-created.

Most digital products are still designed as if there's just one protagonist: the human user. Clicks, scrolls, taps, and text inputs form the core of today's user experience. With AI-native systems (products that integrate generative or autonomous agents), that model breaks. Agents are the new users.
This new entity doesn’t only react to our inputs. It acts autonomously. It learns. It remembers. Sometimes it even takes initiative.
This radically changes how we design, and we need to adapt. Even more interesting, we get to redefine what it means to design experiences.
From Human-Centered to Agent-Inclusive
Human-centered design has served us well.
It gave us elegant interfaces, intuitive workflows, and frictionless interactions. We’ve spent years optimizing time-on-task, conversion, and retention, sometimes to the detriment of user well-being. In some cases, our tools became slot machines.
Things are changing now.
As agents become embedded in our tools, we need a design framework that includes both humans and machines, each with agency, goals, and interaction patterns.
I call this agent-inclusive design; others call it AX (agentic experiences).
In this paradigm, we’re not only asking:
“How does the user achieve their goal?”
We’re also asking:
“What does the agent need to help?”
“When should the agent act or stay quiet?”
“How should this relationship evolve over time?”
A great primer: From SaaS to Agents – Mustafa Suleyman
What Actually Changes?
Let’s explore some key shifts in the product design mindset when building with agents in mind.
1. The Interface Is No Longer the Product
In AI-native products, the interface is no longer the whole experience; it’s just the tip of the system. Much of the value is created before and after the user interacts.
In tools like Rewind or Mem, the experience isn’t about searching through files. It’s about how the agent automatically captures, indexes, and surfaces knowledge when it’s needed, even if you didn’t know to ask.
💡Design what users don’t see: data flows, memory models, agent timing, and invisible behaviors. Your “UI” includes nudges, timely surfaces, and automated scaffolding.
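To make “agent timing” concrete, here’s a minimal sketch of a surfacing heuristic. The signal names (`relevance`, `user_in_flow`) and the thresholds are assumptions for illustration, not any product’s actual logic:

```python
def should_surface(relevance: float, user_in_flow: bool,
                   minutes_since_last_nudge: float) -> bool:
    """Decide whether the agent surfaces a suggestion right now.

    Illustrative rule: only interrupt when the match is strong,
    the user isn't mid-task, and we haven't nudged recently.
    """
    return (
        relevance > 0.8
        and not user_in_flow
        and minutes_since_last_nudge > 30
    )

# A highly relevant memory found while the user is idle gets surfaced;
# the same memory during focused work stays silent.
print(should_surface(0.9, user_in_flow=False, minutes_since_last_nudge=45))  # True
print(should_surface(0.9, user_in_flow=True, minutes_since_last_nudge=45))   # False
```

The point isn’t the specific numbers; it’s that timing itself is a designed behavior, not an accident of the pipeline.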
2. Co-Agency Is the New Default
The human user is no longer the sole driver. Agents suggest, summarize, route, and take action. The product has become a living conversation.
Notion AI actively proposes content completions or outlines. The user may still have the final say, but the rhythm of creation becomes collaborative.
Also check out Magician, a Figma plugin that turns text into design, with the agent acting as a creative partner.
💡Design for shared agency: How do you visually or behaviorally indicate who’s doing what? When should the agent ask for approval, and when should it just act?
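One way to make “who’s doing what” tangible is to tag every change with its author and an approval flag, so the UI can render attribution and gate agent actions. A hedged sketch, with all names hypothetical:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class Change:
    """A unit of work in a co-created document, tagged with its author."""
    description: str
    author: Literal["human", "agent"]
    needs_approval: bool = False

def render(change: Change) -> str:
    # The UI signals agency: agent work is labeled and, when pending,
    # explicitly held for human sign-off.
    badge = "you" if change.author == "human" else "agent"
    status = " (pending your approval)" if change.needs_approval else ""
    return f"[{badge}] {change.description}{status}"

print(render(Change("Wrote the intro paragraph", author="human")))
print(render(Change("Drafted an outline", author="agent", needs_approval=True)))
```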
3. You’re Designing Relationships, Not Just Flows
AI agents need to build trust. That trust isn’t built on accuracy alone; it also depends on predictability, respect, and tone, and it needs to evolve over time. You’re crafting an ongoing relationship.
ChatGPT's Memory stores facts about you to personalize responses. This shifts the relationship toward mentorship, assistantship, and maybe even friendship?
💡Design the character, tone, and memory of your agent: Will it feel like a butler, a buddy, a co-pilot? How does it handle failure, correction, or growth?
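Here’s what one slice of that character design could look like in code: a tiny tone matrix mapping situations to response styles. The situations and phrasings below are invented for illustration:

```python
# Hypothetical tone matrix: (situation, user_type) -> response style.
TONE_MATRIX = {
    ("failure", "new_user"): "apologetic; plain-language explanation; offer undo",
    ("failure", "power_user"): "brief; show what changed; offer retry",
    ("correction", "any"): "thank the user; confirm what was learned",
    ("success", "any"): "quiet confirmation; no fanfare",
}

def tone_for(situation: str, user_type: str) -> str:
    # Fall back to the "any" row when there's no user-specific entry.
    return TONE_MATRIX.get((situation, user_type),
                           TONE_MATRIX.get((situation, "any"), "neutral"))

print(tone_for("failure", "new_user"))
print(tone_for("correction", "power_user"))  # falls back to ("correction", "any")
```

Writing the matrix down forces the team to decide, in advance, how the agent behaves when things go wrong.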
Traditional UX vs Agent-Inclusive UX
| Human-Centered UX | Agent-Inclusive UX |
| ----------------------- | ------------------------------------------- |
| User drives interaction | Agent initiates, filters, or suggests |
| UI is the product | Behavior and context are the product |
| Flows are deterministic | Flows are adaptive and emergent |
| Interfaces are visible | Interfaces may be conversational, latent |
| Goals are fixed | Goals may be inferred, negotiated, evolving |
🤔 Want to go deeper? I highly recommend Microsoft’s work on “Agent UX” and LangChain’s UX for Agents.
4. You Must Design What the Agent Sees: Context Engineering
Most agents don't "see" the whole product; they work from a narrow context window: user inputs, memory, and system state. Context engineering is the art of shaping what gets passed in.
This isn’t just prompt tuning. It’s UX for cognition. What you feed the agent becomes the experience.
Here’s an example:
An AI scheduling agent with access to just your calendar behaves very differently than one with context on your focus habits, preferences, or energy levels.
Good agents require good context curation (see the sketch after this list). That means thinking about:
Which inputs are relevant now?
What memory should persist?
How do we protect user privacy and agency?
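Here’s a minimal sketch of that curation step, using the scheduling example above. Everything in it (field names, the consent flag, the relevance filter) is an assumption for illustration, not a real library’s API:

```python
from dataclasses import dataclass, field

@dataclass
class ContextWindow:
    """What the agent is allowed to 'see' on this turn."""
    system_role: str
    relevant_inputs: list = field(default_factory=list)
    persistent_memory: list = field(default_factory=list)

def build_scheduling_context(calendar_events: list,
                             focus_habits: list,
                             habits_consented: bool) -> ContextWindow:
    ctx = ContextWindow(system_role="You are a careful scheduling assistant.")
    # Relevance: pass only upcoming events, not the user's whole history.
    ctx.relevant_inputs = [e for e in calendar_events if e.get("upcoming")]
    # Privacy and agency: habits enter the window only with explicit consent.
    if habits_consented:
        ctx.persistent_memory = focus_habits
    return ctx

ctx = build_scheduling_context(
    calendar_events=[{"title": "1:1", "upcoming": True},
                     {"title": "Old standup", "upcoming": False}],
    focus_habits=["deep work before noon"],
    habits_consented=True,
)
print(ctx)
```

Note that every branch here is a design decision: what counts as relevant, what persists, and what requires consent.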
Tools like LangGraph, RAG pipelines, and memory scaffolding are making this easier, but designers must now contribute to these decisions, not just engineers.
Traditional IA (Information Architecture) is about structuring screens, content, and flows. Agent-inclusive IA also requires:
Knowledge architecture → What context does the agent need?
Signal design → What actions or inputs will guide the agent?
Memory boundaries → What should it retain, forget, or learn? (sketched below)
Behavioral scaffolding → What’s the right balance of autonomy and feedback?
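To illustrate just the memory-boundaries piece, here’s a hedged sketch of a retention policy; the topic categories and rules are invented:

```python
from enum import Enum

class MemoryAction(Enum):
    RETAIN = "retain"      # stable preference worth learning
    FORGET = "forget"      # transient detail, drop after the session
    ASK_USER = "ask_user"  # sensitive: needs explicit consent

SENSITIVE_TOPICS = {"health", "finances"}  # illustrative categories

def memory_policy(topic: str, is_stable_preference: bool,
                  user_consented: bool) -> MemoryAction:
    if topic in SENSITIVE_TOPICS and not user_consented:
        return MemoryAction.ASK_USER
    return MemoryAction.RETAIN if is_stable_preference else MemoryAction.FORGET

print(memory_policy("music", is_stable_preference=True, user_consented=False))   # RETAIN
print(memory_policy("health", is_stable_preference=True, user_consented=False))  # ASK_USER
```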
Pi.ai (by Inflection AI) uses tone, memory, and context windows to craft a long-term relationship. It stands out because it feels genuinely intentional about getting to know you and your motivations.
💡When building products, try to map out both the human experience and the agent experience in parallel.
Real-World Patterns and Experiments
To get started designing for agents:
Prompt maps – Define the kinds of requests your agent should understand
Trust ladders – Design interactions that build (or repair) trust
Tone matrix – Codify the emotional range and personality of the agent
Autonomy sliders – Let users choose how much control to give (see the sketch after this list)
Feedback moments – Create rituals for correction, reflection, learning
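As a sketch of the autonomy-slider pattern, here’s one hypothetical mapping from slider level to agent behavior (the levels and phrasing are assumptions, not any product’s actual design):

```python
from enum import IntEnum

class Autonomy(IntEnum):
    SUGGEST_ONLY = 0    # agent drafts, never acts
    ASK_FIRST = 1       # agent acts only after explicit approval
    ACT_AND_REPORT = 2  # agent acts, then reports with an undo path

def handle(action: str, level: Autonomy, approved: bool = False) -> str:
    if level == Autonomy.SUGGEST_ONLY:
        return f"Suggestion: {action} (waiting for you)"
    if level == Autonomy.ASK_FIRST and not approved:
        return f"May I {action}?"
    return f"Done: {action}. (Undo available.)"

print(handle("archive 12 stale threads", Autonomy.ASK_FIRST))
print(handle("archive 12 stale threads", Autonomy.ACT_AND_REPORT))
```

Whatever the levels are, the key design choice is that the user sets them, and the agent’s behavior visibly changes in response.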
For tools:
Magpie → AI behavior playground
Lamini → Building fine-tuned agents
LangChain Templates → How agent workflows are orchestrated
What Kind of Agent Are You Building?
Just as we once asked “What kind of app is this?”, now we must ask:
Is this agent a co-pilot, concierge, coach, librarian, editor, or teammate? Or something else?
Does it adapt to you or expect you to adapt to it?
Is the agent always on, just-in-time, or on-call?
These choices are design decisions, not just technical ones.
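One way to force these decisions early is to write them down as a tiny “agent charter” before any prompt is written. The fields and values below are hypothetical:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass(frozen=True)
class AgentCharter:
    """Design decisions, captured before any prompt is written."""
    archetype: Literal["co-pilot", "concierge", "coach",
                       "librarian", "editor", "teammate"]
    adapts_to_user: bool  # or does it expect the user to adapt to it?
    availability: Literal["always-on", "just-in-time", "on-call"]

writing_assistant = AgentCharter(
    archetype="editor",
    adapts_to_user=True,
    availability="just-in-time",
)
```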
Check out Google PAIR (People + AI Research): research-backed guides and tools for building responsible, human-centered AI experiences.
Final Thought: Designing a World with Shared Intelligence
We are seeing software evolve like never before. Instead of building static products that wait to be used, we’re building products that are never really “off”. When we build agentic systems, we’re shaping the very fabric of how humans and machines coexist.
So when we design with agents, we must ask:
Is this respectful and ethical? (e.g. % of agent decisions passing bias benchmarks)
Is this aligned with our values? (e.g. # of users flagging tone as off-brand)
Is this a relationship worth nurturing? (e.g. % of feedback suggestions adopted by the agent)
In the AI era, design is no longer about what’s on the screen. It’s about what emerges in use: what learns, adapts, and grows with us. It’s about choreographing intelligent relationships, and that is changing the very notion of what it means to design.
It lives in architecture. It lives in memory. It lives in shared intelligence.
The team shapes the agent → the agent shapes the experience.
And if we get it right, the experience might just shape us in return.
Design (finally) breaks out of the designer’s silo and happily overflows to the whole team. I think the makeup of teams is fundamentally changing as a result, but that’s a post for another day ;)