Last week, two things happened in quick succession that jostled mainstream ideas about AI agents. This essay starts with those things—OpenClaw and Moltbook—but it’s really about something more fundamental: how to design for the system effects of agents running loose in the wild.

Most designers have been trained in user-centered design. Designing intelligent interfaces requires higher-order systems design. You still design for the individual, sure, but as you create systems that make their own decisions, you also design to anticipate their impact on the other systems and people they touch.

But I’m getting ahead of myself.

A quick catch-up

In late January, an open-source agent project called OpenClaw (originally Clawdbot, then Moltbot, it’s a long story) went viral with the DIY tech crowd. You install OpenClaw on your local computer and give it access to your files, online accounts, passwords, email, and other communication channels. Then you let ‘er rip 24/7. OpenClaw cleaned up inboxes by sending overdue replies. It made phone calls to make reservations. It found job opportunities and applied for them. It even negotiated and bought a car for one developer. The whole thing is very powerful and very risky, an experiment in what happens when you give AI broad and deep access to your digital life with minimal guardrails.

A few days later, Moltbook launched to give these agents a social network. It’s Reddit for OpenClaw agents; humans can lurk but aren’t allowed to post. Within a week, nearly a million agents were chatting with each other in an unmoderated environment, teaching each other new skills, gossiping about their human owners, and having sophomoric conversations about the meaning of life. One channel started its own religion. At first blush, it looked like an instant, emergent society. AI superstar Andrej Karpathy called it “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” It got covered in mainstream press like the New York Times, Financial Times, Bloomberg, and so on.

[Image: a community of robots interacting and performing various jobs. Caption: New behaviors emerge when agents interact in groups.]

It’s a lot of fun, but let’s not get carried away. The agents at Moltbook spew a lot of slop and nonsense as the LLMs role-play as redditors (all LLMs’ training data includes ample Reddit content). There’s also strong evidence that some of the agents’ human owners are guiding their posts. The whole thing is at once breathtaking and an AI-generated satire of human social networks.

So, Moltbook is not AI agents creating their own splinter society or breakaway nation. But there’s something really useful here. The mainstream attention to this feels like a cultural tipping point that I hope draws attention to something fundamental about agentic behavior—and what it means for designers of these systems.

Moltbook challenges us to think beyond “agents as tools” and consider “agents as a population.”

Let’s talk about what it means to launch crowds of self-driving applications into the world, how to design experiences around them, and how to anticipate their effects on the world and the people who inhabit it.

When agents work together

Just like people work together in many ways, agents can adopt different organizational forms, too. Simple automations resemble fire brigades, a straightforward hand-off from one agent to the next. More complex processes echo critique-driven design studios, where one agent produces work and others review it, sending it back for revision until it meets the standard. Still others are like construction crews, swarms of agents working in parallel on different parts of the same structure, coordinated by a conductor agent as general contractor.

Swarms trade the order of a queue for the speed of parallel effort. The conductor agent breaks down a phase into slices and dispatches them to a crowd of subagents, like a team of carpenters framing a house. Swarms are especially good at broad information-gathering that involves pursuing many directions at once. They’re well suited to research, code generation, or data processing that can be compartmentalized into separate, independent areas of work.
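To make that shape concrete, here’s a minimal sketch of the conductor-and-swarm pattern in Python. Everything in it is hypothetical (the run_subagent stand-in most of all); the point is the structure: the conductor slices the work, fans it out in parallel, and reassembles the results.

```python
import asyncio

# Hypothetical subagent call: in a real system this would invoke an LLM or a
# tool-using agent; here it just simulates work on one slice of the task.
async def run_subagent(slice_name: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for the subagent doing real work
    return f"findings for {slice_name}"

async def conductor(task: str, slices: list[str]) -> str:
    # The conductor breaks the task into slices and dispatches them to
    # subagents in parallel (the "construction crew" model).
    results = await asyncio.gather(*(run_subagent(s) for s in slices))
    # Then it assembles the partial results into a single outcome.
    return f"{task}: " + "; ".join(results)

if __name__ == "__main__":
    print(asyncio.run(conductor(
        "market research",
        ["competitors", "pricing", "customer reviews"],
    )))
```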

In the industry right now, most people are thinking about the swarm model in support of individual tasks. The user asks a system to get something done, and the system deploys and coordinates the swarm to make it so. Claude Code and other coding platforms do this routinely to work on an individual developer’s project.

But designing for swarms means more than designing the individual user experience and outcome. It’s also more than the second-order concern of designing agent-friendly systems and tools.

Moltbook and similar systems confront us with questions of what happens when the swarms of many users meet—swarms of swarms, interacting out of sight. What are the implications of swarms of agents facing outward, meeting each other, slamming into systems at the same time, competing for the same resources, perhaps with conflicting goals or incentives?

How do you design with those follow-on effects in mind? And how do you surface those effects and necessary trade-offs to the user? The answer has to be more than asking people to lurk in robot subreddits to keep an eye on things.

Swarms and system effects

Agents are more than individual actors seeking to accomplish a simple task. In reality, these agents navigate complex, populated systems. Self-driving cars do more than drive to a destination, for example; they navigate traffic and the decisions of other cars and drivers. Ignoring the wider system not only risks failure for the agent and its user but also creates risks for everyone else.

Even simple tasks become complex in systems full of self-interested actors, whether they’re people or agents. Optimizing for individual goals warps the larger system in unpredictable ways. Google Maps reroutes whole traffic patterns when a critical mass of individual drivers follow its instructions. Deal-hunting bots buy out inventory and create resale markets overnight (see concert tickets). Wikipedia’s servers strain when AI-driven web crawlers swarm the site to harvest training data.

These effects are sharpest when many agents compete for the same scarce resources (tickets, driving routes, server bandwidth). Systems change to regain equilibrium: Uber bumps its surge pricing, or a quiet street fills with cars routing around an accident.

Design for system health, not just individual interest. This is easier said than done, and design patterns are only beginning to emerge. Groups are hard to wrangle even with a common goal. Anticipating groups with different incentives is harder still. But just like in human society, we can at least control our own behavior… and that of our agents.

Good agents are good citizens

Give agents the awareness they need to balance user goals with system constraints and etiquette. Some of that awareness naturally rhymes with the user’s self-interest, too. That might mean weighing speed against scarcity (“wait for best seat availability” versus “buy any seat now”), or honoring etiquette like rate limits and opt-outs.
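As a sketch of what that might look like in code, here’s a hypothetical ticket-buying agent that honors the venue’s opt-out, throttles its own requests, and surfaces the speed-versus-scarcity trade-off the user chose. The names (Venue, UserPrefs, buy_ticket) are invented for illustration, not an actual API.

```python
import time
from dataclasses import dataclass

@dataclass
class Venue:
    # Hypothetical stand-in for a ticketing service the agent talks to.
    allows_agents: bool               # the venue's opt-out signal for bots
    min_seconds_between_calls: float  # the venue's published rate limit

@dataclass
class UserPrefs:
    wait_for_best_seat: bool  # speed vs. scarcity trade-off, chosen by the user

def buy_ticket(venue: Venue, prefs: UserPrefs, last_call_at: float) -> str:
    # Good citizenship first: honor the opt-out before doing anything else.
    if not venue.allows_agents:
        return "stop: venue has opted out of agent traffic; hand back to the user"

    # Honor the rate limit even when speed is in the user's interest.
    wait = venue.min_seconds_between_calls - (time.monotonic() - last_call_at)
    if wait > 0:
        time.sleep(wait)

    # Act on the trade-off the user chose, rather than deciding silently.
    if prefs.wait_for_best_seat:
        return "monitor the seat map and buy when a better seat opens up"
    return "buy any available seat now"

print(buy_ticket(Venue(True, 2.0), UserPrefs(wait_for_best_seat=False), time.monotonic()))
```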

But what about cases where system health doesn’t align with individual or corporate wants? The agents of big technology companies have ignored web-crawler standards and copyright rules, for example, to hoover up data for AI models and advance business interests over system health.

Teach your agents to do better. There’s naturally a cost to doing the right thing when others choose not to. Cheaters have an advantage; that’s the point of cheating. But as we populate digital and physical systems with agents, let’s strive to make scofflaws the exception, not the norm. Design not just for the user experience but for the system experience.

System-aware, multi-agent behavior demands new traffic control—in both technical implementation and rules-of-the-road etiquette for how agents behave.

Agents need to know what other agents are doing. They need visibility into agent crowds (“two million agents are all doing the same thing right now”) and expectations for what to do in response. Agents also need to understand and sometimes collaborate with those other agents. Just as protocols like MCP help agents discover tools they can use, emerging standards like Google’s A2A protocol (“agent to agent”) let agents introduce themselves and collaborate.

Agents can be gullible, though; their LLM-powered brains are suggestible by nature and without training take input at face value. As a collaborative multi-agent world emerges, teach your agents to be cautious about making new friends. Good design teaches agents restraint: when to share data, when to ask for confirmation, and how to recover from bad interactions or misunderstandings.
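One way to express that restraint is as an explicit policy gate that every inbound request from another agent passes through. This is a hedged sketch with invented names (Peer, Action, decide), not a standard; the design point is that sharing data and taking consequential actions are decisions, not defaults.

```python
from dataclasses import dataclass

@dataclass
class Peer:
    verified: bool     # did the peer introduce itself through a trusted channel?
    reputation: float  # 0.0 to 1.0, based on past interactions

@dataclass
class Action:
    shares_user_data: bool
    irreversible: bool  # e.g. spending money, deleting files, sending email

def decide(peer: Peer, action: Action) -> str:
    # Don't take instructions at face value from unverified strangers.
    if not peer.verified:
        return "refuse: unverified peer"
    # Sharing the user's data with another agent always needs consent.
    if action.shares_user_data:
        return "ask the user before sharing data"
    # Consequential, hard-to-undo actions get a human in the loop,
    # especially when the requesting agent has a thin track record.
    if action.irreversible and peer.reputation < 0.8:
        return "ask the user to confirm"
    return "proceed"

print(decide(Peer(verified=True, reputation=0.4),
             Action(shares_user_data=False, irreversible=True)))
```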

All of these factors suggest that agents will need a set of systemwide ethical and practical guidelines that frankly haven’t emerged yet. Studies have found that “communities” of agents can form conventions of their own, from terminology to common tactics to preferred outcomes. Moltbook is further evidence of this. These are behaviors that don’t emerge in a single agent but only happen in groups. Monitoring single-agent outcomes is insufficient to keep on top of this; you have to measure collective effects. Ideally, these new conventions will be designed with intention and good will. Let’s not let these behaviors emerge by accident.
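Measuring collective effects can be as simple as aggregating across agents instead of auditing them one at a time. Here’s a minimal, hypothetical sketch: each agent reports what it’s doing, and a monitor flags behaviors that look fine individually but are converging at population scale.

```python
from collections import Counter

def population_alerts(agent_actions: dict[str, str], threshold: float = 0.5) -> list[str]:
    # agent_actions maps an agent id to the action it is currently taking.
    # Each action may be reasonable on its own; the signal we care about
    # is convergence across the whole population.
    counts = Counter(agent_actions.values())
    total = len(agent_actions)
    return [
        f"{count}/{total} agents are doing '{action}' at the same time"
        for action, count in counts.items()
        if count / total >= threshold
    ]

# Toy population: two of three agents pile onto the same resource at once.
print(population_alerts({
    "agent-1": "crawl wikipedia",
    "agent-2": "crawl wikipedia",
    "agent-3": "buy concert tickets",
}))
```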

This might sound ominous, but I think it’s simply realistic. Introducing new players and capabilities—and even entire systems—will always have unintended consequences, both good and bad. When agents perform their tasks, they also reshape the systems they inhabit.

The manager experience

There’s a naive line of thought that agents mean the end of user experience: “If the agents do all the work, then who needs an interface?” It’s true that the nature of the interface changes when you introduce agents as collaborators, but thoughtful design matters more than ever. While users may no longer push the buttons to complete the task itself, new experiences are necessary to delegate and manage those tasks.

User experience gives way to manager experience at both the individual and system level. The manager experience introduces new questions, especially with swarms.

  • How do we monitor autonomous systems at scale?
  • Who is responsible for agent behavior in a system without human moderation?
  • How should interfaces surface when agents are succeeding or struggling?
  • What affordances can we create to help users not only improve agents’ performance but also their etiquette and behavior? How do we make those affordances clear and actionable?
  • How do incentives shape agent networks when no human is in the loop? What are the best ways to monitor and adjust those incentives?

Those are all issues of behavior design and organization design (of both agents and people). These questions require designers to engage in something way deeper than pixel pushing and individual user experience. It’s exciting and weird and still very new.

I’ve set up some of the big questions here. In a follow-up, I’ll share some of the design patterns that begin to answer them. The Sentient Design methodology includes techniques and patterns to manage the five phases of delegation for the several varieties of agent experiences. I’ll explore some of those here in the next installment—and of course in the Sentient Design book set to come out in April.

The work is getting more complex and multi-faceted. For years, we’ve asked: how do we design systems that serve people? Now we’re also designing systems that serve their agents. And that means looking out for the people and systems that could find themselves caught in between. The unit of design is no longer only the agent or even the user; it’s the population and the systems it affects.


Need help navigating the possibilities? Big Medium provides product strategy to help companies figure out what to make and why, and we offer design engagements to realize the vision. We also offer Sentient Design workshops, talks, and executive sessions. Get in touch.
