
What We’re Reading

Articles, essays, and resources that have us thinking.

agents

When Using AI Leads to ‘Brain Fry’

∞ Apr 7, 2026

A BCG study of US-based workers found that intensive oversight of AI agents can cause cognitive exhaustion that the researchers call “AI brain fry.” The researchers shared their results in Harvard Business Review:

Participants described a “buzzing” feeling or a mental fog with difficulty focusing, slower decision-making, and headaches. This AI-associated mental strain carries significant costs in the form of increased employee errors, decision fatigue, and intention to quit. … Many participants used the words “fog” or “buzzing.” They described intensive back-and-forth with the tools, followed by an inability to think clearly, like a mental hangover, comprised of difficulty focusing, slower decision-making, and headaches, requiring several to physically step away from their computer to “reset.”

The researchers found that brain fry is closely linked to the amount of oversight and direct monitoring the agents required: a high rather than low degree of oversight caused 14% more mental effort on the job and predicted 12% more mental fatigue and 19% greater information overload.

So even as productivity increases with agents churning out work, the personal cost of monitoring that work is high. The study found increased fatigue when the use of AI increases workload—producing more simply costs more—and also when users multitask across tools. “As employees go from using one AI tool to two simultaneously, they experience a significant increase in productivity. As they incorporate a third tool, productivity again increases, but at a lower rate. After three tools, though, productivity scores dipped.”

A senior engineering manager in the study described the effect this way: “It was like I had a dozen browser tabs open in my head, all fighting for attention. I caught myself rereading the same stuff, second-guessing way more than usual, and getting weirdly impatient. My thinking wasn’t broken, just noisy—like mental static. What finally snapped me out of it was realizing I was working harder to manage the tools than to actually solve the problem.”

But the kind of task matters. When AI was used to replace routine or repetitive tasks—what the researchers call toil—burnout scores were 15% lower than for those who didn’t use AI that way. The researchers were careful to delineate the difference between burnout (emotional fatigue) and brain fry (mental fatigue). Using AI for unpleasant tasks still caused mental fatigue, but improved emotional state: higher work engagement, more positive associations with AI, and more social connection with colleagues.

As a designer of intelligent interfaces, my takeaway is that people need help managing the overload of extremely productive but not entirely reliable agents. This help might be delivered through a mix of improved experience, tools, and processes that make oversight less effortful—or alternatively that help people regulate and reduce how much they engage in the first place.

“Just as we have norms for spans of control for managing humans, so, too, limits need to be defined for human + agent oversight and for agents alone,” the researchers wrote. “Tools that require less intense attention or working memory, which instead support creative mind wandering, foster social engagement, or scaffold skill development can produce even more business value but sustainably, while encouraging innovation, fostering growth, and sparking joy for users.”

That sounds like a design challenge.

When Using AI Leads to “Brain Fry” | Harvard Business Review

copilot

How Many Products Does Microsoft Have Named ‘Copilot’?

∞ Apr 6, 2026

Names matter. That’s especially true when we’re all trying to establish shared meaning in something new.

Alas, at Microsoft, the name “Copilot” means very little except for a hand-wavey gesture in the direction of “has AI.” Tey Bannerman is doing the heroic work of tracking down the number of products and features named Copilot, and he’s up to 80 and counting.

“There are now Copilots inside Copilots, Copilots for other Copilots, and a physical Copilot key on your keyboard for summoning them,” Bannerman writes. Microsoft applies the label to apps, features, platforms, a keyboard key, and an entire category of laptops. When everything is Copilot, nothing is Copilot.

This is a marketing problem for Microsoft, but it also points to a general fuzziness problem in the industry. The meanings of terms like “agent,” “copilot,” even “AI” itself have grown so diffuse as to be useless for common understanding even within discrete teams or organizations. (Somehow, even traditional automation is called agentic lately, but that’s a post for another time.)

One of the goals that Veronika and I had for writing the Sentient Design book was to create crisp vocabulary and definitions for different experience types. For what it’s worth, “copilot” is one of the four fundamental “postures” in Sentient Design from which all intelligent interfaces derive. Our definition:

Copilots provide continuous, context-aware assistance throughout an activity.

This always-on stance of constant monitoring and assistance is different from the other postures of tool, chat, and agent. Posture determines the system’s manner and relationship with the user. More than just differences in functionality, these postures describe the different ways users collaborate with intelligent interfaces:

  • People use tools.
  • People talk to chat.
  • People delegate to agents.
  • People are backed by copilots.

From those four postures, over a dozen novel experience patterns emerge. Sentient Design describes them all, along with the emerging UI and interaction patterns to make them useful.

When you’re making something new, apply some rigor to what you call it. A crisp definition not only helps you describe the thing, it helps you shape what you make—and make it distinct from its neighbors. It tells you (and your customers) not only what it is, but what it’s not.

How many products does Microsoft have named ‘Copilot’? I mapped every one | Tey Bannerman

sentient design

Claude Dispatch and the Power of Interfaces

∞ Apr 1, 2026

Ethan Mollick reviewed research showing that chat interfaces impose heavy cognitive effort that undermines complex, specialized work. His conclusion is that the future of AI-powered interfaces is specialized interfaces built on the fly in response to user intent and context:

Instead of having companies build a specialized interface for every kind of work, the AI generates the right interface on the fly. I suspect the future isn’t one interface to rule them all. It’s AI that generates the right interface for the moment, an agent on your desktop, a chart in a conversation, a custom app to solve a problem. We’re moving from adapting to the AI’s interface to the AI adapting its interface to you.

AI capability has been running ahead of AI accessibility. The models have been smart enough to do extraordinary things for a while now, but we’ve been making people access that intelligence through chatbots. And, as that cognitive load research shows, the chatbot format is actively working against them. As interfaces improve, we’re going to see what happens when a much larger number of people can actually use what AI is capable of. Every new interface that closes even part of that gap will feel like a leap in AI capability, even when the models haven’t changed (though they are still changing). My guess is that a lot of the “AI disappointment” people sometimes express comes not from the AI being bad, but from the interfaces being wrong. We built one of the most powerful technologies in recent history and then made people access it by typing into a chat window. That will change soon.

Friends, this is what Sentient Design is all about, and Veronika Kindred and I wrote an entire book about it, now available for pre-order.

(This isn’t the end of interface design, by the way, far from it. It’s an entirely new era of design. It’s super exciting and twisty and fun, and you’re needed more than ever to help make it work.)

Claude Dispatch and the Power of Interfaces | One Useful Thing

agents

“I Suppose You Would Call It an Interface for the User”

∞ Mar 14, 2026

Nicholas Evans on LinkedIn:

I have enough skills in my Claude setup that I need something to help me remember them. Maybe we could call it a menu or something. And it would group things in a way that matched my mental model and helped me find things. I suppose you would call it an interface for the user.

Nicholas Evans | LinkedIn

sentient design

Why AI agents need to learn to read the room

∞ Mar 14, 2026

Researcher Genna Bridgeman shared practical findings about how AI interactions are affected by social expectations of the specific communication channel.

Bridgeman is a product researcher for Intercom, the company behind the Fin customer service agent. Fin is remarkably effective at managing routine support tasks, and it does so across live phone conversations, chat, email, and WhatsApp.

Each of those channels has its own etiquette, of course. The ways—and even the reasons—people use those channels create expectations for how info will be delivered. Bridgeman’s research found that when AI didn’t get the etiquette right, the result undermined trust as much as any human faux-pas might:

When interactions felt wrong, users didn’t blame the answer. They questioned the system’s understanding. And once that doubt set in, every subsequent response was judged more harshly.

The core takeaways:

  • In chat: Brevity, clarity, and structure are more important than completeness.
  • In email: The absence of a formal greeting and a thorough (even dense) answer can seem dismissive or incomplete.
  • On the phone: If the agent talks like a bot, users will start talking like a bot by simplifying language and avoiding nuance, which makes the system less effective.
  • In WhatsApp: Users expect speed and continuity even more than in traditional chat, with little patience for re-establishing context, even in new sessions.
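Norms like these can be encoded directly into an agent's instructions. Here's a minimal Python sketch of that idea; the channel names and style rules paraphrase Bridgeman's findings, and `build_system_prompt` is a hypothetical helper, not Intercom's actual implementation:

```python
# Sketch: adapt an AI agent's tone and structure to the social norms of the
# channel it's responding on. Style rules below are illustrative paraphrases
# of the research findings, not Intercom's real configuration.

CHANNEL_STYLE = {
    "chat": (
        "Be brief and clear. Use short paragraphs or bullets. "
        "Prefer structure over completeness."
    ),
    "email": (
        "Open with a greeting and close with a sign-off. "
        "Give a thorough, self-contained answer; density is acceptable."
    ),
    "phone": (
        "Speak in natural, conversational sentences. Avoid robotic phrasing "
        "so the caller doesn't start simplifying their own language."
    ),
    "whatsapp": (
        "Respond quickly and informally. Carry context forward between "
        "sessions; never ask the user to re-explain what they already said."
    ),
}

def build_system_prompt(channel: str, base: str = "You are a support agent.") -> str:
    """Prepend channel-specific etiquette to the agent's base instructions."""
    style = CHANNEL_STYLE.get(channel, "")
    return f"{base} {style}".strip()
```

The point of the sketch is that etiquette is a first-class input to the system prompt, not an afterthought baked into one channel's copy.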

Why AI Agents Need To Learn To Read the Room | Fin Ideas

agents

SaaS Is Dead?

∞ Mar 14, 2026

In his newsletter, Benedict Evans deflates the frothy talk that AI agents and assistants will eliminate vast swaths of software. That theory says that people will just tell the computer what they want; if anyone can use AI to spin up their own tool to do the job, then who needs ready-made software? (The theory is especially popular among engineers who already make their own tools.)

When you actually go and look at successful software, the users generally didn’t see the problem, didn’t see how you would solve it, and could not have sat down and thought about what should happen on every screen, how it should get built, and how you get everybody to use it. There is an enormous difference between knowing something about how your company and how your job works and being able to identify a set of problems and a set of workflows and think about how those could be automated.

In other words, the fact that you’re writing the code in natural language doesn’t mean that you don’t have to work out what the computer should do.

As AI’s capabilities grow, figuring out where to aim those superpowers becomes especially important.

Understanding the problem, imagining a fresh solution, and crafting the ideal experience… all of that is really hard to do when you’re burdened by the assumptions and expectations of how you’ve always done it. This might be non-intuitive, but the burden of experience means that the people in the trenches are often the wrong people to design the new solution. DIY tools will only take them so far.

Software design is harder than it looks. So is process design. The new era of intelligent interfaces doesn’t mean that we just toss users into the deep end and hope for the best.

Software and user experience are changing, but they’re not going away. Domain- and context-specific solutions will continue to be critical in order to give people the context and platform to do their work, especially inside complex organizations and processes. The future is much more likely to be AI embedded inside a million bespoke workflows, not a million bespoke workflows jammed into a single AI interface.

For product leaders and designers, that’s a big opportunity. What dramatically new tools and exceptional experiences can we create for our users?

SaaS Is Dead? | Benedict Evans

agents

The Shape of the Thing

∞ Mar 14, 2026

In his newsletter, Ethan Mollick takes stock of the past few months of dramatic, exponential improvement in AI agents. Their sudden improvement in delivering actually “reasonable and useful” results, he writes, is beginning to unlock radical changes in the shape of work (particularly in the software industry). But what shape will that be?

At the frontier, a small tier of software shops is allowing AI agents to build the software themselves—no human coding, no human review. That’s a lot more profound than the simple automation of tasks or process; that changes the whole business model. That those experiments are possible to run at all is remarkable. Where will they land, and what does that mean for other industries? “AI is good enough to change how organizations operate,” Mollick writes, “and the experimentation is just getting started, even as models continue to improve.”

What will be the Thing that AI becomes? We still don’t know, but this feels like a foundational moment to shape that outcome. Right now is when the assumptions and applications of AI are beginning to firm up, not just the underlying technology:

When a technology is this powerful and this unsettled, the choices that individuals and organizations make right now matter more. We can see the shape of the Thing now, but we can still influence the Thing itself, and what it means for all of us. We clearly don’t have rules or role models for how AI gets used at work, in schools, or in government. That’s a problem, but it also means that every organization figuring out a good way to use AI right now is setting a precedent for everyone else. The window to shape the Thing may not last long, but it is here now.

You have a role. Your organization has a role. This is not a time to be passive.

The Shape of the Thing | Ethan Mollick

ai

Charlie's Fake Videos for AI Literacy

∞ Oct 9, 2025

Charlie is a banking app for older adults, with a brand focused on financial safety, simplicity, and trust. They launched a fun and smart campaign to educate people about the risks of deepfake scams.

The system creates AI-generated videos for friends and family—customized with their first names and hometown—to deliver a message about AI fraud, all while escaped zoo animals run amok. It’s silly and entirely effective.

Most people don’t realize just how good AI video has become—and how easy it is to clone anyone’s voice or face now. Raising that awareness feels essential, especially for an older audience frequently targeted by scams.

For all of us working with AI, we have a responsibility to improve literacy and cultivate pragmatic skepticism among our customers and users. The work of this new era of design is to be clear about AI’s risks and weaknesses, even as we harness its capabilities.

Encouraging appropriate skepticism is part of the work.

Warn your family and friends about AI scams | Charlie FraudWatch

workflow

The Cascade Effect in Context-Based Design Systems

∞ Oct 1, 2025

Nobody’s thinking more crisply about the convergence of AI and design systems than TJ Pitre, a longtime friend and partner of Big Medium. He and his crew at front-end agency Southleft have been knocking it out of the park this year by using AI to grease the end-to-end delivery of design systems from Figma to production.

In our work together, TJ has led AI integrations that improved the Figma hygiene of design systems, eased design-dev handoff (or eliminated it altogether), and let non-dev, non-designer civilians build designs and new components for the system on their own.

If you work with design systems, do yourself the kindness of checking out the tools TJ has created to ease your life:

  • FigmaLint is an AI-powered Figma plugin that analyzes design files. It audits component structure, token/variable usage, and property naming. It generates property documentation and includes a chat assistant to ask questions about the audit and the system.

  • Story UI is a tool that lets you create layouts (or new component recipes) inside Storybook using your design system. Non-developers can use it to create entire pages as a Storybook story.

  • Company Docs MCP basically enables headless documentation for your design system so that you can use AI to get design system answers in the context of your immediate workspace. Use it from Slack, a Figma plugin, Claude, whatever.

All of these tools double down on the essential design system mission: to make UI components useful, legible, and consistent across disciplines and production phases. Doing that helps the people who use design systems, and it also helps automate everything. The marriage of well-named components and properties with a clear and well-applied token system bakes context and predictability into the system. All of it makes things easier for people and robots alike to know what to do.

TJ calls these context-based systems:

Think of context-based design systems as a chain reaction. Strong context at the source creates a cascade of good decisions. But the inverse is equally true, and this is crucial: flaws compound as they flow downstream.

A poorly named component in Figma (“Button2_final_v3”) loses its context. Without clear intent, developers guess. AI tools hallucinate. Layout generation becomes unreliable. What started as naming laziness becomes hours of debugging and manual fixes.…

Your design files establish intent. Validation tools (like FigmaLint) ensure that intent is properly structured. Design tokens translate that intent into code-ready values. Components combine those tokens with behavioral logic. Layout tools can then intelligently compose those components because they understand what each piece means, not just how it looks.

It’s multiplication, not addition. One well-structured component with proper context enables dozens of correct implementations downstream. An AI-powered layout tool can confidently place a “primary-action” button because it understands its purpose, not just its appearance.

When you put more “system” into your design system, in other words, you get something that is people-ready, but also AI-ready. It’s what makes it possible to let AI understand and use your design system.
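The idea is easy to see in miniature. In this hypothetical sketch (the token and component names are invented for illustration), a semantically named token carries its intent downstream, so a human, a linter, or a layout-generating AI can all pick the right value for the same reason:

```python
# Sketch of the naming idea in the cascade: semantic token names encode
# intent, so tools downstream can resolve them without guessing.
# All names and values here are invented, not from a real design system.

TOKENS = {
    "color-primary-action": "#0055ff",  # intent: the main thing to click
    "color-danger": "#cc2200",          # intent: destructive actions
}

# Components declare intent by referencing semantic tokens, not raw values.
COMPONENT_INTENT = {
    "Button/Primary": "color-primary-action",
    "Button/Destructive": "color-danger",
}

def resolve_color(component: str) -> str:
    """A layout tool can pick the right value because names encode purpose."""
    return TOKENS[COMPONENT_INTENT[component]]
```

Contrast that with a component named `Button2_final_v3` pointing at a raw hex value: the value still renders, but the intent is gone, and every downstream consumer (human or model) has to guess.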

That unlocks the use of AI-powered tools like Story UI to explore new designs and speed production. But even more exciting: it also enables Sentient Design experiences like bespoke UI: interfaces that can assemble their own layout according to immediate need. When you teach AI to use your design system, then AI can deliver the experience directly, in real time.

But first you have to have things tidy. TJ’s tools are the right place to start.

The Cascade Effect in Context-Based Design Systems | Southleft

ai

Boring Is Good

∞ Sep 30, 2025

Scott Jenson suggests AI is likely to be more useful for “boring” tasks than for fancy outboard brains that can do our thinking for us. With hallucination and faulty reasoning derailing high-order tasks, Scott argues it’s time to right-size the task—and maybe the models, too. “Small language models” (SLMs) are plenty for helpful but modest tasks around syntax and language.

These smaller open-source models, while very good, usually don’t score as well as the big foundational models by OpenAI and Google which makes them feel second-class. That perception is a mistake. I’m not saying they perform better; I’m saying it doesn’t matter. We’re asking them the wrong questions. We don’t need models to take the bar exam.

Instead of relying on language models to be answer machines, Scott suggests that we should lean into their core language understanding for proofreading, summaries, or light rewrites for clarity: “Tiny uses like this flip the script on the large centralized models and favor SLMs which have knock-on benefits: they are easier to ethically train and have much lower running costs. As it gets cheaper and easier to create these custom LLMs, this type of use case could become useful and commonplace.”

This is what we call casual intelligence in Sentient Design, and we recently shared examples of iPhone apps doing exactly what Scott is talking about. It makes tons of sense.

Sentient Design advocates dramatically new experiences that go beyond Scott’s “boring” use cases, but that advocacy actually lines up neatly with what Scott proposes: let’s lean into what language models are really good at. These models may be unreliable at answering questions, but they’re terrific at understanding language and intent.

Some of Sentient Design’s most impressive experience patterns rely on language models to do low-lift tasks that they’re quite good at. The bespoke UI design pattern, for example, creates interfaces that can redesign their own layouts in response to explicit or implicit requests. It’s wild when you first see it go, but under the hood, it’s relatively simple: ask the model to interpret the user’s intent and choose from a small set of design patterns that match the intent. We’ve built a bunch of these, and they’re reliable—because we’re not asking the model to do anything except very simple pattern matching based on language and intent. Sentient Scenes is a fun example of that, and a small, local language model would be more than capable of handling that task.
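Under the hood, that kind of bespoke UI can be sketched in a few lines. In this illustrative Python sketch, `classify_intent` stands in for a real language-model call that is constrained to answer with exactly one label from a small set (the pattern names and keywords are invented, not from Sentient Scenes):

```python
# Sketch of the bespoke UI pattern: the model's only job is to map the
# user's request onto one of a small, fixed set of layout patterns.
# classify_intent is a placeholder for an LLM call constrained to these
# labels; here a crude keyword match stands in so the sketch runs.

LAYOUT_PATTERNS = {
    "compare": "side-by-side columns with a shared axis",
    "explore": "card grid with filters",
    "focus": "single-item detail view",
    "monitor": "dashboard of small live tiles",
}

def classify_intent(utterance: str) -> str:
    """Placeholder for a model call constrained to the four labels above."""
    keywords = {
        "versus": "compare", "vs": "compare", "browse": "explore",
        "detail": "focus", "watch": "monitor", "track": "monitor",
    }
    text = utterance.lower()
    for word, label in keywords.items():
        if word in text:
            return label
    return "explore"  # safe default when intent is unclear

def choose_layout(utterance: str) -> str:
    # Because the model can only pick from known patterns, a wrong guess
    # degrades gracefully instead of producing a broken interface.
    return LAYOUT_PATTERNS[classify_intent(utterance)]
```

The reliability comes from the constraint: the model chooses among vetted patterns rather than generating an interface from scratch, so the worst case is a reasonable-but-suboptimal layout.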

As Scott says, all of this comes with time and practice as we learn the grain of this new design material. But for now we’ve been asking the models to do more than they can handle:

LLMs are not intelligent and they never will be. We keep asking them to do “intelligent things” and find out a) they really aren’t that good at it, and b) replacing that human task is far more complex than we originally thought. This has made people use LLMs backwards, desperately trying to automate from the top down when they should be augmenting from the bottom up.…

Ultimately, a mature technology doesn’t look like magic; it looks like infrastructure. It gets smaller, more reliable, and much more boring.

We’re here to solve problems, not look cool.

It’s only software, friends.

Boring is good | Scott Jenson

ai

The 28 AI Tools I Wish Existed

∞ Sep 30, 2025

Sharif Shameem pulled together a wishlist of fun ideas for AI-powered applications. Some are useful automations of dreary tasks, while others have a strong Sentient Design vibe of weaving intelligence into the interface itself. It’s a good list if you’re looking for inspiration for new ways to think about how to apply AI as a design material. Some examples:

  • A writing app that uses the non-player character (NPC) design pattern to embed suggestions in comments, like a human user: “A minimalist writing app that lets me write long-form content. A model can also highlight passages and leave me comments in the marginalia. I should be able to set different ‘personas’ to review what I wrote.”

  • A similar one (emphasis mine): “A minimalist ebook reader that lets me read ebooks, but I can highlight passages and have the model explain things in more depth off to the side. It should also take on the persona of the author. It should feel like an extension of the book and not a separate chat instance.”

  • LLMs are great at understanding intent and sentiment, so let’s use them to improve our feeds: “Semantic filters for Twitter/X/YouTube. I want to be able to write open-ended filters like “hide any tweet that will likely make me angry” and never have my feed show me rage-bait again. By shaping our feeds we shape ourselves.”
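A semantic filter like that reduces to a per-item judgment call. In this hedged sketch, `llm_judge` is a stand-in for a real model request (any chat API asked to answer yes/no against the user's open-ended rule); the crude keyword check only exists so the sketch runs:

```python
# Sketch of a semantic feed filter: pass the user's open-ended rule plus
# each feed item to a language model and drop items the model flags.
# llm_judge is a placeholder; the rage-bait markers are invented.

from typing import Callable, List

def llm_judge(rule: str, text: str) -> bool:
    """Placeholder: would ask a model 'Does this text match the rule?'"""
    rage_markers = ("outrage", "you won't believe", "destroyed")
    if "angry" in rule.lower():
        return any(m in text.lower() for m in rage_markers)
    return False

def filter_feed(items: List[str], rule: str,
                judge: Callable[[str, str], bool] = llm_judge) -> List[str]:
    """Keep only items the judge says do NOT match the filter rule."""
    return [item for item in items if not judge(rule, item)]
```

The design choice worth noting: the rule stays in natural language, so the user never has to translate "don't make me angry" into keywords or regexes.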

The 28 AI Tools I Wish Existed | Sharif Shameem

apple

How Developers Are Using Apple's Local AI Models with iOS 26

∞ Sep 30, 2025

While Apple certainly bungled its rollout of Apple Intelligence, it continues to make steady progress in providing AI-powered features that offer everyday convenience. TechCrunch gathered a collection of apps that are using Apple’s on-device models to build intelligence into their interface in ways that are free, easy, and private to the user.

Earlier this year, Apple introduced its Foundation Models framework during WWDC 2025, which allows developers to use the company’s local AI models to power features in their applications.

The company touted that with this framework, developers gain access to AI models without worrying about any inference cost. Plus, these local models have capabilities such as guided generation and tool calling built in.

As iOS 26 is rolling out to all users, developers have been updating their apps to include features powered by Apple’s local AI models. Apple’s models are small compared with leading models from OpenAI, Anthropic, Google, or Meta. That is why local-only features largely improve quality of life with these apps rather than introducing major changes to the app’s workflow.

The examples are full of what we call casual intelligence in Sentient Design. These are small, helpful interventions that drizzle intelligence into traditional interfaces to ease frictions and smooth rough edges.

For iPhone apps, these local models provide a “why wouldn’t you use it?” material to improve the experience. Just like we’re accustomed to adding JavaScript to web pages to add convenient interaction and dynamism, now you can add intelligence to your pages, too.

Starting small is good, and this collection of apps provides good inspiration for designers who are new to intelligent interfaces. Some examples:

  • MoneyCoach uses local models to suggest categories and subcategories for a spending item for quick entries.
  • LookUp uses local models to generate sentences that demonstrate the use of a word.
  • Tasks suggests tags for to-do list entries.
  • DayOne suggests titles for your journal entries, and uses local AI to prompt you with questions or ideas to continue writing.

And there’s plenty more—all of them modest interventions that build on simple suggestions (category/tag selection and brief text generation) or summarization. This kind of casual intelligence is low-risk, everyday assistance.
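The tag-suggestion pattern those apps share is simple to sketch. The key design choice is constraining suggestions to the user's existing tags, so the model can rank but never invent a label. In this illustrative Python sketch, `model_rank` is a placeholder for an on-device model call (the word-overlap scoring only exists so the sketch runs):

```python
# Sketch of the category/tag-suggestion pattern: constrain the model to the
# user's existing tags so a hallucinated label can never enter the data.
# model_rank stands in for a small local-model call; its scoring is a crude
# word-overlap stand-in, not a real implementation.

def model_rank(entry: str, tags: list[str]) -> list[str]:
    """Placeholder: would ask a local model to rank tags by relevance."""
    words = set(entry.lower().split())
    scored = [(len(words & set(t.lower().split())), t) for t in tags]
    return [t for score, t in sorted(scored, reverse=True) if score > 0]

def suggest_tags(entry: str, existing_tags: list[str], limit: int = 3) -> list[str]:
    """Offer at most `limit` suggestions; the user makes the final pick."""
    return model_rank(entry, existing_tags)[:limit]
```

Suggestions stay low-risk because the user always confirms the pick, and a bad ranking costs one extra tap, not a corrupted taxonomy.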

How developers are using Apple's local AI models with iOS 26 | TechCrunch
    Big Medium is a Global Moxie company.
    Copyright 2003–2026 Global Moxie, LLC. All rights reserved.