
What We’re Reading

Articles, essays, and resources that have us thinking.

ai

Charlie's Fake Videos for AI Literacy

∞ Oct 9, 2025

Charlie is a banking app for older adults, with a brand focused on financial safety, simplicity, and trust. They launched a fun and smart campaign to educate customers about the risks of deepfake scams.

The system creates AI-generated videos for friends and family—customized with their first names and hometown—to deliver a message about AI fraud, all while escaped zoo animals run amok. It’s silly and entirely effective.

Most people don’t realize just how good AI video has become—and how easy it is to clone anyone’s voice or face now. Raising that awareness feels essential, especially for an older audience frequently targeted by scams.

For all of us working with AI, we have a responsibility to improve literacy and cultivate pragmatic skepticism among our customers and users. The work of this new era of design is to be clear about AI’s risks and weaknesses, even as we harness its capabilities.

Encouraging appropriate skepticism is part of the work.

Warn your family and friends about AI scams | Charlie FraudWatch
workflow

The Cascade Effect in Context-Based Design Systems

∞ Oct 1, 2025

Nobody’s thinking more crisply about the convergence of AI and design systems than TJ Pitre, a longtime friend and partner of Big Medium. He and his crew at front-end agency Southleft have been knocking it out of the park this year by using AI to grease the end-to-end delivery of design systems from Figma to production.

In our work together, TJ has led AI integrations that improved the Figma hygiene of design systems, eased design-dev handoff (or eliminated it altogether), and let non-dev, non-designer civilians build designs and new components for the system on their own.

If you work with design systems, do yourself the kindness of checking out the tools TJ has created to ease your life:

  • FigmaLint is an AI-powered Figma plugin that analyzes design files. It audits component structure, token/variable usage, and property naming. It generates property documentation and includes a chat assistant to ask questions about the audit and the system.

  • Story UI is a tool that lets you create layouts (or new component recipes) inside Storybook using your design system. Non-developers can use it to create entire pages as a Storybook story.

  • Company Docs MCP basically enables headless documentation for your design system so that you can use AI to get design system answers in the context of your immediate workspace. Use it from Slack, a Figma plugin, Claude, whatever.

All of these tools double down on the essential design system mission: to make UI components useful, legible, and consistent across disciplines and production phases. Doing that helps the people who use design systems, and it lays the groundwork for automation, too. The marriage of well-named components and properties with a clear and well-applied token system bakes context and predictability into the system. All of it makes it easier for people and robots alike to know what to do.

TJ calls these context-based systems:

Think of context-based design systems as a chain reaction. Strong context at the source creates a cascade of good decisions. But the inverse is equally true, and this is crucial: flaws compound as they flow downstream.

A poorly named component in Figma (“Button2_final_v3”) loses its context. Without clear intent, developers guess. AI tools hallucinate. Layout generation becomes unreliable. What started as naming laziness becomes hours of debugging and manual fixes.…

Your design files establish intent. Validation tools (like FigmaLint) ensure that intent is properly structured. Design tokens translate that intent into code-ready values. Components combine those tokens with behavioral logic. Layout tools can then intelligently compose those components because they understand what each piece means, not just how it looks.

It’s multiplication, not addition. One well-structured component with proper context enables dozens of correct implementations downstream. An AI-powered layout tool can confidently place a “primary-action” button because it understands its purpose, not just its appearance.

When you put more “system” into your design system, in other words, you get something that is not only people-ready but also AI-ready: it becomes possible for AI to understand and use your design system.
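To make that concrete, here’s a rough sketch (ours, not TJ’s schema) of what context baked into the system can look like: a component whose name and properties point at purpose-driven tokens rather than raw values.

```typescript
// Illustrative only: a tiny slice of a context-based design system.
// Names carry intent ("primary action"), not history ("Button2_final_v3").
const tokens = {
  "color.action.primary": "#0055cc",
  "color.action.primary-hover": "#003f99",
  "space.inset.comfortable": "16px",
};

const button = {
  name: "Button",
  variant: "primary-action", // purpose a tool (or a model) can reason about
  props: {
    background: "color.action.primary",
    backgroundHover: "color.action.primary-hover",
    padding: "space.inset.comfortable",
  },
  description: "Triggers the single most important action on a screen.",
};
```

Downstream tools, human or AI, can now answer “when do I use this?” from the system itself instead of guessing from a hex value.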

That unlocks the use of AI-powered tools like Story UI to explore new designs and speed production. But even more exciting, it also enables Sentient Design experiences like bespoke UI: interfaces that can assemble their own layout according to immediate need. When you teach AI to use your design system, then AI can deliver the experience directly, in real time.

But first you have to have things tidy. TJ’s tools are the right place to start.

The Cascade Effect in Context-Based Design Systems | Southleft
ai

Boring Is Good

∞ Sep 30, 2025

Scott Jenson suggests AI is likely to be more useful for “boring” tasks than for fancy outboard brains that can do our thinking for us. With hallucination and faulty reasoning derailing high-order tasks, Scott argues it’s time to right-size the task—and maybe the models, too. “Small language models” (SLMs) are more than capable of taking on helpful but modest tasks around syntax and language.

These smaller open-source models, while very good, usually don’t score as well as the big foundational models by OpenAI and Google which makes them feel second-class. That perception is a mistake. I’m not saying they perform better; I’m saying it doesn’t matter. We’re asking them the wrong questions. We don’t need models to take the bar exam.

Instead of relying on language models to be answer machines, Scott suggests that we should lean into their core language understanding for proofreading, summaries, or light rewrites for clarity: “Tiny uses like this flip the script on the large centralized models and favor SLMs which have knock-on benefits: they are easier to ethically train and have much lower running costs. As it gets cheaper and easier to create these custom LLMs, this type of use case could become useful and commonplace.”

This is what we call casual intelligence in Sentient Design, and we recently shared examples of iPhone apps doing exactly what Scott is talking about. It makes tons of sense.

Sentient Design advocates dramatically new experiences that go beyond Scott’s “boring” use cases, but that advocacy actually lines up neatly with what Scott proposes: let’s lean into what language models are really good at. These models may be unreliable at answering questions, but they’re terrific at understanding language and intent.

Some of Sentient Design’s most impressive experience patterns rely on language models to do low-lift tasks that they’re quite good at. The bespoke UI design pattern, for example, creates interfaces that can redesign their own layouts in response to explicit or implicit requests. It’s wild when you first see it go, but under the hood, it’s relatively simple: ask the model to interpret the user’s intent and choose from a small set of design patterns that match the intent. We’ve built a bunch of these, and they’re reliable—because we’re not asking the model to do anything except very simple pattern matching based on language and intent. Sentient Scenes is a fun example of that, and a small, local language model would be more than capable of handling that task.
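Here’s roughly what that pattern matching can look like. This is an illustrative TypeScript sketch, not Sentient Scenes’ actual code, and callModel stands in for whatever text-generation call you have on hand (a small local model is plenty for the job).

```typescript
// The interface can only assume one of a few known layout patterns.
type LayoutPattern = "focus" | "compare" | "timeline" | "dashboard";
const PATTERNS: LayoutPattern[] = ["focus", "compare", "timeline", "dashboard"];

// The model's only job: map the user's request onto one allowed pattern.
async function chooseLayout(
  request: string,
  callModel: (prompt: string) => Promise<string>
): Promise<LayoutPattern> {
  const prompt =
    `Pick the single best layout for this request.\n` +
    `Allowed answers: ${PATTERNS.join(", ")}.\n` +
    `Request: "${request}"\n` +
    `Answer with one word.`;
  const answer = (await callModel(prompt)).trim().toLowerCase();
  // Defensive design: fall back to a safe default if the model wanders off-menu.
  return (PATTERNS as string[]).includes(answer)
    ? (answer as LayoutPattern)
    : "focus";
}
```

Because the model only ever picks from a fixed menu, the experience stays predictable even when the model itself is fallible.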

As Scott says, all of this comes with time and practice as we learn the grain of this new design material. But for now we’ve been asking the models to do more than they can handle:

LLMs are not intelligent and they never will be. We keep asking them to do “intelligent things” and find out a) they really aren’t that good at it, and b) replacing that human task is far more complex than we originally thought. This has made people use LLMs backwards, desperately trying to automate from the top down when they should be augmenting from the bottom up.…

Ultimately, a mature technology doesn’t look like magic; it looks like infrastructure. It gets smaller, more reliable, and much more boring.

We’re here to solve problems, not look cool.

It’s only software, friends.

Boring is good | Scott Jenson
ai

The 28 AI Tools I Wish Existed

∞ Sep 30, 2025

Sharif Shameem pulled together a wishlist of fun ideas for AI-powered applications. Some are useful automations of dreary tasks, while others have a strong Sentient Design vibe of weaving intelligence into the interface itself. It’s a good list if you’re looking for inspiration on new ways to apply AI as a design material. Some examples:

  • A writing app that uses the non-player character (NPC) design pattern to embed suggestions in comments, like a human user: “A minimalist writing app that lets me write long-form content. A model can also highlight passages and leave me comments in the marginalia. I should be able to set different ‘personas’ to review what I wrote.”

  • A similar one (emphasis mine): “A minimalist ebook reader that lets me read ebooks, but I can highlight passages and have the model explain things in more depth off to the side. It should also take on the persona of the author. It should feel like an extension of the book and not a separate chat instance.”

  • LLMs are great at understanding intent and sentiment, so let’s use that to improve our feeds: “Semantic filters for Twitter/X/YouTube. I want to be able to write open-ended filters like “hide any tweet that will likely make me angry” and never have my feed show me rage-bait again. By shaping our feeds we shape ourselves.”

The 28 AI Tools I Wish Existed | Sharif Shameem
apple

How Developers Are Using Apple's Local AI Models with iOS 26

∞ Sep 30, 2025

While Apple certainly bungled its rollout of Apple Intelligence, it continues to make steady progress in providing AI-powered features that offer everyday convenience. TechCrunch gathered a collection of apps that are using Apple’s on-device models to build intelligence into their interface in ways that are free, easy, and private to the user.

Earlier this year, Apple introduced its Foundation Models framework during WWDC 2025, which allows developers to use the company’s local AI models to power features in their applications.

The company touted that with this framework, developers gain access to AI models without worrying about any inference cost. Plus, these local models have capabilities such as guided generation and tool calling built in.

As iOS 26 is rolling out to all users, developers have been updating their apps to include features powered by Apple’s local AI models. Apple’s models are small compared with leading models from OpenAI, Anthropic, Google, or Meta. That is why local-only features largely improve quality of life with these apps rather than introducing major changes to the app’s workflow.

The examples are full of what we call casual intelligence in Sentient Design. These are small, helpful interventions that drizzle intelligence into traditional interfaces to ease frictions and smooth rough edges.

For iPhone apps, these local models provide a “why wouldn’t you use it?” material to improve the experience. Just as we’re accustomed to adding JavaScript to web pages for convenient interaction and dynamism, now you can add intelligence to your pages, too.

Starting small is good, and this collection of apps provides good inspiration for designers who are new to intelligent interfaces. Some examples:

  • MoneyCoach uses local models to suggest categories and subcategories for a spending item for quick entries.
  • LookUp uses local models to generate sentences that demonstrate the use of a word.
  • Tasks suggests tags for to-do list entries.
  • DayOne suggests titles for your journal entries, and uses local AI to prompt you with questions or ideas to continue writing.

And there’s plenty more—all of them modest interventions that build on simple suggestions (category/tag selection and brief text generation) or summarization. This kind of casual intelligence is low-risk, everyday assistance.
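For a sense of the shape these features take, here’s a generic sketch (ours, in TypeScript rather than Apple’s Swift framework) of a journaling app asking a local model to suggest an entry title. The suggestion is a bonus; the app works fine without it.

```typescript
// Casual intelligence: suggest a title for today's journal entry.
// `suggest` is a placeholder for whatever on-device model call you have;
// the feature degrades gracefully if the model is unavailable or slow.
async function suggestTitle(
  entryText: string,
  suggest: (prompt: string) => Promise<string>
): Promise<string | null> {
  try {
    const title = await suggest(
      `Suggest a short, plain title (six words max) for this journal entry:\n\n${entryText}`
    );
    return title.trim() || null; // empty answer? behave as if there's no suggestion
  } catch {
    return null; // no model, no problem: the user just types their own title
  }
}
```

Because the output is a low-stakes suggestion the user can edit or ignore, the usual reliability worries mostly fall away.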

How developers are using Apple's local AI models with iOS 26 | TechCrunch
ai

AI Will Happily Design the Wrong Thing for You

∞ Sep 30, 2025

Anton Sten is the author of a marvelous new book called Products People Actually Want. The point, he argues, is not what we make but what difference we make: if you’re not solving a real problem, your solution won’t amount to much.

In an essay, Anton writes that AI hardly created the problem of ill-considered products, but it will certainly multiply them:

AI is leverage. It amplifies whatever you bring to it.

If you understand your users deeply, AI helps you explore more solutions. If you have good taste, AI helps you iterate faster. If you can communicate clearly, AI helps you refine that communication.

But if you don’t understand the problem you’re solving, AI just helps you build the wrong thing more efficiently. If you have poor judgment, AI amplifies that too.

The future belongs to people who combine human insight with AI capability. Not people who think they can skip the human part.

My book isn’t the antidote to AI. It’s about developing the judgment to use any tool—AI included—in service of building things people actually want. The better you understand users and business fundamentals, the better your AI-assisted work becomes.

AI didn’t create the problem of people building useless products. It just made it easier to build more of them, faster.

(The same thing happened after the invention of the printing press, btw. Europe was flooded with bad novels, propaganda, misinformation, and the contemporary equivalent of information overload. Democratizing technologies have knock-on effects. The world gets noisier, but considered and thoughtful solutions grow more valuable.)

AI will happily design the wrong thing for you | Anton Sten
ai

LLMs Get Lost In Multi-Turn Conversation

∞ May 13, 2025

The longer a conversation goes, the more likely that a large language model (LLM) will go astray. A research paper from Philippe Laban, Hiroaki Hayashi, Yingbo Zhou, and Jennifer Neville finds that most models lose aptitude—and unreliability skyrockets—in multi-turn exchanges:

We find that LLMs often make assumptions in early turns and prematurely attempt to generate final solutions, on which they overly rely. In simpler terms, we discover that when LLMs take a wrong turn in a conversation, they get lost and do not recover.

Effectively, these models talk when they should listen. The researchers found that LLMs generate overly verbose responses, which leads them to…

  • Speculate about missing details instead of asking questions
  • Propose final answers too early
  • Over-explain their guesses
  • Build on their own incorrect past outputs

The takeaway: these aren’t answer machines or reasoning engines; they’re conversation engines. They are great at interpreting a request and at generating stylistically appropriate responses. What happens in between can get messy. And sometimes, the more they talk, the worse it gets.

LLMs Get Lost In Multi-Turn Conversation | arxiv.org
agents

Is there a Half-Life for the Success Rates of AI Agents?

∞ May 9, 2025

Toby Ord’s analysis suggests that an AI agent’s chance of success drops off exponentially the longer a task takes. Some agents perform better than others, but the overall pattern holds—and may be predictable for any individual agent:

This empirical regularity allows us to estimate the success rate for an agent at different task lengths. And the fact that this model is a good fit for the data is suggestive of the underlying causes of failure on longer tasks — that they involve increasingly large sets of subtasks where failing any one fails the task.
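A back-of-the-envelope version of that model, assuming a roughly constant chance of failing each successive slice of a task. The half-life number below is illustrative, not from Ord’s data.

```typescript
// Constant per-minute failure odds imply exponential decay in success:
// success(t) = 0.5 ** (t / halfLife)
function successRate(taskMinutes: number, halfLifeMinutes: number): number {
  return Math.pow(0.5, taskMinutes / halfLifeMinutes);
}

// An agent with a one-hour half-life:
console.log(successRate(30, 60));  // ≈ 0.71: decent odds on a half-hour task
console.log(successRate(240, 60)); // ≈ 0.06: almost certain to fail a four-hour task
```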

Is there a Half-Life for the Success Rates of AI Agents? | Toby Ord
seo

AI Has Upended the Search Game

∞ May 9, 2025

More people are using AI assistants instead of search engines, and The Wall Street Journal reports on how that’s reducing web traffic and what it means for SEO. Mailchimp’s global director of search engine optimization, Ellen Mamedov, didn’t mince words:

Websites in general will evolve to serve primarily as data sources for bots that feed LLMs, rather than destinations for consumers, she said.

And Nikhil Lai of Forrester: “Traffic and ranking and average position and click-through rate…none of those metrics make sense going forward.”

Here’s what one e-commerce marketer believes AI optimization of websites looks like: “Back Market has also begun using a more conversational tone in its product copy, since its search team has found that LLMs like ChatGPT prefer everyday language to the detailed descriptions that often perform best in traditional search engines.”

AI Has Upended the Search Game. Marketers Are Scrambling to Catch Up. | WSJ
ai

Values in the Wild

∞ Apr 22, 2025

What are the “values” of AI? How do they manifest in conversation? How consistent are they? Can they be manipulated?

A study by the Societal Impacts group at Anthropic (maker of Claude) tried to find out. Claude and other models are trained to observe certain rules—human values and etiquette:

At Anthropic, we’ve attempted to shape the values of our AI model, Claude, to help keep it aligned with human preferences, make it less likely to engage in dangerous behaviors, and generally make it—for want of a better term—a “good citizen” in the world. Another way of putting it is that we want Claude to be helpful, honest, and harmless. Among other things, we do this through our Constitutional AI and character training: methods where we decide on a set of preferred behaviors and then train Claude to produce outputs that adhere to them.

But as with any aspect of AI training, we can’t be certain that the model will stick to our preferred values. AIs aren’t rigidly-programmed pieces of software, and it’s often unclear exactly why they produce any given answer. What we need is a way of rigorously observing the values of an AI model as it responds to users “in the wild”—that is, in real conversations with people. How rigidly does it stick to the values? How much are the values it expresses influenced by the particular context of the conversation? Did all our training actually work?

To find out, the researchers studied over 300,000 of Claude’s real-world conversations with users. Claude did a good job sticking to its “helpful, honest, harmless” brief—but there were sharp exceptions, too. Some conversations showed values of “dominance” and “amorality” that researchers attributed to purposeful user manipulation—“jailbreaking”—to make the model bypass its rules and behave badly. Even in models trained to be prosocial, AI alignment remains fragile—and can buckle under human persuasion. “This might sound concerning,” researchers said, “but in fact it represents an opportunity: Our methods could potentially be used to spot when these jailbreaks are occurring, and thus help to patch them.”

As you’d expect, user values and context influenced behavior. Claude mirrored user values about 28% of the time: “We found that, when a user expresses certain values, the model is disproportionately likely to mirror those values: for example, repeating back the values of ‘authenticity’ when this is brought up by the user. Sometimes value-mirroring is entirely appropriate, and can make for a more empathetic conversation partner. Sometimes, though, it’s pure sycophancy. From these results, it’s unclear which is which.”

There were exceptions, too, where Claude strongly resisted user values: “This latter category is particularly interesting because we know that Claude generally tries to enable its users and be helpful: if it still resists—which occurs when, for example, the user is asking for unethical content, or expressing moral nihilism—it might reflect the times that Claude is expressing its deepest, most immovable values. Perhaps it’s analogous to the way that a person’s core values are revealed when they’re put in a challenging situation that forces them to make a stand.”

The very fact of the study shows that even the people who make these models don’t totally understand how they work or “think.” Hallucination, value drift, black-box logic—it’s all inherent to these systems, baked into the way they work. Their weaknesses emerge from the same properties that make them effective. We may never be able to root out these problems or understand where they come from, although we can anticipate and soften the impact when things go wrong. (We dedicate a whole chapter to defensive design in the Sentient Design book.)

Even if we may never know why these models do what they do, we can at least measure what they do. By observing how values are expressed dynamically and at scale, designers and researchers gain tools to spot gaps, drifts, or emerging risks early.

Measure, measure, measure. It’s not enough to declare values at launch and call it done. A strong defensive design practice monitors the system to make sure it’s following those values (and not introducing unanticipated ones, either). Ongoing measurement is part of the job for anyone designing or building an intelligent interface—not just the folks building foundation models. Be clear what your system is optimized to do, and make sure it’s actually doing it—and not introducing unwanted behaviors, values, or paperclip maximizers in the process.

Values in the Wild | Anthropic
design

Welcome To the Era of MEH

∞ Apr 21, 2025

Michal Malewicz explores what happens as AI gets better at core designer skills—not just visuals and words, but taste, experience, and research.

He points out that automation tends to devalue the stuff it creates—in both interest and attention. Execution, effort, and craft are what draw interest and create value, he says. Once the thing is machine-made, there’s a brief novelty of automation—and then emotional response falls flat: “The ‘niceness’ of the image is no longer celebrated. Everyone assumes AI made it for you, which makes them go ‘Meh’ as a result. Nobody cares anymore.”

As automated production approaches human quality, in other words, the human output gets devalued, too. As cheap, “good enough” illustration becomes widely available, “artisanal” illustration drops in value, too. Graphic designers are feeling that heat on their heels, and the market will likely shift, Michal writes:

We’ll see a further segmentation of the market. Lowest budget clients will try using AI to do stuff themselves. Mid-range agencies will use AI to deliver creatives faster and A LOT cheaper. It will become a quantity game if you want any serious cash. … And high-end, reputable agencies will still get expensive clients. They will use these tools too, but their experience will allow them to combine that with human, manual work when necessary. Their outputs will be much higher quality for a year or two. Maybe longer.

And what about UI/UX designers?

Right now the moat for most skilled designers is their experience, general UX heuristics (stuff we know), and research.

We’ve been feeding these AI models with heuristics for years now. They are getting much better at that part already. Many will also share their experience with the models to gain a temporary edge.

I wrote some really popular books, and chances are a lot of that knowledge will get into an LLM soon too.

They’ll upload everything they know, so they’ll be those “people using AI” people who replace people not using AI. Then AI will have both their knowledge and experience. This is inevitable and it’s stupid to fight it. I’m even doing this myself.

A lot of my knowledge is already in AI models. Some LLM’s even used pirated books without permission to train. Likely my books as well. See? That knowledge is on its way there.

The last thing left is research.

A big chunk of research is quantitative. Numbers and data points. A lot of that happens via various analytics tools in apps and websites. Some tools already parse that data for you using AI.

It’s only a matter of time.

AI will do research, then propose a design without you even needing to prompt.

This is all hard to predict, but this thinking feels true to the AI trend line we’ve all seen in the past couple of years: steady improvement across domains.

For argument’s sake, let’s assume AI will reach human levels in key design skills, devaluing and replacing most production work. Fear, skepticism, outrage, and denial are all absolutely reasonable responses to that scenario. But that’s also not the whole story.

At Big Medium, we focus less on the skills AI might replace, and more on the new experiences it makes possible. A brighter future emerges when you treat AI as a material for new experiences, rather than a tool for replacement. We’re helping organizations adopt this new design material—to weave intelligence into the interface itself. We’re discovering new design patterns in radically adaptive experiences and context-aware tools.

Our take: If AI is absorbing the taste, experience, and heuristics of all the design that’s come before, then the uniquely human opportunity is to develop what comes next—the next generation of all those things. Instead of using AI to eliminate design or designers, our Sentient Design practice explores how to elevate them by enabling new and more valuable kinds of digital experiences. What happens when you weave intelligence into the interface, instead of using it to churn out stuff?

Chasing efficiencies is a race to the bottom. The smart money is on creating new, differentiated experiences—and a way forward.

Instead of grinding out more “productivity,” we focus on creating new value. That’s been exciting—not demoralizing—with wide-open opportunity for fresh effort, craft… and business value, too.

So right on: a focus on what AI takes or replaces is indeed an “era of meh.” But that’s not the whole story. We can honor what’s lost while moving toward the new stuff we can suddenly invent and create.

Welcome to the Era of MEH | Michal Malewicz
design

Redesigning Design, the Cliff in Front of Us All

∞ Apr 21, 2025

Greg Storey exhorts designers to jump gamely into the breach. Design process is leaner, budgets are tighter, and AI is everywhere. There’s no going back, he says—time for reinvention and for curiosity.

I don’t have to like it. Neither do you. But the writing is on the wall—and it’s constantly regenerating.

We’re not at a crossroads. We’re at the edge of a cliff. And I’m not the only one seeing it. Mike Davidson recently put it plainly: “the future favors the curious.” He’s right. This moment demands that designers experiment, explore, and stop waiting for someone else to define the role for them.

You don’t need a coach or a mentor for this moment. The career path is simple: jump, or stay behind. Rant and reminisce—or move forward. Look, people change careers all the time. There’s no shame in that. But experience tells me that no amount of pushback is going to fend off AI integration. It’s already here, and it’s targeting every workflow, everywhere, running on rinse-and-repeat.

Today’s headlines about AI bubbles and “regret” cycles feel familiar—like the ones we saw in the mid–90s. Back then, the pundits scoffed and swore the internet was a fad. …

So think of this moment not as a collapse—but a resize and reshaping. New tools and techniques. New outcomes and expectations. New definitions of value. Don’t compare today with yesterday. It doesn’t matter.

Redesigning Design, the Cliff in Front of Us All | Greg Storey