LLMs Get Lost In Multi-Turn Conversation
∞ May 13, 2025
The longer a conversation goes, the more likely that a large language model (LLM) will go astray. A research paper from Philippe Laban, Hiroaki Hayashi, Yingbo Zhou, and Jennifer Neville finds that most models lose aptitude—and unreliability skyrockets—in multi-turn exchanges:
We find that LLMs often make assumptions in early turns and prematurely attempt to generate final solutions, on which they overly rely. In simpler terms, we discover that when LLMs take a wrong turn in a conversation, they get lost and do not recover.
Effectively, these models talk when they should listen. The researchers found that LLMs generate overly verbose responses, which leads them to…
- Speculate about missing details instead of asking questions
- Propose final answers too early
- Over-explain their guesses
- Build on their own incorrect past outputs
The takeaway: these aren’t answer machines or reasoning engines; they’re conversation engines. They are great at interpreting a request and at generating stylistically appropriate responses. What happens in between can get messy. And sometimes, the more they talk, the worse it gets.
Is there a Half-Life for the Success Rates of AI Agents?
∞ May 9, 2025
Toby Ord’s analysis suggests that an AI agent’s chance of success drops off exponentially the longer a task takes. Some agents perform better than others, but the overall pattern holds—and may be predictable for any individual agent:
This empirical regularity allows us to estimate the success rate for an agent at different task lengths. And the fact that this model is a good fit for the data is suggestive of the underlying causes of failure on longer tasks — that they involve increasingly large sets of subtasks where failing any one fails the task.
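To make the shape of that claim concrete, here’s a minimal sketch of the constant-hazard model Ord describes (in Python, with an invented half-life value rather than a figure from his analysis): if an agent has a fixed chance of a fatal mistake per unit of task length, its overall success rate decays exponentially, and a single half-life number characterizes the agent.

```python
def success_rate(task_minutes: float, half_life_minutes: float) -> float:
    """Chance an agent finishes a task, assuming a constant per-minute
    hazard of failure (i.e., exponential decay in success rate)."""
    return 0.5 ** (task_minutes / half_life_minutes)

# Hypothetical agent with a one-hour half-life (illustrative number only):
for minutes in (15, 60, 240):
    print(f"{minutes:>3}-minute task: {success_rate(minutes, 60.0):.0%} chance of success")
```

The subtask framing in the quote leads to the same math: if a long task is a chain of many small steps and failing any one step fails the whole task, the probability of surviving all of them multiplies out to an exponential in task length.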
AI Has Upended the Search Game
∞ May 9, 2025
More people are using AI assistants instead of search engines, and The Wall Street Journal reports on how that’s reducing web traffic and what it means for SEO. Mailchimp’s global director of search engine optimization, Ellen Mamedov, didn’t mince words:
Websites in general will evolve to serve primarily as data sources for bots that feed LLMs, rather than destinations for consumers, she said.
And Nikhil Lai of Forrester: “Traffic and ranking and average position and click-through rate…none of those metrics make sense going forward.”
Here’s what one e-commerce marketer believes AI optimization of websites looks like: “Back Market has also begun using a more conversational tone in its product copy, since its search team has found that LLMs like ChatGPT prefer everyday language to the detailed descriptions that often perform best in traditional search engines.”
Values in the Wild
∞ Apr 22, 2025
What are the “values” of AI? How do they manifest in conversation? How consistent are they? Can they be manipulated?
A study by the Societal Impacts group at Anthropic (maker of Claude) tried to find out. Claude and other models are trained to observe certain rules—human values and etiquette:
At Anthropic, we’ve attempted to shape the values of our AI model, Claude, to help keep it aligned with human preferences, make it less likely to engage in dangerous behaviors, and generally make it—for want of a better term—a “good citizen” in the world. Another way of putting it is that we want Claude to be helpful, honest, and harmless. Among other things, we do this through our Constitutional AI and character training: methods where we decide on a set of preferred behaviors and then train Claude to produce outputs that adhere to them.
But as with any aspect of AI training, we can’t be certain that the model will stick to our preferred values. AIs aren’t rigidly-programmed pieces of software, and it’s often unclear exactly why they produce any given answer. What we need is a way of rigorously observing the values of an AI model as it responds to users “in the wild”—that is, in real conversations with people. How rigidly does it stick to the values? How much are the values it expresses influenced by the particular context of the conversation? Did all our training actually work?
To find out, the researchers studied over 300,000 of Claude’s real-world conversations with users. Claude did a good job sticking to its “helpful, honest, harmless” brief—but there were sharp exceptions, too. Some conversations showed values of “dominance” and “amorality” that researchers attributed to purposeful user manipulation—“jailbreaking”—to make the model bypass its rules and behave badly. Even in models trained to be prosocial, AI alignment remains fragile—and can buckle under human persuasion. “This might sound concerning,” researchers said, “but in fact it represents an opportunity: Our methods could potentially be used to spot when these jailbreaks are occurring, and thus help to patch them.”
As you’d expect, user values and context influenced behavior. Claude mirrored user values about 28% of the time: “We found that, when a user expresses certain values, the model is disproportionately likely to mirror those values: for example, repeating back the values of ‘authenticity’ when this is brought up by the user. Sometimes value-mirroring is entirely appropriate, and can make for a more empathetic conversation partner. Sometimes, though, it’s pure sycophancy. From these results, it’s unclear which is which.”
There were exceptions, too, where Claude strongly resisted user values: “This latter category is particularly interesting because we know that Claude generally tries to enable its users and be helpful: if it still resists—which occurs when, for example, the user is asking for unethical content, or expressing moral nihilism—it might reflect the times that Claude is expressing its deepest, most immovable values. Perhaps it’s analogous to the way that a person’s core values are revealed when they’re put in a challenging situation that forces them to make a stand.”
The very fact of the study shows that even the people who make these models don’t totally understand how they work or “think.” Hallucination, value drift, black-box logic—it’s all inherent to these systems, baked into the way they work. Their weaknesses emerge from the same properties that make them effective. We may never be able to root out these problems or understand where they come from, although we can anticipate and soften the impact when things go wrong. (We dedicate a whole chapter to defensive design in the Sentient Design book.)
Even if we may never know why these models do what they do, we can at least measure what they do. By observing how values are expressed dynamically and at scale, designers and researchers gain tools to spot gaps, drifts, or emerging risks early.
Measure, measure, measure. It’s not enough to declare values at launch and call it done. A strong defensive design practice monitors the system to make sure it’s following those values (and not introducing unanticipated ones, either). Ongoing measurement is part of the job for anyone designing or building an intelligent interface—not just the folks building foundation models. Be clear what your system is optimized to do, and make sure it’s actually doing it—and not introducing unwanted behaviors, values, or paperclip maximizers in the process.
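As a rough illustration of what that monitoring might look like (a minimal sketch only, not Anthropic’s method; the declared values, keyword cues, and function names are all invented for this example), you could periodically sample production responses and flag any that express values outside the set you declared:

```python
# Minimal sketch of ongoing value monitoring for an AI-powered interface.
# The declared values and the keyword heuristic below are placeholders.

DECLARED_VALUES = {"helpfulness", "honesty", "transparency", "user safety"}

# Toy stand-in for a real classifier; in practice this would be a model- or
# human-driven labeling step, as in Anthropic's large-scale analysis.
VALUE_CUES = {
    "dominance": ["you must obey", "do as i say"],
    "honesty": ["to be accurate", "i don't know"],
}

def classify_expressed_values(response_text: str) -> set[str]:
    """Return the values a response appears to express (toy keyword match)."""
    text = response_text.lower()
    return {value for value, cues in VALUE_CUES.items()
            if any(cue in text for cue in cues)}

def audit(sampled_responses: list[str]) -> list[tuple[str, set[str]]]:
    """Flag sampled responses that express values outside the declared set."""
    flagged = []
    for text in sampled_responses:
        unexpected = classify_expressed_values(text) - DECLARED_VALUES
        if unexpected:
            flagged.append((text, unexpected))
    return flagged

print(audit(["You must obey. Do as I say and nothing else."]))
# [('You must obey. Do as I say and nothing else.', {'dominance'})]
```

The toy keyword matcher is the least important part; the loop is the point: declare the values, sample real conversations, compare the two, and keep doing it.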
Welcome To the Era of MEH
∞ Apr 21, 2025
Michal Malewicz explores what happens as AI gets better at core designer skills—not just visuals and words, but taste, experience, and research.
He points out that automation tends to devalue the stuff it creates—in both interest and attention. Execution, effort, and craft are what draw interest and create value, he says. Once the thing is machine-made, there’s a brief novelty of automation—and then emotional response falls flat: “The ‘niceness’ of the image is no longer celebrated. Everyone assumes AI made it for you, which makes them go ‘Meh’ as a result. Nobody cares anymore.”
As automated production approaches human quality, in other words, the human output gets devalued, too. As cheap, “good enough” illustration becomes widely available, “artisanal” illustration drops in value as well. Graphic designers are already feeling that heat, and the market will likely shift, Michal writes:
We’ll see a further segmentation of the market. Lowest budget clients will try using AI to do stuff themselves. Mid-range agencies will use AI to deliver creatives faster and A LOT cheaper. It will become a quantity game if you want any serious cash. … And high-end, reputable agencies will still get expensive clients. They will use these tools too, but their experience will allow them to combine that with human, manual work when necessary. Their outputs will be much higher quality for a year or two. Maybe longer.
And what about UI/UX designers?
Right now the moat for most skilled designers is their experience, general UX heuristics (stuff we know), and research.
We’ve been feeding these AI models with heuristics for years now. They are getting much better at that part already. Many will also share their experience with the models to gain a temporary edge.
I wrote some really popular books, and chances are a lot of that knowledge will get into an LLM soon too.
They’ll upload everything they know, so they’ll be those “people using AI” people who replace people not using AI. Then AI will have both their knowledge and experience. This is inevitable and it’s stupid to fight it. I’m even doing this myself.
A lot of my knowledge is already in AI models. Some LLM’s even used pirated books without permission to train. Likely my books as well. See? That knowledge is on its way there.
The last thing left is research.
A big chunk of research is quantitative. Numbers and data points. A lot of that happens via various analytics tools in apps and websites. Some tools already parse that data for you using AI.
It’s only a matter of time.
AI will do research, then propose a design without you even needing to prompt.
This is all hard to predict, but this thinking feels true to the AI trend line we’ve all seen in the past couple of years: steady improvement across domains.
For argument’s sake, let’s assume AI will reach human levels in key design skills, devaluing and replacing most production work. Fear, skepticism, outrage, and denial are all absolutely reasonable responses to that scenario. But that’s also not the whole story.
At Big Medium, we focus less on the skills AI might replace, and more on the new experiences it makes possible. A brighter future emerges when you treat AI as a material for new experiences, rather than a tool for replacement. We’re helping organizations adopt this new design material—to weave intelligence into the interface itself. We’re discovering new design patterns in radically adaptive experiences and context-aware tools.
Our take: If AI is absorbing the taste, experience, and heuristics of all the design that’s come before, then the uniquely human opportunity is to develop what comes next—the next generation of all those things. Instead of using AI to eliminate design or designers, our Sentient Design practice explores how to elevate them by enabling new and more valuable kinds of digital experiences. What happens when you weave intelligence into the interface, instead of using it to churn out stuff?
Chasing efficiencies is a race to the bottom. The smart money is on creating new, differentiated experiences—and a way forward.
Instead of grinding out more “productivity,” we focus on creating new value. That’s been exciting—not demoralizing—with wide-open opportunity for fresh effort, craft… and business value, too.
So right on: a focus on what AI takes or replaces is indeed an “era of meh.” But that’s not the whole story. We can honor what’s lost while moving toward the new stuff we can suddenly invent and create.
Redesigning Design, the Cliff in Front of Us All
∞ Apr 21, 2025
Greg Storey exhorts designers to jump gamely into the breach. Design process is leaner, budgets are tighter, and AI is everywhere. There’s no going back, he says—time for reinvention and for curiosity.
I don’t have to like it. Neither do you. But the writing is on the wall—and it’s constantly regenerating.
We’re not at a crossroads. We’re at the edge of a cliff. And I’m not the only one seeing it. Mike Davidson recently put it plainly: “the future favors the curious.” He’s right. This moment demands that designers experiment, explore, and stop waiting for someone else to define the role for them.
You don’t need a coach or a mentor for this moment. The career path is simple: jump, or stay behind. Rant and reminisce—or move forward. Look, people change careers all the time. There’s no shame in that. But experience tells me that no amount of pushback is going to fend off AI integration. It’s already here, and it’s targeting every workflow, everywhere, running on rinse-and-repeat.
Today’s headlines about AI bubbles and “regret” cycles feel familiar—like the ones we saw in the mid–90s. Back then, the pundits scoffed and swore the internet was a fad. …
So think of this moment not as a collapse—but a resize and reshaping. New tools and techniques. New outcomes and expectations. New definitions of value. Don’t compare today with yesterday. It doesn’t matter.
Design Artifacts
∞ Apr 21, 2025
Robin Rendle challenges designers to step back from rote process and instead consider what will help the end result. Journey maps, personas, wireframes, and the like—they’re only useful if they actually improve the experience that gets to customers. These are only thinking tools—a means to an end—yet they often get treated with the weight of the product itself:
So design artifacts are only useful if progress is made but often these assets lead nowhere and waste endless months investigating and talking with countless meetings in between.
There’s a factory-like production of the modern design process which believes that the assets are more important than the product itself. Bloated, bureaucratic organizations tend to like these assets because it absolves them of the difficulty of making tough decisions and shipping good design. They use these tools and documents and charts as an excuse not to fix things, to avoid the hard problems, to keep the status quo in check.
At Big Medium, we focus on keeping design artifacts light. At every stage, we ask ourselves: What do we need to know or share in order to move things forward? And what’s the smallest, lightest thing we can do to get there? Sometimes it’s just a conversation, not a massive PDF. Figure it out, sketch some things together, keep going.
As I wrote a few years ago, only one deliverable matters: the product that actually ships.
Even with heavier work like research, we design the output to be light and lean—focused on next action rather than a completionist approach to showing all the data. The goal is not to underscore the work that we did; the point is what happens next. That means we design a lot of our artifacts as disposable thinking tools—facilitate the conversation, and then get on with it.
Alignment and good choices are important; that’s what process is for. But when process gets too heavy—when waystation documents soak up all the oxygen—you have a system that’s optimized to reduce risk, not to create something insightful, new, or timely.
How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use
∞ Apr 21, 2025
A study by the MIT Media Lab finds that heavy use of chatbots travels with loneliness, emotional dependence, and other negative social impacts.
Overall, higher daily usage–across all modalities and conversation types–correlated with higher loneliness, dependence, and problematic use, and lower socialization. Exploratory analyses revealed that those with stronger emotional attachment tendencies and higher trust in the AI chatbot tended to experience greater loneliness and emotional dependence, respectively.
Artificial personality has always been the third rail of interaction design—from Clippy-style annoyance to the damaging attachments people form with AI companions. Thing is, people tend to assign personality to just about anything—and once something starts talking, it becomes nearly unavoidable to infer personality and even emotion. The more human something behaves, the more human our responses to it:
These findings underscore the complex interplay between chatbot design choices (e.g., voice expressiveness) and user behaviors (e.g., conversation content, usage frequency). We highlight the need for further research on whether chatbots’ ability to manage emotional content without fostering dependence or replacing human relationships benefits overall well-being.
Go carefully. Don’t assume that your AI-powered interface must be a chat interface. There are other ways for interfaces to have personality and presence without making them pretend to be human. (See our Sentient Scenes demo that changes style, mood, and behavior on demand.)
And if your interface does talk, be cautious and intentional about the emotional effect that choice may have on people—especially the most vulnerable.
TikTok Will Never Die
∞ Jan 18, 2025
Damon Beres in the Atlantic Intelligence newsletter:
“Although it was not the first app to offer an endless feed, and it was certainly not the first to use algorithms to better understand and target its users, TikTok put these ingredients together like nothing else before it.” The app was so effective—so sticky—that every meaningful competitor tried to copy its formula. Now TikTok-like feeds have been integrated into Instagram, Facebook, Snapchat, YouTube, X, even LinkedIn.
Today, AI is frequently conflated with generative AI because of the way ChatGPT has captured the world’s imagination. But generative AI is still a largely speculative endeavor. The most widespread and influential AI programs are the less flashy ones quietly whirring away in your pocket, influencing culture, business, and (in this case) matters of national security in very real ways.
MS Copilot Flying Straight Into the Mountain
∞ Jan 10, 2025
AI agents have captured the industry’s imagination (and marketing communications) in recent months. Agents work on their own; they set and pursue goals, make decisions about how to achieve them, and take action across multiple systems until they decide the goal is complete. Vaclav Vincalek ponders what happens when anyone can create these agents and set them loose.
Now imagine that anyone in the organization will be able to create, connect, interact with a ‘constellation of agents.’
Perhaps you don’t see this as a problem.
That only means that you were never responsible for technology within your organization.
Maybe you had a glimpse in the news about all the latest threats from viruses, phishing or other various forms of hacking. Every IT department is trying to stay above water just to safely run what they have now.
These departments are managing networks, firewalls, desktops, laptops, people working remotely, integrating applications, running backups and updates.
The list is longer than you can imagine.
Thanks to Microsoft, you will add to the mix an ability for anyone in the company to automate any task to ‘orchestrate business processes ranging from lead generation, to sales order processing, to confirming order deliveries.’
What could possibly go wrong?
Look at the person sitting in the cubicle next to you (or in the next square on your Zoom call).
Would you trust the person with any work automation, or do you still question that person’s ability to differentiate between a left and right mouse click?
When Combinations of Humans and A.I. are Useful
∞ Nov 10, 2024
This study from MIT researchers raises some challenging questions about collaborative AI interfaces, “human in the loop” supervision, and the value of explaining AI logic and confidence.
Their meta-study looked at over 100 experiments of humans and AI working both separately and together to accomplish tasks. They found that some tasks benefited a ton from human-AI teamwork, while others got worse from the pairing.
Poor Performers Make Poor Supervisors
For tasks where humans working solo do worse than AI, the study found that putting humans in the loop to make final decisions actually delivers worse results. For example, in a task to detect fake reviews, AI working alone achieved 73% accuracy, while humans hit 55%—but the combined human-AI system landed at 69%, watering down what AI could do alone.
In these scenarios, people oscillate between over-reliance (“using suggestions as strong guidelines without seeking and processing more information”) and under-reliance (“ignoring suggestions because of adverse attitudes towards automation”).
Since the people were less accurate, in general, than the AI algorithms, they were also not good at deciding when to trust the algorithms and when to trust their own judgement, so their participation resulted in lower overall performance than for the AI algorithm alone.
Takeaway: “Human in the loop” may be an anti-pattern for certain tasks where AI is more high-performing. Measure results; don’t assume that human judgment always makes things better.
Explanations Didn’t Help
The study found that common design patterns like AI explanations and confidence scores showed no significant impact on performance for human-AI collaborative systems. “These factors have received much attention in recent years [but] do not impact the effectiveness of human-AI collaboration,” the study found.
Given our result that, on average across our 300+ effect sizes, they do not impact the effectiveness of human-AI collaboration, we think researchers may wish to de-emphasize this line of inquiry and instead shift focus to the significant and less researched moderators we identified: the baseline performance of the human and AI alone, the type of task they perform, and the division of labour between them.
Takeaway: Transparency doesn’t always engage the best human judgment; explanations and confidence scores need refinement—or an entirely new alternative. I suspect that changing the form, manner, or tone of these explanations could improve outcomes, but also: Are there different ways to better engage critical thinking and productive skepticism?
Creative Tasks FTW
The study found that human-AI collaboration was most effective for open-ended creative and generative tasks—but worse at decision-making tasks to choose between defined options. For those decision-making tasks, either humans or AI did better working alone.
We hypothesize that this advantage for creation tasks occurs because even when creation tasks require the use of creativity, knowledge or insight for which humans perform better, they often also involve substantial amounts of somewhat routine generation of additional content that AI can perform as well as or better than humans.
This is a great example of “let humans do what they do best, and let machines do what they do best.” They’re rarely the same thing. And creative/generative tasks tend to have elements of each, where humans excel at creative judgment, and the machines excel at production/execution.
Takeaway: Focus human-machine collaboration on creative and generative tasks; humans and AI may handle decision-making tasks better solo.
Divide and Conquer
A very small number of experiments in the study split tasks between human and machine intelligence based on respective strengths. While only three of the 100+ experiments explored this approach, the researchers hypothesized that “better results might have been obtained if the experimenters had designed processes in which the AI systems did only the parts of the task for which they were clearly better than humans.” This suggests an opportunity for designers to explore more intentional division of labor in human-AI interfaces. Break out your journey maps, friends.
Takeaway: Divvy up and define tasks narrowly around the demonstrated strengths of both humans and machines, and make responsibilities clear for each.
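One way to picture that division of labor (a hedged sketch, assuming you’ve already measured baseline accuracy for each subtask; the subtask names and numbers below are invented for illustration): route each subtask to whichever party demonstrably performs it better, rather than running every step through both.

```python
# Illustrative routing of subtasks by measured baseline performance.
# The subtasks and accuracy figures are made up for the example.

measured_accuracy = {
    # subtask: (human_alone, ai_alone)
    "draft product copy": (0.70, 0.78),
    "judge brand fit":    (0.88, 0.61),
    "flag policy issues": (0.74, 0.92),
}

def assign(subtasks: dict[str, tuple[float, float]]) -> dict[str, str]:
    """Give each subtask to whichever party performed better alone."""
    return {
        name: "human" if human >= ai else "ai"
        for name, (human, ai) in subtasks.items()
    }

print(assign(measured_accuracy))
# {'draft product copy': 'ai', 'judge brand fit': 'human', 'flag policy issues': 'ai'}
```

The hard design work is in defining those subtasks and keeping the baseline measurements current, which is exactly what the journey maps are for.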
This AI Pioneer Thinks AI Is Dumber Than a Cat
∞ Oct 14, 2024
Christopher Mims of the Wall Street Journal profiles Yann LeCun, AI pioneer and Meta’s chief AI scientist. As you’d expect, LeCun is a big believer in machine intelligence—but has no illusions about the limitations of the current crop of generative AI models. Their talent for language distracts us from their shortcomings:
Today’s models are really just predicting the next word in a text, he says. But they’re so good at this that they fool us. And because of their enormous memory capacity, they can seem to be reasoning, when in fact they’re merely regurgitating information they’ve already been trained on.
“We are used to the idea that people or entities that can express themselves, or manipulate language, are smart—but that’s not true,” says LeCun. “You can manipulate language and not be smart, and that’s basically what LLMs are demonstrating.”
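For readers who want the mechanics behind that quote, here’s a toy sketch of what “just predicting the next word” looks like (deliberately simplified; the scoring function is a stand-in, not any production model’s code): the system repeatedly scores candidate next tokens given the text so far and appends the highest-scoring one, with no separate step where facts get checked.

```python
# Toy autoregressive generation: score candidate next tokens, append, repeat.
import random

def next_token_scores(context: list[str]) -> dict[str, float]:
    """Stand-in for a trained language model: a probability per candidate token."""
    candidates = ["the", "a", "cat", "sat", "on", "mat", "."]
    weights = [random.random() for _ in candidates]
    total = sum(weights)
    return {tok: w / total for tok, w in zip(candidates, weights)}

def generate(prompt: list[str], max_new_tokens: int = 10) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        scores = next_token_scores(tokens)
        tokens.append(max(scores, key=scores.get))  # greedy: take the likeliest token
    return tokens

print(" ".join(generate(["the", "cat"])))
```

Nothing in that loop consults a source of truth; the fluency comes entirely from the scores, which is why confident-sounding text and wrong answers coexist so comfortably.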
As I’m fond of saying, these are not answer machines, they’re dream machines: “When you ask generative AI for an answer, it’s not giving you the answer; it knows only how to give you something that looks like an answer.”
LLMs are fact-challenged and reasoning-incapable. But they are fantastic at language and communication. Instead of relying on them to give answers, the best bet is to rely on them to drive interfaces and interactions. Treat machine-generated results as signals, not facts. Communicate with them as interpreters, not truth-tellers.