
What We’re Reading

ai

John Maeda: Josh Clark's 2019 talk on Design and AI

∞ Apr 20, 2024

Design legend John Maeda found some old gold in this 2019 talk about Design and AI from Big Medium’s Josh Clark:

What’s especially awesome about Josh’s talk is that it precedes the hullabaloo of the chatgpt revolution. This is a pretty awesome talk by Josh. He has been trailblazing machine learning and design for quite a long time.

The talk, AI Is Your New Design Material, addresses use cases and applications for AI and machine learning, along with some of the challenges of designing with (and around) the eccentricities of machine intelligence.

(Also be sure to check out John’s excellent SXSW Design in Tech Report, “Design Against AI.”)

Josh Clark's 2019 talk on Design and AI | John Maeda
ai

US Air Force Confirms First Successful AI Dogfight

∞ Apr 20, 2024

Emma Roth reports for The Verge:

After carrying out dogfighting simulations using the AI pilot, DARPA put its work to the test by installing the AI system inside its experimental X–62A aircraft. That allowed it to get the AI-controlled craft into the air at the Edwards Air Force Base in California, where it says it carried out its first successful dogfight test against a human in September 2023.

Human pilots were on board the X–62A with controls to disable the AI system, but DARPA says the pilots didn’t need to use the safety switch “at any point.” The X–62A went against an F–16 controlled solely by a human pilot, where both aircraft demonstrated “high-aspect nose-to-nose engagements” and got as close as 2,000 feet at 1,200 miles per hour. DARPA doesn’t say which aircraft won the dogfight, however.

What could possibly go wrong?

US Air Force Confirms First Successful AI Dogfight | The Verge
ai

Looking for AI Use Cases

∞ Apr 20, 2024

Benedict Evans makes a savvy comparison between the current Generative AI moment and the early days of the PC. While the new technology is impressive, it’s not (yet) evident how it fits into the everyday lives or workflows of most people. Basically: what do we do with this thing? For many, ChatGPT and its cousins remain curiosities—fun toys to tinker with, but little more so far.

This wouldn’t matter much (‘man says new tech isn’t for him!’), except that a lot of people in tech look at ChatGPT and LLMs and see a step change in generalisation, towards something that can be universal. A spreadsheet can’t do word processing or graphic design, and a PC can do all of those but someone needs to write those applications for you first, one use-case at a time. But as these models get better and become multi-modal, the really transformative thesis is that one model can do ‘any’ use-case without anyone having to write the software for that task in particular.

Suppose you want to analyse this month’s customer cancellations, or dispute a parking ticket, or file your taxes - you can ask an LLM, and it will work out what data you need, find the right websites, ask you the right questions, parse a photo of your mortgage statement, fill in the forms and give you the answers. We could move orders of magnitude more manual tasks into software, because you don’t need to write software to do each of those tasks one at a time. This, I think, is why Bill Gates said that this is the biggest thing since the GUI. That’s a lot more than a writing assistant.

It seems to me, though, that there are two kinds of problem with this thesis.

The first problem, Evans says, is that the models are still janky. They trip—all the time—on problems that are moderately complex or just a few degrees left of familiar. That’s a technical problem, and the systems are getting better at a startling clip.

The second problem is more twisty—and less clear how it will resolve: as a culture broadly, and as the tech industry specifically, our imaginations haven’t quite caught up with truly useful applications for LLMs.

It reminds me a little of the early days of Google, when we were so used to hand-crafting our solutions to problems that it took time to realise that you could ‘just Google that’. Indeed, there were even books on how to use Google, just as today there are long essays and videos on how to learn ‘prompt engineering.’ It took time to realise that you could turn this into a general, open-ended search problem, and just type roughly what you want instead of constructing complex logical boolean queries on vertical databases. This is also, perhaps, matching a classic pattern for the adoption of new technology: you start by making it fit the things you already do, where it’s easy and obvious to see that this is a use-case, if you have one, and then later, over time, you change the way you work to fit the new tool.

The arrival of startling new technologies often works this way, as we puzzle how to shoehorn them into old ways of doing things. In my essay Of Nerve and Imagination, I framed this less as a problem of imagination than of nerve—the cheek to step out of old assumptions of “how things are done” and into a new paradigm. I wrote that essay just as the Apple Watch and other smartwatches were landing, adding yet another device to a busy ecosystem. Here’s what I said then:

The significance of new combinations tends to escape us. When someone embeds a computer inside a watch, it’s all too natural for us to assume that it will be used like either a computer or a watch. A smartphone on your wrist! A failure of nerve prevents us from imagining the entirely new thing that this combination might represent. The habits of the original technology blind us to the potential opportunities of the new.

Today’s combinations are especially hard to parse because they’re no longer about individual instances of technology. The potential of a smartwatch, for example, hinges not only on the combination of its component parts but on its combination with other smart and dumb objects in our lives.

As we weigh the role of the smartwatch, we have to muster the nerve to imagine: How might it talk to other devices? How can it interact with the physical world? What does it mean to wear data? How might the watch signal identity in the digital world as we move through the physical? How might a gesture or flick of the wrist trigger action around me? What becomes possible if smart watches are on millions of wrists? What are the social implications? What new behaviors will the watch channel and shape? How will it change the way I use other devices? How might it knit them together?

As we begin to embed technology into everything—when anything can be an interface—we can no longer judge each new gadget on its own. The success of any new interface depends on how it controls, reflects, shares, or behaves in a growing community of social devices.

Similarly, how do LLMs fit into a growing community of interfaces, services, and indeed other LLMs? As we confront a new and far more transformational technology in Generative AI, it’s up to designers and product folks to summon the nerve to understand not only how it fits into our tech ecosystem, but how it changes the way we work or think or interact.

Easier said than done, of course. And Evans writes that we’re still finding the right level for working with this technology as both users and product makers. Will we interact with these systems directly as general-purpose, “ask me anything” (or “ask me to do anything”) companions? Or will we instead focus on narrower applications, with interfaces wrapped around purpose-built AI to help focus and nail specific tasks? Can the LLMs themselves be responsible for presenting those interfaces, or do we need to imagine and build each application one at a time, as we traditionally have? There’s an ease and clarity to that narrow interface approach, Evans writes, but it diverges from loftier visions for what the AI interface might be.

Evans writes:

A GUI tells the users what they can do, but it also tells the computer everything we already know about the problem. Can the GUI itself be generative? Or do we need another whole generation of [spreadsheet inventor] Dan Bricklins to see the problem, and then turn it into apps, thousands of them, one at a time, each of them with some LLM somewhere under the hood?

On this basis, we would still have an orders of magnitude change in how much can be automated, and how many use-cases can be found for LLMs, but they still need to be found and built one by one. The change would be that these new use-cases would be things that are still automated one-at-a-time, but that could not have been automated before, or that would have needed far more software (and capital) to automate. That would make LLMs the new SQL, not the new HAL9000.

Looking for AI Use Cases | Benedict Evans
ai

AI + Design: Figma Users Tell Us What’s Coming Next

∞ Mar 23, 2024

Figma surveyed 1800+ of its users about their companies’ expectations and adoption of AI. Responses from this audience of designers, executives, and developers indicate that AI is making its way into most companies’ product pipelines, but the solutions they’re shipping are… uninspired.

Eighty-nine percent of respondents say AI will have at least some impact on their company’s products and services in the next 12 months; 37% say the impact will be “significant or transformative.” The executives overseeing company decision-making are even more bullish and much more likely to see AI as “important to company goals.”

But this kind of thinking presents its own risk. Our survey suggests AI is largely in the experimental phase of development, with 72% of those who have built AI into products saying it plays a minor or non-essential role. Perhaps as a result, most respondents feel it’s too soon to tell if AI is making an impact. Just one third of those surveyed reported improvements to metrics like revenue, costs, or market share because of AI, and fewer than one third say they’re proud of what they shipped.

Worth repeating: Fewer than one third say they’re proud of what they shipped. Figma also says that separate research has turned up “AI feature fatigue” among a general civilian audience of product end-users.

What I take from this is that there’s general confidence that “there’s some there there,” but what that means isn’t yet clear to most companies. There’s a big effort to jam AI into products without first figuring out the right problem to solve, or how to do it in an elegant way. Exhibit #1: chatbots bolted onto everything. Early steps have felt like missteps.

“AI feature fatigue” is a signal in itself. It says that there’s too much user-facing focus on the underlying technology instead of how it’s solving an actual problem. The best AI features don’t shout that they’re AI—they just quietly do the work and get out of the way.

Hey, design is hard. Creating new interaction models is even harder; it requires stepping away from known habits and “best practice” design patterns. That’s the work right now. Algorithm engineers and data scientists have shown us what’s possible with AI and machine learning. It’s up to designers to figure out what to do with it. It’s obviously more than slapping an “AI label” on it, or bolting on a chatbot. The survey suggests that product teams understand this, but haven’t yet landed on the right solutions.

This is a huge focus for us in our client work at Big Medium. Through workshops and product-design engagements, we’re helping our clients make sense of just what AI means for them. Not least, that means helping the designers we work with to understand AI as a design material—the problems it’s good at solving, the emergent design patterns that come out of that, and the ones that fall away.

As an industry, we’re entering a new chapter of digital experience. The growing pains are evident, but here at Big Medium, we’re seeing solid solutions emerge in product and interaction design.

AI + Design: Figma Users Tell Us What’s Coming Next | Figma Blog
sentient design

This is the Moment to Reinvent Your Product

∞ Mar 23, 2024

Alex Klein has been on a roll with UX opportunities for AI. At UX Collective, he asks: will you become an AI shark or fairy?

The sharks will prioritize AI that automates parts of their business and reduces cost. These organizations smell the sweet, sweet efficiency gains in the water. And they’re salivating at AI’s promised ability to maintain productivity with less payroll (aka people).

The fairies will prioritize AI that magically transforms their products into something that is shockingly more valuable for customers. These organizations will leverage AI to break free from the sameness of today’s digital experiences–in order to drive lifetime value and market share.

No, they’re not mutually exclusive. But every company will develop a culture that prioritizes one over the other.

I believe the sharks are making a big mistake: they will commoditize their product precisely when its potential value is exploding.

A broader way to name this difference of approach: will you use AI to get better/faster at things you do already, or will you invent new ways to do things that weren’t previously possible (and maybe not just new “ways”—maybe new “things” entirely)?

Both are entirely legit, by the way. A focus on efficiency will produce more predictable ROI (safe, known), while a focus on new paradigms can uncover opportunities that could be exponentially more valuable… but also maybe not (future-facing, uncertain). The good news: exploring those paradigms in the right way can reduce that uncertainty quickly.

I think of four categories of opportunities that AI and machine learning afford, and the most successful companies will explore all of them:

  1. Be smarter/faster with problems we already solve. The machines are great at learning from example. Show the robots how to do something enough times, and they’ll blaze through the task.

  2. Solve new problems, ask new questions. As the robots understand their worlds with more nuance, they can tackle tasks that weren’t previously possible. Instead of searching by keyword, for example, machines can now search by sentiment or urgency (think customer service queues). Or instead of offering a series of complex decision menus, the machines can propose one or more outcomes, or just do the task for you.

  3. Tap new data sources. The robots can now understand all the messy ways that humans communicate, unlocking information that was previously opaque to them. Speech, handwriting, video, photos, sketches, facial expression… all are available not only as data but as surfaces for interaction.

  4. See invisible patterns, make new connections. AI and machine learning are vast pattern-matching systems that see the world in clusters and vectors and probabilities that our human brains don’t easily discern. How can we partner with them to act on these useful new signals?

Klein’s “sharks” focus on the first item above, while the “fairies” focus on the transformative possibilities of the last three.

That first efficiency-focused opportunity can be a great place to start with AI and machine learning. The problems and solutions are familiar, and the returns fairly obvious. For digital leaders confronting lean times, enlisting the robots for efficiency has to be a focus. And indeed, we’re doing a ton of that at Big Medium with how we use AI to build and maintain design systems.

But focusing solely on efficiency ignores the fact that we’ve already entered a new era of digital experience that will solve new problems in dramatically new ways for both company and customer. Some organizations have been living in that era for a while, and their algorithms already ease and animate everyday aspects of our lives (for better and for worse). Even there, we’re only getting started.

Sentient Design is my term for this emerging future of AI-mediated interfaces—experiences that feel almost self-aware in their response to user needs. In Big Medium’s product design projects, we’re helping our clients explore and capitalize on these emerging Sentient Design patterns—as embedded features or as wholesale products.

Companion/agent experiences are one novel aspect of that work, and Klein offers several useful examples of this approach with what he calls “software as a partnership.” There are other strains of Sentient Design that we’re building into products and features, too, and they’re proving out. We’ll be sharing more of those design patterns here; stay tuned!

Meanwhile, if your team isn’t yet working with AI, it’s time. And if you’re still in the efficiency phase, get comfortable with the uncomfortable next step of reinvention.

This Is the Moment To Reinvent Your Product | UX Collective
sentient design

The 3 Capabilities Designers Need To Build for the AI Era

∞ Mar 22, 2024

At UX Collective, Alex Klein shares three capabilities designers need to build for the AI era:

  1. AI strategy: how can we use AI to solve legit customer problems (not just bolted-on “we have AI!” features)?
  2. AI interaction design: what new experiences (and risks) does AI introduce?
  3. Model design: prompt-writing means that designers can collaborate with engineers to guide how algorithms work; how can we use designerly skills to improve models?

I agree with all of it, but I’m especially excited about the new problems and emerging interaction models that AI invites us to address and explore. I love the way Klein puts it; it’s why I’m sharing his article here:

We’ve moved from designing “waterslides,” where we focused on minimizing friction and ensuring fluid flow — to “wave pools,” where there is no clear path and every user engages in a unique way.

Over the past several years, the more that I’ve worked with AI and machine learning—with robot-generated content and robot-generated interaction—the more I’ve had to accept that I’m not in control of that experience as a designer. And that’s new. Interaction designers have traditionally designed a fixed path through information and interactions that we control and define. Now, when we allow the humans and machines to interact directly, they create their own experience outside of the tightly constrained paths we’re accustomed to providing.

We haven’t completely lost control, of course. We can choose when and where to allow this free-form interaction, blending those opportunities within controlled interaction paths. This has implications worth exploring both in personal practice and as an industry. We’ve been working in all of these areas in our product work at Big Medium:

  • Sentient design. This is the term I’ve been using for AI-mediated interfaces. When the robots take on the responsibility for responding to humans, what becomes possible? What AI-facilitated experiences lie beyond the current fascination with chatbots? How might the systems themselves morph and adapt to present interfaces and interaction based on the user’s immediate need and interest? This doesn’t mean that every interface becomes a fever dream of information and interaction, but it does mean moving away from fixed templates and set UI patterns.

  • Defensive design. We’re used to designing for success and the happy path. When we let humans and robots interact directly, we have to shift to designing for failure and uncertainty. We have to design defensively: consider what could go wrong, prevent those issues where we can, and provide a gentle landing when we fail.

  • Persona-less design. As we get the very real ability to respond to users in a hyper-personalized way, do personas still matter? Is it relevant or useful to define broad categories of people or mindsets, when our systems are capable of addressing the individual and their mindset in the moment? UX tools like personas and journey maps may need a rethink. At the very least, we have to reconsider how we use them and in which contexts of our product design and strategy. As always, let’s understand whether our tools still fit the job. It might be that the robots tell us more about our users than we can tell the robots.

These are exciting times, and we’re learning a ton. At Big Medium, even though we’ve been working for years with machine learning and AI, we’re discovering new interaction models every day—and fresh opportunities to collaborate with the robots. We’re entering a new chapter of user experience and interaction design. It’s definitely a moment to explore, think big, and splash in puddles—or as Klein might put it, leave the waterslide to take a swim in the wave pool.

The 3 Capabilities Designers Need To Build for the AI Era | UX Collective
ai

A Coder Considers the Waning Days of the Craft

∞ Feb 19, 2024

In the New Yorker, writer and programmer James Somers shares his personal journey discovering just how good AI is at writing code—and what this might mean both individually and for the industry: A Coder Considers the Waning Days of the Craft. “Coding has always felt to me like an endlessly deep and rich domain. Now I find myself wanting to write a eulogy for it,” he writes. “What will become of this thing I’ve given so much of my life to?”

Software engineers, as a species, love automation. Inevitably, the best of them build tools that make other kinds of work obsolete. This very instinct explained why we were so well taken care of: code had immense leverage. One piece of software could affect the work of millions of people. Naturally, this sometimes displaced programmers themselves. We were to think of these advances as a tide coming in, nipping at our bare feet. So long as we kept learning we would stay dry. Sound advice—until there’s a tsunami.

Somers travels through several stages of amazement (and grief?) as he gets GPT–4 to produce, in seconds, working code that would normally take him hours or days—or that he doubts he could write at all. If the robots are already so good at writing production-ready code, then what’s the future of the human coder?

Here at Big Medium, we’re wrestling with the same stuff. We’re already using AI (and helping our clients to do the same) to do production engineering that we ourselves used to do: writing front-end code, translating code from one web framework to another, evaluating code quality, writing automated tests. It’s clear that these systems outstrip us for speed and, in some ways, technical execution.

It feels to me, though, that it’s less our jobs that are being displaced than where our attention is focused. We have a new and powerful set of tools that give us room to focus more on the “what” and the “why” while we let the robots worry about the “how.” But our new robot colleagues still need some hand-holding along the way. In 2018, Benedict Evans wrote that machine learning “gives you infinite interns, or, perhaps, infinite ten year olds”—powerful but, in important ways, unsophisticated. AI has come a long, long way in the six years since, but it still misses the big picture and fails to understand human context in a general and reliable way.

Somers writes:

You can’t just say to the A.I., “Solve my problem.” That day may come, but for now it is more like an instrument you must learn to play. You have to specify what you want carefully, as though talking to a beginner. … I found myself asking GPT–4 to do too much at once, watching it fail, and then starting over. Each time, my prompts became less ambitious. By the end of the conversation, I wasn’t talking about search or highlighting; I had broken the problem into specific, abstract, unambiguous sub-problems that, together, would give me what I wanted.
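
Somers is describing conversations in ChatGPT, but the same decomposition applies when you script the model. Here’s a minimal sketch, assuming the openai Python package and an API key in the environment; the sub-tasks shown are hypothetical:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    # Send one small, unambiguous sub-problem to the model.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Chain narrow sub-prompts instead of one sprawling "solve my problem" ask;
# each step's output becomes the next step's input.
schema = ask("Write a JSON Schema for a list of search results with title and url.")
parser = ask(f"Write a Python function that validates documents against this schema:\n{schema}")
tests = ask(f"Write pytest tests for that function:\n{parser}")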

Once again, technology is pushing our attention higher up the stack. Instead of writing the code, we’re defining the goals—and the approach to meet those goals. It’s less about how the car is built and more about where we want to drive it. That means the implementation details become… well, details. As I wrote in Do More With Less, “Done right, this relieves us of nitty-gritty, error-prone, and repetitive production work and frees us to do higher-order thinking, posing new questions that solve bigger problems. This means our teams will eventually engage in more human inquiry and less technical implementation: more emphasis on research, requirements, and outcomes and less emphasis on specific outputs. In other words, teams will focus more on the right thing to do—and less on how to do it. The robots will take care of the how.”

And that seems to be where Somers lands, too:

The thing I’m relatively good at is knowing what’s worth building, what users like, how to communicate both technically and humanely. A friend of mine has called this A.I. moment “the revenge of the so-so programmer.” As coding per se begins to matter less, maybe softer skills will shine.

A Coder Considers the Waning Days of the Craft | The New Yorker
algorithms

How Machines Are Taking Over the World’s Stock Markets

∞ Jan 24, 2020

Time magazine interviewed Marcos López de Prado, a specialist in using machine learning for investment and finance. This quote caught my eye:

“Machine learning should be used as a research tool, not as a forecasting tool. It should be used to identify new theories, and once you identify a new theory, you throw the machine away, you don’t want the machine.”
—Marcos López de Prado

A caveat: López de Prado is speaking specifically about machine learning for market predictions, and he notes that markets resist prediction. “Markets evolve,” he said. “You are an investor and when you extract money from the market, the market learns to prevent you from extracting profits next year.”

Still, this resonates with a philosophy that has deepened for me the more I’ve worked with AI and machine learning: machine learning is better at signals than answers.

The first generation of mainstream AI applications has over-dialed on presenting just-the-facts answers. A one-true-answer mentality has created a whole raft of problems, some of them dangerous. Here’s the thing: the machines are flaky, with narrow and literal interpretations of the world. That means they’re brittle for decision-making. Instead of replacing human judgment, AI should amplify it. Machine learning is a mediocre substitute for human judgment and individual agency, but it’s an excellent signal booster for both.

I love the way López de Prado frames it: use the machines to surface patterns, signals, and suggestions to develop a theory for action—and let humans make the decisions from there.

Time | How Machines Are Taking Over the World's Stock Markets
bots

Slack and the Decline of Bots

∞ Jan 23, 2020

Because most chatbots understand only a very limited vocabulary, using them can become a guessing game to arrive at the precise incantation to make them do your bidding. The more we talk to robots, the more we talk like robots.

Will Oremus wrote this report in October about Slack’s expansion of support for third-party plugins. Those plugins were previously limited to text-only chatbots—via either conversational UI or specific “slash commands”—but can now offer more traditional GUI elements like windows, buttons, forms, and so on.
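
Slack’s Block Kit format is how third-party apps describe this kind of GUI today. As a rough sketch of the idea (using the slack_sdk Python package; the token, channel, and action IDs here are hypothetical), a plugin can post real buttons instead of expecting users to remember an incantation:

from slack_sdk import WebClient

client = WebClient(token="xoxb-your-bot-token")  # hypothetical bot token

client.chat_postMessage(
    channel="#expenses",  # hypothetical channel
    text="Expense report ready for review",  # plain-text fallback
    blocks=[
        # A text section plus two buttons: GUI in the message itself.
        {"type": "section",
         "text": {"type": "mrkdwn", "text": "*Expense report* ready for review"}},
        {"type": "actions",
         "elements": [
             {"type": "button", "action_id": "approve_report",
              "text": {"type": "plain_text", "text": "Approve"}},
             {"type": "button", "action_id": "reject_report",
              "text": {"type": "plain_text", "text": "Reject"}},
         ]},
    ],
)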

It seems Slack’s users found the chat-only UI too demanding because of its rigid command-line syntax. Discoverability suffered, and users found it hard to remember the precise words to make the bots go, or even which bots were installed. “Nobody should have to be a specialist in the dozens of apps they interact with on a daily or weekly basis,” said Andy Pflaum, Slack’s head of platform, in an interview.

Will writes:

Bots will “continue to exist and have their role in Slack,” Pflaum said. But the company’s research has found that “the typical user isn’t as comfortable with those, or forgets how to use those methods.” Testing of more graphical interfaces has generated “so much positive response,” he added, and should make apps accessible to “a much broader base of users.”

Slack’s investment in feature expansion suggests both the success of the plugins (1800 third-party apps and counting) and the limiting nature of plain-text UI at a moment when bots still have very narrow language understanding. This will get better as natural language processing (NLP) improves and bots get more flexible in what they can understand. We’re already seeing that happen in the latest generation of NLP (see AI Dungeon for a fun example).

In the meantime: when you can take advantage of the full range of UI on a specific platform, you should—and that’s exactly what Slack is doing here. The future of interaction is increasingly multi-modal (and multi-platform for that matter). Enabling people to move nimbly among modes and platforms is as important as the ability to move among services, the very point of third-party plugins in the first place.

OneZero | Slack and the Decline of Bots
design

The Decade of Design

∞ Jan 21, 2020

In a wide-ranging essay for Figma, Carmel DeAmicis chronicles the rise of design in the last decade of product and business: The Decade of Design: How the last 10 years transformed design’s role in tech. She asked a dozen people, including me, about the themes that raised design’s profile and shifted its focus since 2010.

“Mobile normalized the idea of computing beyond the screen” is a tidbit I offered. Not only did mobile make touch mainstream as an input alternative to keyboard and mouse, it also made sensor-based computing utterly normal. The camera, the microphone, the accelerometer, GPS—all became expected, everyday surfaces for interaction in the first full decade of the smartphone. For that matter, “invisible” interactions became commonplace, too, with notification-based interfaces driven by data-backed predictive services.

At an even more fundamental level, mobile changed consumer expectations of what software should be—in quality, ease of use, and even personality. “Applications used to be gray, bland, functional affairs imposed upon us to do the mundane tasks of the day,” I suggested to Carmel. “Mobile really blew that up.”

Carmel’s central theme is not only that mobile was the key driver for digital product design in the last decade, but that this sea change also had several follow-on (and follow-on-follow-on) effects. “Mobile sped up the pace of everything, design included,” she writes. A few of her call-outs:

  • The collection, storage, and use of personal data exploded.
  • Data-driven design and A/B testing became part of the standard toolkit for designers (for better and for worse).
  • As more companies built on top of identical infrastructure (AWS, cloud services, open source software), design became the distinguishing factor for products, not the code.
  • Big companies started hiring big design staffs, instead of outsourcing design work to agencies.
  • Educational programs, bootcamps, and self-serve courses about design have proliferated to meet the need for more designers.
  • Design tools exploded to meet the growing, varied, and dynamic needs of interaction design.

What comes next?

Carmel closes her essay by noting that designers are now charged with figuring out how to use their new and growing influence to focus tech on meaningful opportunities—and limit tech’s potential and demonstrated dangers. “Many people we interviewed mentioned the moral responsibilities that lie ahead,” she writes.

For me, that will be all about how we choose to feed the algorithms, present their results, and limit their risks. Carmel and I talked about this in our interview for the article, though it didn’t make the final cut. Here are a few of my comments from that conversation:

If mobile defined the last decade of digital product design, machine learning is already defining the next.

Algorithmic interfaces already drive so many of the digital products all of us use every day. For better and for worse, algorithms determine the news we see, the movies we watch, the products that are surfaced, even the way we drive home from work. For designers, the next decade is all about understanding our roles and responsibilities in using and shaping the algorithm as a design material. How do we use data and sensors and machine learning in ways that are meaningful, personal, useful—and most of all, respectful and responsible? That’s the opportunity and challenge that will be mobile’s legacy—and the work of design in the coming years.

Are you or your team wrestling with how to adopt and design for machine learning and AI? Big Medium can help—with executive sessions, workshops, or full-blown engagements for product design and development. Get in touch.

Figma | The Decade of Design: How 10 years transformed design’s role in tech
mindfulness

The Damaging Fiction of the Wellness Industrial Complex™

∞ Jan 21, 2020

The remarkable Liza Kindred posted an elegant and thoughtful takedown of “the wellness industrial complex,” the manipulative mashup of wellness and capitalism. (Even if wellness isn’t your scene, stick with me—this has an important tie-in to design.)

In The Damaging Fiction of the Wellness Industrial Complex™, Liza writes:

There is a lie—a simple but damaging fiction—that seeps under the doors of our yoga studios and burrows into our meditation cushions. It dresses up in cute quotes on Pinterest and has hacked its way into countless Instagram accounts. It smells weird; like someone tried to burn sage over a garbage can. But if we wrinkle up our noses and ask if anyone else smells that weird smell, all we get in response is nervous laughter. Our ears ache from the shrillness of the hollow words; we’re choking on the dust of toxic positivity; and through the haze we think we can see a place where the air is clean–but standing between us and that clearing stands a group of people wearing t-shirts with ancient holy symbols on them, which were purchased from a big box store.

The lie, Liza writes, is the insidious suggestion that you’re not good enough. If only you would hustle harder, do more yoga, keep up your Headspace streaks, and just “work on yourself” more… you could finally be the better person you were meant to be. “Anyone who wants to fix you thinks that you are broken,” Liza warns:

This idea—that we are broken and need to be fixed—is what I call The Golden Cut. It is self aggression masquerading as a journey towards self acceptance. It is a damaging fiction that has invaded the lives and hearts of so many of us, an unchecked moral imperative that has cloaked itself in the language of wellness and well-being.

Constant striving has become the planned obsolescence of the wellness world. It’s precisely how capitalism has invaded wellness.

Liza puts her finger on something that has always felt off-putting to me about aspects of mindfulness as an industry. While mindfulness (and yoga and meditation and their many cousins) have real and evident benefits, they’re often wrapped in a superficial commercial culture that broadcasts a smug kind of judgment, superiority, and shame to those “not in the club.” For a movement that promises personal growth, that’s just gross.

Liza is explicit about separating out the practices and services of wellness from their toxic alter ego, the wellness industrial complex. It’s not wellness or wellness-related businesses that are bad, she writes; it’s the cynical and manipulative marketing that suggests you are broken and need to be “improved.” It’s healthy to seek and cultivate personal development, and of course to acknowledge our flaws, “but none of this means that we are broken; it means that we are human.”

So what’s this got to do with design?

The very purpose of interaction design is to shape behavior. It guides users through an intentional path to a desired outcome. Commercial design seeks to shape that behavior to the benefit of the company. When done well, of course, it also benefits the customer; both get what they need. All too often, though, that power dynamic becomes lopsided, and the interests of the company steamroll those of the individual.

That’s when you get abusive anti-patterns that incite false urgency or FOMO or shame in order to get that purchase. The confirmshaming anti-pattern in particular reminds me of the “you’re broken” message of the wellness industrial complex. Like this “Nope, I don’t care enough” example:

[Image: a Madame Noire signup popup whose opt-out link reads “Nope, I don’t care enough.” Via confirmshaming.tumblr.com.]

Confirmshaming is a familiar tactic that fronts as tongue-in-cheek playfulness even as it forces you to say you’re a jerk for not buying the product. (And whattya know, that one’s for a wellness service to boot.)

Every time you design to stir this kind of guilt or false urgency (“75 people are looking at this hotel room”) or interruption (popover ads and notifications) or other unease, you take advantage of the customer. The trouble, of course, is that the tactics work, at least in the short term. If you’re just measuring conversions, you’ll see those metrics rise. But at what cost? Cynical sales tactics sully the very product they aim to sell; they erode trust. How are you measuring that? And how does that result fit with the brand you’re trying to build, or the effect your business is trying to have for your customers?

Simply wild how much in life is fucked solely because someone somewhere would make less money if it were better.

— nicey goodlady (@super_triangle) January 14, 2020

In the wellness world, Liza shows us that the wellness industrial complex deepens jittery unease and dissatisfaction, instead of the calm and peace it promises on the surface. It is harder work to make something that is both profitable and respectful. But you know what? That’s the work.

We can all do and demand better.

EFF THIS! Meditation | The Damaging Fiction of the Wellness Industrial Complex™
c25k

Simple Plan Gets Millions Running

∞ Jan 20, 2020

Writing for Hong Kong’s South China Morning Post, Jack Lau tells the story of Couch to 5K, the running schedule I created in 1996. Get the scoop on how the thing came to be and, um, compare photos of 25-year-old Josh running in 1996 to 49-year-old Josh running now.

I created the schedule well before I became a designer, but looking back, I consider Couch to 5K (C25K) to be my first big UX project. The “brief”: onboard skeptical would-be runners to a regular, sustainable running habit. Twenty-four years later, many millions of runners have used C25K to do exactly that.

The secret behind the nine-week plan is to offer kind encouragement to go (very) slow and advance (very) gently. Many come to C25K after defeating experiences with fitness: pain, self-flagellation, boredom, or “failure.” The program turns that around by delivering attainable victories. As Jack calls out in the article, the process has deeper effects than the merely physical. I’ve received a truly remarkable number of notes from people who tell me that completing the program revealed new confidence in what they might achieve.

“If a depressed 33-year-old with bad knees who hates cardio can do it, enjoy it, and start to thrive, anyone can,” one C25K alum told Jack. And that’s pretty cool.

South China Morning Post | Simple Plan Gets Millions Running
DDM

How Dotdash, Formerly About.com, Is Taking over the Internet

∞ Jan 14, 2020

Fast Company’s Aaron Cohen shares the story of Dotdash, the network formerly known as About.com. Big Medium had a big role in this tale, and it may be the most successful design- and business-turnaround story we’ve ever been involved with.

Three years ago, About.com’s audience and ad revenue were plummeting, and CEO Neil Vogel told us the company was “circling the drain” and needed drastic change. We helped the company develop a new vertical strategy, carving out the content from the main network into branded premium experiences. The new network, Dotdash, relaunched its vast archive of content with a collection of great-looking, fast, and premium websites, powered by a single CMS and a themed design system. Big Medium led the design of three of those early properties—Verywell, The Balance, and The Spruce—and the network has since grown to nearly a dozen.

We tell our bit of the story here, and Fast Company shares what’s happened since:

Maybe you’ve never even heard of Dotdash, but its service content reaches about 90 million Americans a month. … Collectively, Dotdash’s sites have increased traffic by 44% year over year in Q3 2019. Driven by advertising and e-commerce, the company’s annual revenue grew by 44% in 2018 and 34% as reported in Q3 2019 earnings.

A big part of this success boils down to some very intentional design and technology bets that we made together:

  • Make more money… by showing fewer ads
  • Create a respectful UX that celebrates content instead of desperate revenue grabs
  • Create a front-end architecture that is modular and nimble
  • Make the sites fast

It’s worth noting that all of these choices run counter to what most media companies are doing. Most are pouring on more ads, imposing design that abuses readers and content with popovers and the like, and slowing their sites with heavy scripts and trackers. No kidding, it was a seriously brave and non-obvious choice to reject those paths. Fast Company describes the impact of Dotdash’s industry-bucking choices:

While other independent media companies were engineering their coverage around social media, video, and trending topics, Dotdash doubled down on text-based articles about enduring topics and avoided cluttering them with ads. … Dotdash sites run fewer ads, with no pop-ups or takeovers, and because the ads are relevant to each article, they perform better. At a time when digital ad rates have continued to crater for most online publishers, Vogel says the company’s ad rates have increased nearly 20 percent each year since 2016, and 25 percent of 2019 revenue came from affiliate marketing fees (bonuses paid to the publisher after Dotdash visitors made purchases via ads on the sites.)

The sites load very quickly, and the company’s proprietary content management system is designed for efficiency: Designers and editors can choose from fast-loading templates that include images, video, and interactive applications. And there’s an emphasis on creating the kinds of detailed, informative articles that turn up in search results. At Verywell, for example, each article is updated at least once every nine months and reviewed by medical professionals.

Dotdash has not only turned itself around, it’s been expanding as other media companies have contracted, selling themselves off piece by piece. Big congrats to our friends at Dotdash: they’ve demonstrated that ad-supported websites can be presented in ways that are both respectful and (very) profitable.

Fast Company | How Dotdash, Formerly About.com, Is Taking over the Internet
ai

AI Dungeon

∞ Dec 6, 2019

It is pitch black. You are likely to be eaten by a generative text model.

AI Dungeon is a pretty astonishing example of generative fiction—in the form of an old-school text adventure game: “Imagine an infinitely generated world that you could explore endlessly, continually finding entirely new content and adventures. What if you could also choose any action you can think of instead of being limited by the imagination of the developers who created the game?”

This experiment by Nick Walton is built on top of OpenAI’s recent GPT–2 model for generating text after being primed with an arbitrary input (seed text to establish context). Unlike many prior language models, this one generates entire coherent paragraphs that feel a world ahead of prior-art attempts that simply string plausible words together.
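
For a feel of the raw mechanic, here’s a minimal sketch of priming GPT–2 with seed text via Hugging Face’s transformers package; this isn’t AI Dungeon’s code, and the seed is hypothetical:

from transformers import pipeline

# GPT-2 simply continues whatever seed text primes it, which is all a
# text adventure needs: the story so far goes in, the next beat comes out.
generator = pipeline("text-generation", model="gpt2")

seed = "You are a knight standing at the mouth of a dark cave. You"
result = generator(seed, max_length=80, num_return_sequences=1)
print(result[0]["generated_text"])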

Walton loaded the model with training data from a collection of text adventures obtained from chooseyourstory.com, and then let ’er rip. “The model at times shows remarkable understanding and writing quality like in this sample where the player tries to find love on a deserted island,” Walton writes:

You head to the island and explore it. You find it completely deserted save for a single tree. You walk up to it and examine it closely. It seems to be made of wood, but its leaves seem to be green rather than normal leafy green.
> find love

You take a deep breath and stare intently at the tree. Your eyes meet with the tree’s. You immediately feel a strong sense of euphoria. You smile and touch the tree. Its bark feels warm to the touch. You sniff the tree and find that its scent is very pleasant.

The unfolding adventures are reasonably coherent, too, and generally hang together as you go along. Unlike the virtual assistants many of us use today, this one actually remembers the things you said to it moments before. Walton says every command is accompanied by the last eight commands and responses, effectively giving the game a short-term memory to maintain context (a simple sketch of that rolling-context trick follows the transcript below). Even as you throw curve balls at the story engine, it works hard to fit them into what came before. Here’s a snippet from one of the resulting adventures:

"I am a goblin!" you say proudly. "And I'm glad to be one".
"Well then, congratulations," says the orc captain. "We'll let you live". You nod and walk away from the camp.
> ask if you can join the orc band

The orcs don't want to fight anymore so they're looking for someone to help them out in their new endeavors. They offer you a chance to join up with them as a drummer. It would allow you to play music while helping out around camp.
> tell them we should start a musical troupe

They look at you skeptically but agree to give you a try. You tell them about your plans to create a musical troupe and how you plan on using the orcs to help you out.
> teach them how to play the flute

You teach them how to play the flute and soon enough they're playing for you all the time. The orcs seem to enjoy it and they begin to think of you as their leader. Soon other orcs come to you asking to join too.
> accept them into your musical groupe and train to be the best musicians

You continue to teach them and eventually they become better than you. You learn that you need to practice more and more often until you can master the art of playing the flute. Eventually you find yourself practicing every day and finding it very difficult to sleep.
> take orcs to perform at a nearby festival

You take the orcs to perform at a local festival where you get to meet some of the other orcs you've trained. This is followed by another success when the orcs manage to attract the attention of the Emperor himself who grants you permission to use his personal carriage to travel around the Empire.
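
Mechanically, that short-term memory is a rolling window: keep the last eight exchanges and prepend them to every new prompt. Here’s a minimal sketch of the approach as Walton describes it; the generate() function stands in for a GPT–2-style text model, and all names are hypothetical:

from collections import deque

MEMORY_SIZE = 8  # per Walton: each command ships with the last 8 exchanges

class StoryEngine:
    def __init__(self, generate, seed_text):
        self.generate = generate    # any prompt -> continuation function
        self.seed_text = seed_text  # opening scene that primes the model
        self.memory = deque(maxlen=MEMORY_SIZE)  # old turns fall off the front

    def step(self, command):
        # Rebuild the prompt from the seed plus only the recent exchanges.
        recent = "".join(f"\n> {cmd}\n{resp}" for cmd, resp in self.memory)
        response = self.generate(f"{self.seed_text}{recent}\n> {command}\n")
        self.memory.append((command, response))
        return response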

Why this matters

This is a toy, of course, but it’s a nifty demonstration of how the GPT model can be used on an arbitrary data set to create fairly compelling responses. This stuff remains primitive compared to human prose; AI Dungeon isn’t going to write award-winning fiction, but it does hint at ways that it could help human writers by proposing directional text. In a Hacker News thread, Walton wrote:

This doesn’t reach near the level of good human authors. There’s no long term plot or deep human themes in this. I don’t think this will ever replace quality human writing, but it may be able to augment it in cool ways. I personally would love if rather than every guard in Skyrim telling the exact same story, if each guard could have their own stories or comments generated based on things about their life. Human authors could provide high level details and let AI generators fill in the smaller details.

As with so many instances of machine learning, in other words, the best application here is not to replace human efforts but to augment them. What might be the role for this in supporting common or repetitive writing tasks? In supporting customer-support teams providing tailored responses to frequently asked questions? In giving automated agents better comprehension of the task we want them to accomplish?

AI Dungeon
ai

What is the Role of an AI Designer?

∞ Dec 5, 2019

Facebook’s Amanda Linden shares how AI product designers approach their work in Facebook’s Artificial Intelligence team:

There are big differences in the role of a typical product designer and an AI designer. Rather than launching a product feature that shows up in an app in an immediate and obvious way, our output is often clarity for engineers on how the technology could be applied. Because AI capabilities might take 2–3 years to develop, it’s important for designers to help developers understand the potential of different solutions and their impact on people’s lives when developing AI.

Linden details several roles that designers play in shaping AI at Facebook—not just how it’s applied and presented, but how it’s conceived and built:

  • Designing AI prototypes
  • Shaping new technology
  • Developing AI-centered products
  • Collecting data for AI to learn
  • Designing AI developer tools

We’re in a peculiar moment when many designers have a hard time imagining a role with artificial intelligence and machine learning, because it departs in so many ways from traditional product design. Here’s the thing: design’s superpower is understanding how technology can support human goals and ambitions, how to make technology fit our lives instead of the reverse. Developers and algorithm engineers have shown us what’s possible with AI. Now it’s the designer’s role (and responsibility!) to shape how it’s conceived and presented for meaningful use. That’s why AI and machine learning matter for design teams.

Amanda Linden | What is the Role of an AI Designer?
privacy

With Brits Used to Surveillance, More Companies Try Tracking Faces

∞ Dec 4, 2019

The Wall Street Journal reports that companies are using the UK’s omnipresent security cameras as cultural permission to bring facial-recognition tech to semi-public spaces, tracking criminal history but also ethnicity and other personal traits. “Retailers, property firms and casinos are all taking advantage of Britain’s general comfort with surveillance to deploy their own cameras paired with live facial-recognition technology,” writes Parmy Olson for the Journal ($). “Companies are also now using watch lists compiled by vendors that can help recognize flagged people who set foot on company property.” For example:

Some outlets of Budgens, a chain of independently owned convenience stores, have been using facial-recognition technology provided by Facewatch Ltd. for more than a year. Facewatch charges retailers for the use of a computer and software that can track the demographics of people entering a store, including their ethnicity, and screen for a watch list of suspected thieves through any modern CCTV camera. The system works by sending an alert to a staff member’s laptop or mobile device after detecting a face on the watch list. Retailers then decide how to proceed.

Why this matters

  1. Assumptions about appropriate (or even inevitable) uses of tech become normalized quickly. As constant surveillance becomes the everyday, it’s all too easy to become resigned or indifferent as that surveillance deepens. Once the cultural foundation for a new technology sets, it’s difficult to change the associated expectations and assumptions—or see the status quo as anything other than inevitable, “just the way things work.” We see it in the decades-long expectation that online content is free and ad supported. We see it in the assumption that giving up personal data is just table stakes for using the internet. And now, with surveillance cameras—at least in the UK—we may be settling into a new expectation that simply moving through the world means that we are seen, tracked, monitored in a very granular, personal way.

    The Journal suggests that the UK’s “comfort” with surveillance cameras makes it ripe for this. A 2013 survey found that Britain had the highest density of surveillance technology outside of China. Since then, the number of surveillance cameras in the UK has nearly doubled from six million to 10 million—one camera for every seven people.

  2. This anti-theft surveillance affects more than just the guilty. Facial recognition is still pretty iffy in real-world conditions, and the false positives these systems generate could lead to harassment for no good reason except that you walked into the store.

    James Lacey, a staff member at one Budgens store in Aylesbury, southern England, said the system can ping his phone between one and 10 times a day. People have been known to steal large quantities of meat from the store’s refrigeration aisle when staff members are in the stock room, he said. The new system has helped, he said, though about a quarter of alerts are false. A spokesman for Facewatch said a maximum of 15% of alerts are false positives, based on its own analysis.

    (Related: an ACLU study in 2018 found that Amazon’s facial-recognition service incorrectly matched the faces of 28 members of Congress to criminal mugshots.)

  3. Automated identification has implications beyond crime prevention. What’s OK for these corporate systems to track in the first place? Gender? Race and ethnicity? Income? Browser history? Social relationships? Voting record? Sexual preference? The folks at Facewatch promise vaguely that tracking ethnicity “can help retailers understand their marketplace.” This smacks of a shrugging sensibility that “we can do it, so why wouldn’t we?” And that’s the worst reason to use a technology.

  4. Regulation is evolving, but remains vague and often unenforced. Europe’s well-intentioned privacy regulation, the GDPR, puts facial and other biometric data in a special category that requires a company to have a “substantial public interest” in capturing and storing it. That’s fuzzy enough to arguably allow companies to use the technology to fight crime. Tracking ethnicity to “help retailers understand their marketplace” seems like less of a slam dunk. There is also a gray area around how long businesses can hold on to such footage, or use it for other business purposes.

We should adopt a position on this stuff both culturally and civically. If we don’t, the technology will decide for us. What will your company’s position be? And how about you? What’s your stance as a practitioner designing the technology that will set the behaviors and expectations of the next generation?

WSJ ($) | With Brits Used to Surveillance, More Companies Try Tracking Faces
facebook

Facebook Gives Workers a Chatbot to Appease That Prying Uncle

∞ Dec 3, 2019

Facebook sent employees home for the holidays with robot talking points—in case the family had any questions about, y’know, the company’s cynical, grasping, overreaching, damaging, and irresponsible business model and use of technology. (Bots, it seems, are the only ones left who can deliver these lines with a straight face.) The New York Times reports:

If a relative asked how Facebook handled hate speech, for example, the chatbot — which is a simple piece of software that uses artificial intelligence to carry on a conversation — would instruct the employee to answer with these points:

  • Facebook consults with experts on the matter.
  • It has hired more moderators to police its content.
  • It is working on A.I. to spot hate speech.
  • Regulation is important for addressing the issue.

It would also suggest citing statistics from a Facebook report about how the company enforces its standards.

The New York Times | Facebook Gives Workers a Chatbot to Appease That Prying Uncle
data

Inmates in Finland are training AI as part of prison labor

∞ Mar 29, 2019

Grooming data for the machines has a human cost. The Verge reports that startup Vainu is using prisoners in Finland to tag Finnish-language articles. The company uses Mechanical Turk to do this for other languages, but Finnish-speaking Turkers are hard to come by. So Vainu gets (and pays) prison inmates to do it.

There are legit concerns about exploiting prisoners for low-wage labor, but perhaps the broader concern is that this hints at a bleak future of work in the age of the algorithm. Indeed, this “future” is already here for a growing segment of humans, where Mechanical-Turk-level labor turns out to be, literally, prison labor.

This type of job tends to be “rote, menial, and repetitive,” says Sarah T. Roberts, a professor of information science at the University of California at Los Angeles who studies information workers. It does not require building a high level of skill, and if a university researcher tried to partner with prison laborers in the same way, “that would not pass an ethics review board for a study.” While it’s good that the prisoners are being paid a wage similar to Mechanical Turk rates, Roberts points out that wages on Mechanical Turk are extremely low anyway. One recent research paper found that workers made a median wage of $2 an hour.

As we design the future of technology, we also design the future of work. What might we do to improve the quality and pay of labor required to make automated systems work?

The Verge | Inmates in Finland are training AI as part of prison labor