Why Machine Learning and AI Matter for Design Teams
By Josh Clark
Published Mar 28, 2019
Machine learning is everywhere these days, powering the services, products, and interfaces that all of us use every day. Yet many designers and organizations are still on the sidelines without a clear vision of how to work with this technology. Many aren’t sure if there’s a role for them at all. Fact is, there’s a critical role for design in the era of the algorithm—and your organization almost certainly has what it needs to jump in today.
I’ve been bringing that message home to client companies as we work together to craft products powered by machine learning. But more and more, I’ve also been bringing these perspectives and techniques to stages and workshops around the world. If you’re interested in leveling up your data literacy as a designer, I invite you to join me for one of these sessions. (As I write this, I have workshops scheduled for London, Amsterdam, and Ottawa. As always, keep an eye on the Talks page for upcoming talks and workshops—or get in touch if you’d like me to visit your organization.)
I sat down recently for a Q&A with the good folks at Connecticut UXPA to preview what designers will learn in this workshop series. Maybe even more important, we talked about why designing for machine learning should be on the radar for all design teams today. Here’s the conversation:
Q: Why do designers need to care about machine learning and artificial intelligence?
Perhaps the real question is why wouldn’t you care about it? In the same way that mobile defined the last decade of digital product design, machine learning is already defining the next. It’s the engine behind every single one of today’s important emerging interactions: voice interfaces, computer vision, predictive interfaces, bots, augmented reality, virtual reality, as well as so many of the sensor-based interfaces behind the internet of things. So if you’re interested in understanding the design of these new channels and platforms, machine learning is your fundamental design material.
But beyond these emerging and next-generation interactions, A.I. is already driving so many of the digital products all of us use every day. Algorithms determine what you see in your Facebook and Instagram feeds. They predict what you’ll want to watch on Netflix. They suggest what you should buy at Amazon. They identify fraudulent use of our credit cards. They tell us how to drive home from work.
Designing for machine learning isn’t for the hand-wavy future. This is very much for the here and now.
All of the most successful digital products now have machine learning either at their core, or as an important enhancement to the core offering.
It’s amazing how quickly algorithms have become so woven into the fabric of nearly every moment of our lives. That means it’s also urgently important that those experiences be designed with intention and skill. There’s a critical role for designers here.
Q: Most designers don’t work for one of those high-flying companies. What if your company doesn’t use machine learning right now, and doesn’t have the in-house expertise of a Google, Amazon, or Facebook?
There’s a common assumption that a company has to have a vast army of algorithm engineers and data scientists in order to put machine learning to work. But it turns out the underlying technologies are widely available and don’t even require deep expertise to get started. If you have interesting data—and most companies do—most developers can create interesting machine-learning applications.
But here’s the really exciting thing. There are lots of A.I. services available that don’t require any engineering or data science know-how whatsoever. The big players like Microsoft, IBM, Google, and Amazon all offer practically free services for speech recognition, image recognition, product recommendation, and more—and the technical bar to use them is incredibly low. A typical designer and web developer can pair up to create their own machine-learning product with astonishingly little effort. All of us have easy access to the superpowers we associate with these tech giants.
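To make “the technical bar is incredibly low” concrete, here’s a minimal sketch of what calling one of those services looks like—in this case, Google’s Cloud Vision API for image labeling. The endpoint and request shape follow the public REST reference; the API key is a placeholder you’d get from your own Google Cloud project.

```python
import base64
import json

# Public REST endpoint for Google Cloud Vision (per its published docs).
VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def build_label_request(image_bytes, max_results=5):
    """Build the JSON body Vision expects for label detection."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
        }]
    }

# Dummy bytes stand in for a real image file here.
payload = build_label_request(b"fake image bytes")
print(json.dumps(payload)[:60])

# Actually sending it is a single HTTP call, e.g. with the `requests` library:
#   requests.post(f"{VISION_ENDPOINT}?key={YOUR_API_KEY}", json=payload)
```

That’s the whole integration: no model training, no data science—just an HTTP request and a JSON response listing labels with confidence scores.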
In the workshop, we’ll explore those services together and actually work with them. Designers see firsthand how to use the results those services provide as design material in their own work. These are tools designers can use today—like right now—with the skills they and their colleagues already have. Designers will leave the workshop with a knowledge of technologies and services to help their companies start working with machine learning immediately.
So it turns out that the technology is not the biggest challenge. The harder thing is identifying meaningful ways to use that technology. That’s the work of product design and UX design. The job and opportunity for designers is to point the machines at problems worth solving—and to present the results in ways that are meaningful and useful to our customers.
The technology is not the biggest challenge. The harder thing is identifying meaningful ways to use that technology.
That’s another big focus of this workshop. We’ll work through some techniques for ideation to identify opportunities for machine learning in everyday products.
Q: What does that look like? I assume we’re not talking about the everyman designer building a self-aware, all-knowing computer intelligence personality?
Ha, that’s right. The term “artificial intelligence” has gotten away from us, and it’s used by marketers to describe everything from the most basic automated system to sci-fi visions of sentient robots. What we’re talking about in this workshop is a practical middle that is in some ways mundane but still incredibly powerful.
Machine learning is basically pattern matching at unprecedented scale. It takes a mountain of historical data and locates patterns that an algorithm can recognize in new data it receives. That means machine learning can identify or categorize information (or images, sounds, products, you name it). But it can also predict or recommend what’s next: based on past history, here’s the most likely thing that will happen or that you should do. Put another way, machine learning figures out what’s “normal” (most common) for any specific context and then predicts the next normal thing, or identifies things that aren’t normal (fraud, crime, disease, etc.).
Practically speaking, all of this means that machine learning lets you do four new things:
- Do better at answering questions we already ask.
- Answer new questions. Using sentiment analysis, for example, a customer care center can now search emails for “angry” or “upset” instead of traditional keyword searches.
- Mine new sources of information. Until now, all the messy ways that humans communicate have been opaque to the machines. Suddenly, images, doodles, speech, gestures, facial expression… it’s now all available as meaningful data—or even as surfaces for interaction—thanks to machine learning.
- Uncover invisible patterns. Machine learning can find patterns in vast troves of data that were historically invisible to us. That means we can identify new customer segments, new buying patterns, popular running routes, micro-communities, even sources of disease.
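The “figure out what’s normal, then flag what isn’t” idea can be sketched in a few lines. This toy example uses a simple standard-deviation test on made-up card-spend numbers—real fraud detection uses far richer models, but the underlying logic is the same:

```python
from statistics import mean, stdev

# Toy history: one customer's recent daily card spend (fabricated numbers).
history = [42.0, 38.5, 45.0, 40.2, 39.8, 44.1, 41.7]

def is_unusual(amount, past, threshold=3.0):
    """Flag amounts more than `threshold` standard deviations from the mean.

    A crude stand-in for "learn what's normal, flag what isn't."
    """
    mu, sigma = mean(past), stdev(past)
    return abs(amount - mu) > threshold * sigma

print(is_unusual(41.0, history))   # a typical purchase → False
print(is_unusual(900.0, history))  # far outside learned "normal" → True
```

The design question isn’t in those few lines of math—it’s what the product does with a flag: block the card outright, or gently ask “was this you?”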
In the workshop, we’ll experiment with each of those four kinds of opportunities. Designers will come away with a firm grounding in how you can use each of them to create entirely new products or, just as powerfully, to improve existing ones.
Q: What are some examples of improving existing products? For designers who aren’t yet working with machine learning, how will this workshop help them in their everyday work?
I’m talking about small, even casual interventions. Think about predictive text in your smartphone’s keyboard. That’s machine learning suggesting the next word based on what you’ve already typed—a small intervention that improves customers’ lives by easing the tedious, error-prone task of touchscreen typing.
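At its simplest, next-word prediction is just counting which word most often follows each word in past text. Here’s a minimal bigram sketch of that idea (production keyboards use far more sophisticated models, but the principle is the same):

```python
from collections import Counter, defaultdict

# Tiny training "corpus" standing in for a user's typing history.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def suggest(word):
    """Return the word most often seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))  # → "cat" ("cat" follows "the" most often above)
```

The keyboard’s suggestion strip is just this prediction surfaced as a tap target—small model, small UI, real reduction in typing effort.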
Or consider Google Forms, the survey-building tool. When you add a new question, you have to tell it the format of the answer (multiple choice, checkboxes, linear scale, etc.). As you type your question, Google Forms identifies the category of question and changes the default answer type based on the wording—a convenient bit of intelligence to make the process easier.
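To be clear, Google Forms uses a trained classifier, but the behavior can be mimicked with simple wording rules to show the idea: infer a default answer format from how the question is phrased. The rules and category names below are purely illustrative.

```python
# Illustrative phrase → answer-format rules (not Google's actual logic).
RULES = [
    ("scale of", "linear scale"),
    ("which of the following", "multiple choice"),
    ("select all", "checkboxes"),
]

def guess_answer_type(question):
    """Guess a sensible default answer format from the question's wording."""
    q = question.lower()
    for phrase, answer_type in RULES:
        if phrase in q:
            return answer_type
    return "short answer"  # safe fallback when nothing matches

print(guess_answer_type("On a scale of 1 to 5, how satisfied are you?"))
# → "linear scale"
```

Note the design detail: the guess only changes the *default*, which the user can always override—a good pattern for low-confidence predictions.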
Or think of a CMS that suggests a caption or alt-text description for images that are uploaded.
Q: What expertise does this require? There’s already the ongoing debate about whether designers should code. Do designers now have to be data scientists, too?
You don’t have to be a data scientist to design for machine-generated content and interactions. For better or worse, though, many machine-generated interfaces have been designed by engineers and data scientists. Those folks have been enormously helpful by showing us what’s possible, what machine learning can do. But we’ve also seen flaws. We’ve seen machine learning pointed at the wrong problems. Or we’ve seen interfaces that don’t reflect the actual confidence (or uncertainty) of the underlying results.
And those are design problems. The presentation of machine-generated results is at least as important as the underlying algorithm.
So, no, you don’t have to be a data scientist to design for machine learning. But you do need to be data-literate. You have to understand the strange new texture of this design material—and some of the uncertainty and weirdness that it introduces into our designs.
An umbrella theme of the workshop is learning to understand the new perspectives and techniques that are required when you’re designing for machine-generated content. We put those practices to work in very concrete, practical, and actionable ways.
You don’t have to be a data scientist to design for machine learning. But you do need to be data-literate.
Q: What about the other way around? For data scientists already working with machine learning, would they benefit from learning about related design and UX considerations?
I love it when data scientists and algorithm engineers join these workshops, because it’s an opportunity to introduce design perspective to the way they think about their work. How do you explain the results of the algorithm in ways that are meaningful and intuitive to end users? How do you translate the numeric confidence of the algorithm into everyday language?
Data scientists understand that these systems are probabilistic. The systems report the statistical likelihood that something is true, but nothing is black and white. So one of the interesting challenges—for both designers and data scientists—is how to express the results as signals or suggestions, not facts. It’s a new kind of design. Manner and presentation become critical—things that designers consider every day, but that aren’t necessarily obvious when the data folks are building or tuning models.
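One concrete version of that translation: map the model’s numeric confidence to hedged everyday wording before it reaches the interface. The thresholds and phrases below are illustrative choices, not a standard—picking them well is exactly the kind of judgment call designers should be in the room for.

```python
def confidence_phrase(score):
    """Map a 0-1 confidence score to hedged everyday language.

    Thresholds are illustrative; tune them with user research.
    """
    if score >= 0.9:
        return "almost certainly"
    if score >= 0.7:
        return "probably"
    if score >= 0.4:
        return "possibly"
    return "probably not"

print(f"This is {confidence_phrase(0.93)} a photo of a dog.")
# → "This is almost certainly a photo of a dog."
```

The point is that the UI presents a suggestion, not a false certainty—the same 0.93 shown as a bare percentage would read very differently to most users.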
Q: Who else will benefit from the workshop?
This workshop is all about understanding how machine learning fits into the everyday practice of building digital products. Like all good UX design, this ultimately means how we triangulate user needs, business goals, and the technology’s capabilities. That’s a conversation that benefits not only designers, but also product owners, researchers, and developers—everyone involved in the product process. That also includes managers and executives trying to figure out how machine learning applies to their company or industry.
Q: You mentioned the role of machine learning in today’s emerging interfaces. How will the workshop help attendees explore the design of voice assistants, bots, physical interfaces and the rest?
We’ll dip our toe into all of these to talk about some of the unique considerations and challenges of each. For assistants and bots, for example, we’ll look at some of the pitfalls of creating interfaces that ape human behavior—and some techniques and perspectives that can help to avoid the biggest problems, and pave the way for success.
Across the board, the biggest thing you can do is be transparent about what the system does and what it’s good at. Our work as designers is always to set good expectations and channel behavior in ways that match the capabilities of the system. But often, machine-generated systems over-promise. Alexa and Siri invite us to ask them anything, but that sets up an expectation that is far beyond their capability. They constantly disappoint us, even though they are astonishing technologies. It’s not a tech problem, it’s an expectation-setting challenge. That’s design work.
One of the biggest things I’ve learned as I work with machine-generated content and interaction is that we have to design for failure and uncertainty. Traditionally, we’ve always designed for success, crafting a fixed path through content that is under our control. When machines are generating the content—and sometimes even the interaction itself—it’s a new challenge.
In the workshop, we’ll explore techniques and approaches to keep things on the rails even when the machines deliver lousy results.
There are tons of opportunities to do amazing things with machine learning in our work. But there are also lots of ways it can go sideways—and there are plenty of examples of how it already has. Ultimately, the way this technology gets used is a question of design. So really, it’s up to us.
This workshop is all about helping designers discover their own influential role in putting this powerful (and surprisingly accessible) technology to work. And perhaps just as important: it’s all about how to handle this new design material with care and respect.
Are you or your team wrestling with how to adopt and design for machine learning and A.I.? Wrestling with the UX of bots, data-generated interfaces, and artificial intelligence? Big Medium can help—with workshops, executive sessions, or a full-blown design engagement. Get in touch.