When mobile exploded a decade ago, many of us wrestled with designing for the new context of freshly portable interfaces. In fact, we often became blinded by that context, assuming that mobile interfaces should be optimized strictly for on-the-go users: we overdialed on location-based interactions, short attention spans, micro-tasks. The “lite” mobile version ruled.
It turned out that the physical contexts of mobile gadgets (device and environment) were largely red herrings. The notion of a single “mobile context” was a myth that distracted from the more meaningful range of “softer” contexts these devices introduced by unchaining us from the desktop. The truth was that we now had to design for a huge swath of temporal, behavioral, emotional, and social contexts. When digital interfaces can penetrate any moment of our lives, the designer can no longer assume any single context in which they will be used.
This already challenging contextual landscape is even more complicated for predictive AI assistants that constantly run in the background looking for moments to provide just-in-time info. How much do they need to know about current context to judge the right moment to interrupt with (hopefully) useful information?
In an essay for O’Reilly, Mike Loukides explores that question, concluding that it’s less a concern of algorithm design than of UX design:
What’s the experience I want in being “assisted”? How is that experience designed? A design that requires me to expend more effort to take advantage of the assistant’s capabilities is a step backward.
The design problem becomes more complex when we think about how assistance is delivered. Norvig’s “reminders” are frequently delivered in the form of asynchronous notifications. That’s a problem: with many applications running on every device, users are subjected to a constant cacophony of notifications. Will AI be smart enough to know what notifications are actually wanted, and which are just annoyances? A reminder to buy milk? That’s one thing. But on any day, there are probably a dozen or so things I need, or could possibly use, if I have time to go to the store. You and I probably don’t want reminders about all of them. And when do we want these reminders? When we’re driving by a supermarket, on the way to the aforementioned doctor’s appointment? Or would it just order it from Amazon? If so, does it need your permission? Those are all UX questions, not AI questions.
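The relevance judgment Loukides describes can be sketched as a toy heuristic. Everything here is a hypothetical illustration (the `Context` fields, the weights, the threshold are all invented for the sketch, not drawn from the essay); a real assistant would learn these signals rather than hard-code them:

```python
from dataclasses import dataclass

@dataclass
class Context:
    near_store: bool     # e.g., currently driving past a supermarket
    free_minutes: int    # estimated slack before the next commitment
    item_urgency: float  # 0.0 (could possibly use) .. 1.0 (needed today)

def should_remind(ctx: Context, threshold: float = 0.5) -> bool:
    """Toy relevance score: start from the item's urgency, then boost
    it when the user is near a store with enough slack time to act."""
    score = ctx.item_urgency
    if ctx.near_store and ctx.free_minutes >= 15:
        score += 0.3
    return score >= threshold

# Milk while driving past a store with time to spare gets through;
# the same low-urgency item at a bad moment stays silent.
print(should_remind(Context(near_store=True, free_minutes=30, item_urgency=0.4)))   # True
print(should_remind(Context(near_store=False, free_minutes=5, item_urgency=0.4)))   # False
```

The point of the sketch is that the hard choices (which signals matter, where the threshold sits, whether to interrupt at all) are exactly the UX questions Loukides is raising, not the model's job alone.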
We’ve made lots of fast progress in just the last few years (months, even) in crafting remarkably accurate algorithms. We’re still getting started, though, in crafting the experiences we wrap around them. There’s lots of work to be done right now by designers, including UX research at unprecedented scale, to understand how to put machine learning to use as design material. I have ideas and design principles about how to get started. In the meantime, I really like the way Mike frames the problem:
In a future where humans and computers are increasingly in the loop together, understanding context is essential. But the context problem isn’t solved by more AI. The context is the user experience. What we really need to understand, and what we’ve been learning all too slowly for the past 30 years, is that technology is the easy part.