In a recent edition of his excellent stream-of-consciousness newsletter, Dan Hon considers Alexa Kids Edition in which, among other things, Alexa encourages kids to say “please.” There are challenges and pitfalls, Dan writes, in designing a one-size-fits-all system that talks to children and, especially, teaches them new behaviors.
Parenting is a very personal subject. As I have become
a parent, I have discovered (and validated through
experimental data) that parents have very specific
views about how to do things! Many parents do not agree
with each other! Parents who agree with each other
on some things do not agree on other things! In families
where there are two parents there is much scope for
disagreement on both desired outcome and method!
All of which is to say that the current design,
architecture and strategy of Alexa for Kids indicates
one sort of one-size-fits-all method and that there’s
not much room for parental customization. This isn’t
to say that Amazon are actively preventing it and might
not add it down the line - it’s just that it doesn’t
really exist right now. Hon’s got a great point that:
“[For example,] take the magic word we mentioned earlier.
There is no universal norm when it comes to what’s
polite or rude. Manners vary by family, culture, and
even region. While “yes, sir” may be de rigueur in
Alabama, for example, it might be viewed as an element
of the patriarchy in parts of California.”
In the New York Times opinion section, Gary Marcus and Ernest Davis suggest that today’s data-crunching model for artificial intelligence is not panning out. Instead of truly understanding logic or language, today’s machine learning identifies data patterns to recognize and reflect human behavior. The systems this approach creates tend to mimic more than think. As a result, we have some impressive but incredibly narrow applications of AI. The culmination of artificial intelligence appears to be making salon appointments.
Decades ago, the approach was different. The AI field tried to understand the elements of human thought—and teach machines to actually think. The goal proved elusive and the field drifted instead to what machines were already better at understanding, pattern recognition. Marcus and Davis say the detour has not proved helpful:
Once upon a time, before the fashionable rise of machine learning and “big data,” A.I. researchers tried to understand how complex knowledge could be encoded and processed in computers. This project, known as knowledge engineering, aimed not to create programs that would detect statistical patterns in huge data sets but to formalize, in a system of rules, the fundamental elements of human understanding, so that those rules could be applied in computer programs. Rather than merely imitating the results of our thinking, machines would actually share some of our core cognitive abilities.
That job proved difficult and was never finished. But “difficult and unfinished” doesn’t mean misguided. A.I. researchers need to return to that project sooner rather than later, ideally enlisting the help of cognitive psychologists who study the question of how human cognition manages to be endlessly flexible.
Today’s dominant approach to A.I. has not worked out. Yes, some remarkable applications have been built from it, including Google Translate and Google Duplex. But the limitations of these applications as a form of intelligence should be a wake-up call. If machine learning and big data can’t get us any further than a restaurant reservation, even in the hands of the world’s most capable A.I. company, it is time to reconsider that strategy.
The visionaries of technology—Douglas Engelbart,
J.C.R. Licklider—have always
recognised the potential for computers to augment humanity, to be bicycles for the
mind. I think they would be horrified to see the increasing
trend of using humans to augment computers.
At Harvard Business Review, Ben Fuchs, Megan Reitz, and John Higgins consider the responsibility of identifying our own blind spots—the biases, privileges, and disadvantages we haven’t admitted to ourselves. It’s important (and sometimes bruising) work—all the more important if you’re in a privileged position that gives you the leverage to make a difference for others.
To address inequality of opportunity, we need to acknowledge
and address the systemic advantages and disadvantages
that people experience daily. For leaders, recognizing
their advantage blindness can help to reduce the impact
of bias and create a more level playing field for everyone.
Being advantaged through race and gender comes with
a responsibility to do something about changing a system
that unfairly disadvantages others.
I had the privilege last month of joining 19 other designers, researchers, and writers to consider the future (both near and far) of artificial intelligence and machine learning. We headed into the woods—to the Juvet nature retreat in Norway—for several days of hard thinking. Under the northern lights, we considered the challenges and opportunities that AI presents for society, for business, for our craft—and for all of us individually.
Answers were elusive, but questions were plentiful. We decided to share those questions, and the result is the Juvet Agenda. The agenda lays out the urgent themes surrounding AI and presents a set of provocations for teasing out a future we want to live in:
Artificial intelligence? It’s complicated. It’s the here and now of hyper-efficient algorithms, but it’s also the heady possibility of sentient systems. It might be history’s greatest opportunity or its worst existential threat — or maybe it will only optimize what we’ve already got. Whatever it is and whatever it might become, the thing is moving too fast for any of us to sit still. AI demands that we rethink our methods, our business models, maybe even our cultures.
In September 2017, 20 designers, urbanists, researchers,
writers, and futurists gathered at the Juvet nature
retreat among the fjords and forests of Norway. We
came together to consider AI from a humanist perspective,
to step outside the engineering perspective that dominates
the field. Could we sort out AI’s contradictions? Could
we describe its trajectory? Could we come to any conclusions?
Over three intense days the group captured ideas, played
games, drew diagrams, and snapped photos. In the end,
we arrived at more questions than answers — and Big
Questions at that. These are not topics we can or should
address alone, so we share them here.
Together these questions ask how we can shape AI for
a world we want to live in. If we don’t decide for
ourselves what that world looks like, the technology
will decide for us. The future should not be self-driving;
let’s steer the course together.
“Artificial intelligence” is broadly used in everything from science fiction to the marketing of mundane consumer goods, and it no longer has much practical meaning, bemoans John Pavlus at Quartz. He surveys practitioners about what the phrase does and doesn’t mean:
It’s just a suitcase word enclosing a foggy constellation
of “things”—plural—that do have real definitions and
edges to them. All the other stuff you hear about—machine
learning, deep learning, neural networks, what have
you—are much more precise names for the various scientific,
mathematical, and engineering methods that people employ
within the field of AI.
But what’s so terrible about using the phrase “artificial
intelligence” to enclose all that confusing detail—especially
for all us non-PhDs? The words “artificial” and “intelligent”
sound soothingly commonsensical when put together.
But in practice, the phrase has an uncanny almost-meaning
that sucks adjacent ideas and images into its orbit
and spaghettifies them.
Me, I prefer to use “machine learning” for most of the algorithmic software I see and work with, but “AI” is definitely a convenient (if overused) shorthand.
Artificial intelligence can accurately guess whether
people are gay or straight based on photos of their
faces, according to new research that suggests machines
can have significantly better “gaydar” than humans.
A study from Stanford University – which found that a
computer algorithm could correctly distinguish between
gay and straight men 81% of the time, and 74% for women
– has raised questions about the biological origins
of sexual orientation, the ethics of facial-detection
technology, and the potential for this kind of software
to violate people’s privacy or be abused for anti-LGBT purposes.
Big Medium is what my friend and collaborator Dan Mall calls a design collaborative. Dan runs his studio Superfriendly the same way I run Big Medium: rather than carry a full-time staff, we both spin up bespoke teams from a tight-knit network of well-known domain experts. Those teams are carefully chosen to meet the specific demands of each project. It’s a very human, very personal way to source project teams.
True Story was a case study in what two Stanford professors
call “flash organizations” — ephemeral setups to execute
a single, complex project in ways traditionally associated
with corporations, nonprofit groups or governments. […]
And, in fact, intermediaries are already springing up across industries like software and pharmaceuticals to assemble such organizations. They rely heavily on data and algorithms to determine which workers are best suited to one another, and also on decidedly lower-tech innovations, like middle management. […]
“One of our animating goals for the project was, would it be possible for someone to summon an entire organization for something you wanted to do with just a click?” Mr. Bernstein said.
The fascinating question here is how systems might develop algorithmic proxies for the measures of trust, experience, and quality that weave the fabric of our professional networks. But even more intriguing: how might such models help to connect underrepresented groups with work they might otherwise never have access to? For that matter, how might those models introduce me to designers outside my circle who might introduce more diverse perspectives into my own work?
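As a thought experiment, the algorithmic proxy in question might start as simply as a weighted score over skill overlap and peer ratings. Everything in this sketch (the field names, the weights, the candidates) is invented for illustration; a real "flash organization" platform would draw on far richer signals like work history, availability, and network distance.

```python
# Toy matching score for assembling a project team. All names, fields,
# and weights are hypothetical; this only illustrates the shape of the idea.

def match_score(candidate, project):
    """Score a candidate against a project's needs (higher is better)."""
    # Skill overlap: what fraction of required skills does the candidate cover?
    required = set(project["skills"])
    overlap = len(required & set(candidate["skills"])) / len(required)

    # Trust proxy: average rating from past collaborators, normalized to 0..1.
    trust = candidate["avg_rating"] / 5.0

    # Weight skills more heavily than reputation, so newcomers with the
    # right expertise aren't crowded out by better-known generalists.
    return 0.7 * overlap + 0.3 * trust

candidates = [
    {"name": "Ana", "skills": {"ux", "research"}, "avg_rating": 4.8},
    {"name": "Ben", "skills": {"ux", "frontend", "research"}, "avg_rating": 3.9},
]
project = {"skills": ["ux", "research", "frontend"]}

ranked = sorted(candidates, key=lambda c: match_score(c, project), reverse=True)
print([c["name"] for c in ranked])
```

Even this crude version surfaces the design tension: tune the weights one way and you reinforce existing reputations; tune them the other way and you open the door to underrepresented newcomers.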
Amid the flashy marketing campaigns and rapid technological
advances surrounding virtual assistants like Alexa,
Cortana and Siri, few end users seem willing to question
how the motivation of their creators is likely to affect
the overall experience. Amazon has done much to make
Alexa smart, cheap and useful. However, it has done
so in service of an over-arching purpose: retailing.
Of course, Google, Microsoft and Apple have ulterior
motives for their own assistants, but it should come
as no surprise that Alexa is easily sidetracked by
her desire to sell you things.
In future smart homes, many interactions will be complex
and involve combinations of different devices. People
will need to know not only what goes on but also why.
For example, when smart lights, blinds and indoor climate
systems adjust automatically, home owners should be
able to know what triggered it. Was it weather forecast
data or the behaviour of people at home that made the
thermostat lower the temperature? Which device made
the decision and told the others to react? Especially
when things don’t end up the way we want them to, smart
objects need to communicate more, not less.
As we introduce more sensors, services, and smart gadgets into our life, some of them will inevitably collide. Which one “wins”? And how do we as users see the winner (or even understand that there was a conflict in the first place)?
UX design gets complicated when you introduce multiple triggers from multiple opinionated systems. And of course all those opinionated systems should bow to the most important opinion of all: the user’s. But even that is complicated in a smart-home environment where there are multiple users who have changing needs, desires, and contexts throughout the day. Fun!
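To make the arbitration problem concrete, here's a minimal Python sketch of how a smart-home hub might pick a winner among conflicting triggers and, crucially, keep an explanation around for the homeowner. The device names, priorities, and rules are invented; the point is only that the winning decision carries its "why" with it.

```python
# Toy arbitration among conflicting smart-home triggers, with an
# explanation the user can inspect. Devices and priorities are invented.

from dataclasses import dataclass

@dataclass
class Trigger:
    source: str      # which system proposed the change
    action: str      # proposed setting, e.g. "thermostat=18"
    priority: int    # higher wins; explicit user commands would rank highest
    reason: str      # human-readable answer to "why did this happen?"

def arbitrate(triggers):
    """Pick the winning trigger and build an explanation of the outcome."""
    winner = max(triggers, key=lambda t: t.priority)
    losers = [t for t in triggers if t is not winner]
    explanation = f"{winner.action} because {winner.reason} (via {winner.source})"
    if losers:
        overridden = ", ".join(t.source for t in losers)
        explanation += f"; overrode {overridden}"
    return winner, explanation

triggers = [
    Trigger("weather-service", "thermostat=18", priority=1,
            reason="forecast predicts a warm afternoon"),
    Trigger("occupancy-sensor", "thermostat=21", priority=2,
            reason="people are home and active"),
]
winner, why = arbitrate(triggers)
print(why)
```

Notice that the explanation, not the priority math, is the interesting design surface: it's what lets the user see that there was a conflict at all.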
Designers, if you believe that politics don’t belong at work, guess what: your work is itself political. Software channels behavior, and that means that it’s freighted with values.
Ask yourself: as a designer, what are the behaviors I’m shaping, for what audience, to what end, and for whose benefit? Those questions point up the fact that software is ideological. The least you can do is own that fact and make sure that your software’s politics line up with your own. John Warren Hanawalt explains why:
Designers have a professional responsibility to consider
what impact their work has—whether the project is explicitly
“political” or not. Design can empower or disenfranchise
people through the layout of ballots or UX of social
network privacy settings.
Whose voices are amplified or excluded by the platforms
we build, who profits from or is exploited by the service
apps we code, whether we have created space for self-expression
or avenues for abuse: these are all political design
considerations because they decide who is represented,
who can participate and at what cost, and who has power. […]
If you’re a socially conscious designer, you don’t need to quit your job; you need to do it. That means designing solutions that benefit people without marginalizing or harming others. When your boss or client asks you to do something that might do harm, you have to say no. And if you see unethical behavior happening in other areas of your company, fight for something better. If you find a problem, you have a problem. Good thing solving problems is your job.
When mobile exploded a decade ago, many of us wrestled with designing for the new context of freshly portable interfaces. In fact, we often became blinded by that context, assuming that mobile interfaces should be optimized strictly for on-the-go users: we overdialed on location-based interactions, short attention spans, micro-tasks. The “lite” mobile version ruled.
It turned out that the physical contexts of mobile gadgets (device and environment) were largely red herrings. The notion of a single “mobile context” was a myth that distracted from the more meaningful range of “softer” contexts these devices introduced by unchaining us from the desktop. The truth was that we now had to design for a huge swath of temporal, behavioral, emotional, and social contexts. When digital interfaces can penetrate any moment of our lives, the designer can no longer assume any single context in which they will be used.
This already challenging contextual landscape is even more complicated for predictive AI assistants that constantly run in the background looking for moments to provide just-in-time info. How much do they need to know about current context to judge the right moment to interrupt with (hopefully) useful information?
What’s the experience I want in
being “assisted”? How is that experience
designed? A design that requires me to expend more
effort to take advantage of the assistant’s capabilities
is a step backward.
The design problem becomes more complex when we think
about how assistance is delivered. Norvig’s "reminders"
are frequently delivered in the form of asynchronous
notifications. That’s a problem: with many applications
running on every device, users are subjected to a constant
cacophony of notifications. Will AI be smart enough
to know what notifications are actually wanted, and
which are just annoyances? A reminder to buy milk?
That’s one thing. But on any day, there are probably
a dozen or so things I need, or could possibly use,
if I have time to go to the store. You and I probably
don’t want reminders about all of them. And when do
we want these reminders? When we’re driving by a supermarket,
on the way to the aforementioned doctor’s appointment?
Or would it just order it from Amazon? If so, does
it need your permission? Those are all UX questions,
not AI questions.
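Those UX questions frame reminder delivery as a relevance-filtering problem: score each candidate notification against the user's current context and surface only the ones that clear a bar. Here's a toy sketch; the features and hand-tuned weights are invented, standing in for what a real assistant would learn from user feedback.

```python
# Toy relevance filter for assistant reminders. Features and weights are
# hypothetical; a real system would learn them rather than hard-code them.

def relevance(reminder, context):
    score = 0.0
    # Location fit: a shopping reminder matters most near the right store.
    if reminder["place"] == context["near"]:
        score += 0.6
    # Timing fit: don't interrupt when the user is busy, e.g. driving
    # to an appointment they can't be late for.
    if not context["busy"]:
        score += 0.3
    # User signal: items the user explicitly starred rank higher.
    if reminder["starred"]:
        score += 0.2
    return score

def filter_reminders(reminders, context, threshold=0.6):
    """Surface only reminders relevant enough to justify an interruption."""
    return [r for r in reminders if relevance(r, context) >= threshold]

context = {"near": "supermarket", "busy": False}
reminders = [
    {"text": "Buy milk", "place": "supermarket", "starred": True},
    {"text": "Pick up dry cleaning", "place": "cleaners", "starred": False},
]
print([r["text"] for r in filter_reminders(reminders, context)])
```

The hard part isn't the scoring; it's choosing the threshold, which is exactly the UX judgment (how much interruption is welcome?) rather than an AI question.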
We’ve made lots of fast progress in just the last few years (months, even) in crafting remarkably accurate algorithms. We’re still getting started, though, in crafting the experiences we wrap around them. There’s lots of work to be done right now by designers, including UX research at unprecedented scale, to understand how to put machine learning to use as design material. I have ideas and design principles about how to get started. In the meantime, I really like the way Mike frames the problem:
In a future where humans and computers are increasingly in the loop together, understanding context is essential. But the context problem isn’t solved by more AI. The context is the user experience. What we really need to understand, and what we’ve been learning all too slowly for the past 30 years, is that technology is the easy part.
The most broadly impactful technologies tend to be the ones that become mundane—cheap, expected, part of the fabric of everyday life. We absorb them into our lives, their presence assumed, their costs negligible. Electricity, phones, televisions, internet, refrigeration, remote controls, power windows—once-remarkable technologies that now quietly improve our lives.
That’s why the aspects of machine learning that excite me most right now are the small and mundane interventions that designers and developers can deploy today in everyday projects. As I wrote in Design in the Era of the Algorithm, there are so many excellent (and free!) machine-learning APIs just waiting to be integrated into our digital products. Machine learning is the new design material, and it’s ready today, even for the most modest product features.
If you’re not sure what AI APIs could bring to your products, think about the impact of predictive text on typing. Many such opportunities await.
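Predictive text itself makes a nice minimal illustration of machine learning as design material. A toy bigram model, trained on just a few sentences, is enough to power a "suggest the next word" feature; production keyboards use vastly larger models, but the interaction pattern is the same.

```python
# Minimal bigram model for next-word suggestion, as an illustration of
# machine learning as design material. The training corpus is invented.

from collections import Counter, defaultdict

def train(corpus):
    """Count which word follows which across a list of sentences."""
    followers = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            followers[current][nxt] += 1
    return followers

def suggest(followers, word, n=3):
    """Return up to n most likely next words after the given word."""
    return [w for w, _ in followers[word.lower()].most_common(n)]

corpus = [
    "see you at the meeting",
    "see you at lunch",
    "running late for the meeting",
]
model = train(corpus)
print(suggest(model, "at"))   # likeliest continuations of "at"
```

The algorithm is trivial; the design work is deciding when and how suggestions appear, which is precisely the "experiences we wrap around them" problem.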
All of this reminds me of an essay my friend Evan Prodromou wrote last year about making software with casual intelligence. It’s a wonderful call to action for designers and developers to start integrating machine learning into everyday design projects.
Programmers in the next decade are going to make huge
strides in applying artificial intelligence techniques
to software development. But those advances aren’t
all going to be in moonshot projects like self-driving
cars and voice-operated services. They’re going to
be billions of incremental intelligent updates to our
interfaces and back-end systems.
I call this _casual intelligence_ — making everything
we do a little smarter, and making all of our software
that much easier and more useful. It’s casual because
it makes the user’s experience less stressful, calmer,
more leisurely. It’s also casual because the developer
or designer doesn’t think twice about using AI techniques.
Intelligence becomes part of the practice of software development.
Evan touches on one of the most intriguing implications of designing data-driven interfaces. When machines generate both content and interaction, they will often create experiences that designers didn’t imagine (both for better and for worse). The designer’s role may evolve into one of corralling the experience in broad directions, rather than down narrow paths. (See conversational interfaces and open-ended, Alexa/Siri-style interactions, for example.)
Designers need to stop thinking in terms of either-or interfaces — either we do it this way, or we do it that way. Casual intelligence lets interfaces become _and-also_ — different users have different experiences. Some users will have experiences never dreamed of in your wireframes — and those may be the best ones of all.
As machines become better than people at so many things, the natural question is what’s left for humans—and indeed what makes us human in the first place? Or more practically: what is the future of work for humans if machines are smarter than us in so many ways? Writing for Harvard Business Review, Ed Hess suggests that the answer is in shifting the meaning of human smarts away from information recall, pattern-matching, fast learning—and even accuracy.
What is needed is a new definition of being smart,
one that promotes higher levels of human thinking and
emotional engagement. The new smart will be determined
not by what or how you know but by the quality of your
thinking, listening, relating, collaborating, and learning.
Quantity is replaced by quality. And that shift will
enable us to focus on the hard work of taking our cognitive
and emotional skills to a much higher level.
We will spend more time training to be open-minded
and learning to update our beliefs in response to new
data. We will practice adjusting after our mistakes,
and we will invest more in the skills traditionally
associated with emotional intelligence. The new smart
will be about trying to overcome the two big inhibitors
of critical thinking and team collaboration: our ego
and our fears. Doing so will make it easier to perceive
reality as it is, rather than as we wish it to be.
In short, we will embrace humility. That is how we
humans will add value in a world of smart technology.
Frank Chen of Andreessen Horowitz suggests that while machine learning and AI are today’s new hotness, they’re bound to be the humdrum norm in just a few short years. Products that don’t have it baked in will seem oddly quaint:
Not having state-of-the-art AI techniques powering
their software would be like not having a relational
database in their tech stack in 1980 or not having
a rich Windows client in 1987 or not having a Web-based
front end in 1995 or not being cloud native in 2004
or not having a mobile app in 2009. In other words,
in a small handful of years, software without AI will seem quaint.
So ambitious founders will need to invest some other
way to differentiate themselves from the crowd — and
investors will be looking for other ways to decide
whether to fund a startup. And investors will stop
looking for AI-powered startups in exactly the same
way they don’t look for database-inside or cloud-native
or mobile-first startups anymore. All those things
are just assumed.
As Chen says, this feels like mobile just a few years ago. Just as mobile was the oxygen feeding emerging interactions and capabilities, machine learning is doing the same now. All the new interactions, all the new digital superpowers, they’re all being fueled by machine learning and algorithms.
In 2012, I wrote a chapter for The Mobile Book, for which Jeremy Keith wrote a prescient foreword. “This book is an artefact of its time,” he wrote. “There will come a time when this book will no longer be necessary, when designing and developing for mobile will simply be part and parcel of every Web worker’s lot.”
Yep, five years later, mobile is an assumed part of the job. If you were writing a “Machine Learning Book” today, you could borrow the same observation for the foreword. It’s time to get your game on now, since this will be an assumed capability in short order.
Uniforms also have to reflect the realities of life
on the road, with fabric blends that resist stains
and wrinkles and can be laundered, if necessary, in
a hotel sink. They also need to keep the wearers comfortable,
whether their plane touches down in the summer in Maui
or in the winter in Minneapolis.
Before giving the new uniforms to employees, the airlines
conduct wear tests. The roughly 500 employees in American’s
test reported back on details that needed to be changed.
For example, Mr. Byrnes said, an initial dress prototype
included a back zipper, but flight attendants found
it challenging to reach. So the zipper was scuttled
in favor of buttons on the front.
For its 1,000-employee wear test, Delta solicited feedback
via surveys, focus groups, an internal Facebook page
and job shadowing, in which members of the design team
traveled with flight crews to get a firsthand view
of the demands of the job.
“We had about 160-plus changes to the uniform design”
as a result of those efforts, Mr. Dimbiloglu said.
The depth of the process makes sense because these uniforms not only define the company brand but also shape the working lives of thousands of people. Come to think of it, that’s true of pretty much any enterprise software, too. If you’re the designer of such things, are you bringing the same commitment to research, testing, and refinement to your software projects?
We’re trying to capture some of these patterns as we
work on voice UI design for young platforms like Alexa.
We identified five patterns and what they’re best suited for.
This is how the industry finds its way to best practices: experimenting, sharing solutions, and finally, putting good names to those solutions. Cooper is off to a good start with these design patterns.
Ross Ufberg takes a deep dive into the design process behind Microsoft’s hyper-swiveling Surface Studio, a high-concept device that turns the desktop PC into a drafting table. A key ingredient to the project’s breakthrough success seems to be the highly collaborative, cross-disciplinary team that they corralled into Microsoft’s Building 87 to invent the thing:
Under one roof, Microsoft has united a team of designers,
engineers, and prototypers, and invested heavily in
infrastructure and equipment, so that Building 87 can
be a self-contained hub, complete with manufacturing
capabilities that usually would be located offsite
or outsourced. Having these capabilities close at hand
drastically cuts down on dead time, so that, in some
cases, mere hours after a designer sends a concept
down the hall to the prototypers, they can figure out
a way to embody that concept, and print it in 3D or
manufacture it on the spot. The model-making team can
then hand that iteration off to the mechanical engineers,
who assess the viability of the concept and figure
out ways to improve it. It’s sort of like one endless
feedback loop, with designers conceiving, prototypers
creating, engineers correcting, and back again to the designers.
This is exactly the spirit of Big Medium’s own (far smaller) design teams. We have constant collaboration and feedback among product designers, visual designers, front-end designers, and developers. It’s not a linear process, but a constant conversation that blends experiments, false starts, grand leaps, successes, and gradual improvements. This turns out to be both faster and more creative.
In workshops and design engagements, we coach client organizations how to adopt this collaborative, iterative design process. Designers and developers often tell us at first that it feels unfamiliar, even uncomfortable, to collaborate across the entire design process—and especially when ideas are being formed. It’s natural to shy away from sharing before something is fully thought out. But that’s exactly where the most productive cross-disciplinary experiments happen.
One of Surface Studio’s signature design innovations is the hinge that lets it shift instantly from upright desktop monitor to a sloped, dial-and-stylus drafting board. In Ufberg’s telling, it wouldn’t have happened at all without a culture of cross-disciplinary experimentation.
“It is way easier to try something than tell somebody it can’t be done.”
When the idea of a hinge was first tossed out in a
brainstorming meeting, [mechanical engineering director Andrew] Hill tells me, “there were ten
different people who said that it doesn’t make any
sense, it would be too complicated to make it work.
But then a couple weeks later, we got a prototype out
of our model shop where you could see the mechanism
starting to come together, and the people who were
saying it couldn’t be done started to come over and
be like ‘Huh, maybe we could do something like this.’”
I ask him if he was one of those doubters.
“One of the things that I found out about myself is
it is way easier to try something than tell somebody
it can’t be done,” he confesses. “There’s magic in
the suspension of disbelief. If you just do stuff that
you know you’re going to be able to do, you know where
you’re going to go. If you try something that you’re
not quite sure is going to work, at least you’re exposed
to new problems and you get smarter in that way, and
in the good cases, you move the whole thing forward.”
In its first iteration, the hinge was just a piece
of cardboard glued crudely to a kickstand. But then,
the feedback loop kicked into place.
The bottom line: When in doubt… don’t doubt. Build a quick prototype—the most low-fi thing you can create to test the concept—and then share it with people from other disciplines. It’s how you manage risk in an inherently risky exploration into the new.
That’s good advice not only for industrial design, but for pretty much anytime you’re really trying to make something new and better. And isn’t that all of us?
Need help transforming your organization’s design process for faster and more creative results? That’s what we do! Get in touch for a workshop, executive session, or design engagement.