Design Tools Are Running Out of Track
∞ Oct 14, 2018
About a year ago, Colm Tuite reviewed the state of UI design tools and found them wanting: Design Tools Are Running Out of Track. If anything, his critique feels even more relevant a year later. Our most popular design tools are fundamentally disconnected from the realities and constraints of working software:
- They generate static images in an era of voice, video, motion, and complex interactions. (“Our design tools should manipulate the actual product, not a picture of it.”)
- They have no awareness of the layout conventions of the web, so they don’t help designers work with the grain of CSS grid and flexbox.
- They’re tuned for infinite flexibility instead of usefully embracing the constraints of a design system or code base.
As I’ve worked with more and more companies struggling to design at scale, this last point has proven to be especially troublesome when maintaining or evolving existing software. Most design tools are not well tuned to support designer-developer collaboration within design systems (though some are beginning to innovate here). Tuite writes:
Your design tool is never going to tell you that you can’t do something. It’s never going to pull you up for using an off-brand color. It’s never going to prevent you from using a whitespace value which doesn’t belong in your spacing scale. It’s never going to warn you that 20% of the population literally cannot see that light gray text you’ve just designed.
And why not…? Because design tools don’t care.
Design tools are so waywardly enamoured with a vision for unlimited creativity that they have lost sight of what it means to design sensibly, to design inclusively, to design systematically.
Put simply, design tools allow us to do whatever the hell we want. To some extent, this level of boundless creativity is useful, especially in the ideation phases. As UI designers though, the majority of our workflow doesn’t call for much creativity. Rather, our workflow calls for reuse, repetition, familiarity and standardisation; needs that our tools do little to satisfy.
Developer culture and workflow have a strong bias toward consistency and reuse. That’s less true of design, and the tools are part of the problem. When there are no guardrails, it’s easy to wander off the road. Our tools don’t help us stay on the path within established design systems.
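To make the guardrail idea concrete, here’s a rough sketch of the kinds of checks Tuite is asking for: validating a whitespace value against a spacing scale, and flagging low-contrast text with the standard WCAG contrast formula. This is my own illustration, not a feature any current design tool ships; the scale values and colors are invented.

```ts
// Hypothetical sketch: the kind of guardrail a design tool could enforce.
// Scale values, colors, and thresholds are invented for illustration.

const spacingScale = [0, 4, 8, 16, 24, 32, 48, 64]; // an example spacing scale, in px

function isOnSpacingScale(px: number): boolean {
  return spacingScale.includes(px);
}

// WCAG relative luminance for an sRGB color with 0–255 channels.
function luminance([r, g, b]: [number, number, number]): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05); 4.5:1 is the AA minimum for body text.
function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const [lighter, darker] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// An off-scale margin and the light gray text Tuite mentions both fail immediately.
console.log(isOnSpacingScale(18));                             // false: 18px isn't in the scale
console.log(contrastRatio([170, 170, 170], [255, 255, 255]));  // ≈ 2.3, well below the 4.5:1 AA minimum
```

None of these checks is exotic or hard to compute; the point is that our design tools simply don’t run them.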
This causes a disconnect between designers and developers because design comps drift from the realities of the established patterns in the code base. A Sketch library—or any collected drawings of software—can be a canonical UI reference only when the design is first conceived. Once the design gets into code, the product itself should be the reference, and fresh design should work on top of that foundation. It’s more important that our design libraries reflect what’s in the code than the reverse. Production code—and the UI it generates—has to be the single source of truth, or madness ensues.
That doesn’t mean that developers exclusively run the show or that we as designers have no agency in the design system. We can and should offer changes to the design and interaction of established patterns. But we also have to respect the norms that we’ve already helped to establish, and our tools should, too.
That’s the promise of design-token systems like InVision’s Design System Manager. Tokens help to establish baseline palettes and styles across code and design tools. The system gets embedded in whatever environment designers or developers prefer to work in. Designers and developers alike can edit those rules at the source—within the system itself.
This approach is a step forward in helping designers and developers stay in sync by contributing to the same environment: the actual product and the pattern library that feeds it. We’ve seen a lot of success helping client teams to make this transition, but it requires adopting a (sometimes challenging) new perspective on how to work—and where design authority lies. Big rewards come with that change in worldview.
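For anyone new to the token idea, here’s a loose sketch of what a token source might look like and how it flows into production code. The shape and names are made up for illustration, not DSM’s actual format.

```ts
// Illustrative only: a made-up token source, not InVision DSM's actual schema.
// The idea is that a single definition feeds both production code and design tools.

const tokens: Record<string, Record<string, string>> = {
  color: {
    brandPrimary: "#0a66c2",
    textMuted: "#6b7280",
  },
  space: {
    sm: "8px",
    md: "16px",
    lg: "24px",
  },
};

// One possible output: CSS custom properties for the production code base.
function toCssVariables(groups: Record<string, Record<string, string>>): string {
  const lines: string[] = [];
  for (const [category, values] of Object.entries(groups)) {
    for (const [name, value] of Object.entries(values)) {
      lines.push(`  --${category}-${name}: ${value};`);
    }
  }
  return `:root {\n${lines.join("\n")}\n}`;
}

console.log(toCssVariables(tokens));
// The same source could be synced into a design tool's shared styles,
// so designers and developers edit one definition instead of two.
```

Because the design tool and the code base consume the same source, an edit to a token propagates everywhere instead of drifting out of sync.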
Is your organization wrestling with inconsistent interfaces and duplicative design work? Big Medium helps companies scale great design and improve collaboration through design systems. Get in touch for a workshop, executive session, or design engagement.
Apple Used to Know Exactly What People Wanted — Then It Made a Watch
∞ Oct 5, 2018
The latest version of Apple Watch doubles down on its fitness and health-tracking sensors, but as John Herrman writes in The New York Times, it’s not yet clear exactly what value all that data-tracking might deliver—and for whom:
For now, this impressive facility for collecting and organizing information about you is just that — it’s a great deal of data with not many places to go. This is sensitive information, of course, and Apple’s relative commitment to privacy — at least compared with advertising-centric companies like Google and Facebook — might be enough to get new users strapped in and recording.
As Apple continues its institutional struggle to conceive of what the Apple Watch is, or could be, in the imaginations of its customers, it’s worth remembering that Apple’s stated commitment to privacy is, in practice, narrow. The competitors that Cook likes to prod about their data-exploitative business models have a necessary and complicit partner in his company, having found many of their customers through Apple’s devices and software.
This is especially relevant as Apple casts about for ideas elsewhere. Apple has already met with the insurance giant Aetna about ways in which the company might use Apple Watches to encourage healthier — and cheaper — behavior in its tens of millions of customers. John Hancock, one of the largest life insurers in America, said after Apple’s latest announcement that it would offer all its customers the option of an interactive policy, in which customers would get discounts for healthy habits, as evidenced by data from wearable devices. Here we see the vague outlines of how the Apple Watch could become vital, or at least ubiquitous, as the handmaiden to another data-hungry industry.
Facebook Is Giving Advertisers Access to Your Shadow Contact Information
∞ Sep 27, 2018
One of the more insidious aspects of the social graph is that companies can mine data about you even if you don’t actively participate in their network. Your friends inadvertently give you up, as Kashmir Hill writes at Gizmodo:
Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn’t hand over at all, but that was collected from other people’s contact books, a hidden layer of details Facebook has about you that I’ve come to call “shadow contact information.”
Information that we assume to be under our control is not. Or, in many cases, information that you provide for one specific purpose is then flipped around and applied to another. Hill mentions an especially cynical dark-pattern example:
[Researchers] found that when a user gives Facebook a phone number for two-factor authentication or in order to receive alerts about new log-ins to a user’s account, that phone number became targetable by an advertiser within a couple of weeks. So users who want their accounts to be more secure are forced to make a privacy trade-off and allow advertisers to more easily find them on the social network.
This is despicable. This is a moment when companies should strive to improve literacy about data sharing and data usage. Instead, companies like Facebook purposely obscure and misdirect. This is both a crisis and an opportunity. As designers, how might we build new business models and interactions that rely on honesty and respect, instead of deception and opportunism?
Arguments for transparency are too often met with counterarguments like, “Well, if we tell them what we’re doing, they might not opt in.” (Or, more bluntly, “If people knew about it, they wouldn’t want any part of it.”) When we find ourselves using these words to justify covering our tracks, it’s a cue that we almost certainly shouldn’t be doing that thing in the first place.
Google Data Collection Research
∞ Sep 27, 2018
Whoops, Google, it looks like your business model is showing…
In “Google Data Collection,” Douglas C. Schmidt, Professor of Computer Science at Vanderbilt University, catalogs how much data Google is collecting about consumers and their most personal habits across all of its products and how that data is being tied together.
The key findings include:
- A dormant, stationary Android phone (with the Chrome browser active in the background) communicated location information to Google 340 times during a 24-hour period, or at an average of 14 data communications per hour. In fact, location information constituted 35 percent of all the data samples sent to Google.
- For comparison’s sake, a similar experiment found that on an iOS device with Safari but not Chrome, Google could not collect any appreciable data unless a user was interacting with the device. Moreover, an idle Android phone running the Chrome browser sends back to Google nearly fifty times as many data requests per hour as an idle iOS phone running Safari.
- An idle Android device communicates with Google nearly 10 times more frequently than an Apple device communicates with Apple servers. These results highlighted the fact that Android and Chrome platforms are critical vehicles for Google’s data collection. Again, these experiments were done on stationary phones with no user interactions. If you actually use your phone, the information collection increases with Google.
Pair that with Google’s substantial ad tech, including the network formerly known as DoubleClick, and Google’s data collection reaches well beyond the company’s own properties:
A major part of Google’s data collection occurs while a user is not directly engaged with any of its products. The magnitude of such collection is significant, especially on Android mobile devices, arguably the most popular personal accessory now carried 24/7 by more than 2 billion people.
If Software Is Eating the World, What Will Come Out the Other End?
∞ Sep 23, 2018
“So far, it’s mostly shit,” writes John Battelle, suggesting that there’s a world beyond the optimization and efficiency so cherished by the would-be disrupters:
But the world is not just software. The world is physics, it’s crying babies and shit on the sidewalk, it’s opioids and ecstasy, it’s car crashes and Senate hearings, lovers and philosophers, lost opportunities and spinning planets around untold stars. The world is still real. Software hasn’t eaten it as much as bound it in a spell, temporarily I hope, while we figure out what comes next.
The iPhone’s original UI designer on Apple’s greatest flaws
∞ Sep 10, 2018
Fast Company offers an interview with Imran Chaudhri, the original designer of the iPhone user interface. According to Chaudhri, Apple knew that the device and its notifications would be distracting, that the personal nature of the phone would soak up attention in entirely new ways. But Apple consciously decided not to make it easy to tone down those distractions:
“Inside, getting people to understand that [distraction] was going to be an issue was difficult. Steve [Jobs] understood it…internally though, I think there was always a struggle as to how much control do we want people to have over their devices. When I and a few other people were advocating for more control, that level of control was actually pushed back by marketing. We would hear things like, ‘you can’t do that because then the device will become uncool.’
“The controls exist for you. They’ve always been there and yet it’s incredibly hard to know how to use them and to manage them. You literally have to spend many days to go through and really understand what’s bombarding you and then turn those things off in a singular fashion. So for the people who understand the system really well, they can take advantage of it, but the people that don’t—the people that don’t even change their ringtone, who don’t even change their wallpaper—those are the real people that suffer from this sort of thing. They don’t have that level of control.”
Since then, Apple has embraced privacy as a competitive advantage versus Android, but Chaudhri suggests that iOS could do more to offer transparency and smart adjustments to personal settings:
“The system is intelligent enough to let you know that there are [apps] that you’ve given permission to that are still using your data, and notifications you’ve turned on that you’re not actually responding to. So let’s circle back and let’s reestablish a dialogue between the phone and the customer, where the phone asks, ‘Do you really need these notifications? Do you really want Facebook to be using your address book data? Because you’re not logging into Facebook anymore.’ There’s a lot of ways to remind people if you just design them properly.”
Seems to me that we should all do a similar inventory of the systems we design. There remain so many opportunities to create interventions to improve user literacy and control over privacy, data usage, and distraction. Responsible design in the era of the algorithm demands this kind of transparency.
Also, when Chaudhri says, “there was always a struggle as to how much control do we want people to have over their devices,” my take is: people should have all the control.
Consider the Beer Can
∞ Sep 10, 2018
Once upon a time, beer cans had no tab. They were sealed cans, and you used a church key to punch holes in them. In 1962, the “zip top” tab was invented, letting you open the can by peeling off a (razor-sharp) tab. John Updike was not impressed:
This seems to be an era of gratuitous inventions & negative improvements. Consider the beer can. It was beautiful as a clothespin, as inevitable as the wine bottle, as dignified & reassuring as the fire hydrant. A tranquil cylinder of delightfully resonant metal, it could be opened in an instant, requiring only the application of a handy gadget freely dispensed by every grocer… Now we are given, instead, a top beetling with an ugly, shmoo-shaped “tab,” which, after fiercely resisting the tugging, bleeding fingers of the thirsty man, threatens his lips with a dangerous & hideous hole. However, we have discovered a way to thwart Progress… Turn the beer can upside down and open the bottom. The bottom is still the way the top used to be. This operation gives the beer an unsettling jolt, and the sight of a consistently inverted beer can makes some people edgy. But the latter difficulty could be cleared up if manufacturers would design cans that looked the same whichever end was up, like playing cards. Now, that would be progress.
I love this. It conjures lots of questions for designers as we seek to improve existing experiences:
- What do innovations cost in social and physical pleasures when they disrupt familiar experiences?
- What price do we pay (or extract from others) when we design for efficiency? Whose efficiency are we designing for anyway?
- How do we distinguish nostalgia from real loss (and does the distinction matter)?
- How can we take useful lessons from the hacks our customers employ to work around our designs?
Related: Eater covers the history of beer-can design. You’re welcome.
How to have a healthy relationship with tech
∞ Sep 10, 2018
At Well+Good, the wonderful Liza Kindred describes how to make personal technology serve you, instead of the reverse. It all starts with realizing that your inability to put down your phone isn’t a personal failing, it’s something that’s been done to you:
“The biggest problem with how people engage with technology is technology, not the people,” she says. “Our devices and favorite apps are all designed to keep us coming back for more. That being said, there are many ways for us to intervene in our own relationships with tech, so that we can live this aspect of our lives in a way we can be proud of.”
Liza offers several pointers for putting personal technology in its place. My personal favorite:
Her biggest recommendation is turning off all notifications not sent by a human. See ya, breaking news, Insta likes, and emails. “Your time is more valuable than that,” Kindred says.
Alas, these strategies are akin to learning self-defense skills during a crime wave. They’re helpful (critical, even), but the core problem remains. In this case, the “crime wave” is the cynical, engagement-hungry strategies that too many companies employ to keep people clicking and tapping. And clicking and tapping. And clicking and tapping.
Liza’s on the case there, too. Her company Holy Shift helps people find mindful and healthy experiences in a modern, distracting, engagement-heavy world. I’ve participated in her Mindful Technology workshops and they’re mind-opening. Liza demonstrates that design patterns and business models you might take for granted as best practices do more damage than you realize.
Meanwhile, we’ll have to continue to sharpen those self-defense skills.
“Trigger for a rant”
∞ Jul 1, 2018
In his excellent Four Short Links daily feature, Nat Torkington has something to say about innovation poseurs—in the mattress industry:
Why So Many Online Mattress Brands – trigger for a rant: software is eating everything, but that doesn’t make everything an innovative company. If you’re applying the online sales playbook to product X (kombucha, mattresses, yoga mats) it doesn’t make you a Level 9 game-changing disruptive TechCo, it makes you a retail business keeping up with the times. I’m curious where the next interesting bits of tech are.
Should computers serve humans, or should humans serve computers?
∞ Jun 30, 2018
Nolan Lawson considers dystopian and utopian possibilities for the future, with a gentle suggestion that front-line technologists have some agency here. What kind of world do you want to help build?
The core question we technologists should be asking ourselves is: do we want to live in a world where computers serve humans, or where humans serve computers?
Or to put it another way: do we want to live in a world where the users of technology are in control of their devices? Or do we want to live in a world where the owners of technology use it as yet another means of control over those without the resources, the knowledge, or the privilege to fight back?
s5e11: Things That Have Caught My Attention
∞ May 20, 2018
In a recent edition of his excellent stream-of-consciousness newsletter, Dan Hon considers Alexa Kids Edition in which, among other things, Alexa encourages kids to say “please.” There are challenges and pitfalls, Dan writes, in designing a one-size-fits-all system that talks to children and, especially, teaches them new behaviors.
Parenting is a very personal subject. As I have become a parent, I have discovered (and validated through experimental data) that parents have very specific views about how to do things! Many parents do not agree with each other! Parents who agree with each other on some things do not agree on other things! In families where there are two parents there is much scope for disagreement on both desired outcome and method!
All of which is to say is that the current design, architecture and strategy of Alexa for Kids indicates one sort of one-size-fits-all method and that there’s not much room for parental customization. This isn’t to say that Amazon are actively preventing it and might not add it down the line - it’s just that it doesn’t really exist right now. Honan’s got a great point that:
"[For example,] take the magic word we mentioned earlier. There is no universal norm when it comes to whatâs polite or rude. Manners vary by family, culture, and even region. While âyes, sirâ may be de rigueur in Alabama, for example, it might be viewed as an element of the patriarchy in parts of California."
AI Is Harder Than You Think
∞ May 20, 2018
In the New York Times opinion section, Gary Marcus and Ernest Davis suggest that today’s data-crunching model for artificial intelligence is not panning out. Instead of truly understanding logic or language, today’s machine learning identifies data patterns to recognize and reflect human behavior. The systems this approach creates tend to mimic more than think. As a result, we have some impressive but incredibly narrow applications of AI. The culmination of artificial intelligence appears to be making salon appointments.
Decades ago, the approach was different. The AI field tried to understand the elements of human thought—and teach machines to actually think. The goal proved elusive and the field drifted instead to what machines were already better at understanding, pattern recognition. Marcus and Davis say the detour has not proved helpful:
Once upon a time, before the fashionable rise of machine learning and “big data,” A.I. researchers tried to understand how complex knowledge could be encoded and processed in computers. This project, known as knowledge engineering, aimed not to create programs that would detect statistical patterns in huge data sets but to formalize, in a system of rules, the fundamental elements of human understanding, so that those rules could be applied in computer programs. Rather than merely imitating the results of our thinking, machines would actually share some of our core cognitive abilities.
That job proved difficult and was never finished. But “difficult and unfinished” doesn’t mean misguided. A.I. researchers need to return to that project sooner rather than later, ideally enlisting the help of cognitive psychologists who study the question of how human cognition manages to be endlessly flexible.
Today’s dominant approach to A.I. has not worked out. Yes, some remarkable applications have been built from it, including Google Translate and Google Duplex. But the limitations of these applications as a form of intelligence should be a wake-up call. If machine learning and big data can’t get us any further than a restaurant reservation, even in the hands of the world’s most capable A.I. company, it is time to reconsider that strategy.
Google Duplicitous
∞ May 9, 2018
Jeremy Keith comments on Google’s announcement of Google Duplex:
The visionaries of technology—Douglas Engelbart, J.C.R. Licklider—have always recognised the potential for computers to augment humanity, to be bicycles for the mind. I think they would be horrified to see the increasing trend of using humans to augment computers.
Do You Have “Advantage Blindness”?
∞ Apr 27, 2018
At Harvard Business Review, Ben Fuchs, Megan Reitz, and John Higgins consider the responsibility of identifying our own blind spots—the biases, privileges, and disadvantages we haven’t admitted to ourselves. It’s important (and sometimes bruising) work—all the more important if you’re in a privileged position that gives you the leverage to make a difference for others.
To address inequality of opportunity, we need to acknowledge and address the systemic advantages and disadvantages that people experience daily. For leaders, recognizing their advantage blindness can help to reduce the impact of bias and create a more level playing field for everyone. Being advantaged through race and gender come with a responsibility to do something about changing a system that unfairly disadvantages others.
The Juvet Agenda
∞ Oct 30, 2017
I had the privilege last month of joining 19 other designers, researchers, and writers to consider the future (both near and far) of artificial intelligence and machine learning. We headed into the woods—to the Juvet nature retreat in Norway—for several days of hard thinking. Under the northern lights, we considered the challenges and opportunities that AI presents for society, for business, for our craft—and for all of us individually.
Answers were elusive, but questions were plenty. We decided to share those questions, and the result is the Juvet Agenda. The agenda lays out the urgent themes surrounding AI and presents a set of provocations for teasing out a future we want to live in:
Artificial intelligence? It’s complicated. It’s the here and now of hyper-efficient algorithms, but it’s also the heady possibility of sentient systems. It might be history’s greatest opportunity or its worst existential threat — or maybe it will only optimize what we’ve already got. Whatever it is and whatever it might become, the thing is moving too fast for any of us to sit still. AI demands that we rethink our methods, our business models, maybe even our cultures.
In September 2017, 20 designers, urbanists, researchers, writers, and futurists gathered at the Juvet nature retreat among the fjords and forests of Norway. We came together to consider AI from a humanist perspective, to step outside the engineering perspective that dominates the field. Could we sort out AI’s contradictions? Could we describe its trajectory? Could we come to any conclusions?
Across three intense days the group captured ideas, played games, drew diagrams, and snapped photos. In the end, we arrived at more questions than answers — and Big Questions at that. These are not topics we can or should address alone, so we share them here.
Together these questions ask how we can shape AI for a world we want to live in. If we don’t decide for ourselves what that world looks like, the technology will decide for us. The future should not be self-driving; let’s steer the course together.
Stop Pretending You Really Know What AI Is
∞ Sep 9, 2017
“Artificial intelligence” is broadly used in everything from science fiction to the marketing of mundane consumer goods, and it no longer has much practical meaning, bemoans John Pavlus at Quartz. He surveys practitioners about what the phrase does and doesn’t mean:
It’s just a suitcase word enclosing a foggy constellation of “things”—plural—that do have real definitions and edges to them. All the other stuff you hear about—machine learning, deep learning, neural networks, what have you—are much more precise names for the various scientific, mathematical, and engineering methods that people employ within the field of AI.
But what’s so terrible about using the phrase “artificial intelligence” to enclose all that confusing detail—especially for all us non-PhDs? The words “artificial” and “intelligent” sound soothingly commonsensical when put together. But in practice, the phrase has an uncanny almost-meaning that sucks adjacent ideas and images into its orbit and spaghettifies them.
Me, I prefer to use “machine learning” for most of the algorithmic software I see and work with, but “AI” is definitely a convenient (if overused) shorthand.
AI Guesses Whether You're Gay or Straight from a Photo
∞ Sep 9, 2017
Well this seems ominous. The Guardian reports:
Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research that suggests machines can have significantly better “gaydar” than humans.
The study from Stanford University – which found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% for women – has raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology, and the potential for this kind of software to violate people’s privacy or be abused for anti-LGBT purposes.
The Pop-Up Employer: Build a Team, Do the Job, Say Goodbye
∞ Aug 2, 2017
Big Medium is what my friend and collaborator Dan Mall calls a design collaborative. Dan runs his studio SuperFriendly the same way I run Big Medium: rather than carry a full-time staff, we both spin up bespoke teams from a tight-knit network of well-known domain experts. Those teams are carefully chosen to meet the specific demands of each project. It’s a very human, very personal way to source project teams.
And so I was both intrigued and skeptical to read about an automated system designed to do just that at a far larger scale. Noam Scheiber reporting for The New York Times:
True Story was a case study in what two Stanford professors call “flash organizations” — ephemeral setups to execute a single, complex project in ways traditionally associated with corporations, nonprofit groups or governments. […]
And, in fact, intermediaries are already springing up across industries like software and pharmaceuticals to assemble such organizations. They rely heavily on data and algorithms to determine which workers are best suited to one another, and also on decidedly lower-tech innovations, like middle management. […]
“One of our animating goals for the project was, would it be possible for someone to summon an entire organization for something you wanted to do with just a click?” Mr. Bernstein said.
The fascinating question here is how systems might develop algorithmic proxies for the measures of trust, experience, and quality that weave the fabric of our professional networks. But even more intriguing: how might such models help to connect underrepresented groups with work they might otherwise never have access to? For that matter, how might those models introduce me to designers outside my circle who might introduce more diverse perspectives into my own work?
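To make “algorithmic proxy” a little less abstract, here’s a toy sketch with invented signals and weights (not how any real flash-organization platform scores people). The inputs are stand-ins for trust, experience, and reliability, and every cap and weight encodes a judgment about whose work gets surfaced.

```ts
// Toy sketch with invented fields and weights; no real platform is documented here.

interface CandidateSignals {
  endorsements: number;    // vouches from past collaborators, a stand-in for trust
  similarProjects: number; // prior projects in the same domain, a stand-in for experience
  onTimeRate: number;      // share (0–1) of past deliverables shipped on schedule
}

function matchScore(c: CandidateSignals): number {
  // Arbitrary caps and weights, chosen only for illustration.
  const trust = Math.min(c.endorsements / 10, 1);
  const experience = Math.min(c.similarProjects / 5, 1);
  return 0.4 * trust + 0.3 * experience + 0.3 * c.onTimeRate;
}

console.log(matchScore({ endorsements: 6, similarProjects: 2, onTimeRate: 0.9 })); // 0.63
```

Whether proxies like these widen the circle or simply re-encode the biases of our existing networks is exactly the design question worth asking.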