Airlines Redesigning Uniforms Find Out How Complicated It Is
∞ Jul 9, 2017
I’m a fan of the commitment, iteration, and heavy testing that go into the design of airline uniforms. Martha C. White reports for The New York Times that the process can take 2–3 years from start to finish.
Uniforms also have to reflect the realities of life on the road, with fabric blends that resist stains and wrinkles and can be laundered, if necessary, in a hotel sink. They also need to keep the wearers comfortable, whether their plane touches down in the summer in Maui or in the winter in Minneapolis.
Before giving the new uniforms to employees, the airlines conduct wear tests. The roughly 500 employees in American’s test reported back on details that needed to be changed. For example, Mr. Byrnes said, an initial dress prototype included a back zipper, but flight attendants found it challenging to reach. So the zipper was scuttled in favor of buttons on the front.
For its 1,000-employee wear test, Delta solicited feedback via surveys, focus groups, an internal Facebook page and job shadowing, in which members of the design team traveled with flight crews to get a firsthand view of the demands of the job.
“We had about 160-plus changes to the uniform design” as a result of those efforts, Mr. Dimbiloglu said.
The depth of the process makes sense, because these uniforms not only define the company brand but also shape the working life of thousands of people. Come to think of it, that’s true of pretty much any enterprise software, too. If you’re the designer of such things, are you bringing the same commitment to research, testing, and refinement to your software projects?
Uncovering Voice UI Design Patterns
∞ Jul 9, 2017
The folks at Cooper are learning by doing as they experiment with building their own voice apps for Alexa and other platforms. As they’ve begun to encounter recurring problems, they’re taking note of the design patterns that solve them.
We’re trying to capture some of these patterns as we work on voice UI design for young platforms like Alexa. We identified five patterns and what they’re best suited for here.
This is how the industry finds its way to best practices: experimenting, sharing solutions, and finally, putting good names to those solutions. Cooper is off to a good start with these design patterns:
- A la carte menu
- Secret menu
- Confident command
- Call and response
- Educated guess
Surface Deep
∞ Jul 9, 2017
Ross Ufberg takes a deep dive into the design process behind Microsoft’s hyper-swiveling Surface Studio, a high-concept device that turns the desktop PC into a drafting table. A key ingredient in the project’s breakthrough success seems to be the highly collaborative, cross-disciplinary team that Microsoft corralled into Building 87 to invent the thing:
Under one roof, Microsoft has united a team of designers, engineers, and prototypers, and invested heavily in infrastructure and equipment, so that Building 87 can be a self-contained hub, complete with manufacturing capabilities that usually would be located offsite or outsourced. Having these capabilities close at hand drastically cuts down on dead time, so that, in some cases, mere hours after a designer sends a concept down the hall to the prototypers, they can figure out a way to embody that concept, and print it in 3D or manufacture it on the spot. The model-making team can then hand that iteration off to the mechanical engineers, who assess the viability of the concept and figure out ways to improve it. It’s sort of like one endless feedback loop, with designers conceiving, prototypers creating, engineers correcting, and back again to the designers.
This is exactly the spirit of Big Medium’s own (far smaller) design teams. We have constant collaboration and feedback among product designers, visual designers, front-end designers, and developers. It’s not a linear process, but a constant conversation that blends experiments, false starts, grand leaps, successes, and gradual improvements. This turns out to be both faster and more creative.
In workshops and design engagements, we coach client organizations on adopting this collaborative, iterative design process. Designers and developers often tell us at first that it feels unfamiliar, even uncomfortable, to collaborate across the entire design process—especially when ideas are still taking shape. It’s natural to shy away from sharing before something is fully thought out. But that’s exactly where the most productive cross-disciplinary experiments happen.
One of Surface Studio’s signature design innovations is the hinge that lets it shift instantly from upright desktop monitor to a sloped, dial-and-stylus drafting board. In Ufberg’s telling, it wouldn’t have happened at all without a culture of cross-disciplinary experimentation.
“It is way easier to try something than tell somebody it can’t be done.”
When the idea of a hinge was first tossed out in a brainstorming meeting, [mechanical engineering director Andrew] Hill tells me, “there were ten different people who said that it doesn’t make any sense, it would be too complicated to make it work. But then a couple weeks later, we got a prototype out of our model shop where you could see the mechanism starting to come together, and the people who were saying it couldn’t be done started to come over and be like ‘Huh, maybe we could do something like this.’” I ask him if he was one of those doubters.
“One of the things that I found out about myself is it is way easier to try something than tell somebody it can’t be done,” he confesses. “There’s magic in the suspension of disbelief. If you just do stuff that you know you’re going to be able to do, you know where you’re going to go. If you try something that you’re not quite sure is going to work, at least you’re exposed to new problems and you get smarter in that way, and in the good cases, you move the whole thing forward.” In its first iteration, the hinge was just a piece of cardboard glued crudely to a kickstand. But then, the feedback loop kicked into place.
The bottom line: When in doubt… don’t doubt. Build a quick prototype—the most low-fi thing you can create to test the concept—and then share it with people from other disciplines. It’s how you manage risk in an inherently risky exploration into the new.
That’s good advice not only for industrial design, but for pretty much any time you’re trying to make something new and better. And isn’t that all of us?
Need help transforming your organization’s design process for faster and more creative results? That’s what we do! Get in touch for a workshop, executive session, or design engagement.
Oil City High School 2017 Commencement Speech
∞ Jul 9, 2017
Returning to scenes of youth is always complicated business, the stuff that makes high school reunions emotionally fraught. How have I changed, how haven’t I, and how do I express those things when I come home? I know Brad Frost was sweating these topics as he toiled over his commencement speech at the high school where he graduated 14 years ago.
Months before he gave the talk, he told me he was already nervous about it. Turns out he didn’t need to worry. In fact, the “what has/hasn’t changed” anxiety turned out to be central to his wonderful speech. I especially loved this message:
The things you will be doing in 14 years’ time will no doubt be different than the things you’re doing at this phase in your life. A recent study by the Department of Labor showed that 65% of students going through the education system today will work in jobs that haven’t been invented yet. Think about that. That means that the majority of today’s students — probably including the majority of this graduating class — will end up working in jobs that don’t presently exist. Technology is advancing at a staggering rate, it’s disrupting industries, it’s inventing new ones, and it’s constantly changing the way we live and work.
When I was a kid, I didn’t say “Mom, Dad, I want to be a web designer when I grow up!” That wasn’t a thing. And yet that’s now how I spend most of my waking hours, and how I earn my living, and how I provide for my family.
Our daughter Nika is about to start her final year of high school, and she sometimes worries that she doesn’t yet have enough vision for what she’ll become in her life and career—that she’s behind. But she knows what she loves, and she has so many talents, so we try to reassure her that knowing her skills, values, and personality is far more important than knowing a vocation. Vocations shift far more quickly than the rest.
When I graduated from high school (30 years ago next year!), the web hadn’t been invented, mainstream email was years away, and phones were cabled to the wall. But even then, I had a passion for both storytelling and systems—and those have been the guiding threads of a career spanning many kinds of jobs, culminating (for now at least!) in work shaping experiences of and for connected devices.
The great thing about returning to scenes of youth is that sometimes—like Brad—you get to talk to the kids coming up behind you. You can share the advice you’d offer your younger self. Nice work, Brad.
So We Redid Our Charts…
∞ Jun 28, 2017
Video analytics provider Mux overhauled their data visualizations and shared the process. The thinking and results make for a worthwhile read, but I was especially taken with the project’s framing questions:
When we wanted to revisit our charts, we looked at them from both of these perspectives and asked ourselves: Are we being truthful to our calculations? And, are we presenting the data in a beautiful and sensible manner?
I’m more convinced than ever: the presentation of data is just as important as the underlying algorithm.
Google Will No Longer Scan Gmail for Ad Targeting
∞ Jun 28, 2017
In a bit of unexpected good news, Google announced last week that it will stop reading Gmail messages for the purpose of showing ads tailored to the content.
The New York Times reminds us, however, that Google will continue to read your messages for other purposes:
It will continue to scan Gmail to screen for potential spam or phishing attacks as well as offering suggestions for automated replies to email.
So the bots will continue to comb your email missives, but at least they’re seemingly doing this in the service of features that serve the user (instead of advertisers).
As always, the tricky trade-off of machine-learning services is that they only know what we feed them. But what are they eating? It’s us. How much are we willing to reveal, and in exchange for what services?
A Working Pattern Library
∞ Jun 28, 2017
The charming and masterful Ethan Marcotte emphasizes the importance of crafting design systems to fit the way an organization works and behaves:
To put a finer point on it, a pattern library’s value to an organization is tied directly to how much—and how easily—it’s used. That’s why I usually start by talking with clients about the intended audiences for their pattern library. Doing so helps us better understand who’ll access the pattern library, why they’ll do so, and how we can design those points of contact.
While a basic pattern library might consist simply of a collection of components, an essential pattern library is presented in a way that makes it not just the “right” thing to use but the easiest. For what it’s worth, this is really hard to do well; as we’ve found in our own projects, there’s some seriously demanding UX/research work behind tailoring a design system to the people who use it.
On Design Systems: Dan Mall of Superfriendly
∞ Jun 28, 2017
UXpin interviewed my friend and collaborator Dan Mall to tap his giant brain for insights about building design systems. The 47:15 interview is full of gems, and you should watch or read the whole thing. Meanwhile, here are a few highlights:
I don’t think that a design system should remove design from your organization. A lot of clients think like, “Oh, if we had a design system, we don’t need to design anymore, right? ‘Cause all the decisions would be made.” And I think that I’ve never seen that actually happen. Design systems should just help you design better. So you’re still gonna have to go through the process of design, but a design system should be a good tool in your arsenal for you to be able to design better. And eliminate some of the useless decisions that you might have to make otherwise.
I love this. Design systems are at their best when they simply gather the best practices of the organization—the settled solutions. The most effective design systems are simply containers of institutional knowledge: “this is what good design looks like in our company.” Instead of building (and rebuilding and rebuilding) the same pattern over and over again, designers and developers can reach for what already exists; nobody has to design a card pattern for the 15th time, because it’s already done. Product teams can put their time and smarts into solving new problems. Put another way: the most exciting design systems are boring.
Dan also talks about the importance of making the system fit the workflow of the people who use it. In our projects together, we’ve found that deep user research is required to get it right:
I’ve worked with organizations that the design systems are for purely developers. That’s it. It’s not for designers. And then in other organizations, a design system has to work equally well for designers and developers. And so just those two examples, those two design systems have to be drastically different from each other ’cause they need to support different use cases and different people.
If that kind of user-centered research sounds like product work, that’s because it is:
It is a product and you have to treat it like a product. It grows like a product, it gets used like a product. And for people to think like, “Yeah we’ll just create it once and then it’ll sit there.” They’re not realizing the value of it and they may not actually get value from it if they’re not treating it like a thing that also needs to grow in time and adapt over time. … The reason that the design system is so hard to get off the ground is because it requires organizational change. It requires people to have different mindsets about how they’re going to work.
The Future of AI Needs To Have More People in It
∞ Jun 28, 2017
If smart machines and AI agents are meant to support human goals, how might we help them better understand human needs and behaviors? UC Berkeley professor and AI scientist Anca Dragan suggests we need a more human-centered approach in our algorithms:
Though we’re building these agents to help and support humans, we haven’t been very good at telling these agents how humans actually factor in. We make them treat people like any other part of the world. For instance, autonomous cars treat pedestrians, human-driven vehicles, rolling balls, and plastic bags blowing down the street as moving obstacles to be avoided. But people are not just a regular part of the world. They are people! And as people (unlike balls or plastic bags), they act according to decisions that they make. AI agents need to explicitly understand and account for these decisions in order for them to actually do well. […]
How do we tell a robot what it should strive to achieve? As researchers, we assume we’ll just be able to write a suitable reward function for a given problem. This leads to unexpected side effects, though, as the agent gets better at optimizing for the reward function, especially if the reward function doesn’t fully account for the needs of the people the robot is helping. What we really want is for these agents to optimize for whatever is best for people. To do this, we can’t have a single AI researcher designate a reward function ahead of time and take that for granted. Instead, the agent needs to work interactively with people to figure out what the right reward function is. […]
Supporting people is not an after-fix for AI, it’s the goal. To do this well, I believe this next generation should be more diverse than the current one. I actually wonder to what extent it was the lack of diversity in mindsets and backgrounds that got us on a non-human-centered track for AI in the first place.
Melinda Gates and Fei-Fei Li Want to Liberate AI from “Guys With Hoodies”
∞ Jun 28, 2017
In a wonderful interview with Backchannel, Google Cloud’s chief AI scientist Fei-Fei Li and Microsoft alum and philanthropist Melinda Gates press for more diversity in the artificial intelligence field. Among other projects, Li and Gates have launched AI4ALL, an educational nonprofit working to increase diversity and inclusion in artificial intelligence.
Fei-Fei Li: As an educator, as a woman, as a woman of color, as a mother, I’m increasingly worried. AI is about to make the biggest changes to humanity, and we’re missing a whole generation of diverse technologists and leaders.…
Melinda Gates: If we don’t get women and people of color at the table — real technologists doing the real work — we will bias systems. Trying to reverse that a decade or two from now will be so much more difficult, if not close to impossible. This is the time to get women and diverse voices in so that we build it properly, right?
This reminds me, too, of the call of anthropologist Genevieve Bell to look beyond even technologists to craft this emerging world of machine learning. “If we’re talking about a technology that is as potentially pervasive as this one and as potentially close to us as human beings,” Bell said, “I want more philosophers and psychologists and poets and artists and politicians and anthropologists and social scientists and critics of art.”
We need every perspective we can get. A hurdle: fancy new technologies often come with the damaging misperception that they’re accessible only to an elite few. I love how Gates simply dismisses that impression:
You can learn AI. And you can learn how to be part of the industry. Go find somebody who can explain things to you. If you’re at all interested, lean in and find somebody who can teach you. … I think sometimes when you hear a big technologist talking about AI, you think, “Oh, only he could do it.” No. Everybody can be part of it.
It’s true. Get after it. (If you’re a designer wondering how you fit in, I have some ideas about that: Design in the Era of the Algorithm.)
Social Cooling
∞ Jun 20, 2017
Social Cooling is Tijmen Schep’s term to describe the chilling effect of the reputation/surveillance economy on expression and exploration of ideas:
Social Cooling describes the long-term negative side effects of living in a reputation economy:
A culture of conformity. Have you ever hesitated to click on a link because you thought your visit might be logged, and it could look bad? …
A culture of risk-aversion. … Rating systems can create unwanted incentives, and increase pressure to conform to a bureaucratic average.
Increased social rigidity. Digital reputation systems are limiting our ability and our will to protest injustice.
Schep suggests that all of this raises three big questions—philosophical, economic, and cultural:
- Are we becoming more well behaved, but less human?
- Are we undermining our creative economy?
- Will this impact our ability to evolve as a society?
How Germany’s Otto Uses Artificial Intelligence
∞ Jun 17, 2017
Machine learning is trying to one-up just-in-time inventory with what can only be called before-it’s-time inventory. The Economist reports that German online merchant Otto is using algorithms to predict what you’ll order a week before you order it, reducing surplus stock and speeding deliveries:
A deep-learning algorithm, which was originally designed for particle-physics experiments at the CERN laboratory in Geneva, does the heavy lifting. It analyses around 3bn past transactions and 200 variables (such as past sales, searches on Otto’s site and weather information) to predict what customers will buy a week before they order.
The AI system has proved so reliable—it predicts with 90% accuracy what will be sold within 30 days—that Otto allows it automatically to purchase around 200,000 items a month from third-party brands with no human intervention. It would be impossible for a person to scrutinise the variety of products, colours and sizes that the machine orders. Online retailing is a natural place for machine-learning technology, notes Nathan Benaich, an investor in AI.
Overall, the surplus stock that Otto must hold has declined by a fifth. The new AI system has reduced product returns by more than 2m items a year. Customers get their items sooner, which improves retention over time, and the technology also benefits the environment, because fewer packages get dispatched to begin with, or sent back.
The Surprising Repercussions of Making AI Assistants Sound Human
∞ Jun 17, 2017
There’s much effort afoot to make the bots sound less… robotic. Amazon recently enhanced its Speech Synthesis Markup Language to give Alexa a more human range of expression. SSML now lets Alexa whisper, pause, bleep expletives, and vary the speed, volume, emphasis, and pitch of its speech.
This all comes on the heels of Amazon’s February release of so-called speechcons (like emoticons, get it?) meant to add some color to Alexa’s speech. These are phrases like “zoinks,” “yowza,” “read ’em and weep,” “oh brother,” and even “neener neener,” all pre-rendered with maximum inflection. (Still waiting on “whaboom” here.)
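If you’re curious what those expression tags look like in practice, here’s a rough sketch of an Alexa skill response that leans on a few of them. The tag names (the whispered effect, prosody, expletive bleeping, and interjection “speechcons”) reflect my reading of Amazon’s SSML reference, and the JSON envelope is the standard custom-skill response; treat the details as illustrative rather than gospel.

```python
# Rough sketch of an Alexa skill response whose speech uses a few SSML
# expression tags (illustrative; consult Amazon's SSML reference for the
# authoritative tag list and attribute values).
ssml = (
    "<speak>"
    "Okay, <emphasis level='strong'>here is the plan.</emphasis> "
    "<break time='400ms'/>"
    "<amazon:effect name='whispered'>Don't tell anyone.</amazon:effect> "
    "<prosody rate='slow' pitch='-10%'>And take your time.</prosody> "
    "<say-as interpret-as='expletive'>darn</say-as> "      # gets bleeped
    "<say-as interpret-as='interjection'>zoinks!</say-as>"  # a "speechcon"
    "</speak>"
)

response = {
    "version": "1.0",
    "response": {
        "outputSpeech": {"type": "SSML", "ssml": ssml},
        "shouldEndSession": True,
    },
}
```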
The effort is intended to make Alexa feel less transactional and, well, more human. Writing for Wired, however, Elizabeth Stinson considers whether human personality is really what we want from our bots—or whether it’s just unhelpful misdirection.
“If Alexa starts saying things like hmm and well, you’re going to say things like that back to her,” says Alan Black, a computer scientist at Carnegie Mellon who helped pioneer the use of speech synthesis markup tags in the 1990s. Humans tend to mimic conversational styles; make a digital assistant too casual, and people will reciprocate. “The cost of that is the assistant might not recognize what the user’s saying,” Black says.
A voice assistant’s personality improving at the expense of its function is a tradeoff that user interface designers increasingly will wrestle with. “Do we want a personality to talk to or do we want a utility to give us information? I think in a lot of cases we want a utility to give us information,” says John Jones, who designs chatbots at the global design consultancy Fjord. Just because Alexa can drop colloquialisms and pop culture references doesn’t mean it should. Sometimes you simply want efficiency. A digital assistant should meet a direct command with a short reply, or perhaps silence—not booyah! (Another speechcon Amazon added.)
Personality and utility aren’t mutually exclusive, though. You’ve probably heard the design maxim form should follow function. Alexa has no physical form to speak of, but its purpose should inform its persona. But the comprehension skills of digital assistants remain too rudimentary to bridge these two ideals. “If the speech is very humanlike, it might lead users to think that all of the other aspects of the technology are very good as well,” says Michael McTear, coauthor of The Conversational Interface. The wider the gap between how an assistant sounds and what it can do, the greater the distance between its abilities and what users expect from it.
When designing within the constraints of any system, the goal should be to channel user expectations and behavior to match the actual capabilities of the system. The risk of adding too much personality is that it will create an expectation/behavior mismatch. Zoinks!
The Machine Learning Paradox
∞ Jun 17, 2017
Mike Loukides describes a fundamental weirdness in creating predictive algorithms: in order to make them flexible enough to deal with real-world data, you also have to make them imperfect.
Building a system that’s 100% accurate on training data is a problem that’s well known to data scientists: it’s called overfitting. It’s an easy and tempting mistake to make, regardless of the technology you’re using. Give me any set of points (stock market prices, daily rainfall, whatever; I don’t care what they represent), and I can find an equation that will pass through them all. Does that equation say anything at all about the next point you give me? Does it tell me how to invest or what raingear to buy? No—all my equation has done is "memorize" the sample data. Data only has predictive value if the match between the predictor and the data isn’t perfect. You’ll be much better off getting out a ruler and eyeballing the straight line that comes closest to fitting.
If a usable machine learning system can’t identify the training data perfectly, what does that say about its performance on real-world data? It’s also going to be imperfect. How imperfect? That depends on the application. 90–95% accuracy is achievable in many applications, maybe even 99%, but never 100%. …
The right question to ask isn’t how to make an error-free system; it’s how much error you’re willing to tolerate, and how much you’re willing to pay to reduce errors to that level.
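Loukides’s point is easy to see for yourself. Here’s a minimal sketch of my own (not from his article) using NumPy: a degree-9 polynomial “memorizes” ten noisy, roughly linear points perfectly, but the humble straight line does far better the moment you step outside the training data.

```python
# A minimal sketch of the overfitting point above: the high-degree fit
# passes through every training point, the straight line generalizes.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = 2 * x + rng.normal(scale=0.1, size=x.size)   # roughly linear data plus noise

memorized = np.polyfit(x, y, deg=9)   # "memorizes" the sample data
ruler = np.polyfit(x, y, deg=1)       # the ruler-and-eyeball straight line

# Training error is near zero for the polynomial, but extrapolate a little...
print(np.polyval(memorized, 1.2))     # typically far from the true trend (~2.4)
print(np.polyval(ruler, 1.2))         # close to 2.4
```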
If errors are inevitable, then the job of design is to present the data in ways that set appropriate expectations. The more I ponder the future of UX in a machine-learning world, the more I’m convinced of this: large swaths of the UX discipline will revolve around presenting data in ways that anticipate the machines’ occasionally odd, strange, and just-plain-wrong pronouncements.
The City Was Connected
∞ Jun 17, 2017
Dan Hon imagines what happens when all of a city’s systems are connected, incentivized, gamified, and bureaucratically weaponized:
I was late paying the water bill, so the parking meter refused service until I coughed up.
The meter said I had 30 seconds to pay the water bill until I had to move my car, and… I just froze. Then the meter attendant came. She said she was just doing her job as she booted my car, then looked down at her phone. Reminded me I hadn’t taken out my recycling.
This wasn’t turning out to be a good day.
She told me I was on my second strike: one more, and I’d lose streetlight privileges. I’d heard about that: a social shaming punishment. Streetlights would create a cone of darkness around just you.
(Hon originally published this short fiction as a Twitter thread and shared it again in his email newsletter.)
If Google Teaches an AI to Draw, Will That Help It Think?
∞ Jun 10, 2017
Lately I’ve been thinking hard about creatives’ role in a world of artificial intelligence, but what about the reverse: how about AI’s role in creative pursuits? Alexis Madrigal reports for The Atlantic on SketchRNN, one of several Google efforts to teach machines to make art:
The implicit argument is that when humans draw, they make abstractions of the world. They sketch the generalized concept of “pig,” not any particular animal. That is to say, there is a connection between how our brains store “pigness” and how we draw pigs. Learn how to draw pigs and maybe you learn something about the human ability to synthesize pigness. …
What can SketchRNN learn? Below is a network trained on firetrucks generating new fire trucks. Inside the model, there is a variable called “temperature,” which allows the researchers to crank the randomness of the output up or down. In the following images, bluer images have the temperature turned down, redder ones are “hotter.”
[…]
What [project leader Doug] Eck finds fascinating about sketches is that they contain so much with so little information. “You draw a smiley face and it’s just a few strokes,” he said, strokes that look nothing like the pixel-by-pixel photographic representation of a face. And yet any 3-year-old could tell you a face was a face, and if it was happy or sad. Eck sees it as a kind of compression, an encoding that SketchRNN decodes and then can re-encode at will.
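(If the “temperature” knob in that passage is unfamiliar: it’s a standard trick in generative models for dialing the randomness of the output up or down. Here’s a generic sketch of temperature sampling, my own illustration rather than SketchRNN’s actual code.)

```python
# Generic temperature sampling: dividing the model's raw scores by a
# temperature before the softmax makes low temperatures nearly
# deterministic and high temperatures increasingly random.
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.1]                           # the model's raw preferences
print(sample_with_temperature(logits, 0.1, rng))   # almost always index 0
print(sample_with_temperature(logits, 2.0, rng))   # far more varied
```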
In other words, sketches might teach AI portable, human-understandable symbols of abstract concepts—a shorthand description of the world. It strikes me that all creative pursuits, including design and language, traffic in similar symbols and shorthands. I’m impatient to find out how this particular branch of AI develops to understand (and create) the interfaces and interactions that designers make on an ongoing basis.
At the moment, this is the stuff of the research lab. But other flavors are starting to emerge in consumer products, too. Apple has been training iOS to anticipate strokes in sketches and handwriting to make writing with Apple Pencil seem buttery smooth. In Buzzfeed’s overview of iPad updates, John Paczkowski reports:
Meanwhile, the Apple Pencil’s latency — that slight lag you get when drawing — has been reduced to the point where it’s virtually imperceptible; Apple says it’s just 20 milliseconds. And since Apple is so intensely focused on capturing the experience of putting pen to paper, it’s doing additional work in the background to remove the lag entirely with machine learning–based algorithms designed to predict where a Pencil is headed next.
“We actually schedule the next frame for where we think the Pencil’s going to be, so it draws it right when you get there, instead of right after you have been there,” Schiller says.
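Apple hasn’t said exactly how that prediction works, but the basic idea is easy to picture. Here’s a deliberately naive sketch, assuming nothing about Apple’s actual approach: extrapolate from the most recent stylus samples to guess where the tip will be when the next frame draws.

```python
# A deliberately naive illustration of stroke prediction (not Apple's
# algorithm): extrapolate the stylus position from its last two samples
# to the time of the next display frame, so the stroke can be drawn where
# the pen is about to be rather than where it just was.

def predict_next_point(p0, t0, p1, t1, t_next):
    """Linearly extrapolate a 2D stylus position to time t_next (seconds)."""
    dt = t1 - t0
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    lead = t_next - t1
    return (p1[0] + vx * lead, p1[1] + vy * lead)

# Two samples 8 ms apart; predict the position one 120 Hz frame (~8.3 ms) later.
print(predict_next_point((100.0, 200.0), 0.000, (103.0, 204.0), 0.008, 0.0163))
```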
While Google and SketchRNN are chasing the lofty goal of understanding how humans communicate in symbols, Apple is meanwhile tackling the commonplace but useful skill of learning how you write and draw. Machines may not yet be capable of their own creative works, but they’re already learning to understand and anticipate our own.
The Workshop and the Storefront
∞ Jun 2, 2017
Brad Frost nailed down a pair of mighty useful metaphors for pattern libraries (“workshops”) and design-system style guides (“storefronts”). These are useful because they help delineate what kind of work happens where, which is a recurring source of confusion we’ve seen in companies struggling to maintain jumbo design-system projects.
The workshop
Brad created the first version of Pattern Lab when we designed the Techcrunch website back in 2013. That pattern-library software was the first glimpse of his Atomic Design methodology, building design pattern “organisms” out of smaller “atoms” and “molecules.” Pattern Lab has since been open-sourced and remains our go-to tool for developing and sharing websites and full-blown design systems.
Pattern Lab is where all our work comes together. It’s a collaborative environment where information architecture gets stubbed out in the browser, where visual design comes together, where the code gets wrangled, and where content is edited. We share ongoing work inside Pattern Lab with stakeholders and clients. And it’s the final deliverable for web projects, complete with page templates and detailed pattern library. Our projects happen almost entirely inside Pattern Lab.
Our pattern libraries are always a wonderful mess, full of experiments and spare parts and tools.
Brad explains:
While Pattern Lab shares some qualities with style guides (for instance, it shows code snippets and you can add pattern documentation), the environment is really designed for teams to effectively build and work with UI components: the navigation across the top is small and unobtrusive, there are viewport resizing tools to stress-test UI components and pages, we’re able to organize components in a way that makes sense to us as creators (such as using the atomic design methodology), and we can design with dynamic data to ensure patterns are robust, resilient, and serve the needs of the organization’s applications. Like my wife’s jewelry workshop, the environment is designed for the design system team to be productive and creative.
This is not, however, an especially friendly place for people outside the working production team. Its organization around atoms, molecules, and organisms isn’t relevant to others; it contains building-block patterns that don’t have much useful meaning on their own; and it contains work-in-progress experiments that aren’t ready for prime time.
So when you’re sharing polished patterns and design systems with a group beyond the production team, you need something more refined. You need…
The storefront
If the pattern library is the workshop, then the design-system style guide is the storefront, as Brad explains:
A style guide is the storefront where all the ingredients of the design system are put out on the shelves. The style guide storefront is designed for a different context than the design/dev environment workshop. Rather than being a tool for only the design systems team to make use of, the style guide communicates the design system to the whole organization. That means the style guide audience should be cross-disciplinary, since a design system can help create a shared vocabulary between all the people who are responsible for the success of the products at the organization. The style guide should provide information helpful for both makers and users of the design system, and should be used as a vehicle to continuously sell the value of the design system to the organization.
This isn’t just a difference in presentation. There’s a difference in core content, too. In our recent projects building out gigantic enterprise design systems, we’ve found that the style guide always presents only a subset of the pattern library. We cherrypick the polished patterns that are ready to share, while excluding most of the experiments and building-block “atoms.”
Behind the scenes, we use Brad’s Style Guide Guide to import selected patterns and templates from Pattern Lab, and then display them in a polished website. (In practice, a simple Grunt task exports the HTML for all patterns, and then copies ’em into the style guide directory. Style Guide Guide includes them automagically in pages when it builds its website.)
From there, we add lots of guidelines and documentation to help newcomers make sense of the UX, the visual design, and the underlying markup. The end result is a set of settled solutions for common problems, clearly understandable and ready for production.
Distinct places for distinct jobs
We build patterns in the workshop, and we display the best of them in the storefront, showcasing them in the best possible light.
Too often, though, we see organizations try to force everything into one place. We see workshop pattern libraries trying to do double duty as a canonical design-system reference. Or on the other side, we see pattern libraries set up as static storefront references that live outside a useful working development environment. When these resources sit outside the workflow of designers and developers, they don’t get used, they get stale, they become irrelevant.
Following the workshop/storefront model—and stitching the two together so that one feeds the other—has ensured that the design systems we create continue to be used, vital, dynamic.
“Algorithms Aren’t Racist. Your Skin Is Just Too Dark.”
∞ Jun 2, 2017
Joy Buolamwini is on a tear lately. The founder of the Algorithmic Justice League has received well-deserved press from the likes of the BBC and Guardian for her campaign to uncover inadvertent bias in machine-learning algorithms.
At Hackernoon, Buolamwini responds to criticism she received after demonstrating that facial recognition often breaks down for people of color. (Buolamwini, a woman of color, had to put on a white mask before one algorithm would even detect a face.) Some have told Buolamwini that it’s not the algorithm’s fault but rather that cameras are poor at discerning black faces: “Algorithms aren’t racist,” the argument goes. “Your skin is just too dark.”
Good lord. The problem is not with “photography.” If your eye can discern difference, the camera can, too. It’s true that camera technology has historically favored light skin. But that’s less a factor of underlying technology than of the skewed market forces and customer base that shaped early photography. In other words: it was a miserable design decision. For decades, for example, Kodak’s development process for color film was calibrated to photos called “Shirley cards” (named after the first model to pose for them). Shirley cards reflected a decidedly white concept of beauty. “In the early days, all of them were white and often tagged with the word ‘normal,’” NPR reported.
Now we’re carrying this original bias into the machine-learning era. Machine learning excels at determining what’s “normal” and trying to replicate it—or discard outliers. What the machines think is normal depends entirely on the data we feed their models. As the era of the algorithm begins to embrace the whole broad world, it’s urgent that we examine what “normal” really is and work to avoid propagating exclusionary notions of the past by encoding them into our models.
Instead of doing the hard work of creating truly inclusive algorithms, however, some suggest that Buolamwini should instead carry a lighting kit with her:
More than a few observers have recommended that instead of pointing out failures, I should simply make sure I use additional lighting. Silence is not the answer. The suggestion to get more lights to increase illumination in an already lit room is a stop gap solution. Suggesting people with dark skin keep extra lights around to better illuminate themselves misses the point.
Should we change ourselves to fit technology or make technology that fits us?
Who has to take extra steps to make technology work? Who are the default settings optimized for?
As always with emerging technologies, our challenge is making tech bend to our lives instead of the reverse. It’s profoundly unfair to make some lives bend more than others.
For designers, the arrival of the algorithm era introduces UX research challenges at an unprecedented scale. A big emerging job of design is to help identify where the prevailing definition of “normal” is flawed, and then move heaven and earth to make sure the data models embrace a new, more inclusive definition of normal. That is where we need to add more light.