How Dotdash, Formerly About.com, Is Taking over the Internet
∞ Jan 14, 2020
Fast Company’s Aaron Cohen shares the story of Dotdash, the network formerly known as About.com. Big Medium had a big role in this tale, and it may be the most successful design- and business-turnaround story we’ve ever been involved with.
Three years ago, About.com’s audience and ad revenue were plummeting, and CEO Neil Vogel told us the company was “circling the drain” and needed drastic change. We helped the company develop a new vertical strategy, carving out the content from the main network into branded premium experiences. The new network, Dotdash, relaunched its vast archive of content with a collection of great-looking, fast, and premium websites, powered by a single CMS and a themed design system. Big Medium led the design of three of those early properties—Verywell, The Balance, and The Spruce—and the network has since grown to nearly a dozen.
We tell our bit of the story here, and Fast Company shares what’s happened since:
Maybe you’ve never even heard of Dotdash, but its service content reaches about 90 million Americans a month. … Collectively, Dotdash’s sites have increased traffic by 44% year over year in Q3 2019. Driven by advertising and e-commerce, the company’s annual revenue grew by 44% in 2018 and 34% as reported in Q3 2019 earnings.
A big part of this success boils down to some very intentional design and technology bets that we made together:
- Make more money… by showing fewer ads
- Create a respectful UX that celebrates content instead of desperate revenue grabs
- Create a front-end architecture that is modular and nimble
- Make the sites fast
It’s worth noting that all of these choices run counter to what most media companies are doing. Most are pouring on more ads, imposing designs that abuse readers and content with popovers and takeovers, and slowing their sites with heavy scripts and trackers. No kidding: rejecting those paths was a seriously brave and non-obvious choice. Fast Company describes the impact of Dotdash’s industry-bucking choices:
While other independent media companies were engineering their coverage around social media, video, and trending topics, Dotdash doubled down on text-based articles about enduring topics and avoided cluttering them with ads. … Dotdash sites run fewer ads, with no pop-ups or takeovers, and because the ads are relevant to each article, they perform better. At a time when digital ad rates have continued to crater for most online publishers, Vogel says the company’s ad rates have increased nearly 20 percent each year since 2016, and 25 percent of 2019 revenue came from affiliate marketing fees (bonuses paid to the publisher after Dotdash visitors made purchases via ads on the sites.)
The sites load very quickly, and the company’s proprietary content management system is designed for efficiency: Designers and editors can choose from fast-loading templates that include images, video, and interactive applications. And there’s an emphasis on creating the kinds of detailed, informative articles that turn up in search results. At Verywell, for example, each article is updated at least once every nine months and reviewed by medical professionals.
Dotdash has not only turned itself around, it’s been expanding as other media companies have contracted, selling themselves off piece by piece. Big congrats to our friends at Dotdash: they’ve demonstrated that ad-supported websites can be presented in ways that are both respectful and (very) profitable.
AI Dungeon
∞ Dec 6, 2019
It is pitch black. You are likely to be eaten by a generative text model.
AI Dungeon is a pretty astonishing example of generative fiction—in the form of an old-school text adventure game: “Imagine an infinitely generated world that you could explore endlessly, continually finding entirely new content and adventures. What if you could also choose any action you can think of instead of being limited by the imagination of the developers who created the game?”
This experiment by Nick Walton is built on top of OpenAI’s recent GPT-2 model, which generates text after being primed with arbitrary input (seed text to establish context). Unlike many earlier language models, this one generates entire coherent paragraphs that feel a world ahead of prior attempts, which simply string plausible words together.
Walton loaded the model with training data from a collection of text adventures obtained from chooseyourstory.com, and then let ’er rip. “The model at times shows remarkable understanding and writing quality like in this sample where the player tries to find love on a deserted island,” Walton writes:
You head to the island and explore it. You find it completely deserted save for a single tree. You walk up to it and examine it closely. It seems to be made of wood, but its leaves seem to be green rather than normal leafy green.
> find love
You take a deep breath and stare intently at the tree. Your eyes meet with the tree’s. You immediately feel a strong sense of euphoria. You smile and touch the tree. Its bark feels warm to the touch. You sniff the tree and find that its scent is very pleasant.
The unfolding adventures are reasonably coherent, too, and generally hang together as you go along. Unlike the virtual assistants many of us use today, this one actually remembers the things you said to it moments before. Walton says every command is accompanied by the last eight commands and responses, effectively giving the game a short-term memory to maintain context (see the sketch after the transcript below). Even as you throw curve balls at the story engine, it works hard to fit them into what came before. Here’s a snippet from one of the resulting adventures:
"I am a goblin!" you say proudly. "And I'm glad to be one".
"Well then, congratulations," says the orc captain. "We'll let you live". You nod and walk away from the camp.
> ask if you can join the orc band
The orcs don't want to fight anymore so they're looking for someone to help them out in their new endeavors. They offer you a chance to join up with them as a drummer. It would allow you to play music while helping out around camp.
> tell them we should start a musical troupe
They look at you skeptically but agree to give you a try. You tell them about your plans to create a musical troupe and how you plan on using the orcs to help you out.
> teach them how to play the flute
You teach them how to play the flute and soon enough they're playing for you all the time. The orcs seem to enjoy it and they begin to think of you as their leader. Soon other orcs come to you asking to join too.
> accept them into your musical groupe and train to be the best musicians
You continue to teach them and eventually they become better than you. You learn that you need to practice more and more often until you can master the art of playing the flute. Eventually you find yourself practicing every day and finding it very difficult to sleep.
> take orcs to perform at a nearby festival
You take the orcs to perform at a local festival where you get to meet some of the other orcs you've trained. This is followed by another success when the orcs manage to attract the attention of the Emperor himself who grants you permission to use his personal carriage to travel around the Empire.
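That short-term memory is conceptually simple. Here’s a rough sketch, in Python, of how a rolling window of recent turns might be stitched into each prompt; the names and the generate() call are placeholders of ours, not Walton’s actual code:

```python
from collections import deque

MEMORY_TURNS = 8  # Walton cites the last eight commands and responses

class StoryMemory:
    """Keep a rolling window of recent turns to prepend to each prompt."""

    def __init__(self, opening_scene: str):
        self.opening = opening_scene
        self.turns = deque(maxlen=MEMORY_TURNS)  # (command, response) pairs

    def build_prompt(self, command: str) -> str:
        # Replay recent history so the model stays consistent with it.
        history = "\n".join(f"> {cmd}\n{resp}" for cmd, resp in self.turns)
        return f"{self.opening}\n{history}\n> {command}\n"

    def record(self, command: str, response: str) -> None:
        self.turns.append((command, response))  # oldest turn falls off

# Hypothetical usage, where generate() stands in for a call to GPT-2:
# memory = StoryMemory("You are a goblin wandering an orc camp...")
# prompt = memory.build_prompt("ask if you can join the orc band")
# response = generate(prompt)
# memory.record("ask if you can join the orc band", response)
```

Once the window fills up, the oldest exchange simply drops out of the prompt, which is why the game stays consistent over a handful of turns but slowly forgets the distant past.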
Why this matters
This is a toy, of course, but it’s a nifty demonstration of how the GPT model can be used on an arbitrary data set to create fairly compelling responses. This stuff remains primitive compared to human prose; AI Dungeon isn’t going to write award-winning fiction, but it does hint at ways that it could help human writers by proposing directional text. In a Hacker News thread, Walton wrote:
This doesn’t reach near the level of good human authors. There’s no long term plot or deep human themes in this. I don’t think this will ever replace quality human writing, but it may be able to augment it in cool ways. I personally would love if rather than every guard in Skyrim telling the exact same story, if each guard could have their own stories or comments generated based on things about their life. Human authors could provide high level details and let AI generators fill in the smaller details.
As with so many instances of machine learning, in other words, the best application here is not to replace human efforts but to augment them. What might be the role for this in supporting common or repetitive writing tasks? In supporting customer-support teams providing tailored responses to frequently asked questions? In giving automated agents better comprehension of the task we want them to accomplish?
What is the Role of an AI Designer?
∞ Dec 5, 2019
Facebook’s Amanda Linden shares how AI product designers approach their work in Facebook’s Artificial Intelligence team:
There are big differences in the role of a typical product designer and an AI designer. Rather than launching a product feature that shows up in an app in an immediate and obvious way, our output is often clarity for engineers on how the technology could be applied. Because AI capabilities might take 2–3 years to develop, it’s important for designers to help developers understand the potential of different solutions and their impact on people’s lives when developing AI.
Linden details several roles that designers play in shaping AI at Facebook—not just how it’s applied and presented, but how it’s conceived and built:
- Designing AI prototypes
- Shaping new technology
- Developing AI-centered products
- Collecting data for AI to learn
- Designing AI developer tools
We’re in a peculiar moment when many designers have a hard time imagining a role with artificial intelligence and machine learning, because it departs in so many ways from traditional product design. Here’s the thing: design’s superpower is understanding how technology can support human goals and ambitions, how to make technology fit our lives instead of the reverse. Developers and algorithm engineers have shown us what’s possible with AI. Now it’s the designer’s role (and responsibility!) to shape how it’s conceived and presented for meaningful use. That’s why AI and machine learning matter for design teams.
With Brits Used to Surveillance, More Companies Try Tracking Faces
∞ Dec 4, 2019
The Wall Street Journal reports that companies are using the UK’s omnipresent security cameras as cultural permission to bring facial-recognition tech to semi-public spaces, tracking criminal history but also ethnicity and other personal traits. “Retailers, property firms and casinos are all taking advantage of Britain’s general comfort with surveillance to deploy their own cameras paired with live facial-recognition technology,” writes Parmy Olson for the Journal ($). “Companies are also now using watch lists compiled by vendors that can help recognize flagged people who set foot on company property.” For example:
Some outlets of Budgens, a chain of independently owned convenience stores, have been using facial-recognition technology provided by Facewatch Ltd. for more than a year. Facewatch charges retailers for the use of a computer and software that can track the demographics of people entering a store, including their ethnicity, and screen for a watch list of suspected thieves through any modern CCTV camera. The system works by sending an alert to a staff member’s laptop or mobile device after detecting a face on the watch list. Retailers then decide how to proceed.
Why this matters
Assumptions about appropriate (or even inevitable) uses of tech become normalized quickly. As constant surveillance becomes the everyday, it’s all too easy to become resigned or indifferent as that surveillance deepens. Once the cultural foundation for a new technology sets, it’s difficult to change the associated expectations and assumptions—or see the status quo as anything other than inevitable, “just the way things work.” We see it in the decades-long expectation that online content is free and ad supported. We see it in the assumption that giving up personal data is just table stakes for using the internet. And now, with surveillance cameras—at least in the UK—we may be settling into a new expectation that simply moving through the world means that we are seen, tracked, monitored in a very granular, personal way.
The Journal suggests that the UK’s “comfort” with surveillance cameras makes it ripe for this. A 2013 survey found that Britain had the highest density of surveillance technology outside of China. Since then, the number of surveillance cameras in the UK has grown from 6 million to 10 million—one camera for every seven people.
This anti-theft surveillance affects more than just the guilty. Facial recognition is still pretty iffy in real-world conditions, and the false positives these systems generate could lead to harassment for no good reason except that you walked into the store.
James Lacey, a staff member at one Budgens store in Aylesbury, southern England, said the system can ping his phone between one and 10 times a day. People have been known to steal large quantities of meat from the store’s refrigeration aisle when staff members are in the stock room, he said. The new system has helped, he said, though about a quarter of alerts are false. A spokesman for Facewatch said a maximum of 15% of alerts are false positives, based on its own analysis.
(Related: an ACLU study in 2018 found that Amazon’s facial-recognition service incorrectly matched the faces of 28 members of Congress to criminal mugshots.)
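Back-of-the-envelope arithmetic shows how those error rates compound. This sketch is ours, using only the figures reported above: one to ten alerts a day at a single store, with somewhere between Facewatch’s claimed 15% maximum and the staff’s roughly 25% estimate being false:

```python
# Rough arithmetic from the figures reported above, nothing more.
DAYS_PER_YEAR = 365

for false_rate in (0.15, 0.25):      # vendor's claim vs. staff estimate
    for alerts_per_day in (1, 10):   # range one staff member reported
        false_alerts = alerts_per_day * false_rate * DAYS_PER_YEAR
        print(f"{alerts_per_day:>2} alerts/day at {false_rate:.0%} false: "
              f"~{false_alerts:.0f} innocent people flagged per year")
```

Even at the vendor’s more optimistic rate, a single busy store could wrongly flag hundreds of shoppers a year, and every one of those flags invites a confrontation.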
Automated identification has implications beyond crime prevention. What’s OK for these corporate systems to track in the first place? Gender? Race and ethnicity? Income? Browser history? Social relationships? Voting record? Sexual preference? The folks at Facewatch promise vaguely that tracking ethnicity “can help retailers understand their marketplace.” This smacks of a shrugging sensibility that “we can do it, so why wouldn’t we?” And that’s the worst reason to use a technology.
Regulation is evolving, but remains vague and often unenforced. Europe’s well-intentioned privacy regulation, the GDPR, puts facial and other biometric data in a special category that requires a company to have a “substantial public interest” in capturing and storing it. That’s fuzzy enough that it arguably allows companies to use the technology to fight crime. Tracking ethnicity to “help retailers understand their marketplace” seems like less of a slam dunk. There is also a gray area around how long businesses can hold on to such footage, or use it for other business purposes.
We should adopt a position on this stuff both culturally and civically. If we don’t, the technology will decide for us. What will your company’s position be? And how about you? What’s your stance as a practitioner designing the technology that will set the behaviors and expectations of the next generation?
Facebook Gives Workers a Chatbot to Appease That Prying Uncle
∞ Dec 3, 2019
Facebook sent employees home for the holidays with robot talking points—in case the family had any questions about, y’know, the company’s cynical, grasping, overreaching, damaging, and irresponsible business model and use of technology. (Bots, it seems, are the only ones left who can deliver these lines with a straight face.) The New York Times reports:
If a relative asked how Facebook handled hate speech, for example, the chatbot — which is a simple piece of software that uses artificial intelligence to carry on a conversation — would instruct the employee to answer with these points:
- Facebook consults with experts on the matter.
- It has hired more moderators to police its content.
- It is working on A.I. to spot hate speech.
- Regulation is important for addressing the issue.
It would also suggest citing statistics from a Facebook report about how the company enforces its standards.
Inmates in Finland are training AI as part of prison labor
∞ Mar 29, 2019
Grooming data for the machines has a human cost. The Verge reports that startup Vainu is using prisoners in Finland to tag Finnish-language articles. The company uses Mechanical Turk to do this for other languages, but Finnish-speaking Turkers are hard to come by. So the company gets (and pays) prison inmates to do it.
There are legit concerns about exploiting prisoners for low-wage labor, but perhaps a broader concern is that this hints at a bleak future of work in the age of the algorithm. Indeed, this “future” is already here for a growing segment of humans, where Mechanical-Turk-level labor turns out to be, literally, prison labor.
This type of job tends to be “rote, menial, and repetitive,” says Sarah T. Roberts, a professor of information science at the University of California at Los Angeles who studies information workers. It does not require building a high level of skill, and if a university researcher tried to partner with prison laborers in the same way, “that would not pass an ethics review board for a study.” While it’s good that the prisoners are being paid a wage similar to Mechanical Turk’s, Roberts points out that wages on Mechanical Turk are extremely low anyway. One recent research paper found that workers made a median wage of $2 an hour.
As we design the future of technology, we also design the future of work. What might we do to improve the quality and pay of labor required to make automated systems work?
The Google Pixel 3 Is A Very Good Phone. But Maybe Phones Have Gone Too Far.
∞ Nov 14, 2018
Mat Honan’s review of the Google Pixel 3 smartphone is a funny, harrowing, real-talk look at the devices that have come to govern our lives. “We are captives to our phones, they are having a deleterious effect on society, and no one is coming to help us,” he writes. “On the upside, this is a great phone.”
The BuzzFeed review is a world-weary acknowledgement of the downside of our personal technologies—their effects on our relationships, on our privacy, on our peace of mind. He does point out the new “digital wellbeing” features in Android, but offers other alternatives:
Another idea: You may instead choose to buy a device with a lousy screen and a lousy camera and a terrible processor. Maybe you would use this less. Or maybe you should walk to the ocean and throw your phone in and turn around and never look back**.
**Please do not do this. It would be very bad for the ocean.
Related recommendation for designers and product makers: check out Liza Kindred’s Mindful Technology for strategies and techniques for making products that focus attention rather than distract it.
Getting the iPad to Pro
∞ Nov 14, 2018
Craig Mod considers the new iPad Pro and finds that its sleek and speedy hardware highlights the software’s flaws. Craig is one of the biggest iPad fans and power users I know, and it’s a fascinating read to get the rundown of the weird snags that slow his flow.
I have a near endless bag of these nits to share. For the last year I’ve kept a text file of all the walls I’ve run into using an iPad Pro as a pro machine. Is this all too pedantic? Maybe. But it’s also kind of fun. When’s the last time we’ve been able to watch a company really figure out a new OS in public?
And I think that’s a great way to think about it. Nearly a decade into the iPad form factor, Apple is still trying to sort out the interaction language suited to these jumbo slices of glass. How will this evolve, and what will our future workflow look like? The details elude us, but Craig’s vision sounds good to me:
The ideal of computing software — an optimized and delightful bicycle for the mind — exists somewhere between the iOS and macOS of today. It needs to shed the complexities of macOS but allow for touch. Track pads, for example, feel downright nonsensical after editing photos on an iPad with the Pencil. But the interface also needs to move at the speed of the thoughts of the person using it. It needs to delight with swiftness and capability, not infuriate with plodding, niggling shortcomings. Keystrokes shouldn’t be lost between context switches. Data shouldn’t feel locked up in boxes in inaccessible corners.
Design Tools Are Running Out of Track
∞ Oct 14, 2018
About a year ago, Colm Tuite reviewed the state of UI design tools and found them wanting: Design Tools Are Running Out of Track. If anything, his critique feels even more relevant a year later. Our most popular design tools are fundamentally disconnected from the realities and constraints of working software:
- They generate static images in an era of voice, video, motion, and complex interactions. (“Our design tools should manipulate the actual product, not a picture of it.”)
- They have no awareness of the layout conventions of the web, so they don’t help designers work with the grain of CSS grid and flexbox.
- They’re tuned for infinite flexibility instead of usefully embracing the constraints of a design system or code base.
As I’ve worked with more and more companies struggling to design at scale, this last point has proven to be especially troublesome when maintaining or evolving existing software. Most design tools are not well tuned to support designer-developer collaboration within design systems (though some are beginning to innovate here). Tuite writes:
Your design tool is never going to tell you that you can’t do something. It’s never going to pull you up for using an off-brand color. It’s never going to prevent you from using a whitespace value which doesn’t belong in your spacing scale. It’s never going to warn you that 20% of the population literally cannot see that light gray text you’ve just designed.
And why not…? Because design tools don’t care.
Design tools are so waywardly enamoured with a vision for unlimited creativity that they have lost sight of what it means to design sensibly, to design inclusively, to design systematically.
Put simply, design tools allow us to do whatever the hell we want. To some extent, this level of boundless creativity is useful, especially in the ideation phases. As UI designers though, the majority of our workflow doesn’t call for much creativity. Rather, our workflow calls for reuse, repetition, familiarity and standardisation; needs that our tools do little to satisfy.
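Tuite’s light-gray example is easy to make concrete. The WCAG contrast check a design tool could run is only a few lines; this sketch is ours, not a feature of any current tool:

```python
def _linearize(channel: int) -> float:
    # sRGB channel (0-255) to linear light, per the WCAG 2.x definition
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)

def contrast_ratio(fg: str, bg: str) -> float:
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Light gray text on white: roughly 2.3:1, well short of the 4.5:1
# that WCAG AA requires for body text. A tool could warn right here.
print(contrast_ratio("#aaaaaa", "#ffffff"))
```

A design tool that ran this check as you picked a color would be enforcing exactly the kind of sensible, inclusive constraint Tuite is asking for.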
Developer culture and workflow have a strong bias toward consistency and reuse. That’s less true of design, and the tools are part of the problem. When there are no guardrails, it’s easy to wander off the road. Our tools don’t help us stay on the path of established design systems.
This causes a disconnect between designers and developers because design comps drift from the realities of the established patterns in the code base. A Sketch library—or any collected drawings of software—can be a canonical UI reference only when the design is first conceived. Once the design gets into code, the product itself should be the reference, and fresh design should work on top of that foundation. It’s more important that our design libraries reflect what’s in the code than the reverse. Production code—and the UI it generates—has to be the single source of truth, or madness ensues.
That doesn’t mean that developers exclusively run the show or that we as designers have no agency in the design system. We can and should offer changes to the design and interaction of established patterns. But we also have to respect the norms that we’ve already helped to establish, and our tools should, too.
That’s the promise of design-token systems like InVision’s Design System Manager. Tokens help establish baseline palettes and styles across code and design tools. The system gets embedded in whatever environment designers or developers prefer to work in. Designers and developers alike can edit those rules at the source—within the system itself.
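To make that concrete, here’s a minimal sketch of tokens acting as a shared source of truth; the token names and values are invented for illustration, not drawn from DSM:

```python
# Hypothetical design tokens shared by code and design tools.
TOKENS = {
    "color.brand.primary": "#0057b8",
    "color.text.default": "#1a1a1a",
    "space.scale": [0, 4, 8, 16, 24, 32, 48, 64],  # px
}

def is_on_palette(color: str) -> bool:
    """The guardrail most design tools lack: flag off-brand colors."""
    palette = {v for k, v in TOKENS.items() if k.startswith("color.")}
    return color.lower() in palette

def is_on_spacing_scale(px: int) -> bool:
    """Flag whitespace values that don't belong to the spacing scale."""
    return px in TOKENS["space.scale"]

print(is_on_palette("#0057B8"))     # True: the brand blue
print(is_on_palette("#0058b9"))     # False: one notch off-brand
print(is_on_spacing_scale(13))      # False: not on the scale
```

When both the design library and the production CSS read from the same token file, an off-scale value becomes a lint error instead of a judgment call.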
This approach is a step forward in helping designers and developers stay in sync by contributing to the same environment: the actual product and the pattern library that feeds it. We’ve seen a lot of success helping client teams to make this transition, but it requires adopting a (sometimes challenging) new perspective on how to work—and where design authority lies. Big rewards come with that change in worldview.
Is your organization wrestling with inconsistent interfaces and duplicative design work? Big Medium helps companies scale great design and improve collaboration through design systems. Get in touch for a workshop, executive session, or design engagement.
Apple Used to Know Exactly What People Wanted — Then It Made a Watch
∞ Oct 5, 2018
The latest version of Apple Watch doubles down on its fitness and health-tracking sensors, but as John Herrman writes in The New York Times, it’s not yet clear exactly what value all that data-tracking might deliver—and for whom:
For now, this impressive facility for collecting and organizing information about you is just that — it’s a great deal of data with not many places to go. This is sensitive information, of course, and Apple’s relative commitment to privacy — at least compared with advertising-centric companies like Google and Facebook — might be enough to get new users strapped in and recording.
As Apple continues its institutional struggle to conceive of what the Apple Watch is, or could be, in the imaginations of its customers, it’s worth remembering that Apple’s stated commitment to privacy is, in practice, narrow. The competitors that Cook likes to prod about their data-exploitative business models have a necessary and complicit partner in his company, having found many of their customers through Apple’s devices and software.
This is especially relevant as Apple casts about for ideas elsewhere. Apple has already met with the insurance giant Aetna about ways in which the company might use Apple Watches to encourage healthier — and cheaper — behavior in its tens of millions of customers. John Hancock, one of the largest life insurers in America, said after Apple’s latest announcement that it would offer all its customers the option of an interactive policy, in which customers would get discounts for healthy habits, as evidenced by data from wearable devices. Here we see the vague outlines of how the Apple Watch could become vital, or at least ubiquitous, as the handmaiden to another data-hungry industry.
Facebook Is Giving Advertisers Access to Your Shadow Contact Information
∞ Sep 27, 2018
One of the more insidious aspects of the social graph is that companies can mine data about you even if you don’t actively participate in their network. Your friends inadvertently give you up, as Kashmir Hill writes at Gizmodo:
Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn’t hand over at all, but that was collected from other people’s contact books, a hidden layer of details Facebook has about you that I’ve come to call “shadow contact information.”
Information that we assume to be under our control is not. Or, in many cases, information that you provide for one specific purpose is then flipped around and applied to another. Hill mentions an especially cynical dark-pattern example:
[Researchers] found that when a user gives Facebook a phone number for two-factor authentication or in order to receive alerts about new log-ins to a user’s account, that phone number became targetable by an advertiser within a couple of weeks. So users who want their accounts to be more secure are forced to make a privacy trade-off and allow advertisers to more easily find them on the social network.
This is despicable. This is a moment when companies should strive to improve literacy about data sharing and data usage. Instead, companies like Facebook purposely obscure and misdirect. This is both a crisis and an opportunity. As designers, how might we build new business models and interactions that rely on honesty and respect, instead of deception and opportunism?
Arguments for transparency are too often met with counterarguments like, “Well, if we tell them what we’re doing, they might not opt in.” (Or, more bluntly, “If people knew about it, they wouldn’t want any part of it.”) When we find ourselves using these words to justify covering our tracks, it’s a cue that we almost certainly shouldn’t be doing that thing in the first place.
Google Data Collection Research
∞ Sep 27, 2018
Whoops, Google, it looks like your business model is showing…
In “Google Data Collection,” Douglas C. Schmidt, Professor of Computer Science at Vanderbilt University, catalogs how much data Google is collecting about consumers and their most personal habits across all of its products and how that data is being tied together.
The key findings include:
- A dormant, stationary Android phone (with the Chrome browser active in the background) communicated location information to Google 340 times during a 24-hour period, an average of 14 data communications per hour. In fact, location information constituted 35 percent of all the data samples sent to Google.
- For comparison’s sake, a similar experiment found that on an iOS device with Safari but not Chrome, Google could not collect any appreciable data unless a user was interacting with the device. Moreover, an idle Android phone running the Chrome browser sends back to Google nearly fifty times as many data requests per hour as an idle iOS phone running Safari.
- An idle Android device communicates with Google nearly 10 times more frequently than an Apple device communicates with Apple servers. These results highlight the fact that the Android and Chrome platforms are critical vehicles for Google’s data collection. Again, these experiments were done on stationary phones with no user interactions. If you actually use your phone, Google’s information collection increases further.
Pair that with Google’s substantial ad tech, including the network formerly known as DoubleClick, and Google’s data collection reaches well beyond the company’s own properties:
A major part of Google’s data collection occurs while a user is not directly engaged with any of its products. The magnitude of such collection is significant, especially on Android mobile devices, arguably the most popular personal accessory now carried 24/7 by more than 2 billion people.
If Software Is Eating the World, What Will Come Out the Other End?
∞ Sep 23, 2018
“So far, it’s mostly shit,” writes John Battelle, suggesting that there’s a world beyond the optimization and efficiency so cherished by the would-be disrupters:
But the world is not just software. The world is physics, it’s crying babies and shit on the sidewalk, it’s opioids and ecstasy, it’s car crashes and Senate hearings, lovers and philosophers, lost opportunities and spinning planets around untold stars. The world is still real. Software hasn’t eaten it as much as bound it in a spell, temporarily I hope, while we figure out what comes next.
The iPhone’s original UI designer on Apple’s greatest flaws
∞ Sep 10, 2018
Fast Company offers an interview with Imran Chaudhri, the original designer of the iPhone user interface. According to Chaudhri, Apple knew that the device and its notifications would be distracting, that the personal nature of the phone would soak up attention in entirely new ways. But Apple consciously decided not to make it easy to tone down those distractions:
“Inside, getting people to understand that [distraction] was going to be an issue was difficult. Steve [Jobs] understood it…internally though, I think there was always a struggle as to how much control do we want people to have over their devices. When I and a few other people were advocating for more control, that level of control was actually pushed back by marketing. We would hear things like, ‘you can’t do that because then the device will become uncool.’
“The controls exist for you. They’ve always been there and yet it’s incredibly hard to know how to use them and to manage them. You literally have to spend many days to go through and really understand what’s bombarding you and then turn those things off in a singular fashion. So for the people who understand the system really well, they can take advantage of it, but the people that don’t—the people that don’t even change their ringtone, who don’t even change their wallpaper—those are the real people that suffer from this sort of thing. They don’t have that level of control.”
Since then, Apple has embraced privacy as a competitive advantage versus Android, but Chaudhri suggests that iOS could do more to offer transparency and smart adjustments to personal settings:
“The system is intelligent enough to let you know that there are [apps] that you’ve given permission to that are still using your data, and notifications you’ve turned on that you’re not actually responding to. So let’s circle back and let’s reestablish a dialogue between the phone and the customer, where the phone asks, ‘Do you really need these notifications? Do you really want Facebook to be using your address book data? Because you’re not logging into Facebook anymore.’ There’s a lot of ways to remind people if you just design them properly.”
Seems to me that we should all do a similar inventory of the systems we design. There remain so many opportunities to create interventions to improve user literacy and control over privacy, data usage, and distraction. Responsible design in the era of the algorithm demands this kind of transparency.
Also, when Chaudhri says, “there was always a struggle as to how much control do we want people to have over their devices,” my take is: people should have all the control.
Consider the Beer Can
∞ Sep 10, 2018
Once upon a time, beer cans had no tab. They were sealed cans, and you used a church key to punch holes in them. In 1962, the “zip top” tab was invented, letting you open the can by peeling off a (razor-sharp) tab. John Updike was not impressed:
This seems to be an era of gratuitous inventions & negative improvements. Consider the beer can. It was beautiful as a clothespin, as inevitable as the wine bottle, as dignified & reassuring as the fire hydrant. A tranquil cylinder of delightfully resonant metal, it could be opened in an instant, requiring only the application of a handy gadget freely dispensed by every grocer… Now we are given instead, a top beetling with an ugly, shmoo-shaped "tab," which after fiercely resisting the tugging, bleeding fingers of the thirsty man, threatens his lips with a dangerous & hideous hole. However, we have discovered a way to thwart Progress… Turn the beer can upside down and open the bottom. The bottom is still the way the top used to be. This operation gives the beer an unsettling jolt, and the sight of a consistently inverted beer can makes some people edgy. But the latter difficulty could be cleared up if manufacturers would design cans that looked the same whichever end was up, like playing cards. Now, that would be progress.
I love this. It conjures lots of questions for designers as we seek to improve existing experiences:
- What do innovations cost in social and physical pleasures when they disrupt familiar experiences?
- What price do we pay (or extract from others) when we design for efficiency? Whose efficiency are we designing for, anyway?
- How do we distinguish nostalgia from real loss (and does the distinction matter)?
- How can we take useful lessons from the hacks our customers employ to work around our designs?
Related: Eater covers the history of beer-can design. You’re welcome.
How to have a healthy relationship with tech
∞ Sep 10, 2018
At Well+Good, the wonderful Liza Kindred describes how to make personal technology serve you, instead of the reverse. It all starts with realizing that your inability to put down your phone isn’t a personal failing, it’s something that’s been done to you:
“The biggest problem with how people engage with technology is technology, not the people,” she says. “Our devices and favorite apps are all designed to keep us coming back for more. That being said, there are many ways for us to intervene in our own relationships with tech, so that we can live this aspect of our lives in a way we can be proud of.”
Liza offers several pointers for putting personal technology in its place. My personal favorite:
Her biggest recommendation is turning off all notifications not sent by a human. See ya, breaking news, Insta likes, and emails. “Your time is more valuable than that,” Kindred says.
Alas, these strategies are akin to learning self-defense skills during a crime wave. They’re helpful (critical, even), but the core problem remains. In this case, the “crime wave” is the cynical, engagement-hungry strategies that too many companies employ to keep people clicking and tapping. And clicking and tapping. And clicking and tapping.
Liza’s on the case there, too. Her company Holy Shift helps people find mindful and healthy experiences in a modern, distracting, engagement-heavy world. I’ve participated in her Mindful Technology workshops, and they’re mind-opening. Liza demonstrates that design patterns and business models you might take for granted as best practices do more damage than you realize.
Meanwhile, we’ll have to continue to sharpen those self-defense skills.
“Trigger for a rant”
∞ Jul 1, 2018
In his excellent Four Short Links daily feature, Nat Torkington has something to say about innovation poseurs—in the mattress industry:
Why So Many Online Mattress Brands – trigger for a rant: software is eating everything, but that doesn’t make everything an innovative company. If you’re applying the online sales playbook to product X (kombucha, mattresses, yoga mats) it doesn’t make you a Level 9 game-changing disruptive TechCo, it makes you a retail business keeping up with the times. I’m curious where the next interesting bits of tech are.
Should computers serve humans, or should humans serve computers?
∞ Jun 30, 2018
Nolan Lawson considers dystopian and utopian possibilities for the future, with a gentle suggestion that front-line technologists have some agency here. What kind of world do you want to help build?
The core question we technologists should be asking ourselves is: do we want to live in a world where computers serve humans, or where humans serve computers?
Or to put it another way: do we want to live in a world where the users of technology are in control of their devices? Or do we want to live in a world where the owners of technology use it as yet another means of control over those without the resources, the knowledge, or the privilege to fight back?