
What We’re Reading

ai

Melinda Gates and Fei-Fei Li Want to Liberate AI from “Guys With Hoodies”

∞ Jun 28, 2017

In a wonderful interview with Backchannel, Google Cloud’s chief AI scientist Fei-Fei Li and Microsoft alum and philanthropist Melinda Gates press for more diversity in the artificial intelligence field. Among other projects, Li and Gates have launched AI4ALL, an educational nonprofit working to increase diversity and inclusion in artificial intelligence.

Fei-Fei Li: As an educator, as a woman, as a woman of color, as a mother, I’m increasingly worried. AI is about to make the biggest changes to humanity, and we’re missing a whole generation of diverse technologists and leaders.…

Melinda Gates: If we don’t get women and people of color at the table — real technologists doing the real work — we will bias systems. Trying to reverse that a decade or two from now will be so much more difficult, if not close to impossible. This is the time to get women and diverse voices in so that we build it properly, right?

This reminds me, too, of the call of anthropologist Genevieve Bell to look beyond even technologists to craft this emerging world of machine learning. “If we’re talking about a technology that is as potentially pervasive as this one and as potentially close to us as human beings,” Bell said, “I want more philosophers and psychologists and poets and artists and politicians and anthropologists and social scientists and critics of art.”

We need every perspective we can get. A hurdle: fancy new technologies often come with the damaging misperception that they’re accessible only to an elite few. I love how Gates simply dismisses that impression:

You can learn AI. And you can learn how to be part of the industry. Go find somebody who can explain things to you. If you’re at all interested, lean in and find somebody who can teach you.… I think sometimes when you hear a big technologist talking about AI, you think, “Oh, only he could do it.” No. Everybody can be part of it.

It’s true. Get after it. (If you’re a designer wondering how you fit in, I have some ideas about that: Design in the Era of the Algorithm.)

Backchannel | Melinda Gates and Fei-Fei Li Want to Liberate AI from “Guys With Hoodies”
algorithms

Social Cooling

∞ Jun 20, 2017

Social Cooling is Tijmen Schep’s term to describe the chilling effect of the reputation/surveillance economy on expression and exploration of ideas:

Social Cooling describes the long-term negative side effects of living in a reputation economy:

  1. A culture of conformity. Have you ever hesitated to click on a link because you thought your visit might be logged, and it could look bad?…

  2. A culture of risk-aversion. … Rating systems can create unwanted incentives, and increase pressure to conform to a bureaucratic average.

  3. Increased social rigidity. Digital reputation systems are limiting our ability and our will to protest injustice.

Schep suggests that all of this raises three big questions—philosophical, economic, and cultural:

  1. Are we becoming more well behaved, but less human?
  2. Are we undermining our creative economy?
  3. Will this impact our ability to evolve as a society?
Social Cooling
algorithms

How Germany’s Otto Uses Artificial Intelligence

∞ Jun 17, 2017

Machine learning is trying to one-up just-in-time inventory with what can only be called before-it’s-time inventory. The Economist reports that German online merchant Otto is using algorithms to predict what you’ll order a week before you order it, reducing surplus stock and speeding deliveries:

A deep-learning algorithm, which was originally designed for particle-physics experiments at the CERN laboratory in Geneva, does the heavy lifting. It analyses around 3bn past transactions and 200 variables (such as past sales, searches on Otto’s site and weather information) to predict what customers will buy a week before they order.

The AI system has proved so reliable—it predicts with 90% accuracy what will be sold within 30 days—that Otto allows it automatically to purchase around 200,000 items a month from third-party brands with no human intervention. It would be impossible for a person to scrutinise the variety of products, colours and sizes that the machine orders. Online retailing is a natural place for machine-learning technology, notes Nathan Benaich, an investor in AI.

Overall, the surplus stock that Otto must hold has declined by a fifth. The new AI system has reduced product returns by more than 2m items a year. Customers get their items sooner, which improves retention over time, and the technology also benefits the environment, because fewer packages get dispatched to begin with, or sent back.
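The Economist doesn’t describe Otto’s model, but the shape of the problem is familiar tabular prediction: recent sales, search activity, and weather in; expected orders out; an automatic purchase when the forecast clears a threshold. A purely hypothetical sketch (scikit-learn assumed; features, numbers, and threshold invented for illustration):

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Toy stand-ins for a few of the ~200 variables per product:
    # [units sold last week, on-site searches, weather index]
    X = np.array([
        [120, 450, 0.2],
        [ 80, 300, 0.5],
        [200, 900, 0.1],
        [ 60, 150, 0.9],
    ])
    y = np.array([130, 85, 240, 50])  # units ordered the following week

    model = GradientBoostingRegressor().fit(X, y)

    forecast = model.predict([[150, 600, 0.3]])[0]
    if forecast > 100:  # hypothetical reorder threshold
        print(f"auto-purchase {forecast:.0f} units")  # no human in the loop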

The Economist | Automatic for the People: How Germany’s Otto Uses Artificial Intelligence
sentient design

The Surprising Repercussions of Making AI Assistants Sound Human

∞ Jun 17, 2017

There’s much effort afoot to make the bots sound less… robotic. Amazon recently enhanced its Speech Synthesis Markup Language to give Alexa a more human range of expression. SSML now lets Alexa whisper, pause, bleep expletives, and vary the speed, volume, emphasis, and pitch of its speech.

This all comes on the heels of Amazon’s February release of so-called speechcons (like emoticons, get it?) meant to add some color to Alexa’s speech. These are phrases like “zoinks,” “yowza,” “read ’em and weep,” “oh brother,” and even “neener neener,” all pre-rendered with maximum inflection. (Still waiting on “whaboom” here.)
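For a concrete sense of what this markup looks like, here’s a minimal sketch of an SSML response as an Alexa skill handler might return it. The tag names follow Amazon’s published SSML reference; the response envelope is the standard Alexa Skills Kit JSON format, and everything else about the skill is assumed:

    # A minimal sketch of Alexa's SSML features, wrapped in a standard
    # Alexa Skills Kit response. Tag names per Amazon's SSML reference;
    # the surrounding skill wiring is assumed.
    ssml = (
        "<speak>"
        'Normal voice first. <amazon:effect name="whispered">'
        "Now Alexa can whisper.</amazon:effect> "
        '<break time="500ms"/> '
        '<prosody rate="slow" pitch="low" volume="soft">'
        "It can vary speed, volume, and pitch,</prosody> "
        'add <emphasis level="strong">emphasis</emphasis>, '
        'bleep <say-as interpret-as="expletive">expletives</say-as>, '
        "and drop a speechcon: "
        '<say-as interpret-as="interjection">zoinks</say-as>!'
        "</speak>"
    )

    def build_response(ssml_text: str) -> dict:
        # Wrap SSML in a minimal Alexa Skills Kit response envelope.
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "SSML", "ssml": ssml_text},
                "shouldEndSession": True,
            },
        }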

The effort is intended to make Alexa feel less transactional and, well, more human. Writing for Wired, however, Elizabeth Stinson considers whether human personality is really what we want from our bots—or whether it’s just unhelpful misdirection.

“If Alexa starts saying things like hmm and well, you’re going to say things like that back to her,” says Alan Black, a computer scientist at Carnegie Mellon who helped pioneer the use of speech synthesis markup tags in the 1990s. Humans tend to mimic conversational styles; make a digital assistant too casual, and people will reciprocate. “The cost of that is the assistant might not recognize what the user’s saying,” Black says.

A voice assistant’s personality improving at the expense of its function is a tradeoff that user interface designers increasingly will wrestle with. “Do we want a personality to talk to or do we want a utility to give us information? I think in a lot of cases we want a utility to give us information,” says John Jones, who designs chatbots at the global design consultancy Fjord. Just because Alexa can drop colloquialisms and pop culture references doesn’t mean it should. Sometimes you simply want efficiency. A digital assistant should meet a direct command with a short reply, or perhaps silence—not booyah! (Another speechcon Amazon added.)

Personality and utility aren’t mutually exclusive, though. You’ve probably heard the design maxim that form should follow function. Alexa has no physical form to speak of, but its purpose should inform its persona. For now, though, the comprehension skills of digital assistants remain too rudimentary to bridge these two ideals. “If the speech is very humanlike, it might lead users to think that all of the other aspects of the technology are very good as well,” says Michael McTear, coauthor of The Conversational Interface. The wider the gap between how an assistant sounds and what it can do, the greater the distance between its abilities and what users expect from it.

When designing within the constraints of any system, the goal should be to channel user expectations and behavior to match the actual capabilities of the system. The risk of adding too much personality is that it will create an expectation/behavior mismatch. Zoinks!

Wired | The Surprising Repercussions of Making AI Assistants Sound Human
ai

The Machine Learning Paradox

∞ Jun 17, 2017

Mike Loukides describes a fundamental weirdness in creating predictive algorithms: in order to make them flexible enough to deal with real-world data, you also have to make them imperfect.

Building a system that’s 100% accurate on training data is a problem that’s well known to data scientists: it’s called overfitting. It’s an easy and tempting mistake to make, regardless of the technology you’re using. Give me any set of points (stock market prices, daily rainfall, whatever; I don’t care what they represent), and I can find an equation that will pass through them all. Does that equation say anything at all about the next point you give me? Does it tell me how to invest or what raingear to buy? No—all my equation has done is "memorize" the sample data. Data only has predictive value if the match between the predictor and the data isn’t perfect. You’ll be much better off getting out a ruler and eyeballing the straight line that comes closest to fitting.

If a usable machine learning system can’t identify the training data perfectly, what does that say about its performance on real-world data? It’s also going to be imperfect. How imperfect? That depends on the application. 90–95% accuracy is achievable in many applications, maybe even 99%, but never 100%. …

The right question to ask isn’t how to make an error-free system; it’s how much error you’re willing to tolerate, and how much you’re willing to pay to reduce errors to that level.
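Loukides’ curve-fitting example is easy to reproduce. In this minimal sketch (numpy assumed), a degree-5 polynomial passes exactly through six noisy points drawn from a straight-line trend, then blows up on a point it hasn’t seen, while the humble straight line stays close:

    import numpy as np

    rng = np.random.default_rng(42)

    # Six noisy samples from a simple underlying trend: y = 2x + noise.
    x_train = np.arange(6, dtype=float)
    y_train = 2 * x_train + rng.normal(scale=1.5, size=x_train.size)

    # A degree-5 polynomial through six points "memorizes" them exactly.
    memorizer = np.polynomial.Polynomial.fit(x_train, y_train, deg=5)
    # The "ruler": the straight line that comes closest to fitting.
    ruler = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)

    # A point neither model has seen.
    x_new, y_new = 7.0, 14.0

    print(np.abs(memorizer(x_train) - y_train).max())  # ~0: perfect on training data
    print(abs(memorizer(x_new) - y_new))  # typically large: memorization extrapolates badly
    print(abs(ruler(x_new) - y_new))      # modest: the simple model generalizes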

If errors are inevitable, then the job of design is to present the data in ways that set appropriate expectations. The more I ponder the future of UX in a machine-learning world, the more I’m convinced of this: large swaths of the UX discipline will revolve around presenting data in ways that anticipate the machines’ occasionally odd, strange, and just-plain-wrong pronouncements.

O'Reilly Media | The Machine Learning Paradox
future

The City Was Connected

∞ Jun 17, 2017

Dan Hon imagines what happens when all of a city’s systems are connected, incentivized, gamified, and bureaucratically weaponized:

I was late paying the water bill, so the parking meter refused service until I coughed up.

The meter said I had 30 seconds to pay the water bill until I had to move my car, and… I just froze. Then the meter attendant came. She said she was just doing her job as she booted my car, then looked down at her phone. Reminded me I hadn’t taken out my recycling.

This wasn’t turning out to be a good day.

She told me I was on my second strike: one more, and I’d lose streetlight privileges. I’d heard about that: a social shaming punishment. Streetlights would create a cone of darkness around just you.

(Hon originally published this short fiction as a Twitter thread and shared it again in his email newsletter.)

Dan Hon | The City Was Connected
algorithms

If Google Teaches an AI to Draw, Will That Help It Think?

∞ Jun 10, 2017

Lately I’ve been thinking hard about creatives’ role in a world of artificial intelligence, but what about the reverse: how about AI’s role in creative pursuits? Alexis Madrigal reports for The Atlantic on SketchRNN, one of several Google efforts to teach machines to make art:

The implicit argument is that when humans draw, they make abstractions of the world. They sketch the generalized concept of “pig,” not any particular animal. That is to say, there is a connection between how our brains store “pigness” and how we draw pigs. Learn how to draw pigs and maybe you learn something about the human ability to synthesize pigness. …

What can SketchRNN learn? Below is a network trained on firetrucks generating new fire trucks. Inside the model, there is a variable called “temperature,” which allows the researchers to crank the randomness of the output up or down. In the following images, bluer images have the temperature turned down, redder ones are “hotter.”

[Image: SketchRNN’s AI-generated sketches of fire trucks]

[…]

What [project leader Doug] Eck finds fascinating about sketches is that they contain so much with so little information. “You draw a smiley face and it’s just a few strokes,” he said, strokes that look nothing like the pixel-by-pixel photographic representation of a face. And yet any 3-year-old could tell you a face was a face, and if it was happy or sad. Eck sees it as a kind of compression, an encoding that SketchRNN decodes and then can re-encode at will.

In other words, sketches might teach AI portable, human-understandable symbols of abstract concepts—a shorthand description of the world. It strikes me that all creative pursuits, including design and language, traffic in similar symbols and shorthands. I’m impatient to find out how this particular branch of AI develops to understand (and create) the interfaces and interactions that designers make on an ongoing basis.
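A side note on that “temperature” variable: it’s standard sampling machinery, not anything SketchRNN-specific. A minimal sketch of the idea, assuming a model that produces logits over its options for the next stroke:

    import numpy as np

    def sample(logits, temperature=1.0, rng=None):
        # Divide logits by temperature before the softmax: low values
        # sharpen the distribution (safer choices), high values flatten
        # it (riskier ones).
        rng = rng if rng is not None else np.random.default_rng()
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())  # subtract max for stability
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

Turned down, temperature concentrates probability on the model’s safest strokes (the blue fire trucks); turned up, it flattens the distribution toward riskier output (the red ones).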

At the moment, this is the stuff of the research lab. But other flavors are starting to emerge in consumer products, too. Apple has been training iOS to anticipate strokes in sketches and handwriting to make writing with Apple Pencil seem buttery smooth. In BuzzFeed’s overview of iPad updates, John Paczkowski reports:

Meanwhile, the Apple Pencil’s latency — that slight lag you get when drawing — has been reduced to the point where it’s virtually imperceptible; Apple says it’s just 20 milliseconds. And since Apple is so intensely focused on capturing the experience of putting pen to paper, it’s doing additional work in the background to remove the lag entirely with machine learning–based algorithms designed to predict where a Pencil is headed next.

“We actually schedule the next frame for where we think the Pencil’s going to be, so it draws it right when you get there, instead of right after you have been there,” Schiller says.
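Apple’s predictor is machine-learned and unpublished, but the core idea is easy to sketch: extrapolate from recent stylus samples and render the ink where the pen is about to be. A naive, velocity-based stand-in:

    def predict_next(samples):
        # Extrapolate the next (x, y) stylus position from the last two
        # samples, assuming constant velocity over one sampling interval.
        # A naive stand-in for Apple's ML-based predictor.
        (x0, y0), (x1, y1) = samples[-2], samples[-1]
        return (2 * x1 - x0, 2 * y1 - y0)

    # Schedule the next frame at the predicted point so the ink appears
    # where the pen is, not where it just was.
    print(predict_next([(10.0, 10.0), (12.0, 13.0)]))  # (14.0, 16.0)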

While Google and SketchRNN chase the lofty goal of understanding how humans communicate in symbols, Apple is learning the commonplace but useful skill of anticipating how you write and draw. Machines may not yet be capable of their own creative works, but they’re already learning to understand and anticipate our own.

The Atlantic | If Google Teaches an AI to Draw, Will That Help It Think?
design system

The Workshop and the Storefront

∞ Jun 2, 2017

Brad Frost nailed down a pair of mighty useful metaphors for pattern libraries (“workshops”) and design-system style guides (“storefronts”). These are useful because they help delineate what kind of work happens where, which is a recurring source of confusion we’ve seen in companies struggling to maintain jumbo design-system projects.

The workshop

Brad created the first version of Pattern Lab when we designed the TechCrunch website back in 2013. That pattern-library software was the first glimpse of his Atomic Design methodology, building design-pattern “organisms” out of smaller “atoms” and “molecules.” Pattern Lab has since been open-sourced and remains our go-to tool for developing and sharing websites and full-blown design systems.

Pattern Lab is where all our work comes together. It’s a collaborative environment where information architecture gets stubbed out in the browser, where visual design comes together, where the code gets wrangled, and where content is edited. We share ongoing work inside Pattern Lab with stakeholders and clients. And it’s the final deliverable for web projects, complete with page templates and detailed pattern library. Our projects happen almost entirely inside Pattern Lab.

Our pattern libraries are always a wonderful mess, full of experiments and spare parts and tools.

Brad explains:

While Pattern Lab shares some qualities with style guides (for instance, it shows code snippets and you can add pattern documentation), the environment is really designed for teams to effectively build and work with UI components: the navigation across the top is small and unobtrusive, there are viewport resizing tools to stress-test UI components and pages, we’re able to organize components in a way that makes sense to us as creators (such as using the atomic design methodology), and we can design with dynamic data to ensure patterns are robust, resilient, and serve the needs of the organization’s applications. Like my wife’s jewelry workshop, the environment is designed for the design system team to be productive and creative.

This is not, however, an especially friendly place for people outside the working production team. Its organization around atoms, molecules, and organisms isn’t relevant to others; it contains building-block patterns that don’t have much useful meaning on their own; and it contains work-in-progress experiments that aren’t ready for prime time.

So when you’re sharing polished patterns and design systems with a group beyond the production team, you need something more refined. You need…

The storefront

If the pattern library is the workshop, then the design-system style guide is the storefront, as Brad explains:

A style guide is the storefront where all the ingredients of the design system are put out on the shelves. The style guide storefront is designed for a different context than the design/dev environment workshop. Rather than being a tool for only the design systems team to make use of, the style guide communicates the design system to the whole organization. That means the style guide audience should be cross-disciplinary, since a design system can help create a shared vocabulary between all the people who are responsible for the success of the products at the organization. The style guide should provide information helpful for both makers and users of the design system, and should be used as a vehicle to continuously sell the value of the design system to the organization.

This isn’t just a difference in presentation. There’s a difference in core content, too. In our recent projects building out gigantic enterprise design systems, we’ve found that the style guide always presents only a subset of the pattern library. We cherry-pick the polished patterns that are ready to share, while excluding most of the experiments and building-block “atoms.”

Behind the scenes, we use Brad’s Style Guide Guide to import selected patterns and templates from Pattern Lab, and then display them in a polished website. (In practice, a simple Grunt task exports the HTML for all patterns, and then copies ‘em into the style guide directory. Style Guide Guide includes them automagically in pages when it builds its website.)
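For illustration, here’s a rough Python equivalent of that copy step. (The real task runs in Grunt; the paths here are hypothetical, and Pattern Lab’s markup-only HTML exports are assumed as the source.)

    import shutil
    from pathlib import Path

    PATTERNS = Path("pattern-lab/public/patterns")           # Pattern Lab output
    INCLUDES = Path("style-guide-guide/_includes/patterns")  # Style Guide Guide input

    # Copy each compiled pattern's bare HTML into the style guide's
    # includes directory; the style guide picks them up on its next build.
    INCLUDES.mkdir(parents=True, exist_ok=True)
    for markup in PATTERNS.glob("**/*.markup-only.html"):
        shutil.copy(markup, INCLUDES / markup.name)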

From there, we add lots of guidelines and documentation to help newcomers make sense of the UX, the visual design, and the underlying markup. The end result is a set of settled solutions for common problems, clearly understandable and ready for production.

Distinct places for distinct jobs

We build patterns in the workshop, and we display the best of them in the storefront, showcasing them in the best possible light.

Too often, though, we see organizations try to force everything into one place. We see workshop pattern libraries trying to do double duty as a canonical design-system reference. Or, on the other side, we see pattern libraries set up as static storefront references that live outside a useful working development environment. When these resources sit outside the workflow of designers and developers, they don’t get used, they get stale, they become irrelevant.

Following the workshop/storefront model—and stitching the two together so that one feeds the other—has ensured that the design systems we create continue to be used, vital, dynamic.

Brad Frost | The Workshop and the Storefront
algorithms

“Algorithms Aren’t Racist. Your Skin Is Just Too Dark.”

∞ Jun 2, 2017
[Image: Joy Buolamwini demonstrates “the coded gaze,” which recognizes her face only when she wears a white mask.]

Joy Buolamwini is on a tear lately. The founder of the Algorithmic Justice League has received well-deserved press from the likes of the BBC and the Guardian for her campaign to uncover inadvertent bias in machine-learning algorithms.

At Hackernoon, Buolamwini responds to criticism she received after demonstrating that facial recognition often breaks down for people of color. (Buolamwini, a woman of color, had to put on a white mask before one algorithm would even detect a face.) Some have told Buolamwini that it’s not the algorithm’s fault but rather that cameras are poor at discerning black faces: “Algorithms aren’t racist,” the argument goes. “Your skin is just too dark.”

Good lord. The problem is not with “photography.” If your eye can discern difference, the camera can, too. It’s true that camera technology has historically favored light skin. But that’s less a product of the underlying technology than of the skewed market forces and customer base that shaped early photography. In other words: it was a miserable design decision. For decades, for example, Kodak’s development process for color film was calibrated to photos called “Shirley cards” (named after the first model to pose for them). Shirley cards reflected a decidedly white concept of beauty. “In the early days, all of them were white and often tagged with the word ‘normal,’” NPR reported.

Now we’re carrying this original bias into the machine-learning era. Machine learning excels at determining what’s “normal” and trying to replicate it—or discard outliers. What the machines think is normal depends entirely on the data we feed their models. As the era of the algorithm begins to embrace the whole broad world, it’s urgent that we examine what “normal” really is and work to avoid propagating exclusionary notions of the past by encoding them into our models.

Instead of doing the hard work of creating truly inclusive algorithms, however, some suggest that Buolamwini should instead carry a lighting kit with her:

More than a few observers have recommended that instead of pointing out failures, I should simply make sure I use additional lighting. Silence is not the answer. The suggestion to get more lights to increase illumination in an already lit room is a stop gap solution. Suggesting people with dark skin keep extra lights around to better illuminate themselves misses the point.

Should we change ourselves to fit technology or make technology that fits us?

Who has to take extra steps to make technology work? Who are the default settings optimized for?

As always with emerging technologies, our challenge is making tech bend to our lives instead of the reverse. It’s profoundly unfair to make some lives bend more than others.

For designers, the arrival of the algorithm era introduces UX research challenges at an unprecedented scale. A big emerging job of design is to help identify where the prevailing definition of “normal” is flawed, and then move heaven and earth to make sure the data models embrace a new, more inclusive definition of normal. That is where we need to add more light.

Hackernoon | Algorithms Aren’t Racist. Your Skin Is Just Too Dark.
ai

The Future of the UX Designer

∞ Jun 1, 2017

Designer Anton Sten ponders the future role of digital designers in a world of more and more Alexas, Siris, and other non-visual interfaces. His conclusion is that much more of our work will be about designing for what goes wrong:

As technology offers us more and more options and possibilities, our work as UX-designers will grow to include even more edge-cases. As our acceptance of friction with these services continues to decrease, our work will increasingly need to include more ‘what if’ scenarios.

I agree, and I’m excited about this. Instead of etching buttons and controls for flows that we wholly control, I see our work evolving into the anticipation of scenarios that spin out of machine-generated content and interaction.

How will we handle the weird, the odd, the unexpected, and the wrong? These are exciting challenges, and they mean designing the experience and expectations around the interaction that the machines themselves create. Among other things, we have to help systems be smart enough to know when they’re not smart enough.

Anton Sten | The Future of the UX Designer
running

Lori Richmond Started #ViewFromMyRun To Merge Her Two Passions

∞ Jun 1, 2017

So happy and proud for my studiomate Lori Richmond, a marvelous illustrator who’s also become an impressive runner over the last several months. Only a year after taking up running, Lori was just featured in Runner’s World—for her inspired illustration project.

Here’s the concept: after every training run, Lori draws or paints a scene she saw. And she executes it in exactly the time it took her to finish that run.

Short runs get quick impressions:

[Instagram post by Lori Richmond (@loririchmonddraws), Mar 24, 2017]

…while long runs get tons of attention and detail:

[Instagram post by Lori Richmond (@loririchmonddraws), May 7, 2017]

Lori’s been posting all of these at her @loririchmonddraws Instagram account. And Runner’s World took notice, which is super-fun recognition for a new runner who’s already logged four half marathons. (Her goal is to run a half marathon in each of New York’s five boroughs.)

“I got kicked out of gym in the 6th grade because I was SO un-athletic,” Lori posted in our studio Slack channel. “The art kids will always come back for you!!”

Runner's World | This Artist Creates Awesome Drawings With Inspiration From Her Training Runs
design system

“The Style Guide Guide”

∞ May 15, 2017

At An Event Apart today, Brad Frost announced the release of “The Style Guide Guide,” a boilerplate template for building style guides for design systems. It’s a terrific starter kit, cribbing the information architecture we’ve found to be most successful in our work on big enterprise design systems.

The Style Guide Guide imports and displays design-pattern HTML from a separate pattern library. Anytime there’s a change in the underlying code of the patterns, the style guide picks it up—always up to date. From there, The Style Guide Guide mixes in your documentation, usage guidelines, and design principles. Because those are entered in Markdown, it’s easy for a whole team to contribute documentation and guidelines, with a very low technical barrier to entry.

Brad, Dan, Ian, and I have been using The Style Guide Guide alongside Pattern Lab in our last few design-system projects. It’s proven to be a highly collaborative environment for creating and sharing a design system. (Stay tuned: Brad promises a blog post to walk you through the integration with Pattern Lab. Very cool.)

GitHub | The Style Guide Guide
algorithms

No Need for Alarm About How Neural Nets Work

∞ May 1, 2017

Albert Wenger writes that concerns about “black box” algorithms are overwrought. (See here and here for more about these concerns.) It’s okay, Wenger says, if we can’t follow or audit the logic of the machines, even in life-and-death contexts like healthcare or policing. We often have that same lack of insight into the way humans make decisions, he says—and so perhaps we can adapt our existing error-prevention techniques to the machines:

It all comes down to understanding failure modes and guarding against them.

For instance, human doctors make wrong diagnoses. One way we guard against that is by getting a second opinion. Turns out we have used the same technique in complex software systems. Get multiple systems to compute something and act only if their outputs agree. This approach is immediately and easily applicable to neural networks.

Other failure modes include hidden biases and malicious attacks (manipulation). Again these are no different than for humans and for existing software systems. And we have developed mechanisms for avoiding and/or detecting these issues, such as statistical analysis across systems.
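Wenger’s “second opinion” guard translates directly to code. A minimal sketch, assuming each model exposes a predict method that returns a label:

    from collections import Counter

    def second_opinion(models, x, min_agreement=1.0):
        # Act on the consensus prediction only when enough models agree;
        # otherwise return None to defer to a human reviewer.
        votes = Counter(model.predict(x) for model in models)
        label, count = votes.most_common(1)[0]
        return label if count / len(models) >= min_agreement else None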

Continuations | No Need for Alarm About How Neural Nets Work
google

Not OK, Google

∞ May 1, 2017

Ben Thompson reacts to Google’s latest effort to bury fake news and hate speech. In particular, he throws a flag on Google’s plan to favor “authoritative” sources—and especially on the fact that Google will almost certainly not reveal what grants a site this privileged status.

Google is going to be making decisions about who is authoritative and who is not, which is another way of saying that Google is going to be making decisions about what is true and what is not, and that demands more transparency, not less.

For better or worse, of course, Google is our de facto truth machine. Most of the world turns to its search engine to answer a question. That’s what makes this whole situation so thorny: as the world’s primary source for facts, Google must be more discerning than it is now. And yet the act of being more discerning amplifies its influence even more.

Perhaps the most unanticipated outcome of the unfettered nature of the Internet is that the sheer volume of information didn’t disperse influence, but rather concentrated it to a far greater degree than ever before, not to those companies that handle distribution (because distribution is free) but to those few that handle discovery.

Stratechery | Not OK, Google
process

How to Design New Information Environments That Don’t Suck

∞ May 1, 2017

Jorge Arango conjures Gall’s Law, the 40-year-old dictum of systems design that remains as relevant as ever:

“A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.”  — John Gall

Every ambitious project launches amid a thicket of fears and grand hopes. The worst thing you can do is try to design for all those assumed outcomes (let alone the edge cases). Start with a sturdy but simple system and build from there as you learn. As Jorge writes, that’s the appeal (and necessity) of the MVP:

When the product is real and can be tested, it can (and should) evolve towards something more complex. But baking complexity into the first release is a costly mistake. (Note I didn’t say it “can be”. It’s guaranteed.)

Jorge Arango | How to Design New Information Environments That Don’t Suck
DDM

About.com CEO Neil Vogel at Business Insider

∞ Apr 26, 2017
[Image: Neil Vogel, CEO of About.com]

“I’m not going to be the guy who ruined About.com,” About.com CEO Neil Vogel told Business Insider. “It’s already ruined, so this is all upside here.”

Vogel was referring to his plans to retire the About.com brand next week, on May 2. About.com is one of the most venerable Internet properties out there, over two decades old and still one of the top 100 sites by traffic. The content will live on, but across several different verticals, none of which will carry the About.com name. About.com is dead; long live About.com.

Shutting down that brand might have the ring of failure, but it turns out it’s a pretty remarkable turnaround story. I’ve been lucky enough to see that turnaround up close.

A few years ago, Google’s algorithm started treating the general-interest site as a content farm, and the site’s search ranking plummeted. At the same time, advertisers were backing out, preferring more targeted sites over About.com (WebMD, for example, instead of About.com’s Health section). Fortunes were not looking good.

In early 2016, Big Medium teamed up with About.com to create new vertical brands out of About.com content. We crafted the brands, designed the sites, and helped revamp the company’s design process. Over the past year, we designed three verticals and advised on the branding for a fourth. These verticals took About.com’s enormous library of how-to content, dusted it off, and wrapped it in premium, branded sites.

The first one was Verywell, a health vertical:

Health is our most valuable, most-trafficked, biggest vertical, so we came up with an idea. Our content is very much in the style of, like, WebMD or Everyday Health. But we just didn’t think those sites served a market need. We thought that we could make a beautiful, kinder, gentler health site. You go to some of these other sites with a headache, you think you have a brain tumor. You come to us with a headache, we’re going to make your headache feel better and explain why you had a headache and make it better. That was the thesis.

So we took our 100,000 pieces of health content on About.com, threw 50,000 in the garbage because they were old. We didn’t like them. The other 50 [thousand] were read by our writers. If it was medical information, it was read by a doctor. We had 30,000 pieces of content read by physicians, edited, cleaned up. Built a brand-new site from scratch, a new taxonomy for our content, put it on the site.

We did that. We built this beautiful new site from scratch, everything from scratch.

Together we created the new brand, cleaned up the information architecture, and importantly got rid of a ton of cheap advertising. With fewer ads per page and a new premium brand, traffic skyrocketed and revenue soared.

I think we had 8 million uniques a month when we started; I think we have 17 million uniques now to Verywell. So we’ve pretty much doubled in size in 12 months. We’re by far the fastest-growing thing in the health space. I think we’re No. 4 or 5 on comScore in health because our bet was right …

We knew that this would work. Then we launched something in the summer. Ran a very similar playbook on our personal-finance content called The Balance, which has pretty much doubled in traffic since we launched it this summer. We launched something called Lifewire in November, which is our evergreen-content tech site — how to fix my router, how to unbrick my iPhone. About a month ago we launched something called The Spruce, which is the third-biggest home site on the internet, only behind HGTV and the Hearst brands. We had such scale on About that we’re launching these new brands into the world that are new to the space with no legacy issues, look like startups, but all of a sudden, like, we’re top 10 in comScore because we’re coming with such scale. The market’s like, “What? Where did you guys come from?”

It was a treat to work with the whole crew at About.com. There’s a lot of experience under that roof, and it’s been amazing to help release so much pent-up potential.

Vogel says the About.com name will finally be retired next week, to be replaced with a new brand name.

Business Insider | About.com CEO Neil Vogel Interview
business

Dan Mall on Freelance.tv

∞ Apr 26, 2017
[Image: Dan Mall on Freelance.tv]

Over at freelance.tv, my pal and collaborator Dan Mall shares the goods on what it takes to be a world-class indie designer. Dan is not only one of the most talented designers I know, he’s also one of the most generous, openly sharing his hard-earned wisdom of making it work in this industry.

Here’s Dan on the early days of starting his design collaborative SuperFriendly:

I figured out what I was really good at. I figured out what I was good at that I didn’t want to do. I figured out what I was bad at. I figured out what I was bad at that actually clients were asking for, so I should get better at that stuff.…

The ability to be a generalist is really important for a freelancer. When you’re working by yourself, you’re the CEO but you’re also the janitor. You’ve gotta take care of the plants, too.…

There’s an interesting time in the life of a freelancer when you decide, “I want to team up with somebody, or collaborate with somebody, or hire somebody to do the jobs that I’m not particularly good at.”

I’m very honored to be one of those collaborators—and very happy that Dan is so freakin’ good at so many of the jobs that I’m not good at.

Freelance.tv | Dan Mall Interview
bots

The Humans Hiding Behind the Chatbots

∞ Apr 24, 2017

Ellen Huet, writing for Bloomberg, peeks in on the work life of the people who backstop the bots by reviewing answers and frequently stepping in to provide their own. They are the “humans pretending to be robots pretending to be humans.”

Huet talked to people who filled this role at two services that automate calendar scheduling, X.ai and Clara, and it doesn’t sound like the world’s most fulfilling work:

Calvin said he sometimes sat in front of a computer for 12 hours a day, clicking and highlighting phrases. “It was either really boring or incredibly frustrating,” he said. “It was a weird combination of the exact same thing over and over again and really frustrating single cases of a person demanding something we couldn’t provide.”

[…]

As another former X.ai trainer put it, he wasn’t worried about his job being replaced by a bot. It was so boring he was actually looking forward to not having to do it anymore.

I’m confident that putting people in the bot role is the right way to prototype bot services with very small trial audiences. It lets you hone your understanding of what people actually want and build a good set of training data as well as the voice and tone of the service. But it’s also clear that this kind of work—focusing relentlessly and mind-numbingly on the same narrow micro-interaction—is not meant for long-term job roles.

This is why people are trying to automate this stuff in the first place. The risk is that, during the transition, the tedium of modeling this automation will fall heavily and narrowly on a small group who wind up working for the bots, rather than the reverse. How might we avoid making this the future of work?

Bloomberg | The Humans Hiding Behind the Chatbots