
What We’re Reading

principles

Climbing Out Of Facebook's Reality Hole

∞ Apr 22, 2017

BuzzFeed’s Mat Honan takes a world-weary view of Facebook’s unsurprisingly boosterish presentation of new technologies at the company’s F8 show. In particular, he’s disappointed the company didn’t do more to acknowledge the potential for abuse in this new tech:

The problem with connecting everyone on the planet is that a lot of people are assholes. The issue with giving just anyone the ability to live broadcast to a billion people is that someone will use it to shoot up a school. You have to plan for these things. You have to build for the reality we live in, not the one we hope to create. …

Executive after executive took the F8 stage to show off how these effects will manifest themselves in the real world. Deborah Liu, who runs Facebook’s monetization efforts, encouraged the audience to “imagine all the possibilities” as she ran through demos of a café where people could leave Yelp-style ratings tacked up in the air and discoverable with a phone, or a birthday message she generated on top of an image of her daughter, while noting that with digital effects, “I can make her birthday even more meaningful.”

And yet the dark human history of forever makes it certain that people will also use these same tools to attack and abuse and harass and lie. They will leave bogus reviews of restaurants to which they’ve never been, attacking pizzerias for pedophilia. If anyone can create a mask, some people will inevitably create ones that are hateful. …

But Facebook made no nods to this during its keynote — and realistically maybe it’s naive to expect the company to do so. But it would be reassuring to know that Facebook is at least thinking about the world as it is, that it is planning for humans to be humans in all their brutish ways. A simple “we’re already considering ways people can and will abuse these tools and you can trust us to stay on top of that” would go a long way.

I like that. Simply acknowledging potential problems—and stating your resolve to solve them—is a way to make your values clear and to start to bake them into the product and organization, too.

BuzzFeed | Climbing Out Of Facebook's Reality Hole
bots

Facebook’s Perfect, Impossible Chatbot

∞ Apr 22, 2017

At MIT Technology Review, Tom Simonite writes about Facebook’s efforts to make its automated assistant M answer pretty much any request that comes its way, no matter how obscure. And for a very small group of beta testers, the bot actually works, delivering results so good you’d swear you were talking to a human being. Because you are.

M is so smart because it cheats. It works like Siri in that when you tap out a message to M, algorithms try to figure out what you want. When they can’t, though, M doesn’t fall back on searching the Web or saying “I’m sorry, I don’t understand the question.” Instead, a human being invisibly takes over, responding to your request as if the algorithms were still at the helm. (Facebook declined to say how many of those workers it has, or to make M available to try.)

That design is too expensive to scale to the 1.2 billion people who use Facebook Messenger, so Facebook offered M to a few thousand users in 2015 as a kind of semi-public R&D project. Entwining human workers and algorithms was intended to reveal how people would react to an omniscient virtual assistant, and to provide data that would let the algorithms learn to take over the work of their human “trainers.”

This is the way I’ve been prototyping chatbots, too: start with simple human-to-human interactions.

I’m a big fan of this kind of prototype that puts people where the pipes will eventually go. In a way, Uber is a similar prototype for self-driving cars: until the robots get the go-ahead to drive on their own, we’ll put a human in the driver’s seat and automate the rest of the experience (calling a car, giving directions, paying the tab).

When you’re trying out new interactions for untested or emerging technologies, the best MVP is often no tech at all. Powering a bot with people instead of artificial intelligence gets you early info about what people want, how they respond, and the kind of language to use. It proves out demand for the service, hints at the shape it should take, and offers training data to give to the bots down the road.
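
A rough sketch of how that kind of Wizard-of-Oz routing might look, with every name and threshold invented for illustration (this is not Facebook’s actual design): the bot answers when its classifier is confident and silently hands off to a person when it isn’t.

```typescript
// Sketch of a human-behind-the-curtain fallback router, in the spirit of M.
// Every name here (classifyIntent, askHumanOperator, the threshold) is
// invented for illustration; this is not Facebook's design.

interface IntentResult {
  intent: string;
  confidence: number; // 0..1, how sure the model is
}

interface Reply {
  text: string;
  source: "bot" | "human";
}

// Toy classifier: recognizes a couple of intents, shrugs at everything else.
async function classifyIntent(message: string): Promise<IntentResult> {
  if (/weather/i.test(message)) return { intent: "weather", confidence: 0.95 };
  if (/remind me/i.test(message)) return { intent: "reminder", confidence: 0.9 };
  return { intent: "unknown", confidence: 0.2 };
}

// Stand-in for the human operators answering on the bot's behalf.
async function askHumanOperator(message: string): Promise<string> {
  return `A person read "${message}" and typed a reply.`;
}

const CONFIDENCE_THRESHOLD = 0.8; // tune against observed accuracy

async function handleMessage(message: string): Promise<Reply> {
  const { intent, confidence } = await classifyIntent(message);
  if (confidence >= CONFIDENCE_THRESHOLD) {
    // The algorithms think they understand: answer automatically.
    return { text: `Bot handled intent: ${intent}`, source: "bot" };
  }
  // Below the threshold, a person takes over invisibly. Logging the pair
  // (message, human reply) becomes training data for future models.
  return { text: await askHumanOperator(message), source: "human" };
}

handleMessage("Book me a table for four somewhere quiet").then(console.log);
```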

Eventually the AI steps in. At Facebook, they’re still trying to use all that data to train the bots well enough so that they can take over. Simonite shares some of the techniques the M team is using, with mixed results. Even though machine-learning breakthroughs are coming fast and furious, the holy grail of broad and instant natural-language understanding is still tantalizingly out of reach. “Sometimes we say this is three years, or five years,” M’s leader Laurent Landowski told Simonite. “But maybe it’s 10 years or more.”

MIT Technology Review | Facebook’s Perfect, Impossible Chatbot
publishing

Instant Recall

∞ Apr 22, 2017

Writing for The Verge, Casey Newton reports that publishers are abandoning Facebook’s Instant Articles format:

Two years after it launched, a platform that aspired to build a more stable path forward for journalism appears to be declining in relevance. At the same time that Instant Articles were being designed, Facebook was beginning work on the projects that would ultimately undermine it. Starting in 2015, the company’s algorithms began favoring video over other content types, diminishing the reach of Instant Articles in the feed. The following year, Facebook’s News Feed deprioritized article links in favor of posts from friends and family. The arrival this month of ephemeral stories on top of the News Feed further de-emphasized the links on which many publishers have come to depend.

In discussions with Facebook executives, former employees, publishers, and industry observers, a portrait emerges of a product that never lived up to the expectations of the social media giant, or media companies. After scrambling to rebuild their workflows around Instant Articles, large publishers were left with a system that failed to grow audiences or revenues.

Building a business on top of someone else’s platform offers little control or visibility—and ties your fortunes to their priorities, not your own. Newton writes that many publishers are instead throwing in with Google’s AMP platform, which feels like a frying-pan-to-fire maneuver.

The Verge | Instant Recall
sentient design

Our Machines Now Have Knowledge We’ll Never Understand

∞ Apr 22, 2017

David Weinberger considers what it means that machines now construct their own models for understanding data, quite divorced from our own (more simplistic) models. “The nature of computer-based justification is not at all like human justification. It is alien,” Weinberger writes. “But ‘alien’ doesn’t mean ‘wrong.’ When it comes to understanding how things are, the machines may be closer to the truth than we humans ever could be.”

The complexity of this alien logic often makes it completely opaque to humans—even those who program it. If we can’t understand the basis of machine-delivered “truths,” Weinberger suggests, they become categorically different from what we’ve always considered to be “knowledge”:

Clearly our computers have surpassed us in their power to discriminate, find patterns, and draw conclusions. That’s one reason we use them. Rather than reducing phenomena to fit a relatively simple model, we can now let our computers make models as big as they need to. But this also seems to mean that what we know depends upon the output of machines the functioning of which we cannot follow, explain, or understand. … If knowing has always entailed being able to explain and justify our true beliefs — Plato’s notion, which has persisted for over two thousand years — what are we to make of a new type of knowledge, in which that task of justification is not just difficult or daunting but impossible? …

One reaction to this could be to back off from relying upon computer models that are unintelligible to us so that knowledge continues to work the way that it has since Plato. This would mean foreswearing some types of knowledge. We foreswear some types of knowledge already: The courts forbid some evidence because allowing it would give police an incentive for gathering it illegally. Likewise, most research institutions require proposed projects to go through an institutional review board to forestall otherwise worthy programs that might harm the wellbeing of their test subjects.

This is super-intriguing: what are the circumstances where the stakes are so high that we simply can’t allow ourselves to trust the conclusions of our machines, no matter how confident we may be in the algorithm? Weinberger points out that credit agencies are already barred from tying certain predictive models to credit scores. If the machines decide that certain races, religions, or ethnicities are prone to lower or higher credit scores, for example, credit agencies are legally forbidden from acting on that info.

This is a dangerous area because the machines’ conclusions are only as valuable as the training data we feed them. And that training data depends on the perspective (and bias) of the folks who collect it:

For example, a system that was trained to evaluate the risks posed by individuals up for bail let hardened white criminals out while keeping in jail African Americans with less of a criminal record. The system was learning from the biases of the humans whose decisions were part of the data. The system the CIA uses to identify targets for drone strikes initially suggested a well-known Al Jazeera journalist because the system was trained on a tiny set of known terrorists. Human oversight is obviously still required, especially when we’re talking about drone strikes instead of categorizing cucumbers.

We’re still in the early days of what this oversight and machine-human partnership might look like, but we’re going to have to learn fast. Machine learning has suddenly become inexpensive and accessible to a whole range of organizations and uses, and we see it everywhere. This revolution has revealed the complexity of everyday systems at the same time that it’s let us cut right through them with the capacity and speed of modern computing—even if we don’t understand how we got there.

Where once we saw simple laws operating on relatively predictable data, we are now becoming acutely aware of the overwhelming complexity of even the simplest of situations. Where once the regularity of the movement of the heavenly bodies was our paradigm, and life’s constant unpredictable events were anomalies — mere “accidents,” a fine Aristotelian concept that differentiates them from a thing’s “essential” properties — now the contingency of all that happens is becoming our paradigmatic example.

This is bringing us to locate knowledge outside of our heads. We can only know what we know because we are deeply in league with alien tools of our own devising. Our mental stuff is not enough.

Backchannel | Our Machines Now Have Knowledge We’ll Never Understand
design

Plainness and Sweetness

∞ Apr 15, 2017

Frank Chimero mulls the beauty of the plain and the normal in design. I like the implicit humility Frank suggests in designs that root their beauty in the quiet satisfaction of their function—not “an overly accentuated, hyper-specific identity”:

I am for a design that’s like vanilla ice cream: simple and sweet, plain without being austere. It should be a base for more indulgent experiences on the occasions they are needed, like adding chocolate chips and cookie dough. Yet these special occasions are rare. A good vanilla ice cream is usually enough. I don’t wish to be dogmatic—every approach has its place, but sometimes plainness needs defending in a world starved for attention and wildly focused on individuality. Here is a reminder: the surest way forward is usually a plain approach done with close attention to detail. You can refine the normal into the sophisticated by pursuing clarity and consistency. Attentiveness turns the normal artful.

Frank Chimero | Plainness and Sweetness
sentient design

Algorithm-Driven Design: How AI is Changing Design

∞ Apr 15, 2017

Designer Yury Vetrov collected this wide-ranging set of algorithm-driven design projects. Spin through for a glimpse at the emerging role of machine learning in everyday digital design.

Examples include automated web designs from The Grid CMS and Wix, as well as the machine-generated page layouts at Vox and Flipboard. There are also bot-built logos, type pairings, image generators, content-aware photo croppers, and more.

Lots to see and learn here about how designers will collaborate with our robot overlords.

Algorithm-Driven Design: How AI is Changing Design
advertising

The Crisis of Attention Theft

∞ Apr 15, 2017

Tim Wu writing for Wired:

Consider, for example, the “innovation” known as Gas Station TV—that is, the televisions embedded in gasoline pumps that blast advertising and other pseudo-programming at the captive pumper. There is no escape: as the CEO of Gas Station TV puts it, “We like to say you’re tied to that screen with an 8-foot rubber hose for about five minutes.” It is an invention that singlehandedly may have created a new case for the electric car.

Attention theft happens anywhere you find your time and attention taken without consent. The most egregious examples are found where, like at the gas station, we are captive audiences. In that genre are things like the new, targeted advertising screens found in hospital waiting rooms (broadcasting things like “The Newborn Channel” for expecting parents); the airlines that play full-volume advertising from a screen right in front of your face; the advertising-screens in office elevators; or that universally unloved invention known as “Taxi TV.”

What to do about ad screens that are imposed on us in these captive scenarios? Wu suggests towns and cities have managed this problem before:

In the 1940s cities banned noisy advertising trucks bearing loudspeakers; the case against advertising screens and sound-trucks is basically the same. It is a small thing cities and towns can do to make our age of bombardment a bit more bearable.

Wired | The Crisis of Attention Theft—Ads That Steal Your Time for Nothing in Return
ai

The Dark Secret at the Heart of AI

∞ Apr 14, 2017

At MIT Technology Review, Will Knight writes about the unknowable logic of our most sophisticated algorithms. We are creating machines that we don’t fully understand. Deep Patient is one example, a system that analyzes hundreds of thousands of medical records looking for patterns:

Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, [project leader Joel] Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”

As deep learning begins to drive decisions in some of the most intimate and impactful aspects of life and culture—policing, medicine, banking, military defense, even how our cars drive—what do we need to know about how these systems think?

As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable?

This is especially important when the machines come up with bad answers. How do we understand where they went wrong? How do we help them learn from the mistake? Knight offers a few examples of how researchers are experimenting with this, and many come down to new ways of visualizing and presenting the logic flow.

This resonates strongly with a key belief I have: the design of data-driven interfaces has to get just as much attention as the underlying data science itself—perhaps even more. If we’re going to build systems smart enough to know when they’re not smart enough, we need to be especially clever about how those systems signal the confidence of their answers and how they arrived at them. That’s the stuff of truly useful human-machine partnerships, and it’s a design problem I find myself working on more and more these days.
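
One way to sketch that idea: let the interface change its posture with the model’s confidence instead of presenting every answer with the same certainty. This is purely illustrative; the thresholds and copy are invented.

```typescript
// Purely illustrative: thresholds and wording are invented. The idea is
// that the interface hedges in proportion to the model's confidence.

type Posture = "assert" | "suggest" | "defer";

function postureFor(confidence: number): Posture {
  if (confidence >= 0.9) return "assert";  // state the answer plainly
  if (confidence >= 0.6) return "suggest"; // hedge and show the confidence
  return "defer";                          // admit uncertainty, invite review
}

function renderAnswer(answer: string, confidence: number): string {
  switch (postureFor(confidence)) {
    case "assert":
      return answer;
    case "suggest":
      return `This might be it: ${answer} (${Math.round(confidence * 100)}% confident)`;
    case "defer":
      return `I'm not sure. My best guess is ${answer}. Want to double-check?`;
  }
}

console.log(renderAnswer("elevated risk", 0.55));
```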

One hitch: we humans aren’t always so great at explaining our thinking or biases, either. What makes us think that we can train machines to do it any better?

Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”

MIT Technology Review | The Dark Secret at the Heart of AI
design system

Managing Technology-Agnostic Design Systems

∞ Apr 14, 2017

Brad Frost tackles the challenge of building a design system that works across different tech frameworks. It’s a nasty and common conundrum.

If your design system or pattern library puts down roots too deeply into any one JavaScript framework—Angular or React, for example—or a specific templating engine, then you’ve created a barrier for teams using a different tech stack. (And you’ve likewise created a barrier for future change, when product teams want to shift to a new and improved tech stack.)

Brad outlines the approach that we’ve followed in the design system work we’ve done together: the core design system should focus on the rendered interface—the HTML, CSS, and presentational JavaScript. It should otherwise be “tech-agnostic” on implementation. Easier said than done, of course:

Of course there’s a lot of work that goes into getting that HTML, CSS, and presentational JavaScript onto a page. That’s why teams reach for different backend languages, templating languages, and JavaScript frameworks. Which is where things get complicated. The evolution of JavaScript has especially made things thorny, since it’s gone from simple UI manipulation (a la jQuery) to full-fledged application frameworks (a la React, Angular, Ember, et al). It can be tough to find the seams of where the canonical design system ends and where the tech-specific version begins.

Brad suggests that development teams then build implementation-specific versions of the components that match the recommended rendered output. So you might have a React layer, an Angular layer, and so on. But those implementation details are all carefully segregated from the recommended markup.
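
As a toy illustration of that layering: suppose the canonical design system specifies the markup for a button. A framework-specific layer is then just a thin wrapper that emits exactly that markup. Everything here, class names and props alike, is invented for the sketch.

```typescript
// The design system specifies canonical markup, say:
//   <button class="ds-button ds-button--primary">Save</button>
// A thin React wrapper does nothing but emit that markup. An Angular or
// Vue layer would do the same against the same spec. All names invented.

import * as React from "react";

interface DSButtonProps {
  variant?: "primary" | "secondary";
  onClick?: () => void;
  children: React.ReactNode;
}

export function DSButton({ variant = "primary", onClick, children }: DSButtonProps) {
  // The wrapper's only job: render the canonical HTML and class names.
  return React.createElement(
    "button",
    { className: `ds-button ds-button--${variant}`, onClick },
    children
  );
}
```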

The design system itself doesn’t care how you build it as long as the end result comes out the right way. Of course, developers do care how it’s built, and one promise of design systems is to deliver efficiencies there. So organizations should make it a goal for teams to share those platform-specific implementations, Brad writes:

This architecture provides a clear path for getting the tech-agnostic, canonical design system into real working software that uses specific technologies. Because it doesn’t bet the farm on any one technology, the system is able to adapt to inevitable changes to tools, technologies, and trends (hence the placeholder for the “new hotness”). Moreover, product teams that share a tech stack can share efforts in maintaining the tech-specific version of the design system.

Brad Frost | Managing Technology-Agnostic Design Systems
facebook

The More You Use Facebook, the Worse You Feel

∞ Apr 11, 2017

In Harvard Business Review, Holly B. Shakya and Nicholas A. Christakis report the results of their rigorous study of Facebook use:

Overall, our results showed that, while real-world social networks were positively associated with overall well-being, the use of Facebook was negatively associated with overall well-being. These results were particularly strong for mental health; most measures of Facebook use in one year predicted a decrease in mental health in a later year. We found consistently that both liking others’ content and clicking links significantly predicted a subsequent reduction in self-reported physical health, mental health, and life satisfaction.

Our models included measures of real-world networks and adjusted for baseline Facebook use. When we accounted for a person’s level of initial well-being, initial real-world networks, and initial level of Facebook use, increased use of Facebook was still associated with a likelihood of diminished future well-being. This provides some evidence that the association between Facebook use and compromised well-being is a dynamic process.

Be careful out there.

Harvard Business Review | The More You Use Facebook, the Worse You Feel
stats

Web Performance Optimization Stats

∞ Apr 11, 2017

WPO Stats is a super-useful collection of stats from Tammy Everts and Tim Kadlec to demonstrate the business value of faster websites. If you need support for making the business case for your performance project, here’s your go-to library.

A sampling:

BBC has seen that they lose an additional 10% of users for every additional second it takes for their site to load. [source]

…

AliExpress reduced load time by 36% and saw a 10.5% increase in orders and a 27% increase in conversion for new customers. [source]

…

For every 100ms decrease in homepage load speed, Mobify’s customer base saw a 1.11% lift in session based conversion, amounting to an average annual revenue increase of $376,789. [source]

WPO Stats
algorithms

Google Launches New Effort To Flag Upsetting or Offensive Content in Search

∞ Apr 10, 2017

I missed this a few weeks back. At Search Engine Land, Danny Sullivan reported that Google is empowering its 10,000 human reviewers to start flagging offensive content, an effort to get a handle on hate speech in search results. The gambit: with a little human help from these “quality raters,” the algorithm can learn to identify what I call hostile information zones.

Sullivan writes:

The results that quality raters flag is used as “training data” for Google’s human coders who write search algorithms, as well as for its machine learning systems. Basically, content of this nature is used to help Google figure out how to automatically identify upsetting or offensive content in general.…

Google told Search Engine Land that it has already been testing these new guidelines with a subset of its quality raters and used that data as part of a ranking change back in December. That was aimed at reducing offensive content that was appearing for searches such as “did the Holocaust happen.”

The results for that particular search have certainly improved. In part, the ranking change helped. In part, all the new content that appeared in response to outrage over those search results had an impact.

“We will see how some of this works out. I’ll be honest. We’re learning as we go,” [Google engineer Paul Haahr] said.

Search Engine Land | Google Launches New Effort to Flag Upsetting or Offensive Content in Search
google

The Most Successful Interface Design of All Time

∞ Apr 10, 2017

Erika Hall at Medium:

In ~20 years, Google Search lost some cruft and gained speech recognition, but the fundamental design of the entry page is virtually identical.

“Fast, easy, and useful beats all,” Erika writes.

Amen. What’s the job of the page? How can you focus exclusively on that and cut out the extraneous?

Erika Hall | The Most Successful Interface Design of All Time
ai

Federated Learning: Collaborative Machine Learning without Centralized Training Data

∞ Apr 9, 2017

Google researchers Brendan McMahan and Daniel Ramage report that Google has begun offloading some of its machine learning to mobile devices to keep private data… private. Instead of pulling all your personal info into a central system to train its algorithms, Google has developed the chops to let your phone do that data analysis. They call it federated learning:

Federated Learning allows for smarter models, lower latency, and less power consumption, all while ensuring privacy. And this approach has another immediate benefit: in addition to providing an update to the shared model, the improved model on your phone can also be used immediately, powering experiences personalized by the way you use your phone.

We’re currently testing Federated Learning in Gboard on Android, the Google Keyboard. When Gboard shows a suggested query, your phone locally stores information about the current context and whether you clicked the suggestion. Federated Learning processes that history on-device to suggest improvements to the next iteration of Gboard’s query suggestion model.

Old way: beam everything you do on your Google keyboard (!!) back to the mothership. New way: keep it all local, and beam back only an encrypted summary of relevant learnings. “Your device downloads the current model, improves it by learning from data on your phone, and then summarizes the changes as a small focused update.” To do this, Google has smartphones running a miniature version of TensorFlow, the open-source software library for machine learning.
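
Here’s a minimal sketch of that loop under toy assumptions: the model is a plain weight vector, the training step is a single pass of gradient descent, and the real system’s encryption, device sampling, and scale are all omitted.

```typescript
// Toy sketch of the federated loop described above. The model is just a
// weight vector and "training" is one pass of gradient descent on squared
// error. Names and numbers are invented for illustration.

type Weights = number[];
type Example = { x: number[]; y: number };

// On-device: start from the shared model, learn from local data, and
// return only the small, focused delta. The raw data never leaves.
function localUpdate(shared: Weights, localData: Example[]): Weights {
  const w = [...shared];
  const lr = 0.01; // learning rate
  for (const { x, y } of localData) {
    const pred = x.reduce((sum, xi, i) => sum + xi * w[i], 0);
    const err = pred - y;
    x.forEach((xi, i) => { w[i] -= lr * err * xi; });
  }
  return w.map((wi, i) => wi - shared[i]); // the update, not the data
}

// On the server: fold the averaged deltas back into the shared model.
function federatedAverage(shared: Weights, deltas: Weights[]): Weights {
  return shared.map(
    (wi, i) => wi + deltas.reduce((sum, d) => sum + d[i], 0) / deltas.length
  );
}

// One round: two "phones" train locally; the server averages their updates.
let model: Weights = [0, 0];
const deltas = [
  localUpdate(model, [{ x: [1, 0], y: 1 }]),
  localUpdate(model, [{ x: [0, 1], y: 2 }]),
];
model = federatedAverage(model, deltas);
console.log(model);
```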

One knock against predictive interfaces is how much you have to give up about yourself to get the benefits. If this new model works as promised, new systems may be just as helpful, without the central service absorbing your nitty-gritty details to learn how.

Google Research | Federated Learning: Collaborative Machine Learning without Centralized Training Data
design system

The Full Stack Design System

∞ Apr 9, 2017

Emmet Connolly shared some wonderful thoughts about pattern libraries—and how they’re only one part of a full design system:

But there are a few problems with pattern libraries. Yes, they allow you to keep all of the smallest elements consistent. But they don’t have an opinion about how they should be put together. They don’t know anything about your product or the concepts behind it.

To return to our Lego analogy, simply having a limited pattern library of bricks to choose from doesn’t preclude me from building some really crazy shit.

Now think about those branded Lego kits you can buy. Each piece is much more opinionated. It knows what it’s going to get used for. There are still generic pieces involved, but when you put them together in a certain way they form something specific, like the leg of an AT-AT Walker. This is a design system.

I love it. Design systems are more than a kit of parts. The best design systems have a strong point of view—a gravitational force that coerces disparate components into patterns and ultimately into a coherent whole. The design system brings order to the pattern library and what would otherwise appear to be a chaotic jumble of components.

Another metaphor: if components are words, then patterns are sentences, and the design system is the full story.

If this nested arrangement echoes Brad Frost’s Atomic Design methodology, that’s by design. Atomic Design champions design elements built from a common set of lesser design elements. In Atomic Design, UI “atoms” assemble “molecules” which assemble “organisms” which assemble templates which assemble pages.

[Image: The matryoshka-doll elements of Brad Frost’s Atomic Design methodology.]
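
To make the nesting concrete, here’s a tiny sketch with all component names invented for illustration: three atoms compose a search-form molecule, which a larger organism or template could then reuse.

```typescript
// Tiny, invented illustration of the nesting: three "atoms" compose a
// search-form "molecule." Not from Brad's book; just the shape of the idea.

import * as React from "react";

const Label = ({ htmlFor, text }: { htmlFor: string; text: string }) =>
  React.createElement("label", { htmlFor }, text);

const Input = ({ id }: { id: string }) =>
  React.createElement("input", { id, type: "search" });

const Button = ({ text }: { text: string }) =>
  React.createElement("button", { type: "submit" }, text);

// Molecule: atoms assembled into a reusable search form.
export const SearchForm = () =>
  React.createElement(
    "form",
    { role: "search" },
    React.createElement(Label, { htmlFor: "q", text: "Search" }),
    React.createElement(Input, { id: "q" }),
    React.createElement(Button, { text: "Go" })
  );
```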

But there’s a common misunderstanding about Atomic Design, one that Connolly in turn suggests is a limitation:

Atomic Design will tell you to take some of your basic elements (label, input, button), stick them together, and call it a molecule. Then you can reuse that molecule again and again. Further, you can stick some molecules together to form a reusable organism.

The problem with every real-world example of a system like this that I’ve encountered is that they remain willfully unaware of the product being built.

Atomic Design does indeed promote reuse, assembling larger parts from smaller ones. However, many mistake this philosophy for a linear process, as though Atomic Design demands that all design start by building the smallest pieces (e.g., “start with buttons and labels”) before proceeding to page- and site-level design. That approach would indeed be blind to the end-result project, placing design tactics ahead of design strategy. But it’s exactly the opposite of how Brad himself approaches projects.

Right from the start, when Brad was first developing his tools and methodologies in our designs of TechCrunch and Entertainment Weekly, our process constantly zoomed back and forth between page level and atomic level. It’s never a linear path from small to large; it’s a constant roundtrip between the two scales.

As Connolly writes, “Complex systems can be designed, but to do so you must first sketch the outline. Only then can you start filling in the detail.”

Well said, and I totally agree. Indeed, our Atomic Design projects always begin with the big-picture questions. What are the business goals for the project? What are the user needs? What’s the brand promise? When we get to individual pages, it’s about the user mindset when they arrive, and the jobs the page has to do for both user and company.

From there, we sketch the whole page, identifying the broad design patterns that the page needs to do its job. We start to imagine the components necessary to bring those patterns to life.

Only then do we start to work at the atomic level, building out those component atoms and molecules to construct the pattern organisms, and ultimately the page itself. As more high-level pages and components are designed, we zoom back down to revisit the atoms and molecules, adjusting them to be more flexible and to support a wider range of organisms and pages. The atoms and molecules might compose the design, but it’s the high-level design that creates the order, the overall system.

In the end, a pattern library emerges. Here’s the important bit: the design system is implicit in the process that led to the library’s construction, and it’s implicit in the design’s use of components. For a small team on a contained project, that implicit knowledge may well be enough, commonly shared in the heads of the designers who built it.

But implicit knowledge won’t do when you’re working at scale across many projects and many teams. The design system has to be documented. That’s where all the other artifacts of a fully articulated design system come in: design principles, style guide, voice and tone, UX guidelines, code repository, and so on.

I agree very much with Connolly that those pieces are required for the “full-stack design system.” My only caveat is to add that an Atomic Design process can get you there, too.

Atomic Design surfaces all of those aspects during the course of the design process. Responsible designers document them.


For more, see The Most Exciting Design Systems Are Boring.

Is your organization wrestling with inconsistent interfaces and duplicative design work? Big Medium helps big companies scale great design through design systems. Get in touch for a workshop, executive session, or design engagement.


Inside Intercom | The Full Stack Design System
iot

Sleep Is the New Status Symbol

∞ Apr 9, 2017

At The New York Times, Penelope Green reports that sleep is big business—and the tech industry is rushing in to tweak our natural rhythms, with mixed results:

Mr. Mercier sent me his Dreem headset, a weighty crown of rubber and wire that he warned would be a tad uncomfortable. The finished product, about $400, he said, will be much lighter and slimmer. But it wasn’t the heft of the thing that had me pulling it off each night. It skeeved me out that it was reading — and interfering with — my brain waves, a process I would rather not outsource.

I was just as wary of the Re-Timer goggles, $299, which make for a goofy/spooky selfie in a darkened room. My eye sockets glowed a deep fluorescent green, and terrified the cat.

The science and research confirm that there’s an epidemic of sleeplessness, which is costly in both health and productivity. Are tech gadgets the answer when tech gadgets are likely a big part of the problem? Our screens keep us awake; always-on information demands contribute to anxiety and stress; and social FOMO is constant.

As technologists, we often suggest that more technology is the solution to technology’s problems. In the case of sleep, perhaps a little less technology is what’s needed. Green quotes “sleep ambassador” Nancy Rothstein:

“Your Fitbit and your Apple Watch are not going to do it for you. We’ve lost the simplicity of sleep. All this writing, all these websites, all this stuff. I’m thinking, Just sleep. I want to say: ‘Shh. Make it dark, quiet and cool. Take a bath.’”

The New York Times | Sleep Is the New Status Symbol
mobile

YouTube To Discontinue Video Annotations Because They Never Worked on Mobile

∞ Apr 9, 2017

At The Verge, Nick Statt reports that YouTube is dropping its video annotations:

“Annotations Editor launched in 2008, before the world went mobile,” writes YouTube product manager Muli Salem in a blog post. “With 60 percent of YouTube’s watchtime now on mobile, why go through the work of creating annotations that won’t even reach the majority of your audience?”

If it doesn’t work on mobile, it doesn’t work, period.

See also: Your Traffic Went Mobile; Why Hasn’t Your Design Process?

The Verge | YouTube To Discontinue Video Annotations Because They Never Worked on Mobile
ai

Self-Driving Cars Are Coming, but Self-Driving Tractors Are Already Here

∞ Apr 8, 2017

Kaleigh Rogers for Motherboard:

Self-driving tractors are becoming more common. A John Deere spokesperson told me the company currently has about 200,000 self-driving tractors on farms around the world, from the US to Germany. And they’re just one example of a major investment that the agriculture sector is making in artificial intelligence and the Internet of Things.

According to John Deere, between 60 and 70 percent of the crop acreage in North America today is farmed using GPS-driven tractors. (Source)

Farmers have been inhabiting the future for a long time—self-driving tractors have been on the go for 15 years. Industrial farms are big business; they feature wide-open spaces; and they operate on private property. All of this makes farms ideal test beds for tech that includes autonomous vehicles, drones, artificial intelligence, and smart objects.

Motherboard | Self-Driving Cars Are Coming, but Self-Driving Tractors Are Already Here