With Brits Used to Surveillance, More Companies Try Tracking Faces
∞ Dec 4, 2019
The Wall Street Journal reports that companies are using the UK’s omnipresent security cameras as cultural permission to bring facial-recognition tech to semi-public spaces, tracking criminal history but also ethnicity and other personal traits. “Retailers, property firms and casinos are all taking advantage of Britain’s general comfort with surveillance to deploy their own cameras paired with live facial-recognition technology,” writes Parmy Olson for the Journal ($). “Companies are also now using watch lists compiled by vendors that can help recognize flagged people who set foot on company property.” For example:
Some outlets of Budgens, a chain of independently owned convenience stores, have been using facial-recognition technology provided by Facewatch Ltd. for more than a year. Facewatch charges retailers for the use of a computer and software that can track the demographics of people entering a store, including their ethnicity, and screen for a watch list of suspected thieves through any modern CCTV camera. The system works by sending an alert to a staff member’s laptop or mobile device after detecting a face on the watch list. Retailers then decide how to proceed.
Why this matters
Assumptions about appropriate (or even inevitable) uses of tech become normalized quickly. As constant surveillance becomes the everyday, it’s all too easy to become resigned or indifferent as that surveillance deepens. Once the cultural foundation for a new technology sets, it’s difficult to change the associated expectations and assumptions—or see the status quo as anything other than inevitable, “just the way things work.” We see it in the decades-long expectation that online content is free and ad supported. We see it in the assumption that giving up personal data is just table stakes for using the internet. And now, with surveillance cameras—at least in the UK—we may be settling into a new expectation that simply moving through the world means that we are seen, tracked, monitored in a very granular, personal way.
The Journal suggests that the UK’s “comfort” with surveillance cameras makes it ripe for this. A 2013 survey found that Britain had the highest density of surveillance technology outside of China. Since then, the number of surveillance cameras in the UK has climbed from six million to about 10 million—one camera for every seven people.
This anti-theft surveillance affects more than just the guilty. Facial recognition is still pretty iffy in real-world conditions, and the false positives these systems generate could lead to harassment for no good reason except that you walked into the store.
James Lacey, a staff member at one Budgens store in Aylesbury, southern England, said the system can ping his phone between one and 10 times a day. People have been known to steal large quantities of meat from the store’s refrigeration aisle when staff members are in the stock room, he said. The new system has helped, he said, though about a quarter of alerts are false. A spokesman for Facewatch said a maximum of 15% of alerts are false positives, based on its own analysis.
(Related: an ACLU study in 2018 found that Amazon’s facial-recognition service incorrectly matched the faces of 28 members of Congress to criminal mugshots.)
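To make that concrete, here’s a rough back-of-the-envelope calculation. The alert volume and error rates come from the figures above; everything else is strictly illustrative:

```typescript
// Back-of-the-envelope: how many innocent shoppers get flagged?
// The figures come from the story above; anything else is an assumption.

const alertsPerDay = 10;        // upper end of the "one and 10 times a day" reported at one store
const falseAlertShare = 0.25;   // staff estimate: "about a quarter of alerts are false"
const vendorFalseShare = 0.15;  // Facewatch's own claim: "a maximum of 15%"

const falseAlertsPerDay = alertsPerDay * falseAlertShare;        // 2.5 innocent people per day
const vendorFalseAlertsPerDay = alertsPerDay * vendorFalseShare; // 1.5 even by the vendor's math

console.log(
  `Per store, per day: ${falseAlertsPerDay} false alerts (staff estimate), ` +
  `${vendorFalseAlertsPerDay} (vendor estimate).`
);
// Multiply across stores and days, and "iffy in real-world conditions"
// adds up to a lot of people treated as suspects for walking in the door.
```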
Automated identification has implications beyond crime prevention. What’s OK for these corporate systems to track in the first place? Gender? Race and ethnicity? Income? Browser history? Social relationships? Voting record? Sexual preference? The folks at Facewatch promise vaguely that tracking ethnicity “can help retailers understand their marketplace.” This smacks of a shrugging sensibility that “we can do it, so why wouldn’t we?” And that’s the worst reason to use a technology.
Regulation is evolving, but remains vague and often unenforced. Europe’s well-intentioned privacy regulation, the GDPR, puts facial and other biometric data in a special category that requires a company to have a “substantial public interest” in capturing and storing it. That’s fuzzy enough that it arguably allows companies to use the technology to fight crime. Tracking ethnicity to “help retailers understand their marketplace” seems like less of a slam dunk. There is also a gray area around how long businesses can hold on to such footage, or use it for other business purposes.
We should adopt a position on this stuff both culturally and civically. If we don’t, the technology will decide for us. What will your company’s position be? And how about you? What’s your stance as a practitioner designing the technology that will set the behaviors and expectations of the next generation?
Facebook Gives Workers a Chatbot to Appease That Prying Uncle
∞ Dec 3, 2019
Facebook sent employees home for the holidays with robot talking points—in case the family had any questions about, y’know, the company’s cynical, grasping, overreaching, damaging, and irresponsible business model and use of technology. (Bots, it seems, are the only ones left who can deliver these lines with a straight face.) The New York Times reports:
If a relative asked how Facebook handled hate speech, for example, the chatbot — which is a simple piece of software that uses artificial intelligence to carry on a conversation — would instruct the employee to answer with these points:
- Facebook consults with experts on the matter.
- It has hired more moderators to police its content.
- It is working on A.I. to spot hate speech.
- Regulation is important for addressing the issue.
It would also suggest citing statistics from a Facebook report about how the company enforces its standards.
Inmates in Finland are training AI as part of prison labor
∞ Mar 29, 2019
Grooming data for the machines has a human cost. The Verge reports that startup Vainu is using prisoners in Finland to tag Finnish-language articles. The company uses Mechanical Turk to do this for other languages, but Finnish-speaking Turkers are hard to come by. So they get (and pay) prison inmates to do it.
There are legit concerns about exploiting prisoners for low-wage labor, but perhaps a broader concern is that this hints at a bleak future of work in the age of the algorithm. Indeed, this “future” is already here for a growing segment of humans—where Mechanical-Turk-level labor turns out to be, literally, prison labor.
This type of job tends to be “rote, menial, and repetitive,” says Sarah T. Roberts, a professor of information science at the University of California at Los Angeles who studies information workers. It does not require building a high level of skill, and if a university researcher tried to partner with prison laborers in the same way, “that would not pass an ethics review board for a study.” While it’s good that the prisoners are being paid a similar wage as on Mechanical Turk, Roberts points out that wages on Mechanical Turk are extremely low anyway. One recent research paper found that workers made a median wage of $2 an hour.
As we design the future of technology, we also design the future of work. What might we do to improve the quality and pay of labor required to make automated systems work?
The Google Pixel 3 Is A Very Good Phone. But Maybe Phones Have Gone Too Far.
∞ Nov 14, 2018
Mat Honan’s review of the Google Pixel 3 smartphone is a funny, harrowing, real-talk look at the devices that have come to govern our lives. “We are captives to our phones, they are having a deleterious effect on society, and no one is coming to help us,” he writes. “On the upside, this is a great phone.”
The BuzzFeed review is a world-weary acknowledgement of the downside of our personal technologies—their effect on our relationships, on our privacy, on our peace of mind. He does point out the new “digital wellbeing” features in Android, but offers other alternatives:
Another idea: You may instead choose to buy a device with a lousy screen and a lousy camera and a terrible processor. Maybe you would use this less. Or maybe you should walk to the ocean and throw your phone in and turn around and never look back**.
**Please do not do this. It would be very bad for the ocean.
Related recommendation for designers and product makers: check out Liza Kindred’s Mindful Technology for strategies and techniques for making products that focus attention instead of distract it.
Getting the iPad to Pro
∞ Nov 14, 2018
Craig Mod considers the new iPad Pro and finds that its sleek and speedy hardware highlights the software’s flaws. Craig is one of the biggest iPad fans and power users I know, and it’s a fascinating read to get the rundown of the weird snags that slow his flow.
I have a near endless bag of these nits to share. For the last year I’ve kept a text file of all the walls I’ve run into using an iPad Pro as a pro machine. Is this all too pedantic? Maybe. But it’s also kind of fun. When’s the last time we’ve been able to watch a company really figure out a new OS in public?
And I think that’s a great way to think about it. Nearly a decade into the iPad form factor, Apple is still trying to sort out the interaction language suited to these jumbo slices of glass. How will this evolve, and what will our future workflow look like? The details elude us, but Craig’s vision sounds good to me:
The ideal of computing software — an optimized and delightful bicycle for the mind — exists somewhere between the iOS and macOS of today. It needs to shed the complexities of macOS but allow for touch. Track pads, for example, feel downright nonsensical after editing photos on an iPad with the Pencil. But the interface also needs to move at the speed of the thoughts of the person using it. It needs to delight with swiftness and capability, not infuriate with plodding, niggling shortcomings. Keystrokes shouldn’t be lost between context switches. Data shouldn’t feel locked up in boxes in inaccessible corners.
Design Tools Are Running Out of Track
∞ Oct 14, 2018
About a year ago, Colm Tuite reviewed the state of UI design tools and found them wanting: Design Tools Are Running Out of Track. If anything, his critique feels even more relevant a year later. Our most popular design tools are fundamentally disconnected from the realities and constraints of working software:
- They generate static images in an era of voice, video, motion, and complex interactions. (“Our design tools should manipulate the actual product, not a picture of it.”)
- They have no awareness of the layout conventions of the web, so they don’t help designers work with the grain of CSS grid and flexbox.
- They’re tuned for infinite flexibility instead of usefully embracing the constraints of a design system or code base.
As I’ve worked with more and more companies struggling to design at scale, this last point has proven to be especially troublesome when maintaining or evolving existing software. Most design tools are not well tuned to support designer-developer collaboration within design systems (though some are beginning to innovate here). Tuite writes:
Your design tool is never going to tell you that you can’t do something. It’s never going to pull you up for using an off-brand color. It’s never going to prevent you from using a whitespace value which doesn’t belong in your spacing scale. It’s never going to warn you that 20% of the population literally cannot see that light gray text you’ve just designed.
And why not…? Because design tools don’t care.
Design tools are so waywardly enamoured with a vision for unlimited creativity that they have lost sight of what it means to design sensibly, to design inclusively, to design systematically.
Put simply, design tools allow us to do whatever the hell we want. To some extent, this level of boundless creativity is useful, especially in the ideation phases. As UI designers though, the majority of our workflow doesn’t call for much creativity. Rather, our workflow calls for reuse, repetition, familiarity and standardisation; needs that our tools do little to satisfy.
Developer culture and workflow have a strong bias toward consistency and reuse. That’s less true of design, and the tools are part of the problem. When there are no guardrails, it’s easy to wander off the road. Our tools don’t help us stay on the path within established design systems.
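For illustration, here’s a minimal sketch of what one of those guardrails could look like: a check against a hypothetical token set that flags off-palette colors and off-scale spacing. The token names and values are invented, and no current design tool exposes an API like this. It’s just the shape of the “no” that Tuite says our tools never give us.

```typescript
// Hypothetical design tokens -- the values are invented for illustration.
const tokens = {
  colors: ["#1a1a1a", "#ffffff", "#0055cc", "#cc3300"],
  spacing: [4, 8, 16, 24, 32, 48], // px scale
};

interface LayerStyle {
  name: string;
  fill?: string;
  padding?: number;
}

// The kind of "no" a design tool could give us: flag values that
// don't exist in the system's palette or spacing scale.
function lintLayer(layer: LayerStyle): string[] {
  const warnings: string[] = [];
  if (layer.fill && !tokens.colors.includes(layer.fill.toLowerCase())) {
    warnings.push(`${layer.name}: fill ${layer.fill} is not in the brand palette`);
  }
  if (layer.padding !== undefined && !tokens.spacing.includes(layer.padding)) {
    warnings.push(`${layer.name}: padding ${layer.padding}px is off the spacing scale`);
  }
  return warnings;
}

console.log(lintLayer({ name: "Card", fill: "#0156CD", padding: 18 }));
// -> two warnings: an off-brand color and an off-scale spacing value
```

Today’s design tools leave that kind of enforcement to convention and memory.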
This causes a disconnect between designers and developers because design comps drift from the realities of the established patterns in the code base. A Sketch library—or any collected drawings of software—can be a canonical UI reference only when the design is first conceived. Once the design gets into code, the product itself should be the reference, and fresh design should work on top of that foundation. It’s more important that our design libraries reflect what’s in the code than the reverse. Production code—and the UI it generates—has to be the single source of truth, or madness ensues.
That doesn’t mean that developers exclusively run the show or that we as designers have no agency in the design system. We can and should offer changes to the design and interaction of established patterns. But we also have to respect the norms that we’ve already helped to establish, and our tools should, too.
That’s the promise of design-token systems like InVision’s Design System Manager. Tokens help to establish baseline palettes and styles across code and design tools. The system gets embedded in whatever environment designers or developers prefer to work in. Designers and developers alike can edit those rules at the source—within the system itself.
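A simple sketch of the idea (the token names are made up, and this isn’t DSM’s actual format or API): define the values once, then emit them wherever each discipline works, such as CSS custom properties for the production code base.

```typescript
// A single source of truth for baseline styles -- token names are illustrative.
const designTokens = {
  "color-brand": "#0055cc",
  "color-text": "#1a1a1a",
  "space-sm": "8px",
  "space-md": "16px",
  "font-body": "16px/1.5 'Helvetica Neue', sans-serif",
};

// Emit the same tokens as CSS custom properties for the production code base.
// (A similar exporter could feed a Sketch or Figma library so design files stay in sync.)
function toCssVariables(tokens: Record<string, string>): string {
  const lines = Object.entries(tokens).map(([name, value]) => `  --${name}: ${value};`);
  return `:root {\n${lines.join("\n")}\n}`;
}

console.log(toCssVariables(designTokens));
```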
This approach is a step forward in helping designers and developers stay in sync by contributing to the same environment: the actual product and the pattern library that feeds it. We’ve seen a lot of success helping client teams to make this transition, but it requires adopting a (sometimes challenging) new perspective on how to work—and where design authority lies. Big rewards come with that change in worldview.
Is your organization wrestling with inconsistent interfaces and duplicative design work? Big Medium helps companies scale great design and improve collaboration through design systems. Get in touch for a workshop, executive session, or design engagement.
Apple Used to Know Exactly What People Wanted — Then It Made a Watch
∞ Oct 5, 2018
The latest version of Apple Watch doubles down on its fitness and health-tracking sensors, but as John Herrman writes in The New York Times, it’s not yet clear exactly what value all that data-tracking might deliver—and for whom:
For now, this impressive facility for collecting and organizing information about you is just that — it’s a great deal of data with not many places to go. This is sensitive information, of course, and Apple’s relative commitment to privacy — at least compared with advertising-centric companies like Google and Facebook — might be enough to get new users strapped in and recording.
As Apple continues its institutional struggle to conceive of what the Apple Watch is, or could be, in the imaginations of its customers, it’s worth remembering that Apple’s stated commitment to privacy is, in practice, narrow. The competitors that Cook likes to prod about their data-exploitative business models have a necessary and complicit partner in his company, having found many of their customers through Apple’s devices and software.
This is especially relevant as Apple casts about for ideas elsewhere. Apple has already met with the insurance giant Aetna about ways in which the company might use Apple Watches to encourage healthier — and cheaper — behavior in its tens of millions of customers. John Hancock, one of the largest life insurers in America, said after Apple’s latest announcement that it would offer all its customers the option of an interactive policy, in which customers would get discounts for healthy habits, as evidenced by data from wearable devices. Here we see the vague outlines of how the Apple Watch could become vital, or at least ubiquitous, as the handmaiden to another data-hungry industry.
Facebook Is Giving Advertisers Access to Your Shadow Contact Information
∞ Sep 27, 2018
One of the more insidious aspects of the social graph is that companies can mine data about you even if you don’t actively participate in their network. Your friends inadvertently give you up, as Kashmir Hill writes at Gizmodo:
Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn’t hand over at all, but that was collected from other people’s contact books, a hidden layer of details Facebook has about you that I’ve come to call “shadow contact information.”
Information that we assume to be under our control is not. Or, in many cases, information that you provide for one specific purpose is then flipped around and applied to another. Hill mentions an especially cynical dark-pattern example:
[Researchers] found that when a user gives Facebook a phone number for two-factor authentication or in order to receive alerts about new log-ins to a user’s account, that phone number became targetable by an advertiser within a couple of weeks. So users who want their accounts to be more secure are forced to make a privacy trade-off and allow advertisers to more easily find them on the social network.
This is despicable. This is a moment when companies should strive to improve literacy about data sharing and data usage. Instead, companies like Facebook purposely obscure and misdirect. This is both a crisis and an opportunity. As designers, how might we build new business models and interactions that rely on honesty and respect, instead of deception and opportunism?
Arguments for transparency are too often met with counterarguments like, “Well, if we tell them what we’re doing, they might not opt in.” (Or, more bluntly, “If people knew about it, they wouldn’t want any part of it.”) When we find ourselves using these words to justify covering our tracks, it’s a cue that we almost certainly shouldn’t be doing that thing in the first place.
Google Data Collection Research
∞ Sep 27, 2018
Whoops, Google, it looks like your business model is showing…
In “Google Data Collection,” Douglas C. Schmidt, Professor of Computer Science at Vanderbilt University, catalogs how much data Google is collecting about consumers and their most personal habits across all of its products and how that data is being tied together.
The key findings include:
- A dormant, stationary Android phone (with the Chrome browser active in the background) communicated location information to Google 340 times during a 24-hour period, or at an average of 14 data communications per hour. In fact, location information constituted 35 percent of all the data samples sent to Google.
- For comparison’s sake, a similar experiment found that on an iOS device with Safari but not Chrome, Google could not collect any appreciable data unless a user was interacting with the device. Moreover, an idle Android phone running the Chrome browser sends back to Google nearly fifty times as many data requests per hour as an idle iOS phone running Safari.
- An idle Android device communicates with Google nearly 10 times more frequently than an Apple device communicates with Apple servers. These results highlighted the fact that Android and Chrome platforms are critical vehicles for Google’s data collection. Again, these experiments were done on stationary phones with no user interactions. If you actually use your phone, Google’s information collection increases even more.
Pair that with Google’s substantial ad tech, including the network formerly known as DoubleClick, and Google’s data collection reaches well beyond the company’s own properties:
A major part of Google’s data collection occurs while a user is not directly engaged with any of its products. The magnitude of such collection is significant, especially on Android mobile devices, arguably the most popular personal accessory now carried 24/7 by more than 2 billion people.
If Software Is Eating the World, What Will Come Out the Other End?
∞ Sep 23, 2018
“So far, it’s mostly shit,” writes John Battelle, suggesting that there’s a world beyond the optimization and efficiency so cherished by the would-be disrupters:
But the world is not just software. The world is physics, it’s crying babies and shit on the sidewalk, it’s opioids and ecstasy, it’s car crashes and Senate hearings, lovers and philosophers, lost opportunities and spinning planets around untold stars. The world is still real. Software hasn’t eaten it as much as bound it in a spell, temporarily I hope, while we figure out what comes next.
The iPhone’s original UI designer on Apple’s greatest flaws
∞ Sep 10, 2018
Fast Company offers an interview with Imran Chaudhri, the original designer of the iPhone user interface. According to Chaudhri, Apple knew that the device and its notifications would be distracting, that the personal nature of the phone would soak up attention in entirely new ways. But Apple consciously decided not to make it easy to tone down those distractions:
“Inside, getting people to understand that [distraction] was going to be an issue was difficult. Steve [Jobs] understood it…internally though, I think there was always a struggle as to how much control do we want people to have over their devices. When I and a few other people were advocating for more control, that level of control was actually pushed back by marketing. We would hear things like, ‘you can’t do that because then the device will become uncool.’
“The controls exist for you. They’ve always been there and yet it’s incredibly hard to know how to use them and to manage them. You literally have to spend many days to go through and really understand what’s bombarding you and then turn those things off in a singular fashion. So for the people who understand the system really well, they can take advantage of it, but the people that don’t—the people that don’t even change their ringtone, who don’t even change their wallpaper—those are the real people that suffer from this sort of thing. They don’t have that level of control.”
Since then, Apple has embraced privacy as a competitive advantage versus Android, but Chaudhri suggests that iOS could do more to offer transparency and smart adjustments to personal settings:
“The system is intelligent enough to let you know that there are [apps] that you’ve given permission to that are still using your data, and notifications you’ve turned on that you’re not actually responding to. So let’s circle back and let’s reestablish a dialogue between the phone and the customer, where the phone asks, ‘Do you really need these notifications? Do you really want Facebook to be using your address book data? Because you’re not logging into Facebook anymore.’ There’s a lot of ways to remind people if you just design them properly.”
Seems to me that we should all do a similar inventory of the systems we design. There remain so many opportunities to create interventions to improve user literacy and control over privacy, data usage, and distraction. Responsible design in the era of the algorithm demands this kind of transparency.
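As a purely illustrative sketch, the kind of intervention Chaudhri describes comes down to a simple heuristic: compare what an app is allowed to do with how a person actually uses it, and ask again when the two diverge. The data model and thresholds here are invented, not anything iOS actually exposes.

```typescript
// Invented data model -- illustrative only, not a real iOS API.
interface AppUsage {
  name: string;
  daysSinceLastOpened: number;
  hasContactsAccess: boolean;
  notificationsEnabled: boolean;
  notificationsActedOnRate: number; // 0..1, share of notifications the person actually taps
}

// Suggest a "re-consent" prompt when granted access no longer matches real behavior.
function reviewPrompts(apps: AppUsage[]): string[] {
  const prompts: string[] = [];
  for (const app of apps) {
    if (app.hasContactsAccess && app.daysSinceLastOpened > 30) {
      prompts.push(`Do you still want ${app.name} to use your contacts? You haven't opened it in a month.`);
    }
    if (app.notificationsEnabled && app.notificationsActedOnRate < 0.05) {
      prompts.push(`You ignore nearly all of ${app.name}'s notifications. Turn them off?`);
    }
  }
  return prompts;
}

console.log(reviewPrompts([
  { name: "Facebook", daysSinceLastOpened: 45, hasContactsAccess: true,
    notificationsEnabled: true, notificationsActedOnRate: 0.02 },
]));
```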
Also, when Chaudhri says, “there was always a struggle as to how much control do we want people to have over their devices,” my take is: people should have all the control.
Consider the Beer Can
∞ Sep 10, 2018
Once upon a time, beer cans had no tab. They were sealed cans, and you used a church key to punch holes in them. In 1962, the “zip top” tab was invented, letting you open the can by peeling off a (razor-sharp) tab. John Updike was not impressed:
This seems to be an era of gratuitous inventions & negative improvements. Consider the beer can. It was beautiful as a clothespin, as inevitable as the wine bottle, as dignified & reassuring as the fire hydrant. A tranquil cylinder of delightfully resonant metal, it could be opened in an instant, requiring only the application of a handy gadget freely dispensed by every grocer… Now we are given, instead, a top beetling with an ugly, shmoo-shaped “tab,” which after fiercely resisting the tugging, bleeding fingers of the thirsty man, threatens his lips with a dangerous & hideous hole. However, we have discovered a way to thwart Progress… Turn the beer can upside down and open the bottom. The bottom is still the way the top used to be. This operation gives the beer an unsettling jolt, and the sight of a consistently inverted beer can makes some people edgy. But the latter difficulty could be cleared up if manufacturers would design cans that looked the same whichever end was up, like playing cards. Now, that would be progress.
I love this. It conjures lots of questions for designers as we seek to improve existing experiences:
What do innovations cost in social and physical pleasures when they disrupt familiar experiences? What price do we pay (or extract from others) when we design for efficiency? Whose efficiency are we designing for anyway? How do we distinguish nostalgia from real loss (and does the distinction matter)? How can we take useful lessons from the hacks our customers employ to work around our designs?
Related: Eater covers the history of beer-can design. You’re welcome.
How to have a healthy relationship with tech
∞ Sep 10, 2018
At Well+Good, the wonderful Liza Kindred describes how to make personal technology serve you, instead of the reverse. It all starts with realizing that your inability to put down your phone isn’t a personal failing, it’s something that’s been done to you:
“The biggest problem with how people engage with technology is technology, not the people,” she says. “Our devices and favorite apps are all designed to keep us coming back for more. That being said, there are many ways for us to intervene in our own relationships with tech, so that we can live this aspect of our lives in a way we can be proud of.”
Liza offers several pointers for putting personal technology in its place. My personal favorite:
Her biggest recommendation is turning off all notifications not sent by a human. See ya, breaking news, Insta likes, and emails. “Your time is more valuable than that,” Kindred says.
Alas, these strategies are akin to learning self-defense skills during a crime wave. They’re helpful (critical, even), but the core problem remains. In this case, the “crime wave” is the cynical, engagement-hungry strategies that too many companies employ to keep people clicking and tapping. And clicking and tapping. And clicking and tapping.
Liza’s on the case there, too. Her company Mindful Technology helps organizations craft products and business strategies that are kind and respectful while still serving the bottom line. I’ve participated in her Mindful Technology workshops, and they’re mind opening. Liza demonstrates that design patterns and business models that you might take for granted as a best practice do more damage than you realize. She has a collection of these anti-patterns, and product designers should take note.
Meanwhile, we’ll have to continue to sharpen those self-defense skills.
“Trigger for a rant”
∞ Jul 1, 2018
In his excellent Four Short Links daily feature, Nat Torkington has something to say about innovation poseurs—in the mattress industry:
Why So Many Online Mattress Brands – trigger for a rant: software is eating everything, but that doesn’t make everything an innovative company. If you’re applying the online sales playbook to product X (kombucha, mattresses, yoga mats) it doesn’t make you a Level 9 game-changing disruptive TechCo, it makes you a retail business keeping up with the times. I’m curious where the next interesting bits of tech are.
Should computers serve humans, or should humans serve computers?
∞ Jun 30, 2018
Nolan Lawson considers dystopian and utopian possibilities for the future, with a gentle suggestion that front-line technologists have some agency here. What kind of world do you want to help build?
The core question we technologists should be asking ourselves is: do we want to live in a world where computers serve humans, or where humans serve computers?
Or to put it another way: do we want to live in a world where the users of technology are in control of their devices? Or do we want to live in a world where the owners of technology use it as yet another means of control over those without the resources, the knowledge, or the privilege to fight back?
s5e11: Things That Have Caught My Attention
∞ May 20, 2018
In a recent edition of his excellent stream-of-consciousness newsletter, Dan Hon considers Alexa Kids Edition in which, among other things, Alexa encourages kids to say “please.” There are challenges and pitfalls, Dan writes, in designing a one-size-fits-all system that talks to children and, especially, teaches them new behaviors.
Parenting is a very personal subject. As I have become a parent, I have discovered (and validated through experimental data) that parents have very specific views about how to do things! Many parents do not agree with each other! Parents who agree with each other on some things do not agree on other things! In families where there are two parents there is much scope for disagreement on both desired outcome and method!
All of which is to say is that the current design, architecture and strategy of Alexa for Kids indicates one sort of one-size-fits-all method and that there’s not much room for parental customization. This isn’t to say that Amazon are actively preventing it and might not add it down the line - it’s just that it doesn’t really exist right now. Honan’s got a great point that:
"[For example,] take the magic word we mentioned earlier. There is no universal norm when it comes to whatâs polite or rude. Manners vary by family, culture, and even region. While âyes, sirâ may be de rigueur in Alabama, for example, it might be viewed as an element of the patriarchy in parts of California."
AI Is Harder Than You Think
∞ May 20, 2018
In the New York Times opinion section, Gary Marcus and Ernest Davis suggest that today’s data-crunching model for artificial intelligence is not panning out. Instead of truly understanding logic or language, today’s machine learning identifies data patterns to recognize and reflect human behavior. The systems this approach creates tend to mimic more than think. As a result, we have some impressive but incredibly narrow applications of AI. The culmination of artificial intelligence appears to be making salon appointments.
Decades ago, the approach was different. The AI field tried to understand the elements of human thought—and teach machines to actually think. The goal proved elusive, and the field drifted instead toward what machines were already good at: pattern recognition. Marcus and Davis say the detour has not proved helpful:
Once upon a time, before the fashionable rise of machine learning and “big data,” A.I. researchers tried to understand how complex knowledge could be encoded and processed in computers. This project, known as knowledge engineering, aimed not to create programs that would detect statistical patterns in huge data sets but to formalize, in a system of rules, the fundamental elements of human understanding, so that those rules could be applied in computer programs. Rather than merely imitating the results of our thinking, machines would actually share some of our core cognitive abilities.
That job proved difficult and was never finished. But “difficult and unfinished” doesn’t mean misguided. A.I. researchers need to return to that project sooner rather than later, ideally enlisting the help of cognitive psychologists who study the question of how human cognition manages to be endlessly flexible.
Today’s dominant approach to A.I. has not worked out. Yes, some remarkable applications have been built from it, including Google Translate and Google Duplex. But the limitations of these applications as a form of intelligence should be a wake-up call. If machine learning and big data can’t get us any further than a restaurant reservation, even in the hands of the world’s most capable A.I. company, it is time to reconsider that strategy.
Google Duplicitous
∞ May 9, 2018
Jeremy Keith comments on Google’s announcement of Google Duplex:
The visionaries of technology—Douglas Engelbart, J.C.R. Licklider—have always recognised the potential for computers to augment humanity, to be bicycles for the mind. I think they would be horrified to see the increasing trend of using humans to augment computers.