In a powerful and historically grounded essay, Jeremy Wagstaff asks that we not abdicate the vision for AI solely to the companies who stand to gain from it:

“Admittedly, it’s not easy to assess the implications of a complex technology like AI if you’re not an expert in it, so we tend to listen to the experts,” Wagstaff writes. “But listening to the experts should tell you all you need to know about the enormity of the commitment we’re making, and how they see the future of AI. And how they’re most definitely not the people we should be listening to.”

The potential impact of AI on work, culture, and individual agency is both deep and broad. And that impact will have effects that are both positive and negative—including effects that we haven’t yet imagined. We should be prepared to adapt to both, but history tells us that when policy is in the hands of those who would profit from transformative technology, bad things get buried. See oil, plastics, asbestos, pesticides, etc.—and now big tech, where Wagstaff points out we’ve seen a cynical evolution of how technology “helps” us:

At first Google search required us to define what it was that we wanted; Facebook et al required us to define who and what we wanted to share our day with, and Twitter required us to be pithy, thoughtful, incisive, to debate. TikTok just required us to scroll. At the end it turned out the whole social media thing was not about us creating and sharing wisdom, intelligent content, but for the platforms to outsource the expensive bit — creating entertainment — to those who would be willing to sell themselves, their lives, hawking crap or doing pratfalls.

AI has not reached that point. Yet. We’re in this early-Google summer where we have to think about what we want our technology to do for us. The search prompt would sit there awaiting us, cursor blinking, as it does for us in ChatGPT or Claude. But this is just a phase. Generative AI will soon anticipate what we want, or at least a bastardised version of what we want. It will deliver a lowest-common-denominator version which, because it doesn’t require us to say it out loud, and so see in text what a waste of our time we are dedicating to it, will strip away our ability to compute — to think — along with our ability, and desire, to do complex things for which we might be paid a salary or stock options.

It doesn’t have to turn out that way, of course. But it does require intention to change the course of technology and how companies and culture alike profit from it, and not only financially. That intention has to come from many sources—from users, from policymakers, and from those of us who shape the digital experiences that use AI.

We all have to ask: What goals do we want to achieve with this technology? What is our vision for it? If we don’t decide for ourselves, the technology will decide for us. (Or the companies who would profit from it.) As I’m fond of saying: the future should not be self-driving.

Consider health care. What goals do we want to achieve by applying AI to patient care? If the primary goal is profit (reduce patient visit time and maximize patient load), then the result might focus on AI taking over as much of the patient visit as possible. The machines would handle the intake, evaluate your symptoms and test results, make the diagnosis, suggest a course of action, and send you on your way. You might not even see another human being during most routine visits. If the experience ended there, that might be considered a business win in the coldest terms, but holy shit, what a terrible outcome for human care—even more soulless than our current health care machinery.

What if, instead, we change the goal to better care, lower health costs, and more employment? In that case, AI might still aid in intake, synthesize symptoms and test results, and provide a summary for medical review—so that medical staff don’t have to do as much rote data entry and summation.

But THEN the doctor or physician’s assistant comes in. Because the machines have already done the initial medical analysis, the caregiver’s role is to deliver the message in a way that is caring and warm. Their time can be spent letting patients tell their stories. Instead of a rushed five minutes with a doctor, the patient gets time to feel heard, ask questions, get information, and be reassured.

And perhaps that caregiver doesn’t need as much education as doctors today, because they are supported by knowledgeable systems. That in turn makes health care less expensive for the patient. It also means we could afford more caregivers, creating more jobs. Instead of using AI to reduce human contact, in other words, we can use the technology to create the circumstances for better, more humane connection in the times and contexts when people can be so much more effective than machines. At the same time, we can reduce costs and increase employment.

But that won’t happen on its own. We first have to talk about it. We have to decide what’s important and what our vision should be. Here’s how Wagstaff puts it:

What’s missing is a discussion about what we want our technology to do for us. This is not a discussion about AI; it’s a discussion about where we want our world to go. This seems obvious, but nearly always the discussion doesn’t happen — partly because of our technology fetish, but also because entrenched interests will not be honest about what might happen. We’ve never had a proper debate about the pernicious effects of Western-built social media, but our politicians are happy to wave angry fingers at China over TikTok. …

AI is not a distant concept. It is fundamentally changing our lives at a clip we’ve never experienced. To allow those developing AI to lead the debate about its future is an error we may not get a chance to correct.
