A Protopian Frontier
∞ Jun 28, 2024

Take five minutes to watch A Protopian Future, an Ignite talk by Jenny Johnston. She offers a provocation to really think about and describe the future you imagine will come of the things you/we are trying so hard to change right now.
Here, Jenny asks what the world might look like 50 years after nuclear weapons are abolished. Your thing might be something different. You’re probably PUSHING for something to be changed / added / removed in the world; but what future are you PULLING toward? What’s the good and the bad—intentional or unintentional—of the future that you’re designing today?
Protopian stories imagine better futures but not perfect futures. They embrace a kind of messy progress. The reason we’re seeing this protopian surge right now is because humanity is in a weird place. We have this tangle of existential threats in front of us that we’re having a hard time seeing past and certainly our way through…. Protopian stories are powerful tools for reorienting ourselves toward hope and possibility and not dystopian dread.
After you watch Jenny’s video, go check out the farfutures.horizon2045.org project she edited. So great.
WWDC 2024: Apple Intelligence
∞ Jun 23, 2024

John Gruber shares an under-reported tidbit from Apple’s many “Apple Intelligence” reveals:
The most unheralded aspect of Apple Intelligence is that the data centers Apple is building for Private Cloud Compute are not only carbon neutral, but are operating entirely on renewable energy sources. That’s extraordinary, and I believe unique in the entire industry.
LLMs are crazy-expensive across many dimensions, including environmental cost. Great to hear at least one company is tackling this head-on.
As usual, John also has lots of other insights on the announcements.
A Unified Theory of F*cks
∞ Jun 22, 2024

The inimitable Mandy Brown reminds us that the f*cks we have to give are a limited resource. Spend them in the right place:
Why love your work? It won’t, of course, love you back. It can’t. Work isn’t a thing that can love. It isn’t alive, it isn’t and won’t ever be living. And my answer is: don’t. Don’t give a f*ck about your work. Give all your f*cks to the living. Give a f*ck about the people you work with, and the people who receive your work—the people who use the tools and products and systems or, more often than not, are used by them. Give a f*ck about the land and the sea, all the living things that are used or used up by the work, that are abandoned or displaced by it, or—if we’re lucky, if we’re persistent and brave and willing—are cared for through the work. Give a f*ck about yourself, about your own wild and tender spirit, about your peace and especially about your art. Give every last f*ck you have to living things with beating hearts and breathing lungs and open eyes, with chloroplasts and mycelia and water-seeking roots, with wings and hands and leaves. Give like every f*ck might be your last.
Because here’s what I’ve learned: if you give your f*cks to the unliving—if you plant those f*cks in institutions or systems or platforms or, gods forbid, interest rates—you will run out of f*cks.
Illuminate
∞ Jun 21, 2024

Illuminate is an experimental project from Google that generates accessible, podcast-style interviews from academic papers:
Illuminate is an experimental technology that uses AI to adapt content to your learning preferences. Illuminate generates audio with two AI-generated voices in conversation, discussing the key points of select papers. Illuminate is currently optimized for published computer science academic papers.
The service has a waitlist, but you can try out some generated conversations (and I recommend that you do!). The enthusiasm, intonation, and ums & ahs are convincing and feel authentic to the genre that the project mimics. (See also the PDF to Podcast project which does similar things but with flatter voice results.)
But it’s not the seeming authenticity that feels important here. Machine-generated voices—even at this level of fidelity—are nothing new. What’s more interesting is how this project demonstrates what large language models (and now large multimodal models) are truly great at: they are prodigious translators and transformers of symbols, whether those symbols are for language, visuals, or broad concepts. These models can shift those symbols nimbly among formats: from English to Chinese to structured data to speech to UI components to audio to image. These are systems that can understand a concept they are given and then work their alchemy to present that concept in a new medium or language or format.
There are exciting opportunities here for unlocking content that is trapped in unfriendly formats (where the definition of “unfriendly” might be unique to the individual). This application leans into what generative AI is good at (understanding, transforming) around tightly scoped content—and avoids what these models are uneven at: answering questions or building content from scratch. How might this kind of transformation support education efforts, particularly around accessibility and inclusivity?
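The “translators and transformers of symbols” idea above can be sketched in a few lines. This is a hypothetical illustration, not Illuminate’s actual implementation: the `llm.complete()` call named in the final comment is a placeholder for whatever model API you use; only the prompt construction is real, runnable code.

```python
# Sketch of format transformation: the same source content can be re-presented
# as a dialogue, a summary, or structured data just by changing the instruction
# given to the model. Only the instruction varies; the content stays fixed.

def make_transform_prompt(source_text: str, target_format: str) -> str:
    """Build a prompt asking a model to re-present content in a new format."""
    return (
        f"Re-present the following content as {target_format}. "
        "Preserve the key points; change only the form.\n\n"
        f"Content:\n{source_text}"
    )

abstract = "We present a method for ..."  # stand-in for a paper abstract
prompt = make_transform_prompt(
    abstract,
    "a two-voice podcast dialogue between a host and an expert",
)
# The prompt would then go to whatever model you use, e.g.:
# audio_script = llm.complete(prompt)   # hypothetical API call
```

The point of the sketch: the model is doing the same job in every case (understand the concept, re-render it in a new medium), which is why the same pattern covers podcast scripts, translations, and UI generation alike.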
I Will F***ing Piledrive You If You Mention AI Again
∞ Jun 21, 2024

Has breathless AI mania primed you for a ragey rant about overhyped technology and huckster pitchmen? Nikhil Suresh has you covered:
Most organizations cannot ship the most basic applications imaginable with any consistency, and you’re out here saying that the best way to remain competitive is to roll out experimental technology that is an order of magnitude more sophisticated than anything else your IT department runs, which you have no experience hiring for, when the organization has never used a GPU for anything other than junior engineers playing video games with their camera off during standup, and even if you do that all right there is a chance that the problem is simply unsolvable due to the characteristics of your data and business? This isn’t a recipe for disaster, it’s a cookbook for someone looking to prepare a twelve course f***ing catastrophe.
How about you remain competitive by fixing your shit?
There’s no such thing as a quick fix for a broken organization. And there’s no silver bullet for product excellence. AI is capable of amazing things, but you can’t shortcut great execution or ignore its very real downsides.
In another context, I often say, “High-performing teams have design systems, but having a design system won’t make you a high-performing team.” The same is true for AI.
There’s only one route to success: get your process and operations in order, understand the technologies you’re using, know their strengths and weaknesses, and above all: start with the right problem to solve.
We Need To Talk About Our AI Fetish
∞ Apr 21, 2024

In a powerful and historically grounded essay, Jeremy Wagstaff asks that we not abdicate the vision for AI solely to the companies who stand to gain from it:
“Admittedly, it’s not easy to assess the implications of a complex technology like AI if you’re not an expert in it, so we tend to listen to the experts,” Wagstaff writes. “But listening to the experts should tell you all you need to know about the enormity of the commitment we’re making, and how they see the future of AI. And how they’re most definitely not the people we should be listening to.”
The potential impact of AI on work, culture, and individual agency is both deep and broad. And that impact will have effects that are both positive and negative—including effects that we haven’t yet imagined. We should be prepared to adapt to both, but history tells us that when policy is in the hands of those who would profit from transformative technology, bad things get buried. See oil, plastics, asbestos, pesticides, etc.—and now big tech, where Wagstaff points out we’ve seen a cynical evolution of how technology “helps” us:
At first Google search required us to define what it was that we wanted; Facebook et al required us to define who and what we wanted to share our day with, and Twitter required us to be pithy, thoughtful, incisive, to debate. TikTok just required us to scroll. At the end it turned out the whole social media thing was not about us creating and sharing wisdom, intelligent content, but for the platforms to outsource the expensive bit — creating entertainment — to those who would be willing to sell themselves, their lives, hawking crap or doing pratfalls.
AI has not reached that point. Yet. We’re in this early-Google summer where we have to think about what we want our technology to do for us. The search prompt would sit there awaiting us, cursor blinking, as it does for us in ChatGPT or Claude. But this is just a phase. Generative AI will soon anticipate what we want, or at least a bastardised version of what we want. It will deliver a lowest-common denominator version which, because it doesn’t require us to say it out loud, and so see in text what a waste of our time we are dedicating to it, will strip away our ability to compute — to think — along with our ability, and desire, to do complex things for which we might be paid a salary or stock options.
It doesn’t have to turn out that way, of course. But it does require intention to change the course of technology and how companies and culture respectively profit from it, and not only financially. That intention has to come from many sources—from users, from policymakers, and from those of us who shape the digital experiences that use AI.
We all have to ask: What goals do we want to achieve with this technology? What is our vision for it? If we don’t decide for ourselves, the technology will decide for us. (Or the companies who would profit from it.) As I’m fond of saying: the future should not be self-driving.
Consider health care. What goals do we want to achieve by applying AI to patient care? If the primary goal is profit (reduce patient visit time and maximize the patient load), then the result might focus on AI taking over as much of the patient visit as possible. The machines would handle the intake, evaluate your symptoms and tests, handle the diagnosis, suggest the course of action, and send you on your way. You might not even see another human being during most routine visits. If the experience ended there, that might be considered a business win in the coldest terms, but holy shit, what a terrible outcome for human care—even more soulless than our current health care machinery.
What if, instead, we change the goal to better care, lower health costs, and more employment? In that case, AI might still aid in intake, synthesize symptoms and test results, and provide a summary for medical review—so that medical staff don’t have to do as much rote data entry and summation.
But THEN the doctor or physician’s assistant comes in. Because the machines have already done the initial medical analysis, the caregiver’s role is to deliver the message in a way that is caring and warm. Their time can be spent on letting patients tell their stories. Instead of a rushed five minutes with a doctor, the patient will get time to feel heard, ask questions, get info, be reassured.
And perhaps that caregiver doesn’t need as much education as doctors today, because they are supported by knowledgeable systems. That in turn makes health care less expensive for the patient. It also means we could afford more caregivers, for more jobs. Instead of using AI to reduce human contact, in other words, we can use the technology to create the circumstances for better, more humane connection in the times and contexts when people can be so much more effective than machines. At the same time, we can also reduce costs and increase employment.
But that won’t happen on its own. We first have to talk about it. We have to decide what’s important and what our vision should be. Here’s how Wagstaff puts it:
What’s missing is a discussion about what we want our technology to do for us. This is not a discussion about AI; it’s a discussion about where we want our world to go. This seems obvious, but nearly always the discussion doesn’t happen — partly because of our technology fetish, but also because entrenched interests will not be honest about what might happen. We’ve never had a proper debate about the pernicious effects of Western-built social media, but our politicians are happy to wave angry fingers at China over TikTok. …
AI is not a distant concept. It is fundamentally changing our lives at a clip we’ve never experienced. To allow those developing AI to lead the debate about its future is an error we may not get a chance to correct.
Nobody Wants To Work with Our Best Engineer
∞ Apr 20, 2024

If Jon was a great engineer, why was he so hard to work with? Isn’t his job to get things right?
No. The job of an engineer is to get things done. And getting anything done past a certain point requires working well with others.
If you are right but nobody wants to work with you, what net value are you bringing to the team? …
Kindness is a valuable trait. Practice it.
Designing, building, and delivering digital experiences is hard, but it turns out the biggest challenges are almost always human, not technical. Are you making things better or worse? How can you improve collaboration and understanding to make your team more successful in realizing shared vision?
John Maeda: Josh Clark's 2019 talk on Design and AI
∞ Apr 20, 2024

Design legend John Maeda found some old gold in this 2019 talk about Design and AI from Big Medium’s Josh Clark:
What’s especially awesome about Josh’s talk is that it precedes the hullabaloo of the chatgpt revolution. This is a pretty awesome talk by Josh. He has been trailblazing machine learning and design for quite a long time.
The talk, AI Is Your New Design Material, addresses use cases and applications for AI and machine learning, along with some aspects of designing with (and around) the eccentricities of machine intelligence.
(Also be sure to check out John’s excellent SXSW Design in Tech Report, “Design Against AI.”)
US Air Force Confirms First Successful AI Dogfight
∞ Apr 20, 2024

Emma Roth reports for The Verge:
After carrying out dogfighting simulations using the AI pilot, DARPA put its work to the test by installing the AI system inside its experimental X-62A aircraft. That allowed it to get the AI-controlled craft into the air at the Edwards Air Force Base in California, where it says it carried out its first successful dogfight test against a human in September 2023.

Human pilots were on board the X-62A with controls to disable the AI system, but DARPA says the pilots didn’t need to use the safety switch “at any point.” The X-62A went against an F-16 controlled solely by a human pilot, where both aircraft demonstrated “high-aspect nose-to-nose engagements” and got as close as 2,000 feet at 1,200 miles per hour. DARPA doesn’t say which aircraft won the dogfight, however.
What could possibly go wrong?
Looking for AI Use Cases
∞ Apr 20, 2024

Benedict Evans makes a savvy comparison between the current Generative AI moment and the early days of the PC. While the new technology is impressive, it’s not (yet) evident how it fits into the everyday lives or workflows of most people. Basically: what do we do with this thing? For many, ChatGPT and its cousins remain curiosities—fun toys to tinker with, but little more so far.
This wouldn’t matter much (‘man says new tech isn’t for him!’), except that a lot of people in tech look at ChatGPT and LLMs and see a step change in generalisation, towards something that can be universal. A spreadsheet can’t do word processing or graphic design, and a PC can do all of those but someone needs to write those applications for you first, one use-case at a time. But as these models get better and become multi-modal, the really transformative thesis is that one model can do ‘any’ use-case without anyone having to write the software for that task in particular.
Suppose you want to analyse this month’s customer cancellations, or dispute a parking ticket, or file your taxes - you can ask an LLM, and it will work out what data you need, find the right websites, ask you the right questions, parse a photo of your mortgage statement, fill in the forms and give you the answers. We could move orders of magnitude more manual tasks into software, because you don’t need to write software to do each of those tasks one at a time. This, I think, is why Bill Gates said that this is the biggest thing since the GUI. That’s a lot more than a writing assistant.
It seems to me, though, that there are two kinds of problem with this thesis.
The first problem, Evans says, is that the models are still janky. They trip—all the time—on problems that are moderately complex or just a few degrees left of familiar. That’s a technical problem, and the systems are getting better at a startling clip.
The second problem is more twisty—and less clear how it will resolve: as a culture broadly, and as the tech industry specifically, our imaginations haven’t quite caught up with truly useful applications for LLMs.
It reminds me a little of the early days of Google, when we were so used to hand-crafting our solutions to problems that it took time to realise that you could ‘just Google that’. Indeed, there were even books on how to use Google, just as today there are long essays and videos on how to learn ‘prompt engineering.’ It took time to realise that you could turn this into a general, open-ended search problem, and just type roughly what you want instead of constructing complex logical boolean queries on vertical databases. This is also, perhaps, matching a classic pattern for the adoption of new technology: you start by making it fit the things you already do, where it’s easy and obvious to see that this is a use-case, if you have one, and then later, over time, you change the way you work to fit the new tool.
The arrival of startling new technologies often works this way, as we puzzle how to shoehorn them into old ways of doing things. In my essay Of Nerve and Imagination, I framed this less as a problem of imagination than of nerve—the cheek to step out of old assumptions of “how things are done” and into a new paradigm. I wrote that essay just as the Apple Watch and other smartwatches were landing, adding yet another device to a busy ecosystem. Here’s what I said then:
The significance of new combinations tends to escape us. When someone embeds a computer inside a watch, it’s all too natural for us to assume that it will be used like either a computer or a watch. A smartphone on your wrist! A failure of nerve prevents us from imagining the entirely new thing that this combination might represent. The habits of the original technology blind us to the potential opportunities of the new.
Today’s combinations are especially hard to parse because they’re no longer about individual instances of technology. The potential of a smartwatch, for example, hinges not only on the combination of its component parts but on its combination with other smart and dumb objects in our lives.
As we weigh the role of the smartwatch, we have to muster the nerve to imagine: How might it talk to other devices? How can it interact with the physical world? What does it mean to wear data? How might the watch signal identity in the digital world as we move through the physical? How might a gesture or flick of the wrist trigger action around me? What becomes possible if smart watches are on millions of wrists? What are the social implications? What new behaviors will the watch channel and shape? How will it change the way I use other devices? How might it knit them together?
As we begin to embed technology into everything—when anything can be an interface—we can no longer judge each new gadget on its own. The success of any new interface depends on how it controls, reflects, shares, or behaves in a growing community of social devices.
Similarly, how do LLMs fit into a growing community of interfaces, services, and indeed other LLMs? As we confront a new and far more transformational technology in Generative AI, it’s up to designers and product folks to summon the nerve to understand not only how it fits into our tech ecosystem, but how it changes the way we work or think or interact.
Easier said than done, of course. And Evans writes that we’re still finding the right level for working with this technology as both users and product makers. Will we interact with these systems directly as general-purpose, “ask me anything” (or “ask me to do anything”) companions? Or will we instead focus on narrower applications, with interfaces wrapped around purpose-built AI to help focus and nail specific tasks? Can the LLMs themselves be responsible for presenting those interfaces, or do we need to imagine and build each application one at a time, as we traditionally have? There’s an ease and clarity to that narrow interface approach, Evans writes, but it diverges from loftier visions for what the AI interface might be.
Evans writes:
A GUI tells the users what they can do, but it also tells the computer everything we already know about the problem. Can the GUI itself be generative? Or do we need another whole generation of [spreadsheet inventor] Dan Bricklins to see the problem, and then turn it into apps, thousands of them, one at a time, each of them with some LLM somewhere under the hood?
On this basis, we would still have an orders of magnitude change in how much can be automated, and how many use-cases can be found for LLMs, but they still need to be found and built one by one. The change would be that these new use-cases would be things that are still automated one-at-a-time, but that could not have been automated before, or that would have needed far more software (and capital) to automate. That would make LLMs the new SQL, not the new HAL9000.
AI + Design: Figma Users Tell Us What’s Coming Next
∞ Mar 23, 2024

Figma surveyed 1800+ of its users about their companies’ expectations and adoption of AI. Responses from this audience of designers, executives, and developers indicate that AI is making its way into most companies’ product pipelines, but the solutions they’re shipping are… uninspired.
Eighty-nine percent of respondents say AI will have at least some impact on their company’s products and services in the next 12 months; 37% say the impact will be “significant or transformative.” The executives overseeing company decision-making are even more bullish and much more likely to see AI as “important to company goals.”

But this kind of thinking presents its own risk. Our survey suggests AI is largely in the experimental phase of development, with 72% of those who have built AI into products saying it plays a minor or non-essential role. Perhaps as a result, most respondents feel it’s too soon to tell if AI is making an impact. Just one third of those surveyed reported improvements to metrics like revenue, costs, or market share because of AI, and fewer than one third say they’re proud of what they shipped.
Worth repeating: Fewer than one third say they’re proud of what they shipped. Figma also says that separate research has turned up “AI feature fatigue” among a general civilian audience of product end-users.
What I take from this is that there’s general confidence that “there’s some there there,” but what that means isn’t yet clear to most companies. There’s a big effort to jam AI into products without first figuring out the right problem to solve, or how to do it in an elegant way. Exhibit #1: chatbots bolted onto everything. Early steps have felt like missteps.
“AI feature fatigue” is a signal in itself. It says that there’s too much user-facing focus on the underlying technology instead of how it’s solving an actual problem. The best AI features don’t shout that they’re AI—they just quietly do the work and get out of the way.
Hey, design is hard. Creating new interaction models is even harder; it requires stepping away from known habits and “best practice” design patterns. That’s the work right now. Algorithm engineers and data scientists have shown us what’s possible with AI and machine learning. It’s up to designers to figure out what to do with it. It’s obviously more than slapping an “AI label” on it, or bolting on a chatbot. The survey suggests that product teams understand this, but haven’t yet landed on the right solutions.
This is a huge focus for us in our client work at Big Medium. Through workshops and product-design engagements, we’re helping our clients make sense of just what AI means for them. Not least, that means helping the designers we work with to understand AI as a design material—the problems it’s good at solving, the emergent design patterns that come out of that, and the ones that fall away.
As an industry, we’re entering a new chapter of digital experience. The growing pains are evident, but here at Big Medium, we’re seeing solid solutions emerge in product and interaction design.
This is the Moment to Reinvent Your Product
∞ Mar 23, 2024

Alex Klein has been on a roll with UX opportunities for AI. At UX Collective, he asks: will you become an AI shark or fairy?
The sharks will prioritize AI that automates parts of their business and reduces cost. These organizations smell the sweet, sweet efficiency gains in the water. And they’re salivating at AI’s promised ability to maintain productivity with less payroll (aka people).
The fairies will prioritize AI that magically transforms their products into something that is shockingly more valuable for customers. These organizations will leverage AI to break free from the sameness of today’s digital experiences–in order to drive lifetime value and market share.
No, they’re not mutually exclusive. But every company will develop a culture that prioritizes one over the other.
I believe the sharks are making a big mistake: they will commoditize their product precisely when its potential value is exploding.
A broader way to name this difference of approach: will you use AI to get better/faster at things you do already, or will you invent new ways to do things that weren’t previously possible (and maybe not just new “ways”—maybe new “things” entirely)?
Both are entirely legit, by the way. A focus on efficiency will produce more predictable ROI (safe, known), while a focus on new paradigms can uncover opportunities that could be exponentially more valuable… but also maybe not (future-facing, uncertain). The good news: exploring those paradigms in the right way can reduce that uncertainty quickly.
I think of four categories of opportunities that AI and machine learning afford, and the most successful companies will explore all of them:
Be smarter/faster with problems we already solve. The machines are great at learning from example. Show the robots how to do something enough times, and they’ll blaze through the task.
Solve new problems, ask new questions. As the robots understand their worlds with more nuance, they can tackle tasks that weren’t previously possible. Instead of searching by keyword, for example, machines can now search by sentiment or urgency (think customer service queues). Or instead of offering a series of complex decision menus, the machines can propose one or more outcomes, or just do the task for you.
Tap new data sources. The robots can now understand all the messy ways that humans communicate, unlocking information that was previously opaque to them. Speech, handwriting, video, photos, sketches, facial expression… all are available not only as data but as surfaces for interaction.
See invisible patterns, make new connections. AI and machine learning are vast pattern-matching systems that see the world in clusters and vectors and probabilities that our human brains don’t easily discern. How can we partner with them to act on these useful new signals?
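The “search by sentiment or urgency” example in the list above can be sketched concretely. In this illustration, the keyword heuristic is only a toy stand-in for a real model; in practice each ticket would go to a classifier or LLM that returns an urgency score, and the queue would be sorted by that score. The marker list and function names are invented for the sketch.

```python
# Sketch of "search by urgency" for a customer service queue.
# urgency_score() is a toy keyword heuristic standing in for a real model;
# the sorting logic is the same either way: score each ticket, rank the queue.

URGENT_MARKERS = ("outage", "down", "cannot log in", "data loss", "urgent")

def urgency_score(ticket_text: str) -> int:
    """Toy stand-in for a model: count urgency markers in the ticket."""
    text = ticket_text.lower()
    return sum(marker in text for marker in URGENT_MARKERS)

def triage(tickets: list[str]) -> list[str]:
    """Return tickets sorted most-urgent first."""
    return sorted(tickets, key=urgency_score, reverse=True)

queue = [
    "How do I change my avatar?",
    "URGENT: production outage, site is down",
    "Billing question about last invoice",
]
ranked = triage(queue)  # the outage ticket rises to the top
```

The design point: the interface stays a plain queue, but the ordering is now driven by meaning rather than keywords or arrival time, which is exactly the shift from “search by keyword” to “search by sentiment or urgency.”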
Klein’s “sharks” focus on the first item above, while the “fairies” focus on the transformative possibilities of the last three.
That first efficiency-focused opportunity can be a great place to start with AI and machine learning. The problems and solutions are familiar, and the returns fairly obvious. For digital leaders confronting lean times, enlisting the robots for efficiency has to be a focus. And indeed, we’re doing a ton of that at Big Medium with how we use AI to build and maintain design systems.
But focusing solely on efficiency ignores the fact that we’ve already entered a new era of digital experience that will solve new problems in dramatically new ways for both company and customer. Some organizations have been living in that era for a while, and their algorithms already ease and animate everyday aspects of our lives (for better and for worse). Even there, we’re only getting started.
Sentient Design is my term for this emerging future of AI-mediated interfaces—experiences that feel almost self-aware in their response to user needs. In Big Medium’s product design projects, we’re helping our clients explore and capitalize on these emerging Sentient Design patterns—as embedded features or as wholesale products.
Companion/agent experiences are one novel aspect of that work, and Klein offers several useful examples of this approach with what he calls “software as a partnership.” There are several other strains of Sentient Design that we’re building into products and features, too, and they’re proving out. We’ll be sharing more of those design patterns here, so stay tuned!
Meanwhile, if your team isn’t yet working with AI, it’s time. And if you’re still in the efficiency phase, get comfortable with the uncomfortable next step of reinvention.