Still Trying to Sound Smart About AI? The Boss Is, Too
∞ Jul 28, 2024
There’s a lot of “ready, fire, aim” in the industry right now as execs feel pressure to move on AI, even though most admit they don’t have confidence in how to use it. At The Wall Street Journal, Ray A. Smith rounds up some recent surveys that capture the situation:
Rarely has such a transformative, new technology spread and evolved so quickly, even before business leaders have grasped its basics.
No wonder that in a recent survey of 2,000 C-suite executives, 61% said AI would be a “game-changer.” Yet nearly the same share said they lacked confidence in their leadership teams’ AI skills or knowledge, according to staffing company Adecco and Oxford Economics, which conducted the survey.
The upshot: Many chief executives and other senior managers are talking a visionary game about AI’s promise to their staff—while trying to learn exactly what it can do.
Smith also points to a separate spring survey of 10,000 workers and executives, in which 71% of CEOs and two-thirds of other senior leaders cited AI as a reason they felt impostor syndrome in their positions.
With limited confidence at the top, AI innovation is trickling up from the bottom. (This rhymes with our strong belief at Big Medium that to be expert in a thing, you have to use the thing.)
In fact, much of what business leaders are gleaning about AI’s transformative potential is coming from individual employees, who are experimenting with AI on their own much faster than businesses are building bespoke, top-down applications of the technology, executives say.
In a survey of 31,000 working adults published by Microsoft last month, 75% of knowledge workers said they had started using AI on the job, the vast majority of whom reported bringing their own AI tools to work. Only 39% of the AI users said their employers had supplied them with AI training.
A Protopian Frontier
∞ Jun 28, 2024
Take five minutes to watch A Protopian Future, an Ignite talk by Jenny Johnston. She offers a provocation to really think about and describe the future you imagine will come of the things you/we are trying so hard to change right now.
Here, Jenny asks what the world might look like 50 years after nuclear weapons are abolished. Your thing might be something different. You’re probably PUSHING for something to be changed / added / removed in the world; but what future are you PULLING toward? What’s the good and the bad—intentional or unintentional—of the future that you’re designing today?
Protopian stories imagine better futures but not perfect futures. They embrace a kind of messy progress. The reason we’re seeing this protopian surge right now is because humanity is in a weird place. We have this tangle of existential threats in front of us that we’re having a hard time seeing past and certainly our way through…. Protopian stories are powerful tools for reorienting ourselves toward hope and possibility and not dystopian dread.
After you watch Jenny’s video, go check out the farfutures.horizon2045.org project she edited. So great.
WWDC 2024: Apple Intelligence
∞ Jun 23, 2024
John Gruber shares an under-reported tidbit from Apple’s many “Apple Intelligence” reveals:
The most unheralded aspect of Apple Intelligence is that the data centers Apple is building for Private Cloud Compute are not only carbon neutral, but are operating entirely on renewable energy sources. That’s extraordinary, and I believe unique in the entire industry.
LLMs are crazy-expensive across many dimensions, including environmental cost. Great to hear at least one company is tackling this head-on.
As usual, John also has lots of other insights on the announcements.
A Unified Theory of F*cks
∞ Jun 22, 2024
The inimitable Mandy Brown reminds us that the f*cks we have to give are a limited resource. Spend them in the right place:
Why love your work? It won’t, of course, love you back. It can’t. Work isn’t a thing that can love. It isn’t alive, it isn’t and won’t ever be living. And my answer is: don’t. Don’t give a f*ck about your work. Give all your f*cks to the living. Give a f*ck about the people you work with, and the people who receive your work—the people who use the tools and products and systems or, more often than not, are used by them. Give a f*ck about the land and the sea, all the living things that are used or used up by the work, that are abandoned or displaced by it, or—if we’re lucky, if we’re persistent and brave and willing—are cared for through the work. Give a f*ck about yourself, about your own wild and tender spirit, about your peace and especially about your art. Give every last f*ck you have to living things with beating hearts and breathing lungs and open eyes, with chloroplasts and mycelia and water-seeking roots, with wings and hands and leaves. Give like every f*ck might be your last.
Because here’s what I’ve learned: if you give your f*cks to the unliving—if you plant those f*cks in institutions or systems or platforms or, gods forbid, interest rates—you will run out of f*cks.
Illuminate
∞ Jun 21, 2024
Illuminate is an experimental project from Google that generates accessible, podcast-style interviews from academic papers:
Illuminate is an experimental technology that uses AI to adapt content to your learning preferences. Illuminate generates audio with two AI-generated voices in conversation, discussing the key points of select papers. Illuminate is currently optimized for published computer science academic papers.
The service has a waitlist, but you can try out some generated conversations (and I recommend that you do!). The enthusiasm, intonation, and ums & ahs are convincing and feel authentic to the genre that the project mimics. (See also the PDF to Podcast project which does similar things but with flatter voice results.)
But it’s not the seeming authenticity that feels important here. Machine-generated voices—even at this level of fidelity—are nothing new. What’s more interesting is how this project demonstrates what large language models (and now large multimodal models) are truly great at: they are prodigious translators and transformers of symbols, whether those symbols are for language, visuals, or broad concepts. These models can shift those symbols nimbly among formats: from English to Chinese to structured data to speech to UI components to audio to image. These are systems that can understand a concept they are given and then work their alchemy to present that concept in a new medium or language or format.
There are exciting opportunities here for unlocking content that is trapped in unfriendly formats (where the definition of “unfriendly” might be unique to the individual). This application leans into what generative AI is good at (understanding, transforming) around tightly scoped content—and avoids what these models are uneven at: answering questions or building content from scratch. How might this kind of transformation support education efforts, particularly around accessibility and inclusivity?
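For a rough feel of that symbol-shifting work, here’s a minimal sketch of turning a paper abstract into a two-voice dialogue script. The `call_llm` helper and the prompt wording are hypothetical stand-ins for whatever model API you use; this is not Illuminate’s actual pipeline, which isn’t public.

```python
# Hypothetical helper: send a prompt to whatever LLM provider you use and
# return its text reply. Stand-in only; not Illuminate's implementation.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider")

def abstract_to_dialogue(abstract: str) -> str:
    """Transform an academic abstract into a short two-host dialogue script."""
    prompt = (
        "Rewrite the following paper abstract as a friendly two-person podcast "
        "dialogue. Host A asks questions; Host B explains the key points in "
        "plain language. Keep it under 300 words.\n\n"
        f"Abstract:\n{abstract}"
    )
    return call_llm(prompt)

# The same content could then be shifted again (to speech via text-to-speech,
# to bullet points, to another language): the model's job here is format
# transformation of tightly scoped content, not open-ended question answering.
```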
I Will F***ing Piledrive You If You Mention AI Again
∞ Jun 21, 2024
Has breathless AI mania primed you for a ragey rant about overhyped technology and huckster pitchmen? Nikhil Suresh has you covered:
Most organizations cannot ship the most basic applications imaginable with any consistency, and you’re out here saying that the best way to remain competitive is to roll out experimental technology that is an order of magnitude more sophisticated than anything else your IT department runs, which you have no experience hiring for, when the organization has never used a GPU for anything other than junior engineers playing video games with their camera off during standup, and even if you do that all right there is a chance that the problem is simply unsolvable due to the characteristics of your data and business? This isn’t a recipe for disaster, it’s a cookbook for someone looking to prepare a twelve course f***ing catastrophe.
How about you remain competitive by fixing your shit?
There’s no such thing as a quick fix for a broken organization. And there’s no silver bullet for product excellence. AI is capable of amazing things, but you can’t shortcut great execution or ignore its very real downsides.
In another context, I often say, “High-performing teams have design systems, but having a design system won’t make you a high-performing team.” The same is true for AI.
There’s only one route to success: get your process and operations in order, understand the technologies you’re using, know their strengths and weaknesses, and above all: start with the right problem to solve.
We Need To Talk About Our AI Fetish
∞ Apr 21, 2024
In a powerful and historically grounded essay, Jeremy Wagstaff asks that we not abdicate the vision for AI solely to the companies who stand to gain from it:
“Admittedly, it’s not easy to assess the implications of a complex technology like AI if you’re not an expert in it, so we tend to listen to the experts,” Wagstaff writes. “But listening to the experts should tell you all you need to know about the enormity of the commitment we’re making, and how they see the future of AI. And how they’re most definitely not the people we should be listening to.”
The potential impact of AI on work, culture, and individual agency is both deep and broad. And that impact will have effects that are both positive and negative—including effects that we haven’t yet imagined. We should be prepared to adapt to both, but history tells us that when policy is in the hands of those who would profit from transformative technology, bad things get buried. See oil, plastics, asbestos, pesticides, etc.—and now big tech, where Wagstaff points out we’ve seen a cynical evolution of how technology “helps” us:
At first Google search required us to define what it was that we wanted; Facebook et al required us to define who and what we wanted to share our day with, and Twitter required us to be pithy, thoughtful, incisive, to debate. TikTok just required us to scroll. At the end it turned out the whole social media thing was not about us creating and sharing wisdom, intelligent content, but for the platforms to outsource the expensive bit — creating entertainment — to those who would be willing to sell themselves, their lives, hawking crap or doing pratfalls.
AI has not reached that point. Yet. We’re in this early-Google summer where we have to think about what we want our technology to do for us. The search prompt would sit there awaiting us, cursor blinking, as it does for us in ChatGPT or Claude. But this is just a phase. Generative AI will soon anticipate what we want, or at least a bastardised version of what we want. It will deliver a lowest-common denominator version which, because it doesn’t require us to say it out loud, and so see in text what a waste of our time we are dedicating to it, will strip away our ability to compute — to think — along with our ability, and desire, to do complex things for which we might be paid a salary or stock options.
It doesn’t have to turn out that way, of course. But it does require intention to change the course of technology and how companies and culture respectively profit from it, and not only financially. That intention has to come from many sources—from users, from policymakers, and from those of us who shape the digital experiences that use AI.
We all have to ask: What goals do we want to achieve with this technology? What is our vision for it? If we don’t decide for ourselves, the technology will decide for us. (Or the companies who would profit from it.) As I’m fond of saying: the future should not be self-driving.
Consider health care. What goals do we want to achieve by applying AI to patient care? If the primary goal is profit (reduce patient visit time and maximize the patient load), then the result might focus on AI taking over as much of the patient visit as possible. The machines would handle the intake, evaluate your symptoms and test results, handle the diagnosis, suggest the course of action, and send you on your way. You might not even see another human being during most routine visits. If the experience ended there, that might be considered a business win in the coldest terms, but holy shit, what a terrible outcome for human care—even more soulless than our current health care machinery.
What if, instead, we change the goal to better care, lower health costs, and more employment? In that case, AI might still aid in intake, synthesize symptoms and test results, and provide a summary for medical review—so that medical staff don’t have to do as much rote data entry and summation.
But THEN the doctor or physician’s assistant comes in. Because the machines have already done the initial medical analysis, the caregiver’s role is to deliver the message in a way that is caring and warm. Their time can be spent on letting patients tell their stories. Instead of a rushed five minutes with a doctor, the patient will get time to feel heard, ask questions, get info, be reassured.
And perhaps that caregiver doesn’t need as much education as doctors today, because they are supported by knowledgeable systems. That in turn makes health care less expensive for the patient. It also means we could afford more caregivers, for more jobs. Instead of using AI to reduce human contact, in other words, we can use the technology to create the circumstances for better, more humane connection in the times and contexts when people can be so much more effective than machines. At the same time, we can also reduce costs and increase employment.
But that won’t happen on its own. We first have to talk about it. We have to decide what’s important and what our vision should be. Here’s how Wagstaff puts it:
What’s missing is a discussion about what we want our technology to do for us. This is not a discussion about AI; it’s a discussion about where we want our world to go. This seems obvious, but nearly always the discussion doesn’t happen — partly because of our technology fetish, but also because entrenched interests will not be honest about what might happen. We’ve never had a proper debate about the pernicious effects of Western-built social media, but our politicians are happy to wave angry fingers at China over TikTok. …
AI is not a distant concept. It is fundamentally changing our lives at a clip we’ve never experienced. To allow those developing AI to lead the debate about its future is an error we may not get a chance to correct.
Nobody Wants To Work with Our Best Engineer
∞ Apr 20, 2024
If Jon was a great engineer, why was he so hard to work with? Isn’t his job to get things right?
No. The job of an engineer is to get things done. And getting anything done past a certain point requires working well with others.
If you are right but nobody wants to work with you, what net value are you bringing to the team? …
Kindness is a valuable trait. Practice it.
Designing, building, and delivering digital experiences is hard, but it turns out the biggest challenges are almost always human, not technical. Are you making things better or worse? How can you improve collaboration and understanding to make your team more successful in realizing shared vision?
John Maeda: Josh Clark's 2019 talk on Design and AI
∞ Apr 20, 2024
Design legend John Maeda found some old gold in this 2019 talk about Design and AI from Big Medium’s Josh Clark:
What’s especially awesome about Josh’s talk is that it precedes the hullabaloo of the chatgpt revolution. This is a pretty awesome talk by Josh. He has been trailblazing machine learning and design for quite a long time.
The talk, AI Is Your New Design Material, addresses use cases and applications for AI and machine learning, along with some of the challenges of designing with (and around) the eccentricities of machine intelligence.
(Also be sure to check out John’s excellent SXSW Design in Tech Report, “Design Against AI.”)
US Air Force Confirms First Successful AI Dogfight
∞ Apr 20, 2024
Emma Roth reports for The Verge:
After carrying out dogfighting simulations using the AI pilot, DARPA put its work to the test by installing the AI system inside its experimental X-62A aircraft. That allowed it to get the AI-controlled craft into the air at the Edwards Air Force Base in California, where it says it carried out its first successful dogfight test against a human in September 2023.
Human pilots were on board the X-62A with controls to disable the AI system, but DARPA says the pilots didn’t need to use the safety switch “at any point.” The X-62A went against an F-16 controlled solely by a human pilot, where both aircraft demonstrated “high-aspect nose-to-nose engagements” and got as close as 2,000 feet at 1,200 miles per hour. DARPA doesn’t say which aircraft won the dogfight, however.
What could possibly go wrong?
Looking for AI Use Cases
∞ Apr 20, 2024
Benedict Evans makes a savvy comparison between the current Generative AI moment and the early days of the PC. While the new technology is impressive, it’s not (yet) evident how it fits into the everyday lives or workflows of most people. Basically: what do we do with this thing? For many, ChatGPT and its cousins remain curiosities—fun toys to tinker with, but little more so far.
This wouldn’t matter much (‘man says new tech isn’t for him!’), except that a lot of people in tech look at ChatGPT and LLMs and see a step change in generalisation, towards something that can be universal. A spreadsheet can’t do word processing or graphic design, and a PC can do all of those but someone needs to write those applications for you first, one use-case at a time. But as these models get better and become multi-modal, the really transformative thesis is that one model can do ‘any’ use-case without anyone having to write the software for that task in particular.
Suppose you want to analyse this month’s customer cancellations, or dispute a parking ticket, or file your taxes - you can ask an LLM, and it will work out what data you need, find the right websites, ask you the right questions, parse a photo of your mortgage statement, fill in the forms and give you the answers. We could move orders of magnitude more manual tasks into software, because you don’t need to write software to do each of those tasks one at a time. This, I think, is why Bill Gates said that this is the biggest thing since the GUI. That’s a lot more than a writing assistant.
It seems to me, though, that there are two kinds of problem with this thesis.
The first problem, Evans says, is that the models are still janky. They trip—all the time—on problems that are moderately complex or just a few degrees left of familiar. That’s a technical problem, and the systems are getting better at a startling clip.
The second problem is more twisty—and less clear how it will resolve: as a culture broadly, and as the tech industry specifically, our imaginations haven’t quite caught up with truly useful applications for LLMs.
It reminds me a little of the early days of Google, when we were so used to hand-crafting our solutions to problems that it took time to realise that you could ‘just Google that’. Indeed, there were even books on how to use Google, just as today there are long essays and videos on how to learn ‘prompt engineering.’ It took time to realise that you could turn this into a general, open-ended search problem, and just type roughly what you want instead of constructing complex logical boolean queries on vertical databases. This is also, perhaps, matching a classic pattern for the adoption of new technology: you start by making it fit the things you already do, where it’s easy and obvious to see that this is a use-case, if you have one, and then later, over time, you change the way you work to fit the new tool.
The arrival of startling new technologies often works this way, as we puzzle how to shoehorn them into old ways of doing things. In my essay Of Nerve and Imagination, I framed this less as a problem of imagination than of nerve—the cheek to step out of old assumptions of “how things are done” and into a new paradigm. I wrote that essay just as the Apple Watch and other smartwatches were landing, adding yet another device to a busy ecosystem. Here’s what I said then:
The significance of new combinations tends to escape us. When someone embeds a computer inside a watch, it’s all too natural for us to assume that it will be used like either a computer or a watch. A smartphone on your wrist! A failure of nerve prevents us from imagining the entirely new thing that this combination might represent. The habits of the original technology blind us to the potential opportunities of the new.
Today’s combinations are especially hard to parse because they’re no longer about individual instances of technology. The potential of a smartwatch, for example, hinges not only on the combination of its component parts but on its combination with other smart and dumb objects in our lives.
As we weigh the role of the smartwatch, we have to muster the nerve to imagine: How might it talk to other devices? How can it interact with the physical world? What does it mean to wear data? How might the watch signal identity in the digital world as we move through the physical? How might a gesture or flick of the wrist trigger action around me? What becomes possible if smart watches are on millions of wrists? What are the social implications? What new behaviors will the watch channel and shape? How will it change the way I use other devices? How might it knit them together?
As we begin to embed technology into everything—when anything can be an interface—we can no longer judge each new gadget on its own. The success of any new interface depends on how it controls, reflects, shares, or behaves in a growing community of social devices.
Similarly, how do LLMs fit into a growing community of interfaces, services, and indeed other LLMs? As we confront a new and far more transformational technology in Generative AI, it’s up to designers and product folks to summon the nerve to understand not only how it fits into our tech ecosystem, but how it changes the way we work or think or interact.
Easier said than done, of course. And Evans writes that we’re still finding the right level for working with this technology as both users and product makers. Will we interact with these systems directly as general-purpose, “ask me anything” (or “ask me to do anything”) companions? Or will we instead focus on narrower applications, with interfaces wrapped around purpose-built AI to help focus and nail specific tasks? Can the LLMs themselves be responsible for presenting those interfaces, or do we need to imagine and build each application one at a time, as we traditionally have? There’s an ease and clarity to that narrow interface approach, Evans writes, but it diverges from loftier visions for what the AI interface might be.
Evans writes:
A GUI tells the users what they can do, but it also tells the computer everything we already know about the problem. Can the GUI itself be generative? Or do we need another whole generation of [spreadsheet inventor] Dan Bricklins to see the problem, and then turn it into apps, thousands of them, one at a time, each of them with some LLM somewhere under the hood?
On this basis, we would still have an orders of magnitude change in how much can be automated, and how many use-cases can be found for LLMs, but they still need to be found and built one by one. The change would be that these new use-cases would be things that are still automated one-at-a-time, but that could not have been automated before, or that would have needed far more software (and capital) to automate. That would make LLMs the new SQL, not the new HAL9000.
AI + Design: Figma Users Tell Us What’s Coming Next
∞ Mar 23, 2024
Figma surveyed 1800+ of its users about their companies’ expectations and adoption of AI. Responses from this audience of designers, executives, and developers indicate that AI is making its way into most companies’ product pipelines, but the solutions they’re shipping are… uninspired.
Eighty-nine percent of respondents say AI will have at least some impact on their company’s products and services in the next 12 months; 37% say the impact will be “significant or transformative.” The executives overseeing company decision-making are even more bullish and much more likely to see AI as “important to company goals.”
But this kind of thinking presents its own risk. Our survey suggests AI is largely in the experimental phase of development, with 72% of those who have built AI into products saying it plays a minor or non-essential role. Perhaps as a result, most respondents feel it’s too soon to tell if AI is making an impact. Just one third of those surveyed reported improvements to metrics like revenue, costs, or market share because of AI, and fewer than one third say they’re proud of what they shipped.
Worth repeating: Fewer than one third say they’re proud of what they shipped. Figma also says that separate research has turned up “AI feature fatigue” among a general civilian audience of product end-users.
What I take from this is that there’s general confidence that “there’s some there there,” but what that means isn’t yet clear to most companies. There’s a big effort to jam AI into products without first figuring out the right problem to solve, or how to do it in an elegant way. Exhibit #1: chatbots bolted onto everything. Early steps have felt like missteps.
“AI feature fatigue” is a signal in itself. It says that there’s too much user-facing focus on the underlying technology instead of how it’s solving an actual problem. The best AI features don’t shout that they’re AI—they just quietly do the work and get out of the way.
Hey, design is hard. Creating new interaction models is even harder; it requires stepping away from known habits and “best practice” design patterns. That’s the work right now. Algorithm engineers and data scientists have shown us what’s possible with AI and machine learning. It’s up to designers to figure out what to do with it. It’s obviously more than slapping an “AI label” on it, or bolting on a chatbot. The survey suggests that product teams understand this, but haven’t yet landed on the right solutions.
This is a huge focus for us in our client work at Big Medium. Through workshops and product-design engagements, we’re helping our clients make sense of just what AI means for them. Not least, that means helping the designers we work with to understand AI as a design material—the problems it’s good at solving, the emergent design patterns that come out of that, and the ones that fall away.
As an industry, we’re entering a new chapter of digital experience. The growing pains are evident, but here at Big Medium, we’re seeing solid solutions emerge in product and interaction design.
This is the Moment to Reinvent Your Product
∞ Mar 23, 2024
Alex Klein has been on a roll with UX opportunities for AI. At UX Collective, he asks: will you become an AI shark or fairy?
The sharks will prioritize AI that automates parts of their business and reduces cost. These organizations smell the sweet, sweet efficiency gains in the water. And they’re salivating at AI’s promised ability to maintain productivity with less payroll (aka people).
The fairies will prioritize AI that magically transforms their products into something that is shockingly more valuable for customers. These organizations will leverage AI to break free from the sameness of today’s digital experiences–in order to drive lifetime value and market share.
No, they’re not mutually exclusive. But every company will develop a culture that prioritizes one over the other.
I believe the sharks are making a big mistake: they will commoditize their product precisely when its potential value is exploding.
A broader way to name this difference of approach: will you use AI to get better/faster at things you do already, or will you invent new ways to do things that weren’t previously possible (and maybe not just new “ways”—maybe new “things” entirely)?
Both are entirely legit, by the way. A focus on efficiency will produce more predictable ROI (safe, known), while a focus on new paradigms can uncover opportunities that could be exponentially more valuable… but also maybe not (future-facing, uncertain). The good news: exploring those paradigms in the right way can reduce that uncertainty quickly.
I think of four categories of opportunities that AI and machine learning afford, and the most successful companies will explore all of them:
Be smarter/faster with problems we already solve. The machines are great at learning from example. Show the robots how to do something enough times, and they’ll blaze through the task.
Solve new problems, ask new questions. As the robots understand their worlds with more nuance, they can tackle tasks that weren’t previously possible. Instead of searching by keyword, for example, machines can now search by sentiment or urgency (think customer service queues); there’s a rough sketch of that kind of triage after this list. Or instead of offering a series of complex decision menus, the machines can propose one or more outcomes, or just do the task for you.
Tap new data sources. The robots can now understand all the messy ways that humans communicate, unlocking information that was previously opaque to them. Speech, handwriting, video, photos, sketches, facial expression… all are available not only as data but as surfaces for interaction.
See invisible patterns, make new connections. AI and machine learning are vast pattern-matching systems that see the world in clusters and vectors and probabilities that our human brains don’t easily discern. How can we partner with them to act on these useful new signals?
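To make the second category concrete, here’s a minimal sketch of triaging customer-service messages by sentiment and urgency rather than keyword. The `call_llm` helper and the label set are illustrative assumptions, not any particular product’s API.

```python
import json

# Hypothetical helper that sends a prompt to your LLM of choice and returns
# the raw text of its reply.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider")

def triage_ticket(message: str) -> dict:
    """Classify a support message by sentiment and urgency instead of keywords."""
    prompt = (
        "Classify this customer message. Respond with JSON only, using the keys "
        '"sentiment" (positive|neutral|negative) and "urgency" (low|medium|high).\n\n'
        f"Message: {message}"
    )
    reply = call_llm(prompt)
    return json.loads(reply)  # in real use, validate this before trusting it

# Example: route angry, high-urgency tickets to the front of the queue.
# ticket = triage_ticket("My order never arrived and support won't answer!")
# if ticket["urgency"] == "high":
#     escalate(ticket)  # escalate() is whatever your queue tooling provides
```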
Klein’s “sharks” focus on the first item above, while the “fairies” focus on the transformative possibilities of the last three.
That first efficiency-focused opportunity can be a great place to start with AI and machine learning. The problems and solutions are familiar, and the returns fairly obvious. For digital leaders confronting lean times, enlisting the robots for efficiency has to be a focus. And indeed, we’re doing a ton of that at Big Medium with how we use AI to build and maintain design systems.
But focusing solely on efficiency ignores the fact that we’ve already entered a new era of digital experience that will solve new problems in dramatically new ways for both company and customer. Some organizations have been living in that era for a while, and their algorithms already ease and animate everyday aspects of our lives (for better and for worse). Even there, we’re only getting started.
Sentient Design is my term for this emerging future of AI-mediated interfaces—experiences that feel almost self-aware in their response to user needs. In Big Medium’s product design projects, we’re helping our clients explore and capitalize on these emerging Sentient Design patterns—as embedded features or as wholesale products.
Companion/agent experiences are one novel aspect of that work, and Klein offers several useful examples of this approach with what he calls “software as a partnership.” There are several other strains of Sentient Design that we’re building into products and features, too, and they’re proving out. We’ll be sharing more of those design patterns here, stay tuned!
Meanwhile, if your team isn’t yet working with AI, it’s time. And if you’re still in the efficiency phase, get comfortable with the uncomfortable next step of reinvention.
The 3 Capabilities Designers Need To Build for the AI Era
∞ Mar 22, 2024
At UX Collective, Alex Klein shares three capabilities designers need to build for the AI era:
- AI strategy: how can we use AI to solve legit customer problems (not just bolted-on “we have AI!” features)?
- AI interaction design: what new experiences (and risks) does AI introduce?
- Model design: prompt-writing means that designers can collaborate with engineers to guide how algorithms work; how can we use designerly skills to improve models?
I agree with all of it, but I share special excitement around the new problems and emerging interaction models that AI invites us to address and explore. I love the way Klein puts it, and it’s why I’m sharing his article here:
We’ve moved from designing “waterslides,” where we focused on minimizing friction and ensuring fluid flow — to “wave pools,” where there is no clear path and every user engages in a unique way.
Over the past several years, the more that I’ve worked with AI and machine learning—with robot-generated content and robot-generated interaction—the more I’ve had to accept that I’m not in control of that experience as a designer. And that’s new. Interaction designers have traditionally designed a fixed path through information and interactions that we control and define. Now, when we allow the humans and machines to interact directly, they create their own experience outside of the tightly constrained paths we’re accustomed to providing.
We haven’t completely lost control, of course. We can choose when and where to allow this free-form interaction, blending those opportunities within controlled interaction paths. This has some implications that are worth exploring in both personal practice and as an industry. We’ve been working in all of these areas in our product work at Big Medium:
Sentient design. This is the term I’ve been using for AI-mediated interfaces. When the robots take on the responsibility for responding to humans, what becomes possible? What AI-facilitated experiences lie beyond the current fascination with chatbots? How might the systems themselves morph and adapt to present interfaces and interaction based on the user’s immediate need and interest? This doesn’t mean that every interface becomes a fever dream of information and interaction, but it does mean moving away from fixed templates and set UI patterns.
Defensive design. We’re used to designing for success and the happy path. When we let humans and robots interact directly, we have to shift to designing for failure and uncertainty. We have to design defensively: consider what could go wrong, how to prevent those issues where we can, and provide a gentle landing when we fail. (A minimal sketch of this pattern follows this list.)
Persona-less design. As we get the very real ability to respond to users in a hyper-personalized way, do personas still matter? Is it relevant or useful to define broad categories of people or mindsets, when our systems are capable of addressing the individual and their mindset in the moment? UX tools like personas and journey maps may need a rethink. At the very least, we have to reconsider how we use them and in which contexts of our product design and strategy. As always, let’s understand whether our tools still fit the job. It might be that the robots tell us more about our users than we can tell the robots.
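To ground the defensive design point above, here’s a minimal sketch of wrapping a model call with validation, a confidence check, and a graceful fallback. The names (`call_llm`, `FALLBACK_MESSAGE`) are illustrative assumptions rather than a prescribed pattern.

```python
import json

FALLBACK_MESSAGE = "Sorry, I couldn't work that out. Want to rephrase, or talk to a person?"

# Hypothetical helper that returns the model's raw text reply for a prompt.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider")

def suggest_reply(customer_message: str) -> dict:
    """Ask the model for a suggested reply, but plan for failure at every step."""
    prompt = (
        "Draft a short, polite reply to this customer. Respond with JSON only: "
        '{"reply": "...", "confidence": 0.0-1.0}\n\n' + customer_message
    )
    try:
        data = json.loads(call_llm(prompt))
        reply = data.get("reply", "").strip()
        confidence = float(data.get("confidence", 0))
    except (ValueError, TypeError, AttributeError):
        # Malformed output: land gently instead of showing raw model text.
        return {"reply": FALLBACK_MESSAGE, "needs_human": True}
    if not reply or confidence < 0.6:
        # Low confidence or empty reply: hand off to a person rather than guess.
        return {"reply": FALLBACK_MESSAGE, "needs_human": True}
    return {"reply": reply, "needs_human": False}
```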
These are exciting times, and we’re learning a ton. At Big Medium, even though we’ve been working for years with machine learning and AI, we’re discovering new interaction models every day—and fresh opportunities to collaborate with the robots. We’re entering a new chapter of user experience and interaction design. It’s definitely a moment to explore, think big, and splash in puddles—or as Klein might put it, leave the waterslide to take a swim in the wave pool.
A Coder Considers the Waning Days of the Craft
∞ Feb 19, 2024
In the New Yorker, writer and programmer James Somers shares his personal journey discovering just how good AI is at writing code—and what this might mean both individually and for the industry: A Coder Considers the Waning Days of the Craft. “Coding has always felt to me like an endlessly deep and rich domain. Now I find myself wanting to write a eulogy for it,” he writes. “What will become of this thing I’ve given so much of my life to?”
Software engineers, as a species, love automation. Inevitably, the best of them build tools that make other kinds of work obsolete. This very instinct explained why we were so well taken care of: code had immense leverage. One piece of software could affect the work of millions of people. Naturally, this sometimes displaced programmers themselves. We were to think of these advances as a tide coming in, nipping at our bare feet. So long as we kept learning we would stay dry. Sound advice—until there’s a tsunami.
Somers travels through several stages of amazement (and grief?) as he gets GPT-4 to produce working code in seconds that would normally take him hours or days—or sometimes that he doubts he’d be capable of at all. If the robots are already so good at writing production-ready code, then what’s the future of the human coder?
Here at Big Medium, we’re wrestling with the same stuff. We’re already using AI (and helping our clients to do the same) to do production engineering that we ourselves used to do: writing front-end code, translating code from one web framework to another, evaluating code quality, writing automated tests. It’s clear that these systems outstrip us for speed and, in some ways, technical execution.
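For a flavor of what that looks like in practice, here’s a minimal sketch of asking a model to port a component between frameworks and to draft a first-pass test. The `call_llm` helper is a hypothetical stand-in; this is not a description of our actual tooling.

```python
# Hypothetical helper that returns the model's text reply for a prompt;
# swap in whatever provider you actually use.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider")

def translate_component(source_code: str, from_framework: str, to_framework: str) -> str:
    """Ask the model to port a UI component from one framework to another."""
    prompt = (
        f"Translate this {from_framework} component to idiomatic {to_framework}. "
        "Preserve props, accessibility attributes, and behavior. Return code only.\n\n"
        + source_code
    )
    return call_llm(prompt)

def draft_unit_test(source_code: str) -> str:
    """Ask the model for a first-pass unit test; a human still reviews and runs it."""
    prompt = (
        "Write a unit test for the following component, covering its main states "
        "and at least one edge case. Return code only.\n\n" + source_code
    )
    return call_llm(prompt)
```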
It feels to me, though, that it’s less our jobs that are being displaced than where our attention is focused. We have a new and powerful set of tools that give us room to focus more on the “what” and the “why” while we let the robots worry about the “how.” But our new robot colleagues still need some hand-holding along the way. In 2018, Benedict Evans wrote that machine learning “gives you infinite interns, or, perhaps, infinite ten year olds”—powerful but, in important ways, unsophisticated. AI has come a long, long way in the six years since, but it still misses the big picture and fails to understand human context in a general and reliable way.
Somers writes:
You can’t just say to the A.I., “Solve my problem.” That day may come, but for now it is more like an instrument you must learn to play. You have to specify what you want carefully, as though talking to a beginner. … I found myself asking GPT-4 to do too much at once, watching it fail, and then starting over. Each time, my prompts became less ambitious. By the end of the conversation, I wasn’t talking about search or highlighting; I had broken the problem into specific, abstract, unambiguous sub-problems that, together, would give me what I wanted.
Once again, technology is pushing our attention higher up the stack. Instead of writing the code, we’re defining the goals—and the approach to meet those goals. It’s less about how the car is built and more about where we want to drive it. That means the implementation details become… well, details. As I wrote in Do More With Less, “Done right, this relieves us of nitty-gritty, error-prone, and repetitive production work and frees us to do higher-order thinking, posing new questions that solve bigger problems. This means our teams will eventually engage in more human inquiry and less technical implementation: more emphasis on research, requirements, and outcomes and less emphasis on specific outputs. In other words, teams will focus more on the right thing to do—and less on how to do it. The robots will take care of the how.”
And that seems to be where Somers lands, too:
The thing I’m relatively good at is knowing what’s worth building, what users like, how to communicate both technically and humanely. A friend of mine has called this A.I. moment “the revenge of the so-so programmer.” As coding per se begins to matter less, maybe softer skills will shine.
Thoughts on a Global Design System
∞ Feb 6, 2024
Hot on the heels of his podcast conversation with Big Medium’s Brad Frost, Chris Coyier shared his ruminations on Brad’s call to create a universal design system.
Chris’s post, Thoughts on a Global Design System, is a smart, incisive, and thought-provoking set of questions and critiques about Brad’s proposal. It’s the tough love that an ambitious idea needs in order to survive.
Chris nods to the problem that Brad seeks to solve with a global design system: “Surely, the world is wasting the brain power of too many smart people solving the same set of problems again and again.” And then he pokes at whether this is the right solution. How would it work in practice? Who would run it? How could it be opinionated enough to be useful without being so opinionated that it’s no longer universal? And why haven’t similar efforts succeeded? Is what we already have as close as we’re likely to get?
That feels like I’m being awfully critical. Sorry! Like I said, I like the enthusiasm and I do think there is potential here. But to realize the potential, I think you need to ask really hard questions and have really strong answers. Ultimately I sincerely hope it can be done. Having a super robust go-to set of components that are essentially vetted by the world would be awesome. I think it will take a very strong set of principles and leadership to get there.
We love both the spirit of the critique, and the depth of the commentary. Thanks, Chris, for the great framing for the conversation to come.
How Machines Are Taking Over the World’s Stock Markets
∞ Jan 24, 2020
Time magazine interviewed Marcos López de Prado, a specialist in using machine learning for investment and finance. This quote caught my eye:
“Machine learning should be used as a research tool, not as a forecasting tool. It should be used to identify new theories, and once you identify a new theory, you throw the machine away, you don’t want the machine.”
—Marcos López de Prado
A caveat: López de Prado is speaking specifically about machine learning for market predictions, and he notes that markets resist prediction. “Markets evolve,” he said. “You are an investor and when you extract money from the market, the market learns to prevent you from extracting profits next year.”
Still, this resonates with a philosophy that has deepened for me the more I’ve worked with AI and machine learning: machine learning is better at signals than answers.
The first generation of mainstream AI applications has over-dialed on presenting just-the-facts answers. A one-true-answer mentality has created a whole raft of problems, some of them dangerous. Here’s the thing: the machines are flaky, with narrow and literal interpretations of the world. That means they’re brittle for decision-making. Instead of replacing human judgment, AI should amplify it. Machine learning is a mediocre substitute for human judgment and individual agency, but it’s an excellent signal booster for both.
I love the way López de Prado frames it: use the machines to surface patterns, signals, and suggestions to develop a theory for action—and let humans make the decisions from there.
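Here’s a minimal sketch of that signals-over-answers stance, with made-up names: the model contributes a score and the evidence behind it, and the decision stays with a person.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """What the machine hands to a human: a score plus the evidence behind it."""
    score: float          # e.g. a risk probability from some model (made-up example)
    evidence: list[str]   # the features or examples that drove the score

def review(signal: Signal, threshold: float = 0.7) -> str:
    """The system flags; a person decides. No automatic action is taken."""
    if signal.score >= threshold:
        return "Flag for analyst review: " + "; ".join(signal.evidence)
    return "No action suggested."

# Usage: a hypothetical churn model produces the signal; a human reads the flag,
# forms a theory, and decides what (if anything) to do about it.
print(review(Signal(score=0.82, evidence=["support tickets up 3x", "usage down 40%"])))
```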
Slack and the Decline of Bots
∞ Jan 23, 2020
Because most chatbots understand only a very limited vocabulary, using them can become a guessing game to arrive at the precise incantation to make them do your bidding. The more we talk to robots, the more we talk like robots.
Will Oremus wrote this report in October about Slack’s expansion of support for third-party plugins. Those plugins were previously limited to text-only chatbots—via either conversational UI or specific “slash commands”—but can now offer more traditional GUI elements like windows, buttons, forms, and so on.
It seems Slack’s users found the chat-only UI too challenging because of its rigid command-line syntax. Discoverability was a challenge, and users found it hard to remember the precise words to make the bots go, or even which bots were installed. “Nobody should have to be a specialist in the dozens of apps they interact with on a daily or weekly basis,” said Andy Pflaum, Slack’s head of platform, in an interview.
Bots will “continue to exist and have their role in Slack,” Pflaum said. But the company’s research has found that “the typical user isn’t as comfortable with those, or forgets how to use those methods.” Testing of more graphical interfaces has generated “so much positive response,” he added, and should make apps accessible to “a much broader base of users.”
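For a sense of what that shift looks like in the payload itself, here’s a rough sketch of a Slack message that presents buttons instead of expecting users to remember a slash command. The structure follows Slack’s published Block Kit format, but the app, text, and action IDs are made up for illustration.

```python
# Instead of requiring users to recall something like "/standup submit ...",
# a hypothetical standup app can present buttons to click. A rough
# Block Kit-style payload, expressed as a Python dict:
standup_prompt = {
    "text": "Time for standup!",  # plain-text fallback for notifications
    "blocks": [
        {
            "type": "section",
            "text": {
                "type": "mrkdwn",
                "text": "*Time for standup!* How do you want to post your update?",
            },
        },
        {
            "type": "actions",
            "elements": [
                {
                    "type": "button",
                    "text": {"type": "plain_text", "text": "Fill out a form"},
                    "action_id": "open_standup_form",
                },
                {
                    "type": "button",
                    "text": {"type": "plain_text", "text": "Skip today"},
                    "action_id": "skip_standup",
                },
            ],
        },
    ],
}
# The same app might still register a slash command for power users, but nobody
# has to memorize its syntax to participate.
```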
Slack’s investment in feature expansion at once suggests the success of the plugins (1800 third-party apps and counting), but also the limiting nature of plain-text UI at a moment when bots still have very narrow language understanding. This will get better as natural language processing (NLP) improves and bots get more flexible in what they can understand. We’re already seeing that happen in the latest generation of NLP (see AI Dungeon for a fun example).
In the meantime: when you can take advantage of the full range of UI on a specific platform, you should—and that’s exactly what Slack is doing here. The future of interaction is increasingly multi-modal (and multi-platform for that matter). Enabling people to move nimbly among modes and platforms is as important as the ability to move among services, the very point of third-party plugins in the first place.