More than one thing can be true at once

Most designers I talk to hold very real and very valid concerns about AI. But very few are getting past that wall of concern to the place where they know enough to do something about the stuff that troubles them.

Close up photograph of dirty hands in wet clay. Image: Midjourney. 

AI slop. Environmental pillage. IP theft. Sociopathic tech bros. AI creating real opportunities to design and make things that matter. Purpose-led organisations innovating in ways they never dreamed possible.

All of these things can be true at once.


It’s hard to go deeper into something that feels wrong, so the statement becomes the stopping point: “AI plagiarises artists’ work.” “The data centres are causing environmental carnage.” Yes. But the race to train better models will continue whether you’re in or you’re out. So do you put your fingers in your ears and tune out? Or do you figure out how to chase some light?

So much of the best practice is still in front of us, waiting to be shaped.

But we can’t get to any of that if we don’t immerse ourselves enough to understand what’s actually happening under the hood, and where the real opportunities for change are.

Environmental impact is the one I hear most, and it should be. The scale of harm is real and it’s worsening. But if the weight of it keeps us stuck, we can’t participate in the change that’s needed. AI is already accelerating materials science for batteries and solar cells—identifying hundreds of thousands of stable new compounds that would have taken human researchers centuries to find through trial and error. Models that run on a laptop at a fraction of the energy cost of frontier models are good enough for many tasks, and getting better fast. Model selection. On-prem AI. Frameworks for deciding where not to use AI at all. The more designers and devs we have grappling with this stuff, the better. If we all lean back with our arms crossed, demanding to know when everything’s “solved”, we’re leaving it to the very people we see as unfit to have their hands on the wheel.

Artists’ IP is another big one. And rightly so—there’s a real reward asymmetry between what’s been used to train models and what AI operators are raking in. We can’t undo how models were trained, and we probably can’t do much to slow the arms race between the mega tech companies; that’s carrying on regardless. Consent-based training models and opt-in licensing are starting to emerge, not because the industry had a crisis of conscience, but because artists and regulators forced the conversation. It’s early and it’s imperfect. But which tools you use, and which companies you support with your money and attention—that’s a choice you can make right now.

Beyond that, if our best designers and developers don’t understand how pre-training and reinforcement learning work, we might be, as the kids say, cooked. We need to know the difference between memorisation and transformation. We need to educate ourselves on the legal challenges that are landing. The more precise we get about the problem, the more we know where our leverage actually sits—in what we build on top of the models, the design decisions we make about what they do, the services we advocate for, and which foundations we choose to build on.

The visual outputs of AI are, by and large, pretty shitty. But most designers are judging AI by what comes back from a single prompt. The interesting work isn’t happening there. It’s happening in entirely new workflows—custom skills, structured instructions, ways of working that didn’t exist two years ago. If you haven’t seen that layer, you’re making a judgement based on the least interesting thing AI can do. There are genuinely helpful AI workflows emerging in our studio’s design practice—but only because we’re as committed to learning as we are to finding new ways to make an impact with our partners.

When I look across the industry I see very few designers building the knowledge and craft to get to interesting results. Too few are working with AI rather than zero-shot prompting the thing once and walking away unimpressed.

The gap between default output and what’s possible with skill and creative judgement is massive. As far as I can tell, it’s close to entirely unexplored.

In our studio right now, we’re prototyping AI tools with a government agency to support community engagement in the clean energy transition. We’re working with GPs and allied health practitioners to surface research frameworks in real time, so they can stay with their patients instead of disappearing into the literature. We’re helping health networks tackle social determinants of health—supporting older people to get social, get active, and stay connected. This is live work, with significant organisations, on problems that matter. Every one of these projects has required us to develop ethical frameworks, collaborate with clinical psychologists and subject matter experts, test efficacy safely, and model risk. That work—the hard, careful, unglamorous work of responsible AI—is only possible because we’re deep enough in to know what the questions actually are, and where to look for viable answers.

If you’re waiting for a clean, simple, and pure answer to “ethical AI” before you decide whether to dive in, you’ll be waiting a long time. The frameworks don’t exist yet in any complete and perfectly practical form, and the people who engage now are the ones who’ll shape what could become canon. Designers are well placed for that work, right now. The decisions we make about systems, about people, about unintended consequences, about who gets left out—all of that only makes a dent if designers know the levers for change, and can tell the stories that shift how people think.

We need the best design talent opting in—eyes open, curiosity intact, all the worry still there—to make sure the best practices, and the products and services themselves, take shape in the interests of people and planet.