Imagination follows making

The value of prototyping is what it reveals about the people you’re trying to reach and serve, and what bold move it makes possible that wasn’t visible before you built it.

We’re in a room full of smart, purpose-driven people, most of whom haven’t built anything with AI yet. The first hour is almost always cautious—governance, risk, hallucinations, ethics. All valid, all important. But the energy is defensive. People are working out what's allowed.

Then, we make. A rough prototype—maybe an intake tool, maybe a way to search ten years of community consultation. It takes fifteen minutes. It works. And the room changes.

The person who wasn’t sure AI could be trusted with anything real is now leaning forward, adjusting a prompt, wanting to see what happens if they change one thing. The caution doesn’t get argued away; it dissolves, because the thing in front of them is real and useful and they made it. Sometimes there’s genuine joy. That shift is one of the most valuable things we’ve learned to create.

Making produces imagination

Imagination doesn’t precede making; it follows from it. Vision matters, but it sharpens dramatically when you’re building something real. The tooling is good enough now that even a rough prototype can demonstrate real, tangible value. It’s not production-grade software, but it’s more than enough to change someone’s mind about what’s worth building.

We can fall into the trap of thinking about AI as a productivity tool. Most people’s first encounter was ChatGPT drafting an email or Copilot summarising a meeting—useful enough, slightly magical. But it set a mental model that’s hard to shift: AI is a clever shortcut. A faster way to do the things you were already doing. That's a ceiling, not a floor.

Early photographers spent decades trying to make photos look like paintings. Formal compositions, posed subjects—the whole aesthetic borrowed from a medium they already understood. Photography had nothing to prove to painting, but it took a generation to figure that out. What came next—documentary photography, capturing the unrepeatable informal moment, entirely new ways of seeing the world—none of that was possible while the question was still “how do I replicate what already exists?”

That question is sitting in front of every purpose-led organisation right now. The ones we're working with are dealing with problems where capacity has always been the constraint—housing, family violence, climate transition, public health. More need than hours. More complexity than any team can fully hold. Institutional knowledge about communities that lives mostly in people's heads, and quietly walks out the door when they move on.

What if ten years of community consultation were actually retrievable, and the patterns across thousands of case notes were visible? What if the intake process was designed around the people least likely to show up, rather than the most organised? The technology to do most of that exists right now.

The value of prototyping is what it reveals about the people you’re trying to reach and serve, and what bold move it makes possible that wasn’t visible before you built it. That’s where design thinking and technological possibility need to work together. Not as separate disciplines bolted onto a project, but as a fused way of seeing. Creative judgement applied to what the prototype is telling you.

Making something is also how we move from abstract governance to tangible, practical decisions. There’s high-quality guidance all around us now—Australia’s AI Ethics Principles as the foundation, the National AI Centre’s Guidance for AI Adoption building on them, and state-level frameworks like the NSW Mandatory Ethical Principles and AI Assessment Framework, and Victoria’s guidance on safe and responsible use of generative AI in the public sector. These establish real foundations around accountability, risk, data quality and human oversight. But what those guardrails mean in practice—where the bias is in your data, what counts as high-quality context, what “human oversight” actually looks like for your team—becomes real when you’re looking at a working prototype. Small, safe experiments turn governance from a document into a practice, and prepare you to get bigger and bolder with confidence.

Careful can tip into frozen

These experiments also build culture, which is hard and important. In government and other large institutions, experimentation doesn’t come naturally. For good reason—accountability matters, public trust matters. What unfreezes it is tangibility and visibility. When someone shares what they built—what worked, what broke, what surprised them—people around them start to see where the value is, in their own context, with their own data. That’s when the really useful questions start: what do we want to be capable of that we currently aren’t? What’s the work we’ve always said we’d do if we had the capacity?

Those are design questions. They deserve a design-led process—staying anchored to communities, testing with real people, surfacing what no one person can see on their own. And they deserve the ambition to act on what you find.

That kind of ambition is hard to plan for. It emerges when someone makes a rough, imperfect, surprisingly useful thing, and the person next to them says, “Wait, could we do that for this?”

When you make something small enough to learn from, the imagination follows.