A real space to design in the age of agents
Agents are shipping faster than we can think, so we need a space to zoom out more than ever. The canvas must evolve to embrace the new paradigm. It should connect design to code and back, and it should just work.
A few months ago, we were debating whether designers should code. Now we’re debating whether engineers should even review code. Wild times, man. The agents are here, and hopefully they’re not chasing us with dark glasses (lol). They can just build things insanely fast. They can become infinite extensions of our minds. And the scary part is that we might not even know what comes next.
Why a new canvas?
Paper Desktop is a thing now because of the rise of agents.
A desktop app can run MCP servers, so you can talk to LLMs that have access to your apps, and it can connect directly to your repos and files.
But why? Why another canvas? Why push pixels by hand when everyone’s just prompting stuff into existence? Just flip it: what if the canvas were a really good place for agents to live?
As one of those “designers who ship”, I’ve been thinking a lot about where we should focus when agents are doing a big chunk of the work. The buzz is that execution is becoming purely conversational: you describe a thing, an agent writes the code, and hands you something cool. For some people, that feels like the end of the old way. No more drawing rectangles. Just prompt and ship.
I’ve been watching that narrative a little too closely and… come on. Agents can’t read between the lines or do serious design work in the broader sense of the process. Sometimes it feels like we either forgot what “design” actually means, or we’ve been stuck in AI marketing meetings for too long.
Agents can probably help a lot, especially with connectivity and repetitive tasks. But if you’re a designer who thinks AI will design for you, you’re a little fucked; it’s like thinking design is still about drawing mocks and maintaining design systems.
I like to use a canvas for my design workflow the same way I like to understand a bit of the codebase before I start prompting like crazy. I often observe my prototypes or mocks more than actually iterating on them. I like to hold multiple versions of the same thing on my screen, see them side by side, and mull over them before I commit to a decision.
You can’t scale design decisions in a chat box. You can’t explore widely with an agent that only writes code for your codebase. When the vibe-coding fever passes, people might realize you can bring agents to other apps too. Sometimes you might start in code, sometimes drawing rectangles. In my case, direct manipulation and AI operations are two sides of the same coin, whether I’m building frontend in a design file or in JSX. I just want zero drift between the two.
So, spatial thinking does what a chat log can’t: it keeps multiple futures visible at once, letting you make better connections between your elements and directions. AI agents are getting more powerful every day; they can not only read, but also see and think. Collaborating with agents is more than just code generation these days. You can now bring agents to a canvas, and we built Paper for that.
The visual tool shouldn’t replace the code. The problem is that the canvas never learned to speak the same language as the product. There wasn’t a tool based on HTML and CSS that made it easy for agents to work with. And that’s why Paper exists.
We needed to rethink the foundations and finally build something that exports code, imports React components, talks to APIs and lets you iterate quickly—but 20 years of digital design should compound with that, not reset. Typography, color, vectors, filters, all the way to multiplayer, image gen and talking to agents.
I hope 2026 is the year people realize the canvas is more relevant than ever—not just as a drawing tool, but as a thinking tool, and most importantly as a visual interface for collaborating with people, teams and their personal agents.
The “slot machine” trap
We are learning a new reflex: prompt, wait, output, repeat.
It’s fast. It’s magical. You can have an army of agents now. But the bottleneck had to go somewhere. The buzz says it’s taste… I’d say it’s more about care. In Gabe’s words: “if AI can think better and software can scale wider, then the scarce thing is the particular, messy way a human mind cares about something.”
When execution becomes cheap, we tend to skip the “spatial” phase of thinking. The bottleneck has moved up: it’s our minds now. Writing code is easy; the hard part is maintaining context and coherence across everything. Some people might say that “taste is a new core skill”, but it’s really more about caring for what you’re building and why.
We need to take more perspective now. We need a surface that allows us to see the “road not taken” right next to the one we’re building. A space to collaborate more, remix freely and connect siloed teams and tools. We need to expand before we narrow. It’s better to shape a vision before we build.
Why the “old canvas” failed production
So, if the canvas is so important for spatial thinking, why did it lose its place in production? The answer is abstraction. The canvas earned the right to explore, but it never earned the right to ship.
For the last decade, our design tools have been beautiful lies. Designers are leaving static mocks behind for a reason. If you’ve ever spent hours drawing a product that doesn’t exist yet—only to hand it off and get something slightly different back—then of course you want to design closer to reality.
Eventually, the code won. The canvas died the moment the first commit was pushed. The traditional design tool was tied to its abstract layers—no matter how good a screenshot an agent could take, you were still designing a picture of a div, not the div itself.
Is there an alternative future? Will the fun of moving rectangles disappear? We don’t think so. We believe we need to get a broader perspective and think more on paper. Pun intended.
Closing the gap with a connected canvas
We need a new standard. A more connected environment where you can go from design to code and back whenever you want, without much drama.
If the canvas is built on the same standards as the product—HTML, CSS, the DOM—then you’re not drawing a metaphor of a UI. You’re working in the medium.
We need tools that connect with the real thing. We want to bring the spatial power of the canvas back into the production loop and stop losing context in translation.
The canvas needs to be made of the same material as the product. Imagine a surface where “designing” isn’t just drawing a picture of your component, but writing actual React code that can be exported. A place where HTML and CSS are both the medium and the output.
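To make that idea concrete, here’s a minimal sketch of the difference between a picture of a div and the div itself. All the names here are hypothetical—this is not Paper’s actual API—but it shows the principle: when a canvas node is literally a tag plus CSS properties, “exporting” is just serialization, with no translation step to drift in.

```typescript
// Hypothetical sketch (not Paper's real API): a canvas node that IS
// markup, not a picture of it.
type CanvasNode = {
  tag: string;                     // a real HTML element, e.g. "div"
  style: Record<string, string>;   // real CSS properties
  children: (CanvasNode | string)[];
};

// Export is serialization, not translation — nothing to lose in handoff.
function toHTML(node: CanvasNode): string {
  const css = Object.entries(node.style)
    .map(([prop, value]) => `${prop}: ${value}`)
    .join("; ");
  const inner = node.children
    .map((child) => (typeof child === "string" ? child : toHTML(child)))
    .join("");
  return `<${node.tag} style="${css}">${inner}</${node.tag}>`;
}

const card: CanvasNode = {
  tag: "div",
  style: { padding: "16px", "border-radius": "8px" },
  children: ["Hello, canvas"],
};

console.log(toHTML(card));
// <div style="padding: 16px; border-radius: 8px">Hello, canvas</div>
```

The point isn’t the ten lines of serialization; it’s that there is no lossy “interpretation” layer between what you move on the canvas and what ships.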
This is the part that changes the whole dynamic: it goes both ways. You can pull reality onto the canvas when you need to go wide, and you can push decisions back into code when you need to go narrow—without turning either side into a dead end.
This is the tool we deserved before AI—and it’s the tool that gets 100x better because of AI. When the gap disappears in both directions, the canvas becomes the map: the place where humans provide spatial reasoning, intent and direction.
This isn’t about replacing code with visual tools. It’s about acknowledging that code is a terrible interface for spatial problems, and chat is a terrible interface for systemic decisions.
Leverage, not magic, no slop
Our bet is that we are heading toward a bi-directional workflow—where you can pull a section of your live app onto a canvas, iterate on it spatially, probably with some help from agents, and push it back as code. Not a picture of the change, but the change itself.
This unlocks the true era of agentic work. The canvas is where humans do the part agents are bad at: holding ambiguity, comparing paths, deciding what matters. The agent becomes an extension of your hands: handling the boilerplate, the refactoring, the tedious responsive adjustments.
When your canvas speaks the same language as your code, LLMs can finally connect the dots without drift. Agents can pull live data, connect to your APIs, and read your local environment. It’s no longer a blind handoff; it’s a shared context where design, code, and data live together.
Sloppy outputs aren’t about agents failing—they’re about missing understanding and instructions. You can now give your agents a whole website to build, with all the annotations you want. Plan mode just got better: the canvas becomes a living spec for building with agents.
Back to Paper
We have Paper to test this hypothesis. Paper is built on web standards because we believe that to fix the workflow, you have to respect the medium.
It’s time to stop treating design and implementation as two separate jobs and start treating them as overlapping layers of the same reality. And beyond that: now that agents are here, we need more spaces to step back for perspective, zoom in for details, and clarify intent—both between humans, and between humans and agents.
Infinite power without a map is chaos. The canvas gives us that map—to connect the dots and let agents do their thing. We might be building faster than ever, but we need to remember why we’re building in the first place. We hope Paper becomes that real space for you, pretty close to code and friendly to how you actually think.