Code to Canvas is a Figma integration, announced in February 2026, that converts Claude-generated UI code into editable Figma frames. It works by capturing a running UI built with Claude Code, then using AI to reconstruct the layout as native Figma layers, components, and auto-layout groups.
Claude Code to Figma (officially called “Code to Canvas”) is a new integration between Anthropic’s Claude Code and the Figma design canvas. Announced on February 17, 2026, it lets developers, designers, and product teams capture a functioning UI built with Claude Code and convert it into a fully editable Figma frame. Not a screenshot. Not a flattened image. A real design artifact that teams can manipulate, annotate, and iterate on.
For years, the design-to-code pipeline moved in one direction. Designers handed off. Engineers interpreted. Context got lost somewhere in between. This integration opens the reverse direction, and it changes how teams evaluate AI-generated interfaces.
What Is Claude Code to Figma? (And Why It Matters Now)
AI coding tools have made it trivially easy to go from idea to working prototype. Claude Code, Cursor, Windsurf. You describe what you want, and you get a functioning interface in minutes. The bottleneck moved. It is no longer “how do we build this?” It is “how do we decide which version to ship?”
That decision process lives on the canvas. It lives in Figma, where teams compare options side by side, leave comments, and align before committing to a direction. Until now, there was no clean way to bring a coded prototype back into that decision space.
Figma’s partnership with Anthropic addresses exactly this gap. The question is no longer whether AI can build interfaces. It is whether teams can evaluate and refine what AI builds, together, in a shared space.
How Code to Canvas Works: The Step-by-Step Workflow
The core workflow has four steps:
- Build or iterate on a UI using Claude Code. Local dev server, staging environment, production. Anything running in a browser.
- Capture the screen. The integration grabs the live browser state and converts it into a Figma-compatible frame.
- Paste into Figma. The captured screen lands on your canvas as an editable design artifact. Not a flat image. A real frame.
- Collaborate. Your team annotates, duplicates, rearranges, and compares options directly on the canvas. No code access required.
The power is in multi-screen sessions. You can capture an entire flow (onboarding, checkout, settings) and lay it out on the canvas in a single session, preserving sequence and context. Duplicate frames, test structural changes, compare alternatives. Rejected ideas stay visible for future reference. The canvas becomes a decision-making space for AI-generated interfaces.
Setting Up the Figma MCP Server with Claude Code
The integration runs on Figma’s MCP (Model Context Protocol) Server. MCP is an open standard that allows AI tools to connect with external data sources and applications. Think of it as a universal adapter between Claude Code and Figma’s design environment.
Setup takes three steps:
- Enable the MCP server. Open the Figma desktop app preferences and turn on “Dev Mode MCP Server.” It runs locally at http://127.0.0.1:3845/sse.
- Connect Claude Code. Run a single terminal command: claude mcp add --transport sse figma-dev-mode-mcp-server http://127.0.0.1:3845/sse
- Start working. Reference Figma designs by selecting frames directly in the desktop app, or paste design links into Claude Code prompts.
Requirements: Figma desktop app (not the browser version), a Figma Dev or Full seat, and Claude Code installed via npm.
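For reference, the claude mcp add command above registers the server in Claude Code’s MCP configuration. The JSON below is a sketch of what the resulting entry looks like; the exact file location and schema can vary between Claude Code versions, so treat it as illustrative rather than canonical:

```json
{
  "mcpServers": {
    "figma-dev-mode-mcp-server": {
      "type": "sse",
      "url": "http://127.0.0.1:3845/sse"
    }
  }
}
```

If claude mcp list ever shows the server as missing, checking this entry is a quick way to confirm the registration survived.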
Step 1: Enable the MCP server in Figma
Open the Figma desktop app. Go to Preferences (Cmd + , on Mac), find the Dev Mode MCP Server toggle, and turn it on. Figma starts a local server at http://127.0.0.1:3845. You will see a confirmation notification. The server only runs while Figma is open, so leave it running during your session.
Step 2: Connect Claude Code to the Figma MCP server
In your terminal, run: claude mcp add --transport sse figma-dev-mode-mcp-server http://127.0.0.1:3845/sse
Verify the connection with: claude mcp list
You should see figma-dev-mode-mcp-server listed as active. If it shows as disconnected, confirm the Figma desktop app is open and the MCP toggle is still on.
Step 3: Select a frame in Figma and reference it in Claude
In Figma, select the frame or component you want to work with. Right-click and copy the link (Cmd + L). In your Claude Code session, paste the link directly into your prompt:
“Look at this Figma frame [paste link] and build the card component as a React component using Tailwind.”
Claude reads the frame’s structure, properties, and layout via the MCP connection and generates code that reflects your actual design rather than a generic interpretation.
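To make “your actual design rather than a generic interpretation” concrete, here is a sketch of the kind of output you might get back. It is deliberately framework-free (a plain TypeScript function returning markup, where a real session would produce a React component) and every name and Tailwind class here is an invented placeholder, not something Claude would literally emit:

```typescript
// Illustrative sketch of generated output: a card whose (assumed) frame
// spacing and color tokens have been baked into Tailwind utility classes.
interface CardProps {
  title: string;
  body: string;
  cta: string;
}

function renderCard({ title, body, cta }: CardProps): string {
  // A string-returning function keeps the example self-contained;
  // in practice this would be JSX.
  return [
    '<div class="rounded-2xl bg-white p-6 shadow-sm flex flex-col gap-4">',
    `  <h3 class="text-lg font-semibold text-gray-900">${title}</h3>`,
    `  <p class="text-sm text-gray-600">${body}</p>`,
    `  <button class="self-start rounded-lg bg-indigo-600 px-4 py-2 text-white">${cta}</button>`,
    "</div>",
  ].join("\n");
}
```

The point is the fidelity: because the MCP connection exposes the frame’s real spacing and color values, the generated classes reflect your system instead of defaults.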
Step 4: Capture a running UI back to Figma
Once Claude has built or modified a component, run your local dev server and open the UI in a browser. Use the Figma “Code to Canvas” capture button (available in the Figma desktop app toolbar when Dev Mode is active) to screenshot the live state. The captured frame lands on your canvas as an editable layer group, not a flat image.
What to expect: The first capture is the slowest. Figma reconstructs layers from the screenshot using AI inference, which takes 10 to 30 seconds depending on complexity. Subsequent captures are faster.
Once connected, the pipeline flows both ways. You can push code into Figma, and you can pull design context into Claude Code. The MCP server does not just pass screenshots. It reads components, variables, styles, and layout structure. Claude understands your design system semantically.
If you are already working with MCP-connected design workflows, this is the natural next step.
Claude Code to Figma vs. Figma Make vs. Figma MCP: What Is the Difference?
These three tools serve different purposes within the same ecosystem. Here is how they compare:
| | Claude Code to Figma (Code to Canvas) | Figma Make | Figma MCP Server |
|---|---|---|---|
| Direction | Code to design | Text/design to code | Bidirectional context layer |
| Starting point | A working UI in a browser | A prompt or an existing design | Any Figma frame or Claude Code session |
| Output | Editable Figma frames | Front-end code or prototype | Structured design context for AI tools |
| Primary user | Developers, technical designers | Designers, non-technical users | Both, as infrastructure |
| Best for | Bringing AI-built prototypes back to the canvas for team review | Generating code directly from designs or natural language | Connecting design systems to AI coding tools |
Figma positioned these as complementary: different starting points, same destination. Figma Make is more accessible to non-engineers. Claude Code to Figma is faster for teams already building full working web apps in the terminal.
What Designers Can Do (Without Writing Code)
Once a coded UI lands on the Figma canvas, designers work in their native environment:
- Side-by-side comparison. Place multiple AI-generated variants next to each other. Spot patterns, gaps, and inconsistencies across flows.
- Structural exploration. Duplicate frames, rearrange steps, test layout changes. No code required to explore a different information hierarchy.
- Annotation and feedback. Leave comments on actual built interfaces, not approximations. PMs, designers, and engineers react to the same artifact at the same fidelity.
- Design system alignment. Check whether the AI-generated UI matches your existing components, tokens, and patterns. Flag inconsistencies before they reach production.
The designer’s role shifts. When AI generates five variants in minutes, the bottleneck is choosing. The canvas is where choosing happens.
Canvas to Code: The Return Trip
The reverse direction matters just as much. Select a frame in Figma, prompt Claude Code with a link to it, and Claude generates production-ready code that respects your design system. It reads your components, tokens, and Tailwind variables. Not a rough approximation. Actual code that matches your system.
This creates a true round-trip workflow:
Design in Figma > Generate code with Claude > Capture working UI back to Figma > Refine on canvas > Push updates back to code
Each cycle preserves context. Nothing gets lost in translation because the same system of record (MCP) connects both environments. For teams working with AI design tools, this is the closest thing to a closed loop between design and development.
Code to Canvas solved how AI-generated UI gets into Figma. But that was just the first step. Now Figma is taking it further, letting AI agents work directly inside the canvas itself.
Not just importing interfaces, but actively creating and modifying them in place.
Known Limitations and Workarounds
This integration changes real workflows, but it comes with constraints worth knowing before you commit to it.
- Terminal-first workflow. Claude Code lives in the command line. Designers unfamiliar with terminal tools will need engineering support for the setup phase, or a brief onboarding session to get comfortable with the three commands they will use repeatedly. Workaround: document your team’s five most-used commands in a shared Notion doc. Most designers are productive after one pairing session with an engineer.
- No direct visual refinement loop. Once you are back in code, adjusting padding, hover states, or spacing requires editing code manually. There is no point-and-click “push to code” from Figma yet. Workaround: use the Figma MCP reverse direction (frame link into a Claude prompt) to describe visual changes in natural language. Claude translates your design intent into code changes without you touching the file directly.
- Multi-screen capture is manual. Converting a full flow requires capturing each screen individually, then arranging the frames on the canvas yourself. Workaround: name your captures systematically before you start (“onboarding-step-1”, “onboarding-step-2”) so the canvas stays organized. Build a Figma template with pre-labeled frame slots for your most common flows.
- Claude Code operates on your live codebase. Changes go into the same files engineers ship. This is not a sandbox. Workaround: run Claude Code sessions on a dedicated feature branch, and treat AI-generated code the same way you would treat any unreviewed PR: review before merging.
- Desktop app required. The MCP server runs through Figma’s desktop application, not the browser version. If your team uses Figma in-browser by default, one team member needs to run the desktop app to act as the MCP host. Workaround: designate a single “MCP workstation” per team (usually the lead designer’s machine) for capture sessions, while other team members collaborate on the resulting canvas from any device.
- Token costs scale with complexity. Larger design files and multi-screen flows consume more Claude tokens. A simple component capture costs very little; a 20-screen flow with annotation requests can add up. Workaround: batch your capture sessions and limit annotation prompts to the highest-value questions. Use Claude for structural decisions; use Figma comments for small feedback.
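The token-cost point is easy to reason about with a back-of-the-envelope sketch. The per-screen and per-prompt figures below are invented placeholders, not published rates; swap in averages from your own usage dashboard before using this for budgeting:

```typescript
// Rough session-cost sketch. Both constants are ILLUSTRATIVE ASSUMPTIONS.
const TOKENS_PER_SCREEN = 2_000;     // assumed average cost per captured frame
const TOKENS_PER_ANNOTATION = 800;   // assumed average cost per annotation prompt

interface CaptureSession {
  screens: number;           // frames captured in the session
  annotationPrompts: number; // follow-up questions sent to Claude
}

function estimateTokens(session: CaptureSession): number {
  return (
    session.screens * TOKENS_PER_SCREEN +
    session.annotationPrompts * TOKENS_PER_ANNOTATION
  );
}

// A single component capture stays cheap: 1 * 2000 + 1 * 800 = 2800 tokens.
const singleCapture = estimateTokens({ screens: 1, annotationPrompts: 1 });

// A 20-screen annotated flow adds up: 20 * 2000 + 10 * 800 = 48000 tokens.
const fullFlow = estimateTokens({ screens: 20, annotationPrompts: 10 });
```

Even with made-up rates, the shape of the math is the takeaway: screens dominate the cost, which is why batching captures and trimming annotation prompts are the two levers worth pulling.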
The Workflow That Actually Works in Practice
The round-trip workflow (Figma to Claude to canvas and back) works best when you treat each direction as a distinct phase rather than a continuous loop.
Phase 1 is divergence. Start in Figma with your design intent: the layout, the component structure, the interaction model. Use the Figma MCP connection to give Claude specific, bounded instructions. “Build this card component exactly as
designed. Use Tailwind. Match the spacing tokens from the frame.” Claude produces a first pass. You capture it to canvas.
Phase 2 is evaluation. On the canvas, run your design review: does this match the system? Are the spacing values correct? Do the interactive states hold up? Annotate directly on the captured frames. Bring in your PM or a second designer. This
is where Figma earns its role: not as a source of truth for final code, but as a shared decision-making surface.
Phase 3 is convergence. Take the annotated canvas decisions back to Claude with a focused brief. “The button spacing is 4px short. The hover state is missing. The mobile breakpoint needs to collapse the grid to single column.” Claude applies
the changes. You capture the result. One more review cycle.
In most cases, two to three cycles produce a component that is closer to spec than a traditional design-to-handoff-to-interpretation pipeline. The key constraint: keep each Claude prompt scoped to a single component or a single change.
Multi-component or multi-change prompts produce harder-to-review outputs and require longer correction cycles.
For teams that have not yet adopted this workflow, the lowest-friction entry point is using the MCP connection for a single, non-critical component in your next sprint. Run one full cycle, review the result alongside your standard handoff,
and compare the output quality and time cost before committing the whole team.
What This Means for the Design-to-Dev Handoff
The bigger story is not about one feature. It is about the direction.
Design tools and coding tools are converging, not as competitors, but as parts of the same system. Figma is betting that AI does not replace the canvas. It feeds the canvas with more options, faster. The designer’s role shifts from producing artifacts to curating and refining what AI generates.
For teams already building with AI coding tools and Figma plugins, this integration removes the last major friction point: getting the work back into a shared space where everyone can contribute.
Code is powerful for converging on a solution. The canvas is powerful for diverging, exploring, and deciding. Now they are connected.