How MCP Turns Any AI Into a Design Tool
A technical deep dive into how MCP works with Playas — from protocol to canvas in real time.
The Missing Link
For years, the dream was simple: talk to an AI, get a design. But there was always a missing link. The AI could generate descriptions, even code — but it couldn't manipulate a visual canvas. It couldn't create a node, set its styles, and render it in real-time on your screen.
MCP (Model Context Protocol) is that missing link. It turns any AI into a design tool by giving it hands — the ability to reach into Playas and directly create, modify, and arrange design elements on a living canvas.
Here's how it actually works.
What Is MCP?
MCP is an open standard, originally created by Anthropic, that lets AI models interact with external tools through a structured interface. Instead of the AI just generating text, MCP lets it call functions — tools — that perform real actions in the outside world.
Think of it like this: without MCP, your AI is a brain in a jar. It can think and speak, but it can't do anything. MCP gives it hands, eyes, and a connection to real applications.
The protocol defines a simple contract: the tool (Playas) publishes a list of available operations and their parameters. The AI client (Claude, ChatGPT, Cursor) discovers these operations and can call them as part of a conversation. Results flow back to the AI so it can decide what to do next.
It's like a universal API that any AI can discover and use without custom integration.
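To make that contract concrete, here is a minimal sketch of the tool-definition shape an MCP server publishes in response to a tools/list request. The field names (name, description, inputSchema) follow the MCP specification; the apply_patches schema details and the missingArgs helper are illustrative, not Playas's actual code.

```typescript
// Sketch of an MCP tool definition, as published via tools/list.
// The inputSchema is a JSON Schema describing the tool's parameters.
interface McpToolDefinition {
  name: string;
  description: string;
  inputSchema: {
    type: "object";
    properties: Record<string, { type: string; description?: string }>;
    required?: string[];
  };
}

// Illustrative definition for the apply_patches tool described below.
const applyPatchesTool: McpToolDefinition = {
  name: "apply_patches",
  description: "Apply patches to a page's node tree.",
  inputSchema: {
    type: "object",
    properties: {
      pageId: { type: "string", description: "Target page" },
      patches: { type: "array", description: "Patch objects" },
    },
    required: ["pageId", "patches"],
  },
};

// An AI client reads the schema to learn which arguments are required
// before calling the tool (hypothetical helper).
function missingArgs(
  tool: McpToolDefinition,
  args: Record<string, unknown>
): string[] {
  return (tool.inputSchema.required ?? []).filter((k) => !(k in args));
}
```

Because the schema travels with the tool, the AI never needs hand-written documentation to call it correctly.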
What Playas Exposes via MCP
When you connect your AI to Playas, it discovers a set of tools that give it full control over your design canvas:
list_projects — See all your Playas projects and their pages.
get_tree — Read the current node tree of any page. The AI can inspect what's already on the canvas before making changes.
apply_patches — The core tool. Send Immer patches to modify the node tree. Add nodes, remove nodes, change styles, rearrange children — any mutation is expressed as a patch.
create_page — Add new pages to a project.
navigate — Switch the canvas to a specific page so the user sees the right context.
set_mode — Switch between edit and view modes on the canvas.
get_status — Check the current state of the editor: which page is open, which device is selected, whether the user is in edit or view mode.
These tools compose naturally. The AI reads the current tree, decides what to change, sends patches, and the canvas updates instantly. It's a conversation between AI and canvas, mediated by MCP.
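A single step in that conversation travels over MCP's JSON-RPC 2.0 framing. The method name "tools/call" comes from the MCP specification; the page id and patch contents below are illustrative only.

```typescript
// Sketch of the JSON-RPC envelope an MCP client sends to invoke a tool.
// "tools/call" is the MCP method for tool invocation; the arguments
// shown here are hypothetical.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "apply_patches",
    arguments: {
      pageId: "pricing",
      patches: [
        {
          op: "replace",
          path: "/children/0/style/backgroundColor",
          value: "#1f2937",
        },
      ],
    },
  },
};
```

The server's response flows back through the same channel, so the AI can read the result and plan its next call.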
The Real-Time Architecture
Here's where it gets interesting. When your AI sends patches through MCP, how do they appear on your canvas instantly?
The architecture uses Server-Sent Events (SSE) for real-time streaming:
Your AI --> MCP --> Playas API --> Database
                         |
                    SSE Stream
                         |
                 Browser --> Canvas
Step 1: Your AI (in Claude Desktop, ChatGPT, etc.) decides to modify the design. It calls apply_patches through MCP with a set of Immer patches.
Step 2: The Playas API receives the patches, validates them, and applies them to the stored node tree.
Step 3: The API pushes the patches through an SSE stream to your browser.
Step 4: The browser's Zustand store applies the patches to the local tree using Immer, and React re-renders the canvas.
The result: you type "add a footer with three columns" in Claude Desktop, and it appears on your Playas canvas in real time. No page refresh. No polling. Instant, streaming updates.
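Step 4 can be sketched in a few lines. Playas applies patches with Immer inside a Zustand store; this dependency-free applier only illustrates the mechanics for "replace" operations, and the stream endpoint name is an assumption.

```typescript
// Minimal sketch of step 4: applying a streamed "replace" patch to the
// browser's local node tree.
type NodeTree = { [key: string]: any };

function applyReplace(tree: NodeTree, path: string, value: unknown): NodeTree {
  const keys = path.split("/").filter(Boolean);
  const next = structuredClone(tree); // never mutate the previous tree
  let cursor: any = next;
  for (const key of keys.slice(0, -1)) cursor = cursor[key];
  cursor[keys[keys.length - 1]] = value;
  return next; // a new reference, so React re-renders the affected subtree
}

// Browser-side wiring (hypothetical endpoint and store names):
// const es = new EventSource("/api/stream");
// es.onmessage = (e) => {
//   const { path, value } = JSON.parse(e.data);
//   useStore.setState((s) => ({ tree: applyReplace(s.tree, path, value) }));
// };
```

Returning a fresh tree rather than mutating in place is what lets React detect the change and re-render without any polling.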
Why Patches, Not Full Trees
A critical design decision in the architecture: changes are expressed as patches, not as full tree replacements.
When your AI says "make the header background darker," it doesn't regenerate the entire page. It sends a single patch:
[
  {
    "op": "replace",
    "path": "/children/0/style/backgroundColor",
    "value": "#1f2937"
  }
]
This matters for three reasons:
Speed. A patch targeting one property is tiny. The full tree for a complex page could be thousands of nodes. Sending the diff instead of the whole thing means instant updates, even on complex designs.
Undo/redo. Every patch has an inverse. Applying a patch moves forward; applying its inverse moves backward. This gives Playas granular undo/redo — not "revert to a previous snapshot" but "undo exactly the last change the AI made."
Conflict safety. If you're manually editing the canvas while the AI is generating, patches are isolated. The AI's change to the header doesn't conflict with your manual edit to the footer. Patches target specific paths in the tree, so concurrent edits coexist naturally.
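The undo/redo claim follows directly from the patch shape: the inverse of a "replace" is another "replace" carrying the value about to be overwritten. Immer's produceWithPatches computes inverse patches automatically; this standalone sketch just shows why the inverse exists.

```typescript
// Sketch of inverse-patch construction for "replace" operations.
type ReplacePatch = { op: "replace"; path: string; value: unknown };

// Walk a slash-delimited path down the tree (illustrative helper).
function getAtPath(tree: any, path: string): unknown {
  return path
    .split("/")
    .filter(Boolean)
    .reduce((node, key) => node[key], tree);
}

function invertReplace(tree: any, patch: ReplacePatch): ReplacePatch {
  // Capture the current value *before* the patch is applied; replaying
  // this inverse later restores exactly that value.
  return { op: "replace", path: patch.path, value: getAtPath(tree, patch.path) };
}
```

Recording each patch alongside its inverse yields the granular history the article describes: redo replays the patch, undo replays the inverse.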
A Real Session: What the AI Sees
Let's trace a real interaction. You open Claude Desktop with Playas connected and type:
"Create a pricing page with three tiers: Starter, Pro, and Enterprise"
Here's what happens behind the scenes:
1. Claude sees the available Playas MCP tools. It decides to first check the current state by calling get_tree.
2. The tree comes back empty (new page). Claude plans the layout.
3. Claude calls apply_patches with patches that create the full node tree: a container with a heading, three pricing cards, each with a tier name, price, feature list, and CTA button. All with real CSS — flexbox for layout, proper spacing, colors, typography.
4. The patches stream to your browser via SSE. You see the pricing page materialize on the canvas.
5. Claude responds in chat: "I've created a pricing page with three tiers. The Pro tier is highlighted as the recommended option. Would you like me to adjust anything?"
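For a feel of what step 3's payload might look like, here is an illustrative set of "add" patches building the page skeleton. The node shapes, style values, and tier structure are assumptions for the example, not Playas's actual schema.

```typescript
// Hypothetical apply_patches payload creating a pricing-page skeleton:
// one heading node, then a flex container holding three tier cards.
const createPatches: { op: string; path: string; value: any }[] = [
  { op: "add", path: "/children/0", value: { type: "heading", text: "Pricing" } },
  {
    op: "add",
    path: "/children/1",
    value: {
      type: "container",
      style: { display: "flex", gap: "24px" },
      children: [
        { type: "card", title: "Starter", price: "$0" },
        { type: "card", title: "Pro", price: "$29", highlighted: true },
        { type: "card", title: "Enterprise", price: "Contact us" },
      ],
    },
  },
];
```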
Now you say: "Make the Enterprise tier darker with white text, like a premium feel."
6. Claude calls get_tree to see the current state (including any manual edits you may have made).
7. Claude calls apply_patches with targeted patches — only the Enterprise card's background and text colors change.
8. The canvas updates in real time. Your other cards are untouched.
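The targeted update in the last exchange could be as small as two replace operations. The node path below (third card inside the second top-level child) and the color values are assumptions about the tree's layout.

```typescript
// Illustrative payload for the "premium Enterprise tier" request:
// only two style properties change; everything else is left alone.
const enterprisePatches = [
  {
    op: "replace",
    path: "/children/1/children/2/style/backgroundColor",
    value: "#111827",
  },
  {
    op: "replace",
    path: "/children/1/children/2/style/color",
    value: "#ffffff",
  },
];
```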
This back-and-forth continues as long as you want. Every interaction is your AI, your subscription, your conversation. Playas is just the canvas that makes it visual.
Why MCP Beats a Proprietary API
Playas could have built a custom API for each AI provider. A Claude integration, a ChatGPT integration, a Gemini integration. Many tools do this. Here's why MCP is better:
One integration, every AI. We built the MCP server once. It works with Claude, ChatGPT, Cursor, VS Code, and every future AI client that adopts MCP. No maintenance per provider.
The AI discovers capabilities. We don't need to document our API for each AI separately. The MCP server publishes tool definitions, and the AI reads them automatically. Update a tool, and every AI immediately knows about the new capability.
Open standard, no lock-in. MCP isn't controlled by Playas or any single company. It's an open protocol. If a better design tool launches tomorrow with MCP support, your AI can connect to it too. Competition happens on product quality, not on proprietary integrations.
User-controlled authorization. MCP includes an authorization flow. You explicitly grant your AI access to your Playas projects. The AI can only do what you've permitted. Revoke access anytime.
The Bigger Picture
MCP isn't just about Playas. It's the beginning of a world where your AI is the universal interface to every tool you use.
Design in Playas. Code in Cursor. Write in Notion. Manage projects in Linear. All through the same AI, using MCP as the connector. One conversation, many tools.
Playas is one of the first design tools built for this world. We don't compete with AI — we extend it. Your AI brings the intelligence. We bring the canvas.
Connect your AI to Playas via MCP, and see what it feels like when any AI becomes a design tool. The protocol is open, the setup takes 30 seconds, and the result is a design workflow that's faster, cheaper, and more flexible than anything a proprietary AI wrapper can offer.