A browser playground where a prompt becomes a real, editable interface in seconds — with streaming, iteration, and full spec transparency.
↗ Try it live — ai2ui.edwinvakayil.info
The problem
Design tools don't run real UI. Code editors don't speak natural language. There's a gap between "I want a login form" and a working, interactive interface — one that existing tools don't close cleanly. v0 and similar tools generate raw code you then have to paste and adapt. There's no place to describe → see → tweak in one continuous loop.
Raw LLM output breaks iterative workflows. If you ask a model to "add a phone field," it regenerates everything from scratch. You lose previous state, edits, and context. A structured spec format was needed so follow-up prompts could patch, merge, or diff — not restart.
How it works
Prompt → Groq streams spec → Client patches live → React renders UI → Follow-up merges
The key architectural decision: instead of generating raw JSX, the LLM outputs a declarative spec — a stable, parseable JSON/YAML format describing components, layout, state, and actions. This spec is the contract between the model and the renderer.
```yaml
type: Stack
props: { direction: "column", gap: 4 }
children:
  - type: Input
    props: { label: "Email", $bindState: "email" }
  - type: Button
    props: { label: "Submit", action: "submit" }
```

Because the spec is structured and the component catalog is fixed, the model can't hallucinate arbitrary HTML or styles; it's constrained to a known set of primitives. This keeps generated UIs consistent without post-processing.
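To make the constraint concrete, here is a minimal sketch of what the renderer side of that contract could look like: spec node types resolve against a fixed registry, and anything outside it is dropped rather than rendered. The names (`SpecNode`, `CATALOG`, `renderNode`) are illustrative, not the project's actual API.

```tsx
import React from "react";

interface SpecNode {
  type: string;
  props?: Record<string, unknown>;
  children?: SpecNode[];
}

// The fixed component catalog: the only primitives the model may reference.
const CATALOG: Record<string, React.ComponentType<any>> = {
  Stack: ({ direction, gap, children }: any) => (
    <div style={{ display: "flex", flexDirection: direction, gap: gap * 4 }}>
      {children}
    </div>
  ),
  Input: ({ label }: any) => <label>{label}<input /></label>,
  Button: ({ label }: any) => <button>{label}</button>,
};

function renderNode(node: SpecNode, key?: React.Key): React.ReactNode {
  const Component = CATALOG[node.type];
  if (!Component) return null; // unknown type: drop it, never emit raw HTML
  return (
    <Component key={key} {...node.props}>
      {node.children?.map((child, i) => renderNode(child, i))}
    </Component>
  );
}
```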
What makes iteration work
Follow-up prompts send the current spec as context. The model responds with a patch, merge, or diff — so "add a phone field" extends the form rather than regenerating it. This is the core behaviour that makes the tool feel like a real design loop rather than a sequence of one-shot generations.
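One plausible shape for that patch step, sketched below with an assumed JSONL op format (`op`/`path`/`value`, in the spirit of JSON Patch); the project's real patch schema isn't shown here, so treat the names as hypothetical:

```ts
interface SpecPatch {
  op: "add" | "replace" | "remove";
  path: (string | number)[]; // e.g. ["children", 1]
  value?: unknown;
}

function applyPatch(spec: any, patch: SpecPatch): any {
  const next = structuredClone(spec);
  let parent = next;
  for (const key of patch.path.slice(0, -1)) parent = parent[key];
  const last = patch.path[patch.path.length - 1];
  if (patch.op === "remove") {
    if (Array.isArray(parent)) parent.splice(last as number, 1);
    else delete parent[last];
  } else if (patch.op === "add" && Array.isArray(parent)) {
    parent.splice(last as number, 0, patch.value);
  } else {
    parent[last] = patch.value;
  }
  return next;
}

// "Add a phone field" arrives as one insertion, not a full regeneration:
// applyPatch(spec, { op: "add", path: ["children", 1],
//   value: { type: "Input", props: { label: "Phone", $bindState: "phone" } } });
```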
What you see
Live preview — rendered React UI
Spec view — JSON or YAML
Raw stream — tokens as they arrive
Generated JSX — copy-ready code
Component catalog — what the model can use
Version history — per-prompt snapshots
The hard parts
Streaming parser robustness. Supporting both JSONL patches and YAML fenced blocks meant building a client-side parser that handles partial, malformed, or truncated streams without breaking the live preview mid-generation.
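For the JSONL path, the tolerant-parsing idea reduces to buffering the incomplete tail of each chunk and skipping lines that don't parse. A minimal sketch, with assumed names, of that buffering loop:

```ts
// One patch per line; partial trailing lines are buffered, malformed lines
// are skipped, so a truncated stream never breaks the live preview.
function createJsonlParser(onPatch: (patch: unknown) => void) {
  let buffer = "";
  return (chunk: string) => {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep the incomplete tail for the next chunk
    for (const line of lines) {
      if (!line.trim()) continue;
      try {
        onPatch(JSON.parse(line));
      } catch {
        // Malformed line (e.g. model noise or a cut-off token): skip, don't crash.
      }
    }
  };
}
```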
State ownership during streaming. Keeping the current spec, selected version, and follow-up context in sync while tokens are arriving required careful separation between refs (for the submit handler) and state (for versioned history).
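A sketch of that ref/state split, under the assumption that streaming handlers read the in-flight spec through a ref (avoiding stale closures) while committed snapshots live in state so the history UI re-renders; `useSpecSession` and its members are hypothetical names, and `applyPatch` refers to the earlier sketch:

```tsx
import { useCallback, useRef, useState } from "react";

declare function applyPatch(spec: any, patch: any): any; // from the patch sketch above

function useSpecSession() {
  const [versions, setVersions] = useState<any[]>([]); // per-prompt snapshots (state: drives UI)
  const currentSpec = useRef<any>(null);               // always-fresh spec during streaming (ref: no re-render)

  const onPatch = useCallback((patch: any) => {
    currentSpec.current = applyPatch(currentSpec.current ?? {}, patch);
  }, []);

  const commitVersion = useCallback(() => {
    setVersions((v) => [...v, structuredClone(currentSpec.current)]);
  }, []);

  return { versions, currentSpec, onPatch, commitVersion };
}
```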