OpenUISpec is a YAML or JSON file you drop at the root of a component library that describes every component, its props, types, defaults, and enums. The pitch is “AI-Native Specification for UIs”. The point is to stop AI tools having to read your docs site, your Storybook, and your TypeScript definitions to figure out what a Button accepts.
The shadcn example covers 50+ components. The accordion entry looks like this:
accordion:
  description: "A vertically stacked set of interactive headings that each reveal a section of content."
  props:
    type:
      enum: ["single", "multiple"]
      description: "Determines whether one or multiple items can be opened."
    collapsible:
      type: boolean
      description: "Allows closing content when clicking trigger for open item."
Five fields (name, description, props, types, defaults) are what a model needs to use the component correctly.
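To make that concrete, here is a minimal sketch of what an agent-side lookup against a spec entry could look like. It uses the JSON form of the spec (OpenUISpec allows YAML or JSON) and mirrors the accordion entry above; the function names and file layout are my assumptions, not part of the spec.

```python
import json

# Hypothetical openui.json entry mirroring the accordion example above.
SPEC = json.loads("""
{
  "accordion": {
    "description": "A vertically stacked set of interactive headings.",
    "props": {
      "type": {"enum": ["single", "multiple"],
               "description": "Whether one or multiple items can be opened."},
      "collapsible": {"type": "boolean",
                      "description": "Allows closing the open item."}
    }
  }
}
""")

def valid_props(component: str) -> set[str]:
    """Answer 'what props does this component accept?' from the spec alone."""
    return set(SPEC[component]["props"])

def allowed_values(component: str, prop: str):
    """Return the enum for a prop, or None if any value of its type is fine."""
    return SPEC[component]["props"][prop].get("enum")

print(sorted(valid_props("accordion")))      # ['collapsible', 'type']
print(allowed_values("accordion", "type"))   # ['single', 'multiple']
```

Both questions are answered from one small file, with no docs page or TypeScript source in the loop.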
Why this is the right shape
When an LLM has to “use” a component library today, it does one of three things. It guesses from training data and gets the props wrong. It reads the docs site, which is 90% layout and navigation. Or it reads the source, which means parsing TypeScript and JSX to recover information that the library author already knows.
A spec file collapses all three into one fetch. The model gets the contract from the source instead of reconstructing it from rendered pages.
This is the same pattern as openapi.yaml, robots.txt, sitemap.xml, and llms.txt: a small file at a known location that says what’s available, so machines don’t have to infer from a rendered page.
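The known-location part of that pattern is simple to implement. A sketch of robots.txt-style discovery, assuming candidate paths (the project may standardise a different location) and taking the fetch function as a parameter so it works with any HTTP client:

```python
from typing import Callable, Optional

# Candidate well-known paths are an assumption, not a published standard.
CANDIDATES = ["/openui.yaml", "/openui.json", "/.well-known/openui.yaml"]

def discover(base: str, fetch: Callable[[str], Optional[str]]) -> Optional[str]:
    """Try each well-known path in order; return the first spec body found."""
    for path in CANDIDATES:
        body = fetch(base.rstrip("/") + path)
        if body is not None:
            return body
    return None

# Usage with a stub fetcher standing in for an HTTP GET:
site = {"https://example.com/openui.json": '{"button": {}}'}
spec = discover("https://example.com", site.get)
print(spec)  # {"button": {}}
```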
Scraping is the bigger prize
Component libraries are a small slice of the problem. The same shape would help anywhere an agent has to read a rendered page to extract structured data. Point an agent at a product page today and it loads 200KB of HTML, half of which is analytics and consent banners. It pays for every token of that. Then it pattern-matches on currency symbols to find the price.
Schema.org markup helps a bit, when sites bother. But schema.org is verbose, often wrong, and assumes the consumer is a search crawler.
A spec file for data, at /openui.yaml or an equivalent known path, could declare:
product:
  fields:
    sku: { type: string, selector: "[data-sku]" }
    price: { type: number, currency: "GBP", selector: ".price" }
    stock: { type: integer, selector: "[data-stock]" }
An agent fetches that once and extracts the fields directly from every product page on the site, so the token bill stops scaling with page weight.
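A sketch of the extraction side, using only the standard library. It supports just the two selector shapes in the example above (`[attr]` presence and `.class`); a real agent would use a proper CSS selector engine, and the field names here are taken from the hypothetical product spec, not any published one.

```python
from html.parser import HTMLParser

class FieldExtractor(HTMLParser):
    """Collect the text of elements matching a tiny selector subset:
    '.cls' (class match) or '[attr]' (attribute presence)."""
    def __init__(self, selector: str):
        super().__init__()
        self.selector = selector
        self.depth = 0          # > 0 while inside a matching element
        self.text = []

    def _matches(self, attrs) -> bool:
        attrs = dict(attrs)
        if self.selector.startswith("."):
            return self.selector[1:] in (attrs.get("class") or "").split()
        if self.selector.startswith("["):
            return self.selector[1:-1] in attrs
        return False

    def handle_starttag(self, tag, attrs):
        if self.depth or self._matches(attrs):
            self.depth += 1

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.text.append(data)

def extract(html: str, spec_fields: dict) -> dict:
    """Pull each declared field out of a page using its selector."""
    out = {}
    for name, field in spec_fields.items():
        parser = FieldExtractor(field["selector"])
        parser.feed(html)
        raw = "".join(parser.text).strip()
        out[name] = float(raw) if field["type"] == "number" else raw
    return out

PAGE = '<div data-sku>AB-123</div><span class="price">49.99</span>'
FIELDS = {
    "sku":   {"type": "string", "selector": "[data-sku]"},
    "price": {"type": "number", "selector": ".price"},
}
print(extract(PAGE, FIELDS))  # {'sku': 'AB-123', 'price': 49.99}
```

The model never sees the 200KB page at all: the deterministic extractor runs outside the loop, and only the structured result costs tokens.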
Token economics
Reading the rendered shadcn docs to learn the Button API means downloading the marketing site, finding the right page, parsing the rendered example, and inferring the props from what you see. That’s tens of thousands of tokens before the model has answered anything.
The OpenUISpec entry for Button is a few hundred bytes.
In an agent loop where the model calls back repeatedly to check what props exist, that ratio compounds. The same applies to scraping a site for data.
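The arithmetic behind that ratio, with rough illustrative numbers rather than measurements (assuming roughly 4 bytes per token):

```python
# Rough numbers, not measurements: a ~200 KB rendered docs page vs. a
# ~300-byte spec entry, at an assumed ~4 bytes per token.
docs_tokens = 200_000 // 4   # ~50,000 tokens to read the rendered page
spec_tokens = 300 // 4       # ~75 tokens for the spec entry

per_call_ratio = docs_tokens // spec_tokens
calls = 10                   # an agent loop re-checking props ten times

print(per_call_ratio)                        # ~666x cheaper per lookup
print((docs_tokens - spec_tokens) * calls)   # tokens saved over the loop
```

The exact figures matter less than the shape: the saving is per lookup, so it multiplies with every turn of the loop.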
Same instinct as ilo
This is the bet ilo-lang is built on, just at the language layer instead of the docs layer. ilo strips a program down to the smallest token count that still expresses the work, because every token an agent writes or reads is paid for and slows the loop down.
OpenUISpec is the same trade for libraries. Don’t make the model read a marketing site to find a prop name. Hand it the contract directly, in the smallest form that still answers the question.
The same trade shows up in AGENTS.md, llms.txt, and agent skills: each one is a small machine-readable artefact that replaces a larger thing the model would otherwise have to infer. ilo applies the same idea at the language layer, so the inference step over syntax doesn’t have to happen.
What’s missing
OpenUISpec works only if libraries adopt it. So far the examples on the site are demonstrations, not specs published by the libraries themselves. Until shadcn ships an openui.yaml of its own, agents still have to read the docs.
Same problem on the data side. The sites whose data is most valuable to scrape have the least incentive to make scraping easy. So the spec-at-a-known-location pattern only takes off where the publisher actively wants AI tools to consume the content. Component libraries do. Most product pages don't.
The fix mirrors robots.txt: a small standard at a known path, with enough early adopters that not having one starts to look like a bug.