ilo works best when you think of it not as a language, but as a typed shell for AI agents. Just as bash discovers executables on $PATH, ilo discovers typed tools from configured sources and lets agents compose them with verified types and error handling.
What bash got right
Bash commands are mini programs. Each one is self-contained, has a universal interface (stdin/stdout/exit code), is discoverable on $PATH, and composes with any other command via |.
ilo functions follow the same shape:
| Bash | ilo |
|---|---|
| Self-contained binary | Self-contained function with declared deps |
| stdin/stdout/stderr | Typed params → typed result (ok/err) |
| $PATH discovery | Tool graph registration |
| cmd1 \| cmd2 | Sequential binding + ? matching |
| Exit codes | Typed error variants |
The Unix philosophy maps directly: do one thing well (small units), expect output to become input (composable), don’t require interactive input (agent-friendly).
What bash got wrong for agents
No types. Everything is text. jq output looks the same as an error message. An agent parsing bash output has to guess where the data ends and the noise begins.
Silent failures. curl can fail and the pipeline continues with empty input. The agent doesn’t know something went wrong until the final output is garbage. In ilo, every call returns ok or err, and unhandled errors are a verification failure.
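The curl failure mode above comes down to bash's pipeline semantics: the pipeline's exit status is that of the last command, so an upstream failure is invisible unless pipefail is set. A minimal Python sketch of this, assuming bash is available at /bin/bash:

```python
import subprocess

# Without pipefail, a pipeline's exit status is that of its LAST command,
# so an upstream failure (here, `false` standing in for a failed curl)
# is invisible to the caller.
quiet = subprocess.run("false | cat", shell=True, executable="/bin/bash")
print(quiet.returncode)  # 0 -- the failure vanished

# With pipefail, the first failing stage surfaces.
loud = subprocess.run("set -o pipefail; false | cat",
                      shell=True, executable="/bin/bash")
print(loud.returncode)   # 1
```

This is the behaviour ilo's ok/err results replace: the failure is a value in the result, not a side channel the agent has to remember to check.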
Text parsing tax. Agents must generate grep, awk, sed patterns to extract structured data from text streams. Each pattern costs tokens and often needs retrying when the output shape shifts. ilo passes typed records, not text.
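The difference can be sketched in Python (the field names and output string are made up for illustration): extracting a value from a text stream with a regex versus reading it from a typed record.

```python
import re

# Text stream: the agent must invent a pattern that matches the output
# shape, and re-guess it whenever the formatting shifts.
text_output = "user: ada  id: 42  status: active"
match = re.search(r"id:\s*(\d+)", text_output)
user_id_from_text = int(match.group(1)) if match else None

# Typed record: the field is addressed by name. No pattern, no retries.
typed_output = {"user": "ada", "id": 42, "status": "active"}
user_id_from_record = typed_output["id"]

print(user_id_from_text, user_id_from_record)  # 42 42
```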
Quoting. Bash’s escaping rules cost tokens: nested quotes, dollar-sign expansion, whitespace sensitivity. Each one causes retries. ilo has no quoting because there’s no string interpolation in the shell sense.
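For concreteness, here is what correct quoting demands, sketched in Python with the standard-library shlex.quote (the filename is a made-up example):

```python
import shlex

# One value with a single quote, double quotes, whitespace, and a dollar
# sign: four separate escaping hazards in a single argument.
name = "it's a \"draft\" $file"

# shlex.quote produces a form a POSIX shell reads back verbatim. An agent
# writing raw bash has to reproduce this escaping by hand, correctly,
# every time the value is nested one level deeper.
print(shlex.quote(name))
print(f"grep -- {shlex.quote(name)} log.txt")
```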
Implicit composition
ilo’s ? operator works like an implicit pipe. The result of the previous call flows directly into the match without a variable binding:
get-user uid;?{!e:handle-error;~data:use-data}
This is equivalent to get-user uid | match in a hypothetical typed bash. No intermediate variable needed for single-use results.
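In Python terms, the ? match is roughly the following (get_user, match, and the tuple encoding of ok/err are hypothetical helpers, not ilo's runtime API):

```python
# A result is either ("ok", value) or ("err", error) -- one way to model
# ilo's ok/err shape; the real runtime representation may differ.
def get_user(uid):
    if uid == 0:
        return ("err", "unknown user")
    return ("ok", {"id": uid, "name": "ada"})

def match(result, ok, err):
    """The ? operator: the previous call's result flows straight in,
    with no intermediate variable binding."""
    tag, payload = result
    return ok(payload) if tag == "ok" else err(payload)

# get-user uid;?{!e:handle-error;~data:use-data}
out = match(get_user(42),
            ok=lambda data: f"hello {data['name']}",
            err=lambda e: f"failed: {e}")
print(out)  # hello ada
```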
Explicit binding is only needed when a value is referenced more than once or later:
rid=reserve items;charge pid amt;?{!e:release rid;!+"Payment failed"...}
Here rid must be named because it’s used in the error-compensation branch. If the charge fails, we need to release the reservation. Bash handles this with tee or temp files, which is worse.
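A Python sketch of the compensation flow (reserve, charge, and release are hypothetical stand-ins; the point is that rid must be bound because the error branch reuses it):

```python
reservations = set()

def reserve(items):
    # Hypothetical: record a reservation and return its id.
    rid = f"r-{len(reservations)}"
    reservations.add(rid)
    return ("ok", rid)

def charge(pid, amt):
    # Hypothetical: decline anything over 100.
    return ("err", "card declined") if amt > 100 else ("ok", None)

def release(rid):
    reservations.discard(rid)

# rid=reserve items;charge pid amt;?{!e:release rid;...}
_, rid = reserve(["book"])
tag, payload = charge("p1", 250)
if tag == "err":
    release(rid)  # compensate: the reservation is undone before reporting
    print(f"Payment failed: {payload}")  # Payment failed: card declined
```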
The graph loading problem
The open question: if ilo is a shell, and the agent gets a “world” of available tools, how do you load 500 tools and 200 types without blowing the context window?
A few options:
Full graph. Load everything upfront. Only works for small projects. 500 tools with full signatures costs thousands of tokens before the agent writes a line.
Subgraph by task. Something decides which slice of the world is relevant. But who decides? An outer agent? A retrieval system? This feels like it just moves the problem.
Query on demand. The agent starts with nothing and asks the runtime “what tools can do X?” Total context cost: 2 tool signatures instead of 500. But now you need a runtime that can answer natural-language queries about its capabilities.
Progressive disclosure. Load tool names first (cheap, just a list of strings), then load full signatures on demand. The agent gets an overview for free and pays only for what it uses.
Progressive disclosure minimises tokens while keeping the agent informed. Not yet built.
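Since it's not yet built, here is one possible shape for it, sketched in Python (the registry, signatures, and function names are all hypothetical): names are cheap to list, and full signatures are paid for only on demand.

```python
# Hypothetical tool registry. In a real runtime this would be backed by
# the tool graph, not a dict.
SIGNATURES = {
    "get-user": "get-user(uid: Int) -> ok(User) | err(NotFound)",
    "reserve":  "reserve(items: [Str]) -> ok(Rid) | err(OutOfStock)",
    "charge":   "charge(pid: Str, amt: Int) -> ok(Receipt) | err(Declined)",
}

def list_tools():
    """Cheap overview: just names, a few tokens each."""
    return sorted(SIGNATURES)

def describe(name):
    """Full signature, loaded only once the agent commits to a tool."""
    return SIGNATURES[name]

print(list_tools())         # the free overview
print(describe("charge"))   # the on-demand detail
```

The agent's context starts with the name list and grows only by the signatures it actually asks for, instead of carrying all 500 from the start.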
Where ilo sits
ilo sits between bash and traditional languages:
- Bash: implicit pipes, no types, no verification, text everywhere
- Traditional languages: explicit everything, types, verbose, lots of ceremony
- ilo: implicit where safe (? matching), explicit where needed (multi-use values), types verified before execution
The composition model is Unix pipes with a type checker. Programs should feel like shell scripts, sequences of tool calls with branching, not like class hierarchies or module systems.