May 4, 2026
DeepSeek V4: Don't Look at What You Don't Need
DeepSeek V4 reads a million tokens on roughly a quarter of V3.2's compute. It does this by selectively attending to the parts of context the prompt asks about, the same way humans skim a long book.
llm · ai · attention · deepseek
Read article
April 30, 2026
Karpathy's Wiki, OpenBrain, and Picking a Context Layer for AI
Two memory models for AI work: one synthesises on write, one on query. Which to pick depends on whether you're a solo researcher or running a multi-agent system.
llm · architecture · tooling
Read article
ilo: A Programming Language for AI Agents, Not Humans
April 29, 2026
OpenUISpec and the Case for Telling LLMs What's On the Page
A spec for AI-readable component libraries. The same idea would cut the cost of scraping the web in half, if anyone published it.
llm · tooling · scraping · ilo
Read article
ilo: A Programming Language for AI Agents, Not Humans
April 9, 2026
Six Ways to Give an AI Agent a Map of Your Code
Prowl, CodeBoarding, Memgraph GraphRAG, CodeGraphContext, SymDex, and jCodeMunch all solve the same problem differently. So does ilo graph. A comparison.
llm · tooling · ilo · mcp
Read article
ilo: A Programming Language for AI Agents, Not Humans
April 8, 2026
SudoLang and ilo: Two Opposite Bets on AI Programming
SudoLang asks the LLM to interpret pseudocode. ilo gives the LLM a real compiler. Both claim to be AI-first languages. They agree on almost nothing.
ilo · language-design · ai-agents · llm
Read article