Tagged: llm
Showing 6–10 of 36 articles
DeepSeek V4: Don't Look at What You Don't Need
DeepSeek V4 reads a million tokens on roughly a quarter of V3.2's compute. It does this by selectively attending to the parts of context the prompt asks about, the same way humans skim a long book.
Read article

Karpathy's Wiki, OpenBrain, and Picking a Context Layer for AI
Two memory models for AI work. One synthesises on write, one on query. Which to pick depends on whether you're a solo researcher or running multi-agent.
Read article

OpenUISpec and the Case for Telling LLMs What's On the Page
A spec for AI-readable component libraries. The same idea would cut the cost of scraping the web in half, if anyone published it.
Read article

Six Ways to Give an AI Agent a Map of Your Code
Prowl, CodeBoarding, Memgraph GraphRAG, CodeGraphContext, SymDex, and jCodeMunch all solve the same problem differently. So does ilo graph. A comparison.
Read article

SudoLang and ilo: Two Opposite Bets on AI Programming
SudoLang asks the LLM to interpret pseudocode. ilo gives the LLM a real compiler. Both claim to be AI-first languages. They agree on almost nothing.
Read article