Tagged: llm
Showing 26–30 of 36 articles
Setting Up an Autonomous AI Agent on a VPS with OpenClaw
Installing OpenClaw on a Hetzner VPS, connecting Telegram and ProtonMail, and trying to get a model to act without asking permission first.

Why I Switched From Cursor to Claude Code
After a year with Cursor, I moved to Claude Code's CLI. No IDE lock-in, better context management, and a workflow built around plans and persistent memory.

The Hidden Cost of Embedding
OpenAI's embedding API charges per token across ingestion, re-ingestion, and every query. Switching to a local Ollama model eliminated the recurring cost with comparable retrieval quality.

Tool-Forced RAG: Stopping the LLM From Making Up Clinical Guidelines
LLMs confidently generate plausible clinical advice that doesn't match published standards. Forcing document retrieval for professional questions prevents this.

Per-Channel vs Per-User: What Memory Model Works for AI Chat
Slack conversations are collaborative. Persisting context per channel captures the full picture. Per-user memory misses half the conversation.