Show HN: TurnZero – Persistent Expert for LLMs
2 points by dmilicev2 10 hours ago
In an attempt to reduce cold starts in AI sessions, I've made a tool that runs as an MCP server and loads context before Turn 0. Two things happen:

- Personal Priors: your workflows and standards load once per session and persist across every supported AI client.
- Expert Priors: when a prompt is stack-specific, relevant priors are injected based on semantic similarity.

The goal is to reduce errors and unwanted behaviour from the AI.

Privacy guarantee: local-first by design. Raw prompts are never stored. Injection is always client-side.

```bash
pipx install turnzero
turnzero setup   # registers MCP server with Claude Code, Cursor, Claude Desktop, Gemini CLI
turnzero verify  # confirms everything is wired correctly
```
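To make the Expert Priors idea concrete, here is a minimal sketch of similarity-gated prior selection. This is not TurnZero's actual implementation; the `PRIORS` table, the toy bag-of-words "embedding", and the `select_priors` helper are all hypothetical stand-ins (a real system would use a proper sentence encoder), but the shape is the same: score each stored prior against the incoming prompt and inject only those above a threshold.

```python
import re
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a sentence encoder.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical stack-specific expert priors.
PRIORS = {
    "react": "Prefer function components and hooks; avoid class components.",
    "postgres": "Always wrap schema migrations in a transaction.",
}

def select_priors(prompt: str, threshold: float = 0.15) -> list[str]:
    # Inject only priors whose similarity to the prompt clears the threshold.
    p = embed(prompt)
    return [text for key, text in PRIORS.items()
            if cosine(p, embed(key + " " + text)) >= threshold]
```

With this sketch, a prompt like "how do I write a react component with hooks" would pull in only the React prior, leaving the Postgres one out of the context.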