
Topic analysis

Show HN: Hippo, biologically inspired memory for AI agents

The secret to good memory isn't remembering more. It's knowing what to forget.

AI agents forget everything between sessions. Existing solutions just save everything and search later. That's a filing cabinet, not a brain. Your memories are also trapped: ChatGPT knows things Claude doesn't, Cursor rules don't travel to Codex, and switching tools means starting from zero.

hippo init auto-detects your agent framework and wires itself in. If you have a CLAUDE.md, it patches it; AGENTS.md for Codex/OpenClaw; .cursorrules for Cursor. No manual hook install needed. Your agent starts using Hippo on its next session. That's it: you have a memory system.

hippo init also sets up a daily cron job (6:15am) that runs hippo learn --git and hippo sleep automatically, so memories get captured from your commits and consolidated every day without you thinking about it. To skip this: hippo init --no-hooks --no-schedule.

Your memories shouldn't be locked inside one tool, so Hippo pulls them in from anywhere. All import commands support --dry-run (preview without writing), --global (write to ~/.hippo/), and --tag (add extra tags). Duplicates are detected and skipped automatically. Hippo can also extract memories from raw conversation text with no LLM needed: pattern-based heuristics find decisions, rules, errors, and preferences.

Long-running work needs short-term continuity, not just long-term memory. Hippo can persist the current in-flight task so a later continue has something concrete to recover. hippo context --auto includes the active task snapshot before long-term memories, so agents get both the immediate thread and the deeper lessons. Manual snapshots are useful, but real work also needs a breadcrumb trail: Hippo can store short session events and link them to the active snapshot, so context output shows the latest steps, not just the last summary. The latest trail is mirrored to .hippo/buffer/recent-session.md so you can inspect the short-term thread without opening SQLite.
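The pattern-based extraction described above could look something like the sketch below. The post doesn't show Hippo's actual heuristics, so the regexes and category names here are illustrative assumptions, not Hippo's real rules:

```python
import re

# Hypothetical sketch of no-LLM memory extraction: scan conversation text
# for lines that look like decisions, rules, errors, or preferences.
PATTERNS = {
    "decision": re.compile(r"\b(decided to|we'll go with|chose)\b", re.I),
    "rule": re.compile(r"\b(always|never|must)\b", re.I),
    "error": re.compile(r"\b(error|failed|broke|bug)\b", re.I),
    "preference": re.compile(r"\b(prefer|rather|instead of)\b", re.I),
}

def extract(text: str) -> list[tuple[str, str]]:
    memories = []
    for line in text.splitlines():
        line = line.strip()
        for kind, pat in PATTERNS.items():
            if pat.search(line):
                memories.append((kind, line))
                break  # assign at most one category per line
    return memories
```

A dedup pass (as the import commands do) would then drop any extracted line that already exists in the store.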
When you're done for the day (or switching to another agent), create a handoff so the next session knows exactly where to pick up.

Working memory is a bounded scratchpad for current-state notes. It's separate from long-term memory and gets cleared between sessions. The buffer holds a maximum of 20 entries per scope; when full, the lowest-importance entry is evicted. You can also see why a memory was returned.

The lifecycle mirrors the hippocampus: input enters the buffer; important things get encoded into episodic memory; during "sleep," repeated episodes compress into semantic patterns; weak memories decay and disappear.

Every memory has a half-life, 7 days by default. Persistence is earned: use it or lose it. Each recall boosts the half-life by 2 days, and tagging a memory as an error gives it 2x the half-life automatically.

Every memory also carries a confidence level: verified, observed, inferred, or stale. This tells agents how much to trust what they're reading. When context is generated, confidence is shown inline, so agents can see at a glance what's established fact vs. a pattern worth questioning. Memories unretrieved for 30+ days are automatically marked stale during the next hippo sleep. If one gets recalled again, Hippo wakes it back up to observed so it can earn trust again instead of staying permanently stale.

Hippo also detects obvious contradictions between overlapping memories and keeps them visible instead of silently letting both masquerade as truth. Open conflicts are stored in SQLite, mirrored under .hippo/conflicts/, and linked back into each memory's conflicts_with field.

Memories aren't presented as bare assertions. By default, Hippo frames them as observations with dates, so agents treat them as context rather than commands. Three modes: observe (default), suggest, assert. Choose based on how directive you want the memory to be.

Run hippo sleep and episodes compress into patterns: three or more related episodes get merged into a single semantic memory. The originals decay; the pattern survives.
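The decay numbers above (7-day default half-life, +2 days per recall, 2x for errors) pin down a simple model. The exponential form below is an assumption about how Hippo combines them, but the constants come straight from the post:

```python
# Half-life decay sketch: a memory's strength halves every `half_life` days.
# Constants are from the post; the formula itself is an assumption.
DEFAULT_HALF_LIFE_DAYS = 7.0

def half_life(recalls: int, is_error: bool) -> float:
    h = DEFAULT_HALF_LIFE_DAYS + 2.0 * recalls  # each recall adds 2 days
    return h * 2.0 if is_error else h           # errors stick twice as long

def strength(age_days: float, recalls: int = 0, is_error: bool = False) -> float:
    return 0.5 ** (age_days / half_life(recalls, is_error))
```

Under this model an untouched memory sits at 0.5 strength after a week, while two recalls stretch its half-life to 11 days: persistence is earned.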
Did the recalled memories actually help? Tell Hippo; it tightens the feedback loop.

Recall only what fits: no context stuffing. Results are ranked by relevance * strength * recency, and the highest-signal memories fill the budget first.

Hippo can scan your commit history and extract lessons from fix/revert/bug commits automatically. The --repos flag accepts comma-separated paths; Hippo scans each repo's git log, extracts fix/revert/bug lessons, deduplicates against existing memories, and stores new ones. Pair with hippo sleep afterwards to consolidate; ideal for a weekly cron. You can also wrap any command with hippo watch to auto-learn from failures.

hippo init detects your agent framework and patches the right config file automatically. No extra commands needed: just hippo init and your agent knows about Hippo. If you prefer explicit control, you can install the hook yourself; this adds a ... block that tells the agent to use Hippo. For Claude Code, it also adds a Stop hook to ~/.claude/settings.json so hippo sleep runs automatically when the session exits. To remove it: hippo hook uninstall claude-code.

For any MCP-compatible client (Cursor, Windsurf, Cline, Claude Desktop), add Hippo to your MCP config (e.g. .cursor/mcp.json or claude_desktop_config.json). This exposes the tools hippo_recall, hippo_remember, hippo_outcome, hippo_context, hippo_status, hippo_learn, and hippo_wm_push.

OpenClaw gets a native plugin with auto-context injection, workspace-aware memory lookup, and tool hooks for auto-learn / auto-sleep (plugin docs: extensions/openclaw-plugin/; integration guide: integrations/openclaw.md). Claude Code gets a plugin with SessionStart/Stop hooks and error auto-capture (see extensions/claude-code-plugin/). Full integration details: integrations/.

Hippo is modeled on seven properties of the human hippocampus. Not metaphorically. Literally.

Why two stores? The brain uses a fast hippocampal buffer plus a slow neocortical store (Complementary Learning Systems theory, McClelland et al. 1995). If the neocortex learned fast, new information would overwrite old knowledge.
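The budget-limited recall described above can be sketched directly from the stated ranking. The relevance * strength * recency product is from the post; the field names and character-based budget accounting are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    relevance: float
    strength: float
    recency: float

    @property
    def score(self) -> float:
        # Ranking the post describes: relevance * strength * recency
        return self.relevance * self.strength * self.recency

def recall(memories: list[Memory], budget_chars: int) -> list[Memory]:
    picked, used = [], 0
    # Highest-signal memories fill the budget first; the rest are dropped.
    for m in sorted(memories, key=lambda m: m.score, reverse=True):
        if used + len(m.text) <= budget_chars:
            picked.append(m)
            used += len(m.text)
    return picked
```

No context stuffing: once the budget is spent, lower-signal memories simply don't ship.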
The buffer absorbs new episodes; the neocortex extracts patterns over time.

Why does decay help? New neurons born in the dentate gyrus actively disrupt old memory traces (Frankland et al. 2013). This is adaptive: it reduces interference from outdated information. Forgetting isn't failure; it's maintenance.

Why do errors stick? The amygdala modulates hippocampal consolidation based on emotional significance; fear and error signals boost encoding. Your first production incident is burned into memory. Your 200th uneventful deploy isn't.

Why does retrieval strengthen? Recalled memories undergo "reconsolidation" (Nader et al. 2000): the act of retrieval destabilizes the trace, then re-encodes it stronger. This is the testing effect. Hippo implements it mechanically via the half-life extension on recall.

Why does sleep consolidate? During sleep, the hippocampus replays compressed versions of recent episodes and "teaches" the neocortex by repeatedly activating the same patterns. Hippo's sleep command runs this as a deliberate consolidation pass.

The 7 mechanisms in full: PLAN.md#core-principles. For how these mechanisms connect to LLM training, continual learning, and open research problems: RESEARCH.md.

Mem0, Basic Memory, and Claude-Mem all implement "save everything, search later." Hippo implements all 7 hippocampal mechanisms: two-speed storage, decay, retrieval strengthening, schema acceleration, conflict detection, multi-agent transfer, and explicit working memory. It's the only tool that models which memories are worth keeping.

Issues and PRs welcome. Before contributing, run hippo status in the repo root to see the project's own memory. The interesting problems:
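The consolidation pass described above (three or more related episodes merge into one semantic memory; the originals decay) can be sketched as below. Grouping by a shared tag is an assumption; the post doesn't say how Hippo decides episodes are "related":

```python
from collections import defaultdict

# Sleep-time consolidation sketch: compress repeated episodes into a
# semantic pattern and leave the originals to fade.
def sleep(episodes: list[dict]) -> list[dict]:
    groups = defaultdict(list)
    for ep in episodes:
        groups[ep["tag"]].append(ep)

    semantic = []
    for tag, eps in groups.items():
        if len(eps) >= 3:  # enough repetition to justify a pattern
            semantic.append({"tag": tag,
                             "pattern": f"pattern distilled from {len(eps)} episodes"})
            for ep in eps:
                ep["decaying"] = True  # originals decay; the pattern survives
    return semantic
```

This is the hippocampus-to-neocortex handoff in miniature: episodes are cheap and transient, patterns are what persist.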

Heat score: 1
Sources: 1
Platforms: 1
Relations: 10
First seen: Apr 7, 2026, 5:49 AM
Last updated: Apr 7, 2026, 8:00 AM

Why this topic matters

Show HN: Hippo, biologically inspired memory for AI agents is currently shaped by signals from 1 source platform. This page organizes AI analysis summaries, 1 timeline event, and 10 relationship edges so search engines and AI systems can understand the topic's factual basis and propagation arc.

News

Keywords

9 tags
biologically, inspired, memory, agents, secret, good, isn, remembering, more

Source evidence

1 evidence item

Show HN: Hippo, biologically inspired memory for AI agents

News · 1
Apr 7, 2026, 5:49 AM

Timeline

Show HN: Hippo, biologically inspired memory for AI agents

Apr 7, 2026, 5:49 AM

Related topics

Agent Reading Test

benchmark, test, show, well, coding, agents, can, read, web, content
Relation score: 0.80

Does coding with LLMs mean more microservices?

coding, mean, more, microservices, work, seen, beginnings, proliferation, seems
Relation score: 0.70

Mario Zechner Joins Earendil

AI, coding agents, software development, open source, hiring, Earendil, Pi
Relation score: 0.60

Show HN: Modo – I built an open-source alternative to Kiro, Cursor, and Windsurf

built, open, source, alternative, plans, codes, what, adds
Relation score: 0.80

Show HN: Mdarena – Benchmark your Claude.md against your own PRs

your, against, own, files, are
Relation score: 0.80

Agent Reading Test

benchmark, test, show, well, coding, agents, can, read, web, content
Relation score: 0.75


Show HN: Gemma Gem – AI model embedded in a browser – no API keys, no cloud

model, embedded, browser, keys, cloud, personal, assistant, living, right, inside
Relation score: 0.70