Layer 1 · Memory engine
Hippo Fabric

The memory layer
AI has been missing.

Hippo Fabric replaces vector search with a biological cognitive memory engine — weighted concepts, Hebbian edges, spreading activation. Memory that actually learns from use.

Book a Demo · Try it live →
Vector search: 0.71 · MISS
Hippo Fabric: FOUND
#1 LongMemEval · ICLR 2025 gold standard
0.46s inference speed · vs 2–5s
Luthen solutions · exclusive cognitive capability
85% fewer tokens · 22,000 → 3,300 per query

The problem

Why AI memory has
always fallen short.

Two years building AI systems led us to one root cause — semantic search was never designed to think. It finds similar text. It doesn't understand context, build associations, or learn.

False positives at scale

Semantic similarity returns near-matches that are confidently wrong. The bigger your knowledge base, the worse it gets — not better.

vector_search.find_similar()

No associations

Vector search finds similar vectors — not related concepts. "Budget" and "Q4 forecast" live far apart in embedding space but are deeply connected in meaning.

no graph traversal

Static, frozen memory

Embeddings are computed once and never change. They can't learn from use, strengthen with repetition, or adapt from correction. Memory that can't grow isn't memory.

embeddings.frozen = True

No behavioral memory

Every session starts from zero. User preferences, corrections, and learned rules vanish the moment the conversation ends. Your agent forgets you every single time.

session.memory = None

Live playground

Type a query. Watch
spreading activation fire.

No embeddings, no cosine similarity. Hippo Fabric activates one concept and propagates through weighted graph edges to surface everything genuinely related.

brain.think("Q4 forecast")
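The playground's behavior can be sketched in a few lines of plain Python. Everything below is a toy illustration under our own assumptions: the graph, the weights, and the `think` helper are hypothetical stand-ins, not Hippo Fabric's actual internals.

```python
from collections import defaultdict

# Hypothetical concept graph: weighted, undirected edges.
# Concepts and weights are illustrative, not real product data.
EDGES = {
    ("Q4 forecast", "budget"): 0.9,
    ("budget", "headcount plan"): 0.7,
    ("Q4 forecast", "revenue model"): 0.8,
    ("revenue model", "pricing page"): 0.4,
}

def think(seed, threshold=0.2, decay=0.8):
    """Fire one concept and propagate activation along weighted edges."""
    graph = defaultdict(list)
    for (a, b), w in EDGES.items():
        graph[a].append((b, w))
        graph[b].append((a, w))
    activation = {seed: 1.0}
    frontier = [seed]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in graph[node]:
            signal = activation[node] * weight * decay
            # Only propagate signals that beat both the floor and
            # any activation this neighbor already received.
            if signal > activation.get(neighbor, 0.0) and signal > threshold:
                activation[neighbor] = signal
                frontier.append(neighbor)
    return sorted(activation.items(), key=lambda kv: -kv[1])

# "budget", "revenue model", "headcount plan", and "pricing page"
# all surface, ranked by how strongly the activation reached them.
print(think("Q4 forecast"))
```

Note what never happens here: no embedding lookup, no cosine similarity. "Headcount plan" surfaces two hops away purely because the path to it is strong.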


API surface

Three verbs.
Everything else emerges.

ingest() · think() · sleep(). The entire Hippo Fabric API fits on a postcard.

ingest()

Write to memory

Concepts and relationships stored as weighted nodes and edges. Every new piece of knowledge joins the graph and auto-links to what's already there.

think()

Associative recall

Activate one concept — spreading activation traverses links by weight. The right context surfaces because it's connected, not because it's a cosine match.

sleep()

Offline consolidation

The brain replays traces, strengthens useful edges, and crystallises schemas. Your agent gets measurably sharper every night.
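To make the three verbs concrete, here is a deliberately tiny toy model. The `ToyBrain` class, its weights, and its update rules are all assumptions made for illustration; they are not the production engine.

```python
class ToyBrain:
    """Toy three-verb memory: illustrative only, not the real engine."""

    def __init__(self):
        self.edges = {}     # frozenset({a, b}) -> weight
        self.trace = set()  # edges used since the last sleep()

    def ingest(self, concept, related):
        # Write to memory: the new concept auto-links to existing ones.
        for other in related:
            key = frozenset((concept, other))
            self.edges[key] = self.edges.get(key, 0.0) + 0.5

    def think(self, concept):
        # Associative recall: neighbors ranked by edge weight,
        # and every edge touched is recorded for later replay.
        hits = []
        for key, weight in self.edges.items():
            if concept in key:
                (neighbor,) = key - {concept}
                hits.append((neighbor, weight))
                self.trace.add(key)
        return sorted(hits, key=lambda kv: -kv[1])

    def sleep(self, boost=0.2, decay=0.1):
        # Offline consolidation: strengthen replayed edges, decay the rest.
        for key in list(self.edges):
            if key in self.trace:
                self.edges[key] += boost
            else:
                self.edges[key] = max(0.0, self.edges[key] - decay)
        self.trace.clear()


brain = ToyBrain()
brain.ingest("Q4 forecast", ["budget", "revenue model"])
brain.ingest("budget", ["headcount plan"])
print(brain.think("Q4 forecast"))  # budget and revenue model surface
brain.sleep()                      # the edges just used get stronger
```

The point of the sketch is the shape of the API: writes build the graph, reads traverse it, and sleep is where learning actually happens.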

Sleep consolidation

Your agent gets
smarter overnight.

While your users sleep, Hippo Fabric replays the day's activation traces, strengthens the edges that actually mattered, prunes noise, and crystallises recurring patterns into stable schemas. No other production memory does this.

REPLAY · Activation traces from the last 24h are re-run
STRENGTHEN · Edges that led to useful answers gain weight
PRUNE · Unused associations decay toward zero
CRYSTALLISE · Co-activated clusters become schemas
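The four phases can be sketched as one pure function over a day's traces. The thresholds, the boost/decay rates, and the schema rule below are invented for illustration; the production consolidation algorithm is not public.

```python
def consolidate(edges, traces, boost=0.15, decay=0.05,
                prune_below=0.05, schema_min=3):
    """Toy nightly consolidation pass (illustrative parameters only)."""
    # REPLAY: count how often each edge fired across the day's traces.
    fired = {}
    for trace in traces:
        for edge in trace:
            fired[edge] = fired.get(edge, 0) + 1

    updated = {}
    for edge, weight in edges.items():
        if edge in fired:
            weight += boost * fired[edge]   # STRENGTHEN replayed edges
        else:
            weight -= decay                 # unused edges decay
        if weight >= prune_below:           # PRUNE near-zero edges
            updated[edge] = weight

    # CRYSTALLISE: concept clusters that co-activated repeatedly
    # become stable schemas.
    counts = {}
    for trace in traces:
        nodes = frozenset(n for edge in trace for n in edge)
        counts[nodes] = counts.get(nodes, 0) + 1
    schemas = [nodes for nodes, c in counts.items() if c >= schema_min]
    return updated, schemas
```

Run over a day where one association fired three times and another never fired, the first gains weight while the second decays toward the prune threshold.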

Performance

Dominating LongMemEval.

Evaluated against the ICLR 2025 gold standard for long-term memory in conversational AI.

State of the Art Comparison

Hippo Fabric v3
61.1%
ChatGPT
57.7%
Coze (GPT-4o)
33%

Category Breakdown (Strict)

Multi-Session Reasoning
90.6%
Knowledge Update
90.6%
Temporal Reasoning
75%
Assistant Info
40%
User Info
37.5%

0.46s

Inference Speed

vs 2–5s competitors

$0.00

API Cost for Retrieval

Free forever

85%

Token Cost Reduction

22,000 → 3,300 tokens

"90.6% accuracy in multi-session reasoning — more than 50% better than ChatGPT, at 10× faster inference speed and zero API cost."

LongMemEval · ICLR 2025 gold standard

Request access

Hippo Fabric is available
through Luthen.

Hippo Fabric powers Luthen's cognitive solutions — book a demo to see it in action.