Integration Guide

Hebbrix + LangChain

LangChain gives your agents tools, reasoning, and structure. Now add memory: persistent, intelligent, and self-improving. Together, they make agents that actually remember what matters.

$ pip install hebbrix langchain-openai

LangChain is powerful. Memory makes it smarter.

LangChain is the go-to framework for building AI agents. It handles chains, tools, retrieval, and orchestration beautifully. But its built-in memory options (ConversationBufferMemory, ConversationSummaryMemory) are session-scoped. They live in Python objects. When the process ends, the memory is gone.

For production agents that talk to real users across days, weeks, and months, you need memory that persists, searches intelligently, and gets better over time.

Cross-session persistence · 5-layer hybrid search · Knowledge graph · Auto-learning · Memory decay · Multi-tenancy

Two ways to integrate

Option 1

Drop-in chat endpoint

The fastest path. Hebbrix's chat API is OpenAI-compatible, so you can use it as the LLM in any LangChain chain. Just point ChatOpenAI to Hebbrix's endpoint. Memory search and injection happen automatically before the model responds.

No changes to your chain logic required.

drop-in approach
from langchain_openai import ChatOpenAI

# Point LangChain to Hebbrix's endpoint
llm = ChatOpenAI(
    base_url="https://api.hebbrix.com/v1",
    api_key="your_hebbrix_key",
    model="gpt-4"
)

# Use in any chain. Memory is automatic.
response = llm.invoke("What does Sarah prefer?")
# Hebbrix searches memories, injects context,
# then forwards to the LLM with full history
Option 2

SDK for full control

This option is for teams that want granular control over when and how memory is stored and retrieved. Use the Hebbrix Python SDK alongside LangChain to store memories after tool calls, search before chain invocations, and build custom memory logic that fits your agent's architecture.

Full control over the memory lifecycle.

SDK approach
from hebbrix import Hebbrix
from langchain_openai import ChatOpenAI

hebbrix = Hebbrix()
llm = ChatOpenAI(model="gpt-4")

# Before the chain runs, get relevant context
memories = hebbrix.search("user preferences")

# Inject memories into the system prompt
context = "\n".join(m.content for m in memories)

# After the chain runs, store what was learned
hebbrix.memories.create(
    content="User asked about billing, prefers email"
)
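The search-before and store-after steps above can be wrapped into a single helper so every chain call is bracketed by a memory read and a memory write. This is a sketch, not part of the Hebbrix SDK: client stands for a Hebbrix client exposing the search() and memories.create() calls shown above, and chain is any object with an invoke() method, such as a LangChain runnable.

```python
def run_with_memory(client, chain, user_input):
    # Read: fetch memories relevant to this input before the chain runs
    memories = client.search(user_input)
    context = "\n".join(m.content for m in memories)

    # Run the chain with the retrieved context alongside the input
    result = chain.invoke({"input": user_input, "context": context})

    # Write: record what happened so future runs can find it
    client.memories.create(content=f"Handled: {user_input}")
    return result
```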

What Hebbrix adds to your LangChain stack

Cross-session memory

Persists across restarts and deployments. Your agent remembers users from weeks ago.

Intelligent retrieval

5-layer hybrid search finds specific relevant memories, not just the last N messages.

Knowledge graph

Automatic entity extraction and relationship mapping. Reason about connections, not just text.

Self-improving memory

RL evaluates which memories lead to good responses. Helpful ones get reinforced automatically.

Multi-tenancy

Collections isolate memories per user, per project, or per any scope you define.

Natural decay

Ebbinghaus forgetting curve keeps retrieval sharp. Old irrelevant memories fade naturally.
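The forgetting-curve idea can be sketched with the standard Ebbinghaus form, retention = exp(-t / S), where t is elapsed time and S is memory strength. This is an illustration of the concept, not Hebbrix's exact formula; the strength_days default is an arbitrary assumption.

```python
import math

def retention(elapsed_days, strength_days=7.0):
    # Standard exponential forgetting curve: fresh memories score near 1.0,
    # and the score decays toward 0 as elapsed time grows
    return math.exp(-elapsed_days / strength_days)
```

With this shape, a memory as old as its strength parameter retains a score of about 0.37, so stale entries naturally rank below fresh, relevant ones at retrieval time.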

Common patterns

Real-world patterns from teams using both tools together.

1. Conversational agent with history

Use Hebbrix's chat endpoint as the LLM in a ConversationalRetrievalChain. Every message is automatically enriched with past interactions. No buffer management needed.

2. Tool-using agent with learning

After your LangChain agent completes a tool chain, store the outcome as a Hebbrix memory. Next time a similar task comes up, the agent has the solution without re-running the tools.
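A pure-Python sketch of the store-and-reuse idea. In a real setup the outcome would be persisted with hebbrix.memories.create and looked up with hebbrix.search; here a plain dict stands in so the control flow is visible.

```python
# In-memory stand-in for persisted tool outcomes (illustration only)
outcomes = {}

def run_task(task, tool_chain):
    key = task.strip().lower()
    if key in outcomes:
        # Remembered solution: skip the expensive tool calls entirely
        return outcomes[key]
    result = tool_chain(task)   # run the full tool chain
    outcomes[key] = result      # store what was learned for next time
    return result
```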

3. Multi-agent coordination

Multiple LangChain agents sharing knowledge through Hebbrix collections. A research agent stores findings, a summarization agent reads them, and a decision agent acts on them. All through shared memory.
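A minimal in-memory stand-in shows the collection-scoping semantics: each agent reads and writes only within a named collection. This is an illustration only; the Hebbrix SDK is assumed to provide equivalent create and search calls with a collection parameter.

```python
from collections import defaultdict

class MemoryStore:
    """Toy stand-in for collection-scoped shared memory."""
    def __init__(self):
        self._collections = defaultdict(list)

    def create(self, content, collection="default"):
        self._collections[collection].append(content)

    def search(self, query, collection="default"):
        # Naive substring match; a real backend would rank semantically
        return [m for m in self._collections[collection]
                if query.lower() in m.lower()]

store = MemoryStore()
# Research agent writes into the shared "research" collection
store.create("Finding: latency spikes at 9am", collection="research")
# Summarization agent reads from the same collection
notes = store.search("latency", collection="research")
```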

Featured pattern

RAG + memory hybrid

Use LangChain's document retrieval for static knowledge (docs, FAQs) and Hebbrix for dynamic, personalized memory (user preferences, conversation history, learned patterns).

RAG + Memory hybrid
from hebbrix import Hebbrix
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

hebbrix = Hebbrix()
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.load_local(
    "docs_index", embeddings,
    allow_dangerous_deserialization=True  # required by recent LangChain versions
)
llm = ChatOpenAI(
    base_url="https://api.hebbrix.com/v1",
    api_key="your_hebbrix_key",
    model="gpt-4"
)

# Static knowledge from your docs
docs = vectorstore.similarity_search(query)

# Dynamic memory from past interactions
memories = hebbrix.search(query, collection="support")

# Combine both into the prompt
context = format_docs(docs) + format_memories(memories)
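The snippet above assumes format_docs and format_memories helpers that are not defined there. A minimal hypothetical version labels each source so the model can tell static docs apart from personalized memories; it assumes LangChain documents expose .page_content and Hebbrix memory objects expose .content, as in the earlier SDK example.

```python
def format_docs(docs):
    # LangChain Document objects carry their text in .page_content
    return "Docs:\n" + "\n".join(d.page_content for d in docs)

def format_memories(memories):
    # Hebbrix-style memory objects are assumed to expose .content
    return "Memories:\n" + "\n".join(m.content for m in memories)
```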

LangChain memory vs. Hebbrix memory

LangChain's memory modules are great for prototyping. Hebbrix is built for production.

              LangChain                      Hebbrix
Persistence   In-process (lost on restart)   Cloud-persisted across sessions
Search        Recent N messages or summary   5-layer hybrid search
Structure     Flat text buffer               3-tier + knowledge graph
Learning      None                           Automatic RL (6 quality checks)
Multi-user    Manual implementation          Collections with scoping
Decay         Window eviction only           Ebbinghaus forgetting curve

Give your LangChain agent a memory upgrade

Free tier. No credit card. Plenty of room to experiment.