LangGraph Integration

Your graph has state. Now give it memory.

LangGraph is brilliant at modeling complex agent workflows as graphs. But its state lives in a single invocation. When the graph finishes, the state vanishes. Hebbrix adds the layer LangGraph doesn't have: memory that persists across runs, days, and deployments.

$ pip install hebbrix langgraph langchain-openai
langgraph-with-memory.py
from langgraph.graph import StateGraph, MessagesState, START, END
from langchain_openai import ChatOpenAI
from hebbrix import Hebbrix

hebbrix = Hebbrix()

# Point your LLM at Hebbrix for automatic memory
llm = ChatOpenAI(
    base_url="https://api.hebbrix.com/v1",
    api_key="your_hebbrix_key",
    model="gpt-4"
)

def agent_node(state: MessagesState):
    # Hebbrix automatically searches memories
    # and injects relevant context before GPT responds
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

def memory_node(state: MessagesState):
    # Explicitly store important outcomes
    last_msg = state["messages"][-1].content
    hebbrix.memories.create(content=last_msg)
    return state

# Build your graph with memory at key nodes
graph = StateGraph(MessagesState)
graph.add_node("agent", agent_node)
graph.add_node("store_memory", memory_node)
graph.add_edge(START, "agent")
graph.add_edge("agent", "store_memory")
graph.add_edge("store_memory", END)

app = graph.compile()

State is not memory

LangGraph's state is powerful within a single run. But production agents need something that outlives the graph execution.

LangGraph State

- Lives in a single graph run
- Lost when the process ends
- No search or retrieval
- No cross-session awareness
- Manual serialization to persist

Hebbrix Memory

- Persists across runs, restarts, and deployments
- 5-layer hybrid search retrieval
- Knowledge graph with entity relationships
- Learns what context leads to good outcomes
- Natural memory decay keeps context clean

Where to add memory in your graph

You get to decide. Memory fits naturally at specific nodes in your workflow.

Before the agent node: load context

Search Hebbrix for relevant memories before the LLM runs. Inject user history, preferences, and past outcomes into the prompt. The agent starts every run with full context.
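A minimal sketch of what a context-loading node can look like. The `search` method and its `query`/`limit` parameters are assumptions about the Hebbrix client (the snippet above only shows `memories.create`), so the client is stubbed here to keep the shape runnable:

```python
class StubMemories:
    """Stand-in for hebbrix.memories; real retrieval would use
    Hebbrix's hybrid search rather than token matching."""
    def __init__(self, items):
        self.items = items

    def search(self, query, limit=3):
        # naive token-overlap match in place of real hybrid search
        words = query.lower().split()
        return [m for m in self.items
                if any(w in m.lower() for w in words)][:limit]

def load_context_node(state, memories):
    """Run before the agent node: retrieve relevant memories and
    prepend them as a system message so the LLM starts with context."""
    query = state["messages"][-1]["content"]
    hits = memories.search(query=query, limit=3)
    if not hits:
        return state  # nothing relevant; leave the prompt untouched
    context = "Relevant memories:\n" + "\n".join(f"- {m}" for m in hits)
    system_msg = {"role": "system", "content": context}
    return {"messages": [system_msg] + state["messages"]}
```

In a real graph you would register this as a node with an edge into `agent`, and use LangChain message objects instead of plain dicts.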

After tool execution: store outcomes

When your agent completes a tool chain (API call, database query, web search), store the result as a memory. Next time a similar task comes up, the agent might skip the tool entirely.
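One way to sketch an outcome-storing node. The `create(content=...)` call mirrors the snippet above, but the state keys (`task`, `tool_name`, `tool_result`) are illustrative assumptions, and the client is stubbed so the example runs standalone:

```python
class StubStore:
    """Stand-in for hebbrix.memories; records what would be persisted."""
    def __init__(self):
        self.stored = []

    def create(self, content):
        self.stored.append(content)

def store_outcome_node(state, store):
    """Run after a tool chain: persist the result alongside the task,
    so a future run on a similar task can answer from memory."""
    content = (
        f"Task: {state['task']}\n"
        f"Tool: {state['tool_name']}\n"
        f"Result: {state['tool_result']}"
    )
    store.create(content=content)
    return state
```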

At decision nodes: recall past decisions

When the graph reaches a branching point, search memories for how similar situations were handled before. Pattern recognition through memory, not hardcoded rules.
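A sketch of a conditional-edge function that consults memory before branching. The search call and the convention that past memories mention how a case was handled are both assumptions; the stub keeps it runnable:

```python
class StubMemories:
    """Stand-in for hebbrix.memories with a naive retrieval method."""
    def __init__(self, items):
        self.items = items

    def search(self, query, limit=1):
        # token-overlap match standing in for real retrieval
        words = query.lower().split()
        return [m for m in self.items
                if any(w in m.lower() for w in words)][:limit]

def route_with_memory(state, memories):
    """LangGraph conditional-edge function: branch based on how
    similar situations were handled before, not hardcoded rules."""
    hits = memories.search(query=state["task"], limit=1)
    if hits and "escalated" in hits[0].lower():
        return "escalate"  # a similar case was escalated last time
    return "handle"
```

In a real graph you would bind the client (e.g. with `functools.partial`) and pass the function to `graph.add_conditional_edges`.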

At the end: capture what was learned

Before the graph completes, store a summary of what happened. Hebbrix's 6 RL quality checks evaluate whether the run produced good outcomes, so future runs benefit from the lessons.
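A sketch of a terminal summary node. What goes into the summary is up to you; the state fields below and the stub client are illustrative:

```python
class StubStore:
    """Stand-in for hebbrix.memories; records what would be persisted."""
    def __init__(self):
        self.stored = []

    def create(self, content):
        self.stored.append(content)

def capture_lessons_node(state, store):
    """Run as the last node before END: store a compact summary of
    the run so future runs inherit what this one learned."""
    summary = (
        f"Task: {state['task']}\n"
        f"Outcome: {state['outcome']}\n"
        f"Turns: {len(state['messages'])}"
    )
    store.create(content=summary)
    return state
```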

Add memory to your LangGraph agents

Your graph already has the logic. Now give it the memory to learn from every run.