Integration Guide

Hebbrix + CrewAI

CrewAI orchestrates teams of AI agents. Hebbrix gives them shared memory that persists across runs, scales without lock files, and retrieves context through knowledge graphs instead of basic similarity search.

$ pip install crewai requests

What CrewAI's built-in memory can't do

CrewAI ships with short-term (ChromaDB), long-term (SQLite), and entity memory. They work for basic use cases, but hit real limits in production.

Crew-scoped only

Memory is locked to individual crews. Agents in one crew can't access what another crew learned.

ChromaDB lock files

Running multiple crews in parallel? ChromaDB's SQLite backend blocks concurrent writes.

Basic similarity only

No keyword matching, no knowledge graph traversal. Proper nouns and exact identifiers get missed.

No cross-run learning

Short-term memory resets each run. Long-term memory stores raw task results without intelligent retrieval.

Unbounded growth

ChromaDB's database grows without cleanup. No built-in decay or importance scoring to keep things relevant.

No relationship awareness

Entity memory stores flat facts. No graph connecting people, projects, and events together.
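
To see why pure similarity search drops exact identifiers, here is a toy sketch (not CrewAI or Hebbrix code): an exact-token keyword pass catches a ticket ID that embedding similarity often misses, because proper nouns and IDs carry little distributional signal.

```python
# Toy keyword pass: exact-token matching finds identifiers like "JIRA-4821".
# Illustrative only; no vector search is shown here.
def keyword_hits(query: str, memories: list[str]) -> list[str]:
    terms = set(query.lower().split())
    return [m for m in memories if terms & set(m.lower().split())]

memories = [
    "jira-4821 blocked on the auth migration",
    "the team prefers weekly releases",
]
print(keyword_hits("status of JIRA-4821", memories))
# → ["jira-4821 blocked on the auth migration"]
```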

Integrate with custom tools

The simplest way to add Hebbrix to CrewAI is through custom tools. Create a search tool and a store tool, then assign them to any agent that needs memory.

Search Tool

Retrieve memories

Any agent with this tool can search Hebbrix for relevant memories. Results come back in under 50 ms via 5-layer hybrid search.

hebbrix_search_tool.py
from crewai.tools import BaseTool
from pydantic import BaseModel, Field
import requests

class HebbrixSearchInput(BaseModel):
    query: str = Field(..., description="What to search for")
    user_id: str = Field(default="default", description="User scope")

class HebbrixMemorySearch(BaseTool):
    name: str = "search_memory"
    description: str = "Search Hebbrix for relevant memories"
    args_schema: type[BaseModel] = HebbrixSearchInput

    def _run(self, query: str, user_id: str = "default") -> str:
        resp = requests.post(
            "https://api.hebbrix.com/v1/memories/search",
            json={"query": query, "user_id": user_id},
            headers={"Authorization": "Bearer YOUR_KEY"},
            timeout=10,
        )
        resp.raise_for_status()  # surface auth or rate-limit errors to the agent
        results = resp.json().get("results", [])
        return "\n".join(r["content"] for r in results) or "No memories found."

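The tool's return value is simply the joined content fields. A minimal sketch of that parsing step, assuming a response shape like {"results": [{"content": ...}]} (the exact schema is an assumption here):

```python
# Assumed response shape from the search endpoint:
# {"results": [{"content": "...", "score": 0.91}, ...]}
def format_results(payload: dict) -> str:
    # Join each memory's content on its own line, as the tool above does
    results = payload.get("results", [])
    return "\n".join(r["content"] for r in results)

sample = {"results": [
    {"content": "Acme prefers dark mode", "score": 0.91},
    {"content": "Q3 launch slipped to October", "score": 0.84},
]}
print(format_results(sample))
# → Acme prefers dark mode
# → Q3 launch slipped to October
```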
Store Tool

Save new memories

Agents store what they learn using infer:true mode. Hebbrix automatically extracts clean, atomic facts from whatever the agent writes.

hebbrix_store_tool.py
from crewai.tools import BaseTool
from pydantic import BaseModel, Field
import requests

class HebbrixStoreInput(BaseModel):
    content: str = Field(..., description="Memory to store")
    user_id: str = Field(default="default", description="User scope")

class HebbrixMemoryStore(BaseTool):
    name: str = "store_memory"
    description: str = "Store a new memory in Hebbrix"
    args_schema: type[BaseModel] = HebbrixStoreInput

    def _run(self, content: str, user_id: str = "default") -> str:
        resp = requests.post(
            "https://api.hebbrix.com/v1/memories",
            json={
                "content": content,
                "user_id": user_id,
                "infer": True  # Auto-extract atomic facts
            },
            headers={"Authorization": "Bearer YOUR_KEY"},
            timeout=10,
        )
        resp.raise_for_status()  # fail loudly instead of silently dropping memories
        return "Memory stored successfully"

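As a quick sanity check on the request body (the field names follow the snippet above; anything beyond them is an assumption), the payload for an inferred store looks like this:

```python
import json

# Build the store payload used above. "infer": True asks Hebbrix to extract
# atomic facts server-side; the agent's raw output goes in "content" as-is.
def build_store_payload(content: str, user_id: str = "default") -> dict:
    return {"content": content, "user_id": user_id, "infer": True}

payload = build_store_payload("The client asked to move the demo to Friday.")
print(json.dumps(payload, indent=2))
```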
Crew Setup

Shared memory across agents

Give different agents different tools. A researcher stores findings. A writer retrieves them. Memory flows between agents through Hebbrix, not through CrewAI's internal state.

crew_with_memory.py
from crewai import Agent, Task, Crew

# Create agents with Hebbrix memory tools
researcher = Agent(
    role="Research Analyst",
    goal="Find and store relevant research",
    tools=[HebbrixMemorySearch(), HebbrixMemoryStore()],
    backstory="You research topics and store findings."
)

writer = Agent(
    role="Content Writer",
    goal="Write using stored research",
    tools=[HebbrixMemorySearch()],
    backstory="You use stored memories to write content."
)

research_task = Task(
    description="Research the topic and store key findings in memory.",
    expected_output="A list of stored findings.",
    agent=researcher
)

writing_task = Task(
    description="Search memory for the stored findings and write a summary.",
    expected_output="A short article grounded in stored memories.",
    agent=writer
)

# The writer can access everything the researcher stored
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True
)

crew.kickoff()
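If several crews share one Hebbrix account, a per-project user_id keeps their memory pools separate. This helper is purely hypothetical (Hebbrix does not mandate any scope format); it sketches one way to derive a stable scope string to pass to the tools above:

```python
import hashlib

# Hypothetical helper: derive a stable user_id per project so crews working
# on different projects read and write separate memory pools. The "project-"
# prefix and 12-char digest are arbitrary choices, not an API requirement.
def memory_scope(project: str) -> str:
    digest = hashlib.sha256(project.encode("utf-8")).hexdigest()[:12]
    return f"project-{digest}"

print(memory_scope("apollo"))  # same input always yields the same scope
```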

What Hebbrix adds to your CrewAI stack

Cross-crew memory

All agents in all crews access the same memory. Research from one run is available to every future run.

Intelligent retrieval

5-layer hybrid search finds relevant memories by meaning, keywords, relationships, importance, and recency.

Knowledge graph

Entities and relationships are mapped automatically. Ask about a project and find the people connected to it.

Concurrent safe

No lock files. Run 50 crews in parallel and they all read and write to Hebbrix without conflicts.
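
Because every write is an independent HTTP request, parallel crews never contend on a local lock file. The sketch below simulates 50 concurrent stores with a local stub standing in for the Hebbrix call (the stub, and the lock protecting the demo list, are illustration only):

```python
from concurrent.futures import ThreadPoolExecutor
import threading

# Local stub for the Hebbrix store call. In the real tool this is an HTTP
# POST per write, so there is no shared SQLite lock file to contend on.
stored = []
_lock = threading.Lock()  # protects the in-memory demo list only

def store_memory(content: str) -> str:
    with _lock:
        stored.append(content)
    return "Memory stored successfully"

# 50 "crews" writing at once
with ThreadPoolExecutor(max_workers=8) as pool:
    replies = list(pool.map(store_memory, (f"crew-{i} finding" for i in range(50))))

print(len(stored))  # → 50 (every write lands, no lock errors)
```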

Smart extraction

infer:true mode extracts clean facts from agent outputs. No need to parse or format what agents write.

Natural decay

Old, irrelevant memories fade over time. The memory stays focused on what's actually useful.

CrewAI memory vs. Hebbrix memory

CrewAI's memory works for prototyping. Hebbrix is for when you ship to production.

                CrewAI                        Hebbrix
Persistence     SQLite (local, per-crew)      Cloud API (shared, cross-crew)
Search          ChromaDB similarity           5-layer hybrid search
Concurrency     Lock file issues              API-based (no locks)
Knowledge       Flat entity storage           Knowledge graph with relations
Learning        None                          Automatic quality checks
Scale           Single machine                Millions of memories

Give your crew a shared memory

Free tier. No credit card. Enough room to test with real crews.