Product Updates

Changelog

Everything we ship, in the order we ship it. New features, improvements, fixes, and the occasional refactor that makes everything faster.

March 7, 2026 · v2.4.0 · New Feature

Smart Memory Ingestion Pipeline

Introduced infer: true mode for automatic fact extraction from conversations. Send raw conversation turns and Hebbrix extracts clean, atomic facts using a three-tier pipeline: an embedding classifier, LLM extraction, and content deduplication.

Added memory worthiness classifier that rejects queries, greetings, and noise before they reach the LLM. Saves token costs and keeps your memory index clean.

Fact extraction now runs on gpt-5-nano for 4x faster inference at 17x lower cost compared to the default model.

New SEARCH_MIN_SCORE setting lets you configure a minimum relevance threshold. Results below this score are automatically filtered out, reducing noise in search responses.
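The three-stage flow above can be sketched as a small pipeline. Everything here is a toy stand-in for illustration: the function names, the noise list, and the sentence-splitting "extraction" are assumptions, not Hebbrix internals.

```python
# Illustrative sketch of the three-tier ingestion flow:
# worthiness classifier -> fact extraction -> deduplication.
# All bodies are toy stand-ins, not the actual Hebbrix implementation.

NOISE = {"hi", "hello", "thanks", "ok"}  # assumed examples of rejected noise

def is_memory_worthy(turn: str) -> bool:
    # stand-in for the embedding classifier that rejects greetings and noise
    return turn.lower().strip("!. ") not in NOISE and len(turn.split()) > 2

def extract_facts(turn: str) -> list:
    # stand-in for the LLM extraction step: one atomic fact per sentence
    return [s.strip() for s in turn.split(".") if s.strip()]

def dedupe(facts, seen):
    # content deduplication against previously stored facts
    fresh = []
    for fact in facts:
        key = fact.lower()
        if key not in seen:
            seen.add(key)
            fresh.append(fact)
    return fresh

seen = set()
turns = ["Hello!", "Sarah joined TechCorp. Sarah joined TechCorp."]
stored = [f for t in turns if is_memory_worthy(t)
          for f in dedupe(extract_facts(t), seen)]
print(stored)  # the greeting is rejected; the duplicate fact is collapsed
```

The key property is that the cheap classifier sits first, so noise never incurs an LLM call.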

March 5, 2026 · v2.3.0 · Improvement

Five-Layer Hybrid Search Engine

Upgraded the search engine to combine five retrieval layers: semantic vectors, BM25 keyword matching, knowledge graph traversal, importance scoring, and recency boosting.

Added a cross-encoder reranking option for production workloads that need maximum precision.

Search results now include score explanations showing which layers contributed to each result's ranking.

Improved proper noun and keyword matching through tuned BM25 integration. Queries like 'Sarah from TechCorp' now correctly prioritize exact entity matches.
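One common way to combine retrieval layers is a weighted score fusion, which also yields the per-layer explanations mentioned above. The layer names and weights below are assumptions chosen for illustration, not Hebbrix's actual defaults.

```python
# Hypothetical weighted fusion across the five retrieval layers.
# Weights are illustrative assumptions, not Hebbrix's tuned values.
WEIGHTS = {
    "semantic": 0.40,    # vector similarity
    "bm25": 0.25,        # keyword / proper-noun matching
    "graph": 0.15,       # knowledge-graph traversal
    "importance": 0.10,
    "recency": 0.10,
}

def fuse(layer_scores: dict) -> float:
    """Combine per-layer scores (each in [0, 1]) into one ranking score."""
    return sum(WEIGHTS[layer] * score for layer, score in layer_scores.items())

def explain(layer_scores: dict) -> dict:
    """Per-layer contribution, the basis for a score explanation."""
    return {layer: WEIGHTS[layer] * score for layer, score in layer_scores.items()}

# 'Sarah from TechCorp': an exact entity match drives a strong BM25 signal
doc = {"semantic": 0.62, "bm25": 0.95, "graph": 0.30,
       "importance": 0.5, "recency": 0.8}
print(fuse(doc), explain(doc)["bm25"])
```

With a high BM25 weight on exact-match signals, entity queries rank literal matches above merely semantically similar memories.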

February 28, 2026 · v2.2.1 · Improvement

Knowledge Graph Performance

Entity extraction now runs asynchronously in the background, removing it from the critical path of memory storage. Write latency reduced from 2-3 seconds to under 200ms.

Knowledge graph indexing can be toggled per environment using the ENABLE_KNOWLEDGE_GRAPH_INDEXING setting.

Fixed a timeout issue where Neo4j connection failures would block the entire memory ingestion pipeline.
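Moving extraction off the write path can be sketched with a queue and a background worker. This is a minimal asyncio sketch under assumed design choices; the queue-based worker and function names are illustrative, not the actual Hebbrix code.

```python
# Sketch: the write returns immediately while entity extraction is
# handled by a background worker. Names and timings are illustrative.
import asyncio

extracted = []  # toy stand-in for the knowledge graph index

async def extract_entities(memory: str) -> None:
    await asyncio.sleep(0.05)           # stand-in for a slow graph/LLM round trip
    extracted.append(memory.split()[0])  # toy "entity"

async def store_memory(memory: str, queue: asyncio.Queue) -> str:
    memory_id = f"mem-{queue.qsize()}"  # write path completes immediately
    await queue.put(memory)             # extraction deferred to the worker
    return memory_id

async def worker(queue: asyncio.Queue) -> None:
    while True:
        memory = await queue.get()
        try:
            await extract_entities(memory)
        finally:
            queue.task_done()

async def main():
    queue = asyncio.Queue()
    task = asyncio.create_task(worker(queue))
    await store_memory("Alex joined the product team", queue)  # fast return
    await queue.join()   # in production nothing waits here; shown for the demo
    task.cancel()
    print(extracted)

asyncio.run(main())
```

A design like this also isolates graph-store failures: a Neo4j timeout stalls only the worker, not the ingestion call.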

February 20, 2026 · v2.2.0 · New Feature

OpenAI-Compatible Chat Endpoint

New /v1/chat/completions endpoint that follows the OpenAI API format. Drop Hebbrix into any existing OpenAI integration by changing the base URL and API key.

Automatic memory retrieval during chat: the system searches relevant memories and injects them into the conversation context before generating a response.

Support for streaming responses with memory-augmented context.
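The memory-injection step can be pictured as building an OpenAI-format request with retrieved memories prepended as system context. The request shape follows the public OpenAI Chat Completions format; the injection strategy and model name here are assumptions for illustration.

```python
# Sketch: retrieved memories injected into an OpenAI-format chat request.
# The system-message injection strategy is an assumption, not confirmed
# Hebbrix behavior; the request shape follows the OpenAI API format.
def build_chat_request(user_message: str, memories: list, model: str = "example-model"):
    context = "\n".join(f"- {m}" for m in memories)
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": f"Relevant memories:\n{context}"},
            {"role": "user", "content": user_message},
        ],
        "stream": True,  # streaming also works with the augmented context
    }

req = build_chat_request("Who does Alex report to?", ["Alex reports to Jordan"])
print(req["messages"][0]["content"])
```

Because the endpoint is format-compatible, an existing OpenAI SDK client would only need its base URL and API key pointed at Hebbrix; the augmentation happens server-side before generation.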

February 15, 2026 · v2.1.0 · New Feature

Webhook Notifications

Added webhook support for memory lifecycle events: created, updated, deleted, and searched.

Configurable webhook endpoints per collection, with automatic retry and failure logging.

Webhook payloads include full memory content, metadata, and event context.
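On the receiving side, a webhook handler typically verifies that a delivery really came from the sender before trusting the payload. The signature scheme below (HMAC-SHA256 over the raw body) is a common webhook convention and an assumption in this sketch; Hebbrix's actual header and secret format are not specified here.

```python
# Sketch of a webhook receiver verifying a signed delivery.
# HMAC-SHA256 over the raw body is an assumed convention, not a
# documented Hebbrix scheme; the secret value is hypothetical.
import hashlib
import hmac
import json

SECRET = b"whsec_example"  # hypothetical per-endpoint signing secret

def sign(body: bytes, secret: bytes = SECRET) -> str:
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def handle_event(body: bytes, signature: str):
    """Return the parsed event, or None if the signature does not match."""
    if not hmac.compare_digest(sign(body), signature):
        return None  # reject tampered or misdirected deliveries
    return json.loads(body)

payload = json.dumps({
    "event": "memory.created",
    "memory": {"content": "Alex joined the product team"},
}).encode()

event = handle_event(payload, sign(payload))
print(event["event"])
```

Using a constant-time comparison (hmac.compare_digest) avoids leaking signature prefixes through timing.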

February 10, 2026 · v2.0.2 · Security

Security Hardening

Added TrustedHostMiddleware to reject requests from unauthorized origins.

Improved API key rotation with zero downtime key swapping.

Added rate limiting per API key with configurable burst and sustained limits.
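The burst/sustained split maps naturally onto a token bucket: capacity sets the burst, refill rate sets the sustained throughput. The sketch below is a toy illustration; the parameter names and limits are assumptions, not Hebbrix's actual settings.

```python
# Toy token-bucket limiter: bucket capacity = burst limit, refill rate =
# sustained limit. Parameter names and values are illustrative assumptions.
import time

class TokenBucket:
    def __init__(self, burst: int, per_second: float):
        self.capacity = burst        # maximum burst size
        self.tokens = float(burst)
        self.rate = per_second       # sustained refill rate (tokens/sec)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}  # one bucket per API key

def check(api_key: str, burst: int = 3, per_second: float = 1.0) -> bool:
    bucket = buckets.setdefault(api_key, TokenBucket(burst, per_second))
    return bucket.allow()

results = [check("key-123") for _ in range(5)]
print(results)  # the burst is allowed, then requests are throttled
```

A request rejected here would return HTTP 429 until the bucket refills at the sustained rate.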

All memory content is now encrypted at rest using AES-256.

February 1, 2026 · v2.0.0 · New Feature

Hebbrix v2: Three-Tier Memory Architecture

Complete rewrite of the memory storage engine. Memories now flow through three tiers: short-term for recent context, medium-term for ongoing relationships, and long-term for permanent knowledge.

Automatic memory promotion and decay based on access patterns, importance, and recency. Memories that matter get strengthened. Noise fades naturally.

New collection system for organizing memories by application, namespace, or user. Full multi-tenant isolation out of the box.

Bulk memory operations: import, export, and delete across collections.

New dashboard for managing memories, viewing knowledge graphs, and monitoring usage.
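Promotion and decay based on access patterns, importance, and recency can be sketched as a retention score that assigns each memory to a tier. The formula, half-life, and thresholds below are assumptions for illustration only.

```python
# Sketch of access-pattern-driven tier assignment. The scoring formula,
# decay half-life, and thresholds are illustrative assumptions, not the
# actual Hebbrix promotion/decay logic.
import math

def retention_score(access_count: int, days_since_access: float,
                    importance: float) -> float:
    """Higher when a memory is accessed often, recently, and marked important."""
    recency = math.exp(-days_since_access / 7.0)   # assumed ~weekly decay
    frequency = math.log1p(access_count)           # diminishing returns
    return importance * (0.6 * recency + 0.4 * frequency)

def assign_tier(score: float) -> str:
    if score >= 1.0:
        return "long_term"
    if score >= 0.3:
        return "medium_term"
    return "short_term"

hot = retention_score(access_count=20, days_since_access=0.5, importance=0.9)
stale = retention_score(access_count=1, days_since_access=30, importance=0.2)
print(assign_tier(hot), assign_tier(stale))
```

The effect matches the description above: frequently used, important memories climb toward long-term storage, while untouched noise decays back down.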

January 15, 2026 · v1.5.0 · New Feature

Knowledge Graph Integration

Automatic entity and relationship extraction from stored memories. Store a sentence like 'Alex joined the product team and reports to Jordan' and Hebbrix maps the entities and connections automatically.

Graph traversal during search: ask about Jordan's team and Hebbrix finds Alex through the relationship graph, even without a direct mention in the query.
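The Jordan-to-Alex hop above is a graph traversal over relationship edges. The toy graph below uses an assumed edge schema and a plain breadth-first search to show the idea; it is not the actual Hebbrix graph model.

```python
# Toy relationship graph and breadth-first traversal showing how a query
# about Jordan can surface Alex through an edge. Schema is illustrative.
from collections import deque

edges = {
    "Alex":   [("MEMBER_OF", "product team"), ("REPORTS_TO", "Jordan")],
    "Jordan": [("MANAGES", "product team")],
}

def related_entities(start: str, max_hops: int = 2) -> set:
    """Entities reachable within max_hops, treating edges as undirected."""
    neighbors = {}
    for src, outs in edges.items():
        for _, dst in outs:
            neighbors.setdefault(src, set()).add(dst)
            neighbors.setdefault(dst, set()).add(src)
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for nxt in neighbors.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen - {start}

print(related_entities("Jordan"))
```

Treating edges as undirected is what lets a query about Jordan reach Alex even though the stored edge points the other way (Alex REPORTS_TO Jordan).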

Visual knowledge graph explorer in the dashboard.

January 5, 2026 · v1.4.0 · New Feature

Document Ingestion

Upload PDFs, markdown files, and plain text documents. Hebbrix chunks, embeds, and indexes them automatically.

Intelligent chunking that preserves paragraph and section boundaries.
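Boundary-preserving chunking can be sketched as packing whole paragraphs into chunks up to a size budget, never splitting a paragraph. The size limit and splitting rule below are illustrative assumptions, not Hebbrix's actual chunker.

```python
# Sketch of paragraph-preserving chunking: paragraphs are packed into
# chunks up to a character budget without splitting any paragraph.
# The budget and blank-line paragraph rule are illustrative assumptions.
def chunk_document(text: str, max_chars: int = 200) -> list:
    chunks, current, size = [], [], 0
    for para in (p.strip() for p in text.split("\n\n") if p.strip()):
        # start a new chunk if adding this paragraph would exceed the budget
        if current and size + len(para) > max_chars:
            chunks.append("\n\n".join(current))
            current, size = [], 0
        current.append(para)
        size += len(para)
    if current:
        chunks.append("\n\n".join(current))
    return chunks

doc = ("Intro paragraph about Hebbrix.\n\n"
       + ("Details. " * 30) + "\n\n"
       + "Closing notes.")
chunks = chunk_document(doc)
print(len(chunks))
```

Because no paragraph is ever split, each embedded chunk stays a coherent unit of meaning, which tends to improve retrieval quality over fixed-width slicing.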

Document memories are searchable alongside conversation memories, giving your agent a unified knowledge base.

Want to try the latest?

Every update listed here is live and available right now. Create a free account and start building.