Your team's best decisions
are buried at message #4,217.
Six months ago someone in #backend said “let's use Postgres.” The wiki still says MySQL. The onboarding doc links to a Confluence page that hasn't been updated since 2023. The truth lives in Slack, 400 messages deep in a channel nobody scrolls anymore.
Ran the benchmarks. Postgres handles our query patterns 3x better than MySQL for the new analytics pipeline. Sharing the results in thread.
Makes sense. Migration path from MySQL is straightforward too. Let's go with Postgres.
+1. I'll update the infra terraform configs this sprint. Should be live in staging by Thursday.
Build #4821 passed on main (3m 12s)
This decision is now a searchable memory.
3 human messages captured. 1 bot message filtered. Names resolved from Slack user IDs.
Three things that sync
Channels, DMs, and files. The initial sync pulls your full history; incremental syncs every 30 minutes fetch only new messages. Access is read-only OAuth: nothing is ever written to your workspace.
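One way an incremental sync like this can work is by keeping a per-channel timestamp cursor and fetching only messages newer than it (Slack's `conversations.history` endpoint accepts an `oldest` parameter for exactly this). The sketch below shows the cursor logic in isolation, assuming Slack-style string timestamps like `"1700000200.000100"`; the function name and shape are illustrative, not the connector's actual code.

```python
def incremental_fetch(messages, last_seen_ts):
    """Return messages newer than the stored cursor, plus the advanced cursor.

    Slack timestamps are strings like "1700000200.000100"; compare numerically.
    """
    new = [m for m in messages if float(m["ts"]) > float(last_seen_ts)]
    # Advance the cursor to the newest message seen; keep it unchanged if
    # nothing new arrived, so the next poll resumes from the same point.
    cursor = max((m["ts"] for m in new), key=float, default=last_seen_ts)
    return new, cursor
```

On each 30-minute poll, only the messages past the stored cursor come back, so repeated syncs stay cheap no matter how large the channel's history is.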
What the connector actually does
Raw Slack exports are noisy: every deploy-bot notification, every Jira status change, every “X joined the channel” message. All of that gets filtered out before anything touches memory. What's left is the human conversation: the decisions, the context, the reasoning that mattered.
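In Slack's message schema, bot posts carry a `bot_id` and housekeeping events carry a `subtype` (such as `bot_message` or `channel_join`), which makes this kind of filter straightforward. A minimal sketch, assuming Slack-shaped message dicts; the exact subtype list the connector drops isn't specified above, so the set here is a plausible subset:

```python
# Subtypes treated as noise rather than conversation. Assumed list for
# illustration; a real filter would likely cover more housekeeping subtypes.
NOISE_SUBTYPES = {"bot_message", "channel_join", "channel_leave", "channel_topic"}

def is_human_message(msg):
    """Keep plain human messages; drop bot posts and channel housekeeping."""
    if msg.get("bot_id"):                    # posted by a bot or app integration
        return False
    if msg.get("subtype") in NOISE_SUBTYPES:  # joins, leaves, topic changes...
        return False
    return bool(msg.get("text", "").strip())  # ignore empty or whitespace-only
```

Run against the example conversation above, the three engineers' messages pass and the build notification does not.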
Messages from the same channel are grouped into a single memory, not stored one-per-message. This means when someone in #infrastructure says “let's switch to Kubernetes” and the reply is “agreed, but after Q2,” your agent captures both as one coherent thread. The full arc of a conversation, not a sentence ripped from context.
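The grouping step described above can be sketched in a few lines: collect messages per channel, order them by timestamp, and join them into one memory text. This is a simplified model (a real connector would also segment by thread and time window); the dict shape and function name are assumptions, not the product's API.

```python
from collections import defaultdict

def group_by_channel(messages):
    """Build one memory text per channel, with messages in timestamp order."""
    groups = defaultdict(list)
    for m in messages:
        groups[m["channel"]].append(m)

    memories = {}
    for channel, msgs in groups.items():
        msgs.sort(key=lambda m: float(m["ts"]))      # restore conversation order
        memories[channel] = "\n".join(m["text"] for m in msgs)
    return memories
```

Because the reply and the original message land in the same memory, a later search for “Kubernetes” surfaces both “let's switch to Kubernetes” and “agreed, but after Q2” together.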
Slack stores user mentions as opaque IDs like <@U04ABCDEF>. The connector resolves every one of these to real names using a cached user lookup, so your agent reads “Sarah mentioned we should use Redis” instead of a string of characters that means nothing outside Slack's UI.
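The cached lookup described above amounts to a regex substitution backed by a memo table, so each user ID is fetched at most once per sync. A sketch under those assumptions; `fetch_name` stands in for whatever wraps Slack's `users.info` endpoint, and `UserResolver` is an illustrative name, not the connector's class:

```python
import re

# Slack mention IDs look like <@U04ABCDEF>: "U" followed by alphanumerics.
MENTION_RE = re.compile(r"<@(U[A-Z0-9]+)>")

class UserResolver:
    """Resolve <@U...> mention IDs to display names, caching each lookup."""

    def __init__(self, fetch_name):
        self.fetch_name = fetch_name  # callable: user ID -> display name
        self.cache = {}

    def resolve(self, text):
        def repl(match):
            uid = match.group(1)
            if uid not in self.cache:            # hit the API only on a miss
                self.cache[uid] = self.fetch_name(uid)
            return self.cache[uid]
        return MENTION_RE.sub(repl, text)
```

With a lookup that maps `U04ABCDEF` to `Sarah`, the raw text `"<@U04ABCDEF> mentioned we should use Redis"` resolves to `"Sarah mentioned we should use Redis"`.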
The real cost of forgetting
A new engineer joins in March. In their second week they ask in #engineering why the service uses connection pooling instead of direct connections. Someone answered this exact question eight months ago, with benchmarks, in a thread that has since scrolled past 6,000 messages. Nobody remembers the thread. So the team explains it again. Thirty minutes, four people, one meeting that didn't need to happen.
Product and engineering agree in a DM to descope the real-time notifications feature to batch-only for launch. Good call. It saves two weeks. But the decision lives in a DM between two people. The spec doc still says real-time. Three weeks later, a frontend engineer builds the WebSocket layer that nobody asked for.
Someone shares a load test CSV in #infrastructure showing that the current database tops out at 2,000 concurrent connections. That file influences the decision to add a read replica. Six months later, capacity planning starts from scratch because nobody can find the original numbers.
Make Slack's memory permanent.
One OAuth connection. Read-only. Your agent gets every channel, DM, and file synced every 30 minutes. No setup beyond clicking the button.
