Troubleshooting / FAQ
Solutions to common issues and frequently asked questions.
Connection Issues
"Tenant or user not found" when connecting to the database
This error occurs when using a Supabase pooler URL instead of the direct connection. The pooler (on port 6543) uses a different authentication format than the direct connection (port 5432).
Fix: Use the direct connection URL with port 5432:
# Wrong (pooler) - causes "Tenant or user not found"
postgres://postgres.PROJECT_REF:password@pooler.supabase.com:6543/postgres
# Correct (direct connection)
postgres://postgres:password@db.PROJECT_REF.supabase.co:5432/postgres
SSL connection error in production
When NODE_ENV=production, the database connection requires SSL. If your PostgreSQL instance does not have SSL configured (common in local Docker setups), you will see SSL-related connection errors.
Fix: For local development, set NODE_ENV=development. For production with services like Supabase, SSL is already configured — just ensure your connection string does not include sslmode=disable.
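If you are wiring up the connection yourself, the usual pattern is to enable SSL only in production. A minimal sketch, assuming the node-postgres (pg) Pool client; Epitome's actual connection code may differ:
import { Pool } from "pg";

// Sketch: require SSL only when NODE_ENV=production.
// rejectUnauthorized: false is a common relaxation for managed providers
// whose certificate chains are not in the default trust store.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl:
    process.env.NODE_ENV === "production"
      ? { rejectUnauthorized: false }
      : false,
});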
"pgvector extension not found" / "type 'vector' does not exist"
The pgvector extension is required but is not installed by default on all PostgreSQL installations. Managed services like Fly.io Postgres do not include pgvector.
Fix: Use one of these options:
- Use the pgvector/pgvector:pg17 Docker image (includes pgvector)
- Use Supabase (includes pgvector on all plans)
- Install pgvector manually: apt install postgresql-17-pgvector
Database connection pool exhaustion
If you see "too many clients" or connection timeout errors, the connection pool is exhausted. This typically happens when running tests in parallel or when the pool size is too small for the workload.
Fix: Increase the pool size in the DATABASE_URL or reduce concurrent connections. For testing, ensure fileParallelism: false and singleFork: true are set in vitest.config.ts.
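For reference, here is what a vitest.config.ts with both options set looks like; in current Vitest versions, singleFork sits under poolOptions.forks:
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    // Run test files one at a time so they share the connection pool
    fileParallelism: false,
    poolOptions: {
      forks: {
        // Execute all tests in a single forked process
        singleFork: true,
      },
    },
  },
});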
MCP Connection Issues
Agent cannot connect to the MCP server
Verify the MCP URL is correct and the server is running. For self-hosted instances, ensure the agent can reach the server on the network.
Checklist:
- Is the API server running? Check with curl http://localhost:3000/health
- Is the MCP URL correct? It should end with your token, not a trailing slash
- Is the transport type correct? Epitome uses Streamable HTTP, not stdio
- Is the firewall allowing connections on the API port?
# Test the MCP endpoint directly
curl -X POST http://localhost:3000/mcp \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{"jsonrpc":"2.0","method":"tools/list","id":1}'"CONSENT_REQUIRED" error when calling a tool
The agent has not been granted consent to access the requested resource. Each agent must be explicitly authorized by the user through the dashboard.
Fix: Open the Epitome dashboard, go to the Agents page, find the agent, and grant the required permissions (e.g., "vectors: read_write" for store_memory and search_memory).
"Tool not found" error
The agent is trying to call a tool that does not exist. This can happen if the agent's tool list is cached or outdated.
Fix: Verify the tool name matches exactly (e.g., store_memory, not storeMemory). Restart the agent to refresh its tool list. The available tools are: read_profile, update_profile, query_table, insert_record, search_memory, store_memory, query_graph, get_entity_neighbors, log_activity.
Agent sees empty results from search_memory
If search returns empty results when you know there are stored memories, the issue may be the similarity threshold or the embedding model configuration.
Fix: Lower the min_similarity parameter (try 0.2 or 0.1). Verify that the OPENAI_EMBEDDING_MODEL environment variable matches the model used to create the embeddings. If you changed models, existing embeddings will not match new queries.
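You can also test this directly against the MCP endpoint without going through the agent. A sketch using fetch; the exact arguments shape for search_memory is an assumption here, so confirm it via tools/list:
// Hypothetical direct tools/call; the arguments shape is assumed, not documented here.
const res = await fetch("http://localhost:3000/mcp", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer YOUR_API_KEY",
  },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "tools/call",
    params: {
      name: "search_memory",
      arguments: { query: "coffee preferences", min_similarity: 0.1 },
    },
  }),
});
console.log(await res.json());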
Performance Issues
Slow vector search queries
Vector search performance depends on the HNSW index. If queries are slow, the index may not have been created or may need tuning.
Fix: Verify the HNSW index exists:
-- Check if the HNSW index exists
SELECT indexname FROM pg_indexes
WHERE tablename = 'vector_entries'
AND indexdef LIKE '%hnsw%';
-- If missing, create it
CREATE INDEX idx_vector_entries_embedding
ON vector_entries USING hnsw (embedding vector_cosine_ops)
WITH (m = 16, ef_construction = 64);
Deadlocks in withUserSchema
Deadlocks can occur if withUserSchema() calls are nested. Each call acquires a database connection from the pool and opens a transaction. With a small pool, the inner call may wait for a connection that the outer call is holding.
Fix: Never nest withUserSchema() calls. Instead, use the *Internal(tx, ...) variant of service functions that accepts an existing transaction:
// Wrong - causes deadlock
await withUserSchema(userId, async (tx) => {
const profile = await getProfile(userId); // opens another withUserSchema!
});
// Correct - pass the transaction
await withUserSchema(userId, async (tx) => {
const profile = await getProfileInternal(tx); // reuses existing transaction
});
High latency on store_memory calls
The store_memory tool generates an embedding via the OpenAI API, which adds 200-400ms of latency. Entity extraction runs asynchronously and does not affect response time.
Fix: This latency is expected. If it is too high, check your OpenAI API response times separately. You can also batch multiple memories in quick succession — each call is independent.
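Because each call is independent, you can fire several concurrently and pay roughly one embedding round-trip for the whole batch. A sketch; storeMemory is a hypothetical helper and the arguments shape is assumed:
// Illustrative helper wrapping the tools/call request shown earlier.
async function storeMemory(content: string): Promise<void> {
  await fetch("http://localhost:3000/mcp", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer YOUR_API_KEY",
    },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: Date.now(),
      method: "tools/call",
      params: { name: "store_memory", arguments: { content } },
    }),
  });
}

// Three memories stored in roughly the time of one embedding call.
await Promise.all([
  storeMemory("Prefers oat milk"),
  storeMemory("Works Tuesdays from home"),
  storeMemory("Allergic to shellfish"),
]);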
Frequently Asked Questions
Can multiple agents share the same Epitome account?
Yes! That is the core design of Epitome. Each AI agent (Claude, ChatGPT, custom bots) gets its own API key and consent permissions, but they all read from and write to the same user data. This means Claude can see what ChatGPT stored, and vice versa. You control what each agent can access via the consent system.
How is my data protected from other users?
Epitome uses per-user PostgreSQL schemas for hard data isolation. Your data lives in a completely separate namespace from every other user. Even if there were a SQL injection bug, it could only access data within your own schema. See the Schema Isolation section for technical details.
Can I export all my data?
Yes. For self-hosted instances, you can use pg_dump to export your entire schema. For the hosted service, the dashboard Settings page provides a data export feature that generates a JSON archive of all your data (profile, memories, tables, knowledge graph, activity log).
What happens if I delete my account?
Account deletion drops your entire PostgreSQL schema (DROP SCHEMA CASCADE), removing all profile data, memories, tables, graph entities, activity logs, and consent records. This is irreversible. Your shared account record and API keys are also deleted.
Does Epitome send my data to OpenAI?
Epitome sends two types of data to OpenAI: (1) text content for embedding generation (via text-embedding-3-small), and (2) memory content for entity extraction (via gpt-5-mini). Both are API calls subject to OpenAI's data usage policies for API customers (your data is not used for training). For maximum privacy, you can self-host and swap in a local embedding model, though entity extraction currently requires OpenAI.
How many memories can Epitome store?
There is no hard limit. PostgreSQL with pgvector handles hundreds of thousands of 1536-dimensional vectors efficiently with HNSW indexing. For personal use (thousands to tens of thousands of memories), performance remains excellent. The hosted service may impose storage quotas per plan.
Can I use Epitome with agents that do not support MCP?
Yes. Epitome exposes a full REST API alongside the MCP server. Any agent or application that can make HTTP requests can use the REST API with API key authentication. The MCP server is simply a convenient wrapper for MCP-native agents like Claude.
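For example, a plain HTTP request with an API key is all that is needed; the endpoint path below is illustrative, not a documented route, so check the API reference for the real paths:
// Hypothetical REST call; /api/memories/search is an illustrative path only.
const res = await fetch(
  "https://your-epitome-host.example/api/memories/search?q=coffee",
  { headers: { Authorization: "Bearer YOUR_API_KEY" } },
);
console.log(await res.json());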