Self-Hosting Guide
Deploy your own Epitome instance with Docker Compose.
System Requirements
Epitome is designed to run on modest hardware. A single-server deployment handles most personal use cases comfortably.
| Component | Minimum | Recommended |
|---|---|---|
| CPU | 1 vCPU | 2+ vCPU |
| RAM | 1 GB | 2+ GB |
| Storage | 5 GB | 20+ GB SSD |
| OS | Linux, macOS, or Windows with Docker | |
Software requirements:
- Docker 24+ and Docker Compose v2
- Node.js 22+ (only needed if running without Docker)
- PostgreSQL 17 with pgvector 0.8+ (only needed if running without Docker)
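A quick way to confirm the prerequisites from a shell (the Node and psql checks only matter if you are running without Docker):

# Check installed versions
docker --version
docker compose version
node --version
psql --version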
Docker Compose Setup
The easiest way to deploy Epitome is with Docker Compose. The provided docker-compose.yml configures three services: PostgreSQL (with pgvector), the Hono API server, and the React dashboard.
# docker-compose.yml (simplified)
services:
db:
image: pgvector/pgvector:pg17
environment:
POSTGRES_DB: epitome
POSTGRES_USER: postgres
POSTGRES_PASSWORD: ${DB_PASSWORD}
volumes:
- pgdata:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
ports:
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 3s
retries: 5
api:
build: ./api
environment:
DATABASE_URL: postgres://postgres:${DB_PASSWORD}@db:5432/epitome
JWT_SECRET: ${JWT_SECRET}
OPENAI_API_KEY: ${OPENAI_API_KEY}
NODE_ENV: production
ports:
- "3000:3000"
depends_on:
db:
condition: service_healthy
dashboard:
build: ./dashboard
environment:
VITE_API_URL: http://localhost:3000
ports:
- "5173:8080"
depends_on:
- api
volumes:
pgdata:

Start all services:
# Start in the background
docker compose up -d
# View logs
docker compose logs -f
# Stop everything
docker compose down
# Stop and remove volumes (deletes all data!)
docker compose down -v
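After starting the stack, you can confirm all three services are running and that the database passed its health check:

# Show service status (the db service should report "healthy")
docker compose ps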
Environment Variables

Copy .env.example to .env and fill in the required values. Here is a complete reference of all environment variables:
| Variable | Required | Description |
|---|---|---|
| DATABASE_URL | Yes | PostgreSQL connection string. Example: postgres://user:pass@host:5432/epitome |
| JWT_SECRET | Yes | Secret key for signing JWTs. Use a random 64-character hex string. |
| OPENAI_API_KEY | Yes | OpenAI API key for embeddings (text-embedding-3-small) and entity extraction (gpt-5-mini). |
| OPENAI_MODEL | No | Model for entity extraction. Default: gpt-5-mini |
| OPENAI_EMBEDDING_MODEL | No | Model for embeddings. Default: text-embedding-3-small |
| PORT | No | API server port. Default: 3000 |
| NODE_ENV | No | Set to "production" for production deployments. Enables SSL for database connections. |
| GITHUB_CLIENT_ID | No | GitHub OAuth client ID for sign-in. Only needed if enabling GitHub auth. |
| GITHUB_CLIENT_SECRET | No | GitHub OAuth client secret. |
| GOOGLE_CLIENT_ID | No | Google OAuth client ID for sign-in. |
| GOOGLE_CLIENT_SECRET | No | Google OAuth client secret. |
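As a starting point, a minimal .env for the Docker Compose setup might look like this (every value below is a placeholder):

# .env — placeholder values; replace each one
DB_PASSWORD=change-me
# Generate with the openssl command below
JWT_SECRET=replace-with-64-char-hex
OPENAI_API_KEY=sk-your-key-here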
# Generate a secure JWT_SECRET
openssl rand -hex 32
Database Setup

If you are using Docker Compose, the database is initialized automatically via the init.sql file mounted into the PostgreSQL container. If you are running PostgreSQL separately, you need to initialize it manually.
# Connect to your PostgreSQL instance
psql -h localhost -U postgres -d epitome
# Run the init script
\i init.sql

The init script performs the following:
- Enables required extensions: vector, pg_trgm, uuid-ossp
- Creates the shared schema for cross-user data (users, accounts, sessions)
- Creates the template_user schema with all per-user tables, triggers, and indexes
- Sets up functions for cloning the template schema when new users sign up
Important: The pgvector extension must be available in your PostgreSQL installation. If you are using a managed database service, ensure it supports pgvector 0.8+. The Docker image pgvector/pgvector:pg17 includes it.
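To check whether pgvector is available on your server, and which version would be installed, you can query the extensions catalog:

# List the available and installed pgvector versions
psql -h localhost -U postgres -d epitome \
  -c "SELECT name, default_version, installed_version FROM pg_available_extensions WHERE name = 'vector';"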
Reverse Proxy
For production deployments, you should put Epitome behind a reverse proxy to handle TLS termination, HTTP/2, and caching. Here are example configurations for popular reverse proxies.
Caddy (Recommended)
Caddy automatically provisions TLS certificates via Let's Encrypt:
# Caddyfile
epitome.example.com {
# API
handle /v1/* {
reverse_proxy localhost:3000
}
# MCP endpoint
handle /mcp/* {
reverse_proxy localhost:3000
}
# Dashboard (catch-all for SPA routing)
handle {
reverse_proxy localhost:5173
}
}
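Before reloading Caddy it is worth validating the file; the path below assumes the default Caddyfile location, so adjust it to match your setup:

# Check the Caddyfile for syntax errors, then apply it
caddy validate --config /etc/caddy/Caddyfile
caddy reload --config /etc/caddy/Caddyfile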
Nginx

server {
listen 443 ssl http2;
server_name epitome.example.com;
ssl_certificate /etc/letsencrypt/live/epitome.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/epitome.example.com/privkey.pem;
# API and MCP
location /v1/ {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location /mcp/ {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_http_version 1.1;
proxy_set_header Connection "";
}
# Dashboard
location / {
proxy_pass http://127.0.0.1:5173;
proxy_set_header Host $host;
}
}
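As with Caddy, test the configuration before reloading:

# Validate the config, then reload without downtime
sudo nginx -t && sudo systemctl reload nginx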
Backup Strategy

Your Epitome database contains irreplaceable personal data. Set up regular backups.
Manual Backup
# Full database dump (compressed)
pg_dump -h localhost -U postgres -d epitome \
--format=custom --compress=9 \
-f epitome_backup_$(date +%Y%m%d_%H%M%S).dump
# Restore from backup
pg_restore -h localhost -U postgres -d epitome \
--clean --if-exists \
epitome_backup_20260217_143000.dump
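A dump can also be sanity-checked without restoring it, since pg_restore can list its contents:

# List the contents of a dump to verify it is readable
pg_restore --list epitome_backup_20260217_143000.dump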
Automated Daily Backups

Add a cron job for daily automated backups with 30-day retention:
# Add to crontab (crontab -e)
# Run backup daily at 2:00 AM, keep 30 days
0 2 * * * pg_dump -h localhost -U postgres -d epitome \
--format=custom --compress=9 \
-f /backups/epitome_$(date +\%Y\%m\%d).dump \
&& find /backups -name "epitome_*.dump" -mtime +30 -delete
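Note the escaped \% signs: cron treats a bare % as a newline. If the one-liner grows unwieldy, the same job can live in a small script (the path and retention here are illustrative):

#!/bin/sh
# /usr/local/bin/epitome-backup.sh — daily dump with 30-day retention (illustrative)
set -eu
BACKUP_DIR=/backups
pg_dump -h localhost -U postgres -d epitome \
  --format=custom --compress=9 \
  -f "$BACKUP_DIR/epitome_$(date +%Y%m%d).dump"
# Prune dumps older than 30 days
find "$BACKUP_DIR" -name "epitome_*.dump" -mtime +30 -delete

The crontab entry then becomes: 0 2 * * * /usr/local/bin/epitome-backup.sh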
Docker Volume Backup

If you are running PostgreSQL in Docker, you can also back up the volume directly. Stop the database first so the copied files are consistent:
# Stop the database so the data directory is not being written to
docker compose stop db

# Backup the Docker volume
docker run --rm \
  -v epitome_pgdata:/data \
  -v $(pwd)/backups:/backups \
  alpine tar czf /backups/pgdata_$(date +%Y%m%d).tar.gz -C /data .

docker compose start db

Recommendation: Store backups in at least two locations (e.g., local disk + cloud storage like S3 or Backblaze B2). Test your restore procedure periodically to ensure backups are valid.
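For the cloud copy, one approach is the AWS CLI (the bucket name below is a placeholder; rclone works similarly for Backblaze B2):

# Ship the newest dump to object storage (placeholder bucket)
aws s3 cp /backups/epitome_$(date +%Y%m%d).dump s3://your-backup-bucket/epitome/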