Self-Hosting Guide — Epitome Docs

Deploy your own Epitome instance with Docker Compose.

System Requirements

Epitome is designed to run on modest hardware. A single-server deployment handles most personal use cases comfortably.

| Component | Minimum | Recommended |
|-----------|---------|-------------|
| CPU | 1 vCPU | 2+ vCPU |
| RAM | 1 GB | 2+ GB |
| Storage | 5 GB | 20+ GB SSD |
| OS | Linux, macOS, or Windows with Docker | |

Software requirements:

  • Docker 24+ and Docker Compose v2
  • Node.js 22+ (only needed if running without Docker)
  • PostgreSQL 17 with pgvector 0.8+ (only needed if running without Docker)
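
As a quick preflight, it can help to confirm these tools are actually on your PATH before deploying. A convenience sketch (not part of Epitome; output format varies by platform):

```shell
# Preflight: report the version of each required tool, or flag it as missing
for tool in docker node psql; do
  if command -v "$tool" >/dev/null 2>&1; then
    printf '%s: %s\n' "$tool" "$("$tool" --version 2>/dev/null | head -n 1)"
  else
    printf '%s: not found\n' "$tool"
  fi
done
```

Note that `node` and `psql` only matter for the non-Docker path; "not found" for those is fine in a Docker-only deployment.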

Docker Compose Setup

The easiest way to deploy Epitome is with Docker Compose. The provided docker-compose.yml configures three services: PostgreSQL (with pgvector), the Hono API server, and the React dashboard.

```yaml
# docker-compose.yml (simplified)
services:
  db:
    image: pgvector/pgvector:pg17
    environment:
      POSTGRES_DB: epitome
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5

  api:
    build: ./api
    environment:
      DATABASE_URL: postgres://postgres:${DB_PASSWORD}@db:5432/epitome
      JWT_SECRET: ${JWT_SECRET}
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      NODE_ENV: production
    ports:
      - "3000:3000"
    depends_on:
      db:
        condition: service_healthy

  dashboard:
    build: ./dashboard
    environment:
      VITE_API_URL: http://localhost:3000
    ports:
      - "5173:8080"
    depends_on:
      - api

volumes:
  pgdata:
```

Start all services:

```bash
# Start in the background
docker compose up -d

# View logs
docker compose logs -f

# Stop everything
docker compose down

# Stop and remove volumes (deletes all data!)
docker compose down -v
```

Environment Variables

Copy .env.example to .env and fill in the required values. Here is a complete reference of all environment variables:

| Variable | Required | Description |
|----------|----------|-------------|
| DATABASE_URL | Yes | PostgreSQL connection string. Example: postgres://user:pass@host:5432/epitome |
| JWT_SECRET | Yes | Secret key for signing JWTs. Use a random 64-character hex string. |
| OPENAI_API_KEY | Yes | OpenAI API key for embeddings (text-embedding-3-small) and entity extraction (gpt-5-mini). |
| OPENAI_MODEL | No | Model for entity extraction. Default: gpt-5-mini |
| OPENAI_EMBEDDING_MODEL | No | Model for embeddings. Default: text-embedding-3-small |
| PORT | No | API server port. Default: 3000 |
| NODE_ENV | No | Set to "production" for production deployments. Enables SSL for database connections. |
| GITHUB_CLIENT_ID | No | GitHub OAuth client ID for sign-in. Only needed if enabling GitHub auth. |
| GITHUB_CLIENT_SECRET | No | GitHub OAuth client secret. |
| GOOGLE_CLIENT_ID | No | Google OAuth client ID for sign-in. |
| GOOGLE_CLIENT_SECRET | No | Google OAuth client secret. |

Generate a strong JWT_SECRET with OpenSSL:

```bash
# Generate a secure JWT_SECRET
openssl rand -hex 32
```
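
A filled-in .env for the Compose file above might look like this (every value below is a placeholder, not a working credential):

```bash
# .env — placeholders only; replace each value before deploying
DB_PASSWORD=change-me
JWT_SECRET=replace-with-output-of-openssl-rand-hex-32
OPENAI_API_KEY=sk-replace-me
```

Only the variables referenced by docker-compose.yml need to live here; the rest can be set directly in the service's environment block.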

Database Setup

If you are using Docker Compose, the database is initialized automatically via the init.sql file mounted into the PostgreSQL container. If you are running PostgreSQL separately, you need to initialize it manually.

```bash
# Connect to your PostgreSQL instance
psql -h localhost -U postgres -d epitome

# Then, at the psql prompt, run the init script
\i init.sql
```

The init script performs the following:

  1. Enables required extensions: vector, pg_trgm, uuid-ossp
  2. Creates the shared schema for cross-user data (users, accounts, sessions)
  3. Creates the template_user schema with all per-user tables, triggers, and indexes
  4. Sets up functions for cloning the template schema when new users sign up

Important: The pgvector extension must be available in your PostgreSQL installation. If you are using a managed database service, ensure it supports pgvector 0.8+. The Docker image pgvector/pgvector:pg17 includes it.

Reverse Proxy

For production deployments, you should put Epitome behind a reverse proxy to handle TLS termination, HTTP/2, and caching. Here are example configurations for popular reverse proxies.

Caddy (Recommended)

Caddy automatically provisions TLS certificates via Let's Encrypt:

```text
# Caddyfile
epitome.example.com {
    # API
    handle /v1/* {
        reverse_proxy localhost:3000
    }

    # MCP endpoint
    handle /mcp/* {
        reverse_proxy localhost:3000
    }

    # Dashboard (catch-all for SPA routing)
    handle {
        reverse_proxy localhost:5173
    }
}
```

Nginx

```nginx
server {
    listen 443 ssl http2;
    server_name epitome.example.com;

    ssl_certificate     /etc/letsencrypt/live/epitome.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/epitome.example.com/privkey.pem;

    # API and MCP
    location /v1/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /mcp/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }

    # Dashboard
    location / {
        proxy_pass http://127.0.0.1:5173;
        proxy_set_header Host $host;
    }
}
```

Backup Strategy

Your Epitome database contains irreplaceable personal data. Set up regular backups.

Manual Backup

```bash
# Full database dump (compressed)
pg_dump -h localhost -U postgres -d epitome \
  --format=custom --compress=9 \
  -f epitome_backup_$(date +%Y%m%d_%H%M%S).dump

# Restore from backup
pg_restore -h localhost -U postgres -d epitome \
  --clean --if-exists \
  epitome_backup_20260217_143000.dump
```

Automated Daily Backups

Add a cron job for daily automated backups with 30-day retention:

```bash
# Add to crontab (crontab -e)
# Run backup daily at 2:00 AM, keep 30 days.
# Note: crontab does not support line continuations — keep the entry on one line.
0 2 * * * pg_dump -h localhost -U postgres -d epitome --format=custom --compress=9 -f /backups/epitome_$(date +\%Y\%m\%d).dump && find /backups -name "epitome_*.dump" -mtime +30 -delete
```
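
The `find ... -mtime +30 -delete` clause is what enforces retention. It can be sanity-checked on throwaway files before pointing it at real backups (the snippet below uses a temporary directory and GNU `touch -d`, as found on Linux):

```shell
# Demonstrate the retention rule on dummy files in a temp directory
tmp=$(mktemp -d)
touch "$tmp/epitome_recent.dump"                  # modified now: kept
touch -d '40 days ago' "$tmp/epitome_stale.dump"  # older than 30 days: deleted
find "$tmp" -name "epitome_*.dump" -mtime +30 -delete
ls "$tmp"    # only epitome_recent.dump remains
rm -rf "$tmp"
```
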

Docker Volume Backup

If you are running PostgreSQL in Docker, you can also back up the volume directly:

```bash
# Backup the Docker volume
docker run --rm \
  -v epitome_pgdata:/data \
  -v "$(pwd)/backups:/backups" \
  alpine tar czf /backups/pgdata_$(date +%Y%m%d).tar.gz -C /data .
```

Recommendation: Store backups in at least two locations (e.g., local disk + cloud storage like S3 or Backblaze B2). Test your restore procedure periodically to ensure backups are valid.