Database Setup
Vemetric uses two databases: PostgreSQL for application data and ClickHouse for analytics events. This guide covers setup, migrations, and maintenance.
PostgreSQL Setup
Docker Compose (Recommended)
The easiest way to run PostgreSQL locally:
postgres:
  image: postgres:17-alpine
  container_name: vemetric-postgres
  environment:
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: password
    POSTGRES_DB: vemetric
  ports:
    - '5433:5432'
  volumes:
    - postgres_data:/var/lib/postgresql/data
Start PostgreSQL:
docker-compose up -d postgres
PostgreSQL is exposed on port 5433 (not 5432) to avoid conflicts with existing PostgreSQL installations.
Manual Installation
For production or manual setup:
Ubuntu/Debian
macOS
Docker
# Add PostgreSQL repository
sudo apt install -y postgresql-common
sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh
# Install PostgreSQL 17
sudo apt install -y postgresql-17
# Start service
sudo systemctl start postgresql
sudo systemctl enable postgresql
# Create database and user
sudo -u postgres psql
CREATE DATABASE vemetric;
CREATE USER vemetric WITH PASSWORD 'your-secure-password';
GRANT ALL PRIVILEGES ON DATABASE vemetric TO vemetric;
# Install via Homebrew
brew install postgresql@17
# Start service
brew services start postgresql@17
# Create database
createdb vemetric
docker run -d \
--name vemetric-postgres \
-e POSTGRES_USER=postgres \
-e POSTGRES_PASSWORD=password \
-e POSTGRES_DB=vemetric \
-p 5433:5432 \
-v postgres_data:/var/lib/postgresql/data \
postgres:17-alpine
Update your .env file:
DATABASE_URL="postgresql://postgres:password@localhost:5433/vemetric?schema=public"
For production, use a strong password and consider SSL:
DATABASE_URL="postgresql://user:password@db.example.com:5432/vemetric?schema=public&sslmode=require"
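A connection string like this bundles credentials, host, port, database name, and query parameters into one URL. If you ever need to pull those parts apart in a script (for health checks or debugging), standard URL parsing is enough; this is an illustrative sketch, not part of Vemetric's codebase:

```python
from urllib.parse import urlsplit, parse_qs

# Illustrative: split a DATABASE_URL-style connection string into its parts.
url = "postgresql://user:password@db.example.com:5432/vemetric?schema=public&sslmode=require"
parts = urlsplit(url)
params = parse_qs(parts.query)

print(parts.hostname)          # db.example.com
print(parts.port)              # 5432
print(parts.path.lstrip("/"))  # vemetric (the database name)
print(params["sslmode"][0])    # require
```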
PostgreSQL Migrations
Vemetric uses Prisma for PostgreSQL schema management.
Generate Prisma Client
Generate the Prisma client from the schema:
cd packages/database
bun run db:generate
This creates the TypeScript client for database access.
Run Migrations
View Schema
The schema is defined in packages/database/prisma/schema.prisma:
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model User {
  id            String  @id @unique
  email         String  @unique
  emailVerified Boolean @default(false)
  name          String?
  // ...
}
Apply Migrations
Deploy all pending migrations:
cd packages/database
bun run db:deploy
This runs prisma migrate deploy, which applies migrations from prisma/migrations/.
Verify
Connect to the database and verify tables:
# Docker
docker exec -it vemetric-postgres psql -U postgres -d vemetric
# Local
psql -d vemetric
-- List all tables
\dt
-- View the User table (quoted because Prisma creates it with a capital U)
\d "User"
Create New Migrations
When modifying the schema:
cd packages/database
# Edit prisma/schema.prisma
# Create migration (development)
bun run db:migrate
# You will be prompted to name your migration
# Apply migration
bun run db:deploy
Always test migrations in a development environment before applying to production. Prisma migrations are not automatically reversible.
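Conceptually, `prisma migrate deploy` reconciles the migration folders on disk against the rows Prisma records in its `_prisma_migrations` bookkeeping table, and applies only the difference. A rough sketch of that reconciliation (the folder names below are hypothetical examples):

```python
# Sketch: which migrations would `prisma migrate deploy` still need to apply?
# Folder names are hypothetical; Prisma uses timestamp-prefixed directories.
local_migrations = [
    "20240101120000_init",
    "20240215093000_add_user_avatar",
    "20240301110000_add_project_settings",
]
# Migrations already recorded as applied in _prisma_migrations:
applied = {"20240101120000_init", "20240215093000_add_user_avatar"}

pending = [m for m in local_migrations if m not in applied]
print(pending)  # ['20240301110000_add_project_settings']
```

This is why deploys are idempotent: re-running the command with no new folders leaves `pending` empty and does nothing.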
ClickHouse Setup
Docker Compose (Recommended)
Run ClickHouse with Docker:
clickhouse:
  image: clickhouse/clickhouse-server:23.10-alpine
  container_name: vemetric-clickhouse
  environment:
    - CLICKHOUSE_DB=vemetric
    - CLICKHOUSE_PASSWORD=password
  ports:
    - '8123:8123' # HTTP
    - '9000:9000' # Native
    - '9363:9363' # Metrics
  volumes:
    - clickhouse_data:/var/lib/clickhouse
    - clickhouse_logs:/var/log/clickhouse-server
  ulimits:
    nproc: 65535
    nofile:
      soft: 262144
      hard: 262144
Start ClickHouse:
docker-compose up -d clickhouse
Manual Installation
Ubuntu/Debian
macOS
Docker
# Add ClickHouse repository
sudo apt-get install -y apt-transport-https ca-certificates dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754
echo "deb https://packages.clickhouse.com/deb stable main" | sudo tee /etc/apt/sources.list.d/clickhouse.list
sudo apt-get update
# Install ClickHouse
sudo apt-get install -y clickhouse-server clickhouse-client
# Start service
sudo systemctl start clickhouse-server
sudo systemctl enable clickhouse-server
# Create database
clickhouse-client
CREATE DATABASE vemetric;
# Install via Homebrew
brew install clickhouse
# Start service
brew services start clickhouse
# Create database
clickhouse-client
CREATE DATABASE vemetric;
docker run -d \
--name vemetric-clickhouse \
-e CLICKHOUSE_DB=vemetric \
-e CLICKHOUSE_PASSWORD=password \
-p 8123:8123 \
-p 9000:9000 \
--ulimit nofile=262144:262144 \
clickhouse/clickhouse-server:23.10-alpine
Update your .env file:
# ClickHouse Migrations
CH_MIGRATIONS_HOST=http://localhost:8123
CH_MIGRATIONS_USER=default
CH_MIGRATIONS_PASSWORD=password
CH_MIGRATIONS_DB=vemetric
# ClickHouse Client
CLICKHOUSE_URL=http://localhost:8123
CLICKHOUSE_USER=default
CLICKHOUSE_PASSWORD=password
CLICKHOUSE_DATABASE=vemetric
Test Connection
# HTTP API
curl http://localhost:8123/ping
# Expected: Ok.
# Query
curl "http://localhost:8123/?query=SELECT%20version()"
# Expected: ClickHouse version
ClickHouse Migrations
Vemetric uses clickhouse-migrations for schema management.
Migration Files
Migrations are stored in packages/clickhouse/migrations/:
packages/clickhouse/migrations/
├── 1_init.sql # Initial tables (event, session, device)
├── 2_user.sql # User table
├── 3_remove_pageleave.sql # Schema changes
├── 4_user_firstseen.sql
├── 5_referrer_url.sql
├── 6_subdomain_to_origin.sql
├── 7_geodata_latlong.sql
├── 8_session_user_identifier.sql
├── 9_referrer.sql
├── 10_event_headers.sql
├── 11_session_v2.sql
├── 12_import_source.sql
└── 13_user_avatarurl.sql
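Note the numeric prefixes: the migration runner is expected to apply files in numeric version order, which differs from plain lexicographic order once the counter passes 9 (`10_...` sorts before `2_...` as a string). A quick sketch of the distinction:

```python
# Migration filenames must sort by their numeric prefix, not as strings.
files = ["1_init.sql", "2_user.sql", "10_event_headers.sql", "11_session_v2.sql"]

lexicographic = sorted(files)
numeric = sorted(files, key=lambda f: int(f.split("_", 1)[0]))

print(lexicographic)  # ['10_event_headers.sql', '11_session_v2.sql', '1_init.sql', '2_user.sql']
print(numeric)        # ['1_init.sql', '2_user.sql', '10_event_headers.sql', '11_session_v2.sql']
```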
Run Migrations
Local Development
cd packages/clickhouse
bun run migrate-local
This uses environment variables from .env.
Production
Set environment variables and run:
export CH_MIGRATIONS_HOST=http://clickhouse.example.com:8123
export CH_MIGRATIONS_USER=default
export CH_MIGRATIONS_PASSWORD=your-secure-password
export CH_MIGRATIONS_DB=vemetric
cd packages/clickhouse
bun run migrate-prod
Verify
Check that tables were created:
clickhouse-client --query="SHOW TABLES FROM vemetric"
The output should list the migrated tables (device, event, session, user, and so on).
Initial Schema
The first migration (1_init.sql) creates core tables:
event Table
session Table
device Table
CREATE TABLE IF NOT EXISTS event (
  `sign` Int8,
  `projectId` UInt64,
  `userId` UInt64,
  `sessionId` String,
  `deviceId` UInt64,
  `id` String,
  `name` String,
  `createdAt` DateTime64(3, 'UTC') DEFAULT NOW64(3),
  `isPageView` Int8 DEFAULT 0,
  `countryCode` LowCardinality(FixedString(2)),
  `city` String,
  `osName` LowCardinality(String),
  `clientName` LowCardinality(String),
  `deviceType` LowCardinality(String),
  `userIdentifier` String,
  `userDisplayName` String,
  `customData` String,
  -- ... more columns
)
ENGINE = CollapsingMergeTree(sign)
PARTITION BY toYYYYMM(createdAt)
ORDER BY (projectId, userId, createdAt)
SETTINGS index_granularity = 8192;
CREATE TABLE IF NOT EXISTS session (
  `deleted` Int8 DEFAULT 0,
  `projectId` UInt64,
  `userId` UInt64,
  `id` String,
  `startedAt` DateTime64(3, 'UTC') DEFAULT NOW64(3),
  `endedAt` DateTime64(3, 'UTC') DEFAULT NOW64(3),
  `duration` UInt32 DEFAULT 0,
  `countryCode` LowCardinality(FixedString(2)),
  `city` String,
  -- ... more columns
)
ENGINE = ReplacingMergeTree
PARTITION BY toYYYYMM(startedAt)
ORDER BY (projectId, userId, startedAt)
SETTINGS index_granularity = 8192;
CREATE TABLE IF NOT EXISTS device (
  `sign` Int8,
  `projectId` UInt64,
  `userId` UInt64,
  `id` UInt64,
  `createdAt` DateTime64(3, 'UTC') DEFAULT NOW64(3),
  `osName` LowCardinality(String),
  `osVersion` LowCardinality(String),
  `clientName` LowCardinality(String),
  `deviceType` LowCardinality(String),
  -- ... more columns
)
ENGINE = CollapsingMergeTree(sign)
ORDER BY (projectId, userId, id, createdAt)
SETTINGS index_granularity = 8192;
Table Engines:
CollapsingMergeTree: Supports updates via sign column (+1 for insert, -1 for delete)
ReplacingMergeTree: Automatically deduplicates rows with the same primary key
Partitioning: Data is partitioned by month for efficient queries and data retention
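The sign-based update model of CollapsingMergeTree can be simulated outside ClickHouse: an update is written as a cancel row (sign = -1) matching the old version plus a fresh row (sign = +1), and background merges collapse pairs whose signs cancel. A minimal sketch of that collapsing logic:

```python
from collections import defaultdict

# Each entry: (row payload, sign). An "update" writes a -1 row for the old
# version and a +1 row for the new one; collapsing drops pairs that cancel.
rows = [
    (("event-1", "pageview", "/home"), +1),
    (("event-1", "pageview", "/home"), -1),    # cancel the old version
    (("event-1", "pageview", "/pricing"), +1), # new version
]

net = defaultdict(int)
for payload, sign in rows:
    net[payload] += sign

# Only rows whose signs did not cancel survive the merge.
collapsed = [payload for payload, s in net.items() if s != 0]
print(collapsed)  # [('event-1', 'pageview', '/pricing')]
```

This also explains why queries against CollapsingMergeTree tables typically aggregate with `sum(sign)` or multiply measures by `sign`: until a merge runs, both the +1 and -1 rows are physically present.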
Database Maintenance
PostgreSQL
Create regular backups:
# Docker
docker exec vemetric-postgres pg_dump -U postgres vemetric > backup.sql
# Local
pg_dump vemetric > backup.sql
Restore:
# Docker
docker exec -i vemetric-postgres psql -U postgres vemetric < backup.sql
# Local
psql vemetric < backup.sql
Reclaim storage and update statistics with VACUUM:
VACUUM ANALYZE;
Schedule this weekly in production.
For high-traffic deployments, use PgBouncer:
docker run -d \
--name pgbouncer \
-e POSTGRESQL_HOST=postgres \
-e POSTGRESQL_DATABASE=vemetric \
-e PGBOUNCER_POOL_MODE=transaction \
-p 6432:6432 \
bitnami/pgbouncer:latest
Update DATABASE_URL to use PgBouncer port 6432.
ClickHouse
Implement TTL for automatic data deletion:
ALTER TABLE event MODIFY TTL createdAt + INTERVAL 90 DAY;
This deletes events older than 90 days.
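The TTL clause is evaluated per row: a row becomes eligible for deletion once `createdAt + INTERVAL 90 DAY` is in the past. The same cutoff arithmetic, sketched outside ClickHouse:

```python
from datetime import datetime, timedelta, timezone

TTL = timedelta(days=90)

def is_expired(created_at: datetime, now: datetime) -> bool:
    # Mirrors the TTL condition: createdAt + INTERVAL 90 DAY < now()
    return created_at + TTL < now

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
print(is_expired(datetime(2025, 1, 1, tzinfo=timezone.utc), now))  # True
print(is_expired(datetime(2025, 5, 1, tzinfo=timezone.utc), now))  # False
```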
Manually trigger merges and optimizations:
OPTIMIZE TABLE event FINAL;
OPTIMIZE TABLE session FINAL;
Run during low-traffic periods.
Check table sizes:
SELECT
  table,
  formatReadableSize(sum(bytes)) AS size,
  sum(rows) AS rows
FROM system.parts
WHERE database = 'vemetric'
  AND active
GROUP BY table;
ClickHouse supports incremental backups:
BACKUP TABLE vemetric.event TO Disk('backups', 'event_backup');
For Docker, use volume snapshots or export to S3.
Redis
Redis is configured with AOF (Append-Only File) persistence:
command: redis-server --appendonly yes
Data is saved to /data volume.
Create snapshot: docker exec vemetric-redis redis-cli BGSAVE
Copy RDB file: docker cp vemetric-redis:/data/dump.rdb backup/
Check memory usage: docker exec vemetric-redis redis-cli INFO memory
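The INFO reply is a plain list of `key:value` lines (with `#` section headers), so it is straightforward to parse when scripting memory checks. A sketch against a trimmed sample of the output format:

```python
# Trimmed sample of `redis-cli INFO memory` output; the real reply has many
# more fields.
info = """# Memory
used_memory:1048576
used_memory_human:1.00M
maxmemory:0
"""

stats = {}
for line in info.splitlines():
    # Skip section headers and blank lines; each data line is key:value.
    if line and not line.startswith("#"):
        key, _, value = line.partition(":")
        stats[key] = value

print(stats["used_memory_human"])  # 1.00M
```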
Troubleshooting
PostgreSQL: Connection refused
Check if PostgreSQL is running: docker-compose ps postgres
Verify port in DATABASE_URL matches exposed port (5433)
Check firewall rules if connecting remotely
ClickHouse: Code 516 - Authentication failed
Verify CLICKHOUSE_PASSWORD matches Docker environment variable
Default user is default, not admin or root
Prisma: Migration already applied
Check migration status: bunx prisma migrate status
Reset development database: bunx prisma migrate reset (WARNING: deletes all data)
ClickHouse: Table already exists
Migrations are idempotent and use CREATE TABLE IF NOT EXISTS
Check the migration state recorded by clickhouse-migrations in its bookkeeping table
Manually drop table if needed: DROP TABLE IF EXISTS event
Production Recommendations
Use Managed Databases Consider managed services:
PostgreSQL : AWS RDS, Google Cloud SQL, Azure Database
ClickHouse : ClickHouse Cloud, Altinity.Cloud
Redis : AWS ElastiCache, Redis Cloud
Enable SSL Always use SSL/TLS for database connections in production:
DATABASE_URL=postgresql://...?sslmode=require
Automated Backups Schedule daily backups with retention policies. Store in separate location (S3, GCS, etc.).
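A retention policy boils down to keeping backups newer than a cutoff date and deleting the rest. A sketch of that selection, assuming a hypothetical `vemetric-YYYY-MM-DD.sql` naming scheme (adapt the parsing to whatever naming your backup job uses):

```python
from datetime import date, timedelta

RETENTION_DAYS = 14

def backups_to_keep(backups: list[str], today: date) -> list[str]:
    cutoff = today - timedelta(days=RETENTION_DAYS)
    keep = []
    for name in backups:
        # Hypothetical naming scheme: vemetric-YYYY-MM-DD.sql
        day = date.fromisoformat(name.removeprefix("vemetric-").removesuffix(".sql"))
        if day >= cutoff:
            keep.append(name)
    return keep

backups = ["vemetric-2025-05-01.sql", "vemetric-2025-05-20.sql", "vemetric-2025-05-30.sql"]
print(backups_to_keep(backups, today=date(2025, 5, 31)))
```

Anything not in the returned list is a candidate for deletion; in practice you would run this against the object-store listing rather than local filenames.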
Monitoring & Alerts Monitor:
Database disk usage
Query performance
Connection pool saturation
Replication lag (if using replicas)
Next Steps
Monitoring Set up monitoring and observability for your deployment
Architecture Learn more about Vemetric’s architecture and data flow