A service that helps implement event-driven architecture by capturing PostgreSQL database changes and publishing them to message brokers.

To keep data consistent across the system, we use transactional messaging: events are published in the same transaction as the domain model change.

The service lets you subscribe to changes in a PostgreSQL database via its logical decoding capability and publish them to Redis or other message brokers.
- Flexible Publication Strategies: Single publication for all tables or individual publications per table
- Auto-sync Publications: Automatically creates and manages PostgreSQL publications
- Redis Publishing: Events published to Redis with configurable topics
- Table Mapping: Custom topic names for different tables
- Real-time Processing: Low-latency event processing using PostgreSQL WAL
- Configuration-driven: YAML-based configuration for easy management
- Logic of Work
- Event Publishing
- Configuration
- Publication Strategies
- Database Setup
- Environment Variables
- Usage
- Tools & Scripts
- Documentation
- Docker & CI/CD
To receive events about data changes in our PostgreSQL DB, we use the standard logical decoding module (pgoutput). This module converts changes read from the WAL into a logical replication protocol, and we consume all this information on our side.
Then we filter out only the events we need and publish them to Redis with configurable topics.
The service currently supports Redis as the message broker.
The service publishes the following structure to Redis topics:

```go
{
    ID        uuid.UUID      // unique ID
    Schema    string
    Table     string
    Action    string         // insert, update, delete
    Data      map[string]any // new data
    DataOld   map[string]any // old data (for updates/deletes)
    EventTime time.Time      // commit time
}
```

Topic structure: `{prefix_watch_list}.{mapping}`
Messages are published to the broker with at-least-once semantics, so consumers should be prepared to handle duplicates.
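The wire format is not specified here; assuming events are serialized as JSON (an assumption, as are the `Event` struct tags and the `handle` helper below), a consumer could decode a payload and deduplicate by `ID` to cope with at-least-once delivery:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Event mirrors the structure Ditto publishes. Field names and JSON keys
// are assumed; adjust them to the actual wire format.
type Event struct {
	ID        string         `json:"ID"`
	Schema    string         `json:"Schema"`
	Table     string         `json:"Table"`
	Action    string         `json:"Action"` // insert, update, delete
	Data      map[string]any `json:"Data"`
	DataOld   map[string]any `json:"DataOld"`
	EventTime time.Time      `json:"EventTime"`
}

// seen tracks processed event IDs so redelivered messages are skipped.
// A real consumer would persist this (or use an idempotent write) instead.
var seen = map[string]bool{}

// handle decodes one payload and returns nil for duplicate deliveries.
func handle(payload []byte) (*Event, error) {
	var e Event
	if err := json.Unmarshal(payload, &e); err != nil {
		return nil, err
	}
	if seen[e.ID] {
		return nil, nil // duplicate delivery; already processed
	}
	seen[e.ID] = true
	return &e, nil
}

func main() {
	payload := []byte(`{"ID":"a1","Schema":"public","Table":"deposit_events","Action":"insert","Data":{"amount":100}}`)
	e, _ := handle(payload)
	fmt.Println(e.Table, e.Action) // deposit_events insert
	dup, _ := handle(payload)
	fmt.Println(dup == nil) // true: duplicate skipped
}
```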
Configuration is managed via the `config/config.yml` file:

```yaml
# Publication strategy: "single" (default) or "multiple"
publication_strategy: "single"
publication_prefix: "ditto" # used for multiple strategy

# Redis topic prefix
prefix_watch_list: "events"

# Tables to watch
watch_list:
  deposit_events:
    mapping: "deposits" # custom topic name (optional)
  withdraw_events:
    mapping: "withdrawals"
  loan_events:
    mapping: "loans"
```

| Field | Description | Default |
|---|---|---|
| `publication_strategy` | `"single"` or `"multiple"` | `"single"` |
| `publication_prefix` | Prefix for multiple publications | `"ditto"` |
| `prefix_watch_list` | Redis topic prefix | `""` |
| `watch_list` | Tables to monitor | `{}` |
| `mapping` | Custom topic name for table | table name |
Ditto supports two publication strategies.

Single (default): one publication for all tables.

- Simple and efficient
- Lower resource usage
- Easy to maintain

```yaml
publication_strategy: "single" # or omit (default)
```

Results in one `ditto` publication containing all specified tables.

Multiple: an individual publication per table.

- Better fault isolation
- More flexible scaling
- Higher resource usage

```yaml
publication_strategy: "multiple"
publication_prefix: "ditto"
```

Results in `ditto_deposit_events`, `ditto_withdraw_events`, etc.
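The naming for each strategy can be sketched as follows (the `publicationNames` helper is illustrative, not part of the service's API):

```go
package main

import "fmt"

// publicationNames returns the publication(s) the service would manage
// for the given strategy, prefix, and watched tables.
func publicationNames(strategy, prefix string, tables []string) []string {
	if strategy == "multiple" {
		names := make([]string, 0, len(tables))
		for _, t := range tables {
			names = append(names, prefix+"_"+t)
		}
		return names
	}
	// "single" (or unset): one publication covering all tables.
	return []string{prefix}
}

func main() {
	tables := []string{"deposit_events", "withdraw_events"}
	fmt.Println(publicationNames("single", "ditto", tables))   // [ditto]
	fmt.Println(publicationNames("multiple", "ditto", tables)) // [ditto_deposit_events ditto_withdraw_events]
}
```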
See the Publication Strategies Guide for a detailed comparison.
You must configure the following settings in `postgresql.conf`:

```
wal_level = logical
max_replication_slots >= 1
max_wal_senders >= 1
```

To receive the `DataOld` field for UPDATE/DELETE operations:

```sql
ALTER TABLE your_table REPLICA IDENTITY FULL;
```

Publications are automatically created and managed by the service. However, you can also manage them manually:

```sql
-- Check current publications
SELECT * FROM pg_publication;

-- Create a custom publication
CREATE PUBLICATION ditto FOR TABLE table1, table2;

-- Drop a publication
DROP PUBLICATION IF EXISTS ditto;
```
The service is configured through environment variables:

```shell
# Database connection (with replication=database)
DB_DSN="postgresql://postgres:password@localhost:5432/dbname?replication=database"

# Redis connection
REDIS_URL="redis://localhost:6379"

# Optional: log level
LOG_LEVEL="info"

# Optional: application environment
APP_ENV="dev"
```

Quick start with the helper scripts:

```shell
# 1. Build the image
./scripts/build-docker.sh --load -p linux/arm64 # For Mac M1/M2
# OR
./scripts/build-docker.sh --load -p linux/amd64 # For Intel

# 2. Run with auto-configuration
./scripts/run-docker.sh -d -f

# The script will auto-create .env and config files if missing.
# Edit them as needed and restart:
./scripts/run-docker.sh restart
```

Quick start with Docker Compose:

```shell
# 1. Copy example files
cp config/config.example.yml config/config.yml
cp docker-compose.example.yml docker-compose.yml

# 2. Edit configuration if needed
nano config/config.yml

# 3. Start all services
docker-compose up -d

# 4. Watch logs
docker-compose logs -f ditto

# 5. Test with sample data
docker-compose exec postgres psql -U postgres -d ditto_db -c "SELECT generate_test_events(10);"

# 6. Monitor Redis events (optional)
docker-compose --profile debug up redis-cli
```

For a manual setup, copy and edit the config:

```shell
# Copy example config
cp config/config.example.yml config/config.yml

# Edit your configuration
nano config/config.yml
```

Set the environment variables:

```shell
export DB_DSN="postgresql://postgres:password@localhost:5432/dbname?replication=database"
export REDIS_URL="redis://localhost:6379"
```

Then run the service:

```shell
# Using Go
go run main.go

# Using Docker
docker build -t ditto .
docker run --env-file .env ditto

# Using Task
task run
```

Use the SQL script to verify publications:

```shell
# Check current publication status
psql -d your_database -f scripts/check_publications.sql
```

Automated release script that creates tags and triggers CI/CD:

```shell
# Create a new release
./scripts/release.sh v1.0.0

# This will:
# - Validate the version format
# - Check git status
# - Create and push a git tag
# - Trigger GitHub Actions to build the Docker image and create a release
```

Easy Docker container management:

```shell
# Run with default settings (interactive)
./scripts/run-docker.sh

# Run in detached mode
./scripts/run-docker.sh -d

# Run detached and follow logs
./scripts/run-docker.sh -d -f

# Use production image
./scripts/run-docker.sh -i phathdt379/ditto:latest

# Container management
./scripts/run-docker.sh stop     # Stop container
./scripts/run-docker.sh restart  # Restart container
./scripts/run-docker.sh logs     # View logs
./scripts/run-docker.sh exec     # Execute shell

# Remove existing container and run a new one
./scripts/run-docker.sh -r
```

With configuration:

```yaml
prefix_watch_list: "events"
watch_list:
  deposit_events:
    mapping: "deposits"
  withdraw_events:
    mapping: "withdrawals"
```

Published topics:

- `events.deposits`
- `events.withdrawals`
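The topic naming shown above can be sketched as a small helper (the `topicFor` function is illustrative, not part of Ditto's code):

```go
package main

import "fmt"

// topicFor derives the Redis topic for a table: prefix + "." + mapping,
// falling back to the table name when no mapping is configured.
func topicFor(prefix, table string, mappings map[string]string) string {
	name := table
	if m, ok := mappings[table]; ok && m != "" {
		name = m
	}
	if prefix == "" {
		return name
	}
	return prefix + "." + name
}

func main() {
	mappings := map[string]string{
		"deposit_events":  "deposits",
		"withdraw_events": "withdrawals",
	}
	fmt.Println(topicFor("events", "deposit_events", mappings)) // events.deposits
	fmt.Println(topicFor("events", "loan_events", mappings))    // events.loan_events (no mapping)
}
```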
- Publication Strategies Guide - Detailed comparison of strategies
- Configuration Examples - Sample configurations
```
PostgreSQL WAL → Logical Decoding → Ditto Service → Redis Topics
      ↓                ↓                  ↓               ↓
    Tables          pgoutput      Event Processing   Consumer Apps
```
Pre-built Docker images are available on Docker Hub:

```shell
# Latest version
docker pull phathdt379/ditto:latest

# Specific version
docker pull phathdt379/ditto:v1.0.0

# Run with environment variables
docker run --env-file .env phathdt379/ditto:latest
```

Releases are automated via GitHub Actions:

- Create Release: Push a git tag (e.g., v1.0.0)
- Auto Build: GitHub Actions builds multi-platform Docker images
- Auto Deploy: Images pushed to Docker Hub
- GitHub Release: Automatically created with release notes
```yaml
version: '3.8'

services:
  ditto:
    image: phathdt379/ditto:latest
    environment:
      - DB_DSN=postgresql://postgres:password@postgres:5432/dbname?replication=database
      - REDIS_URL=redis://redis:6379
      - LOG_LEVEL=info
    volumes:
      - ./config:/app/config
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: dbname
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    command: >
      postgres
      -c wal_level=logical
      -c max_replication_slots=4
      -c max_wal_senders=4

  redis:
    image: redis:7-alpine
```

- Support multiple message brokers (NATS, Kafka)
- Add condition-based filtering
- Web UI for configuration management
- Metrics and monitoring
- Cluster support
- Dead letter queue handling
- Schema evolution support
- Fork the repository
- Create your feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License.
Note: This service is designed for high-throughput, low-latency event processing. Make sure your PostgreSQL and Redis instances are properly configured for your expected load.