Fletcher is a data orchestration platform that uses in-memory directed acyclic graphs (DAGs) to orchestrate the triggering of compute jobs. With its precise orchestration, your data products won't rush or drag — no one can say "Not quite my tempo."
- What is Fletcher?
- Features
- Quality Assurance & CI/CD
- Quick Start
- Web User Interface
- Authentication
- API Endpoints
- Usage Examples
- Development
- Deployment
- Load Testing
- Configuration
- Monitoring
- Contributing
- Why is this repo called Fletcher?
- License
Fletcher manages Plans - collections of Data Products organized into Datasets with Dependencies that form a DAG. When data products succeed, Fletcher automatically triggers downstream jobs that are ready to run, ensuring efficient and reliable data pipeline execution.
- Dataset: A container for a plan that can be paused/unpaused
- Data Product: An individual compute job with states (waiting, queued, running, success, failed, disabled)
- Dependencies: Parent-child relationships between data products that form the execution DAG
- Plan: The complete specification of data products and their dependencies for a dataset
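The concepts above can be sketched as plain data types. This is a hedged Python illustration only; any field beyond those named in this README (and the class layout itself) is an assumption, not Fletcher's actual Rust model:

```python
from dataclasses import dataclass, field


@dataclass
class DataProduct:
    """An individual compute job tracked by Fletcher (illustrative fields)."""
    id: str
    name: str
    compute: str            # "cams" or "dbxaas"
    state: str = "waiting"  # waiting | queued | running | success | failed | disabled


@dataclass
class Dependency:
    """A parent-child edge in the execution DAG."""
    parent_id: str
    child_id: str


@dataclass
class Plan:
    """The complete specification of data products and dependencies for a dataset."""
    dataset_id: str
    data_products: list[DataProduct] = field(default_factory=list)
    dependencies: list[Dependency] = field(default_factory=list)
```

A plan is valid only when its dependencies form a DAG, which is what the cycle-detection feature below enforces.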
- 🎯 DAG-based Orchestration: Automatically resolves dependencies and triggers ready jobs
- 🔄 Real-time State Management: Track and update data product states with automatic downstream triggering
- 🌐 REST API: Full OpenAPI/Swagger documented REST interface
- 🖥️ Web UI: Search, visualize, and manage your data pipelines
- 🐘 PostgreSQL Backend: Reliable data persistence with migrations
- 🔍 Cycle Detection: Validates DAGs to prevent infinite loops
- ⏸️ Pause/Resume: Control dataset execution flow
- 🧪 Multiple Compute Types: Support for CAMS and DBXaaS compute platforms
- 📊 GraphViz Visualization: Visual representation of your DAG execution plans
- 🚀 Load Testing: Built-in Locust-based performance testing with realistic scenarios
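The cycle-detection feature can be illustrated with Kahn's algorithm: repeatedly remove nodes that have no unprocessed parents; if any node is never removed, the graph contains a cycle. This is a hedged Python sketch of the general technique, not Fletcher's actual Rust implementation:

```python
from collections import defaultdict, deque


def has_cycle(dependencies: list[tuple[str, str]]) -> bool:
    """Detect a cycle in a list of (parent_id, child_id) edges via Kahn's algorithm."""
    children = defaultdict(list)
    in_degree = defaultdict(int)
    nodes = set()
    for parent, child in dependencies:
        children[parent].append(child)
        in_degree[child] += 1
        nodes.update((parent, child))

    # Seed the queue with nodes that have no parents at all
    queue = deque(n for n in nodes if in_degree[n] == 0)
    visited = 0
    while queue:
        node = queue.popleft()
        visited += 1
        for child in children[node]:
            in_degree[child] -= 1
            if in_degree[child] == 0:
                queue.append(child)

    # Any node left with a nonzero in-degree sits on a cycle
    return visited != len(nodes)
```

A plan whose dependency edges fail this check would be rejected before any job is triggered.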
Fletcher maintains high code quality through comprehensive automated testing and continuous integration:
- Rust Quality Checks: Code formatting, linting (Clippy), compilation, and unit/integration tests
- Database Integration: Full PostgreSQL integration testing with SQLx migrations
- Docker Validation: Multi-stage container builds with health check verification
- Security Scanning: Trivy vulnerability scanning for both codebase and container images
- Python Tooling: Type checking (pyright), linting (Ruff), and dependency auditing
- Documentation: Markdown linting for consistent documentation quality
All pull requests automatically trigger workflows located in .github/workflows/:
- `rust.yaml` - Comprehensive Rust code quality and testing
- `docker.yaml` - Container build verification and health checks
- `trivy.yaml` - Security vulnerability scanning
- `python.yaml` - Load testing toolchain validation
- `markdown.yaml` - Documentation quality assurance
- Just: Cross-platform task automation (see the `justfile` for all available commands)
- SQLx: Compile-time verified database queries with offline mode
- cargo-deny: Dependency vulnerability and license auditing
- uv: Fast Python package management for load testing tools
Run the complete CI pipeline locally:
```shell
just github-checks  # Runs all quality checks
```

- Rust (2024 edition)
- PostgreSQL
- Docker/Podman (optional)
- Just command runner (optional but recommended)
Additional tools for development and testing:
- Node.js and npm (for frontend CSS build)
- Python 3.13+ and uv (for load testing)
- SQLx CLI (for database operations)
- Clone the repository

  ```shell
  git clone <repository-url>
  cd fletcher
  ```
- Set up PostgreSQL

  ```shell
  # Using Just (recommended)
  just pg-start

  # Or manually with Docker
  docker run -d --name fletcher_postgresql \
    --env POSTGRES_USER=fletcher_user \
    --env POSTGRES_PASSWORD=password \
    --env POSTGRES_DB=fletcher_db \
    --publish 5432:5432 \
    postgres:alpine
  ```
- Configure environment

  Option A: Using a `.env` file (recommended for development)

  ```shell
  # Create a .env file in the project root
  cat > .env << 'EOF'
  BASE_URL=127.0.0.1:3000
  DATABASE_URL=postgres://fletcher_user:password@localhost/fletcher_db
  SECRET_KEY=your-secret-key-for-jwt-signing-make-it-long-and-random
  REMOTE_APIS='[
    {
      "service": "local",
      "hash": "$2b$10$DvqWB.sMjo1XSlgGrOzGAuBTY5E1hkLiDK3BdcK0TiROjCWkgCeaa",
      "roles": ["publish", "pause", "update", "disable"]
    },
    {
      "service": "readonly",
      "hash": "$2b$10$46TiUvUaKvp2D/BuoXe8Fu9ktffCBXioF8M0DeeOWvz8X2J0RtpvK",
      "roles": []
    }
  ]'
  RUST_BACKTRACE=1
  EOF
  ```

  Note: the above configuration includes:

  - a `local` service with password `abc123` (full access)
  - a `readonly` service with password `abc123` (read-only access)

  Generate new password hashes with `just hash "your-password"`.
  Option B: Manual export

  ```shell
  export BASE_URL="127.0.0.1:3000"
  export DATABASE_URL="postgres://fletcher_user:password@localhost/fletcher_db"
  export SECRET_KEY="your-secret-key-for-jwt-signing-make-it-long-and-random"
  export REMOTE_APIS='[{"service":"local","hash":"$2b$10$DvqWB.sMjo1XSlgGrOzGAuBTY5E1hkLiDK3BdcK0TiROjCWkgCeaa","roles":["publish","pause","update","disable"]},{"service":"readonly","hash":"$2b$10$46TiUvUaKvp2D/BuoXe8Fu9ktffCBXioF8M0DeeOWvz8X2J0RtpvK","roles":[]}]'
  ```
- Run database migrations

  ```shell
  just sqlx-migrate
  # Or: sqlx migrate run
  ```

- Build and run

  ```shell
  just run
  # Or: cargo run
  ```
The application will be available at http://localhost:3000
Fletcher provides a modern, responsive web interface for managing and visualizing your data orchestration pipelines.
- 🔍 Live Search: Real-time search for plans with instant results
- 📊 DAG Visualization: Interactive GraphViz diagrams showing data product dependencies
- 📋 Plan Management: Detailed view of datasets, data products, and their states
- 🎨 Modern Design: Beautiful gradient styling with smooth animations
- 📱 Responsive Layout: Works seamlessly across desktop and mobile devices
The main landing page provides plan discovery functionality:
Fletcher's main search interface with live search and paginated results
- Live Search: Type-ahead search with 500ms debounce for finding plans
- Real-time Results: HTMX-powered dynamic updates without page refreshes
- Paginated Results: Efficient loading of large result sets (50 items per page)
- Quick Navigation: Click any result to instantly jump to the plan details
Comprehensive plan visualization and management:
Plan details page showing DAG visualization, data products table, and JSON payload
- Dataset Overview:
  - Dataset ID and current status (Active/Paused)
  - Last modified information
  - Quick status indicators with colored badges
- Interactive DAG Visualization:
  - GraphViz-powered dependency graph
  - Color-coded nodes by state:
    - 🟢 Green: Success
    - 🟡 Light Green: Running
    - ⚪ Light Grey: Waiting
    - ⚫ Grey: Queued/Disabled
    - 🔴 Red: Failed
  - Left-to-right flow layout for clear dependency understanding
- Data Products Table:
  - Complete data product inventory
  - State badges with color coding
  - Compute platform indicators (CAMS/DBXaaS)
  - Eager execution flags
  - Direct links to external systems
  - Last modification timestamps
- Technical Details:
  - Pretty-printed JSON payload with syntax highlighting
  - Complete plan specification for debugging and analysis
- 🎨 TailwindCSS: Modern utility-first CSS framework for responsive design
- ⚡ HTMX: Progressive enhancement for dynamic interactions without complex JavaScript
- 📈 GraphViz: Professional dependency graph visualization with Viz.js
- 🌈 Prism.js: Beautiful syntax highlighting for JSON payloads
- 🖼️ Maud: Type-safe HTML templating in Rust
- Breadcrumb Navigation: Clear path between Search and Plan pages
- Contextual Links: Smart navigation that adapts based on current context
- Direct URLs: Bookmarkable URLs for all plans and searches
Fletcher's UI works in all modern browsers with:
- ES6+ JavaScript support
- SVG rendering capabilities
- CSS Grid and Flexbox support
- `/` - Main search interface
- `/plan/{dataset_id}` - Plan details and visualization
- `/component/plan_search` - HTMX search component
- `/assets/*` - Static assets (CSS, JS, images)
Fletcher's UI uses modern frontend tools for styling and interactivity:
- TailwindCSS 4.x - Utility-first CSS framework with modern features
- DaisyUI - Component library built on TailwindCSS
- TailwindCSS Animated - Animation utilities
- HTMX - Progressive enhancement for dynamic interactions
- Viz.js - GraphViz rendering for dependency graphs
- Prism.js - Syntax highlighting for JSON payloads
The CSS is built with the TailwindCSS CLI and bundled into the Rust application:

```shell
# Install Node.js dependencies
npm clean-install

# Build CSS (automatically handled during the Rust build)
# See build.rs for integration details
```

The `build.rs` script automatically compiles the CSS during the Rust build, ensuring the latest styles are always included in the binary.
Fletcher uses JWT (JSON Web Token) authentication with role-based access control (RBAC) to secure API endpoints.
- Get a JWT token

  ```shell
  curl -X POST http://localhost:3000/api/authenticate \
    -H "Content-Type: application/json" \
    -d '{"service": "local", "key": "abc123"}'
  ```

- Response

  ```json
  {
    "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9...",
    "token_type": "Bearer",
    "expires": 1640995200,
    "issued": 1640991600,
    "issued_by": "Fletcher",
    "ttl": 3600,
    "service": "local",
    "roles": ["disable", "pause", "publish", "update"]
  }
  ```

- Use the Bearer token

  ```shell
  curl -X POST http://localhost:3000/api/plan \
    -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9..." \
    -H "Content-Type: application/json" \
    -d '{ ... }'
  ```
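The same flow can be scripted. This is a hedged, stdlib-only Python sketch: the base URL and credentials match the development defaults above, the helper names are our own, and a running local Fletcher instance is assumed for the network call:

```python
import json
import urllib.request

BASE = "http://localhost:3000"  # assumed local development instance


def get_token(service: str, key: str) -> str:
    """POST /api/authenticate and return the JWT access token."""
    body = json.dumps({"service": service, "key": key}).encode()
    req = urllib.request.Request(
        f"{BASE}/api/authenticate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]


def auth_header(token: str) -> dict[str, str]:
    """Build the Authorization header expected by protected endpoints."""
    return {"Authorization": f"Bearer {token}"}
```

For example, `auth_header(get_token("local", "abc123"))` yields the header to attach to a `POST /api/plan` request.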
Fletcher implements four distinct roles that control access to different operations:
| Role | Description | Endpoints |
|---|---|---|
| `publish` | Create and submit new plans | `POST /api/plan` |
| `update` | Modify data product states | `PUT /api/data_product/{dataset_id}/update`<br>`PUT /api/data_product/{dataset_id}/clear` |
| `pause` | Pause and unpause datasets | `PUT /api/plan/{dataset_id}/pause`<br>`PUT /api/plan/{dataset_id}/unpause` |
| `disable` | Disable data products | `DELETE /api/data_product/{dataset_id}` |
Services are configured via the REMOTE_APIS environment variable:
```json
[
  {
    "service": "local",
    "hash": "$2b$10$DvqWB.sMjo1XSlgGrOzGAuBTY5E1hkLiDK3BdcK0TiROjCWkgCeaa",
    "roles": ["publish", "pause", "update", "disable"]
  },
  {
    "service": "readonly",
    "hash": "$2b$10$46TiUvUaKvp2D/BuoXe8Fu9ktffCBXioF8M0DeeOWvz8X2J0RtpvK",
    "roles": []
  }
]
```

- `local` - Full-access service with all roles
- `readonly` - Limited-access service with no modification roles
- `hash` - bcrypt hash of the service's password (use `just hash "password"` to generate)
- `POST /api/authenticate` - Get a JWT token (no auth required)
- `POST /api/plan` - Create or update a plan [requires `publish` role]
- `GET /api/plan/{dataset_id}` - Get a plan by dataset ID
- `GET /api/plan/search` - Search plans
- `PUT /api/plan/{dataset_id}/pause` - Pause a dataset [requires `pause` role]
- `PUT /api/plan/{dataset_id}/unpause` - Unpause a dataset [requires `pause` role]
- `GET /api/data_product/{dataset_id}/{data_product_id}` - Get a data product
- `PUT /api/data_product/{dataset_id}/update` - Update data product states [requires `update` role]
- `PUT /api/data_product/{dataset_id}/clear` - Clear data products and downstream dependencies [requires `update` role]
- `DELETE /api/data_product/{dataset_id}` - Disable data products [requires `disable` role]
Interactive Swagger API documentation with authentication and endpoint testing
- `/swagger` - Interactive API documentation
- `/spec` - OpenAPI specification
- Authenticate and get a JWT

  ```shell
  # Get a JWT token
  curl -X POST http://localhost:3000/api/authenticate \
    -H "Content-Type: application/json" \
    -d '{"service": "local", "key": "abc123"}'
  ```

- Extract the token from the response

  ```json
  {
    "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9...",
    "token_type": "Bearer",
    "service": "local",
    "roles": ["disable", "pause", "publish", "update"]
  }
  ```

- Use the token for authenticated requests

  ```shell
  # Store the token in a variable
  TOKEN="eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9..."

  # Create a plan (requires the 'publish' role)
  curl -X POST http://localhost:3000/api/plan \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d @plan.json
  ```
An example `plan.json`:

```json
{
  "dataset": {
    "id": "123e4567-e89b-12d3-a456-426614174000",
    "extra": {}
  },
  "data_products": [
    {
      "id": "223e4567-e89b-12d3-a456-426614174000",
      "compute": "cams",
      "name": "Extract Raw Data",
      "version": "1.0.0",
      "eager": true,
      "passthrough": {},
      "extra": {}
    },
    {
      "id": "323e4567-e89b-12d3-a456-426614174000",
      "compute": "dbxaas",
      "name": "Transform Data",
      "version": "1.0.0",
      "eager": true,
      "passthrough": {},
      "extra": {}
    }
  ],
  "dependencies": [
    {
      "parent_id": "223e4567-e89b-12d3-a456-426614174000",
      "child_id": "323e4567-e89b-12d3-a456-426614174000",
      "extra": {}
    }
  ]
}
```

Fletcher manages the following states:
- `waiting` - Waiting on dependencies to complete
- `queued` - Job submitted but not started
- `running` - Compute reports the job is running
- `success` - Job completed successfully
- `failed` - Job failed
- `disabled` - Data product is not part of the active plan
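The automatic downstream triggering implied by these states can be sketched as a readiness check: a `waiting` data product becomes eligible to queue once every parent has reached `success`. A hedged Python illustration of the rule, not Fletcher's actual scheduler:

```python
def ready_products(states: dict[str, str],
                   dependencies: list[tuple[str, str]]) -> set[str]:
    """Return the waiting data products whose parents have all succeeded."""
    parents: dict[str, list[str]] = {}
    for parent, child in dependencies:
        parents.setdefault(child, []).append(parent)
    return {
        product
        for product, state in states.items()
        if state == "waiting"
        and all(states[p] == "success" for p in parents.get(product, []))
    }
```

In the two-product plan above, once "Extract Raw Data" succeeds, "Transform Data" is the only product this check would report as ready.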
Install development dependencies:

```shell
# Install the Just command runner
cargo install just

# Install the SQLx CLI
just sqlx-install

# Install cargo-deny for security checks
just deny-install
```

Fletcher uses environment variables for configuration. For local development, create a `.env` file in the project root:
```shell
# .env file for local development
BASE_URL=127.0.0.1:3000
DATABASE_URL=postgres://fletcher_user:password@localhost/fletcher_db
SECRET_KEY=your-secret-key-for-jwt-signing-make-it-long-and-random
REMOTE_APIS='[
  {
    "service": "local",
    "hash": "$2b$10$DvqWB.sMjo1XSlgGrOzGAuBTY5E1hkLiDK3BdcK0TiROjCWkgCeaa",
    "roles": ["publish", "pause", "update", "disable"]
  },
  {
    "service": "readonly",
    "hash": "$2b$10$46TiUvUaKvp2D/BuoXe8Fu9ktffCBXioF8M0DeeOWvz8X2J0RtpvK",
    "roles": []
  }
]'
RUST_BACKTRACE=1

# Optional: Set log levels
RUST_LOG=debug
```

Available Environment Variables:

- `BASE_URL` - Server bind address and port (required, default: `127.0.0.1:3000`)
- `DATABASE_URL` - PostgreSQL connection string (required)
- `MAX_CONNECTIONS` - Number of PostgreSQL connections in the pool (default: 10)
- `SECRET_KEY` - Secret key for JWT token signing (required)
- `REMOTE_APIS` - JSON array of service configurations with roles (required)
- `RUST_BACKTRACE` - Set to `1` or `full` for detailed error traces
- `RUST_LOG` - Log level (`error`, `warn`, `info`, `debug`, `trace`)
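Fletcher itself reads these variables in Rust; as a hedged illustration of the validation they imply, here is a Python sketch (the function name and the returned dict shape are assumptions, but the variable names and defaults match the table above):

```python
import json
import os


def load_config() -> dict:
    """Validate and parse the documented Fletcher environment variables."""
    required = ["BASE_URL", "DATABASE_URL", "SECRET_KEY", "REMOTE_APIS"]
    missing = [name for name in required if name not in os.environ]
    if missing:
        raise RuntimeError(f"missing required environment variables: {missing}")
    return {
        "base_url": os.environ["BASE_URL"],
        "database_url": os.environ["DATABASE_URL"],
        "secret_key": os.environ["SECRET_KEY"],
        # REMOTE_APIS is a JSON array of {service, hash, roles} objects
        "remote_apis": json.loads(os.environ["REMOTE_APIS"]),
        "max_connections": int(os.environ.get("MAX_CONNECTIONS", "10")),
    }
```

A malformed `REMOTE_APIS` value fails fast here (at `json.loads`) rather than at the first authentication attempt.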
Note: the `.env` file is automatically loaded by Fletcher via the `dotenvy` crate.
```shell
# Build
just build                # Debug build
just build-release        # Release build

# Run
just run                  # Run with debug
just run-release          # Run optimized

# Testing
just test                 # Run all tests
just check                # Check code compilation
just clippy               # Run linter
just fmt                  # Format code
just fmt-check            # Check formatting

# Database
just sqlx-migrate         # Run migrations
just sqlx-revert          # Revert last migration
just sqlx-reset           # Reset database
just sqlx-prepare         # Update SQLx cache
just sqlx-check           # Verify SQLx cache

# Load Testing & Stress Testing
just run-stress           # Run Fletcher with stress test settings (30 connections)
just locust               # Run basic load test
just locust-demo          # Run continuous demo (1 user, API only)
just locust-stress        # Run stress test (2500 users, continuous loop)
just locust-busiest-day   # Run busiest day test (300 users, single run)

# Python Development
just py-right-check       # Type checking with pyright
just py-right-check-watch # Watch-mode type checking
just py-ruff-check        # Linting with ruff
just py-ruff-fix          # Auto-fix with ruff
just py-ruff-fmt          # Format with ruff
just py-ruff-fmt-check    # Check formatting with ruff
just py-ruff-check-watch  # Watch-mode linting
just py-audit             # Scan for vulnerable dependencies

# Security
just deny                 # Check dependencies for security issues
just trivy-repo           # Scan repository
just trivy-image          # Scan Docker image

# PostgreSQL Development
just pg-start             # Start PostgreSQL container
just pg-stop              # Stop PostgreSQL container
just pg-cli               # Connect with rainfrog CLI

# Docker/Podman
just docker-build         # Build Docker image
just docker-run           # Run Docker container
just podman-build         # Build with Podman
just podman-run           # Run with Podman

# Utilities
just hash "password"      # Generate bcrypt hash
```

fletcher/
├── src/
│ ├── api.rs # REST API endpoints
│ ├── core.rs # Business logic
│ ├── dag.rs # DAG operations and validation
│ ├── db.rs # Database operations
│ ├── error.rs # Error handling
│ ├── model.rs # Data models
│ ├── main.rs # Application entry point
│ └── ui/ # Web UI components
├── locust/ # Load testing framework
│ ├── src/
│ │ ├── locustfile.py # Main load testing scenarios
│ │ ├── model.py # Pydantic models for API data
│ │ └── setup.py # Test data generation utilities
│ ├── results/ # Load testing results and analysis
│ │ ├── notes.md # Comprehensive performance analysis
│ │ ├── stress_30_connections/ # Optimized configuration test results
│ │ ├── stress_10_connections/ # Default configuration test results
│ │ └── busiest_day/ # Production load simulation results
│ ├── pyproject.toml # Python project configuration
│ └── README.md # Load testing documentation
├── migrations/ # Database migrations
├── key_hasher/ # Password hashing utility
├── scripts/ # Utility scripts
├── assets/ # Static web assets
├── package.json # Node.js dependencies (CSS build)
└── justfile # Development commands
Run the comprehensive test suite:
```shell
# All tests
just test

# With coverage
cargo test --workspace

# Integration tests
cargo test --test integration_tests
```

Fletcher uses PostgreSQL with the following main tables:

- `dataset` - Dataset metadata and pause state
- `data_product` - Individual data products with state tracking
- `dependency` - Parent-child relationships between data products
See `migrations/` for the complete schema definitions.
```shell
# Build image
just docker-build

# Run with PostgreSQL
just pg-start
just docker-run

# Health check
just docker-healthcheck
```

The `docker-run` command automatically sets the required environment variables, including `BASE_URL`, `DATABASE_URL`, `SECRET_KEY`, and `REMOTE_APIS`, with default values suitable for containerized deployment.
Required:

- `BASE_URL` - Server bind address and port (e.g., `127.0.0.1:3000`)
- `DATABASE_URL` - PostgreSQL connection string
- `SECRET_KEY` - Secret key for JWT token signing (generate a long, random string)
- `REMOTE_APIS` - JSON array of service configurations with authentication and roles

Optional:

- `MAX_CONNECTIONS` - Number of PostgreSQL connections in the pool; each connection uses one core in PostgreSQL (default: 10)
- `RUST_BACKTRACE` - Set to `1` or `full` for detailed error traces
- `RUST_LOG` - Log level (`error`, `warn`, `info`, `debug`, `trace`)
For Development: use a `.env` file (see the Environment Configuration section above).

For Production: set environment variables directly in your deployment system.

Security Notes:

- Use a strong, randomly generated `SECRET_KEY` in production
- Store bcrypt password hashes in `REMOTE_APIS`, never plain-text passwords
- Use `just hash "your-password"` to generate secure password hashes
Fletcher includes comprehensive load testing capabilities using Locust to simulate realistic user workflows and evaluate system performance under various loads.
- 🎯 Realistic Workflows: Simulates authentic Fletcher API usage patterns
- 🔄 Plan Lifecycle Testing: Creates plans, updates data products, and manages states
- 🌐 UI Testing: Tests both API endpoints and web interface interactions
- 📊 Multiple Test Modes: Support for single-run and continuous loop testing
- ⚡ Performance Metrics: Detailed insights into system behavior under load
- 🔧 Configurable Parameters: Customizable authentication, hosts, and execution modes
- Run a basic load test

  ```shell
  just locust
  ```

- Run a continuous demo (1 user, API only)

  ```shell
  just locust-demo
  ```

- Run Fletcher in stress test mode

  ```shell
  just run-stress
  ```

- Run a stress test (2500 users, continuous loop)

  ```shell
  just locust-stress
  ```

- Run the busiest day simulation (300 users, single run)

  ```shell
  just locust-busiest-day
  ```
Pre-configured test scenarios:
```shell
just locust             # Interactive load test
just locust-demo        # Continuous demo (1 user)
just locust-stress      # Stress test (2500 users)
just locust-busiest-day # Production simulation (300 users)
just run-stress         # Run Fletcher with optimized settings
```

Available Parameters:

Standard Locust Parameters:

- `--host` - Base URL of the Fletcher API server
- `--users` - Number of concurrent users to simulate
- `--spawn-rate` - Users spawned per second
- `--autostart` - Start the test automatically without web UI interaction

Fletcher-Specific Parameters:

- `--service` - Service name for authentication (default: `local`)
- `--key` - Authentication key for the service (default: `abc123`)
- `--mode` - Execution mode: `once` or `loop` (default: `once`)
- `--processing_delay` - Simulated data processing time in seconds (default: 10)
- `--restart_delay` - Wait time before clearing the dataset in loop mode (default: 60)
Example Custom Configuration:

```shell
uv --directory locust/ run locust \
  --locustfile src/locustfile.py \
  --host http://127.0.0.1:3000 \
  --users 50 \
  --spawn-rate 1 \
  --service local \
  --key abc123 \
  --mode loop \
  --processing_delay 5 \
  --restart_delay 30 \
  --autostart
```

The load tests simulate several realistic Fletcher workflows:
- Authentication Flow: Get JWT tokens and verify role-based access
- Plan Creation: Submit complex plans with multiple data products and dependencies
- State Management: Update data product states and trigger downstream jobs
- UI Interaction: Navigate the web interface, search plans, and view visualizations
- Cleanup Operations: Clear data products and reset states for continuous testing
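The repo's Locust scenarios implement these workflows; as a stdlib-only illustration of the core idea (many concurrent simulated users plus latency aggregation), here is a hedged Python sketch. The endpoint path, helper names, and report shape are assumptions, not Fletcher's actual load harness:

```python
import concurrent.futures
import time
import urllib.request


def simulated_user(base_url: str, requests_per_user: int) -> list[float]:
    """One simulated user: repeatedly GET the search endpoint, recording latencies."""
    latencies = []
    for _ in range(requests_per_user):
        start = time.perf_counter()
        with urllib.request.urlopen(f"{base_url}/api/plan/search") as resp:
            resp.read()
        latencies.append(time.perf_counter() - start)
    return latencies


def run_load(base_url: str, users: int, requests_per_user: int) -> list[float]:
    """Spawn concurrent simulated users and collect all request latencies."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(simulated_user, base_url, requests_per_user)
                   for _ in range(users)]
        return [lat for f in futures for lat in f.result()]


def summarize(latencies: list[float]) -> dict[str, float]:
    """Aggregate latencies into count, mean, and p95, as a load report would."""
    ordered = sorted(latencies)
    idx = max(0, int(round(0.95 * len(ordered))) - 1)
    return {
        "count": len(ordered),
        "mean": sum(ordered) / len(ordered),
        "p95": ordered[idx],
    }
```

Locust provides this machinery (user classes, spawn rates, HTML reports) out of the box; the sketch only shows what the numbers in the benchmark section measure.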
Key Results: Fletcher achieves optimal performance with MAX_CONNECTIONS=30,
supporting 1,300 concurrent users (180 requests/second). PostgreSQL CPU is the
primary limiting factor.
- Optimal Performance: 1,300 concurrent users (180 requests/second)
- Configuration: `MAX_CONNECTIONS=30` (optimized from the default of 10)
- Resource Usage:
- Fletcher: 28-160MB RAM (depending on load)
- PostgreSQL: 1.2-5GB RAM during stress testing
- Database-Bound Performance: PostgreSQL CPU is the primary limiting factor
- Connection Pool Impact: Increasing connections from 10 to 30 provides:
- +37% improvement in concurrent user capacity (950 → 1,300 users)
- +29% improvement in requests/second (140 → 180 RPS)
- -42% reduction in Fletcher memory usage (275MB → 160MB)
- Scalability Ceiling: Performance plateaus around 2,500+ users due to database constraints
- Use `MAX_CONNECTIONS=30` for optimal performance
- Monitor PostgreSQL CPU and connection utilization as key metrics
Test Environment:
- CPU: AMD Ryzen 9 7950X (32 cores) @ 5.88 GHz
- Storage: 2TB NVMe Gen 4 SSD (Sabrent)
- Database: PostgreSQL running locally (same machine)
- RAM: 64GB total available
Detailed Results: for comprehensive stress test analysis, performance comparisons, and architectural considerations, see `locust/results/notes.md`.
Performance Charts:
30 Connections Test - Optimal configuration achieving 180 RPS
10 Connections Test - Default configuration achieving 140 RPS
Busiest Day Simulation - 300 users representing peak daily load
Load Test Reports: Interactive HTML reports with detailed statistics:
- 30 Connections Test Report (recommended configuration)
- 10 Connections Test Report (default configuration)
- Busiest Day Test Report (production load simulation)
Additional files (CSV data, charts) are organized in the
locust/results/ folder by test type.
Note: Performance may vary significantly based on hardware specifications, network latency (if using remote PostgreSQL), and system load.
For accurate stress testing, run Fletcher with optimized settings:
```shell
# Terminal 1: Start Fletcher with the stress test configuration
just run-stress

# Terminal 2: Run the stress test
just locust-stress
```

The `run-stress` command configures Fletcher with `MAX_CONNECTIONS=30` for optimal database performance under high load.
Load testing is built with modern Python tools:
```shell
# Install dependencies
uv --directory locust/ sync

# Type checking
just py-right-check

# Linting and formatting
just py-ruff-check
just py-ruff-fix
just py-ruff-fmt

# Security scanning
just py-audit

# Interactive development
just py-ruff-check-watch
just py-right-check-watch
```

Test Results: Locust automatically generates comprehensive HTML reports. See the Performance Benchmarks section above for direct links to the interactive reports.
The load testing framework includes:
- `locustfile.py` - Main test scenarios and user behavior simulation
- `model.py` - Pydantic models for Fletcher API data structures
- `setup.py` - Test data generation utilities for complex plans
- `pyproject.toml` - Python project configuration with dependencies
Fletcher supports the following compute types:

- `cams` - C-AMS compute platform
- `dbxaas` - DBXaaS compute platform
Plans can include custom JSON metadata in `extra` fields for extensibility.
- Health check endpoint: `GET /` (returns 200 when healthy)
- Logs: Structured logging with `tracing`
- Metrics: HTTP request tracing via middleware
- Fork the repository
- Create a feature branch
- Run tests: `just test`
- Run linting: `just clippy`
- Check formatting: `just fmt-check`
- Run Python checks: `just py-right-check py-ruff-check py-ruff-fmt-check py-audit`
- Test load performance: `just locust-busiest-day`
- Run security checks: `just deny`
- Submit a pull request
The project maintains high code quality standards:
- Comprehensive test coverage (Rust and Python)
- Strict linting with Clippy (Rust) and Ruff (Python)
- Type checking with pyright for Python code
- Security scanning with cargo-deny and Trivy
- Automated CI/CD with GitHub Actions
This repo is named after Terence Fletcher, the infamous conductor of the Shaffer Studio Jazz Band at the Shaffer Conservatory in New York City. Just as Fletcher demanded perfect timing and precision from his musicians, this orchestration platform ensures your data products execute with perfect timing and precision.
This project is licensed under the AGPL (Affero General Public License). View the LICENSE file for more details.