A terminal interface for managing and running workflows.
- Docker (for containerized workflows)
Linux/macOS:

```sh
curl -fsSL https://raw.githubusercontent.com/chiral-data/silva/main/install.sh | sh
```

Windows:

```powershell
iwr -useb https://raw.githubusercontent.com/chiral-data/silva/main/install.ps1 | iex
```
The script will:
- Auto-detect your OS and architecture
- Download the latest release
- Install the binary to an appropriate location
- Add to PATH (Windows only)
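The auto-detection step above typically maps `uname` output onto release asset names. Below is a minimal sketch of that kind of logic, assuming hypothetical `silva-<os>-<arch>` asset names; the real install script may differ:

```sh
# Sketch: detect OS/architecture and derive a release asset name (hypothetical)
os=$(uname -s | tr '[:upper:]' '[:lower:]')   # e.g. "linux" or "darwin"
arch=$(uname -m)                              # e.g. "x86_64" or "arm64"
case "$arch" in
  aarch64|arm64) arch="arm64" ;;
  x86_64|amd64)  arch="x86_64" ;;
  *) echo "unsupported architecture: $arch" >&2; exit 1 ;;
esac
echo "would download: silva-${os}-${arch}"
```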
Download pre-built binaries from the Releases page:
- Linux: x86_64, ARM64 (WIP)
- macOS: x86_64 (Intel), ARM64 (Apple Silicon)
- Windows: x86_64, ARM64
```sh
git clone https://github.com/chiral-data/silva.git
cd silva
cargo build --release
./target/release/silva
```

- `←`/`→` or `h`/`l` - Switch between Applications, Workflows, and Settings
- `i` - Toggle help popup
- `q` - Quit
Browse available bioinformatics applications:
- `↑`/`↓` or `j`/`k` - Navigate list
- `Enter` or `d` - View details
- `Esc` or `d` - Close details
Run and manage workflows:
- `↑`/`↓` or `j`/`k` - Select workflow
- `Enter` - Execute workflow
- `d` - View/close job logs
Configure health checks:
- `r` - Refresh health check status
1. Navigate to the Workflows tab using `→`
2. Select a workflow with `↑`/`↓`
3. Press `Enter` to execute
4. Press `d` to view logs while running
The Silva workflow execution system allows you to define and run multi-step workflows using Docker containers. Each workflow consists of multiple jobs that execute sequentially, with each job running in its own Docker container.
The workflow home directory is configurable via the `SILVA_WORKFLOW_HOME` environment variable. If not set, it defaults to `./home`.

```sh
export SILVA_WORKFLOW_HOME=/path/to/workflows
```

A collection of workflows can be found in this repository.
```
$SILVA_WORKFLOW_HOME/
├── workflow_1/
│   ├── job_1/
│   │   ├── @job.toml
│   │   ├── pre_run.sh
│   │   ├── run.sh
│   │   └── post_run.sh
│   ├── job_2/
│   │   ├── @job.toml
│   │   ├── Dockerfile
│   │   └── run.sh
│   └── job_3/
│       ├── @job.toml
│       └── run.sh
├── workflow_2/
│   └── job_1/
│       ├── @job.toml
│       └── run.sh
```
Each job requires a `@job.toml` configuration file that defines the container to run in and the scripts to execute. You must specify exactly one container source: either a Docker image (`docker_image`) or a Dockerfile (`docker_file`), but not both:
```toml
[container]
docker_image = "ubuntu:22.04"

[scripts]
run = "run.sh"
```

```toml
[container]
docker_file = "Dockerfile"

[scripts]
pre = "setup.sh"
run = "main.sh"
post = "cleanup.sh"
```

Scripts are optional and have default values:

- `pre`: Pre-execution script (default: `pre_run.sh`), optional
- `run`: Main execution script (default: `run.sh`)
- `post`: Post-execution script (default: `post_run.sh`), optional
Note 1: The job folder is mounted as `/workspace` inside the container, and scripts are executed from this directory.

Note 2: If the pre-execution or post-execution script is not specified, that step is simply skipped.
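In Docker terms, Note 1 means each job runs roughly like a bind-mounted container invocation. The exact flags Silva passes are not documented here, so treat this as an illustration only:

```sh
# Illustrative only: roughly what "job folder mounted as /workspace" means
docker run --rm \
  -v "$SILVA_WORKFLOW_HOME/workflow_1/job_1:/workspace" \
  -w /workspace \
  ubuntu:22.04 \
  bash run.sh
```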
A complete example:

```toml
[container]
docker_image = "python:3.11-slim"

[scripts]
pre = "install_deps.sh"
run = "process_data.sh"
post = "generate_report.sh"
```

Jobs can now specify dependencies on other jobs and automatically handle input/output file transfers:
```toml
# Example: A job that depends on preprocessing and uses its outputs
depends_on = ["01_preprocessing", "02_feature_extraction"]
inputs = ["*.csv", "features/*.json"]
outputs = ["model.pkl", "metrics/*.txt"]

[container]
docker_image = "python:3.11-slim"

[scripts]
run = "train_model.sh"
```

Dependency Fields:
- `depends_on`: List of job names that must complete before this job runs
  - Jobs execute in dependency order (topological sort, illustrated below)
  - Circular dependencies are detected and reported as errors
  - If a dependency job fails, dependent jobs won't execute
- `inputs`: Glob patterns for files to copy from dependency outputs
  - Files are copied from each dependency's `outputs/` folder
  - If empty or omitted, all output files from dependencies are copied
  - Supports wildcards: `*.csv`, `data_*.json`, `results/**/*.txt`
  - Conflicts (the same filename from multiple dependencies) use the first match, with a warning
- `outputs`: Glob patterns for files to collect after job execution
  - Matching files are copied to an `outputs/` folder in the job directory
  - Supports wildcards and directory patterns
  - Files become available to jobs that depend on this one
  - If empty, no output collection occurs
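The "topological sort" mentioned above simply means every job runs after all of its dependencies. You can experiment with this kind of ordering using the coreutils `tsort` command; this is only an analogy for reasoning about run order, not how Silva computes it internally:

```sh
# Each input pair is "prerequisite dependent"; tsort prints a valid run order,
# e.g. 01_preprocessing, 02_feature_extraction, 03_train_model (one per line)
printf '%s\n' \
  "01_preprocessing 03_train_model" \
  "02_feature_extraction 03_train_model" | tsort

# Adding a back edge such as "03_train_model 01_preprocessing" makes tsort
# report a loop, mirroring Silva's circular-dependency errors
```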
Example Multi-Job Workflow with Dependencies:
```
ml_pipeline/
├── 01_data_prep/
│   ├── @job.toml        # No dependencies
│   └── prepare.sh       # Outputs: train.csv, test.csv
├── 02_feature_eng/
│   ├── @job.toml        # depends_on: ["01_data_prep"]
│   └── features.sh      # Inputs: *.csv, Outputs: features.json
└── 03_train_model/
    ├── @job.toml        # depends_on: ["02_feature_eng"]
    └── train.sh         # Inputs: features.json, Outputs: model.pkl
```
`01_data_prep/@job.toml`:

```toml
outputs = ["train.csv", "test.csv"]

[container]
docker_image = "python:3.11-slim"

[scripts]
run = "prepare.sh"
```

`02_feature_eng/@job.toml`:

```toml
depends_on = ["01_data_prep"]
inputs = ["*.csv"]
outputs = ["features.json"]

[container]
docker_image = "python:3.11-slim"

[scripts]
run = "features.sh"
```

`03_train_model/@job.toml`:

```toml
depends_on = ["02_feature_eng"]
inputs = ["features.json"]
outputs = ["model.pkl", "metrics.txt"]

[container]
docker_image = "python:3.11-slim"

[scripts]
run = "train.sh"
```

How It Works:
- Jobs execute in dependency order (not alphabetical order, when dependencies exist)
- Before a job runs, input files from its dependencies are copied into the job directory
- After successful execution, output files are collected into the `outputs/` folder
- The workflow displays the execution order at startup: `01_data_prep → 02_feature_eng → 03_train_model`
```sh
mkdir -p $SILVA_WORKFLOW_HOME/my_workflow
```

Job directories should be named so that they sort into the correct execution order (without dependencies, jobs run alphabetically by name):

```sh
mkdir -p $SILVA_WORKFLOW_HOME/my_workflow/01_preprocessing
mkdir -p $SILVA_WORKFLOW_HOME/my_workflow/02_analysis
mkdir -p $SILVA_WORKFLOW_HOME/my_workflow/03_reporting
```

For each job, create a `@job.toml` file:
```sh
cat > $SILVA_WORKFLOW_HOME/my_workflow/01_preprocessing/@job.toml << 'EOF'
[container]
docker_image = "python:3.11-slim"

[scripts]
run = "preprocess.sh"
EOF
```

Create the required scripts (they must be executable):
```sh
cat > $SILVA_WORKFLOW_HOME/my_workflow/01_preprocessing/preprocess.sh << 'EOF'
#!/bin/bash
set -e
echo "Starting preprocessing..."
python3 -c "print('Preprocessing complete!')"
EOF
chmod +x $SILVA_WORKFLOW_HOME/my_workflow/01_preprocessing/preprocess.sh
```
1. Launch the Application
2. Navigate to the Workflows Tab
   - Press `←`/`h` or `→`/`l` to switch tabs
   - Navigate to the "Files" tab (shows the workflow list)
3. Select a Workflow
   - Use the `↑` and `↓` arrow keys to select a workflow
   - The selected workflow is highlighted
4. Launch Workflow Execution
   - Press `Enter` on the selected workflow
   - The Docker logs popup opens automatically
   - Workflow execution begins in the background
5. Monitor Progress
   - The Docker logs popup shows real-time execution logs
   - The status section displays the workflow name and execution status
   - The job progress section shows visual indicators:
     - ✓ (green) - Completed job
     - ⟳ (yellow) - Currently running job
     - ⬜ (gray) - Pending job
   - A progress counter shows (current/total) jobs
| Key | Action |
|---|---|
| `Enter` | Launch selected workflow |
| `↑` / `↓` | Navigate workflows / scroll logs |
| `d` | Toggle Docker logs popup |
| `b` | Scroll logs to bottom |
| `r` | Refresh workflow list |
| `i` | Toggle help popup |
| `q` | Quit application |
- Jobs execute in dependency order (topological sort) when dependencies are specified
- For workflows without dependencies, jobs execute in alphabetical order by folder name
- Each job runs to completion before the next job starts
- The job folder is mounted as `/workspace` in the container
- Scripts execute with `/workspace` as the working directory
- Input files from dependencies are copied to the job directory before execution
- Output files are collected into the `outputs/` folder after successful execution
For each job, scripts run in this order:

1. Pre-run script (if specified)
2. Main run script (required)
3. Post-run script (if specified)
If any script returns a non-zero exit code, the job fails and the workflow stops.
- If a job fails, the workflow stops immediately
- Remaining jobs are not executed
- The failed job name is recorded in the execution result
- Logs up to the point of failure are retained
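Because failure detection is simply "non-zero exit code" (see above), a script only needs `set -e` or an explicit `exit 1` to stop the workflow at the right point. A minimal illustration:

```bash
#!/bin/bash
set -e                  # abort on the first failing command
echo "validating input"
test -f input.csv       # if the file is missing, this exits non-zero:
                        # the job fails here and later jobs never run
echo "processing input"
```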
```
data_pipeline/
├── 01_extract/
│   ├── @job.toml
│   └── extract.sh
├── 02_transform/
│   ├── @job.toml
│   ├── Dockerfile
│   └── transform.py
└── 03_load/
    ├── @job.toml
    └── load.sh
```
`01_extract/@job.toml`:

```toml
[container]
docker_image = "alpine:latest"

[scripts]
run = "extract.sh"
```

`02_transform/@job.toml`:

```toml
[container]
docker_file = "Dockerfile"

[scripts]
run = "transform.py"
```

`02_transform/Dockerfile`:

```dockerfile
FROM python:3.11-slim
RUN pip install pandas numpy
```

```
test_suite/
├── job_1_unit_tests/
│   ├── @job.toml
│   └── run_tests.sh
├── job_2_integration_tests/
│   ├── @job.toml
│   └── run_tests.sh
└── job_3_e2e_tests/
    ├── @job.toml
    └── run_tests.sh
```
All jobs use the same configuration:
```toml
[container]
docker_image = "node:20-alpine"

[scripts]
pre = "npm install"
run = "npm test"
```

- Verify the workflow directory exists in `$SILVA_WORKFLOW_HOME`
- Press `r` to refresh the workflow list
- Check that jobs contain `@job.toml` files
- Verify `@job.toml` syntax is valid
- Ensure exactly one container type is specified (`docker_image` OR `docker_file`)
- Check that script files exist and are executable
- Verify Docker daemon is running
- Check that specified Docker images are available
- Review logs in the Docker popup for detailed error messages
- Ensure scripts have the correct shebang (`#!/bin/bash`)
- Make sure all scripts are executable: `chmod +x script.sh`
- Verify Docker has permission to access mounted volumes
Recommended Approach (v0.3.3+): Use the `depends_on`, `inputs`, and `outputs` configuration:

```toml
# job_1/@job.toml
outputs = ["result.txt", "data.csv"]

[container]
docker_image = "ubuntu:22.04"

[scripts]
run = "process.sh"
```

```toml
# job_2/@job.toml
depends_on = ["job_1"]
inputs = ["*.txt", "*.csv"]  # or omit to copy all outputs

[container]
docker_image = "ubuntu:22.04"

[scripts]
run = "analyze.sh"
```

Files from job_1's outputs are automatically copied to job_2's directory before execution.
Legacy Approach: Access other job folders via relative paths:

```bash
#!/bin/bash
# job_1/run.sh - Write output
echo "result data" > /workspace/output.txt
```

```bash
#!/bin/bash
# job_2/run.sh - Read input
cat /workspace/../job_1/output.txt
```

Note: The dependency-based approach is preferred because it makes data flow explicit and handles file copying automatically.
Pass environment variables through a Dockerfile:

```dockerfile
FROM ubuntu:22.04
ENV MY_VAR=value
```

Or set them in your script:

```bash
#!/bin/bash
export MY_VAR=value
./my_program
```

Currently, each job runs in isolation. For jobs that need to communicate, use file-based data exchange through the workflow directory.
- Name Jobs with Prefixes: Use numeric prefixes (`01_`, `02_`, `03_`) to ensure correct execution order
- Use `set -e`: Always start scripts with `set -e` to fail on errors (see the sketch after this list)
- Log Verbosely: Add echo statements to track progress
- Test Individually: Test each job independently before running the full workflow
- Keep Jobs Small: Break complex workflows into smaller, focused jobs
- Document Dependencies: Add README files explaining job purposes and dependencies
- Use Specific Tags: Specify exact Docker image tags (e.g., `ubuntu:22.04`, not `ubuntu:latest`)
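A minimal `run.sh` template that follows the scripting practices above; the work in the middle is a placeholder:

```bash
#!/bin/bash
set -e                                       # fail fast on any error
echo "[$(basename "$0")] starting in $(pwd)"

# ... the job's actual work goes here (placeholder) ...
echo "step 1/1: processing"

echo "[$(basename "$0")] done"
```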
| Variable | Default | Description |
|---|---|---|
| `SILVA_WORKFLOW_HOME` | `./home` | Workflow home directory path |
| File | Required | Description |
|---|---|---|
| `@job.toml` | Yes | Job configuration file |
| `run.sh` | Default | Main execution script (configurable) |
| `pre_run.sh` | Default | Pre-execution script (configurable) |
| `post_run.sh` | Default | Post-execution script (configurable) |
| `Dockerfile` | Optional | Custom Docker image definition |
| Code | Meaning |
|---|---|
| 0 | Success |
| Non-zero | Failure (workflow stops) |
For issues or questions:
- Check the logs in the Docker popup (press `d`)
- Review test files for examples
- See source code in `src/components/workflow/` and `src/components/docker/`
- Q: The emojis do not show correctly under Windows.
- A: We recommend using Windows Terminal instead of the default PowerShell console. To install Windows Terminal, run `winget install Microsoft.WindowsTerminal`.