diff --git a/docs/cli/index.md b/docs/cli/index.md index 6a9ea390a..2923bfb4f 100644 --- a/docs/cli/index.md +++ b/docs/cli/index.md @@ -6,7 +6,7 @@ The Wave CLI is a convenient wrapper around the Wave API. You can compose Wave CLI with other commands. The CLI returns the URL of the container build on `stdout` in the following format: -``` +```console wave.seqera.io/wt/xxxxxxxxxxxx/wave/build:xxxxxxxxxxxxxxxx ``` diff --git a/docs/cli/installation.mdx b/docs/cli/installation.mdx index f119301e9..146c71444 100644 --- a/docs/cli/installation.mdx +++ b/docs/cli/installation.mdx @@ -16,13 +16,13 @@ To self-install the latest Wave release from GitHub: 1. Move the executable from your downloads folder to a location in your `PATH`, such as `~/bin`. For example: - ``` + ```bash mv wave-cli-0.8.0-macos-x86_64 ~/bin/wave ``` 1. Ensure that the executable permission is set. For example: - ``` + ```bash chmod u+x ~/bin/wave ``` @@ -30,7 +30,7 @@ To self-install the latest Wave release from GitHub: 1. Create a basic `Dockerfile`: - ``` + ```Dockerfile cat << EOF > ./Dockerfile FROM busybox:latest EOF @@ -38,13 +38,13 @@ To self-install the latest Wave release from GitHub: 1. Use the CLI to build the container: - ``` + ```bash wave -f Dockerfile ``` Example output: - ``` + ```console wave.seqera.io/wt/xxxxxxxxxxxx/wave/build:xxxxxxxxxxxxxxxx ``` @@ -62,7 +62,7 @@ To install the latest version with [Homebrew]: 1. Create a basic `Dockerfile`: - ``` + ```Dockerfile cat << EOF > ./Dockerfile FROM busybox:latest EOF @@ -70,13 +70,13 @@ To install the latest version with [Homebrew]: 1. Use the CLI to build the container: - ``` + ```bash wave -f Dockerfile ``` Example output: - ``` + ```console wave.seqera.io/wt/xxxxxxxxxxxx/wave/build:xxxxxxxxxxxxxxxx ``` diff --git a/docs/cli/reference.md b/docs/cli/reference.md index ceed9758d..ef1b79d00 100644 --- a/docs/cli/reference.md +++ b/docs/cli/reference.md @@ -30,7 +30,7 @@ The following arguments are used for a directory build: Create a new context directory: -``` +```bash mkdir -p new-layer/usr/local/bin printf 'echo Hello world!' > new-layer/usr/local/bin/hello.sh chmod +x new-layer/usr/local/bin/hello.sh @@ -38,7 +38,7 @@ chmod +x new-layer/usr/local/bin/hello.sh Use the CLI to build the image and run the result with Docker: -``` +```bash docker run $(wave -i alpine --layer new-layer) sh -c hello.sh ``` @@ -64,7 +64,7 @@ Conda builds support the following arguments: In the following example, a container with the `samtools` and `bamtools` packages is built: -``` +```bash wave \ --conda-package bamtools=2.5.2 \ --conda-package samtools=1.17 @@ -92,7 +92,7 @@ Building a Dockerfile that requires `--build-arg` for build time variables isn't In the following example `Dockerfile`, several packages are installed: -``` +```Dockerfile cat << EOF > ./Dockerfile FROM alpine @@ -106,14 +106,14 @@ EOF Build and run the container based on the Dockerfile in the previous example by running the following command: -``` +```bash container=$(wave --containerfile ./Dockerfile) docker run --rm $container cowsay "Hello world" ``` In the following example `Dockerfile`, a local context is used: -``` +```Dockerfile cat << EOF > ./Dockerfile FROM alpine ADD hello.sh /usr/local/bin/ @@ -122,7 +122,7 @@ EOF Create the shell script referenced in the previous example by running the following commands in your terminal: -``` +```bash mkdir -p build-context/ printf 'echo Hello world!' 
> build-context/hello.sh chmod +x build-context/hello.sh @@ -130,7 +130,7 @@ chmod +x build-context/hello.sh Build and run the container based on the Dockerfile in the previous example by running the following command: -``` +```bash docker run $(wave -f Dockerfile --context build-context) sh -c hello.sh ``` @@ -164,19 +164,19 @@ The following arguments are used to build a Singularity container: In the following example, a Docker base image is augmented: -``` +```bash wave -i alpine --layer context-dir/ --build-repo docker.io/user/repo ``` In the following example, a SingularityCE def file is specified: -``` +```bash wave -f hello-world.def --singularity --freeze --build-repo docker.io/user/repo ``` In the following example, two Conda packages are specified: -``` +```bash wave --conda-package bamtools=2.5.2 --conda-package samtools=1.17 --freeze --singularity --build-repo docker.io/user/repo ``` @@ -208,7 +208,7 @@ The following arguments are used to freeze a container build: In the following example, the `alpine` container image is frozen to a private DockerHub image registry. The `--tower-token` argument is not required if the `TOWER_ACCESS_TOKEN` environment variable is defined. -``` +```bash wave -i alpine --freeze \ --build-repo docker.io/user/repo --tower-token ``` diff --git a/docs/install/configure-wave-build.md b/docs/install/configure-wave-build.md index b1a727360..8b62fc540 100644 --- a/docs/install/configure-wave-build.md +++ b/docs/install/configure-wave-build.md @@ -1,343 +1,414 @@ --- title: Configure Wave build +description: Enable Wave build on Kubernetes and AWS EFS storage +tags: [kubernetes, install, wave, wave build] --- -This guide covers extending your existing Wave installation on Kubernetes to support container build capabilities. This enables Wave's full feature set including container building, freezing, and advanced caching. +Wave's full build capabilities require specific integrations with Kubernetes and AWS Elastic File System (EFS) Storage. Amazon Elastic Kubernetes Service (EKS) and AWS hard dependencies for full-featured deployments that support container build capabilities. -## Prerequisites +This page describes how to extend your [Kubernetes installation](./kubernetes) to support container build capabilities. It includes: -Before extending Wave for build support, ensure you have: +- Configure Kubernetes service account and RBAC policies +- Configure EFS storage +- Update Wave configuration +- Update Wave deployments +- Deploy updates +- Verify build functionality +- Configure production enhancements +- Optimize for production environments +- Configure advanced options -- **Existing Wave installation** - Basic Wave deployment already running in augmentation-only mode -- **AWS EKS cluster** - Build capabilities require AWS-specific integrations -- **EFS filesystem** - Configured and accessible from your EKS cluster for shared build storage -- **Cluster admin permissions** - Required to create RBAC policies and storage resources +:::info[**Prerequisites**] +You will need the following to get started: -## Create Kubernetes Service Account & RBAC Policies +- An existing [Wave Lite Kubernetes installation](./kubernetes.md) +- An AWS EKS cluster +- An EFS filesystem configured and accessible from your EKS cluster +- Cluster admin permissions to create RBAC policies and storage resources +::: -Wave's build service needs permissions to create and manage build pods. 
Create the necessary RBAC configuration: +## Configure Kubernetes service account and RBAC policies -```yaml ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: wave-sa - namespace: wave ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: wave-role -rules: - - apiGroups: [""] - resources: [pods, pods/status, pods/log, pods/exec] - verbs: [get, list, watch, create, delete] - - apiGroups: ["batch"] - resources: [jobs, jobs/status] - verbs: [get, list, watch, create, delete] - - apiGroups: [""] - resources: [configmaps, secrets] - verbs: [get, list] ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: wave-rolebind -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: wave-role -subjects: - - kind: ServiceAccount +To configure your Kubernetes service account and RBAC policies: + +1. Create `wave-rbac.yaml` with the following configuration: + + ```yaml + --- + apiVersion: v1 + kind: ServiceAccount + metadata: name: wave-sa namespace: wave -``` - -## Configure EFS Storage - -Wave builds require shared storage accessible across multiple pods. Configure EFS with the AWS EFS CSI driver: - -### Storage Class - -```yaml -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: efs-wave-sc -provisioner: efs.csi.aws.com -parameters: - provisioningMode: efs-ap - fileSystemId: "REPLACE_ME_EFS_ID" - directoryPerms: "0755" -``` - -### Persistent Volume - -```yaml -apiVersion: v1 -kind: PersistentVolume -metadata: - name: wave-build-pv -spec: - capacity: - storage: 500Gi - volumeMode: Filesystem - accessModes: - - ReadWriteMany - persistentVolumeReclaimPolicy: Retain - storageClassName: efs-wave-sc - csi: - driver: efs.csi.aws.com - volumeHandle: "REPLACE_ME_EFS_ID" -``` - -### Persistent Volume Claim - -```yaml -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - namespace: wave - name: wave-build-pvc - labels: - app: wave-app -spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 500Gi - storageClassName: efs-wave-sc -``` - -**Configuration Notes:** -- Replace `REPLACE_ME_EFS_ID` with your actual EFS filesystem ID -- EFS must be in the same VPC as your EKS cluster -- Ensure EFS security groups allow NFS traffic from EKS worker nodes - -## Update Wave Configuration + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: wave-role + rules: + - apiGroups: [""] + resources: [pods, pods/status, pods/log, pods/exec] + verbs: [get, list, watch, create, delete] + - apiGroups: ["batch"] + resources: [jobs, jobs/status] + verbs: [get, list, watch, create, delete] + - apiGroups: [""] + resources: [configmaps, secrets] + verbs: [get, list] + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: wave-rolebind + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: wave-role + subjects: + - kind: ServiceAccount + name: wave-sa + namespace: wave + ``` + +1. Apply the RBAC configuration: + + ```bash + kubectl apply -f wave-rbac.yaml + ``` + +## Configure EFS storage + +Wave builds require shared storage accessible across multiple pods. Configure EFS with the AWS EFS CSI driver to provide persistent, shared storage for build artifacts and caching. + +:::note +EFS must be in the same Virtual Private Cloud (VPC) as your EKS cluster. Ensure EFS security groups allow NFS traffic from EKS worker nodes. +::: + +### Storage class + +To configure your EFS storage class: + +1. 
Create `wave-storage.yaml` with the following configuration: + + ```yaml + --- + apiVersion: storage.k8s.io/v1 + kind: StorageClass + metadata: + name: efs-wave-sc + provisioner: efs.csi.aws.com + parameters: + provisioningMode: efs-ap + fileSystemId: "" + directoryPerms: "0755" + ``` + + Replace `` with your Amazon EFS File System ID. + +1. Apply the storage configuration: + + ```bash + kubectl apply -f wave-storage.yaml + ``` + +### Persistent volume + +To configure your persistent volume: + +1. Create `wave-persistent.yaml` with the following configuration: + + ```yaml + apiVersion: v1 + kind: PersistentVolume + metadata: + name: wave-build-pv + spec: + capacity: + storage: 500Gi + volumeMode: Filesystem + accessModes: + - ReadWriteMany + persistentVolumeReclaimPolicy: Retain + storageClassName: efs-wave-sc + csi: + driver: efs.csi.aws.com + volumeHandle: "" + ``` + + Replace `` with your Amazon EFS File System ID. + +1. Apply the persistent volume: + + ```bash + kubectl apply -f wave-persistent.yaml + ``` + +### Persistent volume claim + +To configure your persistent volume claim: + +1. Create `wave-persistent-claim.yaml` with the following configuration: + + ```yaml + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + namespace: wave + name: wave-build-pvc + labels: + app: wave-app + spec: + accessModes: + - ReadWriteMany + resources: + requests: + storage: 500Gi + storageClassName: efs-wave-sc + ``` + +1. Apply the persistent volume claim: + + ```bash + kubectl apply -f wave-persistent-claim.yaml + ``` + +## Update Wave configuration Update your existing Wave ConfigMap to enable build features and configure storage paths: -```yaml -kind: ConfigMap -apiVersion: v1 -metadata: - name: wave-cfg - namespace: wave - labels: - app: wave-cfg -data: - config.yml: | - wave: - # Enable build service - build: - enabled: true - workspace: '/build/work' - # Optional: Configure build timeouts - timeout: '1h' - # Optional: Configure resource limits for build pods - resources: - requests: - memory: '1Gi' - cpu: '500m' - limits: - memory: '4Gi' - cpu: '2000m' - - # Enable other build-dependent features - mirror: - enabled: true - scan: - enabled: true - blobCache: - enabled: true - - # Existing database, redis, and platform configuration... - db: - uri: "jdbc:postgresql://your-postgres-host:5432/wave" - user: "wave_user" - password: "your_secure_password_here" - - redis: - uri: "redis://your-redis-host:6379" - - tower: - endpoint: - url: "https://your-platform-instance.com/api" - - # Kubernetes-specific configuration for builds - k8s: - namespace: wave - serviceAccount: wave-sa - # Optional: Configure build pod settings - buildPod: - image: 'quay.io/buildah/stable:latest' - nodeSelector: - wave-builds: "true" -``` - -## Update Wave Deployment - -Modify your existing Wave deployment to include the service account and EFS storage: - -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: wave - namespace: wave - labels: - app: wave-app -spec: - replicas: 1 - selector: - matchLabels: - app: wave-app - template: +1. 
Open `wave-config.yaml` and update the configuration with the following: + + ```yaml + kind: ConfigMap + apiVersion: v1 metadata: + name: wave-cfg + namespace: wave + labels: + app: wave-cfg + data: + config.yml: | + wave: + # Enable build service + build: + enabled: true + workspace: '/build/workspace' + # Optional: Retain failed builds to gather logs & inspect + cleanup: "OnSuccess" + # Optional: Configure build timeouts + timeout: '15m' + # Example additional kubernetes configuration for wave-build + k8s: + dns: + servers: + - "1.1.1.1" + - "8.8.8.8" + namespace: "wave-build" + storage: + mountPath: "/build" + # Relevant volume claim name should match the + claimName: "wave-build-pvc" + serviceAccount: "wave-build-sa" + resources: + requests: + memory: '1800Mi' + nodeSelector: + # This node selector binds the build pods to a separate cluster node group + linux/amd64: 'service=wave-build' + linux/arm64: 'service=wave-build-arm64' + # Enable other build-dependent features + mirror: + enabled: true + scan: + enabled: true + blobCache: + enabled: true + + # Existing database, redis, and platform configuration... + db: + uri: "jdbc:postgresql://:5432/wave" + user: "wave_user" + password: "" + + redis: + uri: "redis://:6379" + + tower: + endpoint: + url: "" + ``` + + Replace the following: + + - ``: your Postgres service endpoint + - ``: your secure password for the database user + - ``: your Redis service endpoint + - ``: your Platform endpoint URL (_optional_) + +1. Apply the Wave configuration: + + ```bash + kubectl apply -f wave-config.yaml + ``` + +## Update Wave deployments + +To update your Wave deployment, follow these steps: + +1. Modify your existing `wave-deployment.yaml` to include the service account and EFS storage: + + ```yaml + apiVersion: apps/v1 + kind: Deployment + metadata: + name: wave + namespace: wave labels: app: wave-app spec: - serviceAccountName: wave-sa # Add service account - containers: - - image: your-registry.com/wave:latest - name: wave-app - ports: - - containerPort: 9090 - name: http - env: - - name: MICRONAUT_ENVIRONMENTS - value: "postgres,redis,k8s" # Add k8s environment - - name: WAVE_JVM_OPTS - value: "-Xmx3g -Xms1g -XX:+UseG1GC" - resources: - requests: - memory: "4Gi" - cpu: "1000m" - limits: - memory: "4Gi" - cpu: "2000m" - workingDir: "/work" - volumeMounts: + replicas: 1 + selector: + matchLabels: + app: wave-app + template: + metadata: + labels: + app: wave-app + spec: + serviceAccountName: wave-sa # Add service account + containers: + - image: /wave: + name: wave-app + ports: + - containerPort: 9090 + name: http + env: + - name: MICRONAUT_ENVIRONMENTS + value: "postgres,redis,k8s" # Add k8s environment + - name: WAVE_JVM_OPTS + value: "-Xmx3g -Xms1g -XX:+UseG1GC" + resources: + requests: + memory: "4Gi" + cpu: "1000m" + limits: + memory: "4Gi" + cpu: "2000m" + workingDir: "/work" + volumeMounts: + - name: wave-cfg + mountPath: /work/config.yml + subPath: "config.yml" + - name: build-storage + mountPath: /build + readinessProbe: + httpGet: + path: /health + port: 9090 + initialDelaySeconds: 30 + timeoutSeconds: 10 + livenessProbe: + httpGet: + path: /health + port: 9090 + initialDelaySeconds: 60 + timeoutSeconds: 10 + volumes: - name: wave-cfg - mountPath: /work/config.yml - subPath: "config.yml" - - name: build-storage # Add EFS mount - mountPath: /build - readinessProbe: - httpGet: - path: /health - port: 9090 - initialDelaySeconds: 30 - timeoutSeconds: 10 - livenessProbe: - httpGet: - path: /health - port: 9090 - initialDelaySeconds: 60 - 
timeoutSeconds: 10 - volumes: - - name: wave-cfg - configMap: - name: wave-cfg - - name: build-storage # Add EFS volume - persistentVolumeClaim: - claimName: wave-build-pvc - restartPolicy: Always -``` - -## Deploy the Updates - -Apply the configuration changes to enable build support: - -```bash -# Apply RBAC configuration -kubectl apply -f wave-rbac.yaml - -# Apply storage configuration -kubectl apply -f wave-storage.yaml - -# Update the ConfigMap -kubectl apply -f wave-configmap.yaml - -# Update the deployment -kubectl apply -f wave-deployment.yaml - -# Verify the deployment -kubectl get pods -n wave -kubectl logs -f deployment/wave -n wave - -# Check that EFS is mounted correctly -kubectl exec -it deployment/wave -n wave -- df -h /build -``` - -## Verify Build Functionality + configMap: + name: wave-cfg + - name: build-storage # Add EFS volume + persistentVolumeClaim: + claimName: wave-build-pvc + restartPolicy: Always + ``` -Test that Wave build capabilities are working: + Replace the following: + + - ``: your registry image address + - ``: your Wave image tag version + +1. Apply the updated deployment: + + ``` + kubectl apply -f wave-deployment.yaml + ``` + +## Verify the deployment -1. **Check Wave health endpoint** for build service status -2. **Monitor logs** for build service initialization messages -3. **Test a simple build** through the Wave API or Platform integration +To verify your deployment, follow these steps: -```bash -curl http://wave-service.wave.svc.cluster.local:9090/health +1. Check the Wave pods and logs:: -kubectl logs -f deployment/wave -n wave | grep -i build -``` + ```bash + kubectl get pods -n wave + kubectl logs -f deployment/wave -n wave + ``` -## Recommended Production Enhancements +1. Check that EFS is mounted correctly: -### Dedicated Node Pools + ```bash + kubectl exec -it deployment/wave -n wave -- df -h /build + ``` -Create dedicated node pools for Wave build workloads to isolate build processes and optimize resource allocation: +## Verify build functionality +Test that Wave build capabilities are working: + +1. Check Wave health endpoint for build service status +1. Monitor logs for build service initialization messages +1. **Test a simple build** through the Wave API or Platform integration + + ```bash + curl http://wave-service.wave.svc.cluster.local:9090/health + kubectl logs -f deployment/wave -n wave | grep -i build + ``` + +## Configure production enhancements -### Build Pod Resource Management +The following sections describe recommended enhancements for production deployments: -Configure resource quotas and limits for build pods: +### Dedicated node pools -```yaml -apiVersion: v1 -kind: ResourceQuota -metadata: - name: wave-build-quota - namespace: wave -spec: - hard: - requests.cpu: "10" - requests.memory: 20Gi - limits.cpu: "20" - limits.memory: 40Gi - pods: "10" -``` +Create dedicated node pools for Wave build workloads to isolate build processes and optimize resource allocation. -### Monitoring and Alerting +### Build pod resource management -Set up monitoring for build operations: +Control resource usage and prevent build pods from overwhelming your cluster: -- **Build success/failure rates** -- **Build duration metrics** -- **EFS storage usage** -- **Node resource utilization** -- **Build queue length** +- Configure resource quotas and limits for build pods. 
+ + ```yaml + apiVersion: v1 + kind: ResourceQuota + metadata: + name: wave-build-quota + namespace: wave + spec: + hard: + requests.cpu: "10" + requests.memory: 20Gi + limits.cpu: "20" + limits.memory: 40Gi + pods: "10" + ``` -## Security Considerations +### Monitoring and alerting -- **EFS Access Points** - Use EFS access points to isolate build workspaces -- **Network Policies** - Restrict network access for build pods -- **Pod Security Standards** - Apply appropriate security contexts to build pods -- **Image Scanning** - Enable security scanning for built images -- **RBAC Minimization** - Regularly review and minimize Wave's cluster permissions +Set up monitoring for build operations to track performance and identify issues: -## Troubleshooting +- Build success/failure rates +- Build duration metrics +- EFS storage usage +- Node resource utilization +- Build queue length -**Common issues and solutions:** +## Security considerations -- **EFS mount failures** - Check security groups and VPC configuration -- **Build pod creation failures** - Verify RBAC permissions and node selectors -- **Storage access issues** - Ensure EFS access points are configured correctly -- **Build timeouts** - Adjust build timeout settings based on workload requirements +Implement these security measures to protect your Wave deployment and build environment: -For additional configuration options and advanced features, see [Configuring Wave](./configure-wave.md). +- Use EFS access points to isolate build workspaces +- Restrict network access for build pods +- Apply appropriate security contexts to build pods +- Enable security scanning for built images +- Regularly review and minimize Wave's cluster permissions diff --git a/docs/install/docker-compose.md b/docs/install/docker-compose.md index 4115de0d3..8229ba17e 100644 --- a/docs/install/docker-compose.md +++ b/docs/install/docker-compose.md @@ -1,187 +1,204 @@ --- -title: Docker Compose installation +title: Install Wave Lite with Docker Compose +description: Install Wave Lite with Docker Compose +tags: [docker compose, install, wave] --- -Wave enables you to provision container images on demand, removing the need to build and upload them manually to a container registry. Wave can provision both ephemeral and regular registry-persisted container images. +Docker Compose provides a straightforward deployment method for deploying Wave. It packages Wave services and dependencies into a coordinated container stack. This approach handles service orchestration automatically, coordinating startup and networking of Wave components. -Docker Compose installations support Wave Lite, a configuration mode for Wave that includes only container augmentation and inspection capabilities, and enables the use of Fusion file system in Nextflow pipelines. +Docker Compose installations support [Wave Lite](../wave-lite.md), a configuration mode for Wave that includes container augmentation and inspection capabilities, and enables the use of Fusion file system in Nextflow pipelines. -## Prerequisites +This page describes how to run Wave Lite with Docker Compose. 
It includes steps to: -Before installing Wave, you need the following infrastructure components: +- Configure a database +- Configure Wave +- Set up Docker Compose +- Deploy Wave +- Configure advanced options -- **PostgreSQL instance** - Version 12, or higher -- **Redis instance** - Version 6.2, or higher +:::info[**Prerequisites**] +You will need the following to get started: -## System requirements +- A PostgreSQL instance (version 12 or higher) +- A Redis instance (version 6.2 or higher) +- Supported versions of Docker Engine and Docker Compose +- A compute instance with at least: + - **Memory**: 32 GB RAM available for use by the Wave application on the host system + - **CPU**: 8 CPU cores available on the host system + - **Storage**: 10 GB in addition to sufficient disk space for your container images and temporary files (for example, in AWS EC2, `m5a.2xlarge` or greater) + - **Network**: Connectivity to your PostgreSQL and Redis instances +::: -The minimum system requirements for self-hosted Wave in Docker Compose are: +## Configure a database -- Current, supported versions of **Docker Engine** and **Docker Compose**. -- Compute instance minimum requirements: - - **Memory**: 32 GB RAM available to be used by the Wave application on the host system. - - **CPU**: 8 CPU cores available on the host system. - - **Storage**: 10 GB in addition to sufficient disk space for your container images and temporary files. - - For example, in AWS EC2, `m5a.2xlarge` or greater - - **Network**: Connectivity to your PostgreSQL and Redis instances. +To configure a PostgreSQL database, follow these steps: -## Database configuration +1. Connect to PostgreSQL. +1. Create a dedicated `wave` database and user account with the appropriate privileges: -Wave requires a PostgreSQL database to operate. + ```sql + CREATE ROLE wave_user LOGIN PASSWORD ''; -Create a dedicated `wave` database and user account with the appropriate privileges: + CREATE DATABASE wave; -```sql --- Create a dedicated user for Wave -CREATE ROLE wave_user LOGIN PASSWORD 'your_secure_password'; + \c wave; --- Create the Wave database -CREATE DATABASE wave; + GRANT USAGE, CREATE ON SCHEMA public TO wave_user; --- Connect to the wave database -\c wave; + GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO wave_user; + GRANT USAGE, SELECT, UPDATE ON ALL SEQUENCES IN SCHEMA public TO wave_user; --- Grant basic schema access -GRANT USAGE, CREATE ON SCHEMA public TO wave_user; + ALTER DEFAULT PRIVILEGES IN SCHEMA public + GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO wave_user; --- Grant privileges on existing tables and sequences -GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO wave_user; -GRANT USAGE, SELECT, UPDATE ON ALL SEQUENCES IN SCHEMA public TO wave_user; + ALTER DEFAULT PRIVILEGES IN SCHEMA public + GRANT USAGE, SELECT, UPDATE ON SEQUENCES TO wave_user; + ``` --- Grant privileges on future tables and sequences -ALTER DEFAULT PRIVILEGES IN SCHEMA public -GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO wave_user; + Replace `` with a secure password for the database user. + +:::note +Wave will automatically handle schema migrations and create the required database objects on startup. 
+::: + +## Configure Wave + +To configure Wave's behavior and integrations: + +- Create `config/wave-config.yml` with the following configuration: + + ```yaml + wave: + # Build service configuration - disabled for Docker Compose + build: + enabled: false + # Mirror service configuration - disabled for Docker Compose + mirror: + enabled: false + # Security scanning configuration - disabled for Docker Compose + scan: + enabled: false + # Blob caching configuration - disabled for Docker Compose + blobCache: + enabled: false + # Database connection settings + db: + uri: "jdbc:postgresql://:5432/wave" + user: "wave_user" + password: "" + + # Redis configuration for caching and session management + redis: + uri: "redis://:6379" + + # Platform integration (optional) + tower: + endpoint: + url: "" + + # Micronaut framework configuration + micronaut: + # Netty HTTP server configuration + netty: + event-loops: + default: + num-threads: 64 + stream-pool: + executor: stream-executor + # HTTP client configuration + http: + services: + stream-client: + read-timeout: 30s + read-idle-timeout: 5m + event-loop-group: stream-pool + + # Management endpoints configuration + loggers: + env: + enabled: false + bean: + enabled: false + caches: + enabled: false + refresh: + enabled: false + loggers: + enabled: false + info: + enabled: false + # Enable metrics for monitoring + metrics: + enabled: true + # Enable health checks + health: + enabled: true + disk-space: + enabled: false + jdbc: + enabled: false + ``` -ALTER DEFAULT PRIVILEGES IN SCHEMA public -GRANT USAGE, SELECT, UPDATE ON SEQUENCES TO wave_user; -``` + Replace the following: -Wave will automatically handle schema migrations on startup and create the required database objects. + - ``: your Postgres service endpoint + - ``: your secure password for the database user + - ``: your Redis service endpoint + - ``: your Platform endpoint URL (_optional_) -## Wave config + Adjust the following values based on your CPU cores: -Create a configuration file that defines Wave's behavior and integrations. Save this as `config/wave-config.yml` in your Docker Compose directory. 
+ - `number-of-threads`: Set between 2x and 4x your CPU core count (default: 16) + - `num-threads`: Set between 2x and 4x your CPU core count (default: 64) -```yaml -wave: - # Build service configuration - disabled for Docker Compose - build: - enabled: false - # Mirror service configuration - disabled for Docker Compose - mirror: - enabled: false - # Security scanning configuration - disabled for Docker Compose - scan: - enabled: false - # Blob caching configuration - disabled for Docker Compose - blobCache: - enabled: false - # Database connection settings - db: - uri: "jdbc:postgresql://your-postgres-host:5432/wave" - user: "wave_user" - password: "your_secure_password" +## Configure Docker Compose -# Redis configuration for caching and session management -redis: - uri: "redis://your-redis-host:6379" +To configure Waves Docker Compose deploy: -# Platform integration (optional) -tower: - endpoint: - url: "https://your-platform-server.com" +- Create `docker-compose.yml` with the following configuration: -# Micronaut framework configuration -micronaut: - # Netty HTTP server configuration - netty: - event-loops: - default: - num-threads: 64 - stream-pool: - executor: stream-executor - # HTTP client configuration - http: + ```yaml services: - stream-client: - read-timeout: 30s - read-idle-timeout: 5m - event-loop-group: stream-pool - -# Management endpoints configuration -loggers: - env: - enabled: false - bean: - enabled: false - caches: - enabled: false - refresh: - enabled: false - loggers: - enabled: false - info: - enabled: false - # Enable metrics for monitoring - metrics: - enabled: true - # Enable health checks - health: - enabled: true - disk-space: - enabled: false - jdbc: - enabled: false -``` - -Configuration notes: - -- Replace `your-postgres-host` and `your-redis-host` with your service endpoints. -- Adjust `number-of-threads` (16) and `num-threads` (64) based on your CPU cores — Use between 2x and 4x your CPU core count. - -## Docker Compose - -Add the following to your `docker-compose.yml`: - -```yaml -services: - wave-app: - image: your-registry.com/wave:latest - container_name: wave-app - ports: - # Bind to the host on 9100 vs 9090 - - "9100:9090" - environment: - - MICRONAUT_ENVIRONMENTS=lite,redis,postgres - volumes: - - ./config/wave-config.yml:/work/config.yml:ro - deploy: - mode: replicated - replicas: 2 - resources: - limits: - memory: 4G - cpus: '1.0' - reservations: - memory: 4G - cpus: '1' - # Health check configuration - healthcheck: - test: ["CMD", "curl", "-f", "http://localhost:9090/health"] - interval: 30s - timeout: 10s - retries: 3 - start_period: 60s - # Restart policy - restart: unless-stopped -``` + wave-app: + image: /wave:latest + container_name: wave-app + ports: + # Bind to the host on 9100 vs 9090 + - "9100:9090" + environment: + - MICRONAUT_ENVIRONMENTS=lite,redis,postgres + volumes: + - ./config/wave-config.yml:/work/config.yml:ro + deploy: + mode: replicated + replicas: 2 + resources: + limits: + memory: 4G + cpus: '1.0' + reservations: + memory: 4G + cpus: '1' + # Health check configuration + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:9090/health"] + interval: 30s + timeout: 10s + retries: 3 + start_period: 60s + # Restart policy + restart: unless-stopped + ``` + + Replace `` with your registry image. ## Deploy Wave -1. Download and populate the [wave.env](./_templates/wave.env) file with the settings corresponding to your system. +To deploy Wave using Docker Swarm, follow these steps: -1. Use Docker Swarm to deploy Wave Lite. 
See [Create a swarm](https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/) for detailed setup instructions. +1. Download and populate the [wave.env](./_templates/wave.env) file with the settings corresponding to your system. +1. Use Docker Swarm to deploy Wave. For detailed setup instructions, see [Create a swarm](https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/). 1. Deploy the Wave service, running two replicas: @@ -212,9 +229,9 @@ services: ``` :::warning - If Wave Lite is running in the same container as Platform Connect for [Studios](https://docs.seqera.io/platform-enterprise/25.2/enterprise/studios#docker-compose), tearing down the service will also interrupt Connect services. + If Wave is running in the same container as Platform Connect for [Studios](https://docs.seqera.io/platform-enterprise/25.2/enterprise/studios#docker-compose), tearing down the service will also interrupt Connect services. ::: -### Advanced configuration +## Configure advanced options -See [Configuring Wave](./configure-wave.md) for advanced Wave features, scaling guidance, and integration options. +For advanced Wave features, scaling guidance, and integration options, see [Configuring Wave](./configure-wave.md). diff --git a/docs/install/index.md b/docs/install/index.md new file mode 100644 index 000000000..19dfe3add --- /dev/null +++ b/docs/install/index.md @@ -0,0 +1,57 @@ +--- +title: Installation +description: Install Wave +tags: [docker compose, kubernetes, install, wave, wave lite] +--- + +Wave provides multiple installation and deployment options to accommodate different organizational needs and infrastructure requirements. + +## Wave service by Seqera + +Wave is Seqera's container provisioning service that streamlines container management for Nextflow data analysis pipelines. The simplest way to get started with Wave is +through the hosted Wave service at https://wave.seqera.io. This cloud-based option requires no local infrastructure setup and provides immediate access to Wave's container +provisioning capabilities. The hosted service is ideal for users who want to start using Wave quickly with minimal configuration overhead. + +When using the hosted Wave service, you can add a simple configuration block to your `nextflow.config` file to integrate Wave with your Nextflow pipelines: + +```groovy +wave { + enabled = true +} + +tower { + accessToken = '' +} +``` + +You don't need a Seqera access token for basic Wave functionality. However, providing one grants you additional capabilities including access to private container +repositories, container freezing to your own registry, and bypassing rate limits on public registries. Container registry credentials should be configured using the [credentials manager](https://docs.seqera.io/platform-cloud/credentials/overview) in Seqera Platform. + +If you're a Seqera Platform Enterprise customer, integrate Wave by adding the `TOWER_ENABLE_WAVE=true` and `WAVE_SERVER_URL="https://wave.seqera.io"` environment variables to your Seqera configuration environment. You must configure container registry credentials in the Platform UI to enable access to private repositories, and ensure the Wave service is accessible from your network infrastructure. For more information, see [Wave containers](https://docs.seqera.io/platform-enterprise/enterprise/configuration/wave). 
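+
+For example, a minimal sketch of the corresponding entries in your Seqera Platform environment configuration. The file name and any surrounding settings depend on your deployment; `tower.env` is assumed here only for illustration:
+
+```bash
+# Assumed Platform Enterprise environment file (for example, tower.env)
+TOWER_ENABLE_WAVE=true
+WAVE_SERVER_URL="https://wave.seqera.io"
+```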
+ +## Self-hosted deployment + +For organizations that require greater control over their container infrastructure or those with specific security and compliance requirements, Wave offers self-hosted deployment options. + +### Docker Compose + +Docker Compose provides a straightforward method for deploying Wave. Docker Compose packages Wave services and dependencies into a coordinated container stack and handles service orchestration automatically, managing startup and networking of Wave components. + +Docker Compose installations support [Wave Lite](../wave-lite.md), a configuration mode for Wave that includes container augmentation and inspection capabilities, and enables the use of the Fusion file system in Nextflow pipelines. See [Install Wave Lite with Docker Compose](./docker-compose.md) for detailed configuration instructions. + +### Kubernetes + +Kubernetes is the preferred choice for deploying Wave in production environments that require scalability, high availability, and resilience. A Kubernetes installation involves setting up a cluster, configuring various components, and managing networking, storage, and resource allocation. + +Kubernetes installations support [Wave Lite](../wave-lite.md), a configuration mode for Wave that supports container augmentation and inspection on AWS, Azure, and GCP deployments, and enables the use of Fusion file system in Nextflow pipelines. See [Install Wave Lite with Kubernetes](./kubernetes.md) for detailed configuration instructions. + +If your organization requires full build capabilities, you can extend Wave beyond the base Wave Lite installation. This advanced configuration enables container +building functionality through specific integrations on Kubernetes and AWS EFS storage. See [Configure Wave build](./configure-wave-build.md) for detailed instructions. + + diff --git a/docs/install/kubernetes.md b/docs/install/kubernetes.md index 66ee4be7b..802a63fd2 100644 --- a/docs/install/kubernetes.md +++ b/docs/install/kubernetes.md @@ -1,329 +1,376 @@ --- -title: Kubernetes installation +title: Install Wave Lite with Kubernetes +description: Install Wave Lite on Kubernetes +tags: [kubernetes, install, wave, wave lite] --- -Wave enables you to provision container images on-demand, removing the need to build and upload them manually to a container registry. Wave can can provision both disposable containers that are only accessible for a short period, and regular registry-persisted container images. +Kubernetes is the preferred choice for deploying Wave in production environments that require scalability, high availability, and resilience. Its installation involves setting up a cluster, configuring various components, and managing networking, storage, and resource allocation. -This installation guide covers Wave in [Lite](../wave-lite.md) mode. Wave Lite provides container augmentation and inspection capabilities on AWS, Azure, and GCP cloud deployments, and enables the use of Fusion file system in Nextflow pipelines. +Kubernetes installations support [Wave Lite](../wave-lite.md), a configuration mode for Wave that supports container augmentation and inspection on AWS, Azure, and GCP deployments, and enables the use of Fusion file system in Nextflow pipelines. -:::info -Wave's full build capabilities require specific integrations with Kubernetes and AWS EFS Storage, making EKS and AWS a hard dependency for fully-featured deployments. 
After you have configured a base Wave Lite installation on AWS with this guide, see [Configure Wave Build](./configure-wave-build.md) to extend your installation to support build capabilities. -::: - -## Prerequisites - -**Required infrastructure:** -- **Kubernetes cluster** - Version 1.31 or higher (any distribution) -- **PostgreSQL instance** - Version 12 or higher (managed externally) -- **Redis instance** - Version 6.0 or higher (managed externally) +This page describes how to run Wave Lite on Kubernetes. It includes steps to: -## System requirements +- Configure a database +- Create a namespace +- Configure Wave +- Deploy Wave +- Create a service +- Configure Wave connectivity -The minimum system requirements for a Wave Kubernetes installation are: +Wave's full build capabilities require specific integrations with Kubernetes and AWS Elastic File System (EFS) Storage, making Amazon Elastic Kubernetes Service (EKS) and AWS hard dependencies for full-featured deployments. If you require build capabilities, see [Configure Wave build](./configure-wave-build.md) to extend your installation. -- **Memory**: Minimum 4GB RAM per Wave pod -- **CPU**: Minimum 1 CPU core per pod -- **Network**: Connectivity to your external PostgreSQL and Redis instances -- **Storage**: Sufficient storage for your container images and temporary files +:::info[**Prerequisites**] +You will need the following to get started: -:::info -See [Configure Wave](./configure-wave.md) for detailed scaling and performance tuning guidance. +- A Kubernetes cluster (version 1.31 or higher) +- An externally managed PostgreSQL instance (version 12 or higher) +- An externally managed Redis instance (version 6.0 or higher) +- A Kubernetes cluster with at least: + - **Memory**: 4 GB RAM per Wave pod + - **CPU**: 1 CPU core per pod + - **Network**: Connectivity to your external PostgreSQL and Redis instances + - **Storage**: Sufficient storage for your container images and temporary files ::: -## Assumptions - +:::warning[**Assumptions**] This guide assumes: + - You have already deployed Seqera Platform Enterprise - You will deploy Wave into the `wave` namespace - You have appropriate cluster permissions to create namespaces, deployments, and services - Your PostgreSQL and Redis instances are accessible from the Kubernetes cluster +::: -## Database configuration +## Configure a database -Wave requires a PostgreSQL database to operate. +To create a PostgreSQL database and user account, follow these steps: -Create a dedicated `wave` database and user account with the appropriate privileges: +1. Connect to PostgreSQL. +1. 
Create a dedicated `wave` database and user account with the appropriate privileges: -```sql --- Create a dedicated user for Wave -CREATE ROLE wave_user LOGIN PASSWORD 'your_secure_password'; + ```sql + CREATE ROLE wave_user LOGIN PASSWORD ''; --- Create the Wave database -CREATE DATABASE wave; + CREATE DATABASE wave; --- Connect to the wave database -\c wave; + \c wave; --- Grant basic schema access -GRANT USAGE, CREATE ON SCHEMA public TO wave_user; + GRANT USAGE, CREATE ON SCHEMA public TO wave_user; --- Grant privileges on existing tables and sequences -GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO wave_user; -GRANT USAGE, SELECT, UPDATE ON ALL SEQUENCES IN SCHEMA public TO wave_user; + GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO wave_user; + GRANT USAGE, SELECT, UPDATE ON ALL SEQUENCES IN SCHEMA public TO wave_user; --- Grant privileges on future tables and sequences -ALTER DEFAULT PRIVILEGES IN SCHEMA public -GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO wave_user; + ALTER DEFAULT PRIVILEGES IN SCHEMA public + GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO wave_user; -ALTER DEFAULT PRIVILEGES IN SCHEMA public -GRANT USAGE, SELECT, UPDATE ON SEQUENCES TO wave_user; -``` + ALTER DEFAULT PRIVILEGES IN SCHEMA public + GRANT USAGE, SELECT, UPDATE ON SEQUENCES TO wave_user; + ``` -## Create namespace + Replace `` with a secure password for the database user. -```yaml ---- -apiVersion: v1 -kind: Namespace -metadata: - name: "wave" - labels: - app: wave-app -``` +## Create a namespace -## Configure Wave +To create a Kubernetes `wave` namespace, follow these steps: -Create a ConfigMap containing Wave's configuration. Update the following values to match your environment: +1. Create `namespace.yaml` with the following configuration: -- Database connection details (`uri`, `user`, `password`) -- Redis connection string -- Seqera Platform API endpoint + ```yaml + --- + apiVersion: v1 + kind: Namespace + metadata: + name: "wave" + labels: + app: wave-app + ``` -:::warning -This configuration contains sensitive values. It is recommended to use Kubernetes Secrets for sensitive data, instead of embedding them directly in the ConfigMap. See the [Kubernetes Secrets documentation](https://kubernetes.io/docs/concepts/configuration/secret/) for more details. +1. 
Apply the `wave` namespace configuration: -Example using environment variables with secrets: + ```bash + kubectl apply -f namespace.yaml + ``` -```yaml -env: - - name: WAVE_DB_PASSWORD - valueFrom: - secretKeyRef: - name: wave-secrets - key: db-password -``` -::: +## Configure Wave -```yaml -kind: ConfigMap -apiVersion: v1 -metadata: - name: wave-cfg - namespace: "wave" - labels: - app: wave-cfg -data: - config.yml: | - wave: - # Build service configuration - disabled for Wave base installation - build: - enabled: false - # Mirror service configuration - disabled for Wave base installation - mirror: - enabled: false - # Security scanning configuration - disabled for Wave base installation - scan: - enabled: false - # Blob caching configuration - disabled for Wave base installation - blobCache: - enabled: false - # Database connection settings - db: - uri: "jdbc:postgresql://your-postgres-host:5432/wave" - user: "wave_user" - password: "your_secure_password" - - # Redis configuration for caching and session management - redis: - uri: "rediss://your-redis-host:6379" - - # Platform integration (optional) - tower: - endpoint: - url: "https://your-platform-server.com" - - # Micronaut framework configuration - micronaut: - # Executor configuration for handling concurrent requests - executors: - stream-executor: - type: FIXED - number-of-threads: 16 - # Netty HTTP server configuration - netty: - event-loops: - default: - num-threads: 64 - stream-pool: - executor: stream-executor - # HTTP client configuration - http: - services: - stream-client: - read-timeout: 30s - read-idle-timeout: 5m - event-loop-group: stream-pool - - # Management endpoints configuration - loggers: - # Enable metrics for monitoring - metrics: - enabled: true - # Enable health checks - health: - enabled: true - disk-space: - enabled: false - jdbc: - enabled: false -``` - -## Create deployment - -Deploy Wave using the following Deployment manifest: - -```yaml ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: wave - namespace: "wave" - labels: - app: wave-app -spec: - replicas: 1 - selector: - matchLabels: - app: wave-app - template: +To configure database connections, Redis caching, and server settings, follow these steps: + +1. 
Create `wave-config.yaml` with the following configuration: + + ```yaml + --- + apiVersion: v1 + kind: ConfigMap + metadata: + name: wave-cfg + namespace: "wave" + data: + config.yml: | + wave: + # Build service configuration - disabled for Wave base installation + build: + enabled: false + # Mirror service configuration - disabled for Wave base installation + mirror: + enabled: false + # Security scanning configuration - disabled for Wave base installation + scan: + enabled: false + # Blob caching configuration - disabled for Wave base installation + blobCache: + enabled: false + # Database connection settings + db: + uri: "jdbc:postgresql://:5432/wave" + user: "wave_user" + password: "" + + # Redis configuration for caching and session management + redis: + uri: "rediss://:6379" + + # Platform integration (optional) + tower: + endpoint: + url: "" + + # Micronaut framework configuration + micronaut: + # Executor configuration for handling concurrent requests + executors: + stream-executor: + type: FIXED + number-of-threads: 16 + # Netty HTTP server configuration + netty: + event-loops: + default: + num-threads: 64 + stream-pool: + executor: stream-executor + # HTTP client configuration + http: + services: + stream-client: + read-timeout: 30s + read-idle-timeout: 5m + event-loop-group: stream-pool + + # Management endpoints configuration + loggers: + # Enable metrics for monitoring + metrics: + enabled: true + # Enable health checks + health: + enabled: true + disk-space: + enabled: false + jdbc: + enabled: false + ``` + + Replace the following: + + - ``: your Postgres service endpoint + - ``: your Redis service endpoint + - ``: your secure password for the database user + - ``: your Platform endpoint URL (_optional_) + + :::tip + It is recommended to use Kubernetes Secrets for sensitive data instead of embedding them in the ConfigMap. For example, by adding environment variables with secrets to your Wave deployment manifest: + + ```yaml + --- + env: + - name: WAVE_DB_PASSWORD + valueFrom: + secretKeyRef: + name: wave-secrets + key: db-password + ``` + + For more information, see Kubernetes [Secrets](https://kubernetes.io/docs/concepts/configuration/secret/). + ::: + +1. Apply the ConfigMap: + + ```bash + kubectl apply -f wave-config.yaml + ``` + +## Deploy Wave + +To deploy Wave using the deployment manifest, follow these steps: + +1. 
Create `wave-deployment.yaml` with the following configuration: + + ```yaml + --- + apiVersion: apps/v1 + kind: Deployment metadata: + name: wave + namespace: "wave" labels: app: wave-app spec: - containers: - - image: REPLACE_ME_AWS_ACCOUNT.dkr.ecr.us-east-1.amazonaws.com/nf-tower-enterprise/wave:REPLACE_ME_WAVE_IMAGE_TAG - name: wave-app - ports: - - containerPort: 9090 - env: - - name: MICRONAUT_ENVIRONMENTS - value: "postgres,redis,lite" - resources: - requests: - memory: "4000Mi" - limits: - memory: "4000Mi" - workingDir: "/work" - volumeMounts: + replicas: 1 + selector: + matchLabels: + app: wave-app + template: + metadata: + labels: + app: wave-app + spec: + containers: + - image: .dkr.ecr.us-east-1.amazonaws.com/nf-tower-enterprise/wave: + name: wave-app + ports: + - containerPort: 9090 + env: + - name: MICRONAUT_ENVIRONMENTS + value: "postgres,redis,lite" + resources: + requests: + memory: "4000Mi" + limits: + memory: "4000Mi" + workingDir: "/work" + volumeMounts: + - name: wave-cfg + mountPath: /work/config.yml + subPath: "config.yml" + readinessProbe: + httpGet: + path: /health + port: 9090 + initialDelaySeconds: 5 + timeoutSeconds: 3 + livenessProbe: + httpGet: + path: /health + port: 9090 + initialDelaySeconds: 5 + timeoutSeconds: 3 + failureThreshold: 10 + volumes: - name: wave-cfg - mountPath: /work/config.yml - subPath: "config.yml" - readinessProbe: - httpGet: - path: /health - port: 9090 - initialDelaySeconds: 5 - timeoutSeconds: 3 - livenessProbe: - httpGet: - path: /health - port: 9090 - initialDelaySeconds: 5 - timeoutSeconds: 3 - failureThreshold: 10 - volumes: - - name: wave-cfg - configMap: - name: wave-cfg - restartPolicy: Always -``` - -## Create Service - -Expose Wave within the cluster using a Service: - -```yaml ---- -apiVersion: v1 -kind: Service -metadata: - name: wave-service - namespace: "wave" - labels: - app: wave-app -spec: - selector: - app: wave-app - ports: - - name: http - port: 9090 - targetPort: 9090 - protocol: TCP - type: ClusterIP -``` + configMap: + name: wave-cfg + restartPolicy: Always + ``` + + Replace the following: + + - ``: your AWS account ID + - ``: your Wave image tag version + +2. Apply the Wave configuration: + + ```bash + kubectl apply -f wave-deployment.yaml + ``` + +## Create a service +To expose Wave within the cluster using a Service, follow these steps: -## Next steps +1. Create `wave-service.yaml` with the following configuration: -### Configure Seqera Platform to integrate with Wave + ```yaml + --- + apiVersion: v1 + kind: Service + metadata: + name: wave-service + namespace: "wave" + labels: + app: wave-app + spec: + selector: + app: wave-app + ports: + - name: http + port: 9090 + targetPort: 9090 + protocol: TCP + type: ClusterIP + ``` + +1. Apply the service to the namespace: -Configure your Seqera Platform Enterprise deployment to integrate with Wave by setting the Wave server endpoint in your `tower.yml` [configuration](https://docs.seqera.io/platform-enterprise/latest/enterprise/configuration/wave). + ```bash + kubectl apply -f wave-service.yaml + ``` -### Networking +## Configure Wave connectivity + +After deploying Wave Lite, configure networking and integrate with Seqera Platform to make Wave accessible and functional. + +### Set up networking Wave must be accessible from: - Seqera Platform services - Compute environments (for container image access) -Configure external access using a Kubernetes ingress. +Use a Kubernetes ingress to configure external access. 
+ +To set up an ingress, follow these steps: -Update the following example ingress with your provider-specific annotations: +1. Create `wave-ingress.yaml` with the following configuration: + + ```yaml + apiVersion: networking.k8s.io/v1 + kind: Ingress + metadata: + name: wave-ingress + namespace: wave + # Add your ingress controller annotations here + spec: + rules: + - host: + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: wave-service + port: + number: 9090 + ``` -```yaml -apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: wave-ingress - namespace: wave -spec: - rules: - - host: wave.your-domain.com - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: wave-service - port: - number: 9090 -``` + Replace `` with your provider-specific domain. -### TLS +2. Apply the ingress: -Wave does not handle TLS termination directly. Configure TLS at your ingress controller or load balancer level. Most ingress controllers support automatic certificate provisioning through provider integrations. + ```bash + kubectl apply -f wave-ingress.yaml + ``` +### Configure TLS -### Production environments +Wave does not handle TLS termination directly. +Configure it at your ingress controller or load balancer level. +Most ingress controllers support automatic certificate provisioning. -Consider implementing the following for production deployments: +### Connect to Seqera Platform -**Reliability:** -- Pod Disruption Budgets, for availability during cluster maintenance -- Horizontal Pod Autoscaler, for automatic scaling based on load -- Multiple replicas with anti-affinity rules, for high availability +Configure your Seqera Platform deployment to use Wave by setting the Wave server endpoint in your `tower.yml` configuration. For more information, see [Pair your Seqera instance with Wave](https://docs.seqera.io/platform-enterprise/enterprise/configuration/wave#pair-your-seqera-instance-with-wave). -**Resource management:** -- Node selectors or affinity rules for optimal pod placement -- Resource quotas and limit ranges for the Wave namespace +### Configure AWS ECR access -### AWS credentials to access ECR -Wave requires access to AWS ECR for container image management. Create an IAM role with the following permissions: +Wave needs specific permissions to manage container images. -```json -"Statement": [ +To configure AWS ECR access: + +- Create an IAM role with the following permissions: + + ```json + "Statement": [ { "Action": [ "ecr:BatchCheckLayerAvailability", @@ -344,11 +391,35 @@ Wave requires access to AWS ECR for container image management. Create an IAM ro ], "Effect": "Allow", "Resource": [ - "/wave/*" + "/wave/*" ] } - ``` + ] + ``` + + Replace `` with your ECR repository Amazon Resource Name (ARN). 
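+
+How this role is attached to the Wave workload depends on your cluster setup. On EKS, one common approach is IAM Roles for Service Accounts (IRSA). The following minimal sketch assumes a hypothetical `wave-sa` service account, referenced from the Wave deployment through `serviceAccountName`, and a placeholder role ARN:
+
+```yaml
+# Hypothetical IRSA sketch: the service account name and role ARN are
+# placeholders; substitute the values for your environment.
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: wave-sa
+  namespace: wave
+  annotations:
+    eks.amazonaws.com/role-arn: "arn:aws:iam::<aws-account-id>:role/<wave-ecr-access-role>"
+```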
+ +## Optimize for production environments + +For production deployments, implement these reliability and performance improvements: + +**High availability:** + +- Set `replicas: 3` or higher in your deployment +- Add pod anti-affinity rules to distribute replicas across nodes +- Configure Pod Disruption Budgets for maintenance windows + +**Auto-scaling:** + +- Set up Horizontal Pod Autoscaler based on CPU/memory usage +- Configure resource quotas and limits for the Wave namespace + +**Monitoring:** + +- Set up monitoring for Wave pods and services +- Configure alerting for health check failures +- Monitor database and Redis connection health -### Advanced configuration +## Configure advanced options -See [Configuring Wave](./configure-wave.md) for advanced Wave features, scaling guidance, and integration options. +For advanced Wave features, scaling guidance, and integration options, see [Configure Wave](./configure-wave.md). diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md index 25ec0338c..1aaa6eaf1 100644 --- a/docs/tutorials/index.md +++ b/docs/tutorials/index.md @@ -4,11 +4,19 @@ description: Guides to get started with Wave. tags: [containers, nextflow, seqera containers, wave, wave cli] --- +Wave is versatile and you can leverage it in your Nextflow pipelines in several ways. The following guides describe how to quickly get started with Wave: + +- [Nextflow and Seqera Containers][seqera-containers-page] - Add community-built containers in your pipelines +- [Nextflow and Wave][nextflow-wave-page] - Build and provision containers on-demand with Wave integration +- [Wave CLI][wave-cli-page] - Build, test, and manage containers from the command line + + [seqera-containers-page]: /wave_docs/wave_repo/docs/tutorials/nextflow-seqera-containers.mdx [nextflow-wave-page]: /wave_docs/wave_repo/docs/tutorials/nextflow-wave.mdx