diff --git a/public/omni/infrastructure-and-extensions/install-airgapped-omni.mdx b/public/omni/infrastructure-and-extensions/install-airgapped-omni.mdx
index e103325..3c006bc 100644
--- a/public/omni/infrastructure-and-extensions/install-airgapped-omni.mdx
+++ b/public/omni/infrastructure-and-extensions/install-airgapped-omni.mdx
@@ -2,422 +2,445 @@
 title: Installing Airgapped Omni
 ---
-### Prerequisites
+import { omni_release, version, release } from '/snippets/custom-variables.mdx';
-DNS server NTP server TLS certificates Installed on machine running Omni
+This document walks through each component needed to run the "Sidero stack" in an offline environment, which includes the following components:
-* genuuid
-  * Used to generate a unique account ID for Omni.
-* Docker
-  * Used for running the suite of applications
-* Wireguard
-  * Used by Siderolink
-
-### Gathering Dependencies
+* Omni
+* Self-signed Certificates
+* [Image Factory](./self-hosted/deploy-image-factory-on-prem)
+* Container registry
+* Authentication service
-In this package, we will be installing:
+>When running Talos with Omni, you do not need to run additional services such as the [Discovery Service](../../talos/v1.12/configure-your-talos-cluster/system-configuration/discovery) because discovery functionality is built in to Omni.
-* Gitea
-* Keycloak
-* Omni
+If you already have services such as a container registry and an authentication service, you can re-use those components. This guide sets up proof-of-concept-quality options to get you started.
-To keep everything organized, I am using the following directory structure to store all the dependencies and I will move them to the airgapped network all at once.
+## Prerequisites
-> **NOTE:** The empty directories will be used for the persistent data volumes when we deploy these apps in Docker.
+Your environment is expected to already provide some basic supporting services.
These include:
-```bash
-airgap
-├── certs
-├── gitea
-├── keycloak
-├── omni
-└── registry
-```
+* Networks that can route Talos nodes to the Omni service endpoints
+* Basic networking services such as DNS, DHCP, and NTP
+* An admin system that can connect to the internet to download assets
+* A server to run the Sidero stack on
-#### Generate Certificates
+>The Sidero stack can be run on a single server or multiple servers, but we don't recommend running Omni inside of Kubernetes for air-gapped environments.
-**TLS Certificates**
+In addition to these services, you'll need the following tools installed on the administrator machine and server.
-This tutorial will involve configuring all of the applications to be accessed via https with signed `.pem` certificates generated with [certbot](https://certbot.eff.org/). There are many methods of configuring TLS certificates and this guide will not cover how to generate your own TLS certificates, but there are many resources available online to help with this subject if you do not have certificates already.
+* [`talosctl`](../../talos/v1.12/getting-started/talosctl)
+* `docker` or `podman`
+* [`cfssl`](https://github.com/cloudflare/cfssl)
+* [`yq`](https://github.com/mikefarah/yq)
-**Omni Certificate**
+>Podman is known to work, but some of its flags differ from Docker's, and you may need to translate them for your version of Podman.
-Omni uses etcd to store the data for our installation and we need to give it a private key to use for encryption of the etcd database.
+### Export endpoints
-1. First, Generate a GPG key.
+To make this guide easier to follow, we will set global variables for each of the endpoints and ports we will use. Update the hostnames and ports if you change any of them from the defaults.
```bash
-gpg --quick-generate-key "Omni (Used for etcd data encryption) how-to-guide@siderolabs.com" rsa4096 cert never
+REGISTRY_ENDPOINT=registry.internal:5000
+FACTORY_ENDPOINT=factory.internal:8080
+AUTH_ENDPOINT=auth.internal:5556
+OMNI_ENDPOINT=omni.internal
```
-This will generate a new GPG key pair with the specified properties.
+## 1. Generate Certificates
-What's going on here?
+In order to run services securely, even in an air-gapped environment, you should run with encrypted data in transit and at rest. There are multiple certificates and keys needed to secure your infrastructure.
-* `quick-generate-key` allows us to quickly generate a new GPG key pair.
-`"Omni (Used for etcd data encryption) how-to-guide@siderolabs.com"` is the user ID associated with the key which generally consists of the real name, a comment, and an email address for the user.
-* `rsa4096` specifies the algorithm type and key size.
-* `cert` means this key can be used to certify other keys.
-* `never` specifies that this key will never expire.
+* CA certificate (root of trust)
+* Domain certificates for the following endpoints
+  * Omni
+  * Authentication
+  * Image factory
+  * Container registry
+* Container signing certificate
-2. Add an encryption subkey
+If you already have a trusted, internal CA root, you can skip generating the root CA. You will need to use your existing CA to create certificates for the services.
-We will use the fingerprint of this key to create an encryption subkey.
+### Create CA certificate (optional)
-To find the fingerprint of the key we just created, run:
+We will use the `cfssl` command to make the CA and certificate signing easier, but `openssl` can be used if you have existing CA infrastructure.
```bash
-gpg --list-secret-keys
+cat << EOF > ca-csr.json
+{
+  "CN": "Internal Root CA",
+  "key": {
+    "algo": "rsa",
+    "size": 4096
+  },
+  "names": [
+    {
+      "C": "US",
+      "O": "Internal Infrastructure",
+      "OU": "Security"
+    }
+  ]
+}
+EOF
```
-
-Next, run the following command to create the encryption subkey, replacing `$FPR` with your own keys fingerprint.
+With the configuration in place, create a CA certificate.
```bash
-gpg --quick-add-key $FPR rsa4096 encr never
+cfssl gencert -initca ca-csr.json | cfssljson -bare ca
```
-In this command:
-
-* `$FPR` is the fingerprint of the key we are adding the subkey to.
-* `rsa4096` and `encr` specify that the new subkey will be an RSA encryption key with a size of 4096 bits.
-* `never` means this subkey will never expire.
+This will give you a private key, `ca-key.pem`, a public key `ca.pem`, and a signing request `ca.csr`.
-3. Export the secret key
+Any client that calls the internal services needs the `ca.pem` file installed into its trusted store.
-Lastly we'll export this key into an ASCII formatted file so Omni can use it.
+
+
+    On Ubuntu and Debian based Linux distros, copy the `ca.pem` file to `/usr/local/share/ca-certificates/ca.crt` (the `.crt` extension is required) and then run `sudo update-ca-certificates`.
```bash
-gpg --export-secret-key --armor how-to-guide@siderolabs.com > certs/omni.asc
```
    On Red Hat and Fedora based distros you can copy the `ca.pem` file into the `/etc/pki/ca-trust/source/anchors/` folder and then run the following command to generate the trusted root store:
-* `--armor` is an option which creates the output in ASCII format. Without it, the output would be binary.
+    ```bash
+    sudo update-ca-trust
+    ```
+
+
+    For macOS you should open the *Keychain Access* application and drag the `ca.pem` file into the window to install it.
+
+
+    On Windows you should rename the certificate extension from `.pem` to `.crt`. You can then double-click on the file and select *Install Certificate* -> *Local Machine*.
You then need to select "Place all certificates in the following store" and select the *Trusted Root Certification Authorities*.
+
+    If you're using Windows Subsystem for Linux (WSL), you should follow the Linux guide for installing the certificate.
+
+
-#### Create the app.ini File
+### Generate web certificates
-Gitea uses a configuration file named **app.ini** which we can use to pre-configure with the necessary information to run Gitea and bypass the initial startup page. When we start the container, we will mount this file as a volume using Docker.
+Generate a single certificate that all services can use based on the CA we just created. In a production setting, you should generate individual certificates for each service.
-Create the **app.ini** file
+Create a signing configuration to let `cfssl` know we want a web server certificate that should expire in 1 year.
```bash
-vim gitea/app.ini
+cat << EOF > ca-config.json
+{
+  "signing": {
+    "default": {
+      "expiry": "8760h"
+    },
+    "profiles": {
+      "web-server": {
+        "usages": ["signing", "key encipherment", "server auth"],
+        "expiry": "8760h"
+      }
+    }
+  }
+}
+EOF
```
-Replace the `DOMAIN`, `SSH_DOMAIN`, and `ROOT_URL` values with your own hostname:
-
-```ini
-APP_NAME=Gitea: Git with a cup of tea
-RUN_MODE=prod
-RUN_USER=git
-I_AM_BEING_UNSAFE_RUNNING_AS_ROOT=false
-
-[server]
-CERT_FILE=cert.pem
-KEY_FILE=key.pem
-APP_DATA_PATH=/data/gitea
-DOMAIN=${GITEA_HOSTNAME}
-SSH_DOMAIN=${GITEA_HOSTNAME}
-HTTP_PORT=3000
-ROOT_URL=https://${GITEA_HOSTNAME}:3000/
-HTTP_ADDR=0.0.0.0
-PROTOCOL=https
-LOCAL_ROOT_URL=https://localhost:3000/
-
-[database]
-PATH=/data/gitea/gitea.db
-DB_TYPE=sqlite3
-
-[security]
-INSTALL_LOCK=true # This is the value which tells Gitea not to run the initial configuration wizard on start up
-```
+Now create a wildcard certificate for your services.
We'll be using the ICANN reserved `.internal` TLD for service domains, but you can use a proper domain if you have it registered.
-> **NOTE:** If running this in a production environment, you will also want to configure the database settings for a production database. This configuration will use an internal sqlite database in the container.
+>When using `.internal` domains, you will need to update your DNS server or `/etc/hosts` file to make sure the endpoints resolve properly.
-#### Gathering Images
+```bash
+cat << EOF > wildcard-csr.json
+{
+  "CN": "Internal Wildcard",
+  "hosts": [
+    "internal",
+    "${AUTH_ENDPOINT%:*}",
+    "${REGISTRY_ENDPOINT%:*}",
+    "${FACTORY_ENDPOINT%:*}",
+    "${OMNI_ENDPOINT%:*}",
+    "127.0.0.1",
+    "$(hostname -I | awk '{print $1}')"
+  ],
+  "key": {
+    "algo": "rsa",
+    "size": 4096
+  }
+}
+EOF
+```
-Next we will gather all the images needed installing Gitea, Keycloak, Omni, **and** the images Omni will need for creating and installing Talos.
+Generate the wildcard certificate and key for the services.
-I'll be using the following images for the tutorial:
+```bash
+cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
+  -config=ca-config.json \
+  -profile=web-server wildcard-csr.json \
+  | cfssljson -bare server
+```
-* Gitea
-  * `docker.io/gitea/gitea:1.19.3`
-* Keycloak
-  * `quay.io/keycloak/keycloak:21.1.1`
-* Omni
-  * `ghcr.io/siderolabs/omni:v0.31.0`
-  * `ghcr.io/siderolabs/imager:v1.4.5`
-  * pull this image to match the version of Talos you would like to use.
-* Talos
-  * `ghcr.io/siderolabs/flannel:v0.21.4`
-  * `ghcr.io/siderolabs/install-cni:v1.4.0-1-g9b07505`
-  * `docker.io/coredns/coredns:1.10.1`
-  * `gcr.io/etcd-development/etcd:v3.5.9`
-  * `registry.k8s.io/kube-apiserver:v1.27.2`
-  * `registry.k8s.io/kube-controller-manager:v1.27.2`
-  * `registry.k8s.io/kube-scheduler:v1.27.2`
-  * `registry.k8s.io/kube-proxy:v1.27.2`
-  * `ghcr.io/siderolabs/kubelet:v1.27.2`
-  * `ghcr.io/siderolabs/installer:v1.4.5`
-  * `registry.k8s.io/pause:3.6`
-
-> **NOTE**: The Talos images needed may be found using the command `talosctl image default`. If you do not have `talosctl` installed, you may find the instructions on how to install it [here](https://omni.siderolabs.com/docs/how-to-guides/how-to-install-talosctl/).
-
-**Package the images**
-
-1. Pull the images to load them locally into Docker.
-
-* Run the following command for each of the images listed above **except** for the Omni image which will be provided to you as an archive file already.
+Create a certificate chain containing the server certificate and the CA.

```bash
-sudo docker pull registry/repository/image-name:tag
+cat server.pem ca.pem > server-chain.pem
```

-2. Verify all of the images have been downloaded
+This will create a server private key, `server-key.pem`, a server certificate, `server.pem`, a server signing request, `server.csr`, and `server-chain.pem`, the server certificate bundled with the CA.
+
+Different services in this guide run with different user IDs, so we will update the certificates to allow all users to read them. This is not suitable or secure for a production environment.

```bash
-sudo docker image ls
+chmod 644 server*.pem
```

-3. Save all of the images into an archive file.
+## 2. Image factory and container registry
+
+Using the certificates we just created, follow the guide [Deploy Image Factory On-prem](./self-hosted/deploy-image-factory-on-prem). This will create a container registry and host the Image Factory in your environment.
It will also sign container images with an offline key for verification.
+
+If you do not have a working Image Factory with Talos images and extensions seeded, do not continue with this guide. It is a prerequisite for running Omni in an air-gapped environment.
+
+## 3. Authentication
+
+If you have existing SAML or OIDC authentication available, you can use that with Omni. Please see the configuration guides in the [Authentication and Authorization](../security-and-authentication/authentication-and-authorization) section.
+
+For a PoC environment we will run [dex](https://dexidp.io/docs/) with static users. Dex can serve a static configuration or federate to upstream identity providers; for this guide we will configure static users.
+
+### Deploy Dex (optional)
+
+Because Omni does not provide user authentication itself, Dex will be configured so we can log in to Omni with a static user. You will need to download the dex container from a machine that has internet access and push it to your internal registry.
+
+#### Download dex
+
+Download the dex container image.

```bash
docker pull ghcr.io/dexidp/dex:v2.41.1
```

+If your machine has access to the internal registry you can push the image directly.
```bash -docker save -o registry/all_images.tar \ - docker.io/gitea/gitea:1.19.3 \ - quay.io/keycloak/keycloak:21.1.1 \ - ghcr.io/siderolabs/imager:v1.4.5 \ - ghcr.io/siderolabs/flannel:v0.21.4 \ - ghcr.io/siderolabs/install-cni:v1.4.0-1-g9b07505 \ - docker.io/coredns/coredns:1.10.1 \ - gcr.io/etcd-development/etcd:v3.5.9 \ - registry.k8s.io/kube-apiserver:v1.27.2 \ - registry.k8s.io/kube-controller-manager:v1.27.2 \ - registry.k8s.io/kube-scheduler:v1.27.2 \ - registry.k8s.io/kube-proxy:v1.27.2 \ - ghcr.io/siderolabs/kubelet:v1.27.2 \ - ghcr.io/siderolabs/installer:v1.4.5 \ - registry.k8s.io/pause:3.6 +docker tag ghcr.io/dexidp/dex:v2.41.1 ${REGISTRY_ENDPOINT}/dexidp/dex:v2.41.1 +docker push ${REGISTRY_ENDPOINT}/dexidp/dex:v2.41.1 ``` -#### Move Dependencies +If you need to send the image to a remote machine or transfer it via an offline method you can export the container image. -Now that we have all the packages necessary for the airgapped deployment of Omni, we'll create a compressed archive file and move it to our airgapped network. +```bash +docker save -o dex.tar ghcr.io/dexidp/dex:v2.41.1 +``` -The directory structure should look like this now: +On the remote machine load the archive into the local image storage and then push it to the registry. ```bash -airgap -├── certs -│ ├── fullchain.pem -│ ├── omni.asc -│ └── privkey.pem -├── gitea -│ └── app.ini -├── keycloak -├── omni -└── registry - ├── omni-image.tar # Provided to you by Sidero Labs - └── all_images.tar +docker load -i dex.tar +docker push ${REGISTRY_ENDPOINT}/dexidp/dex:v2.41.1 ``` -Create a compressed archive file to move to our airgap machine. +#### Configure dex + +We will create an example dex configuration to use with Omni. + +Create a password. Make sure you remember this password for logging in later. 
```bash
export OMNI_USER_PASSWORD=$(htpasswd -BnC 15 user42 | cut --delimiter=: --fields=1 --complement)
```

Create a dex configuration file.

```bash
cat << EOF > dex.yaml
issuer: https://${AUTH_ENDPOINT}
storage:
  type: memory

web:
  https: 0.0.0.0:5556
  tlsCert: /etc/dex/tls/server-chain.pem
  tlsKey: /etc/dex/tls/server-key.pem

enablePasswordDB: true

staticClients:
  - name: Omni
    id: omni
    secret: aW50ZXJuYWwtc2lkZXJvLXN0YWNrCg== #internal-sidero-stack
    redirectURIs: [https://omni.internal/oidc/consume]

staticPasswords:
  - email: "admin@omni.internal"
    username: "admin"
    preferredUsername: "admin"
    hash: "$OMNI_USER_PASSWORD"
EOF
```

#### Run dex

Run dex with the provided configuration and certificate.

```bash
docker run -d \
  --name dex \
  -p 5556:5556 \
  -v $(pwd)/dex.yaml:/etc/dex/dex.yaml \
  -v $(pwd)/server-key.pem:/etc/dex/tls/server-key.pem \
  -v $(pwd)/server-chain.pem:/etc/dex/tls/server-chain.pem \
  ${REGISTRY_ENDPOINT}/dexidp/dex:v2.41.1 \
  dex serve /etc/dex/dex.yaml
```

>If your machine has SELinux in enforcing mode, you may need to add `:Z` to the volume mounts in the docker command.

## 4. Run Omni

Omni will depend on the following URLs. If these services are not running or the hostnames do not resolve, you may need to add them to your `/etc/hosts` file.
-Gitea will be used as a container registry for storing our images, but also many other functionalities including Git, Large File Storage, and the ability to store packages for many different package types. For more information on what you can use Gitea for, visit their [documentation](https://docs.gitea.com/). +* omni.internal +* factory.internal +* registry.internal +* auth.internal -#### Install Gitea +### Create etcd encryption key -Load the images we moved over. This will load all the images into Docker on the airgapped machine. +Data in Omni's database is encrypted and we need an encryption key to provide to Omni. + +Generate a GPG key: ```bash -docker load -i registry/omni-image.tar -docker load -i registry/all_images.tar +gpg --quick-generate-key "Omni (Used for etcd data encryption) how-to-guide@siderolabs.com" rsa4096 cert never +FINGERPRINT=$(gpg --with-colons --list-keys "how-to-guide@siderolabs.com" | awk -F: '$1 == "fpr" {print $10; exit}') +gpg --quick-add-key ${FINGERPRINT} rsa4096 encr never +gpg --export-secret-key --armor how-to-guide@siderolabs.com > omni.asc ``` -Run Gitea using Docker: +>Note: Do not add passphrases to keys during creation. -* The **app.ini** file is already configured and mounted below with the `- v` argument. +### Download Omni -```bash -sudo docker run -it \ - -v $PWD/certs/privkey.pem:/data/gitea/key.pem \ - -v $PWD/certs/fullchain.pem:/data/gitea/cert.pem \ - -v $PWD/gitea/app.ini:/data/gitea/conf/app.ini \ - -v $PWD/gitea/data/:/data/gitea/ \ - -p 3000:3000 \ - gitea/gitea:1.19.3 -``` +Download the Omni container image. -You may now log in at the `https://${GITEA_HOSTNAME}:3000` to begin configuring Gitea to store all the images needed for Omni and Talos. 
+ +{`docker pull ghcr.io/siderolabs/omni:${omni_release}\ndocker tag ghcr.io/siderolabs/omni:${omni_release} \\\n \${REGISTRY_ENDPOINT}/siderolabs/omni:${omni_release}`} + -#### Gitea setup +If your machine has access to the internal registry you can push the image directly. -This is just the bare minimum setup to run Omni. Gitea has many additional configuration options and security measures to use in accordance with your industry's security standards. More information on the configuration of Gitea can be found [here](https://docs.gitea.com/). + +{`docker push \${REGISTRY_ENDPOINT}/siderolabs/omni:${omni_release}`} + -**Create a user** +If you need to send the image to a remote machine or transfer it via an offline method you can export the container image. -Click the **Register** button at the **top right** corner. The first user created will be created as an administrator - permissions can be adjusted afterwards if you like. + +{`docker save -o omni.tar \${REGISTRY_ENDPOINT}/siderolabs/omni:${omni_release}`} + -**Create organizations** +On the remote machine load the archive into the local image storage and then push it to the registry. -After registering an admin user, the organizations can be created which will act as the package repositories for storing images. Create the following organizations: + +{`docker load -i omni.tar\ndocker push \${REGISTRY_ENDPOINT}/siderolabs/omni:${omni_release}`} + -* `siderolabs` -* `keycloak` -* `coredns` -* `etcd-development` -* `registry-k8s-io-proxy` +### Start Omni container -> **NOTE:** If you are using self-signed certs and would like to push images to your local Gitea using Docker, you will also need to configure your certs.d directory as described [here](https://docs.docker.com/engine/security/certificates/). +This will run Omni with an embedded etcd database mounted to the host. It is not recommended for production use cases. 
The command assumes the certificates generated earlier are available in the local directory where you run this command. -#### Push Images to Gitea + +{`docker run \\\n --name omni \\\n -d --net=host \\\n --cap-add=NET_ADMIN \\\n --device /dev/net/tun:/dev/net/tun \\\n -v "\${PWD}/ca.pem:/etc/ssl/certs/ca-certificates.crt:ro" \\\n -v "\${PWD}/etcd:/_out/etcd" \\\n -v "\${PWD}/sqlite:/_out/sqlite:rw" \\\n -v "\${PWD}/server-key.pem:/server-key.pem:ro" \\\n -v "\${PWD}/server-chain.pem:/server-chain.pem:ro" \\\n -v "\${PWD}/omni.asc:/omni.asc:ro" \\\n \${REGISTRY_ENDPOINT}/siderolabs/omni:${omni_release} \\\n --name=air-gap-omni \\\n --cert=/server-chain.pem \\\n --key=/server-key.pem \\\n --siderolink-api-cert=/server-chain.pem \\\n --siderolink-api-key=/server-key.pem \\\n --private-key-source=file:///omni.asc \\\n --event-sink-port=8091 \\\n --bind-addr=0.0.0.0:443 \\\n --siderolink-api-bind-addr=0.0.0.0:8090 \\\n --k8s-proxy-bind-addr=0.0.0.0:8100 \\\n --advertised-api-url=https://omni.internal \\\n --siderolink-api-advertised-url=https://omni.internal:8090 \\\n --siderolink-wireguard-advertised-addr=\$(hostname -I | awk '{print \$1}'):50180 \\\n --advertised-kubernetes-proxy-url=https://omni.internal:8100 \\\n --auth-auth0-enabled=false \\\n --auth-oidc-enabled=true \\\n --auth-oidc-client-secret=aW50ZXJuYWwtc2lkZXJvLXN0YWNrCg== \\\n --auth-oidc-provider-url=https://\${AUTH_ENDPOINT} \\\n --auth-oidc-client-id=omni \\\n --auth-oidc-scopes=openid,profile,email \\\n --image-factory-address=https://\${FACTORY_ENDPOINT} \\\n --initial-users=admin@omni.internal \\\n --kubernetes-registry=\${REGISTRY_ENDPOINT}/siderolabs/kubelet \\\n --sqlite-storage-path=/_out/sqlite/omni.db \\\n --talos-installer-registry=\${REGISTRY_ENDPOINT}/siderolabs/installer \\\n --workload-proxying-enabled=false`} + -Now that all of our organizations have been created, we can push the images we loaded into our Gitea for deploying Keycloak, Omni, and storing images used by Talos. 
>If you are running the Image Factory on the same computer, you will need to update `--metrics-bind-addr` to use a port other than `2122`.

These flags mount the files and directories needed by Omni (e.g. certificates, etcd storage) and set flags to connect Omni to the upstream services (e.g. factory, authentication).

## 5. Create a cluster

Before you create a cluster you will need to generate installation media. When creating a cluster, you'll need to make sure you patch the machine config to redirect container registries to your internal registry.

### Create installation media

The first step to create a cluster is to boot machines and connect them to Omni. In order to do that, we will need to embed the self-signed CA certificate into Talos.

Create a configuration for a TrustedRootsConfig:

```bash
yq eval --null-input '
.apiVersion = "v1alpha1" |
.kind = "TrustedRootsConfig" |
.name = "internal-ca" |
.certificates = load_str("ca.pem")
' > trustedrootsconfig.yaml
```

You can use this config in two different ways: use kernel arguments for Talos 1.11 and older, and use embedded config for Talos 1.12 and newer.



  Create an output directory for installation media and config.
-The image used for keycloak is already loaded into Gitea and there are no files to stage before starting it so I'll run the following command to start it. **Replace KEYCLOAK\_HOSTNAME and GITEA\_HOSTNAME with your own hostnames**. + ```bash + mkdir _out + mv trustedrootsconfig.yaml _out/machine-config.yaml + echo "---" >> _out/machine-config.yaml + ``` -```bash -sudo docker run -it \ - -p 8080:8080 \ - -p 8443:8443 \ - -v $PWD/certs/fullchain.pem:/etc/x509/https/tls.crt \ - -v $PWD/certs/privkey.pem:/etc/x509/https/tls.key \ - -v $PWD/keycloak/data:/opt/keycloak/data \ - -e KEYCLOAK_ADMIN=admin \ - -e KEYCLOAK_ADMIN_PASSWORD=admin \ - -e KC_HOSTNAME=${KEYCLOAK_HOSTNAME} \ - -e KC_HTTPS_CERTIFICATE_FILE=/etc/x509/https/tls.crt \ - -e KC_HTTPS_CERTIFICATE_KEY_FILE=/etc/x509/https/tls.key \ - ${GITEA_HOSTNAME}:3000/keycloak/keycloak:21.1.1 \ - start -``` + Download machine join configuration from Omni. You can do this from the Omni web interface home page by clicking on the **Download Machine Join Config** button or if you have `omnictl` installed you can download it with -Once Keycloak is installed, you can reach it in your browser at `https://${KEYCLOAK\_HOSTNAME}:3000` + ```bash + omnictl jointoken machine-config >> _out/machine-config.yaml + ``` -#### Configuring Keycloak + Create a static hosts configuration for Omni and the registry. -For details on configuring Keycloak as a SAML Identity Provider to be used with Omni, follow this guide: [Configuring Keycloak SAML](../infrastructure-and-extensions/self-hosted/configure-keycloak-for-omni) + ```bash + yq --null-input ' + .apiVersion = "v1alpha1" | + .kind = "StaticHostConfig" | + .name = "'$(hostname -I | awk '{print $1}')'" | + .hostnames = ["'${OMNI_ENDPOINT%:*}'", "'${REGISTRY_ENDPOINT%:*}'"] +' > hosts-config.yaml + ``` + Append this configuration to the others. 
-### Omni + ```bash + echo "---" >> _out/machine-config.yaml + cat hosts-config.yaml >> _out/machine-config.yaml + ``` -With Keycloak and Gitea installed and configured, we're ready to start up Omni and start creating and managing clusters. + Embed both configurations and create an installation ISO with `imager`. -#### Install Omni + + {`docker run --rm -t \\\n -v "\${PWD}/_out:/out" \\\n --privileged \\\n \${REGISTRY_ENDPOINT}/siderolabs/imager:${release} \\\n iso \\\n --embedded-config-path=/out/machine-config.yaml`} + -To install Omni, first generate a UUID to pass to Omni when we start it. + + -```bash -export OMNI_ACCOUNT_UUID=$(uuidgen) -``` + Get the kernel arguments from Omni with `omnictl`. You can also copy them from the Omni web interface with the **Copy Kernel Parameters** button on the home page. -Next run the following command, replacing hostnames for Omni, Gitea, or Keycloak with your own. + ```bash + OMNI_KERNEL_ARGS=$(omnictl jointoken kernel-args) + ``` -```bash -sudo docker run \ - --net=host \ - --cap-add=NET_ADMIN \ - -v $PWD/etcd:/_out/etcd \ - -v $PWD/certs/fullchain.pem:/fullchain.pem \ - -v $PWD/certs/privkey.pem:/privkey.pem \ - -v $PWD/certs/omni.asc:/omni.asc \ - ${GITEA_HOSTNAME}:3000/siderolabs/omni:v0.12.0 \ - --account-id=${OMNI_ACCOUNT_UUID} \ - --name=omni \ - --cert=/fullchain.pem \ - --key=/privkey.pem \ - --siderolink-api-cert=/fullchain.pem \ - --siderolink-api-key=/privkey.pem \ - --private-key-source=file:///omni.asc \ - --event-sink-port=8091 \ - --bind-addr=0.0.0.0:443 \ - --siderolink-api-bind-addr=0.0.0.0:8090 \ - --k8s-proxy-bind-addr=0.0.0.0:8100 \ - --advertised-api-url=https://${OMNI_HOSTNAME}:443/ \ - --siderolink-api-advertised-url=https://${OMNI_HOSTNAME}:8090/ \ - --siderolink-wireguard-advertised-addr=${OMNI_HOSTNAME}:50180 \ - --advertised-kubernetes-proxy-url=https://${OMNI_HOSTNAME}:8100/ \ - --auth-auth0-enabled=false \ - --auth-saml-enabled \ - 
--talos-installer-registry=${GITEA_HOSTNAME}:3000/siderolabs/installer \ - --talos-imager-image=${GITEA_HOSTNAME}:3000/siderolabs/imager:v1.4.5 \ - --kubernetes-registry=${GITEA_HOSTNAME}:3000/siderolabs/kubelet \ - --auth-saml-url "https://${KEYCLOAK_HOSTNAME}:8443/realms/omni/protocol/saml/descriptor" -``` + Compress and base64 encode the configuration for a kernel argument. + + ```bash + TRUSTED_ROOT_CONFIG=$(cat trustedrootsconfig.yaml | zstd --compress --ultra -21 | base64 -w 0) + ``` -What's going on here: + Create an ISO with `imager` with both of the kernel arguments. -* `--auth-auth0-enabled=false` tells Omni not to use Auth0. -* `--auth-saml-enabled` enables SAML authentication. -* `--talos-installer-registry`, `--talos-imager-image` and `--kubernetes-registry` allow you to set the default images used by Omni to point to your local repository. -* `--auth-saml-url` is the URL we saved earlier in the configuration of Keycloak. - * `--auth-saml-metadata` may also be used if you would like to pass it as a file instead of a URL and can be used if using self-signed certificates for Keycloak. + + {`docker run --rm -t \\\n -v "\${PWD}/_out:/out" \\\n --privileged \\\n \${REGISTRY_ENDPOINT}/siderolabs/imager:${release} \\\n iso \\\n --extra-kernel-arg "talos.config.early=\$TRUSTED_ROOT_CONFIG $OMNI_KERNEL_ARGS"`} + -#### Creating a cluster + + -Guides on creating a cluster on Omni can be found here: +No matter which version of Talos you use you should have a Talos iso file in the `_out` directory. You can use this to boot a machine and it will connect to Omni and trust the self-signed CA certificate. -* [Creating an Omni cluster](../getting-started/create-a-cluster) +### Create cluster in Omni -Because we're working in an airgapped environment we will need the following values added to our cluster configs so they know where to pull images from. 
More information on the Talos MachineConfig.registries can be found [here](https://www.talos.dev/latest/talos-guides/discovery/).
+Guides on creating a cluster on Omni can be found at [creating an Omni cluster](../getting-started/create-a-cluster).
-> **NOTE:** In this example, cluster discovery is also disabled. You may also configure cluster discovery on your network. More information on the Discovery Service can be found [here](https://www.talos.dev/latest/talos-guides/discovery/)
+Because we're working in an airgapped environment, we will need the following values added to our cluster configs so they know where to pull images from. We also need to provide the CA certificate to the node so it will trust the certificate that signed the `omni.internal` endpoint.
+
+> **NOTE:** In this example, cluster discovery is also disabled. You may also configure cluster discovery via Omni. More information can be found in the [Discovery Service](../../talos/v1.12/configure-your-talos-cluster/system-configuration/discovery) documentation.
 ```yaml
 machine:
@@ -425,26 +448,20 @@ machine:
     mirrors:
       docker.io:
         endpoints:
-          - https://${GITEA_HOSTNAME}:3000
+          - https://${REGISTRY_ENDPOINT}
       gcr.io:
         endpoints:
-          - https://${GITEA_HOSTNAME}:3000
+          - https://${REGISTRY_ENDPOINT}
       ghcr.io:
         endpoints:
-          - https://${GITEA_HOSTNAME}:3000
+          - https://${REGISTRY_ENDPOINT}
       registry.k8s.io:
         endpoints:
-          - https://${GITEA_HOSTNAME}:3000/v2/registry-k8s-io-proxy
-          overridePath: true
+          - https://${REGISTRY_ENDPOINT}
   cluster:
     discovery:
       enabled: false
 ```
-Specifics on patching machines can be found here:
-
-* [Create a Patch for Cluster Machines](../omni-cluster-setup/create-a-patch-for-cluster-machines)
-
-### Closure
+The machine patch should be added to the cluster during cluster creation and should be applied cluster-wide.
-With Omni, Gitea, and Keycloak set up, you are ready to start managing and installing Talos clusters on your network!
The suite of applications installed in this tutorial is an example of how an airgapped environment can be set up to make the most out of the Kubernetes clusters on your network. Other container registries or authentication providers may also be used with a similar setup, but this suite was chosen to give you a starting point and an example of what your environment could look like. diff --git a/public/omni/infrastructure-and-extensions/self-hosted/deploy-image-factory-on-prem.mdx b/public/omni/infrastructure-and-extensions/self-hosted/deploy-image-factory-on-prem.mdx index bff2643..ab51823 100644 --- a/public/omni/infrastructure-and-extensions/self-hosted/deploy-image-factory-on-prem.mdx +++ b/public/omni/infrastructure-and-extensions/self-hosted/deploy-image-factory-on-prem.mdx @@ -2,6 +2,8 @@ title: Deploy Image Factory On-prem --- +import { release } from '/snippets/custom-variables.mdx'; + The [Image Factory](https://github.com/siderolabs/image-factory) is a way for you to dynamically create Talos Linux images. There is a public, hosted version of the Image Factory at [factory.talos.dev](https://factory.talos.dev) and it can also be run in your environment. The Image Factory is a critical component of [Omni](../../overview/what-is-omni) used to generate installation media and update Talos nodes, but Omni is not required in order to use the Image Factory. It is a web interface and API for the `imager` command, which is used to customize Talos from the command line. @@ -18,7 +20,7 @@ The Image Factory is a critical component of [Omni](../../overview/what-is-omni) If you already have a container registry available you can export your registry to an environment variable. ```shell -INTERNAL_REG= +REGISTRY_ENDPOINT=registry.internal:5000 ``` If you don't have a container registry available to push images to you can temporarily run one with the `registry` container.
@@ -26,12 +28,30 @@ If you don't have a container registry available to push images to you can tempo This example doesn't have persistent storage. ```bash -docker run -d -p 5000:5000 --name zot \ - ghcr.io/project-zot/zot:latest +docker run -d -p 5000:5000 --name registry registry:2 -INTERNAL_REG=127.0.0.1:5000 +REGISTRY_ENDPOINT=registry.internal:5000 ``` + + If you are using certificates for your temporary registry, you will need to mount the certificates into the container at run time. + + ```bash + docker run -d \ + --name registry \ + -p 5000:5000 \ + -v ${PWD}/server-key.pem:/certs/server-key.pem:ro \ + -v ${PWD}/server-chain.pem:/certs/server-chain.pem:ro \ + -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 \ + -e REGISTRY_HTTP_TLS_KEY=/certs/server-key.pem \ + -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/server-chain.pem \ + registry:2 + + REGISTRY_ENDPOINT=registry.internal:5000 + ``` + Make sure the CA certificate is in your system PKI path and `docker` has been restarted so it trusts the certificate. + + ### Image Cache Signing Key You need to create a Cache Signing Key to sign cached Talos image artifacts, ensuring they haven’t been tampered with before being served. @@ -65,7 +85,7 @@ docker run -p 8080:8080 -d \ -v $PWD/signing-key.key:/signing-key.key:ro \ ghcr.io/siderolabs/image-factory:v0.9.0 \ -cache-signing-key-path /signing-key.key \ - -schematic-service-repository $INTERNAL_REG/siderolabs/image-factory/schematic + -schematic-service-repository $REGISTRY_ENDPOINT/siderolabs/image-factory/schematic ``` If your system has SELinux enabled you will need to mount the signing key with the :Z option so the image factory has access to the file. @@ -75,7 +95,7 @@ This will run the image factory on your machine on port 8080 and pull container This will not allow you to create or publish custom system extensions. To do that you will need to run your own container registry with the necessary images.
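Before moving on, it can be worth a quick check that the registry is reachable and serving TLS. The sketch below uses the standard Docker Registry HTTP API v2 catalog endpoint; the `registry.internal` hostname and `ca.pem` file are the assumed values from this guide.

```shell
# Assumed registry endpoint from this guide; substitute your own.
REGISTRY_ENDPOINT=registry.internal:5000

# The Docker Registry HTTP API v2 lists repositories at /v2/_catalog;
# a JSON response here confirms the registry is up and that its TLS
# certificate validates against your CA.
CATALOG_URL="https://${REGISTRY_ENDPOINT}/v2/_catalog"
echo "$CATALOG_URL"

# On a host that can reach the registry:
# curl --cacert ca.pem "$CATALOG_URL"
```

An empty `{"repositories":[]}` response is expected before any images have been pushed.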
-## Image Factory with Custom Container Registry +## Image Factory with Internal Container Registry Running the image factory in an airgapped environment has more requirements than running in a connected mode. In addition to the requirements above, you will also need: @@ -92,35 +112,35 @@ Starting with Talos 1.12 you can get a list of images needed to seed the image f Get a list of Talos base images needed for the image factory with: -```shell -talosctl image talos-bundle v1.11.5 > images.txt -``` + +{`talosctl image talos-bundle ${release} > images.txt`} + -This will give you a list of all images and extensions needed to build Talos 1.11.5. +This will give you a list of all images and extensions for Talos {release}. You will need to repeat this command for each version of Talos you want to download images for. If you don't need specific extensions you can delete them from the images.txt file. -Replace the registry in each image with your registry or use `localhost:5000` if running a registry locally, and push the images. - +Push the images to your `$REGISTRY_ENDPOINT`: ```bash for SOURCE_IMAGE in $(cat images.txt) do IMAGE_WITHOUT_DIGEST=${SOURCE_IMAGE%%@*} - IMAGE_WITH_NEW_REG="${INTERNAL_REG}/${IMAGE_WITHOUT_DIGEST#*/}" + IMAGE_WITH_NEW_REG="${REGISTRY_ENDPOINT}/${IMAGE_WITHOUT_DIGEST#*/}" crane copy \ $SOURCE_IMAGE \ $IMAGE_WITH_NEW_REG done ``` -If you don't have an existing container registry or you _only_ want to download the container images you can download them to a folder with the command: + +If you don't have direct access to a container registry (e.g. an air-gapped environment) or you _only_ want to download the container images, you can download them to a folder with the command: ```bash cat images.txt \ - talosctl images cache-create \ - --layout flat \ - --image-cache-path ./image-cache \ - --images=- + | talosctl images cache-create \ + --layout flat \ + --image-cache-path ./image-cache \ + --images=- ``` You will then be able to move the `image-cache` folder to an air gapped machine and serve the images on a read-only, temporary container registry with: @@ -140,6 +160,8 @@ talosctl image cache-serve \ The image registry will be served on your local machine's IP address on port 5000 with self-signed certificates. You can use this to copy the images to an internal, permanent container registry. + + ### Sign Container Images The Image Factory verifies container image signatures when they are used. You will need to generate a cosign signing key and sign each container pushed to the registry. @@ -168,7 +190,7 @@ Sign all of the images using the images.txt file as a list. ```bash for IMAGE in $(cat images.txt) do - NEW_IMAGE="${INTERNAL_REG}/${IMAGE#*/}" + NEW_IMAGE="${REGISTRY_ENDPOINT}/${IMAGE#*/}" if [[ "$NEW_IMAGE" != *"@sha256:"* ]]; then NEW_IMAGE="${NEW_IMAGE}@$(crane digest $NEW_IMAGE)" fi @@ -179,14 +201,43 @@ for IMAGE in $(cat images.txt) --user $(id -u):$(id -g) \ ghcr.io/sigstore/cosign/cosign:v2.6.1 \ sign --key /keys/$KEY_FILE \ + --tlog-upload=false \ $NEW_IMAGE done ``` + +If your registry has a self-signed certificate authority, you will need to mount your CA certificate into the cosign container.
-### Run Image Factory +```bash +for IMAGE in $(cat images.txt) + do + NEW_IMAGE="${REGISTRY_ENDPOINT}/${IMAGE#*/}" + if [[ "$NEW_IMAGE" != *"@sha256:"* ]]; then + NEW_IMAGE="${NEW_IMAGE}@$(crane digest $NEW_IMAGE)" + fi + docker run --rm -it --net=host \ + -v $PWD:/keys -w /keys \ + -v "$PWD/ca.pem:/etc/ssl/certs/ca-certificates.crt:ro" \ + -e COSIGN_PASSWORD="" \ + -e COSIGN_YES=true \ + --user $(id -u):$(id -g) \ + ghcr.io/sigstore/cosign/cosign:v2.6.1 \ + sign --key /keys/$KEY_FILE \ + --tlog-upload=false \ + $NEW_IMAGE +done +``` + + +## Run Image Factory With a populated container registry and signed images, you are ready to run the Image Factory. +Set an internal factory endpoint. +```bash +FACTORY_URL=https://factory.internal:8080 +``` + If the container registry and image factory run in containers on the same machine, `localhost` won't be reachable unless you run each container with `--net=host`, which is not recommended. An alternative approach would be to use [private Docker networking](https://docs.docker.com/engine/network/) to bridge the containers.
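A minimal sketch of that bridging approach, assuming the container names `registry` and `image-factory` used in this guide and a hypothetical network name `factory-net`:

```shell
# Hypothetical network name; the container names match the docker run
# commands earlier in this guide.
NETWORK_NAME=factory-net

# Create a user-defined bridge network and attach both containers.
# "|| true" tolerates re-runs or containers that are not started yet.
docker network create "$NETWORK_NAME" 2>/dev/null || true
docker network connect "$NETWORK_NAME" registry 2>/dev/null || true
docker network connect "$NETWORK_NAME" image-factory 2>/dev/null || true

# Containers on a user-defined network resolve each other by name, so
# the factory can point at the registry container directly:
REGISTRY_ENDPOINT=registry:5000
echo "$REGISTRY_ENDPOINT"
```

With the containers bridged this way, the registry no longer needs to publish port 5000 on the host for the factory's benefit.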
```bash @@ -195,16 +246,50 @@ docker run -p 8080:8080 -d \ -v $PWD/signing-key.key:/signing-key.key:ro \ -v $PWD/cosign.pub:/cosign.pub:ro \ ghcr.io/siderolabs/image-factory:v0.9.0 \ - -image-registry $INTERNAL_REG \ - -installer-internal-repository $INTERNAL_REG/siderolabs \ - -installer-internal-repository $INTERNAL_REG/siderolabs \ - -schematic-service-repository $INTERNAL_REG/siderolabs/image-factory/schematic \ - -cache-repository $INTERNAL_REG/siderolabs/cache \ + -external-url $FACTORY_URL \ + -image-registry $REGISTRY_ENDPOINT \ + -installer-internal-repository $REGISTRY_ENDPOINT/siderolabs \ + -schematic-service-repository $REGISTRY_ENDPOINT/siderolabs/image-factory/schematic \ + -cache-repository $REGISTRY_ENDPOINT/siderolabs/cache \ -cache-signing-key-path /signing-key.key \ -container-signature-pubkey /cosign.pub \ -cache-cdn-enabled=false \ -cache-s3-enabled=false ``` + + +To run the image factory with a self-signed CA certificate, you need to mount the certificate and key into the container at run time. + +Set your internal factory endpoint. This is the name your certificate should be issued for, and it must be resolvable via DNS. +```bash +FACTORY_URL=https://factory.internal:8080 +``` +Run the image factory container.
```bash +docker run -p 8080:8080 -d \ + --name image-factory \ + -v $PWD/signing-key.key:/signing-key.key:ro \ + -v $PWD/cosign.pub:/cosign.pub:ro \ + -v $PWD/server-chain.pem:/certs/server-chain.pem:ro \ + -v $PWD/server-key.pem:/certs/server-key.pem:ro \ + -v $PWD/ca.pem:/etc/ssl/certs/ca-certificates.crt:ro \ + ghcr.io/siderolabs/image-factory:v0.9.0 \ + -external-url $FACTORY_URL \ + -image-registry $REGISTRY_ENDPOINT \ + -installer-internal-repository $REGISTRY_ENDPOINT/siderolabs \ + -schematic-service-repository $REGISTRY_ENDPOINT/siderolabs/image-factory/schematic \ + -cache-repository $REGISTRY_ENDPOINT/siderolabs/cache \ + -cache-signing-key-path /signing-key.key \ + -container-signature-pubkey /cosign.pub \ + -cache-cdn-enabled=false \ + -cache-s3-enabled=false \ + -http-key-file=/certs/server-key.pem \ + -http-cert-file=/certs/server-chain.pem +``` + + ## Insecure If your image factory does not have certificates, or its certificates are not trusted by the clients connecting to it, you should add the following flags.