From 7d00acebd15b70fd848c191461142a7dd15eb71b Mon Sep 17 00:00:00 2001 From: Kenichiro Hiraiwa Date: Thu, 19 Jun 2025 18:34:35 -0700 Subject: [PATCH] feat: support EKS Auto Mode and Karpenter --- .../python/containers/about-multiservices.md | 10 +- website/docs/python/containers/upload-ecr.md | 2 +- website/docs/python/eks/about-deploy.md | 8 +- website/docs/python/eks/access-app.md | 14 +- .../python/eks/aws-otel-instrumentation.md | 111 +++-- .../docs/python/eks/aws-rds-postgresql-lab.md | 6 +- .../python/eks/aws-secrets-manager-lab.md | 28 +- .../python/eks/{Cleanup.md => cleanup.md} | 33 +- website/docs/python/eks/create-cluster.md | 60 +-- website/docs/python/eks/deploy-app.md | 70 ++- website/docs/python/eks/index.md | 4 +- website/docs/python/eks/manage-contexts.md | 17 +- .../docs/python/eks/scale-using-karpenter.md | 442 ++++++++++++++++++ .../docs/python/eks/setup-loadbalancing.md | 72 ++- website/docs/python/eks/setup-storage.md | 210 +-------- .../python/eks/view-kubernetes-resources.md | 83 ++-- .../python/introduction/environment-setup.md | 5 +- website/package-lock.json | 80 +++- website/package.json | 3 +- website/yarn.lock | 393 ++++++++++------ 20 files changed, 1041 insertions(+), 610 deletions(-) rename website/docs/python/eks/{Cleanup.md => cleanup.md} (50%) create mode 100644 website/docs/python/eks/scale-using-karpenter.md diff --git a/website/docs/python/containers/about-multiservices.md b/website/docs/python/containers/about-multiservices.md index f9b26a7..ed89f63 100644 --- a/website/docs/python/containers/about-multiservices.md +++ b/website/docs/python/containers/about-multiservices.md @@ -17,7 +17,7 @@ The `builder` uses a Python base image, installs system dependencies, copies the ```dockerfile # Use an official Python runtime as a parent image -FROM python:3.9-slim-buster as builder +FROM python:3.11-slim-buster as builder # Set environment variables ENV PYTHONDONTWRITEBYTECODE=1 \ @@ -37,7 +37,7 @@ The `runner` starts with a Python 
base image, installs system dependencies like

```dockerfile
# Use an official Python runtime as a parent image
-FROM python:3.9-slim-buster as builder
+FROM python:3.11-slim-buster as builder

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1 \
@@ -50,7 +50,7 @@ WORKDIR /server
COPY ./server/requirements.txt /server/
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /server/wheels -r requirements.txt

-FROM python:3.9-slim-buster as runner
+FROM python:3.11-slim-buster as runner

WORKDIR /server

@@ -101,12 +101,12 @@ The **db** service uses the official PostgreSQL image available on DockerHub. It

```
db:
-  image: postgres:13
+  image: postgres:15
  env_file:
  - .env
  volumes:
  - ./server/db/init.sh:/docker-entrypoint-initdb.d/init.sh
  - postgres_data:/var/lib/postgresql/data
  networks:
  - webnet
```
diff --git a/website/docs/python/containers/upload-ecr.md b/website/docs/python/containers/upload-ecr.md
index b0c25fb..e70c2d8 100644
--- a/website/docs/python/containers/upload-ecr.md
+++ b/website/docs/python/containers/upload-ecr.md
@@ -21,7 +21,7 @@ This lab shows the process of pushing Docker images to Amazon ECR using the Fast

Create a new private Amazon ECR repository:

```bash
-aws ecr create-repository --repository-name fastapi-microservices
+aws --region ${AWS_REGION} ecr create-repository --repository-name fastapi-microservices
```

## 2. Logging into Amazon ECR

diff --git a/website/docs/python/eks/about-deploy.md b/website/docs/python/eks/about-deploy.md
index 648641a..9b9dabe 100644
--- a/website/docs/python/eks/about-deploy.md
+++ b/website/docs/python/eks/about-deploy.md
@@ -13,12 +13,12 @@ The **[deploy-app-python.yaml](https://github.com/aws-samples/python-fastapi-dem

- **Service:** The service in the manifest exposes the FastAPI application running on the EKS cluster to the external world. It routes incoming traffic on port 80 to the FastAPI application's listening port (8000).
The service uses a NodePort type which automatically allocates a port from the NodePort range (default: 30000-32767) and proxies traffic on each node from that port into the Service. This allows for the Service to be externally accessible to Application and Network Load Balancers.
- **Deployment:** The deployment in the manifest dictates how the FastAPI application should be deployed onto the EKS cluster. It specifies the number of replicas (i.e. the number of application instances that should be running), the container image to be used, and the necessary environment variables from a secret. It sets resource requests and limits for the containers to ensure the application gets the necessary resources.
-- **Ingress:** The Ingress in the manifest provides HTTP route management for services within the EKS cluster. It routes incoming traffic to the FastAPI service based on the request path. In this scenario, all requests are directed to the FastAPI service. The Ingress configuration in this file is specifically set up for AWS using the AWS Load Balancer Controller, configuring an Application Load Balancer (ALB) that is internet-facing and employs an IP-based target type.
+- **Ingress:** The Ingress in the manifest provides HTTP route management for services within the EKS cluster. It routes incoming traffic to the FastAPI service based on the request path. In this scenario, all requests are directed to the FastAPI service. When using Managed node groups, you'll need to use the AWS Load Balancer Controller, which configures an Application Load Balancer (ALB) that is internet-facing and employs an IP-based target type. On the other hand, when using EKS Auto Mode, the ALB is provisioned and managed by EKS Auto Mode itself, eliminating the need for separate installation and management of the AWS Load Balancer Controller.
-## PostgreSQL - StatefulSet, Service, StorageClass, and VolumeClaimTemplates
+
+## PostgreSQL - StatefulSet, Service, and VolumeClaimTemplates

-The **[deploy-db-python.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/eks/deploy-db-python.yaml)** file is used for the deployment of the PostgreSQL database and consists of four primary resources:
+The **[deploy-db-python.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/eks/deploy-db-python.yaml)** file is used for the deployment of the PostgreSQL database and consists of three primary resources:

-- **StorageClass**: The StorageClass in the manifest is specific to AWS EBS (Elastic Block Store) and allows dynamic provisioning of storage for PersistentVolumeClaims. The EBS provisioner enables PersistentVolumes to have a stable and resilient storage solution for our PostgreSQL database.
- **Service:** The Service in the manifest exposes the PostgreSQL database within the EKS cluster, facilitating the FastAPI application to access it. The Service listens on port 5432, which is the default PostgreSQL port. The Service is headless (as indicated by `clusterIP: None`), meaning it enables direct access to the Pods in the StatefulSet rather than load balancing across them.
- **StatefulSet:** The StatefulSet in the manifest manages the PostgreSQL database deployment. A StatefulSet is used instead of a Deployment as it ensures each Pod receives a stable network identity and stable storage, which is essential for databases. The PostgreSQL container uses a Secret to obtain its environment variables and mounts a volume for persistent storage.
-- **VolumeClaimTemplates**: The volumeClaimTemplates within the StatefulSet definition request a specific storage amount for the PostgreSQL database. It requests a storage capacity of 1Gi with [ReadWriteOnce](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) access mode, and it uses the AWS EBS StorageClass defined in this manifest. This ensures that each Pod within the StatefulSet gets its own PersistentVolume, guaranteeing the database data remains persistent across pod restarts, and the data is accessible from any node in the EKS cluster.
\ No newline at end of file
+- **VolumeClaimTemplates**: The volumeClaimTemplates within the StatefulSet definition request a specific storage amount for the PostgreSQL database. It requests a storage capacity of 1Gi with [ReadWriteOnce](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) access mode, and it uses the AWS EBS StorageClass configured for the cluster. This ensures that each Pod within the StatefulSet gets its own PersistentVolume, guaranteeing the database data remains persistent across pod restarts, and the data is accessible from any node in the EKS cluster.
diff --git a/website/docs/python/eks/access-app.md b/website/docs/python/eks/access-app.md
index 0d857cf..3ea7a5a 100644
--- a/website/docs/python/eks/access-app.md
+++ b/website/docs/python/eks/access-app.md
@@ -6,7 +6,7 @@ import GetEnvVars from '../../../src/includes/get-env-vars.md';

## Objective

-This guide aims to guide you through the process of accessing your microservices deployed onto EKS cluster. By using ingress object we were able to expose FastAPI service through Application Loadbalancer. The Management of Application Loadbalancer is done by AWS Loadbalancer controller through ingress manifest.
+This guide walks you through the process of accessing your microservices deployed on the EKS cluster. Using an Ingress object, we exposed the FastAPI service through an Application Load Balancer (ALB). The management of the ALB through the Ingress manifest depends on your cluster configuration: for EKS Auto Mode clusters, the ALB is managed by EKS Auto Mode, while for clusters using Managed node groups, the ALB is managed by the AWS Load Balancer Controller.
## Prerequisites @@ -23,6 +23,14 @@ Before we try to access our application, we need to ensure that all of our pods kubectl get pods -n my-cool-app ``` +The expected output should look like this: + +``` bash +NAME READY STATUS RESTARTS AGE +fastapi-deployment-9776bd4c7-t8xkm 1/1 Running 0 99s +fastapi-postgres-0 1/1 Running 0 8m28s +``` + All your pods should be in the "Running" state. If they're not, you will need to troubleshoot the deployment before proceeding. ## 2. Getting the ALB URL @@ -42,7 +50,7 @@ fastapi-ingress * k8s-mycoolap-fastapii-8114c40e9c-860636650.us ## 3. Accessing the FastAPI Service -In the previous lab exercise, we used the AWS Load Balancer Controller (LBC) to dynamically provision an [Application Load Balancer (ALB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html). Note that it takes several minutes or more before the ALB has finished provisioning. +In the previous lab exercise, we used EKS Auto Mode or AWS Load Balancer Controller (LBC) to dynamically provision an [Application Load Balancer (ALB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html). Note that it takes several minutes or more before the ALB has finished provisioning. 1. **Check the status**: Open the [Load Balancers](https://console.aws.amazon.com/ec2/#LoadBalancers:) page on the Amazon EC2 console and select the AWS Region in which your Amazon EKS cluster resides. Next, select your ALB name, such as `k8s-mycoolap-fastapii-8004c40e9c`. 2. **Open the app**: Open a new tab in your browser paste the ALB link, such as `k8s-mycoolap-fastapii-8114c40e9c-860636650.us-west-2.elb.amazonaws.com`. You should see the welcome page: @@ -57,4 +65,4 @@ To confirm that everything is functioning as expected, attempt to add a book by ## Conclusion -This guide has walked you through the steps necessary to access the FastAPI service deployed on an EKS cluster. 
We've shown you how to check the status of your pods and verify your setup by interacting with the FastAPI service. \ No newline at end of file +This guide has walked you through the steps necessary to access the FastAPI service deployed on an EKS cluster. We've shown you how to check the status of your pods and verify your setup by interacting with the FastAPI service. diff --git a/website/docs/python/eks/aws-otel-instrumentation.md b/website/docs/python/eks/aws-otel-instrumentation.md index e2454b1..72aa5a0 100644 --- a/website/docs/python/eks/aws-otel-instrumentation.md +++ b/website/docs/python/eks/aws-otel-instrumentation.md @@ -16,7 +16,7 @@ We'll go through the following steps as part of the instrumentation process: 2. Test the instrumentation locally 3. Deploy the ADOT add-on in our EKS cluster 4. Deploy the application with the ADOT Collector Side Car Container -5. Visualize request tracing in the AWS X-Ray console +5. Visualize request tracing in the Amazon CloudWatch console ## Prerequisites @@ -25,29 +25,19 @@ We'll go through the following steps as part of the instrumentation process: ## 1. Prepare Your Environment -In the `python-fastapi-demo-docker` project where your [environment variables](../../python/introduction/environment-setup) are sourced, check-out the `aws-opentelemetry` branch using the following command: - -:::tip -You may receive an error like `error: Your local changes to the following files would be overwritten by checkout` when running the below command if there are changes made to any of the source files. To fix this, first run the command `git stash` and then run the below command. After the checkout is finished, run the command `git stash pop` to reapply the changes. 
-::: - -``` bash -git checkout aws-opentelemetry -``` - - -Next, open the [eks/create-adot-add-on-python-fargate.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/aws-opentelemetry/eks/create-adot-add-on-python-fargate.yaml) file and replace the sample **region** with the same region as your EKS cluster. For example: + +Next, open the [eks/opentelemetry/create-adot-add-on-python-automode.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/eks/opentelemetry/create-adot-add-on-python-automode.yaml) file and replace the sample **region** with the same region as your EKS cluster. For example: ```bash apiVersion: eksctl.io/v1alpha5 kind: ClusterConfig metadata: - name: fargate-quickstart + name: automode-quickstart region: us-west-2 ``` - Next, open the [eks/create-adot-add-on-python.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/aws-opentelemetry/eks/create-adot-add-on-python.yaml) file and replace the sample **region** with the same region as your EKS cluster. For example: + Next, open the [eks/opentelemetry/create-adot-add-on-python.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/eks/opentelemetry/create-adot-add-on-python.yaml) file and replace the sample **region** with the same region as your EKS cluster. For example: ```bash apiVersion: eksctl.io/v1alpha5 @@ -64,10 +54,10 @@ Next, open the [eks/create-adot-add-on-python-fargate.yaml](https://github.com/a Otel Instrumentation requires 4 primary steps to get you up and running with OpenTelemetry. Notably, each step is implemented in separate files, including: -* **Global Tracer Provider**: This is the factory for the tracer. It's responsible for creating and managing tracer objects, which in turn helps you to create spans. You'll find the Global Tracer Provider in the [app/tracing.py](https://github.com/aws-samples/python-fastapi-demo-docker/blob/aws-opentelemetry/server/app/tracing.py) file. 
-* **Processor**: This defines the method of sending the created elements (spans) onwards to the subsequent parts of the system for further processing and exporting. You'll find the configuration for the Processor in the [app/tracing.py](https://github.com/aws-samples/python-fastapi-demo-docker/blob/aws-opentelemetry/server/app/tracing.py) file. -* **Trace Exporter**: This component takes care of sending traces to the OTEL Exporter Endpoint and acts as the bridge. You'll find the Trace Exporter in the [app/tracing.py](https://github.com/aws-samples/python-fastapi-demo-docker/blob/aws-opentelemetry/server/app/tracing.py) file. -* **Instrumenting the Application and Database**: For the FastAPI application, you'll find it's instrumented in the [app/main.py](https://github.com/aws-samples/python-fastapi-demo-docker/blob/aws-opentelemetry/server/app/main.py#L11) file and the PostgreSQL/SQLAlchemy database in the [app/connect.py](https://github.com/aws-samples/python-fastapi-demo-docker/blob/aws-opentelemetry/server/app/connect.py#L47) file. +* **Global Tracer Provider**: This is the factory for the tracer. It's responsible for creating and managing tracer objects, which in turn helps you to create spans. You'll find the Global Tracer Provider in the [app/tracing.py](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/server/opentelemetry/app/tracing.py) file. +* **Processor**: This defines the method of sending the created elements (spans) onwards to the subsequent parts of the system for further processing and exporting. You'll find the configuration for the Processor in the [app/tracing.py](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/server/opentelemetry/app/tracing.py) file. +* **Trace Exporter**: This component takes care of sending traces to the OTEL Exporter Endpoint and acts as the bridge. 
You'll find the Trace Exporter in the [app/tracing.py](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/server/opentelemetry/app/tracing.py) file. +* **Instrumenting the Application and Database**: For the FastAPI application, you'll find it's instrumented in the [app/main.py](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/server/opentelemetry/app/main.py#L11) file and the PostgreSQL/SQLAlchemy database in the [app/connect.py](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/server/opentelemetry/app/connect.py#L47) file. ## 3. Testing the Instrumentation Locally @@ -97,14 +87,14 @@ To test the tracing locally, build the instrumented application code locally usi ```bash -docker-compose build +docker-compose -f docker-compose-opentelemetry.yml build ``` ```bash -finch compose build +finch compose -f docker-compose-opentelemetry.yml build ``` @@ -126,14 +116,14 @@ Start the application using the following command: ```bash -docker-compose up +docker-compose -f docker-compose-opentelemetry.yml up ``` ```bash -finch compose up +finch compose -f docker-compose-opentelemetry.yml up ``` @@ -164,7 +154,7 @@ Navigate to [http://localhost:8000](http://localhost:8000/) in your browser to c Interact with the application by adding a couple of books. -Then check for traces by opening the X-Ray Tracing page and selecting your AWS Region on the Amazon CloudWatch Console. Click on the "Trace Id" and the "TraceMap" to view traces. For example: +Then check for traces by opening the Application Signals (APM) page and selecting your AWS Region on the Amazon CloudWatch Console. Click on the "Traces" and the "Trace Map" to view traces. For example: ![Trace Map](./images/Local-tracing.png) @@ -172,6 +162,13 @@ Scroll down to view details of the requests in the trace: ![Trace Details](./images/Segment-Details.png) +:::note + +Note that the region set in the '.aws' directory has the highest priority in this test. + +::: + + ## 4. 
Build and Push the Multi-Architecture Container After running a few requests, we'll now push the image to Amazon ECR. @@ -236,7 +233,7 @@ Display your Amazon ECR URI: echo ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/fastapi-microservices:${IMAGE_VERSION} ``` -Replace the sample image in the [eks/deploy-app-with-adot-sidecar.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/aws-opentelemetry/eks/deploy-app-with-adot-sidecar.yaml#L35) file with your Amazon ECR URI. For example: +Replace the sample image in the [eks/opentelemetry/deploy-app-with-adot-sidecar.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/eks/opentelemetry/deploy-app-with-adot-sidecar.yaml#L35) file with your Amazon ECR URI. For example: ```bash image: 012345678901.dkr.ecr.us-west-1.amazonaws.com/fastapi-microservices:1.0 @@ -248,7 +245,7 @@ The [`cert-manager`](https://cert-manager.io/docs/) add-on is required for deplo Deploy the `cert-manager` add-on in your cluster using the following command: ``` bash -kubectl apply -f eks/cert-manager.yaml +kubectl apply -f eks/opentelemetry/cert-manager.yaml ``` It would take a minute or two for the pods to deploy successfully. 
@@ -269,17 +266,23 @@ cert-manager-webhook-021345abcd-ef678 1/1 Running 0 12s Run the following command to install the EKS ADOT add-on: - + ``` bash - eksctl create addon -f eks/create-adot-add-on-python-fargate.yaml + eksctl create addon -f eks/opentelemetry/create-adot-add-on-python-automode.yaml + ``` + + The expected output should look like this: + + ```bash + 2025-06-20 20:12:09 [ℹ] addon "adot" active ``` ``` bash - eksctl create addon -f eks/create-adot-add-on-python.yaml + eksctl create addon -f eks/opentelemetry/create-adot-add-on-python.yaml ``` @@ -289,12 +292,18 @@ Run the following command to install the EKS ADOT add-on: Now we'll create the OpenTelemetryCollector CRD object, which contains the configuration required for deploying the OTel Collector as a sidecar container. -First, replace the sample **region** in the [eks/opentelemetrycollector.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/aws-opentelemetry/eks/opentelemetrycollector.yaml#L34) with the same region as your EKS cluster. +First, replace the sample **region** in the [eks/opentelemetry/opentelemetrycollector.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/eks/opentelemetry/opentelemetrycollector.yaml#L34) with the same region as your EKS cluster. Next, run the following command to create the CRD: ``` bash -kubectl apply -f eks/opentelemetrycollector.yaml +kubectl apply -f eks/opentelemetry/opentelemetrycollector.yaml +``` + +The expected output should look like this: + +``` bash +opentelemetrycollector.opentelemetry.io/x-ray-collector-config created ``` ### Verify the ADOT Collector IAM Roles Service Account (IRSA) @@ -321,6 +330,8 @@ Events: ## 6. Deploy the Application with the ADOT Collector Side Car Container + + To deploy the add-on with the ADOT side car container, we'll use the `sidecar.opentelemetry.io/inject: "true"` annotation in our app deployment's pod specification. First, let's do a little clean up to remove redundant resources. 
We'll be redeploying these resources with a few additional modifications. Run the following command to delete FastAPI resources: @@ -329,16 +340,25 @@ First, let's do a little clean up to remove redundant resources. We'll be redepl kubectl delete -f eks/deploy-app-python.yaml ``` +The expected output should look like this: + +``` +service "fastapi-service" deleted +deployment.apps "fastapi-deployment" deleted +ingress.networking.k8s.io "fastapi-ingress" deleted +``` + :::note Make sure **not** to take any additional steps to delete `fastapi-secret` secret or `fastapi-postgres` database resources you created in previous lab exercises, as these are required for subsequent steps. ::: + Deploy the new FastAPI application resources using the following command: ``` bash -kubectl apply -f eks/deploy-app-with-adot-sidecar.yaml +kubectl apply -f eks/opentelemetry/deploy-app-with-adot-sidecar.yaml ``` The expected output should look like this: @@ -368,11 +388,12 @@ Open a web browser and enter the `ADDRESS` from the previous step to access the Then, interact with the application by adding new books. -Last, check for traces by opening the X-Ray Tracing page and selecting your AWS Region on the Amazon CloudWatch Console. Check for traces in AWS Cloudwatch Console -> X-Ray -> Traces. For example: +Last, check for traces by opening the Application Signals (APM) page and selecting your AWS Region on the Amazon CloudWatch Console. Click on the "Traces" and the "Trace Map" to view traces. For example: + ![Trace Map](./images/k8-app-trace.png) -We've used 'resourcedetection' in [eks/opentelemetrycollector.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/aws-opentelemetry/eks/opentelemetrycollector.yaml#L22-L25) in the OpenTelemetryCollector CRD to enrich the trace with Kubernetes-specific metadata. You can see the metadata in raw traces by selecting the application. 
For example: +We've used 'resourcedetection' in [eks/opentelemetry/opentelemetrycollector.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/eks/opentelemetry/opentelemetrycollector.yaml#L22-L25) in the OpenTelemetryCollector CRD to enrich the trace with Kubernetes-specific metadata. You can see the metadata in raw traces by selecting the application. For example: ![Raw Trace](./images/raw-trace-snippet.png) @@ -385,16 +406,15 @@ You can filter for traces by creating queries in a time duration. To learn more, To clean up all resources created in this lab exercise and the workshop up to this point, run the following commands. - + ``` bash cd /home/ec2-user/environment/python-fastapi-demo-docker -kubectl delete -f eks/deploy-db-python-fargate.yaml -kubectl delete -f eks/deploy-app-with-adot-sidecar.yaml -kubectl delete -f eks/opentelemetrycollector.yaml -eksctl delete iamserviceaccount --name adot-collector --namespace my-cool-app --cluster fargate-quickstart -eksctl delete addon -f eks/create-adot-add-on-python-fargate.yaml -kubectl delete -f eks/cert-manager.yaml -kubectl delete pdb coredns ebs-csi-controller -n kube-system +kubectl delete -f eks/deploy-db-python.yaml +kubectl delete -f eks/opentelemetry/deploy-app-with-adot-sidecar.yaml +kubectl delete -f eks/opentelemetry/opentelemetrycollector.yaml +eksctl delete iamserviceaccount --name adot-collector --namespace my-cool-app --cluster automode-quickstart +eksctl delete addon --name adot --cluster automode-quickstart +kubectl delete -f eks/opentelemetry/cert-manager.yaml ``` @@ -402,12 +422,11 @@ kubectl delete pdb coredns ebs-csi-controller -n kube-system ``` bash cd /home/ec2-user/environment/python-fastapi-demo-docker kubectl delete -f eks/deploy-db-python.yaml -kubectl delete -f eks/deploy-app-with-adot-sidecar.yaml -kubectl delete -f eks/opentelemetrycollector.yaml +kubectl delete -f eks/opentelemetry/deploy-app-with-adot-sidecar.yaml +kubectl delete -f 
eks/opentelemetry/opentelemetrycollector.yaml eksctl delete iamserviceaccount --name adot-collector --namespace my-cool-app --cluster managednode-quickstart eksctl delete addon --name adot --cluster managednode-quickstart -kubectl delete -f eks/cert-manager.yaml -kubectl delete pdb coredns ebs-csi-controller -n kube-system +kubectl delete -f eks/opentelemetry/cert-manager.yaml ``` diff --git a/website/docs/python/eks/aws-rds-postgresql-lab.md b/website/docs/python/eks/aws-rds-postgresql-lab.md index c6be0fc..454799d 100644 --- a/website/docs/python/eks/aws-rds-postgresql-lab.md +++ b/website/docs/python/eks/aws-rds-postgresql-lab.md @@ -56,13 +56,13 @@ To create the CloudFormation stack please run correct command below based on whe - + ```bash aws cloudformation create-stack --region $AWS_REGION \ --stack-name eksdevworkshop-rds-cluster \ --template-body file://eks/rds-serverless-postgres.json \ - --parameters ParameterKey=ClusterType,ParameterValue=Fargate + --parameters ParameterKey=ClusterType,ParameterValue=AutoMode ``` @@ -170,4 +170,4 @@ To clean up the resources provisioned during this lab we will just need to delet aws cloudformation delete-stack --region $AWS_REGION --stack-name eksdevworkshop-rds-cluster kubectl delete -f eks/deploy-app-python.yaml kubectl delete secret fastapi-secret -n my-cool-app -``` \ No newline at end of file +``` diff --git a/website/docs/python/eks/aws-secrets-manager-lab.md b/website/docs/python/eks/aws-secrets-manager-lab.md index c309fb2..7126f92 100644 --- a/website/docs/python/eks/aws-secrets-manager-lab.md +++ b/website/docs/python/eks/aws-secrets-manager-lab.md @@ -146,10 +146,10 @@ POLICY_ARN=$(aws --region $AWS_REGION --query Policy.Arn --output text iam creat Lastly, we'll need to create our IAM Role for the FastAPI application using this policy and create our `serviceaccount` for IRSA to provide credentials for this role to our application. 
- + ```bash -eksctl create iamserviceaccount --name fastapi-deployment-sa --region $AWS_REGION --cluster fargate-quickstart --attach-policy-arn $POLICY_ARN --namespace my-cool-app --approve --override-existing-serviceaccounts +eksctl create iamserviceaccount --name fastapi-deployment-sa --region $AWS_REGION --cluster automode-quickstart --attach-policy-arn $POLICY_ARN --namespace my-cool-app --approve --override-existing-serviceaccounts ``` @@ -166,17 +166,7 @@ After a minute or two of CloudFormation templates running we should see success ## 3. Build and Deploy the New Version of the Application -We're in the home stretch now and all that's left is to build and deploy our application. To do this, let's first switch to the Git branch `aws-secrets-manager-lab` which has our updated application code and deployment manifest. - -:::tip -You may receive an error like `error: Your local changes to the following files would be overwritten by checkout` when running the below command if there are changes made to any of the source files. To fix this, first run the command `git stash` and then run the below command. After the checkout is finished, run the command `git stash pop` to reapply the changes. -::: - -``` bash -git checkout aws-secrets-manager-lab -``` - -Next we'll update our `IMAGE_VERSION` environment variable to use a new version: +We'll update our `IMAGE_VERSION` environment variable to use a new version: :::info It's important to use a new image version as cached images on worker nodes can result in unexpected behaviors. @@ -198,7 +188,7 @@ Now, we can run the following commands to create our multi-architecture build en docker buildx create --name webBuilder docker buildx use webBuilder docker buildx inspect --bootstrap -docker buildx build --platform linux/amd64,linux/arm64 -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/fastapi-microservices:$IMAGE_VERSION . 
--push +docker buildx build -f Dockerfile.aws-secrets --platform linux/amd64,linux/arm64 -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/fastapi-microservices:$IMAGE_VERSION . --push ``` With our new image successfully uploaded to ECR we can now update our deployment file `eks/deploy-app-with-secrets-manager.yaml` to use our new image. First, retrieve your Amazon ECR repository URI using the following command: @@ -272,12 +262,13 @@ INFO: 192.168.25.22:30050 - "GET / HTTP/1.1" 200 OK To delete the resources we created during this lab run the following commands: - + ```bash -kubectl delete -f eks/deploy-app-with-secrets-manager.yaml +kubectl delete -f eks/aws-secrets/deploy-app-with-secrets-manager.yaml +kubectl delete -f eks/deploy-db-python.yaml aws secretsmanager delete-secret --region $AWS_REGION --force-delete-without-recovery --secret-id eksdevworkshop-db-url -eksctl delete iamserviceaccount --name fastapi-deployment-sa --region $AWS_REGION --cluster fargate-quickstart --namespace my-cool-app +eksctl delete iamserviceaccount --name fastapi-deployment-sa --region $AWS_REGION --cluster automode-quickstart --namespace my-cool-app aws iam delete-policy --region $AWS_REGION --policy-arn $POLICY_ARN ``` @@ -286,7 +277,8 @@ aws iam delete-policy --region $AWS_REGION --policy-arn $POLICY_ARN ```bash -kubectl delete -f eks/deploy-app-with-secrets-manager.yaml +kubectl delete -f eks/aws-secrets/deploy-app-with-secrets-manager.yaml +kubectl delete -f eks/deploy-db-python.yaml aws secretsmanager delete-secret --region $AWS_REGION --force-delete-without-recovery --secret-id eksdevworkshop-db-url eksctl delete iamserviceaccount --name fastapi-deployment-sa --region $AWS_REGION --cluster managednode-quickstart --namespace my-cool-app aws iam delete-policy --region $AWS_REGION --policy-arn $POLICY_ARN diff --git a/website/docs/python/eks/Cleanup.md b/website/docs/python/eks/cleanup.md similarity index 50% rename from website/docs/python/eks/Cleanup.md rename to 
website/docs/python/eks/cleanup.md index ecda918..7b126d8 100644 --- a/website/docs/python/eks/Cleanup.md +++ b/website/docs/python/eks/cleanup.md @@ -1,6 +1,6 @@ --- title: Cleaning Up Resources -sidebar_position: 14 +sidebar_position: 15 --- import Tabs from '@theme/Tabs'; @@ -19,14 +19,8 @@ This guide shows you how to delete all the resources you created in this worksho To avoid incurring future charges, you should delete the resources you created during this workshop. - + -1. Retrieve the EFS ID (e.g., `fs-040f4681791902287`) you configured in the [previous lab exercise](setup-storage.md), then replace the sample value in [eks/efs-pv.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/eks/efs-pv.yaml) with your EFS ID. -```bash -echo $file_system_id -``` - -2. Run the following commands to delete all resources created in this workshop. ```bash # Delete the ECR repository aws ecr delete-repository --repository-name fastapi-microservices --force @@ -35,24 +29,10 @@ aws ecr delete-repository --repository-name fastapi-microservices --force kubectl delete -f eks/deploy-app-python.yaml # Delete PostgreSQL services -kubectl delete -f eks/deploy-db-python-fargate.yaml - -# Delete the Persistent Volume Claim (PVC) -kubectl delete pvc postgres-data-fastapi-postgres-0 -n my-cool-app - -# Delete the Persistent Volume (PV) -kubectl delete -f eks/efs-pv.yaml - -# Delete the Storage Class -kubectl delete -f eks/efs-sc.yaml - -# Delete all mount targets associated with your EFS file system -for mount_target_id in $(aws efs describe-mount-targets --file-system-id $file_system_id --output text --query 'MountTargets[*].MountTargetId'); do - aws efs delete-mount-target --mount-target-id "$mount_target_id" -done +kubectl delete -f eks/deploy-db-python.yaml # Delete the cluster -eksctl delete cluster -f eks/create-fargate-python.yaml +eksctl delete cluster -f eks/create-automode-python.yaml ``` @@ -68,11 +48,8 @@ kubectl delete -f eks/deploy-app-python.yaml # 
Delete PostgreSQL services kubectl delete -f eks/deploy-db-python.yaml -# Delete PodDisruptionBudgets for 'coredns' and 'ebs-csi-controller' -kubectl delete pdb coredns ebs-csi-controller -n kube-system - # Delete the cluster eksctl delete cluster -f eks/create-mng-python.yaml ``` - \ No newline at end of file + diff --git a/website/docs/python/eks/create-cluster.md b/website/docs/python/eks/create-cluster.md index c158f1d..c8db356 100644 --- a/website/docs/python/eks/create-cluster.md +++ b/website/docs/python/eks/create-cluster.md @@ -13,25 +13,25 @@ Creating an Amazon EKS cluster with [eksctl](https://eksctl.io/) allows for a wi - + -## 1. Using the cluster configuration file for Fargate nodes -The **[create-fargate-python.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/eks/create-fargate-python.yaml)** eksctl configuration file sets up a Fargate-based cluster for deploying our [python-fastapi-demo-docker](https://github.com/aws-samples/python-fastapi-demo-docker) with the following components: +## 1. Using the cluster configuration file for EKS Auto Mode nodes +The **[create-automode-python.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/eks/create-automode-python.yaml)** eksctl configuration file sets up an EKS Auto Mode-based cluster for deploying our [python-fastapi-demo-docker](https://github.com/aws-samples/python-fastapi-demo-docker) with the following components: -- **Metadata**: This section contains crucial metadata about your cluster, such as the cluster's name ("fargate-quickstart"), the AWS region where the cluster will be hosted ("us-east-1"), and the Kubernetes version ("1.26") that the cluster will run. -- **Fargate Profiles**: This section configures the Fargate profiles, which determine how and which pods are launched on Fargate. By default, a maximum of five namespaces can be included. 
In our configuration, we're using the `default` and `kube-system` namespaces and have also added a custom namespace, `my-cool-app`, to host the application we plan to deploy on the cluster. -- **Permissions (IAM)**: This section outlines how the configuration utilizes IAM roles for service accounts through an [OpenID Connect (OIDC) identity provider](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html). Two service accounts are established here: `aws-load-balancer-controller`, which authorizes Kubernetes to manage the [AWS Load Balancer Controller (LBC)](https://kubernetes-sigs.github.io/aws-load-balancer-controller/), `ecr-access-service-account`, which facilitates interactions with the [Amazon Elastic Container Registry (ECR)](https://aws.amazon.com/ecr/). +- **Metadata**: This section contains crucial metadata about your cluster, such as the cluster's name ("automode-quickstart"), the AWS region where the cluster will be hosted ("us-east-1"), and the Kubernetes version ("1.32") that the cluster will run. +- **AutoMode config**: This section describes configurations such as enabling EKS Auto Mode, specifying node roles, and defining NodePool resources. If you omit node roles and NodePool, eksctl will automatically create default ones. +- **Permissions (IAM)**: This section outlines how the configuration utilizes IAM roles for service accounts through an [OpenID Connect (OIDC) identity provider](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html). A service account is established here: `adot-collector`, which has permissions to send tracing data to AWS X-Ray. - **Logs (CloudWatch)**: The configuration wraps up with a `cloudWatch` section, which sets up [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) logging for the cluster. All categories of Kubernetes control plane logs are enabled and are set to be retained for 30 days. ## 2. 
Creating the Cluster From the `python-fastapi-demo-docker` project directory, create the cluster using the eksctl configuration file: :::caution -Make sure to verify the region specified in `eks/create-fargate-python.yaml` and change it, if needed. The region must be same as the one you used in your [.env file](../../python/introduction/environment-setup). +Make sure to verify the region specified in `eks/create-automode-python.yaml` and change it, if needed. The region must be same as the one you used in your [.env file](../../python/introduction/environment-setup). ::: ```bash -eksctl create cluster -f eks/create-fargate-python.yaml +eksctl create cluster -f eks/create-automode-python.yaml ``` :::tip @@ -43,12 +43,13 @@ eksctl create cluster -f eks/create-fargate-python.yaml Upon completion, the output should look something like this: ``` -2023-05-26 13:10:23 [✔] EKS cluster "fargate-quickstart" in "us-east-1" region is ready +2025-06-20 18:16:03 [✔] EKS cluster "automode-quickstart" in "us-east-1" region is ready ``` ## 3. View Namespaces Check the namespaces in your cluster by running the following command: + ```bash kubectl get namespaces ``` @@ -56,49 +57,30 @@ The output should look something like this: ```bash NAME STATUS AGE -default Active 27m -kube-node-lease Active 27m -kube-public Active 27m -kube-system Active 27m -my-cool-app Active 27m +default Active 8m11s +kube-node-lease Active 8m11s +kube-public Active 8m11s +kube-system Active 8m11s +my-cool-app Active 32s ``` :::tip -- If you receive authentication errors, update kubeconfig using the following command `aws eks update-kubeconfig --name fargate-quickstart` +- If you receive authentication errors, update kubeconfig using the following command `aws eks update-kubeconfig --name automode-quickstart` ::: -## 4. 
Creating a Namespace -While we've already created the necessary Fargate profile and namespace for this workshop, to create any additional namespace and fargate profile, run the following commands: - -```bash -kubectl create namespace my-cool-app-v2 -``` -Before creating a Fargate Profile, first ensure that Fargate PodExecutionRole exists in the account. Create a PodExecutionRole with name `AmazonEKSFargatePodExecutionRole` if it doesn't exist following the steps in [EKS Fargate documentation](https://docs.aws.amazon.com/eks/latest/userguide/pod-execution-role.html). - -Then create a Fargate profile running the command below: - -```bash -aws eks create-fargate-profile \ - --region ${AWS_REGION} \ - --cluster fargate-quickstart \ - --fargate-profile-name fp-dev \ - --pod-execution-role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/AmazonEKSFargatePodExecutionRole \ - --selectors namespace=my-cool-app-v2 -``` - ## Conclusion -This lab has walked you through the process of creating an Amazon EKS Fargate cluster pre-configured to deploy the [python-fastapi-demo-docker](https://github.com/aws-samples/python-fastapi-demo-docker) project's resources. By following these instructions, you've set up a functioning Kubernetes cluster on Amazon EKS, ready for deploying applications. +This lab has walked you through the process of creating an Amazon EKS Auto Mode cluster pre-configured to deploy the [python-fastapi-demo-docker](https://github.com/aws-samples/python-fastapi-demo-docker) project's resources. By following these instructions, you've set up a functioning Kubernetes cluster on Amazon EKS, ready for deploying applications. ## 1. 
Using the cluster configuration file for Managed Node Groups The **[create-mng-python.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/eks/create-mng-python.yaml)** eksctl configuration file sets up a managed node groups-based cluster for deploying our [python-fastapi-demo-docker](https://github.com/aws-samples/python-fastapi-demo-docker) with the following components: -- **Metadata**: This section contains crucial metadata about your cluster, such as the cluster's name ("managednode-quickstart"), the target AWS region ("us-east-1"), and the Kubernetes version ("1.26") to be deployed. -- **Permissions (IAM)**: This section outlines how the configuration utilizes IAM roles for service accounts through an [OpenID Connect (OIDC) identity provider](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html). Two service accounts are established here: `aws-load-balancer-controller`, which authorizes Kubernetes to manage the [AWS Load Balancer Controller (LBC)](https://kubernetes-sigs.github.io/aws-load-balancer-controller/), `ecr-access-service-account`, which facilitates interactions with the [Amazon Elastic Container Registry (ECR)](https://aws.amazon.com/ecr/). +- **Metadata**: This section contains crucial metadata about your cluster, such as the cluster's name ("managednode-quickstart"), the target AWS region ("us-east-1"), and the Kubernetes version ("1.32") to be deployed. +- **Permissions (IAM)**: This section outlines how the configuration utilizes IAM roles for service accounts through an [OpenID Connect (OIDC) identity provider](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html). Two service accounts are established here: `aws-load-balancer-controller`, which authorizes Kubernetes to manage the [AWS Load Balancer Controller (LBC)](https://kubernetes-sigs.github.io/aws-load-balancer-controller/), and `adot-collector`, which has permissions to send tracing data to AWS X-Ray.
- **Managed node groups**: This section defines a managed node group called `eks-mng`. Nodes within this group are based on `t3.medium` instance types, with an initial deployment of two nodes. For more instance types, see [Amazon EC2 Instance Types](https://aws.amazon.com/ec2/instance-types/). - **Managed add-ons**: The configuration contains an `addons` section, which defines the [EKS add-ons](https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html) to be enabled on the cluster. In this case, `kube-proxy`, `vpc-cni` (a networking plugin for pods in VPC), and `coredns` (a DNS server) are activated. The `vpc-cni` addon is additionally linked with the [AmazonEKS_CNI_Policy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEKS_CNI_Policy.html) policy. - **Logs (CloudWatch)**: The configuration wraps up with a `cloudWatch` section, which sets up [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) logging for the cluster. All categories of Kubernetes control plane logs are enabled and are set to be retained for 30 days. @@ -123,7 +105,7 @@ eksctl create cluster -f eks/create-mng-python.yaml Upon completion, the output should look something like this: ``` -2023-05-26 13:10:23 [✔] EKS cluster "managednode-quickstart" in "us-east-1" region is ready +2025-06-20 22:15:12 [✔] EKS cluster "managednode-quickstart" in "us-east-1" region is ready ``` ## 3. Viewing Namespaces @@ -152,4 +134,4 @@ my-cool-app Active 41h ## Conclusion This tutorial walked you through the process of creating and connecting to an Amazon EKS cluster using managed node groups for the [python-fastapi-demo-docker](https://github.com/aws-samples/python-fastapi-demo-docker) application. By using the eksctl tool and understanding the ClusterConfig file, you are now better equipped to deploy and manage Kubernetes applications, while AWS takes care of the node lifecycle management. 
- \ No newline at end of file + diff --git a/website/docs/python/eks/deploy-app.md b/website/docs/python/eks/deploy-app.md index a655abe..5153c86 100644 --- a/website/docs/python/eks/deploy-app.md +++ b/website/docs/python/eks/deploy-app.md @@ -51,56 +51,34 @@ ## 2. Deploying the PostgreSQL StatefulSet, Service, and PersistentVolumeClaim -The **[eks/deploy-db-python.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/eks/deploy-db-python.yaml)** file is used for the deployment of the PostgreSQL database and consists of four primary resources: a StorageClass, Service, StatefulSet, and PersistentVolumeClaim. +The **[eks/deploy-db-python.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/eks/deploy-db-python.yaml)** file is used for the deployment of the PostgreSQL database and consists of three primary resources: a Service, StatefulSet, and PersistentVolumeClaim. - - +From the `python-fastapi-demo-docker` project directory, apply the database manifest: - - From the `python-fastapi-demo-docker` project directory, apply the database manifest: - - ```bash - kubectl apply -f eks/deploy-db-python-fargate.yaml - ``` - - It will take a few minutes for Fargate to provision pods.
In order to verify that the database pod is running please run the below command: - - ```bash - kubectl get pod fastapi-postgres-0 -n my-cool-app - ``` - - The expected output should look like this: - - ```bash - kubectl get pod fastapi-postgres-0 -n my-cool-app - NAME READY STATUS RESTARTS AGE - fastapi-postgres-0 1/1 Running 0 4m32s - ``` - - - +```bash +kubectl apply -f eks/deploy-db-python.yaml +``` - From the `python-fastapi-demo-docker` project directory, apply the database manifest: +The expected output should look like this: - ```bash - kubectl apply -f eks/deploy-db-python.yaml - ``` +```bash +service/db created +statefulset.apps/fastapi-postgres created +``` - In order to verify that the database pod is running please run the below command: +It will take a few minutes for the pods to be provisioned. To verify that the database pod is running, run the following command: - ```bash - kubectl get pod fastapi-postgres-0 -n my-cool-app - ``` +```bash +kubectl get pod fastapi-postgres-0 -n my-cool-app +``` - The expected output should look like this: - - ```bash - kubectl get pod fastapi-postgres-0 -n my-cool-app - NAME READY STATUS RESTARTS AGE - fastapi-postgres-0 1/1 Running 0 4m32s - ``` +The expected output should look like this: - - +```bash +kubectl get pod fastapi-postgres-0 -n my-cool-app +NAME READY STATUS RESTARTS AGE +fastapi-postgres-0 1/1 Running 0 4m32s +``` ## 3. Deploying the FastAPI Deployment, Service, and Ingress @@ -117,6 +95,14 @@ From the `python-fastapi-demo-docker` project directory, apply the application m kubectl apply -f eks/deploy-app-python.yaml ``` +The expected output should look like this: + +```bash +service/fastapi-service created +deployment.apps/fastapi-deployment created +ingress.networking.k8s.io/fastapi-ingress created +``` + ## 4. Verifying the Deployment After applying the manifest, verify that the deployment is running correctly.
diff --git a/website/docs/python/eks/index.md b/website/docs/python/eks/index.md index 3430115..ac7b64a 100644 --- a/website/docs/python/eks/index.md +++ b/website/docs/python/eks/index.md @@ -12,6 +12,8 @@ Amazon EKS supports multiple cluster types, including self-managed nodes, manage - A **self-managed node** cluster is a traditional Kubernetes cluster, where the worker nodes are EC2 instances that are managed by the user. With self-managed nodes, you have full control over the EC2 instances and the Kubernetes worker nodes, including the ability to use custom Amazon Machine Images (AMIs), use EC2 Spot instances to reduce costs, and use auto-scaling groups to adjust the number of nodes based on demand. - **Managed node groups** are a managed worker node option that simplifies the deployment and management of worker nodes in your EKS cluster. With managed node groups, you can launch a group of worker nodes with the desired configuration, including instance type, AMI, and security groups. The managed node group will automatically manage the scaling, patching, and lifecycle of the worker nodes, simplifying the operation of the EKS cluster. - **Fargate** clusters are another managed worker node option that abstracts away the underlying infrastructure, allowing you to run Kubernetes pods without managing the worker nodes. With Fargate clusters, you can launch pods directly on Fargate without the need for EC2 instances. Fargate clusters simplify the operation of the EKS cluster by abstracting away the worker nodes, allowing you to focus on the applications running in the Kubernetes cluster. +- **EKS Auto Mode** extends AWS's management of Kubernetes clusters beyond the cluster itself, enabling AWS to set up and manage the infrastructure necessary for smooth operation of workloads. EKS Auto Mode uses Karpenter to automatically create and manage worker nodes in your account based on Pod demand. There's no need to create node groups or Fargate nodes. 
Additionally, it enables management of infrastructure such as ALBs, NLBs, and EBS volumes without having to install an Ingress Controller or CSI Controller. + + ## Tools Before you begin, make sure that the following tools have been installed: @@ -25,4 +27,4 @@ helm version If any of these tools are missing, refer to section [Setting up the Development Environment](../../python/introduction/environment-setup.md) for installation instructions. ## Samples -- [eksctl examples](https://github.com/weaveworks/eksctl/tree/main/examples) \ No newline at end of file +- [eksctl examples](https://github.com/weaveworks/eksctl/tree/main/examples) diff --git a/website/docs/python/eks/manage-contexts.md b/website/docs/python/eks/manage-contexts.md index d6bc7ea..27f18e5 100644 --- a/website/docs/python/eks/manage-contexts.md +++ b/website/docs/python/eks/manage-contexts.md @@ -27,13 +27,13 @@ kubectl config current-context This command will output the current context, which should resemble: ```bash -arn:aws:eks:us-east-1:012345678901:cluster/fargate-quickstart +arn:aws:eks:us-east-1:012345678901:cluster/automode-quickstart ``` or ```bash -admin@fargate-quickstart.us-east-1.eksctl.io +admin@automode-quickstart.us-east-1.eksctl.io ``` ## 2.
Switching Contexts @@ -43,27 +43,27 @@ If your current context doesn't match your EKS cluster, you need to switch conte From the `python-fastapi-demo-docker` project directory, update your local kubeconfig file using either one of the following commands: - + ```bash -aws eks --region ${AWS_REGION} update-kubeconfig --name fargate-quickstart +aws eks --region ${AWS_REGION} update-kubeconfig --name automode-quickstart ``` or ```bash -eksctl utils write-kubeconfig --cluster fargate-quickstart --region ${AWS_REGION} +eksctl utils write-kubeconfig --cluster automode-quickstart --region ${AWS_REGION} ``` Executing the above commands should output a confirmation message similar to the output below, indicating a successful context switch: ```bash -Updated context arn:aws:eks:us-east-1:012345678901:cluster/fargate-quickstart in /Users/frank/.kube/config +Updated context arn:aws:eks:us-east-1:012345678901:cluster/automode-quickstart in /Users/frank/.kube/config ``` or ```bash -2023-09-22 17:00:52 [✔] saved kubeconfig as "/Users/frank/.kube/config" +2025-06-18 13:24:37 [✔] saved kubeconfig as "/Users/frank/.kube/config" ``` @@ -87,7 +87,7 @@ Updated context arn:aws:eks:us-east-1:012345678901:cluster/managednode-quickstar or ```bash -2023-09-22 17:00:52 [✔] saved kubeconfig as "/Users/frank/.kube/config" +2025-06-20 22:22:14 [✔] saved kubeconfig as "/Users/frank/.kube/config" ``` @@ -102,4 +102,3 @@ or ## Conclusion This lab provided a quick walkthrough on how to verify and switch Kubernetes contexts in an EKS cluster. With a good grasp of Kubernetes contexts, you're now better equipped to handle workloads on different EKS clusters efficiently. 
- diff --git a/website/docs/python/eks/scale-using-karpenter.md b/website/docs/python/eks/scale-using-karpenter.md new file mode 100644 index 0000000..78a163c --- /dev/null +++ b/website/docs/python/eks/scale-using-karpenter.md @@ -0,0 +1,442 @@ +--- +title: Managing Worker Node Scaling with Karpenter +sidebar_position: 14 +--- +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; +import GetEnvVars from '../../../src/includes/get-env-vars.md'; +import GetECRURI from '../../../src/includes/get-ecr-uri.md'; + +## Objective + +In this Lab, we will first explain the concepts of Karpenter, followed by hands-on exercises to try the following: + +- When Pod replicas are increased, the necessary worker nodes are provisioned; when Pod replicas are decreased, the now-unnecessary nodes are deprovisioned +- Worker node instances are cost-optimized through Consolidation, one of Karpenter's key features + +## Prerequisites + +* [Deploying FastAPI and PostgreSQL Microservices to EKS](./deploy-app.md) + + + + +--- + +## 1. Understanding the Concepts + +Karpenter is a flexible, high-performance, open-source Kubernetes cluster autoscaler built by AWS. While the traditional Kubernetes Cluster Autoscaler (CAS) adjusts the number of worker nodes by manipulating EC2 Auto Scaling groups, Karpenter launches appropriately sized EC2 instances directly as they are needed, which makes it both faster and more flexible. Note that EKS Auto Mode includes a fully managed Karpenter as part of its feature set, so no installation is required. + +Karpenter uses the following Custom Resource Definitions (CRDs) to configure its behavior: + +- NodePool + This CRD allows you to configure what kind of nodes you want to launch. + - Node requirements (architecture, OS, instance types, etc.) + - Resource limits + - Scaling behavior + - Node lifecycle management +- EC2NodeClass/NodeClass + This CRD defines the node configuration itself and AWS-specific information.
For example, it configures kubelet settings, roles, subnets, etc. In EKS Auto Mode, NodeClass is used instead of EC2NodeClass. +- NodeClaim + This CRD is created by Karpenter and is not created manually. It represents the request for nodes that will actually be launched. EC2 instances are launched based on this. + +To learn more, see [Concepts | Karpenter](https://karpenter.sh/docs/concepts/) in Karpenter documentation. + +## 2. Installing Karpenter + + + + + + EKS Auto Mode does not require Karpenter installation. Additionally, since NodePools are pre-configured in EKS Auto Mode, you can use them without creating additional NodePools. + + ```bash + kubectl get NodePool + NAME NODECLASS NODES READY AGE + general-purpose default 1 True 21h + system default 1 True 21h + ``` + + However, in this Lab, we will create and use a custom NodePool to understand the functionality more clearly. The custom NodePool manifest is [eks/karpenter/nodepool.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/eks/karpenter/nodepool.yaml). + + Create it with the following command: + + ```bash + kubectl apply -f eks/karpenter/nodepool.yaml + ``` + + The expected output should look like this: + + ```bash + nodepool.karpenter.sh/developrs-workshop-pool created + ``` + + + + + When using managed node groups, you need to install the Karpenter add-on. In this lab, we will use eksctl due to its ease of installation. Currently, eksctl only supports adding the Karpenter add-on during cluster creation. To learn more, see [Karpenter Support - eksctl](https://eksctl.io/usage/eksctl-karpenter/) in eksctl documentation. + Therefore, we will create a new cluster using eksctl. The cluster config file is located at [eks/karpenter/create-mng-python.yaml](https://github.com/aws-samples/python-fastapi-demo-docker/blob/main/eks/karpenter/create-mng-python.yaml). Please modify the region setting as appropriate.
+ + ```yaml + metadata: + (snip) + # region: The AWS region where your EKS cluster will be created. + region: us-west-2 + ``` + + Just in case, log out of the Helm registry so that the chart is pulled unauthenticated from the public ECR: + + ```bash + helm registry logout public.ecr.aws + ``` + + The expected output should look like this: + + ```bash + Removing login credentials for public.ecr.aws + ``` + + Execute the following command to create an EKS cluster named 'managednode-quickstart-karpenter': + + ```bash + eksctl create cluster -f eks/karpenter/create-mng-python.yaml + ``` + + The expected output should look like this: + + ```bash + 2025-08-12 22:45:53 [✔] EKS cluster "managednode-quickstart-karpenter" in "us-east-1" region is ready + ``` + + Add the necessary permissions to the IAM role 'eksctl-managednode-quickstart-karpenter-iamservice-role', which is automatically created for the Karpenter Pod. First, create a JSON file for the policy: + + ```bash + cat > karpenter-role-policy.json << EOF + { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": "eks:DescribeCluster", + "Resource": "arn:aws:eks:*:*:cluster/managednode-quickstart-karpenter" + }, + { + "Effect": "Allow", + "Action": "iam:RemoveRoleFromInstanceProfile", + "Resource": "*" + } + ] + } + EOF + ``` + + If successful, no output will be displayed. + Next, add this as an inline policy to the IAM role 'eksctl-managednode-quickstart-karpenter-iamservice-role': + + ```bash + aws iam put-role-policy --role-name eksctl-managednode-quickstart-karpenter-iamservice-role --policy-name karpenter-additional-policy --policy-document file://karpenter-role-policy.json + ``` + + If successful, no output will be displayed. + + Next, create the NodePool and EC2NodeClass.
+ + ```bash + kubectl apply -f eks/karpenter/ec2nodeclass.yaml + kubectl apply -f eks/karpenter/nodepool-mng.yaml + ``` + + The expected output of each command should look like this: + + ```bash + ec2nodeclass.karpenter.k8s.aws/default created + nodepool.karpenter.sh/developrs-workshop-pool created + ``` + + Since we have created a new cluster, we need to recreate the necessary resources: + + ```bash + kubectl create ns my-cool-app + kubectl apply -f eks/sc.yaml + kubectl create secret generic fastapi-secret --from-env-file=.env -n my-cool-app + kubectl create configmap db-init-script --from-file=init.sh=server/db/init.sh -n my-cool-app + ``` + + The expected output of each command should look like this: + + ```bash + namespace/my-cool-app created + storageclass.storage.k8s.io/ebs-sc created + secret/fastapi-secret created + configmap/db-init-script created + ``` + + To learn more about installation, see [Getting Started with Karpenter | Karpenter](https://karpenter.sh/docs/getting-started/getting-started-with-karpenter/) in Karpenter documentation. + + + + +Additionally, in this Lab, we will install and use eks-node-viewer to better visualize worker nodes.
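
Before moving on, it helps to see what such a custom NodePool can look like. The following is an illustrative sketch only, not the contents of the repository's `eks/karpenter/nodepool.yaml` or `nodepool-mng.yaml`; the pool name, the `workload-type` label key, and all field values are assumptions:

```yaml
# Illustrative NodePool sketch (assumed values, not the workshop's actual manifest)
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: developers-workshop-pool
spec:
  template:
    metadata:
      labels:
        workload-type: developers-workshop   # assumed label for selecting these nodes
    spec:
      nodeClassRef:                          # in EKS Auto Mode this points at a NodeClass
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
  limits:
    cpu: "16"                                # cap on total CPU this pool may provision
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 0s
```

The `requirements` constrain which instances Karpenter may launch, while `limits` bound the pool's total capacity; the `disruption` block matches the default Consolidation settings discussed later in this Lab.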
+ + + + + Install the eks-node-viewer: + + ```bash + brew tap aws/tap + brew install eks-node-viewer + ``` + + The expected output should look like this: + + ```bash + ==> Fetching downloads for: eks-node-viewer + ==> Fetching aws/tap/eks-node-viewer + ==> Downloading https://github.com/awslabs/eks-node-viewer/releases/download/v0.7.4/eks-node-viewer_Darwin_all + ==> Downloading from https://release-assets.githubusercontent.com/github-production-release-asset/575555632/3c568f24-b0e6-42f6-9c73-f208a106f94f?sp=r&sv=2018-11-09&sr=b&spr=https&se=2025-08-12T22%3A13%3A41Z&rscd=attachment%3B+filename%3Deks-node-viewer_Darwin_all + ######################################################################################################################################################################################################################################################### 100.0% + ==> Installing eks-node-viewer from aws/tap + 🍺 /opt/homebrew/Cellar/eks-node-viewer/0.7.4: 4 files, 136.0MB, built in 4 seconds + ==> Running `brew cleanup eks-node-viewer`... + Disable this behaviour by setting `HOMEBREW_NO_INSTALL_CLEANUP=1`. + Hide these hints with `HOMEBREW_NO_ENV_HINTS=1` (see `man brew`). + ==> No outdated dependents to upgrade! + ``` + + + + + Install the eks-node-viewer: + + ```bash + go install github.com/awslabs/eks-node-viewer/cmd/eks-node-viewer@latest + ``` + + If the command succeeds, nothing is output. + + + + +If any issues arise, for troubleshooting purposes, see [GitHub - awslabs/eks-node-viewer: EKS Node Viewer](https://github.com/awslabs/eks-node-viewer) README.md. + +## 3. Trying Scaling Up/Down + +First, let's deploy an application for testing. 
We'll start by deploying the DB application Pod: + +```bash +kubectl apply -f eks/deploy-db-python.yaml +``` + +The expected output should look like this: + +```bash +service/db created +statefulset.apps/fastapi-postgres created +``` + +Next, let's deploy the Web application Pod: + +```bash +kubectl apply -f eks/karpenter/deploy-app-python.yaml +``` + +The expected output should look like this: + +```bash +deployment.apps/fastapi-deployment created +``` + +After deployment, verify that the Pods have started normally. Run the following command and confirm that each Pod's READY column shows 1/1: + +```bash +kubectl get po -n my-cool-app -o wide +``` + +The expected output should look like this: + +```bash +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +fastapi-deployment-6f69d7cf44-9wxpr 1/1 Running 0 118s 192.168.65.192 i-0123456789abcdef0 +fastapi-postgres-0 1/1 Running 0 2m38s 192.168.115.80 i-0123456789abcdef1 +``` + +Next, use another terminal to launch eks-node-viewer: + +```bash +eks-node-viewer --node-selector workload-type=developers-workshop +``` + +The expected output should look like this: + +```bash +1 nodes ( 200m/1780m) 11.2% cpu ████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ $0.042/hour | $30.740/month +4 pods (0 pending 4 running 4 bound) + +i-0123456789abcdef0 cpu ████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 11% (1 pods) t3a.medium/$0.0421 On-Demand/Auto - Ready - +``` + +You can confirm that one worker node is running with one Pod running on it. + +Now, let's scale up the ReplicaSet of Deployment fastapi-deployment: + +```bash +kubectl scale deploy -n my-cool-app fastapi-deployment --replicas=9 +``` + +The expected output should look like this: + +```bash +deployment.apps/fastapi-deployment scaled +``` + +After waiting a moment, check eks-node-viewer. As shown below, because one worker node ran out of capacity to start Pods, Karpenter provisions a new worker node.
You can confirm that the Pods that couldn't be scheduled on the existing worker node are now running on the new worker node. + +:::note + +Please note that the actual placement of Pods on worker nodes may differ from the results shown below. Additionally, the following results are for EKS Auto Mode. In the case of Managed Node Groups, be aware that Pods from DaemonSets such as aws-node, ebs-csi-node, and kube-proxy will be present on the worker nodes. + +::: + + +```bash +2 nodes ( 1800m/3560m) 50.6% cpu ████████████████████░░░░░░░░░░░░░░░░░░░░ $0.084/hour | $61.481/month +12 pods (0 pending 12 running 12 bound) + +i-0123456789abcdef0 cpu ███████████████████████████████░░░░ 90% (8 pods) t3a.medium/$0.0421 On-Demand/Auto - Ready - +i-0123456789abcdef2 cpu ████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 11% (1 pods) t3a.medium/$0.0421 On-Demand/Auto - Ready - +``` + +Conversely, let's scale down the Deployment's ReplicaSet with the following command: + +```bash +kubectl scale deploy -n my-cool-app fastapi-deployment --replicas=1 +``` + +The expected output should look like this: + +```bash +deployment.apps/fastapi-deployment scaled +``` + +In eks-node-viewer, you can confirm that the Pods that were running on the old worker node have been removed, leaving that node empty. + +```bash +2 nodes ( 200m/3560m) 5.6% cpu ██░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ $0.084/hour | $61.481/month +4 pods (0 pending 4 running 4 bound) + +i-0123456789abcdef0 cpu ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0% (0 pods) t3a.medium/$0.0421 On-Demand/Auto - Ready - +i-0123456789abcdef2 cpu ████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 11% (1 pods) t3a.medium/$0.0421 On-Demand/Auto - Ready - +``` + +After a while, you can confirm that Karpenter removes the empty worker node. This is because the empty worker node was disrupted by Karpenter's Consolidation feature.
+ +```bash +1 nodes ( 200m/1780m) 11.2% cpu ████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ $0.042/hour | $30.740/month +4 pods (0 pending 4 running 4 bound) + +i-0123456789abcdef2 cpu ████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 11% (1 pods) t3a.medium/$0.0421 On-Demand/Auto - Ready - +``` + +## 4. Trying Consolidation Feature + +Consolidation is one of many useful features of Karpenter. In this Lab, we're using the same values as Karpenter's defaults, where Karpenter automatically performs worker node deletion or replacement when nodes are empty or underutilized. In this section, we'll also verify the behavior when utilization is low. + +```yaml + disruption: + consolidationPolicy: WhenEmptyOrUnderutilized + consolidateAfter: 0s +``` + +Karpenter will delete a worker node if all Pods can run using the available capacity of other worker nodes. Additionally, it will replace nodes if all Pods can run using a combination of available capacity on other worker nodes and one lower-cost alternative worker node. + +First, let's scale up: + +```bash +kubectl scale deploy -n my-cool-app fastapi-deployment --replicas=9 +``` + +The expected output should look like this: + +```bash +deployment.apps/fastapi-deployment scaled +``` + +In eks-node-viewer, you can confirm that Karpenter has provisioned new worker nodes as shown below. 
+ +```bash +2 nodes ( 1800m/3560m) 50.6% cpu ████████████████████░░░░░░░░░░░░░░░░░░░░ $0.084/hour | $61.481/month +12 pods (0 pending 12 running 12 bound) + +i-0123456789abcdef2 cpu ███████████████████████████████░░░░ 90% (8 pods) t3a.medium/$0.0421 On-Demand/Auto - Ready - +i-0123456789abcdef3 cpu ████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 11% (1 pods) t3a.medium/$0.0421 On-Demand/Auto - Ready - +``` + +Next, let's decrease the number of replicas by one: + +```bash +kubectl scale deploy -n my-cool-app fastapi-deployment --replicas=8 +``` + +The expected output should look like this: + +```bash +deployment.apps/fastapi-deployment scaled +``` + +As a result, one of the worker nodes now has enough available capacity to run all 8 Pods: + +```bash +2 nodes ( 1600m/3560m) 44.9% cpu ██████████████████░░░░░░░░░░░░░░░░░░░░░░ $0.084/hour | $61.481/month +11 pods (0 pending 11 running 11 bound) + +i-0123456789abcdef2 cpu ████████████████████████████░░░░░░░ 79% (7 pods) t3a.medium/$0.0421 On-Demand/Auto - Ready - +i-0123456789abcdef3 cpu ████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 11% (1 pods) t3a.medium/$0.0421 On-Demand/Auto - Ready - +``` + +Consequently, Karpenter determines that one worker node is unnecessary and disrupts it. 
+
+```bash
+2 nodes ( 1600m/3560m) 44.9% cpu ██████████████████░░░░░░░░░░░░░░░░░░░░░░ $0.084/hour | $61.481/month
+11 pods (0 pending 11 running 11 bound)
+
+i-0123456789abcdef2 cpu ███████████████████████████████░░░░ 90% (8 pods) t3a.medium/$0.0421 On-Demand/Auto - Ready -
+i-0123456789abcdef3 cpu ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0% (0 pods) t3a.medium/$0.0421 On-Demand/Auto Deleting Ready -
+```
+
+```bash
+1 nodes ( 1600m/1780m) 89.9% cpu ████████████████████████████████████░░░░ $0.042/hour | $30.740/month
+11 pods (0 pending 11 running 11 bound)
+
+i-0123456789abcdef2 cpu ███████████████████████████████░░░░ 90% (8 pods) t3a.medium/$0.0421 On-Demand/Auto - Ready -
+```
+
+To learn more, see [Disruption | Karpenter](https://karpenter.sh/docs/concepts/disruption/#consolidation) in the Karpenter documentation.
+
+## 5. Clean Up Resources
+
+To clean up all resources created in this lab exercise and the workshop up to this point, run the following commands.
+
+
+
+  ``` bash
+  cd /home/ec2-user/environment/python-fastapi-demo-docker
+  kubectl delete -f eks/deploy-db-python.yaml
+  kubectl delete -f eks/karpenter/deploy-app-python.yaml
+  kubectl delete -f eks/karpenter/nodepool.yaml
+  ```
+
+
+
+  ``` bash
+  cd /home/ec2-user/environment/python-fastapi-demo-docker
+  eksctl delete cluster -f eks/karpenter/create-mng-python.yaml
+  ```
+
+
+
+
+## Conclusion
+
+In this lab, we first reviewed the concepts of Karpenter and then tested scaling up and down in practice. The scale-down behavior was achieved through Consolidation, one of Karpenter's features, which we examined in more detail.
+Additionally, Karpenter can perform autoscaling when used in conjunction with the Horizontal Pod Autoscaler.
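The consolidation rule above — delete a worker node when its Pods fit into the spare capacity of the remaining nodes — can be sketched with plain shell arithmetic. This is an illustration only, not Karpenter's actual algorithm; the ~1780m allocatable CPU per t3a.medium and 200m request per Pod are taken from the eks-node-viewer examples above.

```shell
# Illustrative consolidation fit check (not Karpenter's real code):
# can the Pods on a candidate node be packed onto the surviving node?
allocatable_m=1780     # allocatable CPU on the surviving node (millicores)
used_m=$((7 * 200))    # 7 pods already scheduled there
moving_m=$((1 * 200))  # pods on the node Karpenter considers removing

free_m=$((allocatable_m - used_m))
if [ "$moving_m" -le "$free_m" ]; then
  echo "node can be consolidated: ${moving_m}m fits into ${free_m}m spare"
else
  echo "node must stay: only ${free_m}m spare"
fi
```

With these figures the check prints `node can be consolidated: 200m fits into 380m spare`, matching the behavior observed in eks-node-viewer.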
diff --git a/website/docs/python/eks/setup-loadbalancing.md b/website/docs/python/eks/setup-loadbalancing.md index 7ac4359..40df3e1 100644 --- a/website/docs/python/eks/setup-loadbalancing.md +++ b/website/docs/python/eks/setup-loadbalancing.md @@ -1,5 +1,5 @@ --- -title: Setting up the AWS Application Load Balancer Controller (LBC) on the EKS Cluster +title: Setting up the Application Load Balancer on the EKS Cluster sidebar_position: 6 --- import Tabs from '@theme/Tabs'; @@ -7,7 +7,8 @@ import TabItem from '@theme/TabItem'; import GetEnvVars from '../../../src/includes/get-env-vars.md'; ## Objective -This lab shows you how to set up the [AWS Load Balancer Controller (LBC)](https://kubernetes-sigs.github.io/aws-load-balancer-controller/) on your cluster, which enables the routing of external traffic to your Kubernetes services. We'll leverage the [IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) we configured when we created our cluster, ensuring that the controller has the required permissions. + +This lab shows you how to prepare your cluster for using Application Load Balancers (ALB). When using Managed node groups, you need to install the [AWS Load Balancer Controller (LBC)](https://kubernetes-sigs.github.io/aws-load-balancer-controller/) on your cluster, which enables the routing of external traffic to your Kubernetes services. We'll leverage the [IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) we configured when we created our cluster, ensuring that the controller has the required permissions. However, if you're using EKS Auto Mode, this installation is not necessary as the functionality is built-in. :::info @@ -21,33 +22,43 @@ Classic Load Balancers are not supported for pods running on Fargate. Network Lo -## 1. 
Set Environment Variables
-Before we start setting up our EKS cluster, we need to set a couple environment variables.
+
+
-Export the name of your EKS cluster and the VPC ID associated with your EKS cluster executing the following commands:
+## 1. Creating IngressClass
-
-
-```bash
-export CLUSTER_VPC=$(aws eks describe-cluster --name fargate-quickstart --region ${AWS_REGION} --query "cluster.resourcesVpcConfig.vpcId" --output text)
-export CLUSTER_NAME=fargate-quickstart
+IngressClass defines which Ingress controller manages the Ingress, while IngressClassParams defines the parameters that are commonly applied to Ingress resources.
+
+Run the following command from the python-fastapi-demo-docker project directory to create the IngressClass and IngressClassParams:
+
+``` bash
+kubectl apply -f eks/ic-automode.yaml
+```
+
+The expected output should look like this:
+
+```
+ingressclass.networking.k8s.io/alb created
+ingressclassparams.eks.amazonaws.com/alb created
 ```
+
+
+
+## 1. Set Environment Variables
+Before we start setting up our EKS cluster, we need to set a couple of environment variables.
+
+Export the name of your EKS cluster and the VPC ID associated with your EKS cluster by executing the following commands:
+
 ```bash
 export CLUSTER_VPC=$(aws eks describe-cluster --name managednode-quickstart --region ${AWS_REGION} --query "cluster.resourcesVpcConfig.vpcId" --output text)
 export CLUSTER_NAME=managednode-quickstart
 ```
-
-
-
-
-
 ## 2. Verify the Service Account
 
 First, we need to make sure the "aws-load-balancer-controller" service account is correctly set up in the "kube-system" namespace in our cluster.
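To make that verification concrete, here is a minimal, locally runnable sketch that pulls the IRSA role ARN annotation out of a ServiceAccount manifest with `sed`. The inline manifest and `example-lbc-role` name are made-up stand-ins; on a real cluster you would pipe `kubectl get serviceaccount aws-load-balancer-controller -n kube-system -o yaml` into the same filter instead.

```shell
# Parse the IRSA role ARN annotation out of a ServiceAccount manifest.
# The heredoc stands in for:
#   kubectl get serviceaccount aws-load-balancer-controller -n kube-system -o yaml
manifest=$(cat <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::012345678901:role/example-lbc-role
  name: aws-load-balancer-controller
  namespace: kube-system
EOF
)

# Keep only the text after the role-arn annotation key.
role_arn=$(printf '%s\n' "$manifest" | sed -n 's|.*eks\.amazonaws\.com/role-arn: ||p')
echo "IRSA role ARN: $role_arn"
```

If the annotation is missing, `role_arn` comes back empty, which is a quick signal that IRSA was not configured for the controller.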
@@ -62,13 +73,13 @@ kind: ServiceAccount
 metadata:
   annotations:
     eks.amazonaws.com/role-arn: arn:aws:iam::012345678901:role/eksctl-fargate-quickstart-addon-iamserviceac-Role1-J2T54L9SG5L0
-  creationTimestamp: "2023-05-30T23:09:32Z"
+  creationTimestamp: "2025-06-21T05:09:27Z"
   labels:
     app.kubernetes.io/managed-by: eksctl
   name: aws-load-balancer-controller
   namespace: kube-system
-  resourceVersion: "2102"
-  uid: 2086b1c0-de23-4386-ae20-19d51b7db4a1
+  resourceVersion: "1406"
+  uid: 28939c82-5ed9-4a82-9be7-f6c5606ae4ac
 ```
 
 ## 3. Add and Update EKS chart repository to Helm:
@@ -105,7 +116,7 @@ helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
 You should receive an output confirming the successful installation of the AWS Load Balancer Controller (LBC):
 ```bash
 NAME: aws-load-balancer-controller
-LAST DEPLOYED: Sat May 11 01:21:04 2023
+LAST DEPLOYED: Fri Jun 20 22:27:18 2025
 NAMESPACE: kube-system
 STATUS: deployed
 REVISION: 1
@@ -123,6 +134,25 @@ helm list -A
 You should receive similar output:
 ```bash
-NAME                        	NAMESPACE  	REVISION	UPDATED                             	STATUS  	CHART                             	APP VERSION
-aws-load-balancer-controller	kube-system	1       	2023-09-11 00:31:57.585623 -0400 EDT	deployed	aws-load-balancer-controller-1.6.0	v2.6.0
+NAME                        	NAMESPACE  	REVISION	UPDATED                             	STATUS  	CHART                              	APP VERSION
+aws-load-balancer-controller	kube-system	1       	2025-06-20 22:27:18.048668 -0700 PDT	deployed	aws-load-balancer-controller-1.13.3	v2.13.3
 ```
+
+
+## 5.
Verifying IngressClass Installation + +You can verify that the IngressClass was successfully installed when you deployed the AWS Load Balancer Controller using the following command: + +``` bash +kubectl get ingressclasses +``` + +The expected output should look like this: + +``` bash +NAME CONTROLLER PARAMETERS AGE +alb ingress.k8s.aws/alb 6m50s +``` + + + diff --git a/website/docs/python/eks/setup-storage.md b/website/docs/python/eks/setup-storage.md index fa3cfe4..181f2a5 100644 --- a/website/docs/python/eks/setup-storage.md +++ b/website/docs/python/eks/setup-storage.md @@ -12,210 +12,28 @@ This lab shows you how to setup and configure a data storage mechanism on your c ## Prerequisites -- [Setting up the AWS Application Load Balancer Controller (LBC) on the EKS Cluster](./setup-loadbalancing.md) +- [Setting up the Application Load Balancer on the EKS Cluster](./setup-loadbalancing.md) - + -This lab shows you how to setup an [Elastic File System (EFS)](https://aws.amazon.com/efs/) volume within your Fargate cluster using the EFS CSI Driver. It's important to note that dynamic provisioning of persistent volumes is **not** supported for Fargate. As a result, in this lab, we'll be manually provisioning both the EFS filesystem and the corresponding Persistent Volume. +## 1. Creating StorageClass -## 1. Configuring Mount Targets for the EFS File System +StorageClass is used for dynamic provisioning of PersistentVolumes to fulfill PersistentVolumeClaims (PVCs). Run the following command from the python-fastapi-demo-docker project directory to create the StorageClass: -EFS only allows one mount target to be created in each Availability Zone, so you'll need to place the mount target on **each subnet**. - -1. Fetch the VPC ID associated with your EKS cluster and save it in a variable. - -```bash -vpc_id=$(aws eks describe-cluster \ - --name "fargate-quickstart" \ - --region "$AWS_REGION" \ - --query "cluster.resourcesVpcConfig.vpcId" \ - --output "text") -``` - -2. 
Retrieve the CIDR range for the VPC of your EKS cluster and save it in a variable. - -```bash -cidr_range=$(aws ec2 describe-vpcs \ - --vpc-ids "$vpc_id" \ - --query "Vpcs[].CidrBlock" \ - --output "text" \ - --region "$AWS_REGION") -``` - -3. Create a security group for your Amazon EFS mount points and save its ID in a variable. - -```bash -security_group_id=$(aws ec2 create-security-group \ - --group-name "MyEfsSecurityGroup" \ - --description "My EFS security group" \ - --vpc-id "$vpc_id" \ - --region "$AWS_REGION" \ - --output "text") -``` - -4. Create an inbound rule on the new security group that allows NFS traffic from the CIDR for your cluster's VPC. - - -```bash -aws ec2 authorize-security-group-ingress \ - --group-id "$security_group_id" \ - --protocol "tcp" \ - --port "2049" \ - --region "$AWS_REGION" \ - --cidr "$cidr_range" -``` - -The expected output should look like this: - -```bash -{ - "Return": true, - "SecurityGroupRules": [ - { - "SecurityGroupRuleId": "sgr-0ad5de1c44ffce249", - "GroupId": "sg-01a4ec3c5d2f85c10", - "GroupOwnerId": "991242130495", - "IsEgress": false, - "IpProtocol": "tcp", - "FromPort": 2049, - "ToPort": 2049, - "CidrIpv4": "192.168.0.0/16" - } - ] -} -``` - -:::note -To further restrict access to your file system, you can optionally specify the CIDR of your subnet instead of using the entire VPC. -::: - -5. Create the Amazon EFS File System for your cluster. - -```bash -file_system_id=$(aws efs create-file-system \ - --region "$AWS_REGION" \ - --tags "Key=Name,Value=fargate-quickstart-efs" \ - --performance-mode "generalPurpose" \ - --query "FileSystemId" \ - --output "text") -``` - -6. Run the following command to create an EFS mount target in each of the cluster VPC's subnets. 
- -```bash -for subnet in $(aws ec2 describe-subnets --filters "Name=vpc-id,Values=$vpc_id" --region "$AWS_REGION" --query 'Subnets[*].SubnetId' --output text) -do - aws efs create-mount-target \ - --file-system-id $file_system_id \ - --subnet-id $subnet \ - --region $AWS_REGION \ - --security-groups $security_group_id -done -``` - -The expected output for each subnet should look like this: -```bash -{ - "OwnerId": "01234567890", - "MountTargetId": "fsmt-0dc4c9ca6537396a1", - "FileSystemId": "fs-000f4681791902287", - "SubnetId": "subnet-0e5891457fe105577", - "LifeCycleState": "creating", - "IpAddress": "192.168.61.208", - "NetworkInterfaceId": "eni-0b59c825208f521b7", - "AvailabilityZoneId": "use1-az3", - "AvailabilityZoneName": "us-east-1a", - "VpcId": "vpc-0f3ef22756b1abf1f" -} -``` - -## 2. Creating the Storage Class and Persistent Volumes - -1. Deploy the Storage Class: - -```bash -kubectl apply -f eks/efs-sc.yaml -``` - -The expected output should look like this: - -```bash -storageclass.storage.k8s.io/efs-sc created -``` - -2. To create a persistent volume, you need to attach your EFS file system identifier to the PV. Run the following command to update the file system identifier in the `deploy-efs-pv-fargate.yaml` manifest: - -```bash -sed -i "s/fs-000000001121212/$file_system_id/g" eks/efs-pv.yaml -``` - -**Optionally**, if you're running this on macOS, run the following command to update the file system identifier: - -```bash -sed -i '' 's/fs-000000001121212/'"$file_system_id"'/g' eks/efs-pv.yaml -``` - -3. Deploy the Persistent Volume (PV): - -```bash -kubectl apply -f eks/efs-pv.yaml +``` bash +kubectl apply -f eks/sc-automode.yaml ``` The expected output should look like this: -```bash -persistentvolume/efs-pv created -``` - -## 3. Verifying the Deployment of the Storage Class and Persistent Volumes - -1. 
Verify that the Storage Class was successfully created: - -```bash -kubectl get storageclass efs-sc -``` - -The expected output should look like this: -```bash -NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE -efs-sc efs.csi.aws.com Delete Immediate false 10m ``` - -2. Verify that your file system identifier was successfully attached: - -```bash -kubectl describe pv efs-pv +storageclass.storage.k8s.io/ebs-sc created ``` -The expected output should look like this: - -```bash -Name: efs-pv -Labels: -Annotations: -Finalizers: [kubernetes.io/pv-protection] -StorageClass: efs-sc -Status: Available -Claim: -Reclaim Policy: Retain -Access Modes: RWX -VolumeMode: Filesystem -Capacity: 5Gi -Node Affinity: -Message: -Source: - Type: CSI (a Container Storage Interface (CSI) volume source) - Driver: efs.csi.aws.com - FSType: - VolumeHandle: fs-000f4681791902287 - ReadOnly: false - VolumeAttributes: -Events: -``` @@ -243,6 +61,20 @@ ebs-csi-controller-5bd7b5fdbf-6wpzv 6/6 Running 0 7h ebs-csi-controller-5bd7b5fdbf-lnn96 6/6 Running 0 7h15m ebs-csi-node-l7z5z 3/3 Running 0 4h7m ebs-csi-node-tvlkg 3/3 Running 0 4h7m +``` + +## 2. Creating StorageClass + +StorageClass is used for dynamic provisioning of PersistentVolumes to fulfill PersistentVolumeClaims (PVCs). Run the following command from the python-fastapi-demo-docker project directory to create the StorageClass: + +``` bash +kubectl apply -f eks/sc.yaml +``` + +The expected output should look like this: + +``` +storageclass.storage.k8s.io/ebs-sc created ``` diff --git a/website/docs/python/eks/view-kubernetes-resources.md b/website/docs/python/eks/view-kubernetes-resources.md index 696d10a..da20e0f 100644 --- a/website/docs/python/eks/view-kubernetes-resources.md +++ b/website/docs/python/eks/view-kubernetes-resources.md @@ -64,48 +64,75 @@ clusterrolebinding.rbac.authorization.k8s.io/eks-console-dashboard-full-access-b You can also view Kubernetes resources limited to a specific namespace. 
Refer to the EKS documentation for more details on creating RBAC bindings for a namespace. -## 3. Adding the IAM principal ARN to the `aws-auth` ConfigMap +## 3. Creating the access entry for the IAM principal ARN -:::caution -If the IAM principal logged into the EKS console is the creator of the EKS cluster, skip this step and proceed to step '[4. View Kubernetes resources](#4-viewing-your-kubernetes-resources)'. Performing this step by mistake will overwrite the original super-user permissions in the ConfigMap 'aws-auth', which can make the EKS cluster difficult to manage. +**Optionally**, you can create the access entry for an existing IAM principal ARN logged into the EKS console by running the following command. Make sure to substitute the sample value with your _existing_ IAM principal ARN as the value for the `--principal-arn` argument: -::: + + -**Optionally**, you can register an existing IAM principal ARN logged into the EKS console to the ConfigMap 'aws-auth' by running the following command. 
Make sure to substitute the sample value with your _existing_ IAM principal ARN as the value for the `--arn` argument: - - +``` +aws eks create-access-entry --cluster-name automode-quickstart \ + --principal-arn arn:aws:iam::012345678901:role/my-console-viewer-role \ + --kubernetes-groups eks-console-dashboard-full-access-group +``` + +The expected output should look like this: -```bash -eksctl create iamidentitymapping \ - --cluster fargate-quickstart \ - --region $AWS_REGION \ - --arn arn:aws:iam::012345678901:role/my-console-viewer-role \ - --group eks-console-dashboard-full-access-group \ - --no-duplicate-arns +``` +{ + "accessEntry": { + "clusterName": "automode-quickstart", + "principalArn": "arn:aws:iam::012345678901:role/my-console-viewer-role", + "kubernetesGroups": [ + "eks-console-dashboard-full-access-group" + ], + "accessEntryArn": "arn:aws:eks:us-east-1:012345678901:access-entry/automode-quickstart/role/092934755410/my-console-viewer-role/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", + "createdAt": "2025-06-18T21:36:50.926000-07:00", + "modifiedAt": "2025-06-18T21:36:50.926000-07:00", + "tags": {}, + "username": "arn:aws:sts::012345678901:assumed-role/my-console-viewer-role/{{SessionName}}", + "type": "STANDARD" + } +} ``` - - + + + -```bash -eksctl create iamidentitymapping \ - --cluster managednode-quickstart \ - --region $AWS_REGION \ - --arn arn:aws:iam::012345678901:role/my-console-viewer-role \ - --group eks-console-dashboard-full-access-group \ - --no-duplicate-arns ``` - - +aws eks create-access-entry --cluster-name managednode-quickstart \ + --principal-arn arn:aws:iam::012345678901:role/my-console-viewer-role \ + --kubernetes-groups eks-console-dashboard-full-access-group +``` The expected output should look like this: -```bash -2023-11-10 10:11:50 [ℹ] checking arn arn:aws:iam::012345678901:role/my-console-viewer-role against entries in the auth ConfigMap -2023-11-10 10:11:50 [ℹ] adding identity 
"arn:aws:iam::012345678901:role/my-console-viewer-role" to auth ConfigMap
 ```
+{
+    "accessEntry": {
+        "clusterName": "managednode-quickstart",
+        "principalArn": "arn:aws:iam::012345678901:role/my-console-viewer-role",
+        "kubernetesGroups": [
+            "eks-console-dashboard-full-access-group"
+        ],
+        "accessEntryArn": "arn:aws:eks:us-east-1:012345678901:access-entry/managednode-quickstart/role/092934755410/my-console-viewer-role/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+        "createdAt": "2025-06-18T21:36:50.926000-07:00",
+        "modifiedAt": "2025-06-18T21:36:50.926000-07:00",
+        "tags": {},
+        "username": "arn:aws:sts::012345678901:assumed-role/my-console-viewer-role/{{SessionName}}",
+        "type": "STANDARD"
+    }
+}
+```
+
+
+
+
 ## 4. Viewing your Kubernetes resources
 
diff --git a/website/docs/python/introduction/environment-setup.md b/website/docs/python/introduction/environment-setup.md
index 0d34e47..14f3192 100644
--- a/website/docs/python/introduction/environment-setup.md
+++ b/website/docs/python/introduction/environment-setup.md
@@ -69,7 +69,7 @@ If you're planning to complete the workshop in full, make sure you've set up the
 
 - [Install Docker Desktop](https://www.docker.com/products/docker-desktop/)
 - [Create a DockerHub Account](https://hub.docker.com/)
-- [Install Python 3.9+](https://www.python.org/downloads/release/python-390/)
+- [Install Python 3.11+](https://www.python.org/downloads/release/python-3110/)
 - [Install the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
 - [Install minikube](https://minikube.sigs.k8s.io/docs/start/)
 - [Install eksctl](https://eksctl.io/installation)
@@ -77,7 +77,7 @@ If you're planning to complete the workshop in full, make sure you've set up the
 - [Install Helm](https://helm.sh/docs/intro/install/)
 
 ## 2. (Optional) Alternative Tools
-Optionally, if you are using macOS catalina (10.15) or higher, you can use Finch, instead of Docker. Finch is an open source tool for local container development.
It is available for macOS on Intel and Apple Silicon. Finch and Docker can be installed together; however, for performing the workshop exercises, we recommend using either Finch or Docker consistently for all the steps. +Optionally, you can use Finch, instead of Docker. Finch is an open source tool for local container development. It is available for macOS on Intel and Apple Silicon. Finch and Docker can be installed together; however, for performing the workshop exercises, we recommend using either Finch or Docker consistently for all the steps. - [Install Finch](https://runfinch.com/docs/managing-finch/macos/installation/) @@ -167,4 +167,3 @@ finch run public.ecr.aws/finch/hello-finch:latest - diff --git a/website/package-lock.json b/website/package-lock.json index 6b9dbbc..b69b772 100644 --- a/website/package-lock.json +++ b/website/package-lock.json @@ -3287,19 +3287,6 @@ "react-dom": ">= 15" } }, - "node_modules/@microlink/react-json-view/node_modules/flux": { - "version": "4.0.4", - "resolved": "https://registry.npmjs.org/flux/-/flux-4.0.4.tgz", - "integrity": "sha512-NCj3XlayA2UsapRpM7va6wU1+9rE5FIL7qoMcmxWHRzbp0yujihMBm9BBHZ1MDIk5h5o2Bl6eGiCe8rYELAmYw==", - "license": "BSD-3-Clause", - "dependencies": { - "fbemitter": "^3.0.0", - "fbjs": "^3.0.1" - }, - "peerDependencies": { - "react": "^15.0.2 || ^16.0.0 || ^17.0.0" - } - }, "node_modules/@nodelib/fs.scandir": { "version": "2.1.5", "resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.5.tgz", @@ -4758,6 +4745,7 @@ "version": "1.20.3", "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.20.3.tgz", "integrity": "sha512-7rAxByjUMqQ3/bHJy7D6OGXvx/MMc4IqBn/X0fcM1QUcAItpZrBEYhWGem+tzXH90c+G01ypMcYJBO9Y30203g==", + "license": "MIT", "dependencies": { "bytes": "3.1.2", "content-type": "~1.0.5", @@ -4781,6 +4769,7 @@ "version": "2.6.9", "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", "integrity": 
"sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "license": "MIT", "dependencies": { "ms": "2.0.0" } @@ -4788,7 +4777,8 @@ "node_modules/body-parser/node_modules/ms": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", - "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==" + "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==", + "license": "MIT" }, "node_modules/bonjour-service": { "version": "1.1.1", @@ -4900,6 +4890,7 @@ "version": "3.1.2", "resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz", "integrity": "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==", + "license": "MIT", "engines": { "node": ">= 0.8" } @@ -5507,6 +5498,7 @@ "version": "1.0.5", "resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.5.tgz", "integrity": "sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA==", + "license": "MIT", "engines": { "node": ">= 0.6" } @@ -6183,6 +6175,7 @@ "version": "2.0.0", "resolved": "https://registry.npmjs.org/depd/-/depd-2.0.0.tgz", "integrity": "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw==", + "license": "MIT", "engines": { "node": ">= 0.8" } @@ -6200,6 +6193,7 @@ "version": "1.2.0", "resolved": "https://registry.npmjs.org/destroy/-/destroy-1.2.0.tgz", "integrity": "sha512-2sJGJTaXIIaR1w4iJSNoN0hnMY7Gpc/n8D4qSCJw8QqFWXf7cuAgnEHxBpweaVcPevC2l3KpjYCx3NypQQgaJg==", + "license": "MIT", "engines": { "node": ">= 0.8", "npm": "1.2.8000 || >= 1.4.16" @@ -6447,7 +6441,8 @@ "node_modules/ee-first": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz", - "integrity": "sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==" + 
"integrity": "sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==", + "license": "MIT" }, "node_modules/electron-to-chromium": { "version": "1.4.571", @@ -6490,6 +6485,7 @@ "version": "2.0.0", "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-2.0.0.tgz", "integrity": "sha512-Q0n9HRi4m6JuGIV1eFlmvJB7ZEVxu93IrMyiMsGC0lrMJMWzRgx6WGquyfQgZVb31vhGgXnfmPNNXmxnOkRBrg==", + "license": "MIT", "engines": { "node": ">= 0.8" } @@ -6768,6 +6764,7 @@ "version": "1.8.1", "resolved": "https://registry.npmjs.org/etag/-/etag-1.8.1.tgz", "integrity": "sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg==", + "license": "MIT", "engines": { "node": ">= 0.6" } @@ -6826,6 +6823,7 @@ "version": "4.21.0", "resolved": "https://registry.npmjs.org/express/-/express-4.21.0.tgz", "integrity": "sha512-VqcNGcj/Id5ZT1LZ/cfihi3ttTn+NJmkli2eZADigjq29qTlWi/hAQ43t/VLPq8+UX06FCEx3ByOYet6ZFblng==", + "license": "MIT", "dependencies": { "accepts": "~1.3.8", "array-flatten": "1.1.1", @@ -7103,6 +7101,7 @@ "version": "1.3.1", "resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-1.3.1.tgz", "integrity": "sha512-6BN9trH7bp3qvnrRyzsBz+g3lZxTNZTbVO2EV1CS0WIcDbawYVdYvGflME/9QP0h0pYlCDBCTjYa9nZzMDpyxQ==", + "license": "MIT", "dependencies": { "debug": "2.6.9", "encodeurl": "~2.0.0", @@ -7120,6 +7119,7 @@ "version": "2.6.9", "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "license": "MIT", "dependencies": { "ms": "2.0.0" } @@ -7127,7 +7127,8 @@ "node_modules/finalhandler/node_modules/ms": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", - "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==" + "integrity": 
"sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==", + "license": "MIT" }, "node_modules/find-cache-dir": { "version": "4.0.0", @@ -7170,6 +7171,19 @@ "flat": "cli.js" } }, + "node_modules/flux": { + "version": "4.0.4", + "resolved": "https://registry.npmjs.org/flux/-/flux-4.0.4.tgz", + "integrity": "sha512-NCj3XlayA2UsapRpM7va6wU1+9rE5FIL7qoMcmxWHRzbp0yujihMBm9BBHZ1MDIk5h5o2Bl6eGiCe8rYELAmYw==", + "license": "BSD-3-Clause", + "dependencies": { + "fbemitter": "^3.0.0", + "fbjs": "^3.0.1" + }, + "peerDependencies": { + "react": "^15.0.2 || ^16.0.0 || ^17.0.0" + } + }, "node_modules/follow-redirects": { "version": "1.15.6", "resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.15.6.tgz", @@ -7384,6 +7398,7 @@ "version": "0.5.2", "resolved": "https://registry.npmjs.org/fresh/-/fresh-0.5.2.tgz", "integrity": "sha512-zJ2mQYM18rEFOudeV4GShTGIQ7RbzA7ozbU9I/XBpm7kqgMywgmylMwXHxZJmkVoYkna9d2pVXVXPdYTP9ej8Q==", + "license": "MIT", "engines": { "node": ">= 0.6" } @@ -8223,6 +8238,7 @@ "version": "2.0.0", "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-2.0.0.tgz", "integrity": "sha512-FtwrG/euBzaEjYeRqOgly7G0qviiXoJWnvEH2Z1plBdXgbyjv34pHTSb9zoeHMyDy33+DWy5Wt9Wo+TURtOYSQ==", + "license": "MIT", "dependencies": { "depd": "2.0.0", "inherits": "2.0.4", @@ -8316,6 +8332,7 @@ "version": "0.4.24", "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.24.tgz", "integrity": "sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA==", + "license": "MIT", "dependencies": { "safer-buffer": ">= 2.1.2 < 3" }, @@ -9591,6 +9608,7 @@ "version": "0.3.0", "resolved": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz", "integrity": "sha512-dq+qelQ9akHpcOl/gUVRTxVIOkAJ1wR3QAvb4RsVjS8oVoFjDGTc679wJYmUmknUF5HwMLOgb5O+a3KxfWapPQ==", + "license": "MIT", "engines": { "node": ">= 0.6" } @@ -9617,6 +9635,7 @@ "version": "1.0.3", "resolved": 
"https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.3.tgz", "integrity": "sha512-gaNvAS7TZ897/rVaZ0nMtAyxNyi/pdbjbAwUpFQpN70GqnVfOiXpeUUMKRBmzXaSQ8DdTX4/0ms62r2K+hE6mQ==", + "license": "MIT", "funding": { "url": "https://github.com/sponsors/sindresorhus" } @@ -11426,6 +11445,7 @@ "version": "1.6.0", "resolved": "https://registry.npmjs.org/mime/-/mime-1.6.0.tgz", "integrity": "sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==", + "license": "MIT", "bin": { "mime": "cli.js" }, @@ -11716,6 +11736,7 @@ "version": "1.13.2", "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.2.tgz", "integrity": "sha512-IRZSRuzJiynemAXPYtPe5BoI/RESNYR7TYm50MC5Mqbd3Jmw5y790sErYw3V6SryFJD64b74qQQs9wn5Bg/k3g==", + "license": "MIT", "engines": { "node": ">= 0.4" }, @@ -11760,6 +11781,7 @@ "version": "2.4.1", "resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.4.1.tgz", "integrity": "sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg==", + "license": "MIT", "dependencies": { "ee-first": "1.1.1" }, @@ -12078,7 +12100,8 @@ "node_modules/path-to-regexp": { "version": "0.1.10", "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.10.tgz", - "integrity": "sha512-7lf7qcQidTku0Gu3YDPc8DJ1q7OOucfa/BSsIwjuh56VU7katFvuM8hULfkwB3Fns/rsVF7PwPKVw1sl5KQS9w==" + "integrity": "sha512-7lf7qcQidTku0Gu3YDPc8DJ1q7OOucfa/BSsIwjuh56VU7katFvuM8hULfkwB3Fns/rsVF7PwPKVw1sl5KQS9w==", + "license": "MIT" }, "node_modules/path-type": { "version": "4.0.0", @@ -12984,6 +13007,7 @@ "version": "6.13.0", "resolved": "https://registry.npmjs.org/qs/-/qs-6.13.0.tgz", "integrity": "sha512-+38qI9SOr8tfZ4QmJNplMUxqjbe7LKvvZgWdExBOmd+egZTtjLB67Gu0HRX3u/XOq7UU2Nx6nsjvS16Z9uwfpg==", + "license": "BSD-3-Clause", "dependencies": { "side-channel": "^1.0.6" }, @@ -13057,6 +13081,7 @@ "version": "2.5.2", "resolved": 
"https://registry.npmjs.org/raw-body/-/raw-body-2.5.2.tgz", "integrity": "sha512-8zGqypfENjCIqGhgXToC8aB2r7YrBX+AQAfIPs/Mlk+BtPTztOvTS01NRW/3Eh60J+a48lt8qsCzirQ6loCVfA==", + "license": "MIT", "dependencies": { "bytes": "3.1.2", "http-errors": "2.0.0", @@ -14032,7 +14057,8 @@ "node_modules/safer-buffer": { "version": "2.1.2", "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz", - "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==" + "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==", + "license": "MIT" }, "node_modules/sass": { "version": "1.77.8", @@ -14241,6 +14267,7 @@ "version": "0.19.0", "resolved": "https://registry.npmjs.org/send/-/send-0.19.0.tgz", "integrity": "sha512-dW41u5VfLXu8SJh5bwRmyYUbAoSB3c9uQh6L8h/KtsFREPWpbX1lrljJo186Jc4nmci/sGUZ9a0a0J2zgfq2hw==", + "license": "MIT", "dependencies": { "debug": "2.6.9", "depd": "2.0.0", @@ -14264,6 +14291,7 @@ "version": "2.6.9", "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "license": "MIT", "dependencies": { "ms": "2.0.0" } @@ -14271,12 +14299,14 @@ "node_modules/send/node_modules/debug/node_modules/ms": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", - "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==" + "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==", + "license": "MIT" }, "node_modules/send/node_modules/encodeurl": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.2.tgz", "integrity": "sha512-TPJXq8JqFaVYm2CWmPvnP2Iyo4ZSM7/QKcSmuMLDObfpH5fi7RUGmd/rTDf+rut/saiDiQEeVTNgAmJEdAOx0w==", + "license": "MIT", "engines": { "node": ">= 0.8" } @@ -14284,7 
+14314,8 @@ "node_modules/send/node_modules/ms": { "version": "2.1.3", "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz", - "integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==" + "integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==", + "license": "MIT" }, "node_modules/serialize-javascript": { "version": "6.0.2", @@ -14447,6 +14478,7 @@ "version": "1.16.2", "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-1.16.2.tgz", "integrity": "sha512-VqpjJZKadQB/PEbEwvFdO43Ax5dFBZ2UECszz8bQ7pi7wt//PWe1P6MN7eCnjsatYtBT6EuiClbjSWP2WrIoTw==", + "license": "MIT", "dependencies": { "encodeurl": "~2.0.0", "escape-html": "~1.0.3", @@ -14483,7 +14515,8 @@ "node_modules/setprototypeof": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.2.0.tgz", - "integrity": "sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw==" + "integrity": "sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw==", + "license": "ISC" }, "node_modules/shallow-clone": { "version": "3.0.1", @@ -14554,6 +14587,7 @@ "version": "1.0.6", "resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.0.6.tgz", "integrity": "sha512-fDW/EZ6Q9RiO8eFG8Hj+7u/oW+XrPTIChwCOM2+th2A6OblDtYYIpve9m+KvI9Z4C9qSEXlaGR6bTEYHReuglA==", + "license": "MIT", "dependencies": { "call-bind": "^1.0.7", "es-errors": "^1.3.0", @@ -14765,6 +14799,7 @@ "version": "2.0.1", "resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz", "integrity": "sha512-RwNA9Z/7PrK06rYLIzFMlaF+l73iwpzsqRIFgbMLbTcLD6cOao82TaWefPXQvB2fOC4AjuYSEndS7N/mTCbkdQ==", + "license": "MIT", "engines": { "node": ">= 0.8" } @@ -15245,6 +15280,7 @@ "version": "1.0.1", "resolved": "https://registry.npmjs.org/toidentifier/-/toidentifier-1.0.1.tgz", "integrity": 
"sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA==", + "license": "MIT", "engines": { "node": ">=0.6" } @@ -15306,6 +15342,7 @@ "version": "1.6.18", "resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.18.tgz", "integrity": "sha512-TkRKr9sUTxEH8MdfuCSP7VizJyzRNMjj2J2do2Jr3Kym598JVdEksuzPQCnlFPW4ky9Q+iA+ma9BGm06XQBy8g==", + "license": "MIT", "dependencies": { "media-typer": "0.3.0", "mime-types": "~2.1.24" @@ -15537,6 +15574,7 @@ "version": "1.0.0", "resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz", "integrity": "sha512-pjy2bYhSsufwWlKwPc+l3cN7+wuJlK6uz0YdJEOlQDbl6jo/YlPi4mb8agUkVC8BF7V8NuzeyPNqRksA3hztKQ==", + "license": "MIT", "engines": { "node": ">= 0.8" } diff --git a/website/package.json b/website/package.json index 13d8962..6ea00af 100644 --- a/website/package.json +++ b/website/package.json @@ -50,5 +50,6 @@ }, "devDependencies": { "@docusaurus/module-type-aliases": "^3.0.0" - } + }, + "packageManager": "yarn@1.22.22+sha512.a6b2f7906b721bba3d67d4aff083df04dad64c399707841b7acf00f6b133b7ac24255f2652fa22ae3534329dc6180534e98d17432037ff6fd140556e2bb3137e" } diff --git a/website/yarn.lock b/website/yarn.lock index c1d55dd..be4df9c 100644 --- a/website/yarn.lock +++ b/website/yarn.lock @@ -75,6 +75,11 @@ "@algolia/requester-common" "4.20.0" "@algolia/transporter" "4.20.0" +"@algolia/client-common@5.1.1": + version "5.1.1" + resolved "https://registry.npmjs.org/@algolia/client-common/-/client-common-5.1.1.tgz" + integrity sha512-jkQNQbGY+XQB3Eln7wqqdUZKBzG8lETcsaUk5gcMc6iIwyN/qW0v0fhpKPH+Kli+BImLxo0CWk12CvVvx2exWA== + "@algolia/client-personalization@4.20.0": version "4.20.0" resolved "https://registry.npmjs.org/@algolia/client-personalization/-/client-personalization-4.20.0.tgz" @@ -84,6 +89,15 @@ "@algolia/requester-common" "4.20.0" "@algolia/transporter" "4.20.0" +"@algolia/client-search@>= 4.9.1 < 6": + version "5.1.1" + resolved 
"https://registry.npmjs.org/@algolia/client-search/-/client-search-5.1.1.tgz" + integrity sha512-SFpb3FI/VouGou/vpuS7qeCA5Y/KpV42P6CEA/1MZQtl/xJkl6PVjikb+Q9YadeHi2jtDV/aQ6PyiVDnX4PQcw== + dependencies: + "@algolia/client-common" "5.1.1" + "@algolia/requester-browser-xhr" "5.1.1" + "@algolia/requester-node-http" "5.1.1" + "@algolia/client-search@4.20.0": version "4.20.0" resolved "https://registry.npmjs.org/@algolia/client-search/-/client-search-4.20.0.tgz" @@ -117,6 +131,13 @@ dependencies: "@algolia/requester-common" "4.20.0" +"@algolia/requester-browser-xhr@5.1.1": + version "5.1.1" + resolved "https://registry.npmjs.org/@algolia/requester-browser-xhr/-/requester-browser-xhr-5.1.1.tgz" + integrity sha512-NXmN1ujJCj5GlJQaMK6DbdiXdcf6nhRef/X40lu9TYi71q9xTo/5RPMI0K2iOp6g07S26BrXFOz6RSV3Ny4LLw== + dependencies: + "@algolia/client-common" "5.1.1" + "@algolia/requester-common@4.20.0": version "4.20.0" resolved "https://registry.npmjs.org/@algolia/requester-common/-/requester-common-4.20.0.tgz" @@ -129,6 +150,13 @@ dependencies: "@algolia/requester-common" "4.20.0" +"@algolia/requester-node-http@5.1.1": + version "5.1.1" + resolved "https://registry.npmjs.org/@algolia/requester-node-http/-/requester-node-http-5.1.1.tgz" + integrity sha512-xwrgnNTIzgxDEx6zuCKSKTPzQLA8fL/WZiVB6fRpIu5agLMjoAi0cWA5YSDbo+2FFxqVgLqKY/Jz6mKmWtY15Q== + dependencies: + "@algolia/client-common" "5.1.1" + "@algolia/transporter@4.20.0": version "4.20.0" resolved "https://registry.npmjs.org/@algolia/transporter/-/transporter-4.20.0.tgz" @@ -159,7 +187,7 @@ resolved "https://registry.npmjs.org/@babel/compat-data/-/compat-data-7.23.3.tgz" integrity sha512-BmR4bWbDIoFJmJ9z2cZ8Gmm2MXgEDgjdWgpKmKWUt54UGFJdlj31ECtbaDvCG/qVdG3AQ1SfpZEs01lUFbzLOQ== -"@babel/core@^7.19.6", "@babel/core@^7.22.9": +"@babel/core@^7.0.0", "@babel/core@^7.0.0-0", "@babel/core@^7.0.0-0 || ^8.0.0-0 <8.0.0", "@babel/core@^7.12.0", "@babel/core@^7.13.0", "@babel/core@^7.19.6", "@babel/core@^7.22.9", "@babel/core@^7.4.0 || ^8.0.0-0 
<8.0.0": version "7.23.3" resolved "https://registry.npmjs.org/@babel/core/-/core-7.23.3.tgz" integrity sha512-Jg+msLuNuCJDyBvFv5+OKOUjWMZgd85bKjbICd3zWrKAo+bJ49HJufi7CQE0q0uR8NGyO6xkCACScNqyjHSZew== @@ -1210,7 +1238,7 @@ "@docsearch/css" "3.5.2" algoliasearch "^4.19.1" -"@docusaurus/core@3.0.0", "@docusaurus/core@^3.0.0": +"@docusaurus/core@^2.0.0-beta || ^3.0.0-alpha", "@docusaurus/core@^3.0.0", "@docusaurus/core@3.0.0": version "3.0.0" resolved "https://registry.npmjs.org/@docusaurus/core/-/core-3.0.0.tgz" integrity sha512-bHWtY55tJTkd6pZhHrWz1MpWuwN4edZe0/UWgFF7PW/oJeDZvLSXKqwny3L91X1/LGGoypBGkeZn8EOuKeL4yQ== @@ -1336,7 +1364,7 @@ vfile "^6.0.1" webpack "^5.88.1" -"@docusaurus/module-type-aliases@3.0.0", "@docusaurus/module-type-aliases@^3.0.0": +"@docusaurus/module-type-aliases@^3.0.0", "@docusaurus/module-type-aliases@3.0.0": version "3.0.0" resolved "https://registry.npmjs.org/@docusaurus/module-type-aliases/-/module-type-aliases-3.0.0.tgz" integrity sha512-CfC6CgN4u/ce+2+L1JdsHNyBd8yYjl4De2B2CBj2a9F7WuJ5RjV1ciuU7KDg8uyju+NRVllRgvJvxVUjCdkPiw== @@ -1350,7 +1378,7 @@ react-helmet-async "*" react-loadable "npm:@docusaurus/react-loadable@5.5.2" -"@docusaurus/plugin-content-blog@3.0.0", "@docusaurus/plugin-content-blog@^3.0.0": +"@docusaurus/plugin-content-blog@^3.0.0", "@docusaurus/plugin-content-blog@3.0.0": version "3.0.0" resolved "https://registry.npmjs.org/@docusaurus/plugin-content-blog/-/plugin-content-blog-3.0.0.tgz" integrity sha512-iA8Wc3tIzVnROJxrbIsU/iSfixHW16YeW9RWsBw7hgEk4dyGsip9AsvEDXobnRq3lVv4mfdgoS545iGWf1Ip9w== @@ -1493,7 +1521,7 @@ "@types/react" "*" prop-types "^15.6.2" -"@docusaurus/theme-classic@3.0.0", "@docusaurus/theme-classic@^3.0.0": +"@docusaurus/theme-classic@^3.0.0", "@docusaurus/theme-classic@3.0.0": version "3.0.0" resolved "https://registry.npmjs.org/@docusaurus/theme-classic/-/theme-classic-3.0.0.tgz" integrity sha512-wWOHSrKMn7L4jTtXBsb5iEJ3xvTddBye5PjYBnWiCkTAlhle2yMdc4/qRXW35Ot+OV/VXu6YFG8XVUJEl99z0A== @@ -1575,7 +1603,7 @@ 
fs-extra "^11.1.1" tslib "^2.6.0" -"@docusaurus/types@3.0.0": +"@docusaurus/types@*", "@docusaurus/types@3.0.0": version "3.0.0" resolved "https://registry.npmjs.org/@docusaurus/types/-/types-3.0.0.tgz" integrity sha512-Qb+l/hmCOVemReuzvvcFdk84bUmUFyD0Zi81y651ie3VwMrXqC7C0E7yZLKMOsLj/vkqsxHbtkAuYMI89YzNzg== @@ -1655,7 +1683,7 @@ resolved "https://registry.npmjs.org/@fortawesome/fontawesome-common-types/-/fontawesome-common-types-6.4.2.tgz" integrity sha512-1DgP7f+XQIJbLFCTX1V2QnxVmpLdKdzzo2k8EmvDOePfchaIGQ9eCHj2up3/jNEbZuBqel5OxiaOJf37TWauRA== -"@fortawesome/fontawesome-svg-core@^6.4.2": +"@fortawesome/fontawesome-svg-core@^6.4.2", "@fortawesome/fontawesome-svg-core@~1 || ~6": version "6.4.2" resolved "https://registry.npmjs.org/@fortawesome/fontawesome-svg-core/-/fontawesome-svg-core-6.4.2.tgz" integrity sha512-gjYDSKv3TrM2sLTOKBc5rH9ckje8Wrwgx1CxAPbN5N3Fm4prfi7NsJVWd1jklp7i5uSCVwhZS5qlhMXqLrpAIg== @@ -1806,7 +1834,7 @@ "@nodelib/fs.stat" "2.0.5" run-parallel "^1.1.9" -"@nodelib/fs.stat@2.0.5", "@nodelib/fs.stat@^2.0.2": +"@nodelib/fs.stat@^2.0.2", "@nodelib/fs.stat@2.0.5": version "2.0.5" resolved "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-2.0.5.tgz" integrity sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A== @@ -1949,7 +1977,7 @@ "@svgr/babel-plugin-transform-react-native-svg" "^6.5.1" "@svgr/babel-plugin-transform-svg-component" "^6.5.1" -"@svgr/core@^6.5.1": +"@svgr/core@*", "@svgr/core@^6.0.0", "@svgr/core@^6.5.1": version "6.5.1" resolved "https://registry.npmjs.org/@svgr/core/-/core-6.5.1.tgz" integrity sha512-/xdLSWxK5QkqG524ONSjvg3V/FkNyCv538OIBdQqPNaAta3AsXj/Bd2FbvR87yMbXO2hFSWiAe/Q6IkVPDw+mw== @@ -2242,7 +2270,7 @@ "@types/history" "^4.7.11" "@types/react" "*" -"@types/react@*": +"@types/react@*", "@types/react@>= 16.8.0 < 19.0.0", "@types/react@>=16": version "18.3.4" resolved "https://registry.npmjs.org/@types/react/-/react-18.3.4.tgz" integrity 
sha512-J7W30FTdfCxDDjmfRM+/JqLHBIyl7xUIp9kwK637FGmY7+mkSFSe6L4jpZzhj5QMfLssSDP4/i75AKkrdC7/Jw== @@ -2327,7 +2355,7 @@ resolved "https://registry.npmjs.org/@ungap/structured-clone/-/structured-clone-1.2.0.tgz" integrity sha512-zuVdFrMJiuCDQUMCzQaD6KL28MjnqqN8XnAqiEq9PNm/hCPTSGfrXCOfwj1ow4LFb/tNymJPwsNbVePc1xFqrQ== -"@webassemblyjs/ast@1.12.1", "@webassemblyjs/ast@^1.12.1": +"@webassemblyjs/ast@^1.12.1", "@webassemblyjs/ast@1.12.1": version "1.12.1" resolved "https://registry.npmjs.org/@webassemblyjs/ast/-/ast-1.12.1.tgz" integrity sha512-EKfMUOPRRUTy5UII4qJDGPpqfwjOmZ5jeGFwid9mnoqIFK+e0vqoi1qH56JpmZSzEL53jKnNzScdmftJyG5xWg== @@ -2428,7 +2456,7 @@ "@webassemblyjs/wasm-gen" "1.12.1" "@webassemblyjs/wasm-parser" "1.12.1" -"@webassemblyjs/wasm-parser@1.12.1", "@webassemblyjs/wasm-parser@^1.12.1": +"@webassemblyjs/wasm-parser@^1.12.1", "@webassemblyjs/wasm-parser@1.12.1": version "1.12.1" resolved "https://registry.npmjs.org/@webassemblyjs/wasm-parser/-/wasm-parser-1.12.1.tgz" integrity sha512-xikIi7c2FHXysxXe3COrVUPSheuBtpcfhbpFj4gmu7KRLYOzANztwUU0IbsqvMqzuNK2+glRGWCEqZo1WCLyAQ== @@ -2481,7 +2509,7 @@ acorn-walk@^8.0.0: resolved "https://registry.npmjs.org/acorn-walk/-/acorn-walk-8.2.0.tgz" integrity sha512-k+iyHEuPgSw6SbuDpGQM+06HQUa04DZ3o+F6CSzXMvvI5KMvnaEqXe+YVe555R9nn6GPt404fos4wcgpw12SDA== -acorn@^8.0.0, acorn@^8.0.4, acorn@^8.7.1, acorn@^8.8.2: +"acorn@^6.0.0 || ^7.0.0 || ^8.0.0", acorn@^8, acorn@^8.0.0, acorn@^8.0.4, acorn@^8.7.1, acorn@^8.8.2: version "8.11.2" resolved "https://registry.npmjs.org/acorn/-/acorn-8.11.2.tgz" integrity sha512-nc0Axzp/0FILLEVsm4fNwLCwMttvhEI263QtVPQcbpfZZ3ts0hLsZGOpE6czNlid7CJ9MlyH8reXkpsf3YUY4w== @@ -2506,7 +2534,12 @@ ajv-formats@^2.1.1: dependencies: ajv "^8.0.0" -ajv-keywords@^3.4.1, ajv-keywords@^3.5.2: +ajv-keywords@^3.4.1: + version "3.5.2" + resolved "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-3.5.2.tgz" + integrity sha512-5p6WTN0DdTGVQk6VjcEju19IgaHudalcfabD7yhDGeA6bcQnmL+CpveLJq/3hvfwd1aof6L386Ougkx6RfyMIQ== + 
+ajv-keywords@^3.5.2: version "3.5.2" resolved "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-3.5.2.tgz" integrity sha512-5p6WTN0DdTGVQk6VjcEju19IgaHudalcfabD7yhDGeA6bcQnmL+CpveLJq/3hvfwd1aof6L386Ougkx6RfyMIQ== @@ -2518,7 +2551,7 @@ ajv-keywords@^5.1.0: dependencies: fast-deep-equal "^3.1.3" -ajv@^6.12.2, ajv@^6.12.5: +ajv@^6.12.2, ajv@^6.12.5, ajv@^6.9.1: version "6.12.6" resolved "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz" integrity sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g== @@ -2528,7 +2561,7 @@ ajv@^6.12.2, ajv@^6.12.5: json-schema-traverse "^0.4.1" uri-js "^4.2.2" -ajv@^8.0.0, ajv@^8.9.0: +ajv@^8.0.0, ajv@^8.8.2, ajv@^8.9.0: version "8.12.0" resolved "https://registry.npmjs.org/ajv/-/ajv-8.12.0.tgz" integrity sha512-sRu1kpcO9yLtYxBKvqfTeh9KzZEwO3STyX1HT+4CaDzC6HpTGYhIhPIzj9XuKU7KYDwnaeh5hcOwjy1QuJzBPA== @@ -2545,7 +2578,7 @@ algoliasearch-helper@^3.13.3: dependencies: "@algolia/events" "^4.0.1" -algoliasearch@^4.18.0, algoliasearch@^4.19.1: +algoliasearch@^4.18.0, algoliasearch@^4.19.1, "algoliasearch@>= 3.1 < 6", "algoliasearch@>= 4.9.1 < 6": version "4.20.0" resolved "https://registry.npmjs.org/algoliasearch/-/algoliasearch-4.20.0.tgz" integrity sha512-y+UHEjnOItoNy0bYO+WWmLWBlPwDjKHW6mNHrPi0NkuhpQOOEbrkwQH/wgKFDLh7qlKjzoKeiRtlpewDPDG23g== @@ -2631,16 +2664,16 @@ argparse@^2.0.1: resolved "https://registry.npmjs.org/argparse/-/argparse-2.0.1.tgz" integrity sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q== -array-flatten@1.1.1: - version "1.1.1" - resolved "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz" - integrity sha512-PCVAQswWemu6UdxsDFFX/+gVeYqKAod3D3UVm91jHwynguOwAvYPhx8nNlM++NqRcK6CxxpUafjmhIdKiHibqg== - array-flatten@^2.1.2: version "2.1.2" resolved "https://registry.npmjs.org/array-flatten/-/array-flatten-2.1.2.tgz" integrity 
sha512-hNfzcOV8W4NdualtqBFPyVO+54DSJuZGY9qT4pRroB6S9e3iiido2ISIC5h9R2sPJ8H3FHCIiEnsv1lPXO3KtQ== +array-flatten@1.1.1: + version "1.1.1" + resolved "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz" + integrity sha512-PCVAQswWemu6UdxsDFFX/+gVeYqKAod3D3UVm91jHwynguOwAvYPhx8nNlM++NqRcK6CxxpUafjmhIdKiHibqg== + array-union@^2.1.0: version "2.1.0" resolved "https://registry.npmjs.org/array-union/-/array-union-2.1.0.tgz" @@ -2758,7 +2791,7 @@ binary-extensions@^2.0.0: body-parser@1.20.3: version "1.20.3" - resolved "https://registry.yarnpkg.com/body-parser/-/body-parser-1.20.3.tgz#1953431221c6fb5cd63c4b36d53fab0928e548c6" + resolved "https://registry.npmjs.org/body-parser/-/body-parser-1.20.3.tgz" integrity sha512-7rAxByjUMqQ3/bHJy7D6OGXvx/MMc4IqBn/X0fcM1QUcAItpZrBEYhWGem+tzXH90c+G01ypMcYJBO9Y30203g== dependencies: bytes "3.1.2" @@ -2832,7 +2865,7 @@ braces@^3.0.3, braces@~3.0.2: dependencies: fill-range "^7.1.1" -browserslist@^4.0.0, browserslist@^4.18.1, browserslist@^4.21.10, browserslist@^4.21.4, browserslist@^4.21.9, browserslist@^4.22.1: +browserslist@^4.0.0, browserslist@^4.18.1, browserslist@^4.21.10, browserslist@^4.21.4, browserslist@^4.21.9, browserslist@^4.22.1, "browserslist@>= 4.21.0": version "4.22.1" resolved "https://registry.npmjs.org/browserslist/-/browserslist-4.22.1.tgz" integrity sha512-FEVc202+2iuClEhZhrWy6ZiAcRLvNMyYcxZ8raemul1DYVOVdFsbqckWLdsixQZCpJlwe77Z3UTalE7jsjnKfQ== @@ -3001,7 +3034,7 @@ cheerio@^1.0.0-rc.12: parse5 "^7.0.0" parse5-htmlparser2-tree-adapter "^7.0.0" -"chokidar@>=3.0.0 <4.0.0", chokidar@^3.4.2, chokidar@^3.5.3: +chokidar@^3.4.2, chokidar@^3.5.3, "chokidar@>=3.0.0 <4.0.0": version "3.5.3" resolved "https://registry.npmjs.org/chokidar/-/chokidar-3.5.3.tgz" integrity sha512-Dr3sfKRP6oTcjf2JmUmFJfeVMvXBdegxB0iVQ5eb2V10uFJUCAS8OByZdVAyVb8xXNz3GjjTgj9kLWsZTqE6kw== @@ -3095,16 +3128,16 @@ color-convert@^2.0.1: dependencies: color-name "~1.1.4" -color-name@1.1.3: - version "1.1.3" - resolved 
"https://registry.npmjs.org/color-name/-/color-name-1.1.3.tgz" - integrity sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw== - color-name@~1.1.4: version "1.1.4" resolved "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz" integrity sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA== +color-name@1.1.3: + version "1.1.3" + resolved "https://registry.npmjs.org/color-name/-/color-name-1.1.3.tgz" + integrity sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw== + colord@^2.9.1: version "2.9.3" resolved "https://registry.npmjs.org/colord/-/colord-2.9.3.tgz" @@ -3491,20 +3524,27 @@ debounce@^1.2.1: resolved "https://registry.npmjs.org/debounce/-/debounce-1.2.1.tgz" integrity sha512-XRRe6Glud4rd/ZGQfiV1ruXSfbvfJedlV9Y6zOlP+2K04vBYiJEte6stfFkCP03aMnY5tsipamumUjL14fofug== -debug@2.6.9, debug@^2.6.0: +debug@^2.6.0: version "2.6.9" resolved "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz" integrity sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA== dependencies: ms "2.0.0" -debug@4, debug@^4.0.0, debug@^4.1.0, debug@^4.1.1: +debug@^4.0.0, debug@^4.1.0, debug@^4.1.1, debug@4: version "4.3.4" resolved "https://registry.npmjs.org/debug/-/debug-4.3.4.tgz" integrity sha512-PRWFHuSU3eDtQJPvnNY7Jcket1j0t5OuOsFzPPzsekD52Zl8qUfFIPEiswXqIvHWGVHOgX+7G/vCNNhehwxfkQ== dependencies: ms "2.1.2" +debug@2.6.9: + version "2.6.9" + resolved "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz" + integrity sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA== + dependencies: + ms "2.0.0" + decode-named-character-reference@^1.0.0: version "1.0.2" resolved "https://registry.npmjs.org/decode-named-character-reference/-/decode-named-character-reference-1.0.2.tgz" @@ -3583,16 +3623,16 @@ delayed-stream@~1.0.0: resolved 
"https://registry.npmjs.org/delayed-stream/-/delayed-stream-1.0.0.tgz" integrity sha512-ZySD7Nf91aLB0RxL4KGrKHBXl7Eds1DAmEdcoVawXnLD7SDhpNgtuII2aAkg7a7QS41jxPSZ17p4VdGnMHk3MQ== -depd@2.0.0: - version "2.0.0" - resolved "https://registry.npmjs.org/depd/-/depd-2.0.0.tgz" - integrity sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw== - depd@~1.1.2: version "1.1.2" resolved "https://registry.npmjs.org/depd/-/depd-1.1.2.tgz" integrity sha512-7emPTl6Dpo6JRXOXjLRxck+FlLRX5847cLKEn00PLAgc3g2hTZZgr+e4c2v6QpSmLeFP3n5yUo7ft6avBK/5jQ== +depd@2.0.0: + version "2.0.0" + resolved "https://registry.npmjs.org/depd/-/depd-2.0.0.tgz" + integrity sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw== + dequal@^2.0.0: version "2.0.3" resolved "https://registry.npmjs.org/dequal/-/dequal-2.0.3.tgz" @@ -3796,7 +3836,7 @@ encodeurl@~1.0.2: encodeurl@~2.0.0: version "2.0.0" - resolved "https://registry.yarnpkg.com/encodeurl/-/encodeurl-2.0.0.tgz#7b8ea898077d7e409d3ac45474ea38eaf0857a58" + resolved "https://registry.npmjs.org/encodeurl/-/encodeurl-2.0.0.tgz" integrity sha512-Q0n9HRi4m6JuGIV1eFlmvJB7ZEVxu93IrMyiMsGC0lrMJMWzRgx6WGquyfQgZVb31vhGgXnfmPNNXmxnOkRBrg== enhanced-resolve@^5.17.1: @@ -4005,7 +4045,7 @@ execa@^5.0.0: express@^4.17.3, express@^4.21.0: version "4.21.0" - resolved "https://registry.yarnpkg.com/express/-/express-4.21.0.tgz#d57cb706d49623d4ac27833f1cbc466b668eb915" + resolved "https://registry.npmjs.org/express/-/express-4.21.0.tgz" integrity sha512-VqcNGcj/Id5ZT1LZ/cfihi3ttTn+NJmkli2eZADigjq29qTlWi/hAQ43t/VLPq8+UX06FCEx3ByOYet6ZFblng== dependencies: accepts "~1.3.8" @@ -4133,7 +4173,7 @@ feed@^4.2.2: dependencies: xml-js "^1.6.11" -file-loader@^6.2.0: +file-loader@*, file-loader@^6.2.0: version "6.2.0" resolved "https://registry.npmjs.org/file-loader/-/file-loader-6.2.0.tgz" integrity sha512-qo3glqyTa61Ytg4u73GultjHGjdRyig3tG6lPtyX/jOEJvHif9uB0/OCI2Kif6ctF3caQTW2G5gym21oAsI4pw== @@ 
-4155,7 +4195,7 @@ fill-range@^7.1.1: finalhandler@1.3.1: version "1.3.1" - resolved "https://registry.yarnpkg.com/finalhandler/-/finalhandler-1.3.1.tgz#0c575f1d1d324ddd1da35ad7ece3df7d19088019" + resolved "https://registry.npmjs.org/finalhandler/-/finalhandler-1.3.1.tgz" integrity sha512-6BN9trH7bp3qvnrRyzsBz+g3lZxTNZTbVO2EV1CS0WIcDbawYVdYvGflME/9QP0h0pYlCDBCTjYa9nZzMDpyxQ== dependencies: debug "2.6.9" @@ -4297,11 +4337,6 @@ fs.realpath@^1.0.0: resolved "https://registry.npmjs.org/fs.realpath/-/fs.realpath-1.0.0.tgz" integrity sha512-OO0pH2lK6a0hZnAdau5ItzHPI6pUlvI7jMVnxUQRtw4owF2wk8lOSabtGDCTP4Ggrg2MbGnWO9X8K1t4+fGMDw== -fsevents@~2.3.2: - version "2.3.3" - resolved "https://registry.npmjs.org/fsevents/-/fsevents-2.3.3.tgz" - integrity sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw== - function-bind@^1.1.2: version "1.1.2" resolved "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz" @@ -4444,16 +4479,16 @@ got@^12.1.0: p-cancelable "^3.0.0" responselike "^3.0.0" -graceful-fs@4.2.10: - version "4.2.10" - resolved "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.10.tgz" - integrity sha512-9ByhssR2fPVsNZj478qUUbKfmL0+t5BDVyjShtyZZLiK7ZDAArFFfopyOTj0M05wE2tJPisA4iTnnXl2YoPvOA== - graceful-fs@^4.1.2, graceful-fs@^4.1.6, graceful-fs@^4.2.0, graceful-fs@^4.2.11, graceful-fs@^4.2.4, graceful-fs@^4.2.6, graceful-fs@^4.2.9: version "4.2.11" resolved "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.11.tgz" integrity sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ== +graceful-fs@4.2.10: + version "4.2.10" + resolved "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.10.tgz" + integrity sha512-9ByhssR2fPVsNZj478qUUbKfmL0+t5BDVyjShtyZZLiK7ZDAArFFfopyOTj0M05wE2tJPisA4iTnnXl2YoPvOA== + gray-matter@^4.0.3: version "4.0.3" resolved "https://registry.npmjs.org/gray-matter/-/gray-matter-4.0.3.tgz" @@ -4750,6 +4785,16 @@ http-deceiver@^1.2.7: resolved 
"https://registry.npmjs.org/http-deceiver/-/http-deceiver-1.2.7.tgz" integrity sha512-LmpOGxTfbpgtGVxJrj5k7asXHCgNZp5nLfp+hWc8QQRqtb7fUy6kRY3BO1h9ddF6yIPYUARgxGOwB42DnxIaNw== +http-errors@~1.6.2: + version "1.6.3" + resolved "https://registry.npmjs.org/http-errors/-/http-errors-1.6.3.tgz" + integrity sha512-lks+lVC8dgGyh97jxvxeYTWQFvh4uw4yC12gVl63Cg30sjPX4wuGcdkICVXDAESr6OJGjqGA8Iz5mkeN6zlD7A== + dependencies: + depd "~1.1.2" + inherits "2.0.3" + setprototypeof "1.1.0" + statuses ">= 1.4.0 < 2" + http-errors@2.0.0: version "2.0.0" resolved "https://registry.npmjs.org/http-errors/-/http-errors-2.0.0.tgz" @@ -4761,16 +4806,6 @@ http-errors@2.0.0: statuses "2.0.1" toidentifier "1.0.1" -http-errors@~1.6.2: - version "1.6.3" - resolved "https://registry.npmjs.org/http-errors/-/http-errors-1.6.3.tgz" - integrity sha512-lks+lVC8dgGyh97jxvxeYTWQFvh4uw4yC12gVl63Cg30sjPX4wuGcdkICVXDAESr6OJGjqGA8Iz5mkeN6zlD7A== - dependencies: - depd "~1.1.2" - inherits "2.0.3" - setprototypeof "1.1.0" - statuses ">= 1.4.0 < 2" - http-parser-js@>=0.5.1: version "0.5.8" resolved "https://registry.npmjs.org/http-parser-js/-/http-parser-js-0.5.8.tgz" @@ -4879,7 +4914,7 @@ inflight@^1.0.4: once "^1.3.0" wrappy "1" -inherits@2, inherits@2.0.4, inherits@^2.0.1, inherits@^2.0.3, inherits@~2.0.3: +inherits@^2.0.1, inherits@^2.0.3, inherits@~2.0.3, inherits@2, inherits@2.0.4: version "2.0.4" resolved "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz" integrity sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ== @@ -4889,16 +4924,16 @@ inherits@2.0.3: resolved "https://registry.npmjs.org/inherits/-/inherits-2.0.3.tgz" integrity sha512-x00IRNXNy63jwGkJmzPigoySHbaqpNuzKbBOmzK+g2OdZpQ9w+sxCN+VSB3ja7IAge2OP2qpfxTjeNcyjmW1uw== -ini@2.0.0: - version "2.0.0" - resolved "https://registry.npmjs.org/ini/-/ini-2.0.0.tgz" - integrity sha512-7PnF4oN3CvZF23ADhA5wRaYEQpJ8qygSkbtTXWBeXWXmEVRXK+1ITciHWwHhsjv1TmW0MgacIv6hEi5pX5NQdA== - ini@^1.3.4, ini@^1.3.5, 
ini@~1.3.0: version "1.3.8" resolved "https://registry.npmjs.org/ini/-/ini-1.3.8.tgz" integrity sha512-JV/yugV2uzW5iMRSiZAyDtQd+nxtUnjeLt0acNdw98kKLrvuRVyB80tsREOE7yvGVgalhZ6RNXCmEHkUKBKxew== +ini@2.0.0: + version "2.0.0" + resolved "https://registry.npmjs.org/ini/-/ini-2.0.0.tgz" + integrity sha512-7PnF4oN3CvZF23ADhA5wRaYEQpJ8qygSkbtTXWBeXWXmEVRXK+1ITciHWwHhsjv1TmW0MgacIv6hEi5pX5NQdA== + inline-style-parser@0.1.1: version "0.1.1" resolved "https://registry.npmjs.org/inline-style-parser/-/inline-style-parser-0.1.1.tgz" @@ -4921,16 +4956,16 @@ invariant@^2.2.4: dependencies: loose-envify "^1.0.0" -ipaddr.js@1.9.1: - version "1.9.1" - resolved "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz" - integrity sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g== - ipaddr.js@^2.0.1: version "2.2.0" resolved "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-2.2.0.tgz" integrity sha512-Ag3wB2o37wslZS19hZqorUnrnzSkpOVy+IiiDEiTqNubEYpYuHWIf6K4psgN2ZWKExS4xhVCrRVfb/wfW8fWJA== +ipaddr.js@1.9.1: + version "1.9.1" + resolved "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz" + integrity sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g== + is-alphabetical@^2.0.0: version "2.0.1" resolved "https://registry.npmjs.org/is-alphabetical/-/is-alphabetical-2.0.1.tgz" @@ -5101,16 +5136,16 @@ is-yarn-global@^0.4.0: resolved "https://registry.npmjs.org/is-yarn-global/-/is-yarn-global-0.4.1.tgz" integrity sha512-/kppl+R+LO5VmhYSEWARUFjodS25D68gvj8W7z0I7OWhUla5xWu8KL6CtB2V0R6yqhnRgbcaREMr4EEM6htLPQ== -isarray@0.0.1: - version "0.0.1" - resolved "https://registry.npmjs.org/isarray/-/isarray-0.0.1.tgz" - integrity sha512-D2S+3GLxWH+uhrNEcoh/fnmYeP8E8/zHl644d/jdA0g2uyXvy3sb0qxotE+ne0LtccHknQzWwZEzhak7oJ0COQ== - isarray@~1.0.0: version "1.0.0" resolved "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz" integrity 
sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ== +isarray@0.0.1: + version "0.0.1" + resolved "https://registry.npmjs.org/isarray/-/isarray-0.0.1.tgz" + integrity sha512-D2S+3GLxWH+uhrNEcoh/fnmYeP8E8/zHl644d/jdA0g2uyXvy3sb0qxotE+ne0LtccHknQzWwZEzhak7oJ0COQ== + isexe@^2.0.0: version "2.0.0" resolved "https://registry.npmjs.org/isexe/-/isexe-2.0.0.tgz" @@ -5641,7 +5676,7 @@ memoize-one@^5.1.1: merge-descriptors@1.0.3: version "1.0.3" - resolved "https://registry.yarnpkg.com/merge-descriptors/-/merge-descriptors-1.0.3.tgz#d80319a65f3c7935351e5cfdac8f9318504dbed5" + resolved "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.3.tgz" integrity sha512-gaNvAS7TZ897/rVaZ0nMtAyxNyi/pdbjbAwUpFQpN70GqnVfOiXpeUUMKRBmzXaSQ8DdTX4/0ms62r2K+hE6mQ== merge-stream@^2.0.0: @@ -6083,7 +6118,7 @@ micromatch@^4.0.2, micromatch@^4.0.4, micromatch@^4.0.5: braces "^3.0.3" picomatch "^2.3.1" -mime-db@1.52.0, "mime-db@>= 1.43.0 < 2": +"mime-db@>= 1.43.0 < 2", mime-db@1.52.0: version "1.52.0" resolved "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz" integrity sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg== @@ -6093,13 +6128,6 @@ mime-db@~1.33.0: resolved "https://registry.npmjs.org/mime-db/-/mime-db-1.33.0.tgz" integrity sha512-BHJ/EKruNIqJf/QahvxwQZXKygOQ256myeN/Ew+THcAa5q+PjyTTMMeNQC4DZw5AwfvelsUrA6B67NKMqXDbzQ== -mime-types@2.1.18: - version "2.1.18" - resolved "https://registry.npmjs.org/mime-types/-/mime-types-2.1.18.tgz" - integrity sha512-lc/aahn+t4/SWV/qcmumYjymLsWfN3ELhpmVuUFjgsORruuZPVSwAQryq+HHGvO/SI2KVX26bx+En+zhM8g8hQ== - dependencies: - mime-db "~1.33.0" - mime-types@^2.1.12, mime-types@^2.1.27, mime-types@^2.1.31, mime-types@~2.1.17, mime-types@~2.1.24, mime-types@~2.1.34: version "2.1.35" resolved "https://registry.npmjs.org/mime-types/-/mime-types-2.1.35.tgz" @@ -6107,6 +6135,13 @@ mime-types@^2.1.12, mime-types@^2.1.27, mime-types@^2.1.31, 
mime-types@~2.1.17, dependencies: mime-db "1.52.0" +mime-types@2.1.18: + version "2.1.18" + resolved "https://registry.npmjs.org/mime-types/-/mime-types-2.1.18.tgz" + integrity sha512-lc/aahn+t4/SWV/qcmumYjymLsWfN3ELhpmVuUFjgsORruuZPVSwAQryq+HHGvO/SI2KVX26bx+En+zhM8g8hQ== + dependencies: + mime-db "~1.33.0" + mime@1.6.0: version "1.6.0" resolved "https://registry.npmjs.org/mime/-/mime-1.6.0.tgz" @@ -6140,7 +6175,7 @@ minimalistic-assert@^1.0.0: resolved "https://registry.npmjs.org/minimalistic-assert/-/minimalistic-assert-1.0.1.tgz" integrity sha512-UtJcAD4yEaGtjPezWuO9wC4nwUnVH/8/Im3yEHQP4b67cXlD/Qr9hdITCU1xDbSEXg2XKNaP8jsReV7vQd00/A== -minimatch@3.1.2, minimatch@^3.0.4, minimatch@^3.0.5, minimatch@^3.1.1: +minimatch@^3.0.4, minimatch@^3.0.5, minimatch@^3.1.1, minimatch@3.1.2: version "3.1.2" resolved "https://registry.npmjs.org/minimatch/-/minimatch-3.1.2.tgz" integrity sha512-J7p63hRiAjw1NDEww1W7i37+ByIrOWO5XQQAzZ3VOcL0PNybwpfmV/N05zFAzwQ9USyEcX6t3UO+K5aqBQOIHw== @@ -6523,9 +6558,16 @@ path-parse@^1.0.7: resolved "https://registry.npmjs.org/path-parse/-/path-parse-1.0.7.tgz" integrity sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw== +path-to-regexp@^1.7.0: + version "1.8.0" + resolved "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-1.8.0.tgz" + integrity sha512-n43JRhlUKUAlibEJhPeir1ncUID16QnEjNpwzNdO3Lm4ywrBpBZ5oLD0I6br9evr1Y9JTqwRtAh7JLoOzAQdVA== + dependencies: + isarray "0.0.1" + path-to-regexp@0.1.10: version "0.1.10" - resolved "https://registry.yarnpkg.com/path-to-regexp/-/path-to-regexp-0.1.10.tgz#67e9108c5c0551b9e5326064387de4763c4d5f8b" + resolved "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.10.tgz" integrity sha512-7lf7qcQidTku0Gu3YDPc8DJ1q7OOucfa/BSsIwjuh56VU7katFvuM8hULfkwB3Fns/rsVF7PwPKVw1sl5KQS9w== path-to-regexp@2.2.1: @@ -6533,13 +6575,6 @@ path-to-regexp@2.2.1: resolved "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-2.2.1.tgz" integrity 
sha512-gu9bD6Ta5bwGrrU8muHzVOBFFREpp2iRkVfhBJahwJ6p6Xw20SjT0MxLnwkjOibQmGSYhiUnf2FLe7k+jcFmGQ== -path-to-regexp@^1.7.0: - version "1.8.0" - resolved "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-1.8.0.tgz" - integrity sha512-n43JRhlUKUAlibEJhPeir1ncUID16QnEjNpwzNdO3Lm4ywrBpBZ5oLD0I6br9evr1Y9JTqwRtAh7JLoOzAQdVA== - dependencies: - isarray "0.0.1" - path-type@^4.0.0: version "4.0.0" resolved "https://registry.npmjs.org/path-type/-/path-type-4.0.0.tgz" @@ -6859,7 +6894,7 @@ postcss-zindex@^5.1.0: resolved "https://registry.npmjs.org/postcss-zindex/-/postcss-zindex-5.1.0.tgz" integrity sha512-fgFMf0OtVSBR1va1JNHYgMxYk73yhn/qb4uQDq1DLGYolz8gHCyr/sesEuGUaYs58E3ZJRcpoGuPVoB7Meiq9A== -postcss@^8.4.17, postcss@^8.4.21, postcss@^8.4.26: +"postcss@^7.0.0 || ^8.0.1", postcss@^8.0.9, postcss@^8.1.0, postcss@^8.2.15, postcss@^8.2.2, postcss@^8.4.16, postcss@^8.4.17, postcss@^8.4.21, postcss@^8.4.26: version "8.4.41" resolved "https://registry.npmjs.org/postcss/-/postcss-8.4.41.tgz" integrity sha512-TesUflQ0WKZqAvg52PWL6kHgLKP6xB6heTOdoYM0Wt2UHyxNa4K25EZZMgKns3BH1RLVbZCREPpLY0rhnNoHVQ== @@ -6970,7 +7005,7 @@ pure-color@^1.2.0: qs@6.13.0: version "6.13.0" - resolved "https://registry.yarnpkg.com/qs/-/qs-6.13.0.tgz#6ca3bd58439f7e245655798997787b0d88a51906" + resolved "https://registry.npmjs.org/qs/-/qs-6.13.0.tgz" integrity sha512-+38qI9SOr8tfZ4QmJNplMUxqjbe7LKvvZgWdExBOmd+egZTtjLB67Gu0HRX3u/XOq7UU2Nx6nsjvS16Z9uwfpg== dependencies: side-channel "^1.0.6" @@ -6999,16 +7034,16 @@ randombytes@^2.1.0: dependencies: safe-buffer "^5.1.0" -range-parser@1.2.0: - version "1.2.0" - resolved "https://registry.npmjs.org/range-parser/-/range-parser-1.2.0.tgz" - integrity sha512-kA5WQoNVo4t9lNx2kQNFCxKeBl5IbbSNBl1M/tLkw9WCn+hxNBAW5Qh8gdhs63CJnhjJ2zQWFoqPJP2sK1AV5A== - range-parser@^1.2.1, range-parser@~1.2.1: version "1.2.1" resolved "https://registry.npmjs.org/range-parser/-/range-parser-1.2.1.tgz" integrity 
sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg== +range-parser@1.2.0: + version "1.2.0" + resolved "https://registry.npmjs.org/range-parser/-/range-parser-1.2.0.tgz" + integrity sha512-kA5WQoNVo4t9lNx2kQNFCxKeBl5IbbSNBl1M/tLkw9WCn+hxNBAW5Qh8gdhs63CJnhjJ2zQWFoqPJP2sK1AV5A== + raw-body@2.5.2: version "2.5.2" resolved "https://registry.npmjs.org/raw-body/-/raw-body-2.5.2.tgz" @@ -7069,7 +7104,7 @@ react-dev-utils@^12.0.1: strip-ansi "^6.0.1" text-table "^0.2.0" -react-dom@^18.0.0: +react-dom@*, "react-dom@^16.6.0 || ^17.0.0 || ^18.0.0", react-dom@^18.0.0, "react-dom@>= 15", "react-dom@>= 16.8.0 < 19.0.0", react-dom@>=16.14.0: version "18.3.1" resolved "https://registry.npmjs.org/react-dom/-/react-dom-18.3.1.tgz" integrity sha512-5m4nQKp+rZRb09LNH59GM4BxTh9251/ylbKIbpe7TpGxfJ+9kv6BLkLBXIjjspbgbnIBNqlI23tRnTWT0snUIw== @@ -7115,7 +7150,7 @@ react-loadable-ssr-addon-v5-slorber@^1.0.1: dependencies: "@babel/runtime" "^7.10.3" -"react-loadable@npm:@docusaurus/react-loadable@5.5.2": +react-loadable@*, "react-loadable@npm:@docusaurus/react-loadable@5.5.2": version "5.5.2" resolved "https://registry.npmjs.org/@docusaurus/react-loadable/-/react-loadable-5.5.2.tgz" integrity sha512-A3dYjdBGuy0IGT+wyLIGIKLRE+sAk1iNk0f1HjNDysO7u8lhL4N3VEm+FAubmJbAztn94F7MxBTPmnixbiyFdQ== @@ -7154,7 +7189,7 @@ react-router-dom@^5.3.4: tiny-invariant "^1.0.2" tiny-warning "^1.0.0" -react-router@5.3.4, react-router@^5.3.4: +react-router@^5.3.4, react-router@>=5, react-router@5.3.4: version "5.3.4" resolved "https://registry.npmjs.org/react-router/-/react-router-5.3.4.tgz" integrity sha512-Ys9K+ppnJah3QuaRiLxk+jDWOR1MekYQrlytiXxC1RyfbdsZkS5pvKAzCCr031xHixZwpnsYNT5xysdFHQaYsA== @@ -7186,7 +7221,7 @@ react-tooltip@^5.23.0: "@floating-ui/dom" "^1.6.1" classnames "^2.3.0" -react@^18.0.0: +react@*, "react@^15.0.2 || ^16.0.0 || ^17.0.0", "react@^16.6.0 || ^17.0.0 || ^18.0.0", "react@^16.8.0 || ^17.0.0 || ^18.0.0", react@^18.0.0, react@^18.3.1, "react@>= 15", 
"react@>= 16.8.0 < 19.0.0", react@>=15, react@>=16, react@>=16.0.0, react@>=16.14.0, react@>=16.3, react@>=16.6.0: version "18.3.1" resolved "https://registry.npmjs.org/react/-/react-18.3.1.tgz" integrity sha512-wS+hAgJShR0KhEvPJArfuPVN1+Hz1t0Y6n5jLrGQbkb4urgPE/0Rve+1kMB1v/oWgHgm4WIcV+i7F2pTVj+2iQ== @@ -7496,15 +7531,20 @@ rxjs@^7.8.1: dependencies: tslib "^2.1.0" -safe-buffer@5.1.2, safe-buffer@~5.1.0, safe-buffer@~5.1.1: +safe-buffer@^5.1.0, safe-buffer@>=5.1.0, safe-buffer@~5.2.0, safe-buffer@5.2.1: + version "5.2.1" + resolved "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz" + integrity sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ== + +safe-buffer@~5.1.0, safe-buffer@~5.1.1: version "5.1.2" resolved "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz" integrity sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g== -safe-buffer@5.2.1, safe-buffer@>=5.1.0, safe-buffer@^5.1.0, safe-buffer@~5.2.0: - version "5.2.1" - resolved "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz" - integrity sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ== +safe-buffer@5.1.2: + version "5.1.2" + resolved "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz" + integrity sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g== "safer-buffer@>= 2.1.2 < 3": version "2.1.2" @@ -7522,7 +7562,7 @@ sass-loader@^10.1.1: schema-utils "^3.0.0" semver "^7.3.2" -sass@^1.69.5: +sass@^1.3.0, sass@^1.30.0, sass@^1.69.5: version "1.77.8" resolved "https://registry.npmjs.org/sass/-/sass-1.77.8.tgz" integrity sha512-4UHg6prsrycW20fqLGPShtEvo/WyHRVRHwOP4DzkUrObWoWI05QBSfzU71TVB7PFaL104TwNaHpjlWXAZbQiNQ== @@ -7543,16 +7583,25 @@ scheduler@^0.23.2: dependencies: loose-envify "^1.1.0" -schema-utils@2.7.0: - version "2.7.0" - resolved 
"https://registry.npmjs.org/schema-utils/-/schema-utils-2.7.0.tgz" - integrity sha512-0ilKFI6QQF5nxDZLFn2dMjvc4hjg/Wkg7rHd3jK6/A4a1Hl9VFdQWvgB1UMGoU94pad1P/8N7fMcEnLnSiju8A== +schema-utils@^3.0.0: + version "3.3.0" + resolved "https://registry.npmjs.org/schema-utils/-/schema-utils-3.3.0.tgz" + integrity sha512-pN/yOAvcC+5rQ5nERGuwrjLlYvLTbCibnZ1I7B1LaiAz9BRBlE9GMgE/eqV30P7aJQUf7Ddimy/RsbYO/GrVGg== dependencies: - "@types/json-schema" "^7.0.4" - ajv "^6.12.2" - ajv-keywords "^3.4.1" + "@types/json-schema" "^7.0.8" + ajv "^6.12.5" + ajv-keywords "^3.5.2" + +schema-utils@^3.1.1: + version "3.3.0" + resolved "https://registry.npmjs.org/schema-utils/-/schema-utils-3.3.0.tgz" + integrity sha512-pN/yOAvcC+5rQ5nERGuwrjLlYvLTbCibnZ1I7B1LaiAz9BRBlE9GMgE/eqV30P7aJQUf7Ddimy/RsbYO/GrVGg== + dependencies: + "@types/json-schema" "^7.0.8" + ajv "^6.12.5" + ajv-keywords "^3.5.2" -schema-utils@^3.0.0, schema-utils@^3.1.1, schema-utils@^3.2.0: +schema-utils@^3.2.0: version "3.3.0" resolved "https://registry.npmjs.org/schema-utils/-/schema-utils-3.3.0.tgz" integrity sha512-pN/yOAvcC+5rQ5nERGuwrjLlYvLTbCibnZ1I7B1LaiAz9BRBlE9GMgE/eqV30P7aJQUf7Ddimy/RsbYO/GrVGg== @@ -7571,6 +7620,20 @@ schema-utils@^4.0.0: ajv-formats "^2.1.1" ajv-keywords "^5.1.0" +schema-utils@2.7.0: + version "2.7.0" + resolved "https://registry.npmjs.org/schema-utils/-/schema-utils-2.7.0.tgz" + integrity sha512-0ilKFI6QQF5nxDZLFn2dMjvc4hjg/Wkg7rHd3jK6/A4a1Hl9VFdQWvgB1UMGoU94pad1P/8N7fMcEnLnSiju8A== + dependencies: + "@types/json-schema" "^7.0.4" + ajv "^6.12.2" + ajv-keywords "^3.4.1" + +"search-insights@>= 1 < 3": + version "2.17.0" + resolved "https://registry.npmjs.org/search-insights/-/search-insights-2.17.0.tgz" + integrity sha512-AskayU3QNsXQzSL6v4LTYST7NNfs2HWyHHB+sdORP9chsytAhro5XRfToAMI/LAVYgNbzowVZTMfBRodgbUHKg== + section-matter@^1.0.0: version "1.0.0" resolved "https://registry.npmjs.org/section-matter/-/section-matter-1.0.0.tgz" @@ -7611,7 +7674,7 @@ semver@^7.3.2, semver@^7.3.5, semver@^7.3.7, 
semver@^7.3.8, semver@^7.5.4: send@0.19.0: version "0.19.0" - resolved "https://registry.yarnpkg.com/send/-/send-0.19.0.tgz#bbc5a388c8ea6c048967049dbeac0e4a3f09d7f8" + resolved "https://registry.npmjs.org/send/-/send-0.19.0.tgz" integrity sha512-dW41u5VfLXu8SJh5bwRmyYUbAoSB3c9uQh6L8h/KtsFREPWpbX1lrljJo186Jc4nmci/sGUZ9a0a0J2zgfq2hw== dependencies: debug "2.6.9" @@ -7664,7 +7727,7 @@ serve-index@^1.9.1: serve-static@1.16.2: version "1.16.2" - resolved "https://registry.yarnpkg.com/serve-static/-/serve-static-1.16.2.tgz#b6a5343da47f6bdd2673848bf45754941e803296" + resolved "https://registry.npmjs.org/serve-static/-/serve-static-1.16.2.tgz" integrity sha512-VqpjJZKadQB/PEbEwvFdO43Ax5dFBZ2UECszz8bQ7pi7wt//PWe1P6MN7eCnjsatYtBT6EuiClbjSWP2WrIoTw== dependencies: encodeurl "~2.0.0" @@ -7807,7 +7870,7 @@ sort-css-media-queries@2.1.0: resolved "https://registry.npmjs.org/sort-css-media-queries/-/sort-css-media-queries-2.1.0.tgz" integrity sha512-IeWvo8NkNiY2vVYdPa27MCQiR0MN0M80johAYFVxWWXQ44KU84WNxjslwBHmc/7ZL2ccwkM7/e6S5aiKZXm7jA== -"source-map-js@>=0.6.2 <2.0.0", source-map-js@^1.2.0: +source-map-js@^1.2.0, "source-map-js@>=0.6.2 <2.0.0": version "1.2.0" resolved "https://registry.npmjs.org/source-map-js/-/source-map-js-1.2.0.tgz" integrity sha512-itJW8lvSA0TXEphiRoawsCksnlf8SyvmFzIhltqAHluXd88pkCd+cXJVHTDwdCr0IzwptSm035IHQktUu1QUMg== @@ -7820,7 +7883,12 @@ source-map-support@~0.5.20: buffer-from "^1.0.0" source-map "^0.6.0" -source-map@^0.6.0, source-map@^0.6.1, source-map@~0.6.0: +source-map@^0.6.0: + version "0.6.1" + resolved "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz" + integrity sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g== + +source-map@^0.6.1: version "0.6.1" resolved "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz" integrity sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g== @@ -7830,6 +7898,11 @@ source-map@^0.7.0: resolved 
"https://registry.npmjs.org/source-map/-/source-map-0.7.4.tgz" integrity sha512-l3BikUxvPOcn5E74dZiq5BGsTb5yEwhaTSzccU6t4sDOH8NWJCstKO5QT2CvtFoK6F0saL7p9xHAqHOlCPJygA== +source-map@~0.6.0: + version "0.6.1" + resolved "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz" + integrity sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g== + space-separated-tokens@^2.0.0: version "2.0.2" resolved "https://registry.npmjs.org/space-separated-tokens/-/space-separated-tokens-2.0.2.tgz" @@ -7873,22 +7946,45 @@ stable@^0.1.8: resolved "https://registry.npmjs.org/stable/-/stable-0.1.8.tgz" integrity sha512-ji9qxRnOVfcuLDySj9qzhGSEFVobyt1kIOSkj1qZzYLzq7Tos/oUUWvotUPQLlrsidqsK6tBH89Bc9kL5zHA6w== -statuses@2.0.1: - version "2.0.1" - resolved "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz" - integrity sha512-RwNA9Z/7PrK06rYLIzFMlaF+l73iwpzsqRIFgbMLbTcLD6cOao82TaWefPXQvB2fOC4AjuYSEndS7N/mTCbkdQ== - "statuses@>= 1.4.0 < 2": version "1.5.0" resolved "https://registry.npmjs.org/statuses/-/statuses-1.5.0.tgz" integrity sha512-OpZ3zP+jT1PI7I8nemJX4AKmAX070ZkYPVWV/AaKTJl+tXCTGyVdC1a4SL8RUQYEwk/f34ZX8UTykN68FwrqAA== +statuses@2.0.1: + version "2.0.1" + resolved "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz" + integrity sha512-RwNA9Z/7PrK06rYLIzFMlaF+l73iwpzsqRIFgbMLbTcLD6cOao82TaWefPXQvB2fOC4AjuYSEndS7N/mTCbkdQ== + std-env@^3.0.1: version "3.7.0" resolved "https://registry.npmjs.org/std-env/-/std-env-3.7.0.tgz" integrity sha512-JPbdCEQLj1w5GilpiHAx3qJvFndqybBysA3qUOnznweH4QbNYUsW/ea8QzSrnh0vNsezMMw5bcVool8lM0gwzg== -string-width@^4.1.0, string-width@^4.2.0: +string_decoder@^1.1.1: + version "1.3.0" + resolved "https://registry.npmjs.org/string_decoder/-/string_decoder-1.3.0.tgz" + integrity sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA== + dependencies: + safe-buffer "~5.2.0" + +string_decoder@~1.1.1: + version "1.1.1" + resolved 
"https://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz" + integrity sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg== + dependencies: + safe-buffer "~5.1.0" + +string-width@^4.1.0: + version "4.2.3" + resolved "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz" + integrity sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g== + dependencies: + emoji-regex "^8.0.0" + is-fullwidth-code-point "^3.0.0" + strip-ansi "^6.0.1" + +string-width@^4.2.0: version "4.2.3" resolved "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz" integrity sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g== @@ -7906,20 +8002,6 @@ string-width@^5.0.1, string-width@^5.1.2: emoji-regex "^9.2.2" strip-ansi "^7.0.1" -string_decoder@^1.1.1: - version "1.3.0" - resolved "https://registry.npmjs.org/string_decoder/-/string_decoder-1.3.0.tgz" - integrity sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA== - dependencies: - safe-buffer "~5.2.0" - -string_decoder@~1.1.1: - version "1.1.1" - resolved "https://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz" - integrity sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg== - dependencies: - safe-buffer "~5.1.0" - stringify-entities@^4.0.0: version "4.0.4" resolved "https://registry.npmjs.org/stringify-entities/-/stringify-entities-4.0.4.tgz" @@ -8155,6 +8237,11 @@ typedarray-to-buffer@^3.1.5: dependencies: is-typedarray "^1.0.0" +"typescript@>= 2.7", typescript@>=4.9.5: + version "5.5.4" + resolved "https://registry.npmjs.org/typescript/-/typescript-5.5.4.tgz" + integrity sha512-Mtq29sKDAEYP7aljRgtPOpTvOfbwRWlS6dPRzwjdE+C0R4brX/GUyhHSecbHMFLNBLcJIPt9nl9yG5TZ1weH+Q== + ua-parser-js@^1.0.35: version "1.0.38" resolved "https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-1.0.38.tgz" @@ 
-8258,7 +8345,7 @@ universalify@^2.0.0: resolved "https://registry.npmjs.org/universalify/-/universalify-2.0.1.tgz" integrity sha512-gptHNQghINnc/vTGIk0SOFGFNXw7JVrlRUtConJRlvaw6DuX0wO5Jeko9sWrMBhh+PsYAZ7oXAiOnf/UKogyiw== -unpipe@1.0.0, unpipe@~1.0.0: +unpipe@~1.0.0, unpipe@1.0.0: version "1.0.0" resolved "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz" integrity sha512-pjy2bYhSsufwWlKwPc+l3cN7+wuJlK6uz0YdJEOlQDbl6jo/YlPi4mb8agUkVC8BF7V8NuzeyPNqRksA3hztKQ== @@ -8498,7 +8585,7 @@ webpack-sources@^3.2.2, webpack-sources@^3.2.3: resolved "https://registry.npmjs.org/webpack-sources/-/webpack-sources-3.2.3.tgz" integrity sha512-/DyMEOrDgLKKIG0fmvtz+4dUX/3Ghozwgm6iPp8KRhvn+eQf9+Q7GWxVNMk3+uCPWfdXYC4ExGBckIXdFEfH1w== -webpack@^5.88.1: +"webpack@^4.0.0 || ^5.0.0", "webpack@^4.36.0 || ^5.0.0", "webpack@^4.37.0 || ^5.0.0", webpack@^5.0.0, webpack@^5.1.0, webpack@^5.20.0, webpack@^5.88.1, "webpack@>= 4", "webpack@>=4.41.1 || 5.x", webpack@>=5, "webpack@3 || 4 || 5": version "5.94.0" resolved "https://registry.npmjs.org/webpack/-/webpack-5.94.0.tgz" integrity sha512-KcsGn50VT+06JH/iunZJedYGUJS5FGjow8wb9c0v5n1Om8O1g4L6LjtfxwlXIATopoQu+vOXXa7gYisWxCoPyg== @@ -8537,7 +8624,7 @@ webpackbar@^5.0.2: pretty-time "^1.1.0" std-env "^3.0.1" -websocket-driver@>=0.5.1, websocket-driver@^0.7.4: +websocket-driver@^0.7.4, websocket-driver@>=0.5.1: version "0.7.4" resolved "https://registry.npmjs.org/websocket-driver/-/websocket-driver-0.7.4.tgz" integrity sha512-b17KeDIQVjvb0ssuSDF2cYXSg2iztliJ4B9WdsuB6J952qCPKmnVq4DyW5motImXHDC1cBT/1UezrJVsKw5zjg== @@ -8636,7 +8723,17 @@ yallist@^3.0.2: resolved "https://registry.npmjs.org/yallist/-/yallist-3.1.1.tgz" integrity sha512-a4UGQaWPH59mOXUYnAG2ewncQS4i4F43Tv3JoAM+s2VDAmS9NsK8GpDMLrCHPksFT7h3K6TOoUNn2pb7RoXx4g== -yaml@^1.10.0, yaml@^1.10.2, yaml@^1.7.2: +yaml@^1.10.0: + version "1.10.2" + resolved "https://registry.npmjs.org/yaml/-/yaml-1.10.2.tgz" + integrity 
sha512-r3vXyErRCYJ7wg28yvBY5VSoAF8ZvlcW9/BwUzEtUsjvX/DKs24dIkuwjtuprwJJHsbyUbLApepYTR1BN4uHrg== + +yaml@^1.10.2: + version "1.10.2" + resolved "https://registry.npmjs.org/yaml/-/yaml-1.10.2.tgz" + integrity sha512-r3vXyErRCYJ7wg28yvBY5VSoAF8ZvlcW9/BwUzEtUsjvX/DKs24dIkuwjtuprwJJHsbyUbLApepYTR1BN4uHrg== + +yaml@^1.7.2: version "1.10.2" resolved "https://registry.npmjs.org/yaml/-/yaml-1.10.2.tgz" integrity sha512-r3vXyErRCYJ7wg28yvBY5VSoAF8ZvlcW9/BwUzEtUsjvX/DKs24dIkuwjtuprwJJHsbyUbLApepYTR1BN4uHrg==