3 changes: 3 additions & 0 deletions .gitignore
@@ -30,3 +30,6 @@ coverage_unit.html
*-results.html

.sonarlint*

# Python files
__pycache__/
2 changes: 1 addition & 1 deletion operators/o2ims-operator/Dockerfile
@@ -14,7 +14,7 @@
# limitations under the License.
##########################################################################

FROM python:3.12.10-alpine3.21@sha256:9c51ecce261773a684c8345b2d4673700055c513b4d54bc0719337d3e4ee552e AS builder
FROM python:3.12.10-alpine3.21 AS builder

# Create a non-root user and group
RUN addgroup -g 65535 o2ims && \
14 changes: 1 addition & 13 deletions operators/o2ims-operator/README.md
@@ -58,16 +58,7 @@ curl --create-dirs -O --output-dir ./config/crd/bases/ https://raw.githubusercon

#### Non-containerized Development Environment

```bash
kubectl create -f tests/deployment/sa-test-pod.yaml
kubectl exec -it -n porch-system porch-sa-test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token &> /tmp/porch-token
# Create the CRD from the Nephio API repo
kubectl create -f https://raw.githubusercontent.com/nephio-project/api/refs/heads/main/config/crd/bases/o2ims.provisioning.oran.org_provisioningrequests.yaml
export TOKEN=/tmp/porch-token
# Exposing the Kube proxy for development after killing previous proxy sessions
pkill kubectl
nohup kubectl proxy --port 8080 &>/dev/null &
```
Already set up by tests/create-cluster.sh; go to the "To Start the Operator" section.

#### Containerized Development Environment

@@ -129,7 +120,6 @@ O2IMS operator listens for ProvisioningRequest CR and once it is created it goes
Following are the Provisioning Request Phases:



| Status | Description |
| --- | --- |
| `PENDING` | The ProvisioningRequest is waiting to be processed by the O-Cloud (IMS). |
@@ -138,8 +128,6 @@ Following are the Provisioning Request Phases:
| `FAILED` | The ProvisioningRequest could not be fully processed by the O-Cloud (IMS). |
| `DELETING` | The ProvisioningRequest is in the process of being deleted by the O-Cloud (IMS). |



1. `ProvisioningRequest validation`: The controller [provisioning_request_validation_controller.py](./controllers/provisioning_request_validation_controller.py) validates the provisioning requests. Currently it checks the fields `clusterName` and `clusterProvisioner`. At the moment only `capi`-managed clusters are supported.
2. `ProvisioningRequest creation`: The controller [provisioning_request_controller.py](./controllers/provisioning_request_controller.py) takes care of creating a package variant for Porch, which is applied to the cluster where Porch is running. After applying the package variant it waits for the cluster to be created, following the creation by querying the `clusters.cluster.x-k8s.io` endpoint (see the sketch below). Later we will also add querying of packageRevisions, but at the moment there is a problem with querying packageRevisions because sometimes Porch is not able to process the request.
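
The following is a minimal sketch of that polling step, not the operator's actual implementation: it assumes kube-apiserver access via a local `kubectl proxy` on port 8080 (as in the development setup), and the function name, namespace, and timeout are illustrative.

```python
# Hedged sketch: poll the clusters.cluster.x-k8s.io endpoint until the CAPI
# cluster reports a Ready condition. Illustrative only; the real logic lives
# in provisioning_request_controller.py.
import time
import requests

KUBE_API = "http://127.0.0.1:8080"  # assumes `kubectl proxy --port 8080`

def wait_for_capi_cluster(name: str, namespace: str = "default",
                          timeout: int = 600) -> bool:
    url = (f"{KUBE_API}/apis/cluster.x-k8s.io/v1beta1/"
           f"namespaces/{namespace}/clusters/{name}")
    deadline = time.time() + timeout
    while time.time() < deadline:
        r = requests.get(url)
        if r.status_code == 200:
            conditions = r.json().get("status", {}).get("conditions", [])
            # CAPI sets a top-level "Ready" condition once the cluster is up
            if any(c.get("type") == "Ready" and c.get("status") == "True"
                   for c in conditions):
                return True
        time.sleep(10)  # poll interval; the operator's own cadence may differ
    return False
```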

38 changes: 26 additions & 12 deletions operators/o2ims-operator/controllers/manager.py
@@ -16,20 +16,32 @@

import logging
import threading

from datetime import datetime
import kopf

from api import app # Import the Flask app
from controllers.utils import LOG_LEVEL, CLUSTER_PROVISIONER, CREATION_TIMEOUT
from provisioning_request_controller import *
from provisioning_request_validation_controller import *
from northbound_restapi import app # Import the Flask app
from utils import (
LOG_LEVEL,
CLUSTER_PROVISIONER,
CREATION_TIMEOUT,
TIME_FORMAT,
)
from provisioning_request_controller import (
check_creation_request_status,
cluster_creation_request,
cluster_creation_status,
)

from provisioning_request_validation_controller import (
validate_cluster_creation_request,
)


# Start Flask in a separate thread
# Start northbound rest endpoint for FOCOM
def run_flask():
app.run(host='0.0.0.0', port=5000)

threading.Thread(target=run_flask, daemon=True).star
threading.Thread(target=run_flask, daemon=True).start()

@kopf.on.startup()
def configure(settings: kopf.OperatorSettings, memo: kopf.Memo, **_):
@@ -40,21 +52,23 @@ def configure(settings: kopf.OperatorSettings, memo: kopf.Memo, **_):
settings.posting.level = logging.ERROR
if LOG_LEVEL == "WARNING":
settings.posting.level = logging.WARNING
settings.persistence.finalizer = f"provisioningrequests.o2ims.provisioning.oran.org"
if LOG_LEVEL == "DEBUG":
settings.posting.level = logging.DEBUG
settings.persistence.finalizer = "provisioningrequests.o2ims.provisioning.oran.org"
settings.persistence.progress_storage = kopf.AnnotationsProgressStorage(
prefix=f"provisioningrequests.o2ims.provisioning.oran.org"
prefix="provisioningrequests.o2ims.provisioning.oran.org"
)
settings.persistence.diffbase_storage = kopf.AnnotationsDiffBaseStorage(
prefix=f"provisioningrequests.o2ims.provisioning.oran.org",
prefix="provisioningrequests.o2ims.provisioning.oran.org",
key="last-handled-configuration",
)
memo.cluster_provisioner = CLUSTER_PROVISIONER
memo.creation_timeout = CREATION_TIMEOUT


## kopf.event is designed to show events in `kubectl get events`. For cluster-scoped resources it is currently not possible to show events
@kopf.on.resume(f"o2ims.provisioning.oran.org", "provisioningrequests")
@kopf.on.create(f"o2ims.provisioning.oran.org", "provisioningrequests")
@kopf.on.resume("o2ims.provisioning.oran.org", "provisioningrequests")
@kopf.on.create("o2ims.provisioning.oran.org", "provisioningrequests")
async def create_fn(spec, logger, status, patch: kopf.Patch, memo: kopf.Memo, **kwargs):
metadata_name = kwargs["body"]["metadata"]["name"]
# Template name will be treated as package name
@@ -1,10 +1,26 @@
###########################################################################
# Copyright 2025 The Nephio Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##########################################################################

import logging
from datetime import datetime

from flask import Flask, request, jsonify
from kubernetes import client, config

from controllers.utils import validate_cluster_creation_request
from utils import validate_cluster_creation_request

app = Flask(__name__)

@@ -14,7 +30,7 @@ def trigger_action():
logging.info("O2IMS API Received Request Payload Is: %s", data)
# add validation logic here
now = datetime.now()
dt_string = now.strftime("%Y-%m-%d %H:%M:%S")
dt_string = now.strftime("%Y-%m-%d %H:%M:%S")
try:
validate_cluster_creation_request(params=data)
o2ims_cr={
@@ -32,12 +48,12 @@
'templateName': data.get('templateName'),
'templateParameters':data.get('templateParameters'),
'templateVersion': data.get('templateVersion')
}
}
}
logging.debug("O2IMS CR Payload Is: %s", o2ims_cr)
config.load_incluster_config()
api = client.CustomObjectsApi()
response = api.create_cluster_custom_object(
api.create_cluster_custom_object(
group='o2ims.provisioning.oran.org',
version='v1alpha1',
plural='provisioningrequests',
@@ -68,13 +84,12 @@ def fetch_status():
response_data={"provisioningRequestData": {}, "status": status,"ProvisionedResourceSet":{}}
return response_data, 200
#read all the provisioning requests and create response array from it

for o2ims_cr in data:
status={}
#read o2ims_cr status and update message accordingly
if 'status' in o2ims_cr.keys():
if o2ims_cr['status'].get('provisioningState')=='failed':
status= jsonify({"status":{"updateTime":dt_string,"message":o2ims_cr['status'].get('provisioningMessage'),"provisioningPhase":"FAILED"}})
status= jsonify({"status":{"updateTime":dt_string,"message":o2ims_cr['status'].get('provisioningMessage'),"provisioningPhase":"FAILED"}})
elif o2ims_cr['status'].get('provisioningState')=='progressing':
status= jsonify({"status":{"updateTime":dt_string,"message":o2ims_cr['status'].get('provisioningMessage'),"provisioningPhase":"PROGRESSING"}})
elif o2ims_cr['status'].get('provisioningState')=='fulfilled':
Expand All @@ -89,4 +104,3 @@ def fetch_status():
except client.exceptions.ApiException as e:
logging.error(f"Caught Exception while fetching O2IMS CR Status ,{e}")
return jsonify({"status":{"updateTime":dt_string,"message":f"O2IMS Deployment Failed,{e}","provisioningPhase":"FAILED"}}),500

@@ -18,7 +18,7 @@
import uuid
from datetime import datetime

from controllers.utils import check_o2ims_provisioning_request, UPSTREAM_PKG_REPO, create_package_variant, \
from utils import check_o2ims_provisioning_request, UPSTREAM_PKG_REPO, create_package_variant, \
get_package_variant, get_capi_cluster, TIME_FORMAT


6 changes: 3 additions & 3 deletions operators/o2ims-operator/controllers/utils.py
@@ -26,7 +26,7 @@
# Labels to put inside the owned resources
LABEL = {"owner": "o2ims.provisioning.oran.org.provisioningrequests"}
# Log level of the controller
LOG_LEVEL = str(os.getenv("LOG_LEVEL", "INFO"))
LOG_LEVEL = str(os.getenv("LOG_LEVEL", "DEBUG"))
# To verify HTTPs certificates when communicating with cluster
HTTPS_VERIFY = os.getenv("HTTPS_VERIFY", "false").lower() in ("1", "true", "yes")
# Token used to communicate with Kube cluster
@@ -116,7 +116,7 @@ def create_package_variant(
response = {"status": False, "reason": "k8sApi server is not reachable"}
else:
response = {"status": False, "reason": r.json()}
elif r["status"] == True and "name" in r:
elif r["status"] and "name" in r:
response = {"status": r["status"], "name": r["name"]}
else:
response = {"status": r["status"], "reason": r["reason"]}
@@ -158,7 +158,7 @@ def get_package_variant(name: str = None, namespace: str = None, logger=None):
elif r.status_code in [401, 403]:
response = {"status": False, "reason": "unauthorized"}
elif r.status_code == 404:
response = {"status": False, "reason": f"notFound"}
response = {"status": False, "reason": "notFound"}
elif r.status_code == 500:
response = {"status": False, "reason": "k8sApi server is not reachable"}
else:
28 changes: 14 additions & 14 deletions operators/o2ims-operator/tests/create-cluster.sh
@@ -32,10 +32,10 @@ create_kpt_package() {
rm -rf "${kpt_dir:?}/$2"
}

## Always delete the cluster
kind delete cluster -n o2ims-mgmt || true
kind create cluster --config="$(dirname "$0")"/mgmt-cluster.yaml --wait 5m
kubectl cluster-info --context kind-o2ims-mgmt
## Always delete the cluster
#kind delete cluster -n o2ims-mgmt || true
#kind create cluster --config="$(dirname "$0")"/mgmt-cluster.yaml --wait 5m
#kubectl cluster-info --context kind-o2ims-mgmt

# Gitea
create_kpt_package $CATALOG_REPO/distros/sandbox/gitea@origin gitea
@@ -62,7 +62,7 @@ create_kpt_package $CATALOG_REPO/nephio/optional/resource-backend@origin resourc
# Nephio Core Operator
create_kpt_package $CATALOG_REPO/nephio/core/nephio-operator@origin nephio-operator

# Create Gitea secret
# Create Gitea secret
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
@@ -91,16 +91,16 @@ rm -rf $kpt_dir/rootsync
create_kpt_package $CATALOG_REPO/nephio/optional/stock-repos@origin stock-repos

# Check Token for SA
kubectl create -f "$(dirname "$0")"/sa-test-pod.yaml
kubectl wait --for=condition=ready pod -l app=testo2ims -n porch-system --timeout=3m
rm -rf /tmp/porch-token
kubectl exec -it -n porch-system porch-sa-test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token &> /tmp/porch-token
#kubectl create -f "$(dirname "$0")"/sa-test-pod.yaml
#kubectl wait --for=condition=ready pod -l app=testo2ims -n porch-system --timeout=3m
#rm -rf /tmp/porch-token
#kubectl exec -it -n porch-system porch-sa-test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token &> /tmp/porch-token

# Create CRD
kubectl create -f https://raw.githubusercontent.com/nephio-project/api/refs/heads/main/config/crd/bases/o2ims.provisioning.oran.org_provisioningrequests.yaml
export TOKEN=/tmp/porch-token ## important for development environment
#kubectl create -f https://raw.githubusercontent.com/nephio-project/api/refs/heads/main/config/crd/bases/o2ims.provisioning.oran.org_provisioningrequests.yaml
#export TOKEN=/tmp/porch-token ## important for development environment

# Exposing the kube proxy for development, killing previous proxy sessions if they exist
pkill kubectl
nohup kubectl proxy --port 8080 &>/dev/null &
echo "Cluster is properly configured and proxy is running at 8080"
#pkill kubectl
#nohup kubectl proxy --port 8080 &>/dev/null &
#echo "Cluster is properly configured and proxy is running at 8080"
2 changes: 1 addition & 1 deletion operators/o2ims-operator/tests/mgmt-cluster.yaml
@@ -6,7 +6,7 @@ networking:
serviceSubnet: "10.97.0.0/16"
nodes:
- role: control-plane
image: kindest/node:v1.31.0
image: kindest/node:v1.32.0
extraMounts:
- hostPath: /var/run/docker.sock
containerPath: /var/run/docker.sock
17 changes: 17 additions & 0 deletions operators/o2ims-operator/tests/sample_provisioning_request.yaml
@@ -0,0 +1,17 @@
apiVersion: o2ims.provisioning.oran.org/v1alpha1
kind: ProvisioningRequest
metadata:
name: edge-cluster
spec:
name: sample-edge
description: "Provisioning request for setting up a sample edge kind cluster."
templateName: nephio-workload-cluster
templateVersion: v3.0.0
templateParameters:
clusterName: edge
clusterProvisioner: capi
creationTimeout: 200
labels:
nephio.org/site-type: edge
nephio.org/region: europe-paris-west
nephio.org/owner: nephio-o2ims
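
For illustration, a ProvisioningRequest like the sample above could also be created programmatically, mirroring the `CustomObjectsApi` call used by the northbound REST API in this PR. This is a hedged sketch: it assumes the CRD is installed and a local kubeconfig, and the helper name is hypothetical.

```python
# Hedged sketch: create the sample ProvisioningRequest via the Kubernetes
# Python client, mirroring the CustomObjectsApi usage shown in this PR.
import yaml
from kubernetes import client, config

def apply_sample_provisioning_request(path="sample_provisioning_request.yaml"):
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    with open(path) as f:
        body = yaml.safe_load(f)
    api = client.CustomObjectsApi()
    # ProvisioningRequest is cluster-scoped, hence the cluster-level create
    return api.create_cluster_custom_object(
        group="o2ims.provisioning.oran.org",
        version="v1alpha1",
        plural="provisioningrequests",
        body=body,
    )
```

Equivalently, `kubectl create -f tests/sample_provisioning_request.yaml` applies it directly.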
8 changes: 7 additions & 1 deletion operators/o2ims-operator/tests/test_utils.py
@@ -20,7 +20,13 @@
import random
import string

from controllers.utils import *
from controllers.utils import (
KUBERNETES_BASE_URL,
create_package_variant,
get_package_variant,
check_o2ims_provisioning_request,
get_capi_cluster,
)

# Constants used for testing
NAME = "test_name"