tests: test CRD versions #1317
base: main
Conversation
Force-pushed from 943710d to 26cc904
Add test which checks if the CRD versions defined in the cluster match the versions in the subcharts in the git repo.

This test ensures that the subcharts in the release version don't unexpectedly change their API versions within a minor Harvester release.

To implement this test, additional infrastructure for the Kubernetes API has been added.

fixes: harvester#1314

Signed-off-by: Moritz Röhrich <moritz.rohrich@suse.com>
Force-pushed from 26cc904 to 33c5419
lanfon72 left a comment
IMO, the apiclient library focuses on Harvester and its related endpoints; we don't have, and won't have, plans to implement the Kubernetes APIs, since many other libraries already do that.
And as Albin's snippet shows, we can simply get that CRD info from bash, so I would prefer to use that version rather than create a new API set.
It would be more readable and would fit our manual test steps.
And I don't think we need to verify those CRDs every time; usually we only verify them after the cluster has been upgraded to a new version, so we would not place them in apis.
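For reference, a minimal sketch of what an existing library already provides, assuming the official kubernetes Python client and a reachable kubeconfig:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.ApiextensionsV1Api()

# List every CRD together with the API versions it currently serves.
for crd in api.list_custom_resource_definition().items:
    served = [v.name for v in crd.spec.versions if v.served]
    print(crd.metadata.name, served)
```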
```python
assert code == 200
assert data.get("value") is not None

yield semver.VersionInfo.parse(data.get("value").lstrip("v"))
```
We already have apiclient.cluster_version to expose the cluster's version.
The apiclient.cluster_version returns a pkg_resources.Version object, but my tests use the semver library to parse version information.
The semver library has several advantages. First, it isn't in the process of being deprecated. Second, it has good documentation and handles many different operations on version information better. Third, the pkg_resources library is meant for Python packages, not generic version information; this matters because Python packages have specific requirements on the format of the version string that we don't adhere to with Harvester.
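For illustration, a minimal sketch of the kind of handling semver gives you (the version strings here are made up):

```python
import semver

# Parse a version string into structured components.
v = semver.VersionInfo.parse("1.2.3-rc1")
print(v.major, v.minor, v.patch, v.prerelease)  # -> 1 2 3 rc1

# Compare and range-match without ad-hoc string logic.
print(v.match(">=1.2.0"))                     # True
print(v < semver.VersionInfo.parse("1.2.3"))  # True: rc1 precedes the release
```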
Test target and classification are important; we should not mix test code with different purposes. Or could we put this automation in another folder or something, so it doesn't break the test organization? What do you guys think?
Well, it does not. There's also a Rancher API client part in it.
Please no. We have a test suite, let's not add a second test suite.
I see your point, but I don't see where else in the test suite an appropriate place would be. Maybe we should create a separate directory for upgrade tests under …
Signed-off-by: Moritz Röhrich <moritz.rohrich@suse.com>
For what it's worth.
```bash
du -h -d 1 "$(pip -V | cut -d ' ' -f 4 | sed 's/pip//g')" | grep -vE "dist-info|_distutils_hack|__pycache__" | sort -h
```

(per-package install-size output elided; the semver entry is among the smallest)
But, again, it's kubernetes, and I'd expect that package to be a bit bigger. Even for our packages in requirements.txt: we are not pinning our dependencies, but maybe we could start?

I get that, yeah, we could do it in /bin/bash, but that also has other dependencies. It would take something like the following (say you've curl'd all the githubusercontent YAMLs down to /home/rancher/harvester_yamls):
```bash
#!/usr/bin/bash
for file in /home/rancher/harvester_yamls/*; do
    # get the name of the file
    file_name=$(basename "$file")
    # get the name of the CRD from the chart manifest
    crd_name=$(yq '.metadata.name' "$file")
    # the path to the version names inside a CRD manifest
    obj_path='.spec.versions[].name'
    # get the observation: the versions the cluster actually serves
    observation=$(kubectl get "customresourcedefinitions/$crd_name" -o yaml | yq "$obj_path" | tr '\n' ' ')
    # get the expectation: the versions defined in the chart
    expectation=$(yq "$obj_path" "$file" | tr '\n' ' ')
    # print the observation and expectation
    echo -e "For $file_name - Observation: $observation\nExpectation: $expectation\n"
    # check if the observation is equal to the expectation
    [ "$observation" == "$expectation" ] && echo True || echo False
done
```

But the bash requires the Harvester node to have the binaries for kubectl, yq, and the usual coreutils.
Which our Harvester nodes do have, but it just ends up shifting the dependencies onto Harvester. Harvester could, hypothetically, change at a later point to not support those things, ...but our testing code would instead continue to work, because we wouldn't be dependent on the Harvester node actually containing binaries to perform checks written in bash. So I also agree that it may be better to not rely on a bash script to check the CRD dependencies.

Maybe we could retrofit the apiclient.cluster_version (both Harvester & Rancher) to utilize the semver library.

To not block @m-ildefons, I'm okay approving this for now and maybe we could iterate on it later -> as long as there's a test run that's been run to show it passing 😄
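For what the API-based check could look like, here's a hedged sketch (not this PR's actual implementation; the manifest directory, PyYAML usage, and kubeconfig loading are assumptions):

```python
from pathlib import Path

import yaml
from kubernetes import client, config

# Hypothetical location of the downloaded chart CRD manifests from the example above.
CHART_CRD_DIR = Path("/home/rancher/harvester_yamls")

config.load_kube_config()  # assumes a kubeconfig pointing at the cluster
api = client.ApiextensionsV1Api()

for crd_file in sorted(CHART_CRD_DIR.glob("*")):
    expected = yaml.safe_load(crd_file.read_text())
    name = expected["metadata"]["name"]
    # versions defined in the chart vs. versions the cluster serves
    want = [v["name"] for v in expected["spec"]["versions"]]
    got = [v.name for v in api.read_custom_resource_definition(name).spec.versions]
    assert got == want, f"{name}: expected {want}, observed {got}"
```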