Conversation

@taodd (Owner) commented Nov 23, 2025

No description provided.

@pponnuvel (Collaborator) commented:

Thanks for this @taodd !

The tests are failing because we don't have 19.2.3 reference dwarfs. Added those in #70. Please take a look.

@pponnuvel (Collaborator) commented:

@taodd Can you rebase this onto main? That should make the tests pass. I'll then test & review.

@pponnuvel (Collaborator) left a comment:

Looks good.

But there's a minor issue: the bare "ceph" command can't be used directly, since the "microceph" snap ships its own wrapper with its own flags & options. So we need to replace all "ceph" commands with "microceph.ceph".

Because of this, the test is failing right now.


I also wonder whether we need all the cleanups, and whether we need to check if microceph already exists. Each time this is run, a new Ubuntu instance is created, and it won't have microceph pre-installed.

Cleanups: while it's generally good to clean up, in this case the whole Ubuntu instance will be wiped regardless of the test result. Also, if any of the cleanup ops fail, it'll mark the test as failed.
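The blanket ceph → microceph.ceph replacement suggested above can also be applied mechanically. A sketch, not from the PR — the heredoc just stands in for the real test script, and the guard `(^|[^.[:alnum:]])` only rewrites a bare `ceph ` so an already-qualified `microceph.ceph` is left alone:

```shell
# Sketch: rewrite bare "ceph" invocations to the snap wrapper "microceph.ceph".
script=$(mktemp)
cat > "$script" <<'EOF'
ceph status
if ! ceph osd pool ls | grep -q "^test_pool$"; then :; fi
microceph.ceph status
EOF
rewritten=$(sed -E 's/(^|[^.[:alnum:]])ceph /\1microceph.ceph /g' "$script")
echo "$rewritten"
rm -f "$script"
```

Only `ceph` followed by a space is touched, so names like `ceph-common` or `microceph` itself stay intact.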

TIMEOUT=120
ELAPSED=0
while [ $ELAPSED -lt $TIMEOUT ]; do
    if ceph status | grep -q "HEALTH_OK\|HEALTH_WARN"; then

@pponnuvel (Collaborator) commented on the `if ceph status` line:

s/ceph/microceph.ceph

    ELAPSED=$((ELAPSED + 5))
done
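For what it's worth, the retry loop above could take the status command as a parameter, making the switch to microceph.ceph a one-word change at the call site. A sketch only — the function name `wait_for_health` is made up, not from the PR:

```shell
# Sketch: health-wait loop with the status command injectable, so the test
# script can pass "microceph.ceph status" (per the review) rather than
# hard-coding a bare "ceph".
wait_for_health() {
  status_cmd="$1"
  timeout="${2:-120}"
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if $status_cmd | grep -q "HEALTH_OK\|HEALTH_WARN"; then
      return 0
    fi
    sleep 5
    elapsed=$((elapsed + 5))
  done
  echo "cluster not healthy after ${timeout}s" >&2
  return 1
}

# In the test script this would become:
#   wait_for_health "microceph.ceph status" 120
```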

ceph status

@pponnuvel (Collaborator) commented on `ceph status`:

s/ceph/microceph.ceph


echo "=== Step 8: Create RBD pool and image for testing ==="
# Create RBD pool if it doesn't exist
if ! ceph osd pool ls | grep -q "^test_pool$"; then

@pponnuvel (Collaborator) commented:

s/ceph/microceph.ceph

and below too.
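Step 8's idempotent pool creation could likewise be parameterised over the ceph command. A sketch — `ensure_rbd_pool` and the application-enable line are illustrative additions, not lifted from the PR:

```shell
# Sketch: create the RBD pool only if it doesn't exist yet, with the ceph
# command passed in so "microceph.ceph" can be used throughout.
ensure_rbd_pool() {
  ceph_cmd="$1"
  pool="$2"
  if ! $ceph_cmd osd pool ls | grep -q "^${pool}$"; then
    $ceph_cmd osd pool create "$pool"
    $ceph_cmd osd pool application enable "$pool" rbd
  fi
}

# In the test script:
#   ensure_rbd_pool microceph.ceph test_pool
```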

@taodd (Owner, Author) replied:

Thanks Pon, let me try to uninstall the pre-installed ceph package and see if it works.

@pponnuvel (Collaborator) commented:

It's still failing the same way:

Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')

It suggests a wrong bare "ceph" command is still being run somewhere. Perhaps the ceph packages you removed aren't comprehensive? Maybe ceph-common and others remain. I think it'd be more straightforward to just use microceph.ceph, so that it always talks to the microceph cluster regardless of which apt packages are installed. We can look at it next year!
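One quick way to check this is to see which binary a bare `ceph` resolves to. A sketch with a hypothetical helper name (`whereis_cmd`), and the diagnosis in the comment is my reading of the error, not confirmed in the thread:

```shell
# Hypothetical helper: report where a command resolves, or flag it as absent.
# A leftover apt-installed /usr/bin/ceph earlier on PATH would look for
# /etc/ceph/ceph.conf and fail with exactly this conf_read_file error,
# whereas the snap's microceph.ceph wrapper uses the snap's own config.
whereis_cmd() {
  command -v "$1" 2>/dev/null || echo "not-on-PATH: $1"
}

whereis_cmd ceph
whereis_cmd microceph.ceph
```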

