We find that casskop and Cassandra sometimes disagree about the state of the keyspaces, which prevents us from setting nodesPerRacks to 0.
1. Create a CassandraCluster with 3 DCs, each DC with 1 rack and nodesPerRacks set to 1 (a sketch of the manifest and commands for these steps follows this list).
2. Clean up the Cassandra pods by running kubectl casskop cleanup --pod cassandra-cluster-dc1-rack1-0. The operator uses Jolokia to communicate with Cassandra and runs the cleanup on each keyspace. The operator's log shows:

   [cassandra-cluster-dc1-rack1-0.cassandra-cluster]: Cleanup of keyspace system_distributed
   [cassandra-cluster-dc1-rack1-0.cassandra-cluster]: Cleanup of keyspace system_auth
   [cassandra-cluster-dc1-rack1-0.cassandra-cluster]: Cleanup of keyspace system_traces

3. Set nodesPerRacks of the first DC to 0. The operator still rejects the operation because it detects existing keyspaces:

   The Operator has refused the ScaleDown. Keyspaces still having data [system_distributed system_auth system_traces]
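For reference, the three steps can be reproduced with something like the commands below. This is only a sketch, not the exact manifest we used: the apiVersion, cluster/DC/rack names, and the per-DC nodesPerRacks field follow the casskop CassandraCluster CRD as we understand it, and image/storage settings are omitted, so adjust for your casskop version.

```sh
# Step 1: a minimal 3-DC CassandraCluster, 1 rack per DC, nodesPerRacks: 1.
# Field names follow the casskop CassandraCluster CRD; apiVersion and other
# settings (image, resources, storage) are approximate/omitted.
kubectl apply -f - <<'EOF'
apiVersion: db.orange.com/v1alpha1
kind: CassandraCluster
metadata:
  name: cassandra-cluster
spec:
  nodesPerRacks: 1
  topology:
    dc:
      - name: dc1
        nodesPerRacks: 1
        rack:
          - name: rack1
      - name: dc2
        nodesPerRacks: 1
        rack:
          - name: rack1
      - name: dc3
        nodesPerRacks: 1
        rack:
          - name: rack1
EOF

# Step 2: clean up the first node of dc1 (command taken from the report).
kubectl casskop cleanup --pod cassandra-cluster-dc1-rack1-0

# Step 3: set nodesPerRacks of the first DC to 0 (assumes the per-DC
# nodesPerRacks override shown in the manifest above); this is the
# operation the operator refuses.
kubectl patch cassandracluster cassandra-cluster --type=json \
  -p '[{"op": "replace", "path": "/spec/topology/dc/0/nodesPerRacks", "value": 0}]'
```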
In step 2 it seems that the keyspaces are already cleaned up on that node, but somehow in step 3 we still cannot set nodesPerRacks to 0, and therefore cannot delete the DC.
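One thing worth checking in this situation is whether the keyspaces the operator complains about still reference dc1 in their replication settings, since cleanup removes data the node no longer owns but does not change keyspace replication. A minimal check, assuming cqlsh is available in the Cassandra container and no authentication is required (add -c <container> if the pod has several containers):

```sh
# List replication settings for all keyspaces on the cleaned-up node; if
# dc1 still appears for system_distributed/system_auth/system_traces, the
# operator may still consider them as having data in that DC.
kubectl exec cassandra-cluster-dc1-rack1-0 -- cqlsh -e \
  "SELECT keyspace_name, replication FROM system_schema.keyspaces;"
```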
What did you do?
Tried to set nodesPerRacks to 0 before deleting a DC.
What did you expect to see?
nodesPerRacks is set to 0.
What did you see instead? Under which circumstances?
The operator still rejects the operation because it detects existing keyspaces.