Hello, I am facing an issue with the latest CHK kind resource, using operator version 24.2 and Keeper version 24.8.8.17. Here is the manifest that I am using:
The pod fails to come up with the following error:
However, when keeper_server/server_id: 1 is set under spec.configuration.settings, the pod comes up healthy. It seems that keeper_server/raft_configuration/server/id: 1 is taken into account when set under the cluster layout settings, whereas keeper_server/server_id is not honored when set in the same place. I verified this by keeping keeper_server/raft_configuration/server/id under spec.configuration.clusters[].layout.replicas[].settings and keeper_server/server_id under spec.configuration.settings.
The requirement is to assign a specific server ID to each replica rather than relying on the default assignment. Hence this cannot be achieved with spec.configuration.settings and must be done with spec.configuration.clusters[].layout.replicas[].settings. If that is not supported, is there any other way to achieve this?
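For reference, the per-replica layout described above would look roughly like the sketch below. This is a hypothetical fragment, not the original manifest: the kind, apiVersion, and names are assumptions based on the Altinity operator's CHK resource; only the settings paths (spec.configuration.settings and spec.configuration.clusters[].layout.replicas[].settings) and the keeper_server/server_id key are taken from the report above.

```yaml
# Hypothetical CHK manifest sketch -- kind/apiVersion/names are assumed,
# only the settings paths and keys come from the issue text.
apiVersion: "clickhouse-keeper.altinity.com/v1"
kind: "ClickHouseKeeperInstallation"
metadata:
  name: "keeper-example"
spec:
  configuration:
    # Global settings: a keeper_server/server_id placed here applies to
    # every replica, so it cannot express per-replica server IDs.
    settings: {}
    clusters:
      - name: "cluster1"
        layout:
          replicas:
            # Per-replica settings: the place where a replica-specific
            # server ID would need to be honored.
            - settings:
                keeper_server/server_id: "1"
            - settings:
                keeper_server/server_id: "2"
            - settings:
                keeper_server/server_id: "3"
```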