in-cluster DNS and load balancers on more platforms #1666
Is this a hard limitation? In general, BYO-infra installs on cloud providers that use the publish strategy External create public and private subnets, placing nodes in the private subnets. I see in the EP "provide in-cluster implementations of network services" that the Load Balancer service is created initially on the bootstrap node and then moved to the control-plane nodes when InClusterLoadBalancer is set. Would it be possible to have some flexibility to configure which nodes it is deployed on? Does that make sense? I am wondering whether we could advise "non-integrated" cloud providers that do not meet the LB requirements (e.g. hairpin), or that do not offer cloud-based LBs, to isolate the nodes where the load balancer is deployed in public installations. That would allow them to also opt in to this feature while keeping the standard network layout: control planes in private subnets, and haproxy deployed on nodes that can be exposed to the internet, generally in public subnets.
It is already possible to exercise at least some control over where the loadbalancer will run, which is how we deploy clusters with remote workers today. However, I agree with @mhrivnak that this is out of scope for simply enabling the DNS and LB infrastructure on platform None. If changes to how that infrastructure works are desired, that should be a separate discussion.
Nit: for real APIs we use CEL for both of these now; these validations don't actually work as intended.
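For illustration, CEL validations on real OpenShift APIs are attached with kubebuilder `XValidation` markers rather than the older kubebuilder enum/pattern-style markers. The type, field, rule, and message below are hypothetical, sketched only to show the marker shape:

```go
package v1

// LoadBalancerType is a hypothetical string type, used here only to
// illustrate how a CEL rule would be attached to an API field.
type LoadBalancerType string

// Platform is a hypothetical struct; the rule and message are examples,
// not the enhancement's actual validation.
type Platform struct {
	// +kubebuilder:validation:XValidation:rule="self == oldSelf || oldSelf == ''",message="loadBalancer is immutable once set"
	// +optional
	LoadBalancer LoadBalancerType `json:"loadBalancer,omitempty"`
}
```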
Are the existing API/ingress VIP fields going to be deprecated or moved to this type?
Hopefully that's not necessary. I proposed an API here just to have a starting point for conversation, but I'm happy for the team to weigh in on what would make the most sense. There needs to be some balance between fitting in with the existing APIs and making it obvious and simple to enable or disable the feature. And for this proposal, the default should continue to be "disabled", since that matches existing behavior.
Following the external-lb-vips enhancement, a loadBalancer property was added to the install config [here](https://github.com/openshift/installer/pull/6812/files). I think this enhancement is the counterpart of the external-lb-vips enhancement, and it could probably use the same API.
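For reference, the field added by that PR is set per platform in the install-config. A minimal sketch, using the baremetal platform and placeholder VIP addresses (this mirrors the external-lb-vips API, not necessarily this proposal's final shape):

```yaml
# install-config.yaml fragment (sketch): the loadBalancer field added for
# the external-lb-vips enhancement, shown on the baremetal platform.
platform:
  baremetal:
    apiVIPs:
      - 192.0.2.10
    ingressVIPs:
      - 192.0.2.11
    # "UserManaged" disables the in-cluster keepalived/haproxy stack;
    # the default, "OpenShiftManaged", keeps it enabled.
    loadBalancer:
      type: UserManaged
```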
There's prior art for controlling loadbalancer deployment based on either the existence of the VIPs or an explicit field (as in the feature @eranco74 linked). The existence of the VIPs was something of a kludge because the vSphere UPI implementation was done before we had these components and we needed to be able to differentiate UPI and IPI without breaking existing logic.
Given that, I'd mildly prefer to use the explicit field (which would default to the opposite of what the existing platforms do), in part because the existing vSphere UPI logic is already a headache that semi-regularly trips us up. It's probably not a big deal as long as we pick a method and stick to it for these new platforms, but if we implicitly enable the LB when VIPs are provided, then we'll have three different sets of logic:
- based only on the loadBalancer property (non-vSphere on-prem platforms),
- based on both the loadBalancer property and the existence of VIPs (vSphere), and
- based only on the existence of VIPs (none and external).
We're less likely to accidentally break this in the future if it works the same as the other on-prem platforms and vSphere is our only outlier.
If we go with the loadBalancer field discussed above, we probably want this to have a default value of "None" or "UserManaged" or something like that to match the loadBalancer behavior.
We also may not want to use CoreDNS in the option name. There have been discussions about changing the implementation of the on-prem DNS to something lighter weight and it would be confusing if the option CoreDNS actually deployed dnsmasq. That's also why the loadBalancer parameter names are "OpenShiftManaged" and "UserManaged" instead of "keepalived" and "none".
FWIW, in a perfect world I'd like to see this happen, for a couple of reasons:
That said, I recognize this could complicate things quite a lot, as discussed in the cons section. I guess you can consider this a vote in favor of a common config section, but not a vote against the proposal above.