OCPBUGS-55288: Added note about bonded interface and node IP address … #92534

Open · wants to merge 1 commit into base: main
35 changes: 20 additions & 15 deletions modules/ipi-install-establishing-communication-between-subnets.adoc
@@ -8,7 +8,12 @@

In a typical {product-title} cluster setup, all nodes, including the control plane and compute nodes, reside in the same network. However, for edge computing scenarios, it can be beneficial to locate compute nodes closer to the edge. This often involves using different network segments or subnets for the remote nodes than the subnet used by the control plane and local compute nodes. Such a setup can reduce latency for the edge and allow for enhanced scalability.

Before installing {product-title}, you must configure the network so that the edge subnets that contain the remote nodes can reach the subnet that contains the control plane nodes and can also receive traffic from the control plane.

[IMPORTANT]
====
During cluster installation, assign permanent IP addresses to the nodes in the network configuration of the `install-config.yaml` configuration file. If you do not, the nodes might receive temporary IP addresses, which can affect how traffic reaches them. For example, if a node has a temporary IP address and you configured a bonded interface for that node, the bonded interface might receive a different IP address. If this happens, node installation might not complete successfully.
====

Contributor: This is much better now. I'd just clarify that if this happens, node installation may never end correctly.

Contributor: I cannot think of concrete steps, because what you need to do is either already in the docs or out of the scope of it.
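To make the note above more concrete, the following is a minimal, illustrative sketch of how a permanent address for a bonded interface might be declared for a bare-metal host in `install-config.yaml`. The `networkConfig` block uses NMState syntax; the host name, MAC address, interface names, addresses, and routes are examples only, and the exact schema depends on your platform, so treat this as an assumption to adapt rather than a definitive configuration.

[source,yaml]
----
# Hypothetical fragment of install-config.yaml for one bare-metal host.
# All names and addresses are placeholders; adapt them to your environment.
platform:
  baremetal:
    hosts:
    - name: openshift-worker-0
      role: worker
      bootMACAddress: 52:54:00:00:00:01
      networkConfig:
        interfaces:
        - name: bond0
          type: bond
          state: up
          link-aggregation:
            mode: active-backup
            port:
            - enp1s0
            - enp2s0
          ipv4:
            enabled: true
            dhcp: false            # static, permanent address
            address:
            - ip: 192.168.0.22
              prefix-length: 24
        routes:
          config:
          - destination: 10.0.0.0/24
            next-hop-address: 192.168.0.1
            next-hop-interface: bond0
----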

You can run control plane nodes in the same subnet or in multiple subnets by configuring a user-managed load balancer in place of the default load balancer. With a multiple-subnet environment, you can reduce the risk that your {product-title} cluster fails because of a hardware failure or a network outage. For more information, see "Services for a user-managed load balancer" and "Configuring a user-managed load balancer".

@@ -33,21 +38,21 @@ In this procedure, the cluster spans two subnets:
.Procedure

. Configure the first subnet to communicate with the second subnet:

+
.. Log in as `root` to a control plane node by running the following command:
+
[source,terminal]
----
$ sudo su -
----

+
.. Get the name of the network interface by running the following command:
+
[source,terminal]
----
# nmcli dev status
----

+
.. Add a route to the second subnet (`192.168.0.0`) via the gateway by running the following command:
+
[source,terminal]
@@ -63,7 +68,7 @@ Replace `<interface_name>` with the interface name. Replace `<gateway>` with the
----
# nmcli connection modify eth0 +ipv4.routes "192.168.0.0/24 via 192.168.0.1"
----

+
.. Apply the changes by running the following command:
+
[source,terminal]
@@ -72,14 +77,14 @@ Replace `<interface_name>` with the interface name. Replace `<gateway>` with the
----
+
Replace `<interface_name>` with the interface name.

+
.. Verify that the route has been added successfully by checking the routing table with the following command:
+
[source,terminal]
----
# ip route
----
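+
If the route was added successfully, the output includes an entry for the second subnet. The following line is illustrative only; the placeholders follow the convention of the earlier steps, and the exact fields, such as the metric, vary by environment:
+
[source,terminal]
----
192.168.0.0/24 via <gateway> dev <interface_name> proto static metric 100
----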

+
.. Repeat the previous steps for each control plane node in the first subnet.
+
[NOTE]
@@ -88,21 +93,21 @@ Adjust the commands to match your actual interface names and gateway.
====

. Configure the second subnet to communicate with the first subnet:

+
.. Log in as `root` to a remote compute node by running the following command:
+
[source,terminal]
----
$ sudo su -
----

+
.. Get the name of the network interface by running the following command:
+
[source,terminal]
----
# nmcli dev status
----

+
.. Add a route to the first subnet (`10.0.0.0`) via the gateway by running the following command:
+
[source,terminal]
@@ -118,7 +123,7 @@ Replace `<interface_name>` with the interface name. Replace `<gateway>` with the
----
# nmcli connection modify eth0 +ipv4.routes "10.0.0.0/24 via 10.0.0.1"
----

+
.. Apply the changes by running the following command:
+
[source,terminal]
@@ -127,14 +132,14 @@ Replace `<interface_name>` with the interface name. Replace `<gateway>` with the
----
+
Replace `<interface_name>` with the interface name.

+
.. Verify that the route has been added successfully by checking the routing table with the following command:
+
[source,terminal]
----
# ip route
----

+
.. Repeat the previous steps for each compute node in the second subnet.
+
[NOTE]
@@ -143,7 +148,7 @@ Adjust the commands to match your actual interface names and gateway.
====

. After you have configured the networks, test the connectivity to ensure the remote nodes can reach the control plane nodes and the control plane nodes can reach the remote nodes.

+
.. From the control plane nodes in the first subnet, ping a remote node in the second subnet by running the following command:
+
[source,terminal]
@@ -152,7 +157,7 @@ $ ping <remote_node_ip_address>
----
+
If the ping is successful, the control plane nodes in the first subnet can reach the remote nodes in the second subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node.
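+
As an additional, optional check that is not part of this procedure, you can confirm which gateway and interface the node uses to reach a remote node by querying the routing table with the standard `ip route get` command:
+
[source,terminal]
----
$ ip route get <remote_node_ip_address>
----
+
The output should reference the gateway and interface that you configured in the previous steps.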

+
.. From the remote nodes in the second subnet, ping a control plane node in the first subnet by running the following command:
+
[source,terminal]