LukeRepko commented May 29, 2025

The default build layout left some network configuration to be desired: when setting up the inner cloud's provider network, there was no easy way to route real external internet traffic to the inner cloud's neutron routers. This change dedicates the pre-existing enp5s0 interface as a "neutron overlay network" carrying only Neutron geneve traffic, which could be used for east/west VM connectivity if desired (though that would require advanced configuration not covered here or in the Genestack documentation). Consider it reserved for future use at the moment.

A separate provider network (enp6s0) is added and connected to the osflex-router. The idea is that one will create a flat provider network on the inner cloud sharing the same subnet as the osflex-provider-subnet. This allows for a double-NAT configuration so that floating IPs (FLIPs) from the outer cloud can NAT to IPs designated in the inner cloud's PUBLICNET subnet allocation pool (which should be configured with the same range defined for provider_vips in variables.tf).
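As a rough sketch of that inner-cloud side, the flat provider network might be created along these lines. The physical network name, subnet range, gateway, and allocation pool below are placeholders and not part of this change; they should mirror the osflex-provider-subnet and the provider_vips range from variables.tf:

# physnet1, the CIDR, and the pool boundaries are assumptions for illustration only
openstack network create --external --provider-network-type flat \
  --provider-physical-network physnet1 PUBLICNET
openstack subnet create --network PUBLICNET \
  --subnet-range 192.168.200.0/24 --gateway 192.168.200.1 \
  --allocation-pool start=192.168.200.100,end=192.168.200.150 \
  --no-dhcp PUBLICNET-subnet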

Some resources have been renamed to help alleviate confusion when looking at them.

Note: You may need to remove the extra default route that gets added to the network and compute nodes:

ansible -m shell -a 'sudo ip route del default via 192.168.200.1 dev enp6s0' all --limit 'network0*'
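If the same stale route shows up on the compute nodes, the same ad-hoc command can be run against them; the host pattern below is an assumption based on the naming used above and may need adjusting to match your inventory:

ansible -m shell -a 'sudo ip route del default via 192.168.200.1 dev enp6s0' all --limit 'compute*'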

Example LAB Network Architecture (see the osflex-provider-example diagram attached to this PR).

The purpose of each named network and subnet was often unclear.
The Neutron overlay network should exist only on the network and compute nodes.
This adds a provider network to be used with the Neutron deployment.

It is still a WIP.
Disabling port security is needed to spoof IP/MAC pairs unless we populate allowed address pairs with everything up front. Since we are definitely not going to do that, we disable port security, which makes it necessary to remove the allowed address pairs (allowed address pairs require port security to be enabled). The hope is that we will still be able to raise these "VIPs" on the network gateway nodes: the outer Genestack gateway node will ARP for the VIP, and once a router and/or port is assigned on the inner cloud, ARP will complete and traffic will flow.
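For reference, the equivalent manual step on the outer cloud would look something like the following; the port name here is hypothetical, and the real targets are the enp6s0 provider ports of the inner cloud's network/gateway nodes:

# placeholder port name; security groups must be cleared before port security can be disabled
openstack port set --no-security-group --disable-port-security osflex-network0-provider-port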