Hi @bradh352, an interesting case you have, and I see the requirement for VLAN-trunk-enabled VMs coming up in more and more requests, one of them being #11491, where you are involved already as well. In your specific use case, one idea that comes to mind is the use of VXLAN.

Regarding pinning a VM to a host, one very simple solution would be a specific host tag (e.g. sriovnic01_host01) and a compute offering using this host tag. As a result, the VM will always be scheduled to run on the host with this specific tag.

Best regards,
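For reference, with CloudMonkey (cmk) the tag-and-offering setup could look roughly like this; the host UUID, tag name, and offering sizes below are placeholders:

```shell
# Tag the host that carries the SR-IOV NIC (tag name is illustrative)
cmk update host id=<host-uuid> hosttags=sriovnic01_host01

# A compute offering restricted to hosts carrying that tag; VMs deployed
# with this offering will only be scheduled on matching hosts
cmk create serviceoffering name=sriov-pinned displaytext="Pinned to SR-IOV host" \
    cpunumber=4 cpuspeed=2000 memory=8192 hosttags=sriovnic01_host01
```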
Since it is not currently possible to pass multiple VLANs through to a VM via a single interface, and I have the need to forward a few hundred(!) VLANs through to a guest, I'm trying to explore my options. It appears that at around 23 vNICs you start getting failures, so adding a few hundred vNICs to the guest isn't an option. From what I've read, this is because virtio-net consumes a PCI slot per vNIC and there are only 32 slots available in total.
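(For anyone hitting the same wall: on the default i440fx-style machine type each virtio-net device takes a PCI slot, so you can get a rough feel for how close a guest is to the 32-slot budget by counting PCI addresses in its domain XML. A small sketch; the helper name is made up:)

```shell
# Count PCI <address type='pci' .../> entries in a libvirt domain XML
# read from stdin. Usage: virsh dumpxml <vm-name> | count_pci_slots
count_pci_slots() {
    grep -c "address type='pci'"
}
```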
I think the solution is going to be to use Network VF / SR-IOV / PCI-passthrough. From what I can tell, that would mean using "extraconfig" in the VM to tell it to pass through the PCIe VF like:
https://gist.github.com/rajujith/4cc3f17379b63e86f73b041f2be75528#file-enable-gpu-passthrough-on-cloudstack-kvm-with-extraconfig
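(For context, the gist boils down to merging a libvirt `<hostdev>` fragment into the domain XML through the VM's extraconfig details, roughly like the following. The PCI address is a placeholder for the VF as shown by lspci on the host, and I believe the global setting `allow.additional.vm.configuration.list.kvm` has to permit the elements used:)

```xml
<devices>
  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <!-- VF address as reported by lspci on the host (placeholder values) -->
      <address domain='0x0000' bus='0x03' slot='0x00' function='0x2'/>
    </source>
  </hostdev>
</devices>
```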
I can live with that. But how do I pin this VM to this host? I have DRS enabled, so it might try to migrate it, wouldn't it? If there isn't a way to pin the VM, is there a way to exclude it from DRS?
Also, what about maintenance mode? That tries to live-migrate VMs away, but in this case I'd need to shut the VM down instead. I see something about rolling maintenance scripts/hooks:
https://www.shapeblue.com/cloudstack-feature-first-look-kvm-rolling-maintenance/
Up to now we've just used the normal maintenance mode before doing tasks on a host. But we deploy with Ansible, so maybe we just add the shutdown/startup there ... still, it would be convenient to have a hook in case someone needs to power a host down for another reason, such as swapping out a memory stick. Will those same hooks be called?
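If the hooks do fire for manual maintenance too, I imagine the maintenance-stage script could just shut the pinned guest down instead of letting a live migration fail. A minimal sketch, assuming the hook runs on the host itself; the instance name and the `plan_shutdown` helper are made up:

```shell
#!/bin/sh
# Print the shutdown command for the pinned VM if it is currently running.
# $1 = pinned instance name, $2 = output of `virsh list --name`
plan_shutdown() {
    pinned="$1"
    running="$2"
    if printf '%s\n' "$running" | grep -qx "$pinned"; then
        echo "virsh shutdown $pinned"
    fi
}

# In the real hook you would run: plan_shutdown i-2-345-VM "$(virsh list --name)" | sh
```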
Anyhow, am I on the right path here for my needs? Any insight would be helpful!
===
Just because I know someone is going to ask why I could possibly need this: the scenario is that I have some specialized equipment units that must be fully isolated from each other (we have a few hundred of them), so they can't reside in the same VLAN. They also need L2 adjacency for all services, because their default gateways get reconfigured for special needs in their environments. And finally, we need to be able to access this equipment while appearing to be L2-adjacent, so we SNAT out that port and all traffic appears, from their point of view, to originate from their own L2 segment. Yeah, it's bad. No, there's no way to "fix" it.