Conversation
open_enum::open_enum! {
-    /// ClassCode identifies the PCI device's type.
+    /// ClassCode identifies the PCI device's type, the "Base Class".
Note for reviewers: these changes are to match the terminology in the PCI Code and ID Assignment Specification from the PCI-SIG standards body.
While this primitive is useful for me in the NVMe context, I could see a generalized override making sense. In the current architecture, each device owns the …
I can offer some context on why this is modeled the way it is. In a nutshell: we wanted to keep the "core" PCI interconnect trait as simple (and as flexible) as possible, so we could decouple our PCI root controller implementations (e.g. vPCI, legacy PCI, and in the future, PCIe) from the various PCI devices that plug into them (e.g. legacy gen1 PCI devices, virtio-pci, modern PCI devices, etc.). Pushing the details of config space emulation/layout into the devices themselves (aided by a common shared "helper" emulator that those devices can leverage) lets the core interface be as simple as "read/write bytes from a device's PCI config space", which significantly simplifies the wiring required to mix and match PCI devices, buses, and topologies.

Of course, this means there's no centralized place to set up these sorts of overrides, and you are forced to plumb this config info through manually. That said, I might have some ideas on how we can simplify some of the plumbing here. I'll leave some more comments on the PR.
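The "read/write bytes from a device's PCI config space" core interface described above can be sketched roughly like this. All names here (`PciConfigSpace`, `ToyDevice`) are hypothetical, chosen for illustration; the real codebase's trait is richer:

```rust
/// Hypothetical minimal "core" PCI interconnect trait: a bus only needs
/// to forward raw config-space accesses; layout is the device's problem.
pub trait PciConfigSpace {
    /// Read the 32-bit dword at the given config-space offset.
    fn pci_cfg_read(&mut self, offset: u16) -> u32;
    /// Write the 32-bit dword at the given config-space offset.
    fn pci_cfg_write(&mut self, offset: u16, value: u32);
}

/// Toy device whose 256-byte config space is just a flat dword array.
pub struct ToyDevice {
    cfg: [u32; 64],
}

impl ToyDevice {
    pub fn new(vendor_id: u16, device_id: u16) -> Self {
        let mut cfg = [0u32; 64];
        // Dword 0 is the ID register: device ID in the high half,
        // vendor ID in the low half.
        cfg[0] = ((device_id as u32) << 16) | vendor_id as u32;
        ToyDevice { cfg }
    }
}

impl PciConfigSpace for ToyDevice {
    fn pci_cfg_read(&mut self, offset: u16) -> u32 {
        self.cfg[(offset as usize) / 4]
    }
    fn pci_cfg_write(&mut self, offset: u16, value: u32) {
        self.cfg[(offset as usize) / 4] = value;
    }
}
```

Because every device speaks this one narrow interface, a root controller implementation never needs to know whether it is hosting a legacy device, a virtio-pci device, or anything else.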
If this is just for testing/prototyping, I wonder if it would be better to instead make a wrapper PCI device that allows various config space functionality to be tweaked before/after calling through to the real device.
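A minimal sketch of that wrapper idea, assuming a simplified config-space trait. All names (`PciConfigSpace`, `ConfigOverride`, `Fixed`) are hypothetical stand-ins, not the actual device interface:

```rust
use std::collections::HashMap;

/// Hypothetical minimal config-space trait (stand-in for the real one).
pub trait PciConfigSpace {
    fn pci_cfg_read(&mut self, offset: u16) -> u32;
    fn pci_cfg_write(&mut self, offset: u16, value: u32);
}

/// Wrapper device: overrides selected config-space dwords on read and
/// delegates everything else to the wrapped device.
pub struct ConfigOverride<D> {
    inner: D,
    read_overrides: HashMap<u16, u32>,
}

impl<D: PciConfigSpace> ConfigOverride<D> {
    pub fn new(inner: D) -> Self {
        Self { inner, read_overrides: HashMap::new() }
    }
    /// Register a dword override at `offset` (e.g. the ID or class-code
    /// register), applied to subsequent reads.
    pub fn override_read(&mut self, offset: u16, value: u32) {
        self.read_overrides.insert(offset, value);
    }
}

impl<D: PciConfigSpace> PciConfigSpace for ConfigOverride<D> {
    fn pci_cfg_read(&mut self, offset: u16) -> u32 {
        match self.read_overrides.get(&offset) {
            Some(&v) => v,
            None => self.inner.pci_cfg_read(offset),
        }
    }
    fn pci_cfg_write(&mut self, offset: u16, value: u32) {
        self.inner.pci_cfg_write(offset, value)
    }
}

/// Tiny fixed-content device used for demonstration.
pub struct Fixed(pub [u32; 4]);
impl PciConfigSpace for Fixed {
    fn pci_cfg_read(&mut self, offset: u16) -> u32 {
        self.0[(offset as usize) / 4]
    }
    fn pci_cfg_write(&mut self, offset: u16, value: u32) {
        self.0[(offset as usize) / 4] = value;
    }
}
```

Since the wrapper implements the same trait as the device it wraps, nothing in the bus or the wrapped device needs to change.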
Hah, and here I was just about to leave a different suggestion (something along the lines of adding a new field to …). The caveat is that you end up paying extra virtual dispatch cost when doing chipset device operations (IO, MMIO, PCI, etc.), which is a bit unfortunate. My follow-on suggestion is that you could implement overrides as a feature on the PCI "bus" itself, i.e. when attaching a device to the vPCI bus, give the bus the ability to associate ID overrides with particular devices, and have it do an extra intercept on incoming/outbound pci_{read,write} calls to devices.
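That bus-level intercept could look roughly like this sketch. Again, all names (`PciConfigSpace`, `Bus`, `Fixed`) are hypothetical; the real vPCI bus interface is more involved:

```rust
use std::collections::HashMap;

/// Hypothetical minimal config-space trait (stand-in for the real one).
pub trait PciConfigSpace {
    fn pci_cfg_read(&mut self, offset: u16) -> u32;
    fn pci_cfg_write(&mut self, offset: u16, value: u32);
}

/// Bus that owns its devices and can intercept reads of the ID
/// register (dword 0) with a per-slot override set at attach time.
pub struct Bus {
    devices: HashMap<u8, Box<dyn PciConfigSpace>>,
    // slot -> (device_id << 16 | vendor_id) override
    id_overrides: HashMap<u8, u32>,
}

impl Bus {
    pub fn new() -> Self {
        Self { devices: HashMap::new(), id_overrides: HashMap::new() }
    }

    pub fn attach(
        &mut self,
        slot: u8,
        dev: Box<dyn PciConfigSpace>,
        id_override: Option<u32>,
    ) {
        self.devices.insert(slot, dev);
        if let Some(id) = id_override {
            self.id_overrides.insert(slot, id);
        }
    }

    pub fn pci_read(&mut self, slot: u8, offset: u16) -> u32 {
        // Intercept ID-register reads before forwarding to the device.
        if offset == 0 {
            if let Some(&id) = self.id_overrides.get(&slot) {
                return id;
            }
        }
        // Empty slots read as all-ones, as on real PCI.
        self.devices
            .get_mut(&slot)
            .map(|d| d.pci_cfg_read(offset))
            .unwrap_or(!0)
    }

    pub fn pci_write(&mut self, slot: u8, offset: u16, value: u32) {
        if let Some(d) = self.devices.get_mut(&slot) {
            d.pci_cfg_write(offset, value);
        }
    }
}

/// Tiny fixed-content device used for demonstration.
pub struct Fixed(pub [u32; 4]);
impl PciConfigSpace for Fixed {
    fn pci_cfg_read(&mut self, offset: u16) -> u32 {
        self.0[(offset as usize) / 4]
    }
    fn pci_cfg_write(&mut self, offset: u16, value: u32) {
        self.0[(offset as usize) / 4] = value;
    }
}
```

The virtual dispatch on `Box<dyn PciConfigSpace>` is the cost mentioned above; the upside is that devices stay completely unaware of the override machinery.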
@jstarks, @daprilik Thank you for the suggestions, and I agree with the course of action. If I'm following correctly, then I'd:
Is this what you had in mind? This raises a separate question for me: where is the machinery that decides how to offer this device (e.g. over virtio, vpci, or on the emulated pci bus)? I don't think I'll get to these changes in a timely manner, so I'll move this PR back to draft and pick it back up when I am able.
If you take my suggestion (as opposed to doing this in the bus, which is also a reasonable suggestion), then instead of doing something in the NVMe resolver, you'd make a new resource. The resolver for this thing would resolve it, so there wouldn't be any NVMe-related changes here.
Got it, thanks. This is helpful.
And if you take my suggestion (pushing this override logic into the vpci bus itself), I believe the key function to work around is …
Will reopen or post a new PR if I complete this work.
I have found it useful to ask the NVMe device to pretend to be a different PCI device type. This is helpful for testing and prototyping.
Validated via unit tests.