[LTS 9.4] dmaengine: idxd: Fix possible Use-After-Free in irq_process_work_list #422
Conversation
jira VULN-8255
cve CVE-2024-40956
commit-author Li RongQing <[email protected]>
commit e3215de

Use list_for_each_entry_safe() to allow iterating through the list and deleting the entry in the iteration process. The descriptor is freed via idxd_desc_complete() and there's a slight chance may cause issue for the list iterator when the descriptor is reused by another thread without it being deleted from the list.

Fixes: 16e19e1 ("dmaengine: idxd: Fix list corruption in description completion")
Signed-off-by: Li RongQing <[email protected]>
Reviewed-by: Dave Jiang <[email protected]>
Reviewed-by: Fenghua Yu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Vinod Koul <[email protected]>
(cherry picked from commit e3215de)
Signed-off-by: Anmol Jain <[email protected]>
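For readers unfamiliar with the kernel list API, the difference between the two iterators is what makes this a use-after-free: list_for_each_entry() reads the next pointer out of the current entry after the loop body has run, while list_for_each_entry_safe() caches the next entry before the body executes. Below is a minimal, illustrative sketch of that pattern only; struct demo_desc, demo_complete() and process_work_list() are made-up stand-ins for this example, not the actual idxd code, which (per the commit message) frees descriptors via idxd_desc_complete().

```c
#include <linux/list.h>
#include <linux/slab.h>

/* Made-up stand-ins for illustration; the real driver uses struct idxd_desc
 * and completes descriptors via idxd_desc_complete(). */
struct demo_desc {
	struct list_head list;
	int status;
};

static void demo_complete(struct demo_desc *desc)
{
	/* May free the descriptor, or hand it back to a pool where another
	 * thread can immediately reuse it and relink desc->list. */
	kfree(desc);
}

static void process_work_list(struct list_head *work_list)
{
	struct demo_desc *desc, *next;

	/*
	 * Buggy pattern:
	 *
	 *	list_for_each_entry(desc, work_list, list) {
	 *		list_del(&desc->list);
	 *		demo_complete(desc);
	 *	}
	 *
	 * After the body runs, the iterator advances by reading
	 * desc->list.next from a descriptor that may already have been
	 * freed or reused: a use-after-free.
	 */
	list_for_each_entry_safe(desc, next, work_list, list) {
		/* 'next' was captured before this body ran, so freeing or
		 * recycling 'desc' here cannot corrupt the iteration. */
		list_del(&desc->list);
		demo_complete(desc);
	}
}
```

With the safe variant, completing (and potentially freeing or recycling) the descriptor inside the loop body no longer matters, because the iterator never touches that descriptor's list.next again, which is exactly the reuse window the commit message describes.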
This PR is causing the
|
@PlaidCat @thefossguy-ciq @jainanmol84 I think we need a script that installs all the packages required for kselftest. Many kselftests were initially failing for me because some packages needed by those tests were missing. I then tried to install at least the basic ones, but we will all get different results with different package sets. So I will try to put together a standard installation script that everyone can use after setting up their VM. |
Does running the |
I think this is one of those fun issues of |
I re-ran the tests and got 2 additional failures.
|
This makes no sense, especially numbers-wise. 3 fewer tests are passing but only 1 more test is failing. Math ain't mathing. |
@jainanmol84 can you run kselftest twice on the "old" kernel (without your patch) and see if that has any variance? |
If there are no obvious failures in the |
I tried this:
That way, I got a total of 4 log files, one per kselftest execution run. Diffing them, I see a different kselftest failing each time, on the same kernel and without rebooting in between.
|
Sorry for dragging out the kselftest discussion. It turned out to be irrelevant to this particular PR. Hence, approving. 🚤
Forgot to specify this earlier: tests from different (and unrelated) subsystems are failing, indicating a deeper issue that is unrelated to this PR. |
I've maybe not reinforced this enough. Ideally I'd like a fresh VM to run each kselftest individually per PR, and we would just run the kselftest for the subsystem we're operating in. Basically:
Right now that is a little beyond our local systems |
@PlaidCat Actually this is right within reach of But let me tell you right away - doing this for each test will blow out the testing time from around 2-3 hours to around 2-3 days. |
Well, scratch that. Nothing needs to be implemented. Here's the script which does this kind of testing for 3 random tests:
The results are in separate files tagged with a timestamp, like |
Yeah, this is why I want multiple machines, and its "Grand Design" (or my cruddy bullet points) knows that it's more than just on the host |
Sorry for coming to the party uninvited, but it just breaks my heart to see other people fighting unnecessary fights. We could save so much energy by not re-inventing the wheel and channel it into actually pushing the cart. @shreeya-patel98 shares this sentiment too, I see
This is exactly what was driving the rocky-patching project at https://gitlab.conclusive.pl/devices/rocky-patching. It's kinda big and odd, with some tools most would find exotic, so it's not for everyone, but you can look, @shreeya-patel98, at the yaml files at https://gitlab.conclusive.pl/devices/rocky-patching/-/tree/master/jinja/cloud-init?ref_type=heads - they serve as basically the very scripts you want to write (launched automatically at the machine's spin-up), so you have a starting point. I would love to see what you end up adding to them, as I still have some test suites not compiling properly. Here's a file which categorizes the selftests according to how they behave (flappy / stable / broken / etc.) so that the problematic ones can be omitted and the tests can be run smoothly: https://gitlab.conclusive.pl/devices/rocky-patching/-/blob/master/rocky.yml?ref_type=heads. Again, it's part of the framework, but it may as well be used "manually" for whatever one needs. For example, I skip all the tests marked as "bad", because running them is meaningless and they often screw up the machine in some way. |
Commit message
Kernel build logs
kernel-build.log
Kselftests
kselftest-after.log
kselftest-before.log