Dynamic tuning of irq and kthreads. #85
Is dynamic tuning of IRQs and kthreads, like that described in redhat-performance/tuned#631, possible in bpftune?

Comments
One thing I've been thinking about is doing dynamic receive packet steering (RPS) and similar tuning on the networking side when users pin tasks. I'll take a look at the above.
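For concreteness, a minimal sketch of what such steering could look like, using the kernel's RPS sysfs interface (`/sys/class/net/<dev>/queues/rx-<n>/rps_cpus`). The device name, queue index, and CPU masks here are illustrative assumptions, not what bpftune actually does:

```c
/* Hypothetical sketch: when a latency-sensitive task is pinned to a CPU,
 * steer receive packet processing away from that CPU by rewriting the
 * RPS CPU mask for a queue. Device, queue, and masks are assumptions. */
#include <stdio.h>

static int set_rps_cpus(const char *dev, int rxq, unsigned long cpu_mask)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/class/net/%s/queues/rx-%d/rps_cpus", dev, rxq);
	f = fopen(path, "w");
	if (!f)
		return -1;
	/* the kernel parses this as a hex bitmap of CPUs allowed to do RPS */
	fprintf(f, "%lx\n", cpu_mask);
	fclose(f);
	return 0;
}

int main(void)
{
	/* app pinned to CPU 1 (mask 0x2): steer RPS to the other CPUs of a
	 * 4-CPU box, mask 0xd = CPUs 0, 2, 3 */
	return set_rps_cpus("eth0", 0, 0xdUL);
}
```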
So, here's another angle to consider. In a former life, we had some dual-core redis hosts which were struggling to keep up with the demand placed on them. Upon doing some digging, it came to light that the single-threaded redis process wasn't able to take advantage of the second core on the host. As I understand it, the Linux kernel will by default try to be smart (and usually is) and will schedule a process on the core handling the IRQs of the resources the process interacts with most, to avoid context switching. HOWEVER, with a sufficiently busy single-threaded process, that is no longer the right call: the redis process was getting starved of available CPU cycles because its scheduled core was busy pushing packets. Once we separated the two, pinning the NIC IRQs and the redis process onto different cores, we achieved a ~30% uptick in redis performance as delivered to the consuming nodes. IOW: the increased softirq/context-switching penalty was less than the contention caused by colocating process and IRQ.

Admittedly, this is something of a distinct case, but it does highlight the challenge bpftune faces in offering 'good' advice. What would NORMALLY be a good thing to do from a 'latency of a request internal to the system' perspective is actually NOT the right thing to do from an 'observed performance where it matters' perspective.
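As an illustration of the separation described above (not the commenter's exact setup), here is a minimal sketch of the process side using `sched_setaffinity()`; the CPU numbers and the IRQ number are assumptions:

```c
/* Sketch: pin the current process (e.g. the redis server) to CPU 1 and
 * leave CPU 0 free for NIC IRQ/softirq work. IRQ affinity itself is set
 * separately via /proc/irq/<irq>/smp_affinity (a hex CPU mask). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(1, &set);	/* run the app on CPU 1 only */
	if (sched_setaffinity(0, sizeof(set), &set))
		perror("sched_setaffinity");

	/* NIC IRQs would then be steered to CPU 0, e.g. (as root):
	 *   echo 1 > /proc/irq/<nic_irq>/smp_affinity   # mask 0x1 = CPU 0 */
	printf("pinned to CPU 1; steer NIC IRQs to CPU 0 separately\n");
	return 0;
}
```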
Thanks for this @wolfspyre! It's a really interesting case; can we distinguish the cases where colocation of packet processing and the app hurts rather than helps? In thinking about this, I'm wondering what effects we might observe that would lead us to spot these sorts of scenarios; excessive queueing of requests at the socket layer as the app is starved of cycles, perhaps? I recently added code to tune softirq processing time, increasing it where we still have packets to handle when the time runs out; this is balanced by checks that compare how much time tasks are waiting. The idea is that if excessive softirq processing shuts out app processing, we decrease softirq time. But this is all on the same core; see the sketch below.
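A hedged sketch of that balancing logic, assuming the knob in question is something like `net.core.netdev_budget_usecs` and that we can sample a "time squeeze" count (softirq ran out of budget with packets pending) and average task runqueue wait. The thresholds and names are illustrative, not bpftune's actual code:

```c
/* Sketch of the described balance: grow the softirq budget when softirq
 * time runs out with packets still queued, shrink it when runnable tasks
 * are waiting noticeably longer for the CPU. Thresholds are assumptions. */
#include <stdio.h>

struct sample {
	unsigned long time_squeeze;   /* softnet "ran out of budget" count */
	unsigned long task_wait_ns;   /* avg runqueue wait for tasks */
};

static long next_budget_usecs(long cur, struct sample *prev, struct sample *now)
{
	int squeezed = now->time_squeeze > prev->time_squeeze;
	int starved  = now->task_wait_ns > 2 * prev->task_wait_ns; /* assumed threshold */

	if (starved)
		return cur - cur / 4;	/* apps starved of cycles: shrink */
	if (squeezed)
		return cur + cur / 4;	/* packets pending, CPU not contended: grow */
	return cur;
}

int main(void)
{
	struct sample prev = { 100, 1000 }, now = { 120, 1100 };

	/* 2000 usecs is the kernel default for net.core.netdev_budget_usecs */
	printf("new budget: %ld usecs\n", next_budget_usecs(2000, &prev, &now));
	return 0;
}
```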