Use fair spinlocks in SMP #1488

Closed
patacongo opened this issue Jul 30, 2020 · 5 comments · Fixed by #10605
Labels
Community: Good first issue · Type: Enhancement

Comments

patacongo (Contributor) commented Jul 30, 2020

The spinlock implementation is very simple: each CPU tries to acquire the spinlock via an atomic test-and-set operation, and the first CPU to successfully perform the test-and-set is the "winner" and gets the spinlock. Which CPU wins is completely arbitrary and is not managed in any way.
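For reference, a minimal sketch of such a lock using C11 atomics (illustrative only; the actual NuttX implementation is built on the architecture-specific up_testset() primitive and spinlock_t type):

```c
#include <stdatomic.h>

typedef atomic_flag tas_spinlock_t;
#define TAS_SPINLOCK_INIT ATOMIC_FLAG_INIT

static inline void tas_lock(tas_spinlock_t *lock)
{
  /* Spin until the flag was previously clear.  Whichever CPU's
   * test-and-set lands first wins; there is no ordering among
   * waiters, so a CPU can lose this race indefinitely.
   */

  while (atomic_flag_test_and_set_explicit(lock, memory_order_acquire))
    {
    }
}

static inline void tas_unlock(tas_spinlock_t *lock)
{
  atomic_flag_clear_explicit(lock, memory_order_release);
}
```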

This kind of spinlock can have adverse effects in SMP because, under certain conditions, other CPUs may always win the spinlock while one CPU is randomly delayed for a significant amount of time.

We should consider replacing it with a fair spinlock implementation that enforces first-in, first-out access. There are several fair spinlock algorithms: some are established and in common use, some are simple and some complex, some are straightforward and some more controversial, and some have undesirable side effects. A first step would be to decide on an appropriate fair spinlock algorithm.
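As an illustration of one widely used fair algorithm, here is a minimal sketch of a FIFO "ticket" spinlock (the scheme the Linux kernel used for many years), again written with C11 atomics and hypothetical names:

```c
#include <stdatomic.h>

typedef struct
{
  atomic_uint next;   /* Next ticket to hand out */
  atomic_uint owner;  /* Ticket currently allowed to hold the lock */
} ticket_spinlock_t;

static inline void ticket_lock(ticket_spinlock_t *lock)
{
  /* Atomically draw a ticket; the fetch_add order defines the FIFO
   * order in which waiters will be served.
   */

  unsigned int ticket = atomic_fetch_add_explicit(&lock->next, 1,
                                                  memory_order_relaxed);

  /* Spin until our number comes up. */

  while (atomic_load_explicit(&lock->owner, memory_order_acquire)
         != ticket)
    {
    }
}

static inline void ticket_unlock(ticket_spinlock_t *lock)
{
  /* Hand the lock to the next ticket holder in line. */

  atomic_fetch_add_explicit(&lock->owner, 1, memory_order_release);
}
```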

patacongo (Contributor, Author) commented

> We should consider replacing it with a fair spinlock implementation that enforces first-in, first-out access.

Most fair spinlock algorithms try to maintain FIFO spinlock access. I would think that for a real-time system, priority-based access would give better control over response latencies.
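One simple (non-strict) way to realize such a bias, reusing the hypothetical tas_spinlock_t sketch above, is priority-proportional backoff: lower-priority CPUs wait longer between test-and-set attempts, so higher-priority waiters tend to win the retry race. This is only a sketch of the idea, not a vetted real-time algorithm; MAX_PRIORITY and the delay loop are illustrative:

```c
#include <stdatomic.h>

#define MAX_PRIORITY 255

/* Reuses tas_spinlock_t from the earlier test-and-set sketch. */

static inline void prio_biased_lock(tas_spinlock_t *lock, int priority)
{
  while (atomic_flag_test_and_set_explicit(lock, memory_order_acquire))
    {
      /* Back off in inverse proportion to priority so that, on
       * average, higher-priority waiters retry sooner.  This biases
       * ordering rather than guaranteeing it.
       */

      for (volatile int i = 0; i < (MAX_PRIORITY - priority) * 16; i++)
        {
        }
    }
}
```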

mu578 commented Jun 14, 2023

Yes, that was the point of my comment in the other thread, i.e., the context-switching overhead of entering a critical section; there is something like the Lamport algorithm, if I remember correctly.

@patacongo, for some references: an interesting use of ticket spinlocks within the Linux kernel is https://github.com/drouyang/pmtlock.
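For reference, a minimal sketch of the Lamport bakery algorithm alluded to above, for a fixed number of CPUs. It grants the lock in FIFO order of ticket draw and, notably, needs only atomic loads and stores rather than read-modify-write operations; NCPUS and all names here are illustrative:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define NCPUS 4

static atomic_bool g_choosing[NCPUS];
static atomic_uint g_number[NCPUS];

static void bakery_lock(int me)
{
  unsigned int max = 0;
  int i;

  /* Draw a ticket one greater than every ticket currently visible. */

  atomic_store(&g_choosing[me], true);
  for (i = 0; i < NCPUS; i++)
    {
      unsigned int n = atomic_load(&g_number[i]);
      if (n > max)
        {
          max = n;
        }
    }

  atomic_store(&g_number[me], max + 1);
  atomic_store(&g_choosing[me], false);

  for (i = 0; i < NCPUS; i++)
    {
      if (i == me)
        {
          continue;
        }

      /* Wait until CPU i has finished drawing its ticket... */

      while (atomic_load(&g_choosing[i]))
        {
        }

      /* ...then wait while it holds a smaller (ticket, CPU id) pair. */

      while (atomic_load(&g_number[i]) != 0 &&
             (atomic_load(&g_number[i]) < atomic_load(&g_number[me]) ||
              (atomic_load(&g_number[i]) == atomic_load(&g_number[me]) &&
               i < me)))
        {
        }
    }
}

static void bakery_unlock(int me)
{
  atomic_store(&g_number[me], 0);
}
```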

TaiJuWu (Contributor) commented Sep 8, 2023

Hello. I have tried to implement a fair spinlock and pushed it here.
I tested it with this file.
The result looks like this:

```
Thread 6 is exiting lock 4 time.
Thread 7 is enter lock 4 time.
Thread 7 is exiting lock 4 time.
Thread 6 is enter lock 5 time.
Thread 6 is exiting lock 5 time.
Thread 7 is enter lock 5 time.
Thread 7 is exiting lock 5 time.
Thread 8 is enter lock 1 time.
Thread 8 is exiting lock 1 time.
Thread 9 is enter lock 1 time.
Thread 9 is exiting lock 1 time.
Thread 8 is enter lock 2 time.
Thread 8 is exiting lock 2 time.
Thread 9 is enter lock 2 time.
Thread 9 is exiting lock 2 time.
Thread 8 is enter lock 3 time.
Thread 8 is exiting lock 3 time.
Thread 9 is enter lock 3 time.
Thread 9 is exiting lock 3 time.
Thread 8 is enter lock 4 time.
Thread 8 is exiting lock 4 time.
```

But I have only tested with the RR scheduler, and I am not sure whether my implementation is correct.

I have two questions:

  1. Is the call flow of fair_spin_lock too long, such that it would hurt the performance of the RTOS?
  2. My implementation does not consider interrupts; do I need to handle them with irq_restore? (See the sketch after this comment.)

I hope I can get some suggestions from the community.
If you have any suggestions, I would be very grateful.

PS. I did not consider preemption for this fair spinlock.
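On question 2, the usual pattern (compare NuttX's existing spin_lock_irqsave()) is to disable interrupts on the local CPU before spinning and to restore them only after release, so that an interrupt handler on the same CPU can never deadlock against a lock its own thread already holds. A sketch reusing the hypothetical ticket lock above; up_irq_save() and up_irq_restore() are the real NuttX interfaces, the rest is illustrative:

```c
#include <nuttx/irq.h>

static inline irqstate_t ticket_lock_irqsave(ticket_spinlock_t *lock)
{
  /* Mask interrupts on this CPU first so we cannot be interrupted
   * while spinning for, or holding, the lock.
   */

  irqstate_t flags = up_irq_save();
  ticket_lock(lock);
  return flags;
}

static inline void ticket_unlock_irqrestore(ticket_spinlock_t *lock,
                                            irqstate_t flags)
{
  ticket_unlock(lock);
  up_irq_restore(flags);  /* Restore the previous interrupt state */
}
```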

@xiaoxiang781216 xiaoxiang781216 linked a pull request Sep 16, 2023 that will close this issue
TaiJuWu (Contributor) commented Sep 17, 2023

> Yes, that was the point of my comment in the other thread, i.e., the context-switching overhead of entering a critical section; there is something like the Lamport algorithm, if I remember correctly.
>
> @patacongo, for some references: an interesting use of ticket spinlocks within the Linux kernel is https://github.com/drouyang/pmtlock.

I have read this paper. It does not seem to work as a priority-based spinlock: it works for processes of the same priority, while our requirement is priority-based. It is still an interesting paper.

mu578 commented Sep 17, 2023

You could overlay priority groups; you would just need to avoid the built-in list and have something timeout-able, because you would need a time-slicing mechanism (e.g., enforcing preemption while maintaining order), but that is tricky to achieve within a SCHED_RR policy.
