Conversation
Signed-off-by: kobayu858 <yutaro.kobayashi@tier4.jp>
awkernel_async_lib/src/scheduler.rs
Outdated
fn invoke_preemption(&self, wake_task_priority: PriorityInfo) {
    while let Some((task_id, cpu_id, lowest_priority_info)) = get_lowest_priority_task_info() {
        if wake_task_priority < lowest_priority_info {
            if IS_SEND_IPI.load(Ordering::Relaxed) {
You may have to use compare_exchange() here.
Checking and storing the value should be atomic.
Between executing get_lowest_priority_task_info and checking the IS_SEND_IPI value, is there a possibility that preemption could occur on another CPU, changing the lowest_priority task, and even completing the process of setting IS_SEND_IPI back to false?
To help me understand, could you answer the following question for both before and after merging this PR?
"Can a CPU receive and handle an IPI from another CPU while it's in invoke_preemption(), and then resume its original execution?"
You may have to use compare_exchange() here.
Checking and storing the value should be atomic.
Fixed. Thank you very much.
Between executing get_lowest_priority_task_info and checking the IS_SEND_IPI value, is there a possibility that preemption could occur on another CPU, changing the lowest_priority task, and even completing the process of setting IS_SEND_IPI back to false?
Yes. See Doc for how to deal with this after the implementation change.
To help me understand, could you answer the following question for both before and after merging this PR?
"Can a CPU receive and handle an IPI from another CPU while it's in invoke_preemption(), and then resume its original execution?"
As mentioned earlier, interrupts taken while a lock is held cause deadlocks, so it would not work properly after the merge.
Before the merge, the GEDF data was locked, so no interrupts occurred during invoke_preemption(), and there were no issues.
Signed-off-by: kobayu858 <yutaro.kobayashi@tier4.jp>
Signed-off-by: kobayu858 <yutaro.kobayashi@tier4.jp>
Signed-off-by: kobayu858 <yutaro.kobayashi@tier4.jp>
awkernel_async_lib/src/task.rs
Outdated
let mut is_send_ipi = IS_SEND_IPI.lock(&mut node);
if *is_send_ipi {
    continue;
}
*is_send_ipi = true;
You can use AtomicBool::compare_exchange().
https://doc.rust-lang.org/std/sync/atomic/struct.AtomicBool.html#method.compare_exchange
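The point of the suggestion is that the load and the store must happen as one atomic step, otherwise two CPUs can both observe false and both send an IPI. A minimal sketch of the compare_exchange() pattern is below; IS_SEND_IPI is the flag named in the PR, while try_claim_ipi and the memory orderings are illustrative assumptions, not the actual implementation.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Flag indicating an IPI is already in flight (name taken from the PR).
static IS_SEND_IPI: AtomicBool = AtomicBool::new(false);

// Atomically checks that no IPI is pending and marks one as pending.
// Only the caller that wins the race sees Ok(_); everyone else sees the
// flag already set. `try_claim_ipi` is an illustrative name.
fn try_claim_ipi() -> bool {
    IS_SEND_IPI
        .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)
        .is_ok()
}
```

With this pattern the separate "check, then store" window disappears, so no second CPU can slip in between the read and the write.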
Fixed. Thank you very much.
@kobayu858
awkernel_async_lib/src/task.rs
Outdated
let mut node = MCSNode::new();
let tasks = TASKS.lock(&mut node);
running_tasks
if true_count >= non_primary_cpus {
Suggested change:
- if true_count >= non_primary_cpus {
+ if true_count == non_primary_cpus {
Fixed. Thank you very much.
awkernel_async_lib/src/task.rs
Outdated
.is_none_or(|(_, _, lowest_priority_info)| priority_info > *lowest_priority_info)
{
    lowest_task = Some((task_id, cpu_id, priority_info));
if running_tasks.is_empty() || running_tasks.len() != non_primary_cpus {
Suggested change:
- if running_tasks.is_empty() || running_tasks.len() != non_primary_cpus {
+ if running_tasks.len() < non_primary_cpus {
running_tasks.is_empty() is required to pass make test.
During make test there is only one core, so both running_tasks.len() and non_primary_cpus appear to be 0.
Therefore, the suggested fix would cause an infinite loop during make test.
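The disagreement between the two conditions can be checked directly as plain booleans. The sketch below compares them in the single-core test setup the author describes, where both counts are 0; the function names are illustrative, not from the PR.

```rust
// Original condition from the PR.
fn cond_original(running_len: usize, non_primary_cpus: usize) -> bool {
    running_len == 0 || running_len != non_primary_cpus
}

// Condition proposed in the review suggestion.
fn cond_suggested(running_len: usize, non_primary_cpus: usize) -> bool {
    running_len < non_primary_cpus
}
```

When both counts are 0 the original condition is true (because of is_empty()) while the suggested one is false (0 < 0), which is exactly the behavioral difference that breaks make test.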
Signed-off-by: kobayu858 <yutaro.kobayashi@tier4.jp>
Signed-off-by: kobayu858 <yutaro.kobayashi@tier4.jp>
Implementation of inter-scheduler preemption to address priority inversion.
Specifically, retransmission of IPIs and a limit on the period during which task status can be obtained were added.
Details are given below.
Test