Enabling SCHED_CRITMONITOR incurs a large performance loss.
By isolating the thread running-time measurement functions, CPU load monitoring can run with less overhead.
Signed-off-by: yinshengkai <yinshengkai@xiaomi.com>
Signed-off-by: buxiasen <buxiasen@xiaomi.com>
Most tools used for compliance and SBOM generation use SPDX identifiers.
This change brings us a step closer to easy SBOM generation.
Signed-off-by: Alin Jerpelea <alin.jerpelea@sony.com>
reason:
In SMP, restore_critical_section() is executed whenever a context switch occurs.
To reduce the time taken for context switching, we inline the
restore_critical_section() function.
Since restore_critical_section() is small and is called from only one
location, inlining it does not increase the image size (see the sketch below).
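A minimal sketch of the change, assuming the simplified body below (the real
function also considers the scheduler lock; inline_function is NuttX's
always-inline attribute macro):

/* include/nuttx/irq.h: body moved out of sched/irq/irq_csection.c */

static inline_function void restore_critical_section(void)
{
  FAR struct tcb_s *tcb = this_task();
  int me = this_cpu();

  if (tcb->irqcount <= 0 && (g_cpu_irqset & (1 << me)) != 0)
    {
      /* The incoming task does not hold the critical section:
       * release this CPU's bit in g_cpu_irqset.
       */

      spin_clrbit(&g_cpu_irqset, me, &g_cpu_irqsetlock, &g_cpu_irqlock);
    }
}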
Signed-off-by: hujun5 <hujun5@xiaomi.com>
because 'g_cpu_nestcount[me] > 0' can never be true at this point
test:
We can use qemu for testing.
Compiling:
make distclean -j20; ./tools/configure.sh -l qemu-armv8a:nsh_smp; make -j20
Running:
qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic -machine virt,virtualization=on,gic-version=3 -net none -chardev stdio,id=con,mux=on -serial chardev:con -mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
this_task() is a function call that involves disabling interrupts and calling
this_cpu(). Since restore_critical_section() is always called in an
interrupt-disabled context, there is no need to disable interrupts again.
Therefore, to save time with the same effect, we call tcb = current_task(me)
directly instead of tcb = this_task(), as sketched below.
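The substitution, in context ('me' is this CPU's index; interrupts are
already disabled here):

int me = this_cpu();

/* Before: FAR struct tcb_s *tcb = this_task();   re-masks interrupts */
/* After:  a direct per-CPU lookup with no extra interrupt handling   */

FAR struct tcb_s *tcb = current_task(me);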
Signed-off-by: hujun5 <hujun5@xiaomi.com>
We can use g_cpu_lockset to determine whether we currently hold the scheduler
lock. All accesses to and modifications of g_cpu_lockset, g_cpu_irqlock, and
g_cpu_irqset occur inside the critical section, so we can operate on them
directly, as sketched below.
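A minimal sketch using the names from this commit ('me' is the current CPU
index):

if ((g_cpu_lockset & (1 << me)) != 0)
  {
    /* This CPU already holds the scheduler lock, so g_cpu_lockset,
     * g_cpu_irqset and g_cpu_irqlock may be read and updated directly:
     * we are inside the critical section.
     */
  }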
test:
We can use qemu for testing.
Compiling:
make distclean -j20; ./tools/configure.sh -l qemu-armv8a:nsh_smp; make -j20
Running:
qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic -machine virt,virtualization=on,gic-version=3 -net none -chardev stdio,id=con,mux=on -serial chardev:con -mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
1. The critical section does not prevent task scheduling.
2. If the critical section is entered while sched_lock is already held,
   there is no need to check: scheduling is not going to happen.
3. If sched_lock is taken inside the critical section, sched_unlock will
   still trigger scheduling without waiting for the critical section to be
   exited (see the sketch after this list).
4. After the critical section is exited, a pending interrupt will
   automatically trigger scheduling.
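A minimal illustration of item 3 (standard NuttX APIs, simplified):

irqstate_t flags = enter_critical_section();

sched_lock();
/* ... make a higher-priority task ready-to-run ... */
sched_unlock();                /* may trigger the context switch here, */
                               /* before the critical section is left  */

leave_critical_section(flags);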
Signed-off-by: hujun5 <hujun5@xiaomi.com>
cpu0 thread0:                            cpu1:

sched_yield()
nxsched_set_priority()
nxsched_running_setpriority()
nxsched_reprioritize_rtr()
nxsched_add_readytorun()
up_cpu_pause()
                                         IRQ enter
                                         arm64_pause_handler()
                                         enter_critical_section() begin
                                         up_cpu_paused() pick thread0
                                         arm64_restorestate() set thread0
                                           tcb->xcp.regs to CURRENT_REGS
up_switch_context()
  thread0 -> thread1
  arm64_syscall()
    case SYS_switch_context
      change thread0 tcb->xcp.regs
    restore_critical_section()
                                         enter_critical_section() done
                                         leave_critical_section()
                                         IRQ leave with restore CURRENT_REGS
                                         ERROR !!!
Reason:
As described above, cpu0 switches task thread0 -> thread1, but the
syscall() executes slowly; meanwhile cpu1 picks thread0 to run in
up_cpu_paused(). Then cpu0's syscall completes and cpu1 leaves the IRQ
with the wrong context.
Resolve:
Move arm64_restorestate() to after enter_critical_section() completes.
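Conceptually, the corrected ordering on the paused CPU is (pseudo-flow in
the style of the diagram above, not the literal handler code):

enter_critical_section() begin
up_cpu_paused() pick thread0
enter_critical_section() done
arm64_restorestate()           <- moved here: runs only after the
                                  critical section is fully held
leave_critical_section()
IRQ leave with restore CURRENT_REGS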
This is a follow-up fix to:
https://github.com/apache/nuttx/pull/6833
Signed-off-by: ligd <liguiding1@xiaomi.com>
test config: ./tools/configure.sh -l qemu-armv8a:nsh_smp
Pass ostest
Whether big-endian or little-endian, the ticket spinlock only checks
whether 'next' and 'owner' are equal. If they are equal, either a task
holds the lock or the lock is free.
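For illustration, a generic ticket spinlock in portable C11 (a sketch, not
the NuttX implementation, which packs both counters into one word and uses
arch-specific atomics; the equality test works the same way regardless of
byte order):

#include <stdatomic.h>

struct ticket_lock_s
{
  atomic_ushort next;   /* Next ticket to hand out           */
  atomic_ushort owner;  /* Ticket currently holding the lock */
};

static void ticket_lock(struct ticket_lock_s *lock)
{
  unsigned short ticket = atomic_fetch_add(&lock->next, 1);

  /* The lock is free exactly when 'owner' catches up to our ticket */

  while (atomic_load(&lock->owner) != ticket)
    {
    }
}

static void ticket_unlock(struct ticket_lock_s *lock)
{
  atomic_fetch_add(&lock->owner, 1); /* hand off to the next ticket */
}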
Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
Co-authored-by: Xiang Xiao <xiaoxiang781216@gmail.com>
Situation:
Assume we have 2 CPUs, and task0 is busy running.
CPU0                                     CPU1
task0 -> task1                           task2 -> task0

1. remove task0 from runninglist
2. take task1 as new tcb
3. add task0 to blocklist
4. clear spinlock
                                         4.1 remove task2 from runninglist
                                         4.2 take task0 as new tcb
                                         4.3 add task2 to blocklist
                                         4.4 use svc ISR switch to task0
                                         4.5 crash
5. use svc ISR switch to task1
Fix:
Move the spinlock clearing to the end of the SVC ISR.
Signed-off-by: ligd <liguiding1@xiaomi.com>
Summary:
- SP_SECTION was introduced to allocate spinlocks in a non-cacheable
  region, mainly for Cortex-A, to stabilize the NuttX SMP kernel
- However, all spinlocks are now allocated in cacheable memory and
  work without any problems
- So SP_SECTION should be removed to simplify the kernel code
Impact:
- None
Testing:
- Build test only
Signed-off-by: Masayuki Ishikawa <Masayuki.Ishikawa@jp.sony.com>
Summary:
- I noticed that the Cortex-A SGI can be masked
- We had thought the SGI was not maskable
- Although I cannot remember how I tested it before
- It actually works as expected now
- Also, corrected the number of remaining bugs in TODO
Impact:
- No impact
Testing:
- Tested with sabre-6quad:smp (QEMU and dev board)
- Add the following code in up_idle() before calling asm("WFI"):

  + if (0 != up_cpu_index())
  +   {
  +     up_irq_save();
  +   }
- Run the hello app, you can see "Hello, World!!"
- But nsh will freeze soon because arm_pause_handler is not called.
Signed-off-by: Masayuki Ishikawa <Masayuki.Ishikawa@jp.sony.com>
Summary:
- I found a deadlock during Wi-Fi audio streaming test plus stress test
- The testing environment was spresense:wifi_smp (NCPUS=4)
- The deadlock happened because two CPUs called up_cpu_pause() almost simultaneously
- This situation should not happen, because up_cpu_pause() is called in a critical section
- Actually, the latter call was from nxsem_post() in an IRQ handler
- And when enter_critical_section() was called, irq_waitlock() detected a deadlock
- Then it called up_cpu_paused() to break the deadlock
- However, this resulted in setting g_cpu_irqset on the CPU
- Even though another CPU had held a g_cpu_irqlock
- This situation violates the critical section and should be avoided
- To avoid this situation, if a CPU has set g_cpu_irqset after calling up_cpu_paused()
- The CPU must release g_cpu_irqlock first
- Then retry irq_waitlock() to acquire g_cpu_irqlock (see the sketch after this list)
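A hedged sketch of that retry (irq_waitlock(), up_cpu_paused() and the
g_cpu_* names are from the commit text; spin_clrbit() and g_cpu_irqsetlock
are assumed helpers, and the real enter_critical_section() logic is more
involved):

/* Inside enter_critical_section(), while taking g_cpu_irqlock: */

while (!irq_waitlock(cpu))
  {
    /* The wait was broken to process a pause request through
     * up_cpu_paused(), which may have set this CPU's bit in
     * g_cpu_irqset while another CPU still holds g_cpu_irqlock.
     * Release the lock and retry so the critical section is
     * never violated.
     */

    spin_clrbit(&g_cpu_irqset, cpu, &g_cpu_irqsetlock, &g_cpu_irqlock);
  }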
Impact:
- Affects SMP
Testing:
- Tested with spresense:wifi_smp (NCPUS=2 and 4)
- Tested with spresense:smp
- Tested with sim:smp
- Tested with sabre-6quad:smp (QEMU)
- Tested with maix-bit:smp (QEMU)
- Tested with esp32-core:smp (QEMU)
- Tested with lc823450-xgevk:rndis
Signed-off-by: Masayuki Ishikawa <Masayuki.Ishikawa@jp.sony.com>
Summary:
- ARCH_GLOBAL_IRQDISABLE was initially introduced for LC823450 SMP
- At that time, i.MX6 (quad Cortex-A9) did not use this config
- However, this option is now used for all CPUs which support SMP
- So it's good timing for refactoring the code
Impact:
- Should have no impact because the logic is the same for SMP
Testing:
- Tested with board: spresense:smp, spresense:wifi_smp
- Tested with qemu: esp32-core:smp, maix-bit:smp, sabre-6quad:smp
- Build only: lc823450-xgevk:rndis, sam4cmp-db:nsh
Signed-off-by: Masayuki Ishikawa <Masayuki.Ishikawa@jp.sony.com>
sched/init/nx_bringup.c: Fix a naming collision.
sched/init: Rename os_start() to nx_start()
sched/init: Rename os_smp* to nx_smp*
sched/init: Rename os_bringup to nx_bringup
sched/init: Rename all internal static functions to begin with nx_ instead of os_
fs/procfs/fs_procfsproc: Extended the process ID ProcFS output to show per-thread maximum time for pre-emption disabled and maximum time within a critical section.
sched/sched/sched_critmonitor.c: Adds data collection logic in support of monitoring critical sections and pre-emption state.
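Once ProcFS is mounted, the per-thread maximums can be inspected from the
shell (illustrative nsh session; the per-process entry name 'critmon' is an
assumption based on this change):

nsh> mount -t procfs /proc
nsh> cat /proc/1/critmon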