1. The critical section does not prevent task scheduling.
2. If the critical section is entered while sched_lock is held,
   there is no need to check: scheduling is not going to happen.
3. If sched_lock is taken inside the critical section, sched_unlock
   will still trigger scheduling without waiting for the critical
   section to be exited.
4. After the critical section is exited, a pending interrupt will
   automatically trigger scheduling (see the sketch below).
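A minimal sketch of the interplay described above, using the NuttX
critical-section and sched_lock() APIs (illustrative only; the
shared-state work in the middle is hypothetical):

irqstate_t flags;

sched_lock();                     /* point 2: scheduling cannot happen */
flags = enter_critical_section(); /* masks interrupts; does not by
                                   * itself prevent scheduling (point 1) */

/* ... touch state shared with interrupt handlers ... */

leave_critical_section(flags);    /* point 4: an interrupt arriving now
                                   * triggers scheduling automatically */
sched_unlock();                   /* point 3: triggers scheduling itself,
                                   * even if called before the critical
                                   * section is exited */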
Signed-off-by: hujun5 <hujun5@xiaomi.com>
In SMP mode, up_cpu_index() and this_cpu() are equivalent: both
return the index of the physical core. In AMP mode, up_cpu_index()
returns the index of the physical core, while this_cpu() always
returns 0:
| #ifdef CONFIG_SMP
| # define this_cpu() up_cpu_index()
| #elif defined(CONFIG_AMP)
| # define this_cpu() (0)
| #else
| # define this_cpu() (0)
| #endif
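A hedged usage sketch of the difference (g_nesting and note_enter
are hypothetical names; CONFIG_SMP_NCPUS is assumed to be the CPU
count, 1 in non-SMP builds):

#include <stdint.h>

/* Per-CPU bookkeeping indexed by the logical CPU number.  In AMP
 * mode each OS instance schedules on a single core, so index 0 is
 * always valid even though up_cpu_index() may report a nonzero
 * physical core.
 */

static uint32_t g_nesting[CONFIG_SMP_NCPUS];

void note_enter(void)
{
  g_nesting[this_cpu()]++;   /* logical index, never out of range */
}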
Signed-off-by: chao an <anchao@lixiang.com>
cpu0 thread0:                      cpu1:
sched_yield()
nxsched_set_priority()
nxsched_running_setpriority()
nxsched_reprioritize_rtr()
nxsched_add_readytorun()
up_cpu_pause()
                                   IRQ enter
                                   arm64_pause_handler()
                                   enter_critical_section() begin
                                   up_cpu_paused() pick thread0
                                   arm64_restorestate() set thread0
                                     tcb->xcp.regs to CURRENT_REGS
up_switch_context()
  thread0 -> thread1
  arm64_syscall()
    case SYS_switch_context
      change thread0 tcb->xcp.regs
    restore_critical_section()
                                   enter_critical_section() done
                                   leave_critical_section()
                                   IRQ leave with restore CURRENT_REGS
                                   ERROR !!!
Reason:
As described above, cpu0 switches task thread0 -> thread1, but the
syscall() executes slowly; in that window cpu1 picks thread0 to run
at up_cpu_paused() and loads its saved regs. When cpu0's syscall
then executes and rewrites thread0's tcb->xcp.regs, cpu1 leaves the
IRQ restoring a stale CURRENT_REGS.
Resolve:
Move arm64_restorestate() to after enter_critical_section() has
completed (see the sketch below).
This is a continuation of the fix in:
https://github.com/apache/nuttx/pull/6833
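A hedged sketch of the reordering (the handler shape and the
up_cpu_paused_tcb() helper are illustrative, not the actual code;
only the ordering matters):

int arm64_pause_handler(int irq, void *context, void *arg)
{
  irqstate_t flags;
  struct tcb_s *tcb;

  flags = enter_critical_section(); /* must fully complete first */

  /* Only now is it safe to load the task's saved context: cpu0's
   * SYS_switch_context can no longer rewrite tcb->xcp.regs.
   */

  tcb = up_cpu_paused_tcb();        /* hypothetical helper */
  arm64_restorestate(tcb->xcp.regs);

  leave_critical_section(flags);
  return OK;
}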
Signed-off-by: ligd <liguiding1@xiaomi.com>
spinlock.c:
Implement a read-write spinlock: readers can take the lock
simultaneously, but only one writer can hold it at a time (a sketch
of these semantics follows below).
irq_spinlock.c:
Align g_irq_spin_count.
If the lock is NULL, the caller gets the global lock (e.g.
g_irq_spin), and spin_lock_irqsave() supports nesting on the same
CPU.
If a CPU already holds the write lock, it can call
write_lock_irqsave() again (i.e. nesting is supported).
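A self-contained sketch of the read/write semantics described above
(generic C11 atomics, not the NuttX implementation; nesting, the
NULL-lock fallback to g_irq_spin, and the irqsave handling are
omitted). One counter is enough: values >= 0 count readers, and -1
marks the single writer.

#include <stdatomic.h>

typedef atomic_int rw_spinlock_t;  /* >= 0: readers, -1: writer */

static void rd_lock(rw_spinlock_t *l)
{
  int old;

  for (; ; )
    {
      old = atomic_load(l);
      if (old >= 0 && atomic_compare_exchange_weak(l, &old, old + 1))
        {
          return;                  /* one more simultaneous reader */
        }
    }
}

static void rd_unlock(rw_spinlock_t *l)
{
  atomic_fetch_sub(l, 1);
}

static void wr_lock(rw_spinlock_t *l)
{
  int expected;

  for (; ; )
    {
      expected = 0;                /* free: no readers, no writer */
      if (atomic_compare_exchange_weak(l, &expected, -1))
        {
          return;                  /* exclusive ownership */
        }
    }
}

static void wr_unlock(rw_spinlock_t *l)
{
  atomic_store(l, 0);
}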
Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
Co-authored-by: David Sidrane <David.Sidrane@Nscdg.com>
test config: ./tools/configure.sh -l qemu-armv8a:nsh_smp
Pass ostest
No matter big-endian or little-endian, the ticket spinlock only
checks whether next and owner are equal. If they are equal, no task
is holding the lock, i.e. the lock is free (see the sketch below).
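A self-contained sketch of that check (generic C11 atomics,
illustrative rather than the NuttX code). next and owner are
compared as whole values, so byte order never enters the picture:

#include <stdatomic.h>
#include <stdint.h>

typedef struct
{
  atomic_ushort next;    /* next ticket to hand out */
  atomic_ushort owner;   /* ticket currently served */
} ticket_lock_t;         /* next == owner  =>  lock is free */

static void ticket_lock(ticket_lock_t *l)
{
  uint16_t me = atomic_fetch_add(&l->next, 1);  /* take a ticket */

  while (atomic_load(&l->owner) != me)
    {
      /* spin until our ticket is served */
    }
}

static void ticket_unlock(ticket_lock_t *l)
{
  atomic_fetch_add(&l->owner, 1);  /* serve the next ticket */
}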
Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
Co-authored-by: Xiang Xiao <xiaoxiang781216@gmail.com>
When supporting high-priority interrupts, updating g_running_tasks
from within a high-priority interrupt may cause problems.
g_running_tasks should only be updated once it is determined that a
task context switch has actually occurred (see the sketch below).
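A hedged sketch of that rule in the shape of a NuttX-style IRQ
dispatch tail (regs is the context saved at entry; treat this as
illustrative, not the exact code):

if (regs != CURRENT_REGS)
  {
    /* Dispatching the interrupt really switched tasks; only now is
     * it valid to record the new running task.  A high-priority
     * interrupt that did not switch leaves g_running_tasks alone.
     */

    g_running_tasks[this_cpu()] = this_task();
  }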
Signed-off-by: zhangyuan21 <zhangyuan21@xiaomi.com>
1. Update all CMakeLists.txt to adapt to the new layout
2. Fix cmake build breakage
3. Update the license headers of all new files
4. Fully compatible with the current compilation environment (use configure.sh or cmake as you choose)
------------------
How to test
From within nuttx/. Configure:
cmake -B build -DBOARD_CONFIG=sim/nsh -GNinja
cmake -B build -DBOARD_CONFIG=sim:nsh -GNinja
cmake -B build -DBOARD_CONFIG=sabre-6quad/smp -GNinja
cmake -B build -DBOARD_CONFIG=lm3s6965-ek/qemu-flat -GNinja
(or the full path for a custom board):
cmake -B build -DBOARD_CONFIG=$PWD/boards/sim/sim/sim/configs/nsh -GNinja
This uses the Ninja generator (install with sudo apt install ninja-build). To build:
$ cmake --build build
menuconfig:
$ cmake --build build -t menuconfig
--------------------------
2. cmake/build: reformat the cmake files with cmake-format
https://github.com/cheshirekow/cmake_format
$ pip install cmakelang
$ for i in `find -name CMakeLists.txt`; do cmake-format $i -o $i; done
$ for i in `find -name '*.cmake'`; do cmake-format $i -o $i; done
Co-authored-by: Matias N <matias@protobits.dev>
Signed-off-by: chao an <anchao@xiaomi.com>
Summary:
- Support the arm64 PMU API; currently only the cycle counter is
  supported.
- Use the ARM64 PMU hardware capability to implement the perf
  interface, and modify all perf-interface-related code.
- Support PMU init under SMP (a usage sketch follows).
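A hedged usage sketch of the perf interface backed by the PMU cycle
counter (up_perf_gettime()/up_perf_convert() are the NuttX names
targeted by this change; do_work() is a hypothetical workload):

#include <nuttx/arch.h>
#include <time.h>

void measure_cycles(void)
{
  struct timespec ts;
  unsigned long start;
  unsigned long elapsed;

  start   = up_perf_gettime();    /* raw PMU cycle count */
  do_work();
  elapsed = up_perf_gettime() - start;

  up_perf_convert(elapsed, &ts); /* cycles -> wall-clock time */
}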
Signed-off-by: wangming9 <wangming9@xiaomi.com>
Situation:
Assume we have 2 CPUs, and task0 is kept busy running.

CPU0                               CPU1
task0 -> task1                     task2 -> task0
1. remove task0 from runninglist
2. take task1 as new tcb
3. add task0 to blocklist
4. clear spinlock
                                   4.1 remove task2 from runninglist
                                   4.2 take task0 as new tcb
                                   4.3 add task2 to blocklist
                                   4.4 use svc ISR to switch to task0
                                   4.5 crash
5. use svc ISR to switch to task1
Fix:
Move the spinlock clearing to the end of the svc ISR (see the sketch
below).
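A hedged sketch of the reordering (the ISR shape and every name are
illustrative; the point is only where the lock is cleared):

void svc_switch_isr(void)
{
  struct tcb_s *prev = this_task();
  struct tcb_s *next = pick_next_task();  /* hypothetical helper */

  /* Steps 1-3: requeue prev, adopt next as the new tcb, restore
   * next's register context.
   */

  switch_context(prev, next);             /* hypothetical helper */

  /* Step 4, now LAST: only after the switch has fully completed may
   * the spinlock be cleared.  Before this point another CPU must
   * not be able to pick prev, whose context is still live here.
   */

  spin_unlock(&g_cpu_lock);               /* illustrative lock */
}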
Signed-off-by: ligd <liguiding1@xiaomi.com>
Reason:
1. g_running_tasks = thread A
2. thread A exits (thread A's tcb is freed) -> thread B
3. thread B is interrupted by an irq
4. checking g_running_tasks->flags -> kasan reports a use after free
Root cause:
g_running_tasks had not been completely updated when the syscall
happened.
Resolve:
Use the rtcb obtained at the beginning of the ISR instead (see the
sketch below).
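A hedged sketch of the fix (the flag check is illustrative): capture
the running tcb once at the beginning of the ISR and use that
pointer afterwards, instead of re-reading g_running_tasks.

void irq_dispatch_sketch(void)
{
  struct tcb_s *rtcb = this_task();  /* captured at ISR beginning */

  /* ... dispatch; a task switch may be in flight, and the tcb that
   * g_running_tasks still points to may already have been freed ...
   */

  if ((rtcb->flags & TCB_FLAG_SYSCALL) != 0)  /* not g_running_tasks->flags */
    {
      /* safe: rtcb is the tcb that was valid at entry */
    }
}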
Signed-off-by: ligd <liguiding1@xiaomi.com>
Because not all compilers support the weak attribute, and many
features are either always used or guarded by a config option.
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>