reason:
1. To improve efficiency, we mimic Linux's behavior: disabling preemption only applies to the current CPU and does not affect other CPUs (see the sketch after this list).
2. In the future we will implement "spinlock+sched_lock" and use it extensively. Under such circumstances, if preemption were still disabled globally, it would seriously hurt scheduling efficiency.
3. We removed g_cpu_lockset and use irqcount instead, in order to eliminate the future dependency of sched_lock on critical sections, simplify the logic, and further improve the performance of sched_lock.
4. We set lockcount to 1 in order to lock scheduling on all CPUs during startup, without needing additional functions to disable scheduling on the other CPUs.
5. CPUs 1~n must wait for CPU0 to enter the idle state before enabling scheduling; this prevents CPUs 1~n from competing with CPU0 for the memory manager mutex, which could cause the CPU0 idle task to enter a wait state and trigger an assertion.
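A minimal sketch of the per-CPU idea; the names and layout are illustrative, not the actual NuttX implementation:
| #include <stdint.h>
|
| /* Each CPU keeps its own preemption-disable counter; boot code sets the
|  * counter to 1 so scheduling stays locked on every CPU until it is
|  * explicitly released.
|  */
|
| static volatile uint8_t g_lockcount[CONFIG_SMP_NCPUS];
|
| static inline void my_sched_lock(void)
| {
|   g_lockcount[this_cpu()]++;          /* affects the current CPU only */
| }
|
| static inline void my_sched_unlock(void)
| {
|   if (--g_lockcount[this_cpu()] == 0)
|     {
|       /* Preemption re-enabled on this CPU; other CPUs were never touched */
|     }
| }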
size nuttx
before:
text data bss dec hex filename
265396 51057 63646 380099 5ccc3 nuttx
after:
text data bss dec hex filename
265184 51057 63642 379883 5cbeb nuttx
size change: -216
Configuring and compiling NuttX:
$ ./tools/configure.sh -l qemu-armv8a:nsh_smp
$ make
Running with qemu
$ qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic \
-machine virt,virtualization=on,gic-version=3 \
-net none -chardev stdio,id=con,mux=on -serial chardev:con \
-mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
Currently, if we need to schedule a task onto another CPU, we have to completely halt that CPU,
manipulate its scheduling linked list, and then resume it. This process is both time-consuming and unnecessary,
and while it runs both the current CPU and the target CPU inevitably busy-wait.
The improved strategy is to simply send a cross-core interrupt to the target CPU:
the current CPU keeps running while the target CPU handles the interrupt, so the busy-wait is eliminated.
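A rough sketch of the new flow; the helper names here are hypothetical, only the shape of the change matters:
| /* Old flow: pause the target CPU, touch its ready-to-run list, resume it;
|  * both CPUs busy-wait during the handshake.
|  *
|  * New flow: queue the task for the target CPU and raise a cross-core
|  * interrupt; the sender keeps running and the target reschedules from
|  * its IRQ handler.
|  */
|
| void reschedule_on(int cpu, FAR struct tcb_s *tcb)
| {
|   enqueue_for_cpu(cpu, tcb);   /* hypothetical: add to that CPU's queue */
|   raise_ipi(cpu);              /* hypothetical: e.g. an SGI on ARM GICv3 */
| }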
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
1. In the active-waiting scenario, a context switch is inevitable, so we can eliminate the redundant checks.
code size
before
hujun5@hujun5-OptiPlex-7070:~/downloads1/vela_sim/nuttx$ size nuttx
text data bss dec hex filename
262848 49985 63893 376726 5bf96 nuttx
after
hujun5@hujun5-OptiPlex-7070:~/downloads1/vela_sim/nuttx$ size nuttx
text data bss dec hex filename
263324 49985 63893 377202 5c172 nuttx
code size change: +476 bytes
Configuring and compiling NuttX:
$ ./tools/configure.sh -l qemu-armv8a:nsh_smp
$ make
Running with qemu
$ qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic \
-machine virt,virtualization=on,gic-version=3 \
-net none -chardev stdio,id=con,mux=on -serial chardev:con \
-mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
Most tools used for compliance and SBOM generation use SPDX identifiers.
This change brings us a step closer to easy SBOM generation.
Signed-off-by: Alin Jerpelea <alin.jerpelea@sony.com>
Calling malloc inside the critical section may cause a thread switch.
Add a retry mechanism to handle the case where another thread has already completed the expansion successfully.
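A simplified sketch of the retry pattern; enter_critical_section/leave_critical_section and kmm_malloc/kmm_free are the real NuttX primitives, the other helpers are hypothetical:
| for (; ; )
|   {
|     /* Drop the critical section before allocating, since malloc may
|      * block and switch threads.
|      */
|
|     leave_critical_section(flags);
|     newblk = kmm_malloc(newsize);
|     flags = enter_critical_section();
|
|     if (newblk == NULL)
|       {
|         return -ENOMEM;
|       }
|
|     /* While we were outside, another thread may have finished the
|      * expansion already; if so, discard our block and stop retrying.
|      */
|
|     if (already_expanded())          /* hypothetical check */
|       {
|         kmm_free(newblk);
|         break;
|       }
|
|     install(newblk);                 /* hypothetical commit step */
|     break;
|   }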
Signed-off-by: yinshengkai <yinshengkai@xiaomi.com>
Signed-off-by: ligd <liguiding1@xiaomi.com>
Two things need to be done when vfork'ing:
- Must attach to parent's address environment (the addrenv is shared)
- Must allocate a kernel stack (where would the register context go otherwise)
Note that this code assumes the address environment is shared, since we
don't support fork() which would _clone_ the address environment instead.
Kthreads can share group data to reduce overhead (a rough sketch follows below).
This implements shared kthread group via:
- use `tcb_s` instead of `task_tcb_s` for kthreads
- use `g_kthread_group` when creating kthreads
- use stackargs to start tasks and kthreads
see pull/12320 for test logs.
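A self-contained illustration of the idea, using stub types rather than the real NuttX structures:
| struct my_group_s { int refcount; };              /* stand-in for task_group_s */
| struct my_tcb_s   { struct my_group_s *group; };  /* stand-in for tcb_s */
|
| static struct my_group_s g_kthread_group;         /* shared by all kthreads */
|
| static void my_kthread_join_group(struct my_tcb_s *tcb)
| {
|   /* Point the kthread at the shared group instead of allocating one */
|
|   tcb->group = &g_kthread_group;
|   g_kthread_group.refcount++;
| }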
Signed-off-by: Yanfeng Liu <yfliu2008@qq.com>
We can use g_cpu_lockset to determine whether the scheduler is currently locked.
All accesses to and modifications of g_cpu_lockset, g_cpu_irqlock and g_cpu_irqset
happen inside the critical section, so we can operate on them directly.
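A sketch of the check, assuming the caller already holds the critical section so the bitsets cannot change underneath it:
| /* Is the scheduler currently locked by this CPU? */
|
| if ((g_cpu_lockset & (1 << this_cpu())) != 0)
|   {
|     /* We are inside sched_lock(): operate on g_cpu_lockset / g_cpu_irqset
|      * directly, no extra locking dance is needed.
|      */
|   }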
test:
We can use qemu for testing.
compiling
make distclean -j20; ./tools/configure.sh -l qemu-armv8a:nsh_smp ;make -j20
running
qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic -machine virt,virtualization=on,gic-version=3 -net none -chardev stdio,id=con,mux=on -serial chardev:con -mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
The sched implementation does not depend on the macro abstraction, so revert the commits below:
This reverts commit 4e62d0005a
This reverts commit 0f0c370520
This reverts commit ad0efd04ee
Signed-off-by: chao an <anchao@lixiang.com>
This log appears for NSH internal commands in the KERNEL build and
largely duplicates the task_spawn: log, so drop it to make the
DEBUG_SCHED_INFO logging more readable.
Signed-off-by: Yanfeng Liu <yfliu2008@qq.com>
1. Add support to join the main task
| #include <pthread.h>
| #include <unistd.h>
|
| static pthread_t self;
|
| static void *join_task(void *arg)
| {
|   int ret;
|
|   ret = pthread_join(self, NULL); /* <--- Fix: the main task could not be joined before */
|   return NULL;
| }
|
| int main(int argc, char *argv[])
| {
|   pthread_t thread;
|
|   self = pthread_self();
|
|   pthread_create(&thread, NULL, join_task, NULL);
|   sleep(1);
|
|   pthread_exit(NULL);
|   return 0;
| }
2. Detaching an active thread no longer allocates an additional join structure; only the task flag is updated.
3. Remove the return-value waiting lock logic (data_sem);
the return value is now stored in the waiting tcb.
4. Revise the return values of pthread_join() to be consistent with Linux,
e.g.:
joining a detached and canceled thread should return EINVAL, not ESRCH (see the example below)
https://pubs.opengroup.org/onlinepubs/009695399/functions/pthread_join.html
[EINVAL]
The value specified by thread does not refer to a joinable thread.
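An illustrative snippet of the revised behavior; 'worker' stands for any thread entry routine:
| pthread_t tid;
| int ret;
|
| pthread_create(&tid, NULL, worker, NULL);
| pthread_detach(tid);
|
| ret = pthread_join(tid, NULL);   /* now EINVAL: not a joinable thread */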
NOTE:
This PR does not increase stack usage, but struct tcb_s grows by 32 bytes.
Signed-off-by: chao an <anchao@lixiang.com>
Move the task group into task_tcb_s to avoid accessing the allocator and improve performance.
For task termination, the time consumption is reduced by ~2us (Tricore TC397 @ 300MHz):
15.97us -> 13.55us
Signed-off-by: chao an <anchao@lixiang.com>
Change the type of the task group members to singly linked lists to avoid
accessing the memory allocator and improve performance.
Signed-off-by: chao an <anchao@lixiang.com>
The maximum number of startup arguments is already checked in nxtask_setup_stackargs(),
so save the argument counter to avoid the limit check.
Signed-off-by: chao an <anchao@lixiang.com>
The task group can be found from the process ID, so replace group_findbypid with
task_getgroup to simplify the search logic.
Signed-off-by: chao an <anchao@lixiang.com>
Add support for a static TCB; applications in some special cases can
initialize system resources in advance through a static TCB.
| static struct task_tcb_s g_tcb;
|
| memset(&g_tcb, 0, sizeof(struct task_tcb_s));
| g_tcb.cmn.flags = TCB_FLAG_TTYPE_KERNEL;
| nxtask_init(&g_tcb, "PTCB", 101, NULL, 1024, ptcb_task, NULL, NULL, NULL);
|
| ...
| nxtask_activate(&g_tcb.cmn);
Signed-off-by: chao an <anchao@lixiang.com>
Task activation/exit logs are helpful for device bringup,
especially when user space nsh prompt doesn't show up.
Putting them here will allow cleaning of logs in multiple
up_exit() functions later.
Signed-off-by: Yanfeng Liu <yfliu2008@qq.com>
PR #11165 causes an unnecessary regression: task_delete no longer works
if the deleted task is from another group.
The logic that prevents this comes from:
nxnotify_cancellation() ->
tls_get_info_pid() ->
nxsched_get_stackinfo()
which checks for permissions; this does not make sense in this case, since
it is the kernel asking for the stack information.
Fix this by partially reverting #11165 and implementing a direct path for
the kernel to query any task's TLS.
The task file handling should consult the "spawn action" and the O_CLOEXEC flag
to determine whether each file should be duplicated.
This PR further optimizes file list duplication to avoid the performance
regression caused by the additional file operations.
Signed-off-by: chao an <anchao@xiaomi.com>
This moves task / thread cancel point logic from the NuttX kernel into
libc, while the data needed by the cancel point logic is moved to TLS.
The change is an enabler to move user-space APIs to libc as well, for
a coherent user/kernel separation.
Set the newly spawned process's signal mask, if the caller has instructed
to do so by setting POSIX_SPAWN_SETSIGMASK.
This is called after the task has been created but has NOT been started
yet.
Like the name implies, it is supposed to set the spawn attributes for
the NuttX specific "spawn proxy task" which was historically used as
a proxy to spawn new tasks. The proxy handled file actions and the signal
mask which are inherited from the parent.
The proxy task does not exist anymore, thus the proxy task attributes
do not need to be set anymore either.
Also, the function is currently still used, but the signal mask is set
for the spawning process, not the proxy process, and this is most
DEFINITELY an error (as the spawning process's signal mask changes
unexpectedly).
Setting the signal mask for the newly spawned process is simple, just
set it directly, if instructed to do so. This will be done in a later
patch!
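For reference, a usage sketch of how a caller requests this through the standard spawn attributes API:
| #include <signal.h>
| #include <spawn.h>
|
| posix_spawnattr_t attr;
| sigset_t mask;
|
| posix_spawnattr_init(&attr);
| sigemptyset(&mask);
| sigaddset(&mask, SIGUSR1);                  /* child starts with SIGUSR1 blocked */
| posix_spawnattr_setsigmask(&attr, &mask);
| posix_spawnattr_setflags(&attr, POSIX_SPAWN_SETSIGMASK);
|
| /* posix_spawn(&pid, path, NULL, &attr, argv, envp); */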
VELAPLATFO-18473
refs:
https://man7.org/linux/man-pages/man2/fcntl.2.html
If the FD_CLOEXEC bit is set, the file descriptor will automatically
be closed during a successful execve(2).
(If the execve(2) fails, the file descriptor is left open.)
modify:
1. Ensure that the child task copies all fds of the parent task,
including those with O_CLOEXEC.
2. Make sure spawn_file_action is also executed on fds that have O_CLOEXEC
set; otherwise it would fail.
3. When a new task is activated or exec is called, close all fds
with the O_CLOEXEC flag (usage sketch below).
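A small usage sketch of the flag being handled (standard open/fcntl API):
| #include <fcntl.h>
|
| int fd = open("/dev/console", O_WRONLY | O_CLOEXEC);
|
| /* Or mark an existing descriptor: */
|
| fcntl(fd, F_SETFD, FD_CLOEXEC);
|
| /* The descriptor is still copied to the child when it is spawned, and
|  * spawn file actions may still refer to it; it is closed automatically
|  * once the new task is activated or calls exec.
|  */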
Signed-off-by: dongjiuzhu1 <dongjiuzhu1@xiaomi.com>
Handle task spawn attributes as task spawn file actions are handled.
Why? This removes the need for sched_lock() when the task is being
spawned. When loading the new task from a file the scheduler can be
locked for a VERY LONG time, in the order of hundreds of milliseconds!
This is unacceptable for real time operation.
Also fixes a latent bug in exec_module: spawn_file_actions is executed
at a bad location; when CONFIG_ARCH_ADDRENV=y the actions will point to the
new process's address environment (as it is temporarily instantiated at
that point). Fix this by moving the call to after addrenv_restore.
The nxspawn_dup2 function will return a value greater than 0,
so the loop should only exit if ret is less than 0.
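A sketch of the corrected loop-exit condition; the surrounding loop and the helper name are illustrative:
| ret = do_dup2_action(entry);    /* may legitimately return a positive fd */
| if (ret < 0)
|   {
|     break;                      /* only a negative value is an error */
|   }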
Signed-off-by: zhangyuan21 <zhangyuan21@xiaomi.com>
1. Since we can use fork to implement vfork, rename vfork to fork and use
the fork method as the base for implementing vfork.
2. Create the vfork function as a libc function based on fork (see the
sketch below).
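A minimal sketch of the libc-side wrapper this describes (simplified; the real implementation has more to it):
| #include <unistd.h>
|
| pid_t vfork(void)
| {
|   /* Implemented in terms of fork(); the kernel only provides fork */
|
|   return fork();
| }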
Signed-off-by: guoshichao <guoshichao@xiaomi.com>
1. Update all CMakeLists.txt to adapt to the new layout
2. Fix CMake build breaks
3. Update the license of all new files
4. Fully compatible with the current compilation environment (use configure.sh or cmake as you choose)
------------------
How to test
From within nuttx/. Configure:
cmake -B build -DBOARD_CONFIG=sim/nsh -GNinja
cmake -B build -DBOARD_CONFIG=sim:nsh -GNinja
cmake -B build -DBOARD_CONFIG=sabre-6quad/smp -GNinja
cmake -B build -DBOARD_CONFIG=lm3s6965-ek/qemu-flat -GNinja
(or the full path for a custom board):
cmake -B build -DBOARD_CONFIG=$PWD/boards/sim/sim/sim/configs/nsh -GNinja
This uses the Ninja generator (install with sudo apt install ninja-build). To build:
$ cmake --build build
menuconfig:
$ cmake --build build -t menuconfig
--------------------------
2. cmake/build: reformat the cmake style by cmake-format
https://github.com/cheshirekow/cmake_format
$ pip install cmakelang
$ for i in `find -name CMakeLists.txt`; do cmake-format $i -o $i; done
$ for i in `find -name '*.cmake'`; do cmake-format $i -o $i; done
Co-authored-by: Matias N <matias@protobits.dev>
Signed-off-by: chao an <anchao@xiaomi.com>
to avoid the infinite recursive dispatch:
*0 myhandler (signo=27, info=0xf3e38b9c, context=0x0) at ltp/testcases/open_posix_testsuite/conformance/interfaces/sigqueue/7-1.c:39
*1 0x58f1c39e in nxsig_deliver (stcb=0xf4e20f40) at signal/sig_deliver.c:167
*2 0x58fa0664 in up_schedule_sigaction (tcb=0xf4e20f40, sigdeliver=0x58f1bab5 <nxsig_deliver>) at sim/sim_schedulesigaction.c:88
*3 0x58f19907 in nxsig_queue_action (stcb=0xf4e20f40, info=0xf4049334) at signal/sig_dispatch.c:115
*4 0x58f1b089 in nxsig_tcbdispatch (stcb=0xf4e20f40, info=0xf4049334) at signal/sig_dispatch.c:435
*5 0x58f31853 in nxsig_unmask_pendingsignal () at signal/sig_unmaskpendingsignal.c:104
*6 0x58f1ca09 in nxsig_deliver (stcb=0xf4e20f40) at signal/sig_deliver.c:199
*7 0x58fa0664 in up_schedule_sigaction (tcb=0xf4e20f40, sigdeliver=0x58f1bab5 <nxsig_deliver>) at sim/sim_schedulesigaction.c:88
*8 0x58f19907 in nxsig_queue_action (stcb=0xf4e20f40, info=0xf4049304) at signal/sig_dispatch.c:115
*9 0x58f1b089 in nxsig_tcbdispatch (stcb=0xf4e20f40, info=0xf4049304) at signal/sig_dispatch.c:435
*10 0x58f31853 in nxsig_unmask_pendingsignal () at signal/sig_unmaskpendingsignal.c:104
*11 0x58f1ca09 in nxsig_deliver (stcb=0xf4e20f40) at signal/sig_deliver.c:199
*12 0x58fa0664 in up_schedule_sigaction (tcb=0xf4e20f40, sigdeliver=0x58f1bab5 <nxsig_deliver>) at sim/sim_schedulesigaction.c:88
*13 0x58f19907 in nxsig_queue_action (stcb=0xf4e20f40, info=0xf40492d4) at signal/sig_dispatch.c:115
*14 0x58f1b089 in nxsig_tcbdispatch (stcb=0xf4e20f40, info=0xf40492d4) at signal/sig_dispatch.c:435
*15 0x58f31853 in nxsig_unmask_pendingsignal () at signal/sig_unmaskpendingsignal.c:104
*16 0x58f1ca09 in nxsig_deliver (stcb=0xf4e20f40) at signal/sig_deliver.c:199
*17 0x58fa0664 in up_schedule_sigaction (tcb=0xf4e20f40, sigdeliver=0x58f1bab5 <nxsig_deliver>) at sim/sim_schedulesigaction.c:88
*18 0x58f19907 in nxsig_queue_action (stcb=0xf4e20f40, info=0xf40492a4) at signal/sig_dispatch.c:115
*19 0x58f1b089 in nxsig_tcbdispatch (stcb=0xf4e20f40, info=0xf40492a4) at signal/sig_dispatch.c:435
*20 0x58f31853 in nxsig_unmask_pendingsignal () at signal/sig_unmaskpendingsignal.c:104
*21 0x58f1ca09 in nxsig_deliver (stcb=0xf4e20f40) at signal/sig_deliver.c:199
*22 0x58fa0664 in up_schedule_sigaction (tcb=0xf4e20f40, sigdeliver=0x58f1bab5 <nxsig_deliver>) at sim/sim_schedulesigaction.c:88
*23 0x58f19907 in nxsig_queue_action (stcb=0xf4e20f40, info=0xf4049274) at signal/sig_dispatch.c:115
*24 0x58f1b089 in nxsig_tcbdispatch (stcb=0xf4e20f40, info=0xf4049274) at signal/sig_dispatch.c:435
*25 0x58f31853 in nxsig_unmask_pendingsignal () at signal/sig_unmaskpendingsignal.c:104
*26 0x58f1ca09 in nxsig_deliver (stcb=0xf4e20f40) at signal/sig_deliver.c:199
*27 0x58fa0664 in up_schedule_sigaction (tcb=0xf4e20f40, sigdeliver=0x58f1bab5 <nxsig_deliver>) at sim/sim_schedulesigaction.c:88
*28 0x58f19907 in nxsig_queue_action (stcb=0xf4e20f40, info=0xf4049244) at signal/sig_dispatch.c:115
*29 0x58f1b089 in nxsig_tcbdispatch (stcb=0xf4e20f40, info=0xf4049244) at signal/sig_dispatch.c:435
*30 0x58f31853 in nxsig_unmask_pendingsignal () at signal/sig_unmaskpendingsignal.c:104
*31 0x58f1ca09 in nxsig_deliver (stcb=0xf4e20f40) at signal/sig_deliver.c:199
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>