This patch moves g_wdtimernested to wd_start.c.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
Signed-off-by: ligd <liguiding1@xiaomi.com>
If g_wdactivelist is changed inside the wdog callback, traversing the list with a cached next pointer will cause problems.
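One common remedy, shown here only as a minimal sketch (not necessarily this patch's exact change, and wdog_is_expired() is a hypothetical helper), is to always take the current head of the active list instead of caching a next pointer, so any relinking done by the callback cannot leave the traversal holding a stale node:

/* Process expired entries from the head; the callback may safely modify
 * g_wdactivelist (e.g. by calling wd_start()) because we never hold a
 * pointer into the middle of the list.
 */

while (g_wdactivelist.head != NULL)
  {
    FAR struct wdog_s *wdog = (FAR struct wdog_s *)g_wdactivelist.head;

    if (!wdog_is_expired(wdog))       /* Hypothetical expiry check */
      {
        break;
      }

    sq_remfirst(&g_wdactivelist);     /* Unlink before running the callback */
    wdog->func(wdog->arg);
  }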
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
Signed-off-by: ligd <liguiding1@xiaomi.com>
This commit refactors the wdog module to use absolute time representation internally (a small standalone sketch of the idea follows the list). The main improvements include:
1. Fixed recursive watchdog handling caused by calling wd_start within the watchdog timeout callback function.
2. Simplified timer processing to improve performance and enhance code readability.
3. Improved accuracy of timers.
4. Reduced critical section and interrupt disable time, improving real-time performance.
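A small, self-contained sketch of the absolute-time idea referenced above (types and names are illustrative, not the actual wdog internals): each timer stores its absolute expiration tick, and a signed subtraction keeps the expiry test correct across counter wraparound.

#include <stdbool.h>
#include <stdint.h>

typedef uint32_t tick_t;

struct abs_timer
{
  tick_t expires;                         /* Absolute expiration tick */
};

static void abs_timer_start(struct abs_timer *t, tick_t now, tick_t delay)
{
  t->expires = now + delay;               /* No per-tick lag bookkeeping */
}

static bool abs_timer_expired(const struct abs_timer *t, tick_t now)
{
  return (int32_t)(now - t->expires) >= 0;  /* Wraparound-safe compare */
}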
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
Signed-off-by: ligd <liguiding1@xiaomi.com>
reason:
In SMP, when a context switch occurs, restore_critical_section is executed.
In order to reduce the time taken for context switching,
we inline the restore_critical_section function.
Given that restore_critical_section is small
and is called from only one location, inlining it does not increase the size of the image.
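As a generic illustration only (not the actual NuttX declaration), moving a small single-caller function into a header as static inline lets the compiler expand it at its one call site, removing the call overhead on the context-switch path without growing the image:

/* Defined in a header so the single caller expands it in place. */

static inline void restore_critical_section_sketch(void)
{
  /* ...body is emitted directly inside the caller... */
}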
Signed-off-by: hujun5 <hujun5@xiaomi.com>
The file descriptors of kernel threads should be clean and should
not be generated based on user threads. Instead, an idle thread
should be chosen.
Signed-off-by: dongjiuzhu1 <dongjiuzhu1@xiaomi.com>
Configuring NuttX and compile:
$ ./tools/configure.sh -l qemu-armv8a:nsh_smp
$ make
Running with qemu
$ qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic \
-machine virt,virtualization=on,gic-version=3 \
-net none -chardev stdio,id=con,mux=on -serial chardev:con \
-mon chardev=con,mode=readline -kernel ./nuttx
reason:
If the state of the tcb is TSTATE_TASK_ASSIGNED, removing it using nxsched_remove_not_running
and then putting it back into the queue may result in a context switch.
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
1 We place interrupt handling functions of the same priority into the work queue corresponding
to that priority, allowing high-priority interrupts to preempt low-priority ones,
thus ensuring the real-time performance of high-priority interrupts.
2 The sole purpose of the interrupt handler is to wake
up the work queue of the corresponding priority, which then executes the interrupt handling function.
3 Compared to the functionality of isr threads, this
approach saves more memory, particularly when the number of interrupts is large (a sketch of the pattern follows).
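A minimal sketch of the pattern described above, assuming the standard work queue API (work_queue() with a queue id such as HPWORK); the device name and worker are hypothetical:

#include <nuttx/wqueue.h>

static struct work_s g_mydev_work;

/* All real interrupt processing runs here at the work queue's priority,
 * preemptible by higher-priority work queues.
 */

static void mydev_worker(FAR void *arg)
{
  /* ...handle the device event... */
}

/* The hard interrupt handler only queues the deferred work. */

static int mydev_isr(int irq, FAR void *context, FAR void *arg)
{
  return work_queue(HPWORK, &g_mydev_work, mydev_worker, arg, 0);
}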
Signed-off-by: hujun5 <hujun5@xiaomi.com>
Since pthread_mutex is implemented with a semaphore, it is impossible to see in ps which thread holds the lock and is causing the wait.
Replace the semaphore with a mutex implementation to solve this problem.
Signed-off-by: yinshengkai <yinshengkai@xiaomi.com>
reason:
Since the SMP call handler may lead to context switching,
we need to update the context information by calling up_cpu_paused_[save|restore].
Signed-off-by: hujun5 <hujun5@xiaomi.com>
Make this_cpu() architecture-independent and let up_cpu_index() do the architecture-specific part.
In AMP mode, up_cpu_index() may return the index of the physical core.
Signed-off-by: fangxinyong <fangxinyong@xiaomi.com>
reason:
Dynamically create g_irqmap to reduce the use of data segments.
CONFIG_ARCH_NUSER_INTERRUPTS should be one more than the number of IRQs actually used
Signed-off-by: hujun5 <hujun5@xiaomi.com>
Calling malloc inside the critical section may cause a thread switch.
Add a retry mechanism to handle the case where another thread has already expanded successfully (a sketch follows).
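A self-contained sketch of the retry idea (pool, CHUNK_SIZE and the pool_* helpers are hypothetical): allocate outside the critical section, then re-check after re-entering it, because another thread may have expanded the pool in the meantime:

FAR void *item;
FAR void *chunk;
irqstate_t flags;

for (; ; )
  {
    flags = enter_critical_section();
    item = pool_try_alloc(pool);        /* Hypothetical fixed-size alloc */
    leave_critical_section(flags);

    if (item != NULL)
      {
        return item;                    /* Success: pool had room */
      }

    /* Expand outside the critical section: malloc() may cause a thread
     * switch, so it must not run with the critical section held.
     */

    chunk = malloc(CHUNK_SIZE);
    if (chunk == NULL)
      {
        return NULL;
      }

    flags = enter_critical_section();
    if (pool_needs_expand(pool))
      {
        pool_add_chunk(pool, chunk);    /* We won the race: expand */
        chunk = NULL;
      }

    leave_critical_section(flags);

    if (chunk != NULL)
      {
        free(chunk);                    /* Another thread expanded first */
      }

    /* Retry the allocation against the (now larger) pool. */
  }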
Signed-off-by: yinshengkai <yinshengkai@xiaomi.com>
Signed-off-by: ligd <liguiding1@xiaomi.com>
Enabling the scheduler may cause a context switch to a high-priority task,
which will fail to clear pending events correctly.
Signed-off-by: chao an <anchao@lixiang.com>
reason:
These interfaces are used when we assign interrupt
handling of the same priority to the corresponding priority work queues.
Signed-off-by: hujun5 <hujun5@xiaomi.com>
1. work_queue() a work item whose worker calls work_queue() again
2. work_cancel_sync() the work
3. the work is just being executed in the work thread
4. the work is queued again, and the work_cancel_sync() wakeup is posted
5. the work ends up not canceled
Signed-off-by: dulibo1 <dulibo1@xiaomi.com>
purpose:
To improve the real-time performance of the system, we prefer to perform
as few operations as possible within the interrupt function.
We have designed an interrupt thread for each interrupt,
where all operations that do not need to be handled
in the interrupt function are delegated to the interrupt thread (see the sketch below).
up_enable_irq() will be invoked after the isrthread has started.
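A minimal sketch of the delegation described above (handle_device_event() is hypothetical, and the semaphore-based wakeup is only one possible mechanism): the hard interrupt handler merely wakes the isrthread, which performs all remaining processing at thread priority:

static sem_t g_isr_sem;

static int hard_isr(int irq, FAR void *context, FAR void *arg)
{
  /* Keep the interrupt function minimal: just wake the isrthread. */

  return nxsem_post(&g_isr_sem);
}

static int isrthread(int argc, FAR char *argv[])
{
  for (; ; )
    {
      nxsem_wait_uninterruptible(&g_isr_sem);

      /* Everything not strictly required in the interrupt function
       * runs here at the thread's priority.
       */

      handle_device_event();
    }

  return 0;
}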
Configuring NuttX and compile:
$ ./tools/configure.sh -l qemu-armv8a:nsh_smp
$ make
Running with qemu
$ qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic \
-machine virt,virtualization=on,gic-version=3 \
-net none -chardev stdio,id=con,mux=on -serial chardev:con \
-mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
"/mnt/yang/qixinwei_cmake/nuttx/sched/sched/sched_removeblocked.c", line 58: warning #188-D:
enumerated type mixed with another type
tstate_t task_state = btcb->task_state;
"/mnt/yang/qixinwei_cmake/nuttx/sched/sched/sched_setpriority.c", line 243: warning #188-D:
enumerated type mixed with another type
tstate_t task_state = tcb->task_state;
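One way to silence warning #188-D, shown only as an illustration (the actual patch may differ), is to make the conversion to the enumerated type explicit:

tstate_t task_state = (tstate_t)btcb->task_state;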
Signed-off-by: yanghuatao <yanghuatao@xiaomi.com>
When a low-priority thread sends a kill signal to a high-priority thread,
the high-priority thread will exit and release its tcb. When execution returns
to the low-priority thread, it will access the released stcb.
Signed-off-by: yinshengkai <yinshengkai@xiaomi.com>
Event groups are synchronization primitives that allow tasks to wait
for multiple conditions to be met before proceeding. They are particularly
useful in scenarios where a task needs to wait for several events to occur
simultaneously.
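For illustration only, here is a tiny plain-pthreads model of the concept (this is not the new NuttX interface): a waiter proceeds only once every bit in its wait mask has been posted.

#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  g_cond = PTHREAD_COND_INITIALIZER;
static uint32_t        g_events;

void event_post(uint32_t bits)
{
  pthread_mutex_lock(&g_lock);
  g_events |= bits;                       /* Record the occurred events */
  pthread_cond_broadcast(&g_cond);
  pthread_mutex_unlock(&g_lock);
}

void event_wait_all(uint32_t bits)
{
  pthread_mutex_lock(&g_lock);
  while ((g_events & bits) != bits)       /* Wait until all bits are set */
    {
      pthread_cond_wait(&g_cond, &g_lock);
    }

  pthread_mutex_unlock(&g_lock);
}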
Signed-off-by: chao an <anchao@lixiang.com>
Regression by:
| commit 2ee8aa6f2b
| Author: hujun5 <hujun5@xiaomi.com>
| Date: Thu Jan 11 11:27:31 2024 +0800
|
| sched: we use spin_lock_irqsave replace enter_critical_section to protect g_irqvector
|
| enter_critical_section may be called before os initialized
|
| Signed-off-by: hujun5 <hujun5@xiaomi.com>
Signed-off-by: chao an <anchao@lixiang.com>
Two things need to be done when vfork'ing:
- Must attach to parent's address environment (the addrenv is shared)
- Must allocate a kernel stack (where would the register context go otherwise)
Note that this code assumes the address environment is shared, since we
don't support fork() which would _clone_ the address environment instead.
When the thread to backtrace is exiting, running nxsched_get_tcb and up_backtrace in
different critical sections may cause an attempt to dump an invalid pointer; we have
to ensure that nxsched_get_tcb and up_backtrace run within the same critical
section.
Signed-off-by: buxiasen <buxiasen@xiaomi.com>
* sched/Kconfig
(INIT_ENTRYPOINT): Document that the signature of "main" must take
"argc" and "argv" or else a compilation error results, as discussed in
the email thread below [1] (see the example after the references).
References:
[1] dev@nuttx.apache.org email thread:
"basically, I get an error when I do make" started 18 Jul 2024,
archived:
https://lists.apache.org/thread/9j2s3647ysdhy204g4ombn4o09bn11c1
and elsewhere.
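For reference, the entry point signature that avoids the error discussed in [1]:

int main(int argc, char *argv[])
{
  return 0;
}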
Because 'g_cpu_nestcount[me] > 0' will never happen at this point.
test:
We can use qemu for testing.
compiling
make distclean -j20; ./tools/configure.sh -l qemu-armv8a:nsh_smp ;make -j20
running
qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic -machine virt,virtualization=on,gic-version=3 -net none -chardev stdio,id=con,mux=on -serial chardev:con -mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
Add support for XSAVE/XRSTOR to handle x86_64 processor extended states.
Support for these instructions is required to support the AVX instruction set.
Signed-off-by: p-szafonimateusz <p-szafonimateusz@xiaomi.com>
purpose:
1 sched_lock is very time-consuming, and reducing its invocations can improve performance.
2 sched_lock is prone to misuse, and narrowing its scope of use prevents people from referencing and reusing incorrect code.
test:
We can use qemu for testing.
compiling
make distclean -j20; ./tools/configure.sh -l qemu-armv8a:nsh_smp ;make -j20
running
qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic -machine virt,virtualization=on,gic-version=3 -net none -chardev stdio,id=con,mux=on -serial chardev:con -mon chardev=con,mode=readline -kernel ./nuttx
We have also tested this patch on other ARM hardware platforms.
Signed-off-by: hujun5 <hujun5@xiaomi.com>
Kthreads can share the group data to reduce overhead.
This implements shared kthread group via:
- use `tcb_s` instead of `task_tcb_s` for kthreads
- use `g_kthread_group` when creating kthreads
- use stackargs to start tasks and kthreads
see pull/12320 for test logs.
Signed-off-by: Yanfeng Liu <yfliu2008@qq.com>
Thread args have already been saved to the stack after the TLS section by
nxtask_setup_stackargs. This retrieves them for use.
Signed-off-by: Yanfeng Liu <yfliu2008@qq.com>
This adds pre-allocation and dynamic allocations for sigactions.
Current behavior can be achieved by setting SIG_ALLOC_ACTIONS to
a number larger than 1.
Signed-off-by: Yanfeng Liu <yfliu2008@qq.com>
this_task() is a function call that involves disabling interrupts and calling this_cpu().
Since restore_critical_section is always called in an interrupt-disabled context,
there is no need to disable interrupts again. Therefore, to save time and achieve
the same effect, we directly call tcb = current_task(me) instead of tcb = this_task().
Signed-off-by: hujun5 <hujun5@xiaomi.com>
We can use g_cpu_lockset to determine whether we are currently holding the scheduling lock.
All accesses and modifications to g_cpu_lockset, g_cpu_irqlock and g_cpu_irqset
happen inside the critical section, so we can operate on them directly.
test:
We can use qemu for testing.
compiling
make distclean -j20; ./tools/configure.sh -l qemu-armv8a:nsh_smp ;make -j20
running
qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic -machine virt,virtualization=on,gic-version=3 -net none -chardev stdio,id=con,mux=on -serial chardev:con -mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
The sched implementation does not depend on the macro abstraction, so revert the commits below:
This reverts commit 4e62d0005a
This reverts commit 0f0c370520
This reverts commit ad0efd04ee
Signed-off-by: chao an <anchao@lixiang.com>
waitpid() cannot be used in kernel mode unless SCHED_HAVE_PARENT is
selected -> add dependency if BUILD_KERNEL is selected.
Why? Because without SCHED_HAVE_PARENT waitpid() works in a non-standard
way, meaning it does not use SIGCHLD to wake the parent, as it should.
Also, returning the child status via stat_loc corrupts memory as stat_loc
points to the parent's address environment:
pid_t nxsched_waitpid(pid_t pid, int *stat_loc, int options)
{
  ...
  group->tg_statloc = stat_loc;
  ...
}
And later when the status is returned, the child writes to tg_statloc,
which points to the parent's address environment -> random memory
corruption:
static inline void nxtask_exitwakeup(FAR struct tcb_s *tcb, int status)
{
  ...
  if (group->tg_statloc != NULL)
    {
      *group->tg_statloc = status << 8;
    }
  ...
}
This log appears for NSH internal commands in the KERNEL build and
largely duplicates the task_spawn: log. So drop it to make the
DEBUG_SCHED_INFO logging more readable.
Signed-off-by: Yanfeng Liu <yfliu2008@qq.com>
we removed "select SCHED_RESUMESCHEDULER",
As SCHED_RESUMESCHEDULER is not a necessary feature in SMP,
turning it on by default may affect performance.
Signed-off-by: hujun5 <hujun5@xiaomi.com>
test:
We can use qemu for testing.
compiling
make distclean -j20; ./tools/configure.sh -l qemu-armv8a:nsh_smp ;make -j20
running
qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic -machine virt,virtualization=on,gic-version=3 -net none -chardev stdio,id=con,mux=on -serial chardev:con -mon chardev=con,mode=readline -kernel ./nuttx
We have also tested this patch on other ARM hardware platforms.
Signed-off-by: hujun5 <hujun5@xiaomi.com>
purpose:
1 sched_lock is very time-consuming, and reducing its invocations can improve performance.
2 sched_lock is prone to misuse, and narrowing its scope of use prevents people from referencing and reusing incorrect code.
test:
We can use qemu for testing.
compiling
make distclean -j20; ./tools/configure.sh -l qemu-armv8a:nsh_smp ;make -j20
running
qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic -machine virt,virtualization=on,gic-version=3 -net none -chardev stdio,id=con,mux=on -serial chardev:con -mon chardev=con,mode=readline -kernel ./nuttx
We have also tested this patch on other ARM hardware platforms.
Signed-off-by: hujun5 <hujun5@xiaomi.com>
purpose:
1 sched_lock is very time-consuming, and reducing its invocations can improve performance.
2 sched_lock is prone to misuse, and narrowing its scope of use prevents people from referencing and reusing incorrect code.
test:
We can use qemu for testing.
compiling
make distclean -j20; ./tools/configure.sh -l qemu-armv8a:nsh_smp ;make -j20
running
qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic -machine virt,virtualization=on,gic-version=3 -net none -chardev stdio,id=con,mux=on -serial chardev:con -mon chardev=con,mode=readline -kernel ./nuttx
We have also tested this patch on other ARM hardware platforms.
Signed-off-by: hujun5 <hujun5@xiaomi.com>
1. The critical section does not prevent task scheduling
2. If the critical section is within sched_lock, there is no need to check;
scheduling is not going to happen
3. If sched_lock is within the critical section, sched_unlock will also
trigger scheduling without waiting for the critical section to be exited
4. After exiting the critical section, if an interrupt occurs,
scheduling will be triggered automatically
Signed-off-by: hujun5 <hujun5@xiaomi.com>