Add a common method to format a backtrace into a buffer, so it can be used by mm, fs, and other possible modules.
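A minimal sketch of such a helper, assuming the standard backtrace() from <execinfo.h>; the backtrace_format() name and signature here are illustrative assumptions, not the actual API:

#include <execinfo.h>
#include <stdio.h>
#include <stddef.h>

/* Format up to 16 return addresses into buf, space separated.
 * Truncation follows snprintf semantics.
 */

size_t backtrace_format(char *buf, size_t size)
{
  void *addrs[16];
  int n = backtrace(addrs, 16);
  size_t off = 0;

  for (int i = 0; i < n && off < size; i++)
    {
      int ret = snprintf(buf + off, size - off, "%p ", addrs[i]);
      if (ret < 0)
        {
          break;
        }

      off += ret;
    }

  return off < size ? off : size - 1;
}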
Signed-off-by: fangpeina <fangpeina@xiaomi.com>
The mempool mistakenly considers heap memory as its own. This re-entrant scenario only occurs in mempool_deinit.
test code in hello_main:
int main(int argc, FAR char *argv[])
{
  void *a = malloc(1024 * 64);
  void *d[16];
  void *heap = mm_initialize("123", a, 1024 * 64);

  for (int i = 0; i < 16; i++)
    {
      d[i] = mm_malloc(heap, 32);
    }

  for (int i = 0; i < 16; i++)
    {
      if (d[i] != NULL)
        {
          mm_free(heap, d[i]);
        }
    }

  mm_uninitialize(heap);
  free(a);
  return 0;
}
and the crash backtrace:
#0  _assert (filename=0x4ea20 "mempool/mempool.c", linenum=373, msg=0x0 <up_perf_convert>, regs=0x0 <up_perf_convert>)
    at misc/assert.c:551
#1  0x0000a32c in __assert (filename=0x4ea20 "mempool/mempool.c", linenum=373, msg=0x0 <mempool_multiple_foreach>)
    at assert/lib_assert.c:36
#2  0x0000f92c in mempool_release (pool=0x100e7a0, blk=0x100ff80) at mempool/mempool.c:373
#3  0x000109ce in mempool_multiple_free (mpool=0x100e6f8, blk=0x100ff80) at mempool/mempool_multiple.c:648
#4  0x0000deac in mm_delayfree (heap=0x100e090, mem=0x1010000, delay=false) at mm_heap/mm_free.c:83
#5  0x0000e21c in mm_free (heap=0x100e090, mem=0x1010000) at mm_heap/mm_free.c:242
#6  0x0001021c in mempool_multiple_free_chunk (mpool=0x100e6f8, ptr=0x1010000) at mempool/mempool_multiple.c:222
#7  0x0001048e in mempool_multiple_free_callback (pool=0x100e7a0, addr=0x1010080) at mempool/mempool_multiple.c:291
#8  0x0000ff6e in mempool_deinit (pool=0x100e7a0) at mempool/mempool.c:644
#9  0x00010cba in mempool_multiple_deinit (mpool=0x100e6f8) at mempool/mempool_multiple.c:883
#10 0x0000dd0c in mm_uninitialize (heap=0x100e090) at mm_heap/mm_initialize.c:326
#11 0x0002c742 in hello_main (argc=1, argv=0x100d050) at hello_main.c:54
#12 0x0000a83e in nxtask_startup (entrypt=0x2c6a5 <hello_main>, argc=1, argv=0x100d050) at sched/task_startup.c:70
#13 0x00005272 in nxtask_start () at task/task_start.c:112
Signed-off-by: anjiahao <anjiahao@xiaomi.com>
remove alist, switch to a convenient way of traversing
the physical addresses directly.
At the same time, we can use `guard` to mark whether a block
is free or allocated, and to check for out-of-bounds writes.
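A minimal sketch of the guard idea, with illustrative magic values and a fixed-stride walk over the pool's physical memory (all names here are assumptions, not the actual implementation):

#include <stdint.h>
#include <stddef.h>

#define MAGIC_FREE   0x55555555u   /* Block is free */
#define MAGIC_ALLOC  0xaaaaaaaau   /* Block is allocated */

struct blk_s
{
  uint32_t guard;   /* Any other value means the guard was
                     * overwritten, i.e. an out-of-bounds write. */
};

/* Traverse the pool memory directly instead of keeping an
 * allocated-block list (alist).
 */

static void pool_check(uint8_t *base, size_t blksize, size_t nblks)
{
  for (size_t i = 0; i < nblks; i++)
    {
      struct blk_s *blk = (struct blk_s *)(base + i * blksize);

      if (blk->guard != MAGIC_FREE && blk->guard != MAGIC_ALLOC)
        {
          /* Corrupted guard: flag an out-of-bounds write here */
        }
    }
}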
Signed-off-by: anjiahao <anjiahao@xiaomi.com>
The benefit of this approach is that in a multi-core AMP system,
a single coredump might contain memory information from other cores.
By analyzing this coredump along with the corresponding ELF files from
the other cores, you can reconstruct the crash site of those other
cores.
Signed-off-by: anjiahao <anjiahao@xiaomi.com>
If we have a full raw memory dump, we can parse g_tcbinfo, g_npidhash,
and g_pidhash from the ELF to get the NuttX thread info,
without needing the crash dump log file.
Support new commands in gdb:
- info threads
- thread <id>
Signed-off-by: anjiahao <anjiahao@xiaomi.com>
Fix the following UBSAN runtime error:
sim/posix/sim_alsa.c:728:24: runtime error: member access within null pointer of type 'const struct sim_codec_ops_s'
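A minimal sketch of the kind of guard that avoids the report; the struct members and function names below are illustrative assumptions (only sim_codec_ops_s comes from the error message):

#include <errno.h>
#include <stddef.h>

struct sim_codec_ops_s
{
  int (*init)(void);                   /* Illustrative member */
};

struct sim_audio_s
{
  const struct sim_codec_ops_s *ops;   /* NULL until a codec binds */
};

static int sim_audio_start(struct sim_audio_s *priv)
{
  /* Check the ops pointer before any member access; dereferencing
   * a NULL ops struct is exactly what UBSAN flagged.
   */

  if (priv->ops == NULL)
    {
      return -ENOSYS;
    }

  return priv->ops->init();
}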
Signed-off-by: jinxiuxu <jinxiuxu@xiaomi.com>
If the sched lock is taken before irq save, and an irq handler does a post,
scheduling will be delayed past WFI until the next sched unlock, which is
not acceptable.
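A minimal sketch of the ordering issue (these are real NuttX primitives, but the call site and the exact fix are assumptions on my part):

#include <sched.h>
#include <nuttx/irq.h>

static void locking_order(void)
{
  irqstate_t flags;

  /* Problematic order:
   *
   *   sched_lock();
   *   flags = up_irq_save();
   *
   * A post from an irq handler taken in between cannot cause a
   * context switch while the scheduler is locked, so the core can
   * reach WFI and the woken task waits for the next sched_unlock().
   *
   * Disabling interrupts first avoids that window:
   */

  flags = up_irq_save();
  sched_lock();

  /* Critical work goes here */

  sched_unlock();
  up_irq_restore(flags);
}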
Signed-off-by: buxiasen <buxiasen@xiaomi.com>
The pm process should be done by chip-specific code, but we can provide a
standard flow so that vendor and chip code only need to focus on handling
the different state changes.
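A minimal sketch of such a split, using the existing NuttX pm_register()/pm_callback_s interface; the chip_* names and the per-state actions are illustrative assumptions:

#include <nuttx/power/pm.h>

/* Chip code only reacts to state changes... */

static void chip_pm_notify(FAR struct pm_callback_s *cb, int domain,
                           enum pm_state_e state)
{
  switch (state)
    {
      case PM_NORMAL:   /* Restore full clocks (chip specific) */
        break;

      case PM_IDLE:     /* Light power saving */
        break;

      case PM_STANDBY:  /* Deeper power saving */
      case PM_SLEEP:
        break;

      default:
        break;
    }
}

static struct pm_callback_s g_chip_pm_cb =
{
  .notify = chip_pm_notify,
};

/* ...while the standard flow (state evaluation and notification)
 * stays in common code; the chip just registers its callbacks.
 */

void chip_pm_initialize(void)
{
  pm_register(&g_chip_pm_cb);
}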
Signed-off-by: buxiasen <buxiasen@xiaomi.com>
This fixes syscall instrumentation in the flat build:
- add syscall_names.c to the 'wraps' subdirectory when LIB_SYSCALL=n
- change execute_process to add_custom_command
- fix a typo in the wrapper naming convention
Signed-off-by: Daniel Jasinski <jasinskidaniel95szcz@gmail.com>
critical_section is not compatible with interrupts disabled; we have to
delay the wd_start until after spin_unlock_irqrestore.
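A minimal sketch of the reordering, using the NuttX spinlock and watchdog APIs (the g_lock/g_wdog/timeout_cb names and the 100 ms delay are illustrative):

#include <nuttx/clock.h>
#include <nuttx/spinlock.h>
#include <nuttx/wdog.h>

static spinlock_t g_lock = SP_UNLOCKED;
static struct wdog_s g_wdog;

static void timeout_cb(wdparm_t arg)
{
}

void sketch(void)
{
  irqstate_t flags = spin_lock_irqsave(&g_lock);

  /* Update the state that the spinlock protects */

  spin_unlock_irqrestore(&g_lock, flags);

  /* wd_start() takes the critical section, which is not compatible
   * with interrupts disabled, so start the watchdog only after the
   * lock is released and interrupts are restored.
   */

  wd_start(&g_wdog, MSEC2TICK(100), timeout_cb, 0);
}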
Signed-off-by: buxiasen <buxiasen@xiaomi.com>
In a multicore task scenario, a task may run on different cores in different time slices (when the task is not bound to a particular core).
When the task calls cache clean/flush with a range larger than the cache size, the optimization may turn this into clean_all/flush_all. However, at that point the cache of the core running the task may contain dirty or incomplete data, which may result in dirty data being flushed into memory, or make the application think its data was successfully flushed into memory when it was not, leading to unknown consequences.
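A minimal sketch of the optimization being described (the function and constant names are illustrative, not the actual arch code):

#include <stdint.h>

#define DCACHE_SIZE (64 * 1024)    /* Illustrative total dcache size */

void flush_dcache_range(uintptr_t start, uintptr_t end);
void flush_dcache_all(void);       /* Acts on the calling core only */

static void flush_dcache(uintptr_t start, uintptr_t end)
{
  if (end - start >= DCACHE_SIZE)
    {
      /* Cheaper than walking line by line, but this is the hazard:
       * after a migration, the current core's cache may hold dirty
       * or incomplete data for this task, so the shortcut can push
       * dirty data to memory or let the application believe its
       * range was flushed when it was not.
       */

      flush_dcache_all();
    }
  else
    {
      flush_dcache_range(start, end);
    }
}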
Signed-off-by: chenrun1 <chenrun1@xiaomi.com>