GRAN_ALIGNED should check that the memory block's alignment (log2align)
is correct, not that the memory block is aligned with the granule size.
This fixes DEBUGASSERT() in mm_granfree:
_assert: Assertion failed : at file: mm_gran/mm_granfree.c:49
The assertion triggers if granule size != alignment.
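A minimal sketch of the change, assuming the allocator state carries
both a log2gran and a log2align field (the field names here are
illustrative, not necessarily the exact NuttX definitions):
| /* Before: checked alignment against the granule size */
| #define GRAN_ALIGNED(g, addr) \
|   ((((uintptr_t)(addr)) & ((1u << (g)->log2gran) - 1)) == 0)
|
| /* After: check alignment against the configured block alignment */
| #define GRAN_ALIGNED(g, addr) \
|   ((((uintptr_t)(addr)) & ((1u << (g)->log2align) - 1)) == 0)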
This revises vm_map_region() to accept an unaligned paddr: the address
is aligned down before mapping, and the in-page offset is then added
back to the returned vaddr. It also moves vm_map_region() and
vm_unmap_region() to vm_region.c.
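A sketch of the unaligned-paddr handling; MM_PGSIZE and map_pages()
are placeholders for the actual page size and lower-level mapping
call, and the return convention is assumed from the description above:
| FAR void *vm_map_region(uintptr_t paddr, size_t size)
| {
|   uintptr_t offset = paddr & (MM_PGSIZE - 1);        /* in-page offset */
|   uintptr_t base   = paddr - offset;                 /* aligned down   */
|   FAR void *vaddr  = map_pages(base, size + offset); /* placeholder    */
|
|   /* Add the in-page offset back before returning to the caller */
|
|   return vaddr != NULL ? (FAR char *)vaddr + offset : NULL;
| }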
Signed-off-by: Yanfeng Liu <yfliu2008@qq.com>
This patch adds definitions to support user-space device mappings so
that devices like the frame buffer can be accessed from user space in
kernel build mode.
There are mainly two changes:
- in `mm/`:
  added vm_map_region() and vm_unmap_region() so that drivers can do
  device mappings easily (see the sketch after this list).
- in `arch/`:
  extended ARCH_SHM_NPAGES to define the user-space mapping region size.
  decoupled ARCH_SHM_MAXREGIONS from region size calculations and
  limited its usage to SysV shm purposes only.
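For illustration only, a driver could use the new helpers roughly as
follows; FB_PADDR and FB_SIZE are made-up frame buffer values:
| FAR void *uvaddr = vm_map_region(FB_PADDR, FB_SIZE);
| if (uvaddr != NULL)
|   {
|     /* ... expose uvaddr to the user application ... */
|
|     /* Tear the mapping down again when it is no longer needed */
|
|     vm_unmap_region(uvaddr, FB_SIZE);
|   }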
Signed-off-by: Yanfeng Liu <yfliu2008@qq.com>
Support zero-copy packet reception and jumbo-frame transmission and
reception in network interface card drivers: drivers may now attach
their DMA buffers directly to the iob structure, and IOBs with large
buffers can be allocated.
Signed-off-by: zhanghongyu <zhanghongyu@xiaomi.com>
This patch refactors the granule allocator to remove the 32-granule
limitation with the help of a gran_range_s structure and related
functions; see "mm_grantable.h" for details.
Below are the major functions explaining how this works:
- gran_match() checks whether a granule range is entirely in the given
  state. When matching a free range fails, it reports the position of
  the last mismatch.
- gran_search() tries to find the position of a free range. It
  leverages the last mismatch position from gran_match() to speed up
  the search.
Range size handling is mainly in gran_match() and gran_set_() (a
simplified model of the search follows below).
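A simplified model of how the two functions cooperate; the signatures
and the GRAN_FREE constant are illustrative, not the actual ones from
"mm_grantable.h":
| /* Sketch: locate a free range of 'width' granules */
|
| static int gran_search(FAR struct gran_s *gran, size_t width)
| {
|   size_t pos = 0;
|   size_t mismatch;
|
|   while (pos + width <= gran->ngranules)
|     {
|       if (gran_match(gran, pos, width, GRAN_FREE, &mismatch))
|         {
|           return (int)pos;        /* the whole range is free */
|         }
|
|       pos = mismatch + 1;         /* skip past the last used granule */
|     }
|
|   return -1;                      /* no free range large enough */
| }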
Signed-off-by: Yanfeng Liu <yfliu2008@qq.com>
- Add ARCH_KVMA_MAPPING to guard kernel mapping.
- Make MM_KMAP depend on ARCH_KVMA_MAPPING, as per commit 70de321de3.
Signed-off-by: Yanfeng Liu <yfliu2008@qq.com>
Extract global variable information using scripts:
kasan_global.py:
1. Extract the global variable information emitted by the
   --param asan-globals=1 compiler option
2. Generate shadow regions for out-of-bounds detection on global
   variables (see the sketch below)
Makefile:
1. Implement multiple link passes, embed the shadow region into the
   program, and have it used by the KASan module
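Conceptually, each instrumented global is described by a record of
roughly this shape (a simplified, assumed layout; the real ASan
metadata carries more fields), and the generated shadow region marks
the redzone behind each variable:
| /* Simplified model of the per-global record used to build shadow */
|
| struct kasan_global_record_s
| {
|   uintptr_t addr;          /* start address of the global variable */
|   size_t    size;          /* actual size of the variable          */
|   size_t    size_with_rz;  /* size including the trailing redzone  */
| };
|
| /* Accesses falling in [addr + size, addr + size_with_rz) hit the
|  * redzone shadow and are reported as global out-of-bounds errors.
|  */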
Signed-off-by: W-M-R <mike_0528@163.com>
In SMP mode, up_cpu_index() and this_cpu() are the same; both return
the index of the physical core.
In AMP mode, up_cpu_index() returns the index of the physical core,
while this_cpu() always returns 0:
| #ifdef CONFIG_SMP
| # define this_cpu() up_cpu_index()
| #elif defined(CONFIG_AMP)
| # define this_cpu() (0)
| #else
| # define this_cpu() (0)
| #endif
Signed-off-by: chao an <anchao@lixiang.com>
Depending on the configuration, these variables trigger
"variable 'ret' set but not used" warnings, as in the sketch below.
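The typical shape of the problem, with CONFIG_FEATURE_A and
CONFIG_FEATURE_B standing in for the real options involved: ret is
assigned under one option but only consumed under another, so some
configurations end up with a set-but-unused variable:
| int ret = 0;
|
| #ifdef CONFIG_FEATURE_A        /* hypothetical option */
|   ret = do_something();
| #endif
|
| #ifdef CONFIG_FEATURE_B        /* hypothetical option */
|   if (ret < 0)
|     {
|       return ret;
|     }
| #endif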
Signed-off-by: yinshengkai <yinshengkai@xiaomi.com>
ctc E246: ["map/mm_map.c" 67/41] left side of '.' or '->' is not struct or union
ctc E260: ["map/mm_map.c" 67/25] not an lvalue
ctc E246: ["map/mm_map.c" 80/3] left side of '.' or '->' is not struct or union
ctc E260: ["map/mm_map.c" 80/3] not an lvalue
Signed-off-by: chao an <anchao@lixiang.com>
After this, RISC-V fully supports the kmap interface.
Due to the current design limitation of having only a single L2 table
per process, the kernel kmap area cannot be mapped via any user page
directory, as user page directories do not contain the page tables to
address that range.
So a "kernel address environment" is added, which can do the mapping. The
mapping is reflected to every process as only the root page directory (L1)
is copied to users, which means every change to L2 / L3 tables will be
seen by every user.
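A sketch of why the sharing works, assuming the user root directory is
built by copying the kernel's L1 entries (index bounds and variable
names are made up):
| /* Only L1 entries are copied; they point at the kernel's own L2/L3
|  * tables, so those tables are shared rather than duplicated.  Any
|  * later kmap change inside them is visible to every process.
|  */
|
| for (i = KERNEL_L1_FIRST; i < L1_NENTRIES; i++)   /* assumed bounds */
|   {
|     user_l1[i] = kernel_l1[i];
|   }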
Mapping a physical page to a kernel virtual page is very simple and
does not need the kernel vma list; just get the kernel-addressable
virtual address for the page.
is_kmap_vaddr is added and used to test that a given (v)addr is actually
inside the kernel map area. This gives a speed optimization for kmm_unmap,
as it is no longer necessary to take the mm_map_lock to check if such a
mapping exists; obviously if the address is not within the kmap area, it
won't be in the list either.
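A sketch of the check; KMAP_BASE and KMAP_SIZE stand for however the
kernel map area bounds are actually defined:
| static bool is_kmap_vaddr(uintptr_t vaddr)
| {
|   return vaddr >= KMAP_BASE && vaddr < KMAP_BASE + KMAP_SIZE;
| }
With this, kmm_unmap can return early for any address outside this
window without ever taking mm_map_lock.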
User pages are mapped from the currently active address environment. If
the process is running on a borrowed address environment, then the
mapping should be created from there.
This happens during (new) process creation only.
The RNDIS header length is 36 bytes, the L2 header is 14 bytes, the
IPv6 header is 40 bytes, and the TCP header is 56 bytes when the SACK
option count is 4 (the default max_ofosegs is 4), so the IOB bufsize
should be larger than the total header overhead we need.
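Summing the headers gives the minimum overhead an IOB must hold in
front of the payload:
| 36 (RNDIS) + 14 (L2) + 40 (IPv6) + 56 (TCP, 4 SACK blocks) = 146 bytes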
Signed-off-by: zhanghongyu <zhanghongyu@xiaomi.com>
Modification based on the Open Group's description of shmget(): "When
the shared memory segment is created, it shall be initialized with all
zero values."
Link to the documentation page for shmget:
https://pubs.opengroup.org/onlinepubs/9699919799/
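A minimal sketch of the intended behavior; the page pointer and count
below are placeholders for however the segment's memory is actually
reached in the shm code:
| /* Zero the newly created segment so the user observes all-zero
|  * contents, as required by POSIX for shmget().
|  */
|
| memset(pages, 0, npages * MM_PGSIZE);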