Most tools used for compliance and SBOM generation use SPDX identifiers.
This change brings us a step closer to easy SBOM generation.
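For example, compliance tooling can then pick up a single
machine-readable tag at the top of each source file (Apache-2.0 being
the NuttX license):
```
/* SPDX-License-Identifier: Apache-2.0 */
```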
Signed-off-by: Alin Jerpelea <alin.jerpelea@sony.com>
Fragmentation of network data increases iob consumption: if
the device receives a storm of fragmented packets, the iob
cache is not used effectively. This is not acceptable on IoT devices,
since the resources of such devices are limited. Of course, this
also has a disadvantage: data needs to be copied.
This option brings some balance to resource-constrained devices;
enable this config to reduce iob consumption by merging the received
iob buffers into a contiguous iob chain.
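As a rough illustration, here is a minimal, self-contained model of
the merge (the struct and helper are simplified stand-ins for NuttX's
struct iob_s, not the real API):
```
#include <stdint.h>
#include <string.h>

#define IOB_BUFSIZE 196

struct iob
{
  struct iob *flink;            /* Next buffer in the chain */
  uint16_t    len;              /* Valid bytes in this buffer */
  uint8_t     data[IOB_BUFSIZE];
};

/* Pack a fragmented chain: copy data forward so that every buffer
 * except the last is completely full, then release the buffers that
 * end up empty.  The cost is the extra copy; the gain is that a storm
 * of small fragments no longer pins one iob per fragment.
 */

static struct iob *iob_pack_chain(struct iob *head,
                                  void (*release)(struct iob *))
{
  struct iob *dst = head;

  while (dst != NULL && dst->flink != NULL)
    {
      struct iob *src = dst->flink;
      uint16_t avail  = IOB_BUFSIZE - dst->len;
      uint16_t ncopy  = src->len < avail ? src->len : avail;

      /* Pull bytes from the next buffer into this buffer's tail space */

      memcpy(dst->data + dst->len, src->data, ncopy);
      dst->len += ncopy;

      /* Shift the remainder (if any) to the front of the source */

      memmove(src->data, src->data + ncopy, src->len - ncopy);
      src->len -= ncopy;

      if (src->len == 0)
        {
          /* Source drained: unlink and release it */

          dst->flink = src->flink;
          release(src);
        }
      else
        {
          /* This buffer is now full; continue from the next one */

          dst = src;
        }
    }

  return head;
}
```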
Signed-off-by: chao an <anchao@xiaomi.com>
since it is impossible to track producer and consumer
correctly if TCP/IP stack pass IOB directly to netdev
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
Do not bother to preserve segment boundaries in the TCP
read-ahead queues.
* Avoid wasting the tail IOB space for each segment.
Instead, pack the newly received data into the tail space
of the last IOB, and advertise that tail space as
part of the window. (A sketch of this packing appears
after this list.)
* Use the IOB chain directly, eliminating the IOB queue overhead.
* Allow accepting only part of a segment.
* This change improves memory efficiency.
Probably more importantly, it allows less confusing
recv window advertisement behavior.
Previously, even when we advertised an N-byte window,
we often couldn't actually accept N bytes. Depending on
the segment sizes and the IOB configuration, this was
causing segment drops.
Also, the previous code moved the right edge of the
window back and forth too often, even when nothing in
the system was competing for the IOBs. Shrinking the
window that way is a well-known recipe for confusing
the peer stack.
* Move the code to advance rcvseq for user data from tcp_input
to receive handlers.
Motivation: allow partial ack.
* If we drop a segment, ignore FIN as well. Note that the TCP FIN bit
is logically after the user data in the same segment.
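A minimal, self-contained sketch of the packing idea (the struct and
names are illustrative stand-ins, not NuttX's struct iob_s or its
API):
```
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BUFSIZE 196

struct rbuf
{
  struct rbuf *next;
  uint16_t     len;
  uint8_t      data[BUFSIZE];
};

/* Append up to seglen bytes of a received segment to the read-ahead
 * chain, filling the tail space of the last queued buffer before
 * chaining fresh ones.  Returns how many bytes were accepted; the
 * receive handler advances rcvseq only by that amount, so accepting
 * a partial segment is safe and the rest is simply retransmitted.
 */

static size_t readahead_append(struct rbuf *tail,
                               struct rbuf *(*alloc)(void),
                               const uint8_t *seg, size_t seglen)
{
  size_t accepted = 0;

  while (accepted < seglen)
    {
      size_t room = BUFSIZE - tail->len;

      if (room == 0)
        {
          struct rbuf *fresh = alloc();

          if (fresh == NULL)
            {
              break;            /* Out of buffers: partial accept */
            }

          fresh->next = NULL;
          fresh->len  = 0;
          tail->next  = fresh;
          tail        = fresh;
          room        = BUFSIZE;
        }

      size_t ncopy = seglen - accepted < room ? seglen - accepted : room;

      memcpy(tail->data + tail->len, seg + accepted, ncopy);
      tail->len += ncopy;
      accepted  += ncopy;
    }

  return accepted;
}
```
Advertising the tail space as part of the window is then honest: as
long as IOBs remain, the stack can always accept what it advertised.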
I assume this was just an oversight because I couldn't
find any obvious reason to special-case only the first IOB.
The commit message of the original commit is cited below.
```
commit bf21056001
Author: chao.an <anchao@xiaomi.com>
Date: Fri Nov 27 09:50:38 2020 +0800
net/tcp: fallback to unthrottle pool to avoid deadlock
Add a fallback mechanism to ensure that there are still available
iobs for an free connection, Guarantees all connections will have
a minimum threshold iob to keep the connection not be hanged.
Change-Id: I59bed98d135ccd1f16264b9ccacdd1b0d91261de
Signed-off-by: chao.an <anchao@xiaomi.com>
```
Add a fallback mechanism to ensure that there are still available
iobs for a free connection, guaranteeing that every connection keeps
a minimum threshold of iobs so that the connection does not hang.
Change-Id: I59bed98d135ccd1f16264b9ccacdd1b0d91261de
Signed-off-by: chao.an <anchao@xiaomi.com>
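As a rough sketch of that fallback (the pool model, names, and
threshold bookkeeping here are illustrative, not the actual NuttX
code):
```
#include <stdbool.h>
#include <stdlib.h>

#define CONN_MIN_IOBS 2          /* Guaranteed minimum per connection */

static int g_nfree    = 16;      /* Total free IOBs (model) */
static int g_throttle = 4;       /* Reserve withheld from throttled users */

struct iob { struct iob *next; };

struct tcp_conn
{
  int nqueued;                   /* IOBs currently held by this conn */
};

/* Model of a two-pool allocator: a throttled allocation may not
 * consume the last g_throttle buffers; an unthrottled one may.
 */

static struct iob *pool_tryalloc(bool throttled)
{
  int reserve = throttled ? g_throttle : 0;

  if (g_nfree <= reserve)
    {
      return NULL;
    }

  g_nfree--;
  return malloc(sizeof(struct iob));
}

static struct iob *conn_tryalloc(struct tcp_conn *conn)
{
  /* Normal case: take from the throttled pool, so bulk receivers
   * cannot starve other IOB users.
   */

  struct iob *iob = pool_tryalloc(true);

  if (iob == NULL && conn->nqueued < CONN_MIN_IOBS)
    {
      /* The throttled pool is exhausted but this connection holds
       * less than its guaranteed minimum: fall back to the
       * unthrottled reserve so the connection does not hang.
       */

      iob = pool_tryalloc(false);
    }

  if (iob != NULL)
    {
      conn->nqueued++;
    }

  return iob;
}
```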
Here is the email thread discussing why it is better to remove the option:
https://groups.google.com/forum/#!topic/nuttx/AaNkS7oU6R0
Change-Id: Ib66c037752149ad4b2787ef447f966c77aa12aad
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
* Simplify EINTR/ECANCELED error handling (see the sketch after this list)
1. Add a semaphore uninterruptible wait function
2. Replace the semaphore wait loops with a single uninterruptible wait
3. Replace all sem_xxx calls with nxsem_xxx
* Unify the void cast usage
1. Remove void casts on function calls, since many places ignore the returned value without a cast
2. Replace void casts on variables with the UNUSED macro
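The shape of the new uninterruptible wait helper is roughly as
follows (a sketch; nxsem_wait() returns a negated errno value rather
than setting the global errno):
```
#include <errno.h>
#include <nuttx/semaphore.h>

int nxsem_wait_uninterruptible(FAR sem_t *sem)
{
  int ret;

  do
    {
      /* -EINTR means the wait was interrupted by a signal: retry
       * here, so every caller no longer needs its own EINTR loop.
       */

      ret = nxsem_wait(sem);
    }
  while (ret == -EINTR);

  return ret;                   /* OK, or a real error such as -ECANCELED */
}
```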
Author: Gregory Nutt <gnutt@nuttx.org>
Run all .h and .c files modified in the last PR through nxstyle.
Author: Xiang Xiao <xiaoxiang@xiaomi.com>
Net cleanup (#17)
* Fix the semaphore usage issues found in tcp/udp (see the sketch after this list)
1. A counting semaphore needs priority inheritance disabled
2. Loop again if net_lockedwait returns -EINTR
3. Call nxsem_trywait to avoid the race condition
4. Call nxsem_post instead of sem_post
* Put the work notifier into a free list to avoid heap fragmentation in the long run. Since the allocation strategy is encapsulated internally, we can refine the implementation later.
* The network stack shouldn't allocate memory in the poll implementation, to avoid heap fragmentation in the long run. Other modifications include:
1. Select MM_IOB automatically, since the ICMP[v6] socket can't work without the read-ahead buffer
2. Remove the net lock, since xxx_callback_free already does the same thing
3. TCP/UDP poll should work even if the read-ahead buffer isn't enabled at all
* Add the NET_ prefix to the UDP_NOTIFIER and TCP_NOTIFIER options to align with the other UDP/TCP option naming conventions
* Remove the unused _SF_[IDLE|ACCEPT|SEND|RECV|MASK] flags: there is code to set/clear these flags, but nobody checks them.
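For point 1 of the semaphore fixes, the pattern looks roughly like
this (a sketch assuming the nxsem_set_protocol()/SEM_PRIO_NONE
interface): a semaphore used for counting/signaling has no meaningful
holder to boost, so priority inheritance must be disabled on it.
```
#include <nuttx/semaphore.h>

static sem_t g_readsem;

static void readsem_init(void)
{
  /* Initialized to zero: this is a signaling semaphore, not a lock */

  nxsem_init(&g_readsem, 0, 0);

  /* A counting semaphore has no meaningful holder, so priority
   * inheritance must not be applied to it.
   */

  nxsem_set_protocol(&g_readsem, SEM_PRIO_NONE);
}
```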
Iobinstrumentation
* mm/iob: Introduces a producer/consumer id on every iob call so that the calls can be instrumented to monitor IOB resources. (A simplified model follows below.)
* iob instrumentation - Merges the producer/consumer enumerations into a single, simpler IOB user enumeration.
* fs/procfs: Starts adding support for /proc/iobinfo
* fs/procfs: Finishes the first pass of simple per-user IOB statistics and the /proc/iobinfo entry
Approved-by: Gregory Nutt <gnutt@nuttx.org>
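A simplified model of the instrumentation idea (the enum values and
counter layout here are illustrative, not the actual NuttX
definitions): every alloc/free call is tagged with a user id so that
/proc/iobinfo can report which subsystem holds the buffers.
```
#include <stdint.h>

enum iob_user_e
{
  IOBUSER_TCP_READAHEAD = 0,    /* Illustrative user ids */
  IOBUSER_UDP_READAHEAD,
  IOBUSER_NETDEV,
  IOBUSER_NUSERS
};

struct iob_userstats_s
{
  uint32_t totalconsumed;       /* Buffers ever taken by this user */
  uint32_t totalproduced;       /* Buffers ever returned by this user */
};

static struct iob_userstats_s g_iobstats[IOBUSER_NUSERS];

static void iob_stats_consume(enum iob_user_e id)
{
  g_iobstats[id].totalconsumed++;
}

static void iob_stats_produce(enum iob_user_e id)
{
  g_iobstats[id].totalproduced++;
}
```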
Squashed commit of the following:
net/udp: Add signal notification for the case when UDP read-ahead data is buffered. This is basically a clone of the TCP notification logic with naming adapted for UDP.
net/tcp: Add signal notification for the case when TCP read-ahead data is buffered.
This was fixed by duplicating most of the IOB interfaces: the versions that wait are still present (like iob_alloc()), but now there are non-waiting versions of the same interfaces (like iob_tryalloc()). The TCP read-ahead logic now uses only the non-waiting interfaces.
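A sketch of the resulting pattern, assuming the classic one-argument
iob_tryalloc() and matching iob_trycopyin() signatures: code paths
that must not block allocate with the try-variants and treat a NULL
result as "drop the data and let retransmission recover it".
```
#include <nuttx/config.h>
#include <errno.h>
#include <nuttx/mm/iob.h>

static int readahead_store(FAR const uint8_t *buf, uint16_t buflen)
{
  FAR struct iob_s *iob;
  int ret;

  /* Non-waiting allocation: returns NULL immediately instead of
   * blocking, so it is safe while holding the network lock.
   */

  iob = iob_tryalloc(true);
  if (iob == NULL)
    {
      /* No IOB right now: drop the data and rely on TCP
       * retransmission rather than risking a deadlock.
       */

      return -ENOMEM;
    }

  /* Copy the payload in with the non-waiting variant as well */

  ret = iob_trycopyin(iob, buf, buflen, 0, true);
  if (ret < 0)
    {
      iob_free(iob);
      return ret;
    }

  /* ... queue the chain and notify any waiting reader ... */

  return OK;
}
```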