Commit Graph

52 Commits

Alin Jerpelea
67d02a45eb net: migrate to SPDX identifier
Most tools used for compliance and SBOM generation use SPDX identifiers.
This change brings us a step closer to easy SBOM generation.

Signed-off-by: Alin Jerpelea <alin.jerpelea@sony.com>
2024-09-12 01:08:11 +08:00
Ville Juven
9cd8ea32d1 net/xx/wrbuffer: Do not use SEM_INITIALIZER for buffers
Using the macro places the buffers into the .data section, which means they
will consume the full buffer size of flash / read-only memory as well.

Place the buffers into .bss to avoid this.
2023-08-25 00:02:07 +08:00
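A minimal C sketch of the section-placement point in the commit above; the structure, pool size, and names are illustrative, not the actual NuttX write-buffer pool.

#include <semaphore.h>

struct wrbuffer_s
{
  sem_t sem;
  char  payload[512];
};

/* Initializing the pool with SEM_INITIALIZER makes it a non-zero
 * initialized object, so the toolchain places the whole array in
 * .data: the full buffer size is stored in flash and copied into RAM
 * at startup, e.g.
 *
 *   static struct wrbuffer_s g_pool[8] =
 *   {
 *     { SEM_INITIALIZER(1), "" }, ...
 *   };
 *
 * Leaving the pool uninitialized places it in .bss instead (zero
 * filled at boot, no flash image), and the semaphores are set up at
 * runtime.
 */

static struct wrbuffer_s g_pool[8];

void wrbuffer_initialize(void)
{
  for (int i = 0; i < 8; i++)
    {
      sem_init(&g_pool[i].sem, 0, 1);
    }
}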
liqinhui
f61dc72892 net/tcp: Add NewReno congestion control.
- The NewReno congestion control algorithm is used to address
  network congestion collapse. NewReno congestion control includes
  slow start, congestion avoidance, fast retransmission, and fast
  recovery. The implementation refers to RFC6582 and RFC5681.

- In addition, we optimize the congestion algorithm. In the congestion
  avoidance stage, the maximum congestion window max_cwnd is used to
  limit the excessive growth of cwnd and prevent network jitter
  caused by congestion. The maximum congestion window max_cwnd is
  updated from the current congestion window cwnd, with an update
  weight of 0.875, when an RTO timeout occurs.

Signed-off-by: liqinhui <liqinhui@xiaomi.com>
2023-05-16 12:35:01 -03:00
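A small C sketch of the max_cwnd update described above: a 0.875/0.125 weighted average applied on RTO timeout. The fixed-point form and the helper name are assumptions, not the exact net/tcp code.

#include <stdint.h>

/* On an RTO timeout, fold the current cwnd into max_cwnd with a
 * weight of 0.875 for the old value and 0.125 for the new one:
 *
 *   max_cwnd = 0.875 * max_cwnd + 0.125 * cwnd
 *
 * which in integer arithmetic is (7 * max_cwnd + cwnd) / 8.
 */

static uint32_t update_max_cwnd(uint32_t max_cwnd, uint32_t cwnd)
{
  return (7 * max_cwnd + cwnd) >> 3;
}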
Xiang Xiao
a851ad84c3 net: make the net sem wait naming convention consistent
to prepare for the new mutex wait function

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2023-01-15 12:31:30 -03:00
anjiahao
d07792a343 Initialize global mutex/sem with NXMUTEX_INITIALIZER and SEM_INITIALIZER
Signed-off-by: anjiahao <anjiahao@xiaomi.com>
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-11-14 09:34:04 +09:00
anjiahao
5724c6b2e4 sem: remove sem default protocol
Signed-off-by: anjiahao <anjiahao@xiaomi.com>
2022-10-22 14:50:48 +08:00
Xiang Xiao
40ef5bc6db libc: Move queue.h from include to include/nuttx
to avoid the conflict with libuv's queue.h

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-09-26 08:04:58 +02:00
Xiang Xiao
ba9486de4a iob: Remove iob_user_e enum and related code
since it is impossible to track the producer and consumer
correctly if the TCP/IP stack passes IOBs directly to the netdev

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-08-15 08:41:20 +03:00
zhanghongyu
c50d7e174f net: tcp/udp/icmp/icmpv6 add FIONSPACE support
Signed-off-by: zhanghongyu <zhanghongyu@xiaomi.com>
2022-04-02 13:39:38 +08:00
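A brief usage sketch of the FIONSPACE ioctl this commit adds for tcp/udp/icmp/icmpv6 sockets; it assumes the usual int-pointer argument convention and that sockfd is an already created socket.

#include <stdio.h>
#include <sys/ioctl.h>

/* Ask how many bytes can still be queued for sending on the socket
 * without blocking.
 */

static int query_send_space(int sockfd)
{
  int nspace = 0;

  if (ioctl(sockfd, FIONSPACE, &nspace) < 0)
    {
      perror("ioctl(FIONSPACE)");
      return -1;
    }

  printf("free send space: %d bytes\n", nspace);
  return nspace;
}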
Xiang Xiao
7598070508 net: Remove the unnecessary initialization code
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-03-12 19:24:17 +02:00
chao.an
1911ae2192 net/tcp: add interface tcp_wrbuffer_timedalloc()
add a new interface to support allocating a write buffer with a timeout

Signed-off-by: chao.an <anchao@xiaomi.com>
2022-02-17 21:27:39 +01:00
Alexander Lunev
2a6de301ee net/tcp: transformed the NET_TCP_FAST_RETRANSMIT_WATERMARK option into a boolean.
According to RFC 5681 (3.2) the TCP Fast Retransmit algorithm should start
if the threshold of 3 duplicate ACKs is reached.
Thus the threshold should be a constant, not an integer option.
2022-01-26 11:50:48 +08:00
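A tiny sketch of the point above: since RFC 5681 fixes the duplicate-ACK threshold at 3, it is expressed as a compile-time constant rather than a tunable integer. The names are illustrative.

/* RFC 5681, section 3.2: fast retransmit starts after three
 * duplicate ACKs, so the threshold is a constant, not a Kconfig
 * integer option.
 */

#define TCP_FAST_RETRANSMIT_THRESH 3

static int should_fast_retransmit(int dupacks)
{
  return dupacks >= TCP_FAST_RETRANSMIT_THRESH;
}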
YAMAMOTO Takashi
09f3a1ec8e tcp_send_buffered: throttle IOB allocations for send
Consider a bi-directional TCP connection:

1. we use all IOBs for tx queue
2. we advertise a zero recv window because we have no free IOBs
3. if the peer tcp does the same thing,
   both sides advertise a zero window and cannot drain the tx queue.

For a similar stall to happen, the peer doesn't need to be
a naive tcp implementation like nuttx. A naive application blocking
on send() without draining its read buffer is enough.
(Probably such an application should be fixed to drain rx even
when tx is full. However, it's another story.)

This commit avoids the situation by preventing tx from grabbing
all the IOBs in the first place (assuming CONFIG_IOB_THROTTLE > 0).
2021-07-14 15:08:18 +08:00
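A simplified model of the throttling idea in the commit above: TX allocations come from a throttled pool that keeps CONFIG_IOB_THROTTLE buffers in reserve for RX, so the send path can never consume the last IOBs. The pool structure and function are illustrative, not the real mm/iob API.

#include <stdbool.h>

#define IOB_THROTTLE 4            /* stand-in for CONFIG_IOB_THROTTLE */

struct iob_pool_s
{
  int navail;                     /* number of free IOBs */
};

/* A throttled allocation (TCP send path) refuses to dip into the
 * last IOB_THROTTLE buffers; an unthrottled allocation (receive
 * path) may use them.  The RX side can therefore always accept data
 * and advertise a non-zero window, breaking the mutual stall
 * described above.
 */

static bool iob_try_alloc(struct iob_pool_s *pool, bool throttled)
{
  int reserve = throttled ? IOB_THROTTLE : 0;

  if (pool->navail > reserve)
    {
      pool->navail--;
      return true;
    }

  return false;
}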
Alin Jerpelea
b3ad98c89a net: update licenses to Apache
Gregory Nutt is the copyright holder for these files and he has submitted the
SGA; as a result we can migrate the licenses to Apache.

Signed-off-by: Alin Jerpelea <alin.jerpelea@sony.com>
2021-05-27 08:07:25 +09:00
chao.an
c2b0006dcd net/tcp: implement the fast retransmit
RFC2001: TCP Slow Start, Congestion Avoidance, Fast Retransmit,
         and Fast Recovery Algorithms

...

3.  Fast Retransmit
  Modifications to the congestion avoidance algorithm were proposed in
  1990 [3].  Before describing the change, realize that TCP may
  generate an immediate acknowledgment (a duplicate ACK) when an out-
  of-order segment is received (Section 4.2.2.21 of [1], with a note
  that one reason for doing so was for the experimental fast-
  retransmit algorithm).  This duplicate ACK should not be delayed.
  The purpose of this duplicate ACK is to let the other end know that a
  segment was received out of order, and to tell it what sequence
  number is expected.

  Since TCP does not know whether a duplicate ACK is caused by a lost
  segment or just a reordering of segments, it waits for a small number
  of duplicate ACKs to be received.  It is assumed that if there is
  just a reordering of the segments, there will be only one or two
  duplicate ACKs before the reordered segment is processed, which will
  then generate a new ACK.  If three or more duplicate ACKs are
  received in a row, it is a strong indication that a segment has been
  lost.  TCP then performs a retransmission of what appears to be the
  missing segment, without waiting for a retransmission timer to
  expire.

Change-Id: Ie2cbcecab507c3d831f74390a6a85e0c5c8e0652
Signed-off-by: chao.an <anchao@xiaomi.com>
2020-12-01 11:36:10 -06:00
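A condensed sketch of the duplicate-ACK accounting the RFC text above describes; the connection structure and checks are simplified (a real implementation also requires that the duplicate carries no data and no window update).

#include <stdbool.h>
#include <stdint.h>

#define DUPACK_THRESH 3

struct conn_s
{
  uint32_t snd_una;               /* oldest unacknowledged sequence number */
  int      dupacks;               /* consecutive duplicate ACK count */
};

/* Called for each incoming ACK.  An ACK that does not advance
 * snd_una counts as a duplicate; three in a row are taken as loss,
 * and the caller retransmits the presumed-missing segment without
 * waiting for the retransmission timer.
 */

static bool on_ack(struct conn_s *conn, uint32_t ackno)
{
  if (ackno == conn->snd_una)
    {
      if (++conn->dupacks >= DUPACK_THRESH)
        {
          conn->dupacks = 0;
          return true;            /* fast retransmit snd_una now */
        }
    }
  else
    {
      conn->dupacks = 0;          /* new data acknowledged */
      conn->snd_una = ackno;
    }

  return false;
}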
Gregory Nutt
a569006fd8 sched/: Make more naming consistent
Rename various functions per the guidelines of https://cwiki.apache.org/confluence/display/NUTTX/Naming+of+OS+Internal+Functions

    nxsem_setprotocol -> nxsem_set_protocol
    nxsem_getprotocol -> nxsem_get_protocol
    nxsem_getvalue -> nxsem_get_value
2020-05-17 14:01:00 -03:00
Xiang Xiao
5c5c08efcd network: simplify the timeout process logic
1. Consolidate the absolute-to-relative timeout conversion into one place (_net_timedwait)
2. Drive the wait timeout logic by net_timedwait instead of devif_timer
This patch helps us remove devif_timer (periodic tick) to save power in the future.

Change-Id: I534748a5d767ca6da8a7843c3c2f993ed9ea77d4
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2020-01-11 08:24:49 -06:00
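A rough sketch of the absolute-to-relative timeout conversion that this change consolidates into one place; the helper name is made up, and in the real code the conversion lives inside _net_timedwait.

#include <time.h>

/* Convert an absolute CLOCK_REALTIME deadline into the remaining
 * number of milliseconds, clamped to zero if the deadline has
 * already passed.  A single conversion point like this lets the
 * wait be driven by a plain relative timeout instead of the
 * periodic devif_timer tick.
 */

static long abstime_to_relative_msec(const struct timespec *abstime)
{
  struct timespec now;
  long msec;

  clock_gettime(CLOCK_REALTIME, &now);

  msec  = (abstime->tv_sec - now.tv_sec) * 1000;
  msec += (abstime->tv_nsec - now.tv_nsec) / 1000000;

  return msec > 0 ? msec : 0;
}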
Xiang Xiao
90c52e6f8f Squashed commit of the following:
Author: Gregory Nutt <gnutt@nuttx.org>

    Run all .h and .c files modified in last PR through nxstyle.

Author: Xiang Xiao <xiaoxiang@xiaomi.com>

    Net cleanup (#17)

    * Fix the semaphore usage issue found in tcp/udp

    1. The counting semaphore needs priority inheritance disabled
    2. Loop again if net_lockedwait returns -EINTR
    3. Call nxsem_trywait to avoid the race condition
    4. Call nxsem_post instead of sem_post

    * Put the work notifiers into a free list to avoid heap fragmentation in the long run.  Since the allocation strategy is encapsulated internally, we can even refine the implementation later.

    * The network stack shouldn't allocate memory in the poll implementation, to avoid heap fragmentation in the long run; other modifications include:

    1. Select MM_IOB automatically since ICMP[v6] sockets can't work without the read-ahead buffer
    2. Remove the net lock since xxx_callback_free already does the same thing
    3. TCP/UDP poll should work even if the read-ahead buffer isn't enabled at all

    * Add NET_ prefix for UDP_NOTIFIER and TCP_NOTIFIER option to align with other UDP/TCP option convention

    * Remove the unused _SF_[IDLE|ACCEPT|SEND|RECV|MASK] flags since there is code to set/clear these flags, but nobody checks them.
2019-12-31 09:26:14 -06:00
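A small sketch of the -EINTR retry rule from the tcp/udp semaphore fixes above. The net_lockedwait() prototype is assumed from the commit text (it returns a negated errno); the wrapper name is illustrative.

#include <errno.h>
#include <semaphore.h>

/* Assumed prototype: waits on a semaphore while managing the network
 * lock and returns 0 or a negated errno value.
 */

int net_lockedwait(sem_t *sem);

/* Take a counting semaphore from network code, retrying when the
 * wait is interrupted by a signal: -EINTR is not a real failure
 * here, it just means "wait again".
 */

static int take_buffer_sem(sem_t *sem)
{
  int ret;

  do
    {
      ret = net_lockedwait(sem);
    }
  while (ret == -EINTR);

  return ret;
}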
Anthony Merlino
70404ed0dc Merged in antmerlino/nuttx/iobinstrumentation (pull request #1001)
Iobinstrumentation

* mm/iob: Introduces producer/consumer id to every iob call. This is so that the calls can be instrumented to monitor the IOB resources.

* iob instrumentation - Merges producer/consumer enumeration for simpler IOB user.

* fs/procfs: Starts adding support for /proc/iobinfo

* fs/procfs: Finishes first pass of simple IOB user statistics and the /proc/iobinfo entry

Approved-by: Gregory Nutt <gnutt@nuttx.org>
2019-08-16 22:42:25 +00:00
Gregory Nutt
f6b00e1966 tools/nxstyle.c: Fix logic error that prevented detection of '/' and '/=' as operators. net/: Minor updates resulting from testing tools/nxstyle. 2019-03-11 12:48:39 -06:00
Harri Luhtala
2d0f1b85e3 net/tcp/tcp_wrbuffer.c: fix buffer release handling on failed buffer alloc. Attempting to release the write buffer after a failed TCP write I/O buffer alloc/tryalloc led to a wrb->wb_iob assertion. 2018-09-25 07:02:04 -06:00
Gregory Nutt
427b3b8fcb Squashed commit of the following:
net/utils:  return from net_breaklock() was being clobbered.
    net/:  Replace all calls to iob_alloc() with calls to net_ioballoc() which will release the network lock, if necessary.
    net/utils, tcp, include/net:  Separate out the special IOB allocation logic and place it in its own function.  Prototype is available in a public header file where it can also be used by network drivers.
    net/utils: net_timedwait() now uses new net_breaklock() and net_restorelock().
2018-07-07 08:26:13 -06:00
Gregory Nutt
75cc19ebb4 net/tcp: Fix a deadlock condition that can occur when (1) all network logic runs on a single work queue, (2) TCP write buffering is enabled, and (3) we run out of IOBs. In this case, the TCP write buffering logic was blocking on iob_alloc() with the network locked. Since the network was locked, the device driver polls that would take the write buffer data and release the IOBs could not execute. This fixes the problem by releasing the network lock while waiting for the IOBs. 2018-07-06 19:49:05 -06:00
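A schematic sketch of the fix described above, using a pthread mutex and a counting semaphore as stand-ins for the network lock and the IOB pool: the lock is dropped before the potentially long wait so the driver polls that free IOBs can run, and re-taken afterwards.

#include <pthread.h>
#include <semaphore.h>

static pthread_mutex_t g_netlock = PTHREAD_MUTEX_INITIALIZER;
static sem_t           g_iob_avail;     /* counts free IOBs */

void iob_pool_init(unsigned int niobs)
{
  sem_init(&g_iob_avail, 0, niobs);
}

/* Called with g_netlock held.  Never sleep for an IOB while the
 * network is locked: release the lock so the code that frees IOBs
 * (the driver poll in the real stack) can make progress, wait, then
 * re-acquire the lock before touching connection state.
 */

void reserve_iob_unlocked(void)
{
  pthread_mutex_unlock(&g_netlock);     /* unlock the network */
  sem_wait(&g_iob_avail);               /* wait for a free IOB */
  pthread_mutex_lock(&g_netlock);       /* re-lock before continuing */
}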
Gregory Nutt
23a8af2069 Trivial update to some comments. 2018-07-06 17:37:26 -06:00
Pelle Windestam
303e6499bd net/tcp: Extended support for sending to non-blocking tcp sockets. 2018-04-20 07:37:51 -06:00
Gregory Nutt
2c17bf21af net/tcp: Missing header file inclusion in tcp_wrbuffer.c 2018-02-14 09:23:55 -06:00
Alan Carvalho de Assis
fb50c44d08 Fix various issues noted by Coverity 2018-02-06 09:13:16 -06:00
Gregory Nutt
7cf88d7dbd Make sure that labeling is used consistently in all function headers. 2018-02-01 10:00:02 -06:00
Gregory Nutt
9568600ab1 Squashed commit of the following:
This commit backs out most of commit b4747286b1.  That change was added because sem_wait() would sometimes create cancellation points inappropriately.  But with these recent changes, nxsem_wait() is used instead and it is not a cancellation point.

    In the OS, all calls to sem_wait() changed to nxsem_wait().  nxsem_wait() does not return errors via errno so each place where nxsem_wait() is now called must not examine the errno variable.

    In all OS functions (not libraries), change sem_wait() to nxsem_wait().  This will prevent the OS from creating bogus cancellation points and from modifying the per-task errno variable.

    sched/semaphore:  Add the function nxsem_wait().  This is a new internal OS interface.  It is functionally equivalent to sem_wait() except that (1) it is not a cancellation point, and (2) it does not set the per-thread errno value on return.
2017-10-04 15:22:27 -06:00
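A short sketch contrasting the two conventions described above: inside the OS, nxsem_wait() is not a cancellation point and returns a negated errno, while application code keeps using sem_wait() with errno. The wrapper names are illustrative; nxsem_wait() is the interface the commit adds.

#include <errno.h>
#include <semaphore.h>
#include <nuttx/semaphore.h>     /* nxsem_wait() prototype (OS-internal) */

/* OS-internal code: not a cancellation point, errno untouched. */

static int os_take(sem_t *sem)
{
  return nxsem_wait(sem);        /* 0 on success, e.g. -EINTR on signal */
}

/* Application code: may be a cancellation point, errors via errno. */

static int app_take(sem_t *sem)
{
  if (sem_wait(sem) < 0)
    {
      return -errno;             /* e.g. -EINTR */
    }

  return 0;
}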
Gregory Nutt
42a0796615 Squashed commit of the following:
sched/semaphore:  Add nxsem_post() which is identical to sem_post() except that it never modifies the errno variable.  Changed all references to sem_post in the OS to nxsem_post().

    sched/semaphore:  Add nxsem_destroy() which is identical to sem_destroy() except that it never modifies the errno variable.  Changed all references to sem_destroy() in the OS to nxsem_destroy().

    libc/semaphore and sched/semaphore:  Add nxsem_getprotocol() and nxsem_setprotocol() which are identical to sem_getprotocol() and sem_setprotocol() except that they never modify the errno variable.  Changed all references to sem_setprotocol in the OS to nxsem_setprotocol().  sem_getprotocol() was not used in the OS.
2017-10-03 15:35:24 -06:00
Gregory Nutt
83cdb0c552 Squashed commit of the following:
libc/semaphore:  Add nxsem_getvalue() which is identical to sem_getvalue() except that it never modifies the errno variable.  Changed all references to sem_getvalue in the OS to nxsem_getvalue().

    sched/semaphore:  Rename all internal private functions from sem_xyz to nxsem_xyz.  The sem_ prefix is (will be) reserved only for the application semaphore interfaces.

    libc/semaphore:  Add nxsem_init() which is identical to sem_init() except that it never modifies the errno variable.  Changed all references to sem_init in the OS to nxsem_init().

    sched/semaphore:  Rename sem_tickwait() to nxsem_tickwait() so that it is clear this is an internal OS function.

    sched/semaphore:  Rename sem_reset() to nxsem_reset() so that it is clear this is an internal OS function.
2017-10-03 12:52:31 -06:00
Jussi Kivilinna
547733cbb0 Update net_timedwait() and net_lockedwait() call sites to handle negated errno in return value 2017-09-04 07:56:51 -06:00
Gregory Nutt
8ffb103adb networking: IGMP: Remove special support for interrupt level processing (there is none) and fix some timer cancellation logic. In many files, correct comments. There is no interrupt level processing in the networking layer. 2017-09-02 10:27:03 -06:00
Gregory Nutt
2043e1a114 IOBs: Move from driver/iob to a better location in mm/iob 2017-05-09 07:35:30 -06:00
Gregory Nutt
d5207efb5a Be consistent... Use Name: consistent in function headers vs Function: 2017-04-21 16:33:14 -06:00
Gregory Nutt
bcc6b61fc1 Move include/nuttx/net/iob.h to include/drivers/iob.h; rename CONFIG_NET_IOB to CONFIG_DRIVERS_IOB 2017-04-20 14:53:30 -06:00
Gregory Nutt
a1469a3e95 Add CONFIG_DEBUG_ERROR. Change names of *dbg() * *err() 2016-06-11 15:50:49 -06:00
Gregory Nutt
1cdc746726 Rename CONFIG_DEBUG to CONFIG_DEBUG_FEATURES 2016-06-11 14:14:08 -06:00
Gregory Nutt
2c95fef501 Remove some empty code section comments 2016-02-26 07:35:55 -06:00
Andrew Webster
df211ee46a TCP: add writable check during poll
When a poll requesting POLLOUT happens, the poll should return
immediately if a write will not block.  This change adds that, as
opposed to the old behaviour of blocking until a timer from the
Ethernet driver eventually triggers the poll to complete.

This is only implemented for buffered TCP.  Unbuffered TCP should
behave as before.
2016-01-22 15:52:14 -06:00
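A simplified sketch of the writable check this commit adds for buffered TCP: when POLLOUT is requested and write buffers are available, the poll completes immediately instead of waiting for the Ethernet driver's periodic poll. The structure and helper here are illustrative, not the actual net/tcp poll setup code.

#include <poll.h>
#include <stdbool.h>

/* Illustrative stand-in for the connection's write-buffer state. */

struct tcp_conn_sketch_s
{
  int free_wrbuffers;             /* write buffers currently available */
};

/* Called while setting up a poll on a buffered TCP socket: if the
 * caller asked for POLLOUT and a write would not block right now,
 * report it immediately rather than waiting for a later driver poll.
 */

static bool tcp_poll_writable(struct tcp_conn_sketch_s *conn,
                              struct pollfd *fds)
{
  if ((fds->events & POLLOUT) != 0 && conn->free_wrbuffers > 0)
    {
      fds->revents |= POLLOUT;    /* complete the poll at once */
      return true;
    }

  return false;
}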
Gregory Nutt
371a9fd84c Networking: Fix another deadlock condition. tcp_write_buffer_alloc() calls sem_wait() with network locked. That worked if CONFIG_NET_NOINTS was not defined because interrupts are automatically restored when the wait happens. But with CONFIG_NET_NOINTS=y, the wait blocks with the network locked -- bad style and also can lead to a deadlock condition 2015-01-28 11:56:11 -06:00
Gregory Nutt
2d52d70d4c NET: Move private definitions from include/nuttx/net/tcp to net/tcp/tcp.h 2014-07-06 12:34:27 -06:00
Gregory Nutt
e1091251e6 NET: Move statistics from uip.h to new netstats.h to remove a nasty circular inclusion problem. 2014-06-26 09:32:39 -06:00
Gregory Nutt
473ba2ba6c NET: Fix an include file ordering problem when CONFIG_NET_STATISTICS= 2014-06-26 07:29:16 -06:00
Gregory Nutt
04985d6d1e Clean up all TCP-related naming 2014-06-24 18:12:49 -06:00
Gregory Nutt
e9a588c398 Add throttle support to the I/O buffer logic 2014-06-24 11:53:19 -06:00
Gregory Nutt
5d1f8180d4 Move the remaining files from include/nuttx/net/uip to include/nuttx/net; Rename *_internal.h header files in net/ to just *.h 2014-06-24 10:14:15 -06:00
Gregory Nutt
626469e30c Move include/nuttx/net/uipopt.h to include/nuttx/net/netconfig.h 2014-06-24 08:53:28 -06:00
Gregory Nutt
356d25b503 First cut at conversion of write-buffering to use I/O buffer chains (IOBs) 2014-06-22 11:27:57 -06:00
Gregory Nutt
2805582151 net: Add net/tcp/tcp.h; rename uip_tcpwrbuffer_* to tcp_wrbuffer_* 2014-06-21 15:23:39 -06:00