Consider a bi-directional TCP connection:
1. we use all IOBs for the tx queue
2. we advertise a zero recv window because we have no free IOBs
3. if the peer TCP does the same thing,
both sides advertise a zero window and neither can drain its tx queue.
For a similar stall to happen, the peer doesn't need to be
a naive TCP implementation like NuttX. A naive application blocking
on send() without draining its read buffer is enough.
(Arguably such an application should be fixed to drain rx even
when tx is full. However, that's another story.)
This commit avoids the situation by preventing tx from grabbing
all of the IOBs in the first place (assuming CONFIG_IOB_THROTTLE > 0).
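A minimal sketch of the throttling idea follows. It is an illustration only, not the actual NuttX change: iob_navail() and iob_try_alloc() are hypothetical helpers, and CONFIG_IOB_THROTTLE is normally set via Kconfig rather than defined locally. The tx path refuses to take an IOB once the free count drops to the throttle value, leaving the remainder for rx so the receive window can reopen.

#define CONFIG_IOB_THROTTLE 8             /* IOBs held back for the rx path */

struct iob_s;                             /* opaque I/O buffer */

extern int iob_navail(void);              /* hypothetical: free IOB count */
extern struct iob_s *iob_try_alloc(void); /* hypothetical: non-blocking alloc */

/* tx may only take an IOB while more than CONFIG_IOB_THROTTLE remain
 * free, so the rx path can always accept data and reopen the window.
 */

static struct iob_s *tx_iob_alloc(void)
{
  if (iob_navail() <= CONFIG_IOB_THROTTLE)
    {
      return NULL;                        /* back off; let rx drain first */
    }

  return iob_try_alloc();
}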
Gregory Nutt is the copyright holder for those files and he has submitted the
SGA; as a result, we can migrate the licenses to Apache.
Signed-off-by: Alin Jerpelea <alin.jerpelea@sony.com>
RFC2001: TCP Slow Start, Congestion Avoidance, Fast Retransmit,
and Fast Recovery Algorithms
...
3. Fast Retransmit
Modifications to the congestion avoidance algorithm were proposed in
1990 [3]. Before describing the change, realize that TCP may
generate an immediate acknowledgment (a duplicate ACK) when an out-
of-order segment is received (Section 4.2.2.21 of [1], with a note
that one reason for doing so was for the experimental fast-
retransmit algorithm). This duplicate ACK should not be delayed.
The purpose of this duplicate ACK is to let the other end know that a
segment was received out of order, and to tell it what sequence
number is expected.
Since TCP does not know whether a duplicate ACK is caused by a lost
segment or just a reordering of segments, it waits for a small number
of duplicate ACKs to be received. It is assumed that if there is
just a reordering of the segments, there will be only one or two
duplicate ACKs before the reordered segment is processed, which will
then generate a new ACK. If three or more duplicate ACKs are
received in a row, it is a strong indication that a segment has been
lost. TCP then performs a retransmission of what appears to be the
missing segment, without waiting for a retransmission timer to
expire.
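To make the duplicate-ACK rule concrete, here is a rough sketch of the bookkeeping it implies. The struct, field names, and retransmit_segment() are illustrative only and do not correspond to NuttX's tcp_conn_s or to RFC 2001 pseudo-code; sequence-number wraparound is ignored for brevity.

#include <stdbool.h>
#include <stdint.h>

#define TCP_FAST_RETRANSMIT_THRESH 3

struct conn
{
  uint32_t snd_una;        /* oldest unacknowledged sequence number */
  int      dupacks;        /* consecutive duplicate ACKs seen */
};

extern void retransmit_segment(struct conn *c, uint32_t seq); /* illustrative */

/* Called for every ACK segment that arrives on the connection */

static void on_ack(struct conn *c, uint32_t ackno, bool carries_data)
{
  if (ackno == c->snd_una && !carries_data)
    {
      /* Duplicate ACK: the peer is still asking for snd_una */

      if (++c->dupacks == TCP_FAST_RETRANSMIT_THRESH)
        {
          /* Three in a row: assume the segment was lost and resend it
           * now, without waiting for the retransmission timer.
           */

          retransmit_segment(c, c->snd_una);
        }
    }
  else if (ackno > c->snd_una)  /* wraparound ignored for brevity */
    {
      /* New data acknowledged: advance and reset the duplicate counter */

      c->snd_una = ackno;
      c->dupacks = 0;
    }
}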
Change-Id: Ie2cbcecab507c3d831f74390a6a85e0c5c8e0652
Signed-off-by: chao.an <anchao@xiaomi.com>
1. Consolidate the absolute-to-relative timeout conversion into one place (_net_timedwait)
2. Drive the wait timeout logic with net_timedwait instead of devif_timer
This patch helps us remove devif_timer (the periodic tick) to save power in the future.
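As a rough sketch of the conversion being consolidated (illustrative only; abstime_to_relms() is not the actual _net_timedwait() code), the absolute CLOCK_REALTIME deadline is turned into a relative timeout just before waiting and clamped to zero if it has already passed:

#include <time.h>

static unsigned int abstime_to_relms(const struct timespec *abstime)
{
  struct timespec now;
  long ms;

  clock_gettime(CLOCK_REALTIME, &now);

  /* Remaining time in milliseconds; negative means the deadline passed */

  ms  = (abstime->tv_sec - now.tv_sec) * 1000;
  ms += (abstime->tv_nsec - now.tv_nsec) / 1000000;

  return ms > 0 ? (unsigned int)ms : 0;
}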
Change-Id: I534748a5d767ca6da8a7843c3c2f993ed9ea77d4
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
Author: Gregory Nutt <gnutt@nuttx.org>
Run all .h and .c files modified in the last PR through nxstyle.
Author: Xiang Xiao <xiaoxiang@xiaomi.com>
Net cleanup (#17)
* Fix the semaphore usage issues found in tcp/udp (see the sketch after this list):
1. The counting semaphore needs to disable priority inheritance
2. Loop again if net_lockedwait returns -EINTR
3. Call nxsem_trywait to avoid the race condition
4. Call nxsem_post instead of sem_post
* Put the work notifier into a free list to avoid heap fragmentation in the long run. Since the allocation strategy is encapsulated internally, we can refine the implementation later.
* The network stack shouldn't allocate memory in the poll implementation, to avoid heap fragmentation in the long run. Other modifications include:
1. Select MM_IOB automatically since the ICMP[v6] socket can't work without the read-ahead buffer
2. Remove the net lock since xxx_callback_free already does the same thing
3. TCP/UDP poll should work even if the read-ahead buffer isn't enabled at all
* Add the NET_ prefix to the UDP_NOTIFIER and TCP_NOTIFIER options to align with the other UDP/TCP option naming conventions
* Remove the unused _SF_[IDLE|ACCEPT|SEND|RECV|MASK] flags since there is code to set/clear these flags, but nobody checks them.
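A minimal sketch of the signaling-semaphore pattern from the first item above, assuming the nxsem_* interfaces described later in this log; the variable names are illustrative, not the actual tcp/udp code:

#include <errno.h>
#include <nuttx/semaphore.h>

static sem_t g_waitsem;

static void wait_init(void)
{
  nxsem_init(&g_waitsem, 0, 0);

  /* A counting semaphore used purely for signaling must not take part
   * in priority inheritance.
   */

  nxsem_setprotocol(&g_waitsem, SEM_PRIO_NONE);
}

static int wait_for_event(void)
{
  int ret;

  /* Loop again if the wait is interrupted by a signal */

  do
    {
      ret = nxsem_wait(&g_waitsem);
    }
  while (ret == -EINTR);

  return ret;
}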
Iobinstrumentation
* mm/iob: Introduces producer/consumer id to every iob call. This is so that the calls can be instrumented to monitor the IOB resources.
* iob instrumentation - Merges the producer/consumer enumerations into a single, simpler IOB user enumeration.
* fs/procfs: Starts adding support for /proc/iobinfo
* fs/procfs: Finishes the first pass of simple IOB user statistics and the /proc/iobinfo entry
Approved-by: Gregory Nutt <gnutt@nuttx.org>
net/utils: The return value from net_breaklock() was being clobbered.
net/: Replace all calls to iob_alloc() with calls to net_ioballoc() which will release the network lock, if necessary.
net/utils, tcp, include/net: Separate out the special IOB allocation logic and place it in its own function. Prototype is available in a public header file where it can also be used by network drivers.
net/utils: net_timedwait() now uses new net_breaklock() and net_restorelock().
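The pattern behind net_ioballoc() can be sketched roughly like this (the prototypes below are simplified approximations of the NuttX internals, with FAR qualifiers and exact signatures omitted): drop the network lock while a blocking IOB allocation is in progress so the rest of the stack keeps running, then retake it.

#include <stdbool.h>

struct iob_s;                                    /* opaque I/O buffer */

extern struct iob_s *iob_alloc(bool throttled);  /* approximate prototype */
extern int net_breaklock(unsigned int *count);   /* approximate prototype */
extern int net_restorelock(unsigned int count);  /* approximate prototype */

static struct iob_s *ioballoc_sketch(bool throttled)
{
  struct iob_s *iob;
  unsigned int count;

  /* Break the (possibly recursive) network lock, remembering how many
   * times the caller was holding it.
   */

  net_breaklock(&count);

  /* This may block until another part of the stack frees an IOB */

  iob = iob_alloc(throttled);

  /* Reacquire the lock with the saved recursion count */

  net_restorelock(count);
  return iob;
}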
This commit backs out most of commit b4747286b1. That change was added because sem_wait() would sometimes cause cancellation points inappropriately. But with these recent changes, nxsem_wait() is used instead and it is not a cancellation point.
In the OS, all calls to sem_wait() changed to nxsem_wait(). nxsem_wait() does not return errors via errno so each place where nxsem_wait() is now called must not examine the errno variable.
In all OS functions (not libraries), change sem_wait() to nxsem_wait(). This will prevent the OS from creating bogus cancellation points and from modifying the per-task errno variable.
sched/semaphore: Add the function nxsem_wait(). This is a new internal OS interface. It is functionally equivalent to sem_wait() except that (1) it is not a cancellation point, and (2) it does not set the per-thread errno value on return.
sched/semaphore: Add nxsem_post() which is identical to sem_post() except that it never modifies the errno variable. Changed all references to sem_post in the OS to nxsem_post().
sched/semaphore: Add nxsem_destroy() which is identical to sem_destroy() except that it never modifies the errno variable. Changed all references to sem_destroy() in the OS to nxsem_destroy().
libc/semaphore and sched/semaphore: Add nxsem_getprotocol() and nxsem_setprotocol() which are identical to sem_getprotocol() and sem_setprotocol() except that they never modify the errno variable. Changed all references to sem_setprotocol in the OS to nxsem_setprotocol(). sem_getprotocol() was not used in the OS.
libc/semaphore: Add nxsem_getvalue() which is identical to sem_getvalue() except that it never modifies the errno variable. Changed all references to sem_getvalue in the OS to nxsem_getvalue().
sched/semaphore: Rename all internal private functions from sem_xyz to nxsem_xyz. The sem_ prefix is (will be) reserved only for the application semaphore interfaces.
libc/semaphore: Add nxsem_init() which is identical to sem_init() except that it never modifies the errno variable. Changed all references to sem_init in the OS to nxsem_init().
sched/semaphore: Rename sem_tickwait() to nxsem_tickwait() so that it is clear this is an internal OS function.
sched/semaphore: Rename sem_reset() to nxsem_reset() so that it is clear this is an internal OS function.
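A small sketch of the two calling conventions described above (the nxsem_wait() prototype is simplified here, with the FAR qualifier omitted): with the standard interface the error comes back through errno, while the internal interface returns a negated errno value and leaves errno untouched.

#include <errno.h>
#include <semaphore.h>

int nxsem_wait(sem_t *sem);        /* OS-internal interface, simplified */

static int wait_posix(sem_t *sem)
{
  /* Application side: a cancellation point; errors reported via errno */

  if (sem_wait(sem) < 0)
    {
      return -errno;
    }

  return 0;
}

static int wait_os(sem_t *sem)
{
  /* OS side: not a cancellation point; returns a negated errno value
   * directly and never touches the per-thread errno variable.
   */

  return nxsem_wait(sem);
}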
When a poll requesting POLLOUT happens, the poll should return
immediately if a write will not block. This change adds that, as
opposed to the old behaviour of blocking until a timer from the
Ethernet driver eventually triggers the poll to complete.
This is only implemented for buffered TCP. Unbuffered TCP should
behave as before.
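A simplified sketch of the idea (psock_writebuffer_available() and poll_complete() are hypothetical stand-ins for the buffered-TCP poll internals, and the structures are simplified): when a POLLOUT poll is set up, writability is reported immediately if a send would not block.

#include <poll.h>
#include <stdbool.h>

struct socket;                                        /* opaque socket */

/* Hypothetical helpers standing in for the real poll internals */

extern bool psock_writebuffer_available(struct socket *psock);
extern void poll_complete(struct pollfd *fds);

static void tcp_pollout_setup_sketch(struct socket *psock,
                                     struct pollfd *fds)
{
  if ((fds->events & POLLOUT) != 0 && psock_writebuffer_available(psock))
    {
      /* Write-buffer space is available now, so a send() would not
       * block: post the poll result immediately instead of waiting
       * for a later device event.
       */

      fds->revents |= POLLOUT;
      poll_complete(fds);
    }
}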