1. Remove the unnecessary interface tcp_close_monitor():
   the socket flags (s_flags) are global state for the network connection;
   remove the incorrect flags update on stop monitor.
2. Do not start the TCP monitor from a duplicated psock:
   the TCP monitor has already been registered in the connect callback.
------------------------------------------------------------
This patch also fixes the telnet issue reported in:
https://github.com/apache/incubator-nuttx/pull/5434#issuecomment-1035600651
The original session fd is closed after dup, so the connection state
incorrectly migrated to closed:
drivers/net/telnet.c:
977 static int telnet_session(FAR struct telnet_session_s *session)
...
1031 ret = psock_dup2(psock, &priv->td_psock);
...
1082 nx_close(session->ts_sd);
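A minimal sketch of the problematic sequence (paraphrased from
drivers/net/telnet.c; allocation and error handling omitted):
```c
/* Both sockets share one TCP connection and its s_flags state, so the
 * duplicate must not register a second close monitor: the monitor set
 * up in the connect callback already covers the connection, and a
 * duplicate registration would let nx_close() below migrate the shared
 * state to closed.
 */

static int telnet_session(FAR struct telnet_session_s *session)
{
  FAR struct telnet_dev_s *priv;
  FAR struct socket *psock;
  int ret;

  /* ... allocate priv ... */

  psock = sockfd_socket(session->ts_sd);

  ret = psock_dup2(psock, &priv->td_psock); /* Duplicate the socket only;
                                             * no new TCP monitor here */

  /* ... */

  nx_close(session->ts_sd);                 /* The duplicated psock keeps
                                             * the live connection */
  return ret;
}
```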
Signed-off-by: chao.an <anchao@xiaomi.com>
Also removed two tcp_send_txnotify() calls from tcp_sendfile() (they are
no longer needed).
As a result, the TX throughput of both tcp_send_buffered and
tcp_send_unbuffered is significantly boosted in the case of a TUN/TAP
network device.
According to RFC 5681 (3.2), the TCP Fast Retransmit algorithm should start
once the threshold of 3 duplicate ACKs is reached.
Thus the threshold should be a constant, not an integer config option.
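A minimal sketch of the intended shape (the dupacks counter and the
retransmit helper are illustrative, not the exact NuttX symbols):
```c
/* RFC 5681 (3.2) fixes the fast-retransmit trigger at exactly three
 * duplicate ACKs, so the threshold is a protocol constant rather than
 * a user-tunable integer option.
 */

#define TCP_FAST_RETRANSMISSION_THRESH 3  /* Fixed by RFC 5681 */

static void tcp_ack_received(FAR struct tcp_conn_s *conn, bool dupack)
{
  if (dupack)
    {
      if (++conn->dupacks >= TCP_FAST_RETRANSMISSION_THRESH)
        {
          tcp_fast_retransmit(conn);  /* Resend the missing segment */
          conn->dupacks = 0;
        }
    }
  else
    {
      conn->dupacks = 0;              /* New data acked: reset count */
    }
}
```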
(conn->sndseq was updated in multiple places, which was unreasonable and
complicated.)
This optimization is the same as the one done for tcp_send_unbuffered.
A wrong unackseq calculation locked conn->tx_unacked at a non-zero value
even after all ACKs were received.
This is the same issue as the one fixed in tcp_send_unbuffered.
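A sketch of the corrected accounting (the wrapper function is illustrative;
tcp_getsequence() and TCP_SEQ_SUB() are the NuttX helpers):
```c
static void tcp_update_unacked(FAR struct tcp_conn_s *conn,
                               FAR struct tcp_hdr_s *tcp)
{
  uint32_t acked = tcp_getsequence(tcp->ackno);   /* Peer's ACK number */
  uint32_t sent  = tcp_getsequence(conn->sndseq); /* Next byte to send */

  /* Modulo-2^32 distance between sent and acked data.  A calculation
   * that ignores sequence-number wrap can leave tx_unacked stuck at a
   * stale non-zero value even after everything has been acknowledged.
   */

  conn->tx_unacked = TCP_SEQ_SUB(sent, acked);
}
```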
Do not use the pvconn argument to get the TCP connection pointer, because
pvconn is normally NULL for some events like NETDEV_DOWN. Instead, the TCP
connection pointer can be reliably obtained from the corresponding TCP
socket.
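A sketch of the pattern inside a TCP event handler (the handler name and
the bound private pointer are illustrative):
```c
/* pvpriv was bound to the socket when the callback was registered, so
 * the connection can always be recovered from it; pvconn may
 * legitimately be NULL here, e.g. when flags contains NETDEV_DOWN.
 */

static uint16_t tcp_send_eventhandler(FAR struct net_driver_s *dev,
                                      FAR void *pvconn, FAR void *pvpriv,
                                      uint16_t flags)
{
  FAR struct socket *psock = (FAR struct socket *)pvpriv;
  FAR struct tcp_conn_s *conn = (FAR struct tcp_conn_s *)psock->s_conn;

  /* ... handle TCP_ACKDATA, TCP_REXMIT, NETDEV_DOWN, etc. ... */

  return flags;
}
```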
Both the snd_ackcb and snd_datacb callbacks were created and destroyed right
after sending every packet.
Whenever a TCP_REXMIT event occurred due to a TCP send timeout, it was
ignored because the snd_ackcb callback had already been destroyed by that
time.
The issue is fixed as follows:
- both the snd_ackcb and snd_datacb callbacks are combined into one snd_cb
  callback (the same way as in tcp_send_unbuffered.c); see the sketch below.
- the snd_cb callback lives until all data requested via sendfile() is sent,
  including all ACKs and possible retransmissions.
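A sketch of the combined-callback lifetime inside tcp_sendfile(); the event
handler name and the state structure are illustrative, while
tcp_callback_alloc()/tcp_callback_free() are the NuttX helpers:
```c
FAR struct devif_callback_s *snd_cb;

/* One callback receives both ACK-processing and data-transmission
 * events, so a TCP_REXMIT can no longer arrive in a window where the
 * callback has already been destroyed.
 */

snd_cb = tcp_callback_alloc(conn);

snd_cb->flags = (TCP_ACKDATA | TCP_REXMIT | TCP_POLL |
                 TCP_DISCONN_EVENTS);
snd_cb->priv  = (FAR void *)&state;     /* Per-operation state */
snd_cb->event = sendfile_eventhandler;  /* ACKs, rexmits and sends */

/* ... block until everything sent via sendfile() has been ACKed ... */

tcp_callback_free(conn, snd_cb);        /* Freed only on completion */
```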
As a positive side effect of the code optimization / fix, sendfile TCP payload throughput is increased.
tcp_sendfile() reads data directly from a file and does not use the
NET_TCP_WRITE_BUFFERS data flow even if the CONFIG_NET_TCP_WRITE_BUFFERS
option is enabled.
Despite this, tcp_sendfile relied on NET_TCP_WRITE_BUFFERS-specific flow
control variables that stayed idle during the sendfile operation. Thus it
was totally inconsistent.
E.g. because of this issue, a TCP socket used by a sendfile() operation
never issued a FIN packet on close(), and the TCP connection hung.
As a result of the fix, the CONFIG_NET_TCP_WRITE_BUFFERS and
CONFIG_NET_SENDFILE options can now be enabled simultaneously and coexist.
A wrong unackseq calculation locked conn->tx_unacked at a non-zero value
even after all ACKs were received. Thus unbuffered psock_tcp_send() never
completed.
If the remote TCP receiver advertised a TCP window size greater than 64 KB
and TCP ACK packets returned to the NuttX TCP sender with significant delay,
the tx_unacked variable overflowed and further TCP sends stalled forever
(until TCP re-connection).
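A sketch of the overflow mechanics, assuming tx_unacked was a 16-bit counter
(an assumption inferred from the 64 KB boundary described above):
```c
/* If the in-flight byte count lives in a 16-bit field, a peer window
 * larger than 65535 bytes lets more than 64 KB of data be outstanding
 * before any ACK arrives, and the counter silently wraps.  Widening it
 * to 32 bits matches the TCP sequence-number space.
 */

struct tcp_conn_s
{
  /* ... */

  uint32_t tx_unacked;  /* Was uint16_t: wrapped when the advertised
                         * window exceeded 64 KB and ACKs were delayed */

  /* ... */
};
```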
While it's a neat idea, it doesn't work well in reality.
* Many modern TCP stacks don't obey the "ack every other packet"
  rule these days (Linux, macOS, ...).
* Even if a traditional TCP implementation is assumed, we can't
  reliably predict/control which packets are acked. For example,
  window updates can easily mess up our strategy.
tcp_timer: eliminated false decrements of conn->timer in the case of
multiple network adapters.
The false timer decrements sometimes provoked spurious TCP retransmissions
due to premature timeouts.
With packet forwarding mode enabled, packets were forwarded in reverse order
because of the LIFO behavior of the connection event list.
The issue showed up only under high network traffic: the event list started
to grow, which changed the order of packets within groups of several packets,
e.g.: 3, 2, 1, 6, 5, 4, 8, 7, etc.
Remarks concerning the connection event list implementation (a sketch of
the O(1) tail append follows the list):
* Now the queue (list) is FIFO, as it should be.
* The list is singly linked.
* The list head pointer lives inside the outer net_driver_s structure,
  and a tail pointer has been added to the same structure.
* The list item is the devif_callback_s structure.
  It still has two pointers to two different list chains (*nxtconn and
  *nxtdev).
* As before, the first argument (*dev) of the list functions can be NULL,
  while the other argument (*list) must be effective (not NULL).
* An extra (*tail) argument is added to the devif_callback_alloc()
  and devif_conn_callback_free() functions.
* devif_callback_alloc() time complexity is O(1) (i.e. O(n) to fill the
  whole list).
* devif_callback_free() time complexity is O(n) (i.e. O(n^2) to empty the
  whole list).
* devif_conn_event() time complexity is O(n).
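A sketch of the O(1) FIFO append (member and function names simplified from
devif_callback_s / net_driver_s):
```c
/* With head and tail pointers the append is O(1) and callbacks fire in
 * the order they were registered (FIFO), so forwarded packets keep
 * their original order.
 */

struct devif_callback_s
{
  FAR struct devif_callback_s *nxtconn;  /* Link in the connection list */
  FAR struct devif_callback_s *nxtdev;   /* Link in the device list */

  /* ... event handler, flags, private data ... */
};

static void list_append(FAR struct devif_callback_s **head,
                        FAR struct devif_callback_s **tail,
                        FAR struct devif_callback_s *cb)
{
  cb->nxtconn = NULL;

  if (*tail != NULL)
    {
      (*tail)->nxtconn = cb;  /* O(1): link after the current tail */
    }
  else
    {
      *head = cb;             /* First entry becomes the head too */
    }

  *tail = cb;                 /* New tail; FIFO order is preserved */
}
```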
Gregory Nutt has submitted the SGA.
UVC Ingenieure has submitted the SGA.
Max Holtzberg has submitted the ICLA.
As a result we can migrate the licenses to Apache.
Signed-off-by: Alin Jerpelea <alin.jerpelea@sony.com>
* Do not accept the window in old segments.
  Implement the SND.WL1/SND.WL2 logic from the RFC; see the sketch below.
* Do not accept the window from a segment without the ACK bit set.
  The window is an offset from the ACK sequence number.
  (Maybe it's simpler to just drop segments without ACK, though.)
* Subtract the amount of the ACK advancement from snd_wnd.
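A sketch of the RFC 793 window-update test (the wrapper function and the
seg_* locals are illustrative; the snd_wl1/snd_wl2 names follow the RFC and
the sequence-compare macros are assumed to be the NuttX TCP_SEQ_* helpers):
```c
static void tcp_update_sndwnd(FAR struct tcp_conn_s *conn,
                              FAR struct tcp_hdr_s *tcp,
                              uint32_t seg_seq, uint32_t seg_ack,
                              uint16_t seg_wnd)
{
  /* Only a segment carrying an ACK may update the send window, since
   * the window is an offset from the ACK sequence number.
   */

  if ((tcp->flags & TCP_ACK) == 0)
    {
      return;
    }

  /* RFC 793 freshness test: SND.WL1 < SEG.SEQ, or SND.WL1 == SEG.SEQ
   * and SND.WL2 <= SEG.ACK.  Old segments must not resize the window.
   */

  if (TCP_SEQ_LT(conn->snd_wl1, seg_seq) ||
      (conn->snd_wl1 == seg_seq && !TCP_SEQ_GT(conn->snd_wl2, seg_ack)))
    {
      conn->snd_wnd = seg_wnd;
      conn->snd_wl1 = seg_seq;
      conn->snd_wl2 = seg_ack;
    }

  /* Separately, when SEG.ACK advances, the previously accepted window
   * is consumed by the same amount, hence snd_wnd is reduced by the
   * ACK advancement.
   */
}
```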
Fix a wrong assertion in:
```
commit 98ec46d726
Author: YAMAMOTO Takashi <yamamoto@midokura.com>
Date: Tue Jul 20 09:10:43 2021 +0900
tcp_send_buffered.c: fix iob allocation deadlock
Ensure to put the wrb back onto the write_q when blocking
on iob allocation. Otherwise, it can deadlock with other
threads doing the same thing.
```
I forgot to submit this with https://github.com/apache/incubator-nuttx/pull/4257