Commit Graph

685 Commits

chao an
ffd81e63be net/tcp: reprepare response buffer from unthrottled pool
Signed-off-by: chao an <anchao@xiaomi.com>
2023-01-06 16:33:13 +08:00
chao an
3af47d1913 net/tcp: remove unnecessary clear for device buffer
The device buffer has already been cleared in tcp_datahandler().

Signed-off-by: chao an <anchao@xiaomi.com>
2023-01-06 16:33:13 +08:00
chao an
1510dbee68 net/tcp: Do not trigger retransmission if the new data has not been consumed.
Signed-off-by: chao an <anchao@xiaomi.com>
2023-01-03 16:28:30 +08:00
chao an
6f15b32e34 net/tcp: rename NET_TCP_RECV_CONTIG to NET_TCP_RECV_PACK
Signed-off-by: chao an <anchao@xiaomi.com>
2022-12-19 01:32:05 +08:00
chao an
794adc5814 net/tcp: contig received data to reduce iob consumption
Fragmentation of network data intensifies iob consumption: if the
device receives a storm of fragmented packets, the iob cache is not
used effectively. This is not acceptable on IoT devices, since the
resources of such devices are limited. The trade-off is that data
needs to be copied. This option brings some balance on
resource-constrained devices; enable it to reduce iob consumption by
merging the received iob buffers into a contiguous iob chain.

Signed-off-by: chao an <anchao@xiaomi.com>
2022-12-18 21:15:47 +08:00
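Below is a minimal sketch of the merging idea described in the commit above. The struct and helper are simplified stand-ins, not the real NuttX struct iob_s or iob API; the buffer size and field names are assumptions for illustration only.

```c
#include <stdint.h>
#include <string.h>

#define IOB_BUFSIZE 196            /* stand-in for CONFIG_IOB_BUFSIZE */

/* Simplified stand-in for NuttX's struct iob_s */

struct iob
{
  struct iob *flink;               /* Next buffer in the chain */
  uint16_t    len;                 /* Bytes used in data[] */
  uint8_t     data[IOB_BUFSIZE];
};

/* Pack the payload of the 'src' chain onto the tail of the 'dst' chain,
 * filling each destination buffer completely before moving on.  This is
 * the "contiguous" layout: one extra memcpy per fragment in exchange for
 * far fewer buffers held by the receive queue.
 */

static void iob_pack_onto(struct iob *dst, const struct iob *src)
{
  while (dst->flink != NULL)       /* find the tail of the destination */
    {
      dst = dst->flink;
    }

  for (; src != NULL; src = src->flink)
    {
      uint16_t off = 0;

      while (off < src->len)
        {
          uint16_t room = IOB_BUFSIZE - dst->len;
          uint16_t n    = src->len - off;

          if (room == 0)
            {
              return;              /* real code would allocate a new iob here */
            }

          if (n > room)
            {
              n = room;
            }

          memcpy(&dst->data[dst->len], &src->data[off], n);
          dst->len += n;
          off      += n;
        }
    }
}
```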
Xiang Xiao
d5689e070b net/arp: Remove nuttx/net/arp.h
1. Move ARPHRD_ETHER to netinet/arp.h
2. Move arp_entry_s to net/arp/arp.h
3. Move arp_input to nuttx/net/netdev.h

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-12-16 22:10:59 +02:00
chao an
11de08de27 net/devif_send: switch all blocking sends to nonblocking mode
Signed-off-by: chao an <anchao@xiaomi.com>
2022-12-08 09:53:15 +01:00
chao an
62004a28a6 net/d_buf: remove d_buf reference from l3/l4
The l3/l4 stack will gradually decouple its references to d_buf. Only
legacy devices still retain d_buf support; new net devices will use d_iob.

Signed-off-by: chao an <anchao@xiaomi.com>
2022-12-04 20:37:14 +08:00
chao an
34d2cde8a8 net/l2/l3/l4: add support of iob offload
1. Add a new config, CONFIG_NET_LL_GUARDSIZE, to isolate the l2 stack,
   which benefits the l3 (IP) layer for multi-MAC (l2) implementations,
   especially on NICs such as cellular net drivers.

New configuration option: CONFIG_NET_LL_GUARDSIZE

CONFIG_NET_LL_GUARDSIZE reserves an l2 header area at the head of the
network buffer to isolate the L2/L3 (MAC/IP) data on the network layer,
which benefits transparent transmission and forwarding of the L3
network-layer protocol.

------------------------------------------------------------
Layout of the first iob entry:

        iob_data (aligned by CONFIG_IOB_ALIGNMENT)
            |
            |                  io_offset(CONFIG_NET_LL_GUARDSIZE)
            |                                |
            -------------------------------------------------
      iob   |            Reserved            |    io_len    |
            -------------------------------------------------

-------------------------------------------------------------
Layout of different NICs implementation:

        iob_data (aligned by CONFIG_IOB_ALIGNMENT)
            |
            |                 io_offset(CONFIG_NET_LL_GUARDSIZE)
            |                                |
            -------------------------------------------------
 Ethernet   |       Reserved    | ETH_HDRLEN |    io_len    |
            ---------------------------------|---------------
 8021Q      |   Reserved  | ETH_8021Q_HDRLEN |    io_len    |
            ---------------------------------|---------------
 ipforward  |            Reserved            |    io_len    |
            -------------------------------------------------

--------------------------------------------------------------------

2. Support iob offload to l2 driver to avoid unnecessary memory copy

Support send/receive iob vectors directly between the NICs and l3/l4
stack to avoid unnecessary memory copies, especially on hardware that
supports Scatter/gather, which can greatly improve performance.

new interface to support iob offload:

  ------------------------------------------
  |    IOB version     |     original      |
  |----------------------------------------|
  |  devif_iob_poll()  |   devif_poll()    |
  |       ...          |       ...         |
  ------------------------------------------

--------------------------------------------------------------------

1> NIC hardware supports Scatter/gather transfer

TX:

                tcp_poll()/udp_poll()/pkt_poll()/...(l3|l4)
                           /              \
                          /                \
devif_poll_[l3|l4]_connections()     devif_iob_send() (nocopy:udp/icmp/...)
           /                                   \      (copy:tcp)
          /                                     \
  devif_iob_poll("NIC"_txpoll)                callback() // "NIC"_txpoll
                                                  |
                            dev->d_iob:           |
                                                ---------------         ---------------
                             io_data       iob1 |  |          |    iob3 |  |          |
                                    \           ---------------         ---------------
                                  ---------------  |       --------------- |
                             iob0 |  |          |  |  iob2 |  |          | |
                                  ---------------  |       --------------- |
                                     \             |          /           /
                                        \          |       /           /
                                   ----------------------------------------------
                    NICs io vector |    |    |    |    |    |    |    |    |    |
                                   ----------------------------------------------

RX:

  [tcp|udp|icmp|...]ipv[4|6]_data_handler()(iob_concat/append to readahead)
                    |
                    |
      [tcp|udp|icmp|...]_ipv[4|6]_in()/...
                    |
                    |
          pkt/ipv[4/6]_input()/...
                    |
                    |
     NICs io vector receive(iov_base to each iobs)

--------------------------------------------------------------------

2> CONFIG_IOB_BUFSIZE is greater than MTU:

TX:

"(CONFIG_IOB_BUFSIZE) > (MAX_NETDEV_PKTSIZE + CONFIG_NET_GUARDSIZE + CONFIG_NET_LL_GUARDSIZE)"

                tcp_poll()/udp_poll()/pkt_poll()/...(l3|l4)
                           /              \
                          /                \
devif_poll_[l3|l4]_connections()     devif_iob_send() (nocopy:udp/icmp/...)
           /                                   \      (copy:tcp)
          /                                     \
  devif_iob_poll("NIC"_txpoll)                callback() // "NIC"_txpoll
                                                  |
                                             "NIC"_send()
                          (dev->d_iob->io_data[CONFIG_NET_LL_GUARDSIZE - NET_LL_HDRLEN(dev)])

RX:

  [tcp|udp|icmp|...]ipv[4|6]_data_handler()(iob_concat/append to readahead)
                    |
                    |
      [tcp|udp|icmp|...]_ipv[4|6]_in()/...
                    |
                    |
          pkt/ipv[4/6]_input()/...
                    |
                    |
     NICs io vector receive(iov_base to io_data)

--------------------------------------------------------------------

3> Compatible with all old flat buffer NICs

TX:
                tcp_poll()/udp_poll()/pkt_poll()/...(l3|l4)
                           /              \
                          /                \
devif_poll_[l3|l4]_connections()     devif_iob_send() (nocopy:udp/icmp/...)
           /                                   \      (copy:tcp)
          /                                     \
  devif_iob_poll(devif_poll_callback())  devif_poll_callback() /* new interface, gather iobs to flat buffer */
       /                                           \
      /                                             \
 devif_poll("NIC"_txpoll)                     "NIC"_send()(dev->d_buf)

RX:

  [tcp|udp|icmp|...]ipv[4|6]_data_handler()(iob_concat/append to readahead)
                    |
                    |
      [tcp|udp|icmp|...]_ipv[4|6]_in()/...
                    |
                    |
               netdev_input()  /* new interface, Scatter/gather flat/iob buffer */
                    |
                    |
          pkt/ipv[4|6]_input()/...
                    |
                    |
     NICs io vector receive(Original flat buffer)

3. Iperf passthrough on NuttX simulator:

  -------------------------------------------------
  |  Protocol      | Server | Client |            |
  |-----------------------------------------------|
  |  TCP           |  813   |   834  |  Mbits/sec |
  |  TCP(Offload)  | 1720   |  1100  |  Mbits/sec |
  |  UDP           |   22   |   757  |  Mbits/sec |
  |  UDP(Offload)  |   25   |  1250  |  Mbits/sec |
  -------------------------------------------------

Signed-off-by: chao an <anchao@xiaomi.com>
2022-12-03 11:47:04 +08:00
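A small sketch of the CONFIG_NET_LL_GUARDSIZE layout drawn above: the L3 payload starts at io_offset, and a driver steps back into the reserved guard area to prepend its own link-layer header. The structure here is a simplified stand-in, not the actual NuttX iob definition, and the constants are illustrative.

```c
#include <stddef.h>
#include <stdint.h>

#define NET_LL_GUARDSIZE 16        /* stand-in for CONFIG_NET_LL_GUARDSIZE */
#define ETH_HDRLEN       14        /* Ethernet header length */

/* Simplified first iob of dev->d_iob: the L3 payload begins at io_offset
 * (== NET_LL_GUARDSIZE) and the bytes in front of it are reserved for
 * whatever link-layer header the device needs.
 */

struct iob_sketch
{
  uint16_t io_offset;              /* start of L3 data within io_data[] */
  uint16_t io_len;                 /* length of L3 data */
  uint8_t  io_data[1536];
};

/* An Ethernet driver prepends its header by stepping back ETH_HDRLEN
 * bytes into the guard area; an 802.1Q driver would step back
 * ETH_8021Q_HDRLEN instead, and pure L3 forwarding leaves the guard
 * area untouched -- the three layouts drawn above.
 */

static uint8_t *eth_header_start(struct iob_sketch *iob)
{
  if (iob->io_offset < ETH_HDRLEN)
    {
      return NULL;                 /* guard area too small: config error */
    }

  return &iob->io_data[iob->io_offset - ETH_HDRLEN];
}
```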
liyi
391b501639 net: extract l3 header build code into new functions
Signed-off-by: liyi <liyi25@xiaomi.com>
2022-11-29 18:36:15 +08:00
chao an
d54a20b393 net/tcp/udp: remove all domain assertion
Fix a build break when only CONFIG_NET_IPv6 is enabled:

In file included from tcp/tcp_sendfile.c:38:
tcp/tcp_sendfile.c: In function ‘sendfile_eventhandler’:
tcp/tcp_sendfile.c:173:27: error: ‘struct tcp_conn_s’ has no member named ‘domain’
  173 |           DEBUGASSERT(conn->domain == PF_INET6);
      |                           ^~

Signed-off-by: chao an <anchao@xiaomi.com>
2022-11-29 17:28:36 +08:00
chao an
3caa551ff4 net: remove psock reference from connect
Signed-off-by: chao an <anchao@xiaomi.com>
2022-11-24 22:57:42 +08:00
chao an
f2a7711ef8 net/soerr: add new _SO_CONN_SETERRNO() macro
Support setting the socket error code from the conn instance.

Signed-off-by: chao an <anchao@xiaomi.com>
2022-11-24 22:57:42 +08:00
liangchaozhong
0ec4b0a149 tcp: recv returns 0 without set errno
Issue:
According to the man page, recv() returning 0 means that the peer side
has already closed the connection. With the current design, 0 is
returned without setting errno when a TCP rx timeout happens.

Solution:
Return result instead of ir_result when ir_result is 0; then -1 is
returned with errno set to EAGAIN for the TCP rx-buffer-empty case.

Signed-off-by: liangchaozhong <liangchaozhong@xiaomi.com>
2022-11-23 23:23:44 +08:00
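From the application's point of view, the behavior after this change follows the standard recv(2) contract. A hedged sketch of how a caller distinguishes the cases (plain POSIX, independent of the NuttX internals):

```c
#include <errno.h>
#include <stddef.h>
#include <sys/socket.h>

/* After the change, a receive timeout on a SOCK_STREAM socket surfaces
 * as -1 with errno == EAGAIN, and a return of 0 is reserved for an
 * orderly shutdown by the peer, as the man page requires.
 */

static int handle_recv(int fd, void *buf, size_t len)
{
  ssize_t n = recv(fd, buf, len, 0);

  if (n > 0)
    {
      return (int)n;               /* got data */
    }
  else if (n == 0)
    {
      return 0;                    /* peer closed the connection */
    }
  else if (errno == EAGAIN || errno == EWOULDBLOCK)
    {
      return -EAGAIN;              /* rx timeout / nothing buffered yet */
    }

  return -errno;                   /* a real error */
}
```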
Xiang Xiao
0d516c7ff6 net/tcp: Remove the dependence on SCHED_WORKQUEUE
since it isn't required in case of NET_TCP_NO_STACK

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-11-22 20:45:26 +09:00
chao an
ae4b5b5f50 net/tcp: TCP_WAITALL should use state flag
The feature of MSG_WAITALL is broken since the wrong flag is used

Signed-off-by: chao an <anchao@xiaomi.com>
2022-11-21 17:21:10 +08:00
anjiahao
d07792a343 Initialize global mutex/sem by NXMUTEX_INITIALIZER and SEM_INITIALIZER
Signed-off-by: anjiahao <anjiahao@xiaomi.com>
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-11-14 09:34:04 +09:00
Alan Carvalho de Assis
8286a558c8 net/tcp: Avoid starting TCP sequence number 0 2022-11-13 09:09:36 +08:00
zhanghongyu
ab15887a0b tcp: find bound device when laddr is ANY
icmp: find bound device when s_boundto is not zero

Signed-off-by: zhanghongyu <zhanghongyu@xiaomi.com>
2022-11-12 18:36:09 +08:00
Zhe Weng
f498102512 net: select NAT external port by tcp_selectport for TCP
Signed-off-by: Zhe Weng <wengzhe@xiaomi.com>
2022-11-11 14:36:55 +08:00
Zhe Weng
0a4e01d712 net: verify NAT port usage in tcp_selectport
Signed-off-by: Zhe Weng <wengzhe@xiaomi.com>
2022-11-11 14:36:55 +08:00
Xiang Xiao
a7e99346a1 net/tcp: Make TCP_MAXRTX and TCP_MAXSYNRTX configurable
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-11-07 10:05:57 +01:00
chao an
b353d84742 net/tcp: fix build break when NET_TCP_NO_STACK is enabled
devif/ipv6_input.c: In function 'ipv6_input':
devif/ipv6_input.c:59:33: error: 'TCPIPv6BUF' undeclared (first use in this function); did you mean 'IPv6BUF'?
   59 | #define PAYLOAD ((FAR uint8_t *)TCPIPv6BUF)
      |                                 ^~~~~~~~~~
devif/ipv6_input.c:302:14: note: in expansion of macro 'PAYLOAD'
  302 |   payload  = PAYLOAD;     /* Assume payload starts right after IPv6 header */
      |

Signed-off-by: chao an <anchao@xiaomi.com>
2022-10-31 15:31:31 +08:00
chao an
a8d3286258 net: move device buffer define to common header
Signed-off-by: chao an <anchao@xiaomi.com>
2022-10-28 00:32:16 -04:00
anjiahao
5724c6b2e4 sem: remove sem default protocol
Signed-off-by: anjiahao <anjiahao@xiaomi.com>
2022-10-22 14:50:48 +08:00
zhanghongyu
2a6a869962 tcp: Update conn laddr after select device
Signed-off-by: zhanghongyu <zhanghongyu@xiaomi.com>
2022-10-21 18:47:13 +08:00
chao an
6978446b8e net/tcp: remove debug counter of connect instance
Fixed by commit:

 | net: remove pvconn reference from all devif callback
 |
 | Do not use 'pvconn' argument to get the connection pointer since
 | pvconn is normally NULL for some events like NETDEV_DOWN.
 | Instead, the connection pointer can be reliably obtained from the
 | corresponding private pointer.

Signed-off-by: chao an <anchao@xiaomi.com>
2022-10-19 09:46:02 +08:00
chao.an
76c64b7d30 net/tcp: syscall send() should not return ETIMEDOUT
Signed-off-by: chao an <anchao@xiaomi.com>
2022-10-15 10:17:25 +09:00
Fotis Panagiotopoulos
143d1322ea Added handling of MSG_WAITALL flag in TCP recv. 2022-10-13 18:22:05 +08:00
Xiang Xiao
bdeaea3742 Remove the unnecessary empty line after label
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-09-30 17:54:56 +02:00
Xiang Xiao
40ef5bc6db libc: Move queue.h from include to include/nuttx
to avoid the conflict with libuv's queue.h

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-09-26 08:04:58 +02:00
wangbowen6
344c8be049 poll: add poll_notify() api and call it in all drivers
Signed-off-by: wangbowen6 <wangbowen6@xiaomi.com>
2022-09-26 12:06:32 +08:00
chao an
f43be61f69 net/tcp: remove the redundant ifdef CONFIG_NET_TCP
Signed-off-by: chao an <anchao@xiaomi.com>
2022-09-26 00:14:49 +09:00
chao an
e46688c1ee net/tcp: use independent work to free the conn instance
I noticed that the conn instance leaks during a stress test: the close
work queued from tcp_close_eventhandler() is canceled by tcp_timer()
immediately:

Breakpoint 1, tcp_close_eventhandler (dev=0x565cd338 <up_irq_restore+108>, pvpriv=0x5655e6ff <getpid+12>, flags=0) at tcp/tcp_close.c:71
(gdb) bt
| #0  tcp_close_eventhandler (dev=0x565cd338 <up_irq_restore+108>, pvpriv=0x5655e6ff <getpid+12>, flags=0) at tcp/tcp_close.c:71
| #1  0x5658bf1e in devif_conn_event (dev=0x5660bd80 <g_sim_dev>, flags=512, list=0x5660d558 <g_cbprealloc+312>) at devif/devif_callback.c:508
| #2  0x5658a219 in tcp_callback (dev=0x5660bd80 <g_sim_dev>, conn=0x5660c4a0 <g_tcp_connections>, flags=512) at tcp/tcp_callback.c:167
| #3  0x56589253 in tcp_timer (dev=0x5660bd80 <g_sim_dev>, conn=0x5660c4a0 <g_tcp_connections>) at tcp/tcp_timer.c:378
| #4  0x5658dd47 in tcp_poll (dev=0x5660bd80 <g_sim_dev>, conn=0x5660c4a0 <g_tcp_connections>) at tcp/tcp_devpoll.c:95
| #5  0x5658b95f in devif_poll_tcp_connections (dev=0x5660bd80 <g_sim_dev>, callback=0x565770f2 <netdriver_txpoll>) at devif/devif_poll.c:601
| #6  0x5658b9ea in devif_poll (dev=0x5660bd80 <g_sim_dev>, callback=0x565770f2 <netdriver_txpoll>) at devif/devif_poll.c:722
| #7  0x56577230 in netdriver_txavail_work (arg=0x5660bd80 <g_sim_dev>) at sim/up_netdriver.c:308
| #8  0x5655999e in work_thread (argc=2, argv=0xf3db5dd0) at wqueue/kwork_thread.c:178
| #9  0x5655983f in nxtask_start () at task/task_start.c:129

(gdb) c
Continuing.
Breakpoint 2, tcp_update_timer (conn=0x5660c4a0 <g_tcp_connections>) at tcp/tcp_timer.c:178
(gdb) bt
| #0  tcp_update_timer (conn=0x5660c4a0 <g_tcp_connections>) at tcp/tcp_timer.c:178
| #1  0x5658952a in tcp_timer (dev=0x5660bd80 <g_sim_dev>, conn=0x5660c4a0 <g_tcp_connections>) at tcp/tcp_timer.c:708
| #2  0x5658dd47 in tcp_poll (dev=0x5660bd80 <g_sim_dev>, conn=0x5660c4a0 <g_tcp_connections>) at tcp/tcp_devpoll.c:95
| #3  0x5658b95f in devif_poll_tcp_connections (dev=0x5660bd80 <g_sim_dev>, callback=0x565770f2 <netdriver_txpoll>) at devif/devif_poll.c:601
| #4  0x5658b9ea in devif_poll (dev=0x5660bd80 <g_sim_dev>, callback=0x565770f2 <netdriver_txpoll>) at devif/devif_poll.c:722
| #5  0x56577230 in netdriver_txavail_work (arg=0x5660bd80 <g_sim_dev>) at sim/up_netdriver.c:308
| #6  0x5655999e in work_thread (argc=2, argv=0xf3db5dd0) at wqueue/kwork_thread.c:178
| #7  0x5655983f in nxtask_start () at task/task_start.c:129

A separate work item adds 24 bytes to each conn instance, but in order
to support asynchronous close() I cannot find a better way than adding
a separate work item. For resource-constrained targets, I recommend
that developers enable CONFIG_NET_ALLOC_CONNS, which reduces RAM usage.

Signed-off-by: chao an <anchao@xiaomi.com>
2022-09-22 23:33:00 +08:00
zhanghongyu
9bff29d7e7 udp: add IPVx_PKTINFO related support
Signed-off-by: zhanghongyu <zhanghongyu@xiaomi.com>
2022-09-07 10:49:47 +08:00
Xiang Xiao
e0bb281e7a net: Align the prototype of sock_intf_s::si_ioctl with file_operations::ioctl
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-09-06 22:46:37 +08:00
chao.an
bf6cbbca5d net/tcp: fix devif callback list corruption on tcp_close()
devif_conn_event() can be called recursively from psock_send_eventhandler().
If the tcp event tcp_close_eventhandler() is saved as "next" in the first
devif_conn_event() and released from the second, recursive call, the "next"
pointer in the first devif_conn_event() becomes a wild pointer.

479 uint16_t devif_conn_event(FAR struct net_driver_s *dev, uint16_t flags,
480                           FAR struct devif_callback_s *list)
481 {
482   FAR struct devif_callback_s *next;
...
488   net_lock();
489   while (list && flags)
490     {
...
496       next = list->nxtconn;  <------------------  event tcp_close_eventhandler() on next
...
500       if (list->event != NULL && devif_event_trigger(flags, list->flags))
501         {
...
507           flags = list->event(dev, list->priv, flags);  <---------------- perform psock_send_eventhandler(), event tcp_close_eventhandler() will be removed from tcp_lost_connection()
508         }
...
512       list = next;  <---------------- event tcp_close_eventhandler() has been released, wild pointer
513     }
514
515   net_unlock();
516   return flags;
517 }

The call stack is as below:

Breakpoint 1, tcp_close_eventhandler (dev=0x56607d80 <g_sim_dev>, pvpriv=0x566084a0 <g_tcp_connections>, flags=65) at tcp/tcp_close.c:83
(gdb) bt
| #0  tcp_close_eventhandler (dev=0x56607d80 <g_sim_dev>, pvpriv=0x566084a0 <g_tcp_connections>, flags=65) at tcp/tcp_close.c:83
| #1  0x5658bb57 in devif_conn_event (dev=0x56607d80 <g_sim_dev>, flags=65, list=0x56609498 <g_cbprealloc+312>) at devif/devif_callback.c:507
                    ----------------> devif_conn_event() recursively
| #2  0x56589f8c in tcp_callback (dev=0x56607d80 <g_sim_dev>, conn=0x566084a0 <g_tcp_connections>, flags=65) at tcp/tcp_callback.c:169
| #3  0x565c55e4 in tcp_shutdown_monitor (conn=0x566084a0 <g_tcp_connections>, flags=65) at tcp/tcp_monitor.c:211
| #4  0x565c584b in tcp_lost_connection (conn=0x566084a0 <g_tcp_connections>, cb=0x566094b0 <g_cbprealloc+336>, flags=65) at tcp/tcp_monitor.c:391
| #5  0x565c028a in psock_send_eventhandler (dev=0x56607d80 <g_sim_dev>, pvpriv=0x566084a0 <g_tcp_connections>, flags=65) at tcp/tcp_send_buffered.c:544
                    ----------------> call psock_send_eventhandler() before tcp_close_eventhandler()
| #6  0x5658bb57 in devif_conn_event (dev=0x56607d80 <g_sim_dev>, flags=65, list=0x566094b0 <g_cbprealloc+336>) at devif/devif_callback.c:507
| #7  0x56589f8c in tcp_callback (dev=0x56607d80 <g_sim_dev>, conn=0x566084a0 <g_tcp_connections>, flags=65) at tcp/tcp_callback.c:169
| #8  0x5658e8cc in tcp_input (dev=0x56607d80 <g_sim_dev>, domain=2 '\002', iplen=20) at tcp/tcp_input.c:1059
| #9  0x5658ed77 in tcp_ipv4_input (dev=0x56607d80 <g_sim_dev>) at tcp/tcp_input.c:1355
| #10 0x5658c0a2 in ipv4_input (dev=0x56607d80 <g_sim_dev>) at devif/ipv4_input.c:358
| #11 0x56577017 in netdriver_recv_work (arg=0x56607d80 <g_sim_dev>) at sim/up_netdriver.c:182
| #12 0x5655999e in work_thread (argc=2, argv=0xf3db5dd0) at wqueue/kwork_thread.c:178
| #13 0x5655983f in nxtask_start () at task/task_start.c:129
(gdb) c
Continuing.
Breakpoint 1, tcp_close_eventhandler (dev=0x56607d80 <g_sim_dev>, pvpriv=0x566084a0 <g_tcp_connections>, flags=65) at tcp/tcp_close.c:83
(gdb) bt
| #0  tcp_close_eventhandler (dev=0x56607d80 <g_sim_dev>, pvpriv=0x566084a0 <g_tcp_connections>, flags=65) at tcp/tcp_close.c:83
      ----------------------> "next" corrupted, invalid call to tcp_close_eventhandler()
| #1  0x5658bb57 in devif_conn_event (dev=0x56607d80 <g_sim_dev>, flags=65, list=0x56609498 <g_cbprealloc+312>) at devif/devif_callback.c:507
| #2  0x56589f8c in tcp_callback (dev=0x56607d80 <g_sim_dev>, conn=0x566084a0 <g_tcp_connections>, flags=65) at tcp/tcp_callback.c:169
| #3  0x5658e8cc in tcp_input (dev=0x56607d80 <g_sim_dev>, domain=2 '\002', iplen=20) at tcp/tcp_input.c:1059
| #4  0x5658ed77 in tcp_ipv4_input (dev=0x56607d80 <g_sim_dev>) at tcp/tcp_input.c:1355
| #5  0x5658c0a2 in ipv4_input (dev=0x56607d80 <g_sim_dev>) at devif/ipv4_input.c:358
| #6  0x56577017 in netdriver_recv_work (arg=0x56607d80 <g_sim_dev>) at sim/up_netdriver.c:182
| #7  0x5655999e in work_thread (argc=2, argv=0xf3db5dd0) at wqueue/kwork_thread.c:178
| #8  0x5655983f in nxtask_start () at task/task_start.c:129
(gdb) c
Continuing.
[    2.680000] up_assert: Assertion failed at file:devif/devif_callback.c line: 85 task: lpwork

Signed-off-by: chao.an <anchao@xiaomi.com>
2022-08-30 19:41:18 +08:00
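The hazard above is the classic "save next, then call a handler that may free next" pattern. The sketch below illustrates the failure mode and one defensive pattern (re-validating the saved pointer against the live list before following it); it is an illustration only, not the actual NuttX fix, and the types are simplified stand-ins for devif_callback_s.

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-in for struct devif_callback_s */

struct cb
{
  struct cb *nxtconn;
  uint16_t (*event)(void *priv, uint16_t flags);
  void      *priv;
};

/* Return nonzero if 'node' is still a member of 'list'.  Re-checking the
 * saved 'next' pointer this way (or, equivalently, marking nodes instead
 * of freeing them during traversal) avoids following a pointer to a
 * callback that the just-invoked event handler has already released --
 * the wild-pointer scenario in the backtrace above.
 */

static int cb_still_linked(struct cb *list, struct cb *node)
{
  for (; list != NULL; list = list->nxtconn)
    {
      if (list == node)
        {
          return 1;
        }
    }

  return 0;
}

static uint16_t conn_event(struct cb *list, uint16_t flags)
{
  struct cb *head = list;

  while (list != NULL && flags != 0)
    {
      struct cb *next = list->nxtconn;

      if (list->event != NULL)
        {
          /* The handler may free 'next' (e.g. via tcp_lost_connection) */

          flags = list->event(list->priv, flags);
        }

      list = cb_still_linked(head, next) ? next : NULL;
    }

  return flags;
}
```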
chao.an
162fcd10ca net: cleanup pvconn reference to avoid confusion
More reference:
https://github.com/apache/incubator-nuttx/pull/5252
https://github.com/apache/incubator-nuttx/pull/5434

Signed-off-by: chao.an <anchao@xiaomi.com>
2022-08-26 20:58:11 +08:00
chao.an
ea621599fd net: remove pvconn reference from all devif callback
Do not use 'pvconn' argument to get the connection pointer since
pvconn is normally NULL for some events like NETDEV_DOWN.
Instead, the connection pointer can be reliably obtained from the
corresponding private pointer.

Signed-off-by: chao.an <anchao@xiaomi.com>
2022-08-26 20:58:11 +08:00
zhanghongyu
e03c2c321a tcp: reset conn->nrtx when ack received
Otherwise, when a long test triggers multiple timeout retransmissions,
the later timeout retransmissions are always delayed by between 24 and 48 seconds.

Signed-off-by: zhanghongyu <zhanghongyu@xiaomi.com>
2022-08-17 21:35:09 +03:00
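Why resetting conn->nrtx matters: with exponential backoff the retransmission timeout doubles on every consecutive timeout, so a counter that is never cleared when an ACK arrives keeps later, unrelated retransmissions at the inflated delay. A self-contained sketch of the arithmetic (constants are illustrative, not the NuttX timer values):

```c
#include <stdio.h>

#define TCP_BASE_RTO    3   /* base retransmission timeout in seconds (illustrative) */
#define TCP_MAX_BACKOFF 4   /* cap the shift so the RTO stays bounded */

/* Classic exponential backoff: each consecutive timeout doubles the RTO.
 * If nrtx is never reset when an ACK finally arrives, a later isolated
 * loss starts from the inflated value (e.g. the 24..48 s noted above)
 * instead of from the base RTO.
 */

static unsigned int tcp_rto(unsigned int nrtx)
{
  unsigned int shift = nrtx < TCP_MAX_BACKOFF ? nrtx : TCP_MAX_BACKOFF;
  return TCP_BASE_RTO << shift;
}

int main(void)
{
  for (unsigned int nrtx = 0; nrtx <= 5; nrtx++)
    {
      printf("nrtx=%u -> rto=%us\n", nrtx, tcp_rto(nrtx));
    }

  /* After an ACK is received the counter goes back to zero, so the
   * next timeout again waits only TCP_BASE_RTO seconds.
   */

  return 0;
}
```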
Xiang Xiao
ba9486de4a iob: Remove iob_user_e enum and related code
since it is impossible to track producer and consumer
correctly if the TCP/IP stack passes IOBs directly to netdev.

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-08-15 08:41:20 +03:00
Peter van der Perk
26dbdba5d8 [TCP] Close RAM usage optimization 2022-07-29 23:51:06 +08:00
zhanghongyu
ef660083c8 tcp: check option length before d_len update
Signed-off-by: zhanghongyu <zhanghongyu@xiaomi.com>
2022-07-26 12:05:06 +03:00
chao.an
1c80675e87 net/tcp: fix regression of invalid rexmit_seq update in buffer mode
Fix a regression of invalid rexmit_seq update in buffer mode:
rexmit_seq should not be used instead of sndseq in fast retransmission;
the sndseq of the retransmitted packet does not need to be re-updated.

Signed-off-by: chao.an <anchao@xiaomi.com>
2022-07-12 11:04:39 +03:00
chao.an
8ae8c10954 net/poll: fix race condition if connect free before poll teardown
Net poll teardown is not protected by the net lock. If the conn is
released before teardown, an assertion failure is triggered in the
free-dev-callback path. This patch adds the net lock around net poll
teardown to fix the race condition.

nuttx/libs/libc/assert/lib_assert.c:36
nuttx/net/devif/devif_callback.c:85
nuttx/net/tcp/tcp_netpoll.c:405
nuttx/fs/vfs/fs_poll.c:244
nuttx/fs/vfs/fs_poll.c:500

Signed-off-by: chao.an <anchao@xiaomi.com>
2022-07-09 19:11:42 +08:00
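A generic illustration of the shape of that fix: the teardown must run under the same lock that guards connection release. In NuttX that lock is net_lock()/net_unlock(); the sketch below uses a pthread mutex and stand-in types so it is self-contained, and is not the actual tcp_netpoll.c code.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

/* The race above: the conn (and its poll callback) can be freed between
 * "look up callback" and "free callback" unless teardown holds the same
 * lock that protects connection release.
 */

struct conn
{
  bool  in_use;
  void *poll_cb;               /* callback registered by poll() setup */
};

static pthread_mutex_t g_net_lock = PTHREAD_MUTEX_INITIALIZER;

static void poll_teardown(struct conn *conn)
{
  pthread_mutex_lock(&g_net_lock);     /* the added protection */

  if (conn->in_use && conn->poll_cb != NULL)
    {
      free(conn->poll_cb);             /* safe: conn cannot be released here */
      conn->poll_cb = NULL;
    }

  pthread_mutex_unlock(&g_net_lock);
}
```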
chao.an
9bdeed73e2 net/tcp: fix assertion of fallback connection alloc
When the free connection list has no entry left to allocate a new
instance, the TCP stack reuses a currently closed connection. But if
the handle has not been released by the user via close(2), the
reference count of the connection remains non-zero, which causes the
assertion to fail. So when the handle has not been released, we should
not reuse such a conn instance while it is being actively closed, and
we should ensure that the reference count is assigned within the net
lock protection.

|(gdb) bt
|#0  up_assert (filename=0x565c78f7 "tcp/tcp_conn.c", lineno=771) at sim/up_assert.c:75
|#1  0x56566177 in _assert (filename=0x565c78f7 "tcp/tcp_conn.c", linenum=771) at assert/lib_assert.c:36
|#2  0x5657d620 in tcp_free (conn=0x565fb3e0 <g_tcp_connections>) at tcp/tcp_conn.c:771
|#3  0x5657d5a1 in tcp_alloc (domain=2 '\002') at tcp/tcp_conn.c:700
|#4  0x565b1f50 in inet_tcp_alloc (psock=0xf3dea150) at inet/inet_sockif.c:144
|#5  0x565b2082 in inet_setup (psock=0xf3dea150, protocol=0) at inet/inet_sockif.c:253
|#6  0x565b1bf0 in psock_socket (domain=2, type=1, protocol=0, psock=0xf3dea150) at socket/socket.c:121
|#7  0x56588f5f in socket (domain=2, type=1, protocol=0) at socket/socket.c:278
|#8  0x565b11c0 in hello_main (argc=1, argv=0xf3dfab10) at hello_main.c:35
|#9  0x56566631 in nxtask_startup (entrypt=0x565b10ef <hello_main>, argc=1, argv=0xf3dfab10) at sched/task_startup.c:70
|#10 0x565597fa in nxtask_start () at task/task_start.c:134

Signed-off-by: chao.an <anchao@xiaomi.com>
2022-07-09 09:37:49 +09:00
YAMAMOTO Takashi
19eb4d7d77 Revert "net/tcp: discard connect reference before free"
This reverts commit b88a1fd7fd. [1]

Because:

* It causes assertion failures like [2].

* I don't understand what it attempted to fix.

[1]
```
commit b88a1fd7fd
Author: chao.an <anchao@xiaomi.com>
Date:   Sat Jul 2 13:17:41 2022 +0800

    net/tcp: discard connect reference before free

    connect reference should be set to 0 before free

    Signed-off-by: chao.an <anchao@xiaomi.com>
```

[2]
```
    #0  up_assert (filename=0x5516d0 "tcp/tcp_conn.c", lineno=771) at sim/up_assert.c:75
    #1  0x000000000040a4bb in _assert (filename=0x5516d0 "tcp/tcp_conn.c", linenum=771) at assert/lib_assert.c:36
    #2  0x000000000042a2ad in tcp_free (conn=0x597fe0 <g_tcp_connections+384>) at tcp/tcp_conn.c:771
    #3  0x000000000053bdc2 in tcp_close_disconnect (psock=0x7f58d1abbd80) at tcp/tcp_close.c:331
    #4  0x000000000053bc69 in tcp_close (psock=0x7f58d1abbd80) at tcp/tcp_close.c:366
    #5  0x000000000052eefe in inet_close (psock=0x7f58d1abbd80) at inet/inet_sockif.c:1689
    #6  0x000000000052eb9b in psock_close (psock=0x7f58d1abbd80) at socket/net_close.c:102
    #7  0x0000000000440495 in sock_file_close (filep=0x7f58d1b35f40) at socket/socket.c:115
    #8  0x000000000043b8b6 in file_close (filep=0x7f58d1b35f40) at vfs/fs_close.c:74
    #9  0x000000000043ab22 in nx_close (fd=9) at inode/fs_files.c:544
    #10 0x000000000043ab7f in close (fd=9) at inode/fs_files.c:578
```
2022-07-08 16:11:24 +08:00
chao.an
b88a1fd7fd net/tcp: discard connect reference before free
connect reference should be set to 0 before free

Signed-off-by: chao.an <anchao@xiaomi.com>
2022-07-02 16:07:34 +08:00
Xiang Xiao
abc72ad128 net: Ensure sendmsg and sendfile return -EAGAIN in case of timeout
instead of -ETIMEDOUT, as specified here:
https://pubs.opengroup.org/onlinepubs/009604599/functions/sendmsg.html
https://man7.org/linux/man-pages/man2/sendfile.2.html

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-06-28 06:19:13 +03:00
chao.an
669fb83706 net/tcp(buffered): retransmit only the earliest unacknowledged segment
Retransmit only one segment, the earliest not-acknowledged one
(according to RFC 6298 (5.4)). The issue is the same as it was
in tcp_send_unbuffered.c and tcp_sendfile.c.

Signed-off-by: chao.an <anchao@xiaomi.com>
2022-06-16 18:14:29 +08:00
chao.an
845e259ac7 net/tcp: d_appdata should skip the TCP-specific option field
The application data field should not touch data of the IP layer.

Signed-off-by: chao.an <anchao@xiaomi.com>
2022-06-15 20:28:10 +08:00
chao.an
0636c17a63 net/tcp: perform the closing handshake in the background
The time spent in the TCP closing handshake (close(2)) is affected by
network jitter; in the worst environments a wireless device may never
receive the last ACK. With this change we move the tcp close callback
into the background and free the resources from the work queue, which
avoids blocking the user application for a long time, unable to return
from the call to close().

Signed-off-by: chao.an <anchao@xiaomi.com>
2022-06-09 18:19:25 +03:00
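A sketch of deferring the resource free to the work queue, assuming the NuttX <nuttx/wqueue.h> API with the low-priority queue (LPWORK) enabled; the conn structure and function names are simplified stand-ins, not the actual tcp_close.c code.

```c
#include <nuttx/config.h>
#include <nuttx/wqueue.h>

/* Simplified conn with an embedded work item (the "extra 24 bytes"
 * mentioned above).  Fields other than 'clswork' are stand-ins.
 */

struct my_conn_s
{
  struct work_s clswork;   /* dedicated work item for deferred close */
  int           crefs;     /* reference count */
};

/* Runs on the low-priority work queue, outside the caller's context,
 * once the closing handshake has finished (or timed out).
 */

static void conn_close_work(FAR void *arg)
{
  FAR struct my_conn_s *conn = (FAR struct my_conn_s *)arg;

  /* Free the connection resources here (tcp_free() in the real stack) */

  (void)conn;
}

/* Called from the close event handler: queue the free instead of doing
 * it inline, so close(2) is not blocked waiting for the last ACK.
 */

static void conn_schedule_close(FAR struct my_conn_s *conn)
{
  work_queue(LPWORK, &conn->clswork, conn_close_work, conn, 0);
}
```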
Xiang Xiao
9d4c708913 net/tcp: Search conn list again to avoid the race condition in tcp_timer_expiry
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-06-07 20:15:41 +03:00
Xiang Xiao
298b4aba0c net/tcp: Hold the net lock in tcp_timer_expiry
to follow the call convention for d_txavail

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-06-07 20:15:41 +03:00
Xiang Xiao
7ec6b4c7dd Change depends on SCHED_[L|H]PWORK to SCHED_WORKQUEUE
since the code can map an unsupported work queue to the
supported one. Also remove 'select SCHED_WORKQUEUE' from
Kconfig, since SCHED_[L|H]PWORK already does the selection.

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-05-28 18:41:51 +03:00
zhanghongyu
3f8b71924f tcp: move wd_timer from wifi driver to tcp stack
Signed-off-by: zhanghongyu <zhanghongyu@xiaomi.com>
2022-05-28 16:29:51 +08:00
Xiang Xiao
753aa98ca7 net/tcp: Zero keeptimer in case caller set keepalive to false
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-05-19 09:08:36 -03:00
Xiang Xiao
d8b97d7ae8 net/tcp: Use the relative value for keep alive timer
unify the timer process logic as other tcp state

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-05-18 18:40:41 +03:00
Xiang Xiao
2d3ee157ce net/tcp: Use the decrease timer in TCP_TIME_WAIT/TCP_FIN_WAIT_2
unify the timer process logic as other tcp state

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-05-18 18:40:41 +03:00
Sebastien Lorquet
6ea8bac1c5 net: fix the build when CONFIG_NET_TCP_WRITE_BUFFERS is not enabled 2022-05-18 07:54:17 +09:00
Xiang Xiao
9072eecc30 sched/wqueue: Change the return type of work_notifier_teardown to void
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-05-14 00:35:29 +03:00
田昕
670ea1e5fb net/tcp: make initial tcp port more random
Signed-off-by: 田昕 <tianxin7@xiaomi.com>
2022-04-27 19:46:23 +03:00
zhanghongyu
c50d7e174f net: tcp/udp/icmp/icmpv6 add FIONSPACE support
Signed-off-by: zhanghongyu <zhanghongyu@xiaomi.com>
2022-04-02 13:39:38 +08:00
Xiang Xiao
7598070508 net: Remove the unnecessary initialization code
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-03-12 19:24:17 +02:00
Xiang Xiao
7028531e74 net/tcp: Remove tcp_listen_initialize
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-03-12 19:24:17 +02:00
Alexander Lunev
404ceffae2 tcp: added debug asserts and logging to investigate the rare (conn->dev == NULL) bug in callback handlers 2022-02-26 11:48:07 -03:00
chao.an
f073ed3a44 net/tcp: add support for send timeout on buffer mode
Signed-off-by: chao.an <anchao@xiaomi.com>
2022-02-18 16:04:55 +08:00
chao.an
1911ae2192 net/tcp: add interface tcp_wrbuffer_timedalloc()
Add a new interface to support allocating a wrbuffer with a timeout.

Signed-off-by: chao.an <anchao@xiaomi.com>
2022-02-17 21:27:39 +01:00
Alexander Lunev
774994b951 net/tcp: support for FIN+ACK case in tcp send event handlers 2022-02-13 03:20:18 +08:00
chao.an
e749f6ca7e net/tcp/monitor: do not migrate the state to close
1. Remove the unnecessary interface tcp_close_monitor().

Socket flags (s_flags) are global state for the net connection;
remove the incorrect update when stopping the monitor.

2. Do not start the tcp monitor from the duplicated psock.

The tcp monitor has already been registered in the connect callback.

------------------------------------------------------------
This patch also fixes the telnet issue reported by:
https://github.com/apache/incubator-nuttx/pull/5434#issuecomment-1035600651

The original session fd is closed after dup, so the connection state
incorrectly migrated to closed:

drivers/net/telnet.c:
 977 static int telnet_session(FAR struct telnet_session_s *session)
 ...
 1031   ret = psock_dup2(psock, &priv->td_psock);
 ...
 1082   nx_close(session->ts_sd);

Signed-off-by: chao.an <anchao@xiaomi.com>
2022-02-11 18:56:40 +09:00
chao.an
8fb2468785 net/tcp: remove the socket hook reference from netdev callback
Signed-off-by: chao.an <anchao@xiaomi.com>
2022-02-10 15:04:33 -03:00
chao.an
9317626c32 net/inet: move socket linger into socket_conn_s
Signed-off-by: chao.an <anchao@xiaomi.com>
2022-02-10 15:04:33 -03:00
chao.an
3fce144aeb net/inet: move recv/send timeout into socket_conn_s
Signed-off-by: chao.an <anchao@xiaomi.com>
2022-02-10 15:04:33 -03:00
chao.an
cc6add58dc net/inet: move socket options into socket_conn_s
Signed-off-by: chao.an <anchao@xiaomi.com>
2022-02-10 15:04:33 -03:00
chao.an
99cde13a11 net/inet: move socket flags into socket_conn_s
Signed-off-by: chao.an <anchao@xiaomi.com>
2022-02-10 15:04:33 -03:00
chao.an
8f63596063 net/tcp: replace the common connect prologue
Signed-off-by: chao.an <anchao@xiaomi.com>
2022-02-10 15:04:33 -03:00
chao.an
39e142243d net/tcp/udp: move the send callback into tcp/udp structure
Signed-off-by: chao.an <anchao@xiaomi.com>
2022-02-10 15:04:33 -03:00
chao.an
56b5ae0640 net/tcp/netdev/mld: correct the netlock handling
Signed-off-by: chao.an <anchao@xiaomi.com>
2022-02-03 11:09:18 -03:00
Alexander Lunev
b2f3cefe3d sim/netdev,tapdev: implemented emulation of TX done and RX ready interrupts
and removed two tcp_send_txnotify() calls from tcp_sendfile (they are not needed anymore).

As a result, the TX throughput of both the tcp_send_buffered and tcp_send_unbuffered
is significantly boosted in case of TUN/TAP network device.
2022-01-28 18:16:42 +08:00
Alexander Lunev
2a6de301ee net/tcp: transformed NET_TCP_FAST_RETRANSMIT_WATERMARK option to boolean.
According to RFC 5681 (3.2) the TCP Fast Retransmit algorithm should start
if the threshold of 3 duplicate ACKs is reached.
Thus the threshold should be a constant, not an integer option.
2022-01-26 11:50:48 +08:00
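A sketch of the RFC 5681 (3.2) duplicate-ACK rule with the threshold as a fixed constant, as the commit above argues it should be. The state structure and helper are illustrative stand-ins, not the real tcp_conn_s bookkeeping.

```c
#include <stdbool.h>
#include <stdint.h>

#define TCP_FAST_RETRANSMIT_THRESH 3   /* fixed by RFC 5681 (3.2) */

struct dupack_state
{
  uint32_t last_ack;   /* highest ACK number seen so far */
  uint8_t  dupacks;    /* consecutive duplicate ACKs */
};

/* RFC 5681: a duplicate ACK acknowledges nothing new, carries no data
 * and does not update the window.  On the third one, fast retransmit
 * should fire and resend the earliest unacknowledged segment.
 */

static bool tcp_ack_triggers_fast_retransmit(struct dupack_state *st,
                                             uint32_t ackno,
                                             bool carries_data,
                                             bool window_changed)
{
  if (ackno == st->last_ack && !carries_data && !window_changed)
    {
      if (++st->dupacks >= TCP_FAST_RETRANSMIT_THRESH)
        {
          st->dupacks = 0;
          return true;
        }
    }
  else
    {
      st->last_ack = ackno;
      st->dupacks  = 0;
    }

  return false;
}
```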
Alexander Lunev
8be9cb9f72 net/tcp/sendfile: notify the device driver of the availability of TX data on TCP retransmission
(as well as on sending normal TCP packets).
2022-01-26 02:01:25 +08:00
Alexander Lunev
7e748e63dd net/tcp/tcp_sendfile: optimized out sendfile_txnotify() function 2022-01-26 02:01:06 +08:00
Alexander Lunev
ad25c43983 net/tcp/sendfile: fast retransmit on duplicate acknowledgments (RFC 5681).
(the same as it was implemented in tcp_send_unbuffered.c)
2022-01-25 16:30:38 +08:00
Alexander Lunev
eec94132c4 net/tcp/sendfile: removed excessive overwrites of conn->sndseq
(conn->sndseq was updated in multiple places, which was unreasonable and complicated).
This optimization is the same as it was done for tcp_send_unbuffered.
2022-01-22 00:43:53 +08:00
Alexander Lunev
338b122b2b net/tcp/sendfile: fixed an issue with unackseq calculation.
Wrong unackseq calculation locked conn->tx_unacked at non-zero values
even if all ACKs were received.
This issue is the same as it was with tcp_send_unbuffered.
2022-01-22 00:42:29 +08:00
Alexander Lunev
c9e32dd4a4 tcp: fixed warning: ISO C90 forbids mixed declarations and code 2022-01-22 00:41:42 +08:00
Alexander Lunev
64dd669749 net/tcp/sendfile: retransmit only the earliest unacknowledged segment
(according to RFC 6298 (5.4)). The issue is the same as it was in tcp_send_unbuffered.c.
2022-01-20 18:37:39 +08:00
Petro Karashchenko
08043fb5bc net: unify FAR keyword usage for all net buffer memory mapped buffers
Signed-off-by: Petro Karashchenko <petro.karashchenko@gmail.com>
2022-01-20 01:42:56 +08:00
Alexander Lunev
6bb7a92a9a net/tcp/tcp_send*: added debug asserts for TCP_ACKDATA, TCP_REXMIT and TCP_DISCONN_EVENTS flags 2022-01-19 10:45:38 +08:00
Alexander Lunev
79e609a8c8 net/tcp/sendfile: swapped the location of TCP_DISCONN_EVENTS and TCP_ACKDATA conditions towards tcp_send_unbuffered.c unification 2022-01-19 00:13:38 +08:00
Petro Karashchenko
9551de7115 net: use HTONS, NTOHS, HTONL, NTOHL macro in kernel code
Signed-off-by: Petro Karashchenko <petro.karashchenko@gmail.com>
2022-01-18 10:59:47 +01:00
Alexander Lunev
5b13797cce net/tcp/tcp_send*: reliably obtain the TCP connection pointer in TCP event handlers
Do not use pvconn argument to get the TCP connection pointer because pvconn is
normally NULL for some events like NETDEV_DOWN. Instead, the TCP connection pointer
can be reliably obtained from the corresponding TCP socket.
2022-01-18 16:14:38 +08:00
Alexander Lunev
f61f276120 net/tcp/sendfile: TCP retransmission could not start because of incorrect snd_ackcb callback handling:
Both the snd_ackcb and snd_datacb callbacks were created and destroyed right after sending every packet.
Whenever TCP_REXMIT event occurred due to TCP send timeout, TCP_REXMIT was ignored because
the snd_ackcb callback had been destroyed by that time.
The issue is fixed as follows:
- both the snd_ackcb and snd_datacb callbacks are combined into one snd_cb callback
  (the same way as in tcp_send_unbuffered.c).
- the snd_cb callback lives until all requested data (via sendfile) is sent,
  including all ACKs and possible retransmissions.

As a positive side effect of the code optimization / fix, sendfile TCP payload throughput is increased.
2022-01-18 02:03:40 +08:00
Alexander Lunev
0f080cdeaf net/tcp/sendfile: NET_TCP_WRITE_BUFFERS and NET_SENDFILE were inconsistent with each other:
tcp_sendfile() reads data directly from a file and does not use NET_TCP_WRITE_BUFFERS data flow
even if CONFIG_NET_TCP_WRITE_BUFFERS option is enabled.
Despite this, tcp_sendfile relied on NET_TCP_WRITE_BUFFERS specific flow control variables that
were idle during sendfile operation. Thus it was a total inconsistency.

E.g. because of the issue, TCP socket used by sendfile() operation never issued
FIN packet on close() command, and the TCP connection hung up.

As a result of the fix, simultaneously enabled CONFIG_NET_TCP_WRITE_BUFFERS and
CONFIG_NET_SENDFILE options can coexist.
2022-01-17 01:42:41 +08:00
Petro Karashchenko
8d3bf05fd2 include: fix double include pre-processor guards
Signed-off-by: Petro Karashchenko <petro.karashchenko@gmail.com>
2022-01-16 11:11:14 -03:00
Alexander Lunev
e9ab3adf23 net/tcp(unbuffered): advance sndseq by +1 because SYN and FIN occupy one sequence number (RFC 793) 2022-01-03 12:18:44 +09:00
Alexander Lunev
0afb1d8dbb net/tcp(unbuffered): fast retransmit on duplicate acknowledgments 2022-01-02 23:25:09 +08:00
chao.an
38b7b3d26a net/tcp: add support for CONFIG_NET_ALLOC_CONNS 2022-01-01 20:40:02 +08:00
Alexander Lunev
2b60468845 net/tcp(unbuffered): removed excessive overwrites of conn->sndseq
(conn->sndseq was updated in multiple places, which was unreasonable and complicated).
2021-12-29 05:35:23 -06:00
Alexander Lunev
e68ffb9f99 net/tcp(unbuffered): fixed an issue with unackseq calculation.
Wrong unackseq calculation locked conn->tx_unacked at non-zero values
even if all ACKs were received. Thus unbuffered psock_tcp_send() never completed.
2021-12-27 20:59:48 -06:00
Alexander Lunev
19dc121a4f net/tcp(unbuffered): fixed an issue with tx_unacked overflow that occurred if NET_TCP_WINDOW_SCALE option was enabled.
If the remote TCP receiver advertised TCP window size greater than 64 KB
and TCP ACK packets returned to the NuttX TCP sender with a significant delay,
tx_unacked variable overflowed and further TCP send stalled forever
(until TCP re-connection).
2021-12-27 11:05:01 -06:00
chao.an
0ee7400fdf net/tcp: fix send deadlock on disconnect
Signed-off-by: chao.an <anchao@xiaomi.com>
2021-12-16 01:29:10 -06:00
Alexander Lunev
1e07b6d528 net/tcp(unbuffered): retransmit only the earliest unacknowledged segment
(according to RFC 6298 (5.4)).
2021-11-10 12:21:07 -06:00
YAMAMOTO Takashi
ecd6a3572b net/tcp/Kconfig: Remove NET_TCP_SPLIT
While it's a neat idea, it doesn't work well in reality.

* Many of modern tcp stacks don't obey the "ack every other packet"
  rule these days. (Linux, macOS, ...)

* Even if a traditional TCP implementation is assumed, we can't
  predict/control which packets are acked reliably. For example,
  window updates can easily mess up our strategy.
2021-11-04 13:32:57 -05:00
YAMAMOTO Takashi
28d168e1b8 tcp_send_unbuffered.c: Fix nxstyle errors 2021-11-04 13:32:57 -05:00
YAMAMOTO Takashi
1550a525e9 tcp_send_unbuffered.c: unifdef -UCONFIG_NET_TCP_SPLIT 2021-11-04 13:32:57 -05:00
Alexander Lunev
1e25602678 net/can,icmp,icmpv6,tcp,tcp_timer,udp: device should poll only those connections that are bound to the device.
tcp_timer: eliminated false decrements of conn->timer in case of multiple network adapters.
The false timer decrements sometimes provoked TCP spurious retransmissions due to premature timeouts.
2021-10-11 23:09:00 -07:00
chao.an
c132e5bed4 net/tcp: sanity check for the listen address
Signed-off-by: chao.an <anchao@xiaomi.com>
2021-09-23 23:07:57 -07:00
Alexander Lunev
36fbedcbfc net/devif/devif_callback.c: corrected the connection event list to work as FIFO instead of LIFO.
In case of enabled packet forwarding mode, packets were forwarded in a reverse order
because of LIFO behavior of the connection event list.
The issue was exposed only during high network traffic: the event list started to grow,
which resulted in reordering packets inside groups of several packets,
like the following: 3, 2, 1, 6, 5, 4, 8, 7, etc.

Remarks concerning the connection event list implementation:
* Now the queue (list) is FIFO as it should be.
* The list is singly linked.
* The list has a head pointer (inside of outer net_driver_s structure),
  and a tail pointer is added into outer net_driver_s structure.
* The list item is devif_callback_s structure.
  It still has two pointers to two different list chains (*nxtconn and *nxtdev).
* As before the first argument (*dev) of the list functions can be NULL,
  while the other argument (*list) is effective (not NULL).
* An extra (*tail) argument is added to devif_callback_alloc()
  and devif_conn_callback_free() functions.
* devif_callback_alloc() time complexity is O(1) (i.e. O(n) to fill the whole list).
* devif_callback_free() time complexity is O(n) (i.e. O(n^2) to empty the whole list).
* devif_conn_event() time complexity is O(n).
2021-09-18 21:01:39 -05:00
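A minimal sketch of a singly linked FIFO with head and tail pointers, matching the remarks above (O(1) append, O(n) removal). The node and list types are stand-ins for devif_callback_s and the pointers embedded in net_driver_s, not the actual NuttX definitions.

```c
#include <stddef.h>

/* Stand-ins for callbacks chained on a per-connection list */

struct cbnode
{
  struct cbnode *nxtconn;
};

struct cblist
{
  struct cbnode *head;    /* oldest callback: events fire in FIFO order */
  struct cbnode *tail;    /* newest callback: O(1) append */
};

/* O(1) append at the tail keeps event delivery in registration order,
 * so forwarded packets go out 1, 2, 3, ... instead of the reversed
 * groups produced by LIFO insertion at the head.
 */

static void cblist_append(struct cblist *list, struct cbnode *node)
{
  node->nxtconn = NULL;

  if (list->tail != NULL)
    {
      list->tail->nxtconn = node;
    }
  else
    {
      list->head = node;
    }

  list->tail = node;
}

/* Removal still walks the list (O(n)), as noted in the commit */

static void cblist_remove(struct cblist *list, struct cbnode *node)
{
  struct cbnode **pp   = &list->head;
  struct cbnode  *prev = NULL;

  for (; *pp != NULL; prev = *pp, pp = &(*pp)->nxtconn)
    {
      if (*pp == node)
        {
          *pp = node->nxtconn;
          if (list->tail == node)
            {
              list->tail = prev;
            }

          break;
        }
    }
}
```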
Alin Jerpelea
91a5f90a7f author: UVC Ingenieure : update licenses to Apache
Gregory Nutt has submitted the SGA
UVC Ingenieure has submitted the SGA
Max Holtzberg has submitted the ICLA

as a result we can migrate the licenses to Apache.

Signed-off-by: Alin Jerpelea <alin.jerpelea@sony.com>
2021-09-15 15:57:55 +08:00
YAMAMOTO Takashi
42f1851ca6 tcp_send_buffered.c: Fix snd_wnd
snd_wnd is an offset from the acked sequence number.
2021-08-25 20:56:05 +08:00
YAMAMOTO Takashi
1b82f1c749 tcp_input: snd_wnd processing
* Do not accept the window in old segments.
  Implement SND.WL1/WL2 things in the RFC.

* Do not accept the window in the segment w/o ACK bit set.
  The window is an offset from the ack seq.
  (maybe it's simpler to just drop segments w/o ACK though)

* Subtract snd_wnd by the amount of the ack advancement.
2021-08-25 20:56:05 +08:00
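The first bullet above is the RFC 793 SND.WL1/SND.WL2 rule. A sketch of that window-update check using wrap-safe sequence comparison; the field names follow the RFC, not the exact tcp_input.c bookkeeping.

```c
#include <stdbool.h>
#include <stdint.h>

/* Wrap-safe "a < b" for 32-bit TCP sequence numbers */

static bool seq_lt(uint32_t a, uint32_t b)
{
  return (int32_t)(a - b) < 0;
}

struct snd_state
{
  uint32_t snd_wnd;   /* peer's advertised window */
  uint32_t snd_wl1;   /* SEG.SEQ of last segment used to update snd_wnd */
  uint32_t snd_wl2;   /* SEG.ACK of last segment used to update snd_wnd */
};

/* RFC 793 (3.9, "check the ACK field"): take the window only from
 * segments that carry ACK and are not older than the segment last used
 * for a window update, i.e. SND.WL1 < SEG.SEQ or
 * (SND.WL1 == SEG.SEQ and SND.WL2 <= SEG.ACK).
 */

static void maybe_update_window(struct snd_state *st, bool ack_set,
                                uint32_t seg_seq, uint32_t seg_ack,
                                uint32_t seg_wnd)
{
  if (!ack_set)
    {
      return;                 /* the window is an offset from the ack seq */
    }

  if (seq_lt(st->snd_wl1, seg_seq) ||
      (st->snd_wl1 == seg_seq && !seq_lt(seg_ack, st->snd_wl2)))
    {
      st->snd_wnd = seg_wnd;
      st->snd_wl1 = seg_seq;
      st->snd_wl2 = seg_ack;
    }
}
```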
chao.an
9701a678bd net/tcp: add nonblock connect(2) support
Signed-off-by: chao.an <anchao@xiaomi.com>
2021-08-19 19:19:05 -07:00
YAMAMOTO Takashi
b815a2c3a8 tcp_input: Don't put back sndseq on an old ack 2021-08-06 21:17:25 -07:00
YAMAMOTO Takashi
e53f989997 tcp_rexmit: advance conn->sndseq
Otherwise, we use an old sequence number when sending a
non-data segment (e.g. a window update).
The peer might consider such a segment stale and ignore it.
2021-08-06 21:17:25 -07:00
YAMAMOTO Takashi
b5bb0d56ad tcp: fix an assertion in "fix iob allocation deadlock" commit
Fix a wrong assertion in:

```
commit 98ec46d726
Author: YAMAMOTO Takashi <yamamoto@midokura.com>
Date:   Tue Jul 20 09:10:43 2021 +0900

    tcp_send_buffered.c: fix iob allocation deadlock

    Ensure to put the wrb back onto the write_q when blocking
    on iob allocation. Otherwise, it can deadlock with other
    threads doing the same thing.
```

I forgot to submit this with https://github.com/apache/incubator-nuttx/pull/4257
2021-08-05 23:50:06 -07:00
YAMAMOTO Takashi
1d3594ba07 tcp_send_buffered.c: Fix broken retransmit
With an application using mbedtls, I observed retransmitted segments
with corrupted user data, detected by the peer TLS during MAC processing.

Looking at the packet dump, I suspect that a wrb which has been put back
onto the write_q for retransmission was partially sent but fully acked.
Note: it's normal for a retransmission to be acked before sent.

In that case, the bug fixed in this commit would cause the wrb to have
a wrong sequence number, possibly the same as the next wrb. It matches
what I saw in the packet dump. That is, the broken segments contain the
payload identical to one of the previous segment.
2021-08-02 11:30:01 -07:00
YAMAMOTO Takashi
98ec46d726 tcp_send_buffered.c: fix iob allocation deadlock
Ensure to put the wrb back onto the write_q when blocking
on iob allocation. Otherwise, it can deadlock with other
threads doing the same thing.
2021-08-02 11:25:55 -07:00
chao.an
7d4502aca6 net/socket: add SO_SNDBUF support
Signed-off-by: chao.an <anchao@xiaomi.com>
2021-07-20 20:24:58 -07:00
chao.an
f52757f90a net/wrb: add tcp/udp_inqueue_wrb_size() interface
Signed-off-by: chao.an <anchao@xiaomi.com>
2021-07-20 20:24:58 -07:00
YAMAMOTO Takashi
09f3a1ec8e tcp_send_buffered: throttle IOB allocations for send
Consider a bi-directional TCP connection:

1. we use all IOBs for tx queue
2. we advertize zero recv window because we have no free IOBs
3. if the peer tcp does the same thing,
   both sides advertize zero window and can not drain the tx queue.

For a similar stall to happen, the peer doesn't need to be
a naive tcp implementation like nuttx. A naive application blocking
on send() without draining its read buffer is enough.
(Probably such an application should be fixed to drain rx even
when tx is full. However, it's another story.)

This commit avoids the situation by preventing tx from grabbing
all the IOBs in the first place (assuming CONFIG_IOB_THROTTLE > 0).
2021-07-14 15:08:18 +08:00
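A toy illustration of the CONFIG_IOB_THROTTLE idea: throttled (TX) allocations are refused once only a reserve remains, so RX can always make progress and the receive window never has to collapse just because TX queued everything. The counters are stand-ins; the real iob_alloc() blocks or falls back rather than simply failing.

```c
#include <stdbool.h>

#define IOB_NBUFFERS   32   /* total iob count (illustrative)         */
#define IOB_THROTTLE    8   /* reserve kept back from throttled users */

static int g_navail = IOB_NBUFFERS;

/* The TX path asks with throttled=true and is refused once only the
 * reserve is left; the RX path (throttled=false) can still allocate.
 */

static bool iob_try_reserve(bool throttled)
{
  int floor = throttled ? IOB_THROTTLE : 0;

  if (g_navail > floor)
    {
      g_navail--;
      return true;
    }

  return false;
}

static void iob_release(void)
{
  g_navail++;
}
```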
chao.an
83f7c08f65 net/tcp: only print the error when disabling TCP_NODELAY
Since we do not implement Nagle's algorithm,
the TCP_NODELAY socket option is enabled by default.

Change-Id: I0c8619bb06cf418f7eded5bd72ac512b349cacc5
Signed-off-by: chao.an <anchao@xiaomi.com>
2021-07-13 09:44:19 -03:00
Huang Qi
e5c278981a net: Rename IP_TTL to IP_TTL_DEFAULT
Since an SOL option IP_TTL exists, we should rename this IP_TTL
in netconfig.h to avoid confusion.

Signed-off-by: Huang Qi <huangqi3@xiaomi.com>
Change-Id: Ib04c36553f23bce8d362e97294a8b83eaa050cf3
2021-07-12 16:30:37 -03:00
Xiang Xiao
906cb8b0f4 net/tcp: tcp_sendfile needs to restore the file offset at the end
quote from https://man7.org/linux/man-pages/man2/sendfile.2.html:
If offset is not NULL, then it points to a variable holding the
file offset from which sendfile() will start reading data from
in_fd.  When sendfile() returns, this variable will be set to the
offset of the byte following the last byte that was read.  If
offset is not NULL, then sendfile() does not modify the file
offset of in_fd; otherwise the file offset is adjusted to reflect
the number of bytes read from in_fd.
If offset is NULL, then data will be read from in_fd starting at
the file offset, and the file offset will be updated by the call.

The change also align with the implementation at:
libs/libc/misc/lib_sendfile.c

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
Change-Id: I607944f40b04f76731af7b205dcd319b0637fa04
2021-07-12 05:20:45 -07:00
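A sketch of the man-page contract quoted above, expressed around a placeholder transfer routine (do_transfer is hypothetical); only the offset save/update/restore logic is the point, not the actual tcp_sendfile() code.

```c
#include <sys/types.h>
#include <unistd.h>

/* Placeholder for the actual copy loop from in_fd to out_fd */

extern ssize_t do_transfer(int out_fd, int in_fd, size_t count);

static ssize_t sendfile_like(int out_fd, int in_fd,
                             off_t *offset, size_t count)
{
  ssize_t nsent;

  if (offset != NULL)
    {
      /* Remember the caller's file position and seek to *offset */

      off_t saved = lseek(in_fd, 0, SEEK_CUR);
      if (saved == (off_t)-1 || lseek(in_fd, *offset, SEEK_SET) == (off_t)-1)
        {
          return -1;
        }

      nsent = do_transfer(out_fd, in_fd, count);

      if (nsent > 0)
        {
          *offset += nsent;     /* byte following the last byte read */
        }

      /* Restore the file position so in_fd's offset is unchanged */

      lseek(in_fd, saved, SEEK_SET);
    }
  else
    {
      /* NULL offset: read from and advance the current file position */

      nsent = do_transfer(out_fd, in_fd, count);
    }

  return nsent;
}
```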
chao.an
d4ce70979e net/tcp: change all window relative value type to uint32_t
1. change all window relative value type to uint32_t
2. move window range validity check(UINT16_MAX) before assembling TCP header

Signed-off-by: chao.an <anchao@xiaomi.com>
2021-07-07 03:55:41 -05:00
chao.an
aab03ef86d net/tcp: add window scale support
Reference here:
https://tools.ietf.org/html/rfc1323

Signed-off-by: chao.an <anchao@xiaomi.com>
2021-07-07 03:55:41 -05:00
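A short sketch of the RFC 1323 window-scale arithmetic referenced above: pick the smallest shift that makes the receive buffer fit in the 16-bit header field, then encode and decode windows with that shift. Helper names are illustrative, not the NuttX API.

```c
#include <stdint.h>

/* RFC 1323 window scaling: the 16-bit window field in the TCP header is
 * interpreted as (field << scale), where 'scale' (0..14) is negotiated
 * once in the SYN via the WS option.
 */

static uint8_t tcp_pick_wscale(uint32_t max_recv_window)
{
  uint8_t scale = 0;

  while (scale < 14 && (max_recv_window >> scale) > 0xffffu)
    {
      scale++;
    }

  return scale;
}

/* Value to put in the 16-bit header field for a given real window */

static uint16_t tcp_encode_window(uint32_t window, uint8_t scale)
{
  uint32_t field = window >> scale;
  return field > 0xffffu ? 0xffffu : (uint16_t)field;
}

/* Real window represented by a received header field */

static uint32_t tcp_decode_window(uint16_t field, uint8_t scale)
{
  return (uint32_t)field << scale;
}
```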
chao.an
a5cdc4e69b net/tcp: change the tcp optdata to dynamic arrays
Signed-off-by: chao.an <anchao@xiaomi.com>
2021-07-07 03:55:41 -05:00
chao.an
87bffc190c net/tcp: remove the invalid break during tcp option loop
Signed-off-by: chao.an <anchao@xiaomi.com>
2021-07-07 03:55:41 -05:00
chao.an
b901f22c27 net/socket: add SO_RCVBUF support
Signed-off-by: chao.an <anchao@xiaomi.com>
2021-07-06 01:44:55 -05:00
chao.an
eabe535de7 net/inet: add support of FIONREAD
Signed-off-by: chao.an <anchao@xiaomi.com>
2021-07-05 06:20:52 -05:00
Anthony Merlino
a885b24cc1 Attempt to fix race condition reported in issue #3647 2021-07-04 08:54:15 -05:00
YAMAMOTO Takashi
669619a06a tcp_close: Fix a race with passive close
tcp_close disposes the connection immediately if it's called in
TCP_LAST_ACK. If that happens, we end up responding to the
last ACK with an RST.

This commit fixes it by making tcp_close wait for the completion
of the passive close.
2021-07-02 13:54:15 +09:00
YAMAMOTO Takashi
c7ba75697c tcp_recvwindow.c: Use iob_tailroom to replace the home grown one 2021-06-30 06:40:13 -05:00
YAMAMOTO Takashi
52c237cb5f net/tcp/tcp.h: Update a comment about readahead 2021-06-30 06:40:13 -05:00
YAMAMOTO Takashi
08e9dff0e9 tcp_close: disable send callback before sending FIN
This fixes connection closing issues with CONFIG_NET_TCP_WRITE_BUFFERS.

Because TCP_CLOSE is used for both input and output of tcp_callback,
the close callback and the send callback confuse each other as
follows. As this effectively disposes of the connection immediately,
we end up responding to the subsequent ACK and FIN/ACK from the peer
with RSTs.

tcp_timer
    -> tcp_close_eventhandler
        returns TCP_CLOSE (meaning an active close)
    -> psock_send_eventhandler
        called with TCP_CLOSE from tcp_close_eventhandler, misinterpret as
        a passive close.
        -> tcp_lost_connection
            -> tcp_shutdown_monitor
                -> tcp_callback
                    -> tcp_close_eventhandler
                        misinterpret TCP_CLOSE from itself as
                        a passive close
2021-06-30 06:39:13 -05:00
YAMAMOTO Takashi
326a8ef0a2 tcp_close_disconnect: don't nullify sndcb
It isn't necessary and I plan to use the value later in
the close processing.
2021-06-30 06:39:13 -05:00
YAMAMOTO Takashi
8472430f22 tcp_close: replace scary comments 2021-06-30 06:39:13 -05:00
YAMAMOTO Takashi
1ce13ee731 tcp_reset: Don't copy the peer window
The current code just leaves the window value from the peer's
segment as-is. It doesn't make sense.

Instead, always use 0.
This matches what NetBSD and Linux do.
(As far as I read their code correctly.)
2021-06-29 22:23:48 -05:00
YAMAMOTO Takashi
98e7c6924d tcp: always respond to keep-alive segments
* It doesn't make sense to have this conditional on our own
  SO_KEEPALIVE support. (CONFIG_NET_TCP_KEEPALIVE)
  Actually we don't have a control on the peer tcp stack,
  who decides to send us keep-alive probes.

* We should respond to them in non-ESTABLISHED states too, e.g. FIN_WAIT_2.
  See also:
  https://github.com/apache/incubator-nuttx/pull/3919#issuecomment-868248576
2021-06-30 11:52:08 +09:00
YAMAMOTO Takashi
4878b7729c tcp: simplify readahead
Do not bother to preserve segment boundaries in the tcp
readahead queues.

* Avoid wasting the tail IOB space for each segments.
  Instead, pack the newly received data into the tail space
  of the last IOB. Also, advertise the tail space as
  a part of the window.

* Use IOB chain directly. Eliminate IOB queue overhead.

* Allow to accept only a part of a segment.

* This change improves the memory efficiency.
  And probably more importantly, allows less-confusing
  recv window advertisement behavior.
  Previously, even when we advertise N bytes window,
  we often couldn't actually accept N bytes. Depending on
  the segment sizes and IOB configurations, it was causing
  segment drops.
  Also, the previous code was moving the right edge of the
  window back and forth too often, even when nothing in
  the system was competing on the IOBs. Shrinking the
  window that way is a kinda well known recipe to confuse
  the peer stack.
2021-06-30 06:22:14 +09:00
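A sketch of the window computation this approach enables: since new data is packed into the tail of the last readahead buffer, that tail room can be counted in the advertised window, so "what we advertise" matches "what we can accept". Types and the free-pool counter are simplified stand-ins, not the actual tcp_get_recvwindow() code.

```c
#include <stddef.h>
#include <stdint.h>

#define IOB_BUFSIZE 196   /* stand-in for CONFIG_IOB_BUFSIZE */

/* Simplified readahead chain node */

struct riob
{
  struct riob *flink;
  uint16_t     len;      /* bytes of payload stored in this buffer */
};

static uint32_t recv_window(const struct riob *readahead,
                            unsigned int nfree_iobs)
{
  /* Capacity of the free pool... */

  uint32_t window = (uint32_t)nfree_iobs * IOB_BUFSIZE;

  /* ...plus the unused tail room of the last queued buffer, which is
   * usable because incoming data is packed there instead of starting
   * a fresh buffer per segment.
   */

  while (readahead != NULL && readahead->flink != NULL)
    {
      readahead = readahead->flink;
    }

  if (readahead != NULL)
    {
      window += IOB_BUFSIZE - readahead->len;
    }

  return window;
}
```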
YAMAMOTO Takashi
0886257eb4 tcp_input: Accept segments spanning over rcvseq 2021-06-30 06:22:14 +09:00
YAMAMOTO Takashi
eeafe070ec tcp.h: Add TCP_SEQ_ADD 2021-06-30 06:22:14 +09:00
YAMAMOTO Takashi
022a2490d1 tcp: Change the way to advance rcvseq
* Move the code to advance rcvseq for user data from tcp_input
  to receive handlers.
  Motivation: allow partial ack.

* If we drop a segment, ignore FIN as well. Note that the tcp FIN bit is
  logically after the user data in the same segment.
2021-06-30 06:22:14 +09:00
YAMAMOTO Takashi
1448de4451 tcp_should_send_recvwindow: Remove function name from ninfo()
ninfo() itself usually prefixes the function name
automatically for us.
2021-06-18 00:47:47 -05:00
YAMAMOTO Takashi
ca0f2bdb95 tcp_get_recvwindow: Make this match the reality (tcp_datahandler)
It doesn't make much sense to advertize more than we can actually
accept, especially when we don't queue partial segments.
2021-06-15 01:17:38 -05:00
YAMAMOTO Takashi
af64912833 tcp_get_recvwindow: use tcp_rx_mss
Just to reduce code duplication.
No functional changes are intended.
2021-06-15 01:17:38 -05:00
YAMAMOTO Takashi
64676641cb tcp_datahandler: try throttled=false on iob_trycopyin failure as well
I assume this was just an oversight because I couldn't
find any obvious reason to special-case only the first IOB.

The commit message of the original commit is cited below.

```
commit bf21056001
Author: chao.an <anchao@xiaomi.com>
Date:   Fri Nov 27 09:50:38 2020 +0800

    net/tcp: fallback to unthrottle pool to avoid deadlock

    Add a fallback mechanism to ensure that there are still available
    iobs for an free connection, Guarantees all connections will have
    a minimum threshold iob to keep the connection not be hanged.

    Change-Id: I59bed98d135ccd1f16264b9ccacdd1b0d91261de
    Signed-off-by: chao.an <anchao@xiaomi.com>
```
2021-06-15 01:17:38 -05:00
YAMAMOTO Takashi
0347cd3f09 tcp_should_send_recvwindow: Add a few ninfo() 2021-06-13 21:20:24 -05:00
YAMAMOTO Takashi
14ec75e7fc tcp: window update improvements
* Fixes the case where the window was small but not zero.

* tcp_recvfrom: Remove tcp_ackhandler. Instead, simply schedule TX for
  a possible window update and make tcp_appsend decide.

* Replace rcv_wnd (the last advertized window size value) with
  rcv_adv. (the window edge sequence number advertized to the peer)
  rcv_wnd was complicated to deal with because its base (rcvseq) is
  also moving.

* tcp_appsend: Send a window update even if there are no other reasons
  to send an ack.
  Namely, send an update if it increases the window by
    * 2 * mss
    * or the half of the max possible window size
2021-06-13 21:20:24 -05:00
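A sketch of the update decision described above, using the advertised right edge (rcv_adv) and the thresholds given in the commit text; this is an illustration of the rule, not the actual tcp_appsend() code.

```c
#include <stdbool.h>
#include <stdint.h>

/* Wrap-safe "a > b" for 32-bit TCP sequence numbers */

static bool seq_gt(uint32_t a, uint32_t b)
{
  return (int32_t)(a - b) > 0;
}

/* Decide whether the receive window has opened enough to be worth an
 * otherwise-unnecessary ACK.  'rcv_adv' is the right-edge sequence
 * number previously advertised to the peer, 'rcvseq' the next expected
 * sequence number, and 'new_window' what we could advertise now.
 */

static bool should_send_window_update(uint32_t rcvseq, uint32_t rcv_adv,
                                      uint32_t new_window, uint32_t mss,
                                      uint32_t max_window)
{
  uint32_t new_edge = rcvseq + new_window;   /* wraps in sequence space */

  if (!seq_gt(new_edge, rcv_adv))
    {
      return false;             /* the edge did not move forward */
    }

  uint32_t growth = new_edge - rcv_adv;

  /* Send an update if the edge advanced by 2*MSS or by half of the
   * maximum possible window.
   */

  return growth >= 2 * mss || growth >= max_window / 2;
}
```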
YAMAMOTO Takashi
1f6fdf04b7 tcp: Extract MSS calculation from tcp_synack
I plan to use it for recv window update decision logic.
2021-06-13 21:20:24 -05:00