Commit Graph

233 Commits

Author SHA1 Message Date
Gregory Nutt
22da175629 mm/iob: Add a divider that can be used to reduce the rate of IOB notifications. 2018-09-13 06:56:29 -06:00
Gregory Nutt
76eec53e4f mm/iob: iob_navail() was returning the number of free IOB chain queue entries, not the number of free IOBs. Completely misnamed. 2018-09-12 06:40:18 -06:00
Gregory Nutt
11a635dcb3 mm/iob: IOB free notifier should accept the work queue ID as a parameter. The notification may need to run on either the high- or low-priority work queue. sched/work: Change the default priority of the low-priority work queue to 100. 2018-09-11 08:49:39 -06:00
Gregory Nutt
af0ee3c8f7 sched/wqueue: Add an option to work queue notifier so that the notification can occur on different work queues. 2018-09-11 07:22:23 -06:00
Gregory Nutt
39df7ed0c0 mm/iob and sched/semaphore: Work around some issues with the IOB throttle semaphore. It has some odd behaviors that can cause assertions in sem_post(). Also, it seems to get outside of its range occasionally. Need to REVISIT this. 2018-09-10 11:32:09 -06:00
Gregory Nutt
9d3148406c Signals were not a good choice of IPC to implement the poll function for several reasons: In order to handle the asynchronous poll-related event, a substantial amount of state information is needed, but signals are only capable of passing minimal amounts of data. There are also complexities with executing signal handlers from kernel-space code that are better avoided. So, instead of signals, the equivalent logic was converted to run via a callback that executes on the high-priority work queue.
Squashed commit of the following:

    Fix up some final compile issues.

    net/netdev:  Convert the network down notification logic to use the new wqueue-based notification facility.

    net/udp:  Convert the UDP readahead notification logic to use the new wqueue-based notification facility.

    net/tcp:  Convert the TCP readahead notification logic to use the new wqueue-based notification facility.

    mm/iob:  Convert the IOB notification logic to use the new wqueue-based notification facility.

    sched/wqueue:  Signals are not a good IPC for the target poll functionality for several reasons, including the limited amount of data that can be passed with a signal and the fact that, in protected and kernel modes, user threads executing signal handlers in protected kernel memory is problematic.  Instead, convert the same logic to perform the notifications via function callback on the high-priority work queue.
2018-09-09 15:01:44 -06:00
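The conversion described above replaces per-thread signals with callbacks queued on a work queue. A minimal sketch of that pattern follows; work_queue() and HPWORK are the standard NuttX work queue interfaces, while the notifier structure and iob_notifier_signal() are illustrative names only, not the actual facility in the tree.

```c
#include <nuttx/config.h>
#include <nuttx/wqueue.h>

/* Illustrative per-waiter entry; the real facility keeps more state */

struct iob_notifier_s
{
  struct work_s work;    /* Work queue entry used to queue the callback */
  worker_t      worker;  /* Caller-supplied callback function */
  FAR void     *arg;     /* Caller-supplied callback argument */
};

/* Hypothetical helper called from iob_free() when an IOB becomes
 * available.  Instead of sending a signal (which can carry almost no
 * data and would require running signal handlers from kernel code),
 * the callback is queued to run on the high-priority work queue with
 * an arbitrary argument.
 */

static void iob_notifier_signal(FAR struct iob_notifier_s *notifier)
{
  (void)work_queue(HPWORK, &notifier->work, notifier->worker,
                   notifier->arg, 0);
}
```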
Gregory Nutt
32e3e51678 net/netdev: Add signal notification for the case where the network goes down. 2018-09-09 10:39:25 -06:00
Gregory Nutt
28f73bd928 net/tcp and udp: Add logic to signal events when TCP or UDP read-ahead data is buffered.
Squashed commit of the following:

    net/udp:  Add signal notification for the case when UDP read-ahead data is buffered.  This is basically a clone of the TCP notification logic with naming adapted for UDP.

    net/tcp:  Add signal notification for the case when TCP read-ahead data is buffered.
2018-09-09 09:21:39 -06:00
Gregory Nutt
fc127fd297 sched/signal: Add a generic signal notification facility. Modify the custom IOB available notifier so that it is now just a wrapper around this generic signal notification. This generic signal notification facility will, eventually, be used to support network polling.
Squashed commit of the following:

    mm/iob:  The IOB available notifier is now just a wrapper around the common signal notifier.

    sched/signal:  Add a generic signal notification facility.

    sched/signal/sig_evthread.c:  More trivial naming changes.

    sched/signal:  Rename nxsig_notification() to nxsig_evthread() to make forthcoming naming additions more consistent.
2018-09-09 08:32:37 -06:00
Gregory Nutt
0eb1beee63 mm/iob: Improve the IOB notifier data structures. Replace the fixed array with a singly linked list. For small values of CONFIG_IOB_NWAITERS, either solution is fine, but the fixed array does not scale well when CONFIG_IOB_NWAITERS is large. 2018-09-08 17:47:43 -06:00
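A rough illustration of the data structure change in the commit above; the names are assumptions, not the actual mm/iob declarations. A fixed array of CONFIG_IOB_NWAITERS slots is replaced by a head pointer and per-waiter nodes linked through a flink field, so the waiter count is no longer bounded at compile time.

```c
#include <nuttx/config.h>
#include <sys/types.h>

/* Hypothetical waiter entry for the singly linked list */

struct iob_waiter_s
{
  FAR struct iob_waiter_s *flink;  /* Next waiter in the list */
  pid_t                    pid;    /* Thread to notify when an IOB is freed */
};

/* Before: static struct iob_waiter_s g_waiters[CONFIG_IOB_NWAITERS];
 * After:  a list head; insertion and removal are O(1) and notification
 *         walks only the threads that are actually waiting.
 */

static FAR struct iob_waiter_s *g_iob_waiters;
```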
Gregory Nutt
5bd0f596fc Fix some trivial typos. 2018-09-08 16:53:12 -06:00
Gregory Nutt
bba8541ad6 mm/iob: Add an IOB notifier that will send a signal to any registered threads that want to be notified when an IOB has been freed. This is an untested work-in-progress and is intended to be a part of a larger solution to correctly handling network poll operations. 2018-09-08 11:21:18 -06:00
Xiang Xiao
86eef8ce3a mm/: Add the mm_heapmember() function and reimplement kmm_heapmember() based on mm_heapmember(), since this function is very useful when multiple heaps exist. 2018-08-23 09:38:49 -06:00
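With a generic mm_heapmember(heap, mem) available, the kernel-heap check reduces to a one-line wrapper. A sketch of that relationship; the prototype is paraphrased from the commit message and g_kmmheap is assumed to be the kernel heap instance rather than copied from the tree.

```c
#include <nuttx/config.h>
#include <nuttx/mm/mm.h>
#include <stdbool.h>

/* Check whether 'mem' lies inside the kernel heap by delegating to the
 * generic, per-heap test.  Useful when user and kernel heaps coexist
 * and a pointer must be routed to the correct free() implementation.
 */

bool kmm_heapmember(FAR void *mem)
{
  return mm_heapmember(&g_kmmheap, mem);
}
```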
Alan Carvalho de Assis
283b73edc5 Fix lots of typos in C comments and Kconfig help text 2018-07-08 18:24:45 -06:00
Juha Niskanen
044d538da3 Fix some errors found during upstream merge 2018-06-28 07:06:57 -06:00
Gregory Nutt
0786b5d053 net/tcp: Re-think CONFIG_NET_TCP_RWND_CONTROL TCP windowing controls. 2018-06-24 14:46:12 -06:00
Dmitriy Linikov
87c8b116ca mm/iob/iob_copyin.c: Fixed a problem with the send() return value when using non-blocking I/O over a buffered TCP socket 2018-03-20 06:58:57 -06:00
Masayuki Ishikawa
1d958980bd Merged in masayuki2009/nuttx.nuttx/fix_smp_bugs (pull request #615)
Fix SMP related bugs

* sched/sched: Fix a deadlock in SMP mode

    Two months ago, I introduced sched_tasklist_lock() and
    sched_tasklist_unlock() to protect tasklists in SMP mode.
    Actually, this change works pretty well for HTTP audio
    streaming aging test with lc823450-xgevk.

    However, I found a deadlock in the scheduler when I tried
    similar aging tests with DVFS autonomous mode, where the CPU
    clock speed changes based on CPU load. In this case, the call
    sequences were as follows:

    cpu1: sched_unlock()->sched_mergepending()->sched_addreadytorun()->up_cpu_pause()
    cpu0: sched_lock()->sched_mergepending()

    To avoid this deadlock, I added sched_tasklist_unlock() when calling
    up_cpu_pause() and sched_addreadytorun(), and added
    sched_tasklist_lock() back after the call (see the sketch following
    this commit).

    Signed-off-by: Masayuki Ishikawa <Masayuki.Ishikawa@jp.sony.com>

* libc: Add critical section in lib_filesem.c for SMP

    To set my_pid into fs_holder atomically in SMP mode,
    the critical section API must be used.

    Signed-off-by: Masayuki Ishikawa <Masayuki.Ishikawa@jp.sony.com>

* mm: Add critical section in mm_sem.c for SMP

    To set my_pid into mm_holder atomically in SMP mode,
    the critical section API must be used.

    Signed-off-by: Masayuki Ishikawa <Masayuki.Ishikawa@jp.sony.com>

* net: Add critical section in net_lock.c for SMP

    To set my pid (me) into the lock holder atomically in SMP mode,
    the critical section API must be used.

    Signed-off-by: Masayuki Ishikawa <Masayuki.Ishikawa@jp.sony.com>

Approved-by: Gregory Nutt <gnutt@nuttx.org>
2018-03-20 12:34:38 +00:00
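The deadlock fix in the first bullet above amounts to dropping the tasklist lock across the cross-CPU pause so the other CPU can acquire it and respond. A sketch of the pattern; the sched_tasklist_lock()/sched_tasklist_unlock() prototypes (returning/taking an irqstate_t) are assumptions based on the commit text, and the helper itself is illustrative.

```c
#include <nuttx/config.h>
#include <nuttx/irq.h>

/* Assumed prototypes for the helpers named in the commit message */

irqstate_t sched_tasklist_lock(void);
void sched_tasklist_unlock(irqstate_t lock);
int up_cpu_pause(int cpu);

/* Illustrative helper: pause another CPU without holding the tasklist
 * lock.  If the lock were held here while the other CPU spun on it in
 * sched_lock()/sched_mergepending(), neither CPU could make progress.
 */

static irqstate_t pause_cpu_unlocked(int cpu, irqstate_t lock)
{
  sched_tasklist_unlock(lock);    /* Other CPUs may now take the lock */
  (void)up_cpu_pause(cpu);        /* Cannot deadlock on the tasklist lock */
  return sched_tasklist_lock();   /* Re-acquire before touching the tasklists */
}
```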
Gregory Nutt
b54ffe858a Standardization of some function headers. 2018-03-13 09:52:27 -06:00
Dmitriy Linikov
a8c58607e9 Merged in hardlulz/modem-3.0-nuttx/fix-sem-EINTR (pull request #603)
Added an ECANCELED condition to the DEBUGASSERTs checking the sem_wait() result

Approved-by: Gregory Nutt <gnutt@nuttx.org>
2018-02-20 18:24:53 +00:00
Gregory Nutt
7cf88d7dbd Make sure that labeling is used consistently in all function headers. 2018-02-01 10:00:02 -06:00
Gregory Nutt
3521aaf944 Squashed commit of the following:
binfmt/, configs/, graphics/, libc/, mm/, net/, sched/:  OS references to the errno variable should always use the set_errno(), get_errno() macros.

    arch/arm/src/stm32 and stm32f7:  Architecture-specific code is not permitted to modify the errno variable.

    drivers/ and libc/:  OS references to the errno variable should always use the set_errno(), get_errno() macros.
2018-01-30 17:57:36 -06:00
Gregory Nutt
fef255e5be This commit adds an as-of-yet untested implementation of UDP write buffering.
Squashed commit of the following:

    net/udp:  Address most of the issues with UDP write buffering.  There is a remaining issue with handling one network going down in a multi-network environment.  None of this has been tested but it is certainly ready for test.  Hence, the feature is marked EXPERIMENTAL.
    net/udp:  Some baby steps toward a corrected write buffering design.
    net/udp:  Remove pesky write buffer macros.
    Eliminate trailing space at the end of lines.
    net/udp:  A little more UDP write buffering logic.  Still at least one big gaping hole in the design.
    net/udp:  Undefined CONFIG_NET_SENDTO_TIMEOUT.
    net/udp:  Crude, naive port of the TCP write buffering logic into UDP.  This commit is certainly non-functional and is simply a starting point for the implementation of UDP write buffering.
    net/udp:  Rename udp/udp_psock_sendto.c udp/udp_psock_sendto_unbuffered.c.
2018-01-22 18:32:02 -06:00
Gregory Nutt
fb6208fbbc mm: Fix a typo in a debug assertion. 2017-11-22 07:30:48 -06:00
Gregory Nutt
514ac3fe98 mm: Add a debug assertion to check for integer overflow in malloc. 2017-11-21 07:24:10 -06:00
Gregory Nutt
ef89207c8d Build system: Fix CONFIG_BUILD_KERNEL logic in directories that have ubin and kbin subdirectories. The conditional logic was fine for CONFIG_BUILD_FLAT and CONFIG_BUILD_PROTECTED but generated useless dependencies for CONFIG_BUILD_KERNEL. 2017-11-15 07:54:09 -06:00
Gregory Nutt
fa4722a5ac mm/mm_gran: Combine some common logic into a function (also fixes a subtle bug). 2017-11-15 07:54:08 -06:00
Gregory Nutt
1191238b17 mm/mm_gran: Fix some issues found during test of the new gran_info() interface. 2017-11-14 16:13:20 -06:00
Gregory Nutt
d720711807 fs/procfs: Add logic to show the state of the page allocator in /proc/meminfo. 2017-11-14 14:59:51 -06:00
Gregory Nutt
54fad8d04f mm/mm_gran: Add a function to get information about the state of the granule allocator. 2017-11-14 14:41:03 -06:00
Gregory Nutt
62b8026976 Remove CONFIG_GRAN_SINGLE. It adds no technical benefit (other than some minor reduction in the number of interface arguments) but adds a lot of code complexity. Better without it. 2017-11-14 11:47:12 -06:00
Jussi Kivilinna
75b53d563b mm/mm-heap: memalign: fix heap corruption caused by using an unaligned chunk size. Unaligned nodes generated by memalign later cause heap corruption when nodes are shrunk further (for example, 24 bytes -> 8 bytes, when alignment is 16 bytes). 2017-10-24 15:35:52 -06:00
Gregory Nutt
34a572b226 Update last commit... The check should really use the definition MMSIZE_MAX, which is the same thing but is guaranteed to be the correct maximum size in any present and future configuration. 2017-10-17 07:34:06 -06:00
EunBong Song
196911d4fa If size is greater than (UINT32_MAX - SIZEOF_MM_ALLOCNODE), the malloc size can overflow in the MM_ALIGN_UP macro. For example, if task_create() is called with stack_size == -1, up_create_stack() allocates only SIZEOF_MM_ALLOCNODE bytes for the stack.
This can cause a data abort in the up_stack_color() function.
2017-10-17 06:37:09 -06:00
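The overflow is in the size arithmetic before allocation: adding the allocation-node overhead and rounding up wraps around for requests near UINT32_MAX, so a huge request silently becomes a tiny one. A self-contained sketch with stand-in constants; the real fix in the two commits above is a debug assertion against MMSIZE_MAX, and the macro values below are illustrative only.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in values; the real SIZEOF_MM_ALLOCNODE, MMSIZE_MAX and
 * MM_ALIGN_UP definitions live in the NuttX mm headers.
 */

#define SIZEOF_MM_ALLOCNODE  8u
#define MMSIZE_MAX           UINT32_MAX
#define MM_ALIGN_UP(s)       (((s) + 7u) & ~(size_t)7u)

static size_t mm_allocsize(size_t size)
{
  /* The fix: reject the request before the addition below can wrap.
   * Without this, size == (size_t)-1 (e.g. stack_size == -1) wraps to
   * a handful of bytes and the caller overruns the tiny allocation.
   */

  assert(size <= MMSIZE_MAX - SIZEOF_MM_ALLOCNODE);

  return MM_ALIGN_UP(size + SIZEOF_MM_ALLOCNODE);
}
```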
Gregory Nutt
700f1a8e8c Eliminate some warnings found in build testing. 2017-10-08 16:27:17 -06:00
Gregory Nutt
a857cc04e4 Fix some build problems after recent separation of internal OS from application interfaces. The build problem only occurs in the PROTECTED and KERNEL builds where separate libraries are built for the applications and for use within the OS. In these cases, the correct interfaces must be used. This commit fixes a few of these, so I can get through build testing, but there are many more that need fixin'. 2017-10-08 08:13:47 -06:00
Gregory Nutt
9e25d89223 Squashed commit of the following:
Replace all usage of kill() in the OS proper with nxsig_kill().

    sched/signal:  Add nxsig_kill() which is functionally equivalent to kill() except that it does not modify the errno variable.
2017-10-07 08:22:18 -06:00
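The commit above follows the same pattern used for the semaphore interfaces below: the OS calls an errno-neutral nx*() variant, and the user-facing function becomes a thin wrapper that translates the negated-errno return value into the POSIX errno convention. An illustrative wrapper, not the actual libc source (the tree uses set_errno()/ERROR; plain errno and -1 are used here to keep the sketch self-contained).

```c
#include <errno.h>
#include <signal.h>

/* Assumed prototype: returns 0 on success or a negated errno value,
 * and never touches the errno variable.
 */

int nxsig_kill(pid_t pid, int signo);

int kill(pid_t pid, int signo)
{
  int ret = nxsig_kill(pid, signo);
  if (ret < 0)
    {
      errno = -ret;   /* Only the user-facing wrapper sets errno */
      ret = -1;
    }

  return ret;
}
```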
Gregory Nutt
91537a7722 mm/: Heap semaphore logic needs to use the nxsem_* interfaces when available, and the standard semaphores only when implementing a user-space heap. Note this does introduce an issue: the memory management functions then become cancellation points because of the use of sem_wait(). 2017-10-07 07:16:33 -06:00
Gregory Nutt
7cc63f90d9 sched/semaphore: sem_trywait() modifies the errno value and, hence, should not be used within the OS. Use nxsem_trywait() instead. 2017-10-05 07:59:06 -06:00
Gregory Nutt
9568600ab1 Squashed commit of the following:
This commit backs out most of commit b4747286b1.  That change was added because sem_wait() would sometimes create cancellation points inappropriately.  But with these recent changes, nxsem_wait() is used instead and it is not a cancellation point.

    In the OS, all calls to sem_wait() changed to nxsem_wait().  nxsem_wait() does not return errors via errno so each place where nxsem_wait() is now called must not examine the errno variable.

    In all OS functions (not libraries), change sem_wait() to nxsem_wait().  This will prevent the OS from creating bogus cancellation points and from modifying the per-task errno variable.

    sched/semaphore:  Add the function nxsem_wait().  This is a new internal OS interface.  It is functionally equivalent to sem_wait() except that (1) it is not a cancellation point, and (2) it does not set the per-thread errno value on return.
2017-10-04 15:22:27 -06:00
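The practical effect at call sites inside the OS: errors come back as a negated errno return value, the caller stops reading errno, and the wait can no longer act as a cancellation point. A sketch of the resulting idiom; nxsem_wait() and DEBUGASSERT() are the interfaces named in the commit, while the helper itself is illustrative.

```c
#include <assert.h>
#include <errno.h>
#include <nuttx/semaphore.h>

/* Take a semaphore from OS code, retrying if interrupted by a signal.
 * Because nxsem_wait() is errno-neutral, the per-task errno value is
 * left untouched for the application.
 */

static void take_sem(FAR sem_t *sem)
{
  int ret;

  do
    {
      ret = nxsem_wait(sem);

      /* The only expected failure here is interruption by a signal */

      DEBUGASSERT(ret == 0 || ret == -EINTR);
    }
  while (ret == -EINTR);
}
```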
Gregory Nutt
42a0796615 Squashed commit of the following:
sched/semaphore:  Add nxsem_post() which is identical to sem_post() except that it never modifies the errno variable.  Changed all references to sem_post in the OS to nxsem_post().

    sched/semaphore:  Add nxsem_destroy() which is identical to sem_destroy() except that it never modifies the errno variable.  Changed all references to sem_destroy() in the OS to nxsem_destroy().

    libc/semaphore and sched/semaphore:  Add nxsem_getprotocol() and nxsem_setprotocol() which are identical to sem_getprotocol() and sem_setprotocol() except that they never modify the errno variable.  Changed all references to sem_setprotocol() in the OS to nxsem_setprotocol(); sem_getprotocol() was not used in the OS.
2017-10-03 15:35:24 -06:00
Gregory Nutt
83cdb0c552 Squashed commit of the following:
libc/semaphore:  Add nxsem_getvalue() which is identical to sem_getvalue() except that it never modifies the errno variable.  Changed all references to sem_getvalue in the OS to nxsem_getvalue().

    sched/semaphore:  Rename all internal private functions from sem_xyz to nxsem_xyz.  The sem_ prefix is (will be) reserved only for the application semaphore interfaces.

    libc/semaphore:  Add nxsem_init() which is identical to sem_init() except that it never modifies the errno variable.  Changed all references to sem_init in the OS to nxsem_init().

    sched/semaphore:  Rename sem_tickwait() to nxsem_tickwait() so that it is clear this is an internal OS function.

    sched/semaphore:  Rename sem_reset() to nxsem_reset() so that it is clear this is an internal OS function.
2017-10-03 12:52:31 -06:00
Jussi Kivilinna
81b5118727 mm_mallinfo: do heap end debug assert check with heap semaphore held 2017-08-03 10:01:26 -06:00
Gregory Nutt
98d937104e mm/: Remove dangling space at the end of lines. 2017-06-28 13:31:21 -06:00
Gregory Nutt
d9bd5ca05f Update README and some C comments 2017-05-30 09:19:04 -06:00
Gregory Nutt
e9c55d8f7d IOBs: Fix a typing error in mm/iob/iob.h and mm/iob/iob_initialize.c 2017-05-27 08:03:00 -06:00
Gregory Nutt
2c00825dcf Porting Guide: Add description of IOBs. 2017-05-20 08:50:05 -06:00
Masayuki Ishikawa
f10e10e465 IOBs: Fix build break
Signed-off-by: Masayuki Ishikawa <masayuki.ishikawa@gmail.com>
2017-05-18 13:59:03 +09:00
Gregory Nutt
5ce2ece134 syslog: Add header file inclusion to eliminate a warning; mm/iob: private function needs static storage class. 2017-05-16 12:26:23 -06:00
Gregory Nutt
6a3800f611 There can be a failure in IOB allocation due to some asynchronous behavior caused by the use of sem_post(). Consider this scenario:
Task A holds an IOB.  There are no further IOBs.  The value of semcount is zero.
Task B calls iob_alloc().  Since there are no IOBs, it calls sem_wait().  The value of semcount is now -1.

Task A frees the IOB.  iob_free() adds the IOB to the free list and calls sem_post(); this makes Task B ready to run and sets semcount to zero, NOT 1.  There is one IOB in the free list and semcount is zero.  When Task B wakes up, it would increment the semcount back to the correct value.

But an interrupt or another task runs before Task B executes.  The interrupt or other task takes the IOB off of the free list and decrements the semcount.  But since semcount is then < 0, this causes the assertion, because that is an invalid state in the interrupt handler.

So I think that the root cause is the asynchrony in updating the semcount.  This change separates the lists of IOBs:  Currently there is only a free list of IOBs, and, I believe, asynchronies due to sem_post() cause the semcount and the list content to become out of sync.  This change adds a new 'committed' list:  When there is a task waiting for an IOB, the freed IOB will go into the committed list rather than the free list before the semaphore is posted.  On the waiting side, when awakened from the semaphore wait, the task will expect to find its IOB in the committed list, rather than the free list.

In this way, the content of the free list and the value of the semaphore count always remain in sync.
2017-05-16 11:03:35 -06:00
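A sketch of the free-side half of the committed-list approach described above. The names follow the description (free list, committed list, a flink field), but the code is illustrative rather than the actual mm/iob source; the key point is that an IOB destined for a waiting task never passes through the free list, so an interrupt handler cannot steal it and drive semcount negative.

```c
#include <nuttx/config.h>
#include <nuttx/irq.h>
#include <nuttx/mm/iob.h>
#include <semaphore.h>

extern sem_t g_iob_sem;                    /* Counts IOBs in the free list */
extern FAR struct iob_s *g_iob_freelist;   /* Truly free IOBs */
extern FAR struct iob_s *g_iob_committed;  /* IOBs reserved for waiters */

void iob_free_sketch(FAR struct iob_s *iob)
{
  irqstate_t flags = enter_critical_section();

  if (g_iob_sem.semcount < 0)
    {
      /* Somebody is blocked in iob_alloc(): commit the IOB to that
       * waiter.  It will look for it in the committed list when it
       * wakes up from sem_wait().
       */

      iob->io_flink   = g_iob_committed;
      g_iob_committed = iob;
    }
  else
    {
      /* No waiters: this IOB really is free */

      iob->io_flink   = g_iob_freelist;
      g_iob_freelist  = iob;
    }

  /* Post after the list update so the count and the lists agree */

  sem_post(&g_iob_sem);
  leave_critical_section(flags);
}
```

On the allocation side, a task awakened from the semaphore wait would then remove its IOB from the committed list rather than the free list, keeping the semaphore count and the free list contents in lockstep.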