Squashed commit of the following:
Fix up some final compile issues.
net/netdev: Convert the network down notification logic to use the new wqueue-based notification facility.
net/udp: Convert the UDP read-ahead notification logic to use the new wqueue-based notification facility.
net/tcp: Convert the TCP read-ahead notification logic to use the new wqueue-based notification facility.
mm/iob: Convert the IOB notification logic to use the new wqueue-based notification facility.
sched/wqueue: Signals are not good IPCs to support the target poll functionality for several reasons, including the limited amount of data that can be passed with a signal and the fact that, in protected and kernel build modes, having user threads execute signal handlers in protected kernel memory is problematic. Instead, convert the same logic to perform the notifications via a function callback on the high priority work queue.
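As a rough sketch (the structure and function names below are hypothetical; the real notifier interfaces live in sched/wqueue), the notification now amounts to queuing a callback on the high priority work queue:

    #include <nuttx/wqueue.h>

    struct my_notifier_s
    {
      struct work_s work;        /* Queues the callback on HPWORK */
      FAR void *arg;             /* Data passed to the callback */
    };

    static void my_notifier_worker(FAR void *arg)
    {
      /* Runs in kernel context on the high priority work queue.  No user
       * signal handler is involved, and arbitrary data can be passed via
       * 'arg'.
       */
    }

    static int my_notifier_signal(FAR struct my_notifier_s *notifier)
    {
      /* Queue the callback with zero delay on the high priority work queue */

      return work_queue(HPWORK, &notifier->work, my_notifier_worker,
                        notifier->arg, 0);
    }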
Squashed commit of the following:
net/udp: Add signal notification for the case when UDP read-ahead data is buffered. This is basically a clone of the TCP notification logic with naming adapted for UDP.
net/tcp: Add signal notification for the case when TCP read-ahead data is buffered.
Squashed commit of the following:
mm/iob: The IOB available notifier is now just a wrapper around the common signal notifier.
sched/signal: Add a generic signal notification facility.
sched/signal/sig_evthread.c: More trivial naming changes.
sched/signal: Rename nxsig_notification() to nxsig_evthread() to make forthcoming naming additions more consistent.
Squashed commit of the following:
net/udp: Address most of the issues with UDP write buffering. There is a remaining issue with handling one network going down in a multi-network environment. None of this has been tested, but it is certainly ready for testing. Hence, the feature is marked EXPERIMENTAL.
net/udp: Some baby steps toward a corrected write buffering design.
net/udp: Remove pesky write buffer macros.
Eliminate trailing space at the end of lines.
net/udp: A little more UDP write buffering logic. Still at least one big gaping hole in the design.
net/udp: Undefined CONFIG_NET_SENDTO_TIMEOUT.
net/udp: Crude, naive port of the TCP write buffering logic into UDP. This commit is certainly non-functional and is simply a starting point for the implementation of UDP write buffering.
net/udp: Rename udp/udp_psock_sendto.c to udp/udp_psock_sendto_unbuffered.c.
This commit backs out most of commit b4747286b1. That change was added because sem_wait() would sometimes cause inappropriate cancellation points. But with these recent changes, nxsem_wait() is used instead and it is not a cancellation point.
In the OS, all calls to sem_wait() were changed to nxsem_wait(). nxsem_wait() does not return errors via errno, so each place where nxsem_wait() is now called must not examine the errno variable.
In all OS functions (not libraries), change sem_wait() to nxsem_wait(). This will prevent the OS from creating bogus cancellation points and from modifying the per-task errno variable.
sched/semaphore: Add the function nxsem_wait(). This is a new internal OS interface. It is functionally equivalent to sem_wait() except that (1) it is not a cancellation point, and (2) it does not set the per-thread errno value on return.
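As a sketch of the intended relationship (roughly the usual NuttX wrapper pattern, not necessarily the exact final code), sem_wait() then becomes a thin cancellation point and errno wrapper around nxsem_wait():

    #include <semaphore.h>
    #include <errno.h>

    #include <nuttx/cancelpt.h>
    #include <nuttx/semaphore.h>

    int sem_wait(FAR sem_t *sem)
    {
      int ret;

      /* sem_wait() is a cancellation point */

      enter_cancellation_point();

      /* nxsem_wait() returns a negated errno value on failure and never
       * touches the per-thread errno variable.
       */

      ret = nxsem_wait(sem);
      if (ret < 0)
        {
          set_errno(-ret);
          ret = ERROR;
        }

      leave_cancellation_point();
      return ret;
    }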
sched/semaphore: Add nxsem_post() which is identical to sem_post() except that it never modifies the errno variable. Changed all references to sem_post in the OS to nxsem_post().
sched/semaphore: Add nxsem_destroy() which is identical to sem_destroy() except that it never modifies the errno variable. Changed all references to sem_destroy() in the OS to nxsem_destroy().
libc/semaphore and sched/semaphore: Add nxsem_getprotocol() and nxsem_setprotocol() which are identical to sem_getprotocol() and sem_setprotocol() except that they never modify the errno variable. Changed all references to sem_setprotocol() in the OS to nxsem_setprotocol(). sem_getprotocol() was not used in the OS.
libc/semaphore: Add nxsem_getvalue() which is identical to sem_getvalue() except that it never modifies the errno variable. Changed all references to sem_getvalue in the OS to nxsem_getvalue().
sched/semaphore: Rename all internal private functions from sem_xyz to nxsem_xyz. The sem_ prefix is (will be) reserved only for the application semaphore interfaces.
libc/semaphore: Add nxsem_init() which is identical to sem_init() except that it never modifies the errno variable. Changed all references to sem_init in the OS to nxsem_init().
sched/semaphore: Rename sem_tickwait() to nxsem_tickwait() so that it is clear this is an internal OS function.
sched/semaphore: Rename sem_reset() to nxsem_reset() so that it is clear this is an internal OS function.
Task A holds an IOB. There are no further IOBs. The value of semcount is zero.
Task B calls iob_alloc(). Since there are no IOBs, it calls sem_wait(). The value of semcount is now -1.
Task A frees the IOB. iob_free() adds the IOB to the free list and calls sem_post(); this makes Task B ready to run and sets semcount to zero, NOT 1. There is now one IOB in the free list and semcount is zero. When Task B wakes up, it would increment semcount back to the correct value.
But an interrupt or another task runs before Task B executes. The interrupt or other task takes the IOB off of the free list and decrements semcount. But since semcount is then < 0, this triggers the assertion, because that is an invalid state in the interrupt handler.
So I think that the root cause is the asynchrony between posting the semaphore and updating the free list. Currently there is only a free list of IOBs, and the delay between sem_post() and the waiter running causes the semcount and the list content to become out of sync. This change adds a new 'committed' list: when there is a task waiting for an IOB, a freed IOB goes onto the committed list rather than the free list before the semaphore is posted. On the waiting side, when awakened from the semaphore wait, the task expects to find its IOB on the committed list rather than the free list.
In this way, the content of the free list and the value of the semaphore count always remain in sync.
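A rough sketch of the free-side logic with the added committed list (simplified, with hypothetical global names; the real code in mm/iob also deals with the throttled semaphore and other details):

    #include <nuttx/irq.h>
    #include <nuttx/semaphore.h>
    #include <nuttx/mm/iob.h>

    /* If a task is already waiting (semcount < 0), the freed IOB is
     * committed to that waiter before the semaphore is posted so that no
     * interrupt handler or other task can steal it from the free list.
     */

    void iob_free(FAR struct iob_s *iob)
    {
      irqstate_t flags = enter_critical_section();
      int semcount;

      nxsem_getvalue(&g_iob_sem, &semcount);
      if (semcount < 0)
        {
          /* A waiter exists:  Put the IOB on the committed list where the
           * awakened waiter will look for it.
           */

          iob->io_flink   = g_iob_committed;
          g_iob_committed = iob;
        }
      else
        {
          /* No waiter:  Return the IOB to the free list */

          iob->io_flink  = g_iob_freelist;
          g_iob_freelist = iob;
        }

      nxsem_post(&g_iob_sem);
      leave_critical_section(flags);
    }

    /* On the allocation side, after nxsem_wait() returns, the waiter takes
     * its IOB from g_iob_committed rather than from g_iob_freelist.
     */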