In the case of CONFIG_SEM_PREALLOCHOLDERS=0, the call to
sem_restorebaseprio_task can context switch within the
sem_foreachholder(sem, sem_restoreholderprioB, stcb); call
prior to releasing the holder. The running task is therefore
left as a holder, as is the started task, leaving both slots
filled and failing to perform the boost or restoration on the
correct tcb.
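
For context, the following is a minimal sketch of the ordering that triggers the failure. The types and helpers are stand-ins modelled loosely on the functions named above, not the actual NuttX sources.

    /* Illustration only: stand-in types and helpers that loosely model the
     * two holder slots embedded in the semaphore when
     * CONFIG_SEM_PREALLOCHOLDERS=0.  These are NOT the real NuttX
     * definitions.
     */

    #include <stddef.h>

    struct tcb_s       { int sched_priority; int base_priority; };
    struct semholder_s { struct tcb_s *htcb; int counts; };
    struct xsem_s      { int semcount; struct semholder_s holder[2]; };

    typedef int (*holderhandler_t)(struct semholder_s *holder,
                                   struct xsem_s *sem, void *arg);

    /* Apply a handler to every allocated holder slot */

    static int sem_foreachholder(struct xsem_s *sem, holderhandler_t handler,
                                 void *arg)
    {
      for (int i = 0; i < 2; i++)
        {
          if (sem->holder[i].htcb != NULL)
            {
              handler(&sem->holder[i], sem, arg);
            }
        }

      return 0;
    }

    /* Release a holder slot so it can be reused */

    static void sem_freeholder(struct xsem_s *sem, struct semholder_s *holder)
    {
      (void)sem;
      holder->htcb   = NULL;
      holder->counts = 0;
    }

    /* Model of sem_restoreholderprioB: restoring the holder's base priority
     * can lower the running task's priority and cause a context switch here.
     */

    static int sem_restoreholderprioB(struct semholder_s *holder,
                                      struct xsem_s *sem, void *arg)
    {
      (void)sem;
      (void)arg;
      holder->htcb->sched_priority = holder->htcb->base_priority;
      return 0;
    }

    /* Ordering before the fix: the running task's slot is released only
     * AFTER the walk that can context switch, so both embedded slots can
     * remain filled while the started task runs.
     */

    static void restorebaseprio_task_before(struct tcb_s *rtcb,
                                            struct tcb_s *stcb,
                                            struct xsem_s *sem)
    {
      (void)sem_foreachholder(sem, sem_restoreholderprioB, stcb);

      for (int i = 0; i < 2; i++)
        {
          if (sem->holder[i].htcb == rtcb)
            {
              sem_freeholder(sem, &sem->holder[i]);
            }
        }
    }
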
This PR fixes the problem by releasing the running task's slot
prior to the reprioritization that can lead to the context
switch.
To facilitate this, the interface to sem_restorebaseprio
needed to take the tcb from the holder prior to the holder
being freed. In the failure case where sched_verifytcb fails,
this adds the overhead of looking up the holder. There is
also the additional thunking on the foreach to get from the
holder to holder->tcb.
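
Roughly, and reusing the stand-in types and helpers from the sketch above, the reordered flow looks like this (a sketch of the idea, not the actual patch; the names restoreholderprio and restoreholderprio_each are illustrative):

    /* Reordered flow after the fix: the running task's slot is released
     * before anything that can context switch, and the restore logic is
     * handed the holder's tcb rather than the holder itself, since the
     * holder may already have been freed.
     */

    static void restoreholderprio(struct tcb_s *htcb, struct xsem_s *sem,
                                  void *arg)
    {
      (void)sem;
      (void)arg;
      htcb->sched_priority = htcb->base_priority;
    }

    /* Thunk used with the foreach: gets from the holder to its tcb
     * (holder->htcb in this model).
     */

    static int restoreholderprio_each(struct semholder_s *holder,
                                      struct xsem_s *sem, void *arg)
    {
      restoreholderprio(holder->htcb, sem, arg);
      return 0;
    }

    static void restorebaseprio_task_after(struct tcb_s *rtcb,
                                           struct tcb_s *stcb,
                                           struct xsem_s *sem)
    {
      /* Release the running task's slot first, while no switch can occur */

      for (int i = 0; i < 2; i++)
        {
          if (sem->holder[i].htcb == rtcb)
            {
              sem_freeholder(sem, &sem->holder[i]);
            }
        }

      /* A context switch during the reprioritization no longer leaves a
       * stale slot behind, so the boost/restoration lands on the correct
       * tcb.
       */

      (void)sem_foreachholder(sem, restoreholderprio_each, stcb);
    }
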
An alternate approach could be to leave the interface the
same, allocate a holder on the stack of sem_restoreholderprioB,
copy the sem's holder to it, free it as is done in this PR,
and then pass that stack copy's address to
sem_restoreholderprio as the holder. It could then get the
holder's tcb, but we would keep the same sem_findholder in
sched_verifytcb.
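
A sketch of that alternative, again with the stand-in types and helpers from above (illustrative only, not proposed code):

    /* Alternative: keep the holder-based interface, copy the holder onto
     * the stack of the foreach callback, free the real slot, and pass the
     * copy on as the holder.  sched_verifytcb would keep using
     * sem_findholder() unchanged.
     */

    static void restoreholderprio_alt(struct semholder_s *holder,
                                      struct xsem_s *sem, void *arg)
    {
      /* Unchanged holder-based interface; the tcb is still reached through
       * holder->htcb, but "holder" now points at the stack copy.
       */

      (void)sem;
      (void)arg;
      holder->htcb->sched_priority = holder->htcb->base_priority;
    }

    static int restoreholderprioB_alt(struct semholder_s *holder,
                                      struct xsem_s *sem, void *arg)
    {
      struct semholder_s copy = *holder;       /* Holder copied to the stack  */

      sem_freeholder(sem, holder);             /* Real slot released up front */
      restoreholderprio_alt(&copy, sem, arg);  /* Copy passed as the holder   */
      return 0;
    }
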
This caused a problem when the thread calling sem_wait() was very low priority. When it received the count, there could be higher priority threads "hogging" the CPU that prevented the lower priority task from running and, as a result, its call to sem_addholder() could be delayed indefinitely.
The fix was to have sem_post() call sem_addholder() just before restarting the thread waiting for the semaphore count.
This problem was noted by Benix Vincent who also suggested the solution.
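
As a rough sketch of that ordering, using the same stand-in types as above (not the actual sem_post() implementation; sem_addholder_tcb and up_unblock_task are placeholders here):

    /* sem_post() records the waiter as a holder on the waiter's behalf
     * before restarting it, so the holder bookkeeping no longer depends on
     * when the low-priority waiter next gets the CPU.
     */

    static void sem_addholder_tcb(struct tcb_s *htcb, struct xsem_s *sem)
    {
      /* Take a free embedded slot on behalf of the given task */

      for (int i = 0; i < 2; i++)
        {
          if (sem->holder[i].htcb == NULL)
            {
              sem->holder[i].htcb   = htcb;
              sem->holder[i].counts = 1;
              return;
            }
        }
    }

    static void sem_post_sketch(struct xsem_s *sem, struct tcb_s *waiter)
    {
      sem->semcount++;

      if (sem->semcount <= 0 && waiter != NULL)
        {
          /* The waiter becomes a holder here, in the poster's context... */

          sem_addholder_tcb(waiter, sem);

          /* ...and only then is it made ready to run.  Previously the
           * waiter added itself after being restarted, which higher
           * priority threads could delay indefinitely.
           */

          /* up_unblock_task(waiter); */
        }
    }
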