/****************************************************************************
* mm/iob/iob_initialize.c
*
* SPDX-License-Identifier: Apache-2.0
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership. The
* ASF licenses this file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance with the
* License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*
****************************************************************************/

/****************************************************************************
 * Included Files
 ****************************************************************************/

#include <nuttx/config.h>
#include <stdbool.h>
#include <nuttx/mm/iob.h>
#include "iob.h"

/****************************************************************************
 * Pre-processor Definitions
 ****************************************************************************/

/* Round the I/O buffer size up to the specified alignment size */

#ifdef CONFIG_IOB_ALLOC
# define IOB_ALIGN_SIZE ROUNDUP(sizeof(struct iob_s) + CONFIG_IOB_BUFSIZE, \
                                CONFIG_IOB_ALIGNMENT)
#else
# define IOB_ALIGN_SIZE ROUNDUP(sizeof(struct iob_s), CONFIG_IOB_ALIGNMENT)
#endif
#define IOB_BUFFER_SIZE (IOB_ALIGN_SIZE * CONFIG_IOB_NBUFFERS + \
                         CONFIG_IOB_ALIGNMENT - 1)
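
/* Worked example with hypothetical values (not the project defaults):
 * with CONFIG_IOB_ALLOC disabled, sizeof(struct iob_s) == 210 (header
 * fields plus the embedded CONFIG_IOB_BUFSIZE byte io_data array),
 * CONFIG_IOB_ALIGNMENT == 16 and CONFIG_IOB_NBUFFERS == 8:
 *
 *   IOB_ALIGN_SIZE  = ROUNDUP(210, 16) = 224
 *   IOB_BUFFER_SIZE = 224 * 8 + 16 - 1 = 1807
 *
 * The extra CONFIG_IOB_ALIGNMENT - 1 bytes give iob_initialize() room to
 * shift the first iob_s forward so that every io_data area starts on an
 * aligned boundary.
 */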
/****************************************************************************
* Private Data
****************************************************************************/

/* The following raw buffer will be divided into iob_s instances.  The
 * initialization procedure below ensures that the io_data member of each
 * iob_s is aligned to a CONFIG_IOB_ALIGNMENT memory boundary.
 */

#ifdef IOB_SECTION
static uint8_t g_iob_buffer[IOB_BUFFER_SIZE] locate_data(IOB_SECTION);
#else
static uint8_t g_iob_buffer[IOB_BUFFER_SIZE];
#endif
#if CONFIG_IOB_NCHAINS > 0
/* This is a pool of pre-allocated iob_qentry_s buffers */
static struct iob_qentry_s g_iob_qpool[CONFIG_IOB_NCHAINS];
#endif
/****************************************************************************
* Public Data
****************************************************************************/
/* A list of all free, unallocated I/O buffers */
FAR struct iob_s *g_iob_freelist;

/* A list of I/O buffers that are committed for allocation.
 *
 * There can be a failure in IOB allocation due to the asynchronous
 * behavior of sem_post():  if a freed IOB were returned directly to the
 * free list before the semaphore is posted, an interrupt handler or
 * another task could remove it from the free list and decrement the
 * semaphore count below zero before the waiting task runs, which is an
 * invalid state and leaves the free list and the count out of sync.
 * Instead, when a task is waiting for an IOB, the freed IOB is placed on
 * this committed list before the semaphore is posted, and the awakened
 * waiter takes its IOB from the committed list rather than the free list.
 * In this way, the content of the free list and the value of the
 * semaphore count always remain in sync.
 */

FAR struct iob_s *g_iob_committed;
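
/* Illustrative sketch only -- the real allocation and free logic lives in
 * the other files of mm/iob, and the details here are simplified
 * assumptions:
 *
 *   Free side, when a task is known to be waiting on g_iob_sem:
 *
 *     iob->io_flink   = g_iob_committed;   Hand off on the committed list
 *     g_iob_committed = iob;
 *     nxsem_post(&g_iob_sem);              Wake exactly one waiter
 *
 *   Allocation side, after the wait on g_iob_sem returns:
 *
 *     iob             = g_iob_committed;   The IOB reserved for this waiter
 *     g_iob_committed = iob->io_flink;
 *
 * Because a committed IOB is never on g_iob_freelist, an interrupt handler
 * or another task that allocates without waiting cannot steal it before
 * the awakened waiter runs.
 */
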
#if CONFIG_IOB_NCHAINS > 0
/* A list of all free, unallocated I/O buffer queue containers */
FAR struct iob_qentry_s *g_iob_freeqlist;
/* A list of I/O buffer queue containers that are committed for allocation */
FAR struct iob_qentry_s *g_iob_qcommitted;
#endif

/* Counting semaphores that track the number of free IOBs/qentries */

sem_t g_iob_sem = SEM_INITIALIZER(CONFIG_IOB_NBUFFERS);
#if CONFIG_IOB_THROTTLE > 0
/* Counts available I/O buffers when throttled */
sem_t g_throttle_sem = SEM_INITIALIZER(CONFIG_IOB_NBUFFERS -
                                       CONFIG_IOB_THROTTLE);
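
/* For example, with hypothetical values CONFIG_IOB_NBUFFERS == 36 and
 * CONFIG_IOB_THROTTLE == 8, g_throttle_sem starts at 28:  a throttled
 * allocator sees the pool as exhausted once 28 IOBs are in use, leaving
 * the remaining 8 IOBs available only to un-throttled allocations.
 */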
#endif
#if CONFIG_IOB_NCHAINS > 0
/* Counts free I/O buffer queue containers */
sem_t g_qentry_sem = SEM_INITIALIZER(CONFIG_IOB_NCHAINS);
#endif
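
/* Spinlock that serializes access to the IOB lists above; it is presumably
 * taken by the allocation and free logic elsewhere in mm/iob.
 */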
spinlock_t g_iob_lock = SP_UNLOCKED;
/****************************************************************************
* Public Functions
****************************************************************************/

/****************************************************************************
* Name: iob_initialize
*
* Description:
* Set up the I/O buffers for normal operations.
*
****************************************************************************/

void iob_initialize(void)
{
  int i;
  uintptr_t buf;

  /* Get a start address which plus offsetof(struct iob_s, io_data) is
   * aligned to the CONFIG_IOB_ALIGNMENT memory boundary.
   */

  buf = ROUNDUP((uintptr_t)g_iob_buffer + offsetof(struct iob_s, io_data),
                CONFIG_IOB_ALIGNMENT) - offsetof(struct iob_s, io_data);
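
  /* Worked example with hypothetical numbers (CONFIG_IOB_ALLOC disabled,
   * so io_data is the embedded data array):  if g_iob_buffer starts at
   * address 0x1001, offsetof(struct iob_s, io_data) is 12 and
   * CONFIG_IOB_ALIGNMENT is 16, then ROUNDUP(0x1001 + 12, 16) = 0x1010 and
   * buf = 0x1010 - 12 = 0x1004.  The first iob_s begins at 0x1004 and its
   * io_data falls at 0x1010, a 16-byte boundary.  Every later io_data is
   * aligned as well because IOB_ALIGN_SIZE is a multiple of
   * CONFIG_IOB_ALIGNMENT.
   */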
  /* Get I/O buffer instances from the start address and add each I/O
   * buffer to the free list.
   */
  for (i = 0; i < CONFIG_IOB_NBUFFERS; i++)
    {
      FAR struct iob_s *iob = (FAR struct iob_s *)(buf + i * IOB_ALIGN_SIZE);
      /* Add the pre-allocated I/O buffer to the head of the free list */

      iob->io_flink = g_iob_freelist;
#ifdef CONFIG_IOB_ALLOC
      iob->io_bufsize = CONFIG_IOB_BUFSIZE;
      iob->io_data = (FAR uint8_t *)(iob + 1);
#endif
      g_iob_freelist = iob;
    }

#if CONFIG_IOB_NCHAINS > 0
  /* Add each I/O buffer chain queue container to the free list */

  for (i = 0; i < CONFIG_IOB_NCHAINS; i++)
    {
      FAR struct iob_qentry_s *iobq = &g_iob_qpool[i];
      /* Add the pre-allocated buffer container to the head of the free
       * list.
       */

      iobq->qe_flink = g_iob_freeqlist;
      g_iob_freeqlist = iobq;
    }
#endif
}