For SDMMC1, IDMA cannot access SRAM123 or SRAM4. Refer to ST AN5200 for
details. This patch makes stm32_dmapreflight check the buffer address and
return an error when the buffer is located in an invalid address space.
This does not fix the hardware limitation but at least makes it visible.
This simplifies the sdmmc driver when the IDMA is in use. There is no need to mix
IDMA and interrupt-based transfers; instead, when making unaligned data transfers,
just perform the IDMA into an internal aligned buffer and then copy the data.
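As a rough illustration, the kind of check stm32_dmapreflight performs might look
like the sketch below; the helper name and the SRAM address ranges are assumptions
for an STM32H7-class part, not the actual driver code.

#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed locations of SRAM123 and SRAM4 on an STM32H7-class MCU; verify
 * against the device memory map.
 */

#define SRAM123_START  0x30000000u
#define SRAM123_END    0x30048000u
#define SRAM4_START    0x38000000u
#define SRAM4_END      0x38010000u

/* Return 0 if the buffer is reachable by the SDMMC1 IDMA, or -EFAULT if
 * any part of it lies in an address space the IDMA cannot access
 * (hypothetical helper).
 */

static int sdmmc1_idma_preflight(uintptr_t buffer, size_t buflen)
{
  uintptr_t end = buffer + buflen;

  if ((buffer < SRAM123_END && end > SRAM123_START) ||
      (buffer < SRAM4_END && end > SRAM4_START))
    {
      return -EFAULT;
    }

  return 0;
}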
Signed-off-by: Jukka Laitinen <jukka.laitinen@intel.com>
It should not be an error to clean cache beyond the dma source buffer
boundaries. It would just prematurely push some unrelated data from
cache to memory.
The only case where it could corrupt memory is if a dma destination
buffer overlaps the same cache line as the source buffer. But this
can't happen, because a destination buffer must always be cache-line
aligned when using write-back cache.
This patch enables doing dma tx-only transfer from unaligned source
buffer when using write-back cache.
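A minimal sketch of what this allows, assuming the NuttX up_clean_dcache()
interface and a 32-byte cache line: the clean range is rounded outward to line
boundaries, and the extra lines that get written back only hold unrelated data.

#include <stddef.h>
#include <stdint.h>
#include <nuttx/cache.h>

#define CACHE_LINE 32u  /* assumed D-cache line size */

/* Clean (write back) the source buffer before a TX-only DMA.  Rounding
 * the range outward to cache-line boundaries is safe: it merely pushes
 * some unrelated data from cache to memory a little early.
 */

static void dma_clean_txbuffer(const void *buf, size_t len)
{
  uintptr_t start = (uintptr_t)buf & ~(uintptr_t)(CACHE_LINE - 1);
  uintptr_t end   = ((uintptr_t)buf + len + CACHE_LINE - 1) &
                    ~(uintptr_t)(CACHE_LINE - 1);

  up_clean_dcache(start, end);
}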
Signed-off-by: Jukka Laitinen <jukka.laitinen@intel.com>
Correct the flash write and erase functions; they inherited some
broken code from other platforms. Also fix the confusion between
erase block (sector) and page sizes.
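To illustrate the distinction only (all names and sizes below are hypothetical,
not the NuttX progmem API): erasing works on whole erase blocks (sectors), while
programming works in much smaller pages, and the two sizes must not be mixed up.

#include <stddef.h>
#include <stdint.h>

#define ERASE_BLOCK_SIZE  (128 * 1024)  /* assumed sector size      */
#define PROG_PAGE_SIZE    32            /* assumed programming unit */

/* Hypothetical low-level helpers */

int flash_erase_block(size_t block);
int flash_write_page(uintptr_t addr, const uint8_t *data, size_t len);

/* Erase by sector, then program by page (flash assumed to start at
 * address 0 for simplicity).
 */

static int flash_update(uintptr_t addr, const uint8_t *data, size_t len)
{
  uintptr_t a;
  size_t off;

  for (a = addr & ~(uintptr_t)(ERASE_BLOCK_SIZE - 1); a < addr + len;
       a += ERASE_BLOCK_SIZE)
    {
      flash_erase_block(a / ERASE_BLOCK_SIZE);
    }

  for (off = 0; off < len; off += PROG_PAGE_SIZE)
    {
      size_t chunk = len - off < PROG_PAGE_SIZE ? len - off : PROG_PAGE_SIZE;
      flash_write_page(addr + off, data + off, chunk);
    }

  return 0;
}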
Signed-off-by: Jari Nippula <jari.nippula@intel.com>
If the flash option register was locked before modifying it, return
it to the locked state after the modification.
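A hedged sketch of the pattern, with the register addresses, bit name, and
unlock keys assumed for an STM32H7-style option-byte interface rather than
taken from the driver:

#include <stdbool.h>
#include <stdint.h>

#define FLASH_OPTCR         (*(volatile uint32_t *)0x52002018)  /* assumed */
#define FLASH_OPTKEYR       (*(volatile uint32_t *)0x52002008)  /* assumed */
#define FLASH_OPTCR_OPTLOCK (1u << 0)

#define FLASH_OPTKEY1       0x08192a3bu
#define FLASH_OPTKEY2       0x4c5d6e7fu

static void modify_option_register(uint32_t clearbits, uint32_t setbits)
{
  /* Remember whether the option register was locked on entry */

  bool waslocked = (FLASH_OPTCR & FLASH_OPTCR_OPTLOCK) != 0;

  if (waslocked)
    {
      /* Unlock the option register */

      FLASH_OPTKEYR = FLASH_OPTKEY1;
      FLASH_OPTKEYR = FLASH_OPTKEY2;
    }

  FLASH_OPTCR = (FLASH_OPTCR & ~clearbits) | setbits;

  if (waslocked)
    {
      /* Return the option register to the locked state */

      FLASH_OPTCR |= FLASH_OPTCR_OPTLOCK;
    }
}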
Signed-off-by: Jukka Laitinen <jukka.laitinen@intel.com>
Remove support for the Codesourcery, Atollic, DevKitArm, Raisonance, and CodeRed toolchains. Not only are these tools old and no longer used, but they are all equivalent to standard ARM EABI toolchains. Retaining specific support has no effect (they are still supported, but now just as generic EABI toolchains).
First, configure the dmacfg in spi_dmarxsetup and spi_dmatxsetup. Then,
check for dmacapable, and only after that set up the dma.
This way the dmacapable check actually works, and we don't need to initialize
the dmacfg structures twice.
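A rough sketch of the resulting flow with abbreviated signatures; the structure
layout and helper prototypes here are assumptions, only the ordering matters:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct spi_dev_s;                      /* opaque device handle (assumed) */

struct dmacfg_s                        /* abbreviated configuration */
{
  uint32_t paddr;                      /* peripheral address */
  uint32_t maddr;                      /* memory address     */
  size_t   ndata;                      /* number of items    */
};

void spi_dmarxsetup(struct spi_dev_s *dev, struct dmacfg_s *cfg,
                    void *rxbuffer, size_t nwords);
void spi_dmatxsetup(struct spi_dev_s *dev, struct dmacfg_s *cfg,
                    const void *txbuffer, size_t nwords);
bool stm32_dmacapable(const struct dmacfg_s *cfg);
void spi_dmastart(struct spi_dev_s *dev, struct dmacfg_s *rxcfg,
                  struct dmacfg_s *txcfg);
void spi_exchange_nodma(struct spi_dev_s *dev, const void *txbuffer,
                        void *rxbuffer, size_t nwords);

static void spi_exchange(struct spi_dev_s *dev, const void *txbuffer,
                         void *rxbuffer, size_t nwords)
{
  struct dmacfg_s rxcfg;
  struct dmacfg_s txcfg;

  /* 1. Build both DMA configurations once (no hardware touched yet) */

  spi_dmarxsetup(dev, &rxcfg, rxbuffer, nwords);
  spi_dmatxsetup(dev, &txcfg, txbuffer, nwords);

  /* 2. Check capability against those very same configurations */

  if (!stm32_dmacapable(&rxcfg) || !stm32_dmacapable(&txcfg))
    {
      /* Fall back to the non-DMA exchange */

      spi_exchange_nodma(dev, txbuffer, rxbuffer, nwords);
      return;
    }

  /* 3. Only now program and start the DMA channels */

  spi_dmastart(dev, &rxcfg, &txcfg);
}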
Signed-off-by: Jukka Laitinen <jukka.laitinen@intel.com>
When starting a dma transfer, the dcache for the TX buffer should be cleaned.
A "flush" also performs an invalidate, which is unnecessary. The TX buffer
can be unaligned to the cache line in some (most) cases, whereas the RX buffer
can never be.
The cache for the receive buffer can be dirty and valid before the call to exchange.
Thus another memory access (hitting the same cache line) may corrupt the receive data
while waiting for the transfer to complete. So the receive buffer should be
invalidated before the transfer.
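A hedged sketch of that cache maintenance, assuming the NuttX up_clean_dcache()
and up_invalidate_dcache() interfaces; the RX buffer is expected to be
cache-line aligned, the TX buffer need not be.

#include <stddef.h>
#include <stdint.h>
#include <nuttx/cache.h>

/* Cache maintenance before starting an SPI DMA exchange.  The TX buffer
 * is only cleaned (written back): invalidating it is unnecessary and is
 * unsafe if it shares a cache line with other live data.  The RX buffer
 * is invalidated up front so that no dirty line can be written back over
 * the incoming data while the transfer is in flight.
 */

static void spi_dma_prepare_buffers(const void *txbuffer, void *rxbuffer,
                                    size_t nbytes)
{
  if (txbuffer != NULL)
    {
      up_clean_dcache((uintptr_t)txbuffer, (uintptr_t)txbuffer + nbytes);
    }

  if (rxbuffer != NULL)
    {
      /* Assumes rxbuffer is cache-line aligned, as required for DMA RX
       * with a write-back cache.
       */

      up_invalidate_dcache((uintptr_t)rxbuffer,
                           (uintptr_t)rxbuffer + nbytes);
    }
}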
Signed-off-by: Jukka Laitinen <jukka.laitinen@intel.com>
Modify some comments and debug assertions, which were inherited from previous
versions and no longer make sense. Also add a few nerr printouts to make it
easier to debug running out of buffers.
Signed-off-by: Jukka Laitinen <jukka.laitinen@intel.com>
See the following commit in SDK:
commit 62a2fb4fd3001aefad9ec3b2e2e7c47e5b0f21e1
Author: SPRESENSE <41312067+SPRESENSE@users.noreply.github.com>
Date: Fri Jan 24 13:32:04 2020 +0900
Enable dummy transfer by SPI using DMA
Signed-off-by: Masayuki Ishikawa <Masayuki.Ishikawa@jp.sony.com>
All complaints fixed except for those that were not possible to fix:
- Use of mixed-case identifiers in ESP32 files. These are references to Espressif ROM functions, which are outside the scope of NuttX.
- Remove per-thread errno from the TCB structure (pterrno)
- Remove get_errno() and set_errno() as functions. The macros are still available as stubs and will be needed in the future if we need to access the errno from a different address environment (KERNEL mode).
- Add errno value to the tls_info_s structure definitions
- Move sched/errno to libs/libc/errno. Replace the old TCB access to the errno with TLS access to the errno (see the sketch after this list).
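A hedged sketch of what errno access via TLS can look like after the move,
assuming a tl_errno field in struct tls_info_s and a tls_get_info() accessor
(treat these names as assumptions here, not a quote of the final code):

#include <nuttx/tls.h>

/* Sketch: errno is no longer a field in the TCB; __errno() resolves the
 * per-thread value through the TLS block instead.
 */

int *__errno(void)
{
  /* tls_get_info() is assumed to return this thread's struct tls_info_s,
   * which carries the per-thread errno in its tl_errno field.
   */

  return &tls_get_info()->tl_errno;
}

/* get_errno()/set_errno() can then remain as thin stubs on top of this:
 *
 *   #define get_errno()   (*__errno())
 *   #define set_errno(e)  do { *__errno() = (e); } while (0)
 */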