From cee8d59b5884264c473f1ba6a02860c823c090ac Mon Sep 17 00:00:00 2001
From: Gregory Nutt
Date: Sun, 20 Nov 2016 12:26:08 -0600
Subject: [PATCH] Update TODO list

---
 TODO                         | 5 +++++
 sched/sched/sched_cpupause.c | 1 -
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/TODO b/TODO
index 252e195614..77fb109fe4 100644
--- a/TODO
+++ b/TODO
@@ -321,6 +321,11 @@ o SMP
                unless the spinlocks were made to be the same size as one cache
                line.
 
+               This might be doable if a write-through cache is used.  Then you
+               could always safely invalidate the cache line before reading the
+               spinlock because there should never be any dirty cache lines in
+               this case.
+
                The better option is to add compiler independent "ornamentation"
                to the spinlock so that the spinlocks are all linked together
                into a separate, non-cacheable memory regions.  Because of
diff --git a/sched/sched/sched_cpupause.c b/sched/sched/sched_cpupause.c
index 0233bc064f..513e8deb48 100644
--- a/sched/sched/sched_cpupause.c
+++ b/sched/sched/sched_cpupause.c
@@ -116,4 +116,3 @@ int sched_cpu_pause(FAR struct tcb_s *tcb)
 }
 
 #endif /* CONFIG_SMP */
-
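
A minimal sketch of what the "ornamentation" approach mentioned in the TODO entry
could look like in C, assuming a hypothetical ".nocache" linker section that the
board's linker script and MMU/MPU configuration map to a non-cacheable memory
region.  The NOCACHE_DATA macro, my_spinlock_t type, and my_spin_lock()/
my_spin_unlock() helpers are illustrative names, not existing NuttX interfaces;
only the GCC __sync builtins are real.

/* Sketch only: tag every spinlock with a dedicated section so the linker can
 * gather them into a single non-cacheable region, independent of the cache
 * line size.  The ".nocache" section name is an assumption for illustration.
 */

#include <stdint.h>

#define SP_UNLOCKED  0
#define SP_LOCKED    1

#define NOCACHE_DATA __attribute__((section(".nocache")))

typedef uint8_t my_spinlock_t;

static my_spinlock_t g_mylock NOCACHE_DATA = SP_UNLOCKED;

static void my_spin_lock(my_spinlock_t *lock)
{
  /* GCC atomic builtin; a real port would use the architecture's native
   * atomic swap (e.g. LDREX/STREX on ARM).  Because the lock word lives in
   * non-cacheable memory, re-reading it requires no cache maintenance.
   */

  while (__sync_lock_test_and_set(lock, SP_LOCKED) == SP_LOCKED)
    {
      /* Busy-wait until the holder releases the lock */
    }
}

static void my_spin_unlock(my_spinlock_t *lock)
{
  __sync_lock_release(lock);  /* Store SP_UNLOCKED with release semantics */
}

If only a write-through data cache is available, the alternative noted in the
added TODO text would be to keep the spinlock cacheable and invalidate its
cache line before each read of the lock word, which is safe because a
write-through cache never holds dirty lines.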