Use 'volatile' to force a fresh memory access on each lockless atomic
store and read. Without this, a loop consisting of an atomic_read with
memory_order_relaxed would simply be optimized away. Using a volatile
cast is also cheaper than adding a full compiler barrier in that case.
This use of a volatile cast mirrors the Linux kernel ACCESS_ONCE macro.
Without this change, the more rigorous atomic test cases introduced in
a following patch hang, because the atomic accesses are optimized
away.
Signed-off-by: Jarno Rajahalme <jrajahalme@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
\
if (IS_LOCKLESS_ATOMIC(*dst__)) { \
atomic_thread_fence(ORDER); \
- *dst__ = src__; \
+ *(typeof(*DST) volatile *)dst__ = src__; \
atomic_thread_fence_if_seq_cst(ORDER); \
} else { \
atomic_store_locked(dst__, src__); \
\
if (IS_LOCKLESS_ATOMIC(*src__)) { \
atomic_thread_fence_if_seq_cst(ORDER); \
- *dst__ = *src__; \
+ *dst__ = *(typeof(*SRC) volatile *)src__; \
} else { \
atomic_read_locked(src__, dst__); \
} \