Ordering of non-atomic operations through the use of atomic operations as the basis for the higher-level synchronization facilities

First, I quote a passage from "C++ Concurrency in Action" by Anthony Williams:

class spinlock_mutex
{
    std::atomic_flag flag;
public:
    spinlock_mutex():
        flag(ATOMIC_FLAG_INIT)
    {}
    void lock()
    {
        while(flag.test_and_set(std::memory_order_acquire));
    }
    void unlock()
    {
        flag.clear(std::memory_order_release);
    }
};

The lock() operation is a loop on flag.test_and_set() using std::memory_order_acquire ordering, and the unlock() is a call to flag.clear() with std::memory_order_release ordering. When the first thread calls lock(), the flag is initially clear, so the first call to test_and_set() will set the flag and return false, indicating that this thread now has the lock, and terminating the loop. The thread is then free to modify any data protected by the mutex. Any other thread that calls lock() at this time will find the flag already set and will be blocked in the test_and_set() loop.

When the thread with the lock has finished modifying the protected data, it calls unlock(), which calls flag.clear() with std::memory_order_release semantics. This then synchronizes-with (see section 5.3.1) a subsequent call to flag.test_and_set() from an invocation of lock() on another thread, because this call has std::memory_order_acquire semantics. Because the modification of the protected data is necessarily sequenced before the unlock() call, this modification happens-before the unlock() and thus happens-before the subsequent lock() call from the second thread (because of the synchronizes-with relationship between the unlock() and the lock()) and happens-before any accesses to that data from this second thread once it has acquired the lock.

Q: Suppose there are only two threads. Thread A invokes lock() on the object m1 for the first time, and thread B then invokes lock() on m1 before thread A invokes unlock(). Why does flag.test_and_set(std::memory_order_acquire) return true rather than false (the initial value) when lock() is invoked on m1 in thread B?

I know about release sequences, but constituting a release sequence requires an atomic operation on the object with std::memory_order_release, and at this point no operation with std::memory_order_release has been invoked.

The acquire and release semantics relate to the other (protected) resource, not shown here. In particular, they forbid moving accesses to that resource after the lock or before the unlock. The atomic operations themselves are fully ordered.

Because the operations are fully ordered, your hypothetical order A:lock, B:lock, A:unlock is seen in the same order by both threads. Hence, when thread B calls lock, it sees only the lock from A and not the unlock.

The notion of one thread "doing things before" another doesn't really come into this beyond the behaviour you already described. The memory_order is irrelevant here: it specifies how regular, non-atomic memory accesses are ordered around the atomic operation, not the outcome of the atomic operation itself.

The reason for having it is that if you do:


With two such threads, foo in one thread cannot read or write the protected data before the lock or after the unlock of the thread in question. This, combined with the atomicity of the lock and unlock operations themselves, gives the behaviour we expect (i.e. no concurrent access from foo).

There is only one std::atomic_flag. At any one time, it is either set (true) or clear (false).

std::atomic_flag::test_and_set is defined as

Atomically changes the state of a std::atomic_flag to set (true) and returns the value it held before.

When A has called lock, it has changed the flag to set, so the state returned when B tries to lock is set. This is evaluated as the condition of the while, so the loop continues. Thread B will continue "spinning" in this loop until the lock is released.

Finally when A calls unlock, the flag is changed to clear. Then B can test again, and the false ends the loop.

  • The writing style suggests that the code and first two paragraphs are quotes from Anthony Williams's book?
  • @MSalters Yes, the code and the first two paragraphs are all quotes from Anthony Williams's book.
  • @Anthony Williams Could you help me solve the problem above?
  • I don't think you can ping book authors like that ;) But Mr. Williams has a consultancy firm, justsoftwaresolutions.co.uk
  • I know what you mean, but I decided to give it a try.
  • The atomic operations themselves are fully ordered only if they are invoked with std::memory_order_seq_cst. Otherwise, they are not always fully ordered.
  • This answer is very misleading. In multi-threaded code, the concept of "at any one time" simply does not exist. Hence, the statement "At any one time, it is either set or clear" is wrong, and indeed two threads can simultaneously observe different values. (in particular with memory_order_relaxed)
  • @MSalters The whole point of atomic primitives are to reintroduce the concept of "at any one time" to multi-threaded code. The memory order tags distinguish what other state changes are synchronised with a particular operation.
  • @Caleth I think the concept of "at any one time" should only apply to operations invoked with std::memory_order_seq_cst. Please check C++ Concurrency in Action for the answers.
  • @albizzia for a given atomic primitive, there exists a total order of operations that are applied to it. Memory order does not affect that.
  • @Caleth: The C++ order of operations depends on happens-before and happens-after. Since two atomic operations do not necessarily have a happens-before relation, there is no ordering between them. Memory order critically affects this by potentially creating a happens-before order.