mingw: fix building for ARM/AArch64

Don't use x86 inline assembly on these architectures; instead fall back
to __sync_fetch_and_or, similar to the _InterlockedOr8 used in the MSVC
case.

This corresponds to what is done in src/unix/atomic-ops.h, where
ARM/AArch64 cases end up implementing cmpxchgi with
__sync_val_compare_and_swap.
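
For reference, that unix-side pattern looks roughly like the sketch
below. This is a paraphrase rather than the verbatim libuv source, and
the main() check is only illustrative:

#include <assert.h>

/* Paraphrased sketch of the src/unix/atomic-ops.h pattern, not the
 * verbatim libuv source: x86 targets use inline assembly, everything
 * else falls back to the GCC/Clang __sync builtin. */
static int cmpxchgi(int* ptr, int oldval, int newval) {
#if defined(__i386__) || defined(__x86_64__)
  int out;
  __asm__ __volatile__ ("lock; cmpxchg %2, %1;"
                        : "=a" (out), "+m" (*(volatile int*) ptr)
                        : "r" (newval), "0" (oldval)
                        : "memory");
  return out;
#else
  /* ARM/AArch64 and anything else: let the compiler emit the
   * appropriate atomic sequence. */
  return __sync_val_compare_and_swap(ptr, oldval, newval);
#endif
}

int main(void) {
  int v = 0;
  assert(cmpxchgi(&v, 0, 1) == 0 && v == 1);  /* matching oldval: swapped */
  assert(cmpxchgi(&v, 0, 2) == 1 && v == 1);  /* stale oldval: no swap */
  return 0;
}

The win/ change below applies the same shape to uv__atomic_exchange_set.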

PR-URL: https://github.com/libuv/libuv/pull/3236
Reviewed-By: Jameson Nash <vtjnash@gmail.com>
Martin Storsjö, 2021-07-14 19:36:30 +03:00 (committed by GitHub)
parent 5c85d67bc7
commit f9ad802fa5

@@ -39,10 +39,11 @@ static char INLINE uv__atomic_exchange_set(char volatile* target) {
   return _InterlockedOr8(target, 1);
 }
 
-#else /* GCC */
+#else /* GCC, Clang in mingw mode */
 
-/* Mingw-32 version, hopefully this works for 64-bit gcc as well. */
 static inline char uv__atomic_exchange_set(char volatile* target) {
+#if defined(__i386__) || defined(__x86_64__)
+  /* Mingw-32 version, hopefully this works for 64-bit gcc as well. */
   const char one = 1;
   char old_value;
   __asm__ __volatile__ ("lock xchgb %0, %1\n\t"
@@ -50,6 +51,9 @@ static inline char uv__atomic_exchange_set(char volatile* target) {
                         : "0"(one), "m"(*target)
                         : "memory");
   return old_value;
+#else
+  return __sync_fetch_and_or(target, 1);
+#endif
 }
 
 #endif
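
A quick standalone check of the fallback's semantics (an illustration
with made-up names, not libuv code): __sync_fetch_and_or(target, 1)
atomically sets the low bit and returns the byte's previous value,
which is the same contract the lock xchgb path provides when the flag
only ever holds 0 or 1.

#include <assert.h>

/* Hypothetical demo, not part of the commit: the __sync builtin
 * returns the value the byte held *before* the OR, so the first caller
 * to set the flag sees 0 and every later caller sees 1. */
static char uv__demo_flag = 0;

int main(void) {
  assert(__sync_fetch_and_or(&uv__demo_flag, 1) == 0);  /* first set */
  assert(__sync_fetch_and_or(&uv__demo_flag, 1) == 1);  /* already set */
  return 0;
}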