Instead of calling mtx_enter_try() in each spinning loop, do it
only if the result of a lockless read indicates that the mutex has
been released. This avoids some expensive atomic compare-and-swap
operations. Up to a 5% reduction of spinning time during kernel builds
can be seen on an 8-core amd64 machine. On other machines there
was no visible effect.
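
This is the classic test-and-test-and-set pattern: spin on a plain
load of the owner field and attempt the atomic compare-and-swap only
once the lock looks free, so the cache line stays shared instead of
bouncing between cores. A minimal sketch using C11 atomics; the names
(spinlock_t, try_acquire, spin_acquire) are illustrative and not the
kernel's actual API:

	#include <stdatomic.h>
	#include <stddef.h>

	typedef struct {
		_Atomic(void *) owner;	/* NULL when the lock is free */
	} spinlock_t;

	static int
	try_acquire(spinlock_t *l, void *self)
	{
		void *expected = NULL;

		/* One CAS; succeeds only if nobody owns the lock. */
		return atomic_compare_exchange_strong(&l->owner,
		    &expected, self);
	}

	static void
	spin_acquire(spinlock_t *l, void *self)
	{
		while (!try_acquire(l, self)) {
			/*
			 * Spin on a cheap load and retry the CAS only
			 * when the lock appears released.
			 */
			do {
				/* CPU_BUSY_CYCLE() equivalent here. */
			} while (atomic_load_explicit(&l->owner,
			    memory_order_relaxed) != NULL);
		}
	}
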
Testing on powerpc64 revealed a bug in the mtx_owner declaration.
The volatile qualifier applied not to the pointer variable itself,
but to the object it points to. Move the volatile qualifier in
struct mutex to avoid a hang when going to multiuser.
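
For reference, the placement of volatile in a pointer declaration
determines what is qualified; a small standalone illustration, not
the kernel code itself:

	/*
	 * Pointer to volatile data: the pointee is volatile, but the
	 * pointer variable may be cached in a register across reads.
	 */
	volatile void *p1;

	/*
	 * Volatile pointer: the pointer variable itself is volatile,
	 * so the compiler must reload it on every access -- this is
	 * what rereading mtx->mtx_owner in the spin loop needs.
	 */
	void *volatile p2;
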
from Mateusz Guzik; input kettenis@ jca@; OK mpi@
-/* $OpenBSD: kern_lock.c,v 1.72 2022/04/26 15:31:14 dv Exp $ */
+/* $OpenBSD: kern_lock.c,v 1.73 2024/03/26 18:18:30 bluhm Exp $ */
/*
* Copyright (c) 2017 Visa Hankala
spc->spc_spinning++;
while (mtx_enter_try(mtx) == 0) {
- CPU_BUSY_CYCLE();
-
+ do {
+ CPU_BUSY_CYCLE();
#ifdef MP_LOCKDEBUG
- if (--nticks == 0) {
- db_printf("%s: %p lock spun out\n", __func__, mtx);
- db_enter();
- nticks = __mp_lock_spinout;
- }
+ if (--nticks == 0) {
+ db_printf("%s: %p lock spun out\n",
+ __func__, mtx);
+ db_enter();
+ nticks = __mp_lock_spinout;
+ }
#endif
+ } while (mtx->mtx_owner != NULL);
}
spc->spc_spinning--;
}
-/* $OpenBSD: mutex.h,v 1.20 2024/02/03 22:50:09 mvs Exp $ */
+/* $OpenBSD: mutex.h,v 1.21 2024/03/26 18:18:30 bluhm Exp $ */
/*
* Copyright (c) 2004 Artur Grabowski <art@openbsd.org>
#include <sys/_lock.h>
struct mutex {
- volatile void *mtx_owner;
+ void *volatile mtx_owner;
int mtx_wantipl;
int mtx_oldipl;
#ifdef WITNESS