From: mpi
Date: Mon, 17 Jan 2022 13:55:32 +0000 (+0000)
Subject: Call uvm_pglistfree(9) instead of uvm_pmr_freepageq().
X-Git-Url: http://artulab.com/gitweb/?a=commitdiff_plain;h=7f144f4ceabe13eb54795db3c1858c4ab25a96d4;p=openbsd

Call uvm_pglistfree(9) instead of uvm_pmr_freepageq().

There is no functional change as the former is just a wrapper around the
latter. However, upper layers of UVM do not need to mess with the
internals of the page allocator.

This will also help when a page cache is introduced to reduce contention
on the global mutex serializing access to pmemrange's data.

ok kettenis@, kn@, tb@
---

diff --git a/sys/uvm/uvm_object.c b/sys/uvm/uvm_object.c
index 838c3adafb2..4de508e3abe 100644
--- a/sys/uvm/uvm_object.c
+++ b/sys/uvm/uvm_object.c
@@ -1,4 +1,4 @@
-/*	$OpenBSD: uvm_object.c,v 1.23 2021/12/15 12:53:53 mpi Exp $	*/
+/*	$OpenBSD: uvm_object.c,v 1.24 2022/01/17 13:55:32 mpi Exp $	*/
 /*
  * Copyright (c) 2006, 2010, 2019 The NetBSD Foundation, Inc.
@@ -229,7 +229,7 @@ uvm_obj_free(struct uvm_object *uobj)
 	/*
 	 * Extract from rb tree in offset order. The phys addresses
 	 * usually increase in that order, which is better for
-	 * uvm_pmr_freepageq.
+	 * uvm_pglistfree().
 	 */
	RBT_FOREACH(pg, uvm_objtree, &uobj->memt) {
 		/*
@@ -242,6 +242,6 @@ uvm_obj_free(struct uvm_object *uobj)
 		uvm_unlock_pageq();
 		TAILQ_INSERT_TAIL(&pgl, pg, pageq);
 	}
-	uvm_pmr_freepageq(&pgl);
+	uvm_pglistfree(&pgl);
 }