There is no functional change as the former is just a wrapper around the
latter. However, upper layers of UVM no longer need to mess with the
internals of the page allocator.
This will also help when a page cache is introduced to reduce contention
on the global mutex serializing access to pmemrange's data.
ok kettenis@, kn@, tb@
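
For context, a minimal sketch of the wrapper relationship described above
(not the verbatim sys/uvm/uvm_pglist.c code): uvm_pglistfree() simply hands
the whole page list to the pmemrange allocator, which is why swapping the
call below is not a functional change.

	/*
	 * Sketch: the MI uvm_pglistfree() forwards the list to the
	 * pmemrange allocator, so callers such as uvm_obj_free() never
	 * reach into the allocator internals directly.
	 */
	void
	uvm_pglistfree(struct pglist *list)
	{
		uvm_pmr_freepageq(list);
	}

Keeping callers behind this MI interface also leaves room to add a page
cache inside the wrapper later without touching its callers.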
-/* $OpenBSD: uvm_object.c,v 1.23 2021/12/15 12:53:53 mpi Exp $ */
+/* $OpenBSD: uvm_object.c,v 1.24 2022/01/17 13:55:32 mpi Exp $ */
/*
* Copyright (c) 2006, 2010, 2019 The NetBSD Foundation, Inc.
/*
* Extract from rb tree in offset order. The phys addresses
* usually increase in that order, which is better for
- * uvm_pmr_freepageq.
+ * uvm_pglistfree().
*/
RBT_FOREACH(pg, uvm_objtree, &uobj->memt) {
/*
uvm_unlock_pageq();
TAILQ_INSERT_TAIL(&pgl, pg, pageq);
}
- uvm_pmr_freepageq(&pgl);
+ uvm_pglistfree(&pgl);
}