Imagine a world where the page size is kept secret from userspace, and any address or size that needs page-size alignment is chosen by the kernel.
That in turn allows mixed page sizes, variable page sizes, hierarchical pages, etc.
* the known-constant page size that your addresses and lengths must be aligned to so you won't get failures from `mmap(MAP_FIXED)`, `mprotect`, etc. (even if you don't call these directly, ELF relies on them, and traditionally 4K was assumed on many platforms; raising it requires a version of ld.so with proper support). There is no macro for this. On platforms I know about, it varies from 4K to 64K. Setting it to a higher power of two is by design always safe.
* the known-constant page size that unconstrained `mmap` is guaranteed to return a multiple of. There is no macro for this either. It is 4K on every platform I know about, but I'm not omniscient. Setting it to a lower power of two is by design always safe.
* the dynamic page size that the current kernel is actually using. You can get this with `getpagesize()` or `sysconf(_SC_PAGESIZE)` (incidentally, the man page for `getpagesize(2)` is outdated: it assumes the page size varies only by machine, not at boot time).
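For the dynamic value, a minimal sketch; the two compile-time bounds above have no standard macro, so the constants here are made-up placeholders:

    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical placeholders for the two compile-time bounds described above. */
    #define ASSUMED_MAX_PAGE_SIZE 65536  /* alignment assumption baked into the binary */
    #define ASSUMED_MIN_PAGE_SIZE 4096   /* granularity unconstrained mmap is assumed to honour */

    int main(void)
    {
        long runtime = sysconf(_SC_PAGESIZE);  /* what the running kernel actually uses */
        printf("runtime page size: %ld\n", runtime);
        printf("assumed bounds: [%d, %d]\n", ASSUMED_MIN_PAGE_SIZE, ASSUMED_MAX_PAGE_SIZE);
        return 0;
    }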
The macro `PAGESIZE` is traditionally provided if the upper/lower bounds are identical, and very many programs rely on it. Unfortunately, there's no way to ask the kernel to increase the alignment that unconstrained `mmap` returns, which would safely allow defining PAGESIZE to the largest possible value.
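You can fake a larger alignment for your own allocations by over-allocating and trimming; a sketch of that workaround (mine, not something the comment proposes), assuming len and align are multiples of the real page size and align is a power of two:

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    /* Get a mapping whose start is aligned to `align` by mapping extra space
       and unmapping the excess. Assumes len and align are multiples of the
       real page size and align is a power of two. */
    static void *mmap_aligned(size_t len, size_t align)
    {
        size_t span = len + align;  /* enough slack to contain an aligned start */
        uint8_t *raw = mmap(NULL, span, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (raw == MAP_FAILED)
            return NULL;

        uintptr_t start = ((uintptr_t)raw + align - 1) & ~(uintptr_t)(align - 1);
        size_t head = start - (uintptr_t)raw;
        size_t tail = span - head - len;

        if (head) munmap(raw, head);                    /* drop the unaligned prefix */
        if (tail) munmap((uint8_t *)start + len, tail); /* drop the leftover suffix */
        return (void *)start;
    }

This only helps for mappings you create yourself, though; anything mapped by ld.so or malloc still comes back with only the real page's alignment, which is why it can't rescue a global PAGESIZE.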
Note that it is possible to get link-time errors for incompatible definitions of constants (or ranges thereof) by relying on `ld` magic: each value pulls in a variable defined in a distinct object file that also defines an additional shared variable, so mixing values causes multiple-definition errors. But this would need to be done by whoever provides the macros (which should be libc).
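A sketch of that ld trick with made-up names, assuming a static archive containing one check object per candidate value:

    /* pagesize_check.h: each PAGESIZE value references a distinct symbol,
       which pulls exactly one check object out of libpagesize_check.a. */
    #if PAGESIZE == 4096
    extern char __pagesize_check_4096;
    __attribute__((used)) static char *const __pagesize_pull = &__pagesize_check_4096;
    #elif PAGESIZE == 16384
    extern char __pagesize_check_16384;
    __attribute__((used)) static char *const __pagesize_pull = &__pagesize_check_16384;
    #endif

    /* check_4096.c, compiled into the archive */
    char __pagesize_check_4096 = 1;
    char __pagesize_agreed_value = 1;  /* every check object defines this strongly */

    /* check_16384.c, compiled into the archive */
    char __pagesize_check_16384 = 1;
    char __pagesize_agreed_value = 1;  /* pulling in two different choices collides here */

If every translation unit agrees, only one check object is pulled in and the link succeeds; if two disagree, both objects land in the link and ld reports a multiple-definition error for __pagesize_agreed_value.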
But when someone releases a new CPU with a larger or smaller cache, all old software continues to work.
A secret page size would offer the same benefit.
And as with page sizes, the big problem with cache line sizes is not people designing things against a specific line size, but the (much more numerous) cases where they design something that only works well for one fixed size without even knowing it, because they literally never thought about it, and it worked fine because every machine they used was similar.
It doesn't matter what APIs you were provided or what analysis you did: you don't know your software works on different hardware until you test it on that hardware.
And that 'copy' might be zero-overhead remapping of the original pages.
    uint8_t* mem = malloc_pages(1024);     /* hypothetical call: the kernel picks page-aligned backing of at least 1024 bytes */
    mprotect_paged(mem, 1024, PROT_READ);  /* hypothetical call: protection is applied to the whole pages covering the range */
I think it's even part of POSIX: “The system performs mapping operations over whole pages. Thus, while the parameter len need not meet a size or alignment constraint, the system shall include, in any mapping operation, any partial page specified by the address range starting at pa and continuing for len bytes.”
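Today's manual equivalent of that paged API is to widen an arbitrary byte range to whole pages before calling mprotect; a sketch of the standard idiom (the helper name is mine):

    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Apply `prot` to every page overlapping [p, p+len). Note this also
       affects bytes outside the range that happen to share those pages. */
    static int mprotect_range(void *p, size_t len, int prot)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        uintptr_t start = (uintptr_t)p & ~(uintptr_t)(page - 1);                    /* round down to a page boundary */
        uintptr_t end   = ((uintptr_t)p + len + page - 1) & ~(uintptr_t)(page - 1); /* round up to a page boundary */
        return mprotect((void *)start, end - start, prot);
    }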
"possibly copying a load of data into it for you incase you asked for a readonly range."
Considering that it'd be completely unacceptable to ever hit such an emulated range in code of any significance (it'd be a slowdown in the thousands of times), it'd make mprotect entirely pointless and effectively broken. Unless of course you add "padding" of the expected page size (not applicable for guard pages though; those are just SOL), which is basically the status quo, except that instead of apps with hard-coded page sizes crashing on an unexpected one, they just become potentially 1000x slower. And in practice there's no difference between a crash and 1000x slower; I'd even say a crash is better: it means a bug report can't be dismissed as just a perf issue, and it's clear that something is going wrong in the first place.
The size of the mprotect region does not matter. What matters is worst-case behavior: regardless of the size, if a single critical byte of memory ends up in the emulated region, your program is dead to the user, causing data loss or whatever other consequences follow from your program being, practically speaking, completely frozen.
Trying to emulate that under 16kb pages would fault every memory operation.
How would you have had users handle SIMD tails?
But realistically, you only need to know the lower bound for the page size, so pages larger by an unknown multiple are not a problem. Or use masked loads, and don't even worry about pages.
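A sketch of the lower-bound trick (MIN_PAGE_SIZE is an assumption here, not an API): an over-reading tail load can't fault as long as it stays inside the same minimum-size page as the last valid byte, and since any larger real page is built from whole minimum-size pages, a bigger page size only makes the check more conservative than it needs to be:

    #include <stddef.h>
    #include <stdint.h>

    #define MIN_PAGE_SIZE 4096u  /* assumed lower bound on the page size */

    /* Returns nonzero if a load of `width` bytes starting anywhere inside
       [p, p+len) cannot leave the page holding the last valid byte, so it
       cannot fault even though it may read past the end. Requires len > 0. */
    static int tail_overread_is_safe(const void *p, size_t len, size_t width)
    {
        uintptr_t last_valid = (uintptr_t)p + len - 1;
        uintptr_t last_read  = (uintptr_t)p + len + width - 2;  /* worst case: load starts at the final valid byte */
        return last_valid / MIN_PAGE_SIZE == last_read / MIN_PAGE_SIZE;
    }

When the check fails you fall back to a scalar loop or, on hardware that has them, masked loads, which sidestep the question entirely.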
Same reason why I think Electron is great. Devex for 3 people is so much more important than UX for the 30M users.
I take it back. It is possible for me to disagree more strongly.
You don't need to abstract away page size; abstraction isn't the solution to all problems. You just have to expose it. Devs shouldn't assume the page size. They simply need to be able to query whether it's 4 or 16 or 64 or however large, and voila.
I think they were being sarcastic, and you might agree more than you realize.
Completely untrue; it can be entirely circumvented by one ADB flag (--bypass-low-target-sdk-block) or a third-party .apk installer.
Instead, "Google took my favorite software, which I paid money for, off my device, and fuck 'em for doing it" is a totally okay thing for normal people to say.
Huh? Tell that to my phone. Maybe Google dropped support from their own OS builds even for older phones, but I don't think AOSP as such has dropped 32-bit compatibility yet, and other OEMs have kept 32-bit support on those phones whose hardware still supports it.
(And at least one or two Chinese OEMs have been shipping some sort of compatibility layer for phones without hardware support, though possibly only for the domestic market, not export models/OS builds.)
It's true, though, that Google isn't too heavily invested in binary compatibility for native code, though to some extent their hand might also have been forced by the SoC manufacturers, because apparently 32-bit hardware compatibility is more expensive to provide on ARM.
https://source.android.com/docs/core/architecture/16kb-page-...
Yes — because the move to 16k pages happened at the same time as the move to arm64.
The executable format (if it's native code) doesn't actually specify what page size it wants, right? So there would be no way for the OS to block 4 KB page apps.
I guess they could block stuff that targets an older API version, but that would block a lot of things that would otherwise work just fine.
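For what it's worth, the nearest thing ELF does record is the p_align of its LOAD segments, which (if I remember the 16 KB docs linked above right) is roughly what Android's tooling checks for native libraries; a rough sketch of inspecting it, assuming a same-endian ELF64 file and skipping validation:

    #include <elf.h>
    #include <stdio.h>

    /* Print the largest p_align among PT_LOAD segments of an ELF64 binary.
       16384 or more suggests it was linked with 16 KB pages in mind; 4096
       means 4 KB alignment is baked in. No validation of magic/class here. */
    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        Elf64_Ehdr eh;
        if (fread(&eh, sizeof eh, 1, f) != 1) { perror("read ehdr"); return 1; }

        Elf64_Xword max_align = 0;
        for (int i = 0; i < eh.e_phnum; i++) {
            Elf64_Phdr ph;
            if (fseek(f, (long)(eh.e_phoff + (Elf64_Off)i * eh.e_phentsize), SEEK_SET) != 0 ||
                fread(&ph, sizeof ph, 1, f) != 1) { perror("read phdr"); return 1; }
            if (ph.p_type == PT_LOAD && ph.p_align > max_align)
                max_align = ph.p_align;
        }
        fclose(f);

        printf("max PT_LOAD p_align: %llu\n", (unsigned long long)max_align);
        return 0;
    }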