
Reapplying the Decay Range and other opts#812

Open
mjp41 wants to merge 2 commits into main from decay_range2

Conversation

@mjp41
Member

@mjp41 mjp41 commented Feb 22, 2026

This reapplies the DecayRange PR to the latest version of snmalloc. It also:

  • removes some unneeded initialisation that occurs on a fast path for large allocations;
  • adds a LargeObjectCache to improve the performance of large allocations.

This is an early draft PR; the AI-assisted refactorings still need a lot of review.

This might increase footprint considerably, and care is needed to configure precisely where it operates.

Comment on lines 62 to 64
// Touch first and last bytes to ensure pages are faulted in
p[0] = 1;
p[ALLOC_SIZE - 1] = 1;


If I understand correctly, Windows will only fault in the touched pages, not the entire range. I don't know the behaviour of other kernels, but that's what the documentation I can find says: https://learn.microsoft.com/en-us/windows/win32/memory/reserving-and-committing-memory "... As an alternative to dynamic allocation, the process can simply commit the entire region instead of only reserving it. Both methods result in the same physical memory usage because committed pages do not consume any physical storage until they are first accessed."

Member Author


Yeah, the benchmark is not going to fault in all the pages, just the first and last, and that would be true on most platforms. If this lands, I'll fix it. Thanks

@mjp41 mjp41 force-pushed the decay_range2 branch 2 times, most recently from 717e506 to f3dc069 Compare February 24, 2026 11:03
@mjp41 mjp41 marked this pull request as ready for review February 24, 2026 19:41
@mjp41
Member Author

mjp41 commented Feb 24, 2026

I think this is ready to review. But we should address #814 before we commit this, as this change may either hide or worsen that behaviour.

Add a backend range that delays returning memory to the next level.  This reduces the pressure on the backend global allocator.
