
Remove hpa_sec_batch_fill_extra and calculate nallocs based on misses.#80

Draft
gctony wants to merge 2 commits into facebook:dev from gctony:remove_hpa_sec_batch_fill_extra

Conversation


@gctony gctony commented Mar 2, 2026

Remove the batch_fill_extra field (and its associated setters) and calculate nallocs based on misses: the more misses recorded, the higher the nallocs value, up to a maximum.

@gctony gctony requested a review from spredolac March 2, 2026 19:14
@meta-cla meta-cla bot added the cla signed label Mar 2, 2026
@gctony gctony marked this pull request as draft March 2, 2026 19:14
gctony (Author) commented Mar 2, 2026

I will fix test/unit/hpa_sec_integration.c after we finalize the approach.

@gctony gctony force-pushed the remove_hpa_sec_batch_fill_extra branch from 658fd45 to 8b65560 Compare March 2, 2026 19:29
gctony (Author) commented Mar 2, 2026

A couple of thoughts on this:

  • I wonder if there's a better way to calculate the max instead of hard-coding it. A scheme to calculate a higher value for small sizes and a lower value for larger sizes to maybe waste less space?
  • I'd love to use the miss rate over time, e.g., if the miss rate is low we can use a lower nallocs value to, again, waste less space. Not sure it's worth adding timing info to the stats though.

@gctony gctony force-pushed the remove_hpa_sec_batch_fill_extra branch 2 times, most recently from b8124b1 to 4e8ea45 Compare March 3, 2026 16:21
gctony (Author) commented Mar 3, 2026

Updated the max nallocs calculation; it is now based on the sec's max_bytes.

gctony (Author) commented Mar 11, 2026

Latest changes:

  • I no longer use the miss rate.
  • I calculate a min_nallocs (hard-coded to 4, equivalent to the previous default) and a max_nallocs (max_bytes / 3).
  • The idea is to allocate at least min_nallocs extents and go up to max_nallocs if we can allocate them out of the large huge page we used.

src/sec.c Outdated

/* Take the number of misses into account.
As the number of misses goes up, we'll ask for extra nallocs. */
size_t nallocs = bin->stats.nmisses;


This will only make sense very early on, after that you will always hit the cap.

src/sec.c Outdated
As the number of misses goes up, we'll ask for extra nallocs. */
size_t nallocs = bin->stats.nmisses;
nallocs = MINIMUM(nallocs, max_nallocs);
/* defenseive */


spelling

src/sec.c Outdated
sec, sec_shard_pick(tsdn, sec), pszind);

/* Shoot for nallocs that corresponds to max_bytes / 2, but cap it by MAX_NALLOCS. */
const size_t max_nallocs = MINIMUM(


Could max_bytes be 2 * size? If so this becomes 1?

src/sec.c Outdated
size_t max_nallocs = 1;

if (sec_size_supported(sec, size)) {
/* max is equivalent of 1/3 of max_bytes */


You should probably do 1/4 here

gctony (Author) replied:

I did and I felt min_nallocs was ending up a bit too small (vs. the current default). But you were right that 1/2 was probably too much. So I decided to split the difference. But I can change it to 1/4.

@gctony gctony force-pushed the remove_hpa_sec_batch_fill_extra branch from 8994d97 to e02bfde Compare March 12, 2026 14:48
@gctony gctony force-pushed the remove_hpa_sec_batch_fill_extra branch from e02bfde to 9c0ad46 Compare March 12, 2026 20:25
