Add model manager that automatically allocates models and prevents OOMs #37103
AMOOOMA wants to merge 50 commits into apache:master from
Conversation
Codecov Report

Additional details and impacted files:

@@ Coverage Diff @@
## master #37103 +/- ##
=============================================
+ Coverage 40.38% 56.91% +16.53%
Complexity 3476 3476
=============================================
Files 1226 1227 +1
Lines 188553 189150 +597
Branches 3607 3607
=============================================
+ Hits 76138 107659 +31521
+ Misses 109012 78088 -30924
Partials 3403 3403
Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
R: @damccorm
Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control. If you'd like to restart, comment
damccorm left a comment
Could you try splitting this PR up? Specifically, I think we could fairly easily decompose into:
- `unsafe_hard_delete` changes (and any other `multi_process_shared.py` changes)
- Changes to `model_manager.py` (most extensive changes)
- Changes to `inference/base.py` (smaller change set, but most impactful/dangerous)
Review thread on the diff at `def _run_with_oom_protection(func, *args, **kwargs):` (context above: `return dir`):
It is surprising to me that we're handling this here; if we do this, we should probably not do it in all cases. But delegating to the caller seems more natural to me.
Yep! Will definitely split this change up. The OOM protection here is tricky: originally it lived at the caller, but I noticed CUDA usage was not dropping because the process gets left in a broken state. Now that I think about it, though, I can probably pass it in as part of the constructor so it's more natural. Will update!
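For context, a minimal sketch of the kind of OOM protection being discussed, assuming a PyTorch/CUDA setup; the wrapper name, retry policy, and cleanup steps are illustrative, not the PR's actual implementation:

```python
import gc

import torch


def run_with_oom_protection(func, *args, **kwargs):
  """Illustrative wrapper: free cached CUDA memory and retry once on OOM."""
  try:
    return func(*args, **kwargs)
  except torch.cuda.OutOfMemoryError:
    # Drop dangling references and return cached blocks to the CUDA driver,
    # so the process is not left holding memory it can no longer use.
    gc.collect()
    torch.cuda.empty_cache()
    # Retry once; if it OOMs again, let the error surface to the caller.
    return func(*args, **kwargs)
```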
Renamed the original model manager to `ModelHandlerManager`, which better reflects its function.
Added `ModelManager` as a util class that offers managed access to models; clients can request models without having to worry about GPU OOMs.
Also added various tests that check the behavior of all classes.
Added optional functionality to spawn a new process in `multi_process_shared` to support running models in parallel.
This should be safe because the new option defaults to False.
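For illustration, here is how `MultiProcessShared` is typically used today; the `spawn_process` keyword below is a hypothetical name for the new option and may not match the actual parameter added in this PR:

```python
from apache_beam.utils.multi_process_shared import MultiProcessShared


def load_model():
  # Placeholder constructor for the shared object.
  return object()


# Without the new option, acquire() returns a proxy to a single object
# shared across processes on the same machine.
shared = MultiProcessShared(load_model, tag='my_model')
model = shared.acquire()
# ... run inference ...
shared.release(model)

# Hypothetical: with the new (default-False) option enabled, the shared
# object would be hosted in a freshly spawned process, so multiple models
# can run in parallel instead of sharing one server process.
# shared = MultiProcessShared(load_model, tag='my_model', spawn_process=True)
```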
Classes

`GPUMonitor`
- `start()`: Begins background memory polling.
- `stop()`: Stops polling.
- `reset_peak()`: Resets peak usage tracking.
- `get_stats() -> (current, peak, total)`: Returns memory stats.

`ResourceEstimator`
- `is_unknown(model_tag: str) -> bool`: Checks if the model needs profiling.
- `get_estimate(model_tag: str, default_mb: float) -> float`: Returns the memory cost.
- `set_initial_estimate(model_tag: str, cost: float)`: Manually sets the cost.
- `add_observation(active_snapshot, peak_memory)`: Updates the cost model via an NNLS solver.

`ModelManager`
- `acquire_model(tag: str, loader_func: Callable) -> Any`: Gets a model instance (handles isolation/concurrency).
- `release_model(tag: str, instance: Any)`: Returns the model to the pool.
- `force_reset()`: Clears all models and caches.
- `shutdown()`: Cleans up resources.
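A hypothetical usage sketch based only on the signatures listed above; the import path, no-argument constructor, tag, and loader are placeholders, not code from this PR:

```python
# Assumed module path for illustration only.
from apache_beam.ml.inference.model_manager import ModelManager


def load_my_model():
  # Placeholder loader; in practice this would load a real model onto the GPU.
  return ...


manager = ModelManager()  # assuming a no-argument constructor
model = manager.acquire_model(tag='my_model', loader_func=load_my_model)
try:
  # ... run inference with the acquired model ...
  pass
finally:
  # Return the instance so the manager can reuse or evict it.
  manager.release_model('my_model', model)

manager.shutdown()
```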