runpod-Henrik left a comment
Review: dependencies cleanup (AE-2343)
1. Torch removal from dependencies — correct ✅
Removing torch from dependencies=[] in 4 files:
- 03_mixed_workers/gpu_worker.py
- 04_dependencies/gpu_worker.py
- 01_text_to_speech/gpu_worker.py
- 01_network_volumes/gpu_worker.py
Since torch is baked into the GPU base image, declaring it in dependencies caused it to be re-bundled into the build artifact (~2.5GB), hitting the upload size limit (AE-2343). This is the right fix.
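To illustrate the reasoning, here is a minimal sketch (not code from the PR) of why excluding baked-in packages shrinks the artifact: anything already provided by the base image should be filtered out of the bundle list. `BASE_IMAGE_PACKAGES` and `effective_dependencies` are hypothetical names for illustration.

```python
# Hypothetical helper, not from the PR: drop packages the GPU base image
# already provides so they are not re-bundled into the build artifact.
BASE_IMAGE_PACKAGES = {"torch", "torchvision"}  # assumed base-image contents

def effective_dependencies(declared):
    """Return only the packages that actually need bundling."""
    # Strip any "==version" pin before comparing against the base-image set.
    return [pkg for pkg in declared if pkg.split("==")[0] not in BASE_IMAGE_PACKAGES]

print(effective_dependencies(["torch==2.3.0", "requests"]))  # ['requests']
```

With torch filtered out, only the genuinely missing packages (here `requests`) contribute to the upload, which is the effect the PR achieves by removing torch from `dependencies=[]` directly.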
2. Version stringification — correct ✅
```python
# Before (broken when unpickled locally):
"pandas": pd.__version__

# After (safe, plain string):
"pandas": str(pd.__version__)
```

This fixes a real issue: `__version__` attributes on some packages are lazy objects, not plain strings. When cloudpickle serializes them for the response and the local machine doesn't have the package installed, deserialization fails. Wrapping in `str()` forces eager evaluation on the worker, where the package is available.
Changed in 3 files: cpu_worker.py (4 versions), gpu_worker.py (4 versions + opencv), consistent pattern throughout.
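The failure mode can be sketched with the standard-library `pickle` and a stand-in lazy object (the `LazyVersion` class below is illustrative, not from any real package): calling `str()` on the worker produces a plain string, so the payload no longer carries any reference to the package's own classes.

```python
import pickle

class LazyVersion:
    """Stand-in for a package's lazy __version__ attribute (illustrative)."""
    def __init__(self, value):
        self._value = value
    def __str__(self):
        # Eager evaluation happens here, on the worker where the package exists.
        return self._value

lazy = LazyVersion("2.1.0")

# str() collapses the lazy object to a plain string before serialization,
# so unpickling needs no package-specific class definitions.
payload = pickle.dumps({"pandas": str(lazy)})
print(pickle.loads(payload))  # {'pandas': '2.1.0'}
```

Serializing `lazy` itself would instead embed a reference to `LazyVersion` in the payload, which is exactly what breaks when the receiving machine lacks the defining package.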
3. README updates — clean
Dependencies README correctly updated to use lightweight packages (requests, httpx, python-dateutil) as examples instead of torch. Added helpful note about torch being in the base image.
4. Minor nit (non-blocking)
gpu_worker.py:49, the new import:

```python
from importlib.metadata import version
```

This shadows the outer scope if anyone later adds a `version` variable. Fine in practice since it's inside the function body, but `metadata_version` or similar would avoid any confusion. Not worth changing.
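If anyone ever wants to apply the suggested rename, an import alias is all it takes; the sketch below (names are illustrative, not from the PR) shows a local `version` variable coexisting with the aliased import.

```python
# Aliasing the import removes any chance of a later local `version`
# variable shadowing the importlib.metadata function.
from importlib.metadata import version as metadata_version

def collect_versions():
    # A local variable named `version` no longer collides with the import.
    version = "local-marker"
    return {
        "lookup_is_callable": callable(metadata_version),
        "local_value": version,
    }

print(collect_versions())
```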
Verdict: PASS — recommend approving. Clean, minimal fix for a real build-size issue. No new bugs introduced.
🤖 Reviewed by Henrik's AI-Powered Bug Finder
7210495 to 79a7715
Two main changes:
Removed torch as an explicit dependency from many of the GPU workers; it is baked into the base flash worker image, so installing it again and having it packaged up at build time was unnecessary.