Cortex-M Backend: Add tiny model tests for nn.Modules, nn.functional,…#18297
Conversation
🔗 Helpful links: see artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18297
Pull request overview
Adds new Cortex-M backend model-level test suites to exercise common composite patterns (torch op compositions, nn.Module building blocks, and torch.nn.functional usage) through the Cortex-M quantizer + pass pipeline.
Changes:
- Add composite “torch function pattern” models (mul/add, transpose+linear, conv chains, depthwise/inverted residual, linear+softmax).
- Add nn.Module-focused coverage (conv/bn/relu, pooling, conv-transpose, hardswish/sigmoid, softmax) with an explicit xfail.
- Add nn.functional-pattern coverage (conv+activation+pooling, pad, linear+bn, hardtanh).
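One of the listed composite patterns, a depthwise separable conv block, can be sketched as a tiny eager-mode module. This is a hypothetical illustration of the kind of "tiny model" these suites exercise, not the actual test code (class and dimension choices are assumptions):

```python
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv followed by a 1x1 pointwise conv and ReLU."""

    def __init__(self, in_ch: int = 8, out_ch: int = 16):
        super().__init__()
        # groups=in_ch makes the first conv depthwise (one filter per channel)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.pointwise(self.depthwise(x)))


model = DepthwiseSeparableConv().eval()
with torch.no_grad():
    out = model(torch.randn(1, 8, 16, 16))
print(out.shape)  # torch.Size([1, 16, 16, 16])
```

A model this small keeps the quantizer and pass pipeline fast to run while still covering the depthwise + pointwise composition.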
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| backends/cortex_m/test/models/test_torch_functions.py | New composite torch-op pattern tests for the Cortex-M dialect pipeline |
| backends/cortex_m/test/models/test_nn_modules.py | New nn.Module composite tests + one documented xfail |
| backends/cortex_m/test/models/test_nn_functional.py | New functional-pattern composite tests for the Cortex-M dialect pipeline |
Pull request overview
Adds Cortex‑M backend model-level test coverage for common torch composite patterns, nn.Module compositions, and torch.nn.functional usage by introducing three new pytest suites that run through the Cortex‑M quantizer and pass pipeline.
Changes:
- Add composite “torch function” pattern tests (arithmetic fusion, transpose+linear, conv blocks, MobileNet-style blocks, softmax).
- Add nn.Module composition tests (conv/bn/relu, pooling, activations, softmax, conv transpose, and an xfailed fusion case).
- Add torch.nn.functional-oriented tests (pad, activations, pooling, linear+BN composition).
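The MobileNet-style block mentioned above is an inverted residual: a 1x1 expansion, a depthwise conv, a 1x1 projection, and a skip connection. A minimal eager-mode sketch (hypothetical; names and sizes are not from the test files):

```python
import torch
import torch.nn as nn


class InvertedResidual(nn.Module):
    """MobileNetV2-style inverted residual block with a skip connection."""

    def __init__(self, channels: int = 8, expand_ratio: int = 4):
        super().__init__()
        hidden = channels * expand_ratio
        self.expand = nn.Conv2d(channels, hidden, kernel_size=1)  # 1x1 expansion
        self.depthwise = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden)
        self.project = nn.Conv2d(hidden, channels, kernel_size=1)  # 1x1 projection
        self.act = nn.ReLU6()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.act(self.expand(x))
        y = self.act(self.depthwise(y))
        y = self.project(y)
        # Residual add is only valid when stride is 1 and channels match
        return x + y


model = InvertedResidual().eval()
with torch.no_grad():
    out = model(torch.randn(1, 8, 16, 16))
print(out.shape)  # torch.Size([1, 8, 16, 16])
```

The residual add at the end is exactly the kind of multi-op composition the quantizer must handle with shared quantization parameters.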
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| backends/cortex_m/test/models/test_torch_functions.py | New composite-pattern tests driven by CortexMTester.test_dialect |
| backends/cortex_m/test/models/test_nn_modules.py | New nn.Module composition tests with an explicit xfail for an unsupported fusion |
| backends/cortex_m/test/models/test_nn_functional.py | New functional-style op composition tests (pad/activations/pooling/linear+BN) |
… and torch patterns

Add 21 new test cases across 3 files that exercise the Cortex-M quantizer and pass manager on small composite models. These mirror the Arm backend's test_nn_modules/test_nn_functional/test_torch_functions pattern (PR #18225) but target the Cortex-M pipeline.

Tests cover: ConvBnReLU, LinearReLU, ConvTranspose2d, AdaptiveAvgPool2d, MaxPool2d, AvgPool2d, Softmax, Hardswish, Hardsigmoid, depthwise separable conv, inverted residual blocks (MobileNet-style), and multi-op functional compositions.

All tests use test_dialect(), which runs quantize → export → to_edge → run_passes → compare_outputs entirely on the host (no FVP needed).

Co-authored-by: Claude <noreply@anthropic.com>
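The host-only flow described above ends in an output comparison between the eager reference model and the transformed program. A minimal hedged sketch of that final step (function name and tolerances are hypothetical, not the actual CortexMTester API):

```python
import torch


def compare_outputs(ref: torch.Tensor, test: torch.Tensor,
                    atol: float = 1e-2, rtol: float = 1e-2) -> bool:
    """Compare reference and post-pipeline outputs with loose tolerances.

    Quantized pipelines introduce small numeric error, so the check uses
    torch.allclose rather than exact equality.
    """
    return torch.allclose(ref, test, atol=atol, rtol=rtol)


# Eager model output vs. a (simulated) post-pipeline output
ref = torch.tensor([0.10, 0.50, 0.90])
post = ref + 0.001  # small quantization-style perturbation
matched = compare_outputs(ref, post)
print(matched)  # True
```

Because every stage runs on the host, these tests stay fast enough for CI and need no FVP simulator.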