[TRTLLM-10927][perf] Use NCCL LSA Barrier to implement synchronization for NVLinkOneSided AlltoAll kernels.#11366

Open
bobboli wants to merge 5 commits into NVIDIA:main from bobboli:alltoall_lsa_barrier

Conversation


@bobboli bobboli commented Feb 8, 2026

Summary by CodeRabbit

Release Notes

  • Refactor
    • Use NCCL LSA Barrier Session instead of a custom implementation for synchronization.
    • Remove prepareDispatchKernel, and launch prepareCombineKernel only when necessary.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@bobboli bobboli force-pushed the alltoall_lsa_barrier branch 2 times, most recently from 456e107 to 183a17e, on February 10, 2026 at 17:23
@bobboli bobboli marked this pull request as ready for review February 10, 2026 17:23
@bobboli bobboli requested review from a team as code owners February 10, 2026 17:23

coderabbitai bot commented Feb 10, 2026

📝 Walkthrough

Walkthrough

Replaced flag-based synchronization with NCCL LSA barrier-based synchronization in MOE All-to-All kernels. Removed flag_val and completion_flags parameters from kernel signatures. Updated metadata structures to eliminate flag-related offset indices. Modified Python bindings and test verification to reflect the new synchronization approach.

Changes

Cohort / File(s) Summary
GPU Kernel & Header Implementation
cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu, cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.h
Replaced in-kernel flag signaling with NCCL LSA barrier-based synchronization. Removed flag_val and flag_val_ptr parameters from moeA2APrepareDispatchKernel and moeA2APrepareCombineKernel. Added ncclDevComm references to DispatchKernelPointers, CombineKernelPointers, and related parameter structs. Added NCCL device communicator lifecycle helpers (create/destroy functions) and necessary includes (nccl.h, nccl_device.h, mpiUtils.h, multiDeviceUtils.h). Updated grid sizing logic and kernel invocations to propagate dev_comm.
Metadata Structure Updates
cpp/tensorrt_llm/thop/moeAlltoAllMeta.h
Removed FLAG_VAL_OFFSET_INDEX, DISPATCH_COMPLETION_FLAGS_OFFSET_INDEX, and COMBINE_COMPLETION_FLAGS_OFFSET_INDEX from MoeA2AMetaInfoIndex enum. Renumbered remaining enum values; decreased NUM_METAINFO_FIELDS from 10 to 7. Updated getMoeA2AMetaInfoIndexPairs to remove mappings for the deleted offset indices.
C++ Operation Wiring
cpp/tensorrt_llm/thop/moeAlltoAllOp.cpp
Added static g_moeDevComm for persistent NCCL device communicator. Removed flag_val and completion_flags initialization and usage from dispatch/combine paths. Updated moeA2AInitializeOp to create g_moeDevComm on first use. Modified moeA2ADispatchOp and moeA2ACombineOp to pass g_moeDevComm through parameter structs instead of flag-based signaling. Added prepare-launch helper usages (moe_a2a_prepare_dispatch_launch, moe_a2a_prepare_combine_launch).
Python Interface Updates
tensorrt_llm/_torch/distributed/moe_alltoall.py, tensorrt_llm/_torch/modules/fused_moe/communication/nvlink_one_sided.py
Removed FLAG_VAL_OFFSET_INDEX, DISPATCH_COMPLETION_FLAGS_OFFSET_INDEX, and COMBINE_COMPLETION_FLAGS_OFFSET_INDEX from metadata offset dictionaries and class attributes. Updated NVLinkOneSided initialization to use LOCAL_TOKEN_COUNTER_OFFSET_INDEX as the sentinel value instead of flag-related indices.
Test Verification
tests/unittest/_torch/multi_gpu/test_moe_a2a.py
Removed runtime verification blocks that asserted completion flags after dispatch and combine operations. Retained verification logic for counters, top-k indices, payload content, and expert ID sanitization checks.
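To make the metadata change concrete, the renumbering described above can be sketched as follows. Only LOCAL_TOKEN_COUNTER_OFFSET_INDEX and NUM_METAINFO_FIELDS are named in this summary; the PAYLOAD_OFFSET_INDEX_* members and the position of the sentinel are placeholders, not the actual enum from moeAlltoAllMeta.h.

```cpp
#include <cassert>

// Sketch of MoeA2AMetaInfoIndex after FLAG_VAL_OFFSET_INDEX,
// DISPATCH_COMPLETION_FLAGS_OFFSET_INDEX and COMBINE_COMPLETION_FLAGS_OFFSET_INDEX
// were removed; the survivors are renumbered contiguously from 0.
enum MoeA2AMetaInfoIndex : int
{
    LOCAL_TOKEN_COUNTER_OFFSET_INDEX = 0, // used as the sentinel on the Python side after this PR
    PAYLOAD_OFFSET_INDEX_1,               // hypothetical member name
    PAYLOAD_OFFSET_INDEX_2,               // hypothetical member name
    PAYLOAD_OFFSET_INDEX_3,               // hypothetical member name
    PAYLOAD_OFFSET_INDEX_4,               // hypothetical member name
    PAYLOAD_OFFSET_INDEX_5,               // hypothetical member name
    PAYLOAD_OFFSET_INDEX_6,               // hypothetical member name
    NUM_METAINFO_FIELDS                   // now 7; was 10 before the three flag fields were dropped
};

static_assert(NUM_METAINFO_FIELDS == 7, "three flag-related fields were removed");
```

Because NUM_METAINFO_FIELDS is the trailing enumerator, dropping the three flag indices shrinks it from 10 to 7 automatically, which is why getMoeA2AMetaInfoIndexPairs and the Python offset dictionaries must be updated in lockstep.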

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 21.43%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.
  • Description check ⚠️ Warning — The PR description is empty, containing only the template structure with no implementation details, rationale, or test-coverage information. Resolution: fill in the Description section explaining what NCCL LSA Barrier synchronization does and why it improves the code, list relevant tests under Test Coverage, and check the completed PR Checklist items.
✅ Passed checks (1 passed)
  • Title check ✅ Passed — The PR title clearly and specifically describes the main change: replacing flag-based synchronization with NCCL LSA Barrier in NVLinkOneSided AlltoAll kernels for performance improvement.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@cpp/tensorrt_llm/thop/moeAlltoAllOp.cpp`:
- Around line 138-151: The global communicator g_moeDevComm may be reused with
mismatched epSize/epRank; modify moeA2AInitializeOp to validate that an existing
g_moeDevComm was created with the same epSize/epRank before reusing it
(call-site: the if (g_moeDevComm == nullptr) block and the early-reuse path). If
parameters differ, either log/error and abort or recreate the communicator by
destroying the old g_moeDevComm and calling
create_moe_nccl_dev_comm(static_cast<int>(epSize), static_cast<int>(epRank),
/*num_lsa_barriers=*/2); ensure the check compares the stored communicator's
configuration (add fields or accessors to g_moeDevComm metadata if needed) and
include a clear diagnostic message mentioning epSize/epRank and the expected vs
actual values.
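The validate-or-recreate pattern the review asks for could look like the sketch below. This is an illustration only: create_moe_nccl_dev_comm and destroy_moe_nccl_dev_comm are stubbed (the real ones live in moeAlltoAllKernels.cu), and the DevCommState bookkeeping struct, getOrCreateDevComm helper, and its diagnostics are hypothetical names introduced here.

```cpp
#include <cassert>
#include <cstdio>

struct ncclDevComm; // opaque here, as in the real header's forward declaration

// Hypothetical bookkeeping: remember the config the communicator was created with.
struct DevCommState
{
    ncclDevComm* comm = nullptr;
    int epSize = -1;
    int epRank = -1;
};

static DevCommState g_moeDevComm; // mirrors the PR's static global, plus metadata

// Stubs standing in for the real create/destroy helpers.
static ncclDevComm* create_moe_nccl_dev_comm(int /*ep_size*/, int /*ep_rank*/, int /*num_lsa_barriers*/)
{
    return reinterpret_cast<ncclDevComm*>(new char[1]);
}

static void destroy_moe_nccl_dev_comm(ncclDevComm* comm)
{
    delete[] reinterpret_cast<char*>(comm);
}

// Reuse the communicator only when epSize/epRank match; otherwise recreate it.
static ncclDevComm* getOrCreateDevComm(int epSize, int epRank)
{
    if (g_moeDevComm.comm != nullptr
        && (g_moeDevComm.epSize != epSize || g_moeDevComm.epRank != epRank))
    {
        std::fprintf(stderr, "MoE dev comm epSize/epRank changed (%d/%d -> %d/%d); recreating\n",
            g_moeDevComm.epSize, g_moeDevComm.epRank, epSize, epRank);
        destroy_moe_nccl_dev_comm(g_moeDevComm.comm);
        g_moeDevComm.comm = nullptr;
    }
    if (g_moeDevComm.comm == nullptr)
    {
        g_moeDevComm.comm = create_moe_nccl_dev_comm(epSize, epRank, /*num_lsa_barriers=*/2);
        g_moeDevComm.epSize = epSize;
        g_moeDevComm.epRank = epRank;
    }
    return g_moeDevComm.comm;
}
```

Whether to recreate silently or abort with an error on a mismatch is a policy choice; recreation is shown here because it keeps the fast path (matching config) identical to the existing early-reuse behavior.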
🧹 Nitpick comments (7)
cpp/tensorrt_llm/thop/moeAlltoAllMeta.h (1)

17-17: Header guard style: #pragma once vs TRTLLM_-prefixed include guard.

The coding guidelines specify using a preprocessor guard with prefix TRTLLM_ followed by the filename in uppercase for header files. This file uses #pragma once instead. Since this was pre-existing and not part of this PR's changes, it's a minor note.

As per coding guidelines, "Use a preprocessor guard with prefix TRTLLM_ followed by the filename in uppercase (e.g., TRTLLM_FOO_BAR_HELLO_H)."

cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.h (1)

23-26: Forward declarations for NCCL types look good.

The forward declarations avoid pulling the full NCCL headers into this widely-included header. The ncclComm_t typedef matches the NCCL convention. Note that ncclComm_t is only needed for the destroy_moe_nccl_dev_comm signature (which takes ncclDevComm*, not ncclComm_t), so technically line 25 is unused in this header.

#!/bin/bash
# Check if ncclComm_t is actually used in the header file
rg -n 'ncclComm_t' cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.h
cpp/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu (3)

1219-1226: Static variable naming: should use s prefix per coding guidelines.

g_moeNcclComm is declared static, so per the naming convention it should use the s prefix (e.g., sMoeNcclComm). The g prefix is reserved for non-static, non-anonymous-namespace globals.

As per coding guidelines, "Use camelCase prefixed by lowercase 's' for non-magic-number global variables that are static or in anonymous namespaces (e.g., sMutableStaticGlobal)."


1259-1276: Missing CUDA error checking in destroy_moe_nccl_dev_comm.

Lines 1265 and 1274 use cudaMemcpy and cudaFree without error checking, while create_moe_nccl_dev_comm consistently uses TLLM_CUDA_CHECK. Silently ignoring errors during cleanup can mask bugs.

Proposed fix: add error checking
     if (dev_comm != nullptr)
     {
         // Copy back to host for ncclDevCommDestroy
         ncclDevComm hostDevComm;
-        cudaMemcpy(&hostDevComm, dev_comm, sizeof(ncclDevComm), cudaMemcpyDeviceToHost);
+        TLLM_CUDA_CHECK(cudaMemcpy(&hostDevComm, dev_comm, sizeof(ncclDevComm), cudaMemcpyDeviceToHost));

         if (g_moeNcclComm != nullptr)
         {
             ncclDevCommDestroy(g_moeNcclComm, &hostDevComm);
             ncclCommDestroy(g_moeNcclComm);
             g_moeNcclComm = nullptr;
         }

-        cudaFree(dev_comm);
+        TLLM_CUDA_CHECK(cudaFree(dev_comm));
     }

1227-1257: create_moe_nccl_dev_comm overwrites g_moeNcclComm without checking for existing comm.

If create_moe_nccl_dev_comm is called more than once (e.g., due to a missing guard at the call site), the existing NCCL communicator in g_moeNcclComm would be leaked. The caller in moeAlltoAllOp.cpp guards with if (g_moeDevComm == nullptr), but adding a defensive check here would make the function safer.

Defensive guard
 ncclDevComm* create_moe_nccl_dev_comm(int ep_size, int ep_rank, int num_lsa_barriers)
 {
+    TLLM_CHECK_WITH_INFO(g_moeNcclComm == nullptr, "NCCL communicator already exists; destroy it before creating a new one");
+
     // 1. Create NCCL communicator via MPI bootstrap (collective call)
cpp/tensorrt_llm/thop/moeAlltoAllOp.cpp (2)

28-29: Redundant forward declaration.

ncclDevComm is already forward-declared in moeAlltoAllKernels.h (line 26), which is included at line 18. This forward declaration is harmless but redundant.


107-108: Static variable naming and resource management.

Two notes:

  1. Naming: g_moeDevComm is static, so per coding guidelines it should use the s prefix: sMoeDevComm.

  2. Resource leak: g_moeDevComm is allocated via create_moe_nccl_dev_comm but destroy_moe_nccl_dev_comm is never called. While the OS will reclaim resources at process exit, this leaks the NCCL communicator and device memory for the lifetime of the process. Consider registering an atexit handler or adding explicit teardown.

As per coding guidelines, "Use camelCase prefixed by lowercase 's' for non-magic-number global variables that are static or in anonymous namespaces."

#!/bin/bash
# Check if destroy_moe_nccl_dev_comm is ever called anywhere in the codebase
rg -n 'destroy_moe_nccl_dev_comm' -g '*.{cpp,cu,h,hpp,cuh,py}'
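The atexit-based teardown the nitpick suggests can be sketched as below. The NCCL-specific teardown is stubbed so the registration pattern itself is self-contained; ensureDevComm and moeDevCommAtExit are hypothetical names, not functions from the PR.

```cpp
#include <cassert>
#include <cstdlib>

struct ncclDevComm; // opaque, as in the real code

static ncclDevComm* g_moeDevComm = nullptr;

// Stub standing in for destroy_moe_nccl_dev_comm from moeAlltoAllKernels.cu.
static void destroy_moe_nccl_dev_comm(ncclDevComm*& comm)
{
    delete[] reinterpret_cast<char*>(comm);
    comm = nullptr;
}

// Runs at process exit; safe to call even if the comm was never created.
static void moeDevCommAtExit()
{
    if (g_moeDevComm != nullptr)
    {
        destroy_moe_nccl_dev_comm(g_moeDevComm);
    }
}

static ncclDevComm* ensureDevComm()
{
    if (g_moeDevComm == nullptr)
    {
        // Stands in for create_moe_nccl_dev_comm(epSize, epRank, /*num_lsa_barriers=*/2).
        g_moeDevComm = reinterpret_cast<ncclDevComm*>(new char[1]);
        std::atexit(moeDevCommAtExit); // registered exactly once, on first creation
    }
    return g_moeDevComm;
}
```

One caveat worth noting in the real code: atexit handlers run late, so destroying NCCL communicators and freeing device memory there must be sequenced carefully against CUDA driver shutdown; an explicit teardown hook invoked before process exit is the safer variant.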

…AlltoAll kernels.

Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
…en necessary.

Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
@bobboli bobboli force-pushed the alltoall_lsa_barrier branch from c93bcbc to b83ec19 Compare February 26, 2026 04:41

bobboli commented Feb 26, 2026

/bot run --disable-fail-fast

@bobboli bobboli requested a review from kaiyux February 26, 2026 04:44
@tensorrt-cicd

PR_Github #36869 [ run ] triggered by Bot. Commit: b83ec19 Link to invocation

@tensorrt-cicd

PR_Github #36869 [ run ] completed with state SUCCESS. Commit: b83ec19
/LLM/main/L0_MergeRequest_PR pipeline #28547 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>

bobboli commented Feb 27, 2026

/bot run --disable-fail-fast --reuse-test

@tensorrt-cicd

PR_Github #37095 [ run ] triggered by Bot. Commit: 8236572 Link to invocation
