Conversation

@chenyushuo chenyushuo commented Jan 15, 2026

Description

  1. Rename warmup_style to lr_scheduler_type.
  2. Remove lr_warmup_init, lr_decay_steps, lr_decay_style and min_lr in OptimizerConfig.
  3. Bug fix in tinker_model.py.
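
For configs that previously set the warm-up style, the rename is a one-line change. A minimal before/after sketch (the `optimizer` key path is illustrative; the values `constant` and `cosine` are the ones named later in this thread):

```yaml
# Before (deprecated)
optimizer:
  lr: 1.0e-6
  warmup_style: cosine

# After
optimizer:
  lr: 1.0e-6
  lr_scheduler_type: cosine
```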

Unittest Results

[Screenshot: unittest results]

Checklist

Please check the following items before the code is ready to be reviewed.

  • Code has passed all tests
  • Docstrings have been added/updated in Google Style
  • Documentation has been updated
  • Code is ready for review

@gemini-code-assist
Contributor

Summary of Changes

Hello @chenyushuo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on enhancing the clarity and maintainability of the codebase by standardizing the naming convention for learning rate scheduler configurations. The parameter previously known as warmup_style has been renamed to lr_scheduler_type, which more accurately describes its function. This change is propagated throughout various configuration files and updated in the documentation, guiding users towards the new, more descriptive parameter and ensuring a consistent approach to defining learning rate schedules.

Highlights

  • Configuration Parameter Rename: The configuration parameter warmup_style has been renamed to lr_scheduler_type for improved clarity and consistency in defining learning rate scheduling.
  • Extensive Configuration File Updates: This rename has been applied across numerous YAML configuration files, including benchmark templates, example configurations, and test configurations, ensuring all relevant files use the new parameter name.
  • Documentation Updates: The documentation (both English and Chinese versions) has been updated to reflect this change, deprecating warmup_style and introducing lr_scheduler_type with its supported values (constant, cosine).
  • Codebase Adaptation: The core configuration definition in trinity/common/verl_config.py has been updated to include both the deprecated warmup_style (marked as such) and the new lr_scheduler_type.

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors warmup_style to lr_scheduler_type across various configuration files and documentation. The changes are mostly consistent. I've pointed out a typo in the documentation and a more significant issue regarding backward compatibility in trinity/common/verl_config.py, where the deprecated warmup_style field is not handled, potentially causing issues with older configurations.
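
One common way to handle the backward-compatibility gap the review flags is to keep the deprecated field as an optional alias and map it to the new one on construction. The sketch below is hypothetical (it is not the merged implementation, and the field defaults are assumptions); only the field names `warmup_style` and `lr_scheduler_type` come from the PR:

```python
import warnings
from dataclasses import dataclass
from typing import Optional


@dataclass
class OptimizerConfig:
    """Illustrative optimizer config with a deprecated-field shim."""

    lr: float = 1e-6
    lr_scheduler_type: str = "constant"  # new name: "constant" or "cosine"
    warmup_style: Optional[str] = None   # deprecated alias, kept for old configs

    def __post_init__(self):
        # If an old config still sets warmup_style, warn and copy the
        # legacy value into the new field so behavior is unchanged.
        if self.warmup_style is not None:
            warnings.warn(
                "warmup_style is deprecated; use lr_scheduler_type instead.",
                DeprecationWarning,
            )
            self.lr_scheduler_type = self.warmup_style
```

With this shim, an older YAML that sets `warmup_style: cosine` would still load correctly while emitting a deprecation warning.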

@chenyushuo chenyushuo force-pushed the fix/rename_warmup_style branch from 46edfe3 to 3f7765e on January 16, 2026 at 06:27

pan-x-c commented Jan 16, 2026

/unittest-diff

@github-actions

Summary

Tests 📝: 68 · Passed ✅: 62 · Failed ❌: 0 · Skipped ⏭️: 6 · Other ❓: 0 · Flaky 🍂: 0 · Duration ⏱️: 52m 36s

Skipped

Tests Status
tests/common/vllm_test.py::TestTinkerAsyncAPIServer::test_api_async skipped ⏭️
tests/trainer/trainer_test.py::TestMultiModalGRPO::test_trainer skipped ⏭️
tests/trainer/trainer_test.py::TestMultiModalSFT::test_trainer skipped ⏭️
tests/trainer/trainer_test.py::TestTinkerTrainer::test_trainer skipped ⏭️
tests/trainer/trainer_test.py::TestTinkerTrainer::test_trainer_class skipped ⏭️
tests/trainer/trainer_test.py::AgentScopeTunerTest::test_agentscope_tuner skipped ⏭️

Tests

Test Name Status Flaky Duration
tests/common/config_test.py::TestConfig::test_all_examples_are_valid 34.9s
tests/common/config_test.py::TestConfig::test_chat_template_path 74ms
tests/common/config_test.py::TestConfig::test_config_flatten 30ms
tests/common/config_test.py::TestConfig::test_continue_from_checkpoint_is_valid 154ms
tests/common/config_test.py::TestConfig::test_default_workflow 71ms
tests/common/config_test.py::TestConfig::test_load_default_config 3.9s
tests/common/config_test.py::TestConfig::test_max_token_len_per_gpu_set_correctly 73ms
tests/common/config_test.py::TestConfig::test_optimizer_config_propagation 73ms
tests/common/config_test.py::TestConfig::test_update_config_from_ray_cluster 2.0s
tests/common/experience_test.py::TestEID::test_eid_properties 1ms
tests/common/experience_test.py::TestExperience::test_action_mask_and_logprobs_type 1ms
tests/common/experience_test.py::TestExperience::test_assertions 1ms
tests/common/experience_test.py::TestExperience::test_dpo_experience 1ms
tests/common/experience_test.py::TestExperience::test_gather 1ms
tests/common/experience_test.py::TestExperience::test_gather_with_token_level_reward 1ms
tests/common/experience_test.py::TestExperience::test_hf_datasets_conversion 15ms
tests/common/experience_test.py::TestExperience::test_multi_turn_experience 1ms
tests/common/experience_test.py::TestExperience::test_serialize_deserialize 1ms
tests/common/experience_test.py::TestExperience::test_single_turn_experience 1ms
tests/common/experience_test.py::TestExperience::test_to_dict 1ms
tests/common/experience_test.py::TestExperienceConversion::test_batch_conversion 1ms
tests/common/experience_test.py::TestExperienceConversion::test_dpo_experience_batch_conversion 1ms
tests/common/experience_test.py::TestExperienceConversion::test_experience_model_experience_conversion 1ms
tests/common/experience_test.py::TestExperienceConversion::test_gather_experiences_with_custom_fields 1ms
tests/common/experience_test.py::TestExperienceConversion::test_multiturn_experience_batch_converstion 1ms
tests/common/vllm_test.py::ModelWrapperTest_0::test_generate 1m 17s
tests/common/vllm_test.py::ModelWrapperTest_1::test_generate 1m 5s
tests/common/vllm_test.py::ModelWrapperTest_2::test_generate 35.0s
tests/common/vllm_test.py::TestModelLen_0::test_model_len 58.5s
tests/common/vllm_test.py::TestModelLen_1::test_model_len 22.1s
tests/common/vllm_test.py::TestModelLen_2::test_model_len 51.4s
tests/common/vllm_test.py::TestModelLenWithoutPromptTruncation::test_model_len 51.8s
tests/common/vllm_test.py::TestAPIServer::test_api 24.0s
tests/common/vllm_test.py::TestLogprobs::test_logprobs_api 51.2s
tests/common/vllm_test.py::TestAsyncAPIServer::test_api_async 23.8s
tests/common/vllm_test.py::TestTinkerAsyncAPIServer::test_api_async ⏭️ 1ms
tests/common/vllm_test.py::TestTokenizer::test_action_mask 256ms
tests/common/vllm_test.py::TestTokenizer::test_action_mask_with_tools 254ms
tests/common/vllm_test.py::TestAPIServerToolCall_0_deepseek_r1::test_api_tool_calls 53.3s
tests/common/vllm_test.py::TestAPIServerToolCall_1::test_api_tool_calls 51.9s
tests/common/vllm_test.py::TestSuperLongGeneration::test_generate 3m 11s
tests/common/vllm_test.py::TestTinkerAPI::test_tinker_api 1m 13s
tests/trainer/trainer_test.py::TestTrainerCountdown_0_fsdp::test_trainer 2m 46s
tests/trainer/trainer_test.py::TestTrainerCountdown_1_megatron::test_trainer 4m 21s
tests/trainer/trainer_test.py::TestStepAheadAsyncRL::test_trainer 1m 47s
tests/trainer/trainer_test.py::TestTrainerGSM8K_0_fsdp::test_trainer 1m 20s
tests/trainer/trainer_test.py::TestTrainerGSM8K_1_fsdp2::test_trainer 48.6s
tests/trainer/trainer_test.py::TestTrainerGSM8K_2_fsdp::test_trainer 52.3s
tests/trainer/trainer_test.py::TestTrainerGSM8K_3_fsdp2::test_trainer 58.8s
tests/trainer/trainer_test.py::TestTrainerSFTWarmupGSM8K::test_trainer 1m 58s
tests/trainer/trainer_test.py::TestTrainerDPO::test_trainer 35.5s
tests/trainer/trainer_test.py::TestTrainerSFT::test_trainer 31.0s
tests/trainer/trainer_test.py::TestTrainerToolsSFT::test_trainer_tools 30.1s
tests/trainer/trainer_test.py::TestFullyAsyncMode_0_fsdp::test_fully_async_mode 1m 36s
tests/trainer/trainer_test.py::TestFullyAsyncMode_1_fsdp::test_fully_async_mode 1m 31s
tests/trainer/trainer_test.py::TestFullyAsyncMode_2_megatron::test_fully_async_mode 2m 20s
tests/trainer/trainer_test.py::TestTrainerCheckpointSave_0_fsdp::test_trainer 2m 24s
tests/trainer/trainer_test.py::TestTrainerCheckpointSave_1_megatron::test_trainer 5m 23s
tests/trainer/trainer_test.py::TestTrainerMIX::test_trainer 1m 35s
tests/trainer/trainer_test.py::TestServeWithTrainer::test_serve_with_trainer 1m 48s
tests/trainer/trainer_test.py::TestMultiModalGRPO::test_trainer ⏭️ 2.0s
tests/trainer/trainer_test.py::TestMultiModalSFT::test_trainer ⏭️ 554ms
tests/trainer/trainer_test.py::TestTrainerLoRA::test_trainer 2m 43s
tests/trainer/trainer_test.py::TestOverRollout::test_trainer 46.9s
tests/trainer/trainer_test.py::TestTrainerPromptTruncation::test_trainer 1m 12s
tests/trainer/trainer_test.py::TestTinkerTrainer::test_trainer ⏭️ 1ms
tests/trainer/trainer_test.py::TestTinkerTrainer::test_trainer_class ⏭️ 1ms
tests/trainer/trainer_test.py::AgentScopeTunerTest::test_agentscope_tuner ⏭️ 1ms

Github Test Reporter by CTRF 💚

@pan-x-c pan-x-c merged commit c9168b6 into agentscope-ai:main Jan 16, 2026
2 checks passed