
[Optimization]Optimize CPU utilization #6950

Open
luukunn wants to merge 2 commits into PaddlePaddle:develop from luukunn:cpu

Conversation


@luukunn luukunn commented Mar 20, 2026

Motivation

💡 If this PR is a Cherry Pick, the PR title needs to follow the format by adding the [Cherry-Pick] label at the very beginning and appending the original PR ID at the end. For example, [Cherry-Pick][CI] Add check trigger and logic(#5191)

💡 如若此PR是Cherry Pick,PR标题需遵循格式,在最开始加上[Cherry-Pick]标签,以及最后面加上原PR ID,例如[Cherry-Pick][CI] Add check trigger and logic(#5191)

Modifications

Usage or Command

Accuracy Tests

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code; run pre-commit before committing.
  • Add unit tests. If no unit tests are added, please explain the reason in this PR.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

Copilot AI review requested due to automatic review settings March 20, 2026 07:28

paddle-bot bot commented Mar 20, 2026

Thanks for your contribution!


Copilot AI left a comment


Pull request overview

This PR aims to reduce server-side CPU overhead in streaming and non-streaming output handling by cutting temporary-object creation and repeated reflection checks in the decoding stage. It touches the input processors' incremental decoding and the OpenAI response-processing flow.

Changes:

  • In DataProcessor/Ernie4_5Processor.ids2tokens, the history token list is now extended in place, avoiding the O(n) temporary list created by previous + token_id on every step.
  • ChatResponseProcessor caches whether process_response_dict is a coroutine function, avoiding repeated inspect.iscoroutinefunction(...) calls inside the loop.
  • Several internal state accesses are cached in local variables (e.g. status = self.decode_status[task_id]) to reduce dict/index lookup overhead.
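The in-place accumulation described in the first bullet can be sketched as follows. This is an illustrative example, not the actual FastDeploy code; the function names and list shapes are assumptions.

```python
def decode_step_copying(previous: list, token_ids: list) -> list:
    # Old pattern: builds a fresh list every step, O(len(previous)) per step,
    # so O(n^2) total over a generation of n tokens.
    return previous + token_ids


def decode_step_in_place(previous: list, token_ids: list) -> list:
    # Optimized pattern: extends the history list in place, amortized
    # O(len(token_ids)) per step.
    previous.extend(token_ids)
    return previous
```

Both variants yield the same token history; the in-place version simply avoids reallocating and copying the accumulated prefix on every decoding step.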

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.

File Description
fastdeploy/input/text_processor.py | Incremental decoding path accumulates tokens in place, reducing temporary list allocations
fastdeploy/input/ernie4_5_processor.py | Same in-place token accumulation for the synchronous incremental decoding path
...stdeploy/entrypoints/openai/response_processors.py | Caches the coroutine check to cut per-loop reflection overhead (and exposes a synchronous-branch issue that needs fixing)
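The coroutine-check caching in response_processors.py can be illustrated with a minimal sketch. The attribute and method names below (process_response_dict, _is_async_processor) follow the review text, but the surrounding class structure is hypothetical.

```python
import asyncio
import inspect


class ResponseProcessorSketch:
    """Hypothetical sketch: cache whether the data processor's
    process_response_dict is a coroutine function, so the reflection
    check runs once at construction instead of once per loop iteration."""

    def __init__(self, data_processor):
        self.data_processor = data_processor
        # inspect.iscoroutinefunction is cheap but not free; hoist it
        # out of the per-output loop.
        self._is_async_processor = inspect.iscoroutinefunction(
            data_processor.process_response_dict
        )

    async def process_all(self, outputs):
        results = []
        for out in outputs:
            if self._is_async_processor:
                results.append(await self.data_processor.process_response_dict(out))
            else:
                results.append(self.data_processor.process_response_dict(out))
        return results
```

The cached flag removes a repeated attribute lookup and reflection call from the hot path while preserving support for both sync and async data processors.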

Comment on lines 183 to 187
  for part in self._multipart_buffer:
      if part["decode_type"] == 0:
-         if inspect.iscoroutinefunction(self.data_processor.process_response_dict):
+         if self._is_async_processor:
              await self.data_processor.process_response_dict(
                  response_dict=part["request_output"],

Copilot AI Mar 20, 2026


After switching to self._is_async_processor here, the synchronous branch below (the corresponding else) still calls process_response_dict with the outer-scope request_output/stream variables instead of the current part["request_output"] (and stream should be fixed to False). In the non-streaming multipart (text+image+text) scenario, this makes text decoding use the wrong input. Suggest changing the synchronous branch to call process_response_dict(response_dict=part["request_output"], stream=False, ...) for each part as well, matching the async path.
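The fix the comment suggests can be sketched as a self-contained function. The dict keys and parameter names follow the review discussion and are assumptions about the real code.

```python
import asyncio
import inspect


async def process_multipart(processor, multipart_buffer):
    # Sketch of the suggested fix: both the async and the sync branch
    # operate on the current part's output with stream fixed to False,
    # instead of the sync branch reusing outer-scope variables.
    is_async = inspect.iscoroutinefunction(processor.process_response_dict)
    for part in multipart_buffer:
        if part["decode_type"] == 0:  # text part
            if is_async:
                await processor.process_response_dict(
                    response_dict=part["request_output"], stream=False
                )
            else:
                processor.process_response_dict(
                    response_dict=part["request_output"], stream=False
                )
```

Keeping both branches symmetric ensures each buffered text part is decoded against its own output regardless of whether the underlying processor is synchronous or asynchronous.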

@luukunn luukunn changed the title from "Optimize CPU utilization" to "[Optimization]Optimize CPU utilization" Mar 20, 2026

codecov-commenter commented Mar 20, 2026

Codecov Report

❌ Patch coverage is 68.00000% with 8 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@2b10ebc). Learn more about missing BASE report.

Files with missing lines | Patch % | Lines
fastdeploy/input/text_processor.py | 61.53% | 5 Missing ⚠️
...stdeploy/entrypoints/openai/response_processors.py | 25.00% | 0 Missing and 3 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #6950   +/-   ##
==========================================
  Coverage           ?   73.73%           
==========================================
  Files              ?      399           
  Lines              ?    55624           
  Branches           ?     8766           
==========================================
  Hits               ?    41017           
  Misses             ?    11707           
  Partials           ?     2900           
Flag | Coverage Δ
GPU | 73.73% <68.00%> (?)

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.
