feat(lark): Support Feishu CardKit streaming output #5777
Conversation
Hey - I've left some high level feedback:
- In the two fallback branches of `send_streaming` (card creation or send failure), the generator is fully consumed to build `buffer`, and then `super().send_streaming(generator, use_fallback)` is awaited on the already-exhausted generator; consider removing or refactoring that `super().send_streaming` call to avoid redundant work or unexpected side effects.
- The adapter is registered with `support_streaming_message=True` in the decorator, but the runtime metadata returns `support_streaming_message=self.enable_streaming`; if other parts of the system rely on the decorator flag, this mismatch could cause confusion, so consider aligning the two sources or documenting which one is authoritative.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In the two fallback branches of `send_streaming` (card creation or send failure), the generator is fully consumed to build `buffer` and then `super().send_streaming(generator, use_fallback)` is awaited on an already-exhausted generator; consider removing or refactoring the `super().send_streaming` call there to avoid redundant work or unexpected side effects.
- The adapter is registered with `support_streaming_message=True` in the decorator, but the runtime metadata returns `support_streaming_message=self.enable_streaming`; if other parts of the system rely on the decorator flag, this mismatch could cause confusion—consider aligning the two or documenting which one is authoritative.
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This PR adds streaming message sending to the Feishu bot, using CardKit to progressively display the LLM's reply with a typewriter-like effect. It improves user experience and system stability through a new configuration option, an optimized sending architecture, and additional error handling. The frontend i18n has also been updated to provide streaming-output descriptions and hints in both Chinese and English.

Highlights
Changelog
Activity
Code Review
This PR implements Feishu CardKit streaming output. The overall architecture is sound, using a decoupled async sending loop with a fallback mechanism. The implementation is complete, covering configuration, backend logic, and frontend i18n updates.

I found a few areas for improvement, mainly in lark_event.py:
- The fallback logic in the `send_streaming` method is duplicated and could be extracted into a helper function to improve maintainability.
- At the end of the `send_streaming` method there is a redundant text-update API call, even when the content has already been successfully updated by the sending loop.

These changes will improve the code's robustness and readability. Apart from that, the rest of the changes look good.
Code Review
This PR introduces CardKit-based streaming output functionality for the Lark adapter, which is a great new feature. While the overall implementation structure is clear, a critical vulnerability has been identified in the fallback mechanism introduced in lark_event.py. This vulnerability allows LLM-generated content to trigger arbitrary file reads or SSRF because the adapter's message conversion logic trustingly handles local file paths and URLs in message components. Beyond this, general feedback includes suggestions for improving code robustness and consistency in the core streaming sending logic. To mitigate the security risk, strict validation for all message components derived from LLM output is recommended.
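The strict validation the review recommends could look like the following sketch, using only the Python standard library. The helper names, the allowed-scheme policy, and the allowed-root convention are assumptions for illustration, not AstrBot APIs; the real message-conversion code lives in the adapter.

```python
# Hypothetical validation helpers for LLM-derived message components.
# Rejects non-HTTP URL schemes, URLs resolving to private/loopback hosts
# (SSRF), and local paths that escape an allowed directory (arbitrary read).
import ipaddress
import os
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def is_safe_url(url: str) -> bool:
    """True only for http(s) URLs whose host resolves to public addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.hostname:
        return False
    try:
        for info in socket.getaddrinfo(parsed.hostname, None):
            addr = ipaddress.ip_address(info[4][0])
            if addr.is_private or addr.is_loopback or addr.is_link_local:
                return False
    except (socket.gaierror, ValueError):
        return False
    return True

def is_safe_local_path(path: str, allowed_root: str) -> bool:
    """True only if path (after resolving symlinks) stays under allowed_root."""
    real = os.path.realpath(path)
    return real.startswith(os.path.realpath(allowed_root) + os.sep)
```

Calling these checks before the adapter dereferences any file path or URL found in a message chain would close both the arbitrary-read and SSRF vectors described above.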
@sourcery-ai review
Hey - I've found 1 issue, and left some high level feedback:
- In `_fallback_send_streaming`, the generator is fully consumed before calling `super().send_streaming(generator, use_fallback)`, so the parent implementation receives an already-exhausted generator; if you only need the parent's side effects, consider exposing those explicitly or removing the redundant call to avoid confusion.
- The adapter is registered with `support_streaming_message=True`, while `meta().support_streaming_message` is gated by `self.enable_streaming`; if other parts of the system rely on the decorator's `support_streaming_message` flag, you may want to align the two so that the capability and the runtime exposure stay consistent with the config toggle.
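One way to keep the two sources consistent is to make `meta()` the single authoritative flag, derived from the config toggle in one place. This is a minimal sketch with hypothetical class and field names; the actual AstrBot decorator and metadata classes may differ.

```python
# Sketch: derive the runtime capability flag from the config toggle so that
# callers always read meta() rather than the registration-time decorator flag.
from dataclasses import dataclass

@dataclass
class PlatformMetadata:
    name: str
    support_streaming_message: bool

class LarkAdapter:
    def __init__(self, config: dict):
        # lark_enable_streaming defaults to off, matching default.py in the PR.
        self.enable_streaming = bool(config.get("lark_enable_streaming", False))

    def meta(self) -> PlatformMetadata:
        # The runtime exposure follows the config toggle.
        return PlatformMetadata(
            name="lark",
            support_streaming_message=self.enable_streaming,
        )
```

With this layout, the decorator flag can simply be set to match the default (`False`), which is the route the author ultimately took in the follow-up commit.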
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `_fallback_send_streaming`, the generator is fully consumed before calling `super().send_streaming(generator, use_fallback)`, so the parent implementation will receive an exhausted generator; if you only need the parent’s side effects, consider exposing those explicitly or removing the redundant call to avoid confusion.
- The adapter is registered with `support_streaming_message=True` while `meta().support_streaming_message` is gated by `self.enable_streaming`; if other parts of the system rely on the decorator’s `support_streaming_message` flag, you may want to align these so that the capability and the runtime exposure are consistent with the config toggle.
## Individual Comments
### Comment 1
<location path="astrbot/core/platform/sources/lark/lark_event.py" line_range="727-736" />
<code_context>
+ else:
+ logger.debug(f"[Lark] 流式模式已关闭: {card_id}")
+
+ async def _fallback_send_streaming(self, generator, use_fallback: bool = False):
+ """回退到非流式发送:缓冲全部文本后一次性发送,并保留父类副作用。"""
buffer = None
async for chain in generator:
if not buffer:
buffer = chain
else:
buffer.chain.extend(chain.chain)
- if not buffer:
- return None
- buffer.squash_plain()
- await self.send(buffer)
- return await super().send_streaming(generator, use_fallback)
+
+ if buffer:
+ buffer.squash_plain()
+ await self.send(buffer)
+
+ await super().send_streaming(generator, use_fallback)
+
+ async def send_streaming(self, generator, use_fallback: bool = False):
</code_context>
<issue_to_address>
**issue (bug_risk):** The fallback path fully consumes `generator` before delegating to `super().send_streaming`, so the parent implementation will see an exhausted generator.
In `_fallback_send_streaming`, the `async for chain in generator` loop exhausts `generator` before `await super().send_streaming(generator, use_fallback)`. As a result, the base implementation receives no streaming content, so any behavior that depends on the generator (e.g., logging, tracing, alternate sinks) is effectively bypassed in the fallback path. Please either drop the `super().send_streaming` call here, or refactor so the generator is only consumed once (for example by moving the buffering logic into the parent or otherwise sharing/cloning the stream).
</issue_to_address>
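The problem can be reproduced with a minimal async generator (illustrative only; the real message-chain types and parent behavior live in AstrBot's base class). Once the buffering loop has drained the stream, a second `async for` over the same object yields nothing, so a delegated call sees an empty stream. A consume-once refactor buffers the items and replays them instead:

```python
import asyncio

async def chunks():
    for part in ("Hello, ", "world"):
        yield part

async def broken_fallback(gen):
    # Bug: this loop exhausts gen...
    buffer = [chunk async for chunk in gen]
    # ...so a second consumer (standing in for super().send_streaming) sees nothing.
    leftover = [chunk async for chunk in gen]
    return "".join(buffer), leftover

async def fixed_fallback(gen):
    # Consume once, then replay the buffered items to the second consumer.
    buffer = [chunk async for chunk in gen]

    async def replay():
        for chunk in buffer:
            yield chunk

    leftover = [chunk async for chunk in replay()]
    return "".join(buffer), leftover

print(asyncio.run(broken_fallback(chunks())))  # → ('Hello, world', [])
print(asyncio.run(fixed_fallback(chunks())))   # → ('Hello, world', ['Hello, ', 'world'])
```

Dropping the `super()` call entirely, as the review suggests, is equally valid when the parent's only contribution is side effects that can be invoked directly.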
- Fixed a delta snapshot race in the sender loop: the generator could update delta during an await, so last_sent recorded a value that was never sent, leaving the output stuck on the last segment
- Added a platform_meta guard at the send_streaming entry point; falls back directly when streaming is not enabled
- _fallback_send_streaming no longer calls super() on the exhausted generator; the parent's side effects (Metric.upload + _has_send_oper) are inlined instead
- Metric.upload is now consistently awaited, so metric reporting completes before the method returns
- The decorator's support_streaming_message is now False, aligned with the dynamic configuration in meta()
- The i18n hint now notes that streaming output must be enabled under "AI Config → Other Settings"
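The delta-snapshot race in the first bullet can be sketched as follows. The names (`StreamState`, `sender_step`, `send_to_card`) are hypothetical stand-ins for the real sender loop in lark_event.py: the fix is to snapshot the shared delta before the await and record only the snapshot length, so content appended during the await is picked up by the next iteration instead of being silently marked as sent.

```python
# Sketch of the snapshot fix: state.delta is shared with the generator task,
# which may append to it while the sender is awaiting an API call.
import asyncio

class StreamState:
    def __init__(self):
        self.delta = ""      # text accumulated by the generator task
        self.last_sent = 0   # how much of delta has actually been sent

async def send_to_card(text: str):
    await asyncio.sleep(0)   # stands in for the CardKit update API call

async def sender_step(state: StreamState):
    # Snapshot BEFORE the await; anything appended during the await stays
    # unsent and is handled by the next iteration.
    snapshot = state.delta
    if len(snapshot) > state.last_sent:
        await send_to_card(snapshot[state.last_sent:])
        state.last_sent = len(snapshot)  # record the snapshot, not the live value
```

Recording `len(state.delta)` after the await is the buggy variant: any text the generator appended during `send_to_card` would be counted as sent without ever reaching the card.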
Hey - I've reviewed your changes and they look great!
Modifications / 改动点
Mobile preview
Screenrecorder-2026-03-08-00-00-49-179.1.mp4
Desktop preview
2026-03-07.23-59-21.1.1.mp4
Core streaming logic — lark_event.py
- CardKit streaming cards (streaming_mode=true) sent as interactive type messages

Adapter configuration — lark_adapter.py
- New lark_enable_streaming config option
- support_streaming_message returned dynamically based on the config

Default configuration — default.py
- lark_enable_streaming: False (off by default)
- Requires the cardkit:card:write permission; Feishu client must be ≥ 7.20

Frontend i18n
Screenshots or Test Results / 运行截图或测试结果
Checklist / 检查清单
I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.

Summary by Sourcery
Add configurable CardKit-based streaming message support for the Lark (Feishu) adapter and expose it via platform metadata and dashboard configuration.

New Features:
- lark_enable_streaming toggle to control whether streaming output support is enabled.

Enhancements: