feat(lark): support Feishu CardKit streaming output #5777

Merged
Soulter merged 6 commits into AstrBotDevs:master from camera-2018:feat/lark-streaming-output on Mar 10, 2026

Conversation

@camera-2018
Contributor

@camera-2018 camera-2018 commented Mar 5, 2026

Modifications

Mobile preview

Screenrecorder-2026-03-08-00-00-49-179.1.mp4

Desktop preview

2026-03-07.23-59-21.1.1.mp4
  • Add a send_streaming method with a decoupled async sender-loop architecture; RTT provides natural backpressure
  • Add _create_streaming_card: creates a Card JSON 2.0 card entity (streaming_mode=true)
  • Add _send_card_message: sends an interactive-type message
  • Add _update_streaming_text: incremental PUT updates of the markdown content (with an incrementing sequence)
  • Add _close_streaming_mode: closes streaming mode so the card can be forwarded
  • Automatic fallback to non-streaming sending when a CardKit operation fails
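The architecture in the bullets above can be sketched roughly as follows. This is a minimal illustration, not the adapter's actual code: `create_card` and `update_card` are hypothetical stand-ins for the CardKit API helpers, and the shared state is simplified to a dict.

```python
import asyncio

async def send_streaming(generator, create_card, update_card):
    """Decoupled sender loop: the generator fills a shared buffer while a
    separate loop PUTs the latest snapshot. Because each PUT is awaited,
    its round-trip time (RTT) naturally paces the update rate."""
    try:
        card_id = await create_card()          # streaming_mode=true card
    except Exception:
        # CardKit failed: fall back to buffering everything and sending once
        return "".join([chunk async for chunk in generator])

    state = {"text": "", "done": False}

    async def consume():
        async for chunk in generator:          # producer: drain the LLM stream
            state["text"] += chunk
        state["done"] = True

    async def sender_loop():
        seq, last_sent = 0, ""
        while not (state["done"] and last_sent == state["text"]):
            snapshot = state["text"]           # snapshot BEFORE the await
            if snapshot != last_sent:
                seq += 1
                await update_card(card_id, snapshot, seq)  # RTT = backpressure
                last_sent = snapshot
            else:
                await asyncio.sleep(0.05)      # idle tick while waiting for text

    await asyncio.gather(consume(), sender_loop())
    return state["text"]
```

The key property is that the sender never issues more than one in-flight update: a slow network round trip simply means the next snapshot covers more accumulated text.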

Adapter configuration — lark_adapter.py

  • Read the lark_enable_streaming config option
  • support_streaming_message in meta() is returned dynamically based on the config
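A minimal sketch of the dynamic gating described above — the `PlatformMetadata` shape and field names here are assumptions for illustration, not the project's exact types:

```python
from dataclasses import dataclass

@dataclass
class PlatformMetadata:
    name: str
    support_streaming_message: bool

class LarkAdapter:
    def __init__(self, config: dict):
        # read the toggle; streaming stays off unless explicitly enabled
        self.enable_streaming = config.get("lark_enable_streaming", False)

    def meta(self) -> PlatformMetadata:
        # the capability is reported dynamically from configuration,
        # so callers checking meta() see the user's actual setting
        return PlatformMetadata(
            name="lark",
            support_streaming_message=self.enable_streaming,
        )
```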

Default configuration — default.py

  • Add lark_enable_streaming: False to the Feishu template (disabled by default)
  • Add description, type, and hint to the metadata: note that the cardkit:card:write permission must be enabled and the Feishu client must be ≥ 7.20

Dashboard i18n

  • zh-CN: add the "Enable Streaming Output" description and permission/version hints
  • en-US: add the "Enable Streaming Output" description and corresponding hints
This is NOT a breaking change.

Screenshots or Test Results


Checklist

  • 😊 If there are new features added in the PR, I have discussed them with the authors through issues/emails, etc.
  • 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.
  • 🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.
  • 😮 My changes do not introduce malicious code.


Summary by Sourcery

Add configurable CardKit-based streaming message support for the Lark (Feishu) adapter and expose it via platform metadata and dashboard configuration.

New Features:

  • Implement CardKit-based streaming message sending for Lark using streaming cards and incremental markdown updates.
  • Introduce a configurable lark_enable_streaming flag in the Feishu provider template to toggle streaming output support.
  • Expose the Lark adapter's streaming capability dynamically through platform metadata based on configuration.
  • Add dashboard i18n metadata entries in zh-CN and en-US to describe the Lark streaming output option.

Enhancements:

  • Record and tag streaming send metrics for Lark when using the new streaming path.


@auto-assign auto-assign bot requested review from Soulter and advent259141 March 5, 2026 16:15
@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Mar 5, 2026
@camera-2018 camera-2018 marked this pull request as draft March 5, 2026 16:16
@dosubot dosubot bot added the area:platform The bug / feature is about IM platform adapter, such as QQ, Lark, Telegram, WebChat and so on. label Mar 5, 2026
Contributor

@sourcery-ai sourcery-ai bot left a comment



Hey - I've left some high level feedback:

  • In the two fallback branches of send_streaming (card creation or send failure), the generator is fully consumed to build buffer and then super().send_streaming(generator, use_fallback) is awaited on an already-exhausted generator; consider removing or refactoring the super().send_streaming call there to avoid redundant work or unexpected side effects.
  • The adapter is registered with support_streaming_message=True in the decorator, but the runtime metadata returns support_streaming_message=self.enable_streaming; if other parts of the system rely on the decorator flag, this mismatch could cause confusion—consider aligning the two or documenting which one is authoritative.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In the two fallback branches of `send_streaming` (card creation or send failure), the generator is fully consumed to build `buffer` and then `super().send_streaming(generator, use_fallback)` is awaited on an already-exhausted generator; consider removing or refactoring the `super().send_streaming` call there to avoid redundant work or unexpected side effects.
- The adapter is registered with `support_streaming_message=True` in the decorator, but the runtime metadata returns `support_streaming_message=self.enable_streaming`; if other parts of the system rely on the decorator flag, this mismatch could cause confusion—consider aligning the two or documenting which one is authoritative.
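The core of the first review point — an async generator can only be consumed once, so a second consumer sees nothing — can be demonstrated in isolation:

```python
import asyncio

async def chunks():
    for piece in ["Hello, ", "world"]:
        yield piece

async def main():
    gen = chunks()
    buffer = [c async for c in gen]        # first pass drains the generator
    leftover = [c async for c in gen]      # second pass sees nothing
    return buffer, leftover

buffer, leftover = asyncio.run(main())
print(buffer)    # ['Hello, ', 'world']
print(leftover)  # [] — exhausted, so super().send_streaming() would stream nothing
```

This is why the fallback path's `super().send_streaming(generator, ...)` call after buffering is effectively a no-op for content: only the parent's side effects (metrics, flags) survive.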


@dosubot

dosubot bot commented Mar 5, 2026

Related Documentation

Checked 1 published document(s) in 1 knowledge base(s). No updates required.


@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This PR adds streaming message sending for the Feishu bot, using CardKit to render LLM replies incrementally with a typewriter-like effect. It adds a new config option, reworks the send architecture, and adds error handling, improving both user experience and stability. The dashboard i18n is updated accordingly with streaming-output descriptions and hints in Chinese and English.

Highlights

  • Streaming output support: implements Feishu CardKit streaming output so LLM replies appear incrementally with a "typewriter" effect.
  • Decoupled async architecture: introduces a decoupled async sender loop whose send rate is naturally paced by the round-trip time (RTT) of each update.
  • New config option: adds lark_enable_streaming so users can enable or disable streaming output in configuration.
  • Error handling: adds automatic degradation so that when a CardKit operation fails, sending falls back to the non-streaming path.
Changelog
  • Dashboard i18n
    • zh-CN: add the "Enable Streaming Output" description and permission/version hints
    • en-US: add the "Enable Streaming Output" description and corresponding hints
  • Adapter configuration
    • Read the lark_enable_streaming config option
    • support_streaming_message in meta() is returned dynamically based on the config
  • Feishu cards
    • Add the send_streaming method with a decoupled async sender-loop architecture; RTT provides natural backpressure
    • Add _create_streaming_card: creates a Card JSON 2.0 card entity (streaming_mode=true)
    • Add _send_card_message: sends an interactive-type message
    • Add _update_streaming_text: incremental PUT updates of the markdown content (with an incrementing sequence)
    • Add _close_streaming_mode: closes streaming mode so the card can be forwarded
    • Automatic fallback to non-streaming sending when a CardKit operation fails
  • Default configuration
    • Add lark_enable_streaming: False to the Feishu template (disabled by default)
    • Add description, type, and hint to the metadata: the cardkit:card:write permission must be enabled and the Feishu client must be ≥ 7.20
Activity
  • Added the send_streaming method supporting Feishu CardKit streaming output.
  • Implemented the async sender-loop architecture for a smoother sending experience.
  • Added the lark_enable_streaming config option to toggle streaming output.
  • Added automatic fallback when CardKit operations fail.
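To illustrate the sequence-numbered incremental update mentioned in the changelog, the sketch below keeps a strictly increasing counter per card. The class name, URL path, and `put` callable are hypothetical stand-ins for illustration, not Feishu's actual SDK surface.

```python
import itertools

class StreamingCardUpdater:
    """Sequence-numbered incremental updates for a streaming card.

    CardKit-style update APIs take a per-card sequence that must strictly
    increase, so the server can reject stale or out-of-order PUTs. The
    `put` callable here stands in for an HTTP client; the path below is
    illustrative, not the real endpoint.
    """

    def __init__(self, card_id, put):
        self.card_id = card_id
        self.put = put
        self._seq = itertools.count(1)     # strictly increasing sequence

    def update_text(self, markdown: str):
        seq = next(self._seq)
        self.put(
            f"/cardkit/cards/{self.card_id}/elements/markdown",  # placeholder path
            {"content": markdown, "sequence": seq},
        )
        return seq
```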

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This PR implements streaming output for Feishu CardKit. The overall architecture is sound: a decoupled asynchronous sender loop with a fallback mechanism. The implementation is complete, covering configuration, backend logic, and dashboard i18n updates.

I found a few areas for improvement, mainly in lark_event.py:

  1. The fallback logic in send_streaming is duplicated across branches; extracting it into a helper would improve maintainability.
  2. At the end of send_streaming there is a redundant text-update API call, issued even when the sender loop has already pushed the final content.

These changes would improve robustness and readability. The rest of the changes look good.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This PR introduces CardKit-based streaming output functionality for the Lark adapter, which is a great new feature. While the overall implementation structure is clear, a critical vulnerability has been identified in the fallback mechanism introduced in lark_event.py. This vulnerability allows LLM-generated content to trigger arbitrary file reads or SSRF because the adapter's message conversion logic trustingly handles local file paths and URLs in message components. Beyond this, general feedback includes suggestions for improving code robustness and consistency in the core streaming sending logic. To mitigate the security risk, strict validation for all message components derived from LLM output is recommended.

@camera-2018
Contributor Author

@sourcery-ai review

Contributor

@sourcery-ai sourcery-ai bot left a comment



Hey - I've found 1 issue, and left some high level feedback:

  • In _fallback_send_streaming, the generator is fully consumed before calling super().send_streaming(generator, use_fallback), so the parent implementation will receive an exhausted generator; if you only need the parent’s side effects, consider exposing those explicitly or removing the redundant call to avoid confusion.
  • The adapter is registered with support_streaming_message=True while meta().support_streaming_message is gated by self.enable_streaming; if other parts of the system rely on the decorator’s support_streaming_message flag, you may want to align these so that the capability and the runtime exposure are consistent with the config toggle.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `_fallback_send_streaming`, the generator is fully consumed before calling `super().send_streaming(generator, use_fallback)`, so the parent implementation will receive an exhausted generator; if you only need the parent’s side effects, consider exposing those explicitly or removing the redundant call to avoid confusion.
- The adapter is registered with `support_streaming_message=True` while `meta().support_streaming_message` is gated by `self.enable_streaming`; if other parts of the system rely on the decorator’s `support_streaming_message` flag, you may want to align these so that the capability and the runtime exposure are consistent with the config toggle.

## Individual Comments

### Comment 1
<location path="astrbot/core/platform/sources/lark/lark_event.py" line_range="727-736" />
<code_context>
+        else:
+            logger.debug(f"[Lark] 流式模式已关闭: {card_id}")
+
+    async def _fallback_send_streaming(self, generator, use_fallback: bool = False):
+        """回退到非流式发送:缓冲全部文本后一次性发送,并保留父类副作用。"""
         buffer = None
         async for chain in generator:
             if not buffer:
                 buffer = chain
             else:
                 buffer.chain.extend(chain.chain)
-        if not buffer:
-            return None
-        buffer.squash_plain()
-        await self.send(buffer)
-        return await super().send_streaming(generator, use_fallback)
+
+        if buffer:
+            buffer.squash_plain()
+            await self.send(buffer)
+
+        await super().send_streaming(generator, use_fallback)
+
+    async def send_streaming(self, generator, use_fallback: bool = False):
</code_context>
<issue_to_address>
**issue (bug_risk):** The fallback path fully consumes `generator` before delegating to `super().send_streaming`, so the parent implementation will see an exhausted generator.

In `_fallback_send_streaming`, the `async for chain in generator` loop exhausts `generator` before `await super().send_streaming(generator, use_fallback)`. As a result, the base implementation receives no streaming content, so any behavior that depends on the generator (e.g., logging, tracing, alternate sinks) is effectively bypassed in the fallback path. Please either drop the `super().send_streaming` call here, or refactor so the generator is only consumed once (for example by moving the buffering logic into the parent or otherwise sharing/cloning the stream).
</issue_to_address>


- Fix the delta snapshot race in the sender loop: delta could be updated by the
  generator during an await, so last_sent recorded a value that was never sent,
  leaving the output stuck on the last segment
- Add a platform_meta guard at the send_streaming entry point; fall back
  directly when streaming is not enabled
- Remove the super() call on the exhausted generator in _fallback_send_streaming;
  inline the parent-class side effects instead (Metric.upload + _has_send_oper)
- Consistently await Metric.upload so metric reporting completes before the
  method returns
- Change the decorator's support_streaming_message to False, aligning it with
  the dynamic value from meta()
- Extend the i18n hint: streaming output must also be enabled under
  "AI Config → Other Settings"
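The first bullet (the delta snapshot race) boils down to capturing the shared value before the await and recording that same snapshot as sent. A contrived, self-contained reproduction of the corrected pattern — function and variable names here are illustrative, not the PR's actual code:

```python
import asyncio

async def sender_loop(state, send):
    """Correct pattern: snapshot the shared delta BEFORE awaiting the send.

    The buggy variant re-read state["delta"] after the await to set
    last_sent, so text appended during the round trip was marked as sent
    and the final segment never went out."""
    last_sent = ""
    while not (state["done"] and last_sent == state["delta"]):
        snapshot = state["delta"]          # take the snapshot first
        if snapshot != last_sent:
            await send(snapshot)           # delta may grow during this await
            last_sent = snapshot           # record what was actually sent
        else:
            await asyncio.sleep(0.01)

async def demo():
    state = {"delta": "", "done": False}
    sent = []

    async def send(text):
        sent.append(text)
        await asyncio.sleep(0.02)          # RTT window where delta can change

    async def produce():
        for piece in ["Hel", "lo!"]:
            state["delta"] += piece
            await asyncio.sleep(0.01)
        state["done"] = True

    await asyncio.gather(produce(), sender_loop(state, send))
    return sent

sent = asyncio.run(demo())
assert sent[-1] == "Hello!"                # the final segment is always sent
```

Because `last_sent` always equals a snapshot that was actually transmitted, the loop cannot exit until the last accumulated text has gone out.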
@camera-2018 camera-2018 marked this pull request as ready for review March 7, 2026 16:01
@auto-assign auto-assign bot requested a review from Fridemn March 7, 2026 16:01
Contributor

@sourcery-ai sourcery-ai bot left a comment



Hey - I've reviewed your changes and they look great!



@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Mar 10, 2026
@Soulter Soulter merged commit ec21cb1 into AstrBotDevs:master Mar 10, 2026
5 of 6 checks passed
Soulter added a commit that referenced this pull request Mar 10, 2026
…5993)

* feat(mcp): enhance logging and initialize MCP clients in background

fixes: #5777

* rf

* fix(mcp): simplify MCP client initialization in background

* fix(mcp): update error message for MCP background initialization failure
