fix: suppress GeneratorExit during client cleanup #1856
Open
codefromthecrypt wants to merge 1 commit into modelcontextprotocol:main from
Conversation
Author
@Kludex I ran into this with llama-stack and verified no errors with this patch. If I'm addressing this the wrong way, lemme know.
GeneratorExit can leak from sse_client and streamablehttp_client during cleanup, causing RuntimeError in downstream code. This handles both direct GeneratorExit and BaseExceptionGroup wrapping (cpython#95571).

Fixes modelcontextprotocol#1214

Signed-off-by: Adrian Cole <adrian@tetrate.io>
Author
@Kludex mind looking at this for merge, or suggesting another way? Would love to have llama-stack, which uses auto instrumentation in its server, work out of the box without patches.

Motivation and Context
While integrating the latest llama-stack with MCP examples via the OpenAI Responses API, I ran into a couple of glitches which broke requests. I narrowed them down to GeneratorExit handling. When rebuilding llama-stack against this branch, all scenarios pass. I noticed this also fixes #1214, which was closed due to missing tests; this PR adds them.
The Problem
Scenario 1: GC cleanup leaks GeneratorExit
When the garbage collector cleans up an orphaned generator while a background task has failed:
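A minimal sketch of the mechanism (illustrative names, not the SDK code): closing an async generator, which is effectively what the garbage collector does to an orphaned client, delivers GeneratorExit at whatever yield the generator is suspended on. In the real scenario that yield sits inside a task group that is also unwinding a failed background task.

```python
# Illustrative only: shows how aclose()/GC delivers GeneratorExit at the
# yield point of an async generator such as an abandoned client context.
import asyncio


async def fake_client():
    try:
        yield "session"
    except GeneratorExit:
        print("GeneratorExit delivered at the yield point")
        raise  # re-raising lets aclose() complete cleanly


async def main():
    gen = fake_client()
    await gen.__anext__()  # advance to the yield, then abandon the generator
    await gen.aclose()     # what GC cleanup amounts to for an orphaned generator


asyncio.run(main())
```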
Background:
Scenario 2: User TaskGroup wraps GeneratorExit
When user code has a generator yielding inside a TaskGroup:
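A minimal sketch of the wrapping problem, assuming Python 3.11+ so BaseExceptionGroup is built in (names are illustrative, not the SDK code): once task-group __aexit__ or except* machinery re-raises the in-flight GeneratorExit inside a group, aclose() no longer gets a bare GeneratorExit back, and the group leaks out of cleanup (cpython#95571).

```python
import asyncio


async def wrapped_cleanup():
    try:
        yield "session"
    except GeneratorExit as exc:
        # Stand-in for a TaskGroup __aexit__ / except* that collects the
        # in-flight exception and re-raises it inside a group.
        raise BaseExceptionGroup("unhandled errors in a TaskGroup", [exc])


async def main():
    gen = wrapped_cleanup()
    await gen.__anext__()
    try:
        await gen.aclose()  # expects GeneratorExit back, gets a group instead
    except BaseExceptionGroup as eg:
        print(f"leaked from cleanup: {eg!r}")


asyncio.run(main())
```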
Background:
- except* can break @trio.as_safe_channel cleanup (python-trio/trio#3324) - why except* breaks generator cleanup
The Fix
Two exception handlers at the yield point handle both scenarios:
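A rough sketch of the shape such handlers could take at the client's yield point (not the PR's exact code; assumes Python 3.11+ for the built-in BaseExceptionGroup, and the memory stream is a stand-in for the client's real transport wiring):

```python
from contextlib import asynccontextmanager

import anyio


@asynccontextmanager
async def example_client():
    # Stand-in for the real transport setup in sse_client / streamable_http_client.
    send_stream, read_stream = anyio.create_memory_object_stream(0)
    try:
        yield read_stream
    except GeneratorExit:
        # Scenario 1: the generator is being closed (e.g. by the GC);
        # swallow it here so it cannot leak into downstream code.
        pass
    except BaseExceptionGroup as eg:
        # Scenario 2: a task group wrapped GeneratorExit into a group;
        # strip it out and re-raise only whatever else remains.
        _, rest = eg.split(GeneratorExit)
        if rest is not None:
            raise rest from None
    finally:
        # Always release the streams, even when cleanup was triggered by GeneratorExit.
        await read_stream.aclose()
        await send_stream.aclose()
```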
Changes
- GeneratorExit and BaseExceptionGroup handling in sse_client and streamable_http_client
- exceptiongroup>=1.0.0 as conditional dependency for Python 3.10 support
- read_stream closed in finally block
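For the conditional dependency item, a common guard (an assumption here, not necessarily the PR's exact code) keeps the backport import limited to interpreters where BaseExceptionGroup is not built in:

```python
import sys

if sys.version_info < (3, 11):
    # BaseExceptionGroup became a builtin in Python 3.11; on Python 3.10 the
    # exceptiongroup>=1.0.0 backport supplies a compatible class.
    from exceptiongroup import BaseExceptionGroup
```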
How Has This Been Tested?
Parameterized tests in tests/client/test_resource_cleanup.py (each runs for both SSE and Streamable HTTP):
- test_generator_exit_on_gc_cleanup[sse/streamable] - Scenario 1: GC cleanup
- test_generator_exit_in_exception_group[sse/streamable] - Scenario 2: BaseExceptionGroup([GeneratorExit])
- test_generator_exit_mixed_group[sse/streamable] - Scenario 2: BaseExceptionGroup([GeneratorExit, ValueError])
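An illustrative shape for one of these tests (the fixture and generator access below are hypothetical, not copied from tests/client/test_resource_cleanup.py):

```python
import pytest


@pytest.mark.anyio
@pytest.mark.parametrize("transport", ["sse", "streamable"])
async def test_generator_exit_on_gc_cleanup(transport, make_client):
    # Scenario 1: enter the client context manager, then close the underlying
    # generator the way the garbage collector would for an orphaned client.
    ctx = make_client(transport)   # make_client is a hypothetical fixture
    await ctx.__aenter__()
    await ctx.gen.aclose()         # must complete without leaking GeneratorExit
```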
Breaking Changes
None
Types of changes
Checklist