
feat: select/replace tools Fixes #9701 #9698

Draft
koush wants to merge 1 commit into anomalyco:dev from koush:select-replace-tools

Conversation

@koush
Copy link

@koush koush commented Jan 20, 2026

What does this PR do?

Opened this PR as a draft because it's just something I think was worth prototyping.

The search/replace mechanism that has become the de facto workflow for edits has a couple of problems:

  • The LLM has to perfectly recall and generate the code it wants to replace in full. Granted, the edit tool has generous matching here.
  • LLM token generation/output is an order of magnitude more expensive and several orders of magnitude slower than prefill/input of the same size.
  • Failures are expensive and slow. If the match fails, the entire tool call needs to be re-executed.

This pull request adds two new tools: select-text and replace-selection.

The flow is:

  • The LLM selects text using start and end search strings, which means the model does not have to generate/recall the large block of code that needs replacing.
  • If the search fails, the failure is fast and cheap, and the model reattempts the search.
  • On success, the select tool returns the selected code for the model to review and continue. Prefill of large code blocks, even thousands of lines, is sub second, whereas generation may take minutes.
  • The LLM then uses the replace tool and generates/outputs tokens for only the new code.
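The four steps above can be sketched roughly as follows. This is an illustrative TypeScript sketch, not the PR's actual implementation; the `Selection` shape and the `selectText`/`replaceSelection` names are assumptions.

```typescript
// Hypothetical sketch of the select-text / replace-selection flow.

interface Selection {
  start: number; // index of the first selected character
  end: number;   // index one past the last selected character
  text: string;  // echoed back so the model can review it via cheap prefill
}

// select-text: locate a span using short start/end search strings.
function selectText(source: string, startSearch: string, endSearch: string): Selection | null {
  const start = source.indexOf(startSearch);
  if (start === -1) return null; // fast, cheap failure: the model simply retries
  const endAt = source.indexOf(endSearch, start + startSearch.length);
  if (endAt === -1) return null;
  const end = endAt + endSearch.length;
  return { start, end, text: source.slice(start, end) };
}

// replace-selection: the model generates tokens only for the new code.
function replaceSelection(source: string, sel: Selection, replacement: string): string {
  return source.slice(0, sel.start) + replacement + source.slice(sel.end);
}
```

The key property: a failed `selectText` costs only a short search string of output tokens, and a successful one returns the old block as input-side context rather than requiring the model to regenerate it.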

This is roughly 2x as fast for half the price with Claude, for example.

| Model | Base Input Tokens | 5m Cache Writes | 1h Cache Writes | Cache Hits & Refreshes | Output Tokens |
| --- | --- | --- | --- | --- | --- |
| Claude Opus 4.5 | $5 / MTok | $6.25 / MTok | $10 / MTok | $0.50 / MTok | $25 / MTok |
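As a worked example of the table's base rates, consider a hypothetical edit where a 1,000-token block is replaced by a 1,000-token block (caching is ignored here for simplicity; the 20-token search-string budget is an assumption):

```typescript
// Cost arithmetic using the Opus 4.5 base rates listed above:
// $5 / MTok input, $25 / MTok output. Scenario is illustrative.
const INPUT_PER_TOK = 5 / 1_000_000;
const OUTPUT_PER_TOK = 25 / 1_000_000;

// Classic edit tool: the model regenerates the old block as output
// (the search string) plus the new block.
function editToolCost(oldTokens: number, newTokens: number): number {
  return (oldTokens + newTokens) * OUTPUT_PER_TOK;
}

// select/replace: the old block comes back as cheap input (prefill);
// the model outputs only short search strings and the new block.
function selectReplaceCost(oldTokens: number, newTokens: number, searchTokens = 20): number {
  return oldTokens * INPUT_PER_TOK + (searchTokens + newTokens) * OUTPUT_PER_TOK;
}
```

For 1,000 old and 1,000 new tokens this gives $0.05 versus roughly $0.03, consistent with the "about half the price" claim once the regenerated old block moves from the output side to the input side.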

The downside is that model training data now extensively reflects the existing edit-tool call style, so that flow is baked into the weights. However, as mentioned, failures in this new tool-call flow are comparatively fast and inexpensive.

The change needed a way for tools to access the conversation history, as the new tools are stateful.
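Because `replace-selection` must act on whatever `select-text` last returned, the tool needs to read prior messages. A minimal sketch of that stateful lookup follows; the message shape and field names are hypothetical, not the PR's actual types.

```typescript
// Hypothetical conversation-history lookup for a stateful tool.

interface Selection {
  start: number;
  end: number;
  text: string;
}

interface ToolMessage {
  tool: string;        // e.g. "select-text" or "replace-selection"
  result?: Selection;  // present when a select-text call succeeded
}

// Walk the history backwards to find the most recent successful selection.
function lastSelection(history: ToolMessage[]): Selection | undefined {
  for (let i = history.length - 1; i >= 0; i--) {
    const msg = history[i];
    if (msg.tool === "select-text" && msg.result) return msg.result;
  }
  return undefined;
}
```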

How did you verify your code works?

Tests and usage.

@github-actions
Copy link
Contributor

Thanks for your contribution!

This PR doesn't have a linked issue. All PRs must reference an existing issue.

Please:

  1. Open an issue describing the bug/feature (if one doesn't exist)
  2. Add Fixes #<number> or Closes #<number> to this PR description

See CONTRIBUTING.md for details.

@github-actions
Copy link
Contributor

The following comment was made by an LLM; it may be inaccurate:

No duplicate PRs found

@koush koush changed the title feat: select/replace tools feat: select/replace tools Fixes 9701 Jan 20, 2026
@koush koush changed the title feat: select/replace tools Fixes 9701 feat: select/replace tools Fixes https://github.com/anomalyco/opencode/issues/9701 Jan 20, 2026
@koush koush changed the title feat: select/replace tools Fixes https://github.com/anomalyco/opencode/issues/9701 feat: select/replace tools Fixes #9701 Jan 20, 2026
@koush koush marked this pull request as ready for review January 20, 2026 21:35
@koush
Copy link
Author

koush commented Jan 20, 2026

Fixes #9701

@koush koush marked this pull request as draft January 20, 2026 21:39
@kripper
Copy link

kripper commented Jan 24, 2026

Does this solve the problem of LLMs struggling to distinguish between spaces and tabs when editing files?
I love tabs, and I hate using 2, 3, or 4 spaces as tabs, which I consider a hack born from historical reasons.

@koush
Copy link
Author

koush commented Jan 24, 2026

Does this solve the problem of LLMs struggling to distinguish between spaces and tabs when editing files? I love tabs, and I hate using 2, 3, or 4 spaces as tabs, which I consider a hack born from historical reasons.

The search string doesn't require indentation, but the tool response encourages including it. The selected code is also echoed back, so the model is less likely to make an error when the code is in recent context.
This implementation doesn't have any of the fuzzy matching from the existing edit tool but it could be added.

@kripper
Copy link

kripper commented Jan 25, 2026

Yes. The implementation must:

  1. Perform fuzzy matching on the source string.
  2. Infer the indentation scheme used by the original file (tabs vs. spaces, width).
  3. Normalize the LLM output by replacing its indentation with a deterministic indentation generated from the inferred scheme.

LLMs are unreliable at this, and it is a constant source of failures.
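Steps 2 and 3 above could look roughly like the following sketch. The `inferIndent`/`normalizeIndent` names are hypothetical, and the heuristics (smallest nonzero space run as the width; a fixed assumed per-level width in the LLM's output) are simplifying assumptions, not code from this PR.

```typescript
// Infer the file's indentation unit: a tab, or N spaces.
function inferIndent(source: string): string {
  const lines = source.split("\n");
  if (lines.some((l) => l.startsWith("\t"))) return "\t";
  // Smallest nonzero leading-space run approximates the indent width.
  let width = Infinity;
  for (const l of lines) {
    const m = l.match(/^( +)\S/);
    if (m) width = Math.min(width, m[1].length);
  }
  return " ".repeat(Number.isFinite(width) ? width : 4);
}

// Rewrite leading whitespace of LLM output into the inferred unit,
// assuming the LLM indented with tabs or llmWidth spaces per level.
function normalizeIndent(llmOutput: string, unit: string, llmWidth = 4): string {
  return llmOutput
    .split("\n")
    .map((l) => {
      const m = l.match(/^[\t ]+/);
      if (!m) return l;
      const ws = m[0];
      const levels = ws.startsWith("\t") ? ws.length : Math.floor(ws.length / llmWidth);
      return unit.repeat(levels) + l.slice(ws.length);
    })
    .join("\n");
}
```

With this, a tab-indented file stays tab-indented even when the model emits four-space indentation, which addresses the tabs-versus-spaces confusion described above.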

