
UN-3215 [FIX] Add LLMCompat bridge class to fix retriever LLM compatibility with llama-index #1788

Open
pk-zipstack wants to merge 10 commits into main from fix/retriever-llm-bridge-class

Conversation


@pk-zipstack pk-zipstack commented Feb 12, 2026

What

  • Added LLMCompat class in SDK1 that emulates the llama-index LLM interface without requiring llama-index as a dependency
  • Added RetrieverLLM class in prompt-service that inherits from llama-index's LLM base class and delegates calls to LLMCompat
  • Updated all affected prompt-service retrievers to use RetrieverLLM when passing LLM to llama-index components
  • Fixed SubQuestionQueryEngine failing due to missing llama-index-question-gen-openai package

Why

  • After the SDK1 migration, unstract.sdk1.llm.LLM uses litellm.completion() directly and does not inherit from llama_index.core.llms.llm.LLM
  • Prompt-service retrievers (KeywordTable, Subquestion, Fusion, Router) pass this LLM to llama-index components that expect the llama-index base class
  • This causes 'LLM' object has no attribute 'predict' errors at runtime for all retrieval strategies except Simple
  • Additionally, llama-index's resolve_llm() asserts isinstance(llm, LLM) which fails for plain classes — breaking Fusion, Router, and Subquestion retrievers
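
The check that trips is roughly the following (a sketch, not the PR's code; `PlainLLM` is a hypothetical stand-in for the post-migration SDK1 LLM):

```python
from llama_index.core.llms.utils import resolve_llm


class PlainLLM:
    """Stand-in for unstract.sdk1.llm.LLM after the SDK1 migration:
    it does NOT inherit from llama_index.core.llms.llm.LLM."""


# resolve_llm() validates its argument against llama-index's LLM base
# class, so passing a plain SDK1 LLM fails at this type-check boundary.
resolve_llm(PlainLLM())  # raises for non-llama-index LLM objects
```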

How

Two-layer architecture:

  1. LLMCompat (unstract/sdk1/src/unstract/sdk1/llm.py): A plain Python class (no llama-index dependency) that emulates the llama-index LLM interface. Uses local dataclass emulations of llama-index types (ChatMessage, ChatResponse, CompletionResponse, LLMMetadata, MessageRole). Follows the existing EmbeddingCompat initialization pattern — takes raw adapter params and creates an SDK1 LLM internally. Implements chat(), complete(), predict(), achat(), acomplete(), apredict().

  2. RetrieverLLM (prompt-service/.../core/retrievers/retriever_llm.py): Inherits from llama_index.core.llms.llm.LLM (passing resolve_llm() isinstance checks) and delegates all LLM calls to an internal LLMCompat instance. Converts emulated return types back to real llama-index types.
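
To make the shape of layer 1 concrete, here is a minimal, self-contained sketch (illustrative only, not the PR's actual code; `self._llm.complete` is a hypothetical stand-in for the SDK1 litellm-backed call):

```python
from dataclasses import dataclass
from enum import Enum


class MessageRole(str, Enum):
    """Emulates llama_index.core.base.llms.types.MessageRole."""
    SYSTEM = "system"
    USER = "user"
    ASSISTANT = "assistant"


@dataclass
class ChatMessage:
    """Emulates llama-index's ChatMessage without importing llama-index."""
    role: MessageRole = MessageRole.USER
    content: str | None = ""


@dataclass
class ChatResponse:
    """Emulates llama-index's ChatResponse."""
    message: ChatMessage
    raw: object | None = None


class LLMCompat:
    """Plain Python class mirroring the subset of the llama-index LLM
    interface that the retrievers actually call."""

    def __init__(self, llm):
        # `llm` is an SDK1 LLM; only a text-completion call is assumed here.
        self._llm = llm

    def chat(self, messages: list[ChatMessage], **kwargs) -> ChatResponse:
        # Flatten the chat history into a single prompt and delegate.
        prompt = "\n".join(m.content or "" for m in messages)
        text = self._llm.complete(prompt)  # hypothetical SDK1 call
        return ChatResponse(message=ChatMessage(MessageRole.ASSISTANT, text))
```

Layer 2 (RetrieverLLM) then subclasses llama_index.core.llms.llm.LLM, converts these emulated types back into the real llama-index ones, and forwards every call to an LLMCompat instance.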

Supporting changes:

  • llm_helper.py: Helper to convert an SDK1 LLM to a RetrieverLLM instance
  • base_retriever.py: Added llama_index_llm property that lazily creates and caches the RetrieverLLM wrapper
  • Retriever updates: Replaced self.llm with self.llama_index_llm where passed to llama-index components (keyword_table, subquestion, fusion, router)
  • SubQuestion fix: Explicitly pass LLMQuestionGenerator to SubQuestionQueryEngine.from_defaults() to avoid import of llama-index-question-gen-openai
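
For reference, the shape of that explicit wiring in llama-index (a sketch assuming `llm` is the bridged llama-index-compatible LLM and `query_engine_tools` is already built, as in the retriever):

```python
from llama_index.core.query_engine import SubQuestionQueryEngine
from llama_index.core.question_gen import LLMQuestionGenerator

# Build the question generator explicitly so from_defaults() does not fall
# back to OpenAIQuestionGenerator, which would import the optional
# llama-index-question-gen-openai package.
question_gen = LLMQuestionGenerator.from_defaults(llm=llm)
query_engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=query_engine_tools,
    question_gen=question_gen,
    use_async=True,
    llm=llm,
)
```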

Can this PR break any existing features? If yes, please list possible items. If no, please explain why. (PS: Admins do not merge the PR without this section filled)

  • No. LLMCompat and RetrieverLLM are only used when retrievers pass LLM to llama-index components. The Simple, Automerging, and Recursive retrievers don't pass LLM to llama-index and remain unchanged. The bridge delegates all calls to the existing SDK1 LLM, so no behavior changes.

Database Migrations

  • None

Env Config

  • None

Relevant Docs

  • N/A

Related Issues or PRs

Dependencies Versions

  • No new dependencies added. LLMCompat uses emulated types (no llama-index imports in SDK1). RetrieverLLM uses llama-index core classes already present in the prompt-service.

Notes on Testing

  • Set retrieval strategy to Keyword Table, Subquestion, Fusion, or Router in Prompt Studio with chunk_size > 0 and run extraction — should no longer error with 'LLM' object has no attribute 'predict'
  • Verify Simple, Automerging, Recursive strategies still work (regression check)

Screenshots

  • N/A

Checklist

I have read and understood the Contribution Guidelines.


coderabbitai bot commented Feb 12, 2026

Summary by CodeRabbit

  • New Features

    • Enhanced LLM compatibility for retrieval operations with improved framework integration
    • Added automatic question generation for subquery retrieval to better handle complex queries
    • Expanded LLM capabilities with support for both chat-based and completion-based interactions
  • Refactor

    • Refactored internal LLM adapter layer to support comprehensive async and sync operations

Walkthrough

The changes introduce a compatibility layer to adapt the SDK1 LLM interface for use with llama-index retriever components. This includes new emulated llama-index types (MessageRole, ChatMessage, ChatResponse, CompletionResponse, LLMMetadata) and a wrapper class LLMCompat in SDK1, along with a RetrieverLLM adapter in the prompt service that bridges the two interfaces.

Changes

  • SDK1 Compatibility Types — unstract/sdk1/src/unstract/sdk1/llm.py: Introduces emulated llama-index types (MessageRole, ChatMessage, ChatResponse, CompletionResponse, LLMMetadata) and the LLMCompat wrapper class providing sync/async chat and completion interfaces that delegate to the underlying LLM instance.
  • Retriever LLM Adapter — prompt-service/src/unstract/prompt_service/core/retrievers/retriever_llm.py: New file containing the RetrieverLLM class that adapts an SDK1 LLMCompat instance to conform to llama-index's LLM interface with chat, complete, and async methods; streaming methods raise NotImplementedError.
  • Base Retriever Integration — prompt-service/src/unstract/prompt_service/core/retrievers/base_retriever.py: Adds the RetrieverLLM import and introduces a static _get_llm() method to convert SDK1 LLM instances to RetrieverLLM for llama-index compatibility.
  • Retriever Implementation Updates — prompt-service/src/unstract/prompt_service/core/retrievers/fusion.py, keyword_table.py, subquestion.py: Minor updates including comment removal and the addition of the LLMQuestionGenerator import and its integration in SubQuestionQueryEngine initialization.

Sequence Diagram

sequenceDiagram
    participant Client
    participant Retriever as BaseRetriever
    participant Converter as _get_llm()
    participant LLMCompat
    participant RetrieverLLM
    participant LlamaIndex as llama-index<br/>Components

    Client->>Retriever: Initialize with LLM
    Retriever->>Converter: _get_llm(llm)
    Converter->>LLMCompat: Wrap LLM instance
    LLMCompat-->>Converter: Return LLMCompat
    Converter->>RetrieverLLM: Initialize with LLMCompat
    RetrieverLLM-->>Converter: Return RetrieverLLM instance
    Converter-->>Retriever: Return RetrieverLLM
    Retriever->>LlamaIndex: Pass RetrieverLLM to retriever ops
    LlamaIndex->>RetrieverLLM: Call chat/complete methods
    RetrieverLLM->>LLMCompat: Delegate to internal LLMCompat
    LLMCompat->>LLMCompat: Route to chat/complete/predict
    LLMCompat-->>RetrieverLLM: Return ChatResponse/CompletionResponse
    RetrieverLLM-->>LlamaIndex: Return response

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: docstring coverage is 72.50%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Title check — ✅ Passed: the title clearly describes the main change (adding the LLMCompat bridge class to fix retriever LLM compatibility with llama-index) and is concise, specific, and directly related to the PR's primary objective.
  • Description check — ✅ Passed: the description is comprehensive and well-structured, covering all required template sections, including What, Why, How, backward compatibility, testing notes, and related issues.


pre-commit-ci bot and others added 3 commits February 12, 2026 19:55
…endency (#1793)

* [FEAT] Rewrite LLMCompat to emulate llama-index interface without dependency

- Add emulated llama-index types (MessageRole, ChatMessage, ChatResponse,
  CompletionResponse, LLMMetadata) as dataclasses
- Rewrite LLMCompat to match llama-index LLM interface without inheritance
- Implement chat(), complete(), achat(), acomplete() for retriever compatibility
- Follow EmbeddingCompat initialization pattern (takes adapter params directly)
- Remove llama-index imports from llm.py
- Raise NotImplementedError for streaming methods not needed by retrievers

This allows SubQuestionQueryEngine, QueryFusionRetriever, and other
llama-index components to use LLMCompat without requiring llama-index
as a dependency.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Added RetrieverLLM class to handle issues with retrievers

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: pk-zipstack <praveen@zipstack.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
@pk-zipstack pk-zipstack marked this pull request as ready for review February 19, 2026 04:28

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (6)
unstract/sdk1/src/unstract/sdk1/llm.py (4)

36-43: Prefer enum.StrEnum over (str, Enum) dual inheritance.

Python 3.11+ provides StrEnum which is the idiomatic replacement. Since the codebase already uses | union types (Python 3.10+), StrEnum should be available.

♻️ Suggested change
-from enum import Enum
+from enum import StrEnum

-class MessageRole(str, Enum):
+class MessageRole(StrEnum):
     """Emulates llama_index.core.base.llms.types.MessageRole."""
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@unstract/sdk1/src/unstract/sdk1/llm.py` around lines 36 - 43, Replace the
MessageRole class to inherit from enum.StrEnum instead of (str, Enum): import
StrEnum from enum and change the class definition for MessageRole to subclass
StrEnum; keep the same member names and values (SYSTEM, USER, ASSISTANT,
FUNCTION, TOOL) so behavior and comparisons remain identical.

659-676: predict and apredict assume a PromptTemplate-like interface on prompt.

These methods call prompt.format_messages(llm=self, ...) and prompt.format(llm=self, ...), expecting a llama-index PromptTemplate object. The type is annotated as Any, which hides this contract. Since llama-index's LLM.predict receives a BasePromptTemplate, this should work — but if ever called with a plain string, it'll raise AttributeError.

This is acceptable for the bridge use case, but documenting the expected type (even as a comment) would help maintainability.
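
A possible shape for that guard (a hedged sketch; the existing dispatch body is elided):

```python
def predict(self, prompt, **prompt_args) -> str:
    # `prompt` is expected to be a llama-index BasePromptTemplate-like
    # object; fail early with a clear message instead of an AttributeError.
    if not (hasattr(prompt, "format") or hasattr(prompt, "format_messages")):
        raise TypeError(
            "predict() expects a PromptTemplate-like object with "
            f"format()/format_messages(); got {type(prompt).__name__}"
        )
    ...  # existing format/format_messages dispatch goes here
```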

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@unstract/sdk1/src/unstract/sdk1/llm.py` around lines 659 - 676, predict and
apredict assume a PromptTemplate-like object (they call
prompt.format_messages(...) and prompt.format(...)) but are annotated as Any;
update the contract by changing the type annotation from Any to the appropriate
llama-index type (e.g. BasePromptTemplate or PromptTemplate) or at minimum add a
clear inline comment/docstring stating the expected type, and add a simple
runtime check in predict and apredict that raises a descriptive TypeError if the
prompt lacks format/format_messages (e.g. if not hasattr(prompt, "format") and
not hasattr(prompt, "format_messages")), referencing the predict/apredict
methods and self.metadata.is_chat_model, so callers get a clear error instead of
an AttributeError.

574-628: LLMCompat creates a second LLM instance — consider accepting an existing one.

RetrieverLLM.__init__ (in retriever_llm.py, Line 33-43) reads private attributes (llm._adapter_id, llm._adapter_metadata, etc.) from the SDK1 LLM to construct LLMCompat, which then creates another LLM internally (Line 613). This means every retriever bridge creates a duplicate LLM with duplicate adapter validation, platform config fetching, and callback setup.

Consider allowing LLMCompat to accept an existing LLM instance directly rather than always constructing a new one, to avoid this duplication and the reliance on private attributes.
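
A sketch of the suggested constructor change (simplified and hypothetical; the real __init__ takes the full adapter parameter list, and LLM is the SDK1 class defined in the same module):

```python
class LLMCompat:
    def __init__(self, llm_instance=None, **adapter_params):
        if llm_instance is not None:
            # Reuse the caller's SDK1 LLM; skips duplicate adapter
            # validation, platform config fetching, and callback setup.
            self._llm_instance = llm_instance
        else:
            self._llm_instance = LLM(**adapter_params)  # existing behavior
        self.model_name = self._llm_instance.get_model_name()
        self.callback_manager = getattr(self._llm_instance, "callback_manager", None)
```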

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@unstract/sdk1/src/unstract/sdk1/llm.py` around lines 574 - 628, LLMCompat
currently always constructs a new LLM (in LLMCompat.__init__) causing duplicate
LLMs; change the constructor to accept an optional existing LLM instance
parameter (e.g., llm_instance: LLM | None = None) and if provided set
self._llm_instance = llm_instance and skip creating a new LLM, while still
setting self._tool, self._adapter_instance_id, self.model_name =
self._llm_instance.get_model_name(), and self.callback_manager =
self._llm_instance.callback_manager (or None) to preserve behavior; retain the
original parameter list and only construct a new LLM when llm_instance is None
so RetrieverLLM can pass the SDK1 LLM directly and avoid duplicated adapter
validation and config fetching.

804-815: Add comment documenting the duck-typing contract for ChatMessage compatibility.

_to_litellm_messages relies on duck typing to accept both the emulated ChatMessage (defined in this file) and real llama_index.core.base.llms.types.ChatMessage objects, since achat() and acomplete() may receive either. Both provide .role and .content attributes, with the getattr(m.role, "value", ...) pattern handling both enum and string variants.

While the .content attribute is maintained in llama-index as a backward-compatibility property (returning str | None from the underlying block-based structure), a brief comment here clarifying the intentional duck-typing contract would improve maintainability and signal to future maintainers why both types are accepted.
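
Roughly, the contract being described (a sketch, not the file's exact code):

```python
def _to_litellm_messages(messages) -> list[dict]:
    """Convert ChatMessage-like objects into litellm message dicts.

    Intentionally duck-typed: accepts both the emulated ChatMessage defined
    in this module and llama_index.core.base.llms.types.ChatMessage, since
    achat()/acomplete() may receive either. Both expose .role and .content;
    roles may be enums (use .value) or plain strings, content may be None.
    """
    return [
        {"role": getattr(m.role, "value", str(m.role)), "content": m.content or ""}
        for m in messages
    ]
```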

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@unstract/sdk1/src/unstract/sdk1/llm.py` around lines 804 - 815, Add a brief
doc-comment above the _to_litellm_messages function explaining the duck-typing
contract: that the function accepts both the emulated ChatMessage defined in
this module and llama_index.core.base.llms.types.ChatMessage objects because
both expose .role and .content, and that getattr(m.role, "value", str(m.role))
handles enum or string roles while m.content may be str | None; mention this is
intentional to support inputs from achat() and acomplete().
prompt-service/src/unstract/prompt_service/core/retrievers/retriever_llm.py (1)

15-15: Remove unused noqa directives flagged by Ruff.

Static analysis (Ruff RUF100) reports 10 unused noqa directives across this file (N811, ANN401). These should be removed to keep the codebase clean.

Also applies to: 33-33, 57-57, 72-72, 80-80, 88-88, 97-97, 112-112, 120-120, 128-128

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prompt-service/src/unstract/prompt_service/core/retrievers/retriever_llm.py`
at line 15, Remove the redundant noqa directives reported by Ruff (RUF100) in
retriever_llm.py: delete the unused "# noqa: N811" on the import "from
llama_index.core.llms.llm import LLM as LlamaIndexBaseLLM" and likewise remove
the other unused "# noqa" tags on the affected lines (the import/annotation
lines referencing LlamaIndexBaseLLM and any annotated functions/classes flagged
with ANN401). Ensure you only remove the "# noqa" comments (not the imports or
annotations themselves), run Ruff/pytest to verify no new linter errors, and
keep the symbols like LlamaIndexBaseLLM unchanged so references in the file
remain valid.
llm-predict-error-investigation.md (1)

1-221: Debug artifact should not be committed to main.

This file reads as a transient investigation note, not stable documentation. Committing it to main introduces noise with no durable value: it already contains stale content (see below) and will continue diverging from the codebase over time.

If a record of the design decision is needed, consider:

  • Converting it to an Architecture Decision Record (ADR) in a docs/adr/ folder, keeping only the decision, rationale, and final chosen approach.
  • Moving the content into the PR description (where it already lives as context) and dropping the file entirely.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@llm-predict-error-investigation.md` around lines 1 - 221, This investigation
file is a debug artifact and should not be committed to main; remove
llm-predict-error-investigation.md from the branch (or revert the commit) and
either convert its essential outcome into a concise ADR under docs/adr/
(summarizing the chosen approach such as the SDK1LLMBridge, use of LiteLLM, or
SimpleKeywordTableIndex and the BaseRetriever/KeywordTableIndex impact) or move
the full investigative content into the PR description/temporary notes, keeping
only the final decision and rationale in the repo.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@llm-predict-error-investigation.md`:
- Around line 150-184: Update the document's Option 1 example to match the
actual two-layer implementation: show LLMCompat (from
unstract/sdk1/src/unstract/sdk1/llm.py) that emulates llama-index types and
RetrieverLLM (from prompt-service/.../core/retrievers/retriever_llm.py) which
inherits from llama_index.core.llms.llm.LLM and delegates to LLMCompat; replace
the single-class SDK1LLMBridge/CustomLLM example with a short description and
class names matching LLMCompat -> RetrieverLLM delegation and note that
RetrieverLLM inherits from llama_index.core.llms.llm.LLM (not CustomLLM).
- Line 5: Add explicit language specifiers to the two fenced code blocks to
satisfy MD040: change the traceback block that starts with "ERROR:
unstract.prompt_service.core.retrievers.keyword_table:" to use ```text and
change the block containing "BaseLLM -> LLM (has predict()) ->
CustomLLM/FunctionCallingLLM -> Provider" to use ```python so markdownlint
recognizes their languages.

In `@prompt-service/src/unstract/prompt_service/core/retrievers/retriever_llm.py`:
- Around line 33-43: Declare _compat as a Pydantic PrivateAttr at the class
level for v2 compatibility: import PrivateAttr from pydantic and add a class
attribute like "_compat: Any = PrivateAttr()" (or similar typed PrivateAttr) on
the retriever class, then keep the existing assignment to self._compat in
__init__ where LLMCompat(...) is created (referencing __init__, LLMCompat, and
the _compat attribute).
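
A minimal sketch of that declaration (assuming Pydantic v2, which llama-index core models are built on; all method implementations are elided):

```python
from typing import Any

from llama_index.core.llms.llm import LLM as LlamaIndexBaseLLM
from pydantic import PrivateAttr


class RetrieverLLM(LlamaIndexBaseLLM):
    # Pydantic v2 requires private attributes to be declared at class level;
    # a bare `self._compat = ...` in __init__ on an undeclared name can raise.
    _compat: Any = PrivateAttr(default=None)
```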

In `@unstract/sdk1/src/unstract/sdk1/llm.py`:
- Around line 698-728: LLMCompat.chat() and LLMCompat.complete() call
litellm.completion() directly, skipping the SDK’s usage recording and structured
error wrapping used elsewhere; update these methods to (1) call into the
internal LLM instance’s usage recorder (e.g., invoke
self._llm_instance._record_usage(...) or the same _record_usage flow used by
LLM.complete) after getting the response so usage/audit data is emitted, (2)
wrap litellm.completion() in a try/except and re-raise errors as LLMError to
match callers' expectations, and (3) if metrics capture is required, apply the
same `@capture_metrics` behavior or add a TODO comment documenting intentional
omission; locate changes in the LLMCompat.chat and LLMCompat.complete methods
around the litellm.completion(...) calls and the response handling to implement
these fixes.
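
Sketched from the suggestions above (hedged: LLMError, _record_usage, and _to_chat_response follow the names used in the comment and may not match the SDK's actual hooks):

```python
import litellm


def chat(self, messages, **kwargs):
    try:
        response = litellm.completion(
            model=self.model_name,
            messages=_to_litellm_messages(messages),
            **kwargs,
        )
    except Exception as exc:
        # Re-raise as the SDK's structured error type, matching LLM.complete.
        raise LLMError(str(exc)) from exc
    # Emit usage/audit data through the same hook LLM.complete uses.
    self._llm_instance._record_usage(response)
    return _to_chat_response(response)
```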


@pk-zipstack pk-zipstack changed the title [FIX] Add LLMCompat bridge class to fix retriever LLM compatibility with llama-index UN-3215 [FIX] Add LLMCompat bridge class to fix retriever LLM compatibility with llama-index Mar 3, 2026

@harini-venkataraman harini-venkataraman left a comment


Added some comments. Please address. Otherwise LGTM

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@hari-kuriakose hari-kuriakose left a comment


@pk-zipstack LGTM overall.

Let's address the minor comments though, from both humans and AI.

…e_retriever.py

Co-authored-by: Hari John Kuriakose <hari@zipstack.com>
Signed-off-by: Praveen Kumar <praveen@zipstack.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@prompt-service/src/unstract/prompt_service/core/retrievers/base_retriever.py`:
- Around line 28-41: The constructor is using an undefined _get_llm and never
initializes the cache causing NameError/AttributeError and potential
double-wrapping of LLMs; fix by (1) replacing the call to _get_llm with the
correct factory (e.g., get_llm) when assigning self.llm (or import/define
_get_llm if that was intended), (2) initializing self._llama_index_llm = None in
__init__, and (3) adjust the llama_index_llm property to avoid double-bridging
by checking types: if self._llama_index_llm is None then if isinstance(self.llm,
RetrieverLLM) set self._llama_index_llm = get_llama_index_llm(self.llm) (or use
the existing RetrieverLLM directly) otherwise convert once with
get_llama_index_llm(self.llm), then return self._llama_index_llm.
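
A sketch of what that fix could look like inside BaseRetriever (names follow the comment and are otherwise hypothetical; RetrieverLLM and _get_llm are assumed available in the module):

```python
class BaseRetriever:
    def __init__(self, llm, **kwargs):
        self.llm = self._get_llm(llm)   # wrap the SDK1 LLM exactly once
        self._llama_index_llm = None    # cache backing the property below

    @property
    def llama_index_llm(self):
        if self._llama_index_llm is None:
            # Avoid double-bridging if self.llm is already a RetrieverLLM.
            self._llama_index_llm = (
                self.llm
                if isinstance(self.llm, RetrieverLLM)
                else self._get_llm(self.llm)
            )
        return self._llama_index_llm
```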

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to Reviews > Disable Cache setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 0be9fb1 and 4a6c392.

📒 Files selected for processing (1)
  • prompt-service/src/unstract/prompt_service/core/retrievers/base_retriever.py

pk-zipstack and others added 2 commits March 3, 2026 15:28
- Remove llm_helper.py — move conversion to _get_llm staticmethod
  in BaseRetriever
- Convert SDK1 LLM to RetrieverLLM eagerly in constructor
- Remove llama_index_llm lazy property
- Revert all retrievers to use self.llm directly

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

sonarqubecloud bot commented Mar 3, 2026


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
prompt-service/src/unstract/prompt_service/core/retrievers/subquestion.py (1)

43-50: Fail fast when self.llm is missing before building subquestion components.

Add an explicit guard before constructing question_gen and query engine so error handling is deterministic and immediate.

Suggested patch
             query_bundle = QueryBundle(query_str=self.prompt)

+            if self.llm is None:
+                raise RetrievalError(
+                    "SubquestionRetriever requires an initialized LLM instance."
+                )
+
             question_gen = LLMQuestionGenerator.from_defaults(
                 llm=self.llm,
             )
             query_engine = SubQuestionQueryEngine.from_defaults(
                 query_engine_tools=query_engine_tools,
                 question_gen=question_gen,
                 use_async=True,
                 llm=self.llm,
             )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prompt-service/src/unstract/prompt_service/core/retrievers/subquestion.py`
around lines 43 - 50, Before calling LLMQuestionGenerator.from_defaults and
SubQuestionQueryEngine.from_defaults, add an explicit guard that checks self.llm
and raises a clear exception (e.g., ValueError) if it is None or falsy so
construction fails fast; update the code path where question_gen and
query_engine are created (LLMQuestionGenerator.from_defaults,
SubQuestionQueryEngine.from_defaults) to perform this check first and
return/raise immediately with a descriptive message referencing self.llm.

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to Reviews > Disable Cache setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 4a6c392 and a1040c1.

📒 Files selected for processing (4)
  • prompt-service/src/unstract/prompt_service/core/retrievers/base_retriever.py
  • prompt-service/src/unstract/prompt_service/core/retrievers/fusion.py
  • prompt-service/src/unstract/prompt_service/core/retrievers/keyword_table.py
  • prompt-service/src/unstract/prompt_service/core/retrievers/subquestion.py
✅ Files skipped from review due to trivial changes (2)
  • prompt-service/src/unstract/prompt_service/core/retrievers/keyword_table.py
  • prompt-service/src/unstract/prompt_service/core/retrievers/fusion.py


github-actions bot commented Mar 3, 2026

Test Results

Summary
  • Runner Tests: 11 passed, 0 failed (11 total)
  • SDK1 Tests: 63 passed, 0 failed (63 total)

Runner Tests - Full Report
| filepath | function | passed | SUBTOTAL |
| --- | --- | --- | --- |
| runner/src/unstract/runner/clients/test_docker.py | test_logs | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_cleanup | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_cleanup_skip | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_client_init | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_get_image_exists | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_get_image | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_get_container_run_config | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_get_container_run_config_without_mount | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_run_container | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_get_image_for_sidecar | 1 | 1 |
| runner/src/unstract/runner/clients/test_docker.py | test_sidecar_container | 1 | 1 |
| TOTAL | | 11 | 11 |
SDK1 Tests - Full Report
| filepath | function | passed | SUBTOTAL |
| --- | --- | --- | --- |
| tests/test_platform.py | TestPlatformHelperRetry.test_success_on_first_attempt | 2 | 2 |
| tests/test_platform.py | TestPlatformHelperRetry.test_retry_on_connection_error | 2 | 2 |
| tests/test_platform.py | TestPlatformHelperRetry.test_non_retryable_http_error | 1 | 1 |
| tests/test_platform.py | TestPlatformHelperRetry.test_retryable_http_errors | 3 | 3 |
| tests/test_platform.py | TestPlatformHelperRetry.test_post_method_retry | 1 | 1 |
| tests/test_platform.py | TestPlatformHelperRetry.test_retry_logging | 1 | 1 |
| tests/test_prompt.py | TestPromptToolRetry.test_success_on_first_attempt | 1 | 1 |
| tests/test_prompt.py | TestPromptToolRetry.test_retry_on_errors | 2 | 2 |
| tests/test_prompt.py | TestPromptToolRetry.test_wrapper_methods_retry | 4 | 4 |
| tests/utils/test_retry_utils.py | TestIsRetryableError.test_connection_error_is_retryable | 1 | 1 |
| tests/utils/test_retry_utils.py | TestIsRetryableError.test_timeout_is_retryable | 1 | 1 |
| tests/utils/test_retry_utils.py | TestIsRetryableError.test_http_error_retryable_status_codes | 3 | 3 |
| tests/utils/test_retry_utils.py | TestIsRetryableError.test_http_error_non_retryable_status_codes | 5 | 5 |
| tests/utils/test_retry_utils.py | TestIsRetryableError.test_http_error_without_response | 1 | 1 |
| tests/utils/test_retry_utils.py | TestIsRetryableError.test_os_error_retryable_errno | 5 | 5 |
| tests/utils/test_retry_utils.py | TestIsRetryableError.test_os_error_non_retryable_errno | 1 | 1 |
| tests/utils/test_retry_utils.py | TestIsRetryableError.test_other_exception_not_retryable | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCalculateDelay.test_exponential_backoff_without_jitter | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCalculateDelay.test_exponential_backoff_with_jitter | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCalculateDelay.test_max_delay_cap | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCalculateDelay.test_max_delay_cap_with_jitter | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryWithExponentialBackoff.test_successful_call_first_attempt | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryWithExponentialBackoff.test_retry_after_transient_failure | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryWithExponentialBackoff.test_max_retries_exceeded | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryWithExponentialBackoff.test_retry_with_custom_predicate | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryWithExponentialBackoff.test_no_retry_with_predicate_false | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryWithExponentialBackoff.test_exception_not_in_tuple_not_retried | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_default_configuration | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_environment_variable_configuration | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_invalid_max_retries | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_invalid_base_delay | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_invalid_multiplier | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_jitter_values | 2 | 2 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_custom_exceptions_only | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_custom_predicate_only | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_both_exceptions_and_predicate | 1 | 1 |
| tests/utils/test_retry_utils.py | TestCreateRetryDecorator.test_exceptions_match_but_predicate_false | 1 | 1 |
| tests/utils/test_retry_utils.py | TestPreconfiguredDecorators.test_retry_platform_service_call_exists | 1 | 1 |
| tests/utils/test_retry_utils.py | TestPreconfiguredDecorators.test_retry_prompt_service_call_exists | 1 | 1 |
| tests/utils/test_retry_utils.py | TestPreconfiguredDecorators.test_platform_service_decorator_retries_on_connection_error | 1 | 1 |
| tests/utils/test_retry_utils.py | TestPreconfiguredDecorators.test_prompt_service_decorator_retries_on_timeout | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryLogging.test_warning_logged_on_retry | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryLogging.test_info_logged_on_success_after_retry | 1 | 1 |
| tests/utils/test_retry_utils.py | TestRetryLogging.test_exception_logged_on_giving_up | 1 | 1 |
| TOTAL | | 63 | 63 |


sonarqubecloud bot commented Mar 3, 2026

