Skip to content

Security: HexmosTech/LiveReview

SECURITY.md

LiveReview Security

This document answers security and procurement questions for LiveReview in concrete terms.

Quick Answers

  • Vulnerability reports: use GitHub private security reporting first; use shrijith@hexmos.com if GitHub reporting is not possible.
  • Response time: acknowledgement within 2 business days; for internally confirmed findings, triage and remediation work starts within 7 calendar days.
  • Self-hosted mode: your team runs LiveReview and keeps data in your own infrastructure.
  • External model mode: if you configure OpenAI, Anthropic, Gemini, or other external APIs, review content is sent to that provider for inference.
  • Local model mode: if you configure Ollama on your own host, inference traffic stays on your infrastructure.
  • Security scans: gitleaks, OSV scanner, govulncheck, and Semgrep run in GitHub Actions.
  • SBOM: generated by workflow and published in GitHub release assets.

Security Contact And Response Times

Primary private reporting channel: GitHub Security Advisories for this repository.

Fallback private channel (if GitHub reporting is unavailable): shrijith@hexmos.com.

We treat security issues as high-priority work. This address is the founder's direct inbox so reports receive immediate attention.

Disclosure process:

  1. We acknowledge receipt within 2 business days.
  2. We begin triage and remediation planning within 7 calendar days for findings confirmed by our internal security review.
  3. We coordinate disclosure timing with the reporter for high-impact issues.

Please include reproduction steps, affected version, deployment mode, and impact.

Deployment Models

Self-Hosted Deployment

In self-hosted deployment, your team runs the application stack and database. Typical local development ports are:

  • API: 8888
  • Frontend: 8081

In this mode, data storage location, backup policy, retention policy, and network egress policy are controlled by your infrastructure team.

Cloud/Provider-Integrated Operation

LiveReview supports external AI providers and VCS provider APIs. When these integrations are configured, LiveReview sends request payloads to those external endpoints to perform review and integration actions.

Examples include:

  • AI inference calls to configured provider API endpoints.
  • API calls to GitHub, GitLab, Gitea, and Bitbucket endpoints for review automation and comment workflows.

Data Sent And Data Stored

What LiveReview Sends Over Network

Event Data Sent Destination When It Happens
AI review request Review prompt and review context payload Configured AI provider endpoint (or local Ollama endpoint) When AI review is triggered
Git provider operations Provider API request payloads, auth context GitHub/GitLab/Gitea/Bitbucket API During provider integration and review actions
Webhook processing and callbacks Webhook payload handling and follow-up API requests Configured provider endpoints When webhook events are received

What LiveReview Stores Locally/In DB

Data Type Storage Why
Auth/session and integration token records Database tables and provider token stores User sessions and provider integration
Review, organization, and user records Database tables Product functionality and auditability
Connector and configuration metadata Database tables/configuration Connector setup and runtime behavior

Output Sanitization Before Returning Results

LiveReview sanitizes model output before returning it to users and before posting comment bodies to provider markdown renderers. This includes redaction for high-confidence sensitive patterns and markdown/link safety handling.

Reference: docs/security/llm_output_sanitization.md

AI Risks And Mitigations

AI Input Guardrails (Before Model Call)

Risk Automatic Handling Where Implemented
Prompt-injection text in comments/diffs Risk scoring, injection pattern detection, and neutralization run before provider call internal/aisanitize/sanitizer.go, internal/prompts/code_changes.go
Role/control token smuggling Known control tokens are replaced with blocked markers internal/aisanitize/sanitizer.go
Instruction override phrases Phrases like ignore previous instructions are neutralized internal/aisanitize/sanitizer.go
Hidden character obfuscation Zero-width control characters are stripped internal/aisanitize/sanitizer.go
Secret leakage in prompt content Secret patterns are redacted before request egress internal/aisanitize/sanitizer.go
PII leakage in natural-language fragments De-identification is applied to comment-like/natural-language text internal/aisanitize/sanitizer.go, internal/prompts/code_changes.go

Cloud provider preflight is wired in the connector path before outbound provider calls.

Evidence: internal/aiconnectors/connector.go

Prompt risk thresholds are configurable with environment variables used by the sanitizer layer:

  • LIVEREVIEW_SANITIZER_MEDIUM_THRESHOLD
  • LIVEREVIEW_SANITIZER_HIGH_THRESHOLD

AI Output Guardrails (After Model Response)

Risk Automatic Handling Where Implemented
Secret values in generated output Post-output secret redaction runs before returning response internal/aisanitize/sanitizer.go, internal/api/unified_processor_v2.go
PII values in generated output Post-output de-identification runs before user-visible output internal/aisanitize/sanitizer.go
Unsafe HTML in model output Raw HTML tags are escaped internal/aisanitize/markdown.go
Unsafe markdown link schemes Unsafe destinations are neutralized; safe destinations are preserved internal/aisanitize/markdown.go
Unsanitized provider comment posting Provider formatters sanitize content before outbound API submission internal/providers/github/github.go, internal/providers/gitlab/gitlab.go, internal/providers/gitea/gitea_provider.go

For structured JSON responses, sanitization is applied after parse/repair and before fields are returned.

Evidence: internal/ai/langchain/json_repair_integration.go

Current behavior is sanitize-and-continue. The system redacts/neutralizes and continues review flow instead of hard-failing the request.

Logging Redaction And Safety

  • Logs keep guardrail metadata such as risk band, counts, and flags.
  • Logs do not intentionally include raw secrets, raw tokens, or full prompt bodies.
  • Sanitization wrappers include panic-safe handling so sanitizer failures do not crash review flow.

Evidence: internal/aisanitize/sanitizer.go, internal/api/unified_processor_v2.go, internal/ai/langchain/json_repair_integration.go

Insecure Suggestions

Risk: model output can include insecure or incorrect recommendations.

Current handling:

  • Generated suggestions are advisory and require human review.
  • Teams can enforce branch protection, CI checks, and reviewer approval before merge.

Model Provenance

Risk: model behavior differs by provider and deployment.

Current handling:

  • Self-hosted local model option (for example Ollama) keeps model serving in customer-controlled infrastructure.
  • External provider mode is explicit and operator-configured.

AI Guardrail Verification

Targeted tests and docs:

Automated Security Checks

Workflow Badge What It Checks Trigger What It Guarantees What It Does Not Guarantee
gitleaks gitleaks Secret pattern scanning in repository history/content Pull request, push, manual Detects many leaked credential patterns early Cannot guarantee zero secret exposure or catch every custom secret format
osv-scanner osv-scanner Dependency vulnerability scan using OSV database Pull request, push, manual Detects known vulnerable dependencies in scan scope Cannot detect unknown (0-day) vulnerabilities
govulncheck govulncheck Go package vulnerability analysis Pull request, push, manual Detects known Go vulnerability matches Cannot guarantee all runtime exploit paths are covered
Semgrep Semgrep Static analysis for security patterns Pull request, push, scheduled, manual Detects many common code-level security anti-patterns Cannot prove absence of logic flaws or business-logic abuse
SBOM sbom Software bill of materials generation (Syft) Release publish, push (dependency-relevant files), manual Produces auditable component inventory for releases Does not by itself prove component safety

SBOM And Dependency Transparency

On release publication, SBOM JSON artifacts are generated and uploaded to the release assets so buyers can audit dependency inventory for shipped versions.

Security Refactor Evidence (Storage And Network Split)

LiveReview completed a large code organization refactor that separates local persistence and outbound network behavior into dedicated modules.

Why this matters:

  1. Database and file operations are cataloged in one place for storage audit.
  2. HTTP call construction and transport operations are cataloged in one place for network audit.
  3. Security review can verify changes by reviewing status docs when operations move or new operations are added.

Known Limits

  • Automated scanners reduce risk but do not guarantee absence of vulnerabilities.
  • If external AI providers are configured, review data is sent to those providers during inference.
  • Data retention and deletion windows in self-hosted deployment are set by the deployment operator unless otherwise configured.

Supported Versions

Security fixes are prioritized on currently supported, actively maintained releases. Upgrade to the latest release to receive the most recent security improvements.

Where To Verify

There aren’t any published security advisories