Implement LLM-Generated Next-Action Chips #1319

@TheMostlyGreat

Description

Summary

Transform hardcoded next-action chips into LLM-generated, contextual suggestions that work for ANY Arcade tool (not just Gmail_ListEmails).

Approach

Selected: Option 1 - LLM-generated suggestions (server-side with env toggle)

Key Features:

  • Server-side generation using Claude Sonnet 4
  • Semantic understanding of tool outputs
  • Works for all Arcade tools (no hardcoding)
  • Environment variable toggle: NEXT_ACTION_CHIPS_ENABLED=true
  • Chips appear AFTER AI summary (intentional UX design - users need context before action)
  • Cost-effective: ~$0.06-0.08/day with 90% prompt caching (see Cost Analysis below)

Critical Verification ✅

VERIFIED: writer.write() can be called from the onFinish callback

  • Architecture analysis confirms the writer remains in scope inside the onFinish closure
  • Pattern proven by existing tools (createDocument, etc.)
  • Implementation approach is sound
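
The scoping claim is an ordinary JavaScript closure property; a toy sketch (all names invented for illustration, not the app's real API) shows why the pattern holds:

```ts
// Toy illustration only: `createStream` and `writer` are invented names.
// The point is that a callback closes over `writer`.
function createStream(opts: { onFinish: () => void }) {
  // ...streaming would happen here, then the callback fires:
  opts.onFinish();
}

function buildResponse() {
  const writer = { write: (chunk: unknown) => console.log('wrote', chunk) };

  createStream({
    onFinish: () => {
      // `writer` is captured by this closure, so it is still in scope when
      // onFinish fires, even though it was declared in the enclosing setup.
      writer.write({ type: 'data-next-steps', data: {} });
    },
  });
}

buildResponse();
```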

Documentation

Full implementation plan: planning/next-action-chips-llm.md
Critique & verification: planning/PLAN-CRITIQUE.md
Design rationale: planning/ONFINISH-CALLBACK-EXPLANATION.md

Implementation Checklist

Task 1: Environment Configuration

  • Add NEXT_ACTION_CHIPS_ENABLED=true to .env
  • Remove client-side env var exposure (server-side only)
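
A minimal sketch of the server-side gate, assuming a small helper module (the file and export name are invented; only the flag name comes from the plan):

```ts
// lib/feature-flags.ts (hypothetical location) — read the toggle on the server.
// Without a NEXT_PUBLIC_ prefix, Next.js never inlines the variable into the
// client bundle, which keeps the flag server-side only.
export const nextActionChipsEnabled =
  process.env.NEXT_ACTION_CHIPS_ENABLED === 'true';
```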

Task 2: LLM Generation Module

  • Create lib/ai/generate-next-steps-llm.ts (see the sketch after this list)
    • generateNextStepsLLM function
    • NextStep type definition
    • Zod schema for validation
    • Prompt with caching enabled
    • Cost optimization (maxTokens: 200)
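
A minimal sketch of the module, assuming the AI SDK's generateObject with the Anthropic provider; the exact model id, NextStep fields, and prompt wording are assumptions:

```ts
// lib/ai/generate-next-steps-llm.ts — sketch. Field names on NextStep and the
// model id are assumptions; the prompt-caching setup (Anthropic provider
// options) is omitted for brevity.
import { generateObject } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

// The Zod schema doubles as runtime validation and the source of the type.
const nextStepSchema = z.object({
  label: z.string().max(40), // short chip text shown to the user
  prompt: z.string(),        // message sent when the chip is clicked
});

export type NextStep = z.infer<typeof nextStepSchema>;

export async function generateNextStepsLLM(
  toolName: string,
  toolResult: unknown,
): Promise<NextStep[]> {
  const { object } = await generateObject({
    model: anthropic('claude-sonnet-4-20250514'), // "Claude Sonnet 4"; verify the exact id
    schema: z.object({ steps: z.array(nextStepSchema).max(4) }),
    maxTokens: 200, // cost cap from the plan
    prompt:
      `Suggest follow-up actions for the output of the "${toolName}" tool:\n` +
      JSON.stringify(toolResult).slice(0, 4000),
  });

  return object.steps;
}
```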

Task 3: Server Integration

  • Modify app/(chat)/api/chat/route.ts (see the sketch after this list)
    • Import generation function
    • Add logic to onFinish callback
    • Extract tool invocations from response
    • Generate next-steps for each tool
    • Stream via writer.write()
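
A sketch of the wiring, with the surrounding streamText/stream setup elided; the response-message shape varies across AI SDK versions, so the tool-result extraction below is an assumption:

```ts
// Sketch for app/(chat)/api/chat/route.ts. `writer` comes from the elided
// stream setup; call this from onFinish, e.g.
//   onFinish: ({ response }) => streamNextSteps(writer, response.messages)
import { generateNextStepsLLM } from '@/lib/ai/generate-next-steps-llm';
import { generateUUID } from '@/lib/utils'; // assumed existing util

type ToolResultPart = {
  type: 'tool-result';
  toolCallId: string;
  toolName: string;
  result: unknown;
};

type StreamWriter = {
  write: (chunk: { type: string; id: string; data: unknown }) => void;
};

export async function streamNextSteps(
  writer: StreamWriter,
  messages: Array<{ content: unknown }>,
) {
  if (process.env.NEXT_ACTION_CHIPS_ENABLED !== 'true') return;

  // Pull every tool invocation out of the finished response.
  const toolResults = messages
    .flatMap((m) => (Array.isArray(m.content) ? m.content : []))
    .filter((part): part is ToolResultPart => part?.type === 'tool-result');

  // One generation per tool call, streamed back over the still-open writer.
  for (const result of toolResults) {
    const steps = await generateNextStepsLLM(result.toolName, result.result);
    writer.write({
      type: 'data-next-steps', // the client sees 'next-steps' after prefix stripping
      id: generateUUID(),
      data: { toolCallId: result.toolCallId, steps },
    });
  }
}
```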

Task 4: Client Components

  • Create hooks/use-next-steps.ts (see the sketch after this list)
    • Follow DataStreamHandler pattern
    • Use useChat({ id }) hook
    • Process dataStream with lastProcessedIndex
    • Return Map of toolCallId → NextStep[]
  • Modify components/message.tsx
    • Import and use useNextSteps hook
    • Pass nextSteps to NextStepChips
  • Modify components/next-step-chips.tsx
    • Accept nextSteps prop (remove hardcoded generation)
    • Purely presentational component
  • Modify components/data-stream-handler.tsx
    • Add next-steps to DataStreamDelta type
    • Add NextStepsData type definition
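
A sketch of the hook, mirroring the Technical Details section below; the import paths and the exact delta payload are assumptions:

```ts
// hooks/use-next-steps.ts — follows the DataStreamHandler pattern. The delta
// shape ({ type, content }) mirrors the Technical Details section; the casts
// are assumptions.
'use client';

import { useChat } from 'ai/react';
import { useEffect, useRef, useState } from 'react';
import type { NextStep } from '@/lib/ai/generate-next-steps-llm';

type NextStepsData = { toolCallId: string; steps: NextStep[] };

export function useNextSteps(chatId: string) {
  const { data: dataStream } = useChat({ id: chatId });
  const lastProcessedIndex = useRef(-1);
  const [nextSteps, setNextSteps] = useState<Map<string, NextStep[]>>(new Map());

  useEffect(() => {
    if (!dataStream?.length) return;

    // Only handle deltas that arrived since the previous effect run.
    const newDeltas = dataStream.slice(lastProcessedIndex.current + 1);
    lastProcessedIndex.current = dataStream.length - 1;

    for (const delta of newDeltas as Array<{ type: string; content: unknown }>) {
      if (delta.type !== 'next-steps') continue; // prefix already stripped by the SDK
      const { toolCallId, steps } = delta.content as NextStepsData;
      setNextSteps((prev) => new Map(prev).set(toolCallId, steps));
    }
  }, [dataStream]);

  // toolCallId → suggested steps, consumed by message.tsx / NextStepChips.
  return nextSteps;
}
```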

Task 5: Testing

  • Unit tests for generateNextStepsLLM (example below)
  • Integration tests for onFinish → writer flow
  • E2E tests for full user flow
  • Test with multiple Arcade tools
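
A unit-test sketch for the first checklist item, assuming Vitest and mocking the AI SDK so no real API call is made:

```ts
// tests/generate-next-steps-llm.test.ts — Vitest assumed; adjust to the
// repo's actual test runner. Mock values are illustrative.
import { describe, expect, it, vi } from 'vitest';

// Stub the SDK so the test never hits the Anthropic API.
vi.mock('ai', () => ({
  generateObject: vi.fn().mockResolvedValue({
    object: {
      steps: [{ label: 'Archive email', prompt: 'Archive the first email' }],
    },
  }),
}));
vi.mock('@ai-sdk/anthropic', () => ({ anthropic: vi.fn(() => ({})) }));

import { generateNextStepsLLM } from '@/lib/ai/generate-next-steps-llm';

describe('generateNextStepsLLM', () => {
  it('returns validated steps for a tool result', async () => {
    const steps = await generateNextStepsLLM('Gmail_ListEmails', { emails: [] });
    expect(steps).toHaveLength(1);
    expect(steps[0].label).toBe('Archive email');
  });
});
```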

Task 6: Documentation

  • Update ARCHITECTURE.md with design decisions
  • Document cost implications
  • Add troubleshooting guide

Cost Analysis

  • Per generation: 550-800 tokens ($0.012-0.016)
  • With 90% prompt caching: ~$0.0012-0.0016 per generation
  • Daily estimate: 50 tool calls × ~$0.0012-0.0016 ≈ $0.06-0.08

Minor Items

  • Clarify the useChat() usage pattern (multiple independent calls vs. a shared instance)
  • Consider discriminated unions for better TypeScript inference (see the sketch below)
  • Verify the integration test passes against the real API
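
A sketch of the discriminated-union idea from the second item above: narrowing on type lets TypeScript infer the payload without casts (the delta shapes are assumptions):

```ts
import type { NextStep } from '@/lib/ai/generate-next-steps-llm';

type NextStepsDelta = {
  type: 'next-steps';
  content: { toolCallId: string; steps: NextStep[] };
};

type TextDelta = {
  type: 'text-delta';
  content: string;
};

// Extend with the app's other delta variants as needed.
type DataStreamDelta = NextStepsDelta | TextDelta;

function handleDelta(delta: DataStreamDelta) {
  if (delta.type === 'next-steps') {
    // Narrowed: `content` is known to carry toolCallId + steps here.
    console.log(delta.content.toolCallId);
  }
}
```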

Technical Details

Server API:

  • Uses writer.write({ type: 'data-next-steps', id: generateUUID(), data: {...} })
  • The AI SDK strips the data- prefix before the client receives it
  • Access data via delta.content, not delta.data

Client API:

  • Uses useChat({ id }) to access data stream
  • Process with lastProcessedIndex pattern
  • Check delta.type === 'next-steps' (no prefix)

Timing:

  • Tool executes → AI generates summary → onFinish fires → Chips stream to client
  • User sees: Tool result → AI explanation → Next-action suggestions
  • Timeline: ~3-4s total (feels natural)

Ready for implementation: core assumption verified, plan reviewed and corrected.
