Describe the bug
I love the Copilot CLI and autopilot and /fleet (see HERE or HERE). Thank you!
It looks like there's a bug where very long sessions that use 10–20 million tokens are reported as an estimated 0 premium requests.
This is what I got just now after a session with GPT Codex:
```
% copilot --yolo
Total usage est: 0 Premium requests
API time spent: 41m 22s
Total session time: 47m 28s
Total code changes: +8 -2
Breakdown by AI model:
gpt-5.3-codex      18.9m in, 92.4k out, 16.9m cached (Est. 0 Premium requests)
claude-haiku-4.5   321.7k in, 3.3k out, 290.1k cached (Est. 0 Premium requests)
```
And the same with Claude Opus:
```
% copilot --yolo
Total usage est: 0 Premium requests
API time spent: 29m 56s
Total session time: 37m 48s
Total code changes: +647 -45
Breakdown by AI model:
claude-opus-4.6    14.2m in, 67.2k out, 13.9m cached (Est. 0 Premium requests)
claude-haiku-4.5   2.0m in, 22.1k out, 1.8m cached (Est. 0 Premium requests)
```
Could you please fix the premium request estimate (ideally show an exact count rather than an estimate)?
It would also be nice to see a bit more session info, e.g. how many subagents, turns, or tool calls were used.
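To illustrate the counting-instead-of-estimating idea: a minimal sketch (not the Copilot CLI's actual code; `UsageTracker` and its fields are hypothetical names). An estimator that derives premium requests from aggregate token totals can plausibly round or default to 0, whereas a counter incremented per billable API call cannot report 0 for a session that made calls.

```python
from dataclasses import dataclass

@dataclass
class UsageTracker:
    # Hypothetical per-session usage accumulator.
    premium_requests: int = 0
    tokens_in: int = 0
    tokens_out: int = 0

    def record_request(self, tokens_in: int, tokens_out: int, premium: bool) -> None:
        # Count each billable request directly when it is made,
        # instead of estimating the count later from token totals.
        self.tokens_in += tokens_in
        self.tokens_out += tokens_out
        if premium:
            self.premium_requests += 1

tracker = UsageTracker()
for _ in range(12):  # e.g. 12 premium model calls in one long session
    tracker.record_request(tokens_in=1_500_000, tokens_out=8_000, premium=True)

print(tracker.premium_requests)  # 12 — exact, never a rounded-down 0
```

The point of the sketch is only that the count is exact by construction, independent of any per-token billing rate.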
Affected version
GitHub Copilot CLI 0.0.420.
Steps to reproduce the behavior
Enable --yolo, autopilot, and /fleet, then submit any prompt.
Expected behavior
Copilot shows the premium request count, or at least an estimate greater than 0.
Additional context
macOS, VS Code terminal.