diff --git a/docs/community/contributing/index.md b/docs/community/contributing/index.md
index 02ef8c1c3..9ca313e19 100644
--- a/docs/community/contributing/index.md
+++ b/docs/community/contributing/index.md
@@ -12,7 +12,7 @@ _docker-agent is open source. Here's how to set up your development environment
 ### Prerequisites
 
-- [Go 1.25](https://go.dev/dl/) or higher
+- [Go 1.26](https://go.dev/dl/) or higher
 - API key(s) for your chosen AI provider
 - [Task 3.44](https://taskfile.dev/installation/) or higher
 - [golangci-lint](https://golangci-lint.run/docs/welcome/install/#binaries)
diff --git a/docs/features/cli/index.md b/docs/features/cli/index.md
index 22ade3ae0..64d5e6db8 100644
--- a/docs/features/cli/index.md
+++ b/docs/features/cli/index.md
@@ -32,7 +32,6 @@ $ docker agent run [config] [message...] [flags]
 | `--model <ref>` | Override model(s). Use `provider/model` for all agents, or `agent=provider/model` for specific agents. Comma-separate multiple overrides. |
 | `--session <id>` | Resume a previous session. Supports relative refs (`-1` = last, `-2` = second to last) |
 | `--prompt-file <path>` | Include file contents as additional system context (repeatable) |
-| `-c <name>` | Run a named command from the YAML config |
 | `-d, --debug` | Enable debug logging |
 | `--log-file <path>` | Custom debug log location |
 | `-o, --otel` | Enable OpenTelemetry tracing |
@@ -45,7 +44,6 @@ $ docker agent run agent.yaml -a developer --yolo
 $ docker agent run agent.yaml --model anthropic/claude-sonnet-4-0
 $ docker agent run agent.yaml --model "dev=openai/gpt-4o,reviewer=anthropic/claude-sonnet-4-0"
 $ docker agent run agent.yaml --session -1                 # resume last session
-$ docker agent run agent.yaml -c df                        # run named command
 $ docker agent run agent.yaml --prompt-file ./context.md   # include file as context
 
 # Queue multiple messages (processed in sequence)
@@ -80,7 +78,7 @@ $ docker agent new [flags]
 
 # Examples
 $ docker agent new
-$ docker agent new --model openai/gpt-5-mini --max-tokens 32000
+$ docker agent new --model openai/gpt-5-mini
 $ docker agent new --model dmr/ai/gemma3-qat:12B --max-iterations 15
 ```
diff --git a/docs/getting-started/quickstart/index.md b/docs/getting-started/quickstart/index.md
index 9e741b1ab..70832ac9b 100644
--- a/docs/getting-started/quickstart/index.md
+++ b/docs/getting-started/quickstart/index.md
@@ -42,8 +42,8 @@ $ docker agent new
 # Or specify options directly
 $ docker agent new --model openai/gpt-4o
 
-# Override context size and iteration limits
-$ docker agent new --model dmr/ai/gemma3-qat:12B --max-tokens 32000 --max-iterations 15
+# Override iteration limits
+$ docker agent new --model dmr/ai/gemma3-qat:12B --max-iterations 15
 ```
 
 This generates an `agent.yaml` in the current directory. Then run it:
diff --git a/docs/guides/tips/index.md b/docs/guides/tips/index.md
index 8a77f3884..b5936f6cd 100644
--- a/docs/guides/tips/index.md
+++ b/docs/guides/tips/index.md
@@ -266,8 +266,8 @@ Understand the difference between `sub_agents` and `handoffs`:
 
 handoffs (A2A)
 
 Transfers control entirely to another agent (possibly remote). One-way handoff.
 
 handoffs:
-  - name: specialist
-    url: http://...
+  - specialist
+  - namespace/remote-agent
@@ -320,10 +320,10 @@ $ docker agent run agent.yaml --debug --log-file ./debug.log
 
 ### Check Token Usage
 
-Use the `/usage` command during a session to see token consumption:
+Use the `/cost` command during a session to see token consumption:
 
 ```text
-/usage
+/cost
 
 Token Usage:
   Input: 12,456 tokens