`grammars/README.md` (+3 −3):

```diff
@@ -1,6 +1,6 @@
 # GBNF Guide
 
-GBNF (GGML BNF) is a format for defining [formal grammars](https://en.wikipedia.org/wiki/Formal_grammar) to constrain model outputs in `llama.cpp`. For example, you can use it to force the model to generate valid JSON, or speak only in emojis. GBNF grammars are supported in various ways in `tools/main` and `tools/server`.
+GBNF (GGML BNF) is a format for defining [formal grammars](https://en.wikipedia.org/wiki/Formal_grammar) to constrain model outputs in `llama.cpp`. For example, you can use it to force the model to generate valid JSON, or speak only in emojis. GBNF grammars are supported in various ways in `tools/cli`, `tools/completion` and `tools/server`.
 
 ## Background
 
@@ -135,7 +135,7 @@ While semantically correct, the syntax `x? x? x?.... x?` (with N repetitions) ma
 You can use GBNF grammars:
 
 - In [llama-server](../tools/server)'s completion endpoints, passed as the `grammar` body field
-- In [llama-cli](../tools/main), passed as the `--grammar` & `--grammar-file` flags
+- In [llama-cli](../tools/cli) and [llama-completion](../tools/completion), passed as the `--grammar` & `--grammar-file` flags
 - With [test-gbnf-validator](../tests/test-gbnf-validator.cpp), to test them against strings.
 
 ## JSON Schemas → GBNF
```
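As a quick illustration of what the `--grammar` / `--grammar-file` flags mentioned above consume, here is a small GBNF sketch (written to match the syntax this guide describes, not taken from the PR itself) that restricts output to a bare yes/no answer:

```gbnf
# Output must be exactly "yes" or "no", optionally followed by a newline.
root ::= answer "\n"?
answer ::= "yes" | "no"
```

Saved to a hypothetical `answer.gbnf`, this could be passed to the renamed tools as `--grammar-file answer.gbnf`, or sent inline as the `grammar` string in a llama-server completion request body.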
```diff
@@ -145,7 +145,7 @@ You can use GBNF grammars:
 - In [llama-server](../tools/server):
   - For any completion endpoints, passed as the `json_schema` body field
   - For the `/chat/completions` endpoint, passed inside the `response_format` body field (e.g. `{"type": "json_object", "schema": {"items": {}}}` or `{ type: "json_schema", json_schema: {"schema": ...} }`)
-- In [llama-cli](../tools/main), passed as the `--json` / `-j` flag
+- In [llama-cli](../tools/cli) and [llama-completion](../tools/completion), passed as the `--json` / `-j` flag
 - To convert to a grammar ahead of time:
   - in CLI, with [examples/json_schema_to_grammar.py](../examples/json_schema_to_grammar.py)
   - in JavaScript with [json-schema-to-grammar.mjs](../tools/server/public_legacy/json-schema-to-grammar.mjs) (this is used by the [server](../tools/server)'s Web UI)
```
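To make the `json_schema` route above concrete, here is a minimal sketch of building a llama-server completion request body. Only the `json_schema` field name comes from the README above; the prompt text and schema contents are illustrative assumptions, and actually sending the request would require a running server:

```python
import json

# Hypothetical body for llama-server's completion endpoint.
# "json_schema" holds a standard JSON Schema; the server converts
# it to a GBNF grammar that constrains generation.
payload = {
    "prompt": "List three fruits as a JSON array of strings.",
    "json_schema": {
        "type": "array",
        "items": {"type": "string"},
        "minItems": 3,
        "maxItems": 3,
    },
}

body = json.dumps(payload)
print(body)
```

The same schema object could instead be passed on the command line via `--json` / `-j`, or converted ahead of time with `examples/json_schema_to_grammar.py` as noted in the list above.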
`tools/completion/README.md` (+1 −1):

```diff
@@ -1,4 +1,4 @@
-# llama.cpp/tools/main
+# llama.cpp/tools/completion
 
 This example program allows you to use various LLaMA language models easily and efficiently. It is specifically designed to work with the [llama.cpp](https://github.com/ggml-org/llama.cpp) project, which provides a plain C/C++ implementation with optional 4-bit quantization support for faster, lower memory inference, and is optimized for desktop CPUs. This program can be used to perform various inference tasks with LLaMA models, including generating text based on user-provided prompts and chat-like interactions with reverse prompts.
```