Commit c514bfd

doc
grencez committed Jul 27, 2024
1 parent d3881de commit c514bfd
Showing 4 changed files with 12 additions and 18 deletions.
example/prompt/README.md (4 additions, 5 deletions)
@@ -8,13 +8,12 @@ In order of interest:
 - [confidant_alpaca](confidant_alpaca/): A camelid that occasionally spits.
 - Demonstrates a method of prompting instruction-tuned models to fill in character dialogue.
 - Instruction-following AI assistants.
+- For all of these examples, the assistant must end its messages with a special token like EOS.
 - [assistant_alpaca](assistant_alpaca/): Alpaca prompt format.
-- [assistant_chatml](assistant_chatml/): ChatML prompt format that requires special `<|im_start|>` and `<|im_end|>` tokens.
-- [assistant_gemma](assistant_gemma/): Gemma prompt format requires special `<start_of_turn>` and `<end_of_turn>` tokens.
+- [assistant_chatml](assistant_chatml/): ChatML prompt format that typically requires special `<|im_start|>` and `<|im_end|>` tokens but is configured with fallbacks.
+- [assistant_gemma](assistant_gemma/): Gemma prompt format that requires special `<start_of_turn>` and `<end_of_turn>` tokens.
 - [assistant_mistral](assistant_mistral/): Mistral prompt format that requires special `[INST]` and `[/INST]` tokens.
-- [assistant_vicuna](assistant_vicuna/): Conversational AI assistant.
-- Minimal prompt that lets a Vicuna-style model do its thing.
-- Only works with models that end the assistant's message with an EOS token.
+- [assistant_vicuna](assistant_vicuna/): Vicuna prompt format.
 - [assistant_coprocess](assistant_coprocess/): A simple assistant that can be controlled as a coprocess.
 - Demonstrates the `/puts` and `/gets` commands.

example/prompt/assistant_chatml/README.md (4 additions, 1 deletion)
@@ -1,4 +1,7 @@
 # ChatML Assistant
 
 This example should be run with [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md)-style models that are tuned to behave like an instruction-following assistant chatbot.
-Most importantly, the model must have special `<|im_start|>` and `<|im_end|>` tokens.
+
+The model typically should have special `<|im_start|>` and `<|im_end|>` tokens, but `setting.sxpb` configures fallbacks that attempt to support any model.
+Models that don't support ChatML may produce nonsense, but Gemma seems to behave well, so we specifically recognize Gemma-style `<start_of_turn>` and `<end_of_turn>` tokens in this example.
+When no special tokens are found, we fall back to using BOS and EOS tokens to support jondurbin's Bagel finetunes like [bagel-7b-v0.5](https://huggingface.co/jondurbin/bagel-7b-v0.5).
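
This commit does not show assistant_chatml/setting.sxpb itself, but judging from the candidate-list form removed from assistant_gemma/setting.sxpb below, its fallback configuration plausibly looks something like the following sketch (hypothetical; the BOS/EOS fallback entries are omitted because their syntax is not shown in this commit):

    (substitution
     (special_tokens (())
      (()
       (alias "<|im_start|>")
       (candidates (())
        "<|im_start|>"
        "<start_of_turn>" ; Gemma-style fallback, per the README text above.
       ))
      (()
       (alias "<|im_end|>")
       (candidates (())
        "<|im_end|>"
        "<end_of_turn>" ; Gemma-style fallback, per the README text above.
       ))
     )
    )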
example/prompt/assistant_gemma/README.md (2 additions, 0 deletions)
@@ -2,3 +2,5 @@
 
 This example should be run with Gemma-style models that are tuned to behave like an instruction-following assistant chatbot.
 Most importantly, the model must have special `<start_of_turn>` and `<end_of_turn>` tokens.
+
+It's like the [assistant_chatml](../assistant_chatml/) example but without a system prompt.
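
For reference, a turn in the Gemma prompt format is delimited with these special tokens roughly as in the sketch below (the general Gemma chat convention, not text taken from this repository):

    <start_of_turn>user
    Why do llamas spit?<end_of_turn>
    <start_of_turn>model
    Llamas usually spit to settle disputes over food or pecking order.<end_of_turn>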
example/prompt/assistant_gemma/setting.sxpb (2 additions, 12 deletions)
@@ -9,18 +9,8 @@
 )
 (substitution
  (special_tokens (())
-  (()
-   (alias "<start_of_turn>")
-   (candidates (())
-    "<start_of_turn>"
-    "<|im_start|>" ; For ChatML models.
-   ))
-  (()
-   (alias "<end_of_turn>")
-   (candidates (())
-    "<end_of_turn>"
-    "<|im_end|>" ; For ChatML models.
-   ))
+  (() (alias "<start_of_turn>"))
+  (() (alias "<end_of_turn>"))
  )
 )

