
The Chat Template panel is the control center for a Chat / World. It selects the model, applies your Prompt Format Template, attaches world Lorebooks, and sets limits that control context and responses. You can keep unique templates per chat or reuse the same template across multiple chats.

You can also reference Chat Templates in other features like Summarization and Expressions to keep behavior consistent.



Main Settings

These settings determine how the LLM runs your session.

  • Model: Choose from the models you integrated in [Setting Up Models]. This is the engine that generates participant replies.
  • Format: Pick the [Prompt Format Template] used to build each participant’s prompt (system + message layout).
  • Lorebooks: Select one or more [Lorebooks] to inject world or character knowledge when relevant.
  • Context Size: The maximum number of tokens sent to the model. When the limit is reached, NarratrixAI drops the oldest messages to stay within budget. Check your provider’s maximum context for the chosen model.
  • Lorebook Budget: Token budget reserved for lorebook injections. If no lorebook is selected, nothing is reserved.
  • Response Length: The maximum number of tokens allowed in a single participant response.
  • Max Depth: The maximum number of past messages eligible for inclusion in the prompt. Lower values keep the focus tight; higher values keep more of a long-running thread in context.

About tokens: tokens are model-specific chunks of text. As a rough rule, 1 token ≈ 0.75 words in English. Always validate limits against your model’s documentation.
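For example, under that rule an 8,192-token context holds roughly 6,100 English words.

To make the interaction between Context Size, Lorebook Budget, and Max Depth concrete, here is a minimal Python sketch of the trimming behavior described above. It is illustrative only: count_tokens, build_history, and the numbers used are assumptions for this example, not NarratrixAI’s actual code.

```python
# Illustrative sketch only; NarratrixAI's internal implementation may differ.
# count_tokens is a hypothetical helper standing in for the model's tokenizer.

def count_tokens(text: str) -> int:
    # Rough stand-in: ~0.75 words per token means ~1.33 tokens per word.
    return max(1, round(len(text.split()) / 0.75))

def build_history(messages, context_size, lorebook_budget, max_depth):
    """Keep the newest messages that fit the remaining token budget."""
    budget = context_size - lorebook_budget   # lorebook tokens are reserved up front
    recent = messages[-max_depth:]            # only the last `max_depth` messages are eligible
    kept, used = [], 0
    for msg in reversed(recent):              # walk backwards from the newest message
        cost = count_tokens(msg)
        if used + cost > budget:
            break                             # oldest messages are dropped first
        kept.append(msg)
        used += cost
    return list(reversed(kept))               # restore chronological order

history = build_history(
    messages=["Hello there!", "The storm rolls in over the harbor.", "We should take shelter."],
    context_size=8192,      # tokens sent to the model
    lorebook_budget=1024,   # reserved for lorebook injections
    max_depth=50,           # at most 50 past messages considered
)
```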


Custom Prompts

Add prompts you can toggle on/off without editing your main Prompt Format Template. Each custom prompt defines a Role and a Position.


Roles:

  • User — inserted as your user message
  • Character — inserted as a specific character participant
  • System — injected into the system section of the Prompt Format Template

Position:

  • Top of Conversation
  • Bottom of Conversation
  • At specific Depth

Prefilling
If your model supports prefilling, you can use this feature to set up a Character prompt at the Bottom of Conversation.
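With providers that support it (Anthropic-style assistant prefill, for example), prefilling typically works by ending the request with a partially written message that the model then continues. The snippet below is a hypothetical illustration of that shape; the field names and content are not NarratrixAI’s actual request format.

```python
# Hypothetical illustration of a prefilled request; field names are examples only.
messages = [
    {"role": "system", "content": "You are Captain Mira, first officer of the Daybreak."},
    {"role": "user", "content": "Mira, report on the engine damage."},
    # A Character prompt at the Bottom of Conversation acts as a prefill:
    # the model continues this text rather than starting its reply from scratch.
    {"role": "assistant", "content": "Mira takes a steadying breath before answering:"},
]
```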

Examples:

  • “Safety rails” or content boundaries (System, Top of Conversation)
  • Scene framing for the next few turns (User, At specific Depth - 2)
  • Character voice primer (User, Bottom of Conversation)
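Here is a minimal sketch of how a Position might translate into an insertion index, assuming messages are stored oldest-first. The function and names are illustrative, not NarratrixAI’s actual implementation.

```python
# Illustrative sketch: mapping a custom prompt's Position to an index
# in an oldest-first message list. Not NarratrixAI's actual code.

def insert_custom_prompt(messages, prompt, position, depth=0):
    """Return a new message list with `prompt` placed according to `position`."""
    if position == "top":
        return [prompt] + messages                     # Top of Conversation
    if position == "bottom":
        return messages + [prompt]                     # Bottom of Conversation
    if position == "depth":
        index = max(0, len(messages) - depth)          # `depth` messages from the end
        return messages[:index] + [prompt] + messages[index:]
    raise ValueError(f"unknown position: {position}")

# "Scene framing for the next few turns (User, At specific Depth - 2)":
chat = ["msg 1", "msg 2", "msg 3", "msg 4"]
framed = insert_custom_prompt(chat, "[Scene: the storm breaks]", "depth", depth=2)
# -> ["msg 1", "msg 2", "[Scene: the storm breaks]", "msg 3", "msg 4"]
```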

Tip
You can build lightweight chats using only Custom Prompts and a minimal Prompt Format Template. You’ll lose some section‑level conditions, but it’s fast for experiments.


Inference Settings

Providers expose parameters that change model behavior. NarratrixAI shows only the fields your selected model supports and hides unsupported ones automatically.


You can add a parameter and check its help (?) icon for basic documentation on what the parameter does.
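The exact fields vary by provider, but the sketch below lists sampling parameters you are likely to encounter. The names follow widely used OpenAI-style conventions and are shown purely as an illustration; your provider may expose a different subset or different names.

```python
# Common sampling parameters (OpenAI-style names shown for illustration only).
inference_settings = {
    "temperature": 0.8,        # higher = more varied output, lower = more deterministic
    "top_p": 0.95,             # nucleus sampling: keep tokens within this probability mass
    "top_k": 40,               # only sample from the k most likely tokens (if supported)
    "frequency_penalty": 0.2,  # discourage repeating the same tokens
    "presence_penalty": 0.1,   # encourage introducing new topics
    "max_tokens": 512,         # caps the reply, corresponding to Response Length above
}
```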


Practical Tips

  • Keep Context Size below your model’s hard limit to avoid provider errors.
  • Raise Lorebook Budget only when you need heavy world detail; it competes with message history for tokens.
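As a rough worked example: with a Context Size of 16,384 tokens and a Lorebook Budget of 2,048, roughly 14,336 tokens remain for message history and custom prompts.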
