Debugging
The Debugging Tool shows exactly what NarratrixAI sent to the model and what came back. It’s the fastest way to verify your Prompt Format Template, Chat Template settings, inference parameters, and token usage during a Chat / World session.
Press Ctrl+` (backtick) in any chat to open the tool. It captures a snapshot for each model call so you can inspect prompts, parameters, raw outputs, and stats.
- You must be inside a Chat / World.
- Press Ctrl+` to open or close the Debugging Tool.
- The tool is keyboard-only for now—there’s no menu button.
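Conceptually, each snapshot bundles everything the views below display. A minimal sketch of what such a record might contain (the `DebugSnapshot` name and all fields are illustrative, not NarratrixAI's actual internals):

```ts
// Hypothetical shape of one captured request snapshot.
// Names are illustrative; NarratrixAI's internal types may differ.
interface DebugSnapshot {
  timestamp: Date;                     // when the request was sent
  modelLabel: string;                  // provider/model label
  systemPrompt: string;                // composed Prompt Format Template output
  messages: { role: "user" | "assistant"; content: string }[]; // history sent
  parameters: Record<string, unknown>; // exact engine parameters
  rawOutput: string;                   // unmodified provider response
  stats: {
    systemTokens: number;
    historyTokens: number;
    responseTokens: number;
  };
}
```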
The left sidebar keeps the latest 5 requests. Select any item to load its snapshot. Each entry shows:
- Timestamp of the request
- Provider/model label (from your Setting Up Models configuration)
- Optional status tag (e.g., “OpenAI 5”)
Use this to compare consecutive attempts after changing a system prompt, section order, or inference setting.
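Retention works like a small bounded history: the newest request is added and anything past the five-entry limit drops off. A sketch of that policy (my own illustration, not the app's code):

```ts
// Illustrative retention policy: keep only the latest N snapshots,
// newest first. Works for any snapshot type.
const MAX_ENTRIES = 5;

function pushSnapshot<T>(history: T[], snapshot: T): T[] {
  return [snapshot, ...history].slice(0, MAX_ENTRIES);
}
```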
The Payload tab shows the final prompt that NarratrixAI sent:
- System Prompt — the composed text after all Prompt Format Template sections, Lorebooks injections, and separators are applied.
- Conversation Messages — the user/character history included for this call after cleanup and depth constraints.
What you see here is exactly what the provider received—no hidden transformations after this point.
If you use an Inference Template, toggle “Single Block” to view the payload as one concatenated string. This helps debug patterns that depend on precise spacing, separators, or stop sequences.
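To see why exact spacing matters, here is a rough illustration of how sections might be joined into one block. The section markers and separator below are assumptions for the example, not NarratrixAI's templating engine:

```ts
// Simplified "Single Block" concatenation. A stray or missing
// separator here changes exactly what the model receives.
const sections = [
  "### System\nYou are the narrator.",
  "### Context\nThe tavern is crowded tonight.",
  "### Character\nMira, a wary sellsword.",
];
const separator = "\n\n"; // assumed; real templates make this configurable
const singleBlock = sections.join(separator);
console.log(singleBlock);
```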
The Parameters tab lists the exact parameters sent to the model engine. This includes:
- Response Length, Context Size, and Max Depth (from your Chat Template)
- Sampling controls (e.g., temperature, top_p, penalties)
- Engine-specific fields supported by your provider
Use this tab to confirm the app used the values you intended and that provider-specific flags are present.
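As a concrete example, the tab's contents for one request might resemble the object below. Every field name here is hypothetical; the actual keys depend on your provider:

```ts
// Hypothetical parameter set for one request; providers differ.
const inferenceParams = {
  max_tokens: 512,        // Response Length from the Chat Template
  temperature: 0.8,       // sampling controls
  top_p: 0.95,
  frequency_penalty: 0.2,
  presence_penalty: 0.1,
  stop: ["\nUser:"],      // engine-specific field, when supported
};
console.log(JSON.stringify(inferenceParams, null, 2));
```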
The Raw Output tab displays the raw provider response, with no rewriting, trimming, or formatting by NarratrixAI. Useful for:
- Checking whether the model followed your output contract (e.g., a single label for Expressions)
- Inspecting metadata some engines return (e.g., finish reasons)
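For instance, an OpenAI-style chat completion carries a finish reason you can read straight off the raw JSON (standard OpenAI response shape; other engines return different structures):

```ts
// Reading the finish reason from a raw OpenAI-style response.
const raw = `{
  "choices": [
    { "message": { "role": "assistant", "content": "joy" },
      "finish_reason": "stop" }
  ]
}`;

const parsed = JSON.parse(raw);
// "stop" = the model ended on its own; "length" = it hit max_tokens.
console.log(parsed.choices[0].finish_reason);
```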
The Stats tab provides a snapshot of usage and limits for this request:
- Token Breakdown (system, history, response)
- Utilization metrics vs. model context and response limits
- Message statistics (count, average tokens)
- Any timing or cost fields exposed by the engine (when available)
Use this tab to catch context overruns, overly long system prompts, or too-short response caps.
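The utilization figures boil down to simple ratios. One plausible way to compute them, with made-up numbers (whether the response cap is reserved inside the context budget is an assumption here):

```ts
// Illustrative utilization math; all numbers are invented.
const contextSize = 8192;    // model context limit (tokens)
const responseLimit = 512;   // configured Response Length

const systemTokens = 1450;
const historyTokens = 5200;
const responseTokens = 480;

// Prompt plus reserved response space measured against the context window.
const contextUtilization =
  (systemTokens + historyTokens + responseLimit) / contextSize;
const responseUtilization = responseTokens / responseLimit;

console.log(`context:  ${(contextUtilization * 100).toFixed(1)}%`);  // 87.4%
console.log(`response: ${(responseUtilization * 100).toFixed(1)}%`); // 93.8%
```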
A quick workflow to verify a change end to end:
- Change your Prompt Format Template (e.g., reorder Context and Character).
- Send a new message.
- Open the Debugging Tool (Ctrl+`) and compare the new Payload and Stats against the previous entry in the sidebar.
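Comparing two entries is ultimately a field-by-field diff of their parameters and stats. A toy helper to make the idea concrete (not part of the tool):

```ts
// Toy diff of two parameter objects; illustration only.
function diffParams(
  a: Record<string, unknown>,
  b: Record<string, unknown>,
): string[] {
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  return [...keys]
    .filter((k) => JSON.stringify(a[k]) !== JSON.stringify(b[k]))
    .map((k) => `${k}: ${JSON.stringify(a[k])} -> ${JSON.stringify(b[k])}`);
}

console.log(diffParams(
  { temperature: 0.8, top_p: 0.95 },
  { temperature: 1.0, top_p: 0.95 },
));
// ["temperature: 0.8 -> 1"]
```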