Commit 9e5bfc3

Update navigate_mxgenai.md
added paragraph
1 parent efec553 commit 9e5bfc3

File tree

1 file changed: +3 −1 lines changed
  • content/en/docs/appstore/use-content/platform-supported-content/modules/genai/mendix-cloud-genai

content/en/docs/appstore/use-content/platform-supported-content/modules/genai/mendix-cloud-genai/navigate_mxgenai.md

Lines changed: 3 additions & 1 deletion
@@ -93,7 +93,9 @@ The Token consumption monitor shows detailed graphs of the token consumption use
 
 ### Why do we measure token consumption?
 
-In order for a large language model to understand text input, the text is first ‘tokenized’ - broken down into smaller pieces where each piece represents a token with its unique ID. A good rule of thumb is that 100 tokens are around 75 English words, however there are always differences depending on the model or the language used. After tokenization, each token will be assigned an embeddings vector. The tokens required to feed the input prompt to the model are called ‘input tokens’, the tokens required to transform the model output into for example text or images are called ‘output tokens’. Tokens are what you pay for when consuming large language model services. For Embeddings resources, only input token consumption is being measured, since only the generated embedding vectors are returned and no tokenization takes place when generating the output. Text generation resources contain both input and output tokens (text sent to the model and generated by the model).
+In order for a large language model to understand text input, the text is first ‘tokenized’: broken down into smaller pieces, where each piece represents a token with its own unique ID. A good rule of thumb is that 100 tokens correspond to around 75 English words, although the exact ratio depends on the model and the language used. After tokenization, each token is assigned an embedding vector. The tokens required to feed the input prompt to the model are called ‘input tokens’; the tokens required to transform the model output into, for example, text or images are called ‘output tokens’. Tokens are what you pay for when consuming large language model services.
+
+Note: For Embeddings resources, only input token consumption is measured, since only the generated embedding vectors are returned and no tokenization takes place when generating the output. Text generation resources consume both input and output tokens (text sent to the model and text generated by the model).
 
 ### Knowledgebases and Embeddings Resources
 
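The rule of thumb quoted in the changed paragraph (100 tokens are around 75 English words) can be sketched as a rough estimator. This is only an illustration of the ratio stated in the docs; the function name is hypothetical and not part of any Mendix or model-provider API, and real tokenizers vary by model and language:

```python
def estimate_tokens(word_count: int) -> int:
    """Rough input-token estimate from an English word count.

    Uses the 100-tokens-per-75-words rule of thumb from the docs,
    i.e. roughly 4/3 tokens per word. Treat the result as a ballpark
    figure for token budgeting, not an exact count.
    """
    return round(word_count * 100 / 75)


# A 750-word prompt is roughly 1000 input tokens.
print(estimate_tokens(750))
```

For exact counts you would use the tokenizer of the specific model, since each model defines its own token vocabulary.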
