What happened?

Hello,

I noticed that the current context window for codestral is set to 32000 input tokens (https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json#L2205). I think this is incorrect: as announced at https://mistral.ai/en/news/codestral-2501, it should now be a 256k context window.

I didn't open a PR because I'm not quite sure what to do with the other parameters, max_tokens and max_input_tokens.

Relevant log output

Are you an ML Ops Team?

No

What LiteLLM version are you on?

Aider v0.74.1

Twitter / LinkedIn details

No response
{ "id": "codestral-latest", "object": "model", "created": 1739046941, "owned_by": "mistralai", "capabilities": { "completion_chat": true, "completion_fim": true, "function_calling": true, "fine_tuning": false, "vision": false }, "name": "codestral-2501", "description": "Official codestral-2501 Mistral AI model", "max_context_length": 262144, "aliases": [ "codestral-2501", "codestral-2412", "codestral-2411-rc5" ], "deprecation": null, "default_model_temperature": 0.3, "type": "base" },
From https://api.mistral.ai/v1/models/{model_id}: max_context_length is 262144, and max_tokens and max_input_tokens end up being the same value.

From the documentation:

> The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length.

Thus, both max_input_tokens and max_output_tokens should be 262144.
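For illustration, here is a minimal sketch of what the updated entry in model_prices_and_context_window.json could look like under the conclusion above. Only the token-limit fields are shown; the entry key "codestral/codestral-latest" is an assumption made for this sketch, and all other fields in the real entry (provider, per-token costs, mode) would stay exactly as they are in the file today:

```json
{
  "codestral/codestral-latest": {
    "max_tokens": 262144,
    "max_input_tokens": 262144,
    "max_output_tokens": 262144
  }
}
```

The maintainers may still prefer a smaller max_output_tokens than the full context length, so this is a suggestion rather than a definitive fix.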