---
title: Model tuning in {{ foundation-models-full-name }}
description: With {{ foundation-models-full-name }}, you can tune {{ gpt-lite }} and {{ llama }} 8b text generation models and {{ gpt-lite }}-based classifiers using the {{ lora }} method.
---
With {{ foundation-models-full-name }}, you can tune {{ gpt-lite }} and {{ llama }} 8b^1^ text generation models and {{ gpt-lite }}-based classifiers using the {{ lora }} (Low-Rank Adaptation of Large Language Models) method.
Model tuning in {{ foundation-models-full-name }} is at the Preview stage.
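To illustrate the general idea behind {{ lora }}: instead of updating all model weights, tuning trains a pair of small low-rank matrices that are added to the frozen pretrained weights. Below is a minimal NumPy sketch of the technique in general, not the service implementation; all dimensions and values are illustrative.

```python
import numpy as np

# LoRA in general: freeze the pretrained weights W and learn a low-rank
# update B @ A, with rank r much smaller than the layer dimensions.
d_in, d_out, r = 1024, 1024, 8

W = np.random.randn(d_out, d_in)      # frozen pretrained weights
A = np.random.randn(r, d_in) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))              # trainable, zero-initialized so tuning starts from W
alpha = 16.0                          # scaling hyperparameter

def forward(x: np.ndarray) -> np.ndarray:
    # Effective weights are W + (alpha / r) * B @ A; only A and B are trained,
    # which is r * (d_in + d_out) parameters instead of d_in * d_out.
    return (W + (alpha / r) * B @ A) @ x

y = forward(np.random.randn(d_in))    # same output shape as the original layer
```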
{% include tuning-abilities %}
For more information on tuning data requirements, see {#T} and {#T}.
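For illustration, here is a sketch of preparing text generation tuning examples as a JSON Lines file, one JSON object per line. The field layout below (a `request` message list plus a target `response`) is an assumption for this sketch; the requirements linked above are authoritative.

```python
import json

# Hypothetical tuning examples; the exact field layout is defined by the
# tuning data requirements referenced above.
examples = [
    {
        "request": [
            {"role": "system", "text": "You are a support assistant."},
            {"role": "user", "text": "How do I reset my password?"},
        ],
        "response": "Open your profile settings and click Reset password.",
    },
]

# A tuning dataset is a UTF-8 JSON Lines file: one example per line.
with open("tuning_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```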
You need to upload the prepared data to {{ yandex-cloud }} as a dataset. By default, you can upload up to 5 GB of tuning data into one dataset. For all limitations, see {#T}.
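As a sketch of the upload step, assuming the {{ yandex-cloud }} ML SDK for Python (`yandex-cloud-ml-sdk`); the identifiers, file path, and dataset name are placeholders:

```python
from yandex_cloud_ml_sdk import YCloudML

sdk = YCloudML(folder_id="<folder_ID>", auth="<API_key>")

# Create a dataset draft from the local JSON Lines file and upload it.
dataset_draft = sdk.datasets.draft_from_path(
    task_type="TextToTextGeneration",
    path="tuning_data.jsonl",
    upload_format="jsonlines",
    name="my-tuning-dataset",
)
tuning_dataset = dataset_draft.upload_deferred().wait()
print(tuning_dataset)
```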
After you upload a dataset, start tuning by specifying the tuning type and, optionally, the tuning parameters. Tuning can take from one hour to 24 hours, depending on the amount of data and the system workload.
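Continuing the same assumed SDK sketch, starting tuning might look like this; the method names follow that SDK, and the model and tuned-model names are placeholders:

```python
# Assumes `sdk` and `tuning_dataset` from the previous sketch.
base_model = sdk.models.completions("yandexgpt-lite")

# Start tuning and block until the job finishes; depending on data volume
# and workload, this can take from one hour to 24 hours.
tuning_task = base_model.tune_deferred(tuning_dataset, name="my-tuned-model")
tuned_model = tuning_task.wait()
print(tuned_model.uri)
```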
For a model tuning example, see {#T}.
You will need the `ai.editor` role for model tuning in {{ foundation-models-name }}. This role allows you to upload data and start the tuning process.
For more information, see {#T}.
^1^ {{ meta-disclaimer }}