---
title: Model tuning in {{ foundation-models-full-name }}
description: With {{ foundation-models-full-name }}, you can tune {{ gpt-lite }} and {{ llama }} 8b text generation models and {{ gpt-lite }}-based classifiers using the {{ lora }} method.
---

# Model tuning

With {{ foundation-models-full-name }}, you can tune {{ gpt-lite }} and {{ llama }} 8b^1^ text generation models and {{ gpt-lite }}-based classifiers using the {{ lora }} (Low-Rank Adaptation of Large Language Models) method.

Model tuning in {{ foundation-models-full-name }} is at the Preview stage.

## Fine-tuning text generation models {#tuning-abilities}

{% include tuning-abilities %}

## Fine-tuning in {{ foundation-models-name }} {#fm-tuning}

For more information on tuning data requirements, see {#T} and {#T}.

You need to upload the prepared data to {{ yandex-cloud }} as a dataset. By default, you can upload up to 5 GB of tuning data into one dataset. For all limitations, see {#T}.
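As an illustration, below is a minimal sketch of preparing and uploading a dataset with the `yandex-cloud-ml-sdk` Python library. The record schema, the `TextToTextGeneration` task type, and the `draft_from_path` and `upload_deferred` calls reflect the SDK's tuning workflow but may differ between SDK versions; treat the exact names and values as assumptions and check the SDK reference.

```python
# A minimal sketch: write one tuning record to a JSON Lines file and
# upload it as a dataset with yandex-cloud-ml-sdk. Method names and
# parameter values are assumptions; verify them against the SDK reference.
import json

from yandex_cloud_ml_sdk import YCloudML

# One example record; the exact schema depends on the tuning type.
example_record = {
    "request": [
        {"role": "system", "text": "You are a support assistant."},
        {"role": "user", "text": "How do I reset my password?"},
    ],
    "response": "Open Settings, go to Security, and click Reset password.",
}

with open("tuning_data.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(example_record, ensure_ascii=False) + "\n")

sdk = YCloudML(folder_id="<folder_ID>", auth="<API_key>")

# Create a dataset draft and upload it (up to 5 GB per dataset by default).
dataset_draft = sdk.datasets.draft_from_path(
    task_type="TextToTextGeneration",  # assumed value for text generation tuning
    path="tuning_data.jsonl",
    upload_format="jsonlines",
    name="my-tuning-dataset",
)
dataset = dataset_draft.upload_deferred().wait()
print(dataset.id)
```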

After you upload a dataset, start tuning by specifying the tuning type and, optionally, tuning parameters. Tuning can take from 1 to 24 hours depending on the amount of data and the system workload.
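Continuing the sketch above, launching a tuning run could look as follows. The `tune_deferred` call and the base model name are assumptions, so verify them against the SDK reference before use.

```python
# Continuing the sketch above: start tuning on the uploaded dataset.
# The tune_deferred method and the base model name are assumptions;
# consult the SDK reference for the exact tuning API.
base_model = sdk.models.completions("yandexgpt-lite")

# Launch tuning as a long-running operation; it can take 1 to 24 hours.
tuning_task = base_model.tune_deferred(dataset)
tuned_model = tuning_task.wait()

# The tuned model can then be called like any other completion model.
result = tuned_model.run("How do I reset my password?")
print(result)
```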

For a model tuning example, see {#T}.

You will need the `ai.editor` role to tune models in {{ foundation-models-name }}. This role allows you to upload data and start the tuning process.

## Examples {#examples}

{#T}.

^1^ {{ meta-disclaimer }}