This repo is research code and not 100% stable. Please use GitHub issues or contact me via email (niels dot warncke at gmail dot com) or Slack when you encounter issues.
# OpenWeights
An openai-like SDK with the flexibility of working on a local GPU: finetuning, inference, API deployments, and custom workloads on managed runpod instances.
# Installation
Clone the repo and run `pip install -e .`.
Then add your `$OPENWEIGHTS_API_KEY` to the `.env`.
# Quickstart
```python
from openweights import OpenWeights
import openweights.jobs.unsloth # This import makes ow.fine_tuning available

ow = OpenWeights()
```
Currently supported are `sft`, `dpo` and `orpo` on models up to 32B in bf16 or 70B in 4bit. More info: [Fine-tuning Options](docs/finetuning.md)
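
For illustration, creating a fine-tuning job might look like the sketch below. The upload `purpose` and the exact `create` parameters are assumptions, not a definitive reference - check the fine-tuning docs linked above for the real options.

```python
from openweights import OpenWeights
import openweights.jobs.unsloth  # This import makes ow.fine_tuning available

ow = OpenWeights()

# Upload a JSONL training file (the purpose value is an assumption)
with open('train.jsonl', 'rb') as f:
    training_file = ow.files.create(f, purpose='conversations')

# Parameter names below are illustrative assumptions
job = ow.fine_tuning.create(
    model='unsloth/llama-3-8b-Instruct',  # placeholder model
    training_file=training_file['id'],
    loss='sft',  # or 'dpo' / 'orpo'
)
print(job.status)
```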
# Overview
A bunch of things work out of the box: for example LoRA finetuning, API deployments, batch inference jobs, and running MMLU-pro and inspect-ai evals. The most useful feature, however, is that you can easily [create your own jobs](example/custom_job/) or modify existing ones: all built-in jobs could just as well live outside of this repo. For example, you can copy and modify [the finetuning code](openweights/jobs/unsloth): when a job is created, the necessary source code is uploaded as part of the job, so it does not need to be part of this repo.
## Inference
```python
from openweights import OpenWeights
import openweights.jobs.inference # This import makes ow.inference available
```
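
A sketch of what submitting a batch inference job might look like; the upload `purpose`, `input_file_id`, and the sampling parameters are assumptions for illustration.

```python
from openweights import OpenWeights
import openweights.jobs.inference  # This import makes ow.inference available

ow = OpenWeights()

# Upload a JSONL file of conversations to run inference on (format assumed)
with open('prompts.jsonl', 'rb') as f:
    input_file = ow.files.create(f, purpose='conversations')

# Parameter names below are illustrative assumptions
job = ow.inference.create(
    model='unsloth/llama-3-8b-Instruct',  # placeholder model
    input_file_id=input_file['id'],
    max_tokens=512,
    temperature=0.7,
)
```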
## API deployment

```python
import openweights.jobs.vllm # this makes ow.api available
```
You can deploy models as openai-like APIs in one of the following ways (sorted from highest to lowest level of abstraction):
- create chat completions via `ow.chat.completions.sync_create` or `.async_create` - this will deploy models when needed. This queues to-be-deployed models for 5 seconds and then deploys them via `ow.multi_deploy`. This client is optimized to not overload the vllm server it is talking to and caches requests on disk when a `seed` parameter is given.
- pass a list of models to deploy to `ow.multi_deploy` - this takes a list of models or lora adapters, groups them by `base_model`, and deploys all lora adapters of the same base model on one API to save runpod resources. Calls `ow.deploy` for each single deployment job. [Example](example/multi_lora_deploy.py)
- `ow.api.deploy` - takes a single model and optionally a list of lora adapters, then creates a job of type `api`. Returns an `openweights.client.temporary_api.TemporaryAPI` object. [Example](example/gradio_ui_with_temporary_api.py)
API jobs never complete; they stop only when they are canceled or fail. API jobs are created with a timeout 15 minutes in the future, and while a `TemporaryAPI` is alive (after `api.up()` and before `api.down()` has been called), it resets the timeout every minute. This ensures that an API stays alive while the process that created it is running, and that it will automatically shut down later - but not immediately, so that you don't always have to wait for deployment while debugging.
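
A sketch of working with the returned `TemporaryAPI` directly, based on the lifecycle described above (the exact `deploy` signature is an assumption):

```python
from openweights import OpenWeights
import openweights.jobs.vllm  # this makes ow.api available

ow = OpenWeights()

# ow.api.deploy takes a single model (signature assumed from the description above)
api = ow.api.deploy('unsloth/llama-3-8b-Instruct')  # placeholder model
api.up()        # while alive, the API job's 15-minute timeout is reset every minute
try:
    pass        # talk to the deployed openai-compatible endpoint here
finally:
    api.down()  # stop refreshing the timeout; the API job shuts down
```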
## `ow.chat.completions`
We implement an efficient chat client that handles local caching on disk when a seed is provided, as well as concurrency management and backpressure. It also deploys models automatically when they are not openai models and not already deployed. The automatic deployment makes many guesses that are probably suboptimal for some use cases - for those, you should explicitly use `ow.api.deploy`.
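
For example (a sketch; the call is assumed to mirror the openai chat-completions interface):

```python
from openweights import OpenWeights

ow = OpenWeights()

# Arguments assumed to mirror the openai client's chat.completions interface
response = ow.chat.completions.sync_create(
    model='unsloth/llama-3-8b-Instruct',  # deployed automatically if needed
    messages=[{'role': 'user', 'content': 'Hello!'}],
    seed=0,  # providing a seed enables the on-disk request cache
)
print(response.choices[0].message.content)
```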
## Inspect-AI
```python
from openweights import OpenWeights
import openweights.jobs.inspect_ai # this makes ow.inspect_ai available
ow = OpenWeights()
job = ow.inspect_ai.create(
    model='meta-llama/Llama-3.3-70B-Instruct',
    eval_name='inspect_evals/gpqa_diamond',
    options='--top-p 0.9', # Can be any options that `inspect eval` accepts - we simply pass them on without validation
)
```
## MMLU-pro

```python
from openweights import OpenWeights
import openweights.jobs.mmlu_pro # this makes ow.mmlu_pro available
ow = OpenWeights()
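
# `args` below comes from the surrounding script's argparse parser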
job = ow.mmlu_pro.create(
    model=args.model,
    ntrain=args.ntrain,
    selected_subjects=args.selected_subjects,
    save_dir=args.save_dir,
    global_record_file=args.global_record_file,
    gpu_util=args.gpu_util
)

if job.status == 'completed':
    job.download(f"{args.local_save_dir}")
```
# General notes
## Job and file IDs are content hashes
The `job_id` is based on the params hash, which means that if you submit the same job many times, it will only run once. If you resubmit a failed or canceled job, it will reset the job status to `pending`.
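
For instance (a hypothetical sketch - parameter names as in the quickstart above):

```python
# Submitting identical params twice yields the same job (deduplicated by hash)
# `file_id` is an uploaded file's id, as in the quickstart sketch
job1 = ow.fine_tuning.create(model='unsloth/llama-3-8b-Instruct',
                             training_file=file_id, loss='sft')
job2 = ow.fine_tuning.create(model='unsloth/llama-3-8b-Instruct',
                             training_file=file_id, loss='sft')
assert job1.id == job2.id  # same params hash -> same job_id
```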
## More docs
- [Fine-tuning Options](docs/finetuning.md)
- [APIs](docs/api.md)
- [Custom jobs](example/custom_job/)
## Development
Start a pod in dev mode: this allows ssh'ing into it without automatically starting a worker, which is useful for debugging the worker.