
Commit c8571b3

Merge pull request #3 from samestrin/1.0.1 (1.0.1)
2 parents c020289 + 3902875, commit c8571b3

File tree: 110 files changed, +5350 -3110 lines


.eslintrc.json (+11)

```diff
@@ -0,0 +1,11 @@
+{
+  "env": {
+    "node": true,
+    "commonjs": true,
+    "es2021": true
+  },
+  "extends": "eslint:recommended",
+  "parserOptions": {
+    "ecmaVersion": 12
+  }
+}
```

.github/workflows/main.yml (+3 -3)

```diff
@@ -5,11 +5,11 @@ on:
     branches:
       - main
     paths:
-      - "package.json"
+      - 'package.json'
   workflow_dispatch:
   schedule:
     # Run the workflow every week (adjust as needed)
-    - cron: "0 0 * * 0"
+    - cron: '0 0 * * 0'
 
 jobs:
   update-packages:
@@ -21,7 +21,7 @@ jobs:
       - name: Setup Node.js
         uses: actions/setup-node@v3
         with:
-          node-version: "20"
+          node-version: '20'
 
       - name: Update NPM packages
         run: |
```

.prettierrc (+4)

```diff
@@ -0,0 +1,4 @@
+{
+  "singleQuote": true,
+  "trailingComma": "all"
+}
```
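
The new Prettier config (single quotes, trailing commas) is almost certainly what drives the mechanical `"…"` to `'…'` rewrites that make up most of the remaining hunks in this commit; assuming Prettier is installed as a dev dependency (not shown in this diff), a single `npx prettier --write .` run would produce exactly these changes.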

README.md (+12 -14)

````diff
@@ -20,6 +20,12 @@ The LLM Interface project is a versatile and comprehensive wrapper designed to i
 
 ## Updates
 
+**v1.0.01**
+
+- **LLMInterfaceSendMessage**: Send a message to any LLM provider without creating a new instance of the `llm-interface`.
+- **Model Aliases**: Simplified model selection, `default`, `small`, and `large` model aliases now available.
+- **Major Refactor**: Improved comments, test cases, centralized LLM provider definitions.
+
 **v1.0.00**
 
 - **Initial 1.0 Release**
@@ -28,14 +34,6 @@ The LLM Interface project is a versatile and comprehensive wrapper designed to i
 
 - **Simple Prompt Handler**: Added support for simplified prompting.
 
-**v0.0.10**
-
-- **Hugging Face**: Added support for new LLM provider Hugging Face (_over 150,000 publicly accessible machine learning models_)
-- **Perplexity**: Added support for new LLM provider Perplexity
-- **AI21**: Add support for new LLM provider AI21 Studio
-- **JSON Output Improvements**: The `json_object` mode now guarantees the return a valid JSON object or null.
-- **Graceful Retries**: Retry LLM queries upon failure with progressive delays.
-
 ## Dependencies
 
 The project relies on several npm packages and APIs. Here are the primary dependencies:
@@ -64,13 +62,13 @@ npm install llm-interface
 Import `llm-interface` using:
 
 ```javascript
-const LLMInterface = require("llm-interface");
+const LLMInterface = require('llm-interface');
 ```
 
 or
 
 ```javascript
-import LLMInterface from "llm-interface";
+import LLMInterface from 'llm-interface';
 ```
 
 then call the handler you want to use:
@@ -79,10 +77,10 @@ then call the handler you want to use:
 const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY);
 
 const message = {
-  model: "gpt-3.5-turbo",
+  model: 'gpt-3.5-turbo',
   messages: [
-    { role: "system", content: "You are a helpful assistant." },
-    { role: "user", content: "Explain the importance of low latency LLMs." },
+    { role: 'system', content: 'You are a helpful assistant.' },
+    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
   ],
 };
 
@@ -102,7 +100,7 @@ or if you want to keep things _simple_ you can use:
 const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY);
 
 openai
-  .sendMessage("Explain the importance of low latency LLMs.")
+  .sendMessage('Explain the importance of low latency LLMs.')
   .then((response) => {
     console.log(response);
   })
````
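
The changelog names `LLMInterfaceSendMessage` and the `default`/`small`/`large` model aliases, but this commit's visible hunks never show either in use. Below is a minimal sketch of how the two features might combine; the import style and the `(provider, apiKey, message, options)` argument order are assumptions, not confirmed by this diff:

```javascript
// Hypothetical sketch: the function name and aliases come from the changelog
// above; the signature and import shape are assumed, not shown in this commit.
const { LLMInterfaceSendMessage } = require('llm-interface');

LLMInterfaceSendMessage(
  'openai', // provider name (assumed identifier)
  process.env.OPENAI_API_KEY,
  'Explain the importance of low latency LLMs.',
  { model: 'small' }, // alias resolving to the provider's smaller model
)
  .then((response) => console.log(response))
  .catch((error) => console.error(error));
```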

docs/API.md (+2 -2)

````diff
@@ -28,7 +28,7 @@ Different LLMs may have their own message object rules. For example, both Anthro
 
 ```javascript
 openai
-  .sendMessage(message, { max_tokens: 150, response_format: "json_object" })
+  .sendMessage(message, { max_tokens: 150, response_format: 'json_object' })
   .then((response) => {
     console.log(response);
   })
@@ -241,7 +241,7 @@ perplexity
 
 - **Parameters:**
   - `message`: An object containing the model and messages to send.
-  - `options`: An optional object containing `model`. This method currently has no token limitation.
+  - `options`: An optional object containing `max_tokens` and `model`.
   - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` and `retryAttempts`.
 - **Returns:** A promise that resolves to the response text.
 - **Example:**
````
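
The corrected parameter list implies a three-argument `sendMessage(message, options, interfaceOptions)` call shape. A minimal sketch using only the option names this hunk documents; the values are illustrative, not taken from the commit:

```javascript
// Sketch based on the parameter list above; the three-argument call shape is
// implied by the docs, and the option values here are illustrative.
perplexity
  .sendMessage(
    message,
    { max_tokens: 150 }, // now documented alongside `model`
    { cacheTimeoutSeconds: 86400, retryAttempts: 3 },
  )
  .then((response) => console.log(response))
  .catch((error) => console.error(error));
```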

docs/USAGE.md (+49 -49)

````diff
@@ -32,13 +32,13 @@ How to use `llm-interface` in your project.
 First, require the LLMInterface from the `llm-interface` package:
 
 ```javascript
-const LLMInterface = require("llm-interface");
+const LLMInterface = require('llm-interface');
 ```
 
 or import it:
 
 ```javascript
-import LLMInterface from "llm-interface";
+import LLMInterface from 'llm-interface';
 ```
 
 ## Basic Usage Examples
@@ -55,10 +55,10 @@ The OpenAI interface allows you to send messages to the OpenAI API.
 const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY);
 
 const message = {
-  model: "gpt-3.5-turbo",
+  model: 'gpt-3.5-turbo',
   messages: [
-    { role: "system", content: "You are a helpful assistant." },
-    { role: "user", content: "Explain the importance of low latency LLMs." },
+    { role: 'system', content: 'You are a helpful assistant.' },
+    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
   ],
 };
 
@@ -169,10 +169,10 @@ The Gemini interface allows you to send messages to the Google Gemini API.
 const gemini = new LLMInterface.gemini(process.env.GEMINI_API_KEY);
 
 const message = {
-  model: "gemini-1.5-flash",
+  model: 'gemini-1.5-flash',
   messages: [
-    { role: "system", content: "You are a helpful assistant." },
-    { role: "user", content: "Explain the importance of low latency LLMs." },
+    { role: 'system', content: 'You are a helpful assistant.' },
+    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
   ],
 };
 
@@ -196,10 +196,10 @@ The Goose AI interface allows you to send messages to the Goose AI API.
 const goose = new LLMInterface.goose(process.env.GROQ_API_KEY);
 
 const message = {
-  model: "gpt-neo-20b",
+  model: 'gpt-neo-20b',
   messages: [
-    { role: "system", content: "You are a helpful assistant." },
-    { role: "user", content: "Explain the importance of low latency LLMs." },
+    { role: 'system', content: 'You are a helpful assistant.' },
+    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
   ],
 };
 
@@ -223,10 +223,10 @@ The Groq interface allows you to send messages to the Groq API.
 const groq = new LLMInterface.groq(process.env.GROQ_API_KEY);
 
 const message = {
-  model: "llama3-8b-8192",
+  model: 'llama3-8b-8192',
   messages: [
-    { role: "system", content: "You are a helpful assistant." },
-    { role: "user", content: "Explain the importance of low latency LLMs." },
+    { role: 'system', content: 'You are a helpful assistant.' },
+    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
   ],
 };
 
@@ -250,15 +250,15 @@ The HuggingFace interface allows you to send messages to the HuggingFace API.
 const huggingface = new LLMInterface.huggingface(process.env.ANTHROPIC_API_KEY);
 
 const message = {
-  model: "claude-3-opus-20240229",
+  model: 'claude-3-opus-20240229',
   messages: [
     {
-      role: "user",
+      role: 'user',
       content:
-        "You are a helpful assistant. Say OK if you understand and stop.",
+        'You are a helpful assistant. Say OK if you understand and stop.',
     },
-    { role: "system", content: "OK" },
-    { role: "user", content: "Explain the importance of low latency LLMs." },
+    { role: 'system', content: 'OK' },
+    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
   ],
 };
 
@@ -282,10 +282,10 @@ The Mistral AI interface allows you to send messages to the Mistral AI API.
 const mistral = new LLMInterface.mistral(process.env.GROQ_API_KEY);
 
 const message = {
-  model: "llama3-8b-8192",
+  model: 'llama3-8b-8192',
   messages: [
-    { role: "system", content: "You are a helpful assistant." },
-    { role: "user", content: "Explain the importance of low latency LLMs." },
+    { role: 'system', content: 'You are a helpful assistant.' },
+    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
   ],
 };
 
@@ -309,15 +309,15 @@ The Perplexity interface allows you to send messages to the Perplexity API.
 const perplexity = new LLMInterface.perplexity(process.env.ANTHROPIC_API_KEY);
 
 const message = {
-  model: "claude-3-opus-20240229",
+  model: 'claude-3-opus-20240229',
   messages: [
     {
-      role: "user",
+      role: 'user',
       content:
-        "You are a helpful assistant. Say OK if you understand and stop.",
+        'You are a helpful assistant. Say OK if you understand and stop.',
     },
-    { role: "system", content: "OK" },
-    { role: "user", content: "Explain the importance of low latency LLMs." },
+    { role: 'system', content: 'OK' },
+    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
   ],
 };
 
@@ -341,22 +341,22 @@ The Reka AI interface allows you to send messages to the Reka AI REST API.
 const reka = new LLMInterface.reka(process.env.REKA_API_KEY);
 
 const message = {
-  model: "reka-core",
+  model: 'reka-core',
   messages: [
     {
-      role: "user",
+      role: 'user',
       content:
-        "You are a helpful assistant. Say OK if you understand and stop.",
+        'You are a helpful assistant. Say OK if you understand and stop.',
     },
-    { role: "system", content: "OK" },
-    { role: "user", content: "Explain the importance of low latency LLMs." },
+    { role: 'system', content: 'OK' },
+    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
   ],
 };
 
 reka
   .sendMessage(message, {})
-  .then((response) => console.log("Response:", response))
-  .catch((error) => console.error("Error:", error));
+  .then((response) => console.log('Response:', response))
+  .catch((error) => console.error('Error:', error));
 ```
 
 ### LLaMA.cpp Interface
@@ -369,9 +369,9 @@ The LLaMA.cpp interface allows you to send messages to the LLaMA.cpp API; this i
 const llamacpp = new LLMInterface.llamacpp(process.env.LLAMACPP_URL);
 
 const message = {
-  model: "some-llamacpp-model",
+  model: 'some-llamacpp-model',
   messages: [
-    { role: "user", content: "Explain the importance of low latency LLMs." },
+    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
   ],
 };
 
@@ -398,7 +398,7 @@ This simplified example uses a string based prompt with the default OpenAI model
 ```javascript
 const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY);
 
-const message = "Explain the importance of low latency LLMs.";
+const message = 'Explain the importance of low latency LLMs.';
 
 openai
   .sendMessage(message)
@@ -424,22 +424,22 @@ Some interfaces allows you request the response back in JSON, currently **OpenAI
 const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY);
 
 const message = {
-  model: "gpt-3.5-turbo",
+  model: 'gpt-3.5-turbo',
   messages: [
     {
-      role: "system",
-      content: "You are a helpful assistant.",
+      role: 'system',
+      content: 'You are a helpful assistant.',
     },
     {
-      role: "user",
+      role: 'user',
       content:
-        "Explain the importance of low latency LLMs. Return the results as a JSON object. Follow this format: [{reason, reasonDescription}].",
+        'Explain the importance of low latency LLMs. Return the results as a JSON object. Follow this format: [{reason, reasonDescription}].',
     },
   ],
 };
 
 openai
-  .sendMessage(message, { max_tokens: 150, response_format: "json_object" })
+  .sendMessage(message, { max_tokens: 150, response_format: 'json_object' })
   .then((response) => {
     console.log(response);
   })
@@ -458,10 +458,10 @@ To reduce operational costs and improve performance you can optionally specify a
 const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY);
 
 const message = {
-  model: "gpt-3.5-turbo",
+  model: 'gpt-3.5-turbo',
   messages: [
-    { role: "system", content: "You are a helpful assistant." },
-    { role: "user", content: "Explain the importance of low latency LLMs." },
+    { role: 'system', content: 'You are a helpful assistant.' },
+    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
   ],
 };
 
@@ -485,10 +485,10 @@ You can gracefully retry your requests. In this example we use OpenAI and up to
 const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY);
 
 const message = {
-  model: "gpt-3.5-turbo",
+  model: 'gpt-3.5-turbo',
   messages: [
-    { role: "system", content: "You are a helpful assistant." },
-    { role: "user", content: "Explain the importance of low latency LLMs." },
+    { role: 'system', content: 'You are a helpful assistant.' },
+    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
   ],
 };
 
````
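
Both the caching and retry hunks above are truncated before the `sendMessage` call that actually consumes `interfaceOptions`. A sketch of what the elided call presumably looks like, using the two option names documented in docs/API.md in this same commit; the values are illustrative, and the retry count is a guess since the sentence above is cut off:

```javascript
// Sketch of the call the truncated hunks lead up to; the option names come
// from docs/API.md in this commit, the values are illustrative.
openai
  .sendMessage(message, { max_tokens: 150 }, {
    cacheTimeoutSeconds: 86400, // cache responses for one day
    retryAttempts: 3, // retry failed queries with progressive delays
  })
  .then((response) => console.log(response))
  .catch((error) => console.error(error));
```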

eslint.config.mjs (+8)

```diff
@@ -0,0 +1,8 @@
+import globals from 'globals';
+import pluginJs from '@eslint/js';
+
+export default [
+  { files: ['**/*.js'], languageOptions: { sourceType: 'commonjs' } },
+  { languageOptions: { globals: globals.browser } },
+  pluginJs.configs.recommended,
+];
```
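
Note that this commit now carries two ESLint configurations: the legacy `.eslintrc.json` above (declaring a Node/CommonJS environment) and this flat-config `eslint.config.mjs` (which instead wires in `globals.browser`). On ESLint v9+ the flat config takes precedence when both are present, so the `node` vs `browser` globals mismatch between the two files looks worth reconciling.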

jest-serializer.js (-29)

This file was deleted.
