Commit 5427571

committed Jul 12, 2024

feat: interfaces

1 parent: e971cf4

11 files changed (+169 -61 lines)

docs/index.md (+33 -7)

@@ -10,19 +10,26 @@ Welcome to the documentation for the LLM Interface package. This documentation p
 - [Usage](#usage)
 - [LLMInterface](#llminterface)
 - [getAllModelNames()](#getallmodelnames)
+- [getEmbeddingsModelAlias(interfaceName, alias)](#getembeddingsmodelaliasinterfacename-alias)
 - [getInterfaceConfigValue(interfaceName, key)](#getInterfaceConfigValueinterfacename-key)
+- [getModelAlias(interfaceName, alias)](#getmodelaliasinterfacename-alias)
 - [setApiKey(interfaceNames, apiKey)](#setapikeyinterfacenames-apikey)
-- [setModelAlias(interfaceName, alias, name, tokens = null)](#setmodelaliasinterfacename-alias-name-tokens--null)
+- [setEmbeddingsModelAlias(interfaceName, alias, name)](#setembeddingsmodelaliasinterfacename-alias-name)
+- [setModelAlias(interfaceName, alias, name)](#setmodelaliasinterfacename-alias-name)
 - [configureCache(cacheConfig = {})](#configurecachecacheconfig--)
+- [flushCache()](#flushcache)
 - [sendMessage(interfaceName, message, options = {}, interfaceOptions = {})](#sendmessageinterfacename-message-options---interfaceoptions--)
 - [streamMessage(interfaceName, message, options = {})](#streammessageinterfacename-message-options--)
-- [Supported Interface Names](#supported-interface-names)
+- [embeddings(interfaceName, embeddingString, options = {}, interfaceOptions = {})](#embeddingsinterfacename-embeddingstring-options---interfaceoptions--)
+- [chat.completions.create(interfaceName, message, options = {}, interfaceOptions = {})](#chatcompletionscreateinterfacename-message-options---interfaceoptions--)
 - [LLMInterfaceSendMessage](#llminterfacesendmessage)
 - [LLMInterfaceSendMessage(interfaceName, apiKey, message, options = {}, interfaceOptions = {})](#llminterfacesendmessageinterfacename-apikey-message-options---interfaceoptions--)
 - [LLMInterfaceStreamMessage](#llminterfacestreammessage)
 - [LLMInterfaceStreamMessage(interfaceName, apiKey, message, options = {})](#llminterfacestreammessageinterfacename-apikey-message-options--)
 - [Message Object](#message-object)
 - [Structure of a Message Object](#structure-of-a-message-object)
+- [Options Object](#options-object)
+- [Structure of an Options Object](#structure-of-an-options-object)
 - [Interface Options Object](#interface-options-object)
 - [Structure of an Interface Options Object](#structure-of-an-interface-options-object)
 - [Caching](#caching)
@@ -40,7 +47,9 @@ Welcome to the documentation for the LLM Interface package. This documentation p
 - [MongoDB](#mongodb)
 - [Memory Cache](#memory-cache)
 - [Example Usage](#example-usage-4)
-- [Models](#models)
+- [Support](#support)
+- [Model Aliases](#model-aliases)
+- [Embeddings Model Aliases](#embedding-model-aliases)
 - [Jailbreaking](#jailbreaking)
 - [Glossary](#glossary)
 - [Examples](#examples)
@@ -53,7 +62,7 @@ The LLMInterface npm module provides a unified interface for interacting with va
 
 ## API Keys
 
-To interact with different LLM providers, you will need API keys. Refer to [API Keys](api-key.md) for detailed instructions on obtaining and configuring API keys for supported providers.
+To interact with different LLM providers, you will need API keys. Refer to [API Keys](api-keys.md) for detailed instructions on obtaining and configuring API keys for supported providers.
 
 ## Usage
 
@@ -62,26 +71,39 @@ The [Usage](usage.md) section contains detailed documentation on how to use the
 ### LLMInterface
 
 - [getAllModelNames()](usage.md#getallmodelnames)
+- [getEmbeddingsModelAlias(interfaceName, alias)](usage.md#getembeddingsmodelaliasinterfacename-alias)
 - [getInterfaceConfigValue(interfaceName, key)](usage.md#getInterfaceConfigValueinterfacename-key)
+- [getModelAlias(interfaceName, alias)](usage.md#getmodelaliasinterfacename-alias)
 - [setApiKey(interfaceNames, apiKey)](usage.md#setapikeyinterfacenames-apikey)
-- [setModelAlias(interfaceName, alias, name, tokens = null)](usage.md#setmodelaliasinterfacename-alias-name-tokens--null)
+- [setEmbeddingsModelAlias(interfaceName, alias, name)](usage.md#setembeddingsmodelaliasinterfacename-alias-name)
+- [setModelAlias(interfaceName, alias, name)](usage.md#setmodelaliasinterfacename-alias-name)
 - [configureCache(cacheConfig = {})](usage.md#configurecachecacheconfig--)
+- [flushCache()](usage.md#flushcache)
 - [sendMessage(interfaceName, message, options = {}, interfaceOptions = {})](usage.md#sendmessageinterfacename-message-options---interfaceoptions--)
 - [streamMessage(interfaceName, message, options = {})](usage.md#streammessageinterfacename-message-options--)
-- [Supported Interface Names](usage.md#supported-interface-names)
+- [embeddings(interfaceName, embeddingString, options = {}, interfaceOptions = {})](usage.md#embeddingsinterfacename-embeddingstring-options---interfaceoptions--)
+- [chat.completions.create(interfaceName, message, options = {}, interfaceOptions = {})](usage.md#chatcompletionscreateinterfacename-message-options---interfaceoptions--)
 
 ### LLMInterfaceSendMessage
 
 - [LLMInterfaceSendMessage(interfaceName, apiKey, message, options = {}, interfaceOptions = {})](usage.md#llminterfacesendmessageinterfacename-apikey-message-options---interfaceoptions--)
 
+_This is a legacy function and will be deprecated._
+
 ### LLMInterfaceStreamMessage
 
 - [LLMInterfaceStreamMessage(interfaceName, apiKey, message, options = {})](usage.md#llminterfacestreammessageinterfacename-apikey-message-options--)
 
+_This is a legacy function and will be deprecated._
+
 ### Message Object
 
 - [Structure of a Message Object](usage.md#structure-of-a-message-object)
 
+### Options Object
+
+- [Structure of an Options Object](usage.md#structure-of-an-options-object)
+
 ### Interface Options Object
 
 - [Structure of an Interface Options Object](usage.md#structure-of-an-interface-options-object)
@@ -103,7 +125,11 @@ The [Usage](usage.md) section contains detailed documentation on how to use the
 - [Memory Cache](usage.md#memory-cache)
 - [Example Usage](usage.md#example-usage-4)
 
-## Models
+## Support
+
+A complete list of [supported providers](support.md) is available.
+
+## Model Aliases
 
 The LLMInterface supports multiple model aliases for different providers. See [Models](models.md) for a list of model aliases and their descriptions.
 

docs/support.md (+44)

@@ -0,0 +1,44 @@
+# Supported Providers
+
+The following providers are supported by LLMInterface.
+
+| | Provider Name | Interface Name | .sendMessage | .embeddings |
+| --- | --- | --- | --- | --- |
+| ![ai21](https://samestrin.github.io/media/llm-interface/icons/ai21.png) | [AI21 Studio](providers/ai21.md) | `ai21` | ✓ | ✓ |
+| | [AiLAYER](providers/ailayer.md) | `ailayer` | ✓ | |
+| ![aimlapi](https://samestrin.github.io/media/llm-interface/icons/aimlapi.png) | [AIMLAPI](providers/aimlapi.md) | `aimlapi` | ✓ | ✓ |
+| ![anthropic](https://samestrin.github.io/media/llm-interface/icons/anthropic.png) | [Anthropic](providers/anthropic.md) | `anthropic` | ✓ | |
+| ![anyscale](https://samestrin.github.io/media/llm-interface/icons/anyscale.png) | [Anyscale](providers/anyscale.md) | `anyscale` | ✓ | ✓ |
+| ![cloudflareai](https://samestrin.github.io/media/llm-interface/icons/cloudflareai.png) | [Cloudflare AI](providers/cloudflareai.md) | `cloudflareai` | ✓ | ✓ |
+| ![cohere](https://samestrin.github.io/media/llm-interface/icons/cohere.png) | [Cohere](providers/cohere.md) | `cohere` | ✓ | ✓ |
+| ![corcel](https://samestrin.github.io/media/llm-interface/icons/corcel.png) | [Corcel](providers/corcel.md) | `corcel` | ✓ | |
+| ![deepinfra](https://samestrin.github.io/media/llm-interface/icons/deepinfra.png) | [DeepInfra](providers/deepinfra.md) | `deepinfra` | ✓ | ✓ |
+| ![deepseek](https://samestrin.github.io/media/llm-interface/icons/deepseek.png) | [DeepSeek](providers/deepseek.md) | `deepseek` | ✓ | |
+| | [Fireworks AI](providers/fireworksai.md) | `fireworksai` | ✓ | ✓ |
+| ![forefront](https://samestrin.github.io/media/llm-interface/icons/forefront.png) | [Forefront AI](providers/forefront.md) | `forefront` | ✓ | |
+| | [FriendliAI](providers/friendliai.md) | `friendliai` | ✓ | |
+| | [Google Gemini](providers/gemini.md) | `gemini` | ✓ | ✓ |
+| ![gooseai](https://samestrin.github.io/media/llm-interface/icons/gooseai.png) | [GooseAI](providers/gooseai.md) | `gooseai` | ✓ | |
+| | [Groq](providers/groq.md) | `groq` | ✓ | |
+| | [Hugging Face Inference](providers/huggingface.md) | `huggingface` | ✓ | ✓ |
+| | [HyperBee AI](providers/hyperbeeai.md) | `hyperbeeai` | ✓ | |
+| ![lamini](https://samestrin.github.io/media/llm-interface/icons/lamini.png) | [Lamini](providers/lamini.md) | `lamini` | ✓ | ✓ |
+| | [LLaMA.CPP](providers/llamacpp.md) | `llamacpp` | ✓ | ✓ |
+| ![mistralai](https://samestrin.github.io/media/llm-interface/icons/mistralai.png) | [Mistral AI](providers/mistralai.md) | `mistralai` | ✓ | ✓ |
+| ![monsterapi](https://samestrin.github.io/media/llm-interface/icons/monsterapi.png) | [Monster API](providers/monsterapi.md) | `monsterapi` | ✓ | |
+| ![neetsai](https://samestrin.github.io/media/llm-interface/icons/neetsai.png) | [Neets.ai](providers/neetsai.md) | `neetsai` | ✓ | |
+| | [Novita AI](providers/novitaai.md) | `novitaai` | ✓ | |
+| | [NVIDIA AI](providers/nvidia.md) | `nvidia` | ✓ | |
+| | [OctoAI](providers/octoai.md) | `octoai` | ✓ | |
+| | [Ollama](providers/ollama.md) | `ollama` | ✓ | ✓ |
+| | [OpenAI](providers/openai.md) | `openai` | ✓ | ✓ |
+| ![perplexity](https://samestrin.github.io/media/llm-interface/icons/perplexity.png) | [Perplexity AI](providers/perplexity.md) | `perplexity` | ✓ | |
+| ![rekaai](https://samestrin.github.io/media/llm-interface/icons/rekaai.png) | [Reka AI](providers/rekaai.md) | `rekaai` | ✓ | |
+| ![replicate](https://samestrin.github.io/media/llm-interface/icons/replicate.png) | [Replicate](providers/replicate.md) | `replicate` | ✓ | |
+| ![shuttleai](https://samestrin.github.io/media/llm-interface/icons/shuttleai.png) | [Shuttle AI](providers/shuttleai.md) | `shuttleai` | ✓ | |
+| | [TheB.ai](providers/thebai.md) | `thebai` | ✓ | |
+| ![togetherai](https://samestrin.github.io/media/llm-interface/icons/togetherai.png) | [Together AI](providers/togetherai.md) | `togetherai` | ✓ | ✓ |
+| | [Voyage AI](providers/voyage.md) | `voyage` | | ✓ |
+| | [Watsonx AI](providers/watsonxai.md) | `watsonxai` | ✓ | ✓ |
+| ![writer](https://samestrin.github.io/media/llm-interface/icons/writer.png) | [Writer](providers/writer.md) | `writer` | ✓ | |
+| | [Zhipu AI](providers/zhipuai.md) | `zhipuai` | ✓ | |
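The Interface Name column is the string passed as `interfaceName`. As a rough illustration of how the capability columns can be used programmatically (a sketch in plain JavaScript; `supports` is a hypothetical helper, not an LLMInterface function, and only three table rows are shown):

```javascript
// Hypothetical sketch: a few rows of the provider table as a capability map.
// The interface names and flags below come from the table above.
const providers = {
  openai:    { sendMessage: true,  embeddings: true  },
  anthropic: { sendMessage: true,  embeddings: false },
  voyage:    { sendMessage: false, embeddings: true  },
};

// Returns true when the given interface supports the requested capability.
function supports(interfaceName, capability) {
  const entry = providers[interfaceName];
  return Boolean(entry && entry[capability]);
}

console.log(supports('voyage', 'embeddings'));    // true
console.log(supports('anthropic', 'embeddings')); // false
```

Checking capabilities this way before calling a provider avoids sending an embeddings request to an interface that only supports chat.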

docs/usage.md (+34 -8)

@@ -4,15 +4,17 @@
 
 - [LLMInterface](#llminterface)
 - [getAllModelNames()](#getallmodelnames)
+- [getEmbeddingsModelAlias(interfaceName, alias)](#getembeddingsmodelaliasinterfacename-alias)
 - [getInterfaceConfigValue(interfaceName, key)](#getInterfaceConfigValueinterfacename-key)
+- [getModelAlias(interfaceName, alias)](#getmodelaliasinterfacename-alias)
 - [setApiKey(interfaceNames, apiKey)](#setapikeyinterfacenames-apikey)
 - [setEmbeddingsModelAlias(interfaceName, alias, name)](#setembeddingsmodelaliasinterfacename-alias-name)
 - [setModelAlias(interfaceName, alias, name)](#setmodelaliasinterfacename-alias-name)
 - [configureCache(cacheConfig = {})](#configurecachecacheconfig--)
 - [flushCache()](#flushcache)
 - [sendMessage(interfaceName, message, options = {}, interfaceOptions = {})](#sendmessageinterfacename-message-options---interfaceoptions--)
 - [streamMessage(interfaceName, message, options = {})](#streammessageinterfacename-message-options--)
-- [embedding(interfaceName, embeddingString, options = {}, interfaceOptions = {})](#embeddinginterfacename-embeddingstring-options---interfaceoptions--)
+- [embeddings(interfaceName, embeddingString, options = {}, interfaceOptions = {})](#embeddingsinterfacename-embeddingstring-options---interfaceoptions--)
 - [chat.completions.create(interfaceName, message, options = {}, interfaceOptions = {})](#chatcompletionscreateinterfacename-message-options---interfaceoptions--)
 - [Supported Interface Names](#supported-interface-names)
 - [LLMInterfaceSendMessage](#llminterfacesendmessage)
@@ -65,6 +67,15 @@ const modelNames = LLMInterface.getAllModelNames();
 console.log(modelNames);
 ```
 
+### getEmbeddingsModelAlias(interfaceName, alias)
+
+Retrieves an embeddings model name for a specific interfaceName alias.
+
+```javascript
+const model = LLMInterface.getEmbeddingsModelAlias('openai', 'default');
+console.log(model);
+```
+
 ### getInterfaceConfigValue(interfaceName, key)
 
 Retrieves a specific configuration value for a given model.
@@ -77,6 +88,15 @@ const apiKey = LLMInterface.getInterfaceConfigValue('openai', 'apiKey');
 console.log(apiKey);
 ```
 
+### getModelAlias(interfaceName, alias)
+
+Retrieves a model name for a specific interfaceName alias.
+
+```javascript
+const model = LLMInterface.getModelAlias('openai', 'default');
+console.log(model);
+```
+
 ### setApiKey(interfaceNames, apiKey)
 
 Sets the API key for one or multiple interfaces.
@@ -185,7 +205,7 @@
 
 _processStream(stream) is not part of LLMInterface. It is defined in the [streaming mode example](/examples/misc/streaming-mode.js)._
 
-### embedding(interfaceName, embeddingString, options = {}, interfaceOptions = {})
+### embeddings(interfaceName, embeddingString, options = {}, interfaceOptions = {})
 
 Generates embeddings using a specified LLM interface.
 
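An embeddings call resolves to a numeric vector. As a generic illustration of what callers commonly do with such vectors (this helper is ordinary JavaScript, not part of LLMInterface), cosine similarity compares two of them:

```javascript
// Cosine similarity between two embedding vectors of equal length.
// Values near 1 mean the embedded texts are semantically close.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical vectors score 1; orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```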
@@ -245,7 +265,6 @@ The following are the interfaceNames for each supported LLM provider (in alphabe
 | | `hyperbeeai` | [HyperBee AI](providers/hyperbeeai.md) | ✓ | |
 | ![lamini](https://samestrin.github.io/media/llm-interface/icons/lamini.png) | `lamini` | [Lamini](providers/lamini.md) | ✓ | ✓ |
 | | `llamacpp` | [LLaMA.CPP](providers/llamacpp.md) | ✓ | ✓ |
-| | `azureai` | [Microsoft Azure AI](providers/azureai.md) | ✓ | ✓ |
 | ![mistralai](https://samestrin.github.io/media/llm-interface/icons/mistralai.png) | `mistralai` | [Mistral AI](providers/mistralai.md) | ✓ | ✓ |
 | ![monsterapi](https://samestrin.github.io/media/llm-interface/icons/monsterapi.png) | `monsterapi` | [Monster API](providers/monsterapi.md) | ✓ | |
 | ![neetsai](https://samestrin.github.io/media/llm-interface/icons/neetsai.png) | `neetsai` | [Neets.ai](providers/neetsai.md) | ✓ | |
@@ -258,7 +277,6 @@ The following are the interfaceNames for each supported LLM provider (in alphabe
 | ![rekaai](https://samestrin.github.io/media/llm-interface/icons/rekaai.png) | `rekaai` | [Reka AI](providers/rekaai.md) | ✓ | |
 | ![replicate](https://samestrin.github.io/media/llm-interface/icons/replicate.png) | `replicate` | [Replicate](providers/replicate.md) | ✓ | |
 | ![shuttleai](https://samestrin.github.io/media/llm-interface/icons/shuttleai.png) | `shuttleai` | [Shuttle AI](providers/shuttleai.md) | ✓ | |
-| | `siliconflow` | [SiliconFlow](providers/siliconflow.md) | ✓ | ✓ |
 | | `thebai` | [TheB.ai](providers/thebai.md) | ✓ | |
 | ![togetherai](https://samestrin.github.io/media/llm-interface/icons/togetherai.png) | `togetherai` | [Together AI](providers/togetherai.md) | ✓ | ✓ |
 | | `voyage` | [Voyage AI](providers/voyage.md) | | ✓ |
@@ -306,6 +324,8 @@
 }
 ```
 
+_This is a legacy function and will be deprecated._
+
 ## LLMInterfaceStreamMessage
 
 To use the `LLMInterfaceStreamMessage` function, first import `LLMInterfaceStreamMessage`. You can do this using either the CommonJS `require` syntax:
@@ -329,8 +349,8 @@ Streams a message using the specified LLM interface.
 - `message` (String|Object): The message to send.
 - `options` (Object, optional): Additional options for the message.
 
-````javascript
-try {}
+```javascript
+try {
   const stream = await LLMInterfaceStreamMessage('openai', 'your-api-key', 'Hello, world!', { max_tokens: 100 });
   const result = await processStream(stream.data);
 } catch (error) {
@@ -339,6 +359,8 @@
 ```
 _processStream(stream) is defined in the [streaming mode example](/examples/misc/streaming-mode.js)._
 
+_This is a legacy function and will be deprecated._
+
 ## Message Object
 
 The message object is a critical component when interacting with the various LLM APIs through the LLMInterface npm module. It contains the data that will be sent to the LLM for processing and allows for complex conversations. Below is a detailed explanation of the structure of a valid message object.
@@ -350,11 +372,15 @@ A valid message object typically includes the following properties:
 - `model`: A string specifying the model to use for the request (optional).
 - `messages`: An array of message objects that form the conversation history.
 
-Different LLMs may have their own message object rules. For example, both Anthropic and Gemini always expect the initial message to have the `user` role. Please be aware of this and structure your message objects accordingly. _LLMInterface will attempt to auto-correct invalid objects where possible._
+Different LLMs may have their own message object rules. For example, both Anthropic and Gemini always expect the initial message to have the `user` role. Please be aware of this and structure your message objects accordingly.
+
+_LLMInterface will attempt to auto-correct invalid objects where possible._
 
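The kind of auto-correction described above can be pictured with a small sketch (illustrative only; `ensureUserFirst` is a hypothetical helper, and the library's actual correction logic may differ):

```javascript
// Illustrative only: ensure a conversation starts with a `user` message,
// since providers like Anthropic and Gemini expect the first turn to be `user`.
function ensureUserFirst(messages) {
  if (messages.length === 0 || messages[0].role === 'user') return messages;
  // Prepend a minimal user turn rather than rewriting the existing message.
  return [{ role: 'user', content: 'Hello.' }, ...messages];
}

const fixed = ensureUserFirst([{ role: 'assistant', content: 'Hi there!' }]);
console.log(fixed[0].role); // 'user'
```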
 ## Options Object
 
-The options object is an optional component that lets you send LLM provider specific parameters. While parameter names are fairly consistent, they can vary slightly, so it is important to pay attention. However, `max_token` is a special value, and is automatically normalized.
+The options object is an optional component that lets you send LLM provider-specific parameters. While parameter names are fairly consistent, they can vary slightly, so it is important to pay attention.
+
+However, `max_tokens` is a special value: it is automatically normalized and defaults to `1024`.
 
 ### Structure of an Options Object
 
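The `max_tokens` normalization described above might look roughly like this (an illustrative sketch; the accepted alias names and exact behavior are assumptions, not LLMInterface's actual implementation):

```javascript
// Illustrative sketch of max_tokens normalization with a 1024 default.
// The alias list is hypothetical; the library's real mapping may differ.
function normalizeOptions(options = {}) {
  const aliases = ['max_tokens', 'maxTokens', 'max_new_tokens'];
  const normalized = { ...options };
  let value;
  for (const key of aliases) {
    if (normalized[key] !== undefined) {
      value = normalized[key];
      delete normalized[key]; // strip provider-specific spellings
    }
  }
  normalized.max_tokens = value !== undefined ? value : 1024; // documented default
  return normalized;
}

console.log(normalizeOptions({ maxTokens: 100 }).max_tokens); // 100
console.log(normalizeOptions({}).max_tokens);                 // 1024
```

Other option names pass through untouched, so provider-specific parameters like `temperature` still reach the underlying API.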
examples/basic-usage/chat.js (+14 -18)

@@ -32,25 +32,21 @@ To run this example, you first need to install the required modules by executing
  * Main exampleUsage() function.
  */
 async function exampleUsage() {
-  try {
-    console.time('Timer');
-    // OpenAI chat.completion structure
-    const openaiCompatibleStructure = {
-      "model": "gemma-7b-it",
-      "messages":
-      [
-        { "role": "system", "content": "You are a helpful assistant." },
-        { "role": "user", "content": "Say hello with a polite greeting!" },
-        { "role": "system", "content": "Hello there! It's an absolute pleasure to make your acquaintance. How may I have the honor of assisting you today?" },
-        { "role": "user", "content": "I need help understanding low latency LLMs!" }
-      ],
-      "max_tokens": 100
-    }
-
-    // Concatenate messages into a single string
-    const concatenatedMessages = openaiCompatibleStructure.messages.map(message => `${message.role}: ${message.content}`).join('\n');
-
 
+  console.time('Timer');
+  // OpenAI chat.completion structure
+  const openaiCompatibleStructure = {
+    "model": "gemma-7b-it",
+    "messages":
+    [
+      { "role": "system", "content": "You are a helpful assistant." },
+      { "role": "user", "content": "Say hello with a polite greeting!" },
+      { "role": "system", "content": "Hello there! It's an absolute pleasure to make your acquaintance. How may I have the honor of assisting you today?" },
+      { "role": "user", "content": "I need help understanding low latency LLMs!" }
+    ],
+    "max_tokens": 100
+  }
+  try {
     prettyHeader(
       'Chat Example',
       description,
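The `concatenatedMessages` expression shown in this diff flattens an OpenAI-style `messages` array into a single prompt string; run standalone, the same step behaves like this:

```javascript
// Flatten an OpenAI-style messages array into "role: content" lines,
// mirroring the concatenation step in the chat.js example above.
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'I need help understanding low latency LLMs!' },
];

const concatenatedMessages = messages
  .map((message) => `${message.role}: ${message.content}`)
  .join('\n');

console.log(concatenatedMessages);
// system: You are a helpful assistant.
// user: I need help understanding low latency LLMs!
```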
(The remainder of the diff failed to load.)
