[](https://github.com/samestrin/llm-interface/stargazers) [](https://github.com/samestrin/llm-interface/network/members) [](https://github.com/samestrin/llm-interface/watchers)

[](https://opensource.org/licenses/MIT) [](https://nodejs.org/)

## Introduction

`llm-interface` is a wrapper designed to interact with multiple Large Language Model (LLM) APIs. `llm-interface` simplifies integrating various LLM providers into your applications, including **OpenAI, AI21 Studio, AIML API, Anthropic, Cloudflare AI, Cohere, DeepInfra, Fireworks AI, Forefront, Friendli AI, Google Gemini, Goose AI, Groq, Hugging Face, Mistral AI, Monster API, Octo AI, Ollama, Perplexity, Reka AI, Replicate, watsonx.ai, Writer, and LLaMA.cpp**. It is available as an [NPM package](https://www.npmjs.com/package/llm-interface).

The goal of `llm-interface` is to provide a single, simple, unified interface for sending messages and receiving responses from different LLM services, making it easier for developers to work with multiple LLMs without worrying about the specific intricacies of each API.
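Basic usage can be sketched as follows. The method names (`LLMInterface.setApiKey`, `LLMInterface.sendMessage`) come from this README's changelog, but the exact call signatures and option names shown here are assumptions; consult the package documentation before relying on them.

```javascript
// Usage sketch for llm-interface. The method names come from this README's
// changelog; the signatures and the shape of `options` are assumptions,
// not confirmed API -- check the package docs and /examples.

const provider = 'openai';
const prompt = 'Explain the importance of low latency LLMs.';
const options = { max_tokens: 150 }; // illustrative option name

async function main() {
  if (!process.env.OPENAI_API_KEY) {
    console.log('Set OPENAI_API_KEY to run the live call.');
    return;
  }
  let LLMInterface;
  try {
    // Requires: npm install llm-interface
    ({ LLMInterface } = require('llm-interface'));
  } catch {
    console.log('llm-interface is not installed; skipping the live call.');
    return;
  }
  // Assumed: setApiKey accepts a { provider: key } map.
  LLMInterface.setApiKey({ [provider]: process.env.OPENAI_API_KEY });
  const response = await LLMInterface.sendMessage(provider, prompt, options);
  console.log(response);
}

main();
```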

## Features

- **Unified Interface**: `LLMInterfaceSendMessage` is a single, consistent interface to interact with **24 different LLM APIs** (22 hosted LLM providers and 2 local LLM providers).
- **Dynamic Module Loading**: Automatically loads and manages LLM interfaces only when they are invoked, minimizing resource usage.
- **Error Handling**: Robust error handling mechanisms to ensure reliable API interactions.
- **Extensible**: Easily extendable to support additional LLM providers as needed.
- **Response Caching**: Efficiently caches LLM responses to reduce costs and enhance performance.
- **Graceful Retries**: Automatically retries failed prompts with increasing delays to ensure successful responses.
- **JSON Output**: Simple-to-use native JSON output for various LLM providers, including OpenAI, Fireworks AI, Google Gemini, and more.
- **JSON Repair**: Detects and repairs invalid JSON responses.

## Updates

**v2.0.9**

- **New LLM Providers**: Added support for AIML API (_currently not respecting option values_), DeepSeek, Forefront, Ollama, Replicate, and Writer.
- **New LLMInterface Methods**: `LLMInterface.setApiKey`, `LLMInterface.sendMessage`, and `LLMInterface.streamMessage`.
- **Streaming**: Streaming support available for AI21 Studio, AIML API, DeepInfra, DeepSeek, Fireworks AI, FriendliAI, Groq, Hugging Face, LLaMA.cpp, Mistral AI, Monster API, NVIDIA, Octo AI, Ollama, OpenAI, Perplexity, Together AI, and Writer.
- **New Interface Function**: `LLMInterfaceStreamMessage`
- **Test Coverage**: 100% test coverage for all interface classes.
- **Examples**: New usage [examples](/examples).
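The streaming support above can be sketched as follows. `LLMInterface.streamMessage` is named in this changelog, but how the returned stream delivers chunks is an assumption here (an async-iterable is assumed); see the package's [examples](/examples) for the authoritative pattern.

```javascript
// Streaming sketch. LLMInterface.streamMessage is named in this README's
// changelog; treating its result as an async-iterable of text chunks is
// an assumption -- the real chunk format may differ (see /examples).

const provider = 'groq';
const prompt = 'Write a haiku about streaming tokens.';

async function streamDemo() {
  if (!process.env.GROQ_API_KEY) {
    console.log('Set GROQ_API_KEY to run the live streaming call.');
    return;
  }
  let LLMInterface;
  try {
    ({ LLMInterface } = require('llm-interface')); // npm install llm-interface
  } catch {
    console.log('llm-interface is not installed; skipping.');
    return;
  }
  LLMInterface.setApiKey({ [provider]: process.env.GROQ_API_KEY });
  const stream = await LLMInterface.streamMessage(provider, prompt);
  for await (const chunk of stream) {
    process.stdout.write(String(chunk)); // print tokens as they arrive
  }
}

streamDemo();
```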

**v2.0.8**

- **Removing Dependencies**: The removal of the OpenAI and Groq SDKs results in a smaller bundle, faster installs, and reduced complexity.

## Dependencies

The project includes tests for each LLM handler. To run the tests, use the following command:

```bash
npm test
```

#### Current Test Results

```bash
Test Suites: 1 skipped, 65 passed, 65 of 66 total
Tests:       2 skipped, 291 passed, 293 total
Snapshots:   0 total
Time:        103.293 s, estimated 121 s
```

_Note: Currently skipping NVIDIA test cases due to API key limits._