
Commit ece694a

Merge pull request #8 from samestrin/2.0.10

2.0.10

2 parents faeabd4 + 3690034

293 files changed: +17,243 -9,723 lines changed


.eslintrc.json (-11)

This file was deleted.

.gitignore (+6)

````diff
@@ -134,3 +134,9 @@ dist
 
 .DS_STORE
 cache/
+build/
+.eslint*
+eslint*
+jest*
+babel.config.js
+.prettier*
````

.npmignore (+146 -3)

````diff
@@ -1,3 +1,146 @@
-node_modules
-test
-.env
+# Logs
+logs
+*.log
+npm-debug.log*
+yarn-debug.log*
+yarn-error.log*
+lerna-debug.log*
+.pnpm-debug.log*
+
+# Diagnostic reports (https://nodejs.org/api/report.html)
+report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
+
+# Runtime data
+pids
+*.pid
+*.seed
+*.pid.lock
+
+# Directory for instrumented libs generated by jscoverage/JSCover
+lib-cov
+
+# Coverage directory used by tools like istanbul
+coverage
+*.lcov
+
+# nyc test coverage
+.nyc_output
+
+# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
+.grunt
+
+# Bower dependency directory (https://bower.io/)
+bower_components
+
+# node-waf configuration
+.lock-wscript
+
+# Compiled binary addons (https://nodejs.org/api/addons.html)
+build/Release
+
+# Dependency directories
+node_modules/
+jspm_packages/
+
+# Snowpack dependency directory (https://snowpack.dev/)
+web_modules/
+
+# TypeScript cache
+*.tsbuildinfo
+
+# Optional npm cache directory
+.npm
+
+# Optional eslint cache
+.eslintcache
+
+# Optional stylelint cache
+.stylelintcache
+
+# Microbundle cache
+.rpt2_cache/
+.rts2_cache_cjs/
+.rts2_cache_es/
+.rts2_cache_umd/
+
+# Optional REPL history
+.node_repl_history
+
+# Output of 'npm pack'
+*.tgz
+
+# Yarn Integrity file
+.yarn-integrity
+
+# dotenv environment variable files
+.env
+.env.development.local
+.env.test.local
+.env.production.local
+.env.local
+
+# parcel-bundler cache (https://parceljs.org/)
+.cache
+.parcel-cache
+
+# Next.js build output
+.next
+out
+
+# Nuxt.js build / generate output
+.nuxt
+dist
+
+# Gatsby files
+.cache/
+# Comment in the public line in if your project uses Gatsby and not Next.js
+# https://nextjs.org/blog/next-9-1#public-directory-support
+# public
+
+# vuepress build output
+.vuepress/dist
+
+# vuepress v2.x temp and cache directory
+.temp
+.cache
+
+# Docusaurus cache and generated files
+.docusaurus
+
+# Serverless directories
+.serverless/
+
+# FuseBox cache
+.fusebox/
+
+# DynamoDB Local files
+.dynamodb/
+
+# TernJS port file
+.tern-port
+
+# Stores VSCode versions used for testing VSCode extensions
+.vscode-test
+
+# yarn v2
+.yarn/cache
+.yarn/unplugged
+.yarn/build-state.yml
+.yarn/install-state.gz
+.pnp.*
+
+/src/cache
+.prettier*
+
+.DS_STORE
+cache/
+build/
+.eslint*
+eslint*
+jest*
+babel.config.js
+.prettier*
+
+examples/
+docs/
+test/
````

.prettierrc (-4)

This file was deleted.

README.md (+68 -28)

````diff
@@ -2,17 +2,28 @@
 
 [![Star on GitHub](https://img.shields.io/github/stars/samestrin/llm-interface?style=social)](https://github.com/samestrin/llm-interface/stargazers) [![Fork on GitHub](https://img.shields.io/github/forks/samestrin/llm-interface?style=social)](https://github.com/samestrin/llm-interface/network/members) [![Watch on GitHub](https://img.shields.io/github/watchers/samestrin/llm-interface?style=social)](https://github.com/samestrin/llm-interface/watchers)
 
-![Version 2.0.9](https://img.shields.io/badge/Version-2.0.9-blue) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Built with Node.js](https://img.shields.io/badge/Built%20with-Node.js-green)](https://nodejs.org/)
+![Version 2.0.10](https://img.shields.io/badge/Version-2.0.10-blue) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Built with Node.js](https://img.shields.io/badge/Built%20with-Node.js-green)](https://nodejs.org/)
 
 ## Introduction
 
-`llm-interface` is a wrapper designed to interact with multiple Large Language Model (LLM) APIs. `llm-interface` simplifies integrating various LLM providers, including **OpenAI, AI21 Studio, AIML API, Anthropic, Cloudflare AI, Cohere, DeepInfra, Fireworks AI, Forefront, Friendli AI, Google Gemini, Goose AI, Groq, Hugging Face, Mistral AI, Monster API, Octo AI, Ollama, Perplexity, Reka AI, Replicate, watsonx.ai, Writer, and LLaMA.cpp**, into your applications. It is available as an [NPM package](https://www.npmjs.com/package/llm-interface).
+LLM Interface is an npm module that streamlines your interactions with various Large Language Model (LLM) providers in your Node.js applications. It offers a unified interface, simplifying the process of switching between providers and their models.
 
-This goal of `llm-interface` is to provide a single, simple, unified interface for sending messages and receiving responses from different LLM services. This will make it easier for developers to work with multiple LLMs without worrying about the specific intricacies of each API.
+The LLM Interface package offers comprehensive support for a wide range of language model providers, encompassing 36 different providers and hundreds of models. This extensive coverage ensures that you have the flexibility to choose the best models suited to your specific needs.
+
+## Extensive Support for 36 Providers and Hundreds of Models
+
+LLM Interface supports: **AI21 Studio, AiLAYER, AIMLAPI, Anyscale, Anthropic, Microsoft Azure AI, Cloudflare AI, Cohere, Corcel, DeepInfra, DeepSeek, Fireworks AI, Forefront AI, FriendliAI, Google Gemini, GooseAI, Groq, Hugging Face Inference API, HyperBee AI, Lamini, LLaMA.CPP, Mistral AI, Monster API, Neets.ai, Novita AI, NVIDIA AI, OctoAI, Ollama, OpenAI, Perplexity AI, Reka AI, Replicate, Shuttle AI, TheB.ai, Together AI, Voyage AI, Watsonx AI, Writer, and Zhipu AI**.
+
+<!-- Support List -->
+![AI21 Studio](https://samestrin.github.io/media/llm-interface/icons/ai21.png) ![AIMLAPI](https://samestrin.github.io/media/llm-interface/icons/aimlapi.png) ![Anthropic](https://samestrin.github.io/media/llm-interface/icons/anthropic.png) ![Anyscale](https://samestrin.github.io/media/llm-interface/icons/anyscale.png) ![blank.png](https://samestrin.github.io/media/llm-interface/icons/blank.png) ![Cloudflare AI](https://samestrin.github.io/media/llm-interface/icons/cloudflareai.png) ![Cohere](https://samestrin.github.io/media/llm-interface/icons/cohere.png) ![Corcel](https://samestrin.github.io/media/llm-interface/icons/corcel.png) ![DeepInfra](https://samestrin.github.io/media/llm-interface/icons/deepinfra.png) ![DeepSeek](https://samestrin.github.io/media/llm-interface/icons/deepseek.png) ![Forefront AI](https://samestrin.github.io/media/llm-interface/icons/forefront.png) ![GooseAI](https://samestrin.github.io/media/llm-interface/icons/gooseai.png) ![Lamini](https://samestrin.github.io/media/llm-interface/icons/lamini.png) ![Mistral AI](https://samestrin.github.io/media/llm-interface/icons/mistralai.png) ![Monster API](https://samestrin.github.io/media/llm-interface/icons/monsterapi.png) ![Neets.ai](https://samestrin.github.io/media/llm-interface/icons/neetsai.png) ![Perplexity AI](https://samestrin.github.io/media/llm-interface/icons/perplexity.png) ![Reka AI](https://samestrin.github.io/media/llm-interface/icons/rekaai.png) ![Replicate](https://samestrin.github.io/media/llm-interface/icons/replicate.png) ![Shuttle AI](https://samestrin.github.io/media/llm-interface/icons/shuttleai.png) ![Together AI](https://samestrin.github.io/media/llm-interface/icons/togetherai.png) ![Writer](https://samestrin.github.io/media/llm-interface/icons/writer.png)
+<!-- Support List End -->
+
+[Detailed Provider List](docs/providers.md)
 
 ## Features
 
-- **Unified Interface**: `LLMInterface.sendMessage` is a single, consistent interface to interact with **24 different LLM APIs** (22 hosted LLM providers and 2 local LLM providers).
+
+- **Unified Interface**: `LLMInterface.sendMessage` is a single, consistent interface to interact with **36 different LLM APIs** (34 hosted LLM providers and 2 local LLM providers).
 - **Dynamic Module Loading**: Automatically loads and manages LLM interfaces only when they are invoked, minimizing resource usage.
 - **Error Handling**: Robust error handling mechanisms to ensure reliable API interactions.
 - **Extensible**: Easily extendable to support additional LLM providers as needed.
````
````diff
@@ -23,6 +34,15 @@ This goal of `llm-interface` is to provide a single, simple, unified interface f
 
 ## Updates
 
+**v2.0.10**
+
+- **New LLM Providers**: Anyscale, Bigmodel, Corcel, DeepSeek, HyperBee AI, Lamini, Neets AI, Novita AI, NVIDIA, Shuttle AI, TheB.AI, and Together AI.
+- **Caching**: Supports multiple caches: `simple-cache`, `flat-cache`, and `cache-manager`. _`flat-cache` is now an optional package._
+- **Logging**: Improved logging with `loglevel`.
+- **Improved Documentation**: Improved [documentation](docs/index.md) with new examples, a glossary, and provider details. Updated API key details, model alias breakdown, and usage information.
+- **More Examples**: [LangChain.js RAG](examples/langchain/rag.js), [Mixture-of-Agents (MoA)](examples/moa/moa.js), and [more](docs/examples.md).
+- **Removed Dependency**: `@anthropic-ai/sdk` is no longer required.
+
 **v2.0.9**
 
 - **New LLM Providers**: Added support for AIML API (_currently not respecting option values_), DeepSeek, Forefront, Ollama, Replicate, and Writer.
````
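The caching bullet above is the functional core of this release. A minimal sketch of how the new cache support might be wired up follows; the diff only confirms the backend names, so `configureCache()` and every option name below are assumptions drawn from the linked docs, not confirmed API.

```javascript
// Caching sketch for v2.0.10. ASSUMPTIONS: configureCache() and the option
// names ('cache', 'path', 'cacheTimeoutSeconds') are illustrative; the
// release notes only confirm the backends 'simple-cache', 'flat-cache'
// (now an optional install), and 'cache-manager'.
const { LLMInterface } = require('llm-interface');

LLMInterface.setApiKey({ openai: process.env.OPENAI_API_KEY });

// Select a cache backend; 'flat-cache' requires `npm install flat-cache`.
LLMInterface.configureCache({ cache: 'simple-cache', path: './cache' });

async function main() {
  // Repeating an identical prompt should hit the cache on the second call,
  // assuming a per-request TTL option along these lines exists.
  const response = await LLMInterface.sendMessage(
    'openai',
    'Explain the importance of low latency LLMs.',
    { max_tokens: 150 },
    { cacheTimeoutSeconds: 86400 }, // assumed per-request cache TTL
  );
  console.log(response);
}

main().catch(console.error);
```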
````diff
@@ -31,59 +51,70 @@
 Octo AI, Ollama, OpenAI, Perplexity, Together AI, and Writer.
 - **New Interface Function**: `LLMInterfaceStreamMessage`
 - **Test Coverage**: 100% test coverage for all interface classes.
-- **Examples**: New usage [examples](/examples).
-
-**v2.0.8**
-
-- **Removing Dependencies**: The removal of OpenAI and Groq SDKs results in a smaller bundle, faster installs, and reduced complexity.
+- **Examples**: New usage [examples](examples).
 
 ## Dependencies
 
 The project relies on several npm packages and APIs. Here are the primary dependencies:
 
 - `axios`: For making HTTP requests (used for various HTTP AI APIs).
-- `@anthropic-ai/sdk`: SDK for interacting with the Anthropic API.
 - `@google/generative-ai`: SDK for interacting with the Google Gemini API.
 - `dotenv`: For managing environment variables. Used by test cases.
-- `flat-cache`: For optionally caching API responses to improve performance and reduce redundant requests.
 - `jsonrepair`: Used to repair invalid JSON responses.
-- `jest`: For running test cases.
+- `loglevel`: A minimal, lightweight logging library with level-based logging and filtering.
+
+The following optional packages can be added to extend LLMInterface's caching capabilities:
+
+- `flat-cache`: A simple JSON based cache.
+- `cache-manager`: An extendible cache module that supports various backends including Redis, MongoDB, File System, Memcached, Sqlite, and more.
 
 ## Installation
 
-To install the `llm-interface` package, you can use npm:
+To install the LLM Interface npm module, you can use npm:
 
 ```bash
 npm install llm-interface
 ```
+## Quick Start
 
-## Usage
+- Looking for [API Keys](/docs/api-keys.md)? This document provides helpful links.
+- Detailed [usage](/docs/usage.md) documentation is available here.
+- Various [examples](/examples) are also available to help you get started.
+- A breakdown of [model aliases](/docs/models.md) is available here.
+- If you want more examples, you may wish to review the [test cases](/test/).
 
-### Example
+## Usage
 
-First import `LLMInterfaceSendMessage`. You can do this using either the CommonJS `require` syntax:
+First import `LLMInterface`. You can do this using either the CommonJS `require` syntax:
 
 ```javascript
-const { LLMInterfaceSendMessage } = require('llm-interface');
+const { LLMInterface } = require('llm-interface');
 ```
 
 or the ES6 `import` syntax:
 
 ```javascript
-import { LLMInterfaceSendMessage } from 'llm-interface';
+import { LLMInterface } from 'llm-interface';
 ```
 
-then send your prompt to the LLM provider of your choice:
+then send your prompt to the LLM provider:
 
 ```javascript
+LLMInterface.setApiKey({ openai: process.env.OPENAI_API_KEY });
+
 try {
-  const response = LLMInterfaceSendMessage('openai', process.env.OPENAI_API_KEY, 'Explain the importance of low latency LLMs.');
+  const response = await LLMInterface.sendMessage('openai', 'Explain the importance of low latency LLMs.');
 } catch (error) {
   console.error(error);
 }
 ```
+If you prefer, you can use a one-liner to pass the provider and API key, essentially skipping the `LLMInterface.setApiKey()` step.
+
+```javascript
+const response = await LLMInterface.sendMessage(['openai', process.env.OPENAI_API_KEY], 'Explain the importance of low latency LLMs.');
+```
 
-or if you'd like to chat, use the message object. You can also pass through options such as `max_tokens`.
+Passing a more complex message object is just as simple. The same rules apply:
 
 ```javascript
 const message = {
````
````diff
@@ -95,13 +126,12 @@ const message = {
 };
 
 try {
-  const response = LLMInterfaceSendMessage('openai', process.env.OPENAI_API_KEY, message, { max_tokens: 150 });
+  const response = await LLMInterface.sendMessage('openai', message, { max_tokens: 150 });
 } catch (error) {
   console.error(error);
 }
 ```
-
-If you need [API Keys](/docs/APIKEYS.md), use this [starting point](/docs/APIKEYS.md). Additional [usage examples](/docs/USAGE.md) and an [API reference](/docs/API.md) are available. You may also wish to review the [test cases](/test/) for further examples.
+_`LLMInterfaceSendMessage` and `LLMInterfaceStreamMessage` are still available and will remain available until version 3._
 
 ## Running Tests
 
````
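The body of `message` falls between the two hunks above and is unchanged, so it never appears in this diff. For orientation, a self-contained version of the updated chat example might look like the sketch below; the `model` value and `messages` contents are illustrative assumptions, not the README's actual lines.

```javascript
// Self-contained sketch of the updated chat example. The message body is
// assumed: the hunks only show the object's opening and closing lines.
const { LLMInterface } = require('llm-interface');

LLMInterface.setApiKey({ openai: process.env.OPENAI_API_KEY });

const message = {
  model: 'gpt-3.5-turbo', // assumed model name/alias
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
  ],
};

async function main() {
  try {
    // Provider options such as max_tokens pass through, per the diff above.
    const response = await LLMInterface.sendMessage('openai', message, { max_tokens: 150 });
    console.log(response);
  } catch (error) {
    console.error(error);
  }
}

main();
```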

````diff
@@ -114,13 +144,23 @@ npm test
 #### Current Test Results
 
 ```bash
-Test Suites: 1 skipped, 65 passed, 65 of 66 total
-Tests: 2 skipped, 291 passed, 293 total
+Test Suites: 9 skipped, 93 passed, 93 of 102 total
+Tests: 86 skipped, 784 passed, 870 total
 Snapshots: 0 total
-Time: 103.293 s, estimated 121 s
+Time: 630.029 s
 ```
 
-_Note: Currently skipping NVIDIA test cases due to API key limits._
+_Note: Currently skipping NVIDIA test cases due to API issues, and Ollama due to performance issues._
+
+## TODO
+
+- [ ] Provider > Models > Azure AI
+- [ ] Provider > Models > Groq
+- [ ] Provider > Models > SiliconFlow
+- [ ] Provider > Embeddings > Nomic
+- [ ] _Feature > Image Generation?_
+
+_Submit your suggestions!_
 
 ## Contribute
 
````
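Taken together, the README changes describe a provider-agnostic API in which switching providers reduces to changing the first argument of `sendMessage`. A short sketch of that pattern follows; it assumes `setApiKey` accepts multiple keys at once and that the lowercase provider identifiers below match the package's naming.

```javascript
// Provider switching with the unified interface: only the provider name
// changes between calls. Assumes valid keys and that these lowercase
// provider identifiers match the package's registry.
const { LLMInterface } = require('llm-interface');

LLMInterface.setApiKey({
  openai: process.env.OPENAI_API_KEY,
  groq: process.env.GROQ_API_KEY,
  anthropic: process.env.ANTHROPIC_API_KEY,
});

async function compare(prompt) {
  for (const provider of ['openai', 'groq', 'anthropic']) {
    try {
      const response = await LLMInterface.sendMessage(provider, prompt, { max_tokens: 150 });
      console.log(`--- ${provider} ---`);
      console.log(response);
    } catch (error) {
      // Per-provider error handling is one of the listed features.
      console.error(`${provider} failed:`, error);
    }
  }
}

compare('Explain the importance of low latency LLMs.');
```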

babel.config.js (-4)

This file was deleted.
