
2.0.10 #8

Merged
merged 83 commits into from
Jul 12, 2024
Commits
55322af
feat: caching
samestrin Jun 30, 2024
da5b0f1
feat: new llm providers
samestrin Jul 1, 2024
7634bd5
feat: rss-feed-summaries.js (wip)
samestrin Jul 1, 2024
2a5bb37
feat: simple langchain.js example
samestrin Jul 1, 2024
fc9b81c
feat: example chart generation (wip)
samestrin Jul 1, 2024
08bca00
feat: refactor test cases
samestrin Jul 2, 2024
cc9aebf
feat: Improved caching
samestrin Jul 2, 2024
6fe424d
feat: Simple Error classes
samestrin Jul 2, 2024
34c142c
Update streamMessageUtil.js
samestrin Jul 2, 2024
47d19db
feat: removed flat-cache requirement
samestrin Jul 2, 2024
3d10ff3
feat: change version
samestrin Jul 2, 2024
6a160d9
Update README.md
samestrin Jul 2, 2024
0fbc12f
docs: updating docs to match v2.0.10
samestrin Jul 2, 2024
cfcdd1a
Update API.md
samestrin Jul 2, 2024
6e62e5f
feat: updated docs for 2.0.10
samestrin Jul 5, 2024
a1f4613
Create index.md
samestrin Jul 5, 2024
ec64fd6
feat: updated examples
samestrin Jul 5, 2024
827105f
test: updating tests to prevent api errors
samestrin Jul 5, 2024
b68695c
v2.0.10
samestrin Jul 5, 2024
004dc21
feat: new llm providers
samestrin Jul 5, 2024
3f119d1
Update index.js
samestrin Jul 5, 2024
3f15461
feat: interface updates
samestrin Jul 5, 2024
85d0a9b
feat: new util classes
samestrin Jul 5, 2024
e0052c1
test: updated test cases to test all cache engines
samestrin Jul 5, 2024
7a8b159
Update README.md
samestrin Jul 5, 2024
7738eb4
feat: Updated .env template
samestrin Jul 5, 2024
485315d
docs: providers and more
samestrin Jul 7, 2024
3e721d1
Update .gitignore
samestrin Jul 7, 2024
f7457a3
feat: examples
samestrin Jul 7, 2024
4ecdffc
tests: further refactoring
samestrin Jul 7, 2024
1c9a831
Update README.md
samestrin Jul 7, 2024
bbefebc
Update README.md
samestrin Jul 7, 2024
8f5cb9f
Create index.md
samestrin Jul 7, 2024
87c0372
Remove configuration files: .eslintrc.json, babel.config.js, eslint.c…
samestrin Jul 11, 2024
f66dfd4
Remove .prettierrc
samestrin Jul 12, 2024
72be990
Update .gitignore
samestrin Jul 12, 2024
9bca3f9
Update .npmignore
samestrin Jul 12, 2024
2014c94
docs: Updating docs for 2.0.10
samestrin Jul 12, 2024
06d15b6
Update .gitignore
samestrin Jul 12, 2024
398c48a
Update .npmignore
samestrin Jul 12, 2024
be268d7
Update env
samestrin Jul 12, 2024
9130d0e
feat: examples
samestrin Jul 12, 2024
095e4ea
feat: New configuration model
samestrin Jul 12, 2024
5c3b46c
tests: updated test cases for 2.0.10
samestrin Jul 12, 2024
4b4b629
Update package.json
samestrin Jul 12, 2024
0957b9f
Update package-lock.json
samestrin Jul 12, 2024
b1137e7
Update README.md
samestrin Jul 12, 2024
8ce7a20
feat: interfaces and utility classes for v2.0.10
samestrin Jul 12, 2024
b56f5b9
Remove MODELS.md and USAGE.md
samestrin Jul 12, 2024
8343fdd
docs: updated for v2.0.10
samestrin Jul 12, 2024
9142f02
Update usage.md
samestrin Jul 12, 2024
20d35f1
docs: revision
samestrin Jul 12, 2024
fe2df7d
feat: examples v2.0.10
samestrin Jul 12, 2024
23d4f32
Update .npmignore
samestrin Jul 12, 2024
d1130d6
Update index.js
samestrin Jul 12, 2024
cdf9675
Update baseInterface.js
samestrin Jul 12, 2024
b847bab
Update config.js
samestrin Jul 12, 2024
a35c081
Update api-keys.md
samestrin Jul 12, 2024
0b2df64
Update api-keys.md
samestrin Jul 12, 2024
4280692
Update api-keys.md
samestrin Jul 12, 2024
2356d09
Update models.md
samestrin Jul 12, 2024
39a7db3
Update embeddings.md
samestrin Jul 12, 2024
d264a48
Update models.md
samestrin Jul 12, 2024
ae4bb8e
docs: providers
samestrin Jul 12, 2024
501b138
docs: provider docs
samestrin Jul 12, 2024
551433b
Update examples.md
samestrin Jul 12, 2024
633d94d
Update glossary.md
samestrin Jul 12, 2024
3a67460
Update glossary.md
samestrin Jul 12, 2024
9d6740c
docs: update for v2.0.10
samestrin Jul 12, 2024
e971cf4
Update glossary.md
samestrin Jul 12, 2024
5427571
feat: interfaces
samestrin Jul 12, 2024
67b0cbb
Update utils.test.js
samestrin Jul 12, 2024
d1c9d82
Update config.js
samestrin Jul 12, 2024
4f38ff9
Update utils.js
samestrin Jul 12, 2024
c433bb6
docs
samestrin Jul 12, 2024
ede158c
feat: new examples
samestrin Jul 12, 2024
d149d5d
Update README.md
samestrin Jul 12, 2024
d7e8c1a
Update README.md
samestrin Jul 12, 2024
1ca2c7a
Update replicate.js
samestrin Jul 12, 2024
a821142
Update README.md
samestrin Jul 12, 2024
09f354a
feat: examples
samestrin Jul 12, 2024
3763a8a
docs: revision
samestrin Jul 12, 2024
3690034
Merge branch 'main' into 2.0.10
samestrin Jul 12, 2024
11 changes: 0 additions & 11 deletions .eslintrc.json

This file was deleted.

6 changes: 6 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -134,3 +134,9 @@ dist

.DS_STORE
cache/
build/
.eslint*
eslint*
jest*
babel.config.js
.prettier*
149 changes: 146 additions & 3 deletions .npmignore
Original file line number Diff line number Diff line change
@@ -1,3 +1,146 @@
node_modules
test
.env
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
lerna-debug.log*
.pnpm-debug.log*

# Diagnostic reports (https://nodejs.org/api/report.html)
report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json

# Runtime data
pids
*.pid
*.seed
*.pid.lock

# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov

# Coverage directory used by tools like istanbul
coverage
*.lcov

# nyc test coverage
.nyc_output

# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
.grunt

# Bower dependency directory (https://bower.io/)
bower_components

# node-waf configuration
.lock-wscript

# Compiled binary addons (https://nodejs.org/api/addons.html)
build/Release

# Dependency directories
node_modules/
jspm_packages/

# Snowpack dependency directory (https://snowpack.dev/)
web_modules/

# TypeScript cache
*.tsbuildinfo

# Optional npm cache directory
.npm

# Optional eslint cache
.eslintcache

# Optional stylelint cache
.stylelintcache

# Microbundle cache
.rpt2_cache/
.rts2_cache_cjs/
.rts2_cache_es/
.rts2_cache_umd/

# Optional REPL history
.node_repl_history

# Output of 'npm pack'
*.tgz

# Yarn Integrity file
.yarn-integrity

# dotenv environment variable files
.env
.env.development.local
.env.test.local
.env.production.local
.env.local

# parcel-bundler cache (https://parceljs.org/)
.cache
.parcel-cache

# Next.js build output
.next
out

# Nuxt.js build / generate output
.nuxt
dist

# Gatsby files
.cache/
# Comment in the public line in if your project uses Gatsby and not Next.js
# https://nextjs.org/blog/next-9-1#public-directory-support
# public

# vuepress build output
.vuepress/dist

# vuepress v2.x temp and cache directory
.temp
.cache

# Docusaurus cache and generated files
.docusaurus

# Serverless directories
.serverless/

# FuseBox cache
.fusebox/

# DynamoDB Local files
.dynamodb/

# TernJS port file
.tern-port

# Stores VSCode versions used for testing VSCode extensions
.vscode-test

# yarn v2
.yarn/cache
.yarn/unplugged
.yarn/build-state.yml
.yarn/install-state.gz
.pnp.*

/src/cache
.prettier*

.DS_STORE
cache/
build/
.eslint*
eslint*
jest*
babel.config.js
.prettier*

examples/
docs/
test/
4 changes: 0 additions & 4 deletions .prettierrc

This file was deleted.

96 changes: 68 additions & 28 deletions README.md
Original file line number Diff line number Diff line change
@@ -2,17 +2,28 @@

[![Star on GitHub](https://img.shields.io/github/stars/samestrin/llm-interface?style=social)](https://github.com/samestrin/llm-interface/stargazers) [![Fork on GitHub](https://img.shields.io/github/forks/samestrin/llm-interface?style=social)](https://github.com/samestrin/llm-interface/network/members) [![Watch on GitHub](https://img.shields.io/github/watchers/samestrin/llm-interface?style=social)](https://github.com/samestrin/llm-interface/watchers)

![Version 2.0.9](https://img.shields.io/badge/Version-2.0.9-blue) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Built with Node.js](https://img.shields.io/badge/Built%20with-Node.js-green)](https://nodejs.org/)
![Version 2.0.10](https://img.shields.io/badge/Version-2.0.10-blue) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Built with Node.js](https://img.shields.io/badge/Built%20with-Node.js-green)](https://nodejs.org/)

## Introduction

`llm-interface` is a wrapper designed to interact with multiple Large Language Model (LLM) APIs. `llm-interface` simplifies integrating various LLM providers, including **OpenAI, AI21 Studio, AIML API, Anthropic, Cloudflare AI, Cohere, DeepInfra, Fireworks AI, Forefront, Friendli AI, Google Gemini, Goose AI, Groq, Hugging Face, Mistral AI, Monster API, Octo AI, Ollama, Perplexity, Reka AI, Replicate, watsonx.ai, Writer, and LLaMA.cpp**, into your applications. It is available as an [NPM package](https://www.npmjs.com/package/llm-interface).
LLM Interface is an npm module that streamlines your interactions with various Large Language Model (LLM) providers in your Node.js applications. It offers a unified interface, simplifying the process of switching between providers and their models.

This goal of `llm-interface` is to provide a single, simple, unified interface for sending messages and receiving responses from different LLM services. This will make it easier for developers to work with multiple LLMs without worrying about the specific intricacies of each API.
The LLM Interface package offers comprehensive support for a wide range of language model providers, encompassing 36 different providers and hundreds of models. This extensive coverage ensures that you have the flexibility to choose the best models suited to your specific needs.

## Extensive Support for 36 Providers and Hundreds of Models

LLM Interface supports: **AI21 Studio, AiLAYER, AIMLAPI, Anyscale, Anthropic, Microsoft Azure AI, Cloudflare AI, Cohere, Corcel, DeepInfra, DeepSeek, Fireworks AI, Forefront AI, FriendliAI, Google Gemini, GooseAI, Groq, Hugging Face Inference API, HyperBee AI, Lamini, LLaMA.CPP, Mistral AI, Monster API, Neets.ai, Novita AI, NVIDIA AI, OctoAI, Ollama, OpenAI, Perplexity AI, Reka AI, Replicate, Shuttle AI, TheB.ai, Together AI, Voyage AI, Watsonx AI, Writer, and Zhipu AI**.

<!-- Support List -->
![AI21 Studio](https://samestrin.github.io/media/llm-interface/icons/ai21.png) ![AIMLAPI](https://samestrin.github.io/media/llm-interface/icons/aimlapi.png) ![Anthropic](https://samestrin.github.io/media/llm-interface/icons/anthropic.png) ![Anyscale](https://samestrin.github.io/media/llm-interface/icons/anyscale.png) ![blank.png](https://samestrin.github.io/media/llm-interface/icons/blank.png) ![Cloudflare AI](https://samestrin.github.io/media/llm-interface/icons/cloudflareai.png) ![Cohere](https://samestrin.github.io/media/llm-interface/icons/cohere.png) ![Corcel](https://samestrin.github.io/media/llm-interface/icons/corcel.png) ![DeepInfra](https://samestrin.github.io/media/llm-interface/icons/deepinfra.png) ![DeepSeek](https://samestrin.github.io/media/llm-interface/icons/deepseek.png) ![Forefront AI](https://samestrin.github.io/media/llm-interface/icons/forefront.png) ![GooseAI](https://samestrin.github.io/media/llm-interface/icons/gooseai.png) ![Lamini](https://samestrin.github.io/media/llm-interface/icons/lamini.png) ![Mistral AI](https://samestrin.github.io/media/llm-interface/icons/mistralai.png) ![Monster API](https://samestrin.github.io/media/llm-interface/icons/monsterapi.png) ![Neets.ai](https://samestrin.github.io/media/llm-interface/icons/neetsai.png) ![Perplexity AI](https://samestrin.github.io/media/llm-interface/icons/perplexity.png) ![Reka AI](https://samestrin.github.io/media/llm-interface/icons/rekaai.png) ![Replicate](https://samestrin.github.io/media/llm-interface/icons/replicate.png) ![Shuttle AI](https://samestrin.github.io/media/llm-interface/icons/shuttleai.png) ![Together AI](https://samestrin.github.io/media/llm-interface/icons/togetherai.png) ![Writer](https://samestrin.github.io/media/llm-interface/icons/writer.png)
<!-- Support List End -->

[Detailed Provider List](docs/providers.md)

## Features

- **Unified Interface**: `LLMInterface.sendMessage` is a single, consistent interface to interact with **24 different LLM APIs** (22 hosted LLM providers and 2 local LLM providers).

- **Unified Interface**: `LLMInterface.sendMessage` is a single, consistent interface to interact with **36 different LLM APIs** (34 hosted LLM providers and 2 local LLM providers).
- **Dynamic Module Loading**: Automatically loads and manages LLM interfaces only when they are invoked, minimizing resource usage.
- **Error Handling**: Robust error handling mechanisms to ensure reliable API interactions.
- **Extensible**: Easily extendable to support additional LLM providers as needed.
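The dynamic module loading described above can be sketched as a lazy registry that constructs a provider interface only on first use. This is a hypothetical illustration of the pattern, not the package's actual implementation; the `loaders` entries and `getInterface` helper are invented for the example:

```javascript
// Hypothetical sketch of lazy provider loading: an interface object is
// only constructed the first time its provider name is requested.
const loaders = {
  openai: () => ({ name: 'openai', send: (msg) => `openai:${msg}` }),
  groq: () => ({ name: 'groq', send: (msg) => `groq:${msg}` }),
};

const loaded = new Map();

function getInterface(provider) {
  if (!loaded.has(provider)) {
    const loader = loaders[provider];
    if (!loader) throw new Error(`Unsupported provider: ${provider}`);
    loaded.set(provider, loader()); // construct on first use only
  }
  return loaded.get(provider);
}

// Only 'openai' is instantiated here; 'groq' stays unloaded.
const reply = getInterface('openai').send('hello');
```

Because unused providers are never constructed, supporting 36 providers adds no startup cost for an application that only ever calls one of them.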
@@ -23,6 +34,15 @@ This goal of `llm-interface` is to provide a single, simple, unified interface f

## Updates

**v2.0.10**

- **New LLM Providers**: Anyscale, Bigmodel, Corcel, Deepseek, Hyperbee AI, Lamini, Neets AI, Novita AI, NVIDIA, Shuttle AI, TheB.AI, and Together AI.
- **Caching**: Supports multiple caches: `simple-cache`, `flat-cache`, and `cache-manager`. _`flat-cache` is now an optional package._
- **Logging**: Improved logging with `loglevel`.
- **Improved Documentation**: Improved [documentation](docs/index.md) with new examples, glossary, and provider details. Updated API key details, model alias breakdown, and usage information.
- **More Examples**: [LangChain.js RAG](examples/langchain/rag.js), [Mixture-of-Agents (MoA)](examples/moa/moa.js), and [more](docs/examples.md).
- **Removed Dependency**: `@anthropic-ai/sdk` is no longer required.

**v2.0.9**

- **New LLM Providers**: Added support for AIML API (_currently not respecting option values_), DeepSeek, Forefront, Ollama, Replicate, and Writer.
@@ -31,59 +51,70 @@ This goal of `llm-interface` is to provide a single, simple, unified interface f
Octo AI, Ollama, OpenAI, Perplexity, Together AI, and Writer.
- **New Interface Function**: `LLMInterfaceStreamMessage`
- **Test Coverage**: 100% test coverage for all interface classes.
- **Examples**: New usage [examples](/examples).

**v2.0.8**

- **Removing Dependencies**: The removal of OpenAI and Groq SDKs results in a smaller bundle, faster installs, and reduced complexity.
- **Examples**: New usage [examples](examples).

## Dependencies

The project relies on several npm packages and APIs. Here are the primary dependencies:

- `axios`: For making HTTP requests (used for various HTTP AI APIs).
- `@anthropic-ai/sdk`: SDK for interacting with the Anthropic API.
- `@google/generative-ai`: SDK for interacting with the Google Gemini API.
- `dotenv`: For managing environment variables. Used by test cases.
- `flat-cache`: For optionally caching API responses to improve performance and reduce redundant requests.
- `jsonrepair`: Used to repair invalid JSON responses.
- `jest`: For running test cases.
- `loglevel`: A minimal, lightweight logging library with level-based logging and filtering.

The following optional packages can be added to extend LLM Interface's caching capabilities:

- `flat-cache`: A simple JSON based cache.
- `cache-manager`: An extendible cache module that supports various backends including Redis, MongoDB, File System, Memcached, Sqlite, and more.
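Conceptually, the interchangeable cache backends listed above share a small get/set contract. A minimal TTL cache along those lines can be sketched as follows; this is an illustrative toy, not the code of `simple-cache`, `flat-cache`, or `cache-manager`:

```javascript
// Minimal TTL key-value cache sketch illustrating the contract a
// response cache needs: set(key, value, ttlMs) and get(key).
class SimpleTtlCache {
  constructor() {
    this.store = new Map();
  }
  set(key, value, ttlMs) {
    // Record the value together with its absolute expiry time.
    this.store.set(key, { value, expires: Date.now() + ttlMs });
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key); // evict stale entry on read
      return undefined;
    }
    return entry.value;
  }
}
```

With this shape, a cached LLM call reduces to checking `get(promptKey)` before sending the request and calling `set(promptKey, response, ttl)` afterwards; swapping backends only changes where the entries are stored.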

## Installation

To install the `llm-interface` package, you can use npm:
To install the LLM Interface npm module, you can use npm:

```bash
npm install llm-interface
```
## Quick Start

## Usage
- Looking for [API Keys](/docs/api-keys.md)? This document provides helpful links.
- Detailed [usage](/docs/usage.md) documentation is available here.
- Various [examples](/examples) are also available to help you get started.
- A breakdown of [model aliases](/docs/models.md) is available here.
- If you want even more examples, you may wish to review the [test cases](/test/).

### Example
## Usage

First import `LLMInterfaceSendMessage`. You can do this using either the CommonJS `require` syntax:
First import `LLMInterface`. You can do this using either the CommonJS `require` syntax:

```javascript
const { LLMInterfaceSendMessage } = require('llm-interface');
const { LLMInterface } = require('llm-interface');
```

or the ES6 `import` syntax:

```javascript
import { LLMInterfaceSendMessage } from 'llm-interface';
import { LLMInterface } from 'llm-interface';
```

then send your prompt to the LLM provider of your choice:
then send your prompt to the LLM provider:

```javascript
LLMInterface.setApiKey({'openai': process.env.OPENAI_API_KEY});

try {
const response = LLMInterfaceSendMessage('openai', process.env.OPENAI_API_KEY, 'Explain the importance of low latency LLMs.');
const response = await LLMInterface.sendMessage('openai', 'Explain the importance of low latency LLMs.');
} catch (error) {
console.error(error);
}
```
If you prefer, you can use a one-liner to pass the provider and API key, essentially skipping the `LLMInterface.setApiKey()` step.

```javascript
const response = await LLMInterface.sendMessage(['openai',process.env.OPENAI_API_KEY], 'Explain the importance of low latency LLMs.');
```
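The two calling conventions above differ only in the first argument: a bare provider name, or a `[provider, apiKey]` pair. Normalizing them could be sketched like this; `normalizeProviderArg` is a hypothetical helper for illustration, not the package's actual parsing code:

```javascript
// Sketch: accept either 'provider' or ['provider', apiKey] as the
// first argument and normalize to a { provider, apiKey } object,
// falling back to a previously stored key for the bare-name form.
function normalizeProviderArg(arg, storedKeys = {}) {
  if (Array.isArray(arg)) {
    const [provider, apiKey] = arg;
    return { provider, apiKey };
  }
  return { provider: arg, apiKey: storedKeys[arg] };
}
```

Either way the rest of the call is identical, which is what lets the one-liner skip the explicit `setApiKey()` step.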

or if you'd like to chat, use the message object. You can also pass through options such as `max_tokens`.
Passing a more complex message object is just as simple. The same rules apply:

```javascript
const message = {
@@ -95,13 +126,12 @@ const message = {
};

try {
const response = LLMInterfaceSendMessage('openai', process.env.OPENAI_API_KEY, message, { max_tokens: 150 });
const response = await LLMInterface.sendMessage('openai', message, { max_tokens: 150 });
} catch (error) {
console.error(error);
}
```

If you need [API Keys](/docs/APIKEYS.md), use this [starting point](/docs/APIKEYS.md). Additional [usage examples](/docs/USAGE.md) and an [API reference](/docs/API.md) are available. You may also wish to review the [test cases](/test/) for further examples.
_`LLMInterfaceSendMessage` and `LLMInterfaceStreamMessage` remain available and will be supported until version 3._

## Running Tests

@@ -114,13 +144,23 @@ npm test
#### Current Test Results

```bash
Test Suites: 1 skipped, 65 passed, 65 of 66 total
Tests: 2 skipped, 291 passed, 293 total
Test Suites: 9 skipped, 93 passed, 93 of 102 total
Tests: 86 skipped, 784 passed, 870 total
Snapshots: 0 total
Time: 103.293 s, estimated 121 s
Time: 630.029 s
```

_Note: Currently skipping NVIDIA test cases due to API key limits._
_Note: Currently skipping NVIDIA test cases due to API issues, and Ollama due to performance issues._

## TODO

- [ ] Provider > Models > Azure AI
- [ ] Provider > Models > Groq
- [ ] Provider > Models > SiliconFlow
- [ ] Provider > Embeddings > Nomic
- [ ] _Feature > Image Generation?_

_Submit your suggestions!_

## Contribute

4 changes: 0 additions & 4 deletions babel.config.js

This file was deleted.
