Welcome to the documentation for the LLM Interface package. This documentation provides comprehensive guides on how to set up, configure, and use the LLM Interface with various Language Model providers.
- Introduction
- Installation
- API Keys
- Usage
  - LLMInterface
    - getAllModelNames()
    - getEmbeddingsModelAlias(interfaceName, alias)
    - getInterfaceConfigValue(interfaceName, key)
    - getModelAlias(interfaceName, alias)
    - setApiKey(interfaceNames, apiKey)
    - setEmbeddingsModelAlias(interfaceName, alias, name)
    - setModelAlias(interfaceName, alias, name)
    - configureCache(cacheConfig = {})
    - flushCache()
    - sendMessage(interfaceName, message, options = {}, interfaceOptions = {})
    - streamMessage(interfaceName, message, options = {})
    - embeddings(interfaceName, embeddingString, options = {}, interfaceOptions = {})
    - chat.completions.create(interfaceName, message, options = {}, interfaceOptions = {})
  - LLMInterfaceSendMessage
  - LLMInterfaceStreamMessage
  - Message Object
  - Options Object
  - Interface Options Object
  - Caching
- Support
- Model Aliases
- Embeddings Model Aliases
- Jailbreaking
- Glossary
- Examples
The LLM Interface npm module provides a unified interface for interacting with various large language models (LLMs). This documentation covers setup, configuration, usage, and examples to help you integrate LLMs into your projects efficiently.
To interact with different LLM providers, you will need API keys. Refer to API Keys for detailed instructions on obtaining and configuring API keys for supported providers.
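Once you have a key, you can register it at runtime with setApiKey. The sketch below assumes an OpenAI key stored in an environment variable; an object with multiple entries can set keys for several providers at once.

```javascript
const { LLMInterface } = require('llm-interface');

// Register an API key for one provider; pass an object with multiple
// entries (e.g. { openai: '...', groq: '...' }) to set several at once.
LLMInterface.setApiKey({ openai: process.env.OPENAI_API_KEY });
```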
The Usage section contains detailed documentation on how to use the LLM Interface npm module. This includes the following methods (a brief usage sketch follows the list):
- getAllModelNames()
- getEmbeddingsModelAlias(interfaceName, alias)
- getInterfaceConfigValue(interfaceName, key)
- getModelAlias(interfaceName, alias)
- setApiKey(interfaceNames, apiKey)
- setEmbeddingsModelAlias(interfaceName, alias, name)
- setModelAlias(interfaceName, alias, name)
- configureCache(cacheConfig = {})
- flushCache()
- sendMessage(interfaceName, message, options = {}, interfaceOptions = {})
- streamMessage(interfaceName, message, options = {})
- embeddings(interfaceName, embeddingString, options = {}, interfaceOptions = {})
- chat.completions.create(interfaceName, message, options = {}, interfaceOptions = {})
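Here is a minimal end-to-end sketch of the methods above. The provider name, prompt, cache configuration, and option values are assumptions for illustration, and response shapes can vary by provider; see the individual method pages for details.

```javascript
const { LLMInterface } = require('llm-interface');

LLMInterface.setApiKey({ openai: process.env.OPENAI_API_KEY });

// Optional: cache responses. The cache backend and path here are
// assumptions; see the Caching section for supported configurations.
LLMInterface.configureCache({ cache: 'simple-cache', path: './cache' });

async function main() {
  // Send a chat message; options are forwarded to the provider.
  const response = await LLMInterface.sendMessage(
    'openai',
    'Explain the importance of low latency LLMs.',
    { max_tokens: 150 },
  );
  console.log(response.results);

  // Generate embeddings for a single input string.
  const embedding = await LLMInterface.embeddings('openai', 'Hello, world!');
  console.log(embedding.results);
}

main().catch(console.error);
```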
LLMInterfaceSendMessage and LLMInterfaceStreamMessage are legacy functions and will be deprecated; use LLMInterface.sendMessage() and LLMInterface.streamMessage() instead.
A complete list of supported providers is available here.
The LLMInterface supports multiple model aliases for different providers. See Models for a list of model aliases and their descriptions.
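As an illustrative sketch, an alias can be inspected and remapped at runtime. The alias name 'default' and the model below are assumptions for illustration, not the shipped defaults:

```javascript
const { LLMInterface } = require('llm-interface');

// Read the model currently mapped to an alias for a provider.
console.log(LLMInterface.getModelAlias('openai', 'default'));

// Remap the alias so later calls requesting this alias use the new model.
LLMInterface.setModelAlias('openai', 'default', 'gpt-4o-mini');
```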
For more detailed information, please refer to the respective sections in the documentation.
If you'd like to attempt to jailbreak your AI model, you can try a version of the message object found here.
Thanks to Shuttle AI for the original concept!
A glossary of terms is available here.
Check out Examples for practical demonstrations of how to use the LLM Interface npm module in various scenarios.