feat(ai): llm provider #539
base: develop
Hi guys 💚 I know that I did things differently here by introducing internal ... Since I did it, I was wondering whether it may be nice if we have ...

My idea is to have something like this:

`import "jsr:@supabase/functions-js/edge-runtime.d.ts";`

I believe that native ... cc @laktek, @nyannyacha
- LLM Session is a wrapper to handle LLM inference based on the selected provider
- Extracting JSON parsers to a separate file
- Moving LLM stream-related code to a separate folder
- Applying LLM provider interfaces to implement the Ollama provider
- Applying LLM provider interfaces to implement the 'openaicompatible' mode
- Improving TypeScript support for dynamic suggestions based on the selected Session type
- Break: LLM models must now be defined inside the `options` argument; this allows better TypeScript checking and makes it easier to extend the API (see the sketch after this list)
- There's no need to check whether the `inferenceHost` env var is defined, since we can now switch between different LLM providers. Instead, LLM support is enabled when the given type is an allowed provider
- Improving TypeScript with conditional output types based on the selected provider
- Defining common properties for LLM providers, like `usage` metrics and a simplified `value`
- OpenAI uses a different streaming alternative that ends with `[DONE]`
- Applying 'pattern matching' and the 'Result pattern' to improve error handling. It enforces that users must first check for errors before consuming the message
- Ensuring that only valid strings with content can be embedded
- Fixing a wrong input variable name
- Accepting the 'opts' param as optional, applying null safety
- Improving tests by checking the result types: success or errors
- Testing an invalid `gte-small` type name
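As a rough sketch of the breaking change (the PR only states that the model moves into `options`; option names such as `model` and `baseURL`, and whether they live on the constructor, are assumptions for illustration):

```ts
import "jsr:@supabase/functions-js/edge-runtime.d.ts";

// Before: the model name was the Session type itself and the host came
// from an inference-host env var.
// const session = new Supabase.ai.Session('llama3');

// After (sketch): the provider is the Session type and the model is part
// of the options argument. Option names below are assumptions.
const session = new Supabase.ai.Session('ollama', {
  model: 'llama3',
  baseURL: Deno.env.get('OLLAMA_HOST') ?? 'http://localhost:11434',
});
```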
What kind of change does this PR introduce?
feature, refactor
What is the current behaviour?
Currently the `Session` only supports self-hosted Ollama or some OpenAI-like provider, with no way to specify the API key.

What is the new behaviour?
This PR applies some refactors to the `ai` module to support a unified LLM provider API, so it can easily be extended to new providers and export a more standardised output format.

Improved TypeScript support

The `ai` module was heavily refactored to provide better TS hints that dynamically change based on the selected `type`.

Examples:
- using type `gte-small`
- using type `ollama`
- using type `openaicompatible`
- Automatically infer `AsyncGenerator` type when `stream: true`
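A sketch of how the hints could differ per `type`. The option names below (`model`, `apiKey`, `baseURL`, `mean_pool`, `normalize`) are illustrative assumptions, and I am also assuming the Result tuple described further down applies to `gte-small` as well:

```ts
import "jsr:@supabase/functions-js/edge-runtime.d.ts";

// 'gte-small' is the embedding model, so only embedding options
// (e.g. mean_pool / normalize) should be suggested by the editor.
const embedder = new Supabase.ai.Session('gte-small');
const [embedding, embedError] = await embedder.run('hello world', {
  mean_pool: true,
  normalize: true,
});

// 'openaicompatible' is an LLM provider, so LLM options (model, API key,
// base URL, stream, ...) are suggested instead.
const llm = new Supabase.ai.Session('openaicompatible', {
  model: 'gpt-4o-mini',
  apiKey: Deno.env.get('OPENAI_API_KEY'),
  baseURL: 'https://api.openai.com/v1',
});

// With `stream: true` the success value is inferred as an AsyncGenerator;
// without it, a plain response object is inferred.
const [stream, streamError] = await llm.run('Tell me a joke', { stream: true });
```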
Improved error handling support

In order to ensure error checking, the `ai` module has been refactored to follow the `Result` pattern (Go-like). It means that `Session.run()` now returns a tuple of `[success, error]`; this result is compatible with TS pattern matching, so it provides full LSP feedback.

Examples
Non stream

Result type def

Checking `error` automatically validates the `success` part.
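Since the original snippets did not carry over, here is a hedged sketch of a possible Result type and the non-stream check. The tuple shape and the `value`/`usage` fields follow the PR description; everything else is an assumption:

```ts
import "jsr:@supabase/functions-js/edge-runtime.d.ts";

// A Go-like Result tuple: exactly one side is defined at a time.
type Success<T> = [value: T, error: undefined];
type Failure<E> = [value: undefined, error: E];
type Result<T, E = Error> = Success<T> | Failure<E>;

const llm = new Supabase.ai.Session('ollama', { model: 'llama3' });
const [reply, error] = await llm.run('Tell me a joke', { stream: false });

if (error) {
  // Failure branch: the success part is undefined here.
  console.error('inference failed:', error);
} else {
  // Checking `error` first narrows `reply` to the success type.
  console.log(reply.value, reply.usage);
}
```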
Stream
When `stream: true`, the first result handles errors that may occur before creating the `AsyncGenerator`. Then each incoming message is a result as well, so users can apply error handling while streaming.

Result type def

Streaming type def

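A hedged sketch of the streaming flow described above, with per-chunk error handling (the option and field names are assumptions):

```ts
import "jsr:@supabase/functions-js/edge-runtime.d.ts";

const llm = new Supabase.ai.Session('ollama', { model: 'llama3' });

// The first result catches errors that happen before the AsyncGenerator is
// created (invalid options, unreachable host, ...).
const [stream, initError] = await llm.run('Tell me a long story', {
  stream: true,
});

if (initError) {
  console.error('could not start streaming:', initError);
} else {
  // Each yielded chunk is itself a [success, error] result, so errors can
  // be handled while streaming instead of wrapping the loop in try/catch.
  for await (const [chunk, chunkError] of stream) {
    if (chunkError) {
      console.error('stream error:', chunkError);
      break;
    }
    console.log(chunk.value);
  }
}
```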
Common response and Usage metrics

Since all LLM providers must implement a common interface, they now also share a unified response object.

response definitions

Success part

Error part
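A sketch of what the shared response could look like, based on the `value` and `usage` properties mentioned in this PR (the exact field names inside `usage` are assumptions):

```ts
// Success part: every provider maps its native payload to this shape.
interface LLMSuccessResponse {
  // The generated text, already extracted from the provider payload.
  value: string;
  // Token accounting shared by all providers; field names are assumptions.
  usage: {
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
  };
}

// Error part: a regular Error describing what went wrong.
type LLMErrorResponse = Error;
```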
Tested OpenAI compatible providers
missing
ideas