Deploy a chatbot with Huggingface Inference API

This repository contains the code to deploy a Mistral-based chatbot using Docker Compose and Huggingface Inference API.

Technological stack

As shown in the figure below, the following frameworks are used in this project:

  • Langchain
  • Huggingface API
  • FastAPI
  • Gradio

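Under the hood, a Langchain pipeline backed by the Huggingface Inference API sends the model a single prompt string, so the chat history has to be flattened into Mistral's instruction format first. A minimal sketch of such a helper (the function name and the exact template details are illustrative assumptions, not code from this repository):

```python
def build_mistral_prompt(history, user_message):
    """Flatten a (user, assistant) chat history into Mistral's
    [INST] ... [/INST] instruction format.

    history: list of (user_turn, assistant_turn) tuples
    user_message: the new user message to answer
    """
    prompt = "<s>"
    for user_turn, assistant_turn in history:
        # Each past exchange: instruction block followed by the model's reply
        prompt += f"[INST] {user_turn} [/INST] {assistant_turn}</s>"
    # The new message is left open for the model to complete
    prompt += f"[INST] {user_message} [/INST]"
    return prompt
```

The resulting string can then be passed to a Huggingface text-generation endpoint; Gradio's chat components conveniently keep the history as the same list-of-pairs shape.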
How to use

  1. Clone the repository.

     ```shell
     git clone https://github.com/robertanto/local-chatbot-ui.git
     cd local-chatbot-ui
     ```

  2. Create a Huggingface API token as shown here and insert it in the docker-compose.yaml file.

  3. Run the containers:

     ```shell
     docker compose up -d
     ```

You can interact with the chatbot at http://localhost:7860/.
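For orientation, a docker-compose.yaml for a stack like this typically wires a Gradio front end to a FastAPI backend and passes the Huggingface token in as an environment variable. A hedged sketch only — the service names, build paths, and variable name below are assumptions, not the repository's actual file:

```yaml
services:
  backend:                # FastAPI + Langchain service (name is illustrative)
    build: ./backend
    environment:
      # Paste the token created in step 2 here
      - HUGGINGFACEHUB_API_TOKEN=<your-token-here>
  ui:                     # Gradio front end
    build: ./ui
    ports:
      - "7860:7860"       # matches http://localhost:7860/
    depends_on:
      - backend
```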
