# Deploy a chatbot with Huggingface Inference API

This repository contains the code to deploy a Mistral-based chatbot using Docker Compose and Huggingface Inference API.

## Technological stack

As shown in the figure below, the following frameworks are used in this project:

- Langchain
- Huggingface API
- FastAPI
- Gradio
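As a rough illustration of how these pieces interact, the sketch below calls a Mistral model through the Huggingface Inference API over plain HTTP. It is not taken from this repository's code: the model name, prompt format, and request parameters are assumptions.

```python
import requests

# Assumed Mistral checkpoint; the repository may pin a different one.
API_URL = "https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.2"


def format_prompt(message: str, history: list) -> str:
    """Wrap the chat history in Mistral's [INST] instruction format."""
    prompt = ""
    for user, assistant in history:
        prompt += f"<s>[INST] {user} [/INST] {assistant}</s>"
    return prompt + f"[INST] {message} [/INST]"


def chat(message: str, history: list, token: str) -> str:
    """Send one chat turn to the Huggingface Inference API."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},
        json={
            "inputs": format_prompt(message, history),
            "parameters": {"max_new_tokens": 256, "return_full_text": False},
        },
    )
    response.raise_for_status()
    return response.json()[0]["generated_text"]
```

In the repository itself this logic presumably sits behind the FastAPI backend, with Gradio providing the web UI on port 7860.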

## How to use

1. Clone the repository:

   ```shell
   git clone https://github.com/robertanto/local-chatbot-ui.git
   cd local-chatbot-ui
   ```

2. Create a Huggingface API token as shown here and insert it in the `docker-compose.yaml` file.

3. Run the containers:

   ```shell
   docker compose up -d
   ```

You can interact with the chatbot at http://localhost:7860/.
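Step 2 above asks for the token to be placed in `docker-compose.yaml`. A hypothetical excerpt of what that file might look like — service names, build paths, and the environment variable name are assumptions, not taken from the repository:

```yaml
services:
  backend:                      # hypothetical service name
    build: ./backend            # assumed build path
    environment:
      # Paste the Huggingface API token created in step 2 here.
      - HUGGINGFACEHUB_API_TOKEN=<your-token>
  ui:
    build: ./ui
    ports:
      - "7860:7860"             # Gradio UI, matching the URL above
    depends_on:
      - backend
```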