This project is an AI Orchestrator that uses an LLM (Groq) to determine the appropriate task for each user input and then executes the corresponding Docker container to process it. The frontend provides a chat-based interface for interacting with the backend.
- The frontend sends user input (prompt) to the orchestrator backend.
- The LLM (Groq) processes the input and extracts the relevant task type (e.g., `data-cleaning`, `sentiment-analysis`).
- The LLM returns a structured JSON containing:

  ```json
  { "task": "data-cleaning", "input": { "data": [...dataset provided by user] } }
  ```
- Based on the identified task, the orchestrator selects and runs the corresponding Docker container.
- The orchestrator executes:

  ```bash
  docker run --rm <task-container-name> '<input-data>'
  ```
- The container processes the input data and generates an output.
- The orchestrator captures the processed data from the container output.
- The result is formatted into a structured response and sent back to the frontend (a minimal code sketch of this flow follows this list).
- The chat UI displays:
  - The task response from the backend.
  - The processed dataset in a structured format.
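Below is a minimal Node.js sketch of this orchestration flow. It is illustrative only: the helper `getTaskFromLLM`, the response shape, and the assumption that each container prints JSON to stdout are not taken from the project's code.

```js
// orchestrator-sketch.js -- illustrative sketch, not the project's actual code
const { execFile } = require("child_process");

// Hypothetical helper: asks Groq to map the prompt to { task, input }.
// The real backend presumably calls the Groq API here and parses its JSON reply.
async function getTaskFromLLM(prompt) {
  return { task: "data-cleaning", input: { data: [] } }; // placeholder
}

// Runs the matching task container and resolves with its stdout.
function runTaskContainer(task, input) {
  return new Promise((resolve, reject) => {
    execFile(
      "docker",
      ["run", "--rm", task, JSON.stringify(input)],
      (err, stdout) => (err ? reject(err) : resolve(stdout))
    );
  });
}

async function processPrompt(prompt) {
  const { task, input } = await getTaskFromLLM(prompt);
  const output = await runTaskContainer(task, input);
  return { task, result: JSON.parse(output) }; // assumes the container prints JSON
}
```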
Navigate to the backend directory:

```bash
cd api
```

Ensure you have Node.js installed, then run:

```bash
npm install
```
Create a `.env` file in the `api/` directory and add:

```env
GROQ_API_KEY=your_groq_api_key_here
PORT=4000
```

Replace `your_groq_api_key_here` with your actual Groq API key.
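The backend presumably reads these values at startup. A minimal sketch, assuming the common `dotenv` package is used (the actual project may load its configuration differently):

```js
// config-sketch.js -- assumes the `dotenv` package; illustrative only
require("dotenv").config();

const GROQ_API_KEY = process.env.GROQ_API_KEY; // from api/.env
const PORT = process.env.PORT || 4000;         // falls back to 4000 if unset

if (!GROQ_API_KEY) {
  throw new Error("GROQ_API_KEY is missing -- check api/.env");
}

module.exports = { GROQ_API_KEY, PORT };
```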
Start the backend:

```bash
npm start
```

This will start the backend at http://localhost:4000.
Build and start the Docker task containers:

```bash
docker-compose up --build
```

This will:

- Build and start the required task containers (e.g., `data-cleaning`, `sentiment-analysis`).
- Start the orchestrator service.
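For reference, a `docker-compose.yml` for this setup might look roughly like the sketch below. Service names, build paths, and port mappings are assumptions; consult the repository's actual `docker-compose.yml`.

```yaml
# Hypothetical sketch -- the real compose file in the repo may differ.
version: "3.8"

services:
  orchestrator:
    build: ./api              # assumed: the backend lives in api/
    ports:
      - "4000:4000"
    env_file:
      - ./api/.env

  data-cleaning:
    build: ./tasks/data-cleaning       # assumed path to the task's Dockerfile
    image: data-cleaning               # tag matches the name the orchestrator runs

  sentiment-analysis:
    build: ./tasks/sentiment-analysis  # assumed path
    image: sentiment-analysis
```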
Set up and start the frontend:

```bash
cd client
npm install
npm run dev
```

This will start the frontend at http://localhost:5173.
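For context, the chat UI presumably sends each prompt to the backend with a request along these lines. The endpoint path matches the one listed below; the request and response field names are assumptions.

```js
// chat-request-sketch.js -- illustrative only; field names are assumptions
async function sendPrompt(prompt) {
  const res = await fetch("http://localhost:4000/api/process/prompt", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`Backend returned ${res.status}`);
  return res.json(); // e.g. { task, result } -- exact shape depends on the backend
}
```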
Once both the backend and frontend are running, open your browser and go to:

- Frontend UI: http://localhost:5173
- Backend API: http://localhost:4000/api/process/prompt
You should be able to interact with the AI Orchestrator through the chat interface.
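You can also check the backend directly from the command line. Here is a sketch assuming the endpoint accepts a JSON body with a `prompt` field (the actual field name may differ):

```bash
curl -X POST http://localhost:4000/api/process/prompt \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Clean this dataset: [1, 2, null, 4]"}'
```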
- Backend not starting? Ensure `PORT` is correctly set in `.env`.
- Docker not running? Run `docker ps` to check running containers.
- Frontend API errors? Ensure the backend is running at http://localhost:4000.
🚀 Now your AI Orchestrator is fully set up! Enjoy!