Full-stack personal finance assistant that combines a React dashboard, an Express/MongoDB API, and a FastAPI + LangGraph service for expense analytics and conversational insights.
- Track online and manual UPI transactions with search, caching, and nickname support for frequent payees.
- OTP-based sign-up/login with JWT session cookies, password management, and account deletion.
- Sync historical transactions from the LLM service, enrich them with nicknames, and push formatted data back for downstream analysis.
- Chatbot that uses underlying agents to let users converse with their expenses and budgets.
- FastAPI layer wraps Groq-hosted LLMs to power the chatbot, natural-language querying, merchant/date extraction, and budget checks grounded on real transactions.
https://drive.google.com/file/d/1o93DdkPTtgBXnVGRwTtNTuHS7Dn4QlqZ/view?usp=sharing
```
SEProj/
├── server.js      # Express entrypoint
├── controllers/   # Auth, expense, nickname, profile handlers
├── models/        # Mongoose schemas (users, transactions, nicknames)
├── routes/        # REST route registrations
├── frontend/      # React SPA (login, dashboard, chatbot)
├── llm/           # FastAPI + LangChain chatbot service
├── utils/         # Helpers for building agent payloads
└── req.txt        # Python dependencies for the LLM service
```
```
[React SPA] ⇄ (CORS, cookies) ⇄ [Express API] ⇄ [MongoDB]
                                      │
                                      ├─ sync → /expense (FastAPI)
                                      └─ push nicknames → /updateFormattedData (FastAPI)
```
- Expense dashboard – recent transactions, nickname editor, cached search, modal-driven manual entry, and CSV-style layout.
- Profile management – update display name, change password, delete account, and trigger a two-month historical sync.
- Fine-tuned DistilBERT for intent classification – trained on a sample of 400 prompts.
- Chatbot assistant – LLM answers spend questions, extracts merchants/date ranges, and can log new expenses conversationally.
- Budget agent – reads `llm/budgets.json`, compares the caps against actual expenses, and surfaces the top related transactions to keep answers grounded.
- Nickname-to-UPI mapping – a central store in MongoDB updates both the dashboard and the LLM context automatically.
- OTP sign-up flow – Gmail transport sends one-time codes, verified before user creation.
- Node.js ≥ 18 and npm
- Python ≥ 3.10 with `pip`
- MongoDB instance (local or remote)
- Gmail account with an App Password for transactional email
- Groq API key (for LangChain `ChatGroq`)
- Optional: Hugging Face token if the hosted intent classifier requires authentication
Create a `.env` file in the repository root:

| Variable | Description |
|---|---|
| `PORT` | Express port (defaults to 4000 in code) |
| `URL` | MongoDB connection string |
| `JWT_SECRET` | Secret for signing auth cookies |
| `MAIL_USER` | Gmail address that sends OTP emails |
| `MAIL_PASS` | Gmail App Password (not your account password) |
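Any sufficiently long random string works for `JWT_SECRET`; for example, you could generate one with Python's standard library:

```python
# Generate a random 64-character hex string suitable for JWT_SECRET
import secrets
print(secrets.token_hex(32))
```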
Create `llm/.env` for the chatbot service:

| Variable | Description |
|---|---|
| `API_KEY` | Groq API key used by the LangChain clients |
Create `frontend/.env` for the admin login:

| Variable | Description |
|---|---|
| `REACT_APP_ADMIN_EMAIL` | Admin email for login |
| `REACT_APP_ADMIN_PASSWORD` | Admin password for the app |
Never commit these files to version control.
- Install backend dependencies

  ```bash
  npm install
  ```

- Install frontend dependencies

  ```bash
  cd frontend
  npm install
  ```

- Install Python dependencies for the LLM service

  ```bash
  cd llm
  python -m venv .venv
  source .venv/bin/activate   # Windows: .venv\Scripts\activate
  pip install -r ../req.txt
  ```

- Ensure MongoDB is running and reachable at the URI you placed in `URL`.
Use separate terminals (or a process manager) for each service:
- Express API (Port 4000)

  ```bash
  npm install        # if you skipped earlier
  nodemon server.js  # or: node server.js
  ```

- React frontend (Port 3000 by default)

  ```bash
  cd frontend
  npm start          # CRA dev server
  ```

  The `dev` script in `package.json` points to Next.js; use `npm start` instead.

- FastAPI + LLM service (Port 8000)

  ```bash
  cd llm
  source .venv/bin/activate
  uvicorn app:app --reload --port 8000
  ```
Visit http://localhost:3000 once all services are up. The frontend talks to the Express API at http://localhost:4000, which in turn calls the FastAPI service at http://localhost:8000.
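To confirm each layer is reachable before using the UI, a minimal smoke test like the sketch below can help. It only assumes the default ports above, FastAPI's auto-generated `/docs` page, and the Express `GET /checkAuth` route from the table below; adjust the URLs if your routes are mounted under a different prefix or port.

```python
# smoke_test.py – quick reachability check for the three services (assumes default ports)
import requests

SERVICES = {
    "React dev server": "http://localhost:3000/",         # CRA serves the SPA shell
    "Express API": "http://localhost:4000/checkAuth",     # status depends on whether a session cookie is present
    "FastAPI LLM service": "http://localhost:8000/docs",  # FastAPI's built-in docs page
}

for name, url in SERVICES.items():
    try:
        status = requests.get(url, timeout=5).status_code
        print(f"{name:<22} -> HTTP {status}")
    except requests.RequestException as exc:
        print(f"{name:<22} -> unreachable ({exc})")
```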
| Method | Path | Description |
|---|---|---|
| POST | `/signup` | Register after OTP verify |
| POST | `/login` | Issue JWT cookie |
| POST | `/logout` | Clear auth cookie |
| POST | `/sendOTP` | Email a six-digit OTP |
| POST | `/verifyOTP` | Mark temporary user verified |
| POST | `/resetPass` | Reset password post-OTP |
| GET | `/checkAuth` | Validate session cookie |
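The sign-up/login flow can also be exercised directly over HTTP. The sketch below is a rough outline only: the paths come from the table above, but the JSON field names are assumptions, so check the `controllers/` handlers for the real payload shapes.

```python
# Sketch of the OTP sign-up/login flow against the Express API (default port 4000).
# The JSON field names ("email", "otp", "password", "name") are illustrative guesses.
import requests

BASE = "http://localhost:4000"
session = requests.Session()  # keeps the JWT cookie set by /login

session.post(f"{BASE}/sendOTP", json={"email": "user@example.com"})
session.post(f"{BASE}/verifyOTP", json={"email": "user@example.com", "otp": "123456"})
session.post(f"{BASE}/signup", json={
    "name": "Test User",
    "email": "user@example.com",
    "password": "s3cret-pass",
})

session.post(f"{BASE}/login", json={"email": "user@example.com", "password": "s3cret-pass"})
print(session.get(f"{BASE}/checkAuth").status_code)  # should confirm the session cookie is valid
```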
| Method | Path | Description |
|---|---|---|
| POST | `/getExp` | Fetch last 7 days (merged online + manual) |
| POST | `/search` | Filtered search by date range or nickname |
| POST | `/add` | Add manual transaction (nicknames supported) |
| DELETE | `/delete/:id` | Remove manual transaction |
| Method | Path | Description |
|---|---|---|
| POST | `/api/nicknames/get` | Retrieve nickname map |
| POST | `/api/nicknames/save` | Upsert nickname for a UPI ID |
| GET | `/api/profile/me` | Fetch authenticated profile |
| POST | `/api/profile/name` | Update display name |
| POST | `/api/profile/password` | Change password |
| DELETE | `/api/profile/account` | Delete account and related data |
| POST | `/api/profile/data` | Trigger 60-day sync from FastAPI |
| Method | Path | Description |
|---|---|---|
| POST | `/expense` | Return parsed transactions for an email/date |
| POST | `/chat` | LLM chatbot response for expense questions |
| POST | `/updateData` | Persist formatted transactions to `data_array.json` |
| POST | `/updateFormattedData`* | Updates LLM cache with nickname-enriched data |
*Called internally by the Express API; exposed for completeness.
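Calling the FastAPI service directly is useful when debugging the chatbot in isolation. The sketch below only assumes the `/chat` route from the table above; the request field names are guesses, so verify them against the request model in the FastAPI app (`app.py` in `llm/`).

```python
# Sketch: query the FastAPI chatbot directly (assumes default port 8000).
# The JSON field names below are illustrative, not taken from the code.
import requests

resp = requests.post(
    "http://localhost:8000/chat",
    json={
        "email": "user@example.com",                          # whose transactions to ground on (assumed field)
        "query": "How much did I spend on food last week?",   # natural-language question (assumed field)
    },
    timeout=60,  # LLM calls can take a while
)
resp.raise_for_status()
print(resp.json())
```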
- `profile/data` fetches ~60 days of history from `/expense`, upserts into `onlineTransaction`, then rebuilds the chatbot payload.
- Manual additions from the frontend hit `/api/expense/add`, writing to `manualTransaction`.
- Nickname updates rebuild the agent payload via `buildAgentJson` and POST to `/updateFormattedData`, keeping the LLM context in sync.
- Chat queries route through the LangGraph pipeline (`llm/chat.py`), which (see the routing sketch below):
  - Classifies intent with the Hugging Face model.
  - Extracts merchants and dates via Groq-hosted LLM prompts.
  - Aggregates spend metrics from `data_array.json` (expense agent).
  - Runs budget checks against `llm/budgets.json` and actual expenses, surfacing top related transactions (budget agent).
  - Optionally logs new expenses when intent detection confirms it.
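The pipeline in `llm/chat.py` is built with LangGraph, but its high-level routing can be pictured as plain Python. The sketch below is only an illustration of that control flow, not the project's actual code: the intent labels and helper functions are stand-in stubs invented for the example.

```python
# Illustrative control flow of the chat pipeline – NOT the project's actual LangGraph graph.
# All helpers below are stand-in stubs; intent labels are invented for this sketch.
from dataclasses import dataclass

def classify_intent(query: str) -> str:
    # Stand-in for the fine-tuned DistilBERT intent classifier.
    return "budget_query" if "budget" in query.lower() else "spend_query"

def extract_entities(query: str) -> tuple[list[str], tuple[str, str]]:
    # Stand-in for the Groq-hosted merchant/date extraction prompts.
    return ["ExampleMerchant"], ("2024-01-01", "2024-01-31")

def expense_agent(merchants, date_range) -> str:
    # Stand-in for aggregation over data_array.json.
    return f"Spend summary for {merchants} between {date_range[0]} and {date_range[1]}."

def budget_agent(merchants, date_range) -> str:
    # Stand-in for the budgets.json vs. actual-spend comparison.
    return "You are within budget for this period."

@dataclass
class ChatState:
    query: str
    intent: str = ""
    answer: str = ""

def handle_query(state: ChatState) -> ChatState:
    state.intent = classify_intent(state.query)
    merchants, date_range = extract_entities(state.query)
    if state.intent == "budget_query":
        state.answer = budget_agent(merchants, date_range)   # grounded on budgets.json + actuals
    else:
        state.answer = expense_agent(merchants, date_range)  # aggregates from data_array.json
    return state

print(handle_query(ChatState("How much did I spend last month?")).answer)
```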
- Dummy budgets live in `llm/budgets.json` (sample merchant and category caps).
- The FastAPI `/chat` route loads both `data_array.json` and `budgets.json`, so budget queries work out of the box.
- Budget answers are grounded on filtered transactions (top by amount) to avoid hallucinated spend; a sketch of that check follows.
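As an illustration of the grounding step, the snippet below sketches a budget check of this shape: sum actual spend per merchant, compare against the caps, and keep only the largest matching transactions as context. The field names (`merchant`, `amount`) and the budget map layout are assumptions, not the real schemas of `budgets.json` or `data_array.json`; it is meant to be run from the `llm/` directory.

```python
# Sketch of a budget check: compare actual spend per merchant against caps and keep the
# top transactions as grounding context. Field names and file layouts are assumed.
import json
from collections import defaultdict

with open("budgets.json") as f:
    budgets = json.load(f)        # assumed shape: {"Merchant": cap_amount, ...}
with open("data_array.json") as f:
    transactions = json.load(f)   # assumed shape: [{"merchant": ..., "amount": ...}, ...]

spend = defaultdict(float)
for txn in transactions:
    spend[txn["merchant"]] += float(txn["amount"])

for merchant, cap in budgets.items():
    spent = spend.get(merchant, 0.0)
    status = "over" if spent > cap else "within"
    # Keep only the largest transactions for this merchant so the answer stays grounded.
    top = sorted((t for t in transactions if t["merchant"] == merchant),
                 key=lambda t: float(t["amount"]), reverse=True)[:3]
    print(f"{merchant}: spent {spent:.2f} of {cap} ({status} budget); top txns: {top}")
```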
- Implement add-expense persistence: replace the `add_expense_in_database` placeholder in `llm/utils.py` with a real Express/FastAPI call (one possible shape is sketched below).
- Frontend: surface budget views and chat support (currently only backend/bot logic is wired).
- Backend: add a CRUD endpoint for budgets (read/write `llm/budgets.json` or a DB model) and plumb it into the chatbot loader.
- History: re-enable chat history appends in `ChatBot.add_to_history` if conversational context is desired.
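One possible shape for that replacement, purely as a sketch: have `add_expense_in_database` POST the new expense to the Express manual-entry route. The endpoint path comes from the data-flow notes above; the payload field names and the lack of auth handling are assumptions that would need to match the real controller.

```python
# Possible replacement for the add_expense_in_database placeholder in llm/utils.py.
# Endpoint path from the data-flow notes; field names and auth handling are assumptions.
import requests

EXPRESS_ADD_URL = "http://localhost:4000/api/expense/add"

def add_expense_in_database(email: str, merchant: str, amount: float, date: str) -> bool:
    """POST a conversationally-logged expense to the Express API (sketch only)."""
    payload = {
        "email": email,      # assumed field: identifies whose expense this is
        "upiId": merchant,   # assumed field name for the payee/merchant
        "amount": amount,
        "date": date,        # e.g. "2024-01-31"
    }
    try:
        resp = requests.post(EXPRESS_ADD_URL, json=payload, timeout=10)
        return resp.ok
    except requests.RequestException:
        return False
```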