🔐 Private SDLC AI Assistant

A fully local, privacy-first SDLC assistant that helps you manage, visualize, and document all phases of the Software Development Life Cycle using locally hosted Hugging Face models. No OpenAI or external API calls.

Features

  • AI SDLC Chat Assistant (local model)
  • SDLC Phase Workspace (Requirements, Design, Implementation, Testing, Deployment, Maintenance)
  • AI-generated documentation per phase + improvement suggestions
  • Visualization dashboard (progress, time allocation, done vs pending)
  • Export full report or per-phase to PDF
  • Offline mode support (HF_HUB_OFFLINE)

Tech Stack

  • Backend/Inference: Python + Hugging Face transformers
  • UI: Streamlit
  • Visuals: Plotly
  • PDF: reportlab

Directory Structure

SDLC huggingface/
├─ app.py
├─ utils.py
├─ requirements.txt
├─ README.md
└─ models/               # place offline models here (optional)

Setup

  1. Create and activate a virtual environment (recommended).
  2. Install Python dependencies:
    pip install -r requirements.txt
  3. Install PyTorch per your environment (CUDA or CPU):
    # Example (CUDA 12.1):
    pip install torch --index-url https://download.pytorch.org/whl/cu121
    # Or CPU only:
    pip install torch --index-url https://download.pytorch.org/whl/cpu
  4. (Optional) Prepare models for fully offline usage:
    • Download a model with huggingface-cli or Git LFS on a machine with internet access, then copy it into models/.
    • Suggested local folders:
      • models/mistral/ containing mistralai/Mistral-7B-Instruct-v0.2
      • models/phi3/ containing microsoft/Phi-3-mini-4k-instruct
    • Or rely on your local HF cache if pre-populated.
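
For example, both suggested models could be fetched with the Hugging Face CLI on an internet-connected machine, using the folder layout above, and the resulting folders copied to the offline machine:

```shell
# Run where internet is available, then copy models/ to the offline machine.
huggingface-cli download mistralai/Mistral-7B-Instruct-v0.2 --local-dir models/mistral
huggingface-cli download microsoft/Phi-3-mini-4k-instruct --local-dir models/phi3
```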

Offline Mode

  • Toggle "Offline mode (HF_HUB_OFFLINE)" in the sidebar to enforce offline inference.
  • Ensure the model is available locally (either in models/ or your HF cache). When offline is enabled, no network calls are attempted.
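
Under the hood, enforcing offline mode presumably amounts to setting Hugging Face's standard environment flags before any model code runs. A minimal sketch (HF_HUB_OFFLINE and TRANSFORMERS_OFFLINE are the real env vars; how the app wires the sidebar toggle to them is an assumption):

```python
# Minimal sketch: force Hugging Face libraries offline before loading models.
# These flags must be set before transformers/huggingface_hub are imported/used.
import os

def set_offline_mode(enabled: bool) -> None:
    value = "1" if enabled else "0"
    os.environ["HF_HUB_OFFLINE"] = value
    os.environ["TRANSFORMERS_OFFLINE"] = value

set_offline_mode(True)
```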

Running the App

streamlit run app.py

Then open the provided local URL in your browser.

Model Loading

  • From the sidebar, click "Load/Reload Model".
  • Preferred Model options:
    • Auto (picks Mistral if RAM ≥ 16GB, else Phi-3 Mini)
    • models/local-mistral (alias for models/mistral if present)
    • models/local-phi3 (alias for models/phi3 if present)
    • Direct HF IDs (if cached locally)
  • If bitsandbytes is available, 4-bit quantization will be used when possible.
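
The "Auto" choice above can be approximated with a standard-library RAM check. A sketch, where the 16 GB threshold comes from the option list above but the function name and the POSIX-only os.sysconf call are assumptions (on Windows you would need something like psutil instead):

```python
# Sketch: pick a model ID based on total system RAM, mirroring the "Auto" option.
import os

MISTRAL = "mistralai/Mistral-7B-Instruct-v0.2"
PHI3 = "microsoft/Phi-3-mini-4k-instruct"

def auto_pick_model(threshold_gb: float = 16.0) -> str:
    try:
        # Total physical memory = page size * number of physical pages (POSIX).
        total_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    except (ValueError, OSError, AttributeError):
        return PHI3  # conservative fallback when RAM can't be determined
    total_gb = total_bytes / (1024 ** 3)
    return MISTRAL if total_gb >= threshold_gb else PHI3
```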

PDF Export

  • Use the sidebar to export a full SDLC report.
  • Or export per-phase from the phase tab.
  • PDFs are written to the outputs/ folder.

Notes on Windows

  • bitsandbytes may not always be available or stable; the app will still run without it (using CPU or CUDA as available).
  • If VRAM/RAM is limited, prefer microsoft/Phi-3-mini-4k-instruct.

Privacy

  • No external APIs are called. In offline mode, no network calls are made at all.
  • All data remains on your machine.

Optional Ideas to Extend

  • Add voice input via local Whisper (e.g., faster-whisper) and a small audio recorder widget.
  • Add risk matrix and Gantt charts (Plotly).
  • Integrate with local Git to show commit history per phase.
