VectorSec AI Security Scanner is a web-based application built with Dash and Python to evaluate the security of Large Language Models (LLMs) by running predefined test cases and analyzing responses for potential vulnerabilities. The application supports multiple LLM providers (Ollama and OpenAI), provides detailed security analysis, and visualizes results through interactive dashboards.
- Authentication: Secure login and registration system with password hashing.
- Test Case Management: Loads test cases from a CSV file, categorized for easy selection.
- LLM Integration: Supports API calls to Ollama and OpenAI for testing LLM responses.
- Advanced Analysis: Uses sentiment analysis, TF-IDF vectorization, and semantic similarity to evaluate responses.
- Interactive Dashboard: Displays results in tables and charts, with filtering and sorting capabilities.
- Export Options: Generate PDF and CSV reports of test results.
- Progress Tracking: Real-time progress bar for test execution.
- Responsive UI: Built with Dash Bootstrap Components for a modern, user-friendly interface.
Watch the VectorSec Scanner Demo on YouTube.
- Python 3.8+
- Required Python packages (install via
pip install -r requirements.txt):- dash
- dash-bootstrap-components
- pandas
- plotly
- ollama
- requests
- fpdf
- nltk
- scikit-learn
- textblob
- A
test_cases.csvfile with columns:Category,Test Case,Prompt
- Clone the repository:
git clone https://github.com/muhammadmudassaryamin/VectorSec.git cd VectorSec - Create a virtual environment and activate it:
python -m venv venv source venv/bin/activate # On Windows: venv\Scripts\activate
- Install dependencies:
pip install -r requirements.txt
- Ensure NLTK resources are downloaded:
import nltk nltk.download('vader_lexicon') nltk.download('punkt')
- Prepare the
test_cases.csvfile in the project root with the required format.
- Run the application:
python app.py
- Open your browser and navigate to
http://127.0.0.1:8050. - Log in or register with a username and password.
- Configure the LLM provider, model, and verifier settings.
- Select test cases or run all tests, then view results in the dashboard.
- Export results as PDF or CSV using the download buttons.
app.py: Main application code with Dash app, LLM integration, and analysis logic.test_cases.csv: Input file containing test cases (not included; must be provided).requirements.txt: List of required Python packages.
- LLM Providers: Supports Ollama (local) and OpenAI (API key required).
- Test Cases: CSV file should have columns
Category,Test Case, andPrompt. - Verifier Model: Used for advanced analysis; defaults to
llama3for Ollama.
The application performs advanced security analysis using:
- Pattern Matching: Detects refusal and suspicious content in responses.
- Sentiment Analysis: Uses NLTK's VADER to assess response tone.
- Semantic Similarity: Employs TF-IDF and cosine similarity to compare responses to test cases.
- LLM Verification: Uses a secondary LLM to validate and score responses.
- Results Table: Displays test case details, severity, score, response, explanation, and mitigation.
- Charts: Visualizes severity distribution and score histograms.
- Reports: Exports results as PDF or CSV for further analysis.
Contributions are welcome! Please follow these steps:
- Fork the repository.
- Create a feature branch (
git checkout -b feature/your-feature). - Commit your changes (
git commit -m 'Add your feature'). - Push to the branch (
git push origin feature/your-feature). - Open a pull request.
This project is licensed under the Appache License. See the LICENSE file for details.