An intelligent AI assistant that combines terminal output capture and computer vision to automate penetration testing and bug bounty hunting workflows.
Support this project ❤️
- Terminal Output Capture: Real-time monitoring and analysis of terminal commands and output
- Computer Vision: Screenshot capture, OCR text extraction, and GUI element detection
- AI-Powered Analysis: Integration with Claude, GPT-4, or local Ollama models for intelligent insights
- Automated Workflows: Pre-built penetration testing workflows (recon, webapp, exploit, privesc)
- Report Generation: Automated HTML, Markdown, and JSON report creation
- Screen Recording: Record entire testing sessions for documentation
- Security Audit Logging: Track all commands and sensitive operations
- Anthropic Claude: Claude 3.5 Sonnet with vision capabilities
- OpenAI: GPT-4 Turbo with vision support
- Ollama: Local/offline AI models for privacy-conscious operations
- Intelligent command suggestions based on context
- Pattern-based terminal monitoring
- Automated reconnaissance workflows
- Vulnerability exploitation assistance
- Privilege escalation enumeration
- Custom workflow creation via AI
- OS: Kali Linux, Parrot OS, BlackArch, or any Debian/Ubuntu-based security distribution
- Python: 3.10 or higher
- RAM: Minimum 4GB (8GB recommended)
- Disk Space: 2GB for dependencies and logs
The following tools enhance Lucifer's capabilities but are not required:
- `nmap` - Network scanning
- `gobuster` - Directory enumeration
- `nikto` - Web server scanning
- `sqlmap` - SQL injection testing
- `metasploit-framework` - Exploitation framework
- `searchsploit` - Exploit database
- `whatweb` - Web technology identification
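A quick way to see which of these are already installed is a small helper loop like the one below (a sketch; it assumes `msfconsole` as Metasploit's command-line entry point):

```shell
# Check which optional tools are already on PATH before installing.
for tool in nmap gobuster nikto sqlmap msfconsole searchsploit whatweb; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "present: $tool"
  else
    echo "missing: $tool"
  fi
done
```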
# Clone the repository
git clone https://github.com/yashab-cyber/lucifer.git
cd lucifer
# Create virtual environment
python3 -m venv venv
source venv/bin/activate
# Install Lucifer
pip install -e .
# Install system dependencies (Tesseract for OCR)
sudo apt-get update
sudo apt-get install tesseract-ocr tesseract-ocr-eng
# Copy and configure environment
cp .env.example .env
nano .env  # Add your AI API keys

Edit the `.env` file with your settings:
# Choose your AI provider
AI_PROVIDER=anthropic # or openai, ollama
# Add API key for your provider
ANTHROPIC_API_KEY=your_anthropic_key_here
# or
OPENAI_API_KEY=your_openai_key_here
# Configure settings
LOG_LEVEL=INFO
AUTO_SUGGEST_COMMANDS=true
CONFIRMATION_REQUIRED=true

Verify your configuration with `lucifer config-check`.

Start Lucifer in interactive mode for full control:
lucifer start --interactive

Available Commands:
- `analyze` - Analyze current terminal and screen state
- `suggest` - Get AI-powered next-step suggestions
- `execute <command>` - Execute command with AI assistance
- `workflow <name> <target>` - Run automated workflow
- `record` / `stop-record` - Start/stop screen recording
- `report` - Generate penetration testing report
- `help` - Show all commands
Perform rapid reconnaissance on a target:
lucifer quick-scan 192.168.1.100

Execute specific workflows:
# Reconnaissance workflow
lucifer start -t 192.168.1.100 -w recon
# Web application testing
lucifer start -t example.com -w webapp
# Exploitation workflow
lucifer start -t 192.168.1.100 -w exploit
# Privilege escalation
lucifer start -t localhost -w privesc

List all available workflows with `lucifer workflows`.

# Start interactive mode
lucifer start -i
# In Lucifer shell:
lucifer> workflow recon target.com
lucifer> analyze
lucifer> suggest
lucifer> report

# Direct workflow execution
lucifer start -t https://target.com -w webapp
# The workflow will:
# - Enumerate directories with gobuster
# - Scan for vulnerabilities with nikto
# - Test for SQL injection with sqlmap
# - Generate comprehensive report

Python API usage:
from lucifer import LuciferAssistant
import asyncio
async def main():
    async with LuciferAssistant() as assistant:
        # Start monitoring
        await assistant.start_terminal_monitoring()

        # Run recon
        results = await assistant.run_automated_recon("192.168.1.0/24")

        # Get AI suggestions
        suggestions = await assistant.suggest_next_actions()

        # Generate report
        report = await assistant.generate_report()
        print(f"Report: {report}")

asyncio.run(main())

lucifer/
├── src/lucifer/
│   ├── core/
│   │   ├── assistant.py          # Main AI assistant
│   │   ├── terminal_capture.py   # Terminal monitoring
│   │   ├── vision.py             # Computer vision
│   │   ├── ai_engine.py          # AI integration
│   │   └── config.py             # Configuration
│   ├── automation/
│   │   └── workflows.py          # Pentest workflows
│   ├── utils/
│   │   ├── logger.py             # Logging utilities
│   │   └── report_generator.py   # Report generation
│   └── cli.py                    # Command-line interface
├── tests/                        # Unit tests
├── pyproject.toml                # Project configuration
└── README.md
Captures and monitors terminal output in real-time:
from lucifer.core.terminal_capture import TerminalCapture
capture = TerminalCapture(buffer_size=10000)
capture.start_shell_capture("/bin/bash")
output = capture.get_recent_output(lines=50)

Screenshot capture and analysis:
from lucifer.core.vision import ScreenCapture, OCREngine
# Capture screenshot
screen = ScreenCapture()
screenshot = screen.capture_screenshot()
# Extract text with OCR
ocr = OCREngine()
text = ocr.extract_text(screenshot)

Analyze terminal output and screenshots:
from lucifer.core.ai_engine import create_ai_engine
engine = create_ai_engine()
analysis = await engine.analyze_terminal_output(output)
suggestions = await engine.suggest_next_steps(output)

Lucifer includes built-in protection against dangerous commands:
- `rm -rf` - Recursive deletion
- `dd if=` - Disk operations
- `mkfs` - Filesystem formatting
- Fork bombs and other destructive operations
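A pattern-based filter along these lines can be sketched in a few lines of Python (a hypothetical illustration, not Lucifer's actual implementation):

```python
import re

# Hypothetical sketch of a pattern-based dangerous-command filter.
# The patterns mirror the categories listed above; the real filter
# behind DANGEROUS_COMMANDS_FILTER may differ.
DANGEROUS_PATTERNS = [
    r"\brm\s+-rf\b",          # recursive deletion
    r"\bdd\s+if=",            # raw disk reads/writes
    r"\bmkfs(\.\w+)?\b",      # filesystem formatting
    r":\(\)\s*\{.*\};\s*:",   # classic bash fork bomb
]

def is_dangerous(command: str) -> bool:
    """Return True if the command matches a known destructive pattern."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)
```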
Configure in .env:
DANGEROUS_COMMANDS_FILTER=true
CONFIRMATION_REQUIRED=true

All commands and security events are logged:
AUDIT_LOG_ENABLED=true
AUDIT_LOG_FILE=logs/audit.log- Never commit
.envfile to version control - Use environment variables in production
- Rotate API keys regularly
- Consider using Ollama for offline/sensitive operations
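Following these practices, reading keys from the process environment might look like this (a minimal sketch; the variable names match the `.env` example above, but the helper itself is hypothetical):

```python
import os

# Map each provider to its expected API-key variable (from the .env example).
KEY_VARS = {"anthropic": "ANTHROPIC_API_KEY", "openai": "OPENAI_API_KEY"}

def resolve_api_key(env=None):
    """Return the API key for the configured provider, or None for Ollama."""
    env = os.environ if env is None else env
    provider = env.get("AI_PROVIDER", "anthropic")
    if provider == "ollama":
        return None  # local models need no key
    var = KEY_VARS.get(provider)
    if var is None or not env.get(var):
        raise RuntimeError(f"Missing key or unknown provider: set {var or 'AI_PROVIDER'}")
    return env[var]
```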
Run the test suite:
# Install dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# With coverage
pytest --cov=src/lucifer --cov-report=html
# Type checking
mypy src/lucifer
# Code formatting
black src/lucifer
ruff check src/lucifer

We welcome contributions! Please follow these guidelines:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
# Clone your fork
git clone https://github.com/YOUR_USERNAME/lucifer.git
cd lucifer
# Install in development mode
pip install -e ".[dev]"
# Install pre-commit hooks
pre-commit install
# Run tests before committing
pytest

This project is licensed under the MIT License - see the LICENSE file for details.
IMPORTANT: Lucifer is designed for authorized security testing only.
- Only use on systems you own or have explicit permission to test
- Unauthorized access to computer systems is illegal
- Users are responsible for compliance with applicable laws
- The authors assume no liability for misuse of this tool
By using Lucifer, you agree to use it responsibly and ethically.
- Anthropic - Claude AI
- OpenAI - GPT-4
- Ollama - Local AI models
- Kali Linux and the cybersecurity community
- All open-source contributors
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: Wiki
- Desktop GUI application
- Multi-target parallel scanning
- Integration with Burp Suite
- Custom plugin system
- Machine learning-based vulnerability detection
- Automated exploit generation
- Cloud deployment support
- Team collaboration features
Yashab Alam
- LinkedIn: linkedin.com/in/yashab-alam
- Instagram: @yashab.alam
- Email: [email protected]
- Thanks to the cybersecurity community for inspiration
- All open-source tool developers whose work makes this possible
- AI model providers (Anthropic, OpenAI, Ollama) for powerful inference capabilities
Made with ❤️ for the cybersecurity community
⭐ Star us on GitHub if you find Lucifer useful!