VisDecode is a web application for analyzing and improving data visualization code with AI-powered assistance. Users upload chart images, VisDecode decodes them into code, and it provides intelligent feedback on visualization best practices.
- 🖼️ Image Upload & Processing: Upload chart images for automated code generation
- 🤖 AI-Powered Analysis: Leverages Ollama's Gemma 3 (27B) model
- 📊 Interactive Code Editor: Monaco-based real-time editing
- ✅ Rule Checking: Validation against best practices
- 🔄 Real-time Updates: WebSocket live feedback
- React with TypeScript
- Vite for fast development and building
- TailwindCSS for styling
- shadcn/ui component library
- React Router for navigation
- Socket.io for real-time communication
- Monaco Editor for code editing
- Flask web framework
- Flask-SocketIO for WebSocket support
- Ollama for LLM integration
- Matplotlib and Pillow for image processing
Before running this application, ensure you have the following installed:
- Node.js (v16 or higher) and npm
- Python (v3.8 or higher) and pip
- Ollama (installation guide: https://ollama.ai)
```shell
git clone https://github.com/LIA-DiTella/visdecode_webpage.git
cd visdecode_webpage
```
Create a `.env` file in the root directory with the following variables:
```shell
VITE_GOOGLE_CLIENT_ID=your_google_client_id_here
VITE_BACKEND_URL=http://localhost:5000
```
Environment variables:
- `VITE_GOOGLE_CLIENT_ID`: Your Google OAuth 2.0 Client ID (obtain from the Google Cloud Console)
- `VITE_BACKEND_URL`: The URL where your backend server is running (default: `http://localhost:5000`)
Install and run Ollama with the Gemma 3 model:
```shell
# Install Ollama from https://ollama.ai

# Pull the required model
ollama pull gemma3:27b

# Start Ollama (if not running as a service)
ollama serve
```
Note: The `gemma3:27b` model is large (~16GB). Ensure you have sufficient disk space and RAM.
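Once the model is pulled, the backend can talk to Ollama over its local HTTP API (Ollama's default endpoint is `http://localhost:11434/api/generate`). A stdlib-only sketch of such a call, separate from how the project's `llm_functions.py` actually does it:

```python
# Sketch: query a local Ollama server over its HTTP API using only the
# standard library. Assumes Ollama's default port 11434.
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    # stream=False asks Ollama for a single JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "gemma3:27b") -> str:
    """Send a prompt to the local Ollama server and return its text response."""
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `generate("Describe this chart")` requires `ollama serve` to be running and the model to be pulled, as described above.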
Install dependencies and start the development server:
```shell
# Install dependencies
npm install

# Start the development server
npm run dev
```
The frontend will be available at http://localhost:5001 (or the port shown in your terminal).
Navigate to the backend directory, install dependencies, and run the Flask server:
```shell
# Navigate to backend directory
cd backend

# Install Python dependencies
pip install -r requirements.txt

# Run the Flask application
python app.py
```
The backend will start on http://localhost:5000 by default.
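Since the app depends on three local services, a quick reachability check can save debugging time. A small stdlib sketch, using the default ports mentioned in this README:

```python
# Sketch: verify that the services this project needs are reachable.
# Ports are the defaults mentioned in this README (frontend port may differ
# if Vite picks another one).
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

SERVICES = {
    "backend (Flask)": ("localhost", 5000),
    "frontend (Vite)": ("localhost", 5001),
    "ollama": ("localhost", 11434),
}

def check_services() -> dict:
    """Map each service name to whether its port accepts connections."""
    return {name: is_port_open(host, port) for name, (host, port) in SERVICES.items()}
```

Running `check_services()` with all three services up should return `True` for every entry.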
1. **Start all services:** Ensure Ollama, the backend server, and the frontend development server are all running.
2. **Upload a chart:** Upload an image of a data visualization.
3. **Review generated code:** The AI will analyze the chart and generate corresponding code.
4. **Get feedback:** Receive suggestions for improving your visualization based on best practices.
5. **Edit and iterate:** Use the interactive code editor to make changes and see results in real time.
This project uses shadcn/ui for component management. To add new components:
```shell
npx shadcn-ui@latest add [component-name]
```
The backend uses Flask-SocketIO for real-time communication. Key files:
- `app.py`: Main routes and WebSocket handlers
- `llm_functions.py`: LLM interaction logic
- `session_manager.py`: Session state management
- CORS errors: Ensure `VITE_BACKEND_URL` in `.env` matches your backend URL
- Google login fails: Verify your `VITE_GOOGLE_CLIENT_ID` is correct and the OAuth consent screen is properly configured
- Port already in use: Change the Flask port in `app.py` or stop the conflicting process
- Ollama connection errors: Ensure Ollama is running (`ollama serve`) and the model is pulled
- Model not found: Run `ollama pull gemma3:27b`
- Out of memory: The 27B model requires significant RAM (~16GB+). Consider using a smaller model if needed
Contributions are welcome! Please feel free to submit a Pull Request.
This project is part of the LIA-DiTella research group.
For questions or support, please open an issue on GitHub or contact the LIA-DiTella team.

