LIA-DiTella/visdecode-tool


VisDecode is a web application for analyzing and improving data-visualization code with AI-powered assistance. It lets users upload chart images, decodes them into code, and provides intelligent feedback on visualization best practices.

Features

  • 🖼️ Image Upload & Processing: Upload chart images for automated code generation
  • 🤖 AI-Powered Analysis: Uses Google's Gemma 3 (27B) model, served locally through Ollama
  • 📊 Interactive Code Editor: Monaco-based real-time editing
  • ✅ Rule Checking: Validation against visualization best practices
  • 🔄 Real-time Updates: WebSocket live feedback

Tech Stack

Frontend

  • React with TypeScript
  • Vite for fast development and building
  • TailwindCSS for styling
  • shadcn/ui component library
  • React Router for navigation
  • Socket.io for real-time communication
  • Monaco Editor for code editing

Backend

  • Flask web framework
  • Flask-SocketIO for WebSocket support
  • Ollama for LLM integration
  • Matplotlib and Pillow for image processing

Prerequisites

Before running this application, ensure you have the following installed:

  • Node.js (v16 or higher) and npm
  • Python (v3.8 or higher) and pip
  • Ollama (install from https://ollama.ai)

Setup Instructions

1. Clone the Repository

git clone https://github.com/LIA-DiTella/visdecode_webpage.git
cd visdecode_webpage

2. Environment Configuration

Create a .env file in the root directory with the following variables:

VITE_GOOGLE_CLIENT_ID=your_google_client_id_here
VITE_BACKEND_URL=http://localhost:5000

Environment Variables:

  • VITE_GOOGLE_CLIENT_ID: Your Google OAuth 2.0 Client ID (obtain from Google Cloud Console)
  • VITE_BACKEND_URL: The URL where your backend server is running (default: http://localhost:5000)

3. Ollama Setup

Install and run Ollama with the Gemma 3 model:

# Install Ollama from https://ollama.ai

# Pull the required model
ollama pull gemma3:27b

# Start Ollama (if not running as a service)
ollama serve

Note: The gemma3:27b model is large (~16GB). Ensure you have sufficient disk space and RAM.
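Once running, Ollama exposes an HTTP API on localhost:11434. As a rough sketch of how the backend might query it (the function names here are illustrative, not the repository's actual code):

```python
import json
import urllib.request

# Ollama's default local endpoint for single-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for a non-streaming generation request."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt: str, model: str = "gemma3:27b") -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    body = json.dumps(build_generate_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `stream: False`, Ollama returns a single JSON object whose `response` field holds the full completion, which keeps the client code simple.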

4. Frontend Setup

Install dependencies and start the development server:

# Install dependencies
npm install

# Start the development server
npm run dev

The frontend will be available at http://localhost:5001 (or the port shown in your terminal).

5. Backend Setup

Navigate to the backend directory, install dependencies, and run the Flask server:

# Navigate to backend directory
cd backend

# Install Python dependencies
pip install -r requirements.txt

# Run the Flask application
python app.py

The backend will start on http://localhost:5000 by default.

Usage

  1. Start all services: Ensure Ollama, the backend server, and the frontend development server are all running.

  2. Upload a chart: Upload an image of a data visualization.

  3. Review generated code: The AI will analyze the chart and generate corresponding code.

  4. Get feedback: Receive suggestions for improving your visualization based on best practices.

  5. Edit and iterate: Use the interactive code editor to make changes and see results in real-time.

Development

Adding New Components

This project uses shadcn/ui for component management. To add new components:

npx shadcn-ui@latest add [component-name]

Backend Development

The backend uses Flask-SocketIO for real-time communication. Key files:

  • app.py: Main routes and WebSocket handlers
  • llm_functions.py: LLM interaction logic
  • session_manager.py: Session state management
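As a rough illustration of the session-management role (this is a hypothetical sketch, not the contents of session_manager.py), a per-client store keyed by socket ID might look like:

```python
import time

class SessionManager:
    """Hypothetical in-memory store of per-client state, keyed by socket ID."""

    def __init__(self):
        self._sessions = {}

    def create(self, sid: str) -> dict:
        # Initialize state for a newly connected client.
        session = {"created_at": time.time(), "code": "", "history": []}
        self._sessions[sid] = session
        return session

    def get(self, sid: str):
        return self._sessions.get(sid)

    def update_code(self, sid: str, code: str) -> None:
        # Keep earlier versions so feedback can reference them.
        session = self._sessions[sid]
        session["history"].append(session["code"])
        session["code"] = code

    def remove(self, sid: str) -> None:
        # Drop state when the client disconnects.
        self._sessions.pop(sid, None)
```

An in-memory dict is the simplest choice for a single-process Flask-SocketIO server; a multi-worker deployment would need an external store instead.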

Troubleshooting

Frontend Issues

  • CORS errors: Ensure VITE_BACKEND_URL in .env matches your backend URL
  • Google login fails: Verify your VITE_GOOGLE_CLIENT_ID is correct and the OAuth consent screen is properly configured

Backend Issues

  • Port already in use: Change the Flask port in app.py or stop the conflicting process
  • Ollama connection errors: Ensure Ollama is running (ollama serve) and the model is pulled
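To check whether another process is already holding the Flask port, a small stdlib-only helper (not part of the repository) can probe it:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the connection succeeds,
        # i.e. the port is already taken.
        return s.connect_ex((host, port)) == 0

# Example: port_in_use(5000) before starting the backend.
```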

Ollama Issues

  • Model not found: Run ollama pull gemma3:27b
  • Out of memory: The 27B model requires significant RAM (~16GB+). Consider using a smaller model if needed

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is part of the LIA-DiTella research group.

Contact

For questions or support, please open an issue on GitHub or contact the LIA-DiTella team.
