# The gradio loads but the model does not output anything SOLVED VRAM memory issues SOLVED #129
I have found the solution; it is contained here.

## 🚀 How to Install and Run Janus-Pro on Windows

This guide provides step-by-step instructions to successfully install and run Janus-Pro on Windows without running into errors.

### ✅ 1. Install System Dependencies

Before setting up Janus-Pro, ensure you have the following installed:

- 🔹 Microsoft Visual Studio (C++ Build Tools)
- 🔹 NVIDIA CUDA Toolkit
- 🔹 Python 3.10+ (if not already installed)

### ✅ 2. Clone the Janus-Pro Repository

### ✅ 3. Set Up a Virtual Environment

### ✅ 4. Install Required Python Packages

### ✅ 5. Run Janus-Pro
✅ If everything is installed correctly, Janus-Pro should now be running! 🚀

## ❌ Troubleshooting

### 🔹 "Torch not compiled with CUDA enabled" Error

If you see this error, make sure you manually install PyTorch first before installing requirements:

```
pip install --upgrade torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```

Then remove `pip install -e .`

### 🔹 Virtual Environment Activation Fails

If `janus_env\Scripts\Activate.ps1` fails with a security error, run this command in Administrator PowerShell:

```
Set-ExecutionPolicy Unrestricted -Scope Process
```

### 🔹 Gradio or Transformers Module Not Found

If either module is missing, install it:

```
pip install gradio transformers
```

## ✅ 6. Automating Virtual Environment Activation

Since the virtual environment must be activated every time before running Janus-Pro, create a startup script.

Windows Batch Script:

```bat
@echo off
cd /d D:\Janus\Janus
call janus_env\Scripts\activate
python demo/app_januspro.py
pause
```

Bash Script:

```bash
#!/bin/bash
cd "$(dirname "$0")"
source janus_env/Scripts/activate
python demo/app_januspro.py
```

Run this script whenever you need to start Janus-Pro automatically.

## 🎯 Final Notes

✔️ Always clone the repository before setting up the virtual environment. 🚀 Enjoy!
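To confirm that the PyTorch install from the troubleshooting step above is actually CUDA-enabled before launching the app, a small check like the following can help. This is a generic sketch, not part of the Janus-Pro codebase; the `cuda_status` helper name is hypothetical.

```python
# Hypothetical helper: report whether PyTorch is installed and CUDA-enabled.
def cuda_status():
    try:
        import torch  # only available after the pip install step above
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():
        # Name of the first visible GPU, e.g. "NVIDIA GeForce RTX 3080"
        return f"CUDA OK: {torch.cuda.get_device_name(0)}"
    return "torch installed without CUDA support"

if __name__ == "__main__":
    print(cuda_status())
```

If this prints "torch installed without CUDA support", reinstall PyTorch with the `--index-url https://download.pytorch.org/whl/cu121` wheel as shown in the troubleshooting section.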
I had issues with image generation, so I needed to optimize `app_januspro`.

Here is the summarized documentation of the changes:
This document details the changes made to the image generation component of the Janus-Pro-7B project. Originally, the image generation process used a parallel approach that led to excessive VRAM usage and stalled operations. To resolve this, the code has been refactored to generate images sequentially, thereby reducing memory pressure. The multimodal understanding (chat) functionality remains unaffected. The primary adjustments include:

- 3.2. Sequential Image Generation Function
- 3.3. Updated `generate_image` Function
- 3.4. Memory Management
- 4.2. `generate` (Sequential Version)
- 4.3. `generate_image`
- 4.4. `unpack`

The implemented changes allow the Janus-Pro-7B model to generate images sequentially, significantly reducing VRAM usage compared to the previous parallel method. This solution maintains the multimodal understanding capabilities while ensuring that image generation does not stall due to memory constraints. The code is now more robust and suitable for environments with limited GPU resources.
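The sequential approach described above can be sketched as follows. This is a simplified illustration, not the project's actual code: `generate_one_image` and `free_vram` are hypothetical stand-ins for the model's real generation call and memory-release hook.

```python
import gc

def generate_images_sequentially(prompt, num_images, generate_one_image, free_vram=None):
    """Generate images one at a time instead of in a single parallel batch.

    Only one image's worth of activations is resident at any moment, so peak
    VRAM usage stays close to that of a single generation rather than
    scaling with the batch size.
    """
    images = []
    for i in range(num_images):
        # Generate exactly one image per iteration (seed varied per image).
        images.append(generate_one_image(prompt, seed=i))
        # Release cached GPU memory between generations; with PyTorch this
        # hook would typically be torch.cuda.empty_cache().
        if free_vram is not None:
            free_vram()
        gc.collect()
    return images
```

In the real app the memory-release hook would most likely be `torch.cuda.empty_cache()`, which returns cached allocator blocks to the driver between generations.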
```
Microsoft Windows [Version 10.0.19045.5371]
(c) Microsoft Corporation. All rights reserved.

D:\Janus\Janus>myenv\Scripts\activate

(myenv) D:\Janus\Janus>python demo/app_januspro.py
Python version is above 3.10, patching the collections module.
D:\Janus\Janus\myenv\Lib\site-packages\transformers\models\auto\image_processing_auto.py:590: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use `slow_image_processor_class`, or `fast_image_processor_class` instead
  warnings.warn(
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:06<00:00,  3.06s/it]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in huggingface/transformers#24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
Some kwargs in processor config are unused and will not have any effect: ignore_id, add_special_token, image_tag, num_image_tokens, mask_prompt, sft_format.
INFO: Could not find files for the given pattern(s).
To create a public link, set `share=True` in `launch()`.
```