
When running on Windows there is no response: only "Using device: cuda" is displayed, the progress bar does not move, and there are no error messages #16

Open
miaogong opened this issue Jan 26, 2025 · 4 comments

Comments

@miaogong

When running on Windows there is no response: only "Using device: cuda" is displayed, the progress bar does not move, and there are no error messages.

@BenDes21

BenDes21 commented Feb 6, 2025

> When running on Windows there is no response: only "Using device: cuda" is displayed, the progress bar does not move, and there are no error messages.

Hi, did you manage to get it working? Do you know how to install it on Windows? :)

@erreth4kbe

```
iopaint.model_manager:init_model:46 - Loading model: lama
Traceback (most recent call last):
  File "X:\AI_app\WatermarkRemover-AI\remwm.py", line 170, in <module>
    main()
  File "G:\miniconda3\envs\py312aiwatermark\Lib\site-packages\click\core.py", line 1161, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\miniconda3\envs\py312aiwatermark\Lib\site-packages\click\core.py", line 1082, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "G:\miniconda3\envs\py312aiwatermark\Lib\site-packages\click\core.py", line 1443, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\miniconda3\envs\py312aiwatermark\Lib\site-packages\click\core.py", line 788, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "X:\AI_app\WatermarkRemover-AI\remwm.py", line 113, in main
    model_manager = ModelManager(name="lama", device=device)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\miniconda3\envs\py312aiwatermark\Lib\site-packages\iopaint\model_manager.py", line 39, in __init__
    self.model = self.init_model(name, device, **kwargs)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\miniconda3\envs\py312aiwatermark\Lib\site-packages\iopaint\model_manager.py", line 48, in init_model
    raise NotImplementedError(
NotImplementedError: Unsupported model: lama. Available models: ['cv2', 'stabilityai/stable-diffusion-xl-base-1.0']
```

Running remwm.py prints a message that iopaint does not support lama. Is the latest iopaint version the problem?

@erreth4kbe

I modified line #112 as below and the script works fine.

```python
if not transparent:
    model_manager = ModelManager(name="lama", device=device)
    logger.info("LaMa model loaded")
```
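A more defensive variant of that loading step can surface the `NotImplementedError` from the traceback instead of failing silently. The sketch below is not part of the original script: the factory indirection, the helper name, and the `cv2` fallback (taken from the error message's list of available models) are all illustrative assumptions; in the real script the factory would be iopaint's `ModelManager`, called as `ModelManager(name=..., device=...)` exactly as in the traceback.

```python
import sys

def load_inpaint_model(make_model, device, preferred="lama", fallback="cv2"):
    """Try the preferred inpainting model; fall back to one the install supports.

    make_model is a factory called as make_model(name=..., device=...), e.g.
    iopaint's ModelManager. The fallback name "cv2" is only a guess based on
    the 'Available models' list in the error above; use whatever your
    installed iopaint version actually reports.
    """
    try:
        return make_model(name=preferred, device=device)
    except NotImplementedError as exc:
        # Make the failure visible instead of dying with a bare traceback.
        print(f"{preferred!r} unavailable ({exc}); trying {fallback!r}",
              file=sys.stderr)
        return make_model(name=fallback, device=device)

# Hypothetical usage against the real library:
#   from iopaint.model_manager import ModelManager
#   model_manager = load_inpaint_model(ModelManager, device)
```

Whether the fallback model gives acceptable inpainting quality is a separate question; the point is that the script then reports *why* lama was rejected rather than hanging with no output.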

@erreth4kbe

After running IOPaint separately once, I tried again and the original script now works fine.
