Description
Here is the error I get. I am no programmer, but it seems Bark cannot run with 8 GB of VRAM (I might be wrong about that). Anyway, this is the error from the console:
```
Traceback (most recent call last):
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\gradio\routes.py", line 412, in run_predict
    output = await app.get_blocks().process_api(
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\gradio\blocks.py", line 1299, in process_api
    result = await self.call_function(
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\gradio\blocks.py", line 1021, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "G:\Bark\Bark_WebUI\bark\UI.py", line 24, in start
    audio_array = generate_audio(prompt, history_prompt=npz_names[voice])
  File "G:\Bark\Bark_WebUI\bark\bark\api.py", line 107, in generate_audio
    semantic_tokens = text_to_semantic(
  File "G:\Bark\Bark_WebUI\bark\bark\api.py", line 25, in text_to_semantic
    x_semantic = generate_text_semantic(
  File "G:\Bark\Bark_WebUI\bark\bark\generation.py", line 428, in generate_text_semantic
    preload_models()
  File "G:\Bark\Bark_WebUI\bark\bark\generation.py", line 362, in preload_models
    _ = load_model(
  File "G:\Bark\Bark_WebUI\bark\bark\generation.py", line 310, in load_model
    model = _load_model_f(ckpt_path, device)
  File "G:\Bark\Bark_WebUI\bark\bark\generation.py", line 275, in _load_model
    model.to(device)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 2 more times]
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 0; 8.00 GiB total capacity; 7.30 GiB already allocated; 0 bytes free; 7.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
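A common workaround for this OOM on 8 GB cards is to ask Bark to use its smaller model variants and/or offload idle models to system RAM via environment variables, plus the `max_split_size_mb` allocator setting the error message itself suggests. A minimal sketch, assuming an upstream Bark version that honors the `SUNO_USE_SMALL_MODELS` and `SUNO_OFFLOAD_CPU` flags (this WebUI fork may or may not pass them through):

```python
import os

# Assumption: the installed Bark honors these flags at import time.
# The small models need far less VRAM than the full-size ones.
os.environ["SUNO_USE_SMALL_MODELS"] = "True"  # load small model variants
os.environ["SUNO_OFFLOAD_CPU"] = "True"       # keep idle models in system RAM

# Reduce allocator fragmentation, as the error message suggests.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# These must be set BEFORE bark is imported, e.g. at the top of UI.py:
# from bark import preload_models, generate_audio
# preload_models()
```

The key detail is ordering: the variables have to be in the environment before `bark.generation` is imported, since it reads them at module load. Setting them in the shell that launches the WebUI works just as well.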