When I try to launch the testbed on my dataset, it crashes with the following error:

Uncaught exception: Could not allocate memory: E:\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/gpu_memory.h:112 cudaMalloc(&rawptr, n_bytes+DEBUG_GUARD_SIZE*2) failed with error out of memory
Am I correct in thinking that whether a dataset fails due to this memory limitation comes down to two variables:

- the resolution (dimensions) of the individual images
- the number of images in the dataset
Are there any other factors that affect the chances of failing to launch the testbed? Is there any way of calculating ahead of time whether a dataset will fail? For example, can I add up the resolutions of all the images and work out if they will fit into memory? How would I do this?
I have a 3080 Ti with 12 GB of VRAM.
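To make the question concrete, this is the kind of estimate I have in mind. A minimal sketch, assuming the testbed uploads each image to the GPU as RGBA at some fixed number of bytes per channel; the constants below are my guesses, not values from the instant-ngp source:

```python
import json

# My guesses, not instant-ngp's documented internals: half precision
# would be 2 bytes per channel, full float would be 4.
BYTES_PER_CHANNEL = 2
CHANNELS = 4  # RGBA

def estimate_image_bytes(transforms_path="transforms.json"):
    """Sum the raw pixel payload of every frame listed in transforms.json."""
    with open(transforms_path) as f:
        meta = json.load(f)
    total = 0
    for frame in meta["frames"]:
        # Fall back to the top-level "w"/"h" (which my transforms.json has)
        # when a frame doesn't carry its own dimensions.
        w = int(frame.get("w", meta["w"]))
        h = int(frame.get("h", meta["h"]))
        total += w * h * CHANNELS * BYTES_PER_CHANNEL
    return total

gib = estimate_image_bytes() / 2**30
print(f"image payload alone: ~{gib:.2f} GiB (of 12 GiB on a 3080 Ti)")
```

Even if that math works out, I assume the network weights, the occupancy grid, and training buffers need headroom on top of the images, so the images alone fitting wouldn't guarantee the testbed launches.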
Alternatively, if it is not possible to calculate this ahead of time: Is there any way of resizing the images in the dataset without having to re-run COLMAP? When I try downscaling my images, the testbed loads, but the result looks totally wrong, presumably because some other values inside transforms.json depend on the resolution besides the "w" and "h"?
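In case it clarifies the question, my current guess is that the pixel-denominated intrinsics (fl_x, fl_y, cx, cy) need to shrink by the same factor as the images. A rough sketch of what I've been trying, assuming those field names exist in my transforms.json and that each frame's file_path points at its image relative to the JSON file:

```python
import json
from pathlib import Path
from PIL import Image  # pip install pillow

SCALE = 0.5  # the downscale factor I want to apply

def downscale_dataset(transforms_path="transforms.json", scale=SCALE):
    path = Path(transforms_path)
    meta = json.loads(path.read_text())

    # My assumption: these fields are all expressed in pixels, so they
    # must shrink by the same factor as the images. I'm not certain this
    # is the complete list for every transforms.json variant.
    for key in ("fl_x", "fl_y", "cx", "cy"):
        if key in meta:
            meta[key] *= scale
    for key in ("w", "h"):
        if key in meta:
            meta[key] = int(round(meta[key] * scale))

    # Resize every referenced image in place (work on a copy of the dataset!).
    for frame in meta["frames"]:
        img_path = path.parent / frame["file_path"]
        img = Image.open(img_path)
        new_size = (int(round(img.width * scale)), int(round(img.height * scale)))
        img.resize(new_size, Image.LANCZOS).save(img_path)

    path.write_text(json.dumps(meta, indent=2))

downscale_dataset()
```

If the intrinsics are indeed the missing piece, then camera_angle_x should presumably stay untouched, since uniformly downscaling an image doesn't change its field of view.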