
Cuda And Pytorch Memory Usage

I am using CUDA with PyTorch 1.4.0. When I try to increase the batch_size, I get the following error: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity;

Solution 1:

I had the same problem; the following worked for me:

import torch

torch.cuda.empty_cache()  # release unused cached GPU memory held by PyTorch's allocator
# start training from here

If you still get the error after this, you should decrease the batch_size.
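One common way to apply this advice is to catch the out-of-memory error and retry with a smaller batch. The sketch below illustrates that halving strategy; `train_one_epoch` and its `memory_budget` are hypothetical stand-ins (not part of the question or answer) that simulate an OOM failure so the sketch runs without a GPU. In real code, the training step would be your PyTorch forward/backward pass, and you might also call `torch.cuda.empty_cache()` between retries.

```python
def train_one_epoch(batch_size, memory_budget=48):
    # Hypothetical stand-in for a real training step: pretend the GPU
    # runs out of memory whenever batch_size exceeds the budget.
    if batch_size > memory_budget:
        raise RuntimeError("CUDA out of memory. Tried to allocate ...")
    return batch_size  # number of samples processed per step

def find_workable_batch_size(initial=128):
    """Halve the batch size until a training step succeeds."""
    batch_size = initial
    while batch_size >= 1:
        try:
            train_one_epoch(batch_size)
            return batch_size
        except RuntimeError as e:
            if "out of memory" not in str(e):
                raise  # unrelated error: re-raise
            batch_size //= 2  # retry with half the batch
    raise RuntimeError("Could not find a workable batch size")

print(find_workable_batch_size(128))  # → 32 with the simulated budget above
```

Halving converges quickly (a handful of retries even from a large starting batch), and keeping the retry logic around the training step means a one-off allocation spike does not force you to hard-code a conservative batch size.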
