Changed tensor so that, when reallocating memory, it frees any existing memory

*before* allocating new memory.  It used to be the other way around, which
caused momentary spikes of increased memory usage.  In some cases this could
put you over the total memory available, which is obviously less than ideal
behavior.
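In effect, the reordering makes a resize's peak footprint max(old, new) instead of old + new.  Below is a minimal sketch of the same free-before-allocate pattern outside dlib, assuming a hypothetical pinned_buffer wrapper around cudaMallocHost/cudaFreeHost; none of these names come from the actual diff.

// A hypothetical pinned host buffer; not dlib's gpu_data, just an
// illustration of freeing the old block before allocating the new one.
#include <cstddef>
#include <new>
#include <cuda_runtime.h>

class pinned_buffer
{
public:
    ~pinned_buffer() { release(); }

    void resize(std::size_t new_size)
    {
        // Free the existing block *before* allocating the replacement so
        // peak usage during the resize is max(old, new), not old + new.
        release();

        void* tmp = nullptr;
        if (cudaMallocHost(&tmp, new_size*sizeof(float)) != cudaSuccess)
            throw std::bad_alloc();
        data = static_cast<float*>(tmp);
        size = new_size;
    }

private:
    void release()
    {
        if (data)
        {
            cudaFreeHost(data);  // error deliberately ignored, as in a destructor
            data = nullptr;
            size = 0;
        }
    }

    float* data = nullptr;
    std::size_t size = 0;
};

The trade-off of this ordering is that the old contents are released before the new block exists, which is fine here because the resize discards them anyway; a container that had to preserve its contents across a resize could not free first.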
Davis King 2017-10-19 10:50:40 -04:00
parent 7a317f5456
commit e9837f7035

@@ -177,6 +177,11 @@ namespace dlib
         {
             CHECK_CUDA(cudaGetDevice(&the_device_id));
+            // free memory blocks before we allocate new ones.
+            data_size = 0;
+            data_host.reset();
+            data_device.reset();
             void* data;
             CHECK_CUDA(cudaMallocHost(&data, new_size*sizeof(float)));
             // Note that we don't throw exceptions since the free calls are invariably