Why Are Operations With Dtype np.int64 Much Slower Than the Same Operations With np.int16?

Here is what I mean: a is a vector of 1,000,000 np.int64 elements, b is a vector of 1,000,000 np.int16 elements:

In [19]: a = np.random.randint(100, size=(10**6), dtype='int64')
In [20]: b = np.random.randint(100, size=(10**6), dtype='int16')

Solution 1:

Reading from memory costs something. Writing to memory costs something. You're reading four times as much data in, and writing four times as much data out, and the arithmetic itself is so much faster than the reads and writes that the operation is effectively memory-bound. CPUs are simply faster than memory (and that speed ratio has grown more and more extreme over time), so for memory-intensive work, smaller dtypes run faster.
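A minimal sketch of this effect (variable names are illustrative, not from the original post): the int64 array occupies exactly four times as many bytes as the int16 array holding the same values, so every pass over it moves 4x the data through the memory bus, and timing a simple element-wise operation on each reflects that:

```python
import numpy as np
import timeit

# Same values, two storage widths: 8 bytes vs 2 bytes per element.
a = np.random.randint(100, size=10**6, dtype='int64')
b = a.astype('int16')

# int64 stores 8 bytes per element, int16 stores 2: a 4x size ratio.
print(a.nbytes // b.nbytes)

# Time the same element-wise multiply on each array; the int16 version
# moves a quarter of the data, so it typically finishes sooner
# (exact ratios depend on the machine, caches, and SIMD width).
t64 = timeit.timeit(lambda: a * 2, number=100)
t16 = timeit.timeit(lambda: b * 2, number=100)
print(f"int64: {t64:.4f}s  int16: {t16:.4f}s")
```

Note that the compute itself (a multiply by 2) is identical in both cases; only the number of bytes streamed through memory differs, which is why the dtype width dominates the runtime here.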
