We are encountering a memory accumulation issue when using a Numba jitclass to append arrays of floats to a list within a loop. Although we clear the list at the start of every call, memory usage keeps growing in proportion to the number of loop iterations. Interestingly, the problem only arises when the class is compiled with Numba; it does not occur with a plain Python class.
Any insights or suggestions on how to address this issue would be greatly appreciated. Thank you!
Here is a minimum working (or rather "bugging") example. The output is a positive number, i.e. more memory is in use after running the loop than before. This is not the case if a plain Python class is used (remove the `@jitclass()` decorator).
Tested on:
- OS: Linux SMP x86_64 | Ubuntu 22.04.1
- Numba: 0.59.1 | 0.58.1
- NumPy: 1.26.4 | 1.24.1
- Python: 3.11.8 (GCC 12.3.0) | 3.11.0 (GCC 11.3.0)
```python
from numba.experimental import jitclass
import numba
import random
import numpy as np
import psutil
import os

@jitclass()
class testing:
    points: numba.types.List(numba.types.Array(numba.types.float64, 1, "C"))

    def __init__(self):
        # Initialize the point structure, then empty it
        self.points = [np.zeros(1)]
        self.points.clear()

    def compute(self):
        # Clear possible previous points
        self.points.clear()
        l = random.randint(0, 1000)
        for k in range(l):
            self.points.append(np.array([np.float64(k)]))
        return 0

a = testing()
process = psutil.Process(os.getpid())
base_memory_usage = process.memory_info().rss
for i in range(1000):
    a.compute()
memory_usage = process.memory_info().rss
loop_memory_usage = memory_usage - base_memory_usage
print('RAM Used (GB):', loop_memory_usage / 1e9)
```
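As a side note on the measurement itself: forcing a Python garbage collection before each RSS sample rules out growth that merely comes from not-yet-collected Python objects; if memory still grows after `gc.collect()`, pending Python garbage is not the cause. A minimal sketch of that measurement pattern (the helper name `rss_after_gc` is ours, for illustration):

```python
import gc
import os
import psutil

def rss_after_gc():
    # Force a full collection first, so pending Python garbage
    # does not inflate the RSS reading we take afterwards
    gc.collect()
    return psutil.Process(os.getpid()).memory_info().rss

base = rss_after_gc()
junk = [bytearray(1024) for _ in range(1000)]
del junk
delta = rss_after_gc() - base
print('RSS delta (bytes):', delta)
```

Note that RSS is an OS-level figure: even after objects are freed, the allocator may keep pages mapped, so a delta near zero is not guaranteed; the signal to look for is a delta that keeps growing across repeated runs of the loop.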
Thank you in advance!
PS: Note that we found a workaround by using a list of lists instead of a list of arrays, but we do not understand why the issue occurs in the first place.