I have the following function, which creates a large number of dict entries that describe cubes:
```python
import numpy as np

# CubePrimitive comes from the external API mentioned below.
def fill_dict_cubes(
    length: np.ndarray,
    width: np.ndarray,
    height: np.ndarray,
) -> np.ndarray:
    numObjects, numFrames = length.shape
    cubePrimitives = np.empty((numObjects, numFrames), dtype=object)
    for objIdx in range(numObjects):
        for frameIdx in range(numFrames):
            cubePrimitives[objIdx, frameIdx] = CubePrimitive(
                length=length[objIdx, frameIdx],
                width=width[objIdx, frameIdx],
                height=height[objIdx, frameIdx],
            ).attributes
    return cubePrimitives
```
I need to use the dict structure; it is predefined by an external API. The input data `length`, `width`, and `height` are 2D numpy arrays. The output is an object array that contains the dicts.
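For context, this is roughly how the function is called (the data here is made up just to show the shapes involved; `CubePrimitive` and its `.attributes` dict come from the external API):

```python
import numpy as np

rng = np.random.default_rng(0)
length = rng.random((1000, 200))  # shape: (numObjects, numFrames)
width = rng.random((1000, 200))
height = rng.random((1000, 200))

cubes = fill_dict_cubes(length, width, height)
print(cubes.shape)        # (1000, 200)
print(type(cubes[0, 0]))  # <class 'dict'>
```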
Since I have a large number of cubes and also many frames, filling all these dicts using nested for-loops takes quite some time. Unfortunately, I have not yet found a nice way to make this faster via vectorizing / multiprocessing / parallelization / etc.
Does anybody have a clever idea how the creation of these dicts could be made faster? Btw, I am using Python 3.10, but updating to 3.13 would be no issue if new features are required.
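For reference, the most direct multiprocessing variant I experimented with looks roughly like this (just a sketch; `_build_row` and `fill_dict_cubes_parallel` are helper names I made up, and the dicts still have to be built one by one in Python), which so far has not brought the speedup I hoped for:

```python
from multiprocessing import Pool

import numpy as np

def _build_row(args):
    # Build all dicts for one object (one row of the 2D arrays).
    length_row, width_row, height_row = args
    return [
        CubePrimitive(length=l, width=w, height=h).attributes
        for l, w, h in zip(length_row, width_row, height_row)
    ]

def fill_dict_cubes_parallel(length, width, height):
    # Split the work along the object (row) dimension across worker processes.
    with Pool() as pool:
        rows = pool.map(_build_row, zip(length, width, height))
    cubePrimitives = np.empty(length.shape, dtype=object)
    for objIdx, row in enumerate(rows):
        cubePrimitives[objIdx, :] = row
    return cubePrimitives
```

The idea was simply to parallelize over the outer (object) dimension and keep everything else unchanged, but the per-dict work in Python plus the pickling of the results seems to dominate.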