I’m constructing a graph with ~10000 nodes, where each node carries metadata that determines which other nodes it will be connected to by an edge.
Since the number of possible edges (~50M node pairs) is far greater than the number of edges that will actually be added (~300k), it is suboptimal to simply iterate over node pairs with for loops and check whether an edge should be added between them. Using some logic to filter out pairs that never need checking, with the help of numpy's fast vectorized methods, I quickly reduced the candidates to an array of ~30M pairs.
However, iterating over those instead did not improve performance much – in fact, iterating over the full 2D boolean matrix is about twice as fast as my method, which first collects the True entries of the matrix and then iterates only over those ~30M pairs. There must be a way to get the desired performance benefit, but I have hit a dead end; I would like to understand why some approaches are faster and how to improve my runtime.
Context: every node is an artist with metadata such as locations and birth and death years. I connect two artists based on a method that calculates an approximate measure of how closely they lived to each other at some point in time (e.g. two artists living in the same place at the same time, for long enough, get a high value). This is the straightforward way to do that (iterating over indices is preferred over names):
for i, j in itertools.combinations(range(len(artist_names)), 2):  # ~50M iterations
    artist1 = artist_names[i]
    artist2 = artist_names[j]
    # ...
    artist1_data = artist_data[artist1]
    artist2_data = artist_data[artist2]
    val = process.get_loc_similarity(artist1_data, artist2_data)
    if val > 0:
        G.add_edge(artist1, artist2, weight=val)
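For reference, here is a minimal sketch of the data structures the snippets in this question assume (illustrative only: the real metadata also contains locations, and process.get_loc_similarity is more involved – the stand-in below just measures lifetime overlap):

import itertools
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n = 10_000
artist_names = [f"artist_{k}" for k in range(n)]
birth_years = rng.integers(1400, 1950, size=n)
death_years = birth_years + rng.integers(20, 90, size=n)
artist_data = {
    name: {"birth": int(b), "death": int(d)}
    for name, b, d in zip(artist_names, birth_years, death_years)
}

class process:  # stand-in for my own module
    @staticmethod
    def get_loc_similarity(d1, d2):
        # dummy measure: years of lifetime overlap (the real one also uses locations)
        return max(0, min(d1["death"], d2["death"]) - max(d1["birth"], d2["birth"]))

G = nx.Graph()
G.add_nodes_from(artist_names)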
As the number of node pairs is ~50M, this runs for ~14 minutes. I reduced the number of candidates by filtering out pairs of artists whose lifetimes did not overlap. With numpy's methods running C under the hood, this executed in less than 5 seconds and left only ~30M pairs to check:
# overlap_matrix[i, j] is True when artist i's and artist j's lifetimes overlap
birth_condition_matrix = (birth_years < death_years.reshape(-1, 1))
death_condition_matrix = (death_years > birth_years.reshape(-1, 1))
overlap_matrix = birth_condition_matrix & death_condition_matrix

# (N, 2) array of index pairs where overlap_matrix is True
overlapping_pairs_indices = np.array(np.where(overlap_matrix)).T
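As an aside, since overlap_matrix is symmetric and its diagonal is True whenever an artist's birth year precedes their death year, the pair list could also be built from the strict upper triangle only, so that each unordered pair appears once and the i != j check below becomes unnecessary (a small sketch of that variant):

# keep only entries above the diagonal: each unordered pair once, no self-pairs
upper_pairs_indices = np.argwhere(np.triu(overlap_matrix, k=1))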
We can thus iterate over fewer pairs:
for i, j in overlapping_pairs_indices:  # ~30M iterations
    if i != j:
        artist1 = artist_names[i]
        artist2 = artist_names[j]
        artist1_data = artist_data[artist1]
        artist2_data = artist_data[artist2]
        val = process.get_loc_similarity(artist1_data, artist2_data)
        if val > 0:
            G.add_edge(artist1, artist2, weight=val)
To my surprise, this still runs for over ~13 minutes, instead of improving the runtime by 40% or so.
Even more surprisingly, iterating over the matrix indices directly is much faster, even though it looks at all ~50M combinations:
for i in range(len(artist_names)):
    for j in range(i + 1, len(artist_names)):  # ~50M iterations in total
        if overlap_matrix[i, j]:
            artist1 = artist_names[i]
            artist2 = artist_names[j]
            artist1_data = artist_data[artist1]
            artist2_data = artist_data[artist2]
            val = process.get_loc_similarity(artist1_data, artist2_data)
            if val > 0:
                G.add_edge(artist1, artist2, weight=val)
This ran in under 5 minutes despite again iterating ~50M times.
That is surprising and promising; I would like to figure out what makes this faster than the previous attempt, and how to modify it to be faster still.
How could I improve the runtime by using the right methods?
I wonder if there is a way to utilize numpy further, e.g. to avoid the for loop even when calling the calculation function, using something similar to a pandas DataFrame's .apply() instead – perhaps along the lines of the sketch below.
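Purely illustrative: loc_similarity_batch below is a hypothetical, array-based variant of my similarity measure that only looks at lifetime overlap (the real one also uses locations); it is just meant to show the shape of the solution I am hoping for.

# hypothetical batch version of the similarity measure: takes parallel NumPy
# arrays describing both sides of every candidate pair, returns one weight per pair
def loc_similarity_batch(births1, deaths1, births2, deaths2):
    # placeholder: years of lifetime overlap, clipped at 0
    return np.clip(np.minimum(deaths1, deaths2) - np.maximum(births1, births2), 0, None)

i_idx = overlapping_pairs_indices[:, 0]
j_idx = overlapping_pairs_indices[:, 1]
weights = loc_similarity_batch(birth_years[i_idx], death_years[i_idx],
                               birth_years[j_idx], death_years[j_idx])

keep = weights > 0
G.add_weighted_edges_from(
    (artist_names[i], artist_names[j], w)
    for i, j, w in zip(i_idx[keep], j_idx[keep], weights[keep])
)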
(I also noticed that looping through a zip, such as for i, j in zip(overlap_pairs[:, 0], overlap_pairs[:, 1]), did not improve the runtime.)