R user learning Python here. I have an undirected unweighted edgelist and I want a distance matrix, showing the shortest path between all my nodes. There are about 30000 nodes and 40000 edges, and my goal is a 30000×30000 matrix where entry ij shows distance between node i and node j.
On my i7 / 16 GB RAM PC I get the result in 59 seconds in R 4.3.2:
library(igraph)
g <- graph_from_data_frame(el, directed = F)
dm <- distances(g)
But neither attempt 1 nor 2 below finished after 10 minutes of running in Python 3.12.4 (Spyder 5.5.1):
import networkx as nx
import pandas as pd
g = nx.from_pandas_edgelist(el, 'mid', 'cid')
#Attempt 1
dm = nx.floyd_warshall_numpy(g)
#Attempt 2
gdict= dict(nx.all_pairs_shortest_path_length(g))
dm = pd.DataFrame(gdict)
where el is a pandas DataFrame and mid and cid are uint16 columns.
Am I doing something wrong here, or is Python just very slow in shortest path algorithms compared to R?
Alternatively, how can I create a distance matrix from an edgelist in Python efficiently?
My data looks something like this:
data = {
'mid': [1, 1, 2, 2, 3, 4],
'cid': [2, 3, 3, 4, 5, 5]
}
el = pd.DataFrame(data)
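For reference, one fast pure-Python route avoids NetworkX's Python-level loops entirely: SciPy's C-backed csgraph routines can compute all-pairs shortest paths from a sparse adjacency matrix. A sketch using the sample data above (`shortest_path` with `unweighted=True` runs BFS from every node):

```python
import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

data = {'mid': [1, 1, 2, 2, 3, 4],
        'cid': [2, 3, 3, 4, 5, 5]}
el = pd.DataFrame(data)

# Map arbitrary node labels to contiguous indices 0..n-1
flat = el[['mid', 'cid']].to_numpy().ravel()
nodes, idx = np.unique(flat, return_inverse=True)
rows, cols = idx[0::2], idx[1::2]
n = nodes.size

# Sparse adjacency matrix; directed=False below treats edges as undirected
adj = csr_matrix((np.ones(rows.size), (rows, cols)), shape=(n, n))
dm = shortest_path(adj, directed=False, unweighted=True)
print(dm.shape)  # (5, 5)
```

Note that a dense 30000×30000 float64 result is about 7 GB on its own, so memory, not just runtime, is a constraint at this size.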
It is possible that the Python implementations are slower than R's. But you can always reuse R functions directly in the Python code via rpy2:
from rpy2.robjects import numpy2ri
from rpy2.robjects.packages import STAP
import numpy as np
r_fct_string ='''
mat_dist <- function(el, weights){
library(igraph)
  g <- graph_from_data_frame(el, directed = FALSE)  # undirected, as in the question
  dm <- distances(g, weights = weights)
dm
}
'''
r_pkg = STAP(r_fct_string, "r_pkg")
# -- create data sample: one weight per edge, edges as [n_edges, 2] pairs
n_edges = 30000
weights = np.random.random(size=n_edges)
graph = np.random.randint(1, 30000, size=(n_edges, 2))
# -- end data sample
numpy2ri.activate()
Vt_bp = r_pkg.mat_dist(graph,weights) # returns numpy array
numpy2ri.deactivate()
print(Vt_bp.shape)