I have my own knowledge graph representation, read from ConceptNet and NELL, containing tens of millions of nodes, in which I want to calculate the shortest distance (if any) between two concept nodes. The application is to find out how two concepts are related in the simplest explainable way. The typical connectedness of the graph lies in the range 100-1000. Do I need some variant of Dijkstra's algorithm in this case? I want the solution to require at most around 10 GB of RAM; my current memory usage is around 2 GB.
Two simple solutions immediately arise:

1. Precompute everything, via something like Johnson's algorithm.
2. Use a standard search algorithm on every query, as you suggested: Dijkstra, for example, which here boils down to a plain BFS, since the graph is unweighted.
The first requires far too much storage/RAM. The second is prohibitively slow on a graph this size. What you (likely) want is a hybrid: some precomputation (possibly a long one, but one that doesn't require much storage) combined with a shorter per-query computation.
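For reference, the per-query baseline is tiny. A minimal sketch, assuming the graph is stored as an adjacency mapping (node -> iterable of neighbours), which may not match your actual representation:

```python
from collections import deque

def bfs_distance(graph, source, target):
    """Shortest hop count between source and target, or None if unreachable."""
    if source == target:
        return 0
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                if neighbour == target:
                    return dist[neighbour]
                queue.append(neighbour)
    return None
```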
Clustering
One approach would be to cluster your graph somehow and precompute the distances between the exit vertices of each cluster. A search algorithm would then never expand paths inside a cluster (instead using the precomputed minimum paths between its exits), unless that cluster contains the start or end point.
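A rough sketch of what that precomputation could look like, assuming the same adjacency mapping and an externally supplied partitioning cluster_of (node -> cluster id); all the names here are illustrative, and how you obtain the clustering is a separate problem:

```python
from collections import deque, defaultdict

def find_exit_vertices(graph, cluster_of):
    """Exit vertices = nodes with at least one neighbour in another cluster."""
    exits = defaultdict(set)
    for node, neighbours in graph.items():
        if any(cluster_of[nb] != cluster_of[node] for nb in neighbours):
            exits[cluster_of[node]].add(node)
    return exits

def intra_cluster_bfs(graph, cluster_of, source):
    """Distances from source to every node of its own cluster, staying inside it."""
    cid = cluster_of[source]
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nb in graph[node]:
            if cluster_of[nb] == cid and nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return dist

def precompute_exit_distances(graph, cluster_of):
    """For each cluster, shortest intra-cluster paths between its exit vertices."""
    exits = find_exit_vertices(graph, cluster_of)
    shortcut = {}  # (exit_a, exit_b) -> intra-cluster distance
    for cid, exit_set in exits.items():
        for a in exit_set:
            dist = intra_cluster_bfs(graph, cluster_of, a)
            for b in exit_set:
                if b != a and b in dist:
                    shortcut[(a, b)] = dist[b]
    return shortcut
```

At query time the search would expand these shortcut edges instead of the interior of a cluster, except in the clusters holding the start and end points.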
A-star
Another is to compute a heuristic and use A* (or any other heuristic-assisted search). You mentioned that you do not have any information about the graph, except the connectedness, so you may need to devise and precompute such a heuristic.
One such heuristic may be a minimal "order-n spanning graph". There is likely a proper term for this, but it's been too long since my Uni days, so I'll explain what I mean. I call an "order-n spanning graph" a collection of vertices such that every vertex in your original graph is reachable from some vertex in this collection via a path of length at most n.
If you have such a collection, along with a mapping of each vertex to its closest vertex in the spanning graph (plus the distance to it), and the distances between any two vertices of the spanning graph, then you have a heuristic:
The distance between two vertices A and B is at least the distance between their closest spanning vertices minus the distances from A and B to those spanning vertices, and therefore at least that spanning distance minus 2*n. (Why? Because, by the triangle inequality, the distance between the two spanning vertices is at most the distance between A and B plus the distances from A and B to them.)
This is an admissible heuristic, hence A* will do a good job using it.
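A minimal sketch of how that heuristic could be wired into A*, assuming the precomputation has already produced three tables; the names nearest, dist_to_nearest and spanning_dist are hypothetical, not fixed by anything above:

```python
import heapq

def make_heuristic(nearest, dist_to_nearest, spanning_dist):
    """Admissible lower bound on d(a, b), built from the precomputed tables:
       nearest[v]          -> closest spanning vertex to v
       dist_to_nearest[v]  -> distance from v to nearest[v]
       spanning_dist[s][t] -> exact distance between spanning vertices s and t
    """
    def h(a, b):
        lower = (spanning_dist[nearest[a]][nearest[b]]
                 - dist_to_nearest[a] - dist_to_nearest[b])
        return max(lower, 0)
    return h

def astar_distance(graph, source, target, h):
    """A* on an unweighted graph (every edge costs 1) with heuristic h."""
    open_heap = [(h(source, target), 0, source)]
    best = {source: 0}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == target:
            return g
        if g > best.get(node, float("inf")):
            continue  # stale heap entry
        for nb in graph[node]:
            ng = g + 1
            if ng < best.get(nb, float("inf")):
                best[nb] = ng
                heapq.heappush(open_heap, (ng + h(nb, target), ng, nb))
    return None
```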
The smaller the order n of the collection, the better the heuristic and the faster the search. But a smaller n also means the spanning graph itself will be larger, and hence you will need a larger matrix of pairwise distances. I would probably start off with an order of 50 or so, but you can tweak it depending on the exact shape/nature of the graph.
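One cheap (though not minimal) way to build such a collection is a greedy cover; this is only a sketch of the idea, not necessarily the construction intended above:

```python
from collections import deque

def bounded_bfs(graph, source, limit):
    """Nodes within `limit` hops of source, with their distances."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if dist[node] == limit:
            continue
        for nb in graph[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return dist

def greedy_order_n_cover(graph, n):
    """Greedy 'order-n spanning graph': every vertex ends up within n hops
       of some chosen vertex. Not minimal, but cheap to compute."""
    cover, covered = [], set()
    for node in graph:  # any iteration order works
        if node not in covered:
            cover.append(node)
            covered.update(bounded_bfs(graph, node, n))
    return cover
```

The per-vertex "closest spanning vertex" mapping could then be filled with one multi-source BFS from all cover vertices, and the pairwise cover distances with one BFS per cover vertex.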
Optimize for the average use-case
It is also worth noting that you can optimize not for the graph you have, but for the queries you answer. If they typically ask for precise but small distances, you may want to precompute some, but not all, values (for example, the distances from each node to everything reachable in 3 or fewer steps). You would need to fall back to one of the methods above when a query misses this cache, but that may be sufficient.
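A sketch of that cache-plus-fallback idea, under the same adjacency-mapping assumption; note that with connectedness in the hundreds, a full 3-step table over tens of millions of nodes may already blow the 10 GB budget, so the depth or the set of cached sources may need to be smaller:

```python
from collections import deque

def precompute_short_distances(graph, max_depth=3):
    """Cache: (a, b) -> hops, for all pairs at most max_depth apart.
       Memory grows roughly as nodes * connectedness**max_depth, so in practice
       you may only cache depth 1-2 or restrict the sources you cover."""
    cache = {}
    for source in graph:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            node = queue.popleft()
            if dist[node] == max_depth:
                continue
            for nb in graph[node]:
                if nb not in dist:
                    dist[nb] = dist[node] + 1
                    cache[(source, nb)] = dist[nb]
                    queue.append(nb)
    return cache

def query(graph, cache, a, b, fallback):
    """Answer from the cache when possible, otherwise use a full search."""
    if a == b:
        return 0
    if (a, b) in cache:
        return cache[(a, b)]
    return fallback(graph, a, b)  # e.g. the BFS or A* searches sketched above
```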
This also raises the question: do you really need exact distances on longer paths? Maybe the specification can be relaxed in that respect.