Estimated Shortest Path with F#

Read Time: 14 minutes

Calculating the shortest path between two nodes in a large graph can be a time-consuming process. One approach to this is to calculate an estimated path. In many situations the exact path isn’t necessary; an estimated path is good enough, especially when the performance benefits of estimating are high. I’ll look at implementing the algorithm from A Sublinear Algorithm for Approximate Shortest Paths in Large Networks in F#. It offers a hefty performance boost at the cost of some preprocessing, and it’s well worth the time.

To set the stage, imagine a large, highly connected graph with millions of nodes; social graphs and networking graphs are common examples. What is the fastest way to find the shortest path between two nodes? One method is breadth-first search (BFS) from the starting node, expanding the search until the target node is found. A similar but faster option is BiBFS: a bidirectional search that runs simultaneously from the source and target nodes and stops when the two searches meet. Another option is to preprocess the graph and store an index of the nodes and the shortest paths between them. As you can imagine, indexing an entire large graph isn’t particularly practical. A common solution is to index only a portion of the nodes. That’s what will be investigated today.
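To ground the comparison, here is what a plain one-directional BFS over an adjacency map might look like in F#. This is a baseline sketch of my own, not code from the paper; `bfsPath` and the adjacency-map shape are illustrative.

```fsharp
open System.Collections.Generic

/// Plain BFS shortest path over an adjacency map (baseline sketch)
let bfsPath (adj: Map<string, string list>) (start: string) (target: string) =
    let visited = HashSet<string>([ start ])
    let queue = Queue<string list>()
    queue.Enqueue [ start ]
    let mutable result = None
    while result.IsNone && queue.Count > 0 do
        // Paths are stored newest-node-first, so reverse before returning
        let path = queue.Dequeue()
        let node = List.head path
        if node = target then
            result <- Some(List.rev path)
        else
            for n in (Map.tryFind node adj |> Option.defaultValue []) do
                if visited.Add n then queue.Enqueue(n :: path)
    result
```

BiBFS runs two of these searches at once and stitches the paths together where they meet, which is the structure the Wormhole search later in this post modifies.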

The paper describes what it calls the Wormhole algorithm, a method of dynamically selecting a core group of nodes to be indexed. This method takes advantage of a characteristic of highly connected graphs. As a preprocessing step, the core nodes are selected and indexed. Once this is done, searches use a modified BiBFS. A path can still be found directly from source to target, but a path can also be found when both searches come into contact with the core group of nodes. At that point, the search can look up the indexed result for part of the path to improve performance, while accepting that the resulting path is now an estimate, not necessarily the shortest path.

Graph path through core nodes

Before getting into the details of the algorithm, there is a bit of structure to define.

open System.Collections.Generic

type Graph = {
    /// Nodes (in map form for fast lookup)
    Nodes: Map<string, bool>
    /// List of a node's neighbors
    AdjacencyList: Map<string, string list>
    /// Index of node-pair shortest paths
    Index: Dictionary<string * string, string list>
}

let emptyGraph =
    { Graph.Nodes = Map.empty<string, bool>
      AdjacencyList = Map.empty<string, string list>
      Index = new Dictionary<string * string, string list>() }

let rnd = System.Random()
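As an aside, for experimenting with these snippets it helps to have a way to build a Graph from an edge list. This `graphOfEdges` helper is my own illustrative addition, not from the article; it assumes undirected edges with no duplicates.

```fsharp
/// Build a Graph from an undirected edge list (illustrative helper)
let graphOfEdges (edges: (string * string) list) =
    let neighbors k m = Map.tryFind k m |> Option.defaultValue []
    let addEdge (g: Graph) (a, b) =
        { g with
            Nodes = g.Nodes |> Map.add a true |> Map.add b true
            AdjacencyList =
                g.AdjacencyList
                |> Map.add a (b :: neighbors a g.AdjacencyList)
                |> Map.add b (a :: neighbors b g.AdjacencyList) }
    edges |> List.fold addEdge emptyGraph
```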

A bit of preparation is required: the core group of nodes must be selected and indexed. First, to select the core group, a random node is picked from the graph and placed into what is called the “inner ring”. Its neighbors are then placed in the “outer ring”. The algorithm then processes nodes in the outer ring in a similar way, with one difference: nodes from the outer ring are not selected randomly. Since the goal is to leverage high connectivity, the node in the outer ring with the most connections to nodes in the inner ring is selected. The selected node is moved to the inner ring, and its neighbors are added to the outer ring. This process repeats until the inner ring reaches the desired size, at which point processing is complete. The inner and outer rings are combined into a core graph. The result is a highly connected group of core nodes to be used when performing partial path estimation. This is what will be used moving forward.
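The decomposeGraph function below leans on an `addToCin` helper that isn’t shown in the article. A hypothetical reconstruction, consistent with how it is called (move v into the inner ring, merge its unseen neighbors into the outer ring), might look like this:

```fsharp
/// Hypothetical addToCin (not shown in the article): move v into the
/// inner ring and add its unseen neighbors to the outer ring
let addToCin (cin: string list) (cout: string list) (graph: Graph) (v: string) =
    let cin' = v :: cin
    let newNeighbors =
        graph.AdjacencyList
        |> Map.tryFind v
        |> Option.defaultValue []
        |> List.filter (fun n ->
            not (List.contains n cin') && not (List.contains n cout))
    // Remove v from the outer ring now that it has been promoted
    let cout' = (cout |> List.filter ((<>) v)) @ newNeighbors
    (cin', cout')
```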

/// Create an inner and outer ring of the most connected vertices
let decomposeGraph (size: int) (graph: Graph) =
    let nodeKeys = graph.Nodes.Keys |> Seq.toArray

    let rec decomposeGraph' (cin: string list) (cout: string list) =
        if cin.Length = 0 then
            // Seed the inner ring with a random vertex
            let v = nodeKeys[rnd.Next(nodeKeys.Length)]
            let (cin', cout') = addToCin cin cout graph v
            decomposeGraph' cin' cout'
        else if cin.Length < size then
            // Move the vertex from cout with the most ties to cin
            let v =
                cout
                |> List.filter (fun x -> not (List.contains x cin))
                |> List.map (fun x ->
                    if graph.AdjacencyList.ContainsKey x then
                        let c =
                            graph.AdjacencyList[x]
                            |> List.filter (fun y -> List.contains y cin)
                            |> List.length
                        (x, c)
                    else
                        (x, 0))
                |> List.sortByDescending snd
                |> List.head
                |> fst

            let (cin', cout') = addToCin cin cout graph v
            decomposeGraph' cin' cout'
        else
            (cin, cout)

    decomposeGraph' [] []

Next, the core graph must be indexed. The index is a mapping of every node pair combination and the shortest path between them (start, target) -> path. Although computationally intensive, it is a much smaller graph, making it viable. This is also where we get the performance boost. Once path finding hits the core group of nodes, a fast index lookup can be done instead of needing to calculate the shortest path on the fly.

/// Index a graph
/// Result is a (start, target) -> path dictionary
let indexGraph (graph: Graph) (start: string) =
    let indexes = new Dictionary<string * string, string list>()
    let visited = new Dictionary<string, string list>()
    let queue = new Queue<string list>()

    let rec indexGraph' start =
        if queue.Count > 0 then
            // Keep processing while the queue is populated
            let path = queue.Dequeue()
            let node = path.Head

            if not (indexes.ContainsKey((start, node))) then
                indexes.Add((start, node), path |> List.rev)

            graph.AdjacencyList[node]
            |> List.filter (fun neighbor -> not (visited.ContainsKey neighbor))
            |> List.iter (fun neighbor ->
                visited.Add(neighbor, (neighbor :: path))
                queue.Enqueue (neighbor :: path))

            indexGraph' start
        else
            indexes

    // Run a BFS from every node; each run must be seeded with n,
    // not the outer start parameter
    for n in graph.Nodes.Keys do
        visited.Clear()
        visited.Add(n, [n])
        queue.Clear()
        queue.Enqueue [n]
        indexGraph' n |> ignore

    indexes

Creating the core graph along with its index looks like this. For this test I’ve selected an s of 600, which means the inner ring needs to reach a size of 600. This value really depends on the properties of the graph in question: its size and connectedness will impact performance, so for a specific graph some experimentation may be required. But once determined, this process only needs to be run once. The result can then be saved and used for pathfinding later.
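The snippet that follows also calls `createCoreGraph`, which isn’t shown in the article. A hypothetical sketch, consistent with how it is piped (taking the (cin, cout) rings and keeping only core nodes and the edges between them), could be:

```fsharp
/// Hypothetical createCoreGraph (not shown in the article): combine the
/// rings into a subgraph containing only core nodes and core-to-core edges
let createCoreGraph (graph: Graph) ((cin, cout): string list * string list) =
    let coreSet =
        List.append cin cout
        |> List.distinct
        |> List.map (fun n -> (n, true))
        |> Map.ofList
    { emptyGraph with
        Nodes = coreSet
        AdjacencyList =
            coreSet
            |> Map.toList
            |> List.map (fun (n, _) ->
                // Keep only neighbors that are themselves core nodes
                let ns =
                    graph.AdjacencyList
                    |> Map.tryFind n
                    |> Option.defaultValue []
                    |> List.filter (fun m -> Map.containsKey m coreSet)
                (n, ns))
            |> Map.ofList }
```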

let s = 600

let coreGraphTemp =
    graph
    |> decomposeGraph s
    |> createCoreGraph graph

let coreIndex = indexGraph coreGraphTemp (Seq.head coreGraphTemp.Nodes.Keys)

let coreGraph = { coreGraphTemp with Index = coreIndex }

Now that the core graph has been created, the next step is searching. Before getting to the BiBFS code, I have a supporting enqueue function that adds a path to a queue and records the first time a search touches a core (portal) node.

let inline enqueue (queue: Queue<string list>) (coreNodes: Map<string, bool>) (portalPath: (string list) option) (path: string list) : (string list) option =
    let node = List.head path
    // Always enqueue
    queue.Enqueue path
    if portalPath.IsNone then
        if Map.containsKey node coreNodes then
            // Found a portal node
            Some path
        else
            // Not a portal node, continue processing
            None
    else
        // Already found a portal path, just return it
        portalPath
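A quick illustration of the contract, using hypothetical values: the first time a path headed by a core node is enqueued, it comes back as the portal path, and from then on it is carried through unchanged.

```fsharp
open System.Collections.Generic

// Hypothetical illustration of enqueue's behavior
let q = Queue<string list>()
let coreNodes = Map.ofList [ ("p", true) ]
let p1 = enqueue q coreNodes None [ "a" ]     // "a" is not a core node
let p2 = enqueue q coreNodes p1 [ "p"; "a" ]  // "p" is a core node
// p1 = None; p2 = Some ["p"; "a"]; both paths were enqueued
```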

At a high level, the algorithm runs two BFSes simultaneously, one from the start node and one from the target node. When the two searches meet, the shortest path is found. The interesting aspect is the integration of the core graph: the algorithm keeps track of when each search touches one of the core nodes. If the two searches meet each other first, nothing changes. But if both searches hit the core before meeting, the path traverses the core set of nodes, and that portion of the path is looked up in the index. This is where the performance gain comes from.

/// BiBFS with the wormhole algorithm
let wormholeBfs (graph: Graph) (start: string) (target: string) =
    let visited1 = new Dictionary<string, string list>()
    let visited2 = new Dictionary<string, string list>()
    let queue1 = new Queue<string list>()
    let queue2 = new Queue<string list>()

    // Init visited
    visited1.Add(start, [start])
    visited2.Add(target, [target])

    let mutable portalPath1 : (string list) option = None
    let mutable portalPath2 : (string list) option = None

    // Init processing queues
    portalPath1 <- enqueue queue1 coreGraph.Nodes portalPath1 [start]
    portalPath2 <- enqueue queue2 coreGraph.Nodes portalPath2 [target]

    let rec wormholeBfs' () =
        if portalPath1.IsSome && portalPath2.IsSome then
            // Both ends are linked to the core graph
            let n1 = portalPath1.Value |> List.head
            let n2 = portalPath2.Value |> List.head
            let corePath : (string list) option =
                if coreGraph.Index.ContainsKey (n1, n2) then
                    Some coreGraph.Index[(n1, n2)]
                else
                    None
            if corePath.IsSome then
                Some(
                    List.concat [
                        portalPath1.Value |> List.tail |> List.rev
                        corePath.Value
                        // portalPath2 is already in portal-to-target order
                        portalPath2.Value |> List.tail ])
            else
                None
        else if queue1.Count > 0 && queue2.Count > 0 then
            // Keep processing while both queues are populated
            let path1 = queue1.Dequeue()
            let node1 = path1.Head

            let path2 = queue2.Dequeue()
            let node2 = path2.Head

            if visited2.ContainsKey node1 then
                // Node from start path is in target visited
                let path1' =
                    path1
                    |> List.tail
                    |> List.rev
                Some(path1' @ (visited2[node1]))
            elif visited1.ContainsKey node2 then
                // Node from target path is in start visited
                let path1' =
                    visited1[node2]
                    |> List.tail
                    |> List.rev
                Some(path1' @ path2)
            else
                // No common nodes found, add neighbors of the current nodes
                try
                    graph.AdjacencyList[node1]
                    |> List.iter (fun neighbor ->
                        if not (visited1.ContainsKey neighbor) then
                            visited1.Add(neighbor, (neighbor :: path1))
                            portalPath1 <- enqueue queue1 coreGraph.Nodes portalPath1 (neighbor :: path1))
                with
                | _ -> ()

                try
                    graph.AdjacencyList[node2]
                    |> List.iter (fun neighbor ->
                        if not (visited2.ContainsKey neighbor) then
                            visited2.Add(neighbor, (neighbor :: path2))
                            portalPath2 <- enqueue queue2 coreGraph.Nodes portalPath2 (neighbor :: path2))
                with
                | _ -> ()

                wormholeBfs' ()
        else
            None

    wormholeBfs' ()

That’s pretty much it. Preprocessing provides a partial index that allows for faster search results, and the search then uses this index. So how do the results look? It should be noted that any results are going to be highly dependent on the characteristics of the graph as well as the size of the core indexed graph, so as with anything performance-related, take this with a grain of salt. But the results can be pretty impressive. Below are some sample pathfinding times between random nodes of a 10 million node graph. In many cases the wormhole version is drastically faster, although it often comes at the cost of a not-quite-shortest path. The index can provide upwards of a 90% performance improvement, and even a nominal improvement is common. There is some overhead from the extra portal node checks, so occasionally the search actually takes longer when the index isn’t used.

BiBFS            Wormhole          Gain  Length (delta)
00:00:00.0225713 00:00:00.0044238 80.4% 9 (8)
00:00:00.0063691 00:00:00.0011482 82.0% 10 (4)
00:00:00.0312000 00:00:00.0092429 70.4% 8 (8)
00:00:00.0085600 00:00:00.0034126 60.1% 10 (5)
00:00:00.0184681 00:00:00.0025473 86.2% 9 (7)
00:00:00.0137133 00:00:00.0046735 65.9% 10 (6)
00:00:00.0193507 00:00:00.0027667 85.7% 10 (6)
00:00:00.0047823 00:00:00.0003425 92.8% 10 (4)
00:00:00.0006894 00:00:00.0009595 -39.2% 8 (0)
00:00:00.0422815 00:00:00.0033562 92.1% 10 (7)
00:00:00.0077808 00:00:00.0021191 72.8% 10 (6)
00:00:00.0008566 00:00:00.0007513 12.3% 7 (6)
00:00:00.0148260 00:00:00.0013760 90.7% 10 (5)
00:00:00.0103843 00:00:00.0031217 69.9% 9 (8)
00:00:00.0081660 00:00:00.0017522 78.5% 10 (2)
00:00:00.0045205 00:00:00.0035961 20.4% 10 (7)
00:00:00.0343872 00:00:00.0066955 80.5% 10 (5)
00:00:00.0080696 00:00:00.0048364 40.1% 9 (7)
00:00:00.0075896 00:00:00.0122219 -61.0% 9 (8)
00:00:00.0036436 00:00:00.0036039 1.1% 9 (0)

There are a couple of ways the provided code could be improved, both in performance and in the quality of the estimated path. But even a relatively naive implementation provides some nice benefits. Hopefully you found some value in this dive into estimated shortest paths and selective indexing. If nothing else, it is valuable to remember that estimates can often be good enough, especially on large datasets where performance is an issue. Until next time.

References:

A Sublinear Algorithm for Approximate Shortest Paths in Large Networks (original source: https://arxiv.org/pdf/2406.08624)