Did you also benchmark other code? Like in this post?
No. Here's the 5 million path tree, the command line and timing; but from a cursory glance at the code you reference, it's going to require a lot of memory (10GB or more).
c:\test>868031 -S=7529 A Z
{
A => ["N", "W", "J", "L", "C", "E", "X", "T", "O", "H"],
C => ["Z", "J", "Q", "U", "S", "T", "N", "P", "D", "O"],
E => ["P", "G", "U", "F", "X", "A", "Y", "K"],
H => ["W", "O", "J"],
J => ["Z", "U", "B", "Q", "N", "I", "V", "F", "C", "P"],
K => ["X", "H", "J", "C", "P", "W", "E", "S", "Q"],
L => ["C", "B", "V", "A", "S", "J", "O", "H"],
N => ["R", "G", "K", "N", "Q", "W", "C", "U", "E", "V"],
O => ["K", "G", "X", "A", "Z", "W"],
P => ["O", "G", "F", "T", "E", "U", "L", "H", "B", "R"],
Q => ["V", "N", "X", "U", "D", "M", "S", "C", "R", "G"],
R => ["D", "S", "K", "X", "O", "U"],
S => ["Q", "E", "T", "P", "G", "Z"],
U => ["Y", "V", "U", "X", "R", "W", "M", "G", "K", "N", "A"],
W => ["P", "E", "G", "Y"],
X => ["Z", "H", "R", "L", "J", "W", "A", "E", "X", "T", "D"],
Y => ["J", "X", "G"],
Z => ["K", "A", "Z"],
}
5014604 FP2 took 981.75 secs for 5014604 paths
If you prefer the 10,000 path graph as a test:
c:\test>868031 -S=7367 A Z
{
A => ["F", "B", "U", "Z", "J", "C", "Q", "H"],
D => ["W", "X", "F", "M", "K", "Y"],
G => ["E", "P", "U"],
H => ["K", "Q", "S", "T", "X", "G", "D", "B"],
I => ["U", "Q", "K", "D"],
K => ["O", "R", "A", "L", "X", "N", "C", "M"],
L => ["G", "I", "A", "O", "N", "J", "D", "S", "R", "V", "M"],
N => ["K", "L", "V", "Z", "U"],
O => ["W", "K", "D", "I", "A", "J", "M", "T", "Y", "Z", "P"],
P => ["B", "G"],
R => ["V", "B", "G", "P"],
T => ["M", "I", "N", "K", "D", "U", "A", "V", "W"],
U => ["V", "Z", "J", "A", "E"],
V => ["M", "G", "O", "F", "W", "Y", "P", "S"],
X => ["K", "S", "B", "P", "N", "T", "W", "Z", "H", "R", "F"],
Y => ["Q", "K", "J", "U", "G", "M", "P"],
Z => ["A", "M", "Z", "Q", "W", "N", "G", "J", "L", "H"],
}
10062 FP2 took 2.10 secs for 10062 paths
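The dumps above are plain adjacency lists (a hash of arrays), so the counting task itself is easy to reproduce. Here's a minimal recursive sketch in Python rather than the original Perl; the three-node graph below is a made-up example, not one of the seeded graphs above:

```python
def count_paths(graph, node, goal, seen):
    """Count all simple (cycle-free) paths from node to goal via DFS."""
    if node == goal:
        return 1
    seen.add(node)
    total = sum(count_paths(graph, nxt, goal, seen)
                for nxt in graph.get(node, ()) if nxt not in seen)
    seen.discard(node)  # backtrack: node may lie on other paths
    return total

# Tiny made-up graph in the same adjacency-list shape as the dumps above.
graph = {
    "A": ["B", "C"],
    "B": ["C", "Z"],
    "C": ["Z"],
}
print(count_paths(graph, "A", "Z", set()))  # A-B-C-Z, A-B-Z, A-C-Z -> 3
```

The same shape scales to the seeded graphs; only the path count (and hence the runtime) explodes.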
I can't see the point of copying around the %seen hash, or even the current path of a DFS. :) Furthermore, this code can easily be linearized to avoid the function-call overhead... In general there are still plenty of possible optimizations left to speed up such a search.
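The linearization suggested above can be sketched like this (Python, not the original Perl): an explicit stack of neighbour iterators replaces the recursion, and a single shared seen set is marked on the way down and unmarked on the way back, so nothing is ever copied per step:

```python
def count_paths_iterative(graph, start, goal):
    """Linearized DFS: explicit stack, no recursion, one shared seen set."""
    count = 0
    seen = {start}
    # Each stack frame is (node, iterator over its remaining neighbours).
    stack = [(start, iter(graph.get(start, ())))]
    while stack:
        node, it = stack[-1]
        nxt = next(it, None)
        if nxt is None:               # neighbours exhausted: backtrack
            stack.pop()
            seen.discard(node)
        elif nxt == goal:             # a complete simple path found
            count += 1
        elif nxt not in seen:         # descend; O(1) mark, no hash copy
            seen.add(nxt)
            stack.append((nxt, iter(graph.get(nxt, ()))))
    return count

graph = {"A": ["B", "C"], "B": ["C", "Z"], "C": ["Z"]}
print(count_paths_iterative(graph, "A", "Z"))  # -> 3
```

Besides removing the function-call overhead, this also sidesteps recursion-depth limits on deep graphs.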
Improvements or alternatives welcome :) . I'm not claiming it's the fastest on the block, just better than my own previous efforts.
neversaint expressed interest in the time complexity of my previous version, so I set out to improve it. Given the combinatorial explosion that can result from an all-paths traversal of apparently quite simple graphs, moving to an iterator rather than an accumulating generator seemed the logical route.
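The iterator-versus-accumulator distinction matters precisely because of that explosion: yielding one path at a time keeps memory proportional to the path length rather than the path count (5 million paths never need to exist at once). A hedged Python sketch of the idea, using a generator over a made-up graph:

```python
def iter_paths(graph, start, goal):
    """Yield each simple path from start to goal, one at a time."""
    path, seen = [start], {start}
    stack = [iter(graph.get(start, ()))]
    while stack:
        nxt = next(stack[-1], None)
        if nxt is None:                  # backtrack off an exhausted node
            stack.pop()
            seen.discard(path.pop())
        elif nxt == goal:
            yield path + [goal]          # copy only the path being reported
        elif nxt not in seen:
            path.append(nxt)
            seen.add(nxt)
            stack.append(iter(graph.get(nxt, ())))

graph = {"A": ["B", "C"], "B": ["C", "Z"], "C": ["Z"]}
for p in iter_paths(graph, "A", "Z"):
    print("-".join(p))
# A-B-C-Z
# A-B-Z
# A-C-Z
```

The caller can count, filter, or stop early without the producer ever building the full result set.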