Every seek-based approach I can think of carries some line-length bias (even if it's not a bias toward the length of the chosen line). Why not just read through the whole file once?
Update: to clarify, I mean using the `rand($.) < 1 && ($it = $_) while <>;` approach suggested in How do I pick a random line from a file?. On a 10-million-line file (the top-level `*.c` files from bleadperl concatenated together and repeated 100 times), this took 70 seconds.
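For readers unfamiliar with the Perl idiom, here is a sketch of the same single-pass technique (reservoir sampling with a reservoir of size 1) translated into Python; the function name is mine, not from any library. On line *n*, the current line replaces the candidate with probability 1/*n*, which leaves every line equally likely once the whole file has been read:

```python
import random

def pick_random_line(lines):
    """Return one uniformly random item from an iterable in a single pass.

    Mirrors the Perl one-liner: rand($.) < 1 succeeds with probability
    1/$. on line number $., so each line ends up chosen with equal
    probability without knowing the line count in advance.
    """
    chosen = None
    for n, line in enumerate(lines, start=1):
        if random.random() * n < 1:  # true with probability 1/n
            chosen = line
    return chosen

# Usage: works on any iterable, e.g. an open file handle.
# with open("somefile.txt") as fh:
#     line = pick_random_line(fh)
```

Like the Perl version, this never seeks and never stores more than one line, so its cost is one linear scan of the file, which is exactly the trade-off being timed above.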