Of the options you listed, padding all lines to a fixed length or building an index of offsets would almost certainly be the fastest. Since you said you have plenty of disk to burn, padding would probably be the faster of the two: you'd only need one seek per lookup, versus one seek to find the offset in your index file/db and then a second seek into the actual data file. The tradeoff is increased preprocessing time, since padding requires two passes over the data file (one to find the longest line and one to write the padded data), and you'd be rewriting the whole thing on the second pass. It sounds like you'll be using the file enough for the gains in lookup time to outweigh the extra preprocessing time, though.
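Here's a minimal sketch of both approaches in Python; the function names, file-path parameters, and the 8-byte little-endian index format are just illustrative assumptions, not anything prescribed:

    def pad_file(src_path, dst_path):
        """Padding approach: rewrite every line at a fixed width.
        Pass 1 finds the longest line; pass 2 writes the padded copy."""
        with open(src_path, "rb") as src:
            width = max(len(line.rstrip(b"\n")) for line in src)
        record = width + 1  # fixed record size, trailing newline included
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            for line in src:
                dst.write(line.rstrip(b"\n").ljust(width) + b"\n")
        return record

    def padded_lookup(path, record, n):
        """One seek: line n (0-based) starts at byte n * record."""
        with open(path, "rb") as f:
            f.seek(n * record)
            return f.read(record).rstrip()

    def build_offset_index(src_path, idx_path):
        """Index approach: store each line's byte offset as a
        fixed-width 8-byte integer, so the index itself is seekable."""
        with open(src_path, "rb") as src, open(idx_path, "wb") as idx:
            offset = 0
            for line in src:
                idx.write(offset.to_bytes(8, "little"))
                offset += len(line)

    def indexed_lookup(data_path, idx_path, n):
        """Two seeks: one into the index, one into the data file."""
        with open(idx_path, "rb") as idx:
            idx.seek(n * 8)
            offset = int.from_bytes(idx.read(8), "little")
        with open(data_path, "rb") as f:
            f.seek(offset)
            return f.readline().rstrip()

Note that the index only needs one pass to build and doesn't touch the original file, which is why it wins when preprocessing time matters more than per-lookup cost.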
And then there's the unlisted option 5: stuff it into an actual database. Databases are designed to look up arbitrary records quickly, although that again raises the question of whether you'll be reusing the same dataset often enough to justify the time spent inserting and indexing all the data.
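A minimal sketch of that route using Python's built-in sqlite3 module (the table and column names are just placeholders); declaring the line number as INTEGER PRIMARY KEY makes each lookup a direct rowid fetch:

    import sqlite3

    def build_db(src_path, db_path):
        """One pass: load every line into a SQLite table keyed by line number."""
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE lines (lineno INTEGER PRIMARY KEY, text TEXT)")
        with open(src_path) as src:
            con.executemany(
                "INSERT INTO lines VALUES (?, ?)",
                ((i, line.rstrip("\n")) for i, line in enumerate(src)),
            )
        con.commit()
        return con

    def db_lookup(con, n):
        """Fetch line n (0-based) by its primary key."""
        row = con.execute("SELECT text FROM lines WHERE lineno = ?", (n,)).fetchone()
        return row[0] if row else None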