Dear all who can provide feedback,
I have a question. I am working with an enormous file in which each line consists of a random string (e.g., abc123). My task is to read all the data just once and print every unique string together with the number of times it appears (so if abc123 shows up 1,000 times, the output would say "abc123: 1,000").
So my question is: in your opinion, what is the fastest and most efficient way to do this? Would you evaluate each line as you read the file, would you read the whole file into an array or hash and then evaluate it, or would you do it another way? A sketch of the first approach is below.
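For concreteness, here is a minimal sketch of the streaming approach, assuming Perl (suggested by the mention of a hash). It tallies each line in a hash as the file is read, so memory holds only the unique strings and their counts, never the whole file:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Tally each line as it is read; memory use grows with the
    # number of unique strings, not with the size of the file.
    my %count;
    while (my $line = <>) {
        chomp $line;
        $count{$line}++;
    }

    # Print each unique string and its count.
    print "$_: $count{$_}\n" for sort keys %count;

Run as, for example, "perl count.pl huge_file.txt" (the filename and script name here are just placeholders).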
I appreciate your input.