Thank you everyone for all the insight.
I have tried switching over to Text::CSV_XS, and while it produces the same output, there was no noticeable speed difference.
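For anyone curious, here's a minimal sketch of how I'm using it, assuming a simple two-column ip,url file (the sample file name and contents here are made up for illustration, not my real data):

```perl
use strict;
use warnings;
use Text::CSV_XS;

# Write a tiny sample file so this sketch is self-contained;
# the real script reads the big log file instead.
my $file = 'sample-urls.csv';
open my $out, '>', $file or die "open $file: $!";
print $out "1.2.3.4,http://example.com/a\n5.6.7.8,http://example.com/b\n";
close $out;

my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1 });
open my $in, '<', $file or die "open $file: $!";

my @rows;
while (my $row = $csv->getline($in)) {
    push @rows, $row;    # each row is an arrayref: [ ip, url ]
}
close $in;
unlink $file;

printf "parsed %d rows\n", scalar @rows;
```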
As a test, I removed the if (!(exists)) check on the hash, and the script completed in about 45 seconds, so I'm pretty sure that's where the bottleneck is.
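To show what I mean, here's a simplified sketch of the kind of check I'm talking about (the sample data and key format are made up for illustration, not my actual code):

```perl
use strict;
use warnings;

# Hypothetical sample lines standing in for the big file (ip,url pairs):
my @lines = (
    "1.2.3.4,http://example.com/a",
    "1.2.3.4,http://example.com/a",   # duplicate that the check should skip
    "5.6.7.8,http://example.com/b",
);

my %seen;      # ip|url pairs already recorded
my @output;    # deduplicated results

for my $line (@lines) {
    my ($ip, $url) = split /,/, $line, 2;
    my $key = "$ip|$url";
    if (!(exists $seen{$key})) {      # the check I removed for the timing test
        $seen{$key} = 1;
        push @output, "$ip $url";
    }
}

print "$_\n" for @output;
```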
For the fun of it, I let it run overnight just to see how long it would take to run in its current state.
The good news is that now that I've processed the big file with the script, subsequent files are going to be MUCH smaller... but I'm still going to try to get this script running faster.
Here are the results of letting it run overnight:
perl test-urlListbyIP.pl
Lines: 50000
Lines: 100000
Lines: 150000
Lines: 200000
Lines: 250000
Lines: 300000
Lines: 350000
Lines: 400000
Lines: 450000
Lines: 500000
Lines: 550000
Lines: 600000
Lines: 650000
Lines: 700000
Lines: 750000
Lines: 800000
Lines: 850000
Formatting Output...
List End:1316
Execution Time: 25977.87 s
Check out that runtime!!!
I'll post updates if I get it going more quickly. Thanks again for all the feedback.