Have you considered using the Benchmark module to time your results?
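A minimal sketch using the core Benchmark module (the two subs here are toy stand-ins; swap in your real approaches):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Benchmark qw(timethese cmpthese);

    my @data = (1 .. 10_000);

    # Toy alternatives; replace the bodies with your real code.
    my $results = timethese(100, {
        slurp_then_process => sub {
            my @copy = map { $_ * 2 } @data;   # build the whole list in memory first
        },
        one_at_a_time => sub {
            my $sum = 0;
            $sum += $_ * 2 for @data;          # process each element as it comes
        },
    });
    cmpthese($results);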
I reckon there is a better technique for processing your elements than loading them all into memory, processing them, then collecting the garbage.
Where is your data coming from?
If it's DBI, you could do some funky chicken with selects and loops.
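Something along these lines, purely a sketch; the DSN, credentials, table and column names (my_table, id, payload) are all placeholders:

    use strict;
    use warnings;
    use DBI;

    # Placeholders throughout: adjust DSN, credentials, and SQL to your setup.
    my $dbh = DBI->connect('dbi:mysql:database=mydb', 'user', 'pass',
                           { RaiseError => 1 });

    my $sth = $dbh->prepare('SELECT id, payload FROM my_table');
    $sth->execute;

    # Fetch one row at a time rather than fetchall_arrayref,
    # so only the current row sits in memory.
    while (my ($id, $payload) = $sth->fetchrow_array) {
        process_row($id, $payload);
    }
    $sth->finish;
    $dbh->disconnect;

    sub process_row { my ($id, $payload) = @_ }   # stand-in for your real work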
If it's a flat file, you could parse one line at a time, process it, and stick it on a stack to be whisked off to your DB. (Not sure whether the overhead of many I/Os would take more time than processing 100,000,000 rows in memory.)
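A rough sketch, assuming a file called data.txt and batched inserts; process() and flush_batch() are stand-ins for your own code:

    use strict;
    use warnings;

    my @batch;
    my $BATCH_SIZE = 1_000;   # tune this: bigger batches mean fewer DB round trips

    open my $fh, '<', 'data.txt' or die "Can't open data.txt: $!";
    while (my $line = <$fh>) {
        chomp $line;
        push @batch, process($line);        # only one line in memory at a time
        flush_batch(\@batch) if @batch >= $BATCH_SIZE;
    }
    flush_batch(\@batch) if @batch;         # whatever is left over
    close $fh;

    sub process     { my ($line) = @_; return uc $line }  # stand-in processing
    sub flush_batch { my ($rows) = @_; @$rows = () }      # stand-in DB insert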
If your data is sparse, perhaps you could do some "preprocessing" on the source before the "real" processing begins.
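For instance, one cheap filtering pass before the heavy lifting; the file names and skip rules here are made up, so use whatever defines "sparse" in your data:

    use strict;
    use warnings;

    open my $in,  '<', 'raw.txt'      or die "raw.txt: $!";
    open my $out, '>', 'filtered.txt' or die "filtered.txt: $!";
    while (my $line = <$in>) {
        next if $line =~ /^\s*$/;   # skip blank lines
        next if $line =~ /^#/;      # skip comments
        print {$out} $line;
    }
    close $in;
    close $out;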
What about forking, or splitting the work across several scripts, if you have multiple data sources?
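A bare-bones sketch of forking one child per source; the source names and process_source() are placeholders:

    use strict;
    use warnings;

    my @sources = ('source_a.txt', 'source_b.txt', 'source_c.txt');

    my @pids;
    for my $src (@sources) {
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {              # child: handle one source, then exit
            process_source($src);
            exit 0;
        }
        push @pids, $pid;             # parent: remember the child
    }
    waitpid $_, 0 for @pids;          # wait for every child to finish

    sub process_source { my ($src) = @_; print "processing $src in pid $$\n" }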
If you are collecting your garbage just before your script ends, you may not need to bother, as Perl will free the memory when it exits!
Are you collecting the garbage to improve the performance of your DBI calls (i.e. to avoid paging to disk)? Is there a trade-off to be considered here?
Don't know if this will help, but as they say (and I am more than aware of it): "You can't think of everything all the time."
HTH