4.5GB is a huge amount of data, and it does appear the CPU is actually working through all of it.
Whether or not you take GrandFather's suggestion to use HTML::Parser, you definitely want to take his suggestion of replacing the slurp with a record-at-a-time read.
A while loop reading a record at a time will allow for useful print statements for debugging or progress reporting.
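As a rough sketch of what that might look like (the file name, record separator, match pattern, and reporting interval below are all placeholders you'd replace with your own):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical input file -- substitute the real 4.5GB file.
    my $file = 'huge_input.dat';

    open my $fh, '<', $file or die "Can't open $file: $!";

    my $count = 0;
    while (my $line = <$fh>) {    # one record (line) at a time
        $count++;

        # Placeholder for the actual extraction logic -- a regex here,
        # or feeding the chunk to HTML::Parser per GrandFather's suggestion.
        print $line if $line =~ /pattern/;

        # Progress report every million records so you can see it's alive.
        print STDERR "processed $count records\n"
            if $count % 1_000_000 == 0;
    }

    close $fh;

That way you never hold more than one record in memory, and the STDERR output tells you immediately whether the script is making progress or stuck.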