in reply to Reduce the time taken for Huge Log files

I'm not going to try modifying your code, but here's an example of what you could do with the URL portion of each line from the log file. It handles both http and https, non-subdomain URLs with or without a leading www, mixed capitalization, etc. I've compiled the counts here rather than writing the matched lines to a secondary log file, but it would be easy to modify the code to do so.
use strict;
use warnings;

# Domains of interest; anything else is ignored.
my @b = (
    "corp.home.ge.com",
    "scotland.gcf.home.ge.com",
    "marketing.ge.com",
    "home-school.com",
);
my %b;
$b{$_} = 0 for @b;

# while() reads one line at a time, so memory stays flat on huge logs.
while (<DATA>) {
    $_ = lc $_;

    # Strip the scheme and an optional "www.", then capture the host up
    # to the first slash or end of line. Note the escaped dot in www\.,
    # and the guard on the match: without it, a non-matching line would
    # leave $1 holding the value from the previous successful match.
    if (m{^https?://(?:www\.)?(.*?)[/\n]}) {
        $b{$1}++ if exists $b{$1};
    }
}

print "$_ => $b{$_}\n" for @b;

__DATA__
http://corp.home.ge.com/page/whatever.php3
https://scotland.gcf.home.ge.com
http://sub.marketing.ge.com/
HTTP://marketing.ge.com/
http://www.home-school.com/mypage.html
http://home-school.com/mypage.html
https://MARKETING.ge.com/testpage.html