What would be the best way to extract, record, and count a single field in a very large log file (an Apache access log)? The file is intentionally large (it's for a legal request) and comes to 1.5TB. What I would like to do is pull the date from each line, count how many requests there were per date, then report the number of requests for each date along with the date itself. If the file weren't so large, I could just do something like:
cat logfile.log | awk '{print $4}' | sort | uniq -c

However, reading a 1.5TB file into memory just isn't going to work :)
Where would I start?
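One direction worth sketching: the memory problem in the pipeline above comes from sort, not from awk, which already reads line by line. An awk associative array keyed by date keeps only one counter per distinct date in memory, so a single streaming pass works regardless of file size. A minimal sketch, assuming the date lives in field 4 in the standard common/combined log format (e.g. [10/Oct/2023:13:55:36) and that logfile.log is a placeholder name:

awk '{
    # Field 4 looks like [10/Oct/2023:13:55:36 in the common/combined
    # format; take everything before the first colon and drop the
    # leading bracket to get just the date.
    split($4, parts, ":")
    date = substr(parts[1], 2)
    counts[date]++
}
END {
    # One line per distinct date: count first, then the date itself.
    for (d in counts)
        print counts[d], d
}' logfile.log

Since a 1.5TB log spans a bounded number of days, the array stays tiny even though the file is huge; the run is I/O-bound rather than memory-bound.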