in reply to Re: How do you parallelize STDIN for large file processing?
in thread How do you parallelize STDIN for large file processing?

I'd check whether tcpdump is the first bottleneck. It has to do a lot of work per packet and I doubt it is multithreaded. "time tcpdump -options >/dev/null" gives the minimum run time the perl script can ever see; watch the load or "top" while it runs.

Recording smaller tcpdump_infiles in the first place, or splitting an existing capture at a packet boundary, allows running multiple tcpdump processes in parallel; see the sketch below.
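
Something along these lines, perhaps (a rough, untested sketch; "parse.pl", the chunk names and the 200 MB size are all made up for illustration). tcpdump's own -w/-C rotation splits a capture at packet boundaries, so the pieces stay valid savefiles:

    # one-time split of the existing capture into ~200 MB pieces:
    #   tcpdump -r tcpdump_infile -w chunk -C 200
    # then fork one tcpdump | perl pipeline per piece:
    use strict;
    use warnings;

    my @chunks = glob 'chunk*' or die "no chunk files found\n";

    my @pids;
    for my $chunk (@chunks) {
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {    # child: run one full pipeline, then exit
            exec 'sh', '-c',
                "tcpdump -n -r '$chunk' | perl parse.pl > '$chunk.out'"
                or die "exec failed: $!";
        }
        push @pids, $pid;
    }
    waitpid $_, 0 for @pids;    # collect all children

Each child gets its own tcpdump and its own perl, so the work spreads across cores; the price is merging the *.out files afterwards.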

OTOH it might be justified to skip tcpdump's text output entirely and parse the tcpdump_infile directly from perl; see the sketch below.
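
For example, with Net::Pcap plus NetPacket from CPAN (a minimal sketch; the per-packet work is obviously a placeholder for whatever the script actually extracts), the saved capture can be read without a tcpdump process or a pipe in between:

    use strict;
    use warnings;
    use Net::Pcap;              # libpcap bindings from CPAN
    use NetPacket::Ethernet;
    use NetPacket::IP;

    my $err;
    my $pcap = Net::Pcap::open_offline('tcpdump_infile', \$err)
        or die "open_offline: $err";

    # called once per packet with the raw frame; decode what you need
    sub process_packet {
        my ($user, $hdr, $raw) = @_;
        my $eth = NetPacket::Ethernet->decode($raw);
        return unless $eth->{type} == NetPacket::Ethernet::ETH_TYPE_IP();
        my $ip = NetPacket::IP->decode($eth->{data});
        # placeholder for the real per-packet work:
        print "$ip->{src_ip} -> $ip->{dest_ip} $ip->{len}\n";
    }

    Net::Pcap::loop($pcap, -1, \&process_packet, '');   # -1: all packets
    Net::Pcap::close($pcap);

That removes tcpdump's text formatting and the pipe from the picture; whether it beats a tuned regex over tcpdump's output is worth benchmarking.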
