in reply to Perl vs C++

Alright, I've damn near gotten tired of this argument. Every single time I hear someone say "that problem is too big for Perl, we're going to have to throw C++ at it," I get sick to my stomach, because I know that the moment I look the person in the eye and ask "gee, does that mean we're ignoring the von Neumann bottleneck today?" I'm going to discover that their IT training is abysmal at best. I am so very tired of people repeating things they've "heard" from "this guy who knows a whole bunch about 'puters," as if that made them true.

Plain and simple: for 99% of the server log processing tasks that *I* have encountered, no matter the size of the data file, the bottleneck isn't the processing of the data, it's the I/O from reading and writing it. And in Perl, just as in C++, that is a highly optimized, streamlined operation, only a few steps removed from a stdlib call.
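To make that concrete, here's a minimal sketch of the kind of job I mean; the filename and the Common Log Format field layout are my assumptions, not anything from this thread. The point is that the per-line work is trivial next to the cost of pulling the bytes off disk:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Count responses per HTTP status code in an access log.
    # The work done per line is trivial; on a big file the run time
    # is dominated by getting the bytes off disk, not by this loop.
    my %count;
    open my $fh, '<', 'access.log' or die "can't open access.log: $!";
    while (my $line = <$fh>) {
        my @fields = split ' ', $line;                # naive whitespace split
        $count{ $fields[8] }++ if defined $fields[8]; # field 8 = status code
    }
    close $fh;
    printf "%s: %d\n", $_, $count{$_} for sort keys %count;

Rewrite that loop in C++ and you'll shave the in-memory part, sure, but the disk doesn't care what language is waiting on it.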

Do the math: the time required to process in memory the lines you've read efficiently (I suggest looking into some of the block-buffered reading hacks we've discussed on this site; this one was quite useful to us, and I've sketched the idea below...) will be greatly outstripped by the time it takes simply to get the data into memory in the first place, regardless of the language used to manipulate it (C, C++, or assembly isn't going to make your hard drive any faster...). Having parsed data files approaching a terabyte in size in *extremely* reasonable times, I have to tell you that I feel Perl is more than qualified for 99% of these jobs.
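For reference, here's roughly the shape of the block-buffered approach. A sketch only: the filename is assumed, and the 1 MB block size is a guess to be tuned against your own disks, not gospel. The idea is to grab big chunks with sysread and carve the lines out yourself, rather than paying readline's per-line overhead:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Read the file in large blocks and split lines ourselves.
    # $BLOCK is an assumed tunable, not a magic number.
    my $BLOCK = 1 << 20;    # 1 MB per read
    open my $fh, '<', 'access.log' or die "can't open access.log: $!";
    my $tail = '';          # partial line carried over from the previous block
    while (sysread($fh, my $buf, $BLOCK)) {
        $buf = $tail . $buf;
        my $cut = rindex $buf, "\n";
        if ($cut < 0) { $tail = $buf; next; }   # no newline yet, keep buffering
        $tail = substr $buf, $cut + 1;          # save the incomplete last line
        for my $line (split /\n/, substr($buf, 0, $cut)) {
            # ... per-line processing goes here ...
        }
    }
    close $fh;
    # $tail now holds any final line that lacked a trailing newline

One caveat: once you go the sysread route, stick with it on that filehandle; mixing it with regular <$fh> reads on the same handle is asking for trouble.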