I am not looking for a way to handle files. My shell script gathers the file names and passes them to my Perl script; I do not want Perl to take over that part, and I am fine with it as it is. My Perl script then connects to "another program", executes some commands there (depending on the argument passed in from the shell), and exits. But this happens hundreds of thousands of times. Why not open the connection to "that program" only ONCE and reuse the same connection those hundreds of thousands of times?
I certainly understand you asking for detailed information, and I am quite aware of the posting guidelines, but this is part of an experiment for a TREC proposal, so it is impossible for me to describe such a long thesis here. If I have not managed to clarify what I am looking for so far, then that is the limitation of my English and there is nothing more we can do.
One final try from me: though it is not actually the case, let us assume the "external program" we are dealing with is MySQL. That is: my Perl script opens an IPC connection to MySQL, selects a DB to work on, and creates a table named $ARGV[0]. That is the way it is; we cannot change it. Also assume that we want to create 100,000 different tables. Now, why open a connection to MySQL 100,000 times? Let us open the connection and select the DB once at the beginning, and then just create the 100,000 tables. It would save us 99,999 × (the time to open a new connection and select the database), wouldn't it?
Still does not make sense? :(
In reply to Re^4: batch processing via single open
by karden
in thread batch processing via single open
by karden