in reply to Submitting large data to CGI
When I've done this sort of thing in the past, I've left a configuration variable in my code that tells it how many records to push at a time -- that way, I can easily adjust the number to push single records, set it insanely high to push all of the records at once, or pick something in the middle to send lots of medium-sized blocks.
You can then make an educated guess as to what will work best, and adjust down the road should you need to tune this aspect of the process.
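A minimal sketch of that knob in Perl -- the `$BATCH_SIZE` variable and the `push_batch()` sub are hypothetical names for illustration; `push_batch()` stands in for whatever actually POSTs a block of records to the CGI script:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The tunable knob: 1 pushes single records, a huge value pushes
# everything in one shot, anything in between sends medium blocks.
my $BATCH_SIZE = 50;

# Placeholder for the real submission code (e.g. an HTTP POST of
# this chunk of records); here it just reports how many it got.
sub push_batch {
    my @records = @_;
    return scalar @records;
}

my @records = (1 .. 120);
my $sent = 0;

# splice() destructively peels off up to $BATCH_SIZE records per pass,
# so the loop ends naturally when the array is empty.
while ( my @chunk = splice( @records, 0, $BATCH_SIZE ) ) {
    $sent += push_batch(@chunk);
}

print "$sent\n";    # all 120 records sent, in chunks of at most 50
```

Since the chunk size lives in one variable, re-tuning later is a one-line change rather than a restructuring of the loop.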
Oh -- and as for the mod_perl comment -- I'd probably avoid it for this application. If you're only going to be making a few calls against it a month, it's not worth keeping the code resident in memory outside the times when it's actually being run. _Maybe_ if you're going to push single records it'll be worth it, but my gut tells me that the costs will outweigh any benefits for this one workload.
(If the time to do the bulk processing matters more than the normal processing done by the machine, then maybe mod_perl -- but if it were that important, you'd have taken the 'don't use HTTP' path without question.)