I can't keep a counter because my goal is to make a Perl MODULE from this script (something like LWP::Parallel, but more efficient because it uses non-blocking sockets instead of select()).
And this module will be used from a user script... and that script can open its own files... any number of files... so inside my module I have no way of knowing how many files the main script has open.
Update: I never said that I open only 200 sockets! To download 200 urls/sec I must open many more than 200 sockets (~900), but after I open them and reach a speed of 200 urls/sec, I open one new socket each time a url finishes downloading and its socket is closed.
So, when the script starts it opens ~900 sockets, and every second 200 of those 900 sockets are closed and 200 new ones are opened.
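For what it's worth, on Linux you can count your own open descriptors without keeping a counter by listing /proc/self/fd. A minimal sketch (the helper name open_fd_count is my own, and this is Linux-specific, not portable):

```perl
use strict;
use warnings;

# Hypothetical helper, Linux-only: each entry in /proc/self/fd is one open
# descriptor. Subtract 1 to exclude the descriptor opendir() itself holds
# while we read the directory.
sub open_fd_count {
    opendir( my $dh, '/proc/self/fd' ) or return undef;
    my @fds = grep { /^\d+$/ } readdir($dh);
    closedir($dh);
    return scalar(@fds) - 1;
}

my $before = open_fd_count();
open( my $fh, '<', '/dev/null' ) or die "open: $!";
my $after = open_fd_count();
printf "before=%d after=%d\n", $before, $after;    # $after == $before + 1
```

Comparing that count against the limit from `ulimit -n` (or POSIX::sysconf) would tell the module how close it is to the ceiling regardless of what the calling script has opened.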
powerman,
unfortunately, a public API for that piece of the task structure has never been exposed. I looked for one but couldn't find it. If you're opening handles quickly, you could always use fileno to tell you how close you are to the max, but once things get busy and you start closing handles, descriptor reuse will kick in and that approach will break.
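The reuse point is easy to see directly. A minimal demo using only core Perl (the filehandle names are mine):

```perl
use strict;
use warnings;

# The kernel always hands out the lowest free descriptor number, so a
# closed fd is reused immediately -- which is why fileno() can't serve
# as a running count once handles start being closed and reopened.
open( my $fh1, '<', '/dev/null' ) or die "open: $!";
my $first = fileno($fh1);
close($fh1);

open( my $fh2, '<', '/dev/null' ) or die "open: $!";
my $second = fileno($fh2);

print "first=$first second=$second\n";    # same number both times
```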
I have to ask: what are you going to do when you get close to the max? Sleep? Can't you replicate that same behaviour by wrapping the open(s) in an eval? It seems to me the following would be equivalent:
# the way you want
if ( magical_how_many_open() < $what_i_need ) {
    do_wait_some_how();
}

# the way you're being pushed
eval {
    # open() doesn't die on its own, so it must be made fatal for
    # the eval/$@ check below to see anything
    open( my $fh, '<', 'whatever' ) or die $!;
};
if ($@) {
    # match, don't eq: die() appends " at FILE line N." to the message
    if ( $@ =~ /Too many open files/ ) {
        do_wait_some_how();
    }
    else {
        degrade_gracefully();
    }
}
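Since a failed open() sets $! rather than dying, you can also skip the string matching entirely and test the errno itself through the standard Errno module and the magic %! hash. A sketch under that approach (open_with_backoff and the retry policy are mine, not from the thread):

```perl
use strict;
use warnings;
use Errno qw(EMFILE);    # core module; EMFILE is "Too many open files"

# Hypothetical loop: retry an open, backing off only when the failure
# is specifically "too many open files"; anything else is fatal.
sub open_with_backoff {
    my ($path) = @_;
    while (1) {
        if ( open( my $fh, '<', $path ) ) {
            return $fh;                       # success
        }
        if ( $!{EMFILE} ) {                   # %!{KEY} is true only for the current errno
            sleep 1;                          # i.e. do_wait_some_how()
        }
        else {
            die "open $path failed: $!";      # i.e. degrade_gracefully()
        }
    }
}

my $fh = open_with_backoff('/dev/null');
print "opened fd ", fileno($fh), "\n";
```

This avoids both the eval and the locale-fragile message comparison.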
-derby