in reply to Re: Iteration through large array using a N number of forks.
in thread Iteration through large array using a N number of forks.

Wow. Excellent, thanks everyone! Extremely appreciated. As a bonus question: would it be all that difficult to simply have those threads sleep instead of dying outright, and then fire back up to reiterate through the file? I'm pondering the overhead of having the overall script fire up every x minutes and spawn 30 kids, versus just having it sleep with its 30 kids and then reiterate.

Re^3: Iteration through large array using a N number of forks.
by BrowserUk (Patriarch) on Feb 22, 2005 at 22:21 UTC
    Would it be all that difficult to also simply have those threads sleep instead of dying outright, and then fire back up to reiterate through the file?

    If you use threads, then that is no problem whatsoever :)
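
    For example, something along these lines (an untested sketch; the file name urls.txt, the 30-worker and 60-second defaults, and the three-pass demo loop are all placeholders): the workers block on a shared queue between passes, and the parent just re-feeds the queue on each cycle, so nothing ever has to be respawned.

    #! perl -slw
    use strict;
    use threads;
    use Thread::Queue;

    our $THREADS ||= 30;    # persistent workers; override with -THREADS=n
    our $DELAY   ||= 60;    # seconds to sleep between passes over the file

    my $Q = Thread::Queue->new;

    # Workers never exit; they just block on the queue between passes.
    my @kids = map {
        threads->new( sub {
            while( defined( my $item = $Q->dequeue ) ) {
                print "processing: $item";    # real work goes here
            }
        } );
    } 1 .. $THREADS;

    # The parent re-reads the file and re-feeds the queue every $DELAY
    # seconds, rather than respawning 30 children each cycle.
    for my $pass ( 1 .. 3 ) {    # 3 passes for the demo; loop forever in real use
        open my $fh, '<', 'urls.txt' or die $!;
        while( <$fh> ) { chomp; $Q->enqueue( $_ ) }
        close $fh;
        sleep $DELAY;
    }

    $Q->enqueue( ( undef ) x $THREADS );    # an undef tells each worker to quit
    $_->join for @kids;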


    Examine what is said, not who speaks.
    Silence betokens consent.
    Love the truth but pardon error.
      Reading through some other threads, it seems that LWP::Parallel might do an equally effective job. What would be the smallest chunk of code to request an array of URLs (with a timer) and measure their response times with LWP::Parallel? The examples I've found are rather thick for what I'm trying to accomplish.

        I agree. The examples for LWP::Parallel are rather intimidating.

        I think this is about as easy as parallel LWP gets:

        #! perl -slw
        use strict;
        use threads;
        use threads::shared;
        use Thread::Queue;
        use LWP::Simple;
        use Time::HiRes qw[ time ];

        $|=1;

        our $THREADS ||= 3;
        our $DELAY   ||= 10;

        sub ping {
            my( $Q, $done ) = @_;
            my $tid = threads->self->tid;

            while( not $$done ) {
                my( $time, $url ) = split ':', $Q->dequeue;

                select undef, undef, undef, 0.01 while time < $time;

                my $start = time;
                printf "($tid) %20s returned [%50.50s] and took %f seconds\n"
                    , $url
                    , join( ' ', grep{ defined } head( "http://$url" ) )
                    , time() - $start
                    ;

                $Q->enqueue( ( $DELAY + time() ) . ":$url" );
            }
        }

        my $Q = new Thread::Queue;
        $Q->enqueue( map{ chomp; time() . ":$_" } <DATA> );

        my $done : shared = 0;
        my @threads = map{ threads->new( \&ping, $Q, \$done ) } 1 .. $THREADS;

        <STDIN>;
        print "Stopping...";

        $done = 1;
        $_->join for @threads;

        __DATA__
        www.yahoo.com
        www.aol.com
        www.altavista.com
        www.time.com
        www.whitehouse.gov
        www.parliament.uk
        www.scottish.parliament.uk
        www.europarl.org.uk
        www.nasa.com
        www.perl.com
        www.perl.org
        www.activestate.com
        www.bbc.co.uk
        www.ibm.com
        www.google.com
        www.cnn.com
        www.perlmonks.com
        www.microsoft.com
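
        Because the shebang line uses perl's -s switch, the defaults above can be overridden from the command line without editing the script, e.g. perl ping.pl -THREADS=10 -DELAY=30 (the file name ping.pl is just a stand-in). Hitting return on STDIN stops it.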

        Output (truncated for posting)
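
        And if you do want to stay with LWP::Parallel itself, this is about the minimum, going by the register/wait interface in LWP::Parallel::UserAgent's synopsis (an untested sketch; the URL list, timeout, and in-flight limit are placeholders). Note that it only times the batch as a whole; per-URL timing needs the callback form of register().

        #! perl -slw
        use strict;
        use LWP::Parallel::UserAgent;
        use HTTP::Request;
        use Time::HiRes qw[ time ];

        my $pua = LWP::Parallel::UserAgent->new();
        $pua->timeout( 10 );    # per-request timeout in seconds
        $pua->max_req( 5 );     # how many requests to keep in flight at once

        # register() queues a request; it returns a response object only on error
        for my $url ( map "http://$_", qw[ www.perl.org www.perl.com www.bbc.co.uk ] ) {
            if( my $err = $pua->register( HTTP::Request->new( GET => $url ) ) ) {
                print STDERR $err->error_as_HTML;
            }
        }

        my $start = time;
        my $entries = $pua->wait;    # block until every request has completed

        for my $entry ( values %$entries ) {
            my $res = $entry->response;
            printf "%s returned %s\n", $res->request->url, $res->status_line;
        }
        printf "All %d requests completed in %.3f seconds\n",
            scalar keys %$entries, time() - $start;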


        Examine what is said, not who speaks.
        Silence betokens consent.
        Love the truth but pardon error.