in reply to References and Arrays

In this case, it would simply be:
push @$reqs, $value;
You dereference an array ref with @{ }, but when the reference lives in a simple scalar variable, the sigil alone (@$reqs) is all you need.
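To spell out both spellings (the variable names here are just for illustration):

```perl
my $reqs = [ 'a', 'b' ];

push @$reqs, 'c';        # sigil form: fine for a simple scalar variable
push @{ $reqs }, 'd';    # block form: required for complex expressions,
                         # e.g. push @{ $hash{key} }, 'e';

print scalar @$reqs;     # 4
```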

On another note, I might create the data structure like so:

my @url = qw(
    http://www.bbc.co.uk
    http://www.google.com
    http://www.perlmonks.org
    http://www.dhtmlcentral.com
    http://www.lycos.co.uk
);
my $reqs = [ map HTTP::Request->new( 'GET', $_ ), @url ];
Also, it is possible that you don't need an array reference. Might an array simply work for you?
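If a plain array does suffice, the same construction without the reference might look like this (a sketch; HTTP::Request ships with libwww-perl, which you already have via LWP::Parallel::UserAgent):

```perl
use strict;
use warnings;
use HTTP::Request;

my @url  = qw( http://www.bbc.co.uk http://www.google.com );
my @reqs = map { HTTP::Request->new( 'GET', $_ ) } @url;

# iterate directly - no dereferencing needed
for my $req ( @reqs ) {
    print $req->uri, "\n";
}
```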

jeffa

L-LL-L--L-LL-L--L-LL-L--
-R--R-RR-R--R-RR-R--R-RR
B--B--B--B--B--B--B--B--
H---H---H---H---H---H---
(the triplet paradiddle with high-hat)

Replies are listed 'Best First'.
Re: (jeffa) Re: References and Arrays
by CodeJunkie (Monk) on Mar 10, 2003 at 22:14 UTC

    Thanks, that second way you suggested works great! I'm not really sure if a simple array would work; I modified the code from the LWP::Parallel::UserAgent examples on CPAN, so I don't fully understand what it is doing.

    Perhaps you could take a look at my code and see what you think?!

    http://www.perlmonks.org/index.pl?node_id=241752

    I'd be happy to hear any comments you might have!

    Cheers
    Tom

      Your code looks pretty good. Use strict warnings and diagnostics or die is worth a read regarding the benefits of strict. foreach and for are synonyms. You use aliasing to $_ in one loop and a my $var in another, which is inconsistent; it makes for easier-to-read code to write for my $whatever (@widgets) {}, as your code becomes more self-documenting, especially if the contents of @widgets are not immediately apparent. This loop structure:

      $num_domains = scalar @domains;
      while ( $num_domains > '0' ) {    # Start Loopy Code
          [blah]
          my $new_domain = shift @domains;
          [blah]
          $num_domains = scalar @domains;    # get the size of array
      }

      is pretty obtuse to me. while loops are fine, but they will also loop forever if you never falsify the condition (which is easy to do). All you really need is a simple

      for ( 0 .. $#domains ) {
          # do stuff
      }
      # or
      for my $root_domain ( @domains ) {
          # blah
          push @domains, @new_domains;
      }

      The second code snippet is nifty because if, say, you extract the links from a page and then push their domains onto @domains, the @domains list gets longer as you iterate over it and... you have an instant spider, without using recursion per se. You can use this technique of iterating over a list while adding to it inside the loop to 'recurse' over virtually any structure you like. There is an example at Link Checker, FWIW, that will spider a website. You can also walk a directory tree in a few lines with:
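      As a minimal sketch of the grow-the-list-while-looping idea (the %links table is made up, standing in for pages you would actually fetch; note this leans on foreach iterating over the live array, which perlsyn warns can be confusing):

      ```perl
      use strict;
      use warnings;

      # hypothetical link table: each 'page' maps to the pages it links to
      my %links = (
          'a' => [ 'b', 'c' ],
          'b' => [ 'c', 'd' ],
          'c' => [],
          'd' => [ 'a' ],    # cycle back to 'a'
      );

      my @queue = ('a');
      my %seen  = ( 'a' => 1 );

      # pushing onto @queue inside the loop extends the iteration,
      # so every reachable page gets visited exactly once
      for my $page ( @queue ) {
          for my $link ( @{ $links{$page} } ) {
              next if $seen{$link}++;
              push @queue, $link;
          }
      }

      print "@queue\n";
      ```

      The %seen hash is what keeps the cycle from looping forever, exactly as it would in a real spider.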

      my $root = 'd:/perl/';
      my ( @dirs, @fails, @files );
      @dirs = ( $root );
      for my $dir ( @dirs ) {
          opendir DIR, $dir or die $!;    # or: do { push @fails, $dir; next };
          while ( $_ = readdir DIR ) {
              next if m/^\.\.?$/;    # skip . and .. or the walk never ends
              do { push @dirs,  "$dir/$_"; next } if -d "$dir/$_";
              do { push @files, "$dir/$_"; next } if -f "$dir/$_";
          }
          closedir DIR;
      }
      print "$_\n" for @dirs;

      Sure, you can use File::Find, but this is short, sweet, and quite a bit faster.
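      For comparison, a File::Find version of the same walk might look like this (a sketch; $root is taken from the snippet above, and inside the wanted sub $_ is the bare entry name while $File::Find::name is the full path):

      ```perl
      use strict;
      use warnings;
      use File::Find;

      my $root = 'd:/perl/';
      my ( @dirs, @files );

      # find() calls the wanted sub once per entry, depth-first
      find(
          sub {
              return if m/^\.\.?$/;
              push @dirs,  $File::Find::name if -d;
              push @files, $File::Find::name if -f;
          },
          $root
      );

      print "$_\n" for @dirs;
      ```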

      cheers

      tachyon

      s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print

        $ ln -s . "except_it_is_broken"

        I don't understand: why does speed matter that much? Have you benchmarked?

        Makeshifts last the longest.