in reply to Checking URL's and deleting element arrays without HTML

That's a clever way to handle the indexing problem, but you're still pulling the rug out from under your loop array. What happens when the loop's alias list refers to something beyond the new end of the array?

Splicing elements out of the loop array is what causes the skips. The elements above the gap are copied downward to fill it, so the one that lands in the current slot drops out of the loop. The top of the array moves down, but the aliased references the loop holds to its elements remain unchanged.
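A throwaway demonstration of the skip, using a made-up array and a filter on 'b' elements (nothing from your actual code):

```perl
use strict;
use warnings;

# Splice one element out while looping upward by index, and the
# element that slides down into the current slot is never examined.
my @items = qw(a b b c);
for my $index (0 .. $#items) {
    last if $index > $#items;    # the array has shrunk under us
    splice(@items, $index, 1) if $items[$index] eq 'b';
}
# Only the first 'b' was removed: after the splice at slot 1, the
# second 'b' moved into slot 1, but the loop advanced to slot 2.
print "@items\n";    # prints "a b c"
```

Run the same loop in reverse order and both 'b's go, because a splice at $index only moves elements the loop has already visited.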

Here's a safe way to remove elements from an array in a for loop. Use indexes and work from the top of the array down:

    foreach my $index (reverse 0 .. $#links) {
        $response = $browser->get($links[$index]);
        splice(@links, $index, 1)
            unless $response->content_type eq 'text/html'
                || $response->content_type eq 'text/plain';
    }
It would be more economical to work with head instead of get, since a HEAD request transfers only the headers and no body, but many sites deny head requests for no good reason.
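One way around that is to try head first and only fall back to get when it's refused. Here's a sketch with a hypothetical fetch_type helper (the name is mine), assuming $browser behaves like an LWP::UserAgent:

```perl
use strict;
use warnings;

# Hypothetical helper, assuming $browser behaves like an
# LWP::UserAgent: try the cheap HEAD request first, and pay for a
# full GET only when the site refuses HEAD.
sub fetch_type {
    my ($browser, $url) = @_;
    my $response = $browser->head($url);
    $response = $browser->get($url) unless $response->is_success;
    return $response->content_type;
}
```

Sites that honor HEAD cost you only headers; the stubborn ones cost one extra round trip.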

Update: fizbin's grep suggestion is most likely a better way to do what you want. ++frodo72's addition of checking the $response for success.
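For reference, a sketch of that combination: a hypothetical keep_link predicate (the name is mine) folding frodo72's success check into fizbin's grep. The network call itself is left commented out, since it needs a live LWP::UserAgent in $browser:

```perl
use strict;
use warnings;

# Hypothetical predicate: keep a link only if the request succeeded
# and the body is html or plain text.  A failed request's
# content_type is whatever the error page says, so test success first.
sub keep_link {
    my ($response) = @_;
    return $response->is_success
        && (   $response->content_type eq 'text/html'
            || $response->content_type eq 'text/plain' );
}

# grep builds a fresh list instead of splicing the one being looped
# over, so no indexes ever shift and nothing gets skipped:
# @links = grep { keep_link( $browser->get($_) ) } @links;
```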

After Compline,
Zaxo