in reply to Capturing substrings with complex delimiter, up to a maximum
Unless you have a particular, unstated reason for not doing so, it is probably more efficient to simply parse out all the URLs and then discard the ones you don't want.
Try this:
C:\test>p1
sub getNurls {
    my( $s, $n ) = @_;
    my @urls = $s =~ m[(https?://.+?)(?:,(?=http)|$)]g;
    return @urls[ 0 .. $n-1 ];
};;

@raw_inputs = ( 'http://abc.org', 'http://de,f.org', 'https://ghi.org', 'http://jkl.org', );;

$s = join ',', @raw_inputs;;

print for getNurls( $s, 4 );;
http://abc.org
http://de,f.org
https://ghi.org
http://jkl.org

print for getNurls( $s, 3 );;
http://abc.org
http://de,f.org
https://ghi.org

print for getNurls( $s, 2 );;
http://abc.org
http://de,f.org

print for getNurls( $s, 1 );;
http://abc.org
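For anyone who wants to run this outside of an interactive session, here is a minimal standalone sketch of the same approach (assuming nothing beyond core Perl; the script framing is mine, not part of the original post). The point of interest is the (?:,(?=http)|$) alternation: a comma only terminates a URL when the lookahead sees the start of the next URL, so commas embedded in a URL are kept.

#!/usr/bin/perl
use strict;
use warnings;

# Return at most $n URLs parsed from the comma-joined string $s.
# A comma only acts as a delimiter when it is immediately followed
# by the start of the next URL (the (?=http) lookahead); otherwise
# it is treated as part of the URL itself.
sub getNurls {
    my( $s, $n ) = @_;
    my @urls = $s =~ m[(https?://.+?)(?:,(?=http)|$)]g;
    return @urls[ 0 .. $n-1 ];
}

my @raw_inputs = (
    'http://abc.org',
    'http://de,f.org',    # note the embedded comma
    'https://ghi.org',
    'http://jkl.org',
);

my $s = join ',', @raw_inputs;

print "$_\n" for getNurls( $s, 2 );   # http://abc.org, then http://de,f.org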
Replies are listed 'Best First'.

Re^2: Capturing substrings with complex delimiter, up to a maximum
  by jkeenan1 (Deacon) on Oct 30, 2013 at 01:27 UTC
  by BrowserUk (Patriarch) on Oct 30, 2013 at 01:53 UTC