First, your check for the "Links" directory is a bit awkward; we shouldn't have to bail if the directory already exists:

    my $dir = 'Links';
    if (-d $dir) {
        chdir $dir or die "can't chdir: $!";
    }
    else {
        mkdir $dir or die "can't mkdir: $!";
    }

But why even do that? All you need to do is create the dir if it doesn't exist, and then use that dir name when you write to files. This prevents having to chdir (something i try to avoid); see the sketch just below.
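Here is a minimal sketch of that no-chdir approach (the links.txt filename is only a placeholder for illustration, not from the original script):

    my $dir = 'Links';
    mkdir $dir or die "can't mkdir: $!" unless -d $dir;

    # write through the dir name instead of chdir'ing into it
    open my $out, '>', "$dir/links.txt" or die "can't open: $!";
    print $out "one line per URL goes here\n";
    close $out;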
Next, why waste RAM with an array when you can loop on the file handle?

    open FILEIN, "urls.txt" or die "Could not open file $!";
    while (<FILEIN>) {
        chomp;
        ...
    }

As you will see in a little while, however, my version will need to 'slurp' the whole file into a scalar. Also, this code screams out to me "Use a Getopt module!!", but i'll leave that as an exercise (a quick sketch follows). ;)
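For the curious, a minimal sketch of that exercise with Getopt::Long -- the --file and --dir option names are my own assumptions, not part of the original script:

    use strict;
    use warnings;
    use Getopt::Long;

    # defaults match the values hard-coded in the script below
    GetOptions(
        'file=s' => \(my $file = 'urls.txt'),
        'dir=s'  => \(my $dir  = 'Links'),
    ) or die "usage: $0 [--file urls.txt] [--dir Links]\n";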
Next, pull in the URI::Find, Config::IniHash and File::Basename CPAN modules to ease our burden. At this point, however, the skeleton of your script changes, so here is my complete version:
    use strict;
    use warnings;
    use URI::Find;
    use Config::IniHash;
    use File::Basename;
    use vars qw( @FOUND );

    my $dir  = 'Links';
    my $file = 'urls.txt';

    unless (-d $dir) {
        mkdir $dir or die "can't mkdir: $!";
    }
    open FILEIN, $file or die "can't open $file: $!";

    # slurp the whole file -- URI::Find scans one scalar
    my $urls = do { local $/; <FILEIN> };

    my $finder = URI::Find->new(\&found);
    $finder->find(\$urls);

    for (@FOUND) {
        # each .url file is a Windows Internet Shortcut (INI format)
        my $hash = {
            DEFAULT          => { BASEURL => $_ },
            InternetShortcut => { URL => $_, Modified => 0 },
        };
        # name the shortcut after the last path segment, swapping
        # any extension(s) for .url
        my $file = basename($_, '.*');
        $file =~ s/(\.\w+)+/\.url/;
        WriteINI("$dir/\u$file", $hash);
    }

    # callback for URI::Find -- stash each URI as it appeared in the text
    sub found {
        my ($uri, $orig_uri) = @_;
        push @FOUND, $orig_uri;
    }
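For what it's worth, a shortcut produced this way should come out looking roughly like the following -- the exact layout depends on Config::IniHash's output formatting, and the URL is just an example:

    [DEFAULT]
    BASEURL=http://www.example.com/index.html

    [InternetShortcut]
    URL=http://www.example.com/index.html
    Modified=0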
jeffa
L-LL-L--L-LL-L--L-LL-L-- -R--R-RR-R--R-RR-R--R-RR B--B--B--B--B--B--B--B-- H---H---H---H---H---H--- (the triplet paradiddle with high-hat)