Task: create a list of every symbolic link pointing to a certain file (in this case, "tomcat").
At first, I used the Unix shell ls command:
#!/usr/bin/perl -w
use strict;
use warnings;

# Parse the output of ls -l, keep the lines whose target is "tomcat",
# then pull out the link name (third whitespace-separated field from the end).
my @lines = grep { /->\s+tomcat\s+/ } qx{ls -l /etc/rc.d/init.d};
my @inits = map { (split)[-3] } @lines;
print join("\n", @inits);
But I wanted to keep it all Perl, so I used readdir and readlink:
#!/usr/bin/perl -w
use strict;
use warnings;

my $dir = '/etc/rc.d/init.d';
opendir(my $dirh, $dir) or die "Cannot open $dir: $!";

# Keep only the directory entries that are symlinks whose target is exactly "tomcat".
my @inits = grep { -l "$dir/$_" && readlink("$dir/$_") =~ /^tomcat$/ } readdir $dirh;

closedir $dirh;
print join("\n", @inits);
I like that I did not have to use two arrays in my "readdir" option, but I think, if I tried, I could eventually crunch the "ls" option down to just one array, or none (see the sketch below). I'm not certain I could preserve readability if I did that, though.
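For the record, here is roughly what I mean by crunching it down. This is an untested sketch that chains the same grep and map from the "ls" option above into one statement, with the same directory and regex, so no named arrays are needed at all:

#!/usr/bin/perl -w
use strict;
use warnings;

# Same "ls" approach, but the grep and map are chained in one statement,
# so the intermediate arrays disappear.
print join("\n",
    map  { (split)[-3] }
    grep { /->\s+tomcat\s+/ }
    qx{ls -l /etc/rc.d/init.d}
);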
I'm somewhat torn over how much easier the "ls" option felt compared to the "readdir" option. Maybe the "ls" option only seemed easier because I'm not as comfortable with readdir as I am with shell commands, and that will go away with more experience?
Aside from being all Perl, is there any other reason to use the "readdir" option over the "ls" option?
EDIT: I just realized that with the "ls" option there is a slim chance of getting a bad element in the list. Because I'm parsing ls output, a funny file name could mess up that parsing, whereas with the "readdir" option I'm certain of what I'm getting. That's actually a very good reason to stick with the all-Perl "readdir" option.
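To make that concrete, here is a made-up example (not a real entry from my init.d directory) of the kind of file name that throws off the field counting in (split)[-3]:

#!/usr/bin/perl -w
use strict;
use warnings;

# Hypothetical ls -l line for a symlink named "tomcat 6 backup".
my $line = 'lrwxrwxrwx 1 root root 6 Jun  1 12:00 tomcat 6 backup -> tomcat';

# (split)[-3] assumes the link name is a single whitespace-separated field,
# so for this line it returns just "backup" instead of "tomcat 6 backup".
print +(split ' ', $line)[-3], "\n";

The "readdir" option never sees this problem, because readdir hands back the raw file name and readlink hands back the raw target.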