Hello g_speran,
Fellow Monks have already answered your question with multiple great answers. Just to add another minor one, using one of my favorite modules, IO::All:
#!/usr/bin/perl
use strict;
use warnings;
use IO::All;
use Data::Dumper qw(Dumper);

my @lines = io('in.txt')->chomp->slurp;  # Chomp as you slurp
print Dumper \@lines;

@lines = grep { $_ ne '' } @lines;       # Skip empty elements
print Dumper \@lines;

__END__
$ perl test.pl
$VAR1 = [
          'line1',
          'line2',
          '',
          'line4',
          '',
          'line6'
        ];
$VAR1 = [
          'line1',
          'line2',
          'line4',
          'line6'
        ];
But why not read the file line by line? It is more efficient and makes it simple to keep only the lines that contain something. Sample code below:
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper qw(Dumper);

my @lines;
while (<>) {
    chomp;
    next if /^\s*$/;    # skip blank lines
    # next if /^\s*#/;  # skip comments
    push @lines, $_;
} continue {
    close ARGV if eof;  # Not eof()!
}
print Dumper \@lines;

__END__
$ perl test.pl in.txt
$VAR1 = [
          'line1',
          'line2',
          'line4',
          'line6'
        ];
I prefer to read my files from the command line and use the eof function (without parentheses), as it lets you process multiple files one after the other, e.g. perl test.pl in_1.txt in_2.txt. Note that eof is true at the end of each file in @ARGV, while eof() is true only at the end of the very last one; closing ARGV at each per-file eof also resets the line counter $. for the next file.
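To make the eof-versus-eof() distinction concrete, here is a small self-contained sketch (the temp-file setup and the "line N" output format are my own illustration, not from the original post): it reads two throwaway files through the diamond operator and shows $. resetting per file because ARGV is closed at each per-file eof.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Temp qw(tempfile);

    # Create two throwaway files to stand in for in_1.txt and in_2.txt
    my @files;
    for my $content ("a\nb\n", "c\n") {
        my ($fh, $name) = tempfile(UNLINK => 1);
        print {$fh} $content;
        close $fh;
        push @files, $name;
    }

    local @ARGV = @files;
    my @out;
    while (<>) {
        chomp;
        push @out, "line $.: $_";
    } continue {
        close ARGV if eof;  # eof (no parens) fires at the end of EACH file,
                            # and closing ARGV resets $. for the next one
    }
    print "$_\n", for @out;  # line 1: a / line 2: b / line 1: c

Had we tested eof() instead, the close would happen only after the last file, and the third line would be numbered 3, not 1.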
Minor note: a similar question has been asked before in the Monastery (Removing empty string elements from an Array). Remember to search and read as much as possible about your problem; it really helps to read about and try alternative approaches. :)
Hope this helps, BR.
In reply to Re: replace multiple newline characters
by thanos1983
in thread replace multiple newline characters
by g_speran