Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Hi Monks, who are always smarter than me. I'm trying to use the following code, but rather than modifying just one file so that it contains only unique lines, I want to modify many files in a directory.

I've successfully erased my data 5 times with some not-so-successful modifications so far, so I'm seeking some wisdom. Here's the first bit of code:

#!/usr/bin/perl -w
open(INF,"db.txt");
@data = <INF>;
close(INF);

@sd = sort(@data);
for (@sd) {
    # keep a line only if it differs from the last line kept
    push @out, $_ if (not @out) or ($out[-1] ne $_);
}

open(OUTF,">outdb.txt");
print OUTF @out;
close(OUTF);

I'm pretty sure I'm overthinking it and the solution is very, very simple. Here's my code to find the files; I just can't seem to get the two bits of code together properly.

# find the files I want to change
use File::Find;

my @files;
find( sub {
    return unless -f;          # Files only
    return unless /\.txt$/;    # Name must end in ".txt"
    push @files, $File::Find::name;
}, $directory );
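For reference, a minimal sketch of one way the two pieces could be glued together: collect the files with File::Find, then apply the same sort-and-deduplicate loop to each one. The directory path is a placeholder, and overwriting each file in place is an assumption, not something stated in the original post.

#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

my $directory = 'some/dir';    # placeholder; set to the real directory

my @files;
find( sub {
    return unless -f;
    return unless /\.txt$/;
    push @files, $File::Find::name;
}, $directory );

for my $file (@files) {
    open my $in, '<', $file or die "Can't read $file: $!";
    my @sd = sort <$in>;
    close $in;

    my @out;
    for (@sd) {
        push @out, $_ if !@out or $out[-1] ne $_;
    }

    # overwrite the original file with its sorted, unique lines
    open my $outfh, '>', $file or die "Can't write $file: $!";
    print $outfh @out;
    close $outfh;
}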

Replies are listed 'Best First'.
Re: parse multiple text files keep unique lines only
by 1nickt (Canon) on Jun 27, 2017 at 20:25 UTC

    # 1193725.pl
    use strict;
    use warnings;
    use Path::Tiny qw/ path /;
    use List::Util qw/ uniq /;    # uniq needs List::Util 1.45 or newer

    my $dir = '.';

    my $iterator = path( $dir )->iterator;

    while ( my $file = $iterator->() ) {
        next unless $file =~ /\.txt$/;
        my @lines = path( $file )->lines({ chomp => 1 });
        path( $file )->spew( join "\n", uniq @lines );
    }
    __END__

    Hope this helps!

    Update: Removed example output as it was distracting (after rereading question and amending example).


    The way forward always starts with a minimal test.
Re: parse multiple text files keep unique lines only
by thanos1983 (Parson) on Jun 28, 2017 at 00:01 UTC

    Hello Anonymous Monk,

    I would approach your problem a bit differently. I would use File::Find::Rule to find all the files you want to compare. Why? Simply because you can use a regex, and it can search multiple directories recursively; what more can you ask for?

    As a second step I would use List::Compare/Regular Case: Compare Two Lists. Why? Because once I have the list of files that I want to compare, I simply need to open each file in turn and compare it with the array of data that you are building in your example with for(@sd){ push @out, $_ if (not @out) or ($out[-1] ne $_); };. The module does not need the data sorted; it compares each line individually.

    Example with solution provided below:

    #!/usr/bin/perl
    use strict;
    use warnings;

    use Data::Dumper;
    use List::Compare;
    use File::Find::Rule;

    sub get_files {
        my @dirs  = ('/home/tinyos/Monks/uniq');
        my $level = shift // 2;
        my @files = File::Find::Rule->file()
                                    ->name('*.txt')
                                    ->maxdepth($level)
                                    ->in(@dirs);
        return @files;
    }

    my @files = get_files();
    # print Dumper \@files if @files;

    # file to compare against the rest
    my $path2compare = '/home/tinyos/Monks/uniq/compare.txt';

    open my $fh, '<', $path2compare
        or die "Can't open file $path2compare: $!";
    chomp(my @array2compare = <$fh>);
    close $fh or warn "File $path2compare close failed: $!";

    # open files and load them into an array
    # compare unique lines and store them into a Hash of Arrays
    my %HoA;
    foreach my $path_to_file (@files) {
        open my $fh, '<', $path_to_file
            or die "Can't open file $path_to_file: $!";
        chomp(my @lines = <$fh>);
        close $fh or warn "File $path_to_file close failed: $!";

        my $lc = List::Compare->new('-u', \@array2compare, \@lines);

        # Get those items which appear at least once in both lists (their intersection).
        my @intersection = $lc->get_intersection;

        # Get those items which appear (at least once) only in the second list.
        # my @Ronly = $lc->get_complement;
        # print Dumper \@Ronly;

        # write to file here
        $HoA{$path_to_file} = \@intersection;
    }

    print Dumper \%HoA;

    __END__
    $ perl unique.pl
    $VAR1 = {
              '/home/tinyos/Monks/uniq/compare.txt' => [
                                                         'Common line',
                                                         'Unique line file Original'
                                                       ],
              '/home/tinyos/Monks/uniq/unique1.txt' => [
                                                         'Common line'
                                                       ],
              '/home/tinyos/Monks/uniq/unique2.txt' => [
                                                         'Common line'
                                                       ]
            };

    unique1.txt
    Unique line file1
    Common line

    unique2.txt
    Common line
    Unique line file2

    compare.txt
    Unique line file Original
    Common line

    Based on your code, I noticed a few things. Always, always use strict; and use warnings;. Also, do not use bare words for file handles; read Don't Open Files in the old way to see why. And every time you open or close a file handle, check the result with die or warn.
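    For illustration, a minimal sketch of that style: a three-argument open with a lexical filehandle, checked with die on open and warn on close. The file name db.txt is just the example name from the original post.

    use strict;
    use warnings;

    my $file = 'db.txt';    # example name from the original post

    # three-argument open with a lexical filehandle, checked with die
    open my $fh, '<', $file or die "Can't open $file: $!";
    my @lines = <$fh>;
    close $fh or warn "Close failed for $file: $!";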

    Update: added one more method to the script, plus a bit of explanation.

    Update2: added opening the original file to compare against; also updated the output.

    Hope this helps, BR.

    Seeking for Perl wisdom...on the process of learning...not there...yet!
Re: parse multiple text files keep unique lines only
by tybalt89 (Monsignor) on Jun 28, 2017 at 02:01 UTC
    #!/usr/bin/perl
    # http://perlmonks.org/?node_id=1193719
    use strict;
    use warnings;
    use File::Find;
    use File::Slurp qw( edit_file_lines );

    my $directory = 'some/test/';

    find sub {
        my %u;
        # For each .txt file, edit it in place: "$_ x= !$u{$_}++" leaves a
        # line unchanged the first time it is seen (repetition count 1) and
        # turns later duplicates into the empty string (repetition count 0).
        -f and /\.txt$/ and
            edit_file_lines sub { $_ x= !$u{$_}++ }, $_;
    }, $directory;

    hehehe

      Don't use File::Slurp, it's broken.


      The way forward always starts with a minimal test.
        #!/usr/bin/perl
        # http://perlmonks.org/?node_id=1193719
        use strict;
        use warnings;
        use File::Find;
        use Path::Tiny;

        my $directory = 'some/test/';

        find sub {
            my %u;
            -f and /\.txt$/ and
                path($_)->edit_lines( sub { $_ x= !$u{$_}++ } );
        }, $directory;
Re: parse multiple text files keep unique lines only
by Anonymous Monk on Jun 28, 2017 at 04:45 UTC

    Maybe it's not clear what return actually means.