When calculating the data for the plots in my master's thesis I found there was far too much of it, so I needed a script to crush the data by deleting every nth line. OK, you're right, this _is_ pretty ugly, but it was one of my first Perl scripts...
#!/usr/bin/perl -w
# crush_data: delete every nth (non-comment) line from a data file
$nth  = shift(@ARGV);
$file = shift(@ARGV);
print "Deleting every ${nth}th line in $file\n";
print "The crushed file will be named $file.$nth\n";
$delete_this = 0;
open(IN, "<$file") or die "could not read datafile\n$!";
$outfile = "$file.$nth";
open(OUT, ">$outfile") or die "could not write outfile\n$!";
LIES: while (<IN>) {
    $line = $_;
    if ($line =~ /^\s*#/) {          # drop comment lines entirely
        next LIES;
    }
    if ($delete_this == $nth - 1) {  # this is the nth data line: drop it
        $delete_this = 0;
        next LIES;
    }
    else {
        print OUT $line;
        $delete_this++;
    }
}
close IN;
close OUT;
exit;
# End of file
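For comparison, the same keep-(n-1)-drop-1 logic can be written much more compactly. This is only my own sketch (it is not the reply quoted below, which has not been preserved here); the helper name `crush` and the command-line interface are my invention:

```perl
#!/usr/bin/perl -w
use strict;

# Sketch: drop comment lines and every $nth remaining data line.
# A grep with a counter replaces the labelled loop of the original.
sub crush {
    my ($nth, @lines) = @_;
    my $count = 0;
    # comment lines fail the first test, so they never advance $count
    return grep { !/^\s*#/ && ++$count % $nth != 0 } @lines;
}

# Same interface as the original: crush.pl <nth> <file> writes <file>.<nth>
my ($nth, $file) = @ARGV;
if (defined $file) {
    open my $in,  '<', $file        or die "could not read $file: $!";
    open my $out, '>', "$file.$nth" or die "could not write $file.$nth: $!";
    print $out crush($nth, <$in>);
}
```

Using lexical filehandles and the three-argument form of open avoids the global IN/OUT handles; the behaviour (comments dropped, every nth data line deleted) is unchanged.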

RE: crush_data
by merlyn (Sage) on Jun 06, 2000 at 21:58 UTC
      OK, this _is_ shorter .. sort of ... *grin*
      Thanks for the lesson :-)

      regards
      Stefan Kamphausen

      $dom = "skamphausen.de"; ## May The Open Source Be With You!
      $Mail = "mail\@$dom"; $Url = "http://www.$dom";