Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:
this is my current script it is supposed to go through a file that is near 50 megs of text or better put is 1000 blocks of 2703 lines of coords XYZ label and number every 20th block needs to be removed and put in its own file so that it can be opened in a program that would render the coords... i have any irky feeling that this prog wont do what it should and i figured i would ask instead of beating my silicon graphics workstation with a baseball bat :) kidding if you see any problems with my method, style or just anything plz help me out :) thanx#!/usr/bin/perl -w $xf = "~/OUTPUT" #enter file name location here <- $num =1000 #enter number of iterations here <- $tnum =20 #enter number of pdbs necessary <- $lab = "BRO" #enter label for output <- open (XF, $xf) or die "no $xf exists!!"; while (my $line = <XF>){ #reads file, takes out erroneous da +ta, if($line = m/TIMESTEP/){ #saves each chunk of 2703 line to $line = ""; #one array slot in superchunk @file = scalar(@trans); @trans[1] = @superchunk;} else if($line = m/ATOM/){ @file = $line;} } $num / $tnum = $div; for($i=1;$i<=$tnum;++$i){ open (OU, "> $lab_$i.pdb") or die "File would not open!!";); print OU (@superchunk[$i*$div]); close OU } #prints every 20th 2703 line ch +unk to #file, different file for each +chunk # close XF
|
|---|
| Replies are listed 'Best First'. | |
|---|---|
|
Re: Breaking up large database
by rjray (Chaplain) on Jul 16, 2002 at 06:47 UTC | |
by Brokensoulkeeper (Initiate) on Jul 16, 2002 at 15:52 UTC | |
by dorko (Prior) on Jul 16, 2002 at 17:01 UTC |