The logic is simple: take the first two columns of file 1 and match them against file 2. On a match, append file 1's third column to file 2's row. Also, both files are already sorted identically (line 1 of file 1 corresponds to line 1 of file 2), so no searching is actually needed.
File 1 example:

    WL,BL,Die1
    WL0,BL0,1708
    WL0,BL1,1708
    WL0,BL2,1708
    WL0,BL3,1931
    WL0,BL4,1931

File 2 example:

    WL,BL,Die2
    WL0,BL0,1708
    WL0,BL1,1931
    WL0,BL2,1708
    WL0,BL3,1931
    WL0,BL3,1708

Output after script:

    WL,BL,Die1,Die2
    WL0,BL0,1708,1708
    WL0,BL1,1708,1931
    WL0,BL2,1708,1708
    WL0,BL3,1931,1931
    WL0,BL4,1931,1708

My script:

    #!/usr/bin/perl
    # Copy Die2 as the output file
    use File::Copy;
    copy("Die2_10k.txt", "CombineDie1Die2.txt") or die "copy failed: $!";

    # Open the Die1 input file
    open(Label, "Die1_10k.txt") or die "can't open Die1: $!";

    # Search and replace using a one-liner command per input line
    while (<Label>) {
        $replace = $_;
        chomp($replace);
        @temp = split(/,/, $replace);
        $search = $temp[0] . "," . $temp[1];
        $command_line = "perl -pi\.bak -e s\/" . $search . "(?=,)\/" . $replace . "\/g\; CombineDie1Die2.txt";
        system($command_line);
    }
I would be grateful if someone could suggest an efficient way to process these two files of roughly 8 million lines each. Thank you.
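Since the two files are already line-aligned, a single parallel pass over both files is enough: there is no need to launch a `perl -pi` process per line, each of which rewrites the entire output file. Below is a minimal sketch of that approach; the file names are taken from the script above, while the helper names `merge_rows` and `merge_files` are made up for illustration.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Pair each row of file 1 with the same row of file 2 and append
# file 2's third field. Works on array refs so it is easy to test.
sub merge_rows {
    my ($rows1, $rows2) = @_;
    my @out;
    for my $i (0 .. $#$rows1) {
        # Take only the third field of file 2's row (limit split to 3 fields).
        my (undef, undef, $val2) = split /,/, $rows2->[$i], 3;
        push @out, "$rows1->[$i],$val2";
    }
    return \@out;
}

# Stream both files in parallel so memory use stays flat even at 8M lines.
sub merge_files {
    my ($file1, $file2, $outfile) = @_;
    open my $fh1, '<', $file1   or die "can't open $file1: $!";
    open my $fh2, '<', $file2   or die "can't open $file2: $!";
    open my $out, '>', $outfile or die "can't open $outfile: $!";
    while (defined(my $l1 = <$fh1>) and defined(my $l2 = <$fh2>)) {
        chomp($l1, $l2);
        my (undef, undef, $val2) = split /,/, $l2, 3;
        print $out "$l1,$val2\n";
    }
    close $_ for $fh1, $fh2, $out;
}

# Usage (uncomment to run on the real files):
# merge_files('Die1_10k.txt', 'Die2_10k.txt', 'CombineDie1Die2.txt');
```

This is one O(n) pass with constant memory, instead of 8 million spawned processes that each re-scan and rewrite the whole combined file.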
In reply to Solve the large file size issue by Vkhaw