in reply to Re^2: Solve the large file size issue
in thread Solve the large file size issue

Well, we stay in Perl but add a second language (SQL); the obvious disadvantage is that we now have two languages. Still, for some applications (not for everything) I find this approach better suited.

The parameters to the connect call contain the information needed to interface your environment (plus Perl, of course) with the "database".
The parameters in my example say that the tables are .txt files, so the OP's file "Die1_10k.txt" is a table "Die1_10k" (my example looks for it in the /tmp folder/directory). Different mappings of file names to table names are possible; this is described in → DBD::CSV.
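As a sketch of such a mapping (the file name and the table name "die1" are made up for illustration): DBD::CSV's csv_tables attribute lets you attach an arbitrary file to a table name, in case your files don't follow the f_ext scheme.

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:CSV:', undef, undef, {
    f_dir => '/tmp',    # directory the files live in
    f_ext => '.txt',    # default mapping: table "Die1_10k" -> file "Die1_10k.txt"
}) or die $DBI::errstr;

# Explicit mapping, if a file name does not follow the default scheme:
$dbh->{csv_tables}{die1} = { f_file => 'Die1_10k.txt' };

# From here on, "die1" can be used as a table name in SQL statements.
```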

Possible difficulties you may encounter:

I should have written the SQL a bit tidier, e.g.

    CREATE TABLE CombineDie1Die2 AS
        SELECT WL, BL, Die1, Die_2 AS Die2
        FROM   Die1_10k a, Die2_10k b
        WHERE  a.WL = b.WL AND a.BL = b.BL

so that it is more readable.
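For completeness, a sketch of running that statement from Perl (same connect settings as in my example; the do() call is standard DBI, everything else is unchanged from above):

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:CSV:', undef, undef, {
    f_dir => '/tmp',
    f_ext => '.txt',
}) or die $DBI::errstr;

# Writes the joined rows as a new table (i.e. a new file in f_dir)
$dbh->do(<<'SQL') or die $dbh->errstr;
CREATE TABLE CombineDie1Die2 AS
    SELECT WL, BL, Die1, Die_2 AS Die2
    FROM   Die1_10k a, Die2_10k b
    WHERE  a.WL = b.WL AND a.BL = b.BL
SQL
```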

The OP's data contains column headings, one of them named "Die 2" (with a space in between). DBD::CSV sees this as Die_2 (with an underscore), but we want it to be simply "Die2". If your column names are different (or your data doesn't have a header line), you have to decide where to adjust: the first line of your data, the info in the connect statement, or the names in the SQL statement itself, i.e. some "renaming", as I did with that column and with the tables (a and b).

What the program printed for you was the heading only, meaning that no data was SELECTed: either your data does not match (see what I wrote about the "inner join" at the end of my previous posting), or your columns are named differently (with the effect that they don't match) ...

Update: to find out more, you could try out a simple SQL statement such as:
    use strict;
    use warnings;
    use Data::Dumper;
    use DBI;

    my $dbh = DBI->connect('dbi:CSV:', undef, undef, {
        f_dir => '/tmp',   # where your files live
        f_ext => '.txt',
    }) or die $DBI::errstr;

    my $select = $dbh->prepare('SELECT * FROM Die1_10k')
        or die "Failed to prepare - " . $dbh->errstr;
    $select->execute
        or die "Failed to execute - " . $select->errstr;

    print Dumper $select->{NAME}, $select->fetchrow_hashref();
This prints out your column names, and the first row of data (or undef if it doesn't find data).
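If the column names look right but the row comes back undef, a quick next check is to count the rows per table (a sketch with the same connect settings; selectrow_array is standard DBI, and COUNT(*) is supported by the SQL engine behind DBD::CSV):

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:CSV:', undef, undef, {
    f_dir => '/tmp',
    f_ext => '.txt',
}) or die $DBI::errstr;

# One count per table tells you whether the files are read at all
for my $table (qw(Die1_10k Die2_10k)) {
    my ($count) = $dbh->selectrow_array("SELECT COUNT(*) FROM $table");
    print "$table has $count rows\n";
}
```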

Update 2: I don't have really big CSV files myself, so I don't know how well this will work out. But with 2 files x 8 million lines x 12 bytes (let's assume some overhead, so 128 bytes per row) → about 2 GB, so, depending on how "modern" your computer is, it might fail.

Anyway, you did not yet tell us whether
- your data is sorted
- your data is "in sync" (as assumed in GrandFather's example)
The database solution would also work for unsorted data, but if your data is sorted, that would be an advantage for the other solutions.
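To illustrate that advantage: if both files are sorted on (WL, BL), a plain merge needs no database and touches each line only once. A minimal sketch — the comma separator, the header line, the column order WL,BL,value and numeric keys are my assumptions, not something the OP stated:

```perl
use strict;
use warnings;

# Merge-join two streams that are sorted on (WL, BL).
# Assumed layout: header line, then comma-separated WL,BL,value rows;
# keys are compared numerically.
sub merge_join {
    my ($fh1, $fh2, $out) = @_;
    <$fh1>; <$fh2>;                      # skip the header lines
    print {$out} "WL,BL,Die1,Die2\n";
    my $l1 = <$fh1>;
    my $l2 = <$fh2>;
    while (defined $l1 and defined $l2) {
        chomp $l1; chomp $l2;
        my ($wl1, $bl1, $v1) = split /,/, $l1;
        my ($wl2, $bl2, $v2) = split /,/, $l2;
        my $cmp = $wl1 <=> $wl2 || $bl1 <=> $bl2;
        if    ($cmp < 0) { $l1 = <$fh1> }   # key only in file 1: skip
        elsif ($cmp > 0) { $l2 = <$fh2> }   # key only in file 2: skip
        else {                              # keys match: emit joined row
            print {$out} "$wl1,$bl1,$v1,$v2\n";
            $l1 = <$fh1>;
            $l2 = <$fh2>;
        }
    }
}
```

Like the SQL version, this is an inner join: rows whose key appears in only one file are silently dropped.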