in reply to processing huge files
because i need to reorder the fields (well, i need to identify the MLSNumber, since it's real estate data), see whether each record needs an insert or an update, and see whether the data needs to be inserted into a view ... i don't see the MySQL loader being the right tool for the job ...
What if you made an initial "analyze/index" pass over the big file, to identify the updates vs. inserts, identify the view inserts, etc. In other words, "parse" the big file first, and maybe even split it up into pieces according to what needs to be done with each subset (if you have enough disk to store a second copy of it all -- but if you don't have that much space, writing a set of byte-offset indexes is likely to take a lot less than 334G).
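A minimal sketch of that analyze/index pass, in Python. The field layout here is hypothetical (tab-separated lines with the MLS number in the first column), and `known_mls` stands in for whatever lookup you'd do against the database; the point is just to record byte offsets per action so later passes can seek straight to the rows they need instead of rescanning the whole 334G.

```python
# First-pass sketch: classify each record and remember its byte offset so
# later passes can seek() straight to the subset they care about.
# Hypothetical layout: tab-separated lines, MLS number in the first column.

def index_records(path, known_mls):
    """Yield (byte_offset, mls_number, action) for each line in the file."""
    with open(path, "rb") as fh:
        while True:
            offset = fh.tell()
            line = fh.readline()
            if not line:
                break
            mls = line.split(b"\t", 1)[0].decode("ascii", "replace")
            # Records whose MLS number already exists need an update;
            # everything else is a fresh insert.
            yield offset, mls, ("update" if mls in known_mls else "insert")

# Tiny demo file standing in for the real 334G input.
with open("listings.tsv", "wb") as out:
    out.write(b"A100\t123 Main St\n")
    out.write(b"B200\t456 Oak Ave\n")

offsets = {"insert": [], "update": []}
for off, mls, action in index_records("listings.tsv", known_mls={"A100"}):
    offsets[action].append(off)
# offsets["update"] -> [0], offsets["insert"] -> [17]
```

A set of offset lists like this is tiny compared to a second copy of the data, which is why the index approach wins when disk is tight.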
Once you know how to break down the file contents according to what sort of treatment they need, push the insert records into mysqlimport -- that will go remarkably fast with fairly low load on the system. Maybe you'll still need to handle updates via DBI, but that will be easier to optimize and will go a lot quicker if you avoid doing DBI inserts in the same run (and make sure the "where" clause cites an indexed field, and use prepared statements with "?" placeholders, etc).
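To illustrate the update-pass advice (prepare once, bind with placeholders, hit an indexed column in the WHERE clause), here is a sketch using Python's built-in `sqlite3` purely as a runnable stand-in for MySQL/DBI; the table and column names are made up for the example.

```python
# Update-pass sketch: one prepared statement with ? placeholders, reused
# for every row, keyed on an indexed column (the primary key here).
# sqlite3 stands in for MySQL so the example is self-contained;
# table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE listings (mls TEXT PRIMARY KEY, price INTEGER)")
conn.execute("INSERT INTO listings VALUES ('A100', 100000)")

# Bind values through placeholders instead of interpolating them into the
# SQL: the statement is parsed once, and quoting problems disappear.
updates = [(125000, "A100")]
conn.executemany("UPDATE listings SET price = ? WHERE mls = ?", updates)
conn.commit()

price = conn.execute(
    "SELECT price FROM listings WHERE mls = 'A100'"
).fetchone()[0]
# price -> 125000
```

Since the inserts went through mysqlimport in their own pass, this update loop never mixes insert and update logic, which keeps it simple to batch and profile.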
Of course, anything you do with 334G of data is going to take a while, and the idea of doing your job in multiple passes over the data might seem silly. But think about it: each pass will be easier to code, easier to validate, and relatively efficient and fast at runtime with relatively low system load, compared to a monolithic "one pass to do it all".
In the long run, the time required to do a lot of little tasks in succession could easily end up being less than the time required to do them all at once in a single massive job.