PerlMonks |
Re: Recommendations for efficient data reduction/substitution application
by BrowserUk (Patriarch) on Mar 03, 2015 at 23:20 UTC (#1118677)
My suggestion comes in two parts:
The tricky bit with both schemes is combining the modified chunks back together, as the modifications will have changed their lengths. The simplest mechanism is to write separate small files with a naming convention that allows them to be ordered. E.g. you have 4 processes, so give each process a number and have it write to a numbered file: infile.dat.part0n. Then have the parent process (that allocated and started the kids) wait for them to complete, and merge the files back together.

HTH.

Update: If you have a second physical device available, do your small-file writes to that, and then merge them back to the original device.

With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
I'm with torvalds on this
In the absence of evidence, opinion is indistinguishable from prejudice. Agile (and TDD) debunked
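Update 2: The numbered-part-file scheme above can be sketched roughly as follows. This is a minimal illustration, not a drop-in solution: the file name, chunk count, and `process_chunk` body are placeholders, the "processing" is just an uppercasing stand-in, and the split is on raw byte offsets, so a real version would need to align chunk boundaries to record boundaries before doing the substitution work.

```perl
#!/usr/bin/perl
# Sketch: fork N kids, each writes infile.dat.partN; parent waits, then merges.
use strict;
use warnings;

my $infile = 'infile.dat';
my $nprocs = 4;

# Create a small sample input so the sketch is self-contained.
open my $fh, '>', $infile or die "open $infile: $!";
print $fh "line $_ of sample data\n" for 1 .. 100;
close $fh;

# Stand-in for the real reduction/substitution work.
sub process_chunk {
    my( $chunk ) = @_;
    return uc $chunk;
}

my $size = -s $infile;
my $span = int( $size / $nprocs ) + 1;    # NB: ignores record boundaries

for my $n ( 0 .. $nprocs - 1 ) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    next if $pid;    # parent: start the next kid

    # Kid: read its byte range, process it, write a numbered part file.
    open my $in, '<', $infile or die "open $infile: $!";
    seek $in, $n * $span, 0;
    read $in, my $chunk, $span;
    close $in;

    open my $out, '>', "$infile.part$n" or die "open part $n: $!";
    print $out process_chunk( $chunk );
    close $out;
    exit 0;
}

wait() for 1 .. $nprocs;    # parent waits for the kids to complete

# Merge the numbered parts back together, in order, then clean up.
open my $merged, '>', "$infile.merged" or die "open merged: $!";
for my $n ( 0 .. $nprocs - 1 ) {
    open my $part, '<', "$infile.part$n" or die "open part $n: $!";
    print $merged $_ while <$part>;
    close $part;
    unlink "$infile.part$n";
}
close $merged;
```

Because the parts sort by their numeric suffix, the merge loop is trivial; all of the ordering logic lives in the naming convention, which is the point of the scheme.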
In Section: Seekers of Perl Wisdom