http://qs1969.pair.com?node_id=439884


in reply to Re: Muy Large File
in thread Muy Large File

Wow. Many deep bows of reverence to all who responded. As UK inferred, I was (indeed) doing something wrong. Based on the above suggestions, this is the script I tested:

#!/usr/local/perl5.6.1/bin/perl -slw
use strict;
our $BUFSIZE ||= 2**30;
open my $fhi, '+<', "/data/p_dm200/ndm_ip_pull/test_customer1" or die $!;

while( sysread $fhi, $_, $BUFSIZE ) {
    tr[\r][ ];                      # translate carriage returns (^M) to spaces, keeping the length unchanged
    sysseek  $fhi, -length(), 1;    # step back to the start of the chunk just read
    syswrite $fhi, $_, length();    # overwrite it in place
}
close $fhi;

which was tested against an 8,595,447,728 byte file. The time output was:
real 10m5.95s
user 1m48.55s
sys 0m17.24s

An amazing 10 minutes. I checked the output and it looks exactly as expected. I even retested 3 times and each time the results were similar.

Ok, now I am getting greedy and curious as to whether this can be optimized further. I ran top during the run and saw that SIZE and RES both stayed around 1026M throughout, and only one CPU seemed to be used. Would increasing BUFSIZE help performance linearly? If I were capable (and I am not), would either shared-memory threads or parallel forks produce big gains? Any other low-hanging fruit?

Perlfan, ROMIO seemed interesting, but I could not find a Perl sample. Anonymous Monk, please forgive my ignorance, but what does HD mean?

A sincere thanks to all,
--Paul

Re^3: Muy Large File
by BrowserUk (Patriarch) on Mar 16, 2005 at 11:12 UTC

    Using a larger buffer size may increase throughput slightly, but then again it may not. It will depend upon many factors, mostly to do with your file system buffering, disk subsystem and so on. The easy answer, given that it's only taking 10 minutes, is to try it.

    As far as using threads to distribute the load across your processors is concerned, it certainly could be done, and could, in theory, give you near-linear reductions per extra processor.

    But, and it's a big 'but', how good is your runtime library's handling of multi-threaded IO to a single file?

    On my system, even using sys* IO calls and careful locking to ensure that only one thread can seek&read or seek&write at a time, something, somewhere is getting confused and the file is corrupted. I suspect that even after a syswrite completes, the data is not yet fully flushed to disk before the next seek&write cycle starts.
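
    As a rough illustration of the "careful locking" described above (my sketch, not the failing code offered via /msg below), the pattern is a single shared lock guarding each seek+read and seek+write pair so that the pairs can never interleave between threads:

        # Sketch only: assumes an ithreads-enabled perl; the helper names are
        # illustrative, not from the thread.
        use strict;
        use warnings;
        use threads;
        use threads::shared;

        my $io_lock : shared;

        sub read_chunk {
            my( $fh, $offset, $size ) = @_;
            lock $io_lock;                   # seek+read as one critical section
            sysseek $fh, $offset, 0 or die "sysseek: $!";
            sysread $fh, my $buf, $size;
            return $buf;
        }

        sub write_chunk {
            my( $fh, $offset, $data ) = @_;
            lock $io_lock;                   # seek+write as one critical section
            sysseek  $fh, $offset, 0 or die "sysseek: $!";
            syswrite $fh, $data      or die "syswrite: $!";
        }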

    So, maybe you could get it to work on your system, but I haven't succeeded on mine, and I am not yet entirely sure whether the problem lies within Perl, the OS, or some combination of the two.

    If you feel like trying this, please come back and report your findings. If you need a starting point, /msg me and I can let you have my failing code.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco.
    Rule 1 has a caveat! -- Who broke the cabal?
Re^3: Muy Large File
by Random_Walk (Prior) on Mar 16, 2005 at 11:36 UTC

    By HD, anonymonk means Hard Disk. The seek times needed to move the heads around a hard drive are slow compared to memory access and geological compared to processor cache. What this means is that processes dedicated to doing something to a file are normally disk-IO bound. Let's not mention network latencies for now.

    If you do manage to split this into threads, you may actually reduce performance, as each time a different thread gets a shot at it, it forces the HD to drag its heads over to a completely different part of the disk. A single thread reading the file sequentially will not make the heads seek so much, assuming the file is not desperately fragmented on the media.

    Then there are other users competing for those heads and sending them off to the boondocks of the drive as far as your data is concerned, which is why it was suggested you kick the lusers off to try and get the disk all to yourself.

    Cheers,
    R.

    Pereant, qui ante nos nostra dixerunt!
      If you do manage to split this into threads, you may actually reduce performance, as each time a different thread gets a shot at it, it forces the HD to drag its heads over to a completely different part of the disk. A single thread reading the file sequentially will not make the heads seek so much, assuming the file is not desperately fragmented on the media.
      That may be true if the file is stored on a single disk. But somehow I doubt an 8-way box dealing with 45-50GB files uses filesystems laid out over single disks. It's far more likely some kind of volume manager is involved: either in software (Solstice DiskSuite or Veritas Volume Manager, to name two common products used with Solaris) or in hardware, whether a RAID card or RAID done by the backend storage, which in turn could be a dedicated disk array or network-attached storage (NAS). Without knowing more about the volume layout and implementation, it's hard to say how much performance would be helped by using separate threads or processes. It is unlikely that performance will actually decrease, although bad volume setups happen all the time, often unknowingly, but also because people insist on knowing which disk a certain file is stored on.

        That is why I said may actually reduce performance.

        With a RAID array, the parts of a large file are still, hopefully, stored in reasonable proximity on however many disks they are spread across. If the 50GB file is spread across 5 disks, then ideally, when it was written, the heads on each of the five disks would have been able to put their share (10GB plus parity data) down in one long stripe. Admittedly, if a head now has to seek from one end to the other, it travels a little more than 1/5th of the previous distance, but there is still a good chance of a bonus for sequential reads.

        Even NAS still has spinning platters and inertially challenged heads at the coal face.

        Cheers,
        R.

        Pereant, qui ante nos nostra dixerunt!
      Okay. I attempted the above suggestion:

      1) I tried increasing the buffer with $BUFSIZE ||= 2**31; but got the error "Negative length at ./test.pl line 9". Subtracting 1, as in $BUFSIZE ||= 2**31-1, instead produced: Out of memory during "large" request for 2147487744 bytes, total sbrk() is 108672 bytes at ./test.pl line 9. I then ran ulimit, which came back as 'unlimited'. I'm betting the SAs will not change the server config or recompile the kernel on my behalf, so is this a dead end?

      2) BrowserUK, I would like to test threads. If your system was not Solaris 8, please let me know how I can /msg you to get your test code, or post it with a caveat.

      The last question I had was about leveraging additional CPUs. Can we coax Perl into coaxing the OS to throw some additional CPUs onto the fire? Would this make any difference? Based on the time output above, is it fair to say that this process is completely IO bound, meaning that adding CPUs would only increase IO wait?

      Upon searching PerlMonks for other large-file challenges, I've seen references to Sys::Mmap and Parallel::ForkManager. If anyone has used either of these (or others) and feels strongly about one, please let me know.

        I tried increasing the buffer with $BUFSIZE ||= 2**31; but got the error Negative length at...

        As you probably worked out, the parameter is being used as a signed 32-bit value, and 2**31 rolls over to a negative value.
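
        A quick way to see the rollover for yourself (an illustrative snippet, not from the thread):

            # 2**31 squeezed into a signed 32-bit integer comes out negative,
            # which is where the "Negative length" message comes from.
            printf "%d\n", unpack 'l', pack 'l', 2**31;      # -2147483648
            printf "%d\n", unpack 'l', pack 'l', 2**31 - 1;  #  2147483647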

        Subtracting 1, as in $BUFSIZE ||= 2**31-1, instead produced: Out of memory during "large" request for 2147487744 bytes, total sbrk() is 108672 bytes at ./test.pl line 9. I then ran ulimit, which came back as 'unlimited'. I'm betting the SAs will not change the server config or recompile the kernel on my behalf, so is this a dead end?

        I have no knowledge of Solaris at all, but I think that whilst your server has 16GB of RAM, it is probable that each process is limited to 2GB. This is a very common upper limit with 32-bit OSs. The theoretical upper limit is 4GB, but often the other 2GB of each process's virtual address space is reserved by the OS for its own purposes.

        For example, under NT, MS provide a set of APIs collectively known as "Address Windowing Extensions" that allow individual processes to access memory beyond the usual 2GB OS limit by allocating physical RAM and mapping parts of it into the 32-bit/4GB address space. But the application needs to be written to use this facility, and it comes with considerable restrictions.

        The point is that settling for 2**30 is probably the best you would be able to do without getting a lot more familiar with the internals of your OS.

        That said, I would try 2**21, 2**22 and 2**23 first and carefully log the timings to see whether using larger buffers actually results in better throughput. It is quite possible that the extra work required by the OS in marshalling that volume of contiguous address space will actually reduce your throughput. Indeed, you may find that you get just as good a throughput using 2**16 as you do with 2**20. It may even vary from run to run depending on the loading of the server and a whole host of other factors.

        Using ever-larger buffers does not imply ever-increasing throughput. It's fairly easy to see that if the standard read size is (say) 4KB and you're processing a 50GB file, then you're going to do 13 million read-search&modify-write cycles, and therefore incur 13 million times whatever overhead is involved in that cycle.

        If you increase your buffer size to 2**20, then you reduce that repetition count to around 50 thousand and thereby reduce the effect of any per-cycle overhead to roughly 0.4% of the original. Your OS will have no problem at all allocating a 1MB buffer, and reading 1MB from disk will easily happen in one timeslice, so there is little to negate the gain.

        If you increase your read size to 2**22, then your overheads reduce to less than 1% of the original, but that is only a quarter of the 2**20 count. Worth having, but diminishing returns. Allocating 4MB will again be no problem, but will the read still complete in a single timeslot? Probably, but you may be pushing the boundary. That is, it is possible that you will introduce an extra delay through missing a possible timeslot whilst waiting for IO completion.

        By the time you get to 2**30, your gain over the 1MB buffer is very small, but you are now forcing the OS to marshal 1GB of contiguous RAM for your process, which may itself cost many missed timeslots, and then asking the disk subsystem to read/write 1GB at a time, which again will definitely introduce several missed timeslots in IO-wait states. Overall, the gains versus losses will probably result in a net loss of throughput.
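
        Spelling out the cycle-count arithmetic above (my own figures, assuming a 50GB file; this says nothing about where the actual breakpoints fall):

            # Read/modify/write cycle counts for a 50GB file at various buffer sizes.
            my $filesize = 50 * 2**30;
            for my $power ( 12, 16, 20, 22, 30 ) {
                printf "buffer 2**%-2d (%10d bytes): %8d cycles\n",
                    $power, 2**$power, int( $filesize / 2**$power );
            }
            # buffer 2**12 (      4096 bytes): 13107200 cycles
            # buffer 2**16 (     65536 bytes):   819200 cycles
            # buffer 2**20 (   1048576 bytes):    51200 cycles
            # buffer 2**22 (   4194304 bytes):    12800 cycles
            # buffer 2**30 (1073741824 bytes):       50 cycles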

        There is no simple calculation that will allow you to determine the breakpoints, nor even estimate them, unless the machine is dedicated to this one task. The best you can do is time several runs at different buffer sizes and look for the trends. In this, looking to maximise your process's CPU load is probably your best indication of which direction you are heading. My best guess is that you will see little improvement above around 4MB reads and writes, but the breakpoint may come much earlier, depending upon the disk subsystem as much as anything else.
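
        A rough harness along these lines would let you log those trends (my sketch; it assumes the in-place script is saved as ./test.pl and leans on its -s switch so that -BUFSIZE=N overrides the default):

            use strict;
            use warnings;
            use Time::HiRes qw( time );

            # Run the in-place script once per buffer size and log wall-clock time.
            # After the first pass the file has already been translated, but later
            # passes still exercise exactly the same IO pattern.
            for my $power ( 16, 20, 21, 22, 23 ) {
                my $start = time;
                system( './test.pl', '-BUFSIZE=' . 2**$power ) == 0
                    or die "run with 2**$power failed: $?";
                printf "BUFSIZE=2**%-2d  %.1f s\n", $power, time - $start;
            }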


        Now we come to multitasking. In all cases, the problem will come down to whether your OS+Perl can correctly manage sharing access to a single file from multiple concurrent threads of execution (threads or processes). I'm not familiar with the operation of SysV memory mapping, though I think it may be similar to Win32 file-mapping objects. These would certainly allow processes or threads to work on different chunks of a large file concurrently in an efficient and coordinated manner, but the APIs are non-portable and require a pretty deep understanding of the OS in question to use. I don't have that for Solaris, so I cannot advise, but there is the Sys::Mmap module, and I noticed that PerlIO has a ':mmap' layer, though it doesn't work on my OS so I am unfamiliar with it.
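
        For what it's worth, the Sys::Mmap route would look roughly like this (an untried sketch on my part; it assumes the whole file fits into the process address space, which rules it out for a 50GB file on a 32-bit perl unless you map it a window at a time):

            use strict;
            use warnings;
            use Sys::Mmap;

            open my $fh, '+<', '/data/p_dm200/ndm_ip_pull/test_customer1'
                or die "open: $!";

            # Map the whole file (length 0 means "the entire file") read/write and
            # shared, so changes to the scalar go straight back to the file.
            my $map;
            mmap( $map, 0, PROT_READ | PROT_WRITE, MAP_SHARED, $fh )
                or die "mmap: $!";

            # The mapped scalar's length must never change, so a same-length tr///
            # is safe where an s/// that alters the length would not be.
            $map =~ tr/\r/ /;

            munmap( $map ) or die "munmap: $!";
            close $fh;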


        Now to my threaded code. I have tried two different approaches to this.

        My first attempt tried to overlap the IO and processing by reading into buffers on one thread and doing the translation and writing on a second thread. The idea was that if the Perl/C runtime could handle this, I could then try using more buffers and balancing the number of read threads against the number of write threads to get the best throughput. On my OS, something is being cached somewhere such that the file gets corrupted.

        The code I tried is pretty unsophisticated, but was enough to convince me that it wouldn't work:

        My second attempt--which works (for me)--uses one thread to do all the reading, the main thread to do the transformation, and a third thread to do the writing. Again, the idea is to overlap the IO with the transformation, allowing the process to make best use of the timeslots it is allocated by utilising the time when the read/write threads are blocked in IO wait states to do other stuff. The data is passed between the threads via a couple of queues.

        The problem with this is that the iThreads model requires such shared data to be duplicated, and it also has to be synchronised. Whilst this works on my system, as I do not have multiple CPUs, I cannot tell you whether it will result in greater throughput, nor whether it will continue to work correctly on a multi-CPU machine.

        So, I provide it only as an example--testing in your environment is down to you:).
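
        The listing itself is not reproduced here, so purely as a sketch of the arrangement just described (my reconstruction, not the code as posted; it assumes an ithreads-enabled perl, a carriage-return-to-space translation, and chunks passed through Thread::Queue):

            use strict;
            use warnings;
            use threads;
            use Thread::Queue;

            my $file    = shift or die "usage: $0 file [bufsize]\n";
            my $BUFSIZE = shift || 2**20;

            my $readQ  = Thread::Queue->new();   # reader      -> transformer
            my $writeQ = Thread::Queue->new();   # transformer -> writer

            # Reader thread: sequential sysreads, each chunk tagged with its offset.
            my $reader = threads->create( sub {
                open my $in, '<', $file or die "read open: $!";
                my $offset = 0;
                while( 1 ) {
                    my $got = sysread( $in, my $buf, $BUFSIZE );
                    die "sysread: $!" unless defined $got;
                    last unless $got;
                    $readQ->enqueue( $offset, $buf );
                    $offset += $got;
                }
                close $in;
                $readQ->enqueue( undef );        # end-of-data marker
            } );

            # Writer thread: seeks to each chunk's offset and writes it back in place.
            my $writer = threads->create( sub {
                open my $out, '+<', $file or die "write open: $!";
                while( defined( my $offset = $writeQ->dequeue ) ) {
                    my $buf = $writeQ->dequeue;
                    sysseek  $out, $offset, 0 or die "sysseek: $!";
                    syswrite $out, $buf       or die "syswrite: $!";
                }
                close $out;
            } );

            # Main thread: the transformation (same-length translation only).
            while( defined( my $offset = $readQ->dequeue ) ) {
                my $buf = $readQ->dequeue;
                $buf =~ tr/\r/ /;
                $writeQ->enqueue( $offset, $buf );
            }
            $writeQ->enqueue( undef );

            $_->join for $reader, $writer;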

        Because of the way iThreads work, and the way this is coded, I would suggest sticking with fairly small buffers, 2**20 or 2**22 (and maybe 2**16 would be worth trying also).

        I'd appreciate any feedback you can give me if you do try this.

        please let me know how I can /msg you to get your test code.

        See here and here.

        the lowliest monk