in reply to IO::Handle slowdowns?

In all cases, you are building a global array, @ary, that gets up to eight million elements, each a reference to some perl type. That eats lots of memory, probably forcing you into swap and thrashing. The bigger the structure, the sooner. Swap is notorious for bad scaling.

To open and close IO::Handles inside the loop naturally takes more time than setting up an anonymous hash. In many cases you are contending against swap for the disk spindle.

After Compline,
Zaxo

Re: Re: IO::Handle slowdowns?
by mpaduano (Initiate) on Feb 03, 2004 at 00:04 UTC
    Hi.

    Actually, no. The program starts to bog down immediately, and the CPU nearly pegs before the process image is even 15% of available RAM. No swapping, no paging.

    Try running the program and notice the slowdown with the Handles but not with the hashes... Sockets will slow the program down too, but only if you send() at least one character; if you never send() anything, the slowdown doesn't happen, similar to the open/close effect for the file descriptors.

    It isn't that the Handle objects are merely slower than ordinary hashes; it is that the wall time to process each Handle grows without bound while all system resources seem to be in ample supply. For regular hashes, the wall time behaves.
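      The original test programs aren't shown in the thread, so here is a minimal sketch of the kind of measurement described above: build one large array of either anonymous hashes or IO::Handle objects, and time each batch to see whether per-batch wall time grows. The batch size and element counts are assumptions, not the original's figures.

      ```perl
      #!/usr/bin/perl
      use strict;
      use warnings;
      use IO::Handle;
      use Time::HiRes qw(time);

      # Mode 'hash' builds anonymous hashes; mode 'handle' builds IO::Handle
      # objects. The thread reports that per-batch time stays flat for hashes
      # but climbs for Handles.
      my $mode  = shift(@ARGV) || 'hash';
      my $batch = 10_000;              # assumed batch size
      my @ary;                         # the growing global array

      for my $i (1 .. 10) {
          my $t0 = time;
          for (1 .. $batch) {
              if ($mode eq 'handle') {
                  push @ary, IO::Handle->new;   # one object per element
              }
              else {
                  push @ary, { n => $_ };       # one anonymous hash per element
              }
          }
          printf "batch %2d (%s): %.4f s\n", $i, $mode, time - $t0;
      }
      ```

      Run it once with no arguments and once with `handle` and compare the per-batch timings as the array grows.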

    This one will require a real guru, I'm afraid...

    matt
      Update: none of the below applies, since Windows doesn't use glibc.

      What OS is this? I wonder if you are hitting the pathological free() bug in an older glibc.

      Update: the bug I'm thinking of is in glibc 2.2.x, where free() calls will suddenly start taking an enormous amount of time and CPU. If I understand correctly, the malloc library was rewritten for glibc 2.3, and the problem no longer exists there. If you have glibc 2.2.x, make sure perl is built to use its own malloc (Configure with -Dusemymalloc), or upgrade to a newer glibc.
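      You can check which malloc a given perl was built with via the Config module (the same information `perl -V:usemymalloc` prints on the command line): 'y' means perl's own malloc, 'n' means the system/libc malloc.

      ```perl
      #!/usr/bin/perl
      use strict;
      use warnings;
      use Config;   # exposes the build-time configuration as %Config

      # usemymalloc is 'y' when perl was Configured with -Dusemymalloc,
      # i.e. it bypasses the libc malloc entirely.
      print "usemymalloc = $Config{usemymalloc}\n";
      print "perl        = $]\n";
      ```

      If this prints 'n' on a glibc 2.2.x box, rebuilding perl with -Dusemymalloc would sidestep the libc free() pathology.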

        We are using Windows XP/2000/2003 and ActiveState Perl 5.6.1 build 635.

        Interesting comment, since we are definitely linking to something other than the malloc() in glibc x.y.

        I have someone trying my test programs on some Unix boxes, and if I am lucky the bug will be reproducible, and perhaps a tool like strace/ltrace/truss will help!

        matt