in reply to Re: Growing strings, avoiding copying and fragmentation?
in thread Growing strings, avoiding copying and fragmentation?

The issue is that if I want to write a CPAN module that needs to decompress .Z data, I either have to wrap the external binary /usr/bin/uncompress or implement the decompression directly in a Perl library. I have a working wrapper, but I want something that doesn't pay the overhead of forking out to /usr/bin/uncompress millions of times.
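
To be concrete, a wrapper of that kind looks roughly like the following (a minimal sketch rather than my actual code; it assumes uncompress accepts -c to write to standard output):

    # Sketch of the wrapper approach: pipe the .Z file through the
    # external binary and slurp its output.  Every call pays for a
    # fork/exec, which is the overhead I want to avoid.
    sub uncompress_via_binary {
        my ($filename) = @_;
        local *PIPE;
        open(PIPE, "/usr/bin/uncompress -c \Q$filename\E |")
            or die "cannot start uncompress: $!";
        binmode(PIPE);
        local $/;                      # slurp the whole output at once
        my $data = <PIPE>;
        close(PIPE) or die "uncompress failed: $?";
        return $data;
    }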

Hence: Compress::Zlib::LZW. As it stands, there is no code on CPAN for handling the .Z data format. Compress::Zlib handles the 'deflate' and 'gzip' formats but not the 'compress' format, and it's that one that I'm targeting. I already wrote something that sends its output to a filehandle, but if I want to be 5.005_03-safe I figured I'd have to return a proper scalar instead of relying on PerlIO tricks.
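
The interface I have in mind is just a function that builds the result up in one scalar and returns it; a skeleton of that shape (the function name and internals are only my working choices, not finished code):

    package Compress::Zlib::LZW;

    # Skeleton of the intended interface: decode a compressed string
    # into a scalar and return it, rather than printing pieces to a
    # filehandle.  Growing $out efficiently is exactly the question
    # in this thread.
    sub decompress {
        my ($compressed) = @_;
        my $out = '';
        # ... LZW decoding loop goes here; each decoded chunk is
        # appended with  $out .= $chunk;  ...
        return $out;
    }

    1;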

Now, as to your actual answer: can you reword "Also, as far as I know, perl itself gets memory from system each time twice as in previous memory allocation, and I find this strategy much worse than allocating by some reasonable chunks."? I think you're speaking from more context than I have right now, and I need you to be a bit more explicit.


Re: Re: Re: Growing strings, avoiding copying and fragmentation?
by Courage (Parson) on Aug 10, 2003 at 13:30 UTC
    1. I thought that "zlib" library can handle compress data, because I use command "gzip -d filename.Z" to uncompress it, and it works.
    May be I am wrong and "zlib" do not handle it, then which one handles it?

    Now I have looked into the gzip sources from http://ftp.gnu.org/gnu/gzip/ and it appears that "gzip" uncompresses .Z data using its own algorithm, without zlib.
    I think reusing the proper file from there would be more portable, but that is not the easy way, of course.

    2. Please excuse me for being unclear.
    I meant the following behaviour. When my perl program eats a large amount of memory, it does so in a non-linear way.
    When I watch in the task manager how much memory the system has allocated to the process, it looks like a staircase: the first step is small, the second is twice as big, and each following step is twice as big as the previous one (a related experiment is sketched below).
    When I see that perl has already taken 256M, I expect it to take 512M next, and usually at that point I decide whether to let it keep running...
    I remember this was discussed a little on p5p recently, but no particular decision was made.
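
    This is about the process as a whole, but a related pattern can be watched on a single scalar with Devel::Peek, which shows the string length (CUR) next to the allocated buffer (LEN); a rough experiment only, since the exact numbers depend on the perl version and how its malloc was built:

        use Devel::Peek ();

        # Append to one string in a loop and dump the SV each time.
        # The LEN line in the output is the allocated buffer size;
        # comparing it with CUR shows how the allocation is grown.
        my $buf = '';
        for my $step (1 .. 6) {
            $buf .= 'x' x 100_000;      # grow the string by 100k bytes
            print STDERR "step $step, length ", length($buf), "\n";
            Devel::Peek::Dump($buf);    # prints CUR and LEN to STDERR
        }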

    Courage, the Cowardly Dog