perl5ever has asked for the wisdom of the Perl Monks concerning the following question:

Hi monks, I am seeing much larger sizes for XS .so files when building under x86_64 versus i686.

E.g. smaps reports a size of 2085 kB for lib/perl5/x86_64-linux-thread-multi/auto/Time/HiRes/HiRes.so
versus a size of 24 kB for lib/perl5/i686-linux/auto/Time/HiRes/HiRes.so

In fact, all of the x86_64 .so files are > 2000 kB.

Is there some common pitfall I am tripping over?

The OS is CentOS 5.5 (5.4 for i686) and the Perl version is 5.12.2. My Configure invocation for x86_64 is:

./Configure \
    -des \
    "-Doptimize=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic" \
    -Dcc=gcc \
    -Dprefix=$install_dir \
    -Darchname=x86_64-linux-thread-multi \
    -Duseshrplib \
    -Dusethreads \
    -Duseithreads \
    -Duselargefiles \
    -Dd_semctl_semun \
    -Di_db \
    -Ui_ndbm \
    -Di_gdbm \
    -Di_shadow \
    -Di_syslog \
    -Duseperlio \
    -Ubincompat5005 \
    -Uversiononly \
    -Dd_gethostent_r_proto \
    -Ud_endhostent_r_proto \
    -Ud_sethostent_r_proto \
    -Ud_endprotoent_r_proto \
    -Ud_setprotoent_r_proto \
    -Ud_endservent_r_proto \
    -Ud_setservent_r_proto

Replies are listed 'Best First'.
Re: large .so sizes under x86_64
by Eliya (Vicar) on Apr 08, 2011 at 20:06 UTC

    I can't confirm your observation: my 64-bit HiRes.so, for example, is just 35k.

    Maybe you've somehow managed to statically link in libc (which is roughly 2 MB)?  What does ldd say?  Mine says:

    $ ldd /usr/local/lib/perl5/5.12.2/x86_64-linux-thread-multi/auto/Time/HiRes/HiRes.so
            linux-vdso.so.1 =>  (0x00007fff399fe000)
            librt.so.1 => /lib/librt.so.1 (0x00007f2e314fe000)
            libc.so.6 => /lib/libc.so.6 (0x00007f2e3119c000)
            libpthread.so.0 => /lib/libpthread.so.0 (0x00007f2e30f7f000)
            /lib64/ld-linux-x86-64.so.2 (0x00007f2e31911000)

    (the entry libc.so.6 => ... would be missing if linked statically)
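    A quick way to tell "the .so file really is big" apart from "the mapping merely looks big" is to compare the on-disk file size with what smaps shows; a minimal sketch (the path argument is just an example):

        use strict;
        use warnings;

        my $so = shift @ARGV or die "usage: $0 path/to/HiRes.so\n";
        printf "on-disk size: %.0f kB\n", (-s $so) / 1024;  # actual file size, not mapping size
        system 'ldd', $so;  # libc.so.6 => ... should be listed if libc is dynamically linked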

      I'm seeing libc.so.6:
      % ldd ./auto/Time/HiRes/HiRes.so
              librt.so.1 => /lib64/librt.so.1 (0x00002af3572c2000)
              libc.so.6 => /lib64/libc.so.6 (0x00002af3574cb000)
              libpthread.so.0 => /lib64/libpthread.so.0 (0x00002af357822000)
              /lib64/ld-linux-x86-64.so.2 (0x0000003bf0000000)
      What do you get when you run this script?
      use strict;
      use warnings;
      use Time::HiRes;

      open my $smaps, '<', "/proc/$$/smaps" or die "can't open smaps: $!";
      my $lib = '';
      while (<$smaps>) {
          if (m/^[0-9a-f]/) {                          # mapping header line (hex start address, not just ^\d)
              chomp($lib = (split ' ', $_)[5] || '');  # field 6 is the pathname, if any
          }
          elsif (m/^Size:\s*(\d+.*)/) {
              print "$1 $lib\n" if $lib =~ m/HiRes/;
          }
      }
      I am getting (CentOS 5.5, Perl 5.8):
      24 kB /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/auto/Time/HiRes/HiRes.so
      2044 kB /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/auto/Time/HiRes/HiRes.so
      4 kB /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/auto/Time/HiRes/HiRes.so

      Does the 2044 kB entry represent real memory usage (unshared with any other library)? I'm trying to determine how much real memory my application is using.

        Ah, sorry, I missed that you were talking about memory usage (as opposed to shared object file size).  And yes, I do get similar numbers.

        Note that the "Size" figures (like your 2044 kB) are rather meaningless: they just describe address regions the process could, in theory, touch without segfaulting. As long as it doesn't touch them, they say nothing about what one typically means by "memory usage". (The ~2 MB itself is most likely just segment alignment: on x86_64, the linker aligns loadable segments to its default 2 MB maximum page size, so every .so carries a never-touched gap of roughly that size between its text and data segments.) What's more interesting in this regard is the resident size (RSS), in particular its private (non-shared) pages, though even that is a somewhat simplified view of things...
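        If you want those RSS numbers without eyeballing smaps by hand, you can sum them up yourself; a minimal sketch for the current process (assuming the Rss/Private_* field names of 2.6-era kernels):

            use strict;
            use warnings;

            open my $smaps, '<', "/proc/$$/smaps" or die "smaps: $!";
            my ($rss, $private) = (0, 0);
            while (<$smaps>) {
                $rss     += $1 if /^Rss:\s*(\d+)\s*kB/;                      # resident pages
                $private += $1 if /^Private_(?:Clean|Dirty):\s*(\d+)\s*kB/;  # non-shared pages
            }
            print "rss: $rss kB, private: $private kB\n";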

        Anyhow, for a more detailed analysis of memory usage, you might be interested in exmap. Its docs give a good overview of the terminology, btw.

        Also, there's Linux::Smaps, so you don't have to parse the proc file yourself. And IIRC, there's even a script "out there" (using this module) which assembles the smaps info and creates an easier-to-digest summary report.  Unfortunately, I can't remember its name, but maybe you can dig it up...
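        A rough example of what that looks like (an untested sketch; the method names are from my memory of the Linux::Smaps docs, so check its POD):

            use strict;
            use warnings;
            use Linux::Smaps;

            my $map = Linux::Smaps->new($$);
            for my $vma ($map->vmas) {
                my $name = $vma->file_name or next;   # skip anonymous mappings
                next unless $name =~ /HiRes/;
                printf "%s: size=%d kB rss=%d kB private_dirty=%d kB\n",
                       $name, $vma->size, $vma->rss, $vma->private_dirty;
            }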

Re: large .so sizes under x86_64
by sundialsvc4 (Abbot) on Apr 08, 2011 at 23:10 UTC

    For what (little) it may be worth, I have noticed, especially on 64-bit systems, that memory region sizes as reported by tools like these can look much larger until some actual memory pressure compels the system to "clean house a little bit", and it seems to reach for big allocations first. (This strategy appears to work just fine.) I haven't delved into the guts of Linux to learn what influenced the designers' decisions in the 64-bit case, nor do I intend to, but the algorithms are obviously different; maybe it's partly the simple fact that "chips are cheap" now. Of course, operating systems are always (well-) designed to be lazy, because the tradeoff is always space versus time.