in reply to Do XS-components require special considerations with CGI? [SOLVED]
In my ongoing bulldog-struggle with this thing, I modified CPAN's DBD::mysql.pm to wrap the bootstrap DBD::mysql $VERSION; statement in an eval{} block, then tested for errors. And, voilà! I found one!
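A minimal sketch of that diagnostic technique, for anyone following along. The module name below is a deliberate placeholder (the real statement is bootstrap DBD::mysql $VERSION;, which cannot run standalone); the point is only that eval{} captures the true failure reason in $@ instead of letting it be masked:

```perl
use strict;
use warnings;

# Wrap the load in eval{} and inspect $@ afterward, instead of letting
# the real error be swallowed and surface later as a misleading message.
# "No::Such::XS::Module" is a placeholder for the DBD::mysql bootstrap.
my $loaded = eval {
    require No::Such::XS::Module;
    1;
};
if (!$loaded) {
    # $@ now holds the underlying error, e.g. "Can't locate ..." or,
    # in the DBD::mysql case, the dl_load_file() failure text.
    print STDERR "bootstrap-style load failed: $@";
}
```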
(Pathname "..." shortened for brevity.)

    Can't load '[...]/local-perl/lib/perl5/i486-linux-gnu-thread-multi/auto/DBD/mysql/mysql.so' for module DBD::mysql: libz.so.1: failed to map segment from shared object: Cannot allocate memory at /usr/lib/perl/5.8/DynaLoader.pm line 225.
This is the failure which ultimately gives rise to the “Had to create DBD::mysql::dr::imp_data_size unexpectedly at ...” message, as well as all the others. Because the driver-module in fact isn't being bootstrapped at all, “nothing works.”
As previously stated, this message occurs only in Apache CGI mode.
I admit that I am rather mystified that DynaLoader::bootstrap doesn't throw any kind of error ... nor does mysql.pm ... if this sort of thing occurs. This apparently has been a well-known issue, perhaps with various drivers, for a number of years.
And yet, having written the above paragraph, I observe that when I replaced the print STDERR statement with either croak or die, the exception is silently eaten by someone out there! I apparently can't surface the error message from here via either of those two methods. Presumably the same thing is happening elsewhere, thus concealing the underlying error condition from view.
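This "silently eaten" behaviour is easy to reproduce: if some caller further up the stack wraps your code in an eval{} and never looks at $@, a die() simply vanishes, while a print STDERR still escapes to the error log. A small illustration (the subroutine names are mine, not from DBD::mysql):

```perl
use strict;
use warnings;

# risky_die() throws; risky_stderr() writes directly to STDERR.
sub risky_die    { die "fatal from die\n" }
sub risky_stderr { print STDERR "fatal via STDERR\n" }

# A caller that evals and discards $@ makes die() invisible ...
eval { risky_die() };
my $swallowed = $@;            # the exception went nowhere else

# ... but a direct STDERR write escapes the eval entirely.
eval { risky_stderr() };

print "die was swallowed\n" if $swallowed;
```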
Anyhow, this particular point in /usr/lib/perl/5.8/DynaLoader.pm is prefixed by an expected comment:
    # Many dynamic extension loading problems will appear to come from
    # this section of code: XYZ failed at line 123 of DynaLoader.pm.
    # Often these errors are actually occurring in the initialisation
    # C code of the extension XS file.  Perl reports the error as being
    # in this perl code simply because this was the last perl code
    # it executed.

    my $libref = dl_load_file($file, $module->dl_load_flags) or
        croak("Can't load '$file' for module $module: ".dl_error());
I see from the error message that the correct mysql.so is being located, but that it then fails during the actual load attempt.
The root cause of the problem, as output by dl_error(), ultimately is:
libz.so.1: failed to map segment from shared object: Cannot allocate memory.
Update: After building a statically-linked driver with -lz included in the --libs="" parameter, the same message occurs, but the library name is now libnsl.so. Plainly, I think, the root cause of this condition is not actually associated with any particular library: I deduce that it is a loader issue.
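For reference, a hedged sketch of that rebuild (a config/build fragment, not runnable in isolation). The library paths and extra flags below are illustrative assumptions, not taken from the original post; only the --libs override itself is the technique being described:

```shell
# From an unpacked DBD::mysql source directory: override the linker
# flags so -lz is linked in explicitly.  Paths/flags are illustrative.
perl Makefile.PL --libs="-L/usr/lib/mysql -lmysqlclient -lz"
make && make test
```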
Everything that I read about this sort of thing says, e.g. “This error is not related to shared libraries. You need to set maximum process size in megabytes.” But what I simply don't understand is how the resource-constraints of any reasonable hosting-service (this is 1&1 Internet... one of the big boys) could possibly be “too small.” How could 10 megabytes (or is it 20?) possibly be “too small,” and if so, why does it work fine... in my ulimit test, and in 1&1's own “500 Server Error” test-rig? My intuition tells me that there is something else wrong ... something that could “drop” DBD::mysql but not cause the application itself to completely die. If it truly were “out of memory,” I would expect to get a “500 Server Error” indeed... and nothing less.
The hosting-service's documentation says that shared host CGI programs are allowed a “10 megabyte” memory-limit, which seems more-than-generous to me. But just to be sure, I experimented with running the CGI program in a bash sub-shell with ulimit -Sm 10240 and it completed (in the shell...) without a quibble. (We're not talking about “large tables” or anything remotely-unusual here, anyway...)
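The sub-shell experiment looks roughly like this (note that ulimit -Sm takes kilobytes, so 10240 is 10 MB; the sh -c 'echo ...' below is a stand-in for the real CGI program, and the parentheses confine the limit to the sub-shell):

```shell
# Run a command under a 10 MB soft memory limit without affecting the
# parent shell.  "sh -c 'echo completed'" stands in for the CGI script.
out=$( ( ulimit -Sm 10240; sh -c 'echo completed' ) )
echo "$out"
```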
Replies are listed 'Best First'.

Re: Well, I really see it now
  by Corion (Patriarch) on Mar 03, 2009 at 14:02 UTC
  by locked_user sundialsvc4 (Abbot) on Mar 03, 2009 at 14:45 UTC