in reply to Re^3: iteration through tied hash eat my memory
in thread iteration through tied hash eat my memory

I'm using SuSE Linux 7.3 (Intel 32-bit), which ships the Berkeley DB library at version 3.1.17. When the database file grew to 2GB, I got a message like 'File size limit exceeded (SIGXFSZ)' (not exact; translated from a localized message). I am able to create larger files on the filesystem itself (tested up to 6GB). Does that mean the db package in my distribution was compiled without large file support?
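For reference, here is roughly how I tie the hash (a minimal sketch; the actual file name, flags, and database type in my script differ):

    use strict;
    use Fcntl;
    use DB_File;

    # Tie a hash to a Berkeley DB btree file; this is the file
    # that grows past 2GB.  (File name and mode are illustrative.)
    tie my %db, 'DB_File', 'big.db', O_RDWR|O_CREAT, 0644, $DB_BTREE
        or die "Cannot tie big.db: $!";

    # Iterate with each() so only one key/value pair is fetched at a time.
    while (my ($key, $value) = each %db) {
        # ... process $key / $value ...
    }

    untie %db;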


Re:^5 iteration through tied hash eat my memory
by diotalevi (Canon) on Dec 09, 2002 at 18:11 UTC

    I did a bit of checking for you, and it looks like Linux large file support was added to Berkeley DB in version 3.2.9 (see the change log at http://www.sleepycat.com/update/3.2.9/if.3.2.9.html). Your signal, SIGXFSZ, means a process exceeded its file size limit; some further checking turned up http://oss.sgi.com/projects/xfs/faq.html#largefilesupport, which indicates that your large file support may be conditional on your glibc library. My recommendation is to get the current version of Berkeley DB and install it into /usr/local. Be very careful not to disturb the existing library, since various parts of your OS probably depend on 3.1.17 staying 3.1.17.
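    Once the new library is installed and DB_File is rebuilt against it, you can verify which Berkeley DB your Perl is actually linked to; DB_File exposes the library version as a package variable:

        use DB_File;

        # $DB_File::db_version holds the version of the Berkeley DB
        # library that DB_File was compiled against.  It should report
        # 3.2.9 or later once the rebuild against /usr/local worked.
        print "DB_File is linked against Berkeley DB $DB_File::db_version\n";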

    Google is your friend: search for "suse xfs 2gb". And just read the change log on Sleepycat's web site (http://www.sleepycat.com) for the full scoop on Berkeley DB.

    __SIG__ use B; printf "You are here %08x\n", unpack "L!", unpack "P4", pack "L!", B::svref_2object(sub{})->OUTSIDE;