http://qs1969.pair.com?node_id=218610


in reply to Re^4: iteration through tied hash eat my memory
in thread iteration through tied hash eat my memory

I did a bit of checking for you, and it looks like Linux large-file support was added to Berkeley DB in version 3.2.9 (see the change log at http://www.sleepycat.com/update/3.2.9/if.3.2.9.html). Since you're on XFS, some further checking turned up http://oss.sgi.com/projects/xfs/faq.html#largefilesupport, which indicates that your large-file support may also be conditional on your glibc version. My recommendation is to get the current version of BerkeleyDB and install it into /usr/local. Be very careful not to disturb your existing library, since various parts of your OS probably depend on 3.1.17 staying 3.1.17.
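Once you have both libraries on the box, it's worth verifying at runtime which one the Perl BerkeleyDB module actually linked against. Here's a minimal sketch, assuming you rebuilt the BerkeleyDB module against the new library in /usr/local (the database filename is made up):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use BerkeleyDB;

    # Confirm which Berkeley DB library the module linked against;
    # anything >= 3.2.9 should have Linux large-file support.
    print "linked against Berkeley DB $BerkeleyDB::db_version\n";

    # Tie a hash to an on-disk Btree; the filename is hypothetical.
    tie my %h, 'BerkeleyDB::Btree',
        -Filename => '/tmp/large.db',
        -Flags    => DB_CREATE
        or die "cannot tie: $BerkeleyDB::Error";

    $h{example} = 'value';
    while ( my ( $k, $v ) = each %h ) {
        print "$k => $v\n";
    }
    untie %h;

If that prints 3.1.17, the module picked up the old system library at build time and you'll need to point its Makefile.PL at the new one.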

Google is your friend: try searching for "suse xfs 2gb". And obviously, just read the change logs on http://www.sleepycat.com's web site for the scoop on BerkeleyDB.

__SIG__ use B; printf "You are here %08x\n", unpack "L!", unpack "P4", pack "L!", B::svref_2object(sub{})->OUTSIDE;