I don't believe I've ever seen this 2GB barrier, unless the file system won't let you write files larger than 2GB. In those cases, I have seen big problems. Unfortunately, the only way to combat that is to re-compile the OS, which is rarely a viable solution.
That aside, I also noticed that the perldoc notes in its WARNINGS section that:
Many DBM implementations have arbitrary limits on the size of records that can be stored. For example, SDBM and many ODBM or NDBM implementations have a default limit of 1024 bytes for the size of a record. MLDBM can easily exceed these limits when storing large data structures, leading to mysterious failures. Although SDBM_File is used by MLDBM by default, it is not a good choice if you're storing large data structures. Berkeley DB and GDBM both do not have these limits, so I recommend using either of those instead.
Reading this, I would make sure not to use SDBM_File, ODBM, or NDBM if your structures are larger than 1024 bytes. You can always test how big a structure serializes to with Storable::freeze().
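Just to illustrate (the %record hash and its contents here are made up, so substitute your own data), something like this will tell you how big the serialized form is:

    use strict;
    use warnings;
    use Storable qw(freeze);

    # Placeholder structure standing in for whatever you're actually storing.
    my %record = (
        name   => 'example',
        values => [ 1 .. 500 ],
    );

    # freeze() serializes the structure to a string; its length is a good
    # proxy for the record size the DBM layer will have to store.
    my $frozen = freeze( \%record );
    printf "Serialized size: %d bytes\n", length $frozen;
    warn "Bigger than SDBM's 1024-byte limit!\n" if length($frozen) > 1024;

If that prints something well over 1024 bytes, the perldoc warning above is very likely your culprit.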
If that doesn't help, I would recommend posting a snippet of the code you use to tie your hash to the file for writing the data in the first place. There may be ways of refactoring it that could be beneficial. Worth a shot, I suppose.
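In the meantime, here's a rough sketch of how I'd set up the tie through MLDBM with DB_File (Berkeley DB) and Storable instead of the default SDBM_File. The filename and the stored structure are just placeholders, so adjust them to your setup:

    use strict;
    use warnings;
    use Fcntl qw(O_CREAT O_RDWR);
    use MLDBM qw(DB_File Storable);   # Berkeley DB backend, Storable serializer

    # 'data.db' is only a placeholder filename.
    tie my %db, 'MLDBM', 'data.db', O_CREAT | O_RDWR, 0644
        or die "Cannot tie data.db: $!";

    # With MLDBM you have to store a whole structure at once; modifying a
    # nested element in place won't be written back to the file.
    $db{big_record} = { name => 'example', values => [ 1 .. 500 ] };

    untie %db;

That "store the whole structure at once" point is worth double-checking in your code, too, since it's another common source of mysterious MLDBM behaviour.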
---hA||ta----
print map{$_.' '}grep{/\w+/}@{[reverse(qw{Perl Code})]} or die while ( 'trying' );