cybersekkin has asked for the wisdom of the Perl Monks concerning the following question:

I missed an item in the first post: the OS is Debian Linux (sarge). When I assign the value 3000000000 directly, as in the sample below, it is preserved, but when I use the commented-out lstat instead I get a negative number :) Perl seems to treat any inode number above the 2-billion mark as signed, although printf with a long specifier prints it correctly. The problem is that on a MySQL insert (into an unsigned int column) the negative value causes the stored inode to become zero. Can anyone confirm this lstat behaviour, or point me toward another direction that might help me solve the problem?
******start code Sample*****
#!/usr/bin/perl
use strict;
use DBI;

my $fqfn = '/home/daveb/temp.txt';
#my @stats = lstat $fqfn;
my @stats = (-1000, 3000000000, -22345);
my ($dev, $ino, $garbage) = @stats;
print "$dev\n$ino\n$garbage\n";
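For illustration, a minimal sketch (with hypothetical values, not the poster's actual lstat output) of recovering the unsigned inode with a 32-bit mask before the database insert. This is one way to force the value unsigned, distinct from the printf approach mentioned above, and it assumes the kernel's real inode numbers fit in 32 bits:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Inode 3000000000 misread through a signed 32-bit ino_t
# comes out as -1294967296.
my $ino = -1294967296;

# Mask to the low 32 bits to recover the unsigned value
# (assumes the inode numbers fit in 32 bits).
my $unsigned = $ino & 0xFFFFFFFF;

print "$unsigned\n";    # 3000000000
```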

Edited by planetscape - added code tags

Replies are listed 'Best First'.
Re: lstat for large inode values over 3billion
by Celada (Monk) on Dec 06, 2005 at 19:09 UTC

    What operating system? Your kernel is reporting very large integers in the result of stat, but some part of the system (either Perl or your C header files) thinks the inode number in the stat structure is supposed to be signed. ino_t is unsigned under both Solaris and Linux as far as I can tell; maybe you are using something else?

    In any case, I think you can force the value to become unsigned without any worries, because I don't think anyone has ever heard of negative inode numbers being valid. You found one way to do this: printf with a long integer specifier. I suggest XORing the value with 0. It's a no-op, but it forces the value to be treated as unsigned:

    no integer; # you probably don't need this
                # just pointing out it doesn't work
                # under use integer;
    $ino ^= 0;
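    A quick self-contained check of the XOR trick (a sketch; the exact positive value you get depends on whether your Perl was built with 32- or 64-bit integers):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $ino = -1;    # stand-in for a misread inode number

# Numeric XOR treats its operands as unsigned integers, so this
# no-op flips the interpretation from signed to unsigned.
$ino ^= 0;

print "$ino\n";  # ~0 as unsigned: 4294967295 or 18446744073709551615
```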
      XORing it worked :) thanks for the fix.