The best you can do is either change the kernel to accommodate your needs (you could probably write a module) or implement that bit of code in C to reduce the amount of time you linger.
In Perl, I would rewrite your code so that you open all the files first, then read from them one by one. On most Unices, a file won't 'disappear' until its last file handle is closed.
opendir(PROC, "/proc") or die "can't open /proc: $!";
my %filehandles;
foreach my $d (readdir(PROC)) {
    next if $d !~ /^[0-9]+$/;
    my $procdir = "/proc/$d";
    open($filehandles{$procdir}, "<", "$procdir/status")
        || warn "can't open status in $procdir";
}
closedir(PROC);

foreach my $procdir (keys %filehandles) {
    my $fh = $filehandles{$procdir};
    my @temp;
    while (<$fh>) {
        $temp[0] = (split /\s+/)[1] if /^Uid/;
        $temp[1] = (split /\s+/)[1] if /^VmSize/;
    }
    close $fh;
}
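The claim that an open handle keeps a file readable even after it vanishes from the directory is easy to check. Here's a minimal sketch (the temp file and its "still here" contents are just illustrative, not part of the /proc code above):

use strict;
use warnings;
use File::Temp qw(tempfile);

# Create and open a scratch file; tempfile() returns a read/write handle.
my ($fh, $name) = tempfile();
print $fh "still here\n";
seek($fh, 0, 0);            # rewind (this also flushes the write buffer)

unlink $name or die "unlink failed: $!";   # directory entry is gone...
print scalar <$fh>;                        # ...but the data is still readable
close $fh;                                 # only now is the file truly released

This prints "still here" even though the file was unlinked before the read, which is why opening everything up front protects you against processes exiting mid-scan.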
____________________
Jeremy
I didn't believe in evil until I dated it.
In reply to Re: fastest way to open a file and stroing it?
by jepri
in thread fastest way to open a file and storing it?
by snam