In an attempt to get unique names for files that need to be processed in sequence, I turned to Time::HiRes. I create these files from a command-line script, then another script reads the file names from a directory and sorts them by time (the filenames look something like XXX~1090220890.53125.xml). I am using gettimeofday() to get the unique number. To illustrate the issue I face, I used the following test code.
```perl
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw( gettimeofday );

for ( my $i = 0; $i <= 10; $i++ ) {
    print "$i\tgettimeofday = " . gettimeofday() . "\n";
}
```

The output of which is:

```
0	gettimeofday = 1090220890.53125
1	gettimeofday = 1090220890.53125
2	gettimeofday = 1090220890.53125
3	gettimeofday = 1090220890.53125
4	gettimeofday = 1090220890.53125
5	gettimeofday = 1090220890.53125
6	gettimeofday = 1090220890.53125
7	gettimeofday = 1090220890.53125
8	gettimeofday = 1090220890.53125
9	gettimeofday = 1090220890.53125
10	gettimeofday = 1090220890.53125
```

NOT very unique...
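For what it's worth, in list context gettimeofday() returns the seconds and microseconds as two separate values, which may expose more precision than the scalar floating-point form does on some platforms. A minimal sketch:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw( gettimeofday );

# In list context, gettimeofday() returns ( seconds, microseconds ).
my ( $sec, $usec ) = gettimeofday();

# Zero-pad the microseconds to six digits so names sort correctly.
printf "%d.%06d\n", $sec, $usec;
```

Even so, two calls in quick succession (or in two concurrent processes) can still return the same pair, so this alone does not guarantee uniqueness.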
The command-line interface gets called extremely often, and often several of these processes run at the same time. I would really rather not slow down the entire process if I can help it. The only way I can think of doing this is to keep a number in a file, which I lock until I have updated it with a new number. That means all the other processes have to wait until the file is unlocked.
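The locked-counter idea described above could be sketched like this (the counter file path is a placeholder for illustration; flock blocks each process until the previous one releases the lock, which is exactly the serialization cost mentioned):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw( :flock O_RDWR O_CREAT );

# Hypothetical counter file; adjust the path for your setup.
my $counter_file = '/tmp/xml_seq.counter';

sub next_seq {
    sysopen( my $fh, $counter_file, O_RDWR | O_CREAT )
        or die "Cannot open $counter_file: $!";
    flock( $fh, LOCK_EX ) or die "Cannot lock $counter_file: $!";

    # Read the current value (0 if the file is new/empty), bump it,
    # and write it back before releasing the lock.
    my $n = <$fh> || 0;
    $n++;
    seek( $fh, 0, 0 );
    truncate( $fh, 0 );
    print {$fh} $n;

    close $fh;    # closing the handle releases the lock
    return $n;
}

print next_seq(), "\n";
```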
Does anyone have any clever ideas for getting unique numbers fast (some kind of algorithm that includes the time, etc.)?
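One common lock-free approach (offered here only as a hedged suggestion, not something from the original post) is to fold the process ID and a per-process counter into the name alongside the timestamp: two concurrent processes differ in $$, and two names from the same process differ in the counter, so no file locking is needed, and the leading timestamp still lets the reader script sort by time:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw( gettimeofday );

my $counter = 0;

# Seconds and zero-padded microseconds from list-context gettimeofday(),
# plus the PID ($$) and a per-process counter, give a name that is
# unique across concurrent processes without any locking.
sub unique_name {
    my ( $sec, $usec ) = gettimeofday();
    return sprintf "XXX~%d.%06d.%d.%d.xml", $sec, $usec, $$, $counter++;
}

print unique_name(), "\n" for 1 .. 3;
```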
In reply to Unique filenames with Time::HiRes by AcidHawk