Thanks for your feedback; so:
1_ Forget the print line ... it was supposed to be commented out.
2_ I thought that doing "echo" should be faster or give better performance ... isn't it?
3_ I considered using a database, but why would it be better? It would contain only a single table. Would you explain?
Thanks!!!
Re: "echo" vs. local print:
"echo" is invoking an external command (New process), opening a file, and appending to it.
Doing that in perl would avoid the "process creation", and command parsing overhead (Probably nanoseconds, so no big deal). But, being a perl bigot, I'd prefer seeing it in perl.
open my $log, ">>", "$path/register_list.txt" or die "Cannot append: $!";
print $log join( ",",$date,$client_ip,$client_imsi,$bsid),"\n";
close $log;
Re: Using a Database:
You could, potentially, have a BSID table and a CLIENT table in addition to the log entries, to track items. Dates would be better organized, and searching and filtering is easy to do.
I have a simple CGI application that presents the contents of an SQLite DB to a web page, enabling queries. This is why I would prefer a database.
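A rough sketch of what the logging side could look like with DBI and DBD::SQLite (the database name, table name, and placeholder values here are just guesses based on the fields in your log line; adjust to taste):
#!/usr/bin/env perl
use strict;
use warnings;
use DBI;

# Hypothetical example values standing in for the data your script already has.
my ( $date, $client_ip, $client_imsi, $bsid ) =
    ( scalar localtime, '10.0.0.1', '001010123456789', 'BS-01' );

my $dbh = DBI->connect( "dbi:SQLite:dbname=register_list.db", "", "",
    { RaiseError => 1, AutoCommit => 1 } );

# One table mirroring the fields of the log line.
$dbh->do(q{
    CREATE TABLE IF NOT EXISTS register_log (
        date        TEXT,
        client_ip   TEXT,
        client_imsi TEXT,
        bsid        TEXT
    )
});

# One insert per registration instead of one appended line of text.
my $sth = $dbh->prepare(
    "INSERT INTO register_log (date, client_ip, client_imsi, bsid) VALUES (?,?,?,?)"
);
$sth->execute( $date, $client_ip, $client_imsi, $bsid );

# Filtering then becomes a query instead of a grep over a flat file, e.g.:
#   SELECT * FROM register_log WHERE bsid = ? ORDER BY date;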
This is not an optical illusion, it just looks like one.
$ cat pore.pl
#!/usr/bin/env perl
use strict;
use warnings;
use Benchmark 'cmpthese';
open my $fh, '>', '/tmp/wp.txt' or die $!;
cmpthese( 10000,
    {
        'print' => sub { print $fh "x\n"; },                # write through the already-open handle
        'echo'  => sub { system "echo x >> /tmp/we.txt"; }, # fork a shell and append, every time
    }
);
close $fh;
exit;
$ perl pore.pl
(warning: too few iterations for a reliable count)
Rate echo print
echo 317/s -- -100%
print 10000000000000000000/s 3153000000000000000% --
Yes, print is so much faster than shelling out each time that it is practically immeasurable. The two would get closer if you did the open and close inside the sub, of course, and that would mean losing the buffering too, but it is still going to beat the pants off a fork and a shell invocation just to do a one-line write.
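Roughly, that open-per-write variant would look something like this (same made-up /tmp path as the benchmark above); each call pays for an open and a close, but it still never forks:
#!/usr/bin/env perl
use strict;
use warnings;

# Open, write one line, close -- no buffering carried across calls.
sub log_line {
    my ($line) = @_;
    open my $fh, '>>', '/tmp/wp.txt' or die $!;
    print $fh "$line\n";
    close $fh;           # flushes immediately
}

log_line("x");
Even this per-call open/close stays in-process, so it avoids the fork and shell parse that echo pays for on every line.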
Upshot: on my machine here it isn't nanoseconds - it's more like 3 milliseconds.