Re: Storing Info in a text File
by mirod (Canon) on Mar 10, 2001 at 22:04 UTC
This looks so much like the synopsis of DBD::RAM that
I had to try it (you need DBD::RAM and DBI installed):
#!/usr/bin/perl -w
use strict;
use DBI;
# create the data base handle
my $dbh = DBI->connect('DBI:RAM:', undef, undef, {RaiseError=>1});
# load the table in memory
$dbh->func({
    table_name  => 'members',               # table name
    col_names   => 'id,title,description',  # column names
    data_type   => 'PIPE',                  # pipe-separated columns
    data_source => [<DATA>],                # it could also be a file
}, 'import');
# 2 ways to get the new id
print "last id: ", get_last_id( $dbh), "\n";
print "first available id: ", get_first_available_id( $dbh), "\n";
# insert a new record
my $new_id= get_last_id( $dbh) + 1;
$dbh->do( qq[ INSERT INTO members (id, title, description)
VALUES ($new_id, 'New Title', 'New Description')]);
# output the result
$dbh->func( { data_source => 'SELECT * FROM members',
data_type => 'PIPE',
data_target => 'toto.txt',
},
'export');
# DBD::RAM does not support the MAX function so we have to do it ourselves:
# just get the first id from the list of id's sorted in descending order
sub get_last_id
{ my( $dbh)= @_;
  return $dbh->selectcol_arrayref(qq[ SELECT id FROM members ORDER BY id DESC ])->[0];
}
sub get_first_available_id
{ my( $dbh)= @_;
# get the list of id's sorted by ascending order
  my $ids = $dbh->selectcol_arrayref(qq[ SELECT id FROM members ORDER BY id ASC ]);
# go through the list until there is a hole
my $id= 1;
$id++ while( $id == shift @$ids);
return $id;
}
__DATA__
1|Title 1|Description 1
2|Title 2|Description 2
4|Title 3|Description 3
This way, the day you decide to switch from the text
DB to a real one, you can just import your data into
the new DB, change the table creation function,
remove the export, et voilà!
Re: Storing Info in a text File
by Rudif (Hermit) on Mar 10, 2001 at 19:47 UTC
Maybe you could do this: read all user lines, find the largest id, increment it for the user being added.
#! perl -w
use strict;
my @users = grep /[^\s]+/, <DATA>;
my $max = 0;
$max = $_ > $max ? $_ : $max for map { /^(\d+)\|/ ? $1 : 0 } @users;
++$max;
push @users, "$max|Rudif|scribe\n";
print for @users;
__END__
1|Title|Description
3|Title|Description
4|Title|Description
Rudif
Or a little more compact:
use List::Util qw(max);
my @users = grep /\S/, <DATA>;
my $max = max map { /^(\d+)/ } @users;
++$max;
print @users, "$max|Rudif|scribe\n";
-- Randal L. Schwartz, Perl hacker
But it costs an extra trip to CPAN.
Would you know offhand what this diagnostic (while installing Scalar-List-Utils-1.02) points to - wrong kind or build of Perl? I have the AS build 623 here on Win2k.
I did not try to investigate - the package also has a pure Perl version.
Util.c
Util.xs(183) : error C2198: 'Perl_Irunops_ptr' : too few actual parameters
Rudif
Re: Storing Info in a text File
by Ido (Hermit) on Mar 10, 2001 at 20:54 UTC
If you assume that the last line has the highest ID you can:
my $line;
open FH,"file.ext" or die $!;
$line=$_ while(<FH>); close FH; #now the last line is in $line
my($id)=split(/\|/,$line);
#now you can use $id+1
If the highest ID can be anywhere in the file:
my $id = 0; # start at 0 so the numeric comparison below never sees undef
open FH,"file.ext" or die $!;
while(<FH>){
my($nid)=split(/\|/);
$id=$nid if $nid>$id;
}
close FH;
#and again you use $id+1
Re: Storing Info in a text File
by Chady (Priest) on Mar 10, 2001 at 18:38 UTC
If you insist on using your own text-file based database instead of some kind of DB, there are a few ways around this. (Just note that none of this is tested in any way; major bugs may come along.)
- You can keep another file that stores the deleted IDs; when you add a new member, you first check that file, so a new member can be given the ID of 2 that was deleted earlier.
- Instead of just counting the lines, read the IDs from the lines themselves to figure out where you're at: split the last line on '|' and take the first value. That way you know the highest ID and can add one to it.
As I said, untested for bugs, and I'm sure there are plenty of other ways to do it; you just have to find what works best.
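A minimal sketch of the second idea (the sub and file names are made up; it assumes the `id|title|description` layout from the original post, with the highest ID on the last line):

```perl
use strict;
use warnings;

# Read the members file and compute the next free ID from its last line.
sub next_id {
    my ($file) = @_;
    open my $fh, '<', $file or die "Cannot open $file: $!";
    my $last;
    $last = $_ while <$fh>;          # keep overwriting until the last line
    close $fh;
    return 1 unless defined $last;   # empty file: start numbering at 1
    my ($id) = split /\|/, $last;    # first pipe-separated field is the ID
    return $id + 1;
}
```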
Chady | http://chady.net/
Re: Storing Info in a text File
by turnstep (Parson) on Mar 11, 2001 at 09:07 UTC
Here's another way, featuring flock. It finds the
first "unused" number when adding a new user, so the
gaps are kept to a minimum.
#!/usr/bin/perl -- -*-fundamental-*-
use strict;
use Fcntl qw(:flock);
my $type = shift or die "Need a type!\n";
my $title = shift or die "Need a title or user!\n";
my $desc = shift || "";
my $flatfile = "members.txt";
## Format of file: ID|Title|Description
&AddUser if $type =~ /add/i;
&DeleteUser if $type =~ /delete/i;
die qq[Invalid type of "$type"!\n];
exit;
sub AddUser() {
## Add a new user to the flat file
## Open the file in read/write mode:
open(FF, "+< $flatfile") or die "Could not open $flatfile: $!\n";
## Now we lock it so nobody else messes with it until we're done
flock(FF, LOCK_EX) or die "Could not flock $flatfile: $!\n";
## Now, we read through until we find a "free" number
my $goodnumber=0;
my $newnumber;
my $slurp="";
while(<FF>) {
$goodnumber++;
    ($newnumber) = split(/\|/, $_, 2);
if ($newnumber != $goodnumber) {
## We have just skipped a number - the perfect place
## to add a new user!
## Decrease goodnumber temporarily
$goodnumber--;
## Save this spot, so we can go back to it quickly
## We need to subtract the line we just read in,
## because the new entry must go *before* it
my $position = tell(FF) - length $_;
## Save the current line, since we have already read it:
$slurp = $_;
## Now we slurp the rest of the file into memory:
## Setting $/ allows us to read the whole thing at once
## by setting the input record separator ($/) to
## nothing. See perlvar for more.
{ local $/; $slurp .= <FF>; }
## Now we rewind the file back to where we marked it:
seek(FF,$position,0);
## This bails us out of the while loop
last;
}
}
  ## Increment (needed in case no "holes" found before the end of the file)
$goodnumber++;
## Some systems need this to switch from read to write:
seek(FF,0,1);
## Add the new entry:
print FF "$goodnumber|$title|$desc\n";
## Add all the entries after that:
print FF $slurp;
## We should not need to truncate, as the size is always
## increasing when adding a user, but just for fun:
truncate(FF, tell(FF));
## Close and unlock
close(FF);
print "Added user $goodnumber to $flatfile.\n";
exit;
} ## end of AddUser
sub DeleteUser() {
## Delete a user from the flat file
## What number user do we want?
my $baduser = $title;
## Sanity check:
$baduser =~ /^\d+$/ or die "User to delete must be a number!\n";
## Open the file in read/write mode:
open(FF, "+< $flatfile") or die "Could not open $flatfile: $!\n";
## Now we lock it so nobody else messes with it until we're done
flock(FF, LOCK_EX) or die "Could not flock $flatfile: $!\n";
## Now, we read through until we find the "bad" number
my $newnumber;
my $slurp="";
while(<FF>) {
    ($newnumber) = split(/\|/, $_, 2);
if ($newnumber == $baduser) {
## Save the spot right before this user, so we can
## go back to it quickly later:
my $position = tell(FF) - length $_;
## Now we slurp the rest of the file into memory:
## Setting $/ allows us to read the whole thing at once
## by setting the input record separator ($/) to
## nothing. See perlvar for more.
{ local $/; $slurp = <FF>; }
## Now we rewind the file back to where we marked it:
seek(FF,$position,0);
## This bails us out of the while loop
last;
}
}
## Some systems need this to switch from read to write:
seek(FF,0,1);
## Add all the entries after the bad one:
print FF $slurp;
## We do need to truncate, as the file size has shrunk:
truncate(FF, tell(FF));
## Close and unlock
close(FF);
if ($slurp) {
print "Deleted user $baduser from $flatfile.\n";
}
else {
print "User $baduser not found in $flatfile.\n";
}
exit;
} ## end of DeleteUser
Re: Storing Info in a text File
by petral (Curate) on Mar 10, 2001 at 18:35 UTC
Why wouldn't just reading the last line be easier? (Strip out the number and add 1.)
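One way to do that without reading the whole file is to seek near the end and only parse the tail (a sketch; the sub name and the 512-byte guess at "more than one line" are made up):

```perl
use strict;
use warnings;

# Grab the ID on the last line by seeking near the end of the file
# instead of reading every line.
sub last_id {
    my ($file) = @_;
    my $BLOCK  = 512;   # assume one line is well under this many bytes
    open my $fh, '<', $file or die "Cannot open $file: $!";
    my $size = -s $fh;
    seek $fh, ($size > $BLOCK ? $size - $BLOCK : 0), 0;
    my @lines = grep /\S/, <$fh>;   # the first line read may be partial
    close $fh;
    my ($id) = split /\|/, $lines[-1];  # last line is always complete
    return $id;
}
```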
p
Re: Storing Info in a text File
by sierrathedog04 (Hermit) on Mar 11, 2001 at 01:32 UTC
The text-based solutions proposed above do not seem to take into account race conditions. What if two processes both read the text file nearly simultaneously? Then they will both attempt to write a row with the same row id and the text file will be corrupted.
Databases handle this problem by locking rows or tables. The first process that reads the data takes a lock so no one else can touch it, and does not release the lock until its write has finished or failed.
It looks to me as if Perl's flock command can get around the race conditions problem, but I do not see anybody using it in their solutions.
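A sketch of how flock could be applied here (the sub and file names are made up; the key point is that one exclusive lock is held across both the read and the append):

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Find the highest ID and append the new record under a single
# exclusive lock, so no other process can interleave between the two.
sub add_member {
    my ($file, $title, $desc) = @_;
    open my $fh, '+<', $file or die "Cannot open $file: $!";
    flock $fh, LOCK_EX or die "Cannot lock $file: $!";
    my $max = 0;
    while (<$fh>) {
        my ($id) = split /\|/;
        $max = $id if $id > $max;
    }
    seek $fh, 0, 2;                  # make sure we are at end-of-file
    print $fh $max + 1, "|$title|$desc\n";
    close $fh;                       # closing releases the lock
    return $max + 1;
}
```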
Re: Storing Info in a text File
by Anonymous Monk on Mar 10, 2001 at 18:36 UTC
Re: Storing Info in a text File
by fpi (Monk) on Mar 11, 2001 at 13:49 UTC
Here's a different approach: seeing that your ID numbers are apparently not pre-assigned, and that you just create a new ID based on the number of records (but the ID really doesn't have any relation to the number of records), how about creating an ID that will undoubtedly be unique at the time it is created and even after thousands and thousands more ID's are created? Such as an ID based on the date/time. Just use the time since epoch, which is just the date/time in seconds from a specific date (which is 1Jan1970 on Unix systems I think).
sub get_current_epoch {
    use Time::Local;
    my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);
    my $epoch = timelocal($sec, $min, $hour, $mday, $mon, $year);
    return $epoch;
}
This may cause problems with duplicate ID numbers if your situation were such that it was possible (i.e. like in a website) that more than one ID number were being created at the EXACT same second (their epoch time would be exactly the same). The solution to this would be to add randomly generated numbers to your ID in addition to the date/time. The more random digits you add, the less likely you are to have duplicates. For example, if it were possible in your situation that 2 ID's might be created at the same second, but no more than 2, then it would be useless to have 10 random digits added to your ID, but maybe 2 or 3 digits would be fine.
my $random = rand 1000;                 # random number from 0 up to 999
$random = sprintf("%03d", $random);     # fills in leading zeros
my $id = get_current_epoch() . $random; # uses above subroutine
Hope this all makes sense. The drawback is that the ID numbers are large, and it may be overkill for your situation. The advantage is that when you create a new ID, you just create it without having to check the latest entry, or for duplicate ID's. Plus you will have an automatic timestamp of when the record was created.
(12Mar2001 - code added for clarity)
Re: Storing Info in a text File
by dfog (Scribe) on Mar 10, 2001 at 22:48 UTC
To save on general processing, you may want to just have a second text file with the next available id, and increment that every time you add a line. That way it is completely independent of how many lines you have in the file, and you can add using >> without having to parse that entire file. Just my 2E-2.
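A sketch of that idea (the sub and file names are made up, and it assumes the counter file already exists and holds one number): the counter file carries only the next ID, so adding a member never parses the members file at all.

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Read the next ID from a one-line counter file, bump it in place,
# then append the new record to the members file with '>>'.
sub add_member {
    my ($counter_file, $members_file, $title, $desc) = @_;
    open my $cf, '+<', $counter_file or die "Cannot open $counter_file: $!";
    flock $cf, LOCK_EX or die "Cannot lock $counter_file: $!";
    chomp(my $id = <$cf>);
    seek $cf, 0, 0;                  # rewind and overwrite the counter
    print $cf $id + 1, "\n";
    truncate $cf, tell $cf;
    close $cf;                       # closing releases the lock
    open my $mf, '>>', $members_file or die "Cannot open $members_file: $!";
    print $mf "$id|$title|$desc\n";
    close $mf;
    return $id;
}
```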
Dave