tkil,
Thanks for your prompt response. I ran your script and it printed "hi mom" on the command line. The script containing the code I'm using is Movable Type (one of the most popular blogging packages). I'm not really familiar with Perl, so I'm not sure how to fix it. I've posted questions in the MT support forums and nobody responded. If you wouldn't mind looking at the code of the script I'm trying to run, which is coming up with the error posted above, I have it posted at the DevShed forums at: http://forums.devshed.com/t141992/s.html
I used this script on another account on the same server without any problems. It started giving me trouble when I copied all the files over from the old account to the new one.
Any help would be greatly appreciated.
Thanks,
John
> It started giving me trouble when I copied all the
> files over from the old account to the new one.
This makes me suspicious. If the two servers were
different in any of a variety of ways (different
architectures, especially big-endian (PPC) vs.
little-endian (x86); 32-bit vs. 64-bit;
different perl versions; different
versions of Storable.pm...) then anything saved with
Storable on one machine will likely not be readable
with Storable on another machine.
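Incidentally, if you ever do need Storable data to cross
architectures, Storable provides nfreeze, which writes
integers in network byte order so the frozen string can
be thawed anywhere; a small sketch of a portable round
trip:

```perl
use strict;
use warnings;
use Storable qw( nfreeze thaw );

# nfreeze() writes in network byte order, so the frozen string
# is readable on machines with different endianness; plain
# freeze() uses native byte order and may not be.
my $data   = { title => 'example', ids => [ 1, 2, 3 ] };
my $frozen = nfreeze($data);
my $copy   = thaw($frozen);

print "round trip ok\n"
    if $copy->{title} eq 'example' && $copy->{ids}[2] == 3;
```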
The actual script should not have a problem; any data
you wrote with Storable is where the issue is. And by
"wrote", I'm including anything you put into a database
from the first server as well. Are you hitting the
same tables (same database and same schema) as you
were before, or have you set up a new database?
The one thing that still worries me about your code
in particular is that you seem to be setting a value
into that column directly, then pulling it out; since
that is all done on the same machine, the compatibility
issues above should not come up. So I'm still not
convinced that the column method is safe...
but if it works fine on one machine, it likely is.
(On the other hand, if the data you are putting into
the column now is not exactly the same, then there still
could be reasons that it is failing that have nothing
to do with it being on a new machine. In particular,
if the freeze representation of the data
is too wide to fit into your column, and you do not
check for width overflow, then that could be corrupting
your data by truncating it. You should be able to use
the second test script in my original post to
determine whether
this is the case; instead of setting $orig
to a generated string, find the actual data you were
trying to store when your code blew up. The idea is
to test just your column method, without
complicating it by trying freeze and
thaw on it.)
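To make the width concern concrete, a rough sketch of
such a check (the column width of 255 here is a made-up
placeholder; substitute the real width from your schema):

```perl
use strict;
use warnings;
use Storable qw( freeze );

# Hypothetical column width -- look up the real one in your
# schema (e.g. "DESCRIBE table" in MySQL).
my $col_width = 255;

my $orig   = { some => 'data', that => [ 'blew', 'up' ] };
my $frozen = freeze($orig);

# If the frozen string is wider than the column, the database
# will silently truncate it, and thaw() will fail later.
if ( length($frozen) > $col_width ) {
    warn sprintf "frozen data is %d bytes, column holds %d: "
               . "it would be truncated\n",
               length($frozen), $col_width;
}
```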
So, how to fix it? You will need to export those values
in a portable way on the first machine, then import
them on the new machine. Data::Dumper
is probably good enough for you here; it is slower
than Storable, but you should only have
to do it once. (Note that just doing a database-level
export/import won't help you, if the problem is where
I think it is.)
I can't give you code for how to do this without
knowing how your database is set up. If you have
fewer than about 10K rows, though, you can probably
get away with using Data::Dumper to
spit out each table row in a file named by a primary
id number:
use Data::Dumper;

my $dumper = Data::Dumper->new( [] );
$dumper->Terse(1);     # no '$VAR1 =' prefix, so the output evals directly
$dumper->Indent(0);    # whole row on one line

my $sth = $dbh->prepare( 'SELECT * FROM table' );
$sth->execute();

# open a file to track all ids
my $id_file_name = 'all-ids.txt';
open my $id_fh, "> $id_file_name"
    or die "opening $id_file_name: $!";

# array position of primary key
my $id_ix = 0;

while ( my $cur = $sth->fetch() )
{
    my $id = $cur->[$id_ix];
    open my $fh, "> $id"
        or do { warn "opening $id: $!"; next };
    # reset the seen-reference table, then dump this row
    $dumper->Reset()->Values( [ $cur ] );
    print $fh $dumper->Dump();
    close $fh or warn "closing $id: $!";
    print $id_fh "$id\n";
}

close $id_fh
    or die "closing $id_file_name: $!";
$sth->finish();
And you can then re-import them like so:
# open a file to track all ids
my $id_file_name = 'all-ids.txt';
open my $id_fh, $id_file_name
    or die "opening $id_file_name: $!";
my @ids = map { s/\s+$//; $_ } <$id_fh>;
close $id_fh or die "closing $id_file_name: $!";

my $n_cols = XXX;   # how many columns in "table"?
my $places = join ',', ('?') x $n_cols;
my $ins = $dbh->prepare( "INSERT INTO table" .
                         " VALUES ( $places )" );

local $/ = undef;   # slurp each dump file whole

foreach my $id ( @ids )
{
    open my $fh, $id
        or do { warn "opening $id: $!"; next };
    my $str = <$fh>;
    close $fh
        or do { warn "closing $id: $!"; next };

    my $aref = eval $str;
    if ( !$aref || $@ )
    {
        warn "$id: eval failed (str='$str')";
        next;
    }
    $ins->execute( @$aref );
}
If you have up to about a million records, you can
scale the above by making subdirs out of the last
few digits of the primary id (which I'm assuming is
an integer; if not, write out an integer counter to
the "all-ids.txt" file instead, and use that to name
the files). Something like:
my $subdir = sprintf '%03d', $id % 1000;
unless ( -d $subdir ) { mkdir $subdir, 0777; }
my $file = "$subdir/$id";
Up to a million rows, this will keep any subdirectory
smaller than 1000 files, which should still be manageable.
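A quick sanity check of that bucketing scheme (the helper
name here is mine, purely for illustration):

```perl
use strict;
use warnings;

# Bucket each id into one of 1000 subdirectories by its last
# three digits, so no directory holds more than ~1000 files
# for ids up to a million.
sub bucket_path {
    my ($id) = @_;
    my $subdir = sprintf '%03d', $id % 1000;
    return "$subdir/$id";
}

print bucket_path(1234567), "\n";   # 567/1234567
print bucket_path(42),      "\n";   # 042/42
```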
Thanks again for your response. I was running the script on the exact same server, just a different account (which has since been deleted). I have three scripts similar to the one below. The first two work properly, but for some reason this one is giving me problems. Is there any way to just rename the database/column it is writing to, instead of trying to add to or update the current corrupted one?
package MT::Plugin::NetflixHistory;
use strict;
use MT::Template::Context;
use LWP::UserAgent;
use HTTP::Cookies;

MT::Template::Context->add_container_tag('NetflixHistory', \&queue);
MT::Template::Context->add_tag('NetflixHistoryURL', \&queue_info);
MT::Template::Context->add_tag('NetflixHistoryImage', \&queue_info);
MT::Template::Context->add_tag('NetflixHistoryTitle', \&queue_info);
MT::Template::Context->add_tag('NetflixHistoryShipDate', \&queue_info);
MT::Template::Context->add_tag('NetflixHistoryReturnDate', \&queue_info);

my $queue = 'http://www.netflix.com/RentalActivity';

sub queue {
    my ($ctx, $args) = @_;
    my $session_id = $args->{session_id};
    my $lastn = defined($args->{lastn}) ? $args->{lastn} : 0;
    my $noparms = 0;
    if (!$lastn) {
        $noparms = 1;
    }
    my $jar = HTTP::Cookies->new;
    $jar->set_cookie(0, 'NetflixShopperId', $session_id, '/', '.netflix.com');
    my $req = HTTP::Request->new(GET => $queue);
    $jar->add_cookie_header($req);
    my $ua = LWP::UserAgent->new;
    $ua->agent('NetflixWebServices/1.0');
    my $res = $ua->request($req);
    return $ctx->error("HTTP error: " . $res->status_line)
        unless $res->is_success;
    my $builder = $ctx->stash('builder');
    my $tokens = $ctx->stash('tokens');
    my $c = $res->content;
    my $out = '';
    $c =~ m!Returned</div>(.*?)</body>!sg;
    $c = $1;
    while ($c =~ m!"buf"(.*?)"brdr"></td>!sg) {
        my $rec = $1;
        my @rec = $rec =~ m!eid=(.*?)&trk!sg;
        my $url = "http://www.netflix.com/MovieDisplay?" . "movieid=" . $1;
        $rec[4] = "http://a408.g.akamai.net/f/408/1284/24h/image.netflix.com/NetFlix_Assets/boxshots/small/" . $1 . ".jpg";
        $rec =~ m!jr=5">(.*?)</a>.*?ter>(.*?)</td>.*?ter>(.*?)</td>!sg;
        my $title = $1;
        $rec[2] = $2;
        $rec[3] = $3;
        my $f = 0;
        if ($rec[3]) {
            if ($rec[3] eq "-") {
                $c =~ m!"buf"(.*?)ght=2></td>!sg;
                $rec = $1;
                $rec =~ m!on(.*?)\. <br>!sg;
                $rec[3] = $1;
                $f = 1;
            }
        }
        if ($f) { next; }
        if ($lastn || $noparms) {
            if ($title && $rec[2] && $rec[3] && $rec[4]) {
                my $entry = { title => $title, url => $url, shipdate => $rec[2],
                              returndate => $rec[3], image => $rec[4] };
                $ctx->stash('netflix_history_entry', $entry);
                defined(my $o = $builder->build($ctx, $tokens))
                    or return $ctx->error($builder->errstr);
                $out .= $o;
                if ($lastn) { --$lastn; }
            }
        }
    }
    $out;
}

sub queue_info {
    my $n = $_[0]->stash('netflix_history_entry');
    (my $tag = lc $_[0]->stash('tag')) =~ s!^netflixhistory!!;
    $n->{$tag};
}

1;
Thanks,
John