yosefm has asked for the wisdom of the Perl Monks concerning the following question:

I have a function that writes a record to a database by calling two other functions: the first creates a hash of the data to be inserted, and the second assembles an SQL statement from that hash. The statement is then executed via $dbh->do.

The top-level function is:

    sub writeElems {
        my ($tbl, $prm, $dbh) = (shift, shift, shift); # table name, CGI object, DBI connection
        my %valid = preinsert($prm, $tbl);             # Creates the hash
        if (%valid) { $dbh->do(insertstr($tbl, %valid)); } # Creates and execs the statement
        return %valid;
    }

This one gets the parameters:

    # A sub to take CGI parameters, untaint and validate them.
    # Returns: a hash with ready-to-insert data,
    #          or undef if any field fails to validate or untaint;
    # Arguments: a CGI object, the name of a table to prepare.
    sub preinsert {
        my $page = shift;
        my $tbl  = shift;
        my (%fields, %retval);

        foreach ($page->param) {
            if ($_ =~ /^$tbl\./) {
                s/^($tbl\.)//;
                $fields{$_} = $page->param("$tbl.$_");
            }
        }

        # There's a table XML descriptor, and it's just fine.
        my $dsc = XMLin(M_LIB."/$tbl.descriptor", ForceArray => ['field']);

        foreach my $fieldref (keys %{$dsc->{field}}) {
            my %tags = %{$dsc->{field}->{$fieldref}};
            my $untaint;

            return undef if ($fields{$fieldref} !~ /$tags{untaint}/);
            $untaint = $1;
            print STDERR "recieved $1\n";

            if ($untaint =~ /$tags{validate}/) {
                print STDERR "transmitted", ($retval{$fieldref} = $untaint), "\n";
            } else {
                print STDERR "Invalid data";
                return undef;
            }
        }
        return %retval;
    }

And the statement is constructed with:

    # Prepares the insert statement string, using results of preinsert.
    sub insertstr {
        my $tbl    = shift;
        my %fields = @_;
        my $str    = "insert into $tbl set";

        foreach (keys %fields) {
            print STDERR "$fields{$_}\n";
            $str .= " $_=\'$fields{$_}\',";
        }

        # watch this print:
        chop($str);
        print STDERR "$str\n";
        return $str.";";
    }

No problem, right? Well, with English input there's no problem.

Now, I call it twice, like this:

    if (scalar($page->param) > 1) {
        $Xtable = getXTableName($mtype, $dbh);
        $page->param('items.media_type', $mtype);

        # This is it:
        writeElems('items', $page, $dbh);
        writeElems($Xtable, $page, $dbh) if $Xtable;

        $page->delete('items.media_type');
    }

The parameters are Hebrew, in the regular windows-1255 encoding.

The first call works perfectly.

Now the weird stuff: the second call gives garbage, but only after the concatenation in &insertstr. It must be Unicode, because the result is exactly twice the number of chars, and I've been there before...
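
One way to confirm that guess (this check is not in the original code, and it assumes perl 5.8 with the Encode module; $str stands for the statement built up in insertstr):

    use Encode ();
    my $chars = length($str);
    my $bytes = do { use bytes; length($str) };   # byte length, regardless of the utf8 flag
    print STDERR "chars=$chars bytes=$bytes utf8-flag=",
                 (Encode::is_utf8($str) ? "on" : "off"), "\n";

If the flag is on and the byte count is roughly double the character count, the string has been upgraded to Perl's internal UTF-8 form somewhere along the way.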

Why only the second time around? Why me? What did I do wrong? Can't figure that out.

Thanks for your help.

Re: Unicode Problems with DBI?
by chromatic (Archbishop) on Jul 01, 2003 at 19:25 UTC

    I mistrust your quoting scheme, though I know of no reason it'd do the wrong thing with Unicode. Perhaps it's the chop at the end. I have an article about better ways to use DBI. (Okay, I just wanted to say "I have an article on that.")

Re: Unicode Problems with DBI?
by graff (Chancellor) on Jul 02, 2003 at 03:03 UTC
    If you say this works fine for English, I guess I'll take your word for that -- though I don't really get this part of the "preinsert" sub:
        return undef if ($fields{$fieldref} !~ /$tags{untaint}/);
        $untaint = $1;
        print STDERR "recieved $1\n";
    Just what does "$1" contain at this point? I don't see any parens in the regex, so I would assume that it isn't assigning anything to $1.

    Anyway, it's not clear that the resulting garbage has anything to do with Unicode. You say the input is "regular win-1255 encoding", but in what form? Parameters containing bytes with the 8th bit set -- e.g. the single cp1255 byte for "final tsadi", which a latin1 user would see as "õ" (o with tilde)? I'm sure it wouldn't be anything like "ץ", the Unicode character for final tsadi. For that matter, is there any chance that you are actually getting a "\xFE" with each Hebrew character or each Hebrew word token? (That's the cp1255 code for the "right-to-left mark".)
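
    One quick way to see what is actually arriving is to dump a parameter's bytes in hex -- a diagnostic sketch on my part, not something from the post, and 'items.name' is an invented field name (substitute one of the real Hebrew-valued parameters):

        my $val = $page->param('items.name');
        print STDERR join(' ', map { sprintf '%02X', ord } split(//, $val)), "\n";

    Single bytes in the E0-FA range would point to raw cp1255; pairs like "D7 xx" would point to UTF-8.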

    When you say "There's a table XML descriptor, and it's just fine," what does that mean? What evidence makes you sure that Hebrew characters come out of this "just fine"?

    What version of Perl is being used here? (5.6 or 5.8?) In what OS environment? (These things do make a difference in terms of behavior relating to unicode.)

    BTW, I fully agree with the earlier reply about quoting. You should be using a parameterized sql statement (i.e. with "?" where the string values should go), a "prepare" step, and an "execute" with the values provided to fill in the "?" placeholders.
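
    Something along these lines, using the %fields hash that preinsert returns -- a sketch to illustrate the idea, not code from the thread:

        my @cols = sort keys %fields;
        my $sql  = "insert into $tbl (" . join(',', @cols) . ") values ("
                 . join(',', ('?') x @cols) . ")";
        my $sth  = $dbh->prepare($sql);
        $sth->execute(@fields{@cols});   # DBI fills in the placeholders and quotes the values

    That way DBI handles the quoting, and the trailing-comma/chop dance goes away entirely.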

    Update: changed the "rendering" of Hebrew characters. Also wanted to add that even in 5.8, the DBI module may be somewhat "external to" (not fully tied in with) Perl's internal Unicode character storage; e.g. if you have a database containing utf8 character data and fetch that data into perl 5.8 using DBI, Perl won't necessarily recognize it as utf8 unless/until you "decode" the string as utf8 data.
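
    For example (the table and column names here are invented, just to show the decode step):

        use Encode qw(decode);
        my ($raw) = $dbh->selectrow_array(
            "select name from some_table where id = ?", undef, $id);
        my $name  = decode('utf8', $raw);   # now flagged as character data internally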

      Thanks for your reply. I'm sorry I didn't explain everything in my code, because it's somewhat complicated (e.g. the parentheses for the regexp are in the XML, etc).
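
      In other words, assuming a descriptor pattern along these lines (the pattern itself is made up for the example), the capture in preinsert does get set:

          # hypothetical pattern pulled from the XML descriptor:
          $tags{untaint} = '^([\w\s.,-]+)$';
          # the parens live inside the pattern, so this match does set $1:
          $fields{$fieldref} =~ /$tags{untaint}/ and $untaint = $1;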

      I assure you that I debugged the script till blood came out of my ears, so trust me when I say 'this works fine'.

      The solution I finally found is below, but thanks for making me aware of some new unknown (to me) ways of making a localizer's life harder... ;-)

Re: Unicode Problems with DBI?
by yosefm (Friar) on Jul 02, 2003 at 10:48 UTC
    Okay, I found the solution, for anyone interested:

    chromatic was right, it was the chop. From perlunicode:

    Most operators that deal with positions or lengths in the string will automatically switch to using character positions, including chop(), substr(), pos(), index(), rindex(), sprintf(), write(), and length().

    Which means that chop automatically switches to wide characters and translates the whole deal to Unicode.

    The solution was to put a use bytes at the beginning of the sub insertstr (which is where all the trouble was coming from).
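
    For reference, the change in place looks roughly like this (debug prints trimmed; a paraphrase of the fix rather than the exact final code):

        sub insertstr {
            use bytes;   # force byte semantics inside this sub (the fix)
            my $tbl    = shift;
            my %fields = @_;
            my $str    = "insert into $tbl set";
            foreach (keys %fields) {
                $str .= " $_=\'$fields{$_}\',";
            }
            chop($str);              # drop the trailing comma
            return $str.";";
        }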