I have a big application, all written in Perl. It uses a MySQL database and in some cases saves Perl structures to the database. These structures are dumped with Data::Dumper and restored with eval.
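Concretely, the save and restore path looks roughly like this (the table and column names perl_store, id and dump_text are made up for this illustration, and error handling is omitted):

#!/usr/bin/perl
use strict;
use warnings;
use utf8;
use DBI;
use Data::Dumper;

# Illustrative connection only; mysql_enable_utf8 turns on UTF-8 handling in DBD::mysql.
my $dbh = DBI->connect(
    'DBI:mysql:database=app;host=localhost', 'user', 'password',
    { RaiseError => 1, mysql_enable_utf8 => 1 },
);

# Saving: serialize the Perl structure as text and store it.
# perl_store / dump_text are illustrative names, not the real schema.
my $structure = { name => 'Müller', tags => [ 'ä', 'ö', 'ü' ] };
my $dump      = Dumper( $structure );
$dbh->do( 'INSERT INTO perl_store (id, dump_text) VALUES (?, ?)',
          undef, 42, $dump );

# Restoring: fetch the text and rebuild the structure with eval.
my ( $text ) = $dbh->selectrow_array(
    'SELECT dump_text FROM perl_store WHERE id = ?', undef, 42 );
our $VAR1;
eval $text;
my $restored = $VAR1;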
Everything is in Unicode: program sources, data files, database. Each program and module begins with use utf8. Non-ASCII Unicode data is handled without problems, except in one case: data that is dumped with Data::Dumper, then restored with eval and then written back to the database (this time without Dumper). Such data is corrupted.
After some research I finally found out that data restored with eval loses its utf8 flag. All bytes are correct, but Perl does not know that they are UTF-8 encoded. DBI then fails when it tries to include such data in SQL statements.
Unfortunately the application is quite big. What is the simplest way to fix this problem? All I need is a way to tell Perl that strings generated by the eval are in Unicode (Useqq does not solve it).
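For illustration, the kind of workaround I have in mind would be something along these lines (an untested sketch; fix_utf8 is just a name I made up, and hash keys and blessed references are not handled):

# Untested idea: after the eval, mark the restored bytes as UTF-8.
# fix_utf8 is a hypothetical helper, not something the application has today.
sub fix_utf8 {
    for my $item ( @_ ) {                        # foreach aliases, so changes stick
        if    ( ref $item eq 'ARRAY' ) { fix_utf8( @$item ) }
        elsif ( ref $item eq 'HASH'  ) { fix_utf8( values %$item ) }
        elsif ( !ref $item && defined $item ) {
            utf8::decode( $item ) unless utf8::is_utf8( $item );
        }
    }
}

our $VAR1;
eval $dump;            # $dump as read back from the database
fix_utf8( $VAR1 );     # now the strings carry the utf8 flag again

But walking every restored structure like this feels heavy-handed, hence my question whether there is a simpler switch.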
This program demonstrates the problem:
#!/usr/bin/perl -w
use strict;
use utf8;
use Data::Dumper;

# $Data::Dumper::Useqq = 1;

binmode STDOUT, 'utf8';

our $VAR1;

my $data = 'ä';    # this is a non-ASCII a-umlaut
my $dump = Dumper( $data );

eval $dump;

if ( $data eq $VAR1 ) {
    print " == equal\n";
}
else {
    print " != not equal\n";
}

print $dump, "\n";
print Dumper( $VAR1 ), "\n";

print "original is utf8 = '" . utf8::is_utf8( $data ) . "'\n";
print "restored is utf8 = '" . utf8::is_utf8( $VAR1 ) . "'\n";
Output is
== equal
$VAR1 = "\x{e4}";
$VAR1 = 'ä';
original is utf8 = '1'
restored is utf8 = ''

PS: This is on Mac OS X 10.5.8 with Perl v5.8.8 built for darwin-thread-multi-2level.