A few days ago I wrote the program pasted below, which basically takes hashes and builds an INSERT statement out of them.

Update: Upon re-reading your question, I realize that my program misses the point, as you want to do UPDATE statements, not INSERT statements. That would be easy if you have the column names and know which column is to be used as the primary key. My program does not know about that, though, sorry.
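For what it's worth, the same placeholder approach would carry over to UPDATE. Here is a minimal sketch, reusing $dbh, $table, $data and @columns from the program below and assuming a hypothetical $pk_column that holds the name of the primary key column (my program has no such option):

# Hypothetical: $pk_column names the primary key column
# (@columns is assumed not to include the primary key column itself)
my $assignments = join ",", map { qq("$_" = ?) } @columns;
my $sql_update  = qq(update "$table" set $assignments where "$pk_column" = ?);
my $sth_update  = $dbh->prepare( $sql_update );

for my $row (@$data) {
    # Bind the new values in @columns order, then the primary key value for the WHERE clause
    $sth_update->execute( @{$row}{ @columns }, $row->{ $pk_column } );
};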

In your case, the interesting parts are where the program constructs the INSERT statement from a hash. It uses placeholders and builds the list of bound values from the hash in the order of the column names. I recommend always using column names and never using positional INSERT statements, as the column order of a database table can change when columns are dropped or renamed and columns with the same names are later added back.

my $data = [
    { col1 => 'data1', col3 => 'data3' },
    { col2 => 'data2', col3 => 'data3plus' },
];

if( ! @columns ) {
    # Collect all (potential) columns we want to write
    my %columns;
    for( @$data ) {
        $columns{ $_ } = 1 for keys %$_;
    }
    @columns = sort keys %columns;
};

my $columns      = join ",", map { qq("$_") } @columns;
my $placeholders = join ",", ("?") x @columns;

# Run the insert statement for each row in our list
my $sql_insert = qq(insert into "$table" ($columns) values ($placeholders));
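To spell out the binding step (this per-row loop is just my illustration; the full program below uses DBI's batch interface instead): each row's values are taken from the hash as a slice in exactly the order of @columns, so a column missing from a row simply becomes undef and is inserted as NULL.

# For the two example rows above, @columns is (col1, col2, col3) and
# $sql_insert becomes: insert into "$table" ("col1","col2","col3") values (?,?,?)
my $sth = $dbh->prepare( $sql_insert );
for my $row (@$data) {
    # Hash slice: values in @columns order; absent columns yield undef (NULL)
    $sth->execute( @{$row}{ @columns } );
};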

The full program is here:

#!perl -w
use strict;
use DBI;
use JSON;
use Getopt::Long;
use Pod::Usage;
use Path::Class 'file';

GetOptions(
    'table:s'    => \my $table,
    'dsn:s'      => \my $dsn,
    'user:s'     => \my $user,
    'password:s' => \my $password,
    'file:s'     => \my $json_file,
    'json:s'     => \my $json_text,
    'columns:s'  => \my @columns,
) or pod2usage(2);

my $data;
if( $json_file ) {
    $data = decode_json( file( $json_file )->slurp( iomode => '<:raw' ));
} elsif( $json_text ) {
    $data = decode_json( $json_text );
} else {
    binmode STDIN, ':raw';
    local $/;
    $data = decode_json( <STDIN> );
};

if( ref $data eq 'HASH' ) {
    # Convert single row to a set of rows
    $data = [$data];
};

if( ! @columns ) {
    # Collect all (potential) columns we want to write
    my %columns;
    for( @$data ) {
        $columns{ $_ } = 1 for keys %$_;
    }
    @columns = sort keys %columns;
};

my $columns      = join ",", map { qq("$_") } @columns;
my $placeholders = join ",", ("?") x @columns;

# Run the insert statement for each row in our list
my $sql_insert = qq(insert into "$table" ($columns) values ($placeholders));

my $dbh = DBI->connect( $dsn, $user, $password, { RaiseError => 1, PrintError => 0 });
my $sth_insert = $dbh->prepare( $sql_insert );

if( ! $sth_insert->execute_for_fetch( sub {
        my $item = shift @$data;
        if( $item ) {
            # Pull the values out of the hash in @columns order
            my $res = [ @{$item}{ @columns } ];
            return $res;
        } else {
            return $item;
        }
    }, \my @results )) {
    for my $err (grep { ref $_ } @results) {
        warn $err->[1];
    };
};
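A note on the design: execute_for_fetch() is DBI's bulk-execute interface. It keeps calling the supplied code reference for the next row until that returns undef, and fills the array behind \my @results with one status entry per row; entries for failed rows are array references with the error string in the second element, which is what the final loop warns about. As I read your setup, you would pass the connection details via --dsn, --user and --password, name the target table with --table, and supply the rows as JSON via --file, --json or on STDIN.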

In reply to Re^2: DBI Update dynamically by Corion
in thread DBI Update dynamically by Anonymous Monk
