in reply to oracle temp table does not exist

I am not an expert on the DBI, but I do occasionally use it in conjunction with SQLite and, less frequently, MySQL. I don't know any Oracle specifics. However, I have a few suggestions that are generally applicable.

First, relegate error handling to the DBI instead of using "or die" messages within your code.
I would also turn AutoCommit on, if it is not on by default (it is in SQLite, but I don't know about Oracle).
An anon hash can be specified in the connect, like this:

my $dbh = DBI->connect($dsn, $username, $password, {RaiseError => 1, AutoCommit => 1}) or die $DBI::errstr;
You will not need $dbh->commit(); after a "do" - the commit will happen automatically.
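For example, with AutoCommit on, a plain "do" stands on its own (this is just a sketch; the UPDATE and the bind values are stand-ins):

# AutoCommit => 1 means this statement is committed for you;
# no $dbh->commit() needed afterwards.
$dbh->do("UPDATE HR.Tempest SET DEPT = ? WHERE ID = ?", undef, $dept, $id);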

When you want an explicit transaction, override the auto commit by starting a transaction explicitly with $dbh->begin_work. After the work is done, call $dbh->commit. This way it is very clear within your code exactly what work is within that particular transaction.
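A minimal sketch of that pattern (the work inside the eval is a placeholder):

$dbh->begin_work;              # suspends AutoCommit for this unit of work
eval {
    # ... many inserts/updates go here ...
    $dbh->commit;              # everything above becomes one transaction
};
if ($@) {
    warn "transaction failed: $@";
    $dbh->rollback;            # undo the partial work
}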

Use placeholders in your insert instead of continually "preparing" the statement again and again within a "do". The DB can wind up doing quite a bit of work in order to "prepare". Note that a "prepare" is specific to a particular table, meaning that the table name cannot be a placeholder variable within a prepared statement.

my $insert1 = $dbh->prepare("INSERT INTO HR.Tempest (ID, NAME, TITLE, DEPT) VALUES (?,?,?,?)");
for (...whatever...) {
    $insert1->execute($id, $name, $title, $dept);
}

When you say the program "continues to throw 'table or view does not exist'" at runtime, that sounds like something is wrong with your temporary table.

I would start by massively dumbing this thing down. Make a very simple temp table with perhaps just a single column. Put a few rows in it. Then see if you can read and print that table out. If that doesn't work and you can't make it work, then you have a simple and direct question to pose to the Monks.
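Something along these lines would do as a sketch (HR.Tiny is a made-up one-column table; create it however temp tables get created on your Oracle setup):

my $ins = $dbh->prepare("INSERT INTO HR.Tiny (ID) VALUES (?)");
$ins->execute($_) for (1 .. 3);      # a few rows is plenty for a sanity check

my $sel = $dbh->prepare("SELECT * FROM HR.Tiny");
$sel->execute();
while (my @row = $sel->fetchrow_array()) {
    print "@row\n";
}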

If your temp table is pretty big, then making its creation a single transaction will speed things up a lot. A main limit in DBs is the number of transactions per second, and a single transaction can contain millions of operations. Don't worry about this just to get your code working; after it is working, then slap a begin_work before and a commit after the creation work.

Update: Well, now that I look more closely:

my $resultset = $dbh->prepare("Select * from Tempest");
while (@row = $resultset->fetchrow_array()) {
    print "@row\n\n";
}
Should be:
my $resultset = $dbh->prepare("Select * from Hr.Tempest");
$resultset->execute();
while (@row = $resultset->fetchrow_array()) {
    print "@row\n\n";
}
Update2:
As it happened, a new SQL project appeared on my desk, and I did follow my own advice about transactions. I thought I'd emphasize this with my actual code below. As shown, each year's data goes into a separate table. Each year takes about 7-8 seconds for something around 1M inserts, so for 3 years of data the program runs in less than 30 seconds, which is just fine. I commented out the transaction stuff with the intention of quoting a benchmark time difference, but the code ran all night without even processing the first directory! However, I can say that the difference between good transaction management and none is seconds versus literally days!
foreach my $year (sort keys %yr2dir) {
    create_table($dbh, $year);
    $dbh->begin_work;
    populate_table($dbh, $year, $yr2dir{$year});
    $dbh->commit;
}
I chose to give the transaction high visibility. I could have put these statements into the populate_table sub; perhaps some Monks would do it that way? I don't know. A single transaction could be run for all 3 tables, but that would result in no gain. In my testing with large table creation, the "sweet spot" is about 250K inserts; making a transaction bigger than that has no significant impact on execution time. For something like this I don't need any transaction rollback code. If something is wrong with one of the ~10,000 input files, I'll just fix that file and run the whole program again.
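If you ever do want to chase that sweet spot, a rough sketch is to commit every N rows ($insert1 is the prepared insert from above; get_next_row() is a hypothetical source of rows):

my $batch   = 250_000;                  # approximate sweet spot from my testing
my $pending = 0;
$dbh->begin_work;
while (my @fields = get_next_row()) {   # hypothetical reader of the input files
    $insert1->execute(@fields);
    if (++$pending >= $batch) {
        $dbh->commit;                   # close out this batch...
        $dbh->begin_work;               # ...and start the next one
        $pending = 0;
    }
}
$dbh->commit;                           # commit the final partial batch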

If you turn auto commit off, you wind up having to put in explicit commit statements even for relatively quick operations like create_table. My preference is to leave auto commit on (and I do think that is usually the default) and explicitly override it for "important operations".

Update3:
This is off topic, but folks who work with large tables may find this interesting. With SQLite, I have found that when importing data to build an entire table, it is faster to create the table first without indices and then create the indices (if any) later. SQLite "likes" a lot of memory for index creation. You cannot vary the memory footprint of a Perl process, but you can and should vary the memory footprint of SQLite when appropriate. I have tested this and it works: giving SQLite, say, an extra 500 MB of memory during index creation will speed things up a lot. When that is over, you can give that memory back to the O/S.
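Roughly, the pattern looks like this (table and index names are made up; cache_size takes a negative number meaning KiB, so -500000 is about 500 MB):

# bulk-load big_table first with no indices defined, then:
$dbh->do("PRAGMA cache_size = -500000");                    # ~500 MB while building the index
$dbh->do("CREATE INDEX idx_big_name ON big_table (name)");
$dbh->do("PRAGMA cache_size = -2000");                      # shrink back to a small cache afterwards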