Re: Is it possible to run SQL select in Oracle and SQL Server to get a large recordset and import it straight into an array, looking to avoid using rownext
by kcott (Archbishop) on Nov 25, 2013 at 05:11 UTC
G'day newbieperlperson,
It sounds like the selectall_arrayref() method of DBI is what you want.
I suggest you read through the DBI documentation to learn about the various methods, attributes and functions it provides.
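A minimal sketch of what that looks like (the DSN, credentials, table and column names here are placeholders, not anything from the original question):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Connect to Oracle; substitute your own DSN, user and password.
my $dbh = DBI->connect( 'dbi:Oracle:host=dbhost;sid=ORCL', 'user', 'pass',
    { RaiseError => 1, AutoCommit => 1 } );

# Pull the whole result set in one call: returns a reference to an
# array of array references, one per row.
my $rows = $dbh->selectall_arrayref(
    'SELECT id, name FROM some_table WHERE status = ?',
    undef,       # no statement attributes
    'ACTIVE',    # bind value for the placeholder
);

for my $row (@$rows) {
    my ( $id, $name ) = @$row;
    print "$id: $name\n";
}

$dbh->disconnect;
```

Note that for very large result sets this loads everything into memory at once, which is exactly what you asked for, but worth keeping in mind.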
Thank you so much, Ken. I will try this out as soon as I get into work; from looking at the method, it looks like it is the solution.
I sincerely appreciate you taking the time to assist me, and I hope one day I can pay it back.
Regards,
AJ
Re: Is it possible to run SQL select in Oracle and SQL Server to get a large recordset and import it straight into an array, looking to avoid using rownext
by einhverfr (Friar) on Nov 25, 2013 at 06:30 UTC
In addition to the suggestion of the selectall_arrayref method (or fetchall_arrayref/fetchall_hashref), I am going to suggest the possibility of making this someone else's problem. In your case it sounds like you may be joining two result sets across databases. If it is just a union, go ahead and do it all in Perl; but if it is a join or anything more complex, avoid doing it in Perl if you can. Databases usually handle this task better than anything you can put together.
SQL Server supports linked servers over ODBC, so it can pull the data from Oracle, join it with its own data, and give you a single result. Oracle supports the same over OCI. So if you need SQL-like logic linking the data sets, you are best off doing it this way.
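For illustration, here is a hedged sketch of the linked-server approach from the Perl side: the query runs entirely on SQL Server, which uses OPENQUERY to fetch the remote rows from a linked Oracle server and performs the join itself. The linked-server name (ORA_LINK), tables and columns are all hypothetical:

```perl
# $dbh is an already-connected DBI handle to SQL Server (e.g. via DBD::ODBC).
# SQL Server sends the inner query to the linked Oracle server, then joins
# the remote rows with its local table before returning the combined result.
my $sql = q{
    SELECT o.order_id, c.customer_name
    FROM   dbo.orders AS o
    JOIN   OPENQUERY(ORA_LINK,
             'SELECT order_id, customer_name FROM customers') AS c
           ON c.order_id = o.order_id
};
my $joined = $dbh->selectall_arrayref($sql);
```

Perl then receives only the already-joined rows, instead of two large result sets it would have to match up itself.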
I mention this because there are plenty of cases where you really want to do the work in the database instead of retrieving a lot of rows you will then have to process yourself. Databases have fine-tuned algorithms for these sorts of operations, and the ability to choose between them.
Re: Is it possible to run SQL select in Oracle ... tl;dr
by roboticus (Chancellor) on Nov 25, 2013 at 15:26 UTC
newbieperlperson:
You've already gotten some good answers (e.g., selectall_arrayref), so I won't elaborate on that.
However, your claim that retrieving 8,000 rows one at a time will take forever doesn't seem reasonable. I do it all the time, and it doesn't really take much time at all. I'm wondering whether you're optimizing prematurely, since you mention that you're still planning some items. I frequently pull tens or hundreds of thousands of rows at a time from our server, generally using the slower fetchrow_hashref, which is normally quite fast enough.
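The row-at-a-time style looks like this (table and column names are made up for the sketch; note that with DBD::Oracle the hash keys default to uppercase column names):

```perl
# $dbh is an already-connected DBI handle.
my $sth = $dbh->prepare('SELECT id, name, amount FROM some_table');
$sth->execute;

# fetchrow_hashref returns one hashref per row, keyed by column name.
# Building a hash per row makes it slower than the array fetches, but
# the self-documenting $row->{COLUMN} access is often worth it.
while ( my $row = $sth->fetchrow_hashref ) {
    printf "%s %s %.2f\n", $row->{ID}, $row->{NAME}, $row->{AMOUNT};
}
```

If this loop is genuinely slow for 8,000 rows, the bottleneck is almost certainly the query or the connection, not the fetch style.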
I think you may be overthinking the problem, rather than solving it in a straightforward way and improving the solution if and when it proves insufficient.
Note: I tried to link to the specific documentation for fetchrow_hashref in DBI, but can't seem to figure out how to link it nicely.
Update: I gave up and hardcoded a link.
...roboticus
When your only tool is a hammer, all problems look like your thumb.
Re: Is it possible to run SQL select in Oracle and SQL Server to get a large recordset and import it straight into an array, looking to avoid using rownext
by Jenda (Abbot) on Nov 25, 2013 at 13:39 UTC
Re: Is it possible to run SQL select in Oracle and SQL Server to get a large recordset and import it straight into an array, looking to avoid using rownext
by locked_user sundialsvc4 (Abbot) on Nov 25, 2013 at 16:50 UTC
Also ... what immediately pops into my head is this:
“Gee, ‘6000 to 8000 rows’ isn’t that much data ... not at all. Especially not for Oracle. So why is it ‘taking forever’? Perhaps this is the true root cause of the problem. I smell a dead mouse somewhere ...”
Sure, there are DBI routines that will “fetch an array,” but if there is some kind of bottleneck going on here, those would probably take forever, too. Consider, therefore, that something about your algorithm, or the exact way you are connecting to Oracle and so on, might be causing an extreme but altogether unjustified slowdown.
Great example of this (from another context): I was using SQLite basically for the first time, and it was dog slow. Knowing SQLite’s reputation for speed, I didn’t know what I was doing wrong ... until I saw that disk I/O was going through-the-roof for this process ... especially writes. It turns out that, with this system, you basically must use transactions, even for reads. So I made this change, and whoosh!! No amount of “diddling with my code” would have made any difference at all. Perhaps a similar thing is going on here ... and, mind you, it could in this case very well have something to do with transactions, isolation-levels / repeatable-read, and so forth. Oracle, like all pieces of complex software, “makes certain assumptions,” such that you might not know that they are happening but that might have a profound impact on your software’s performance.
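The SQLite fix described above amounts to batching many statements into one explicit transaction instead of letting each run in its own implicit one. A hedged sketch using DBD::SQLite (the table and file name are hypothetical):

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( 'dbi:SQLite:dbname=test.db', '', '',
    { RaiseError => 1, AutoCommit => 1 } );

$dbh->do('CREATE TABLE IF NOT EXISTS t (n INTEGER)');

# Without this, each INSERT is its own transaction and forces its own
# sync to disk; wrapping the batch means one commit for all 10,000 rows.
$dbh->begin_work;
my $ins = $dbh->prepare('INSERT INTO t (n) VALUES (?)');
$ins->execute($_) for 1 .. 10_000;
$dbh->commit;
```

The same principle, hidden settings with a profound performance impact, is what sundialsvc4 is suggesting you look for on the Oracle side.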
Right. A simple SELECT with no complex WHERE clause and no LOBs, returning "just" 8,000 rows, should complete almost instantly on Oracle. If it doesn't, start by issuing the same SQL statement in SQL*Plus or a similar client; if that is also slow, consult your Oracle admin.
About fifteen years ago, I filled an Oracle table with a few thousand rows of simple address data (id, name, street, town, phone, email), and the subsequent "SELECT * FROM $table" took several minutes. My Oracle guru just muttered "primary key?", and of course my CREATE TABLE lacked a PRIMARY KEY on the id column, so Oracle had to do a full table scan for each record I selected. Adding the primary key to the id column reduced the time required to a few milliseconds.
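For illustration, the corrected DDL from that story would look something like this (column sizes are guesses, and the PRIMARY KEY constraint is the part that matters, since it gives Oracle a unique index on id):

```perl
# $dbh is a connected DBI handle to Oracle. Declaring the primary key
# creates the index that turns per-id lookups from full table scans
# into indexed reads.
$dbh->do(q{
    CREATE TABLE addresses (
        id     NUMBER        PRIMARY KEY,
        name   VARCHAR2(100),
        street VARCHAR2(100),
        town   VARCHAR2(100),
        phone  VARCHAR2(30),
        email  VARCHAR2(100)
    )
});
```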
Alexander
--
Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)