Typically I write each string to a sqlldr data file, separated by a delimiter, and then when finished I load the entire file. I would like to send each line in my script to sqlldr using stdin instead. Currently, after I have finished writing to the sqlldr file, I just call the sqlldr command...
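For reference, here is roughly what I do now (the file names, login, and records are made up for this example):

<code>
use strict;
use warnings;

my @rows = ('a|b|c', 'd|e|f');   # stand-ins for the strings my script builds

# Write every record to a delimited data file first...
open my $out, '>', 'load.dat' or die "open load.dat: $!";
print {$out} "$_\n" for @rows;
close $out or die "close load.dat: $!";

# ...then load the whole file in one sqlldr run.
system('sqlldr', 'userid=scott/tiger', 'control=load.ctl',
       'data=load.dat') == 0
    or die "sqlldr failed: $?";
</code>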
What you typically do sounds like the Right Way To Do It. Why do you think it would be better to run sqlldr on one line of data at a time? What benefit do you think this will provide?
I haven't had occasion to use sqlldr for several years now, so I don't know whether it supports reading data from stdin -- and that's not a Perl question; you have to look up the docs for sqlldr.
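That said, if your sqlldr does accept a pseudo-file like /dev/stdin as its data file, a piped open from Perl would look something like the following. This is an untested sketch: it assumes a Unix-like system, and it assumes sqlldr will read from that pseudo-file, which you'd have to verify against the docs for your Oracle version.

<code>
use strict;
use warnings;

my @rows = ('a|b|c', 'd|e|f');   # stand-ins for the real records

# Start sqlldr with its data file pointed at the pipe we write to.
# (The userid and control values are placeholders.)
open my $sqlldr, '|-', 'sqlldr', 'userid=scott/tiger',
     'control=load.ctl', 'data=/dev/stdin'
    or die "cannot start sqlldr: $!";

print {$sqlldr} "$_\n" for @rows;   # stream one record at a time

close $sqlldr or die "sqlldr exited with status $?";
</code>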
Considering what sqlldr is supposed to do and how it does it, I'd rather have a single file with lots of rows loaded in a single run. Some rows might fail for various reasons, and sqlldr is very good about handling those: it sets them aside, reports the problems, and so on. Having a stable reference for the input data (i.e., a disk file rather than a pipeline stream) also makes error recovery and diagnosis easier.
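For what it's worth, that set-aside behavior is configured in the control file. Here's a hypothetical example (every name in it is invented), in SQL*Loader control-file syntax rather than Perl:

<code>
LOAD DATA
INFILE 'load.dat'
BADFILE 'load.bad'       -- records rejected by Oracle are saved here
DISCARDFILE 'load.dsc'   -- records filtered out by WHEN clauses go here
APPEND INTO TABLE my_table
FIELDS TERMINATED BY '|'
(col1, col2, col3)
</code>

After a run you can inspect load.bad, fix the offending rows, and reload just those, which is much harder to arrange when the input was a transient stream.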
BTW, did you happen to notice these links on the node composition page? Markup in the Monastery and Writeup Formatting Tips will tell you about the use of <code> tags (short form: <c>...</c>) around snippets of perl code and data -- which is sort of mandatory.
In reply to Re: using stdin with sqlldr by graff
in thread using stdin with sqlldr by chuckd