This sounds like a good idea, but my SQL statements look like this:
    DELETE tbl_3 WHERE <primary-key match>
    INSERT tbl_3 (<about 200 fields>)
        SELECT <about 100 fields from tbl_1>, <about 100 fields from tbl_2>
        FROM tbl_1, tbl_2 ...
It's easy to split out the DELETE statements, but it doesn't look so easy to turn the INSERT ... SELECT statements into a file in bcp format, since the data comes from a query. The query-then-insert pattern may well be the root cause of the slowness.
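If I do split them, one way to get the SELECT side into bcp shape might be to run the query once and dump the rows in character mode, then load the file with bcp in -c. A minimal sketch, assuming a tab-delimited output file and placeholder credentials/columns rather than my real ones:

    use strict;
    use warnings;
    use Sybase::CTlib;

    # Placeholder login details -- not the real ones.
    my $dbh = Sybase::CTlib->ct_connect('user', 'password', 'SYBSERVER');

    # Run the SELECT half of the insert once; the column list here is a
    # stand-in for the real ~200 fields.
    $dbh->ct_execute(q{
        select t1.col_a, t1.col_b, t2.col_c
        from   tbl_1 t1, tbl_2 t2
        where  t1.pk = t2.pk
    });

    open my $out, '>', 'tbl_3.dat' or die "open tbl_3.dat: $!";
    my $restype;
    while ($dbh->ct_results($restype) == CS_SUCCEED) {
        next unless $dbh->ct_fetchable($restype);
        while (my @row = $dbh->ct_fetch) {
            # one tab-delimited line per row, ready for character-mode bcp
            print {$out} join("\t", map { defined $_ ? $_ : '' } @row), "\n";
        }
    }
    close $out;

    # then, outside Perl:
    #   bcp mydb..tbl_3 in tbl_3.dat -c -t '\t' -U user -P password -S SYBSERVER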
The script I am using just uses Sybase::CTlib to apply the SQL batches to the Sybase server, and each of my files is one transaction containing 30k delete-then-insert statement pairs.
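For reference, the core of the per-file apply loop is roughly this (a simplified sketch, not the production script; it assumes the begin/commit wrapper is not already inside the file):

    use strict;
    use warnings;
    use Sybase::CTlib;

    my $dbh = Sybase::CTlib->ct_connect('user', 'password', 'SYBSERVER');

    # One file = one transaction of delete-then-insert pairs.
    sub apply_file {
        my ($file) = @_;
        open my $fh, '<', $file or die "open $file: $!";
        my $batch = do { local $/; <$fh> };   # slurp the whole file as one batch
        close $fh;

        for my $sql ('begin tran', $batch, 'commit tran') {
            $dbh->ct_execute($sql);
            my $restype;
            while ($dbh->ct_results($restype) == CS_SUCCEED) {
                next unless $dbh->ct_fetchable($restype);
                while (my @row = $dbh->ct_fetch) { }   # drain rows so the handle stays usable
            }
        }
    }

    apply_file($_) for @ARGV;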
Thanks very much for your advice. I think I should try running the script in parallel to see if that saves some time.
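A minimal sketch of the parallel version, assuming Parallel::ForkManager, four workers, and one transaction per file; each child opens its own connection, since a CTlib handle can't be shared across fork():

    use strict;
    use warnings;
    use Parallel::ForkManager;
    use Sybase::CTlib;

    my @files = glob 'batch_*.sql';             # assumed file naming
    my $pm    = Parallel::ForkManager->new(4);  # assumed worker count

    for my $file (@files) {
        $pm->start and next;                    # parent: queue the next file

        # Connect per child; placeholder credentials.
        my $dbh = Sybase::CTlib->ct_connect('user', 'password', 'SYBSERVER');

        open my $fh, '<', $file or die "open $file: $!";
        my $batch = do { local $/; <$fh> };
        close $fh;

        # ct_sql runs the batch and drains all result sets internally.
        $dbh->ct_sql("begin tran\n$batch\ncommit tran");

        $pm->finish;
    }
    $pm->wait_all_children;

One caveat: parallel writers hitting tbl_3 at the same time can deadlock, so it is probably worth partitioning the files by primary-key range first.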