in reply to Threading (Perl 5.8 ithreads) with Oracle 9i dumps core

Perl may not be the whole answer.

In Oracle, if I have a lot of deletes to do in a large database, I use partitioned views to spread the work out. I also drop as many indexes as I can, so that the index maintenance overhead is gone while I am deleting, and then I reindex once the deletes are done.
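
For example, a rough sketch of that drop/delete/reindex pattern from Perl with DBI; the connection string, table, and index names here are invented for illustration:

    use strict;
    use warnings;
    use DBI;

    # Placeholder connection details for a DBD::Oracle connection.
    my $dbh = DBI->connect('dbi:Oracle:mydb', 'scott', 'tiger',
                           { RaiseError => 1, AutoCommit => 0 });

    # Drop the index so the bulk delete does not pay index maintenance costs.
    $dbh->do('DROP INDEX big_table_status_idx');

    # Do the bulk delete, then commit.
    $dbh->do(q{DELETE FROM big_table WHERE status = 'EXPIRED'});
    $dbh->commit;

    # Recreate the index once the deletes are finished.
    $dbh->do('CREATE INDEX big_table_status_idx ON big_table (status)');

    $dbh->disconnect;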

Partitioned views will also help with the update issue, since the data will be spread across multiple disks. Also make sure that your temp space is not on the same disks as your data and indexes.

I would also optimize my segment management and use stored procedures for the updates and deletes. You can have Perl call the stored procedures if you want.
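
Calling a stored procedure from Perl is straightforward with DBI and DBD::Oracle. A minimal sketch, assuming a hypothetical PL/SQL procedure purge_old_rows that takes a single numeric parameter:

    use strict;
    use warnings;
    use DBI;

    # Placeholder connection details.
    my $dbh = DBI->connect('dbi:Oracle:mydb', 'scott', 'tiger',
                           { RaiseError => 1, AutoCommit => 0 });

    # Call a (hypothetical) PL/SQL procedure that does the delete server-side.
    my $sth = $dbh->prepare(q{
        BEGIN
            purge_old_rows(:days_to_keep);
        END;
    });
    $sth->bind_param(':days_to_keep', 30);
    $sth->execute;
    $dbh->commit;

    $dbh->disconnect;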

Overall I would tune my database side first, and then work on concurrent SQL processes.

Richard

There are three types of people in this world: those that can count and those that cannot. -- Anon

Re: Re: Threading (Perl 5.8 ithreads) with Oracle 9i dumps core
by fx (Pilgrim) on Dec 21, 2003 at 14:52 UTC

    I thought about dropping and then recreating the indices, but the creation takes 60-80 minutes depending on what else the machine is doing. I believe it takes so long because we use function-based indices.

    As you can probably tell, I am not an Oracle DBA. I will ask the DBA to look into partitioned views and segment management.

    Many thanks.

      It just goes to show that you should always think about the big picture. After I went back to the DBA and complained about the speed, it was discovered that one of the foreign keys used for joining was not indexed. Adding this index sped up one of the cascaded deletes from 3-5 seconds to "very quick".
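
      The index itself was nothing special; a sketch along those lines, with made-up connection details and table/column names:

          use strict;
          use warnings;
          use DBI;

          my $dbh = DBI->connect('dbi:Oracle:mydb', 'scott', 'tiger',
                                 { RaiseError => 1 });

          # Index the foreign key column used in the cascaded delete joins.
          $dbh->do('CREATE INDEX child_parent_fk_idx ON child_table (parent_id)');

          $dbh->disconnect;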

      I now have reasonable speed from the SQL calls, but my problem still lies in the fact that it can take hours to apply a "big" update, which I have just been told is around 110,000 records.

      I am therefore still looking for some way to increase the performance of my code.
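
      One option for an update of that size is DBI's execute_array, which ships the bind values to Oracle in batches rather than one row at a time. A rough sketch, with invented connection details, table, and column names:

          use strict;
          use warnings;
          use DBI;

          my $dbh = DBI->connect('dbi:Oracle:mydb', 'scott', 'tiger',
                                 { RaiseError => 1, AutoCommit => 0 });

          # @ids and @values would come from the real update file; dummies here.
          my @ids    = (1 .. 1000);
          my @values = map { "value_$_" } @ids;

          my $sth = $dbh->prepare(
              'UPDATE big_table SET some_col = ? WHERE id = ?'
          );

          # execute_array sends the bind rows to the server in batches,
          # one array ref per placeholder, in placeholder order.
          my $tuples = $sth->execute_array(
              { ArrayTupleStatus => \my @status },
              \@values,
              \@ids,
          );
          $dbh->commit;

          print "updated $tuples rows\n";
          $dbh->disconnect;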

        Hi fx, were you able to solve this problem? I am getting a similar error after 10,000 records. Please see my post as well. With regards, kaustuv