in reply to Re^4: Sqlite: Threads and inserts into a different database for each one. (1 thread fast, >1 slow: BY YOUR DESIGN!)
in thread Sqlite: Threads and inserts into a different database for each one. (1 thread fast, >1 slow)

Yes. A small sample of the input data would be extremely useful. Just enough to exercise the code through 3 or 4 iterations would do.

(The full code -- or at least a working, cut-down version showing all the relevant code -- is a requirement if you are to get answers that are anything more than guesswork.)


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^6: Sqlite: Threads and inserts into a different database for each one. (1 thread fast, >1 slow: BY YOUR DESIGN!)
by ssc37 (Acolyte) on Apr 07, 2014 at 19:23 UTC
    I don't know whether you've seen my answer, since I replied on the other branch.

    (I'm not familiar with the email alerts and the topic-following feature on PerlMonks.)

    Please forgive me if you were already aware of my answer, and my apologies if my English makes me hard to read.

    Best regards,

      Given that $uniq_values is a hash reference, could you tell me what you think this construct does?

      \%{$uniq_values}

      And why do you think it is necessary?
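      (For anyone following along, here is a minimal sketch -- the variable names are just for illustration -- of what that construct actually produces:

          use strict;
          use warnings;

          my %uniq;                     # an ordinary hash
          my $uniq_values = \%uniq;     # a reference to it

          # \%{$uniq_values} dereferences the hashref and immediately takes
          # a reference to the same underlying hash, so nothing new is built:
          print "same hash\n" if \%{$uniq_values} == $uniq_values;

      That is, it yields another reference to the very same hash; no copy is made.)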


        Hmm...

        Now that you've pointed it out and made me think about it, it seems I meant to call my subroutine with a hash reference, but I'm actually calling it with a reference to an already-referenced hash?

        And I have to admit that I don't know the CPU/memory cost of this behaviour, nor the real consequences of this "wrong design"...

        It also seems, after re-reading some topics, that I process data inside the worker's subroutine without it being shared, or push it with enqueue, and the result is a deep copy of the hash(es) and array(s) for each thread; maybe that is where the thread-creation time goes?
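        (A minimal sketch of what I mean -- not my actual code, the names are made up -- showing that an unshared hash is cloned into every thread while a :shared one is not:

            use strict;
            use warnings;
            use threads;
            use threads::shared;

            my %plain  = ( rows => 0 );            # each thread gets its own clone
            my %shared :shared = ( rows => 0 );    # one copy, visible to all threads

            my @workers = map {
                threads->create( sub {
                    $plain{rows}++;                # modifies only this thread's private clone
                    lock( %shared );
                    $shared{rows}++;               # modifies the single shared hash
                } );
            } 1 .. 4;
            $_->join for @workers;

            print "plain:  $plain{rows}\n";        # still 0 in the parent
            print "shared: $shared{rows}\n";       # 4

        So the parent never sees the workers' changes to the plain hash, and the cloning itself costs memory and time every time a thread is created.)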

        I haven't yet tried to recreate the multi-threaded processing by splitting my data from a shell script and running the parts in parallel there, to observe whether the slowdown is really a limitation of having multiple SQLite connections to different databases inside the same script, or only a misunderstanding on my part of how Perl's thread model is designed.
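        (What I have in mind is something like the sketch below -- the file names, table and row count are invented, not my real schema -- with one connection per thread, each thread writing to its own database file:

            use strict;
            use warnings;
            use threads;
            use DBI;

            my @workers = map {
                my $n = $_;
                threads->create( sub {
                    # each thread opens its own SQLite file, so no handle is
                    # ever shared between threads
                    my $dbh = DBI->connect( "dbi:SQLite:dbname=worker_$n.db", '', '',
                                            { RaiseError => 1, AutoCommit => 0 } );
                    $dbh->do( 'CREATE TABLE IF NOT EXISTS t ( k TEXT, v TEXT )' );
                    my $sth = $dbh->prepare( 'INSERT INTO t ( k, v ) VALUES ( ?, ? )' );
                    $sth->execute( "key$_", "value$_" ) for 1 .. 10_000;
                    $dbh->commit;
                    $dbh->disconnect;
                } );
            } 1 .. 4;
            $_->join for @workers;

        Timing that against a single-threaded run of the same inserts should show whether the connections themselves are the bottleneck, or whether the cost is in how I build the pool and copy the data.)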

        I don't have much experience with Perl (even if I've already worked with it), but I refuse to give up without real facts and a good understanding of my errors. I really think that the slowness I'm talking about in this topic is not caused by the fact that I multithread my inserts, but that the code creating the pool is the "dark side" of my code, as the small debug runs I've made show, and as your first observations suggested, given your "skills" :) (no offence to anybody).

        Best regards,