Now that you've made me look at it and think about it again, it seems I meant to call my subroutine with a hash reference, but I actually call it with a reference to an already-referenced hash?
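If I understand the mistake correctly, it boils down to something like this minimal sketch (worker and %data are made-up names, not my actual code):

<code>
use strict;
use warnings;

my %data = ( key => 'value' );
my $ref  = \%data;     # one level of indirection

worker($ref);          # what I meant: pass the hash reference
worker(\$ref);         # what I did: a reference to a reference

sub worker {
    my ($arg) = @_;
    # Prints "HASH" for the first call, "REF" for the second:
    # the second would need a double dereference ($$arg->{key}).
    print ref($arg), "\n";
}
</code>

As far as I know, the extra level of indirection itself is cheap (one more scalar and one more dereference per access); the real issue is correctness and readability.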
And I have to admit that I don't know the CPU/memory cost of this behaviour, nor the real consequences of this "wrong design"...
And it seems, after re-reading some threads here, that I process data inside the worker subroutine without it being shared (or pushed with enqueue), so the result is a deep copy of the hash(es) and array(s) for each thread, and maybe that is where the thread-creation time goes?
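Something like this minimal sketch (with a made-up %big hash, not my real data) is what I mean by the per-thread copy:

<code>
use strict;
use warnings;
use threads;
use threads::shared;

# Made-up data, 100_000 entries just to make the copy noticeable.
my %big;
$big{$_} = "row $_" for 1 .. 100_000;

# Unshared: threads->create() clones the interpreter, so every new
# thread gets its own deep copy of %big (CPU and memory cost at spawn).
# shared_clone() puts a single copy into shared memory instead.
my $shared = shared_clone(\%big);

my @workers = map {
    threads->create( sub { scalar keys %$shared } );
} 1 .. 4;

print $_->join(), "\n" for @workers;   # all threads read the same copy
</code>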
I haven't yet tried to recreate the multithreaded process by splitting my data in a shell script and parallelising it there, to observe whether it is really a limitation of multiple SQLite connections to different databases inside the same script, or just a misunderstanding on my part of how Perl threads are designed.
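What I have in mind for that test is roughly this sketch, using forked processes instead of a shell script (file names and row counts are made up, and it assumes DBD::SQLite is installed):

<code>
use strict;
use warnings;
use DBI;

# One OS process per database instead of one ithread per database,
# to see whether the slowdown comes from Perl threads or from SQLite.
my @dbs = map "part_$_.db", 1 .. 4;      # made-up file names

my @pids;
for my $db (@dbs) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {                     # child: its own file and connection
        my $dbh = DBI->connect("dbi:SQLite:dbname=$db", '', '',
                               { RaiseError => 1, AutoCommit => 0 });
        $dbh->do('CREATE TABLE IF NOT EXISTS t (v INTEGER)');
        my $sth = $dbh->prepare('INSERT INTO t (v) VALUES (?)');
        $sth->execute($_) for 1 .. 10_000;
        $dbh->commit;                    # one transaction for all inserts
        $dbh->disconnect;
        exit 0;
    }
    push @pids, $pid;
}
waitpid($_, 0) for @pids;                # parent waits for all children
</code>

If this version scales with the number of processes while the threaded one does not, the bottleneck would be the per-thread cloning rather than SQLite itself.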
I don't have much experience with Perl (even though I've worked with it before), but I refuse to give up without real facts and a good understanding of my errors. I really think the slowness I'm talking about in this topic is not caused by multithreading my inserts, but by my pool-creation code, which is the "dark side" of my code, as shown by the small debugging I've done and by your first observations, given your "skills" :) (no offence to anybody).
Best regards,