The hash contains a list of directories that need to be archived.

There is also a constraint on how many threads we can use per process.

Below is a small use case for the problem I am trying to solve using threads.

  1. Scan a directory to find how many directories we have to archive.
  2. Make an entry in the DB to keep track of the files we plan to work on.
  3. Archive the directory. Once the operation succeeds, update the DB for that set of files.
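A minimal sequential sketch of those three steps might look like the following. The helpers (record_pending, archive_dir, mark_done) are hypothetical stubs standing in for the real DB and archiving code:

```perl
use strict;
use warnings;

# Hash of directories to archive (step 1's output, hard-coded here).
my %to_archive = map { $_ => 1 } qw(/data/logs /data/reports);
my (%pending, %done);

sub record_pending { $pending{ $_[0] } = 1 }  # step 2: note the work in the DB
sub archive_dir    { return 1 }               # step 3: pretend the archive succeeded
sub mark_done      { $done{ $_[0] } = 1 }     # step 3: update DB on success

for my $dir (sort keys %to_archive) {
    record_pending($dir);
    mark_done($dir) if archive_dir($dir);
}
print scalar(keys %done), " directories archived\n";  # prints "2 directories archived"
```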

Now I am trying to multi-thread this, where each thread works on a fixed-size set.
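One way to carve the hash into fixed-size sets, one per thread, is to splice the sorted keys into buckets (the bucket size of 3 is just an example):

```perl
use strict;
use warnings;

# Seven example directories, split into buckets of at most 3 keys each.
my %dirs = map { "/data/dir$_" => 1 } 1 .. 7;
my $bucket_size = 3;

my @keys = sort keys %dirs;
my @buckets;
push @buckets, [ splice @keys, 0, $bucket_size ] while @keys;

# 7 keys at 3 per bucket gives buckets of 3, 3, and 1.
print scalar(@buckets), " buckets\n";  # prints "3 buckets"
```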

One bottleneck I see is the DB handle, which I don't think can be shared across threads; I believe that is a limitation of DBI. Any thoughts on how to overcome this bottleneck would be great.
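Since DBI handles indeed can't be passed between ithreads, the usual workaround is for each worker thread to open its own connection. A sketch using Thread::Queue, with the DBI connect shown as a comment because the DSN here would be a placeholder:

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;

# Feed the directories into a shared queue and close it so workers
# see undef when the work runs out.
my $q = Thread::Queue->new( map { "/data/dir$_" } 1 .. 6 );
$q->end;

my $MAX_THREADS = 3;   # the per-process thread constraint
my @workers = map {
    threads->create(sub {
        # Each thread opens its OWN handle; never share one across threads:
        # my $dbh = DBI->connect($dsn, $user, $pass, { RaiseError => 1 });
        my $count = 0;
        while ( defined( my $dir = $q->dequeue ) ) {
            $count++;  # archive $dir and update the DB via $dbh here
        }
        return $count;
    });
} 1 .. $MAX_THREADS;

my $total = 0;
$total += $_->join for @workers;
print "$total directories processed\n";  # prints "6 directories processed"
```

With a queue the buckets don't even need to be pre-sliced: threads pull work as they finish, which balances uneven archive times, and the one-handle-per-thread rule sidesteps the DBI sharing limitation entirely.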


In reply to Re^2: How to bucket an Hash by techman2006
in thread How to bucket an Hash by techman2006
