"Can we say that for some RAID/LVM setups, performance of N parallel seeks is unknown?"

Yes, absolutely.

"Meaning that some tests prior to forking must be made to establish a good value for N for that specific directory (and also specific action like reading file contents or just listing directory entries)?"

Yes. The general idea is to have one process traversing the directories per set of disks containing the directory tree. It does not matter whether that set is assembled via RAID, LVM, ZFS, or something else we haven't thought of.
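To make that concrete, here is a minimal Perl sketch of the per-disk-set forking, assuming the mapping from disk sets to directories has already been established somehow; %dirs_by_diskset and its contents are hypothetical placeholders:

<code>
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

# Hypothetical mapping, disk set => directories living on those disks.
# Building this mapping is the hard part; see the lsblk discussion below.
my %dirs_by_diskset = (
    'sda,sdb,sdc' => [ '/data', '/home' ],
    'sdd'         => [ '/archive' ],
);

my @pids;
for my $set ( keys %dirs_by_diskset ) {
    my $pid = fork() // die "fork failed: $!";
    if ( $pid == 0 ) {
        # Child: traverse only the directories on this disk set.
        find( sub { print "$File::Find::name\n" if -f },
              @{ $dirs_by_diskset{$set} } );
        exit 0;
    }
    push @pids, $pid;
}
waitpid( $_, 0 ) for @pids;    # parent waits for all workers
</code>

One worker per disk set means each set of heads serves exactly one traversal, instead of several traversals competing for the same spindles.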

"Or perhaps RAID can be enquired about N before seeking?"

Yes, but implementing an algorithm that considers all edge cases may take some time. Linux software RAID allows really perverse constructs. For example, it does not prevent you from creating four partitions on a single disk and building a RAID-5 on top of those four partitions. That makes absolutely no sense, except for learning and debugging. A quite common setup for software RAID is to have some disks in a RAID-5, but also a bootable RAID-1 for /boot on the same disks. Linux software RAID allows that, because it can use partitions instead of full disks. So you end up with two RAID volumes, each using three or more disks, but sharing the same disks. Or you use separate RAIDs for data, root, swap, and boot, because you don't like LVM. This is my home server setup:

<code>
> lsblk -i
NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda         8:0    1 931.5G  0 disk
|-sda1      8:1    1   100M  0 part
| `-md0     9:0    0   100M  0 raid1 /boot
|-sda2      8:2    1    10G  0 part
| `-md1     9:1    0    40G  0 raid5 /
|-sda3      8:3    1     2G  0 part
| `-md3     9:3    0     8G  0 raid5 [SWAP]
`-sda4      8:4    1 919.4G  0 part
  `-md2     9:2    0   3.6T  0 raid5 /data
sdb         8:16   1 931.5G  0 disk
|-sdb1      8:17   1   100M  0 part
| `-md0     9:0    0   100M  0 raid1 /boot
|-sdb2      8:18   1    10G  0 part
| `-md1     9:1    0    40G  0 raid5 /
|-sdb3      8:19   1     2G  0 part
| `-md3     9:3    0     8G  0 raid5 [SWAP]
`-sdb4      8:20   1 919.4G  0 part
  `-md2     9:2    0   3.6T  0 raid5 /data
sdc         8:32   1 931.5G  0 disk
|-sdc1      8:33   1   100M  0 part
| `-md0     9:0    0   100M  0 raid1 /boot
|-sdc2      8:34   1    10G  0 part
| `-md1     9:1    0    40G  0 raid5 /
|-sdc3      8:35   1     2G  0 part
| `-md3     9:3    0     8G  0 raid5 [SWAP]
`-sdc4      8:36   1 919.4G  0 part
  `-md2     9:2    0   3.6T  0 raid5 /data
sdd         8:48   1 931.5G  0 disk
|-sdd1      8:49   1   100M  0 part
| `-md0     9:0    0   100M  0 raid1 /boot
|-sdd2      8:50   1    10G  0 part
| `-md1     9:1    0    40G  0 raid5 /
|-sdd3      8:51   1     2G  0 part
| `-md3     9:3    0     8G  0 raid5 [SWAP]
`-sdd4      8:52   1 919.4G  0 part
  `-md2     9:2    0   3.6T  0 raid5 /data
sde         8:64   1 931.5G  0 disk
|-sde1      8:65   1   100M  0 part
| `-md0     9:0    0   100M  0 raid1 /boot
|-sde2      8:66   1    10G  0 part
| `-md1     9:1    0    40G  0 raid5 /
|-sde3      8:67   1     2G  0 part
| `-md3     9:3    0     8G  0 raid5 [SWAP]
`-sde4      8:68   1 919.4G  0 part
  `-md2     9:2    0   3.6T  0 raid5 /data
sr0        11:0    1  1024M  0 rom
</code>

This should be easy to parse, and the result is that all RAID volumes share the same set of disks. In my case, N=1. If you also want to traverse the BD-ROM sr0, N=2.
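As a rough illustration of such parsing, here is a sketch that asks lsblk for each device's ancestry directly instead of scraping the tree drawing. It relies on lsblk's JSON output (-J, util-linux 2.27 or later); the disks_under() helper and the final N computation are my own illustration, not the edge-case-proof algorithm discussed above:

<code>
#!/usr/bin/perl
use strict;
use warnings;
use JSON::PP;    # core module since Perl 5.14

# Map a block device (e.g. /dev/md1) to the physical disks beneath it,
# using lsblk's inverse (-s) JSON output, where "children" are ancestors.
sub disks_under {
    my ($dev) = @_;
    my $json = `lsblk -J -s -p -o NAME,TYPE $dev`
        or die "lsblk failed for $dev";
    my %disks;
    my @queue = @{ decode_json($json)->{blockdevices} };
    while ( my $node = shift @queue ) {
        $disks{ $node->{name} } = 1 if $node->{type} eq 'disk';
        push @queue, @{ $node->{children} // [] };
    }
    # Return the disk set as one comma-joined string, handy as a hash key.
    return join ',', sort keys %disks;
}

# N = number of distinct disk sets among the devices to traverse,
# e.g. the sources reported by df --output=source for each directory.
my %sets;
$sets{ disks_under($_) } = 1 for @ARGV;
print "N = ", scalar keys %sets, "\n";
</code>

For the volumes above, disks_under('/dev/md1') and disks_under('/dev/md2') both come back as sda through sde, so there is one distinct set and N=1.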

Compare with one of our servers at work:

<code>
> lsblk -i
NAME           MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda              8:0    0  1.8T  0 disk
|-sda1           8:1    0    1G  0 part
| `-md0          9:0    0 1024M  0 raid1 /boot
`-sda2           8:2    0  1.8T  0 part
  `-md1          9:1    0  7.3T  0 raid6
    |-pve-root 251:0    0   20G  0 lvm   /
    |-pve-swap 251:1    0  128G  0 lvm   [SWAP]
    `-pve-data 251:2    0  7.1T  0 lvm   /var/lib/vz
sdb              8:16   0  1.8T  0 disk
|-sdb1           8:17   0    1G  0 part
| `-md0          9:0    0 1024M  0 raid1 /boot
`-sdb2           8:18   0  1.8T  0 part
  `-md1          9:1    0  7.3T  0 raid6
    |-pve-root 251:0    0   20G  0 lvm   /
    |-pve-swap 251:1    0  128G  0 lvm   [SWAP]
    `-pve-data 251:2    0  7.1T  0 lvm   /var/lib/vz
sdc              8:32   0  1.8T  0 disk
|-sdc1           8:33   0    1G  0 part
| `-md0          9:0    0 1024M  0 raid1 /boot
`-sdc2           8:34   0  1.8T  0 part
  `-md1          9:1    0  7.3T  0 raid6
    |-pve-root 251:0    0   20G  0 lvm   /
    |-pve-swap 251:1    0  128G  0 lvm   [SWAP]
    `-pve-data 251:2    0  7.1T  0 lvm   /var/lib/vz
sdd              8:48   0  1.8T  0 disk
|-sdd1           8:49   0    1G  0 part
| `-md0          9:0    0 1024M  0 raid1 /boot
`-sdd2           8:50   0  1.8T  0 part
  `-md1          9:1    0  7.3T  0 raid6
    |-pve-root 251:0    0   20G  0 lvm   /
    |-pve-swap 251:1    0  128G  0 lvm   [SWAP]
    `-pve-data 251:2    0  7.1T  0 lvm   /var/lib/vz
sde              8:64   0  1.8T  0 disk
|-sde1           8:65   0    1G  0 part
| `-md0          9:0    0 1024M  0 raid1 /boot
`-sde2           8:66   0  1.8T  0 part
  `-md1          9:1    0  7.3T  0 raid6
    |-pve-root 251:0    0   20G  0 lvm   /
    |-pve-swap 251:1    0  128G  0 lvm   [SWAP]
    `-pve-data 251:2    0  7.1T  0 lvm   /var/lib/vz
sdf              8:80   0  1.8T  0 disk
|-sdf1           8:81   0    1G  0 part
| `-md0          9:0    0 1024M  0 raid1 /boot
`-sdf2           8:82   0  1.8T  0 part
  `-md1          9:1    0  7.3T  0 raid6
    |-pve-root 251:0    0   20G  0 lvm   /
    |-pve-swap 251:1    0  128G  0 lvm   [SWAP]
    `-pve-data 251:2    0  7.1T  0 lvm   /var/lib/vz
</code>

Six disks, containing two RAID volumes: a RAID-1 for /boot and a RAID-6 holding an LVM volume group. LVM provides /, /var/lib/vz, and swap on top of the RAID-6. N=1, no optical drive.
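If shelling out to lsblk is undesirable, the same ancestry can be read straight from sysfs, since every MD and DM device lists its components under /sys/block/<dev>/slaves. A minimal sketch, assuming plain sdXN partition naming (NVMe-style names would need extra handling):

<code>
#!/usr/bin/perl
use strict;
use warnings;

# Resolve a device name as it appears under /sys/block (e.g. "md1",
# "dm-2") to the physical disks beneath it by walking "slaves" links.
sub physical_disks {
    my ($name) = @_;
    my @slaves = glob "/sys/block/$name/slaves/*";
    return ($name) unless @slaves;    # no slaves: this is a real disk
    my %disks;
    for my $slave (@slaves) {
        my ($base) = $slave =~ m{([^/]+)\z};
        $base =~ s/\d+\z// if $base =~ /^sd/;  # sda2 -> sda (assumes sdXN naming)
        $disks{$_} = 1 for physical_disks($base);
    }
    return sort keys %disks;
}

print join( ',', physical_disks($_) ), "\n" for @ARGV;    # e.g. md1 dm-0
</code>

On the server above, md0, md1, and the three LVM volumes all resolve to the same six disks, again giving N=1.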

Alexander

--
Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)

