in reply to Re^4: Data manipulation on a file
in thread Data manipulation on a file
Small problem: when run across a large number of Linux hosts, the output is producing duplicate mount points for different shares, etc. This is due to many mounts on the same mount point.
Like this $outputstring: host1: mountpoint1 mountpoint1 mountpoint1 mountpoint2 host25: mountpoint5 mountpoint5 mountpoint5 mountpoint6 m......78.
It would be really nice to remove the duplicates, but that would mean logic either before the array elements are updated or after the hash has been dumped. As I am just dumping the full hash into a scalar, this could get tricky. Does anyone have suggestions on removing the duplicate mount points once the hash has been dumped to the scalar, or should I put the logic into the dump, or even into the push? Perl is so powerful, but I'm still a newbie here, so any help is appreciated. Many thanks.

#!/usr/bin/perl
use strict;
use warnings;

my %filer_hash;
for (qx(mount -t nfs | awk -F/ '{print \$1,\$3}' | sed -r 's/(blah.*:) +|(bblah.*:)//g' | sort)) {
    chomp;
    my ($host, $mp) = split;
    push @{ $filer_hash{$host} }, $mp;
}

# sort filers by their number of mount points, descending
my $outputstring = '';
foreach my $filer ( sort { @{ $filer_hash{$b} } <=> @{ $filer_hash{$a} } } keys %filer_hash ) {
    $outputstring .= "$filer:@{ $filer_hash{$filer} },";
}
print "This is my output : $outputstring\n";
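One way this could be done (a minimal sketch, untested against your environment): dedup at push time with a helper %seen hash, introduced here for illustration, so each mount point is stored only once per host and nothing downstream of the loop has to change:

my %filer_hash;
my %seen;    # remembers host/mount-point pairs already stored
for (qx(mount -t nfs | awk -F/ '{print \$1,\$3}' | sed -r 's/(blah.*:) +|(bblah.*:)//g' | sort)) {
    chomp;
    my ($host, $mp) = split;
    # push a mount point only the first time it is seen for this host
    push @{ $filer_hash{$host} }, $mp unless $seen{$host}{$mp}++;
}

Alternatively, the duplicates could be filtered per host just before building $outputstring, e.g. with uniq from List::Util (available since List::Util 1.45), which keeps the reading loop untouched.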
Replies are listed 'Best First'.
Re^6: Data manipulation on a file
by CountZero (Bishop) on Oct 02, 2015 at 21:23 UTC
by sstruthe (Novice) on Oct 02, 2015 at 21:38 UTC | |
by CountZero (Bishop) on Oct 02, 2015 at 21:41 UTC | |
|
Re^6: Data manipulation on a file
by sstruthe (Novice) on Oct 04, 2015 at 18:20 UTC