I'm trying to create the following format of hash:
{ "test": [ { "group": "ABC1", "values": [ "abc", "xyz" ] }, { "group": "ABC2", "values": [ "abc2", ] }, { "group": "ABC3", "values": [ "xyz3" ] } ] }
I iterate over my data and then run the following code on each line:
open(my $fh, '<', "$file_path") or return 0;
while (my $line = <$fh>) {
    # ...........
    # Additional operations on the lines of the file
    # ...........
    my %a;                      # temp hash
    $a{"group"} = $group;
    if (defined($href->{"values"})) {
        $a{"values"} = [ sort( uniq($value, @{ $href->{"values"} }) ) ];
    }
    else {
        push(@{ $a{"values"} }, $value);
    }
    push(@{ $href->{"test"} }, \%a);
}
I don't like how it looks; it is not clear and a bit of a mess.
Is there a way to make the code shorter, maybe without using the temp hash and the if-else block?
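In other words, I imagine the loop body could shrink to a single push of an anonymous hash, something in this spirit (an untested sketch; it still creates one entry per line and ignores merging of values):

    # one anonymous hash per line, no temp %a and no if/else
    push @{ $href->{test} }, {
        group  => $group,
        values => [ $value ],
    };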
EDIT: I found out that it actually won't do what I need: it will not push the value into the array of an already-existing group. Not sure how to fix it yet.
EDIT-2: I'm sorry if my question was not understandable. I thought about it, and from the comments I learned that I need to clarify my question (sorry again). I have talked about the inner structure, but the final structure that I actually need is:
{ + "root": [ { "test": [ { "group": "XYZ", "values": [ "1234" ] }, { "group": "ABC", "values": [ "6.13.00" ] } ] }, { "test": [ { "group": "XYZ", "values": [ "tcsh" ] }, { "tool": "WEA", "values": [ "6.13.00" ] } ] }, { "test": [ { "group": "BAB", "values": [ "ASDAS", "12312321" ] }, { "group": "SADA", "values": [ "6.13.00", "1231231" ] } ] } ] }
I want to iterate through a file of data, parse it, and extract the group and values. The current way I do it:
use List::Util qw(uniq);    # for uniq()

sub parse_file {
    my ($file_path, $href) = @_;
    open(my $fh, '<', "$file_path") or return 0;
    while (my $line = <$fh>) {
        chomp($line);
        unless ($line =~ /\A[^,]+(?:,[^,]+){5}\z/) {
            next;
        }
        my ($key, $group, $value, $version, $file, $count) = split(/,/, $line);
        push @{ $href->{test} }, {
            group  => $group,
            values => [ sort( uniq($value, @{ $href->{values} // [] }) ) ],
        };
    }
    close($fh);
    return 1;
}

foreach my $dir (sort(@list_of_dirs)) {
    my ($all, %data);
    $all = $dir . "/" . "galish";
    parse_file($all, \%data);
    push(@all_data, \%data);
}
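For completeness, the JSON shown in this post is produced by encoding the collected structure. A minimal way to do that, assuming the core JSON::PP module (the real script may use a different JSON encoder), is:

    use JSON::PP;

    # @all_data is the array built by the foreach loop above
    print JSON::PP->new->pretty->encode({ root => \@all_data });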
It does what I need when each group has only a single value.
When a group has multiple values, I get:
{ + "root": [ { "test": [ { "group": "XYZ", "values": [ "1234" ] }, { "group": "ABC", "values": [ "6.13.00" ] } ] }, { "test": [ { "group": "XYZ", "values": [ "tcsh" ] }, { "tool": "WEA", "values": [ "6.13.00" ] } ] }, { "test": [ { "group": "BAB", "values": [ "ASDAS", ] }, { "group": "BAB", "values": [ "12312321" ] }, { "group": "SADA", "values": [ "6.13.00", ] } { "group": "SADA", "values": [ "1231231" ] } ] } ] }
If you look closely, you'll see duplicates of the same group name within the same test.
How to solve it?
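The only direction I can think of so far is to remember which groups were already pushed and merge new values into the existing entry instead of pushing a second hash, along these lines (an untested sketch of the loop inside parse_file; uniq comes from List::Util 1.45+):

    use List::Util qw(uniq);

    # %seen maps a group name to the hash ref already stored in $href->{test};
    # it would be declared per call to parse_file, before the while loop
    my %seen;
    while (my $line = <$fh>) {
        chomp $line;
        next unless $line =~ /\A[^,]+(?:,[^,]+){5}\z/;
        my ($key, $group, $value, $version, $file, $count) = split /,/, $line;

        if (my $entry = $seen{$group}) {
            # group already present in this test: merge and de-duplicate values
            my @merged = uniq(@{ $entry->{values} }, $value);
            $entry->{values} = [ sort @merged ];
        }
        else {
            my $entry = { group => $group, values => [$value] };
            push @{ $href->{test} }, $entry;
            $seen{$group} = $entry;
        }
    }

Would that be the idiomatic way, or is there something cleaner?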
