foreach my $hour (map { sprintf "%02d", $_ } 0 .. 23) {
# Do Stuff Here
}
and I want to take all the data out of each file and put it into a hash.
So within the foreach loop, you will open the file for that
hour and then read from the file using the angle-brackets operator,
like <FILEHANDLE>, probably repeatedly (e.g., using a while loop),
parse the result, and store it in a hash. The parsing part is going
to depend somewhat on what your data looks like, but most simple
assignments like this can be done with split or a simple
regular expression pattern match. Then store it in your hash like
this: $hash{$key} = $value; If you need more help with
that part, you'll have to give us more information about what your
data looks like, which part is the hash key, and which part is the value.
It would also be possible to do the thing as a list transformation,
feeding the list-context file read through map and into
the hash, but for a beginner the other way is probably easier.
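For example, the loop body might look roughly like this (just a sketch; I'm assuming the files are named "00.txt" through "23.txt", and, purely for illustration, that each line holds a key and a value separated by whitespace, which may well not match your data):
my %hash;
foreach my $hour ( map { sprintf "%02d", $_ } 0 .. 23 ) {
    open my $fh, '<', "$hour.txt" or die "Can't open $hour.txt: $!";
    while ( my $line = <$fh> ) {
        chomp $line;
        my ( $key, $value ) = split ' ', $line, 2;  # adjust to your real format
        $hash{$key} = $value;
    }
    close $fh;
}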
But I want it to just keep one of each unique type.
I'm not sure what the word "type" means in this sentence, but if you
use a string representation of the type as the hash key, then you'll
end up with just one value for each key, because that's what hashes do.
If you assign each value you run into, the hash will remember the last
one assigned. If you want the first one, you can make the assignment
contingent on there not already being a value, like so:
$hash{$key} = $value unless exists $hash{$key};
That way you'd only get the first value for each key.
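For instance, with some made-up data (not yours), the plain assignment keeps the last value seen and the guarded assignment keeps the first:
my ( %last, %first );
for my $pair ( [ foo => 1 ], [ foo => 2 ] ) {
    my ( $key, $value ) = @$pair;
    $last{$key}  = $value;                             # %last ends up with foo => 2
    $first{$key} = $value unless exists $first{$key};  # %first keeps foo => 1
}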
My files contain "perl.exe , svchost.exe etc"
I'm not sure exactly what this means. Do you mean that you have
perl installed on your computer? If so, I think we sort of assumed
that would be the case if you're writing programs in Perl.
I can do this, however I wish to increment the keys() of each value when it comes across a duplicate.
I am not at all sure what you mean by this. Storing a value in a hash
using a key that hadn't already been used will cause keys() to return
one more key than before. Is your key coming out of the data
you're parsing, or do you need to make up a key for each line? One way
to do that would be to count the lines (using a counter variable that
you add one to each time you read a line) and use (a stringification of)
that number as the hash key for the data you just read. (You don't have
to manually stringify the number; perl will do that automatically when
you use it as a hash key.)
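Something like this sketch, assuming (and I'm only guessing here) that LOG is already open on the file and that you want the data keyed by line number:
my %hash;
my $count = 0;
while ( my $line = <LOG> ) {
    chomp $line;
    $count++;               # one larger for every line read
    $hash{$count} = $line;  # perl stringifies the number when it's used as a key
}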
Sanity? Oh, yeah, I've got all kinds of sanity. In fact, I've developed whole new kinds of sanity. You can just call me "Mister Sanity". Why, I've got so much sanity it's driving me crazy.
foreach my $hour ( '00' .. '23' ) {
# Do Stuff Here
}
$hour = "00";
while ($hour < 25)
{
$infile = "$hour.txt";
open (LOG, $infile)
while <LOG>
{
#PUT EACH NEW LINE INTO THE HASH
}
}
sorry by "perl.exe" i mean each line of my text files holds a process the computer is doing do "SNDSrvc.exe
SPBBCSvc.exe
symlcsvc.exe
spoolsv.exe
AluSchedulerSvc.exe
svchost.exe
btwdins.exe"
And I am trying to make my hash hold
%hash = ("btwdins.exe", 1, "spoolsv.exe", 1, "svchost.exe", 4,);
So it takes all the lines from the text files but increments the value for every duplicate.
So if svchost.exe is in four text files, or four times in one file, its key and value would be "svchost.exe", 4.
My txt files are already parsed so that each *.exe is on its own line. I hope this helps and is not too complicated. Any help is much much appreciated, my brain is frazzled.
Detecting duplicates is what Tiggers and hashes do best:
while (<LOG>) {
# Assume name of program is in $_
chomp;
$hash{$_}++;
}
The first time a name is seen, the hash key is created with an undefined value (which behaves as zero), and the ++ bumps it to 1. Subsequent occurrences of the same key increment the value.
By the way, a small thing, but be careful when you use a leading zero on a numeric literal - Perl interprets it as octal.
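A quick demonstration of the leading-zero gotcha (this applies to numeric literals; strings like "08" are still read as decimal when used as numbers):
print 010, "\n";       # prints 8 - a leading zero makes the literal octal
# print 08;            # won't even compile: 8 is not a valid octal digit
print "08" + 1, "\n";  # prints 9 - the string "08" is converted as decimal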
$hour = "00";
while ($hour < 25) {
$infile = "$hour.txt";
open LOG, '<', $infile;
while <LOG> {
#PUT EACH NEW LINE INTO THE HASH
}
$hour++;
}
That's one reason I'd suggested a foreach loop, because it takes care
of that automatically. It takes care of the initial assignment, too.
But the while loop will get the job done also, with this addition.
And I am trying to make my hash hold
%hash = ("btwdins.exe", 1, "spoolsv.exe", 1, "svchost.exe", 4,);
So it takes all the lines from the text files but increments the value for every duplicate.
Ah, I see. So the strings that come out of the hour files (one per line)
are themselves filenames, and you want to use those as the hash keys,
and make the value be the count of the number of times they occur?
In that case, you can just increment the value each time, as cdarke
suggests.
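Putting the pieces together, something along these lines should be close (a sketch, assuming the hour files really are named "00.txt" through "23.txt" and live in the current directory):
use strict;
use warnings;

my %count;
foreach my $hour ( '00' .. '23' ) {
    my $infile = "$hour.txt";
    open my $log, '<', $infile or die "Can't open $infile: $!";
    while ( my $process = <$log> ) {
        chomp $process;
        $count{$process}++;   # one key per process name, value = number of occurrences
    }
    close $log;
}

foreach my $process ( sort keys %count ) {
    print "$process : $count{$process}\n";
}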
Sanity? Oh, yeah, I've got all kinds of sanity. In fact, I've developed whole new kinds of sanity. You can just call me "Mister Sanity". Why, I've got so much sanity it's driving me crazy.
This code reads values from STDIN and either creates or increments the corresponding hash entry. When you enter "Q" it will terminate and output the values...
use strict;
my %hash;
while(<STDIN>){
chomp;
last if $_ =~ /^q$/i; # quit if 'q' is entered
if(exists $hash{$_}){ # exists, so increment
$hash{$_}++;
}
else{ # doesn't exist, so set to 1
$hash{$_} = 1;
}
}
my @array = sort keys %hash; # let's make the output sorted...
foreach(@array){
print "$_ : $hash{$_}\n";
}
Is that what you meant?
map{$a=1-$_/10;map{$d=$a;$e=$b=$_/20-2;map{($d,$e)=(2*$d*$e+$a,$e**2
-$d**2+$b);$c=$d**2+$e**2>4?$d=8:_}1..50;print$c}0..59;print$/}0..20
Tom Melly, pm@tomandlu.co.uk
$hash{$_}++;
But I think that the OP wants to increment the hash key instead of the value:
$_++ while exists $hash{$_};
$hash{$_}++;