"And doing that for 8 million people."
Is that 8 million records in the file, or 8 million people, each with multiple records?
If the former, (roughly) how many records per person? If the latter, what is the total number of records in the file?
Update: Given your test file format with 8 million lines, this one-liner does the job in around 35 seconds:
[19:07:51.13] C:\test>wc -l 1178116.dat
8000000 1178116.dat
[19:07:54.92] C:\test>head 1178116.dat
ihpfgx Z
fxbkfh Z
kqektt B
zxburh Z
zpzafy Z
nvamqp Z
umpeky Z
hyfldc B
qdapmk Z
ynlfhg Z
[19:08:07.28] C:\test>perl -anle"$F[1] eq 'Z' and ++$h{$F[0]} }{ print join ' ', $_, $h{ $_ } for keys %h" 1178116.dat >null
[19:08:42.87] C:\test>
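In case the }{ ("Eskimo kiss") trick obscures what is going on: it closes the implicit while(<>){ ... } loop that -n supplies, then opens a bare block that runs once after the last input line, much like an END block. Here is the same logic as a plain script -- a minimal sketch, functionally equivalent to the one-liner above:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my %h;
    while ( <> ) {
        my @F = split;                   # equivalent of -a autosplit on whitespace
        ++$h{ $F[0] } if $F[1] eq 'Z';   # tally lines whose second field is 'Z'
    }
    # runs once after the last line, like the }{ block in the one-liner
    print "$_ $h{$_}\n" for keys %h;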
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use every day'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
In the absence of evidence, opinion is indistinguishable from prejudice.