I am reading a lot of data in from a file, and this data will contain 'duplicate rows'. By this I mean each row has a key or identifier, and there may be multiple rows per key (although each row may well carry different information for that key). When I encounter a row key for the FIRST time only, I need to perform an operation.

The way I was planning to do this was to use a hash with the row key as the hash key: I would check whether I had encountered the row before by trying to retrieve a value from the hash with that key. However, the hash will have hundreds of thousands of keys, so I need my lookup mechanism to be quick, and I've heard hashes are quite slow (though I don't know much Perl and could be wrong). Can you suggest any fast alternatives?
Many thanks
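For reference, the hash-based approach described above can be sketched roughly as follows. This is a minimal sketch under assumed input conventions (tab-separated rows with the key in the first field; the field layout and the `%seen` name are illustrative, not from the original post). A postfix `$seen{$key}++` is false only the first time a key is seen, which gives the "first encounter only" behaviour in one test:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my %seen;    # maps row key => number of times encountered so far

while (my $line = <DATA>) {
    chomp $line;
    my ($key) = split /\t/, $line;    # assume key is the first field

    # $seen{$key}++ returns the OLD count: 0 (false) on first sight,
    # so we skip every row whose key we have already handled.
    next if $seen{$key}++;

    # First time this key appears: do the one-off operation here.
    print "first sighting of $key\n";
}

__DATA__
A	one
B	two
A	three
```

Running this prints `first sighting of A` and `first sighting of B` once each; the second `A` row is skipped. Perl hashes are backed by a hash table with average constant-time lookup, so hundreds of thousands of keys is well within normal usage, though benchmarking on real data is the only way to be sure for a given workload.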