First of all, thanks for all the answers.
What I eventually did was some version of divide and conquer.
When reading the initial data (the step where I had been filling the hash), I instead wrote it out to a new file in a format more suitable for sorting. That file was then sorted (shamefully, with the Linux sort util), after which I could process consecutive lines that started out the same (same entry + timestamp) and output them. This also gave me a sorted result. A rough sketch of the flow is below.
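In case it helps anyone, here is a minimal Python sketch of that flow. It is not my actual code: the tab-separated layout, the field order, the file names, and the summary printed per group are all illustrative assumptions.

```python
import subprocess
from itertools import groupby

def rewrite_for_sort(raw_path, staged_path):
    """Rewrite the raw input so each line starts with the grouping key.

    Assumption: raw lines look like timestamp<TAB>entry<TAB>rest. Putting
    entry + timestamp first means a plain lexical sort clusters the groups
    (this relies on timestamps being in a lexically sortable format).
    """
    with open(raw_path) as src, open(staged_path, "w") as dst:
        for line in src:
            timestamp, entry, rest = line.rstrip("\n").split("\t", 2)
            dst.write(f"{entry}\t{timestamp}\t{rest}\n")

def external_sort(staged_path, sorted_path):
    """Lean on the Linux sort util instead of holding everything in memory."""
    subprocess.run(["sort", "-o", sorted_path, staged_path], check=True)

def process_groups(sorted_path):
    """Stream the sorted file; consecutive lines sharing the same
    (entry, timestamp) prefix form one group, and output stays sorted."""
    with open(sorted_path) as f:
        records = (line.rstrip("\n").split("\t", 2) for line in f)
        for key, group in groupby(records, key=lambda r: (r[0], r[1])):
            rows = [rest for _, _, rest in group]
            # Placeholder processing: emit one summary line per group.
            print(key[0], key[1], len(rows), sep="\t")

if __name__ == "__main__":
    # Hypothetical file names, just to show the three passes end to end.
    rewrite_for_sort("raw.log", "staged.txt")
    external_sort("staged.txt", "sorted.txt")
    process_groups("sorted.txt")
```

The nice part is that once the external sort has clustered the keys, the grouping pass is a single stream over the file, so memory use is bounded by the largest group rather than by the whole data set.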
I liked the idea of separating the internal hash out into lists; that might actually decrease lookups, and it would not run out of memory, since those lists are relatively short (see the sketch after this paragraph).
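For reference, this is how I understood that suggestion. Again a Python sketch with made-up names, not the real structure:

```python
from collections import defaultdict

# One hash keyed by (entry, timestamp); the value is the short list of
# records sharing that key. Key and field names are made up for the sketch.
groups = defaultdict(list)

def add_record(entry, timestamp, payload):
    # Appending to a per-key list replaces deeper nested lookups.
    groups[(entry, timestamp)].append(payload)

def records_for(entry, timestamp):
    # A single hash lookup returns the whole (short) list for the key.
    return groups.get((entry, timestamp), [])
```

Since each per-key list stays short, memory per key stays small even when the number of keys gets large.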
I did not use a database because I was under the impression that I would need an SQL server (for example) to be running, and I don't have one. Am I wrong? This could be useful...