I am a research assistant and decided to learn Perl so that I can better collect data. One recent task that has perplexed me is how to check the validity of data I "ripped" from a webpage. I just figured out how to parse the data and have generated thousands of CSV files. I worry, though, that the website might have an imperfect mechanism for generating replies to queries.
Here's what I want to do. I want to check whether all of my files are different. The problem is... there are about five thousand of them.
How can I write a Perl script that would open one file, read the first line of data, and then compare it to the first line of every other file? And how can I do this without comparing every file against every other one by hand?
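One common way to avoid comparing every file against every other file is to compute a digest of each file's contents and group files by that digest: two files with the same digest are (almost certainly) identical, and each file only has to be read once. Below is a minimal sketch along those lines, assuming the CSV files all sit in a single directory named "data" (adjust the path and the file pattern to match your setup). It uses the core Digest::MD5 module and hashes the whole file; if you only care about the first line, read one line instead of slurping.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);

    my $dir = 'data';    # directory holding the CSV files (assumed name)

    my %seen;            # digest => first file seen with that content
    opendir my $dh, $dir or die "Can't open $dir: $!";
    for my $name (sort grep { /\.csv$/i } readdir $dh) {
        my $path = "$dir/$name";
        open my $fh, '<', $path or die "Can't read $path: $!";
        my $content = do { local $/; <$fh> };   # slurp the whole file
        close $fh;

        my $digest = md5_hex($content);
        if (exists $seen{$digest}) {
            print "$name looks identical to $seen{$digest}\n";
        }
        else {
            $seen{$digest} = $name;
        }
    }
    closedir $dh;

With five thousand files this runs in a single pass, so the cost grows with the number of files rather than with the number of pairs, which is the usual reason pairwise comparison gets painful at that scale.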
20040819 Edit by broquaint: Changed title from 'question about reading massive number of text files'