in reply to Re: File search using Grep
in thread File search using Grep

I'm not sure whether the OP's use of grep implies that there might be more than one record with that ID but, if not, it might be an idea to avoid reading the rest of the file once you've found the record. Something like

$ perl -ne '
> next unless /9344151299/;
> print;
> last;' /tmp/Config1

I hope this is of interest.

Cheers,

JohnGG

Re^3: File search using Grep
by Anonymous Monk on Jun 26, 2009 at 14:35 UTC
    Hi, thanks for all your suggestions. My problem is just to get the information for a particular account using its unique ID; some IDs will not be in the list at all. We expect this file to hold millions of records as the accounts grow. So I'm wondering whether to store these records in a hash each time the script is called, or just to grep for them? Thanks, Priya

      Is there any reason not to use a database?

      Using grep or perl -ne 'print if /<expr>/' is a linear search, an operation that scales with the size of the file. If the file doubles in size, you can expect the search to take twice as long.

      If you use a different storage format, such as a DBM file or the like, you can improve this performance significantly, but that has nothing to do with the way the file is read - it's all about the format of the file.

      If you still want to keep the flat file you have described, you can build an index in a separate file and use that for lookups. One tool that will allow you to index the data is Berkeley DB, for which there is the DB_File module.