Re^2: Removing digits until you see | in a string

by Animator (Hermit)
on Jan 08, 2007 at 12:10 UTC ( [id://593521] )


in reply to Re: Removing digits until you see | in a string
in thread Removing digits until you see | in a string

That is a bad idea.

If he is reading a lot of these strings from a file, then the file contains a lot of records.

What your code does is read the entire file into memory first, and only then start to process it.

Also: you can combine both maps just fine.
That is: map { chomp; split m{\|}, $_, 2 }
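
For example (a minimal sketch; the filename and the two-field record layout are assumptions for illustration):

    use strict;
    use warnings;

    # One pass over the filehandle: chomp each line, then split it
    # into key and value on the first '|'. Note that <$fh> in list
    # context still slurps the whole file before the map runs.
    open my $fh, '<', 'records.txt' or die "records.txt: $!";
    my %records = map { chomp; split m{\|}, $_, 2 } <$fh>;
    close $fh;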

Re^3: Removing digits until you see | in a string
by johngg (Canon) on Jan 08, 2007 at 13:53 UTC
    What's bad about reading the file into memory? With modern computer systems it is quite a common idiom to read the whole of a file into memory before processing it. Only if the data file were very large would this become a bad idea.

    Combining the maps is good; I should have thought of that myself.

    Cheers,

    JohnGG

      What is bad about it (IMHO):

      • file size is unknown
      • processing does not start until you are done reading
      • using at the very least three or four times the size of the file in memory
      • most people will not realize that you are reading it into memory first. If you really want to read it all at once then I would suggest reading it into an array first.
      • the processing will be slower than reading it with a while and immediately creating a hash element for each record (see the sketch after this list). Now you read it into a temporary list, then you loop over that list, and while looping over it you create a new list, and then you finally assign that list to a hash. Not that you will notice the speed/memory difference, but that doesn't mean it's not there.

      And with this technique you can't (easily) check for duplicate elements, but that wasn't asked for.
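
      For comparison, a minimal sketch of that while-loop version (the filename is hypothetical):

          use strict;
          use warnings;

          # Process one record at a time: no temporary list of lines
          # and no intermediate key/value list, just one hash entry
          # per iteration.
          open my $fh, '<', 'records.txt' or die "records.txt: $!";
          my %records;
          while ( my $line = <$fh> ) {
              chomp $line;
              my ( $key, $value ) = split m{\|}, $line, 2;
              $records{$key} = $value;
          }
          close $fh;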

        Taking your points in order:

        file size is unknown

        Not known to us, but kevyt is probably aware of it and can make a value judgement, reconciling the size of his file with the memory resources available.

        processing does not start until you are done reading

        I can't think why that would be a problem here. Could you expand on why this is bad?

        using at the very least three or four times the size of the file in memory

        Yes, but as in point one kevyt can decide whether he has the resources to accommodate this. We don't know what resources are available.

        most people will not realize that you are reading it into memory first. If you really want to read it all at once then I would suggest reading it into an array first

        This is a difficult topic. To what extent do you balance using the features of Perl, or any language, against making your code accessible to beginners in the language? It has to depend on the type of workplace, the experience level of the workforce and the amount of staff churn. An experienced, stable programming team can perhaps make greater use of language features. However, if you never expose people to new techniques, they will never learn them. This exposure can be via training/mentoring or by encouraging and rewarding self-study. Personally, I am in favour of educating programmers so they can make more informed choices from a larger tool bag in order to solve problems.

        the processing will be slower than reading it with a while and immediately creating a hash element for each record. Now you read it into a temporary list, then you loop over that list, and while looping over it you create a new list, and then you finally assign that list to a hash. Not that you will notice the speed/memory difference, but that doesn't mean it's not there

        Well, let's test it. Using a data file kludged up from /usr/dict/words, so that we have unique keys as the first of four pipe-delimited fields per line (file size just under 1MB), I ran some benchmarks. Here's the code
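
        (The original listing is not reproduced here; the following is a sketch of what it may have looked like, using the Benchmark module's cmpthese(). The data file name and the run count are assumptions; the three labels match the output below.)

            use strict;
            use warnings;
            use Benchmark qw{ cmpthese };

            my $file = 'words.dat';    # hypothetical pipe-delimited data file

            cmpthese( 10, {
                # Slurp all lines into an array, then build the hash.
                Array => sub {
                    open my $fh, '<', $file or die "$file: $!";
                    my @lines = <$fh>;
                    close $fh;
                    chomp @lines;
                    my %hash = map { split m{\|}, $_, 2 } @lines;
                },
                # Read and process one line at a time.
                ByLine => sub {
                    open my $fh, '<', $file or die "$file: $!";
                    my %hash;
                    while ( my $line = <$fh> ) {
                        chomp $line;
                        my ( $key, $value ) = split m{\|}, $line, 2;
                        $hash{$key} = $value;
                    }
                    close $fh;
                },
                # Single combined map over the slurped filehandle.
                Map => sub {
                    open my $fh, '<', $file or die "$file: $!";
                    my %hash = map { chomp; split m{\|}, $_, 2 } <$fh>;
                    close $fh;
                },
            } );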

        I ran the benchmark five times and the map solution came out faster than the line-by-line approach on four of them, although the difference is probably not statistically significant. Reading into an array was consistently the slowest by a larger margin. Here's the output

        $ spw593475
                 s/iter  Array ByLine    Map
        Array      1.30     --   -14%   -15%
        ByLine     1.12    16%     --    -1%
        Map        1.10    18%     1%     --
        $ spw593475
                 s/iter  Array    Map ByLine
        Array      1.43     --   -14%   -18%
        Map        1.22    17%     --    -5%
        ByLine     1.16    23%     5%     --
        $ spw593475
                 s/iter  Array ByLine    Map
        Array      1.31     --   -14%   -15%
        ByLine     1.12    17%     --    -0%
        Map        1.12    17%     0%     --
        $ spw593475
                 s/iter  Array ByLine    Map
        Array      1.31     --   -13%   -15%
        ByLine     1.13    16%     --    -1%
        Map        1.11    17%     1%     --
        $ spw593475
                 s/iter  Array ByLine    Map
        Array      1.30     --   -14%   -16%
        ByLine     1.12    16%     --    -3%
        Map        1.09    19%     3%     --
        $

        I also ran each method in separate scripts to look at memory usage. As you would expect, line-by-line was the most frugal with an image of about 7MB, array came next at about 9MB, and map was the most expensive at about 11MB, so your estimate of three to four times the data file size was spot on.

        The platform is SPARC/Solaris, an Ultra 30 with a 300MHz processor and 384MB of memory running Solaris 9, and the data file was on a local disk; the Perl version was 5.8.4, compiled with gcc 3.4.2.

        Regarding your final (added?) point, yes, I would have approached the problem a different way had duplicate detection been a requirement.
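
        (In the while-loop version a duplicate check is a one-line addition; a sketch, with a made-up warning message:)

            while ( my $line = <$fh> ) {
                chomp $line;
                my ( $key, $value ) = split m{\|}, $line, 2;
                # $. holds the current input line number
                warn "duplicate key '$key' at line $.\n"
                    if exists $records{$key};
                $records{$key} = $value;
            }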

        Cheers,

        JohnGG

        Update: Fixed typo
