in reply to Re: Possible to have regexes act on file directly (not in memory)
in thread Possible to have regexes act on file directly (not in memory)

You need to have a sliding window on top of it: when reading a new chunk, you keep a number of bytes from the previous chunk at least as large as the maximum possible length of a pattern match. So you would process:
Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd guber
keep, for example, the trailing "lita kasd guber" from this chunk and append the next chunk to get:
lita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet.
And now you have your match on the second chunk.
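
A minimal Perl sketch of that overlap, using the tail of the two chunks above; the pattern /kasd gubergren/ and the 15 kept bytes are only illustrations (the OP's actual pattern and its maximum match length are not known from the thread):

use strict;
use warnings;

# Illustrative pattern; the real pattern and its maximum match length
# come from the OP's problem, not from this thread.
my $re = qr/kasd gubergren/;

my $chunk1 = 'At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd guber';
my $chunk2 = 'gren, no sea takimata sanctus est Lorem ipsum dolor sit amet.';

my $keep   = 15;                        # at least the maximum match length
my $tail   = substr $chunk1, -$keep;    # "lita kasd guber"
my $window = $tail . $chunk2;           # kept tail + new chunk

print "found: $&\n" if $window =~ $re;  # the match straddles the chunk boundary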

Re^3: Possible to have regexes act on file directly (not in memory)
by karlgoethebier (Abbot) on May 02, 2014 at 18:00 UTC
    "You need to have a sliding window on top of it..."

    Yes, sure. Anyway, I wonder how to implement this (with one terabyte of Lorem ipsum) ;-(

    Best regards, Karl

    «The Crux of the Biscuit is the Apostrophe»

      The size of the data doesn't matter, it's the maximum size of the regex which counts. Suppose the longest possible match of the regex is n bytes; then your chunk size should be n (or greater). The algorithm is then (a sketch in Perl follows the list):

      1. Load a chunk into RAM
      2. Load another chunk into RAM
      3. Concatenate the two chunks and test the regex against the result
      4. Discard the older chunk
      5. Go to 2
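
      A minimal Perl sketch of these steps; the file name, chunk size and pattern are placeholders, and the chunk size is assumed to be at least the longest possible match:

      use strict;
      use warnings;

      my $re   = qr/kasd gubergren/;   # placeholder pattern with a bounded match length
      my $size = 1024 * 1024;          # chunk size; must be >= the longest possible match

      open my $fh, '<:raw', 'big.txt' or die "open: $!";   # placeholder file name

      my $old = '';                            # nothing loaded yet
      while (read($fh, my $new, $size)) {      # steps 1-2: load the next chunk
          my $window = $old . $new;            # step 3: concatenate the two chunks
          while ($window =~ /$re/g) {
              next if $+[0] <= length $old;    # skip matches already seen in the previous window
              print "match at window offset ", $-[0], ": $&\n";
          }
          $old = $new;                         # step 4: discard the older chunk, step 5: repeat
      }
      close $fh;

      The offsets printed are relative to the current two-chunk window; tracking absolute file positions would need an extra counter.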

      HTH.

        The size of the data doesn't matter, it's the maximum size of the regex which counts.

        Surely, in functional or algorithmic terms, the size of the maximum possible match is the important thing. But the size of the data does matter very much in terms of feasibility. Sometimes it just can't be done because it would take way too long.

        About two years ago, I had a program in a PL/SQL-like language to extract data inconsistencies that would take about 60 days to complete (and that was after heavy optimization; the original version would have taken 180 days); 3 or 4 days would have been acceptable in the context, not 60 days. The idea was to correct data inconsistencies, and you simply can't make DB corrections based on data whose extraction was done 60 days ago.

        You might be interested to know that I solved the problem thanks to Perl. I removed most of the very complicated business logic (at least the part of it that was taking ages to execute) from the PL/SQL-like program, had it extract raw files instead, and reprocessed these files with Perl. The program now runs in about 12 hours, the main difference being that Perl has very efficient hashes enabling very fast look-ups, whereas the PL/SQL-like language did not, forcing linear searches through relatively large arrays billions of times.

        BTW, this success contributed quite a bit to convincing my colleagues to use Perl. When I arrived in the department where I am, nobody was using Perl for anything more than a few one-liners here and there in shell scripts; all of my colleagues now use Perl almost daily. Even our client has been convinced: I only need to propose rewriting this or that program in Perl to improve performance (of course, I do that only if I have good reasons to think that we will get really improved results), and they allocate the budget almost without any further ado.

        OK, this was somewhat off-topic, but my point was: if the data is really big (and I am working daily with gigabytes or dozens of gigabytes of data), the size of the input can really make the difference between things that are feasible and things that are not.

      I have basically explained how to do it: keeping part of the previous chunk and appending the new chunk to it. The real difficulty is whether it is possible to determine the length of the longest possible match for the regex (which determines how much to keep from one chunk to the next). For some regexes it is very easy; for others it is very difficult or even impossible. The OP does not give enough information on that.

      Then there is the question of the size of the input. On my server, processing a 10 GB (line-based) file with a relatively simple regex might take 5 to 10 minutes. It would probably be a bit faster if not line-based, reading chunks of, say, 1 MB. With a TB of data it would take quite a bit of time, but that might still be relatively manageable. But that's assuming a simple regex with no need to backtrack. With a regex that requires a lot of backtracking, it might very easily become completely unmanageable.
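
      For the non-line-based variant, one common Perl idiom (shown here as a sketch with a placeholder file name, not a benchmark) is to set $/ to a reference to the record size, so that each readline returns a fixed-size chunk of about 1 MB:

      use strict;
      use warnings;

      # Setting $/ to a reference to an integer makes <$fh> return
      # fixed-size records instead of lines (see perlvar).
      local $/ = \(1024 * 1024);        # ~1 MB per read

      open my $fh, '<:raw', 'big.txt' or die "open: $!";   # placeholder file name
      while (my $chunk = <$fh>) {
          # apply the regex to $chunk here, keeping a tail between
          # iterations as described above
      }
      close $fh;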

        > For some regexes, it is very easy, for others, it is very difficult or even impossible.

        It's often much simpler than you might think, because you can decompose regexes into smaller and easier parts.

        perl -e'use re 'debug';qr/x{100}.*y{100}/'
        Compiling REx "x{100}.*y{100}"
        Final program:
           1: CURLY {100,100} (5)
           3:   EXACT <x> (0)
           5: STAR (7)
           6:   REG_ANY (0)
           7: CURLY {100,100} (11)
           9:   EXACT <y> (0)
          11: END (0)
        anchored "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"... at 0 floating "yyyyyyyyyyyyyyyyyyyyyyyyyyyyyy"... at 100..2147483647 (checking floating) minlen 200
        Freeing REx: "x{100}.*y{100}"

        In this case you start looking for 'x'x100 in a sliding window of size >200 from the beginning. Then you search backwards from the end in sliding windows for 'y'x100.

        This way even greedy matches can be handled (mostly), and the total match might even cover terabytes.
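
        A rough sketch of that decomposition; the data string here is just a stand-in for one big window of input, and it assumes /s semantics or newline-free data, so that .* imposes no extra constraint between the two runs:

        use strict;
        use warnings;

        # Look for the two fixed substrings of /x{100}.*y{100}/ with
        # index/rindex instead of running the full regex.
        my $data  = ('x' x 100) . ('-' x 1000) . ('y' x 100);

        my $x_run = 'x' x 100;
        my $y_run = 'y' x 100;

        my $x_at = index  $data, $x_run;   # leftmost anchored part, scanning forward
        my $y_at = rindex $data, $y_run;   # rightmost floating part, scanning backward

        if ($x_at >= 0 && $y_at >= $x_at + length($x_run)) {
            printf "match spans offsets %d .. %d\n", $x_at, $y_at + length($y_run) - 1;
        }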

        Cheers Rolf

        (addicted to the Perl Programming Language)