in reply to Re: data historization with DBIx::Class
in thread data historization with DBIx::Class

It is not exactly what the OP asked for. The journaling system documents the changes, whereas the OP asked for the data that was changed.

Of course it is possible to reconstruct the previous contents by replaying the journal up to just before the SQL statement that changed the data, but that would be quite cumbersome.

Of course the OP's request is also very naive, as (s)he assumes that all changes will be atomic on a single record. By simply "saving" the previous content into a history table, you are open to all kinds of race conditions, such as two connections editing different fields in the same record, which will make it difficult, if not impossible, to determine what the "previous content" was.
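For illustration, something along these lines (the 'Widget' source, the column names and the two schema connections are made up for this example, not taken from the OP's code):

    # Connection A reads the row and remembers its "before-image"
    my $row_a = $schema_a->resultset('Widget')->find($id);
    my %before_a = $row_a->get_columns;        # before-image as A sees it
    $row_a->update({ price => 42 });

    # Connection B, interleaved with A, does the same for another column
    my $row_b = $schema_b->resultset('Widget')->find($id);
    my %before_b = $row_b->get_columns;        # the very same before-image
    $row_b->update({ name => 'gadget' });

    # %before_a and %before_b are identical, yet the live row has passed
    # through two different states: whichever copy ends up in the history
    # table, one intermediate state is lost.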

CountZero

"A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little or too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James


Re^3: data historization with DBIx::Class
by morgon (Priest) on Dec 27, 2010 at 00:13 UTC
    Of course the OP's request is also very naive, as (s)he assumes that all changes will be atomic on a single record.
    I am not sure I understand your point...

    To clarify:

    What I want is a way to save an object (representing a row) into another table (adding a few extra attributes, e.g. user information) BEFORE I call any updating methods on it.

    This will be done in the same transaction that will eventually commit the updates to the original row.

    After that I have a "before-image" (the row at transaction start) and an "after-image" (the row after commit) in the database.
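    In code it would be roughly this (just a sketch: the 'Widget' and 'WidgetHistory' sources, the audit columns and the variables are made-up names, and the schema object is assumed to exist already):

        $schema->txn_do(sub {
            my $row = $schema->resultset('Widget')->find($id);

            # before-image: copy the current column values into the history
            # table, together with a couple of extra audit attributes
            $schema->resultset('WidgetHistory')->create({
                $row->get_columns,
                changed_by => $username,
                changed_at => \'CURRENT_TIMESTAMP',   # literal SQL, passed through as-is
            });

            # after-image: the row as it will look once the transaction commits
            $row->update({ price => $new_price });
        });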

    Which of my assumptions here is naive?

      There may be other connections working on the same row, updating or deleting it. Of course the transaction manager of the database will take care (or so we hope) that the database remains in a consistent state, but it may be difficult, or even impossible, to maintain a table of historical data in such a way that you can always reconstruct the status of the database at any given moment. It is surely for good reason that these internal transaction managers work on a journalling basis too!

      CountZero


        It depends on the isolation level.

        As long as it is higher than "read uncommitted" (which is not really an isolation level, and if that is all a data engine can do, it should not be called a database), then there should not be a problem as far as I can see.
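        For what it is worth, the isolation level can be set per connection, e.g. with DBIx::Class via on_connect_do (the DSN and schema class are made up, and the SET statement below is PostgreSQL syntax; other databases spell it differently):

            my $schema = My::Schema->connect(
                'dbi:Pg:dbname=myapp', 'someuser', 'somepass',
                {
                    # run once for every new connection; database-specific SQL
                    on_connect_do => [
                        'SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL REPEATABLE READ',
                    ],
                },
            );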