in reply to Re^3: Tracking and deploying changes in (MySql/Maria) DB schema ...
in thread Tracking and deploying changes in (MySql/Maria) DB schema ...

My crazy idea: have a cron job run, hourly (or so),
mysqldump ... git commit ...
If the cron job is scheduled often enough, it would capture the modifications with enough granularity.
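A minimal sketch of what such a snapshot script could look like (the paths, the database name `mydb`, and the repo location are assumptions; actually running it needs a reachable MySQL server and a git repo already initialized at $REPO):

```shell
# Install a hypothetical snapshot script (adjust paths for your setup).
cat > /tmp/db-snapshot.sh <<'EOF'
#!/bin/sh
REPO=/var/backups/db-snapshots   # git repo holding the dumps (assumed to exist)
DB=mydb                          # database to track

cd "$REPO" || exit 1

# --skip-dump-date keeps the output stable, so git only records real changes;
# add --no-data to track the schema alone instead of the full contents.
mysqldump --skip-dump-date "$DB" > dump.sql

git add dump.sql
# Commit only when the dump actually differs from the last snapshot.
git diff --cached --quiet || git commit -m "DB snapshot $(date -u +%Y-%m-%dT%H:%M:%SZ)"
EOF
chmod +x /tmp/db-snapshot.sh
```

An hourly crontab entry pointing at it would then be something like `0 * * * * /tmp/db-snapshot.sh`.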

Re^5: Tracking and deploying changes in (MySql/Maria) DB schema ...
by karlgoethebier (Abbot) on May 23, 2019 at 11:30 UTC

    This is not crazy. It works. We did it like this with a huge MySQL DB for about 10 years. Best regards, Karl

    «The Crux of the Biscuit is the Apostrophe»

    perl -MCrypt::CBC -E 'say Crypt::CBC->new(-key=>'kgb',-cipher=>"Blowfish")->decrypt_hex($ENV{KARL});'

      What about impact on resource usage? (That was my main reason for preliminarily calling it crazy)

        Terrific, to be honest. A restore of the full dump took about a day, if that is what you were talking about. Totally out of the question. But that case only happened 2 or 3 times. In most cases it was enough to roll back/change 2 or 3 tables. Due to some strange design this had no impact on data consistency. At least the customer believed this. BTW, unfortunately MySQL dumps are plain text files. This design is totally fubar IMHO. Regards, Karl
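The plain-text format is also what makes that kind of per-table rollback straightforward: each table's section in a mysqldump file starts with a "-- Table structure for table" comment, so one table can be sliced out with awk. A minimal sketch (the table name `orders` and the sample dump are assumptions; the header format matches mysqldump's default output):

```shell
# Create a tiny fake dump to demonstrate (a real one would come from mysqldump).
cat > full_dump.sql <<'EOF'
-- Table structure for table `orders`
CREATE TABLE `orders` (id INT);
INSERT INTO `orders` VALUES (1);
-- Table structure for table `users`
CREATE TABLE `users` (id INT);
EOF

# Print lines while inside the `orders` section: flip the flag on at its
# header comment, off at the next table's header.
awk '/^-- Table structure for table `/{p = /`orders`/} p' full_dump.sql > orders_only.sql
```

The resulting orders_only.sql holds only the `orders` DDL and data and can be fed back to the mysql client for a selective restore.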
