Depending on the database in question, you may find that table sizes aren't actually the same for the same data on different machines. In particular, PostgreSQL (which I'm most familiar with) may have different amounts of unused space on each machine, depending on how recently VACUUM has been run.

Assuming you're checking the data (rather than the schema), I'd be tempted to take a data-only dump of each table (using the same backup mechanism on all 4 machines), calculate an MD5 hash for each dump, then compare those. If you're dealing with large amounts of data this may not be particularly feasible, and it almost certainly won't work if the machines aren't all the same architecture.

I've no idea if this is the best way of doing it, but it's what sprang to mind - am very curious to see the other replies to your question to be honest ;-)
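For the PostgreSQL case, here's a rough sketch of what I mean (assuming pg_dump is on the PATH and connection details come from the environment; the database and table names are just placeholders):

```python
import hashlib
import subprocess

# Placeholder database and table names -- substitute your own.
DBNAME = "mydb"
TABLES = ["customers", "orders", "invoices"]

def table_checksum(dbname: str, table: str) -> str:
    """Dump one table's data with pg_dump and return the MD5 of the dump."""
    dump = subprocess.run(
        ["pg_dump", "--data-only", "--table", table, dbname],
        check=True,
        capture_output=True,
    ).stdout
    return hashlib.md5(dump).hexdigest()

if __name__ == "__main__":
    for table in TABLES:
        print(f"{table}\t{table_checksum(DBNAME, table)}")
```

Run the same script on each machine and diff the outputs; any table whose hash differs is a table whose data differs (provided the dump mechanism is identical everywhere, as above).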
Update: I can't believe I missed the glaringly obvious 'MS SQL' in the title. More coffee required for me. I still think the MD5-of-a-dump approach is worth considering, mind you.