
Answer by Michael Green for Pre-production database replication solution

My first thought was to restore the backup, but at 1TB that becomes impractical. Log shipping would not allow you to write to the pre-prod instance either. I think it should be possible to engineer something around table partitioning - assign each partition to its own filegroup, back up the changed filegroups after the nightly batch and restore them to pre-prod in piecemeal fashion. I have not tried this; I'm just hypothesising.
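A rough sketch of that hypothesis, assuming the latest partition sits on a filegroup called FG_Current (all database, filegroup and path names here are made up, and piecemeal restore has its own prerequisites around recovery models that you'd need to check):

```sql
-- On production: back up only the filegroup holding the current partition.
BACKUP DATABASE ProdDB
    FILEGROUP = 'FG_Current'
    TO DISK = N'\\share\ProdDB_FG_Current.bak'
    WITH INIT;

-- On pre-prod: piecemeal restore - PRIMARY first, then the changed filegroup.
RESTORE DATABASE PreProdDB
    FILEGROUP = 'PRIMARY'
    FROM DISK = N'\\share\ProdDB_Full.bak'
    WITH PARTIAL, NORECOVERY;

RESTORE DATABASE PreProdDB
    FILEGROUP = 'FG_Current'
    FROM DISK = N'\\share\ProdDB_FG_Current.bak'
    WITH RECOVERY;
```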

Replication will add more load onto the production server to read changed values and publish them. Having the pre-prod copy writable complicates things further.

Instead, for the one-off load, I'd suggest you restore a backup of prod. It may take a while but it will be complete and simple to script with minimal load on the prod box.
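For the one-off load, something along these lines would do it (database names and file paths are placeholders - check the logical file names with RESTORE FILELISTONLY first):

```sql
-- Restore the latest prod full backup over pre-prod.
RESTORE DATABASE PreProdDB
    FROM DISK = N'\\share\ProdDB_Full.bak'
    WITH REPLACE,
         MOVE 'ProdDB_Data' TO N'E:\Data\PreProdDB.mdf',
         MOVE 'ProdDB_Log'  TO N'F:\Log\PreProdDB.ldf',
         STATS = 10;  -- progress messages every 10%
```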

For daily updates I'd go with copying the staging table(s) to pre-prod and re-running the load job there. There will be no additional strain on the production DB. You can roll back pre-prod and re-run a batch as often as you like, for performance tuning or debugging. If the staging DB starts each batch empty it would be simple to script a full backup - copy - restore - rerun sequence. This will be a good way to test future changes to the loader.
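The backup - copy - restore - rerun sequence might look something like this (the staging database name, share path and load procedure are hypothetical):

```sql
-- On the production instance: snapshot the staging DB after the nightly batch lands.
BACKUP DATABASE Staging
    TO DISK = N'\\share\Staging.bak'
    WITH INIT, COMPRESSION;

-- On the pre-prod instance: replace the local copy and replay the load.
RESTORE DATABASE Staging
    FROM DISK = N'\\share\Staging.bak'
    WITH REPLACE;

EXEC dbo.usp_RunNightlyLoad;  -- hypothetical: the same load job production runs
```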

We've had good results with change tracking, using it to copy hundreds of thousands of rows per day. If your staging DB isn't rebuilt each day, this may be a good way forward. It will put a modest run-time load on the instance hosting the staging DB.
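For reference, enabling change tracking and reading the delta looks roughly like this (table and column names are illustrative; in practice you'd persist the last sync version between runs):

```sql
-- Enable change tracking on the staging DB and one table.
ALTER DATABASE Staging
    SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 3 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.StagingFact
    ENABLE CHANGE_TRACKING;

-- Pull rows changed since the last sync version.
DECLARE @last_sync bigint = 0;  -- stored from the previous run in practice
SELECT ct.SYS_CHANGE_OPERATION, s.*
FROM CHANGETABLE(CHANGES dbo.StagingFact, @last_sync) AS ct
LEFT JOIN dbo.StagingFact AS s
    ON s.Id = ct.Id;  -- assumes a PK column Id; deletes have no matching row
```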

