Friday, March 30, 2012

Question about sending a large db backup/restore then replicating

OK.
Let's say I have a 100 GB db. I really just want to replicate one table
within that db, which is 65 GB at around 100,000,000 rows (but who cares,
we'll use the entire db for this example). This is a production db that is
constantly being updated. How would I be able to do something like a
no-sync subscription on a db that is constantly being updated?
Wouldn't I need to shut off anything writing to that db from the point I
start the backup to the point the subscriber finishes the restore, so that
I can then set up the publication with "nosync"? I guess I could set up
no sync and then manually DTS the missing day of information over some
other way, and then we could manually sync items from there?
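(For illustration, a minimal sketch of what that no-sync subscription call
might look like; all publication, server, and database names here are
hypothetical placeholders, not anything from the thread:)

    -- Run at the Publisher, in the publication database.
    EXEC sp_addsubscription
        @publication       = N'BigTablePub',   -- assumed publication name
        @subscriber        = N'SUBSCRIBER1',   -- assumed subscriber server
        @destination_db    = N'BigDb',         -- assumed subscriber database
        @subscription_type = N'push',
        @sync_type         = N'none';  -- "nosync": no snapshot is generated or applied;
                                       -- the subscriber is assumed to already have the data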
The other option I was thinking about was starting the 100 GB snapshot and
disabling the Distribution Agent so it never starts to sync. Copy those
.bcp, .sch, and .idx files to tape, overnight the tape to the subscriber,
and in the meantime let the log file for the db grow like crazy, since
there are articles marked for replication that cannot be delivered while
the agent is disabled. Then, once they do the restore, enable the
Distribution Agent and tell it to skip looking for the snapshot that was
once there and just deliver the transactions that are waiting? Does that
make sense?
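(One way to hold the agent back is simply to disable its SQL Agent job; a
sketch with a hypothetical job name follows. Note that as long as the Log
Reader Agent keeps running, the undelivered commands pile up in the
distribution database rather than in the publisher's log:)

    -- Disable the Distribution Agent job so nothing is delivered while the
    -- snapshot files travel by tape. The job name is a hypothetical
    -- placeholder; real names follow the pattern
    -- Publisher-PubDB-Publication-Subscriber-N.
    EXEC msdb.dbo.sp_update_job
        @job_name = N'PUB1-BigDb-BigTablePub-SUBSCRIBER1-1',
        @enabled  = 0;  -- set back to 1 once the subscriber restore completes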
The short of it: I am trying to figure out how to replicate a 100 GB db
without shutting off access to it. This db is updated 24/7.
TIA, and sorry to be so long-winded.
-comb
Ideally you would kick all of your users off and then backup and restore,
bcp the data out, or use DTS.
What I do is create a no-sync subscription at, let's say, midnight. Then I
get my data over, usually using bcp (if you can use snapshot isolation
within SQL 2005), and then deploy it on my subscriber. Then I start up my
Distribution Agent with "allow data consistency errors" and try to fix the
consistency issues.
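(A sketch of the versioning piece, assuming a hypothetical database name.
On SQL 2005, turning on row versioning lets the bcp export read a
consistent image while the OLTP writers keep going. The "allow data
consistency errors" part corresponds to the Distribution Agent's built-in
"Continue on data consistency errors" profile, which runs the agent with
-SkipErrors 2601:2627:20598 so duplicate-key and row-not-found errors are
logged and skipped instead of stopping the agent:)

    -- SQL 2005 only; BigDb is a hypothetical database name.
    ALTER DATABASE BigDb SET ALLOW_SNAPSHOT_ISOLATION ON;
    -- Optionally also version the default read-committed reads (bcp reads at
    -- read committed), so the export doesn't block behind writers. This
    -- option needs a moment with no other active sessions in the database:
    ALTER DATABASE BigDb SET READ_COMMITTED_SNAPSHOT ON;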
It's very difficult, but it seems to be the best option.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"combfilter" <asdf@.adsf.com> wrote in message
news:MPG.1e3843bad612d2a99896e4@.news.newsreader.co m...
> OK.
> lets say i have a 100gb db. I really just want to replicate 1 table
> within that db that is 65gb at like 100,000,000 rows(but who cares, we
> will use the entire db for this example). This is a production db that
> is constantly being updated. How would I be able to do something like a
> no sync or something of that nature on a db that is constantly being
> updated?
> Wouldn't I need to shut off access of anything entering that db from the
> point I started the backup to the point the subscriber does the restore,
> and I can then setup the publication with "nosync"?? I guess I could
> setup no sync and then manually dts the missing 1 day of information
> over some other way and then we can manually sync items from there?
> The other option i was thinking about was starting the 100gb snapshot,
> disabling the distribution agent from starting to sync. Copy those
> .bcp,.sch,.idx files to tape. Over night the tape to the subscriber, in
> the mean time let the log file for the db grow like crazy since their
> are articles marked for replication that cannot be delivered because the
> agent is disabled. Then once they do a restore enable the distribution
> agent and tell it to skip looking for the snapshot that was once there
> and just deliver the transactions that are waiting in the log? Does
> that make sense.
> The short of this, Is that I am trying to figure out how to replicate a
> 100gb db without shutting off access to it. This db is updated 24/7.
> tia, and sorry to be so long winded
> -comb
