We have a large table with a single compound (two-column) clustered index. We
want to recover space from the table (roughly 50% of it is empty space). DBCC
DBREINDEX could do this, because rebuilding the clustered index effectively
copies the table. But there is not enough free space on the disk partition for
us to do this on this table.
So we need to dump the table to another disk partition and then reload the
data. How could this be done efficiently? I'm particularly concerned about the
reload phase, since the clustered index needs to be rebuilt and I'm guessing
this will have to be done row by row. However, I'd like advice on the dump
phase as well, since I suspect the export format and row order will also
greatly affect the subsequent reload.
I'm open to all suggestions: how you'd do it; how you'd *not* do it;
documentation or web references you'd suggest I look at.
(SQL Server 2000, by the way.)
Thanks!
You might consider using DBCC INDEXDEFRAG if you don't have sufficient
space. If you export to a file, you can import using BCP or BULK INSERT
with the clustered index in place and specify that the data are already
ordered by using the ORDER hint. See BCP and BULK INSERT in the Books Online
for details.
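As a rough sketch (server, database, table, column, and file names here are placeholders; assume a table dbo.BigTable clustered on Col1, Col2), the export and reload could look something like this:

    REM Export in clustered key order with bcp: native format, trusted connection
    bcp "SELECT * FROM MyDb.dbo.BigTable ORDER BY Col1, Col2" queryout "E:\export\BigTable.dat" -n -T -S MyServer

    -- Reload with the clustered index in place; ORDER tells SQL Server the file is pre-sorted
    BULK INSERT MyDb.dbo.BigTable
    FROM 'E:\export\BigTable.dat'
    WITH (DATAFILETYPE = 'native', ORDER (Col1, Col2), TABLOCK)

Because the ORDER hint matches the clustered index key, the engine can avoid an extra sort step during the load.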
In addition to data file space requirements, be mindful of the log space
needed. You might consider importing with a batch size specification and
using the SIMPLE or BULK_LOGGED recovery model in order to reduce log
space requirements.
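For example (again with placeholder names, and the batch size is just a starting point to tune):

    ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED
    -- run the BULK INSERT shown above with BATCHSIZE added to the WITH clause, e.g.
    --   WITH (DATAFILETYPE = 'native', ORDER (Col1, Col2), TABLOCK, BATCHSIZE = 100000)
    ALTER DATABASE MyDb SET RECOVERY FULL

Each batch commits as its own transaction, which keeps the log footprint of any single transaction smaller.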
Hope this helps.
Dan Guzman
SQL Server MVP
"Scott" <Scott@.discussions.microsoft.com> wrote in message
news:3544030B-92C7-40F7-9F44-03125557D27D@.microsoft.com...
> We have a large table with one, compound (two columns), clustered index.
> We
> want to recover space from the table (it is 50% empty space). DBREINDEX
> could
> do this, because DBREINDEX would copy the table (since the index is
> clustered). But there is not enough free space on the disk partition for
> us
> to do this on this table.
> So we need to dump the table to another disk partition, and then reload
> the
> data. How could this be done efficiently? I'm particularly concerned
> about
> the reload phase, since the index needs to be rebuilt and I'm guessing
> this
> will have to be done row by row. However I'd like advice on the dump
> phase
> as well, since I suspect the type of dump and dump row order will also
> greatly affect the subsequent reload.
> I'm open to all suggestions: how you'd do it; how you'd *not* do it;
> documentation or web references you'd suggest I look at.
> (SQL Server 2000, by the way.)
> Thanks!
>
Very good, thanks.
I've used INDEXDEFRAG a lot. Except for an early test case (a small table),
we have batted zero on it ever recovering space. I know it can do so in
theory, but the theory has not materialized in practice for us. It does a
good job of improving performance (index optimization) for us, though.
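(For reference, what we run is essentially the following, with our real database, table, and index names in place of these placeholders:

    DBCC INDEXDEFRAG (MyDb, BigTable, IX_BigTable_Clustered)

It runs online and compacts the leaf pages, but it doesn't rebuild the index, which I assume is why it hasn't given space back for us.)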
I found this on the web: "Bulk Copy Optimization Tips" -
http://www.mssqlcity.com/Tips/bulk_copy_optimization.htm ; this very much
matches your suggestions. An additional tip it has (not sure this is in
Books Online) is that BULK INSERT will be more efficient on the import than
BCP would be. Also, regarding the transaction log issue you bring up: *if*
the TABLOCK hint is specified and the table is empty at the start of the
import (plus: select into/bulkcopy option enabled and no replication),
*then* the load is only minimally logged (page allocations are logged, but
not the individual rows). So that combination not only addresses the
potential transaction log space problem you raise, it sounds like it should
speed things up further because of the reduced logging.
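If I've pieced that together correctly, the combination would look roughly like this (placeholder names again):

    EXEC sp_dboption 'MyDb', 'select into/bulkcopy', 'true'
    -- table is empty, not replicated; load with TABLOCK and the ORDER hint as above
    BULK INSERT MyDb.dbo.BigTable
    FROM 'E:\export\BigTable.dat'
    WITH (DATAFILETYPE = 'native', ORDER (Col1, Col2), TABLOCK)
    EXEC sp_dboption 'MyDb', 'select into/bulkcopy', 'false'

(My understanding is that on SQL Server 2000 the select into/bulkcopy option is the older equivalent of switching the database to the BULK_LOGGED recovery model.)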
Thank you very much for your advice.
--Scott
"Dan Guzman" wrote:
> You might consider using DBCC INDEXDEFRAG if you don't have sufficient
> space. If you export to a file, you can import using BCP or BULK INSERT
> with the clustered index in place and specify that data are ordered using
> the ORDER hint. See BCP and BULK INSERT in the Books Online for details.
> In addition to data file space requirements, be mindful of the log space
> needed. You might consider importing with a batch size specification and
> using the SIMPLE or BULK_LOGGED recovery model in order to to reduce log
> space requirements.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Scott" <Scott@.discussions.microsoft.com> wrote in message
> news:3544030B-92C7-40F7-9F44-03125557D27D@.microsoft.com...
>
Hi, Scott.
Also be aware that if you perform a minimally logged bulk operation without
indexes, you might still paint yourself into a corner if you don't have
enough space to create the clustered index.
It seems to me that your main problem is insufficient disk space. Unless
you can add more space, you might need to load with the clustered index in
place so that you don't need to create the clustered index after the load.
Personally, I try to reserve enough free space to allow rebuilding the
clustered index on the largest table. Although you can get by with less,
doing so increases manageability costs.
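To see where you stand before and after the reload, something along these lines (the table name is a placeholder) reports reserved versus used space and how full the pages actually are:

    EXEC sp_spaceused 'BigTable', @updateusage = 'true'
    DBCC SHOWCONTIG ('BigTable') WITH ALL_INDEXES

The Avg. Page Density value from SHOWCONTIG is a handy way to confirm the 50% empty space you mentioned and to verify that the reload actually reclaimed it.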
Hope this helps.
Dan Guzman
SQL Server MVP
"Scott" <Scott@.discussions.microsoft.com> wrote in message
news:FA01EF9E-15B1-4511-A2B2-BAFE7F86E1C4@.microsoft.com...[vbcol=seagreen]
> Very good, thanks.
> I've used INDEXDEFRAG a lot. Except for an early test case (small table),
> we have totally batted zero on it ever recovering space. I know it can do
> so
> in theory; the the theory has not materialized in practice for us. It
> does a
> good job of improving performance (index optimizing) for us, though.
> I found this on the web: "Bulk Copy Optimization Tips" -
> http://www.mssqlcity.com/Tips/bulk_copy_optimization.htm ; this very much
> matches your suggestions. An additional tip it has (not sure this is in
> Books Online) is that BULK INSERT will be more efficient on the import
> than
> BCP would be. Also regarding the transaction log issue you bring up:
> *if*
> the TABLOCK hint is specified and the table starts out empty at the
> beginning
> of the import (plus: select into/bulkcopy option enabled and no
> replication),
> *then* no transaction logging is performed. So that combination not only
> would solve the potential transaction log space problem you raise, it
> sounds
> like it even speeds things further due to no transaction logging at all.
> Thank you very much for your advice.
> --Scott
>
> "Dan Guzman" wrote: