guys,
I have a table without any index, and an indexed view that just selects
all the data from that table. My question is: when will the indexed view
rebuild its index? When I add new records to the table, or when I
query the view? Thanks a lot!
|||
AFAIK, when you add the records. You added a clustered index on a view,
basically turning it into a table (simplified!).
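A minimal sketch of what that looks like in T-SQL (the table, view, and column names here are made up for illustration):

```sql
-- Hypothetical names; the view must be created WITH SCHEMABINDING
-- before it can be indexed.
CREATE VIEW dbo.vAllOrders
WITH SCHEMABINDING
AS
SELECT OrderID, CustomerID, OrderDate
FROM dbo.Orders;
GO

-- The first index on a view must be a unique clustered index; from
-- this point on the view is materialized and maintained like a table.
CREATE UNIQUE CLUSTERED INDEX IX_vAllOrders
ON dbo.vAllOrders (OrderID);
```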
MC
"Lee" <lee.jenkins.ca@.gmail.com> wrote in message
news:1178335665.973134.115560@.h2g2000hsg.googlegroups.com...
> - Show quoted text -
|||
To add to MC's response, the view index is maintained as the underlying
table content changes, just like any other type of index. The view index
can be rebuilt just like a table index (e.g. DBCC DBREINDEX).
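For example (hypothetical view and index names; on SQL Server 2005 and later, ALTER INDEX ... REBUILD is the preferred form):

```sql
-- Rebuild the clustered index on an indexed view, exactly as for a table.
DBCC DBREINDEX ('dbo.vAllOrders', 'IX_vAllOrders');

-- SQL Server 2005+ equivalent:
ALTER INDEX IX_vAllOrders ON dbo.vAllOrders REBUILD;
```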
Hope this helps.
Dan Guzman
SQL Server MVP
"Lee" <lee.jenkins.ca@.gmail.com> wrote in message
news:1178335665.973134.115560@.h2g2000hsg.googlegroups.com...
> - Show quoted text -
|||
The view is never rebuilt, but its index is immediately updated as part
of any insert/update/delete to the base table.
But why would you want to create a table without indexes and then add an
indexed view? That will just waste space.
Gert-Jan
Lee wrote:
> - Show quoted text -
|||
Thanks, you guys. If the index will be updated when I insert new
records, what is the best way to handle my situation?
I have a table I need to archive every day: basically, I move 1M rows from
one table to another table in a different database instance. The target
table has 1 clustered index on the PK field and 3 nonclustered indexes on
other int fields. If I leave the indexes in place, it takes about 4
minutes to insert the rows; if I remove those indexes, it takes
only 1 minute and 30 seconds. But I do have some queries that
join the target table and query data from it, and queries without
indexes are really slow. What is the best way to handle this?
Is there any solution other than partitioning the table? Would creating a
Windows console application that uses bcp help?
On May 5, 2:28 pm, Gert-Jan Strik <s...@.toomuchspamalready.nl> wrote:
> - Show quoted text -
|||
A bulk insert technique using DTS, SSIS, or a custom app is the fastest way
to move data from one SQL Server instance to another. What method are you
currently using? 250K rows/min. seems about right for a bulk insert with
several indexes on the target table.
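One common way to get close to the no-index load speed while keeping the indexes available for queries is to disable the nonclustered indexes around the load and rebuild them afterwards. A sketch, with hypothetical table, column, and index names (disabling indexes requires SQL Server 2005 or later, and the cross-database SELECT assumes the source is reachable, e.g. via a linked server):

```sql
-- Disabled nonclustered indexes are not maintained during the insert,
-- then a single rebuild pass restores them afterwards.
ALTER INDEX IX_Target_Col1 ON dbo.TargetTable DISABLE;
ALTER INDEX IX_Target_Col2 ON dbo.TargetTable DISABLE;
ALTER INDEX IX_Target_Col3 ON dbo.TargetTable DISABLE;

INSERT INTO dbo.TargetTable (pk_col, col1, col2, col3)
SELECT pk_col, col1, col2, col3
FROM SourceDB.dbo.SourceTable;

-- Rebuilds all disabled indexes (the clustered index stayed enabled).
ALTER INDEX ALL ON dbo.TargetTable REBUILD;
```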
Hope this helps.
Dan Guzman
SQL Server MVP
"Lee" <lee.jenkins.ca@.gmail.com> wrote in message
news:1178566651.374193.27950@.y5g2000hsa.googlegroups.com...
> - Show quoted text -
|||
On May 7, 8:45 pm, "Dan Guzman" <guzma...@.nospam-online.sbcglobal.net>
wrote:
> - Show quoted text -
Dan,
Thanks for the reply. Right now I just use a regular INSERT INTO
table (...) SELECT (...) FROM table query to do that. Do you mean
250K/minute is similar to using bcp? Because my application is a web
application, I cannot use DTS or SSIS to do the bulk insert; a
custom app is the only choice. My concern is that I would have to first
bcp all the data out to a flat file and then bcp the file into the other
database. Will those steps add overhead to the whole process and make it
even slower than the way I am doing it right now? Thanks a lot.
|||
I was under the impression that the source/target tables were on different
servers. When the tables are on the same server with indexes on the target
table, then INSERT...SELECT performance should be equal to or better than
DTS/SSIS with this volume of data. Both methods are fully logged because of
the target table indexes, so you should get roughly the same throughput.
Just to be clear, you can copy directly from table to table without an
intermediate file if you use DTS/SSIS or custom bulk insert app. You only
need a file in order to use the command-line BCP utility or Transact-SQL
BULK INSERT statement.
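The file-based path described above would look roughly like this (server, database, and file names are hypothetical):

```sql
-- Export with command-line bcp (native format is fastest), then load:
--   bcp SourceDB.dbo.SourceTable out C:\archive\rows.dat -n -S SrcServer -T
--   bcp TargetDB.dbo.TargetTable in  C:\archive\rows.dat -n -S TgtServer -T
-- or load the file from Transact-SQL instead of the second bcp call:
BULK INSERT dbo.TargetTable
FROM 'C:\archive\rows.dat'
WITH (DATAFILETYPE = 'native', TABLOCK);
```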
It seems a bit odd to me that you need to move 1M rows using a web
application. I think an asynchronous batch process would be more
appropriate here.
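If the move stays as plain Transact-SQL, an asynchronous job could also copy the rows in batches so that no single transaction touches 1M rows at once. A sketch with hypothetical names, assuming pk_col is an increasing key:

```sql
-- Copy in chunks of 50,000 rows, resuming from the highest key
-- already present in the target table.
DECLARE @last int, @maxpk int;
SELECT @last = ISNULL(MAX(pk_col), 0) FROM dbo.TargetTable;
SELECT @maxpk = MAX(pk_col) FROM dbo.SourceTable;

WHILE @last < @maxpk
BEGIN
    INSERT INTO dbo.TargetTable (pk_col, col1, col2, col3)
    SELECT TOP 50000 pk_col, col1, col2, col3
    FROM dbo.SourceTable
    WHERE pk_col > @last
    ORDER BY pk_col;

    SELECT @last = MAX(pk_col) FROM dbo.TargetTable;
END;
```

Each batch commits separately, which keeps the transaction log and lock footprint small at the cost of a slightly longer total run time.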
Hope this helps.
Dan Guzman
SQL Server MVP
"Lee" <lee.jenkins.ca@.gmail.com> wrote in message
news:1178670995.343694.109970@.h2g2000hsg.googlegroups.com...
> - Show quoted text -