SQL Server 2008 :: Index Fragmented Quickly
Apr 8, 2015
I just did index defragmentation for some databases, including MSDB. I notice there are 3 indexes in the MSDB database that fragment quickly (I did a rebuild last night at 10 PM -> fragmentation level becomes zero, but today at 9 AM it is back to 80%). The indexes are backupsetuuid, backupmediafamilyuuid and backupmediasetuuid. I am thinking of setting the fill factor for those indexes to 80.
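A quick sketch of how this could be checked and applied (index and table names are the ones mentioned above; verify them in msdb before running, since they are assumptions on my part):
USE msdb;
GO
-- Current fragmentation and size of the backup-history indexes
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id;
GO
-- Rebuild with a lower fill factor to leave room for the randomly ordered GUID inserts
ALTER INDEX backupsetuuid ON dbo.backupset REBUILD WITH (FILLFACTOR = 80);
GO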
View 6 Replies
May 1, 2015
This application runs on a SQL Server 2008 R2 database. This application receives messages from an integration module. It has a core table: Table-A. Each message is inserted as 1 row into Table-A. Then, when it is processed, that row in Table-A is updated.
There are two environments which are both connected to the same integration. So in both environments, Table-A has exactly the same number of records inserted and updated. In both environments Table-A has around 80 million rows, with an extra 150,000 rows being inserted and then updated every day. Table-A has 8 indexes. For some reason unknown to me, the 8 indexes fragment really quickly in one environment but not in the other.
e.g. In Environment-1 the index fragmentation ranges from 0 - 19% and this environment has not been re-indexed for over 2 months. BUT a reindex was performed in Environment-2 and only 2 days later the index fragmentation ranges from 72 - 99.93%!
Our DBA has confirmed the re-index in Environment-2 completed successfully and has shown stats before and after the reindex to show that the 8 indexes for Table-A in Environment-2 went down to 0% fragmentation.
My question is, how can the indexes in Environment-2 fragment so much more quickly than the indexes in Environment-1? Both environments are on exactly the same hardware and have exactly the same inbound messages. The database on Environment-1 is actually a clone from Environment-2. The only known differences between the 2 databases is Environment-1 is STANDARD edition - SQL Server 2008 R2 (SP2) whereas Environment-2 is ENTERPRISE edition - SQL Server 2008 R2 (SP1). Could this difference be due to the Service Pack levels or even because one is STANDARD and the other ENTERPRISE?
This is what I have checked so far:
1) In both Environments all 8 indexes have "Set Fill Factor" unchecked and "Automatically recompute statistics", "Use row locks...", "Use page locks..." checked.
2) The "Index Usage Statistics" report in both Environments shows a similar amount of #UserUpdates and #UserScans
View 9 Replies
Dec 1, 2007
Hi everyone,
If I have a table with some indexes on the foreign keys and these indexes are heavily fragmented (80%+), is it normal for queries to return incorrect results?
For example if I had a table called Customer( CustID, Name) and Orders (OrderID, CustID, Product, Date).
Lets say I have a non clustered index on CustID in Orders table, and the clustered indexes are Customer.CustID and Orders.OrderID
If the non clustered index on Orders.CustID becomes heavily fragmented and I am querying the Orders table with TSQL "SELECT * FROM Orders where CustID = @CustID", I sometimes get missing data or incorrect results. In one case all orders for a particular year were missing, but if I queried using OrderID they were returned. Rebuilding the index fixed the problem.
I know the index should be rebuilt or reorganized depending on the fragmentation but if one happened to become this fragmented should it start returning incorrect data?
- Using SQL Server Express 2005
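Fragmentation by itself should not produce wrong results; missing rows normally point at corruption rather than fragmentation, which can be checked with something along these lines (table name taken from the example above):
-- Verify logical consistency of the table and all of its indexes
DBCC CHECKTABLE ('dbo.Orders') WITH NO_INFOMSGS, ALL_ERRORMSGS;
-- Or check the whole database
DBCC CHECKDB WITH NO_INFOMSGS;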
View 3 Replies
Sep 24, 2015
I have a table which has a clustered index on the col1 column. If I insert 10 into my table, what would the clustered index key value be? Is it going to be 10 as well? How do I get the clustered index key value?
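For illustration (hypothetical schema and table name): the clustered index key of a row is simply the value of the clustered key column(s) in that row, so if the clustered index is on col1 and you insert 10, that row's clustered key is 10. The key column(s) themselves can be listed from the catalog views:
-- Which column(s) make up the clustered index key of MyTable
SELECT c.name AS key_column, ic.key_ordinal
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.MyTable')
  AND i.type = 1                 -- clustered
  AND ic.is_included_column = 0
ORDER BY ic.key_ordinal;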
View 5 Replies
Feb 27, 2015
After reading some comments here I decided to look at tables to see if any had a clustered index that was a unique identifier. Yep. So if I have a table with a unique identifier as the primary key/clustered index and an identity column that is indexed, I would like to make the identity a clustered index (maybe even the primary key) and make the unique identifier a unique non-clustered index (not the primary key).
Does this sound reasonable? If I do this, will I need to drop and recreate the other indexes? Or maybe just rebuild the other indexes?
Currently:
CREATE TABLE Payments (
IDX INT IDENTITY(1,1) NOT NULL,
GUID UNIQUEIDENTIFIER NOT NULL DEFAULT(NEWID()),
.....
-- many other columns
);
GO
ALTER TABLE [dbo].[PAYMENTS] ADD CONSTRAINT [PK_PAYMENTS_GID] PRIMARY KEY CLUSTERED ([GUID] ASC);
GO
CREATE NONCLUSTERED INDEX [IX_Payments_ID] ON [dbo].[PAYMENTS] ([IDX] ASC);
GO
Would like:
ALTER TABLE [dbo].[PAYMENTS] ADD CONSTRAINT [PK_PAYMENTS_IDX] PRIMARY KEY CLUSTERED (IDX ASC);
GO
CREATE UNIQUE NONCLUSTERED INDEX [IX_Payments_GUID] ON [dbo].[PAYMENTS] (GUID ASC);
GO
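One hedged note on ordering: the existing clustered primary key on GUID has to be dropped before the new clustered index can be created, so the switch would look roughly like the sketch below (constraint and index names taken from the scripts above). Dropping and re-creating the clustered index causes the nonclustered indexes to be rebuilt as well, so plan for a maintenance window.
-- 1) Drop the nonclustered index on IDX (it becomes the new clustered key)
DROP INDEX [IX_Payments_ID] ON [dbo].[PAYMENTS];
GO
-- 2) Drop the existing clustered PK on GUID (the table is briefly a heap)
ALTER TABLE [dbo].[PAYMENTS] DROP CONSTRAINT [PK_PAYMENTS_GID];
GO
-- 3) Re-cluster on the narrow, ever-increasing identity column
ALTER TABLE [dbo].[PAYMENTS] ADD CONSTRAINT [PK_PAYMENTS_IDX] PRIMARY KEY CLUSTERED (IDX ASC);
GO
-- 4) Keep the GUID unique via a nonclustered index
CREATE UNIQUE NONCLUSTERED INDEX [IX_Payments_GUID] ON [dbo].[PAYMENTS] (GUID ASC);
GO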
View 9 Replies
Apr 1, 2015
I've yet to use partitioning in a production environment, and pretty much last ran any partitioning related code a few years back when looking at certification; so I'm definitely not an expert on the matter and only loosely clued up on the concepts.
I've recently started with a new employer, and they have just implemented a new system for SMS messaging. The database tables tracking the SMS messages being sent are going to get big, and so they have decided to implement partitioning on some of the tables using a partition scheme on the CreatedDate column; the DBA involved in designing the partitioning has left and I'm picking this up.
The relevant DDL for the table is below:-
CREATE TABLE [Message].[Sms](
[SmsId] [bigint] IDENTITY(250000001,1) NOT NULL,
[CreatedDate] [datetime] NOT NULL CONSTRAINT [DF_Sms:CreatedDate] DEFAULT (getdate()),
CONSTRAINT [PK_Sms:SmsId] PRIMARY KEY NONCLUSTERED
[code]....
There are some issues with the above that I will be addressing separately (e.g. the clustered index should be unique as it contains the unique key, and the fillfactors are daft), but my concerns for this post are below.
1) How to define the primary key and enforce its uniqueness whilst trying to ensure it is aligned with the partition, in order to be able to switch out old data once an as yet undefined retention period has passed. In Books Online it states: "If it is not possible for the partitioning column to be included in the unique key, you must use a DML trigger instead to enforce uniqueness." (Books Online - Special Guidelines for Partitioned Indexes.) However, I'm not sure what this means, nor how I create the primary key to use the partition function, seeing as it doesn't have CreatedDate in the unique key?
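For question 1, one possible shape (purely a sketch; the constraint name and the partition scheme name DateScheme are assumptions) is to widen the primary key so it includes the partitioning column, which lets the index be created on the partition scheme and therefore be aligned for switching. If SmsId alone must be unique, the alternatives are the DML trigger Books Online mentions, or a separate non-aligned unique index, which would in turn block partition switching.
-- Aligned primary key: the partitioning column is part of the key
ALTER TABLE [Message].[Sms]
    ADD CONSTRAINT [PK_Sms_SmsId_CreatedDate]
    PRIMARY KEY NONCLUSTERED (SmsId, CreatedDate)
    ON DateScheme (CreatedDate);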
2) The original partition function was envisaged as the following:-
CREATE PARTITION FUNCTION [DateFunction](datetime) AS RANGE
LEFT FOR VALUES (N'2014-01-01T00:00:00.000'
, N'2014-04-01T00:00:00.000'
, N'2014-07-01T00:00:00.000'
, N'2014-10-01T00:00:00.000'
, N'2015-01-01T00:00:00.000'
, N'2015-04-01T00:00:00.000'
, N'2015-04-02T00:00:00.000'
, N'2015-04-03T00:00:00.000'
, N'2015-04-04T00:00:00.000'
, N'2015-04-05T00:00:00.000')
GO
There is a procedure that has been created and scheduled daily that will create a new partition for each day, and then merge these together at the end of the quarter. My understanding of partitioning is that this is a bad idea, as it will result in merging several populated partitions together. Is my understanding correct? If so, I'm planning on removing the day partitions at the end of the function, and simply adding quarterly partitions, maintaining a spare empty partition at the end of the table. Would this make more sense?
View 9 Replies
Jul 1, 2015
We are adding 4-5 indexes to one database and dropping 2 unused indexes. I don't have a proper testing environment. How do I monitor these index changes? Do we need to capture a baseline? We don't get the same load all the time, every day.
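A lightweight baseline that doesn't need a test environment (a sketch; the holding table name is made up) is to snapshot the index usage DMV before the change and again after a representative period, then compare:
-- Snapshot index usage so before/after can be compared
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates,
       GETDATE() AS captured_at
INTO dbo.IndexUsageBaseline        -- first run creates the table; use INSERT INTO afterwards
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();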
View 8 Replies
Jul 9, 2015
Does including non-key columns (the INCLUDE clause) improve the performance of an index?
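A minimal illustration of what an index with included (non-key) columns buys you (hypothetical table and column names): the non-key columns are carried in the leaf level only, so a query that touches just those columns plus the key can be answered from the index without a key lookup.
-- Key column used for seeking, non-key columns stored in the leaf pages only
CREATE NONCLUSTERED INDEX IX_Orders_CustID_Covering
    ON dbo.Orders (CustID)
    INCLUDE (OrderDate, TotalAmount);
-- This query can now be covered by the index alone (no key lookup)
SELECT CustID, OrderDate, TotalAmount
FROM dbo.Orders
WHERE CustID = 42;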
View 8 Replies
Oct 30, 2015
Given a user table 'MyTable', how do I know whether the table contains a non-unique clustered index by using a SQL query?
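One way to check it from the catalog views (a sketch, assuming the dbo schema):
-- Returns a row only if MyTable has a clustered index that is NOT unique
SELECT i.name AS index_name, i.type_desc, i.is_unique
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID('dbo.MyTable')
  AND i.type = 1           -- clustered
  AND i.is_unique = 0;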
View 2 Replies
May 31, 2015
I am new to MS SQL Server. There is a table in one of my databases that occupies a lot of space. The space usage is as follows (all values in KB):
reserved: 42329064
data: 16272288
index: 26050032
unused: 6744
This table takes up almost 80% of my database size, and the information in this table just captures the time spent by a user on the website (not very critical data).
I would like to know how to delete the entire index (which is what is occupying most space) to free up disk space. The index is a clustered index.
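Before dropping anything, it may be worth seeing exactly which indexes the space belongs to; dropping the clustered index only turns the table into a heap and does not remove the data itself. A sketch (the table name dbo.MyLargeTable is a placeholder):
-- Space used per index on the table
SELECT i.name AS index_name,
       i.type_desc,
       SUM(ps.used_page_count) * 8 / 1024 AS used_mb,
       SUM(ps.row_count) AS row_count
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.object_id = OBJECT_ID('dbo.MyLargeTable')
GROUP BY i.name, i.type_desc
ORDER BY used_mb DESC;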
View 3 Replies
Jun 17, 2015
I run a query
select col1, col2, col3, col4
from Table
where col2=5
order by col1
I have a primary key on the column. The execution plan shows the clustered index scan cost at 30% and the sort cost at 70%. When I run the query I get a missing index hint on col2 with 95% impact, so I created the non clustered index on col2. The total execution time decreased by around 80ms, but I don't see that index name being used anywhere in the execution plan. After creating the index I am still seeing the same execution plan.
The execution plan still shows the clustered index scan cost at 30% and the sort cost at 70%, but I can see the total time is reducing and logical reads on that table are reducing. I am sure that the index is useful, but why is there no change in the execution plan?
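If the aim is to make the plan actually change, one thing to try (a sketch using the column names above, not a guaranteed fix) is an index that both seeks on col2 and returns the rows already ordered by col1, so the clustered index scan and the sort can both disappear:
-- Seek on col2, rows come back in col1 order, remaining columns are covered
CREATE NONCLUSTERED INDEX IX_Table_col2_col1
    ON dbo.[Table] (col2, col1)
    INCLUDE (col3, col4);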
View 7 Replies
Aug 2, 2015
I am extremely new to database design, and I ran into a problem that I know comes up often, however has many opinions...
Basically I have a table that is going to have 50+ columns. The natural key on this table is actually 8 columns wide, 4 of them being varchar columns (varchar(50)).
I have added an identity column, (1,1) to the table, however I put the clustered index on the 8 natural keys... My plan is to rebuild the clustered index once nightly when the system isn't in use (after 7 pm).
I know others would say it would be better to have the clustered key on the 1,1 column and then add indexes on the other 8 fields... However I don't quite understand why honestly...
Every single query against this table will use the 8 columns, and will NOT use the Identity column (1,1) because they are calls from other systems that do not know the Identity column....
Therefore if your database is set up for query speed, and every single query has to have a value for 8 columns to get a valid result, does it make sense to put a clustered index over the 8 columns?
If not why? Why is putting a clustered index on an identity column (that will literally never be used in a query) a better solution?
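For comparison, the commonly recommended shape (a sketch with placeholder names, not a verdict on which design is right here) keeps the clustered index narrow and ever-increasing, and still gives every query that supplies all 8 business-key values a single seek through a unique nonclustered index:
CREATE TABLE dbo.WideKeyTable (
    RowID INT IDENTITY(1,1) NOT NULL,
    Key1 VARCHAR(50) NOT NULL,
    Key2 VARCHAR(50) NOT NULL,
    Key3 VARCHAR(50) NOT NULL,
    Key4 VARCHAR(50) NOT NULL,
    Key5 INT NOT NULL,
    Key6 INT NOT NULL,
    Key7 INT NOT NULL,
    Key8 INT NOT NULL,
    -- ... remaining 40+ columns ...
    CONSTRAINT PK_WideKeyTable PRIMARY KEY CLUSTERED (RowID)
);
GO
-- Queries that supply all 8 business-key values still get a single seek here
CREATE UNIQUE NONCLUSTERED INDEX UX_WideKeyTable_NaturalKey
    ON dbo.WideKeyTable (Key1, Key2, Key3, Key4, Key5, Key6, Key7, Key8);
GO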
View 9 Replies
Sep 30, 2015
Table Name: Denominator
Already has the following constraint:
PK_Denominator - clustered, unique, primary key located on PRIMARY - key column: DenominatorID
How can I add a unique key that will cover the 3 fields --> MemberID,MeasureID,TimePeriodID
I also want to know whether we can include the " WITH ( IGNORE_DUP_KEY=ON ) "
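A sketch of the two alternatives (the column order within the key is an assumption):
-- Plain unique constraint over the three columns
ALTER TABLE dbo.Denominator
    ADD CONSTRAINT UQ_Denominator_Member_Measure_TimePeriod
    UNIQUE NONCLUSTERED (MemberID, MeasureID, TimePeriodID);
-- Or, if IGNORE_DUP_KEY behaviour is wanted, a unique index supports it directly
CREATE UNIQUE NONCLUSTERED INDEX UX_Denominator_Member_Measure_TimePeriod
    ON dbo.Denominator (MemberID, MeasureID, TimePeriodID)
    WITH (IGNORE_DUP_KEY = ON);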
View 3 Replies
Oct 14, 2015
that violates the target's referential integrity? I am getting error Msg 2601, Level 14, State 1, Line 1: Cannot insert duplicate key row in object XXX with unique index YYY. The statement has been terminated. I would like to know if there is a way to examine or determine which source rows are not conforming to the unique index. I'm fine with dropping and re-establishing the index, and I know it's catalogued somewhere, because during index creation the error message does tell you the row details clobbering index creation. Ideally I would like to be able to trap all the failing rows and see what I can do about rehabilitating them, ignoring them, or managing them some other way, but I'd like to know what the server knows when it will not create the index.
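A sketch of how the offending source rows could be listed before re-creating the index (KeyCol1/KeyCol2 and the table names are placeholders for the real unique index columns):
-- Source rows whose key values occur more than once in the source itself
SELECT KeyCol1, KeyCol2, COUNT(*) AS dup_count
FROM dbo.SourceTable
GROUP BY KeyCol1, KeyCol2
HAVING COUNT(*) > 1;
-- Source rows whose key already exists in the target
SELECT s.*
FROM dbo.SourceTable AS s
JOIN dbo.TargetTable AS t
  ON t.KeyCol1 = s.KeyCol1 AND t.KeyCol2 = s.KeyCol2;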
View 2 Replies
Apr 22, 2015
What is the best way to forecast/estimate the space needed for a non-clustered index on a table?
Example :
Table name : Test123
Row : 170000000
Reserved : 18000000 KB
Data : 70000000 KB
Index: 40000000 KB
Note: Test123 already has clustered index and 2 non clustered indexes.
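A rough back-of-envelope method (the byte counts below are assumptions to show the approach, not a forecast for Test123): leaf size is roughly rows x (index key bytes + clustered key bytes + per-row overhead) divided by the fill factor, plus a few percent for the non-leaf levels.
-- Example: 4-byte index key, 8-byte clustered key, ~7 bytes per-row overhead, 100% fill
DECLARE @rows BIGINT = 170000000,
        @bytes_per_row INT = 4 + 8 + 7;
SELECT (@rows * @bytes_per_row) / 1024 / 1024 AS approx_leaf_mb;   -- roughly 3 GB in this example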
View 7 Replies
Sep 15, 2015
I have query with an expensive Key Lookup on a joined table. The predicate is the column that I'm joining on, and the output list contains two columns from the joined table.
I've created a basic non-clustered index covering the predicate column and INCLUDE-ing the two output columns. However, the execution plan ignores this and insists on using the primary key of the joined table to do the expensive key lookup. I've tried adding the included columns to the index key directly and there's no change. I've also tried running DBCC FREEPROCCACHE and there's no change.
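One quick experiment (the index and object names below are placeholders) is to force the covering index with a table hint: if the forced plan is cheaper, the optimizer's costing is the issue; if it errors or is slower, the index probably doesn't actually cover the query the way it appears to.
DECLARE @value INT = 42;   -- placeholder predicate value
SELECT t.JoinCol, t.OutputCol1, t.OutputCol2
FROM dbo.JoinedTable AS t WITH (INDEX (IX_JoinedTable_Covering))
WHERE t.JoinCol = @value;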
View 3 Replies
Jul 16, 2015
We noticed a deadlock 3-4 weeks ago on a table (table1) and deadlock graph was captured.
When I am analyzing the deadlock graph, page number using DBCC PAGE, I am getting the object id for a different table (table2). But deadlock graph shows the name of the object as table1.
Is it possible that subsequent defragmentation of indexes would have caused the respective page id to get re-allocated to a different table? I only got around to checking the deadlock graph 3-4 weeks later.
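Yes: after a rebuild the index's original pages are deallocated and can later be reused by a completely different object, so a page id resolved weeks later may well map to another table. For reference, the usual lookup looks like this (file id, page id and the object id are placeholders):
-- Route DBCC PAGE output to the client
DBCC TRACEON (3604);
-- DBCC PAGE (database, file_id, page_id, print_option)
DBCC PAGE ('MyDatabase', 1, 12345, 0);
-- Resolve the "Metadata: ObjectId" value shown in the page header
SELECT OBJECT_NAME(123456789);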
View 1 Replies
Nov 13, 2000
Good morning one and all,
I need to transfer a database (contining one table) containing over 35 million records from one server to another. I have two options at present :
(a) Use DTS to do the transfer
(b) Copy the mdf file across and sp_attach_db it
Does anyone have a better idea, or does anyone know which of the two methods will be the quickest?
TIA
Gurmi
View 1 Replies
Mar 28, 2015
Our system runs a SQL Server 2012 DB. It has a table (table_a) which has over 10M records. Our system has to receive a data file from the previous system daily, which contains approximately 3M updated or new records for table_a. My job is to update table_a with the new data.
The initial solution is:
1 Create a table (table_b) whose structure is the same as table_a
2 Use BCP to import updated records into table_b
3 Remove outdated data in table_a:
delete a from table_a a inner join table_b b on a.key_fields = b.key_fields
4 Append updated or new data into table_a:
insert into table_a select * from table_b
In testing, this solution is very inefficient; step 3 alone costs several hours. How can I improve it?
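One alternative worth testing (a sketch; the key and data column names are placeholders, and it assumes a supporting index on the join key) is to replace the delete+insert with a single MERGE, since this is SQL Server 2012:
-- A supporting index on the staging table's join key makes either approach far cheaper
CREATE INDEX IX_table_b_key ON dbo.table_b (key_fields);
GO
MERGE dbo.table_a AS a
USING dbo.table_b AS b
    ON a.key_fields = b.key_fields
WHEN MATCHED THEN
    UPDATE SET col1 = b.col1, col2 = b.col2          -- list the real columns here
WHEN NOT MATCHED BY TARGET THEN
    INSERT (key_fields, col1, col2)
    VALUES (b.key_fields, b.col1, b.col2);
GO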
View 9 Replies
Feb 27, 2004
Hi,
I have a database and application that are running slowly at the moment. My investigations so far have led me to HD issues. After an analysis of the data drives (RAID 5 on 5x36Gb drives) it shows that they are 99% fragmented.
Can I run a defrag on this drive?
Will i have to stop the SQL services?
tia
fatherjack
View 4 Replies
Jul 23, 2005
From: kalpesh.s...@gmail.com
Newsgroups: comp.databases.informix
Date: 1 Feb 2005 06:50:21 -0800
Subject: how do you know indexes have been fragmented
I ran DBCC SHOWCONTIG on my SQL Server db and it returned the following:
Table: 'Table1' (1621580815); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 4982
- Extents Scanned..............................: 628
- Extent Switches..............................: 627
- Avg. Pages per Extent........................: 7.9
- Scan Density [Best Count:Actual Count].......: 99.20% [623:628]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 99.52%
- Avg. Bytes Free per Page.....................: 38.3
- Avg. Page Density (full).....................: 99.53%
Based on searching for info on index defrag, it seems my Extent Scan Fragmentation percentage is not what it should be (0%). Is that true, and if yes, how can you be sure that your indexes have been fragmented? If the indexes are really fragmented, what is the best way, without reindexing (or is that the best way), to defrag the indexes?
Thank you, Kalpesh
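On the 'without reindexing' part of the question: the online option in SQL Server 2000/2005 was DBCC INDEXDEFRAG, which compacts and re-orders the leaf level in place (a sketch against the table in the output above):
-- Reorganize the leaf level of index 1 on Table1 in database id 7, online
DBCC INDEXDEFRAG (7, 'Table1', 1);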
View 3 Replies
May 26, 2008
Last week I found that almost all indexes on my database had more than 70% fragmentation. I rebuilt the indexes to fix the "problem", but today I found out that in one particular table (one with a little more than 1 million records) the indexes are again 99.8% fragmented.
Did I do something wrong? If you ask me whether there are a lot of transactions running over that table, the answer is NO. There is one process that could append (insert) between 500 and 1000 records, but this happens just once a month.
View 3 Replies
Oct 1, 2007
We have 1TB of heaped tables being used 24/7 and performance is degrading over time, and the vendor's dynamic SQL won't include clustered indexes! (And they won't let me add them.)
I can reorganise a heap with an ALTER TABLE statement: add a column, add a clustered index, drop the column, etc. However, this is intrusive and I would require an entire day to perform this piece-meal, and a blanket script for 2000+ tables would kill performance altogether.
Would a backup, then removing the 1TB DB, followed by a restore place the data back onto disk unfragmented? (With LS this would only require a 3-hour down-time window.) In theory it should work?
View 4 Replies
Oct 18, 2012
We have a new database with CDC enabled on all of its tables. This causes the index maintenance task to fail with the following message:
"Executing the query "EXEC DBName.dbo.IndexDefrag_sp" failed with the following error: "The unique index 'PK_TableName' on source table '[dbo].[TableName]' is used by Change Data Capture. To alter or drop the index, you must first disable Change Data Capture on the table. The transaction ended in the trigger. The batch has been aborted.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly." We would like to run the index maintenance without losing the CDC data. We plan on installing SP2 on SQL Server 2008 R2 soon; would that solve the issue? Disabling CDC prior to index maintenance and then re-enabling it upon completion would delete the data, as I found in most discussions, but we would like to retain it.
View 4 Replies
Apr 23, 2008
I have a database with a couple of tables I need to expand to 4 gigabytes in order to run some tests (currently 300 megs).
Does anyone have a script or some method that would quickly populate my tables with random data, so that I can grow my database to the desired size for testing?
Thanks
I have SQL Server 2005 Express. I have Management Studio installed too.
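A minimal sketch of one way to bulk a database up with throwaway rows (the table and column names are invented; run the insert repeatedly until the database reaches the size you need):
-- Throwaway table to hold the padding data
CREATE TABLE dbo.TestData (
    SomeInt INT,
    SomeGuid UNIQUEIDENTIFIER,
    SomeText VARCHAR(200)
);
GO
-- Generate roughly a million rows of random-ish data per execution
;WITH n AS (
    SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b
)
INSERT INTO dbo.TestData (SomeInt, SomeGuid, SomeText)
SELECT rn,
       NEWID(),
       REPLICATE('x', 200)          -- padding to grow the row size
FROM n;
GO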
View 4 Replies
Mar 20, 2007
We have a server with some pesky RAID5 - which has on 3 separate occasions corrupted the Databases when a drive failed. So we had a maintenance window and decided to change it to RAID10.
We started the configuration at 13:00 today; it's now 18:00 and it has done 25%... is that normal?
Its 8 disks (so 4 mirror-pairs), it will have around 300GB of usable space when its done.
What would happen if we needed to do this in a time critical window? (for this debacle we have moved the database onto the Web Server, so we can survive for a few more hours ...)
Kristen
View 4 Replies
Jun 15, 2007
Hi, All:
I know Oracle SQL; now I need to do a lot of SQL queries on Microsoft SQL Server. Can anyone point out a place where I can find the syntax of SQL Server SQL statements? Since this is just a short-term assignment, I don't want to buy a book; I'm just hoping I can learn something quickly online. I don't need to learn anything deep, just some simple syntax so I can do join, count, concatenate, min(), max(), sum(), etc.
thanks in advance.
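For the kinds of statements listed, a tiny taste of the T-SQL flavour (table and column names are made up): joins and the aggregate names are essentially the same as Oracle's, and string concatenation uses +.
-- Join, aggregate and concatenate in T-SQL
SELECT c.CustName + ' (' + c.City + ')' AS customer_label,
       COUNT(*) AS order_count,
       MIN(o.Amount) AS min_amount,
       MAX(o.Amount) AS max_amount,
       SUM(o.Amount) AS total_amount
FROM dbo.Customers AS c
JOIN dbo.Orders AS o ON o.CustID = c.CustID
GROUP BY c.CustName, c.City;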
View 5 Replies
Oct 22, 2007
I am trying to search for stored files (for example from date 15/12/2003 to 24/6/2006), and when I press search no results appear. The following is the database code:
public DataTable searchData(string fileNo, string Title, string dFrom, string dTo, string brief)
{
    string str = "";
    str = "select * from Tb_File where Active = 1 ";
    if (fileNo != "")
        str += " and FileNo='" + fileNo + "'";
    if (Title != "")
        str += " and Title like '%" + Title + "%' ";
    if (brief != "")
        str += " and Brief like '%" + brief + "%' ";
    if (dFrom != "")
        str += " and DFrom >= convert(datetime,'" + Convert.ToDateTime(dFrom).ToShortDateString() + "',103) ";
    if (dTo != "")
        str += " and DTo < convert(datetime,'" + Convert.ToDateTime(dTo).ToShortDateString() + "',103) ";
    ole.Open();
    SqlDataAdapter DA = new SqlDataAdapter(str, ole);
    DataTable DT = new DataTable();
    DA.Fill(DT);
    ole.Close();
    return DT;
}
I am using SQL 2000 with Visual Studio 2005.
View 14 Replies
Mar 26, 2007
Good day,
I have a table of approximately 10 million rows. The table has 3 fields making up the key, namely:
ID, Date, Program
I need to extract all the distinct Program's from the table.
I have done so with:
Select distinct Program from table
This unfortunately takes roughly 2 minutes, which is far too long. Is there something I can do to help speed this process up?
Thanks in advance.
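One thing that often works for this pattern (a sketch, not tested against this table; MyTable is a placeholder name) is a narrow nonclustered index on Program, so the DISTINCT can be satisfied by scanning a structure far smaller than the full 10-million-row table:
CREATE NONCLUSTERED INDEX IX_MyTable_Program ON dbo.MyTable (Program);
-- The DISTINCT can then be answered from the index alone
SELECT DISTINCT Program FROM dbo.MyTable;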
View 14 Replies
May 19, 2004
There have been several threads about changing a database's collation but none have come up with an easy answer before.
The suggestion before was to create an empty database with the correct collation and then copy the data across.
However this is hard work as you have to populate tables in a specific order in order not to violate foreign keys etc. You can't just dts the whole data.
There follows scripts we have written to do the job. If people use them, please could you add to this thread whether they worked successfully or not.
Firstly we change the default collation, then change all the types in the database to match the new collation.
===================
--script to change database collation - James Agnini
--
--Replace <DATABASE> with the database name
--Replace <COLLATION> with the collation, eg SQL_Latin1_General_CP1_CI_AS
--
--After running this script, run the script to rebuild all indexes
ALTER DATABASE <DATABASE> COLLATE <COLLATION>
exec sp_configure 'allow updates',1
go
reconfigure with override
go
update syscolumns
set collationid = (select top 1 collationid from systypes where systypes.xtype=syscolumns.xtype)
where collationid <> (select top 1 collationid from systypes where systypes.xtype=syscolumns.xtype)
go
exec sp_configure 'allow updates',0
go
reconfigure with override
go
===================
As we have directly edited system tables, we need to run a script to rebuild all the indexes. Otherwise you will get strange results, like comparisons of strings in different tables not working.
The indexes have to actually be dropped and recreated in separate statements.
You can't use DBCC DBREINDEX or create index with the DROP_EXISTING option as they won't do anything(thanks to SQL Server "optimization").
This script loops through the tables and then loops through the indexes and unique constraints in separate sections. It gets the index information and drops and re-creates it.
(The script could probably be tidied up with the duplicate code put into a stored procedure).
====================
--Script to rebuild all table indexes, Version 0.1, May 2004 - James Agnini
--
--Database backups should be made before running any set of scripts that update databases.
--All users should be out of the database before running this script
print 'Rebuilding indexes for all tables:'
go
DECLARE @Table_Name varchar(128)
declare @Index_Name varchar(128)
declare @IndexId int
declare @IndexKey int
DECLARE Table_Cursor CURSOR FOR
select TABLE_NAME from INFORMATION_SCHEMA.tables where table_type != 'VIEW'
OPEN Table_Cursor
FETCH NEXT FROM Table_Cursor
INTO @Table_Name
--loop through tables
WHILE @@FETCH_STATUS = 0
BEGIN
print ''
print @Table_Name
DECLARE Index_Cursor CURSOR FOR
select indid, name from sysindexes
where id = OBJECT_ID(@Table_Name) and indid > 0 and indid < 255 and (status & 64)=0 and
not exists(Select top 1 NULL from INFORMATION_SCHEMA.TABLE_CONSTRAINTS
where TABLE_NAME = @Table_Name AND (CONSTRAINT_TYPE = 'PRIMARY KEY' or CONSTRAINT_TYPE = 'UNIQUE') and
CONSTRAINT_NAME = name)
order by indid
OPEN Index_Cursor
FETCH NEXT FROM Index_Cursor
INTO @IndexId, @Index_Name
--loop through indexes
WHILE @@FETCH_STATUS = 0
begin
declare @SQL_String varchar(256)
set @SQL_String = 'drop index '
set @SQL_String = @SQL_String + @Table_Name + '.' + @Index_Name
set @SQL_String = @SQL_String + ';create '
if( (select INDEXPROPERTY ( OBJECT_ID(@Table_Name) , @Index_Name , 'IsUnique')) =1)
set @SQL_String = @SQL_String + 'unique '
if( (select INDEXPROPERTY ( OBJECT_ID(@Table_Name) , @Index_Name , 'IsClustered')) =1)
set @SQL_String = @SQL_String + 'clustered '
set @SQL_String = @SQL_String + 'index '
set @SQL_String = @SQL_String + @Index_Name
set @SQL_String = @SQL_String + ' on '
set @SQL_String = @SQL_String + @Table_Name
set @SQL_String = @SQL_String + '('
--form column list
SET @IndexKey = 1
-- Loop through index columns, INDEX_COL can be from 1 to 16.
WHILE @IndexKey <= 16 and INDEX_COL(@Table_Name, @IndexId, @IndexKey)
IS NOT NULL
BEGIN
IF @IndexKey != 1
set @SQL_String = @SQL_String + ','
set @SQL_String = @SQL_String + index_col(@Table_Name, @IndexId, @IndexKey)
SET @IndexKey = @IndexKey + 1
END
set @SQL_String = @SQL_String + ')'
print @SQL_String
EXEC (@SQL_String)
FETCH NEXT FROM Index_Cursor
INTO @IndexId, @Index_Name
end
CLOSE Index_Cursor
DEALLOCATE Index_Cursor
--loop through unique constraints
DECLARE Contraint_Cursor CURSOR FOR
select indid, name from sysindexes
where id = OBJECT_ID(@Table_Name) and indid > 0 and indid < 255 and (status & 64)=0 and
exists(Select top 1 NULL from INFORMATION_SCHEMA.TABLE_CONSTRAINTS
where TABLE_NAME = @Table_Name AND CONSTRAINT_TYPE = 'UNIQUE' and CONSTRAINT_NAME = name)
order by indid
OPEN Contraint_Cursor
FETCH NEXT FROM Contraint_Cursor
INTO @IndexId, @Index_Name
--loop through indexes
WHILE @@FETCH_STATUS = 0
begin
set @SQL_String = 'alter table '
set @SQL_String = @SQL_String + @Table_Name
set @SQL_String = @SQL_String + ' drop constraint '
set @SQL_String = @SQL_String + @Index_Name
set @SQL_String = @SQL_String + '; alter table '
set @SQL_String = @SQL_String + @Table_Name
set @SQL_String = @SQL_String + ' WITH NOCHECK add constraint '
set @SQL_String = @SQL_String + @Index_Name
set @SQL_String = @SQL_String + ' unique '
if( (select INDEXPROPERTY ( OBJECT_ID(@Table_Name) , @Index_Name , 'IsClustered')) =1)
set @SQL_String = @SQL_String + 'clustered '
set @SQL_String = @SQL_String + '('
--form column list
SET @IndexKey = 1
-- Loop through index columns, INDEX_COL can be from 1 to 16.
WHILE @IndexKey <= 16 and INDEX_COL(@Table_Name, @IndexId, @IndexKey)
IS NOT NULL
BEGIN
IF @IndexKey != 1
set @SQL_String = @SQL_String + ','
set @SQL_String = @SQL_String + index_col(@Table_Name, @IndexId, @IndexKey)
SET @IndexKey = @IndexKey + 1
END
set @SQL_String = @SQL_String + ')'
print @SQL_String
EXEC (@SQL_String)
FETCH NEXT FROM Contraint_Cursor
INTO @IndexId, @Index_Name
end
CLOSE Contraint_Cursor
DEALLOCATE Contraint_Cursor
FETCH NEXT FROM Table_Cursor
INTO @Table_Name
end
CLOSE Table_Cursor
DEALLOCATE Table_Cursor
print ''
print 'Finished, Please check output for errors.'
====================
Any comments are very welcome.
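One extra check that may be useful after running the scripts (my addition, not part of the original post): list any columns whose collation still differs from the new database default.
-- Columns whose collation does not match the database collation
SELECT TABLE_NAME, COLUMN_NAME, COLLATION_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLLATION_NAME IS NOT NULL
  AND COLLATION_NAME <> CONVERT(sysname, DATABASEPROPERTYEX(DB_NAME(), 'Collation'));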
View 1 Replies
Mar 6, 2008
I am very new to SSIS. Can someone give me a basic out line to this problem. I kind of understand control tasks, data flow, etc... but not in details(watched couple of webcasts). I need to see something like below in action to understand this better.
Basically, I need to process a flat CSV file on a daily basis and load it into a table. As I am loading the records, I will need to verify (on a key column) whether the record already exists in the table. If so, then just update the record; otherwise insert a new record. When I find a record, I need to possibly do a checksum on a set of columns before I do the update, so only update if this set of columns differs between file and table. I also need to keep performance in mind, as I am processing these records one at a time, looking each one up. I am thinking this should be fairly easy, but I am getting a little lost in control tasks and data flow as to what goes where. By the way, I am using Visual Studio 2005 and SQL Server 2005.
I would appreciate your help. Thanks again. I don't mind an example solution file.
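If a staging-table approach is acceptable, the checksum comparison itself is straightforward in T-SQL once the file has been loaded (a sketch with placeholder table, key and column names; whether CHECKSUM is precise enough for your data is an assumption to verify):
-- Update only rows whose tracked columns actually changed
UPDATE t
SET    t.Col1 = s.Col1,
       t.Col2 = s.Col2,
       t.Col3 = s.Col3
FROM   dbo.TargetTable AS t
JOIN   dbo.StagingTable AS s ON s.KeyCol = t.KeyCol
WHERE  CHECKSUM(s.Col1, s.Col2, s.Col3) <> CHECKSUM(t.Col1, t.Col2, t.Col3);
-- Insert rows that do not exist yet
INSERT INTO dbo.TargetTable (KeyCol, Col1, Col2, Col3)
SELECT s.KeyCol, s.Col1, s.Col2, s.Col3
FROM   dbo.StagingTable AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.KeyCol = s.KeyCol);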
View 3 Replies
Jul 10, 2007
I have five small tables that I need to insert to a SQL CE database.
I am using the 2.0 Compact Framework with the 2.0 System.Data.SqlServerCe.
My table definition is dynamic, so I never know its design.
1- If I go row by row using this.ExecuteNonQuery(_global, par); it takes about 26 seconds to insert 5 tables of 330 rows.
2- If a use
StringBuilder sbColumns = new StringBuilder();
foreach (DataColumn dc in table.Columns)
{
if (sbColumns.ToString() != "")
sbColumns.Append(",");
sbColumns.Append(dc.ColumnName);
}
SqlCeDataAdapter da = new SqlCeDataAdapter("SELECT " + sbColumns.ToString() + " FROM " + _tablename, m_con);
SqlCeCommandBuilder cb = new SqlCeCommandBuilder(da);
da.MissingMappingAction = MissingMappingAction.Passthrough;
da.InsertCommand = cb.GetInsertCommand();
da.Update(table);
da.Dispose();
it takes about 46 seconds.
How can I write it faster, or is this the fastest it can go?
Thanks
View 1 Replies
Dec 15, 2006
We have a system that has 35 million conversations piled up. We didn't know to explicitly end the conversation once the processing had completed. Oops. Now our production box has 35 million sitting in the table, and we have run into the problem where the amount in sys.conversation_endpoints has exceeded memory and they are being dumped into tempdb, which is killing our disk space, thus bringing the box down. We have fixed the code to end the conversations, but we now have to end the existing conversations in a hurry. If we select them one by one out of the table and end each conversation via END CONVERSATION, it is slow. Very slow. It will finish in a few months. :(
Does anyone know how to get rid of these conversations in a hurry? All of the messages have been applied to our system, so killing the conversations will (should) have no effect on the processed data. Something like a TRUNCATE statement?
Thank you so much in advance,
John Hennesey
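One pattern commonly used for this kind of clean-up (a sketch only; test it on a copy first) is to loop over the endpoints and end them with the CLEANUP option, which skips the normal end-dialog handshake. The commented-out alternative wipes all broker state at once, but also issues the database a new service broker id.
-- Option 1: end the leftover conversations one handle at a time, without the handshake
DECLARE @handle UNIQUEIDENTIFIER;
WHILE 1 = 1
BEGIN
    SELECT TOP (1) @handle = conversation_handle
    FROM sys.conversation_endpoints;
    IF @@ROWCOUNT = 0 BREAK;
    END CONVERSATION @handle WITH CLEANUP;
END
-- Option 2 (drastic): drop ALL broker state and issue a new broker id
-- ALTER DATABASE MyDatabase SET NEW_BROKER WITH ROLLBACK IMMEDIATE;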
View 5 Replies