When considering locks caused by a table, would it be better to have more rows in a table than fewer? For example, I can design the table to hold 1 million rows, of which 300,000 are updated frequently, or I can move the other 700,000 rows out into another table and let the 300,000 rows that remain take a severe beating. Would I have fewer locking problems and deadlocks if I take out the non-updated rows, or would it be more likely to deadlock because of a higher chance of a lock being held on the same page?
I'm running merge replication on a SQL 2000 machine to 6 SQL 2000 subscribers. For a few days now, only one of the merge agents has been failing with the following error:
The merge process could not retrieve generation information at the 'Subscriber'. The index entry for row ID was not found in index ID 3, of table 357576312, in database 'PBB006'.
All DBCC CHECKDB commands return 0 errors :confused: I'm not sure whether the table referred to in the message is on the distribution side or the subscriber side. A SELECT * FROM sysobjects WHERE id=357576312 gives different results on the two sides.
Hi everyone, when we create a clustered index first, is it then advantageous to also create a nonclustered index? In my opinion, yes it is: because we used a clustered index first, our rows are sorted, so when using a nonclustered index on this data file, finding the address of a record in this sorted data is much easier than finding the address of a record in unsorted data, isn't it?
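A minimal sketch of the combination being described, with hypothetical table and column names:

CREATE TABLE Orders (OrderID int NOT NULL, CustomerID int NOT NULL, OrderDate datetime NOT NULL)
-- the clustered index physically orders the table's rows by OrderID
CREATE CLUSTERED INDEX CIX_Orders_OrderID ON Orders (OrderID)
-- the nonclustered index keeps its own sorted copy of CustomerID,
-- with the clustering key (OrderID) stored as the row locator
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON Orders (CustomerID)

The nonclustered index speeds up searches on CustomerID because the index itself is sorted on that column; whether it is worth having depends on the queries that run against the table.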
Please explain the differences between the logical and physical operations whose graphical icons we can see in the execution plan tab in Management Studio.
My stored procedure has one table variable (@t_Replenishment_Rpt). I want to create an index on this table variable; please advise. Below is my table variable, and I need to create 3 indexes on it...
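On SQL 2000/2005, CREATE INDEX cannot be run against a table variable; the only indexes it can have are the ones created implicitly by PRIMARY KEY or UNIQUE constraints in the DECLARE. A sketch with hypothetical columns (the real @t_Replenishment_Rpt definition isn't shown here):

DECLARE @t_Replenishment_Rpt TABLE (
    ItemID int NOT NULL,
    WarehouseID int NOT NULL,
    Qty int NULL,
    PRIMARY KEY (ItemID, WarehouseID),   -- becomes the clustered index
    UNIQUE (WarehouseID, ItemID)         -- becomes a nonclustered index
)

If the three desired indexes can't all be expressed as unique keys, a #temp table (which supports ordinary CREATE INDEX) is the usual fallback.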
I have set up a DTS job that performs the following steps once a week.
1. Truncate a user table called Sales.
2. Import 750,000 new sales records from a semi-colon delimited text file.
3. Execute an update query that adds a SalesID value to each record (this is a concatenation of several columns for each record and may not be unique).
This whole process takes about 2 minutes.
Here is my question: all queries and views against this Sales table use the SalesID field to identify a result set. Therefore my thought is that I need a clustered index on the SalesID field in the Sales table.
What is the right way to handle this:
1. Leave the table as is and do not add an index to the SalesID field (all queries would rely on a full scan of the table).
2. Add a permanent index to the SalesID field, which will probably cause the truncate and import to run more slowly.
3. Do option 2, but drop the index before truncating the table and add the index back to the SalesID field as the last step in the DTS job (a sketch of this follows below).
Any idea what would provide the best performance? If I missed any options please let me know, thank you for any help!
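If option 3 is the route taken, the drop and rebuild can be two plain Execute SQL tasks in the DTS package; a sketch, where the index name is an assumption:

-- before the import: drop the index (if present) and empty the table
IF EXISTS (SELECT 1 FROM sysindexes WHERE id = OBJECT_ID('Sales') AND name = 'CIX_Sales_SalesID')
    DROP INDEX Sales.CIX_Sales_SalesID
TRUNCATE TABLE Sales

-- after the import and the SalesID update: rebuild the index
CREATE CLUSTERED INDEX CIX_Sales_SalesID ON Sales (SalesID)

Since SalesID may not be unique, the index is created without the UNIQUE keyword; loading into the bare table and sorting once at the end is usually faster than maintaining a clustered index row by row during the import.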
select row_number() over (order by duedate) as row, duedate as date, ...
into #fronta
from oitb with (nolock)
where ....
order by duedate
The table is filled correctly with about 30k records. In the next step I want to work with this temp table I created, but I have a problem when I use a query like this:
select * from #fronta where row < 500
When the operator is = or <>, the query is quick, but when I use < or >, the query takes about 10 minutes.
I tried to add an index on the field named row to this temp table, but with no success.
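A clustered index on the row column is the usual way to make range predicates like row < 500 cheap on the temp table; it can be created right after the SELECT ... INTO. A sketch:

CREATE CLUSTERED INDEX IX_fronta_row ON #fronta (row)
SELECT * FROM #fronta WHERE row < 500

If the plan still scans after the index is added, the next things to check are the actual execution plan for an implicit conversion on row and whether statistics on the temp table are current.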
My ID field is an identity field (unique). It is the primary key. I also want to add an index/unique key so that a combination of Field1 and Field2 cannot be duplicated. How do I do this?
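A unique constraint (or a unique index) across the two columns does exactly this; a sketch with an assumed table name:

ALTER TABLE MyTable ADD CONSTRAINT UQ_MyTable_Field1_Field2 UNIQUE (Field1, Field2)
-- or, equivalently, as an explicit index:
CREATE UNIQUE NONCLUSTERED INDEX UQ_MyTable_Field1_Field2 ON MyTable (Field1, Field2)

Either way, any insert or update that would duplicate an existing (Field1, Field2) pair is rejected with an error.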
Hi, I'm issuing a SELECT with SUM on a field on SQL Server 7. I have an index on the field in the WHERE clause, but upon analysis the query optimizer always uses a full table scan. Can anyone explain why, and is there a way to use the index?
Here's the structure: SELECT SUM(colA) FROM tblB GROUP BY colC
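One thing that often changes the plan here is a covering index, so the whole query can be answered from the index alone. A sketch using the column names from the posted structure:

CREATE NONCLUSTERED INDEX IX_tblB_colC_colA ON tblB (colC, colA)

With colC as the leading key and colA also in the key (SQL 7 has no INCLUDE clause), the optimizer can satisfy the GROUP BY with a scan of this narrower index instead of the full table; an aggregate over the whole table will still scan something, just a much smaller something.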
Hi all, is it possible to get the name and size of each index in a table? Please let me know. In SQL 7 we could do this using Enterprise Manager, but in SQL 2000 I'm not sure how to do this.
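In SQL 2000 this can be pulled from sysindexes (or sp_helpindex for the names alone); a rough sketch, with 'YourTable' as a placeholder:

SELECT name,
       used * 8 AS size_kb              -- pages are 8 KB
FROM sysindexes
WHERE id = OBJECT_ID('YourTable')
  AND indid BETWEEN 1 AND 254           -- skip the heap (0) and text/image allocation (255)
  AND name IS NOT NULL

sp_spaceused 'YourTable' gives the table's combined index size if a per-index breakdown isn't needed.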
I am creating a table variable (@tblBin) to temporarily store a set of data. Later in my sproc, I am doing a JOIN from @tblBin to a persistent table. In order to improve performance, I was thinking of adding an index to the columns of the @tblBin (indexes already exist on the persistent table). Using standard CREATE INDEX syntax(*), I am getting a compile error. Can this be done?
(*)CREATE NONCLUSTERED INDEX IX_tblBin_shortname ON @tblBin(shortname)
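The compile error is expected: CREATE INDEX is not supported against a table variable on SQL 2000/2005. Two common workarounds, sketched with assumed columns:

-- 1) declare the index as a constraint on the table variable (only possible if the key is unique)
DECLARE @tblBin TABLE (
    shortname varchar(50) NOT NULL,
    binid int NOT NULL,
    UNIQUE (shortname, binid)
)

-- 2) since this is a stored procedure rather than a function, a temp table allows a real index
CREATE TABLE #tblBin (shortname varchar(50) NOT NULL, binid int NOT NULL)
CREATE NONCLUSTERED INDEX IX_tblBin_shortname ON #tblBin (shortname)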
I already posted this over on sqlteam so don't peek there if you haven't seen that post yet. :)
So now to the question:
Anyone care to guess how long it took me to build a clustered index on a table with 900 million rows? This is the largest amount of data in a single table I have had to work with thus far in my career! It's sorta fun to work with such large datasets. :)
Some details:
1. running sql 2005 on a dual proc 32bit server, 8gb ram, hyperthreaded, 3ghz clock. disk is a decent SAN, not sure of the specs though.
2. ddl for table:
CREATE TABLE [dbo].[fld](
    [id] [bigint] NOT NULL,
    [id2] [tinyint] NOT NULL,
    [extid] [bigint] NOT NULL,
    [dd] [bit] NOT NULL,
    [mp] [tinyint] NOT NULL,
    [ss] [tinyint] NOT NULL,
    [cc] [datetime] NOT NULL,
    [ff] [tinyint] NOT NULL,
    [mm] [smallint] NOT NULL,
    [ds] [smallint] NOT NULL
)
3. ddl for index (this is the only index on the table):
CREATE CLUSTERED INDEX [CIfld] ON [dbo].[fld] ( extid ASC )
WITH (FILLFACTOR = 100, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF)
4. extid column was not sorted to begin with. ordering was completely random.
Note that I have changed the column names, etc, to protect the innocent. I can't go into details about what it's for or I'd be violating NDA type stuff.
I need to read a db table sorted by its index. Is there a generic "select * from tableA sorted by index" where it just uses whatever index it finds (the main index), or do I have to name the index?
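There is no generic "sorted by index" clause; even when a clustered index exists, the only guaranteed ordering is an explicit ORDER BY on that index's key columns. A sketch of looking the key columns up and then using them (KeyCol1/KeyCol2 stand in for whatever the clustered key really is):

EXEC sp_helpindex 'tableA'                      -- lists each index and its key columns
SELECT * FROM tableA ORDER BY KeyCol1, KeyCol2  -- order by the clustered index keys found above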
I have a table that has thousands of rows inserted daily (rows are seldom updated or deleted)
The table is also involved in frequent non-simple select statements. It currently has about a million rows.
Out of the 15-odd columns in the table, I can see about 6 that would benefit from being indexed to speed up the select statements.
Before I do this, I was wondering if people think that perhaps I should create an indexed view that all select statements use, rather than adding indexes directly to the table.
Can anyone advise me the performance benefits/disadvantages of indexed views over indexed tables?
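For reference, an indexed view has a fairly specific shape: it must be created WITH SCHEMABINDING and the first index on it must be a unique clustered index. A minimal sketch with assumed names (Amount is assumed NOT NULL, which SUM in an indexed view requires):

CREATE VIEW dbo.vSalesSummary WITH SCHEMABINDING
AS
SELECT CustomerID, COUNT_BIG(*) AS RowCnt, SUM(Amount) AS TotalAmount
FROM dbo.SalesDetail
GROUP BY CustomerID
GO
CREATE UNIQUE CLUSTERED INDEX IX_vSalesSummary ON dbo.vSalesSummary (CustomerID)

Indexed views tend to pay off when they pre-aggregate or pre-join data that many queries reuse; for straightforward filters on a single table, indexes on the table itself are usually simpler and cheaper, since an indexed view also has to be maintained on every insert.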
Hi, I am new to using SQL Server 2000. This is a very simple question for all you guys. I am just at the learning stage, so any help from you is really appreciated.
I need to create a table called customers with the following columns: an identity column to self-populate as the primary key, joindate, leavedate, custcode, empID.
This is the one I tried:

create table customers (
    id int primary key identity (1,1) not null,
    joindate smalldatetime null,
    leavedate smalldatetime null,
    custcode varchar (10) not null,
    empid int not null
)

Is that code correct? I also want the following: create indexes on the leavedate, custcode and empid columns.
How do I create these indexes, and what happens when I create them (i.e., is there any advantage to creating indexes)?
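The indexes themselves are one statement each; a sketch:

create nonclustered index IX_customers_leavedate on customers (leavedate)
create nonclustered index IX_customers_custcode on customers (custcode)
create nonclustered index IX_customers_empid on customers (empid)

The advantage is that queries filtering or joining on those columns (for example WHERE empid = 42) can seek in the index instead of scanning the whole table; the cost is that every insert, update and delete also has to maintain the indexes.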
Hi, I've got 2 table variables inside of a SQL 2000 function: @tmpBigList(BItemID, BRank) and @tmpSmallList(ItemID, Rank). The following UPDATE statement can run for a long time if @tmpBigList has 500 rows and @tmpSmallList has 35,000 rows:

UPDATE @tmpBigList
SET BRank = t.Rank
FROM @tmpBigList
JOIN @tmpSmallList t ON t.ItemID = BItemID

Looking at the query plan, you see that the INNER JOIN of @tmpBigList to @tmpSmallList results in 500 * 35,000 = 17,500,000 rows being returned from @tmpSmallList. That takes a long time. An index would help, but it appears that you can't add an index to a table variable. Changing to a temp table does not work since it's in a function. Thanks, Joe Landes
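Within those SQL 2000 limits, the one index a table variable can have is whatever a PRIMARY KEY or UNIQUE constraint creates in the DECLARE, which may be enough here. A sketch, assuming BItemID and ItemID are unique within their respective lists:

DECLARE @tmpBigList TABLE (BItemID int NOT NULL PRIMARY KEY, BRank int NULL)
DECLARE @tmpSmallList TABLE (ItemID int NOT NULL PRIMARY KEY, Rank int NOT NULL)

With @tmpSmallList keyed on ItemID, the join can seek into it once per @tmpBigList row instead of rescanning all 35,000 rows each time.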
We have data in tables for multiple users (logins). Each user's data is identified by a column named USER. No user has direct access to the tables, only through views; we have created views and stored procs, and the views perform DML operations on the tables using the condition WHERE USER = SUSER_SNAME() (i.e., the logged-in user), so there is no way of getting another user's data.
Each table has a USER column and we are querying data based on the logged-in user; this column is a foreign key to the USER table. Each view contains the USER column in its WHERE clause, so every query ends up searching all records. Is there any way to get the data without searching all records?
I have heard about table partitioning, index partitioning and view partitioning. Would they help boost my query performance?
Also, please let me know whether there is any good way of designing this apart from the above options.
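Short of partitioning, the first thing worth checking is whether the USER column leads the indexes these views rely on: with WHERE USER = SUSER_SNAME() on every query, a composite index with USER as the leading column lets each user's rows be found with a seek instead of a scan over everyone's data. A sketch with assumed table and column names:

CREATE NONCLUSTERED INDEX IX_Orders_User_OrderDate ON dbo.Orders ([USER], OrderDate)

A partitioned view (one member table per user or group of users, each with a CHECK constraint on USER, combined with UNION ALL) is the SQL 2000-era way to get true partition elimination, but it is considerably more work to maintain than a well-placed index.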
This is not allowed because of the index on (Col2, Col3, Col4); Col4 must be different from 1.
I can see only one way to insert into this table:
SELECT @max_col4= max(Col4) FROM TABLEXXX WHERE Col2=30082006 AND Col3='TERM1'
INSERT INTO TABLEA(Col2,Col3,Col4,Col5) VALUES (30082006,'TERM1',(@max_col4+1),7)
This works with only one client, but if there is more than one, two clients might try to insert into the table at the same time. When one of them gets max_Col4, let's say it gets 89. At the same time another client starts the process and it gets 89 too. After the first client inserts its row, max_Col4 will be 89, but the second client will still try to insert with Col4 as 89; namely, it will boom...
There must be another method to achieve this job, but what is it?
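One way to make the read-max-then-insert pair safe is to take a range lock when reading the max, inside a transaction, so a second client blocks until the first commits. A sketch built on the posted statements (using TABLEXXX for both, since the two table names in the post differ):

DECLARE @max_col4 int

BEGIN TRAN
    SELECT @max_col4 = ISNULL(MAX(Col4), 0)
    FROM TABLEXXX WITH (UPDLOCK, HOLDLOCK)
    WHERE Col2 = 30082006 AND Col3 = 'TERM1'

    INSERT INTO TABLEXXX (Col2, Col3, Col4, Col5)
    VALUES (30082006, 'TERM1', @max_col4 + 1, 7)
COMMIT TRAN

The UPDLOCK and HOLDLOCK hints keep the lock on the scanned range until the commit, so two clients can no longer both read 89. An alternative is a single INSERT ... SELECT MAX(Col4) + 1 statement with the same hints, combined with a retry if a duplicate-key error still slips through.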
Is there a way to track which tables/indexes/stored procedures are being used? I know that Profiler can do this, but I am looking for a way to query a system table to get this information. Oracle has a way to turn monitoring on/off for tables and indexes, so I was wondering whether this info is available and, if so, whether something needs to be done to activate collecting it.
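For what it's worth, there is no comparable system table to query in SQL 2000, which is why Profiler (or a server-side trace) is the usual answer there; but if the instance happens to be SQL Server 2005 or later, the index usage DMV gives roughly what Oracle's monitoring gives. A sketch for 2005+:

SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()

The counters reset when the instance restarts, so they only reflect usage since the last restart; stored procedure usage is harder to get from DMVs in 2005, so a trace may still be needed for that part.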
Hi all, my question is about indexes on partitions: I have a table with, say, 5 partitions and I want to create an index on individual partitions and not on the whole table. The objective is that if I create a table-level index on a partitioned table and eventually drop one of the partitions or add another partition, what happens to the index? 1) Do I need to re-create the index for the partitions that are left after deleting one partition? 2) If a partition is added, do I need to re-create the index for the whole table, or just create the index for that particular new partition?
Let me know if there is any white paper or code available. I have gone through the white paper "SQL Server 2005 Partitioned Tables and Indexes" by Kimberly L. Tripp, Founder, SQLskills.com.
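For what it's worth, in SQL 2005 an index created on the same partition scheme as its table (an "aligned" index) is itself split into the same partitions, so it follows the table when partitions are added, merged, or switched rather than needing a full rebuild. A sketch with assumed names:

CREATE NONCLUSTERED INDEX IX_Sales_SaleDate
ON dbo.Sales (SaleDate)
ON psSalesByMonth (SaleDate)   -- same partition scheme and column as the table, so the index is aligned

There is no supported way to create an index on just one partition of a partitioned table; the index is defined at table level and partitioned alongside the data, and individual partitions of it can be rebuilt with ALTER INDEX ... REBUILD PARTITION = n.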
We are trying to load flat text files with upwards of 7 million records into a table in SQL Server. The table has a clustered index on 3 fields. We set up the indexes prior to importing the data. We are sometimes able to complete smaller tables (500,000-750,000 records), however when we try the larger tables an error occurs:
Error at Destination for row number 6785496. Errors encountered so far in this task: 1
Location: somerge.c:1573 Expression: mrP->mrStatus!=MERGERUN::NONE SPID: 11 Process ID: 173
The destination row number is the same number as the total number of rows that we are trying to load.
None of the records end up importing. The row number it gives is always the total number of records that was in the text file I was trying to import. I tried importing the text files first and then building the clustered indexes, but a table with only 300,000 records ran for nearly 4 days without completing before we killed it. Before we try to load a file we always delete whatever is there. Some of the files that we try to load are new and we have to set up the indexes from scratch. We are using the DTS wizard. Someone told me to find a way to get it to commit every 1,000 rows or so, but I can't find a way to do it. I looked and looked but can't find it!!!
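The commit-every-N behaviour does exist in DTS: on the Transform Data task's Options tab there is an "Insert batch size" setting (with "Use fast load" enabled), where 0 means the whole file is loaded as one batch. If the load can be done outside the wizard, BULK INSERT exposes the same idea directly; a sketch, where the path, delimiters and table name are only placeholders:

BULK INSERT dbo.Sales
FROM 'D:\loads\sales.txt'
WITH (FIELDTERMINATOR = ';', ROWTERMINATOR = '\n', BATCHSIZE = 50000, TABLOCK)

With BATCHSIZE set, each batch commits separately, so a failure near the end of a 7-million-row file no longer rolls back everything loaded so far.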
I have a proc which creates a rather large temp table, and then I create an index on it. The problem arises when multiple users call this proc at the same time: the second user gets errors because they cannot create the index, since it already exists. I know I can't just name the index #index_name, although this would be ideal. Does anyone know of a way to let multiple users create an index besides using dynamic SQL? Thanks in advance.
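For what it's worth, index names only have to be unique per table, and each session gets its own copy of a local #temp table, so a fixed index name on a local temp table does not normally collide across sessions; explicitly named PRIMARY KEY/UNIQUE constraints on temp tables, on the other hand, are named objects in tempdb and can collide. One way to sidestep names entirely is to let unnamed constraints create the indexes, sketched here with assumed columns:

CREATE TABLE #work (
    id int NOT NULL PRIMARY KEY,     -- clustered index with a system-generated name
    acct varchar(20) NOT NULL,
    UNIQUE (acct, id)                -- nonclustered index, also auto-named
)

This only works where the indexed columns can carry a uniqueness guarantee; a plain non-unique index still needs CREATE INDEX.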
In the view example above, if table PARSEL has 1 million records, do I have to create an index for PARSELDURUM, or does SQL Server create an automatic virtual index when I create a view?
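No automatic index is created: a standard view is just a stored query, so a SELECT through it uses whatever indexes exist on PARSEL itself. If PARSELDURUM is the filtering column, an ordinary index on the base table is the first thing to try (the index name here is an assumption):

CREATE NONCLUSTERED INDEX IX_PARSEL_PARSELDURUM ON PARSEL (PARSELDURUM)

An indexed view (created WITH SCHEMABINDING plus a unique clustered index) is a separate, heavier feature and is not created automatically either.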