Will A Change In An Index Slow Down Queries In SQL Server 2000 SP4?
May 12, 2008
Hi
Will a change in an index slow down queries in SQL Server 2000 SP4?
Please help me.
Hi,
We are running SQL Server 2005 Enterprise Edition with SP2 on a Windows 2003 Enterprise Server SP2 box with an Intel E6600 dual-core CPU and 4 GB of RAM. We have a C# application that performs a large number of calculations in a loop. The application first loads the transactions that need to be updated, then goes through the rows one by one, queries another table to get some values, and updates the transaction.
I have set a limit of 2 GB of RAM for SQL Server. When I run the application, it updates about 5 records per second (using the process described above). After roughly 10,000 records, it slows down to about 1 record per second. I have examined Activity Monitor but can't find anything that might indicate what's causing this.
I have read that there are some known issues with hyper-threaded CPUs, but since my CPU is dual-core I don't know whether the issue applies to it as well, and I have no way to disable one core in the BIOS.
The only thing I have noticed is that if I change the Max Degree of Parallelism setting when the server slows down (i.e. from 0 to 1 and then back to 0), the server speeds up for another 10,000 record updates and then slows down again. Does anyone have an idea of what's causing this? What does changing that property do that makes the server speed up again?
If there is no solution for this problem, does anyone know of a stored procedure or anything else that can be used programmatically to speed up the server when it slows down? (This is not the optimal solution, but I would use it as a workaround.)
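For what it's worth, the Max Degree of Parallelism toggle can be scripted with sp_configure; note that reconfiguring this option also clears cached plans as a side effect, which may be why the server appears to "reset" afterwards. A minimal, hedged sketch of the same toggle done in T-SQL (the values are only an example):

-- Hedged sketch: toggle MAXDOP programmatically.
-- RECONFIGURE after changing this option flushes cached plans, which may be
-- the real reason performance recovers after the toggle.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 0;  -- back to the original value
RECONFIGURE;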
Any advice will be greatly appreciated.
Thanks,
Joe
We are trying a cluster setup on MS SQL Server 2005 with one machine as Publisher (primary) and another as Subscriber (secondary).
When both the Publisher and the Subscriber are running everything works fine, but when the Publisher goes down, the Subscriber runs into trouble.
We are using MSADO15.DLL for database connectivity.
We have a table where the ID column is the primary key with auto-increment (identity) enabled.
We use the AddNew() function of this library to insert a new record.
We fill a structure with all the necessary values and pass it to AddNew() with the ID field set to 0.
When we use AddNew() on the above table to insert a record, the ID auto-inserted into the table is correct, but the returned structure contains the wrong ID value.
We traced this problem using SQL Server Profiler.
The AddNew() function performs the following operations behind the scenes:
1. Inserts the record into the table using INSERT
2. Retrieves the auto-increment ID using SELECT @@IDENTITY
3. Fills this ID into the structure passed to AddNew() and returns it.
But the ID returned by the SELECT @@IDENTITY query is wrong.
There is another way to retrieve the last ID inserted into the table, IDENT_CURRENT('table_name'), which returns the right ID.
Can we change the SELECT @@IDENTITY call to IDENT_CURRENT in the AddNew() behaviour of the MSADO DLL?
Or is there another way of retrieving the right ID?
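For context, @@IDENTITY returns the last identity value generated on the connection in any scope, so if triggers (replication triggers included) fire on the insert and write to their own identity tables, the trigger's value comes back instead. A hedged sketch of the usual alternatives; the table and column names are only placeholders:

-- Hedged sketch, assuming a table dbo.Orders with an identity column ID.
INSERT INTO dbo.Orders (CustomerName) VALUES ('test');

SELECT SCOPE_IDENTITY();            -- identity generated in the current scope (not skewed by triggers)
SELECT @@IDENTITY;                  -- last identity on the connection, any scope (skewed by triggers)
SELECT IDENT_CURRENT('dbo.Orders'); -- last identity for the table, any connection (race-prone under load)

SCOPE_IDENTITY() is generally the safer choice under concurrency, since IDENT_CURRENT can return another session's value.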
Hi Guys,
This is Ravi. I'm working with SSIS 2005. I have created the DTSX package from SQL Server and executed it successfully from my .NET 2005 code.
Now I have a requirement to change the source database query dynamically, i.e. based on the user's selection I need to get the data from different SQL tables and put it into an Excel file.
Can anyone help me with this?
Regards,
Ravi K. Kalyan
Mascon Global Limited.
I have a requirement to calculate the % change between the number of orders received today and the number of orders received 3 days ago. All the data is in the same table, and there is a received-date column.
I have two separate count(*) queries running, one for today and one for 3 days back, and I combine the results. Is it possible to get the % change between orders received 3 days back and today in one query? Also, if I want the number of orders received today between 12:00 AM and the current time, how would I modify the query?
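One way to do this in a single statement is conditional aggregation; a hedged sketch, assuming a table dbo.Orders with a datetime column ReceivedDate (both names are placeholders):

-- Hedged sketch: both counts in one pass, plus the % change.
DECLARE @TodayStart DATETIME
SET @TodayStart = DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0)   -- midnight today

SELECT OrdersToday,
       OrdersThreeDaysBack,
       100.0 * (OrdersToday - OrdersThreeDaysBack) / NULLIF(OrdersThreeDaysBack, 0) AS PctChange
FROM (
    SELECT
        SUM(CASE WHEN ReceivedDate >= @TodayStart
                  AND ReceivedDate <= GETDATE() THEN 1 ELSE 0 END) AS OrdersToday,          -- midnight today up to now
        SUM(CASE WHEN ReceivedDate >= DATEADD(DAY, -3, @TodayStart)
                  AND ReceivedDate <  DATEADD(DAY, -2, @TodayStart) THEN 1 ELSE 0 END) AS OrdersThreeDaysBack  -- the whole day three days back
    FROM dbo.Orders
) AS t

The first CASE already restricts "today" to midnight through the current time, which also answers the second part of the question.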
I found out a while back that my facts table has a non-clustered index on its facts_id. A bunch of procedures run updates against a facts_id on that facts table. I was wondering whether changing it into a clustered index would be worth the effort / would make sense, considering there are 110+ million facts and the other indexes would have to be re-indexed as well. Facts are loaded sequentially, so I would suspect the facts are already in that order.
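If you do decide to try it, the conversion itself is a drop-and-create; a hedged sketch with placeholder index/table names (on 110+ million rows this rebuilds the table and every non-clustered index, so it needs a maintenance window and plenty of log and tempdb space):

-- Hedged sketch; names are placeholders for the real facts table and index.
-- Building the clustered index also rebuilds the remaining non-clustered indexes.
DROP INDEX dbo.Facts.IX_Facts_FactsId        -- SQL 2000 syntax; on 2005+ use: DROP INDEX IX_Facts_FactsId ON dbo.Facts
CREATE CLUSTERED INDEX IX_Facts_FactsId ON dbo.Facts (facts_id)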
thanx,
I had my database in 6.5 and upgraded it to 7.0 using the SQL Server Upgrade Wizard. Then I created a full-text catalog. When I start an incremental population, it gives me a warning that I can create full-text indexes but cannot execute queries against them because the database is still in SQL Server 6.5 compatibility mode. What is the reason behind this?
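Full-text queries require the database compatibility level to be 70 or higher, and the upgrade wizard leaves it at 65. A hedged sketch of checking and raising it (the database name is a placeholder):

-- Hedged sketch (SQL Server 7.0/2000 syntax); test application behaviour after raising the level.
EXEC sp_dbcmptlevel 'MyDatabase'        -- reports the current compatibility level
EXEC sp_dbcmptlevel 'MyDatabase', 70    -- raise it to 7.0 so full-text queries are allowed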
Hi everybody,
I have five instances of SQL Server 2000 with the SAME database but a DIFFERENT owner on each server. As the administrator, I have a lot of queries that I have to execute on some or all of the servers. The problem is that I connect to every server with MY user, not with each of the db owners, so I have queries like this:
select * from mike.table1 t1 join mike.table2 t2 on...
When I connect to another server I have to change mike to jeremy in all the SQL, and when I connect to yet another server I have to change jeremy to nina...
I know there was an old, v7, deprecated way to change the "schema", something like:
change current user to kimberly
go
select * from table1 t1 join table2 t2 on...
This way I would change the connected user ONLY once. I could even put an IF at the beginning of the script to change the connected user depending on @@SERVERNAME!
Can someone remember this instruction?
Thanks in advance for your help!
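The deprecated statement being described sounds like SETUSER; on SQL Server 2005 the supported replacement is EXECUTE AS. A hedged sketch of both, where 'mike' is only a placeholder for the database user that owns the objects:

-- SQL Server 7.0/2000 (deprecated): impersonate another user for the session.
SETUSER 'mike'
-- ... run the queries here without hard-coding the owner ...
SETUSER               -- with no argument, reverts to the original user

-- SQL Server 2005 replacement:
EXECUTE AS USER = 'mike';
-- ... run the queries ...
REVERT;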
Is there any way to change the value of a primary key?
At one of our client sites we have configured AlwaysOn in synchronous-commit mode. We also have scheduled rebuild-index and update-statistics jobs that run at night every alternate day. The issue is that more than 100 sleeping sessions are blocking the update-statistics job.
I have to stop the update-statistics job manually once I get to the office.
Once I killed the blocking sleeping session, but then another sleeping session blocked the job, and so on.
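Before killing anything, it helps to see exactly which sessions are doing the blocking and what they last ran; a hedged sketch using the standard DMVs:

-- Hedged sketch: show each waiting request and the session blocking it.
SELECT r.session_id          AS waiting_session,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text                AS waiting_sql,
       s.status              AS blocker_status,      -- typically 'sleeping' in this scenario
       s.host_name,
       s.program_name
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s
     ON s.session_id = r.blocking_session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;

Sleeping blockers usually mean an application is holding a transaction open, so the longer-term fix is on the application side rather than in the job.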
I have been testing methods to maintain indexes in a SQL Server 2005 database that has been migrated from SQL Server 2000. The compatibility level is still set to 80. I used the query below to inspect the degree of fragmentation, among other things.
SELECT a.index_id
     , b.name
     , a.database_id
     , a.avg_fragmentation_in_percent
     , a.index_type_desc
     , a.fragment_count
     , a.page_count
FROM sys.dm_db_index_physical_stats (NULL, NULL, NULL, NULL, 'DETAILED') AS a
JOIN sys.indexes AS b ON a.object_id = b.object_id AND a.index_id = b.index_id
Some of the indexes in the database had a high degree of fragmentation based on the avg_fragmentation_in_percent value. I tried drop+create, rebuild and reorganize on those indexes. Predictably, drop+create was the most effective, but even that did not always reduce fragmentation much. Sometimes the fragmentation was the same no matter which method I used; other times drop+create helped while rebuild made things worse.
What is going on?
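One thing worth noting (offered as an assumption about what is happening here, not a diagnosis): very small indexes often keep a non-zero avg_fragmentation_in_percent no matter how they are rebuilt, because their first pages come from mixed extents. A common, hedged convention is to ignore indexes below a page-count threshold and only act on the rest, for example:

-- Hedged sketch: only consider indexes big enough for fragmentation to matter.
SELECT a.index_id, b.name, a.avg_fragmentation_in_percent, a.page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED') AS a
JOIN sys.indexes AS b ON a.object_id = b.object_id AND a.index_id = b.index_id
WHERE a.page_count > 1000                      -- threshold is a rule of thumb, not a hard rule
  AND a.avg_fragmentation_in_percent > 30;     -- candidates for ALTER INDEX ... REBUILD
-- e.g. ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD;   -- index/table names are placeholders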
I am tracking FX changes for USD vs. other currencies. I want to allow my users to choose a start date and an end date and see the cumulative % change for the index in a line chart (with the baseline set at the start date). An additional question: I would prefer the date slicer to be of the "Timeline" type instead of two regular slicers.
Hi,
I am working on a script to do the following:
get a list of indexes on all tables in all databases on a SQL Server instance;
if the index property that allows page locks is off, turn it on, re-index, and turn it off again.
My problem is:
I want to use a 'USE <db>' statement in the middle of my script, but it is not working. I tried using dynamic SQL with
set @cmd='use '+ @dbname
exec (@cmd)
but this is not working either.
Can we use a 'USE' statement in the middle of a script? If not, what is the alternative?
My script looks as follows:
DECLARE @Database VARCHAR(255)
DECLARE @Table VARCHAR(255)
declare @Index varchar(255)
DECLARE @cmd NVARCHAR(500)
DECLARE @fillfactor INT
SET @fillfactor = 90
DECLARE DatabaseCursor CURSOR FOR
SELECT name FROM master.dbo.sysdatabases
WHERE name NOT IN ('master','model','msdb','tempdb','distribution')
ORDER BY 1
OPEN DatabaseCursor
FETCH NEXT FROM DatabaseCursor INTO @Database
WHILE @@FETCH_STATUS = 0
BEGIN
SET @cmd = 'DECLARE TableCursor CURSOR FOR SELECT table_catalog + ''.'' + table_schema + ''.'' + table_name as tableName
FROM ' + @Database + '.INFORMATION_SCHEMA.TABLES WHERE table_type = ''BASE TABLE'''
-- create table cursor
EXEC (@cmd)
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @Table
WHILE @@FETCH_STATUS = 0
BEGIN
set @cmd='use '+@Database
print (@cmd)
exec (@cmd)
declare IndexCursor CURSOR for select name from sys.indexes where object_id=object_id(@Table)
open IndexCursor
fetch next from IndexCursor into @Index
print (@table)
--select name from sys.indexes where object_id=object_id(@Table)
print (@index)
WHILE @@FETCH_STATUS = 0
begin
if (INDEXPROPERTY(OBJECT_ID(@Table),@Index,'IsPageLockDisallowed')=1)
begin
print (@Index + ' page locking off')
-- SET @cmd='ALTER INDEX '+@Index +' ON '+@Table+' SET (ALLOW_PAGE_LOCKS = ON) reorganize
-- ALTER INDEX '+@Index +' ON '+@Table+' SET (ALLOW_PAGE_LOCKS = OFF)'
end
else
begin
print (@Index + ' page locking on')
-- SET @cmd='ALTER INDEX '+@Index +' ON '+@Table+' reorganize'
end
--PRINT (@cmd)
fetch next from IndexCursor into @Index
end
CLOSE IndexCursor
DEALLOCATE IndexCursor
FETCH NEXT FROM TableCursor INTO @Table
END
CLOSE TableCursor
DEALLOCATE TableCursor
FETCH NEXT FROM DatabaseCursor INTO @Database
END
CLOSE DatabaseCursor
DEALLOCATE DatabaseCursor
Can anyone help me please?
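A USE executed through EXEC() only changes the database context for the duration of that EXEC batch; as soon as the batch ends, the context reverts, which is why the script above behaves as if the USE never happened. A hedged sketch of the usual workaround: put the USE and the statements that need it into the same dynamic batch (or fully qualify the object names instead):

-- Hedged sketch: the USE and the ALTER INDEX live in the same dynamic batch,
-- so the context change actually applies to the ALTER INDEX.
SET @cmd = 'USE ' + QUOTENAME(@Database) + '; '
         + 'ALTER INDEX ' + QUOTENAME(@Index) + ' ON ' + @Table + ' SET (ALLOW_PAGE_LOCKS = ON);'
EXEC (@cmd)

Another option is to call the target database's own sys.sp_executesql through a variable holding its three-part name, which also executes the statement in that database's context.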
How can I enable full-text change tracking and background index updates on my table?
I recreated my full-text catalog and started a full population, but although my full-text index status shows Active, full-text change tracking and Update Index are disabled, and I don't know how to enable them.
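Depending on the version, these settings are usually toggled either with sp_fulltext_table (SQL Server 2000) or ALTER FULLTEXT INDEX (2005 and later); a hedged sketch with a placeholder table name:

-- SQL Server 2000: enable change tracking and background index updates.
EXEC sp_fulltext_table 'dbo.MyTable', 'start_change_tracking'
EXEC sp_fulltext_table 'dbo.MyTable', 'start_background_updateindex'

-- SQL Server 2005+ equivalent:
-- ALTER FULLTEXT INDEX ON dbo.MyTable SET CHANGE_TRACKING AUTO;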
Thanks in advance
Hi peeps,
We have just upgraded to Service Pack 4 on our SQL Server 2000.
We have a DTS job that normally takes about four hours to complete (this DTS job has been fine for the last three years).
However, after applying SP4, this DTS job now takes over 8 hours to complete.
There are no other processes running on the box, and the box is a high-end Dell machine with 8 GB of RAM.
Any advice on this would be greatly appreciated.
Bal
Hi guys, I am sitting here testing some variants of this simple SP, and I have a question that I couldn't answer with Google or any thread in this forum.
Perhaps I am doing something really simple completely wrong here.
Why do the local variables in the first code segment below slow down the overall execution of the procedure? (See also the note after the two procedures.)
Don't mind the logic; I only have them there to test some things out.
If I declare two variables like this:
DECLARE @v INT
SET @v = 100
and use them in a WHERE clause:
...WHERE [V] BETWEEN @v AND @x)
is that any different from
...WHERE [V] BETWEEN 100 AND 200)
I can't figure this out; why does it hurt performance so badly? As a C# guy it looks like the same thing to me.
Thanks in advance
/Johan
Slow
ALTER PROCEDURE [dbo].[spStudio_Get_Cdr]
@beginDate DATETIME = null,
@endDate DATETIME = null,
@beginTime INT,
@endTime INT,
@subscribers VARCHAR(MAX),
@exchanges VARCHAR(MAX) = '1:',
@beginDateValue int,
@endDateValue int
AS
BEGIN
SET NOCOUNT ON;
DECLARE @s INT
SET @s = @beginDateValue
DECLARE @e INT
SET @e = @endDateValue
print @s
print @e
DECLARE @exch TABLE(Item Varchar(50))
INSERT INTO @exch
SELECT Item FROM [SplitDelimitedVarChar] (@exchanges, '|') ORDER BY Item
DECLARE @subs TABLE(Item Varchar(19))
INSERT INTO @subs
SELECT Item FROM [SplitDelimitedVarChar] (@subscribers, '|') ORDER BY Item
SELECT [id]
,[Abandon]
,[Bcap]
,[BlId]
,[CallChg]
,[CallIdentifier]
,[ChgInfo]
,[ClId]
,[CustNo]
,[Digits]
,[DigitType]
,[Dnis1]
,[Dnis2]
,[Duration]
,[FgDani]
,[HoundredHourDuration]
,[Name]
,[NameId]
,[Npi]
,[OrigAuxId]
,[OrigId]
,[OrigMin]
,[Origten0]
,[RecNo]
,[RecType]
,[Redir]
,[TerId]
,[TermAuxId]
,[TermMin]
,[Termten0]
,[Timestamp]
,[Ton]
,[Tta]
,[Twt]
,[Level]
FROM
[dbo].[Cdr] AS C
WHERE
(C.[DateValue] BETWEEN @s AND @e)
AND
(C.[TimeValue] BETWEEN @beginTime AND @endTime)
AND
EXISTS(SELECT [Item] FROM @exch WHERE [Item] = C.[Level])
AND
(EXISTS(SELECT [Item] FROM @subs WHERE [Item] = C.[OrigId] OR [Item] = C.[TerId]))
END
Fast
ALTER PROCEDURE [dbo].[spStudio_Get_Cdr]
@beginDate DATETIME = null,
@endDate DATETIME = null,
@beginTime INT,
@endTime INT,
@subscribers VARCHAR(MAX),
@exchanges VARCHAR(MAX) = '1:',
@beginDateValue int,
@endDateValue int
AS
BEGIN
SET NOCOUNT ON;
DECLARE @exch TABLE(Item Varchar(50))
INSERT INTO @exch
SELECT Item FROM [SplitDelimitedVarChar] (@exchanges, '|') ORDER BY Item
DECLARE @subs TABLE(Item Varchar(19))
INSERT INTO @subs
SELECT Item FROM [SplitDelimitedVarChar] (@subscribers, '|') ORDER BY Item
SELECT [id]
,[Abandon]
,[Bcap]
,[BlId]
,[CallChg]
,[CallIdentifier]
,[ChgInfo]
,[ClId]
,[CustNo]
,[Digits]
,[DigitType]
,[Dnis1]
,[Dnis2]
,[Duration]
,[FgDani]
,[HoundredHourDuration]
,[Name]
,[NameId]
,[Npi]
,[OrigAuxId]
,[OrigId]
,[OrigMin]
,[Origten0]
,[RecNo]
,[RecType]
,[Redir]
,[TerId]
,[TermAuxId]
,[TermMin]
,[Termten0]
,[Timestamp]
,[Ton]
,[Tta]
,[Twt]
,[Level]
FROM
[dbo].[Cdr] AS C
WHERE
(C.[DateValue] BETWEEN @beginDateValue AND @endDateValue)
AND
(C.[TimeValue] BETWEEN @beginTime AND @endTime)
AND
EXISTS(SELECT [Item] FROM @exch WHERE [Item] = C.[Level])
AND
(EXISTS(SELECT [Item] FROM @subs WHERE [Item] = C.[OrigId] OR [Item] = C.[TerId]))
END
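As noted above, a likely explanation for the difference (stated as an assumption, since only the execution plans would confirm it) is that the optimizer cannot see the values of local variables at compile time: the slow version is costed with generic estimates for @s and @e, while the fast version is costed with the actual parameter values sniffed at compile time. A hedged, self-contained illustration of the two common workarounds; the values are placeholders:

-- Option 1: keep the parameters in the predicate, as the fast procedure already does.
-- Option 2: if local variables are really needed, recompile the statement so their
--           runtime values can be used for the estimates.
DECLARE @s INT, @e INT
SET @s = 20080101   -- placeholder values in the same int-date format the table appears to use
SET @e = 20080131

SELECT COUNT(*) FROM dbo.Cdr AS C WHERE C.DateValue BETWEEN @s AND @e                     -- generic estimates
SELECT COUNT(*) FROM dbo.Cdr AS C WHERE C.DateValue BETWEEN @s AND @e OPTION (RECOMPILE)  -- uses current values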
Hi MSDN PPL!
I have a SQL Server 2005 instance that slows down over time; it almost grinds to a halt. The data is exposed via an ASP.NET 2.0 web interface, and the web application gets slower and slower over a matter of days. If I restart the SQL Server process it comes back to life and starts serving as it should, nice and snappy.
The web application does not do much writing to the DB; 90% of the time it is just reading. The DB server is worked hard by a console application that produces data each day. This console app runs for about 30 minutes, during which there is a lot of reading, processing and writing back to the DB as fast as the hardware will allow. It is this massive workload that is slowing the DB server.
This seems to be related to the amount of memory SQL Server is using. In Task Manager I can see that sqlservr.exe is using 1,878,904 K, and this figure continues to rise while the console app runs; I have seen it go over 2 GB. When the console app finishes, the memory is still allocated and performance is slow. This continues to get worse after a few days of processing.
The machine's specs are:
* Windows Server 2003 R2 Standard
* SQL Server 2005 Standard 9.00.3054.00
* Twin 3.2 GHz Xeons
* 3.5 GB RAM
I plan to apply the "Cumulative hotfix package (build 3152) for SQL Server 2005 Service Pack 2" in the blind hope that it solves the problem.
Any suggestions?
Sorry if this is in the wrong place guys, couldn't find a general performance topic. Please move accordingly.
Thanks,
Matt.
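Regarding the memory growth described above: by default SQL Server 2005 keeps growing its buffer pool until Windows comes under memory pressure, so the behaviour is expected unless an upper limit is set, and capping max server memory is usually the first thing to check. A hedged sketch (the 2560 MB value is only an example for a 3.5 GB box):

-- Hedged sketch: cap the buffer pool so the OS and other processes keep some headroom.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2560;   -- example value, tune for the machine
RECONFIGURE;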
I'm hoping someone will be able to point me in the right direction for solving this problem, as I've got a bit stuck.
The SQL Server 2005 stored procedure runs in about 3 seconds against a small table when run from SQL Server Management Studio (starting with DBCC FREEPROCCACHE before execution), but it times out when run through ADO.NET (45-second timeout).
I've made sure the connection was closed prior to opening it and executing the adapter. I'm a bit stuck as to where to check next, though.
Any ideas gratefully received. Thanks
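One frequent cause of exactly this symptom (offered as an assumption to verify, not a diagnosis) is that SSMS and ADO.NET use different SET ARITHABORT defaults, so the two callers get separate cached plans and the ADO.NET plan is compiled for unrepresentative parameter values. A hedged way to reproduce the slow plan inside Management Studio; the procedure and parameter names are placeholders:

-- Hedged sketch: mimic the ADO.NET connection settings, then run the proc
-- with the same parameter values the application passes.
SET ARITHABORT OFF;
EXEC dbo.MyProcedure @Param1 = 'value';   -- placeholder names
SET ARITHABORT ON;                        -- restore the SSMS default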
Hi All,
We're running SQL Server 2000, SP3.
I have a stored procedure that consists of a single SELECT statement. It selects a bunch of columns, one of which is a column of data type TEXT.
The SP takes 30 seconds to run, which causes timeouts on the front end. When I comment the TEXT column out of the SELECT, it only takes 1 second.
Is there anything I can do about it? I know I can't index a TEXT column. It's also not used in the WHERE clause, so there is no need for Full-Text Search. But we absolutely have to have it in the SELECT clause.
Thanks for the help in advance.
~Narine
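One option worth testing (hedged: it only helps when most of the TEXT values are small) is the 'text in row' table option, which stores small TEXT values on the data page instead of on separate text pages and can cut the extra reads that fetching TEXT data normally requires:

-- Hedged sketch; table name and the 256-byte limit are placeholders, test on a copy first.
EXEC sp_tableoption 'dbo.MyTable', 'text in row', '256'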
We noticed a deadlock 3-4 weeks ago on a table (table1), and the deadlock graph was captured.
When I analyze the deadlock graph and look up the page number using DBCC PAGE, I get the object id of a different table (table2), but the deadlock graph shows the name of the object as table1.
Is it possible that subsequent defragmentation of the indexes caused that page id to be re-allocated to a different table? I only got around to checking the deadlock graph after 3-4 weeks.
Hi all,
In my project I have a website, and through it I run my reports. But the reports take a lot of time to render. When I checked Profiler, it showed that the SP for the report is run around 4-5 times. Because of this, the report rendering takes a lot of time.
When I ran the SP with the same set of parameters in Query Analyzer, it ran in around 18 seconds. But when I ran the report from the web interface, it took around 3 minutes to completely show the data, and the SP was run 5 times.
I am having serious problems with the report's performance because of this. Many times the report just times out. I have set the timeout to 10 minutes, and because the SP is run 5 times, the report still times out if there is a huge amount of data.
Any help would be appreciated.
Thanks in advance.
Swati
Hi
We have a T-SQL statement in an SP that generates on average between 50 and 60 rows of data, pretty small! The statement references a view, some tables and a temporary # table that is created in the SP.
Everything works a treat and runs sub-second until you put an INSERT INTO in front of the statement. The SP then takes about a minute to run, which happens to be about the same amount of time it takes to generate all the data in the view.
I have not attached the T-SQL statement at this stage, as it runs fine without the INSERT INTO, but I would be happy to post it if need be.
Anybody else ever had this problem?
We are using SQL Server 2005 SP2 64 bit.
Art99
I have two tables - gift_cards and history - related by a field called "card_number". This field is encrypted in the history table but not in the gift_cards table. Let's say the passphrase is 'mypassphrase'. The following query takes about 1 second to execute with a fairly large amount of data in both tables:
SELECT max([history].[date_of_wash]) AS LastUse
FROM gift_cards AS gc LEFT JOIN history
ON gc.card_number=CAST(DecryptByPassPhrase('mypassphrase', HISTORY.CARD_NUMBER) AS VARCHAR(50))
GROUP BY gc.card_number
When I use a declared variable to contain the passphrase, the same query takes over 40 seconds. For example,
declare @vchPassphrase as nvarchar(20)
select @vchPassphrase = 'mypassphrase'
SELECT max([history].[date_of_wash]) AS LastUse
FROM gift_cards AS gc LEFT JOIN history
ON gc.card_number=CAST(DecryptByPassPhrase(@vchPassphrase, HISTORY.CARD_NUMBER) AS VARCHAR(50))
GROUP BY gc.card_number
This query is part of a stored procedure and, for security reasons, I can't embed the passphrase in it. Can anyone explain the discrepancy between execution times and suggest a way to make the second query execute faster?
Thanks,
SJonesy
We are using an OLE DB Source for the Data Flow source and an OLE DB Destination for the Data Flow destination. The amount of data being moved is about 30 million rows, gathered using a SQL command. There are no other transformations in between; it goes straight from one to the other. The flow starts amazingly fast, but after 5 million rows it slows considerably. I wondered if anyone has experienced anything similar with large loads.
I have an SSIS package that is designed to import log files. Basically, it loops through a directory, parses text from the log files, and dumps it to the database. The issue I'm having is not with the package reading the files, but with what happens when it attempts to write the information to the db. What I'm seeing is that it will hit a file, read some 3,000 lines, convert them (using the Data Conversion component), and then "hang" when it tries to write them to the db.
I've run SQL Server Profiler and had originally thought the issue had to do with collation, because I was seeing every char column with the word "collate" next to it. On the other hand, while looking at Windows Performance Monitor, I see that the disk queue is maxed at 100% for about a minute after importing just one log file.
I'm not sure whether this is due to the size of the db and having to update a clustered index, or not.
The machine where this is all taking place has 2 arrays, both RAID 10. Each array is 600 GB and consists of 8 disks. The SSIS package is being executed locally using BIDS.
Your help is appreciated!
We are going to use SQL Server change tracking. The problem is that some of the tables to be tracked have no primary keys; there are only unique clustered indexes. The question is: what is the best way to turn on change tracking for these tables in our circumstances?
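Change tracking requires a primary key on the tracked table, so the usual approach (offered as a hedged sketch with placeholder names) is to promote the columns of the existing unique clustered index to a primary key constraint and then enable tracking:

-- Hedged sketch: add a PK on the same column(s) as the unique clustered index,
-- then enable change tracking. Names are placeholders.
-- Note: this adds a second unique index; alternatively, drop the unique clustered
-- index and recreate it as a clustered primary key.
ALTER TABLE dbo.MyTable
    ADD CONSTRAINT PK_MyTable PRIMARY KEY NONCLUSTERED (KeyColumn);

ALTER DATABASE MyDatabase
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.MyTable
    ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);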
Please explain the differences between the logical and physical operations whose graphical icons we can see in the Execution Plan tab in Management Studio.
thank you in advance
Hi all,
We have many tables that have a clustered index on a column with data type char(200).
Does anyone have a script to change the clustered index to nonclustered for all user tables that have a clustered index on a char(200) column?
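A hedged sketch of a catalog query that generates (but does not execute) the DROP/CREATE statements: it assumes SQL Server 2005+, only handles single-column clustered indexes, and skips clustered indexes that back a primary key or unique constraint (those need ALTER TABLE DROP CONSTRAINT instead). Review the output before running any of it.

-- Hedged sketch: generate conversion statements for single-column char(200) clustered indexes.
SELECT 'DROP INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
       + '; CREATE ' + CASE WHEN i.is_unique = 1 THEN 'UNIQUE ' ELSE '' END + 'NONCLUSTERED INDEX '
       + QUOTENAME(i.name) + ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
       + ' (' + QUOTENAME(c.name) + ');' AS rebuild_cmd
FROM sys.indexes AS i
JOIN sys.tables  AS t ON t.object_id = i.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.index_columns AS ic
     ON ic.object_id = i.object_id AND ic.index_id = i.index_id AND ic.key_ordinal = 1
JOIN sys.columns AS c
     ON c.object_id = ic.object_id AND c.column_id = ic.column_id
JOIN sys.types AS ty ON ty.user_type_id = c.user_type_id
WHERE i.type = 1                                             -- clustered indexes only
  AND ty.name = 'char' AND c.max_length = 200                -- leading key column is char(200)
  AND i.is_primary_key = 0 AND i.is_unique_constraint = 0    -- constraint-backed indexes need ALTER TABLE
  AND NOT EXISTS (SELECT 1 FROM sys.index_columns AS ic2     -- skip multi-column clustered indexes
                  WHERE ic2.object_id = i.object_id AND ic2.index_id = i.index_id AND ic2.key_ordinal > 1);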
Thanks,
Deepak
MS SQL Server 2000 SP3
I'm not the most knowledgeable DBA; I've had to learn almost completely on my own, AND on a production server, because it's the only MS SQL Server I have access to.
Everything was fine before I took the production server down for maintenance. Someone suggested that I re-index my tables because I was having some performance issues with a particularly large table (it didn't help that table, by the way), so I re-indexed.
Now everything works wonderfully, except for the performance issue mentioned AND one other thing that is going horribly wrong.
Here is the table:
create table ABMcontactlink
(
classifier varchar(20) not null, /* Classification of contact. */
transmitter varchar(36) not null,
contact integer not null, /* Link to ABMcontact (detail) table */
primary key (classifier,transmitter,contact),
foreign key (contact) references ABMcontacts(identifier),
group_name varchar(20) null,
priority smallint null, /* Authorization level. */
type smallint null, /* Autoalarm or Manual */
last_modification_date datetime, /* Date/time record last touched */
last_modification_id varchar(40) /* Who last touched record */
)
go
create index IndexABMcontactlink on
ABMcontactlink(classifier,transmitter)
go
create index CandidateABMcontactlink on
ABMcontactlink(transmitter)
go
As you can see, I have the primary key, which creates a clustered index, PK_ABMContactlink_Some Number, and two other indexes.
Now, this is a very busy production database, and most quick short queries benefit more from CandidateABMContactlink than from the other two indexes.
Unfortunately, in this production system, and on this table, seconds count A LOT, so with roughly 3,000-4,000 queries an hour pulling information from this table, I personally believe I need to keep CandidateABMContactlink, and I'm not willing to find out the hard way on a production server.
** Now to the Problem at Hand **
I have one query that kicks off about 7 times a day; it used to take less than 1 minute before the re-index. NOW it takes 30 minutes, and it drags the system to a crawl.
I did some looking into it: this query is using CandidateABMContactlink and takes 30 minutes. If it uses PK_ABMcontactlink it finishes in under 45 seconds.
Most queries are simple, "Select Column_names from abmcontacts where identifier in (select contact from abmcontactlink where transmitter = 'XXXXXX')"
This one is:
select * from ABMcontacts where (
(last_modification_date >= '2006-04-28 04:40:03' and last_modification_date <= '2006-05-09 16:41:14')
and EXISTS(select contact from ABMcontactlink where contact = identifier
and EXISTS(select transmitter_id from ABMtransmitter where transmitter_id = transmitter and (dealer = 'XXXX'))))
or
(EXISTS(select contact from ABMcontactlink where
(last_modification_date >= '2006-04-28 04:40:03' and last_modification_date <= '2006-05-09 16:41:14')
and contact = identifier and EXISTS(select transmitter_id from ABMtransmitter where transmitter_id = transmitter and (dealer = 'XXXX'))))
I can't change the query, so how do I make it use the index I want without removing the index it is currently using? (I know there are much better ways to write the above query; I'm not the culprit, and if I could rewrite it, I would.)
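SQL Server 2000 has no plan guides, so with the query text off-limits the main lever left is the statistics the optimizer uses to choose between the two indexes. Refreshing them with a full scan after the re-index is a low-risk first step; a hedged sketch:

-- Hedged sketch: rebuild statistics with a full scan so the optimizer re-costs
-- CandidateABMcontactlink vs. the clustered primary key with accurate densities.
UPDATE STATISTICS ABMcontactlink WITH FULLSCAN
UPDATE STATISTICS ABMcontacts WITH FULLSCAN
UPDATE STATISTICS ABMtransmitter WITH FULLSCAN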
When I try to install MS SQL Server 2000 on a Windows XP machine, it says the server component is not supported by the OS. What should I do to get it running on my machine?
Hi,
I'm Arash Baseri, a SQL Server 2000 developer, writing from Dubai (U.A.E.). I have a problem with SQL Server locking a table. The behaviour did not seem reasonable, and the more I researched it the more I became convinced it is not my problem; it looks like a bug in SQL Server 2000.
Now I will explain the situation:
I have two tables (Table A and Table B). Table A has a clustered index on col1 and col2, and Table B has a clustered index on col1 and col2. I join these tables and update col3 in Table A, like this:
begin tran
insert [Table A]
select * from Arshiv_Master where Col1 between 26001 AND 26001
Update AI31 set Col3=Case
when 1=1 then 1
Else 0 End
From [Table A]AI5, [Table B]AI31
where AI5.Col1=AI31.Col1 And AI31.Col1 Between 26001 And 26001
/* intentionally I didn't rollback or commit transaction to hold locks on table*/
In another connection I execute this query and get a "Lock request time out period exceeded" error message.
set lock_timeout 1
set transaction isolation level read uncommitted
Update AI31 set Col3=Case
when 1=1 then 0
Else 0 End
From [Table A] AI5, [Table B] AI31 where AI5.Col1=AI31.Col1 and AI31.Col1 between 45018 And 60000
Now the most interesting part: when I use a smaller data range for Col1, no error message is shown. A query like this:
set lock_timeout 1
set transaction isolation level read uncommitted
Update AI31 set Col3=Case
when 1=1 then 0
Else 0 End
From [Table A] AI5, [Table B] AI31 where AI5.Col1=AI31.Col1 and AI31.Col1 between 45018 And 50000
As you can see, the difference between the last two queries is in the WHERE clause, specifically in the data range for Col1.
As you know, SQL Server 2000 has row-level locking, and when I acquire a lock on one record the other records should remain free. So why does SQL Server behave as if the other records are locked? I tested this scenario on SQL Server 2005 and the problem did not occur, so I think it's a bug in SQL Server 2000. Can anyone help me with this problem?
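One way to check what is actually happening (offered as an assumption: the wider range may simply touch rows or pages still locked by the open transaction, or trigger lock escalation, rather than being a bug) is to look at the locks the first connection is holding while the second one waits; a hedged sketch using the SQL Server 2000 tools:

-- Hedged sketch: run from a third connection while the second update is blocked.
EXEC sp_lock               -- lists every lock: spid, object, type (RID/KEY/PAG/TAB) and mode
EXEC sp_who2               -- shows which spid is blocked by which (BlkBy column)
DBCC OPENTRAN              -- confirms the oldest open transaction holding the locks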