Multi Processor Is Slower
May 3, 2001
I know this won't be a lot to go on, but we have a quad processor box that is doing a lot of SQL crunching. When we turn off three of the processors it runs the SQL queries faster. The requests are coming from COM objects. CPU, memory, page faults, all that stuff is fine. Also, SQL doesn't appear to be using all the processors, as only one has any significant usage. Any suggestions of where to start would be most appreciated.
Mike
View 1 Replies
Jul 23, 2005
Hi, I've been told this, but I hope it is NOT true. I have a SQL Server 2000 installation running on a server that has four processors. It is on an active network but is not the domain controller, so essentially it is fully dedicated to servicing the needs of SQL Server (a bit of browsing, a bit of MS Office, but almost wholly dedicated to SQL Server). Now, the big question: why, when the server properties have been set to utilize all four processors, can any one job never get more than 25% of CPU time? I can launch multiple instances of QA and run the same job in each one, and that will utilise more and more CPU time, but if you launch multiple QA windows from within one instance of QA, you can NEVER get more than 25% CPU utilisation. Now I have to run a job (FTS is a good example, re-indexing lots of DBs another, or even a huge query with multiple UDFs on computed cols) which I hoped would grab the CPU time it needs, but no. So do I have to live with this, or can I tell either Windows or SQL Server to grab more CPU when it wants to, i.e. use my spare CPU capacity more efficiently, or am I working on a misguided premise and 25% per job is your lot?
DMAC
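One thing worth ruling out (a suggestion, not from the original post) is whether the server-wide parallelism cap has been lowered, since a single query can only spread across CPUs when a parallel plan is allowed and chosen. A minimal check, assuming SQL Server 2000 defaults:

-- Show advanced options so the parallelism setting is visible.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- Report the current value; 0 means "use all available CPUs".
EXEC sp_configure 'max degree of parallelism'
-- If it has been set to 1, this restores the default of all CPUs.
EXEC sp_configure 'max degree of parallelism', 0
RECONFIGURE

Even with this at 0, a query whose plan the optimizer judges cheap will still run serially, which can also leave a single CPU pegged at 25% on a four-way box.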
View 4 Replies
View Related
Aug 16, 2006
SQL2K
SP4
Howdy all. I have a query with bizarre results in Query Analyzer.
Box1; 4 x 3.0 processors, 4 gigs of RAM. Results never come back.
Box2; 8 x 3.0 processors, 16 gigs of RAM. Results never come back.
Both of the above boxes are extremely underutilized. The absolute maximum CPU usage is 25%. Box1 had 1 gig of free RAM and Box2 had 7 gigs of free RAM.
Box3; 1 x 3.0 processor, 1 gig of RAM. Results in 15 seconds.
Box4, all the same as box 3.
I took a backup of the DB and restored it from box to box to box, so I know everything is identical.
I once had a deadlock issue where I had to use the MAXDOP hint, and tried that here, but it didn't help.
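For reference, a minimal sketch of the hint form mentioned above (the table name is hypothetical):

-- Limit this one statement to a single scheduler, regardless of the
-- server-wide parallelism setting.
SELECT COUNT(*)
FROM dbo.SomeLargeTable
OPTION (MAXDOP 1)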
All ideas are appreciated.
TIA, cfr
View 1 Replies
View Related
Apr 30, 2007
Hello all,
Has anyone else experienced random SQL timeouts and checked the processor usage and found that processors 0 and 1's usage patterns are mirrored (when one goes up the other goes down)? We're running into this all of a sudden.
SQL 2000
fairly small db and very low usage at this time < 100 connections/hour.
only web services, SQL 2000 running on the machine, SQL 2005 is installed but currently stopped.
1GB memory
Windows Server 2003 SP 1.
2 processors (dual core) in use.
SQL usage of the processors, according to Performance Monitor, becomes an exact mirror: when proc 0 goes high, proc 1 goes low; if proc 1 is at 25% then proc 0 is at 75%.
Thanks
Bob
View 5 Replies
View Related
Jul 20, 2005
I am a beginner at best, so I hope someone who knows better can help. I have a stored procedure that updates a view, and I wrote it using 2 cursors (kind of an inner loop) because I couldn't do it using regular Transact-SQL. The problem is that this procedure is taking longer and longer to run, up to 5 hours now! It is analyzing about 30,000 records, partly, I think, because we add new records every month.
The procedure works like this: the first cursor stores each unique account and due date combination from the view. It then finds all the records in the view that have that account/due date combo and loads them into cursor 2; this groups them together for data manipulation. The accounts have to be grouped this way because an account can have different due dates and multiple records within each account/due date combo, and they need to be looked at as little singular groups.
Here is my procedure; I hope someone can shed some light on this. My boss is giving me heck about it. (I think he thinks girls can't code!) I got this far, and I hope someone can help me optimize it further.

CREATE PROCEDURE dbo.sp_PromiseStatus
AS
BEGIN
SET NOCOUNT ON

/* Global variables */
DECLARE @tot_pay money
DECLARE @rec_upd VARCHAR(1)
DECLARE @todays_date varchar(12)
DECLARE @mActivityDate2_temp datetime
DECLARE @tot_paydate datetime

/* Variables for cursor ACT_CUR1 */
DECLARE @mAcct_Num1 BIGINT
DECLARE @mDueDate1 datetime

/* Variables for ACT_CUR2 */
DECLARE @mAcct_Num2 BIGINT
DECLARE @mActivity_Date2 datetime
DECLARE @mPromise_Amt_1 money
DECLARE @mPromise_Status varchar(3)
DECLARE @mCurrent_Due_Amt money
DECLARE @mDPD int
DECLARE @mPromise_Date datetime

SELECT @todays_date = ''
    + CAST(DATEPART(mm, getdate()) AS varchar(2)) + '/'
    + CAST(DATEPART(dd, getdate()) AS varchar(2)) + '/'
    + CAST(DATEPART(yyyy, getdate()) AS varchar(4)) + ''

DECLARE ACT_CUR1 CURSOR FOR
    SELECT DISTINCT A.ACCT_NUM, A.DUE_DATE
    FROM VWAPPLICABLEPROMISEACTIVITYRECORDS A

OPEN ACT_CUR1
FETCH NEXT FROM ACT_CUR1 INTO @mAcct_Num1, @mDueDate1

WHILE (@@FETCH_STATUS = 0)
BEGIN
    SELECT @rec_upd = 'N '

    DECLARE ACT_CUR2 CURSOR FOR
        SELECT B.ACCT_NUM, B.ACTIVITY_DATE, B.PROMISE_AMT_1, B.PROMISE_STATUS,
               B.CURRENT_DUE_AMT, B.DAYS_DELINQUENT_NUM, B.PROMISE_DATE_1
        FROM VWAPPLICABLEPROMISEACTIVITYRECORDS B (UPDLOCK)
        WHERE B.ACCT_NUM = @mAcct_Num1
          AND B.DUE_DATE = @mDueDate1
        ORDER BY B.ACCT_NUM, B.DUE_DATE, B.ACTIVITY_DATE,
                 CASE B.Time_Obtained WHEN 0 THEN 0 ELSE 1 END DESC,
                 B.Time_Obtained

    OPEN ACT_CUR2
    FETCH NEXT FROM ACT_CUR2 INTO @mAcct_Num2, @mActivity_Date2, @mPromise_Amt_1,
          @mPromise_Status, @mCurrent_Due_Amt, @mDPD, @mPromise_Date

    WHILE (@@FETCH_STATUS = 0)
    BEGIN
        ---- CHECK ----
        DECLARE @PrintVariable2 VARCHAR(8000)
        --SELECT @PrintVariable2 = CAST(@MACCT_NUM2 AS VARCHAR) + ' ' + CAST(@MACTIVITY_DATE2 AS VARCHAR) + ' ' + CAST(@MPROMISE_AMT_1 AS VARCHAR) + ' ' + CAST(@MPROMISE_STATUS AS VARCHAR) + ' ' + CAST(@mCurrent_Due_Amt AS VARCHAR) + ' ' + CAST(@mDPD AS VARCHAR) + ' ' + CAST(@mPromise_Date AS VARCHAR)
        --PRINT @PrintVariable2
        ---- END CHECK ----

        IF @mDPD >= 30
        BEGIN
            SELECT @tot_pay = SUM(CONVERT(FLOAT, C.PAY_AMT))
            FROM vwAplicablePayments C
            WHERE C.ACCT_NUM = @mAcct_Num2
              AND C.ACTIVITY_DATE >= @mActivity_Date2
              AND C.ACTIVITY_DATE < @mActivity_Date2 + 15

            ---- CHECK ----
            DECLARE @PrintVariable3 VARCHAR(8000)
            --SELECT @PrintVariable3 = 'Greater= 30 DOLLARS COLLECTED'
            --PRINT @PrintVariable3
            ---- END CHECK ----
        END
        ELSE IF @mDPD < 30
        BEGIN
            SELECT @tot_pay = SUM(CONVERT(FLOAT, C.PAY_AMT))
            FROM vwAplicablePayments C
            WHERE C.ACCT_NUM = @mAcct_Num2
              AND C.ACTIVITY_DATE >= @mActivity_Date2
              AND C.ACTIVITY_DATE BETWEEN @mActivity_Date2 AND @mPromise_Date + 5

            ---- CHECK ----
            DECLARE @PrintVariable4 VARCHAR(8000)
            --SELECT @PrintVariable4 = 'Less 30 DOLLARS COLLECTED'
            --PRINT @PrintVariable4
            ---- END CHECK ----
        END

        ---- MY REVISED LOGIC ----
        IF @rec_upd = 'N'
        BEGIN
            IF @mDPD >= 30
            BEGIN
                SELECT @mActivityDate2_temp = @mActivity_Date2 + 15
                --DECLARE @PrintVariable5 VARCHAR(8000)
                --SELECT @PrintVariable5 = ' GREATER= 30 USING ACTVITY_DATE+15'
                --PRINT @PrintVariable5
            END
            ELSE IF @mDPD < 30
            BEGIN
                SELECT @mActivityDate2_temp = @mPromise_Date + 5
                --DECLARE @PrintVariable6 VARCHAR(8000)
                --SELECT @PrintVariable6 = ' LESS 30 USING PROMISE_DATE+5'
                --PRINT @PrintVariable6
            END

            IF @tot_pay >= 0.9 * @mCurrent_Due_Amt  -- used to be promise amt
            BEGIN
                UPDATE VWAPPLICABLEPROMISEACTIVITYRECORDS
                SET PROMISE_STATUS = 'PK',
                    TOTAL_DOLLARS_COLL = @tot_pay
                WHERE CURRENT OF ACT_CUR2

                -- This statement updates the time that the status was placed into PK.
                IF @mPromise_Status IN ('PTP', 'OP')
                BEGIN
                    UPDATE VWAPPLICABLEPROMISEACTIVITYRECORDS
                    SET Status_Date = @todays_date
                    WHERE CURRENT OF ACT_CUR2
                END

                SELECT @rec_upd = 'Y '
            END

            IF ((@tot_pay < 0.9 * @mCurrent_Due_Amt) OR @tot_pay IS NULL)
               AND (@mActivityDate2_temp > @todays_date)  -- need to put 1st day of month here for snapshot 9/01/2004
            BEGIN
                UPDATE VWAPPLICABLEPROMISEACTIVITYRECORDS
                SET PROMISE_STATUS = 'OP'
                WHERE CURRENT OF ACT_CUR2

                -- This statement updates the time that the status was placed into OP, which is the original Activity Date.
                -- The record will hold this date until it goes into PK, PB, or IP.
                IF @mPromise_Status IN ('PTP', 'OP')
                BEGIN
                    UPDATE VWAPPLICABLEPROMISEACTIVITYRECORDS
                    SET Status_Date = @mActivity_Date2
                    WHERE CURRENT OF ACT_CUR2
                END
            END
            ELSE IF ((@tot_pay < 0.9 * @mCurrent_Due_Amt) OR @tot_pay IS NULL)
                    AND (@mActivityDate2_temp <= @todays_date)  -- need to put 1st day of month here for snapshot 9/01/2004
            BEGIN
                UPDATE VWAPPLICABLEPROMISEACTIVITYRECORDS
                SET PROMISE_STATUS = 'PB',
                    TOTAL_DOLLARS_COLL = CASE WHEN @tot_pay IS NULL THEN 0 ELSE @tot_pay END
                WHERE CURRENT OF ACT_CUR2

                -- This statement updates the time that the status was placed into PB.
                IF @mPromise_Status IN ('PTP', 'OP')
                BEGIN
                    UPDATE VWAPPLICABLEPROMISEACTIVITYRECORDS
                    SET Status_Date = @todays_date
                    WHERE CURRENT OF ACT_CUR2
                END
            END
        END
        ELSE IF @rec_upd = 'Y'
        BEGIN
            UPDATE VWAPPLICABLEPROMISEACTIVITYRECORDS
            SET PROMISE_STATUS = 'IP',
                TOTAL_DOLLARS_COLL = 0
            WHERE CURRENT OF ACT_CUR2

            -- This statement updates the time that the status was placed into IP.
            IF @mPromise_Status NOT IN ('IP')
            BEGIN
                UPDATE VWAPPLICABLEPROMISEACTIVITYRECORDS
                SET Status_Date = @todays_date
                WHERE CURRENT OF ACT_CUR2
            END
        END

        FETCH NEXT FROM ACT_CUR2 INTO @mAcct_Num2, @mActivity_Date2, @mPromise_Amt_1,
              @mPromise_Status, @mCurrent_Due_Amt, @mDPD, @mPromise_Date
    END

    CLOSE ACT_CUR2
    DEALLOCATE ACT_CUR2

    FETCH NEXT FROM ACT_CUR1 INTO @mAcct_Num1, @mDueDate1
END

CLOSE ACT_CUR1
DEALLOCATE ACT_CUR1

SET NOCOUNT OFF
END
GO
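One direction worth trying (a hedged sketch, not the poster's code): compute the payment total for every account/due-date/activity-date group in one set-based pass instead of once per cursor row. Column names are taken from the procedure above; the date window mirrors the >= 30 / < 30 DPD branches.

-- Hedged sketch: per-group payment totals computed set-based.
SELECT  r.ACCT_NUM,
        r.DUE_DATE,
        r.ACTIVITY_DATE,
        SUM(CONVERT(float, p.PAY_AMT)) AS tot_pay
FROM    VWAPPLICABLEPROMISEACTIVITYRECORDS r
LEFT JOIN vwAplicablePayments p
       ON p.ACCT_NUM = r.ACCT_NUM
      AND p.ACTIVITY_DATE >= r.ACTIVITY_DATE
      AND (   (r.DAYS_DELINQUENT_NUM >= 30 AND p.ACTIVITY_DATE <  r.ACTIVITY_DATE + 15)
           OR (r.DAYS_DELINQUENT_NUM <  30 AND p.ACTIVITY_DATE <= r.PROMISE_DATE_1 + 5) )
GROUP BY r.ACCT_NUM, r.DUE_DATE, r.ACTIVITY_DATE

The totals from a query like this could then drive the status updates with joined UPDATE statements rather than WHERE CURRENT OF, which is usually where the hours go.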
View 15 Replies
View Related
Mar 1, 2008
hi, friends, please look at this:
I have a SSIS package, and inside it I do something like below:
1. I have a SQL component that gives back an object to store the records.
2. I have a VB script component; I direct the object I got in step 1 into the script as a dataset.
My problem is:
I run the package from a SQL Server 2005 stored procedure like this:
do
    dtexec.exe package.dtsx
loop until i > t
I control it so that it runs 30 times, but I found that it gets slower and slower:
the first time it takes about 600 s, but the last time it takes about 1,800 s.
Why?
Does the package not drop the objects it creates during each loop iteration in the stored procedure?
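For what it's worth, a hedged sketch of how such a loop might look when driven from T-SQL via xp_cmdshell; the post doesn't say exactly how dtexec is invoked, and the package path here is hypothetical:

DECLARE @i int, @t int
SELECT @i = 1, @t = 30
WHILE @i <= @t
BEGIN
    -- Launch the package by file path; each run is a separate dtexec process.
    EXEC master..xp_cmdshell 'dtexec /F "C:\packages\package.dtsx"'
    SET @i = @i + 1
END

If the slowdown is really inside the package, whether the VB script component releases the dataset object on each run would be the next thing to check.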
Thanks!
View 11 Replies
View Related
Nov 26, 2007
I am running a query on a SQL Server 2005 database and encountered the following error message:
"Internal Query Processor Error: The query processor encountered an unexpected error during execution."
There is a join between a table on the 2005 database and another on a 2000 database. I have run DBCC CHECKTABLE and found no errors on the two tables.
Anybody with ideas?
Thanks
View 3 Replies
View Related
May 24, 2007
SQL Server 2005 9.0.3161 on Win 2k3 R2
I receive the following error:
"Error: 8624, Severity: 16, State: 1 Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services."
I have traced this to an insert statement that executes as part of a stored procedure.
INSERT INTO ledger (journal__id, account__id,account_recv_info__id,amount)
VALUES (@journal_id, @acct_id, @acct_recv_id, @amount)
There is also an auto-increment column called id. There are FK constraints on all of the columns ending in "__id". I have found that if I remove the constraint on account__id, the procedure executes without error. None of the other constraints seem to make a difference. Of course I don't want to remove this key, because it is important to database integrity and should not be causing problems, but apparently it confuses the optimizer.
Also, the strange thing is that I can get the procedure to execute without error when I run it directly through management studio, but I receive the error when executing from .NET code or anything using ODBC (Access).
View 5 Replies
View Related
Sep 23, 2014
Disaster Recovery Options based on the following criteria.
--Currently running SQL 2012 standard edition
--We have 18,000 databases (same schema across databases); the majority of databases are less than 2 GB, spread across approximately 64 instances
--Recovery needs to happen within 1 hour (not sure that this is realistic)
--We are building a new data center and building DR from the ground up.
What I have looked into is:
1. Transactional replication: too much data; not viable.
2. AlwaysOn Availability Groups: needs Enterprise edition; again, too many databases, and we would have to upgrade all instances.
3. Log shipping is a viable option and the only one I can come up with that would work right now. It might be a management nightmare, but with this many databases probably all options will be a nightmare.
View 1 Replies
View Related
Aug 17, 2015
More often than not I don't touch DTC on clusters anymore; however, I'm now on a project where the vendor states that it's required. So, a couple of things here.
1) Do you really need DTC per instance or one for all?
2) Should DTC be in its own resource group or within the instance's group?
2a) If it's in its own resource group, how do you tie an instance to an outside resource group? tmMappingSet, right?
View 9 Replies
View Related
Feb 4, 2008
The stored procedure doesn't delete all the records.
Need help.
Code Snippet
DECLARE @empid varchar(500)
set @empid ='55329429,58830803,309128726,55696314'
DELETE FROM [Table_1]
WHERE charindex(','+CONVERT(varchar,[empid])+',',','+@empid+',') > 0
UPDATE [empList]
SET StartDate = CONVERT(DATETIME, '1900-01-01 00:00:00', 102), val_ok = 0
WHERE charindex(','+CONVERT(varchar,[empid])+',',','+@empid+',') > 0
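A hedged sketch (not from the original post) of an alternative: split @empid into a table variable first, so the DELETE and UPDATE can join on exact ids instead of relying on CHARINDEX string matching.

DECLARE @empid varchar(500), @pos int, @piece varchar(20)
DECLARE @ids TABLE (empid varchar(20) PRIMARY KEY)

SET @empid = '55329429,58830803,309128726,55696314' + ','
-- Walk the comma-separated list and insert each id into the table variable.
WHILE LEN(@empid) > 0
BEGIN
    SET @pos = CHARINDEX(',', @empid)
    SET @piece = LEFT(@empid, @pos - 1)
    IF LEN(@piece) > 0 INSERT INTO @ids (empid) VALUES (@piece)
    SET @empid = SUBSTRING(@empid, @pos + 1, LEN(@empid))
END

-- Delete only rows whose empid matches one of the split values exactly.
DELETE t
FROM [Table_1] t
JOIN @ids i ON CONVERT(varchar(20), t.[empid]) = i.empid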
TNX
View 2 Replies
View Related
Jul 20, 2005
Hello, I am trying to construct a query across 5 tables, but primarily 3 tables. Plan, Provider, and ProviderLocation are the three primary tables; the other tables are lookup tables for values in the other tables. PlanID is the primary key in Plan. The relevant columns are roughly:

Plan          Provider      ProviderLocation    Lookups
----------    ----------    ----------------    -----------
PlanID        ProviderID    ProviderID          LookupType
PlanName      PlanID        ProviderStatus      LookupKey
RegionID      LastName      ...                 LookupValue
...           FirstName     ...

Given a PlanID, I want all the Providers with a ProviderStatus = 0. I can get the query to work just fine if there are records, but what I want is: if there are no records, then I at least want one record with the Plan information. Here is a sample of the query:

SELECT pln.PlanName, pln.PlanID, l3.LookupValue AS Region,
       p.ProviderID, p.SSNEIN, pl.DisplayLocationOnPCP,
       pl.NoDisplayDate, pl.ProviderStatus, pl.InvalidDate,
       l1.LookupValue AS ReasonMain, l2.LookupValue AS ReasonSub,
       pl.InvalidData
FROM Plans pln
INNER JOIN Lookups l3 ON l3.LookupType = 'REGN'
    AND pln.RegionID = l3.Lookupkey
LEFT OUTER JOIN Provider p ON pln.PlanID = p.PlanID
LEFT OUTER JOIN ProviderLocation pl ON p.ProviderID = pl.ProviderID
LEFT OUTER JOIN Lookups l1 ON l1.LookupType = 'PLRM'
    AND pl.ReasonMain = l1.LookupKey
LEFT OUTER JOIN Lookups l2 ON l2.LookupType = 'PLX1'
    AND pl.ReasonSub = l2.Lookupkey
WHERE pln.PlanID = '123456789' AND pl.ProviderStatus = 0
ORDER BY p.PlanID, p.ProviderID, pl.SiteLocationNum

I know the problem: the ProviderStatus test in the WHERE clause is keeping any records from being returned, but I'm not good enough at this to write it another way. Can anybody give me some suggestions?
Thanks
David
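One common fix (a hedged sketch, not from the original post) is to move the ProviderStatus test out of the WHERE clause and into the LEFT JOIN condition, so a plan with no matching providers still comes back as a single row of plan information with NULLs on the provider side:

SELECT pln.PlanName, pln.PlanID, l3.LookupValue AS Region,
       p.ProviderID, p.SSNEIN, pl.DisplayLocationOnPCP,
       pl.NoDisplayDate, pl.ProviderStatus, pl.InvalidDate,
       l1.LookupValue AS ReasonMain, l2.LookupValue AS ReasonSub,
       pl.InvalidData
FROM Plans pln
INNER JOIN Lookups l3 ON l3.LookupType = 'REGN'
    AND pln.RegionID = l3.Lookupkey
LEFT OUTER JOIN Provider p ON pln.PlanID = p.PlanID
LEFT OUTER JOIN ProviderLocation pl ON p.ProviderID = pl.ProviderID
    AND pl.ProviderStatus = 0          -- moved here from the WHERE clause
LEFT OUTER JOIN Lookups l1 ON l1.LookupType = 'PLRM'
    AND pl.ReasonMain = l1.LookupKey
LEFT OUTER JOIN Lookups l2 ON l2.LookupType = 'PLX1'
    AND pl.ReasonSub = l2.Lookupkey
WHERE pln.PlanID = '123456789'
ORDER BY p.PlanID, p.ProviderID, pl.SiteLocationNum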
View 5 Replies
View Related
Mar 22, 2007
I am not sure if this is the correct forum but here goes!
We rebuilt our SqlServer 2000 Trans replication the other night. It used to run in 3 hours but now it runs in 9.5 hours (7 hours bcp out, 2.5 hours bcp in). We have a dedicated distributor box (4 processors), a 4 processor publisher, and a 2 processor subscriber. None of the systems exhibited any processor stress or unusual disk activity. The network tests OK (tested with file xfers). But the bcp's wrote data at 2.5 to 4 minutes per 100k rows, and they loaded the data at about 100k rows in 10 seconds or less.
As you know, replication snapshot uses bcp on each source table to build a collection of flat files, then uses bcp to load those files into the subscribing tables. Because bcp is the workhorse here, I decided against posting this in the Replication forum.
The only change I know of is increasing MaxBcpThreads from 4 to 8. This parameter specifies the number of bulk-copy operations that can be performed in parallel. I was thinking that 8 bcp's might somehow be killing the drive where all the bcp files are written.
Any ideas?
View 2 Replies
View Related
Apr 12, 2007
I installed SP2 two days ago and it seems like my SSIS packages now take longer than before - the very opposite of what I was hoping for.
Anyway, here are some data from runs on our performance environment. No new data is added to the source database between the runs, but I do a full process of the cubes every time (time is in seconds):
Package              SP1     SP2
Load dimensions      200     270
Load fact data       800    1600
Process cubes       2100    2600
So, as you can see, everything is going slower with SP2. I have yet to look into whether there are any specific steps in the packages that take longer than before, but it's odd that all packages take longer to execute. It especially surprises me that cube processing is slower.
Has anyone experienced something similar? Thanks!
View 10 Replies
View Related
Nov 14, 2000
Hi gurus
I haven't put the code in since I've tried several variations and keep having the same problem; I'm hoping someone will recognise the problem from a description of it.
There are two parts to my query.
* Part 1 creates and then populates a temporary table
* Part 2 is a select query which joins the temporary table to a permanent table, on 2 fields including a datetime field. The data types on each side of the join are identical.
If I run the first part of the query through ISQL and wait for it to complete before running the second part (in the same ISQL window), the second part takes just 3 minutes. However, if I run both parts together, the second part takes ages; in fact I'm not sure if it completes at all (I could wait indefinitely!).
I tried placing a 'GO' between the two queries when running them together, but it didn't seem to help.
Please help, I'm stumped.
Thanks
Jo
View 1 Replies
View Related
Oct 13, 2004
I have a database with Visual Basic as the front end and SQL Server as the back end; the reports use Crystal Reports. Recently users have complained that it is too slow to run and takes a long time to load the data. Can anybody help me? Thanks in advance.
View 3 Replies
View Related
Oct 29, 2005
I am new to SSIS and am probably making some mistake while transferring data from an Oracle source to an Oracle destination. Please guide me. In our project we need to transfer nearly 80 GB of data from the old system (Oracle 9i) to the new system (Oracle 10g Release 2). To do this exercise we are testing and comparing performance and cost effectiveness between different tools like SSIS, DTS and SQL*Loader (Oracle's built-in tool). We have selected one table, which has 40 fields and 3 million records. The destination table has the same structure.

Surprisingly, SSIS is giving slower performance than DTS!!!!! It is taking more than two hours or nearly two hours. I have tested the same process 3 times. I have used two servers (1 GB RAM, dual processor) for source and destination with minimal load, and used a data flow task (OLE DB Source and OLE DB Destination). When transferring data from Oracle to SQL Server I found the "Fast Load" option in the data access mode, which gives a considerable performance boost. But while transferring data from Oracle to Oracle I am not finding a "Fast Load" option!!!!!!!!! For performance boosting, which provider should I use? Please suggest, if anyone can; I would remain grateful.

Thanks and Regards
Sudripta Rakshit.
View 18 Replies
View Related
Jan 14, 2008
I copied the code from one proc and created a new proc with the same code but a different name. Using the same parameters for both procedures, I got different execution times; the new one is 4 times slower. I went through the execution plans and could not find any differences. Does anyone have any experience like this?
View 2 Replies
View Related
Feb 2, 2007
Hi, we have a C# web application that has been running for a few years now with little maintenance. Over the past few months, we have had some increasing problems:

1. Loses session frequently, roughly every 2-3 minutes.
2. Occasionally (every 2 months) gets out-of-memory exceptions.
3. Connections to the DB, and therefore the whole site, run fast after a restart, then get continuously slower over time until it starts throwing timeouts: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The timeouts usually come every 3 weeks or so.

Number 3 is the biggest problem for us, but I thought maybe 1 or 2 give some indication as to why 3 is happening. I don't have much time to look into this now as I'm working on another project, but I was hoping that maybe someone would have ideas just from looking at our symptoms.
Thank you so much,
Christie
View 4 Replies
View Related
Jun 12, 2002
Ok,
here's a funky one that I can't find an explanation for. If I go into EM and choose a table from a database and return all rows, I get immediate results and can start browsing records. If I go into Query Analyzer and do a SELECT * on the same table, it takes up to 20-25 minutes to return the result set. This used to take only about 5 minutes. What gives? Has anyone seen this before?
View 1 Replies
View Related
Aug 24, 2004
I have an issue with the BCP utility: it is extremely slow when running a BCP into the same database on an A/P cluster vs. a standalone system. It is a vendor application, so I have little control over what I can do. The BCP runs from a (remote) job server over the network, but it runs the same way on the standalone system, yet the process is about 10X as slow on the cluster, and the only difference is cluster vs. standalone.
SQLClusterName is specified in -S, not the active node, and it uses a trusted connection, not a local account. Is there anything about a cluster configuration that would cause the BCP to run slower?
Any thoughts ??
Thanks,
Paul
View 3 Replies
View Related
Mar 29, 2007
Hi there,
I am an application programmer who uses SQL Server; I'm not an expert and just know the basics. Our application has approximately 300 SQL tables and each table has just one primary index. We don't have stored procedures and only have 3 Views that a couple of reports use. Our database is approximately 26GB in size. We are planning on migrating from SQL 2000/Windows Server 2000 to SQL 2005/Windows Server 2003. Before doing this I decided to do some bench mark testing. I chose a simple SELECT statement on one of our larger tables. On SQL 2000 it ran in 22 seconds whereas on SQL 2005 it ran in 34 seconds.
These are the steps I have done to try and improve performance, all to no avail.
I tried both a passive and active upgrade and the results were the same. The passive way was to remove 2000, install 2005, create a new database and restore from a 2000 backup into the new 2005 database. The active way was to leave 2000 on the server with the database attached and upgrading to 2005.
I did not change any defaults on the database except I have set the compatibility level to SQL 2005.
The collation is set to SQL_Latin1_General_CP1_CS_AS.
I have run an Optimize Index Task.
I have run a Rebuild Index Task.
I have run an Update Statistics Task.
I have defragged the drive that the database resides on.
Can anyone explain why SQL 2005 is dramatically slower than 2000? Any help would be greatly appreciated.
View 20 Replies
View Related
Jul 23, 2005
-- I have a situation where doing the following (first example):
-- 1. Get a series of values through a query into a string (@val) like '1,2,3,4':

declare @val varchar(4000)

select @val = @val + cast(myval as varchar) + ','   -- myval is an integer column
from xyz
where xyz.field = 33

SET @val = left(@val, len(@val) - 1)

-- 2. EXEC a query using IN (' + @val + ')':

EXEC('select * from qpr where qpr.fieldx IN (' + @val + ')')

-- is much faster than doing the second example:

select *
from qpr
where qpr.fieldx IN (select myval from xyz where xyz.field = 33)

-- Since the second example does not have a correlated query, why is it slower?
-- Thanks in advance,
-- Caveman
View 6 Replies
View Related
Mar 24, 2007
Hi all,
In order to return a table for a specific input parameter I am using a function, but the performance is just awful! After trying the same code as an SP, the whole thing runs in under 1 sec (like 0.5 sec), while the function is about 10 times slower (4-6 sec). I know that in SQL 2000 a function is slower than an SP, but it cannot be as bad as 10 times slower.
Now, in order to use that result set from an SP, I have to create a temp table and insert the result into that temp table before I can use any SELECT statement on it. Any explanation here? Or is there a way to SELECT from an SP directly?
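For the second question, the usual pattern is INSERT ... EXEC, which captures a procedure's result set into a temp table that can then be selected from and joined. A minimal sketch with hypothetical names:

-- The table shape must match the procedure's result set.
CREATE TABLE #result (a varchar(2), b bigint, c smallmoney)

-- usp_GetItems and @someParam are hypothetical stand-ins for the real procedure.
INSERT INTO #result (a, b, c)
EXEC dbo.usp_GetItems @someParam = 1

SELECT * FROM #result
DROP TABLE #result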
Thanks,
Ning
View 2 Replies
View Related
Jan 4, 2007
I have a CF application I'm porting from VS2003/SQL CE 2.0 to VS2005/SQL Mobile. I make extensive use of RDA, and unfortunately there is a huge performance degradation for me going to SQL Mobile. Where most PULLs in 2.0 take no more than a second or two to complete, they are taking around 10 or 11 seconds in SQL Mobile. This is the time from when .Pull starts and ends; it does not include any code run before or after the .Pull is issued, so it is definitely something during the Pull that is slower.
I have tried settings of 0, 1, 3, and 6 for the compression level; all gave similar results, around 10 or 11 seconds. I have also tried doing a .SubmitSQL right before the .Pull to see if a sort of cached connection would help, but it only shaved a second off the time in most cases.
These times are using the same IIS server, SQL server (2000 SP3a), and database. I have tried using a 2003 SE device, WM 5.0 device, WM 5.0 emulator, and 2003 SE emulator, all exhibit the same performance degradation.
I really need to figure this out or I can't move the product to 2005 as the synchronization times would be completely unacceptable to our customers. Has anyone else run into this problem or know of a solution to it?
Thanks!
-Manny
View 1 Replies
View Related
Apr 19, 2006
I have a table-valued UDF with only one SELECT query. It selects from a view, has a couple of joins, and calls another UDF in the WHERE clause. When I run the SELECT query by itself its duration is around 120, but when I have it wrapped in a UDF, it's around 1200!
ALTER FUNCTION [dbo].[fnGetiii]
(
bla bla bla
)
RETURNS @ItemTable TABLE
(
a varchar(2),
b bigint,
c smallmoney,
d varchar(50),
e varchar(50),
f int,
g varchar(50),
h datetime,
i int,
j int,
k int
)
AS
BEGIN
INSERT INTO @ItemTable
(a,b,c,d,e,f,g,h,i,j,k)
SELECT
vat.a AS a,
vat.b AS b,
vat.c AS c,
vat.d AS d,
vat.e AS e,
vat.f AS f,
vat.g AS g,
vat.h AS h,
vat.i AS i,
vat.j AS j,
vat.k AS k
FROM dbo.vw_jkl vat
LEFT JOIN dbo.uio uio
LEFT JOIN dbo.iop iop
ON (iop.kko = uio.kko)
ON (uio.jj = vat.jj)
WHERE this = that AND
this = that AND
this IN (SELECT iii FROM dbo.fniiireturn(pppp))
RETURN
END
How can I speed it up?
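One thing that often closes this gap (a hedged sketch, not the poster's final code): rewrite the multi-statement function as an inline table-valued function, which returns the SELECT directly instead of filling a table variable, so the optimizer can expand it into the calling query. The placeholders (bla bla bla, this = that, pppp) are kept from the original.

ALTER FUNCTION [dbo].[fnGetiii]
(
    -- bla bla bla (same parameters as above)
)
RETURNS TABLE
AS
RETURN
(
    SELECT vat.a, vat.b, vat.c, vat.d, vat.e, vat.f,
           vat.g, vat.h, vat.i, vat.j, vat.k
    FROM dbo.vw_jkl vat
    LEFT JOIN dbo.uio uio
        LEFT JOIN dbo.iop iop ON (iop.kko = uio.kko)
        ON (uio.jj = vat.jj)
    WHERE this = that AND
          this = that AND
          this IN (SELECT iii FROM dbo.fniiireturn(pppp))
)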
View 5 Replies
View Related
Aug 22, 2007
I was tasked with optimizing a query that was taking about 2.5 minutes to run. It is now running in 2 seconds, so great, it's working... except I don't understand why. The ONLY change made is the WHERE clause.
Here is the original
Code Snippet
WHERE (j.datelaunched > GETDATE() - 7 OR (j.jobtype_id = 2 AND j.datecreated > GETDATE() - 30))
Here is the optimized
Code Snippet
WHERE (j.datelaunched >= DATEADD(DD, DATEDIFF(DD,0,GETDATE() - 7), 0) OR (j.jobtype_id = 2 AND j.datecreated >= DATEADD(DD, DATEDIFF(DD,0,GETDATE() - 30), 0)))
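For what it's worth, a small illustration (not from the post) of what the rewritten predicate compares against: DATEADD(DD, DATEDIFF(DD, 0, GETDATE() - 7), 0) strips the time portion, so the boundary is midnight seven days ago rather than the exact current time minus seven days.

-- Side by side: the raw expression keeps the time of day, the rewritten one is at midnight.
SELECT GETDATE() - 7                                   AS now_minus_7_days,
       DATEADD(DD, DATEDIFF(DD, 0, GETDATE() - 7), 0)  AS midnight_7_days_ago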
View 10 Replies
View Related
Mar 27, 2007
I am new to Reporting Services and hope that what I am looking to do is within its capabilities :-)
I have many identical schema databases residing on a number of data servers. These support individual clients accessing them via a web interface. What I need to be able to do is run reports across all of the databases. So the layout is:
Dataserver A
Database A1
Database A2
Database A3
Dataserver B
Database B1
Database B2
Dataserver C
Database C1
Database C2
Database C3
I would like to run a report that pulls table data from A1, A2, A3, B1, B2, C1, C2, C3
Now the actual number of servers is 7 and the number of databases is close to 1000. All servers are running SQL2005.
Is this something that Reporting Services is able to handle or do I need to look at some other solution?
Thanks,
Michael
View 5 Replies
View Related
May 9, 2005
I have a table in which 2 of the fields are defined as TEXT. It was determined that neither of these fields would need to hold more than 3,500 characters, so I changed both fields to VARCHAR(3500). The problem is that any query against this table now takes 20 seconds to run. When the fields were defined as data type TEXT, queries ran lickety-split! I would have expected just the opposite. Can someone explain this behavior?
View 1 Replies
View Related
Sep 3, 2004
I've got a table with a pk (bigint, no autoincrement) that has a clustered index. Same table has an integer field with a non-unique index on it.
When I do a count(*) on the table, the non-unique index is used (20m rows, 12 secs). When I force the count(*) to use the clustered index, it takes 43 secs. When selecting rows, usually the clustered index is used.
So I'm curious as to why the count(*) uses the non-unique index and the others don't. I've noticed it's faster but, why? Any ideas/considerations?
View 4 Replies
View Related
Feb 19, 2004
I have a large VB 6.0 application running with Jet 4.0 Access DB. I am considering moving the DB to MSDE. For testing, I installed MSDE on another computer in a peer to peer network running XP sp1, loaded up the DB from Access to MSDE2000A -- all without problem. (I should add the obvious fact I am new to using SQL server.)
What I find is that with a relatively small test DB, running just a single instance of the application, query response from MSDE takes several seconds (a 4-5 second lag) longer than response from the Access DB, which runs extremely fast. This is with the Access DB installed on the same network drive, running the same application and the same query syntax, only changing the connection configuration for each (SQL vs. Jet 4).
Any clues as to what may be going on? The lag time is unacceptable. I am using a SQL password instead of NT security. It seems the process is perhaps lagging during authorization.
Thanks for any suggestions or ideas on this.
View 1 Replies
View Related