Why Do 8 BCPs Run Slower Than 4 BCPs?
Mar 22, 2007
I am not sure if this is the correct forum but here goes!
We rebuilt our SQL Server 2000 transactional replication the other night. It used to run in 3 hours but now takes 9.5 hours (7 hours for the bcp out, 2.5 hours for the bcp in). We have a dedicated distributor box (4 processors), a 4-processor publisher, and a 2-processor subscriber. None of the systems exhibited any processor stress or unusual disk activity, and the network tests OK (tested with file transfers). But the bcps wrote data at 2.5 to 4 minutes per 100k rows, while they loaded the data at about 100k rows in 10 seconds or less.
As you know, Replication Snapshot uses bcp on each source table to build a collection of flat files, then uses bcp to load those files into the subscribing tables. Because bcp is the workhorse here, I decided against posting this in the Replication forum.
The only change I know of is increasing MaxBcpThreads from 4 to 8. This parameter specifies the number of bulk-copy operations that can be performed in parallel. I was thinking that 8 concurrent bcps might somehow be overwhelming the drive where all the bcp files are written.
Any ideas?
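For reference, MaxBcpThreads is supplied on the Snapshot Agent command line (or in its agent profile). A hedged example with placeholder server and publication names, dialing the value back to 4 to test the disk-contention theory:

snapshot.exe -Publisher PUBSRV -PublisherDB PubDb -Distributor DISTSRV -Publication MyPub -MaxBcpThreads 4

Eight simultaneous bcp writers against a single snapshot folder can turn sequential writes into scattered ones, so stepping the value back down is a cheap way to confirm or rule that out.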
View 2 Replies
Jul 20, 2005
I am a beginner at best, so I hope someone who is better can help. I have a stored procedure that updates a view, which I wrote using 2 cursors (kind of an inner loop). I wrote it this way because I couldn't do it using regular Transact-SQL. The problem is that this procedure is taking longer and longer to run - up to 5 hours now! It is analyzing about 30,000 records, partly, I think, because we add new records every month.
The procedure works like this: the first cursor stores a unique account and due-date combination from the view. It then finds all the accounts in the view that have that account/due-date combo and loads them into cursor 2; this groups them together for data manipulation. The accounts have to be grouped this way because an account can have different due dates and multiple records within each account/due-date combo, and they need to be looked at as little singular groups.
Here is my procedure; I hope someone can shed some light on this. My boss is giving me heck about it. (I think he thinks girls can't code!) I got this far - I hope someone can help me optimize it further.

CREATE PROCEDURE dbo.sp_PromiseStatus
AS
BEGIN
SET NOCOUNT ON

/* Global variables */
DECLARE @tot_pay money
DECLARE @rec_upd VARCHAR(1)
DECLARE @todays_date varchar(12)
DECLARE @mActivityDate2_temp datetime
DECLARE @tot_paydate datetime

/* variables for cursor ACT_CUR1 */
DECLARE @mAcct_Num1 BIGINT
DECLARE @mDueDate1 datetime

/* variables for ACT_CUR2 */
DECLARE @mAcct_Num2 BIGINT
DECLARE @mActivity_Date2 datetime
DECLARE @mPromise_Amt_1 money
DECLARE @mPromise_Status varchar(3)
DECLARE @mCurrent_Due_Amt money
DECLARE @mDPD int
DECLARE @mPromise_Date datetime

SELECT @todays_date = CAST(DATEPART(mm, getdate()) AS varchar(2)) + '/'
    + CAST(DATEPART(dd, getdate()) AS varchar(2)) + '/'
    + CAST(DATEPART(yyyy, getdate()) AS varchar(4))

DECLARE ACT_CUR1 CURSOR FOR
    SELECT DISTINCT A.ACCT_NUM, A.DUE_DATE
    FROM VWAPPLICABLEPROMISEACTIVITYRECORDS A

OPEN ACT_CUR1
FETCH NEXT FROM ACT_CUR1 INTO @mAcct_Num1, @mDueDate1
WHILE (@@FETCH_STATUS = 0)
BEGIN
    SELECT @rec_upd = 'N '

    DECLARE ACT_CUR2 CURSOR FOR
        SELECT B.ACCT_NUM, B.ACTIVITY_DATE, B.PROMISE_AMT_1, B.PROMISE_STATUS,
               B.CURRENT_DUE_AMT, B.DAYS_DELINQUENT_NUM, B.PROMISE_DATE_1
        FROM VWAPPLICABLEPROMISEACTIVITYRECORDS B (UPDLOCK)
        WHERE B.ACCT_NUM = @mAcct_Num1
          AND B.DUE_DATE = @mDueDate1
        ORDER BY B.ACCT_NUM, B.DUE_DATE, B.ACTIVITY_DATE,
                 CASE B.Time_Obtained WHEN 0 THEN 0 ELSE 1 END DESC,
                 B.Time_Obtained

    OPEN ACT_CUR2
    FETCH NEXT FROM ACT_CUR2 INTO @mAcct_Num2, @mActivity_Date2, @mPromise_Amt_1,
        @mPromise_Status, @mCurrent_Due_Amt, @mDPD, @mPromise_Date
    WHILE (@@FETCH_STATUS = 0)
    BEGIN
        ---- CHECK ----
        DECLARE @PrintVariable2 VARCHAR(8000)
        --SELECT @PrintVariable2 = CAST(@mAcct_Num2 AS VARCHAR) + ' ' + CAST(@mActivity_Date2 AS VARCHAR) + ' ' + CAST(@mPromise_Amt_1 AS VARCHAR) + ' ' + CAST(@mPromise_Status AS VARCHAR) + ' ' + CAST(@mCurrent_Due_Amt AS VARCHAR) + ' ' + CAST(@mDPD AS VARCHAR) + ' ' + CAST(@mPromise_Date AS VARCHAR)
        --PRINT @PrintVariable2
        ---- END CHECK ----

        IF @mDPD >= 30
        BEGIN
            SELECT @tot_pay = SUM(CONVERT(FLOAT, C.PAY_AMT))
            FROM vwAplicablePayments C
            WHERE C.ACCT_NUM = @mAcct_Num2
              AND C.ACTIVITY_DATE >= @mActivity_Date2
              AND C.ACTIVITY_DATE < @mActivity_Date2 + 15
            ---- CHECK ----
            DECLARE @PrintVariable3 VARCHAR(8000)
            --SELECT @PrintVariable3 = 'Greater= 30 DOLLARS COLLECTED'
            --PRINT @PrintVariable3
            ---- END CHECK ----
        END
        ELSE IF @mDPD < 30
        BEGIN
            SELECT @tot_pay = SUM(CONVERT(FLOAT, C.PAY_AMT))
            FROM vwAplicablePayments C
            WHERE C.ACCT_NUM = @mAcct_Num2
              AND C.ACTIVITY_DATE >= @mActivity_Date2
              AND C.ACTIVITY_DATE BETWEEN @mActivity_Date2 AND @mPromise_Date + 5
            ---- CHECK ----
            DECLARE @PrintVariable4 VARCHAR(8000)
            --SELECT @PrintVariable4 = 'Less 30 DOLLARS COLLECTED'
            --PRINT @PrintVariable4
            ---- END CHECK ----
        END

        ---- MY REVISED LOGIC ----
        IF @rec_upd = 'N'
        BEGIN
            IF @mDPD >= 30
            BEGIN
                SELECT @mActivityDate2_temp = @mActivity_Date2 + 15
                --DECLARE @PrintVariable5 VARCHAR(8000)
                --SELECT @PrintVariable5 = ' GREATER= 30 USING ACTVITY_DATE+15'
                --PRINT @PrintVariable5
            END
            ELSE IF @mDPD < 30
            BEGIN
                SELECT @mActivityDate2_temp = @mPromise_Date + 5
                --DECLARE @PrintVariable6 VARCHAR(8000)
                --SELECT @PrintVariable6 = ' LESS 30 USING PROMISE_DATE+5'
                --PRINT @PrintVariable6
            END

            IF @tot_pay >= 0.9 * @mCurrent_Due_Amt -- used to be promise amt
            BEGIN
                UPDATE VWAPPLICABLEPROMISEACTIVITYRECORDS
                SET PROMISE_STATUS = 'PK', TOTAL_DOLLARS_COLL = @tot_pay
                WHERE CURRENT OF ACT_CUR2
                -- This statement updates the time that the status was placed into PK.
                IF @mPromise_Status IN ('PTP','OP')
                BEGIN
                    UPDATE VWAPPLICABLEPROMISEACTIVITYRECORDS
                    SET Status_Date = @todays_date
                    WHERE CURRENT OF ACT_CUR2
                END
                SELECT @rec_upd = 'Y '
            END

            IF ((@tot_pay < 0.9 * @mCurrent_Due_Amt) OR @tot_pay IS NULL)
                AND (@mActivityDate2_temp > @todays_date) -- need to put 1st day of month here for snapshot 9/01/2004
            BEGIN
                UPDATE VWAPPLICABLEPROMISEACTIVITYRECORDS
                SET PROMISE_STATUS = 'OP'
                WHERE CURRENT OF ACT_CUR2
                -- This statement updates the time that the status was placed into OP,
                -- which is the original Activity Date.
                -- The record will hold this date until it goes into PK, PB, or IP.
                IF @mPromise_Status IN ('PTP','OP')
                BEGIN
                    UPDATE VWAPPLICABLEPROMISEACTIVITYRECORDS
                    SET Status_Date = @mActivity_Date2
                    WHERE CURRENT OF ACT_CUR2
                END
            END
            ELSE IF ((@tot_pay < 0.9 * @mCurrent_Due_Amt) OR @tot_pay IS NULL)
                AND (@mActivityDate2_temp <= @todays_date) -- need to put 1st day of month here for snapshot 9/01/2004
            BEGIN
                UPDATE VWAPPLICABLEPROMISEACTIVITYRECORDS
                SET PROMISE_STATUS = 'PB',
                    TOTAL_DOLLARS_COLL = CASE WHEN @tot_pay IS NULL THEN 0 ELSE @tot_pay END
                WHERE CURRENT OF ACT_CUR2
                -- This statement updates the time that the status was placed into PB.
                IF @mPromise_Status IN ('PTP','OP')
                BEGIN
                    UPDATE VWAPPLICABLEPROMISEACTIVITYRECORDS
                    SET Status_Date = @todays_date
                    WHERE CURRENT OF ACT_CUR2
                END
            END
        END
        ELSE IF @rec_upd = 'Y'
        BEGIN
            UPDATE VWAPPLICABLEPROMISEACTIVITYRECORDS
            SET PROMISE_STATUS = 'IP', TOTAL_DOLLARS_COLL = 0
            WHERE CURRENT OF ACT_CUR2
            -- This statement updates the time that the status was placed into IP.
            IF @mPromise_Status NOT IN ('IP')
            BEGIN
                UPDATE VWAPPLICABLEPROMISEACTIVITYRECORDS
                SET Status_Date = @todays_date
                WHERE CURRENT OF ACT_CUR2
            END
        END

        FETCH NEXT FROM ACT_CUR2 INTO @mAcct_Num2, @mActivity_Date2, @mPromise_Amt_1,
            @mPromise_Status, @mCurrent_Due_Amt, @mDPD, @mPromise_Date
    END
    CLOSE ACT_CUR2
    DEALLOCATE ACT_CUR2

    FETCH NEXT FROM ACT_CUR1 INTO @mAcct_Num1, @mDueDate1
END
CLOSE ACT_CUR1
DEALLOCATE ACT_CUR1
SET NOCOUNT OFF
END
GO
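One hedged, set-based direction (a sketch using the names from the procedure above, not a drop-in replacement): pre-aggregate the payment sums once, instead of running SUM() inside the inner cursor for every fetched row. The 15-day window shown covers only the DPD >= 30 branch; the promise-date window would be a second column.

SELECT B.ACCT_NUM,
       B.ACTIVITY_DATE,
       SUM(CONVERT(FLOAT, C.PAY_AMT)) AS tot_pay_15day
INTO   #PaymentSums
FROM   VWAPPLICABLEPROMISEACTIVITYRECORDS B
       JOIN vwAplicablePayments C
         ON C.ACCT_NUM = B.ACCT_NUM
        AND C.ACTIVITY_DATE >= B.ACTIVITY_DATE
        AND C.ACTIVITY_DATE <  B.ACTIVITY_DATE + 15   -- DPD >= 30 window
GROUP BY B.ACCT_NUM, B.ACTIVITY_DATE

The cursors could then join to #PaymentSums rather than re-scanning vwAplicablePayments once per row, which is usually where this kind of procedure spends its hours.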
View 15 Replies
View Related
Mar 1, 2008
hi, friends, please look at this:
I have a SSIS package, and inside it I do something like below:
1. I have a SQL component that gives back an object storing the records.
2. I have a VB Script component; I direct the object from step 1 into the script as a dataset.
My problem is:
I run the package from a SQL Server 2005 stored procedure like this:
do
dtexec.exe package.dtsx
loop until i > t
I control it so that it runs 30 times, but I found that it gets slower and slower: the first run takes about 600 s, while the last takes about 1800 s.
Why?
Does the package not drop the objects it creates during each loop of the stored procedure?
Thanks!
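For reference, a hedged T-SQL reconstruction of the loop described above (the package path is hypothetical); note that xp_cmdshell blocks until dtexec exits, so iterations run serially:

DECLARE @i int
SET @i = 1
WHILE @i <= 30
BEGIN
    EXEC master.dbo.xp_cmdshell 'dtexec /F "C:\packages\package.dtsx"'
    SET @i = @i + 1
END

Since each dtexec call is a separate process, package objects cannot persist between iterations; growth across runs more likely comes from the server side, such as the volume of data the package reads or writes each pass.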
View 11 Replies
View Related
Apr 12, 2007
I installed SP2 two days ago and it seems like my SSIS packages now take longer than before - the very opposite of what I was hoping for.
Anyway, here are some data from runs on our performance environment. No new data is added to the source database between the runs, but I do a full process of the cubes every time (time is in seconds):
Package             SP1     SP2
Load dimensions     200     270
Load fact data      800     1600
Process cubes       2100    2600
So, as you can see, everything is slower with SP2. I have yet to look into whether any specific steps in the packages take longer than before, but it's odd that all packages take longer to execute. That cube processing is slower especially surprises me.
Has anyone experienced something similar? Thanks!
View 10 Replies
View Related
Nov 14, 2000
Hi gurus
I haven't put the code in since I've tried several variations & keep having the same problem: I'm hoping someone will recognise the problem from a description of it.
There are two parts to my query.
* Part 1 creates and then populates a temporary table
* Part 2 is a select query which joins the temporary table to a permanent table, on 2 fields including a datetime field. The data types on each side of the join are identical.
If I run the first part of the query through ISQL and wait for it to complete before running the second part (in the same ISQL window), the second part takes just 3 minutes. However, if I run both parts together, the second part takes ages - in fact I'm not sure it completes at all (I could wait indefinitely!).
I tried placing a 'GO' between the two queries when running them together, but it didn't seem to help.
Please help, I'm stumped.
Thanks
Jo
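A hedged guess at what is happening, with a sketch (object names hypothetical): when both parts run in one batch, the plan for part 2 can be compiled while the temporary table is still empty, so the join gets a bad plan; run separately, the table is populated before compilation. Refreshing statistics between the parts often reproduces the fast behavior:

CREATE TABLE #work (acct int, dt datetime)
INSERT INTO #work SELECT acct, dt FROM dbo.SourceTable

UPDATE STATISTICS #work   -- force fresh stats now that #work is populated

SELECT p.*
FROM dbo.PermanentTable p
JOIN #work w ON w.acct = p.acct AND w.dt = p.dt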
View 1 Replies
View Related
Oct 13, 2004
I have a database with Visual Basic as the front end and SQL Server as the back end; the reports use Crystal Reports. Recently, users have complained that it is too slow and takes a long time to load the data. Can anybody help me? Thanks in advance.
View 3 Replies
View Related
Oct 29, 2005
I am new to SSIS and am probably making some mistake while transferring data from an Oracle source to an Oracle destination. Please guide me.
In our project we need to transfer nearly 80 GB of data from the old system (Oracle 9i) to the new system (Oracle 10g Release 2). For this exercise we are testing and comparing performance and cost-effectiveness between different tools: SSIS, DTS, and SQL*Loader (Oracle's built-in tool). We selected one table with 40 fields and 3 million records; the destination table has the same structure. Surprisingly, SSIS is giving slower performance than DTS!!!!! It is taking two hours or nearly two hours. I have tested the same process 3 times. I used two servers (1 GB RAM, dual processor) for source and destination with minimum load, and used a Data Flow task (OLE DB Source and OLE DB Destination).
When transferring data from Oracle to SQL Server I find a "Fast Loading Option" in the data access mode, which gives a considerable performance boost. But while transferring data from Oracle to Oracle I do not find a "Fast Loading Option"!!!!!!!!! Which provider should I use for performance boosting? Please suggest... if anyone can... I would remain grateful... Thanks and Regards, Sudripta Rakshit.
View 18 Replies
View Related
Jan 14, 2008
I copied the code from one proc and created a new proc with the same code but a different name. Using the same parameters for both procedures, I got different execution times - the new one is 4 times slower. I went through the execution plans and could not find any differences. Does anyone have any experience like this?
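One hedged possibility (sketch with hypothetical names): the old procedure's cached plan may have been compiled for a different "sniffed" parameter value than the new one's. Copying parameters into local variables makes the optimizer use average-density estimates instead of the sniffed value, which makes two otherwise-identical procedures behave the same:

CREATE PROCEDURE dbo.MyProcCopy @AcctNum int
AS
BEGIN
    DECLARE @LocalAcctNum int
    SET @LocalAcctNum = @AcctNum          -- sidesteps parameter sniffing
    SELECT * FROM dbo.MyTable WHERE AcctNum = @LocalAcctNum
END

Running EXEC sp_recompile 'dbo.TheOldProc' is another quick test of the same theory.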
View 2 Replies
View Related
Feb 2, 2007
Hi, we have a C# web application that has been running for a few years now with little maintenance. Over the past few months, we have had some increasing problems:
1. Loses session frequently - every 2-3 minutes.
2. Occasionally (every 2 months) gets out-of-memory exceptions.
3. Connections to the DB, and therefore the whole site, run fast after a restart, then get continuously slower over time until timeouts start: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The timeouts usually come every 3 weeks or so.
Number 3 is the biggest problem for us, but I thought maybe 1 or 2 give some indication as to why 3 is happening? I don't have much time to look into this now as I'm working on another project, but I was hoping that maybe someone would have ideas just looking at our symptoms. Thank you so much, Christie
View 4 Replies
View Related
May 3, 2001
I know this won't be a lot to go on, but we have a quad-processor box that is doing a lot of SQL crunching. When we turn off three of the processors, it runs the SQL queries faster. The requests are coming from COM objects. CPU, memory, page faults - all that stuff is fine. Also, SQL doesn't appear to be using all the processors, as only one shows any amount of usage. Any suggestions of where to start would be most appreciated.
Mike
View 1 Replies
View Related
Jun 12, 2002
Ok,
here's a funky one that I can't find an explanation for. If I go into EM and choose a table from a database and return all rows, I get immediate results and can start browsing records. If I go into Query Analyzer and do a SELECT * on the same table, it takes up to 20-25 minutes to return the result set. This used to take only about 5 minutes. What gives? Anyone seen this before?
View 1 Replies
View Related
Aug 24, 2004
I have an issue with the BCP utility: it is extremely slow when running a BCP into the same database on an A/P cluster vs. a standalone system. It is a vendor application, so I have little control over what I can do. The BCP runs from a (remote) job server over the network, the same way on the standalone system, yet the process is about 10x as slow on the cluster - and the only difference is cluster vs. standalone.
SQLClusterName is specified in -S, not the active node, and it uses a trusted connection, not a local account. Is there anything about a cluster config that would cause the BCP to run slower ?
Any thoughts ??
Thanks,
Paul
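For reference, a hedged example of the invocation described (database, table, and path are placeholders); -S takes the virtual cluster name and -T the trusted connection:

bcp MyDb.dbo.MyTable in \\jobserver\data\mytable.dat -S SQLClusterName -T -c -b 10000

One thing worth ruling out: -b controls how many rows commit per batch, and on a cluster the shared-storage path for the transaction log is often the main physical difference from a standalone box, so log-write latency per commit is a good first measurement.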
View 3 Replies
View Related
Mar 29, 2007
Hi there,
I am an application programmer who uses SQL Server; I'm not an expert and just know the basics. Our application has approximately 300 SQL tables and each table has just one primary index. We don't have stored procedures and have only 3 views that a couple of reports use. Our database is approximately 26GB in size. We are planning to migrate from SQL 2000/Windows Server 2000 to SQL 2005/Windows Server 2003. Before doing this I decided to do some benchmark testing. I chose a simple SELECT statement on one of our larger tables. On SQL 2000 it ran in 22 seconds, whereas on SQL 2005 it ran in 34 seconds.
These are the steps I have done to try and improve performance, all to no avail.
I tried both a passive and active upgrade and the results were the same. The passive way was to remove 2000, install 2005, create a new database and restore from a 2000 backup into the new 2005 database. The active way was to leave 2000 on the server with the database attached and upgrading to 2005.
I did not change any defaults on the database except I have set the compatibility level to SQL 2005.
The collation is set to SQL_Latin1_General_CP1_CS_AS.
I have run an Optimize Index Task.
I have run a Rebuild Index Task.
I have run an Update Statistics Task.
I have defragged the drive that the database resides on.
Can anyone explain why SQL 2005 is dramatically slower than 2000? Any help would be greatly appreciated.
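For reference, the maintenance steps listed above correspond roughly to these commands (database and table names are hypothetical):

EXEC sp_dbcmptlevel 'MyDb', 90               -- compatibility level = SQL Server 2005
USE MyDb
EXEC sp_updatestats                          -- refresh statistics database-wide
ALTER INDEX ALL ON dbo.MyLargeTable REBUILD  -- rebuild indexes on the test table

One extra step that often matters after a 2000-to-2005 restore is UPDATE STATISTICS dbo.MyLargeTable WITH FULLSCAN on the tables in the slow query, since restored statistics are reused rather than rebuilt.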
View 20 Replies
View Related
Jul 23, 2005
-- I have a situation where doing
-- first example
-- 1. Get a series of values through a query into a string (@val) like '1,2,3,4':
declare @val varchar(4000)
select @val = @val + cast(myval as varchar) + ','  -- myval is an integer column
from xyz
where xyz.field = 33
SET @val = left(@val, len(@val) - 1)
-- 2. EXEC a query using IN (' + @val + ')
EXEC('select * from qpr where qpr.fieldx IN (' + @val + ')')
-- is much faster than doing
-- second example
select *
from qpr
where qpr.fieldx IN (select myval from xyz where xyz.field = 33)
-- Since the second example does not have a correlated query, why is it slower?
-- Thanks in advance,
-- Caveman
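A hedged third variant worth timing (same names as the post): EXISTS expresses the same filter without building a string, and often gets the hash or merge plan that makes the dynamic IN list fast:

SELECT q.*
FROM qpr q
WHERE EXISTS (SELECT 1 FROM xyz x
              WHERE x.field = 33
                AND x.myval = q.fieldx)

The IN (subquery) form is indeed not correlated, but the optimizer may still choose a different join strategy for it than for a literal list of known values; comparing the three plans side by side usually explains the gap.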
View 6 Replies
View Related
Mar 24, 2007
Hi all,
In order to return a table for a specific input parameter, I am using a function, but the performance is just awful! After trying the same code as an SP, the whole thing runs in under 1 sec (like 0.5 sec), while the function is about 10 times slower (4-6 sec). I know that in SQL 2000 a function is slower than an SP, but it cannot be as bad as 10 times slower.
Now, in order to use that table from the SP, I have to create a temp table and insert the result into it before I can use any SELECT statement on it. Any explanation here? Or how can I SELECT from an SP directly?
Thanks,
Ning
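For reference, the temp-table pattern described above looks like this (column list, types, and procedure name are hypothetical):

CREATE TABLE #Result (a varchar(2), b bigint, c smallmoney)

INSERT INTO #Result (a, b, c)
EXEC dbo.MyProcedure @SomeParam = 1   -- INSERT ... EXEC captures the SP's result set

SELECT * FROM #Result WHERE b > 100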
View 2 Replies
View Related
Jan 4, 2007
I have a CF application I'm porting from VS2003/SQL CE 2.0 to VS2005/SQL Mobile. I make extensive use of RDA, and unfortunately there is a huge performance degradation for me going to SQL Mobile. Where most Pulls in 2.0 take no more than a second or two to complete, they take around 10 or 11 seconds in SQL Mobile. This is the time from when .Pull starts to when it ends; it does not include any code run before or after the .Pull is issued, so it is definitely something during the Pull that is slower.
I have tried settings of 0, 1, 3, and 6 for the compression level; all gave similar results, around 10 or 11 seconds. I have also tried doing a .SubmitSQL right before the .Pull to see if a sort of cached connection would help, but it only shaved a second off the time in most cases.
These times use the same IIS server, SQL Server (2000 SP3a), and database. I have tried a 2003 SE device, a WM 5.0 device, a WM 5.0 emulator, and a 2003 SE emulator; all exhibit the same performance degradation.
I really need to figure this out or I can't move the product to 2005 as the synchronization times would be completely unacceptable to our customers. Has anyone else run into this problem or know of a solution to it?
Thanks!
-Manny
View 1 Replies
View Related
Apr 19, 2006
I have a table-valued UDF with only one SELECT query. It selects from a view, has a couple of joins, and calls another UDF in the WHERE clause. When I run the SELECT query by itself, its duration is around 120, but when I have it wrapped in the UDF, it's around 1200!
ALTER FUNCTION [dbo].[fnGetiii]
(
bla bla bla
)
RETURNS @ItemTable TABLE
(
a varchar(2),
b bigint,
c smallmoney,
d varchar(50),
e varchar(50),
f int,
g varchar(50),
h datetime,
i int,
j int,
k int
)
AS
BEGIN
INSERT INTO @ItemTable
(a,b,c,d,e,f,g,h,i,j,k)
SELECT
vat.a AS a,
vat.b AS b,
vat.c AS c,
vat.d AS d,
vat.e AS e,
vat.f AS f,
vat.g AS g,
vat.h AS h,
vat.i AS i,
vat.j AS j,
vat.k AS k
FROM dbo.vw_jkl vat
LEFT JOIN dbo.uio uio
LEFT JOIN dbo.iop iop
ON (iop.kko = uio.kko)
ON (uio.jj = vat.jj)
WHERE this = that AND
this = that AND
this IN (SELECT iii FROM dbo.fniiireturn(pppp))
RETURN
END
How can I speed it up?
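A hedged rewrite direction, keeping the placeholders from the function above: converting the multi-statement UDF to an inline table-valued function removes the table-variable INSERT entirely and lets the optimizer expand the query into the calling statement, which commonly closes exactly this kind of 10x gap:

ALTER FUNCTION [dbo].[fnGetiii]
(
    -- bla bla bla (same parameters as before)
)
RETURNS TABLE
AS
RETURN
(
    SELECT vat.a, vat.b, vat.c, vat.d, vat.e, vat.f,
           vat.g, vat.h, vat.i, vat.j, vat.k
    FROM dbo.vw_jkl vat
    LEFT JOIN dbo.uio uio
        LEFT JOIN dbo.iop iop ON (iop.kko = uio.kko)
        ON (uio.jj = vat.jj)
    WHERE this = that AND
          this IN (SELECT iii FROM dbo.fniiireturn(pppp))
)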
View 5 Replies
View Related
Aug 22, 2007
I was tasked with optimizing a query that was taking about 2.5 minutes to run. It is now running in 2 seconds, so great, it's working... except I don't understand why. The ONLY change made is the WHERE clause.
Here is the original:
Code Snippet
WHERE (j.datelaunched > GETDATE() - 7 OR (j.jobtype_id = 2 AND j.datecreated > GETDATE() - 30))
Here is the optimized:
Code Snippet
WHERE (j.datelaunched >= DATEADD(DD, DATEDIFF(DD,0,GETDATE() - 7), 0) OR (j.jobtype_id = 2 AND j.datecreated >= DATEADD(DD, DATEDIFF(DD,0,GETDATE() - 30), 0)))
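For reference, the DATEADD/DATEDIFF pair in the optimized version strips the time portion, so the filter compares against a midnight boundary rather than the current time of day:

SELECT GETDATE() AS now_value,                              -- e.g. 2007-08-22 14:35:12.000
       DATEADD(DD, DATEDIFF(DD, 0, GETDATE()), 0) AS today  -- e.g. 2007-08-22 00:00:00.000

That by itself would not normally change the runtime this much, so a cached-plan or statistics effect may also be in play; comparing the two actual execution plans would confirm it.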
View 10 Replies
View Related
May 9, 2005
I have a table in which 2 of the fields are defined as TEXT. It was determined that neither of these fields would have to hold more than 3500 characters, so I changed both fields to VARCHAR(3500). The problem is that any query against this table now takes 20 seconds to run. When the fields were defined as data type TEXT, queries ran lickety-split!? I would have expected just the opposite. Can someone explain this behavior?
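A hedged cleanup worth trying (names hypothetical): altering TEXT columns to VARCHAR can leave the old LOB allocations and row layout behind, so the table stays physically large. Reclaiming the dropped variable-length space and rebuilding the indexes sometimes restores the old speed:

DBCC CLEANTABLE ('MyDb', 'dbo.MyTable')       -- reclaim space from dropped/altered text columns
DBCC DBREINDEX ('dbo.MyTable')                -- rebuild all indexes (SQL 2000 syntax)
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN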
View 1 Replies
View Related
Sep 3, 2004
I've got a table with a pk (bigint, no autoincrement) that has a clustered index. Same table has an integer field with a non-unique index on it.
When I do a count(*) on the table, the non-unique index is used (20m rows, 12 secs). When I force the count(*) to use the clustered index, it takes 43 secs. When selecting rows, usually the clustered index is used.
So I'm curious as to why the count(*) uses the non-unique index and the others don't. I've noticed it's faster, but why? Any ideas/considerations?
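The short answer is page count: COUNT(*) can be satisfied by any index, since every index contains one entry per row, and the narrow integer index packs far more rows per page than the clustered index (which carries the whole row). The behavior can be confirmed with hints (index name hypothetical):

SELECT COUNT(*) FROM dbo.MyTable WITH (INDEX(IX_MyTable_IntField))  -- narrow index: fewer pages
SELECT COUNT(*) FROM dbo.MyTable WITH (INDEX(1))                    -- clustered index: all pages

Row-fetching queries use the clustered index because they need the other columns, which the narrow index does not cover.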
View 4 Replies
View Related
Feb 19, 2004
I have a large VB 6.0 application running with Jet 4.0 Access DB. I am considering moving the DB to MSDE. For testing, I installed MSDE on another computer in a peer to peer network running XP sp1, loaded up the DB from Access to MSDE2000A -- all without problem. (I should add the obvious fact I am new to using SQL server.)
What I find is that with a relatively small test DB, running just a single instance of the application, query response from MSDE takes several seconds (a 4-5 second lag) longer than response from the Access DB, which runs extremely fast. This is with the Access DB installed on the same network drive, running the same application and the same query syntax - only the connection configuration changes for each (SQL vs. Jet 4).
Any clues as to what may be going on? The lag time is unacceptable. I am using a SQL password instead of NT security, so it seems the process is perhaps lagging during authorization.
Thanks for any suggestions or ideas on this.
View 1 Replies
View Related
Feb 6, 2008
Hi all,
My ASP application runs fast on SQL Server 2000. But when we shifted it to SQL Server 2005, it became very slow.
I have created the indexes as well, and there is a lot of data in the tables.
Is there any option to rebuild the indexes for the existing data?
Please help....
-Shah.
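Indexes cover existing rows from the moment they are created, so there is no separate "index the previous data" step; what often lags after a move to 2005 is statistics and fragmentation. A hedged checklist (table name hypothetical):

EXEC sp_updatestats                          -- refresh statistics database-wide
ALTER INDEX ALL ON dbo.MyTable REBUILD       -- 2005 syntax; DBCC DBREINDEX also still works
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN  -- full-scan stats on the worst tables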
View 2 Replies
View Related
Oct 24, 2007
We have a procedure that does some complicated table joins to populate a temporary table (declared in the proc) and then extraneous rows are removed from the table and the remaining rows are returned.
On the production server the procedure takes 22 seconds, but the identical query run ad hoc in a SQL Server Management Studio query window takes 2 seconds. That doesn't make any sense.
We've profiled the procedure and discovered that the small loop at the end of the query runs through twice, with each iteration taking 10 seconds. Again - the same section run as dynamic SQL profiles at 0 seconds. We further narrowed it down to the compound delete statement against the @tblResults table.
Here's the looped section of code:
-- now delete the entries with no content
declare @deleted int
set @deleted = 1
declare @tmp table ([Id] int)
while @deleted <> 0
begin
delete from @tmp
-- get all entries with no child nodes
insert into @tmp ([Id])
select distinct r.[id] from @tblResults r
where r.[id] not in
(
select [Id] from @tblResults
where [Id] in (select distinct ParentId from @tblResults)
)
-- delete entries with no documents and region match
-- (this is where the slowdown occurs)
delete from @tblResults
where [id] in
(
select t.[id] from @tmp t where t.[id] not in
(select distinct dm.metatagid from documentmetatag dm
where dm.documentid in
(select distinct dr.documentid from documentregion dr, @region r
where (dr.regionid = r.regionId or @regionIds is null)
)
)
)
select @deleted = @@rowcount
end
We then return all rows from the @tblResults table.
We are totally stumped as to why the procedure takes so much longer than running the same query in a SQL Manager window. Is there some issue with using table variables as temporary tables in a procedure? Any ideas, anyone?
Thanks,
John
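One hedged suspect (sketch only): @tblResults is a table variable, and table variables carry no statistics, so inside the procedure the optimizer assumes roughly one row for them; the ad-hoc batch may simply compile with better information after the table is populated. Switching to a real temp table restores statistics and allows indexing:

CREATE TABLE #tblResults ([Id] int, ParentId int)   -- hypothetical column list
CREATE INDEX IX_tblResults_ParentId ON #tblResults (ParentId)
-- populate and delete exactly as before; the compound DELETE now gets real row estimates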
View 5 Replies
View Related
Jan 24, 2008
I'm using SQL 2005. I'm confused about why queries run slower after I add indexes to a table. I thought the system would pick the best index available when running SELECT statements.
sys.dm_db_missing_index* and the Tuning Advisor keep recommending indexes that make my system slower. Maybe they make a few queries faster that I'm not aware of, but I do know they make some too slow. Now I have to document which indexes to avoid creating for each table.
View 5 Replies
View Related
Jul 20, 2007
We have an extensive stress test for our OLTP application that runs slower after upgrading to SQL Server 2005.
We have done the following:
- applied SP1 and SP2
- updated statistics (sp_updatestats)
- recompiled all stored procedures
Some behavior we notice in Perfmon over the course of the 1-hour test is:
- Average CPU Utilization is up from 22% to 32% and we're trying to figure out why
- Compilations per second are up from 200 to 250
- Recompilations per second are down from 5 to 1 (which is good).
- Memory utilization is identical.
- We see exaggerated spikes in response times. The general performance seems pretty good - but there are periodic spikes that hurt our averages.
- I examined perfmon for waits and nothing jumped out.
- A small handful of application workflows are much slower, while most are roughly identical.
I realize there could be a zillion different causes. But if anyone has experience upgrading a large OLTP solution which uses a mixture of dynamic SQL and stored procedures to SS2005 - and tuning the upgraded database - please offer any suggestions.
Our current plan includes (a) running the Best Practices analyzer which already uncovered a trace flag we may want to use since we're running on an 8-way, (b) rebuilding indexes, and (c) examine a SQL Trace this time around too since it appears the slowness is isolated to certain application workflows.
Please offer any suggestions that have worked for your team.
Thanks so much!
Mike
PS I will post all the things we end up doing in the end once we get performance equal or better - hopefully the latter!!!
View 9 Replies
View Related
Apr 6, 2006
Hello,
I am finding that my SSIS packages execute faster during development, from within the SQL Business Intelligence Development Studio environment, than when I execute them with dtexecui after deploying them as either File System or SQL Server deployments.
After deployment the packages run about twice as long as they do from within SQL Business Intelligence Development Studio (i.e. they reference the same data and are executed on the same machine).
In short, the packages import data; during this import stage the data is massaged, certain calculations are performed on it, and this massaged data is then used to perform further calculations.
A summary of the tools used in SSIS: Execute SQL Task, Flat File Source, OLE DB Destination, Data Conversion, Aggregate, Conditional Split, Merge, Foreach ADO Enumerators, Script Tasks, Derived Columns, and Recordset Destinations.
In terms of time: from within SQL Business Intelligence Development Studio, when the packages are executed for 1 month's data the total combined time is about 10 minutes; deployed as either File System or SQL Server deployments, they run for about 20 minutes. My problem is that when they are executed for a year's data, instead of running for 120 minutes they run for at least 240 minutes or longer.
Is this normal or am I missing some settings when building the package for deployment?
Luke.
View 3 Replies
View Related
Aug 1, 2006
I've found a very interesting case where SQL 2005 is more than 4 times slower than its predecessor, despite superior hardware.
To compare, I have some logs that the ISA Server has stored in the database, which are then filtered against my blocklists. The result is the most popular sites I have not blocked or trusted.
I have done some tuning on the query to utilize the data patterns that are in the database. This allows much better scaling towards large datasets. The engine is now capable to use hash-matches instead of nested loops over the entire datasets.
It currently runs in 1 to 1.5 minutes on SQL 2000, which is powered by a VIA C3 Nehemiah at 1 GHz with only 350 MB RAM to spare in optimistic conditions (the remainder of the 1 GB is used by many other applications). The server is limited to 384 MB memory usage.
The other box, running SQL 2005, is an Athlon 800 MHz, which has superior processing power, a larger cache, and more memory bandwidth, and has 1.25 GB RAM, where SQL Server can use the needed 629 MB without any problem.
Despite these facts, my query takes more than 4 times longer on the SQL 2005 box: 1.5 minutes compared to 7 to 8 minutes.
SQL 2000 Database Schema:
CREATE TABLE [dbo].[WebProxyLog](
[ClientIP] [bigint] NOT NULL,
[ClientUserName] [nvarchar](514) NOT NULL,
[ClientAgent] [varchar](128) NOT NULL,
[ClientAuthenticate] [smallint] NOT NULL,
[logTime] [datetime] NOT NULL,
[service] [smallint] NOT NULL,
[servername] [nvarchar](32) NOT NULL,
[referredserver] [varchar](32) NOT NULL,
[DestHost] [varchar](255) NOT NULL,
[DestHostIP] [bigint] NOT NULL,
[DestHostPort] [int] NOT NULL,
[processingtime] [int] NOT NULL,
[bytesrecvd] [bigint] NOT NULL,
[bytessent] [bigint] NOT NULL,
[protocol] [varchar](12) NOT NULL,
[transport] [varchar](8) NOT NULL,
[operation] [varchar](24) NOT NULL,
[uri] [varchar](2048) NOT NULL,
[mimetype] [varchar](32) NOT NULL,
[objectsource] [smallint] NOT NULL,
[resultcode] [int] NOT NULL,
[CacheInfo] [int] NOT NULL,
[rule] [nvarchar](128) NOT NULL,
[FilterInfo] [nvarchar](128) NOT NULL,
[SrcNetwork] [nvarchar](128) NOT NULL,
[DstNetwork] [nvarchar](128) NOT NULL,
[ErrorInfo] [int] NOT NULL,
[Action] [varchar](32) NOT NULL,
[GmtLogTime] [datetime] NOT NULL
)
CREATE TABLE [dbo].[TrustedHosts](
[Hostname] [varchar](60) NOT NULL,
[Comment] [varchar](500) NULL,
CONSTRAINT [PK_TrustedHosts] PRIMARY KEY CLUSTERED
(
[Hostname] ASC
) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[DeniedHosts](
[ReasonId] [smallint] NOT NULL,
[Hostname] [varchar](80) NOT NULL,
[Path] [varchar](50) NOT NULL CONSTRAINT [DF_DeniedHosts_Path] DEFAULT ('%'),
[Comment] [varchar](500) NULL,
CONSTRAINT [PK_DeniedHosts] PRIMARY KEY CLUSTERED
(
[Hostname] ASC,
[ReasonId] ASC,
[Path] ASC
) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[DeniedHosts] WITH NOCHECK ADD CONSTRAINT [FK_DeniedHosts_DenyReason] FOREIGN KEY([ReasonId]) REFERENCES [dbo].[DenyReason] ([ReasonId])
GO
ALTER TABLE [dbo].[DeniedHosts] CHECK CONSTRAINT [FK_DeniedHosts_DenyReason]
GO
ALTER TABLE [dbo].[DeniedHosts] WITH NOCHECK ADD CONSTRAINT [CK_DeniedHosts_HostName_NoTwoDots] CHECK ((((not([Hostname] like '%..%')))))
GO
ALTER TABLE [dbo].[DeniedHosts] CHECK CONSTRAINT [CK_DeniedHosts_HostName_NoTwoDots]
GO
ALTER TABLE [dbo].[DeniedHosts] WITH NOCHECK ADD CONSTRAINT [CK_DeniedHosts_NoPercentDot] CHECK (([Hostname] <> '%.'))
GO
ALTER TABLE [dbo].[DeniedHosts] CHECK CONSTRAINT [CK_DeniedHosts_NoPercentDot]
GO
ALTER TABLE [dbo].[DeniedHosts] WITH NOCHECK ADD CONSTRAINT [CK_DeniedHosts_NoWildcardMiddle] CHECK ((((not([Hostname] like '_%[%]%')))))
GO
ALTER TABLE [dbo].[DeniedHosts] CHECK CONSTRAINT [CK_DeniedHosts_NoWildcardMiddle]
GO
ALTER TABLE [dbo].[DeniedHosts] WITH NOCHECK ADD CONSTRAINT [CK_DeniedHosts_ValidPath] CHECK (([Path] is null or [Path] = '%' or [Path] like '/%' and [Path] <> '' and [Path] <> '/%'))
GO
ALTER TABLE [dbo].[DeniedHosts] CHECK CONSTRAINT [CK_DeniedHosts_ValidPath]
GO
ALTER TABLE [dbo].[DeniedHosts] WITH NOCHECK ADD CONSTRAINT [CK_DeniedHosts_WildcardStart] CHECK ((((not([Hostname] like '[%]%'))) or ([Hostname] like '[%].%' or [Hostname] = '%')))
GO
ALTER TABLE [dbo].[DeniedHosts] CHECK CONSTRAINT [CK_DeniedHosts_WildcardStart]
I'm not going to give you any data as:
WebProxyLog contains 1 223 878 rows; 524 MB (a few more in the SQL 2000 database).
DeniedHosts contains 52 338 rows; 3 MB
TrustedHosts contains 2 183 rows; <1 MB
The relevant query is:
CREATE PROCEDURE [dbo].[GetTrustedHosts]
AS
SELECT Hosts, Requests --, DistinctRequests
FROM (SELECT AA.Hosts, COUNT(*) AS Requests, COUNT(DISTINCT Path) AS DistinctRequests
      FROM (SELECT CASE
                       WHEN CHARINDEX(':', SUBSTRING(URI, 8, CHARINDEX('/', URI, 8) - 8), 8) <> 0 THEN SUBSTRING(URI, 8, CHARINDEX(':', URI, 8) - 8)
                       ELSE SUBSTRING(URI, 8, CHARINDEX('/', URI, 8) - 8)
                   END AS Hosts,
                   SUBSTRING(URI, CHARINDEX('/', URI, 8), 50) AS Path
            FROM dbo.WebProxyLog wpl
            WHERE URI LIKE 'http://%/%'
              AND ResultCode BETWEEN 200 AND 399
              AND (Service = 1) -- filter for only forward proxy
           ) AA
      WHERE NOT EXISTS
            (SELECT *
             FROM dbo.DeniedHosts dhp
             WHERE Path <> '%'
               AND LEFT(AA.Path, 3) = LEFT(dhp.Path COLLATE SQL_Latin1_General_CP1_CI_AS, 3)
               AND AA.Hosts LIKE (dhp.Hostname COLLATE SQL_Latin1_General_CP1_CI_AS)
               AND (AA.Path LIKE (dhp.Path COLLATE SQL_Latin1_General_CP1_CI_AS) OR AA.Path LIKE ((dhp.Path COLLATE SQL_Latin1_General_CP1_CI_AS) + '[?]%'))
            )
        AND NOT Hosts IS NULL -- this seems to give a speed advantage
      GROUP BY AA.Hosts
      HAVING COUNT(*) >= 25
     ) A
WHERE NOT Hosts IN
      (SELECT Hostname COLLATE SQL_Latin1_General_CP1_CI_AS
       FROM dbo.TrustedHosts thc
       --WHERE NOT thc.Hostname LIKE '%[%]%'
      )
  AND NOT Hosts IN
      (SELECT Hostname COLLATE SQL_Latin1_General_CP1_CI_AS
       FROM dbo.DeniedHosts dhc
       WHERE dhc.Path = '%'
      )
  AND NOT EXISTS
      (SELECT *
       FROM dbo.TrustedHosts thh
       WHERE A.Hosts LIKE (thh.Hostname COLLATE SQL_Latin1_General_CP1_CI_AS)
         AND thh.Hostname LIKE '[%]%'
         -- generates a hash join instead of a nested loop
         AND RIGHT(A.Hosts, 6) = (RIGHT(thh.Hostname, 6) COLLATE SQL_Latin1_General_CP1_CI_AS)
      )
  AND NOT EXISTS
      (SELECT *
       FROM dbo.DeniedHosts dhh
       WHERE dhh.Path = '%'
         -- this reduces the cost of the most expensive query
         AND dhh.Hostname LIKE '[%]%'
         AND A.Hosts LIKE (dhh.Hostname COLLATE SQL_Latin1_General_CP1_CI_AS)
         -- generates a hash join instead of a nested loop
         -- performance difference is significant due to volume
         AND RIGHT(A.Hosts, 6) = (RIGHT(dhh.Hostname, 6) COLLATE SQL_Latin1_General_CP1_CI_AS)
      )
ORDER BY Requests DESC
There is some mess with collations, but these don't seem to hurt performance.
The query plans provided differ only slightly.
SQL 2000 produces a Clustered Index Scan over WebProxyLog with a predicate on ResultCode, followed by a filter for the LIKE operator; the cost is 78% for the scan and 10% for the filter.
SQL 2005 combines both, with the scan costing 93%.
However, the key does not seem to be in this data, as the plans are nearly equivalent, with the SQL 2005 execution plan looking slightly better (the table scan is a larger part of the execution).
How can these differences, especially of this magnitude, be explained?
And further, how can the query be optimized for decent performance on SQL 2005? What am I doing wrong?
View 5 Replies
View Related
Apr 28, 2004
Hi,
I am experiencing some problems accessing an Oracle database through OLE DB from an MTS application using OpenWithServiceComponents (which is supposed to give me connection pooling).
When I connect to a SQL Server database it takes only 3.1 ms to open a connection, while with Oracle it takes 15.5 ms (both DBs running on the same machine; I made 1000 calls and took the average).
Am I doing something wrong? Am I missing something here?
I have tried the plain Open method, but in that case it takes 5.4 ms with SQL Server and 31.4 ms with Oracle.
Is Oracle really that much slower when accessing it through OLE DB?
Thanks for any advice/hints!
Florin
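One hedged detail worth checking: with ADO, session pooling through the OLE DB service components is controlled from the connection string, e.g. (data source and credentials are placeholders):

Provider=OraOLEDB.Oracle;Data Source=MyTnsAlias;User Id=scott;Password=tiger;OLE DB Services=-1;

"OLE DB Services=-1" enables all service components, including pooling; if the Oracle open times do not drop toward the SQL Server times on the second and later opens, pooling is probably not engaging for that provider.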
View 1 Replies
View Related
Sep 30, 2014
I'm setting up AG replication over our WAN (1 Gb MPLS). Under a SQL stress workload, AG replication quickly falls behind; network throughput plateaus at ~14 Mbps.
The problem ISN'T the network:
using conventional mirroring, the same workload never falls behind, and throughput plateaus at ~26 Mbps.
It's almost like log stream compression isn't happening for the AG...
Both mirroring and the AG are using async.
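For reference, a hedged query to watch the send queue during the stress run (these DMV columns exist on SQL Server 2012 and later):

SELECT ar.replica_server_name,
       drs.log_send_queue_size,   -- KB of log waiting to be sent
       drs.log_send_rate,         -- KB/sec actually being sent
       drs.redo_queue_size        -- KB waiting to redo on the secondary
FROM sys.dm_hadr_database_replica_states drs
JOIN sys.availability_replicas ar ON ar.replica_id = drs.replica_id

A steadily climbing log_send_queue_size with a flat log_send_rate matches the "falls behind at ~14 Mbps" symptom described above.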
View 4 Replies
View Related
Nov 10, 2014
Following is my db table:

student_id   student_code   student_parent_id   student_name
1            11             0                   a
2            111            1                   b
3            1111           2                   c
4            11111          3                   d

I want to generate the following output:

student_id   student_code   student_parent_id   student_name   Hierarchy
1            11             0                   a              11
2            111            1                   b              11-111
3            1111           2                   c              11-111-1111
4            11111          3                   d              11-111-1111-11111
Following is the query
I want to retrieve around 10,000 rows in one go; it's taking around 8 seconds. How can I make it faster?
Whether I retrieve 1 record or 10,000 records, it takes around 8 seconds.
--- create table
create table test(sid bigint, scode nvarchar(50), parentid bigint, sname nvarchar(50))
---- insert records
insert into test values (1, '11', 0, 'a')
insert into test values (2, '111', 1, 'b')
insert into test values (3, '1111', 2, 'c')
insert into test values (4, '11111', 3, 'd')
---- result query
;WITH SInfo AS
(
SELECT sId
,scode
,ParentId
,sName
,CONVERT(nvarchar(800), scode) AS Hierarchy
[Code] .....
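The [Code] marker above is the post's own truncation; a hedged completion showing how the recursion most likely continues (matching the columns already declared in the anchor member):

;WITH SInfo AS
(
    SELECT sid, scode, parentid, sname,
           CONVERT(nvarchar(800), scode) AS Hierarchy
    FROM test
    WHERE parentid = 0                          -- anchor: root rows
    UNION ALL
    SELECT t.sid, t.scode, t.parentid, t.sname,
           CONVERT(nvarchar(800), s.Hierarchy + N'-' + t.scode)
    FROM test t
    JOIN SInfo s ON s.sid = t.parentid          -- recurse down the tree
)
SELECT sid, scode, parentid, sname, Hierarchy
FROM SInfo

For a flat 8 seconds regardless of row count, the first thing to check is an index supporting the recursive join, e.g. CREATE INDEX IX_test_parentid ON test (parentid).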
View 7 Replies
View Related
Aug 24, 2015
I'm migrating electronic records from a legacy system and the new system has strict requirements for ASCII characters in certain metadata fields. I wrote a UDF to display illegal characters, so I can work out how to map them.
The UDF used a WHILE loop, and to improve performance I wrote an equivalent UDF using a tally table. The tally-table version actually ran significantly slower: a query calling the UDF via CROSS APPLY took 26 secs for the while loop versus 119 secs for the tally table, on test data of 97,000 rows.
I would like to work out why, as I will use similar code to replace the illegal characters.
-- while loop version of UDF
CREATE FUNCTION [dbo].[DisplayIllegalChars](@strText VARCHAR(4000))
RETURNS @TableVariable TABLE (
Chr CHAR(1)
,AsciiValue INT)
AS
BEGIN
DECLARE @intCount INT
DECLARE @chrCheck CHAR
[Code] .....
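The while-loop body is truncated above (the [Code] marker). For comparison, a hedged sketch of the tally approach written as an *inline* TVF; it assumes a numbers table dbo.Tally(N), and ASCII 32-126 as the legal range is an assumption:

CREATE FUNCTION dbo.DisplayIllegalChars_Tally (@strText VARCHAR(4000))
RETURNS TABLE
AS
RETURN
(
    SELECT SUBSTRING(@strText, t.N, 1)        AS Chr,
           ASCII(SUBSTRING(@strText, t.N, 1)) AS AsciiValue
    FROM dbo.Tally t
    WHERE t.N <= LEN(@strText)
      AND (ASCII(SUBSTRING(@strText, t.N, 1)) < 32
        OR ASCII(SUBSTRING(@strText, t.N, 1)) > 126)
)

If the slow version was a multi-statement TVF populating a table variable from the tally, that overhead - not the tally itself - would explain it running behind the while loop.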
View 9 Replies
View Related