Weird Problem: SP Executes Slowly, But Drop And Recreate Speeds It Up!
Feb 22, 2006
I have found an (encrypted) SP which takes ~20 seconds to run on one of our client DBs.
If I drop and recreate (or alter) the SP giving it the same contents it will run in ~1 second.
I need to know
1. why the sp started running so slowly
2. how to stop it happening again
3. how to fix it without resorting to drop/recreate (I can't exactly write a batch script to recreate all our sps every so often)
I have tried sp_updatestats and sp_recompile before running the sp to no avail...
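For reference, a recompile can also be forced without touching the object; a minimal sketch, assuming the procedure is called dbo.MySlowProc and leans on a table called dbo.MyBigTable (both names are placeholders):

EXEC sp_recompile N'dbo.MySlowProc'              -- next execution builds a fresh plan
UPDATE STATISTICS dbo.MyBigTable WITH FULLSCAN   -- fuller stats than sp_updatestats' default sampling
DBCC FREEPROCCACHE                               -- heavy hammer: discards every cached plan on the instance

If a fresh plan cures it, the usual suspect is a cached plan compiled for unrepresentative parameter values (parameter sniffing), which drop/recreate fixes only as a side effect.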
I need to drop and recreate a few subscriptions in a transactional publication. Do I need to worry about log marker issues? Do I need to set the primary and replicate databases to 'DBO use only'?
The Primary and Replicate databases are being accessed all the time.
I have come across a user who is getting error 2525 on sysobjects tables throughout their SQL 6.5 system. The IDs match and I can identify the index, but as these are system tables I cannot drop or rebuild the indexes.
Any ideas please?
The errors have been occurring for some time, so it is not an option to restore. I would like to avoid having to transfer data to and from another SQL Server.
I'm having a heck of a time trying to upload data to an Excel spreadsheet. This works perfectly in SQL 2000, but I've been having problems with 2005.
SSIS package "Package1.dtsx" starting. Error: 0xC002F210 at Drop table(s) SQL Task, Execute SQL Task: Executing the query "drop table `GRE` " failed with the following error: "Table 'xxx' does not exist.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. Task failed: Drop table(s) SQL Task Error: 0xC002F210 at Preparation SQL Task, Execute SQL Task: Executing the query "CREATE TABLE `xxx` ( `TEST_REC_NBR` Decimal(29,0), `PROCESS_DT_GRE` LongText ) " failed with the following error: "Invalid precision for decimal data type.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. Task failed: Preparation SQL Task SSIS package "Package1.dtsx" finished: Failure.
Hello, I need to retrieve all the information about all the foreign key constraints, in order to store them temporarily (to be deleted later) and then recreate them after making the necessary modifications to the concerned tables. The stored proc sp_helpconstraint shows all the constraint types, their user-defined or system-supplied names, the columns on which they are defined, and the expressions that define them, but I don't know whether it can be manipulated to get what I want. I need to get this done programmatically, so that I can integrate it into the program I'm building up progressively. Any help or scripts would be appreciated!
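For what it's worth, the catalog views can be queried directly instead of scraping sp_helpconstraint output; a minimal sketch (SQL 2005+ views; on SQL 2000 the rough equivalents are sysforeignkeys and sysreferences):

SELECT fk.name AS constraint_name,
       OBJECT_NAME(fk.parent_object_id) AS fk_table,
       pc.name AS fk_column,
       OBJECT_NAME(fk.referenced_object_id) AS pk_table,
       rc.name AS pk_column
FROM sys.foreign_keys fk
JOIN sys.foreign_key_columns fkc ON fkc.constraint_object_id = fk.object_id
JOIN sys.columns pc ON pc.object_id = fkc.parent_object_id AND pc.column_id = fkc.parent_column_id
JOIN sys.columns rc ON rc.object_id = fkc.referenced_object_id AND rc.column_id = fkc.referenced_column_id
ORDER BY fk.name, fkc.constraint_column_id

Each row gives you enough to build matching DROP CONSTRAINT and ADD CONSTRAINT statements before and after your table changes.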
I have two tables Person & Location where Location has a primary key LocationId and Person has a foreign key LocationId.
Sometime ago I used the Database Diagrams visual tool of SQL Server Management Studio Express to create the foreign key relationship between the two tables - i.e. "visually" (drawing a line between the PK & FK LocationId elements of both tables).
Time has passed and I recently noticed that, upon retrieving my saved diagram, the foreign key relationship had "fallen off" (i.e. the many-to-one line was no longer showing in the diagram).
After recreating the relationship (redrawing the line) I find that I get an error message when I try to save the diagram:
Post-Save Notifications [!] Errors were encountered during the save process. Some database objects were not saved.
'Location' table saved successfully 'Person' table - Unable to create relationship 'FK_Person_Location'. The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "FK_Person_Location". The conflict occurred in database "mydb", table "dbo.Location", column 'LocationId'.
When I go back to the object explorer and view the dependencies for the two tables, there is no dependency (between these two tables) revealed. When I try to create the foreign key constraint manually (T-SQL), it again says it can't add the constraint. It comes up with an error as follows:
ALTER TABLE Person ADD FOREIGN KEY (LocationId) REFERENCES Location (LocationId)
Msg 547, Level 16, State 0, Line 2 The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "FK__Person__LocationId__793DFFAF". The conflict occurred in database "mydb", table "dbo.Location", column 'LocationId'.
(Note: Each time I do this, the 8 hexadecimal character suffix changes.)
When I try to drop the foreign key:
alter table Person drop constraint FK__Person__LocationId
it comes back with the error:
Msg 3728, Level 16, State 1, Line 2 'FK__Person__LocationId' is not a constraint. Msg 3727, Level 16, State 0, Line 2 Could not drop constraint. See previous errors.
So it seems that there's some kind of goof up here. Can anybody shed light on this / tell me how to fix it?
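Msg 547 on an ADD CONSTRAINT usually means existing rows violate the proposed key, which would also explain why the diagram save fails. A hedged sketch for checking, with the bypass if you want the constraint despite the orphans (table and column names taken from the post):

SELECT p.*
FROM Person AS p
LEFT JOIN Location AS l ON l.LocationId = p.LocationId
WHERE p.LocationId IS NOT NULL
  AND l.LocationId IS NULL          -- Person rows pointing at a LocationId that doesn't exist

-- If orphans are expected, add the constraint without validating existing rows:
ALTER TABLE Person WITH NOCHECK
    ADD CONSTRAINT FK_Person_Location FOREIGN KEY (LocationId) REFERENCES Location (LocationId)

The drop in the post fails simply because the auto-generated name includes the hex suffix; dropping needs the full name exactly as reported in the error message.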
I have two avenues to access my primary SQL Server. (I work remotely using a VPN connection.)

Usually, I hit the server from my local machine, but I also log in to a desktop machine inside the main building using a terminal app and hit the server from that desktop.

Anyway, when I import files locally using the Enterprise Manager's DTS import wizard, the process is incredibly slow. It doesn't matter if the file is stored on my local machine or if the file is stored on the same server that has SQL Server mounted. It's as if the data from the file is being channeled to my local machine and then back to the server. Shouldn't the entire process run server-side?

When I use the EM import wizard through the terminal app, on the desktop inside the building, the import process flies.

The only difference between the two setups is that I must add the domain name to the address of the SQL Server and the same internal server where the files are stored.

Can I do anything about this? Thanks!
Monthly, I copy a table from one database to another database. Deleting the original table and copying the table back speeds up the query on the order of 10 to 1. Why does this work?
Detail: I have a legacy table that a small application queries about once a month. The table was poorly designed, and the query runs a date-range comparison on one field and has a subquery that runs string comparisons against six fields. I cannot change the calling app or the table design. When the app calls the query, the call times out due to the inordinate length of time. To fix this until next month's query, I copy the table out, delete the original, and copy it back. What changes when the table is copied to another database and then copied back? The query's run time drops from 10 seconds to 1.
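Copying the table out and back effectively rebuilds it (fresh pages, no fragmentation or forwarded records), and the same effect can usually be had in place. A hedged sketch in SQL 2000 syntax, with placeholder table and column names:

DBCC DBREINDEX ('dbo.LegacyTable')   -- rebuilds all indexes on the table, assuming dbo ownership

-- If the table is a heap with no indexes at all, compact it by creating and dropping a clustered index:
CREATE CLUSTERED INDEX IX_tmp ON dbo.LegacyTable (SomeDateColumn)
DROP INDEX dbo.LegacyTable.IX_tmp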
Why do I see absolutely no performance improvement when I spread my primary file group over 8 separate files on 8 separate disks, as opposed to having the primary file group all in one file on one disk?
I have set up 2 identical databases, one spread over 8 disks and one on one disk. Each database has a table called DATA and a column called VALUE. Value is NVARCHAR(200). I have filled each table up in both databases with 20,000 rows.
I then perform a select on each table in each database using CHECKPOINT and DBCC DROPCLEANBUFFERS to ensure I am reading from disk before each query and the execution times are identical in both databases.
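For reference, the cold-cache pattern described here looks something like this (the SELECT is a placeholder that forces a full scan of the test table):

CHECKPOINT                      -- flush dirty pages to disk
DBCC DROPCLEANBUFFERS           -- empty the buffer pool so the next read really hits disk
SELECT COUNT(*) FROM dbo.DATA WHERE VALUE LIKE N'%x%'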
I then ran the same queries against each database using a load testing tool and the batch requests per second on each DB is identical under load.
Surely the database with data spread over 8 disks should be FAR faster than the single-file database, as you have the combined reading power of 8 disks as opposed to 2?
Also, the same is happening for write speeds. When I create the data on both databases, the time it takes is identical on both.
BOL says it should be faster with multiple disks.
Just FYI this is on an Azure virtual machine and each disk is a locally redundant data disk that I have attached to the virtual machine.
Also, should write speeds increase with multiple disks, or just read speeds?
Hi, I found this SQL in the newsgroup to drop indexes in a table. I need a script that will drop all indexes in all user tables of a given database:

DECLARE @indexName NVARCHAR(128)
DECLARE @dropIndexSql NVARCHAR(4000)

DECLARE tableIndexes CURSOR FOR
    SELECT name FROM sysindexes
    WHERE id = OBJECT_ID(N'F_BI_Registration_Tracking_Summary')
      AND indid > 0
      AND indid < 255
      AND INDEXPROPERTY(id, name, 'IsStatistics') = 0

OPEN tableIndexes
FETCH NEXT FROM tableIndexes INTO @indexName
WHILE @@fetch_status = 0
BEGIN
    SET @dropIndexSql = N'DROP INDEX F_BI_Registration_Tracking_Summary.' + @indexName
    EXEC sp_executesql @dropIndexSql
    FETCH NEXT FROM tableIndexes INTO @indexName
END
CLOSE tableIndexes
DEALLOCATE tableIndexes

TIA,
Rob
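A hedged generalisation of that script to every user table, still using the SQL 2000 era system tables. The status-bit filters are my assumption (2048 and 4096 are the PRIMARY KEY and UNIQUE constraint flags in sysindexes, and such indexes need ALTER TABLE ... DROP CONSTRAINT instead), and it assumes dbo-owned tables:

DECLARE @tableName SYSNAME, @indexName SYSNAME, @dropIndexSql NVARCHAR(4000)

DECLARE allIndexes CURSOR FOR
    SELECT OBJECT_NAME(i.id), i.name
    FROM sysindexes i
    JOIN sysobjects o ON o.id = i.id
    WHERE o.type = 'U'                                     -- user tables only
      AND i.indid > 0 AND i.indid < 255                    -- skip heap (0) and text/image (255) rows
      AND INDEXPROPERTY(i.id, i.name, 'IsStatistics') = 0  -- skip auto-created statistics
      AND (i.status & 2048) = 0 AND (i.status & 4096) = 0  -- skip constraint-backed indexes

OPEN allIndexes
FETCH NEXT FROM allIndexes INTO @tableName, @indexName
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @dropIndexSql = N'DROP INDEX ' + QUOTENAME(@tableName) + N'.' + QUOTENAME(@indexName)
    EXEC sp_executesql @dropIndexSql
    FETCH NEXT FROM allIndexes INTO @tableName, @indexName
END
CLOSE allIndexes
DEALLOCATE allIndexes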
DB1 contains a log of multiple events for each record while DB2 contains the latest log for the record. I have a trigger(trigger1) in DB1 that automatically updates/inserts the log in DB2 depending on whether the record already exists in DB2.
I also have another trigger that checks for a specific value in the 'tag' field.
I used to set this trigger (trigger2) up in DB2, but it was executed twice: the first time when trigger1 inserts/updates a DB2 record and the second time when trigger2 updates DB2. So instead of having only a value of 1 in tag, I get a 2.
So what I did is move trigger2 to DB1. My only problem is I don't know which trigger gets executed first.
I'm not sure whether this is the best way of updating the 'tag' field. The purpose of the tag field is that when it reaches a certain number, it stops all events and inserts a termination log in DB2.
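Since both triggers now live on the same DB1 table, the firing order can be pinned rather than guessed; a hedged sketch (SQL 2000 and later; the trigger names are the placeholders from the post):

EXEC sp_settriggerorder
     @triggername = 'trigger1',
     @order = 'First',
     @stmttype = 'INSERT'

Only 'First' and 'Last' can be pinned per statement type; any remaining triggers fire in an undefined order between them.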
I have three stored procedures that need to run nightly in SQL 7. The three procedures are not related; but to keep the procedures from running at the same time, I placed them as three steps of a single job. The first two steps are set to "Goto next step" on success; the last step is set to "Quit with success."

The job runs every night. However, only the first step/procedure is executed. Also, the first step has a green flag next to its ID in the Steps tab of the job properties.

I suppose I could just call all three SPs from a single stored procedure that is run nightly. But I thought that putting them as three steps in a single job would cause all three to run. What am I doing wrong?

Thanks!
Neil
I have a developer using SQL 7.0 Enterprise Manager and the DTS Wizard to create and save simple DTS imports on the SQL Server. The DTS job runs only on his workstation using his NT logon ID. Server is set for mixed security. Other users can edit the saved DTS job and resave it, then anyone can run the job. I tried having the developer log in with his NT logon ID on another workstation; he can't run the job from there if he was the last person to edit it from his workstation.
Any ideas, anyone? This is the only person having this problem in my shop.
I am trying to bulk map products to a CategoryID table. Each product has a SKU code and is then mapped to a CategoryID table. The script executes fine, but when I look, the products are not mapped.
When I run the query it takes less than 5 seconds to give results,
but when I run the view it takes about a minute.
Can anyone help please? It keeps giving me timeout errors.
Here is the query:
SELECT MAX(B.Code) AS Code,
       MAX(T.Description) AS Type,
       MAX(WH.Description) AS Warehouse,
       MAX(B.Barcode) AS Barcode,
       MAX(B.BatchNo) AS BatchNo,
       MAX(B.CustomField) AS CustomField,
       MAX(B.Colour) AS Colour,
       MAX(Q.Quality) AS Quality,
       MAX(ROUND(M.ConvFactor, 2, 2)) AS ConvFactor,
       MAX(M.Multiply) AS Multiply,
       MAX(CONVERT(VARCHAR(20), ROUND((B.BoughtQty + B.TransferQty + B.IssuedQty + B.ReturnedQty + B.AdjustmentQty), 2))) + ' ' + MAX(M.UoM) AS Available,
       CASE WHEN MAX(M.Multiply) = 'M'
            THEN MAX(CONVERT(VARCHAR(20), ROUND(((B.BoughtQty + B.TransferQty + B.IssuedQty + B.ReturnedQty + B.AdjustmentQty) * M.ConvFactor), 2))) + ' ' + MAX(M.AUoM)
            ELSE MAX(CONVERT(VARCHAR(50), CONVERT(FLOAT, ROUND(((B.BoughtQty + B.TransferQty + B.IssuedQty + B.ReturnedQty + B.AdjustmentQty) / M.ConvFactor), 2)))) + ' ' + MAX(M.AUoM)
       END AS AvailableAlternative,
       MAX(BC.Supplier) AS Supplier,
       MAX(OD.SupplierCode) AS SupplierCode
FROM cvrbatches B
LEFT JOIN SciposA.dbo.cvrmaster M ON M.Code = B.Code
LEFT JOIN cvrbatchctrl BC ON B.Code = BC.Code AND B.Type = BC.Type AND B.BatchNo = BC.BatchNo
INNER JOIN SciposA.dbo.cvrwhcontrol WH ON WH.Warehouse = B.Warehouse
INNER JOIN SciposA.dbo.cvrtypes T ON T.CoverType = B.Type
INNER JOIN cvrquality Q ON Q.Code = B.Quality AND B.Type = Q.Type
LEFT JOIN SciposA.dbo.cvrgrndetail GD ON GD.BatchNo = B.BatchNo AND B.Warehouse = GD.Warehouse
LEFT JOIN SciposA.dbo.cvrorderdetails OD ON OD.OrderNo = GD.OrderNo AND OD.LineN = GD.LineN
GROUP BY B.Barcode
I want to split the data in a table into two parts, say 30%-70%.
Here are my query statements:
-- 30%
SELECT top 30 percent * FROM DataTable ORDER BY Col1
--70%
SELECT top 70 percent * FROM DataTable ORDER BY Col1 DESC
Somebody says the result is not guaranteed to be correct, since in SQL Server 2005 the query optimizer may choose a different strategy to execute the query statement.
I think the SELECT should execute after the ORDER BY, which would have nothing to do with query optimization. Am I correct?
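The two result sets are only guaranteed to be complementary if the ordering is deterministic; if Col1 has ties (or TOP ... PERCENT rounds up on both sides), rows can land in both halves or in neither. A hedged sketch using an assumed unique column KeyCol as a tiebreaker:

-- 30%
SELECT TOP 30 PERCENT * FROM DataTable ORDER BY Col1, KeyCol

-- exact complement: everything not in the first set
SELECT *
FROM DataTable
WHERE KeyCol NOT IN (SELECT TOP 30 PERCENT KeyCol FROM DataTable ORDER BY Col1, KeyCol)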
Is this a limitation of SQL Server? I am running a quite complex SP that I wrote, which uses EXEC to execute a SQL string. Running the SP produces the desired results, but if I try to use this SP with an INSERT statement, I get an error message that EXEC cannot be nested in an INSERT statement. Any help would be appreciated.
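The restriction is on nesting INSERT ... EXEC levels, and the common workaround is to let the caller create a temp table that the procedure (and its dynamic SQL) fills directly, since a temp table created in the outer scope is visible to both. A hedged sketch with placeholder names and shape:

CREATE TABLE #results (Col1 INT, Col2 NVARCHAR(100))   -- hypothetical shape

EXEC dbo.MyDynamicProc   -- inside, the EXEC'd string does INSERT INTO #results ... itself

SELECT * FROM #results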
I have created a rule in Outlook to run this Visual Basic script when I receive a specific email. The code runs a SQL Server DTS package that imports data and runs a couple of stored procs. If the DTS package takes a while to execute, my Outlook will lock up until the entire package is complete. I want the DTS package to run, but I don't want the application to remain locked until it's complete. Anyone have any ideas of how to get around this?
Public Sub RunTVDTSPackage(Item As Outlook.MailItem)
    Dim oPackage As New DTS.Package
    On Error GoTo eh
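One hedged way around the blocking is to not run the package in-process at all, but shell out to dtsrun so control returns to Outlook immediately (server and package names are placeholders; /E means trusted connection, /N names the package):

' Launch the package out-of-process; Shell returns as soon as the process starts
Shell "dtsrun /S MYSERVER /E /N ""TVDTSPackage""", vbHide

The trade-off is that the script no longer sees the package's success or failure; dtsrun or package logging has to cover that.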
I am trying to delete data from a table, prior to populating it with new data via a trigger, if certain criteria match, to eliminate duplicates. I have copied the trigger below. The syntax checks OK, but when I try to save I get an error message saying 'Incorrect syntax near 'GO'. 'ALTER TRIGGER' must be the first statement in a query batch'.
Can someone tell me if this is possible, please?
IF EXISTS (SELECT * FROM hold_complete WHERE fkey = hold_complete.fkey AND actiontext = 'hold' AND Subactiontext = 'pending user')
    DELETE FROM hold_complete WHERE hold_complete.fkey = fkey
GO

CREATE TRIGGER tr_hold_complete ON CallsHistory FOR INSERT AS
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

INSERT hold_complete
SELECT ins.AddedDT, ins.fkey, ins.actiontext, ins.subactiontext, con.emailaddress, ca.loggeddt,
       (con.forename + ' ' + con.surname) AS contactname, ca.summary, ca.notes, co.coordinator,
       co.coordinator, GETDATE(), ca.status, ca.lastsubaction, GETDATE(), ca.dateopened,
       ca.companyname, GETDATE(), (NULL), ch.notes
FROM inserted AS ins WITH (NOLOCK)
JOIN calls AS ca WITH (NOLOCK) ON ins.fkey = ca.callid
JOIN contact AS con WITH (NOLOCK) ON ca.contactid = con.contactid
JOIN company AS co WITH (NOLOCK) ON ca.companyid = co.companyid
JOIN callshistory AS ch WITH (NOLOCK) ON ins.historyid = ch.historyid
WHERE ins.actiontext = 'hold' AND ins.Subactiontext IN ('completed', 'pending user')
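CREATE TRIGGER has to be the first statement in its batch, so the cleanup can't sit in front of it; it belongs inside the trigger body, correlated to the inserted rows. A hedged sketch of that shape (the original IF EXISTS also compared fkey to itself, which this avoids):

CREATE TRIGGER tr_hold_complete ON CallsHistory FOR INSERT AS
BEGIN
    -- remove rows that the incoming batch would duplicate
    DELETE hc
    FROM hold_complete hc
    JOIN inserted i ON i.fkey = hc.fkey
    WHERE hc.actiontext = 'hold' AND hc.Subactiontext = 'pending user'

    -- ...then the INSERT ... SELECT from the original trigger goes here
END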
I am trying to understand how to predict what user context a package will execute under, more specifically when it comes to operating system access.
For example, we have a package that creates a flat file; during development it was simply created on the C: drive of the user's PC.
Now it must create the files on a remote server.
How do permissions to folders work in this case?
When you execute it in BIDS debug, does it use the Windows account you logged on with to connect to the remote folder?
If so, what happens when it is deployed to SQL Server and scheduled in a job? Does it use the domain account that the SQL Server Agent service runs under? Or will it use the credentials of the proxy account?
I am unclear on this scenario. Obviously we need to grant the correct permissions on the remote server's shared folder to the correct user, but I am not sure what user account the package will be trying to create files on the remote server as.
In a nutshell, I need clarity on what user a package executes as when running in a SQL job.
Anyone have a good link, or the time to clarify this for me, please?
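As I understand it: in BIDS the package runs as the Windows account you are logged on with, and in a job the SSIS step runs as the SQL Server Agent service account unless the step is assigned a proxy. A hedged sketch of the proxy route (SQL 2005+; every name and secret below is a placeholder):

CREATE CREDENTIAL SSISFileCred WITH IDENTITY = N'DOMAIN\SvcAccount', SECRET = N'StrongPassword1'

EXEC msdb.dbo.sp_add_proxy @proxy_name = N'SSISFileProxy', @credential_name = N'SSISFileCred'
EXEC msdb.dbo.sp_grant_proxy_to_subsystem @proxy_name = N'SSISFileProxy', @subsystem_name = N'SSIS'

Then pick SSISFileProxy in the job step's "Run as" box and grant DOMAIN\SvcAccount write permission on the remote share.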
I need a stored procedure that can drop and recreate any table in a DB.
How could I find out all the information pertinent to a table, so I can generate the CREATE statement dynamically in the procedure, then drop and recreate the table?
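A toy sketch of the dynamic-CREATE part, using INFORMATION_SCHEMA. It handles columns and lengths only, ignores identity, defaults, keys, decimal precision, and MAX types, and relies on the variable-concatenation trick, whose ordering is not formally guaranteed; real scripting is usually done via SMO or generated scripts:

DECLARE @cols NVARCHAR(4000), @sql NVARCHAR(4000)
SET @cols = N''

SELECT @cols = @cols + QUOTENAME(COLUMN_NAME) + N' ' + DATA_TYPE
       + CASE WHEN CHARACTER_MAXIMUM_LENGTH > 0
              THEN N'(' + CAST(CHARACTER_MAXIMUM_LENGTH AS NVARCHAR(10)) + N')'
              ELSE N'' END + N','
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = N'MyTable'        -- placeholder table name
ORDER BY ORDINAL_POSITION

SET @sql = N'CREATE TABLE dbo.MyTable (' + LEFT(@cols, LEN(@cols) - 1) + N')'
-- a DROP TABLE dbo.MyTable would go here, then:
EXEC sp_executesql @sql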
I want to drop a table and then recreate it. It's referenced by many tables and I don't want to drop all the constraints referencing it. Is there any feature like "switch off/on constraints" in MSSQL (6.5)?
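Constraint checking can be suspended, but note the catch: as far as I know, even a disabled foreign key still blocks dropping the table it references, so a true drop/recreate needs the referencing FKs dropped and re-added. The disable/enable sketch, per referencing table (verify availability on your 6.5 build):

ALTER TABLE ChildTable NOCHECK CONSTRAINT ALL   -- suspend constraint checking
ALTER TABLE ChildTable CHECK CONSTRAINT ALL     -- resume it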
Does anyone know how to recreate a database from a log file? Here is the situation:
I had a server go bad. The drive configuration was as follows
Server OS on a separate RAID controller - RAID Level 1
SQL Data on a separate RAID controller - RAID Level 5
Transaction Logs on a separate RAID controller - RAID Level 1
I lost the RAID 5 array and can't get it back and don't have a backup. I need to know if/how I can recreate the database from the log file. The log file has been set to grow and has never been purged so I am hoping that I can recreate the database from scratch. Is this possible?
Dear All, after searching continuously for a way to remove the suspect mode from the MSDB database, I've found a great article on the net, but I don't understand how to proceed. Please explain the steps to me....
1. Detach the damaged msdb. You can't just detach msdb because you're not allowed to detach system databases. However, you can if you start the server with trace flag 3608. I did this by shutting down the server, navigating to the directory 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn' and doing the following: start sqlservr.exe -c -T3608
2. Move or rename the damaged msdb files (msdbdata.mdf and msdblog.ldf in the 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data' directory)
3. Run the instmsdb.sql script in the 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Install' directory
4. Shut down and restart the server without the 3608 trace flag
Vinod Even you learn 1%, Learn it with 100% confidence.
I'm currently updating one of our web sites and have encountered a strange problem. The page that is giving me the problem is written in ASP and hits a SQL 2K DB. When I click submit, I have 4 procs that need to be executed. I always get a SQL Server timeout error. I opened up Profiler and traced the events. The page hangs on the last proc called, which is basically a select * where id = @id. If I run the same proc that hangs in Query Analyzer (literally copied from Profiler into Query Analyzer), the proc runs no problem, but when I'm hitting it from the web it hangs. The proc takes in 1 argument, which I see when I view the trace in Profiler. The id getting passed to the proc is correct. I don't think this is a web problem because the COM object that runs the last 2 procs works on other production applications with no problems. I was wondering if anyone has any suggestions. I'm not a DBA but would like to know what the problem is for future reference. I pasted the proc below just for the hell of it. FYI, I changed the name of the proc, table and columns for security purposes.

CREATE PROCEDURE [dbo].[spName]
    @TNum integer
AS
SELECT M.*
FROM tblName M
WHERE (M.[idColumn] = @TNum)
GO

Thanks in advance,
Bob
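One classic cause of exactly this symptom is a SET option mismatch: Query Analyzer connects with SET ARITHABORT ON while ADO connections typically run with it OFF, so the web app and QA can be using two different cached plans. A hedged way to reproduce the web behaviour from Query Analyzer (the parameter value is a placeholder):

SET ARITHABORT OFF
EXEC dbo.spName @TNum = 123

If that suddenly hangs in QA too, it points at a bad plan for that SET-option combination rather than anything on the web side.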
Hi, I have a stored procedure (MS SQL Server 2000) which operates on around 600,000 rows (SELECT, UPDATE, INSERT) and executes in 5 minutes. When I put it in a SQL transaction, it slows down to more than 5 hours (!!).

I have to admit that it is not a problem with data locks (besides that procedure, nothing else is executed on the db). It is not a problem with that exact procedure either; other procedures also slow down heavily when wrapped in a SQL transaction. Only very seldom does a stored procedure within a transaction execute in a time comparable to its copy without the transaction.

I guess it could be an MS SQL Server 2000 configuration/tuning problem. Any ideas?
Chris
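For clarity, the comparison being described is just this (procedure name is a placeholder), and DBCC OPENTRAN is a cheap way to confirm the wrapped run is the transaction left open:

EXEC dbo.BigProc              -- roughly 5 minutes on its own

BEGIN TRANSACTION
EXEC dbo.BigProc              -- 5+ hours when wrapped
COMMIT TRANSACTION

DBCC OPENTRAN                 -- shows the oldest active transaction in the current database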
I have a problem with a report set up to execute a snapshot and subscription on day 1 of each month. On July 1, it ran and was emailed to the user. On 8/1 it created the snapshot but did not send it to the user because of a user error in the subscription, so it was sent manually. The parameter issues were corrected, and on 9/1 the snapshot executed but the subscription did not send; in the subscription window, the last good email of a snapshot was sent 7/1. So again, 9/1 was sent manually. After fixing the parameters, shouldn't this have emailed the snapshot correctly on 9/1?
Do I need to delete this subscription and create a new one to get it to send on 10/1 now?
any suggestions?
The setup is now verified to be identical to other reports which created a snapshot and emailed successfully. The only difference is the original problem: the 7/1 subscription is the only one you see when you look at the "subscription tab" for this report in Reporting Services:
Description:Send e-mail to name.one@plife.com, name.two@plife.com
Trigger:SnapshotUpdated
Lastrun: 7/1/2006 7:00 AM
Mail Sent to:Mail sent to name.one@plife.com, name.two@plife.com
Snapshots successful
5/1/2006 7:00:02 AM
6/1/2006 7:00:03 AM
6/1/2006 8:26:31 AM
7/1/2006 7:00:07 AM
8/1/2006 7:00:24 AM
9/1/2006 7:00:19 AM
Logfile for "Ops Specialists/1035 Exchange Issue - Details (200407-R01)" report Portion of log file: ReportingServicesService!dbpolling!f!9/1/2006-07:00:04:: EventPolling processing item e869f577-b189-4e6a-a9be-37b61d737340 ReportingServicesService!dbpolling!13!9/1/2006-07:00:04:: EventPolling processing item 2b275311-0ee4-4cef-972d-86de48f4f360 ReportingServicesService!dbpolling!c!9/1/2006-07:00:04:: EventPolling processing item 3df3960a-ee28-4d5e-a46e-326ab5a0060d ReportingServicesService!library!c!9/1/2006-07:00:04:: Schedule e11a00d3-56b6-46b3-b2d3-9e3156805be8 executed at 9/1/2006 7:00:04 AM. ReportingServicesService!library!13!9/1/2006-07:00:04:: Schedule 177e9473-2c81-4152-a89f-137d25a556e1 executed at 9/1/2006 7:00:04 AM. ReportingServicesService!library!4!9/1/2006-07:00:04:: Schedule 89071f26-435c-4cc7-bd9a-23907ef32b5c executed at 9/1/2006 7:00:04 AM. ReportingServicesService!library!f!9/1/2006-07:00:04:: Schedule 3720d20c-460e-4549-9b57-c3cc97919f5f executed at 9/1/2006 7:00:04 AM. ReportingServicesService!schedule!13!9/1/2006-07:00:04:: Updating report execution snapshot for report /Client Services/Ops Specialists/1035 Exchange Issue - Details (200407-R01) ReportingServicesService!schedule!c!9/1/2006-07:00:04:: Updating report execution snapshot for report /Client Services/Ops Specialists/1035 Exchange Bridged - Details (200407-R01) ReportingServicesService!schedule!f!9/1/2006-07:00:04:: Updating report execution snapshot for report /Client Services/Ops Specialists/Policy Counts (200401-R01) ReportingServicesService!schedule!4!9/1/2006-07:00:04:: Updating report execution snapshot for report /Client Services/Inforce Admin/CAAB226 (200501-R03) ReportingServicesService!dbpolling!d!9/1/2006-07:00:08:: EventPolling processing 4 more items. 8 Total items in internal queue.
I want to create a clean copy of my DB now that it is done, so it can be moved to another server. It has some sample data in some tables that I would like to keep, and some in other tables that I don't. How can I do this?
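One hedged approach: script the schema out to the new server, then copy data only for the tables you want to keep (names are placeholders, and this assumes both databases are reachable from one instance):

INSERT INTO NewDB.dbo.KeepTable
SELECT * FROM OldDB.dbo.KeepTable
-- tables with identity columns need SET IDENTITY_INSERT NewDB.dbo.KeepTable ON
-- around the insert, plus an explicit column list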
Due to a server move of our SQL databases, something 'strange' happened: one of the login accounts disappeared.
When I try to recreate it, SQL gives me back an error stating the account already exists. I read something about login accounts becoming orphaned but don't know how to solve it. The login account is a Windows (AD) account and not a SQL account (do both require different methods of fixing?). How can I recreate just this single account?
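A hedged sketch of the usual repair steps (SQL 2000 era procs; the account and database names are placeholders). Windows users map by SID, so re-granting the login often re-links automatically; if the database user is the thing reporting "already exists", dropping and re-adding it is the standard route:

EXEC sp_grantlogin 'DOMAIN\TheUser'      -- re-add the Windows login at the server level

USE MyDatabase
EXEC sp_change_users_login 'Report'      -- lists orphaned users (covers SQL logins only)

-- for a Windows account, drop the orphaned database user and re-grant access:
EXEC sp_revokedbaccess 'DOMAIN\TheUser'
EXEC sp_grantdbaccess 'DOMAIN\TheUser'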
I am trying to recreate a MySQL database on a new domain. The original database was exported into about 40 .SQL text files. I am importing these .SQL text files into a new database using phpMyAdmin. When I attempt to import the first of these files, I get a "fatal error": it says the upload time has exceeded the 300-second limit. How do I set up the server so there is no upload time limit? Or how would you approach a solution to this problem? There are about 40 of these text files and each file varies in size between 5 and 15 MB. Is it possible to split individual files in half or into 3 parts and import that way? Any help would be greatly appreciated. Thanks!!
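A hedged sketch of the usual workaround: import from the shell, which bypasses the browser's upload and time limits entirely (user, host, database, and file names are placeholders):

mysql -u myuser -p -h localhost mydatabase < part01.sql

Alternatively, raise the PHP limits phpMyAdmin runs under in php.ini: max_execution_time (0 means no time limit), upload_max_filesize, and post_max_size.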
Can anyone help with how to recreate a pull subscription replication job after I had to recreate the msdb database when it was marked as suspect? I still have the subscription but cannot seem to recreate a pull replication job.