Hi,
I created a stored procedure that runs as a service in SQL Server for as long as the server is up.
It queries a table, works on the rows one by one, and after each row is done, deletes it from the table.
If there is no data, it goes to 'sleep' with WAITFOR DELAY '00:00:03'.
Does anyone know whether that WAITFOR command releases the CPU, or whether it just counts down and keeps consuming CPU?
Eyal.
Env: SQL Server 2000

It actually waited for 50 seconds instead of 5. I tried WAITFOR DELAY '000:00:005' and WAITFOR DELAY '000:00:5' respectively, and got the same behavior. Bug or ? TIA
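Going back to the original question, here is a minimal sketch of the polling pattern described there (the table and column names are hypothetical). WAITFOR DELAY suspends the session on a timer rather than spinning, so the loop should idle without burning CPU while the queue is empty:

DECLARE @id int
WHILE 1 = 1
BEGIN
    SET @id = NULL   -- reset so an empty table is detected each pass
    SELECT TOP 1 @id = ID FROM WorkQueue ORDER BY ID

    IF @id IS NULL
        WAITFOR DELAY '00:00:03'   -- session is suspended here, not busy-waiting
    ELSE
    BEGIN
        -- ... per-row work goes here ...
        DELETE FROM WorkQueue WHERE ID = @id
    END
END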
This bit of SQL script can be used to insert a pause of one hundredth of a second in a trigger, stored procedure or SQL script. This can be useful when MSSQL's built-in WAITFOR DELAY function is just too long (its minimum increment is a full second!).
declare @later datetime
declare @now datetime
set @later = current_timestamp + '00:00:00.01'  -- datetime arithmetic; resolution is roughly 3 ms
set @now = current_timestamp
while @now < @later
begin
    set @now = current_timestamp  -- note: this is a busy-wait, so it does consume CPU
end
I'm sure it could be further tweaked to become a function... And heck, maybe one of the gurus will tell me that there's something I don't know about WAITFOR or whatnot. Feedback appreciated!
-- Doug Castell GoldMine Guru www.castellcomputers.com
I have an issue where I am trying to copy between 10-100 TIFF images to a directory, use the command line to combine them (I have a command line driven utility to accomplish this), copy the combined TIFF to another directory, and move to another set of TIFF images.
I can't figure out how to make SQL wait for xp_cmdshell to return NULL and then move through the loop. Is there a way to use WAITFOR to look at what is returned and then move on?
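For what it's worth, xp_cmdshell runs synchronously, so control only returns to the batch after the command finishes. One way to inspect what it returned is to capture the output into a table; a minimal sketch (the combine.exe path and arguments are hypothetical):

CREATE TABLE #out (line varchar(4000) NULL)

INSERT INTO #out (line)
EXEC master..xp_cmdshell 'C:\tools\combine.exe C:\work\*.tif C:\out\combined.tif'

-- a lone NULL row means the command produced no output at all
IF NOT EXISTS (SELECT 1 FROM #out WHERE line IS NOT NULL)
    PRINT 'Command completed with no output; safe to continue the loop.'

DROP TABLE #out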
The following code does work, but when the stored procedure is called it takes about 10 seconds for it to load. I am not referring to the 30 minute wait before the name is deleted.
Anyone have any ideas why the call would be so slow? Thanks.
SERVICE CODE

conPubs = New SqlConnection("Server=blah blah blah")
cmdCommand = New SqlCommand("DeleteName", conPubs)
cmdCommand.CommandType = CommandType.StoredProcedure
cmdCommand.Parameters.Add("@Name", Name)
cmdCommand.Parameters.Item(0).Value = Name
conPubs.Open()
cmdCommand.ExecuteNonQuery()
conPubs.Close()
STORED PROCEDURE CODE

Create Procedure [DeleteName]
    @Name varchar(30)
As
Begin
    Set NoCount On
    WaitFor Delay '00:30:00'   -- single quotes; double quotes only work with QUOTED_IDENTIFIER OFF
    Delete From NameTable Where Name = @Name
End
GO
I'm implementing a transactional receive from an SSB queue. I used System.Transactions.TransactionScope to simplify the transaction management code. When I use the "WAITFOR (RECEIVE ...), TIMEOUT 100" statement, I get a TransactionAbortedException when I try to open the second connection under the same TransactionScope block.
Note that when I remove the WAITFOR and just do a RECEIVE, everything works fine and the transaction state is guaranteed.
Does anybody know if this is a known issue with WAITFOR? Is it supposed to abort the transaction when the command completes?
See code below and stack traces I got.
Thanks,
Noam Helfman
Repro code:
using (TransactionScope txnScope = new TransactionScope(TransactionScopeOption.RequiresNew))
{
    using (SqlConnection conn1 = new SqlConnection(TestConnectionString))
    {
        conn1.Open();

        using (SqlCommand command = conn1.CreateCommand())
        {
            command.CommandText = "WAITFOR (RECEIVE TOP(1) message_body FROM TestReceiveQueue), TIMEOUT 100"; // <-- this causes TransactionAbortedException below
            //command.CommandText = "RECEIVE TOP(1) message_body FROM TestReceiveQueue"; // <-- this works
            command.CommandType = CommandType.Text;

            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    byte[] message = (byte[])reader["message_body"];
                }
            }
        }

        using (SqlConnection conn2 = new SqlConnection(TestConnectionString))
        {
            conn2.Open(); // <-- TransactionAbortedException here

            using (SqlCommand command = conn2.CreateCommand())
            {
                command.CommandText = string.Format("INSERT INTO tbl1 VALUES (1)");
                command.CommandType = CommandType.Text;
                command.ExecuteNonQuery();
            }
        }

        txnScope.Complete();
    }
}
...
Schema:
CREATE QUEUE TestReceiveQueue WITH STATUS=ON, RETENTION=OFF;
CREATE TABLE tbl1 (col1 int not null);
Stack traces:
System.Data.SqlClient.SqlException: Cannot promote the transaction to a distributed transaction because there is an active save point in this transaction.
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
   at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
   at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
   at System.Data.SqlClient.TdsParser.TdsExecuteTransactionManagerRequest(Byte[] buffer, TransactionManagerRequestType request, String transactionName, TransactionManagerIsolationLevel isoLevel, Int32 timeout, SqlInternalTransaction transaction, TdsParserStateObject stateObj)
   at System.Data.SqlClient.SqlInternalConnectionTds.ExecuteTransactionYukon(TransactionRequest transactionRequest, String transactionName, IsolationLevel iso, SqlInternalTransaction internalTransaction)
   at System.Data.SqlClient.SqlInternalConnectionTds.ExecuteTransaction(TransactionRequest transactionRequest, String name, IsolationLevel iso, SqlInternalTransaction internalTransaction)
   at System.Data.SqlClient.SqlDelegatedTransaction.Promote()

System.Transactions.TransactionPromotionException: Failure while attempting to promote transaction.
   at System.Data.SqlClient.SqlDelegatedTransaction.Promote()
   at System.Transactions.TransactionStatePSPEOperation.PSPEPromote(InternalTransaction tx)
   at System.Transactions.TransactionStateDelegatedBase.EnterState(InternalTransaction tx)

System.Transactions.TransactionAbortedException: The transaction has aborted.
   at System.Transactions.TransactionStateAborted.CheckForFinishedTransaction(InternalTransaction tx)
   at System.Transactions.EnlistableStates.Promote(InternalTransaction tx)
   at System.Transactions.Transaction.Promote()
   at System.Transactions.TransactionInterop.ConvertToOletxTransaction(Transaction transaction)
   at System.Transactions.TransactionInterop.GetExportCookie(Transaction transaction, Byte[] whereabouts)
   at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx)
   at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx)
   at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction)
   at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction)
   at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
   at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
   at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
   at System.Data.SqlClient.SqlConnection.Open()
   at SqlServiceBrokerQueueTest.WAITFORBug() in SqlServiceBrokerQueueTest.cs:line 375
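Not an authoritative fix, but the first stack trace points at the cause: WAITFOR (RECEIVE) leaves an active save point in the transaction, and a transaction with an active save point cannot be promoted to a distributed transaction when the second connection enlists. One way to sidestep promotion entirely is to keep the receive and the insert on a single connection in one T-SQL batch; a sketch against the repro's queue and table:

BEGIN TRANSACTION;

DECLARE @body varbinary(max);

-- wait up to 100 ms for a message, as in the repro
WAITFOR (
    RECEIVE TOP (1) @body = message_body
    FROM TestReceiveQueue
), TIMEOUT 100;

IF @body IS NOT NULL
    INSERT INTO tbl1 VALUES (1);

COMMIT TRANSACTION;
-- conversation cleanup (END CONVERSATION) omitted for brevity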
We're trying to troubleshoot a timeout issue, so it was requested that I add a WAITFOR statement (1 hour) in a certain stored proc our application uses. I added it and confirmed that it was working by executing the stored proc in SSMS.
However, when our application (using Entity Framework) calls the stored proc, the WAITFOR statement is ignored.
I am parsing a file where, along the flow, I use a conditional split. One path of the split feeds the primary table (which has an IDENTITY column). The rest of the paths feed tables with a FOREIGN KEY to the primary table.
It seems that SSIS is trying to insert the rows at the same time (which makes sense), but this is causing a problem with the secondary tables and their FK constraints, since the primary table rows are not yet written.
Is there a way to delay the secondary tables until the primary table is done?
(I guess one way is to run through the file twice... once for the primary table and another for the rest but that seems wasteful to me...)
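One common workaround (a sketch only, with hypothetical table names) is to point the secondary paths at a staging table that has no FK constraint, then move the rows across in an Execute SQL task that runs after the data flow completes, by which time the primary rows exist:

-- staging table mirrors the child table but carries no FK constraint
CREATE TABLE ChildStaging (ParentKey int NOT NULL, Payload varchar(100) NULL);

-- run in an Execute SQL task after the data flow has finished
INSERT INTO Child (ParentKey, Payload)
SELECT s.ParentKey, s.Payload
FROM ChildStaging s
JOIN Parent p ON p.ParentKey = s.ParentKey;  -- parent rows are committed by now

TRUNCATE TABLE ChildStaging;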
I am using SQL 2005 SP1 and merge replication on a database. One of the tables is used for an audit trail and has a dynamic filter applied so that it doesn't replicate every audit trail record to every subscriber.
Our sp's tend to insert records into the audit trail table when someone inserts a new product (for example). The problem is that just recently the insert of new products has been taking >2 seconds, which is relatively slow compared to how it performed 2 months ago.
Using Profiler I have found that it is the insert into the audit trail table that is taking all the time, and that it is taking so long because of something replication is doing. From Profiler I have found that the following statement is the culprit. This is something that replication is doing, but why it takes so long I don't know:
select count(*)
from [dbo].[MSmerge_repl_view_000CC979122E4C88AF27FE08CDCC84EB_B5F96F71937D4D9A949DEECFE540D0C4] [AUDIT_TRAIL_DETAIL] with (rowlock)
where [RowGUID] in (
    select [AUDIT_TRAIL_DETAIL].[RowGUID]
    from inserted [AUDIT_TRAIL_HEADER],
         [dbo].[MSmerge_repl_view_000CC979122E4C88AF27FE08CDCC84EB_B5F96F71937D4D9A949DEECFE540D0C4] [AUDIT_TRAIL_DETAIL] with (rowlock)
    where (AUDIT_TRAIL_HEADER.ID = AUDIT_TRAIL_DETAIL.FKAuditTrailHeaderID)
)
The AUDIT_TRAIL_DETAIL table currently has 1.1 million rows in it.
Can anyone give me any clues as to what I should do to help improve the performance once again? Should I stop filtering on this table?
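One thing that might be worth checking (a suggestion, not a confirmed fix): the replication query joins the detail view to inserted on FKAuditTrailHeaderID and then filters on RowGUID, so a supporting index on the underlying detail table could cut the scan. A sketch:

-- hypothetical index; verify against the actual execution plan first
CREATE NONCLUSTERED INDEX IX_AuditTrailDetail_HeaderID
ON dbo.AUDIT_TRAIL_DETAIL (FKAuditTrailHeaderID)
INCLUDE (RowGUID);   -- INCLUDE requires SQL Server 2005, which the poster is on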
It works well, but sometimes I get a long delay in connecting and reading data. Is there any way to solve the problem?
For more information: when it is working well I can connect to the database and get all the information I need in 0.1 sec; when it misbehaves, the same action may take 20 sec.
I've been following the newsgroups, and the consensus had seemed to be that 7.0 would be released around November. However, I spoke to a Microsoft partner last week who told me that the release date would be sometime in the second quarter of 1999. Does anyone know whether this is true?
Hi, when I call a sub-package from the parent package with ExecuteOutOfProcess=True, I get about a 5-10 sec delay before the sub-package starts running. (This is a big delay for me, as the sub-package is inside a Foreach loop.)
The sub-package itself is very simple and contains just one Script Task with a small script in it. Even if this delay is for validation, I still can't understand why it takes 5-10 sec. Cheers
I've recently implemented a CLR stored procedure on a couple of databases, but I've noticed that the first time I use the stored proc in a given database I get a delay of around 10-15 seconds. Subsequent calls in new connection sessions execute immediately. I'm assuming the delay is due to the loading of the CLR and the assemblies needed. The assembly being loaded by the CLR is not signed. Has anyone else seen this, or does anyone know a way round it? It is impacting the user experience of our application.
Many thanks for any help you can provide in advance
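If the delay really is first-load assembly/AppDomain cost, one workaround (a sketch, assuming a proc named dbo.MyClrProc in a database named MyDatabase) is to warm it up right after SQL Server starts, using a marked startup procedure so no user pays the first-call price:

-- startup procedures must be created in master
USE master;
GO
CREATE PROCEDURE dbo.WarmUpClr
AS
BEGIN
    -- hypothetical call into the user database to force the assembly to load
    EXEC MyDatabase.dbo.MyClrProc;
END
GO
-- run automatically each time the SQL Server service starts
EXEC sp_procoption 'dbo.WarmUpClr', 'startup', 'on';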
I have researched this extensively and have seen a few similar posts on this site, and on the rest of the Internet, but have found nothing which solves my problem. This is absolutely crippling my productivity at work and I would greatly appreciate any advice anyone can suggest.
I have several packages which I need to run at quite regular intervals, and initially they worked absolutely perfectly. They ran quickly and efficiently. As time has passed though, seemingly at random, the pre execute phase of some of these has started to take unreasonable amounts of time. There have been times where I have left it on overnight and found the pre execute phase still at 0% the next morning. This is for queries which shouldn't take more than about 15 minutes to run.
However, seemingly at random, every so often one of the packages works absolutely perfectly. For example this afternoon one of these packages completed in 7 minutes, whereas this morning I had left it pre executing for 2.5 hours until I got tired of it and hit the stop button.
What affects the pre-execute phase and what sort of thing would cause this kind of behaviour? I work for a pretty large charity and I have had most of the IT team look into this at some point, and none of them can see any server activity which would explain it.
As an example of a package I am running which displays this behaviour, all that happens is an SQL statement runs, writes the results to a csv file and then emails me to say that it's finished. This is the SQL it uses:
select
REConstituentID as ID,
convert(varchar(10),DeceasedTickedDate,103) as dDate,
coalesce(Title1,'') as Title,
coalesce(FirstName,'') as FirstName,
Surname,
Gender,
AddressLine1,
AddressLine2,
AddressLine3,
AddressLine4,
City,
County,
PostCode,
coalesce(HomePhone,'') as Phone,
coalesce(convert(varchar(10),DeceasedDate,103),'') as DeceasedDate,
coalesce(
convert(varchar(10),(
select min(Date) from dbo.WH_ConstituentAttributes ca where ca.ConstituentID = WH_Constituent.ID and Type = 'Death Notification Date' and (left(Value,8) not in ('Experian', 'Ebiquita') or Value is null)
),103),
convert(varchar(10),(
select top 1 date from dbo.WH_Action where ConstituentID = WH_Constituent.ID and type = 'DM Remark' and ID in (select ActionID from dbo.WH_ActionAttribute where value = 'Deceased notification') order by date asc
),103),
'') as 'Date Notified of Death',
case
when exists (select top 1 null from dbo.WH_ConstituentAttributes ca where ca.ConstituentID = WH_Constituent.ID and Type = 'Death Notification Date' and Value = 'Golden Charter: Funeral Plan Confirmed')
then 1
when exists (select top 1 null from dbo.WH_Action where ConstituentID = WH_Constituent.ID and Category = 'Phone Call' and type = 'DM Remark' and ID in (select ActionID from dbo.WH_ActionAttribute where value = 'Deceased notification'))
then 3
else 5
end as 'Notification Method Code'
from
WH_Constituent
where
Deceased = 'Yes'
and AddressLine1 is not null
and PostCode is not null
and Country = 'United Kingdom'
and DeceasedTickedDate >= {d '2006-09-20'}
and not (
exists (
select top 1 null
from dbo.WH_ConstituentAttributes ca
where
ca.ConstituentID = WH_Constituent.ID
and Type = 'Death Notification Date'
and left(Value,8) in ('Experian', 'Ebiquita')
)
and not exists (
select top 1 null
from dbo.WH_ConstituentAttributes ca
where
ca.ConstituentID = WH_Constituent.ID
and Type = 'Death Notification Date'
and coalesce(left(Value,8),'') not in ('Experian', 'Ebiquita')
)
and not exists (
select top 1 null
from dbo.WH_Action
where
ConstituentID = WH_Constituent.ID
and type = 'DM Remark'
and ID in (
select ActionID
from dbo.WH_ActionAttribute
where value = 'Deceased notification'
)
)
)
There are no lookups, fuzzy groupings or anything else even slightly complicated.
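A hedged thought on where to look: in the pre-execute phase the OLE DB source actually opens the rowset, so the SELECT above is already running on the server, and a pre-execute stuck at 0% usually means the query is still compiling or executing, possibly with a much worse plan than the one SSMS gets. While a package is hung, something like the following (SQL 2005 DMVs) shows what the server is doing on its behalf:

-- inspect active requests while the package sits in pre-execute
SELECT r.session_id, r.status, r.command, r.wait_type, r.wait_time, t.text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;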
Hi, when I try to run my package with dtexec.exe it starts fine, but during processing the package calls another sub-package, and at that point there is a big delay before the sub-package starts processing. The sub-package has been set up with the ExecuteOutOfProcess parameter set to true. Any idea what might be the problem? I should mention that even when I run this within Visual Studio I still get a big delay.
Can someone advise if there is a delay in data being written to the database following a tableadapter.update(datatable) command?
I save transactions as described above, and then a ListView is updated to reflect them.
As I work through, all is OK and the transactions appear in the view.
I then run a backup through my app, using a backup object to do this, and it reports all OK.
I then close the app and re-open, and as I am in debug, the database is empty.
I perform a restore through my app, using a restore object and selecting the backup file I created previously, which reports all OK.
The restore procedure calls Application.Restart to allow the app to initialise to the restored data.
The problem is that quite a bit of my data is missing from the restore, as if the last block I did prior to the backup never actually made it to the database.
I also remember noting that at times when the update method is performed, the actual timestamp on the physical database is not updated... until I close the app and return to the designer?
So does this mean that prior to performing a backup I have to somehow force the app to ensure it has written all changes to the database?
I developed an application that uses SQL Server 2005 Express Edition.
I put all my queries in stored procedures. Everything works perfectly, but sometimes I get a long delay when running queries, and after some minutes it becomes regular again. I couldn't find any relation between the delay and the time of work; it seems to happen by chance. I also set the query timeout to 30 sec, but sometimes it took more than a minute.
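One hedged guess for intermittent first-query delays on Express: databases created on Express default to AUTO_CLOSE ON, so the database shuts down when the last connection closes and has to spin up again on the next one. Worth checking (MyDb is a placeholder name):

-- check the current setting
SELECT name, is_auto_close_on FROM sys.databases WHERE name = 'MyDb';

-- keep the database open between connections
ALTER DATABASE MyDb SET AUTO_CLOSE OFF;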
I have a package that will be using an FTP task to send a file (file1) to an ftp site.
Once the file (file1) is received at the ftp site, a corresponding file (file2) will be placed in the ftp directory, and then I will need to ftp the corresponding file (file2) back.
However, there will be a time delay between the time that file1 is received and file2 is put in the ftp directory. Maybe 5 to 30 minutes.
What is the best way to handle this scenario? I want to ftp file2 back as soon as it's available. However, I don't know when that is, and so I was wondering if there is a way to dynamically check for the availability of the file, so that I don't have to "hard code" a wait time into my process.
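If the check ends up in T-SQL rather than in the package, one sketch uses the undocumented xp_fileexist together with WAITFOR DELAY to poll until file2 appears (the path is hypothetical, and since xp_fileexist is undocumented its behavior can change between versions; a For Loop container with a Script Task is the package-native equivalent):

DECLARE @exists int
SET @exists = 0

WHILE @exists = 0
BEGIN
    EXEC master.dbo.xp_fileexist 'C:\ftp\inbound\file2.dat', @exists OUTPUT
    IF @exists = 0
        WAITFOR DELAY '00:01:00'   -- poll once a minute until the file shows up
END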
Hi! I plan to prepare a lot of DMX statements, but can I control when the data mining model is actually calculated?
Let's say I do the following:
1. CREATE MINING STRUCTURE ... - I expect that this execution will take "no" time.
2. CREATE MINING MODEL ... - I expect the same.
3. INSERT INTO <miningmodel> ... SELECT ... - I expect that this will actually take some time, right?
I really only want to prepare the definitions of the mining structures/models and tell the model what data (INSERT INTO) it should use.
Is it possible to control the actual processing, e.g. the execution of the INSERT statement (= execution of the time-consuming SELECT part)?
I have a batch job that only needs to define things; calculation of the mining model is not needed until someone really wants to see it (and it is OK that this person has to wait for processing).
BOL writes:

>> Processing Options
The following table describes the processing methods that are available in Analysis Services, and identifies the objects for which each method is supported.
Process Default
Detects the process state of an object, and performs processing necessary to deliver unprocessed or partially processed objects to a fully processed state. This processing option is supported for cubes, databases, dimensions, measure groups, mining models, mining structures, and partitions. ... <<

But how do I control this behaviour? Best regards
I have a multitier app running 24x7. All runs OK until the server takes too long to respond while I'm trying to add records; then I start to receive errors from MSDTC and the transaction is aborted. How can I resolve this? I'm not sure the cause is a slow response from the server, but I have noticed that at some moments the server is busier than when it's running OK. Just for clarity: my app has a maximum of 5 threads, each of which creates transactional objects to process the new entry.
Need your advice on this. I have a DTS package which checks 2 dates and executes tasks when the dates do not match. The problem I am facing is that I cannot make the next step start only after the previous step has completed. When the DTS package is executed, all steps complete at almost the same time. See the DTS package below / attached.
In the diagram, I have labelled 5 steps A ~ E; each step needs the finished product from the previous step to produce a correct result in its own step. I couldn't schedule each step to run at a different time because the DTS kicks off based on a file that comes in, and each step doesn't have a fixed processing time.
I have tried using On Success and On Completion, and both options start the next step immediately and do not wait for the job to complete or succeed. I guess this is because I have handed the work off to an external process via the command. Is there a way to introduce some delay between each task?
Please advise. Thank you.
Each of the steps has something like the below (refreshing an Excel file with a macro built in); I cannot build all the macros into one file and run them from the main Excel file.

select @MainUpdate = Main_Update_CET from APMEAPV_Compare
select @TempUpdate = Temp_Update_CET from APMEAPV_Compare
--select @MainUpdate, @TempUpdate

if @MainUpdate <> @TempUpdate
begin
    DECLARE @commandK varchar(1000)
    SET @commandK = 'Start Excel.exe "D:\Daily_Status_Report_EDWH\EDWH_Runbook_BTS.xls"'
    exec master..xp_cmdshell @commandK, No_Output
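A hedged observation on why the steps don't wait: xp_cmdshell itself is synchronous, but the Windows 'Start' command launches Excel and returns immediately, so the shell (and therefore the DTS step) finishes while Excel is still working. Dropping 'Start' makes xp_cmdshell block until Excel exits; you will likely need the full path to Excel.exe (the path below is a guess), since without 'Start' the registry App Paths are not consulted:

-- xp_cmdshell now waits for Excel to close before returning
SET @commandK = '"C:\Program Files\Microsoft Office\OFFICE11\Excel.exe" "D:\Daily_Status_Report_EDWH\EDWH_Runbook_BTS.xls"'
exec master..xp_cmdshell @commandK, No_Output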
Hi all,

I have recently changed the IP addresses on my MS SQL cluster (the new IP addresses belong to a different network). Both the physical nodes' IP addresses and the resources' IP addresses have been changed. Everything seems to work fine: the cluster is up and responding normally, and the resources fail over to another node correctly when needed. However, I have noticed that the time it takes for a resource to fail over to another node is much longer than in the past (before the IP address change), and that "Network Name" is the resource type causing the delay. Could you please advise if there is something that I overlooked and that needs to be changed/updated as well, so that groups fail over faster?

Thanks,
Aleu
When a workstation loses its connection to the server, it can leave an incomplete transaction, and SQL Server then removes the transaction. Could anyone guide me on how to set the delay before SQL Server does this? Thanks in advance.

John S.
I am currently experiencing a 30 second delay when starting an SSIS package from a query window or stored procedure in SQL 2005 Management Studio, using xp_cmdshell and dtexec.
When I run the package in BI Dev, the execution results state an elapsed time of 4.82 sec; at a command prompt using dtexec the elapsed time is 3.48 sec; from MStudio the elapsed time is 33.86 sec. This test was run using the same configuration and databases. For the MStudio run, if I look at the DTS log file I'm creating, or at the PC Application log, it states the package doesn't actually start until 31 sec after the execute button is pressed. I've tried executing the package as both a SQL package and a file package without any difference in elapsed times. I have also set DelayValidation = True for every Task, ConnectionManager and the package itself.
When I look at the package log, one difference I see is that Management Studio executes using 'NT AUTHORITY\SYSTEM', while BI Dev and the cmd prompt use the local user '[Server]\Administrator', which in this case is the administrator. From this I have to believe it is some kind of user rights problem. I think SQL or the OS is waiting for something, and after it times out at 30 sec, it allows the package to run. If this is the case, I'm not sure what it might be or how to find it.
I also tried making an xp_cmdshell_proxy_account with admin rights but this didn't seem to work either. I've included the query code below. Any ideas, help or solutions are greatly appreciated.
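For reference, the proxy account setup mentioned above looks like the sketch below (domain, account, and password are placeholders). Note that it only affects non-sysadmin callers: sysadmin logins always run xp_cmdshell under the SQL Server service account, which matches the 'NT AUTHORITY\SYSTEM' seen in the Management Studio run, so the proxy account would not change that case.

USE master;
GO
-- create the proxy credential used by non-sysadmin callers of xp_cmdshell
EXEC sp_xp_cmdshell_proxy_account 'MYDOMAIN\SqlCmdShellProxy', 'StrongPassword1!';

-- allow a specific non-sysadmin login to call xp_cmdshell under that proxy
GRANT EXECUTE ON xp_cmdshell TO [MYDOMAIN\SomeUser];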
I have a couple of tables set up with column level tracking and using the "Subscriber Always Wins" conflict resolver.
I am still trying to work out why there is a conflict as the only updates to these rows happen at the subscriber.
What I am finding is that a conflict occurs and there appears to be a delay between the conflict being recognised and then resolved.
The reason I know there is a delay is because this is playing havoc with a data extract that relies on the updates from the subscriber.
For instance, I have a job on which the status is changed. That status change triggers an extract via BizTalk to another system. BizTalk, via an SQL adapter extracts the data from the server and I find that some of it is missing in the extract.
I go and look into the database at the raw data and find that the data is in fact there.
I have certain triggers running which set up export flags in the database to say this data should be exported. This works perfectly when there are no conflicts.
In each case where data has not been extracted, a conflict occurs.
Is there any way to guarantee that the conflict will be resolved at the same time as the synchronisation is complete - i.e. within the same transaction?
I thought replication was supposed to be a single transaction - you get it all or you get nothing.
I'm interested to hear from anyone who has experienced this behaviour.
I have been posting to the Data Presentation Controls forum for about a month regarding a problem I've been dealing with.
http://forums.asp.net/thread/1223055.aspx
What it boils down to is that on a button click event, I was updating some records, then re-executing a SELECT statement to get the records back out and rebind my DataGrids. This was happening too quickly and the data was not being updated in time before the SELECT was executed. So my grids would still display "old" data.
How do I get SQL Server to commit the UPDATE before my C# code continues?
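For what it's worth, ExecuteNonQuery does not return until SQL Server has completed the UPDATE (assuming no explicit transaction is left open), so a stale read usually points at a second cached connection or at updating and selecting through different code paths. One way to rule out ordering entirely is to do both in one round trip; a sketch with hypothetical object names:

-- hypothetical procedure: the SELECT cannot run before the UPDATE completes
CREATE PROCEDURE dbo.UpdateAndFetchRecords
    @Status int
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE dbo.Records SET Status = @Status WHERE Status <> @Status;

    SELECT * FROM dbo.Records;   -- rebind the grids from this result set
END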
I am implementing the first Sql Service Broker use case in our Sql 2005 application, so forgive me if this is an obvious question. We will be using a service & queue to handle requests to download a batch of documents from a third party service. The basic workflow is:
A. Web client requests a document download.
B. Web application begins a conversation and sends a message requesting the download.
C. DownloadService stored procedure retrieves the message and attempts the document download.
D. If the download fails, retry every 30 minutes up to a max of 5 attempts.
Other than (D), this is perfectly straightforward. I will describe the design I've come up with to retry after 30 minutes below. I would appreciate any suggestions on better ways to handle this within SSB.
My solution relies on a small state table:

create table MessageState (
    ConversationHandle uniqueidentifier NOT NULL,
    RetryCount int NOT NULL,
    MessageBody varbinary(max) NOT NULL
)

The web client begins a conversation and submits a message. It does not end the conversation; rather, the InitiatorQueue has an activation procedure to end its side of the conversation after the DownloadService ends them, as described here: http://blogs.msdn.com/remusrusanu/archive/2006/04/06/570578.aspx.
The DownloadService logic, in pseudo-code:
1. Use a typical loop to get conversation groups and receive messages.
2. If message type = DownloadRequest:
   2a. Call a stored procedure to attempt the download.
   2b. If successful, end conversation.
   2c. If not successful, add an entry to MessageState and BEGIN CONVERSATION TIMER with a 30 minute timeout.
3. If message type = 'http://schemas.Microsoft.com/SQL/ServiceBroker/DialogTimer':
   3a. Look up the conversation handle in MessageState.
   3b. Recall the original MessageBody.
   3c. Call a stored procedure to attempt the download.
   3d. If successful, delete the MessageState row and end conversation.
   3e. If not successful and no retries left, delete the MessageState row and end conversation with error.
   3f. If not successful and retries remaining, increment MessageState.RetryCount and BEGIN CONVERSATION TIMER with a 30 minute timeout.
The main downside I see to this solution is a certain lack of transparency. After step (2c) or (3f), "select * from DownloadQueue" will no longer show the pending request, although you can find it in sys.conversation_endpoints.
Is there a better idiom to solve this type of problem in SSB?
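For concreteness, step 2c of the design above reduces to something like this sketch (assuming @handle and @body were populated by the RECEIVE in the activation loop):

-- record the failed attempt so the dialog-timer message can retry it later
INSERT INTO MessageState (ConversationHandle, RetryCount, MessageBody)
VALUES (@handle, 1, @body);

-- fires a DialogTimer message on this conversation in 30 minutes
BEGIN CONVERSATION TIMER (@handle) TIMEOUT = 1800;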