I have a package that will be using an FTP task to send a file (file1) to an ftp site.
Once file1 is received at the ftp site, a corresponding file (file2) will be placed in the ftp directory, and I will then need to ftp file2 back.
However, there will be a time delay, maybe 5 to 30 minutes, between the time that file1 is received and file2 is put in the ftp directory.
What is the best way to handle this scenario? I want to ftp file2 back as soon as it's available. However, I don't know when that is, and so I was wondering if there is a way to dynamically check for the availability of the file, so that I don't have to "hard code" a wait time into my process.
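One pattern that avoids a hard-coded wait is a polling loop that checks for the file and sleeps between attempts. This is a sketch only: it assumes the ftp directory is also reachable from the SQL Server as a UNC path (if it is only reachable over FTP, the same loop structure would live in a Script Task with an FTP client instead), and the path and timings are hypothetical:

-- Poll for file2 once a minute, for up to 30 minutes.
DECLARE @found int, @attempts int
SET @found = 0
SET @attempts = 0

WHILE @found = 0 AND @attempts < 30
BEGIN
    -- Undocumented but long-standing: sets @found to 1 if the file exists.
    EXEC master.dbo.xp_fileexist '\\ftpserver\ftpdir\file2.txt', @found OUTPUT
    IF @found = 0
    BEGIN
        WAITFOR DELAY '00:01:00'   -- sleep between checks without burning CPU
        SET @attempts = @attempts + 1
    END
END

IF @found = 1
    PRINT 'file2 is available - kick off the return FTP task'
ELSE
    PRINT 'timed out waiting for file2'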
I have an application that creates a new database during installation, and I've noticed some strange behavior. Once I've created the database, I am able to immediately create tables and populate lookup data, provided I remain connected to the server. If, however, I disconnect and attempt to reconnect immediately, I'll get an error saying that my login is invalid for the new database. I can get around this by having my code simply wait 5 seconds before attempting to reconnect, but I'm curious to see if anybody here can give an explanation for why this is happening. Here is a bit of pseudo code to explain what I'm seeing:

open new connection
create database
create tables
populate tables
close connection

// open new connection  /* can't do this yet, as it would break */
for (int a = 0; a < 5; a++)
{
    Thread.Sleep(2000);
    try
    {
        open new connection
        break;
    }
    catch
    {
        Debug("still waiting...");
    }
}

Running my version of this code, I'll see that "still waiting..." message go past 2-3 times before SQL Server wakes up and realizes that I'm allowed to connect to it. Anybody know why?

Thanks,
Jason Kester
Expat Software Consulting Services
http://www.expatsoftware.com/
Env: SQL Server 2000. It actually waited for 50 seconds instead of 5; I tried WAITFOR DELAY '000:00:005' and WAITFOR DELAY '000:00:5' respectively, and got the same behavior. Bug or ? TIA
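For what it's worth, WAITFOR DELAY expects a time string in the usual hh:mm:ss shape, and nonstandard segment widths like '000:00:005' seem to get parsed in surprising ways. A quick check with the conventional two-digit form, which in my experience waits the expected five seconds:

SELECT GETDATE() AS before_wait
WAITFOR DELAY '00:00:05'   -- standard hh:mm:ss form
SELECT GETDATE() AS after_wait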
I am parsing a file where along the flow I use a conditional split. One path of the split feeds the primary table (which has an IDENTITY column). The rest of the paths feed tables with a FOREIGN KEY to the primary table.
It seems that SSIS is trying to insert the rows at the same time (which makes sense) but this is causing a problem with the secondary tables and their FK constraint since the primary table is not yet written.
Is there a way to delay the secondary tables until the primary table is done?
(I guess one way is to run through the file twice... once for the primary table and another for the rest but that seems wasteful to me...)
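If running through the file twice is off the table, one workaround (a sketch only; the table and constraint names are hypothetical) is to disable the FK check for the duration of the load and re-enable it with validation afterwards:

-- Before the data flow: stop enforcing the FK on the child table.
ALTER TABLE dbo.SecondaryTable NOCHECK CONSTRAINT FK_Secondary_Primary

-- ... run the data flow that loads the parent and child paths ...

-- Afterwards: re-enable and recheck existing rows (WITH CHECK validates them).
ALTER TABLE dbo.SecondaryTable WITH CHECK CHECK CONSTRAINT FK_Secondary_Primary

The trade-off is that bad rows surface at the re-enable step rather than at insert time.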
I am using Sql 2005 SP1 and merge replication on a database. One of the tables is used for an audit trail and has a dynamic filter applied so that it doesn't replicate every audit trail record to every subscriber.
Our sp's tend to insert records into the audit trail table when someone inserts a new product (for example). The problem is that just recently the insert of new products has been taking >2 seconds, which is relatively slow compared to how it was 2 months ago.
Using profiler I have found that it is the insert into the audit trail table that is taking all the time, and this is taking a long time because of something replication is doing. From profiler I have found that the following statement is the culprit. This is something that replication is doing, but why it takes so long I don't know:
select count(*)
from [dbo].[MSmerge_repl_view_000CC979122E4C88AF27FE08CDCC84EB_B5F96F71937D4D9A949DEECFE540D0C4] [AUDIT_TRAIL_DETAIL] with (rowlock)
where [RowGUID] in
(
    select [AUDIT_TRAIL_DETAIL].[RowGUID]
    from inserted [AUDIT_TRAIL_HEADER],
         [dbo].[MSmerge_repl_view_000CC979122E4C88AF27FE08CDCC84EB_B5F96F71937D4D9A949DEECFE540D0C4] [AUDIT_TRAIL_DETAIL] with (rowlock)
    where (AUDIT_TRAIL_HEADER.ID = AUDIT_TRAIL_DETAIL.FKAuditTrailHeaderID)
)
The AUDIT_TRAIL_DETAIL table currently has 1.1 million rows in it.
Can anyone give me any clues as to what I should do to help improve the performance once again? Should I stop filtering on this table?
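One thing worth ruling out (an assumption, not a diagnosis): that replication statement joins on FKAuditTrailHeaderID and filters on RowGUID, and with 1.1 million rows a scan there on every insert would explain the slowdown. A sketch of a covering index, assuming the base table behind the replication view is the detail table:

CREATE NONCLUSTERED INDEX IX_AuditTrailDetail_FKHeader
ON dbo.AUDIT_TRAIL_DETAIL (FKAuditTrailHeaderID)
INCLUDE (RowGUID)   -- INCLUDE is available on SQL 2005

Comparing the statement's actual plan before and after would confirm whether this is the issue.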
It works well, but sometimes I get a long delay in connecting and reading data. Is there any way to solve the problem?
For more information: when it is working well, I can connect to the database and get all the information I need in 0.1 sec; when it is being slow, the same action may take 20 sec.
Hi, I created a stored procedure that runs as a service in SQL Server, as long as the server is up. It queries a table, does some work on the rows one by one, and after each row is done deletes it from the table. If there is no data, it goes to 'sleep': waitfor delay ('00:00:03').
Does anyone know whether that waitfor command releases the CPU, or whether it just counts and uses the CPU? Eyal.
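As far as I know, WAITFOR DELAY suspends the session rather than spinning, so it should not consume CPU while counting down. On SQL 2005 you can verify this from a second connection while the procedure is sleeping:

-- Run from another connection while the loop is inside its WAITFOR.
SELECT session_id, status, wait_type, cpu_time
FROM sys.dm_exec_requests
WHERE wait_type = 'WAITFOR'
-- The sleeping session shows status 'suspended' and its cpu_time stops growing.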
I've been following the newsgroups, and the consensus had seemed to be that 7.0 would be released around November. However, I spoke to a Microsoft partner last week who told me that the release date would be sometime in the second quarter of 1999. Does anyone know whether this is true or untrue?
This bit of SQL script can be used to insert a pause of one hundredth of a second in a trigger, stored procedure or SQL script. This can be useful when MSSQL's built-in WAITFOR DELAY function is just too long (its minimum increment is a full second!)
declare @later datetime
declare @now datetime

set @later = current_timestamp + '00:00:00.01'
set @now = current_timestamp

while @now < @later
begin
    set @now = current_timestamp
end
I'm sure it could be further tweaked to become a function... and heck, maybe one of the gurus will tell me that there's something I don't know about WAITFOR or whatnot. Feedback appreciated!
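For instance, a minimal wrapper as a stored procedure (a sketch; a procedure sidesteps the restrictions on nondeterministic built-ins inside functions, and note datetime resolution is roughly 3 ms, so the pause is approximate):

create procedure dbo.usp_WaitHundredths
    @hundredths int   -- pause length in hundredths of a second
as
begin
    declare @later datetime, @now datetime
    set @now = current_timestamp
    set @later = dateadd(ms, @hundredths * 10, @now)
    while @now < @later
        set @now = current_timestamp   -- busy-wait: burns CPU, unlike WAITFOR DELAY
end

-- usage: pause for ~0.05 seconds
-- exec dbo.usp_WaitHundredths 5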
-- Doug Castell GoldMine Guru www.castellcomputers.com
Hi, when calling a sub-package from the parent package with ExecuteOutOfProcess = true, I get about a 5-10 sec delay before the sub-package starts running (this is a big delay for me, as the sub-package is in a Foreach Loop).
The sub-package itself is very simple and contains just one Script Task with a small script in it. Even if this delay is for validation, I still can't understand why it is taking 5-10 sec. Cheers
I've recently implemented a CLR stored procedure on a couple of databases, but I've noticed that the first time I use the stored proc in a given database I get a delay of around 10-15 seconds. Subsequent calls in new connection sessions execute immediately. I'm assuming the delay is due to the loading of the CLR and the assemblies needed. The assembly being loaded by the CLR is not signed. Has anyone else seen this, or does anyone know a way round this, as it is impacting the user experience of our application?
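If the delay really is the first-load cost of the AppDomain and assembly, one mitigation (a sketch; the procedure name is hypothetical) is to warm each database up once at application start, so no end user takes the hit:

-- Any cheap call into the assembly forces the AppDomain/assembly load
-- for this database; run it once when the application starts.
EXEC dbo.clr_NoOp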
Many thanks for any help you can provide in advance
I have researched this extensively and have seen a few similar posts on this site, and on the rest of the Internet, but have found nothing which solves my problem. This is absolutely crippling my productivity at work and I would greatly appreciate any advice anyone can suggest.
I have several packages which I need to run at quite regular intervals, and initially they worked absolutely perfectly: they ran quickly and efficiently. As time has passed, though, seemingly at random, the pre-execute phase of some of these has started to take unreasonable amounts of time. There have been times where I have left a package on overnight and found the pre-execute phase still at 0% the next morning. This is for queries which shouldn't take more than about 15 minutes to run.
However, seemingly at random, every so often one of the packages works absolutely perfectly. For example, this afternoon one of these packages completed in 7 minutes, whereas this morning I had left it pre-executing for 2.5 hours until I got tired of it and hit the stop button.
What affects the pre-execute phase and what sort of thing would cause this kind of behaviour? I work for a pretty large charity and I have had most of the IT team look into this at some point, and none of them can see any server activity which would explain it.
As an example of a package I am running which displays this behaviour, all that happens is an SQL statement runs, writes the results to a csv file and then emails me to say that it's finished. This is the SQL it uses:
select
REConstituentID as ID,
convert(varchar(10),DeceasedTickedDate,103) as dDate,
coalesce(Title1,'') as Title,
coalesce(FirstName,'') as FirstName,
Surname,
Gender,
AddressLine1,
AddressLine2,
AddressLine3,
AddressLine4,
City,
County,
PostCode,
coalesce(HomePhone,'') as Phone,
coalesce(convert(varchar(10),DeceasedDate,103),'') as DeceasedDate,
coalesce(
convert(varchar(10),(
select min(Date) from dbo.WH_ConstituentAttributes ca where ca.ConstituentID = WH_Constituent.ID and Type = 'Death Notification Date' and (left(Value,8) not in ('Experian', 'Ebiquita') or Value is null)
),103),
convert(varchar(10),(
select top 1 date from dbo.WH_Action where ConstituentID = WH_Constituent.ID and type = 'DM Remark' and ID in (select ActionID from dbo.WH_ActionAttribute where value = 'Deceased notification') order by date asc
),103),
'') as 'Date Notified of Death',
case
when exists (select top 1 null from dbo.WH_ConstituentAttributes ca where ca.ConstituentID = WH_Constituent.ID and Type = 'Death Notification Date' and Value = 'Golden Charter: Funeral Plan Confirmed')
then 1
when exists (select top 1 null from dbo.WH_Action where ConstituentID = WH_Constituent.ID and Category = 'Phone Call' and type = 'DM Remark' and ID in (select ActionID from dbo.WH_ActionAttribute where value = 'Deceased notification'))
then 3
else 5
end as 'Notification Method Code'
from
WH_Constituent
where
Deceased = 'Yes'
and AddressLine1 is not null
and PostCode is not null
and Country = 'United Kingdom'
and DeceasedTickedDate >= {d '2006-09-20'}
and not (
exists (
select top 1 null
from dbo.WH_ConstituentAttributes ca
where
ca.ConstituentID = WH_Constituent.ID
and Type = 'Death Notification Date'
and left(Value,8) in ('Experian', 'Ebiquita')
)
and not exists (
select top 1 null
from dbo.WH_ConstituentAttributes ca
where
ca.ConstituentID = WH_Constituent.ID
and Type = 'Death Notification Date'
and coalesce(left(Value,8),'') not in ('Experian', 'Ebiquita')
)
and not exists (
select top 1 null
from dbo.WH_Action
where
ConstituentID = WH_Constituent.ID
and type = 'DM Remark'
and ID in (
select ActionID
from dbo.WH_ActionAttribute
where value = 'Deceased notification'
)
)
)
There are no lookups, fuzzy groupings or anything else even slightly complicated.
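A diagnostic worth trying while a package sits in pre-execute at 0% (a sketch, not a fix): look at the server from another connection to see what the session is waiting on:

-- Run while the package is stuck in pre-execute.
SELECT session_id, status, command, wait_type, wait_time, blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id <> @@SPID
-- A long-lived wait_type or a nonzero blocking_session_id points at
-- blocking on the server rather than anything inside the package.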
Hi, when I try to run my package with dtexec.exe, it starts fine, but during processing the package calls another sub-package, and at that point there is a big delay before it starts processing the sub-package. The sub-package has been set up with the ExecuteOutOfProcess parameter set to true. Any idea what might be the problem? I should mention that even when I run this within Visual Studio I still get a big delay.
Can someone advise if there is a delay in data being written to the database following a tableadapter.update(datatable) command?
I save transactions, which are subject to the above, and then a listview is updated to reflect them.
As I work through, all is OK and the transactions appear in the view.
I then run a backup through my app, using a backup object to do this, and it reports all OK.
I then close the app and re-open, and as I am in debug the database is empty.
I perform a restore through my app, using a restore object and selecting the backup file I created previously, which reports all OK.
The restore procedure calls application.restart to allow the app to initialise to the restored data.
The problem is that quite a bit of my data is missing from the restore, as if the last block I did prior to backup never actually made it to the database.
I also remember noting that at times when the update method is performed, the actual timestamp on the physical database is not updated... until I close the app and return to the designer.
So does this mean that prior to performing a backup I have to somehow force the app to ensure it has written all changes to the database?
I developed an application that uses SQL Server 2005 Express Edition.
I put all queries in stored procedures. Everything works perfectly, but sometimes I get a long delay running queries; after some minutes it becomes regular again. I couldn't find any relation between the delay and the time of work; it comes by chance. I also set the timeout for running a query to 30 sec, but sometimes it took more than a minute.
Hi! I plan to prepare a lot of DMX statements, but can I control when the data mining model is actually calculated?
Let's say I do the following:
1. create mining structure... - I expect that this execution will take "no" time
2. create mining model... - I expect the same
3. insert into <miningmodel>... select ... - I expect that this will actually take some time, right?
I really only want to prepare the definition of mining structures/models and tell the model what data (insert into) it should use.
Is it possible to control the actual processing, e.g. the execution of the insert statement (= execution of the time-consuming select part)?
I have a batch job that only needs to define things, but calculation of the mining model is not needed until someone really wants to see it (and it is OK that this person has to wait on processing).
BOL writes:
>> Processing Options
The following table describes the processing methods that are available in Analysis Services, and identifies the objects for which each method is supported.
Process Default
Detects the process state of an object, and performs processing necessary to deliver unprocessed or partially processed objects to a fully processed state. This processing option is supported for cubes, databases, dimensions, measure groups, mining models, mining structures, and partitions. ... <<
But how do I control this behaviour? Best regards
I have a multitier app running 24x7. All runs OK until the server takes too long to respond while I'm trying to add records; then I start to receive errors from MSDTC and the transaction is aborted. How can I resolve this? I'm not sure the cause is a long response time from the server, but I have noticed that at some moments the server is busier than when it's running OK. Just for clarity: my app has a maximum of 5 threads, each of which creates a transactional object to process the new entry.
Need your advice on this. I have a DTS package which checks two dates and executes tasks when the dates do not match. The problem I am facing now is that I cannot make the next step start only after the previous step has completed. When the DTS package is executed, all steps complete almost at the same time. See the DTS package below / attached.
In the diagram I have labelled 5 steps, A ~ E; each step needs info from the finished product of the previous step to produce correct results in its own step. I couldn't schedule each step to run at a different time because the DTS package kicks off based on a file that comes in, and each step doesn't have a fixed processing time.
I have tried using On Success and On Complete, and both options start the next step immediately and do not wait for the job to complete or succeed. I guess this is because I have transferred the command to an external process when using the command. Is there a way to introduce some delay between each task?
Please advise. Thank you.
Each of the steps has something like below (refreshing an Excel file with a macro built in); I cannot build all macros into one file and run from the main Excel file:
select @MainUpdate = Main_Update_CET from APMEAPV_Compare
select @TempUpdate = Temp_Update_CET from APMEAPV_Compare
--select @MainUpdate, @TempUpdate

if @MainUpdate <> @TempUpdate
begin
    DECLARE @commandK varchar(1000)
    SET @commandK = 'Start Excel.exe "D:\Daily_Status_Report_EDWH\EDWH_Runbook_BTS.xls"'
    exec master..xp_cmdshell @commandK, No_Output
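A side note on the waiting problem above: the 'Start' may be the culprit. cmd's Start launches Excel and returns immediately, so xp_cmdshell (which otherwise blocks until the launched process exits) reports the step complete at once. A sketch of the blocking form:

DECLARE @commandK varchar(1000)
-- Without 'Start', xp_cmdshell waits for Excel to exit before returning,
-- so the next step cannot begin until the refresh has finished.
SET @commandK = 'Excel.exe "D:\Daily_Status_Report_EDWH\EDWH_Runbook_BTS.xls"'
exec master..xp_cmdshell @commandK, No_Output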
Hi all, I have recently changed IP addresses on my MS SQL cluster (the new IP addresses belong to a different network). Both the physical nodes' IP addresses and the resources' IP addresses have been changed. Everything seems to work fine. The cluster is up and responding normally, and the resources fail over to another node correctly when needed. However, I have noticed that the time it takes for a resource to fail over to another node is much longer than in the past (before the IP address change). I have noticed that "Network Name" is the resource type that causes the delay. Could you please advise if there is something that I overlooked and that needs to be changed/updated as well, so that groups fail over faster? Thanks, Aleu
When a workstation loses its connection to the server, it can leave an incomplete transaction. Then SQL Server removes the transaction. Could anyone guide me on how to set the delay before SQL Server does this? Thanks in advance, John S.
I am currently experiencing a 30 second delay when starting an SSIS package from a query window or stored procedure in SQL 2005 Management Studio, using xp_cmdshell and dtexec.
When I run the package in BI Dev, the execution results state an elapsed time of 4.82 sec; at a command prompt using dtexec the elapsed time is 3.48 sec; from MStudio the elapsed time is 33.86 sec. This test was run using the same configuration and databases. For the MStudio run, if I look at the DTS log file I'm creating, or the PC Application log, it states the package doesn't actually start until 31 sec after the execute button is pressed. I've tried executing the package as both a SQL package and a file package without any difference in elapsed times. I have also set DelayValidation = True for every Task, ConnectionManager and the package itself.
When I look at the package log, one difference I see is that Management Studio executes using 'NT AUTHORITY\SYSTEM'; BI Dev and the cmd prompt use the local user '[Server]\Administrator', which in this case is the administrator. From this I have to believe it is some kind of user rights problem. I think SQL or the OS is waiting for something and, after it times out at 30 sec, it allows the package to run. If this is the case, I'm not sure what it might be or how to find it.
I also tried making an xp_cmdshell_proxy_account with admin rights, but this didn't seem to work either. I've included the query code below. Any ideas, help or solutions are greatly appreciated.
I have a couple of tables set up with column level tracking and using the "Subscriber Always Wins" conflict resolver.
I am still trying to work out why there is a conflict as the only updates to these rows happen at the subscriber.
What I am finding is that a conflict occurs and there appears to be a delay between the conflict being recognised and then resolved.
The reason I know there is a delay is because this is playing havoc with a data extract that relies on the updates from the subscriber.
For instance, I have a job on which the status is changed. That status change triggers an extract via BizTalk to another system. BizTalk, via an SQL adapter, extracts the data from the server, and I find that some of it is missing in the extract.
I go and look into the database at the raw data and find that the data is in fact there.
I have certain triggers running which set up export flags in the database to say this data should be exported. This works perfectly when there are no conflicts.
In each case where data has not been extracted, a conflict occurs.
Is there any way to guarantee that the conflict will be resolved at the same time as the synchronisation is complete - i.e. within the same transaction?
I thought replication was supposed to be a single transaction - you get it all or you get nothing.
I'm interested to hear from anyone who has experienced this behaviour.
I have been posting to the Data Presentation Controls forum for about a month regarding a problem I've been dealing with.
http://forums.asp.net/thread/1223055.aspx
What it boils down to is that on a button click event, I was updating some records, then re-executing a SELECT statement to get the records back out and rebind my DataGrids. This was happening too quickly and the data was not being updated in time before the SELECT was executed. So my grids would still display "old" data.
How do I get SQL Server to commit the UPDATE before my C# code continues?
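For reference, within a single connection an UPDATE is visible to a subsequent SELECT in the same session even before commit, so if both statements share a connection the SELECT cannot see stale data. One way to rule out timing (a sketch; the table and column names are hypothetical) is to run both in one batch under an explicit transaction:

BEGIN TRANSACTION

UPDATE dbo.Orders             -- hypothetical table
SET Status = 'Processed'
WHERE OrderID = @OrderID

COMMIT TRANSACTION            -- the change is durable from here

SELECT OrderID, Status        -- same connection: sees the committed update
FROM dbo.Orders
WHERE OrderID = @OrderID

If the grids still show old data, the staleness is more likely on the application side (separate connections, caching, or rebinding before the update round-trip completes).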
I am implementing the first Sql Service Broker use case in our Sql 2005 application, so forgive me if this is an obvious question. We will be using a service & queue to handle requests to download a batch of documents from a third party service. The basic workflow is:
A. Web client requests a document download.
B. Web application begins a conversation and sends a message requesting the download.
C. DownloadService stored procedure retrieves the message and attempts the document download.
D. If the download fails, retry every 30 minutes up to a max of 5 attempts.
Other than (D), this is perfectly straightforward. I will describe the design I've come up with to retry after 30 minutes below. I would appreciate any suggestions on better ways to handle this within SSB.
My solution relies on a small state table:

create table MessageState (
    ConversationHandle uniqueidentifier NOT NULL,
    RetryCount int NOT NULL,
    MessageBody varbinary(max) NOT NULL
)

The web client begins a conversation and submits a message. It does not end the conversation; rather, the InitiatorQueue has an activation procedure to end its side of the conversation after the DownloadService ends them, as described here: http://blogs.msdn.com/remusrusanu/archive/2006/04/06/570578.aspx.
The DownloadService logic, in pseudo-code:
1. Use a typical loop to get conversation groups and receive messages
2. If message type = DownloadRequest
2a. Call a stored procedure to attempt the download
2b. If successful, end conversation
2c. If not successful, add entry to MessageState and BEGIN CONVERSATION TIMER with a 30 minute timeout
3. If message type = 'http://schemas.Microsoft.com/SQL/ServiceBroker/DialogTimer'
3a. Look up conversation handle in MessageState
3b. Recall original MessageBody
3c. Call a stored procedure to attempt the download
3d. If successful, delete MessageState row and end conversation
3e. If not successful and no retries left, delete MessageState row and end conversation with error
3f. If not successful and retries remaining, increment MessageState.RetryCount and BEGIN CONVERSATION TIMER with a 30 minute timeout
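A minimal T-SQL sketch of the arm/re-arm in steps (2c) and (3f), using the MessageState table above (@handle and @body are assumed to come from the RECEIVE):

-- Step 2c: first failure - remember the message and arm the timer.
INSERT INTO MessageState (ConversationHandle, RetryCount, MessageBody)
VALUES (@handle, 0, @body)

BEGIN CONVERSATION TIMER (@handle) TIMEOUT = 1800   -- 30 minutes, in seconds

-- Step 3f: a later DialogTimer fired and the retry failed - bump and re-arm.
UPDATE MessageState
SET RetryCount = RetryCount + 1
WHERE ConversationHandle = @handle

BEGIN CONVERSATION TIMER (@handle) TIMEOUT = 1800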
The main downside I see to this solution is a certain lack of transparency. After step (2c) or (3f), "select * from DownloadQueue" will no longer show the pending request, although you can find it in sys.conversation_endpoints.
Is there a better idiom to solve this type of problem in SSB?
All my SSIS packages consistently take 1 minute to start when they are scheduled via the SQL Agent.
For example, a package runs in 20 seconds in the BI Studio. I transfer it to the SQL server and store it in MSDB. I then run it manually from SSMS - Integration Services and it takes 20 seconds. I then schedule it via a job in the SQL Agent and it takes 1 minute 20 seconds.
I can see from some simple logging that there is consistently a delay of 1 minute between the job starting and the package starting.
I have also switched every occurrence possible of DelayValidation to TRUE in all my packages and tasks. All this did was reduce the package run time from 27 seconds to 20 - the 1 minute delay still exists.
This happens on all my packages on all my servers. Any ideas.....?
I wonder if anybody can shed any light on this problem. I have a SQL Agent job which has three steps, each step runs an SSIS package.
The job is scheduled to start at 11.00 pm, which it does successfully. However, it has been taking between 2 and 3 hours to run, which is way longer than it should.
When I've looked at the logging, I've found that although the job starts at 11.00 pm, the first package (in job step 1) does not start executing until about 11.30. It finishes in about 5 minutes; there is then about an hour's delay before the second package (in job step 2) starts. This finishes in about 10 minutes, then there is another hour's delay before the third package (in job step 3) starts.
I've tried configuring the steps as SSIS steps, and also as cmd steps using dtexec; both exhibit the same behaviour.
Any ideas about what could be causing this delay? The packages are stored in msdb on the same server as the SQL Agent job, if that makes any difference.
Hello, I have tested this in two ways, using a variable and a constant. Query 1: this query uses the variable @last for the date value, and it takes 2 minutes to run.
Query 2: here I am using the value '11/05/2002' instead of the variable, and it runs within a second.
Why is there this much difference running the same query? Can anybody give the reason or a solution for this? Thanks, Ravi
Query 1:

DECLARE @last smalldatetime
SET @last = '11/05/2002'

select distinct(o.account_num), a.enterprise_num, a.site_num
from oesd.dbo.order_header o WITH (NOLOCK)
INNER JOIN oesd.dbo.account a WITH (NOLOCK)
    ON a.account_num = o.account_num
    and o.added_date > @last
UNION
select distinct(o.account_num), a.enterprise_num, a.site_num
from oesd.dbo.order_header o WITH (NOLOCK)
INNER JOIN oesd.dbo.account a WITH (NOLOCK)
    ON a.account_num = o.account_num
INNER JOIN Leap.dbo.ENTERPRISE_STATE es WITH (NOLOCK)
    ON (a.enterprise_num = es.ENTERPRISE_NUM AND o.last_updated_date > @last)
Query 2:
select distinct(o.account_num), a.enterprise_num, a.site_num
from oesd.dbo.order_header o WITH (NOLOCK)
INNER JOIN oesd.dbo.account a WITH (NOLOCK)
    ON a.account_num = o.account_num
    and o.added_date > '11/05/2002'
UNION
select distinct(o.account_num), a.enterprise_num, a.site_num
from oesd.dbo.order_header o WITH (NOLOCK)
INNER JOIN oesd.dbo.account a WITH (NOLOCK)
    ON a.account_num = o.account_num
INNER JOIN Leap.dbo.ENTERPRISE_STATE es WITH (NOLOCK)
    ON (a.enterprise_num = es.ENTERPRISE_NUM AND o.last_updated_date > '11/05/2002')
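A hedged explanation, worth verifying against the actual plans: with a local variable the optimizer cannot see the value at compile time and falls back to a generic cardinality estimate, whereas the literal lets it use the column statistics. Passing the date as a real parameter via sp_executesql makes the value visible when the plan is built. A sketch on the first half of the query:

DECLARE @sql nvarchar(4000)
SET @sql = N'select distinct(o.account_num), a.enterprise_num, a.site_num
from oesd.dbo.order_header o WITH (NOLOCK)
INNER JOIN oesd.dbo.account a WITH (NOLOCK)
    ON a.account_num = o.account_num and o.added_date > @last'

-- Unlike a local variable, the parameter value is known at plan compile time.
EXEC sp_executesql @sql, N'@last smalldatetime', @last = '11/05/2002'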
I noticed that Reporting Services takes longer to open in the morning, or after a period of inactivity. My guess is that SSRS is set to go idle/stop after a certain time in an inactive state, and it takes time to start the Reporting Services when it is activated again. My question is how to make SSRS active all the time, i.e. never go to "sleep". What are the pros and cons of this approach? Thanks.
There is an SQL Agent job that starts a package (from the file system, using a cmd command). Usually the job takes 8-10 minutes, but sometimes it gets stuck for a long time (1+ hour).
The DTexec process can be found with procmon, but it seems it is just not doing anything (and the package is not logging the start of the execution to its file). After a long wait it just runs the package quickly.
I've moved the package to the SSIS catalog to try to get more detailed logging, but with no luck.
The job starts at 1 PM; package execution starts at 1:49 PM, without any messages about the execution in the SSISDB log.
At first I thought it might be a long-validation problem, but when the package executes, the validation messages are there and they complete quickly.
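While the job appears hung, it may be worth checking whether SSISDB has even registered an execution yet (a diagnostic sketch against the standard catalog views):

-- Run while the job is stuck.
SELECT TOP 10 execution_id, status, start_time, end_time, package_name
FROM SSISDB.catalog.executions
ORDER BY execution_id DESC
-- status: 1 = created, 2 = running, 7 = succeeded. If no row exists for the
-- package yet, the delay happens before SSIS ever starts (e.g. in the Agent
-- step or the cmd shell), not inside the package.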