Hi,
We are using a Shared Schedule to trigger reports at a particular time, and we invoke the SQL Agent job associated with the shared schedule. How can we identify when the shared schedule has completed, i.e. when all the reports associated with it have been generated? We need to trigger other external processes once all the reports are generated. Kindly help me with this.
Subash
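A rough way to check this from the report server catalog (a sketch only; the dbo.Schedule, dbo.ReportSchedule and dbo.Subscriptions tables and their LastRunTime columns are assumptions based on a default SSRS catalog and may differ across versions) is to poll until every subscription attached to the shared schedule has a LastRunTime later than the schedule's last fire time:

    -- Hedged sketch: returns 'Still pending' while any subscription tied to the
    -- named shared schedule has not yet run since the schedule last fired.
    -- Table/column names assume a default ReportServer catalog.
    DECLARE @ScheduleName nvarchar(260);
    SET @ScheduleName = N'Nightly Shared Schedule';   -- hypothetical schedule name

    SELECT CASE WHEN COUNT(*) = 0 THEN 'All subscriptions finished'
                ELSE 'Still pending' END AS SharedScheduleStatus
    FROM ReportServer.dbo.Schedule        AS sch
    JOIN ReportServer.dbo.ReportSchedule  AS rs  ON rs.ScheduleID = sch.ScheduleID
    JOIN ReportServer.dbo.Subscriptions   AS sub ON sub.SubscriptionID = rs.SubscriptionID
    WHERE sch.Name = @ScheduleName
      AND (sub.LastRunTime IS NULL OR sub.LastRunTime < sch.LastRunTime);

Once that query reports nothing pending, an Agent job step (or an external scheduler) could kick off the downstream processes.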
I have a job which is scheduled to run once daily. The job is enabled and the schedule is also enabled. After the job runs (successfully), the schedule has its 'Enabled' flag reset (i.e. the 'Enabled' checkbox in the schedule properties has become unchecked) and so is not rescheduled.
I have other jobs configured in a similar way, but they are run and then rescheduled in the normal way without any problems.
I am setting up a role to allow certain users to create and maintain Shared Schedules in SSRS 2005. The tasks I have authorized are: View reports, View folders, Manage all subscriptions, View data sources, and Consume reports.
What else do I need to grant? Right now the user cannot see the Site Settings link needed to get to the Manage Schedules page.
At the Moment we use SQL Server 2008 R2 Std. with Reporting Services. I want to change the individual schedules (non-shared) for 170 subscriptions without using the web Interface.
I tried to change the table entries in dbo.Schedule and dbo.Subscriptions, but the reports did not run. I also know that there are jobs in the SQL Server Agent for the schedules. Now I need to understand how the mechanism works that updates the job entries from the database tables. Is there a stored procedure which can be used?
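As far as I know there is no public stored procedure for this; the Report Server service itself rewrites the Agent jobs when a schedule is changed through the SOAP API (e.g. ReportingService2005.SetScheduleProperties), so edits made directly to the tables are not picked up. To see how the pieces relate, the following sketch may help (an assumption based on a default catalog: each schedule-driven Agent job is named after the ScheduleID GUID, and its single step just calls ReportServer.dbo.AddEvent, which the Report Server picks up to run the subscription):

    -- Hedged sketch: map the SQL Agent jobs back to ReportServer schedules and
    -- subscriptions. Table names assume a default ReportServer catalog.
    SELECT j.name          AS AgentJobName,          -- the ScheduleID GUID as text
           s.NextRunTime,
           sub.Description AS SubscriptionDescription,
           c.Path          AS ReportPath
    FROM msdb.dbo.sysjobs                 AS j
    JOIN ReportServer.dbo.Schedule        AS s   ON j.name = CONVERT(nvarchar(128), s.ScheduleID)
    JOIN ReportServer.dbo.ReportSchedule  AS rs  ON rs.ScheduleID = s.ScheduleID
    JOIN ReportServer.dbo.Subscriptions   AS sub ON sub.SubscriptionID = rs.SubscriptionID
    JOIN ReportServer.dbo.[Catalog]       AS c   ON c.ItemID = sub.Report_OID;

For 170 subscriptions, a small script against the SOAP API (rs.exe or a .NET client) is the supported way to change the schedules in bulk.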
Is there a way to restrict subscriptions to use shared schedules only? We want to make sure that we manage the schedules and that subscriptions only run during off-peak hours.
I have had an underwhelming amount of success hunting down the source of this error and am hoping that someone here may have some insight.

Error logged in the event log:

Source: MSSQLSERVER
Category: (2)
Event ID: 17887
Description: IO Completion Listener (0x754) Worker 0x00FEC0E8 appears to be non-yielding on Node 1. Approx CPU Used: kernel 0 ms, user 0 ms, Interval: 15187.

Immediately followed by 100% CPU utilization. The server remains pingable and I can telnet to the SQL port, but it is otherwise effectively unresponsive (queries time out, RDP times out, etc.). This is also logged in the SQL error log:

2008-03-19 15:52:37.80 Server Using 'dbghelp.dll' version '4.0.5'
2008-03-19 15:52:46.80 Server **Dump thread - spid = 0, PSS = 0x00000000, EC = 0x00000000
2008-03-19 15:52:46.80 Server ***Stack Dump being sent to D:\Microsoft SQL Server\MSSQL.3\MSSQL\LOG\SQLDump0004.txt
2008-03-19 15:52:46.80 Server * *******************************************************************************
2008-03-19 15:52:46.80 Server *
2008-03-19 15:52:46.80 Server * BEGIN STACK DUMP:
2008-03-19 15:52:46.80 Server * 03/19/08 15:52:46 spid 0
2008-03-19 15:52:46.80 Server *
2008-03-19 15:52:46.80 Server * Non-yielding IOCP Listener
2008-03-19 15:52:46.80 Server *
2008-03-19 15:52:46.80 Server * *******************************************************************************
2008-03-19 15:52:46.80 Server * -------------------------------------------------------------------------------
2008-03-19 15:52:46.80 Server * Short Stack Dump
2008-03-19 15:58:13.09 Server Stack Signature for the dump is 0x00000006
2008-03-19 16:03:40.94 Server IO Completion Listener (0x730) Worker 0x00A060E8 appears to be non-yielding on Node 0. Approx CPU Used: kernel 0 ms, user 0 ms, Interval: 15187.
2008-03-19 16:06:10.47 Server Timeout waiting for external dump process 6016.
2008-03-19 16:06:10.47 Server IO Completion Listener (0x754) Worker 0x00FEC0E8 appears to be non-yielding on Node 1. Approx CPU Used: kernel 0 ms, user 0 ms, Interval: 15187.
2008-03-19 16:17:34.51 Server IO Completion Listener (0x754) Worker 0x00FEC0E8 appears to be non-yielding on Node 1. Approx CPU Used: kernel 0 ms, user 0 ms, Interval: 15187.

The server is Windows 2003 SP2 (32-bit), dual dual-core AMD Opteron processors, 8GB RAM, RAID 10. /3GB and /PAE are enabled in boot.ini. "Lock pages in memory" has been granted to the account SQL runs as. SQL Server is 2005 Standard Edition SP1 (32-bit). "Use AWE to allocate memory" is enabled. Min and max memory are set to 5GB.

If there is any other info you think may be helpful in troubleshooting, please let me know. Any insight you've had into this or similar problems would be appreciated. Thanks in advance.
I have a package that has 4 Script Tasks that are placed sequentially.
I have Task1--> Task2-->Task3-->Task4
The arrows between them are OnCompletion arrows as opposed to the standard OnSuccess arrows, so even if Task2 fails, Task3 and Task4 still execute.
The catch is that I want the following behaviour: the first time it runs and Task2 fails, all the tasks except Task2 should run, which is fine; but when I rerun it, I want it to realise that Task2 failed earlier and run just Task2. If both Task2 and Task4 had failed, then it should run just Task2 and Task4.
I tried to implement this with checkpoints, but the problem is that if it fails at Task2 it stops there and does not continue to execute Task3 and Task4. When you rerun it, it starts at Task2, but as I said, I would like Task3 and Task4 to have already completed in the previous run.
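One way to get that behaviour without checkpoints (a sketch, assuming a small control table is acceptable; all object names are hypothetical) is to record each task's last outcome in a table, load those outcomes into package variables at the start of the run, and disable each Script Task, via its Disable property expression or a precedence-constraint expression, when its last recorded outcome was a success:

    -- Hypothetical control table: one row per task, holding its last outcome.
    CREATE TABLE dbo.PackageTaskState
    (
        PackageName sysname     NOT NULL,
        TaskName    sysname     NOT NULL,
        LastOutcome varchar(10) NOT NULL DEFAULT ('Pending'),  -- 'Success' / 'Failure' / 'Pending'
        LastRunTime datetime    NULL,
        CONSTRAINT PK_PackageTaskState PRIMARY KEY (PackageName, TaskName)
    );
    GO
    -- Called from an Execute SQL Task wired after each Script Task with an
    -- OnCompletion constraint, passing the task name and 1/0 for success.
    CREATE PROCEDURE dbo.SetTaskOutcome
        @PackageName sysname,
        @TaskName    sysname,
        @Succeeded   bit
    AS
    BEGIN
        UPDATE dbo.PackageTaskState
        SET LastOutcome = CASE WHEN @Succeeded = 1 THEN 'Success' ELSE 'Failure' END,
            LastRunTime = GETDATE()
        WHERE PackageName = @PackageName
          AND TaskName    = @TaskName;
    END
    GO

On a rerun, an expression such as @[User::Task2LastOutcome] == "Success" on Task2's Disable property skips it, so only the tasks that failed last time actually execute.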
Hi All,

I have a DTS package that runs calling a few stored procedures and an ActiveX module. The package does some data cleanup and load, and it takes an hour to complete. I have emails sent at the beginning and end of the package execution, along with error notifications if any of the steps fails.

The issue is this: the DTS runs successfully. I ran the package from the server and it completes. It had also been running as a scheduled job for over a few months without any issues. One fine day hell broke loose and it started to give an attitude!!

When the job is invoked automatically at the scheduled time, it runs the package and does what it's supposed to do, including sending out the final email, but it does not finish the job. The job says it's still executing even though the final step has completed. I have tried logging the package, and the log says the last step completed. But the job log hasn't written anything to the log file (the job log is where I am logging the step that calls the DTS).

Does anyone have any idea what this issue is? I am at a dead end and have no clue why this is happening. I am thinking of changing the whole process, but before that I want to try posting this.

Any ideas or suggestions will be much appreciated. Please feel free to let me know if there is any additional information you need in order to troubleshoot this issue.

Thanks in advance..
Aravin.
I have three sequence containers set up to run in parallel. I have a final step that parses the log file and displays results, and I want this to occur when all three containers have completed, whether they succeeded or failed. I therefore have a constraint from each container that feeds into my final step, and all three constraint types are set to "Completion".
When I run the package and one of the tasks within a container fails (and fails its parent, but not the package) the final step is not executed.
If I take off all the constraints except one, the final step is executed as expected.
I am using checkpointing if that has any impact. Disabling it makes no difference.
I'm not entirely sure this is the place for it but I need to implement an automatic text completion function.
I'd like to know if there is something, such as a built-in function, that could help me. The best idea I have is to create some sort of node tree and work with that but there has to be a better way.
I need to do it in C# or in MS SQL. Any kind of help is much appreciated!
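If the completion only needs to suggest existing values that start with what the user has typed, a plain indexed prefix search may be enough before building a node tree/trie. A minimal T-SQL sketch (table, column and procedure names are made up for illustration; wildcard characters in the user input would still need escaping):

    -- Hypothetical lookup table of phrases to complete against.
    CREATE TABLE dbo.Phrases
    (
        Phrase nvarchar(200) NOT NULL PRIMARY KEY   -- the PK index also serves the prefix search
    );
    GO
    -- Return the top suggestions that start with the text typed so far.
    CREATE PROCEDURE dbo.SuggestCompletions
        @Prefix nvarchar(200)
    AS
    BEGIN
        SELECT TOP 10 Phrase
        FROM dbo.Phrases
        WHERE Phrase LIKE @Prefix + N'%'            -- leading-prefix LIKE can seek on the index
        ORDER BY Phrase;
    END
    GO

The same query is easy to call from C# via a SqlCommand; a trie only starts paying off when the suggestion set must live in memory or support fuzzier matching.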
Is there a way (during the Control Flow) to force a package to complete successfully when a specific condition is met?
Here is what I'm trying to do:
The package will be scheduled to execute once a day. The first thing it does is download Excel files from an FTP site. If there are no files to download, I don't want the package to fail; I simply want it to stop (preventing the subsequent Data Flow tasks from executing) and return a successful completion.
The reason I need a successful completion is that I plan to have MOM monitor the Windows event logs and notify us of any errors that this package logs.
Is there a way to trigger an SSIS package automatically when it ends successfully? I am moving data from one database to another, passing a start date and an end date as parameters. So is there some way that, once my SSIS package ends successfully, it can start again on its own, reading a new date range from a file or something similar?
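One common pattern (a sketch rather than a built-in feature; every object name below is hypothetical) is to keep the pending date ranges in a control table, have the package read the next unprocessed range at the start and mark it done at the end, and let a SQL Agent job run the package repeatedly until nothing is left to process:

    -- Hypothetical control table of date ranges still to be moved.
    CREATE TABLE dbo.LoadWindow
    (
        WindowID  int IDENTITY(1,1) PRIMARY KEY,
        StartDate datetime NOT NULL,
        EndDate   datetime NOT NULL,
        Processed bit      NOT NULL DEFAULT (0)
    );
    GO
    -- First Execute SQL Task in the package: pick up the next pending window
    -- (in SSIS these values would be mapped to package variables).
    DECLARE @WindowID int, @StartDate datetime, @EndDate datetime;

    SELECT TOP 1 @WindowID = WindowID, @StartDate = StartDate, @EndDate = EndDate
    FROM dbo.LoadWindow
    WHERE Processed = 0
    ORDER BY StartDate;

    -- Last task in the package: mark the window as processed so the next run moves on.
    UPDATE dbo.LoadWindow SET Processed = 1 WHERE WindowID = @WindowID;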
Hello All,

I have a stored procedure which acts as a main/controller script and invokes about 20 more stored procedures. Basically, it looks something like this:

    -- start script
    create procedure ...
    print 'process started'
    exec sp_1
    exec sp_2
    exec sp_3
    ....
    print 'process ended'
    -- end script

Looking at it, after running that procedure I would expect the first PRINT statement to be printed immediately, but it isn't. It seems that the print statements only display their messages after the whole processing has completed.

What is the reason for this behaviour? If this is so, then we would not be able to print any progress reporting in our scripts.

Please comment. Thanks in advance.
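The messages are buffered: PRINT output is normally flushed to the client only when the buffer fills or the batch finishes, which is why nothing appears until the whole controller procedure has completed. A common workaround is RAISERROR with severity 0 and the NOWAIT option, which pushes each message to the client immediately. A minimal sketch using the same placeholder procedures as above:

    -- Progress messages that reach the client right away instead of at batch end.
    RAISERROR ('process started', 0, 1) WITH NOWAIT;

    EXEC sp_1;
    RAISERROR ('sp_1 finished', 0, 1) WITH NOWAIT;

    EXEC sp_2;
    RAISERROR ('sp_2 finished', 0, 1) WITH NOWAIT;

    RAISERROR ('process ended', 0, 1) WITH NOWAIT;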
I'm working on an application that involves various components two of which are an SSIS package and a C# windows service.
The SSIS package imports data from a number of structured flat files into database tables and then runs a number of stored procedures that rearrange/normalize the data and perform certain calculations the results of which are also stored in the database. The package will run at regular intervals as new data structured flat files become available.
The C# windows service is a caching server and the data it needs to cache is the results of the stored procedures run by the SSIS package. I'm therefore looking for a way to notify the C# app when the SSIS package has completed.
I've looked at SQL Server's Query Notifications but, if I've understood correctly, that depends on the client code issuing a SQL statement and attaching a SqlDependency object that checks for changes to the result set of that particular query, which is not really what I'm after.
I've considered SSIS events, but again I think that for a C# app to monitor SSIS events, the package needs to have been executed from within the C# app. If there were a way for the SSIS package, running completely independently of the C# process, to fire an event that the C# Windows service could subscribe to, that would be ideal.
I don't know whether other options, such as SQL Server Notification Services, or even something quite crude like writing a flag file that the C# service can read, would be a better fit.
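One simple option (a sketch, assuming a final Execute SQL Task can be added to the package; all object names are hypothetical) is to have the package write a completion row to a small control table and let the Windows service either poll that table or attach a SqlDependency to a notification-friendly query against it, which gives Query Notifications a concrete result set that changes exactly when the package finishes:

    -- Hypothetical completion log written by the package's last Execute SQL Task.
    CREATE TABLE dbo.EtlRunLog
    (
        RunID       int IDENTITY(1,1) PRIMARY KEY,
        PackageName sysname  NOT NULL,
        CompletedAt datetime NOT NULL DEFAULT (GETDATE())
    );
    GO
    -- Final task in the SSIS package:
    INSERT dbo.EtlRunLog (PackageName) VALUES (N'NightlyImport');

    -- The C# service polls for anything newer than the last run it cached:
    DECLARE @LastSeen datetime;
    SET @LastSeen = '19000101';            -- the service supplies its own high-water mark
    SELECT RunID, CompletedAt
    FROM dbo.EtlRunLog
    WHERE PackageName = N'NightlyImport'
      AND CompletedAt > @LastSeen;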
Hello,

I have several SQL Server jobs that execute procedures written to run deletes and updates against tables in my data warehouse. Some jobs occasionally get stuck with the status "Performing Completion Actions" and can only be reset by restarting SQL Server or rebooting the server.

I have seen one answer to this problem for a DTS package, which is to check the box specifying "Close Connection on Completion" for each task. Is this the equivalent of setting "cursor_close_on_commit" to "on" in T-SQL? If not, what is the equivalent in T-SQL?

Thanks in advance!
Cliff
I need to build an automated email that gives the completion messages when a database is restored, i.e.:

"Executed as user: sa. Executing RESTORE DATABASE DB1 FROM DISK='h:\backups\DB1\DB1_db_200411082056.BAK', RECOVERY [SQLSTATE 01000] (Message 0) Processed 3816 pages for database 'DB1', file 'DB1_Data' on file 1. [SQLSTATE 01000] (Message 4035) Processed 1 pages for database 'DB1', file 'DB1_Log' on file 1. [SQLSTATE 01000] (Message 4035)"

Currently the Job History box contains it, but I'd rather get it via email. The base restore statement works, but it gives me "Cannot perform a backup or restore operation within a transaction." when I try to run it as below.

Works:

    [build @RestoreCmd]
    exec (@RestoreCmd)

Doesn't:

    [build @RestoreCmd]
    create table #Error_Finder (listing nvarchar(4000))
    declare @Errors smallint
    insert #Error_Finder exec (@RestoreCmd)
    EXEC xp_sendmail @recipients = 'dba',
        @query = 'SELECT * from #Error_Finder',
        @subject = 'SQL Server Restores'
    drop table #Error_Finder

Any suggestions? My next thought is to start selecting against system tables in msdb. It looks like, because the INSERT can fail, it is treated as a transaction.
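Wrapping the RESTORE in an INSERT...EXEC is what puts it inside a transaction and triggers the "Cannot perform a backup or restore operation within a transaction" error. An alternative sketch along the lines of selecting against system tables in msdb, as mentioned above: run the restore on its own, then send the completion details recorded in msdb.dbo.restorehistory (xp_sendmail is used here to match the existing setup; on later versions sp_send_dbmail would be the Database Mail equivalent):

    -- After the RESTORE has completed, mail the most recent restore history rows.
    EXEC xp_sendmail
         @recipients = 'dba',
         @query = 'SELECT TOP 10 destination_database_name, restore_date, restore_type
                   FROM msdb.dbo.restorehistory
                   ORDER BY restore_date DESC',
         @subject = 'SQL Server Restores';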
I'm using VS 2005 Professional Edition. I try to enable statement completion in the Options menu via this path: Options -> Text Editor -> SQL Script, but statement completion is grayed out and I cannot check it. What must I do to enable it?
How can I calculate the estimated completion time of a job, and the variance/difference from its previous history? I'm looking for a T-SQL query which can accomplish this. For example: a job normally takes 10 minutes to complete each day. Today, however, for some reason the job has been running for over an hour and is still running. It could be a blocking issue or some performance issue on the server due to which the job is still running.
In such cases, I want a T-SQL query or stored procedure which monitors these jobs every 3 minutes (a configurable value). Every 3 minutes the query has to check whether there are any jobs which are taking more time than their usual/average completion time, and in that case send an email using the Database Mail functionality, i.e. sp_send_dbmail. From there, the DBA can dig further using waits, SQL trace, etc.
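A hedged sketch of that kind of check, suitable for an Agent job that runs every 3 minutes: compare each currently executing job's elapsed time with its average historical duration from msdb and mail the ones that have exceeded it (the 1.5x threshold, mail profile and recipient are assumptions to adjust):

    -- Flag jobs that are currently running and have exceeded 1.5x their average
    -- successful duration, then alert via Database Mail.
    SET NOCOUNT ON;

    DECLARE @body nvarchar(4000);
    SET @body = N'';

    WITH hist AS
    (
        SELECT job_id,
               AVG(run_duration / 10000 * 3600
                 + run_duration / 100 % 100 * 60
                 + run_duration % 100) AS avg_seconds          -- run_duration is stored as HHMMSS
        FROM msdb.dbo.sysjobhistory
        WHERE step_id = 0 AND run_status = 1                   -- whole-job rows, successful runs only
        GROUP BY job_id
    ),
    running AS
    (
        SELECT ja.job_id, j.name,
               DATEDIFF(SECOND, ja.start_execution_date, GETDATE()) AS elapsed_seconds
        FROM msdb.dbo.sysjobactivity AS ja
        JOIN msdb.dbo.sysjobs        AS j ON j.job_id = ja.job_id
        WHERE ja.start_execution_date IS NOT NULL
          AND ja.stop_execution_date  IS NULL                   -- still executing
          AND ja.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)
    )
    SELECT @body = @body + r.name
                 + N': running ' + CONVERT(nvarchar(20), r.elapsed_seconds)
                 + N's vs avg '  + CONVERT(nvarchar(20), h.avg_seconds) + N's' + CHAR(13)
    FROM running AS r
    JOIN hist    AS h ON h.job_id = r.job_id
    WHERE r.elapsed_seconds > h.avg_seconds * 1.5;

    IF @body <> N''
        EXEC msdb.dbo.sp_send_dbmail
             @profile_name = N'DBA Mail',                       -- hypothetical profile
             @recipients   = N'dba@example.com',                -- hypothetical recipient
             @subject      = N'Jobs running longer than usual',
             @body         = @body;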
Hi, I have a simple web application which calls a stored procedure. The stored procedure operates as a transaction and runs for several minutes. I've created a partial class to set the SQLcommand timeout property to avoid any timeouts, which works fine. Unfortunately though, when the application is run in the production environment, it ends in an error after a certain amount of time (maybe a couple of minutes - not exactly sure), which seems to be the same each run. It doesn't appear to end the stored procedure though, which results in locking the tables. It runs fine in the development environment, and it doesn't appear as though any error information is provided when the application crashes. I'm assuming that the ASP.NET application is timing out for some reason, but the stored procedure itself is fine. I can run it directly from SQL server without any dramas. In the Virtual Directory configuration within IIS, I have the script timeout period set to 1200 seconds. The Default Web-Site timeout property is set to 120 seconds, but I'm assuming that this is only for internet connection timeout, not database transaction timeouts. Any information as to what may be causing this is appreciated. Thanks
I'm using two different servers for the application (.NET version 1.1) and the database (sqlserver200) in a Win2k3 environment. I'm getting the below error message all the time. I've verified that the communication between the two servers is fine.
"The timeout period elapsed prior to completion of the operation or the server is not responding"
I am trying to restore a SQL Server 2005 database from a backup file and am experiencing a hanging issue after it is "finished".
I am doing this in SQL Server Management Studio, generating the following SQL for Restore:
RESTORE DATABASE [AdventureWorks2]
FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL.2\MSSQL\Backup\AdventureWorks.bak'
WITH FILE = 1, NORECOVERY, NOUNLOAD, REPLACE, STATS = 10
GO
When I run this on the machine where I originated the backup (creating AdvWorks2), it runs fine in no time.
When I run this command on another SQL Server 2005 instance on another host, it appears to run fine: I see progress going up to 100% and it says "Restore Completed Successfully".
BUT, for some reason, the database in Object Explorer is stuck with a "(Restoring...)" label attached to its tree item and I am unable to perform any activities on that database. It claims it's in the middle of a restore operation, even though it had reached 100% progress and declared successful completion.
Any ideas what could be causing this?
(Note: Both instances are SQL Server 2005 - Service Pack 2)
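For what it's worth, the NORECOVERY option in the generated script is what leaves the database in the "(Restoring...)" state: it tells SQL Server to expect further log or differential backups. If nothing else needs to be applied, recovering the database should bring it online; a minimal sketch:

    -- Bring a database left in the 'Restoring...' state online when no further
    -- backups are to be applied.
    RESTORE DATABASE [AdventureWorks2] WITH RECOVERY;
    GO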
We are using SQL Server 2005 (not on SP1), and keep getting the following message in the SQL Server log:
IO Completion Listener (0x15e8) Worker 0x00B7E0E8 appears to be non-yielding on Node 0. Approx CPU Used: kernel 0 ms, user 0 ms, Interval: 35092.
After receiving this message, the server slows down considerably and we receive numerous messages saying:
SQL Server has encountered 61 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [xxxxx.MDF] in database [xxxx] (22). The OS file handle is 0x00000CDC. The offset of the latest long I/O is: 0x000000fa21a000
I have not been able to find any documentation or explanation of what the IO Completion Listener message is about or how to correct whatever problem is causing it.
I have a scenario where I have to run an update task on multiple servers in parallel, and once all of them are completed (success or failure), another task is to be run on another server.
1. In a maintenance plan, if we add tasks which are not joined, will they run in parallel at the same time?
2. If we link the last task to all the other tasks with the link type 'Completed', will the last task run only after all of those tasks are completed, or when any one of them is completed? (I have a big doubt here.)
The business requirement behind this is to bring data from multiple servers into local shadow copies and then process them together. It's OK if the data transfer from some server fails, but it's not OK to start the central processing while a data transfer is still running. Further, we want to run the data transfers from multiple servers in parallel to save time.
Sometimes when I do "alter database ABCD set partner failover" I get the following message: Nonqualified transactions are being rolled back. Estimated rollback completion: 100%.
In 99 percent of cases, after such a message, the first attempt to use an open connection also raises an error such as: "Exception: A transport-level error has occurred when sending the request to the server. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)"
After the first error all subsequent queries would run perfectly.
Is there any built-in way of kicking off a job on SQL Server 2005 Agent whenever a package/job completes in Oracle? Are there any (Triggers? Msft queue? Event Notification?) mechanisms to automate running a job on the SQL side? Any article or knowledge articles would be appreciated also.
If not, are there any built-in, standardized polling techniques? Or are there any timers in SSIS? That way I can delay executing a child package until a certain record has been inserted into a control table in Oracle. I don't want to write an inefficient for loop that blocks all other processing on the server and iterates once every second.
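Absent a push mechanism from the Oracle side, a small polling job step on the SQL side is the usual fallback. A hedged sketch, assuming a linked server named ORA_LINK and a control table OWNER.LOAD_CONTROL on the Oracle side (all names hypothetical): it checks every 3 minutes, so it sleeps rather than spinning, and starts the downstream Agent job once the completion row appears:

    -- Polling sketch for a SQL Agent job step: check an Oracle control table via a
    -- linked server every 3 minutes, then kick off the downstream job when ready.
    -- ORA_LINK, OWNER.LOAD_CONTROL and the job name are hypothetical.
    DECLARE @ready int;
    SET @ready = 0;

    WHILE @ready = 0
    BEGIN
        SELECT @ready = COUNT(*)
        FROM OPENQUERY(ORA_LINK,
             'SELECT 1 FROM OWNER.LOAD_CONTROL WHERE STATUS = ''COMPLETE''');

        IF @ready = 0
            WAITFOR DELAY '00:03:00';   -- sleep so the loop does not burn CPU
    END

    EXEC msdb.dbo.sp_start_job @job_name = N'Process Oracle Extract';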