The application I'm working on has a job that runs 24/7 checking a table for rows with a specific status; for each row it finds, it has to execute a job.
I don't know whether SQL Server jobs are asynchronous or not, since I need the jobs to execute in parallel rather than in a pipeline.
Who should own jobs in SQL Server 2005? Currently I have set up a couple of new jobs, and the owner is a Windows login with administrative privileges on the server, so the jobs run fine.
When I tried to change the owner to a non-admin user, the job failed, stating that I needed to set up proxy permissions or something to that effect. So I left the owner as the admin user.
Is that ok, or even the preferred owner of jobs?
Also - one of my job steps calls an SSIS package. On the "General" tab, it shows that it will run the package as the SQL Server Agent account, and then there is a "Log on to server" section where you select Windows or SQL Server login info. In this case, the SQL Server Agent account is a different Windows account than the job owner... so who is logging in at the package level - the job owner or the SQL Server Agent account?
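On the proxy error mentioned above: the usual route for a non-admin job owner running an SSIS step is a credential plus a SQL Agent proxy for the SSIS subsystem. A minimal sketch - all names and the password are placeholders, and subsystem ID 11 is (to my understanding) the SSIS subsystem on SQL Server 2005:

```sql
-- Store the Windows account the step should run as (name/password are placeholders)
CREATE CREDENTIAL SSISRunAs
    WITH IDENTITY = N'DOMAIN\ssis_runner', SECRET = N'StrongPassword!';

-- Create a SQL Agent proxy over that credential
EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'SSISProxy',
    @credential_name = N'SSISRunAs';

-- Allow the non-admin job owner to use the proxy
EXEC msdb.dbo.sp_grant_login_to_proxy
    @login_name = N'DOMAIN\job_owner',
    @proxy_name = N'SSISProxy';

-- Grant the proxy to the SSIS subsystem
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'SSISProxy',
    @subsystem_id = 11;
```

After that, the proxy should appear in the step's "Run as" dropdown, and the step runs under the credential's Windows account rather than the Agent service account.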
Hello, I am using SQL Server 2005. I want to delete data from my table after a specific number of days, and this should happen automatically: after the number of days I specify, the data should be deleted from the table. Can anyone let me know how to achieve this? An example would be great. Thanks in advance.
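One common approach is a SQL Agent job scheduled daily, with a single T-SQL step that deletes rows older than a cutoff. A sketch, assuming a table named MyTable with a CreatedDate datetime column (both names are placeholders for your own schema):

```sql
-- Delete rows older than 30 days; schedule this as a daily SQL Agent job step
DELETE FROM dbo.MyTable
WHERE CreatedDate < DATEADD(day, -30, GETDATE());
```

For a large table you may want to delete in batches (a loop around `DELETE TOP (10000) ...` in 2005) to keep each transaction, and the log growth, small.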
I migrated packages from DTS to SSIS and tried running the jobs that call these DTS packages, but I'm getting an error.
The job step is:

DTSRUN /S<servername> /U<username> /P<password> /N"ATL_Get_Down_Dates"

which gives the error:

Executed as user: USATLBNHSYSTEM. DTSRun: Loading... Error: -2147220438 (8004042A); Provider Error: 0 (0) Error string: The specified Storage File contains multiple Packages; Loading requires a Name or Guid. Error source: Microsoft Data Transformation Services (DTS) Package Help file: sqldts80.hlp Help context: 713. Process Exit Code 1. The step failed.
Will probably write a blog post about this pretty soon. Until then, here is a quick list of steps:
Create an IS package from Visual Studio. In the Toolbox, under the "Control Flow Items" there is a "Transfer Jobs Task". Use this to transfer jobs to your local machine or any other server.
I still have to figure out how to make it write to a flat file. The only destination it allows at this point seems to be another SQL server.
Hi there, I have a problem: when I make a full backup of my db using a maintenance plan, the job associated with the maintenance plan does not work and gives the following error: "The job failed. The job was invoked by user <machine name>\<user name>. The step to run was step 1 (subplan)." The user I'm using is an administrator, and the job was working normally until it suddenly stopped.
I have two tables: t_DTM_DATA_STAGING, around 2 million records, and t_DTM_DATA, around 251 million records.
The SQL statement below looks for records in the t_DTM_DATA_STAGING table that are not in the t_DTM_DATA table and adds them to a third table (t_DTM_DATA_STAGING2).
This statement ran fine for weeks, but now it seems to hang every day. I ran sp_who2 and it says the status is RUNNABLE. I let it run for 5 or 6 hours the other day to see if it would finish, but it didn't. This statement is step 3 of a 6-step SQL Agent job that usually finishes in 30 to 45 minutes.
I'm not sure how to troubleshoot this problem. No other jobs are running at the time this job runs.
Could this SQL statement be written a better way?
Thanks for any help anyone can provide.
Jerid
SET QUOTED_IDENTIFIER ON
INSERT INTO [DTM].[dbo].[t_DTM_DATA_STAGING2]
    ([CP],[CO],[MAJ],[MINR],[LOCN],[DPT],[YEAR],[PD],[WK],[TRDT],[SYSTEM],[AMOUNT],[DESCRIPTION],[GROUP],[VENDOR],
     [INVOICE],[IDAT],[PO_NUMBER],[DDAT],[RCV#],[RDAT],[RSP],[EXPLANATION],[UPLOAD_DATE],[UPLOAD_USER],[UPLOAD_NAME],
     [RELEASE_DATE],[RELEASE_USER],[RELEASE_NAME],[TRTM])
SELECT s.CP, s.CO, s.MAJ, s.MINR, s.LOCN, s.DPT, s.[YEAR], s.PD, s.WK, s.TRDT, s.[SYSTEM], s.AMOUNT,
       s.[DESCRIPTION], s.[GROUP], s.VENDOR, s.INVOICE, s.IDAT, s.PO_NUMBER, s.DDAT, s.[RCV#], s.RDAT,
       s.RSP, s.EXPLANATION, s.UPLOAD_DATE, s.UPLOAD_USER, s.UPLOAD_NAME, s.RELEASE_DATE, s.RELEASE_USER,
       s.RELEASE_NAME, s.TRTM
FROM t_DTM_DATA_STAGING AS s
LEFT OUTER JOIN t_DTM_DATA AS d
    ON  s.TRTM = d.TRTM AND s.TRDT = d.TRDT AND s.PD = d.PD AND s.[YEAR] = d.[YEAR]
    AND s.DPT = d.DPT AND s.LOCN = d.LOCN AND s.MINR = d.MINR AND s.MAJ = d.MAJ
    AND s.CO = d.CO AND s.CP = d.CP
WHERE d.CP IS NULL
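For what it's worth, an equivalent formulation uses NOT EXISTS, which the optimizer can sometimes handle better than a LEFT JOIN ... IS NULL anti-join against a 251-million-row table; a covering index on t_DTM_DATA over the ten join columns is usually the bigger win. A sketch of the rewritten filter (column lists abbreviated, same as the original statement):

```sql
INSERT INTO [DTM].[dbo].[t_DTM_DATA_STAGING2] ([CP],[CO],[MAJ] /* ...same column list... */)
SELECT s.CP, s.CO, s.MAJ /* ...same select list... */
FROM t_DTM_DATA_STAGING AS s
WHERE NOT EXISTS
      (SELECT 1
       FROM t_DTM_DATA AS d
       WHERE d.TRTM = s.TRTM AND d.TRDT = s.TRDT AND d.PD = s.PD
         AND d.[YEAR] = s.[YEAR] AND d.DPT = s.DPT AND d.LOCN = s.LOCN
         AND d.MINR = s.MINR AND d.MAJ = s.MAJ AND d.CO = s.CO AND d.CP = s.CP);
```

Comparing the actual execution plans of the two forms (and checking for a usable index on the big table) would be the first troubleshooting step.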
We have our SQL Server 2005 hosted at a datacenter and have only SS Management Studio access (no term serv, no event log, etc.). Also, our hosting company has disabled the Database Mail feature. We have over 60 jobs running on a minute/hourly/daily basis that are critical for our business. It is super important for us to know if and when a job failed.
Given this scenario, how do you suggest we implement monitoring/notification of failed jobs?
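With only Management Studio access, one workaround is a watchdog job (or a scheduled query run from your own side) that reads job outcomes straight from msdb. A sketch that lists failed job steps from the last day (the 24-hour window is an assumption; 0 in run_status means failed):

```sql
SELECT j.name AS job_name, h.step_name, h.message, h.run_date, h.run_time
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE h.run_status = 0  -- 0 = failed
  AND h.run_date >= CONVERT(int, CONVERT(char(8), DATEADD(day, -1, GETDATE()), 112));
```

Your application could poll this from outside the datacenter and raise its own alerts, sidestepping the disabled Database Mail feature entirely.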
Can anyone please tell me what happens if I have asynchronous mirroring set up and my primary server physically dies and is unavailable?
1. Does automatic failover to the secondary server occur? 2. What does the database state show as - primary, disconnected? 3. What happens to my in-flight transactions - are they lost? 4. Can any data loss occur?
If I rebuild a new server, how do I sync my current primary back to the new one? In that case, is it just a failback?
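For reference, the mirroring state on the surviving partner can be inspected from the sys.database_mirroring catalog view; a sketch:

```sql
SELECT DB_NAME(database_id)          AS db,
       mirroring_role_desc,          -- PRINCIPAL / MIRROR
       mirroring_state_desc,         -- e.g. DISCONNECTED when the partner dies
       mirroring_safety_level_desc   -- OFF = asynchronous (high performance) mode
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;
```

In asynchronous (safety OFF) mode there is no automatic failover, so bringing the mirror online is a manual forced-service decision, with possible loss of any transactions not yet shipped.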
We have development and user acceptance (UA) servers. When I start a job on the development server, in Management Studio the Start Jobs window and the Job Activity window show "Executing" until the end of the job and finish with "Success" or "Failure".
But on the UA server, seconds after I start a job, the Start Jobs window comes up with "Success" or "Failure" and the Job Activity Monitor says "Idle", yet the job continues to log without any error message and updates the tables. So these two windows are not reliable at all. I should add that I have only job operator rights on the UA server.
We are seeing a high number of hadr_sync waits on our server during peak times after setting up an AOAG. We have set the availability mode to synchronous commit with automatic failover. Can we change these settings to asynchronous commit and manual failover whenever we need to, and change them back to synchronous commit during off-peak times? Are there any drawbacks to these changes?
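Switching modes is an online ALTER run on the primary; note that the failover mode has to be manual while a replica is asynchronous, so the order of the statements matters. A sketch (the AG and replica names are placeholders):

```sql
-- Drop to async commit + manual failover for peak hours
ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'SQLNODE2' WITH (FAILOVER_MODE = MANUAL);
ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'SQLNODE2' WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);

-- Revert during off-peak hours
ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'SQLNODE2' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);
ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'SQLNODE2' WITH (FAILOVER_MODE = AUTOMATIC);
```

The main drawback: while asynchronous, a failover is a forced failover and can lose committed transactions, and after switching back to synchronous you should wait for the secondary to report SYNCHRONIZED before relying on automatic failover again.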
so async cursor population is supposed to create the cursor and return the cursor id quickly, while the server works on async populating the results. For a keyset-driven cursor, SQL Server stores the key sets in tempdb, which it then uses to fetch data for cursor results. Anyway, this works fine for smaller tables, but I'm finding for large result sets, the async cursor population is very slow and indeed seems to approximate synchronous time. The wait stat I get while it is running (supposedly asynchronously) is TRANSACTION_MUTEX.
Example:

--enable async cursor
exec dbo.sp_configure 'cursor threshold', 0;
reconfigure;

declare @cursor int, @stmt nvarchar(max), @scrollopt int, @ccopt int, @rowcount int;

--example of giant result set
set @stmt = 'select * from sys.all_objects o1, sys.all_objects o2';
[code]...
Note that using the SQL "select * from sys.all_objects o1" is much faster than "select * from sys.all_objects o1, sys.all_objects o2". However, if cursor population is async, I'd expect the time to return a cursor id to be similar between the two.
I need to copy (not convert) a number of existing DTS jobs and the related DTS packages from a 2k machine to a 2k5 machine. I'm trying to avoid having to re-create the Jobs and type in all the Jobs steps in 2005.
(Yes, we will be converting the jobs, etc. but that's not an option right now)
I've had good results using the nifty technique from SQLDTS.com (http://www.sqldts.com/default.aspx?204) for the packages, but this doesn't quite work for the Job steps due to the new 'Step_UID' column in the sysjobsteps table in 2005.
I've tried a number of variations in DTS to populate this new column, which is of type uniqueidentifier, but still no joy.
Does anyone know of a workaround or a better method to get the Job copied over from 2000?
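One workaround that might be worth testing (treat it as an unsupported hack - writing directly to msdb system tables is not a documented interface): let DTS copy the step rows without step_uid and then backfill the uniqueidentifier column afterwards on the 2005 side:

```sql
-- Backfill the 2005-only step_uid column after copying the rows (unsupported hack)
UPDATE msdb.dbo.sysjobsteps
SET step_uid = NEWID()
WHERE step_uid IS NULL;
```

Whether SQL Agent is happy with arbitrary NEWID() values there is something to verify on a test server first; scripting the jobs out and recreating them via sp_add_jobstep is the supported path.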
Hello all. I am currently reviewing MS documentation regarding an upgrade to SQL 2005 (from 2000). Given the risks involved, we have opted for a side-by-side upgrade.

The "Upgrade Technical Resource Guide" indicates that system databases cannot be upgraded. This leaves the question: what to do with jobs, operators and alerts? Do these have to be recreated manually? Also, is there no easy way to migrate DB maintenance plans?

I will use MS's sp_help_revlogin stored proc to re-create logins, but what about the msdb content? I am presuming the structure is different, hence I cannot script the jobs in 2000 and apply the script to 2005. The Copy Database Wizard can copy jobs etc., but my past experience with it has not been brilliant, so I am reluctant to use it. We plan to use the backup-and-restore method to migrate the user databases. Thanks in advance.
I am trying to migrate a few of my SQL jobs to SQL 2005. Can I simply script them and execute the script in the new environment? I hope the script is forward compatible.
This problem happens on this particular machine and doesn't happen on the other machines I have, and I think it belongs in this forum because it is triggered by a specific setting in the connection string. I followed the steps in several threads, including Xinwei Hong's instructions for fixing error 26, but have had no luck.
The problem occurs when I call SqlConnection.Open().
I am using Windows XP and SQL 2000; the named instance is on the local machine, and my connection string is like
Whenever I change Asynchronous Processing=true to Asynchronous Processing=false, the problem is gone.
This problem also occurs when I use Visual Studio - Server Explorer - Add Connection dialog - Advanced button - Advanced Properties window where you can set Asynchronous Processing property to true.
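For reference, a connection string of the shape that triggers this would look something like the following (the server, instance and database names here are made up, not the poster's actual values):

```
Data Source=localhost\MyInstance;Initial Catalog=MyDb;Integrated Security=SSPI;Asynchronous Processing=true;
```

The only difference between the working and failing cases is that last key/value pair.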
I would like to create an SSIS package in 2005 and run it in 2000. Is there any way to do this? Or does SQL Server 2000 have a precursor to SSIS? I am trying to create a job to automatically catch and kill orphaned processes. -Kyle
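On SQL Server 2000, the orphaned-process cleanup doesn't need DTS or SSIS at all - a plain T-SQL step in a scheduled job can do it. A sketch that kills user connections sleeping for more than two hours (the threshold and filters are assumptions to adapt to your environment):

```sql
DECLARE @spid int, @cmd varchar(20);
DECLARE c CURSOR FOR
    SELECT spid
    FROM master.dbo.sysprocesses
    WHERE status = 'sleeping'
      AND spid > 50                                   -- skip system spids
      AND last_batch < DATEADD(hour, -2, GETDATE());  -- idle for 2+ hours
OPEN c;
FETCH NEXT FROM c INTO @spid;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @cmd = 'KILL ' + CAST(@spid AS varchar(10));  -- KILL takes a literal, so use dynamic SQL
    EXEC (@cmd);
    FETCH NEXT FROM c INTO @spid;
END
CLOSE c;
DEALLOCATE c;
```

(And for the record: SSIS packages built in 2005 cannot run on a 2000 server; 2000's precursor is DTS.)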
I was wondering if there is a SQL query I could run that would list all the current jobs on a SQL 2005 server. I would like to check that a job exists before I call it, but I'm having trouble finding a way to determine whether it exists.
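SQL Agent jobs are stored in msdb rather than the user database, so both the list and the existence check are simple queries against msdb.dbo.sysjobs; a sketch (the job name is a placeholder):

```sql
-- List all jobs on the server
SELECT name, enabled, date_created
FROM msdb.dbo.sysjobs;

-- Guard a call with an existence check
IF EXISTS (SELECT 1 FROM msdb.dbo.sysjobs WHERE name = N'MyNightlyJob')
    EXEC msdb.dbo.sp_start_job @job_name = N'MyNightlyJob';
```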
I have a web application with reports that take a long time to run, so I thought of making the report calls asynchronous (that is, execute the report and notify the user when the report is ready). I found an example supplied with SQL Server (AsynchronousRenderCS), but as I am a C# novice I don't understand everything in it.
Hi all, can anybody tell me whether they have had any luck creating and processing asynchronous cursors? According to the SQL 7.0 Books Online, after you create your async cursor, displaying @@CURSOR_ROWS should show either a negative number like -1245, meaning it is still loading, or a whole number, meaning it has finished loading. But every time I display this variable I get -1, which according to the documentation means I'm creating a synchronous cursor. I have modified the cursor threshold setting and declared my cursor INSENSITIVE, and I still can't get a cursor to be async.
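For reference, a minimal repro of the intended behavior: population is asynchronous only when the estimated row count exceeds the cursor threshold, so setting the threshold to 0 requests async population for every cursor. A sketch:

```sql
EXEC sp_configure 'cursor threshold', 0;
RECONFIGURE;

-- Cross join to make the population take long enough to observe
DECLARE big_cur INSENSITIVE CURSOR FOR
    SELECT * FROM sysobjects a CROSS JOIN sysobjects b;
OPEN big_cur;

-- A negative value other than -1 means the cursor is still being populated
SELECT @@CURSOR_ROWS AS rows_so_far;
```

If @@CURSOR_ROWS still reads -1, the engine has treated the cursor as dynamic, in which case the row count is never known up front.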
What I am looking to do is have a stored procedure begin a dialog with my request service. With that dialog established, my stored procedure sends 50 request messages, one for each of the 50 United States. I want these to be processed asynchronously by a procedure that is called on activation of the request queue. In that activation procedure, the request is processed against the respective state and a response message is sent to the response service (to the response queue). I want to be able to tie these request and response messages together with some type of shared identifier. The requests don't need to be processed in any specific order and don't need any fancy locking mechanism via conversation groups, since they are meant to be processed asynchronously. What is the best approach? Do I need to create 50 separate queues and open a dialog with each? If that is the route to take, would it be a performance hit?
My goal is to have all 50 states processed at once, each finishing with a response message sent to the response queue. The initiating procedure, after sending these 50 requests, would then spin and wait for all 50 to complete, be it a response, an error, or a timeout. When all 50 have returned, the procedure would merge the results and return. So as you can see in my scenario, I don't care when each state completes, as they do not affect the outcome, nor do they access any of the same resources while being processed.
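One pattern that fits this is a single request queue with 50 dialogs, one per state, using the conversation handle (or a state code carried in the message body) as the correlation id; the activation procedure can be declared with MAX_QUEUE_READERS = 50 to get the parallelism, so 50 separate queues should not be necessary. A sketch of the send side - the service, contract, message type and StateCodes table names are all placeholders:

```sql
DECLARE @state char(2), @h uniqueidentifier;
DECLARE states CURSOR FOR SELECT code FROM dbo.StateCodes;  -- 50 rows, one per state
OPEN states;
FETCH NEXT FROM states INTO @state;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- One dialog per state on the same pair of services
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [RequestService]
        TO SERVICE 'ProcessingService'
        ON CONTRACT [StateRequestContract]
        WITH ENCRYPTION = OFF;

    -- The conversation handle @h ties each request to its response
    SEND ON CONVERSATION @h
        MESSAGE TYPE [StateRequest] (CAST(@state AS varbinary(10)));

    FETCH NEXT FROM states INTO @state;
END
CLOSE states; DEALLOCATE states;
```

The initiator then waits with RECEIVE on the response queue until it has collected 50 responses (or errors/timeouts), matching each by conversation handle.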
We're looking for a solution to an audit trail issue. Our business people want to track each column value that changes (before and after images) for every table in our database, as well as the user id that changed the data and when it was changed. Are there any methods other sites have employed to track this level of detail without resorting to triggers on each table, and has anyone worked out a way for this audit trail writing to be handled asynchronously within SQL Server?
I tried to use xp_cmdshell to execute a VBScript from a trigger, and it works, but I noticed that the trigger waits until the VBScript has terminated. I do several things in the VBScript, so I can't wait for it to end. Is there a way to avoid waiting until the end - in practice, to run the VBScript in asynchronous mode?
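One workaround is to let cmd.exe launch the script detached via start, so xp_cmdshell returns as soon as the process is spawned (the script path is a placeholder; also worth considering is queuing the work via a table plus a job, or Service Broker, so the trigger never shells out at all):

```sql
-- 'start /b' detaches the process; no_output keeps xp_cmdshell from collecting output
EXEC master..xp_cmdshell
    'cmd /c start /b cscript //nologo "C:\Scripts\MyScript.vbs"',
    no_output;
```

Note this keeps the trigger synchronous with the process launch but not with the script's execution; the script's success or failure is then invisible to the transaction.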
Hi - I've done some searching, but I'm not finding exactly what I need. I am using an asynchronous script component as a lookup, since the table I am looking up against requires an ODBC connection. Here is what my data looks like:
From an Excel connection:
Order Number
123
234
345
The table I want to do a lookup on has multiple rows for each order number, as well as a lot of rows that aren't in my first table:
Order Number Description
123 Upgrade to System
123 Freight
123 Spare Parts
234 Upgrade to System
234 Freight
234 Spare Parts
778 Another thing
889 Yet more stuff
etc. My desired result would be to pull all the items from table two that match on Order Number from table one. My actual result from the script I have is a single (random) row from table two for each item in table one. So my current results look like:
Order Number Description
123 Freight
234 Freight
345 Null
And I want:
Order Number Description
123 Upgrade to System
123 Freight
123 Spare Parts
234 Upgrade to System
234 Freight
234 Spare Parts
345 Null
etc.... Here is my code, courtesy of half a dozen samples found here and elsewhere...
odbcCmd = New OdbcCommand( _
    "SELECT F4211.SDDSC1, F4211.SDDOCO FROM DB.F4211 F4211 WHERE F4211.SDDOCO = ?", _
    odbcConn)
odbcParam = New OdbcParameter("1", OdbcType.Int)
odbcCmd.Parameters.Add(odbcParam)
End Sub

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    Dim reader As Odbc.OdbcDataReader
    odbcCmd.Parameters("1").Value = Row.SO
    reader = odbcCmd.ExecuteReader()   ' the separate ExecuteNonQuery call was redundant
    ' Loop over every matching row, not just the first one
    While reader.Read()
        With Output0Buffer
            .AddRow()
            .SDDSC1 = reader("SDDSC1").ToString
            .SONumb = Row.SO
            .SOJDE = CDec(reader("SDDOCO"))
        End With
    End While
    reader.Close()
End Sub

Public Overrides Sub ReleaseConnections()
    connMgr.ReleaseConnection(odbcConn)
End Sub

End Class
I just don't know what I need to do to get every row from F4211 where SDDOCO matches Row.SO instead of a single row. Any ideas or help? Oh, the reason I am starting with my Excel connection is that the sheet lists the orders I need detailed data for and is only a few hundred rows; F4211 is really, really, really big.
I have also worked out an alternate way to do this using Merge Join tasks, but then my DataReader source goes off and fetches 300,000 rows from F4211 before my final result set of about 1,200 rows. That just feels like a bad approach to me... or am I being over-cautious? I'm a newbie (if you couldn't already tell), so guidance is appreciated.
I noticed that the current SQL CE driver does not offer support for the APM (Asynchronous Programming Model). Are there any plans to add this in the future? In light of the lack of APM functionality, does anyone have any ideas or thoughts on how async operations could be done, or whether they are even needed, in the context of applications that use SQL CE?
If you have a data file and you only want CERTAIN rows to pass to the destination (e.g. a table), and you are using a script component to accomplish this, is that a synchronous or an asynchronous transformation?
Q. And how do you assign the values to the output? Do you have to create output columns, or not?
I am very, very confused right now. I can't seem to find a decent answer to what is a very basic question, either in my SSIS book or in the documentation. Perhaps it is so basic that the question doesn't seem valid? I don't know. But I just don't understand this at all.
I have a tough situation: trying to execute multiple instances of the same package in order to reduce the process load times.
Introduction:
We have a src system which gets 7,000 tiny files of 72 rows each, and the SSIS package uses a For Each Loop task to iterate through each file and load its data. We have a process table that keeps track of the status of the SRC process and the ETL load process.
When the src process starts, for each row in the process table it assigns the row status 'Assigned', brings in the flat file of 72 rows, and updates the status to 'Complete'. When the ETL starts, for each file in the shared directory it assigns the status 'Running', loads the data, and updates the status to 'Complete'; then the file is moved to a processed folder. It's like a bridge table between the two processes.
Bridge table format: Table_PK (identity col); (DATE, City) is the natural key - it is a cross join of date and city, so the process gets one file every day per city. Both initial statuses are 'Queued'.
Since the bridge table is prepopulated, the src process (which is on Unix) starts multiple threads and gets all the files within 30 minutes. But SSIS is a serial process and takes 2-3 hours to load the files; most of the time is taken by file operations, and SSIS can only run one thread. Future plan:
So to bring down the processing times, we want to drive the SSIS packages from the bridge table instead of from the share folder: i.e., for each row in the bridge where SRCProcess is 'Complete' and ETLProcess is 'Queued', start the SSIS process for that src file. Since our src files are named "CityName_Date.csv", this will not be difficult. We want to start multiple threads so the load process will be fast. Implementation:
In the T-SQL loop we wanted to use xp_cmdshell to call the DTEXEC utility with the src file name as a variable. But DTEXEC is a synchronous process, and I am looking for a way to invoke it asynchronously - just like "nohup executionscript &" on Unix. Any ideas on how to implement this? I looked on the web, and there is something about Service Broker, but all it covers is sending messages and queuing. Any light on how to implement this on Windows Server would be a life saver. Thanks a lot, Venkat
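In the same spirit as nohup ... &, cmd.exe's start can detach DTEXEC so that xp_cmdshell returns immediately. A sketch - the package path, variable name and file name are placeholders, and the /SET property path follows DTEXEC's documented syntax:

```sql
DECLARE @file varchar(100), @cmd varchar(500);
SET @file = 'Atlanta_20080101.csv';  -- would come from the bridge-table loop
SET @cmd = 'cmd /c start /b DTEXEC /F "D:\Packages\LoadCityFile.dtsx"'
         + ' /SET \Package.Variables[User::SrcFile].Properties[Value];"' + @file + '"';
EXEC master..xp_cmdshell @cmd, no_output;
```

The trade-off is that fire-and-forget launches give you no completion status back in the loop, so the bridge table's ETLProcess column (updated from inside each package) becomes the only record of success or failure; Service Broker with an activated procedure and MAX_QUEUE_READERS set to your desired degree of parallelism is the more robust route.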
I am evaluating the possibility of replicating a database over a network to our HQ from the control site (one way). The original database is on SQL server.
The database is likely to grow to many terabytes so we would be using transactional replication.
The table to be replicated receives about 500 records per second. The table will probably consist of a record key (8-byte int), site ID (4-byte int), reading (8-byte float), and timestamp (8-byte timestamp). All up, 28 bytes plus whatever overhead exists.
MINOR DETAIL: The HQ's copy should be preferably no more than 1 hour behind the control site's. This would be a long-term setup that would last for many years. Our link is currently about 2 MB/s, and is fairly reliable.
QUESTIONS: I'm guessing that (bytes/record) * (records/second) won't be the whole story. Does anyone have an estimate of the average data efficiency factor for transactional replication? How would I go about calculating how much bandwidth would be needed? Is there a formula hiding somewhere?
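As a rough lower bound, the raw payload here is small; the real uncertainty is the replication overhead multiplier:

```
28 bytes/row x 500 rows/s = 14,000 bytes/s ~= 0.11 Mbit/s
```

Even allowing a pessimistic 3-5x factor for distribution commands, log-reader traffic and TCP framing (that multiplier is an assumption, not a published figure), the stream stays well under a 2 MB/s link. Measuring the traffic between distributor and subscriber during a representative test run is the only reliable way to pin the factor down for your schema.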
Greetings. I have a requirement for a SQL Server 2005 database to notify 3rd-party applications of a table change. As it stands now, 3rd-party applications access the database via a data access layer (DAL) DLL (C#). I'd like to somehow implement an asynchronous event-notification scheme via the same DAL to these clients.
Can someone offer a clever way to implement such events?
Broker Services? I am under the impression that SSBS is typically deployed when databases need to communicate with one another.
I'm porting an existing SQL 2000 DTS job over to a SQL 2005 server running SSIS.
Background: The job loads data into an empty work table and performs some work before clearing out the work table. This job runs every minute.
Question: if the job happens to take longer than a minute, does SSIS create a second instance of the job? Or does it do what DTS did and wait for the next scheduled iteration?
Concern: I need to know because there would be key constraint violations if another instance of the job started before the working table was cleared out.
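For what it's worth, SQL Agent will not start a second instance of a job that is still running, regardless of whether the step runs DTS or SSIS; the next scheduled run is simply skipped while the current one executes. You can also check from T-SQL whether a job is mid-run before kicking anything off. A sketch (the job name is a placeholder; strictly, you would also filter on the latest Agent session recorded in msdb.dbo.syssessions):

```sql
SELECT j.name
FROM msdb.dbo.sysjobactivity AS a
JOIN msdb.dbo.sysjobs AS j ON j.job_id = a.job_id
WHERE j.name = N'MinuteLoadJob'
  AND a.start_execution_date IS NOT NULL
  AND a.stop_execution_date IS NULL;   -- a row here means the job is still running
```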
Hi, I'm new to this list, and after many days of trying to figure this out, here we go. Can you please tell me where I'm going wrong in my asynchronous script component? I'm almost there, but using the variable iReadingCount to decide when to add the row is not quite right. There has to be a better way! Thanks in advance, Dave
I have to process data from a staging table which has imported data in a structure like this: each line has a tag, which is a field name like <MyName>, followed by the value. <Advice Note Number> is the tag that tells me it is the start of the next record; the only gotcha is that there may be up to six <Contractor Billing> tags in one record.
Tag Val1 Val2
<Advice Note Number> 1374239
<Customer Names> My Name
<Customer Address> My Address
<Completion Date Time> 2005/11/25 16:30:00
<Service Order Number> 123456
<Phone Number> 999535431
<Telephone Exchange> MNG
<Contractor ID> Fabrikan
<Service Order Type> F3
<Contract ID> 41
<Comments> 1 2
<Contractor Billing> 165 1
<Contractor Billing> 167 1
<Customer Signature> NO
<Advice Note Number> 1374240
<Customer Names> My Name
<Customer Address> My Address
<Completion Date Time> 2005/11/25 16:30:00
<Service Order Number> 123456
<Phone Number> 999535431
<Telephone Exchange> MNG
<Contractor ID> Fabrikan
<Service Order Type> F3
<Contract ID> 41
<Comments> 1 2
<Contractor Billing> 165 1
<Customer Signature> NO
So I need an asynchronous script component (setting SynchronousInputID = 0 turns your component into an asynchronous component, thus giving you access to the output buffer), because I need to map this data to a structure like this:
Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
Imports Microsoft.SqlServer.Dts.Runtime.Wrapper

Public Class ScriptMain
    Inherits UserComponent

    Dim iReadingCount As Integer = 0
    Dim Comments1 As String
    Dim Comments2 As String
    Dim Comments3 As String

    Dim AdviceNoteNumber As Integer
    Dim CustomerNames As String
    Dim CustomerAddress As String
    Dim ArrivalDateTime As Date
    Dim CompletionDateTime As Date
    Dim ServiceOrderNumber As String
    Dim PhoneNumber As String
    Dim TelephoneExchange As String
    Dim ContractorID As String
    Dim ServiceOrderType As String
    Dim ContractID As String
    Dim Comments As String
    Dim ContractorBilling As String
    Dim ContractorBillingQuantity As Integer
    Dim ContractorBilling2 As String
    Dim ContractorBillingQuantity2 As Integer
    Dim ContractorBilling3 As String
    Dim ContractorBillingQuantity3 As Integer
    Dim ContractorBilling4 As String
    Dim ContractorBillingQuantity4 As Integer
    Dim ContractorBilling5 As String
    Dim ContractorBillingQuantity5 As Integer
    Dim ContractorBilling6 As String
    Dim ContractorBillingQuantity6 As Integer
    Dim ApprovalCode As String
    Dim TelecomRejectReason As String
    Dim ContractorRejectResponse As String
    Dim CustomerSignature As String
    Dim ReceivedOnTime As String

    'Public Overrides Sub CreateNewOutputRows()
    'End Sub

    Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
        Try

            If StrConv(Row.Tag, VbStrConv.ProperCase) = "<Advice Note Number>" Then
                AdviceNoteNumber = CInt(Trim(Row.Val1))
                'Increase the reading count by 1
                iReadingCount += 1

            ElseIf StrConv(Row.Tag, VbStrConv.ProperCase) = "<Comments>" Then
                Comments1 = Left(Trim(Row.Val1), 160)
                Comments2 = Left(Trim(Row.Val2), 160)
                Comments3 = Left(Trim(Row.Val3), 160)
                'One line
                If Len(Comments1) > 1 And Len(Comments2) = 1 And Len(Comments3) = 1 Then
                    Comments = Comments1
                End If
                'Two lines
                If Len(Comments1) > 1 And Len(Comments2) > 1 And Len(Comments3) = 1 Then
                    Comments = Comments1 & " " & Comments2
                End If
                'Three lines
                If Len(Comments1) > 1 And Len(Comments2) > 1 And Len(Comments3) > 1 Then
                    Comments = Comments1 & " " & Comments2 & " " & Comments3
                End If