I have created 2 packages... one is the parent package, which contains a 50-iteration loop running a secondary package on each iteration... I have reached the following conclusion:
My package takes an average of 5 seconds from the time it finishes one iteration to the time it starts the next... after about 30 iterations, the average time between end and start increases significantly, to about 12 seconds or even more...
All packages have DelayValidation set to false and receive several variables from the parent package... As for the logging, it is done to files based on a path coming from a variable in the parent package.
To execute the parent package I am using dtexecui.exe, and I consider this behavior rather strange... has anyone experienced the same? Can anyone test this?
My environment is 4 x64 processors with 8 GB of memory, so I would guess it's good enough to get 0 seconds from end to start.
I'm seeing some strange behavior from a stored procedure of mine. It essentially grabs a bunch of rows using a fairly simple JOIN. Here's the FROM statement:
FROM Payment PY (NOLOCK)
JOIN (SELECT DISTINCT PY.AccountPaymentId,
             ROW_NUMBER() OVER (ORDER BY PY.AccountPaymentId ASC) AS RowNum
      FROM Payment PY (NOLOCK)) AS SQ
  ON (SQ.AccountPaymentId = PY.AccountPaymentId)
INNER JOIN Payee PE ON PE.PayeeId = PY.PayeeId
INNER JOIN Party PT ON PE.PartyId = PT.PartyId
INNER JOIN Distribution DS ON PY.DistributionId = DS.DistributionId
LEFT OUTER JOIN Account AC ON DS.AccountId = AC.AccountId
INNER JOIN clm CM ON PE.clm_no = CM.clm_no
LEFT OUTER JOIN PartyAddress PA ON PY.PartyAddressId = PA.PartyAddressId AND PT.PartyId = PA.PartyId
WHERE RowNum BETWEEN (((@Page * @PageSize) - @PageSize) + 1) AND ((@Page * @PageSize) - @PageSize) + @PageSize
  AND ((@PayeeName IS NULL) OR (PT.[Name] LIKE '%' + @PayeeName + '%'))
  AND ((@AccountId IS NULL) OR (AC.AccountId = @AccountId))
  AND ((@DistributionId IS NULL) OR (DS.DistributionId = @DistributionId))
  AND ((@PaymentDate IS NULL) OR (DATEADD(day, DATEDIFF(day, 0, PY.PaymentDate), 0) = DATEADD(day, DATEDIFF(day, 0, @PaymentDate), 0))) -- Ignores the time
  AND ((@PaymentNumber IS NULL) OR (PY.AccountPaymentId = @PaymentNumber))
  AND ((@IsReconciled IS NULL) OR (PY.ReconciledInd = @IsReconciled))
  AND ((@AmountIssued IS NULL) OR (PY.PaymentAmount = @AmountIssued))
  AND ((@AmountPaid IS NULL) OR (PY.AccountPaidAmount = @AmountPaid))
  AND ((@IssueStatus IS NULL) OR (PY.PaymentStatusEnumItemId = @IssueStatus))
  AND ((@AccountStatus IS NULL) OR (PY.AccountStatusEnumItemId = @AccountStatus))
  AND ((@IsReissued IS NULL) OR (PY.ReissuedInd = @IsReissued))
ORDER BY AccountPaymentId ASC
When I pass a 1 for the @IsReconciled parameter, I get the right number of rows back - 9779. But when I pass a 0 (zero), I get no rows back, although there are 222 rows which satisfy the condition.
Is there something I'm overlooking (I don't think I am...)? I don't know why 1 works and 0 doesn't...
FYI - the @IsReconciled parameter is set to NULL at the outset of the procedure.
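A quick diagnostic worth running outside the procedure (a sketch; it assumes direct access to the Payment table) is to count rows per ReconciledInd value, to confirm what the same NOLOCK scan actually sees:

SELECT PY.ReconciledInd, COUNT(*) AS RowCnt
FROM Payment PY (NOLOCK)
GROUP BY PY.ReconciledInd
ORDER BY PY.ReconciledInd;

One thing to note from the query as posted: the paging subquery assigns RowNum over every payment before the filters are applied, so the 222 matching rows are only returned if they also happen to fall inside the requested RowNum window.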
We have an interesting problem. We are attempting to migrate from SQL 2000 to SQL 2005. The schema we have is exactly the same, and the new 2005 box is more powerful than our 2000 box.
Here is our schema:
tbl_Items: ItemID int PK, ReferenceID int, SessionID varchar(255), StatusID int
tbl_ItemsStatus: StatusID int PK, IsInternalStatus bit
There is an index on (ReferenceID, SessionID, StatusID) and one on (SessionID, StatusID).
This is the query:
DECLARE @referenceid INTEGER
SET @referenceid = 1019
SELECT MAX(i2.itemid)
FROM tbl_Items i2 (NOLOCK)
JOIN tbl_ItemsStatus s (NOLOCK) ON i2.StatusID = s.StatusID
WHERE s.IsInternalStatus = 0
  AND i2.referenceid = @referenceid
  AND i2.sessionid IN (
      SELECT i3.sessionid
      FROM tbl_Items i3 (NOLOCK)
      WHERE i3.referenceid = @referenceid
        AND i3.status <> 7 AND i3.status <> 8 AND i3.status <> 10
        AND i3.itemid IN (
            SELECT MAX(i4.itemid)
            FROM tbl_Items i4 (NOLOCK)
            WHERE i4.referenceid = @referenceid
            GROUP BY i4.sessionid
        )
        AND i3.itemid NOT IN (
            SELECT MAX(i7.itemid)
            FROM tbl_Items i7 (NOLOCK)
            WHERE i7.referenceid = @referenceid
              AND i7.SessionID IN (
                  SELECT i5.SessionID
                  FROM tbl_Items i5 (NOLOCK)
                  WHERE i5.status <> 11
                    AND i5.referenceid = @referenceid
                    AND i5.itemid IN (
                        SELECT MAX(i6.itemid)
                        FROM tbl_Items i6 (NOLOCK)
                        WHERE i6.referenceid = @referenceid
                          AND i6.status IN (7, 11, 8)
                        GROUP BY i6.sessionid
                    )
              )
            GROUP BY i7.SessionID
        )
  )
GROUP BY i2.sessionid
We know this query is pretty bad and can be optimized. However, if we run this query as-is on 2005 it takes about 2 hours... if we run the exact same query on 2000 it takes 9 seconds.
So this query takes 2 hours on 2005... however, if we omit either the s.IsInternalStatus = 0 predicate or the i2.referenceid = @referenceid predicate, it takes about 9 seconds.
Why would this be? It makes no sense that omitting one of those WHERE conditions would take the query from 2 hours down to 9 seconds. We know it's a bad query... but this doesn't make sense.
Hi! I'm studying for my MCSE 70-228 certification and I'm trying some things with backing up transaction logs and shrinking them. Here's what I do (there is no activity in the database, by the way). I have a transaction log of 1792 KB and I run the following commands:

BACKUP LOG TestDB TO TestDBBackup
DBCC SHRINKFILE ('TestDB_Log', 0)

The transaction log is now 1280 KB. I run the same commands again, and now my transaction log is 1024 KB... Any idea why it didn't shrink to 1024 KB the first time? Thanks! Jeff
Guys, I have some data in an Excel sheet. Some of the columns have NULL values for a certain number of rows until the data starts. What makes it so weird is that when previewing this in the wizard, the whole column is filled with NULL values when the number of leading NULLs is quite large; when there are only a few NULLs, the column works fine!! Can anyone explain this? We tried some manual work, cutting some of the rows from below and putting them at the start, and it worked! This behavior is so strange, though. Shiko
I was able to successfully create a database maintenance plan for SQL Server 2000 transaction log shipping for a few databases a few weeks ago. Yesterday, I created a few more, but to my surprise, I can no longer do it. I can create a maintenance plan, but the job it creates does not start, even if I force it to start. I did exactly the same thing as before (I document everything I do), but no luck.
I have a script component in a data flow that is exhibiting some strange behavior. In the PreExecute event of the data flow, I stuff a recordset into a variable that is declared at the data flow scope. Within the data flow, I use a script component to read in the data from the recordset.
Example:
' Create a data adapter and a DataTable to receive the recordset
Dim olead As New Data.OleDb.OleDbDataAdapter
Dim dt1 As New System.Data.DataTable
Dim row As System.Data.DataRow

' Fill the DataTable from the ADO recordset held in the package variable
olead.Fill(dt1, Me.Variables.rsIntRateStrata)
If I display the count of the records in the data table dt1, it shows 42 rows, which is correct. Run the package, everything runs as expected. So far, so good.
Now, I set up another source/destination within the same data flow, as well as a script component between them, same as the first flow described above. Now my data flow has two parallel flows (different sources and destinations). I copy the same script logic from the first flow into the second. Run the package: no errors, everything is fine... except when I inspect the data, it looks like the transformation isn't working correctly in the second script.
So I display a message box from each script component during run time. The first component displays 42 records, while the second displays 0. Same variable, same data flow.
So I delete the first (original) flow from my data flow. Run the package again. Now the messagebox says 42.
What is happening here? Do I have to create two variables to duplicate the same recordset if I need to use it multiple times within the same data flow? Is this a bug?
BEGIN
    DECLARE @datefin_flag datetime, @strip datetime

    SELECT @strip = DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)

    SELECT @datefin_flag = DATE_FIN_PERIODE_FISCALE
    FROM DM_LKP_CALENDRIER_PERIODE_F
    WHERE DATE_DEBUT_PERIODE_FISCALE < @strip
      AND DATE_FIN_PERIODE_FISCALE = @strip

    --select @datefin_flag
    --select @strip

    IF (@datefin_flag != @strip)
        RAISERROR('You cant run this', 16, 1)
END
Well, this query should hit the RAISERROR, but it completes successfully even though today's date is not the same as the date in the database. If you SELECT @datefin_flag it returns NULL, and if you SELECT @strip it brings back today's date. How can NULL be treated as equal to today's date?
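For reference, a minimal sketch of the comparison involved: comparing NULL with anything yields UNKNOWN rather than TRUE, so an IF built on != never fires when one side is NULL.

DECLARE @flag datetime   -- never assigned, so it stays NULL
IF (@flag != GETDATE())
    PRINT 'branch taken'     -- never printed: NULL != date evaluates to UNKNOWN
ELSE
    PRINT 'branch skipped'   -- this prints instead

An explicit check such as IF (@datefin_flag IS NULL OR @datefin_flag != @strip) is the usual way to make the guard fire when the lookup returns no row.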
Hi, I want to know if I can execute a set of batch statements (basically CREATE statements) during the installation of our product. I have used Access with an ADP connected to MS SQL Server 7.0, and I am trying to merge the database and table creation with the installation procedure. Please tell me how I can go ahead with this. (I have tried using InstallShield and some other similar products, without much luck.) Thanks in advance, Mangala
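A sketch of one common route (the server name, login, and file names here are placeholders): have the installer shell out to osql, which ships with SQL Server 7.0, and feed it a script file containing the CREATE statements.

osql -S MyServer -U sa -P password -b -i create_objects.sql -o create_objects.log

The -b switch makes osql return a nonzero exit code on errors, so the installer can check the exit code (or the log file) to decide whether the database setup succeeded.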
Is the order of execution guaranteed to go from top to bottom in a transaction that has multiple statements like below?
BEGIN TRAN T1;
UPDATE table1 ...;
UPDATE table2 ...;
SELECT * FROM table1;
UPDATE table3 ...;
COMMIT TRAN T1;
How about here?
BEGIN TRAN T1;
UPDATE table1 ...;
BEGIN TRAN M2;
UPDATE table2 ...;
SELECT * FROM table1;
COMMIT TRAN M2;
UPDATE table3 ...;
COMMIT TRAN T1;
How can I guarantee that statements will be executed from top to bottom in a transaction batch like the above? I am not interested in errors in individual statements; I just want the whole thing either to execute fully from top to bottom or not at all.
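Statements within a batch do execute sequentially; the part that is not automatic is making a failure undo the earlier statements. A common pattern for all-or-nothing behavior (a sketch, assuming SQL Server 2005 or later; the UPDATE placeholders are carried over from the question):

SET XACT_ABORT ON;   -- any run-time error aborts and rolls back the transaction
BEGIN TRY
    BEGIN TRAN T1;
    UPDATE table1 ...;
    UPDATE table2 ...;
    SELECT * FROM table1;
    UPDATE table3 ...;
    COMMIT TRAN T1;
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
END CATCH

Note that the inner BEGIN TRAN M2 / COMMIT TRAN M2 in the second example does not create an independent transaction; nested transactions in SQL Server only increment @@TRANCOUNT, and only the outermost COMMIT makes the work permanent.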
I have an SSIS project that's been running fine for months, up until yesterday.
There's a master package that calls other packages, and when I run it now in Visual Studio, I get a message box after each of its child packages runs. The message states:
TITLE: Microsoft Visual Studio
------------------------------
The designer window cannot be closed while a package is running. Stop the debugger before attempting to close the window.
------------------------------
BUTTONS: OK
------------------------------
This seems to be causing issues, because VS seems to hang on my machine at some point when running master packages, and I think it might be due to a message box.
We have a number of customers using the same database and ASP application. We need to run a script that modifies the database to the latest version. If the script runs twice it will cause problems, so we need to build in a fail-safe way of stopping it from running a second time.
To do this we can update a version table at the end of the script. At the start of the script we check that the version is the previous one. If it isn't, we need to abort the entire script. The problem is that the RETURN statement will only exit the current batch, and execution of the script will continue from the next GO statement.
Is there any way to stop a multi-batch script from running if a certain condition is met in one batch, in such a way that the remaining batches do not run?
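One technique that fits this multi-batch constraint (a sketch; the version table and version numbers are placeholders): SET NOEXEC ON persists across GO, so every later batch is compiled but not executed.

IF NOT EXISTS (SELECT * FROM dbo.SchemaVersion WHERE Version = 41)  -- placeholder check
BEGIN
    PRINT 'Wrong schema version - aborting upgrade script.';
    SET NOEXEC ON;   -- all remaining batches are skipped
END
GO

-- upgrade statements ...
GO

UPDATE dbo.SchemaVersion SET Version = 42;  -- placeholder
GO

SET NOEXEC OFF;  -- last line of the script: restore normal execution
GO

Because the later batches are still parsed and compiled, syntax errors are reported, but no statements run once NOEXEC is on.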
I have a package that has a FOREACHLOOP container. Inside the container is a SCRIPT task that runs a stored procedure and writes the output of the sp to a file.
That part works fine.
However, if I add a second script task inside the FOREACHLOOP container, I get error messages when running the second script task.
The second script task is identical to the first script task, except for the connection information. That is, it's doing the same thing, but running the sp on another server.
I tried removing the second script task from the FOREACHLOOP container and putting it in its own FOREACHLOOP container, but it still gives the same error.
This is the error I'm receiving:
Error 30009: Reference required to assembly 'System.Xml, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' containing the implemented interface 'System.Xml.Serialization.IXmlSerializable'. Add one to your project. Line 7 Column 22 through 30
I have an SSIS package with multiple large lookups that run without memory restriction. When I run the package manually from SSMS, on the same server where it runs automatically under the job agent, the package errors out when the server's memory is depleted by loading the large lookup reference data. One of the messages I get is "An out-of-memory condition prevented the creation of the buffer object."
However, the package runs successfully when it runs automatically under the job agent.
I was curious as to why the above happens. Is that a bug, or is the run-time behavior different in these two environments by design?
I'm trying to run a simple batch file from a SQL job (SQL 7.0 SP4). No errors are received, but the job does not complete. When I try to run the job manually, I get a message stating "Error 22022: SQLServerAgent Error: Request to run job my_job (from User DomainAdminUser) refused because the job is already running from a request by Schedule 127 (Schedule 1)."
The services are running as a domain admin account.
I need to create a program that will run Client Access to download data from the AS400 to a flat file, then run SQL DTS to import the data into a table for use by another software package.
I've created a .bat file that does that using the CA RTOPCB command and the SQL DTSRUN utility. The problem is that it appears I need to first check whether the table exists in SQL and, if it does, delete it. Otherwise, rather than overlaying the existing data in the table, it adds to it.
Is there a way to issue a SQL DROP TABLE from DOS? Or am I missing something that could be done in SQL?
The DTSRUN is using a local package and CA is using a transaction request.
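One way to issue the DROP from the batch file (a sketch; the server, database, login, and table names are placeholders, and osql ships with SQL Server 7.0):

osql -S MyServer -d MyDatabase -U sa -P password -Q "IF OBJECT_ID('dbo.AS400Data') IS NOT NULL DROP TABLE dbo.AS400Data"

If the table should survive between loads, TRUNCATE TABLE dbo.AS400Data empties it instead of dropping it, which avoids recreating it on every run.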
I have been sent a large number of scripts to update one of the databases I look after. The problem is that there are hundreds of individual script files, all of which need to be run.
Is there a script or tool I could use that would run all of the .sql files in a certain directory and update the database?
I can't seem to find anything built into SQL Server. The job scheduler would seem to be the closest thing, but it would still require you to select each script to be run.
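A batch-file loop is one common workaround (a sketch; the server, database, and directory are placeholders, -E uses Windows authentication, and inside a .bat file the loop variable is doubled as %%f):

for %%f in (C:\scripts\*.sql) do osql -S MyServer -d MyDatabase -E -i "%%f" >> C:\scripts\run.log

Run directly from a command prompt rather than a .bat file, the same line uses %f instead of %%f. The log file then holds the output of every script in sequence.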
Is it possible using T-SQL to run a batch file located on a different server? I.e., PC1 has SQL Server on it, PC2 has the batch file, and I need to run the batch file stored on PC2 on PC1.
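One possibility (a sketch; it assumes xp_cmdshell is available, the SQL Server service account on PC1 can read the share, and the UNC path is a placeholder):

EXEC master..xp_cmdshell '\\PC2\share\mybatch.bat';

Note the commands in the batch file still execute on PC1, which is what the question asks for; the file is merely stored on PC2.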
Hi, this code runs fine from within a SQL Server query window. I want to put this code in a batch file and run the batch file... it did not work. Someone told me the revised code is:

bcp sdnetpro..nbtorder11 out d:\data\btorder11.txt /c /t,/r -Sservername -Usa -Ppassword

This is not working. If anyone knows how to fix this, I would appreciate it. One more thing: how can I confirm that the bcp has completed successfully? Do I have to create a log file, and if so, what is the code to create this log file? Thanks, Ali
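A sketch of one way to confirm the outcome (the paths are placeholders; -o writes bcp's screen output to a file, and bcp sets a nonzero ERRORLEVEL when it fails):

bcp sdnetpro..nbtorder11 out d:\data\btorder11.txt /c /t,/r -Sservername -Usa -Ppassword -o d:\data\bcp_out.log
if errorlevel 1 echo bcp failed >> d:\data\bcp_out.log

On success the log file contains the usual rows-copied summary; on failure it contains the error text plus the appended marker line.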
I want to keep applications off of my database server, so I have set up an application server (APPServer1). On APPServer1 I have a batch file that bcp's data from DBServer1 into DBServer2 and is passed the server name of DBServer2. On DBServer1 I have mapped a drive to the directory on APPServer1 and have created a task to run the batch job and pass the server name. So here's my problem: when the scheduler runs the job, the bcp to DBServer2 fails because it cannot find DBServer2. When I execute the exact same command line in a DOS box on DBServer1, the bcp works fine. I have verified that the server name is being passed correctly to the batch job in both methods.
I have created a master controller package which runs as follows:
deletes all the log files -> deletes a few flat files on different drives -> preprocess task (Execute Package Task) -> C# executable (Execute Process Task) -> postprocess tasks (Execute Package Tasks)
I need to create a task just before the preprocess task that asks the user whether he wants to run a particular batch file before proceeding to preprocess. If the user says yes, it should run the batch file followed by the preprocess, C#, and postprocess tasks; otherwise it should go directly to preprocess, C#, and postprocess, skipping the batch file task.
Batch execution is terminated because of debugger request. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: Batch execution is terminated because of debugger request. Does anyone know a solution or the cause of this exception? Please let me know, friends.
Hello, I am looking for some way to run a SQL query from a batch file on the SQL server. The query needs a user-entered parameter; if that can be accommodated it would be great. I read about the osql command and tried to run it, but it kept saying "Server does not exist or access denied." Please give me the steps to go about this. Thanks, -R
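A sketch of the batch-file side (the server, database, table, and column names are placeholders; set /p prompts for the parameter, and -E uses Windows authentication, or swap in -U/-P for a SQL login):

@echo off
set /p custid=Enter the customer id: 
osql -S MyServer -d MyDatabase -E -Q "SELECT * FROM dbo.Customers WHERE CustomerId = '%custid%'"

The "Server does not exist or access denied" message is a connection failure, so it usually points at the -S value or the login rather than at the query itself.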
How can I catch a batch id from a running stored procedure? My intention is that when we run stored procedures in a batch, we are running a lot of procedures, and I would like to log each run; if the same procedure runs several times per day, I need to separate the runs by a "batch id" for the specific run. I have created a log table and a logging procedure that logs the start and end of a procedure run, along with some values for the run. So I'm trying to find a way of fetching the "batch id" the SP is running under, so I can separate the runs when analyzing the log table. I have looked at the metadata tables, and also at sys.sysprocesses, but I cannot find a batch id.
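Since SQL Server does not expose a ready-made batch id, one workaround (a sketch; LogRun and the procedure names are hypothetical) is to generate the id in the calling batch and hand it to every logging call:

DECLARE @BatchId uniqueidentifier;
SET @BatchId = NEWID();  -- one id shared by the whole run

EXEC dbo.LogRun @BatchId = @BatchId, @ProcName = 'usp_LoadCustomers', @Step = 'start';
EXEC dbo.usp_LoadCustomers;  -- hypothetical procedure being logged
EXEC dbo.LogRun @BatchId = @BatchId, @ProcName = 'usp_LoadCustomers', @Step = 'end';

Runs of the same procedure on the same day then differ by @BatchId in the log table.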
I have created a batch file that executes a stored proc to import data.
When I run it from the server (Remote Desktop) it works fine, but if I share the folder and try to run it from my PC, it doesn't do anything. I don't get an error; it just doesn't do anything. My Windows user has admin rights in SQL. Why is it not executing from my PC?
Has anyone monitored the execution of SSIS packages with MOM? Are there significant benefits over just using the built-in execution and event logs, along with the Windows Event Viewer?
What is the recommended way to monitor SSIS execution?
Whenever you connect to a database server that is running SQL 2012 Integration Services, right-click on SSISDB, and choose Reports, then Standard Reports, then Executions, there is a display of each execution with options to drill down.
There is a particular execution where, when I click to see "All Messages", I get a blank screen in SSMS with the following error: "Error: an error occurred during local report processing. --> An error has occurred during report processing. --> Exception of type 'System.OutOfMemoryException' was thrown."
Before I go add more memory to this SQL box, I need to understand how an out-of-the-box, canned SSRS report such as this, which is built into Integration Services, could produce this issue.
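One way to read the same messages without going through the report (a sketch; the operation_id value is a placeholder for the execution id shown in the Executions report, and catalog.operation_messages is the SSISDB catalog view the report draws from):

SELECT message_time, message
FROM SSISDB.catalog.operation_messages
WHERE operation_id = 12345   -- placeholder execution id
ORDER BY message_time;

Querying the view directly also lets you filter with WHERE or TOP, so SSMS never has to render every message at once.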
When I execute a long-running procedure, I get timeout errors when other users try to execute other procedures with UPDATE or INSERT statements.
I suspect that the other procedures are trying to execute DML statements on tables that are locked by the long-running procedure.
I have a shared trigger on all my tables that creates and updates records in the AuditLogDetails and AuditLogParent tables to keep a log of modifications. I suspect that AuditLogDetails and AuditLogParent are locked by the long-running procedure.
How can I change the locking behavior of the long-running procedure to fix the timeout errors that I get?
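One common mitigation (a sketch with placeholder table and column names, assuming SQL Server 2005 or later for UPDATE TOP): commit the work in small chunks so the locks taken on the trigger-maintained audit tables are released frequently instead of being held for the whole run.

DECLARE @Rows int;
SET @Rows = 1;
WHILE @Rows > 0
BEGIN
    BEGIN TRANSACTION;
    UPDATE TOP (1000) dbo.PresenceHistory   -- placeholder target table
    SET Posted = 1
    WHERE Posted = 0;
    SET @Rows = @@ROWCOUNT;
    COMMIT TRANSACTION;
END

Whether this fits depends on whether the posting logic can tolerate other sessions seeing partially posted data between chunks.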
The long-running procedure is displayed below.
ALTER PROCEDURE [dbo].[spPostPresenceToHistory2]
@PostDate DateTime,
@Department Int,
@Division Int,
@Testing Bit = 0,
@XDoc xml OUTPUT,
@XDoc2 xml OUTPUT,
@ModifierID varchar(20),
@Comment varchar(200)
AS
BEGIN
BEGIN TRANSACTION
DECLARE @PostCount Int,@PreCount Int,@DiffCount Int