I have two SSIS packages that I want to run in one SQL Agent job as two individual steps. The two packages run fine when they are in separate jobs. However, when I run the job containing both SSIS packages (under the same proxy), the first SSIS package starts, but hangs in the middle.
I then tried setting the DelayValidation flag to True, as suggested for a similar issue in another thread on this forum. After changing DelayValidation to True for all containers and tasks in the second SSIS package, the first SSIS package ran through successfully, but the job kept executing for hours and the second SSIS package never started. I finally killed the job.
Any ideas as to what the problem is here? I have logging going to the Event Viewer and can see that the first package completes successfully. The packages run successfully in separate jobs, but I cannot get them to run together within the same job without hanging.
I have a virtual machine that previously had SQL Server 2005 installed. Since I changed the computer name, I have to reinstall SQL. I removed all the SQL components and am trying to reinstall the database and reporting components. The installation is failing on the "Removing temporary files" step. According to the setup progress screen, the following components have been installed:
OWC11
SQL Server Backward-Compatibility Files
SQL Server Database Services
Reporting Services
Visual Studio Integrated Development Environment
SQL Server Books Online
SQLXML4
Workstation Components, Books Online - this step is still in "configuring components" status.
The virtual machine runs Windows 2003 Server SP1 with all current patches and just under 2 GB of memory (1920 MB of RAM); the host is Windows XP SP2 with all current patches and 4 GB of memory.
I had no problem doing this procedure on a different virtual machine, so I'm not sure what the problem is here.
I have an MS Access DB that I have successfully upgraded to SQL Express 2005. I run my code using a recordset as normal, and all connections to the database work fine, but I receive the multiple-step OLE DB error when it gets to a line trying to populate an address field which has a datatype of nvarchar(max), null. (The field was Memo in the Access version.)
This field in SQL 2005 allows nulls.
I've tried adding an empty string " " to the variable before it is saved, but this still doesn't work.
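From what I've read while digging, the classic ADO + SQLOLEDB combination doesn't understand the new (n)varchar(max) types, which is one reported cause of this error. A suggested workaround (a sketch; the server and database names are placeholders) is to connect through SQL Native Client and have it expose the max types in a backward-compatible way:

Provider=SQLNCLI;Server=.\SQLEXPRESS;Database=MyDatabase;Integrated Security=SSPI;DataTypeCompatibility=80;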
I'm working with SQL 2000 and am just learning about Maintenance Plans (MP). They seem convenient, but after some time, I'm wondering if they're the best approach long-term. Here are my experiences.
Using the MP Wizard, I created a plan with tasks from all the dialogs:
I was puzzled to find 4 jobs were created, each with just 1 step, and staggered starting times. I expected to find 1 job with 4 steps. So, brimming with confidence, I did just that. I combined all 4 into 1 job, deleted the 3 other MP-created jobs, and checked for any job-specific details in the code. However, now when I open the MP, I get this pop-up:
"One or more of the jobs created for this plan has had additional steps added to it. It is not recommended that jobs created by the maintenance plan be modified in any way."
Okay, fair warning. Yet it appears the job and all steps run successfully, both on demand and on a schedule. So now I'm wondering if jobs always need an MP. If I don't mind working with xp_sqlmaint syntax, it appears the only thing I'm giving up is the MP history. But I expect job history and '-WriteHistory' will minimize that loss.
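For reference, the kind of step I'd be writing by hand is just a call to xp_sqlmaint with sqlmaint-style switches, along these lines (a sketch; the database name is a placeholder):

EXEC master.dbo.xp_sqlmaint N'-D MyDatabase -CkDB -WriteHistory'

-CkDB runs the integrity check, and -WriteHistory is what should keep the history loss to a minimum.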
I searched BOL, this forum, and Google, and found a couple of articles. One author preferred the ease of the Wizard, another preferred the control and added features of T-SQL, but both created an MP in their examples. So I'm hoping some experienced DBAs can advise.
If I create a job with multiple steps, and no MP, are there important things I give up or problems I create? Is this approach a bad idea in SQL 2005?
At this stage, I don't need replication or other advanced features. Just simple database maintenance.
I have a very simple SSIS package that executes a .bat file. Here's the actual file it executes:
@Echo Off
c:
F:
cd ReportingServicesScripts
rs -i NoteBlankSnapshots.rss -S http://10.90.160.13/ReportServerTest
EXIT
The .bat file executes a Reporting Services .rss script using the rs.exe utility. When I run the command from the command prompt, it runs just fine. When I execute the .bat file manually, it runs and completes. When I execute the actual package, it also completes. But when I schedule the package as a job, it just hangs...it never finishes.
The owner of the job is an administrator in SQL Server. I have the SQL Server Agent configured to interact with the desktop - although my .bat file requires no input from the user.
I've created other jobs that just execute plain old SQL using the same owner, and these jobs complete just fine.
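For what it's worth, a way to run the same batch file directly from Agent, to take SSIS out of the picture, would be a CmdExec step, something like this sketch (the job name, step name, and .bat file name are made up; the path assumes the scripts live in F:\ReportingServicesScripts as in the file above):

EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Run RSS script',
    @step_name = N'Run batch file',
    @subsystem = N'CmdExec',
    @command = N'cmd.exe /c "F:\ReportingServicesScripts\NoteBlankSnapshots.bat"';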
What am I missing?
Any help would be appreciated!
Thanks!!
I have a stored procedure which, when executed from Management Studio, completes successfully. If I invoke the stored procedure from SSIS, which in turn is called from a SQL Server job, the job runs indefinitely.
I wanted to see how far it got in the execution of the SP, so I added logging at various places in the stored procedure... it gets all the way to RETURN 0, the last statement of the SP. The call to the SP is the last task in the SSIS package. So if the SP has almost completed and the Execute SQL Task is the last one in SSIS, why is the job running indefinitely?
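Since it comes up a lot for procs called from jobs, one low-risk thing I plan to try is suppressing the rows-affected chatter, in case the caller is waiting on those extra messages (a sketch; the procedure name is made up):

ALTER PROCEDURE dbo.MyProcedure
AS
BEGIN
    -- Suppress "N rows affected" messages, which some callers treat as extra result sets
    SET NOCOUNT ON;

    -- ...existing body unchanged...

    RETURN 0;
END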
Hello everyone, I am having an odd behavior with an SSIS package that transfers data from one database to another. It seems to me that with some of the larger databases the service just stops working (0% utilization) when it is almost done. I don't get errors, and the job stays in the "executing" state.
This doesn't always happen, but it will happen once the package has run many times over the course of the week. I was wondering if you could point me in a direction to find the source of these hangs. Right now I am leaning towards a memory issue (or other miscellaneous hardware problem), but I would like to rule out software first.
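Would querying the DMVs while it's stuck be a reasonable start? Something like this (a sketch for SQL Server 2005; the session_id filter just skips system sessions):

SELECT r.session_id, r.status, r.wait_type, r.wait_time,
       r.blocking_session_id, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50;  -- skip system sessions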
I have this SSIS package which just doesn't seem to run when executed as a SQL job, and I keep getting this error:
"The command line parameters are invalid. The step failed."
I read some of the comments in forums suggesting to verify the command line for the SQL job, since there is a known bug in the command line for SQL jobs.
But that didn't seem to resolve it and the reason could be one of the variable values that I am trying to set.
In this package one of the variables that I am trying to set is the connection string and my command line looks like this
When I try to run this from the command line I get the error as:
Argument " package.variables[User::MetaDataConnectionString].Value;Data Source=[SVRNAME];Initial Catalog=[DBNAME];Integrated Security=True;"" for option "set" is not valid.
I think the issue is in the /SET parameter, where dtexec seems to be interpreting the semicolons in the connection string as part of its own syntax (which I tried to escape by putting the value in quotes, but it seems to strip them off).
Has anyone else encountered this issue? Is there any other escape character that I should be using?
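For reference, the form I've seen suggested is to quote the entire /SET argument and escape the inner quotes around the value with backslashes, roughly like this (a sketch; the package file name and the [SVRNAME]/[DBNAME] placeholders are illustrative):

dtexec /F "MyPackage.dtsx" /SET "\Package.Variables[User::MetaDataConnectionString].Properties[Value];\"Data Source=[SVRNAME];Initial Catalog=[DBNAME];Integrated Security=True\""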
I am new to SQL Server 2005 (but have many years in SQL Server and .NET), and I have sort of figured everything out for my company. However, one thing that is still bothering me is this:
In the old SQL Server 2000, you can execute a single step in a DTS package by right-clicking the step and then clicking Execute Step.
In the new SQL Server 2005, I can only execute the whole package from the Management Tool and edit the package from VS 2005. Is there also a way for me to execute a single step in an SSIS package?
I have a number of packages that I have moved from an old server. Each package was scheduled with a SQL Agent job. On the old server everything ran fine. All of the packages run fine from VS, from DTEXECUI and I have tried one from the command line with DTEXEC and it worked.
When I run from the SQL Agent job, I don't get a failure, the package just hangs. I let one of the agent jobs sit for an hour with no progress. The package typically takes about 15 minutes to complete.
Below is the output from my package log up to the point that it hangs:
I created an SSIS package which loads a CSV into a database. The package is called in a C# console application which is set up as a scheduled task on a server. The problem I am having is that the package hangs during the validation stage: "Truncation may occur due to inserting data from data flow column "Reading Type" with a length of 50 to database column "ReadingType" with a length of 2." There is no problem loading the same data from the development machine.
Hi, we are writing a SQL Server Integration Services package to import data from a MySQL database to a SQL Server database.
We are using the ODBC 3.51 driver to connect to the MySQL database in SSIS. The package runs perfectly in design mode. When we schedule the package to run, it seems to hang about 1/3 of the time.
What can this be? We used the same package to copy from a SQL Server 2005 database to another SQL Server 2005 database, and that setup works perfectly. When I'm going from MySQL to SQL Server 2005, it does not work 1/3 of the time.
I want to convert an .rdl to an .rdlc and need the full steps. I created the .rdl report from a stored procedure successfully. Now, while converting it to .rdlc, I am getting an authentication error and some other issues. I created the .rdl in 2008 and I want to change it to an .rdlc for 2010.
I have a package that has multiple data flow tasks. At the end of a task, key data is written into a raw file (file name stored in a variable) that is used as a data source for the next task. Each task requires a success from the preceding task.
Here's the rub:
If I execute the entire package, the results (the number of records from certain tasks) differ significantly from when I execute each step in the package in turn, which yields many more records (e.g. 5 vs. 350).
I get the feeling that the raw file is read into memory before it is flushed by the previous task, or that the next task begins its preparation too early.
Any help is greatly appreciated.
I am running on Server 2003 64-bit (although the same thing happens when deployed on a Server 2003 32-bit machine).
I have a simple SSIS package that imports an Excel spreadsheet into a table. The column heading got changed, so the package failed, as expected, but I would like an alert or some way to make the scheduled job show "failed". I tried putting an event handler on the "Data Flow" step to send an email, but it didn't work.
I would like to figure out the event handler problem, but it is more important to have the job show as failed.
For some reason the scheduled job shows "success" even though the SSIS package failed. A better solution would be to make the scheduled job itself fail, based on the package failing.
I am using SQL Server 2005 Integration Services to create new values for my tables. One step I must do is execute a query, and the script task that receives its precedence constraint (it is set to Completion) must do different things depending on the query result.
My question is: how can I know the result of the precedence constraint?
I have a Job Step defined to execute a SSIS Package. This SSIS package contains a Script Task. The Job fails with the message "Package execution failed. The step failed."
I created jobs in SQL Server Agent. In the jobs I created a step named bankdata, but I am having a problem opening the step. Following is the error I am getting:
"Index was outside the bounds of the array. (SqlManagerUI)
------------------------------
Program Location:
at Microsoft.SqlServer.Management.SqlManagerUI.JobSteps.PopulateGrid(JobStepsData steps)
at Microsoft.SqlServer.Management.SqlManagerUI.JobSteps.Microsoft.SqlServer.Management.SqlMgmt.IPanelForm.OnInitialization()
at Microsoft.SqlServer.Management.SqlMgmt.ViewSwitcherControlsManager.SetView(Int32 index, TreeNode node)
at Microsoft.SqlServer.Management.SqlMgmt.ViewSwitcherControlsManager.OnBeforeSelection(Object sender, TreeViewCancelEventArgs e)"
On the other hand, I didn't execute the jobs myself; the job failed as soon as it started.
When I push my SSIS packages up to my production server (which has a different data source than my development environment) and I try to open a package on the production server, it takes forever to validate all the steps of the SSIS package because it's trying to validate against a data source that isn't there, so it just waits for each element it's validating to time out. This is exceptionally annoying.
Is there a way to turn off this validation 'feature'?
I hope the answer is as simple as the question -- but after reading all the documentation I could find (understand?) and a lot of posts here, I'm no closer to achieving the goal.
I have a Visual C# app, DAYTRACKER, developed in VS2005. It uses a database with several tables constructed using SQL Server 2005 Developer Edition.
I want to deploy the app plus the database plus SQL Express to another machine, to be used by a single user (the administrator) with no need for network connectivity of any kind.
What I have so far is:
1. The application is successfully deployed from a CD-ROM, having used the Publish process within VS2005, and opens on the new machine -- without database connectivity, however.
2. SQL Express is successfully deployed (it deployed as a 'prerequisite' when I went through the Publish process in VS2005).
3. I manually copied the database's .mdf and .ldf files, using SQL Server Management Studio's 'Copy Database' function, then transferred the copies to the new machine into the ..\MSSQL.1\MSSQL\Data folder (where they appear along with the master.mdf, mastlog.ldf, etc. files).
Now, the DAYTRACKER application's DAYTRACKERConnectionString under 'Settings' in the VS2005 studio reads 'Data Source=DELL3;Initial Catalog=DayTracker;Integrated Security=True' (which are the appropriate parameters for the machine, DELL3, on which I wrote the program.)
The problem, of course, is that SQL Express on the new machine doesn't connect the application to the database. I went to 'SQL Server Configuration Manager', then 'SQL Server 2005 Services', and double-clicked the 'SQL Server (SQLEXPRESS)' icon: the service is running, logged on using the 'Local System Account'. Under the 'Service' tab the Host Name is 'MUSIC' (the name of the new machine I've installed the app onto -- which of course is not the name, DELL3, that the app's connection string is expecting). Under the 'Advanced' tab, I've tried correcting the default .mdf and .ldf entries in the Startup Parameters to ..\DayTracker.mdf and ..\DayTracker_log.ldf, but the server won't start up after I make the changes.
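One thing I've since read (so treat this as an assumption on my part): just copying the .mdf/.ldf files into the Data folder doesn't register the database with the instance; it has to be attached, something like this sketch (the paths assume the default MSSQL.1 data folder):

CREATE DATABASE DayTracker
ON (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\DayTracker.mdf'),
   (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\DayTracker_log.ldf')
FOR ATTACH;

The connection string would then also need to point at the new machine's instance (e.g. Data Source=MUSIC\SQLEXPRESS) instead of DELL3.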
What I'm hoping for: a step-by-step way of doing this type of deployment, preferably getting it all onto one CD-ROM, and installing it on the new machine so that it all works seamlessly from the start, not requiring any 'tweaking' of the SQL Server Express settings by the end-user.
But I'll take pretty much anything that fixes the specific db connectivity problem I've described.
I have an SSIS job that had been running successfully overnight but has, for the last two nights, failed with the message:
A fatal error occurred while reading the input stream from the network. The session will be terminated. Error: 4014, Severity: 20, State: 2.
The message is logged in both the SQL log and the application event log.
As this job step involves copying from one database to another on the current server, it is hard to account for the error. Had the error occurred in an earlier job step, when the database is restored to the current server from a share on another server, the error would be understandable.
The SQL Server is SQL 2005 SP2 running on Windows 2003 SP1. I have been unable to locate any changes in that time frame that would account for this error.
When adding an SSIS step to a SQL Server Agent job and selecting the location of a config file, the dialog lets you browse the database server you're working with. When selecting the location of the package itself (when the source is File System), the dialog instead lets you browse the machine where Management Studio is running. Is that intentional? And if so, why? Should I just use a fully qualified file name for the package location rather than one using a drive letter?
Hi, I have to transport a big database table and can't read it all at once with "select * from table" because the table is bigger than my system memory. Is there a way to read the table step by step? I thought it was possible with ADO and its server-side cursors, but I don't know how. I need a "universal" solution that works on SQL Server 2000/2005, MySQL and Oracle.
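The closest thing I've found to a portable approach is key-based batching: fetch a block of rows, remember the highest key, then fetch the next block above it. A sketch in T-SQL (assuming an indexed numeric key column id on a table named big_table; MySQL would use LIMIT and Oracle ROWNUM instead of TOP):

-- Fetch the next batch of 1000 rows above the last key seen
DECLARE @last_id INT;
SET @last_id = 0;  -- in real code, carry this value over between batches

SELECT TOP 1000 *
FROM big_table
WHERE id > @last_id
ORDER BY id;

-- After processing, set @last_id to the highest id returned
-- and repeat until the query returns no rows.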
Connecting to a networked SQL Server Box from my local machine
Open Query Analyzer from the Start menu, logging in using the sa account.
From the Object Browser I select my stored procedure (WEA_InsertClaim), right-click and select Debug.
I am prompted to enter the parameter values, which I do; the auto rollback checkbox is checked. I click Execute.
The T-SQL Debugger opens and runs through the stored procedure,
but the only buttons enabled are "Go", "Toggle Breakpoint", and "Clear All Breakpoints",
so I can set breakpoints etc., but when I select Go it will not stop at the breakpoints; it just runs through the stored procedure from start to finish, giving the correct return code as its output.
Is there something I need to enable in order to make it stop at breakpoints?
I have an SSIS package that inserts website URLs from a SQL Server table into a variable used by an HTTP Connection Manager, then downloads the data files from those URLs using a ForEach Loop and a Script Task. It works beautifully when a data file is found at the URL, but hangs if no data file is found. I've set the Timeout property on the HTTP Connection Manager to 30 seconds, but that doesn't help. How can I first check whether a data file exists, or whether the request returns nothing, or trap this situation in a try-catch?
Here is the VB code I'm using in the Script Task:
Public Sub Main()
    Try
        ' Connect to website using HTTP connection manager
        Dim nativeObject As Object = Dts.Connections("HTTP Connection Manager").AcquireConnection(Nothing)
        ' Create a new HTTP client connection
        Dim connection As New HttpClientConnection(nativeObject)
        connection.DownloadFile(Dts.Variables("User::LocalFilePath").Value.ToString(), True) ' destination variable name illustrative
        Dts.TaskResult = Dts.Results.Success
    Catch ex As Exception
        Dts.TaskResult = Dts.Results.Failure ' fail the task on any download error
    End Try
End Sub
I have created a Test SSIS Package within BIDS (VS 2K8, v 9.0.30729.4462 QFE; .NET v 3.5 SP1) that connects to our Test Listener.
There is only 1 Connection Manager Object, an OLE DB Provider for SQL Server.
The ConnectionString lists: Provider=SQLOLEDB.1;Integrated Security=SSPI
The Test Connection within BIDS works.
The Package Control Flow has just 1 object, an Execute SQL Task that performs an EXEC on an SP that contains only a SELECT (read).
The Package runs within BIDS.
I've placed this Package within a Job on the Primary Node. I've run the job successfully with the 32-bit runtime both on and off. The location of the file on the server happens to be on a share that resides on what is currently the Secondary Node.
When I try to run an exact copy of this Job on the Secondary Node (which has been set up with Read All Connections: Yes), I get an error, regardless of the 32-bit runtime option. At this point, the location of the file is on the Secondary Node.
The error is: "Login failed for user 'OurDomain\Agent_Account'".
The Agent is a member of NT Service\SQLServerAgent on both instances, and that account is a member of sysadmin. Adding the Agent account as well, and giving that account sysadmin, makes no difference either.
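Since logins live in master on each instance and don't move with the databases, the next thing I'll check is whether the login actually exists on the Secondary, along these lines (a sketch; the account name is the one from the error above):

-- Run on the Secondary instance: logins are per-instance
-- and are not synchronized between nodes
CREATE LOGIN [OurDomain\Agent_Account] FROM WINDOWS;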