I went through the online tutorial, but I was not able to finish the "Deploying Packages Tutorial" because, for some reason, the sample packages used in the tutorial came up with errors when I added them to my project.
EDIT: See the following post for the trouble I ran into using the sample tutorial. http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=3145471&SiteID=1&mode=1
But I read through the rest, got the gist of it, and worked through a simple example on my own.
Here are some questions.
In my example, I have one package that does a simple load from an Excel spreadsheet into a database table. The package also writes some kickout rows (bad data) to a flat file. To keep the example simple, I configure just one value in the .dtsConfig file, and that is the server name.
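For reference, the generated .dtsConfig is just an XML file; the relevant part looks roughly like the sketch below (the connection manager name DestinationDB and the server name MYSERVER are made up, and the wizard also adds a DTSConfigurationHeading element with generation metadata). Changing the text inside ConfiguredValue is what changes the server the package points at when it runs with this configuration.

<?xml version="1.0"?>
<DTSConfiguration>
  <Configuration ConfiguredType="Property" Path="\Package.Connections[DestinationDB].Properties[ServerName]" ValueType="String">
    <ConfiguredValue>MYSERVER</ConfiguredValue>
  </Configuration>
</DTSConfiguration>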
Questions:
(1) According to the tutorial, after I've created the deployment bundle I'm supposed to copy it over to the destination computer and run it there. Can't I just run it from my computer and choose where to install the package in the Package Installation Wizard? That's what I did; it worked, and I am able to see the package on the target SQL Server. So I'm wondering why I need to copy the deployment bundle to the target and run it there.
(2) In the Package Installation Wizard, there is a step called "Select Installation Folder", and the description on this page says, "The installer will install SSIS package dependencies in the following folder". Without knowing what this was going to do, I just picked a folder and finished out the wizard. When I check that folder after the wizard completes, I see that the .dtsConfig file got put there (on my local machine). What implication does this have? (I don't know how to schedule a job in SQL Server Agent, so I haven't actually tried running the deployed package.) I'm going to guess the package is not going to run because the config file ended up on my machine.
(3) In my example, I could also have configured the connection string for the Excel file and the flat file directory for the kickout data. But since the file names are part of the connection string and the file names are likely to remain the same, while the directory locations may change, should I handle this with variables that contain only the directory path?
#1 Has anyone seen any good blogs on building solutions and what actually occurs during the build? BOL is pretty thin in this area. Does it just spin through all the projects in the solution, validate them, copy the packages to their respective bin folders, and add them to the manifest?
I am trying to understand the relationship between setting package configurations and setting variable values when scheduling a job. I understand that I can select variables that I want to manipulate at run time using package configurations. I understand that the configuration file is an XML file that the job can be told to access at run time. Here are my questions:
1. Once I create a configuration file, do I physically modify the file to change the variable that is input at run time?
2. Do I have to select the config file and then change the value using the Set Values tab?
3. What is the relationship between the config file and the Set Values tab?
4. When creating a package configuration, when would you use the options other than an XML configuration file?
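For reference on option 4: the SQL Server configuration type stores its entries in a table that the wizard can create for you. If I let it generate the default, it looks roughly like this (a sketch from memory, so the details may differ slightly); each row maps a package property path to a value, much like a Configuration element in the XML file:

CREATE TABLE [dbo].[SSIS Configurations]
(
    ConfigurationFilter  nvarchar(255) NOT NULL,  -- groups the rows that belong to one named configuration
    ConfiguredValue      nvarchar(255) NULL,      -- the value to apply at run time
    PackagePath          nvarchar(255) NOT NULL,  -- e.g. \Package.Variables[User::ServerName].Properties[Value] (hypothetical)
    ConfiguredValueType  nvarchar(20)  NOT NULL   -- e.g. String, Int32
);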
I've been searching for an answer to my question for quite some time now and have not been able to figure it out yet.
Situation:
- I've created an SSIS package containing a Bulk Insert task.
- I've added a package configuration containing the appropriate connection manager (i.e. dev, beta, or live).
- CreateDeploymentUtility = true.
- I've copied the deployment folder to our beta server and ran the manifest file to install the package to the SQL 2005 server; after that I specified the config file location and changed the value so the appropriate connection manager is used.
- When I execute the package from the SQL Server, the package doesn't read the value from the XML config file; it uses the connection that was originally specified in the package. Yet when I run the package from BIDS, it does read the value from the XML config file.
I can't seem to figure out why this is happening. Am I missing something here?
I'm slowly coming up to speed on configurations and deployment.
I have 2 questions for this thread.
Question 1:
When I create the deployment manifest (the file that gets created when I build) and then run it on the destination machine, there's a step in the wizard that asks for a folder location. The exact page on the wizard is called "Select Installation Folder", just to make it clear what I'm talking about.
What significance does this folder have? I noticed that when I was using the XML config option, the config file appeared there (and nothing else). When I use the SQL Server config option, I didn't see anything going into that folder. Are these the expected results in each case?
The description on that wizard page says, "The installer will install the SSIS package dependencies in the following folder." I'm not sure what this means, or whether I should expect more than the XML config file to appear in there (in the case where I used the XML config option). What are the "dependencies", other than the config file, that the wizard says the folder will contain?
Question 2:
I've been playing with deployment, and there are a bunch of "play" packages on my test server. How do I clean these up?
I have a standalone SSIS package that I wish to deploy from the file system, as opposed to SQL Server deployment.
Anyway, I was wondering if there's a command line utility for running SSIS packages on an ad hoc basis? What I was thinking was that I would put the call to the command line in a stored procedure using xp_cmdshell, so that the package can be called that way.
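Something like this is what I had in mind (just a sketch, assuming xp_cmdshell is enabled; the package name and server are made up):

-- run a package stored in msdb via dtexec from T-SQL
DECLARE @cmd varchar(1000);
SET @cmd = 'dtexec /SQL "\LoadExcelData" /SERVER "MYSERVER"';
EXEC master.dbo.xp_cmdshell @cmd;
-- for a package deployed to the file system, /FILE "C:\Packages\LoadExcelData.dtsx" would be used instead of /SQL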
I am trying to write a simple application which simplifies package deployment to SSIS. Basically, it allows one to select a folder which contains .dtsx packages, and then it uploads those files to the MSDB, into a folder of choice on the MSDB. To do this I thought it would be nice to use the Integration Services API (Microsoft.SqlServer.Dts) and use the objects Dts.Runtime.Application and Dts.Runtime.Package.
The result would be this in a nutshell:

Dim dtsApp As New Dts.Runtime.Application
Dim pkg As Dts.Runtime.Package
' load the package from the file system
pkg = dtsApp.LoadPackage(File.FullName, Nothing)
dtsApp.SaveToSqlServerAs(pkg, Nothing, strPackagePath + PackageName, strDestinationServer, Nothing, Nothing)
However, if I try to execute this, I end up with an Strace assertion error. Searching for that error led me to the following post: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2173800&SiteID=1
After some more research I found the cause of my problem. I only had SSMS (the workstation components) installed on my development PC. Someone suggested that installing Integration Services would solve it, and after testing this it indeed did, but that still doesn't solve my problem. The tool I am writing is going to be used on production workstations which only have the workstation components installed, and installing Integration Services there isn't likely to happen, mostly because having SSIS installed locally serves no purpose; the packages will be deployed to a remote server right after they are loaded from the file.
Trying to find an alternative led me to dtutil, which miraculously enough does seem to work if given the right parameters, but to me this feels like a dirty solution. I feel it shouldn't be necessary to resort to using System.Diagnostics.Process to start dtutil in the background when I have this nice API specifically designed for SSIS.
Is there anyone here that has a suggestion or an idea how to deploy packages using the Dts.Runtime.Application without installing SSIS on my local workstation?
In reference to the question raised in this thread http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1460591&SiteID=1
Since I'm not able to create a deployment utility when a config file is shared among multiple packages, and I also cannot get permission from the sysadmins to use environment variables, I'm stuck.
Now I'm thinking of importing the package into SQL Server from the file system. Are there any caveats to this approach, especially regarding the config files?
[edit] Also, do I need any special permissions to view the Integration Services node in Management Studio? We are using integrated authentication, and I'm also not able to run the sp_start_job stored procedure in the msdb database. [/edit]
I am trying to deploy an SSIS package with the SaveToDtsServer method of Microsoft.SqlServer.Dts.Runtime.Application.
My C# project deploys an SSIS package from a setup application (the project has an Installer class). It worked several times, but suddenly it stopped working without any change in my code.
I get a System.BadImageFormatException: Message="Invalid access to memory location. (Exception from HRESULT: 0x800703E6)"
Source="Microsoft.SqlServer.ManagedDTS"
StackTrace:
at Microsoft.SqlServer.Dts.Runtime.Application.SaveToDtsServer(Package pPackage, IDTSEvents pEvents, String sPackagePath, String sServerName)
at TestPackageDeployment.Form1.button1_Click(Object sender, EventArgs e) in D:\DEV\TestPackageDeployment\TestPackageDeployment\Form1.cs:line 41
I tried SaveToSqlServer and SaveToSqlServerAs as well, but they don't work either; they return the same exception. On the other hand, the CreateFolderOnDtsServer method works perfectly.
This happens to me on both Win2003 and WinXP. I use SQL Server 2005 + VS 2005.
I am using package configurations to simplify the SSIS package deployment process. All the configuration information is stored in XML files. So far so good. However, I have many packages (20), and for each package there is one configuration file. During the deployment process I have to modify the connection strings (server name, DB name) to new values. That ends up being 20 or more modifications, and it's easy for me to make a mistake. Is there any workaround, such as setting up an environment variable, I guess, that would allow me to modify the value only once and apply it to all the packages?
For the past few months I've been developing a DW and ETL with SQL 2005 / SSIS. My packages are being deployed to a SQL Server. Although in the end game we will have Dev/Staging/Production environments, I would still like to archive production packages when we push staging to production. Essentially I would like to archive the last X packages that were deployed to production, where X is a reasonable number (3 - 5). I don't necessarily need to have them accessible to run. One of the purposes is to have another safeguard should we miss anything in user testing and need to roll back a deployment.
I am utilizing VSS and we will have backups running on the production server, but I would prefer to have an archive that is a little more accessible.
I'm just wondering if anyone has any thoughts on how to extract/archive production packages when the push is made. I could easily develop an app that queries the MSDB and exports the packages to the file system.
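Something along these lines is what I had in mind (a sketch against the SQL Server 2005 system tables in msdb, so the details are subject to change); the app would write the package XML out to .dtsx files in the archive location:

-- list the packages stored in msdb along with their folder and definition
SELECT f.foldername,
       p.[name],
       p.createdate,
       CAST(CAST(p.packagedata AS varbinary(max)) AS xml) AS package_xml
FROM msdb.dbo.sysdtspackages90 AS p
JOIN msdb.dbo.sysdtspackagefolders90 AS f
    ON p.folderid = f.folderid
ORDER BY f.foldername, p.[name];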
I have designed a few SSIS packages in the BI Development studio and deployed them to the msdb database of my SQL server using the generated Deployment Utility.
They deployed and executed just fine, but, I would like to better organize them into folders within the msdb storage area.
Is there a way to tell the project or the generated Deployment Utility to deploy the packages to a specific folder within the SSIS Packages / msdb storage area on the SQL server?
I need to design an ASP.NET application with an MSDE backend database that will be deployed at 50 different locations (unrelated to each other). The application's deployment package needs to be downloaded from a web site on the internet. The end user locations do not have SQL 2000 or MSDE...some have Access.
I want to know if my deployment package can include the MSDE software? Or can anyone offer ideas on the EASIEST way for these small offices to get MSDE installed on each of their servers, so that my ASP.NET application can connect to it via ADO.NET?
I'm trying to build a deployment package. I have a number of .dtsx packages in a project that share a connection config file. When I build, the error states: 'Could not copy file "whatever.dtsconfig" to the deployment utility output directory. ... The file already exists'
This is regarding one package where we are trying to deploy the package through "SQL Server deployment" using the .dtsx, .dtsConfig, and manifest files, but after deployment the package is not found in msdb. Instead it shows up under the File System folder. The same behavior is observed repeatedly when we try to deploy the package.
We have seen such behavior only in this package. Please help us solve the above scenario.
My parent package calls packages stored in the file system. While developing, I would like to call packages in the project bin directory. In production, I would like to call packages in a different directory. Is this possible?
I can change the package connection string with an expression that refers to user variables PackageLocation1 or PackageLocation2. I would like to do this automatically. Is this something that should be done at deployment time? Or is there a run time value that I can check and conditionally use PackageLocation1 or PackageLocation2?
Development and deployment are done on the same server, so the same environment variable value would be used in an indirect configuration. The same thing applies to a file configuration.
Another question: Is it possible to set up a different installation folder for use during deployment? Every time I deploy, I have to navigate through the folders; you can't even paste in the folder name.
I have a very odd problem. I have a package which uses some custom tasks that were written in C#. When the package is deployed to our production server, *some* of the property values for *some* of the tasks are cleared. For example, I have these five tasks:
All of them inherit (of course) from Microsoft.SqlServer.Dts.Runtime.Task. All of them have custom members (some similar, some different), and of course, different implementation (though they are mostly the same). This test package has one instance of each of the different tasks.
As I said above, when we deploy to our production server, *some* of the property values for *some* of the tasks are cleared -- but when deployed to our dev server, everything remains intact.
Here is what is cleared:
- On 4 of the 5 tasks, the Description property (inherited from Task) is cleared, but on the other one it remained intact.
- On 3 of the 5 tasks, the Connection property (a custom property in all tasks) is cleared, but on the other two it remained intact.
- 3 of the tasks have other string properties that were set, and all of these were cleared.
We can reproduce this on two different production servers, and these two servers have some different configurations, suggesting these would not be the culprit:
- They have different service packs (one is build 2047, the other build 3042).
- One has the custom SSIS components installed (in the GAC), the other one does not.
Our development server, where the package is deployed as expected, has build 2047 with the components installed.
Here are the packages, where you can compare and see the differences (using a text comparison tool):
Dev-GOOD.xml Prod-BAD.dtsx
These were created after deployment by importing them from the server into a Visual Studio SSIS project.
Any suggestions would be *greatly* appreciated, as we are totally stumped as to why this is happening.
EDIT: Additional clues, this package is deployed to the MSDB. If it's deployed to the File System, it remains unmodified.
I'm currently struggling with the setup of our packages for deployment to a new environment.
We are working with a master/detail package setup. One master package is created that calls all the child packages. In the master package we don't have any connections to our source and/or target databases/source systems.
Everything works fine; however, when starting to deploy the whole set of packages, it seems that we don't have the option to set specific properties of our detail packages, e.g. connection properties. But this is just what we need.
When we are adding a job in the Job Agent for our master package to be scheduled, we want to be able to set all the different connection manager properties, not only the ones from the master package but especially the ones from the detail packages, since that is where we switch the connections from the development environment to the acceptance environment.
I tried to fix this with parent package variables, but I can't set the Password property; only the ServerName and UserName can be set, not the Password.
Does anyone have an idea what the easiest and best approach is to solve this?
We are trying to create a deployment utility for a solution. The issue we are facing is that we are using a single package configuration file, and when we try to build the solution to create the deployment utility, the build process fails saying that the package configuration file already exists. The reason for this is that while building, the utility copies the configuration file for each package; it copies it for the first one, but from the second one onward, when it tries to copy, it fails saying the file already exists.
Any idea how to overcome this, or any suggestions on how to create a deployment utility for a solution in which the packages share a single package configuration file?
I'm trying to deploy an SSIS package to a server ("SQL Server" deployment). The package does have an encrypted password, which has sometimes worked nicely and sometimes not in the past. It's entirely possible that our other "DBA" has busted something on the server, thus preventing my access to it, but I'm curious if anyone has any experience with error code 0x80004005 (Login timeout expired) in the SaveToSqlServer method. Is that just the generic you-can't-log-in message, or is it really trying to imply that the SQL Server is not responding to login attempts? Thanks for any help, Ben
===================================
Could not save the package "C:Documents and SettingsfooarinDeploymentfoo.dtsx" to SQL Server "BAR". (Package Installation Wizard)
===================================
The SaveToSQLServer method has encountered OLE DB error code 0x80004005 (Login timeout expired). The SQL statement that was issued has failed.
------------------------------
Program Location:
at Microsoft.SqlServer.Dts.Runtime.Application.SaveToSqlServer(Package package, IDTSEvents events, String serverName, String serverUserName, String serverPassword)
at Microsoft.SqlServer.Dts.Deployment.DtsInstaller.SavePackageToSqlServer(WizardInputs wizardInputs, String packagePassword, Boolean bUseSeverEncryption, String serverName, String userName, String password, String packageFilePath, List`1 configFileNames)
at Microsoft.SqlServer.Dts.Deployment.DtsInstaller.InstallPackagesToSqlServer(WizardInputs wizardInputs)
===================================
The SaveToSQLServer method has encountered OLE DB error code 0x80004005 (Login timeout expired). The SQL statement that was issued has failed.
------------------------------
Program Location:
at Microsoft.SqlServer.Dts.Runtime.Wrapper.ApplicationClass.SaveToSQLServer(IDTSPackage90 Package, IDTSEvents90 pEvents, String ServerName, String ServerUserName, String ServerPassword)
at Microsoft.SqlServer.Dts.Runtime.Application.SaveToSqlServer(Package package, IDTSEvents events, String serverName, String serverUserName, String serverPassword)
After moving my deployment folder to the target server, I run the Installation Wizard. As I click Next through it, I never see the window which is supposed to allow me to set package config values, as stated in MSDN:
"If the package includes configurations, you can edit updatable configurations by updating values in the Value list on the Configure Packages page."
Does anyone know why I am not seeing it? The deployment bundle I moved over currently has 3 files:
1) SSIS Deployment Manifest
2) SSIS Package
3) SSIS Config File
Again, I double-click the SSIS Deployment Manifest, and it starts fine. I go through the steps for File System Deployment, and then it prompts for the installation folder path. After that, it takes me directly to validation. Why is it not showing me the Configure Packages page as described in the MSDN documentation? Please advise. Thanks.
I have deployed a project with multiple packages to the SSIS 2012 database. I am able to configure the project parameters fine, but I am not able to replace the package variable values with the 'Environment' variables.
The following SQL is lifted from one of the Reporting Services / AdventureWorks2000 sample reports. I'm a little slow / baffled on how the inner joins are working, specifically the INNER JOIN Locale and the INNER JOIN ProductModel. I'm used to seeing INNER JOIN SomeTable ON Something = Something, but how these joins are working is lost on me. Can someone give a quick overview (or point me to a reference) so I can better understand?
Thanks!
SELECT ProductSubCategory.Name AS ProdSubCat, ProductModel.Name AS ProdModel, ProductCategory.Name AS ProdCat,
       ProductDescription.Description, ProductPhoto.LargePhoto, Product.Name AS ProdName, Product.ProductNumber,
       Product.Color, Product.Size, Product.Weight, Product.DealerPrice, Product.Style, Product.Class, Product.ListPrice
FROM ProductSubCategory
INNER JOIN Locale
    INNER JOIN ProductDescriptionXLocale
        ON Locale.LocaleID = ProductDescriptionXLocale.LocaleID
    INNER JOIN ProductDescription
        ON ProductDescriptionXLocale.ProductDescriptionID = ProductDescription.ProductDescriptionID
    INNER JOIN ProductModel
        INNER JOIN Product
            ON ProductModel.ProductModelID = Product.ProductModelID
        INNER JOIN ProductModelXProductDescriptionXLocale
            ON ProductModel.ProductModelID = ProductModelXProductDescriptionXLocale.ProductModelID
        ON ProductDescriptionXLocale.LocaleID = ProductModelXProductDescriptionXLocale.LocaleID
           AND ProductDescriptionXLocale.ProductDescriptionID = ProductModelXProductDescriptionXLocale.ProductDescriptionID
    ON ProductSubCategory.ProductSubCategoryID = Product.ProductSubCategoryID
INNER JOIN ProductCategory
    ON ProductSubCategory.ProductCategoryID = ProductCategory.ProductCategoryID
LEFT OUTER JOIN ProductPhoto
    ON Product.ProductPhotoID = ProductPhoto.ProductPhotoID
WHERE (Locale.LocaleID = 'EN')
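From what I can tell, the nested syntax pairs each ON with the nearest JOIN that doesn't yet have one, so the joins are evaluated from the inside out. A made-up example with tables A, B, and C (not part of the report) of how I think it reads:

SELECT *
FROM A
INNER JOIN B
    INNER JOIN C
        ON B.BID = C.BID   -- this ON belongs to the B-to-C join
    ON A.AID = B.AID       -- this ON belongs to the A-to-B join
-- which, for inner joins, should give the same result as the flat form:
-- FROM A
-- INNER JOIN B ON A.AID = B.AID
-- INNER JOIN C ON B.BID = C.BID

Is that the right way to read it?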
I'm trying to get the following poll working: http://www.codeproject.com/useritems/Site_Poll_Control.asp
It looks like it's exactly what I was looking for, but it doesn't come with much in the way of instructions. I have the following function:

Public Function CastVote(ByVal PollId As Integer, ByVal Answer As Integer, ByVal MemberId As Integer) As Boolean
    Dim cmd As New SqlCommand("InsertPollResult", New SqlConnection(Connection))
    With cmd.Parameters
        .AddWithValue("@PollId", PollId)
        .AddWithValue("@PollChoice", Answer)
        .AddWithValue("@MemberId", MemberId)
    End With
    Return (SqlExecuteInsertSp(cmd) > 0)
End Function

This calls SqlExecuteInsertSp(cmd), which is:

Public Function SqlExecuteInsertSp(ByVal cmd As SqlCommand) As Integer
    Dim i As Integer
    cmd.CommandType = CommandType.StoredProcedure
    Try
        cmd.Connection.Open()
        i = cmd.ExecuteNonQuery()
    Catch ex As Exception
        ErrorMessage = "ProDBObject.SqlExecuteInsertSp(SqlCommand): " & ex.Message.ToString
    Finally
        cmd.Connection.Close()
    End Try
    Return i
End Function

I can't figure out what this is doing. The best I can figure is that it determines if we have a good connection. Is this right? In my code, CastVote keeps returning false, and I don't know why. The answer seems to be in the i = cmd.ExecuteNonQuery() line, but I can't figure out what that line is supposed to be doing.

Diane
Hi guys, I have written quite a big stored procedure which creates a (multi-session) temporary table and updates it. All the statements are encapsulated in a single transaction which is explicitly declared in the code. What happens is that the server puts a lock (of type Sch-M) on that table, thus preventing any type of operation on it (including a simple SELECT).
Now, I want to be able to read that table from within another transaction. Why is it that I cannot use the NOLOCK table hint in the SELECT statement?
Here is some code which reproduces my problem.
Query A:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
BEGIN TRAN TR_DEMO;
CREATE TABLE ##TBL1
(
    Oidx int not null primary key identity(1,1),
    Name nvarchar(30) not null,
    Type char(1) not null
);
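Query B, from the second session while Query A's transaction is still open, is essentially just:

SELECT * FROM ##TBL1 WITH (NOLOCK);

This just sits and waits. (Presumably because even under NOLOCK / READ UNCOMMITTED the statement still needs a schema stability (Sch-S) lock, and that is blocked by the Sch-M lock the open CREATE TABLE is holding?)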
My question is: in what situations will @@ERROR be set?
I would like to run some logic when an error occurs in a particular statement.
The documentation says the @@ERROR value will be set if an error occurs in a statement, that control will move to the next statement without exiting(???) the procedure, and that the @@ERROR value can be used in that next statement.
But when I execute the procedure below, execution is terminated (when the error occurs) without moving to the next statement. Please help me understand SQL Server's @@ERROR and the situations in which it will be set.
-----------------------------------------------------------------------
CREATE PROCEDURE VALUE_ERROR_TEST
AS
BEGIN
    DECLARE @adv_error INT
    DECLARE @errno INT
    DECLARE @var int
    SELECT @var = '101 a'
    SELECT @errno = @@ERROR
    PRINT @errno
END
GO
-----------------------------------------------------------------------
The procedure compiles successfully. When executed, it says:
Server: Msg 245, Level 16, State 1, Procedure VALUE_ERROR_TEST, Line 10 Syntax error converting the varchar value '101 a' to a column of data type int.
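(For comparison, not every error aborts the batch the way that conversion error does. With a statement-terminating error such as a constraint violation, the next statement does run and can read @@ERROR; a minimal sketch, using a throwaway temp table:)

CREATE TABLE #err_demo (val int CHECK (val > 0));
INSERT INTO #err_demo (val) VALUES (-1);  -- violates the CHECK constraint (error 547)
PRINT @@ERROR;                            -- prints 547; the batch keeps going
DROP TABLE #err_demo;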
Please consider the following example.

CREATE TABLE test (
    an_ndx int NOT NULL primary key identity(1,1),
    a_var varchar(48) NOT NULL,
    last_edit_timestamp datetime NOT NULL default CURRENT_TIMESTAMP
);
CREATE TABLE test_history (
    an_ndx int NOT NULL,
    a_var varchar(48) NOT NULL,
    last_edit_timestamp datetime NOT NULL,
    current_edit_timestamp datetime NOT NULL default CURRENT_TIMESTAMP
);
GO
CREATE TRIGGER update_history ON test FOR UPDATE
AS
BEGIN
    INSERT INTO test_history (an_ndx, a_var, last_edit_timestamp)
    SELECT * FROM deleted;
    UPDATE inserted SET last_edit_timestamp = CURRENT_TIMESTAMP;
END;

The question is, does this do what I think it should do? What I intended: An insert into test results in default values for an_ndx and last_edit_timestamp. An update to test results in the original row(s) being copied to test_history, with a default value for current_edit_timestamp, and the value of last_edit_timestamp being updated to the current timestamp. Each record in test_history should have the valid time interval (last_edit_timestamp to current_edit_timestamp) for each value a_var has had for the "object" or "record" identified by an_ndx.

If not, what change(s) are needed to make it do what I want it to do? Will the trigger I defined above behave properly (i.e. as I intended) if more than one record needs to be updated?

Thanks
Ted
I am using SQL Server Express and Visual Studio 2005. I am new to batches and am trying to understand how they work. I am trying to write a query that creates an assembly and the functions that are contained in it. Here is my query:
USE ProductsDRM
GO

IF NOT EXISTS (SELECT 'True' FROM sys.assemblies WHERE name = 'ComputedColumnFunctions')
BEGIN
CREATE ASSEMBLY ComputedColumnFunctions
FROM 'C:\Websites\AssemblyTest\StoredFunctions\StoredFunctions\bin\StoredFunctions.dll'
GO

CREATE FUNCTION fImageFileName
(
    @ProductID int,
    @ImageSizeCode nvarchar(4000)
)
RETURNS nvarchar(4000)
AS EXTERNAL NAME [ComputedColumnFunctions].[StoredFunctions.UserDefinedFunctions].ImageFileName
GO

CREATE FUNCTION fTestInt
(
    @ProductID int
)
RETURNS int
AS EXTERNAL NAME [ComputedColumnFunctions].[StoredFunctions.UserDefinedFunctions].TestInt
GO

CREATE FUNCTION fTestInt2
(
    @TestInt int
)
RETURNS int
AS EXTERNAL NAME [ComputedColumnFunctions].[StoredFunctions.UserDefinedFunctions].TestInt2
END
ELSE
BEGIN
PRINT 'The assembly named "ComputedColumnFunctions" already exists. No new assembly was created.'
END
GO
I read in a book about SQL Server 2005 about including a test for whether an object (such as the assembly in this case) exists before trying to create it. If I include only the CREATE ASSEMBLY statement and the FROM line below it, and delete everything from the next GO down through the last CREATE FUNCTION (just before END ELSE), it works fine. If I leave it as is, I get a runtime error on the GO line just after the CREATE ASSEMBLY statement. What am I doing wrong?
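(A sketch of the likely issue: GO is not a T-SQL statement; it is a batch separator recognized by the client tools, so it cannot appear inside an IF ... BEGIN ... END block. Each guarded piece has to be its own batch, along these lines:)

IF NOT EXISTS (SELECT 1 FROM sys.assemblies WHERE name = 'ComputedColumnFunctions')
    PRINT 'Assembly is missing; the CREATE ASSEMBLY statement would go here.';
GO
IF OBJECT_ID(N'dbo.fImageFileName') IS NULL
    PRINT 'Function is missing; the CREATE FUNCTION would go here, wrapped in EXEC(''...'') since CREATE FUNCTION must be the only statement in its batch.';
GO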