Most of the time my solutions consist of one or two packages, and config files work well.
Now I have a solution with about 50 packages that I have to move to QA.
However, each config file contains the package ID, so even though the packages all use the same data source connection, I would have to create and maintain 50 config files.
Data sources are kept in the package XML, so if I copy all the packages to QA and then change the global data source connection, I still have to open each package manually and save it again.
Surely there must be an easy way to move all 50 packages and have the connection point to QA. But config files and global data sources don't do the trick; what am I missing here?
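One approach that avoids per-package config files is a single SQL Server-based package configuration that every package points at with the same configuration filter. A minimal sketch, assuming a configuration table named [SSIS Configurations] (the default name the configuration wizard creates) and an invented filter and server name:

CREATE TABLE [dbo].[SSIS Configurations] (
    ConfigurationFilter  NVARCHAR(255) NOT NULL,
    ConfiguredValue      NVARCHAR(255) NULL,
    PackagePath          NVARCHAR(255) NOT NULL,
    ConfiguredValueType  NVARCHAR(20)  NOT NULL
);

-- One shared row: every package that uses the filter 'WarehouseConn'
-- picks up this connection string at load time. Pointing all 50
-- packages at QA then means updating this single row, not 50 files.
INSERT INTO [dbo].[SSIS Configurations]
    (ConfigurationFilter, ConfiguredValue, PackagePath, ConfiguredValueType)
VALUES
    (N'WarehouseConn',
     N'Data Source=QASERVER;Initial Catalog=Warehouse;Integrated Security=SSPI;',
     N'\Package.Connections[Warehouse].Properties[ConnectionString]',
     N'String');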
I have 9 dimension and 16 fact packages in my SSIS project. All 25 packages use one source (SQL Server A, staging) and a separate destination (SQL Server B, warehouse). I have completed my development and now want to move these packages to the production environment. I have a parent package that controls all these fact and dimension packages.
1. Should I always change the protection level of the package to "DontSaveSensitive" before moving it to another machine where it has to run under a different user?
2. Which is the best way to configure the connection strings in my development: having a variable for each connection string at parent-package level and passing it to the child packages, or configuring the connection strings directly in an XML config file for one package and telling all my other packages to reuse that existing config file, since the source and destination are the same for all of them?
And I am hearing one more buzzword, "proxies for running SSIS packages". Any information on this would also help.
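On the proxy question: a SQL Server Agent proxy lets a job step run a package under a specific Windows account instead of the Agent service account. A hedged sketch of the usual setup (the credential, account, and proxy names below are invented for illustration):

-- Store the Windows account SQL Server should impersonate.
CREATE CREDENTIAL EtlCredential
    WITH IDENTITY = N'DOMAIN\EtlUser', SECRET = N'StrongPasswordHere';

-- Create an Agent proxy bound to that credential.
EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'SsisEtlProxy',
    @credential_name = N'EtlCredential';

-- Allow the proxy to run steps of the SSIS subsystem; job steps that
-- execute packages can then pick SsisEtlProxy in the "Run as" list.
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'SsisEtlProxy',
    @subsystem_name = N'SSIS';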
While developing the packages in our development environment, the packages should be stored in the file system. After development is complete, when moving these packages to upstream environments, they should be deployed to SQL Server. Is this scenario possible? If so, can anybody give me some tips on how to do that?
Basically, the developers make their changes to the file-system packages (versions are maintained internally by source control). After development is complete, whenever we do a build and deploy to the upstream systems, we move the packages from the file system to the SQL Server database. A scheduler in SQL Server is responsible for executing these packages.
Can anybody give me some examples?
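This scenario is quite common: keep the .dtsx files in source control, push them into msdb at build time with the dtutil command-line tool (for example, dtutil /FILE with the source path and /COPY SQL; with the msdb destination path), and schedule them with SQL Server Agent. A hedged sketch of the scheduling side, with made-up job, folder, package, and server names:

-- Create a job with one step that runs a deployed package via the
-- SSIS subsystem; the /SQL switch names the package inside msdb.
EXEC msdb.dbo.sp_add_job @job_name = N'Nightly ETL';

EXEC msdb.dbo.sp_add_jobstep
    @job_name  = N'Nightly ETL',
    @step_name = N'Run parent package',
    @subsystem = N'SSIS',
    @command   = N'/SQL "\ETL\ParentPackage" /SERVER "PRODSQL"';

-- Target the local server so the Agent scheduler picks the job up.
EXEC msdb.dbo.sp_add_jobserver @job_name = N'Nightly ETL';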
Another question.
Assume that in the development environment there is a central SQL Server that stores all the packages in the msdb database. Suppose two developers are making modifications to a single package at the same time. How does that behave? Is parallel development supported in SSIS?
Hi all, I have a problem when deploying SSIS packages to SQL Server.
In the Deployment Wizard, I follow the steps to deploy my SSIS packages to SQL Server: I choose deploy to SQL Server, leave "Rely on server storage for encryption" unchecked, and then follow the steps through to the finish. Afterwards, when I select data from msdb (sysdtspackages90), the packagedata column has been encrypted, but I don't want this package to be encrypted.
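Whether packagedata is readable depends on the package's ProtectionLevel property rather than the wizard checkbox: with the default EncryptSensitiveWithUserKey, parts of the XML stay encrypted, while a package saved with DontSaveSensitive should be stored as plain XML. A quick check, assuming SQL Server 2005's sysdtspackages90:

-- Cast the stored image column to XML; this is human-readable when
-- the package was saved without encryption.
SELECT name,
       CAST(CAST(packagedata AS varbinary(max)) AS xml) AS package_xml
FROM msdb.dbo.sysdtspackages90;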
All the packages make connections to a SQL 2005 database which sits on the same instance to which the packages are being deployed. In the child packages these connection strings are set from a package-configuration variable, and in the parent package this variable exists correctly. The variable in the parent is itself defined in a package configuration that uses SQL Server as the store for the configuration.
When I deploy the packages, the connections in the child packages fail: one connection by timing out, another with an acquire-connection failure.
I have set DelayValidation to false on all connections, but it made no difference. I have also increased the timeout to 180s, but it still fails. When I deploy the parent package on its own it succeeds; however, when the child packages are deployed on their own, these connections again fail to validate, due to a timeout and a connection failure, even though they use the same connections defined in the parent.
The child packages are no larger than other single packages that I have deployed successfully with the same connection to the same server.
Is there anything special that I should be aware of regarding deployment of parent and child packages?
In SSIS you can create subfolders under the MSDB folder. However, when deploying an SSIS solution to the server, it seems you cannot specify a subfolder: when selecting MSDB as the target, all packages in the solution end up directly underneath the MSDB folder.
I'm deploying the packages to SQL Server using the deployment utility, and it goes through. However, I wish to organize the packages into folders under the SSIS server's MSDB node.
I'm able to create folders under the MSDB node (by default, "Maintenance Plans" exists), but while deploying the package to the server I'm unable to specify the folder into which the package should be deployed.
Is this a known gap in the deployment wizard, or am I missing something?
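This appears to be a known limitation of the 2005 Deployment Wizard, which only targets the MSDB root. A common workaround is to deploy with dtutil, whose /COPY destination accepts a folder path, or to create folders and move packages with the msdb stored procedures. A hedged sketch, assuming SQL Server 2005 (where the procedure is named sp_dts_addfolder; later versions rename it sp_ssis_addfolder) and an invented folder name:

-- Create a folder named 'ETL' directly under the MSDB root
-- (the root's parent folder id is the zero GUID).
EXEC msdb.dbo.sp_dts_addfolder
    @parentfolderid = '00000000-0000-0000-0000-000000000000',
    @name = N'ETL';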
I am getting errors trying to deploy a dtsx created by MS (the Reporting Services execution log one), to which I have made zero changes, but it is not working (two errors shown below).
error from deployment wizard:
===================================
Could not save the package "H:\SSIS\RSlog\RSExecutionLog_Update\bin\Deployment\RSExecutionLog_Update.dtsx" to SQL Server "xxxxxxxxxxx". (Package Installation Wizard)
===================================
The SaveToSQLServer method has encountered OLE DB error code 0x80004005 (Login timeout expired). The SQL statement that was issued has failed.
I have a Hits table that tracks the hits on the id of a Link table:

Hits: linkId int (foreign key to Link), dateCreated datetime, ip varchar(50)

We recently had to merge Links from different systems that are implemented similarly. As a result, all the linkIds are now wrong in the Hits table because the ids all changed. I managed to track down all the old ids and their associated new ids and have them in a table that I call joined_links:

joined_links: oldId int, newId int
So, how do I do a mass update of these linkIds in the Hits table in SQL? I know I could do it in .NET, but I'd rather not write an app that runs thousands of update statements. There's gotta be a way to do it something like this:
UPDATE Hits h SET h.linkId = (SELECT newId FROM joined_links WHERE oldId = h.linkId)

but obviously I don't have visibility of that linkId in the subselect... a loop, maybe?
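No loop is needed: T-SQL's UPDATE ... FROM extension lets you join the two tables and fix everything in one set-based statement. A sketch against the tables as described above:

-- Rewrite every old linkId to its new value in a single statement.
UPDATE h
SET    h.linkId = j.newId
FROM   Hits h
       INNER JOIN joined_links j ON j.oldId = h.linkId;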
I have a series of DTS packages. Each package has 20 queries, and each query has a server name. Is there a way to change the server name without editing each query in each DTS package? I'd like to copy the template DTS package and then perform the modification.
This is a run-of-the-mill application that moves orders from one table to another. There are two tables, OrderSummary and HstOrders.

OrderSummary has the following columns: Identifier, FollowupId, OrderNumber, OrderReference, OrderReferenceOrigin, ...

HstOrders has the following columns: Identifier, OrderNumber, OrderReference, OrderType, ...

The two tables are bound by Identifier. After each month end, orders are moved from OrderSummary to HstOrders.
Now my task is to update all rows in OrderSummary with the order details as seen in HstOrders for OrderType = 'CREDIT': OrderReferenceOrigin (in OrderSummary) should be updated with the value of OrderReference (from HstOrders).
I have to update each row one at a time, and I need to write a cursor for mass updates where OrderSummary.Identifier = HstOrders.Identifier.
Can someone please help with writing this update statement? I have never written a cursor.
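A cursor is not actually required here: a set-based UPDATE with a join does the whole thing in one statement. A sketch against the columns described above:

-- Copy OrderReference from history into OrderReferenceOrigin
-- for every matching CREDIT order.
UPDATE os
SET    os.OrderReferenceOrigin = h.OrderReference
FROM   OrderSummary os
       INNER JOIN HstOrders h ON h.Identifier = os.Identifier
WHERE  h.OrderType = 'CREDIT';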
I'm fairly new to SQL Server and I'm wondering if it's possible to update statistics for all indexes somehow. I'm looking at the UPDATE STATISTICS command and it doesn't seem to be possible.
The situation we have is a reporting DB that basically has all its tables truncated and remade every night by DTS jobs that import from another data source, change the data, and build some denormalized tables etc. Some of the large insert operations go from taking 8 minutes to taking several hours sometimes, and updating the stats seems to fix the problem for a while. So I'd like to make sure the optimizer has the latest stats for all tables.
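There are two built-in ways to refresh statistics database-wide: sp_updatestats, which touches every user table, or a loop over tables with UPDATE STATISTICS ... WITH FULLSCAN for a more thorough (and slower) sample. For example:

-- Lightweight: update statistics on all tables in the current database.
EXEC sp_updatestats;

-- Heavier: full-scan statistics per table, via the undocumented but
-- long-standing sp_MSforeachtable helper.
EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN';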
I have a user table with a single integer column: no indexes, no identities, nothing. I have to insert 600,000 rows via a client app. In tests using BCP/Bulk Insert/DTS, it all runs OK (sub 3 seconds). However, the app inserts 5,000 rows a second [considerably slower]. I can mimic this slow performance within DTS by removing the 'Fast Load' and 'Batch' options.
Question: why would the SQL insert run slower on one server and as fast as BCP/Bulk Insert/DTS on another? What can I check on the 'slow' server? Might there be a file version anomaly?
Does anyone know the best way to do mass updates in SQL Server? I am currently using the methodology suggested in this article:
http://www.tek-tips.com/faqs.cfm?fid=3141
But the article assumes that once I update a field, it is going to have a value that is NOT NULL, so I can loop through and update the rows that have a NOT NULL value. But my updated rows do contain NULL values; in this case, what is the best way to go about this???
***************************************
Here is my code. I want to avoid using Upd_flag, because after the following code runs I need to reset that flag before I run my next query.
***************************************
--Set rowcount to 50000 to limit the number of rows updated per batch
SET ROWCOUNT 50000

--Declare a variable for the row count
DECLARE @rc int
SET @rc = 50000

WHILE @rc = 50000
BEGIN

    BEGIN TRANSACTION

    --Use tablockx and holdlock to obtain and hold an immediate
    --exclusive table lock. This usually speeds the update because
    --only one lock is needed.
    UPDATE t_PGBA_DTL WITH (TABLOCKX, HOLDLOCK)
    SET    t_PGBA_DTL.procedur = A.[Proc code],
           t_PGBA_DTL.Upd_flag = 1
    FROM   t_PGBA_DTL
           INNER JOIN CPT_HCPCS_I9_PROC_CODES A
               ON t_PGBA_DTL.PROC_CD = A.[Proc code]
    WHERE  t_PGBA_DTL.Upd_flag = 0

    --Get the number of rows updated; the process continues until a
    --batch updates fewer than 50000 rows
    SELECT @rc = @@ROWCOUNT

    COMMIT TRANSACTION

END

--Restore normal row-count behaviour
SET ROWCOUNT 0
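If the goal is to drop Upd_flag entirely, one alternative is to walk the table in primary-key ranges, so each batch is defined by key values rather than by a flag that must be reset afterwards. A rough sketch, assuming a hypothetical integer primary key id on t_PGBA_DTL (the key column is invented for illustration):

DECLARE @lastId int, @maxId int, @batch int
SET @batch = 50000
SELECT @lastId = 0, @maxId = MAX(id) FROM t_PGBA_DTL

WHILE @lastId < @maxId
BEGIN
    --Update only rows whose key falls in the current slice;
    --no flag column is needed to remember progress.
    UPDATE t
    SET    t.procedur = A.[Proc code]
    FROM   t_PGBA_DTL t
           INNER JOIN CPT_HCPCS_I9_PROC_CODES A
               ON t.PROC_CD = A.[Proc code]
    WHERE  t.id > @lastId AND t.id <= @lastId + @batch

    SET @lastId = @lastId + @batch
END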
I have the following problem: I need to insert 100,000 records (50 KB each) in one operation from a VB program into a SQL Server 2005 database. All of these records will be ready for inserting at the same time. How do I make this insert one big transaction instead of 100,000 small ones?
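By default, each INSERT sent on its own connection round-trip is auto-committed, which is what produces 100,000 small transactions. Opening an explicit transaction on the connection and committing once at the end turns them into one. The pattern in T-SQL terms (issue the same commands from the VB connection; the table and columns below are placeholders):

BEGIN TRANSACTION;

-- ... issue all 100,000 INSERT statements on this same connection ...
INSERT INTO dbo.MyTable (Col1, Col2) VALUES (1, N'example');

COMMIT TRANSACTION;

For this volume, a bulk API is usually faster still: SqlBulkCopy from ADO.NET 2.0, or BULK INSERT from a staged file.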
The company I work for changed names, and all email addresses within the company have changed. While it was OK for a while, they are no longer going to forward email from the old addresses to the new ones. There are tons of subscriptions and tons of email addresses that need to be changed to the new names.
If I could find the table with the TO: part of the subscription held in it, I could just run an update on that field and it would be solved... however, I cannot find that field.
So, without going into every subscription in Report Manager, how can I change the email addresses? Any suggestions?
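The recipient list isn't a plain column: it lives inside the ExtensionSettings XML of the Subscriptions table in the ReportServer catalog database. Editing the catalog directly is unsupported, so back it up first, but a string replace over that XML is the usual shortcut. A hedged sketch, with invented old and new domains:

-- Replace the old domain with the new one inside every subscription's
-- delivery settings (ExtensionSettings is ntext, so cast to
-- nvarchar(max) for REPLACE and cast back afterwards).
UPDATE ReportServer.dbo.Subscriptions
SET ExtensionSettings =
    CAST(REPLACE(CAST(ExtensionSettings AS nvarchar(max)),
                 N'@oldcompany.com',
                 N'@newcompany.com') AS ntext);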
I am making a program that needs to import many records from a spreadsheet on a local computer, through ASP.NET, into SQL Server. Is there a simple command to do this, or is there information on how to do this? Please give all the information that you can. Thank you.
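There is no single command that reaches from the browser into SQL Server. The usual pattern is to upload the file in ASP.NET, save it server-side (for example as CSV), and then bulk-load it; from ADO.NET, SqlBulkCopy does the same job in code. A sketch of the loading half, with a made-up path and table:

-- Load a server-side CSV file into an existing table in one shot.
BULK INSERT dbo.ImportedRows
FROM 'C:\uploads\spreadsheet.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);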
I want to back up my hourly transaction log backups directly to a mass storage unit as opposed to the local server. However, when trying to set this up, it only gives me the option of backing up to local drives, even though I have a drive mapping to the mass storage unit. I'm sure there is a simple way around this. Would appreciate any advice. Many thanks.
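Mapped drive letters belong to an interactive login session, and the SQL Server service typically cannot see them, which is why only local drives are offered. Backing up to a UNC path usually works, provided the service account has write access to the share. For example (share name invented):

-- Back up the log straight to the storage unit via a UNC path;
-- the SQL Server service account needs rights on the share.
BACKUP LOG MyDatabase
TO DISK = N'\\storageunit\sqlbackups\MyDatabase_log.trn';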
I was wondering if anyone knows of any way, including third-party tools, to replicate a design change on a table across many different databases. I have written an ASP script that allows me to copy multiple tables to multiple databases in one go, but I need something that will replicate the design of a table by comparing source and destination tables. So far I have scripted most of it, but I have no idea which system table holds the identity information, and it seems there must be an easier way!
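For the identity part, COLUMNPROPERTY reports whether a column is an identity (on older versions, the status bits in syscolumns carry the same information). And for pushing the same DDL everywhere, the undocumented sp_MSforeachdb helper loops over databases. A sketch with a made-up table and column change:

-- Is a given column an identity column? (1 = yes, 0 = no)
SELECT COLUMNPROPERTY(OBJECT_ID('dbo.MyTable'), 'MyColumn', 'IsIdentity');

-- Apply the same design change in every database that has the table.
EXEC sp_MSforeachdb
    'USE [?];
     IF OBJECT_ID(''dbo.MyTable'') IS NOT NULL
         ALTER TABLE dbo.MyTable ADD AuditUser varchar(50) NULL;';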
If I have a trigger on a field in a table and I update one record, the trigger fires properly. If I do an update to that same field on all records in the table, the trigger does not fire. Is the error in the trigger, or do I need to change my update statement?
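A trigger fires once per statement, not once per row, so a mass update fires it a single time with all affected rows in the inserted/deleted pseudo-tables. Triggers written to handle exactly one row (for example, by selecting a single value into a variable) misbehave on multi-row updates. A set-based pattern that handles any number of rows, with hypothetical table and column names:

CREATE TRIGGER trg_MyTable_Audit ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    -- One statement-level firing: log every updated row by joining
    -- the inserted pseudo-table, never by assuming a single row.
    INSERT INTO dbo.MyTable_Audit (Id, ChangedAt)
    SELECT i.Id, GETDATE()
    FROM   inserted i;
END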
I have an existing SQL table that I want to import 300,000 rows into. I have copied the table headers into Excel and added all the rows, but I constantly get multiple errors no matter which way I try to import it.
So far I have tried the import wizard in SQL Management Studio, BULK INSERT, and a SQL script. What is the easiest way?
If I try this
select * into Coupon FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=D:\coupon.xls;HDR=YES', 'SELECT * FROM [coupon$]')
I get
OLE DB provider 'Microsoft.Jet.OLEDB.4.0' cannot be used for distributed queries because the provider is configured to run in single-threaded apartment mode.
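That message means the Jet provider is registered to disallow running inside the server process for distributed queries. Enabling "Allow inprocess" for the provider (under Server Objects > Linked Servers > Providers in Management Studio) is the usual fix; the T-SQL equivalent uses an undocumented helper, so treat this as a hedged sketch, and note that SQL 2005 also needs ad hoc access switched on:

-- Let the Jet provider run inside the SQL Server process.
EXEC master.dbo.sp_MSset_oledb_prop
    N'Microsoft.Jet.OLEDB.4.0', N'AllowInProcess', 1;

-- SQL 2005 also requires ad hoc distributed queries to be enabled.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;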
I've been having problems with my tempdb filling up and causing all databases on the server to stop functioning properly. I've been removing a lot of data lately (millions of rows), and I think this is why my tempdb log is going through an unusual load.
What's the best way to make sure tempdb doesn't fill up and cause me major problems? I had temporarily turned off backups while I was having a new HD put in. Am I right in thinking that when a DB is backed up, the tempdb log is reduced in size? Should maintaining a daily backup solution help keep things under control?
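One note: backing up user databases does not shrink tempdb. tempdb uses the simple recovery model, and its log is cleared by its own checkpoints, so the backup schedule is not the lever here; batching the big deletes and watching log use directly is more effective. A couple of standard checks:

-- Current log size and percent used for every database, tempdb included.
DBCC SQLPERF(LOGSPACE);

-- Force tempdb's log to truncate by issuing a checkpoint in it.
USE tempdb;
CHECKPOINT;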
Hi, I need to update a field in about 20 records in a table. The table has an update trigger (which updates the [lastedited] field whenever a record is updated). As a result I'm getting an error, "Subquery returned more than 1 value", and the update fails. Is there a way in the stored procedure to handle this issue? Thanks for your help. Paul
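That error is the classic sign of a single-row trigger: somewhere it assigns from the inserted pseudo-table into a scalar variable or subquery, which breaks as soon as a statement updates more than one row. Rather than working around it in the stored procedure, the cleaner fix is to make the trigger set-based. A hedged sketch, assuming a hypothetical key column id:

ALTER TRIGGER trg_MyTable_LastEdited ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    -- Stamp every row touched by the statement, however many there
    -- are, instead of assigning a single value to a variable.
    UPDATE t
    SET    t.lastedited = GETDATE()
    FROM   dbo.MyTable t
           INNER JOIN inserted i ON i.id = t.id;
END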
I've deleted about 3-4 million rows from one of my tables, as the data was old and no longer needed. The problem is that queries now run extra slow. I am in the process of running Tara's isp_ALTER_INDEX; however, it's taking quite a long time and, as expected, seems to be slowing things down even further while it runs. (It had been running 4 hours already; I have stopped it and will rerun it during a slower-traffic period for the DB server.)
Just wondering if I have the right approach here or if anyone else has any suggestions.
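After a mass delete, rebuilding the indexes and refreshing the statistics on just the affected table is usually enough, and is quicker than a whole-database maintenance script. In SQL 2005 syntax, for a hypothetical table name:

-- Rebuild all indexes on the table that lost the rows, then make
-- sure the optimizer sees fresh statistics.
ALTER INDEX ALL ON dbo.BigTable REBUILD;
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;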
Hello, I need to alter fields in all my tables of a given database, and I would like to do this via a T-SQL script. For example, I want to change all fields called SESSION_ID to char(6). The field is usually varchar(10), but the data is always 6 characters in length. I have several fixed-length fields that I want to move from varchar to char. I believe I can find all the tables where the field exists using

select * from INFORMATION_SCHEMA.columns where column_name = 'SESSION_Id'

but I don't know how to turn this into an ALTER TABLE ... ALTER COLUMN that can be automated. TIA, Rob
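A common trick is to have the query build the DDL for you: select the ALTER statements as strings from INFORMATION_SCHEMA.COLUMNS, then run the generated script. A sketch:

-- Generate one ALTER TABLE statement per table containing SESSION_ID,
-- preserving each column's nullability.
SELECT 'ALTER TABLE [' + TABLE_SCHEMA + '].[' + TABLE_NAME + ']'
     + ' ALTER COLUMN [' + COLUMN_NAME + '] char(6) '
     + CASE IS_NULLABLE WHEN 'YES' THEN 'NULL' ELSE 'NOT NULL' END
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'SESSION_ID';
-- Copy the result set into a query window and execute it.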
Say you have an existing populated SQL 2005 database with 700+ tables, and you want to change the order of the columns inside every table. Short of manually building conversion scripts, does anyone know an automated way to do this? I was thinking through ways to do them all in one shot, and I have tools like Erwin and DbGhost that could be used as well. Basically, I'm moving some standard audit columns from the end of the tables to just after the PK columns.
Hola! I'm currently building a site that uses an external database to store all the product details, and an internal database that will act as a cache so that we don't have to keep hitting the external database to retrieve the products every time a customer requests a list.

What I need to do is retrieve all these products from External and insert them into Internal if they don't exist; if they do already exist, then I have to update Internal with new prices, number in stock, etc.

I was wondering if there is a way to insert/update these products en masse without looping through and building a new insert/update query for every product; there could be thousands at a time! Does anyone have any ideas, or could you point me in the right direction?

I'm thinking that because I need to check whether the products exist in a different data store than the original source, I don't have a choice but to loop through them all.

Cheers, G.
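If the external rows can be staged into a table on the same server (via a linked server, SSIS, or a bulk load), the whole upsert collapses into two set-based statements, with no per-product loop. A sketch with invented table and column names; on SQL 2008 and later, a single MERGE statement would do the same:

-- 1) Refresh products that already exist in the cache.
UPDATE i
SET    i.Price = s.Price,
       i.NumberInStock = s.NumberInStock
FROM   Internal.dbo.Products i
       INNER JOIN Staging.dbo.Products s ON s.ProductId = i.ProductId;

-- 2) Add products the cache has never seen.
INSERT INTO Internal.dbo.Products (ProductId, Price, NumberInStock)
SELECT s.ProductId, s.Price, s.NumberInStock
FROM   Staging.dbo.Products s
WHERE  NOT EXISTS (SELECT 1
                   FROM Internal.dbo.Products i
                   WHERE i.ProductId = s.ProductId);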