Running SSIS On Different Servers Without Changing Connection Information
Nov 14, 2007
What is the best way to run SSIS packages on different servers without changing connection information? Our test server is ppntt140 and our production server is ppntd110. If I create a package on server ppntt140, what can I do so I can move it to server ppntd110 without changing any connection information? The database names are the same; it is just the server that changes. What is the best way to handle this? Thanks in advance.
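One common answer, sketched here with the SQL Server package-configuration mechanism: store the connection string in a configuration table on each server and point the package's configuration at it, so the package picks up whichever value is local. The table and column names below are the defaults SSIS generates; the filter, connection-manager name, and Initial Catalog are placeholders.

-- default table SSIS creates for a "SQL Server" package configuration;
-- on ppntt140 this row would hold ppntt140, on ppntd110 it holds ppntd110
UPDATE [SSIS Configurations]
SET    ConfiguredValue = N'Data Source=ppntd110;Initial Catalog=MyDB;Provider=SQLNCLI;Integrated Security=SSPI;'
WHERE  ConfigurationFilter = N'MainConnection'
  AND  PackagePath = N'\Package.Connections[MyDB].Properties[ConnectionString]';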
Hi all, I have a package that executes another 4 packages, and I am able to execute these packages in ASP.NET.
I want to display status notifications for the executing packages in the web page (while the packages are being executed, I want to display which package is executing and what its status is: is it still executing, and is it raising any errors?). Everything should be shown in the front end. It's not my requirement; my client asked me to do it!
So please help me: how do I display this status information for the running packages in ASP.NET (C#)?
(I'll give you one good chocolate if you give me a good solution!)
Post your mails to vivekanandaraj.natarajan@honeywell.com. Thank you a lot!
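One approach, sketched rather than prescribed: enable the SSIS log provider for SQL Server on each package and have the ASP.NET page poll the log table. On SQL Server 2005 the default log table is dbo.sysdtslog90; the one-hour window below is arbitrary.

-- a package with a PackageEnd row has finished; one with OnError rows has failed
-- (note the OnError row's source names the failing task, not the package);
-- anything else that has logged recently is still running
SELECT source AS package_name,
       MAX(CASE WHEN event = 'OnError' THEN 1 ELSE 0 END) AS has_errors,
       MAX(CASE WHEN event = 'PackageEnd' THEN 1 ELSE 0 END) AS has_finished,
       MAX(starttime) AS last_event_time
FROM dbo.sysdtslog90
WHERE event IN ('PackageStart', 'PackageEnd', 'OnError')
  AND starttime > DATEADD(hour, -1, GETDATE())
GROUP BY source;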
When running a package in VS you can see something like this in the output window:
SSIS package "logging.dtsx" starting. Information: 0x40043006 at Data Flow Task, DTS.Pipeline: Prepare for Execute phase is beginning. Information: 0x40043007 at Data Flow Task, DTS.Pipeline: Pre-Execute phase is beginning. Information: 0x402090DC at Data Flow Task, Flat File Source [1]: The processing of file "C: est ssis loggingad_data1.txt" has started. Information: 0x4004300C at Data Flow Task, DTS.Pipeline: Execute phase is beginning. Warning: 0x8020200F at Data Flow Task, Flat File Source [1]: There is a partial row at the end of the file. Information: 0x402090DE at Data Flow Task, Flat File Source [1]: The total number of data rows processed for file "C: est ssis loggingad_data1.txt" is 477. Information: 0x402090DF at Data Flow Task, OLE DB Destination [1011]: The final commit for the data insertion has started. Information: 0x402090E0 at Data Flow Task, OLE DB Destination [1011]: The final commit for the data insertion has ended. Information: 0x40043008 at Data Flow Task, DTS.Pipeline: Post Execute phase is beginning. Information: 0x402090DD at Data Flow Task, Flat File Source [1]: The processing of file "C: est ssis loggingad_data1.txt" has ended. Information: 0x40043009 at Data Flow Task, DTS.Pipeline: Cleanup phase is beginning. Information: 0x4004300B at Data Flow Task, DTS.Pipeline: "component "DataReaderDest" (87)" wrote 0 rows. Information: 0x4004300B at Data Flow Task, DTS.Pipeline: "component "OLE DB Destination" (1011)" wrote 1 rows. SSIS package "logging.dtsx" finished: Success.
This is exactly what I need to see when a package is running, but I want to be able to see it without using Visual Studio. I would do it in Reporting Services, but I need to find out how to get at the information. The SSIS logging feature in a package does not seem to provide that kind of info.
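For what it's worth, the SQL Server log provider can capture most of those messages once the OnInformation, OnWarning, OnError, OnPreExecute, and OnPostExecute events are ticked in the package's logging dialog; on SQL Server 2005 they land in dbo.sysdtslog90, which a Reporting Services report can read directly. A minimal query for the latest run, assuming the default table:

SELECT source, event, starttime, message
FROM dbo.sysdtslog90
WHERE executionid = (SELECT TOP 1 executionid
                     FROM dbo.sysdtslog90
                     WHERE event = 'PackageStart'
                     ORDER BY starttime DESC)
ORDER BY starttime, id;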
I want to build an SSIS package against Oracle and deploy it to a number of Oracle databases; as it stands, every time I deploy I have to open the package and change the connection information.
How can I make the Oracle connection information variable-driven, so that when I deploy my package against an Oracle database it picks up all the Oracle connection information (user ID, password, server name) automatically?
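One pattern, sketched with made-up names: keep the connection parts in a table, read them into package variables with an Execute SQL Task, and build the Oracle connection manager's ConnectionString from those variables with a property expression.

CREATE TABLE dbo.OracleTargets (
    environment VARCHAR(20)  NOT NULL PRIMARY KEY,
    server_name VARCHAR(128) NOT NULL,
    user_id     VARCHAR(128) NOT NULL,
    password    VARCHAR(128) NOT NULL  -- better kept out of a plain table in real use
);

-- Execute SQL Task query; the ? parameter maps to a package variable
SELECT server_name, user_id, password
FROM dbo.OracleTargets
WHERE environment = ?;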
I created an SSIS package for ETL on my own machine. During development the database was also on my machine, and an OLE DB connection to it was defined within the package in BI Development Studio. Everything worked well, both in debug mode and when testing the package itself.
Finally I needed to use the package to load data into a database on a different machine.
I tried several scenarios:
1) I simply copied the package file to the destination machine, opened it for execution, manually edited the connection string in the "Connection Managers" section (changed the server name and Initial Catalog), and tried to execute.
2) On the destination machine I manually created an OLE DB connection (using Microsoft Data Link) to the different database (the test succeeded), changed the extension of the connection file from 'udl' to 'txt', and copied its connection string into the connection string field in the "Connection Managers" section (as in variant 1).
3) I used Package Configurations: copied the deployment to the destination machine, installed the package as described here - http://msdn2.microsoft.com/en-us/library/ms365338.aspx - changed the exported properties (server name, Initial Catalog, and also the whole connection string), and tried to execute.
In all cases I received the same execution error message:
"Errors in the metadata manager. Either the database with ID of 'OLD_DATABASE_NAME' does not exist in the server with ID of 'NEW_SERVER_NAME' or the user does not have permissions to access the object."
As for the access (username/password) settings, they are the same on both machines, and I have the same administrative rights on both. What's more, with those same rights the OLE DB connection made manually in variant 2 succeeded! So I don't think the problem is there.
As for the error message, I think the OLD database name (Initial Catalog) is still saved somewhere, even though I tried to change it, while the NEW value for the server name is being substituted.
Please help me; I don't know what else I can try, and this is not an isolated case in my practice. So I suspect something is wrong in my approach.
I have installed MS SQL Server 2005 and Service Pack 2 on a new Windows 32-bit environment. I also installed the 32-bit 10g client (the Administrator option). The tnsnames.ora file is configured properly and I can tnsping to the Oracle database without any issues. I have created a package in Visual Studio and it runs successfully when I execute the job manually. I saved the package to the MS SQL Server and when I log into Management Studio to create a job for this package, I receive the following error:
Message
Executed as user: xxxSYSTEM. ...rsion 9.00.3042.00 for 32-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved.
Started: 10:42:37 AM
Error: 2007-10-10 10:42:37.28 Code: 0xC0202009 Source: x Connection manager "x.x" Description: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft OLE DB Provider for Oracle" Hresult: 0x80004005 Description: "Oracle client and networking components were not found. These components are supplied by Oracle Corporation and are part of the Oracle Version 7.3.3 or later client software installation. Provider is unable to function until these components are installed.". End Error
Error: 2007-10-10 10:42:37.28 Code: 0xC020801C Source: Data Flow Task OLE DB Source [18] Description: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method ca... The package execution fa... The step failed.
I created the package and logged into the server with the same ID to set up the job, and I have set the ProtectionLevel property to "DontSaveSensitive" since I know this has been an issue in the past when I tried automating the job. I've found four posts regarding this error message on the MSDN forums that aren't helpful at this point; I'm hoping that someone has found a solution since those posts were written. Any suggestions are welcome.
I have a SQL database (10000 records / 50 users) that stores information on faxes that we receive. This will sound odd but the last few days some weird things have happened. Chunks of data change arbitrarily - no script that I have can make these changes. Couple examples:
1) the number of pages will change from 1 to 3 - not every entry, just a group, and usually ones that have a particular status; 2) the status will change from CTW to PFR - not every CTW will change, just a chunk.
There appears to be no pattern to these changes.
Again, no script that I have can make these changes. This happened 6 months ago, and I reinstalled service packs (NT and SQL), reinstalled data components, and defragged the hard drive. The problem stopped, either because of one of those actions or on its own. I have done all of these things this time, but nothing has worked. Anyone have any ideas? Any help is appreciated.
Our project is on SSIS 2012 and we are using the project deployment model. We have parameterized the connection managers, created environments and environment variables, and configured the references. Hence, when we deploy the solution to higher environments, the connection strings are picked up from the environments and not from the values stored in the connection managers.
However, we face issues when, in the development environment, we need to run the same package but with values entered manually in the connection manager. Even though the connection details are correct, when we execute the package from Visual Studio, SSIS is not able to connect to the database. Is there any way to have SSIS prompt for the connection details after we click "Execute Task" or "Start" in Visual Studio?
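As far as I know there is no such prompt in Visual Studio, but for ad-hoc runs with different values you can start the package from the SSIS catalog and override the connection manager per execution (folder, project, package, and connection names below are made up):

DECLARE @exec_id BIGINT;
EXEC SSISDB.catalog.create_execution
     @folder_name = N'MyFolder', @project_name = N'MyProject',
     @package_name = N'MyPackage.dtsx', @use32bitruntime = 0,
     @reference_id = NULL, @execution_id = @exec_id OUTPUT;
-- override a project-level connection manager just for this run
EXEC SSISDB.catalog.set_execution_parameter_value @exec_id,
     @object_type = 20,
     @parameter_name = N'CM.MyConn.ConnectionString',
     @parameter_value = N'Data Source=DEVSERVER;Initial Catalog=MyDB;Integrated Security=SSPI;';
EXEC SSISDB.catalog.start_execution @exec_id;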
I have read all the FAQs on restore and find myself still confused, so I apologize if the information is there and I am missing it. I want to move the databases from an old server to a new server; the new server will have the same name and IP address once the databases are transferred. Everything I have read says do a full backup and then restore onto the new server. That makes sense so far; my question is, how does the master database get handled? Is it necessary to restore it? If so, what is the best way to go about it?
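Yes, master can be restored, and with a same-named replacement server it is usually straightforward. A sketch of the documented procedure (the backup path is hypothetical):

-- 1) start the new instance in single-user mode (sqlservr.exe -m), then,
--    from a single dedicated connection:
RESTORE DATABASE master
FROM DISK = N'D:\backups\master_full.bak'
WITH REPLACE;
-- the instance shuts down when the restore completes; restart it normally,
-- then restore msdb and the user databases the usual way.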
I have a group of about 5 servers (which will likely grow to about 25 in the near future) with their names listed in a table in a database on one of the servers. I want to query all servers in that table using the following query to pull the storage drive, database name, created date, age, and size of the databases for each server listed in the table:
SELECT LEFT(mf.physical_name, 2) AS Storage_Drive,
       DB_NAME(mf.database_id) AS DatabaseName,
       db.create_date,
       DATEDIFF(day, db.create_date, GETDATE()) AS Age,
       SUM(mf.size * 8) / 1024 AS SizeMB
FROM sys.master_files AS mf
JOIN sys.databases AS db ON db.database_id = mf.database_id
GROUP BY LEFT(mf.physical_name, 2), mf.database_id, db.create_date;
How would I best accomplish this if I want to implement it using a TSQL procedure?
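A sketch of one way, assuming a linked server (with RPC Out enabled) already exists for each name in the list; all object names here are hypothetical, and the remote servers must be SQL 2005+ for sys.master_files:

DECLARE @server SYSNAME, @cmd NVARCHAR(MAX);
DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT server_name FROM dbo.ServerList;
OPEN c;
FETCH NEXT FROM c INTO @server;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- the remote query is wrapped in dynamic SQL so the AT clause can vary per server
    SET @cmd = N'INSERT INTO dbo.DbInventory (Storage_Drive, DatabaseName, create_date, Age, SizeMB)
EXEC (N''SELECT LEFT(mf.physical_name, 2), DB_NAME(mf.database_id), db.create_date,
       DATEDIFF(day, db.create_date, GETDATE()), SUM(mf.size * 8) / 1024
FROM sys.master_files mf JOIN sys.databases db ON db.database_id = mf.database_id
GROUP BY LEFT(mf.physical_name, 2), mf.database_id, db.create_date'') AT ' + QUOTENAME(@server) + N';';
    EXEC sp_executesql @cmd;
    FETCH NEXT FROM c INTO @server;
END
CLOSE c;
DEALLOCATE c;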
I've run into a problem attempting to change my service account on the clustered servers from an administrative account to a non-privileged account under SQL Server 2005 Enterprise Edition. When I change the login properties in Configuration Manager I get the following error:
"The user already belongs to this group"
I'm then prevented from making any changes to the service account. I don't know what I'm supposed to do at this point to resolve the problem, so any assistance will be greatly appreciated.
My application works under the Internet URL "http://servername/test/sqlcesa30.dll", but I am getting the following error for "https://servername/test/sqlcesa30.dll":
"A request to send data to the computer running IIS has failed. For more information, see HRESULT."
We have a proxy server in front of the web server, and the certificate is installed on the proxy server; there is no certificate installed on the web server, which is behind the firewall. The proxy server routes each request according to whether it is an "http" or "https" request and forwards it to the web server behind the firewall. I get no error when I use RDA over http in this environment, but when I use RDA over https I get the above error in the same environment.
I want to keep applications off of my database server, so I have set up an application server (APPServer1). On APPServer1 I have a batch file that bcp's data from DBServer1 into DBServer2 and is passed the server name of DBServer2. On DBServer1 I have mapped a drive to the directory on APPServer1 and have created a task to run the batch job and pass the server name. So here's my problem: when the scheduler runs the job, the bcp to DBServer2 fails because it cannot find DBServer2. When I execute the exact same command line in a DOS box on DBServer1, the bcp works fine. I have verified that the server name is being passed correctly to the batch job in both methods.
I had a website that ran with one server that had SQL 2000 as the DB. I have upgraded to two webservers with MS SQL 2000 on both machines and a load balancer in front of the two servers to handle sending requests to the server with the least load.
My problem is, I am not sure how to get started with the second SQL Server. I need to make sure both DBs are exact duplicates of each other, so that if a user hits one server they pull up the same info as a user hitting the other server.
I have a lot of queries running on the ASP pages I use, several pages use search forms for many various processes, and I have several pages that are for users to input information that is to be stored and sent out via email to all the other members.
I am not by any means a SQL guru of any kind; I am lucky I know what it is.
So, can anyone point me in the right direction as to what I should look into to find the best possible process to keep these DBs mirrored?
Now these servers are on their own private network using a second NIC card, I did this so the SQL can talk between the servers without using the public network bandwidth.
How do I keep these SQL DBs duplicated? What are some great suggestions? Have you done this before? If so, what has worked best for you? Is it hard to do for someone with limited DB experience? (I learn really quickly if I know what I need to learn.)
At this point I am confused as to what the best practice would be. Help?
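For what it's worth, SQL Server 2000's built-in transactional replication (or log shipping, if a read-only copy is acceptable) is the usual answer here. As a low-tech sketch of the idea, a scheduled job can also pull new rows over a linked server; every name below is made up, and this assumes rows are only ever inserted, never updated:

-- nightly pull of rows the peer has that we don't (assumes an immutable key)
INSERT INTO dbo.Members (MemberID, Name, Email)
SELECT r.MemberID, r.Name, r.Email
FROM [PEERSERVER].SiteDB.dbo.Members AS r
WHERE NOT EXISTS (SELECT 1 FROM dbo.Members AS l WHERE l.MemberID = r.MemberID);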
I have zero experience running any databases that spread further than 1 machine, so I have a few theory questions here that hopefully someone can help with. Hopefully this is the right forum, I'm not sure if it classifies as 'clustering'.
Anyways, we are launching a web app that is going to start with just 1 webserver/db server. For speed reasons, after some growth we might have to have a load balanced setup with a webserver in europe and one in north america. Basically the webservers are going to be serving 100,000's of files and each time a file is served it needs to be recorded in the database.
I think that if I'm connecting my European webserver across the internet to my DB server, that defeats the purpose of having a webserver in Europe to give faster responses.
I am thinking that this European web server/DB server is only going to be logging the files served. Is there a way to import those logs into the North American database every night?
I'm not sure what the best approach would be for something like this, but any suggestions are greatly appreciated.
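Yes; one sketch, with all names invented: give the European box its own local log database and run a nightly job on the North American server that pulls everything past a watermark over a linked server.

-- nightly job on the North American server
DECLARE @since DATETIME;
SELECT @since = ISNULL(MAX(ServedAt), '19000101') FROM dbo.FileServeLog;

INSERT INTO dbo.FileServeLog (FileID, ServedAt, ClientIP)
SELECT e.FileID, e.ServedAt, e.ClientIP
FROM [EUROPE01].WebLogs.dbo.FileServeLog AS e
WHERE e.ServedAt > @since;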
I've defined a System DSN (and it works just fine for the classic ASP web site), and for a separate project I would very much like to determine what server it's connecting to.
My environment is .Net 2.0.
On some boxes, the DSN is configured to connect to SQL Server "George" and on others it's configured to connect to "Carl." Both servers have a database named Lighhouse. I want to connect to another DB using the newer SqlConnection/SqlCommand objects, which don't support an ODBC DSN in the connection string. However, when constructing my connection string, I need to connect to the same server that's defined in the DSN.
Is there a way extract the information from the DSN?
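For what it's worth, a System DSN lives in the registry, so the server name can be read from there at runtime (the DSN name is a placeholder; on 64-bit Windows, 32-bit DSNs sit under SOFTWARE\Wow6432Node\ODBC instead):

HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\<your DSN name>   -- the "Server" value holds the server name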
We have 1 machine running SQL Server 2005 x64 (64 bit). The other machine is backing this SQL Server up. It is a 32-bit machine, so it requires (?) a 32-bit version of SQL Server 2005.
Would this backup machine work correctly with a database previously managed by the 64-bit machine, and vice versa?
We have an environment with 7 servers that are running replication with one another, and I'm wondering if there are any tools or experiences any of you might have that could assist in auditing these servers. The data should be in sync across the board for all tables, but problems can arise, such as replication not being set up properly, stored procedures being out of sync, or data glitches, etc.
In dealing with these issues we have an in-house program which analyzes each table on each server, takes a snapshot, and does a column-by-column compare. We also have another program that will sync the data up (basically a delete/insert statement on the publisher). This process can take up to 3 weeks for our quarterly update of every table. I'm wondering if anybody has used any tools, such as SSIS or a third-party tool, and has done or is doing something similar to what we are doing now. If so, are there any tips you wouldn't mind sharing on how our process might be sped up?
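For what it's worth, SQL Server 2005 ships a tablediff.exe utility for exactly this kind of row-by-row comparison of replicated tables. As a cheaper first pass, a checksum query run on each server can flag the tables that have drifted before doing the expensive compare (the table name is hypothetical):

-- run on the publisher and each subscriber; differing results mean the table needs a closer look
SELECT COUNT(*) AS row_count,
       CHECKSUM_AGG(BINARY_CHECKSUM(*)) AS table_checksum
FROM dbo.SomeReplicatedTable WITH (NOLOCK);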
declare @x varbinary(128)
select @x = convert(varbinary(128), 'Some user name')
set context_info @x
Then in a trigger, you can say:
UPDATE t
SET who_was_kilroy = convert(varchar(128), p.context_info)
FROM inserted i
JOIN tbl t ON i.keycol = t.keycol
CROSS JOIN master.dbo.sysprocesses p
WHERE p.spid = @@spid
I am searching for a more generic way (varbinary(128) is not big enough) to store and access connection-wide variables.
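A sketch of a roomier alternative, keyed by @@SPID (all names made up; note that spids are reused, so rows must be cleared when a connection finishes):

CREATE TABLE dbo.SessionVars (
    spid  INT         NOT NULL,
    name  SYSNAME     NOT NULL,
    value SQL_VARIANT NULL,
    CONSTRAINT PK_SessionVars PRIMARY KEY (spid, name)
);

-- setter, run by the application after connecting
DELETE FROM dbo.SessionVars WHERE spid = @@SPID AND name = N'user_name';
INSERT INTO dbo.SessionVars (spid, name, value)
VALUES (@@SPID, N'user_name', N'Some user name');

-- getter, e.g. inside the trigger
SELECT value FROM dbo.SessionVars WHERE spid = @@SPID AND name = N'user_name';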
We have a main package which calls 2 other packages. The first package contains a connection and uses a Data Flow task. The data flow task has an OLE DB source that gets its columns from a stored procedure, and the output needs to be written to a flat file.
The second package contains the same things (the same tasks, database, and stored procedure call). The difference is in the stored procedure parameters: depending on the parameters, the stored procedure returns different columns and rows. When we try to get the second package's output in the OLE DB source, it shows all the columns from the output of the first package, because it stores external metadata.
So my understanding is that the connection to the same database keeps the external metadata information with the connection, and because of that the OLE DB source in the second package always gets the same output columns.
How do I get the correct output from the second package in this case? Or, if we don't want to store external metadata with the connection, is that possible? If yes, then how?
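For what it's worth, SSIS keeps external column metadata on each source component, not on the connection manager, so each package's OLE DB source carries its own copy and needs its own refresh; a stored procedure whose shape changes with its parameters is hard for the designer to sniff either way. On SQL Server 2012 and later, one option is to pin the shape per call (the names are hypothetical):

-- tell the source exactly which columns this call returns
EXEC dbo.usp_GetData @Mode = 2
WITH RESULT SETS
(
    (ColA INT, ColB NVARCHAR(50))
);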
In a Data Flow Task, I have an insert that occurs into a SQL Server 2000 table from a fixed-width flat file. The SQL Server table that the data goes into is accessed through an OLE DB connection manager that uses the Native OLE DB\Microsoft OLE DB Provider for SQL Server.
In the OLE DB Destination, I changed the access mode from Table or View - fast load to Table or View because I needed to implement OLE DB Destination Error Output. The Error output goes to a SQL Server 2000 table that uses the same connection manager.
The OLE DB Destination Editor Error Output 'Error' option is configured to 'Redirect' the row. 'Set this value to selected cells' is set to 'Fail component'.
Was changing the access mode the simple reason why the insert from the flat file takes so much longer, or could there be other problems?
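Quite possibly, yes: with fast load the destination sends one bulk operation per pipeline buffer, while plain Table or View issues one fully logged insert per row, which is dramatically slower on any real volume. Roughly, the two modes amount to this on the server side (illustrative T-SQL only, not what SSIS literally emits; names are made up):

-- fast load: one bulk operation per buffer
BULK INSERT dbo.Target FROM 'C:\data\file.txt' WITH (TABLOCK);

-- plain "Table or view": one round trip per row
DECLARE @p1 INT, @p2 VARCHAR(50);
SELECT @p1 = 1, @p2 = 'row value';
INSERT INTO dbo.Target (Col1, Col2) VALUES (@p1, @p2);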
Hi. While hardening an MS SQL 2000 server, I ran into a problem and I'm completely lost! A few days of reading and Google searches didn't give me any hint...
Here's the scenario: MS SQL is connected to Oracle through the "MS OLE DB provider for Oracle". By default MS SQL runs as SYSTEM, and if we change it to a "local admin" account, everything still works fine. The problem is that it's not wise to let the SQL service run under privileged accounts such as SYSTEM or a member of 'local administrators'. So I tried a normal local user on the host running SQL. I fixed every problem that appeared because of the limited user account, and MS SQL works fine in all respects but one! When using the normal user account, SQL Server fails to load linked servers, and this error pops up in Enterprise Manager:
"OLE/DB Provider 'MSDAORA' IDBInitialize::Initialize returned 0x80004005: The provider did not give any information about the error."
I've tried hard to find the root of this error (including every comment from the related KB articles...) but no luck. My guess is that using OLE DB requires administrative privileges on the host, and as I'm running the SQL service as a normal user, it fails to use OLE DB. So I should grant the required permissions to the user running the SQL service, but the problem is that I have no idea where or how to do that. I've already tried some registry/file permissions, but none of them helped. Somewhere I read that using ODBC instead of OLE DB may help, but that seems to fail too!
*Note that I'm almost sure it's a problem OUTSIDE of MS SQL, meaning any modifications should be applied outside of MS SQL, because simply giving local administrative privileges to the user fixes the problem.
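Once a candidate permission change is in place, a quick smoke test from Query Analyzer saves a lot of round trips (the linked server name is made up):

-- should return one row when the provider can reach Oracle
SELECT * FROM OPENQUERY(ORA_LINK, 'SELECT 1 FROM DUAL');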
I'm using SQL 2005 and used SSIS to import two Access 97 databases into one SQL database. I want to keep updating the info, but when I import again, it just appends everything to the SQL database. How do I make it append only new information, or have it delete the tables and then re-add them so I have all new, updated information? I was also wondering if I could then have a stored procedure or something that does this and runs, say, twice a day? If I can, how would I do that?
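A sketch of the delete-and-reload variant, with made-up object names: point the SSIS import at a staging table, then have a procedure swap the data into the real table; a SQL Server Agent job scheduled twice a day can run the package and then the procedure.

CREATE PROCEDURE dbo.usp_RefreshFromAccess
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRAN;
    DELETE FROM dbo.Target;   -- full refresh; a keyed NOT EXISTS insert would append only new rows
    INSERT INTO dbo.Target (Col1, Col2)
    SELECT Col1, Col2 FROM dbo.Staging;
    COMMIT;
END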
While connecting to Reporting Services from Management Studio, I get the following error:
"The attempt to connect to the report server failed. Check your connection information and that the report server is a compatible version. (Microsoft.SqlServer.Management.UI.RSClient)"
I am supposed to maintain a bunch of Excel files documenting all the mappings I'm doing in SSIS. The Excel files have the following format:
Target Field | Target Type | Source Table | Source Field | Source Type | Transformation Rule
Apart from being incredibly tedious, it is hard to keep every Excel file current, as there are other people adding and removing information from the database model.
I plan to extract this information from SSIS and create the Excel files dynamically but, apart from parsing the .dtsx of each package, I see no other way of getting what I need.
Any suggestions on how I should do this? Is there an easier way?
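One possibility, sketched with no warranty: a .dtsx file is plain XML, so it can be bulk-loaded and queried with XQuery rather than parsed by hand. Element names vary by SSIS version; the path below matches the SSIS 2012+ package format, and the file path is a placeholder.

DECLARE @pkg XML;
SELECT @pkg = CAST(BulkColumn AS XML)
FROM OPENROWSET(BULK N'C:\packages\MyPackage.dtsx', SINGLE_BLOB) AS src;

-- list the data flow components found in the package
SELECT c.value('@name', 'NVARCHAR(200)') AS component_name
FROM @pkg.nodes('//component') AS t(c);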
Hi, I have 2 DBs, X and Y, on 2 servers, A and B. I need to create a stored procedure that gets a value from table1 in DB X on server A and then uses that value to get info from table2 in DB Y on server B.
I am looking for a connection string to let me connect to DB X on Server A and DB Y on Server B.
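A sketch using a linked server instead of a second connection string (sp_addlinkedserver is documented, but all names below are made up): create the link from server A to server B once, and the procedure on A can then reference B with four-part names.

EXEC sp_addlinkedserver @server = N'SERVER_B', @srvproduct = N'',
     @provider = N'SQLNCLI', @datasrc = N'B';

CREATE PROCEDURE dbo.usp_CrossServerLookup
AS
BEGIN
    DECLARE @val INT;
    SELECT @val = SomeColumn FROM X.dbo.table1 WHERE SomeKey = 1;
    SELECT * FROM SERVER_B.Y.dbo.table2 WHERE KeyCol = @val;
END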