Good Afternoon,
I have a package that reads data from a text file and persists that data in a table "X" on SQL Server. One data flow performs this operation. This data flow is connected to a second data flow, which reads the data from table "X", does some joins with other tables, and persists the resulting data in a table "Y". The first problem: table "Y" has a foreign key to table "X", and I have to perform this operation inside a transaction. The package fails with an error like "Cannot enlist this connection in a distributed transaction...", yet I have many other packages that I execute with transactions and that run successfully.
If I turn off the transaction, it runs perfectly; when I enable the transaction, it fails.
I have 6 data flow tasks in my package. I need to put all of these data flows into a transaction and roll back if any one of the tasks fails. I don't want to use MSDTC.
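A common workaround for avoiding MSDTC (a sketch only, not tested against this particular package) is to manage the transaction yourself with Execute SQL Tasks, after setting RetainSameConnection=True on the OLE DB connection manager so that every task shares a single session:

-- Execute SQL Task placed before the data flows (the connection
-- manager has RetainSameConnection=True so all tasks share this session):
BEGIN TRANSACTION;

-- ... the 6 data flow tasks run here, all on the same connection ...

-- Execute SQL Task on the success path:
COMMIT TRANSACTION;

-- Execute SQL Task reached via a failure precedence constraint:
ROLLBACK TRANSACTION;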
Easy: read a SQL table with 500 fields, transform, write to flat file using SSIS.
But I have hundreds of transformations to define using Lookup, Aggregate, and Derived Column transformations. I want to group the data flow transformations into usable, reasonably sized groups (packages, containers, subroutines, whatever you want to call them).
I cannot figure out a simple, easy way of doing this most "simple", obvious thing.
Am I the only one on the planet who needs to do this?
Hi, I was wondering how it is possible to join three data sets from different data flows into one txt file. Let me explain a little more:
I have 3 data flows. Each of them connects to SQL Server and brings data into SSIS via a SQL command.
Each SQL command is different, so each data set has different columns (they don't share the same format), and the number of columns differs between them as well.
What I need is to join the three data sets into one txt file. How can I do this? Is it possible to join data sets with different formats into one txt file?
Is this the best way to join different data? Would it be better to use as many OLE DB sources as needed inside a single data flow instead of separate data flows? Thanks for your help!
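One option, sketched below with hypothetical table and column names, is to align the three result sets in SQL so that a single OLE DB source (or a Union All transformation fed by three sources in one data flow) can write to one flat file destination. Columns a table lacks are padded with NULLs so every branch ends up with the same format:

SELECT ColA, ColB, CAST(NULL AS int) AS ColC
FROM dbo.Table1
UNION ALL
SELECT ColA, CAST(NULL AS varchar(50)) AS ColB, ColC
FROM dbo.Table2
UNION ALL
SELECT ColA, ColB, ColC
FROM dbo.Table3;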
I have a data flow task which contains around 5 data flows (like the 2nd diagram shown here). These are 5 simple flows with just a Row Count transformation in between. Now, I want to fail the entire task immediately if even one of the data flows fails. Right now, if one flow fails, the remaining flows fail only after a long time, not immediately. How can I make the task fail immediately?
The other thing I would like to do: can I place these 5 data flows in a transaction, so that if one data flow fails, the other data flows also roll back? (I assume it's not possible.)
Perhaps one too many 2000 DTS packages have permanently damaged my ability to think clearly - however, I've found myself very frustrated attempting to create an SSIS data flow which replaces a very simple 2000 DTS package.
Take data from table1 in database1, put it in table2 in database2. Table2 in database2 has an additional column as part of the primary key - so I need to add an arbitrary unique value to each row as it's inserted. Previously, I did this in the transformation script through a variable I incremented.
What's the recommended method to do this now, since row-level processing of variables seems to be a no-no?
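One option (a sketch assuming SQL Server 2005 as the source, with a hypothetical ordering key) is to generate the unique value in the source query itself rather than incrementing a variable row by row:

-- RowId is unique per row in this result set; combine it with a run
-- identifier if the key must also be unique across separate loads.
SELECT ROW_NUMBER() OVER (ORDER BY SomeKey) AS RowId,
       t.*
FROM database1.dbo.table1 AS t;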
I have a package that has several data flows that run concurrently after some initial tasks and an initial data flow. I want transactions on each of the data flows and have set the transaction option to Required on the data flows (not on the package itself). I am also using checkpoint restart on the package. A couple of things are happening.
1) The first data flow is successful, and that releases the several that are waiting. Some of these complete OK, but inevitably one or two will fail. The failing data flows differ from run to run; sometimes one fails and sometimes two. The error says:
Error: 0xC0202009 at Provider_NF_Code, Delete Provider_NF_Code [130]: An OLE DB error has occurred. Error code: 0x80004005.
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.".
My hunch is that DTC is getting the transactions mixed up. I think it is committing one just after another data flow has already started work expecting the transaction to still be active. That would explain why the failing data flows are random. Plus, if I set MaxConcurrentExecutables to 1, the entire package is successful. BUT why have concurrent tasks if you can't run them concurrently?
2) When the package fails with the DTC problem, I restart it with the checkpoint file. I was expecting the package to restart with the failed data flows. Instead, it restarts with the initial data flow (the one all the other flows wait for in the package). This data flow has always been successful. It's as if the transactions I have put on the individual data flows are actually placed on a single virtual container holding all of them, so that when downstream data flows fail, the entire data flow chain is rolled back and set to restart.
How can I get multiple concurrent data flows to run with transactions?
Why are successful data flows being restarted? Can I get just the failed tasks to restart?
We receive data files from different external customers, and these files have identical layouts.
I'm planning to set up a package for each customer. Each package will contain a flat file source -> OLE DB transformation data flow (followed by other customer-specific data flows).
What I'd like to do is create this data flow just once, parameterising the flat file and table names. Is it possible to include this data flow in each customer package so that, if the flat file layout changes, I can just modify the connection managers in one place and then recompile each package to pick up the changes?
I'm stuck: when I try to open an existing package which contains several data flows, SSIS tries to validate each data flow, and after a while a Visual Studio error message pops up and I can't do anything.
The error message says : "Unable to cast COM object of type 'System._ComObject' to interface type 'Microsoft.SqlServer.Dts.Pipeline.Wrapper.IDTSObject90'. This operation failed because the QueryInterface call on Com component for the interface with IID '...GUID...' failed due to the following error : The application called an interface that was marshalled for a different thread. (Exception from HResult: RPC_E_WRONG_THREAD)"
So I have to make a fairly dynamic data flow, getting most of the configuration from a database table. I will look up the name of the procedure to run as a source (I can use expressions or a script component source for this), and I will look up column names from a database table. For the destination I can use expressions (maybe) or a destination script component, with the destination table name and column names also looked up in a database table. What I am not sure about is how to do the mapping. How can I make this dynamic? The logic for the mapping will be in the database as well. Could I create a custom data flow all in one script - a source, destination, and mappings all in one script component? Is there an example of this out there? My task is to make the data flow completely dynamic, with all config info kept in a SQL Server database: a complete custom script component data flow task.
I have a package set up basically with two consecutive data flows. The first flow takes data from an OLE DB source and stores it in a Flat File destination. The second flow uses this same flat file as a source, alters the data, and stores the data in the same flat file, overwriting the old file. I set DelayValidation to True on the flat file connection. Still, here are the error messages I am receiving:
Error: 0xC020200E at DO, Flat File Destination [7676]: Cannot open the datafile "C:\Temp.txt".
Error: 0xC004701A at DO, DTS.Pipeline: component "Flat File Destination" (7676) failed the pre-execute phase and returned error code 0xC020200E.
I am new to SSIS, so I'm sure I have a setting wrong or something. Is the problem that SSIS is trying to write to a file from which it is simultaneously reading data?
I have a package with 10 synchronous dataflows, which, combined, load about 300MB of flat file data to a database. This package would run successfully on 2 of our database servers, but would regularly fail on a third. The server on which it was failing is a 4 processor box with 16GB Ram with Windows Server 2003, SQL 2005, SSIS and SSRS installed - much more robust than one of the others that the package worked on. The SSIS error messages returned alternated between the following (with no apparent reason why one would show up rather than another, though the first was the most common):
"The file name "\Server1Folder1File1.txt" specified in the connection was not valid."
"The file name property is not valid. The file name is a device or contains invalid characters."
"An error occurred while initializing the flat file parser."
For the first error message, the error would report different connection managers and their associated file as invalid from run to run. All of the files across the 10 dataflows resided in the same network folder, and the package would read in and process a few of them before failing, so the problem was definitely not the connection string.
Searching the forums, etc. for these errors provided no useful information - given the real cause of the problem, these error messages are worse than unhelpful; they send you looking in the wrong direction. It was only when trying to track down another problem on the same server that I discovered the issue. When trying to copy database backups greater than 12GB over the network to this server, the operation would fail with an "Insufficient System Resources" message.
Some research led to the discovery that the problem was caused by the /3GB switch in the boot.ini file of the server (don't let your server team use that switch if you have 16GB of memory or more). Removing the switch and setting SQL to utilize AWE fixed both the file copy problem AND the SSIS package failure problem. The SSIS package failed not due to a bad connection string, but rather due to insufficient server resources (read: memory) to handle the simultaneous connections.
I hope this may help any others trying to track down this kind of SSIS package failure.
I will also provide here what I have gleaned about setting up memory usage for SQL Server 2005 running on 32-bit Windows Server 2003 (with the caveat that I am no expert - corrections and additional information are welcome).
The following links got me started in my research (thanks to the folks who provided such useful information):
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=55191
http://articles.techrepublic.com.com/5100-10878_11-6091280.html
http://www.simple-talk.com/community/blogs/brian_donahue/archive/2007/09/30/37747.aspx
http://blogs.technet.com/askperf/archive/2007/03/23/memory-management-demystifying-3gb.aspx
http://www.modhul.com/2007/11/10/optimising-system-memory-for-sql-server-part-i/
Also, search BOL for: "Server Memory Options", "Enabling Memory Support for Over 4 GB of Physical Memory", and "Enabling AWE Memory for SQL Server".
Windows Server 2003 provides access to 4GB of virtual address space. By default, 2GB is assigned to the OS and 2GB to applications. This default can be changed to 1GB for the OS and 3GB for applications by the use of the /3GB switch in the boot.ini file.
Physical memory over 4GB can be addressed by enabling Physical Address Extensions (PAE), which is done by setting the /PAE switch in the boot.ini file. This does not increase the system's virtual address space; rather, it increases the size of the page table (which is maintained within the virtual address space), adding entries to reference the physical memory above 4GB.
It is important to note that these two switches are not interdependent (they do different things, and you can turn each on or off regardless of the other's status), though the combination of them has an impact on server performance and the maximum amount of physical memory which can be addressed.
The /3GB switch only impacts the allocation of the first 4GB of memory (virtual address space) between the OS and applications (default 50/50 % split, with switch on - 25% OS and 75% applications). The /PAE switch enables the system to reference/manage physical memory above 4GB, but does not alter the allocation percentages of the first 4GB of memory between the OS and applications. However, when PAE is enabled, the OS requires more memory within the first 4GB to manage the physical memory above 4GB (due to increased page table entries). With the /3GB switch, the OS has only 1GB of virtual address space, and only enough space to manage a total of 16GB of physical memory. If 32GB of physical memory is installed, 16GB of it will go to waste.
Address Windowing Extensions (AWE) is an API that allows an application to address more than the 2-3GB of memory that is available to applications within the virtual address space (first 4GB of memory). SQL Server can utilize AWE to take advantage of memory above the first 4GB that is made available via PAE, and can even reserve portions for its own use. I believe (though I can't remember where I got this bit) that SQL utilizes AWE memory only for the page cache (buffer pool - which seems to be a misnomer), and not for other operations.
To enable AWE, see the BOL references above.
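For reference, the sp_configure calls look like this (a sketch for SQL Server 2005; 'awe enabled' is an advanced option):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
-- A SQL Server service restart is required before AWE takes effect.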
The big question: what are the recommended settings for all of these? That all depends on what you have running on the server. You need to leave space for the OS, SQL Server and any other applications you have.
The hard and fast rules: If you have more than 4GB of RAM, you must use the /PAE switch in order to take advantage of it. If you have more than 16GB of RAM, you must NOT use the /3GB switch in order to take advantage of it.
Based on anecdotal evidence, I've noticed the following generally recommended guidelines - assuming the server is dedicated to SQL.
Use of the /3GB switch seems to be a generally accepted practice if you have 8GB of RAM or less. For between 8 and 16GB, some say never use the /3GB switch, others say you can use it up to 12GB and still others up to 16GB. I interpret this to mean that it all depends on what types of loads are being placed on the server and that testing on individual servers will be required to determine whether or not to use the switch. Certainly that was my experience - the /3GB switch worked fine with 16GB RAM, until the server encountered a certain workload. For me, no more /3GB switch.
For setting SQL to use AWE, most seem to agree that it should be enabled if you have more than 4GB RAM. The setting of max server memory is more complicated. BOL seems to suggest (the 'Server Memory Options' entry) a formula of total physical memory minus 1-2GB for the operating system. Based on a desire to be a bit more conservative, I am now using the following formula:
max server memory = total physical memory
    minus 4GB for the OS and application processes (since the AWE memory is utilized for the page cache, not SQL processes)
    minus AWE memory required by other applications, including other instances of SQL Server
If anyone has additional insight, or a more refined equation, I could certainly benefit from it.
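As a worked example of the formula under my stated assumptions: on a dedicated 16GB server with no other AWE consumers, 16GB - 4GB - 0GB = 12GB, so:

-- max server memory is specified in MB.
EXEC sp_configure 'max server memory', 12288;
RECONFIGURE;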
So I have three lookups in a row in my data flow. Basically they are doing data quality checks for me using a reference table.
I want to be able to take the error flows of the three lookups and merge them together (union all) so that I can insert the "errors" (or non matches) into a table.
Can't do it. Because SSIS deems non-matches "errors", you automatically get the errorCode and errorColumn fields, and when you try to union one lookup's error output with another lookup's error output, it fails.
What I would like to see is a lookup acting more like a conditional statement, where a lookup has three outputs: match found, no match found, and error. Either that, or I'd like to be able to edit the names of the errorCode and errorColumn fields.
Am I missing something here, or do I need to just add an OLE DB destination for each lookup error flow when I only want one? 'Course the problem then is that I want to count the number of rows that are in "error" across all of the lookups.
I'm using a script task to split my control flow before performing different data flow tasks depending upon the result of the script.
The dataflow tasks populate a single dataset with data before entering a foreach loop where the data in the dataset is consumed.
The foreach loop is valid for both dataflow tasks but I can't get it to accept multiple inputs.
I could have 2 copies of the foreach loop, one for each branch, but this seems wasteful and also means I have to maintain the same chunk of code in 2 places.
Can I join control flows or is there a better way to achieve this?
I'm getting this when executing the code below. Going from W2K/SQL2k SP4 to XP/SQL2k SP4 over a dial-up link.
If I take away the begin tran and commit, it works; but of course, if one statement fails, I want a rollback. I'm executing this from a Delphi app, but I get the same from Query Analyzer.
I've tried both with and without the SET XACT_ABORT, and also tried with SET IMPLICIT_TRANSACTIONS OFF.
set XACT_ABORT ON
Begin Distributed Tran

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TRANSACTIONMAIN
set REPFLAG = 0 where REPFLAG = 1

update TSADMIN.TRANSACTIONMAIN
set REPFLAG = 0 where REPFLAG = 1 and DONE = 1

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.WBENTRY
set REPFLAG = 0 where REPFLAG = 1

update TSADMIN.WBENTRY
set REPFLAG = 0 where REPFLAG = 1

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.FIXED
set REPFLAG = 0 where REPFLAG = 1

update TSADMIN.FIXED
set REPFLAG = 0 where REPFLAG = 1

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.ALTCHARGE
set REPFLAG = 0 where REPFLAG = 1

update TSADMIN.ALTCHARGE
set REPFLAG = 0 where REPFLAG = 1

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TSAUDIT
set REPFLAG = 0 where REPFLAG = 1

update TSADMIN.TSAUDIT
set REPFLAG = 0 where REPFLAG = 1

COMMIT TRAN
It's got me stumped, so any ideas gratefully received. Thx
I have designed an SSIS package for an ETL process. In my package I have to read data from tables and then insert it into another table of the same structure.
For reading the data I have written dynamic T-SQL based on some conditions, and it uses 25 different functions to populate the data into 25 different columns. The T-SQL returns correct data and works fine in Enterprise Manager, but in my SSIS package it shows me a timeout ERROR.
I have increased and decreased the timeout to catch the error, but it is still there. I have also tried setting the CommandTimeout property to 0.
If I use 0 for CommandTimeout, then I get: "Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction."
and
Failed to open a fastload rowset for "[dbo].[P@@#$%$%%%]". Check that the object exists in the database.
I am getting this error: "Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction." Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.OleDb.OleDbException: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction. Does anybody have an idea?!
I have a sequence container; in my sequence container I have a script task to drop the existing tables. This sequence container is connected to another sequence container, and all of these are in a ForEach Loop container. When I run the package it works fine for the 1st loop, but it gives me an error on the second execution.
The message is like this:
Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
I am getting this error: "Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction."
My transactions are done using a LINKED SERVER. When I manually call the stored procedure from Server 1 it works, but when I call it through Service Broker it doesn't work and gives me this error.
I wrongly deleted all the contents of a table. I only have the current transaction log. With Log Navigator I found the lines that I wish/need to roll back. Can you please give me any ideas on how to recover or roll back the data from the transaction log?
I wrongly deleted all the contents of a table. I was going to restore the full database backup, but the file is corrupt. I have: the last 5 days of transaction log backups; the current transaction log.
Can you please give me any ideas on how to restore the transaction log backups and/or undo the deletion of all rows of the table?
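In case it helps frame the options: log backups can only be restored on top of a restorable full (or differential) backup, so if an older intact full backup exists somewhere, a point-in-time restore to just before the delete is the standard route. A sketch with hypothetical file names and a hypothetical timestamp:

RESTORE DATABASE MyDb FROM DISK = 'C:\Backups\MyDb_full.bak'
    WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = 'C:\Backups\MyDb_log_day1.trn'
    WITH NORECOVERY;
-- ... apply the remaining log backups in sequence ...
RESTORE LOG MyDb FROM DISK = 'C:\Backups\MyDb_log_day5.trn'
    WITH RECOVERY, STOPAT = '2007-06-01 10:30:00';  -- just before the delete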
I am in the process of choosing between either SQL Workgroup or Standard Edition. I see the differences in features on the comparison table, but do not see any references to the differing capabilities in handling transactions.
Are there any differences between Workgroup and Standard in terms of transaction/data handling capabilities? I.e., does Standard have a superior capability, handling X times more TPMs than Workgroup?
If not, am I correct to assume that this is totally determined by hardware configuration (# of CPUs, processor speed, HD speed, RAM) ?
If data volume / transaction handling is solely determined by hardware configuration, and I know the # of transactions and the amount of R/W per second, where would be a good reference to find out what kind of hardware configuration I need? (Ideally, once I know the hardware configuration, I guess I would be able to determine whether I need Workgroup or Standard.)
I have a database that I am setting up in SQL Server 2005. Initially, I am doing very large imports of data. Every time I run an import, I have to increase the size of my transaction logs, and now they are approaching 2 GB. Should these be purging themselves? I have to keep increasing the max size of the log so that I can get my data in. While this works for now, it is not a long-term solution, because I can see the log growing quite large, and the amount of space on the server obviously isn't infinite. Is there a setting I can change so they will automatically purge? If not, how do I purge this information myself?
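For what it's worth, a sketch of the two usual fixes, assuming a database named MyDb: either back up the log on a schedule so committed space can be reused, or switch to the SIMPLE (or BULK_LOGGED) recovery model during bulk loads if point-in-time recovery isn't needed:

-- Option 1: regular log backups let the log reuse its space.
BACKUP LOG MyDb TO DISK = 'C:\Backups\MyDb_log.trn';

-- Option 2: SIMPLE recovery truncates the log at each checkpoint.
ALTER DATABASE MyDb SET RECOVERY SIMPLE;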
Hello everyone. My web application uses a SQL Server database, and I am connecting via the standard SqlConnection object and running stored procedures using the SqlCommand object. On one of my pages, I have data coming from 2 different tables. Data from one table comes back as a single record, while data from the other table comes as multiple records. The single record populates various textbox and dropdown controls on the page, and the multi-row data is bound to a DataGrid. On the aspx page, data from both tables can be updated, and I only want to provide a single Save button. Now, I am not sure how to save/insert/update a single row in one table and multiple rows in another table in 1 transaction. I thought of a stored procedure, but that doesn't seem straightforward, since for the table with multiple records I am not sure how to pass all the records in the stored procedure's arguments. Is there any way I can control the whole transaction in ASP.NET? Thanks, Ujjaval
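One way to get everything into one transaction (a sketch only, with hypothetical table and column names) is a single stored procedure that takes the multi-row data as XML, which SQL Server 2005 can shred server-side:

CREATE PROCEDURE dbo.SaveHeaderAndDetails
    @HeaderId int,
    @Name     varchar(50),
    @Details  xml            -- e.g. '<rows><row value="A"/><row value="B"/></rows>'
AS
BEGIN
    SET XACT_ABORT ON;       -- any error rolls the whole batch back
    BEGIN TRANSACTION;

    -- single-record table
    UPDATE dbo.Header SET Name = @Name WHERE HeaderId = @HeaderId;

    -- multi-record table: replace the old detail rows
    DELETE dbo.Detail WHERE HeaderId = @HeaderId;
    INSERT INTO dbo.Detail (HeaderId, Value)
    SELECT @HeaderId, r.c.value('@value', 'varchar(50)')
    FROM @Details.nodes('/rows/row') AS r(c);

    COMMIT TRANSACTION;
END

Alternatively, the transaction can be controlled on the ADO.NET side by wrapping several SqlCommand calls in a SqlTransaction (or a System.Transactions scope).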
Here is my dilemma. I have a 120 GB database in which I need to mask customer credit card numbers. The field is a varchar(16). I need to update the field so that we only store the first 4 numbers and the last 4 numbers of the credit card, and insert * to fill in the rest of the credit card number. I was going to do this as a loop using the following code:

While Exists (Select Top 10 * From Header Where IsNumeric(CCNbr) = 1)
Begin
    Begin Transaction T1
    Update Header
    Set Header.CCNbr = Left(D1.CCNbr, 4) + '********' + Right(D1.CCNbr, 4)
    From (Select Top 10 * From Header Where IsNumeric(CCNbr) = 1) as D1
    Commit Transaction T1
    If Not Exists (Select Top 10 * From Header Where IsNumeric(CCNbr) = 1)
        Break
    Else
        Continue
End

In theory this only selects the top 10 rows, updates them, dumps the log, and moves on to the next 10 until all the rows are updated. I tried running this on my test database and it fills up the transaction log. Can anyone tell me the best way to go about doing what I need? Thanks
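For contrast, a set-based sketch (assuming a database named MyDb in FULL recovery; under SIMPLE recovery the log backup line is unnecessary). Because the mask makes IsNumeric(CCNbr) return 0, each pass naturally skips already-processed rows:

SET ROWCOUNT 10000;  -- limit each UPDATE to a batch (SQL 2000-compatible)
WHILE 1 = 1
BEGIN
    UPDATE Header
    SET CCNbr = LEFT(CCNbr, 4) + '********' + RIGHT(CCNbr, 4)
    WHERE IsNumeric(CCNbr) = 1;

    IF @@ROWCOUNT = 0 BREAK;

    -- free committed log space between batches
    BACKUP LOG MyDb TO DISK = 'C:\Backups\MyDb_log.trn';
END
SET ROWCOUNT 0;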
We have two DBs, one live and one test. When I right-click on the live one in SQL Enterprise Manager and select Properties -> Data Files, the File Name is LIVE.MDF and the Location is F:\Data\LIVE.MDF. When I right-click on the test one and select Properties -> Data Files, the File Name is LIVE.MDF and the Location is F:\Data\TEST.MDF. The same thing applies to the transaction log files too. My concern is that the File Name is the same in both cases, even though the location is different. What are the consequences of this? Thanks for your help, GVV
I am working on reformatting, cleaning, adding derived elements, etc. to data with no data-validity checks on the front-end source data. It's an incremental process where each month I add new data to the prior months' history. Sometimes the new monthly source data "surprises" me - different values, different data definitions, etc. than expected.
I deal with these surprises by accumulating the unexpected fields/values (converted to varchar, with some explanatory language) into a table [program_newdataproblem].
If there are any records in [program_newdataproblem], I roll back the transactions so the prior months' history remains unaltered. The problem is I then "lose" the contents of [program_newdataproblem], which I would like to hand to the source data people to troubleshoot.
I have tried the following:
Begin Transaction tran1

    -- ... code that reformats data here ...

    if (select count(*) from program_newdataproblem) > 0
    BEGIN
        Begin Transaction tran2
            select * into #t1 from program_newdataproblem
        Commit Transaction tran2
        Rollback Transaction tran1
        insert into program_newdataproblem Select * from #t1
    END
This bombs because #t1 no longer exists: rolling back tran1 also undoes the nested tran2 (an inner COMMIT doesn't really commit), so the temp table created inside the transaction is gone.
Any way I can "keep" the data from program_newdataproblem when I Rollback the other transactions? (without having to store the history in separate tables that I can then access if new data errors occur)
We are designing an ETL solution for a BI project using SSIS.
We need to load a dimension table from a source DB into the DW; inside the DW there are two tables for each source dimension table: a current dimension table and a historical dimension table.
The source table includes technical columns that indicate if each record was processed correctly by ETL procedures.
Each row in the source table can be a new record or an existing one. If it's a new record, a corresponding new record should be inserted into the current dimension table of the DW; if it's an existing one, a new record should be inserted in the historical dimension table, and the existing record in the current dimension table should be updated.
Furthermore, for each record we need to update the source table with an error code. If the record was processed with success, the code is zero; otherwise it contains an error code.
I am trying to use a single data flow task, using the 'OLE DB Command' transformation for managing updates and the 'OLE DB Destination' for managing inserts.
The problem is that some inserts and updates should be executed inside a single transaction for each record; for example, if the source contains a new record, I should insert the record into the current dimension table and update the source record with error code zero if everything goes fine.
Is there some way to group tasks in the data flow pane into a single transaction?
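One possibility (a sketch with hypothetical table and column names, run e.g. from the OLE DB Command transformation or a stored procedure it calls) is to push each record's insert, update, and error-code write into one T-SQL transaction, rather than trying to group data flow components:

BEGIN TRANSACTION;

-- existing record: copy the current row to the historical table...
INSERT INTO dw.DimCustomerHistory (BusinessKey, Name, ValidTo)
SELECT BusinessKey, Name, GETDATE()
FROM dw.DimCustomer
WHERE BusinessKey = ?;

-- ...then overwrite the current row...
UPDATE dw.DimCustomer SET Name = ? WHERE BusinessKey = ?;

-- ...and flag the source record as processed successfully.
UPDATE src.Customer SET ErrorCode = 0 WHERE BusinessKey = ?;

COMMIT TRANSACTION;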