The thing is, I am using a DTS package to get data from DB2 residing on an S/390 mainframe and put it in a SQL 2000 database on Windows 2000 Server.
I have the ODBC driver that the server guys gave me. I installed it and then ran a batch program, which I think does the database setup. In short, I set up the DSNs on my local machine, and I can use the DSN from the SQL client.
I can't create a new package and complete the task, though. The problem is that when I try to set up a transform task, I get a memory error: it says that the instruction referenced a memory location that could not be read.
So what I do instead is use the Import task in SQL, which in turn uses the same ODBC connection, runs the query on DB2, and inserts the results into a SQL table; then I save the package. The only problem is that I can't edit the transform task afterwards: same memory error.
Any idea why this happens? I would like to know how I can get past this using ODBC.
Hello, I am working on a module to extract data from a Teradata server to a SQL database. I am using a DTS package to extract the data, and I need to make the data source name (database name and object name) configurable at runtime, read from a Config table in the SQL database. What do you suggest is the simplest and most efficient method to do this?
I am trying to use a dynamic SQL query in the data transform task with the data source set as a query, but it's giving me a strange syntax error. Can we use dynamic SQL in the query option of a DTS transform task source?
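For what it's worth, the config-table lookup itself reduces to a few lines of T-SQL. Here is a minimal sketch, assuming a hypothetical dbo.Config table and illustrative setting names (none of these names come from the original post), using sp_executesql to build the source query at runtime:

-- Hypothetical Config table; all names are illustrative.
CREATE TABLE dbo.Config (
    SettingName  varchar(50)   NOT NULL PRIMARY KEY,
    SettingValue nvarchar(128) NOT NULL
);

DECLARE @db  sysname,
        @tbl sysname,
        @sql nvarchar(4000);

-- Read the configurable source names at runtime.
SELECT @db  = SettingValue FROM dbo.Config WHERE SettingName = 'SourceDatabase';
SELECT @tbl = SettingValue FROM dbo.Config WHERE SettingName = 'SourceObject';

-- Build and execute the extract statement dynamically.
SET @sql = N'SELECT * FROM ' + QUOTENAME(@db) + N'.' + QUOTENAME(@tbl);
EXEC sp_executesql @sql;

As for the syntax error: a DTS transform task parses its source query at design time to derive column metadata, which is typically why EXEC/sp_executesql-style dynamic SQL fails in the query option. The usual workaround is to build the plain SELECT string elsewhere (for example in an ActiveX Script or Dynamic Properties task) and assign it to the task's SourceSQLStatement property before the task runs.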
I am importing data from an xls file to a database table with a DTS. During DTS creation I use the 'Transform Data Task Properties' GUI window to map incoming xls fields (source) to the table columns (destination). Question: is there any way to invoke the 'Transform Data Task Properties' GUI window at DTS runtime and use it to change the mapping dynamically? Thanks, Vadim.
I'm still trying to learn the advantages of stored procedures. I have a DTS package that uses a Transform Data Task to append the result of a view into a table. All operations are done locally on the server.
Do I gain any advantage if I write a stored procedure that inserts the view into the table, and then call the stored procedure from the DTS package, instead of using the Transform Data Task?
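For comparison, the stored-procedure version is just a set-based INSERT...SELECT. A minimal sketch, with placeholder object names (not from the original post):

-- dbo.TargetTable and dbo.SourceView are placeholders.
CREATE PROCEDURE dbo.usp_AppendFromView
AS
BEGIN
    SET NOCOUNT ON;

    -- Append the view's current contents to the table in one statement.
    INSERT INTO dbo.TargetTable (Col1, Col2)
    SELECT Col1, Col2
    FROM dbo.SourceView;
END

Because everything is local to the server, this executes as a single set-based statement inside the engine, while the Transform Data Task pumps the rows through the DTS data pump; for a purely local append the stored procedure is usually at least as fast, and it is easier to version and reuse.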
I am trying to read in a flat file, transform the fields and store into a destination database.
In DTS, this works using Transform Data Task Properties. I define the columns and then have a VB script on the Transformations tab that changes any bad data.
Is there a way to do this in SSIS where I can define the column transformations and reuse my VB scripts?
I have an Access 2.0 database that holds call data on a mapped drive. I am running MS SQL Server 2000. I can open the database and view the records inside. I can even run the query below and get results, if I remove the CallDate and CallTime parameters.
SELECT CallDate, CallTime,
       Mid(CallRecordData, 68, 3) AS Extension,
       'I' AS Direction,
       Mid(CallRecordData, 34, 11) AS Called,
       Val(Mid(CallRecordData, 18, 2)) + Val(Mid(CallRecordData, 21, 2)) / 60 AS Minutes,
       Val(Mid(CallRecordData, 21, 2)) AS Seconds
FROM CallRecords
WHERE (CallDate = ?) AND (CallTime >= ?) AND (CallTime < ?)
  AND (Mid(CallRecordData, 30, 1) <> '9')
When I preview in the Transform Data Task, I get:
Package Error. Error Source: Microsoft JET Database Engine. Error Description: No value given for one or more required parameters.
When I look at the parameters, they are listed. I check their values, and they have the appropriate values: (DateCalled, String, 07/14/2005), (StartTime, String, 06:30), (EndTime, String, 07:00).
When I run it in the query builder, or in Access with a linked table to the source, I can enter the values when prompted and it works.
I have too many DTS packages to migrate to SSIS. While examining a DTS package in BIDS (converted with the migration utility), I tried to edit the resulting migrated package, which opened the DTS interface with the two connection icons joined by the big fat arrow with a gear on it... not exactly what I had in mind. In other words, it looks like SSIS on the outside, but it's still DTS on the inside. So I stripped a series of components out of a more complex package, hoping that simplifying it would reveal the contents of the old DTS Transformations tab at least partially set up in a Derived Column transformation. Can I get there from here, or must I recreate every stinking definition in a Derived Column manually from the ground up? Thanks very much for your help.
Hi JayH (or anyone). Another week... a new set of problems. I obviously need to learn .NET syntax, but because of project deadlines in converting from DTS to SSIS it is hard for me to stop and do that. So, if someone could help me with some easy syntax, I would really appreciate it.
In DTS, there was a VBScript that copied a set of flat files from one directory to an archive directory after modifying the file name. In SSIS, the directory and archive directory will be specified in the config file. So, I need a .NET script that retrieves a file, renames it, and copies it to a different directory.
I am trying to use a DTS package to get data from DB2 in an S/390 environment. I am able to use the Import task, run a query on DB2, save the package, and execute the package. But when I try to edit the transform task I get an mmc.exe application error: it says that the instruction at address "" tried to reference memory at address "", and the memory could not be read.
I installed an IBM ODBC driver on my client. Obviously the connection seems to work, since the package executes... but then there's the edit issue.
If anyone has faced this problem or knows what I am doing wrong, I appreciate your time and effort. Thanks.
In my current project I have a requirement to assign the value of an Aggregate transform to a variable, but I need to accomplish it without using a Script Task.
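If the aggregate can be computed at the database rather than in the data flow, one Script-Task-free option is an Execute SQL Task with a single-row result set bound to the package variable. The statement itself is just an aggregate query; a sketch with assumed table and column names:

-- Assumed names; map TotalAmount to the package variable through the
-- Execute SQL Task's "single row" Result Set binding.
SELECT SUM(Amount) AS TotalAmount
FROM dbo.SourceTable;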
Hi friends, can somebody tell me how to do this: how can we analyze the existing code used to transform data into the Operations Data Warehouse, and make changes corresponding to upcoming changes in the SAP data sources? Thanks
On my MS SQL Server 2000, I am trying to create a generic way to load tables into my data warehouse.
As input to the process I have a large number of table definitions stored individually as files on my server, plus ASCII-delimited data files in various locations, mostly accessible via NFS mounts.
I created two DTS packages in MSSQL2K that in theory represent what I want to do:
package1:
- invoke package2 with global variables to load a system of related tables

package2:
- check for a trigger file
- set the "Execute SQL Task" statement to my first file
- run the "Execute SQL Task", which drops/adds a table
- set a "Connection" to the data source file that I want to use
- run the transformation... and with that, my package starts to fall apart
- set the "Execute SQL Task" statement to the next file, and
- go back and execute it
I can't figure out how to set the table in the transformation step to the table I want to use. And I assume I next need the transformation links between the source and the new table to be relinked.
The first row of each source file contains the column names as found in the tables I just created.
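Since the files are ASCII-delimited and carry their column names in the first row, one alternative technique worth naming plainly, instead of rewiring DTS transformations per table, is a dynamically built BULK INSERT. A sketch; the path and table name are placeholders, and it assumes the file's column order matches the table just created from its definition file:

DECLARE @sql nvarchar(4000);

-- FIRSTROW = 2 skips the header row that holds the column names.
SET @sql = N'BULK INSERT dbo.TargetTable
             FROM ''\\nfsserver\share\datafile.txt''
             WITH (FIELDTERMINATOR = '','',
                   ROWTERMINATOR   = ''\n'',
                   FIRSTROW        = 2)';

EXEC sp_executesql @sql;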
Okay all, I have a problem. I have two lists of addresses and phone numbers, and I do not have control over the contents of the lists. The only unique field between the two is the phone number, so I need to be able to inner join the two lists on phone number. This would normally be straightforward, but the problem is that they are formatted differently, and one of them doesn't even have a control on the formatting. Phone numbers are US phone numbers only.

The one list (table) that does have a control uses the format AAA3334444, where AAA is the area code, 333 is the 3-digit prefix, and 4444 is the four-digit suffix. The second list (table) does not have any standardized formatting and can be filled with extraneous spaces, parentheses, and dashes, not to mention the leading 1 in some instances.

I thought that I could do some kind of regular expression comparison, but I haven't yet found a good resource to tell me how to do it, or whether it is even possible. Or maybe I could break up the one I know has a standardized format into something like '%AAA%333%4444%' and do my comparison that way. However, it is very important that only those list items in both lists that are truly the same place be matched. Suggestions and solutions are appreciated.
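SQL Server 2000 has no regular expressions, but the join can be done by normalizing the free-form column with nested REPLACEs: strip the junk characters, drop a leading 1, and compare against the already-clean AAA3334444 value. A sketch, assuming placeholder tables CleanList(Phone) and DirtyList(Phone):

SELECT c.Phone AS CleanPhone,
       d.Phone AS RawPhone
FROM CleanList AS c
JOIN (
    SELECT Phone,
           -- Strip extraneous spaces, parentheses, and dashes.
           REPLACE(REPLACE(REPLACE(REPLACE(
               Phone, ' ', ''), '(', ''), ')', ''), '-', '') AS Digits
    FROM DirtyList
) AS d
  ON c.Phone = CASE
                   -- Drop the leading 1 when 11 digits remain.
                   WHEN LEN(d.Digits) = 11 AND LEFT(d.Digits, 1) = '1'
                       THEN RIGHT(d.Digits, 10)
                   ELSE d.Digits
               END;

Because both sides reduce to the same bare 10-digit form, only entries that are truly the same number match.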
I have a very simple SSIS package that moves data from a DB2 database to a Teradata box. I've run it around 10 times; twice it pushed data over, but the rest of the time it executed with no errors and moved nothing over. In the "incomplete" runs, a command-line box pops up for half a second, then the package ends.
Does anyone have ideas as to why this behavior is occurring?
I have an OLE DB Source, and I want to transform the data types of the table's fields before I export the table to an OLE DB Destination. Is there a way to transform numeric values to float, and numeric to nvarchar?
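One way, offered as a sketch rather than the only answer, is to cast in the OLE DB Source's SQL command so the columns enter the data flow with the desired types (the table and column names here are assumptions):

-- Assumed names, for illustration only.
SELECT CAST(Amount   AS float)        AS Amount,
       CAST(Quantity AS nvarchar(20)) AS Quantity
FROM dbo.SourceTable;

If the source query can't be changed, the built-in Data Conversion transformation does the same job inside the data flow.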
I have a package that works fine in development. I move the package over to test and it fails validation in the lookup transform.
Error 46 Validation error. Data Flow Task - PO Lines Interface: Lookup - LIST PRICE [29621]: output column "LIST_PRICE_PER_UNIT" (29667) and reference column named "LIST_PRICE_PER_UNIT" have incompatible data types. SPO_TO_ORACLE_PO.dtsx 0 0
What strikes me as odd is the fact that I don't have a way of specifying the data types. I just specify the column I wish to return as a new column with the same name. Anyway, why would this work in one instance but not another?
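One workaround sometimes used here is to replace the Lookup's table reference with a query that forces the reference column to an explicit type. A sketch; the column name comes from the error message, but the table and key names are placeholders:

-- dbo.PO_LINES and PO_LINE_ID are placeholders; adjust the target type
-- to match the output column's actual data type.
SELECT PO_LINE_ID,
       CAST(LIST_PRICE_PER_UNIT AS numeric(18, 4)) AS LIST_PRICE_PER_UNIT
FROM dbo.PO_LINES;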
Hi... I can't figure out how to put nested tables into the Data Mining Model Training transform (SSIS). Can anybody help me? An example, please? Diego B.
I will try the mixed security tomorrow. I have in the meantime discovered that I am not able to use the odbcping.exe utility successfully. It returns the same 08001 error message (Specified SQL Server not found). Does this mean that the problem is within ODBC? What are some things that I can try?
Thanks, Kevin
-----------
You need mixed security; also check the password of the SQL service account on NT.
------------ Kevin G. at 3/8/00 3:21:47 PM
I have set up replication on two SQL Servers (6.5), SP5a, on NT 4.0 (SP3). The Distribution Task on the Publisher is failing with the following error:
08001[Microsoft][ODBC SQL Server Driver][dbnmpntw]Specified SQL Server not found.
I am using standard security in a workgroup environment. I have my trusted connection set up and I am using named pipes. I had this process working on our test servers, but when I tried to implement it in production I received the above message. Please give me some ideas or things to try. What source can I use to look up the 08001 error?
I'm "trying" to set up Replication. The Publishing/Distribution server is in one domain, and the subscribing server is in another. Both domains are fully trusted.
The synchronization step builds the .tmp file, but the repl_subscriber Distribution task bites the dust with an error message, "28000[Microsoft][ODBC SQL Server Driver][SQL Server] Login failed".
The setting on the distribution options dialog box is ODBC, SQL Server. I'm not using a special login/password. I've even tried putting a user name and password there, and it doesn't work. The ODBC connections test out fine on both servers. Any suggestions where I've gone wrong?
I can't figure out how to put nested tables into the Data Mining Model Training transform (SSIS). I can do a simple case table, but how do you get nested tables working with the DM Training transformation? Any ideas? Samples?
W2k3 server, SQL 2005. @@version = Microsoft SQL Server 2005 - 9.00.1399.06 (Intel X86) Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 1)
I have my first SSIS package almost working, but I'm having an odd problem and can't find any information to help resolve it.
I'm importing from a flat file (csv) to an existing table (append). I've got a Derived Column transformation in the middle to do some data cleanup. It's all working except for one little problem...
One of the transformations is 'REPLACE([Column 3],"^","; ")', output to a new column. (The input file has a field that uses carets as delimiters between an unknown number of items; I'm changing that to semicolons for easier reading.) Not all rows have data in this column, some will have one item, some will have multiple items.
The REPLACE works except that it fills in repeated data for all the blank rows.
Example:
Incoming data is:
1 Smith,Jane^Jones,Jane
2 Brown,John
3
4 Adams,James^Adams,Jim
5
6 White,Debra
Data inserted into the table is:
1 Smith,Jane; Jones,Jane
2 Brown,John
3 Brown,John
4 Adams,James; Adams,Jim
5 Adams,James; Adams,Jim
6 White,Debra
I've tried to use a conditional to skip the empty rows, but I can't get that working at all (I get syntax errors no matter what I put in).
Any suggestions on how to fix this would be most appreciated!
I am trying to create a simple BI application for SSIS. In Visual Studio 2005 I just grab a Data Flow Task from the Toolbox and add it to the project. When I double-click it I get the following error:
The task with the name "Data Flow Task" and the creation name "DTS.Pipeline.1" is not registered for use on this computer.
Then when I try to delete it, it gives this other error:
Cannot remove the specified item because it was not found in the specified Collection.
I am creating this application in an administrator account in this computer, so I doubt the problem is related to permissions. I am running SQL Server 2005 and Visual Studio 2005 in WinXP Tablet PC Edition.
Any suggestions why this is happening and how to fix it?
I am using SQL 2005 SSIS. I am joining several large tables and then moving the result into another table in the same database.
I would like to know which method is faster:
- Use an Execute SQL Task to insert the result set into the target table.
- Use a Data Flow Task to insert the result set into the target table (use an OLE DB Source to execute the SQL command, then use the SQL Server Destination).

Could you tell me why one is slower than the other?
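For reference, the Execute SQL Task variant boils down to one set-based statement executed entirely inside the engine, roughly like this (all names are placeholders):

-- Same-database insert; no rows leave the server.
INSERT INTO dbo.TargetTable (ColA, ColB)
SELECT t1.ColA, t2.ColB
FROM dbo.Table1 AS t1
JOIN dbo.Table2 AS t2
  ON t1.KeyCol = t2.KeyCol;

Since source and target live in the same database, the Data Flow version pulls every row out into SSIS buffers and pushes it back in, which is usually why the Execute SQL Task approach is faster for same-database moves.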
I have a common requirement in numerous SSIS processes to take my main input data set and to remove all rows from it that match a second input data set on a given key and output this as the main output. I also want to output (as a second output) all the rows from the main input data set that did match on the given key. However, I don't want to merge in data from the second input, nor am I interested in rows from the second input data set that have no match in the main input.
E.g. If I have the following data:
Main input:

Key  Name
---  ------
1    Steve
2    Jamie
3    Donald
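In SQL terms, the two desired outputs are an anti-join and a semi-join on the key. A hedged sketch with assumed table and column names (the second input isn't shown in the post):

-- Main output: rows from the main input with no match in the second set.
SELECT m.*
FROM MainInput AS m
WHERE NOT EXISTS (SELECT 1 FROM SecondInput AS s WHERE s.KeyCol = m.KeyCol);

-- Second output: rows from the main input that did match.
SELECT m.*
FROM MainInput AS m
WHERE EXISTS (SELECT 1 FROM SecondInput AS s WHERE s.KeyCol = m.KeyCol);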
I have a stored procedure that is executed via a SQL script task and returns a full result set. I map this result set to a variable of Object type. Is there a way to use this variable as a data source in a subsequent Data Flow task?
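One common alternative to routing rows through an Object variable is to have the data flow call the procedure again directly: set the OLE DB Source's data access mode to "SQL command" and use the procedure call as the command text (the procedure name below is a placeholder):

-- Used as the OLE DB Source's SQL command text; the source reads its
-- column metadata from the procedure's result set.
EXEC dbo.usp_GetResults;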
I'm trying to get a record count out of a database using an OLE DB Source and a Row Count task, but I keep getting an error. I set up a variable as int32 and select the variable name in the Row Count task, and when I go to the Input Columns tab to select a field to count, it gives me this error:
Error at Data Flow Task[Row Count[505]]: The component "Row Count" (505) has forbidden the requested use of the input column with lineage ID 32.