Special Characters While Importing Data With DTS From MyODBC
Jul 20, 2005
When importing data into MS SQL Server 2000 from a MyODBC (v3.51) data
source using Data Transformation Services, special characters like öäüéàè
are not imported correctly. However, when the MyODBC data source is used in
any other program, such as Access or Excel, it works fine.
Does anyone have any experience with this? Any hint to solve this problem
would be very much appreciated!
So I installed the MyODBC drivers 5.0 and 3.51 and then went into Data Sources (ODBC) under Administrative Tools. I proceeded to add the DSN under System DSN, and I also tried User DSN. But when I try to use the import/export tool in SQL Server 2005 Management Studio, I am not presented with the MySQL drivers at all as a source.
Hello, I have a varchar field whose stored data runs to about 700 characters. During a SELECT through Query Analyzer, it truncates at about 200 or 300 characters. How can this be explained? Thanks in advance, Pascal
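If the data is intact and only the display is being cut off, the likely culprit is the client tool's output limit rather than the table itself. A quick check, with hypothetical table and column names: compare the stored length against what Query Analyzer shows, and raise the session's text size limit (the on-screen display is also capped by Tools > Options > Results > 'Maximum characters per column').
Code:
-- Verify how much is actually stored (table/column names are placeholders)
SELECT DATALENGTH(MyColumn) AS StoredBytes, LEN(MyColumn) AS StoredChars
FROM dbo.MyTable;

-- Raise the per-session limit on returned long text data
SET TEXTSIZE 2147483647;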
Hey everyone! I'm doing an export from SQL into an Excel spreadsheet and then am going to clean out certain parts of the data with a global search/replace. The problem is that the SQL data is full of special characters such as |'s and the little box-looking characters. How do I export without these characters? I know it's possible; I did it about 2 years ago and remember I did some crazy file conversion (made a wk3 or something), but I no longer remember. Any help would be much appreciated! Thanks, Geoff
PS, attached is a screenshot of the data to give you an idea of what I'd like to strip!
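For what it's worth, the box-looking characters are usually embedded control characters (carriage returns, line feeds, tabs) that Excel cannot render. One way to skip the file-format gymnastics is to strip them in the export query itself; a minimal sketch, assuming hypothetical table and column names:
Code:
-- Replace common control characters with spaces before exporting
SELECT REPLACE(REPLACE(REPLACE(Notes,
           CHAR(13), ' '),   -- carriage return
           CHAR(10), ' '),   -- line feed
           CHAR(9),  ' ') AS CleanNotes  -- tab
FROM dbo.SourceTable;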
I have a table with several columns of information, and I wish to set up some form of scheduled process to go through this data and remove any special characters that may interfere with other code processes.
Mainly the commas and the apostrophes. They really mess with my ASP pages and scripts when retrieving this information and trying to do other things with it, so I need to figure out how to remove them from the tables so they do not cause these issues.
Knowing this, I cannot figure out how to keep the data in the row/column and just extract the special characters from that data. The other problem is that everything I try requires me to insert either a comma or an apostrophe as part of the code string, which is exactly my issue.
How can I parse through my data, leave the data as-is, but just get rid of commas, apostrophes, and double quotes?
Does anyone have a basic example that I can use to expand on?
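One way around the embedding problem is to refer to the characters by their ASCII codes instead of typing them literally: CHAR(44) is a comma, CHAR(39) an apostrophe, and CHAR(34) a double quote. A minimal sketch, assuming hypothetical table and column names, which a scheduled job could run periodically:
Code:
UPDATE dbo.MyTable
SET MyColumn = REPLACE(REPLACE(REPLACE(MyColumn,
                   CHAR(44), ''),   -- comma
                   CHAR(39), ''),   -- apostrophe (single quote)
                   CHAR(34), '');   -- double quote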
There is a ">" character (right-angle bracket) inside my SQL Server password. When I supplied this password to the bcp utility, the ">" character was treated as an output redirection symbol. So the password was truncated at ">" and the bcp output got sent to a file with a name consisting of the rest of the password after ">". I'm using SQL Server 2012. I cannot use Windows authentication due to company policy. Is there a way to resolve this without changing the password?
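In cmd.exe the redirection characters lose their special meaning inside double quotes, so quoting the password should get it to bcp intact. A sketch of the command line, with placeholder server, table, file, and password values:
Code:
bcp MyDb.dbo.MyTable out C:\export\data.txt -c -S MYSERVER -U myuser -P "secr>et"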
I have a source SQL 2005 server with the database collation SQL_Latin1_General_CP1_CI_AS and a destination SQL 2012 server with the same database collation.
But the SQL Server level collation is different: SQL 2005 uses Latin1_General_CI_AI and SQL 2012 uses SQL_Latin1_General_CP1_CI_AS.
Now when I load the data from 2005 for one table to SQL 2012, I can see special characters in one column, and I don't see them in the source database. Is there a way to avoid that, or is it something we need to fix manually?
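It is worth pinning down exactly which collations are in play, since the column-level collation (inherited from the database default, not from the server) is what actually governs the stored data. A quick sketch, with placeholder database and table names:
Code:
SELECT SERVERPROPERTY('Collation') AS ServerCollation;
SELECT DATABASEPROPERTYEX('MyDb', 'Collation') AS DatabaseCollation;
SELECT name, collation_name
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.MyTable');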
I need to create a function that will check data inputted by a user into a column and check for special characters. If any of these exist, then block the insert.
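One declarative way to do this is a scalar function that flags anything outside an allowed character set, wrapped in a CHECK constraint so the engine blocks the insert itself. A minimal sketch, assuming only letters, digits, and spaces are allowed; all object names here are placeholders:
Code:
CREATE FUNCTION dbo.fn_HasSpecialChars (@input VARCHAR(4000))
RETURNS BIT
AS
BEGIN
    -- The ^ inside the brackets negates the class: match any character NOT in the list
    RETURN CASE WHEN @input LIKE '%[^A-Za-z0-9 ]%' THEN 1 ELSE 0 END;
END;
GO

-- Block inserts/updates containing anything outside the allowed set
ALTER TABLE dbo.MyTable
ADD CONSTRAINT CK_MyTable_NoSpecialChars
CHECK (dbo.fn_HasSpecialChars(MyColumn) = 0);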
Hi, in SSIS I read data from a DB2 database on an AS400 using the Client Access ODBC Driver for DB2 from IBM and write it to a SQL Server database. Since it does not work using the ODBC driver as a data source directly, I use a DataReader component with the .NET ODBC provider. Some special characters are not translated correctly when read from DB2; they show up as ? in the SQL Server target table.
I tried changing the client locale in the ODBC connection properties, but it did not help. I tried changing all the other ODBC settings as well, and it still does not work.
In DTS I could source all the data without these problems, and at good speed, using the same ODBC driver.
The OLE DB providers delivered with SSIS do not work for me, or I cannot configure them correctly, and they are too slow in any case.
I cannot use the MS OLE DB Provider for DB2 because it works only with Enterprise Edition, and we only have Standard Edition.
Thus, only the Client Access ODBC Driver for DB2 with the .NET ODBC provider (as a bridge) is performant enough, and it works on Itanium. But how do I work around the problem with the special characters?
SELECT t.Doctor, t.LedgerAmount, t.TransactionDate,
       ISNULL(lg.LedgerGrpDesc, 'No Sales Group') AS LedgerGroup
FROM Transactions t
LEFT OUTER JOIN LedgerGroups lg ON t.LedgerDescription = lg.dbLedgerDesc
My problem is that the data in t.LedgerDescription sometimes now has either leading/trailing white space or more likely special chars so the join against lg.dbLedgerDesc doesn't always work.
I can't change the source of the data to strip out special chars/white space so am stuck on how to deal with it.
I tried using LTRIM & RTRIM in the where clause but this doesn't seem to have had any effect...
LEFT OUTER JOIN LedgerGroups lg ON LTRIM(RTRIM(t.LedgerDescription)) = lg.dbLedgerDesc
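If LTRIM/RTRIM changes nothing, the stray characters are probably ones it does not treat as spaces: carriage returns, line feeds, tabs, or non-breaking spaces. A sketch that scrubs those before trimming (extend the nested REPLACEs for anything else you find in the data):
Code:
LEFT OUTER JOIN LedgerGroups lg
  ON LTRIM(RTRIM(
         REPLACE(REPLACE(REPLACE(REPLACE(t.LedgerDescription,
             CHAR(13), ''),    -- carriage return
             CHAR(10), ''),    -- line feed
             CHAR(9),  ''),    -- tab
             CHAR(160), '')    -- non-breaking space
     )) = lg.dbLedgerDesc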
I'm presented with a problem where I have a database table which must be migrated via a "custom tool", moving the data into a new table which has special character requirements that didn't exist in the source database. My data resides in an SQL Server 2008R2 instance.
I envision a one-time query which will loop through selected records and replace the offending characters with --, however I'm having trouble understanding how this works.
There are roughly 2500 records which meet the criteria of "contains bad characters", frequently containing multiple separate bad chars, and the table contains roughly 100000 rows.
Special Characters are defined as #%&*:<>?/{}|~ and ..
While the field is called "Filename", it isn't always one; this is a parent/child table where folder names are also stored.
The examples I'm finding are all oriented around SELECT statements, changing the output of what I see returned, whereas I'd rather just fix the entire column using an UPDATE. Initial testing using REPLACE fails because the bad part of a string is not always a single character.
In a better solution, I found an example using a User Defined Function to modify the output of a select, but I cannot use that UDF in an UPDATE.
My alternative is to learn enough C# to modify the "migration tool" to do this in-transit, but I know even less about C# than I do of SQL.
I gather I want to use @@ROWCOUNT to loop through the rows but I really can't put it all together in a cohesive way.
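For what it's worth, no row-by-row loop is needed; you can loop over the list of bad characters instead and let each UPDATE touch every affected row at once. A minimal sketch, assuming a hypothetical table dbo.FileList with the Filename column; the trailing UPDATE shows that REPLACE also handles multi-character patterns like '..':
Code:
DECLARE @bad VARCHAR(20) = '#%&*:<>?/{}|~';
DECLARE @i INT = 1;

WHILE @i <= LEN(@bad)
BEGIN
    UPDATE dbo.FileList
    SET [Filename] = REPLACE([Filename], SUBSTRING(@bad, @i, 1), '--')
    WHERE CHARINDEX(SUBSTRING(@bad, @i, 1), [Filename]) > 0;
    SET @i = @i + 1;
END;

-- REPLACE works on multi-character patterns too, so '..' needs no loop
UPDATE dbo.FileList
SET [Filename] = REPLACE([Filename], '..', '--')
WHERE CHARINDEX('..', [Filename]) > 0;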
I set up this package to import data from a SharePoint list to a SQL Server table. The primary key of my SQL table is mapped to the Title column of my SharePoint list. There is a possibility that duplicate values will be entered in the Title field of the SharePoint list, so when importing data into my table via SSIS, my package always errors out when it comes across duplicate values. How have others managed data integrity when importing from a SharePoint list with the Title column mapped to the primary key of a table?
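One pattern that sidesteps the error is to land the list in a keyless staging table first and then move across only rows that will not violate the primary key, de-duplicating within the batch at the same time. A minimal sketch with placeholder table and column names:
Code:
INSERT INTO dbo.Target (Title, Col1, Col2)
SELECT s.Title, s.Col1, s.Col2
FROM (SELECT Title, Col1, Col2,
             ROW_NUMBER() OVER (PARTITION BY Title ORDER BY Title) AS rn
      FROM dbo.Staging) s
WHERE s.rn = 1   -- keep one row per Title within the batch
  AND NOT EXISTS (SELECT 1 FROM dbo.Target t WHERE t.Title = s.Title);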
I have one column in SQL Server 2005 of data type VARCHAR(4000).
I have imported SQL Server 2005 database data into an .mdb file. After importing the data into the .mdb file, the above column's data type was converted into the Memo type in the Access database.
Now, when I am trying to import the data from this MS Access file (db1.mdb) into another SQL Server 2005 database, I get a Unicode conversion error on the Memo data type in the Export/Import Data Wizard.
Could you please let me know the reason?
I know that the Memo data type is not supported in SQL Server 2005.
I am on SQL Server 2005 Standard Edition with SP2.
Please help me to understand this issue correctly.
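Access Memo columns come across as Unicode long text, so the receiving column generally needs to be NVARCHAR(MAX) (or NTEXT) rather than VARCHAR(4000); the wizard's error is the engine refusing the implicit Unicode-to-non-Unicode conversion. One way around the wizard is a linked query with an explicit cast; a sketch assuming the Jet provider is available and a hypothetical table Table1 with Memo column MemoCol:
Code:
-- Requires the 'Ad Hoc Distributed Queries' option to be enabled
SELECT CAST(MemoCol AS NVARCHAR(MAX)) AS MemoCol
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'C:\data\db1.mdb'; 'Admin'; '',
                Table1);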
We have a daily process which copies millions of rows of data from one DB to another over a linked server. Just checking on best practice: are there more efficient ways than the linked server to copy millions of rows of data from one DB to another? I checked BULK INSERT, but that transfers the data from a file to the DB, not DB to DB.
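Beyond SSIS or a bcp out/in pair, one way to make the linked-server route less painful is to copy in batches so each transaction stays small. A minimal sketch with placeholder linked-server, database, and table names:
Code:
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.TargetTable (Id, Col1)
    SELECT TOP (50000) s.Id, s.Col1
    FROM [LinkedSrv].[SourceDb].dbo.SourceTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable t WHERE t.Id = s.Id);
    SET @rows = @@ROWCOUNT;  -- stop once nothing new is left to copy
END;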
I have created a simple package that uses a SQL command to pull data from an Oracle database and inserts the data into a SQL 2005 table. Some of the data fields that I am pulling contain two digits after the decimal point; however, these digits are lost when the data gets into SQL. I have even tried putting the data into a flat file, and still the digits are lost.
In the package I have an OLE DB source connection, which is the Oracle database, and when I do the preview I see all the data I need. I am very confused and have tried a number of things to get the data into SQL, but none work. Any ideas would be very helpful.
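This usually means the Oracle NUMBER column (declared without an explicit scale) is being mapped to an integer or float type on the way through. Pinning the scale with a cast in the source query forces a precise mapping, paired with a DECIMAL(18,2) destination column. A sketch with placeholder schema, table, and column names (DECIMAL is an ANSI alias Oracle also accepts):
Code:
-- Used as the SQL command text of the OLE DB source
SELECT CAST(amount AS DECIMAL(18,2)) AS amount
FROM some_schema.some_table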
When I am importing data from Excel to SQL using DTS, a column that has text content is not imported the same as in the Excel sheet: a special character appears between the lines. The text field contains multiple lines, but the content is imported as a single line.
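Excel stores in-cell line breaks as a bare line feed, CHAR(10), which many tools show as a box or collapse into one line. If the breaks did survive the import, they can be normalized to the Windows CR+LF convention afterwards; a sketch assuming placeholder names and that only bare line feeds are present:
Code:
UPDATE dbo.ImportedTable
SET TextCol = REPLACE(TextCol, CHAR(10), CHAR(13) + CHAR(10))
WHERE CHARINDEX(CHAR(10), TextCol) > 0
  AND CHARINDEX(CHAR(13), TextCol) = 0;  -- skip rows already using CR+LF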
I'm wondering if SSIS will be the solution to the problem I'm working on.
Some of our customers give us an Excel sheet with data they want to insert or update in the database.
I've created a package that will take an Excel sheet, do some data conversion so the data types match up, and after that I use a Slowly Changing Dimension component to create the insert/update commands.
This works great. If a customer adds a new row to the Excel sheet or updates an existing row changes are nicely reflected in the database.
But now I've got the following problem. The column names and the order of the columns in the Excel sheet are not standard, and in the future it could happen that a customer doesn't even use an Excel sheet but something totally different.
Can I use SSIS for this? Is it possible to let the user set the mappings through some sort of user interface? I've looked at programmatically creating the package, but I've got to say that's quite hard to do... It would be easier to write the whole thing myself than to create the package through code ;)
If not I thought about transforming the data in code before I pass it on to the SSIS package in something like XML. That way I can use standard column names and data types.
So how should I solve this problem? Use SSIS or not?
I'm new to SQL and DTS packages. I am trying to import data from an Excel spreadsheet to a SQL Server table via a DTS package. It seems that the Excel task looks at the first few records in a column to determine the datatype for that column. If the first few records are text, the entire column is imported as text. If numeric, the entire column is imported as numeric. There are about 25,000 records. In one field, the most important one, about half of the records begin with letters and the rest are all numbers. It is the subscriber ID field, and some subscriber IDs are all numbers, some are letters and numbers. The entire column should be imported as text. However, when I run the transform data task from the Excel connection, none of the records that are all numbers are imported. I end up correctly importing only 13,000 of the 25,000 records. The rest are imported with the subscriber ID field as <NULL>. I tried using the CAST or CONVERT function in the SQL query, but get the error message "Undefined Function."
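The behavior described comes from the Jet/Excel driver sampling only the first handful of rows to guess each column's type (the depth is the TypeGuessRows registry setting); values that do not match the guess come back NULL, which is why CAST/CONVERT in the query cannot help. Adding IMEX=1 to the connection string makes mixed columns come through as text. A sketch using OPENROWSET, with a placeholder path and sheet name:
Code:
SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\data\subscribers.xls;HDR=YES;IMEX=1',
                'SELECT * FROM [Sheet1$]');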
Hello, I create a txt file with a bash script, and I need to use it in a DTS package. But I don't know how I can specify the type of my columns, so in the transformation task I get an error due to an incompatible type. What can I do to fix this error? Thanks.
I am creating a DTS package that is combining several tables, converting one column of data to a new column by removing all special characters, then exporting to a new table the unique data based on this column and another column, along with the MAX of the other duplicate columns.
Now that I have the data in this table, I want to import any data that is not in my main table.
This "CLEANED" table does not have a designated "key" column, but the table I want to import the unique items does have an ID column that is also a primary key column.
DTS seems to want me to have a Key column to reference when importing from the CLEANED table to the MAIN table.
How would I go about checking the MAIN table against the CLEANED table, having DTS import only the unique items from the CLEANED table that are not present in the MAIN table based on three columns? The rest of the columns I want to just extract the MAX data from the duplicates.
Now here is the query I use to extract the unique values from the "CLEANING" table to get the data to the "CLEANED" table, but do not know how to use this to import into the MAIN table using something similar.
Code:
SELECT partno2, MAX(partno) AS partno, alt, MAX(C_alt) AS C_alt,
       MAX(cmpycd) AS cmpycd, MAX(type) AS type, compFN,
       MAX(pndesc) AS pndesc, MAX(equipment) AS equipment
INTO tbl_CLEANED
FROM tbl_CLEANING
GROUP BY partno2, alt, compFN
ORDER BY partno, compFN
The three main columns I need to check against are partno2, alt, and compFN. I have named the columns the same in both tables.
partno2 is the column that has been copied from partno with all special characters and spaces removed. This is the main column I am using as a reference for unique values; then, if there is no match, I have it check against the alt column, then the compFN column. If there are no matches in any of these columns, then I want to extract the data to the MAIN table.
How can I compare these tables and import only unique info to the MAIN table?
In addition, how can I also check items that are the same in both tables and update the MAX info for the other columns (not the three I use for reference; those I need to leave alone), updating them if there is more data in the CLEANED table than in the MAIN table?
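The comparison splits naturally into an INSERT for CLEANED rows with no match in MAIN and an UPDATE for rows that do match. A sketch against the tables above; the OR chain mirrors the partno2-then-alt-then-compFN checking order, and the UPDATE's "has more data" condition is a placeholder to replace with your own rule:
Code:
-- Bring over CLEANED rows with no match in MAIN on any of the three keys
INSERT INTO tbl_MAIN (partno2, partno, alt, C_alt, cmpycd, type, compFN, pndesc, equipment)
SELECT c.partno2, c.partno, c.alt, c.C_alt, c.cmpycd, c.type, c.compFN, c.pndesc, c.equipment
FROM tbl_CLEANED c
WHERE NOT EXISTS (SELECT 1 FROM tbl_MAIN m
                  WHERE m.partno2 = c.partno2
                     OR m.alt = c.alt
                     OR m.compFN = c.compFN);

-- Refresh non-key columns on matching rows where CLEANED has more data
UPDATE m
SET m.pndesc = c.pndesc, m.equipment = c.equipment
FROM tbl_MAIN m
JOIN tbl_CLEANED c ON m.partno2 = c.partno2
WHERE LEN(ISNULL(c.pndesc, '')) > LEN(ISNULL(m.pndesc, ''));  -- placeholder rule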
I have a process that calls a proc that BCP's a delimited file into a table. Well the SOX police say a header and footer must be added to the file. Needless to say this screws my BCP process. Does anyone know how to strip a header and footer record from a text file using transact sql or have any other suggestions to strip the records?
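If the header and footer are each a single line, BULK INSERT can skip them directly: FIRSTROW drops the header, and LASTROW cuts off the footer when the row count is known in advance (bcp's -F and -L switches do the same from the command line). A sketch with placeholder names; when the row count is not known, land everything in a staging table and delete the first and last rows instead:
Code:
BULK INSERT dbo.Staging
FROM 'C:\feeds\data.txt'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n', FIRSTROW = 2);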
I am trying to import data from Excel into my server, but get this error message:
Error during transformation 'DirectCopyXform' for row number 1. Errors encountered so far in this task: 1. TransformCopy 'DirectCopyXform' conversion error: Conversion invalid for datatypes on column pair 19 (source column '*9' (DBTYPE_WSTR), destination column 'F19' (DBTYPE_R8)).
We are trying to import data from a flat file using an uptick (`) as a column separator and {CR/LF} as a record terminator. There is a variable number of columns for each record. The initial record in the flat file has 3 columns. Upon processing this record, the import sets all records to 3 columns and does not read the column separators past the second column (even though there may be up to 7 columns in the record).
This method worked ok in DTS2000 and it works with Excel. Any suggestions?
There's a lot of discussion in this Forum concerning the importation of data into SSE.
I recently discovered that you can quite easily export tables directly from MSAccess to SSE. Simply 1) select the desired MSAccess table, 2) choose 'Export' from the File menu, 3) in the 'Save as Type' drop-down, select ODBC Databases(), 4) at this point, an 'Export' dialog with the name of the selected table appears, 5) click 'OK', 6) the Data Source Manager appears - go to 'Machine Data Source' and select a DSN that connects to SSE (you need to have set this DSN up beforehand), 7) click 'OK' and your table will be exported from MSAccess to SSE.
For whatever reason, the table you exported will always end up in SSE's 'master' database tables. There is probably some workaround to correct this, but I haven't spent much time fooling with it.
Now, MSAccess has a very good parser for importing several types of external data, including Excel worksheets, CSV files, and text data. However, to get a satisfactory import into MSAccess, you may need to edit your data files. Then, to import the data, from MSAccess select 'Get External Data' -> 'Import' from the File menu and just follow the instructions.
BTW, there is a lot of confusing discussion concerning the ability of SSE to use DTS. Some have said that it is not supported, only in the full-blown version. Others have suggested you can download the DTS application from Microsoft and use it with SSE, although it does not show up in the SSEMS directory tree.
I downloaded the file 'SQLServer2005_DTS.msi' and tried to install it. It appeared to install OK but I cannot find it anywhere on my machine. Weird, eh.
While most of this thread is in the way of a comment, I have a couple of questions:
1 - Is there some way to connect a machine DSN to a database in SSE that is other than 'Master' ?
2 - Are there better ways to import data into SSE ?
3 - Is there some way to move or copy a table from one SSE database to another ?
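On questions 1 and 3: the default database for a machine DSN can be changed on the second page of the SQL Server DSN wizard (the 'Change the default database to' option), which is also why exports land in master when it is left alone; and a table can be copied between databases on the same instance with a three-part name. A minimal sketch of the copy, with placeholder database and table names:
Code:
-- Copies structure and data; indexes and constraints are not carried over
SELECT *
INTO TargetDb.dbo.MyTable
FROM SourceDb.dbo.MyTable;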
BTW, please do not respond to this thread by posting some link. Many of the links I have attempted to follow from this Forum are either irrelevant to the posted problem or are no longer available. If I wanted that kind of help, I could use Google or go to the library. Try to answer the question(s) directly. If you don't know the answer, say so, or at least, don't answer at all.
With SQL 2000 there was an Import/Export facility for importing data into a SQL database. Could somebody tell me how to import an old database, which could be in CSV text or Paradox, into my new SQL 2005 tables? Thanks
I have created an application that uses the login, create-user, etc. components in .NET. How hard is it to convert all my old users, passwords, and user types into the new tables? It almost looks like I have to do them by hand and create a new GUID (UserId), along with the same GUID in aspnet_UsersInRoles and aspnet_Membership. Is there a script to do this programmatically?
Hi, I have an Excel spreadsheet from which I want to take the data and put it in a table. The table and the Excel spreadsheet share the same unique ID; what I need to do is retrieve the extra fields of the Excel spreadsheet and match them up with the table. Is this possible, and if so, how?
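Once the spreadsheet is reachable from T-SQL (loaded into a staging table, or read via OPENROWSET as sketched earlier), matching on the shared ID is a plain UPDATE with a join; a sketch with placeholder table and column names:
Code:
UPDATE t
SET t.ExtraField1 = x.ExtraField1,
    t.ExtraField2 = x.ExtraField2
FROM dbo.MyTable t
JOIN dbo.ExcelStaging x ON x.UniqueID = t.UniqueID;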
Here is the scenario: I have an Excel spreadsheet that contains 182 columns, and I need to move this data into a semi-normalized database for reporting. The SQL Server database schema has 11 tables. Some of the tables are going to use identity columns for their PK; other tables are using a value that comes from this spreadsheet for their PK values. Anyway, I have never done a DTS package of any significance before, and know I most likely need to write some VBScript to handle sticking data into the proper data tables, etc. I am just hoping someone can point me at a good resource, or give me an alternative means of doing this (this is a process that will need to happen whenever a new Excel spreadsheet is dropped into a folder, or on a schedule, either one). I would love to write some C# code to handle these things, but a DTS package would probably be best; I just don't know where to start. Thanks,