I understand that SQL 2005 now supports XML documents. I have a rather large XML file that I need to get into a table in SQL2005. I have tried the Import Wizard to no avail. I even tried to use DTS packages in SQL 2000, and still no luck.
Does anyone know of an article or information I can obtain on how to import data from an XML file to a SQL 2005 table?
Here is a basic idea of the data as it is shown on the first row of the XML file.
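In case it helps, one approach that works in SQL 2005 is to bulk-load the whole file as a single XML value with OPENROWSET and shred it into rows with nodes(). A minimal hedged sketch, where the file path, element names, and target columns are all assumptions:
Code:
-- Load the whole file as one XML value, then shred it into rows.
DECLARE @x XML
SELECT @x = CAST(BulkColumn AS XML)
FROM OPENROWSET(BULK 'C:\data\import.xml', SINGLE_BLOB) AS f

INSERT INTO dbo.ImportTable (Col1, Col2)
SELECT r.value('(Col1)[1]', 'varchar(100)'),
       r.value('(Col2)[1]', 'varchar(100)')
FROM @x.nodes('/root/row') AS t(r)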
Hi, I have to import data from a number of Excel files to corresponding tables in SQL 2005. The Excel files were created using Excel 4.0. I have created an Excel connection manager and provided it with the path of the Excel sheet. Next I added an Excel source from the toolbox to the data flow. I set the connection manager, data access mode, and the name of the Excel sheet (the wizard detects the sheet correctly) in the dialog window I get when I double-click the Excel source. Everything goes fine up to here. Now when I select 'Columns' in this dialog window, or click the Preview button, I get this error:
TITLE: Microsoft Visual Studio
------------------------------
Error at Data Flow Task [Excel Source [1]]: An OLE DB error has occurred. Error code: 0x80004005.
Error at Data Flow Task [Excel Source [1]]: Opening a rowset for "test4$" failed. Check that the object exists in the database.
------------------------------
ADDITIONAL INFORMATION:
Exception from HRESULT: 0xC02020E8 (Microsoft.SqlServer.DTSPipelineWrap)
------------------------------
I tried restoring a SQL2000 database into SQL2005, but the import failed. Is it possible to do an import like this, or do you have to do some kind of DB conversion?
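For what it's worth, restoring a SQL 2000 backup directly on a SQL 2005 instance is supported; the engine upgrades the database during the restore. A minimal hedged sketch, where the backup path and logical file names are assumptions to adjust:
Code:
-- Check the logical file names recorded in the backup first.
RESTORE FILELISTONLY FROM DISK = 'C:\backups\MyDb2000.bak'

-- Restore; SQL 2005 upgrades the database automatically on the way in.
RESTORE DATABASE MyDb
FROM DISK = 'C:\backups\MyDb2000.bak'
WITH MOVE 'MyDb_Data' TO 'C:\SQLData\MyDb.mdf',
     MOVE 'MyDb_Log'  TO 'C:\SQLData\MyDb.ldf'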
In SQL 2000, I had a working DTS package that would import a Pervasive SQL database into SQL 2000 (There is a good reason, provided on request). The column type definitions came over just fine in SQL 2000 with a few minor changes.
In SQL 2005 (SSIS), I create a data source via the Connection Manager (Provider: .Net Providers\Odbc Data Provider) to the Pervasive database (System DSN, <database_odbc>). I then create a data destination via the Connection Manager (Provider: Native OLE DB\SQL Native Client) to the SQL database. Both databases reside on the same machine.
I've created a DataReader Source and used the SQL command "select * from ARCustomer" as an example. The issue is with the data types for the columns. They don't come close to resembling the results that I had in SQL 2000 DTS.
Is there another method, or a parameter setting, that will preserve the data types for the columns being imported from the Pervasive database?
This has been a real stumbling block and any help would be truly appreciated. Thanks in advance for your assistance ... Bernie
I am trying to import an Excel spreadsheet into SQL2005. There is a column in the spreadsheet that has character values and numbers. I have formatted the numbers as text on the spreadsheet. I have declared the column on the table as char/varchar/nchar, but whatever I do, the numbers don't get imported into the table; they show up as nulls. Any idea why?
There is a "text" file generated by mainframe and it has to be uploaded to SQL Server. I've reproduced the situation with smaller sample. Let the file look like following: A17 123.17 first row BB29 493.19 second ZZ3 18947.1 third row is longer And in hex format: 00: 41 31 37 20 20 20 20 20 ”‚ 31 32 33 2E 31 37 20 20 A17 123.17 10: 66 69 72 73 74 20 72 6F ”‚ 77 0D 0A 42 42 32 39 20 first row™ª—™BB29 20: 20 00 20 34 39 33 2E 31 ”‚ 39 20 20 73 65 63 6F 6E 493.19 secon 30: 64 0D 0A 5A 5A 33 20 20 ”‚ 20 20 20 31 38 39 34 37 d™ª—™ZZ3 18947 40: 2E 31 20 74 68 69 72 64 ”‚ 20 72 6F 77 00 69 73 20 .1 third row is 50: 6C 6F 6E 67 65 72 ”‚ longer
I wrote "text" in quotes because sctrictly it is not pure text file - non-text binary zeros (0x00) happen sometimes instead of spaces (0x20).
The table is:
CREATE TABLE eng (
src varchar (512)
)
When I upload this file into SQL2000 using DTS or the Import wizard, the table contains:
select src, substring(src,9,8), len(src) from eng

<src>                       <substr>  <len>
A17     123.17  first row   123.17    25
BB29                        493.19    22
ZZ3     18947.1 third row   18947.1   35
As one can see, everything was imported, including the binary zeros. And though SELECT * in SSMS truncates the displayed strings upon reaching the 0x00s, all the information is stored in the table - the SUBSTRINGs show that.
When I upload this file into SQL2005 using SSIS or the Import wizard, the result is the following:

<src>                       <substr>  <len>
A17     123.17  first row   123.17    25
BB29                                  4
ZZ3     18947.1 third row   18947.1   25
This time the table is half-empty - all the characters after the binary zeros in the affected rows are lost.
I stumbled upon this problem while migrating my DTSes to SSIS packages. Do you think there is some workaround, or is there some checkbox I need to turn on, or something else that could help? Please...
For the life of me I cannot figure out why SSIS will not convert varchar data. Instead of using the table-to-table method, I wrote a SQL query so that I could transform the data type ntext to varchar(512), understanding that natively MS is moving towards all-Unicode applications.
The source fields from Access are int, int, int and varchar(512). The same is true of the destination within SQL Server 2005. The field 'Answer' is the varchar field in question...
I get the following error
Validating (Error)
Messages
Error 0xc02020f6: Data Flow Task: Column "Answer" cannot convert between unicode and non-unicode string data types. (SQL Server Import and Export Wizard)
Error 0xc004706b: Data Flow Task: "component "Destination - Query" (28)" failed validation and returned validation status "VS_ISBROKEN". (SQL Server Import and Export Wizard)
Error 0xc004700c: Data Flow Task: One or more component failed validation. (SQL Server Import and Export Wizard)
Error 0xc0024107: Data Flow Task: There were errors during task validation. (SQL Server Import and Export Wizard)
DTS used to be a very strong tool, but a simple import such as this is causing me extreme grief and making me wonder if SQL2005 is ready for primetime. FYI, SP1 is installed. I am running this from a workstation and not on the server, if that makes a difference...
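For what it's worth, SSIS will not convert between Unicode and non-Unicode strings implicitly; either a Data Conversion transformation (DT_WSTR to DT_STR) has to be inserted in the data flow, or the destination column can be made Unicode so no conversion is needed. A hedged sketch of the latter, with the table and column names assumed:
Code:
-- Access text arrives as Unicode; an nvarchar destination avoids the
-- unicode/non-unicode validation error (table/column names are assumptions).
ALTER TABLE dbo.Answers ALTER COLUMN Answer NVARCHAR(512)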
I set up this package to import data from a SharePoint list to a SQL Server data table. The primary key of my SQL table is mapped to the Title column of my SharePoint list. There is a possibility that duplicate values will be entered in the Title field of the SharePoint list, so when importing data into my table via SSIS, my package always errors out when it comes across duplicate values. How have others managed data integrity when importing from a SharePoint list with the Title column being mapped to the primary key of a table?
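One common pattern, sketched here under assumed table names: land the list in a staging table first, then insert only the titles the target doesn't already have, collapsing duplicates within the staging data at the same time:
Code:
-- Hedged sketch (staging/target names and columns are assumptions).
INSERT INTO dbo.Target (Title, OtherCol)
SELECT s.Title, MAX(s.OtherCol)      -- pick one row per duplicate Title
FROM dbo.Staging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.Title = s.Title)
GROUP BY s.Title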
I have one column in SQL Server 2005 of data type VARCHAR(4000).
I have imported SQL Server 2005 database data into an mdb file. After importing the data into the mdb file, the above column's data type was converted to Memo in the Access database.
Now, when I try to import data from this MS Access file (db1.mdb) into another SQL Server 2005 database, I get a Unicode conversion error on the Memo data type in the Export/Import data wizard.
Could you please let me know what the reason is?
I know that the Memo data type is not supported in SQL Server 2005.
I am on SQL Server 2005 Standard Edition with SP2.
Please help me understand this issue correctly.
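If it helps, Access Memo columns come through the wizard as Unicode text (DT_NTEXT), so one hedged workaround is to land the column in a Unicode type and convert afterwards; the table and column names below are assumptions:
Code:
-- Land the Memo column as Unicode first.
CREATE TABLE dbo.ImportTarget (LongText NVARCHAR(MAX))

-- After the import, convert back if varchar(4000) is really required
-- (this fails if any value is longer than 4000 characters).
ALTER TABLE dbo.ImportTarget ALTER COLUMN LongText VARCHAR(4000)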
We have a daily process which copies millions of rows of data from one DB to another over a linked server. Just checking on best practice: are there more efficient ways than the linked server to copy millions of rows of data from one DB to another? I checked BULK INSERT, but that transfers data from a file to the DB, not DB to DB.
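One route that often beats a linked-server pull for large volumes, sketched here with hypothetical server, table, and path names: export with bcp in native format, then bulk-load on the destination:
Code:
-- 1) From a command prompt, export the source table in native format:
--    bcp SourceDb.dbo.BigTable out C:\xfer\BigTable.dat -n -S SourceServer -T
-- 2) On the destination server, bulk-load the file:
BULK INSERT TargetDb.dbo.BigTable
FROM 'C:\xfer\BigTable.dat'
WITH (DATAFILETYPE = 'native', TABLOCK)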
I have created a simple package that uses a SQL command to pull data from an Oracle database and insert the data into a SQL 2005 table. Some of the data fields that I am pulling contain two digits after the decimal point; however, this data is lost when it gets into SQL. I have even tried putting the data into a flat file, and still the data is lost.
In the package I have an OLE DB source connection, which is the Oracle database, and when I do the preview I see all the data I need. I am very confused and have tried a number of things to get the data into SQL, but none work. Any ideas would be very helpful.
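One thing that may be worth trying, sketched with hypothetical column and table names: Oracle NUMBER columns declared without an explicit scale can get mapped to integer types by the provider, so pinning the precision and scale with a CAST in the source query (Oracle-side syntax) sometimes preserves the decimals:
Code:
-- Hedged sketch of the Oracle source query (names are assumptions).
SELECT CAST(amount AS NUMBER(18,2)) AS amount,
       other_col
FROM   some_oracle_table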
When I'm importing data from Excel to SQL using DTS, a column with text content is not imported the same as it appears in the Excel sheet: a special character appears between the lines. The text field contains multiple lines, but the content is imported as a single line.
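For what it's worth, Excel separates lines within a cell with a bare line feed (CHAR(10)), which many tools render as a stray box character. A hedged cleanup sketch, with the table and column names assumed:
Code:
-- Turn Excel's bare LF into CR+LF after the import (names are assumptions).
UPDATE dbo.ImportedTable
SET    TextCol = REPLACE(TextCol, CHAR(10), CHAR(13) + CHAR(10))
WHERE  TextCol LIKE '%' + CHAR(10) + '%'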
I'm wondering if SSIS will be the solution to the problem I'm working on.
Some of our customers give us an Excel sheet with data they want to insert or update in the database.
I've created a package that will take an Excel sheet, do some data conversion so the data types match up, and after that I use a Slowly Changing Dimension component to create the insert/update commands.
This works great. If a customer adds a new row to the Excel sheet or updates an existing row changes are nicely reflected in the database.
But now I've got the following problem: the column names and the order of the columns in the Excel sheet are not standard, and in the future it could happen that a customer doesn't even use an Excel sheet but something totally different.
Can I use SSIS for this? Is it possible to let the user set the mappings through some sort of user interface? I've looked at programmatically creating the package, but I've got to say that's quite hard to do... It would be easier to write the whole thing myself than to create the package through code ;)
If not, I thought about transforming the data in code into something like XML before I pass it on to the SSIS package. That way I can use standard column names and data types.
So how should I solve this problem? Use SSIS or not?
I'm new to SQL and DTS packages. I am trying to import data from an Excel spreadsheet to a SQL Server table via a DTS package. It seems that the Excel task looks at the first few records in a column to determine the data type for that column. If the first few records are text, the entire column is imported as text; if numeric, the entire column is imported as numeric. There are about 25,000 records. One field, the most important one, is the subscriber ID field: about half of the records begin with letters and the rest are all numbers - some subscriber IDs are all numbers, some are letters and numbers. The entire column should be imported as text. However, when I run the Transform Data task from the Excel connection, none of the records that are all numbers are imported. I end up correctly importing only 13,000 of the 25,000 records; the rest are imported with the subscriber ID field as <NULL>. I tried using the CAST or CONVERT function in the SQL query, but get the error message "Undefined Function."
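A hedged workaround that often helps here: the Jet provider guesses each column's type from a sample of the first rows, but adding IMEX=1 to the connection string's Extended Properties tells it to import mixed columns as text. A sketch, with the file path assumed:
Code:
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\subscribers.xls;Extended Properties="Excel 8.0;HDR=YES;IMEX=1"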
Hello, I create a txt file with a bash script, and I need to use it in a DTS package. But I don't know how I can specify the types of my columns, so in the transformation task I get an error due to an incompatible type. What can I do to fix this error? Thanks.
I am creating a DTS package that combines several tables, converts one column of data to a new column with all special characters removed, then exports the unique data (based on this column and another column, with the MAX of the other columns taken from the duplicates) to a new table.
Now that I have the data in this table, I want to import any data that is not in my main table.
This "CLEANED" table does not have a designated "key" column, but the table I want to import the unique items does have an ID column that is also a primary key column.
DTS seems to want me to have a Key column to reference when importing from the CLEANED table to the MAIN table.
How would I go about checking the MAIN table against the CLEANED table, having DTS import only the unique items from the CLEANED table that are not present in the MAIN table, based on three columns? For the rest of the columns I just want to extract the MAX data from the duplicates.
Now, here is the query I use to extract the unique values from the "CLEANING" table to get the data into the "CLEANED" table; I do not know how to use something similar to import into the MAIN table.
Code:
SELECT partno2, MAX(partno) AS partno, alt, MAX(C_alt) AS C_alt,
       MAX(cmpycd) AS cmpycd, MAX(type) AS type, compFN,
       MAX(pndesc) AS pndesc, MAX(equipment) AS equipment
INTO   tbl_CLEANED
FROM   tbl_CLEANING
GROUP BY partno2, alt, compFN
ORDER BY partno, compFN
The three main columns I need to check against are partno2, alt and compFN. I have named the columns the same in both tables.
partno2 is the column that has been copied from partno with all special characters and spaces removed. This is the main column I am using as a reference for unique values; if there is no match, I have it check against the alt column, then the compFN column. If there are no matches in any of these columns, then I want to extract the data to the MAIN table.
How can I compare these tables and import only unique info to the MAIN table?
In addition, how can I also check items that are the same in both tables and update the MAX info for the other columns (not the three I use for reference - those I need to leave alone), updating them when there is more data in the CLEANED table than in the MAIN table?
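A hedged sketch of both steps, reusing the column list from the query above and assuming a hypothetical tbl_MAIN:
Code:
-- 1) Insert rows whose three reference columns have no match in MAIN.
INSERT INTO tbl_MAIN (partno2, partno, alt, C_alt, cmpycd, type, compFN, pndesc, equipment)
SELECT c.partno2, c.partno, c.alt, c.C_alt, c.cmpycd, c.type, c.compFN, c.pndesc, c.equipment
FROM tbl_CLEANED AS c
WHERE NOT EXISTS (SELECT 1 FROM tbl_MAIN AS m
                  WHERE m.partno2 = c.partno2 AND m.alt = c.alt AND m.compFN = c.compFN)

-- 2) For matching rows, fill the non-reference columns where CLEANED has a value
--    and MAIN does not (one possible reading of "more data"; adjust the rule as needed).
UPDATE m
SET    pndesc    = COALESCE(m.pndesc, c.pndesc),
       equipment = COALESCE(m.equipment, c.equipment)
FROM   tbl_MAIN AS m
JOIN   tbl_CLEANED AS c
  ON   m.partno2 = c.partno2 AND m.alt = c.alt AND m.compFN = c.compFN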
I'm stumped. When using the classic ASP page, the table produced on the web has 4 rows. But when using the ASP.NET 2.0 side against the same stored proc, I get double the rows - 8. However, when I pull from the SQL table directly there are only 4 rows. Any ideas? I've looked at this far too many hours...

da = New Data.SqlClient.SqlDataAdapter("pTblTempDisplayInsert2 0,'" & coList & "','" & tblList & "','" & varTblNameT & "'", cn)
da.Fill(ds, "myTables")
For Each drow As System.Data.DataRow In ds.Tables("myTables").Rows
    varTxt &= drow.Item("titleID") & "<br />"
Next

I get: 1852 1850 2055 2053 1852 1850 2055 2053

If I run the last line of the stored procedure with the variable table name filled in:

SELECT d.titleID
FROM ##varTblName d
LEFT JOIN xtblTitles t ON d.tblID = t.tblID
LEFT JOIN tblStatTitles s ON d.titleID = s.titleID
LEFT JOIN xtblURL u ON u.urlID = t.urlID
ORDER BY d.tblID, d.[year], d.lineOrder, d.countyID

I get: 1852 1850 2055 2053

If I do a simple SELECT * FROM ##varTableName, I get four rows....

Thanks,
Janet
I need to transfer data from SQL Server 2000 to SQL Server 2005 nightly: replicate the DB1.T1 table in SQL 2000 to the DB2.T1 table (the same table structure) in SQL 2005. What are my options? I need to extract the daily data and do this daily. An example would be greatly appreciated. Thanks,
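One simple hedged option, assuming a linked server to the 2000 box (called SQL2000SRV here) is defined on the 2005 instance: schedule a nightly job that refreshes the table across the link. SSIS or replication are the heavier-duty alternatives:
Code:
-- Nightly full refresh over a linked server (server and table names are assumptions).
TRUNCATE TABLE DB2.dbo.T1
INSERT INTO DB2.dbo.T1
SELECT * FROM SQL2000SRV.DB1.dbo.T1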
I have a SQL 2000 and a SQL 2005 server separately, but for whatever reason I can't use SQL 2000 Enterprise Manager anymore, so I now have to use SQL 2005 Management Studio to open up my SQL 2000 databases.
However, I'd like to export data from SQL 2000 to another SQL 2000 server, but via the SQL 2005 Management Studio interface. How can I do so? I can't find "Export Data" in the "Tasks" context menu in SQL 2005 Management Studio.
Hi there, I want to import data from ODBC. I created a DataReader Source which uses a .NET Odbc Data Provider and connects successfully. My destination is an OLE DB Destination that points to SQL2005. I set the SQL command as "SELECT * from ....". I also had a problem creating the new table in SQL2005 using the SSIS Import and Export Wizard: it doesn't know the source table schema (two date-type columns). So I created the new table manually and ran the package, and got this error:
SSIS package "Package1.dtsx" starting. Information: 0x4004300A at Data Flow Task, DTS.Pipeline: Validation phase is beginning. Information: 0x40043006 at Data Flow Task, DTS.Pipeline: Prepare for Execute phase is beginning. Information: 0x40043007 at Data Flow Task, DTS.Pipeline: Pre-Execute phase is beginning. Information: 0x4004300C at Data Flow Task, DTS.Pipeline: Execute phase is beginning. Error: 0xC02090F5 at Data Flow Task, Source - Query [1]: The component "Source - Query" (1) was unable to process the data. Error: 0xC0047038 at Data Flow Task, DTS.Pipeline: The PrimeOutput method on component "Source - Query" (1) returned error code 0xC02090F5. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. Error: 0xC0047021 at Data Flow Task, DTS.Pipeline: Thread "SourceThread0" has exited with error code 0xC0047038. Error: 0xC0047039 at Data Flow Task, DTS.Pipeline: Thread "WorkThread0" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown. Error: 0xC0047021 at Data Flow Task, DTS.Pipeline: Thread "WorkThread0" has exited with error code 0xC0047039. Information: 0x40043008 at Data Flow Task, DTS.Pipeline: Post Execute phase is beginning. Information: 0x402090DF at Data Flow Task, Destination - PITest CMF [169]: The final commit for the data insertion has started. Information: 0x402090E0 at Data Flow Task, Destination - PITest CMF [169]: The final commit for the data insertion has ended. Information: 0x40043009 at Data Flow Task, DTS.Pipeline: Cleanup phase is beginning. Information: 0x4004300B at Data Flow Task, DTS.Pipeline: "component "Destination - PITest CMF" (169)" wrote 0 rows. Task failed: Data Flow Task SSIS package "Package1.dtsx" finished: Success.
Does anyone know what's wrong here?
And it is quite a pain that you have to specify the table name every time you want to create a new table in SQL2005. It was so easy in SQL2000!!!
Hello, everyone. I am having problems catching a data duplication issue. I hope I can get an answer in this forum; if not, I would appreciate it if someone could direct me to the right one. I am working on a VS2005 app which is connected to a SQL2005 DB. Specifically, I am working on a registration form. Users go to the registration page, enter their data (i.e. name, address, email, etc.) and submit it to register for the site. The INSERT query works like a charm. The only problem is that I am trying to reject registrations for which the email address was already used. I put a constraint on the email field in the table, and now if I try to register using an e-mail address that already exists in the database, I get a violation error (only visible on the local machine) on the SQL email field, which is expected. How can I detect that the email address is already in the database, stop the execution of the code, and show a message telling the user to use a different address? Thank you for all your help.
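One hedged approach, with the table and column names assumed: check for the address inside the INSERT procedure and raise a friendly error that the page code can catch. (Alternatively, catch SqlException error number 2627, the unique-constraint violation, in the page itself.)
Code:
-- Reject duplicates before the INSERT runs (names are assumptions).
IF EXISTS (SELECT 1 FROM dbo.Users WHERE Email = @Email)
    RAISERROR('This email address is already registered.', 16, 1)
ELSE
    INSERT INTO dbo.Users (UserName, Address, Email)
    VALUES (@UserName, @Address, @Email)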
Hi, I have a (possibly) common position where half of our IT department is on SQL 2005 and the rest is on SQL2000. For myself, having to work in a SQL2000 environment and needing data from a SQL2005 cluster, I came up with this solution. Also, to alert me when the process starts and completes (since it is part of a scheduled process), I have it do a RAISERROR with logging to record entry and exit times. In addition, it runs a series of PRINT statements that let the operator know which table is currently being worked on.
Of course, any suggestions for speeding this up would be helpful!
What I have found that works for getting data (albeit slowly) from SQL 2005 down to SQL 2000 is to script a fairly simple copy process -
-- it is actually pretty easy to follow if you just read the code......
-- first, we find all the views which are present, so they can be ignored when copying the raw data
CREATE TABLE #VIEWS (TABLE_NAME NVARCHAR (255) )
INSERT #VIEWS
SELECT TABLE_NAME
FROM OPENDATASOURCE('SQLOLEDB',
     'Data Source=SQL2005server;User ID=trust_me;Password=i_know_what_i_am_doing').SQL_SIS_SYSTEM.INFORMATION_SCHEMA.VIEWS
-- now, we get the actual tables with data
CREATE TABLE #TEMP (TBL_SCHEMA VARCHAR(12), TBL_NAME VARCHAR(255), RECCNT INT)
INSERT #TEMP (TBL_SCHEMA, TBL_NAME)   -- column list needed; #TEMP also has RECCNT
SELECT TABLE_SCHEMA, TABLE_NAME
FROM OPENDATASOURCE('SQLOLEDB',
     'Data Source=SQL2005server;User ID=trust_me;Password=i_know_what_i_am_doing').SQL_SIS_SYSTEM.INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
  AND TABLE_NAME NOT IN (SELECT TABLE_NAME FROM #VIEWS)
-- then, we start copying tables over - now we tag the ones which are populated with at least 1 row.
-- the first cursor loop gets all table names, the second will find row counts of source tables.
-- this segment uses a (ugh) cursor to loop through and gather all of the data. This cursor is at the
-- table name level - not processing anything - and is used only to find tables which have a rowcount > 0.
-- believe it or not, the cursors run pretty darn quickly since they aren't doing any calculations -
-- just finding tables with row counts > 0
-- variable declarations (types assumed from how the variables are used below)
DECLARE @QUOT CHAR(1), @LBRAKT CHAR(1), @RBRAKT CHAR(1), @IUT VARCHAR(20),
        @ODBC_CMD VARCHAR(500), @SQL VARCHAR(2000),
        @TBL_SCHEMA VARCHAR(12), @TBL_NAME VARCHAR(255)

SET @QUOT = CHAR(39)
SET @LBRAKT = '['
SET @RBRAKT = ']'
SET @IUT = 'IsUserTable'
SET @ODBC_CMD = '(' + @QUOT + 'SQLOLEDB' + @QUOT + ',' + @QUOT
    + 'Data Source=SQL2005server;User ID=trust_me;Password=i_know_what_i_am_doing'
    + @QUOT + ').SQL_SIS_SYSTEM.dbo.'

PRINT 'BEGIN TABLE SCHEMA LOAD CURSOR. ' + CAST(GETDATE() AS VARCHAR(50))

DECLARE GETEM CURSOR FOR SELECT TBL_SCHEMA, TBL_NAME FROM #TEMP
OPEN GETEM
FETCH NEXT FROM GETEM INTO @TBL_SCHEMA, @TBL_NAME
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @SQL = 'UPDATE #TEMP SET RECCNT = '
        + '(SELECT COUNT(*) FROM OPENDATASOURCE(' + @QUOT + 'SQLOLEDB' + @QUOT + ',' + @QUOT
        + 'Data Source=SQL2005server;User ID=trust_me;Password=i_know_what_i_am_doing' + @QUOT
        + ').SQL_SIS_SYSTEM.' + @TBL_SCHEMA + '.' + @TBL_NAME + ')'
        + ' WHERE TBL_NAME = ' + @QUOT + @TBL_NAME + @QUOT
    EXEC (@SQL)
    FETCH NEXT FROM GETEM INTO @TBL_SCHEMA, @TBL_NAME
END
CLOSE GETEM
DEALLOCATE GETEM
I'm not sure if this should be in the Sql or VB forums.
Using VS2005 I have a graphics program which holds data in a Dictionary Object.
The data consists of a Key and a Structure called Arrow. This structure holds a start point and an end point, both of which consist of x and y co-ordinates. I need to insert these four co-ordinates into the X, Y, X1 and Y1 columns of a table.
The following Insert statement will insert the correct IdKey and Textbox data, but will not compile with the co-ordinates.
I have initialised the Points of the structure, but get the error message that the Type is not defined.
With dbCommand
Dim _mstartPoint As Point
Dim _mendPoint As Point
Dim strSql As String
For Each kvp As KeyValuePair(Of String, Arrow) In Arrows
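The snippet above is cut off, but for what it's worth, a parameterized INSERT sidesteps the string-building problems entirely: inside the loop, kvp.Key and the four point members (start and end X/Y) are bound to typed parameters. A hedged sketch of the statement itself, with the table name assumed from the description:
Code:
-- The VB loop supplies @IdKey from kvp.Key and the four co-ordinates from the
-- Arrow structure's start/end points as typed parameters (table name is an assumption).
INSERT INTO dbo.Arrows (IdKey, X, Y, X1, Y1)
VALUES (@IdKey, @X, @Y, @X1, @Y1)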
I'm a new SQL2005 user. I'm trying to import data from MySQL version 5.x into SQL2005 via the wizard. I created a DSN file and tested it successfully using MySQL Connector/ODBC v5. Everything seems fine until the last step, selecting the data source: the SQL2005 wizard forces me to choose the SQL command option instead of selecting tables/views from a list. Can anyone tell me why? My colleagues hit the same thing when they helped me look for the reason. I'm sure that there are a few source objects in the MySQL source.
When I run the above script, it prompts me with an error message: "Microsoft OLE DB Provider for SQL Server: The requested operation could not be performed because OLE DB provider "IBMDA400" for linked server "TESTDB2" does not support the required transaction interface."
If I run it without BeginTrans and CommitTrans, it updates the data successfully.
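For reference, wrapping the update in an explicit transaction forces a distributed transaction that the IBMDA400 provider apparently can't enlist in. One hedged workaround sketch, assuming the linked server has the RPC OUT option enabled and with hypothetical remote table/column names, is to push the statement to the remote server as a pass-through command with EXECUTE ... AT, letting the remote side handle the transaction:
Code:
-- Pass-through update on the AS/400 side (names are assumptions; requires RPC OUT).
EXEC ('UPDATE mylib.mytable SET col1 = ? WHERE keycol = ?', @newValue, @key) AT TESTDB2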