Copying Table Data From SQL Server 2005 To SQL Server 2000 - Very Slow When Using OLEDB Source And Destination?
May 8, 2006
An SSIS package that transfers data from a DB instance on SQL Server 2005 to SQL Server 2000 is extremely slow. The package uses an OLEDB Source feeding an OLEDB Destination to transfer what is basically one table from SQL Server 2005 to SQL Server 2000. The job takes 5 minutes to transfer about 400 rows at night, when there is very little activity on the server; during the day the job almost always times out.
Between SQL Server 2000 instances, the old 2000 package ran in minutes.
Is there an alternative to this? The Transfer Objects task does not work, as there is apparently a defect according to Microsoft. Please let me know if there is any option other than using an Execute 2000 Package task or an ActiveX script that reads records from one source and inserts them into the destination, since I am not certain how long that might take or how viable it would be.
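A minimal sketch of one alternative, assuming a linked server (here named SQL2000SRV, a hypothetical name) has already been created on the 2005 instance pointing at the 2000 server; a plain INSERT ... SELECT then bypasses the SSIS data flow entirely:

-- Hypothetical linked server SQL2000SRV registered on the SQL Server 2005 instance,
-- pointing at the SQL Server 2000 destination; table and column names are placeholders.
INSERT INTO SQL2000SRV.TargetDb.dbo.TargetTable (Col1, Col2)
SELECT Col1, Col2
FROM SourceDb.dbo.SourceTable;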
The DTS package works perfectly when I run it manually. However, when I run it as a job it fails. Before you ask whether I'm running it under a different security context: I have already made sure of that. I was logged into the server through remote viewer when I created and ran the package, as well as when scheduling the job, so the accounts they're running under are consistent. They're the same accounts the SQL Agent is running under, and it's the sysadmin account. The data source is a FoxPro database with a pull of two tables. I am using the Microsoft OLE DB FoxPro driver as my source connection and an OLE DB connection for SQL Server as my destination. I am doing a simple table-to-table transformation. The path of my connection is a mapped drive: E:\Main. There are other packages and jobs within my job queue that point to the same database, and they seem to run fine using the above mapped drive. The ONLY difference between this package and the other packages is that they're a few months old and this one was created last night. I have also enabled logging on this package, and below is the error when the job fails:
Package Steps execution information:
Step 'DTSStep_DTSDataPumpTask_1' failed
Step Error Source: Microsoft OLE DB Provider for Visual FoxPro
Step Error Description: Invalid path or file name.
Step Error code: 80040E21
Step Error Help File:
Step Error Help Context ID: 0
Step Execution Started: 9/23/2006 11:39:17 AM
Step Execution Completed: 9/23/2006 11:39:17 AM
Total Step Execution Time: 0.031 seconds
Progress count in Step: 0
I apologize for not doing the legwork to see if I can answer my own question, but I am close to a loosely planned SQL 2005 migration and don't have the time and resources to test my own theories.
Is there a way for a Reporting Services 2000 server to connect to SQL Server 2005 databases? I've tried creating a new data source and changing a report to this data source, but it seems like the report is still using the old data source. I'm guessing I might need to register new data providers on the 2000 RS server and then change the existing data sources.
I have a simple data flow task, composed of only an OLEDB Source, a Conditional Split, and two Execute SQL statements (both insert statements, one after the other). When I run my package in Visual Studio for debugging, I noticed that after executing around 9,800 records through the first insert statement and another 9,800 through the second, the OLEDB Source takes around 3 or 4 minutes to fetch the next set of ~9,800 records. I have set the DefaultBufferMaxRows property of the data flow to 10000. My query to retrieve those 700,000 records runs for about 2-3 minutes (which I think should be decent enough). Is this expected behavior for SSIS? The expected number of records to be retrieved is 700,000, and it takes forever to finish the transfer. Please help.
Inside a data flow task I have an OLEDB source and destination. In my situation I need to pull data from a table in the source, but also hard-code some columns myself, which means my source is a blend of table data and hard-coded data that will then have to be mapped to columns in the OLEDB destination. Does anyone know which option to choose in the OLEDB source dropdown for the data access mode? Keep in mind, I do need to run a select query as well as get data from a table. Is it possible to use multiple OLEDB sources and connect them to one destination? That is really what I intend to do here, but I am not sure how it would work, or even if it's possible. Basically my source access mode needs to be a blend of SQL command and table columns; how would that be implemented? Any help or advice is appreciated.
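One way to blend table data with hard-coded values (a sketch only; the table and column names are made up) is to pick the "SQL command" access mode and put the literals straight into the SELECT, so a single OLE DB Source feeds the destination:

-- Hypothetical source table dbo.Customers; the literal and GETDATE() columns
-- are supplied alongside the real table columns and map like any other column.
SELECT CustomerId,
       CustomerName,
       'WEB' AS SourceSystem,   -- hard-coded value
       GETDATE() AS LoadDate    -- derived value
FROM dbo.Customers;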
I get an error when I point an OLE DB Source at a SQL 2000 database and execute a SQL query inside it. The OLE DB Source feeds an OLE DB Destination, which is a SQL 2005 database.
This is the error I get:
Error at Data Flow Task [OLE DB Destination [245]]: The column "firstname" cannot be processed because more than one code page (936 and 1252) are specified for it.
Error at Data Flow Task [DTS.Pipeline]: "component "OLE DB destination" (245)" failed validation and returned validation status "VS_ISBROKEN".
Error at Data Flow Task [DTS.Pipeline]: One or more component failed validation.
Error at Data Flow Task: There were errors during task validation.
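One commonly suggested workaround (a sketch, not a confirmed fix for this exact package) is to force a single code page in the source query itself, e.g. by converting the column to a Latin1 (code page 1252) collation so the destination no longer sees mixed code pages:

-- Hypothetical source table and column length; the COLLATE clause pins the
-- output column to a single code page (1252) before it reaches the destination.
SELECT CONVERT(varchar(50), firstname) COLLATE SQL_Latin1_General_CP1_CI_AS AS firstname
FROM dbo.SomeTable;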
We are in the process of upgrading a SQL Server 2000 database to 2005. In the meantime, we're trying it out on SQL Server 2005 Express. My question is: can reports in SSRS 2005 pull data from SQL Server 2000 if both are installed side by side on the same machine, with the 2000 version as the default instance and the 2005 Express edition as a named instance?
Right now I have a SQL Server 2000 instance installed on a server (serverone), which is accessed from all nodes through an ODBC connection.
We have now upgraded to Windows Server 2005 on another machine, servertwo, and I have installed SQL Server 2005 on it. I wish to copy the database from serverone to servertwo, and once it is copied and everything is working fine, I want to uninstall SQL Server 2000 from serverone.
What is the best way to do this? I have tried different things, such as running Code Advisor on serverone, but it does not recognize the SQL Server 2005 instance on servertwo; it gives a message that since SQL Server is installed with default settings it can't be accessed remotely. I see that the SQL Server 2000 database is accessible from all nodes, including servertwo. Why is it that servertwo's SQL Server is not accessible anywhere?
Copy a full directory from source to destination (done).
Then, for each file in the destination directory, the package must process that file and insert rows into the table.
So I created a foreach loop, created a variable called CURRENTFILENAME, and assigned it in the foreach loop to index 0.
Inside the foreach loop I created a data flow task; in the data flow task I dragged in a flat file source and an OLEDB destination. But I noticed that the flat file source requires a Flat File Connection Manager, and the Flat File Connection Manager requires a unique FILE NAME - I can't put in something like c:\copia\*.txt.
I took a look at the flat file source properties, and it has the Flat File Connection Manager associated, but I cannot assign the file name to the variable from the foreach loop.
This is a problem that never gets solved; sometimes I can work around it another way, but I haven't found a solution yet. I hope I can get some more ideas here.
I am using SQL 2005. When I run
select * into #import1 from OpenRowSet('microsoft.jet.oledb.4.0','Excel 8.0;hdr=yes;database=\ws8webjeff2.xls', 'select * from [jeff2$]')
I get
Cannot initialize the data source object of OLE DB provider "microsoft.jet.oledb.4.0" for linked server "(null)".
When I try to compile a stored procedure with that statement in it, I get the same error:
create procedure test
as begin
select * into #import1 from OpenRowSet('microsoft.jet.oledb.4.0','Excel 8.0;hdr=yes;database=\ws8webjeff2.xls', 'select * from [jeff2$]')
end
So it seems the error may not relate to the actual file, since at the compile stage it should not check the actual file?
On my live DB, after I restart the SQL service the statement works; after a while - one or several days - I get the same error again. I cannot restart my live DB very often, for sure, so now I have another backup DB server: I run the statement on the backup server and then read the data from there.
I have the same problem in two places, both using SQL 2005.
So far there are three questions:
1. Why does it work after a restart, but only for a while? Is it something about memory? The backup DB seldom needs a restart and works fine after many days.
2. Why does it give the error at the compile stage?
3. Why do two DBs in different environments have the same problem?
The most common answer I have gathered so far is that it is a permission issue; true, I get a similar error if the import file is located somewhere SQL Server has no right to access, but in this case it should not be that.
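One thing worth ruling out (shown as a sketch; it would not by itself explain the works-after-restart behaviour) is the 'Ad Hoc Distributed Queries' option, which is off by default in SQL 2005 and blocks OPENROWSET when disabled:

-- Check/enable ad hoc OPENROWSET/OPENDATASOURCE queries (off by default in SQL 2005).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;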
I was wondering if someone can point out the error, or the thing I shouldn't be doing, in a stored procedure on SQL Server 2005. I want to switch from SQL Server 2000 to SQL Server 2005, which all seems to work just fine, but one stored procedure is causing me headaches.
I could pin the problem down to this query:
DECLARE @Package_ID bigint
DECLARE @Email varchar(80)
DECLARE @Customer_ID bigint
DECLARE @Payment_Type tinyint
DECLARE @Payment_Status tinyint
DECLARE @Booking_Type tinyint
SELECT @Package_ID = NULL
SELECT @Email = NULL
SELECT @Customer_ID = NULL
SELECT @Payment_Type = NULL
SELECT @Payment_Status = NULL
SELECT @Booking_Type = NULL
CREATE TABLE #TempTable(
PACKAGE_ID bigint,
PRIMARY KEY (PACKAGE_ID))
INSERT INTO
#TempTable
SELECT
PACKAGE.PACKAGE_ID
FROM
PACKAGE (nolock) LEFT JOIN BOOKING ON PACKAGE.PACKAGE_ID = BOOKING.PACKAGE_ID
LEFT JOIN CUSTOMER (nolock) ON PACKAGE.CUSTOMER_ID = CUSTOMER.CUSTOMER_ID
LEFT JOIN ADDRESS_LINK (nolock) ON ADDRESS_LINK.SOURCE_TYPE = 1 AND ADDRESS_LINK.SOURCE_ID = CUSTOMER.CUSTOMER_ID
LEFT JOIN ADDRESS (nolock) ON ADDRESS_LINK.ADDRESS_ID = ADDRESS.ADDRESS_ID
AND PACKAGE.CUSTOMER_ID = ISNULL(@Customer_ID,PACKAGE.CUSTOMER_ID)
AND PACKAGE.PAYMENT_TYPE = ISNULL(@Payment_Type,PACKAGE.PAYMENT_TYPE)
AND PACKAGE.PAYMENT_STATUS = ISNULL(@Payment_Status,PACKAGE.PAYMENT_STATUS)
AND BOOKING.BOOKING_TYPE = ISNULL(@Booking_Type,BOOKING.BOOKING_TYPE)
-- If this line below is included the request takes about 90 seconds, whereas it takes 1 second if it is commented out
--AND ADDRESS.EMAIl LIKE '%' + ISNULL(@Email, ADDRESS.EMAIL) + '%'
GROUP BY
PACKAGE.PACKAGE_ID
DROP TABLE #TempTable
The query performs quite well on SQL Server 2000, but on SQL Server 2005 it takes much longer. I already installed SP2 x64; I'm running SQL Server 2005 in an x64 environment. As I stated in the comment in the query, it takes 90 seconds to finish with the line included, but if I exclude the line it takes 1 second. I think there must be something wrong with the joins, or something else that has maybe changed in SQL Server 2005. All the tables joined have a primary key. Maybe you folks can spot the error / mistake / wrong way of doing things easily. I would appreciate any help you can offer me to solve this problem.
On the web I saw that there is a Cumulative Update 4 for the SP2 which fixes the following:
942659 (http://support.microsoft.com/kb/942659/) FIX: The query performance is slower when you run the query in SQL Server 2005 than when you run the query in SQL Server 2000
Anyhow, I think the problem is something else; I haven't tried the cumulative update yet, as I suspect it is something different - something more general about why this query takes ages to process.
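A minimal sketch of the rewrite usually suggested for this pattern (reduced to the email filter alone, using the post's table names): test the parameter for NULL explicitly instead of wrapping the column in ISNULL, and let the optimizer recompile for the actual values:

DECLARE @Email varchar(80);
SET @Email = NULL;

SELECT PACKAGE.PACKAGE_ID
FROM PACKAGE WITH (NOLOCK)
LEFT JOIN CUSTOMER WITH (NOLOCK) ON PACKAGE.CUSTOMER_ID = CUSTOMER.CUSTOMER_ID
LEFT JOIN ADDRESS_LINK WITH (NOLOCK) ON ADDRESS_LINK.SOURCE_TYPE = 1
    AND ADDRESS_LINK.SOURCE_ID = CUSTOMER.CUSTOMER_ID
LEFT JOIN ADDRESS WITH (NOLOCK) ON ADDRESS_LINK.ADDRESS_ID = ADDRESS.ADDRESS_ID
WHERE (@Email IS NULL OR ADDRESS.EMAIL LIKE '%' + @Email + '%') -- short-circuits when NULL
GROUP BY PACKAGE.PACKAGE_ID
OPTION (RECOMPILE); -- plan against the actual parameter values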
I want to process data from a source table to a destination table without using a cursor.
At this point, we create a temp table and insert data from the source into it. Once we have the data in the temp table, we use a while loop to process records one by one into the destination table.
While executing the stored procedure, we noticed that there are a few invalid records in the source table, and the stored procedure terminates on such records.
Is there a better approach to log the INVALID data and resume processing the next record instead of terminating?
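A minimal sketch with hypothetical table names (dbo.SourceTable, dbo.DestTable, dbo.InvalidRowLog): keep the temp-table WHILE loop, but wrap the per-row work in TRY/CATCH (available from SQL 2005) so an invalid row is logged and skipped instead of terminating the procedure:

CREATE TABLE #work (RowId int IDENTITY(1,1) PRIMARY KEY, SrcId int);
INSERT INTO #work (SrcId) SELECT SrcId FROM dbo.SourceTable;

DECLARE @i int, @max int, @SrcId int;
SET @i = 1;
SELECT @max = MAX(RowId) FROM #work;

WHILE @i <= @max
BEGIN
    SELECT @SrcId = SrcId FROM #work WHERE RowId = @i;
    BEGIN TRY
        INSERT INTO dbo.DestTable (SrcId)
        SELECT SrcId FROM dbo.SourceTable WHERE SrcId = @SrcId;
    END TRY
    BEGIN CATCH
        -- Log the bad row and carry on with the next one.
        INSERT INTO dbo.InvalidRowLog (SrcId, ErrMsg)
        VALUES (@SrcId, ERROR_MESSAGE());
    END CATCH;
    SET @i = @i + 1;
END

DROP TABLE #work;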
I am stuck on finding a solution to transpose source data from a system, via a metadata look-up table, into a destination table. I need a method to transpose/pivot the source data into columns (which are of various data types). The data types for each column are listed in a metadata table.
Source Data Table:
Table Name: Source
SrcID  AGE  City    Date
01     32   London  01-01-2013
02     35   Lagos   02-01-2013
03     36   NY      03-01-2013
Metadata Table:
Table Name: Metadata
MetaID  Column_Name  Column_type
11      AGE          col_integer
22      City         col_character
33      Date         col_date
Destination table:
The source data is to be loaded into the destination table (as shown below):
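A minimal sketch using the table and column names from the post (the destination layout itself is an assumption, since it did not survive in the post): UNPIVOT turns each source column into a row, and a join to Metadata attaches the declared type:

SELECT u.SrcID, m.MetaID, u.Column_Name, u.Value, m.Column_type
FROM (
    SELECT SrcID,
           CAST(AGE    AS varchar(50)) AS AGE,   -- UNPIVOT needs one common type
           CAST(City   AS varchar(50)) AS City,
           CAST([Date] AS varchar(50)) AS [Date]
    FROM dbo.Source
) AS s
UNPIVOT (Value FOR Column_Name IN (AGE, City, [Date])) AS u
INNER JOIN dbo.Metadata AS m ON m.Column_Name = u.Column_Name;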
I am trying to transfer some tables' data from one database server to another. I created a package in SSIS, and I use a variable to pass each table name. In the data flow I use an OLEDB Source, but I cannot set the data access mode to "Table name or view name variable". Every time, I get the following error info: "===================================
Error at Data Flow Task [OLE DB Source [31]]: A destination table name has not been provided.
(Microsoft Visual Studio)
===================================
Exception from HRESULT: 0xC0202042 (Microsoft.SqlServer.DTSPipelineWrap)
------------------------------ Program Location:
at Microsoft.SqlServer.Dts.Pipeline.Wrapper.CManagedComponentWrapperClass.ReinitializeMetaData() at Microsoft.DataTransformationServices.DataFlowUI.DataFlowComponentUI.ReinitializeMetadata() at Microsoft.DataTransformationServices.DataFlowUI.DataFlowAdapterUI.connectionPage_SaveConnectionAttributes(Object sender, ConnectionAttributesEventArgs args)".
Can someone tell me what the reason is, or give me some examples?
Can anyone shed light on why I cannot get away from named pipes in my SQL 2005 Reporting Services data sources? Can it not just use TCP/IP? My configuration uses two servers: one on which the report server is set up, and another which hosts the database that is reported against. We are trying to avoid having to open up named pipes on the server that stores the data.
Any help would be appreciated.
An error has occurred during report processing.
Cannot create a connection to data source 'wc_datasource'.
An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
I have 5 or more tables to join to get a particular output, which has to be sent to a destination table. Among the 5 tables, some joins are inner joins and some are left outer joins. I am opting for a stored procedure at this point, but I would like to know how this could be done with data flow transformations using multiple sources and Merge Joins, or any other alternative. I tried using Merge Join, but it does not accept more than two tables.
I saw this simple post which kick-started me into moving stored procedures to SSIS transformations, but I ran into an issue: http://www.mssqltips.com/tip.asp?tip=1322 - error "The destination component does not have any available inputs for use in creating a path".
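A minimal sketch of the alternative (table names are placeholders): push the whole join into the OLE DB Source's "SQL command" instead of chaining Merge Join transforms, which accept only two inputs each and also require sorted inputs:

SELECT a.KeyCol, b.ColB, c.ColC, d.ColD, e.ColE
FROM dbo.TableA AS a
INNER JOIN dbo.TableB AS b ON b.KeyCol = a.KeyCol
INNER JOIN dbo.TableC AS c ON c.KeyCol = a.KeyCol
LEFT OUTER JOIN dbo.TableD AS d ON d.KeyCol = a.KeyCol
LEFT OUTER JOIN dbo.TableE AS e ON e.KeyCol = a.KeyCol;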
I have a problem with bad performance importing data from a SQL Server 2000 database. I use an OLEDB data source in my SSIS package to connect to the SQL Server 2000 database. My 2005 server runs 64-bit, but I don't think this is the issue. With this configuration the import is VERY slow; we are talking about 40+ minutes to get 3.5 million rows with about 20 columns. When I create a test DTS package on the SQL Server 2000 server itself and run it, it's blazingly fast. Has anyone run into something similar?
I have a table in MS Access that I would like to copy/import into SQL Server 2005. I would like to automate this process so that it copies the table over on a daily basis. Can someone tell me how this can be done?
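A minimal sketch (the path, table, and column names are placeholders, and the 'Ad Hoc Distributed Queries' option must be enabled): pull the Access table with the Jet OLE DB provider via OPENROWSET, then schedule the statement as a daily SQL Agent job step:

-- Hypothetical Access file and destination table.
INSERT INTO dbo.MyServerTable (Col1, Col2)
SELECT Col1, Col2
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'C:\Data\MyDatabase.mdb'; 'Admin'; '',
                MyAccessTable);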
Problem: ColA (source) rounding error to PARTY_NO (destination). I have a text field in a flat file that the Flat File Connection Manager source picks up correctly: "70000893". However, by the time it gets to the OLE DB Connection destination the data has changed to 70000896 - that's before it's even written to the database. The only clue that something is wrong in the middle is that the Data Viewer shows the number as 7.000009E+07. Looking at the data, it appears there is a rounding error, but only on the numbers that don't end in 00:
ColA (Source)  PARTY_NO (Destination)
71167300       71167296
70329000       70329000
70410000       70410000
Any ideas, people? Thanks in advance. Dave
When exporting data from Excel to a SQL Server table using an SSIS package, how would I check, after the export is done, that the source rows are equal to the destination rows, and if not, throw an error message?
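A minimal sketch of one way to do the check (hypothetical table names; inside an SSIS package the source-side count is more often captured with a Row Count transform into a package variable):

DECLARE @SourceRows int, @DestRows int;
SELECT @SourceRows = COUNT(*) FROM dbo.StagingSource;
SELECT @DestRows   = COUNT(*) FROM dbo.DestTable;
IF @SourceRows <> @DestRows
    RAISERROR('Row count mismatch: source=%d, destination=%d.', 16, 1,
              @SourceRows, @DestRows);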
How can we handle transactions in SSIS? For example, when some error happens during the export and the rows are not exported fully to the destination, how do we roll back the transaction in SSIS?
From a source table (on SERVER1) I get the ids of candidates, and from another source (on SERVER2) I get their CVs (text files stored in various folders). My destination table (on SERVER3) has two fields, CandidateId and CandidateCV.
I have to transfer the data in the above fashion for nearly 1 million records. How can I write a DTS package which picks up the text file from SERVER2 based on the CandidateId that comes from SERVER1? I probably need some kind of looping mechanism which changes the candidate id and his CV file.
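A minimal sketch, assuming the destination is SQL Server 2005 (OPENROWSET(BULK ...) does not exist in 2000) and assuming a hypothetical file layout of \\SERVER2\CVs\<CandidateId>.txt; a loop over the candidate ids bulk-reads each CV as a single text value:

-- Hypothetical objects: dbo.Candidates (the id list from SERVER1) and
-- dbo.CandidateCVs (the SERVER3 destination). OPENROWSET(BULK ...) requires a
-- literal path, hence the dynamic SQL.
DECLARE @Id int, @Sql nvarchar(1000);
DECLARE ids CURSOR LOCAL FAST_FORWARD FOR
    SELECT CandidateId FROM dbo.Candidates;
OPEN ids;
FETCH NEXT FROM ids INTO @Id;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Sql = N'INSERT INTO dbo.CandidateCVs (CandidateId, CandidateCV)
                 SELECT ' + CAST(@Id AS nvarchar(10)) + N', BulkColumn
                 FROM OPENROWSET(BULK ''\\SERVER2\CVs\'
               + CAST(@Id AS nvarchar(10)) + N'.txt'', SINGLE_CLOB) AS cv;';
    EXEC (@Sql);
    FETCH NEXT FROM ids INTO @Id;
END
CLOSE ids;
DEALLOCATE ids;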
When copying data to a remote SQL2K5 destination from a SQL2K5 source database, both using mixed SQL Server security mode, my job generates the following error:
[Transfer SQL Server Objects Task] Error: Execution failed with the following error: "Cannot apply value null to property Login: Value cannot be null..".
This occurs after the destination database tables have been truncated, just as the replacement data from the source would begin to copy.
The same process can be completed successfully from Management Studio with a simple data export. However, when I run the saved package again from the BI interface, I get this error.
My search engine queries have yielded numerous hits of others having the same problem, with one Microsoft rep indicating it was a bug that would be resolved in SP1. I am working with SP1. Oddly, there is only mention of this in the forums; no KB article from MS addresses the problem, and I do not see it addressed elsewhere at SQLServerCentral.
It appears that others have switched to Integrated Security and resolved the problem. However, I do not have that option with a remotely hosted database.
Does anyone have any information concerning this problem?
I want to schedule a job which pulls files from a non-SQL server (Sybase), and which later needs a step 2 kicking off the SSIS package. The problem is that on the source a batch file runs every 4 hours and outputs a total of 10 text files (it takes 5 minutes to complete). Now, on the destination I want to pull these files via a SQL job, but while scheduling:
1. I don't see any option for something like "every 4 hours 10 minutes". 2. Even if it were out there, I believe this might be a problem, as the offset would keep incrementing - e.g. the next run would need to be at 4 hours 20 minutes.
How should I go about pulling these files, given that we have an SSIS package on the destination that needs those text files as soon as they arrive on the SQL Server (destination)?
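One workaround, sketched here (xp_fileexist is undocumented, and the file path and job name are made up): schedule the job on a short interval, say every 10 minutes, and have the first step check whether the batch output has arrived before starting the SSIS step:

DECLARE @exists int;
-- Undocumented but widely used: returns 1 in the OUTPUT parameter if the file exists.
EXEC master.dbo.xp_fileexist '\\destserver\drop\batch_complete.txt', @exists OUTPUT;
IF @exists = 1
    EXEC msdb.dbo.sp_start_job @job_name = 'Load incoming text files';
ELSE
    PRINT 'Files have not arrived yet; check again on the next run.';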
Here is the brief of my problem. We had our database on SQL Server 2000 and Windows 2000. That machine had 2 GB of RAM and dual Pentium 3 processors, and about 25-30 users were connected all the time. The size of the database is around 2 GB. Even on this setup the rate of data retrieval was good; we never had any issues.
We moved to SQL Server 2005 and Windows 2003. This machine has 2 Pentium Xeon 3.4 processors and 2 sticks of KINGSTON 1024 MB 333 MHZ DDR DIMM ECC CL2.5 DUAL RANK X4 INTEL. The rate of data retrieval is awful and very slow. It is using about 1.7 to 1.9 GB of RAM all the time. Page file usage is about 2.07 GB and virtual usage is about 1.7 GB.
I don't quite understand why it is so slow to get data. We use bespoke software, so nothing has changed there. The hardware specification of our server is far better than the recommended system requirements for SQL Server 2005. Am I missing something, or have I not set up the SQL Server properly? Any help would really be appreciated. Mits
I am replacing a server, so I need to migrate everything. The old server is running SQL2000 and the new server is running SQL2005. I am trying to write an SSIS solution to migrate everything for me, and I can't even get started because I get the error "The source server can not be the same as the destination server". At the same time I am changing the name of the domain, so the two machines are not even members of the same domain. I am doing this over the Internet, so the machines are not even on the same subnet. The only thing I can think of is that the machine names are the same: even though the domains are different, and therefore the full names are different, the NetBIOS names are the same. Could that be the problem?
I am pretty new to SSIS. I am trying to create a package which can accept data in any of several formats (i.e. CSV, Excel, or a SQL Server database/table) and import the data into my destination database.
So far I've managed to get this working OK. However, I am now TOTALLY stuck. I'm currently trying to concentrate on just the cases where the data sources are a CSV file (using a Flat File Data Source) and/or an Excel spreadsheet.
I can get the data in and to my destination using a UNION ALL component and mapping the data sources to it, so long as both the CSV file and the Excel spreadsheet exist.
My problem is that I need my package to handle the possibility that only the CSV file exists and there is no Excel spreadsheet, in which case I'd like the package to ignore the Excel data source completely. Currently, if either of my data sources does not exist I get errors and the package terminates.
Is there any way in SSIS that I can check all my data sources to see which ones exist (i.e. are valid)? If they exist I want to use them; if one doesn't exist I'd like to discard it without error - as long as there is a single data source the package should run.
I've tried using the AcquireConnection method in a script task on each of my connections, hoping that it would error if the file/data source did not exist. It doesn't though (in the case of an Excel data source it just creates an empty Excel file for me).
The only other option I can come up with is to have separate packages depending on the type of data we want to import, and then run a particular package depending on the format of the source data. This seems a bit long-winded. I am pretty sure I must be able to do what I want to achieve, but I can't work out how.
I'll be grateful to anyone who can send me any tips/hints/links on how I can achieve this.