Delete Duplicate Records From Huge Tables
Jul 20, 2005
Hi All,
I have the following table with a PK defined on an IDENTITY column (INSERT_SEQ):
CREATE TABLE MYDATA (
MID NUMERIC(19,0) NOT NULL,
MYVALUE FLOAT NOT NULL,
TIMEKEY INTEGER NOT NULL,
TIMEKEY_DTTM DATETIME NULL,
IID NUMERIC(19,0) NOT NULL,
EID NUMERIC(19,0) NOT NULL,
INSERT_SEQ NUMERIC(19,0) IDENTITY(1,1) NOT NULL
)
GO
ALTER TABLE MYDATA
ADD CONSTRAINT PK_MYDATA
PRIMARY KEY (INSERT_SEQ)
GO
The TIMEKEY_DTTM field is generated, from the value actually inserted into the TIMEKEY field, by the following trigger:
CREATE TRIGGER TIMEKEY1
ON MYDATA
FOR INSERT AS
BEGIN
DECLARE @M_TIMEKEY_DTTM DATETIME
SELECT @M_TIMEKEY_DTTM = DATEADD(SECOND, INS.TIMEKEY +
EP.GMT_OFFSET * 0 ,'1970-01-01 00:00:00.000')
FROM INSERTED INS, LOCATIONINFO EP
WHERE INS.EID = EP.EID
UPDATE MYDATA
SET TIMEKEY_DTTM = @M_TIMEKEY_DTTM
FROM INSERTED INS, MYDATA MD
WHERE MD.INSERT_SEQ = INS.INSERT_SEQ
END
GO
There is also a composite, non-unique index defined on the tuple (MID,IID,TIMEKEY,EID):
CREATE INDEX IX_METDATA ON MYDATA (MID,IID,TIMEKEY,EID)
GO
As a consequence of an application design change, I would also like to change this index to be UNIQUE, but when I try to drop and recreate it I get an error, because the table stores some duplicated rows...
In order to successfully upgrade the index definition, I wrote some DML statements to look up and remove the duplicated rows, keeping only the first record inserted, i.e. the one with the lowest INSERT_SEQ:
--
-- This table stores the number of duplicate records discovered in the
-- MYDATA table; the initial value for the NUM_DUPLICATES field is
-- 0 (no duplicated record)
--
DROP TABLE DUPLICATES
GO
CREATE TABLE DUPLICATES (
TABLENAME VARCHAR(17),
NUM_DUPLICATES NUMERIC(19,0) )
GO
INSERT INTO DUPLICATES VALUES ('MYDATA',0)
GO
INSERT INTO DUPLICATES VALUES ('CATEGORIESDATA',0)
GO
--
-- ///////// CLEAN UP OF MYDATA TABLE
--
DROP TABLE TMP_MYDATA
GO
CREATE TABLE TMP_MYDATA (
MID NUMERIC(19,0) NOT NULL,
TIMEKEY INTEGER NOT NULL,
IID NUMERIC(19,0) NOT NULL,
EID NUMERIC(19,0) NOT NULL,
INSERT_SEQ NUMERIC(19,0) )
GO
--
-- Insert into the TMP_MYDATA table one row for each duplicated
-- (MID,IID,TIMEKEY,EID) tuple, with NULL for the INSERT_SEQ field
--
INSERT INTO TMP_MYDATA (MID,IID,TIMEKEY,EID)
SELECT MID,IID,TIMEKEY,EID
FROM MYDATA
GROUP BY MID,IID,TIMEKEY,EID
HAVING COUNT(*)>1
GO
--
-- Updates the INSERT_SEQ field to the lowest value in the group
-- of duplicated records
--
UPDATE TMP_MYDATA
SET TMP_MYDATA.INSERT_SEQ = (
SELECT MIN(INSERT_SEQ)
FROM MYDATA
WHERE TMP_MYDATA.MID = MYDATA.MID AND
TMP_MYDATA.IID = MYDATA.IID AND
TMP_MYDATA.TIMEKEY = MYDATA.TIMEKEY AND
TMP_MYDATA.EID = MYDATA.EID )
GO
--
-- Updates the value of NUM_DUPLICATES for the MYDATA table.
--
UPDATE DUPLICATES
SET NUM_DUPLICATES = (SELECT COUNT(*) FROM TMP_MYDATA)
WHERE TABLENAME = 'MYDATA'
GO
--
-- Delete from the MYDATA table all the duplicated records,
-- keeping only the row with the lowest INSERT_SEQ.
-- The delete is performed only if there are duplicated records;
-- this is achieved using a "short circuit" AND on the number of records
-- stored in the NUM_DUPLICATES field of the DUPLICATES table for
-- the MYDATA table...
--
DELETE FROM MYDATA
WHERE ( SELECT NUM_DUPLICATES FROM DUPLICATES WHERE TABLENAME =
'MYDATA') > 0 AND
EXISTS ( SELECT 1
FROM TMP_MYDATA
WHERE MYDATA.MID = TMP_MYDATA.MID AND
MYDATA.IID = TMP_MYDATA.IID AND
MYDATA.TIMEKEY = TMP_MYDATA.TIMEKEY AND
MYDATA.EID = TMP_MYDATA.EID AND
MYDATA.INSERT_SEQ > TMP_MYDATA.INSERT_SEQ )
GO
This technique works fine on a normal table (1M recs) but is not very performant on huge tables (>10M records)!
Do you know a better way to achieve the task of removing all the duplicate records, preserving the lowest INSERT_SEQ among the duplicates and also preserving the identity seed, so that a new record inserted at time t1 > t0 is enumerated with an INSERT_SEQ|t1 > max(INSERT_SEQ)|t0?
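A lower-impact variant of the same clean-up, sketched below, reuses the TMP_MYDATA worktable already populated above but deletes the duplicates in small batches, so each transaction (and the log it generates) stays small and the composite index can drive the join. SET ROWCOUNT works on SQL Server 2000; on SQL Server 2005 and later, DELETE TOP (10000) can be used instead. Note also that deleting rows never resets the IDENTITY value, so any record inserted after the clean-up still receives an INSERT_SEQ higher than every previously assigned value.
-- Index the worktable so the batched DELETE can seek instead of scan
CREATE INDEX IX_TMP_MYDATA ON TMP_MYDATA (MID, IID, TIMEKEY, EID, INSERT_SEQ)
GO
-- Remove the duplicates 10,000 rows at a time until none remain
SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    DELETE MD
    FROM MYDATA MD
    JOIN TMP_MYDATA T
      ON  MD.MID = T.MID
      AND MD.IID = T.IID
      AND MD.TIMEKEY = T.TIMEKEY
      AND MD.EID = T.EID
    WHERE MD.INSERT_SEQ > T.INSERT_SEQ
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0
GO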
Thanks a lot for your help!
Patrizio
PS. sorry for such a large post!
View 1 Replies
May 30, 2008
Hi Everyone,
We have a large test database with millions of records for more than one company site code. Sometimes we want to refresh the data of that database for one or more site codes.
In order to do that, I have to delete all records for the site code we want to refresh on the test database first, then copy a new set of data over from the production database. Since we refresh data based on the site code, I have to use the DELETE command instead of TRUNCATE.
Since this is a huge database with thousands of tables and millions of records per table, I have performance issues with the DELETE command. So what would be the best way to delete a large number of records without writing any information to the database log file?
FYI: The Recovery model of this database is Simple
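There is no way to make a DELETE write nothing to the log, but under the SIMPLE recovery model the log space is reused at each checkpoint, so deleting in small batches keeps the log from growing. A sketch, in which dbo.SomeBigTable and the SiteCode column are placeholders for the real table and filter:
DECLARE @SiteCode INT
SET @SiteCode = 42   -- placeholder value

WHILE 1 = 1
BEGIN
    -- delete one small batch at a time so each transaction stays short
    DELETE TOP (50000)
    FROM dbo.SomeBigTable
    WHERE SiteCode = @SiteCode

    IF @@ROWCOUNT = 0 BREAK

    -- under SIMPLE recovery, a checkpoint lets the used log space be reused
    CHECKPOINT
END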
Regards,
Jdang
View 9 Replies
View Related
Jul 30, 2004
I use a table for storing log data from a mail server. I noticed that I'm getting duplicate records; is there a way to delete the second and/or third entry so I don't have any duplicates?
I need this done in SP.
View 1 Replies
View Related
Oct 19, 2000
Sorry for the new thread.
I have a userprofile table. There are a lot of duplicate records in this table. e.g.
USERID-----LASTNAME---EMAILADDRESS----CREATEDATE
----------------------------------------------------------------------
1----------A-----------A@yahoo.com---------2000-09-05 16:07:00.000
2----------A-----------A@yahoo.com---------2000-09-10 16:07:00.000
3----------A-----------A@yahoo.com---------2000-09-15 16:07:00.000
Userid is auto number, lastname and emailaddress are PK.
I want to delete duplicate records. If lastname and emailaddress are the same, keep only the record whose createdate is the newest. In the example above, I only want to keep the record whose userid is 3. I have already written the code attached below, but it keeps the record whose userid is 1.
Anybody can help me to solve this problem? Thanks.
============== My current code ====================
delete from userprofile where userprofile.userid in
--list all rows that have duplicates
(select p.userid
from userprofile as p
where exists
(select lastname, emailaddress
from userprofile
where lastname = p.lastname and emailaddress = p.emailaddress
group by lastname, emailaddress
having count (userid)>1))
and userprofile.userid not in
--list one row from each set of duplicates
(select min(p.userid)
from userprofile as p
where exists
(select lastname, emailaddress
from userprofile
where lastname = p.lastname and emailaddress = p.emailaddress
group by lastname, emailaddress
having count (userid)>1)
group by lastname, emailaddress)
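For reference, a sketch that keeps, for each (lastname, emailaddress) pair, the row with the newest createdate (ties broken by the higher userid) and deletes the rest; on the sample data it leaves userid 3:
DELETE p
FROM userprofile AS p
WHERE EXISTS (
    SELECT 1
    FROM userprofile AS newer
    WHERE newer.lastname = p.lastname
      AND newer.emailaddress = p.emailaddress
      AND ( newer.createdate > p.createdate
            OR (newer.createdate = p.createdate AND newer.userid > p.userid) )
)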
View 2 Replies
View Related
May 7, 2008
Hello everyone,
I have a little dilemma. I have a table ALLTABLE that has duplicate records and I want to delete them. ALLTABLE has these columns with these values for example:
Policy Premium Class State Entity Number
ADC-WC-0010005-0 25476 63 31 1
ADC-WC-0010005-0 25476 63 31 2
ADC-WC-0010005-0 25476 63 31 3
ADC-WC-0010005-0 1457 63 29 4
ADC-WC-0010092-1 2322 63 37 1
ADC-WC-0010344-0 515 63 01 1
ADC-WC-0010344-0 515 63 01 2
As you can see, there are duplicates among the first 3 rows and the final 2 (the entity number is the only difference). I want the table to look like this:
Policy Premium Class State Entity Number
ADC-WC-0010005-0 25476 63 31 1
ADC-WC-0010005-0 1457 63 29 4
ADC-WC-0010092-1 2322 63 37 1
ADC-WC-0010344-0 515 63 01 1
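One way to get from the first listing to the second, sketched here assuming the table is named ALLTABLE and that rows count as duplicates when Policy, Premium, Class and State all match (the bracketed column name is a guess at the real one):
-- Keep the lowest Entity Number in each (Policy, Premium, Class, State) group
DELETE a
FROM ALLTABLE AS a
WHERE a.[Entity Number] > (
    SELECT MIN(b.[Entity Number])
    FROM ALLTABLE AS b
    WHERE b.Policy = a.Policy
      AND b.Premium = a.Premium
      AND b.Class = a.Class
      AND b.State = a.State
)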
Thank you so much for the help. It is really appreciated.
View 11 Replies
View Related
Jun 5, 2008
Hi Guys,
I have the following table
customerid customername
------------------------
1 AAA
1 AAA
2 BBB
2 BBB
2 BBB
3 CCC
3 CCC
Here, I need to delete duplicate records from the above table.
After deleting the duplicate records the table should be
like this:
customerid customername
------------------------
1 AAA
2 BBB
3 CCC
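A simple way to get there when the duplicate rows are identical in every column is to rebuild the table from its distinct rows; the sketch below assumes the table is called customers and that no foreign keys reference it (TRUNCATE would fail otherwise):
-- Rebuild the table from its distinct rows
SELECT DISTINCT customerid, customername
INTO #dedup
FROM customers

TRUNCATE TABLE customers

INSERT INTO customers (customerid, customername)
SELECT customerid, customername FROM #dedup

DROP TABLE #dedup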
Can any one help me!!!!!!
Regards
js.reddy
View 3 Replies
View Related
Jun 24, 2008
Hi,
Please help me in constructing a query that will delete duplicate records in a table;
Please see my table structure below:
CREATE TABLE LATEST_DATA (
[ID] int NOT NULL,
[IPage] varchar(100) NULL,
[IDevice] varchar(255) NULL,
[IGroup] varchar(255) NULL,
[IField] varchar(255) NULL,
[IValue] varchar(255) NULL,
[IIcon] int NULL,
[IID] int NULL,
[ReportID] int NOT NULL,
[RVersion] varchar(255) NULL,
[RHost] varchar(255) NULL,
[RUser] varchar(255) NULL,
[RLocation] varchar(255) NULL,
[RDateTime] varchar(16) NULL,
[RComplete] bit NOT NULL
);
-- Insert Latest data
INSERT
INTO LATEST_DATA
SELECT
[Item].[ID],
[Item].[IPage],
[Item].[IDevice],
[Item].[IGroup],
[Item].[IField],
[Item].[IValue],
[Item].[IIcon],
[Item].[IID],
[Item].[ReportID] ,
[Report].[RVersion],
[Report].[RHost],
[Report].[RUser],
[Report].[RLocation],
[Report].[RDateTime],
[Report].[RComplete]
FROM
[dbo].[Item],
[dbo].[Report]
WHERE
[Report].[ID] = [ITEM].[ReportID] AND
[Report].[ID] = ( SELECT
MAX([Report].[ID])
FROM
[dbo].[Report]
);
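A sketch of one possible clean-up, assuming a duplicate means more than one LATEST_DATA row for the same item ID and that the copy from the highest ReportID should survive; adjust the condition to whatever really defines a duplicate here:
-- Keep only the newest report's row for each item ID
DELETE d
FROM LATEST_DATA AS d
WHERE d.ReportID < (
    SELECT MAX(d2.ReportID)
    FROM LATEST_DATA AS d2
    WHERE d2.[ID] = d.[ID]
)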
thanks
View 8 Replies
View Related
Apr 9, 2008
I have inherited a database with no primary keys in it. :(
There are, of course, duplicate records by the "logical key", i.e. the list of columns that should have been used in a primary key, if they only had one. I can easily identify when I have duplicate records using GROUP BY and HAVING clauses. That's the easy part. I can identify duplicate records by "key columns" and the subset of those records that are "identical duplicates on all columns". I need to delete the "extra copies".
I can't do it by hand using SQL Server Manager; it won't let me, because it recognizes that more than one row matches that row. In Oracle, I would use the pseudo-column "rowid", which exposes the internal identifier for the row. SQL Server doesn't appear to have this concept. Is there another way, other than creating an empty shadow table, copying all the distinct duplicate records into the shadow table, deleting all the duplicates in the old table, and then copying the distinct duplicates back into the old table?
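SQL Server does not expose a rowid, but from SQL Server 2005 onwards ROW_NUMBER() can play the same role: number the copies inside each duplicate group and delete everything but the first, straight through the CTE, with no shadow table. A sketch, where dbo.MyTable and the key columns are placeholders for the real table and its logical key:
;WITH numbered AS (
    SELECT ROW_NUMBER() OVER (
               PARTITION BY KeyCol1, KeyCol2   -- the logical key columns
               ORDER BY (SELECT 0)             -- survivor chosen arbitrarily
           ) AS rn
    FROM dbo.MyTable
)
DELETE FROM numbered
WHERE rn > 1;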
View 3 Replies
View Related
Nov 29, 2004
Hello All,
Please show me how to delete duplicate records from a table.
Thanks in Advance
View 1 Replies
View Related
Feb 5, 2012
I want to delete the duplicate record from a table keeping 1 record aside.
My base table is info:
id   name   class
2    abc    6a
3    abc    6a
4    abc    6a
1    abc    6a
2    abc    6a
4    abc    6a
4    abc    6a
3    abc    6a
3    abc    6a
1    abc    6a
2    abc    6a
5    abc    6a
id - int
name - text
class - varchar
(there is no primary key in this table)
Now I want the result in the following way:
id   name   class
2    abc    6a
3    abc    6a
4    abc    6a
1    abc    6a
I have tried the following query and it runs fine, but it is not dynamic.
DELETE top (SELECT COUNT(*)-1 FROM aaa WHERE id ='3') --or put some number
FROM aaa
WHERE id ='3'-- or put some number
View 14 Replies
View Related
Jul 20, 2005
I uploaded some data about 2 or 3 times and it kept appending it to the table. Now I want to keep only the first duplicate and delete the rest. Suppose part number 123 has been added 3 times; I want to keep only 1 record. Thanks
View 1 Replies
View Related
Jul 7, 2007
I was importing records via the DTS Wizard, and I was having problems, so I turned off Enforce Replication and Enforce FK Constraints on a couple of fields. I'm new to SQL Server so I'm not sure if this even caused the problem. (Do I need to turn these back on, or is this a developer switch of some kind?)
The end result left me with duplicate records in the table, and I'm not able to delete any of them. This is the Error I got...
A problem occurred attempting to delete row 1.
Error Source: Microsoft.VisualStudio.DataTools.
Error Message: The row value(s) updated or deleted either do not make the row unique or they alter multiple rows (2 rows).
If someone could tell me what I need to do so I can delete the records I'd really appreciate it.
Thanks,
Bill
View 11 Replies
View Related
Mar 4, 2008
Hello friends... Can anybody answer this question: how do you delete duplicate records from a table?
I know that with a check option and also with a unique constraint we can avoid entering duplicate records into a table, but how do you delete them from a table which does not have any constraints?
View 8 Replies
View Related
Oct 21, 2014
I'm trying to delete duplicate records from the output of the query below; if they also meet certain conditions, i.e. a different address type, then I would merge the records instead. From the following query, how do I go about achieving one and/or the other, either from the output or as an extension of the query itself?
Select
    a1z103acno AccountNumber,
    a1z103frnm FirstName,
    a1z103lanm LastName,
    a1z103ornm OrgName,
    a3z103adr1 AddressLine1,
    A3z103city City,
    A3z103st State,
    A3z103zip Zip,
    a6z103area AreaCode,
    a6z103phon PhoneNumber,
    a8z103mail Email
from proddta.fz103a1 with (nolock)
inner join proddta.fz103a2 with (nolock) ON a1z103acno = a2z103acno
INNER JOIN proddta.fz103a3 with (nolock) ON a2z103adid = a3z103adid and a2z103actv = 'Y' and a2z103prim = 'Y'
LEFT OUTER JOIN proddta.fz103a5 with (nolock) ON a1z103acno = a5z103acno and a5z103actv = 'y' and a5z103prim = 'Y'
INNER JOIN proddta.fz103a6 with (nolock) ON a5z103phid = a6z103phid
LEFT OUTER JOIN proddta.fz103a8 with (nolock) ON a1z103acno = a8z103acno and a8z103actv = 'Y' and a8z103prim = 'Y'
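One way to get a single row per account, sketched here for SQL Server 2005 or later, is to wrap the query above in a derived table and number its rows per AccountNumber with ROW_NUMBER(). The ORDER BY that picks the surviving row is only an assumption, and rows that should be merged rather than dropped (e.g. different address types) still need their own business rule.
-- Sketch: keep one row per AccountNumber from the output of the query above.
SELECT *
FROM (
    SELECT q.*,
           ROW_NUMBER() OVER (PARTITION BY q.AccountNumber
                              ORDER BY q.AddressLine1) AS rn   -- assumed tie-breaker
    FROM (
        -- the original query goes here unchanged (shortened for the sketch)
        SELECT a1z103acno AccountNumber, a1z103frnm FirstName, a1z103lanm LastName,
               a1z103ornm OrgName, a3z103adr1 AddressLine1, A3z103city City
        FROM proddta.fz103a1 WITH (NOLOCK)
        INNER JOIN proddta.fz103a2 WITH (NOLOCK) ON a1z103acno = a2z103acno
        INNER JOIN proddta.fz103a3 WITH (NOLOCK)
                ON a2z103adid = a3z103adid AND a2z103actv = 'Y' AND a2z103prim = 'Y'
    ) AS q
) AS numbered
WHERE rn = 1;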
View 2 Replies
View Related
Jul 20, 2005
Hi, I have an SQL database that has the primary key set to three fields, but it has not been set as unique (I didn't create the table). I have 1 record that has 2 duplicates and I am unable to delete the duplicate entries. If I try to delete any of the three records (they are identical) I get the message 'key column is insufficient or incorrect. Too many rows were affected by update'. I am trying to do this within Enterprise Mgr. Any suggestion? Thanks much
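On that vintage of SQL Server, the usual workaround when the rows are byte-for-byte identical is to run the delete from Query Analyzer with a capped rowcount, so only the extra copies are removed; the table and column names below are placeholders:
-- Sketch: three identical rows exist for this key, so remove exactly two of them
SET ROWCOUNT 2
DELETE FROM dbo.MyTable
WHERE KeyCol1 = 'value1'
  AND KeyCol2 = 'value2'
  AND KeyCol3 = 'value3'
SET ROWCOUNT 0   -- always reset, or later statements stay capped at 2 rows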
View 2 Replies
View Related
Jun 25, 2004
Hello,
How can I delete duplicate entries from tables in my database using Query Analyzer, as there are many duplicate entries in my tables, I want to delete them.
Thanks in advance,
Uday.
View 4 Replies
View Related
Aug 20, 2007
Hi
I want to delete the duplicate rows from two tables and get the resultant non-duplicate rows from both the tables into another table
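If "duplicate" here means a row that appears in both tables, one sketch (SQL Server 2005 or later) keeps only the rows unique to each side and writes them to a third table; TableA, TableB, dbo.NonDuplicates and the column list are placeholders, and both tables must have compatible columns:
SELECT Col1, Col2, Col3
INTO dbo.NonDuplicates
FROM (
    (SELECT Col1, Col2, Col3 FROM dbo.TableA
     EXCEPT
     SELECT Col1, Col2, Col3 FROM dbo.TableB)
    UNION ALL
    (SELECT Col1, Col2, Col3 FROM dbo.TableB
     EXCEPT
     SELECT Col1, Col2, Col3 FROM dbo.TableA)
) AS onlyInOne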
View 4 Replies
View Related
Aug 29, 2006
Hi All,
So.. I'm a complete newb to SQL stuff.
I managed to find the 'Deleting Duplicate Records' article on SQLTeam.com (thanks, by the way!!) and to modify it for one of my tables (one of 14).
-- Add a new column
Alter table dbo.tblMyDocsSize add NewPK int NULL
go
-- populate the new Primary Key
declare @intCounter int
set @intCounter = 0
update dbo.tblMyDocsSize
SET @intCounter = NewPK = @intCounter + 1
-- ID the records to delete and get one primary key value also
-- We'll delete all but this primary key
select strComputer, strATUUser, RecCount=count(*), PktoKeep = max(NewPK)
into #dupes
from dbo.tblMyDocsSize
group by strComputer, strATUUser
having count(*) > 1
order by count(*) desc, strComputer, strATUUser
-- delete dupes except one Primary key for each dup record
delete dbo.tblMyDocsSize
from dbo.tblMyDocsSize a join #dupes d
on d.strComputer = a.strComputer
and d.strATUUser = a.strATUUser
where a.NewPK not in (select PKtoKeep from #dupes)
-- remove the NewPK column
ALTER TABLE dbo.tblMyDocsSize DROP COLUMN NewPK
go
drop table #dupes
Now that I've got that figured out, I need to write the same thing to fix the other 13 tables (with different column info)- and I'll need to run this daily.
Basically I've put together some vbscript that gathers inventory data and drops it into an MSDE db (sorry - goin for 'free' stuff right now). Problem is it has to run daily so that I'm sure to capture computers that turned on at different times etc which ever-increases my database 'till I bounce off the 2GB limit of MSDE.
So the question is, what would be the best way to do this? Can I put the code into a stored procedure that I can execute each day?
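Yes, the clean-up can live in a stored procedure that is run daily; a sketch follows. It assumes (differently from the script above) that NewPK is left on the table as a permanent int IDENTITY column instead of being added and dropped on every run, and that the daily EXEC is scheduled externally (MSDE has no SQL Agent, so something like Windows Task Scheduler calling osql would be needed).
CREATE PROCEDURE dbo.usp_CleanupMyDocsSize
AS
BEGIN
    SET NOCOUNT ON

    -- one surviving row (the highest NewPK) per duplicated (strComputer, strATUUser)
    SELECT strComputer, strATUUser, MAX(NewPK) AS PKtoKeep
    INTO #dupes
    FROM dbo.tblMyDocsSize
    GROUP BY strComputer, strATUUser
    HAVING COUNT(*) > 1

    -- remove every copy except the survivor
    DELETE a
    FROM dbo.tblMyDocsSize a
    JOIN #dupes d
      ON  d.strComputer = a.strComputer
      AND d.strATUUser = a.strATUUser
    WHERE a.NewPK NOT IN (SELECT PKtoKeep FROM #dupes)

    DROP TABLE #dupes
END
GO
-- daily call, from whatever scheduler is available:
EXEC dbo.usp_CleanupMyDocsSize
The same body can be repeated, or generated, for each of the other 13 tables by swapping the table name and the grouping columns.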
Thanks for your help....
View 4 Replies
View Related
Jul 14, 2014
I have tried joining several tables and the result displays duplicate rows for virtually every line/row. I have tried using DISTINCT but this didn't work. I know it could be because several columns from some of the tables have the same name.
select purchaseorders.traderid,
suppliers.name,
stockbatches.partid,
allpartmaster.partdesc,
allpartmaster.prodgroup,
[Code]....
View 2 Replies
View Related
Jan 8, 2008
Hello,
I have the following problem:
2 tables: both have the same pk values.
one table must have rows deleted based on a filter (I mean the table is not deleted completely, only some records), and I would like to delete the same records in the second table.
for ex:
table 1: pk: 1,2,3,4,5
table 2: pk: 1,2,3,4,5
table 1: deleting 1,2, 3 thus also in table 2 pk: 1,2,3 must be deleted.
At the end of the process, Table1 and Table2 must have the same records (always, also in case of failure, errors and so on).
The goal is to avoid using triggers.
OUTPUT is not useful because it only reports what was deleted (or maybe it is useful, but how do I use it?).
How can I do this?
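OUTPUT can do more than just report: with OUTPUT ... INTO it captures the deleted keys into a table variable, which can then drive the delete on the second table inside one transaction, so the two tables cannot diverge even on failure. A sketch, where Table1, Table2, the pk column and the filter are placeholders:
BEGIN TRY
    BEGIN TRAN

    DECLARE @deleted TABLE (pk int PRIMARY KEY)

    -- delete from the first table and capture the deleted keys
    DELETE FROM dbo.Table1
    OUTPUT deleted.pk INTO @deleted (pk)
    WHERE SomeFilterColumn = 'some value'   -- placeholder filter

    -- delete the very same keys from the second table
    DELETE t2
    FROM dbo.Table2 AS t2
    JOIN @deleted AS d ON d.pk = t2.pk

    COMMIT TRAN
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRAN   -- neither table is touched on error
END CATCH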
Thank
View 1 Replies
View Related
Jan 30, 2014
I am trying to delete records in 2 related tables. The 'child' table has a field called [SETA]; I want to delete all the records in this table that contain the same info, as well as all the records in the parent table that are related to them. They share the [ID] field as the key. This is my code:
Code:
DELETE FROM Student a full outer JOIN Qualification b on a.[ID] =b.[ID] WHERE b.[SETA] = @SETA
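T-SQL does not accept a join written directly after DELETE FROM; the rows to delete have to come from a FROM clause, or the keys have to be captured first. A sketch of one way to do it, assuming [ID] is an int, that @SETA identifies the child (Qualification) rows, and that removing every Qualification row for the affected IDs before their Student rows is the intended behaviour:
DECLARE @SETA varchar(50)
SET @SETA = 'some value'   -- placeholder

DECLARE @ids TABLE (ID int PRIMARY KEY)

-- remember which parents are affected before the child rows disappear
INSERT INTO @ids (ID)
SELECT DISTINCT q.[ID]
FROM Qualification AS q
WHERE q.[SETA] = @SETA

-- delete the child rows first, then the related parent rows
DELETE q
FROM Qualification AS q
JOIN @ids AS i ON i.ID = q.[ID]

DELETE s
FROM Student AS s
JOIN @ids AS i ON i.ID = s.[ID]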
View 4 Replies
View Related
Feb 19, 2007
Hello, Everyone
I had 2 tables contain data:
1) old_data: 601,195 records
firstname
lastname
address
city
state
zip
zip4
2) current_data: 410,185 records
firstname
lastname
address
city
state
zip
zip4
3) Question: how do I write a query to delete the records? From table 'old_data', delete the records that match records in table 'current_data', so that the remaining good records are the ones kept in the 'current_data' table.
****Below is the test query I came up with; if there are any errors, please provide suggestions or a new query. This is a very important task... Thank you all.
/*******
Delete current_data
From old_data Cross Join current_data
Where old_data.FirstName = current_data.FirstName and old_data.LastName = current_data.LastName
and old_data.Address = current_data.Address
********/
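A sketch of the delete itself, using an inner join rather than a cross join and deleting from old_data (the commented query above deletes from current_data, which is the opposite of the stated goal); the three columns are assumed to be the matching key, as in the test query:
DELETE o
FROM old_data AS o
INNER JOIN current_data AS c
    ON  c.FirstName = o.FirstName
    AND c.LastName = o.LastName
    AND c.Address = o.Address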
RV
View 8 Replies
View Related
Aug 8, 2007
I'm kinda embarrassed; it's been quite a while since I've played with SQL or Access, but here's my brain fart. I need to open an Access DB by the name of eHomes. Inside there are two tables, AD_IMAGE and PROPERTY. PROPERTY is the main table and stores most of the info; it has a field in it called AD_ID, among others, but not the image file name. The AD_IMAGE table just keeps the image file name and has 3 fields: ID, AD_ID, and Image1. The problem is that when an ad is deleted from the PROPERTY table, the AD_IMAGE record referring to PROPERTY.AD_ID doesn't get deleted. So I just want the AD_IMAGE table cleaned up to match what's active in the PROPERTY table, using the common AD_ID fields in both tables.
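A sketch of the clean-up, written as plain SQL that should also run as an Access query: remove every AD_IMAGE row whose AD_ID no longer exists in PROPERTY.
DELETE FROM AD_IMAGE
WHERE NOT EXISTS (
    SELECT 1
    FROM PROPERTY
    WHERE PROPERTY.AD_ID = AD_IMAGE.AD_ID
)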
Thanks
View 1 Replies
View Related
Feb 15, 2006
How do I delete (or shrink) a very huge log file to free up some hard disk space?
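The log file cannot simply be deleted from disk while the database is using it; on that era of SQL Server (2000/2005) the usual approach is to empty the log and then shrink the file. 'MyDb' and the logical log file name are placeholders; sp_helpfile shows the real logical name. Note that WITH TRUNCATE_ONLY breaks the log backup chain, so take a full backup afterwards if point-in-time recovery matters.
USE MyDb
GO
EXEC sp_helpfile                      -- note the logical name of the log file
GO
BACKUP LOG MyDb WITH TRUNCATE_ONLY    -- empty the log without backing it up
GO
DBCC SHRINKFILE (MyDb_log, 100)       -- shrink the log file to about 100 MB
GO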
View 3 Replies
View Related
Jan 15, 2004
Hi,
My DB size (Right click on DB Name, Data Files tab, Space Allocated field) was 10914 MB.
I deleted a huge table (1.2 million records * 15 columns).
I checked the DB size again. It didn't change.
Shouldn't it decrease because I deleted a huge table??
View 14 Replies
View Related
Sep 28, 2006
Hi All,
I used a data flow task, and when trying to transfer data from an OLE DB source (~75 lac, i.e. 7.5 million, records) to a destination OLE DB source, SSIS fails in the middle with an error saying the transaction log got filled and to try again after clearing it.
My question is: what is the most efficient way to transfer, say, more than 50 lac (5 million) records while ensuring that it doesn't fail in the middle?
Thanks in Advance,
Mithun.
View 5 Replies
View Related
May 19, 2012
I have been using Master Data Services for a couple of months now. I can load, update, merge and soft delete data in MDS. Occasionally we even have to hard delete data from MDS. If we keep soft deleting records in an MDS table, eventually there will be a huge number of soft-deleted records. Is there an easy way to hard delete all the soft-deleted records from all MDS tables in a specific model?
View 18 Replies
View Related
Jan 20, 2006
What is the simplest/correct way to delete multiple records from multiple tables? (A consolidated sketch follows the statements below.) Any help appreciated. Thanks! (Yes, I'm totally new to this.)
delete dbo.tblcase
where dbo.tblcase.case_id in ('299760', '299757', '299739', '299732', '299902',
'299901', '299897', '299894', '299873', '299872', '299870', '299865', '299860',
'299858', '299854', '299846', '299838', '299834', '299821', '299813', '299803')
delete dbo.tblcaseclient
where dbo.tblcaseclient.case_id in ('299760', '299757', '299739', '299732', '299902',
'299901', '299897', '299894', '299873', '299872', '299870', '299865', '299860',
'299858', '299854', '299846', '299838', '299834', '299821', '299813', '299803')
delete dbo.tblcaseinformation
where dbo.tblcaseinformation.case_id in ('299760', '299757', '299739', '299732', '299902',
'299901', '299897', '299894', '299873', '299872', '299870', '299865', '299860',
'299858', '299854', '299846', '299838', '299834', '299821', '299813', '299803')
delete dbo.tblcaselawyer
where dbo.tblcaselawyer.case_id in ('299760', '299757', '299739', '299732', '299902',
'299901', '299897', '299894', '299873', '299872', '299870', '299865', '299860',
'299858', '299854', '299846', '299838', '299834', '299821', '299813', '299803')
delete dbo.tblcaseprosecutor
where dbo.tblcaseprosecutor.case_id in ('299760', '299757', '299739', '299732', '299902',
'299901', '299897', '299894', '299873', '299872', '299870', '299865', '299860',
'299858', '299854', '299846', '299838', '299834', '299821', '299813', '299803')
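One way to tidy this up, sketched below: load the case ids into a temp table once and drive every delete from it, so the long IN list is not repeated five times. The deletes against the child tables are kept ahead of dbo.tblcase in case foreign keys exist (an assumption), and the varchar(10) type for case_id is also assumed.
CREATE TABLE #case_ids (case_id varchar(10) PRIMARY KEY)

INSERT INTO #case_ids (case_id)
SELECT '299760' UNION ALL SELECT '299757' UNION ALL SELECT '299739'
-- ...and so on for the remaining ids...

DELETE c FROM dbo.tblcaseclient c JOIN #case_ids i ON i.case_id = c.case_id
DELETE c FROM dbo.tblcaseinformation c JOIN #case_ids i ON i.case_id = c.case_id
DELETE c FROM dbo.tblcaselawyer c JOIN #case_ids i ON i.case_id = c.case_id
DELETE c FROM dbo.tblcaseprosecutor c JOIN #case_ids i ON i.case_id = c.case_id
DELETE c FROM dbo.tblcase c JOIN #case_ids i ON i.case_id = c.case_id

DROP TABLE #case_ids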
View 1 Replies
View Related
Dec 21, 2005
In one of our forthcoming projects, with ASP.NET/C#/MS SQL Server, we have to deal with a business table having about 15 million records. We want to know which methodologies we should adopt, from both a front-end and a back-end perspective, so that the site gives optimised performance. Also, in place of a dedicated server, the hosting company provides MSDE (the edition that comes with .NET). Will this create any problem for a project that has such a huge table? Should we go for some advanced database technique, such as clustering, splitting tables, etc.?
The following are the fields that the business table contains:
ID, Category ID (which comes from a Category table; each business is under a category), BusinessName, SignupDate, Address1, Address2, Phone Number, Hours Of Operation, Years in Business, LicenseNumber, DiscountCoupon, Website
View 3 Replies
View Related
Aug 15, 2012
I need to compare if two developers did the job correctly and created identical tables.
The problem is more complex, but I will try to solve it somehow if I solve the problem of comparing two tables (let them be in different SQL Server 2008 databases) and their properties. No data needs to be compared.
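A sketch of one way to compare the column definitions without comparing data, using INFORMATION_SCHEMA; DatabaseA, DatabaseB, the schema and the table name are placeholders, and the query should also be run with the two halves swapped to catch columns missing on either side:
(SELECT COLUMN_NAME, ORDINAL_POSITION, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH,
        NUMERIC_PRECISION, NUMERIC_SCALE, IS_NULLABLE
 FROM DatabaseA.INFORMATION_SCHEMA.COLUMNS
 WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'MyTable')
EXCEPT
(SELECT COLUMN_NAME, ORDINAL_POSITION, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH,
        NUMERIC_PRECISION, NUMERIC_SCALE, IS_NULLABLE
 FROM DatabaseB.INFORMATION_SCHEMA.COLUMNS
 WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'MyTable')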
View 6 Replies
View Related
Oct 22, 2015
I am copying data from a database to an Excel file through SSIS. The database is MS SQL 2005 and BIDS is also 2005. However, the job doing this task fails every time. As per our investigation, the result of the query is more than 100,000 rows, and we know that Excel has a limit of about 65,000 rows of data. Is there a way of raising the limit, or something, or maybe a better approach, so that everything will be copied to the Excel file successfully?
The PrimeOutput method on component "Source - Query" (1) returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure. End Error Error: 2015-10-22 04:34:58.05 Code: 0xC0047021
Source: Data Flow Task
Description: SSIS Error Code DTS_E_THREADFAILED.
Thread "SourceThread0" has exited with error code 0xC0047038. There may be error messages posted before this with more information on why the thread has exited. End Error DTExec: The package execution returned DTSER_FAILURE (1). Started: 4:30:00 AM Finished: 4:35:05 AM Elapsed: 304.844 seconds. The package execution failed. The step failed. "
View 1 Replies
View Related
Mar 13, 2007
Actually, in my transformation I am transferring a huge amount of data.
I have been using the OLE DB Command at the end to dump my incoming data into the respective tables.
For Example :
if you have two tables
table 1,table 2
In my incoming data I have a lookup that checks two unique columns against the unique columns of table 1. If the record does not exist, I try inserting a record into table 2, get the unique field of that record, and store it in a particular column of table 1.
The data is very large; is this the better way, or if you have any suggestions do let me know...
View 5 Replies
View Related