Fuzzy Grouping: Any Success With > 3 Million Records?

May 18, 2006

I have tried to process more than 3 million records through Fuzzy Grouping on two different servers with no success. 3 million works, but anything above 4 million doesn't. Some background:

We are trying to de-dup our customer table on: name (0.5 minimum similarity), address1 (0.5), city (0.5), state (exact match), with a 0.8 overall record minimum score.
Output includes additional fields: customerid, sourceid, address2, country, phonenumber
Without SP1 installed I couldn't even get a few hundred thousand records to process
Two different servers - same problems. Note that SSIS and SQL Server are running locally on both
The higher end server has 4GB RAM, the other 2.5 GB RAM. Plenty of free disk space on both
SQL Server is configured to use 2 GB of RAM max
The page file is currently at 15GB

After running a number of tests on both servers, trying different batch sizes, etc., the one thing I noticed is that it always seems to error out when SSIS takes over and starts consuming all the available RAM. This happens after the index is created and SSIS starts "warming caches". On both servers, SQL Server uses about 1.6 GB of RAM at that point, while SSIS keeps taking RAM until all physical RAM is used up.

Some questions:

Has anyone been able to process more than 3 million records, and if so, what is your hardware configuration?
Should we try running SSIS from a different server so it has access to the full amount of physical RAM? (so it doesn't have to fight for RAM with SQL Server)
Should we install Win 2003 Enterprise Server so we can add more RAM?
Any ideas why switching to the page file might be causing errors?

Thanks!!

Keith Doyle








Difference Between Fuzzy Lookup And Fuzzy Grouping In SSIS

Aug 14, 2007

Dear Friends,



I think Fuzzy Lookup compares the mapped columns by spelling: it will treat a record as a non-match if even one letter differs in a mapped column, e.g. RAVI != REVI.

What is Fuzzy Grouping? Please explain.

regards
koti





DB Engine :: Deleting 1 Million Records From A 10-Million-Row Transaction Table In A 24/7 Environment

Jun 12, 2015

I have a requirement to delete 1 million records from a table holding 10 million rows, and the table is queried on a 24/7 basis (we don't have a downtime window). How can I achieve that?
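One common pattern for this, sketched below with hypothetical table and column names: delete in small TOP (N) batches inside a loop, so each transaction stays short, locks are released quickly, and the log can clear between batches. An index on the filter column matters here; without it, every batch re-scans the table.

Code:
-- Batched delete: keeps each transaction short so 24/7 readers
-- are only briefly blocked. Names are hypothetical.
DECLARE @rows INT;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM dbo.TransactionTable
    WHERE ArchiveFlag = 1;          -- whatever identifies the 1M rows
    SET @rows = @@ROWCOUNT;
    WAITFOR DELAY '00:00:01';       -- let concurrent queries in between batches
END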


Fuzzy Grouping Error

Oct 16, 2005

I am using the September CTP and am doing a fuzzy grouping on 1.5 million records.


Fuzzy Grouping Using Original Key

Nov 14, 2007

I managed to get Fuzzy Grouping working. The relevant outputs (_key_in and _key_out) are stored in a new table that is a copy of the old table plus the fuzzy grouping columns.

How do I get SSIS to store _key_in and _key_out in the original table? The new matching column _key_out refers to the new key _key_in; how could I get SSIS to translate that into a matching column that refers to my original key?
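One approach, assuming the original key (say CustomerId) was passed through the transformation: self-join the fuzzy grouping output so each row's _key_out resolves to the canonical row's original key, then write that back to the source table. A minimal sketch with hypothetical table and column names:

Code:
-- canon is each group's canonical row (_key_in = _key_out), so
-- canon.CustomerId is the original key the whole group maps to.
UPDATE src
SET src.CanonicalCustomerId = canon.CustomerId
FROM dbo.Customers AS src
JOIN dbo.FuzzyOutput AS fo
    ON fo.CustomerId = src.CustomerId
JOIN dbo.FuzzyOutput AS canon
    ON canon._key_in = fo._key_out;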


Fuzzy Grouping In SSIS

Aug 2, 2007


Hi folks,

What is the use of Fuzzy Grouping in SSIS? Please give me an example.

regards
koti


Fuzzy Grouping Errors

May 15, 2006

Hi - we have been evaluating Fuzzy Grouping and Lookup for maintaining our large list of customer records. Initial testing with Grouping on about 300K records went great, but now, with a larger sample of 7.3 million records, we are running into problems. It doesn't appear to be a system limitation - the index is built reasonably quickly and without errors - but when the matching starts we get these errors:

[Fuzzy Grouping Inner Data Flow : DTS.Pipeline] Error: The ProcessInput method on component "Fuzzy Lookup" (86) failed with error code 0x8000FFFF. The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running.

[Fuzzy Grouping Inner Data Flow : DTS.Pipeline] Error: Thread "WorkThread0" has exited with error code 0x8000FFFF.

[Fuzzy Grouping Inner Data Flow : DTS.Pipeline] Error: Thread "WorkThread1" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown.

[Fuzzy Grouping Inner Data Flow : OLE DB Source [1]] Error: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.

[Fuzzy Grouping Inner Data Flow : DTS.Pipeline] Error: Thread "WorkThread1" has exited with error code 0xC0047039.

[Fuzzy Grouping Inner Data Flow : DTS.Pipeline] Error: The PrimeOutput method on component "OLE DB Source" (1) returned error code 0xC02020C4.  The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.

[Fuzzy Grouping Inner Data Flow : DTS.Pipeline] Error: Thread "SourceThread0" has exited with error code 0xC0047038.

One thing we did find is that our test server didn't have SP1 installed, and installing it seemed to help a lot (we were getting buffer errors prior to SP1). One other note: the destination table is populated with all the data, but no scoring has been applied to it.

Does anyone have any ideas what could be causing this?

Thanks!

Keith Doyle


Fuzzy Grouping In Parallel

May 11, 2007

Hello,

I have created a project to do de-duplication of addresses.



I understand that Fuzzy Grouping will take less time if it has less data to process. My source feed file is sometimes huge, so I am splitting the input into multiple branches based on the first letter of the city. There are 7 branches in the process.


Source File Feed
        |
Split data into 7 groups (by first letter of city)
        |
   -------------------------------------------------------
   |       |       |       |       |       |       |
FzGrpg  FzGrpg  FzGrpg  FzGrpg  FzGrpg  FzGrpg  FzGrpg
   |       |       |       |       |       |       |
 Split   Split   Split   Split   Split   Split   Split
   |       |       |       |       |       |       |
 Write the canonicals and dupes from each split into the database


When I designed this, I was hoping that each of the Fuzzy Grouping tasks would execute in parallel, but in reality they process one after the other. Is there any way to make them execute in parallel?



Appreciate your help.



Thanks

KM


Extracting The Duplicates Using Fuzzy Grouping

Jul 29, 2006

Hi,

I have an Oracle table called "Party" which contains Party_Id as the primary key and has Party_Name, Party_Addr, etc. as fields. We have many duplicate party details (party_name and party_addr) in this table. We are trying to avoid duplicates using the fuzzy logic of SSIS.

1. Can anybody suggest how to create a package to avoid duplicates using fuzzy logic for this scenario? (Step-by-step instructions are good for me to understand SSIS.)

2. Could you please provide me with some samples for the fuzzy transformations? (Please send a sample to my email.)


Fuzzy Grouping Causing Minidump

Oct 18, 2007

I was running a Fuzzy Grouping task on SQL Server Enterprise Edition SP1 without any issues. I then applied SP2, and now that same Fuzzy Grouping causes a minidump and terminates the process.

First, does anybody know anything about this kind of issue?

Second, I tried to debug the minidump file in Visual Studio, but I cannot actually run it, as I keep getting the following exception:


Debugging information for 'DtsDebugHost.exe' cannot be found or does not match. No symbols loaded.

Finally, I did obtain a random error on the server itself that displayed the GUID: 58FC39EB-9DBD-4EA7-B7B4-9404CC6ACFAB.

This GUID appears to be tied to a Dr. Watson error but, again, I cannot figure out what process is breaking.

Can somebody please help?


Need Help In Address Cleansing-- Fuzzy Grouping?

Oct 4, 2007

Hi,

We do not have any address cleansing tools, and the requirement is that we cleanse the data: find the best possible record, the one with the most complete information, and update the other records accordingly.

I am not sure whether we can do this with the Fuzzy Grouping transformation.

Example:

I have a source table with the following info:
Customer_id | Location_Address | Location_City | Location_State | Location_Zip | Location_County
TT101       | 252 HARVARD RD   | ATLANTA       | GA             | 30340        | FULTON
TT101       |                  |               |                | 30340        |
TT101       | 252 HARVARD RD   | ATLANTA       |                |              |
TT102       | 125TEST          | CUMMING       |                |              |
TT102       | 125 TEST DR      | CUMMING       | GA             | 30040        | FORSYTH
TT102       |                  |               | GA             | 30040        |

Please let me know the solution

Thanks in advance.


Dynamically Configuring Fuzzy Grouping

Jun 7, 2007

Hello,



I have been struggling with this for quite a while, so any help would be appreciated.



I need to know if there is a way to configure the Fuzzy Grouping component dynamically. I know you can programmatically design a package and customize it in C#, but for our purposes we would like to control the SSIS package via database settings: when the settings change, the package would act differently. It's a simple package consisting of an input - fuzzy grouping - conditional split - output. The connections are set up dynamically using parameters, expressions, and a script task. Is there any way I could do a similar thing for Fuzzy Grouping?


Fuzzy Grouping Performance Issue

May 30, 2007

Hello All,

We have an SSIS package which includes Fuzzy Grouping in its Data Flow. It takes two columns from a source table and saves the output in a different table with a match score, etc. Here is how we are doing it:
1. Load the required data from a table using an OLE DB connection (source)
2. Sort the data
3. Apply Fuzzy Grouping (using a dedicated database instead of tempdb, with MinSimilarity = 0.6)
4. Send to the destination table using an OLE DB connection (destination)

The input table has millions of records. It takes too long to execute, and sometimes it even fails after running for 12 hours. Any suggestions for performance improvement are welcome.

Appreciate your help.

Thanks and regards,
Ashish Basran


Fuzzy Grouping Resource Usage

Nov 21, 2007

I have a few questions about the amount of resources used by the Fuzzy Grouping transformation. I am running a little less than 5 million records through a fuzzy grouping that exact-matches one column and fuzzy-matches another. The server executing the package is a dual-core Xeon with 2 GB RAM, running a default instance of SQL 2005 Enterprise.

I have been attempting to execute this package for a while now, but it keeps erroring out for various reasons. At first, it was from a lack of available memory. I limited SQL Server's memory usage to 256 MB and set the buffer temp storage path, which alleviated those errors. However, now my tempdb transaction log is growing significantly. It failed once for not being able to grow and reallocate quickly enough, but enlarging the auto-growth increment fixed that. Then it filled up the volume the tempdb log was on, so I have now moved it to the SAN and am about to try again.

I was wondering, does anyone have a general idea of the approximate resource usage of Fuzzy Grouping? Specifically, is there an approximate relation between the number of records grouped and the amount of RAM/pagefile required? Also, on the database backend, how big can I expect the tempdb data/log files to get?


Fuzzy Lookup / Grouping Design

Apr 7, 2008

Hi,

I need some advice on fuzzy lookup / grouping design.
I have a requirement that, I think, falls between the Lookup and Grouping transformations.

In one of our applications, users can manually enter a label for some information in the database.
Every month, I store all the new data in our OLAP DB, and I want to group these labels with fuzzy logic.
Historical data (already loaded) has to be grouped, as well as the new data coming in every month.

I have no predefined canonical data, so Fuzzy Lookup seems ill-suited to my problem.
Fuzzy Grouping seems OK, but it would require feeding historical data as well as new data into the Fuzzy Grouping transformation's input to constitute the groups. This seems inefficient to me.

Any clue?

M.D


Fuzzy Grouping Similarity Calculations

Apr 30, 2008

Hi all,

My question is: how can I calculate similarity using a SQL query (for example with LIKE '%...%' and ORDER BY)? I am writing a function that works like Fuzzy Grouping, but I do not know how to get the answer, i.e. how the selected rows get matched to one another.


I hope my question is clear. How do I write the correct query? What should I do? I'm a newbie in Integration Services, so I need a step-by-step explanation, with corrections if needed.



I am looking forward to hearing from you shortly and thanks a lot in advance.

Thanks!

rgds,
xuenly


Keep Duplicate With Highest Score Fuzzy Grouping

Jan 22, 2012

I have recently decided to dedupe my data, but after running Fuzzy Grouping I am having a problem with the query that decides which duplicate to keep.

_key_in is unique; _key_out identifies the duplicate group. For example:

_key_in | _key_out | name | score | dedupe
1       | 1        | ron  | 10    | purge
2       | 1        | ronn | 15    | keep
3       | 3        | john | 5     | keep
4       | 4        | matt | 15    | keep
5       | 4        | mat  | 10    | purge
6       | 4        | matt | 15    | purge

I want to keep the row with the highest score within each _key_out group by setting the dedupe field to 'keep' and the remainder to 'purge'. The score can tie within a duplicate group; in that case I just need to keep one, and it doesn't matter which. The query I have below nearly works, but it marks all tied duplicates as keep.

Code:
UPDATE b
SET b.dedupe_result = 'keep'
FROM
[BusinessListings].[dbo].[MongoOrganisationACTM1Destination] b
INNER JOIN

[Code] ....
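Since the posted query is cut off, here is a minimal sketch of one way to get exactly one 'keep' per group: rank the rows of each _key_out group with ROW_NUMBER(), using _key_in as a tiebreaker so tied scores still produce a single winner, then update through the CTE. (Column names follow the posted query; dedupe_result is assumed to be the dedupe flag column.)

Code:
-- Exactly one row per _key_out group gets rn = 1, even on tied scores.
WITH ranked AS (
    SELECT dedupe_result,
           ROW_NUMBER() OVER (PARTITION BY _key_out
                              ORDER BY score DESC, _key_in) AS rn
    FROM [BusinessListings].[dbo].[MongoOrganisationACTM1Destination]
)
UPDATE ranked
SET dedupe_result = CASE WHEN rn = 1 THEN 'keep' ELSE 'purge' END;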


Fuzzy Grouping Seemingly Corrupting Data

Jan 10, 2007

I've seen one other post on this topic, from October 2005, and I thought I'd bring it up again. I have a Fuzzy Grouping component in my data flow, and its output appears to contain records spliced into other records. This affects pass-through columns, not merely "clean" or similarity columns. For example (I've added the suffixes for illustrative purposes):

AddressLine1_in: 162 OAKMONT
AddressLine1_out: 162 OAKMONTLAMINATION INC

CityStateZip_in: Alexander, AR 72002-8539
CityStateZip_out: Alexander, AR 72002-8539116-7066

These are just pass-through columns, although "used" columns are seeing something similar (below). Has anyone else had this experience?

City_in: Alexander
City_out: Alexandertle Rock


Fuzzy Grouping - First Name Similarities; Bill = William, Etc...

Aug 14, 2007

Hello,

I was wondering how Fuzzy Grouping deals with first-name similarities. Is there a way to configure it so that Anthony = Tony, Bill = William, etc.? I created a simple package with several rows containing similar first names and ran Fuzzy Grouping on the first-name column. I received only one possible duplicate, Will = William, at 56%. I lowered the threshold down to 1% and still got only one match.

Now, I understand and appreciate the reasons for this, but I was wondering whether this type of situation has been considered and whether a way of dealing with it is available.

Thanks,
Beac


Fuzzy Lookup And Grouping Training Dataset?

Mar 2, 2008


Hi All,

Is there a way the fuzzy lookup or grouping can be trained so that similarities and confidence values rely on previously matched strong links?

For example: I can link 80% of my two datasets using one strong identifier (say phone #) which I trust. My goal, then, is to use the matching probabilities of the remaining linking fields (say Name, Address, Gender, DOB) in the "matched by phone number" pairs to train a fuzzy lookup task to be run on the unlinked 20% of the datasets.

This "training set" would in theory influence the similarity and confidence values of the fuzzy output, since each linking column would carry a different weight or contribution towards a confident match.

Does anyone out there know how to do this in practice in SSIS?


Fuzzy Grouping Transform Corrupts Pass-through Data

Aug 2, 2005

We are working with a client and are using the Fuzzy Grouping transform for de-duping and hierarchy creation for a national account list.


Fuzzy Grouping Matching Nulls To Empty Strings/spaces

May 30, 2007

Will the Fuzzy Grouping task match a null value to an empty string (or spaces)? I've got 5 columns I'm matching on, and one of them may be null for certain rows but an empty string for others. Given that the 4 other columns may match, will this difference stop similar rows from being grouped together?



(Someone's modified my grouped data since it was deduped, which takes a while, and I'm hoping for a quick answer on this).



Thanks in advance.

Ben
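While waiting on a definitive answer about the transform's NULL handling, one defensive option is to normalize the source query so NULL, empty, and all-space values all reach the Fuzzy Grouping input in the same form. A sketch with hypothetical column names:

Code:
-- NULLIF turns '' (and, after trimming, all-space values) into NULL,
-- so the matching columns compare consistently.
SELECT CustomerId,
       NULLIF(LTRIM(RTRIM(Name)), '')     AS Name,
       NULLIF(LTRIM(RTRIM(Address1)), '') AS Address1,
       NULLIF(LTRIM(RTRIM(City)), '')     AS City
FROM dbo.Customers;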


Integration Services :: Error Running A Fuzzy Grouping Process

May 26, 2015

I have a table that I need to identify similarities in, so I'm running a Fuzzy Grouping process. I'm getting the following errors, and I can't identify the problem, since all the fields are varchar except the first, which is int but not used in the fuzzy grouping.

select
MSSEndCustomerTPID
, orgname
, address1
, cityname
, statename
, countryname
from [sales].[vw_Fact_VolumeSales] a
inner join [GMOFBI].[dbo].[vw_Dim_MSS_Organization] b
on a.EndCustomerOrganizationKey=b.MSSOrganizationKey

[code]...


SQL INSERT 1.6 Million Records

Jan 27, 2006

I am currently working on a simple page to insert 1.6 million UK postcode records into a SQL Server table. The table has three columns for the postcode, longitude coordinate and latitude coordinate. The data is sourced from a pipe (|) delimited txt file and inserted into the database using a FOR loop. The problem I have is that the page hangs after inserting only 10,000 records; the page displays either an invalid View State error or a page-cannot-be-found error.
I assume the viewstate error stems from the fact that there is a form on the page, which simply contains a button to execute the script and a few labels to show the progress. But without the form and associated viewstate, the insert still fails to complete... any ideas? Would I be better running this on a thread, or should I just do it in stages and be patient? I have now modified the page to read the database on load and pick up from where it crashed.
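Rather than looping inserts from the page, a set-based server-side load sidesteps the timeout and viewstate problems entirely. A minimal sketch using BULK INSERT against the pipe-delimited file (the path and table name are assumptions):

Code:
-- One set-based load instead of 1.6 million single-row inserts.
BULK INSERT dbo.Postcodes
FROM 'C:\data\postcodes.txt'
WITH (
    FIELDTERMINATOR = '|',
    ROWTERMINATOR   = '\n',
    TABLOCK   -- allows minimal logging under simple/bulk-logged recovery
);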


Updating 4 Million Records

Aug 30, 2006

Meg writes "Hi,

I have a table that has 4+ million records. I need to update those records, and I am facing some performance issues. Can someone please advise?

update stage
set batch_status = 1
where update_status = 0


Update [transaction]   -- bracketed: TRANSACTION is a reserved word
Set aId = s.aId,
    b = s.b
from stage s
Where s.aId = [transaction].aId
and s.batch_status = 1


Update stage
Set update_status = 1,
    batch_status = 2
where batch_status = 1

When I run the above with "set rowcount 1000", it runs in one minute. When I run it with "set rowcount 10000", it runs in 1 hour 56 minutes. Can someone help me optimize it?

Thanks.
Meg"
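A jump from one minute at 1,000 rows to almost two hours at 10,000 usually means the batches have tipped from index seeks into scans, so the first things to check are indexes on stage(update_status, batch_status) and on the aId join columns. For the batching itself, on SQL 2005 or later the first statement can be rewritten with UPDATE TOP instead of the session-wide SET ROWCOUNT; a sketch (the guard assumes unmarked rows start with batch_status NULL or 0):

Code:
-- Batch the stage-marking update so each commit stays small.
DECLARE @rc INT;
SET @rc = 1;
WHILE @rc > 0
BEGIN
    UPDATE TOP (10000) stage
    SET batch_status = 1
    WHERE update_status = 0
      AND (batch_status IS NULL OR batch_status <> 1);  -- skip rows already marked
    SET @rc = @@ROWCOUNT;
END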


56 Million Records Search

Jul 20, 2005

Hey folks... So I have a table that looks like this:

CREATE TABLE [tblStation] (
    [CAMPAIGN]  [varchar] (8),
    [LISTNUM]   [varchar] (10),
    [PHONE]     [varchar] (10),
    [EVENTTIME] [datetime],
    [STATION]   [int],
    [OPERATOR]  [varchar] (16),
    [EVENTCODE] [varchar],
    [CALLSPAN]  [decimal](18, 0),
    [FDISP]     [int],
    [RECORDNUM] [varchar],
    [STC]       [varchar],
    [PROMOC]    [varchar],
    [EXP_CAMP]  [varchar],
    [PROMO3]    [varchar],
    [MAXATT]    [char],
    [LISTNAME]  [varchar],
    [SITENAME]  [char],
    [Row_id]    [int] IDENTITY
)

It's taking nine seconds to run the following command:

SELECT count([fdisp])
FROM [TrunkFiles_new].[dbo].[tblStation] WITH (NOLOCK)
WHERE fdisp IS NULL

Anyone familiar with a table of this size having performance like this? The [fdisp] column has a non-clustered index on it. Thanks in advance...
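One thing worth noting independent of the timing: COUNT([fdisp]) counts only non-NULL values, so combined with WHERE fdisp IS NULL it can only ever return 0. Counting the filtered rows needs COUNT(*):

Code:
-- COUNT(*) counts rows; COUNT(column) skips NULLs.
SELECT COUNT(*)
FROM [TrunkFiles_new].[dbo].[tblStation] WITH (NOLOCK)
WHERE fdisp IS NULL

As for the nine seconds: counting the NULLs still has to read the whole [fdisp] index end to end, so a time roughly proportional to the 56 million rows is expected.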


How Well SQL Server Can Support 300 Million Records...

Nov 16, 2001

How well can SQL Server support 300 million records?
Is anybody working on a big database like this? Can anyone give me some input? It's going to be about 60 GB in size.


Indexing A Table With 80 Million Records

Mar 26, 2004

I have a directory database with approx. 80 million records, which I feed with BULK INSERT. Indexing one of the fields took about 8 hours. After indexing, when I run queries on the indexed field, the response time is under 1 second. However, if I run SELECT queries with LIKE on non-indexed fields, they take more than 2 minutes. So I decided to index 4 other fields in the database, and it looks like the indexing process is going to run for 2 days.
I am a novice in SQL database design and am not sure this is the best way to index the table; I am just using CREATE INDEX. Any suggestions / advice are welcome.
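Two things are worth checking before building four separate indexes. First, a LIKE with a leading wildcard ('%term%') cannot seek any B-tree index, so those queries stay slow no matter what is indexed. Second, one composite index on columns that are queried together is often cheaper to build and maintain than several single-column indexes. A sketch (column names are hypothetical; SORT_IN_TEMPDB moves the sort work off the data files during the build):

Code:
-- One composite index for the common search pattern instead of
-- four single-column indexes. Column names are hypothetical.
CREATE INDEX IX_Directory_Search
ON dbo.Directory (LastName, City, PostCode)
WITH SORT_IN_TEMPDB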


Fastest Way To Update 20+ Million Records

Mar 19, 2008

Hello,
What is the fastest way to update 20 million records in our database?
I have tried a simple update statement like this:

update trail_log with (tablockx, holdlock)
set trail_log.entry_by = users.user_identity
from users
where trail_log.entry_by = users.user_id

but it takes 10-plus hours to run, since it cannot commit the transaction until the very end. So I was thinking I need to commit in batches, like every 50K rows, but that is slow as well.
Set rowcount 50000
Declare @rc int
Set @rc = 50000
While @rc = 50000
Begin
    Begin Transaction
    update trail_log With (tablockx, holdlock)
    set trail_log.entry_by = users.user_identity
    from users
    where trail_log.entry_by = users.user_id
    and trail_log.entry_by not like '%[0-9]%'
    Select @rc = @@rowcount
    -- Commit the transaction
    Commit
End
go
I have let the above statement run for 1.5 hours and it only updated 450,000 rows. Any ideas... maybe I'm doing it wrong. Please help!
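Part of what makes the rowcount loop crawl is that every iteration re-scans the table from the beginning looking for qualifying rows. Driving the batches by clustered-key ranges keeps each pass on its own slice; a sketch assuming trail_log has an integer clustered key named id (an assumption):

Code:
-- Walk the table in key ranges; each batch touches only its slice.
-- In autocommit mode each UPDATE commits on its own.
DECLARE @from INT, @step INT, @max INT
SELECT @from = 0, @step = 100000, @max = MAX(id) FROM trail_log
WHILE @from < @max
BEGIN
    UPDATE t
    SET t.entry_by = u.user_identity
    FROM trail_log t
    JOIN users u ON t.entry_by = u.user_id
    WHERE t.id > @from AND t.id <= @from + @step
      AND t.entry_by NOT LIKE '%[0-9]%'
    SET @from = @from + @step
END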


Efficiency: 40 Million Records Script.

Oct 12, 2007

Hi all,


I have a SQL script that updates records in a table with 40 million records.

There is some functionality in the script that could be factored into functions for code reuse/elegance, but functions would add execution overhead.

What else could I use besides functions that would give me the code reuse without compromising execution time? Is there anything like includes in T-SQL that would let me do so?

TIA..
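One option, assuming the reusable logic can be written as expressions over a row: an inline table-valued function. Unlike a scalar UDF, an inline TVF is expanded into the calling query's plan, so it gives the reuse of a function without per-row call overhead. A hypothetical sketch:

Code:
-- Inline TVF: the optimizer merges the body into the calling plan.
CREATE FUNCTION dbo.fn_CleanName (@name VARCHAR(200))
RETURNS TABLE
AS
RETURN (SELECT UPPER(LTRIM(RTRIM(@name))) AS CleanName)
GO
-- Reused inside the big update/select via CROSS APPLY:
SELECT f.CleanName
FROM dbo.BigTable AS b
CROSS APPLY dbo.fn_CleanName(b.RawName) AS f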


Free Text Search For 2 Million Records

Apr 23, 2007

Hi

I have a new client with an existing system that has just over 2 million business listings in one table. Each business listing is associated with one business category.

* Company Table (around 20 fields):

companyID
companyName
categoryID
state
postCode
etc.

* Category Table (5 fields)

categoryID
categoryName
etc.

We are using MSSQL 2005 Express Edition with Advanced Services

A free-text search needs to be performed on companyName and categoryName, limited by region (state and/or postcode).

1) What kind of response times should I expect from the free-text search? (I have not used free-text search before.)

2) How should I index companyName and categoryName so they are both used in a joined query? i.e. do I just configure the full-text index on each field separately and it should work?

Any suggestions appreciated.

Best Regards

Kevan
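On question 2: full-text indexes are configured per table (each needs a unique key index to build on), and the two CONTAINS predicates are then combined in an ordinary join, with the region filter as a plain WHERE clause. A sketch with assumed catalog, key-index, and search-term names:

Code:
-- Setup, once per table (key index names are assumptions):
CREATE FULLTEXT CATALOG ftCatalog AS DEFAULT
CREATE FULLTEXT INDEX ON dbo.Company (companyName) KEY INDEX PK_Company
CREATE FULLTEXT INDEX ON dbo.Category (categoryName) KEY INDEX PK_Category
GO
-- Joined query, limited by region:
SELECT c.companyID, c.companyName, cat.categoryName
FROM dbo.Company AS c
JOIN dbo.Category AS cat ON cat.categoryID = c.categoryID
WHERE c.state = 'NSW'                        -- region filter
  AND (CONTAINS(c.companyName, '"plumb*"')   -- prefix search
       OR CONTAINS(cat.categoryName, '"plumb*"'))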


T-SQL (SS2K8) :: Compare Tables With More Than 4.9 Million Records?

Mar 18, 2014

I want to compare the values of ONLY one column between 2 tables, each having more than 4.9 million records. There is a difference of 4000 rows between the 2 tables.

SELECT ID From TABLE1 where ID not in (SELECT DISTINCT ID From TABLE2)

My query above ran for nearly 4.5 hours and I had to cancel it. Is there a better way to write it? I just want to find the ID column values that are missing from TABLE2.
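NOT IN over a multi-million-row subquery makes the engine consider the whole inner list for every outer row, and it also behaves badly if TABLE2.ID can be NULL. The usual rewrite is a NOT EXISTS anti-join, supported by an index on ID in both tables:

Code:
-- Anti-semi-join: the optimizer can seek TABLE2's ID index
-- (or hash/merge the two sets) instead of probing a DISTINCT list.
SELECT t1.ID
FROM TABLE1 AS t1
WHERE NOT EXISTS (SELECT 1 FROM TABLE2 AS t2 WHERE t2.ID = t1.ID)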


SQL 2012 :: 1.5 Million Records Into Temp Table

Sep 23, 2014

I come from a web-based world where loading 1.5 million records into a temp table is suicide. I'm doing more data warehouse work now, and while looking into optimizing a buddy's proc I noticed he was loading 1.5 million records into a temp table. We had a discussion about it because, being from the web world, I was drastically against it. He, on the other hand, didn't feel it was an issue, since it gets called once, maybe twice, a day. The tempdb is set to autogrow, and it is on a different drive than all the other databases on the box. It has one ldf and one mdf. He's creating an index on the table after the load. Why shouldn't we be loading 1.5 million recs into a temp table?







