SQL 2012 :: Partitioning A Large Dimension?
May 9, 2014
The SQL CAT team's recommendation is to avoid partitioning dimension tables: URL..... I have inherited a dimension table that has almost 3 billion rows and is 1 TB in size, and I have been asked to look at partitioning it, putting maintenance in place, etc. I'm not a DW expert, so I was wondering: what are the reasons not to partition dimensions?
View 7 Replies
May 15, 2014
I have a very large table that I need to partition. Ideally the table will write to three filegroups. I have defined the Partition function and scheme as follows.
CREATE PARTITION FUNCTION vm_Visits_PM(datetime)
AS RANGE RIGHT FOR VALUES ('2012-07-01', '2013-06-01')
CREATE PARTITION SCHEME vm_Visits_PS
AS PARTITION vm_Visits_PM TO (vm_Visits_Data_Archive2, vm_Visits_Data_Archive, vm_Visits_Data)
This should create three partitions of the vm_Visits table. I am having a few issues; the first has to do with adding a new clustered primary key to the existing table. The main issue is that the Closed column is nullable (it is a datetime, by the way), so running the following makes SQL Server upset:
ALTER TABLE dbo.vm_Visits
ADD CONSTRAINT [PK_vm_Visits] PRIMARY KEY CLUSTERED
(
VisitID ASC,
Closed
)
ON [vm_Visits_PS](Closed)
I need to define a primary key on the VisitID column, but I need to include the Closed column in order to partition on it. Also, how would I move data between partitions on a monthly basis? Would I simply update the partition function, or have to do some sort of merge, split, or switch operation?
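On the primary key error: a PRIMARY KEY cannot include a nullable column, so Closed would either need to be made NOT NULL (for example with a default) or the partitioned clustered index would have to be a non-unique index rather than the primary key. For the monthly movement, the usual pattern is to SWITCH out the oldest partition, MERGE its boundary away, then SPLIT in a new boundary. A minimal sketch, assuming a staging table dbo.vm_Visits_Switch with an identical structure on the oldest partition's filegroup, and using illustrative dates:
-- 1. Switch the oldest populated partition out to a staging table (metadata-only operation).
ALTER TABLE dbo.vm_Visits SWITCH PARTITION 1 TO dbo.vm_Visits_Switch;
-- 2. Merge away the oldest boundary so the now-empty partition disappears.
ALTER PARTITION FUNCTION vm_Visits_PM() MERGE RANGE ('2012-07-01');
-- 3. Point the scheme at the filegroup for the next partition, then split in a new boundary.
ALTER PARTITION SCHEME vm_Visits_PS NEXT USED vm_Visits_Data;
ALTER PARTITION FUNCTION vm_Visits_PM() SPLIT RANGE ('2014-07-01');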
View 2 Replies
View Related
Nov 14, 2007
Hi folks! I'm looking for advice on partitioning a large table. In the DDL below I've changed names to protect the guilty.
My table has this schema:
CREATE TABLE [dbo].[BigTable]
(
[TimeKey] [int] NOT NULL,
[SegmentID] [int] NOT NULL,
[MyVal] [tinyint] NOT NULL
) ON [BigTablePS1] (TimeKey) -- see below for partition scheme
alter table [dbo].[BigTable] add constraint [PK_BigTable]
primary key (timekey asc, SegmentID asc)
-- will evaluate whether this one is needed, my thinking is yes
-- based on the expected select queries.
create index NCI_SegmentID on BigTable(SegmentID asc)
The TimeKey column is sort of like a Unix time. It's the number of minutes since 2001/01/01, but always floored to a 5-minute boundary, so only multiples of 5 are allowed.
Now, this table will be rather big. There are about 20k possible SegmentIDs. For every TimeKey from 2008/01/01 to 2009/01/01 (12 months), I'll have on the order of 20000 rows, one for each SegmentID.
For the 12 month period, there are 365*24*60/5=105120 possible TimeKey values. So the total rowcount is over 2 billion. (20k * 105120)
Select queries are expected to be something like this:
-- fetch just one particular row...
select MyVal from BigTable
where TimeKey=5555 and SegmentID=234234
--fetch for a certain set of SegmentID and a particular time...
select
b.SegmentID
,b.MyVal
from BigTable b
join OtherTable t on t.SegmentID=b.SegmentID
where b.TimeKey=5555
and t.SomeColumn='SomeValue'
Besides selects, I also need to be able to efficiently issue update statements against the table with new values in the MyVal column, based on a range of TimeKey values (a contiguous span of a few days) and sets of about 1000 SegmentIDs. Updates would always look like this:
update t
set t.MyVal=p.MyVal
from BigTable t
join #myTempTable p on t.TimeKey=p.TimeKey
and t.SegmentId=p.SegmentId
Here #myTempTable would have on the order of 1000*24*60 rows in it, all with contiguous TimeKey values and about 1000 different SegmentID values. #myTempTable also has a clustered PK on (TimeKey asc, SegmentId asc).
After the table is loaded, it would never get any inserts or deletes, only selects and updates.
Given the size, and the nature of the select and update queries, this table seems like a good candidate for partitioning. I'm thinking it makes sense to partition on TimeKey.
So my question is, is it stupid to create a separate partition for each day in the year long span of TimeKeys this table covers? That would mean 365 partitions in the partition function and partition scheme. Something like this:
CREATE PARTITION FUNCTION [BigTableRangePF1] (int)
AS RANGE LEFT FOR VALUES
(
3680640 + 0*1440, -- 3680640 is the number of minutes between 2001/01/01 and 2008/01/01
3680640 + 1*1440,
3680640 + 2*1440,
3680640 + 3*1440,
...snip...
3680640 + 363*1440,
3680640 + 364*1440,
3680640 + 365*1440
);
GO
CREATE PARTITION SCHEME [BigTablePS1]
AS PARTITION [BigTableRangePF1]
TO
(
[PRIMARY],[PRIMARY],[PRIMARY],
...snip...
[PRIMARY],[PRIMARY],[PRIMARY]
);
GO
Does anyone have any experience with partitioned tables with so many partitions? Is a few hundred partitions too many? From my understanding of partitions, it seems like having so many will be OK. Is it somehow worse than having hundreds of tables in a database?
Even with one partition for each day, I'll still have 24*60*20000/5 ~ 5m rows in each one.
5m seems like a manageable number. 2b does not.
elsasoft.org
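A few hundred partitions is comfortably under SQL Server's per-table partition limit, and rather than hand-typing the 366 boundary values the function can be generated; a minimal sketch using the same boundary arithmetic as above:
-- Build the CREATE PARTITION FUNCTION statement with one boundary per day (0..365).
DECLARE @values nvarchar(max);
DECLARE @day int;
SET @values = N'';
SET @day = 0;
WHILE @day <= 365
BEGIN
    SET @values = @values
                + CASE WHEN @day = 0 THEN N'' ELSE N',' END
                + CAST(3680640 + @day * 1440 AS nvarchar(20));
    SET @day = @day + 1;
END
DECLARE @sql nvarchar(max);
SET @sql = N'CREATE PARTITION FUNCTION BigTableRangePF1 (int) AS RANGE LEFT FOR VALUES ('
         + @values + N');';
EXEC sys.sp_executesql @sql;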
View 2 Replies
View Related
Feb 9, 2015
We have an existing BI/DW process that adds large chunks of data daily (~10M rows) to an existing table, as well as using Deletes to remove stale data. This scenario seems to beg for partitioning to support switching in/out data.
After lots of reading on this, I have figured out the mechanics of the switching, but I still have some unknowns about the indexes needed to support this.
The table currently has several non-clustered indexes, including one on the partitioning column - let's call that column snapshotdate. Fortunately there are no FKs involved, and no constraints.
Most of the partitioning material I see focuses on creating a clustered PK to assist with switching. Not sure if this is actually necessary, but assume I create one using an Identity column (currently missing) plus snapshotdate.
For the other non-clustered, non-unique indexes, can I just add the snapshotdate to the end of the index? i.e. will that satisfy the switching requirement?
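The actual switching requirement is alignment: every index on the table has to be partitioned on the same function and column as the table itself, which is done by creating (or rebuilding) each index on the partition scheme. For a non-unique non-clustered index you do not even have to add snapshotdate to the key; SQL Server adds it as an included column automatically if it is missing. A hedged sketch with assumed names:
-- Rebuild an existing non-clustered index onto the partition scheme so it is aligned
-- and no longer blocks ALTER TABLE ... SWITCH.
CREATE NONCLUSTERED INDEX IX_Fact_CustomerKey
    ON dbo.FactTable (CustomerKey)
    WITH (DROP_EXISTING = ON)
    ON ps_SnapshotDate (snapshotdate);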
View 1 Replies
View Related
Jun 17, 2014
I have a question about partitions in both SQL Server table and Tabular Model. I started to use Tabular Model recently.
I need to partition a table that collects daily rows for different clients.
The natural partition key is a combination of clientID+dateID (something like CL-YYYYMMDD)
I created a configuration table with a primary key PartitionID IDENTITY(1,1) , that contains also the field clientID and dateID
Every day I add a new row in it and I get a partitionID for the new client and date
Then I created a partitioned fact table using PartitionID as the partition field, using the partition function and the partition schema as well.
The daily client data is inserted in the partitioned table using the partitionID
Everything works fine, and the data are loaded correctly into the partitioned fact table.
Then I created a Tabular Model where the fact table is the partitioned table, and I created tabular model partitions using something like "select <field list> from PartitionedTable where partitionID = <partitionID>"
In this way, every day I load partitioned data in both sql server and tabular model. I have two dimensions, client and calendar
Now my question is: when I browse the Tabular Model, and I'm selecting a specific dimension date and dimension client, am I using the partitionID index correctly?
Or should I put something like "select <field list> from PartitionedTable where clientID = <clientID> and dateID = <dateID>" in the tabular model partition query? In that case, would the partitionID-based partitioning still be used? How can I check it?
View 0 Replies
View Related
Dec 3, 2014
I need dynamic partitioning of a SQL table.
1. What is the best practice for partitioning (on a date column)?
2. The project I am currently working on has a case where the status flag is only updated after a few days (say 15-30). If the data has already gone into the partitioned table, how do I update it, and how do I find which partition holds my data (see the sketch after this list)?
3. Does creating partitions occupy more disk space?
4. Does every partition need its own index?
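On question 2, the $PARTITION function maps a value of the partitioning column to its partition number, so you can locate the partition a late-arriving status update will touch. A minimal sketch in which the function, table, and column names are all assumptions:
-- Which partition number holds this date?
SELECT $PARTITION.pf_MyDateRange('2014-11-15') AS partition_number;
-- Row counts per partition for the table (heap or clustered index only).
SELECT p.partition_number, p.rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.MyPartitionedTable')
  AND p.index_id IN (0, 1);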
View 7 Replies
View Related
Feb 25, 2014
Is there a script to partition a 500 GB table using the sliding window technique?
View 9 Replies
View Related
Mar 6, 2014
If the partitioning MERGE command attempts to drop historic data at the wrong boundary point, then data movement between filegroups may be necessary before or during the next index rebuild. The script below creates two test tables, one using a range right function and the other using range left. The partitioning key is a number between 0 and 59; an empty partition is maintained at the start and end of the ranges, and 4 partitions contain data in the ranges 0-14, 15-29, 30-44 and 45-59. Data in the lowest range (0-14) is switched out and a merge command is run. Edit the script to try the different merge boundaries, and edit the variables at the start to suit the runtime environment's 'Data Drive' and 'Log Drive' paths. Variables are redeclared but commented out at the start of code blocks to allow stepping through if desired.
--=================================================================================
-- PartitionLabSetup_20140330.sql - TAKES ABOUT 1 MINUTE TO EXECUTE
-- Creates a test database (workspace)
-- Adds file groups and files
-- Creates partition functions and schemes
-- Creates and populates 2 partitioned tables (PartitionedRight & PartitionedLeft)
[Code] ....
The T-SQL code below illustrates one of the problems caused by MERGE at the wrong boundary point. Filegroup 3 of the range right table is empty according to the data space views, yet it cannot be dropped. Filegroup 2 contains data according to the views, but you are allowed to drop its file.
USE workspace;
DROP TABLE dbo.PartitionedRightOut;
USE master;
ALTER DATABASE workspace
REMOVE FILE PartitionedRight_f3 ;
--Msg 5042, Level 16, State 1, Line 2
--The file 'PartitionedRight_f3 ' cannot be removed because it is not empty.
ALTER DATABASE workspace
REMOVE FILE PartitionedRight_f2 ;
-- Surprisingly, this works although FG2 contains data according to the system views.
If the wrong boundary point is used, then the system 'data space' views show where the data should be (FG2), not where it actually still is (FG3). You cannot tell whether data movement between filegroups is pending, and the filegroup files are not protected from deletion by the OS.
I'm not sure this is worth raising a Connect item for, but it would be useful to know where data physically resides after a MERGE RANGE and before an INDEX REBUILD; the data space views reflect the logical rather than the physical location if a data movement is pending.
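The allocation metadata, rather than the data space views, shows where the pages physically sit; a hedged sketch against the lab table above:
-- Filegroup that actually holds each partition's in-row pages, which can differ from
-- what the data space views report after a MERGE RANGE at the wrong boundary.
SELECT p.partition_number,
       fg.name       AS physical_filegroup,
       au.total_pages,
       p.rows
FROM sys.partitions       AS p
JOIN sys.allocation_units AS au ON au.container_id = p.hobt_id
                               AND au.type IN (1, 3)      -- IN_ROW_DATA, ROW_OVERFLOW_DATA
JOIN sys.filegroups       AS fg ON fg.data_space_id = au.data_space_id
WHERE p.object_id = OBJECT_ID('dbo.PartitionedRight')
ORDER BY p.partition_number;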
View 0 Replies
View Related
Aug 25, 2014
We have a database with 6-7 growing tables. All the tables have primary and foreign key relationships. I want to partition them based on a date column.
I need 3 partitions:
The first partition has to hold the present data
The second partition needs to hold the previous year's data (SAS storage)
The third partition needs to hold all the old data and needs to be in the archive database
I understand that first we need to disable the constraints (indexes, PK & FK)
Then create the partition function and partition scheme
Then recreate the constraints
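One caveat: a partition cannot live in a different database, so the archive-database requirement would have to be met by switching the oldest partition out and moving that data separately. For the in-database part, a minimal sketch with assumed boundary dates and filegroup names:
-- RANGE RIGHT on the date column: archive | previous year | current year onwards
CREATE PARTITION FUNCTION pf_ByYear (datetime)
AS RANGE RIGHT FOR VALUES ('2013-01-01', '2014-01-01');
CREATE PARTITION SCHEME ps_ByYear
AS PARTITION pf_ByYear TO (fg_Archive, fg_PreviousYear, fg_Current);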
View 9 Replies
View Related
Oct 26, 2015
When I add a dimension to the cube without any relation to any measure group in my dimension usage, my units go down. However, when I remove the dimension from the cube, I get the correct values.
View 4 Replies
View Related
May 18, 2015
I have a table that needs to be incorporated into the data warehouse. The table has the following schema.
CREATE TABLE [dbo].[Consignment](
[Id] [int] IDENTITY(1,1),
[BooingID] INT,
[BookingDate] [datetime] NULL,
[CarrierServiceName] [nvarchar](255) NULL,
[CarrierServiceCode] [nvarchar](255) NULL,
[code]...
This table has the same granularity as the fact table, as it's one row per booking. However, due to the nature of the data I would not want to incorporate this into the fact table. The originating and destination addresses are populated for each booking and are required for reporting.
Question: Should this be moved into a fast-changing dimension table, or would there be a better way to incorporate this data?
View 1 Replies
View Related
Jul 14, 2014
I know the types of dimensions, but what are some real-world examples of each dimension table type?
1) Conformed Dimension
2) Junk Dimension
3) Degenerate Dimension
4) Role Playing Dimension
View 3 Replies
View Related
Dec 19, 2014
I have a fact table with a few flag columns.
What is the best way to bring them into a dimension?
Do I need to create a (dummy) dimension from the fact table for each flag, or should all flags go into a single dimension?
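The usual approach is a single junk dimension that holds one row per combination of flag values, with the fact table carrying just that one surrogate key. A minimal sketch assuming two bit flags with hypothetical names:
-- Build every combination of the flags once and give each combination a surrogate key.
SELECT ROW_NUMBER() OVER (ORDER BY f1.FlagValue, f2.FlagValue) AS JunkKey,
       f1.FlagValue AS IsExpressShipment,
       f2.FlagValue AS IsReturned
INTO   dbo.DimOrderJunk
FROM (VALUES (0), (1)) AS f1 (FlagValue)
CROSS JOIN (VALUES (0), (1)) AS f2 (FlagValue);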
View 0 Replies
View Related
Sep 10, 2015
How do I move dimension attributes from Currency to Geography and vice versa (i.e. change their positions) in SQL Server 2012? I need Currency to be placed above Geography, or Geography below Currency.
View 6 Replies
View Related
Aug 3, 2015
I have built a fact table and few dimension views in Datamart with the aim of creating a Cube.
On the Fact table I have added a CASE Statement with the following threshold for Premium due amounts:
CASE WHEN....
'Due_0-1_Month'
'Due_1-2_Month'
'Due_2-3_Month'
'Due_Over_3_Months'
'Overdue_0-1_Month'
'Overdue_1-3_Month'
'Overdue_3-6_Month'
'Overdue_Over_6_Months'
...END
I then created a Dimension to link this to:
CREATE VIEW...
Select 'Due_0-1_Month' as Ageing_Threshold
union all
Select 'Due_1-2_Month'
union all
Select 'Due_2-3_Month'
[Code] ....
I was successful in processing the cube; however, the problem is that every time I drag the dimension onto the columns field in a pivot table, the thresholds start to break up the other amounts I have on display, like acquisition costs and tax amounts. I am only interested in showing the breakdown of the Premium amount measure by the Threshold dimension.
How can I somehow 'hide' or 'prevent' the Threshold dimension from breaking down the other measures in the pivot, and only break down the amounts for Premium?
How should I structure my tables in SQL, or what MDX queries would resolve this?
View 0 Replies
View Related
Feb 11, 2015
My log file size is 5 GB. I just want to reduce it to some extent without using the shrink method. Is there any way to do that?
View 1 Replies
View Related
Mar 8, 2014
We have very big tables (TBs in size) and want to set up a strategy for index maintenance.
View 3 Replies
View Related
Apr 20, 2015
I'm preparing a checklist for myself before getting ready to migrate from 2005 to 2012. Our largest database is a nice one at over 250 GB. I'm thinking my best bet to minimize downtime would be to restore the DB (WITH NORECOVERY) on the new server and keep rolling it forward with the transaction logs. Eventually I'll need to bring the old DB offline, take one last log backup, and apply that one to the new server, but that should be a small window given that the whole process could take several hours.
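That is essentially a manual log-shipping cutover; a hedged sketch of the restore sequence with assumed database name and paths:
-- Initial restore of the full backup, left non-operational so logs can be applied.
RESTORE DATABASE BigProd
    FROM DISK = N'\\backupshare\BigProd_full.bak'
    WITH NORECOVERY, STATS = 5;
-- Apply each subsequent log backup the same way until cutover.
RESTORE LOG BigProd
    FROM DISK = N'\\backupshare\BigProd_log_001.trn'
    WITH NORECOVERY;
-- Cutover: take the source out of use, back up the tail of its log, then apply it and recover.
RESTORE LOG BigProd
    FROM DISK = N'\\backupshare\BigProd_tail.trn'
    WITH RECOVERY;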
View 5 Replies
View Related
Jun 9, 2015
I have a table with raw scientific test results in a single field, some of which are over 25 MB. I need to parse the field to find and aggregate selected values from it.
Table structure is
CREATE TABLE [dbo].[Gxxx_Data](
[id] [uniqueidentifier] NOT NULL,
[Status] [nvarchar](50) NULL,
[GxxxItem_ID] [int] NULL,
[Stats_Data] [varbinary](max) NULL,
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
[code]...
From this I need to parse and summarize the (assembler) opcodes (MOV, CMPi, SHR, etc.). I need to parse the large [Stats_Data] field to locate the target data. The internal result strings are delimited with CHAR(10); conservative counts are from 64k to over 100k lines in each record. Is there a way to parse the individual lines into another (temp) table that could then be queried/regexed?
View 9 Replies
View Related
Dec 30, 2013
We have a large OLAP database, about 2.5 TB spread out over 3 data files on three different drives, and recently someone ran a query that created a table that continued to grow until the data files filled the available disk space (about 3 TB total - 1 TB per drive).
Tonight I plan on running a full backup (it's in Simple mode) and running a ShrinkFile on all three files sequentially with TRUNCATEONLY just so it will remove the space after the last extent. Any way to tell ahead of time how much space this will recover?
Granted running a DB Shrink is one of those things you just don't do, but this is a one-time shot and unavoidable to get the file size back under control.
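The free space at the end of each file is an upper bound on what SHRINKFILE with TRUNCATEONLY can give back, and it can be estimated ahead of time; a minimal sketch, run in the OLAP database:
-- size and FILEPROPERTY(..., 'SpaceUsed') are reported in 8 KB pages, so /128 gives MB.
SELECT name,
       size / 128                                     AS file_size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128          AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb  -- upper bound for TRUNCATEONLY
FROM sys.database_files
WHERE type_desc = 'ROWS';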
View 5 Replies
View Related
Jan 9, 2014
I am attempting to do a rather simple purge task on a very large table. This task will need to take place daily and delete records older than 6 months out of the database. On first pass this will delete well over 130 million rows. I thought the best way to handle this is create a proc and call the proc from a SQL Agent Job that runs nightly. Here is an example of the script:
CREATE PROCEDURE usp_Purge_WCFLogger
AS
SET NOCOUNT ON
EXEC sp_rename 'dbo.logs', 'logs_work' -- note: no GO here, since a GO would terminate the CREATE PROCEDURE batch early
SELECT * INTO dbo.Logs_Backup FROM dbo.Logs_Work WHERE TIMESTAMP < DATEADD(month, -6, GETDATE())
[Code] .....
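An alternative worth comparing against the rename approach is a plain batched delete, which avoids the rename/recreate juggling at the cost of more logged work; a minimal sketch, assuming an index on the TIMESTAMP column:
-- Delete in small batches so each transaction stays short and the log can be
-- truncated or backed up between batches.
DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000) FROM dbo.logs
    WHERE [TIMESTAMP] < DATEADD(month, -6, GETDATE());
    SET @rows = @@ROWCOUNT;
END;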
View 3 Replies
View Related
May 7, 2014
I need to create a clustered index (CI) on a very large SQL Server 2012 table. The table has approximately 10 billion rows and is 500 GB in size. The job ran for about 20 hours and then failed with the error: "Out of disk space in tempdb". My tempdb size is 1.8 TB, but that's still not enough.
Here is my script:
CREATE CLUSTERED INDEX CI_IndexName
ON TableName(Column1,Column2)
WITH (MAXDOP= 4, ONLINE=ON, SORT_IN_TEMPDB = ON, DATA_COMPRESSION=PAGE)
ON sh_WeekDT(Day_DT)
GO
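With SORT_IN_TEMPDB = ON, the intermediate sort for the index build lands in tempdb; one hedged variation is to let the sort happen in the destination filegroup instead (and consider dropping ONLINE if tempdb version-store pressure is also a factor):
-- Same index, but the sort runs in the target filegroup rather than tempdb.
CREATE CLUSTERED INDEX CI_IndexName
ON TableName (Column1, Column2)
WITH (MAXDOP = 4, ONLINE = ON, SORT_IN_TEMPDB = OFF, DATA_COMPRESSION = PAGE)
ON sh_WeekDT (Day_DT);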
View 9 Replies
View Related
Jul 28, 2014
I need to create script that will import large XML files (500 - 7GB) on a daily basis and store the data in a relational db structure.
What is the best and fastest way of importing such files. I have played around with smaller files and found the following.
1. SSIS XML Data Source: It doesn't seem to like the complex elements types and throws out the file.
2. Using bulk file import, storing the file in an XML variable and using XQuery to parse it: this works, but it can't take a file more than 2 GB in size, so I can't use this method.
3. C# + XML Serialization: This also works, but seems to be terribly slow. I open the DB connection once, so it doesn't open and close for each db call, but still seems like it takes a long time.
How can I import large XML files quickly into a relational table structure?
View 9 Replies
View Related
Jun 9, 2015
I am fetching a large amount of data from Teradata into SQL Server using a linked server, and I am getting the error below:
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)
View 0 Replies
View Related
Jun 16, 2015
I have a table with about 466 million rows. In this table there is an int column called WeeksToRetain, as well as an EventDate column containing the date the row was inserted. I am trying to delete all the rows that should be deleted according to WeeksToRetain. For example, if the EventDate is 5/07/15 with a 1 in the WeeksToRetain column, the row should be removed by 5/14/15. I am not sure what days SQL considers the beginning and end of the week. However, the core issue I am having is the sheer mass of deletions I must do and the resulting log growth.
So I am trying to do the delete in batches. More specifically, I want to load a temporary table with a million rows, then use it to load a sub temporary table with 100,000 rows and join that to the table I want to delete from, looping through 10 times to cover the million. The Logging.EvenLog table, which is the table I'm trying to purge, has a clustered index on EventDate (ASC). I would like to run this as a scheduled job with enough time between executions for log backups to run.
DECLARE @i int
DECLARE @RowCount int
DECLARE @NextBatchDate datetime
CREATE TABLE #BatchProcess
(
EventDate datetime,
ApplicationID int,
[Code] .....
View 9 Replies
View Related
Aug 29, 2015
Using merge replication + web synchronization, I have a situation where there is a large amount of data changes to upload to the publisher. The merge agent creates a large request and sends it over; the publisher gets it and is able to work on it. After a few minutes it has finished, but (I assume) the connection has been dropped. At the subscriber's side, it appears that the merge agent is hung. The output looks something like this:
Upload request size is XXX bytes.
Uploaded a total of 100 chunks.
Uploaded a total of 200 chunks.
Uploaded a total of 211 chunks.
The request message was sent to [URL] ....
Normally, when the publisher finishes working, the merge agent continues processing. But when it takes more than a few minutes (it seems to break at about 2 or 3 minutes), the merge agent will hang for as long as the InternetTimeout setting allows (currently 20 minutes) before finally failing and retrying.
But that's not right. The publisher was done and can't communicate back to the merge agent (presumably because the connection was dropped). As a result, the merge agent will try to re-enumerate changes, on top of giving the appearance that it's hung.
I've already fiddled with settings such as MaxUploadChanges, UploadGenerationsPerBatch, UploadReadChangesPerBatch, and UploadWriteChangesPerBatch. However, none of those settings actually prevents the request message from being too large. They have worked in breaking up the changes into separate batches (e.g. processing a single table rather than all tables), which results in more frequent updates and thus avoids the problem.
However, when a single table has several changes, they are still lumped into one large request, which then takes more than 2-3 minutes to process on the publisher's side, and thus I still end up with the same symptom of the merge agent hanging.
Is there anything else I could try to get merge agent to keep its connection alive even during processing a large request?
View 0 Replies
View Related
Sep 8, 2015
I have the following scenario:
SQL database on SQL 2012
A large production table with 15 million records
The table has 3 years of data
New monthly data is being added every month.
A new monthly data set is loaded, checked, and finally approved after 6 or 7 iterations. Because of these iterations, the monthly data set is added, then deleted, then added again a few times. Because the table is big, this process takes time. Any thoughts on how to make the delete/insert process faster? Keep in mind I cannot do much because it is a production table and is being accessed by other users for other analysis.
Delete is done based on trx_date which is a year/month combo, like 201508.
The table has monthly sales by customer aggregated.
The table structure is:
CREATE TABLE [dbo].[Sales](
[batch_key] [int] NOT NULL,
[Company_key] [int] NOT NULL,
[customer_key] [char](22) NOT NULL,
[Trx_Date] [int] NOT NULL,
[account] [nvarchar](35) NOT NULL,
[code].....
View 9 Replies
View Related
May 10, 2014
In a Library Management database we have these tables
1) Document ( DocNo , Doc_type , permalink,inDate)
2)Title(id, DocNo,Main_Title, Other_Title)
3)Author(id , Author_Name , Author_Family,Type--Like:main author , translator ,....)
4)Publisher(id,DocNo , Name,Publisedate,address)
5)Subject(id,DocNo,Subject)
6)Description(id,DocNo,ISBN,description)--one document may have some ISBN,etc
In document table I have 500,000 records.
I want to search for a word in these tables. For example, I want to search for 'Computer'; this word may be in the subject, title, description, etc. How can I do this with the best performance?
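A LIKE '%Computer%' search across several columns will scan everything, so full-text indexing is the usual answer for this kind of keyword search; a hedged sketch in which the catalog name, key index names, and indexed columns are all assumptions:
-- One full-text index per table, each keyed on that table's unique primary key index.
CREATE FULLTEXT CATALOG LibraryFTCatalog;
CREATE FULLTEXT INDEX ON dbo.Title (Main_Title, Other_Title) KEY INDEX PK_Title ON LibraryFTCatalog;
CREATE FULLTEXT INDEX ON dbo.Subject (Subject) KEY INDEX PK_Subject ON LibraryFTCatalog;
GO
-- Search the word across the indexed tables and collect the matching documents.
SELECT DocNo FROM dbo.Title   WHERE CONTAINS((Main_Title, Other_Title), N'Computer')
UNION
SELECT DocNo FROM dbo.Subject WHERE CONTAINS(Subject, N'Computer');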
View 3 Replies
View Related
Aug 18, 2014
SQL 2012
I have a source table in the staging database stg.fact and it needs to be merged into the warehouse table whs.Fact.
stg.fact is not a delta feed; it is basically an intra-day refresh.
Both tables have a last-updated date, so it's easy to see which rows have changed.
It will be new (insert) or changed (update) data that I am interested in, there are no deletions.
As this could be in the millions of rows that are inserts or updates then this needs to be efficient.
I expect whs.Fact to go to >150 million rows.
When I have done this before I started with T-SQL Merge statement and that was not performant once I got to this size.
My original option was to do this in SSIS with a lookup task that marks the inserts and updates and deals with them separately. However, when I set up the lookup transformation, the reference data set would need a package variable in the SQL command, and this does not seem possible with the lookup in 2012! I am currently looking at the Merge Join transformation, and at any clever basic T-SQL that could work, as this will need to be fast, and that's where I think T-SQL may be the better route.
Both tables will have >100,000,000 rows
Both tables have the last updated date
The Tables are in different databases but on the same SQL Instance
Each table holds 5 integer columns, one varchar, and one datetime
The last time I used Merge it was a wider table with lots of columns, so I don't know if this would be an option.
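A hedged sketch of the split UPDATE/INSERT pattern that often outperforms a single large MERGE at this scale (key and column names here are assumptions, and three-part names would be needed across the two databases):
-- 1. Update warehouse rows that exist in staging and have changed since the last load.
UPDATE w
SET    w.Measure1    = s.Measure1,
       w.LastUpdated = s.LastUpdated
FROM   whs.Fact AS w
JOIN   stg.Fact AS s ON s.FactKey = w.FactKey
WHERE  s.LastUpdated > w.LastUpdated;
-- 2. Insert staging rows that are not in the warehouse yet.
INSERT whs.Fact (FactKey, Measure1, LastUpdated)
SELECT s.FactKey, s.Measure1, s.LastUpdated
FROM   stg.Fact AS s
WHERE  NOT EXISTS (SELECT 1 FROM whs.Fact AS w WHERE w.FactKey = s.FactKey);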
View 6 Replies
View Related
Sep 18, 2015
We have a table of 100M rows, and up until now we were fine with a non-clustered index on a varchar(4000) column because we never went above 900 bytes (yes, it is a bad design). We now need to support international character sets, so the column was changed to nvarchar(4000), and now we have data past the 900-byte index key limit.
The data is long and seems useless, but it is needed by the business, and they need to be able to search "where bigcolumn like 'test%'". With an index, even with a huge amount of data, it was 'fast'. Now, of course, without an index it is unusable. The wildcard is always at the end of the search. I made a full-text index on the column, and basic queries such as: select * from ourtable where contains(bigcolumn, 'AReallyLongStringofTextHere') work fine unless there is a space in the data. We lose thousands of returned rows because of spaces in the data.
I have tried select * from ourtable where contains(bigcolumn, '"AReallyLongStringofTextHere that includes spaces"') but not all of the data is returned. I get 112 rows with the contains statement. The table-scanning statement select * from ourtable where bigcolumn like 'AReallyLongStringofTextHere that includes spaces%' returns 1939 rows. I understand that a full-text index is breaking the long string up since it contains spaces. Is there a way to retain the entire string as one index entry, or is there a way to fix my query to return all of the rows?
View 9 Replies
View Related
Feb 11, 2014
Other than right-clicking on each individual table in SSMS and generating a CREATE script, is there a simple way to generate CREATE TABLE scripts for tables within a given database?
Background: I have a bunch of tables in one database, and I would like to add tables to a second database that have the same names and basic structures of some of the tables from the first database.
I do not need to transfer any data from the tables; this is a separate project that will use a similar data structure. I just want to generate the CREATE TABLE scripts for the 30-ish tables within the first database, and then I'll tweak the scripts as appropriate and run them against the new database.
[URL] ....
View 7 Replies
View Related
Aug 7, 2014
I have a large Excel spreadsheet created by a finance user that contains several decades' worth of sales data.
Here is a small sample:
Guest Count
Unit ID    1/2/2011    1/9/2011
3                      0
7                      0
8                      0
9          0           0
15         1696        1202
22         2769        1914
23         2704        2110
25         0           0
28         2838        1882
33         1089        691
36         3581        3064
37         1469        1062
I need to get this data into an SQL table in the following form so I can use it to further manipulate the data and update several other tables. I am thinking that UNPIVOT or CROSS APPLY might be the way to go, but am not sure how to code it.
The desired output:
Unit ID    Date        Guest Count
3          1/2/2011    NULL
7          1/2/2011    NULL
8          1/2/2011    NULL
9          1/2/2011    0
15         1/2/2011    1696
and so on ......
The spreadsheet has 2900 columns and 3500 rows so performance is definitely a consideration as well.
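A hedged sketch of the CROSS APPLY (VALUES ...) form, shown with only the first two date columns of an assumed staging table; with 2,900 date columns the VALUES list would normally be generated from the column metadata (sys.columns) rather than typed by hand:
-- Unpivot one row per (unit, date) from the column-per-date staging layout.
SELECT s.[Unit ID],
       v.GuestDate,
       v.GuestCount
FROM dbo.GuestCountStaging AS s
CROSS APPLY (VALUES
    (CAST('2011-01-02' AS date), s.[1/2/2011]),
    (CAST('2011-01-09' AS date), s.[1/9/2011])
) AS v (GuestDate, GuestCount);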
View 9 Replies
View Related
Mar 28, 2015
Our system runs a SQL Server 2012 DB, and it has a table (table_a) with over 10M records. Our system has to receive a data file from the previous system daily, which contains approximately 3M updated or new records for table_a. My job is to update table_a with the new data.
The initial solution is:
1 Create a table (table_b) whose structure is the same as table_a
2 Use BCP to import updated records into table_b
3 Remove outdated data in table_a:
delete a from table_a as a inner join table_b as b on a.key_fields = b.key_fields
4 Append updated or new data into table_a:
insert into table_a select * from table_b
In testing, this solution is very inefficient; step 3 alone costs several hours. How can I improve it?
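Indexing the staging table on the join key right after the BCP load usually makes the biggest difference to step 3; a hedged sketch in which key_fields stands in for the real key column(s):
-- A clustered index on the staging table's join key gives the DELETE join an efficient
-- access path instead of repeated scans of a 3M-row heap.
CREATE CLUSTERED INDEX IX_table_b_key ON dbo.table_b (key_fields);
UPDATE STATISTICS dbo.table_b;
DELETE a
FROM dbo.table_a AS a
INNER JOIN dbo.table_b AS b
        ON a.key_fields = b.key_fields;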
View 9 Replies
View Related