Exceedingly Long Update On Large Tables - Why?

Mar 28, 2006

We have a simple UPDATE query, joining two tables, that takes much longer than 10 hours to run; but if we break the table into six pieces (10 million rows in each), each part takes only fifteen minutes to run.

Why? And how can we tell in advance whether a query will cross the threshold into long-running territory? Or better, how can we prevent it?

The system is Windows XP Pro with 4GB RAM (/3GB switch) and SQL Server 2005 Standard. Log files, swap files, and database files are on separate drives. The system is dedicated to SQL Server, and no other queries run at the same time. The database uses the Simple recovery model. Each table is a few GB with 60 million rows.

An example problem query (it updates fewer than 10 bytes per row):

UPDATE bigtable
SET bigtable.custage = scores.custage, bigtable.custscore = scores.custscore
FROM bigtable
JOIN scores ON bigtable.custid = scores.custid

In this case, each table has 60 million rows. 'custid' is a sequential, unique integer. The SCORES table is clustered on 'custid' and is 1.5GB in size. BIGTABLE has an index on 'custid' and is 6GB in size. There is a one-to-one match between the tables on 'custid', but it is not enforced. The SCORES table was created by exporting a few fields (but all 60 million records) from BIGTABLE, updating the values in a separate program, then importing the results back into SQL Server as the SCORES table.

The first time this query was run, we stopped it after it had run for 16 hours. When we broke bigtable up into 10-million-record chunks (big1, big2, ..., big6), each update took only 15 minutes, for 90 minutes total.

* How can we tell in advance that the full-table update would take more than a few hours?
* Why does it take SO MUCH LONGER than running in smaller chunks?
* When a query is taking that long to run, is there any way to tell where in the plan it is?
* What should we do differently? (See the batching sketch below.)
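One workaround, consistent with the 15-minute chunk timings, is to keep a single table but batch the update by 'custid' range so each pass stays small. A minimal sketch, assuming 'custid' is the sequential integer described above (the batch size is illustrative):

DECLARE @lo INT, @hi INT, @batch INT
SELECT @lo = MIN(custid), @hi = MAX(custid) FROM bigtable
SET @batch = 10000000  -- 10 million rows per pass, matching the manual chunking

WHILE @lo <= @hi
BEGIN
    UPDATE bigtable
    SET bigtable.custage = scores.custage, bigtable.custscore = scores.custscore
    FROM bigtable
    JOIN scores ON bigtable.custid = scores.custid
    WHERE bigtable.custid >= @lo AND bigtable.custid < @lo + @batch

    SET @lo = @lo + @batch  -- under the Simple recovery model, the log can truncate between batches
END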

Thanks for any help; this is a real head scratcher for us.


Large Long Running Query

Mar 20, 2008



The query shown below is designed to use seasonal profiles to compute 53 weeks of forecast data and then, from that, compute the number of weeks of supply of each item at each location. The query works, but the volume of data produced (20+M rows) is substantial. If I limit the CTE to a single location, it runs in 2 seconds and returns 41,000 rows, but when run for all locations and items it runs for more than 4 hours. Would I do better converting the CTE to a sub-query and adding an index to improve the performance of the main query?


WITH Forecast AS
(
    SELECT Location_Idx
          ,Item_Idx
          ,Week_Code
          ,(CAST(AnnualQty AS DECIMAL(9)) / 53.0) * [Profile] AS fcst
    FROM dbo.FactReplenishmentProfile rp
    INNER JOIN dbo.FactSeasonalProfile sp
        ON sp.SeasonalProfile_Idx = rp.SeasonalProfile_Idx
)
SELECT fcst1.Location_Idx
      ,fcst1.Item_Idx
      ,fcst1.Week_Code
      ,fcst1.fcst AS WeekQty
      ,SUM(fcst2.fcst) AS CumQty
FROM Forecast fcst1
INNER JOIN Forecast fcst2
    ON fcst2.Location_Idx = fcst1.Location_Idx
   AND fcst2.Item_Idx = fcst1.Item_Idx
   AND fcst2.Week_Code <= fcst1.Week_Code
GROUP BY fcst1.Location_Idx, fcst1.Item_Idx, fcst1.Week_Code, fcst1.fcst
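Because the CTE is referenced twice, the optimizer expands and recomputes it on both sides of the self-join. A hedged alternative, rather than a plain sub-query, is to materialize the forecast once into an indexed temp table so the self-join has something cheap to seek into:

SELECT Location_Idx
      ,Item_Idx
      ,Week_Code
      ,(CAST(AnnualQty AS DECIMAL(9)) / 53.0) * [Profile] AS fcst
INTO #Forecast
FROM dbo.FactReplenishmentProfile rp
INNER JOIN dbo.FactSeasonalProfile sp
    ON sp.SeasonalProfile_Idx = rp.SeasonalProfile_Idx

CREATE CLUSTERED INDEX IX_Forecast ON #Forecast (Location_Idx, Item_Idx, Week_Code)

SELECT fcst1.Location_Idx
      ,fcst1.Item_Idx
      ,fcst1.Week_Code
      ,fcst1.fcst AS WeekQty
      ,SUM(fcst2.fcst) AS CumQty
FROM #Forecast fcst1
INNER JOIN #Forecast fcst2
    ON fcst2.Location_Idx = fcst1.Location_Idx
   AND fcst2.Item_Idx = fcst1.Item_Idx
   AND fcst2.Week_Code <= fcst1.Week_Code
GROUP BY fcst1.Location_Idx, fcst1.Item_Idx, fcst1.Week_Code, fcst1.fcst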


Long Running Update...

Aug 20, 2001

Hi everyone... I'm trying to execute this update statement and it takes an eternity. Any ideas on how to rewrite or speed it up?

It's a several-step process; below is everything that I run, one step at a time. The final update statement is what takes so long. It should only affect about 2,600 rows out of a potential 9,000, which is why I'm confused about the response time.

select d.olddevicename, de.device, d.newdevicename into #temp9
from dns d, devices de
where de.device = d.olddevicename

update #temp9 set device = newdevicename where olddevicename = device

update devices set device = #temp9.device from #temp9, devices where #temp9.device in
(select #temp9.device from #temp9, devices where #temp9.olddevicename = devices.device)
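For what it's worth, the final statement appears to have no join condition between #temp9 and devices in its outer FROM, so it behaves as a cross join, which alone could explain the run time. A hedged rewrite that joins directly on the old name (this assumes every #temp9 row should rename exactly one device):

UPDATE de
SET de.device = t.device
FROM devices AS de
JOIN #temp9 AS t
    ON de.device = t.olddevicename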

Thank you!

J.


Update Takes Too Long On Some Records

Mar 31, 2008

Hi all,

I have a confusing problem here. My table, which has about 150,000 rows, has a problem with updates on 15 rows.

It's an Access application which connects to a SQL Server 2005 database. Access gets the error:
"ODBC -- update on linked table 'MyTable2' failed"

If I try the update directly in SQL, it takes a couple of minutes. Everything else on that table is fine.


Update Taking Longer In SQL 2000 Than SQL 7.0

Mar 7, 2002

Hi,
I have a table with 48 million rows. When I executed the following update query, it took 10 HOURS in SQL Server 2000 with SP1, whereas when I executed the same query against the same table in SQL Server 7.0 it took 13 MINUTES. As for the machines, the SQL 2000 server has more processors and more memory than the SQL 7.0 machine.
It looks strange, but it is true. Has anyone faced such a problem? Is there a bug in SQL 2000?

Here is the query:

UPDATE a
SET a.univ_regdate = b.dayid
FROM cus_pay_jan_dist a WITH (NOLOCK), tm_dayids b WITH (NOLOCK)
WHERE a.univ_regdate = b.dayidnum AND a.univ_regdate LIKE '2001%'


Thanks
Ananth


Update Takes Long Time To Complete!?

Jul 20, 2005

Hi There,

I have an update statement to update a field of a table (~15,000,000 records). It took around 3 hours to finish two weeks ago. After that no one touched the server and no configuration changed. Yet when I re-ran it yesterday it took more than 18 hours and still had not finished! What's wrong with it? I could run it successfully before, and I have tried two times since, but the result was still the same.

My SQL statement is:

UPDATE a
SET a.accounting_month = b.accounting_month
FROM [all_sales] a, date_map b
WHERE a.sales_date >= b.start_date AND a.sales_date < b.end_date;

An index on [all_sales].sales_date was built successfully. A composite index on ([date_map].start_date, [date_map].end_date) was built successfully.

My server config is:
SQL Server 2000 with Service Pack 3
Windows 2000 with Service Pack 4
DELL PowerEdge 6650 Server
Dual Xeon 1900MHz processors
2GB RAM
2GB page file on drive C
2GB page file on drive D
DELL Diagnostics on all SCSI hard disks PASSED

Could any expert give me some help? Thanks x 1,000,000,000
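While the statement is running, a hedged first check from a second connection is whether it is blocked or stuck behind an open transaction (all standard SQL 2000 tools):

DBCC OPENTRAN             -- oldest active transaction in the current database
EXEC sp_who2              -- check the BlkBy column for the updating SPID
EXEC sp_lock              -- lock detail per SPID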


Update Statement Taking Long Time

Mar 27, 2008



I have an update statement in my SSIS package that used to take 10 minutes as SQL 2000 DTS and now takes 1 hour 15 minutes in SQL 2005.

This is my SQL update statement:

UPDATE WeeklySalesHistory
SET weekendingdate =
    (SELECT LastTransDateTime FROM ReplicationControl
     WHERE TableName = 'WEEKHST')
WHERE weekendingdate IS NULL

It uses an OLE DB connection and updates about 36,000 records.

I have read that OLE DB can be slow and that a staging table should be used. Does that mean that for all updates like this I have to use a staging table and then insert? I didn't have to do this in SQL 2000. Has that changed? Are there any other options?
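One thing worth trying, sketched below: evaluate the scalar subquery once into a variable, then run a plain UPDATE. This removes any chance of per-row re-evaluation of the subquery and gives the optimizer a trivial plan:

DECLARE @d DATETIME
SELECT @d = LastTransDateTime FROM ReplicationControl WHERE TableName = 'WEEKHST'

UPDATE WeeklySalesHistory
SET weekendingdate = @d
WHERE weekendingdate IS NULL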


any input greatly appreciated.



One Large Update Vs. Many Small

Oct 8, 2007



Hello,

the application will add items into a "bag"; that is, the items in one table will reference a record in another table. This happens over time, with second-or-minute delays between new items, and there can be up to a thousand items per bag. The alternative is to wait until a full bag accumulates and set up all the references at once by using


UPDATE items SET container_ref = bag WHERE id IN [...]

The disadvantage of all-at-once that I see is the inability to encapsulate the functionality in a stored procedure; the problem is passing a set of IDs. The advantage should be efficiency in terms of total SQL Server load. How much would it be?
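On SQL Server 2005 the ID-set problem is solvable with an XML parameter, which keeps the all-at-once update inside a stored procedure. A hedged sketch (the procedure and element names are illustrative):

CREATE PROCEDURE dbo.FillBag
    @bag INT,
    @ids XML  -- e.g. '<i v="1"/><i v="2"/><i v="3"/>'
AS
BEGIN
    UPDATE items
    SET container_ref = @bag
    WHERE id IN (SELECT x.n.value('@v', 'INT')
                 FROM @ids.nodes('/i') AS x(n))
END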


Large Tables In SQL 7.0

Jul 12, 2001

We currently have a data warehouse running on SQL 7.0, SP2. One of our primary fact tables now has well over 155 million rows. The table is not very wide, as it only contains 17 columns, most of which are defined as integers. The entire database is only 20 GB.

The issue is that the loads from the staging table to this fact table have significantly deteriorated over the last month or so, dropping from over 400 transactions per second to around 85. We drop all the indexes on the fact table before we load the data into it.

Are there issues with manageable table size in SQL 7.0 that we need to be concerned about? And should we consider partitioning the table into several smaller tables joined with a "union all" view?
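For reference, a minimal sketch of that UNION ALL approach; the member table names are illustrative, and each member needs a CHECK constraint on the partitioning column so the optimizer can skip irrelevant tables:

CREATE VIEW dbo.FactAll
AS
SELECT * FROM dbo.Fact_2001_Q1
UNION ALL
SELECT * FROM dbo.Fact_2001_Q2
UNION ALL
SELECT * FROM dbo.Fact_2001_Q3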

I really need to get this performance issue resolved, as our IT support vendor is pushing us to port the data warehouse to UDB because they tell us that SQL server is not scalable enough to handle this volume of data.

Thanks for any help you can provide.

George M. Parker


Large Tables

Aug 10, 2000

Hi,

How can I partition large tables so that the inserts and updates I am doing on them take less time?

I want to know how I can partition large tables, and, if I do so, how that increases performance.

Thanks.


Large Tables

Mar 13, 2001

How can I find the largest 5 or 10 tables in a database?
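A hedged sketch for the SQL 2000 era: approximate each table's size from its reserved pages in sysindexes (indid 0 or 1 is the heap or clustered index, i.e. the table itself), so treat this as an estimate rather than an exact figure:

SELECT TOP 10 OBJECT_NAME(id) AS TableName, reserved * 8 AS ReservedKB
FROM sysindexes
WHERE indid IN (0, 1)
ORDER BY reserved DESC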

Thanks in advance
Chan


Concurrent Snapshot Still Locking Tables For A Long Time

Apr 5, 2006

We have transactional replication running with a separate publisher/distributor/subscriber. I want to add a couple of articles to the publication and then initialise them. I have added the articles and run sp_refreshsubscriptions.

I now want to refresh the subscriptions. I have selected not to lock the tables on the snapshot tab of the publication properties, but whenever I run the snapshot agent it locks the application solid! It's odd; as soon as I run the snapshot agent, the phones start ringing within minutes. The application is Great Plains, and I have set the snapshot agent to run nightly anyway.

Is there any way I can run the snapshot agent during working hours to refresh this one article? Once I have successfully done this, I have a number of articles I want to add, but I can't lock the tables when refreshing the initial snapshot.
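If the publication is not already using a concurrent snapshot, a hedged sketch of switching it (the publication name is illustrative; this affects subsequently generated snapshots):

EXEC sp_changepublication
    @publication = 'GreatPlainsPub',
    @property = 'sync_method',
    @value = 'concurrent'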


Use Replace() In An Update In A Large Ntext

Apr 21, 2008

I've got a table that I have to update in preparation for our environment move (2000 to 2005 SP2). The developers that designed the application created a table called schemas, which holds the contents of an XML file inside an ntext field named Data. I need to parse through the field and do a find/replace to change all instances of www.site.com to www7.site.com; it's all over the place in the file. The problem is that the DATALENGTH() of each of the fields (there are 2 rows) is above 15000.

Normally, I'd run something like this:

update schemas set data = replace(cast(Data as varchar(max)), 'www.site.com', 'www7.site.com') where data like '%www.site.com%'

On smaller columns it works great, but it won't work on these because they're too big (the update chops anything beyond the varchar(max) value). I could do it manually, but this DB will be refreshed from production on a weekly basis and I'd like to script as many of the environment changes to the DB as possible. Any ideas?
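Since Data is ntext (Unicode), a hedged variant casts to NVARCHAR(MAX) instead of VARCHAR(MAX); on 2005 the MAX types hold the full value, so nothing should be chopped during the REPLACE:

UPDATE schemas
SET Data = CAST(REPLACE(CAST(Data AS NVARCHAR(MAX)),
                        N'www.site.com', N'www7.site.com') AS NTEXT)
WHERE CAST(Data AS NVARCHAR(MAX)) LIKE N'%www.site.com%'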


Searching In Large Tables

Mar 7, 2007

Hi there, I am having a problem with SQL Server. I have a table called ZipCodes that has around 80,000 rows, and the size of the table is around 100 MB. The table is on a web server. My problem is that when I fire a query that needs to go through the whole table, the estimated time to execute the query comes to 13 seconds, and the cursor threshold is set to 7 seconds (and I can't change that), so SQL Server cancels the query. I need some methodology/technique through which I can search large tables with minimum calculation in minimum time. Any ideas?


COMPRESSING LARGE TABLES

Mar 19, 2001

Is it possible to compress large tables in the database, like the COMPRESS or ARCHIVE options we use to reduce the size of files stored on an operating system?

I know there is a difference between a file stored on disk and a table created in the database, but I am currently facing space problems and have to manage my database within the available space, so please advise me whether such an option is available in SQL Server 6.5 or 7.
I would be happy to get a solution soon, as I am facing this problem right now.
Thank you
Amol


Restore Two Tables From Large DB Into A New DB

Feb 12, 2008

I am fairly new to SQL, so please forgive me if my question is a bit elementary. I need to pull two individual tables out of a massive DB into a new DB for testing.
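If both databases live on the same server, a hedged minimal approach is SELECT INTO across databases (the names are illustrative); note that SELECT INTO copies only the data, so add any needed indexes and constraints afterwards:

SELECT * INTO TestDB.dbo.Customers FROM MassiveDB.dbo.Customers
SELECT * INTO TestDB.dbo.Orders    FROM MassiveDB.dbo.Orders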

Thanks for the help.


Manipulating Large Tables

Feb 15, 2008

I'm in the midst of a long file-conversion job. Today I found that one of the tables (converted from CSV) has 6.7 million records. My SQL script, which I use to reconfigure the weird original date format into something the rest of the planet uses, times out due to the size.

Does anyone know of a file utility to automagically split SQL Server 2005 tables for later re-combining, once my scripts have successfully completed their task on the smaller tables?
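Rather than physically splitting the table, one hedged alternative is to run the conversion in chunks with SET ROWCOUNT, which SQL Server 2005 still honors for UPDATE. The table name, column, conversion expression, and "not yet converted" test below are all hypothetical placeholders:

SET ROWCOUNT 100000
WHILE 1 = 1
BEGIN
    UPDATE dbo.ImportedData
    SET DateText = CONVERT(VARCHAR(10), CONVERT(DATETIME, DateText, 103), 120)  -- hypothetical dd/mm/yyyy to yyyy-mm-dd fix
    WHERE DateText LIKE '__/__/____'   -- hypothetical test for unconverted rows
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0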


Partitioning Large Tables

Nov 14, 2007

I am making a warehouse management system. The system will contain a lot of data, but only a small portion of it will be accessed frequently. Most of the data will be accessed only seldom, but the customer wants to keep all historic data (just in case they should need it sometime). I have figured I need to partition the tables somehow to keep fresh data in one place and historical data in another.

What is the best way to do this? I am thinking about making historical tables. For example, I can have a table named PickList and another named PickListHistorical. When a picklist is processed/complete I can move it over to the PickListHistorical table, but when users need to search for a specific picklist I have to look in both tables. I can of course create a view to make this transparent.

SQL Server 2005 introduced some automatic partitioning. Would it be better to use this than to create my own historical tables? If so, can you please tell me how to do it?
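For the native route, a minimal SQL Server 2005 partitioning sketch; the function, scheme, table, and column names are illustrative, and the boundary date is what splits "historical" from "fresh":

CREATE PARTITION FUNCTION pfPickList (DATETIME)
AS RANGE RIGHT FOR VALUES ('2007-01-01')

CREATE PARTITION SCHEME psPickList
AS PARTITION pfPickList ALL TO ([PRIMARY])

CREATE TABLE dbo.PickList (
    PickListID INT NOT NULL,
    CompletedDate DATETIME NOT NULL
) ON psPickList (CompletedDate)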

Thank you!


Comparing Large Tables

Oct 19, 2007



I've successfully created SSIS packages that compare two tables in different databases on different servers. This approach is good enough to compare hundreds of thousands of records quickly, but it becomes a huge performance problem when the tables being compared each contain tens of millions of records.

One database is on a SQL 2005 box and the other is SQL 7.0, so the Lookup component fails for that SQL Server version. I've been implementing merge joins and conditional split components to do my standard table comparisons.

Is there another way to implement this process, or maybe partition it somehow to take pieces of the table at a time and compare them? I'm open to ideas.
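One hedged way to make the comparison tractable is to partition the work by key range, so each SSIS pass compares one slice; the source query on each side takes the same bounds (the key and column names and the bounds are illustrative):

DECLARE @lo INT, @hi INT
SET @lo = 1
SET @hi = 1000000  -- illustrative slice bounds

SELECT keycol, col1, col2      -- the key plus the columns being compared
FROM dbo.BigTable
WHERE keycol >= @lo AND keycol < @hi
ORDER BY keycol                -- already sorted for the merge join, so no sort transform needed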


Multiple Update Triggers Or One Large Trigger With If's

May 1, 2001

I have a table where, when certain columns are updated, a trigger needs to fire to update a next-schedule date in that same table for that record. I can write the trigger, but my question, for performance and efficiency, is which approach would be better: separate triggers for the 8 columns, or one large trigger with IF checks to see whether those columns were updated (a sketch of the single-trigger form follows below).
Thanks
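A minimal sketch of the single-trigger form for the SQL 7.0/2000 era, using UPDATE() to test the relevant columns; the table, column names, and scheduling rule are all illustrative:

CREATE TRIGGER trg_SetNextSchedule ON dbo.ScheduleTable
FOR UPDATE
AS
BEGIN
    -- Recalculate only when a relevant column changed; extend the OR list to all 8 columns.
    IF UPDATE(ReadingValue) OR UPDATE(ReadingDate)
    BEGIN
        UPDATE t
        SET t.NextScheduleDate = DATEADD(day, 30, GETDATE())  -- hypothetical scheduling rule
        FROM dbo.ScheduleTable AS t
        JOIN inserted AS i ON i.RecordID = t.RecordID
    END
END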


Tuning An UPDATE Statement On A Large Data Set?

May 26, 2004

I'm updating the name data in a large user database with the following UPDATE statement. The staging table was bulk-loaded from a flat file and contains 10 million records. The production table (Recipients) contains 15 million records. This worked correctly, but this single update statement took an entire ten hours to run, which is way too long. While it was running, the server was clearly 100% disk bound; CPU activity was near nothing. We've just upgraded RAM from 1GB to 2GB, but we expect data sizes to grow significantly and we can't keep adding RAM. Absolutely nothing else is running on this server. Any ideas how I can optimize this?

UPDATE Recipients
SET [First] = Stages.[First]
, [Last] = Stages.[Last]
FROM
Stages
INNER JOIN Recipients ON
(Stages.UserName = Recipients.UserName
AND Stages.DomainID = Recipients.DomainID)
WHERE
(CASE WHEN Stages.[First] IS NULL THEN 1 ELSE 0 END
+ CASE WHEN Stages.[Last] IS NULL THEN 1 ELSE 0 END)
<=
(CASE WHEN Recipients.[First] IS NULL THEN 1 ELSE 0 END
+ CASE WHEN Recipients.[Last] IS NULL THEN 1 ELSE 0 END)

Text execution plan. I've made small annotations with the % information from the graphical execution plan:

|--Clustered Index Update(OBJECT:([Recipients].[dbo].[Recipients].[PK_Recipients]), SET:([Recipients].[First]=[Stages].[First], [Recipients].[Last]=[Stages].[Last]))
|--Top(ROWCOUNT est 0)
|--Sort(DISTINCT ORDER BY:([Bmk1000] ASC))
14% |--Merge Join(Inner Join, MANY-TO-MANY MERGE:([Stages].[DomainID], [Stages].[UserName])=([Recipients].[DomainID], [Recipients].[UserName]), RESIDUAL:(([Recipients].[UserName]=[Stages].[UserName] AND [Recipients].[DomainID]=[Stages].[Domain
25% |--Clustered Index Scan(OBJECT:([Recipients].[dbo].[Stages].[IX_Stages]), ORDERED FORWARD)
61% |--Clustered Index Scan(OBJECT:([Recipients].[dbo].[Recipients].[PK_Recipients]), ORDERED FORWARD)

Everything I've heard on the subject suggests you change the index scans to index seeks. How do I do this?

Any other tuning advice is greatly appreciated.

Here are the exact statements I used to create the tables:

CREATE TABLE Recipients (
ID INT IDENTITY (1, 1) NOT NULL,
UserName VARCHAR (50) NOT NULL,
DomainID INT NOT NULL,
First VARCHAR (24) NULL,
Last VARCHAR (24) NULL,
StreetAddress VARCHAR (32) NULL,
City VARCHAR (24) NULL,
State VARCHAR (16) NULL,
Postal VARCHAR (10) NULL,
SourceID INT NULL,

CONSTRAINT PK_Recipients PRIMARY KEY CLUSTERED (DomainID, UserName)
)

CREATE TABLE Stages (
ID INT NULL,
UserName VARCHAR(50) NOT NULL,
DomainID INT NULL,
Domain VARCHAR(50) NOT NULL,
First VARCHAR(24) NULL,
Last VARCHAR(24) NULL,
StreetAddress VARCHAR(32) NULL,
City VARCHAR(24) NULL,
State VARCHAR(24) NULL,
Postal VARCHAR(10) NULL
)
CREATE CLUSTERED INDEX IX_Stages ON Stages (DomainID, UserName)
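Given that both tables are clustered on (DomainID, UserName), one hedged approach is to run the update in clustered-key batches so each pass sorts and writes far less at a time; the NULL-count filter is omitted here for brevity, and the step size is illustrative:

DECLARE @lo INT, @hi INT, @step INT
SELECT @lo = MIN(DomainID), @hi = MAX(DomainID) FROM Recipients
SET @step = 1000

WHILE @lo <= @hi
BEGIN
    UPDATE Recipients
    SET [First] = Stages.[First], [Last] = Stages.[Last]
    FROM Stages
    INNER JOIN Recipients ON Stages.UserName = Recipients.UserName
                         AND Stages.DomainID = Recipients.DomainID
    WHERE Recipients.DomainID >= @lo AND Recipients.DomainID < @lo + @step

    SET @lo = @lo + @step
END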


What Is Most Efficient Way To Use DataAdapters With Large SQL Tables

Jan 9, 2008

I'm using DataAdapters with my SQL database, with the intention that all the SELECT, UPDATE, INSERT, and DELETE commands be automatically generated. One table is huge, so I'm wondering: is it more efficient to use "SELECT TOP(1) * FROM hugetable" instead of "SELECT * FROM hugetable" in order to facilitate the generation of commands? I hope this isn't too confusing.

Thanks,
Geoff


4 Separate Tables Or One Large Table?

May 10, 2008

I have 4 tables with the following record counts:
1) 6755
2) 2021
3) 2021
4) 355

They all have the same columns; however, they need to be separate, or at least queried separately. I'll be accessing this database via the web. I was at first afraid that a large table would cause a major slowdown when accessing the db, so I broke it up into 4 tables. If I combined all 4 tables into one large table and just had a column that differentiated the 4, how significant would the change in access speed be? It's not a big deal to keep them separate; it's just that when I have to add or remove a column from one table I have to do it in all the tables. Furthermore, I'm using a module from DevExpress (don't know if anyone has heard of it), and when you use a gridview it loads up the entire table even though you're paging (which I think is retarded), so for that reason I was afraid it would slow down my access to the db. Any thoughts?
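At these row counts (roughly 11,000 total) the difference should be negligible if the discriminator column is indexed. A hedged sketch of the combined shape (the names and the stand-in column are illustrative):

CREATE TABLE dbo.Items (
    ItemID INT IDENTITY(1,1) PRIMARY KEY,
    SourceType TINYINT NOT NULL,   -- 1-4: which of the original tables the row belongs to
    Payload VARCHAR(100) NULL      -- stand-in for the shared columns
)
CREATE INDEX IX_Items_SourceType ON dbo.Items (SourceType)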


Slow Inserts Into Large Tables

Nov 29, 2000

We are inserting into a table that includes an identity primary key column. When the table gets really large (i.e. 1.5 million records), the performance of the inserts degrades.

I noticed that when we insert into the table an exclusive lock on the table is obtained. Do inserts into tables with identities always lock the table?

Given the table size is unavoidable, does anyone have a suggestion to improve the performance?

Thanks,
Matt


Temp Tables Vs. Large Table

Aug 4, 2005

I have a few hundred users, maybe a dozen or two active at any given time, accessing the same database via ASP. The database has many tables, one being a very large orders table with a few million records, against which I have created a view; a view only because I need to allow the user to filter the results quite extensively. The users typically only need to view records for the last 30 days, and the results for each user might be five thousand records or less.

My question is this: would I be better off writing each user's resultset to a temp table for that user's session, letting the filtering and sorting run against that temp table, and increasing my hardware to accommodate that, possibly to the point of creating a database cluster? Or would I be better off leaving it as is, where each user uses the same view?

FYI: each user may need visibility to only a handful of fields, but overall the view must expose many fields.
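If the temp-table route is tested, a hedged sketch of the per-session step; the view, table, and column names are illustrative:

DECLARE @CustomerID INT
SET @CustomerID = 12345  -- illustrative; in practice this comes from the ASP session

SELECT OrderID, OrderDate, Status  -- the handful of fields this user needs
INTO #MyOrders
FROM dbo.OrdersView
WHERE CustomerID = @CustomerID
  AND OrderDate >= DATEADD(day, -30, GETDATE())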

Any thoughts on this would be greatly appreciated. Thanks in advance.

Dave



Query Optimization Help For Very Large Tables

Nov 1, 2007

I have the following table structure:
tableA (~85,000 rows) primary key = [colA,colB]
tableB (~850,000 rows) primary key = [colA,colC]
tableC (~120,000,000 rows) primary key = [colA,colB,colC]

IMPORTANT: colC is DATETIME

For a SET of rows in tableA (about 50,000) I need to pull the MOST RECENT (given a date) corresponding values from tables B and C. The only way I can think of doing this is the following:

SELECT tableA.colA
,(SELECT TOP 1 colX FROM tableB WHERE colA = tableA.colA AND colC <= @INPUTDATE ORDER BY colC desc)
,(SELECT TOP 1 colY FROM tableB WHERE colA = tableA.colA AND colC <= @INPUTDATE ORDER BY colC desc)
,... --some more columns from tableB
,(SELECT TOP 1 colX FROM tableC WHERE colA = tableA.colA AND colB = tableA.colB AND colC <= @INPUTDATE ORDER BY colC desc)
,(SELECT TOP 1 colY FROM tableC WHERE colA = tableA.colA AND colB = tableA.colB AND colC <= @INPUTDATE ORDER BY colC desc)
,... --some more columns from tableC
FROM tableA
WHERE tableA.colX = 'some criteria'


Is there any other way anyone can suggest? Unfortunately, because tableC is so large, the disk IO (I think) causes this query to take over an hour. (If I had monster RAM and super fast disk this wouldn't be as big an issue, but that's not an option right now )
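If this is SQL Server 2005 or later, one hedged alternative is CROSS APPLY, which fetches each most-recent row once instead of running one correlated subquery per output column (use OUTER APPLY if rows without a match must be kept):

DECLARE @INPUTDATE DATETIME
SET @INPUTDATE = GETDATE()  -- illustrative

SELECT a.colA,
       b.colX AS b_colX, b.colY AS b_colY,
       c.colX AS c_colX, c.colY AS c_colY
FROM tableA a
CROSS APPLY (SELECT TOP 1 colX, colY
             FROM tableB
             WHERE colA = a.colA AND colC <= @INPUTDATE
             ORDER BY colC DESC) b
CROSS APPLY (SELECT TOP 1 colX, colY
             FROM tableC
             WHERE colA = a.colA AND colB = a.colB AND colC <= @INPUTDATE
             ORDER BY colC DESC) c
WHERE a.colX = 'some criteria'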

Thanks in advance!


Large Number Of Tables And Performance

Jan 25, 2008

Hi gurus, I'm creating a web application where I will have a large number of tables (between 10k and 20k). This is done for the sake of scalability, as tables will be moved to different database servers as the application grows, and also for performance (smaller indexes). I'm worried, though, about how having a large number of tables could affect the performance of SQL Server, as the application will start on a single database server. I tried to find some resources on this on the internet but couldn't find any.

I would really appreciate any advice, and any good links would be great...

Waleed Eissa
http://www.waleedeissa.com


Performance Issues With Large Tables

Dec 5, 2007

Hi,

I have a table with over 61 million records that has a clustered index on an identity column (the primary key). Simple count queries take minutes to execute on this table (e.g. SELECT COUNT(1) FROM table1). I checked the statistics on the primary key, and the histogram showed roughly the 39-millionth record as the highest RANGE_HI_KEY. I updated the statistics on this column and re-queried, but it still took at least 5 minutes to return the count. There were no other users on the table when I queried, and inserts into the table work fine. I have other tables in the database with 41 million records that have no such issues. Can anyone point me to the problem areas in such scenarios?
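As a side note, if an approximate count is acceptable, the metadata in sys.partitions (SQL 2005) avoids scanning the table at all, whereas COUNT(*) has to read every row of the narrowest index:

SELECT SUM(rows) AS approx_rows
FROM sys.partitions
WHERE object_id = OBJECT_ID('dbo.table1')
  AND index_id IN (0, 1)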


Thanks,
Harish


How To Commit The Rest Of The Transactions Of An Update In Large Bulk?

Feb 8, 2004

I have to modify the structure of a table that already holds a lot of data. The log is getting full because of uncommitted transactions: a lot of data is being updated in large bulks, not all of the transactions are committed, and the update task cannot complete. However, there is no more spare disk space for the log to grow so the transaction can commit. Can anyone help?
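A hedged starting point for the SQL 2000 era (the database name is illustrative): measure how full the log actually is, and truncate the inactive portion once open transactions have committed or rolled back; only the inactive part of the log can be freed.

DBCC SQLPERF(LOGSPACE)
BACKUP LOG MyDatabase WITH TRUNCATE_ONLY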


SQL Server 2008 :: Update Statistics With Large Databases

Feb 5, 2015

Currently our database size is around 350 GB. It will grow to 1.5 TB.

At the database level we have:

Auto create statistics: True
Auto update statistics: True
Auto update statistics asynchronously: False

We have a weekly UPDATE STATISTICS job that runs for a very long time. It was created through a maintenance plan using the full scan option.

We previously tested sampling, but running with sampling instead of a full scan hurt the queries.

Is there a way to avoid the long job duration?

If we don't run the statistics update manually, what will happen? How do you maintain statistics on large databases?
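A hedged middle ground at this scale is an explicit sample rate per table instead of FULLSCAN everywhere, keeping FULLSCAN only for the tables that proved sensitive to sampling (the table name and rate are illustrative):

UPDATE STATISTICS dbo.BigFactTable WITH SAMPLE 10 PERCENT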


Large SQL Update - Effect On SQL 2005 Transactional Replication

Feb 1, 2007

I'm a newbie to replication and recently set up the following.

The Publisher and Distributor are on the same SQL 2005 server, and I have 7 subscribers (SQL 2000 servers) using push subscriptions. I'm replicating 5 SQL tables which don't have too many changes, scheduled to run every 3 hours. In a few days a large one-off SQL update will add an additional 10,000 rows to one of the replicated tables. I was wondering what impact this would have on the above setup, i.e. whether there are any limitations here. I'm assuming not, but thought I would check. I'm thinking it will just cause additional overhead on the server, but the update is being applied when no users will be using the database.

Any feedback greatly appreciated.

Thanks


Reducing Large Tables, Re-index And Backup Them

Jan 2, 2003

What is the best procedure/sequence for reducing tables containing a large number of rows on a SQL 2000 server?
The idea is first to check which tables grow extremely fast (statistics, user, or log tables), then reduce each table according to the number of months the user wishes to keep in it.
As a second step, back up the remaining rows of the table as txt files on the hard disk (using DTS), then run UPDATE STATISTICS and re-index the reduced table.
Run the DTS package once a month (delete the oldest month and back up the newest month) and repeat the above to keep the size of the tables adequate.
What is a fast way to reduce the number of rows of a large table? The following example produces an error (timeout expired) on my ADO connection when executed:

SET @str = 'DELETE FROM ' + @ProcessTable + ' WHERE ' + @SelectedColumn +
           ' < DATEADD(m, -' + @KeepMonthsInDatabase + ', GETDATE())'
EXEC (@str)

Setting ConnectionTimeout = 0 did not help, unfortunately.

What is the best way to re-index the table just maintained?
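A hedged sketch of both steps for SQL 2000: delete in chunks so each statement finishes well within the ADO command timeout, then rebuild all indexes on the table with DBCC DBREINDEX (the table and column names are illustrative):

SET ROWCOUNT 50000
WHILE 1 = 1
BEGIN
    DELETE FROM StatsLog
    WHERE LogDate < DATEADD(m, -6, GETDATE())
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0

DBCC DBREINDEX ('StatsLog')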

Thanks

mipo
