Data Archiving and Purging Strategy
Mar 31, 2008
Regarding SQL Server data, I am looking to implement the best data-archive and purge policy. Normally we do SQL backups and keep the history for some period, for example 8 weeks, so that we can restore to any point in time up to 8 weeks in the past, and we also do tape backups.
The question is: where can I find a good article or documentation on designing such a policy, so that I am covered both for point-in-time recovery of the database (via the SQL backups) and for point-in-time recovery in the far past, say 3 years ago, using the tape backups, without repeating the same efforts?
Any advice or suggestions on this topic?
Thanks,
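A minimal T-SQL sketch of the usual pattern (all names hypothetical): nightly full backups plus frequent log backups cover point-in-time recovery inside the on-disk window, while the same fulls rotated to tape cover the far past, so the two layers share one set of backup files rather than repeating the effort.

-- Nightly full backup (kept on disk for the 8-week window, rotated to tape for long-term)
BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_full.bak'
WITH INIT, NAME = N'MyDB nightly full'

-- Log backups every 15 minutes make any point in time restorable
BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn'

-- Point-in-time restore inside the window
RESTORE DATABASE MyDB FROM DISK = N'D:\Backups\MyDB_full.bak' WITH NORECOVERY
RESTORE LOG MyDB FROM DISK = N'D:\Backups\MyDB_log.trn'
WITH STOPAT = '2008-03-30T14:30:00', RECOVERY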
Feb 28, 2007
Does anyone know how to purge old data in an MS SQL Server 2000 database?
Jan 17, 2007
How do I purge data from an MSDE database? I only want to keep 6 months of data in the database, but right now I have data going back to 2004, and I get errors about every 10 seconds: "Primary File Group is Full".
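One common fix, sketched with hypothetical names: delete in small batches so the log stays manageable, then reclaim the space. (Note MSDE caps each database at 2 GB, which is consistent with the "file group is full" error.)

-- SQL 2000 / MSDE syntax: delete rows older than 6 months in 10,000-row batches
SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.MyLog WHERE LogDate < DATEADD(month, -6, GETDATE())
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0

-- Reclaim the freed space inside the data file
DBCC SHRINKDATABASE (MyDatabase)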
Oct 15, 2006
Is it possible to purge all records in the database while retaining the table structures? Even better, could I do it on a table-by-table basis? If I simply delete all the records, the identities for the tables do not revert back to 1.
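TRUNCATE TABLE removes all rows and also resets the identity seed; where a table is referenced by a foreign key, TRUNCATE is not allowed, so DELETE plus a manual reseed does the same job table by table (names hypothetical):

-- Fast path: empties the table and resets IDENTITY to its seed
TRUNCATE TABLE dbo.Orders

-- For tables referenced by foreign keys, where TRUNCATE is refused:
DELETE FROM dbo.Customers
DBCC CHECKIDENT ('dbo.Customers', RESEED, 0)  -- next insert gets identity 1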
Jul 5, 2014
I have 6 tables which are very large in row count, and records older than 8 days need to be deleted.
A little info: every day, 300 million records are inserted into the tables below, and we should maintain only 8 days worth of data in them. How do I implement a purge script that can delete records from all the tables at the same time, with optimized parallelism?
Master table, which has [ID] and [Timestamp]:
Table Name: Sample - 2,578,106
Child tables: the foreign key [ID] is common to all the tables. There is no timestamp column in the child tables, so their records need to be deleted based on the minimum [ID] still kept in Sample (see the sketch after this list).
dbo.ConnectionDB - 1,147,578,048
dbo.ConnectionSS - 876,458,321
dbo.ConnectionRT - 118,133,857
dbo.ConnectionSample - 100,038,535
dbo.Command - 100,032,235
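One possible shape for the purge, using the table and column names from the post (batch size is arbitrary): derive the cutoff [ID] once from the master table's timestamp, then run one batched delete loop per child table, each in its own session or job step so they proceed in parallel.

DECLARE @CutoffID bigint

-- Smallest ID that is still inside the 8-day window
SELECT @CutoffID = MIN([ID])
FROM dbo.Sample
WHERE [Timestamp] >= DATEADD(day, -8, GETDATE())

-- Repeat this loop per child table (dbo.ConnectionSS, dbo.ConnectionRT, ...)
WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM dbo.ConnectionDB WHERE [ID] < @CutoffID
    IF @@ROWCOUNT = 0 BREAK
END

-- Purge the master table last, after all the children are done
WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM dbo.Sample WHERE [ID] < @CutoffID
    IF @@ROWCOUNT = 0 BREAK
END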
Aug 2, 2000
Hi everyone!
My problem is that I don't know how I can archive the data - that is, document when, who, etc. changed the data (in a separate table).
I tried to solve it with different triggers.
Thanks in advance,
Maria.
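A minimal sketch of the trigger approach, assuming a hypothetical Customer table: the deleted pseudo-table exposes the pre-change rows, and SUSER_SNAME()/GETDATE() record who changed the data and when.

-- Audit table (hypothetical schema)
CREATE TABLE dbo.CustomerAudit (
    AuditID    int IDENTITY(1,1) PRIMARY KEY,
    CustomerID int      NOT NULL,
    ChangedBy  sysname  NOT NULL DEFAULT SUSER_SNAME(),
    ChangedAt  datetime NOT NULL DEFAULT GETDATE(),
    Operation  char(1)  NOT NULL   -- 'U' = update, 'D' = delete
)
GO
CREATE TRIGGER trg_Customer_Audit ON dbo.Customer
FOR UPDATE, DELETE
AS
BEGIN
    -- "deleted" holds the pre-change rows for both UPDATE and DELETE
    INSERT INTO dbo.CustomerAudit (CustomerID, Operation)
    SELECT d.CustomerID,
           CASE WHEN EXISTS (SELECT 1 FROM inserted) THEN 'U' ELSE 'D' END
    FROM deleted d
END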
Oct 6, 2004
Hi ,
I need to archive my production database to a new server.
Is it possible to move data using INSERT INTO ServerName.DBName.dbo.TableName from the current database server?
Do I need to create a linked server to do this, or should I go for DTS?
Thanks
Cheriyan.
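Four-part names like that do require a linked server; a minimal sketch (server, database, and table names hypothetical):

-- One-time setup on the current server
EXEC sp_addlinkedserver @server = N'ARCHIVESRV', @srvproduct = N'SQL Server'

-- Push rows across using the four-part name
INSERT INTO ARCHIVESRV.ArchiveDB.dbo.OrdersArchive (OrderID, OrderDate, Amount)
SELECT OrderID, OrderDate, Amount
FROM dbo.Orders
WHERE OrderDate < '20040101'

For large volumes, DTS is usually the better choice, since its bulk-load path is much faster than a fully logged INSERT...SELECT across a linked server.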
Jan 11, 2007
What are the best archiving procedures? :)
SlayerS_`BoxeR` + [ReD]NaDa
Aug 21, 2015
I'm working on archiving data from some tables. I've duplicated the data structure, with the exception of not including the IDENTITY specifier on INT columns, so that the archive table will keep the value that was generated in the original table. This was all going well until I tried to copy the data over where a column is specified as the timestamp data type. I've looked this up and found a couple of things. First, the documentation for SQL 2000 says,
Timestamp is a data type that exposes automatically generated binary numbers, which are guaranteed to be unique within a database. Timestamp is used typically as a mechanism for version-stamping table rows. The storage size is 8 bytes.
And then the documentation for the soon-to-be-released SQL 2016 on the rowversion data type says,
The timestamp syntax is deprecated. This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature.
and
Is a data type that exposes automatically generated, unique binary numbers within a database. rowversion is generally used as a mechanism for version-stamping table rows. The storage size is 8 bytes. The rowversion data type is just an incrementing number and does not preserve a date or a time.
OK, I've read the descriptions, but I don't get it. Why have a timestamp/rowversion data type?
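Two points that may help, hedged: the purpose of the type is optimistic concurrency - the value changes automatically on every update, so an application can cheaply detect that a row was modified since it was read. And for the archiving problem, a rowversion/timestamp column cannot be inserted into directly, so the usual workaround is to declare the archive column as plain binary(8) and copy the value across:

-- Archive table mirrors the source, but the rowversion column becomes binary(8)
CREATE TABLE dbo.OrdersArchive (
    OrderID   int       NOT NULL,
    OrderDate datetime  NOT NULL,
    RowVer    binary(8) NOT NULL   -- plain binary preserves the original value
)

INSERT INTO dbo.OrdersArchive (OrderID, OrderDate, RowVer)
SELECT OrderID, OrderDate, CAST(RowVer AS binary(8))
FROM dbo.Orders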
May 24, 2002
How do I put data into a text or Excel file before I attempt a deletion from a large table? I know how to select the necessary data, but I'm not sure about the T-SQL required to put it into a file.
Are there any better methods of archiving?
thanks
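There is no pure T-SQL statement that writes a file; the usual tool is bcp from the command line (or called via xp_cmdshell), sketched here with hypothetical names:

REM Export the rows to a tab-delimited text file before deleting them
bcp "SELECT * FROM MyDB.dbo.BigTable WHERE CreatedDate < '20020101'" queryout C:\archive\BigTable_2001.txt -c -T -S MYSERVER

A better archiving method is often to INSERT...SELECT the rows into an archive table in the same or another database first, which keeps them queryable; a file export mainly makes sense when the data must leave SQL Server entirely.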
Aug 10, 2015
I wrote a script to archive and delete records from a table back in 2005 and 2009.
I can't seem to get the syntax right. Any sample script to simply archive and delete records?
This is what I have so far.
DECLARE @ArchiveDate Datetime
SET @ArchiveDate = (SELECT TOP 1 DATEPART(yyyy,Call_Date)
FROM tblCall
ORDER BY Call_Date)
--SELECT @ArchiveDate AS ArchiveDate
DECLARE @Active bit
[Code] ....
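For what it's worth, a sketch of the archive-then-delete pattern using DELETE ... OUTPUT, which moves the rows in one atomic statement (the archive table is assumed to mirror tblCall, and the year comparison is one guess at the original intent):

DECLARE @ArchiveDate datetime

-- Earliest call date in the table (the original DATEPART returned just the
-- year as an int, which does not assign cleanly to a datetime variable)
SELECT @ArchiveDate = MIN(Call_Date) FROM dbo.tblCall

-- Move everything from that first year into the archive and delete it
DELETE FROM dbo.tblCall
OUTPUT DELETED.* INTO dbo.tblCallArchive
WHERE DATEPART(yyyy, Call_Date) = DATEPART(yyyy, @ArchiveDate)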
Feb 29, 2000
Is there a way, for example, to script a DTS package so that it can be deleted and recreated at a later date if necessary? I have quite a lot of these, but few are used regularly, and the msdb database is now up to 80 MB. However, I don't want to delete them and have no way to recreate them. If I took a backup of msdb and then deleted the packages, would restoring msdb at a later date restore the packages?
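On the last question: DTS packages are stored in the msdb table sysdtspackages, so backing up msdb and later restoring it does bring the packages back. The other common route is saving each package from the DTS designer as a structured storage file, which can be re-imported at any time.

-- The saved packages (every version) live here in msdb
SELECT name, createdate
FROM msdb.dbo.sysdtspackages
ORDER BY name, createdate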
Sep 22, 1999
Hello,
I have been placed in the position of administering our SQL server 6.5 (Microsoft). Being new to SQL and having some knowledge of databases (used to use Foxpro 2.6 for...DOS!) I am faced with an ever increasing table of incoming call information from our Ascend MAX RAS equipment. This table increases by 900,000 records a month. The previous administrator (no longer available) was using a Visual Foxpro 5 application to archive and remove the data older than 60 days. Unfortunately he left and took with him Visual Fox and all of his project files.
My question is this: Is there an easy way to archive then remove the data older than 60 days from the table? I would like to archive it to a tape drive. We need to maintain this archive for the purposes of searching back through customer calls for IP addresses on certain dates and times. We are an ISP, and occasionally need to give this information to law enforcement agencies. So we cannot just delete it.
Sorry this is so long...
Thanks,
Brian R
WESNet Systems, NOC
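The copy-then-delete half can be plain T-SQL on a schedule (table and column names below are guesses); the archive table can then be exported with bcp and that file written to tape:

DECLARE @cutoff datetime
SELECT @cutoff = DATEADD(day, -60, GETDATE())

-- Copy rows older than 60 days into the archive table, then remove them
INSERT INTO dbo.calls_archive
SELECT * FROM dbo.calls WHERE call_date < @cutoff

DELETE FROM dbo.calls WHERE call_date < @cutoff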
Jul 23, 2005
Hi techies,
I have set up transactional replication from my primary server to a secondary server on an Orders table. Thousands of records get inserted into Orders every hour and are replicated to the secondary server; it works fine, and reporting apps use the secondary server's Orders table for generating reports.
The problem: say I want to remove older records from the Orders table on the primary server without reflecting this change on the secondary server. Is there a way to prevent this operation/transaction from being propagated to the secondary server?
Note: I am moving the records to another table (Orders_Archive) and deleting the rows from the Orders table, and I need all the rows to remain present in the secondary server's table.
Please advise ASAP.
Regards,
Raj
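One hedged option: transactional replication lets an article be configured so that DELETE commands are simply not forwarded, by setting its delete delivery to NONE when the article is (re)created. Note this suppresses replication of every delete on the article, not just the purge.

-- When adding the article, tell replication not to propagate deletes
EXEC sp_addarticle
    @publication   = N'OrdersPub',   -- hypothetical publication name
    @article       = N'Orders',
    @source_object = N'Orders',
    @del_cmd       = N'NONE'         -- deletes on the publisher are not replicated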
Apr 22, 2007
I have been developing a genealogy application using a SQL Server 2000 database and ASP.NET 2.0. In this application a process, Ged.Parse, converts data from the GEDCOM standard format (a hierarchical file format that looks as if it was designed for 80-column cards) into my SQL Server database.
As we started to load reasonable quantities of data into the system we found that the on-line response became abysmal. This problem was fixed by defining a number of secondary indexes (response times dropped to under a second, from previously exceeding 2 minutes and often timing out). Unfortunately however the processing time of Ged.Parse then tripled, and it may now take up to an hour to process a GEDCOM. I believe that this is a byproduct of defining several indexes that are not needed by Ged.Parse itself, but which are of course maintained as Ged.Parse inserts new records into the database.
I am wondering what my best strategy is, apart from putting Ged.Parse into a background task and just letting it trickle away. (I will probably do this anyway). What I'd like to be able to do is to have Ged.Parse load records without creating the secondary indexes, and then create the indexes for the newly-added records as a penultimate step just before it makes them available for general use. Of course there is no way that you can do this: records in a table are either indexed or they are not.
Proposed change: recode Ged.Parse to load data into temporary tables, say NewPeople, NewFacts, etc., with these tables having only the indexes required by Ged.Parse. Then, as the last process in Ged.Parse, run a SQL procedure with code like:

Insert into People Select * From NewPeople
Delete from NewPeople
etc.
This is a reasonable amount of programming, so before I make this change could somebody tell me: will this be significantly faster overall, or is this likely to make little or no improvement compared to the present process in which Ged.Parse loads data directly into People, Facts, etc? Two facts that may influence the answer. First, all record relationships are through GUIDs, so records in NewPeople, NewFacts, etc would already have their final key values. Second: although Ged.Parse needs to form relationships between records, these relationships are only within the new records (created from the same GEDCOM), and Ged.Parse does not need to relate any of these new records to earlier records.
Thank you,
Robert Barnes.
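One alternative that avoids the staging-table rewrite, sketched with hypothetical index names: drop the secondary indexes before a big load and recreate them afterwards, which gets the "load without secondary indexes, index at the end" effect on the real tables (on SQL Server 2000 this means DROP INDEX / CREATE INDEX; 2005 added ALTER INDEX ... DISABLE / REBUILD for the same purpose).

-- Before the load: drop the secondary indexes Ged.Parse does not need
DROP INDEX People.IX_People_Surname

-- ... Ged.Parse inserts its records here ...

-- After the load: rebuild them in one pass over the table
CREATE INDEX IX_People_Surname ON People (Surname)

Whether the staging-table design would beat this depends mostly on index maintenance cost, which this approach also removes, with far less reprogramming; note though that while the indexes are dropped, on-line queries lose them too, which the staging design avoids.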
Dec 16, 2007
Hello there,
Don't know if this is the right forum to be asking this, but I'll give it a try...
I'm relatively new to SQL Server and T-SQL in general. The problem I'm trying to solve is the following:
The big picture is that I have data coming from different data sources which I need to store on a database for later reference. Each data source might have a different set of measurements. For example, data source 1 might log Pressure and Humidity while data source 2 logs Pressure and Temperature. Once the data is present on the DB, the users can go ahead and retrieve data for a given [datasource/measurement/time interval] to generate reports or charts.
My implementation so far consists of two tables: series_info and series_data. series_info holds general information for a given series of measurements for a given data source (Pressure for data source 1, Pressure for data source 2, Humidity for data source 1 and Temperature for data source 2, in our example). Each series has a bigint index as primary key.
The table series_data contains all data relative to the series from series_info. Each piece of data has a bigint as a primary key, an associated time (which is always increasing) and a foreign key to the series it represents (in series_info).
Alright, everything is cool so far. However, whenever a user wants to retrieve data for a given [data source/measurement/time interval], it takes very long, since all the data is interleaved in series_data and for every search it's necessary to find where the desired data actually lies.
One obvious solution for this would be to dynamically create a new table to hold the data for each series, but that would just make my database disorganized, since there would be thousands and thousands of tables.
Another thing that comes to my mind is to create a table with information of where lies the data for a given [data source / measurement] for given dates. So when the user requested data for a given [data source/measurement] between, say, january and february, we would first look at this intermediate table and find out that the data lies between indexes 1000 and 2000 on the series_data table, so the next SELECT command to series_data would already contain a restriction like WHERE index>=1000 and index<=2000. This should probably improve the speed of retrieval.
What do you guys (or girls) think? Maybe there's simply a classical solution for such a case.
Thanks in advance!
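The classical fix here is a composite index - ideally the clustered index - on series_data over (series key, time), so that each [series, interval] request becomes a single contiguous range seek and the "where does the data lie" bookkeeping happens automatically. A sketch with guessed column names (if the bigint primary key is currently clustered, it would need to be recreated as nonclustered to free the clustered slot):

-- Cluster the data by series, then time: rows of one series sit together on disk
CREATE CLUSTERED INDEX IX_series_data_series_time
    ON dbo.series_data (series_id, sample_time)

-- A typical retrieval now seeks straight to the block it needs
SELECT sample_time, sample_value
FROM dbo.series_data
WHERE series_id = 42
  AND sample_time >= '20070101' AND sample_time < '20070201'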
Jul 28, 2015
I'm looking for clarity on partition switching. The idea is to use many BULK INSERT statements into tables dbo.X_n in parallel, and when the BULK INSERT for table dbo.X_n is completed, switch dbo.X_n into dbo.bigdaddy. I think this is the fastest way to upload a couple hundred GB of data.
In learning about partition switching (in part) from The Data Loading Performance Guide under Partition SWITCH, I hear the instructions say to copy the main table exactly to become a target. But in that same step (#1), I read that we need to change the filegroup of the target (dbo.X_n) from the default filegroup. Then it says I need to match indexes, and it lists the filegroup as something we need to match with the main table.
As an overview of the partition-switching strategy, I think the whole point of BULK INSERT with partitioning is to have separate files (in the same group) to enable concurrent uploading, where each table has its own file. Once the upload to a table (dbo.X_n) is completed, we do the partition switch into the main table (dbo.bigdaddy). The data we just uploaded doesn't actually move, just the metadata for it.
When I read the instructions linked above, I hear “Don’t have the same filegroup on your target as the main table. You must have the same filegroup on your target as the main table.”
Where am I disconnected?
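One possible resolution, hedged: ALTER TABLE ... SWITCH itself requires the source and target to sit on the same filegroup, so the guide's filegroup step is about placing each staging table on the filegroup of the partition it will switch into - not on a different one. A minimal sketch with hypothetical names:

-- Staging table: identical structure and indexes, on the target partition's filegroup
CREATE TABLE dbo.X_1 (
    ID  bigint       NOT NULL,
    Val varchar(100) NOT NULL
) ON [FG1]   -- must match the filegroup of bigdaddy's partition 1

BULK INSERT dbo.X_1 FROM 'C:\load\chunk1.dat' WITH (TABLOCK)

-- Metadata-only move; fails unless structure, indexes, and filegroup all match
ALTER TABLE dbo.X_1 SWITCH TO dbo.bigdaddy PARTITION 1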
Aug 27, 2015
I am writing a query where I am identifying different scenarios where data changes between one week and the next. I've set up my result set in the following manner:
PrimaryID SKUChange DateChange LocationIdChange StateChange
10003 TRUE FALSE TRUE FALSE
etc...
The output I'd like to see would be like this:
PrimaryID Field Changed Previous Value New Value
10003 SKUName SKU12345 SKU56789
10003 LocationId Den123 NYC987
etc...
The key here being that in the initial resultset ID 10003 is represented by one row but indicates two changes, and in the final output those two changes are being represented by two distinct rows. Obviously, I will bring in the previous and new values from a source.
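One way to pivot the flag columns into one row per change is CROSS APPLY with a VALUES list, filtering to the flags that fired; the previous/new columns here are hypothetical stand-ins for whatever the source supplies:

SELECT r.PrimaryID, c.FieldChanged, c.PreviousValue, c.NewValue
FROM dbo.WeeklyCompare AS r
CROSS APPLY (VALUES
    ('SKUName',    r.SKUChange,        CAST(r.PrevSKU        AS varchar(50)), CAST(r.NewSKU        AS varchar(50))),
    ('LocationId', r.LocationIdChange, CAST(r.PrevLocationId AS varchar(50)), CAST(r.NewLocationId AS varchar(50))),
    ('State',      r.StateChange,      CAST(r.PrevState      AS varchar(50)), CAST(r.NewState      AS varchar(50)))
) AS c (FieldChanged, Changed, PreviousValue, NewValue)
WHERE c.Changed = 'TRUE'   -- use c.Changed = 1 if the flags are bit columns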
Jul 28, 2015
I’m looking for clearity on partition switching. The idea is to use many BULK INSERT statements into table dbo.X_n in parallel and when BULK INSERT for table dbo.X_n is completed, switch dbo.X_n into dbo.bigdaddy. I think this is the fastest way to upload a couple hundred GB of data.
In learning about partition switching (in part) from The Data Loading Performance Guide under Partition SWITCH, I hear the instructions to say copy the main table exactly to become a target. But in that same step (#1), I read that we need to change the default file group of the target (dbo.X_n) from the default file group. Then it says I need to match indexes and lists the filegroup as something we need to match with the main table.
As an overview of the partition switching strategy, I think the whole point of BULK INSERT with partitioning is to have seperate files (in same group) to enable concurrent uploading where each table has its own file. Once the upload is completed to a table (dbo.X_n) then we do the partition switch into the main table (dbo.bigdaddy). The data we just uploaded doesn’t actually move, just the metadata for it.
“Don’t have the same filegroup on your target as the main table. You must have the same filegroup on your target as the main table.”
View 1 Replies
View Related
Mar 29, 2007
I created an SSIS package and several data flow components for this package.
What strategy exists to deploy the SSIS package and its data flow components onto an enterprise server?
Thanks in advance.
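Two common routes, hedged: build a deployment manifest (set CreateDeploymentUtility = True in the SSIS project properties, then run the generated .SSISDeploymentManifest on the server), or copy packages with the dtutil command-line tool:

REM Copy a package from a .dtsx file into the msdb store on the target server
dtutil /FILE "C:\packages\MyPackage.dtsx" /DestServer PRODSRV /COPY SQL;MyPackage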
Jul 17, 2004
How do I implement optimal archival and purging in MS SQL Server databases?
Sep 23, 2005
hi !
I have a job that creates transaction log backups every 15 minutes and makes a full database backup at midnight.
I want to purge the old logs after the full backup. How can I do that?
thanks
Sami
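One option, heavily hedged since the procedure is undocumented: maintenance plans delete old backup files through xp_delete_file, which can be called directly as a job step after the midnight full (verify the parameter order on a test box; sp_delete_backuphistory, by contrast, cleans only the msdb history, not the files):

DECLARE @cutoff datetime
SET @cutoff = DATEADD(day, -1, GETDATE())

-- 0 = backup files; then folder, extension, cutoff date, 1 = include subfolders
EXEC master.dbo.xp_delete_file 0, N'D:\Backups\Logs\', N'trn', @cutoff, 1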
Jun 20, 2007
Hi DB Experts
I have a question on shrinking tempdb. Currently I am using MS SQL Server 2005. I have a piece of software that uses a SQL database, but I created a separate database to store my data; tempdb is not used by me directly at all.
Problem: whenever I export data into the database I created on the MS SQL server, tempdb also grows. The files grew so large that they crashed the server; I have 50 GB of free space on the drive where MS SQL Server is installed.
Question: is there any way to shrink tempdb or cap the growth of its size?
Best regards
TEWCT
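A sketch of both levers (the logical file name tempdev is the default; adjust to yours). Note that tempdb is rebuilt at its configured initial size on every service restart, and that sorts, hash joins, and the version store spill into it even when you never reference it directly - which is why exports can grow it:

USE tempdb
-- Shrink the data file to roughly 1 GB
DBCC SHRINKFILE (tempdev, 1024)

-- Cap future growth at 10 GB so it cannot fill the drive
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, MAXSIZE = 10240MB)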
Aug 7, 2007
Hi,
In one of our DR servers we have configured custom log shipping. In the folder where the .TRN files get copied there is a script to purge files older than one day. Following is the code for it.
@Echo Off
if exist filelist del filelist
date /t > rundate
for /f "tokens=2* delims= " %%i in (rundate) do set rundate=%%i
for %%i IN (*.trn) do echo >> filelist %%i %%~ti
for /f "tokens=1,2* delims= " %%i in (filelist) do if not %rundate% equ %%j del %%i
:pause
:Exit
Instead of removing the files older than 1 day, I need to keep 3 days of transaction logs.
Being a novice, I don't have much idea how to accomplish it. Can anybody help me with this?
Many thanks in advance,
Sandhya
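One simplification, assuming the files sit in the script's folder: rather than extending the date parsing, the built-in forfiles utility (Windows Server 2003 and later) selects files by age directly, so the whole script can shrink to one line; /d -3 matches files last modified more than 3 days ago.

@Echo Off
REM Delete .trn files older than 3 days in the current folder
forfiles /p . /m *.trn /d -3 /c "cmd /c del @file"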
Mar 29, 2006
My transmission queue has lots of messages that will never, ever be delivered because the transmission_status = "The session keys for this conversation could not be created or accessed. The database master key is required for this operation."
How can I purge the transmission queue to get rid of this junk?
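Rows in sys.transmission_queue cannot be deleted directly; the standard cleanup is to end each stuck conversation WITH CLEANUP, which discards its queued messages without trying to notify the far end. A sketch:

DECLARE @handle uniqueidentifier

DECLARE cur CURSOR FOR
    SELECT DISTINCT conversation_handle FROM sys.transmission_queue
OPEN cur
FETCH NEXT FROM cur INTO @handle
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Forcibly drops the conversation and everything it has queued
    END CONVERSATION @handle WITH CLEANUP
    FETCH NEXT FROM cur INTO @handle
END
CLOSE cur
DEALLOCATE cur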
Jul 1, 2002
When connecting to a SQL Server (v7 in this case), the log file shows that old, deleted databases are still being opened as part of the process. Further, these DB names are now reserved and cannot be reused, even though I've deleted them.
Is there any way I can purge all traces of these deleted db names?
You might have guessed, I'm a newbie, which is why there are a few dozen deleted DBs.
Thanks for any help
Max
atomax@gmx.net
May 31, 2006
Hi,
I'm using SQL 2000 with SP4 (build 2187). I have created a DB maintenance plan with the wizard, where purging older than 1 day is set.
However, this feature seems not to be working, even though I tried two things: first, deleting the scheduled job and recreating it - not successful; second, deleting the maintenance plan and recreating it - still not successful.
Is this a bug, or am I missing something here?
Jul 19, 2007
Hi,
We have a Java application that runs against a variety of backends including Oracle and MS SQL 2000 and 2005. Our application has a handful of active tables that are constantly being inserted into, some of which are also selected from on a regular basis.
We have a purge job to remove unneeded records: every hour it runs "DELETE FROM <table> WHERE <datefield> < <sometimestamp>". We purge hourly because we need 100% uptime. For some tables the timestamp is 2 weeks ago, for others 2 hours ago. The date field is indexed in some cases, in others it is not. The DELETE is always done off a transaction (autoCommit on), but experimentation has shown doing it on one doesn't help much.
This task normally functions fine, but every once in a while the inserts or counts on the table fail with deadlocks during the purge job. I'm looking for thoughts as to what we could do differently, or other experience doing this type of thing. Some possibilities include:
- doing a select first, then deleting one by one. This is a possibility, but it's SLOW and may take over an hour, so we'd be constantly churning, deleting single records from the db.
- freezing access to these tables during the purge job... our app cannot really afford to do this, but perhaps this is the only option.
- doing an update of an "OBSOLETE" flag on the record, then deleting by that flag... I'm not sure we'd avoid issues doing this, but it's an option.
The failures happen very infrequently on SQL 2005 and much more frequently on SQL 2000. Any help or guidance would be most appreciated, thanks!
Bob
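A middle ground between the listed options, sketched with hypothetical names: delete in small capped batches, so each statement holds few enough locks to avoid escalating to a table lock and concurrent inserts get frequent gaps to run in. Indexing the date field also matters - an unindexed purge scans the whole table, which is a classic deadlock source.

-- SQL 2005 syntax; on SQL 2000, wrap a plain DELETE in SET ROWCOUNT 1000 instead
DECLARE @Cutoff datetime
SET @Cutoff = DATEADD(hour, -2, GETDATE())   -- per-table retention

WHILE 1 = 1
BEGIN
    DELETE TOP (1000) FROM dbo.ActiveTable WHERE DateField < @Cutoff
    IF @@ROWCOUNT = 0 BREAK
END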
Oct 9, 2007
hi All,
I have to remove non-useful and duplicate records containing NULLs, blanks and extra spaces (on either the left or right side of the column values) from all the tables in my server's database XYZ, weekly, through a scheduled job calling a stored proc - that is what I guess is called purging of the DB. Please help: how can I do it with T-SQL?
Also, I have to find and remove all the duplicate DB objects (tables) from the DB, e.g. a table named TABLE_TEST or TABLE_DEBUG existing alongside an original table TABLE, making sure none of the base tables is dropped.
Please help me regarding these two problems.
Thanks in advance for the quick replies.
Mohit
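A sketch of both pieces under assumed names; for safety, the duplicate-object half only lists candidates for review rather than dropping anything:

-- Trim stray leading/trailing spaces in one column; repeat per column and table
UPDATE dbo.Customers
SET CustName = LTRIM(RTRIM(CustName))
WHERE CustName <> LTRIM(RTRIM(CustName))

-- List suspected leftover copies of base tables (SQL 2000 catalog)
SELECT name
FROM dbo.sysobjects
WHERE xtype = 'U'
  AND (name LIKE '%[_]TEST' OR name LIKE '%[_]DEBUG')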
Jun 29, 2006
Hi,
We need an SSIS package that would perform routine purging of the growing data in some of the tables used to support Notification Services. While running SQL in a regular job would seem to suffice, an SSIS package would be more in line with the other processes for the alerts in terms of manageability. The SSIS package should follow these guidelines: delete records from a given number of tables, based on a specified date column and a specified number of days for each table, or other conditions:
SystemAlertQueue: 30 days old, based on the SubmitTimestamp column.
SystemAlertChron: 30 days old, based on the EventTimestamp column.
SystemAlertNotifChron: 30 days old, based on the NotifTimestamp column.
WMICheck and related tables (based on WMICheckID; see the WMIAlerts database diagram in SQL Server): 15 days old, based on the BeginCheckDate column.
Each table's deletion routine should be distinguishable in the package. Does anybody know how to do this? Please help me.
Regards,
Deepu M.I
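A skeleton for the package: one Execute SQL Task per retention rule keeps the routines distinguishable, each running a dated delete built from the guidelines above, for example:

DELETE FROM dbo.SystemAlertQueue
WHERE SubmitTimestamp < DATEADD(day, -30, GETDATE())

DELETE FROM dbo.SystemAlertChron
WHERE EventTimestamp < DATEADD(day, -30, GETDATE())

-- ...and likewise SystemAlertNotifChron (NotifTimestamp, 30 days)
-- and the WMICheck tables (BeginCheckDate, 15 days)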
Feb 17, 2007
Hi All,
I'm building an archiving application for my small organization, but I have many file types, such as videos, photos, texts, PDFs... and some huge file sizes of about 500 MB.
My questions are:
1. A friend told me that there is a hardware compression device which can compress already-compressed videos and JPEGs. Is that correct? If so, where can I look to find such devices?
2. Which datatype do I have to use with SQL Server 2000 to handle all the different file types and sizes?
Thanks very much
Haytham
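On question 2: in SQL Server 2000 the type for arbitrary binary content is image, which holds up to 2 GB per value (text/ntext are the character-data counterparts). An illustrative schema, names hypothetical:

CREATE TABLE dbo.ArchiveFiles (
    FileID   int IDENTITY(1,1) PRIMARY KEY,
    FileName nvarchar(255) NOT NULL,
    FileType varchar(20)   NOT NULL,   -- 'video', 'photo', 'pdf', ...
    Content  image         NOT NULL    -- up to 2 GB per value in SQL 2000
)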
Feb 14, 2002
Hi,
We have a table greater than 200 GB in size, so all the SQL against it is very slow.
Is there any way of doing table partitioning? If so, how?
Is there any better way of archiving a portion of the table, which can be restored later when required? If so, how?
Thanks
John Jayaseelan
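This post predates native table partitioning (SQL 2005); on SQL 2000 the standard technique is a partitioned view - per-range member tables with CHECK constraints, unioned behind a view so the optimizer touches only the relevant member. A sketch:

-- Member tables, one per year, each constrained to its range
CREATE TABLE dbo.Orders2001 (
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL CHECK (OrderDate >= '20010101' AND OrderDate < '20020101'),
    CONSTRAINT PK_Orders2001 PRIMARY KEY (OrderID, OrderDate)
)
-- ...dbo.Orders2002 defined the same way for its own range...

-- The view behaves like one big table; queries hit only matching members
CREATE VIEW dbo.OrdersAll AS
SELECT * FROM dbo.Orders2001
UNION ALL
SELECT * FROM dbo.Orders2002

Archiving then becomes cheap: drop an old member out of the view and move that one table to an archive database, adding it back to the view later if it is ever needed.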
Jan 22, 2008
Hi,
Actually, I have written a program (VB.NET) to archive the database.
Due to some policy issues, I am not allowed to put the EXE file on the database server.
So, I can only use SQL Agent.
My questions are:
(1) Since I have already prepared the program (VB.NET), how can I run it from SQL Agent?
(2) I found some articles saying SSIS can do this. How do I do it?
I hope you all have a clear guide for me.
Thanks anyway.
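For question (1), one sketch: a SQL Agent job step of type CmdExec can launch the executable from a network share, so the EXE file itself never sits on the database server (though it still runs there - whether that satisfies the policy is a judgment call; path and names hypothetical):

-- Job step that runs the VB.NET archiver from a share
EXEC msdb.dbo.sp_add_jobstep
    @job_name  = N'Archive Job',
    @step_name = N'Run archiver',
    @subsystem = N'CmdExec',
    @command   = N'\\fileserver\tools\Archiver.exe'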