Under 6.5, Best Way To Archive Data Out Of Large Table?
Oct 31, 1999
Hello:
The purchased application (MSSQL 6.5, SP4) that I am working on has one large table with 13 million rows. It is by far the largest table, considering the next-largest table is only 1.75 million rows.
The vendor has recommended a change to this largest table: changing a data type from char to varchar. To make this change easier to do,
I want to "archive" older data not necessary for the current year or current processing to another table.
What is the best way to do this archiving?
Any information you can provide will be greatly appreciated. Thanks.
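One common pattern is to copy the old rows into an archive table and then delete them from the source in small batches so the log and locking stay manageable. A minimal sketch only; the table name OrderHistory, the date column OrderDate, and the cutoff date are assumptions, and on 6.5 the batching is done with SET ROWCOUNT since DELETE TOP does not exist there:

-- copy the older rows into a pre-created archive table with the same structure
INSERT INTO OrderHistoryArchive
SELECT * FROM OrderHistory WHERE OrderDate < '19990101'

-- delete the archived rows in batches of 1000
SET ROWCOUNT 1000
WHILE 1 = 1
BEGIN
    DELETE FROM OrderHistory WHERE OrderDate < '19990101'
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0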
I have two tables, say A and Archive. After a certain period of time some records are to be sent to the Archive table. To copy records to the Archive table I am using SqlBulkCopy operations. Now I have to delete the records from table A. I was thinking of sending a comma-separated list of the IDs of the rows to be deleted to a stored procedure. Are there any better techniques to move data to the archive table and to remove data from the main table? Thanks.
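If you are on SQL Server 2005 or later, one alternative is to let a single DELETE both remove the rows and feed the archive table, so there is no separate bulk copy and no ID list to pass around. A sketch, assuming Archive has the same columns as A and that a column called ArchiveDate drives the selection (both names are assumptions):

DELETE FROM A
OUTPUT DELETED.* INTO Archive
WHERE ArchiveDate < DATEADD(MONTH, -6, GETDATE());

Because the copy and the delete happen in one statement, a row can never be deleted without also landing in Archive.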
I am studying indexes and keys. I have a table where a fixed-width string of data is loaded into the first column and is later parsed in a view based on the data types within the fixed-width specification.
Example: column A contains the fixed-width string (name, phone, cost of house, zipcode, county, state, country), which a view will later split; column B is the source filename of the data load (varchar(256)) ....
a. Would there be a benefit to adding a clustered or nonclustered index? (If so, which, and point me in the direction of why.)
b. Is there a benefit to making one of these two columns a primary key (millions of records), or to adding a third new column as a PK?
c. View: this parses the data in column A so it ends up looking more like name, phone, cost of house, zipcode, county, state, country, each having its own column.
- Any pros/cons of adding indexes (if so, which) to the view instead of the table, or to both, once the data is parsed?
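For what it's worth, a common starting point (a sketch only; the table name RawLoad and the column name SourceFileName are made up) is to add a narrow identity column as the clustered primary key, since the wide varchar in column A makes a poor clustering key, and a nonclustered index on the filename column if you filter by it:

ALTER TABLE dbo.RawLoad ADD LoadID BIGINT IDENTITY(1,1) NOT NULL;
ALTER TABLE dbo.RawLoad ADD CONSTRAINT PK_RawLoad PRIMARY KEY CLUSTERED (LoadID);
CREATE NONCLUSTERED INDEX IX_RawLoad_SourceFileName ON dbo.RawLoad (SourceFileName);

Indexing the view itself would mean turning it into an indexed (schema-bound) view, which comes with its own restrictions, so indexing the base table is usually the first step.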
I need to delete data from a particular table which has more than half a million records. The data that needs to be deleted is more than 200,000 records. What is the best way to delete the data from the table, other than importing into a temporary table and performing the same operation?
Let me know if the strategy to be followed is okay.
1. Drop all the triggers.
2. Drop all the indexes.
3. Write a procedure with a loop setting ROWCOUNT to 1000 and delete the records (since if I try to delete all the rows at once it will give a timeout error). The procedure will delete 1000 records per batch inside the loop until it wipes out all the data for the specified condition.
4. Recreate indexes and triggers.
Please let me know if there are any other optimal solutions.
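On SQL Server 2005 and later you can also batch the delete without SET ROWCOUNT and often without dropping the indexes at all, using DELETE TOP in a loop. A sketch; the table name and the CreatedDate filter are placeholders for your actual condition:

WHILE 1 = 1
BEGIN
    DELETE TOP (1000) FROM dbo.MyTable
    WHERE CreatedDate < '20060101';

    IF @@ROWCOUNT = 0 BREAK;
END

Each iteration commits its own small transaction, so the log stays short and other sessions get a chance to work between batches.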
Hello, I'm very new to SQL Server etc., so please bear with me.
I am currently trying to find a quick way of making a large number of database entries within a particular table all have the same value in one particular column.
I have created a query in MS SQL Server Management Studio, which outputs all the entries in a particular table that I want to change. Currently they all have different values in a certain column in this table, and I want them all to have the same value in that column.
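If the query that finds those rows is a SELECT with a WHERE clause, the same WHERE clause can drive an UPDATE that sets the column for all of them in one statement. A minimal sketch; the table, column, and filter names are made up for illustration:

UPDATE dbo.Orders
SET StatusCode = 'CLOSED'      -- the single value you want every row to have
WHERE Region = 'North';        -- the same filter your SELECT query uses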
A project assigned to me is called ''Archive off old data''. As a SQL DBA, the best way to archive a large table is data partitioning. But this table is not big enough in terms of size to justify partitioning, plus the table has 32 dependencies across other tables, views, and stored procedures. The data from that table needs to be archived off every week to a different database on a different server. My questions are:
1) What is the best way to archive the data? 2) What steps do I need to follow? Do I need to remove the dependencies first and then take out the data?
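One straightforward approach, assuming a linked server to the archive server has been set up (the names ARCHIVESRV, ArchiveDB, dbo.BigTable, and CreatedDate below are placeholders), is a weekly job that copies the old rows across and then deletes them locally. The dependent views and procedures do not normally need to be touched, since they reference the table, not the rows being removed:

INSERT INTO ARCHIVESRV.ArchiveDB.dbo.BigTable
SELECT * FROM dbo.BigTable
WHERE CreatedDate < DATEADD(WEEK, -1, GETDATE());

DELETE FROM dbo.BigTable
WHERE CreatedDate < DATEADD(WEEK, -1, GETDATE());

Wrapping both statements in a single distributed transaction is possible but requires MSDTC to be configured between the servers.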
I have a table called Customers that stores information about each particular customer. I would like to have a table called Archive so that when I delete a customer that has not been active for a specific time, the deleted information will automatically be inserted into the Archive table. Do I need to create the Archive table with exactly the same number of columns as the Customers table?
I need some basic idea about how this should be implemented.
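The Archive table does not have to mirror every column of Customers; it only needs the columns you want to keep, plus perhaps a column recording when the row was archived. One way to make the copy automatic is an AFTER DELETE trigger on Customers. A sketch, with made-up column names:

CREATE TRIGGER trg_Customers_Archive
ON dbo.Customers
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.Archive (CustomerID, CustomerName, LastActiveDate, ArchivedOn)
    SELECT CustomerID, CustomerName, LastActiveDate, GETDATE()
    FROM deleted;   -- "deleted" holds the rows just removed from Customers
END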
We are running SQL Server 2005 express on Windows 2003. The database server gets significant amounts of data.
Because of the 4GB data limit we have a daily cron task which goes through and deletes data older than 90 days.
We would like a way to archive this data instead of deleting it. Is there any way to take the data, compress it, and store it in a different way, so that if needed, customers can query directly from the compressed data? Clearly, querying the compressed data would be slower, but that is OK.
Any other solutions that would allow us to archive data instead of deleting it? Thanks.
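One option worth checking: the 4GB limit in SQL Server 2005 Express applies per database rather than per instance (verify for your edition), so the old rows can be moved into a separate archive database on the same instance instead of being deleted, and customers can still query it with ordinary cross-database queries. A sketch with placeholder names (ArchiveDB, dbo.Events, EventDate):

INSERT INTO ArchiveDB.dbo.Events
SELECT * FROM dbo.Events
WHERE EventDate < DATEADD(DAY, -90, GETDATE());

DELETE FROM dbo.Events
WHERE EventDate < DATEADD(DAY, -90, GETDATE());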
We have transactional replication setup to replicate data from production across to a reporting server.
We want to ARCHIVE production, but don't want the ARCHIVE duplicated on the reporting server.
Does anyone know of a way that the reporting server can be stopped from replicating these changes, and continue to hold the FULL history of the database?
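One replication feature that may help: when an article is added to a transactional publication, the command used to propagate deletes can be set to NONE, so deletes on the publisher are never applied at the subscriber while inserts and updates continue to flow, leaving the reporting server holding the full history. A sketch only; the publication and article names are placeholders, and existing articles would need to be reconfigured or re-added with this setting:

EXEC sp_addarticle
    @publication   = N'ProductionPub',
    @article       = N'Orders',
    @source_object = N'Orders',
    @del_cmd       = N'NONE';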
I have a table containing a huge number of rows of data, and a performance issue has been raised. I am thinking of archiving some data so that the table will not be that big. The most convenient way is to move it to another table. The problem is: will this solve my performance problem, or do I need to move it to another database to reduce the database size? Regards, TrueNo
I currently have one table that lists all projects and tasks within the organisation. One of the table fields is the task status, open or closed. I would like a process by which the completed tasks are placed into another (archive) table and the same records removed from the original table, which would then contain only the incomplete tasks. This process could be run at given times during the day, or at the point when the status of a task changes from open to closed; either way, each time the process runs it would need to append the removed rows to the archive table. Anyone have any ideas on the best way to do this?
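A scheduled job (or the same logic in a stored procedure fired when a status changes) can do the move in one transaction. A sketch, assuming the tasks table is dbo.Tasks, the archive table dbo.TasksArchive has the same columns, and the status column is called Status:

BEGIN TRAN;

INSERT INTO dbo.TasksArchive
SELECT * FROM dbo.Tasks
WHERE Status = 'Closed';

DELETE FROM dbo.Tasks
WHERE Status = 'Closed';

COMMIT;

If tasks can be closed while the job is running, it is safer to capture the set of task IDs first and base both the insert and the delete on that captured set.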
In a project with SQL Server 2005, the customer is interested in saving (archiving) the content of a table frequently. They want the ability to restore the table content as it was before. I have suggested exporting the table every day to a text file and saving the file in Visual SourceSafe. If they need it, there will be a way to import the text file later. Now I wonder, is there any other way to archive the table content in SQL Server 2005?
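If the daily-export route is taken, bcp can dump the table to a flat file from the command line, and that file can then be checked into SourceSafe as described (the database, table, path, and server names below are placeholders); regular database backups are of course another way to get back to an earlier table state:

bcp MyDatabase.dbo.MyTable out "C:\Archive\MyTable.txt" -c -T -S MYSERVER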
We have a large table which is very old and which not many people take care of; recently there has been a performance problem with a report that needs to query this table. Eventually we found that this table has no primary key, and there is duplicate data, which makes "ALTER TABLE ... ADD PRIMARY KEY" not work.
Besides, the size of this table means it would take an unacceptable amount of time to execute something like "INSERT INTO new_table_with_pk SELECT DISTINCT * FROM old_table".
Do you have any recommendation for fixing this? As the application runs on Oracle, Sybase, and SQL Server, will a cross-database approach work?
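If the duplicates can be identified by a set of key columns, SQL Server 2005 and later can remove them in place without copying the whole table, using ROW_NUMBER over the would-be key. A sketch; KeyCol1/KeyCol2 and the table name are placeholders, and this particular syntax is SQL Server specific, so Oracle and Sybase would each need their own variant before the primary key is added on all three platforms:

WITH dupes AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY KeyCol1, KeyCol2 ORDER BY KeyCol1) AS rn
    FROM dbo.OldTable
)
DELETE FROM dupes
WHERE rn > 1;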
I need to update a large table, about 55 million rows, without filling the transaction log, in the shortest time possible. The goal is to alter the table and change the data type of the Text column from VARCHAR(7900) to NVARCHAR(MAX).
Since I cannot do it with an ALTER TABLE statement (it would fill up the transaction log) I'm thinking to:
- rename column Text to Text_OLD
- add a Text column of type NVARCHAR(MAX)
- copy values in batches from Text_OLD to Text
The table is defined like:
create table DATATEXT (
    rID INTEGER NOT NULL,
    sID INTEGER NOT NULL,
    pID INTEGER NOT NULL,
    cID INTEGER NOT NULL,
    err TINYINT NOT NULL,
[Code] ....
I've thought about a stored procedure doing this, but I'm not sure how to copy the values in batches from Text_OLD to Text.
The code I would start with (doing just this part) is the following, but maybe there are more efficient ways to do it, or at least a better way to select @startSeq in the WHILE loop (avoiding selecting a batch of 100,000 sequences and then taking the max).
declare @startSeq timestamp
declare @lastSeq timestamp

select @lastSeq  = MAX(sequence) from [DATATEXT] where [Text] is null
select @startSeq = MIN(sequence) from [DATATEXT] where [Text] is null

BEGIN TRANSACTION T1
WHILE @startSeq < @lastSeq
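For the copy itself, a simpler batching pattern that avoids tracking sequence ranges at all is an UPDATE TOP loop that runs until there is nothing left to copy. A sketch; it assumes the database is in SIMPLE or BULK_LOGGED recovery so the log can be reused between batches, and that each batch commits on its own rather than inside one long transaction:

WHILE 1 = 1
BEGIN
    UPDATE TOP (100000) dbo.DATATEXT
    SET [Text] = Text_OLD
    WHERE [Text] IS NULL
      AND Text_OLD IS NOT NULL;

    IF @@ROWCOUNT = 0 BREAK;

    CHECKPOINT;   -- in SIMPLE recovery this lets the log space be reused between batches
END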
I'm trying to move my current use of an sql 2000 db to sql 2005.
I need to update a table definition (to change a field to an Identity)
I'm getting a dialog box (in SQL server management studio) on save saying :
'xxxx' table
- Saving Definition Changes to tables with large amounts of data could take a considerable amount of time. While changes are being saved, table data will not be accessible.
I press 'Yes' to the dialog box.
After 35 seconds, I get another dialog box saying:
'xxxx' table
- Unable to modify table.
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Well, the server is responding: I can query that table and others, add/delete rows, and modify other (smaller) tables.
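The designer is doing a full table rebuild behind that dialog, and it is the designer's own transaction time-out that expires (there is a designer time-out setting under Tools > Options > Designers that may be worth checking). The same change can also be scripted in T-SQL, where that timeout does not apply; a rough sketch of what the designer effectively generates (MyTable and its columns are placeholders):

CREATE TABLE dbo.MyTable_New
(
    ID   INT IDENTITY(1,1) NOT NULL,
    Col1 VARCHAR(50) NULL
);

SET IDENTITY_INSERT dbo.MyTable_New ON;
INSERT INTO dbo.MyTable_New (ID, Col1)
SELECT ID, Col1 FROM dbo.MyTable;
SET IDENTITY_INSERT dbo.MyTable_New OFF;

DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.MyTable_New', 'MyTable';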
I have a combo box where users select the customer name and can either go to the customer's info or open a list of the customer's orders. The RowSource for the combo box was a simple pass-through query:

SELECT DISTINCT [Customer ID], [Company Name], [contact name], City, Region FROM Customers ORDER BY Customers.[Company Name];

This was working fine until a couple of weeks ago. Now whenever someone has the form open, this statement locks the entire Customers table. I thought a pass-through query was read-only, so how does this do a table lock?

I changed the code to an unbound rowsource that asks for input of the first few characters first, then uses this SQL statement as the rowsource:

SELECT [Customer ID], [Company Name], [contact name], City, Region FROM dbo_Customers WHERE [Company Name] like '" & txtInput & "*' ORDER BY [Company Name];

This helps, but if someone types only one letter, it could still be pulling a few thousand records and cause a table lock. What is the best way to populate a large combo box? I have too much data for the ADODB recordset to use the .AddItem method. I was trying to figure out how to use an ADODB connection, so that I can make it read-only to eliminate the locking, but I'm striking out on my own. Any ideas would be appreciated.

Roy
(Using Access 2003 MDB with SQL Server 2000 back end)
I am using SQL 2012 SE. I have two databases, say A and B, with the same structure and relationships. There are 65 tables in each database. A is already replicating data to database C for 35 tables. Now I need to move data from A to B that is newer than getdate()-1, every day, for all the tables, and once the move is done I need to delete this data from A. Then the same thing the next day, and every day after. Since this is for 65 tables, it's challenging to identify the insert order. Once the insert order is identified, the delete order will be the reverse of it.
Is there a tool or any SP that could generate the insert-order script? The Generate Scripts (data only) option generates the entire data set, and these databases are almost 400GB; some tables have 200 million+ rows, so it takes forever.
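There is no built-in one-click generator for the insert order, but the foreign key catalog views give the dependency pairs from which an order can be derived (referenced/parent tables load before the tables that reference them, and deletes go in the reverse order). A starting-point query:

SELECT
    OBJECT_SCHEMA_NAME(fk.parent_object_id)     + '.' + OBJECT_NAME(fk.parent_object_id)     AS referencing_table,
    OBJECT_SCHEMA_NAME(fk.referenced_object_id) + '.' + OBJECT_NAME(fk.referenced_object_id) AS referenced_table
FROM sys.foreign_keys AS fk
ORDER BY referenced_table, referencing_table;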
I'm looking for a process to archive data through replication. I have a nightly job that purges records in a few source tables (publisher), retaining only 3 years of data. I have an archive database (subscriber) that contains the prior 3 years of data and the current 3 years of data.
Before the nightly job DELETES records in the source table, I want to STOP replication so that the delete is not replicated to the archive database. After the job completes I would like to TURN replication back ON so that any new inserts and updates in the source are the only changes replicated to the archive database.
My DBA tested this, but after the last step of turning replication back ON, the archive database was sync'd with the source table.
There are around 70 tables, of which 30 are transactional tables that need record purges. Developing an ETL process is possible but tedious.
I wrote code to archive data. I created a table that sets the maximum number of records to archive and delete, and I loop until finished or until a flag set in a table tells it to stop archiving. I was able to limit the number of records per commit.
I have two tables, Log and CategoryLog. I need to archive records older than 13 months from these two tables into two separate archive tables and then delete the archived records from the Log and CategoryLog tables. The problem is that only the Log table has a date column; the CategoryLog table does not have any date column, but the two tables are connected by a column (LogID). How do I archive the data and then delete the archived data from both tables?
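Since CategoryLog rows can only be dated through their parent Log row, the usual approach is to drive both tables off the Log date via the LogID join. A sketch, assuming the date column is LogDate and that archive tables with matching columns already exist (those names are assumptions):

DECLARE @cutoff DATETIME;
SET @cutoff = DATEADD(MONTH, -13, GETDATE());

BEGIN TRAN;

INSERT INTO dbo.LogArchive
SELECT * FROM dbo.[Log]
WHERE LogDate < @cutoff;

INSERT INTO dbo.CategoryLogArchive
SELECT cl.*
FROM dbo.CategoryLog AS cl
JOIN dbo.[Log]       AS l ON l.LogID = cl.LogID
WHERE l.LogDate < @cutoff;

-- delete children first, then parents
DELETE cl
FROM dbo.CategoryLog AS cl
JOIN dbo.[Log]       AS l ON l.LogID = cl.LogID
WHERE l.LogDate < @cutoff;

DELETE FROM dbo.[Log]
WHERE LogDate < @cutoff;

COMMIT;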
I need a script that can be scheduled on a SQL 7 server to archive a table's data for every day except the current one, then pull the last 7 days' worth of data back into the table. Anyone have anything like this? My database is a load-totaling database and its size is getting out of control.
Hi, we have a table with about 400,000 records in it. It is starting to take longer and longer to add a new record. I was thinking of creating another identical table and archiving off most of the records every month (we are now adding about 4,000 records a day). Is this the best thing to do? I don't know a lot about SQL Server, so any help or suggestions would be great.
I am wondering whether tempdb stores all results temporarily whenever I query a large fact table with over 4 million records that joins to another dimension table. Each time I run the query, tempdb grows to nearly 1GB, which nearly uses up all the space on my local system drive, and as a result performance goes completely down. Is there any way to fix this problem? Thanks a lot in advance; I am looking forward to hearing your kind advice.
I am developing an application that has a table with lots of records (network traffic), but the data is summarized every so often to create summary records (old records are deleted). The problem is that I have a PK based on an auto-increment ID (int) that will run out of numbers. However, this ID is not referenced anywhere (it's not a foreign key from another table, it's not used for deletion, and there are no updates in this table whatsoever).
So my possibilities are: 1.- reseed the ID when it is about to run out. 2.- make the ID bigint. 3.- remove the ID and change the PK to two other fields. 4.- remove the ID and go without a PK.
I am leaning toward option 4, because I do not see the need for a PK, but I understand that it is quite out of the norm, so I would like to hear from other people (I do not have much experience with databases).
I also like option 3. I already have an index on one of the other fields (time).
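For option 1, the reseed itself is a one-liner, and it is only safe here because the low ID values have already been deleted and nothing references the column (the table name below is a placeholder); for option 2, changing the column to BIGINT rewrites the column but keeps everything else as it is:

-- reset the identity so newly inserted rows start from low values again
DBCC CHECKIDENT ('dbo.NetworkTraffic', RESEED, 0);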
If I use BCP to export a very large table, will that table be blocked for writes during the export process? I don't want to prevent users from accessing that table during the bcp process. Thank You, TFD.
Hi, I have this page that uploads PDFs to a table. In principle this works fine, until I try to upload large files (3 to 4 MB), and I need to upload even larger files than that (I don't really know yet what users are going to come up with). I get timeout problems. Now some people say it is not possible to exceed a limit of about 4 MB, but that there is a workaround by changing something in the web.config file. Can somebody give me info about that? (I am quite a novice really.) I tried to change it like this, but to no avail:

<system.web>
  <httpRuntime maxRequestLength="102400"
               enable="True"
               requestLengthDiskThreshold="102400"
               useFullyQualifiedRedirectUrl="True"
               executionTimeout="102400" />
</system.web>

Thanks for any help!
On a nightly basis, this table gets rebuilt in a temporary database. Once the table has been built and scrubbed, I need to move it into our web server's db.
I'd like to do this with minimal interruption to the website.
Possible techniques:
1) I could set up a DTS package to copy the table object overwriting the destination table
2) I could export to a flat file and then bulk import into the live table (after truncating it)
3) I could run a process to update smaller chunks of data at a time running delete queries and insert queries.
Anybody have a thought on the best way to do this so that the web users would be virtually unaware that anything was happening?
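A fourth technique that keeps the outage to roughly the length of a rename: bulk load the scrubbed data into a staging table in the web server database, then swap the names, so readers only ever see a complete table. The table names below are placeholders, and sp_rename does take a brief schema lock during the swap:

-- load dbo.Products_Staging completely first, then:
BEGIN TRAN;
EXEC sp_rename 'dbo.Products',         'Products_Old';
EXEC sp_rename 'dbo.Products_Staging', 'Products';
COMMIT;

DROP TABLE dbo.Products_Old;   -- or keep it around as yesterday's copy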