SQL Server 2008 :: Making Use Of A Large Transaction File To Delete Records?

Jun 5, 2015

Currently we have a database of about 300 GB. Because our backup system failed some time ago, we were left with a transaction log file which grew to about 160 GB. However, our backups are working again and everything is working fine. My understanding is that the transaction log file is now practically empty but its allocated size remains at 160 GB.

When you delete records, the deleted rows get logged to the transaction log. My understanding is that when a backup is done, these logged transactions get discarded from the transaction log.

Could I make use of this relatively large transaction log and start deleting records without actually adding to the transaction log file's size?

The plan is to delete records from logging tables that are not referenced by any other table, without this increasing the transaction log file. For example, over a period of a few weeks we can delete a chunk of records from a table; then, after a backup has completed, we can delete another chunk of records out of the table, until we have got the table down to only the records that we now need. Will this work?
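In principle this should work, with one correction: it is transaction log backups (not full database backups) that mark log space as reusable, so each chunked delete needs a log backup after it before the next chunk runs. A minimal sketch of the idea, assuming a hypothetical dbo.LoggingTable with a LogDate column and a placeholder backup path:

-- Delete in modest chunks so each batch fits comfortably inside the existing 160 GB log
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000)
    FROM dbo.LoggingTable
    WHERE LogDate < '20140101';          -- whatever cutoff defines the rows no longer needed

    SET @rows = @@ROWCOUNT;

    -- A log backup between chunks frees the just-used log space, so the file should not grow
    BACKUP LOG [YourDatabase] TO DISK = N'E:\Backups\YourDatabase_log.trn';
END

If log backups are already running on a schedule, the explicit BACKUP LOG can be dropped and the job simply paused between chunks; DBCC SQLPERF(LOGSPACE) is a quick way to confirm the log is not filling up.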

View 2 Replies



SQL Server 2012 :: Delete Large Number Of Records?

Sep 8, 2015

I have the following scenario:

SQL database on SQL 2012

Large production table with 15 million records

The table has 3 years of data

New monthly data is being added every month.

New monthly data is loaded, checked, and finally approved after 6 or 7 iterations. Because of these iterations, the monthly data set is added, then deleted, then added again several times. Because the table is big, this process takes time. Any thoughts on how to make the delete/insert process faster? Keep in mind I cannot change much, because it is a production table and is being accessed by other users for other analysis.

Delete is done based on trx_date which is a year/month combo, like 201508.

The table has monthly sales by customer aggregated.

The table structure is:

CREATE TABLE [dbo].[Sales](
[batch_key] [int] NOT NULL,
[Company_key] [int] NOT NULL,
[customer_key] [char](22) NOT NULL,
[Trx_Date] [int] NOT NULL,
[account] [nvarchar](35) NOT NULL,

[code].....
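Given that whole months are repeatedly added and deleted, one approach worth testing is to partition the table on Trx_Date so a month can be switched out and reloaded almost instantly instead of being deleted row by row (on SQL 2012 this requires Enterprise Edition). A rough sketch with hypothetical object names (pf_SalesMonth, ps_SalesMonth, Sales_Staging):

-- Partition the Sales table on the yyyymm integer key
CREATE PARTITION FUNCTION pf_SalesMonth (int)
    AS RANGE RIGHT FOR VALUES (201506, 201507, 201508);   -- one boundary per month, extend as needed

CREATE PARTITION SCHEME ps_SalesMonth
    AS PARTITION pf_SalesMonth ALL TO ([PRIMARY]);

-- Sales_Staging must match the Sales structure and live on the same filegroup.
-- To discard the month being reworked: switch its partition out, then truncate the staging table.
ALTER TABLE dbo.Sales SWITCH PARTITION $PARTITION.pf_SalesMonth(201508) TO dbo.Sales_Staging;
TRUNCATE TABLE dbo.Sales_Staging;

-- Load the corrected month into Sales_Staging, add a CHECK constraint limiting Trx_Date to 201508,
-- then switch it back in:
-- ALTER TABLE dbo.Sales_Staging SWITCH TO dbo.Sales PARTITION $PARTITION.pf_SalesMonth(201508);

Switching is a metadata-only operation, so readers of the production table are barely affected; the approval iterations then happen in the staging table rather than in the 15-million-row table itself.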

View 9 Replies View Related

SQL Server 2008 :: Job To Delete All Records Then Copy All From Another Table?

Apr 22, 2015

I am using SQL Server Agent jobs that run each morning to update the records in a table to match what they should be for that day. I built them and tested them using a test table called "testtable1". It worked fine. But when I switched over to our production table, it fails saying the table has to be declared. What would be the difference? The production table has a "@" in front of the name; is that causing issues?

USE [Live_build]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
BEGIN
DELETE
FROM @ZIPLIST
INSERT INTO @ZIPLIST
SELECT * FROM tblZip3DSWed;
END
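The "@" prefix is almost certainly the cause: a name beginning with "@" is a table variable, and a table variable only exists within the batch that declares it, which is why the job reports that it must be declared. A sketch of two ways around this, assuming the list is meant to persist between job runs (the column definitions below are placeholders):

-- Option A: use a real table instead of a table variable
-- (TRUNCATE is fine if nothing references the table; otherwise keep DELETE)
TRUNCATE TABLE dbo.ZipList;
INSERT INTO dbo.ZipList
SELECT * FROM tblZip3DSWed;

-- Option B: if a table variable is really wanted, declare it in the same batch that uses it
DECLARE @ZIPLIST TABLE (Zip3 char(3) NOT NULL);   -- hypothetical definition; match the real columns
INSERT INTO @ZIPLIST
SELECT Zip3 FROM tblZip3DSWed;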

View 4 Replies View Related

SQL Server 2008 :: Find All Transaction (insert / Delete / Update) On A Database For A Day?

May 8, 2015

I would like to know if it is possible to find all transactions (insert, delete, update) on a database for a given day. If yes, how can I do it?
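There is no built-in report of every insert/update/delete after the fact, but on SQL Server 2008 you can capture this going forward with Change Data Capture (Enterprise Edition, SQL Agent running) or, more lightly, a server-side trace or SQL Audit. A minimal CDC sketch for one hypothetical table, dbo.Orders:

-- Enable CDC at the database level, then per table
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Orders',
     @role_name     = NULL;

-- Changes then accumulate in cdc.dbo_Orders_CT; __$operation = 1 delete, 2 insert, 3/4 update
SELECT *
FROM cdc.dbo_Orders_CT
WHERE __$start_lsn >= sys.fn_cdc_map_time_to_lsn('smallest greater than or equal', '2015-05-08 00:00');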

View 2 Replies View Related

SQL Server 2008 :: Archive And Delete Old Records In Table With No Date Column

Feb 4, 2015

I have these two tables, Log and CategoryLog. I need to archive records older than 13 months in these two tables to two separate archive tables and then delete the archived records from the Log and CategoryLog tables. The problem is that only the Log table has a date column; the other table, CategoryLog, does not have any date column, but the two tables are connected by a column (LogID). How can I archive the data and then delete the archived data from both tables?
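One way is to drive both tables off the LogID values that qualify in Log, since CategoryLog can be joined on that column. A sketch assuming archive tables LogArchive and CategoryLogArchive already exist with matching structures, and a date column named LogDate (adjust names to the real schema):

-- Collect the qualifying keys once so the archive and delete steps agree exactly
SELECT LogID
INTO #OldLogs
FROM dbo.Log
WHERE LogDate < DATEADD(MONTH, -13, GETDATE());

BEGIN TRANSACTION;

INSERT INTO dbo.LogArchive
SELECT l.* FROM dbo.Log AS l JOIN #OldLogs AS o ON o.LogID = l.LogID;

INSERT INTO dbo.CategoryLogArchive
SELECT c.* FROM dbo.CategoryLog AS c JOIN #OldLogs AS o ON o.LogID = c.LogID;

-- Delete children first, then parents
DELETE c FROM dbo.CategoryLog AS c JOIN #OldLogs AS o ON o.LogID = c.LogID;
DELETE l FROM dbo.Log AS l JOIN #OldLogs AS o ON o.LogID = l.LogID;

COMMIT TRANSACTION;

For very large volumes, the same pattern can be run in batches of LogID ranges to keep each transaction, and the log growth, small.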

View 9 Replies View Related

Data Flow Task To Delete Records And Then Insert Records In Transaction

Aug 6, 2007

HI,

I have been trying to solve this locking problem for the past couple of days. Please help me!

Scenario:
--------------
I have an SSIS package with 2 data flow tasks. The 1st data flow task deletes records from 5 tables, and the 2nd data flow task should insert records into 1 of the five tables after the 1st data flow task succeeds. This scenario runs in a transaction.

In the above scenario, the 2nd data flow task hangs at runtime and does not complete. With the sp_who2 command I could see that there is an intent shared lock (LCK_M_IS) wait on the table and the status is SUSPENDED.

I don't know how to get out of this locking. Please help.

Thanks ,
Sunil

View 7 Replies View Related

SQL Server 2008 :: Large Binary Dataset - Database Or File System?

Jun 2, 2015

I have a well-structured but also very large binary data set that is generated by a C++ application every five minutes. The data needs to be accessed by SQL applications. Since data is generated every five minutes, performance is key, both for write and read. The data set is about 500 MB. If data is written to the file system, the write performance doesn't involve SQL Server. For reading it, I have a CLR function that reads the portions of the data that I need based on offset and length. That works and is very fast. The problem is that the data is stored in the file system, so it is not self-contained within the database.

A second option that I haven't explored yet is to write the data into a table as VARBINARY(MAX). I would read the data using SUBSTRING with the appropriate offset and length. What would the performance of SQL write/read of binary data of this size be, and is there a third option I haven't thought of? I'm using SQL Server 2014.
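For the VARBINARY(MAX) route, the table and the offset-based read might look roughly like the sketch below (names are placeholders). FILESTREAM is a third option worth testing, since it keeps the payload in the file system but managed by, and backed up with, the database.

CREATE TABLE dbo.BinarySnapshot
(
    SnapshotID   int IDENTITY(1,1) PRIMARY KEY,
    CapturedAt   datetime2(0) NOT NULL,
    Payload      varbinary(max) NOT NULL
);

-- Read an arbitrary slice; SUBSTRING on varbinary(max) takes a 1-based offset and a length
DECLARE @offset bigint = 1048577,   -- e.g. skip the first 1 MB
        @length bigint = 65536;     -- read 64 KB

SELECT SUBSTRING(Payload, @offset, @length) AS Slice
FROM dbo.BinarySnapshot
WHERE SnapshotID = 1;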

View 5 Replies View Related

Error Occurs While Deleting Large Volume Of Records

Jan 6, 2004

I wrote an ActiveX script to delete records from several tables, and those records number more than 100,000. When I test it by deleting 1,000 records it is OK, but once the volume increases to 100,000 it gives me an error message saying the operation timed out.

How can I overcome this problem?

View 8 Replies View Related

SQL Server 2008 :: Transaction Log File Drive Is Missing From SAN

Feb 17, 2015

We had some SAN issues and we don't have the transaction log files for some databases. The drive which was holding these T-log files is missing. How can we bring the databases back?
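If the databases were shut down cleanly, SQL Server can build a new log when the database is attached from the surviving data files; if they were not, some transactions may be lost, so treat the second option below as a last resort. A sketch with placeholder names and paths:

-- Attach using only the data file and let SQL Server build a fresh log
CREATE DATABASE [YourDatabase]
    ON (FILENAME = N'D:\Data\YourDatabase.mdf')
    FOR ATTACH_REBUILD_LOG;

-- If the database is still listed but stuck in RECOVERY_PENDING, an alternative (data loss possible) is:
-- ALTER DATABASE [YourDatabase] SET EMERGENCY;
-- ALTER DATABASE [YourDatabase] SET SINGLE_USER;
-- DBCC CHECKDB (N'YourDatabase', REPAIR_ALLOW_DATA_LOSS);
-- ALTER DATABASE [YourDatabase] SET MULTI_USER;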

View 1 Replies View Related

Shrinking Unreasonably Large Transaction Log File.

Oct 12, 2002

Gurus,

I have inherited a SQL 2000 database (I am new as a SQL DBA) and I found this when I was checking the db properties: the transaction log has grown bigger than the actual data file. I thought transaction log backups would truncate the inactive portion of the log file and shrink the transaction log, but that does not seem to be the case; maybe it was truncating the inactive portion of the log, but not shrinking the file. This site does not have a job for shrinking the data/log files periodically. What is the best method to deal with this situation, and how can I shrink the transaction log quickly?

All your suggestions are welcome.

TIA,
-Jay
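The observation is right: log backups only mark space inside the file as reusable; they never shrink the file itself, so a one-off shrink is needed after a log backup. A sketch, with the database name, logical file name, and target size as placeholders:

-- Back up the log so the inactive portion becomes reusable
BACKUP LOG [YourDatabase] TO DISK = N'E:\Backups\YourDatabase_log.trn';

-- Then shrink the log file itself (use the logical file name from sysfiles / sys.database_files)
USE [YourDatabase];
DBCC SHRINKFILE (YourDatabase_log, 2048);   -- target size in MB

-- If the file will not shrink, the active portion may still sit at the end of the log;
-- run another log backup and repeat the shrink.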

View 2 Replies View Related

SQL Server 2008 :: Huge Volume Of Records To Copy To Excel File Through SSIS

Oct 22, 2015

I am copying data from a database to an Excel file through SSIS. The database is MS SQL 2005 and BIDS is also 2005. However, the job doing this task fails every time. As per investigation, the result of the query is more than 100,000 rows, and we know that the Excel 2003 format has a limit of 65,536 rows per sheet. Is there a way of raising the limit, or perhaps a better approach, so that everything will be copied to the Excel file successfully?

The PrimeOutput method on component "Source - Query" (1) returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure. End Error Error: 2015-10-22 04:34:58.05 Code: 0xC0047021

Source: Data Flow Task
Description: SSIS Error Code DTS_E_THREADFAILED.

Thread "SourceThread0" has exited with error code 0xC0047038. There may be error messages posted before this with more information on why the thread has exited. End Error DTExec: The package execution returned DTSER_FAILURE (1). Started: 4:30:00 AM Finished: 4:35:05 AM Elapsed: 304.844 seconds. The package execution failed. The step failed. "

View 1 Replies View Related

Delete 2nd Transaction Log File

Feb 9, 2005

Hello
We have a SQL Server 2000 database with 2 transaction log files.
The 2nd file was created when we were running out of disk space and the person creating it was not familiar with the dbcc shrink command.

I now want to get rid of the 2nd log file. I ran the following steps with no success:

DBCC SHRINKFILE ('Log_file', EMPTYFILE )
--Message: Cannot shrink log file 3 (log_file) because all logical log files are in use.

ALTER DATABASE db1 REMOVE FILE 'Log_file'
--Message: The file 'Log_file' cannot be removed because it is not empty.


There are no users or open transactions in the database. I have also tried sp_detach_db and sp_attach_single_file_db but that does not work either as the database attaches both the transaction logs back.

Please advise.

Thanks

Nina

View 2 Replies View Related

SQL Server 2008 :: Maintenance Plan Delete History Trying To Delete Wrong Files

Sep 11, 2015

I have some simple maintenance plans, but they are failing because the Delete History task is looking for files in a non-existent directory.

It is looking for files in C:\Program Files\Microsoft SQL Server\MSSQL10_50.INSTANCE\MSSQL\Log whereas it should be looking in C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log

How can I get this corrected so that the Maintenance Plans run correctly?

I have tried deleting and recreating the Plan but to no avail

View 0 Replies View Related

Delete Records In The Destination File In SSIS

Apr 9, 2007

How do I delete records in the destination file in SSIS using BI Development Studio?

View 1 Replies View Related

Delete Records In The Destination File In SSIS

Apr 10, 2007

How do I delete records in the destination file in SSIS using BI Development
Studio? Thanks.

View 3 Replies View Related

Website Making A Large Amount Of Commands

Jul 29, 2014

I have been in contact with my hosting provider, who told me that my website seems to be issuing a large number of SQL commands and locking the tables.

View 3 Replies View Related

Making The Value Of One Coloumn The Same For A Large Set Of Data In A Table?

Nov 19, 2007

Hello, I'm very new to SQL Server, so please bear with me.

I am currently trying to find a quick way of making a large number of database entries within a particular table all have the same value in one particular column.

I have created a query in MS SQL Server Management Studio, which outputs all the entries in a particular table that I want to change. Currently they all have different values in a certain column in this table, and I want them all to have the same value in that column.

How do I do this within Server Management Studio?

Thank you.
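If the rows can be identified by a WHERE clause, a single UPDATE statement in a new query window does this; no row-by-row work is needed. A sketch with placeholder table, column, and filter names:

UPDATE dbo.YourTable
SET    StatusCode = 'APPROVED'          -- the single value every matching row should get
WHERE  CreatedDate < '20070101';        -- the same condition your SELECT query used

-- Check the result
SELECT StatusCode, COUNT(*) AS Rows
FROM dbo.YourTable
GROUP BY StatusCode;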

View 3 Replies View Related

SQL Server Large Transaction Log

Feb 3, 2006

I'm working with a database that has a relatively small amount of data. The size of the data file for this database is in the 100-110 MB range, which gives an idea of its size and how much data is in it. What is slightly troubling and baffling is that the transaction log for this same database is significantly larger than the data file, at about 2.5 GB. What could be happening that causes this transaction log to be larger than the actual database?
Thx
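The usual cause is a database in FULL recovery with no transaction log backups being taken, so the log can never be reused and keeps growing. A quick check (placeholder database name; sys.databases exists on SQL Server 2005 and later):

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';

-- If recovery_model_desc is FULL and point-in-time restore is not needed,
-- either schedule regular log backups or switch to SIMPLE, then shrink the log once:
-- ALTER DATABASE [YourDatabase] SET RECOVERY SIMPLE;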

View 1 Replies View Related

SQL Server 2008 :: Parent Records Ordering And Display Child Records Next To It?

Sep 7, 2015

declare @table table (
ParentID INT,
ChildID INT,
Value float
)
INSERT INTO @table
SELECT 1,1,1.2

[code]....

In this case, the (ParentID, ChildID) rows 1,1 and 2,2 and 3,3 are called parent records, whereas (null, 1) is a child whose parent is 1; similarly, (null, 2) records are children whose parent is 2, and so on.

Now my requirement is to display the parent records sorted by value ascending, and to display each parent's child records immediately after the corresponding parent.

--Final output should be

ParentID  ChildID  Value
3         3        1.12
null      3        56.7
null      3        43.6
1         1        1.2
null      1        4.8
null      1        4.6
2         2        1.8
null      1        1.4
View 2 Replies View Related

SQL Server 2012 :: How To Batch Delete Large Table

Jun 16, 2015

I have a table with about 466 million rows. In this table there is an int column called WeeksToRetain as well as an EventDate column containing the date the row was inserted. I am trying to delete all the rows that should be deleted according to WeeksToRetain. For example, if the EventDate is 5/07/15 with a 1 in the WeeksToRetain column, the row should be removed by 5/14/15. I am not sure what days SQL considers the beginning and end of the week. However, the core issue I am having is the sheer mass of deletions I must do and the log growth.

So I am trying to do the delete in batches. More specifically, I want to load a temporary table with a million rows, then use the temporary table to load a sub temporary table with 100,000 rows and join this temporary table to the table I want to delete from, looping through 10 times to get 1 million. The Logging.EventLog table, which is the table I'm trying to purge, has a clustered index on EventDate (ASC). I would like to run this in a scheduled job with enough time between executions for log backups to run.

DECLARE @i int
DECLARE @RowCount int
DECLARE @NextBatchDate datetime
CREATE TABLE #BatchProcess
(
EventDate datetime,
ApplicationID int,

[Code] .....
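A simpler alternative to the nested temp tables is a loop that deletes fixed-size batches directly; each iteration stays small, and log backups between job runs keep the log in check. A sketch, assuming the retention rule is "older than EventDate plus WeeksToRetain weeks":

DECLARE @BatchSize int = 100000,
        @Rows      int = 1;

WHILE @Rows > 0
BEGIN
    DELETE TOP (@BatchSize) e
    FROM Logging.EventLog AS e
    WHERE DATEADD(WEEK, e.WeeksToRetain, e.EventDate) < GETDATE();

    SET @Rows = @@ROWCOUNT;

    -- brief pause so log backups and other work can get through
    WAITFOR DELAY '00:00:05';
END

Because the retention period is per row, the DATEADD predicate cannot seek on the EventDate index; if WeeksToRetain has only a handful of distinct values, looping over those values with a predicate of the form EventDate < DATEADD(WEEK, -@weeks, GETDATE()) lets each delete seek the clustered index instead.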

View 9 Replies View Related

SQL Server 2008 :: Add ID And Primary Key To Large Table(s)

Apr 24, 2015

Background:

* SQL Server 2008 R2
* Database was created from a third party product. The product writes to the 3 tables that I need to make changes to 24/7 and downtime is not an option. All changes must be done live.
* Database overall size is ~200 GB
* The 3 tables I must update make up ~190 GB of that space.
* Tables have no primary key or ID columns. Therefore, the data is highly fragmented.
* Of the ~190 GB of space allocated for the tables, there is roughly 70 GB of actual data.
* Rows of the table are not guaranteed to be unique. In fact, on one of the tables, tests were run with a small sample of data and duplicates were very much evident.

What I'm trying to accomplish here is to get an ID column added to the 3 tables and set that ID field as the primary key. Doing so will force the data to become much less fragmented than it is currently and with purging and new inserts, eventually fragmentation will be nearly non-existent.

Problem:
Making table changes on tables this large while data is constantly being added poses many risks and can cause data loss. This was tried on a smaller table than these three and the entire table was lost in the process. Restore from backup was needed to get back to most recent log backup point.

Original Solution:
My original plan was to create a backup of each table and run the script below to migrate the majority of the data temporarily into the new table. I could then update the original table (which now would contain much less data) and then migrate the data back.

CREATE TABLE #temp
(
MsgDate varchar(10)
,MsgTime varchar(8)
,MsgPriority varchar(30)
,MsgHostname varchar(255)

[Code] ....

Original Solution Problem:
The problem with the solution above is that it calls the DELETE function on the original table using the values from the temporary table. When there are duplicate rows, which have not all been inserted into the backup table yet, they will all be removed from the original table because there is nothing unique to separate them out. In my testing, I had 10,000 rows in the original table and ended up with 9,959 rows in the backup table.

Question 1: Is my approach to making these table changes reasonable?
Question 2a: If so, how can I make sure I don't lose data as part of this temporary migration of the data to my backup tables?
Question 2b: If not, what would be a better approach that isn't going to cause disruption to the application that INSERTs data 24/7 and won't have any risk of data loss?
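On the narrow question of adding the column: an IDENTITY column can be added to the existing tables in place, which avoids copying rows out and back and sidesteps the duplicate-row problem entirely. Whether that is acceptable depends on locking tolerance: adding the column and building the clustered key are size-of-data operations, the add holds a schema-modification lock for its duration, and the ONLINE option below requires Enterprise Edition. A sketch with placeholder names:

-- 1. Add the identity column; existing rows are numbered as part of this operation
ALTER TABLE dbo.BigTable
    ADD ID bigint IDENTITY(1,1) NOT NULL;

-- 2. Build the clustered primary key on it, keeping the table readable/writable while it builds
ALTER TABLE dbo.BigTable
    ADD CONSTRAINT PK_BigTable PRIMARY KEY CLUSTERED (ID)
    WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);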

View 9 Replies View Related

SQL Server 2008 :: Large Tables In OLTP

Jul 14, 2015

How many records does a table need before it is considered a large table?

We are getting more deadlocks. We are using the default isolation level. Read and insert statements are blocking each other and causing deadlocks.

I am thinking that purging might reduce the deadlocks.

The table has 15 million records. Is this considered a large table in OLTP systems?

In general, how many records do we need before we consider a table a large table?

View 1 Replies View Related

SQL Server 2008 :: Update Statistics With Large Databases

Feb 5, 2015

Currently our database size is around 350 GB. It will grow up to 1.5 TB.

We have the

Auto create statistics option :True,
auto update statistics option :True,
auto update statistics asynchronously option : False

at database level

We have a weekly update statistics job that runs for a very long time. It is created through a maintenance plan using the full scan option.

Previously they tested with sampling, but running with sampling instead of a full scan affected the queries.

Is there an option to avoid the long job duration?

If we didn't run the statistics update manually, what would happen? How do you maintain statistics on large databases?
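Two common ways to cut the duration: let sp_updatestats skip statistics whose underlying data has not changed, or update the big tables with a sample and keep FULLSCAN only for the specific statistics where sampling has been shown to hurt plans. A sketch with a placeholder table and index name:

-- Only touches statistics whose underlying data has changed since the last update
EXEC sp_updatestats;

-- Or, per large table, trade accuracy for time with an explicit sample
UPDATE STATISTICS dbo.BigTable WITH SAMPLE 10 PERCENT;

-- Keep FULLSCAN only where sampling produced bad plans
UPDATE STATISTICS dbo.BigTable (IX_BigTable_SkewedColumn) WITH FULLSCAN;

If statistics are never updated manually, auto-update still fires, but only after roughly 20% of a table's rows change, which on very large tables can leave statistics stale for a long time.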

View 9 Replies View Related

SQL Server 2008 :: Migrate Contents Of Large Table From One DB To Another?

Mar 3, 2015

I have a large table containing about 800 million rows with an average row length of about 1K. The columns in the table are char columns. I need to move the contents of this table into a similar table where the target columns are varchar. The original table column definitions are compatible with the target table but the reverse is not necessarily true. For example, one column is being changed from int to bigint. The table is partitioned.

So, what is the fastest way to migrate the data? I was thinking of unloading each partition into a flat file and loading the target table using multiple load streams. Is this a good way?

View 0 Replies View Related

Making A Row Delete Itself At A Certain Time?

Dec 2, 2006

Is there any possible way to make a row delete itself at a certain date and time? I am trying to make an "On_Sale" table, and particular items (rows) must expire at a certain date and time. I have an idea to delete a row when it expires, but it would place an extra burden on my application. So is it possible to have SQL responsible for deleting a row at a certain time and date?
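SQL Server will not delete a row on its own, but a SQL Server Agent job scheduled every few minutes can do the sweep server-side so the application never has to. A sketch, assuming a hypothetical ExpireDate column on the On_Sale table:

-- Job step run by SQL Server Agent on a schedule, e.g. every 5 minutes
DELETE FROM dbo.On_Sale
WHERE ExpireDate <= GETDATE();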

Also, I have another question:
When I assign an identity PK to a certain column, is there a way I can enforce consecutive order of the PK values? For example, if I delete a row whose PK value was 5, it would reorder the whole table so all PK values are in consecutive order:


PK-----------Item----------price
1------------XYZ1----------$$$$
2------------XYZ2----------$$$$
3------------XYZ3----------$$$$

when I delete PK 2 this is what happens

PK-----------Item----------price
1------------XYZ1----------$$$$
3------------XYZ3----------$$$$

the PKs are not in consecutive order


What I want is this to happen: when PK 2 is deleted I need it to show as follows

PK-----------Item----------price
1------------XYZ1----------$$$$
2------------XYZ3----------$$$$

The PKs stay in consecutive order.



See how the PKs are still in consecutive order?

View 2 Replies View Related

SQL Server 2008 :: Replication Error - Field Size Too Large

Feb 1, 2011

I've got two databases on the same server and replicate some tables from one database to another. The replication is configured not to drop the table if it exists, but to delete the data based on the filter if one exists. There are two tables on the subscriber that have some extra columns.

I get "field size too large" error when trying to replicate them. Is there a workaround without having to make the publisher and the subscriber tables identical by schema?

View 5 Replies View Related

SQL Server 2008 :: Retrofitting Partitioning To Existing (large) Tables

Feb 9, 2015

We have an existing BI/DW process that adds large chunks of data daily (~10M rows) to an existing table, as well as using Deletes to remove stale data. This scenario seems to beg for partitioning to support switching in/out data.

After lots of reading on this, I have figured out the mechanics of the switching, but I still have some unknowns about the indexes needed to support it.

The table currently has several non-clustered indexes, including one on the partitioning column - let's call that column snapshotdate. Fortunately there are no FKs involved, and no constraints.

Most of the partitioning material I see focuses on creating a clustered PK to assist with switching. Not sure if this is actually necessary, but assume I create one using an Identity column (currently missing) plus snapshotdate.

For the other non-clustered, non-unique indexes, can I just add the snapshotdate to the end of the index? i.e. will that satisfy the switching requirement?
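For switching, every index on the table has to be aligned, i.e. built on the partition scheme. The partitioning column does not strictly need to be a key column of each non-unique nonclustered index, because creating the index on the scheme adds it behind the scenes, though adding it explicitly as suggested also works; for unique indexes it must be part of the key. A sketch of rebuilding one index aligned, with placeholder names:

-- Recreate the nonclustered index on the partition scheme so it is aligned with the table
CREATE NONCLUSTERED INDEX IX_BigTable_Customer
    ON dbo.BigTable (CustomerID)
    WITH (DROP_EXISTING = ON)
    ON ps_Snapshot (snapshotdate);   -- aligned: partitioned the same way as the base table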

View 1 Replies View Related

SQL Server 2008 :: What Could Cause Large Gaps In Servers Default Trace

Feb 12, 2015

I have the default trace on a SQL Server 2008R2 instance enabled and found today that there is a gap of nearly 4 minutes in the trace during a time of the day when there most certainly is not going to be a 4 minute window of nothing.

What, if anything, could cause the default trace to have a gap like this? The SQL Server instance (against my preferences) is hosted on VMware; however, it has its own host, so its resources are not being shared with any other server. The data and log files reside on different parts of the SAN. Our IT and network admins are looking into the issue on their end, but when I found a near 4-minute gap in the default trace it hit me that this could be something above/outside of SQL Server.

View 1 Replies View Related

SQL Server 2008 :: How To Find Statements That Cause Large Memory Paging

Apr 22, 2015

I am monitoring our production server, and noticed that periodically we have spikes of Memory Paging Rate (pages/sec).

How can I find the particular queries/stored procedures that are causing this?
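Pages/sec is an OS-level counter, so it is worth ruling out external memory pressure first (max server memory, other processes on the box), but within SQL Server the usual starting point is the top statements by memory grant or by reads. A sketch using standard DMVs:

-- Statements currently holding or waiting on large memory grants
SELECT TOP (20) r.session_id, m.requested_memory_kb, m.granted_memory_kb, t.text
FROM sys.dm_exec_query_memory_grants AS m
JOIN sys.dm_exec_requests AS r ON r.session_id = m.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
ORDER BY m.requested_memory_kb DESC;

-- Historical heavy hitters by logical reads
SELECT TOP (20) qs.total_logical_reads, qs.execution_count, t.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS t
ORDER BY qs.total_logical_reads DESC;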

View 5 Replies View Related

SQL Server 2008 :: Speed Up Text Search In Large Result Set?

Jul 14, 2015

I have a query below which filters the detail field in the #TempLogins table. The detail field is a text field which contains many types of text strings, some containing URLs that have parts like "ResultID=5", which is what is contained in the ResultIDSearch and ResultSetIDSearch fields. The records with entries like "ResultID=5" are the ones I'm trying to filter for.

The problem I have is that the query takes way too long to run. The TempLogin table has around 200 K records and the TempSearch table has around 80 K records.

select * from #TempLogins a where exists
(select 1 from #TempSearch t1 where
a.detail like '%' + t1.ResultIDSearch + '%'
or
a.detail like '%' + t1.ResultSetIDSearch + '%')

View 1 Replies View Related

SQL Server 2008 :: Changing Large / Existing Table To Sparse Columns?

Sep 21, 2015

I have some huge tables (think 200+GB for a single table) which are excellent candidates for sparse columns. The tables have many columns which are defined with decimal datatypes (13,2) with a large percentage of them (over 50% in most cases- some as much as 99%) being 0.00. Since this is very expensive in terms of storage my idea is to set all the 0.00 values equal to NULL then set them as sparse. Across 100 or so identical databases, I have 5 such tables, with 20-40 columns in each table.

Option 1: three steps for each column in each table in each db.

Step 1: alter the table to allow NULLs in the column

Step 2: update the table, set column = NULL where column = 0.00

Step 3: alter the table to mark the column as sparse

Option 2:

Step 1: Create entirely new table with sparse column definitions

Step 2: copy entire table, transforming 0.00 to null for affected columns via SSIS

Step 3: drop original table, rename new table to original name

View 0 Replies View Related

SQL Server Admin 2014 :: Restore Lost Transaction From Transaction Log File

Jun 10, 2015

I have a full database backup up to the previous day and the transaction log file containing today's transactions. My database has crashed. I have restored the previous day's full backup, but I am having difficulty restoring today's transactions from today's transaction log. What are the steps to restore the full database backup and one day's worth of transaction log? Note: there is no differential database backup and no transaction log backup.
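Because there is no log backup, the first step is to back up the tail of the log from the damaged database's own log file; this only works if the log file is intact and it must be done before a restore overwrites the original files. A rough sequence, with placeholder names and paths:

-- 1. Capture the tail of the log without trying to recover the damaged database
BACKUP LOG [YourDatabase]
    TO DISK = N'E:\Backups\YourDatabase_tail.trn'
    WITH NO_TRUNCATE, NORECOVERY;

-- 2. Restore the previous day's full backup, leaving the database able to accept more restores
RESTORE DATABASE [YourDatabase]
    FROM DISK = N'E:\Backups\YourDatabase_full.bak'
    WITH NORECOVERY, REPLACE;

-- 3. Apply the tail-log backup and bring the database online
RESTORE LOG [YourDatabase]
    FROM DISK = N'E:\Backups\YourDatabase_tail.trn'
    WITH RECOVERY;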

View 8 Replies View Related

SQL Server 2008 :: Deleting Large Number Of Rows With Foreign Key And Mirroring Setup

Feb 19, 2015

We have a database that is enabled for mirroring. We need to delete old records, around 500k records from a table, but it has foreign key relationships. How do you do this kind of delete on production servers?

View 2 Replies View Related






