Hi there - I have an SMS server which uses a SQL 7 database. The transaction log on it keeps growing at an incredible rate (about 1 GB per week). The database was successfully backed up yesterday, and I thought that should delete the inactive entries in the log. Can someone tell me some steps to reduce it to a regular size?
I upgraded from SQL 6.5 to SQL 7 last month, and so far, everything's been going fine.
However, I'm not using my old SQL 6.5 backup scripts, which, when the backup was done, would dump the transaction log with TRUNCATE_ONLY, shrinking the log size.
My SQL 7 server is set up with a Maintenance Plan which does everything, including backup, but the log file seems to be growing and growing. I'm up to 4.5 gigs now, for a database with a data file of 3.5 gigs.
How do I "dump transaction with TRUNCATE_ONLY" on a SQL 7 database?
I wonder if anyone could explain why, when monitoring the transaction log size, it doesn't appear to be growing! I'm using the following code to test image data types with logging. I've got 'Truncate log on Checkpoint' switched off and 'Select into/Bulk copy' also switched off.
Running the following code I would expect to see the transaction log grow and grow and grow... Monitoring it using perfmon indicates that it isn't in fact logging...
DECLARE @ptrval varbinary(16)
SELECT @ptrval = TEXTPTR(pr_info) FROM pub_info pr INNER JOIN publishers p ON p.pub_id = pr.pub_id AND p.pub_name = 'New Moon Books'
DECLARE @Loop int SELECT @Loop = 0
WHILE @Loop <= 10000
BEGIN
    -- WITH LOG forces each text write to be fully logged
    WRITETEXT pub_info.pr_info @ptrval WITH LOG 'New Moon Books (NMB) '
    SELECT @Loop = @Loop + 1
END
My database has a situation where my transaction log is growing out of control. However, I have not been able to figure out where any leaks are occurring.
Is there a way to monitor the database in order to find out when the tlog is growing? Or even better, what SQL is being executed that is causing this unreasonable tlog growth?
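As a rough sketch, one approach is to snapshot log usage on a schedule, diff the snapshots to find when the jumps happen, and then run a Profiler trace filtered to that database around those times to catch the statements responsible:

-- Reports log size (MB) and percent used for every database;
-- capture this periodically (e.g. from a job) and compare snapshots.
DBCC SQLPERF(LOGSPACE)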
The log file for database 'P5_Nextel' is full. Back up the transaction log for the database to free up some log space
What I'm doing is just resizing the space allocated, but the problem is my disk is now out of space. How can I prevent this kind of problem without adding a new disk? Is there any other way?
I am having problems with the transaction log growing. I set up a maintenance plan to back up every 15 minutes. The recovery model is set to full. It is set to auto-grow by 10 percent with unrestricted file growth. The space allocated is 2 MB. Should this be set higher? It seems whenever I uncheck auto shrink, the log grows larger. The command I use to check is:
SELECT * FROM sysfiles WHERE name LIKE '%LOG%'
GO
This is the log size and the log space used:

Log Size (MB)   Log Space Used (%)
1.4921875       38.02356
This has grown 4 percent since yesterday. Are there any good practices to maintain these log files? Any help would be appreciated.
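One hedged suggestion, with MyDB_Log as a placeholder for the log file's logical name: 2 MB with 10 percent auto-grow means constant tiny growth events. Pre-sizing the log to a realistic working size and using a fixed growth increment makes growth predictable, and under full recovery it is the 15-minute log backups that actually free space inside the file for reuse:

ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Log, SIZE = 500MB, FILEGROWTH = 100MB)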
Hi, we have created a SQL Server 2000 database. We observe that the transaction log keeps growing over time. We are now about to run out of space. We have been periodically shrinking the database. Nevertheless the size has increased. I would imagine that a transaction log can be eliminated if we stop the database. Can that be done? Is there a way to completely wipe off the transaction log?

Thanks,
Yash
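A database cannot run without a transaction log, but if point-in-time recovery isn't needed, a sketch along these lines (SQL 2000 syntax; MyDB and MyDB_Log are placeholder names) stops the log from accumulating:

-- In SIMPLE recovery the inactive log is truncated at each checkpoint.
ALTER DATABASE MyDB SET RECOVERY SIMPLE
-- One-off shrink of the physical file afterwards (target size in MB).
DBCC SHRINKFILE (MyDB_Log, 50)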
I have to admit, I'm usually using the MySql database, but in this particular case I have to use MSSQL 2000.
Over to my problem.
I'm building a web-based system (who isn't these days) in which the user types arbitrary information that is published when the user pushes the save button. Nothing new about that.
Here comes the tricky part: when the user wants to edit an existing item, I copy all information in the database and set the id of the 'edit-copy' to the negative value (id 45 becomes id -45 for the edit-copy). This is also done on all items in other tables that are connected to the main item.
This way I get a copy that the user may edit without messing up the published information. When the user is done I either delete all the negative items (cancel) or delete the positive items and update the negative to become positive (save).
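In T-SQL terms, the edit-copy cycle presumably looks something like this sketch (table and column names here are hypothetical):

-- Create the editable copy with a negated id.
INSERT INTO dbo.Items (id, title, body)
SELECT -i.id, i.title, i.body
FROM dbo.Items i
WHERE i.id = 45

-- On save: remove the published row and promote the copy.
DELETE FROM dbo.Items WHERE id = 45
UPDATE dbo.Items SET id = 45 WHERE id = -45

Every edit session thus writes a fully logged copy, delete and update of every connected row.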
So far so good, almost... my problem is that the transaction log grows tremendously.
Is there any other way to accomplish a safe edit that doesn't affect the transaction log as much as my current solution?
Could I be doing something wrong when updating or deleting my items?
The transaction log of a database grows until it runs out of disk space. If the disk fills up, all databases on the instance may get problems. Because of this I have set a limit on how much it may grow: up to 40 GB, growing in steps of 100 MB. It reaches its limit a couple of times a week, causing the application to hang.
The database file itself is about 2.3 GB.
The SQL version is 10.50.4276, SQL2008R2 SP2.
The recovery model is FULL.
I have a backup job that runs a FULL backup at midnight, and a backup job that runs a LOG backup every 30 minutes. Both finish with success. However, the transaction log is never truncated; the unused space is never released.
I have checked for "long running jobs"; it sometimes says "backup_log", sometimes "active_transaction".
Could a workaround be to set the recovery model to simple, and create a full-backup job that runs every 30 minutes for this database? It is a critical database...
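Before dropping to simple recovery, it may be worth asking the server directly what is pinning the log; a quick check on 2008 R2:

-- LOG_BACKUP or ACTIVE_TRANSACTION here tells you why the
-- log space cannot currently be reused.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDB'  -- placeholder database name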
We have a principal, mirror and witness set-up and all is working fine. However, the transaction logs for a few large databases just keep growing over the course of the month until the disk is full. As I understand it (and having tried), you can't dump the transaction logs while mirroring is configured. Is there any way at all to commit and truncate the logs while mirroring is running, or do I have to manually remove the mirroring each month, dump the transaction logs and then re-enable it again after doing the backup/restore?
The databases in question are about 6GB in data size and the transaction logs can grow to be about 60GB in a month.
Would a normal SQL Server 2005 backup truncate the logs if I configured this? At the moment we use LiteSpeed for SQL Server for nightly backups.
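For what it's worth, ordinary log backups are permitted while mirroring is running (mirroring requires full recovery; it is only BACKUP LOG ... WITH TRUNCATE_ONLY that is disallowed), and each log backup marks the backed-up portion of the log as reusable. A minimal sketch with a placeholder path:

-- Schedule this regularly; after it completes, the inactive log space
-- can be reused, so the file should stop growing without bound.
BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn'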
I know this question has been posted before, but after reading some other posts I'm still a bit confused about what to do.
In my database, I have a 3 MB data file and the transaction log file has grown to 500 MB and keeps growing... Can you please advise what I should do to reduce its size and/or stop it from continuing to grow?
hi, we have a SQL Server 2000 database which we set to 'SIMPLE' recovery mode at 6pm (due to nightly large data loads). We revert back to 'FULL' recovery mode at 6am.
My understanding was that in 'SIMPLE' recovery mode, the transaction log would not grow because it would automatically be truncated after a checkpoint. However this is not the case. I thought perhaps it could be due to a long-running uncommitted transaction, but when I ran 'dbcc opentran', the oldest running transaction doesn't last for more than a couple of minutes. I manually ran a 'checkpoint' command as well in the hope of forcing the transaction log truncation. I repeated this a couple of times to no avail. When I run 'dbcc sqlperf(logspace)', I can still see the transaction log growing.
It is not until I run 'backup log db with truncate_only' that the transaction log gets truncated. I do not understand why the transaction log does not get automatically truncated in SIMPLE recovery mode.
We have configured the following on the Publisher server:
1) Merge Replication - Synchronisation to be running in the continuous mode.
2) Merge Replication - Synchronisation in Scheduled mode.
The issue that we are facing here is that the transaction log files of the databases which are in replication are growing very large. And we get this error message on the Subscriber:
Error messages:
The Merge Agent failed to retrieve article information for publication 'MCC_Pos_CashlessPub'. Increase the -QueryTimeOut parameter and restart the synchronization. When troubleshooting, use SQL Profiler or restart the agent with a higher value for -HistoryVerboseLevel and check the output log file for errors. Correct any database engine conditions that may be causing internal replication stored procedures to fail. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147201017) Get help: http://help/MSSQL_REPL-2147201017
The transaction log for database 'tempdb' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases (Source: MSSQLServer, Error number: 9002) Get help: http://help/9002
In fact, for the past two days there has been no data movement or any changes to the tables, yet we can still see growth in the transaction log file.
As mentioned in the error description, when we checked the log_reuse_wait_desc column of the sys.databases table, it showed the value "LOG_BACKUP". So we took a database backup 3 times and a transaction log backup 2 times on the subscriber server where the error was thrown. Still the issue persists. There is no change in the transaction log size.
What is the reason behind the growth of the LDF? Is it because of the Merge Replication configured on the server?
Kindly check and let us know regarding this issue. We are facing a lot of problems because of it. Thanks in advance.
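Two quick checks that may narrow it down; note the 9002 error names tempdb, so it is tempdb's log, not the published database's, that is filling (sketch):

-- Why can tempdb's log space not be reused right now?
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = N'tempdb'

-- Is a long-running transaction (e.g. a replication agent query) holding it open?
DBCC OPENTRAN ('tempdb')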
I need to present the 12 months as field names, where the first field is the customer name and the rest are SUM(sale).
so the result will be like this:

name    1   2   3   4    5   6   7   8   9    10  11  12
name1   $1  $2  $0  $8   $1  $2  $0  $8  $1   $2  $0  $8
name2   $1  $2  $0  $8   $1  $23 $0  $8  $1   $23 $0  $8
name3   $1  $2  $0  $8   $1  $2  $0  $8  $13  $2  $0  $8
name4   $1  $2  $0  $83  $1  $2  $0  $8  $1   $2  $0  $8
Well, so far I have reached this, which does the sum but returns 12 rows per name. Could you help me?
SELECT CASE WHEN MONTHs = 1 THEN MONEY ELSE 0 END AS '1',
       CASE WHEN MONTHs = 2 THEN MONEY ELSE 0 END AS '2',
       CASE WHEN MONTHs = 3 THEN MONEY ELSE 0 END AS '3',
       CASE WHEN MONTHs = 4 THEN MONEY ELSE 0 END AS '4',
       CASE WHEN MONTHs = 5 THEN MONEY ELSE 0 END AS '5',
       CASE WHEN MONTHs = 6 THEN MONEY ELSE 0 END AS '6',
       CASE WHEN MONTHs = 7 THEN MONEY ELSE 0 END AS '7',
       CASE WHEN MONTHs = 8 THEN MONEY ELSE 0 END AS '8',
       CASE WHEN MONTHs = 9 THEN MONEY ELSE 0 END AS '9',
       CASE WHEN MONTHs = 10 THEN MONEY ELSE 0 END AS '10'
FROM (SELECT DISTINCT MONTH(tblActoin.MyDate) AS MONTHs,
             SUM(tblActoin.amount) AS MONEY,
             tblIdentity.name
      FROM tblActoin
      INNER JOIN tblIdentity ON tblActoin.IdentityID = tblIdentity.id
      GROUP BY tblActoin.MyDate, tblIdentity.name) AS derivedtbl_1
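For what it's worth, a conditional-aggregation sketch along these lines collapses the result to one row per name (table and column names taken from the query above; the MONEY alias is dropped since MONEY is a T-SQL type name, and months 11 and 12 are added):

SELECT i.name,
       SUM(CASE WHEN MONTH(a.MyDate) =  1 THEN a.amount ELSE 0 END) AS [1],
       SUM(CASE WHEN MONTH(a.MyDate) =  2 THEN a.amount ELSE 0 END) AS [2],
       SUM(CASE WHEN MONTH(a.MyDate) =  3 THEN a.amount ELSE 0 END) AS [3],
       SUM(CASE WHEN MONTH(a.MyDate) =  4 THEN a.amount ELSE 0 END) AS [4],
       SUM(CASE WHEN MONTH(a.MyDate) =  5 THEN a.amount ELSE 0 END) AS [5],
       SUM(CASE WHEN MONTH(a.MyDate) =  6 THEN a.amount ELSE 0 END) AS [6],
       SUM(CASE WHEN MONTH(a.MyDate) =  7 THEN a.amount ELSE 0 END) AS [7],
       SUM(CASE WHEN MONTH(a.MyDate) =  8 THEN a.amount ELSE 0 END) AS [8],
       SUM(CASE WHEN MONTH(a.MyDate) =  9 THEN a.amount ELSE 0 END) AS [9],
       SUM(CASE WHEN MONTH(a.MyDate) = 10 THEN a.amount ELSE 0 END) AS [10],
       SUM(CASE WHEN MONTH(a.MyDate) = 11 THEN a.amount ELSE 0 END) AS [11],
       SUM(CASE WHEN MONTH(a.MyDate) = 12 THEN a.amount ELSE 0 END) AS [12]
FROM tblActoin a
INNER JOIN tblIdentity i ON a.IdentityID = i.id
GROUP BY i.name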
We are trying to compare our current calendar week (based on Monday being the first day of the week) with the previous calendar week.
I'm trying to produce a line chart with 2 axis:
- x axis: the day of the week (Mon, Tues, Wed etc - it is fine for this to be a # rather than text, e.g. 1 = Mon, 2 = Tues etc)
- y axis: the cumulative number of orders
The chart needs two series:
- Previous Week: the running count of orders placed that week.
- Current Week: the running count of orders placed this week.
Obviously in such a chart the 'Current Week' series is not going to have values along the whole axis until the end of the week. This is expected, and the aim of the chart is to see how the current week compares against the previous week for the same day.
I have two tables:
- Orders Table
- Calendar Table
The calendar table's main date column is [calDate] and there are columns for the usual [calWeekNum], [calMonth] etc.
My measure for counting orders is simply: # Orders := COUNTROWS(Orders).
How do I take this measure and then work out my two series? I have tried numerous things, such as adapting TOTALMTD(), following articles such as these:
- [URL] ... - [URL] ...
But I have had no luck. The standard cumulative formulas do work e.g. if I wanted a MTD or YTD table I would be ok, it's just adjusting to a WTD that is causing me big issues.
I'd like to add a yesterday dimension member to a new dimension, like a "Time Utility" dimension, that references the second last day of non empty data in a cube.
At the moment, I'm doing this:
create member [MIA DW].[DATE TIME].[Date].[Yesterday] as [DATE TIME].[Date].&[2007-01-01T00:00:00]

select [Measures].members on 0,
non empty [DATE TIME].[Date].members on 1
from [MIA DW]

But the [Yesterday] member does not seem to belong to [DATE TIME].[Date].members?
So I guess there's two questions:
1) Can I have a new empty dimension which contains all these special members like "Yesterday" or "This Week" and "Last Week" (these last two obviously refer to a set of Dates)
2)How come the Yesterday member is not returned by the .members function?
I have merge replication set up for 6 SQLCE Subscribers. I have noticed that the MSmerge_tombstone table is growing at a fast rate, regardless of any changes to the data in the database. It seems to be consistently adding 50 rows of data to the table every 2 minutes. As the table grows it causes the SQLCE subscribers to fail with the following message:
ERROR: -2147467259 SQL Server Reconciler failed: Run
ERROR: -2147200925 : Failed to enumerate changes in the filtered articles.
ERROR: 0 : The merge process timed out while executing a query. Reconfigure the QueryTimeout parameter and retry the operation.
I'm sure that this is due to the size of the MSmerge_tombstone.
Should the MSmerge_tombstone table grow at this rate? 36,000 rows every 24hrs!
I understand there is the sp_mergecleanupmetadata stored procedure, but if I use this, does that mean that because I have to reinitialise all the subscribers, they are going to have to pull down the whole subscription again?
I have since changed a setting to make the subscription expiration 8 days instead of never expires, but we're still getting 50 rows added every 2 minutes.
SQL Server 2000 SP3. Hope someone can shed some light on this for me.
while (select MAX(wrh) from @tem1 where wrh = 0) < 1
begin
    update @tem1
    set wrh = (select toaccount from @tem1
               where reportdate = (select min(reportdate) from @tem1 where wrh = 0))
            + (select max(wrh) from @tem1)
    where wrh = (select max(wrh) from @tem1 where wrh = 0)
      and reportdate = (select min(reportdate) from @tem1 where wrh = 0)
end
This is the result while executing the loop statement:
employeeid  reportdate   reportat  leftat  deh     drh    weh     wrh
1290        29 Jan 2014  09:30     19:15   008:00  09:20  024:00  065:54
1290        28 Jan 2014  09:00     18:45   008:00  09:18  016:00  056:34
1290        27 Jan 2014  09:00     18:45   008:00  09:18  008:00  09:18
1290        25 Jan 2014  08:00     10:00   005:00  02:00  045:00  047:16  -- week end
1290        24 Jan 2014  09:17     18:45   008:00  09:01  040:00  045:16
1290        23 Jan 2014  09:19     18:46   008:00  09:06  032:00  036:15
1290        22 Jan 2014  09:17     18:47   008:00  09:05  024:00  027:09
1290        21 Jan 2014  09:16     18:35   008:00  08:46  016:00  018:04
1290        20 Jan 2014  09:18     18:55   008:00  09:03  008:00  09:03
How do I update only that week's hours, without continuing into the next week?
In my reports I am extracting the number of people who joined in each week of the year. And in one of the reports I have to extract the number of people who joined from the first week up to the last week. I have tried all the logic I can think of but nothing is working for me. Can anyone help me with this issue?
I need a SELECT statement that returns the first week of the month for a given week.
For example, if I have week number 12 (begins 2015/03/16 and ends 2015/03/22), I need it to return 9, meaning week number 9, which is the first week of March (having in mind @@DATEFIRST).
I only need to give a week number of the year and have it return the week number of the first week of that month.
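A minimal sketch, assuming SQL Server 2012+ for DATEFROMPARTS, that weeks are counted with DATEPART(WEEK) under your session's @@DATEFIRST (e.g. SET DATEFIRST 1 for Monday-based weeks), and with @Year and @Week as illustrative parameters:

DECLARE @Year int, @Week int
SELECT @Year = 2015, @Week = 12

-- Jan 1 is in week 1, so adding (@Week - 1) weeks lands inside week @Week.
DECLARE @AnyDayInWeek date = DATEADD(WEEK, @Week - 1, DATEFROMPARTS(@Year, 1, 1))

-- Week number of the week containing the 1st of that month
-- (returns 9 for week 12 of 2015 with DATEFIRST 1).
SELECT DATEPART(WEEK,
       DATEFROMPARTS(YEAR(@AnyDayInWeek), MONTH(@AnyDayInWeek), 1)) AS FirstWeekOfMonth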
I have a query that runs every day to update a summary table which has week number and day of week. What I currently do is delete all records from the summary table, then summarize all the data available from four tables and repopulate the table daily. I want to know if I can run the update query for only the week number and day of week corresponding to GETDATE(). Can I do this?
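In outline, yes: compute the current week and day once, then constrain both the delete and the re-insert to that slice. A rough sketch with hypothetical table and column names:

DECLARE @wk int, @dw int
SELECT @wk = DATEPART(WEEK, GETDATE()), @dw = DATEPART(WEEKDAY, GETDATE())

-- Clear only today's slice of the summary...
DELETE FROM dbo.WeeklySummary
WHERE WeekNumber = @wk AND DayOfWeek = @dw

-- ...and rebuild just that slice from the source data.
INSERT INTO dbo.WeeklySummary (WeekNumber, DayOfWeek, Total)
SELECT @wk, @dw, SUM(s.Amount)
FROM dbo.SourceTable s
WHERE DATEPART(WEEK, s.EventDate) = @wk
  AND DATEPART(WEEKDAY, s.EventDate) = @dw
  AND YEAR(s.EventDate) = YEAR(GETDATE())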
Function F_ISO_YEAR_WEEK_DAY_OF_WEEK returns the ISO 8601 Year Week Day of Week in format YYYY-W01-D for the date passed. W01 represents the week of the year from W01 through W53, and D represents the day of the week with 1 = Monday through 7 = Sunday.
The first week of each year starts on the first Monday on or before January 4 of that year, so the first week begins somewhere between December 29 of the prior year and January 4 of the current year.
This code creates the function and demos it for the first day, first date+60, and first date+364 for each ISO week/year from 1990 to 2030.
drop function dbo.F_ISO_YEAR_WEEK_DAY_OF_WEEK
GO
create function dbo.F_ISO_YEAR_WEEK_DAY_OF_WEEK
( @Date datetime )
returns varchar(10)
as
/*
Function F_ISO_YEAR_WEEK_DAY_OF_WEEK returns the ISO 8601
Year Week Day of Week in format YYYY-W01-D for the date passed.
*/
begin

declare @YearWeekDayOfWeek varchar(10)

select
    -- Format to form YYYY-W01-D
    @YearWeekDayOfWeek =
        convert(varchar(4), year(dateadd(dd, 7, a.YearStart))) + '-W' +
        right('00' + convert(varchar(2), (datediff(dd, a.YearStart, @Date) / 7) + 1), 2) +
        '-' + convert(varchar(1), (datediff(dd, a.YearStart, @Date) % 7) + 1)
from (
    select YearStart =
        -- Case finds start of year
        case
            when NextYrStart <= @Date then NextYrStart
            when CurrYrStart <= @Date then CurrYrStart
            else PriorYrStart
        end
    from (
        select
            -- First day of first week of prior year
            PriorYrStart = dateadd(dd, (datediff(dd, -53690, dateadd(yy, -1, aaa.Jan4)) / 7) * 7, -53690),
            -- First day of first week of current year
            CurrYrStart  = dateadd(dd, (datediff(dd, -53690, aaa.Jan4) / 7) * 7, -53690),
            -- First day of first week of next year
            NextYrStart  = dateadd(dd, (datediff(dd, -53690, dateadd(yy, 1, aaa.Jan4)) / 7) * 7, -53690)
        from (
            select
                -- Find Jan 4 for the year of the input date
                Jan4 = dateadd(dd, 3, dateadd(yy, datediff(yy, 0, @Date), 0))
        ) aaa
    ) aa
) a

return @YearWeekDayOfWeek

end
GO
-- Execute function on first day, first day+60, -- and first day+364 for years from 1990 to 2030.
select
    DT = convert(varchar(10), DT, 121),
    YR_START_DT     = dbo.F_ISO_YEAR_WEEK_DAY_OF_WEEK(a.DT),
    YR_START_DT_60  = dbo.F_ISO_YEAR_WEEK_DAY_OF_WEEK(a.DT + 60),
    YR_START_DT_365 = dbo.F_ISO_YEAR_WEEK_DAY_OF_WEEK(a.DT + 364)
from (
    select DT = getdate() union all
    select DT = convert(datetime, '1990/01/01') union all
    select DT = convert(datetime, '1990/12/31') union all
    select DT = convert(datetime, '1991/12/30') union all
    select DT = convert(datetime, '1993/01/04') union all
    select DT = convert(datetime, '1994/01/03') union all
    select DT = convert(datetime, '1995/01/02') union all
    select DT = convert(datetime, '1996/01/01') union all
    select DT = convert(datetime, '1996/12/30') union all
    select DT = convert(datetime, '1997/12/29') union all
    select DT = convert(datetime, '1999/01/04') union all
    select DT = convert(datetime, '2000/01/03') union all
    select DT = convert(datetime, '2001/01/01') union all
    select DT = convert(datetime, '2001/12/31') union all
    select DT = convert(datetime, '2002/12/30') union all
    select DT = convert(datetime, '2003/12/29') union all
    select DT = convert(datetime, '2005/01/03') union all
    select DT = convert(datetime, '2006/01/02') union all
    select DT = convert(datetime, '2007/01/01') union all
    select DT = convert(datetime, '2007/12/31') union all
    select DT = convert(datetime, '2008/12/29') union all
    select DT = convert(datetime, '2010/01/04') union all
    select DT = convert(datetime, '2011/01/03') union all
    select DT = convert(datetime, '2012/01/02') union all
    select DT = convert(datetime, '2012/12/31') union all
    select DT = convert(datetime, '2013/12/30') union all
    select DT = convert(datetime, '2014/12/29') union all
    select DT = convert(datetime, '2016/01/04') union all
    select DT = convert(datetime, '2017/01/02') union all
    select DT = convert(datetime, '2018/01/01') union all
    select DT = convert(datetime, '2018/12/31') union all
    select DT = convert(datetime, '2019/12/30') union all
    select DT = convert(datetime, '2021/01/04') union all
    select DT = convert(datetime, '2022/01/03') union all
    select DT = convert(datetime, '2023/01/02') union all
    select DT = convert(datetime, '2024/01/01') union all
    select DT = convert(datetime, '2024/12/30') union all
    select DT = convert(datetime, '2025/12/29') union all
    select DT = convert(datetime, '2027/01/04') union all
    select DT = convert(datetime, '2028/01/03') union all
    select DT = convert(datetime, '2029/01/01') union all
    select DT = convert(datetime, '2029/12/31') union all
    select DT = convert(datetime, '2030/12/30')
) a
I'm getting this when executing the code below. Going from W2K/SQL2k SP4 to XP/SQL2k SP4 over a dial-up link.
If I take away the begin tran and commit it works, but of course, if one statement fails I want a rollback. I'm executing this from a Delphi app, but I get the same from Query Analyser.
I've tried both with and without the Set XACT . . ., and also tried with Set Implicit_Transactions off.
set XACT_ABORT ON
Begin distributed Tran

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TRANSACTIONMAIN
set REPFLAG = 0 where REPFLAG = 1

update TSADMIN.TRANSACTIONMAIN
set REPFLAG = 0 where REPFLAG = 1 and DONE = 1

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.WBENTRY
set REPFLAG = 0 where REPFLAG = 1

update TSADMIN.WBENTRY
set REPFLAG = 0 where REPFLAG = 1

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.FIXED
set REPFLAG = 0 where REPFLAG = 1

update TSADMIN.FIXED
set REPFLAG = 0 where REPFLAG = 1

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.ALTCHARGE
set REPFLAG = 0 where REPFLAG = 1

update TSADMIN.ALTCHARGE
set REPFLAG = 0 where REPFLAG = 1

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TSAUDIT
set REPFLAG = 0 where REPFLAG = 1

update TSADMIN.TSAUDIT
set REPFLAG = 0 where REPFLAG = 1

COMMIT TRAN
It's got me stumped, so any ideas gratefully received. Thx
I have designed an SSIS package for an ETL process. In my package I have to read the data from some tables and then insert it into another table of the same structure.
For reading the data I have written dynamic T-SQL based on some conditions, and based on that it uses 25 different functions to populate the data into 25 different columns. The T-SQL returns correct data and works fine in Enterprise Manager, but in my SSIS package it shows me a timeout error.
I have increased and decreased the timeout to catch the error, but it is still there. I have also tried setting 0 for the CommandTimeout property.
If I use 0 for CommandTimeout then I get: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
and
Failed to open a fastload rowset for "[dbo].[P@@#$%$%%%]". Check that the object exists in the database.
I am getting this error:

Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.OleDb.OleDbException: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.

Does anybody have an idea?!