Truncate Transaction Log Through SQL Query Analyser
Oct 20, 2005
Hi guys
My website is in ASP and SQL Server 2000. My problem is that the ISP only gives me access to the database through Query Analyser. Some days the transaction log grows too large, so I want to clear it; right now I have to call them up and have them clear it. My question is: can I truncate the log file myself through Query Analyser?
I have only limited access to the database.
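For reference, on SQL Server 2000 the log can be truncated and then shrunk from Query Analyser with something like the following, provided the login has db_owner rights in the database. The database name and logical log file name here (MyWebDb, MyWebDb_log) are placeholders; the logical file name can be found in sysfiles.
BACKUP LOG MyWebDb WITH TRUNCATE_ONLY
GO
USE MyWebDb
GO
-- Shrink the physical log file down to roughly 100 MB
DBCC SHRINKFILE (MyWebDb_log, 100)
GO
Note that TRUNCATE_ONLY discards log records without backing them up, so it breaks the log backup chain; take a full backup afterwards if point-in-time restores matter.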
We have a SQL Server 6.5 server with several DBs on it. In particular, there is a critical DB with two separate devices for the data and the transaction log. The data device is 700MB and the transaction log started at 210MB. Yet truncating the log is not freeing space on the log device. We have been forced to expand the transaction log device up to 860MB, which is an outrageous size for it. We have tried DBCC CHECKTABLE(syslogs), followed by DUMP TRANSACTION <<db_name>> WITH NO_LOG, and then DBCC CHECKTABLE(syslogs) once again. We even tried to create a new DB with only the .dat file, but this didn't work either. Our server disks are almost full, and we can't grow the device any more.
I need to truncate the transaction log; however, to take a backup of it we would need 15GB of free space on the server, which we don't have. How do we just force it to truncate? I know the actual database is backed up and is OK...
SQL Server 6.5. Hi! Our transaction log is 200MB in total with 71MB of free space. I ran DBCC OPENTRAN and it shows that no active transactions exist. I ran DUMP TRANSACTION .... WITH TRUNCATE_ONLY and it doesn't clear the log either. What can I do to get the space back?
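For the SQL Server 6.5 posts above, the usual sequence is roughly the following, run against the database in question (mydb is a placeholder). DBCC CHECKTABLE(syslogs) is only there to report how many pages the log is actually using after the dump.
USE mydb
GO
DBCC OPENTRAN (mydb)          -- confirm nothing is holding the log open
GO
DUMP TRANSACTION mydb WITH NO_LOG
GO
DBCC CHECKTABLE (syslogs)     -- report log usage after the truncate
GO
Bear in mind that on 6.5 this frees space inside the log device; it does not make the device itself any smaller.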
A number of procedures were run that filled the transaction log. Can I truncate the log during regular working hours, or should I put the database into single-user mode first and then truncate?
I have a mere 100MB db with a 4GB transaction log. I want to truncate the log as I understand that truncating it will shrink the log by removing the transactions that have already taken place. However, the option to do a transaction backup is greyed out. I suspect this is from the db being in transactional replication with another server; however, I don't know for sure.
Are there any other ways that I can shrink the transaction log? I would like to shrink it without taking the db offline, either.
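One way to confirm whether replication is what is pinning the log: DBCC OPENTRAN reports not only the oldest active transaction but also the oldest non-distributed replicated transaction, so if the log is waiting on replication it shows up there. The names below are placeholders, and both commands can run while the database stays online.
DBCC OPENTRAN ('MyPublishedDb')
GO
-- Once nothing is pinning the log, shrink the physical log file
-- (the logical file name comes from sysfiles)
USE MyPublishedDb
GO
DBCC SHRINKFILE (MyPublishedDb_log)
GO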
We do a full backup every day and the recovery model is Full, but we have never done a transaction log backup, so the transaction log files keep growing. What should I do? I think I should set the recovery model to Simple. We actually run DBCC SHRINKDATABASE after the full backup every day, but the transaction log file is still around 15GB.
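A sketch of the two usual options, with placeholder names (MyDb, MyDb_log). Either keep the Full recovery model and start backing the log up on a schedule, or, if point-in-time restore is genuinely not needed, switch to Simple. In both cases the oversized file still has to be shrunk once, because without log backups the whole log stays active and DBCC SHRINKDATABASE cannot reclaim it.
-- Option 1: stay in FULL recovery and schedule regular log backups
BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb_log.trn'
GO
-- Option 2: switch to SIMPLE if point-in-time restore is not required
ALTER DATABASE MyDb SET RECOVERY SIMPLE
GO
-- Either way, shrink the log file once afterwards (target size in MB)
USE MyDb
GO
DBCC SHRINKFILE (MyDb_log, 1024)
GO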
Is it possible to TRUNCATE a table and BCP data into the same table in one TRANSACTION? My problem is that I want to refresh a table (delete the contents and append new data via BCP) without disturbing running applications. Can I run BCP from a SQL script or a stored procedure?
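A sketch of one way to do this. bcp itself is a separate command-line utility, so even when launched from a script (e.g. via xp_cmdshell) it runs on its own connection and cannot take part in the calling transaction; BULK INSERT is the in-T-SQL equivalent and can. The table name, file path, and terminators below are placeholders.
BEGIN TRANSACTION
    TRUNCATE TABLE dbo.MyTable          -- TRUNCATE is transactional and rolls back with the transaction
    BULK INSERT dbo.MyTable
        FROM 'C:\feeds\mytable.dat'
        WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', TABLOCK)
COMMIT TRANSACTION
Readers either see the old contents or the refreshed ones, although they will block behind the load for as long as the transaction holds its locks.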
Within SQL Enterprise Manager I am unable to truncate the transaction log using the shrink file option window (although I can shrink the database file), nor can I truncate the log using command-line SQL in a Query Analyzer window (DUMP TRAN < > WITH TRUNCATE_ONLY)....
I used SSMS to do a full backup of both the database and the transaction log, and selected "Truncate ..." in the options for the log backup. The log doesn't truncate.
I have looked at the reasons logs don't truncate in Books Online and cannot find any that apply. There are no open transactions, and in sys.databases log_reuse_wait is 0.
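If log_reuse_wait is already 0 (NOTHING), the space inside the log has most likely been freed; the physical file just never gets smaller on its own, so it still has to be shrunk explicitly. A minimal check-and-shrink, with placeholder names:
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'MyDb'
GO
USE MyDb
GO
SELECT name, size FROM sys.database_files WHERE type_desc = 'LOG'   -- find the logical log file name
DBCC SHRINKFILE (MyDb_log, 512)   -- target size in MB
GO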
Is it possible to truncate the transaction log and shrink the database while the database is being used, or does the database become unavailable during these operations?
Hello all, I have a very simple script which I use to truncate and reclaim space on all the transaction logs in a SQL Server 2005 database. However, I have some Sharepoint db names I can't change that have dashes ('-') in the names, e.g., SharePoint_AdminContent_dc27334f-fb2d-4453-9764-5d8b730fb9e1. The script won't back up those databases because it has a problem with the dashes in the names. Does anyone have any thoughts on how I could modify the script to get it to work? Here is the script:
ALTER PROCEDURE [dbo].[SP_GlobalTruncate_transaction_logs]
AS
Set quoted_identifier off
DECLARE @dataname varchar(300)
DECLARE @dataname_header varchar(75)
DECLARE datanames_cursor CURSOR FOR SELECT name FROM sysdatabases
WHERE name not in ('master', 'pubs', 'tempdb', 'model', 'northwind')
PRINT 'Free space removed and transaction log truncated for each user database'
GO
And here is the error I get: Database SHAREPOINT_ADMINCONTENT_DC27334F-FB2D-4453-9764-5D8B730FB9E1
Msg 102, Level 15, State 1, Line 1
Incorrect syntax near '-'.
Msg 319, Level 15, State 1, Line 1
Incorrect syntax near the keyword 'with'. If this statement is a common table expression or an xmlnamespaces clause, the previous statement must be terminated with a semicolon.
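The loop body isn't shown above, but the error pattern (syntax error near '-') is what happens when the raw database name is concatenated straight into dynamic SQL. Wrapping the name with QUOTENAME (or hard-coded square brackets) wherever it is used fixes it; a rough sketch of what the inside of the cursor loop could look like:
DECLARE @sql nvarchar(2000)
-- QUOTENAME adds [brackets] so names containing dashes parse correctly
SET @sql = N'BACKUP LOG ' + QUOTENAME(@dataname) + N' WITH TRUNCATE_ONLY'
EXEC (@sql)
SET @sql = N'DBCC SHRINKDATABASE (' + QUOTENAME(@dataname) + N')'
EXEC (@sql)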
Database is in simple recovery mode and published with transactional replication (push subscription); there is just one subscriber, but the database is huge. I don't want to overwrite the schema at the subscriber either.
I had to run an ALTER DATABASE command on a published database; it generated so much log that an extra drive had to be added, along with an extra log file, to accommodate it all.
The problem I have is that I'd like to clear the log so I can drop the temporary log file and give the drive back, but I cannot.
I have tried DBCC SHRINKFILE with the EMPTYFILE option but it never clears; I have also tried it with the NOTRUNCATE and TRUNCATEONLY options (mainly out of desperation).
I do not need to worry about point in time restore as a full backup is taken before and after the operation. After which the database will be put back into Full recovery mode.
I have looked at log_reuse_wait_desc and it says 'REPLICATION' for this database, so I am now thinking the file cannot empty because replication is keeping one of the VLFs active. I tried dropping and recreating the subscription, hoping it might free something up and I could get somewhere, but it made no difference.
Do I have to remove replication completely to get round this? Surely not.
I have also tried putting the database back into full recovery mode and doing a full DB backup and a transaction log backup, but it's made no difference, which is also what makes me think a portion of the log is still active because of replication, and perhaps the transactions have not gone through to the subscriber, which raises another question: why not?
I have not tried restarting SQL Server, as I'd like to find a way out of this without having to do that; besides, I do not think it would make any difference anyway.
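Two things that may help here, both with placeholder names. First, confirming that replication really is the blocker; second, the commonly cited last-resort workaround, sp_repldone with the reset flag, which marks every pending transaction in the published database as already distributed so the log can clear. Be aware that this can leave the subscriber out of sync, so the subscription usually has to be reinitialized (or validated) afterwards, and the Log Reader Agent should be stopped while it runs.
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'MyPublishedDb'
GO
USE MyPublishedDb
GO
-- Mark all pending replicated transactions as distributed so the log can clear
EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL, @numtrans = 0, @time = 0, @reset = 1
GO
After that, a log backup (or a checkpoint in simple recovery) should let DBCC SHRINKFILE with the EMPTYFILE option finally succeed.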
I created transactional replication on a database and set up pull subscriptions on each subscriber to run at a scheduled time once a day. The scheduled start time on each subscriber can differ. The transaction log on the publishing database will eventually consume all available disk space. Is it possible (and safe) to shrink or truncate the transaction log file for the publishing database before all the subscribers have completed their daily pull? If not, how can I manage disk space for the transaction log on the publishing database and ensure all transactions are replicated to the subscribers?
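As far as I understand it, log truncation for a transactional publication waits on the Log Reader Agent writing the commands into the distribution database, not on the subscribers pulling them, so the once-a-day pull schedule by itself should not pin the publisher's log. Keeping the Log Reader Agent running continuously plus regular log backups is normally enough. To see what is still waiting on the log reader (placeholder names throughout):
USE MyPublishedDb
GO
EXEC sp_repltrans            -- transactions marked for replication but not yet distributed
GO
BACKUP LOG MyPublishedDb TO DISK = 'E:\Backups\MyPublishedDb_log.trn'
GO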
I have one database configured with the Recovery Model "Simple".
I am getting a lot of "transaction log full" messages... is this supposed to happen?
Another question is:
Imagine I am in the middle of a big SELECT INTO statement... and in another query window I run the backup/truncate of the log. Am I going to lose information in the other batch (the SELECT INTO)?
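Two quick checks help here (MyDb is a placeholder). Note that even in Simple recovery the log has to keep every record belonging to a transaction that is still running, so one big SELECT INTO can fill it on its own; and because truncation never touches the active portion of the log, running a truncate from another window will not make the SELECT INTO batch lose anything.
DBCC SQLPERF (LOGSPACE)       -- how full each database's log is
GO
DBCC OPENTRAN ('MyDb')        -- the oldest open transaction holding the log
GO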
We are using SQL Server 2005 (SP1). I have created a maintenance plan that backs up the database every night. The problem is that the transaction log keeps growing. I have been told that a full backup will automatically truncate and shrink the transaction log; however, this is not happening. How can I truncate and shrink the transaction log after a full backup as part of our maintenance plan? Thank you.
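Under the Full recovery model a full backup never truncates the log; only log backups mark the space as reusable, and even then the file only gets smaller when it is shrunk. One approach is to add an extra T-SQL step to the plan (or an Agent job step) after the nightly full backup, along these lines, with placeholder names:
BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb_log.trn'
GO
USE MyDb
GO
DBCC SHRINKFILE (MyDb_log, 1024)   -- pull the file back to ~1 GB
GO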
This is what I did. Since I need to maintain five SQL Servers, I thought I would build a repository, so on my desktop (running SQL Server) I created a table named master_dbscript with the following fields:
server_name varchar(20), dbname varchar(20), db_create_scripts text
Using Enterprise Manager -- All Tasks -- Generate SQL Scripts, I cut and pasted the output into an insert statement in Query Analyser. The following is the insert statement:
insert into master_dbscript values ('isd11t','test','ALTER TABLE [dbo].[child] DROP CONSTRAINT FK_child_parent GO /****** Object: Trigger dbo.test_patcase Script Date: 25/08/2000 12:10:09 ******/ if exists (select * from sysobjects where id = object_id(N'[dbo].[test_patcase]') and OBJECTPROPERTY(id, N'IsTrigger') = 1)drop trigger [dbo].[test_patcase] GO ')
Oops, it created all the objects in the database where I tried to run the insert statement. Thank goodness I tried this with the test database.
When I tried the same thing with bcp it worked fine and I was able to see the record in my table (one record). Note that you cannot use DTS because it supports a maximum of 8000 characters only.
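For what it's worth, the INSERT above blows up because the generated script text contains single quotes that end the string literal early, so the remainder gets executed as T-SQL in its own right. The standard fix is to double every single quote inside the literal; the GO separators also have to be stripped from the stored text (or the value kept on one line), because Query Analyser splits batches on any line consisting of just GO, even inside a string. A minimal corrected sketch of the same insert:
INSERT INTO master_dbscript (server_name, dbname, db_create_scripts)
VALUES ('isd11t', 'test',
 'ALTER TABLE [dbo].[child] DROP CONSTRAINT FK_child_parent
  if exists (select * from sysobjects where id = object_id(N''[dbo].[test_patcase]'')
             and OBJECTPROPERTY(id, N''IsTrigger'') = 1)
      drop trigger [dbo].[test_patcase]')
Loading the text with bcp, as you did, sidesteps the quoting problem entirely because bcp does not parse the data as T-SQL.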
I am executing the following query in the query analyser.
"select * from alien119700 order by alienid"
In the message pane it shows
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 4 ms.
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms.
(43 row(s) affected)
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 454 ms.
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms.
We use SQL 2K (Service Pack 1). Our Query Analyser freezes often, so we lose all the query work in progress. Does anyone know if Version 2 has a fix? Please help.
I noticed that Query Analyser is much quicker than Enterprise Manager when I access my database at my hosting provider... is there any way to see the properties of table X, for example, as one can do with EM...
I would be grateful if you could provide me with a sample query concerning this issue...
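From Query Analyser, most of what Enterprise Manager's table property pages show can be pulled with the standard system procedures and the INFORMATION_SCHEMA views, for example (replace X with the table name):
EXEC sp_help 'X'          -- columns, identity, indexes, constraints
EXEC sp_helpindex 'X'     -- index details only
EXEC sp_spaceused 'X'     -- row count and space used
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'X'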