Cannot Shrink Tlog File And DB Not Replicating But DBCC OPENTRAN Shows HA Process
Nov 21, 2014
I've already shrunk the tlog from 350 GB to 313 GB. My DB server (2008 R2 SP2) cannot be restarted, and the DB cannot go offline or be detached due to company policy. Even after changing from full to simple recovery mode the DB still has a 313 GB tlog file, and when I run DBCC OPENTRAN I get:
Transaction information for database 'DB'.
Replicated Transaction Information:
Oldest distributed LSN : (0:0:0)
Oldest non-distributed LSN : (2882:26:1)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
This suggests the DB is participating in a high-availability process such as replication, mirroring, or log shipping.
So I ran EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1.
This is useful when there are replicated transactions in the transaction log that are no longer valid and you want to truncate the log.
But I get an error:
Msg 18757, Level 16, State 1, Procedure sp_repldone, Line 1
Unable to execute procedure. The database is not published. Execute the procedure in a database that is published for replication.
There are currently 9 connections and all are sleeping. What else can I try in order to shrink the tlog file?
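One workaround often suggested for this exact error (a hedged sketch, assuming the database only has leftover replication markers and is not actually published; sp_replicationdboption needs the replication components installed and may need a distributor configured):

-- Temporarily mark the DB as published so sp_repldone will run
EXEC sp_replicationdboption @dbname = 'DB', @optname = 'publish', @value = 'true';
-- Clear the stale replicated-transaction markers in the log
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1;
-- Remove the publish flag again
EXEC sp_replicationdboption @dbname = 'DB', @optname = 'publish', @value = 'false';
-- Then checkpoint and retry the shrink ('DB_log' and the target size are placeholders)
CHECKPOINT;
DBCC SHRINKFILE ('DB_log', 1024);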
DBCC OPENTRAN shows "REPLICATION" on a server that is not configured for replication. The transaction log is almost as large as the database (40GB) with a Simple recovery model. I would like to find out how the log can be truncated in such a situation.
I keep seeing this returned from running a DBCC OPENTRAN:

Transaction information for database 'Live_App'.
Oldest active transaction:
SPID (server process ID) : 92
UID (user ID) : 1
Name : DTCXact
LSN : (12837:1924:1)
Start time : Oct 4 2004 8:54:03:570AM
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

I don't see anywhere in code that begins a transaction with the name DTCXact explicitly. Is this a generic name for any transaction that is opened without an explicit name? The problem I am having with this is that sometimes it will start and may not get committed or rolled back for quite some time; I have seen it remain for over 1 1/2 hours. Would that be caused by the application not cleaning it up? Your help in explaining the source of this will be appreciated. I did find an entry on Microsoft.com that used the word DTCXact; it was talking about transaction propagation from resource manager to application. I'm not sure if this applies to what I am seeing here or not. Thank you. Kalvin
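For what it's worth, DTCXact appears to be the generic name SQL Server assigns to a transaction enlisted through MS DTC (a distributed transaction), rather than something your code names explicitly. To see what that session is actually doing while the transaction is open (SPID 92 is taken from the output above):

DBCC INPUTBUFFER (92);  -- shows the last statement sent by SPID 92
EXEC sp_who2 92;        -- shows status, host name, and program name for the session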
I have a 2.8 GB SQL 7 database with 1.6 GB used space. I want to shrink it to 1.8 GB, but after I issued DBCC SHRINKFILE (datafile, 1800), it didn't shrink the file; instead the used space expanded to 2.7 GB. I tried it on a test server, and the command worked every time, although the time to complete the task tended to increase each time.
I did a DBCC DBREINDEX (fill factor 90) and updated usage, and the size didn't change. DBCC CHECKDB, CHECKCATALOG and SHOWCONTIG were all performed and there wasn't any error or fragmentation.
Now I'm wondering what the actual size of my database is: 1.6 GB or 2.7 GB? If there's a way I can get back close to my original used size, then I can try to shrink the file again. Any comment is welcome. Thanks in advance.
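On SQL 7 the space-usage metadata frequently drifts out of date, which can make the "used" figure meaningless. A sketch of correcting it before retrying the shrink (the logical name 'datafile' is carried over from the command above):

DBCC UPDATEUSAGE (0);                      -- fix space accounting in the current database
EXEC sp_spaceused @updateusage = 'TRUE';   -- report the corrected reserved/used sizes
DBCC SHRINKFILE (datafile, 1800);          -- then retry the shrink to 1.8 GB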
I'm running full recovery mode and doing log shipping so changing to simple mode is not an option.
I'm running BACKUP LOG right beforehand, and when I check, it says my log is 99% free (on a 180 GB log).
When I run DBCC LOGINFO('dbname') right before and after, I see a dozen entries, and they are scattered all over the file rather than just at the starting offsets. The BACKUP LOG doesn't clean out the file completely.
Is there any explanation for this? Even though I'm doing this at off hours, is it possible that someone on the site in that split second is putting new entries in the log? Why are they spread out, though? If they only put entries at the beginning, I could still shrink the file to a normal size.
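When active VLFs sit toward the end of the file, the usual approach is to alternate log backups and shrinks so the active portion wraps back to the start of the file (a sketch; database, file, and path names are assumptions, and regular log backups keep your log shipping chain intact):

-- Repeat this pair a few times: each log backup lets the active VLF move on,
-- and each shrink releases whatever free VLFs are now at the end of the file.
BACKUP LOG dbname TO DISK = 'D:\Backups\dbname_log.trn';
DBCC SHRINKFILE ('dbname_log', 2048);   -- target size in MB; adjust to taste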
I am considering some disaster recovery scenarios.
Let's assume my MDF is gone - the disks are dead.
The LDF is on a different disk channel. Let's assume it's fine.
Can I make a "final" TLog backup from the "good" LDF file?
Maybe copy some-earlier-MDF file into place, would that enable a TLog backup from the LDF file?
Because if I can, then I have a route to a zero-loss recovery: make a final TLog backup, and then restore the whole lot from the last FULL plus all TLog backups thereafter.
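Yes - this is what a tail-log backup is for, and as long as the LDF is intact it usually works even with the MDF gone (a hedged sketch; database and path names are assumptions):

-- NO_TRUNCATE lets the log be backed up even though the data file is unavailable
BACKUP LOG MyDb
TO DISK = 'D:\Backups\MyDb_tail.trn'
WITH NO_TRUNCATE;
-- Then: RESTORE DATABASE from the last FULL, apply all subsequent log backups,
-- and finally restore this tail backup WITH RECOVERY.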
I have a very big database and a number of people are working on it. Its log file size is increasing too much every day. I am taking a log backup every 30 minutes.
I don't know whether I need to truncate the log file after taking the log backup or not. I am taking a differential backup every day and a full backup every week.
Please tell me: do I need to truncate the log file to reduce the file size, or should I leave it as it is?
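No separate truncation step is needed: in the full recovery model, BACKUP LOG itself marks the backed-up portion of the log as reusable. The file will not physically shrink on its own, though; if you need to reclaim disk space once, shrink it explicitly (a sketch; names and sizes are assumptions):

BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.trn';  -- your existing 30-minute job
DBCC SHRINKFILE ('MyDb_log', 4096);                   -- optional one-time reclaim to ~4 GB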
I have a 14GB database whose data content is legacy and is described as static. The log file is significantly large and continues to change size, mostly increasing by 2-5GB a day (~60GB now) as I have observed over the past two days; it shrank once unexpectedly by a few GB. The instance is hosting other databases such as EnterpriseVaultDirectory, EnterpriseVaultMonitoring, EnterpriseVaultStore, and NetPerfMon - might these seemingly unrelated data sources be involved?
I am trying to run a trace to find traffic against the tables - no such luck so far.
Web applications are running queries against it, but there should be no UPDATEs being applied. I can only suspect that other unknown applications are performing operations, but I have yet to find unexplained connections.
Are there any other reasons why this type of log file activity would happen merely due to queries or stored procedure calls?
Let's also state that mirroring, indexing, and replication are not at play. I know the Full recovery model is not necessary, as Simple should suffice, but I am still hunting down why UPDATEs might be getting through. I realize I could adjust the migrated SQL 2000 security model to deny updates and see what breaks, but I would rather not take that initiative yet.
The installation is a fresh SQL 2005 Standard setup with SP2 applied; the databases were upgraded.
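One way on SQL 2005 to confirm whether writes are actually hitting the tables, without a trace (a sketch; run it connected to the database in question so OBJECT_NAME resolves, and 'LegacyDb' is a placeholder):

SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.user_updates,          -- inserts/updates/deletes since the instance started
       s.last_user_update
FROM sys.dm_db_index_usage_stats AS s
WHERE s.database_id = DB_ID('LegacyDb')
  AND s.user_updates > 0
ORDER BY s.user_updates DESC;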
Is it possible to manually force/call/start the system AUTOSHRINK process? I have an issue that appears only when the engine shrinking process is running and I need this to reproduce my bug.
I know how to start a "regular" database shrink process with:DBCC SHRINKDATABASE(xxxx);, but this is not the same as one started from the database engine.
Hi, I started 2 DBCC processes using SQL Scheduler overnight. One is flagged as rollback and the other is still running, and they are blocking each other. I've tried to kill them using KILL <spid> without success. How can I remove these processes? Shall I shut down the server?
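A KILL on a session that is already rolling back cannot be hurried along, but you can at least check its progress (a sketch; 53 is a placeholder SPID):

KILL 53 WITH STATUSONLY;
-- Reports the estimated rollback completion percentage and time remaining.
-- If it stays at 0% for a long time, restarting the service may be the only option,
-- at the cost of crash recovery running on startup.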
Hello all, I am running into an interesting scenario on my desktop. I'm running developer edition on Windows XP Professional (9.00.3042.00 SP2 Developer Edition). OS is autopatched via corporate policy and I saw some patches go in last week. This machine is also a hand-me-down so I don't have a clean install of the databases on the machine but I am local admin.
So, starting last week after a forced remote reboot (also a policy) I noticed a few of the databases didn't start back up. I chalked it up to the hard shutdown and went along my merry way. Friday, however, I know I shut my machine down nicely, and this morning when I booted up, I was in the same state I was last Wednesday. 7 of the 18 databases on my machine came up with:
FCB::Open: Operating system error 32 (The process cannot access the file because it is being used by another process.) occurred while creating or opening file 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Test.mdf'. Diagnose and correct the operating system error, and retry the operation. It also logs: FCB::Open failed: Could not open file C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Test.mdf for file number 1. OS error: 32 (The process cannot access the file because it is being used by another process.).
I've caught references to the auto close feature being a possible culprit, no dice as the databases in question are set to False. Recovery mode varies on the databases from Simple to Full. If I cycle the SQL Server service, whatever transient issue it was having with those files is gone. As much as I'd love to disable the virus scanner, network security would not be amused. The data and log files appear to have the same permissions as unaffected database files. Nothing's set to read only or archive as I've caught on other forums as possible gremlins. I have sufficient disk space and the databases are set for unrestricted growth.
Any thoughts on what I could look at? If everything came up in RECOVERY_PENDING it would make more sense to me than the hit-or-miss behavior I'm experiencing now.
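A quick check to list which databases were left in trouble after boot, rather than eyeballing Object Explorer (a small sketch for 2005):

SELECT name, state_desc
FROM sys.databases
WHERE state_desc <> 'ONLINE';   -- e.g. RECOVERY_PENDING, SUSPECT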
Hi, I am using VWD 2008, SQL Express 2005, Reporting Services, Win-XP, IIS5. Basically, let's say I have 2 pages:

Page1 has a SqlDataSource control that populates a GridView from a table in a database file myDB.mdf (no code-behind).
Page2 has a ReportViewer control that shows a report with data from the same table in myDB.mdf from the report server (no code-behind).

I have attached myDB.mdf to SQL Server Express using SQL Server Management Studio Express. If I first open Page2 to display the ReportViewer, it works OK (likewise when using the Report Manager). Now this is the problem: if after that I try to open Page1, I get an error message:

Cannot open user default database. Login failed. Login failed for user 'myServer\ASPNET'. Exception Details: System.Data.SqlClient.SqlException: Cannot open user default database. Login failed. Login failed for user 'myServer\ASPNET'.

Then I have to restart SQL Server to fix it. Now I can open Page1 OK, but if after this I try to open Page2 (ReportViewer) again, I get this error:

"An error has occurred during report processing. Cannot create a connection to data source 'my_Datasource'."

And this error if I open the report using the Report Manager:

"An error has occurred during report processing. Cannot create a connection to data source 'my_Datasource'. Unable to open the physical file "C:\Inetpub\wwwroot\Website\App_Data\myDB.mdf". Operating system error 32: "32(The process cannot access the file because it is being used by another process.)"."

Now if I check Management Studio Express again, I can see that myDB.mdf was detached. It seems to be there, but it has no tables or definitions, so I have to attach it again. Do you know how to fix this? Thanks in advance, Ed
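A common cause of this ping-pong is mixing two attach mechanisms: one page reaching the MDF through AttachDbFilename (which spins up a user instance that grabs the file) while the other uses the copy attached to the named instance. One fix, sketched here with assumed names and paths, is to attach the file once to the SQLEXPRESS instance and point both pages at it by database name:

-- Run once against .\SQLEXPRESS (e.g. in Management Studio Express)
CREATE DATABASE myDB
ON (FILENAME = 'C:\Inetpub\wwwroot\Website\App_Data\myDB.mdf')
FOR ATTACH;

Then use a connection string such as Data Source=.\SQLEXPRESS;Initial Catalog=myDB;Integrated Security=True in both pages instead of AttachDbFilename, so only one process ever owns the file.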
I have a SQL Agent job that runs at 4:15 in the morning. The job has 5 steps, each step only runs if the preceding step succeeds. The second step, which calls an SSIS package that does the main processing, appears to finish as it goes on to the next step; however, when looking in 'View History' there are 2 entries for this step - the first one shows it as still running (Circled Green Arrow) but with a start and end time. The second entry says the job succeeded.
I have been seeing conflicts, such as deadlocks, with later jobs. I suspect this job is causing the conflicts - maybe the package is still running in the background instead of having actually completed?
Under what conditions might a job step show in the job history as both running AND completed successfully?
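It may help to compare what Agent actually recorded for that step against what 'View History' renders (a sketch; the job name is a placeholder):

SELECT j.name, h.step_id, h.step_name,
       h.run_status,                 -- 0 failed, 1 succeeded, 2 retry, 3 canceled, 4 in progress
       h.run_date, h.run_time, h.run_duration
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE j.name = 'Nightly 4:15 job'
ORDER BY h.run_date DESC, h.run_time DESC;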
I have production SQL Server 7 SP3 on Windows NT. I had an 8 GB data file of which 5 GB were used and 3 GB were unused. I wanted to take back the unused 3 GB. So I did the following with the EM GUI:
1. I tried to "truncate free space from the end of the file". It didn't truncate the file; I believe there was no empty space at the end of the file.
2. Next I chose the option to "shrink file to 5GB". And to my horror, the data file, instead of taking just 5 GB, took the empty space also, and the used size of the data file went to 8 GB.
Any idea what's going on? TIA, SP
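Running the two phases by hand sometimes behaves better than the EM dialog (a sketch for SQL 7; 'datafile' is the assumed logical file name, which sp_helpfile will confirm):

DBCC SHRINKFILE (datafile, NOTRUNCATE);    -- phase 1: move used pages toward the front of the file
DBCC SHRINKFILE (datafile, TRUNCATEONLY);  -- phase 2: release the now-free space at the end to the OS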
Hi, I'm trying to upload the ASPNETDB.MDF file to a hosting server via FTP, and every time it gets about halfway (40% or 50%) I get an error message saying: "550 ASPNETDB.MDF: The process cannot access the file because it is being used by another process", and the upload fails. I'm using SQL Express. Does anybody know the cause? Thanks a lot.
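SQL Express keeps the MDF open while the database is attached, so the FTP client can't read it. If the file is attached to the instance, detach it first, copy, then re-attach (a sketch; the path is a placeholder). Stopping the SQL Express service for the duration of the upload achieves the same thing.

USE master;
EXEC sp_detach_db 'ASPNETDB';   -- releases the file lock
-- ...upload ASPNETDB.MDF via FTP here...
EXEC sp_attach_db 'ASPNETDB', 'C:\path\to\ASPNETDB.MDF';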
I have the following code in a Script Task. However, tablesInFile.Rows is sorted by name in ASCII order. Is there any way to get the "natural" order of the Excel workbook, or just the first tab?

connectionString = "Provider=Microsoft.Jet.OLEDB.4.0;" & _
    "Data Source=" & excelFileName & _
    ";Extended Properties=Excel 8.0"
excelConnection = New OleDbConnection(connectionString)
excelConnection.Open()
' tablesInFile = excelConnection.GetSchema("Tables")   (earlier attempt)
' Restrict the schema rowset to user tables: the fourth restriction of
' OleDbSchemaGuid.Tables is TABLE_TYPE, so the value should be "TABLE".
tablesInFile = excelConnection.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, _
    New Object() {Nothing, Nothing, Nothing, "TABLE"})
tableCount = tablesInFile.Rows.Count
For Each tableInFile In tablesInFile.Rows
If I want to download a file but don't know whether it's available yet (actually, I'm positive it won't be available for some time), how do I make the FTP Task retry/wait until the file shows up in the FTP folder?
I have a File System Task Copy file operation to copy a file in an SSIS package. The package when scheduled as a job fails with the following error:
The process cannot access the file 'C:\ETL\Consignment\Apple\AppleRawFile.txt' because it is being used by another process.".
However, when I right-click on the package and execute it manually from Integration Services, it runs successfully without any problem. I am not certain how to resolve this issue; any input will be much appreciated.
Error: 0xC002F304 at Rename file 1, File System Task: An error occurred with the following error message: "The process cannot access the file because it is being used by another process.".
When running two File System Tasks one after the other on the same file, the file is still locked when the second task runs, resulting in the error above.
I found a workaround by adding an Execute Process Task before the second File System Task that pings localhost. This results in a 5-second delay, but there must be a better solution. Anyone?
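A cleaner delay than pinging localhost is an Execute SQL Task running a WAITFOR, assuming the package already has a SQL Server connection available:

WAITFOR DELAY '00:00:05';   -- pause 5 seconds before the second File System Task runs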
On a database with a log file that has unrestricted file growth, the file size exceeds 1 GB. Since this excessive growth was caused by a badly written update statement, I want to reduce the size to about 200 MB. After reading the BOL I was convinced that I only needed to take two actions: truncate the log (to create some free space in the log file) and shrink it. These are the statements I executed:
BACKUP LOG ODS WITH TRUNCATE_ONLY
DBCC SHRINKFILE (ODS_Log, TRUNCATEONLY)
After I executed these statements - BTW, there were no errors - the file size was still the same. Can somebody tell me why?
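TRUNCATEONLY on a log file only releases space beyond the last active VLF, so if an active VLF sits near the end of the file, nothing is freed. Checking where the active VLFs are and giving SHRINKFILE a target size usually explains and fixes it (a sketch, reusing your names):

DBCC LOGINFO ('ODS');                -- rows with Status = 2 are active VLFs
BACKUP LOG ODS WITH TRUNCATE_ONLY;   -- as before, to mark VLFs reusable
DBCC SHRINKFILE (ODS_Log, 200);      -- a target size in MB instead of TRUNCATEONLY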
I have created a new database in SQL Server 7 with the auto grow options set to on. I then added a whole load of new data to the table which made the transaction log file grow to 20Mb.
I then truncated the transaction log to remove all the completed transactions. Enterprise Manager now shows the log as having only 3MB of data in it, but the file is still 20MB.
I have tried setting the 'truncate log on checkpoint' option, and tried running the DBCC SHRINKDATABASE and DBCC SHRINKFILE commands, but these seem to have no effect on the file size.
Does anyone have any idea what I might have missed/done wrong?
I have Disk Xtender 2000, which was made by OTG Software, then Legato, and now EMC. I have an NT 4.0 PC with Microsoft SQL 2000. I have a drive-space problem and need to shrink a 38 gig .ldf file called OTG03.ldf; I also have a 2 gig .mdf file called OTG03.mdf. How can I shrink this .ldf file? I'm not a DBA, so being specific is greatly appreciated.
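Specific steps for SQL 2000, sketched with assumed names (the first step looks up the logical log file name, which often differs from the physical .ldf name):

USE OTG03;
EXEC sp_helpfile;                       -- note the logical 'name' of the 38 GB log file, e.g. OTG03_log
BACKUP LOG OTG03 WITH TRUNCATE_ONLY;    -- discard inactive log records (this breaks the log backup chain)
DBCC SHRINKFILE (OTG03_log, 500);       -- shrink to ~500 MB, using the logical name from sp_helpfile

Afterwards, take a full database backup, since TRUNCATE_ONLY invalidates subsequent log backups.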
My DB's recovery model is SIMPLE. Is it OK to schedule a SHRINKFILE on just the log files regularly? Any GOOD vs BAD points about my plan? I want to do this because the log files keep increasing. Right now, the log files are set to ENABLE AUTOGROWTH, FILE GROWTH = 10%, RESTRICTED FILE GROWTH = 2,097,152.
I am getting growth alerts and need to shrink a log file that is 99% full, but it won't let me. Here is the message I get: The transaction log for database 'SOM_System' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases.
What can I do in order to shrink this log file? Thanks
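The first step is to look at that column and see what is actually pinning the log (a small sketch):

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'SOM_System';
-- LOG_BACKUP: take a log backup first. ACTIVE_TRANSACTION: find the open
-- transaction with DBCC OPENTRAN. REPLICATION: clear the replication markers.

Once the wait reason is cleared, the shrink should succeed.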
I have a database whose '.mdf' file is just huge; the physical size on disk is approximately 110 gig. I run weekly maintenance plans to rebuild its indexes.
I ran the 'sp_spaceused' command and got the following results:
I was trying to clear the unused space; the numbers tell me that I have 81 gig of space unused, but no matter what I do the '.mdf' file will not shrink.
I ran the following command: DBCC SHRINKDATABASE ({dbname}, 10,TRUNCATEONLY)
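TRUNCATEONLY never moves pages, so if used pages sit at the end of the data file, the 81 gig cannot be released. A shrink with a target size compacts the file first (a sketch; the logical file name is an assumption - check sys.database_files):

DBCC SHRINKFILE ('dbname_data', 40000);  -- moves pages to the front, then releases; target in MB
-- Shrinking heavily fragments indexes, so plan a rebuild afterwards - and bear in
-- mind your weekly rebuilds themselves need large free space inside the file, so
-- the file may simply grow again.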
Hello, the database log file on MS SQL 2000 SP3 is 27 GB while the database itself is 305 MB. I attempted to shrink the log file with Enterprise Manager, but it wants to use a minimum of 26.xxx MB, approximately 27 GB of disk space. When running DBCC SHRINKFILE (file_name), the message returned is 'all virtual logs are in use'. Any ideas how to reduce the log file? Thanks in advance.
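'All virtual logs are in use' on SQL 2000 usually means the active VLF is at the end of the file. The old trick is to generate some throwaway log activity so the active VLF wraps back to the front, then truncate and shrink again (a hedged sketch; database, file, and table names are placeholders):

-- Cycle the log head: small dummy transactions push the active VLF forward
CREATE TABLE dbo.wrap_log (c INT);
DECLARE @i INT
SET @i = 0
WHILE @i < 10000
BEGIN
    INSERT INTO dbo.wrap_log VALUES (@i);
    SET @i = @i + 1;
END
DROP TABLE dbo.wrap_log;
BACKUP LOG MyDb WITH TRUNCATE_ONLY;
DBCC SHRINKFILE (MyDb_log, 200);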
I have a few tables which are replicated and partitioned. They also have an archival process, and I want to avoid having to run that same process on the subscriber.
Replication of partition switching is easy. However, I am not sure how to replicate the MERGE RANGE operations and the dropping of emptied filegroups/files.
There are the following article options:
Copy filegroup associations
Copy table partitioning schemes
Copy index partitioning schemes
I am not sure if these are enough to implement the replication of MERGE RANGE and empty filegroup/file drops.
I could not find an option to copy partition functions.
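As far as I know, copying the table partitioning scheme brings the associated partition function along with it, which may be why there is no separate option. For the pieces the article options don't cover (the MERGE RANGE and the file/filegroup drops), one approach is to push the same maintenance script to subscribers through replication itself with sp_addscriptexec (a sketch; the publication name and script path are placeholders):

-- Run at the publisher, in the publication database: posts a .sql script
-- to all subscribers, delivered by the distribution agent
EXEC sp_addscriptexec
    @publication = 'MyPartitionedPub',
    @scriptfile  = '\\server\share\merge_range_and_drop_files.sql',
    @skiperror   = 0;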