I started using my database last week to transfer data from Sybase to SQL Server 2000. Initially the log file size was small, around 20 MB for each company, but within 8 days it has grown to 300 MB, while the data files are still only a few MB, approximately 40 MB per company. Why is the log file growing in this manner, and how can I manage it?
The TEMPDB transaction log file keeps growing. The database server is new and the transaction log was presized to 1 GB on installation. After installing a number of databases, the log file grew over a day to 38 GB. Issuing a manual checkpoint was the only way to free some space so it could be shrunk back to a usable size. The usage of the file is still going up.
I am struggling to find what process is causing the log to be used so heavily. Looking at log_reuse_wait_desc for tempdb returns "NOTHING", and tempdb itself isn't being used very much or growing in size.
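A quick way to double-check what is holding the tempdb log; this is only a generic sketch (the database name is the only specific taken from the post):

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'tempdb';

DBCC OPENTRAN ('tempdb');   -- reports the oldest active transaction in tempdb, if any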
Why does a log (.ldf) file keep growing and growing and growing? Is this related to the fact that the scheduled Maintenance keeps failing due to exclusive access problems?
Hi, my log files are growing out of control. One of my log files is 20 GB. How can I reduce the log file size? If I run a DBCC command, will it just come back? Please tell me how to find the free space and reduce the log sizes. Even after taking backups, my log file sizes are not reducing.
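A minimal sketch of checking log free space and shrinking; the logical file name MyDb_log and the 1 GB target are assumptions, not taken from the post:

DBCC SQLPERF (LOGSPACE);            -- shows each database log's size and percentage used
DBCC SHRINKFILE (MyDb_log, 1024);   -- after a log backup (or in SIMPLE recovery), shrink the log to roughly 1 GB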
I have a 22 GB database in SQL 2000, and the database is set to the full recovery model. The problem I'm having is that the transaction log is growing too fast; this morning it was 24 GB, larger than the database itself. Can anyone help me keep it at a manageable size?
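In the FULL recovery model the log is only reused after a log backup, so the usual fix is a frequent scheduled log backup rather than a larger file. A minimal sketch, assuming the database name, path, and interval (none of which come from the post):

-- Schedule something like this every 15-30 minutes via a SQL Agent job or maintenance plan:
BACKUP LOG MyDb TO DISK = N'D:\LogBackups\MyDb.trn';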
I have a DB with one data file, one log file, and one index file. The data file is 3 GB, but the index file is 12 GB and is growing bigger day by day, which is hurting the performance of the DB. What should I do to stop the index file from growing and make it smaller?
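A rough sketch of finding what is consuming the space on the index filegroup before deciding what to drop or rebuild; the table and logical file names here are assumptions:

EXEC sp_spaceused;                     -- totals for the current database
EXEC sp_spaceused 'dbo.MyBigTable';    -- per-table breakdown: data size vs. index size

-- After removing or rebuilding oversized indexes, the file on the index filegroup can be shrunk:
DBCC SHRINKFILE (MyDb_index, 4096);    -- target size in MB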
My log file was 2x the size of my actual Database which is obviously too large on a DEV box. I know that my data can be easily recovered so I actually do not even want/need a log file.
After doing some investigation I found that I should turn my database into "Simple Recovery Mode" and after this I used a few scripts to truncate my log file. Things at this point looked great!
Unfortunately my log File is still growing even with this 'simple recovery mode'. So how do I stop this craziness from occurring?
I even unchecked the box 'allow autogrowth' on the database! However, I eventually get errors when creating records in the system because it complains about running out of room in the log file.
Code:
The transaction log for database 'ReportingDB' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
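For the ReportingDB case above, a minimal sketch assuming the database really is in SIMPLE recovery (the logical log file name ReportingDB_log and the 1 GB target are assumptions); the checkpoint marks inactive log space reusable before the shrink, and autogrowth on the log file should normally stay enabled:

SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = N'ReportingDB';

USE ReportingDB;
CHECKPOINT;
DBCC SHRINKFILE (ReportingDB_log, 1024);   -- target size in MB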
Hello, we are running Microsoft SQL 2005 Express edition (9.0.32).
Recently I just noticed that the database log file of our main database is HUGE. The database data file is only 50MB and the log file is 210GB.
Any idea what is causing this? It seems to be getting bigger with time; in the last 7 days it seems to have grown by 100 GB. I noticed the following settings under the database:
I am using the append-to-media backup option in the 2000 version. The size of the backup keeps growing. How can I best create the maintenance plan to clear the history, or clear the old backup sets in the backup (.bak) file, while still being able to restore to a point in time from the same physical file?
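Individual backup sets appended to a .bak cannot be deleted from inside the file, so a common pattern is to write each backup to its own dated file and prune the msdb history separately; a sketch with assumed names and dates:

-- Remove msdb backup/restore history older than a cutoff (this does not shrink any .bak file):
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = '20020101';

-- Start a fresh media set per day so no single file grows forever:
BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb_20020601.bak' WITH INIT;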
werweuroweuroiweruewr
DECLARE @CONTADOR BIGINT
SET @CONTADOR = 0
WHILE @CONTADOR <= 100000
BEGIN
    INSERT INTO TABLA1 (ID, NOMBRE) VALUES (@CONTADOR, 'PRUEBAS')
    SET @CONTADOR = @CONTADOR + 1
END
'werweuroweuroiweruewr' obviously doesn't exist in my database, but it's ignored 110 times. Then I see that TABLA1 contains 100 rows. After that it fails, of course.
Could you please be so kind as to give me an explanation for this behaviour? I'm totally stuck; I don't have enough words (in English) to describe this!
Why does SQL execute that loop 110 times and only then, oh my god, discover that 'weerrrr...' doesn't exist at all and stop?
If there is an explanation for this, could it also happen in SQL 2005?
I have got another annoying problem. The MDF file size on one of the machines is growing really fast. We zip the MDF/LDF files every day from all the machines in the data entry department. On this particular machine, the MDF file size is growing by about 1 GB per day. However, when the file is zipped, the zipped file size comes out close to the zipped files from the other machines.
I'm having a problem. When I use a SQL query to make a backup of the database, it works fine, but every time I run it, the backup file keeps growing in size. Say I have the file test.bak, whose file size is 450 MB; when I run a new backup intending to overwrite the existing test.bak file, it just ends up as 900 MB. If I run it again, it becomes 1350 MB, and so on.
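BACKUP ... TO DISK appends a new backup set to the file by default (WITH NOINIT), which is why test.bak doubles on every run; adding WITH INIT overwrites the existing sets instead. A sketch (the database name and path are assumptions):

BACKUP DATABASE test
TO DISK = N'C:\Backups\test.bak'
WITH INIT;   -- overwrite the existing backup sets instead of appending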
I have a 19 GB database that somehow has a 100 GB log file. The DB MUST BE in full recovery mode; I back up the transaction logs EVERY hour and shrink nightly, but for some reason my log file WILL NOT SHRINK.
HELP,
I've used both DBCC SHRINKFILE (xxxxxx) and DBCC SHRINKDATABASE (xxxxx), and these don't seem to work. I have no current backup, and I have no capacity for an additional 100 GB worth of backup drive or off-site tape.
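When a log in FULL recovery refuses to shrink even with hourly log backups, the active portion of the log is often sitting at the end of the file, so the shrink has nothing to cut off; repeating the backup/shrink pair a couple of times usually works, and DBCC OPENTRAN shows whether a long-running transaction is pinning the log. A rough sketch (names, paths, and the 20 GB target are assumptions):

DBCC OPENTRAN (MyBigDb);                                   -- oldest active transaction, if any
BACKUP LOG MyBigDb TO DISK = N'E:\LogBackups\MyBigDb.trn';
DBCC SHRINKFILE (MyBigDb_log, 20480);                      -- target in MB; repeat backup + shrink if needed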
Hi all, friends. I have a puzzling error in my project. I wrote a project in ASP.NET 2; part of this project changes a value in a row of a table in SQL 2005 to another value. I used it on my computer (with localhost and my own IIS) and it works properly and without any problem, but when I upload it to the internet it gives this error: "Concurrency violation: the UpdateCommand affected 0 of the expected 1 records." Can anyone help me? I think the error is related to my table in SQL 2005, but I don't know what is wrong with it. Thanks for your concern, friends; I'm waiting for your useful answers.
Hi there, I have a data manipulation process written in nested stored procedures that go four levels deep. When I run these procedures individually they all seem to be fine, whereas when I run them all together as a nested process (calling one within another as sub-procedures), the log file grows pretty badly, like 25 to 30 GB, and the process finally gets kicked out after running out of disk space. This process runs for around 3 hours on a SQL Server Standard box with a dual processor and 2 GB of RAM.

These procedures have a bunch of bulk updates and at least one cursor in each procedure that gets looped through. I was wondering if anybody has experienced this situation or has any clue as to why this is happening and how to resolve it. I am in pretty bad shape to deliver this product and in need of urgent help. Any ideas would be greatly appreciated. Thanks in advance.
I want to truncate the logs of my SharePoint config database and WSS_Logging database. The SharePoint_Config database is growing at a pace of ~10 GB every week. I have scheduled a weekly full backup. The current .ldf file size is 113 GB.
I am using SQL Server 2012 with the AlwaysOn high availability feature. I am not able to change the recovery model from Full to Simple, as it gives me a message that mirroring is running on both servers.
In my case, what do I need to do to reduce the log file?
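With an availability group the database has to stay in FULL recovery, so the log can only be kept small by taking log backups frequently, plus a one-off shrink once the backlog has been backed up. A sketch for the SharePoint_Config case; the logical file name, path, and 10 GB target are assumptions:

BACKUP LOG [SharePoint_Config] TO DISK = N'E:\LogBackups\SharePoint_Config.trn';
DBCC SHRINKFILE (N'SharePoint_Config_log', 10240);   -- target size in MB, after the log backup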
I upgraded from SQL 6.5 to SQL 7 last month, and so far, everything's been going fine.
However, I'm not using my old SQL 6.5 backup scripts, which, when the backup was done, would dump the transaction log with TRUNCATE_ONLY, shrinking the log size.
My SQL 7 server is set up with a Maintenance Plan which does everything, including backup, but the log file seems to be growing and growing. I'm up to 4.5 gigs now, for a database with a data file of 3.5 gigs.
How do I "dump transaction with TRUNCATE_ONLY" on a SQL 7 database?
I have merge replication set up for 6 SQLCE subscribers. I have noticed that the MSmerge_tombstone table is growing at a fast rate regardless of any changes to the data in the database. It seems to be consistently adding 50 rows of data to the table every 2 minutes. As the table grows, it causes the SQLCE subscribers to fail with the following message:
ERROR: -2147467259 SQL Server Reconciler failed: Run
ERROR: -2147200925 : Failed to enumerate changes in the filtered articles.
ERROR: 0 : The merge process timed out while executing a query. Reconfigure the QueryTimeout parameter and retry the operation.
I'm sure that this is due to the size of the MSmerge_tombstone.
Should the MSmerge_tombstone table grow at this rate? 36,000 rows every 24hrs!
I understand there is the sp_mergecleanupmetadata stored procedure, but if I use it, does that mean that because I have to reinitialise all the subscribers, they are going to have to pull down the whole subscription again?
I have since changed a setting to make the subscription expiration 8 days instead of never expiring, but we're still getting 50 rows added every 2 minutes.
SQL Server 2000 SP3. Hope someone can shed some light on this for me.
I'm in the midst of both developing and using a custom transformation. Of course, every time I change the assembly version and reopen Visual Studio, I get the error:
DTS.Pipeline: The component metadata for "component "Map SERVICE_CERT String Locale" (195692)" could not be upgraded to the newer version of the component. The PerformUpgrade method failed.
Out of curiosity, I'm wondering, how did it fail? Did it throw an exception? Was it even called? Is it my PerformUpgrade override that is being referred to, or perhaps the base class version?
I've seen several of the threads on this topic in this forum, but they haven't completely solved the problem (I will take the step of creating a policy assembly later today, and maybe that's the problem).
But, is there a reference somewhere to exactly what's going on at package load time as far as loading custom components? I'd love to know which methods are called, in which order, and how SSIS decides which version of the component to load.
I'd also love to see some best practices for component development. For instance, maybe I shouldn't allow the assembly version to change during development? Do I need to remove and re-add the component from the toolbox, and does that affect components already in the package?
Sorry to ask questions I'd no doubt answer for myself within a few months, but this project is due at the end of this month!
I am looking for a script which exports data (via DTS?) into a flat file and stores the files (according to the date stamp in the transactions) with names like 05_2002.txt, 06_2002.txt, etc. The data in the Transactions table will be deleted after some time to prevent this particular table from growing too fast.
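One way to do this without DTS is a small T-SQL job that shells out to bcp with a month-stamped file name. This is only a sketch under the assumptions that the table is dbo.Transactions in database MyDb with a TranDate column, that the path and server name are placeholders, and that xp_cmdshell is allowed:

DECLARE @fname VARCHAR(260), @cmd VARCHAR(1000)
SET @fname = 'C:\export\' + RIGHT('0' + CAST(MONTH(GETDATE()) AS VARCHAR(2)), 2)
           + '_' + CAST(YEAR(GETDATE()) AS VARCHAR(4)) + '.txt'     -- e.g. 05_2002.txt
SET @cmd = 'bcp "SELECT * FROM MyDb.dbo.Transactions WHERE TranDate < GETDATE()" queryout "'
           + @fname + '" -c -T -S MYSERVER'
EXEC master..xp_cmdshell @cmd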
Our current production deployments using merge replication are averaging 4-5 hours, all of which is downtime for our client sites. Hotfix releases require no downtime; I am talking about maintenance releases where articles are changed/added/removed, requiring a full snapshot to be taken. Management is not happy about this deployment window lasting so long. I have two questions:

1) Our first suggestion is to do a proof of concept of leaving the sites up while the snapshot is being generated, only briefly bringing them down as the snapshot is applied, and leaving them down while we smoke test the new application and DB code. Is it easy to make SQL Server decouple generating the snapshot from applying it, so we can time this appropriately?

2) Our snapshots take ~45 minutes and we typically require 3 snapshots. We currently run these serially.
A) Is there a way to take/apply these snapshots in parallel?
B) We have a SAN in production and have the capability to do SAN replication. I am not too familiar with SAN replication, but can it somehow be used to make these snapshots run in minutes rather than closer to an hour?

Any links/references that are informative for dealing with merge replication deployment best practices would be much appreciated as well. Thanks! -John
We have a number of scheduled reports in our system. But frequently, I need to kick off one or more of them immediately at random times - to run off schedule, so to speak. The only way I know to do this is to actually modify the schedule to set it to run "Once", set the time to run to be a minute or so into the future, then wait for the report to run. Afterwards, I go back into the scheduler and re-establish the original schedule.
Do you know of a way to do this without having to modify and disturb the original schedule?
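If these happen to be Reporting Services subscriptions, each schedule is backed by a SQL Agent job named after the schedule's GUID, and that job can be started on demand without touching the schedule itself. This is only a sketch under that assumption; the report name is a placeholder:

-- Find the Agent job (its name is the ScheduleID GUID) behind a report's schedule:
SELECT s.ScheduleID, c.Name AS ReportName
FROM ReportServer.dbo.ReportSchedule AS rs
JOIN ReportServer.dbo.Schedule AS s ON s.ScheduleID = rs.ScheduleID
JOIN ReportServer.dbo.[Catalog] AS c ON c.ItemID = rs.ReportID
WHERE c.Name = 'My Report';

-- Then kick it off immediately, leaving the original schedule untouched:
EXEC msdb.dbo.sp_start_job @job_name = '<ScheduleID from the query above>';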
The picture shows what I need. Anyway, I want to combine the data from the upper two tables into result sets like the ones below. That means they should be grouped by bsns_id, with the descriptions from the second table combined into a comma-separated list. In SQL Server 2012.
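SQL Server 2012 has no STRING_AGG, so the usual way to build that comma-separated column is a correlated FOR XML PATH query plus STUFF. A sketch with assumed table and column names (only bsns_id comes from the post):

SELECT b.bsns_id,
       b.bsns_name,
       STUFF((SELECT ', ' + d.description
              FROM BusinessDescription AS d
              WHERE d.bsns_id = b.bsns_id
              FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '') AS descriptions
FROM Business AS b;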
I wonder if anyone could explain why, when monitoring the transaction log size, it doesn't appear to be growing! I'm using the following code to test image data types with logging. I've got 'Truncate log on Checkpoint' switched off and 'Select into/Bulk copy' also switched off.
Running the following code I would expect to see the transaction log grow and grow and grow... Monitoring it using perfmon indicates that it isn't in fact logging...
DECLARE @ptrval varbinary(16)
SELECT @ptrval = TEXTPTR(pr_info) FROM pub_info pr INNER JOIN publishers p ON p.pub_id = pr.pub_id AND p.pub_name = 'New Moon Books'
DECLARE @Loop int SELECT @Loop = 0
WHILE @Loop <= 10000
BEGIN
    WRITETEXT pub_info.pr_info @ptrval WITH LOG 'New Moon Books (NMB) '   -- fully logged text write
    SELECT @Loop = @Loop + 1
END
I am looking for information that tells me how fast a DB is growing, in MB and/or percentages, over a given period of time, i.e. weekly, monthly, yearly, etc., either in real numbers or estimates. Does SQL Server 7.0 already store something like this, or do I need to write some code for it?
Or does someone have something like this already coded that they would be willing to share?
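SQL Server doesn't keep a growth history as such, but if full backups run regularly, the backup sizes recorded in msdb give a reasonable month-over-month estimate. A rough sketch (the msdb backupset table exists in 7.0 as well):

SELECT bs.database_name,
       CONVERT(CHAR(7), bs.backup_start_date, 120) AS backup_month,   -- yyyy-mm
       AVG(bs.backup_size / 1048576.0)             AS avg_backup_size_mb
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'D'                                                   -- full database backups only
GROUP BY bs.database_name, CONVERT(CHAR(7), bs.backup_start_date, 120)
ORDER BY bs.database_name, backup_month;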
In SQL Server 7.0 sp1 (NT 4.0 sp5) I have a server that has a tempdb database that continues to grow. This server contains the database for SMS. Over the weekend, the tempdb had grown so much that it filled up the drive (37GB). I have shrunk it down to a much more reasonable size and put a limit on how large it can grow. I'm noticing today that it is beginning to grow again. Is there a way I can look at the information that is in tempdb right now? I have to think that there are open transactions for some reason that can't commit. I know that tempdb gets cleared out when SQL Server is restarted, but I can't be restarting it this often.
On Microsoft's website, I did find an article about SMS Y2K queries using large amounts of Tempdb and failing to complete. The solution they have in this article Q234912 is to install SMS sp1 which is already installed.
I haven't been able to find any other useful information yet on this problem. I would appreciate any help you can offer.
My database has a situation where my transaction log is growing out of control. However, I have not been able to figure out where any leaks are occurring.
Is there a way to monitor the database in order to find out when the t-log is growing? Or even better, what SQL is being executed that is causing this unreasonable t-log growth?
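On SQL Server 2005 and later, the default trace already records every log autogrow event, which usually narrows down the time window and the application responsible; a generic sketch (nothing here is specific to the poster's database):

DECLARE @tracefile NVARCHAR(260)
SELECT @tracefile = path FROM sys.traces WHERE is_default = 1

SELECT t.StartTime, t.DatabaseName, t.FileName,
       t.IntegerData * 8 / 1024 AS growth_mb,       -- IntegerData = growth in 8 KB pages
       t.ApplicationName, t.LoginName
FROM sys.fn_trace_gettable(@tracefile, DEFAULT) AS t
WHERE t.EventClass = 93                             -- 93 = Log File Auto Grow
ORDER BY t.StartTime DESC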
The log file for database 'P5_Nextel' is full. Back up the transaction log for the database to free up some log space
What I'm doing is just resizing the space allocated, but the problem is that my disk is now out of space. How can I prevent this kind of problem without adding a new disk? Is there any other way?