Does anyone have any benchmarks for the amount of overhead caused by autoshrink of the log and having autostats enabled? We have a customer that insists that turning off these options was necessary to eliminate a performance problem they were having (Query timeouts), but we are not convinced that these two options would have generated enough overhead to have been the root cause (they also rebuilt all their indexes and made some other unspecified changes that more likely solved the problem).
We are hesitant to have them continue with these options disabled, because then we would need to rely on them to keep the log file shrunk and the statistics updated, and because of the data changes during the day we would prefer to have stats updated automatically rather than on a fixed schedule that may not be as appropriate.
Anyway, if anyone has any feedback on overhead generated and potential performance implications of having either of these options enabled, it would be greatly appreciated.
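For reference, if we do have them re-enable the options, I assume it is just something like this sketch (SQL 2000 syntax; the database name is a placeholder):
-- Check the current settings (1 = enabled, 0 = disabled)
SELECT DATABASEPROPERTYEX('YourDatabase', 'IsAutoShrink') AS AutoShrink,
       DATABASEPROPERTYEX('YourDatabase', 'IsAutoUpdateStatistics') AS AutoUpdateStats
-- Turn them back on if the customer agrees
ALTER DATABASE YourDatabase SET AUTO_SHRINK ON
ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS ON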
Hello, I've found this recommendation recently: "To help speed queries or joins on large temp tables, be sure the AUTOSTATS database option is turned on for tempdb".
How can I find out what this option set for our tempdb? If I go to "database options" of tempdb under "System databases", I do not see that option at all. Please help. Thanks
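The closest I have found is querying the database properties directly; is a sketch like this (assuming SQL Server 2000) the right way to check it? As far as I can tell, "AUTOSTATS" corresponds to the auto create/update statistics options.
SELECT DATABASEPROPERTYEX('tempdb', 'IsAutoCreateStatistics') AS AutoCreateStats,
       DATABASEPROPERTYEX('tempdb', 'IsAutoUpdateStatistics') AS AutoUpdateStats
-- or via the older system procedure, which displays the current value
EXEC sp_dboption 'tempdb', 'auto create statistics'
EXEC sp_dboption 'tempdb', 'auto update statistics'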
Ran the Andrew Zanevsky script to shrink the Tran log, and this works great, but say the log gets shrunk to 30MB, I then expand it to 40MB, and then sometime within a 24 hour period it magically gets reset to 30MB. The database is a standalone DB on SQL 7 SP1. The script I ran can be found at http://www.pinnaclepublishing.com/sq/SQMag.nsf/WebIndexByIssue/F36C151BD80FAD8F852568D0007799BE?open. Can anyone please give some insight into this, or maybe a flag that got reset somewhere? Thanks in advance, Pat
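One thing I plan to rule out is the autoshrink database option itself, with a quick sketch like this (SQL 7 syntax; the database name is a placeholder):
-- Returns 1 if the autoshrink option is on for the database
SELECT DATABASEPROPERTY('YourDatabase', 'IsAutoShrink') AS AutoShrink
-- Display the option, then turn it off if it is the culprit
EXEC sp_dboption 'YourDatabase', 'autoshrink'
EXEC sp_dboption 'YourDatabase', 'autoshrink', 'false'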
I've got a database that needs to run 24/7. I'm looking into maintenance options and wanted to run the following by y'all:
Ok, I've read the MSDN "Maintaining databases" article and noticed the following statement about autoshrinking: "This technique uses almost no processor time and memory". I also searched these forums and found that many users say autoshrinking heavily lags down sql transfers. So who's right? And if it does lag transfers, by how much?
Another question I have about autoshrink is fragmentation. It would seem to me that over time solely depending on autoshrink would cripple a server in terms of fragmentation; is this the case?
Also, does autoshrink (or manual shrinking or compacting) update the statistics?
Final question!!! I'm programming in native C++; is there a way for me to run commands such as "DBCC SHRINKDATABASE" in native OLE DB code?
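For reference, the batch I would want to submit (assuming DBCC statements can be sent as ordinary command text like any other T-SQL) would be something like this sketch, with the names as placeholders:
-- Shrink the database, leaving 10 percent free space in each file
DBCC SHRINKDATABASE (YourDatabase, 10)
-- My understanding is that shrinking does not refresh statistics; that is a separate step
UPDATE STATISTICS YourTable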
I had a DTS task running which was doing an insert (over 17.5 million rows) in a database for which the auto shrink option was enabled.
After a few hours I noticed that the insert was blocking the autoshrink.
Since I didn't want to kill my insert, I tried to kill the autoshrink from the query analyzer, but when trying to do so I got the message that only user processes can be killed.
Does anyone have similar experiences? Should you consider enabling the autoshrink option or is this just trouble?
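For what it's worth, while it was happening I could see the blocking chain with something like this rough sketch against SQL Server 2000's sysprocesses (system spids such as the autoshrink task should show up here, but as I found, they cannot be KILLed):
-- List sessions that are blocked, plus the sessions doing the blocking
SELECT spid, blocked, cmd, lastwaittype, waitresource
FROM master.dbo.sysprocesses
WHERE blocked <> 0
   OR spid IN (SELECT blocked FROM master.dbo.sysprocesses WHERE blocked <> 0)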
I have a database that has Truncate on Checkpoint set for the Log file. The Log file is set to AutoGrow. Is it necessary to run DBCC SHRINKDATABASE (or the like) to get the Log file to contract? Is there any harm in not contracting the Log file? I'm looking for the most efficient and least-likely-to-fail path, as the DB sits 'really remote' and there is little opportunity for observation.
Does anyone have any recommendations on re-indexing? I have one table that bears the most growth. It has a clustered index. What would be a suitable data point to watch? I run a SP to save DBCC SHOWCONTIG info along with the duration of a test query, but haven't seen a clear breakover point.
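What I had in mind was keying off Scan Density from the SHOWCONTIG output I am already saving and rebuilding when it drops below some threshold, roughly like this sketch (SQL 2000 syntax; the table name, fill factor, and the threshold itself are placeholders I would still need to tune):
-- Capture fragmentation figures as a resultset rather than as messages
DBCC SHOWCONTIG ('YourBigTable') WITH TABLERESULTS, ALL_INDEXES
-- If scan density has fallen well below 100% (say under 80%), rebuild all indexes on the table
DBCC DBREINDEX ('YourBigTable', '', 90)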
I have a test database and I detach and reattach this database during my testing on my test system. This database is originally from my production system. I have a couple of questions that came up during my testing:
1. When you detach/attach a database from a different server with the same setup, does it gather its database setup information from the model database on the current server?
2. When you switch the autoshrink database option from enable to disable, do you have to restart the server?
3. How do you change the defaults of the model database?
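On points 2 and 3, is something like this sketch all that is involved (names and the size are placeholders; modeldev is, as far as I know, the default logical name of model's data file)? My understanding for point 1 is that an attached database keeps its own options rather than picking them up from model, but I would like confirmation.
-- 2. The option change should take effect immediately, with no server restart required
ALTER DATABASE YourDatabase SET AUTO_SHRINK OFF
-- 3. New databases inherit their defaults from model, so change model directly
ALTER DATABASE model SET AUTO_SHRINK OFF
ALTER DATABASE model MODIFY FILE (NAME = modeldev, SIZE = 100MB)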
Hello All, I have been encountering trouble with a SQL Server 2000 transaction log file, mainly with the constant growth and lack of the autoshrink option. Here are the details:
1.) OS is Windows 2000 Server SP3
2.) SQL Server 2000 (Build 2195) SP3
3.) Database Recovery Mode is set to Full
4.) Maintenance Plan for "Transaction Log Backup" is set to remove files older than 1 hr.
5.) The "AutoShrink" option is on for the DB properties.
Also, as part of the Maintenance Plan the database in question is backed up nightly. The problem is that the transaction log file has grown so much that it was consuming all of the free space on the hard drive, so I then restricted the file growth. When all of the drive space was consumed or the "database log file is full" message was encountered, I would run the following query scripts to shrink the file manually:
First – Backup Log <database name> With Truncate_Only
Second – DBCC Shrinkfile (<database name>_log, 200)
which brings the transaction log file down to 200 MB. I need to automate this so that I don't have to keep babysitting this database or manually shrinking the log file.
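What I have in mind is wrapping the same two commands in a scheduled SQL Agent job step, something like this sketch (names are placeholders), though I gather WITH TRUNCATE_ONLY breaks the log backup chain until the next full backup, so the hourly log backups would not be restorable in between:
-- Scheduled job step: discard the inactive log, then shrink the log file back to ~200 MB
BACKUP LOG YourDatabase WITH TRUNCATE_ONLY
DBCC SHRINKFILE (YourDatabase_log, 200)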
Hi, is it overhead to use SqlTransaction (begin, commit, rollback) for a single transaction? I am not using the application block or Enterprise Library, only a single insert statement.
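To be concrete, as far as I understand the two forms below end up equivalent on the server for a single statement, apart from the extra round trips for BEGIN and COMMIT (table and values are placeholders):
-- Implicit: a single statement is already its own atomic transaction
INSERT INTO YourTable (Col1) VALUES ('value')
-- Explicit: same outcome, plus the cost of the extra BEGIN/COMMIT round trips
BEGIN TRANSACTION
INSERT INTO YourTable (Col1) VALUES ('value')
COMMIT TRANSACTION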
As part of dealing with a locking problem I am fine tuning a stored procedure that updates a table. The application updates a row by changing every single column except for the primary key, whether 1 or all of the columns have been modified. Although easier to code, this strikes me as taking a sledge hammer to a nut.
Could anyone tell me if there is any benefit in attempting to break this down, that is, code the stored procedure so that only those columns being changed are modified? I am thinking this might dramatically reduce the overhead of writing to the transaction log and making the changes to the actual row. If the benefit is nonexistent (or insignificant) because of the way SQL Server updates a row, it will obviously be a waste of time to generate dynamic SQL.
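To make the comparison concrete, here is a sketch of the two shapes I mean (table, column, and key values are placeholders; whether the narrower UPDATE actually logs noticeably less is what I would want to measure):
-- Current shape: every non-key column is rewritten, whether or not it changed
UPDATE YourTable
SET Col1 = 'a', Col2 = 'b', Col3 = 'c'
WHERE PKCol = 1
-- Narrower shape: only the column that actually changed is touched
UPDATE YourTable
SET Col2 = 'b'
WHERE PKCol = 1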
We are having some issues with temporary tables (with # prefixes) within Stored Procedures.
When running a profile trace on them, the stored proc quite happily creates the temp # table (in fact several of them) but whenever it hits the first statement inserting data into one of them (and it doesn't matter which one), there is a 5-6 second delay.
If I comment out that insert and move on to the next piece of code, the same thing happens there.
Following which, the rest of the stored proc runs fine and subsequent inserts into the # temp tables also run efficiently.
Is the stored proc getting recompiled perhaps ?
Any advice would be appreciated.
We are running SQL Server 7.0, if that helps.
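One suggestion I have seen for SQL 7 recompilation issues is to create all of the # tables at the top of the procedure, before any statement reads or writes them, roughly like this sketch (names are placeholders). Would that help here?
CREATE PROCEDURE dbo.YourProc
AS
-- Create every temp table up front, before the first INSERT/SELECT against any of them
CREATE TABLE #Work1 (ID int, Val varchar(50))
CREATE TABLE #Work2 (ID int, Total money)
-- ...from here on, only DML against the # tables
INSERT INTO #Work1 (ID, Val)
SELECT ID, Val FROM dbo.SourceTable
GO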
I have been doing some testing with SQL Server 2000 using a packet sniffer, and have found that it is sending bytes with a value of 0x00 between each "valid" character. For example, if it was going to send "hello" over the network, it would be transmitted as "h.e.l.l.o".
Can anybody suggest why this might be? It happens regardless of which client is used - ADO.NET, osql.exe, etc. Is it something to do with the encoding used?
Everything does work fine and the data is received intact, but if these seemingly redundant bytes could be removed it would halve the amount of data on the wire.
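From what I have read, this looks like the two-byte Unicode (UCS-2) encoding SQL Server uses for batch text and Unicode data on the wire, rather than wasted padding. A quick sketch that shows the same size difference on the server side:
-- N'hello' is UCS-2: two bytes per character, so twice the size of the varchar form
SELECT DATALENGTH('hello')  AS VarcharBytes,   -- 5
       DATALENGTH(N'hello') AS NvarcharBytes   -- 10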
CDC is creating additional tables under System tables.
What is the performance overhead on the database by creating these tables?
I am going to access the CDC records through an ETL tool. Once I have read the data, I am going to delete the records.
If changes are frequent, a few more records may be added to the CDC tables after I have read the data. Is CDC going to truncate the tables, or does it only read records?
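For context, here is the read pattern I plan to use against the change tables (the capture instance name dbo_Orders is a placeholder). I understand CDC also creates its own cleanup job that removes change rows older than the retention period, so deleting them myself may not be necessary.
-- Read all changes captured for the dbo.Orders capture instance
DECLARE @from_lsn binary(10), @to_lsn binary(10)
SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_Orders')
SET @to_lsn   = sys.fn_cdc_get_max_lsn()
SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, N'all')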
I would like to capture about 20 rows from the sysperfinfo table every 30 secs on a production server. I am thinking of ways I can reduce the disk (not network) I/O overhead of this process. Instead of reading the table from a local SQL Agent job and writing to a local table, I am wondering if I should create the job to capture the data from a remote (less critical) server. The servers are connected via a gigabit line and are in the same server room on the same switch. This way the production server would be required to do the reads but the other server could take care of the I/O of the writes to the capture table. Also, this job would run from 7am to 7pm non-stop. A WAITFOR DELAY of 30 seconds would control the twice-a-minute scheduling. Running this on the remote server would free up a few SQL Agent CPU cycles as well. This would be one less job for the production server to worry about. Thanks
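The capture loop I have in mind, running as a single SQL Agent job step on the remote server, looks roughly like this (the linked server name, capture table, and counter list are placeholders for the ~20 counters I actually want):
-- Runs from 7am until just before 7pm, sampling twice a minute
WHILE DATEPART(hh, GETDATE()) BETWEEN 7 AND 18
BEGIN
    INSERT INTO dbo.PerfCapture (capture_time, [object_name], counter_name, instance_name, cntr_value)
    SELECT GETDATE(), [object_name], counter_name, instance_name, cntr_value
    FROM PRODSERVER.master.dbo.sysperfinfo   -- linked server pointing at the production box
    WHERE counter_name IN ('Buffer cache hit ratio', 'Page life expectancy')  -- counters of interest
    WAITFOR DELAY '00:00:30'
END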
I am a developer, and I have a disagreement with my DBA. He has convinced management that SQL 2005 Full-Text Indexing carries so much overhead on production that it should NEVER be used under any circumstances. We have a Cold Fusion site, and somehow he convinced management that a bunch of Cold Fusion developers can create a more efficient full text indexing method than using SQL 2005 Full-Text Indexing. So now we have to come up with a method for doing this in Cold Fusion.
Is there any statistical data that could possibly support or refute his statements? Thanks
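I don't have numbers either, but the comparison I think he would need to beat is roughly the one below (table and column names are placeholders): the CONTAINS query can use the full-text index, while a hand-rolled search typically ends up as a leading-wildcard LIKE that cannot use a normal index and scans every row.
-- Full-text search: resolved against the full-text catalog
SELECT DocID, Title FROM dbo.Documents WHERE CONTAINS(Body, 'performance')
-- Typical hand-rolled alternative: forces a scan of every row
SELECT DocID, Title FROM dbo.Documents WHERE Body LIKE '%performance%'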
I've been asked to put together an estimation for the performance impact that replication would have on our database server during a particular operation. I know that this depends on a lot of different factors, including:
* Number of articles being replicated
* Types of articles being replicated
* Number of DML transactions that would result in delivery of replicated data
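If it helps with the estimate, once a publication exists (even on a test copy) I was planning to sample the throughput and latency transactional replication actually achieves with something like:
-- Returns replicated transactions, replication rate (trans/sec) and latency per published database
EXEC sp_replcounters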