Well, my question is: how do I analyze DB growth per day? Is there a tool or a method I can use? I do take a look at the task view and the files, but per day it doesn't move in MB, which is weird since this is a warehouse and there are nightly loads inserting maybe 30,000 records a night on average.
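One low-tech sketch, assuming full backups run nightly: the backup sizes recorded in msdb give a rough day-by-day growth curve without any extra collection job.

-- Sketch: infer daily growth from full-backup sizes in msdb (assumes nightly full backups)
select database_name,
       convert(varchar(10), backup_start_date, 120) as backup_day,
       backup_size / 1048576.0 as backup_size_mb    -- backup_size is in bytes
from msdb.dbo.backupset
where type = 'D'                                    -- 'D' = full database backup
order by database_name, backup_start_date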
Has anyone written routines to analyze sysjobhistory? I'm looking for a tool/routine to analyze jobs for failures, trends (such as constantly increasing run times) and other information. It's not as straightforward as I had originally hoped it would be.
I am specifically prohibited from using any 3rd-party tools (too expensive per mgmt).
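For what it's worth, a minimal starting point against msdb (not a polished tool; decoding run_duration and the actual trend analysis are left out):

-- Sketch: job outcomes from sysjobhistory; step_id = 0 rows are the overall job result
select j.name as job_name,
       h.run_date,        -- int, encoded yyyymmdd
       h.run_duration,    -- int, encoded hhmmss (e.g. 13205 = 1:32:05)
       h.run_status       -- 0 = failed, 1 = succeeded, 3 = canceled
from msdb.dbo.sysjobs j
join msdb.dbo.sysjobhistory h on h.job_id = j.job_id
where h.step_id = 0
order by j.name, h.run_date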
I am going over the output of a Profiler trace and I've found that the duration for many occurrences of EventClass 15 (Logout) is several seconds, up to a maximum of 20 seconds. That seems excessive just to complete a logout, so my question is, does the duration figure reflect only the time to complete the logout operation or does it include the total time that the connection has been active for?
I have to duplicate a bunch of reports that were produced by a VB6 app. Now I'm using C#, ASP.net and Crystal Reports via VS2003. Each executes a stored procedure in SQL Server (before and now).
For each report I have a Stored Procedure and a View. The View's SQL code is identical to the Stored Procedure except that the two input parameters (startDate and endDate) are removed because Views don't allow parameters.
Some of the reports work perfectly right off the bat, but others are timing out. My initial test for the timeouts is to just display the View; if that fails, then I know the report engine will fail too.
So now I'm trying out the SQL Query Analyzer tool to execute the code in one of the Views. It's now at 29 minutes and still going - at least it hasn't timed out!
My question is this: Is there a way I could examine what's going on with the query to see why it's taking so long?
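Two stock techniques, sketched on the assumption that this is SQL 2000 Query Analyzer (the view name below is a placeholder):

-- In a second window: see the session's status, waits and any blocking
exec sp_who2

-- Or get the estimated plan for the view without executing it at all
set showplan_text on
go
select * from dbo.MyReportView   -- placeholder for the slow view
go
set showplan_text off
go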
Hi guys, we have 2 databases: DataBaseSource and DatabaseDestination. We need to truncate all the data in DatabaseDestination and put all the data from DataBaseSource into DatabaseDestination.
What is the best way to do that, given that we have a lot of data? And what is the best way if we also need to keep a trace of what happened, in case we want to go back and review it?
Also, please: if we use DTS, is it possible for someone to read what a DTS package does? I mean, if I give a DTS package to someone, say a new DBA guy in two years, how can he know what it does? Does SQL 2005 store the DTS package scripts somewhere, or is there a friendly way to know exactly what a package does?
Also, the trace to see if something went bad is important for us.
Sorry if I didn't express myself well enough, and thanks a lot for your help.
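For reference, the per-table refresh being described boils down to something like this (the table name is a placeholder; DTS can generate the equivalent copy steps):

-- Sketch for one table; repeat (or script) for each table to refresh
-- assumes both databases are on the same server, and no foreign keys
-- reference the target table (TRUNCATE would fail otherwise)
truncate table DatabaseDestination.dbo.Customers   -- Customers is a placeholder name

insert into DatabaseDestination.dbo.Customers
select * from DataBaseSource.dbo.Customers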
On one of our SQL Server 2005 Enterprise Edition 64-bit SP4 instances, which has transactional replication set up and is used for heavy reporting, I was trying to work out why slow-running queries run, get suspended, and are most often seen waiting. So I analyzed the wait stats and came up with the figures below, where ASYNC_NETWORK_IO dominated over two weeks of collected data.
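The collection itself is just the standard DMV snapshot, roughly as below; diffing two snapshots taken two weeks apart gives the interval's waits.

-- Sketch: wait-stats snapshot (SQL 2005 DMV)
select wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
from sys.dm_os_wait_stats
order by wait_time_ms desc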
I am trying to set up a means to analyze our servers using SQL Server 2005 Data Mining. I will be collecting all the logs from the Event Log of each server and storing them in a SQL Server database for analysis.
Has anybody done this before? Can you give some pointers? I would appreciate your assistance.
For one of our databases we have an issue where the log file grew rapidly last week, on Friday and Saturday. The database is on SQL Server 2008 R2 with compatibility level 80. Please see the log growth events below:
First, we thought index maintenance (re-index and update stats) could have been the reason, but we checked the schedule and that job ran on the 16th, using the code below:
USE ABC
GO
EXEC sp_MSforeachtable @command1 = "print '?' DBCC DBREINDEX ('?', ' ', 80)"
GO
EXEC sp_updatestats
GO
I know the above is old-fashioned, but we believe it should not be the major cause here. How can I determine what happened on the 14th and 15th that triggered the growth events and bumped the log file to 80 and 70 GB on those two days?
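If the default trace was running and hasn't rolled over past those dates, the auto-grow events are recorded there; a sketch:

-- Sketch: log-file auto-grow events from the default trace (EventClass 93)
select t.DatabaseName,
       t.FileName,
       t.StartTime,
       t.IntegerData * 8.0 / 1024 as growth_mb   -- IntegerData = 8 KB pages grown
from sys.fn_trace_gettable(
         (select path from sys.traces where is_default = 1), default) as t
where t.EventClass = 93                          -- 93 = Log File Auto Grow
order by t.StartTime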
I need to discover the actual order in which locks are acquired on a table during a query.
My goal is to analyze the lock order of queries against the same table in order to prevent deadlocks.
I'm using SQL Server 2008 R2.
From Management Studio I execute:
begin transaction
<my query>
exec sp_lock
rollback transaction
In the output I see interesting information about which locks are acquired, but:
- Are these locks ordered by the time they were acquired? That is, can I be sure that the lock at row n was acquired before the lock at row n+1?
- If not, how can I get this information?
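As far as I know, sp_lock is a point-in-time snapshot of the lock manager, so its row order carries no acquisition-time guarantee. One way to see the actual chronological order is an Extended Events session; a sketch in SQL Server 2008 R2 syntax, where the session name, file path and SPID are all placeholders:

-- Sketch: trace lock acquisitions in chronological order for one session
create event session LockOrder on server
add event sqlserver.lock_acquired
    (action (sqlserver.sql_text)
     where (sqlserver.session_id = 53))          -- replace 53 with your SPID
add target package0.asynchronous_file_target
    (set filename     = 'C:\temp\LockOrder.xel', -- placeholder path
         metadatafile = 'C:\temp\LockOrder.xem')
go
alter event session LockOrder on server state = start
-- run the query, stop the session, then read the .xel file in timestamp order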
Hello, I have successfully installed two instances of SQL Server 2005 Standard Edition x64, plus Integration Services (for the MSSQLSERVER instance). When I try to install SP2 x64, everything except the Database Services of the MSSQLSERVER instance updates fine.
**********************************************************************************
Prerequisites Check & Status
SQLSupport: Passed
**********************************************************************************
Products Detected                    Language  Level  Patch Level       Platform  Edition
Setup Support Files                  DEU              9.2.3042          x64
Database Services (AUTODESKVAULT)    DEU       SP2    2005.090.3042.00  x64       STANDARD
Database Services (MSSQLSERVER)      DEU       SP2    2005.090.3042.00  x64       STANDARD
Integration Services                 DEU       SP2    9.00.3042.00      x64       STANDARD
SQL Server Native Client             DEU              9.00.3042.00      x64
Client Components                    DEU       SP2    9.2.3042          x64       STANDARD
SQLXML4                              DEU              9.00.3042.00      x64
Backward Compatibility               DEU              8.05.2004         x64
Microsoft SQL Server VSS Writer      DEU              9.00.3042.00      x64
----------------------------------------------------------------------------------
Product                    : Database Services (MSSQLSERVER)
Product Version (Previous) : 3042
Product Version (Final)    :
Status                     : Failure
Log File                   : C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\LOG\Hotfix\SQL9_Hotfix_KB921896_sqlrun_sql.msp.log
Error Number               : 29504
Error Description          : MSP Error: 29504  SQL Server Setup failed while analyzing SQL script 'c:\Programme\Microsoft SQL Server\MSSQL.1\MSSQL\Install\sysdbupg.sql'. Error code: Access denied. Fix the problem, then run SQL Server Setup again to continue.
----------------------------------------------------------------------------------
Does this seem right? We have our transaction logs set to "Truncate Log on Checkpoint" and they still grow over 1 GB. Is it possible that one transaction (up to a checkpoint) generates this much logged information? Would transaction log backups every 5-10 minutes help me out more, or is this just a poorly written application?
I am having a problem with the growth of tempdb on my SQL 7.0 box. I have over 300 stored procedures running (many with GROUP BY and ORDER BY in them), and this is causing my tempdb to grow to over 30 GB. If I restart the database services it shrinks back down to the manageable 6 GB that I expect. Is there a way to have the services restart automatically on a nightly basis, or to have tempdb deallocate resources once they are used without restarting services? I appreciate any help you can provide. Nathan
I am moving table data (14 million rows) from one server to another. The transaction log growth during the process is threefold compared to the actual data file size.
I'm a beginner with SQL Server databases; my problem is this:
I'm making a database for which the frontend is an Access project. The database has several stored procedures, views and user functions (the usual), but only a little data (experimental only). Last night I noticed that the file grew from 22 MB to 89 MB. The objects are the same, and so is the data; the only difference was that I forgot to put the ADO "MoveNext" method in some event procedure code that updates various records, so the loop was infinite. Is it possible that the SQL statements generated by ADO made the file grow so rapidly? If so, how can I shrink it? I've tried, and the result was only 4%.
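On the shrink part, a minimal sketch; the database and logical file names are placeholders, and sp_helpfile lists the real ones:

use MyDatabase
go
exec sp_helpfile                           -- lists the logical file names
dbcc shrinkfile (MyDatabase_Data, 25)      -- placeholder logical name; target in MB
go
-- if it is the log file that grew, it may need truncating first (pre-2008 syntax):
backup log MyDatabase with truncate_only
dbcc shrinkfile (MyDatabase_Log, 10)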
We use SQL 2000 and our database is configured to grow automatically by 10%. Currently 96% of our database is used. At what point will the database expand? What is the trigger point?
I have done this experiment on one of the tables. There is a table called Build having an nvText field with a large number of records. I want to drop that column and recover the space. These are the sp_spaceused 'Build' results I got:

                                               name   rows    reserved   data       index_size  unused
1. Before deleting the nvText field            Build  663211  341440 KB  339464 KB  1944 KB     32 KB
2. After deleting the nvText field             Build  663211  341440 KB  339464 KB  1944 KB     32 KB
3. After Shrink Database (Enterprise Manager)  Build  663211  608280 KB  604744 KB  3456 KB     80 KB
4. After DBCC DBREINDEX (build, '', 70)        Build  663211  124096 KB  123392 KB  712 KB      -8 KB

Can anyone please explain why, after executing step 3 (the shrink), the data and index_size columns show increased figures, when logically they should be reduced?

Regards, Harcharan
insert into DB_Growth (Database_Name, Logical_File_Name, File_Size_MB, Growth_Factor)
exec (@l_sql_string)

fetch next from db_name_cursor into @l_db_name
end
close db_name_cursor
deallocate db_name_cursor

select * from DB_Growth with (nolock)

if object_id('DB_Growth') is not null
    drop table DB_Growth

set nocount off
set ansi_warnings on
return
I've got a question about the automatic database growth feature of V7. Here's an example:
I have a 1 GB db that can grow to a max size of 2 GB, and I set the auto-grow option to 75%. The first time the db grows, it will grab 75% of the free space (1 GB).
What happens if the database needs to grow again?
Will the db grow using the remaining free space (25%) or has the database reached its max size because it can't grow any further?
Hello, I started to write a stored procedure to insert into a temp table from sp_spaceused but couldn't get the format right. I did a search in the swynk archive and saw that some people have solved this but I couldn't see any example code. Does anyone have scripts they've written to gather db size info using sp_spaceused or some other mechanism? Thanks in advance, Dan
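For what it's worth, the usual snag is that sp_spaceused without arguments returns two result sets, which INSERT ... EXEC cannot capture. One workaround is to skip the procedure and read sysfiles directly; a sketch (the table and column names here are mine):

-- Sketch: collect database size without fighting sp_spaceused's two result sets
create table #db_size (
    capture_date  datetime,
    database_name sysname,
    size_mb       decimal(12, 2)
)

insert into #db_size (capture_date, database_name, size_mb)
select getdate(), db_name(), sum(size) * 8.0 / 1024   -- sysfiles.size is in 8 KB pages
from sysfiles

select * from #db_size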
I am somewhat confused -- I have a database in production that I restored to a QA environment; upon restore, the size had grown by 200 MB.
Both production and QA are running SQL2000 -- the only difference is that QA has the latest security hotfixes installed -- version 8.0.0.665 from KB article at the following link:
We have a SQL2K SP3 Decision Support Server which will be receiving a nightly data dump from the offsite production system. We've decided on a backup/restore process to move the data. We're currently testing the process and have come across a problem: the test backup file is only 500 MB (the real DB will be approx. 20 GB), and when it is restored the DB is fine but the log file has ballooned to 8 GB.
When the DB is restored, the properties of the production DB are forced over our settings. These include no truncate-log-on-checkpoint and a full recovery model (the destination DB will use simple recovery). We've been through Books Online and searched online forums, but have not found information on this particular problem. We can, of course, shrink the log file after restoring, but we are concerned about the amount of dead space needed when the real DB is delivered.
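The post-restore cleanup described here amounts to roughly the following, where the database and logical log file names are placeholders:

-- Sketch: force the destination settings back and reclaim the log after each restore
alter database DSS_DB set recovery simple   -- DSS_DB is a placeholder name
go
dbcc shrinkfile (DSS_DB_Log, 500)           -- placeholder logical log name; target in MB
go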
I am researching how to calculate the percentage of database growth over time. I have looked on the net, but I didn't like anything I saw. Do you have any ideas?
We have a SQL 2000 database here. As of 3 days ago, it was about 75 MB (73 MB data / 2 MB log). As of today, it is 73 MB data / 15 GB log. The log file went from 2 MB to 15 GB in a matter of days, yet the data file size has not changed. Also, there are NO users of this database; it's new and has not been placed into production yet. I currently don't have any idea as to what happened; I'm looking into Lumigent's Log Explorer software now. Does anyone have an explanation as to why the log file would have increased 6000x in size while the data is not any larger?
I have tran log backups running every 30 minutes. When I rebuild indexes on my database with the tran log backups running every 30 minutes, I get tran log backups whose size is greater than the actual log file (.LDF). I find this really strange: how can the backup be bigger than the log file itself?
I have unrestricted growth on the tran log file (by 10%). Auto Shrink is disabled.
Have any of you experienced this problem? Kindly share your comments.
I am trying to find a way to calculate my DB growth every day. I did find a script on a site, but it seems to give me the same information as the taskpad, which is not very specific. Basically I would like to know the size of a table in MB (or in whatever conversion possible), so that I will be able to do some forecasting.
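A sketch of per-table sizes in MB, reading reserved pages from sysindexes (SQL 2000-era system tables; one row per user table):

-- Sketch: table sizes in MB from reserved pages (8 KB each)
select object_name(id)            as table_name,
       sum(reserved) * 8.0 / 1024 as reserved_mb
from sysindexes
where indid in (0, 1, 255)        -- heap, clustered index, text/image pages
  and objectproperty(id, 'IsUserTable') = 1
group by object_name(id)
order by reserved_mb desc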
I've got a serious problem happening now with the transaction log on one of my databases. The log keeps growing at an alarming rate - avg. 1GB/hr on a ~550MB database; there's nowhere near that much insert/update activity in the database. This is causing space problems with the (hourly) log backups.
The database serves an internal web site; both the web server and the SQL server have been bounced, but this keeps happening.
Profiler shows many, many TransactionLog events - orders of magnitude more than other events. User activity is no greater than it ever is. DBCC OPENTRAN consistently returns no open transactions. I've created a job to checkpoint and truncate the log right before the log backups (sketched below), but that's obviously only a stopgap for the backup space problem, not a solution.
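The stopgap job is nothing more than this, for reference (the database name is a placeholder, and TRUNCATE_ONLY is pre-SQL-2008 syntax):

-- Sketch of the stopgap: truncate, then watch how fast the log refills
checkpoint
backup log MyWebDB with truncate_only
dbcc sqlperf (logspace)            -- log size and percent used, per database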
Do you have any suggestions as to where to go from here? I'm at a loss, and this can't stay as it is. Thanks for ANY input...
While I am importing data into my SQL Server (2000) database, the free space is increasing by GBs. The database's file growth is set to 1 MB (the same for both the data files and the transaction log files). I restored the same database on another PC and ran the same process, and it works fine there: no enormous growth in free space.
I appear to be doing something wrong. When I set my table to unrestricted file growth and save it, then go back and look at the settings, it is set back to restricted file growth. It seems to me that if you set a setting and apply and/or save it, it should keep that setting.
I have a database whose log file size is 10 MB. It goes into single-user mode automatically. I increased the log file size from 10 MB to 50 MB, but I actually want to make it only 20 MB. I am unable to change it, because I get the message "cannot decrease the size of the file". Is there another way to decrease the size of the log file?
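For reference, a shrink attempt would look roughly like this; DBCC SHRINKFILE can reduce a file below its current size where ALTER DATABASE cannot, and the names below are placeholders:

-- Sketch: shrink the log file to 20 MB (find the logical name with sp_helpfile)
use MyDatabase
go
exec sp_helpfile
dbcc shrinkfile (MyDatabase_Log, 20)   -- placeholder logical name; target in MB
go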