I identified a problem today with one of our development DBs where temporary tables were not being cleaned up by some stored procedures, and this led to a large tempdb (about 2 GB of data and a transaction log of 25 GB!).
I'm currently running a DBCC SHRINKDATABASE on it, but it's been running for over an hour now. In my experience with smaller DBs, this process normally takes a few minutes. Can anyone give me a ballpark figure as to how long I can expect this to run? Are we talking an hour, a few hours, a day, a few days?
Hi, I am slowly getting to grips with SQL Server. As part of this, I have been attempting to produce more efficient queries. This post is about what appears to be a discrepancy between the SQL Server execution plan and the actual time taken by a query to run. My brief is to produce an attendance system for an education establishment (I presume you can tell I'm not an A-Level student completing a project :p ). Circa 1.5m rows per annum; testing with ~3m rows currently. College_Year could strictly be inferred from the AttDateTime, however it is included as a field because it is part of just about every PK this table is ever likely to be linked to. Indexes are not fully optimised yet.

Table:

CREATE TABLE [dbo].[AttendanceDets] (
    [College_Year] [smallint] NOT NULL,
    [Group_Code] [char] (12) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [Student_ID] [char] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [Session_Date] [datetime] NOT NULL,
    [Start_Time] [datetime] NOT NULL,
    [Att_Code] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]
GO

CREATE CLUSTERED INDEX [IX_AltPK_Clust_AttendanceDets]
    ON [dbo].[AttendanceDets]([College_Year], [Group_Code], [Student_ID], [Session_Date], [Att_Code]) ON [PRIMARY]
GO

CREATE INDEX [All]
    ON [dbo].[AttendanceDets]([College_Year], [Group_Code], [Student_ID], [Session_Date], [Start_Time], [Att_Code]) ON [PRIMARY]
GO

CREATE INDEX [IX_AttendanceDets]
    ON [dbo].[AttendanceDets]([Att_Code]) ON [PRIMARY]
GO

ALL inserts are via an overnight sproc - data comes from a third-party system. Group_Code is 12 chars (no more, no less), Student_ID 8 chars (no more, no less). I have created a simple sproc which I am using as a benchmark against which I am testing my options. I appreciate that this sproc is an inefficient jack of all trades - it has been designed as such so I can compare its performance to more specific sprocs and possibly some dynamic SQL.
Sproc:

CREATE PROCEDURE [dbo].[CAMsp_Att]
    @College_Year AS smallint,
    @Student_ID   AS varchar(8)  = '________',
    @Group_Code   AS varchar(12) = '____________',
    @Start_Date   AS datetime    = '1950/01/01',
    @End_Date     AS datetime    = '2020/01/01',
    @Att_Code     AS varchar(1)  = '_'
AS

IF @Start_Date = '1950/01/01'
    SET @Start_Date = CAST(CAST(@College_Year AS char(4)) + '/08/31' AS datetime)

IF @End_Date = '2020/01/01'
    SET @End_Date = CAST(CAST(@College_Year + 1 AS char(4)) + '/07/31' AS datetime)

SELECT College_Year, Group_Code, Student_ID, Session_Date, Start_Time, Att_Code
FROM dbo.AttendanceDets
WHERE College_Year = @College_Year
  AND Group_Code LIKE @Group_Code
  AND Student_ID LIKE @Student_ID
  AND Session_Date <= @End_Date
  AND Session_Date >= @Start_Date
  AND Att_Code LIKE @Att_Code
GO

My confusion lies with running the script below with Show Execution Plan:

--SET SHOWPLAN_TEXT ON
--GO

DECLARE @Time AS datetime
SET @Time = GETDATE()

SELECT College_Year, Group_Code, Student_ID, Session_Date, Start_Time, Att_Code
FROM AttendanceDets
WHERE College_Year = 2005
  AND Group_Code LIKE '____________'
  AND Student_ID LIKE '________'
  AND Session_Date <= '2005-11-16'
  AND Session_Date >= '2005-11-16'
  AND Att_Code LIKE '_'

PRINT 'First query took: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS varchar(5)) + ' milli-seconds'

SET @Time = GETDATE()

EXEC CAMsp_Att @College_Year = 2005, @Start_Date = '2005-11-16', @End_Date = '2005-11-16'

PRINT 'Second query took: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS varchar(5)) + ' milli-seconds'
GO

--SET SHOWPLAN_TEXT OFF
--GO

The execution plan for the first query appears miles more costly than the sproc, yet it is effectively the same query with no parameters. However, my understanding is that the cached plan substitutes literals for parameters anyway. In any case, the first query's cost is listed as 99.52% of the batch, the sproc's as 0.48% (comparing the IO, CPU costs etc. supports this). BUT the text output is:

(10639 row(s) affected)
First query took: 596 milli-seconds

(10639 row(s) affected)
Second query took: 2856 milli-seconds

I appreciate that logical and physical performance are not one and the same, but why is there such a huge discrepancy between the two? They are tested on a dedicated test server, and repeated running and switching the order of the queries elicits the same results. Sample data can be provided if requested, but I assumed it would not shed much light. BTW - I know that additional indexes can bring the plans and execution times closer together - my question is more about the concept. If you've made it this far - many thanks. If you can enlighten me - infinite thanks.
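For reference, one way to put the two on a level footing is to measure the actual work done rather than relying on the plan's relative cost figures - a minimal sketch using SET STATISTICS IO/TIME, reusing the table and proc names above:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
GO

-- ad-hoc version of the query
SELECT College_Year, Group_Code, Student_ID, Session_Date, Start_Time, Att_Code
FROM dbo.AttendanceDets
WHERE College_Year = 2005
  AND Group_Code LIKE '____________'
  AND Student_ID LIKE '________'
  AND Session_Date <= '2005-11-16'
  AND Session_Date >= '2005-11-16'
  AND Att_Code LIKE '_';
GO

-- stored procedure version
EXEC dbo.CAMsp_Att @College_Year = 2005, @Start_Date = '2005-11-16', @End_Date = '2005-11-16';
GO

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
GO

The "logical reads" and "CPU time / elapsed time" lines in the Messages output give per-statement numbers that can be compared directly, independently of the cost percentages shown in the plan.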
We recently upgraded from SQL 6.5 to SQL 7. I have a few .sql files that were each running around 5 - 8 minutes under 6.5. These same files now each take over 30 minutes to run. Has anybody had problems with their queries taking longer to run under 7.0? These files are quite large and are comprised of 3 - 4 batches with several queries in each batch. If anybody has any thoughts on the cause please let me know.
I have a tricky question about Microsoft SQL Server 2000/2005. I have failed to find a solution to the problem so far, and others have failed too. So I thought I'd throw it around here, given that a lot of knowledgeable DBAs hang around in this forum.
I am looking for a script/stored procedure that is able to show me upcoming job executions for a selected date/time range based on the current settings for the jobs configured on the Database server.
Example:
Date/Time From: 2/16/2007 00:00 a.m. (Friday)
Date/Time To: 2/17/2007 12:00 p.m. (Saturday)
A. Job 1 / Schedule 1 = week days, 1:00 a.m.
B. Job 1 / Schedule 2 = Saturdays, 10:00 a.m.
C. Job 2 / Schedule 1 = hourly, between 10:00 a.m. and 1:00 p.m.
D. Job 3 / Schedule 1 = every 3rd Saturday of the month, 5:00 a.m.
E. Job 4 / Schedule 1 = every 1st Friday of the month, 8:00 a.m.
The routine I have in mind would not return Job E, because 2/16/2007 is the 3rd Friday of the month and not the first.
It needs to return the Job ID, the Job Name, the Schedule ID, the Date and the Time. Disabled Jobs and Schedules are by default excluded from the selection, but an option to include or exclude those would be a bonus. Information such as the min, max and average execution time would be great too.
Notes
The schedules of Job D and Job B overlap, as you can see in my example above. This happens only once per month, though. I have over 20 jobs, some with very frequent execution times like every 5 minutes or every 20 minutes, and jobs that run hourly, daily, weekdays only, weekends only, once a month, etc.
Purpose
I want to do two things. I want to determine where jobs overlap, not just by start date/time, but also by average run time and maximum run time.
I also want to be able to generate a report that shows me what should have been done and what was actually done by the jobs. (Note on the side: 6 of the jobs create new jobs on the fly for other database servers, and this sometimes does not happen properly, without any error message. The volume of jobs makes a manual search like looking for a needle in a haystack.)
Findings so far:
I did some digging myself and found the following stored procedures that do some of the steps I need, plus the tables involved in the calculation.
Stored procedures:
- sp_get_schedule_description (in db: msdb) (undocumented stored procedure)
- sp_add_schedule (in db: msdb) http://msdn2.microsoft.com/en-us/library/ms187320.aspx
It is not a problem to determine the next execution of a job, but that is not what I need, anyway. The problem is that it does not help you to determine all upcoming execution times, if the selected timeframe is long enough that SQL Server executes the job more than once.
There is no way around using the sysjobschedules table and calculating the execution dates and times based on the configured settings. See the stored procedure sp_get_schedule_description.
That one nicely breaks down the settings as documented for sp_add_schedule at http://msdn2.microsoft.com/en-us/library/ms187320.aspx, but it does not let you determine the exact upcoming dates and times when the job is supposed to be executed.
Another example: if there is only one job, scheduled to run
1) every 5 minutes,
2) on every weekday,
3) between 1:30pm and 2:00pm,
you would get the following results:
1) Start date/time 7/7/2007 12:00pm, end date/time 7/8/2007 2:00pm:
nothing, because 7/7/2007 and 7/8/2007 fall on the weekend.
2) Start date/time 7/5/2007 12:00pm, end date/time 7/5/2007 1:45pm:
executions at 1:30pm, 1:35pm, 1:40pm and 1:45pm on 7/5/2007.
Now, SQL Server has a lot more configuration options for the scheduler.
And don't forget that you can have more than one schedule record for any single job, including a no-schedule record (which does not interest me).
Automatically when SQL Server Agent starts: freq_type=64
Whenever the CPU becomes idle: freq_type=128
One time, on date mm/dd/yyyy at time hh:mm:ss am/pm: freq_type=1
Or recurring:

Daily: freq_type=4; every X day(s): freq_interval=X

Weekly: freq_type=8; every X week(s): freq_recurrence_factor=X; on Mo [ ], Tu [ ], We [ ], Th [ ], Fr [ ], Sa [ ], Su [ ]: freq_interval as a bit mask with 1 = Sunday, 2 = Monday, 4 = Tuesday, 8 = Wednesday, 16 = Thursday, 32 = Friday, 64 = Saturday. Examples: Su and Mo enabled = 3 (1 (Su) + 2 (Mo)); Mo, We and Fr enabled = 42 (2 (Mo) + 8 (We) + 32 (Fr))

Monthly: freq_type=16; day X of every Y month(s): freq_interval=X, freq_recurrence_factor=Y

Or the 1st/2nd/3rd/4th/last WEEKDAY of every Y month(s): freq_type=32; freq_relative_interval=1 (1st), 2 (2nd), 4 (3rd), 8 (4th), 16 (last); freq_interval=1=Su, 2=Mo, 3=Tu, 4=We, 5=Th, 6=Fr, 7=Sa, 8=Day, 9=Weekday, 10=Weekend day; freq_recurrence_factor=Y

Daily frequency: occurs once at hh:mm:ss AM/PM: freq_subday_type=0x1; or occurs every X hours/minutes starting hh:mm:ss AM/PM and ending hh:mm:ss AM/PM: freq_subday_type=0x4 (minutes) or 0x8 (hours), freq_subday_interval=X, active_start_time, active_end_time

Duration: start date mm/dd/yyyy, end date mm/dd/yyyy: active_start_date, active_end_date; no end date: active_end_date=99991231
Does anybody have a script that does this, or several individual scripts that could be combined to do what I want to do?
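As a possible starting point (an assumption on my side, not a finished solution): on SQL Server 2005 the schedule definitions live in msdb.dbo.sysschedules (on 2000 the same freq_* columns sit directly on msdb.dbo.sysjobschedules), so a query along these lines collects everything a schedule-expansion routine would need:

SELECT  j.job_id,
        j.name                AS job_name,
        s.schedule_id,
        s.name                AS schedule_name,
        s.freq_type,
        s.freq_interval,
        s.freq_subday_type,
        s.freq_subday_interval,
        s.freq_relative_interval,
        s.freq_recurrence_factor,
        s.active_start_date,
        s.active_end_date,
        s.active_start_time,
        s.active_end_time
FROM    msdb.dbo.sysjobs         AS j
JOIN    msdb.dbo.sysjobschedules AS js ON js.job_id = j.job_id
JOIN    msdb.dbo.sysschedules    AS s  ON s.schedule_id = js.schedule_id
WHERE   j.enabled = 1
  AND   s.enabled = 1
ORDER BY j.name, s.schedule_id;

The expansion into concrete run times for the requested window still has to be done in code, following the freq_* rules listed above, and the min/max/average durations could come from msdb.dbo.sysjobhistory (run_duration is an integer encoded as HHMMSS).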
I just inherited a database nightmare - and I've got a few questions I hope you folks can help with.
I ran a shrinkdatabase command from Enterprise Manager (I should have used DBCC, I realize now), and then my session died. The process is still going - but this is a huge database, and I'd like to know how long it will run. Is there any way to translate physical IO (or anything else) into % complete?
This is being run to open up space on a disk array that is completely full. However, since kicking that off, additional storage has mysteriously been made available. <sigh> I assume that killing the shrink wouldn't involve rollbacks - but might leave the database in an unpredictable state. Is this correct?
Any other recommendations to speed this along? And, yes, I've got all the users off the system.
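One thing that might help, though only on SQL Server 2005 and later (Enterprise Manager suggests 2000 here, so treat this as an aside): the engine exposes shrink progress through sys.dm_exec_requests, for example:

SELECT session_id,
       command,                                        -- a shrink shows up as a DBCC command, e.g. DbccFilesCompact
       percent_complete,
       estimated_completion_time / 60000 AS est_minutes_remaining,
       start_time
FROM   sys.dm_exec_requests
WHERE  command LIKE 'Dbcc%';

On 2000 there is no equivalent counter, so physical IO is about the only indicator available.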
Does anyone have any idea how long DBCC SHRINKDATABASE takes to run? I have a 20GB database with 10.5 GB used. I know hardware is part of the equation, but I don't know the specifics of the machine this particular DB is on, other than it is an 8-CPU Compaq.
Hi all, after using the "DBCC SHRINKDATABASE (MyDB, TRUNCATEONLY)" command, my log file stopped growing. It still remains at 1024 KB after 4-5 days of activity. What might be the problem?
Should I use the Shrink Database maintenance plan once a month on all user DBs? If so, what values should I put in the "Shrink database when it grows beyond XXX MB" and "Amount of free space to remain after shrink" settings?
I have a strange problem:
1) When I run "DBCC SHRINKDATABASE" (from my application) on a computer with SQL Server 2000, it works fine.
2) When I run "DBCC SHRINKDATABASE" (from my application) on a computer with MSDE that is connected to a network with another computer running SQL Server 2000, it works fine.
3) When I run "DBCC SHRINKDATABASE" (from my application) on a computer with MSDE that isn't connected to a network, it stops responding. I have waited for 15 minutes but it still doesn't work, and the database is not cleared either. Why?
4) Also, when I run "DBCC SHRINKDATABASE" (from my application) on a computer with MSDE that is connected to a network without another computer running SQL Server 2000, it stops responding. I have waited for 15 minutes but it still doesn't work, and the database is not cleared either. Why?
May be the "DBCC SHRINKDATABASE" need some DLLs that installed on SQL Server 2000 only and doesn't deliver with MSDE? May be this is SQL-DMO or/and SQL-NS?
My application is written in C# in VS.NET 2003. I am using OLE DB.
Is there any way to tell how long this will run for -- or how far it has got? I have a large database that has just had most of the data removed. The command has been running for 8 hours, and I have just stopped it to let something else run quickly. Is there any way of telling how much longer it will take?
What is the difference in shrinkdatabase between SQL 2000 and SQL 2005? I am just doing a plain shrink, no reorg. It runs very fast in 2000 and runs forever in 2005.
2 weeks ago I deleted about 200GB of data from a 300GB+ database. It's a custom DB we want to use to test a few things. We wanted a smaller DB for our testing, and since we didn't have one, we grabbed a production backup, removed sensitive data and ran a large archiving script on it... Anyway, so far so good, but our data file was still the same size as before.
So we started a shrinkdatabase... it has been running for 2 weeks now! After about 1 week I interrupted the shrinkdatabase process and ran a dbcc shrinkdatabase('DB', truncateonly) just to see if the data file would get reduced a bit or not. It did get reduced by about 20GB. I assume that dbcc shrinkdatabase('DB', 0) had freed up enough pages at the end of the data file that a truncateonly was able to release some space... Anyway, after this we started the dbcc shrinkdatabase('DB', truncateonly) again... still running...
The database was never shrunk before and every index is highly fragmented... Is that why it's taking so long? Am I actually going to have to wait another few weeks before that thing finishes?
Does anyone have experience running a shrink on large DBs?
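For what it's worth, a commonly suggested approach for databases this size (an assumption, not something tried here) is to skip SHRINKDATABASE and run DBCC SHRINKFILE against the data file in small decrements, so each pass only has to move a limited number of pages and can be stopped and restarted without losing all the work. A sketch with made-up names and sizes:

-- 'DB_Data' is a placeholder for the logical data file name (see sp_helpfile)
DBCC SHRINKFILE (DB_Data, 150000);   -- target size in MB
DBCC SHRINKFILE (DB_Data, 145000);   -- then keep stepping the target down
DBCC SHRINKFILE (DB_Data, 140000);

Page moves during the shrink fragment the indexes further, so an index rebuild pass afterwards is usually part of the plan - which grows the files a little again, so it is worth leaving that headroom in the final target.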
I have a database that appears like it's much bigger than it should be. Looking at its properties in Enterprise Manager, it's 13GB big, but it says 7GB is available in free space. It's like it's grown, data was deleted, and it was never shrunk back down.
So I'm considering running a DBCC SHRINKDATABASE but am worried about the ramifications. Here are my concerns:
- Is the data in the database in any danger? Is the SHRINKDATABASE function safe?
- Can SHRINKDATABASE be run with the DB still in use? I'll run it after hours, but want to know if I need to put it in admin mode
- Will SHRINKDATABASE even do what I want?
- I've read that SHRINKDATABASE can fragment indexes. Is this true? If so, how do I avoid it?
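On that last point, a hedged sketch of what the check/repair cycle might look like on SQL 2000 (MyDb and dbo.MyLargeTable are placeholders): shrink first, then measure fragmentation, then rebuild what needs it.

DBCC SHRINKDATABASE (MyDb, 10);                                        -- leave 10% free space
DBCC SHOWCONTIG ('dbo.MyLargeTable') WITH TABLERESULTS, ALL_INDEXES;   -- check logical fragmentation
DBCC DBREINDEX ('dbo.MyLargeTable');                                   -- rebuild if badly fragmented

Note that rebuilding indexes needs free space and will grow the database back out somewhat, so shrinking to an aggressive target and then reindexing tends to work against itself.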
I am working with a large database that has its tables stored on a secondary filegroup. I'm trying to shrink the size of the files but I can't seem to get the system to free up the unused space. I've tried shrinkdatabase and shrinkfile both with and without the truncateonly option. Has anyone else had this problem? Is there a workaround? Any help would be greatly appreciated.
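A hedged sketch of going after the individual files instead of the whole database (the logical name below is a placeholder; dbo.sysfiles is the SQL 2000-era view, sys.database_files on 2005):

USE MyDb;
-- see how big each file is and how much of it is actually used
SELECT name,
       size / 128                            AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM   dbo.sysfiles;

-- then shrink the specific secondary-filegroup file toward a target size in MB
DBCC SHRINKFILE (MySecondaryData, 500);

If used_mb is close to size_mb for the secondary files, there simply isn't free space inside them to release, which would explain TRUNCATEONLY doing nothing.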
I have a maintenance plan on DBABC that backs up the log to a .trn file every 90 minutes (daily).
In order to keep the log file small, I also set up a T-SQL job to run at 4:15 am that does backup log ABC with truncate_only, then runs dbcc shrinkdatabase (DBABC, 10).
it looks "backup log ABC with truncate_only" has conflicts with the every90 minutes backup transaction log.
Question: can I keep the transaction log backup every 90 minutes and still shrink the log file? The log file is growing very fast.
Or do I have to use differential backups instead of transaction log backups?
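One hedged alternative (the logical file name below is a guess - sp_helpfile shows the real one): drop the TRUNCATE_ONLY step, let the regular 90-minute backups keep clearing the log, and shrink only the log file when it really needs it:

USE DBABC;
EXEC sp_helpfile;                      -- find the logical name of the log file
DBCC SHRINKFILE (DBABC_log, 200);      -- target size in MB; run right after a log backup

TRUNCATE_ONLY discards log records that the 90-minute .trn backups expect to capture, which breaks that backup chain - that is the conflict being seen. Differential backups would sidestep it, but at the cost of losing point-in-time recovery between differentials.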
After moving off the VS debugger and into Management Studio to exercise our SQLCLR sp, we notice that the 2nd execution gets an error suggesting that our static SqlCommand object is being reused from the 1st execution (of the sp under Management Studio). If this is expected behavior, we have no problem limiting our statics to only completely reusable objects, but we would first like to know if this is expected. Is the fact that the debugger doesn't show this behavior also expected?
Here's my case: I have written a stored procedure which performs the following:
1. Grab data from a table using a cursor
2. Process the data
3. Write the result into another table
If I execute the stored procedure directly (through VS.NET or Query Analyzer), it runs, but when I try to execute it via a scheduled job, it fails.
I used the same record, same parameters, and the same statements to call the stored procedure.
The benefit of the actual execution plan is that you can see the actual number of rows passing through each step, compared to the estimated number of rows. But what about the "cost percentages"? I believe I've read somewhere that these percentages are still just an estimate and are not based on the real execution. Does anyone know, and preferably have a link to something that documents it? Thanks
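For comparison, SET STATISTICS PROFILE returns the actual plan as a rowset with both a Rows column (actual rows per operator) and an EstimateRows column, which makes the estimate-versus-actual gap easy to see; a minimal sketch (the SELECT is just a placeholder):

SET STATISTICS PROFILE ON;
GO
SELECT TOP 10 * FROM dbo.SomeTable;   -- placeholder for the query being analysed
GO
SET STATISTICS PROFILE OFF;
GO

The cost columns in that output (TotalSubtreeCost etc.) are still the optimizer's estimates, which is consistent with the percentages in the graphical plan being estimate-based.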
I have set up a SQL Server table which has a column called times. This column is to hold race times; however, when I enter times such as 22:20:89, with the 89 being milliseconds, I am not able to input this - it will only allow me to put in times where the milliseconds are below 60. This is a problem, especially for the shorter races where half a second is a big difference. Thanks, Mike
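The column type isn't stated, but if it is a time-of-day/datetime type then 89 in the seconds position will always be rejected. A hedged workaround is to store the race time as an integer count of hundredths of a second (the names below are made up) and only format it for display:

-- hypothetical table: race times kept as hundredths of a second
CREATE TABLE dbo.RaceTimes
(
    RaceTimeID int IDENTITY(1,1) PRIMARY KEY,
    Hundredths int NOT NULL
);

-- 22:20.89 = 22 minutes, 20 seconds, 89 hundredths
INSERT INTO dbo.RaceTimes (Hundredths) VALUES (22 * 6000 + 20 * 100 + 89);

-- format back to mm:ss.hh for display
SELECT CAST(Hundredths / 6000 AS varchar(10)) + ':' +
       RIGHT('0' + CAST((Hundredths % 6000) / 100 AS varchar(2)), 2) + '.' +
       RIGHT('0' + CAST(Hundredths % 100 AS varchar(2)), 2) AS RaceTime
FROM   dbo.RaceTimes;

Sorting, averaging and finding differences then work on the integer directly, which tends to be simpler than fighting the datetime rules.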
When trying to connect to a server on a slow connection (satellite), Enterprise Manager times out. I am able to connect with Query Analyzer. I tried to change 'remote login timeout', but that didn't help; EM seems to time out long before this value is reached. Any ideas?
I am trying to add a list of times together retrieved from my database.
They are from separate records in the same table.
They are in a date format, e.g. 01-Jan-2006 03:45:00. What I want to do is ignore the 01-Jan-2006 part as well as the :00 part, and focus on the hours and minutes. I wish to add these up, e.g. 03:45 + 04:45 = 08:30. Is there an easy way of doing this?
Below is the SQL statement for getting the times from my database, if it's any help.
SQL statement:
SELECT LENGTH_OF_VISIT FROM NOV.MAIN;
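A hedged sketch, assuming LENGTH_OF_VISIT is a datetime whose time portion carries the duration (the date part is ignored): pull the hours and minutes out with DATEPART, sum in minutes, and format the total back to hh:mm.

SELECT SUM(DATEPART(hour, LENGTH_OF_VISIT) * 60 + DATEPART(minute, LENGTH_OF_VISIT)) AS total_minutes
FROM   NOV.MAIN;

-- e.g. 03:45 + 04:45 gives total_minutes = 510, which formats back as:
-- SELECT CAST(510 / 60 AS varchar(10)) + ':' + RIGHT('0' + CAST(510 % 60 AS varchar(2)), 2)   -- '8:30'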
Dear All, I need to select records between two datetimes. My database structure and sample values are as below.

Fromdate                    ToDate
2007-02-20 09:00:00.000     2007-02-20 12:00:00.000
2007-02-20 08:00:00.000     2007-02-20 12:00:00.000
2007-02-20 06:00:00.000     2007-02-20 13:00:00.000

Query I used:

select *
from tablename
where ((fromdate between Convert(datetime, '2007-19-2 10:00', 103) and Convert(datetime, '2007-19-2 11:00', 103))
   or (todate between Convert(datetime, '2007-19-2 10:00', 103) and Convert(datetime, '2007-19-2 11:00', 103)))

My query is not returning values when I am querying between '2007-19-2 10:00' and '2007-19-2 11:00'. Please help. Thanks
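Two things might be worth checking here (table/column names as given above). First, the literal '2007-19-2 10:00' is year-day-month, which does not match style 103 (dd/mm/yyyy), so the CONVERT may not be building the date intended - an unambiguous style such as 120 avoids that. Second, a range-overlap test also catches rows whose Fromdate/ToDate straddle the whole window (e.g. the 06:00-13:00 row), which the BETWEEN-on-each-endpoint version misses. A sketch:

SELECT *
FROM   tablename
WHERE  Fromdate <= CONVERT(datetime, '2007-02-20 11:00:00', 120)   -- window end
  AND  ToDate   >= CONVERT(datetime, '2007-02-20 10:00:00', 120);  -- window start

The date 2007-02-20 here is taken from the sample rows; if the 19th really is the intended day, none of the sample rows fall on it, which would also explain the empty result.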
Hello everyone, I have an odd issue and I can't seem to figure it out. I am using SQL Server 2005 and I am using transactions in combination with TRY/CATCH blocks. My query (executing the stored procedure) never seems to finish in query analyzer. I have removed all statements that execute or select against the database except for a simple SELECT 'test' AS test, so I would have something in my BEGIN/END blocks. Below is my stored proc:

ALTER PROCEDURE [dbo].[ar_mainCheckRecon]
    @tripID int,
    @payment money,
    @adjustment money,
    @serviceType tinyint,
    @checkTpId int,
    @checkTlId int,
    @checkNbr varchar(20)
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        BEGIN TRAN
        SELECT 'test' AS test
    END TRY
    BEGIN CATCH
        RAISERROR('Error commiting transaction', 16, 1)
        ROLLBACK TRAN
        SELECT '0' AS success
    END CATCH

    COMMIT TRAN
    SELECT '0' AS errorCode
END

I call the sproc by running this line in query analyzer:

exec ar_mainCheckRecon 433636, 0, 500, 0, 500, 0, 23

Is there something I am overlooking as to why the query keeps running and never stops? My current execution time is at 3 mins according to query analyzer! Thanks, John
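Nothing in the proc body obviously loops, so a hedged first check (an assumption, not something from the post) is whether the session is simply blocked - for example by an earlier failed run that left a transaction open:

-- from a second connection while the EXEC is hanging:
EXEC sp_who2;          -- the BlkBy column shows which spid, if any, is blocking the session
DBCC OPENTRAN;         -- reports the oldest active transaction in the current database

-- in the hung session's own window, after cancelling the call:
SELECT @@TRANCOUNT AS open_transactions;   -- anything > 0 means a transaction was left open

Separately, if the CATCH block ever fires, the ROLLBACK runs and control still falls through to the COMMIT TRAN after END CATCH with no transaction left to commit; putting the COMMIT inside the TRY block (and returning from the CATCH) avoids that.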