Does anyone know how to tell how long an auto update statistics took to run? I looked at DBCC SHOW_STATISTICS and it shows the time the stats were last updated, but not how long the update took. Thanks.
Recently a production server suffered a critical blocking period and I wanted to know if I could solicit some input. It seems that a stored procedure was in the middle of recompiling when an auto update statistics started. This caused blocking for about an hour on the single object (the stored procedure) that was originally called. The table that the update occurred on, and that the stored procedure reads from, is quite large: 2 million rows and about 140 columns wide, taking up almost 4 GB of space according to sp_spaceused. Some info from sysprocesses is below. I have some questions.
1. Can the update statistics for a '_WA%' stats cause blocking on a table?
2. Does an update stats on an index survive a restart of SQL Server? We tried restarting, but the blocking did not end.
3. If the stored procedure is running under a compile, can the server automatically start an update stats and cause the stored procedure to wait?
4. Can the server automatically start an update stats on more than one column's stats at a time, causing one to be blocked by the other?
5. We had never seen this issue before going to SQL2K clustering. Is this something specific to SQL2K and not SQL7?
Thanks for your input. John Lee
This is the lock info for the blocking processes.
spid   dbid  ObjId       IndId  Type  Resource        Mode   Status  name
------ ----- ----------- ------ ----- --------------- ------ ------- -------------------------
142    7     2           1      KEY   (6f00035ef42b)  S      GRANT   sysindexes
142    7     2           1      KEY   (6f00035ef42b)  S      GRANT   sysindexes
142    7     421576540   0      TAB                   Sch-S  GRANT   tJob
142    7     1141579105  0      TAB                   Sch-S  GRANT   tPatient_info
142    7     1141579105  0      TAB   [UPD-STATS]     Sch-M  GRANT   tPatient_info
142    7     1659921035  0      TAB   [COMPILE]       X      GRANT   iDBGetPatInfoRecord
142    7     1659921035  0      TAB                   Sch-S  GRANT   iDBGetPatInfoRecord
These are the processes that are being blocked:
spid
------
137
140
Below this is a snapshot of all the SQL processes on the server being blocked. Save the report and send to the whole database group.
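A hedged sketch of how lock output like the listing above can be captured on SQL 2000; the spid and ObjId values come from that listing:

EXEC sp_lock 142                -- all locks held by the blocking spid
SELECT OBJECT_NAME(1141579105)  -- run in the blocked database to resolve an ObjId to a name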
Hi all, I have a dtsx (SSIS) package to manually "clone" one SQL Server database to another.
How do I copy all stats from one database to another? I have a problem with "auto stats".
When I try to DROP STATISTICS for the auto stats, I get this error:
Cannot DROP the index 'dbo.ACTIVIDAD_PROVEEDOR.PK_ACTIVIDAD_PROVEEDOR'. It is not a statistics collection.
What can I do?
-- Get stats list
SELECT '[' + SCHEMA_NAME(tbl.schema_id) + '].[' + tbl.name + ']' AS [Table_Name_With_Schema],
       '[' + st.name + ']' AS [Name],
       SCHEMA_NAME(tbl.schema_id) + '.' + tbl.name + '.' + st.name AS [Estadistica]
FROM sys.tables AS tbl
INNER JOIN sys.stats AS st ON st.object_id = tbl.object_id
ORDER BY [Table_Name_With_Schema] ASC, [Name] ASC
Thanks in advance, any help will be appreciated, regards, greetings
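One hedged workaround sketch for the DROP error above: DROP STATISTICS only works for column statistics, not for statistics that back an index, which is exactly what the PK message is complaining about. Filtering sys.stats on auto_created generates drop commands for just the _WA_Sys% auto stats:

-- Generate DROP STATISTICS commands for auto-created column stats only;
-- index-backed statistics such as PK_ACTIVIDAD_PROVEEDOR are skipped,
-- because they cannot be dropped separately from their index.
SELECT 'DROP STATISTICS [' + SCHEMA_NAME(tbl.schema_id) + '].[' + tbl.name + '].[' + st.name + '];'
FROM sys.tables AS tbl
INNER JOIN sys.stats AS st ON st.object_id = tbl.object_id
WHERE st.auto_created = 1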
-- (table definition reconstructed from the insert script below; the original post lost everything before the index options)
CREATE TABLE CUSTOMER (
    CUSTOMER_ID INT NOT NULL,
    CUSTOMER_NAME VARCHAR(20) NULL,
    CONSTRAINT PK_CUSTOMER PRIMARY KEY CLUSTERED (CUSTOMER_ID)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY]
-- Insert 1000 rows
DECLARE @COUNT INT
SET @COUNT = 1
WHILE @COUNT <= 1000
BEGIN
insert into CUSTOMER (CUSTOMER_ID, CUSTOMER_NAME)
VALUES (@COUNT, '12345678901234567890')
SET @COUNT = @COUNT + 1
END
Looking under Tables > Statistics, the statistics are empty, so I fire UPDATE STATISTICS and then see the 1000 rows in there.
I run the insert script again:
DECLARE @COUNT INT
SET @COUNT = 1001
WHILE @COUNT <= 2000
BEGIN
insert into CUSTOMER (CUSTOMER_ID, CUSTOMER_NAME)
VALUES (@COUNT, '12345678901234567890')
SET @COUNT = @COUNT + 1
END
Looking at the statistics again, the update is not firing.
If I do SELECT * FROM CUSTOMER WHERE CUSTOMER_ID = '2000' and then go check the statistics, it works.
I was under the impression that the statistics update is fired when you do an insert, update, or delete.
The sys.sysindexes rowmodctr shows the 1000 rows.
I checked the conditions under which SQL fires the auto update: if the number of rows in the table is > 6 and <= 500, the stats are updated after 500 modifications; if the table has > 500 rows, the auto update is done when 500 + 20% of the rows have been modified.
So the threshold is met.
Does anyone have any other suggestions about the auto stats?
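One hedged way to sanity-check the trigger arithmetic against the rowmodctr mentioned above: for a 1000-row table the threshold is 500 + 20% x 1000 = 700 modifications, so the second batch of 1000 inserts does cross it, but the update itself only runs when the optimizer next needs the statistics, which is why the SELECT fires it:

SELECT name, rowcnt, rowmodctr
FROM sys.sysindexes
WHERE id = OBJECT_ID('CUSTOMER')  -- rowmodctr resets to 0 once the auto update runs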
With SQL2005 SP2, we are seeing that when auto stats run on one or more indexes of a large table (1.5M rows), the stored proc using that table immediately starts acting as if the query plan is no longer any good. This causes a drastic slowdown in response time and a corresponding increase in table reads to get the data. E.g., the next execution of the procedure after the auto stats kick in goes from 355 reads to 755,000 reads (as shown by Profiler). Generally, there are about 25 people using the DB at any one time. They connect through a mid-tier VB component.
I tried adding WITH RECOMPILE to the stored proc in question, but that caused almost all executions to run at the higher number. I thought that the WITH RECOMPILE hint would create a new query plan for each execution of the procedure and that plan would be the latest and greatest. Perhaps it did, but most users got stuck with the higher number of reads anyway. After taking the hint out, everyone went back to the 355-read number and quick response times.
What we are wrestling with is that when those auto stats hit, it really messes up everyone until we manually recompile the procedure. Daily we delete all records in the table that are over 45 days old, so the table stays pretty much the same size. We also set the recompile flag to cause a new plan to be generated that will reflect the smaller amount of data. Should we also run a stats update before recompiling the procedure? Profiler has been very helpful in capturing what is going on, so I think I have a good handle on that. However, I don't understand why WITH RECOMPILE produced a messed up plan for everyone. The compile itself seems to take only 1 ms when done from the query screen.
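For what it's worth, a hedged sketch of the manual fix described above (object names are placeholders); refreshing the stats before forcing the recompile means the new plan is built from fresh data:

UPDATE STATISTICS dbo.BigTable WITH FULLSCAN  -- placeholder table name
EXEC sp_recompile 'dbo.usp_GetData'           -- placeholder proc; next execution compiles a new plan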
Has anyone noticed a performance improvement during trading hours when they replaced sp_updatestats with UPDATE STATISTICS FULLSCAN in their nightly maintenance? Or is it negligible?
I'm fairly new to SQL Server and I'm just wondering if it's possible to update statistics for all indexes somehow? I'm looking at the UPDATE STATISTICS command and it doesn't seem to be possible.
The situation we have is a reporting DB that basically has all its tables truncated and remade every night by some DTS jobs that import from another data source, transform the data, and build some denormalized tables etc. Some of the large insert operations go from taking 8 mins to taking several hours sometimes, and updating the stats seems to fix the problem for a while. So I'd like to make sure the optimizer has the latest stats for all tables.
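A hedged sketch of two common ways to do a database-wide refresh (the second procedure is undocumented but widely used):

EXEC sp_updatestats                                        -- samples stats on tables with modifications
EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN' -- undocumented; full scan on every table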
I am having the following errors with the script below
Msg 102, Level 15, State 1, Line 44
Incorrect syntax near '?'.
Msg 319, Level 15, State 1, Line 47
Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.
It also does not seem to loop around each db.
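Without seeing the script, a hedged guess: both errors are typical when '?' is used outside the quoted @command string, or when a statement ending just before WITH lacks a terminating semicolon. A minimal working shape looks like this:

EXEC sp_MSforeachdb '
IF ''?'' NOT IN (''master'',''model'',''msdb'',''tempdb'')
BEGIN
    USE [?];            -- ? is only substituted inside the quoted command
    EXEC sp_updatestats;
END'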
Recently we moved a few of our databases from SQL 2000 to SQL 2005 (SP2) using backup and restore. After the restore I did a reindex and update stats on the databases. Since then we have observed performance issues on the SQL 2005 databases, but the problem vanishes the moment we run sp_updatestats. Is this a problem with SQL 2005, that we have to run sp_updatestats two or three times a day? In SQL 2000 we ran it only once a week and still never had any performance issues. Is there any config change we need to make to fix this problem in SQL 2005?
(Assuming SQL Server 2000, auto create statistics on, auto update statistics on.) Does DBCC DBREINDEX(<tablename>) update statistics? If yes, are the statistics equivalent to those that would be produced by: UPDATE STATISTICS <tablename> WITH FULLSCAN
Microsoft states that dbcc DBREINDEX automatically updates statistics but INDEXDEFRAG does not. If this is the case, does MS mean that only the affected statistics are updated or all statistics? Also, is it a good idea to run 'Update Statistics' after doing INDEXDEFRAG?
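A hedged sketch of that combination (table name is a placeholder); since INDEXDEFRAG leaves statistics untouched, following it with an explicit update covers the gap:

DBCC INDEXDEFRAG (0, 'dbo.MyTable', 1)  -- 0 = current database, 1 = clustered index
UPDATE STATISTICS dbo.MyTable           -- refresh what INDEXDEFRAG did not touch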
How do I update the stats of tables when we insert data into them? I believe the auto stats update happens only when 500 + 20% of the rows are changed for a table. Once we insert, say, some 1000 records into a particular table, the query takes too long (more than 1 min). The same query executes faster once I manually update the stats.
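If the load size is known, one hedged option is to refresh the statistics explicitly right after the insert rather than waiting for the 500 + 20% threshold (table name is a placeholder):

UPDATE STATISTICS dbo.MyTable WITH FULLSCAN  -- run immediately after the 1000-row insert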
I am using Ola Hallengren's scripts for index and stats maintenance, but I am wondering what most people do in terms of maintenance schedules. At present we do an index rebuild/reorg weekly, but do people also do an update stats nightly?
I suppose there is an element of "it depends" here in that the data may be fairly static so the update stats may not be required, or if heavily updated then perhaps rebuilding indexes may be required more frequently.
We have a file import job. This job typically imports millions of records into a SQL 2008 DB. After the load, the DB performance goes down the drain. Thus far, the solution has been to rebuild indexes on the affected tables. I'd like to come up with a better solution. My guess is that after the load, the statistics are shot until the next stats update.
What is the best way to handle this scenario? There must be some way to keep the stats current during a big data load.
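A hedged sketch, assuming names: refresh statistics on the loaded tables as the last step of the import job, and optionally let queries keep compiling on the old stats while updates run in the background (SQL 2005 and later):

UPDATE STATISTICS dbo.ImportTable WITH FULLSCAN           -- placeholder table; last step of the load
ALTER DATABASE MyDB SET AUTO_UPDATE_STATISTICS_ASYNC ON   -- placeholder db; optional, reduces compile stalls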
We are planning to standardize our newly deployed SQL Server. As part of this we have configured two maintenance plans: 1) Update Statistics, which runs daily, and 2) Index Reorganize, which runs weekly.
Apart from the above, is there anything else that should be in place for better maintenance of the SQL Server?
Also, does the Index Rebuild activity for clustered indexes require any downtime?
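On Enterprise Edition the rebuild can be done online, which avoids most downtime; a hedged sketch with placeholder names (note that before SQL 2012, clustered indexes over LOB columns cannot be rebuilt online):

ALTER INDEX PK_MyTable ON dbo.MyTable REBUILD WITH (ONLINE = ON, FILLFACTOR = 90)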
When I checked the properties of the statistic, I can see it is on a varchar(3) field where there are only 3 different values, all char(1).
The total size of the data in the table, according to the Disk Usage by Top Tables report, is 199,680,712 KB.
So my question is this...
For the UPDATE STATS on this one column WITH FULLSCAN, does SQL Server read the entire table into the buffer pool? If so, and the table has 199,680,712 KB of data, why did the session request 145,705,216 KB?
Or does SQL Server just read the column and the clustered index key into the buffer pool?
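One hedged way to answer this empirically: after the stats update finishes, count the table's pages sitting in the buffer pool (table name is a placeholder):

SELECT COUNT(*) * 8 AS buffered_kb
FROM sys.dm_os_buffer_descriptors AS bd
INNER JOIN sys.allocation_units AS au ON au.allocation_unit_id = bd.allocation_unit_id
INNER JOIN sys.partitions AS p ON p.hobt_id = au.container_id AND au.type IN (1, 3)
WHERE bd.database_id = DB_ID()
  AND p.object_id = OBJECT_ID('dbo.MyLargeTable')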
Hi All, I have a table which holds the leave details of an employee, with columns like paid leaves, sick leaves, and personal leaves. Problem: for example, an employee joined on 21 May 2008. After 6 months, i.e. on 21 Nov 2008, I need to update the above columns (i.e. increase the number of leaves). So the update is to be done every 6 months and every 1 year. Can anyone tell me how to execute the update query based on the duration? Thanks in advance.
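A hedged sketch of one approach, with invented table and column names: run a daily job that credits employees whose 6-month anniversary falls on the current day.

UPDATE dbo.EmployeeLeave
SET PaidLeaves = PaidLeaves + 5                        -- assumed half-yearly accrual amount
WHERE DATEDIFF(MONTH, EmpJoinDate, GETDATE()) > 0
  AND DATEDIFF(MONTH, EmpJoinDate, GETDATE()) % 6 = 0  -- a 6-month multiple since joining
  AND DAY(EmpJoinDate) = DAY(GETDATE())                -- anniversary day; schedule this daily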
This calls the SP that does the reindex. It fails at the update statistics step with a very generic message, like "Command: UPDATE STATISTICS [xxxx_DB].[dbo].[xxxx_xxx] [_WA_Sys_00000007_49C3F6B7] [SQ... The step failed."
I suspect there is more to the error, but this is all it shows me when I right-click on the job history. Therefore, I updated the job step on the Advanced tab to log to a txt file. Am I on the right track, or is there another way to see the error somewhere else?
I looked at the logs but they didn't show anything.
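Logging the step output to a text file is the usual fix, since job history truncates messages. If the server is SQL 2005 or later, another hedged option is to wrap the statement so the full error text is printed into that output file:

BEGIN TRY
    UPDATE STATISTICS [xxxx_DB].[dbo].[xxxx_xxx] [_WA_Sys_00000007_49C3F6B7];
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();  -- full text lands in the job step output file
END CATCH;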
Which works fine, but now I need to calculate the total duration of a request based on the duration of the tasks completed in the request, grouped by Req_ID. I would like to use the CASE statement I have to determine the SLA_Mins for each task and add them together to get the total request SLA_Mins.
Below is the create table schema and data
GO
/****** Object: Table [dbo].[MidrangeOtherSourceControl] Script Date: 06/03/2015 18:13:15 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[MidrangeOtherSourceControl](
    [Req_ID] [float] NULL,
    [Service_Name] [nvarchar](255) NULL,
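A hedged sketch of the rollup (the CASE branch is a placeholder for whatever per-task SLA logic is already in use): aggregate the per-task minutes to one row per request with SUM over a GROUP BY.

SELECT Req_ID,
       SUM(CASE WHEN Service_Name = 'SomeTask' THEN 60 ELSE 30 END) AS Total_SLA_Mins
FROM dbo.MidrangeOtherSourceControl
GROUP BY Req_ID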
We face a slow performance issue, with the same query taking a long time to execute, after we apply an index rebuild and reorganize. But after executing the query or procedure 2-3 times, performance gets faster. I have the following questions:
1. Do we need to update stats after we rebuild and reorganize indexes?
2. Will every query and stored procedure be slow for the first 1-2 executions after we rebuild and reorganize indexes?
We have recently migrated quite a few databases, around 20, from SQL 2000 and 2005 to SQL Server 2008 R2.
I am using Ola's script for index maintenance for the databases with compatibility level above 80, as I hear it only supports those.
Hence we separated the work into two jobs; for the databases still at compatibility level 80, we run a job with the query below on each database:
USE ABC
GO
EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?', ' ', 80)"
GO
EXEC sp_updatestats
GO
I am not sure if this is the only way for those databases, because we are seeing the database suffer somewhat from the above query (the log file seems to fill very rapidly).
But the above is not the case with the databases at compatibility 90 and above.
Table 1 - dbo.DimHC_Team - stores TeamID and TeamName.
Table 2 - dbo.DimHC_Team_Config - has TeamID, TeamDate (not really used anywhere), TeamHeadCount (max strength of a team).
Table 3 - dbo.DimHC_Team_Agg - has TeamID, Team_Date (1st of every month, so that when I look at the data for past months I use these dates), Permanent Employees, Contracted Employees, Open Positions, Total, HeadCount.
Now, I have a stored procedure which goes like this -
IF (NOT EXISTS (SELECT * FROM dbo.DimHC_Team_Agg WHERE Team_Date = @datevar2))
BEGIN
    INSERT INTO dbo.DimHC_Team_Agg (TeamID, Team_Date)
    SELECT TeamID, @datevar2 FROM dbo.DimHC_Team_Config
END
UPDATE dbo.DimHC_Team_Agg
SET Team_Head_Count = dbo.DimHC_Team_Config.Team_Head_Count
FROM dbo.DimHC_Team_Config, dbo.DimHC_Team_Agg
WHERE dbo.DimHC_Team_Agg.Team_Date = @datevar2
  AND dbo.DimHC_Team_Config.TeamID = dbo.DimHC_Team_Agg.TeamID
UPDATE dbo.DimHC_Team_Agg
SET Perm_Posn = (SELECT COUNT(EmpType) FROM dbo.uview_DimHC_Emp_view1
                 WHERE EmpStartDate <= @datevar4 AND EmpType = 'Permanent'
                   AND (EmpEndDate = '01/01/1753' OR EmpEndDate > @datevar3)
                   AND TeamID = '1'),
    Contract_Posn = (SELECT COUNT(EmpType) FROM dbo.uview_DimHC_Emp_view1
                     WHERE EmpStartDate <= @datevar4 AND EmpType = 'Contractor'
                       AND (EmpEndDate = '01/01/1753' OR EmpEndDate > @datevar3)
                       AND TeamID = '1')
......
What this block is doing is converting today's date to the first of this month, then going one month and two months ahead. Then it checks whether an employee's start date is < the 10th of the next month and the end date is > the 31st of the next month, and if both conditions are met it increments that team's employee count by 1.
This block is then repeated for the 13 teams that I have right now, so that I can run it every month and it will update.
Now, I have two problems. The first is that I had to write this block for 13 teams, which isn't really a smart thing to do. Let's say I add a team tomorrow: then I will need to modify this stored procedure, and I may not be here tomorrow, so the person who uses it should be able to do it easily. So I wanted to know if I can loop through this with the max TeamID coming from some external source, rather than fixing it at 20 or something.
Second, let's say at the beginning of this month I add a new team. The way I will do it is to add it to the dbo.DimHC_Team and dbo.DimHC_Team_Config tables. Now, if I run this stored procedure, then
IF (NOT EXISTS (SELECT * FROM dbo.DimHC_Team_Agg WHERE Team_Date = @datevar2))
is not going to pass, as there will already be entries for some teams. So this new team won't be inserted into the dbo.DimHC_Team_Agg table, and I will have to do it manually.
Can somebody help me with how to get past these two problems?
Sorry for being really long but I thought it best to give all the details.
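A hedged sketch of how both problems could be addressed with set-based statements, reusing the object names above; the NOT EXISTS check moves from once-per-month to once-per-team, and the per-team counting blocks collapse into one grouped update:

-- 1) Insert only the teams missing for this month (a newly added team gets picked up automatically)
INSERT INTO dbo.DimHC_Team_Agg (TeamID, Team_Date)
SELECT c.TeamID, @datevar2
FROM dbo.DimHC_Team_Config AS c
WHERE NOT EXISTS (SELECT 1 FROM dbo.DimHC_Team_Agg AS a
                  WHERE a.TeamID = c.TeamID AND a.Team_Date = @datevar2)

-- 2) One pass for all teams instead of one block per TeamID
UPDATE agg
SET Perm_Posn     = ISNULL(v.PermCnt, 0),
    Contract_Posn = ISNULL(v.ContrCnt, 0)
FROM dbo.DimHC_Team_Agg AS agg
LEFT JOIN (SELECT TeamID,
                  SUM(CASE WHEN EmpType = 'Permanent'  THEN 1 ELSE 0 END) AS PermCnt,
                  SUM(CASE WHEN EmpType = 'Contractor' THEN 1 ELSE 0 END) AS ContrCnt
           FROM dbo.uview_DimHC_Emp_view1
           WHERE EmpStartDate <= @datevar4
             AND (EmpEndDate = '01/01/1753' OR EmpEndDate > @datevar3)
           GROUP BY TeamID) AS v ON v.TeamID = agg.TeamID
WHERE agg.Team_Date = @datevar2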
I'm using SQL 7. There is a setting under DB properties called "Auto update statistics"; what kind of statistics does this refer to, and how can these stats be accessed?
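These are the column statistics the optimizer builds and refreshes on its own. A hedged sketch of how to look at them (the _WA_Sys name is a placeholder, since the actual names are auto-generated): they appear as rows in sysindexes and can be inspected with DBCC SHOW_STATISTICS.

SELECT name FROM sysindexes WHERE id = OBJECT_ID('dbo.MyTable') AND name LIKE '[_]WA%'  -- placeholder table
DBCC SHOW_STATISTICS ('dbo.MyTable', '_WA_Sys_00000002_0EA330E9')  -- statistic name is a placeholder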
I'm changing data storage for an asp.net project from MS Access to Sql Server. I've got the web site working, but I need to update the sql server tables with data from our Oracle db daily. What is the "best" way to do that?
I've read about DTS, but have never done anything like that. Would it be worth the time and effort to study? (So far I've created a package, with the import wizard, that doesn't work & I don't have the authority to delete :-)
I know I could create a dataset with my Oracle data and use that to update SQL Server. But is there a way to schedule an aspx page to run automatically? Would this affect performance? The SQL Server db isn't very big (30-40,000 records), but the Oracle db is, and I need to do quite a bit of manipulation to the data.
This is new to me and I don't know what I should be searching for to find help. And if there is a more appropriate place to post this question, please let me know.
We are using SQL Server 2005. Auto update statistics and auto create statistics are set to ON for a database with a very heavy workload. When I checked the individual statistics, the last-updated date is still an old value (a few months ago).
Is it necessary to manually update the statistics for this database? Or can we rely on "auto update statistics" itself?
Usually, at what frequency should a manual UPDATE STATISTICS be run on a production system with heavy transactions?
I have got 4 MS Access database files with 3 tables each, meaning 12 tables in total, which get updated with new data every evening by an external application. That is, new data gets appended to all 12 of these tables.
I want to have the exact same 4 databases, with 3 tables each (12 tables in total), but within MS SQL Server, and then update all 12 of these tables every evening with the corresponding updates from the respective tables in the MS Access databases.
What are the various options to get this kind of work done in SQL Server? I do not want to manually update all 12 tables every evening; hopefully there is some easier method to do this automatically.
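One hedged option, assuming paths and names: a nightly SQL Server Agent (or DTS) job that appends from the .mdb files via the Jet provider; filtering out rows that were already copied is still up to the job.

INSERT INTO dbo.Table1
SELECT * FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
    'C:\Data\Database1.mdb'; 'Admin'; '', Table1)  -- file path and table name are placeholders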
How can I set a column in a table to auto-update the date and time every time something in that row is updated, or when the row is first added? Thanks ahead for the help, Jason
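A hedged sketch, assuming a table with a key column: a DEFAULT covers the insert case, and an AFTER UPDATE trigger covers modifications (all names are placeholders).

ALTER TABLE dbo.MyTable ADD LastModified DATETIME NOT NULL DEFAULT (GETDATE())
GO
CREATE TRIGGER trg_MyTable_Touch ON dbo.MyTable
AFTER UPDATE
AS
    UPDATE t
    SET LastModified = GETDATE()
    FROM dbo.MyTable AS t
    INNER JOIN inserted AS i ON i.MyTableID = t.MyTableID  -- assumes MyTableID is the key
GO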