Auto Stats
Apr 10, 2008
Hi all,
I have a dtsx (SSIS) package to manually "clone" a SQL Server database to another.
How do I copy all the statistics from one database to another? I have a problem with "auto stats".
When I try to DROP STATISTICS for the auto-created stats I get this error:
No se puede DROP el índice 'dbo.ACTIVIDAD_PROVEEDOR.PK_ACTIVIDAD_PROVEEDOR'. No es una colección de estadísticas.
(Cannot DROP the index 'dbo.ACTIVIDAD_PROVEEDOR.PK_ACTIVIDAD_PROVEEDOR'. It is not a statistics collection.)
What can I do?
-- Get stats list
SELECT
    '[' + SCHEMA_NAME(tbl.schema_id) + '].[' + tbl.name + ']' AS [Table_Name_With_Schema],
    '[' + st.name + ']' AS [Name],
    SCHEMA_NAME(tbl.schema_id) + '.' + tbl.name + '.' + st.name AS [Estadistica]
FROM
    sys.tables AS tbl
    INNER JOIN sys.stats AS st ON st.object_id = tbl.object_id
ORDER BY
    [Table_Name_With_Schema] ASC, [Name] ASC
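The list above also returns the statistics that back indexes (such as PK_ACTIVIDAD_PROVEEDOR), and those cannot be removed with DROP STATISTICS, which is what the error is complaining about. A possible refinement, assuming the goal is to script only the auto-created column statistics (untested sketch):
-- Sketch: only auto-created column statistics (e.g. _WA_Sys_...),
-- excluding statistics that belong to an index
SELECT
    '[' + SCHEMA_NAME(tbl.schema_id) + '].[' + tbl.name + ']' AS [Table_Name_With_Schema],
    '[' + st.name + ']' AS [Name]
FROM sys.tables AS tbl
INNER JOIN sys.stats AS st ON st.object_id = tbl.object_id
WHERE st.auto_created = 1               -- created by AUTO_CREATE_STATISTICS
  AND NOT EXISTS (SELECT 1 FROM sys.indexes AS i
                  WHERE i.object_id = st.object_id AND i.name = st.name)
ORDER BY [Table_Name_With_Schema], [Name]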
Thanks in advance, any help will be appreciated, regards, greetings
View 3 Replies
View Related
Mar 10, 2003
Does anyone know how to tell how long it took for an auto update statistics to run? I looked under DBCC Show_Statistics and it shows the time the stats were last updated, but not how long it took to update them. Thanks.
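DBCC SHOW_STATISTICS and STATS_DATE() only expose when the statistics were last updated, not how long the update took; a Profiler trace (if your version exposes an Auto Stats event class) is one way to capture the duration. A quick sketch for the last-updated time per index or statistics object, where dbo.MyTable is a placeholder:
-- Sketch: last time statistics were updated for each index/statistics object on a table
SELECT i.name AS index_or_stats_name,
       STATS_DATE(i.id, i.indid) AS stats_last_updated
FROM sysindexes AS i
WHERE i.id = OBJECT_ID('dbo.MyTable')   -- placeholder table name
  AND i.indid BETWEEN 1 AND 254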
View 2 Replies
View Related
Dec 29, 2007
Auto stats are not working.
I have both Auto Update Statistics and Auto Update Statistics Asynchronously set to True
Created a little test table.
USE [TEST]
GO
/****** Object: Table [dbo].[CUSTOMER] Script Date: 12/29/2007 10:42:49 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[CUSTOMER](
[Customer_Id] [nchar](10) NOT NULL,
[Customer_Name] [nvarchar](1000) NULL,
[Customer_Address] [nvarchar](1000) NULL,
[Customer_Address1] [nchar](1000) NULL,
CONSTRAINT [PK_CUSTOMER] PRIMARY KEY CLUSTERED
(
[Customer_Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY]
-- Created an insert loop to add the first 1000 rows
DECLARE @COUNT INT
SET @COUNT = 1
WHILE @COUNT <= 1000
begin
insert into CUSTOMER (CUSTOMER_ID, CUSTOMER_NAME)
VALUES (@COUNT, '12345678901234567890')
SET @COUNT = @COUNT + 1
END
I then look at the table's statistics; they are empty, so I fire UPDATE STATISTICS and see 1000 rows in there.
I then run the insert script again:
DECLARE @COUNT INT
SET @COUNT = 1001
WHILE @COUNT <= 2000
begin
insert into CUSTOMER (CUSTOMER_ID, CUSTOMER_NAME)
VALUES (@COUNT, '12345678901234567890')
SET @COUNT = @COUNT + 1
END
Looking at the statistics again, they do not show 2000 rows.
If I do SELECT * FROM CUSTOMER WHERE Customer_Id = '2000' and then check the statistics, it works.
I was under the impression that the statistics update is fired when you do an insert, update or delete.
The sys.sysindexes rowmodctr shows the 1000 modifications.
I checked the conditions under which SQL fires the update: if the number of rows in the table is > 6 and < 500, statistics are updated after 500 modifications; if the table has more than 500 rows, the auto update is done after 500 + 20% of the rows have been modified.
So both conditions are met.
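One point that may explain this (the sketch below assumes SQL Server 2005): the auto update is triggered lazily, only when the optimizer next needs the statistics to compile a query against that column, which is why the SELECT against Customer_Id makes it happen. The pending modification counter can be watched like this:
-- Sketch: modifications recorded since the statistics were last updated
SELECT o.name AS table_name,
       i.name AS index_or_stats_name,
       i.rowmodctr                       -- modifications since the last stats update
FROM sys.sysindexes AS i
JOIN sys.objects AS o ON o.object_id = i.id
WHERE o.name = 'CUSTOMER'
  AND i.indid BETWEEN 1 AND 254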
Does anyone have any other suggestions about the auto stats?
View 7 Replies
View Related
Mar 10, 2003
Recently a production server suffered a critical blocking period and I wanted to solicit some input. It seems that a stored procedure was in the middle of recompiling when an auto update statistics started. This caused blocking for about an hour on the single object (the stored procedure) that was originally called. The table that the update occurred on, and that the stored procedure reads from, is quite large: 2 million rows and about 140 columns wide, and the table alone takes up almost 4GB of space according to sp_spaceused. Some info from sysprocesses is below. I have some questions.
1. Can the update statistics for a '_WA%' stats cause blocking on a table?
2. Does an update stats on an index survive a restart of SQL Server? We tried restarting, but the blocking did not end.
3. If the stored procedure is running under a compile, can the server automatically start an update stats and cause the stored procedure to wait?
4. Can the server automatically start an update stats on more than one column's stats at a time, causing one to be blocked by the other?
5. We had never seen this issue before going to SQL2K clustering. Is this something specific to SQL2K and not SQL7?
Thanks for your input.
John Lee
This is the lock info for the blocking processes.
spid dbid ObjId IndId Type Resource Mode Status name
------ ------ ----------- ------ ---- ---------------- -------- ------ -------------------------
142 7 2 1 KEY (6f00035ef42b) S GRANT sysindexes
142 7 2 1 KEY (6f00035ef42b) S GRANT sysindexes
142 7 421576540 0 TAB Sch-S GRANT tJob
142 7 1141579105 0 TAB Sch-S GRANT tPatient_info
142 7 1141579105 0 TAB [UPD-STATS] Sch-M GRANT tPatient_info
142 7 1659921035 0 TAB [COMPILE] X GRANT iDBGetPatInfoRecord
142 7 1659921035 0 TAB Sch-S GRANT iDBGetPatInfoRecord
These are the processes that are being blocked:
spid
------
137
140
Below this is a snapshot of all the SQL processes on the server being blocked.
Save the report and send to the whole database group.
spid kpid blocked waittype waittime lastwaittype waitresource
------ ------ ------- -------- ----------- -------------------------------- -----------------------------
140 4292 142 0x0005 68609 LCK_M_X TAB: 7:1659921035 [[COMPILE]]
137 2576 140 0x0005 64671 LCK_M_X TAB: 7:1659921035 [[COMPILE]]
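Regarding question 1: the [UPD-STATS] Sch-M lock on tPatient_info in the output above looks like the schema-modification lock an auto update of a '_WA%' statistic takes, and it can block (and be blocked by) a compile. If the auto update on that table is the trigger, SQL Server 2000 lets you turn it off per table with sp_autostats and refresh the stats manually in a quiet window (sketch, using the table name from the lock output):
-- Sketch: inspect, then disable, automatic statistics updates for one table
EXEC sp_autostats 'tPatient_info'           -- show the current ON/OFF setting per statistic
EXEC sp_autostats 'tPatient_info', 'OFF'    -- stop auto update stats on this table
-- UPDATE STATISTICS tPatient_info          -- then refresh manually during a maintenance window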
View 1 Replies
View Related
May 31, 2007
With SQL2005 SP2, we are seeing that when auto stats run on one or more indexes of a large table (1.5M rows), then immediately the stored proc using that table starts acting as if the query plan is no longer any good. This causes a drastic slowdown in response time and a corresponding increase of table reads to get the data. E.g, the next execution of the procedure after the auto stats kick in goes from 355 reads to 755000 reads (as depicted by Profiler). Generally, there are about 25 people using the DB at any one time. They connect through a mid-tier VB component.
I tried adding WITH RECOMPILE to the stored proc in question, but that caused almost all executions to run at the higher number. I thought that the WITH RECOMPILE hint would create a new query plan for each execution of the procedure and that plan would be the latest and greatest. Perhaps it did, but most users got stuck with the higher number of reads anyway. After taking the hint out, everyone went back to getting the 335 number and quick response times.
What we are wrestling with is that when those auto stats hit, it really messes up everyone until we manually recompile the procedure. Daily we delete all records in the table that are over 45 days old, so the table stays pretty much the same size. We also set the recompile flag to cause a new plan to be generated that will reflect the smaller amount of data. Should we also run a stats update before recompiling the procedure? Profiler has been very helpful in capturing what is going on, so I think I have a good handle on that. However, I don't understand why WITH RECOMPILE produced a messed up plan for everyone. The compile itself seems to take only 1 ms when done from the query screen.
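A pattern that is sometimes used for this kind of nightly purge (sketch; dbo.BigTable is a placeholder for the 1.5M-row table): refresh its statistics explicitly right after the delete and then mark the plans that reference it for recompilation, so the next executions compile against fresh statistics instead of waiting for auto stats to fire mid-day:
-- Sketch: run after the nightly delete of rows older than 45 days
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN   -- placeholder table name
EXEC sp_recompile 'dbo.BigTable'               -- flags cached plans that reference the table for recompile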
View 11 Replies
View Related
Feb 16, 2014
Since upgrading from SQL Server Management Studio 2008 R2, I've noticed that it no longer autosaves queries that have not been manually saved first. If a file has been manually saved the autorecover files end up in the following directory:
%appdata%\Microsoft\SQL Server Management Studio\11.0\AutoRecover\Dat\Solution1
However, I have ended up in the situation where I have unsaved queries when my computer has crashed and have not been able to recover them.
I have also found references to .sql files stored in temp files in the following directory, but the files here seem to be very haphazardly caught:
%userprofile%\AppData\Local\Temp
View 2 Replies
View Related
Jan 23, 2004
I have an MS SQL Server table with a Job Number field. I need this field to start at a certain number and then auto-increment from there. Is there a way to do this programmatically or within MSDE?
Thanks, Justin.
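An IDENTITY column with a custom seed is the usual way to do this; a sketch (the table name and starting number are placeholders):
-- Sketch: Job Number starts at 5000 and increments by 1
CREATE TABLE dbo.Jobs
(
    JobNumber INT IDENTITY(5000, 1) NOT NULL PRIMARY KEY,
    JobName   VARCHAR(100) NOT NULL
)
-- For an existing IDENTITY column, DBCC CHECKIDENT can change the seed:
-- DBCC CHECKIDENT ('dbo.Jobs', RESEED, 4999)   -- the next inserted row gets 5000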
View 3 Replies
View Related
Jan 5, 2004
Hi all,
I would like to create a stored procedure that will get the statistics of all the tables in my database.
e.g.
Get the record count of each table in the database
Please help me.
Thanks in advance.
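One sketch of how this could look on SQL Server 2000, reading the row counts already tracked in sysindexes (these counters can be slightly out of date; the procedure name is a placeholder):
-- Sketch: approximate row count for every user table
CREATE PROCEDURE dbo.usp_TableRowCounts
AS
BEGIN
    SELECT o.name AS table_name,
           i.rows AS row_count          -- maintained by SQL Server, may lag slightly
    FROM sysobjects AS o
    JOIN sysindexes AS i ON i.id = o.id
    WHERE o.xtype = 'U'                 -- user tables only
      AND i.indid IN (0, 1)             -- heap (0) or clustered index (1) row counter
    ORDER BY o.name
END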
View 6 Replies
View Related
Oct 19, 2001
Does anyone know of a tried and tested method for collating stats from SQL Server databases? I have a number of databases hosted on a single server, servicing a number of applications. I need to find out the following:
CPU usage of each application on the server
Memory usage of each application
etc
Thanks
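Per-application CPU and memory figures are only approximate in SQL Server, but the connection-level counters in sysprocesses can be rolled up by application name as a starting point (sketch; cpu is cumulative milliseconds per connection, memusage is in pages, and the spid filter is an assumption):
-- Sketch: rough CPU and memory usage grouped by connecting application
SELECT program_name,
       COUNT(*)      AS connections,
       SUM(cpu)      AS total_cpu_ms,
       SUM(memusage) AS total_mem_pages
FROM master..sysprocesses
WHERE spid > 50                 -- assumption: skip system spids
GROUP BY program_name
ORDER BY total_cpu_ms DESC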
View 1 Replies
View Related
Aug 11, 2004
Is it true that Update Statistics perform record commits if a query is outstanding?
View 4 Replies
View Related
Aug 8, 2007
Has anyone noticed a performance improvement during trading hours when they replaced sp_updatestats with UPDATE STATISTICS FULLSCAN in their nightly maintenance?
Or is it negligible?
View 6 Replies
View Related
Nov 10, 2007
Hi pals,
Basically i am from Oracle background. I need some help.
In Oracle we gather the statistics of a table like this:
SQL> analyze table <tname> compute statistics for all indexes;
or
SQL> EXEC DBMS_STATS.gather_schema_stats('SCOTT')
I want to gather statistics for a particular database, say "pubs", for example.
How to do that?
Any suggestions are appreciated.
Thanks.
Franky
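The closest SQL Server equivalents are UPDATE STATISTICS for a single table and sp_updatestats for the whole database; a sketch against pubs (dbo.authors is just an example table):
-- Sketch: refresh statistics for every table in the pubs database
USE pubs
GO
EXEC sp_updatestats                          -- all tables, sampled
GO
-- Or, for a single table with a full scan:
UPDATE STATISTICS dbo.authors WITH FULLSCAN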
View 1 Replies
View Related
Jul 20, 2005
Hi there... How do I extract info like the current database logged-in user, or general stats like RAM usage, etc., into a form if I use Microsoft Access 2002 for my forms application?
Thanks in advance
Rudi
View 1 Replies
View Related
Jul 20, 2005
Could someone tell me if it's possible to get hold of stats about all databases on a SQL Server. The sort of things I would like are:
Name of DB
Location of DB
Size of DB
Location of logfile
Size of DB
Owner
Users authorised to access the DB
Is there something that can do this?
Thanks
Derrick
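sp_helpdb covers most of that list in a single call, and the per-database system tables fill in the rest; a sketch for the SQL Server 2000 era (Northwind is a placeholder database name):
-- Sketch: name, size, owner, status for every database
EXEC sp_helpdb

-- File locations and sizes for one database:
USE Northwind                       -- placeholder
SELECT name, filename, size * 8 / 1024 AS size_mb
FROM sysfiles                       -- data and log files of the current database

-- Users authorised in the current database:
EXEC sp_helpuser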
View 2 Replies
View Related
Feb 1, 2008
Hello all-
Is it possible to reset the values of DMV stats/counters without restarting the SQL service? I'm looking for something more than dbcc freeproccache...more along the lines of index_usage and some of the OS DMVs.
Cheers,
-Brandon Tucker
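As far as I know only some of those counters can be cleared without a restart: the wait and latch statistics have an explicit clear command, while sys.dm_db_index_usage_stats keeps accumulating until the instance restarts (or the database goes offline or is detached). Sketch of the clearable ones:
-- Sketch: counters that can be reset without restarting the service
DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR)    -- reset cumulative wait statistics
DBCC SQLPERF('sys.dm_os_latch_stats', CLEAR)   -- reset latch statistics
DBCC FREEPROCCACHE                             -- clears the plan cache and the plan-based execution stats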
View 3 Replies
View Related
Nov 3, 2006
Hello ,
I would like a script for MSSQL 2000 for collection of utilization data for cpu and ram. I need to have the data from several servers.
This data for all servers should be stored in a database. I am looking for 10 minute increments.
How can I go about this?
Thanks
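One low-tech option on MSSQL 2000 is a SQL Agent job on each server, scheduled every 10 minutes, that snapshots a few counters from master..sysperfinfo into a central table. This is only a sketch: the table, the counter names, and the note about CPU are assumptions you would want to verify against your own sysperfinfo contents.
-- Sketch: create once in the central monitoring database
CREATE TABLE dbo.ServerUtilisation
(
    server_name  SYSNAME,
    counter_name NVARCHAR(128),
    cntr_value   BIGINT,
    collected_at DATETIME DEFAULT GETDATE()
)
GO
-- Run every 10 minutes from a SQL Agent job on each monitored server
INSERT INTO dbo.ServerUtilisation (server_name, counter_name, cntr_value)
SELECT @@SERVERNAME, RTRIM(counter_name), cntr_value
FROM master..sysperfinfo
WHERE counter_name IN ('Total Server Memory (KB)', 'Target Server Memory(KB)')
-- CPU utilisation is not exposed through sysperfinfo; it is usually captured
-- from the OS (e.g. Performance Monitor logging) instead.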
View 1 Replies
View Related
Aug 19, 2002
This is probably a simple question, but I am relatively new using SQL Queries.
I tried this which gives me half of what I need... SET STATISTICS IO ON
I also need to see the run time that it took to run my query.
Can someone help me please?
Thanks!
David
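SET STATISTICS TIME ON is the companion setting; it reports parse/compile and execution times in the Messages output alongside the IO statistics (dbo.MyTable below is just a placeholder for your own query):
-- Shows CPU time and elapsed time for each statement, alongside the IO stats
SET STATISTICS IO ON
SET STATISTICS TIME ON

SELECT * FROM dbo.MyTable           -- placeholder query

SET STATISTICS TIME OFF
SET STATISTICS IO OFF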
View 2 Replies
View Related
Apr 17, 2006
SQL Server 2000 on Win2k
I'm fairly new to SQL Server and I'm just wondering if it's possible to update statistics for all indexes somehow? I'm looking at the UPDATE STATISTICS command and it doesn't seem to be possible.
The situation we have is a reporting DB that basically has all its tables truncated and remade every night by some DTS jobs that import from another data source, change the data, and build some denormalized tables etc.
Some of the large Insert operations go from taking 8 mins to taking several hours sometimes and updating the stats seems to fix the problem for a while. So I'd like to make sure the optimizer has all the latest stats for all tables.
Any other advice would be appreciated.
Cheers.
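Two common options on SQL Server 2000, as a sketch: sp_updatestats refreshes every table in the current database with sampled updates, and the (undocumented but widely used) sp_MSforeachtable helper can run a full scan per table after the nightly DTS load:
-- Sketch: refresh statistics for every table after the nightly load
EXEC sp_updatestats                                        -- sampled, whole database

-- Heavier but more thorough:
EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN'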
View 2 Replies
View Related
Oct 23, 2007
I have client tools installed on a server and I have registered our 30+ instances hosted on various servers to this one MS SQL 2005 Management Studios.
Question:
How can I use this set up to send an e-mail distribution list a nice monthly chart showing the database sizes and the memory and CPU utilization of all the registered databases?
Many thanks for your help !!
seethem
View 3 Replies
View Related
Nov 26, 2007
All,
The reindex just ran on Friday... how is it possible to have a logical scan fragmentation of 97%? Is it because the fill factor is set to 100 and pages cannot be reorganized any better with this arrangement?
Thanks
TABLE level scan performed.
- Pages Scanned................................: 15842685
- Extents Scanned..............................: 1981052
- Extent Switches..............................: 2071631
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 95.59% [1980336:2071632]
- Logical Scan Fragmentation ..................: 97.91%
- Extent Scan Fragmentation ...................: 0.21%
- Avg. Bytes Free per Page.....................: 49.2
- Avg. Page Density (full).....................: 99.39%
View 2 Replies
View Related
Jul 20, 2005
Hi chaps. Just been having my head messed with... I was running a trace capturing all errors and SQL. Had a bucket of error 208s (invalid object name). Found the SQL that caused it - an SP. Ran the SP by hand, no messages came up - error 208 logged in the trace. Couldn't work it out. Then noticed stats missing on one column. Created the stats manually - and suddenly the 208 error stops. Wtf? Is this predicted/expected behaviour? Just me being a noob? Thought I'd just share that. ta ;) SQL2K, SP3a, W2K server.
View 2 Replies
View Related
Jan 31, 2008
Does anyone have any reports built that show report usage based off the ExecutionLog table, or a way to map the "ReportID" back to a human-readable report name?
I find it hard to believe others have not wanted to see how many reports were ran yesterday, what reports are not being used anymore, and which ones we may need to cache because they are over used throughout the day.
Thanks for any help!
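In the ReportServer catalog database, ExecutionLog.ReportID joins to Catalog.ItemID, which carries the report name; a sketch (column names as in the SSRS 2005 ReportServer database, worth verifying on your version):
-- Sketch: executions per report, most used first
USE ReportServer
SELECT c.Name           AS report_name,
       COUNT(*)         AS executions,
       MAX(e.TimeStart) AS last_run
FROM dbo.ExecutionLog AS e
JOIN dbo.Catalog      AS c ON c.ItemID = e.ReportID
GROUP BY c.Name
ORDER BY executions DESC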
View 3 Replies
View Related
Aug 14, 2007
When I look at the list of publications I can see the status of the last time the replication ran. Where in the database is that information being pulled from?
View 1 Replies
View Related
Mar 1, 1999
We recently upgraded to Version 7.0 and the Enterprise Manager on the client is not showing the database size or the information in regards to the last backups.
Has anyone else experienced this that can offer a solution?
View 2 Replies
View Related
Feb 19, 2004
Is an index based on a smallint (16 bit) really faster than an index based on an int (32 bit)?
If so, how much...
Four tables
Table A: ID smallint (PK)
Text varchar(50)
Table B:ForeginID smallint (indexed - non unique)
Text varchar(50)
rowID int (PK)
Table C: ID int (PK)
Text varchar(50)
Table D:ForeginID int (indexed - non unique)
Text varchar(50)
rowID int (PK)
Table A and C contain identical data
Table B and D contain identical data
(Tables A and B were filled and then copied to Tables C and D)
Tables A/C are loaded with 64,000 records (-32,000 to 32,000)
Tables B/D are loaded with 6,400,000 records ForeginID loaded randomly with values between -32,000 and 32,000
The purpose of this test is to find out if identical queries joined on a smallint are actually faster than int based.
I ran 3 queries on each set:
- Full select
- Select on ID/Foregin ID
- Select on Table2 RowID joined to table 1
Here are the queries:
#1. Full select (smallint) - grouped to limit result set
-----------------------------------------------------------
SELECT intAID, COUNT(intBID)
FROM TESTintA
INNER JOIN TESTintB ON intAID = intBID
GROUP BY intAID
ORDER BY COUNT(intBID) desc
#2. Select on ID/Foregin ID (smallint)
------------------------------------------
SELECT intAID, intBID, strATXT, strBTXT
FROM TESTintA
INNER JOIN TESTintB ON intAID = intBID
WHERE intAID = 29120
#3. Select on Table2 RowID joined to table 1 (smallint)
------------------------------------------
SELECT intAID, intBID, strATXT, strBTXT
FROM TESTintA
INNER JOIN TESTintB ON intAID = intBID
WHERE intPK = 1050
#4. Full select (int) - grouped to limit result set
------------------------------------------
SELECT lngCID, COUNT(lngDID)
FROM TESTlngC
INNER JOIN TESTlngD ON lngCID = lngDID
GROUP BY lngCID
ORDER BY COUNT(lngDID) desc
#5. Select on ID/Foregin ID (int)
------------------------------------------
SELECT lngCID, lngDID, strTXTC, strTXTD
FROM TESTlngC
INNER JOIN TESTlngD ON lngCID = lngDID
WHERE lngCID = 29120
#6. Select on Table2 RowID joined to table 1 (int)
------------------------------------------
SELECT lngCID, lngDID, strTXTC, strTXTD
FROM TESTlngC
INNER JOIN TESTlngD
ON lngCID = lngDID
WHERE intPK = 1050
Here are the results: (run multiple times to verify)
#1. Full select (smallint) - grouped to limit result set
-----------------------------------------------------------
(8 seconds) - before computing statistics on table
(13 seconds) - after computing statistics on table
#2. Select on ID/Foregin ID (smallint)
------------------------------------------
(0 seconds)
#3. Select on Table2 RowID joined to table 1 (smallint)
------------------------------------------
(0 seconds)
#4. Full select (int) - grouped to limit result set
------------------------------------------
(8 seconds) - before computing statistics on table
(7 seconds) - after computing statistics on table
#5. Select on ID/Foregin ID (int)
------------------------------------------
(0 seconds)
#6. Select on Table2 RowID joined to table 1 (int)
------------------------------------------
(0 seconds)
Conclusion: Not only is there a negligible difference in select performance, generating stats on the smallint actually makes it slower.
(perhaps there is some kind of conversion going on here behind the scenes?)
View 3 Replies
View Related
Mar 12, 2015
I am having the following errors with the script below
Msg 102, Level 15, State 1, Line 44
Incorrect syntax near '?'.
Msg 319, Level 15, State 1, Line 47
Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.
The script also does not seem to loop around each DB.
DECLARE @command NVARCHAR(MAX)
CREATE TABLE #worktable
(
[Database] SYSNAME
,SchemaName SYSNAME
,ObjectName SYSNAME
,StatsName SYSNAME
,ColName SYSNAME
[code]....
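The script is truncated above, but both errors are consistent with an sp_MSforeachdb problem: the '?' database placeholder only has meaning inside the single-quoted command string passed to sp_MSforeachdb, and statements ahead of a WITH clause inside that string need terminating semicolons. A minimal shape that loops every database and fills the #worktable from the script above (sketch, assuming sp_MSforeachdb is the intended loop):
-- Sketch: the '?' placeholder must live inside the quoted command string
DECLARE @command NVARCHAR(MAX);
SET @command = N'
USE [?];
INSERT INTO #worktable ([Database], SchemaName, ObjectName, StatsName, ColName)
SELECT DB_NAME(), SCHEMA_NAME(o.schema_id), o.name, s.name, c.name
FROM sys.stats AS s
JOIN sys.objects AS o ON o.object_id = s.object_id
JOIN sys.stats_columns AS sc ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
JOIN sys.columns AS c ON c.object_id = sc.object_id AND c.column_id = sc.column_id
WHERE o.is_ms_shipped = 0;';

EXEC sp_MSforeachdb @command;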
View 6 Replies
View Related
Nov 6, 2007
Recently we moved a few of our databases from SQL 2000 to SQL 2005 (SP2) using backup and restore. After the restore I did a reindex and updated stats on the databases. Since then we have observed performance issues on the SQL 2005 databases, but the problem vanishes the moment we run sp_updatestats. Is this a problem with SQL 2005, that we have to run sp_updatestats two or three times a day? In SQL 2000 we ran it only once a week and still never had any performance issues. Is there any config change we need to do to fix this problem in SQL 2005?
Thanks !
View 11 Replies
View Related
Sep 26, 2005
(Assuming SQL Server 2000, auto create statistics on, auto update statistics on.)
Does DBCC DBREINDEX(<tablename>) update statistics? If yes, are the statistics equivalent to those that would be produced by:
UPDATE STATISTICS <tablename> WITH FULLSCAN
View 1 Replies
View Related
Jan 6, 2007
Is there a way to determine how often a specific report has been viewed by a user? I have 30-40 reports set up (rendering from snapshot), and I'd like to determine which reports are the most frequented.
View 1 Replies
View Related
Aug 4, 2015
For one of our SQL Server 2005 Enterprise Edition 64-bit SP4 instances, which has transactional replication set up and is used for heavy reporting, I was trying to track down the performance of slow-running queries that basically run, get suspended, and are most often seen waiting. So I tried to analyse the wait stats and came up with figures where ASYNC_NETWORK_IO dominated over a collection of two weeks of data.
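For reference, this is the kind of query typically used to rank waits over a collection window (sketch); ASYNC_NETWORK_IO at the top usually points at clients consuming result sets slowly (or very large result sets) rather than at the server itself:
-- Sketch: cumulative waits since the counters were last cleared or the service started
SELECT TOP 10
       wait_type,
       wait_time_ms,
       wait_time_ms - signal_wait_time_ms AS resource_wait_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC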
View 15 Replies
View Related
May 13, 2007
I have installed the reports for the performance dashboard and really like it!
However, I'd like to be able to clear the stats in order to run specific procedures etc and see the most inefficient parts in the specific procedure. The documentation says like this about it:
The lifetime of the DMV data depends on the lifetime of the plan in cache. You can determine how long the plan has been cached, and thus the time frame over which these resources have been consumed, by looking at the Plan Cached column
What Plan Cached column? Can someone explain to me how to clear the stats?
Regards Andreas
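Since the dashboard reports read from the plan-cache DMVs, flushing the plan cache effectively resets those numbers (a sketch; on a busy server this forces recompiles of everything, so it is normally only done on a test system):
-- Sketch: clears all cached plans, which resets the plan-based dashboard statistics
DBCC FREEPROCCACHE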
View 3 Replies
View Related