SQL 2012 :: Interpreting Query Statistics

Jun 5, 2014

I'm designing a new database which will be the back-end to a heavily-used web-based application (all these terms are relative - I guess the use won't be that heavy in the grand scheme of things, I'm only talking 100 users or so at the very most). Data from the old application database will be migrated to this one, and the old database is around 7GB in size after 5 years of use.

I have two different ways of linking some tables in mind, one slightly more complex than the other but with potential benefits over the simpler method. However, I'm concerned that I might be 'over-cooking' the design and that performance would suffer as a result. So I've created the two different versions of the database (the part of it I'm concerned with, anyway), one for each of the solutions I have in mind, migrated the data into the relevant tables and carried out some queries on the data to collect some statistics.

The problem is that, whilst I can see that the more complex method is more expensive, as expected, I don't really understand whether the difference is significant. Since I don't know what the numbers in the Client Statistics window actually mean (there are no units! I'm guessing times are in milliseconds?), or how much real-world impact the difference will have, I'm finding it hard to interpret my statistics and come to a decision.

Querying the entirety of my tables to return ~20,000 records listing one column from each of the main tables I'm playing with, the simpler method had a Total Execution Time of 199, and the more complex a Total Execution Time of 272. Is that the statistic I should be most concerned with? Is that a difference I should be concerned about? Is the difference likely to be magnified when the database is much larger and in use, such that a difference of 73 milliseconds in this test scenario could end up being as much as a whole second in production, for example?

View 1 Replies



Interpreting Index Statistics On SQL 2005

Nov 28, 2006

I ran the DBCC SHOW_STATISTICS command for all of my indexes; I was told that high density numbers are bad, low numbers good. I have some questions about my results, though; I'm not sure how to interpret them.

Of 48 indexes, 14 have a density of 0. Does this mean that the indexes are not selective enough? Does it mean they're garbage and I should toss them?

6 have a density of NULL. They are all primary keys. I suppose this just means that they're never used because these tables are rarely queried. Would this assumption be correct?

13 have a density of 1. I have no idea what this means.

The others have densities ranging from 0.01210491 to 0.5841165. I was told that the lower this number is, the more selective and thus more useful an index is. I think 0.5841165 is too high a number. Would this be correct?

Thanks in advance.

View 14 Replies View Related

Need Help Interpreting Results Of SET STATISTICS TIME ON

Dec 14, 2007

Hi,

I used SET STATISTICS TIME ON to get execution stats for a query. I found that the CPU Time was sometimes greater than the elapsed time. How is this possible? The query does not use any parallelism since I used the query option MAXDOP 1. Is the elapsed time wait time? Is the total execution time the sum of the CPU time and elapsed time?


SQL Server Execution Times:

CPU time = 797 ms, elapsed time = 162 ms.
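
For reference, this is roughly how I'm capturing the timings (the table name here is just a placeholder):

SET STATISTICS TIME ON;

SELECT COUNT(*)
FROM dbo.SomeLargeTable   -- placeholder table
OPTION (MAXDOP 1);        -- single-threaded, as in the question

SET STATISTICS TIME OFF;
-- CPU time and elapsed time are both reported in milliseconds on the Messages tab.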

View 3 Replies View Related

SQL Server 2012 :: Interpreting JSON Data For Reporting Purpose

May 16, 2014

We have a gaming application which generates transactional data in MongoDB, which eventually sends the data to SQL Server in JSON format. This data needs to be used by a reporting tool, but visualizing it in the form of a table is proving to be difficult. One example of a column we receive is:

{responseCode:0 transactionId:null amount:200.00 message:account balance }

We need to build a sort of ETL or batch job but need to interpret this in a form which SQL Server can understand.
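
For illustration, something like this is the kind of parsing we'd need to generalise (a sketch only, using plain string functions; the key name just mirrors the sample above, and OPENJSON only exists from SQL Server 2016 onwards so it doesn't apply here):

DECLARE @payload nvarchar(max) =
    N'{responseCode:0 transactionId:null amount:200.00 message:account balance }';

DECLARE @key   nvarchar(50) = N'amount:';
DECLARE @start int = CHARINDEX(@key, @payload) + LEN(@key);   -- first character after 'amount:'
DECLARE @end   int = CHARINDEX(N' ', @payload, @start);       -- next space ends the value

-- Returns 200.00; the same pattern could feed a staging table in the ETL/batch job.
SELECT SUBSTRING(@payload, @start, @end - @start) AS amount;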

View 9 Replies View Related

SQL 2012 :: Create Statistics On Tables?

Apr 17, 2014

How do statistics work in SQL Server? How do I create them and update them on tables?
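
A minimal, hedged example of both operations (the table and column names are made up):

-- Create a single-column statistic manually, sampling the whole table:
CREATE STATISTICS st_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH FULLSCAN;

-- Refresh that one statistic later:
UPDATE STATISTICS dbo.Orders st_Orders_CustomerID WITH FULLSCAN;

-- Or refresh every statistic on the table (index and auto-created column stats):
UPDATE STATISTICS dbo.Orders;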

View 9 Replies View Related

SQL 2012 :: Missing Column Statistics In TempDB

Dec 18, 2012

I have a server which is not running optimally, so I checked the default trace. I have around 600 entries in the default trace which are all Missing Column Statistics events, and the database is tempdb. is_auto_create_stats_on and is_auto_update_stats_on are both 1 for tempdb.
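
For what it's worth, the settings were confirmed with:

SELECT name, is_auto_create_stats_on, is_auto_update_stats_on
FROM sys.databases
WHERE name = N'tempdb';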

View 2 Replies View Related

SQL Server 2012 :: Detect Update Statistics

May 20, 2014

I am working on an existing infrastructure and I do not have the liberty to change much right now. I am in a situation where the app issues the update statistics command quite often - so frequently that sometimes one run blocks another. Is there any way I can do something like this (a rough sketch follows below)?

IF (update statistics is already running)
    don't do anything
ELSE
    run update statistics

This is a temporary solution until I fix the bad inline SQL code in the app and use SPs.
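
A rough sketch of what I mean, assuming it's acceptable to simply skip the run when another session is already executing UPDATE STATISTICS (the table name is a placeholder, and the exact command text reported by sys.dm_exec_requests may vary):

IF EXISTS (SELECT 1
           FROM sys.dm_exec_requests
           WHERE command LIKE N'UPDATE STATISTIC%')    -- another UPDATE STATISTICS in flight
    PRINT 'Update statistics already running - skipping this run.';
ELSE
    UPDATE STATISTICS dbo.MyTable;                     -- placeholder table name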

View 8 Replies View Related

SQL 2012 :: Set Statistics XML Off - Blocking From Using Object Explorer

Jul 9, 2014

Seen activity like this? If so, where does it come from?

dd hh:mm:ss.mss: 00 00:18:32.210
session_id: 79
login_name: meme
wait_info: (6ms)IO_COMPLETION
CPU: 646,180
tempdb_allocations: 2,088
tempdb_current: 0
blocking_session_id: NULL
reads: 171,237,095
writes: 1,439,934
physical_reads: 637,080
used_memory: 2
status: rollback

View 4 Replies View Related

SQL 2012 :: Server Trace With Missing Column Statistics?

Jul 23, 2014

Getting events in the default trace saying missing column statistics on a column...

1. The column is the primary key column (identity)

View 2 Replies View Related

SQL 2012 :: Drop All The Auto Generated Column Statistics?

Dec 29, 2014

My question: Is it okay to drop all the auto generated column statistics? (for the following scenario)

- I am cleaning up unnecessary objects (tables, unused indexes, overlapping statistics etc.) from databases and found out there are more than 1400 auto generated column statistics on one database (let's call it A).
- Database A used to be our reporting database, but for the last several years we have been using database B for reporting. DB A has all the historical data while DB B only has valid records.
- We are updating all the column statistics with full scan nightly on database A and it is taking almost 2.5 hours to do that. Now I want to drop all the "unnecessary" statistics that were created when DB A was the reporting database and are no longer in use. There is no way that I know of to find the creation date of a column statistic, and the statistics "last update date" is of no use because of our nightly job. So I was thinking of dropping all the auto generated column statistics and letting SQL Server create them as it needs from now on.
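
If I do go ahead, something like this is what I have in mind (a sketch; it generates but does not execute the DROP commands so they can be reviewed first):

SELECT 'DROP STATISTICS '
       + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id)) + '.'
       + QUOTENAME(OBJECT_NAME(s.object_id)) + '.'
       + QUOTENAME(s.name) AS drop_command
FROM sys.stats AS s
WHERE s.auto_created = 1                                 -- only the _WA_Sys_... statistics
  AND OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1;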

View 0 Replies View Related

SQL Server 2012 :: Data Statistics For Population And Distribution

Aug 19, 2015

I have been asked to create a report for one of our clients. The report is pretty basic but I am concerned about the overheads with my planned approach. The report is at a table and field grain, to include values for:

* Min column value
* Max column value
* Number of discrete values
* Number of populated values (not NULL)

My current plan is to have a cursor over a limited view of sys.tables and sys.columns that will run a dynamic SQL query to import the results into a table that I can then output. There must be a better way of doing this, and I don't have access to any DQS services.
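
Something along these lines is what I mean by the dynamic part (a sketch only; the table and column below are placeholders that would come from the cursor):

DECLARE @table  nvarchar(260) = N'dbo.Orders';   -- placeholder, from sys.tables
DECLARE @column sysname       = N'OrderDate';    -- placeholder, from sys.columns

DECLARE @sql nvarchar(max) =
      N'SELECT ''' + @table  + N''' AS TableName, '
    + N''''        + @column + N''' AS ColumnName, '
    + N'MIN('            + QUOTENAME(@column) + N') AS MinValue, '
    + N'MAX('            + QUOTENAME(@column) + N') AS MaxValue, '
    + N'COUNT(DISTINCT ' + QUOTENAME(@column) + N') AS DistinctValues, '
    + N'COUNT('          + QUOTENAME(@column) + N') AS PopulatedValues '   -- COUNT(col) skips NULLs
    + N'FROM ' + @table + N';';

EXEC sys.sp_executesql @sql;   -- each column's row could be INSERT...EXEC'd into a results table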

View 1 Replies View Related

SQL 2012 :: Remove Auto Generated Statistics After Adding Index

Nov 21, 2014

We have implemented a very small reporting database which has a main table that started off small and has now grown to around half a million rows. Initially, there were no indexes on the table apart from a clustered index, but as the data has grown, performance has dropped and so we have added a number of indexes. This has resolved the performance issues.

Before creating the indexes SQL Server had auto created a number of statistic objects (_WA_Sys_000... etc). After creating the indexes, new statistic objects were created for the new indexes. In some cases, there are duplicate statistics (auto and index) for the same columns. Should I go through and drop the duplicate auto statistics? Will having duplicates cause issues at all?
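
As a starting point, something like this (hedged - it only compares the leading key column) should list the auto-created statistics that overlap an index on the same table, i.e. the likely duplicates:

SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS auto_created_stat,
       i.name                   AS overlapping_index
FROM sys.stats AS s
JOIN sys.stats_columns AS sc
       ON sc.object_id = s.object_id
      AND sc.stats_id  = s.stats_id
      AND sc.stats_column_id = 1
JOIN sys.index_columns AS ic
       ON ic.object_id   = s.object_id
      AND ic.column_id   = sc.column_id
      AND ic.key_ordinal = 1
JOIN sys.indexes AS i
       ON i.object_id = ic.object_id
      AND i.index_id  = ic.index_id
WHERE s.auto_created = 1;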

View 2 Replies View Related

SQL Server 2012 :: Finding What Table A User-created Statistics Is Located?

Feb 21, 2014

I can easily find user-created stats in a database:

SELECT * FROM DB.sys.stats WHERE user_created = 1

But how do I determine which tables those stats are on? With over 6000 tables I don't feel like looking through them all.
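
One way (a small sketch) might be to resolve the object_id in the same query:

SELECT OBJECT_SCHEMA_NAME(s.object_id) AS schema_name,
       OBJECT_NAME(s.object_id)        AS table_name,
       s.name                          AS statistic_name
FROM sys.stats AS s
WHERE s.user_created = 1;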

View 2 Replies View Related

SQL 2012 :: Sleeping Queries Blocking Rebuild Index And Update Statistics Job In AlwaysOn

Feb 3, 2015

At one of our client sites we have configured AlwaysOn in synchronous mode. We also have a scheduled rebuild index and update statistics job which runs at night every alternate day. The issue is that there are more than 100 sleeping queries blocking the update statistics job.

I have to stop the update statistics job manually once I get to the office.

Once, I killed the blocking sleeping query, but then another sleeping query blocked it, and so on.

View 4 Replies View Related

Auto Created Statistics And Missing Statistics

Jul 20, 2005

Hello group.

I have an issue which has bothered me for a while now: I'm wondering why the column statistics, which SQL Server wants me to create if I turn off auto-created statistics, are so important to the optimizer?

Example: from Northwind (with auto create stats off), I do the following:

SELECT * FROM Customers WHERE Country = 'Sweden'

My query plan shows a clustered index scan, which is expected - no index exists for Country. BUT, the query plan also shows that the optimizer is missing a statistic on Country, which tells me that the optimizer would benefit from knowing this.

I cannot see why? (and I've been trying for a while now).

If I create the missing statistics, nothing happens in the query plan (and why should it?). I could understand it if the optimizer suggested an index on Country - this would make sense, but if creating the missing index, query analyzer creates the statistics with an empty index, which seems to me to be less than usable.

I've been thinking long and hard about this, but haven't been able to reach a conclusion :) It has some relevance to my work, because allowing the optimizer to create missing statistics limits my options for designing indexes (e.g. covering) for some rather wide tables, so I'm thinking why not turn it off altogether. But I would like to know the consequences - hope somebody has already delved into this and knows a good explanation.

Rgds
Jesper

View 5 Replies View Related

Unit Of Time-statistics In Client Statistics

Aug 1, 2006

What is the unit of the numbers you get in the Time Statistics part when running a query in Microsoft SQL Server Management Studio with Client Statistics turned on?

Currently I get mostly 0's, but if I mess up a query on purpose I can get it up to around 30... Is it milliseconds, or some made-up number based on clock cycles, or...?

I would also like to know if it's possible to change the precision.


- Nikolaj

View 3 Replies View Related

Query Plans && Statistics

Jan 23, 2004

Gurus,

I'm trying to get an application finished that works like Query Analyzer in
terms of returning query plans and statistics.

Problem the co-author is having:

>In using ADO to connect to SQL Server, I'm trying to retrieve multiple
>datasets AND statistics that are usually returned via the OnInfoMessage
>event. For those that are familiar with SQL Server, I need the results
>returned by the SET STATISTICS IO ON and SET STATISTICS PROFILE ON options.
>Anyone had any luck doing this before?

Can anyone shed any light on this please?

Thanks.

BTW if anyone wants to take a look at the tool so far - to see what I'm
delving into:
http://81.130.213.94/myforum/forum_posts.asp?TID=78&PN=1


Much Appreciated!!

View 3 Replies View Related

Complex Statistics Query

Dec 4, 2007

I'm creating a site for a national league and am having difficulty querying for a particular type of statistic, which I'm hoping an expert on here can help me with.


My data structure is such that an Event is a league meeting (all games take place on the same day) which has Fixtures. These Fixtures have Fixture_Events (essentially someone scoring a goal, a timeout being called, a penalty being awarded). I also have a Teams table, a Players table, a TeamRoster table (all players registered to a team) and a FixtureAttendees table (players from team who played in game). I'm trying to return 2 statistics.


The first is "How many Shutouts has a goalie had?"
The rule for this, in English: how many games has a player participated in where they have played in goals and the score has been x-0 in their favour? (A rough sketch of this one follows after the table definitions below.)


The second is "How many game winning goals has a player scored?"
The rule for this is essentially, how many goals has a player scored where the game was previously tied (e.g. 2-2) and they have scored the last goal in the fixture (e.g. 3-2).


My tables



Code Block

CREATE TABLE [dbo].[Fixture_Events](
[FixtureEventID] [int] IDENTITY(1,1) NOT NULL,
[FixtureID] [int] NOT NULL,
[EventType] [nvarchar](50) NOT NULL,
[EventTime] [nchar](5) NOT NULL,
[TeamID] [int] NOT NULL,
[Player1] [int] NOT NULL,
[Player2] [int] NULL,
[EventCode] [nvarchar](50) NULL,
[PenaltyMinutes] [int] NULL,
CONSTRAINT [PK_Fixture_Events] PRIMARY KEY CLUSTERED
(
[FixtureEventID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]







Code Block

CREATE TABLE [dbo].[Events_Fixtures](
[FixtureID] [int] IDENTITY(1,1) NOT NULL,
[EventID] [int] NOT NULL,
[LocationID] [int] NOT NULL,
[FixtureDate] [smalldatetime] NOT NULL,
[HomeTeam] [int] NOT NULL,
[AwayTeam] [int] NOT NULL,
CONSTRAINT [PK_Seasons_Fixtures] PRIMARY KEY CLUSTERED
(
[FixtureID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]





Code Block

CREATE TABLE [dbo].[Fixture_Attendees](
[FixtureID] [int] NOT NULL,
[PlayerID] [int] NOT NULL,
[Goalkeeper] [bit] NOT NULL CONSTRAINT [DF_Fixture_Attendees_Goalkeeper] DEFAULT ((0)),
CONSTRAINT [PK_Fixture_Attendees] PRIMARY KEY CLUSTERED
(
[FixtureID] ASC,
[PlayerID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]












It's a complicated database as I've tried to model it as accurately as I can. It may actually be easier if I made a backup of the database available rather than trying to post code.
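
For the first statistic, the rough shape I've been trying is something like this. It assumes EventType = 'Goal' marks a goal and that TeamRoster has PlayerID and TeamID columns (I haven't posted its DDL), so treat it as a starting point only:

SELECT fa.PlayerID,
       COUNT(*) AS Shutouts
FROM dbo.Fixture_Attendees AS fa
JOIN dbo.TeamRoster        AS tr ON tr.PlayerID = fa.PlayerID
WHERE fa.Goalkeeper = 1
  AND NOT EXISTS (SELECT 1                            -- the opposing team never scored
                  FROM dbo.Fixture_Events AS fe
                  WHERE fe.FixtureID = fa.FixtureID
                    AND fe.EventType = 'Goal'
                    AND fe.TeamID   <> tr.TeamID)
  AND EXISTS     (SELECT 1                            -- their own team did score (x-0 in their favour)
                  FROM dbo.Fixture_Events AS fe
                  WHERE fe.FixtureID = fa.FixtureID
                    AND fe.EventType = 'Goal'
                    AND fe.TeamID    = tr.TeamID)
GROUP BY fa.PlayerID;

The game-winning-goal statistic would presumably follow a similar shape, ordering each fixture's goals by EventTime and checking whether the last goal broke a tie.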


If anybody can help me and requires further info, please let me know.


Thanks in advance.

View 3 Replies View Related

Why Statistics Are Updated After SELECT Query?

Jul 31, 2006

Hi

My understanding is that whenever an INSERT, DELETE, or UPDATE statement executes that impacts a significant number of rows in a table, its statistics should be automatically updated by SQL Server. But it was a surprise to see in the profiler that after the INSERTs no update statistics event happens. The moment a SELECT is executed on the table, however, the 'Auto Stats' event shows up. What is the reason behind this? Why does Auto Stats not happen immediately after the INSERT statement?

Following code can be used to illustrate this (in the profiler select the Auto Stats event under Performance):

IF(SELECT OBJECT_ID('t1')) IS NOT NULL
DROP TABLE t1
GO
CREATE TABLE t1(c1 INT, c2 INT IDENTITY)
INSERT INTO t1 (c1) VALUES(1)
INSERT INTO t1 (c1) VALUES(2)
INSERT INTO t1 (c1) VALUES(3)
CREATE NONCLUSTERED INDEX i1 ON t1(c1)

GO

--Now add 10,000 rows to update so that statistics are updated. Look into the profiler:

SET NOCOUNT ON
GO

DECLARE @n INT
SET @n = 1
WHILE @n <= 10000
BEGIN
INSERT INTO t1 (c1) VALUES(2)
SET @n = @n + 1
END

SET NOCOUNT OFF
GO


--Next run the following statement. The profiler will now show the 'Auto Stats' event

SELECT * FROM t1 WHERE c1 = 2



Regards

Sanjay Singh

View 3 Replies View Related

Need Help Interpreting Some SQL

Apr 28, 2008

can someone tell me what the following SQL does?
 
SELECT MIN(QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME))
FROM INFORMATION_SCHEMA.TABLES
 
 
Thanks in advance, Ralph

View 4 Replies View Related

Interpreting Product A-2 >= 1.978

Aug 14, 2007

Dear Jamie,
Thanks for the reply.
We have another problem to solve.

On the node we are getting Product A-2 >= 1.978.

What does the (-2) mean?
It is mentioned as two time slices ago. Please help me to understand this.
From
menik

View 1 Replies View Related

Index Distribution Statistics And Show Query Plan

Jan 15, 1999

I have an index that shows distribution statistics of 98.20%, which is very poor. I set show query plan and show statistics I/O on. This table has 1113675 rows of data.

*************
select orderID, custId, intertcsi from tblorders
where intertcsi = '2815'

STEP 1
The type of query is SELECT
FROM TABLE
tblorders
Nested iteration
Index : indxInterTCSI
orderID custId intertcsi
----------- ----------- ---------
1015245 1011313 2815
2556392 2556392 2815
....


Table: tblOrders scan count 1, logical reads: 104, physical reads: 58, read ahead reads: 0
***************
Then I use the same select statement to force a table scan:

select orderID, custId, intertcsi from tblorders (index=0)
where intertcsi = '2815'

STEP 1
The type of query is SELECT
FROM TABLE
tblorders
Nested iteration
Table Scan
orderID custId intertcsi
----------- ----------- ---------
60472 61084 2815
102184 102333 2815
...
Table: tblOrders scan count 1, logical reads: 110795, physical reads: 6891, read ahead reads: 103980

When the index is not provided, the logical reads and physical reads increase dramatically. Does this tell me that I should keep that index even though it is a poor selection? Is that because a huge table like this makes the optimizer use the index? The query without the index takes longer to run.
Any idea or comment would be very appreciated.

View 4 Replies View Related

Need Help Interpreting Error Message From Job

Feb 29, 2000

I have a job whose first step is to run a DTS package via a DTSRUN Operating System Command. I get the following message.

DTSRun: Loading... DTSRun: Executing... Error: -2147220499 (800403ED); Provider Error: 0 (0) Error string: No Steps have been defined for the transformation Package. Error source: Microsoft Data Transformation Services (DTS) Package Help file: sqldts.hlp Help context: 700. Process Exit Code 1. The step failed.

Prior to 2/29/2000, it had run dozens of times successfully, the last time on 2/23/2000.

I would be most appreciative of any help.

Thanks.

View 1 Replies View Related

Whitepaper On Interpreting CHECKDB Results

Aug 10, 2005

Folks,

I'm going to write an advanced whitepaper on interpreting the results of CHECKDB in SQL Server 2005 (mostly applicable to SQL Server 2000 as well); it should be available before the end of the year. A couple of questions for you:

1) would this be interesting/useful to you?
2) anything in particular you'd like to see covered?

Thanks

Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
(Legalese: This posting is provided "AS IS" with no warranties, and confers no rights.)

View 2 Replies View Related

Interpreting The Percentage In Decision-tree Model

Sep 15, 2006

Hi,

I used a decision-tree mining model to describe and predict fraud. The table contains 1039 records with 775 distinct values of A-number (the calling party). I used 9 columns in the model. SQL Server reports that only 3 columns are significant in predicting the fraud:

- BPN_is_too_short (called party-number is too short)
- Duration_is_zero
- Invalid_area_code

The key column is A-number, and the predicted column is Is_Fraud, whose only values are 0 and 1. There's no record with NULL (missing value) in the column Is_Fraud.

Mining Legend shows in the first split
[-] 625 cases of fraud
[-] 150 cases of non-fraud
[-] 0 cases of missing

In addition to that, Mining Legend shows
[-] 79.69% of fraud
[-] 19.64% of non-fraud
[-] 0.67% Missing

Now when I compare those values, they don't match.
(A) 625/775 is 80.645%, not 79.69%
(B) 150/775 is 19.355%, not 19.64%
(C) 0 cases of NULL (missing value) should imply 0% of missing, not 0.67% of missing

Furthermore, in one node (with the split on duration_is_zero), there are 541 cases of fraud and 0 cases of non-fraud. This implies the node is a leaf node. However, Mining Legend shows

[D] 514 cases of fraud, 99.35%

[E] 0 cases of non-fraud, 0.33%

[F] 0 cases of missing, 0.33%


My questions
(1) Why don't the values match, as in cases A through C?
(2) Why don't the values match even in cases D through F, when we have no subtree at all?

I've searched explanation by reading the mathematical reasoning, entropy, Gini index; but it does not answer the discrepancies of those values and percentages in the Mining Legend.

Regards,

Bernaridho

View 3 Replies View Related

Reporting Services :: Interpreting Specific Report Rendering From What The Log Shows

Jul 7, 2015

We run std 2008.   In my ssrs log I see this for one of our most critical reports...

library!ReportServer_0-64!2244!07/07/2015-08:24:53:: Call to GetPermissionsAction(/somedirectory/somedirectory1).... which I assume is an indication of a report starting to render by first checking permissions.

Around the time my user says he still saw the revolving arrow and he stopped the report because he felt it was running too long, I see...

webserver!ReportServer_0-64!1dbc!07/07/2015-08:54:44:: i INFO: Processed report. Report='/somedirectory/somedirectory1/importantreport', Stream=''

How can it be true that he stopped it and ssrs reports that it processed the report?

About 4 minutes later I see this entry in the log...

webserver!ReportServer_0-64!15e4!07/07/2015-08:58:34:: i INFO: Processed report. Report='/somedirectory/somedirectory1/importantreport', Stream=''

Which processed report message is right? Could there be multiples because of subreports? I see a number of errors and exceptions around these same times but do not know how to tie either to a specific report. Is there a way?

View 3 Replies View Related

SQL Server 2012 :: Adding Count To Query Without Duplicating Original Select Query

Aug 5, 2014

I have the following code.

SELECT _bvSerialMasterFull.SerialNumber, _bvSerialMasterFull.SNStockLink, _bvSerialMasterFull.SNDateLMove, _bvSerialMasterFull.CurrentLoc,
_bvSerialMasterFull.CurrentAccLink, _bvSerialMasterFull.StockCode, _bvSerialMasterFull.CurrentAccount, _bvSerialMasterFull.CurrentLocationDesc,
_bvSerialNumbersFull.SNTxDate, _bvSerialNumbersFull.SNTxReference, _bvSerialNumbersFull.SNTrCodeID, _bvSerialNumbersFull.SNTransType,
_bvSerialNumbersFull.SNWarehouseID, _bvSerialNumbersFull.TransAccount, _bvSerialNumbersFull.TransTypeDesc,

[code]...

However, as you can see, the original select query is run twice and joined together. What I was hoping for is for this to be done in the original query without the need to duplicate it.

View 2 Replies View Related

SQL Server 2012 :: How To Pull Value Of Query And Not Value Of Variable When Query Using Select Top 1 Value From Table

Jun 26, 2015

How do I get the variables in the cursor's SET statement to NOT update the temp table with the value of the variable? I want it to pull a date, not the column name stored in the variable...

create table #temptable (columname varchar(150), columnheader varchar(150), earliestdate varchar(120), mostrecentdate varchar(120))
insert into #temptable
SELECT ColumnName, headername, '', '' FROM eddsdbo.[ArtifactViewField] WHERE ItemListType = 'DateTime' AND ArtifactTypeID = 10
--column name
declare @cname varchar(30)

[code]...

View 4 Replies View Related

Statistics

Jul 31, 2000

I need a script to drop all statistics in a database at once.
HELP!
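
On a current version, one hedged approach is to build the DROP commands from sys.stats (which didn't exist back in 2000), inspect them, and then execute:

DECLARE @cmd nvarchar(max) = N'';

SELECT @cmd = @cmd
    + N'DROP STATISTICS '
    + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id)) + N'.'
    + QUOTENAME(OBJECT_NAME(s.object_id)) + N'.'
    + QUOTENAME(s.name) + N';' + NCHAR(13) + NCHAR(10)
FROM sys.stats AS s
WHERE (s.auto_created = 1 OR s.user_created = 1)     -- index statistics go away with their index
  AND OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1;

PRINT @cmd;                        -- inspect the generated script first
-- EXEC sys.sp_executesql @cmd;    -- then run it if it looks right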

View 5 Replies View Related

Statistics

Jun 14, 2001

I want to be able to generate a script that gives me all statistics that are in my database currently.

Does anyone know how to do this? Is the following correct:


select --a.id as SysIndex_id,
'create statistics ' + a.name + ' on ' + b.name + ' (' + SUBSTRING(A.NAME, 9, LEN(A.NAME)-17) + ')'

as SysIndex_name

--b.name
from
sysindexes A left join sysobjects B
on A.id = B.id
where a.name like '%wa%'
order by b.name

View 1 Replies View Related

Where, Oh Where, Have My Little Statistics Gone?

Aug 3, 2007

Hi all,

As part of my automagical nightly index maintenance application, I am seeing a fairly regular (3-4 failures out of 5 attempts per week) failure on one particular table in my database. The particular line which seems to be failing is this one:

DBCC SHOWCONTIG (WON_Staging_EPSEst) WITH FAST, TABLERESULTS, ALL_INDEXES

The log reports the following transgression(s):

Msg 2767, Sev 16: Could not locate statistics 'WON_Staging_EpsEst' in the system catalogs. [SQLSTATE 42000]
Msg 0, Sev 16: [SQLSTATE 01000]
Msg 0, Sev 16: -------------------- Simple ReIndex for [WON_Staging_EpsEst].[IX_WON_Staging_EpsEst] [SQLSTATE 01000]
Msg 2528, Sev 16: DBCC execution completed. If DBCC printed error messages, contact your system administrator. [SQLSTATE 01000]
Msg 0, Sev 16: [SQLSTATE 01000]
Msg 0, Sev 16: -------------------- Post-Maintenance Statistics Report for WON_Staging_EpsEst [SQLSTATE 01000]
Msg 0, Sev 16: Statistics for WON_Staging_EpsEst, WON_Staging_EpsEst [SQLSTATE 01000]
Msg 2528, Sev 16: DBCC execution completed. If DBCC printed error messages, contact your system administrator. [SQLSTATE 01000]
Msg 0, Sev 16: Statistics for WON_Staging_EpsEst, IX_WON_Staging_EpsEst [SQLSTATE 01000]
Msg 2768, Sev 16: Statistics for INDEX 'IX_WON_Staging_EpsEst'. [SQLSTATE 01000]
Updated Rows Rows Sampled Steps Density Average key length
-------------------- -------------------- -------------------- ------ ------------------------ ------------------------
Aug 3 2007 3:22AM 674609 674609 196 2.0958368E-4 8.0

(1 rows(s) affected)

This table is dropped and recreated each day during a data import job. After the table is recreated and repopulated with data (using a bulk import from a flat file), the index is also recreated using the following code:

CREATE INDEX [IX_WON_Staging_EpsEst]
ON [dbo].[WON_Staging_EpsEst](OSID, [Year], Period)
ON [PRIMARY]

Yet more often than not, that evening, when the index maintenance job runs, it fails with the aforepasted messages complaining of being unable to find table/index statistics.

Worth noting, perhaps, is that this same process is used on roughly 10 data staging tables in this database each day, and none of the other tables fail during the index maintenance job.

Also worth noting, perhaps, is that this IDENTICAL table/code is processed in exactly the same way on TWO other servers, and the failure has not occurred in any of the jobs on those other two servers (these other two servers are identical mirrors of the one failing, and contain all the same data, indices, and everything else).

Any thoughts, suggestions for where to look, or unrestrained abusive comments regarding my ancestry?

Thanks!

View 14 Replies View Related

Statistics

Jun 13, 2008

I have a small doubt.
If we run a statistics command on a particular table, what will it update?
Are statistics normally created automatically by the server, or do we have to create them?

View 1 Replies View Related

SQL Use Statistics

Jul 20, 2005

Anybody know how many companies worldwide use SQL Server and how many individual servers this amounts to? Also, at what rate is SQL use growing? Can someone at least point me to a source where I could find close to exact numbers?

View 2 Replies View Related






