My understanding is that whenever an INSERT, DELETE, or UPDATE statement changes a significant number of rows in a table, SQL Server should update the table's statistics automatically. But it was a surprise to see in the profiler that no update-statistics event happens after the INSERT. The moment a SELECT is executed against the table, however, the 'Auto Stats' event shows up. What is the reason for this? Why does Auto Stats not happen immediately after the INSERT statement?
The following code can be used to illustrate this (in the profiler, select the Auto Stats event under Performance):
IF OBJECT_ID('t1') IS NOT NULL
DROP TABLE t1
GO
CREATE TABLE t1(c1 INT, c2 INT IDENTITY)
INSERT INTO t1 (c1) VALUES(1)
INSERT INTO t1 (c1) VALUES(2)
INSERT INTO t1 (c1) VALUES(3)
CREATE NONCLUSTERED INDEX i1 ON t1(c1)
GO
--Now add 10,000 rows so that enough of the table changes to trigger a statistics update. Watch the profiler:
SET NOCOUNT ON
GO
DECLARE @n INT
SET @n = 1
WHILE @n <= 10000
BEGIN
INSERT INTO t1 (c1) VALUES(2)
SET @n = @n + 1
END
SET NOCOUNT OFF
GO
--Next, run a SELECT against the table. The profiler will now show the 'Auto Stats' event.
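For example, any SELECT that compiles against the c1 column will fire it, and STATS_DATE then confirms the refresh (a minimal sketch):

SELECT c1 FROM t1 WHERE c1 = 2   -- compiling this query fires the 'Auto Stats' event

-- Check when each statistics object on t1 was last updated:
SELECT name AS stats_name,
       STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('t1')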
I ran DBCC INDEXDEFRAG against the tables in my DB, and yet the results in DBCC SHOWCONTIG are the same before and after for some of the tables that it defragmented. Why?
In the DBCC SHOWCONTIG output before defragging, this was a sample table:
I ran the script from example E of the BOL entry for 'DBCC SHOWCONTIG' to defragment all indexes in a database, with @maxfrag = 5%. It returned the following results (there are 5 nonclustered indexes):
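For reference, this is the basic measure/defrag/remeasure pattern I would expect to show a change (a sketch; the table name is a placeholder and index ID 1 is the clustered index):

DBCC SHOWCONTIG ('dbo.MyTable') WITH FAST
DBCC INDEXDEFRAG (0, 'dbo.MyTable', 1)   -- 0 = current database
DBCC SHOWCONTIG ('dbo.MyTable') WITH FAST

Note that INDEXDEFRAG only compacts and reorders leaf pages in place; on a table with very few pages, or with fragmentation coming from mixed extents, the scan density figures can look unchanged afterwards.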
I'm working on databases where the statistics on some indexes (tables) change too frequently. When I update them manually, one minute later they show a 10-20% change, and five minutes later over a 100% change. The tables are updated very frequently (multiple times per second).
When I run a query to read from sys.stats, sys.dm_db_stats_properties and other dynamic views, I see that the statistics were last updated when I did it manually, even though the number of modified rows has passed the 500 + 20%-of-rows threshold (the tables have tens of thousands of rows). Auto create and auto update statistics are set to true on all databases, and I don't know why SQL Server does not do that automatically.
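This is the kind of check I'm running; it compares the modification counter against the 500 + 20% threshold (a sketch; the table name is a placeholder):

SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter   -- compare against 500 + 0.20 * sp.rows
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.MyTable')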
I wonder if anyone has any pointers on how to gather statistics for SELECT queries. For instance, if 10 rows are returned by a query, is it possible to log which rows were returned?
I have listed below a SQL statement generated by an MS Access query. Access is the frontend, using a SQL Server 2005 view as the backend. I have already corrected the obvious differences between Access and SQL Server syntax, such as replacing UCase$ with UPPER, replacing '_' with '.' between the db owner and view name, replacing IIf with IF, and replacing "D" with 'D' and "E" with 'E', but it still generates syntax errors (of course, with no explanation). As you can see, it SUMs a field based on whether the value is 'D' or 'E', then uses those calculated values to calculate a percentage. Can anyone out there let me know what I'm doing wrong?
SELECT dbo_vwDisplayUserList.DEPT_DESC,
    Sum(IIf(UCase$([Essential_Code])="D",1,0)) AS Department_Essential,
    Sum(IIf(UCase$([Essential_Code])="E",1,0)) AS EOC_Essential,
    Count(dbo_vwDisplayUserList.UserID) AS Total_Employees,
    Int([Department_Essential]/[Total_Employees]*100) AS [%Department_Essential],
    100-Int([Department_Essential]/[Total_Employees]*100) AS [%EOC_Essential]
FROM dbo_vwDisplayUserList
GROUP BY dbo_vwDisplayUserList.DEPT_DESC
ORDER BY dbo_vwDisplayUserList.DEPT_DESC
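For what it's worth, a straight T-SQL version usually ends up looking like the sketch below: IIf becomes CASE, and because T-SQL cannot reference a SELECT-list alias elsewhere in the same SELECT list, the percentages have to be computed inline (the view name dbo.vwDisplayUserList is my assumption):

SELECT v.DEPT_DESC,
    SUM(CASE WHEN UPPER(v.Essential_Code) = 'D' THEN 1 ELSE 0 END) AS Department_Essential,
    SUM(CASE WHEN UPPER(v.Essential_Code) = 'E' THEN 1 ELSE 0 END) AS EOC_Essential,
    COUNT(v.UserID) AS Total_Employees,
    FLOOR(SUM(CASE WHEN UPPER(v.Essential_Code) = 'D' THEN 1.0 ELSE 0 END) / COUNT(v.UserID) * 100) AS [%Department_Essential],
    100 - FLOOR(SUM(CASE WHEN UPPER(v.Essential_Code) = 'D' THEN 1.0 ELSE 0 END) / COUNT(v.UserID) * 100) AS [%EOC_Essential]
FROM dbo.vwDisplayUserList AS v
GROUP BY v.DEPT_DESC
ORDER BY v.DEPT_DESC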
Hello group. I have an issue which has bothered me for a while now: I'm wondering why the column statistics, which SQL Server wants me to create if I turn off auto-created statistics, are so important to the optimizer?

Example: in Northwind (with auto create stats off), I do the following:

SELECT * FROM Customers WHERE Country = 'Sweden'

My query plan shows a clustered index scan, which is expected - no index exists for Country. BUT the query plan also shows that the optimizer is missing a statistic on Country, which tells me that the optimizer would benefit from knowing this. I cannot see why (and I've been trying for a while now).

If I create the missing statistics, nothing happens in the query plan (and why should it?). I could understand it if the optimizer suggested an index on Country - this would make sense - but when creating the missing statistic, Query Analyzer creates the statistics with an empty index, which seems to me to be less than usable.

I've been thinking long and hard about this, but haven't been able to reach a conclusion :) It has some relevance to my work, because allowing the optimizer to create missing statistics limits my options for designing indexes (e.g. covering) for some rather wide tables, so I'm thinking why not turn it off altogether. But I would like to know the consequences - hopefully somebody has already delved into this and knows a good explanation.

Rgds
Jesper
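For anyone who wants to experiment, the missing statistic can be created by hand (auto-created ones just get a generated _WA_Sys name instead):

CREATE STATISTICS st_Customers_Country ON dbo.Customers (Country)

Even when the plan shape stays a clustered index scan, the histogram on Country feeds the row-count estimate for the predicate, and that estimate drives downstream choices (join order, join type, memory grants) as soon as the query is combined with anything else.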
What is the unit of the numbers you get in the Time Statistics part when running a query in Microsoft SQL Server Management Studio with Client Statistics turned on?
Currently I get mostly 0s, but if I deliberately mess a query up I can get it up to around 30... Is it milliseconds, or some made-up number based on clock cycles, or...?
I would also like to know if it's possible to change the precision.
I'm trying to get an application finished that works like Query Analyzer in terms of returning query plans and statistics.
Problem the co-author is having:
> In using ADO to connect to SQL Server, I'm trying to retrieve multiple datasets AND statistics that are usually returned via the OnInfoMessage event. For those that are familiar with SQL Server, I need the results returned by the SET STATISTICS IO ON and SET STATISTICS PROFILE ON options. Anyone had any luck doing this before?
Can anyone shed any light on this please?
Thanks.
BTW if anyone wants to take a look at the tool so far - to see what I'm delving into: http://81.130.213.94/myforum/forum_posts.asp?TID=78&PN=1
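For what it's worth, the two SET options surface their output differently, which may explain the trouble: STATISTICS IO comes back as informational messages (hence the OnInfoMessage event), while STATISTICS PROFILE returns the showplan rows as an extra result set following each query's own results, so in ADO it should be reachable by walking NextRecordset rather than through the message event. A quick way to see the difference in a query window (a sketch):

SET STATISTICS IO ON
SET STATISTICS PROFILE ON

SELECT name FROM sysobjects   -- any query; the plan rows arrive as a second result set

SET STATISTICS IO OFF
SET STATISTICS PROFILE OFF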
I'm creating a site for a national league and am having difficulty querying for a particular type of statistic, which I'm hoping an expert on here can help me with.
My data structure is such that an Event is a league meeting (all games take place on the same day) which has Fixtures. These Fixtures have Fixture_Events (essentially someone scoring a goal, a timeout being called, a penalty being awarded). I also have a Teams table, a Players table, a TeamRoster table (all players registered to a team) and a FixtureAttendees table (players from team who played in game). I'm trying to return 2 statistics.
The first is "How many shutouts has a goalie had?" The rule for this, in English: how many games has a player participated in where they played in goal and the score was x-0 in their favour?
The second is "How many game-winning goals has a player scored?" The rule for this is essentially: how many goals has a player scored where the game was previously tied (e.g. 2-2) and they scored the last goal of the fixture (e.g. 3-2)?
My tables:
CREATE TABLE [dbo].[Fixture_Events](
    [FixtureEventID] [int] IDENTITY(1,1) NOT NULL,
    [FixtureID] [int] NOT NULL,
    [EventType] [nvarchar](50) NOT NULL,
    [EventTime] [nchar](5) NOT NULL,
    [TeamID] [int] NOT NULL,
    [Player1] [int] NOT NULL,
    [Player2] [int] NULL,
    [EventCode] [nvarchar](50) NULL,
    [PenaltyMinutes] [int] NULL,
    CONSTRAINT [PK_Fixture_Events] PRIMARY KEY CLUSTERED ([FixtureEventID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
CREATE TABLE [dbo].[Events_Fixtures](
    [FixtureID] [int] IDENTITY(1,1) NOT NULL,
    [EventID] [int] NOT NULL,
    [LocationID] [int] NOT NULL,
    [FixtureDate] [smalldatetime] NOT NULL,
    [HomeTeam] [int] NOT NULL,
    [AwayTeam] [int] NOT NULL,
    CONSTRAINT [PK_Seasons_Fixtures] PRIMARY KEY CLUSTERED ([FixtureID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
CREATE TABLE [dbo].[Fixture_Attendees](
    [FixtureID] [int] NOT NULL,
    [PlayerID] [int] NOT NULL,
    [Goalkeeper] [bit] NOT NULL CONSTRAINT [DF_Fixture_Attendees_Goalkeeper] DEFAULT ((0)),
    CONSTRAINT [PK_Fixture_Attendees] PRIMARY KEY CLUSTERED ([FixtureID] ASC, [PlayerID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
It's a complicated database, as I've tried to model the domain as accurately as I can. It may actually be easier if I made a backup of the database available rather than trying to post code.
If anybody can help me and requires further info, please let me know. My attempt at the first statistic so far is sketched below.
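To make the shutout rule concrete, here is a sketch of the sort of query I have in mind. It assumes goals are recorded as Fixture_Events rows with EventType = 'Goal' (my guess at the value) and that TeamRoster has PlayerID and TeamID columns:

SELECT fa.PlayerID, COUNT(*) AS Shutouts
FROM Fixture_Attendees AS fa
INNER JOIN TeamRoster AS tr
    ON tr.PlayerID = fa.PlayerID
WHERE fa.Goalkeeper = 1
    AND NOT EXISTS (SELECT *                        -- the opposition never scored
                    FROM Fixture_Events AS fe
                    WHERE fe.FixtureID = fa.FixtureID
                      AND fe.EventType = 'Goal'
                      AND fe.TeamID <> tr.TeamID)
    AND EXISTS (SELECT *                            -- and the goalie's team did
                FROM Fixture_Events AS fe
                WHERE fe.FixtureID = fa.FixtureID
                  AND fe.EventType = 'Goal'
                  AND fe.TeamID = tr.TeamID)
GROUP BY fa.PlayerID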
I'm designing a new database which will be the back-end to a heavily-used web-based application (all these terms are relative - I guess the use won't be that heavy in the grand scheme of things, I'm only talking 100 users or so at the very most). Data from the old application database will be migrated to this one, and the old database is around 7GB in size after 5 years of use.
I have two different ways of linking some tables in mind, one slightly more complex than the other but with potential benefits over the simpler method. However, I'm concerned that I might be 'over-cooking' the design and that performance would suffer as a result. So I've created the two different versions of the database (the part of it I'm concerned with, anyway), one for each of the solutions I have in mind, migrated the data into the relevant tables, and run some queries on the data to collect some statistics.
The problem is that, whilst I can see that the more complex method is more expensive, as expected, I don't really understand whether the difference is significant. Since I don't know what the numbers in the Client Statistics window actually mean (there are no units! I'm guessing times are in milliseconds?), or how much real-world impact the difference will have, I'm finding it hard to interpret my statistics and come to a decision.
Querying the entirety of my tables to return ~20,000 records listing one column from each of the main tables I'm playing with, the simpler method had a Total Execution Time of 199, and the more complex a Total Execution Time of 272. Is that the statistic I should be most concerned with? Is that a difference I should be concerned about? Is the difference likely to be magnified when the database is much larger and in use, such that a difference of 73 milliseconds in this test scenario could end up being as much as a whole second in production, for example?
Hi everybody, I am a total noob concerning ASP, but I am willing to learn. We have a SQL 2005 server (hosted by our ISP, so limited access) and an ASP-based forum (Web Wiz). When I try to log in I get this error:

Support Error Code:- err_SQLServer_loginUser()_update_USR_Code
File Name:- functions_login.asp
Error details:- Microsoft OLE DB Provider for ODBC Drivers
Query cannot be updated because the FROM clause is not a single simple table name.

Can somebody tell me what's wrong? Thanks in advance. Gerry de Bruijn!
I have a problem. My provider (ISP) supports the SQL Native Client driver and my forum supplier only supports SQLOLEDB. I am trying to access our SQL 2005 DB located at our ISP.
I have changed this line:

strCon = "Provider=SQLOLEDB;Connection Timeout=90;" & strCon

to:

strCon = "Driver={SQL Native Client};Connection Timeout=90;" & strCon
Now I can access the database, but when I try to log in I get this error: Server Error in Forum Application. An error has occurred while writing to the database. Please contact the forum administrator.
Support Error Code:- err_SQLServer_loginUser()_update_USR_Code
File Name:- functions_login.asp
Error details:- Microsoft OLE DB Provider for ODBC Drivers
Query cannot be updated because the FROM clause is not a single simple table name.
What can I do?? I am stuck in between and need a solution.....
I have an index whose distribution statistics show 98.20%, which is very poor. I set show query plan and show statistics I/O on. This table has 1,113,675 rows of data.
*************
select orderID, custId, intertcsi
from tblorders
where intertcsi = '2815'
STEP 1
The type of query is SELECT
FROM TABLE tblorders
Nested iteration
Index : indxInterTCSI

orderID     custId      intertcsi
----------- ----------- ---------
1015245     1011313     2815
2556392     2556392     2815
....

Table: tblOrders  scan count 1,  logical reads: 104,  physical reads: 58,  read ahead reads: 0
***************

Then I use the same select statement to force a table scan:
select orderID, custId, intertcsi from tblorders (index=0) where intertcsi = '2815'
STEP 1
The type of query is SELECT
FROM TABLE tblorders
Nested iteration
Table Scan

orderID     custId      intertcsi
----------- ----------- ---------
60472       61084       2815
102184      102333      2815
...

Table: tblOrders  scan count 1,  logical reads: 110795,  physical reads: 6891,  read ahead reads: 103980
When the index is not used, the logical reads and physical reads increase dramatically. Does this tell me that I should keep that index even though its selectivity looks poor? Is it because, on a huge table like this, the optimizer still prefers the index? The query without the index takes longer to run. Any idea or comment would be very appreciated.
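One thing worth checking is the density and histogram for that index; the overall density can look poor while an individual value such as '2815' is still selective enough to make the 104 logical reads above a bargain:

DBCC SHOW_STATISTICS ('tblorders', 'indxInterTCSI')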
I have two tables, both of which have a LastUpdated column. In my SELECT query I join the two tables with an INNER JOIN, and I want to show the LastUpdated column which has the maximum date value, i.e. the latest of the two.
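Something like this sketch should do it (the table and key names are placeholders):

SELECT a.ID,
       CASE WHEN a.LastUpdated >= b.LastUpdated
            THEN a.LastUpdated
            ELSE b.LastUpdated
       END AS LastUpdated
FROM TableA AS a
INNER JOIN TableB AS b
    ON b.ID = a.ID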
Hi, I want to save the last modification date when a row is updated. I have a column called "LastModification" in the table; every time a row is updated I want to set the value of this column to the current date. So far all I know is that I need to use a trigger and the GetDate() function, but could anybody help me with how to set the value of the column to GetDate()? Thanks for your help.
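A minimal sketch of such a trigger, assuming the table is named myTable and has a primary key column id:

CREATE TRIGGER trg_myTable_LastModification
ON myTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON
    -- stamp every row touched by the triggering UPDATE
    UPDATE t
    SET LastModification = GETDATE()
    FROM myTable AS t
    INNER JOIN inserted AS i
        ON i.id = t.id
END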
Not sure if this is possible, but maybe. I have a table that contains a bunch of logs, and I'm doing something like SELECT * FROM LOGS. The primary key in this table is LogID. I have another table that contains error messages, and each LogID can have multiple error messages associated with it. When I run my first SELECT query listed above, I would like one of the columns to be populated with ALL the error messages for that particular LogID (SELECT * FROM ERRORS WHERE LogID = MyLogID). Any thoughts as to how I could accomplish such a daring feat?
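On SQL Server 2005 and later, the usual trick is a correlated FOR XML PATH subquery that flattens the child rows into one string; a sketch, assuming the message column in ERRORS is named ErrorMessage:

SELECT l.*,
       STUFF((SELECT ', ' + e.ErrorMessage
              FROM ERRORS AS e
              WHERE e.LogID = l.LogID
              FOR XML PATH('')), 1, 2, '') AS AllErrorMessages   -- STUFF strips the leading ', '
FROM LOGS AS l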
Hi, not exactly sure if this can be done, but I have a need to run a query which returns a list of values from one column, then iterate that list to produce the result set to return. This is implemented as a stored procedure.
declare @OwnerIdent varchar(7) set @OwnerIdent='A12345B'
SELECT table1.val1
FROM table1
INNER JOIN table2 ON table1.Ident = table2.Ident
WHERE table2.Ident = @OwnerIdent
Now, for each result of the above, I need to run the query below:
SELECT Clients.Name, Clients.Address1, Clients.BPhone, Clients.email
FROM Clients
INNER JOIN Growers ON Clients.ClientKey = Growers.ClientKey
WHERE Growers.PIN = @newpin
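Rather than iterating, the two queries can usually be folded into a single set-based statement; a sketch, assuming the val1 values are what would have been fed in as @newpin:

SELECT c.Name, c.Address1, c.BPhone, c.email
FROM Clients AS c
INNER JOIN Growers AS g
    ON g.ClientKey = c.ClientKey
WHERE g.PIN IN (SELECT t1.val1
                FROM table1 AS t1
                INNER JOIN table2 AS t2 ON t1.Ident = t2.Ident
                WHERE t2.Ident = @OwnerIdent)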
When I run a simple SELECT against my view in Query Analyzer, I get the result set in one sort order. The sort order differs when I BCP the same view. Using a third technique, SELECT INTO, I have observed that the sort order is again different in the resulting table. My question is: what is the difference in mechanism between Query Analyzer, BCP, and SELECT INTO? Thanks.
I have a table with students' details in it. I want to select all the students who joined a class on a particular day, and then I need another query to select all students who joined classes over a date range, e.g. 03/12/2003 to 12/12/2003.
I have tried the following query; I need help putting my queries together:

select * from tblstudents where classID='1' and studentstartdate between ('03/12/2004') and ('03/12/2004')
When I run this query I get this message:
Server: Msg 242, Level 16, State 3, Line 1
The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value.
The studentstartdate field is set as datetime (8 bytes) and the date looks like this in the table: 03/12/2004 03:12:15.
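For what it's worth, Msg 242 usually means the string is being read with the wrong day/month order for the session's language setting. Using the unambiguous yyyymmdd format and a half-open range avoids both that and the time-of-day problem (a sketch, assuming 3-12 December 2004 is the range wanted):

select *
from tblstudents
where classID = '1'
and studentstartdate >= '20041203'   -- yyyymmdd is read the same under any DATEFORMAT setting
and studentstartdate < '20041213'    -- exclusive upper bound, so times during 12 Dec still match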
However, as you can see, the original select query is run twice and joined together. What I was hoping for is for this to be done in the original query, without the need to duplicate it.
How do I get the variables in the cursor's SET statement to NOT update the temp table with the literal text of the variable? I want it to pull a date, not the column name stored in the variable...
create table #temptable (
    columname varchar(150),
    columnheader varchar(150),
    earliestdate varchar(120),
    mostrecentdate varchar(120)
)

insert into #temptable
SELECT ColumnName, headername, '', ''
FROM eddsdbo.[ArtifactViewField]
WHERE ItemListType = 'DateTime' AND ArtifactTypeID = 10

--column name
declare @cname varchar(30)
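The column name held in @cname only becomes a real column reference if the statement is built as a string and executed; a dynamic-SQL sketch, with eddsdbo.Document standing in as a placeholder for whatever table actually holds the dates:

declare @sql nvarchar(max)
set @sql = N'update #temptable
    set earliestdate   = (select convert(varchar(120), min(' + QUOTENAME(@cname) + N')) from eddsdbo.Document),
        mostrecentdate = (select convert(varchar(120), max(' + QUOTENAME(@cname) + N')) from eddsdbo.Document)
    where columname = @cname'
exec sp_executesql @sql, N'@cname varchar(30)', @cname = @cname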
I have a query that performs a comparison between 2 different databases and returns the results. It returns 2 columns: the 1st column is the value of the object being compared, and the 2nd is a number representing any discrepancies. What I would like to do is use the results from this 1st query in the WHERE clause of a separate 2nd query, so that the 2nd query only runs for primary values from the 1st query where the discrepancy value is not equal to zero. I was thinking of using an "IN" clause in the 2nd query to pull data from the 1st column of the 1st query where the 2nd column != 0, but I'm having trouble ironing out the correct syntax and conceptualizing this optimally.
While I would prefer to only return values from the 1st query where the comparison value != 0, in order to have a concise list to work with, I am having difficulty because the comparison value is a mathematical calculation across 2 tables in 2 different databases, and so far I've been forced to include it in the SELECT list because the WHERE clause does not accept it. Also, I am not a DBA by trade; I am a system administrator writing SQL code for reporting data from an application I support.
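A common pattern is to wrap the comparison in a CTE (or derived table); the calculated column can then be filtered in a WHERE clause and fed to the second query. A sketch, with db1.dbo.totals and db2.dbo.totals as hypothetical stand-ins for the two databases' tables:

WITH comparison AS (
    SELECT a.id AS primary_value,
           a.amount - b.amount AS discrepancy   -- whatever the real calculation is
    FROM db1.dbo.totals AS a
    INNER JOIN db2.dbo.totals AS b ON b.id = a.id
)
SELECT primary_value
FROM comparison
WHERE discrepancy <> 0
-- the second query can then use:
--   WHERE key_col IN (SELECT primary_value FROM comparison WHERE discrepancy <> 0)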
I have a column colC in a table myTable that holds a value such as '0X'. The position of the non-zero character in colC refers to the ordinal position of another column in myTable (in the aforementioned example, colB).
To get the column name (i.e., colA or colB) from table myTable, I can join ("ON cte.pos = cn.ORDINAL_POSITION") to INFORMATION_SCHEMA.COLUMNS for that table catalog, schema and name. But I want to show the value of what is in that column (e.g., 'ABC'), not just the name. I'm hoping for:
COLUMN_NAME  Value
-----------  -----
colB         123
colA         XYZ
I've tried dynamic SQL, to no success; I'm probably not executing the concept correctly...
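For what it's worth, here is a sketch of one dynamic-SQL approach, handling a single row at a time and assuming myTable has a key column named id:

DECLARE @col sysname, @sql nvarchar(max)

-- find the column whose ordinal position matches the first non-zero character in colC
SELECT @col = c.COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS AS c
WHERE c.TABLE_NAME = 'myTable'
  AND c.ORDINAL_POSITION = (SELECT PATINDEX('%[^0]%', t.colC)
                            FROM myTable AS t
                            WHERE t.id = 1)

SET @sql = N'SELECT ''' + @col + N''' AS COLUMN_NAME, '
         + QUOTENAME(@col) + N' AS Value FROM myTable WHERE id = 1'
EXEC sp_executesql @sql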
I hope someone can answer this; I'm not even sure where to start looking for documentation on it. The SQL query I'm referencing is included at the bottom of this post.
I have a query with 3 SELECT statements joined together like tables. It works great, except that I need to declare a table variable within two of those 3. The example is below: you'll see that I have three SELECT statements made into tables A, B, and C, and that table A uses a variable @years, which is a table.
This works when I just run table A's SELECT by itself, but when I execute the entire query I get an error about the "declare" keyword, and then some other errors near the word "as" and the ")" character. These are some of those errors that I find pretty meaningless, and that just mean I've really thrown something off.
So, am I not allowed to declare a variable within these SELECT tables that I'm creating and joining?
Thanks in advance, Andy
Select * from
(
declare @years table (years int);
insert into @years
select
CASE
WHEN month(getdate()) in (1) THEN year(getdate())-1
WHEN month(getdate()) in (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12) THEN year(getdate())
END
select
u.fullname
, sum(tx.Dm_Time) LastMonthBillhours
, sum(tx.Dm_Time)/((select dm_billabledays from dm_billabledays where Dm_Month = Month(GetDate()))*8) lasmosbillingpercentage
from
Dm_TimeEntry tx
join
systemuserbase u
on
(tx.owninguser = u.systemuserid)
where
Month(tx.Dm_Date) = Month(getdate())-1
and
year(dm_date) = (select years from @years)
and tx.dm_billable = 1
group by u.fullname
) as A
left outer join
(select
u.FullName
, sum(tx.Dm_Time) Billhours
, ((sum(tx.Dm_Time))
/
((day(getdate()) * ((5.0)/(7.0))) * 8)) perc
from
Dm_TimeEntry tx
join
systemuserbase u
on
(tx.owninguser = u.systemuserid)
where
tx.Dm_Billable = '1'
and
month(tx.Dm_Date) = month(GetDate())
and
year(tx.Dm_Date) = year(GetDate())
group by u.fullname) as B
on
A.Fullname = B.Fullname
Left Outer Join
(
select
u.fullname
, sum(tx.Dm_Time) TwomosagoBillhours
, sum(tx.Dm_Time)/((select dm_billabledays from dm_billabledays where Dm_Month = Month(GetDate()))*8) twomosagobillingpercentage
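The short answer is that DECLARE is a statement, not part of a query expression, so it cannot appear inside a derived table. The usual workaround is to declare and fill the table variable once at the top of the batch and then reference it inside A; a minimal sketch:

declare @years table (years int)

insert into @years (years)
select case when month(getdate()) = 1 then year(getdate()) - 1
            else year(getdate())
       end

-- the derived tables A, B and C can now reference @years directly:
select years from @years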
Hi, does it make a difference whether you write the following SELECT statement in a query window or create a stored procedure and then call the stored procedure?
select * from authors
OR
create procedure GetAuthors as   -- a name other than the table's avoids an object-name collision
select * from authors
Let's assume that we have a million records in the authors table. Is it faster to run the query from within a stored procedure or not? Thanks for your input.
As part of my automagical nightly index maintenance application, I am seeing a fairly regular failure (3-4 failures out of 5 attempts per week) on one particular table in my database. The particular line which seems to be failing is this one:
DBCC SHOWCONTIG (WON_Staging_EPSEst) WITH FAST, TABLERESULTS, ALL_INDEXES
The log reports the following transgression(s):

Msg 2767, Sev 16: Could not locate statistics 'WON_Staging_EpsEst' in the system catalogs. [SQLSTATE 42000]
Msg 0, Sev 16: [SQLSTATE 01000]
Msg 0, Sev 16: -------------------- Simple ReIndex for [WON_Staging_EpsEst].[IX_WON_Staging_EpsEst] [SQLSTATE 01000]
Msg 2528, Sev 16: DBCC execution completed. If DBCC printed error messages, contact your system administrator. [SQLSTATE 01000]
Msg 0, Sev 16: [SQLSTATE 01000]
Msg 0, Sev 16: -------------------- Post-Maintenance Statistics Report for WON_Staging_EpsEst [SQLSTATE 01000]
Msg 0, Sev 16: Statistics for WON_Staging_EpsEst, WON_Staging_EpsEst [SQLSTATE 01000]
Msg 2528, Sev 16: DBCC execution completed. If DBCC printed error messages, contact your system administrator. [SQLSTATE 01000]
Msg 0, Sev 16: Statistics for WON_Staging_EpsEst, IX_WON_Staging_EpsEst [SQLSTATE 01000]
Msg 2768, Sev 16: Statistics for INDEX 'IX_WON_Staging_EpsEst'. [SQLSTATE 01000]

Updated              Rows        Rows Sampled  Steps  Density       Average key length
-------------------- ----------- ------------- ------ ------------- ------------------
Aug  3 2007  3:22AM  674609      674609        196    2.0958368E-4  8.0
(1 row(s) affected)
This table is dropped and recreated each day during a data import job. After the table is recreated and repopulated with data (using a bulk import from a flat file), the index is also recreated using the following code:

CREATE INDEX [IX_WON_Staging_EpsEst]
ON [dbo].[WON_Staging_EpsEst] (OSID, [Year], Period)
ON [PRIMARY]

Yet more often than not, that evening, when the index maintenance job runs, it fails with the aforepasted messages, complaining of being unable to find table/index statistics.
Worth noting, perhaps, is that this same process is used on roughly 10 data staging tables in this database each day, and none of the other tables fail during the index maintenance job.
Also worth noting, perhaps, is that this IDENTICAL table/code is processed in exactly the same way on TWO other servers, and the failure has not occurred in any of the jobs on those other two servers (these other two servers are identical mirrors of the one failing, and contain all the same data, indices, and everything else).
Any thoughts, suggestions for where to look, or unrestrained abusive comments regarding my ancestry?
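In the meantime, a defensive guard in the maintenance script might at least avoid the hard failure; a sketch against SQL 2000-era catalogs, where statistics appear as rows in sysindexes:

-- skip the statistics report when the statistics row is missing
IF EXISTS (SELECT * FROM sysindexes
           WHERE id = OBJECT_ID('dbo.WON_Staging_EpsEst')
             AND name = 'WON_Staging_EpsEst')
BEGIN
    DBCC SHOW_STATISTICS ('WON_Staging_EpsEst', 'WON_Staging_EpsEst')
END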
I have a small doubt. If we run a statistics command (such as UPDATE STATISTICS) on a particular table, what exactly will it update? And are statistics normally created automatically by the server, or do we have to create them ourselves?
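For reference, both operations are single statements (the table and column names here are placeholders):

-- refresh every statistics object on the table, reading all rows
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN

-- create a statistics object by hand on one column
CREATE STATISTICS st_MyTable_MyColumn ON dbo.MyTable (MyColumn)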
Does anybody know how many companies worldwide use SQL Server, and how many individual servers this amounts to? Also, at what rate is SQL Server use growing? Can someone at least point me to a source where I could find close-to-exact numbers?