As part of my data warehouse nightly build, I truncate the tables in my
target database.
As an example, I find it is much quicker to do a bulk API load of 13M
records than to do an update/insert of 100K rows. I also drop the
indexes before the builds and reindex afterwards. That's an aside.
What I am wondering is how this impacts the statistics. Do I need
to update them?
I'm not well versed on statistics, so any information is welcome.
Thanks
Rob
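For what it's worth, after a TRUNCATE plus a 13M-row bulk load the old statistics no longer describe the data, so a refresh at the end of the build is a reasonable final step. A minimal sketch, with a placeholder table name:

    -- After the bulk load and reindex, refresh statistics so the optimizer
    -- sees the new 13M-row distribution (FULLSCAN is thorough;
    -- WITH SAMPLE n PERCENT is cheaper).
    UPDATE STATISTICS dbo.TargetTable WITH FULLSCAN;

    -- Or refresh every table in the database in one pass:
    EXEC sp_updatestats;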
Is there any way to determine index usage statistics for a given table? For example, I have a table with three indexes. I need to know how many times each index has been used. Is it possible?
And the second part of the question: I need to know which user is overloading my database with their gigantic queries. Is there any way to determine how many system resources each user's session uses?
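A sketch of both checks, assuming SQL Server 2005 or later (the counters reset on instance restart; the table name is a placeholder):

    -- Per-index usage counts for one table
    SELECT i.name AS index_name,
           u.user_seeks, u.user_scans, u.user_lookups, u.user_updates
    FROM sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS u
           ON u.object_id = i.object_id
          AND u.index_id = i.index_id
          AND u.database_id = DB_ID()
    WHERE i.object_id = OBJECT_ID('dbo.MyTable');

    -- Resource consumption per user session, heaviest first
    SELECT session_id, login_name, cpu_time, memory_usage, reads, writes
    FROM sys.dm_exec_sessions
    WHERE is_user_process = 1
    ORDER BY cpu_time DESC;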
I have a DTS package that exports data from SQL Server to .xls, and it works perfectly. My problem is that my SQL table gets truncated every time before I load data, but my .xls file always keeps the previous records, which I don't want. I.e., if my SQL table had 3 rows, when I finish executing the DTS my .xls has 3 rows; when I execute it again, my table gets truncated but my .xls adds another 3 rows. How can I solve this?
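One common workaround is to have the package drop and recreate the worksheet before the export runs, so each run starts with an empty sheet. A sketch of the statements for an Execute SQL task pointed at the Excel connection (sheet and column names are made up; match them to the real worksheet):

    -- Runs against the Excel connection, not SQL Server
    DROP TABLE [Sheet1$]

    CREATE TABLE [Sheet1$] (
        [Col1] VarChar(255),
        [Col2] VarChar(255),
        [Col3] VarChar(255)
    )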
I am using SSIS to replace a set of tables daily. One of the tables has primary, unique, and foreign keys and a full-text index. Before truncating, I drop the foreign key constraints (to truncate the parent table), truncate the tables, and recreate the foreign keys.
I have a few questions:
1) Do I need to drop and recreate the unique key as well? (I am not dropping the primary key.) The unique key is an identity column created just for the full-text indexing, since it was mentioned that a key on an integer is better than a key on a varchar, and my PK is a varchar.
2) Do I need to drop and recreate the full-text index, or just rebuild/repopulate it every time the table is loaded?
This is the first time I am using a full-text index, and I was able to learn a lot about it from various sites. I would like to understand the correct approach while loading the tables.
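For reference, a sketch of the load pattern described, with placeholder names; the full-text step assumes the SQL Server 2005 ALTER FULLTEXT INDEX syntax, so the full-text index itself does not need to be dropped:

    -- 1. Drop the foreign key so the parent can be truncated
    ALTER TABLE dbo.ChildTable DROP CONSTRAINT FK_Child_Parent;

    -- 2. Truncate and reload
    TRUNCATE TABLE dbo.ParentTable;
    -- ... bulk load here ...

    -- 3. Recreate the foreign key
    ALTER TABLE dbo.ChildTable
        ADD CONSTRAINT FK_Child_Parent FOREIGN KEY (ParentId)
        REFERENCES dbo.ParentTable (ParentId);

    -- 4. Repopulate the full-text index rather than dropping it
    ALTER FULLTEXT INDEX ON dbo.ParentTable START FULL POPULATION;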
Is there any way to invalidate cached query plans? I would rather target a specific query instead of invalidating all of them. Also, is there any SQL Server setting that will cause cached query plans to be invalidated even though only one character in the query has changed?
exec sp_executesql N'select cast(5 as int) as DisplaySequence, mt.Description + '' '' + ct.Description as Source, c.FirstName + '' '' + c.LastName as Name, cus.CustomerNumber Code, c.companyname as "Company Name", a.Address1, a.Address2,
[code]....
In this query we have seen (on some databases) that simply changing '@CustomerId int',@CustomerId=1065 to '@customerId int',@customerId=1065 fixed a speed problem... we just changed the case on the CustomerId bind parameter. On other servers this has no effect. I'm thinking the server is using an old cached query plan, but I don't know for sure.
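For what it's worth, the plan cache is keyed on the exact query text, so changing even one character (such as the case of a parameter name) produces a new cache entry and a freshly compiled plan, which would explain the behavior. A sketch of ways to force this deliberately (object names are placeholders):

    -- Mark one object's plans for recompilation (works on a table or procedure)
    EXEC sp_recompile 'dbo.MyProcedure';

    -- Heavy-handed: clear the entire plan cache
    DBCC FREEPROCCACHE;

    -- SQL Server 2008+ can evict a single plan by its handle:
    -- DBCC FREEPROCCACHE (<plan_handle from sys.dm_exec_query_stats>);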
Hi,
I need to shrink a database file and was wondering whether it is required to run a full backup after the shrink operation. In SQL Server 7.0, shrinkfile was a non-logged operation so it would invalidate your transaction logs. Is the same true for 2000? Obviously, as a matter of course, I would backup before and after the operation, but going forward I may want to implement this on a regular basis.
Cheers
Dee
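For reference, a sketch of the operation in question (database/file names and the target size are placeholders):

    -- Shrink the data file to roughly 1024 MB
    USE MyDatabase;
    DBCC SHRINKFILE (N'MyDatabase_Data', 1024);

    -- Follow-up full backup, as described
    BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase.bak';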
Hello group.
I have an issue which has bothered me for a while now: I'm wondering why the column statistics, which SQL Server wants me to create if I turn off auto-created statistics, are so important to the optimizer.

Example: from Northwind (with auto create stats off), I do the following:

SELECT * FROM Customers WHERE Country = 'Sweden'

My query plan shows a clustered index scan, which is expected - no index exists for Country. BUT the query plan also shows that the optimizer is missing a statistic on Country, which tells me that the optimizer would benefit from knowing this. I cannot see why (and I've been trying for a while now).

If I create the missing statistics, nothing happens in the query plan (and why should it?). I could understand it if the optimizer suggested an index on Country - this would make sense - but when creating the missing index, Query Analyzer creates the statistics with an empty index, which seems to me to be less than usable.

I've been thinking long and hard about this, but haven't been able to reach a conclusion :) It has some relevance to my work, because allowing the optimizer to create missing statistics limits my options for designing indexes (e.g. covering) for some rather wide tables, so I'm thinking why not turn it off altogether. But I would like to know the consequences - hopefully somebody has already delved into this and knows a good explanation.
Rgds
Jesper
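For reference, a sketch of the manual equivalent of that missing statistic (the statistic name is arbitrary). The optimizer wants it because the column's histogram and density let it estimate how many rows match 'Sweden' instead of guessing, even when the access path is still a scan:

    -- What auto-create statistics would have done behind the scenes
    CREATE STATISTICS st_Customers_Country ON dbo.Customers (Country);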
What is the unit of the numbers you get in the Time Statistics part when running a query in Microsoft SQL Server Management Studio with Client Statistics turned on?
Currently I get mostly 0s, but if I deliberately mess a query up I can get it up to around 30... Is it milliseconds, or some made-up number based on clock cycles, or...?
I would also like to know if it's possible to change the precision.
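A related option, if explicit millisecond timings are wanted rather than the Client Statistics panel, is a sketch like:

    -- Reports parse/compile and execution times in milliseconds
    -- in the Messages tab
    SET STATISTICS TIME ON;
    SELECT COUNT(*) FROM dbo.MyTable;
    SET STATISTICS TIME OFF;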
With a database size of almost 2 GB, I run the 'truncate table eventlog' command, which completes successfully, but the database size only decreases by about 10 MB and so stays too large. Indeed, the number of rows in the eventlog table is now minimal, but the other tables in this database don't show row counts large enough to cause the size issue either. What could be the reason, and how can I reduce the size (possibly by truncating another table, but then which one? How could I determine which is too large and needs truncating?).
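A quick way to see where the space actually is, as a sketch (sp_MSforeachtable is undocumented but widely used):

    -- Space used by the database as a whole
    EXEC sp_spaceused;

    -- Space used per table; scan the output for the largest ones
    EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?''';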
Please forgive the elementary nature of my question, but could someone please explain the differences between these two database backup types:
1. Log backup
2. Log backup with no truncate
From what I understand and have read, the "no truncate" backup method keeps the entire transaction log indefinitely. Using the truncation method, the transaction log is either 1) compressed or 2) cleaned up so that any completed transactions are removed from the log. Which one of these is true?
And, for the big question: is it better to run a backup of the transaction log with truncation or not? Our current backup scheme is similar to the following:
Full backup every 24 hours
Transaction log backup every hour with no truncation
Should we insert a truncation backup somewhere in here? What is the danger of removing (or compressing) parts of the transaction log? Will this affect the restore process?
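For reference, the two variants as a sketch (paths and names are placeholders). An ordinary log backup backs up the log and then truncates (clears) the inactive portion so the log file can be reused; WITH NO_TRUNCATE backs it up without clearing anything and is mainly intended for capturing the tail of the log from a damaged database:

    -- Ordinary log backup: backs up, then truncates the inactive log
    BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';

    -- Log backup without truncation (typically for disaster recovery)
    BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log_nt.trn'
        WITH NO_TRUNCATE;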
Here I will describe my problem.
1. We are loading a large amount of data from the database on a background thread which starts in the Application_Start event in the Global.asax.cs file. The data is later cached for subsequent requests to improve performance.
2. Now when we put the application on a web farm/garden, it is not able to load the application.
3. We are sending the requests to the servers through a router-type application.
4. This application works fine in a single-server environment.
I have just done the SSIS example in the tutorial document included when installing SQL 2005 Enterprise. I have a problem: whenever I test-run it, the service loads all data from the source without examining it (I mean it loads all the data to the destination). I have done it several times and it continues to load everything without checking. That means the data is duplicated every time the schedule runs???
I think there should be a parameter or something like that to help the engine load only the new data to the destination. Could you help please?
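There is no single built-in parameter for this; the usual approach is to make the source query itself incremental. A sketch to use as the SQL command of the data flow source, where the table, the ModifiedDate column, and the LoadLog bookkeeping table are all assumptions:

    -- Pull only rows changed since the last successful load
    SELECT s.*
    FROM dbo.SourceTable AS s
    WHERE s.ModifiedDate > (SELECT MAX(LoadDate) FROM dbo.LoadLog);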
I have a big problem and I'm not able to find any hint on the network.
I have a Windows 2000 PC, VS2005, IIS 5 and SQL Server 2005 (Developer Edition).
I created an SSIS package (a query to the DB whose result is loaded into an Excel file) that works fine.
I imported the dtsx file into my "Stored Packages".
I would like to load and run the package programmatically in a remote scenario using web services.
I created a solution with a web service and a web page that invokes the web service.
When my code execute: Microsoft.SqlServer.Dts.Runtime.Application.LoadFromDtsServer(packagePath, ".", Nothing)
I got the Error: Microsoft.SqlServer.Dts.Runtime.DtsRuntimeException: The package failed to load due to error 0xC0011008 "Error loading from XML. No further detailed error information can be specified for this problem because no Events object was passed where detailed error information can be stored.". This occurs when CPackage::LoadFromXML fails.
The error message doesn't help much and there is nothing on the web to give me any advice...
As part of my automagical nightly index maintenance application, I am seeing a fairly regular (3-4 failures out of 5 attempts per week) failure on one particular table in my database. The particular line which seems to be failing is this one:
DBCC SHOWCONTIG (WON_Staging_EPSEst) WITH FAST, TABLERESULTS, ALL_INDEXES
The log reports the following transgression(s):

Msg 2767, Sev 16: Could not locate statistics 'WON_Staging_EpsEst' in the system catalogs. [SQLSTATE 42000]
Msg 0, Sev 16: [SQLSTATE 01000]
Msg 0, Sev 16: -------------------- Simple ReIndex for [WON_Staging_EpsEst].[IX_WON_Staging_EpsEst] [SQLSTATE 01000]
Msg 2528, Sev 16: DBCC execution completed. If DBCC printed error messages, contact your system administrator. [SQLSTATE 01000]
Msg 0, Sev 16: [SQLSTATE 01000]
Msg 0, Sev 16: -------------------- Post-Maintenance Statistics Report for WON_Staging_EpsEst [SQLSTATE 01000]
Msg 0, Sev 16: Statistics for WON_Staging_EpsEst, WON_Staging_EpsEst [SQLSTATE 01000]
Msg 2528, Sev 16: DBCC execution completed. If DBCC printed error messages, contact your system administrator. [SQLSTATE 01000]
Msg 0, Sev 16: Statistics for WON_Staging_EpsEst, IX_WON_Staging_EpsEst [SQLSTATE 01000]
Msg 2768, Sev 16: Statistics for INDEX 'IX_WON_Staging_EpsEst'. [SQLSTATE 01000]

Updated            Rows    Rows Sampled  Steps  Density       Average key length
-----------------  ------  ------------  -----  ------------  ------------------
Aug 3 2007 3:22AM  674609  674609        196    2.0958368E-4  8.0
(1 row(s) affected)
This table is dropped and recreated each day during a data import job. After the table is recreated and repopulated with data (using a bulk import from a flat file), the index is also recreated using the following code:

CREATE INDEX [IX_WON_Staging_EpsEst] ON [dbo].[WON_Staging_EpsEst](OSID, [Year], Period) ON [PRIMARY]

Yet more often than not, that evening, when the index maintenance job runs, it fails with the aforepasted messages complaining of being unable to find table/index statistics.
Worth noting, perhaps, is that this same process is used on roughly 10 data staging tables in this database each day, and none of the other tables fail during the index maintenance job.
Also worth noting, perhaps, is that this IDENTICAL table/code is processed in exactly the same way on TWO other servers, and the failure has not occurred in any of the jobs on those other two servers (these other two servers are identical mirrors of the one failing, and contain all the same data, indexes, and everything else).
Any thoughts, suggestions for where to look, or unrestrained abusive comments regarding my ancestry?
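One defensive option, as a sketch: have the maintenance job skip the table when the expected statistics row is missing from the catalog, e.g.:

    -- Only run the check if the index actually has a row in sysindexes
    IF EXISTS (SELECT 1 FROM sysindexes
               WHERE id = OBJECT_ID('dbo.WON_Staging_EpsEst')
                 AND name = 'IX_WON_Staging_EpsEst')
    BEGIN
        DBCC SHOWCONTIG (WON_Staging_EpsEst) WITH FAST, TABLERESULTS, ALL_INDEXES;
    END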
I have a small doubt. If we run a statistics command on a particular table, what will it update? Are statistics normally created automatically by the server, or do we have to create them?
Anybody know how many companies worldwide use SQL Server and how many individual servers this amounts to? Also, at what rate is SQL use growing? Can someone at least point me to a source where I could find close to exact numbers?
Here is my data sample on which I need to perform some stats:

Time(Sec)  Result
1          2
2          8
3          6
4          2
5          2
6          4
7          2
8          7
9          8
What I need from this is a result set that looks as follows:

GroupNo  Value
1        5.33
2        2.67
3        5.67
This is a grouping of the result data in threes by time, and Value is the average of the group. I need to write a SELECT statement to do this. Note the group size could be anywhere from 1 to 10.
The end result of this is to display a range chart which shows results grouped according to requirements. Any help would be nice. Pargat
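A sketch of one way to do it, assuming the sample lives in a table dbo.Results with columns TimeSec and Result (both names made up). Integer division buckets TimeSec 1-3, 4-6, 7-9 into groups 1, 2, 3, and the group size is a variable so it can be anything from 1 to 10:

    DECLARE @GroupSize int;
    SET @GroupSize = 3;

    SELECT g.GroupNo,
           AVG(CAST(g.Result AS decimal(9, 2))) AS Value
    FROM (SELECT (TimeSec - 1) / @GroupSize + 1 AS GroupNo, Result
          FROM dbo.Results) AS g
    GROUP BY g.GroupNo
    ORDER BY g.GroupNo;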
I would like to know how I can drop statistics from tables. My user table has two indexes and some statistics created on them. I would like to drop the statistics, and I'd appreciate it if someone could please advise.
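For reference, the basic syntax as a sketch (names are placeholders); DROP STATISTICS removes statistics objects only, not the indexes themselves:

    -- Drops a single statistics object; the table's indexes are unaffected
    DROP STATISTICS dbo.MyTable.MyStatName;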
What are some ways to analyze index coverage and usage? I have an 18 GB database; half is data, the other half is indexes, and I want to cut that number down as much as I can without affecting performance. Thanks
Does anyone have any generic scripts that drop all the statistics that SQL Server auto-generates? I have hundreds of '_WA_...' statistics that were auto-created by SS7 and I just want to get rid of ALL of them. Thanks!
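A sketch of a generator for that cleanup, against the SQL 7/2000-era system tables; review the generated statements before running them:

    -- Emit one DROP STATISTICS statement per auto-created ('_WA_...') stat
    SELECT 'DROP STATISTICS [' + USER_NAME(o.uid) + '].['
           + o.name + '].[' + i.name + ']'
    FROM sysindexes AS i
    JOIN sysobjects AS o ON o.id = i.id
    WHERE i.name LIKE '[_]WA[_]%'
      AND INDEXPROPERTY(i.id, i.name, 'IsAutoStatistics') = 1;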
I have been monitoring some indexes on a table with a lot of inserts, no updates and no deletes. I want to determine when to update the statistics on the index. Does anyone know what would be a good target range for the density when you run DBCC SHOW_STATISTICS?
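For reference, the command in question as a sketch (names are placeholders). Lower density means higher selectivity, so a density drifting upward as duplicate key values pile up is one signal worth watching:

    -- Shows the date of the last update, rows sampled, density, and histogram
    DBCC SHOW_STATISTICS ('dbo.MyTable', 'IX_MyIndex');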
When the "create statistics" command is run, what table entries are made into system tables?
I want to check for the existence of statistics on certain columns and, if they are not there, create them. What is a good way to see if they have already been created?
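One way on SQL Server 2005 and later, as a sketch (table, column, and statistic names are placeholders); on older versions the check would go against sysindexes instead:

    -- Create single-column statistics only if none exist for that column
    IF NOT EXISTS (
        SELECT 1
        FROM sys.stats AS s
        JOIN sys.stats_columns AS sc
          ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
        JOIN sys.columns AS c
          ON c.object_id = sc.object_id AND c.column_id = sc.column_id
        WHERE s.object_id = OBJECT_ID('dbo.MyTable')
          AND c.name = 'MyColumn'
    )
        CREATE STATISTICS st_MyTable_MyColumn ON dbo.MyTable (MyColumn);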
I was wondering how often you should reindex. By looking at DBCC SHOWCONTIG and statistics, I see that I am heavily fragmented and scan density is between 10-30% on my important indexes. I'm thinking of scheduling this to be done nightly. Any help is much appreciated.
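For reference, a sketch of the SQL 2000-era nightly rebuild itself (the table name and fill factor are placeholders):

    -- Rebuild all indexes on the table with a 90% fill factor;
    -- this also refreshes the index statistics as a side effect
    DBCC DBREINDEX ('dbo.MyTable', '', 90);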
I am contemplating creating a job to execute every 5 minutes which will update index statistics if they are more than, say, 8% out. I would like to know what thoughts people have on this, i.e. pros and cons.
I look forward to what you have to say.
I have auto stats on. Our stats are often more than 10% out. At what level do you reckon the query plan might be affected by out-of-date stats?
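One way to measure how far stats are out, as a sketch against the SQL 2000-era sysindexes view (rowmodctr counts modifications since the last statistics update and is only approximate):

    -- Indexes/stats where modifications exceed 8% of the row count
    SELECT OBJECT_NAME(i.id) AS table_name,
           i.name AS index_name,
           i.rowmodctr, i.rowcnt
    FROM sysindexes AS i
    WHERE i.indid BETWEEN 1 AND 254
      AND i.rowcnt > 0
      AND i.rowmodctr > 0.08 * i.rowcnt;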
It seems to me there are many ways to update statistics for a table, e.g. sp_updatestats, sp_recompile, and DBCC UPDATEUSAGE.
Can somebody tell me the difference between those commands, and what's the best way to update your statistics? Does reindexing update the statistics?
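As a sketch, side by side (the table name is a placeholder): sp_updatestats and UPDATE STATISTICS refresh the distribution statistics themselves; sp_recompile only marks plans for recompilation; DBCC UPDATEUSAGE corrects space-usage counters rather than distribution statistics. And rebuilding an index does rebuild that index's statistics, with the equivalent of a full scan:

    -- Refresh distribution statistics
    UPDATE STATISTICS dbo.MyTable;   -- one table
    EXEC sp_updatestats;             -- whole database

    -- Force plan recompilation (does not touch statistics)
    EXEC sp_recompile 'dbo.MyTable';

    -- Correct row/page counts in the catalog (not distribution statistics)
    DBCC UPDATEUSAGE (0, 'dbo.MyTable');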
Can I copy statistics in SQL Server from one environment to another without copying the actual data, for example from production to development? It is possible to copy statistics in other databases, like DB2/UDB and Oracle. The reason is to execute some poorly performing SQLs and analyze their execution plans.
I did not find anything on this subject in BOL. Since statistics are stored in the statblob column of sysindexes, I tried updating the statblob column of the index, and the rowcnt columns of the table & index, to mimic the copy. After my updates, the 'TO' table showed results identical to the statistics of my 'FROM' table.
But when I execute a small SELECT on the FROM table (which contains the original, required stats) and on the TO table (where the statistics are now copied), I get 2 different execution plans. This means I was not successful in my attempt to cheat the optimizer.
I am supporting a SQL Server 6.5 database that users query using pre-configured reports. The reports use views, stored procedures, and triggers set up by the programmers and are accessed through a client on the workstation.
I need to be able to count which users log in (SQL Server security), how often, and either which reports they use or which tables they select from.
I do not have access to the Windows NT server, so the solution has to work with SQL Server SA rights.
I have a table with half a million records that my application uses continuously, with several UPDATE and SELECT statements (about 5 requests/sec). After several (4-5) hours I see a degradation in performance, but if I update the statistics on this table everything returns to normal.
So for now I have created a job to maintain this table, updating statistics twice a day...
Is this normal? Shouldn't SQL Server update statistics by itself? Did I choose the wrong way, or... what can I do?
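A sketch of what might be worth checking before settling on the manual job (names are placeholders). On SQL 2000 the auto-update threshold is roughly 20% of the table's rows modified, and at about 5 writes/sec a 500K-row table takes several hours to cross it, which fits the 4-5 hour degradation pattern described:

    -- Is auto update statistics on for the database? (1 = yes)
    SELECT DATABASEPROPERTYEX('MyDatabase', 'IsAutoUpdateStatistics');

    -- Show (or change) the auto-update setting per index/statistic on the table
    EXEC sp_autostats 'dbo.MyTable';

    -- The scheduled job itself can simply be:
    UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;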