If I run the following query, it does exactly what I expect and scans
the clustered index:
SELECT customerStatusID, [Name] FROM CustomerStatus_T
WHERE dateArchived IS NULL
AND Display = 1
AND StatusType = 'Q'
and gives the following QEP and IO statistics:
|--Clustered Index Scan
(OBJECT:([Reach_Roms].[dbo].[CustomerStatus_T].[PK_ROMS_CustomerStatus]),
WHERE:(([CustomerStatus_T].[DateArchived]=NULL AND
[CustomerStatus_T].[StatusType]='Q') AND
Convert([CustomerStatus_T].[Display])=1))
For some bizarre reason, the optimizer thinks that a NC index lookup
on a single-page table, which ultimately costs 7 IOs, is cheaper than
a table (or Clustered Index) scan of a single page. Why? The
showplan cost also shows that it expects the NC index to be cheaper
(which is presumably why it goes and uses it), but even after running
UPDATE STATISTICS on the table it still chooses the same idiotic query
plan.
Any thoughts, or has anyone seen similar behaviour before, and can
anyone please explain it to me?
p.s. I don't actually WANT to put a NC index on this table, but I
noticed the behaviour by accident which is why I'm asking the question
:-)
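For what it's worth, a table hint along the lines of the sketch below should force the plan I expected (INDEX(0) is documented in BOL to force a clustered index scan or table scan):

    SELECT customerStatusID, [Name]
    FROM CustomerStatus_T WITH (INDEX(0))
    WHERE dateArchived IS NULL
      AND Display = 1
      AND StatusType = 'Q'

But obviously I'd rather understand why the hint is needed at all.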
In SQL 6.5 we used binary sort order for best performance (I think). I hear it should be highlighted that the case for binary sort order being the fastest method of sorting and searching a database no longer applies as of SQL 7.0. Is that right, and why?
Clustering sounds expensive and arcane. Is it ever a better choice than mirroring for high availability? Perhaps when the size of the database copy is too prohibitive under the mirroring option?
There is a lot of data to retrieve and show in our inquiry forms, and each inquiry form needs to use many tables. I have thought of two methods. First, prepare the data into a temp table whenever a transaction arises, and have the program retrieve the data from the temp table. Second, create a view for retrieving the data. Which method is the better choice, and why? (Faster, better performance, or more flexible?) Please advise me.
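To make the two options concrete, here is a rough sketch of each; all table and column names are invented:

    -- Option 1: prepare the data into a temp table when the transaction arises
    SELECT o.OrderId, c.CustomerName, o.Total
    INTO   #InquiryData
    FROM   dbo.Orders o
    JOIN   dbo.Customers c ON c.CustomerId = o.CustomerId
    GO
    -- Option 2: define a view once and let each inquiry form query it directly
    CREATE VIEW dbo.vInquiryData
    AS
    SELECT o.OrderId, c.CustomerName, o.Total
    FROM   dbo.Orders o
    JOIN   dbo.Customers c ON c.CustomerId = o.CustomerId
    GO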
Is VB.NET the most logical choice of RAD tool to use with MS SQL Server? Is VB.NET strictly for web apps, or can you use it to create projects that run as executables off the server?
SELECT CO.CO_ID, MAX(PR.DOCNO) AS PR_DOCNO, MAX(CR.DOCNO) AS CR_DOCNO
FROM COMPANY CO
LEFT OUTER JOIN PROCUREMENT PR ON CO.CO_ID = PR.CO_ID
LEFT OUTER JOIN CONTRACT CR ON CO.CO_ID = CR.CO_ID
GROUP BY CO.CO_ID
The result is OK, but the problem is it takes close to infinite time if I add more tables to the outer join. I have more tables, each with a huge number of records.
Is there a better way to do this? It's true that performance is a big deal.
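One rewrite I've been experimenting with (just a sketch against the same tables, and I'm not sure it's the right approach) is to pre-aggregate each child table in a derived table, so each outer join brings back at most one row per company instead of multiplying rows before the GROUP BY:

    SELECT CO.CO_ID, PR.PR_DOCNO, CR.CR_DOCNO
    FROM COMPANY CO
    LEFT OUTER JOIN (SELECT CO_ID, MAX(DOCNO) AS PR_DOCNO
                     FROM PROCUREMENT GROUP BY CO_ID) PR ON CO.CO_ID = PR.CO_ID
    LEFT OUTER JOIN (SELECT CO_ID, MAX(DOCNO) AS CR_DOCNO
                     FROM CONTRACT GROUP BY CO_ID) CR ON CO.CO_ID = CR.CO_ID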
Hi - I have a rather unreliable host just now, but they offer .NET, SQL Server and SSL for a reasonable price. Problem is, the domain is hosted on a shared server, and it keeps going down, apparently because of code which is less than clean on some people's sites (i.e. not closing connections etc.).

I am considering moving to a dedicated server, but at this point in time I cannot afford a full SQL Server licence for it. However, the dedicated server does offer MSDE. Is it acceptable to go back to using this from SQL Server for a currently low-hit site, i.e. around 500 hits per day?

Does MSDE offer stored procedures (I don't use views or triggers)? Can I just take a DTS backup/export of my current SQL Server database and restore it to the MSDE one? What would be the cut-off point for using MSDE?

Thanks for any info. Also, if anyone knows why UK companies charge so much more for dedicated servers than US companies, I'd be interested.

Thanks, Mark
I created an application in VB, and the backend database for it is SQL Server 2005. So far I have been working with SQL Express as a testing edition. I have almost 20 tables and also a lot of stored procedures. From the programming point of view, the database is not that large; the size grows only when pictures are added to the database. It's not a client/server application (only a desktop application).
If I install Enterprise Edition, will it consume a lot of space and memory on the client machine? What about performance issues?
Right now I have two editions, SQL Express and SQL Enterprise Edition.
Which edition would you suggest on the basis of the information provided above?
I am using a stored procedure to take a backup of my database from Visual Basic.
When I posted a thread about this before, I was recommended to look into SQL-DMO to work with SQL Server from Visual Basic, but that will take me some time to understand completely.
Because this is urgent, I am taking the backup with the following stored procedure:
---------------------------------------------------------------------------------------
ALTER PROCEDURE dbo.BackUPBLMSDB
( @RP nvarchar(200) )
AS
declare @backupfilename nvarchar(200)
set @backupfilename = @RP
BACKUP DATABASE [BLMSDB] TO DISK = @backupfilename
WITH NOFORMAT, NOINIT, NAME = N'BLMSDB-Full Database Backup',
     SKIP, NOREWIND, NOUNLOAD, STATS = 10
---------------------------------------------------------------------------------------
Example: @RP = 'D:\Backup\Backup-1 09-02-2008 11-24'
Every time I pass a parameter made up of some name plus the system date and time.
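(For reference, the name my VB code builds is roughly equivalent to this T-SQL sketch; the path is just an example:)

    declare @fname nvarchar(260)
    set @fname = N'D:\Backup\BLMSDB-'
               + REPLACE(CONVERT(nvarchar(16), GETDATE(), 120), ':', '-')
               + N'.bak'   -- e.g. D:\Backup\BLMSDB-2008-02-09 11-24.bak
    BACKUP DATABASE [BLMSDB] TO DISK = @fname
    WITH NOINIT, NAME = N'BLMSDB-Full Database Backup', STATS = 10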
I would like to clear up a doubt: is it good practice to take backups under different names? With the above I store four backups. If the system crashes and I create a new database with the same schema but without any data in the tables, can I restore a previous backup into the newly created database?
Moreover, the first two backups are 2900 KB, and the third one is 5400 KB after data was modified. But look at the fourth one: it is 2900 KB again. Why is the size reduced to 2900 KB after taking the backup, even though I didn't delete or add data in the database?
I am doing some tests, and I have a doubt: why doesn't the optimizer use a Constant Scan on normal tables? For instance:
--drop table #tmp
create table #tmp (id Int Identity(1,1) Primary key, name VarChar(250))
go
insert into #tmp(name) values(NEWID())
insert into #tmp(name) values(NEWID())
go
set statistics profile on
go
-- Execution plan creates a Constant Scan
select * from #tmp where id = 1 and id = 5
go
set statistics profile off
GO
--drop table tmp
create table tmp (id Int Identity(1,1) Primary key, name VarChar(250))
go
insert into tmp(name) values(NEWID())
insert into tmp(name) values(NEWID())
go
set statistics profile on
-- Why doesn't the execution plan create a Constant Scan for this case?
select * from tmp where id = 1 and id = 5
go
set statistics profile off
How do I get a particular user to be a choice under the db_owner role for a particular database? The user is listed under logins and even shows to be the db_owner for the database under the database access tab of the login properties. This is SQL 2000. Thanks, David P.
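In case it helps frame the question, what I'm trying to do through the Enterprise Manager UI is, as I understand it, what these SQL 2000 system procedures do (user and database names made up):

    USE MyDatabase
    EXEC sp_grantdbaccess 'DOMAIN\davidp', 'davidp'  -- add the login as a user of this database
    EXEC sp_addrolemember 'db_owner', 'davidp'       -- make that user a member of db_owner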
We have a small accounting application which is currently based on a DBASE database. We need to change the DB and are considering SQL Express. However, if someone can clarify the following, it would be very helpful:
1) The application is used mostly by standalone, non-technical users. There are cases where more than one user will need to connect to the DB.
2) We need to ensure that users cannot modify the database outside of our application. This is needed to ensure the database does not get corrupted, or passwords lost so that no one can open the database.
3) Installation needs to be simple, without giving users any options except where to install the database, or pointing to an already installed DB in case it's a network environment where 2-3 users can be working on the same database.
4) The application is usually installed on normal desktop machines, so the DB should not load the PC heavily.
Please advise whether SQL Express is the right direction even with these constraints. What are the other alternatives? We are open to a small consulting project as well with someone who can guide us through these issues. Email to contact is rkabra101@yahoo.com
I have a SQL command which I run on two separate servers. Both servers are configured and built the same. On server 1 it takes mere seconds, but on server 2 it takes over 5 minutes.
I have checked the execution plan on both servers and they are completely different. I ran UPDATE STATISTICS WITH FULLSCAN on both servers, but the execution plans were still different.
My question is: why are the execution plans so different, and how do I get them to execute with the same plan?
I tried this:

    use northwind
    go
    SELECT OrderDate
    FROM Orders
    WHERE OrderDate > '19950101'

See the query plan? OK. Now:

    SELECT OrderDate, EmployeeId
    FROM Orders
    WHERE OrderDate > '19950101'

See the query plan? What happened? The only way to get an index seek instead of an index scan is to force the index usage ( with(index=orderdate) ), but I don't like this solution. Also try this:

    SELECT * FROM Orders WHERE employeeId > 9

and

    SELECT * FROM Orders WHERE employeeId > 8

Can someone explain why this happens, and how I can overcome the performance loss problem? (Well, not in the Orders table, but in my table there are 300K records, and doing a scan to retrieve 50 records is not exactly what I want.)
Thanks to all.
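The only workaround I've found so far, short of the index hint, is making the index cover the wider query; a sketch, and I'm not sure it's the right fix:

    CREATE INDEX IX_Orders_OrderDate_EmployeeId
    ON Orders (OrderDate, EmployeeId)

With both columns in the index, the seek can satisfy the SELECT list without bookmark lookups, which seems to be what tips the optimizer into scanning.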
We have several applications that work with product catalog data. Data is entered and maintained, searched, and reported on. We're using CSLA business objects to create our biz objects, and our front-end apps are ASP.NET pages and web services. SQL 2k5 is our database. Currently all data access is done in Factory methods in our business objects using SQL stored procedures and UDFs.
We want to start storing auditing and statistics data on our product searches. In SQL 2k we were using SQL Profiler to capture data and storing the information in tables, but it really wasn't very flexible and was difficult to maintain. What we want to do is store the criteria and the results every time someone submits a search. Every time someone edits a product we want to save the old record. This will allow us to provide historical reporting and statistical reporting to our users.
In our old system the search results table was at about 3 million records. And since we've moved to a web based application we're hoping to save this information asynchronously so our search results or postbacks are not held up by saving this audit data. We were talking about writing logic into our biz objects code but it all seemed a bit slow and difficult to do asynchronously. Then I read a couple posts suggesting Service Broker.
Now we're considering either writing triggers on our tables or adding code to our factory stored procedures to send messages to Service Broker that would save the data into our audit tables but not hold up our business processes. We would be saving to the same database on the same server, but different tables.
Does Service Broker seem like the right tool for this job? There looks to be a bit of a learning curve, and before I jump in I'm looking for some advice or direction.
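For context, here's roughly the plumbing I have in mind; this is just a sketch, and every name in it is made up:

    -- One-time setup: message type, contract, queue, and service
    CREATE MESSAGE TYPE [//Audit/SearchMessage] VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT [//Audit/SearchContract] ([//Audit/SearchMessage] SENT BY INITIATOR);
    CREATE QUEUE dbo.AuditQueue;
    CREATE SERVICE [//Audit/AuditService] ON QUEUE dbo.AuditQueue ([//Audit/SearchContract]);

    -- In the factory stored procedure, after the search has run:
    DECLARE @h uniqueidentifier, @criteriaXml xml;
    SET @criteriaXml = N'<search><term>widget</term></search>';  -- placeholder payload

    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [//Audit/AuditService]
        TO SERVICE '//Audit/AuditService'
        ON CONTRACT [//Audit/SearchContract]
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @h
        MESSAGE TYPE [//Audit/SearchMessage] (@criteriaXml);

An activation procedure on dbo.AuditQueue would then RECEIVE the messages and write the audit rows asynchronously.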
I'm a beginner to Report Services, and have tons of questions.
Here's the first one:
if the reports are created based on conditions that the user selects, how can I create the reports with Reporting Services?
For example,
the user can select the fields that will be shown on the report, as well as the grouping fields, the sort fields, and the restriction fields. So I would not be able to pre-create all possible reports and deploy them to the report server; I think I have to create the reports dynamically based on what the user selects.
Could someone tell me how to do it (create and deploy the reports)?
I would like to write my table to a delimited file, but I seem to have no choice but to use a comma as the delimiter. Is there any way I can choose the delimiter?
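(For example, if the bcp utility is an option, its -t switch sets the field terminator; server, table, and path below are made up:)

    bcp MyDb.dbo.MyTable out C:\export\MyTable.txt -c -t "|" -S MYSERVER -T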
I'm looking for an in depth book, article, faq, whatever, regarding the query optimizer...
I've read the books online pretty thoroughly and have been sql coding for a number of years. The system I work on relies heavily on real time access to data and the number crunching procedures we use are a critical part of the design. For the most part, sometimes through trial and error, I have been able to find ways to achieve the performance we need, but I'm often surprised by the methods that prove most effective.
For example, I have cases where I can only get the performance I'm looking for using table functions, and other cases where indexed temporary tables are the only way. I have statements that run fast as a select statement, but when converted to an update statement limp along, forcing me to resort to cursors, temp tables, or table hints with varying degrees of success.
I'm wondering if anyone has come across material that takes an in-depth look at the various technologies available and how to tweak queries. I want to get away from hours of testing and hacking.
Way back when, and at least in version 7 IIRC, the query optimizer gave up when the WHERE clause in a statement contained more than 4 search conditions. Does anyone know if such a limitation still exists in MS SQL 2005? The BOL seems to be silent on the issue.
Boa
Is there a DMV or similar in SQL 2012, or SQL 2008, that shows when a statistic was last used by the optimizer? I would like to cleanup some of the auto-generated stats, assuming it's possible to do so. In particular I'm looking to drop those statistics that were created by one-off queries, data loads, etc, and are now doing nothing but adding to the execution time of Update Statistics jobs.
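For what it's worth, listing the auto-created statistics is the easy part (sketch below, using sys.stats); it's the "when was it last used by the optimizer" half that I can't find a DMV for:

    SELECT OBJECT_NAME(s.object_id) AS table_name, s.name AS stat_name
    FROM sys.stats AS s
    WHERE s.auto_created = 1      -- the _WA_Sys_* stats created on the fly
    ORDER BY table_name, stat_name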
declare @ContactId as integer
set @ContactId = 5

select *
from Person.Contact
where ContactId = @ContactId
   or @ContactId = -1

If you run this in SQL 2005 on the AdventureWorks database, why is the number of logical reads 56:

Table 'Contact'. Scan count 1, logical reads 56

and not 2, as when you run it without the second OR condition:

declare @ContactId as integer
set @ContactId = 5

select *
from Person.Contact
where ContactId = @ContactId

How can I use the same SP and either get one record returned by passing the ID, or pass a dummy parameter like -1 in order to get ALL the records returned? In this case, even when I pass a parameter like ContactID = 5, there is still a table scan (a clustered index scan in this case) happening because of the other OR condition. There's no way to tell SQL to check the first condition and only evaluate the second OR condition if the first is false. On the same topic, does this mean all OR conditions are ALWAYS evaluated, regardless of whether one of them has already been determined to be true? Thank you
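The only pattern I've come up with so far is splitting the procedure into two branches so each statement gets its own plan; a sketch:

    IF @ContactId = -1
        SELECT * FROM Person.Contact
    ELSE
        SELECT * FROM Person.Contact WHERE ContactId = @ContactId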
Guys, what I need is a tool which gives details on the choice of an execution plan by SQL Server. For example, the cost for a hash join might be 200 and 100 for a nested loop, and therefore a nested loop is used. Same thing for the access paths for each table/view involved. In Oracle, we turn on event 10053 to see this kind of info.
Thanx, Daniel
I am trying to resolve performance issues in a third-party application. I have run the Profiler and found a transaction that performs a table scan against a 6 million row table. This transaction occurs repeatedly, so I thought: just add an index on the columns in the WHERE clause used here. After adding the index, I looked at the estimated execution plan in Query Analyzer, and I found that it still performs the table scan. If I run the query it takes over 60 seconds; if I add an index hint, it runs in under a second. I ran DBCC SHOW_STATISTICS to see if the statistics were up to date:
Statistics for INDEX 'IX_Finish_dept'.

Updated             Rows     Rows Sampled  Steps  Density       Average key length
------------------  -------  ------------  -----  ------------  ------------------
Jun 26 2007 5:18PM  6832336  6832336       150    2.1415579E-7  18.0
(1 row(s) affected)
All density   Average Length  Columns
------------  --------------  ------------
2.1875491E-7  8.0             finish
1.9796084E-7  18.0            finish, dept
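The hinted version that runs in under a second looks roughly like this (table name and literal values invented; finish and dept are the real columns):

    SELECT *
    FROM dbo.Production WITH (INDEX (IX_Finish_dept))
    WHERE finish = 'F100' AND dept = 'D42'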
Hello All, I have a series of stored procedures with a query joining 5 tables. These tables are quite large, with a couple of them having around 10 million rows. As this is a DSS application with periodic data loads, I thought of creating indexed views on top of these tables.

Now the problem is that the indexed view is not used directly by the optimizer. I need to change my queries and add a WITH (NOEXPAND) query hint to make sure the indexed views are used. This is in spite of getting a dramatic improvement in query timings (from 64 secs down to 3 secs) after using the indexed views.

I would like to know the possible reasons for the optimizer not using the indexed view by itself. Is it because my indexed view caters to multiple queries, or am I missing something basic?
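For context, the indexed views follow the usual pattern, roughly like this sketch (view, table, and column names invented):

    CREATE VIEW dbo.vLoadSummary
    WITH SCHEMABINDING
    AS
    SELECT f.ProductId,
           COUNT_BIG(*)  AS RowCnt,      -- COUNT_BIG(*) is required in an indexed view with GROUP BY
           SUM(f.Amount) AS TotalAmount  -- assumes Amount is declared NOT NULL
    FROM dbo.FactSales f
    GROUP BY f.ProductId
    GO
    CREATE UNIQUE CLUSTERED INDEX IX_vLoadSummary ON dbo.vLoadSummary (ProductId)
    GO
    -- and the queries have to say:
    SELECT ProductId, TotalAmount
    FROM dbo.vLoadSummary WITH (NOEXPAND)
    WHERE ProductId = 42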
An interesting discussion yesterday. One of the programmers asked about the use of the NOLOCK optimizer hint with an iterator table aka table of numbers. His comment was that this optimizer hint was not efficient. Rather than give a knee-jerk response I thought it would be better to ask. The main circumstance is that the iterator table is completely static with a fill factor of 100%. My purpose is to eliminate lock contention if I can.
Are there reasons to not use the NOLOCK hint in this case to potentially improve performance?
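Concretely, the usage in question is nothing more exotic than this (our numbers table, sketched with made-up names):

    SELECT n.Number
    FROM dbo.Numbers AS n WITH (NOLOCK)
    WHERE n.Number BETWEEN 1 AND 100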
I have a report that includes two multi-valued parameters. In the Default Values section, I choose 'from query' and select the dataset and value field. In the Available Values section, I choose 'from query', select the same dataset and value field, and in the label field I select the relevant label field. When I run the report, my multi-valued parameters look as if I chose 'Select All' (all options are selected). How can I keep the multi-valued parameters clear of selections until the user makes his choice? Thanks in advance.