SQL Server 2008 :: Newly Created Index And Execution Plan
Jun 17, 2015
I run a query
select col1, col2, col3, col4
from Table
where col2=5
order by col1
I have a primary key on the column. The execution plan shows the clustered index scan costing 30% and the sort costing 70%. When I run the query I get a missing-index hint on col2 with 95% impact, so I created a nonclustered index on col2. The total execution time decreased by around 80 ms, but I don't see the index name being used anywhere in the execution plan. After creating the index I still see the same execution plan.
The plan still shows the clustered index scan at 30% and the sort at 70%, yet the total time and the logical reads on that table are both decreasing. I am sure the index is useful, but why is there no change in the execution plan?
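For reference, if the goal is to make the sort disappear as well, the index would need to order rows by col1 within each col2 value and cover the remaining columns. A sketch (the index name is invented, and the table is assumed to be dbo.[Table]):

CREATE NONCLUSTERED INDEX IX_Table_col2_col1
ON dbo.[Table] (col2, col1)
INCLUDE (col3, col4);

With col2 as the leading key and col1 second, rows matching col2 = 5 come back already ordered by col1, so the plan can seek instead of scanning and sorting.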
I have query with an expensive Key Lookup on a joined table. The predicate is the column that I'm joining on, and the output list contains two columns from the joined table.
I've created a basic nonclustered index on the predicate column, INCLUDE-ing the two output columns. However, the execution plan ignores it and insists on using the primary key of the joined table to do the expensive key lookup. I've tried adding the included columns to the index key directly, and there's no change. I've also tried running DBCC FREEPROCCACHE, and still no change.
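For reference, the shape of the covering index being described is something like this (the table and column names here are hypothetical stand-ins):

CREATE NONCLUSTERED INDEX IX_JoinedTable_JoinCol
ON dbo.JoinedTable (JoinCol)
INCLUDE (OutputCol1, OutputCol2);

An index of this shape should satisfy the seek on the join predicate and supply both output columns without touching the base table.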
Is it possible to check the query execution plan of a stored procedure from its create script (before creating it)?
Basically the developers want to know how a newly developed procedure will perform in production environment. Now, I don't want to create it in production for just checking the execution plan. However they've provided SQL script for the procedure. Now wondering is there any way to look at the execution plan for this procedure from the script provided?
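One option (a sketch) is to request the estimated plan for the procedure body without creating or executing anything. SET SHOWPLAN_XML must be the only statement in its batch:

SET SHOWPLAN_XML ON;
GO
-- paste the body of the procedure here,
-- declaring its parameters as local variables
GO
SET SHOWPLAN_XML OFF;
GO

SQL Server returns the estimated plan as XML instead of running the statements, and SSMS renders it as a clickable graphical plan. Note the estimate against a development copy can differ from production if the data and statistics differ.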
Is there a way to leave the graphical 'Include Actual Execution Plan' on by default in SSMS? I don't know how many times I run a long-running query, say to myself, "wow, that took a while; I wonder what the execution plan looks like?", only to realize that I left it turned off. Then I have to turn it on and wait for the query to run again. I'm guessing there's a setting in the options somewhere to always leave it on, but I'm not sure where.
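A session-level workaround worth knowing: SET STATISTICS XML ON makes SQL Server return the actual execution plan as XML after each statement, and SSMS renders it on the Execution Plan tab, so you get the plan even if the toolbar button was left off:

SET STATISTICS XML ON;
GO
-- run the long-running query here; the actual plan comes back with the results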
We've been running a mirrored database (using certificates since we don't have a domain) and it's all working well. Last week we decided to add a witness for automatic failovers, but for some reason I just can not get the witness to connect to the Partner2 server.
Please help me troubleshoot this - I re-created the endpoints / users / certificates but it's still not working. Where can I get more information on what exactly the problem is? Can I test the endpoints somehow?
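The endpoints can at least be inspected from T-SQL. A sketch of the usual checks (the endpoint and login names are whatever you created on each server):

-- Is the mirroring endpoint started, and in what role?
SELECT name, role_desc, state_desc
FROM sys.database_mirroring_endpoints;

-- Which logins have permissions on endpoints (CONNECT is the one that matters)?
SELECT pr.name AS grantee, pe.permission_name, pe.state_desc
FROM sys.server_permissions AS pe
JOIN sys.server_principals AS pr
    ON pe.grantee_principal_id = pr.principal_id
WHERE pe.class_desc = 'ENDPOINT';

Run these on all three servers; the witness's certificate-mapped login needs CONNECT on the partners' endpoints and vice versa, and the endpoint state should be STARTED.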
I'm in the process of trying to optimize a stored procedure with many queries. The execution plan suggests a missing nonclustered index for nearly every query, and they're all fairly similar. The only real difference between them is what's in the INCLUDE list; the two key columns are listed in every missing index. Let's say each query is approximately 5% of the total batch, and 90% of the queries fall into the category I described. How should I go about creating the missing indexes? Create all of them, create one generic index that has all the INCLUDE columns, or create a minimal index with just a few of the common INCLUDE columns?
Here's an example of what I'm talking about with the missing indexes:
USE [DB]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[TABLE_1] ([COLUMN_A],[COLUMN_B])
INCLUDE ([C4ABCD],[C4ARTX],[C4ASTX],[C4ADNB],[C4AFNB],[C4BKVA])
GO

/* The Query Processor estimates that implementing the following index could improve the query cost by 99.9044%. */

USE [DB]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[TABLE_1] ([COLUMN_A],[COLUMN_B])
INCLUDE ([C4ARTX],[C4ASTX],[C4ADNB],[C4CZST])
GO

/* The Query Processor estimates that implementing the following index could improve the query cost by 99.5418%. */

USE [DB]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[TABLE_1] ([COLUMN_A],[COLUMN_B])
INCLUDE ([C4ABCD],[C4ARTX],[C4ASTX],[C4ADNB],[C4AFNB],[C4BKVA])
GO
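For illustration, the "one generic index" option being asked about would keep the shared key columns and take the union of all the INCLUDE columns from the suggestions above (the index name is made up):

CREATE NONCLUSTERED INDEX IX_TABLE_1_COLUMN_A_COLUMN_B
ON [dbo].[TABLE_1] ([COLUMN_A],[COLUMN_B])
INCLUDE ([C4ABCD],[C4ARTX],[C4ASTX],[C4ADNB],[C4AFNB],[C4BKVA],[C4CZST]);

A single consolidated index like this covers every query the individual suggestions cover, at the cost of a wider leaf level than any one suggestion.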
If you display the execution plan and run the following:

SET STATISTICS IO ON
GO
SELECT ProductID, SupplierID
FROM Products
WHERE SupplierID = 1

I don't understand how come there is no Bookmark Lookup operation happening to get the ProductID? I only see an Index Seek happening on SupplierID. There is no composite index SupplierID + ProductID, so what am I not understanding here? Thank you
I am new to SQL programming and am learning the basics. I am trying to create a simple query like this:

SELECT Column_1, Column_2, Column_3,
       10*Column_1 AS Column_4,
       10*Column_2 AS Column_5,
       -- I cannot understand how to do this particular step
       Column_1*Column_5 AS Column_6
FROM Table_1

The first three columns are available in the original Table_1. Column_4 and Column_5 have been created by me, by doing some calculations on the original columns.
Now, when I try to do FURTHER CALCULATION on these newly created columns, SQL Server does not allow it.
I was hoping to be able to use the newly created Columns 4 and 5 within this same query to do further calculations, but that does not seem to be the case. Am I doing something wrong here?
If I have to create a new column named Column_6, which is a multiplication of the original Column_1 and the newly created Column_5 (I tried Column_1*Column_5 AS Column_6), what is the possible solution for me?
I have tried to present my problem in the simplest possible manner. The actual query has many original columns from Table_1 and many calculated columns created by me, and I now have to do various calculations that involve both kinds of columns.
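Column aliases defined in a SELECT list cannot be referenced later in the same list. One way around this (on SQL Server 2008 and later) is to introduce the computed values one step earlier with CROSS APPLY, a sketch using the names above:

SELECT t.Column_1, t.Column_2, t.Column_3,
       c.Column_4, c.Column_5,
       t.Column_1 * c.Column_5 AS Column_6   -- aliases from the APPLY are now visible
FROM Table_1 AS t
CROSS APPLY (VALUES (10 * t.Column_1, 10 * t.Column_2)) AS c (Column_4, Column_5);

A derived table or CTE that defines Column_4 and Column_5 first would work the same way; CROSS APPLY just keeps it in one statement.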
I am getting ready to start a project where I am charged with moving old data out of production into a newly created historical DB. We have about 8 internal audit tables that are big and full of old data. These tables are barely used and are taking up far too much space and maintenance time.
I would like to create a way (SSIS?) to look at the date field in each of the 8 tables, copy anything older than two years into my newly created history DB, and then delete the older records from the source DB.
I don't know if SSIS is the best method to use. If it is, which containers should I use to move the data over, and how do I do the delete from the source?
Can I do the mass deletes on my audit source tables without impacting performance/indexes/fragmentation?
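Whichever tool does the copy, the delete side is usually batched so the transaction log and indexes aren't hammered in one shot. A sketch, with hypothetical database, table, and column names:

-- copy rows older than two years into the history database
INSERT INTO HistoryDB.dbo.AuditTable1
SELECT *
FROM ProdDB.dbo.AuditTable1
WHERE AuditDate < DATEADD(YEAR, -2, GETDATE());

-- then delete from the source in small batches to limit log growth and blocking
WHILE 1 = 1
BEGIN
    DELETE TOP (5000)
    FROM ProdDB.dbo.AuditTable1
    WHERE AuditDate < DATEADD(YEAR, -2, GETDATE());

    IF @@ROWCOUNT = 0 BREAK;
END

This could run as an Execute SQL Task per table inside an SSIS package, or as a plain Agent job; index fragmentation from the deletes can be addressed with a rebuild afterwards.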
I have three tables called plandescription, plandetail and analysisdetail. The table plandescription has the column DetailQuestionID, which is the primary-key identity column, and a QuestionDescription column.
The table plandetail consists of the column PlanDetailID, which is the primary-key identity column, DetailQuestionID, which is a foreign key to the plandescription table, and a planID column.
The third table, analysisdetail, consists of an analysisID, which is the primary-key identity column, PlanDetailID, which is a foreign key to the plandetail table, and a scenario column.
Below is the schema of the three tables
I have two web forms that insert, update and delete data in these three tables, in two transactions. One web form performs CRUD operations on the plandescription and plandetail tables. When the user enters a QuestionDescription and a planid in this web form, I insert the QuestionDescription value into the plandescription table, which generates a DetailQuestionID value, and this value is fed to the plandetail table with the planid. Here I generate a PlanDetailID.
Once this transaction is done, I will show the second web form in which the user enters the scenario and this will be mapped with the plandescription using the PlanDetailID.
This schema cannot be changed, as it is the client's requirement. When I insert values I don't have any problem. However, when I update existing data, I need to delete the existing PlanDetailID rows in the plandetail table and recreate the PlanDetailID data for that DetailQuestionID and planID. This is because the user may add or delete a planID associated with the QuestionDescription.
Once I recreate the PlanDetailID for that DetailQuestionID and planID, I need to update the old PlanDetailID to the new PlanDetailID in the third table, analysisdetail, for the associated analysisID.
I created a temp table called #DetailTable to hold the analysisID, planid, old PlanDetailID and new PlanDetailID, so that I can use them in the update statement once I delete the data from the plandetail table for that PlanDetailID.
Then I deleted the PlanDetailIDs from the plandetail table and recreated them for that DetailQuestionID. During the recreation I fetched the newly created PlanDetailIDs into another temp table called #InsertedRows.
After this, I run a WHILE loop to update #DetailTable with the newly created PlanDetailIDs for the appropriate planIDs. The problem is here. When the set of planIDs is unchanged (for example, two planIDs, 1 and 2), I get matching old and new PlanDetailIDs for each planID and analysisID. But when I add a new planID or remove an existing one, I get a NULL value for the newly added or deleted planID. This breaks my update of the analysisdetail table, as PlanDetailID cannot be NULL.
I tried to remove the NULL values from #DetailTable by running the analysisdetail update statement in a WHILE loop, but it's not working.
DECLARE @categoryid INT = 8
DECLARE @DetailQuestionID INT = 1380

/* I need the query to run for the below three values. Here I'm updating planids that already exist in my database */
DECLARE @planids VARCHAR(MAX) = '2,4,5'
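On SQL Server 2008, one way to avoid the WHILE loop entirely is MERGE with an OUTPUT clause: unlike INSERT ... OUTPUT, a MERGE's OUTPUT can reference source columns, so the old and new PlanDetailIDs can be captured side by side. A sketch (#Mapping and the OldPlanDetailID column are hypothetical names):

MERGE dbo.plandetail AS tgt
USING (SELECT DetailQuestionID, planID, OldPlanDetailID FROM #DetailTable) AS src
    ON 1 = 0  -- never matches, so every source row becomes an insert
WHEN NOT MATCHED THEN
    INSERT (DetailQuestionID, planID)
    VALUES (src.DetailQuestionID, src.planID)
OUTPUT src.OldPlanDetailID, inserted.PlanDetailID
    INTO #Mapping (OldPlanDetailID, NewPlanDetailID);

-- #Mapping now pairs each old PlanDetailID with its replacement,
-- ready for a single set-based UPDATE of analysisdetail.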
Hello, I have a problem in my SQL. I'm using a stored procedure and I want to get the newly created ID number to use in an insert into another table. It works like:

insert into table () values ()
select @id = (newly created id number) from table
insert into table1 () values (@id)

Something like that.
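The usual way to pick up an identity value generated in the current scope is SCOPE_IDENTITY(). A minimal sketch with hypothetical table and column names:

INSERT INTO dbo.parent (name) VALUES (@name);

DECLARE @id INT = SCOPE_IDENTITY();  -- identity generated by the insert above

INSERT INTO dbo.child (parent_id) VALUES (@id);

SCOPE_IDENTITY() is generally preferred over @@IDENTITY because it ignores identity values generated by triggers in other scopes.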
I am trying to execute three DTS packages from a job that I created. When creating the steps, I set the step type to Operating System Command (CmdExec).
The Command I am using is: dtsrun /S servername /E /dtsPackageName
For some reason, whenever I try to run these jobs they fail. Within this job I have three steps; for each step I use the same command but with a different package name. I have a trusted connection, so my command should work just fine. What am I doing wrong? Can someone please explain this process?
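For comparison, the documented dtsrun form passes the package name with the /N switch rather than as a bare /-prefixed argument (the server and package names below are placeholders):

dtsrun /S servername /E /N "PackageName"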
I had DTS'd the Northwind sample database from SQL Server 2000 to 2005. It went OK with no errors. Then I created an SP named upGetCustomer; basically it queries the Customers table, lists some of its fields, and orders by CustomerId, CustomerName, Country descending. The SP is simple and has no errors.
Then, I created a SQL Server project in VS2005 using C# and added a stored procedure class as below:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void Customers(string customerid)
    {
        using (SqlConnection sqlConn = new SqlConnection("context connection=true"))
        {
            sqlConn.Open();

            SqlParameter param = new SqlParameter("@custId", SqlDbType.VarChar, 5);
            param.Value = customerid;

            SqlCommand sqlCmd = new SqlCommand();
            sqlCmd.Connection = sqlConn;   // the command must be bound to the open context connection
            sqlCmd.CommandType = CommandType.StoredProcedure;
            sqlCmd.CommandText = "upGetCustomer";
            sqlCmd.Parameters.Add(param);

            SqlDataReader rdr = sqlCmd.ExecuteReader();
            SqlContext.Pipe.Send(rdr);   // stream the result set back to the caller
        }
    }
}
I took the method of creating the SQLCLR from a source on the Microsoft website, but the coding is mine, and I hope it is correct. I used the connection string 'context connection=true', which I'm still not quite sure about; I hope it means the current connection I am using under my Windows authentication.
The project compiled and deployed OK on the VS2005 side.
The problem is that when I try to locate the assembly on SQL Server 2005, I can't find it anywhere on the server.
I tried compiling and deploying again, but it keeps saying there is an error in deployment because the 'customers' assembly already exists on the database. I then tried to remove this assembly using a SQL script, DROP ASSEMBLY customers, in the Northwind database, but got an error saying it does not exist. So where is it?
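If the assembly really was deployed, it should show up in the catalog views. sys.assemblies is per-database, so a check like this would need to be run in each database the project might have targeted:

USE Northwind;
GO
SELECT name, permission_set_desc, create_date
FROM sys.assemblies;
GO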
The benefit of the actual execution plan is that you can see the actual number of rows passing through each step, compared to the estimated number of rows. But what about the "cost percentages"? I believe I've read somewhere that these percentages are still just estimates and are not based on the real execution. Does anyone know this, and preferably have a link to something that documents it? Thanks
As you can see, I have a uniqueidentifier (UniqueId) column which I populate with NEWID(). I'm trying to return this, as I need it for other functionality of the website, but I can't figure out how to get it after the insert has completed.
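A common pattern for returning a NEWID() value generated during the insert is the OUTPUT clause; a sketch with hypothetical table and column names:

DECLARE @new TABLE (UniqueId UNIQUEIDENTIFIER);

INSERT INTO dbo.Orders (UniqueId, CustomerName)
OUTPUT INSERTED.UniqueId INTO @new
VALUES (NEWID(), @CustomerName);

SELECT UniqueId FROM @new;  -- hand this value back to the website code

Unlike SCOPE_IDENTITY(), which only works for identity columns, OUTPUT captures any inserted column value.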
I have created a table and want to do some basic math by adding a few new columns, but I can't get this to work without creating many new SELECT statements, because the new columns I wish to add refer to other newly created columns. Is there a way I can do this with CTEs or subqueries, or is it best practice to chain out the logic for the newly created columns?
I have an example from AdventureWorksDW, since the data is very accessible. I can safely create EMP_TENURE and PTO_REMAINING in this SELECT statement. I would then need a new SELECT statement to define BONUS, then another to define NEW_COL1, and so on.
I'm still pretty new at SQL and am trying to learn how to complete such a task using subqueries or CTEs.
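A sketch of the chained-CTE approach against AdventureWorksDW's DimEmployee; the EMP_TENURE, PTO_REMAINING, and BONUS formulas here are invented purely for illustration:

WITH base AS
(
    SELECT EmployeeKey, BaseRate, VacationHours,
           DATEDIFF(YEAR, HireDate, GETDATE()) AS EMP_TENURE
    FROM dbo.DimEmployee
),
pto AS
(
    -- this CTE can reference EMP_TENURE, defined in the previous one
    SELECT base.*, VacationHours - (EMP_TENURE * 2) AS PTO_REMAINING  -- hypothetical formula
    FROM base
)
SELECT pto.*,
       CASE WHEN EMP_TENURE >= 10 THEN BaseRate * 0.10 ELSE 0 END AS BONUS  -- hypothetical formula
FROM pto;

Each CTE in the chain can reference the columns defined by the one before it, so the logic stacks up in one statement instead of nested subqueries.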
Hi! I have a piece of code that connects to the master database and uses the simple 'CREATE DATABASE MYDATABASE' statement to create a new database on a SQL Server 2005 Express instance.
I then close that connection and try to open a new connection using my newly created database. I use integrated authentication in both cases. However, I get the following error: Cannot open database "MYDATABASE" requested by the login. The login failed. Login failed for user 'DOMAINNAME\username'. System.Data.SqlClient.SqlException: Cannot open database "MYDATABASE" requested by the login. The login failed.
I don't get this error all the time. I tried to run my code many times and it some times work. But in some cases it doesn't. The database is always created but some times I cannot connect to it. But again, if I wait a little and try again with the same database, the connection is made.
I know that my connections are opened and closed normally and they don't remain open. So I don't believe I reach the maximum number of available connections.
I also sometimes get this exception: A connection was successfully established with the server, but then an error occurred during the login process. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)
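One low-risk thing to check from the master connection before reconnecting (a sketch): confirm the new database has actually come online, and retry briefly if it hasn't.

SELECT name, state_desc
FROM sys.databases
WHERE name = 'MYDATABASE';   -- expect ONLINE before attempting to connect

Connection pooling can also play a part here; a pooled connection that predates the new database's security setup may fail where a genuinely fresh one succeeds.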
I created a new database in SQL Server 2000 using the command:

CREATE DATABASE ContactManager
ON PRIMARY
( NAME = ContactManager,
  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL\Data\ContactManager.mdf',
  SIZE = 5MB, MAXSIZE = 20MB, FILEGROWTH = 5MB )
LOG ON
( NAME = ContactManager,
  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL\Data\ContactManager.ldf',
  SIZE = 2MB, MAXSIZE = 10MB, FILEGROWTH = 2MB )
GO

The database was successfully created, and I created two tables in it. Now when I try to create a new connection in Visual Web Developer 2005, I get the error message: "Directory lookup for the file "C:\Program Files\Microsoft SQL Server\MSSQL\Data\ContactManager.mdf" failed with the operating system error 5 (Access is denied.). Cannot attach the file "C:\Program Files\Microsoft SQL Server\MSSQL\Data\ContactManager.mdf" as 'ContactManager'."
I have many new tables for which I need to write insert, update, and delete triggers manually. Is there any way to generate a trigger script that takes a table name as an input parameter and prints/generates the trigger's script?
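A minimal sketch of such a generator: it only prints a skeleton, and the trigger body is left as a placeholder to fill in per table (the table name below is hypothetical).

DECLARE @table SYSNAME = N'MyTable';  -- the input parameter

DECLARE @sql NVARCHAR(MAX) =
    N'CREATE TRIGGER ' + QUOTENAME(N'trg_' + @table + N'_Audit') +
    N' ON dbo.' + QUOTENAME(@table) + NCHAR(13) +
    N'AFTER INSERT, UPDATE, DELETE' + NCHAR(13) +
    N'AS' + NCHAR(13) +
    N'BEGIN' + NCHAR(13) +
    N'    -- per-table audit logic goes here' + NCHAR(13) +
    N'END;';

PRINT @sql;  -- review the generated script, then EXEC sp_executesql @sql to create it

A fuller version could loop over sys.columns for the table to generate the body as well.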
Why are the user databases that were created in the MSSQLSERVER default instance showing up in the newly created ALPHAONE instance? I'm successfully logging into the ALPHAONE instance, which shows as "DASAlphaOne,1xxxPport" at the top of the tree view in SSMS. I'm logging in as sa and can edit anything.
The issue here is that all the user databases are shown and can even be edited. I created this instance in an effort to hide databases from whoever is not supposed to see them. I was expecting a clean instance with only the system databases. Is there something that can be set to keep each instance's databases private to itself?
I guess this should be rather simple to answer, but I just don't get how to do this. I just want to import some data from an Access .mdb file into a SQL Server 2005 database.
As I don't want to create all the tables in SQL Server 2005 beforehand, I'd like to create them from my SSIS package. Therefore I use the 'Execute SQL Task' in the control flow before stepping into the data flow, where only an OLE DB source and an OLE DB destination exist...
VERY simple... My problem is that I can't select the table from the selection list in the OLE DB destination, because it does not exist before the package is executed...
If I create the table by hand just to be able to select it, that does not help: once everything is set up and I then delete the table (because it will be created by the package anyway), an error occurs before the package executes, in the validation phase:
Package Validation Error: "Invalid object name 'myToBeCreatedTable'".
Can't I create tables on the fly and then use them in my data flow tasks?
Kind regards,
Wolfgang
Can anyone explain, in simple and easy-to-understand language, the query execution plan? I am a fresher assigned to read and evaluate execution plans, and I do not understand where the problem is: what percentage is considered good SQL and what percentage is considered bad SQL.
How do I understand whether there is a problem in the SQL, the joins, an index, or anything else? Please explain step by step what should be considered and what recommendation I should give for each problem.
As developers, we always say "using a stored procedure, instead of a client-side SQL statement, provides performance benefits". However, it seems this has no longer been true since SQL Server 7.0.
See SQL Server Books Online, "Execution Plan Caching and Reuse", at http://msdn.microsoft.com/library/default.asp?url=/nhp/default.asp?contentid=28000409
I am quite confused by the following questions: 1. It seems that since SQL 7.0, a client-side SQL statement reuses an existing execution plan just as a stored procedure does. That means an SP doesn't have much advantage over a SQL statement in terms of performance.
2. It seems a stored procedure is not always compiled ONLY once. If a stored procedure is not used for a long time, it can be kicked out of the procedure cache.
3. In order to reuse an existing execution plan, it seems we have to use fully qualified identifiers, such as SELECT * FROM Northwind.dbo.Employees
instead of SELECT * FROM Employees
However, I rarely see anyone use this kind of fully qualified reference for objects, in either SQL statements or SPs. For example, the sample databases pubs and Northwind don't use fully qualified expressions; I only see them used in the master database.
I guess I might be missing something in the issues above. I would like to get an explanation from a SQL guru or anybody. Thanks a lot.
The Need table has a clustered index on the needid column, and NeedCategory has a composite clustered index on needid and categoryid.
Now take a look at the following query and its execution plan.
SELECT N.NeedId, N.NeedName, N.ProviderName
FROM dbo.Need N
JOIN dbo.NeedCategory NC ON nc.NeedId = n.NeedId
WHERE IsActive = 1 AND CategoryId = 2
ORDER BY NeedName
* A clustered index scan on the Need table happens for IsActive = 1.
* A clustered index scan on the NeedCategory table happens for CategoryId = 2.
My question is,
1. Why does the scan happen before the join occurs? If it happened after the join, the filter would be lighter, even if the optimizer chose to execute the scan first.
2. Is there any chance to rearrange the execution plan manually?
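For question 2: the plan itself can't be edited, but the optimizer can be constrained with hints. For example, OPTION (FORCE ORDER) makes the joins follow the order written in the query; a sketch using the query above:

SELECT N.NeedId, N.NeedName, N.ProviderName
FROM dbo.Need N
JOIN dbo.NeedCategory NC ON nc.NeedId = n.NeedId
WHERE IsActive = 1 AND CategoryId = 2
ORDER BY NeedName
OPTION (FORCE ORDER);

Hints like this pin only part of the plan; whether they help here would need testing, since the optimizer usually has good reasons for the shape it picks.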
Hi, I have a query:

SELECT Dur1.rootId
FROM DurableEventTab Dur1
WHERE (Dur1.dev_ReferenceClusterRoot = 'iyrwd.52')
  AND (Dur1.dev_Action = 'Order:Ordered')
  AND (Dur1.dev_Active = 1)
  AND (Dur1.dev_PurgeState = 0)
  AND (Dur1.dev_PartitionNumber = 0)
This table has a primary key, aribapk11, and indexes on dev_ReferenceClusterRoot, dev_Action, and dev_purgestate.
Now when I fire this query, the query execution plan is actually doing a clustered index scan on the PK aribaPK11. What I was expecting was an index seek on the key defined on dev_ReferenceClusterRoot. Please note that the index seek is the behaviour in SQL Server 2000.
Any idea what is going wrong ?
Clustered Index Scan(OBJECT:([typhoon1902].[dbo].[DurableEventTab].[AribaPK7] AS [Dur1]),
    WHERE:([Dur1].[dev_Active]=(1.)
       AND [Dur1].[dev_PurgeState]=(0)
       AND [Dur1].[dev_PartitionNumber]=(0)
       AND [Dur1].[dev_ReferenceClusterRoot]='iyrwd.52'
       AND [Dur1].[dev_Action]=N'Order:Ordered'))
Output: [Dur1].[rootId]; estimated rows: 1; estimated I/O cost: 0.00386574; estimated CPU cost: 0.0002263; average row size: 71; total subtree cost: 0.00409204 (PLAN_ROW)
I'm new to using SQL Server. I've been asked to optimize a series of scripts that query over 4 million records. I've managed to add indexes and remove a cursor, which increased performance. Now when I look at the execution plan, the only costly query is a DELETE statement from the main table, and it shows a SORT which costs 71%. The table has 2 columns and a unique index. Here is the current index:
ALTER TABLE [dbo].[Qry] ADD CONSTRAINT [Qry_PK] PRIMARY KEY NONCLUSTERED
(
    [QryNum] ASC,
    [ID] ASC
)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = ON, SORT_IN_TEMPDB = OFF,
      IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
ON [PRIMARY]
GO
Question: Will the SORT affect the overall performance? If so, is there anything I should change within the index that would speed up my query?
I am setting up SQL Audit on the SQL Servers in my environment, based on a requirement. I want to create database audit specifications as soon as a database is created. I tried a DDL trigger, but Audit doesn't support being created from triggers. So I created audit specifications on the model database; the only problem with this is that every specification created on a new database ends up with the same name. Is there a way to have the database audit specification name include the newly created database name, or are there other methods to create database specifications on newly created databases?
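One possible shape for this, as a sketch: build the specification name from DB_NAME() and run the DDL in the new database's context. It assumes a server audit named [MyServerAudit] (hypothetical) already exists, and the audit action group is just an example.

USE [NewDatabaseName];  -- hypothetical: run in the newly created database
GO
DECLARE @sql NVARCHAR(MAX) =
    N'CREATE DATABASE AUDIT SPECIFICATION ' + QUOTENAME(N'AuditSpec_' + DB_NAME()) + N'
      FOR SERVER AUDIT [MyServerAudit]
      ADD (SCHEMA_OBJECT_ACCESS_GROUP)
      WITH (STATE = ON);';
EXEC sp_executesql @sql;

Wrapped in a stored procedure, this could be invoked by an Agent job that polls sys.databases for databases without a specification, sidestepping the DDL-trigger limitation.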