SQL Server 2012 :: Order Of Execution Of Statement With GO Statements?
Jan 16, 2014
If I execute code like the example below, does SQL Server execute it from top to bottom? Does the UPDATE run and complete before the DELETE occurs, or does SQL Server execute the UPDATE and DELETE in parallel?
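For reference, statements in a script run strictly sequentially: within a batch they execute top to bottom, and GO is a client-side batch separator, so each batch completes before the client sends the next one. Nothing runs in parallel. A minimal sketch of such a script (hypothetical table and column names, since the original code isn't shown):

-- Batch 1: runs to completion first
UPDATE dbo.Orders                -- hypothetical table
SET    Status = 'Archived'
WHERE  OrderDate < '20130101';
GO

-- Batch 2: only starts after batch 1 has finished
DELETE FROM dbo.Orders
WHERE  Status = 'Archived';
GO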
I'm new to using SQL Server. I've been asked to optimize a series of scripts that query over 4 million records. I've managed to add indexes and remove a cursor, which increased performance. Now when I run the execution plan, the only costly query is a DELETE statement against the main table. It shows a SORT that costs 71%. The table has 2 columns and a unique index. Here is the current index:
ALTER TABLE [dbo].[Qry] ADD CONSTRAINT [Qry_PK] PRIMARY KEY NONCLUSTERED
(
    [QryNum] ASC,
    [ID] ASC
)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = ON, SORT_IN_TEMPDB = OFF,
      IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON,
      ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
Question: Will the SORT affect the overall performance? If so, is there anything I should change within the index that would speed up my query?
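One thing that may be worth trying, as a sketch rather than a guaranteed fix: since the table contains only the two key columns, the nonclustered PK amounts to a heap plus a full second copy of the data, and the DELETE has to maintain both, which is a common source of an index-maintenance SORT. Rebuilding the PK as clustered removes the separate structure:

ALTER TABLE [dbo].[Qry] DROP CONSTRAINT [Qry_PK];

ALTER TABLE [dbo].[Qry] ADD CONSTRAINT [Qry_PK]
    PRIMARY KEY CLUSTERED ([QryNum] ASC, [ID] ASC) ON [PRIMARY];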
I'm trying to set up a statement that gives me a field called 'BINNO' if the payor = Commercial. But I have a few customers that don't have Commercial; they have a payor of Grant or Part D. How would I set up a statement that looks for Commercial first, then Grant or Part D? I started with this:
case when inscomp.payor = 'COMMERCIAL' then INSCOMP.BINNO
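If the goal is to pick one BINNO per customer, preferring a COMMERCIAL row and falling back to Grant or Part D, a CASE over a single row can't express a preference across rows. One way to sketch it is with ROW_NUMBER, ranking each customer's payor rows by priority (PatientID is a hypothetical customer key column):

SELECT BINNO
FROM (
    SELECT inscomp.BINNO,
           ROW_NUMBER() OVER (
               PARTITION BY inscomp.PatientID          -- hypothetical customer key
               ORDER BY CASE inscomp.payor
                            WHEN 'COMMERCIAL' THEN 1   -- look for Commercial first
                            WHEN 'GRANT'      THEN 2   -- then Grant
                            WHEN 'PART D'     THEN 3   -- then Part D
                            ELSE 4
                        END
           ) AS rn
    FROM inscomp
) AS ranked
WHERE rn = 1;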
For example, in a SELECT statement we have many tables, and the WHERE clause has many conditions combined with AND. Does SQL Server apply the WHERE clause only after all the fetching is done, or can it dynamically decide which tables from the SELECT statement to read, in an order driven by the WHERE clause predicates? (That is, would SQL Server avoid fetching data from a table when the AND conditions in the WHERE clause already fail or make that fetch fruitless?)
Hi, I want to know if I can execute a set of batch statements (basically CREATE statements) during the installation of our product. I have used Access with an ADP connected to MS SQL Server 7.0, and I am trying to merge the database and table creation with the installation procedure. Please tell me how I can go ahead with this. (I have tried InstallShield and some other similar products without much success.) Thanks in advance, Mangala
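One common approach, sketched under the assumption that the installer can shell out to a command line: put the CREATE statements in a .sql script and have the installer run it with osql.exe (which ships with SQL Server 7.0), e.g. osql -S <server> -E -i create_db.sql. The script itself is plain batches separated by GO:

-- create_db.sql (hypothetical names)
CREATE DATABASE MyProductDb
GO
USE MyProductDb
GO
CREATE TABLE dbo.Customers (
    CustomerId INT IDENTITY(1,1) PRIMARY KEY,
    Name       VARCHAR(100) NOT NULL
)
GO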
I think I have a problem with execution plan caching in the context of prepared statements. Please comment and advise.
When caching a new execution plan, SQL Server apparently takes into account the actual query parameters and the current situation of the server (open transactions, transaction locks, current workload and so on). That can cause the same prepared statement to perform very badly with other parameter values.
I am having trouble with a production system where some queries more or less suddenly start running extremely badly. The reason is an execution plan which might be optimal for some cases but is in general very bad. Forcing a recompile of execution plans, either by updating statistics or running sp_recompile, solves the problem for some time. But after an unpredictable time the bad execution plan comes back. Probably the good execution plan might also be reinstated after some time, but I cannot wait for that to happen.
The factor between good and bad execution plan is about 160 and increasing (30ms vs. 5000ms).
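If this is parameter sniffing (a plan compiled for one parameter value being reused for very different ones), two standard mitigations may be worth testing. A sketch with hypothetical object names:

DECLARE @CustomerId INT = 42;     -- hypothetical parameter

-- Option 1: recompile the statement on every execution,
-- trading compile CPU for a plan tailored to the current values.
SELECT *
FROM dbo.Orders                   -- hypothetical table
WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);

-- Option 2 (SQL Server 2008+): optimize for "average" statistics
-- instead of sniffing the first parameter value seen.
SELECT *
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR UNKNOWN);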
I am using Query Analyzer to build a database. I want to execute certain commands in order, that is, not execute the next statement until the previous one has finished executing. What is the command used for this purpose?
It will be part of a stored proc, but for now I couldn't even get it running in SSMS. It takes two parameters/variables: one for the ORDER BY column name and the other for the ORDER BY direction, i.e. DESC or ASC. I have tried the following three ways, but none is working:
(1) ORDER BY
        CASE WHEN @Sort_by = '[A_ID]' AND @Sort_Dir = 'Desc' THEN A_ID END DESC
        CASE WHEN @Sort_by = '[A_ID]' AND @Sort_Dir = 'Asc' THEN A_ID END ASC

(2) ORDER BY
        CASE WHEN @Sort_by = '[A_ID]' AND @Sort_Dir = 'Desc' THEN A_ID DESC END
        CASE WHEN @Sort_by = '[A_ID]' AND @Sort_Dir = 'Asc' THEN A_ID ASC END

(3) ORDER BY
        CASE @Sort_by WHEN '[A_ID]' THEN [A_ID] END
        CASE @Sort_Dir WHEN 'Desc' THEN DESC END
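For reference, attempt (1) is nearly the standard pattern; it just needs a comma between the two CASE expressions, since they are separate ORDER BY items. A self-contained sketch (table and columns hypothetical):

DECLARE @Sort_by  NVARCHAR(50) = '[A_ID]';
DECLARE @Sort_Dir NVARCHAR(4)  = 'Desc';

SELECT A_ID
FROM dbo.SomeTable                -- hypothetical table
ORDER BY
    CASE WHEN @Sort_by = '[A_ID]' AND @Sort_Dir = 'Desc' THEN A_ID END DESC,
    CASE WHEN @Sort_by = '[A_ID]' AND @Sort_Dir = 'Asc'  THEN A_ID END ASC;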
SELECT COUNT(id) AS viewcount
FROM location_views
WHERE createdate > DATEADD(dd, -30, GETDATE()) AND objectid = 357

SELECT COUNT(id) * 2 AS clickcount
FROM extlinks
WHERE createdate > DATEADD(dd, -30, GETDATE()) AND objectid = 357
But I want to add the two counts together, so this is what I did:
SELECT COUNT(vws.id) + COUNT(lnks.id) * 2 AS totalcount
FROM location_views vws, extlinks lnks
WHERE (vws.createdate > DATEADD(dd, -30, GETDATE()) AND vws.objectid = 357)
   OR (lnks.createdate > DATEADD(dd, -30, GETDATE()) AND lnks.objectid = 357)
Turns out the query becomes immensely slow. There must be something I'm doing wrong here that results in such bad performance, but what is it?
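The slowdown comes from the FROM clause: listing location_views and extlinks with no join condition produces a cross join (every row of one table paired with every row of the other) before the WHERE is applied. Since the two counts are independent, one fix is to compute each in its own scalar subquery and add them, roughly:

SELECT
      (SELECT COUNT(id) FROM location_views
       WHERE createdate > DATEADD(dd, -30, GETDATE()) AND objectid = 357)
    + (SELECT COUNT(id) FROM extlinks
       WHERE createdate > DATEADD(dd, -30, GETDATE()) AND objectid = 357) * 2
    AS totalcount;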
My team is starting to implement error handling in our sprocs. One question we have is whether or not to use unique error numbers for custom errors (i.e., errors we throw after doing some sort of validity check, not SQL Server errors). For example, we might check the value of a parameter and then throw an error that says "Parameter Start_Date must be less than today, please retry".
We are using SQL Server 2012 and will be using the THROW statement, not RAISERROR, so we don't HAVE to put the numbers in sys.messages. Also, we are going to log the errors in a table, along with the error message, sproc name, line number, etc.
Is it useful to maintain a custom list of error numbers and messages? Or is it just as useful to use one standard error number and add a custom error message (which we can then search for in our code, or use the sproc name & line number we logged)? And if it is worth maintaining a list of numbers plus messages, should we go ahead and put them in sys.messages?
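For what it's worth, THROW with an ad hoc number and message needs no sys.messages entry; user-defined error numbers just have to be 50000 or higher. A minimal sketch of the validity-check pattern described above (parameter name hypothetical):

DECLARE @Start_Date DATE = '20500101';    -- hypothetical parameter value

IF @Start_Date >= CAST(GETDATE() AS DATE)
    THROW 50001, 'Parameter Start_Date must be less than today, please retry.', 1;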
OK, I have a query "SELECT ColumnNames FROM tbl1". Let's say the values returned are "age, sex, race".
Now I want to be able to create an UPDATE statement like "UPDATE tbl2 SET Col2 = age + sex + race" dynamically and execute it. So, if the next SELECT returns "age, sex, race, gender", then the script should create "UPDATE tbl2 SET Col2 = age + sex + race + gender" and execute it.
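A sketch of one way to do this, building the column list by string concatenation during variable assignment (a widely used, though formally undocumented, trick) and executing it with sp_executesql; tbl1/tbl2 as described above:

DECLARE @cols NVARCHAR(MAX) = NULL;

-- Fold the returned column names into "age + sex + race + ..."
SELECT @cols = COALESCE(@cols + N' + ', N'') + ColumnNames
FROM tbl1;

DECLARE @sql NVARCHAR(MAX) = N'UPDATE tbl2 SET Col2 = ' + @cols + N';';
EXEC sp_executesql @sql;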
I was tasked to create an UPDATE statement for 6 tables. I would like to update 4 columns within the 6 tables; they all contain the same column names. The tables get their information from a source table; however, the data that is transferred to the 6 tables is sometimes incorrect, so I need to write an UPDATE statement that will automatically correct it. The UPDATE statement should also contain a WHERE clause.
The columns are [No], [Salesperson Code], [Country Code] and [Country Name].
I was thinking of doing:

UPDATE [tablename]
SET [No] = CASE WHEN [No] = 'AF01'
                THEN 'Country Code' = 'ZA7' AND 'Country Name' = 'South Africa'
                ELSE 'Null'
           END
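As written, the CASE tries to assign two other columns inside the THEN branch, which isn't valid T-SQL: a CASE expression returns a single value for a single column. Since the intent seems to be "when [No] is 'AF01', correct the country fields", the condition probably belongs in the WHERE clause instead, along these lines:

UPDATE [tablename]                 -- repeat (or wrap in dynamic SQL) for each of the 6 tables
SET [Country Code] = 'ZA7',
    [Country Name] = 'South Africa'
WHERE [No] = 'AF01';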
The Need table has a clustered index on the NeedId column, and NeedCategory has a composite clustered index on NeedId and CategoryId.
Now take a look at the following query and its execution plan.
SELECT N.NeedId, N.NeedName, N.ProviderName
FROM dbo.Need N
JOIN dbo.NeedCategory NC ON NC.NeedId = N.NeedId
WHERE IsActive = 1 AND CategoryId = 2
ORDER BY NeedName
* A clustered index scan on the Need table happens for IsActive = 1.
* A clustered index scan on the NeedCategory table happens for CategoryId = 2.
My questions are:
1. Why does the scan happen before the join occurs? If the filter were applied after the join it would have less to do, even when the optimizer chooses to execute the scan first.
2. Is there any way to rearrange the execution plan manually?
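On question 2: the plan shape can't be rearranged directly, but indexes that match the filter predicates give the optimizer seek alternatives instead of scans. A sketch of supporting indexes, with column placement assumed from the query above:

CREATE NONCLUSTERED INDEX IX_Need_IsActive
    ON dbo.Need (IsActive)
    INCLUDE (NeedId, NeedName, ProviderName);

CREATE NONCLUSTERED INDEX IX_NeedCategory_CategoryId
    ON dbo.NeedCategory (CategoryId, NeedId);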
I'm all of a sudden getting this error on a Stored Procedure that has not been touched since it was created.
Msg 266, Level 16, State 2, Procedure usp_ArchivexactControlPoint, Line 0 Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 0, current count = 1.
CREATE PROCEDURE [dbo].[usp_ArchivexactControlPoint]
AS
DECLARE @TableName VARCHAR (50)
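Msg 266 means the proc changed @@TRANCOUNT between entry and exit, typically a BEGIN TRAN without a matching COMMIT/ROLLBACK on some code path (often an early RETURN or an error). Since the proc body is truncated above, only the general shape can be suggested; a sketch of a pattern that keeps the counts balanced:

BEGIN TRY
    BEGIN TRANSACTION;
    -- ... archive work here ...
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;   -- re-raise so the caller still sees the original error (2012+)
END CATCH;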
If I have a List with three tables in it, and let's say that in the first two tables I'm adding something to a global variable and in the 3rd table I'm calling that global variable, can I be 100% sure that the call to that global variable in the 3rd table will always see the result from the first two tables?
I mean, will it always be true that the first two tables are analyzed and "executed" before that third table?
Thanks for any help. If there are any links on this topic on the Microsoft web site, please specify them. I have not been able to find anything.
I have a stored procedure that I would like to order by the order of the parameters it takes in, starting with current item number 1, prior item number 1, current item number 2, prior item number 2, and so on.
Currently I am ordering it by:
c.CurrentItemNumber , p.PriorItemNumber
among other fields, but I would like to replace the part above with the parameters 1 through 5 (current and prior).
Is this possible?
This is my stored procedure for reference:
ALTER PROCEDURE [cost].[Ingredient_Cost_Comparison]
(
    @CurrentSalesQuoteNumber NVARCHAR(20)
    ,@PriorSalesQuoteNumber NVARCHAR(20)
    ,@CurrentItemNumber1 NVARCHAR(20)
    ,@PriorItemNumber1 NVARCHAR(20)
    ,@CurrentItemNumber2 NVARCHAR(20)
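If the intent is "rows matching parameter 1 first, then parameter 2, and so on", a CASE in the ORDER BY can map each item number to its parameter position. A sketch replacing the ORDER BY shown earlier, assuming parameters 3-5 follow the same naming pattern as the two visible above:

ORDER BY CASE c.CurrentItemNumber
             WHEN @CurrentItemNumber1 THEN 1
             WHEN @CurrentItemNumber2 THEN 2
             WHEN @CurrentItemNumber3 THEN 3   -- assumed parameter name
             WHEN @CurrentItemNumber4 THEN 4   -- assumed parameter name
             WHEN @CurrentItemNumber5 THEN 5   -- assumed parameter name
             ELSE 6
         END;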
I set up a trace with the events RPC:Completed, SQL:BatchCompleted, SQL:BatchStarting, and SQL:StmtCompleted.
When I issue the statement: SELECT * FROM XyzView there is nothing captured in Profiler. If I script out the view and then execute the select statement that defines the view, it does show up in Profiler.
I've tried adding a lot of the other events, e.g. SP:StmtCompleted and the various StmtStarting events, and the trace still does not capture anything.
Am I capturing the wrong events or is this known behavior? My goal is to see what the overhead is for using a view versus persisting the results of the view as a table and referencing that instead. The view in question is against static data, joins 9 tables, and is referenced a lot.
I can use the stats generated when I execute the SELECT that defines the view, but I still find this to be curious behavior, so I assume I'm doing something wrong.
I have a package that has 12 data pump tasks all executing in parallel.
It is transferring raw data from an AS400 DW to a MSSQLSvr Staging area.
Each pump task, on completion, assigns values to a set of global variables and then passes these as parameters to a sproc which inserts them into a table.
This seems to work for 4 or 5 of the pump tasks, but the rest of the rows in the table are all the same because the remaining pump tasks all execute before the sprocs.
Is there a way to make sure that an entire set of job steps completes before starting another set of steps, while still keeping them running in parallel?
I had wondered whether there was a way to use the PumpComplete phase of each pump step to fire off the sproc, but I can't see how to execute the step.
Is there any way to enforce task execution order within a control flow? I need to have tasks execute one after another, because, for example, Task B depends on the execution of Task A.
Doesn't the success constraint enforce execution order? When I run my package the tasks seem to execute in a random order, which is not what I want.
What I am trying to do is:
* Loop through all XML files in a directory.
* For each XML file, for categories, products & fields: truncate the staging table, then insert data from the XML file into the staging table.
* After all XML files have been processed, import the data from staging to the main tables.
Here is my control flow: http://www.myfootballonline.com/flow.png
I have nested lists and I want to set a global value in my custom code AFTER a specific table footer row. Does anybody know in what order the table elements are rendered? I have tried adding my piece of code to a group value, the Hidden property, the Color property, and sending it as a parameter to a subreport, but the variable is still set before the table footer row I want displayed is rendered. I have been pulling my hair out trying to do this one! Help!
I want to select all the records and have them be in alphabetical order, first by lastname, then by firstname, then by address. HOWEVER, and this is the tricky part, I want to group names together that have the same address. So, in this example, I want the results to be in this order:
Hall  C  6309 N Olive
Hall  P  6309 N Olive     <---- grouped with the C record because they have the same address
Hall  E  5488 W Catalina  <---- back to alphabetical by first name
Hall  J  7222 N Cocopas
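One way to get that ordering, sketched with hypothetical table and column names: sort by last name, then by the alphabetically first first-name within each (lastname, address) group so that rows sharing an address stay together, then by first name within the group:

SELECT lastname, firstname, address
FROM dbo.People    -- hypothetical table
ORDER BY
    lastname,
    MIN(firstname) OVER (PARTITION BY lastname, address),  -- keeps same-address rows together
    firstname;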
I have a stored proc that executes in 2 sec on the production and test databases. It is taking more than a minute in the dev environment.
I have verified that the SQL Server version is the same on both servers. Prod is running 2012 SP1; however, dev doesn't have SP1 yet. I am downloading it.
Both are 64-bit and have the same collation and compatibility level. I have confirmed that the sproc has the same execution plan on both servers. I have also reset the stats and imported them from prod.
Is the order of execution guaranteed to go from top to bottom in a transaction that has multiple statements like below?
BEGIN TRAN T1;
UPDATE table1 ...;
UPDATE table2 ...;
SELECT * FROM table1;
UPDATE table3 ...;
COMMIT TRAN T1;
How about here?
BEGIN TRAN T1;
UPDATE table1 ...;
BEGIN TRAN M2;
UPDATE table2 ...;
SELECT * FROM table1;
COMMIT TRAN M2;
UPDATE table3 ...;
COMMIT TRAN T1;
How can I guarantee that statements will be executed from top to bottom in a transaction batch like those above? I am not interested in errors within the statements; I just want the whole thing either to execute fully, from top to bottom, or not at all.
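A sketch of the usual all-or-nothing pattern: statements inside the transaction still execute strictly top to bottom, SET XACT_ABORT ON makes any run-time error abort and roll back the whole transaction, and TRY/CATCH covers the rest. The column assignments below are placeholders for the elided updates:

SET XACT_ABORT ON;
BEGIN TRY
    BEGIN TRAN T1;
    UPDATE table1 SET col1 = col1;   -- placeholder updates
    UPDATE table2 SET col1 = col1;
    UPDATE table3 SET col1 = col1;
    COMMIT TRAN T1;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN T1;            -- nothing partial survives
END CATCH;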
How can I get and/or set the order in which the cascading deletes of a table are executed? I have table A with cascading deletes to table B and table C. Records in table B cannot be deleted if they are referenced from table C. So deleting C, then B, then A would work, but B, then C, then A might be prohibited by the constraint between B and C. Therefore the order of execution of the cascading delete is important.
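For reference, a sketch of the constraint layout being described (key columns hypothetical); whether the combined delete succeeds depends on how the engine orders the cascades, which is exactly the question:

CREATE TABLE A (AId INT PRIMARY KEY);

CREATE TABLE B (BId INT PRIMARY KEY,
                AId INT NOT NULL
                    REFERENCES A (AId) ON DELETE CASCADE);

CREATE TABLE C (CId INT PRIMARY KEY,
                AId INT NOT NULL
                    REFERENCES A (AId) ON DELETE CASCADE,
                BId INT NOT NULL
                    REFERENCES B (BId));   -- NO ACTION: blocks deleting B rows still referenced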
I have a Conditional Split, where there are two outputs.
First output is a dataset which is to be inserted into the database.
Second output is a dataset for which the data already exists in the DB. I just need to update that data.
I have a doubt here. I want the inserts to be done first and then the updates. Is there any property to be set for the insert or the update, something that maintains the order or priority of execution?
Please do ask me if you need any further clarification.
I am designing an ETL system to extract data from multiple systems. I have designed a batch control application and database to manage the process. I was thinking of extending this to include the execution of the SSIS packages. I would basically store all of the package details in the database, and when executing a particular system's load, I would get the list of packages required and loop through them in a ForEach loop. The question I have is: can I guarantee the order of execution? I will put an order of execution in the DB, and when I select the data I can order by those columns. I am concerned that in putting the data into a recordset in SSIS its order could be changed, resulting in the packages executing incorrectly.
Has anyone done anything similar to this and run into problems, or is it not an issue? Many thanks, Michael
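For what it's worth, the ForEach loop's ADO enumerator walks the recordset row by row in the order the rows were fetched, so an explicit ORDER BY on the control query pins the execution sequence. A sketch with a hypothetical control table:

DECLARE @SystemName NVARCHAR(50) = N'SalesDW';   -- hypothetical system name

SELECT PackageName
FROM dbo.PackageControl        -- hypothetical control table
WHERE SystemName = @SystemName
ORDER BY ExecutionOrder;       -- the enumerator preserves this order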
SELECT DISTINCT
    Field01 AS 'Field01',
    Field02 AS 'Field02'
FROM myTables
WHERE Conditions are true
ORDER BY Field01
The results are just as I need:
Field01 Field02
------------- ----------------------
192473 8461760
192474 22810
For other reasons, I need to modify that query to:
SELECT DISTINCT
    Field01 AS 'Field01',
    Field02 AS 'Field02'
INTO AuxiliaryTable
FROM myTables
WHERE Conditions are true
ORDER BY Field01

SELECT DISTINCT [Field02]
FROM AuxiliaryTable

Then the results are:
Field02
----------------------
22810
8461760
And what I need is (without showing any other field):
Field02
----------------------
8461760
22810
Is there any good suggestion? Thanks in advance for any help, Aldo.
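The root cause: rows in a table have no inherent order, so the ORDER BY on the SELECT ... INTO is effectively discarded; only an ORDER BY on the query that reads AuxiliaryTable determines output order. Since Field01 isn't in the final select list, one sketch is to carry the ordering through an aggregate:

SELECT Field02
FROM (
    SELECT Field02, MIN(Field01) AS FirstField01
    FROM AuxiliaryTable
    GROUP BY Field02
) AS t
ORDER BY FirstField01;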
I want to save every query executed from a given piece of software, let's say Multi Script for example, and save the query text, execution time, and row count, among other possibly useful information, in a table. Right now I've created a sproc and a job that runs every millisecond, but I can't figure out how to get the execution time and row count. Another problem with this is that if a query takes too long, I end up with several rows in my table.
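Rather than polling with a job, the plan cache already tracks most of this. A sketch reading sys.dm_exec_query_stats (the *_rows columns exist on SQL Server 2008 and later), which exposes per-statement execution time and row counts:

SELECT TOP (50)
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
              ((CASE qs.statement_end_offset
                    WHEN -1 THEN DATALENGTH(st.text)
                    ELSE qs.statement_end_offset
                END - qs.statement_start_offset) / 2) + 1) AS query_text,
    qs.last_elapsed_time,    -- microseconds
    qs.last_rows,            -- rows returned by the most recent execution
    qs.execution_count,
    qs.last_execution_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.last_execution_time DESC;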