DB Engine :: MAXDOP Impact On The Queries Under Execution?
May 17, 2015

What would happen to the queries that are under execution when I change the MAXDOP value from, say, 0 to 1?
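For reference, the change I mean is the server-level setting, applied like this (a sketch; 'max degree of parallelism' is an advanced option, so 'show advanced options' has to be enabled first):

--the server-level change in question (a sketch)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;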
For example, in a SELECT statement we have many tables, and the WHERE clause has many conditions combined with AND. Would SQL Server apply the WHERE clause only after fetching from all the tables, or can it dynamically decide which tables from the SELECT to read, in an order driven by the WHERE clause predicates? (That is, would SQL Server avoid fetching data from tables whose AND conditions in the WHERE clause fail, or which would logically be fruitless/unrelated?)
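One way to see this for yourself (a sketch; the table and column names below are only illustrative) is to look at the estimated plan, where pushed-down predicates appear as seek/scan predicates on the individual table operators rather than as one final filter after all the fetches:

--sketch: inspect where the optimizer applies each predicate
SET SHOWPLAN_TEXT ON
GO
SELECT o.OrderID, c.CustomerName
FROM Orders o
INNER JOIN Customers c ON c.CustomerID = o.CustomerID
WHERE c.Country = 'US' --typically applied while reading Customers
AND o.OrderDate >= '20150101' --typically applied while reading Orders
GO
SET SHOWPLAN_TEXT OFF
GO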
If I have 6-8 queries running in parallel, does having a single Connection Manager (for the same source) for all of the extracts perform faster, or does having a distinct Connection Manager for each extract perform faster?
Regards
Subhash Subramanyam
We have 3 maintenance jobs configured in this particular DB instance:
Daily backup of system databases - SubPlan1 (Check Database Integrity Task --> Rebuild Index Task --> Backup Database Task)
Daily backup of user databases - five subplans, one per task (Check DB Integrity --> Rebuild Index --> Backup User Database --> Backup Log --> Cleanup History)
Weekly maintenance - SubPlan1 (Check Database Integrity job (system + user DBs) + Rebuild Index job (system + user DBs))
PROBLEM: I just noticed that the User DB Rebuild Index task has been running since 03/04, and the Weekly maintenance plan SubPlan1 since 12/04.
Which job is "safe" to stop without impacting the database?
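Before stopping either one, a check along these lines (a sketch) shows what the long-running sessions are actually doing, including percent_complete for the commands that report it:

--sketch: inspect long-running requests before killing anything
SELECT r.session_id, r.command, r.status,
    r.percent_complete, --populated for some commands, e.g. BACKUP and DBCC
    r.wait_type, r.total_elapsed_time / 60000 AS elapsed_minutes,
    t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50 --skip system sessions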
Evening all,
I'm trying to do some profiling of a mobile application to determine where our performance bottleneck is. We have some conflicting information suggesting that inefficient usage of SqlCE might be the cause - but that code exists in a black-box library so we can't see what it's doing.
Are there any tools or configuration options to get the SqlCE execution engine to reveal what connections/queries it's being asked to perform? A simple list with some timestamps would be sufficient - just so we can map from our high-level data...
Any thoughts would be appreciated!
Jack
Please tell me where I can find good material on interpreting execution plans, and how I can compare two different plans for queries written in two different ways that give the same output.
How can I eliminate a key lookup from the execution plan?
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SET NOCOUNT ON
SELECT COUNT(ph.lid) AS Total
FROM PLB ph
WHERE ph.lPhysician = @Physician
AND ph.BSF = CAST(0 AS bit)
[code]....
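One common answer, for what it is worth: a key lookup usually disappears once the nonclustered index covers every column the query touches. Assuming the query above only reads lPhysician, BSF, and lid, an index along these lines (a sketch; the index name is illustrative) would typically remove the lookup:

--sketch: covering index to remove the key lookup
CREATE NONCLUSTERED INDEX IX_PLB_lPhysician_BSF
ON PLB (lPhysician, BSF)
INCLUDE (lid)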
I have some VB.NET code that starts a transaction and then executes a lot of queries one by one. Somehow, when I take out the transaction part, my queries finish in around 10 minutes. With the transaction in place, one query alone takes more than 30 minutes and then I get a timeout.
I have checked sp_lock for my process ID and noticed that there are a lot of exclusive locks on different objects. Using sp_who I could not see any deadlocks.
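For reference, the blocking check I ran looks like this (a sketch, assuming SQL 2000-era tooling where sysprocesses is the place to look):

--sketch: sessions currently blocked by another spid
SELECT spid, blocked, waittype, waittime, lastwaittype, cmd
FROM master.dbo.sysprocesses
WHERE blocked <> 0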
I even tried to set the isolation level to READ UNCOMMITTED and still have the same problem.
As I said, once I execute my queries without being in a transaction everything works great.
Can you help me to find out the problem?
Thanks,
Laura
When I attempt to run my DTS package that contains an ActiveX task, I get the following error:
"ActiveX Scripting was not able to initialize the script execution engine"
When I run this from another computer, it runs fine.
I have tried re-registering the DTS DLLs:
Regsvr32.exe "C:Program FilesMicrosoft SQL Server80ToolsBinnaxscphst.dll"
Regsvr32.exe "C:Program FilesMicrosoft SQL Server80ToolsBinndtspkg.dll"
Regsvr32.exe "C:Program FilesMicrosoft SQL Server80ToolsBinndtspump.dll"
But it does not help.
I am worried that the great site for DTS issues, http://www.sqldts.com, is down :-(
Any ideas ?
My System:
Windows Server 2003 + SP1
MS SQL Server 2000 + SP4
I will appreciate any thoughts you have!
I have a logon trigger on a SQL Server 2008 R2 Express Advanced production database to prevent remote logons. The trigger works fine. When I need to connect remotely from my local machine, I connect via a VPN, and I can then connect with SSMS and do whatever I need. However, if I use a linked server on my local machine (still connected via the VPN), I receive the logon error
Msg 17892, Level 14, State 1, Line 1
Logon failed for login 'sa' due to trigger execution.
The trigger is below and it logs any failures, except for the linked server.
If I disable the trigger, the linked server connects ok.
CREATE TRIGGER [tr_MasterLogon]
ON ALL SERVER WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN
DECLARE
@ClientAddress varchar(48) = (SELECT client_net_address
[Code] ....
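As a diagnostic sketch (not part of the trigger itself), this kind of query shows what a connection looks like to the server, so a linked-server logon can be compared with an SSMS logon over the same VPN:

--diagnostic sketch: how the current connection appears to the server
SELECT session_id, client_net_address, auth_scheme, net_transport
FROM sys.dm_exec_connections
WHERE session_id = @@SPID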
I am running a huge SSRS 2008 report. Sometimes it renders, and sometimes it gives me the error: "The report execution has expired or cannot be found. (rsExecutionNotFound)".
When I check the SQL Server log after this issue occurs, I see the following message: "A significant part of sql server memory has been paged out. This may result in performance degradation. Duration: 300 seconds. Working set (KB): 97672, memory utilization 42%". What is the issue and how do I fix it?
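One commonly suggested mitigation for the "memory has been paged out" message is to cap max server memory so the OS is not forced to page out the buffer pool (a sketch; the 4096 MB value is only an example and should be sized for the server):

--sketch: cap max server memory (example value)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;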
Is there any way to get the execution plan for parameterized queries? The application sends queries that are mostly parameterized, and the values used vary so widely that I cannot even make a guess.
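One workaround (a sketch) is to pull the cached plans for those parameterized statements straight out of the plan cache, instead of guessing at parameter values:

--sketch: cached plans for parameterized statements
SELECT st.text, qp.query_plan, qs.execution_count
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%@%' --crude filter for parameterized text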
Hi
Having some issues with our apps.
We are trying to get our applications to work with sql2005.
I've got the databases set up, and all our apps run fine, except when queries are made without the owner of the table being specified in the query. The connection is opened with the username associated with that owner. And it fails in Management Studio as well. Is there something I'm missing? Because you should be able to do this.
eg:
select * from <table_name>
Gives the error:
Msg 208, Level 16, State 1, Line 1
Invalid object name '<table_name>'.
However, if I query like this:
select * from <owner>.<table_name>
it works fine.
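For anyone hitting the same thing: in SQL 2005, name resolution for an unqualified table name goes through the user's default schema, so pointing that default at the owning schema usually fixes it (a sketch; the names are placeholders):

--sketch: point the user's default schema at the owning schema
ALTER USER [app_user] WITH DEFAULT_SCHEMA = [owner_schema]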
We know we can use the lock_deadlock and xml_deadlock_report events to capture deadlock info; however, I also want to capture the execution plans for all of the SPIDs in the deadlock graph. How can the execution plans be output to the Extended Events trace results as well? Is there an action for the execution plan, or a workaround for it?

If there is no built-in action for the execution plan, can we add customized info to the Extended Events results file? For example, when a deadlock-related event happens, we could run a query to get some info and add it, along with other info such as sql_text, database name, etc., to the event trace results file. The reason is that if we also know the execution plans at the time the deadlock happens, they are useful for tuning the queries to reduce future deadlocks.
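The closest thing I have found to an "execution plan action" is capturing the plan as its own event. A sketch of such a session is below, with the caveat that query_post_execution_showplan is expensive and needs a tight filter (the database name here is a placeholder):

--sketch: deadlock report plus post-execution plans in one session
CREATE EVENT SESSION [deadlock_with_plans] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report,
ADD EVENT sqlserver.query_post_execution_showplan (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE sqlserver.database_name = N'MyDb') --placeholder filter
ADD TARGET package0.event_file (SET filename = N'deadlock_with_plans')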
When you execute the query below limiting MAXDOP to 1 (serial execution), the CPU_Utilized_in_Seconds reported by sys.dm_exec_query_stats is accurate: it will nearly match your wall-clock execution time.
>> in theory sys.dm_exec_query_stats always works:
--example provided by www.sqlworkshops.com
--reset cache to collect fresh set of statistics
dbcc freeproccache
go
--execute a sample query serially that takes x amount of seconds
select max(t1.c2 + t2.c2) from tab7 t1 cross join tab7 t2 option (maxdop 1)
go
--now query sys.dm_exec_query_stats to find CPU Utilized by the above query
select (total_worker_time * 1.0) / 1000000 as CPU_Utilized_in_Seconds, * from sys.dm_exec_query_stats
cross apply sys.dm_exec_sql_text(sql_handle)
where text like '%select max(t1.c2 + t2.c2) from tab7 t1 cross join tab7 t2%' and
text not like '%sys.dm_exec_query_stats%' --to eliminate our probe
go
>> CPU_Utilized_in_Seconds will be around 6 to 18 seconds based on your CPU speed - which is what you expect
But when you execute the query without limiting MAXDOP to 1, say with 0 (parallel execution), the CPU_Utilized_in_Seconds reported by sys.dm_exec_query_stats is inaccurate and will not match your wall-clock execution time.
>> in practice sys.dm_exec_query_stats does not always work:
--example provided by www.sqlworkshops.com
--reset cache to collect fresh set of statistics
dbcc freeproccache
go
--execute a sample query in parallel that takes x amount of seconds
select max(t1.c2 + t2.c2) from tab7 t1 cross join tab7 t2
go
--now query sys.dm_exec_query_stats to find CPU Utilized by the above query
select (total_worker_time * 1.0) / 1000000 as CPU_Utilized_in_Seconds, * from sys.dm_exec_query_stats
cross apply sys.dm_exec_sql_text(sql_handle)
where text like '%select max(t1.c2 + t2.c2) from tab7 t1 cross join tab7 t2%' and
text not like '%sys.dm_exec_query_stats%' --to eliminate our probe
go
>> CPU_Utilized_in_Seconds will be around 0.00xxxx seconds - which you do not expect!
You can read my full article at www.sqlworkshops.com/dm_exec_query_stats.htm
sqlworkshops
www.sqlworkshops.com
CPU-intensive queries usually execute in parallel. Most customers use the default configuration, where 'max degree of parallelism' is set to 0, and in that configuration it is more common for CPU-intensive queries to execute in parallel.
A customer tells you they have high CPU utilization on their server and asks you to identify the issue. Without knowing that sys.dm_exec_query_stats reports incorrect CPU utilization when a query executes in parallel, you might query sys.dm_exec_query_stats and tell your customer that no query is CPU intensive. Sooner or later the customer might find the query that you failed to point out.
Now you see the theoretical explanation and practical usage!!
We are just finishing our migration to SQL 2012. In our old environment, the instance which held our SharePoint databases also served other applications. We did not experience any performance related issues in the past due to this.
SharePoint basically requires MAXDOP to be 1, which is how the old server is configured. Since this configuration may not be ideal for other applications that may be put into our environment, we are entertaining the idea of isolating SharePoint into its own instance, probably on the same box.
My manager wants me to come up with performance trace data to better prove that we need to go this route since we apparently have had issues in the past by blindly following Microsoft's best practices.
1. MAXDOP configuration - I understand this may be a two-pronged approach that would require looking at various execution plans and CPU-related counters in Perfmon. SharePoint likely requires a MAXDOP of 1 due to the nature of the application (lots of concurrent processes). What is the best way to show this need graphically?
2. Memory configuration for multiple instances - Does the Total Server Memory reveal all the memory that a given SQL instance is utilizing? Should I use this counter to identify appropriate min/max memory configurations for multiple instances on a single cluster?
The problem with the Perfmon approach is that its scope is limited to just the server. Since our SharePoint environment is currently being shared with other applications, I understand that I may have to utilize DMV statistics to narrow down my analysis.
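For the graphical part of question 1, one measurable signal (a sketch) is how large a share of total wait time the parallelism-coordination waits represent; sampled on a schedule, this can be charted over time:

--sketch: parallelism-related waits as a share of all wait time
SELECT wait_type, wait_time_ms,
    100.0 * wait_time_ms
        / (SELECT SUM(wait_time_ms) FROM sys.dm_os_wait_stats) AS pct_of_total
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('CXPACKET', 'SOS_SCHEDULER_YIELD')
ORDER BY wait_time_ms DESC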
Referencing an article regarding MAXDOP and cost threshold for parallelism from Brent Ozar's website: [URL] .....
We have 2 physical CPUs with 4 cores each and hyper-threading enabled. Looking through Task Manager, under the Performance tab, I see 16 CPU threads. We have set the MAXDOP value to 4.
Reading further, cost threshold for parallelism setting is recommended at 50 to start with.
Our setting is at the default 5.
SQL Server 2012 Performance Dashboard Main advises me this:
Since the application is from a vendor and I have no control over its code, how can I improve this situation?
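Since the vendor code cannot change, the only lever left is the server-level settings the article discusses; the changes would look like this (a sketch, using the values mentioned above):

--sketch: server-level parallelism settings (values from the discussion above)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;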
OK, I have figured out how to hide the sys views and Information_Schema views from users but before I try it on the live database I have a question:
If I DENY SELECT for the public role in master on the sys views and INFORMATION_SCHEMA views, what impact will this have for users, other than not being able to see those views?
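For reference, the kind of statement I mean looks like this (a sketch, run in master; the object names are only examples):

--sketch: deny SELECT on individual catalog/schema views to public
DENY SELECT ON sys.databases TO public
DENY SELECT ON INFORMATION_SCHEMA.TABLES TO public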
Your feedback is greatly appreciated.
Thanks.
I want to determine the performance impact caused by the extensive use of the 'select into #' statement in a production environment. The current situation is that our reports team extensively uses the 'select into #' statement to build smaller subsets of data. These subsets are then used as the basis to create summary style reports and exports. All this is accomplished via the use of SQL pass-through.
After these reports/exports are completed and tested, they are then released to our operations department and the users. The reports/exports then can be run against the production server at the discretion of the user, provided they have the appropriate permissions. These reports/exports target the live data on the primary production server that already has been designated for the use of the application software.
Now I know that reporting against a transactional-based server, where the users run the application, is not a very good idea. (Inherited) I am currently migrating all reports/exports to a reporting server. Although it will still be transaction-based, the reports/exports will be isolated from user activity. Eventually we will be moving toward a warehouse scenario.
I also know that the extensive use of the 'select into #' statement is not a good coding practice for production. I provided several alternatives to this practice, as sketched below:
1) INSERT...SELECT 2) INSERT...EXECUTE (from a stored procedure)
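As a sketch of alternative 1 (the table and column names are only illustrative), the temp table is created explicitly and then populated, instead of being created on the fly by SELECT INTO:

--sketch: pre-create the temp table, then INSERT...SELECT
CREATE TABLE #subset (CustomerID int NOT NULL, Total money NOT NULL)

INSERT INTO #subset (CustomerID, Total)
SELECT CustomerID, SUM(Amount)
FROM dbo.Orders --placeholder source table
GROUP BY CustomerID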
I have read that in SQL 6.5 this may cause severe performance and locking problems in the system databases and tempdb. However, the following document in the Microsoft Knowledge Base indicates that SQL 7.0 may have corrected this issue:
Q153441 - FIX SELECT INTO Locking Behavior.htm
Despite the indication that it has been corrected, I am still not convinced. I frequently see drastic performance hits, especially when several of the reports are running (which is very common). My concern is that moving these reports/exports to a reporting server may spare the users, but I believe it may simply migrate the problem to another location. I will be working with the developers to optimize their code and will investigate index issues.
** To make a long story short: I would like someone with experience in this area to provide the top 5+ reasons not to use the 'select into #' methodology in a production environment. Further, if anyone has any documentation, I would surely like the info.
Thanks, Dave
Hello, everyone:
I want to add a column, INT NOT NULL DEFAULT 0, to a table. There are about 9 million records and 57 columns in the table. SQL 2000 on Windows 2003. What impact might this have? (A sketch of the statement follows my questions below.)
1) Is there any downtime for the database and server?
2) Is it possible to insert records while the new column is being added?
3) Roughly how long will it take?
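For reference, the statement I plan to run (a sketch; the table and constraint names are placeholders):

--sketch: the column addition in question
ALTER TABLE dbo.BigTable
ADD NewCol INT NOT NULL CONSTRAINT DF_BigTable_NewCol DEFAULT 0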
Thanks a lot.
ZYT
Hi All,
I want to know what the impact will be of changing the primary key on a table which already has a lot of data.
For example, column A is currently the unique primary key. I want to make column B the unique primary key instead.
Can I do that? What will be the impact on database performance?
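For clarity, the change would look something like this (a sketch; the names are placeholders, and the existing primary key has to be dropped first):

--sketch: move the primary key from column A to column B
ALTER TABLE dbo.MyTable DROP CONSTRAINT PK_MyTable
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable_B PRIMARY KEY (B)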
Thanks
Sri
Hi,
Does anybody have any idea how much (in %) performance will be affected if we use the varchar data type instead of char?
Thanks,
Ravi
Hi,
I am looking for a tool that is similar to SQL Impact (Quest). Quest has discontinued the tool.
This tool should be able to detect all database object dependencies for SQL Server, Sybase, and Oracle. The objects should include tables, views, stored procedures, indexes, and other objects. It should also detect DB object dependencies in front-end applications.
Any suggestions are greatly appreciated...
Thanks!
I have been collecting information from about 20 performance counters (memory, IO, CPU, SQL) that refresh every 15 seconds. Would that have any performance hit on the server? What are the best practices when collecting information via performance counters?
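As a lighter-weight alternative I have been considering (a sketch; requires SQL 2005 or later), the SQL-side counters can also be sampled from inside the engine:

--sketch: sample SQL counters from inside the engine
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'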
Thanks
I want to use "on delete cascade" in one of my tables but I'm worried though whether this can affect the performance when having millions of records. To explain more I'm working on a social networking website and I have two tables UserAccounts, in which I only keep the username and password and a few related fields, and Profiles in which I keep the profile data for users, I want to be sure that I won't have any records in the Profiles table without corresponding records in the UserAccounts table. Please see the DDL below to understand more the structure of the tables:
CREATE TABLE UserAccounts
(
UserID INT NOT NULL IDENTITY(1,1) PRIMARY KEY,
UserName VARCHAR(20) NOT NULL,
Password VARCHAR(20) NOT NULL,
--other fields (e.g. last login .. etc)
)
CREATE TABLE Profiles
(
UserID INT NOT NULL REFERENCES UserAccounts(UserID),
-- other fields (e.g. birthdate, nationality .. etc)
)
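And the change I intend to make (a sketch; the existing foreign key would be dropped and re-created with the cascade option):

--sketch: re-create the FK with ON DELETE CASCADE
ALTER TABLE Profiles
ADD CONSTRAINT FK_Profiles_UserAccounts
    FOREIGN KEY (UserID) REFERENCES UserAccounts (UserID)
    ON DELETE CASCADE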
Any suggestions are highly appreciated...
Hi,
Does anyone know how the key influencers impact values are calculated? Thanks!!
Kate
Hi,
I am currently working on the ASP encryption of my application. I've tested the encryption of the connection string using capicom.dll on my local machine, and it works successfully. However, I am not quite sure whether this will still work after my OS is upgraded to Windows 2003 (my current OS is Windows XP). Will this DLL component be impacted by the OS upgrade, or will there be no impact at all?
Any inputs from you guys would be much appreciated.
Thank you.
Thanks to all participants.
I am using SQL Server 2000 with replication objects for two locations. The log size on the publisher grows to up to 25 times the data file size; I mean an 80 MB data file maintains a 2 GB log file, and it is the same for all five companies working on the same Windows 2000 Advanced Server box.
Since last week the server randomly gets disconnected from user applications, and at those times a few tables cannot be opened at the server.
Can anyone give a reason why SQL Server 2000 misbehaves this way?
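In case it is relevant, this is how I check how full each log actually is (a sketch):

--sketch: log fullness per database
DBCC SQLPERF (LOGSPACE)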
Thanks.
I have a question regarding full and differential backups.
When we take a full or diff backup, does it create a lot of log? That is, does a full or diff backup have any impact on log size?
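For background, I have been checking what keeps each log from truncating with a query like this (a sketch; log_reuse_wait_desc requires SQL 2005 or later):

--sketch: what is preventing log truncation, per database
SELECT name, log_reuse_wait_desc
FROM sys.databases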
Thanks
We have the following scenario:
Server A replicates Database A to Server B.
Server C has Database A on it as well, but in standby mode. We are applying the transaction logs generated by Database A on Server A to the database on Server C, leaving it in standby mode each time.
Let's say we had planned maintenance for Server A and dumped the last set of transactions on Server A in standby mode, to be applied to Server C. What happens to the replica on Server B? When I start to use Server C, can I back up its transactions and apply them to Server A, and then have those transactions replicated to Server B? And then, when the maintenance is complete, what do I do so that I can switch back to Server A and have replication to Server B continue on as before?
Thanks
Just a quick easy question: if I alter a table (add a column to it), will the table be taken offline during the ALTER process? I am adding the column at the end of the table, not in the middle. I know that if I add it in the middle, the table will be taken offline.
After moving off the VS debugger and into Management Studio to exercise our SQLCLR stored procedure, we notice that the second execution gets an error suggesting that our static SqlCommand object is being reused from the first execution (of the sp under Management Studio). If this is expected behavior, we have no problem limiting our statics to completely reusable objects, but we would first like to know whether this is expected. Is the fact that the debugger doesn't show this behavior also expected?