I've just started looking into assemblies within SQL Server in the hope of creating a multi-threaded application, and while writing a basic test assembly I came across the following problem...
When I create a worker thread within my assembly I am unable to do the following in my newly created thread:
- Stop on a breakpoint in VS2005
- Use Debug.WriteLine to output to the Output window
I know the thread is running as I am able to write to the EventLog, but VS2005 seems to be unaware of any thread other than the main SQL server thread.
If anyone could tell me whether it is possible to debug non-SQL Server threads, whether I am doing something wrong, or whether I should be doing things differently, that would be great!
Thanks in advance for any help!!
The code is as follows (apologies, as it's not very well written!):
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Threading;
using System.Diagnostics;
public partial class StoredProcedures
{
    /// <summary>
    /// This stored procedure starts the comms
    /// </summary>
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void StartComms()
    {
        try
        {
            // Output some debug output to show the thread is running
            Debug.WriteLine("Comms is starting");
            // Create the thread to run in the main thread class
            Thread zThread = new Thread(new ThreadStart(ThreadFunction));
            // Start the thread running
            zThread.Start();
            // Sleep to let the other one run
            Thread.Sleep(2000);
            // Output some debug output to show the thread is running
            Debug.WriteLine("Comms is about to wait for thread to finish");
            // Wait for the thread to complete
            zThread.Join();
            // Output some debug output to show the thread is running
            Debug.WriteLine("Comms is ending");
        }
        catch
        {
            throw;
        }
    }
    /// <summary>
    /// The main thread function
    /// </summary>
    public static void ThreadFunction()
    {
        try
        {
            // Flag to keep thread running
            bool bRunning = true;
            // While the thread is running
            while (bRunning == true)
            {
                // Output some debug output to show the thread is running
                Debug.WriteLine("Comms thread is running");
                // Placeholder body: pause briefly, then stop the loop so the test completes
                Thread.Sleep(500);
                bRunning = false;
            }
        }
        catch
        {
            throw;
        }
    }
}
I would like to grant the db_datareader and db_executor roles to a particular SQL Server login in a database, but I would like to prevent any INSERTs, UPDATEs or DELETEs that may happen by calling the stored procedures.
I tried assigning the db_denydatawriter role, but it doesn't seem to be doing the trick, as the INSERTs, UPDATEs and DELETEs were still working.
Is there any way to provide the db_datareader and db_executor roles but avoid any DML actions?
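For reference, a minimal sketch of the role setup being described (db_executor is a user-defined role, the user name is hypothetical, and the ALTER ROLE syntax assumes SQL Server 2012 or later). Note that an explicit DENY, like db_denydatawriter, is typically bypassed inside stored procedures that share an owner with the tables they touch, because of ownership chaining, which may be why the writes still succeed:

CREATE ROLE db_executor;
GRANT EXECUTE TO db_executor;                          -- lets role members run stored procedures

ALTER ROLE db_datareader ADD MEMBER [ReportingUser];   -- hypothetical database user
ALTER ROLE db_executor   ADD MEMBER [ReportingUser];
DENY INSERT, UPDATE, DELETE TO [ReportingUser];        -- explicit database-level DENY of DML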
I have a proc that has 3 IN parameters, which are actually values of some of the columns in that table, one parameter for each table. What is the optimized way to write a query to get records from these tables on the basis of the IN parameters?
I have a query written that filters on joined table data. The SELECT looks like this:
SELECT *
FROM tbl_bol AS a
LEFT OUTER JOIN bol_status AS b ON b.bol_status_id = a.bol_status_id
LEFT OUTER JOIN tbl_carrier AS c ON c.carrier_id = a.carrier_id
WHERE (a.carrier_name LIKE 'five%')
  AND (a.accrueamt = 0)
  AND (a.imported = 1)
  AND (b.description = 'tendered')
  AND (a.ship_date BETWEEN '9/1/13' AND '9/30/13')
ORDER BY a.bol_number DESC
If I want to do an UPDATE query that uses those filters in the WHERE clause, how do I go about doing that? It doesn't look like you can use joined tables in the UPDATE line like this:
UPDATE tbl_bol AS a
LEFT OUTER JOIN bol_status AS b ON b.bol_status_id = a.bol_status_id
LEFT OUTER JOIN tbl_carrier AS c ON c.carrier_id = a.carrier_id
SET accrueamt = '1348'
WHERE (a.carrier_name LIKE 'five%')
  AND (a.accrueamt = 0)
  AND (a.imported = 1)
  AND (b.description = 'tendered')
  AND (a.ship_date BETWEEN '9/1/13' AND '9/30/13')
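For what it's worth, a sketch of one way this is commonly expressed in T-SQL: keep the joins in a FROM clause and update through the alias (filters and value taken from the question above):

UPDATE a
SET a.accrueamt = '1348'
FROM tbl_bol AS a
LEFT OUTER JOIN bol_status AS b ON b.bol_status_id = a.bol_status_id
LEFT OUTER JOIN tbl_carrier AS c ON c.carrier_id = a.carrier_id
WHERE (a.carrier_name LIKE 'five%')
  AND (a.accrueamt = 0)
  AND (a.imported = 1)
  AND (b.description = 'tendered')
  AND (a.ship_date BETWEEN '9/1/13' AND '9/30/13');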
What are the ideal CPU count and Max Degree of Parallelism for a 3rd-party database server? The server has 12 CPUs, 32GB RAM, and all database sizes add up to < 30GB, so they can all fit in memory (I tried to force this by doing a SELECT * FROM every table). On certain payroll days, the CPU gets maxed out at 100% for a few seconds.
MAXDOP was originally set to the default 0. We later changed it to 8 based on several 'best-practices' articles. However, the vendor suggests changing it to 1 (no parallelism), while others suggest changing it to 4, so that one runaway query doesn't hog most of the CPUs.
I'd like to find out how many CPUs are actually being used by queries. There is a Degree of Parallelism event in URL.... The BinaryData column says:
- 0x00000000 indicates a serial plan running in serial.
- 0x01000000 indicates a parallel plan running in serial.
- >= 0x02000000 indicates a parallel plan running in parallel.
What does "parallel plan running in serial" mean?
I see a lot of 0x01000000, and a few 0x08000000's in my trace. How can I determine whether one query is hogging CPUs and if reducing it to 4 will work?
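As a rough way to see how many worker tasks (and therefore CPUs) currently running requests are using, a sketch along these lines can be run while the workload is active; the session_id > 50 filter is only an assumption to skip most system sessions:

SELECT r.session_id,
       r.request_id,
       COUNT(*) AS active_tasks            -- number of workers this request is using right now
FROM sys.dm_exec_requests AS r
JOIN sys.dm_os_tasks AS t
  ON t.session_id = r.session_id
 AND t.request_id = r.request_id
WHERE r.session_id > 50
GROUP BY r.session_id, r.request_id
ORDER BY active_tasks DESC;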
I created a main program which launches two jobs at a time; each job does some processing, and at the end I am trying to delete those jobs after storing the job details in a custom table I created (a cleanup sub-program).
Out of the two jobs, I am able to store one job's details (job_name, job_id, start_time and end_time) in the custom table and delete that job, but the job that completes last is neither getting captured nor getting deleted from the sysjobs and sysjobhistory tables.
I included this step (which calls the cleanup sub-program to store the job details and delete the job) at the end. I can see from a debug message that the cleanup procedure is getting called, but it is neither storing the details nor deleting the job.
When I execute this cleanup program separately, it does store the job details and delete it.
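For illustration, a minimal sketch of what such a cleanup step might look like (the audit table dbo.JobAudit and the job name are hypothetical). One thing to keep in mind: the job-outcome row (step_id = 0) is only written to sysjobhistory when the job finishes, so a job running this as its own final step may not find its latest run in the history yet:

DECLARE @JobName sysname = N'ParallelJob_1';              -- hypothetical job name

INSERT INTO dbo.JobAudit (job_id, job_name, run_date, run_time, run_duration)
SELECT j.job_id, j.name, h.run_date, h.run_time, h.run_duration
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h
  ON h.job_id = j.job_id
 AND h.step_id = 0                                        -- step 0 = job outcome row
WHERE j.name = @JobName;

EXEC msdb.dbo.sp_delete_job @job_name = @JobName;         -- removes the job and its history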
I have a stored procedure that calls several views that rely on each other. In the past these views used to go parallel and use up all 100% of the CPU (12 cores), and now when the same stored procedure runs it only uses 8% of the CPU (1 core). This extends the time spent on the query from roughly 10-15 sec to 2-3min. I'm not quite sure why this is happening.
Are there some obvious things to look at when optimizing views to utilize all cores/threads? Also, it doesn't matter if I set Cost Threshold for Parallelism to 1 or 50 or 5, the result is always the same, and I have Max Degree of Parallelism set to 0 as well, which should mean all cores are used when available.
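For reference, a sketch of checking (and, commented out, changing) the two instance-level parallelism settings mentioned above; the right values depend on the workload:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'cost threshold for parallelism';      -- show the current value
EXEC sp_configure 'max degree of parallelism';

-- EXEC sp_configure 'max degree of parallelism', 0;     -- 0 = let SQL Server use all available CPUs
-- RECONFIGURE;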
I come from other ETL tools (Oracle Warehouse Builder, BODI, BODS & DataStage) and I'm having trouble finding the best practice for scheduling a collection of packages to be processed in parallel and retrying those that fail. I created a staging project which contains all the packages (50) that extract data from 1 source system and grouped the packages into 2 sequence containers to make sure that the 'heavy' packages are started first and together in parallel.
I soon discovered that there is no standard option to have one child package retry on failure. Currently if 1 package fails the whole project is retried. I explored checkpoints as a solution but that seems a dead end when running packages in parallel.
There seem to be 2 solutions for my issue:
(1) Create a loop around every EPT with 3 variables (waittime, retry_counter & succes_flag).
(2) Create an event handler to keep a list of IDs that failed and enable/disable EPTs based on that list (there's a lot more to it).
Option 1 seems like a lot of bloatware for what I expected to be standard functionality. I'm still investigating option 2.
How do others handle this kind of scheduling? Is it so different with SSIS that I'm approaching this incorrectly?
I have a table with 8 columns and I need to update data in multiple columns on this table. The table contains 1 million records. A single update was taking too long, so I broke it into multiple update statements and am running them in parallel; each update statement updates a different column.
This approach is working fine, but I am getting a deadlock error.
Transaction (Process ID 65) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
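As a point of comparison, a common alternative (a sketch only, with hypothetical table and column names) is to keep one UPDATE that sets all the columns at once and run it in small batches, so each transaction stays short and the competing parallel writers are avoided entirely:

DECLARE @BatchSize int = 10000,
        @Rows int = 1;

WHILE @Rows > 0
BEGIN
    UPDATE TOP (@BatchSize) dbo.BigTable       -- hypothetical table
    SET Col1 = UPPER(Col1),                    -- hypothetical column changes
        Col2 = Col2 + 1,
        Col3 = GETDATE(),
        Processed = 1                          -- mark the row so the next batch skips it
    WHERE Processed = 0;                       -- hypothetical "not yet updated" flag

    SET @Rows = @@ROWCOUNT;                    -- stop once no rows are left to update
END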
We have an application that runs Jobs, each of which affects ## child objects (usually around 1M). When a thread gets to 5000 updated child objects, it bulk inserts into a table called ActionLog with the child Id and JobId.
When the job is complete a sproc SUMs the children from the ActionLog table: select sum(id) from ACTIONLOG where JOBID = @JobId;
It then updates the Jobs table AffectedObjectCount column with the sum(*) from above.
Instead of writing to the ActionLog table and calculating the SUM at the end I would like to do this 'real time'. After the bulk insert I would like to update the AffectedObjectCount column with the number of rows that were just bulk inserted. I tried this in the past and ran into major contention issues. There are usually 20 threads running a job so there exists a lot of potential for deadlocks.
Is there a recommended way to handle updating one column on one row from multiple threads? What is the best practice for a counter like this?
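One pattern often used for this kind of running total (a sketch, assuming @JobId and @BatchRowCount are already available in the calling code) is a single-statement, relative increment right after each bulk insert, so the read and the write happen atomically inside one UPDATE:

UPDATE dbo.Jobs WITH (ROWLOCK)
SET AffectedObjectCount = AffectedObjectCount + @BatchRowCount   -- relative increment, not a re-read
WHERE JobId = @JobId;

Because each thread then only holds the row lock for the duration of this one statement (assuming it is not wrapped in a longer transaction that also touches other objects), contention tends to show up as brief blocking rather than deadlocks.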
I have a scenario where I have to run an update task on multiple servers in parallel, and once all of them are completed (success or failure) another task is to be run on another server.
1. In a maintenance plan, if we add tasks which are not joined, will they run in parallel at the same time?
2. If we link the last task to all the tasks with link type 'completed', will the last task run after all tasks are completed, or when any one of them is completed? (I have a big doubt here.)
The business requirement behind this is to bring data from multiple servers into shadow copies locally and then process them together. It's OK if some server's data transfer fails, but it's not OK to start processing centrally while a data transfer is still going on. Further, we want to run the data transfers from multiple servers in parallel to save time.
Is the SQL Server Profiler Reads Column Incorrect For Parallel Plans?
I often use profiler as one tool to identify bad plans. The reads column gives me a good indication of excessive IO to dig into and correct if necessary. I often use it with Showplan so I can see what a query does, replicate it and fix it.
However, I have just lost some faith in it. I am looking at a poorly performing query joining five tables. A parallel plan has been generated and one table is being scanned (in parallel) due to a missing index. This table has in excess of 4 million rows in it. The rest hit indexes well. However, the entire query generates ONLY 12 READS.
Once corrected, a single processor plan is used. This looks really efficient and uses 120 reads. That looks the right figure to me.
Does the profiler only display one thread of a parallel plan perhaps? Or something else?
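One way to cross-check the trace figures (a sketch; run in the same SSMS session as the query) is to let the engine report per-table logical reads directly:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- ... run the five-table query here ...

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;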
ProductCode   CenterId   Region
13265         10         Asia
13265         12         Asia
13265         9          America
11110         10         Asia
11110         9          America
12365         12         Asia
12365         8          Europe
45620         10         Asia
45620         12         Asia
What I need this query to do is pull one instance of each product code where "Asia" appears more than once within the table. Thanks for the help!
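A minimal sketch of one way such a query could look (the table name dbo.ProductCenters is hypothetical):

SELECT ProductCode
FROM dbo.ProductCenters            -- hypothetical table name
WHERE Region = 'Asia'
GROUP BY ProductCode
HAVING COUNT(*) > 1;               -- product codes with more than one Asia row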
I've setup a deadlock monitor using extended events like this.
CREATE EVENT SESSION [deadlock] ON SERVER
ADD EVENT sqlserver.lock_deadlock(
    ACTION(package0.process_id, sqlserver.client_app_name, sqlserver.client_hostname,
           sqlserver.database_id, sqlserver.database_name, sqlserver.plan_handle)
[Code] ....
A deadlock happened a couple of days ago. I'm trying to determine the cause of the deadlocks. What script should I use to pull that information and see which objects/processes caused the deadlock?
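If the session writes to an event_file target, a sketch along these lines (the file path is hypothetical) can read the captured events back as XML; the built-in system_health session also records an xml_deadlock_report event that can be read the same way:

SELECT CAST(event_data AS XML) AS event_xml
FROM sys.fn_xe_file_target_read_file(N'C:\XEvents\deadlock*.xel',   -- hypothetical file path
                                     NULL, NULL, NULL);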
I am getting a deadlock in production. I took the deadlock information from a trace file and found the deadlock graph, but I am unable to work out the exact scenario. I am attaching the deadlock trace file.
I have a weekly Maintenance Plan Reindex job that has failed because of a deadlock. My question seems simple enough and I'm ashamed to say I ought to know this answer, but here goes: Does the rest of a given job continue after such failures (this one was maybe 3/4 through the log) occur?
I have another post here regarding SQL 2005 running a query 50% slower than on 2000. It was discovered that 2005 runs the query in series whereas 2000 runs it in parallel.
Even with "Cost Threshold For Parallelism" set to a default value – 0, 2005 still executes my query in series. Does anyone know how to force a query to run in parallel in SQL 2005. I specifically want to set it at the database level.
I have a SQL 7 db with a union query (view), and I'm getting the error, "The query processor could not start the necessary thread resources for parallel query execution." This union query has been in place for about two years now with no problems until just now, though I haven't changed anything. Also, I have a local copy of the database on my machine, and the query runs fine. As noted, I haven't changed anything in the query, nor in the SQL settings. There is a network administrator, so it's possible that he may have changed a setting, but I don't know what. The query is reproduced below. Any ideas as to what's going on would be appreciated.
Neil
Main query:
SELECT Tmp.INVCUST, Tmp.SDNBR, Tmp.SDBOOK, Tmp.SDIVCLN,
       Tmp.SDPAID, Tmp.SDPRICE, Tmp.SDCOPIES, Tmp.Location,
       INVTRY.AUTHILL1, Tmp.INVDATE, INVTRY.SaleSrc, INVTRY.HoldInit
FROM (SELECT INVDATE, INVCUST, SDNBR, SDBOOK, SDIVCLN,
             SDPAID, SDPRICE, SDCOPIES, 'P' AS Location
      FROM vwInvoiceDet
      UNION ALL
      SELECT INVDATE, INVCUST, SDNBR, SDBOOK, SDIVCLN,
             SDPAID, SDPRICE, SDCOPIES, 'N' AS Location
      FROM vwInvoiceDetN
      UNION ALL
      SELECT INVDATE, INVCUST, SDNBR, SDBOOK, SDIVCLN,
             SDPAID, SDPRICE, SDCOPIES, 'M' AS Location
      FROM vwInvoiceDetM) Tmp
INNER JOIN dbo.INVTRY ON Tmp.SDBOOK = dbo.INVTRY.[Index]
vwInvoiceDet:
SELECT tabInvoice.INVDATE, tabInvoice.INVCUST,
       SALEDET.SDNBR, SALEDET.SDBOOK, SALEDET.SDINVNUM,
       SALEDET.SDPRICE, SALEDET.SDPAID, SALEDET.SDCOPIES,
       SALEDET.SDIVCLN, tabInvoice.INVNBR, SALEDET.SDID
FROM dbo.tabInvoice INNER JOIN dbo.SALEDET
     ON dbo.tabInvoice.INVNBR = dbo.SALEDET.SDNBR
(vwInvoiceDetN and vwInvoiceDetM are similar to vwInvoiceDet.)
SQL Server 2000 SP3A. Last week one of our processes started issuing or suffering deadlock detected errors every 15 minutes or so. I have read several articles at MS on the subject. I set a couple of startup parameters related to producing deadlock detection information and ran SQL Profiler. I found the SQL statements being issued by the deadlocked statements. In every deadlock the same UPDATE statement appears, however the data values being searched on are different. The best I can tell from trying to query the actual data, each update hits only one or very few rows. No indexed column is updated, so the indexes should not be the source of conflict. Looking at the query I noticed that the query does not have an available index, and Query Analyzer shows that the full table scan is being done in parallel.
My question: Does SQL Server change or modify its locking rules when queries are converted to be run using parallel processing? If so, do you have a reference?
Here are the deadlock entries posted to the error log:
SPID=167
ResType:LockOwner Stype:'OR' Mode: IX SPID:63 ECID:0 Ec:(0x65971510) Value:0x3c577e60 Cost:(0/0)
Input Buf: Language Event: UPDATE Station_Upload set Station_Accept_Status = 'ACC', HeadStatus = 'ACC', LastProcessedSta = '110', HeadPartType = '1' WHERE Part_Serial_No = 'SCH1119323' AND Station = 'H110'
SPID=63
ResType:LockOwner Stype:'OR' Mode: IX SPID:167 ECID:0 Ec:(0x65801510) Value:0x3c27d060 Cost:(0/0)
Input Buf: Language Event: UPDATE Station_Upload set Station_Accept_Status = 'ACC', HeadStatus = 'ACC', LastProcessedSta = '70', HeadPartType = '1' WHERE Part_Serial_No = 'SCH1119060' AND Station = 'H070'
I have suggested adding an index to support the query. Any ideas?
Thanks -- Mark D Powell --
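For illustration, a sketch of the kind of supporting index being suggested (the index name is hypothetical), so the UPDATE can seek on its WHERE clause instead of scanning the table in parallel:

CREATE NONCLUSTERED INDEX IX_Station_Upload_Serial_Station     -- hypothetical index name
    ON Station_Upload (Part_Serial_No, Station);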
I am learning Extended Events to capture a deadlock that has already happened, on my SQL Server 2012 instance, and I am simulating a deadlock. [URL]... gives a query to find the deadlock details using Extended Events; here is the code to retrieve them from Extended Events in 2012.
SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM (
    SELECT XEvent.query('.') AS XEvent
    FROM (
        SELECT CAST(target_data AS XML) AS TargetData
        FROM sys.dm_xe_session_targets st
        JOIN sys.dm_xe_sessions s
[code]...
This code creates a deadlock, but when I run the above Extended Events query to get the details of the deadlock, it does not display any results.
In order to troubleshoot some deadlocking that I am seeing on SQL Server, I am trying to capture the Deadlock XML by enabling the Events Extraction Settings option 'Save Deadlock XML events separately' and specifying a Deadlock XML results file.
Meanwhile, I am also tracing the Deadlock graph, Lock:Deadlock, and Lock:Deadlock Chain events. Yet the xdl file remains empty even though I am getting hits on the events themselves in the SQL Profiler trace.
Also, I have the following trace flag settings enabled.
TraceFlag   Status   Global   Session
1204        1        1        0
1222        1        1        0
Why does the xdl file remain empty even though (I think) it should contain some XML for the deadlocks that are actually happening?
We are running SQL 2012 Std and have enabled trace flags 1204 and 1222 for capturing deadlocks, alongside extended events. I have enabled deadlock notification through email, but it only sends a notification for deadlock events that appear in the SQL Server error log. I was wondering if it is possible to email the deadlock details that get generated in extended events via DB Mail.
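A minimal sketch of the Database Mail side (the mail profile, recipient, and the variable holding the deadlock XML are all hypothetical), assuming the deadlock details have already been pulled out of the extended events target into a string:

DECLARE @DeadlockXml nvarchar(max) = N'<deadlock>...</deadlock>';   -- placeholder payload

EXEC msdb.dbo.sp_send_dbmail
     @profile_name = N'DBA Mail',                -- hypothetical Database Mail profile
     @recipients   = N'dba-team@example.com',    -- hypothetical recipient
     @subject      = N'Deadlock captured by Extended Events',
     @body         = @DeadlockXml;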
I am getting a number of deadlocks when inserting and deleting items from the same table.
The delete statement has a U lock and is awaiting an IX lock on an index that covers the column in the where clause.
The insert statement has an IX lock and is awaiting a U lock on the same index.
The delete statement is deleting about 5000 rows, whereas the insert statement is inserting a single row.
Both these statements are found in stored procedures being called from LINQ to SQL.
I am wondering if there is a way I can prevent the delete statement from taking out the U lock. My thinking is that if the delete didn't take out the U lock, it would not deadlock with the insert. Are there any hints I could use to avoid the particular lock above?
I have seen various examples of multiple updates causing a deadlock, which can be fixed by adding multiple indexes. However, as I am inserting and deleting rows I imagine that all the indexes will need to be updated by both operations.
I have inherited the architecture and don't have the time to redesign everything at present. My backup plan is to deprioritize the delete and build in a retry mechanism.
However, it would be really good if I could find a more elegant way to handle deleting and inserting rows at the same time.
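On the retry fallback mentioned above, a minimal sketch (the delete statement and names are hypothetical) of catching deadlock error 1205 and retrying a few times before giving up:

DECLARE @Category int = 42;        -- hypothetical filter value
DECLARE @Retry int = 0;

WHILE @Retry < 3
BEGIN
    BEGIN TRY
        DELETE FROM dbo.Items      -- hypothetical delete that keeps being chosen as victim
        WHERE Category = @Category;

        BREAK;                     -- success: stop retrying
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() <> 1205
            THROW;                 -- rethrow anything that is not a deadlock

        SET @Retry += 1;           -- deadlock victim: back off briefly and try again
        WAITFOR DELAY '00:00:01';
    END CATCH
END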
I am profiling a web application that is using the Microsoft JDBC driver, version 1.1 to connect to a sql server 2005 database. Each java.sql.Statement that is created, within the application, gets a query timeout value set on it ( statement.setQueryTimeout(...) ).
I have discovered that the JDBC driver creates a new thread to monitor each Statement and its query timeout value. When the application is under load, these threads are getting created faster than they are being destroyed, and I am concerned that this will cause a performance problem in production.
One option I have is to remove the query timeout value and the monitor threads will not be created, another is to change JDBC drivers.
I'm curious whether there is any way to control this behavior so that these threads are not created, or are managed more efficiently. Is there a workaround that anyone is aware of? Is this considered a bug?
I have found a similar bug here for the 2000 driver: http://support.microsoft.com/default.aspx/kb/894552
We have around 5 SPs which insert data into Table A, and these run in parallel. From the temp tables in the SPs, data is loaded into Table A. We are getting a deadlock here. No BEGIN and END TRANSACTION is used in the stored procedures.