Parallel Execution In SQL Server
Oct 30, 2007
Is there any way to run a stored procedure in parallel to another one? i.e. I have a stored procedure that sends an email. I then scan a table and send any unsent emails. I do not want the second part to slow the response to the user.
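A common pattern for this (a sketch only; the job name is hypothetical and assumes a SQL Agent job already wraps the scan-and-send procedure) is to have the user-facing procedure simply start the job, since sp_start_job returns as soon as the job is queued:

-- Returns immediately; the job runs the scan-and-send proc in the background.
EXEC msdb.dbo.sp_start_job @job_name = N'SendPendingEmails';

Service Broker activation is the other usual answer on SQL 2005 and later.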
View 2 Replies
Dec 6, 2001
I'd like to run 4 stored procedures in parallel on 4 different processors on our server.
Does anyone know the syntax to do that? (Could not find it on the web so far).
Anything else I need to do in addition to the specific syntax (i.e. any specific properties to set in Enterprise Manager)?
Your help is greatly appreciated!
Lana
View 1 Replies
Feb 24, 2015
We have a monitoring tool that found the query using the most execution time of all sessions on the server.
I located it, and found it is a data flow task in an SSIS package.
It exports data from a table holding a large amount of data to another database.
I know it should only execute once, but in the monitoring tool I see it executing 4 times.
I am wondering, is that because SSIS is doing parallel execution automatically?
We use all default settings, and the server has 4 physical CPUs.
Also, it says the query is slow because it has a wait called PREEMPTIVE_OS_WAITFORSINGLEOBJECT;
not sure what that means.
View 0 Replies
Jun 2, 2007
I have another post here regarding SQL 2005 running a query 50% slower than on 2000. It was discovered that 2005 runs the query in series whereas 2000 runs it in parallel.
Even with "Cost Threshold For Parallelism" set to a default value – 0, 2005 still executes my query in series. Does anyone know how to force a query to run in parallel in SQL 2005. I specifically want to set it at the database level.
View 12 Replies
Jul 23, 2005
I have a SQL 7 db with a union query (view), and I'm getting the error, "The query processor could not start the necessary thread resources for parallel query execution." This union query has been in place for about two years now with no problems until just now, though I haven't changed anything. Also, I have a local copy of the database on my machine, and the query runs fine. As noted, I haven't changed anything in the query, nor in the SQL settings. There is a network administrator, so it's possible that he may have changed a setting, but I don't know what. The query is reproduced below. Any ideas as to what's going on would be appreciated.
Neil
Main query:
SELECT Tmp.INVCUST, Tmp.SDNBR, Tmp.SDBOOK, Tmp.SDIVCLN,
Tmp.SDPAID, Tmp.SDPRICE, Tmp.SDCOPIES, Tmp.Location,
INVTRY.AUTHILL1, Tmp.INVDATE, INVTRY.SaleSrc, INVTRY.HoldInit
FROM (SELECT INVDATE, INVCUST, SDNBR, SDBOOK, SDIVCLN,
SDPAID, SDPRICE, SDCOPIES, 'P' AS Location
FROM vwInvoiceDet
UNION ALL
SELECT INVDATE, INVCUST, SDNBR, SDBOOK, SDIVCLN,
SDPAID, SDPRICE, SDCOPIES, 'N' AS Location
FROM vwInvoiceDetN
UNION ALL
SELECT INVDATE, INVCUST, SDNBR, SDBOOK, SDIVCLN,
SDPAID, SDPRICE, SDCOPIES, 'M' AS Location
FROM vwInvoiceDetM) Tmp
INNER JOIN dbo.INVTRY ON Tmp.SDBOOK = dbo.INVTRY.[Index]
vwInvoiceDet:
SELECT tabInvoice.INVDATE, tabInvoice.INVCUST,
SALEDET.SDNBR, SALEDET.SDBOOK, SALEDET.SDINVNUM,
SALEDET.SDPRICE, SALEDET.SDPAID, SALEDET.SDCOPIES,
SALEDET.SDIVCLN, tabInvoice.INVNBR, SALEDET.SDID
FROM dbo.tabInvoice
INNER JOIN dbo.SALEDET ON dbo.tabInvoice.INVNBR = dbo.SALEDET.SDNBR
(vwInvoiceDetN and vwInvoiceDetM are similar to vwInvoiceDet.)
View 2 Replies
Apr 16, 2007
Hi,
We have a process that builds our data warehouse.
The processes execute SPs in serial order.
Each SP builds a separate table.
Each table is built destructively (truncate and INSERT INTO).
I've tried to change the configuration by running 4 SPs in parallel via SSIS to shorten the update time.
I've noticed two declines in performance:
1. Each SP's execution time is around 50% higher in the parallel execution than in the serial execution. The CPU utilization is the same.
2. On each parallel execution we see a further decline in performance of around 5-10% compared to the previous parallel execution.
Do you have any directions to investigate?
Btw - we have an Itanium 64-bit x8 with 32GB memory.
Thanks,
Assaf
View 3 Replies
Nov 10, 2005
Is it possible to get the iterations in a foreach loop to run in parallel? What I need to do is to spawn an arbitrary number of parallel execution paths that all look exactly the same. The number is equal to the number of input files, which varies from time to time. Any help is appreciated!
View 27 Replies
Aug 8, 2006
Hi,
we're accessing a SQL Server as a source for some SSIS packages using quite complex SQL commands. We have data flows getting data from up to 10 queries. The problem is that SSIS starts all these queries in parallel, using up all the memory of the server (the source SQL Server, not the server SSIS is running on), so the queries are very slow. Is there any way to force SSIS to start the queries one after another?
I already browsed the web for some answers on that and I'm not very optimistic... Maybe the only solution really is to feed the results of the queries into raw files and process them later...
Thanks,
View 6 Replies
Mar 5, 2015
I have been trying to get the query optimizer to generate a parallel execution plan, but no matter the MAXDOP (0) or Cost Threshold (5) settings I use, it will only execute in serial.
UPDATE [dbo].[Targus_201412_V7_B]
SET [URBAN] =(
CASE
WHEN [METRO_STATUS] = 'Urban' THEN 1
ELSE 0
END)
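If the estimated cost of this statement never crosses the threshold, the optimizer will keep choosing serial regardless of those settings. One test-only option (heavily hedged: trace flag 8649 is undocumented and unsupported, and QUERYTRACEON needs sysadmin) is to ask the optimizer to prefer a parallel plan wherever one is possible:

UPDATE [dbo].[Targus_201412_V7_B]
SET [URBAN] = CASE WHEN [METRO_STATUS] = 'Urban' THEN 1 ELSE 0 END
OPTION (QUERYTRACEON 8649); -- undocumented: prefer a parallel plan

If it still runs serial with the flag on, something in the plan (a scalar UDF, for instance) is blocking parallelism outright.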
View 9 Replies
Apr 12, 2007
I've got an SSRS report that is set up using a data-driven subscription to supply input parameters to the stored procedure that is called to generate the report results.
I was wondering if there is any way to specify the execution processing method (running the reports in parallel or serially). The subscription that we have set up appears to be running all of the reports in parallel, which is causing massive load on our servers.
Thanks.
View 3 Replies
Mar 18, 2015
I created a main program which will launch two jobs at a time; each job does some processing, and at the end I'm trying to delete those jobs after storing the job details in a custom table I created (the cleanup sub-program).
Of the two jobs I am able to store one job's details (job_name, job_id, start_time and end_time) in the custom table and delete that job, but the job that completes last is neither getting captured nor getting deleted from the sysjobs and sysjobhistory tables.
I had included this step (which calls the cleanup sub-program to store the job details and delete the job) at the end. I can see the cleanup procedure getting called from a debug message, but it is neither storing the details nor deleting the job.
When I execute this cleanup program separately, it does store the job details and delete it.
View 0 Replies
Jan 22, 2015
I have a stored procedure that calls several views that rely on each other. In the past these views used to go parallel and use up all 100% of the CPU (12 cores), and now when the same stored procedure runs it only uses 8% of the CPU (1 core). This extends the time spent on the query from roughly 10-15 sec to 2-3min. I'm not quite sure why this is happening.
Are there some obvious things to look at when optimizing views to utilize all cores/threads? Also, it doesn't matter if I set Cost Threshold for Parallelism to 1 or 50 or 5, it is always the same, and I have Max Degree of Parallelism set to 0 as well, which should mean to use all cores when available.
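With those settings in place, the usual remaining suspect is something inside the views that forbids parallelism outright (scalar user-defined functions are the classic cause). On SQL Server 2012 and later the optimizer records this in the plan XML as NonParallelPlanReason; a sketch of fishing for it in the plan cache:

-- Find cached plans the optimizer marked as non-parallelizable
-- (NonParallelPlanReason appears in SQL Server 2012+).
SELECT TOP (20) qs.total_worker_time, qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE CAST(qp.query_plan AS nvarchar(max)) LIKE N'%NonParallelPlanReason%'
ORDER BY qs.total_worker_time DESC;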
View 9 Replies
Aug 5, 2015
I have a table with 8 columns and 1 million records, and I need to update data in multiple columns. A single update was taking too long, so I broke it into multiple update statements and am running them in parallel; each update statement updates a different column.
This approach is working fine but I am getting the deadlock error.
Transaction (Process ID 65) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I tried with various lock hints but no success.
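Since every statement writes different columns of the same rows, the parallel writers are guaranteed to collide. A sketch of the alternative (table, column, and key names are invented for illustration): one UPDATE that sets all the columns in a single pass, optionally batched by key range to keep each transaction small:

-- Hypothetical names; one writer, so no lock contention between columns.
UPDATE dbo.BigTable
SET Col1 = UPPER(Col1),
    Col2 = LTRIM(RTRIM(Col2)),
    Col3 = ISNULL(Col3, 0)
WHERE Id BETWEEN 1 AND 100000; -- repeat per range to batch the work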
View 4 Replies
Nov 24, 2006
Hi,
I'm a bit confused about the editions. Will SQL Server Compact v3.1 be the successor to SQL Server Mobile v3.0 (but with desktop and mobile targets), or will the two editions coexist, with SQL Server Mobile on smart devices and SQL Server Compact on desktop PCs?
I'm drawing this conclusion because the CTP & now also the RC1 of SQL Server Compact are only compiled for desktops (and I've also seen that the .NET library is compiled against .NET Framework 2.0 desktop edition). If I'm wrong, why are no smart device CABs provided?
My second question:
Is the file format of SQL Server Mobile & Compact the same? Can I create a SDF file with the Compact Edition on the desktop, move it to the smart device and use it there with the "older" Mobile Edition? I'm asking because we are starting a project where we need to create the SDF on the desktop for performance reasons.
Thanks in advance,
Klaus
View 5 Replies
Aug 19, 2015
I often use profiler as one tool to identify bad plans. The reads column gives me a good indication of excessive IO to dig into and correct if necessary. I often use it with Showplan so I can see what a query does, replicate it and fix it.
However I have just lost some faith in it. I am looking at a poorly performing query joining five tables. A parallel plan has been generated and one table is being scanned (in parallel) due to a missing index. This table has in excess of 4 million rows. The rest hit indexes well. However the entire query generates ONLY 12 READS.
Once corrected, a single processor plan is used. This looks really efficient and uses 120 reads. That looks the right figure to me.
Clearly 12 reads is wrong. Does the profiler only display one thread of a parallel plan perhaps? Or something else?
View 1 Replies
Aug 17, 2007
Hello. I have a 32-bit SQL 2005 (SP1) server that is my current Production server. I also have a 64-bit SQL 2005 (SP1) server that will become my Production server. I have several SSIS packages that load/refresh data on a nightly basis to a few of my databases from DB2 (MVS). I have the packages set up on the current Production server (32-bit) and all is working well through the Microsoft OLE DB provider for DB2. However, on the 64-bit server, I am experiencing some issues with the SSIS packages failing due to large loads. Loads of 500K rows or less seem to run without issue (through SQL Agent jobs), but larger table loads are failing.
I do have a linked server set up on the 64-bit server to the 32-bit server, for other processes. And because of this I have lightweight pooling turned off on the 64-bit server (because of distributed querying). Lightweight pooling is turned on on the 32-bit server. Could this be what is causing some of my issue? Since I don't have the lightweight pooling option turned on (on the 64-bit server), am I not getting the proper amount of through-put for my 8 dual core CPU server?
Thanks
Scottye
View 1 Replies
Jan 11, 2008
Hi,
I have stumbled on a problem with running a large number of SSIS packages in parallel, using the "dtexec" command from inside an SQL Server job.
I've described the environment, the goal and the problem below. Sorry if it's a bit too long, but I tried to be as clear as possible.
The environment:
Windows Server 2003 Enterprise x64 Edition, SQL Server 2005 32bit Enterprise Edition SP2.
The goal:
We have a large number of text files that we're loading into a staging area of a data warehouse (based on SQL Server 2k5, as said above).
We have one "main" SSIS package that takes a list of files to load from an XML file, loops through that list and for each file in the list starts an SSIS package by using the "dtexec" command. The command is started asynchronously by using the system.diagnostics.process.start() method. This means that a large number of SSIS packages are started in parallel. These packages perform the actual loading (with BULK INSERT).
I have successfully run the loading process from the command prompt (using the dtexec command to start the main package) a number of times.
In order to move the loading to a production environment and schedule it, we have set up a SQL Server Agent job. We've created a proxy user with the necessary rights (the same user that runs the job from the command prompt) and created the SQL Agent job (there is one step of type "cmdexec" that runs the "main" SSIS package with the "dtexec" command).
If the input XML file for the main package contains a small number of files (for example 10), the SQL Server Agent job works fine: the SSIS packages are started in parallel and they finish work successfully.
The problem:
When the number of concurrently started SSIS packages gets too big, the packages start to fail. When a large number of SSIS package executions are already taking place, the new dtexec commands fail after 0 seconds of work with an empty error message.
Please bear in mind that the same loading still works perfectly from the command prompt on the same server with the same user. It only fails when run from the SQL Agent job.
I've tried to pin down the limit at which the packages start to fail, and I believe the threshold is 80 parallel executions (I understand that it might not be desirable to start so many SSIS packages at once, but I'd like to do it despite this).
Additional information:
The dtexec utility provides an error message where the package variables are shown and the fact that the package ran 0 seconds, but the "Message" is empty ("Message: ").
Turning the logging on in all the packages does not provide an error message either, just a lot of run-time information.
The try-catch block around the process.start() script in the main package's script task also does not reveal any errors.
I've increased the "max worker threads" number for the cmdexec subsystem in the msdb.dbo.syssubsystems table to a safely high number and restarted the SQL Server, but this had no effect either.
The request:
Can anyone give ideas what could be the cause of the problem?
If you have any ideas about how to further debug the problem, they are also very welcome.
Thanks in advance!
Eero Ringmäe
View 2 Replies
Aug 23, 2007
After moving off the VS debugger and into Management Studio to exercise our SQLCLR sp, we notice that the 2nd execution gets an error suggesting that our static SqlCommand object is getting reused from the 1st execution (of the sp under Management Studio). If this is expected behavior, we have no problem limiting our statics to only completely reusable objects, but would first like to know if this is expected. Is the fact that the debugger doesn't show this behavior also expected?
View 4 Replies
Dec 7, 2005
Hi, I am slowly getting to grips with SQL Server. As a part of this, I have been attempting to work on producing more efficient queries. This post is regarding what appears to be a discrepancy between the SQL Server execution plan and the actual time taken by a query to run.
My brief is to produce an attendance system for an education establishment (I presume you know I'm not an A-Level student completing a project :p ). Circa 1.5m rows per annum, testing with ~3m rows currently. College_Year could strictly be inferred from the AttDateTime, however it is included as a field because it is a part of just about every PK this table is ever likely to be linked to. Indexes are not fully optimised yet.
Table:
CREATE TABLE [dbo].[AttendanceDets] (
[College_Year] [smallint] NOT NULL ,
[Group_Code] [char] (12) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Student_ID] [char] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Session_Date] [datetime] NOT NULL ,
[Start_Time] [datetime] NOT NULL ,
[Att_Code] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]
GO
CREATE CLUSTERED INDEX [IX_AltPK_Clust_AttendanceDets] ON [dbo].[AttendanceDets]([College_Year], [Group_Code], [Student_ID], [Session_Date], [Att_Code]) ON [PRIMARY]
GO
CREATE INDEX [All] ON [dbo].[AttendanceDets]([College_Year], [Group_Code], [Student_ID], [Session_Date], [Start_Time], [Att_Code]) ON [PRIMARY]
GO
CREATE INDEX [IX_AttendanceDets] ON [dbo].[AttendanceDets]([Att_Code]) ON [PRIMARY]
GO
ALL inserts are via an overnight sproc - data comes from a third party system. Group_Code is 12 chars (no more no less), Student_ID 8 chars (no more no less).
I have created a simple sproc. I am using this as a benchmark against which I am testing my options. I appreciate that this sproc is an inefficient jack of all trades - it has been designed as such so I can compare its performance to more specific sprocs and possibly some dynamic SQL.
Sproc:
CREATE PROCEDURE [dbo].[CAMsp_Att]
@College_Year AS SmallInt,
@Student_ID AS VarChar(8) = '________',
@Group_Code AS VarChar(12) = '____________',
@Start_Date AS DateTime = '1950/01/01',
@End_Date AS DateTime = '2020/01/01',
@Att_Code AS VarChar(1) = '_'
AS
IF @Start_Date = '1950/01/01'
SET @Start_Date = CAST(CAST(@College_Year AS Char(4)) + '/08/31' AS DateTime)
IF @End_Date = '2020/01/01'
SET @End_Date = CAST(CAST(@College_Year + 1 AS Char(4)) + '/07/31' AS DateTime)
SELECT College_Year, Group_Code, Student_ID, Session_Date, Start_Time, Att_Code
FROM dbo.AttendanceDets
WHERE College_Year = @College_Year
AND Group_Code LIKE @Group_Code
AND Student_ID LIKE @Student_ID
AND Session_Date <= @End_Date
AND Session_Date >= @Start_Date
AND Att_Code LIKE @Att_Code
GO
My confusion lies with running the below script with Show Execution Plan:
--SET SHOWPLAN_TEXT ON
--GO
DECLARE @Time AS DateTime
SET @Time = GETDATE()
SELECT College_Year, Group_Code, Student_ID, Session_Date, Start_Time, Att_Code
FROM AttendanceDets
WHERE College_Year = 2005
AND Group_Code LIKE '____________'
AND Student_ID LIKE '________'
AND Session_Date <= '2005-11-16'
AND Session_Date >= '2005-11-16'
AND Att_Code LIKE '_'
PRINT 'First query took: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS VarChar(5)) + ' milli-Seconds'
SET @Time = GETDATE()
EXEC CAMsp_Att @College_Year = 2005, @Start_Date = '2005-11-16', @End_Date = '2005-11-16'
PRINT 'Second query took: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS VarChar(5)) + ' milli-Seconds'
GO
--SET SHOWPLAN_TEXT OFF
--GO
The execution plan for the first query appears miles more costly than the sproc, yet it is effectively the same query with no parameters. However, my understanding is the cached plan substitutes literals for parameters anyway. In any case, the first query cost is listed as 99.52% of the batch, the sproc 0.48% (comparing the IO, CPU costs etc. supports this). BUT the text output is:
(10639 row(s) affected)
First query took: 596 milli-Seconds
(10639 row(s) affected)
Second query took: 2856 milli-Seconds
I appreciate that logical and physical performance are not one and the same, but why is there such a huge discrepancy between the two? They are tested on a dedicated test server, and repeated running and switching the order of the queries elicits the same results. Sample data can be provided if requested but I assumed it would not shed much light. BTW - I know that additional indexes can bring the plans and execution time closer together - my question is more about the concept.
If you've made it this far - many thanks. If you can enlighten me - infinite thanks.
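The percentages in the execution plan are estimated costs, not measurements, which is usually the source of this kind of discrepancy. A sketch of comparing the two runs on actual work done instead:

SET STATISTICS IO ON;   -- actual logical/physical reads per table
SET STATISTICS TIME ON; -- actual CPU and elapsed time per statement

-- run the ad hoc query and the sproc here, then compare the outputs

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;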
View 10 Replies
Aug 3, 2007
Hello :
How do I execute a stored procedure during execution of the report, that is, before the data is displayed?
Thank you.
View 4 Replies
Apr 25, 2001
Hi All,
I am working on SQL Server 7.0. Every weekend we reindex some tables. I want to know if it is possible to run the re-indexing of tables in parallel so that I can save time.
Our database is 80GB in size and one table is around 22GB. Rebuilding the index on this table takes a lot of time, and we are unable to index the other tables.
Any solutions/suggestions are mostly appreciated.
Regards,
Mitra
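One approach that works on SQL 7 (a sketch; the table names are placeholders) is to split the DBCC DBREINDEX calls across two or more connections or Agent jobs started at the same time, so that different tables rebuild concurrently:

-- Connection/job 1: the 22GB table on its own
DBCC DBREINDEX ('dbo.BigTable')

-- Connection/job 2, started at the same time: the rest
DBCC DBREINDEX ('dbo.SmallTable1')
DBCC DBREINDEX ('dbo.SmallTable2')

Whether this actually saves time depends on the disks more than the CPUs; two rebuilds fighting over the same spindles can be slower than running them serially.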
View 2 Replies
Jan 4, 2008
hi,
we currently use the Database Maintenance Plan to do backups for our SQL Server 2000 databases.
I notice that the databases are backed up one after the other.
I would like to know how to run the backups in parallel rather than sequentially.
To do this, is there any dependency on the number of CPUs?
regards
Andrew
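The Database Maintenance Plan always runs its backups sequentially; the usual workaround (a sketch with placeholder database names and paths) is separate Agent jobs scheduled at the same time, each backing up a different set of databases. The limiting factor is I/O bandwidth rather than the number of CPUs:

-- Job 1:
BACKUP DATABASE SalesDB TO DISK = N'E:\Backup\SalesDB.bak'

-- Job 2, scheduled at the same time:
BACKUP DATABASE HRDB TO DISK = N'F:\Backup\HRDB.bak'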
View 2 Replies
Oct 22, 2006
I just thought I would share this info:
I created a package to download 4 FTP files at once.
I set the MaxConcurrentExecutables for the SSIS package to 4, so in BIDS it downloads 4 files at the same time.
However, when I started the job I noticed that only 3 files were downloaded at a time (looking at temp files in the download directory).
Solution:
Sure enough, after digging around for a while: in the step properties for the SSIS package there is an Execution tab, and "Maximum Concurrent Executables" was -1 (which for some reason defaults to 3 concurrent processes even on our dual-CPU server). After changing that value to 4 - tada - all 4 files download in parallel.
View 1 Replies
Apr 13, 1999
One of the developers here wants to run a PB script consisting of 11 inserts to 11 respective tables at the same time. This is on a development box.
Currently, we are running on MSSQL 6.5, SP4. We have very general parameters set, nothing special. We have max async io set to the default.
The inserts run serially now. What I need to do is run them in parallel.
My questions are:
1) How do I do this -- run in parallel?
2) What parameters do I need to set?
3)What else would I need to do?
Any information you can provide will be greatly appreciated. Thanks.
David Spaisman
View 1 Replies
Mar 19, 2014
Assuming I have a line, is there a function I can call to create a parallel line at a given distance away? I.e., with the below I would want to draw a line parallel to the one output.
DECLARE @line geometry = 'LINESTRING(1 1, 2 2, 3 3, 4 4)'
SELECT @line
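As far as I know there is no built-in offset function (STBuffer returns a polygon around the line, not a shifted line). For a single straight segment the offset can be computed from the unit normal; a sketch with hard-coded endpoints and offset distance @d:

DECLARE @x1 float = 1, @y1 float = 1, @x2 float = 4, @y2 float = 4, @d float = 1;
DECLARE @len float = SQRT(SQUARE(@x2 - @x1) + SQUARE(@y2 - @y1));
-- unit normal to the segment
DECLARE @nx float = -(@y2 - @y1) / @len, @ny float = (@x2 - @x1) / @len;
SELECT geometry::STGeomFromText('LINESTRING('
    + LTRIM(STR(@x1 + @d * @nx, 20, 6)) + ' ' + LTRIM(STR(@y1 + @d * @ny, 20, 6)) + ', '
    + LTRIM(STR(@x2 + @d * @nx, 20, 6)) + ' ' + LTRIM(STR(@y2 + @d * @ny, 20, 6)) + ')', 0);

A multi-point LINESTRING needs this per segment plus mitring at the joints, which is why offsets are usually done client-side in a GIS library.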
View 9 Replies
Jun 14, 2006
Hi,
I need to place the results of two different queries in the same result table, parallel to each other. So if the result of the first query is
1 12
2 34
3 45
and the second query is
1 34
2 44
3 98
the results should be displayed as
1 12 34
2 34 44
3 45 98
If a union is done for both the queries, we get the results in rows. How can the above be done?
Thanks in advance,
vivekian
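If both queries return the same key in their first column, a join on that key lines the value columns up side by side; a sketch with invented names, treating the two queries as derived tables:

SELECT q1.id, q1.val AS val1, q2.val AS val2
FROM (SELECT id, val FROM FirstSource) AS q1
INNER JOIN (SELECT id, val FROM SecondSource) AS q2
    ON q2.id = q1.id
ORDER BY q1.id;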
View 7 Replies
Aug 21, 2007
Hi, all.
I'm writing test cases in C# for a few methods that make changes in a database. To prevent making changes I used BeginTransaction-Rollback, and everything was good. But this doesn't work if the tested method has BeginTransaction-Rollback code itself. An error appears in NUnit: System.InvalidOperationException : SqlConnection does not support parallel transactions.
Does somebody know how to solve the problem?
*******************************tc below********************
[Test]
[Category("Access")]
public void UpdateGroupUserTable(/*Int32 userID, GroupList dataset*/)
{
r.BeginTransaction();
try
{
r.UpdateGroupUserTable(userId,gl);
}
finally
{
r.Rollback();
}
}
*******************************tested method below********************
public void UpdateGroupUserTable(Int32 userID, GroupList dataset)
{
CheckDisposed();
UpdateCommand.CommandType = CommandType.Text;
UpdateCommand.Parameters.Clear();
UpdateCommand.Parameters.Add("@USERID", userID);
BeginTransaction();
try
{
Adapter.Update(dataset.acl_group);
Commit();
}
catch (SqlException)
{
Rollback();
throw;
}
}
View 5 Replies
May 13, 2008
I have several packages within sequence containers in one main dtsx package with a checkpoint configuration, and when I run it some succeed and some don't. The problem is that when I rerun it, the checkpoint doesn't seem to work, 'cause some of the successful packages are rerun as well (and not skipped as they should be...). In other words, the process does not begin at the point of failure.
It seems that packages that finish after the failure point (and succeed) are not registered in the checkpoint file, so when I rerun the main package these succeeded packages are rerun too...
Any ideas??
View 1 Replies
May 11, 2007
Hello,
I have created a project to do de-duplication of addresses.
I understand that Fuzzy Grouping will take less time if it has a smaller data volume to process.
My source feed file is sometimes huge. So I am splitting the input into multiple branches based on
the first letter of the city. There are 7 branches in the process.
Source File Feed
|
Split data into 7 groups
|
------------------------------------------------------------------------------------------------------------------------------------------
| | | | | | |
FzGrpg FzGrpg FzGrpg FzGrpg FzGrpg FzGrpg FzGrpg
| | | | | | |
Split Split Split Split Split Split Split
| | | | | | |
------------- -------------- -------------- -------------- -------------- -------------- --------------
| | | | | | | | | | | | | |
<- - - - - - - Write the Canonicals and Dupes from each of these splits into database - - - - - - - - ->
When I designed this I was hoping that each of the Fuzzy Grouping tasks would execute in parallel.
But in reality they are processing one after the other.
Is there any way to make them execute in parallel?
Appreciate your help.
Thanks
KM
View 12 Replies
Nov 29, 2004
Here's my case, I have written a stored procedure which will perform the following:
1. Grab data from a table using a cursor,
2. Process data,
3. Write the result into another table
If I execute the stored procedure directly (through VS.NET or Query Analyzer), it will run, but when I try to execute it via a scheduled job, it fails.
I used the same record, same parameters, and the same statements to call the stored procedure.
Any idea?
View 3 Replies
Nov 6, 2015
I'd like to know what the ideal CPU count and Max Degree of Parallelism are for a 3rd-party database server. The server has 12 CPUs, 32GB RAM, and all database sizes add up to < 30GB, so they can all fit in memory (I tried to force this by doing a SELECT * from every table). On certain payroll days, the CPU gets maxed out to 100% for a few seconds.
MAXDOP was originally set to the default 0. We later changed it to 8 based on several 'best-practices' articles. However the vendor suggests to change it to 1 (no parallelism), while others suggest changing it to 4, so that one run-away query doesn't hog most of the CPUs.
I'd like to find out how many CPUs are actually being used by queries. There is a Degree of Parallelism event in URL.... The BinaryData column says:
0x00000000, indicates a serial plan running in serial.
0x01000000, indicates a parallel plan running in serial.
>= 0x02000000 indicates a parallel plan running in parallel.
What does "parallel plan running in serial" mean?
I see a lot of 0x01000000, and a few 0x08000000's in my trace. How can I determine whether one query is hogging CPUs and if reducing MAXDOP to 4 will work?
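A "parallel plan running in serial" is a plan that was compiled with parallel operators but got only one worker at run time (for example, not enough free threads, or DOP forced to 1). To see what is actually happening while the CPU spikes, count the active tasks per request; a sketch using sys.dm_os_tasks:

-- More than one task for a request means it is executing in parallel right now.
SELECT session_id, request_id, COUNT(*) AS task_count
FROM sys.dm_os_tasks
WHERE session_id > 50 -- skip system sessions
GROUP BY session_id, request_id
HAVING COUNT(*) > 1
ORDER BY task_count DESC;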
View 4 Replies
Dec 9, 2005
SQL Server 2000 SP3A. Last week one of our processes started issuing or suffering deadlock detected errors every 15 minutes or so.
I have read several articles at MS on the subject. I set a couple of startup parameters related to producing deadlock detection information and ran SQL Profiler. I found the SQL statements being issued by the deadlocked statements. In every deadlock the same UPDATE statement appears, however the data values being searched on are different. The best I can tell from trying to query the actual data, each update hits only one or very few rows. No indexed column is updated, so the indexes should not be the source of conflict.
Looking at the query I noticed that the query does not have an available index, and Query Analyzer shows that the full table scan is being done in parallel.
My question: Does SQL Server change or modify its locking rules when queries are converted to be run using parallel processing? If so, do you have a reference?
Here are the deadlock entries posted to the error log:
SPID=167
ResType:LockOwner Stype:'OR' Mode: IX SPID:63 ECID:0 Ec:(0x65971510) Value:0x3c577e60 Cost:(0/0)
Input Buf: Language Event: UPDATE Station_Upload set Station_Accept_Status = 'ACC', HeadStatus = 'ACC', LastProcessedSta = '110', HeadPartType = '1' WHERE Part_Serial_No = 'SCH1119323' AND Station = 'H110'
SPID=63
ResType:LockOwner Stype:'OR' Mode: IX SPID:167 ECID:0 Ec:(0x65801510) Value:0x3c27d060 Cost:(0/0)
Input Buf: Language Event: UPDATE Station_Upload set Station_Accept_Status = 'ACC', HeadStatus = 'ACC', LastProcessedSta = '70', HeadPartType = '1' WHERE Part_Serial_No = 'SCH1119060' AND Station = 'H070'
I have suggested adding an index to support the query.
Any ideas?
Thanks -- Mark D Powell --
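For reference, the suggested supporting index would look something like this (a sketch; the index name is invented). With a seek on both predicates, each UPDATE locks only the row or two it touches, and the lock interleaving between parallel scans goes away:

CREATE INDEX IX_StationUpload_Serial_Station
ON Station_Upload (Part_Serial_No, Station)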
View 4 Replies