We are using partitions, and all the tables are properly aligned on the partition keys. When this particular stored procedure, which inserts data into a table from a different table based on the partition keys, is called by a Web UI where threading has been applied, a deadlock appears.
Let me make it clearer:
ThreadOne:
Insert into table A (partitionKey, BatchId, ...)
select * from table B where partitionKey = 1

ThreadTwo:
Insert into table A (partitionKey, BatchId, ...)
select * from table B where partitionKey = 2
I can see that this procedure sometimes deadlocks, and I'm not sure of the reason; my guess was that since the tables are partitioned and lock escalation is set to AUTO, the deadlock should not occur.
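In case it matters, this is how I have been checking the escalation setting (table name simplified; I understand LOCK_ESCALATION and sys.tables.lock_escalation_desc need SQL Server 2008 or later):
Code Block
-- Check the current setting; AUTO gives partition-level escalation only when
-- the table is actually partitioned.
SELECT name, lock_escalation_desc
FROM sys.tables
WHERE name = 'A';

-- Set it explicitly if needed.
ALTER TABLE dbo.A SET (LOCK_ESCALATION = AUTO);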
I want to find a way to get partition info for all the tables in all the databases on a server, showing: database name, table name, schema name, what the table is partitioned by (maybe year, month, day, number, alpha), the column used for partitioning, the current active partition, and the last partition (for date partitions I want to know if the partition range goes until 2007, so I can add 2008).
All I've come up with so far is:
Code Block
SELECT DISTINCT o.name
FROM sys.partitions p
INNER JOIN sys.objects o ON o.object_id = p.object_id
WHERE o.type_desc = 'USER_TABLE'
  AND p.partition_number > 1
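I was thinking I need to extend it toward something like the sketch below for the current database (the aliases and ordering are just my guesses), and then wrap it in sp_MSforeachdb or a cursor to cover every database:
Code Block
-- Partition layout for the current database: one row per partition, with the
-- partitioning column and the boundary value for each range.
SELECT DB_NAME()   AS database_name,
       s.name      AS schema_name,
       t.name      AS table_name,
       c.name      AS partition_column,
       pf.name     AS partition_function,
       p.partition_number,
       prv.value   AS boundary_value,
       p.rows
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.indexes i ON i.object_id = t.object_id AND i.index_id IN (0, 1)
JOIN sys.partition_schemes ps ON ps.data_space_id = i.data_space_id
JOIN sys.partition_functions pf ON pf.function_id = ps.function_id
JOIN sys.index_columns ic ON ic.object_id = i.object_id
     AND ic.index_id = i.index_id AND ic.partition_ordinal = 1
JOIN sys.columns c ON c.object_id = t.object_id AND c.column_id = ic.column_id
JOIN sys.partitions p ON p.object_id = t.object_id AND p.index_id = i.index_id
LEFT JOIN sys.partition_range_values prv
       ON prv.function_id = pf.function_id AND prv.boundary_id = p.partition_number
ORDER BY s.name, t.name, p.partition_number;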
I am implementing table partitioning on our database with T-SQL. At the moment (it is under development) the data is correctly located in the relevant filegroup. Our target is that the oldest partitions/filegroups can be backed up and removed from the database, to reduce the size of the DB (a time period is used for partitioning). Then, if the need arises, we would restore the filegroup for reporting or analysis. Note that data is continuously added, and thus new filegroups are added to represent each new time period (e.g., a new filegroup for the new month). Based on your experience, is a solution like that possible?
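The rough pattern I have in mind is this (database and filegroup names invented): mark a closed period read-only, back up just that filegroup, and later restore it piecemeal if it is ever needed again:
Code Block
-- Freeze the closed period, then back up only that filegroup.
ALTER DATABASE SalesDB MODIFY FILEGROUP FG_2007_01 READ_ONLY;

BACKUP DATABASE SalesDB
    FILEGROUP = 'FG_2007_01'
    TO DISK = 'E:\Backups\SalesDB_FG_2007_01.bak';

-- Later: piecemeal restore of just that filegroup for reporting
-- (followed by any required log backups and a final RECOVERY step).
RESTORE DATABASE SalesDB
    FILEGROUP = 'FG_2007_01'
    FROM DISK = 'E:\Backups\SalesDB_FG_2007_01.bak'
    WITH NORECOVERY;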
When do partitioned tables/indexes become beneficial? When a table has several million rows? Hundreds of millions of rows?
My tables all have clustered indexes based on the bigint identity PK. I am considering partitioning some of the larger tables by year. If the field I use is not part of the current clustered index, then I can't use CREATE INDEX to create my partitions? Do I need to create an empty table for each year and then use ALTER TABLE ... SWITCH? I have header/detail/sub-detail tables; as long as I create the partition function using a similar date field, will the partitions be able to be joined? How do I ensure my indexes will be aligned? Once I set up the partitions, I assume new rows will be stored in the proper partitions based on the value of the date field.
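To make my situation concrete, here is the kind of script I think I would need (all names made up); please correct me if this is the wrong approach:
Code Block
CREATE PARTITION FUNCTION pfYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2006-01-01', '2007-01-01', '2008-01-01');
GO
CREATE PARTITION SCHEME psYear
    AS PARTITION pfYear ALL TO ([PRIMARY]);
GO
-- If the current PK is a constraint, it has to be dropped first; DROP_EXISTING
-- cannot change the key columns of a constraint-backed index.
ALTER TABLE dbo.OrderHeader DROP CONSTRAINT PK_OrderHeader;
GO
-- Recreate the clustered index with the date column added to the key and place
-- it on the partition scheme; the rows are redistributed into the partitions,
-- and any other index created ON psYear(OrderDate) will be aligned.
CREATE UNIQUE CLUSTERED INDEX CIX_OrderHeader
    ON dbo.OrderHeader (OrderID, OrderDate)
    ON psYear (OrderDate);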
I've read BOL, etc., and they are good sources for theory, but I need a "Building Partitions for Dummies"-type paper with step-by-step explanations. Is there anything out there like that?
I have a database with a large table (30 billion rows). Because it is so big, I separated the data into quarterly tables and created a partitioned view (with hints on the date column), about 1 billion rows per quarter, all in separate filegroups. The tables themselves are partitioned by date again, so you can slice out a single day.
However, the full backup grows and grows, and the main part of it is "old" but still-needed data.
So I was thinking of putting the older data in a separate database (with a separate backup) and then pointing to those tables in my view.
While this is technically possible (leaving out the WITH SCHEMABINDING), I wonder what negative consequences it will have.
I already had to lose WITH SCHEMABINDING, and I have to use separate partition functions, one for each database (the partition schemes were already separated due to the separate filegroups).
What about query optimization? Does the optimizer care that there are two databases?
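For clarity, the shape I am describing is roughly this (table and database names invented); the archive quarter lives in its own database with its own backup:
Code Block
-- Cross-database "partitioned" view; SCHEMABINDING is not allowed because the
-- view references an object in another database.
CREATE VIEW dbo.FactAllQuarters
AS
SELECT * FROM dbo.Fact_2008Q1
UNION ALL
SELECT * FROM dbo.Fact_2007Q4
UNION ALL
SELECT * FROM ArchiveDB.dbo.Fact_2007Q3;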
We have some deadlock alerts set up in SQL Agent that email us when the performance counter for deadlocks goes above zero. I've used the following script to find the event data that has the deadlock information in it.
select CAST(target_data as xml) as TargetData
from sys.dm_xe_session_targets st
join sys.dm_xe_sessions s on s.address = st.event_session_address
where name = 'system_health'
Now that is fine, and we're looking into it (the number of deadlocks appears to be 0.5), but out of interest I ran a SQL Profiler session to capture the details as well, and nothing is showing. I've received a few alerts and the trace file has information in there, but Profiler shows absolutely nothing (all deadlock events are captured).
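For anyone comparing, this is the shredding I use on that target_data to pull out just the deadlock events (assuming SQL Server 2008+ and the ring_buffer target of system_health):
Code Block
WITH target AS (
    SELECT CAST(st.target_data AS xml) AS TargetData
    FROM sys.dm_xe_session_targets st
    JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
)
-- One row per captured deadlock, with the graph as an XML fragment.
SELECT x.value('@timestamp', 'datetime') AS event_time,
       x.query('data/value/deadlock')    AS deadlock_graph
FROM target
CROSS APPLY TargetData.nodes(
    'RingBufferTarget/event[@name="xml_deadlock_report"]') AS t(x);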
Hello all, I was wondering if anyone else has run into this and, if so, how you got around it. In a nutshell, the SQL optimizer is NOT pruning the additional partitions from the execution plan as would be expected when applying a constraint directly against the partitioned table's partition key; instead it's scanning every partition that you have set up in your partition function range. Yet when you apply the actual value against the table, the plan returns as expected.
Hmm.... strange......ghost...ooooooo?
I have created a simple test to reproduce:
Code Snippet
CREATE PARTITION FUNCTION [PTFunction](int) AS RANGE LEFT FOR VALUES (1,2,3)
GO
CREATE PARTITION SCHEME [PTDataScheme] AS PARTITION [PTFunction] TO ([PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY])
GO
CREATE TABLE tblPartitionTest(
ID int identity(1,1) ,
PartitionKey int,
Sales money)
ON PTDataScheme(PartitionKey)
GO
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,10.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,20.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,30.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,40.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,50.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,10.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,20.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,30.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,40.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,50.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,10.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,20.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,30.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,40.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,50.00);
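And this is how I have been comparing the behavior (the $PARTITION form is a workaround I saw suggested; on my SQL Server 2005 build the variable form does not prune at plan time):
Code Snippet
-- Literal value: the plan should touch only the partition holding key 2.
SELECT SUM(Sales) FROM tblPartitionTest WHERE PartitionKey = 2;

-- Variable: on SQL Server 2005 this can show a scan of all partitions.
DECLARE @key int;
SET @key = 2;
SELECT SUM(Sales) FROM tblPartitionTest WHERE PartitionKey = @key;

-- Workaround sketch: constrain on the partition number itself.
SELECT SUM(Sales)
FROM tblPartitionTest
WHERE $PARTITION.PTFunction(PartitionKey) = $PARTITION.PTFunction(@key);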
We have a highly transactional database. It was owned by a third party before, but now both the database and the application are on our site and we are trying to improve this project. We have a big (902,919 rows) heap table which is getting bigger and bigger every day, and sometimes deadlocks occur. The table has only 4 columns, "token", "type", "value" and "cacheTime", and a unique index cannot be created. It has one index on "token" (char(36)) and "type" (varchar(50)); "value" should also be included, but it is nvarchar(max).
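If it helps, my understanding is that nvarchar(max) cannot be an index key column but can be an included column (SQL Server 2005+), so the index I am considering looks like this (index and table names are placeholders):
Code Block
-- nvarchar(max) is allowed in INCLUDE even though it cannot be a key column;
-- the lookup columns stay small while the value rides along at the leaf level.
CREATE NONCLUSTERED INDEX IX_Cache_Token_Type
    ON dbo.CacheTable (token, type)
    INCLUDE (value);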
I apologize in advance if this is something obvious I've missed ... fresh eyes/brain and all that.
If I have a table that is using a particular partition scheme/function, is there a quick and easy way to determine which column of that table is being used for partitioning? We're examining a number of legacy structures and we're hoping to reduce the time it's going to take us to get the report management wants.
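The closest I have got is the sketch below; as I understand it, the partitioning column is the index column whose partition_ordinal is 1 (the table name is a placeholder):
Code Block
SELECT c.name AS partition_column
FROM sys.index_columns ic
JOIN sys.columns c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE ic.object_id = OBJECT_ID('dbo.YourTable')  -- hypothetical name
  AND ic.index_id IN (0, 1)                      -- heap or clustered index
  AND ic.partition_ordinal = 1;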
The driver table, which keeps track of which datamarts ran and for what date range, gets updated frequently during the ETL run. There can be as many as 250 updates issued against this table in a single second.
This table is a heap, and there are no indexes on it.
During these updates we encounter deadlocks, causing the ETL job to fail.
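One experiment we are considering (column names are guesses at our driver-table schema) is an index so each UPDATE can seek instead of scanning, and locking its way across, the whole heap:
Code Block
-- Hypothetical key: one row per datamart per run date range.
CREATE UNIQUE CLUSTERED INDEX CIX_EtlDriver
    ON dbo.EtlDriver (DatamartName, RangeStartDate);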
In order to troubleshoot some deadlocking that I am seeing on SQL Server, I am trying to capture the Deadlock XML by enabling the Events Extraction Settings option 'Save Deadlock XML events separately' and specifying a Deadlock XML results file.
Meanwhile, I am also tracing the Deadlock graph, Lock:Deadlock, and Lock:Deadlock Chain events. Yet the xdl file remains empty even though I am getting hits on the events themselves in the SQL Profiler trace.
Also, I have the following trace flag settings enabled:

Trace flag   Status   Global   Session
1204         1        1        0
1222         1        1        0
Why does the xdl file remain empty even though (I think) it should contain some XML for the deadlocks that are actually happening?
We have a partitioned view with 4 underlying tables. The view and each of the underlying tables are in separate databases on the same server. Inserts and deletes on the view work fine. We then add insert and delete triggers to each of the underlying tables. The triggers modify a different set of tables in the same database as the view (different than the underlying table). The problem is those triggers aren't fired when inserting or deleting via the view. Inserting or deleting on the underlying table directly causes the triggers to fire, but not when the tables are accessed as a result of using the view. Am I missing something? The triggers are 'for insert' and 'for delete'; no 'instead of' or 'after' triggers.
I have a few databases that are using partitioned views in order to manage the table sizes, and they all work well for our purposes. Recently I noticed a table that had grown to 400+ million rows and wanted to partition it as well, so I went about creating new base tables based on the initial table's structure, just adding a column to both the table and the primary key to be able to build a partitioned view on them. The first time around, on a test system, everything worked flawlessly, but when I put the same structure in place on the production system I get the dreaded "UNION ALL view 'DBName.dbo.RptReportData' is not updatable because the primary key of table '[DBName].[dbo].[RptReportData_201405]' is not included in the union result. [SQLSTATE 42000] (Error 4444)" error.
I have searched high and low and everything I see points to a few directives in order for a UNION ALL view to be updatable:
- Need a partitioning column that is part of the primary key
- Need a CHECK constraint that makes the base tables exclusive, i.e. data cannot belong to more than one table
- Cannot have IDENTITY or calculated columns in the base tables
- The INSERT statement needs to specify all columns with actual values, i.e. not DEFAULT
Well, as far as I can tell, my structure fulfills these conditions, but the INSERT fails anyway. CREATE scripts below, scripted from SQL Server. I only modified them to be on a single row; it is easier to verify that they are identical in a text editor that way.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
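For reference, here is a minimal self-contained sketch (hypothetical names) that, as far as I can tell, satisfies all four rules and accepts inserts through the view; I have been diffing my real scripts against this baseline:
Code Block
CREATE TABLE dbo.Rpt_201405 (
    PeriodKey int NOT NULL CHECK (PeriodKey = 201405),  -- exclusive range
    RowID     int NOT NULL,
    Amount    money NOT NULL,
    CONSTRAINT PK_Rpt_201405 PRIMARY KEY (PeriodKey, RowID)
);
CREATE TABLE dbo.Rpt_201406 (
    PeriodKey int NOT NULL CHECK (PeriodKey = 201406),
    RowID     int NOT NULL,
    Amount    money NOT NULL,
    CONSTRAINT PK_Rpt_201406 PRIMARY KEY (PeriodKey, RowID)
);
GO
CREATE VIEW dbo.RptAll
AS
SELECT PeriodKey, RowID, Amount FROM dbo.Rpt_201405
UNION ALL
SELECT PeriodKey, RowID, Amount FROM dbo.Rpt_201406;
GO
-- An insert through the view routes to the matching base table.
INSERT INTO dbo.RptAll (PeriodKey, RowID, Amount) VALUES (201405, 1, 10.00);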
My database's design is set out here. In summary, I'm trying to model a stock exchange for a Technical Analysis application written using Visual C++. In order to create the hierarchy I'm using a Nested Set Model. I'm now trying to write code to add and delete equities (or, more generically, nodes) in the database using a form presented to the user in my application. I have example SQL code to create the necessary add and delete procedures that calculate the changes to the values in the lft and rgt columns, but these examples focus on a single table, whereas my design aggregates rows from multiple tables using UNION ALL:
Code Snippet
CREATE VIEW vw_NSM_DBHierarchy -- Nested Set Model Database Hierarchy
AS
SELECT clmStockExchange, clmLeft, clmRight FROM tblStockExchange_
UNION ALL
SELECT clmMarkets, clmLeft, clmRight FROM tblMarkets_
UNION ALL
SELECT clmSectors, clmLeft, clmRight FROM tblSectors_
UNION ALL
SELECT clmEPIC, clmLeft, clmRight FROM tblEquities_
Essentially, I'm trying to create an updatable view, but I receive the error "UNION ALL View is not updatable because a partitioning column was not found". I suspect that my design is wrong or lacking, and this problem is highlighting the design flaws, so any suggestions would be greatly appreciated.
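One fallback I am considering, if the view cannot satisfy the partitioned-view rules (each member table would need a mutually exclusive partitioning column in its primary key), is an INSTEAD OF trigger; the routing logic below is only a placeholder, since nothing in the view currently distinguishes the node types:
Code Snippet
CREATE TRIGGER trg_NSM_Insert ON vw_NSM_DBHierarchy
INSTEAD OF INSERT
AS
BEGIN
    -- Placeholder routing: real logic must decide which base table each row
    -- belongs to (e.g. via an added node-type column in the view).
    INSERT INTO tblEquities_ (clmEPIC, clmLeft, clmRight)
    SELECT clmStockExchange, clmLeft, clmRight
    FROM inserted;
END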
I have a query that joins two large partitioned tables and depending on the values in the where clause, I can get dramatically different performance results.
The first query completed in around 7s and has 47,000 logical reads.
select mo.monitor_id,
mo.site_id,
mo.testtime,
sum(mo.NumBytes),
sum(mo.DNSTime),
sum(mo.ConnectTime),
sum(mo.FirstByteTime),
sum(mo.ContentTime),
sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (5339, 5341, 5342, 943842, 943866)
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id group by mo.monitor_id, mo.site_id, mo.testtime
The second query takes 188s to complete and has 1.8m logical reads. The only difference between the two is the value of the monitor_ids in the where clause.
select mo.monitor_id,
mo.site_id,
mo.testtime,
sum(mo.NumBytes),
sum(mo.DNSTime),
sum(mo.ConnectTime),
sum(mo.FirstByteTime),
sum(mo.ContentTime),
sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (152682, 5339, 5341, 5342, 268080)
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id group by mo.monitor_id, mo.site_id, mo.testtime
The two tables have clustered indexes on monitor_id, testtime and site_id. Comparing the execution plan, I can see why there is such a difference in performance. The second query performs a clustered index seek on the monitor_object table starting at the lowest monitor_id, testtime & site_id through the highest monitor_id, testtime & site_id. The first query performs a clustered index seek where the monitor_id, testtime and site_id equals the same values from the monitor_raw table.
My question is, how can I force the second query to use the same execution plan as the first so that I can get better performance?
One possible workaround that I could use is to execute five individual queries, one for each monitor_id, and then union the results together, but this would require significant code changes to my stored procs.
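Another thing I plan to test first is a join hint that pins the fast plan's shape, driving per-row seeks from monitor_raw into monitor_object; a sketch (SUM list abbreviated), though I realize hints also fix the join order, so it needs verifying against real data:
Code Block
-- Force a nested-loops join so monitor_object is seeked per outer row.
select mo.monitor_id, mo.site_id, mo.testtime,
       sum(mo.NumBytes), sum(mo.DNSTime)  -- remaining SUM columns as before
from monitor_raw mr
inner loop join monitor_object mo
    on  mr.monitor_id = mo.monitor_id
    and mr.testtime   = mo.testtime
    and mr.site_id    = mo.site_id
where mr.monitor_id in (152682, 5339, 5341, 5342, 268080)
  and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
  and mo.returncode = 200
  and mr.escalationlevel = 0
  -- plus the same mr.site_id IN (...) filter as in the original query
group by mo.monitor_id, mo.site_id, mo.testtime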
I am using the following select statement to get the row count from a SQL linked server table.
SELECT Count(*) FROM OPENQUERY (CMSPROD, 'Select * From MHDLIB.MHSERV0P')
MHDLIB is the library name in the IBM DB2 database. The above query gives me only the row count of table MHSERV0P. However, I need to get the names, row counts, and sizes of all tables that exist in the MHDLIB library. Is it possible at all?
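The closest I have come is asking DB2 for its own catalog statistics instead of pulling rows across the wire; I am assuming the linked server is DB2 for i given the "library" terminology, and QSYS2.SYSTABLESTAT and its column names are an assumption that may differ by platform or release (on DB2 LUW it would be SYSCAT.TABLES):
Code Block
-- Push the counting to the remote catalog rather than SELECT * per table.
SELECT *
FROM OPENQUERY(CMSPROD,
    'SELECT TABLE_NAME, NUMBER_ROWS, DATA_SIZE
     FROM QSYS2.SYSTABLESTAT
     WHERE TABLE_SCHEMA = ''MHDLIB''');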
Our system is reasonably complex with a lot of non-trivial stored procedures. As the load on our DB has increased, we're now getting more and more deadlocks (10 per day or so from about a million stored proc executions).
We try to avoid transactions where we can, and we do attempt to optimise stored procs to steer clear of deadlock conditions, but with the sheer number of stored procedures we can't possibly avoid all deadlock conditions.
One solution I'm considering is to re-run stored procs that failed because of a deadlock: in the .NET code we'll run the stored proc, check for a deadlock error and, if one happened, wait 100 ms and try again.
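For comparison, I sketched what the same retry would look like server-side (proc name hypothetical; needs SQL Server 2005+ for TRY/CATCH), retrying up to three times on error 1205:
Code Block
DECLARE @retries int, @msg nvarchar(2048);
SET @retries = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        EXEC dbo.usp_DoWork;              -- hypothetical proc
        COMMIT TRANSACTION;
        SET @retries = 0;                 -- success: leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @retries > 1
        BEGIN
            SET @retries = @retries - 1;  -- deadlock victim: back off, retry
            WAITFOR DELAY '00:00:00.100';
        END
        ELSE
        BEGIN
            SET @msg = ERROR_MESSAGE();   -- anything else: surface the error
            SET @retries = 0;
            RAISERROR(@msg, 16, 1);
        END
    END CATCH
END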
We have a production environment that has been running for about 10 months. For the past couple of weeks we have been having problems with deadlocks.
This can't be due to an increase in data size on the tables that are having the issues, because they are cleaned in the same transaction that populates them.
These tables are used to store temporary data that the production system needs to calculate the correct price for any given order. This transaction takes between 0.5 to 1 second to commit.
We are running on a dual-processor machine with 1 GB of RAM, with SQL Server 7 SP3, Windows NT 4 SP6, and Microsoft Transaction Server.
In all our queries and stored procedures we use the optimizer hints (NOLOCK) for select statements and (ROWLOCK) for updates or deletes.
Is there any way to totally avoid deadlocks? In some critical applications we have removed transactions entirely, counting on other means to maintain database consistency, and we still get deadlocks in this area. These are mainly inserts, and the only thing I can think of is that updates to the indexes are causing multiple page locks which result in deadlocks. Is this true?
Will deadlocks be eliminated in 7.0 with row-level locking for this situation? Or will index page splits still cause a possibility of deadlock contention?
I have a problem with an SP in 6.5. When I try to run a stored proc, which is a simple select statement dumped into a temp table, in a particular database, I lock other users who are trying to log into other databases, some in the tempdb database. When I try to kill the process, the rollback takes almost 45 minutes or so; until then no one can log on to the server.
The SP works fine when no one is logged into the Great Plains server. One more thing I observed: the SP, when run, results in a deadlock only when the owner is a user. If the owner is DBO, it works fine.
I've got a deadlock problem. The log below has been generated. The problem is that during one day I have more than 300 deadlocks like it. Before, there were not so many deadlocks, but during the past year the number of users has grown (from 100 before to 500 or 700 now).
*** Deadlock Detected ***
 - Requested by: SPID 360 ECID 0 Mode "S"
 - Held by:      SPID 113 ECID 0 Mode "S"
   Index: aaaaa_PK  Table: TABLE_1  Database: MYDB
   == Lock: KEY: 22:325576198:1 (ff009ae5078d)
 - Requested by: SPID 113 ECID 0 Mode "S"
 - Held by:      SPID 374 ECID 0 Mode "X"
   Index: aaaaa_PK  Table: TABLE_1  Database: MYDB
   == Lock: KEY: 22:325576198:1 (ff009ae5078d)
 - Requested by: SPID 374 ECID 0 Mode "IX"
 - Held by:      SPID 360 ECID 0 Mode "S"
   Table: TABLE_2  Database: MYDB
   == Lock: PAG: 22:1:2428
 == Deadlock Lock participant information:
   Input Buf: SELECT the_rest_of_the_query
   SPID: 360 ECID: 0 Statement Type: UNKNOWN TOKEN Line #: 1
   Input Buf: sp_execute 8
   Input Buf: sp_cursor 8À B 8 8f ç @ Table I
   Input Buf: SELECT the_rest_of_the_query
   SPID: 360 ECID: 0 Statement Type: SELECT Line #: 1
 == Session participant information:
 == Deadlock Detected at:
 ==> Process 360 chosen as deadlock victim
I have done:
- rebuilt indexes on all tables (fillfactor 90)
- analysed memory activity
Could a lack of memory be at the origin of the problem? Which perfmon counters are significant for a lack of memory?
Could the index fill factor be at the origin of the problem? At the moment, it is at 90 percent.
Config: Windows NT 4 Server, MS-SQL 7 SP4, 2 GB of RAM, 2 x Xeon 700.
I have an application built on top of a questionable DB design which requires overcomplicated selects. The application is experiencing deadlocks regularly, in some cases with only one concurrent user.
I set trace flag 1204 but am not seeing anything in the error log, and I initiated a trace in Profiler which does not seem to show any deadlock, despite my having recreated the problem, which leaves my browser hanging indefinitely. When I run the following queries:
SELECT spid, waittime, lastwaittype, waitresource FROM master..sysprocesses WHERE waittime > 10000 AND spid > 50
SELECT spid, cmd, status, loginame, open_tran,
       datediff(s, last_batch, getdate()) AS [WaitTime(s)]
FROM master..sysprocesses p
WHERE open_tran > 0
  AND spid > 50
  AND datediff(s, last_batch, getdate()) > 30
  AND EXISTS (SELECT * FROM master..syslockinfo l
              WHERE req_spid = p.spid AND rsc_type <> 2)
I get:
spid  waittime  lastwaittype  waitresource
55    860978    LCK_M_X       PAG: 13:1:2573

spid  cmd               status    loginame  open_tran  WaitTime(s)
54    AWAITING COMMAND  sleeping  sa        1          1499
55    UPDATE            sleeping  sa        2          1499
We have a SQL 2005 transaction database server that suddenly started issuing deadlock errors last week on most of the databases on that server, plus a lot of timeout errors. Before that, the database server performed very well and timeouts were minimal to zero. I am not sure what changed for it to have these performance problems.
The only major change we made was to convert several varchar columns to nvarchar in several tables (as part of internationalization initiatives). We have not yet modified the procs from varchar to nvarchar, though; we would be doing that phase by phase.
There is also one proc in which we used the snapshot isolation level of SQL Server 2005. These are the only 2 major changes done within the past 2 weeks. Would these be the cause of these deadlocks and timeouts in our web-based application?
Hi everybody, I am new to SQL Server 2000. I know the basics of locks, but I don't know how to resolve deadlock issues, and I am confused by reading articles with 90% of the information and the remaining 10% missing. Can anyone point me to a good site for learning about and resolving deadlocks? Note: I create a deadlock, then try to trace it using dbcc traceon(1205,3604,-1), but the error log shows nothing about the deadlock, only that the trace flags were set. Any help would be appreciated.
We have a problem with a table giving us deadlock issues and we can't figure out why. It's a table we write to fairly often, perhaps 50 times a minute, and we also select 200 rows at a time from 4 servers every 5 minutes or so. We are only keeping 48 hours' worth of rows in the table, which averages 30,000 a day on a busy day. This table has 1 PK and 2 FKs, plus one TEXT column which does not participate in the WHERE clause. We are using bound variables. We have applied the latest patch to the SQL Server, which is running on Windows 2003; the patch is supposed to resolve deadlock issues. Anyone have any advice on how to alleviate this problem? Thanks.
Morning all. I am getting the following error from a number of users, and if anyone could give me any pointers on where to start in diagnosing the issue, I would be grateful: "System.Data.SqlClient.SqlException: Transaction (Process ID 282) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction."
I know blocks and deadlocks are different, but how related are they? It seems like when I get reports of deadlocks I always have blocks, and the blocks grow as time passes.
My database had been in production for 10 months with no deadlock problems. Three weeks ago I had to restore the database, and I am now experiencing several deadlocks. SQL Server is supposed to handle this and kill a process to resolve the deadlock, but it is not. The deadlock occurs in a stored procedure which is a simple select on a table with 187 records. DBCC ran with no problems. Any ideas what might be causing this? And why is SQL not handling it? Your help is greatly appreciated!
I need some help in reducing deadlocks in 6.5. I have tested with `Insert Row Locking` turned off, and it reduced the number of deadlocks. What I need to know is whether removing the foreign key relationships on tables reduces or eliminates deadlocks. If any of you have any info on this, please let me know.
I've started to explore Distributed Partitioned Views in order to use them in the next project, and I've found the article "MS SQL Server Distributed Partitioned Views" by Don Schlichting.
I came across the following problem while running this sample:
Code Block
USE test
GO
CREATE VIEW AllAuthors
AS
SELECT * FROM AuthorsAM, TEST1.test.dbo.AuthorsNZ
GO
I got the error message:
Server: Msg 4506, Level 16, State 1, Procedure AllAuthors, Line 5
Column names in each view or function must be unique. Column name 'au_lname' in view or function 'AllAuthors' is specified more than once.
Could anyone please explain? Can't I use the same column names in both tables?
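Looking at it again, I suspect the comma in the FROM clause makes the sample a join, so both tables' columns (including au_lname twice) land in the view; if a distributed partitioned view is intended, I believe the member tables should be combined with UNION ALL instead, roughly:
Code Block
CREATE VIEW AllAuthors
AS
SELECT * FROM AuthorsAM
UNION ALL
SELECT * FROM TEST1.test.dbo.AuthorsNZ
GO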