I am redesigning an application that automatically distributes helpdesk tickets to our
50 engineers. When an engineer logs in, a stored procedure executes that
searches through all open tickets and assigns a predetermined number of them
to that engineer. The problem I am running into is that if 2 or more engineers
log in at the same time, the stored procedure will distribute the same set of
tickets multiple times.
Originally this was fixed by "reworking" the way SQL Server handles
transactions. The original developer wrote his code like this:
-----
DECLARE @RET_STAT INT

SELECT 'X' AS X INTO #TEMP

BEGIN TRAN

UPDATE #TEMP SET X = 'Y'

SELECT TOP 1 @TICKET_# = TICKET_NUMBER FROM TICKETS WHERE STATUS = 'O'

EXEC @RET_STAT = USP_MOVE2QUEUE @TICKET_#, @USERID
IF @RET_STAT <> 0
BEGIN
    ROLLBACK TRAN
    RETURN @RET_STAT
END

COMMIT TRAN
-----
The UPDATE of the #TEMP table forces the transaction to kick off and
locks the row in table TICKETS until the entire transaction has
completed.
I would like to get rid of the #TEMP table and start using isolation
levels, but I am unsure which isolation level would continue to lock
the selected data and not allow anyone else access. Do I need a
combination of isolation level and "WITH (ROWLOCK)"?
Additionally, the TICKETS table is used throughout the application and
I cannot exclusively lock the entire table just for the distribution
process. It is VERY high I/O!
We have the need to go to a table to get an identity value. This is for updating an existing database, blah blah blah. Here is the schema of the table we are using:

CREATE TABLE [TableIdentityValue] (
    [TableName] [varchar] (50),
    [NextNegativeIdentity] [int] NOT NULL,
    [NextPositiveIdentity] [int] NOT NULL,
    CONSTRAINT [PK_TableIdentityValue] PRIMARY KEY CLUSTERED ( [TableName] ) ON [PRIMARY]
) ON [PRIMARY]
GO

Now, depending on the type of data we are inserting into a table, we need to get either a negative or a positive number for the PK. There are two sprocs that control the obtaining of those values:

CREATE PROCEDURE GetNegativeIdentity
    @tableName varchar(50)
AS
    DECLARE @nextNegativeIdentityValue int

    BEGIN TRANSACTION
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

    SET @nextNegativeIdentityValue = (
        SELECT NextNegativeIdentity
        FROM TableIdentityValue WITH (ROWLOCK)
        WHERE TableName = @tableName
    )

    UPDATE TableIdentityValue
    SET NextNegativeIdentity = @nextNegativeIdentityValue - 1
    WHERE TableName = @tableName

    COMMIT TRANSACTION

    RETURN @nextNegativeIdentityValue
GO

CREATE PROCEDURE GetPositiveIdentity
    @tableName varchar(50)
AS
    DECLARE @nextPositiveIdentityValue int

    BEGIN TRANSACTION
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

    SET @nextPositiveIdentityValue = (
        SELECT NextPositiveIdentity
        FROM TableIdentityValue
        WHERE TableName = @tableName
    )

    UPDATE TableIdentityValue
    SET NextPositiveIdentity = @nextPositiveIdentityValue + 1
    WHERE TableName = @tableName

    COMMIT TRANSACTION

    RETURN @nextPositiveIdentityValue
GO

So, the thing is, we need the read and update of the value from the specific TableIdentityValue row to be atomic: we don't want anyone else reading or modifying that data in between. The problem is knowing which isolation level and/or locking to use, and how to implement it. I have tried a few different things that seemed to make sense, like placing a ROWLOCK hint on the SELECT statement, but is that lock going to hold for the entire length of the transaction? Also, I read that some lock hints accomplish the same thing as some isolation levels (e.g., setting the isolation level to SERIALIZABLE "has the same effect as setting HOLDLOCK on all tables in all SELECT statements in a transaction", according to SQL Books Online). Any help is appreciated!
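One way to sidestep the isolation-level question here entirely, sketched under the schema above: T-SQL allows assigning to a variable inside an UPDATE, so the read and the decrement become a single atomic statement and no explicit transaction or isolation-level change is needed. Right-hand sides in an UPDATE see the pre-update row, so the variable captures the old value, matching what the original procedure returns:

-----
CREATE PROCEDURE GetNegativeIdentity
    @tableName varchar(50)
AS
    DECLARE @nextNegativeIdentityValue int

    -- Read and decrement in one statement; the row is exclusively
    -- locked for the duration of the UPDATE, so two callers can
    -- never be handed the same value.
    UPDATE TableIdentityValue
    SET @nextNegativeIdentityValue = NextNegativeIdentity,
        NextNegativeIdentity = NextNegativeIdentity - 1
    WHERE TableName = @tableName

    RETURN @nextNegativeIdentityValue
GO
-----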
I am trying to get my head around locking (row, table) and Isolation Levels. We have written a large .NET/SQL application and one day last week we had about two dozen people in our company do some semi "stress/load" testing of the app.
On quite a few occasions, a few of the users would receive the following error:
"Transaction (Process ID xx) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction."
We are handling this on two fronts, the app and the database. The error handling in the app is being modified to capture this specific error and to retry the transaction.
However, from the database side, I am trying to find the most effective and efficient change to make regarding locking. I have been doing a lot of reading online and in BOL to get a better grasp of locking, but what I would really like is feedback from the community (forum) and your thoughts on what changes I should make, if any, on the db side.
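While tracking down which statements are involved, it may also help (a diagnostic aside, not a fix) to have SQL Server write deadlock details to its error log; on SQL Server 2000 that is trace flag 1204, and on SQL Server 2005 trace flag 1222 gives richer detail:

-----
-- Write the resources and statements involved in each deadlock
-- to the SQL Server error log (server-wide, until restart or TRACEOFF).
DBCC TRACEON (1204, -1)
-----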
Hi all. I have a question. I've already read about isolation levels, but I don't understand how, in practice, to set the proper isolation level if I have, say, 100 transactions. What is the algorithm?
What TRANSACTION ISOLATION LEVEL setting in MSSQL is like the default setting in Oracle? In Oracle, the default setting allows one session to read consistent data without waiting for other sessions to commit/rollback their data.
For example: in MSSQL, if I update table A in the first session, and in another (second) session I select from table A, the second session waits until the first session completes the updates and commits or rolls back.
But in Oracle, if I update table A in the first session and select from table A in a second session, the second session reads from the ROLLBACK SEGS and gets read-consistent data without waiting for the first session to commit or roll back the transaction.
Is this type of behaviour possible in MSSQL? And if yes, how can I do it?
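Assuming SQL Server 2005 or later (SQL Server 2000 has no equivalent), the READ_COMMITTED_SNAPSHOT database option gives Oracle-style statement-level read consistency by versioning rows in tempdb; the database name below is a placeholder:

-----
-- Readers see the last committed version of a row instead of
-- blocking behind uncommitted updates.
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;
-----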
Not sure if this is more a .Net question or SQL Server, but I think it belongs here.
I have a small .Net app that reads records from a bunch of files on disk and inserts them into a database table. There could be several hundred files resulting in 100,000 records or more each time it's run. Since it's a large table there are of course a few indexes on it, so the insert takes a while; for larger sessions it could run as long as an hour. I need it to run in a transaction so that if anything happens while it's running, the records from that run are committed on an all-or-nothing basis. However, I don't want to lock the table at all while the insert is happening. These aren't transactional records or anything like that, and the batches are separated by client, so there will be no conflicts (no need to lock the table).
Unfortunately, no matter what I use for the isolation level of the transaction, the table always ends up locked for reads. Data from previous runs is live at this point and we can't have access to it blocked. I have the choice of the following isolation levels when I create the transaction, but none seems to work: Chaos, ReadCommitted, ReadUncommitted, RepeatableRead, Serializable, Snapshot, Unspecified.
I would expect Chaos, ReadUncommitted, or Snapshot to be okay here, but I can't seem to get it working. Any thoughts?
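One point that may explain the behavior (hedged, since the full code isn't shown): the isolation level on the inserting transaction controls what that transaction reads, not what other sessions see, and the insert's exclusive row locks will block default READ COMMITTED readers no matter which level the writer uses. Assuming SQL Server 2005, row versioning on the reader side is one way out; names are placeholders:

-----
-- One-time database setting:
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Each reader then opts in and sees the last committed data
-- instead of blocking on the bulk insert's locks:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT COUNT(*) FROM dbo.MyLargeTable;
-----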
Hi,

I have one SQL statement selecting data from various tables and updating other tables.

The question then is: how do I prevent other applications from modifying the tables that I'm working on (that is, while my transaction is being executed)?

I know that the isolation level should be either REPEATABLE READ or SERIALIZABLE. But I need confirmation on whether one of these actually solves my issue: preventing other applications/threads from modifying/inserting data into the same tables that I'm working on.

Thanks in advance,
Daniel
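A sketch of the difference between the two levels, with illustrative table names: REPEATABLE READ holds shared locks on every row read until the transaction ends, so those rows cannot be modified by others, but new matching rows can still be inserted (phantoms); SERIALIZABLE additionally takes key-range locks, blocking inserts into the ranges read:

-----
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;

-- Shared and key-range locks are held until COMMIT: other sessions
-- can neither modify the rows read here nor insert new rows that
-- would match the WHERE clause.
SELECT col1 FROM dbo.SourceTable WHERE col2 = 42;

UPDATE dbo.TargetTable SET col3 = 1 WHERE col2 = 42;

COMMIT TRAN;
-----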
I currently have a requirement for access to a SQL Server 2000 box using Access 2003. The queries will sometimes be quite demanding, which in turn might affect the rest of the SQL Server users on the system.
Does anyone know of any setting in Access so that I can achieve the same result as setting the TRANSACTION ISOLATION level using T-SQL?
I need to set the Isolation Level (in ADO) for the Non-transaction queries to SNAPSHOT.
Both the ADO.Connection.IsolationLevel property and the SQL Server SET TRANSACTION ISOLATION LEVEL command set the isolation level for transaction queries, but not for non-transaction queries.
I cannot use the READ_COMMITTED_SNAPSHOT database option, because when I am in a transaction I need the READ COMMITTED isolation level, not the SNAPSHOT isolation level.
I don't want to rewrite the entire code of my existing application to add (NOLOCK).
I have a question about the "read committed" transaction isolation level.

I have a client that is updating a record on a table. I suspend the execution after the UPDATE but before the COMMIT statement. Then another client tries to read the same record. As the transaction isolation is set to "read committed", I expected that the second client would read the old version of the record (before the update). Instead, the second client hangs and waits until the first client does the commit. I would expect this behavior if the transaction isolation were set to "serializable". Is this behavior correct?

Thanks,
D.
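For what it's worth, this is the documented behavior of locking read committed: the writer's exclusive lock blocks the reader until commit. A two-session sketch with a hypothetical table t (reading the pre-update version instead would need row versioning, e.g. READ_COMMITTED_SNAPSHOT on SQL Server 2005+):

-----
-- Session 1: leave an update uncommitted
BEGIN TRAN;
UPDATE t SET col = 'new' WHERE id = 1;  -- exclusive row lock held

-- Session 2: default READ COMMITTED
SELECT col FROM t WHERE id = 1;         -- blocks until session 1 commits or rolls back
-----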
Is there a way to read data from a linked server, within a transaction, without using DTC?

The data on the linked server is static, therefore there is no need for two-phase commit. There is no need for locking data on the linked server, because it is not being updated (either from the remote server, or from the local server).

I don't want to run DTC because:
1.) there have been security-related flaws with DTC in the past
2.) the application doesn't do distributed updates, and because the data on the remote server is static, there is really no data integrity exposure without DTC.

I cannot specify "WITH (NOLOCK)" on the select from the linked server:

Server: Msg 7377, Level 16, State 1, Line 6
Cannot specify an index or locking hint for a remote data source.

I tried setting the isolation level:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

but that seems to have no effect on the requirement to use DTC. I still get the message:

MSDTC on server 'LOCALSERVER' is unavailable.

Is there some other way around this? Is it possible to provide some connection string parameter, in the linked server setup, that would specify the "READ UNCOMMITTED" isolation level for the linked server, so that DTC wouldn't be necessary? (In other words, can I tell SQL Server, "trust me, this won't hurt"?)

Environment: SQL Server 2000 SP4

The SQL does something like:

declare @x char(4), @k int, @rc1 int, @rc2 int
set @k = 123

BEGIN TRAN

select @x = x
from remoteserver.remotedatabase.dbo.table
where k = @k

update localdatabase.dbo.table1
set x = @x
where k = @k
set @rc1 = @@error

update localdatabase.dbo.table2
set x = @x
where k = @k
set @rc2 = @@error

if (@rc1 = 0 AND @rc2 = 0) COMMIT TRAN
else ROLLBACK TRAN
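One possible workaround, offered as a sketch since it changes the code rather than the linked server configuration: because the remote data is static, the remote SELECT can be moved in front of BEGIN TRAN. The local transaction then never touches the remote server, so no distributed transaction is started and MSDTC is not needed:

-----
declare @x char(4), @k int, @rc1 int, @rc2 int
set @k = 123

-- Remote read done OUTSIDE the transaction: no DTC enlistment.
select @x = x
from remoteserver.remotedatabase.dbo.table
where k = @k

BEGIN TRAN

update localdatabase.dbo.table1 set x = @x where k = @k
set @rc1 = @@error

update localdatabase.dbo.table2 set x = @x where k = @k
set @rc2 = @@error

if (@rc1 = 0 AND @rc2 = 0) COMMIT TRAN
else ROLLBACK TRAN
-----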
I have some lock issues on a production database (Win 2k3 SP1, SQL Server 2k5). I have an asynchronous process that does a SELECT TOP 1 on a table and UPDATEs the selected row. The transaction isolation level for this action is READ UNCOMMITTED. (The read uncommitted isolation level is ignored for the UPDATE, if I'm not wrong.) On the other hand, I have some transactional activity at the read uncommitted isolation level too.
But when I monitor the database activity, I very often find locks between the asynchronous part and the transactional part. It is the transactional activity that is locking the asynchronous activity. The transactional activity is a simple SELECT, and this type of query, in spite of the read uncommitted isolation level, appears to take exclusive locks while the asynchronous process waits on LCK_M_U.
I tried modifying the stored procedure for the SELECT/UPDATE of the asynchronous process to a single "UPDATE my_table ... FROM my_table" query in order to reduce the transaction time, but the problem is still present.
Can someone help me understand how a SELECT query at the read uncommitted isolation level can take exclusive locks?
Hi, we are executing the following query in a stored procedure using the snapshot isolation level:

DELETE FROM tBackgroundProcessProgressReport
FROM tBackgroundProcessProgressReport
LEFT OUTER JOIN tBackgroundProcess
    ON tBackgroundProcess.BackgroundProcessProgressReportID = tBackgroundProcessProgressReport.BackgroundProcessProgressReportID
LEFT OUTER JOIN tBackgroundProcessProgressReportItem
    ON tBackgroundProcessProgressReport.BackgroundProcessProgressReportID = tBackgroundProcessProgressReportItem.BackgroundProcessProgressReportID
WHERE (tBackgroundProcess.BackgroundProcessID IS NULL)
AND (tBackgroundProcessProgressReportItem.BackgroundProcessProgressReportItemID IS NULL)

The query should delete records from tBackgroundProcessProgressReport which are not connected with the other two tables. However, for some reason we get the following exception:

System.Data.SqlClient.SqlException: Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table 'dbo.tBackgroundProcess' directly or indirectly in database 'RHSS_PRD_PT_Engine' to update, delete, or insert the row that has been modified or deleted by another transaction. Retry the transaction or change the isolation level for the update/delete statement.

The exception says that we are not allowed to update/delete/insert records in tBackgroundProcess, but the query in fact deletes records from tBackgroundProcessProgressReport, not from the table in the exception. Is the exception raised because of the join? Has someone encountered this issue before?

Thanks,
Yani
I'm investigating a poorly performing procedure that I have never seen before. The procedure sets the transaction isolation level, and I suspect it might be doing so incorrectly, but I can't be sure. I'm pasting a bastardized version of the proc below, with all the names changed and the SQL mucked up enough to get through the corporate web filters.
The transaction isolation level is set, but there is no explicit transaction. Am I right that there are two implicit transactions in this procedure and each of them uses snapshot isolation?
SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

DECLARE @l_some_type varchar(20),
        @some_type_code varchar(3),
        @error int,
        @error_msg varchar(50);
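On the implicit-transaction question, a small illustration (hypothetical table name): without BEGIN TRAN each statement is its own autocommit transaction, and under SNAPSHOT isolation each autocommit statement gets its own snapshot; only an explicit transaction makes consecutive statements share one point-in-time view:

-----
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

-- Autocommit: two separate transactions, two separate snapshots,
-- so these can return different results.
SELECT COUNT(*) FROM dbo.SomeTable;
SELECT COUNT(*) FROM dbo.SomeTable;

-- Explicit transaction: one snapshot, taken when the first
-- statement inside the transaction reads data.
BEGIN TRAN;
SELECT COUNT(*) FROM dbo.SomeTable;
SELECT COUNT(*) FROM dbo.SomeTable;
COMMIT;
-----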
We have a service that inserts some rows into a parent table (P) and child table (C). This operation is atomic and performed within a transaction.
We also have a service that queries these tables such that rows are (should only be) returned from P where there are no children for that parent.
The SQL that performs this is simplified below:
SELECT P.SomeCol
FROM P
LEFT OUTER JOIN C ON P.PKofP_Value = C.PKofP_Value
WHERE C.PKofP_Value IS NULL
AND P.SomeOtherCol = 0
Our expectation is that the query service should only return rows from P where there are no rows in C.
However, this seems not to be the case, and occasionally we find that rows from P are returned where there are matching rows in C.
We are sure that the process that inserts rows into P and C does so within a single transaction.
We have traced this with SQLTrace and can see the transaction starting and committing, and all operations using the same TransactionID within the transaction.
We are running at the default isolation level, READ COMMITTED.
In SQLTrace we can see the query process start, the inserter process start and complete and then the query process continue (after presumably being blocked).
So how can the query process "miss" the child rows and return the parent from the above query?
Is it possible that, at this isolation level, the inserter process can block the query process such that, when the inserter process commits and the query process continues, the query does not see the child rows because they were inserted in the table/index "behind" the point the query process has already read? Some kind of phantom phenomenon?
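That explanation is consistent with locking READ COMMITTED, where shared locks are released as the scan moves past each row. A hedged mitigation sketch, assuming SQL Server 2005+: running the query under SNAPSHOT isolation gives it one transactionally consistent view in which the parent and child inserts are visible together or not at all:

-----
-- Requires ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON first.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

SELECT P.SomeCol
FROM P
LEFT OUTER JOIN C ON P.PKofP_Value = C.PKofP_Value
WHERE C.PKofP_Value IS NULL
AND P.SomeOtherCol = 0;
-----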
I have just started working with OLAP and came across the Analysis Services limits. It states that the levels in a cube have a limit of 256 and the levels per dimension a limit of 64. I am confused.
What is the definition of "levels in a cube", and how are the levels in a cube different from "levels per dimension"?
I'm having trouble getting a FOR XML query to get the relationships correct when there are 3 levels of data.
In this example, I have 3 tables, GG_Grandpas, DD_Dads, KK_Kids. As you would expect, the Dads table is a child of the Grandpas table, and the Kids table is a child of the Dads table.
I'm using the Bush family in this example; these are the relationships:

- George SR
--- George JR
------ Jenna
------ Barbara
--- Jeb
------ Jeb JR
------ Noelle
These statements will create and populate the tables for the example with the above relationships:
SET NOCOUNT ON
DROP TABLE KK_Kids, DD_Dads, GG_Grandpas

CREATE TABLE GG_Grandpas (
    GG_Grandpa_Key varchar(20) NOT NULL,
    GG_GrandpaName varchar(20))
CREATE TABLE DD_Dads (
    DD_Dad_Key varchar(20) NOT NULL,
    DD_Grandpa_Key varchar(20) NOT NULL,
    DD_DadName varchar(20))
CREATE TABLE KK_Kids (
    KK_Kid_Key varchar(20) NOT NULL,
    KK_Dad_Key varchar(20) NOT NULL,
    KK_KidName varchar(20))
INSERT INTO GG_Grandpas VALUES ('GG_GEORGESR_KEY', 'GEORGE SR')
INSERT INTO DD_Dads VALUES ('DD_GEORGEJR_KEY', 'GG_GEORGESR_KEY', 'GEORGE JR')
INSERT INTO DD_Dads VALUES ('DD_JEB_KEY', 'GG_GEORGESR_KEY', 'JEB')
INSERT INTO KK_Kids VALUES ('KK_Jenna_Key', 'DD_GEORGEJR_KEY', 'Jenna')
INSERT INTO KK_Kids VALUES ('KK_Barbara_Key', 'DD_GEORGEJR_KEY', 'Barbara')
INSERT INTO KK_Kids VALUES ('KK_Noelle_Key', 'DD_JEB_KEY', 'Noelle')
INSERT INTO KK_Kids VALUES ('KK_JebJR_Key', 'DD_JEB_KEY', 'Jeb Junior')
So the question is, how do I get it to maintain the proper relationships between the records when I do an FOR XML query? Here is the query I am trying to get to work. Right now it puts all the Kids under a single Dad, rather than having them under their correct dads. I am getting this, which is not what I want:
- George SR
--- George JR
--- Jeb
------ Jenna
------ Barbara
------ Jeb JR
------ Noelle
SELECT
    1 as Tag,
    NULL as Parent,
    GG_GrandpaName as [GG_Grandpas!1!GG_GrandpaName],
    GG_Grandpa_Key as [GG_Grandpas!1!GG_Grandpa_Key!id],
    NULL as [DD_Dads!2!DD_DadName],
    NULL as [DD_Dads!2!DD_Dad_Key!id],
    NULL as [DD_Dads!2!DD_Grandpa_Key!idref],
    NULL as [KK_Kids!3!KK_KidName],
    NULL as [KK_Kids!3!KK_Dad_Key!idref]
FROM GG_Grandpas
UNION ALL
SELECT 2, 1, NULL, GG_Grandpa_Key, DD_DadName, DD_Dad_Key, DD_Grandpa_Key, NULL, NULL
FROM GG_Grandpas, DD_Dads
WHERE GG_Grandpa_Key = DD_Grandpa_Key
UNION ALL
SELECT 3, 2, NULL, GG_Grandpa_Key, NULL, DD_Dad_Key, NULL, KK_KidName, KK_Dad_Key
FROM GG_Grandpas, DD_Dads, KK_Kids
WHERE GG_Grandpa_Key = DD_Grandpa_Key
AND DD_Dad_Key = KK_Dad_Key
FOR XML EXPLICIT
I've tried it all different ways, but no luck so far. Any ideas?
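In EXPLICIT mode the nesting is driven entirely by row order: each child row must directly follow its parent row in the result set, and the query above has no ORDER BY at all. One commonly cited fix, sketched here under the assumption that the key values sort as expected, is to order by grandpa key, then dad key (NULL for the grandpa rows, so they sort first), then Tag (so each dad row precedes his kid rows):

-----
-- Appended after the last SELECT, just before FOR XML EXPLICIT:
ORDER BY [GG_Grandpas!1!GG_Grandpa_Key!id],
         [DD_Dads!2!DD_Dad_Key!id],
         Tag
-----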
How do I get my data to show starting at the first row instead of skipping down?
Refer to the attachment.
Code:

CREATE PROCEDURE [dbo].[uspReportData]
    -- Add the parameters for the stored procedure here
    @Metric1 as varchar(50) = NULL,
    @Metric2 as varchar(50) = NULL,
    @Metric3 as varchar(50) = NULL,
    @Metric4 as varchar(50) = NULL,
    @Metric5 as varchar(50) = NULL,
    @Metric6 as varchar(50) = NULL,
    @Metric7 as varchar(50) = NULL,
    @Metric8 as varchar(50) = NULL,
we are building a DW for a company that operates in 10 countries with the home country being the major portion of the data......
Previous efforts have always had the data separated by schemas and so to ask a question about a specific country required the schema number to be provided.
I am proposing that the 10 schemas, and therefore 10x the number of tables, indexes etc, be removed in favour of using partitioning.
However, we want to partition by country and by period... that is, we would like to create monthly partitions as normal.
No matter how I read the documentation and test this out, it seems to me that this multiple-level partitioning can only be achieved if I create a field on the table that is some manipulation of the key for the company reporting structure and the period. I think I can take the first, add 10M, and then add the period key.
But I am unsure if the optimiser is going to do its partition elimination properly on such a calculated field.
Has anyone attempted such a multi-level partitioning scheme in SQL Server? I am thinking people must have, as one level of partitioning was seen to be too restrictive many years ago.....
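For what it's worth, a sketch of the combined-key idea with hypothetical names: a persisted computed column encodes country and period, and the partition function ranges over it. Partition elimination then depends on queries filtering on the computed column itself (or exactly reproducing its expression):

-----
-- Hypothetical: CountryKey 1..10, PeriodKey of the form YYYYMM.
CREATE PARTITION FUNCTION pfCountryPeriod (int)
AS RANGE RIGHT FOR VALUES (10200601, 10200602, 20200601); -- one boundary per country+month

CREATE PARTITION SCHEME psCountryPeriod
AS PARTITION pfCountryPeriod ALL TO ([PRIMARY]);

CREATE TABLE FactSales (
    CountryKey int NOT NULL,
    PeriodKey  int NOT NULL,  -- YYYYMM
    Amount     money NOT NULL,
    -- combined partitioning key: country * 10,000,000 + period
    CountryPeriodKey AS (CountryKey * 10000000 + PeriodKey) PERSISTED NOT NULL
) ON psCountryPeriod (CountryPeriodKey);

-- Elimination happens only when the predicate hits the partition column:
SELECT SUM(Amount)
FROM FactSales
WHERE CountryPeriodKey = 1 * 10000000 + 200601;
-----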
I suspect I know the answer but I'll ask away. We currently have our SSIS packages set up to log to SQL Server. Currently they log OnError, OnInformation and OnTaskFailed. If I'd like to have them log OnPipelineRowsSent, is there any way I can get that done without opening up the package and editing it? I know the change is trivial from the IDE, but the deployment process at my current engagement is quite lengthy. If something breaks in production, I'd like to know if it'll be possible to turn up the chattiness of logging without going through a full deploy scenario.
I was looking at the parameters for dtexec/dtexecui and I see that you can configure where something logs but nothing about the verbosity of the logs generated. Is it something I'm missing with that or is that all you can set there?
The only other option that jumps out at me is to develop a custom script or component that sets the logging level based on a parameter. Anyone have a thought as to how much effort that would be: something easily tackled, or probably more trouble than it's worth?
Is anyone aware if the database engine build levels will affect the mirroring process? We're in the process of upgrading a PROD environment to a new build; however, we'd like to delay applying it to the disaster recovery (DR) server in case of issues. The DR server is the mirror in the setup and so would have a different build level.
Is this likely to affect anything? All info seems to point to only version differences causing a problem, not build differences.
I have to spec out a new server, and I have the option of using RAID-5 or RAID-1 drive sets. I am limited to 24 drives in groups of 6, and I have to have one hotspare per group, so up to 5 usable drives per channel.
I need 80-100GB of space total.
Okay, you're waiting for the question.... I have heard many differing opinions on which is better, RAID-1 or RAID-5. If I have 2 large disks (say 36GB) on a RAID-1, I assume having 4 smaller 9GB drives on a RAID-5 will be faster, but I am not sure due to overhead and the like.
Does anyone know where I can get more information on RAID performance and how it is going to affect me? The database is going to be Read-Write, with a ton of small transactions, and the occasional (usually on a weekend) aggregation.
I have captured some trace output for performance evaluation of an application which has just been upgraded. Originally, this application could only run at database compatibility level 70.
So, after we switched this level from 70 to 80, I noticed that all T-SQL statements executed through "sp_executesql" have much higher I/O usage. The usage increased from approximately 50 I/O to approximately 12,000 I/O. When I reviewed the profiler output, I noticed that all SELECT statements executed through these "sp_executesql" statements performed an "index scan".
When I switch back to database compatibility level 70, my profiler output shows that all these "sp_executesql" statements perform an "index seek".
All these statements use the same unique non-clustered index.
Is it a SQL Server bug? Does anyone know which service pack or hotfix addresses this problem?
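For reference while comparing the two behaviors, the level can be checked and flipped per database with sp_dbcmptlevel (database name is a placeholder):

-----
EXEC sp_dbcmptlevel 'MyDatabase'      -- report the current level
EXEC sp_dbcmptlevel 'MyDatabase', 70  -- revert to 7.0 behavior
EXEC sp_dbcmptlevel 'MyDatabase', 80  -- back to 8.0 behavior
-----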
I have a Star schema based dimension called Customer which has these levels:
ALL Customers
Level 1: Customer Type
Level 2: Customer Sub Type
Level 3: Customer Name
When a user is browsing the cube, is it possible to hide the 1st level (and all its sub-levels)? For example, if the Customer Type = "Low Ranked" then I do not want it to be displayed to the user while (s)he is selecting from the dimension. HOWEVER, I only want it to be hidden from being displayed; its effect should always be reflected. Suppose:

1. Sales (measure count) for Customers with Type "High Ranked" = 100
2. Sales (measure count) for Customers with Type "Medium Ranked" = 50
3. Sales (measure count) for Customers with Type "Low Ranked" = 10

Now if the user selects 'ALL Customer Type' in the dimension, he/she should get a total Sales (measure count) of 160 (i.e. 100+50+10).
However when the user expands the Customers Dimension (i.e. ALL Customers), the resulting child nodes should only list 2 nodes i.e. High Ranked and Medium Ranked.
I went to the cube editor --> Advanced Properties and looked at the 'Hide Member If' property but amongst the 5 options there is none which allows me to specify the criteria.
Maybe the solution is already in one of those 5 options, so please help me.
We seem to have a problem with permission levels when connecting to an MSDE (MSSQL) server. If the user is under the Domain Admins group, the Access project (front end) will open correctly and connect to the data server. If they are not part of that group, then the front end can never establish a connection to the database server. We do not want to make all the users Domain Admins, so is there a way to make MSDE let them through even though they are at a lower level?
I've done many tests and also tried many things. I've even gone to the extent of giving Full Control to Everyone on the whole MSSQL folder in Program Files. I have made sure that the database file itself inherited its parent's security settings, which were what I had just described.
Any ideas on how to make MSDE let anyone connect? Thanks in advance!
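A hedged guess at the cause: by default only logins that exist on the instance can connect, and Domain Admins usually get in through the built-in BUILTIN\Administrators login. Rather than loosening NTFS permissions, granting the users' own domain group a login and database access may be what's missing (all names are placeholders):

-----
-- Grant a Windows group access to the MSDE instance and database.
EXEC sp_grantlogin 'MYDOMAIN\AppUsers'       -- instance-level login
USE MyAppDatabase
EXEC sp_grantdbaccess 'MYDOMAIN\AppUsers'    -- database user
EXEC sp_addrolemember 'db_datareader', 'MYDOMAIN\AppUsers'
EXEC sp_addrolemember 'db_datawriter', 'MYDOMAIN\AppUsers'
-----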
There will be three levels of data imposed at the application layer:

Level 1: ParentID = 0. An item like Geography.
Level 2: ParentID = a Level 1 WikiID. A sub-topic like Volcanoes.
Level 3: ParentID = a Level 2 WikiID. A bottom topic like Pyroclastic Flows.
I need a SQL statement that will produce output ordered like this:

Level 1
   Level 2
   Level 2
   Level 2
Level 1
   Level 2
   Level 2
I built this, but it's wrong and has no ORDER BY or GROUP BY statements:

SELECT * FROM WikiData WHERE ParentID = 0
OR ParentID IN (SELECT WikiID FROM WikiData WHERE ParentID = 0)
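A hedged sketch for the two levels shown in the desired output, assuming the WikiID/ParentID columns above: sort every row by its "group" (its own ID for a level-1 row, its parent's ID for a level-2 row), then put the parent first within each group:

-----
SELECT w.*
FROM WikiData w
WHERE w.ParentID = 0
   OR w.ParentID IN (SELECT WikiID FROM WikiData WHERE ParentID = 0)
ORDER BY
    CASE WHEN w.ParentID = 0 THEN w.WikiID ELSE w.ParentID END,  -- group under the level-1 topic
    CASE WHEN w.ParentID = 0 THEN 0 ELSE 1 END,                  -- parent row first in its group
    w.WikiID
-----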
The script below can be used to determine the reference levels of all tables in a database in order to be able to create a script to load tables in the correct order to prevent Foreign Key violations.
This script returns 3 result sets. The first shows the tables in order by level and table name. The second shows tables and tables that reference it in order by table and referencing table. The third shows tables and tables it references in order by table and referenced table.
Tables at level 0 have no related tables, except self-references. Tables at level 1 reference no other table, but are referenced by other tables. Tables at levels 2 and above are tables which reference lower level tables and may be referenced by higher levels. Tables with a level of NULL may indicate a circular reference (example: TableA references TableB and TableB references TableA).
Tables at levels 0 and 1 can be loaded first without FK violations, and then the tables at higher levels can be loaded in order by level from lower to higher to prevent FK violations. All tables at the same level can be loaded at the same time without FK violations.
Tested on SQL 2000 only. Please post any errors found.
Edit 2006/10/10: Fixed bug with tables that have multiple references, and moved tables that have only self-references to level 1 from level 0.
This script finds table references and ranks them by level in order to be able to load tables with FK references in the correct order. Tables can then be loaded one level at a time from lower to higher. This script also shows all the relationships for each table by tables it references and by tables that reference it.
Level 0 is tables which have no FK relationships.
Level 1 is tables which reference no other tables, except themselves, and are only referenced by higher level tables or themselves.
Levels 2 and above are tables which reference lower levels and may be referenced by higher levels or themselves.
declare @table table ( TABLE_NAME nvarchar(200) not null primary key clustered )
-- Working table variables (assumed declarations; the original post
-- evidently lost them, so their shapes are inferred from usage below):
declare @r table ( PK_TABLE nvarchar(200), FK_TABLE nvarchar(200) )
declare @rs table ( PK_TABLE nvarchar(200), FK_TABLE nvarchar(200) )
declare @t table ( REF_LEVEL int, TABLE_NAME nvarchar(200) )

set nocount off

print 'Load tables for database '+db_name()

insert into @table
select TABLE_NAME = a.TABLE_SCHEMA+'.'+a.TABLE_NAME
from INFORMATION_SCHEMA.TABLES a
where a.TABLE_TYPE = 'BASE TABLE'
and a.TABLE_SCHEMA+'.'+a.TABLE_NAME <> 'dbo.dtproperties'
order by 1

print 'Load PK/FK references'
insert into @r
select distinct
    PK_TABLE = b.TABLE_SCHEMA+'.'+b.TABLE_NAME,
    FK_TABLE = c.TABLE_SCHEMA+'.'+c.TABLE_NAME
from INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS a
join INFORMATION_SCHEMA.TABLE_CONSTRAINTS b
    on a.CONSTRAINT_SCHEMA = b.CONSTRAINT_SCHEMA
    and a.UNIQUE_CONSTRAINT_NAME = b.CONSTRAINT_NAME
join INFORMATION_SCHEMA.TABLE_CONSTRAINTS c
    on a.CONSTRAINT_SCHEMA = c.CONSTRAINT_SCHEMA
    and a.CONSTRAINT_NAME = c.CONSTRAINT_NAME
order by 1,2

print 'Make copy of PK/FK references'
insert into @rs
select * from @r
order by 1,2

print 'Load un-referenced tables as level 0'
insert into @t
select REF_LEVEL = 0, a.TABLE_NAME
from @table a
where a.TABLE_NAME not in
    ( select PK_TABLE from @r union all select FK_TABLE from @r )
order by 1

print 'Remove self references'
delete from @r
where PK_TABLE = FK_TABLE

declare @level int
set @level = 0

while @level < 100
begin
    set @level = @level + 1

    print 'Delete lower level references'
    delete from @r
    where PK_TABLE in ( select TABLE_NAME from @t )
    or FK_TABLE in ( select TABLE_NAME from @t )

    insert into @t
    select REF_LEVEL = @level, a.TABLE_NAME
    from @table a
    where a.TABLE_NAME not in ( select FK_TABLE from @r )
    and a.TABLE_NAME not in ( select TABLE_NAME from @t )
    order by 1

    if not exists ( select * from @r )
    begin
        print 'Done loading table levels'
        print ''
        break
    end
end

print 'Count of Tables by level'
print ''

select REF_LEVEL, TABLE_COUNT = count(*)
from @t
group by REF_LEVEL
order by REF_LEVEL

print 'Tables in order by level and table name'
print 'Note: Null REF_LEVEL may indicate a possible circular reference'
print ''
select b.REF_LEVEL, TABLE_NAME = convert(varchar(40),a.TABLE_NAME)
from @table a
left join @t b on a.TABLE_NAME = b.TABLE_NAME
order by b.REF_LEVEL, a.TABLE_NAME

print 'Tables and Referencing Tables'
print ''
select b.REF_LEVEL,
    TABLE_NAME = convert(varchar(40),a.TABLE_NAME),
    REFERENCING_TABLE = convert(varchar(40),c.FK_TABLE)
from @table a
left join @t b on a.TABLE_NAME = b.TABLE_NAME
left join @rs c on a.TABLE_NAME = c.PK_TABLE
order by a.TABLE_NAME, c.FK_TABLE

print 'Tables and Tables Referenced'
print ''
select b.REF_LEVEL,
    TABLE_NAME = convert(varchar(40),a.TABLE_NAME),
    TABLE_REFERENCED = convert(varchar(40),c.PK_TABLE)
from @table a
left join @t b on a.TABLE_NAME = b.TABLE_NAME
left join @rs c on a.TABLE_NAME = c.FK_TABLE
order by a.TABLE_NAME, c.PK_TABLE