Differing RAID Levels And Their Performance...

Jan 18, 2001

Hi fellas,

I have to spec out a new server, and I have the option of using RAID-5 or RAID-1 drive sets. I am limited to 24 drives in groups of 6,
and I have to have one hotspare per group, so up to 5 usable drives per channel.

I need 80-100GB of space total.

Okay, you're waiting for the question... I have heard many differing opinions on which is better, RAID 1 or RAID 5. If I have 2 large disks (say 36GB) in a RAID-1 mirror, I assume having 4 smaller 9GB drives in a RAID-5 set will be faster, but I am not sure due to overhead and the like.

Does anyone know where I can get more information on RAID performance and how it is going to affect me? The database is going to be Read-Write, with
a ton of small transactions, and the occasional (usually on a weekend) aggregation.

Any help would be much appreciated,

Joe

View 2 Replies



SQL Server Databases On RAID 5 Or RAID 10

Apr 4, 2007

I am configuring a new database server, without SAN access, and want to know what is the best practice for SCSI RAID configuration. Do most folks prefer RAID 5 or RAID 10 configurations where their databases will reside?

View 8 Replies View Related

BCP With Differing Criteria

Jul 12, 2002

I am familiar and happy with using BCP to export from SQL Server to a flat file

1) Is there any way to pass a parameter to the SQL script file each time so that I can vary the selection criteria the script file uses?

2) Can I batch the BCP calls together so they all use this parameter, with some kind of 'super' BCP command?
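One possible approach (a rough sketch only; the database, table, column, file path, and server name below are made up for illustration) is to build the BCP command string in T-SQL with the varying criterion embedded in a queryout query, then shell it out with xp_cmdshell:

-- Hypothetical example: export rows for one region at a time.
-- @Region is the varying selection criterion; all object names are illustrative.
DECLARE @Region varchar(20), @cmd varchar(1000)
SET @Region = 'WEST'

SET @cmd = 'bcp "SELECT * FROM MyDB.dbo.Sales WHERE Region = ''' + @Region + '''" '
         + 'queryout C:\exports\sales_' + @Region + '.txt -c -T -S MYSERVER'

-- xp_cmdshell runs the command line; looping over a list of criteria
-- would batch several exports from one script.
EXEC master..xp_cmdshell @cmd

Wrapping the loop in a stored procedure gives a single call that runs all the exports with whatever criteria you pass in.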

Thanks in anticipation

View 3 Replies View Related

RAID 1 Or RAID 5 For Mdf Files?

Mar 27, 2008

I've always heard that RAID 5 (or better, RAID 10) is preferred for the actual database (mdf), but RAID 1 for logging.

If I have a dedicated physical volume for each, what's the performance hit for selecting RAID 1 for the MDF files? 3%, 20%, 200%?

Doing so (all RAID 1) will allow me to have a separate physical volume for the tempdb database, which is heavily used by my app.

View 1 Replies View Related

RAID 5 Beats RAID 10

May 1, 2006

Can I get some feedback on these results? We were having some serious IO issues according to PerfMon, so I really pushed for RAID 10. The results are not what I expected.

I have 2 identical servers.

Hardware:
PowerEdge 2850
2 dual-core Xeon 2800 MHz
4GB RAM
Controller Cards: Perc4/DC (2 arrays), Perc4e/Di (1 array)
PowerVault 220S
Each array consisted of 6 x 300 GB drives.

Server 1 = RAID 10: 3 six-disk arrays (~838 GB each)
Server 2 = RAID 5: 3 six-disk arrays (~1360 GB each)

Test                              Winner    % Faster
SQL Server - Update               RAID 5    13
Heavy ETL                         RAID 5    16
SQLIO - Rand Write                RAID 10   40
SQLIO - Rand Read                 RAID 10   30
SQLIO - Seq Write                 RAID 5    15
SQLIO - Seq Read                  RAID 5    Mixed
Disktt - Seq Write                RAID 5    18
Disktt - Seq Read                 RAID 5    2000
Disktt - Rand Read                RAID 5    62
Pass Mark - mixed                 RAID 10   Varies
Pass Mark - Simulate SQL Server   RAID 5    1%

I have much more detail than this if anyone is interested.

View 13 Replies View Related

Linking On 2 Differing Columns

Jun 12, 2007

Hi I have two tables I want to link
On table1 the column is nvarchar (length 53) and the value might be 'XXX0123'; on the second table the column is varchar (length 80) and the value might be '123'.
The important thing for this company is that 'XXX0123' is the same as '123'. But it isn't always the last 3 characters, because the first table might have 'XXX0012' and the second value would be '12'.

I just want to say SELECT column1, column2 from table1, table2 WHERE
(the two are linked)

hope that makes sense.
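A possible sketch (untested, and it assumes the tail of the first column is always the numeric portion, possibly zero-padded; col1 and col2 stand in for the real column names):

SELECT t1.column1, t2.column2
FROM table1 t1
JOIN table2 t2
  ON CAST(SUBSTRING(t1.col1, PATINDEX('%[0-9]%', t1.col1), 60) AS int)
   = CAST(t2.col2 AS int)
-- PATINDEX finds the position of the first digit in 'XXX0123',
-- SUBSTRING keeps '0123', and CAST to int drops the leading zeros
-- so it compares equal to CAST('123' AS int).

Note that this kind of join cannot use an index on either column, so on large tables it may be worth persisting the cleaned value in a computed column.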

View 17 Replies View Related

SSIS - How To Deal With A Csv File That Contains Multiple Records With A Differing Number Of Fields

Apr 20, 2007

What is the best way to use SSIS to deal with a CSV file that contains records with differing numbers of fields?

I could just load it as a single column delimited by <CRLF>, but then I would have to write the code to parse the line, effectively detokenising the columns myself; and if that is the case, then why use SSIS at all?

View 13 Replies View Related

Converting Csv Files From One Format To Another Format With Differing Columns

Dec 19, 2007

Hi,


I have a set of csv files and a set of Format Specification files for each of the csv files. I need to convert the csv files into another format of csv files as specified in the Format Specification files. All the columns of the input csv files do not have a mapping with the columns of the output csv files. How can I achieve this using SSIS ? This is an urgent requirement. Please reply asap. Thanks.

View 1 Replies View Related

Levels In A Cube

Feb 24, 2004

I have just started working with SQL Server 2000 OLAP (Analysis Services) and came across the Analysis Services limits. They state that the levels in a cube have a limit of 256 and the levels per dimension is 64. I am confused.

What is the definition of "levels in a cube", and how are the levels in a cube different from "levels per dimension"?

View 1 Replies View Related

FOR XML Query With More Than 2 Levels Of Data

Mar 30, 2006

I'm having trouble getting a FOR XML query to get the relationships correct when there are 3 levels of data.

In this example, I have 3 tables, GG_Grandpas, DD_Dads, KK_Kids. As you would expect, the Dads table is a child of the Grandpas table, and the Kids table is a child of the Dads table.

I'm using the Bush family in this example, these are the relationships:
- George SR
--- George JR
------ Jenna
------ Barbara
--- Jeb
------ Jeb JR
------ Noelle

These statements will create and populate the tables for the example with the above relationships:

SET NOCOUNT ON
DROP TABLE KK_Kids, DD_Dads, GG_Grandpas
CREATE TABLE GG_Grandpas ( GG_Grandpa_Key varchar(20) NOT NULL, GG_GrandpaName varchar(20))
CREATE TABLE DD_Dads ( DD_Dad_Key varchar(20) NOT NULL, DD_Grandpa_Key varchar(20) NOT NULL, DD_DadName varchar(20))
CREATE TABLE KK_Kids ( KK_Kid_Key varchar(20) NOT NULL, KK_Dad_Key varchar(20) NOT NULL, KK_KidName varchar(20))

ALTER TABLE GG_Grandpas ADD CONSTRAINT PK_GG PRIMARY KEY (GG_Grandpa_Key)
ALTER TABLE DD_Dads ADD CONSTRAINT PK_DD PRIMARY KEY (DD_Dad_Key)
ALTER TABLE KK_Kids ADD CONSTRAINT PK_KK PRIMARY KEY (KK_Kid_Key)
ALTER TABLE DD_Dads ADD CONSTRAINT FK_DD FOREIGN KEY (DD_Grandpa_Key) REFERENCES GG_Grandpas (GG_Grandpa_Key)
ALTER TABLE KK_Kids ADD CONSTRAINT FK_KK FOREIGN KEY (KK_Dad_Key) REFERENCES DD_Dads (DD_Dad_Key)

INSERT INTO GG_Grandpas VALUES ('GG_GEORGESR_KEY', 'GEORGE SR')
INSERT INTO DD_Dads VALUES ('DD_GEORGEJR_KEY', 'GG_GEORGESR_KEY', 'GEORGE JR')
INSERT INTO DD_Dads VALUES ('DD_JEB_KEY', 'GG_GEORGESR_KEY', 'JEB')
INSERT INTO KK_Kids VALUES ( 'KK_Jenna_Key', 'DD_GEORGEJR_KEY', 'Jenna' )
INSERT INTO KK_Kids VALUES ( 'KK_Barbara_Key', 'DD_GEORGEJR_KEY', 'Barbara' )
INSERT INTO KK_Kids VALUES ( 'KK_Noelle_Key', 'DD_JEB_KEY', 'Noelle' )
INSERT INTO KK_Kids VALUES ( 'KK_JebJR_Key', 'DD_JEB_KEY', 'Jeb Junior' )


So the question is, how do I get it to maintain the proper relationships between the records when I do a FOR XML query? Here is the query I am trying to get to work. Right now it puts all the Kids under a single Dad, rather than having them under their correct dads.
I am getting this, which is not what I want:

- George SR
--- George JR
--- Jeb
------ Jenna
------ Barbara
------ Jeb JR
------ Noelle


SELECT 1 as Tag,
NULL as Parent,
GG_GrandpaName as [GG_Grandpas!1!GG_GrandpaName],
GG_Grandpa_Key as [GG_Grandpas!1!GG_Grandpa_Key!id],
NULL as [DD_Dads!2!DD_DadName],
NULL as [DD_Dads!2!DD_Dad_Key!id],
NULL as [DD_Dads!2!DD_Grandpa_Key!idref],
NULL as [KK_Kids!3!KK_KidName],
NULL as [KK_Kids!3!KK_Dad_Key!idref]
FROM GG_Grandpas
UNION ALL
SELECT 2 ,
1 ,
NULL ,
GG_Grandpa_Key ,
DD_DadName ,
DD_Dad_Key ,
DD_Grandpa_Key ,
NULL ,
NULL
FROM GG_Grandpas, DD_Dads
WHERE GG_Grandpa_Key = DD_Grandpa_Key
UNION ALL
SELECT 3 ,
2 ,
NULL ,
GG_Grandpa_Key ,
NULL ,
DD_Dad_Key ,
NULL ,
KK_KidName ,
KK_Dad_Key
FROM GG_Grandpas, DD_Dads , KK_Kids
WHERE GG_Grandpa_Key = DD_Grandpa_Key
AND DD_Dad_Key = KK_Dad_Key

FOR XML EXPLICIT


I've tried it all different ways, but no luck so far.
Any ideas?
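One thing that may be worth trying (a sketch only, not verified against this data): FOR XML EXPLICIT nests rows in the order they appear in the result set, so each child row has to directly follow its own parent row. Adding an ORDER BY on the key columns, just before the FOR XML EXPLICIT clause of the UNION ALL query above, should group every kid under the right dad:

-- appended in place of the bare FOR XML EXPLICIT at the end of the query
ORDER BY [GG_Grandpas!1!GG_Grandpa_Key!id],
         [DD_Dads!2!DD_Dad_Key!id],
         Tag
FOR XML EXPLICIT

The grandpa row sorts first (its dad key is NULL), then each dad row is followed immediately by its own kid rows because they share the same dad key and the Tag value puts the dad (2) before the kids (3).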

View 5 Replies View Related

Results Are Returning On Different Row Levels?

Feb 26, 2014

How do I get my data to show starting at the first row instead of skipping down?

Refer to the attachment.

Code:
CREATE PROCEDURE [dbo].[uspReportData]
-- Add the parameters for the stored procedure here
@Metric1 as varchar(50) = NULL, @Metric2 as varchar(50) = NULL, @Metric3 as varchar(50) = NULL, @Metric4 as varchar(50) = NULL,
@Metric5 as varchar(50) = NULL, @Metric6 as varchar(50) = NULL, @Metric7 as varchar(50) = NULL, @Metric8 as varchar(50) = NULL,

[code].....

View 1 Replies View Related

Transaction Isolation Levels

Feb 15, 2006

I am redesigning an application that distributes helpdesk tickets to our 50 engineers automatically. When an engineer logs into their window, a stored procedure executes that searches through all open tickets and assigns a predetermined amount of the open tickets to that engineer. The problem I am running into is that if 2 or more engineers log in at the same time, the stored procedure will distribute the same set of tickets multiple times.

Originally this was fixed by "reworking" the way SQL Server handles transactions. The original developer wrote his code like this:

-----
DECLARE @RET_STAT INT
SELECT 'X' INTO #TEMP
BEGIN TRAN
UPDATE #TEMP SET 'X' = 'Y'
SELECT TOP 1 @TICKET_# = TICKET_NUMBER FROM TICKETS WHERE STATUS = 'O'
EXEC @RET_STAT = USP_MOVE2QUEUE @TICKET_#, @USERID
IF @RET_STAT <> 0
BEGIN
    ROLLBACK TRAN
    RETURN @RET_STAT
END
COMMIT TRAN
-----

The UPDATE of the #TEMP table forces the transaction to kick off and locks the row in table TICKETS until the entire transaction has completed.

I would like to get rid of the #TEMP table and start using isolation levels, but I am unsure which isolation level would continue to lock the selected data and not allow anyone else access. Do I need a combination of isolation level and "WITH (ROWLOCK)"?

Additionally, the TICKETS table is used throughout the application and I cannot exclusively lock the entire table just for the distribution process. It is VERY high I/O!

Thanks for the help.
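For what it's worth, a commonly suggested alternative to the #TEMP trick is lock hints rather than a session-wide isolation level. This is only a sketch built from the fragments above (the variable datatype is assumed, and @USERID is taken to be a parameter of the surrounding procedure), so treat it as untested:

DECLARE @TICKET_NUMBER int, @RET_STAT int

BEGIN TRAN

-- UPDLOCK holds the selected row until commit/rollback; READPAST makes other
-- sessions skip rows that are already locked instead of waiting, so two
-- engineers logging in at the same time each pick up a different ticket.
SELECT TOP 1 @TICKET_NUMBER = TICKET_NUMBER
FROM TICKETS WITH (UPDLOCK, READPAST, ROWLOCK)
WHERE STATUS = 'O'

IF @TICKET_NUMBER IS NOT NULL
BEGIN
    EXEC @RET_STAT = USP_MOVE2QUEUE @TICKET_NUMBER, @USERID
    IF @RET_STAT <> 0
    BEGIN
        ROLLBACK TRAN
        RETURN @RET_STAT
    END
END

COMMIT TRAN

Because only the claimed row is locked, the rest of the TICKETS table stays available to the rest of the application.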

View 3 Replies View Related

Locks - Isolation Levels

May 22, 2006

Good morning,

I am trying to get my head around locking (row, table) and Isolation Levels. We have written a large .NET/SQL application and one day last week we had about two dozen people in our company do some semi "stress/load" testing of the app.

On quite a few occasions, a few of the users would receive the following error:

"Transaction (Process ID xx) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction."

We are handling this on two fronts, the app and the database. The error handling in the app is being modified to capture this specific error and to retry the transaction.

However, from the database side, I am trying to find the most effective and efficient change to make regarding locking. I have been doing a lot of reading online and in BOL to get a better grasp of locking, but what I would really like is feedback from the community (forum): your thoughts on what changes I should make, if any, on the db side.

Thanks...

Scott

View 5 Replies View Related

Get The Names For Different Levels In A Table

Aug 20, 2007

hi

I've a table with column names

ID
Name
ParentID
Level


I've a list with different levels

say

ex.

the Data is:-

ID Name ParentID Level
1 Root null 1
2 Trunk 1 2
3 Branch 2 3
4 Leaf 3 4
5 Stem 3 4



How do I write the query to get the names of the different levels for the corresponding ParentIDs?

Output should be like:-
Leaf Branch Trunk Root
Stem Branch Trunk Root
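For the four fixed levels in the sample data, a self-join sketch along these lines produces that output (MyTree is a placeholder for the real table name):

-- Level 4 rows are the leaves; each join walks one step up the ParentID chain.
SELECT l4.Name, l3.Name AS Parent, l2.Name AS GrandParent, l1.Name AS GreatGrandParent
FROM MyTree l4
JOIN MyTree l3 ON l4.ParentID = l3.ID
JOIN MyTree l2 ON l3.ParentID = l2.ID
JOIN MyTree l1 ON l2.ParentID = l1.ID
WHERE l4.[Level] = 4
-- returns: Leaf  Branch  Trunk  Root
--          Stem  Branch  Trunk  Root

On SQL Server 2005 a recursive CTE would handle a variable number of levels, but for the fixed depth shown here the joins are simpler.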

View 1 Replies View Related

Multiple Levels Of Partitioning In EE

Jun 4, 2007

Hi All,

we are building a DW for a company that operates in 10 countries with the home country being the major portion of the data......



Previous efforts have always had the data separated by schemas and so to ask a question about a specific country required the schema number to be provided.



I am proposing that the 10 schemas, and therefore 10x the number of tables, indexes etc, be removed in favour of using partitioning.



However, we want to partition by country and by periods...that is we would like to create monthly partitions as normal.



No matter how I read the documentation and test this out, it seems to me that multiple levels of partitioning can only be achieved if I create a field on the table that is some manipulation of the key for the company reporting structure and the period. I think I can take the first, add 10M and then add the period key.



But I am unsure if the optimiser is going to do its partition elimination properly on such a calculated field.



Has anyone attempted such a multi-level partitioning scheme in SQL Server? I am thinking people must have, as one level of partitioning was seen as too restrictive many years ago.
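To make the idea concrete, here is a speculative sketch of that calculated field as a persisted computed column used as the partitioning column. Every name and the 10M formula are assumptions, not the real schema, and note that partition elimination generally only happens when queries filter on the computed column itself rather than on the underlying country and period columns:

-- Composite key: country_key * 10,000,000 + period_key (illustrative boundaries only)
CREATE PARTITION FUNCTION pf_CountryPeriod (int)
AS RANGE RIGHT FOR VALUES
    (10000001, 10000002, 10000003,   -- country 1, periods 1-3
     20000001, 20000002, 20000003)   -- country 2, periods 1-3

CREATE PARTITION SCHEME ps_CountryPeriod
AS PARTITION pf_CountryPeriod ALL TO ([PRIMARY])

CREATE TABLE dbo.FactSales
(
    country_key    int   NOT NULL,
    period_key     int   NOT NULL,   -- e.g. 1 = first month, 2 = second month
    amount         money NOT NULL,
    -- persisted computed column used as the partitioning column
    country_period AS (country_key * 10000000 + period_key) PERSISTED NOT NULL
) ON ps_CountryPeriod (country_period)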



Thanks in Advance for your comments.



Best Regards

Peter Nolan

View 9 Replies View Related

Dynamically Set Logging Levels?

Sep 19, 2007

Hello All,

I suspect I know the answer but I'll ask away. We currently have our SSIS packages set up to log to SQL Server. Currently they log OnError, OnInformation and OnTaskFailed. If I'd like to have them log OnPipelineRowsSent, is there any way I can get that done without opening up the package and editing it? I know the change is trivial from the IDE, but the deployment process at my current engagement is quite lengthy. If something breaks in production, I'd like to know if it'll be possible to turn up the chattiness of logging without going through a full deploy scenario.

I was looking at the parameters for dtexec/dtexecui and I see that you can configure where something logs but nothing about the verbosity of the logs generated. Is it something I'm missing with that or is that all you can set there?


The only other option that jumps out at me is to develop a custom script or component that sets the logging level based on a parameter. Anyone have a thought as to how much effort that would be: something easily tackled, or probably more trouble than it's worth?


Thanks for the help

View 1 Replies View Related

Mirroring And Build Levels

Nov 30, 2007

All,

Is anyone aware whether the database engine build levels will affect the mirroring process? We're in the process of upgrading a PROD environment to a new build, but we'd like to delay applying it to the disaster recovery (DR) server in case of issues. The DR server is the mirror in the setup and so would have a different build level.

Is this likely to affect anything? All info seems to point to only version differences causing a problem, not the build level.

Is someone able to confirm this?

Thanks......

View 5 Replies View Related

SQL/RAID

May 29, 2001

My SQL 7 is on RAID 5. Sometimes during off-peak hours, the first two lights (from the left) on the RAID disks are constantly on for hours. NT Task Manager shows nothing unusual, and SQL current activity shows no running user processes. Doesn't the second light on the RAID array only come on when there is disk activity (read/write)?

Suggestions are appreciated.

Thanks,
Ivan

View 2 Replies View Related

NT RAID /SQL

Sep 28, 1998

I've tried implementing NT software RAID (striping with parity) and am unable to stripe disks larger than 2 GB for use with SQL. I have not found any info in TechNet. Any ideas? Thanks.

View 2 Replies View Related

RAID 5

Dec 20, 1998

Should one install RAID 5 for SQL Server or just use separate hard drives, one for the data and one for the transaction log?

View 4 Replies View Related

RAID

Apr 15, 2008

Could anyone tell me the difference between RAID and a shared disk array?

View 3 Replies View Related

Which Raid?

Jul 20, 2005

Hi,

I was going to buy a server with RAID 1, as I thought it meant that if one of the two mirrored drives fails, you simply take it out and put a new one in, at which point the hardware presumably takes over and copies the other drive to re-establish the mirror.

However, my SQL Server admin book says RAID 1 is bad, as it means you have lots of downtime when recovering from a broken drive.

Can anyone give me some advice on this? What is the best RAID level to use when you are running SQL Server on the server?

Thanks
JJ

View 1 Replies View Related

Sp_executesql And Database Compatibility Levels

Apr 15, 2003

I have captured some trace output for performance evaluation of an application which has just been upgraded. Originally, this application could only run with database compatibility level 70.

So, after we switched this level from 70 to 80, I noticed that all T-SQL statements executed through "sp_executesql" have much higher IO usage. The usage increased from approximately 50 I/O to approximately 12000 I/O. When I reviewed the profiler output, I noticed that all select statements executed through these "sp_executesql" statements performed an index scan.

When I switch back to run with database compatibility level 70, my profiler output shows that all these "sp_executesql" statements performed an index seek.

All these statements use the same unique non-clustered index.

Is it a SQL Server bug? Does anyone know which service pack or hotfix addresses this problem?

Thanks...byyu :)

View 7 Replies View Related

AS Question: How To Hide Certain Levels Of A Dimension

Apr 22, 2004

Hi,

I have a Star schema based dimension called Customer which has these levels:

ALL Customers
Level1: Customer Type
Level2: Customer Sub Type
Level3: Customer Name


When a user is browsing the cube, is it possible to hide the 1st level (and all its sub-levels)? For example, if the Customer Type = "Low Ranked" then I do not want it to be
displayed to the user while (s)he is selecting from the dimension. HOWEVER, I only want it to be hidden from display; its effect should still be reflected, e.g. suppose:
1. Sales (measure count) for Customers with Type "High Ranked" = 100
2. Sales (measure count) for Customers with Type "Medium Ranked" = 50
3. Sales (measure count) for Customers with Type "Low Ranked" = 10
Now if the user selects 'ALL Customer Type' in the dimension he/she should get a total Sale (measure count) of 160 (i.e. 100+50+10).

However when the user expands the Customers Dimension (i.e. ALL Customers), the resulting child nodes should only list 2 nodes i.e. High Ranked and Medium Ranked.

I went to the cube editor --> Advanced Properties and looked at the 'Hide Member If' property but amongst the 5 options there is none which allows me to specify the criteria.

Maybe the solution already is in one of those 5 options and thus please help me.

Many thanks in advance.

View 2 Replies View Related

Permission Levels For MSDE, ASAP

Dec 17, 2004

We seem to have a problem with permission levels and connecting to an MSDE (MSSQL) server. If the user is in the Domain Admins group, the Access project (front end) will open correctly and connect to the data server. If they are not part of that group, then the front end can never establish a connection to the database server. We do not want to make all the users Domain Admins, so is there a way to make MSDE let them through even though they are at a lower level?

I've done many tests, and also tried many things. I've even gone to the extent of giving Full Control on the whole MSSQL folder in Program Files to Everyone. I have made sure that the database file itself inherited its parent's security settings, which were what I had just described.

Any ideas on how to make MSDE let anyone connect? Thanks in advance!
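One thing that may help (a sketch only; the domain group and database names are placeholders): rather than relying on Domain Admins, grant a lower-privileged Windows group its own login and database access on the MSDE instance, e.g. from osql or any query tool:

-- MYDOMAIN\AppUsers and MyAppDb are placeholders for the real group and database
EXEC sp_grantlogin 'MYDOMAIN\AppUsers'            -- let the group log in to the instance
GO
USE MyAppDb
GO
EXEC sp_grantdbaccess 'MYDOMAIN\AppUsers'         -- give the group a user in the database
EXEC sp_addrolemember 'db_datareader', 'MYDOMAIN\AppUsers'
EXEC sp_addrolemember 'db_datawriter', 'MYDOMAIN\AppUsers'

File-system permissions on the MSSQL folder don't control who can connect; SQL logins and database roles do, so the NTFS changes can be rolled back.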

View 10 Replies View Related

Contents Sort 3 Levels Of Data

Dec 21, 2007

I Have a table of Data (WikiData)

WikiID         int
ParentID       int
sTitle         varchar(50)
sDescription   varchar(MAX)

There will be three levels of data imposed at the Application Layer

Level 1: ParentID = 0
An Item Like Geography
Level 2: ParentID = a Level 1 WikiID
A sub Topic like Volcanoes
Level 3: ParentID = Level 2 WikiID
A bottom Topic like Pyroclastic Flows

I need a SQL statement that will produce output like this:
Level 1
Level 2
Level 2
Level 2
Level 1
Level 2
Level 2

I built this, but it's wrong and has no ORDER BY / GROUP BY statements:
Select * from WikiData where ParentID = 0 or ParentID IN (Select * from WikiData where ParentID = 0)
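A rough sketch (untested) that covers the two levels shown in the desired output, keeping each level-2 row under its level-1 parent:

SELECT w.WikiID, w.ParentID, w.sTitle
FROM WikiData w
LEFT JOIN WikiData p ON w.ParentID = p.WikiID
WHERE w.ParentID = 0        -- level 1 rows
   OR p.ParentID = 0        -- level 2 rows (their parent is a level 1 row)
ORDER BY COALESCE(p.WikiID, w.WikiID),               -- keep each family together
         CASE WHEN w.ParentID = 0 THEN 0 ELSE 1 END, -- level 1 row first
         w.sTitle

Extending it to the third level would follow the same pattern with another join, or a recursive CTE on SQL Server 2005.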

View 12 Replies View Related

Find Table Reference Levels

Oct 3, 2006

The script below can be used to determine the reference levels of all tables in a database in order to be able to create a script to load tables in the correct order to prevent Foreign Key violations.

This script returns 3 result sets. The first shows the tables in order by level and table name. The second shows tables and tables that reference it in order by table and referencing table. The third shows tables and tables it references in order by table and referenced table.

Tables at level 0 have no related tables, except self-references. Tables at level 1 reference no other table, but are referenced by other tables. Tables at levels 2 and above are tables which reference lower level tables and may be referenced by higher levels. Tables with a level of NULL may indicate a circular reference (example: TableA references TableB and TableB references TableA).

Tables at levels 0 and 1 can be loaded first without FK violations, and then the tables at higher levels can be loaded in order by level from lower to higher to prevent FK violations. All tables at the same level can be loaded at the same time without FK violations.

Tested on SQL 2000 only. Please post any errors found.

Edit 2006/10/10:
Fixed bug with tables that have multiple references, and moved tables that have only self-references to level 1 from level 0.


-- Start of Script - Find_Table_Reference_Levels.sql
/*
Find Table Reference Levels

This script finds table references and ranks them by level in order
to be able to load tables with FK references in the correct order.
Tables can then be loaded one level at a time from lower to higher.
This script also shows all the relationships for each table
by tables it references and by tables that reference it.

Level 0 is tables which have no FK relationships.

Level 1 is tables which reference no other tables, except
themselves, and are only referenced by higher level tables
or themselves.

Levels 2 and above are tables which reference lower levels
and may be referenced by higher levels or themselves.

*/

declare @r table (
PK_TABLE nvarchar(200),
FK_TABLE nvarchar(200),
primary key clustered (PK_TABLE,FK_TABLE))

declare @rs table (
PK_TABLE nvarchar(200),
FK_TABLE nvarchar(200),
primary key clustered (PK_TABLE,FK_TABLE))

declare @t table (
REF_LEVEL int,
TABLE_NAME nvarchar(200) not null primary key clustered )

declare @table table (
TABLE_NAME nvarchar(200) not null primary key clustered )
set nocount off

print 'Load tables for database '+db_name()

insert into @table
select
TABLE_NAME = a.TABLE_SCHEMA+'.'+a.TABLE_NAME
from
INFORMATION_SCHEMA.TABLES a
where
a.TABLE_TYPE = 'BASE TABLE' and
a.TABLE_SCHEMA+'.'+a.TABLE_NAME <> 'dbo.dtproperties'
order by
1

print 'Load PK/FK references'
insert into @r
select distinct
PK_TABLE =
b.TABLE_SCHEMA+'.'+b.TABLE_NAME,
FK_TABLE =
c.TABLE_SCHEMA+'.'+c.TABLE_NAME
from
INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS a
join
INFORMATION_SCHEMA.TABLE_CONSTRAINTS b
on
a.CONSTRAINT_SCHEMA = b.CONSTRAINT_SCHEMA and
a.UNIQUE_CONSTRAINT_NAME = b.CONSTRAINT_NAME
join
INFORMATION_SCHEMA.TABLE_CONSTRAINTS c
on
a.CONSTRAINT_SCHEMA = c.CONSTRAINT_SCHEMA and
a.CONSTRAINT_NAME = c.CONSTRAINT_NAME
order by
1,2

print 'Make copy of PK/FK references'
insert into @rs
select
*
from
@r
order by
1,2

print 'Load un-referenced tables as level 0'
insert into @t
select
REF_LEVEL = 0,
a.TABLE_NAME
from
@table a
where
a.TABLE_NAME not in
(
select PK_TABLE from @r union all
select FK_TABLE from @r
)
order by
1

-- select * from @r
print 'Remove self references'
delete from @r
where
PK_TABLE = FK_TABLE

declare @level int
set @level = 0

while @level < 100
begin
set @level = @level + 1

print 'Delete lower level references'
delete from @r
where
PK_TABLE in
( select TABLE_NAME from @t )
or
FK_TABLE in
( select TABLE_NAME from @t )

print 'Load level '+convert(varchar(20),@level)+' tables'

insert into @t
select
REF_LEVEL =@level,
a.TABLE_NAME
from
@table a
where
a.TABLE_NAME not in
( select FK_TABLE from @r )
and
a.TABLE_NAME not in
( select TABLE_NAME from @t )
order by
1

if not exists (select * from @r )
begin
print 'Done loading table levels'
print ''
break
end

end


print 'Count of Tables by level'
print ''

select
REF_LEVEL,
TABLE_COUNT = count(*)
from
@t
group by
REF_LEVEL
order by
REF_LEVEL

print 'Tables in order by level and table name'
print 'Note: Null REF_LEVEL may indicate possible circular reference'
print ''
select
b.REF_LEVEL,
TABLE_NAME = convert(varchar(40),a.TABLE_NAME)
from
@table a
left join
@t b
on a.TABLE_NAME = b.TABLE_NAME
order by
b.REF_LEVEL,
a.TABLE_NAME

print 'Tables and Referencing Tables'
print ''
select
b.REF_LEVEL,
TABLE_NAME = convert(varchar(40),a.TABLE_NAME),
REFERENCING_TABLE =convert(varchar(40),c.FK_TABLE)
from
@table a
left join
@t b
on a.TABLE_NAME = b.TABLE_NAME
left join
@rs c
on a.TABLE_NAME = c.PK_TABLE
order by
a.TABLE_NAME,
c.FK_TABLE


print 'Tables and Tables Referenced'
print ''
select
b.REF_LEVEL,
TABLE_NAME = convert(varchar(40),a.TABLE_NAME),
TABLE_REFERENCED =convert(varchar(40),c.PK_TABLE)
from
@table a
left join
@t b
on a.TABLE_NAME = b.TABLE_NAME
left join
@rs c
on a.TABLE_NAME = c.FK_TABLE
order by
a.TABLE_NAME,
c.PK_TABLE


-- End of Script



Results from Northwind database:

Load tables for database Northwind

(13 row(s) affected)

Load PK/FK references

(13 row(s) affected)

Make copy of PK/FK references

(13 row(s) affected)

Load un-referenced tables as level 0

(0 row(s) affected)

Remove self references

(1 row(s) affected)

Delete lower level references

(0 row(s) affected)

Load level 1 tables

(7 row(s) affected)

Delete lower level references

(9 row(s) affected)

Load level 2 tables

(4 row(s) affected)

Delete lower level references

(3 row(s) affected)

Load level 3 tables

(2 row(s) affected)

Done loading table levels

Count of Tables by level

REF_LEVEL TABLE_COUNT
----------- -----------
1 7
2 4
3 2

(3 row(s) affected)

Tables in order by level and table name
Note: Null REF_LEVEL may indicate possible circular reference

REF_LEVEL TABLE_NAME
----------- ----------------------------------------
1 dbo.Categories
1 dbo.CustomerDemographics
1 dbo.Customers
1 dbo.Employees
1 dbo.Region
1 dbo.Shippers
1 dbo.Suppliers
2 dbo.CustomerCustomerDemo
2 dbo.Orders
2 dbo.Products
2 dbo.Territories
3 dbo.EmployeeTerritories
3 dbo.Order Details

(13 row(s) affected)

Tables and Referencing Tables

REF_LEVEL TABLE_NAME REFERENCING_TABLE
----------- ---------------------------------------- ----------------------------------------
1 dbo.Categories dbo.Products
2 dbo.CustomerCustomerDemo NULL
1 dbo.CustomerDemographics dbo.CustomerCustomerDemo
1 dbo.Customers dbo.CustomerCustomerDemo
1 dbo.Customers dbo.Orders
1 dbo.Employees dbo.Employees
1 dbo.Employees dbo.EmployeeTerritories
1 dbo.Employees dbo.Orders
3 dbo.EmployeeTerritories NULL
3 dbo.Order Details NULL
2 dbo.Orders dbo.Order Details
2 dbo.Products dbo.Order Details
1 dbo.Region dbo.Territories
1 dbo.Shippers dbo.Orders
1 dbo.Suppliers dbo.Products
2 dbo.Territories dbo.EmployeeTerritories

(16 row(s) affected)

Tables and Tables Referenced

REF_LEVEL TABLE_NAME TABLE_REFERENCED
----------- ---------------------------------------- ----------------------------------------
1 dbo.Categories NULL
2 dbo.CustomerCustomerDemo dbo.CustomerDemographics
2 dbo.CustomerCustomerDemo dbo.Customers
1 dbo.CustomerDemographics NULL
1 dbo.Customers NULL
1 dbo.Employees dbo.Employees
3 dbo.EmployeeTerritories dbo.Employees
3 dbo.EmployeeTerritories dbo.Territories
3 dbo.Order Details dbo.Orders
3 dbo.Order Details dbo.Products
2 dbo.Orders dbo.Customers
2 dbo.Orders dbo.Employees
2 dbo.Orders dbo.Shippers
2 dbo.Products dbo.Categories
2 dbo.Products dbo.Suppliers
1 dbo.Region NULL
1 dbo.Shippers NULL
1 dbo.Suppliers NULL
2 dbo.Territories dbo.Region

(19 row(s) affected)







CODO ERGO SUM

View 20 Replies View Related

Finding Out Db Compatibility Levels In 2005

Dec 31, 2007

In a new environment where we have 2005 servers and loads of databases that are running in compatibility mode with 2000 (80)

Are there any queries we can run that can tell me what compatibility level each db is at?

Looked on the web for ages, apart from here, but only found info for changing the compatibility level (sp_dbcmptlevel); the queries only seem to be for changing it, not for reporting on it.

But I obviously do not want to do that in a production environment.

I want to do a mini report of which db's are still (I think all of them) in 2000 mode and then suggest upgrades. I can go into the properties of each one, but I know there must be an easier way; I'm finding a load of admin queries that require CROSS APPLY, and for that I need 2005 (90) compatibility... or is that only for the master db?
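For what it's worth, something like this should report the level for every database on a 2005 instance without changing anything (the second form also runs against SQL 2000):

-- SQL Server 2005 catalog view
SELECT name, compatibility_level
FROM sys.databases
ORDER BY name

-- older catalog table, still present in 2005 and usable on SQL 2000 too
SELECT name, cmptlevel
FROM master.dbo.sysdatabases
ORDER BY name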

View 4 Replies View Related

SQl Agents And Package Protection Levels

Feb 20, 2008

All:
I am aware that I am raising an issue/question that has quite a number of ancestors in this forum. In reviewing some of the threads I still believe my situation has a bit of a twist; but that could just be me.

The process I used until a change I made recently worked just fine. A handful of my packages connect to our ERP system that only supports an ODBC connection. I set the Protection Level to the default, and then deploy the packages to the server. I use an agent to run the jobs that include these packages as steps. I have hardcoded the userID and password in the SQL jobs and so they have run fine.

In an effort to reduce maintenance on the packages I decided to run the packages from the File System instead of deploying them to the server. Now, the packages are not running, as I have not changed the Protection Level yet. I did test running one of the packages using a Proxy I have created, but that does not work either.

Based upon what I have read it appears that the first thing I need to do is change the Protection Level to DoNotSaveSensitive. How do I then pass the ID and password to the agent?

a. Create a configuration file?
b. Create a package template?
c. Both of the above

To reiterate I do not wish to deploy the packages to the server; I prefer to run the packages from the File System. Further, I just have one box on which everything happens; there are no migration issues across servers.

Some insights from this group will be greatly appreciated.

Thank you!

View 8 Replies View Related

SQL 2005 Replication And Compatibility Levels

Jun 11, 2006

Hi There,

We have been using SQL 2005 in our dev environment for a few weeks, and I have just completed migrating our production server to a 64-bit instance of SQL 2005 Standard.

I have 3 publications, 1 snapshot, and 2 transactional. All go to SQL 2000 databases.

Our dev environment is at compatibility level 90, and our application is working fine. My concern is, will changing the production compatibility level from 80 to 90 affect or break replication to SQL 2000 servers?

Thank you.

View 1 Replies View Related

Raid & Transaction Log ?

Sep 13, 2001

I'm setting up a hardware RAID 5 solution for one of our db servers. The data files will reside on the stripe. We don't really want to RAID more drives for the transaction log if it's not necessary. If the drive with the log crashes, is the data file for the database useless?

View 1 Replies View Related

What Level Raid Is Best?

Nov 29, 2005

Hi guys,
On a new server with 4 disks, what level of RAID is best to apply? In terms of what's important, I'd say speed is at the top of the list.

BJ

View 1 Replies View Related

Which Raid To Configure

Mar 17, 2004

Hello,
I run a small home office. I am planning to purchase a Dell PowerEdge 1750 server and install SQL Server on it.
I am confused about which RAID I should install on this server: RAID 1 or RAID 5.
The Dell customer rep could not tell me the advantages of installing only RAID 1, only RAID 5, or both RAID 1 and RAID 5.

View 7 Replies View Related






