Background: We are changing the way we pay commissions to our rep groups. We used to pay when the order was placed; now we want to pay when the invoice is paid.
Problem: The commission information is currently stored in the customer order, not in the invoice. These orders get deleted a couple of weeks after the order is completed (shipped).
I want to create another, rather dynamic, table/structure that will store the order number and the commission percentage (a rough sketch follows the list below).
This info in this table should:
Be deleted: if the order has been deleted and the invoice either does not exist or was paid some period of time ago (maybe 6 months)
Be updated: if the customer order has been updated (i.e. the commission was changed)
Be inserted: if the order exists but the order number is not in the new table.
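To make the rules concrete, here is a rough T-SQL sketch of the three operations (untested; Orders, Invoices, OrderCommission and all column names are placeholders for whatever the real schema is):

CREATE TABLE OrderCommission (
    OrderNumber   int NOT NULL PRIMARY KEY,
    CommissionPct decimal(5, 2) NOT NULL
)

-- Insert: the order exists but is not yet tracked
INSERT INTO OrderCommission (OrderNumber, CommissionPct)
SELECT o.OrderNumber, o.CommissionPct
FROM Orders o
WHERE NOT EXISTS (SELECT * FROM OrderCommission c WHERE c.OrderNumber = o.OrderNumber)

-- Update: the commission on the customer order has changed
UPDATE c
SET CommissionPct = o.CommissionPct
FROM OrderCommission c
JOIN Orders o ON o.OrderNumber = c.OrderNumber
WHERE c.CommissionPct <> o.CommissionPct

-- Delete: the order is gone, and the invoice is missing or was paid 6+ months ago
DELETE c
FROM OrderCommission c
WHERE NOT EXISTS (SELECT * FROM Orders o WHERE o.OrderNumber = c.OrderNumber)
  AND NOT EXISTS (SELECT *
                  FROM Invoices i
                  WHERE i.OrderNumber = c.OrderNumber
                    AND (i.PaidDate IS NULL
                         OR i.PaidDate > DATEADD(month, -6, GETDATE())))

The second NOT EXISTS keeps the row whenever an invoice still exists that is unpaid or was paid within the last 6 months.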
Applicant
ApplicantID FirstName LastName CompanyName Line1 Line2 City State Zip PhoneNum

Owner
OwnerID FirstName LastName CompanyName Line1 Line2 City State Zip PhoneNum
Now I know what I'm doing with the ApplicantID and OwnerID, but the BZAcase# is a number/unique identifier that looks like this: 2007-VU-000, 2007-VU-001, 2007-VU-003. So my questions are: 1. How do I get the last three numbers to increment each time a new application is created? 2. How do I retrieve the last record in the table? 3. Do you have any other suggestions? I have to have the number and the type of form they applied for in the "case#".
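For what it's worth, here is roughly what I've been picturing as a T-SQL sketch (the table name BZACase and column CaseNumber are made up, and this is untested). It derives the next sequence from the MAX of the existing numbers for the year/form-type prefix rather than from the "last record", since "last" isn't well defined without an ORDER BY:

DECLARE @year char(4), @formType varchar(5), @nextSeq int, @newCase varchar(20)
SET @year = CONVERT(char(4), YEAR(GETDATE()))
SET @formType = 'VU'  -- the code for the type of form they applied for

-- highest existing sequence for this year/form type; -1 if none, so the first case is 000
SELECT @nextSeq = ISNULL(MAX(CONVERT(int, RIGHT(CaseNumber, 3))), -1) + 1
FROM BZACase
WHERE CaseNumber LIKE @year + '-' + @formType + '-%'

-- left-pad to three digits: 0 -> '000', 12 -> '012'
SET @newCase = @year + '-' + @formType + '-' + RIGHT('00' + CONVERT(varchar(3), @nextSeq), 3)

Under concurrent inserts this would need to run inside a transaction (or the sequence kept in its own key table) so two applications can't grab the same number.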
Hi, I have written a procedure for a stock report. It's working fine. Please go through the SP and give me some suggestions; please tell me where I need to improve my code. Thanks.
Note: the user is required to execute this procedure daily. I am taking the sum of issues, purchases, returns, and physical adjustments for each and every product from the last updated date to today's date and storing it in a table, i.e. Stock_Dump. From this table I generate the date-wise stock report.
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO

CREATE PROCEDURE dbo.spUpdateStock
    @strReturn varchar(70) OUTPUT
AS
BEGIN
    DECLARE @maxDt smalldatetime

    IF EXISTS (SELECT * FROM Stock_Dump
               WHERE Txn_Date = CONVERT(varchar, GETDATE(), 101))
    BEGIN
        SET @strReturn = 'Stock Table already generated for the day. cannot generate it again'
    END
    ELSE
    BEGIN
        TRUNCATE TABLE Stock_Dump_Temp

        SELECT @maxDt = MAX(Txn_Date) FROM Stock_Dump

        /* Insert (opening stock) closing stock for all the products from the last max date */
        INSERT INTO Stock_Dump_Temp
        SELECT Product_Code, CONVERT(varchar, GETDATE(), 101) AS Txn_Date,
               Closing_Stock AS Opening_Stock, 0, 0, 0, 0, 0, 0, 0
        FROM Stock_Dump
        WHERE Txn_Date = CONVERT(varchar, @maxDt, 101)

        /* Issues */
        INSERT INTO Stock_Dump_Temp
        SELECT Product_Code, CONVERT(varchar, GETDATE(), 101) AS Txn_Date, 0,
               SUM(Qty) AS Issue_Qty, 0, 0, 0, 0, 0, 0
        FROM Issue_Details
        WHERE Issue_No IN (SELECT Issue_No FROM Issue_Hdr
                           WHERE Issue_Date > CONVERT(varchar, @maxDt, 101)
                             AND Issue_Date <= CONVERT(varchar, GETDATE(), 101))
        GROUP BY Product_Code

        /* Goods receipt */
        INSERT INTO Stock_Dump_Temp
        SELECT Product_Code, CONVERT(varchar, GETDATE(), 101) AS Txn_Date, 0, 0,
               SUM(Qty) AS Purchase, 0, 0, 0, 0, 0
        FROM Dlv_Note_Details
        WHERE Dlv_Note_No IN (SELECT Dlv_Note_No FROM Dlv_Hdr
                              WHERE Dlv_Note_Date > CONVERT(varchar, @maxDt, 101)
                                AND Dlv_Note_Date <= CONVERT(varchar, GETDATE(), 101))
        GROUP BY Product_Code

        /* Rejection after receipt */
        INSERT INTO Stock_Dump_Temp
        SELECT Product_Code, CONVERT(varchar, GETDATE(), 101) AS Txn_Date, 0, 0, 0,
               SUM(Qty) AS Rejected, 0, 0, 0, 0
        FROM Rejection_Details
        WHERE Rejection_No IN (SELECT Rejection_No FROM Rejection_Hdr
                               WHERE Rejection_Date > CONVERT(varchar, @maxDt, 101)
                                 AND Rejection_Date <= CONVERT(varchar, GETDATE(), 101))
        GROUP BY Product_Code

        /* Issue returns */
        INSERT INTO Stock_Dump_Temp
        SELECT Product_Code, CONVERT(varchar, GETDATE(), 101) AS Txn_Date, 0, 0, 0, 0,
               SUM(Qty) AS Issue_Returns, 0, 0, 0
        FROM Issue_Return_Details
        WHERE Issue_R_No IN (SELECT Issue_R_No FROM Issue_Return_Hdr
                             WHERE Return_Date > CONVERT(varchar, @maxDt, 101)
                               AND Return_Date <= CONVERT(varchar, GETDATE(), 101))
        GROUP BY Product_Code

        /* Physical stock + */
        INSERT INTO Stock_Dump_Temp
        SELECT Product_Code, CONVERT(varchar, GETDATE(), 101) AS Txn_Date, 0, 0, 0, 0, 0,
               SUM(Var_Qty) AS Phy_Qty_P, 0, 0
        FROM Physical_Details
        WHERE Var_Qty > 0
          AND Txn_No IN (SELECT Txn_No FROM Physical_Hdr
                         WHERE Txn_Date > CONVERT(varchar, @maxDt, 101)
                           AND Txn_Date <= CONVERT(varchar, GETDATE(), 101))
        GROUP BY Product_Code

        /* Physical stock - */
        INSERT INTO Stock_Dump_Temp
        SELECT Product_Code, CONVERT(varchar, GETDATE(), 101) AS Txn_Date, 0, 0, 0, 0, 0, 0,
               SUM(Var_Qty) AS Phy_Qty_M, 0
        FROM Physical_Details
        WHERE Var_Qty < 0
          AND Txn_No IN (SELECT Txn_No FROM Physical_Hdr
                         WHERE Txn_Date > CONVERT(varchar, @maxDt, 101)
                           AND Txn_Date <= CONVERT(varchar, GETDATE(), 101))
        GROUP BY Product_Code

        /* Insert all the records into the actual table, i.e. Stock_Dump, from Stock_Dump_Temp (temporary table) */
        INSERT INTO Stock_Dump
        SELECT Product_Code, Txn_Date,
               SUM(Opening_Stock) AS Opening_Stock, SUM(Issue_Qty) AS Issue_Qty,
               SUM(Purchase) AS Purchase, SUM(Rejected) AS Rejected,
               SUM(Issue_Returns) AS Issue_Returns, SUM(Phy_Qty_P) AS Phy_Qty_P,
               SUM(Phy_Qty_M) AS Phy_Qty_M, 0 AS Closing_Stock
        FROM Stock_Dump_Temp
        GROUP BY Product_Code, Txn_Date

        /* Update closing stock */
        UPDATE Stock_Dump
        SET Closing_Stock = ABS((Opening_Stock + Purchase + Issue_Returns + Phy_Qty_P)
                              - (Issue_Qty + Rejected + Phy_Qty_M))
        WHERE Txn_Date = CONVERT(varchar, GETDATE(), 101)

        /* Delete unwanted records */
        DELETE FROM Stock_Dump
        WHERE Opening_Stock = 0 AND Issue_Qty = 0 AND Purchase = 0 AND Rejected = 0
          AND Issue_Returns = 0 AND Phy_Qty_M = 0 AND Phy_Qty_P = 0

        SET @strReturn = 'Stock Table Updated Successfully'
        RETURN
    END
END
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
I have a database which contains more than 20,000 stored procedures which were created with ANSI nulls off. I found this out using the query:

SELECT name, AnsiNullsOn
FROM (SELECT name, OBJECTPROPERTY(id, 'ExecIsAnsiNullsOn') AS AnsiNullsOn
      FROM sysobjects
      WHERE type = 'P') A
WHERE AnsiNullsOn = 0

Is there any way that I can set this property to 1 for all the stored procedures I have? I know the alternate method is to drop the procedures and execute the scripts again with AnsiNullsOn = 1. Is there any other simple way? It would be very helpful for me.
Now I have, let's say, VISID 1 to 50. I'm using this SP to change the text on a button. Now I have 50 buttons. So I run this SP, then I run this in my vb.net code:
Dim constr As New SqlConnection(PVDBConn)
Try
    'Variable to hold the results
    Dim results As String = String.Empty
    cmdUpd = New SqlCommand("SelVis1Name", constr)
    cmdUpd.CommandType = CommandType.StoredProcedure
    constr.Open()
    'Set results to the value returned from ExecuteScalar()
    results = CType(cmdUpd.ExecuteScalar(), String)
    constr.Close()
    'Set our button's text to that value
    Button1.Text = results
Catch ex As Exception
    MsgBox(ex.Message.ToString)
End Try
At any time, when I start my program, I may need to label 10 buttons, or up to 50. Now I will have this number in a text file. Can I grab that number from a text file, and pass it into a SP?
And can I write this SP only once so it works for more than one label at a time, or do I have to write this SP 50 times?
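I'm guessing the shape would be a single parameterized procedure instead of 50 copies; something like this sketch (SelVisName, Visitors, VisName, and VisID are made-up names standing in for my real ones):

CREATE PROCEDURE dbo.SelVisName
    @VisID int
AS
BEGIN
    -- return the caption for one button, chosen by the id passed in
    SELECT VisName
    FROM Visitors
    WHERE VisID = @VisID
END

Then the VB.NET side would presumably add something like cmdUpd.Parameters.AddWithValue("@VisID", i) before calling ExecuteScalar, looping i over however many buttons the text file says to label.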
I have a database that will be used by two or more organizations. I would like to use pass phrase encryption to encrypt a couple of columns.
I'm looking for suggestions on how I might set up the db to let each organization change the pass phrase that is used for its encryption.
I don't really want to hard code it into stored procedures or select statements with parameters. I will be using SSL if that should make a difference with what you suggest.
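For illustration of the built-in functions I'm considering (assuming SQL Server 2005 or later; this is only a sketch, and OrgData, SecretCol, and the procedure names are invented; SecretCol would be varbinary):

-- Encrypt on write: the phrase arrives at run time rather than being hard-coded
CREATE PROCEDURE dbo.SaveSecret
    @OrgID int,
    @PassPhrase nvarchar(128),
    @PlainText nvarchar(3000)
AS
BEGIN
    UPDATE dbo.OrgData
    SET SecretCol = ENCRYPTBYPASSPHRASE(@PassPhrase, @PlainText)  -- returns varbinary
    WHERE OrgID = @OrgID
END
GO

-- Decrypt on read: returns NULL when the wrong phrase is supplied
CREATE PROCEDURE dbo.ReadSecret
    @OrgID int,
    @PassPhrase nvarchar(128)
AS
BEGIN
    SELECT CONVERT(nvarchar(3000), DECRYPTBYPASSPHRASE(@PassPhrase, SecretCol))
    FROM dbo.OrgData
    WHERE OrgID = @OrgID
END

Letting an organization change its phrase would then mean decrypting each of its rows with the old phrase and re-encrypting with the new one inside a transaction.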
I have two stored procedures (I'll call them P1 & P2). P1, after a lot of processing, creates a temporary table that is used by P2 after an "exec P1" is done. I've separated the logic into two stored procedures because, ultimately, other sprocs will need the output of P1.
I get an error if I use #tempTable as the output table in P1 because it no longer exists after P1 finishes. ##tempTable works, but I'm concerned about concurrency issues. Any suggestions on what construct(s) I should be using?
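One pattern I've seen suggested, sketched here with made-up names (#work, SourceTable), is to create the local temp table in the caller: a #table created by P2 is visible inside any procedure P2 executes, so P1 can fill it without resorting to a global ##table:

CREATE PROCEDURE dbo.P1
AS
BEGIN
    -- ...heavy processing...
    INSERT INTO #work (id, val)      -- #work is created by whichever proc calls P1
    SELECT id, val
    FROM dbo.SourceTable
END
GO

CREATE PROCEDURE dbo.P2
AS
BEGIN
    CREATE TABLE #work (id int, val varchar(50))
    EXEC dbo.P1                      -- P1 populates the caller's #work
    SELECT * FROM #work              -- still in scope here, and private to this session
END

Because each session gets its own #work, the concurrency worry that comes with ##tempTable goes away.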
Hi all, I am new to SQL Server but have been doing database programming for the last 3 years. I recently attended MOC (Microsoft Official Curriculum) training on SQL Server and have started to use it at my company. I am comfortable with SQL but want to dig deeper into the T-SQL side. I searched on the Internet, but not many good books are available: either they are ranked very low, or they are very old (i.e. written around 1999/2000), or they cover SQL Server 2000 as a whole. Can anybody suggest a T-SQL book that was written recently and focuses purely or mostly on T-SQL?
Thanks to all for your time and advice in advance.
I'm looking for some help on how I should index this table.
The current table has about 500k records in it. The fields in the table are:

member_num (varchar(12), not null)
first_name (varchar(20), null)
last_name (varchar(20), null)
ssn (varchar(50), null)
address1 (nvarchar(200), null)
address2 (nvarchar(200), null)
city (nvarchar(200), null)
state (nvarchar(200), null)
zip (nvarchar(100), null)
phone1 (nvarchar(50), null)
All of the fields are searchable through an ASP.NET webform.
My first stab at this consisted of creating a clustered index on member_num and then creating a separate index for each of the remaining fields.
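In DDL terms, that first stab looks roughly like this (the table name dbo.members is assumed, and whether member_num is unique is an assumption too):

-- clustered index on the member number (add UNIQUE if member_num can't repeat)
CREATE CLUSTERED INDEX IX_members_member_num ON dbo.members (member_num)

-- one non-clustered index per searchable field, e.g.:
CREATE NONCLUSTERED INDEX IX_members_last_name ON dbo.members (last_name, first_name)
CREATE NONCLUSTERED INDEX IX_members_ssn ON dbo.members (ssn)
CREATE NONCLUSTERED INDEX IX_members_zip ON dbo.members (zip)
-- ...and so on for the remaining columns

Making the last-name index a composite with first_name is one tweak: it serves both last-name-only searches and last+first searches with a single index.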
What I have: a spreadsheet that is used in 4 or more locations on a daily basis by 1-3 people per location. The spreadsheet is used to gather Quality Control information. So every day there are a couple of spreadsheets from each system that are used to generate weekly and monthly reports. This is becoming too much work, and I would like to automate the process.
What I have access to: I currently run a SharePoint 2007 server for all our collaboration and document needs. I also have the ability to set up any SQL server.
What I want: I want the QC techs in each system to be able to upload the data at the end of each day and be done with it. This way they do not have to email or do a weekly report. I would prefer to use SharePoint and create weekly and monthly reports that can be pulled just by going to a site.
I'm knowledgeable in SharePoint and Excel, and I have some skills in VBA. I haven't dealt with SQL at all, but I'm willing to learn. I'm also knowledgeable in Microsoft Access.
Any suggestions on how I could accomplish this would be appreciated.
Well, as a VB/VBA applications developer I'm not well prepared for this, but it looks like I will be riding herd on a production SQL Server.
TSQL I know well enough to get along, but where can I get a fast fix on all the logins, security, and process management info? Today we had a DTS package crash overnight and it took me forever to figure out that it had left half a dozen tables locked. (Note that the scripts for the DTS package are being re-written as we speak with use of transactions and NOLOCK.) Meanwhile tech support was handling a whole mess of grumpy users.
Are there any books you would recommend as resources/references? Is there a particular author who is good at writing the stuff you really need to know, in English that can be read by a mere mortal like me? I am fond of the Microsoft resources/help files, but I'd like to have something that holds highlighter and post-it flags a bit better, not to mention something that focuses more on the beast as a whole rather than the minutiae at length.
I have a set of tables with about the same structure
dataID, recordID, 15 other columns
dataID is unique but is never referenced in queries
recordID is one of the most referenced columns but only has a cardinality of about 30%
The current structure has a clustered PK on (dataID,recordID)
Someone suggested reversing the clustered PK to (recordID,dataID) because of the number of references to recordID but that didn't seem to boost performance any
After staring at this for a while I came up with something, sketched below; I'd like some advice on whether it makes sense or not.
create a non-clustered PK on dataID
create a non-unique clustered index on recordID
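As actual DDL, the change would look something like this (the table and constraint names are placeholders):

-- drop the existing clustered PK on (dataID, recordID)
ALTER TABLE dbo.DataTable DROP CONSTRAINT PK_DataTable

-- cluster on the heavily referenced, non-unique column
CREATE CLUSTERED INDEX IX_DataTable_recordID ON dbo.DataTable (recordID)

-- keep uniqueness on dataID with a non-clustered PK
ALTER TABLE dbo.DataTable ADD CONSTRAINT PK_DataTable PRIMARY KEY NONCLUSTERED (dataID)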
Let me know if any other information is needed. Thanks
I will be teaching an applied database course to business major undergrads. I'm looking for a book that introduces database concepts using SQL Server as the database. I would really appreciate it if you could recommend a few such books.
Thanks,
Nemo
I'm trying to count the number of records in 'game_dates' where the columns home_team_id or away_team_id have the same value. E.g., I want to know the number of records for each team_id where team_id is home_team_id or away_team_id. I'm doing this in two separate select statements now. Example:

SELECT count(home_team_id), home_team_id FROM games
WHERE league_id = 218 and ((home_score IS NOT NULL OR away_score IS NOT NULL)
    OR (home_score <> 0 OR away_score <> 0))
GROUP BY home_team_id

and

SELECT count(away_team_id), away_team_id FROM games
WHERE league_id = 218 and ((home_score IS NOT NULL OR away_score IS NOT NULL)
    OR (home_score <> 0 OR away_score <> 0))
GROUP BY away_team_id

and then combining the results. Is there any way to combine these two queries into one query, and have a single result set returned with two columns (count, team_id)?
Thanks,
Glenn
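Is something like this the right direction? It stacks the two id columns with UNION ALL inside a derived table and counts once (untested sketch against the same games table):

SELECT t.team_id, COUNT(*) AS games_played
FROM (
    SELECT home_team_id AS team_id FROM games
    WHERE league_id = 218 and ((home_score IS NOT NULL OR away_score IS NOT NULL)
        OR (home_score <> 0 OR away_score <> 0))
    UNION ALL
    SELECT away_team_id FROM games
    WHERE league_id = 218 and ((home_score IS NOT NULL OR away_score IS NOT NULL)
        OR (home_score <> 0 OR away_score <> 0))
) AS t
GROUP BY t.team_id

UNION ALL (rather than UNION) matters here, since dropping duplicates would throw the counts off.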
Need some suggestions for senior management for DR Purposes:
Background:
WSS/MOSS2007 is being used as a Document Management solution.
17 Servers geographically dispersed around the UK. Each server runs WSS 3, SQL Server 2005 and IIS. Each server is linked into a PiP cloud via 2MB MPLS.
At each location we are looking at 20 core databases, each pre-sized to 10GB. If I take one site as an example, the previous night's backup totalled 135GB.
The company has taken a centralised view on backups, so SQL Server data and log files are replicated using Double-Take to a central location, where the files are taken onto tape daily (full backup of all files).
As a precaution, I take a full SQL Server backup daily and also tran logs every 4 hours locally, and keep them there for 2 days; however, if the site goes boom I lose those, so for this purpose please forget they exist.
As I expect, when I restore the mdf and ldf files from tape, I will get errors when I attach those files into SQL Server, because of the transactional inconsistencies I'm well aware of.
Other options I've considered are:
1) DB Mirroring. Not a bad option, but I still have to get the DB to the mirror server in the first place. Also, DB mirroring is not recommended for more than 10 mirrored databases.
2) Log Shipping. Same issue as above; I have to get the data here in the first place. Then, once log shipping is set up, if I have a failure I need to start the whole lot off again.
3) Transactional Replication. The issue is with the initial replication getting the data from A to B; then, if I need to use it in a DR situation, I will get errors saying a table is being used for replication. This can be worked around, but it's not a quick process...
4) 3rd Party Backup Compression, e.g. LiteSpeed, Red Gate SQL Backup, etc. Good: tests have shown 42% compression for us. However, if I refer to the earlier example of 135GB, this compresses to 81GB. Throw in the theoretical max for a 2MB link of 19GB / 24 hours, and this would take 4 days to copy.
Other thoughts I've come up with are:
A) Split the tables into different file groups; not sure how easy this would be as the DB's and Tables already exist.
B) Full/Diff/Tran. Still have the issue of scheduling the full backup over the weekend and taking 4 days to get here.
C) Local tape backups. The issue is relying on someone to change the tape on a daily basis; it's not centrally managed, and how do we restore in a DR situation?
Hi folks, I have a very typical database for an ASP.net application. There is a table which will contain hierarchical data, much like the files-folders structure of a file system. But we know that the table will be a giant one in production; there will be a huge collection of data to persist in it, and we are already facing some performance problems with some queries on the QA/test machine. Currently there is one table keeping all file and folder information and another table maintaining their hierarchy relation using two columns, namely parentID and childID. My first question is: would it be better to keep this hierarchy relation in the same table rather than using a different one (much like managerID and empID in the AdventureWorks sample)? My second question: what is the best way to design this kind of structure to get the highest performance benefit?
All kinds of thoughts will be appreciated. Thanks!
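For the first question, the single-table (adjacency list) shape I have in mind would be something like this sketch (all names invented):

CREATE TABLE dbo.FileSystemItem (
    ItemID   int NOT NULL PRIMARY KEY,
    ParentID int NULL REFERENCES dbo.FileSystemItem (ItemID),  -- NULL = root
    Name     nvarchar(255) NOT NULL,
    IsFolder bit NOT NULL
)

-- the common "children of X" lookup needs this
CREATE INDEX IX_FileSystemItem_ParentID ON dbo.FileSystemItem (ParentID)

That is, the parent pointer lives in the row itself, like ManagerID/EmployeeID in AdventureWorks, so listing a folder's contents is one indexed seek instead of a join through a separate relation table.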
To extract data from this table we are using a 4-table join to each of the factids.
Our where clause in this query is based on (where factid1 = something)
So we have a composite clustered index led by factid1.
Our plan is to reduce the size of this table by introducing the following kind of schema; we would like to do this to keep the table size to a minimum and hopefully increase the performance of our extracts from this table.
factid4 int
intersectid int
value decimal(14,4)
And then the intersect table with the factid1, factid2, factid3 combinations (rough DDL for both is sketched below):
factid1 int
factid2 int
factid3 int
intersectid int
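As CREATE TABLE statements, the split would look something like this (the table and index names are made up):

CREATE TABLE dbo.FactIntersect (
    intersectid int NOT NULL PRIMARY KEY,
    factid1     int NOT NULL,
    factid2     int NOT NULL,
    factid3     int NOT NULL
)

CREATE TABLE dbo.FactValue (
    factid4     int NOT NULL,
    intersectid int NOT NULL REFERENCES dbo.FactIntersect (intersectid),
    value       decimal(14, 4) NOT NULL
)

-- factid1 leads, to match the existing "where factid1 = something" extracts
CREATE INDEX IX_FactIntersect_factid1 ON dbo.FactIntersect (factid1, intersectid)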
This kind of schema reduces the size of this table substantially but performance of our extract is very poor.
Does anyone have any suggestions on schemas that will give us high performance?
Or does anyone think that the original schema will outperform any alternative schema?
Greetings all, I'm a developer tasked with securing a SQL Server 2005 SP2 database. I'm not exactly a DBA, but I'm giving it my best shot. I was hoping someone could offer some suggestions/tips on how I could approach this task; the amount of documentation on this type of thing is somewhat overwhelming. I'm a little pressed for time and was hoping someone could offer some help, and maybe even provide some feedback as to whether I'm in the "weeds" or not.
Ok, here's the deal... At the moment I am using Windows authentication. From what I have read this is the preferred method over SQL authentication. I'd like to continue using this approach if possible.
The database has 3 principals:
1. ASP.NET (Network Service on Windows Server 2003)
2. A Windows Service running on the host server
3. A Data Access Layer assembly running on some other server
All the principals access the db using stored procedures only. Each uses a subset of all the stored procedures, some of them overlap.
My initial thought was this. For ASP.NET I would perform the following:
1. sp_grantlogin [NT AUTHORITY\NETWORK SERVICE]
2. sp_grantdbaccess [NT AUTHORITY\NETWORK SERVICE]
3. Grant Execute on [each sproc used] to [NT AUTHORITY\NETWORK SERVICE]
For the Windows Service and the Data Access Layer principals, I was thinking something like this:
1. Create a separate Windows login for each principal
2. Create a db user for each principal's Windows login
3. Grant execute on each of the sprocs used for each role
Question: how do I deny Select, Insert, Update and Delete privileges on all tables regardless of the principal (public user)? A rough script of where I've got to follows.
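Here is that plan as a script sketch (the procedure and table names are placeholders, and I'm not sure the DENY part is the right approach):

EXEC sp_grantlogin 'NT AUTHORITY\NETWORK SERVICE'
EXEC sp_grantdbaccess 'NT AUTHORITY\NETWORK SERVICE'
GRANT EXECUTE ON dbo.usp_SomeProc TO [NT AUTHORITY\NETWORK SERVICE]
-- ...repeat the GRANT for each sproc this principal uses

-- If table access is never granted, EXECUTE-only plus ownership chaining
-- already keeps principals out of the tables; an explicit belt-and-braces
-- DENY to public (which every user inherits) would be per table:
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.SomeTable TO public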
Again, any help and/or suggestions would be greatly appreciated. Thanks!
I have a matrix with a dynamic number of columns (1-10). The trouble is that hiding one or more columns still leaves space reserved for all 10 columns, which is ugly. This is because the size of the TextBox that oversees the columns is not dynamic, and it is set to the size of all 10 text boxes.
In other words, a matrix with 5 columns looks like this:
Item Total
Col1 Col2 Col3 Col4 Col5 Total
5 5 5 5 5 25
While a matrix with 2 columns (the last 3 have visibility set to false) looks like this:
Item Total
Col1 Col2 Total
5 5 25
I am going crazy trying to solve this one. Does anyone have any ideas at all that can help me? Merging all the columns into a single column would not work well for me, as each column is a drill-down for the others. And making each column small (.1in), doesn't work either because there is no "no-wrap" property.
ANY suggestions would be appreciated. How have others dealt with this issue?
Hi, any help with this would be greatly appreciated. I have two tables. The first table is called "Team"; see columns and data below:

TeamId, TeamName, MemberId
1, White Team, 1
2, Grey Team, Null

The second table is called "Members"; see columns and data below:

MemberId, Name
1, Jim Smith

I want to display both tables in a gridview as follows:

TeamId, TeamName, MemberId, Name
1, White Team, 1, Jim Smith
2, Grey Team, Null, Null
I'm using the following sql procedure to do this
Select Team.TeamId, Team.TeamName, Team.MemberId, Members.Name
From Team Inner Join Members on Members.MemberId = Team.MemberId
My problem is that this select statement returns the first row but not the second row. The reason for this is that the second row's MemberId is Null. However, I still need to display this row even if some of the data is null. Can anyone point out the correct sql statement for this?
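From what I've read, an outer join might be what I'm after; something like this (untested):

-- LEFT JOIN keeps every Team row; the Members columns come back as Null when unmatched
Select Team.TeamId, Team.TeamName, Team.MemberId, Members.Name
From Team Left Join Members on Members.MemberId = Team.MemberId

Is that right? My understanding is the Inner Join drops the Grey Team row precisely because no Members row matches its Null MemberId.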
I have started working with MS SQL Server 2000 recently. I have a scenario in which I need to know which tables from a particular database got modified within a given period of time (say 5 mins). I do not want to write a trigger for each and every table in all 13 databases my application deals with. I have even tried the following query:

declare @curdate datetime
select @curdate = getdate()
select name, refdate from sysobjects where xtype = 'U' and refdate = @curdate

But no, it does not help me, since refdate is something else. Can anybody tell me how I can figure out from sysobjects when a particular object was last accessed? Even that would serve my purpose.
I have been trying to get this done via Profiler. My application's APIs connect to the database under credentials which I do not know, since I do not have access to the source code (I am doing black box testing), so I can't even put a trace on one particular user account. What I am doing currently is trapping all stored procedure events, but that is too much work...
Hence I wanted to know: is there any way, given a database name and a time span, to find out the tables modified/accessed within that time span in that database?
I work for a telco. We've got a table in a database which shows phone calls made by customers and when they made them.
I need to generate a list of customers who made phone calls last month and have NOT had five days in a row without making any calls.
Can any of you help? I'm not sure how to tackle this one without a very bloated and inelegant solution. Basically, the only solution I can think of is generating 31 tables, one for each day, and then just checking the calls made on each day.
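Would something like this avoid the 31 tables? It's an untested sketch assuming a calls table like calls(customer_id, call_date) with date-only values: for each call it finds the next call by the same customer, then keeps customers whose largest gap (month edges included, treated as virtual calls) never reaches five empty days:

DECLARE @dayBefore datetime, @dayAfter datetime
SELECT @dayBefore = '20071231',  -- day before the month starts
       @dayAfter  = '20080201'   -- day after the month ends

SELECT g.customer_id
FROM (
    SELECT c.customer_id, c.call_date,
           (SELECT MIN(c2.call_date)
            FROM calls c2
            WHERE c2.customer_id = c.customer_id
              AND c2.call_date > c.call_date
              AND c2.call_date < @dayAfter) AS next_call
    FROM calls c
    WHERE c.call_date > @dayBefore
      AND c.call_date < @dayAfter
) AS g
GROUP BY g.customer_id
-- a DATEDIFF of 6+ between consecutive call dates means 5+ call-free days between them
HAVING MAX(DATEDIFF(day, g.call_date, COALESCE(g.next_call, @dayAfter))) <= 5
   AND DATEDIFF(day, @dayBefore, MIN(g.call_date)) <= 5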
Does anybody have any suggestions to rewrite the 2nd WHEN part of the query? Thank you.
------------------------------------------------------------
update t_pgba_hdr
set HCFA_PLACE_TRMT_CD2 =
    case
        when (select max(b.HCFA_PLACE_TRMT_CD)
              from t_pgba_hdr as b
              where t_pgba_hdr.clm_id2 = b.clm_id2) like '[A-Z]%'
            then '99'
        when (select ltrim(rtrim(max(b.HCFA_PLACE_TRMT_CD)))
              from t_pgba_hdr as b
              where t_pgba_hdr.clm_id2 = b.clm_id2) in '[0-9]'
            then '0' + (select ltrim(rtrim(max(b.HCFA_PLACE_TRMT_CD)))
                        from t_pgba_hdr as b
                        where t_pgba_hdr.clm_id2 = b.clm_id2)
    end
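One possible shape for the rewrite, sketched and untested: compute the trimmed MAX once per clm_id2 in a derived table, and use LIKE '[0-9]' (a one-character digit pattern) in place of the IN, which doesn't accept a pattern string. This assumes trimming before the alphabetic test is acceptable too:

update h
set HCFA_PLACE_TRMT_CD2 =
    case
        when m.max_cd like '[A-Z]%' then '99'
        when m.max_cd like '[0-9]'  then '0' + m.max_cd
    end
from t_pgba_hdr as h
join (select clm_id2, ltrim(rtrim(max(HCFA_PLACE_TRMT_CD))) as max_cd
      from t_pgba_hdr
      group by clm_id2) as m
    on m.clm_id2 = h.clm_id2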
So what I need to do is store application info, such as application name, path, server it's installed on, etc., in a table.
I'm thinking of designing the table like this, but I'm not sure if it is a good design:

ApplicationInfo
---------------
ID
Application -- pk
Path
ServerName

There are apps that are installed on all servers, and there are some apps installed on only a few servers. I was thinking of making the "Application" field unique so that only one instance of the application name exists, and then comma-delimiting the "ServerName" field values.
So with this approach records would look like this:
Field         Value
-----------------------------------
ID            "1"
Application   "Adobe"
Path          "C:\Program Files\Adobe"
Server        "ServerA,ServerB,ServerC"

ID            "2"
Application   "Microsoft Office"
Path          "C:\Program Files\Microsoft Office"
Server        "ServerB,ServerC"
I have been tasked with designing an automated process to restore production data to our testing environments on an as-needed basis. The schedule would revolve around our software testing and deployment schedules. I'm looking for suggestions on best practices for this task, in the form of advice / links to references / etc. Instead of presenting all of my requirements here, I'll spare you that information :). One detail worth noting: part of it also needs to encompass data stored in Oracle (10g). I've done several Google searches but would like to validate/invalidate my research against the advice of the experts here.
Basically this is ONE table that contains header and detail data ordered sequentially. There are no unique identifiers for the rows. The rows are ordered sequentially so that each SALESPERSON is followed by one or more CLIENTs.
If I could merge the rows, the result would look like:
SALESPERSON  SALESPERSON_ID  CLIENT      DATE_FROM  DATE_TO     AGE
TOM          12345           MARYSMITH   1/1/2008   12/31/2008  46
TOM          12345           JANEDOW     1/1/2008   12/31/2008  24
ED           56789           TOMJONES    1/1/2008   12/31/2008  65
ANTHONY      243546          BEVBLACK    1/1/2008   12/31/2008  15
ANTHONY      243546          JEANTHOMAS  1/1/2008   12/31/2008  29
I am not sure how to do this with this data.
I also thought maybe it would be better to add unique identifiers to each set of SALESPERSONs/CLIENTs, and work with the data that way, but I am not sure how to do that.
Any help or suggestions would be appreciated. I have no ability to change this data - I have to try to work with it if possible.
Hi, we will be developing management reporting software for a bank. The user will see reports that get updated in near real time (e.g. every 5 min), with data regarding transaction amounts, etc., across a number of dimensions such as day, time, and region. The dimensions will be hierarchical in nature: a zone will have states, and a state will have cities. The raw data has to be pulled from several different databases, such as a DB for ATM transactions, another for home loan applications, etc. It should be easy to add customized reports if a different view of the data is desired. Our clients suggest that we use a tool called Clementine, developed by SPSS, but we have the liberty to choose a different tool if that serves our purpose. Clementine seems to allow a data flow to be defined, which might be of use to our project. Does anyone have any idea how this could be different from SQL Server's Data Transformation Services? Any other thoughts regarding the approach to be taken will be appreciated.
Thanks
Yash
I got a server that has a RAID-5 array partitioned into C: and D: drives (OS Win2K Adv. Server installed on C:). The server also has a mapping to a NAS device using the latest protocols that trick the system into thinking the map is actually a local SCSI drive; that's drive X:. This server is used only for SQL, and contains an OLTP database that sees a lot of use and is pretty heavily indexed. I am toying with the idea of centralizing my data storage on the NAS (the data center network segment is 1-gigabit ethernet). So I was thinking about putting my primary data file on the NAS (drive X:) and keeping all tables there, and creating a secondary data file on the local RAID-5 (drive D:) and putting all non-clustered indexes there, as well as keeping tempdb there and specifying the sort-in-tempdb option. Log files would also remain on D:. If anyone can suggest a better scenario given the above setup, I'd love to hear it. Much appreciated.
Alexey Aksyonenko
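For concreteness, the index placement piece of that plan would look something like this sketch (the database, path, and index names are all made up):

-- a filegroup for non-clustered indexes on the local RAID-5 drive
ALTER DATABASE OLTPDb ADD FILEGROUP LOCAL_INDEXES

ALTER DATABASE OLTPDb
ADD FILE (NAME = OLTPDb_ix1, FILENAME = 'D:\SQLData\OLTPDb_ix1.ndf', SIZE = 2GB)
TO FILEGROUP LOCAL_INDEXES

-- rebuild an existing non-clustered index onto that filegroup
CREATE INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
WITH DROP_EXISTING
ON LOCAL_INDEXES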
Hi, I am to make a DB that will handle over a million insertions every month. Right now I am designing it. I was wondering if any of you have a tutorial or some guide that talks about the best practices a DBA has to follow before designing a new huge DB. The DB will be used with ASP and will be online on a dedicated webserver in the US only. I will be thankful if anyone can guide me to a tutorial or share their own experiences with such DBs.
Regards
Jaunty Edward
OK, we currently have a single SQL 2000 server for our DW, with a DR SQL 2000 server. We want to create a setup where we have a failover cluster of SQL 2005 here at the main office, with a DR SQL 2005 system at our DR site. My question is: how would you all do the failover and so on? How many servers would I need, and what would be the job/role of each server? Some things to note: we are implementing a SAN in our network, and we are also implementing a virtual server system. To my understanding, you do not really want to run SQL Servers on a virtual machine if possible, so I am already planning on making the SQL Servers physical systems. We are also planning on putting the DBs on the SAN and having the SAN replicate all the data to DR. So, how would you all envision things being set up? Is there any good documentation I can read about this type of setup? Thank you in advance for all the advice you can provide.