Can anyone point me to Microsoft best practices that advise against installing non-SQL-related applications on a SQL database server? It seems like common sense to me, but I have been asked to justify it with a direct statement from Microsoft. Help?
I have a server that I would like to use as a dedicated DB server. It is running Windows Server 2003 and SQL Server 2005. The typical upstream bandwidth is anywhere from 384 Kb/s to 700 Kb/s. Is this enough to host multiple SQL Server 2005 DBs? The DBs will be used for database-driven websites. Please let me know what your thoughts are. Thanks,
Each has two gigabit network cards configured with different IP addresses.
Each is running multiple instances of SQL Server (x64).
I am trying to set up a mirror where mirroring traffic between servers will be dedicated to a secondary IP address on the second NIC.
I am also trying to avoid Windows authentication.
Interestingly enough, the Security Configuration screen suggests that you use fully qualified TCP addresses and, at the same time, does not give you such an option...
Would someone please point me in the right direction?
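In case it helps, here is a minimal sketch of a mirroring endpoint bound to a specific IP and using certificate authentication instead of Windows authentication. It assumes a certificate (here called MirrorCert, a placeholder name) has already been created in master on each server and exchanged between them; the port and IP are placeholders too:

-- Run on each partner, with its own secondary-NIC address:
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = (192.168.2.10))
    FOR DATABASE_MIRRORING (AUTHENTICATION = CERTIFICATE MirrorCert,
                            ROLE = PARTNER);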
For single-instance installations I try to follow a standard configuration where our log files go to E, data files to F, tempdb to H, and backups to G. With a new project we have decided to combine our Test and Dev environments on one server, with Test being a default instance of SQL Server and Dev being a named instance. My plan was to stick with the above convention, but I was asked if it would be better for each instance to have dedicated data, log, and tempdb drives. The problem is we would not have a dedicated controller for each drive. We would have one controller for the default and named instance data drives, one for the log drives, and one for tempdb. The other issue is having to modify some of my admin scripts/procedures to work with different drive letters. That's not a real big deal, but it does add more work to my plate.
Do you think it's worth having separate drives for a Test/QA server, or is it typically sufficient to use shared drives?
However, the distributor is not visible to Apollo2, so I am unable to subscribe.
Do you have any idea what I need to do to make it visible? I set up an FTP site to drop the publication into (as I was unable to create a UNC share), and I was hoping to be able to subscribe to it. It is probably a matter of permissions... maybe a firewall? Should I ask my provider to open a port? Do I need to create the same three accounts on the subscriber, something like Apollo2SubscriberUser?
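If the FTP route is the one you pursue, publications can be marked as internet-enabled and pointed at the FTP site with sp_changepublication. This is only a sketch under that assumption (the publication name and FTP address are placeholders), and the subscriber would also need the FTP port open on any firewall, plus 1433 to reach the publisher:

EXEC sp_changepublication @publication = N'MyPublication',
    @property = N'enabled_for_internet', @value = N'true';
EXEC sp_changepublication @publication = N'MyPublication',
    @property = N'ftp_address', @value = N'ftp.example.com';
EXEC sp_changepublication @publication = N'MyPublication',
    @property = N'ftp_login', @value = N'ftpuser';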
Hello all, I have a problem when I try to store data from a flat file in a DB. The following error appears in the Progress control: An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Violation of PRIMARY KEY constraint 'PK_Products_1'. Cannot insert duplicate key in object 'dbo.Products'.".
I have a Flat File Source and would like to store the needed records in a DB. In column 0 of the flat file I have multiple entries with equal values. In the DB this column is the primary key and can only have one record per value.
How can I read out (or store) only one record per value from the flat file and store it in the DB?
How can I check whether a record from the flat file already exists in the DB with the same primary key value?
How can I update the remaining columns in the DB where their values differ, so they match the flat file?
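One approach, in case it's useful: in SSIS a Sort transformation with "remove rows with duplicate sort values" can drop the duplicates before the destination. Alternatively, land the file in a staging table and de-duplicate in T-SQL. The sketch below assumes a hypothetical staging table dbo.ProductsStaging with the same columns as dbo.Products (MERGE only arrives in SQL 2008, hence the separate INSERT and UPDATE):

-- Insert one row per key that is not already in the target.
INSERT INTO dbo.Products (ProductID, ProductName)
SELECT s.ProductID, MAX(s.ProductName)
FROM dbo.ProductsStaging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Products AS p
                  WHERE p.ProductID = s.ProductID)
GROUP BY s.ProductID;

-- Bring differing columns in line with the flat file.
UPDATE p
SET    p.ProductName = s.ProductName
FROM   dbo.Products AS p
JOIN  (SELECT ProductID, MAX(ProductName) AS ProductName
       FROM dbo.ProductsStaging
       GROUP BY ProductID) AS s
      ON s.ProductID = p.ProductID;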
I was wondering if anyone could tell me why I could not connect to my server with the DAC when the CPU utilization across all CPUs was greater than 90%.
BACKGROUND: I have successfully established DAC connections to this server during other periods, when CPU utilization was not pinned so high. I have done so by RDP'ing into the server, opening SSMS, and cancelling the request to connect Object Explorer. Finally, I start a new query, adding the "admin:" prefix to my server name.
As I mentioned, when the server has CPU resources available, this works fine. That indicates all the groundwork for setting up and using the DAC has been laid correctly (i.e., the DAC is enabled, I am connecting locally, I am logged in with a role that has the proper permissions, etc.).
This leads to my question: why can I not connect to SQL Server 2005 Standard Edition using the DAC when utilization across all eight CPUs is at 90% or higher?
I need to make this connection to analyze why the server is so busy.
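For what it's worth, a lighter-weight client than SSMS may still get in, since the DAC runs on its own scheduler and SSMS itself needs CPU and opens extra connections. A sketch (the server name is a placeholder):

-- From a command prompt on the server:
--     sqlcmd -S YourServer -A
-- Then see what is consuming CPU (standard SQL 2005 DMVs):
SELECT r.session_id, r.status, r.cpu_time, r.wait_type, t.text
FROM   sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
ORDER BY r.cpu_time DESC;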
I'm planning an SQL 2005 deployment and would like to know if an SQL server should be kept as dedicated as possible. Should apps that access SQL be kept on other servers? What are the dos and don'ts along these lines? The plan here is to install everything that accesses SQL on the same server as the database, but this seems like a bad move to me. Am I right?
I'm thinking of using the SQL Server Agent service for my PDA app, but I want to use different accounts for the SQL Server and SQL Server Agent services. How can we do this in SQL Server 2005? Do we do this when installing it? Thanks
I hope someone can help with my current issue, as I have now spent the last 8 hours trying to solve it! Here goes...
I am currently moving all my websites from our in-house setup to a FASTHOSTS dedicated server. Our in-house setup consists of separate servers for web and data and works well, even with my recent foray into .NET sites.
I have just moved the most recent of my .NET sites (http://www.alfadealerads.co.uk/home.aspx) to the FASTHOSTS server and cannot get the site to connect to the database (which has also been installed on the FASTHOSTS server). One of the pages that connects to the DB is http://www.alfadealerads.co.uk/links.aspx, where I am getting the error:
System.Data.SqlClient.SqlException: Cannot open database requested in login 'MasseratiChauffeurDrive'. Login fails. Login failed for user 'lateral'.
This DB has been restored from a backup of the original DB. I have recreated the user name on the new SQL Server and added it to the new DB as well. I am using a web.config to connect to the DB, the exact same one that works on my current live server (http://www.maseratichauffeurdrive.com/home.aspx), only with the obvious change to the SQL Server name.
The only difference I can see between the two setups is that the new one has both the web server and the SQL Server on one machine, and that machine uses domains.
My current in-house setup does not use domains and, as mentioned before, web and data are two separate servers.
Can anyone help? Or even understand what the hell I have written? lol
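One possibility worth checking, since the DB was restored from a backup taken on another server: the database user can become orphaned from the server login. A hedged sketch of the usual fix:

-- Run in the restored database.
USE MasseratiChauffeurDrive;
EXEC sp_change_users_login 'Report';                            -- list orphaned users
EXEC sp_change_users_login 'Update_One', 'lateral', 'lateral';  -- remap user to login
-- Or, on SQL Server 2005 SP2 and later:
ALTER USER lateral WITH LOGIN = lateral;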
We have a database in SQL Server 2008 R2 with mirroring, and we want the mirroring traffic to go over a dedicated network. We stopped the endpoint, but when we try to run the command shown below, this syntax error occurs:
Msg 102, Level 15, State 1, Line 1 Incorrect syntax near '192.168.1.14'.
What is the correct syntax of the command line below?
ALTER ENDPOINT Endpoint_Mirroring AS TCP (LISTENER_IP = '192.168.1.14')
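If I'm reading the ALTER ENDPOINT grammar right, LISTENER_IP takes a parenthesized four-part IP rather than a quoted string, and the TCP clause carries the port as well. Something along these lines (5022 is a placeholder for your endpoint's existing port):

ALTER ENDPOINT Endpoint_Mirroring
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = (192.168.1.14));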
I wonder if somebody here could recommend a good article about Microsoft Service Broker. I'm looking for advice and tips on designing applications using SQL Service Broker, mainly Query Notifications (QN): for instance, maintenance routines and common failure scenarios I might run into later, once my solution is implemented. I have googled for a while, but all I can find are recopied examples of QN.
Hi, I use data presentation controls like GridView and FormView in my application. In many of the web forms I also use multiple data sources, mainly for two-way data binding of controls within the data presentation controls. I am concerned about the performance issues this might cause as the number of users on these pages increases. What is the likely performance impact? Once the databind is done and values are populated in the respective controls, does the database connection of the datasource control get closed, or does it stay open? What are the best practices when implementing datasource controls?
I'm looking for some documentation on SQL 2K installation tips on a Windows 2000 member server platform, as well as best practices for ongoing maintenance.
Real world experience as well as Microsoft propaganda are all welcome.
I am looking for some examples of how to manage DDL scripts among various versions of a production db and development and testing. I have tried a few things in the past, and it always gets very muddled and cumbersome. I need to be able to build any version of the database from scratch, BUT I also need to maintain an upgrade path from any version to any later version. So it is not enough to just maintain a master build script, but I don't want to maintain 2 different things (modify the master build scripts AND create a new "ALTER" script for each version change). I thought I had seen an article somewhere that laid out a process for managing this, but I can't find it now (I thought it was in SQL Server Mag). Does anybody know of this article, or have a resource they could point me to that outlines best practices in this area? Thanks, Jason Wood, DBA in training.
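Not the article in question, but one common pattern is a version table plus one numbered, idempotent ALTER script per change; the full build is then just the scripts replayed in order, so there is only one thing to maintain. A sketch with illustrative names:

CREATE TABLE dbo.SchemaVersion (
    VersionNumber int          NOT NULL PRIMARY KEY,
    AppliedOn     datetime     NOT NULL DEFAULT (GETDATE()),
    Description   varchar(200) NOT NULL
);

-- Contents of a numbered script, e.g. 0042_add_customer_email.sql:
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE VersionNumber = 42)
BEGIN
    ALTER TABLE dbo.Customer ADD Email varchar(256) NULL;
    INSERT INTO dbo.SchemaVersion (VersionNumber, Description)
    VALUES (42, 'Add Customer.Email');
END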
Hi All, my question is: what are the best practices for administering large DBs? (My coworker is the DB administrator; I'm more of the developer, but I am slowly being sucked in.) My main concern is that we have some DBs that take approx 3 hrs a night just to rebuild the indexes. I know that with MSSQL 2000 I can use partitioned views to break out the table(s) into smaller databases and tables, but we also have an older server that runs MSSQL 7. Lastly, how do you handle drive space issues? Do you spread out the DB across multiple MDF files on different drives? Thanks in advance.
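On the drive-space point, a database can indeed be spread over several files on different drives. A sketch (database name, path, and sizes are placeholders):

ALTER DATABASE BigDb
ADD FILE (NAME = BigDb_Data2,
          FILENAME = 'F:\Data\BigDb_Data2.ndf',
          SIZE = 10240MB, FILEGROWTH = 1024MB);
-- New allocations are spread proportionally across the files in the
-- filegroup; existing data stays where it is.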
Please forgive me if I have overlooked a thread that answers this question, but I assure you that I have looked.
I would really appreciate a guide of sorts that would tell me the correct steps to take to properly secure a column in my database. I don't need specifics on how to do each step; I either have those already or can find them myself. In fact, I have already successfully encrypted and decrypted some data. I just want to make sure that I create the right keys and certificates and that I follow best practices as far as backups and so on are concerned.
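For reference, the usual SQL 2005 hierarchy and the backups people keep offsite look roughly like this (key/certificate names, passwords, and paths are placeholders):

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE DataCert WITH SUBJECT = 'Column encryption';
CREATE SYMMETRIC KEY DataKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE DataCert;

-- Back up the keys and keep the files (and passwords) somewhere safe:
BACKUP MASTER KEY TO FILE = 'C:\keys\master.key'
    ENCRYPTION BY PASSWORD = '<backup password>';
BACKUP CERTIFICATE DataCert TO FILE = 'C:\keys\DataCert.cer'
    WITH PRIVATE KEY (FILE = 'C:\keys\DataCert.pvk',
                      ENCRYPTION BY PASSWORD = '<backup password>');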
The environment is SQL Server 2005 x64 Enterprise running under Windows Server 2003 x64 Enterprise with four processors and 16 GB of RAM.
I have 28 data copy routines I would like to add to an SSIS package. They use the DataReader Source against an ODBC database (InterSystems Caché) and copy the table contents to a SQL 2005 database for reporting needs. The row counts in these 28 routines range from only 100 rows to over 6 million rows, depending on the table. I have tested them individually and they work fine. My question is: is it good practice to have all of these routines in a single package, or can I expect performance degradation?
I've got a table that receives frequent updates. I want 100% change tracking on this table, though, so we can roll back to any previous version or just see any changes people make.
Is there a best practice for things like this? Currently, I'm using an UPDATE trigger to take the previous values and store them in a history table. This keeps track of who changed what, and when. Plus, the most recent data stays separate and is more performant to access.
I've also heard about putting an 'IsActive' flag on the main table, where any changed row just gets marked as inactive and a new record gets added.
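For reference, a minimal version of the trigger approach, assuming hypothetical tables dbo.Orders and dbo.OrdersHistory (the history table adds ChangedBy/ChangedOn columns):

CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- "deleted" holds the pre-update values.
    INSERT INTO dbo.OrdersHistory (OrderID, Status, Amount, ChangedBy, ChangedOn)
    SELECT d.OrderID, d.Status, d.Amount, SUSER_SNAME(), GETDATE()
    FROM deleted AS d;
END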
I am new to SSIS, but have done a lot of DTS 2000 development.
What is the consensus for developing SSIS packages? Do you just place objects and change the properties of each object, having multiple objects basically doing the same thing with different properties? Or do you set an object's properties and then change those properties from code in scripts, e.g., an Execute SQL task whose connection and SQL statement are set by code in a script? Is this even possible? With Microsoft OOP I assume it is.
Script > set properties of Execute SQL > set flow to Execute SQL.
Is this the only way to do it in SSIS? http://sqljunkies.com/WebLog/ashvinis/archive/2005/06/15/15829.aspx For some reason I figured that SSIS would have this kind of thing built in; it seems like a function many would use.
I wonder if anyone knows the best setting for the MaxInsertCommitSize property of the SQL Server Destination if I want to load 6 million records into a target. Is the best setting 0 (try loading all rows in one batch), or should I choose a different value, for example 1,000,000 per batch?
Hi, I am a newbie in the ASP.NET world. I am using a 3-tier application architecture for my web-based application, and the database is SQL Server 2000. I have looked at the Object and SQL data source objects, but I think they are not suitable for my requirements, so I am planning to use ADO.NET directly to access data from the database (i.e., creating a connection, then creating commands and executing them). What I am looking for now are the best known practices for this task. I have the following solution in mind; please let me know if I am missing something or which would be the best approach:
Create one class which will handle all the database requests, so that all the pages and business objects go through that class for all the DB-related work (creating the connection, the command, and the execution).
A little bit of a newbie question here... I have a database with about 20 or so tables in a relational model. I am now working on an output scheme and had a quick question regarding best practices for outputting. Would it be best to:
1) Set up a view that basically joins all of these tables together, then bind a DataSet/DataTable to it and output as needed?
2) Set up individual views for each table and run through them?
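If option 1 is the route, the view might look something like this (table and column names are purely illustrative):

CREATE VIEW dbo.vOrderReport
AS
SELECT o.OrderID, c.CustomerName, p.ProductName, o.Quantity
FROM   dbo.Orders    AS o
JOIN   dbo.Customers AS c ON c.CustomerID = o.CustomerID
JOIN   dbo.Products  AS p ON p.ProductID  = o.ProductID;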
We are preparing to stand up a server with an HP tape library backup system using Data Protector software.
Our intention is to copy backup files from each of our DB servers to the file system of the server with the tape library. There will be two tape rotation schemes: a daily offsite tape, and a monthly offsite rotation for less critical applications. Our systems group isn't happy about that configuration but is willing to implement what we want.
My questions are: is anyone out there using one of these, and if so, can you give me a 10,000-foot view of your process?
Additionally, is this the best way to utilize this resource?
Any serious suggestions and comments are greatly welcomed and appreciated.
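For what it's worth, one way to avoid the separate copy step is to back up straight to a UNC share on the tape-library server (server name and path are placeholders; the SQL Server service account needs write access to the share):

BACKUP DATABASE MyDb
TO DISK = '\\TapeServer\Backups\MyDb_Full.bak'
WITH INIT;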
Since prices obviously change over time, I was wondering what the best table schema is to reflect these changes while still remembering previous price values (e.g., for generating reports on previous sales...).
Is it better to include a "Price SMALLMONEY" field in the purchases table (which kind of de-normalizes it), or is it better to have a separate ProductPrice table that keeps track of changing prices, like so:
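A minimal sketch of what such a table might look like (column names are illustrative):

CREATE TABLE dbo.ProductPrice (
    ProductID     int        NOT NULL,
    EffectiveFrom datetime   NOT NULL,
    Price         smallmoney NOT NULL,
    CONSTRAINT PK_ProductPrice PRIMARY KEY (ProductID, EffectiveFrom)
);
-- The price in effect for a sale is the row with the latest
-- EffectiveFrom that is <= the sale date.

In practice many designs do both: keep the price history table and also copy the charged price onto each purchase row, so sales reports don't depend on a date-range join.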
Hey gang, I've got a question about authentication. I have just loaded SQL Server on my virtual box, along with Microsoft's Best Practices Analyzer (BPA) and the Microsoft Baseline Security Analyzer. I get a funny result: the security scan works, but the BPA scan does not. I also looked at the databases I can access and noticed a new database schema. I was wondering: if I remove this database, will it have any effect on the existing databases?
I was wondering what the best way is to write a GROUP BY clause when there are many (and time-consuming) operations in the fields being grouped.
Fictitious example:
SELECT DeptNo, AVG(Salary) FROM Department GROUP BY DeptNo;
This will give me the average salary per department. Let's say, however, that I had 10-15 fields being returned (along with the AVG(Salary)), and some fields even had operations being performed on them. Is it better to create a temporary table (or a VIEW) to calculate the aggregate per department and then perform a JOIN with the rest of the data?
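For what it's worth, a common pattern is to do the aggregation in a derived table over just the grouping key, then join the other 10-15 fields back on, so the expensive expressions stay out of the GROUP BY. A sketch (an Employee table is assumed here for the salaries, since the names are fictitious anyway):

SELECT d.DeptNo, d.DeptName, a.AvgSalary
FROM   Department AS d
JOIN  (SELECT DeptNo, AVG(Salary) AS AvgSalary
       FROM   Employee
       GROUP BY DeptNo) AS a
      ON a.DeptNo = d.DeptNo;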
Can anyone direct me to sources for best practices on field types and sizes to use for commonly used information such as addresses, names, cities, and business names... Thanks, Brian