We are preparing to stand up a server with an HP tape library backup system using Data Protector software.
Our intention is to copy backup files from each of our db servers to the file system of the server with the tape library. There will be two tape rotation schemes: one with a daily offsite tape, and one with a monthly offsite rotation for less critical applications. Our systems group isn't happy about that configuration but is willing to implement what we want.
My questions are:
Is there anyone out there using one of these, and if so, can you give me a 10,000-foot view of your process?
Additionally, is this the best way to utilize this resource?
Any serious suggestions and comments are greatly welcomed and appreciated.
I'm about to embark on creating a maintenance plan to back up all databases on one of our SQL 2005 servers. I am looking for some advice on best practices for doing this.
I have it in my mind that I want to take a full database backup once a week, with differential backups on a daily basis and transaction log backups every 2 to 4 hours.
Do I need to create three maintenance plans for this, i.e. one for full, one for differential, and one for transaction log backups?
If I want to keep only the backups from the last week, is this done by setting up a maintenance cleanup task in the full backup plan to clear all .bak files that are a week old? If so, I'll probably also require one to remove the .trn files.
When using the backup command from the context menu in SSMS there is an option to name the backup set. How does this work when using maintenance plans, as I haven't been able to find this option whilst trying out some of the features?
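For reference, this is roughly what the three backup types look like as plain T-SQL, including the backup set NAME option that the SSMS dialog exposes (the database name and paths here are made-up placeholders):

    BACKUP DATABASE MyDb
        TO DISK = N'E:\Backups\MyDb_Full.bak'
        WITH NAME = N'MyDb Weekly Full';

    BACKUP DATABASE MyDb
        TO DISK = N'E:\Backups\MyDb_Diff.bak'
        WITH DIFFERENTIAL, NAME = N'MyDb Daily Differential';

    BACKUP LOG MyDb
        TO DISK = N'E:\Backups\MyDb_Log.trn'
        WITH NAME = N'MyDb Log Backup';

A maintenance plan generates statements along these lines under the covers, which may help when comparing what the plan designer does against the SSMS backup dialog.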
I'm sure to have more questions on this subject, but any help on the above queries would be most appreciated.
I was once told that I was not to use the GUI to set up a maintenance plan. What the person said was that I needed to set up a different plan for each tab of the GUI, instead of going through the tabs and making sure the times do not overlap. Has anyone ever heard of this, or is this an old wives' tale told by only one person? I am using version 8 of SQL (SQL Server 2000) on XP machines with 8 connections to that database, and I need to back up the database, clean it up, and do everything else the GUI offers, so teach me the correct ways and tell me why. Thanks!
We have a live OLTP database for which we create full backups every week and differential backups every day. Recently we added an OLAP database, which we need to update daily with changes from the live database.
This is the process we are planning to use:
1. Restore the last full OLTP backup.
2. Apply the last differential OLTP backup. At this point we should have a replica of the live OLTP database.
3. Update the OLAP database based on the OLTP replica database.
4. Delete the OLTP replica database.
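As a minimal sketch, steps 1 and 2 would look something like this in T-SQL (all database names, logical file names, and paths are hypothetical):

    RESTORE DATABASE OLTP_Replica
        FROM DISK = N'E:\Backups\OLTP_Full.bak'
        WITH MOVE N'OLTP_Data' TO N'E:\Data\OLTP_Replica.mdf',
             MOVE N'OLTP_Log'  TO N'E:\Data\OLTP_Replica.ldf',
             NORECOVERY;  -- leave the database restoring so the differential can be applied

    RESTORE DATABASE OLTP_Replica
        FROM DISK = N'E:\Backups\OLTP_Diff.bak'
        WITH RECOVERY;   -- bring the replica online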
Two questions:
1. If different from the process above, how is this OLTP-to-OLAP transformation typically done in the industry?
2. What is the best way to implement this process with SQL Server 2005?
I have problems using SQL Profiler from a Windows 2000 client PC against a SQL Server 7 server which is on a Windows NT machine. I can use Profiler OK from an NT client (using Named Pipes), but can't seem to get it to work on my Windows 2000 PC. I tried changing the default network library on my PC to TCP/IP, but it still does not work. The server has a default of Named Pipes. Can I get the server to use either Named Pipes or TCP/IP depending on whether the communicating client is an NT or Windows 2000 PC? Or is this not the problem? Is there a problem with Windows 2000 and Named Pipes?
I have a question about MS SQL 2000 and hope someone can give me a hand, though this may not be the most suitable place to post.
I have created multiple databases storing multi-country data. The tables and stored procedures in each db are exactly the same, except that they are placed and run in different dbs at runtime.
My question is: is it possible to centralize all the stored procedures and functions into a central db (or a dummy db) for easy maintenance, while still allowing us to point to the desired database environment at runtime, even though I run the stored procedure in the central database?
I found the USE database command, but it does not allow us to use it inside a stored procedure. Any advice or suggestions?
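One common workaround (sketched here with hypothetical object names, using only what SQL 2000 supports) is to have the central procedure build the fully qualified object name dynamically and run it with sp_executesql:

    CREATE PROCEDURE dbo.GetCustomers
        @DbName sysname
    AS
    BEGIN
        DECLARE @sql nvarchar(4000);
        -- QUOTENAME guards against a malformed or malicious database name
        SET @sql = N'SELECT * FROM ' + QUOTENAME(@DbName) + N'.dbo.Customers;';
        EXEC sp_executesql @sql;
    END

Called as EXEC CentralDb.dbo.GetCustomers @DbName = N'Sales_UK', it runs against whichever country db you pass in. The trade-off is that dynamic SQL gives up compile-time checking and some plan reuse.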
I have created a custom library (CodeLibrary) which internally references the DLLs Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.WorkItemTracking.Client.
I added this custom DLL, CodeLibrary.dll, to my SSRS report, and set the expression of one of the fields to:
=codelibrary.codefunction.GetValue(1000)
(codefunction is the class and GetValue is the method.)
When I preview the report, I get the error "Error while loading code module:
'CodeLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. Could not load file or assembly 'CodeLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified."
I am using VS2013, and I have placed the custom library DLL in the path
C:\Program Files\Microsoft Visual Studio 12.0\Common7\IDE\PrivateAssemblies
I have tested the custom library with a WPF application and it works fine.
I am not able to figure out what is causing this error.
I wonder if somebody here could recommend a good article about MS Service Broker. I'm looking for advice and tips on designing applications using SQL Service Broker, mainly Query Notifications (QN): for instance, maintenance routines and common failure scenarios I might run into later once my solution is implemented. I have googled for a while, but all I can find are recopied examples of QN.
Hi, I use data presentation controls like GridView and FormView in my application. In many of the webforms I also use multiple datasources, mainly for the purpose of two-way data binding for controls within the data presentation controls. I am concerned about the performance issues this might cause as the number of users on these pages increases. What is the likely performance impact? Once the databind is done and values are populated in the respective controls, does the database connection of the datasource control get closed, or does it stay open? What are the best practices when implementing datasource controls?
I'm looking for some documentation on SQL 2K installation tips for a Windows 2000 Member Server platform, as well as best practices for ongoing maintenance.
Real-world experience as well as Microsoft propaganda are all welcome.
I am looking for some examples of how to manage DDL scripts among various versions of a production db, development, and testing. I have tried a few things in the past, and it always gets very muddled and cumbersome. I need to be able to build any version of the database from scratch, BUT I also need to maintain an upgrade path from any version to any later version. So it is not enough to just maintain a master build script, but I don't want to maintain two different things (modify the master build scripts AND create a new "ALTER" script for each version change). I thought I had seen an article somewhere that laid out a process for managing this, but I can't find it now (I thought it was in SQL Server Mag). Does anybody know of this article, or have a resource they could point me to that outlines best practices in this area? Thanks, Jason Wood, DBA in training.
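For illustration, one pattern along these lines (a sketch with hypothetical names, not from any particular article) is to keep only incremental upgrade scripts plus a version table, so a database at any version can be walked forward by running the scripts above its current version:

    -- one-time setup: track which upgrade scripts have been applied
    CREATE TABLE dbo.SchemaVersion (
        Version   int      NOT NULL PRIMARY KEY,
        AppliedOn datetime NOT NULL DEFAULT GETDATE()
    );

    -- each upgrade script guards itself so it is safe to re-run
    IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE Version = 42)
    BEGIN
        ALTER TABLE dbo.Customer ADD Email varchar(255) NULL;
        INSERT INTO dbo.SchemaVersion (Version) VALUES (42);
    END

Building from scratch then means running every script in order from version 1, which leaves a single maintained artifact instead of two.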
Hi All, my question is: what are the best practices for administering large DBs? (My coworker is the DB administrator. I'm more of the developer. But slowly being sucked in.) My main concern is that we have some DBs that take approx 3 hrs a night just to rebuild the indexes. I know that with MSSQL 2000, I can use partitioned views to break out the table(s) into smaller databases and tables. But we also have an older server that runs MSSQL 7. Lastly, how do you handle drive space issues? Do you spread out the DB across multiple MDF files on different drives? Thanks in advance.
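On the index-rebuild time specifically, one thing that can help (a sketch using SQL 7/2000-era commands, with a hypothetical table name) is to check fragmentation first and rebuild only the indexes that actually need it:

    -- report fragmentation for one table's indexes (SQL 2000 accepts the name;
    -- SQL 7 wants the table ID from OBJECT_ID)
    DBCC SHOWCONTIG ('Orders');

    -- rebuild all indexes on the table only if Scan Density looks poor;
    -- '' means all indexes, 90 is the fill factor
    DBCC DBREINDEX ('Orders', '', 90);

Driving this from a cursor over sysobjects and skipping tables with healthy scan density can cut a blanket nightly rebuild down considerably.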
Please forgive me if I have overlooked a thread that answers this question, but I assure you that I have looked.
I would really appreciate a guide of sorts that would tell me the correct steps to take to properly secure a column in my database. I don't need specifics on how to do each step; I either have those already or can find them myself. In fact, I have already successfully encrypted and decrypted some data. I just want to make sure that I create the right keys and certificates, and that I follow best practices as far as backups and such are concerned.
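For orientation, the usual SQL Server 2005 key hierarchy for column encryption looks roughly like this (a sketch with made-up names, passwords, and paths; the backup step at the end is the part that is easy to forget):

    -- 1. database master key, protected by a password
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPasswordHere!1';

    -- 2. certificate, protected by the master key
    CREATE CERTIFICATE ColCert WITH SUBJECT = 'Column encryption cert';

    -- 3. symmetric key for the actual column data
    CREATE SYMMETRIC KEY ColKey
        WITH ALGORITHM = AES_256   -- AES requires Windows Server 2003 or later
        ENCRYPTION BY CERTIFICATE ColCert;

    -- encrypt (DecryptByKey reverses this on read)
    OPEN SYMMETRIC KEY ColKey DECRYPTION BY CERTIFICATE ColCert;
    UPDATE dbo.Customer
       SET SSN_Enc = EncryptByKey(Key_GUID('ColKey'), SSN_Plain);
    CLOSE SYMMETRIC KEY ColKey;

    -- back up the certificate and its private key somewhere safe
    BACKUP CERTIFICATE ColCert
        TO FILE = 'E:\Keys\ColCert.cer'
        WITH PRIVATE KEY (FILE = 'E:\Keys\ColCert.pvk',
                          ENCRYPTION BY PASSWORD = 'AnotherStrongPassword!2');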
Environment is SQL Server 2005 x64 Enterprise running under Windows Server 2003 x64 Enterprise with four processors and 16 GB of RAM.
I have 28 data copy routines I would like to add to an SSIS package. They use the DataReader Source against an ODBC database (InterSystems Cache) and copy the table contents to a SQL 2005 database for reporting needs. The data rows in these 28 routines range from only 100 rows to over 6 million rows, depending on the table. I have tested these individually and they work fine. My question is: is it good practice to have all of these routines in a single package, or can I expect performance degradation?
I've got a table that has frequent updates to it. I want 100% change tracking on this table though, so we can rollback to any previous version, or just see any changes people make.
Is there a best practice for things like this? Currently, I'm using an UPDATE trigger to take the previous values and store them in a history table. This keeps track of who changes what, and when. Plus, the most recent data stays separate and is more performant to access.
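For concreteness, a minimal version of that trigger might look like this (table and column names are hypothetical):

    CREATE TRIGGER trg_Widget_History ON dbo.Widget
    AFTER UPDATE
    AS
    BEGIN
        -- "deleted" holds the pre-update rows, i.e. the previous version
        INSERT INTO dbo.WidgetHistory (WidgetID, Name, Price, ChangedBy, ChangedOn)
        SELECT d.WidgetID, d.Name, d.Price, SUSER_SNAME(), GETDATE()
        FROM deleted AS d;
    END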
I've also heard about putting an 'IsActive' flag on the main table, where any row that changes just gets marked as inactive and a new record gets added.
I am new to SSIS, but have done a lot of DTS 2000 development.
What is the consensus for developing SSIS packages? Do you just place objects and change the properties of each object, having multiple objects basically doing the same thing with different properties? Or do you set an object's properties and then change the properties in code via scripts, i.e. an Execute SQL task whose connection and SQL statement are set by code in a script? Is this even possible? With Microsoft OOP I assume it is.
Script> Set properties of ExecuteSQL > set flow to ExecuteSQL.
Is this the only way to do it in SSIS? http://sqljunkies.com/WebLog/ashvinis/archive/2005/06/15/15829.aspx For some reason I figured that SSIS would have this kind of stuff built into it; it seems like a function that many would use.
I wonder if anyone knows what the best setting would be for the MaxInsertCommitSize property of the SQL destination task if I want to load 6 million records into a target. Is the best setting 0 (try loading everything in one batch), or should I choose a different value, for example 1,000,000 rows per batch?
I'm getting this message on my third automated backup of the transaction logs of the day. Both databases are in full recovery mode, and both were successfully backed up at 01:00. The transaction logs backed up perfectly happily at 01:30 and 05:30, but failed at 09:30.
The only difference between the 05:30 and 09:30 backups is that the log files were shrunk at 08:15 (the databases in question are the ones that sit under ILM2007, and keeping the log files small keeps the system running better).
Is it possible that shrinking the log files causes the database to think that there hasn't been a full database backup?
Hi, I am a newbie in the ASP.NET world. I am using a 3-tier application architecture for my web-based application, and the database is SQL Server 2000. I have looked at the object and SQL datasource objects, but I think they are not suitable for my requirements, so I am planning to use ADO.NET directly to access data from the database (i.e. creating a connection, then creating commands and executing them). What I am looking for now is the best-known practices for the above task. I have the following solution in mind; please let me know if I am missing something or which would be the best approach.
Create one class which will handle all the database requests, so that all the pages and business objects ask that class to do all the db-related stuff (creating connections and commands, and executing them).
A little bit of a newbie question here... I have a database with about 20 or so tables in a relational model. I am now working on an output scheme and had a quick question regarding best practices for outputting. Would it be best to:
1) Set up a view that basically joins all of these tables together, then bind a DataSet/DataTable to it and output as needed?
2) Set up individual views for each table and run through them?
Since prices obviously change over time, I was wondering what the best table schema is to reflect these changes while still remembering previous price values (e.g. for generating reports on previous sales...).
Is it better to include a "Price SMALLMONEY" field in the purchases table (which kind of de-normalizes it), or is it better to have a separate ProductPrice table that keeps track of changing prices, like so:
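(The original snippet is missing here; this is a plausible sketch of what such a ProductPrice table usually looks like, with hypothetical names:)

    CREATE TABLE ProductPrice (
        ProductID     int        NOT NULL REFERENCES Product (ProductID),
        Price         smallmoney NOT NULL,
        EffectiveFrom datetime   NOT NULL,
        PRIMARY KEY (ProductID, EffectiveFrom)  -- one price per product per effective date
    );

A purchase row can then be joined to the price in effect on its purchase date, at the cost of a hairier lookup than storing the price on the purchase row itself.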
Hey gang, I've got a question about authentication. I have just loaded SQL Server on my virtual box, along with Microsoft's BPA and the Microsoft security analyzer. I get a funny reading: the security scan works, but the BPA scan does not. I also looked at the databases I can get access to and noticed a new database schema. I was wondering, if I remove this database, will it have any effect on the present database?
I was wondering what the best way is to write a GROUP BY clause when there are many (and time-consuming) operations on the fields being grouped.
Fictious example:
SELECT DeptNo, AVG(Salary) FROM Department GROUP BY DeptNo;
This will give me the average salary per department. Let's say, however, that I had 10-15 fields being returned (along with the AVG(Salary)), and some fields even had operations being performed on them. Is it better to create a temporary table (or a VIEW) to calculate the aggregate per department, and then perform a JOIN with the rest of the data?
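For what it's worth, the derived-table version of that idea looks like this (sketched against the same fictitious schema, with a hypothetical Employee table holding the salaries and made-up extra columns on Department):

    SELECT d.DeptNo, d.DeptName, d.ManagerName, a.AvgSalary
    FROM Department AS d
    JOIN (
        -- aggregate once, over the narrow column set
        SELECT DeptNo, AVG(Salary) AS AvgSalary
        FROM Employee
        GROUP BY DeptNo
    ) AS a ON a.DeptNo = d.DeptNo;

The point of the shape is that only the columns needed for the aggregate pass through the GROUP BY; the wide, expensive fields are joined on afterwards, once per group instead of once per row.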
Can anyone direct me to sources for best practices on the field types and sizes to use for commonly used information such as addresses, names, cities, business names... Thanks, Brian
I am hoping someone can give some advice on the following things:
I have read a few times about a data access layer in an n-tier application. I am assuming that this should be done using sprocs. Is there an advantage to using sprocs instead of views (in situations where the same thing could be accomplished with either)? Will a sproc run faster than a view? Can anyone share any info?
Are sprocs best suited for data access and to enforce business rules?
I know SQL Server has reserved words that shouldn't be used. I am wondering what the best thing to do is in the following situation: what is the best way to handle storing a customer's or client's address? I am working from a book that shows the name of a column as "Address". I have found with SQL Server 2005 Express that this is a reserved word (it is shown in blue in the query window). I want to keep my names short, and I am trying to avoid a name like "StreetAddress". Is my book teaching bad habits? Thanks.
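For what it's worth, delimiting the identifier with brackets lets you keep the short name even where it collides with a keyword (hypothetical table name here):

    SELECT [Address]
    FROM dbo.Customer;

Whether relying on that everywhere is a good habit is a fair question, though; many people rename the column precisely to avoid the constant bracketing.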
Among the best practices for SQL Server service accounts on page 8, it is recommended to 'use a separate account for each service'. I created a separate account for each service as advised, and assigned each account to the relevant Windows group created for each SQL Server service during SQL setup.
Now when I run the Best Practices Analyzer, its report seems to contradict what the above article said. For example, the BPA report excerpts:
"We recommend that the service SQLBrowser on host MachineName be run under Network Service Account". I get similar recommendation for SQLSERVERAGENT account as well. Most importantly, it recommends that MSFTESQL be run under SQL Server Service Account.
Can anyone point me to MS best practices that advise against installing non-SQL-related applications on a SQL DB server? I feel it to be common sense, but I have been asked to justify this with a direct statement from Microsoft. Help?