Are there any improvements in SQL 2008 backup methods, such as splitting backup files into manageable sizes, compressing the backups, and/or improving backup speed?
Though there are commercial tools available, it would be nice if the Microsoft SQL team incorporated these core features for small/medium-sized businesses.
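Both of the headline asks shipped natively in SQL Server 2008: WITH COMPRESSION (Enterprise in 2008, Standard from 2008 R2 onward) shrinks the file and usually speeds the backup up, and listing several DISK clauses stripes the backup across multiple files. A minimal sketch; the database name and path are placeholders:

BACKUP DATABASE Sales
TO DISK = N'F:\Backups\Sales.bak'
WITH COMPRESSION, STATS = 10;  -- STATS prints progress every 10 percent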
Currently we use a SQL maintenance plan to do a full backup of all our databases daily (about 40 databases on our production server). As you can imagine, this eats up disk space quickly, so at the moment we manually zip the backup files and/or move them to an archive drive. I considered writing an application to walk the backup folder structure and zip any .bak file it finds, but I know there are third-party tools out there that will back up/restore an MS SQL database. I was wondering whether any of these also zip the backups once they are created. Any recommendations or suggestions are welcome.
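A hedged aside: if upgrading to SQL Server 2008 or later is an option, native backup compression removes the zip step entirely, and it can be made the server-wide default so existing maintenance plans pick it up without being edited:

EXEC sys.sp_configure 'backup compression default', 1;  -- compress all backups by default
RECONFIGURE;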
I have a SQL 2012 Enterprise server and I'm using Commvault for my backups. Commvault can restore a .bak file to my server, but apparently it cannot apply SQL compression to the file, so what would be a 150GB compressed .bak file is now 600GB. I have to manually upload these files to an auditing firm on an SFTP server, and the transfer times are now huge.
Is there a way to use something in SQL to compress this already existing .bak file down?
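SQL Server can't recompress an existing .bak in place, but a hedged workaround on 2012 Enterprise is to take a fresh COPY_ONLY backup WITH COMPRESSION just for the transfer; COPY_ONLY keeps it out of the Commvault backup chain. The database name and path are placeholders:

BACKUP DATABASE AuditDB
TO DISK = N'E:\Transfer\AuditDB_compressed.bak'
WITH COPY_ONLY,    -- does not break the existing backup chain
     COMPRESSION,  -- typically shrinks the file several-fold
     CHECKSUM, STATS = 10;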
We are currently running Backup Exec v7.0 and are backing up two SQL 6.5 servers on our network. The backups have begun taking excessive amounts of time considering the limited amounts of data on the servers. Does anyone have an idea of parameters we could check on the servers or other settings that should be checked in Backup Exec? Also, this problem seems to have surfaced after we loaded Network Associates VirusScan NT onto our NT Servers and Workstations. Has anyone had problems with this product on their SQL servers?
Our DBAs want to speed up the SQL backup of a database with 4 GB of data onto a network drive on another server. We are using Windows NT 4.0 (SP3) and SQL Server 6.5 (SP4). We know that the "backup buffer size" SQL configuration parameter is one way of doing this. Here is what we observed when increasing the value: 1. If the value is set above 10, the run value is always 10, even after stopping and restarting the SQL Server services. 2. The backup took much longer than before when we increased the value from 5 to 10.
How do we set this properly? What could be the problem? And are there any other considerations?
Note that the servers have 512 MB of total RAM, of which 128 MB is allocated to SQL Server.
Anyone's experience and cooperation will be highly appreciated. Regards, Einas (DBA)
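A minimal sketch of the standard way to change the option on 6.5; the run value capping at 10 is consistent with 10 being this option's upper limit in that release, so larger settings are simply clamped rather than applied. The slowdown when going from 5 to 10 suggests the network share, not the buffers, is the bottleneck, so timing the backup at each value is the only safe guide:

EXEC sp_configure 'backup buffer size', 8;  -- try values between 1 and 10
RECONFIGURE WITH OVERRIDE;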
I have a query below which filters the detail field in the #TempLogins table. The detail field is a text field containing many kinds of strings, some of them URLs with parts like "ResultID=5", which is what the ResultIDSearch and ResultSetIDSearch fields contain. The records with entries like "ResultID=5" are the ones I'm trying to filter for.
The problem I have is that the query takes far too long to run. The #TempLogins table has around 200K records and the #TempSearch table around 80K records.
SELECT *
FROM #TempLogins a
WHERE EXISTS (SELECT 1
              FROM #TempSearch t1
              WHERE a.detail LIKE '%' + t1.ResultIDSearch + '%'
                 OR a.detail LIKE '%' + t1.ResultSetIDSearch + '%')
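A hedged alternative, assuming the fragments being searched for always look like 'ResultID=<digits>' (ResultSetID can be handled the same way): extract the token once per login row, index it, and join on equality instead of running 200K x 80K leading-wildcard comparisons, which can never use an index:

SELECT a.detail,
       ResultToken =
           CASE WHEN p.pos > 0
                THEN SUBSTRING(d.s, p.pos,
                     8 + PATINDEX('%[^0-9]%', SUBSTRING(d.s, p.pos + 9, 20) + 'x'))
           END
INTO #LoginTokens
FROM #TempLogins a
CROSS APPLY (SELECT CAST(a.detail AS varchar(8000))) AS d(s)    -- cast so string functions work on the text column
CROSS APPLY (SELECT PATINDEX('%ResultID=[0-9]%', d.s)) AS p(pos);

CREATE INDEX IX_Token ON #LoginTokens (ResultToken);

SELECT l.*
FROM #LoginTokens l
JOIN #TempSearch t ON t.ResultIDSearch = l.ResultToken;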
I have a pretty large DB and a fairly complex query. If I drop clean buffers and clear the plan cache, the query runs in 20 seconds, returning 25K rows. Subsequent runs take 2 seconds. Is this the result of the results being cached, the execution plan being cached, or something else? Are there good ways to close the gap between the initial and later runs? Does the cache stay present until the service restarts, or does SQL recycle the memory, and if so, based on what criteria?
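A hedged measurement sketch: the first run pays for physical reads (cold buffer pool) and compilation (cold plan cache); these commands separate the two effects. Both caches live until memory pressure or a restart evicts their contents, so the fast runs persist indefinitely on a quiet server:

SET STATISTICS IO ON;    -- shows physical vs logical reads per run
SET STATISTICS TIME ON;  -- shows parse/compile time separately from execution time

CHECKPOINT;
DBCC DROPCLEANBUFFERS;   -- cold buffer pool: next run does physical reads
DBCC FREEPROCCACHE;      -- cold plan cache: next run recompiles
-- run the query here, then run it again and compare the two sets of statistics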
I want to do a SQL database backup, but how can I back up the database to split backup files? The reason I want to split the backup file is because a single file is too big and I want to write it to DVD.
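A minimal sketch: listing several DISK clauses stripes the backup roughly evenly across the files, so choose enough files that each stripe fits a 4.7 GB DVD; SQL Server cannot target an exact file size natively. The database name and paths are placeholders:

BACKUP DATABASE Sales
TO DISK = N'F:\Backups\Sales_part1.bak',
   DISK = N'F:\Backups\Sales_part2.bak',
   DISK = N'F:\Backups\Sales_part3.bak'  -- all three files are needed to restore
WITH STATS = 5;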
I am trying to break apart a list of filenames that was inserted into a database. My code only breaks out the first filename, then moves on to the next record. If I process records individually they seem to work, but not when I query the whole table. I need to break out each file into a temp table and then insert them into a documents field in a database.
My filenames look like so, and a string can contain anywhere from 1 to 10 file names.
This is my current method; I needed to create a cursor around it to go through all the records, split out the filenames, and insert them into a temp table. But if there is a better way, I'll use it. The problem is that only the first file gets inserted into the temp table and nothing else, even when the filename string contains 4 files.
CREATE TABLE #tempFiles (
    OldStrId    int,
    OldPercent  int,
    strfilename varchar(max),
    RequestId   int,
    OblId       int
);

DECLARE @OldStr int, @OldPer int, @FileName varchar(max), @intcount int;

DECLARE filenames CURSOR FOR
    SELECT intSTRBonusID, intPercentID, strFileName
    FROM tblSTR
    WHERE strFileName > ''
    UNION ALL
    SELECT intSTRBonusID, intPercentID, strFileName
    FROM tblSTRHist
    WHERE intPercentID IN (61, 62) AND strFileName > ''
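A hedged set-based replacement for the cursor, assuming the filenames are comma-delimited (the sample string isn't shown above, so adjust the delimiter to match). The XML trick splits every row in one statement, which sidesteps whatever stops the loop after the first filename; note it would need escaping if filenames can contain &, < or >:

INSERT INTO #tempFiles (OldStrId, OldPercent, strfilename)
SELECT src.intSTRBonusID,
       src.intPercentID,
       LTRIM(RTRIM(x.f.value('.', 'varchar(max)')))
FROM (
    SELECT intSTRBonusID, intPercentID, strFileName FROM tblSTR
    WHERE strFileName > ''
    UNION ALL
    SELECT intSTRBonusID, intPercentID, strFileName FROM tblSTRHist
    WHERE intPercentID IN (61, 62) AND strFileName > ''
) src
CROSS APPLY (SELECT CAST('<f>' + REPLACE(src.strFileName, ',', '</f><f>') + '</f>' AS xml)) AS d(doc)
CROSS APPLY d.doc.nodes('/f') AS x(f);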
I have a table with a salescode column ('124!080') and a salesamount column ('125.65!19.25'), and I need to split the columns so that salesman 124 pairs with commission 125.65. Here is the DDL:
USE tempdb;
GO
DECLARE @TEST_DATA TABLE (
    DT_ID     INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
    InvNo     VARCHAR(10) NOT NULL,
    SalesCode NCHAR(80)   NOT NULL,
    Amount    NCHAR(80)   NOT NULL
);
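A hedged sketch that splits both '!'-delimited columns and pairs the pieces by position, assuming the Nth sales code always belongs with the Nth amount. It must run in the same batch as the table variable above; the tally approach works on SQL 2005+:

WITH Tally AS (
    SELECT TOP (160) n = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_objects
),
Codes AS (
    SELECT d.DT_ID, d.InvNo,
           ord = ROW_NUMBER() OVER (PARTITION BY d.DT_ID ORDER BY t.n),
           SalesCode = SUBSTRING(c.s, t.n, CHARINDEX('!', c.s + '!', t.n) - t.n)
    FROM @TEST_DATA d
    CROSS APPLY (SELECT RTRIM(d.SalesCode)) AS c(s)     -- NCHAR pads with trailing spaces
    JOIN Tally t
      ON t.n <= LEN(c.s)
     AND (t.n = 1 OR SUBSTRING(c.s, t.n - 1, 1) = '!')  -- t.n marks the start of each piece
),
Amounts AS (
    SELECT d.DT_ID,
           ord = ROW_NUMBER() OVER (PARTITION BY d.DT_ID ORDER BY t.n),
           Amount = SUBSTRING(a.s, t.n, CHARINDEX('!', a.s + '!', t.n) - t.n)
    FROM @TEST_DATA d
    CROSS APPLY (SELECT RTRIM(d.Amount)) AS a(s)
    JOIN Tally t
      ON t.n <= LEN(a.s)
     AND (t.n = 1 OR SUBSTRING(a.s, t.n - 1, 1) = '!')
)
SELECT c.InvNo, Salesman = c.SalesCode, Commission = a.Amount
FROM Codes c
JOIN Amounts a ON a.DT_ID = c.DT_ID AND a.ord = c.ord;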
I need to split the amount equally into 12 months, from Jan 2015 through Dec 2015. There is no date column in the table, and the total amount has to be split equally. I'm guessing I can't use PIVOT here because the date column is not there. How can I achieve this?
CREATE TABLE #tbl_data (
    Region VARCHAR(25),
    Amount FLOAT
);
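A minimal sketch, assuming SQL Server 2012+ (for DATEFROMPARTS) and that an even twelfth per month is what's wanted; cross-joining to a twelve-row month list avoids needing any date column in the table:

SELECT d.Region,
       MonthStart    = DATEFROMPARTS(2015, m.MonthNo, 1),
       MonthlyAmount = d.Amount / 12.0
FROM #tbl_data d
CROSS JOIN (VALUES (1),(2),(3),(4),(5),(6),
                   (7),(8),(9),(10),(11),(12)) AS m(MonthNo)
ORDER BY d.Region, MonthStart;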
SQL Server 2008 R2, 6 GB memory. I attempted a backup of a 500GB database, but it was taking far too long. I checked the resources on the box and saw the CPU at 100%. I checked the SQL Server activity log and saw a hung query (the user was not even logged on) with multiple threads, so I killed it, and now CPU utilization is back to normal.
Trouble is, now all of the threads in the activity monitor for the backup show 'suspended' and the backup appears to be not doing anything.
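A hedged diagnostic sketch: a killed session can spend a long time rolling back, and the backup threads may simply be waiting behind it. These queries show rollback progress and what the backup is waiting on (spid 75 is a hypothetical example):

SELECT session_id, command, status, wait_type, percent_complete
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP%'      -- the suspended backup threads
   OR command LIKE 'KILLED%';     -- the session still rolling back

KILL 75 WITH STATUSONLY;          -- reports estimated rollback completion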
I've written a custom script to delete backup files from a location, but I'm unable to modify it to count the number of files deleted. How should I modify the script?
/* Script to delete older than N days backup from a specific directory */
USE [db_admin]
GO
IF OBJECT_ID('usp_DeleteBackup', 'P') IS NOT NULL
    DROP PROC usp_DeleteBackup
GO
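A hedged sketch of the counting idea, assuming xp_cmdshell is enabled and a hypothetical path and retention: list the matching files into a temp table first (that row count is the deleted-file count), then delete them with the same filter:

DECLARE @Path varchar(260) = 'D:\Backups';  -- hypothetical backup folder
DECLARE @Days varchar(10)  = '7';           -- retention in days
DECLARE @List varchar(512), @Del varchar(512), @Count int;

SET @List = 'forfiles /p "' + @Path + '" /m *.bak /d -' + @Days + ' /c "cmd /c echo @path"';
SET @Del  = 'forfiles /p "' + @Path + '" /m *.bak /d -' + @Days + ' /c "cmd /c del @path"';

CREATE TABLE #Deleted (FileName varchar(512) NULL);
INSERT INTO #Deleted EXEC master.sys.xp_cmdshell @List;           -- capture the file list

SELECT @Count = COUNT(*) FROM #Deleted WHERE FileName LIKE '"%';  -- forfiles quotes each path

EXEC master.sys.xp_cmdshell @Del;
PRINT CAST(@Count AS varchar(10)) + ' backup file(s) deleted.';
DROP TABLE #Deleted;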
I am working on optimizing an update query for a particular table, and I want to measure the number of page splits after each update. How can I check this?
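A hedged sketch: leaf_allocation_count in this DMV counts leaf-page allocations, which include mid-index page splits, so sample it before and after the UPDATE and compare the delta (dbo.MyTable is a placeholder). Counting LOP_DELETE_SPLIT records in fn_dblog is an alternative that isolates true splits:

SELECT OBJECT_NAME(ios.object_id) AS table_name,
       i.name                     AS index_name,
       ios.leaf_allocation_count,       -- includes page splits at the leaf level
       ios.nonleaf_allocation_count
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL) AS ios
JOIN sys.indexes i
  ON i.object_id = ios.object_id AND i.index_id = ios.index_id;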
I am trying to split an annual cost into monthly numbers based on the contract period. Since the contract period varies from company to company, I'm not sure how to implement the logic.
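A hedged sketch with hypothetical table and column names, assuming the annual cost is an even yearly rate, so each contract month gets AnnualCost/12; a tally of month offsets expands each contract into one row per month, however long the period is:

DECLARE @Contracts TABLE (
    Company       varchar(50),
    ContractStart date,
    ContractEnd   date,
    AnnualCost    decimal(12,2)
);
INSERT INTO @Contracts VALUES ('Acme', '20150301', '20160229', 24000.00);

WITH Tally AS (          -- month offsets 0, 1, 2, ... to cover long contracts
    SELECT TOP (120) n = ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
    FROM sys.all_objects
)
SELECT c.Company,
       MonthStart  = DATEADD(MONTH, t.n, c.ContractStart),
       MonthlyCost = c.AnnualCost / 12.0
FROM @Contracts c
JOIN Tally t ON t.n <= DATEDIFF(MONTH, c.ContractStart, c.ContractEnd);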
Our front end saves all IP addresses used by a customer as a comma separated string, we need to analyse these to check for blocked IPs which are all stored in another table.
A LIKE statement comparing each string with the 100 or so excluded IPs would be very expensive, so I'm thinking it would be cheaper to split the comma-separated values out into tables.
The problem we have is that we never know how many IPs could be stored against a customer, so I'm guessing a function would be the way forward, but this is the point where I get stuck.
I can strip the first IP address out into a new column and produce a new list ready for the next pass, but as part of this we would need to create new columns on the fly depending on how many IPs are in the column, and it all needs to be repeated for each row:
SELECT IP_List,
       LEFT(IP_List, CHARINDEX(',', IP_List) - 1) AS IP_1,
       REPLACE(IP_List, LEFT(IP_List, CHARINDEX(',', IP_List)), '') AS NewIPList1
FROM IpExclusionTest
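A hedged set-based alternative that avoids the column-at-a-time loop: an inline splitter function (the XML trick, SQL 2005+) turns each list into rows regardless of how many IPs it holds, and the rows then join to the blocked-IP table. Names other than IP_List and IpExclusionTest are assumptions:

CREATE FUNCTION dbo.SplitIPList (@list varchar(max))
RETURNS TABLE
AS
RETURN
(
    SELECT IP = LTRIM(RTRIM(x.i.value('.', 'varchar(45)')))   -- 45 chars covers IPv6
    FROM (SELECT CAST('<i>' + REPLACE(@list, ',', '</i><i>') + '</i>' AS xml)) AS d(doc)
    CROSS APPLY d.doc.nodes('/i') AS x(i)
);
GO

-- Customers whose list contains any blocked IP (BlockedIPs is a hypothetical table):
SELECT DISTINCT c.IP_List
FROM IpExclusionTest c
CROSS APPLY dbo.SplitIPList(c.IP_List) s
JOIN BlockedIPs b ON b.IP = s.IP;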
I was contacted by the SAN team to test backup/restore of larger databases using a split-mirror backup (BCV), or clone, that is taken from the production db server and copied to another SQL box. They want to use this process once a week. I can see the mounted drives with the data/log files, and all looks good. Initially I attempted to attach the databases and received "Unable to open the physical file db.mdf. Operating System Error 5: Access is denied." I manually granted SQLServerMSSQLUser$<computer_name>$<instance_name> access on all 20 physical files, and that worked.
Since this will be weekly, the SAN team performed the copy again, and now none of the databases can communicate with the newly copied files; the NTFS permissions need to be set again. I'm also getting "Operating System error 21: the device is not ready." Is there something I'm missing in how the vendor's BCV clones the data and how SQL communicates with the copied files? I was expecting this to be a more automated process.
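A hedged sketch of scripting the weekly re-grant, assuming xp_cmdshell is enabled; the path, database name, and the default-instance service account 'NT SERVICE\MSSQLSERVER' are placeholders for your environment. Error 21 usually means the volume itself wasn't mounted yet when SQL touched the files, so the SAN mount has to complete before this step runs:

EXEC master.sys.xp_cmdshell
    'icacls "E:\SANCopy\db.mdf" /grant "NT SERVICE\MSSQLSERVER":F';
EXEC master.sys.xp_cmdshell
    'icacls "E:\SANCopy\db_log.ldf" /grant "NT SERVICE\MSSQLSERVER":F';

CREATE DATABASE db_copy
ON (FILENAME = N'E:\SANCopy\db.mdf'),
   (FILENAME = N'E:\SANCopy\db_log.ldf')
FOR ATTACH;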
I am trying to join two tables, and it looks like the data is messed up. I want to split the rows out, as there is more than one value in a row, but I don't see a consistent pattern to split on.
This is how the data looks:
CREATE TABLE #Sample (Numbers VARCHAR(MAX));
INSERT INTO #Sample VALUES ('1000');
INSERT INTO #Sample VALUES ('1024 AND 1025');
INSERT INTO #Sample VALUES ('109 ,110,111');
INSERT INTO #Sample VALUES ('Old # 1033 replaced with new Invoice # 1544');
INSERT INTO #Sample VALUES ('1355 Cancelled and Invoice 1922 added');
SELECT * FROM #Sample;
This is what is expected...
CREATE TABLE #Result (Numbers VARCHAR(MAX));
INSERT INTO #Result VALUES ('1000');
INSERT INTO #Result VALUES ('1024');
INSERT INTO #Result VALUES ('1025');
INSERT INTO #Result VALUES ('109');
INSERT INTO #Result VALUES ('110');
[Code] ....
How can I implement this? I believe wherever a string contains numbers, I need to split each one out into its own row, as in #Result above.
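A hedged sketch that sidesteps the lack of a single delimiter: walk each string with a tally and keep every maximal run of digits, which yields exactly the rows shown in #Result for the sample data (it would also pick up non-invoice numbers if any appear, so a length filter may be needed):

WITH Tally AS (
    SELECT TOP (8000) n = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
),
Starts AS (      -- positions where a digit run begins
    SELECT s.Numbers, pos = t.n
    FROM #Sample s
    JOIN Tally t ON t.n <= LEN(s.Numbers)
    WHERE SUBSTRING(s.Numbers, t.n, 1) LIKE '[0-9]'
      AND (t.n = 1 OR SUBSTRING(s.Numbers, t.n - 1, 1) NOT LIKE '[0-9]')
)
SELECT Numbers = SUBSTRING(st.Numbers, st.pos,
           ISNULL(NULLIF(PATINDEX('%[^0-9]%',
                  SUBSTRING(st.Numbers, st.pos, 8000)), 0) - 1, 8000))
FROM Starts st;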
I have several databases on a server (SQL Server 2000 only, no web server installed), and lately, as the company keeps growing, my users complain that the server gets slow (these DBs are well designed and receive optimizations, integrity checks, etc.). Because of this, I'm thinking about getting a new server to replace my old ProLiant ML 330, which was bought 4 years ago, but I'm wondering which server architecture or characteristic can best improve response performance: is it disk speed? Processor speed? Or more RAM? I want to make a good decision, so I'd really appreciate your help.
I'm deciding whether to use a CTE or this simpler, faster approach that hijacks a system table as a numbers table.
SELECT s.ORDER_NUMBER,
       s.PRODUCT_ID,
       1 AS QTY,
       s.VALUE / s.QTY AS VALUE
FROM @SPLITROW s
INNER JOIN master.dbo.spt_values t
    ON t.type = 'P'
   AND t.number BETWEEN 1 AND s.QTY
I just wanted to know whether it's okay to use system tables like this in a production environment, and whether there are any pitfalls.
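Two commonly cited pitfalls: spt_values is undocumented, so Microsoft can change or drop it between versions, and its type = 'P' range only runs from 0 to 2047, silently truncating larger QTY values. A hedged alternative that keeps the same query shape is to generate the numbers yourself:

WITH Tally AS (
    SELECT TOP (10000) number = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
)
SELECT s.ORDER_NUMBER,
       s.PRODUCT_ID,
       1 AS QTY,
       s.VALUE / s.QTY AS VALUE
FROM @SPLITROW s
INNER JOIN Tally t
    ON t.number BETWEEN 1 AND s.QTY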
We currently use a split-mirror backup strategy for our Sybase database, which has a "quiesce database" command to suspend all transactions. By quiescing the database before splitting the mirror, we suspend all transactions to ensure we get a stable backup of the environment. It works very well for us and I'm trying to understand how we could implement this with our SQL Server 2005 DB.
(I'm aware of SQL Server mirroring and that there are other ways of possibly backing up the DB. In this post however, I'm only interested in how I would make the split-mirror strategy work if I wanted to pursue it. I'm trying to avoid paying for software that uses the VDI as it's quite costly.)
Can someone help me with how I would accomplish a split-mirror backup strategy in SQL Server 2005 (without using a vendor's software that uses the VDI)? I have to imagine there's something similar to the "quiesce database" command in SQL Server...
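A hedged sketch of the closest simple approximation: SQL Server 2005 has no direct equivalent of Sybase's "quiesce database", and the supported freeze/thaw path goes through VDI-based snapshot backups. Short of that, taking the database offline (or read-only) around the mirror split guarantees the files are stable, at the cost of availability during the split; Sales is a placeholder name:

ALTER DATABASE Sales SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- ... split the mirror here while the files are closed and consistent ...
ALTER DATABASE Sales SET ONLINE;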
I was planning on running a service where thousands of text messages are stored. Obviously I'd want to make the most of my DB space, and was wondering if there's some way for SQL to compress text down to the smallest space possible. If not, is there some kind of ASP component I could download to do this? Failing that, I could always write a simple one, which takes the most common letter combinations, and shortens them down to a single character.
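A hedged aside: on SQL Server 2016 or later the built-in COMPRESS and DECOMPRESS functions GZip a value server-side, which does exactly this without any ASP component; on older versions a CLR function or an application-side compressor is needed:

DECLARE @msg nvarchar(max) = N'some long text message ...';
DECLARE @packed varbinary(max) = COMPRESS(@msg);   -- GZip-compressed bytes

SELECT original_bytes   = DATALENGTH(@msg),
       compressed_bytes = DATALENGTH(@packed),
       round_trip       = CAST(DECOMPRESS(@packed) AS nvarchar(max));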
I want to set up log shipping between two SQL Server 2005 Enterprise Edition servers. The transaction log backup is taken every 2 hours and is roughly 1GB-1.5GB. Before the copy job runs, I want to compress the tlog file on the primary server, copy it to the secondary server, uncompress it there, and then apply the transaction logs to the secondary. I need the procedure for doing this. Please help out.
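A hedged sketch of one common approach: SQL Server 2005 has no native backup compression, so a job step like this (assuming xp_cmdshell is enabled and a command-line 7-Zip install; all paths are placeholders) can run on the primary between the backup and copy jobs, with a matching extract step on the secondary before the restore job:

-- on the primary, after the log backup completes:
EXEC master.sys.xp_cmdshell
    '"C:\Program Files\7-Zip\7z.exe" a -tzip "D:\LogShip\Out\tlog.zip" "D:\LogShip\tlog.trn"';

-- on the secondary, after the copy job and before the restore job:
EXEC master.sys.xp_cmdshell
    '"C:\Program Files\7-Zip\7z.exe" e -y -o"D:\LogShip\In" "D:\LogShip\In\tlog.zip"';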