We have a couple of 200GB databases that are recreated each night on a SQL Server 2012 instance attached to a disk array. The SQL LUN is an auto-tiered combination of 10k and 7k drives in RAID 1, and both the data and log files live there.
Recently, some room has opened up on an older array with many smaller 15k drives that I could use in a RAID 1 config. Since I'd like to split up the mdf and ldf files, which would you put on the new (faster) disks?
EDIT: Additional info: the only current performance issue I see in the SQL log is FlushCache messages occurring throughout the night, when all activity happens for this DB. Things like this: "FlushCache: cleaned up 388690 bufs with 23474 writes in 409743 ms (avoided 179747 new dirty bufs) for db 47:0"
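One way to decide, as a hedged diagnostic sketch for SQL Server 2012: compare per-file write stalls with sys.dm_io_virtual_file_stats and move whichever file is actually waiting on the disk to the 15k array.

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.type_desc,
       mf.physical_name,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id
ORDER BY avg_write_ms DESC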
Hello all, I am new to SQL 2000. I installed a SQL 2000 database on my C: drive, but now I've found that my C: drive is running out of space, so I want to move my database (including data and structure) from the C: drive to the D: drive (which has much more space). Is it possible to do this? If it can be done, do I need to change my ASP.NET source code (for example, change my Crystal Reports connection string)? Thanks in advance!
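For reference, a minimal detach/move/attach sketch for SQL Server 2000 (the database name and paths here are hypothetical; run from master, with no one connected to the database):

EXEC sp_detach_db 'MyDatabase'
-- copy MyDatabase.mdf and MyDatabase_log.ldf from C:\ to D:\ in Windows
EXEC sp_attach_db 'MyDatabase',
     'D:\Data\MyDatabase.mdf',
     'D:\Data\MyDatabase_log.ldf'

Since the server name and database name stay the same, connection strings (including Crystal Reports) normally do not need to change.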
While investigating a disk problem I noticed something odd: every 2 minutes there is a burst of writes to disk, apparently to the database's mdf file, at roughly 6-8 million bytes/sec for about 1-2 seconds, then nothing until the next 2-minute mark. Is this normal behaviour? It looks like synchronous writing to disk from memory.
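A periodic burst of writes to the data file like this usually lines up with the automatic checkpoint flushing dirty pages from memory. A hedged way to check is to watch the checkpoint counter while the bursts happen; note that 'per sec' counters in this DMV are cumulative, so sample twice and take the difference.

SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Checkpoint pages/sec'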
Is there any way to check if the server is having disk latency or IO issues? I found the following in the SQL error log:
Date: 10/1/2014 8:28:58 AM
Log: SQL Server (Current - 10/1/2014 12:00:00 AM)
Source: spid10s
Message: SQL Server has encountered 8500 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [D:\Fin.mdf] in database [Fin] (5). The OS file handle is 0x0000000000001368. The offset of the latest long I/O is: 0x0001104a7da000
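A hedged sketch for checking this from inside SQL Server: list the I/O requests currently pending at the OS level, which shows whether the 15-second warnings reflect live disk latency right now.

SELECT mf.physical_name,
       p.io_pending,
       p.io_pending_ms_ticks
FROM sys.dm_io_pending_io_requests AS p
JOIN sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    ON vfs.file_handle = p.io_handle
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id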
We're having some issues with where our backups write to, so I've been watching and monitoring the performance. Today I noticed that RESTORE LABELONLY FROM DISK has been running almost non-stop for the past few hours.
The account running the query is the SQL Server's service account, and the program is "Microsoft SQL Server".
Every minute or so the SPID changes, which made me think it was related to the transaction log backups: each "restore labelonly" runs for as long as the corresponding database's transaction log backup.
Example: Database A's transaction log backup takes 1 minute, and SPID XX for restore labelonly runs for 1 minute; then Database B's transaction log backup starts, and there is a new SPID for restore labelonly.
I hope this makes sense because I normally don't see this restore labelonly running.
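For reference, the statement itself is harmless: it only reads the media header of a backup device, which is why backup software often issues it before writing to or verifying each device. A minimal example (the path is hypothetical):

RESTORE LABELONLY FROM DISK = 'X:\Backups\DatabaseA_log.trn'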
I have a Windows 2012 cluster environment that consists of two SQL Server nodes with a quorum disk configured as the witness.
Manual failover between the nodes works fine; however, the virtual SQL instance does not see the quorum disk.
Moreover, the quorum disk has the same disk number as another cluster storage disk. Is that considered a problem?
When I move the SQL instance from one node to another, should the quorum disk change ownership to the destination node as well? If it is not changing ownership, what would be the problem?
I am wondering what would be the best disk/RAID setup for a Windows Server 2008 R2 OS and a SQL Server 2012 database that has heavy read/write activity. I have the following disks I can use:

4x 15k 146GB
2x 10k 600GB

According to the server build requirements for the application, I need 100GB for the OS and 290GB for the drive containing the SQL mdf. There are no stated requirements for the ldf, but I would like to know if it should be allocated elsewhere. My plan is RAID 10 on the 15k drives for SQL and RAID 1 on the 10k drives for the OS.
If I return the Average, Minimum, and Maximum values for the counter Physical Disk: Avg. Disk Queue Length, and those values are 10, 0, and 87 respectively, which value do I use to compute the Avg. Disk Queue Length per disk for a 4-disk array (RAID 10): Average, Minimum, or Maximum? The disk (LUN) is on a SAN.
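For what it's worth, the usual rule of thumb, hedged because SAN-side caching and virtualization blur spindle counts, is to divide the sustained average by the number of disks:

Average: 10 / 4 disks = 2.5 per disk   (the sustained figure to watch)
Maximum: 87 / 4 disks = 21.75 per disk (a transient peak, not a baseline)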
-- Initialize Control Mechanism
DECLARE @Drive TINYINT, @SQL VARCHAR(100)

SET @Drive = 97

-- Setup Staging Area
DECLARE @Drives TABLE
(
    Drive CHAR(1),
    Info VARCHAR(80)
)

WHILE @Drive <= 122
BEGIN
    SET @SQL = 'EXEC XP_CMDSHELL ''fsutil volume diskfree ' + CHAR(@Drive) + ':'''

    INSERT @Drives ( Info )
    EXEC (@SQL)

    UPDATE @Drives
    SET Drive = CHAR(@Drive)
    WHERE Drive IS NULL

    SET @Drive = @Drive + 1
END

-- Show the expected output
SELECT Drive,
       SUM(CASE WHEN Info LIKE 'Total # of bytes : %' THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT) ELSE CAST(0 AS BIGINT) END) AS TotalBytes,
       SUM(CASE WHEN Info LIKE 'Total # of free bytes : %' THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT) ELSE CAST(0 AS BIGINT) END) AS FreeBytes,
       SUM(CASE WHEN Info LIKE 'Total # of avail free bytes : %' THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT) ELSE CAST(0 AS BIGINT) END) AS AvailFreeBytes
FROM ( SELECT Drive, Info FROM @Drives WHERE Info LIKE 'Total # of %' ) AS d
GROUP BY Drive
ORDER BY Drive
I am trying to set up a test cluster and am having an issue. When I try to create a physical disk resource, it takes both drive E: and drive Q: and doesn't separate them into two physical disk resources. This means that when I try to assign the quorum disk, it links to the physical disk resource containing both E: and Q:. Then, when I try to install SQL2k5, I get the warning about installing SQL on the quorum disk. Am I missing something? Is there a way to separate E: and Q: onto two physical disk resources so I can specifically assign the quorum to Q: and SQL to E:, or should I be setting the quorum disk to a majority node set? Thanks in advance.
I have a three-tier system using SQL Server 2000. We are currently experiencing IO bottlenecks on our SCSI RAID 10 array, which holds the data and the logs in separate partitions.
So my options as I understand it are:
Get Enterprise edition
or
Get another physical RAID 10 array and separate the logs and data, i.e. data on one array and logs on the other.
I would like to try the latter but I am totally unsure how much difference this will make or whether it will make any difference at all.
Does anyone know how much performance increase I will get from using two arrays as opposed to one?
Any other advice on this scenario would be greatly appreciated.
Hi all, if I have a comma-delimited string and want to insert each delimited substring into a table, which of the following ways is faster?

1. Pass the whole string into a stored procedure, loop through the delimited string, pick out each substring, and insert it into the table; or
2. Loop on the caller's side and pass each substring into a stored procedure, inserting N times?

Or are there any better ways someone could suggest? Thanks!
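On the "better ways" point, a minimal SQL 2000-compatible sketch of the first option (procedure, table, and column names are hypothetical): split once inside the procedure with CHARINDEX, so there is a single call and N local inserts instead of N network round trips.

CREATE PROCEDURE dbo.InsertDelimited
    @List VARCHAR(8000)
AS
BEGIN
    DECLARE @Pos INT, @Item VARCHAR(100)
    SET @List = @List + ','                    -- sentinel so the last item is matched
    SET @Pos = CHARINDEX(',', @List)
    WHILE @Pos > 0
    BEGIN
        SET @Item = LEFT(@List, @Pos - 1)
        IF @Item <> ''
            INSERT INTO dbo.MyTable (Value) VALUES (@Item)
        SET @List = SUBSTRING(@List, @Pos + 1, 8000)
        SET @Pos = CHARINDEX(',', @List)
    END
END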
I was just wondering if this can be done any faster, code-wise that is...
Don't mind the converts; I can't do without them, as the data discipline of the source table isn't always reliable, while I have to be absolutely sure the destination data ends up in the required format.
I have a SQL script, but it runs slowly when it comes to huge record sets. How do I make it faster? I did create an index, but how do I make the query use the index? Please help me with this...
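One common reason an index goes unused, as a hedged sketch (table and column names are hypothetical): wrapping the indexed column in a function forces a scan, while a plain range predicate lets the optimizer seek.

-- Hard for the index (every row must be evaluated):
SELECT * FROM dbo.Orders WHERE YEAR(OrderDate) = 2014

-- Index-friendly (can seek on an OrderDate index):
SELECT * FROM dbo.Orders WHERE OrderDate >= '20140101' AND OrderDate < '20150101'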
I'm somewhat new to MS SQL Server and I'm wondering which of the following two queries would be faster:

DECLARE @ResidencesBuilt int
DECLARE @BarracksBuilt int
DECLARE @AirBaysBuilt int
DECLARE @NuclearPlantsBuilt int
DECLARE @FusionPlantsBuilt int
DECLARE @StarMinesBuilt int
DECLARE @TrainingCampsBuilt int
DECLARE @FactoriesBuilt int

SELECT
    @ResidencesBuilt = SUM(CASE WHEN BuildingType = 0 THEN Built END),
    @BarracksBuilt = SUM(CASE WHEN BuildingType = 1 THEN Built END),
    @AirBaysBuilt = SUM(CASE WHEN BuildingType = 2 THEN Built END),
    @NuclearPlantsBuilt = SUM(CASE WHEN BuildingType = 3 THEN Built END),
    @FusionPlantsBuilt = SUM(CASE WHEN BuildingType = 4 THEN Built END),
    @StarMinesBuilt = SUM(CASE WHEN BuildingType = 5 THEN Built END),
    @TrainingCampsBuilt = SUM(CASE WHEN BuildingType = 6 THEN Built END),
    @FactoriesBuilt = SUM(CASE WHEN BuildingType = 7 THEN Built END)
FROM Buildings
GROUP BY kdID
HAVING kdID = 2902

Or:

DECLARE @ResidencesBuilt int
DECLARE @BarracksBuilt int
DECLARE @AirBaysBuilt int
DECLARE @NuclearPlantsBuilt int
DECLARE @FusionPlantsBuilt int
DECLARE @StarMinesBuilt int
DECLARE @TrainingCampsBuilt int
DECLARE @FactoriesBuilt int

SET @ResidencesBuilt = (SELECT Built FROM Buildings WHERE BuildingType = 0 AND kdID = 2902)
SET @BarracksBuilt = (SELECT Built FROM Buildings WHERE BuildingType = 1 AND kdID = 2902)
SET @AirBaysBuilt = (SELECT Built FROM Buildings WHERE BuildingType = 2 AND kdID = 2902)
SET @NuclearPlantsBuilt = (SELECT Built FROM Buildings WHERE BuildingType = 3 AND kdID = 2902)
SET @FusionPlantsBuilt = (SELECT Built FROM Buildings WHERE BuildingType = 4 AND kdID = 2902)
SET @StarMinesBuilt = (SELECT Built FROM Buildings WHERE BuildingType = 5 AND kdID = 2902)
SET @TrainingCampsBuilt = (SELECT Built FROM Buildings WHERE BuildingType = 6 AND kdID = 2902)
SET @FactoriesBuilt = (SELECT Built FROM Buildings WHERE BuildingType = 7 AND kdID = 2902)

The data source is:

kdID  BuildingType  Built
2902  6             0
2902  7             0
2902  4             0
2902  0             80
2902  2             0
2902  1             5
2902  3             40
2902  5             10

Or:

CREATE TABLE [dbo].[Buildings] (
    [kdID] [int],
    [BuildingType] [tinyint],
    [Built] [int]
)

INSERT INTO Buildings (kdID, BuildingType, Built) VALUES (2902, 0, 80)
INSERT INTO Buildings (kdID, BuildingType, Built) VALUES (2902, 1, 5)
INSERT INTO Buildings (kdID, BuildingType, Built) VALUES (2902, 2, 0)
INSERT INTO Buildings (kdID, BuildingType, Built) VALUES (2902, 3, 40)
INSERT INTO Buildings (kdID, BuildingType, Built) VALUES (2902, 4, 0)
INSERT INTO Buildings (kdID, BuildingType, Built) VALUES (2902, 5, 10)
INSERT INTO Buildings (kdID, BuildingType, Built) VALUES (2902, 6, 0)
INSERT INTO Buildings (kdID, BuildingType, Built) VALUES (2902, 7, 0)

Analyzer says the first would be faster, but it has a lot of SUM()'s and whatnot, so I'm not too sure about this. There are also about 1000 rows in the actual Buildings table. This will be part of a stored procedure.
I want to know the # of users on our web site for each month in a given year. I'm looking for a faster way to do this--perhaps one that can leverage an index instead of reading the entire table! (My avg disk queue right now is above 7 and the query takes about 90 seconds).
Here's my current SP. Basically I'm calculating each month/year and using UNION to join them together, then pivot to rotate.
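A hedged sketch of the usual faster shape, assuming a hypothetical Visits table with an indexed VisitDate column and a UserID column: one sargable range scan and a single GROUP BY instead of twelve UNIONed queries.

SELECT DATEPART(yy, VisitDate) AS [Year],
       DATEPART(mm, VisitDate) AS [Month],
       COUNT(DISTINCT UserID)  AS Users
FROM dbo.Visits
WHERE VisitDate >= '20140101' AND VisitDate < '20150101'   -- keeps the index usable
GROUP BY DATEPART(yy, VisitDate), DATEPART(mm, VisitDate)
ORDER BY [Year], [Month]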
Hi all, I'm new to this forum and this is my first question. I have two pages on my web site: page 1 queries the database directly using a SqlDataSource, and page 2 queries through a BLL and then a DAL, following the steps in this tutorial (http://www.asp.net/learn/dataaccess/tutorial02vb.aspx?tabid=63). Page 1 runs a "LIKE" query search, while page 2 just displays some product details. Under normal circumstances you would expect page 1 to be the slower of the two; however, page 1 runs at thunder speed while page 2 takes 10 seconds to load. 10 seconds is really not acceptable. I really can't figure out what's happening: both pages use the same connection string, connecting through a DSN. How is the connection different between using a SqlDataSource and a DAL? Could someone please help? Thanks. P.S. I'm using a Pervasive database.
Hi y'all, I recently ran Profiler on my code and the following query took 7 seconds:

SELECT TOP 10 UI, COUNT(UI) AS Expr1
FROM table
WHERE (UI <> 'custom_welcome')
GROUP BY UI
ORDER BY COUNT(UI) DESC

Is it possible to rewrite this so my code gets faster? Or could it be due to the size of the table? Thanks in advance! I'll let you know how long your query takes :)
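A hedged guess at a fix, assuming there is no index on UI yet (the index name is made up): with an index on UI, the GROUP BY can aggregate from the narrow index instead of scanning and sorting the whole table.

CREATE INDEX IX_table_UI ON [table] (UI)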
I have a cursor procedure that is pretty slow because, as the cursor moves through the data, I run three SELECT statements against the same table to find other rows' information. Is there a better way to do this? A simplified example of the code:

DECLARE MyVariables 1 to X

DECLARE c1 CURSOR FOR
    SELECT MyData1, MyData2 to X
    FROM MyTable
    FOR UPDATE OF MyUpdateData

-- Start cursor
OPEN c1
FETCH NEXT FROM c1 INTO MyVariables

-- Loop
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Get other rows' data to add to this row's data
    -- GUESSING THIS IS THE SLOW PART as the table is LARGE
    SELECT MyVar1 = MyData1
    FROM MyTable
    WHERE MyTableColumns = MyVariables
      AND MyTableColumns2 <> MyVariables2    -- finds the other row (I have three of these)

    -- Calculate & update
    IF MyVariable = 'this or that'
    BEGIN
        UPDATE MyTable
        SET MyUpdateData = MyVar1 * x * y
        WHERE CURRENT OF c1
    END

    -- Next
    FETCH NEXT FROM c1 INTO MyVariables 1 to x
END

CLOSE c1
DEALLOCATE c1
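A hedged set-based sketch of what usually replaces this pattern (GroupColumn, RowType, and the literal 2 * 3 are hypothetical stand-ins for the real columns and x * y): one UPDATE with a self-join instead of a cursor plus per-row lookups.

UPDATE t
SET t.MyUpdateData = o.MyData1 * 2 * 3     -- stand-in for MyVar1 * x * y
FROM MyTable AS t
JOIN MyTable AS o
    ON o.GroupColumn = t.GroupColumn       -- same matching value as the inner SELECT
   AND o.RowType <> t.RowType              -- picks up the "other" row
WHERE t.MyData2 IN ('this', 'that')        -- the 'this or that' test

Each of the three lookups becomes one more join like the o alias, and the whole table is updated in a single statement.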
Hi! I'm basically an application developer and use simple SQL queries in my programming. I don't have much idea about the tuning/auditing side, and that's why I'm unable to answer such questions properly in my interviews. Can anybody give me some tips?
Question 1: A stored procedure contains one SELECT statement and then, depending upon @@ROWCOUNT, updates around 14000 records, also inside the same stored procedure. Instead of writing it this way, there is supposedly some other way which is faster. Can anybody tell me the correct way?
Question 2: Can anybody give me a few more examples like this? I need them desperately.
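On Question 1, a hedged guess at the expected answer (the table and filter below are made up for illustration): fold the SELECT's condition straight into one set-based UPDATE, so the 14000 rows are touched once instead of being selected first and then updated row by row.

UPDATE dbo.Orders
SET Status = 'Processed'
WHERE OrderDate < GETDATE() - 30    -- whatever the original SELECT filtered on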
I've created a class to make some standard transaction development a little bit faster. The destructor seems to run, but something makes this object slow down the database if the SqlTransaction and/or SqlConnection isn't manually handled with the Commit() method. Any ideas on how to handle the SqlTransaction and SqlConnection better?
public class DataTransaction
{
    private bool blnError = false;
    private ArrayList arrErrorList = new ArrayList();
    private SqlConnection objConnection = new SqlConnection(
        System.Configuration.ConfigurationSettings.AppSettings["ConnectionString"].ToString());
    private SqlTransaction objTransaction;
Previously I was executing 2 DTS packages one after the other, manually, and together they took a CONSIDERABLE time. The first one was pulling data from the OLTP system, doing transformations, and populating the tables in my data mart, and the second one was doing a FULL process of all the dimensions and cubes.
However, I tried scheduling the DTS packages as jobs and have then merged the 2 resulting jobs into a SINGLE job with 2 sequential steps. To my surprise, the resulting job takes less than half the time (actually even less) compared with my original approach, i.e. running the DTS packages directly. And I am talking about a major improvement in completion times here :)
Am I getting over-excited here, or is this natural? I assume that if this is correct, then jobs must be some sort of "compiled" version compared to DTS, and maybe that's why I see this terrific improvement in execution times.
I have rewritten a stored procedure that consists of a single select that selects from a view. Essentially I combined the select in the view and the select in the sp into one select. I am now trying to determine if the new version is faster.
The estimated execution plan gives a ratio of 96% : 4% in favour of the new version when I run them together in a query window, but when I try to time them I can't get a satisfactory result.
If I run each query once and display the difference between the start and end times, it shows 0. If I run each one 100, 200, etc. times, I get different results each time.
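A hedged sketch of a more reliable timing harness (test server only, since it flushes the caches; subtracting GETDATE() values is unreliable here because its resolution is only about 3 ms):

DBCC FREEPROCCACHE        -- start from a cold plan cache
DBCC DROPCLEANBUFFERS     -- start from a cold buffer pool
SET STATISTICS TIME ON    -- prints compile and elapsed/CPU times per statement
EXEC dbo.MyNewProc        -- hypothetical name for the rewritten procedure
SET STATISTICS TIME OFF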
If I had a WHERE clause that had to compare one string to another, would it be faster if I broke it down into three different, smaller searches?
I have a live database and an archive database. I update the archive tables once a day from the live tables using:
INSERT INTO arc_table
SELECT *
FROM cur_table AS cur
WHERE NOT EXISTS (SELECT * FROM arc_table AS arc WHERE arc.key = cur.key)
GO
This inserts newer records into the archive tables from the live tables.
I have two different methods to clean the live tables once a week but keep data from the previous week. Both methods have been verified to delete the same rows.
First method - deletes rows already archived and older than a week:

DELETE cur_table
WHERE EXISTS (SELECT key FROM arc_table AS arc WHERE arc.key = cur_table.key)
  AND date_time < GETDATE() - 7
GO
Second method modified from BOL - deletes identical rows
DELETE cur_table
FROM (SELECT key FROM arc_table) AS arc
WHERE arc.key = cur_table.key
  AND date_time < GETDATE() - 7
GO
I have read that "WHERE [NOT] EXISTS" is faster than "WHERE [NOT] IN", but this is the first time I have seen DELETE xx FROM (SELECT ----).
I'd like to know which procedure will be faster and/or better.
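For comparison, a hedged third variant of the same delete written as an explicit join; the optimizer typically produces the same plan as for the EXISTS form, which is easy to verify by comparing the execution plans.

DELETE cur
FROM cur_table AS cur
JOIN arc_table AS arc
    ON arc.key = cur.key
WHERE cur.date_time < GETDATE() - 7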
Hello, I need this to run much faster in MS SQL 2000.

User
    number (int)
    reportid (FK)

Report
    reportid (PK)
    Category (int)

SELECT A, B, C, D
INTO UserCopy
FROM User
WHERE User.reportid IN (SELECT MAX(report.reportID) AS maxReport
                        FROM Report
                        GROUP BY report.Category)
  AND user.number NOT IN (120,144,206,345,221,789,548,666,1204,4875,22,135,777,444)

The subquery (SELECT MAX(report.reportID) AS maxReport FROM Report GROUP BY report.Category) can return more than 1000 rows (the Report table has about 10,000 rows), and the User table has a few million rows. Report.ReportId is the primary key referenced by User.reportid (FK). At the moment it takes up to 3 minutes; I need it to run in 30 seconds maximum. Thank you for helping.
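A hedged rewrite sketch (assuming User.reportid is indexed; if it is not, that index is probably the first thing to add): materialize the per-category maximums once in a derived table and join to it. Since Report.reportID is the primary key, the maximums are distinct and the join cannot duplicate rows.

SELECT u.A, u.B, u.C, u.D
INTO UserCopy
FROM [User] AS u
JOIN (SELECT MAX(reportID) AS maxReport
      FROM Report
      GROUP BY Category) AS r
    ON u.reportid = r.maxReport
WHERE u.number NOT IN (120,144,206,345,221,789,548,666,1204,4875,22,135,777,444)

-- If the index is missing (hypothetical name):
-- CREATE INDEX IX_User_reportid ON [User] (reportid)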