SQL Server 2008 :: How To Improve Speed Of Initial Query Vice Subsequent Queries
Apr 23, 2015
I have a pretty large DB and a fairly complex query. If I drop buffers and clear cache the query runs in 20 seconds returning 25K rows. Subsequent runs are 2 seconds. Is this the result of the results being cached, execution being cached, other? Are there good ways to close the gap between the initial and later runs? Does the cache stay present until the service restarts or does SQL recycle the memory and if so, based on what criteria?
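The fast second run usually comes from both the buffer pool (cached data pages) and the plan cache (the compiled plan); both stay in memory until memory pressure evicts them or the service restarts. A hedged sketch for testing each cache's contribution, on a non-production system only, since these commands flush server-wide caches:
-- A test sketch only: clear one cache at a time and re-run the query to see
-- which one closes the 18-second gap.
DBCC FREEPROCCACHE;        -- drops compiled plans (forces recompilation)
CHECKPOINT;                -- flush dirty pages so the next command can drop them
DBCC DROPCLEANBUFFERS;     -- drops cached data pages (forces physical reads)
-- How much of each database is currently held in the buffer pool:
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;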
View 5 Replies
Oct 13, 2015
I have a table (F_POLICY_TRANSACTION). This table has a couple of million rows in it. I am using a column named POLICY_TRANSACTION_BKEY to select the records to delete (approximately 750k using the code below). This column has a non-clustered index applied. This is the code I have used:
WHILE 1 = 1
BEGIN
DELETE TOP(50000)
FROM F_POLICY_TRANSACTION with (tablockx)
[code]....
The problem is that it takes around 10 minutes to run. Is there any way it can be made more efficient? I have tried varying the row count with no success.
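Since the posted snippet is truncated, here is a hedged sketch of the usual batching pattern; the WHERE filter and the key value are illustrative assumptions, not the original code:
DECLARE @BatchSize int, @KeyCutoff bigint;
SET @BatchSize = 50000;
SET @KeyCutoff = 123456789;      -- illustrative value; the real filter is in the elided code

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM F_POLICY_TRANSACTION WITH (TABLOCKX)
    WHERE POLICY_TRANSACTION_BKEY <= @KeyCutoff;

    IF @@ROWCOUNT = 0 BREAK;     -- stop once nothing is left to delete

    CHECKPOINT;                  -- with SIMPLE recovery, lets log space be reused between batches
END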
View 9 Replies
View Related
Dec 18, 2007
Hi,
In ASP.NET, I use SQL Server to store records.
One table has 1 million records, but when I update a record it is very slow.
Is "create index" helpful for "update" operations?
I need help, thanks a lot.
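In general, an index on the columns in the UPDATE's WHERE clause speeds up locating the rows to modify, while indexes that contain the columns being changed must themselves be maintained and add write overhead. A minimal sketch with hypothetical table and column names:
-- Hypothetical names: Orders, CustomerId and Status are illustrative only.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId);

-- The index lets this UPDATE seek to the affected rows instead of scanning 1M rows.
UPDATE dbo.Orders
SET    Status = 'Closed'
WHERE  CustomerId = 42;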
View 4 Replies
View Related
Feb 7, 2007
I have a table which has around 132,000 rows with 24 columns. I use rda.Pull to download the data to a PDA. To query this data, I must create an index on 5 character columns.
The data download time is good enough, around 4 minutes, but it takes 12 minutes to create the index.
Please give me any ideas on how to improve the overall synchronization speed. Thanks!
View 6 Replies
View Related
Nov 15, 2006
I have found that sorting is very slow in my SQL (the SQL is very complicated, so I can't paste it on this page). How can I improve the speed of the sort?
Are there better methods?
Thanks
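Without the actual SQL this can only be a hedged, generic sketch: an index whose key order matches the ORDER BY lets SQL Server return rows already sorted instead of adding a Sort operator at run time (all names below are hypothetical):
-- Hypothetical table/columns; the point is that the index key order matches the ORDER BY.
CREATE NONCLUSTERED INDEX IX_Sales_Region_SaleDate
    ON dbo.Sales (Region, SaleDate DESC)
    INCLUDE (Amount);

SELECT Region, SaleDate, Amount
FROM   dbo.Sales
WHERE  Region = 'West'
ORDER BY SaleDate DESC;   -- can be satisfied by the index, no Sort operator needed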
View 4 Replies
View Related
Sep 12, 2011
We have installed a SQL Server 2008 R2 SP1 instance and it hosts SharePoint 2010 databases.
We have 2 dedicated drives for tempdb on the SAN with 50 GB of space. Both the tempdb data and log files were created with the default size. I would like to pre-size them.
What are the best values to start with?
U -> Tempdbdata, holding the tempdb.mdf file
V -> Tempdblog, holding the templog.ldf file
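The right sizes depend on the workload, but once values are chosen they can be set with ALTER DATABASE. The sizes below are placeholders, not recommendations, and assume the default logical file names tempdev/templog:
-- Placeholder sizes only; pick values based on observed tempdb usage.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 20480MB, FILEGROWTH = 1024MB);

ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, SIZE = 5120MB, FILEGROWTH = 512MB);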
View 9 Replies
View Related
Mar 18, 2015
I have an instance with 4 data files for tempdb, each set at an initial size of 4 GB and a growth rate of 100 MB. After some time the initial file sizes seem to have changed automatically; they now read 3962, 100, 3688 and 2847 respectively. Is this something done by SQL Server itself? I cannot imagine that it was done manually.
I don't think there was a restart after the initial sizes of 4G were set, could this be related to the problem?
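tempdb is re-created at every service start using the sizes stored in master, so a restart combined with autogrow (or a manual shrink) would explain the drift. A hedged way to compare the configured startup sizes with what the files currently are:
-- Configured (startup) sizes kept in master vs. the sizes tempdb is using right now.
SELECT m.name,
       m.size * 8 / 1024  AS configured_mb,
       d.size * 8 / 1024  AS current_mb
FROM sys.master_files AS m
JOIN tempdb.sys.database_files AS d ON d.file_id = m.file_id
WHERE m.database_id = DB_ID('tempdb');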
View 1 Replies
View Related
Jul 14, 2015
I have a query below which filters on the detail field in the #TempLogins table. The detail field is a text field which contains many types of text strings, some containing URLs that have parts like "ResultID=5", which is what is contained in the ResultIDSearch and ResultSetIDSearch fields. The records with entries like "ResultID=5" are the ones I'm trying to filter for.
The problem I have is that the query takes way too long to run. The TempLogin table has around 200 K records and the TempSearch table has around 80 K records.
select * from #TempLogins a where exists
(select 1 from #TempSearch t1 where
a.detail like '%' + t1.ResultIDSearch + '%'
or
a.detail like '%' + t1.ResultSetIDSearch + '%')
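One hedged idea, assuming the search values are complete tokens such as 'ResultID=5' that end at an '&' or at the end of the string: extract the token once per row and join on equality instead of running two wildcard LIKEs per row pair. The delimiter, the varchar(max) cast and the temp-table name are assumptions, and only the ResultID token is shown:
-- Parse the token once per login row (assumption: tokens end at '&' or end of string).
SELECT a.*,
       SUBSTRING(d.txt,
                 CHARINDEX('ResultID=', d.txt),
                 CHARINDEX('&', d.txt + '&', CHARINDEX('ResultID=', d.txt))
                     - CHARINDEX('ResultID=', d.txt)) AS ResultToken
INTO #TempTokens
FROM #TempLogins a
CROSS APPLY (SELECT CAST(a.detail AS varchar(max)) AS txt) d
WHERE CHARINDEX('ResultID=', d.txt) > 0;

-- An equality join can use a hash or merge join; the same pattern would be
-- repeated for the ResultSetID token.
SELECT t.*
FROM #TempTokens t
JOIN #TempSearch s ON s.ResultIDSearch = t.ResultToken;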
View 1 Replies
View Related
Feb 2, 2015
I'm looking to serialize some NVP data into an XML blob. I plan to put a primary xml index on the column, but my question is, would putting an XSD on the column speed up any queries, or would it just ensure format? I know that with selective xml indices, you can do things like specify the datatype associated with the xpath which can further optimize retrieval.
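In general an XML schema collection mainly validates the format, but typing the column also gives the engine type information, so values in the primary XML index are stored typed and some runtime conversions in XQuery predicates can be avoided. A hedged sketch with illustrative names throughout (NvpSchema, dbo.Settings, NvpXml):
CREATE XML SCHEMA COLLECTION NvpSchema AS
N'<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="nvp">
      <xs:complexType>
        <xs:attribute name="name"  type="xs:string" />
        <xs:attribute name="value" type="xs:int" />
      </xs:complexType>
    </xs:element>
  </xs:schema>';

-- Typing the column validates inserts and stores typed values in the XML index.
ALTER TABLE dbo.Settings
    ALTER COLUMN NvpXml xml(NvpSchema);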
View 1 Replies
View Related
Jun 5, 2008
I have a VB.NET page on which I need to display a list of employees who work in a specific office, based on a MatterID passed in a query string. But I don't know how to get a value returned from one SQL statement into a second. Here's what I'm trying to do...
From the QueryString, we know that the MatterID = 4 ( xxx.aspx?MatterID=4)
Knowing that the MatterID = 4, I query the database to get the OfficeID for that MID (Select OfficeID from tMatter where Mid=4), which returns an OfficeID of 6.
So, then I need to do another query to get the employees where OfficeID = 6 (Select EmployeeID from tEmployees where OfficeID = 6)
How do I do these in one query, or how do I use the Calculated Value for the OfficeID in the 2nd statement?
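Using the names from the post, either a join or a subquery collapses the two steps into a single statement; a sketch of both forms (the literal 4 would be a parameter in real code):
-- Join form
SELECT e.EmployeeID
FROM   tEmployees e
       JOIN tMatter m ON m.OfficeID = e.OfficeID
WHERE  m.Mid = 4;

-- Subquery form (equivalent here)
SELECT EmployeeID
FROM   tEmployees
WHERE  OfficeID = (SELECT OfficeID FROM tMatter WHERE Mid = 4);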
View 3 Replies
View Related
Oct 18, 2007
Hi,
I have several databases on a server (SQL Server 2000 only, no web server installed) and lately, as the company keeps growing, my users complain that the server gets slow (these DBs are well designed and receive optimizations, integrity checks, etc.). Because of this, I'm thinking about getting a new server to replace my old ProLiant ML 330, which was bought 4 years ago, but I'm concerned about which server architecture or characteristic can best help me improve response performance: is it disk speed, processor speed, or more RAM? I want to make a good decision, so I'd really appreciate your help...
Thanks, Luis Luevano
View 1 Replies
View Related
Apr 22, 2015
How do I find the top expensive queries on SQL Server 2008 Standard Edition?
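One common approach on 2008 (any edition) is the sys.dm_exec_query_stats DMV, which only covers plans still in cache since the last restart; a sketch ordered by total CPU:
SELECT TOP (20)
       qs.total_worker_time / 1000      AS total_cpu_ms,
       qs.total_elapsed_time / 1000     AS total_elapsed_ms,
       qs.execution_count,
       SUBSTRING(st.text,
                 qs.statement_start_offset / 2 + 1,
                 (CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset END
                  - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;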
View 9 Replies
View Related
Nov 13, 2007
I would like to change rows into columns and columns into rows for a query output table.
The query output is shown in the table below (the column names are in the first row).
A B C D E
a1 b1 c1 d1 e1
a2 b2 c2 d2 e2
a3 b3 c3 d3 e3
a4 b4 c4 d4 e4
a5 b5 c5 d5 e5
The table needs to be converted to the following (the rows become columns and the columns become rows).
A a1 a2 a3 a4 a5
B b1 b2 b3 b4 b5
C c1 c2 c3 c4 c5
D d1 d2 d3 d4 d5
E e1 e2 e3 e4 e5
Thanks
KK
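A hedged sketch of one way to do this in T-SQL (2005 or later) when the output is known to be 5 columns by 5 rows: add a row number, UNPIVOT the columns into name/value pairs, then PIVOT the row numbers back out as columns. MyQueryOutput stands in for the original query, and all columns are assumed to share a character type:
WITH numbered AS
(
    SELECT ROW_NUMBER() OVER (ORDER BY A) AS rn, A, B, C, D, E
    FROM MyQueryOutput                     -- stand-in for the original query
),
unpvt AS
(
    SELECT rn, ColName, ColValue
    FROM numbered
    UNPIVOT (ColValue FOR ColName IN (A, B, C, D, E)) AS u
)
SELECT ColName, [1], [2], [3], [4], [5]
FROM unpvt
PIVOT (MAX(ColValue) FOR rn IN ([1], [2], [3], [4], [5])) AS p
ORDER BY ColName;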
View 12 Replies
View Related
Jun 15, 2015
How can we get most frequent queries that are running against to a table in our database?
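A hedged approach using the plan cache (so it only sees statements cached since the last restart); the table-name filter is a plain text match, so replace MyTable with the real name:
SELECT TOP (20)
       qs.execution_count,
       st.text AS batch_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%MyTable%'          -- crude filter; also matches comments and similar names
ORDER BY qs.execution_count DESC;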
View 3 Replies
View Related
Oct 7, 2015
One of my current responsibilities is to export data to 3rd party vendors. Each export can contain many csv files. The exports are all different in terms of what data is being sent.
The way I currently have it set up, each file that needs to be created is a view. An SSIS package gets the data from the view, writes it to CSV, and then SFTPs it to the 3rd-party vendor. This seemed like a good idea at first because the columns are static but the calculations might change, so all I have to do is ALTER VIEW and I don't have to change anything in the package.
Is there a better way of doing this? I was curious to see what other people are doing. What makes it challenging is that all the exports are so different. If they were similar I could have created generic views that cover all the exports instead of each export having its own view. Eventually I'm going to have hundreds of views.
View 9 Replies
View Related
Apr 18, 2008
Are there any improvements in SQL Server 2008 backup methods, such as splitting backup files into manageable sizes, compressing the backups, and/or improving backup speed?
Though there are commercial tools available, it would be nice if the Microsoft SQL team could incorporate some of these core features for small and medium-sized businesses.
This is not a question but a suggestion.
Thanks
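For what it's worth, SQL Server 2008 did add native backup compression (Enterprise Edition in 2008; from 2008 R2 it is also in Standard), and striping a backup across multiple files has long been supported. A minimal example with illustrative paths:
BACKUP DATABASE MyDb
TO  DISK = N'D:\Backups\MyDb_1.bak',     -- striping across two files (paths illustrative)
    DISK = N'D:\Backups\MyDb_2.bak'
WITH COMPRESSION, INIT, STATS = 10;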
View 2 Replies
View Related
Jun 4, 2015
Here's the scenario. I have a table (let's call it MyTable) that consists of four fields: Id, Source, FirstField, and SecondField, where Source only takes one of two values: Source1 and Source2.
The records in this table look as follows:
I need to return, using 3 different T-SQL queries:
1) Products that exist only in Source2 (in red above)
2) Products that exist only in Source1 (in green above)
3) Products that exist both in Source1 and Source2 (in black above)
For 1) so far I've been doing something along the lines of
SELECT * FROM MyTable WHERE Source = 'Source1' AND FirstField NOT IN (SELECT DISTINCT FirstField FROM MyTable WHERE Source = 'Source2')
Not being a T-SQL expert myself, I'm wondering if this is the right or more efficient way to go. I have read about INTERSECT and EXCEPT, but I am a little unclear if they could be applied in this case out of the box.
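EXCEPT and INTERSECT do apply here if comparing on FirstField is enough; note that they return distinct values only, so to get the full rows you would join the result back to MyTable. A hedged sketch, assuming Source holds the literals 'Source1'/'Source2':
-- 1) FirstField values that exist only in Source2
SELECT FirstField FROM MyTable WHERE Source = 'Source2'
EXCEPT
SELECT FirstField FROM MyTable WHERE Source = 'Source1';

-- 2) Only in Source1: swap the two sides of the EXCEPT.

-- 3) FirstField values that exist in both sources
SELECT FirstField FROM MyTable WHERE Source = 'Source1'
INTERSECT
SELECT FirstField FROM MyTable WHERE Source = 'Source2';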
View 5 Replies
View Related
Jul 22, 2015
I have an intermittent issue where some remote PCs occasionally fail to execute select queries that have a join or return multiple result sets; however, simple single-table select queries continue to work okay. When it happens, the PC needs to be rebooted to work again. This may only happen on some PCs while others continue to work away okay.
I am using a VB6 application and ADO to connect to the database, and the error message I get when it fails to execute the query is a General Network Error, Server Not Found. I have run SQL Profiler on the server, and while simple select queries continue to run okay, a query with a join does not even seem to show up in Profiler. The program has been working fine for 15 years with thousands of users and has only now become an issue on one site for a number of users. I have tried moving the database to a different server and swapping network cards on the local PCs but can't seem to find the cause. The processor and the memory don't seem to be under load, but I am not sure if there is something else in SQL Server that is causing it to hang under certain conditions.
Network analysts have been in to run scans on the network, but I have not had the results back yet. Other applications do not seem to be affected, so I am not sure where to look next if this analysis does not show up anything.
View 5 Replies
View Related
Mar 30, 2015
Our monitoring tool shows that our production system is periodically experiencing a high rate of paging, up to 800 memory pages/sec. How can I find out which particular queries, stored procedures, or processes initiate this?
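Memory pages/sec is an OS-level counter, but the queries driving the most physical IO are a common first suspect; a hedged starting point is to rank cached plans by physical reads (only plans still in cache are visible):
SELECT TOP (20)
       qs.total_physical_reads,
       qs.total_logical_reads,
       qs.execution_count,
       st.text AS batch_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_physical_reads DESC;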
View 3 Replies
View Related
Nov 29, 2006
I have a pretty good DB server with four CPUs and no other load on it, but the following query takes 4ms to return. I use syscolumns this way quite often; I am not sure why it takes that long to return. Any ideas?
select 'master', id, colid, name, xtype, length, xprec, xscale, status
from [ablestatic].[dbo].syscolumns
where id = (select id from [ablestatic].[dbo].sysobjects where name = 'link_data_ezregs')
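An equivalent join form, for comparison; on SQL Server 2005 and later the sys.columns and sys.objects catalog views are the preferred replacements for syscolumns/sysobjects:
SELECT 'master', c.id, c.colid, c.name, c.xtype, c.length, c.xprec, c.xscale, c.status
FROM   [ablestatic].[dbo].syscolumns  AS c
       JOIN [ablestatic].[dbo].sysobjects AS o ON o.id = c.id
WHERE  o.name = 'link_data_ezregs';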
View 6 Replies
View Related
Aug 21, 2007
Hi,
I have this SQL query that can take a long time, up to 1 minute if the table contains over 1 million rows. And if the system is very active while executing this query, it can cause even more delays, I guess.
select
distinct 'CONV 1' as Conveyour,
info as Error,
(select top 1 substring(timecreated, 0, 7) from log b where a.info = b.info order by timecreated asc) as Date,
(select count(*) from log b where b.info = a.info) as 'Times occured'
from log a where loggroup = 'CSCNV' and logtype = 4
The table name is LOG, and I retrieve 4 columns: Conveyour, Error, Date and Times occured. The point of the subqueries is to count all distinct posts and to retrieve the date of the first time each post was logged. Also, a first and last date could be specified, but that is left out here.
Does anyone know how I can improve this SQL query?
Best /M
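A hedged rewrite that keeps the same results (the CSCNV/logtype filter only decides which info values appear, while the date and the count still cover every row with that info, as in the original correlated subqueries) but scans the table far fewer times:
SELECT 'CONV 1' AS Conveyour,
       b.info AS Error,
       SUBSTRING(MIN(b.timecreated), 0, 7) AS Date,
       COUNT(*) AS [Times occured]
FROM log b
WHERE b.info IN (SELECT a.info
                 FROM log a
                 WHERE a.loggroup = 'CSCNV' AND a.logtype = 4)
GROUP BY b.info;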
View 6 Replies
View Related
May 10, 1999
Hi all,
Does anyone know if it is possible to register SQL Server 7.0 in SQL Server 6.5 and vice versa?
I can't find the SQLOLE70.SQL file in SQL Server 7.0 installation disk.
Thanks in advance
View 1 Replies
View Related
Jul 28, 2014
Here is sample data I am working with:
Create table cattimelines (categoryID int, EffectiveDate datetime, CategoryValue varchar(11))
INSERT INTO cattimelines(categoryID, EffectiveDate, CategoryValue) VALUES(1000, '2014-01-01', 'A')
INSERT INTO cattimelines(categoryID, EffectiveDate, CategoryValue) VALUES(1000, '2014-02-01', 'B')
INSERT INTO cattimelines(categoryID, EffectiveDate, CategoryValue) VALUES(1000, '2014-04-01', 'C')
INSERT INTO cattimelines(categoryID, EffectiveDate, CategoryValue) VALUES(1000, '2014-07-01', 'A')
I need to calculate a term date for each record, which will be 1 day before the effective date of the next record, thus:
CategoryID  EffectiveDate  TermDate    CategoryValue
1000        2014-01-01     2014-01-31  A
1000        2014-02-01     2014-03-31  B
1000        2014-04-01     2014-06-30  C
1000        2014-07-01     NULL        A
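A hedged sketch using a correlated subquery, which works on 2008 (on SQL Server 2012 and later the LEAD() window function is the simpler option):
SELECT c.categoryID,
       c.EffectiveDate,
       DATEADD(day, -1,
               (SELECT MIN(n.EffectiveDate)
                FROM cattimelines n
                WHERE n.categoryID = c.categoryID
                  AND n.EffectiveDate > c.EffectiveDate)) AS TermDate,
       c.CategoryValue
FROM cattimelines c
ORDER BY c.categoryID, c.EffectiveDate;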
View 3 Replies
View Related
Jul 7, 2006
Aside from indexes, will it help if I use multiple filegroups to improve the time needed to query millions of records?
View 2 Replies
View Related
Jul 20, 2005
I have a table called work_order which has over 1 million records and a contractor table which has over 3,000 records. When I run this query it takes a long time, since it is grouping by contractor and doing multiple sub-SELECTs. Is there any way to improve the performance of this query?
SELECT ckey, cnam, t1.contractor_id, count(*) as tcnt,
(SELECT count(*) FROM work_order t2 WHERE t1.contractor_id = t2.contractor_id and rrstm = 1 and rcdt is NULL) as r1,
(SELECT count(*) FROM work_order t3 WHERE t1.contractor_id = t3.contractor_id and rrstm = 2 and rcdt is NULL) as r2,
(SELECT count(*) FROM work_order t4 WHERE t1.contractor_id = t4.contractor_id and rrstm = 3 and rcdt is NULL) as r3,
(SELECT count(*) FROM work_order t5 WHERE t1.contractor_id = t5.contractor_id and rrstm = 4 and rcdt is NULL) as r4,
(SELECT count(*) FROM work_order t6 WHERE t1.contractor_id = t6.contractor_id and rrstm = 5 and rcdt is NULL) as r5,
(SELECT count(*) FROM work_order t7 WHERE t1.contractor_id = t7.contractor_id and rrstm = 6 and rcdt is NULL) as r6,
(SELECT count(*) FROM work_order t8 WHERE t1.contractor_id = t8.contractor_id and rcdt is NULL) as open_count,
(SELECT count(*) FROM work_order t9 WHERE t1.contractor_id = t9.contractor_id and vendor_rec is not NULL) as Ack_count,
(SELECT count(*) FROM work_order t10 WHERE t1.contractor_id = t10.contractor_id and (rtyp is NULL or rtyp <> 'R') and rcdt is NULL) as open_norwo
FROM work_order t1, contractor
WHERE t1.contractor_id = contractor.contractor_id and contractor.tms_user_id is not NULL
GROUP BY ckey, cnam, t1.contractor_id
ORDER BY cnam
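A hedged rewrite: conditional aggregation (CASE inside SUM) computes all the counters in a single pass over work_order instead of ten correlated subqueries per contractor. It assumes ckey and cnam live on the contractor table; only some counters are shown and the rest follow the same pattern. The results should match because every subquery above correlates only on contractor_id:
SELECT c.ckey, c.cnam, w.contractor_id,
       COUNT(*) AS tcnt,
       SUM(CASE WHEN w.rrstm = 1 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r1,
       SUM(CASE WHEN w.rrstm = 2 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r2,
       -- ... r3 through r6 follow the same pattern ...
       SUM(CASE WHEN w.rcdt IS NULL THEN 1 ELSE 0 END) AS open_count,
       SUM(CASE WHEN w.vendor_rec IS NOT NULL THEN 1 ELSE 0 END) AS Ack_count,
       SUM(CASE WHEN (w.rtyp IS NULL OR w.rtyp <> 'R') AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS open_norwo
FROM work_order w
JOIN contractor c ON c.contractor_id = w.contractor_id
WHERE c.tms_user_id IS NOT NULL
GROUP BY c.ckey, c.cnam, w.contractor_id
ORDER BY c.cnam;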
View 2 Replies
View Related
Jul 20, 2005
Hey guys,
Here's my situation: I have a table called, let's say, 'Tree', as illustrated below:
Tree
====
TreeId (integer) (identity) not null
L1 (integer)
L2 (integer)
L3 (integer)
...
L10 (integer)
The combination of the values of L1 thru L10 is called a "Path", and the L1 thru L10 values are stored in a second table, let's say called 'Leaf':
Leaf
====
LeafId (integer) (identity) not null
LeafText varchar(2000)
Here's my problem: I need to look up a given keyword in each path of the Tree table and return each individual column for the paths that match the criteria. Here's the main idea of how I have this now:
SELECT TreeId, L1, L2, ..., L10,
       GetText(L1) + GetText(L2) + ... + GetText(L10) AS PathText
INTO #tmp
FROM Tree   -- GetText is a lookup function against the Leaf table
SELECT L1, GetText(L1), L2, GetText(L2), ..., L10, GetText(L10)
FROM #tmp a
WHERE CharIndex(@keyword, a.PathText) > 0
Does anyone know a better, smarter, more efficient way to accomplish this task? :)
Thanks,
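One hedged alternative, assuming the keyword never needs to span two adjacent leaf texts: search the Leaf table once and return any tree row that references a matching leaf in any of its ten level columns, which avoids materializing the concatenated path for every row:
SELECT t.*
FROM Tree t
WHERE EXISTS (SELECT 1
              FROM Leaf f
              WHERE CHARINDEX(@keyword, f.LeafText) > 0
                AND f.LeafId IN (t.L1, t.L2, t.L3, t.L4, t.L5,
                                 t.L6, t.L7, t.L8, t.L9, t.L10));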
View 1 Replies
View Related
Dec 14, 2007
SQL Experts,
I'm facing a performance issue with the following query...
The output of the following query is 184 records, and it takes 2 to 3 seconds to execute.
SELECT DISTINCT Column1 FROM Table1 (NOLOCK) WHERE Column1 NOT IN
(SELECT T1.Column1 FROM Table1 T1(NOLOCK) JOIN Table2 T2 (NOLOCK)
ON T2.Column2 = T1.Column2 WHERE T2.Column3= <Value>)
Data Info.
No of records in Table1 --> 1377366
No. of distinct records of Column1 in Table1 --> 33240
Is there any way the above query can be rewritten to improve performance so that it takes less than 1 second?
(I'm using DISTINCT because there are duplicate records of Column1 in Table1.)
Any of your help in this regard will be greately appreciated.
--
ash
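A hedged rewrite using NOT EXISTS, which typically compiles to an anti-semi join and, unlike NOT IN, is not derailed by NULLs coming back from the subquery; supporting indexes on Table1.Column2 and Table2 (Column3, Column2) are usually what buy the sub-second time:
SELECT DISTINCT a.Column1
FROM Table1 AS a
WHERE NOT EXISTS (SELECT 1
                  FROM Table1 AS t1
                       JOIN Table2 AS t2 ON t2.Column2 = t1.Column2
                  WHERE t2.Column3 = <Value>          -- placeholder kept from the original
                    AND t1.Column1 = a.Column1);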
View 7 Replies
View Related
Nov 8, 2006
Now I want to get results from the server tables, but I found it is very slow. For example:
select Coalesce(T1.Name, T2.Name, T3.Name), T1.M1, T2.M2, T3.M3
from T1
full outer join T2
on Coalesce(T1.Name, NULL) = T2.Name
full outer join T3
on Coalesce(T1.Name, T2.Name) = T3.Name
In the tables I have built an index on Name, but when every table has 20,000 records the SQL above is very slow. Is there another method to improve the query speed?
Thanks
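A hedged alternative, assuming Name is unique within each table: build the distinct name list once, then LEFT JOIN each table to it on a plain equality so the Name indexes can be used (the COALESCE inside the join conditions above makes those predicates non-sargable):
SELECT n.Name, t1.M1, t2.M2, t3.M3
FROM (SELECT Name FROM T1
      UNION                 -- UNION (not UNION ALL) removes duplicates
      SELECT Name FROM T2
      UNION
      SELECT Name FROM T3) AS n
LEFT JOIN T1 t1 ON t1.Name = n.Name
LEFT JOIN T2 t2 ON t2.Name = n.Name
LEFT JOIN T3 t3 ON t3.Name = n.Name;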
View 3 Replies
View Related
Jan 2, 2008
Hi,
I have a database D1 which contains 5 million users and another database D2 with 95k users.
I want to insert the common users into a new database D3, filtering on phone number, which is a unique value. Below is the structure of my tables in D1 and D2:
D1(database)
UserProfiles(Table)
UserId (Column Name)
UserProfiledata (Column Name)
D2 (database)
Alerts (Table)
PhoneNumbers (ColumnName - Unique)
Now userProfiles table contains data in string format as below:
User.state AA User.City CC User.Pin 1234 User.phonenumber 987654
So I am parsing each user with a cursor, writing the phone numbers into a temp table, and then querying database D2 to verify whether each phone number exists in the Alerts table of D2.
Can anyone please suggest how I can go about this, and also how to improve performance?
Thanks,
-Veera
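A hedged, set-based sketch that replaces the cursor: parse the phone number out of UserProfiledata in one pass and join straight to D2's Alerts table. The D3.dbo.CommonUsers target table, the three-part names on one instance, and the assumption that the phone number follows the literal 'User.phonenumber ' marker and runs to the end of the string are all illustrative:
INSERT INTO D3.dbo.CommonUsers (UserId, PhoneNumber)     -- hypothetical target table
SELECT u.UserId, p.PhoneNumber
FROM D1.dbo.UserProfiles AS u
CROSS APPLY (SELECT LTRIM(RTRIM(SUBSTRING(u.UserProfiledata,
                    CHARINDEX('User.phonenumber', u.UserProfiledata) + 17,  -- 16-char marker + 1 space
                    50))) AS PhoneNumber) AS p
WHERE CHARINDEX('User.phonenumber', u.UserProfiledata) > 0
  AND EXISTS (SELECT 1 FROM D2.dbo.Alerts AS a
              WHERE a.PhoneNumbers = p.PhoneNumber);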
View 3 Replies
View Related
Apr 26, 2007
Hi,
We have set up SQL Server 2000 32-bit (Publisher/Subscribers) replication across 5 different locations. We are planning to purchase SQL Server 2005 Enterprise Edition 64-bit and/or 32-bit. I need a suggestion: if we have both versions of 2005 Enterprise, can we set up replication between SQL Server 2005 64-bit and SQL Server 2005 32-bit? Is that supported?
Also, can we run replication with SQL Server 2005 64-bit as the Publisher and SQL Server 2000 32-bit as the Subscriber?
Publisher                    Subscriber
------------------------------------------------------
1- SQL Server 2005 64-bit    SQL Server 2005 32-bit
2- SQL Server 2005 64-bit    SQL Server 2000 32-bit
Shamshad Ali
shamshad_ali74@hotmail.com
shamshad_ali74@yahoo.com
View 3 Replies
View Related
Apr 10, 2007
I want to get results in SQL that are all written in UPPERCASE, but I want to receive them in Initial Case format.
I know UPPERCASE is UPPER
and lowercase is LOWER,
but what is Initial Case (the first letter of each word capitalized)?
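T-SQL has no built-in initial-caps function, only UPPER() and LOWER(), so the usual workaround is a small scalar UDF like the hedged sketch below (the function name is illustrative):
CREATE FUNCTION dbo.InitCap (@s varchar(4000))
RETURNS varchar(4000)
AS
BEGIN
    DECLARE @i int, @out varchar(4000), @prev char(1)
    SET @out  = LOWER(@s)
    SET @prev = ' '                         -- treat the start of the string as a word break
    SET @i    = 1
    WHILE @i <= LEN(@out)
    BEGIN
        IF @prev = ' '
            SET @out = STUFF(@out, @i, 1, UPPER(SUBSTRING(@out, @i, 1)))
        SET @prev = SUBSTRING(@out, @i, 1)
        SET @i = @i + 1
    END
    RETURN @out
END
-- Example: SELECT dbo.InitCap('HELLO SQL WORLD')  -- returns 'Hello Sql World'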
View 3 Replies
View Related
Dec 4, 2007
Afternoon,
I have a few Log Shipped DBs that are working great.
Currently they are set to fire off every 15 minutes 24/7.
My question is this ... I need to get FULL backups of the source DBs in order to restore them on certain Dev boxes.
If I were to execute the full backup on one of these Log Shipped DBs ... how would it affect the log shipping process?
Is there a special method to accomplish this?
As a side note, what would be some concerns/issues if, in order to create the FULL backups without interrupting log shipping, I were to create the backup using a 3rd-party tool like Quest LiteSpeed?
I sure wish we were on Enterprise, then I could create a mirror and then snapshot off it to create my backups BUT ... that is not the case as we stand today.
Thanks
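A normal full backup does not break the log backup chain that log shipping relies on, but taking it as COPY_ONLY is the safer habit because it also leaves the differential base untouched; a minimal example (database name and path illustrative):
BACKUP DATABASE MySourceDb
TO DISK = N'\\BackupShare\MySourceDb_copyonly.bak'
WITH COPY_ONLY, INIT, STATS = 10;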
View 9 Replies
View Related
Oct 22, 2004
Hi,
I have a SQL Server with 2 processors, 2 GB of memory and RAID 5,
and a 40 GB database (Pricing) working with a 3rd-party application.
In the Pricing database:
table A = 180,000 rows (20 columns and 5 indexes, including the clustered primary)
table B = 1,789,000 rows (25 columns and 6 indexes, including the clustered primary)
table C = 10,005 rows (15 columns and 4 indexes, including the clustered primary)
Users started complaining about poor performance when selecting data from tables A, B and C.
I used Profiler to capture all queries running against tables A, B and C
where the duration was more than 1 second.
Profiler showed 0 CPU and very high reads for all captured queries.
I ran INDEXDEFRAG on tables A, B and C,
then reran Profiler.
There were no changes at all in the Profiler readings:
the same low CPU and high reads.
If there are no changes in performance after INDEXDEFRAG, can I conclude the following:
everything has been done to optimize tables A, B and C, and the query code or table structure should be modified?
Or
could any other improvement be made without modifying code?
Thank you
Alex
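Two things worth trying before changing code, sketched with the SQL Server 2000-era commands that match the INDEXDEFRAG already used (table names are the poster's placeholders): INDEXDEFRAG does not rebuild statistics, so high reads with low CPU can simply mean the optimizer is working from stale statistics, or that the existing indexes do not cover the captured queries.
DBCC SHOWCONTIG ('A') WITH ALL_INDEXES, TABLERESULTS   -- measure fragmentation per index
UPDATE STATISTICS A WITH FULLSCAN                      -- refresh statistics without a rebuild
DBCC DBREINDEX ('Pricing.dbo.A')                       -- full rebuild: defragments and refreshes statistics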
View 10 Replies
View Related