How To Improve The Speed Of Sort?
Nov 15, 2006: I have found that the sort in my SQL query is very slow (the SQL is very complicated, so I can't paste it on this page). How can I improve the speed of the sort?
Are there better methods?
Thanks
Hi,
in ASP.NET I use SQL Server to store records.
One table has 1 million records, but when I update a record it is very slow.
Is "create index" helpful for an "update" operation?
I need help, thanks a lot.
I have a table which has around 132,000 rows with 24 columns. I use rda.Pull to download the data to a PDA. To query this data, I must create an index on 5 character columns.
The data download time is good enough, around 4 minutes, but it takes 12 minutes to create the index.
Please give me any ideas on how to improve the whole synchronization speed. Thanks!
I have a table (F_POLICY_TRANSACTION). This table has a couple of million rows in it. I am using a column named POLICY_TRANSACTION_BKEY to select records to delete (approximately 750k, using the code below). This column has a non-clustered index applied. This is the code I have used:
WHILE 1 = 1
BEGIN
DELETE TOP(50000)
FROM F_POLICY_TRANSACTION with (tablockx)
[code]....
The problem is, it takes around 10 minutes to run. Is there any way it can be made more efficient? I have tried varying the rowcount with no success.
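For anyone hitting the same thing, below is a minimal sketch of the batched-delete pattern being discussed, not the poster's actual code (the full snippet was elided above): it drops the TABLOCKX hint so other sessions can keep working, stops when nothing is left, and assumes the WHERE predicate is covered by the non-clustered index on POLICY_TRANSACTION_BKEY. The @KeyToDelete variable and its type are hypothetical placeholders for the real filter.

DECLARE @KeyToDelete int      -- hypothetical: whatever identifies the 750k rows in the real code
SET @KeyToDelete = 12345      -- placeholder value

WHILE 1 = 1
BEGIN
    DELETE TOP (50000)
    FROM dbo.F_POLICY_TRANSACTION
    WHERE POLICY_TRANSACTION_BKEY = @KeyToDelete   -- substitute the real predicate here

    IF @@ROWCOUNT = 0 BREAK    -- nothing left to delete

    CHECKPOINT                 -- in SIMPLE recovery this lets the log space be reused between batches
END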
I have a pretty large DB and a fairly complex query. If I drop buffers and clear the cache, the query runs in 20 seconds returning 25K rows. Subsequent runs are 2 seconds. Is this the result of the results being cached, the execution plan being cached, or something else? Are there good ways to close the gap between the initial and later runs? Does the cache stay present until the service restarts, or does SQL recycle the memory, and if so, based on what criteria?
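For reference, the usual way to force a cold run when comparing the two timings looks roughly like this; these DBCC commands are documented, but they flush caches server-wide, so run them only on a test box:

CHECKPOINT                  -- write dirty pages to disk first
DBCC DROPCLEANBUFFERS       -- empty the buffer pool (data cache), forcing physical reads on the next run
DBCC FREEPROCCACHE          -- drop cached execution plans, forcing a recompile

SET STATISTICS IO ON        -- then compare physical vs logical reads between the cold and warm runs
SET STATISTICS TIME ON

The first run after this reads pages from disk and compiles a plan; later runs reuse both, which is typically the whole 20 s vs 2 s gap. The buffer pool keeps pages until memory pressure or a restart evicts them; it is not tied to a fixed timeout.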
Hi,
I have several databases on a server (SQL Server 2000 only, no web server installed) and lately, as the company keeps growing, my users complain that the server is getting slow (these DBs are well designed and receive optimizations, integrity checks, etc.). Because of this, I'm thinking about getting a new server to replace my old ProLiant ML 330, which was bought 4 years ago, but I'm concerned about which server architecture or characteristic can best help me improve response performance: is it HD speed? Processor speed? Or more RAM? I want to make a good decision, so I'd really appreciate your help.
Thanks, Luis Luevano
I am trying to set sorting up on a DataGrid in ASP.NET 2.0. I have it working so that when you click on the column header, it sorts by that column; what I would like to do is set it up so that when you click the column header again it sorts on that field again, but in the opposite direction. I have it working using the following code in the stored procedure:
CASE WHEN @SortColumn = 'Field1' AND @SortOrder = 'DESC' THEN Convert(sql_variant, FileName) end DESC,
case when @SortColumn = 'Field1' AND @SortOrder = 'ASC' then Convert(sql_variant, FileName) end ASC,
case WHEN @SortColumn = 'Field2' and @SortOrder = 'DESC' THEN CONVERT(sql_variant, Convert(varchar(8000), FileDesc)) end DESC,
case when @SortColumn = 'Field2' and @SortOrder = 'ASC' then convert(sql_variant, convert(varchar(8000), FileDesc)) end ASC,
case when @SortColumn = 'VersionNotes' and @SortOrder = 'DESC' then convert(sql_variant, convert(varchar(8000), VersionNotes)) end DESC,
case when @SortColumn = 'VersionNotes' and @SortOrder = 'ASC' then convert(sql_variant, convert(varchar(8000), VersionNotes)) end ASC,
case WHEN @SortColumn = 'FileDataID' and @SortOrder = 'DESC' THEN CONVERT(sql_variant, FileDataID) end DESC,
case WHEN @SortColumn = 'FileDataID' and @SortOrder = 'ASC' THEN CONVERT(sql_variant, FileDataID) end ASC
And I gotta tell you, that is ugly code, in my opinion. What I am trying to do is something like this:
case when @SortColumn = 'Field1' then FileName end,
case when @SortColumn = 'FileDataID' then FileDataID end,
case when @SortColumn = 'Field2' then FileDesc
when @SortColumn = 'VersionNotes' then VersionNotes
end
case when @SortOrder = 'DESC' then DESC
when @SortOrder = 'ASC' then ASC
end
and it's not working at all. I get an error saying: Incorrect syntax near the keyword 'case'. When I put a comma after the end on line 5 I get: Incorrect syntax near the keyword 'DESC'. What am I missing here? Thanks in advance for any help. -Madrak
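The short answer is that ORDER BY ... ASC/DESC cannot itself be the result of a CASE expression, which is why the first (ugly) version needs one CASE per column-and-direction pair, plus sql_variant to keep the data types consistent. A shorter alternative, sketched below with the column names from the post, is to build the ORDER BY dynamically and run it through sp_executesql; the table name dbo.FileData and the select list are assumptions, and @SortColumn / @SortOrder are the same parameters already used in the procedure:

DECLARE @sql nvarchar(max)

IF @SortOrder NOT IN ('ASC', 'DESC')
    SET @SortOrder = 'ASC'                   -- never concatenate unchecked input into the string

SET @sql = N'SELECT FileName, FileDesc, VersionNotes, FileDataID
             FROM dbo.FileData               -- hypothetical table name
             ORDER BY ' +
           CASE @SortColumn                  -- whitelist the sortable columns
                WHEN 'Field1'       THEN N'FileName'
                WHEN 'Field2'       THEN N'FileDesc'
                WHEN 'VersionNotes' THEN N'VersionNotes'
                ELSE N'FileDataID'
           END + N' ' + @SortOrder

EXEC sp_executesql @sql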
I have the following SQL, which works, but I think it can be done more simply. I seem to have to group by multiple columns, but I am sure there must be a way of grouping the results by a single column. Any ideas?
Code:
SELECT count(order_items.order_id) as treenum, orders.order_id, orders.order_date,
orders.cust_order, orders.del_date, orders.confirmed, orders.del_addr
FROM orders, order_items
WHERE orders.order_id = order_items.order_id GROUP by orders.order_id, orders.order_date
, orders.cust_order, orders.del_date, orders.confirmed, orders.del_addr
ORDER BY orders.order_id DESC
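One way to group on the single key is to do the count in a derived table and join it back to orders, so only order_id is grouped; a sketch using the same tables and columns:

SELECT oi.treenum, o.order_id, o.order_date, o.cust_order, o.del_date, o.confirmed, o.del_addr
FROM orders o
INNER JOIN (
    SELECT order_id, COUNT(*) AS treenum    -- items per order, grouped on the key only
    FROM order_items
    GROUP BY order_id
) oi ON oi.order_id = o.order_id
ORDER BY o.order_id DESC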
Hello,
I have four different transactions such as the one below; I do one insert and one update in each transaction, and it seems slow and creates deadlocks with the user interface.
These transactions are performed against tables that users are accessing with another user interface. I have the following two questions:
1. T2.TextField1 and T2.TextField2 are Ok/Nok fields, so I did not put an index on them since they have only two distinct values. Should I put indexes on these fields?
2. Can I make this transaction let the user interface do its task when they access the same rows? I can start the transaction again, but I do not want the users to be disturbed.
BEGIN TRANSACTION pTrans
BEGIN
INSERT INTO T1
(fields)
SELECT (fields)
FROM T2 INNER JOIN View1 ON T2.TrID = View1.MyTableID
WHERE (T2.TextField1 = @TrType AND T2.TextField2 = @TextField2)
UPDATE T2
SET TextField2 = 'Ok', TextField2Date=@MyRunDateTime
FROM T2
WHERE (TextField1 = @TrType AND TextField2 = @TextField2)
IF @@ERROR <> 0
BEGIN
rollback transaction pTrans
return(-1)
END
ELSE
BEGIN
commit transaction pTrans
END
END
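On question 1, an index on a two-value flag alone is usually not worth much, but whether one helps here depends on how skewed the data is: if only a small fraction of T2 rows are in the state selected by (@TrType, @TextField2), a composite index lets both the INSERT...SELECT and the UPDATE find those rows directly and hold locks for less time, which also shrinks the deadlock window. A sketch, purely as an assumption about the schema:

-- Sketch only: composite index covering the filter used by both statements in the transaction.
-- Skip it if roughly half the table matches the filter, since a scan would happen anyway
-- and the index would just add write overhead.
CREATE NONCLUSTERED INDEX IX_T2_TextField1_TextField2
    ON dbo.T2 (TextField1, TextField2)
    INCLUDE (TrID, TextField2Date)           -- INCLUDE requires SQL Server 2005 or later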
I have a pretty good DB server with four CPUs and no other load on it, but the following query takes 4ms to return. I use syscolumns this way quite often, and I am not sure why it takes that long to return. Any ideas?
select 'master',id,colid,name,xtype,length,xprec,xscale,status from [ablestatic].[dbo].syscolumns where id=(select id from [ablestatic].[dbo].sysobjects where name='link_data_ezregs')
Hi, have a nice day to all. Can some expert here point out the dos and don'ts when writing queries? For example: should we create many views as handlers to get data? When should we use stored procedures and triggers? And so on.
I am seeking to improve my SQL skills; can someone suggest reference sites, material, or books as well?
Thank you very much for helping.
I wrote the following function a few years ago, before I learned about SQL's PATINDEX function. It might be possible to check for valid email address syntax with a single PATINDEX string, which could replace the entire body of the function below.
Is anyone interested in taking a crack at it?
Signed... lazy Sam
CREATE FUNCTION dbo.EmailIsValid (@Email varchar (100))
/*
RETURN 1 if @Email contains a valid email address syntax, ELSE RETURN 0
*/
RETURNS BIT
AS
BEGIN
DECLARE @atpos int, @dotpos int
SET @Email = LTRIM(RTRIM(IsNull(@Email, ''))) -- remove leading and trailing blanks
IF LEN(@Email) = 0 RETURN(0) -- nothing to validate
SET @atpos = charindex('@',@Email) -- position of first (hopefully only) @
IF @atpos <= 1 OR @atpos = LEN(@Email) RETURN(0) -- @ is neither 1st or last or missing
IF CHARINDEX('@', @email, @atpos+1) > 0 RETURN(0) -- Two @s are illegal
IF CHARINDEX(' ',@Email) > 0 RETURN(0) -- Embedded blanks are illegal
SET @dotpos = CHARINDEX('.',REVERSE(@Email)) -- location (from rear) of last dot
IF (@dotpos < 3) or (@dotpos > 4) or (LEN(@Email) - @dotpos) < @atpos RETURN (0) -- last dot must come after the @ and be followed by 2 or 3 characters
RETURN(1) -- Whew !!
END
Go
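A single pattern cannot reproduce every rule above (for example the exact 2-or-3-character rule after the last dot, which still needs the REVERSE/CHARINDEX trick), but a commonly used approximation combines one LIKE pattern with a PATINDEX check for a second @. A sketch, offered as a looser stand-in rather than an exact replacement:

CREATE FUNCTION dbo.EmailIsValid2 (@Email varchar (100))
RETURNS BIT
AS
BEGIN
    SET @Email = LTRIM(RTRIM(ISNULL(@Email, '')))

    IF @Email LIKE '%_@__%.__%'              -- something before @, something between @ and the dot, 2+ chars after the last dot
       AND @Email NOT LIKE '% %'             -- no embedded blanks
       AND PATINDEX('%@%@%', @Email) = 0     -- at most one @
        RETURN 1

    RETURN 0
END
Go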
Hi,
I have this SQL query that can take a long time, up to 1 minute, if the table contains over 1 million rows. And if the system is very active while executing this query, it can cause further delays, I guess.
select
distinct 'CONV 1' as Conveyour,
info as Error,
(select top 1 substring(timecreated, 0, 7) from log b where a.info = b.info order by timecreated asc) as Date,
(select count(*) from log b where b.info = a.info) as 'Times occured'
from log a where loggroup = 'CSCNV' and logtype = 4
The table name is LOG, and I retrieve 4 columns: Conveyour, Error, Date and Times occured. The point of the subqueries is to count every distinct post and to retrieve the date of the first time the post was logged. A first and last date could also be specified, but that is left out here.
Does anyone know how I can improve this SQL query?
Best /M
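The two correlated subqueries can usually be folded into one GROUP BY over the matching info values, so each error is aggregated in a single pass instead of being re-scanned per row. A sketch with the same table and columns; note that, like the original subqueries, the count and the first date are taken over all log rows with that info (not only loggroup 'CSCNV' / logtype 4), and it assumes timecreated sorts correctly as a string, just as the original ORDER BY does:

SELECT 'CONV 1' AS Conveyour,
       f.info   AS Error,
       SUBSTRING(MIN(l.timecreated), 0, 7) AS Date,   -- earliest occurrence, same substring as the original
       COUNT(*) AS 'Times occured'
FROM (SELECT DISTINCT info FROM log WHERE loggroup = 'CSCNV' AND logtype = 4) f
INNER JOIN log l ON l.info = f.info
GROUP BY f.info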
Hi,
We have a poorly performing SQL 2000 DB. I have just defragged the hard disk (not the indexes; those are done daily via SQL Agent) holding the data files on our server and have not found any improvement in response.
I have now started using SQL Profiler to analyse the server performance. In the results the trace is returning there are some huge (REALLY BIG) values in the Duration and CPU columns, but these rows have no TextData value returned (i.e. it is null).
Why is this? For these rows, the Reads and Writes columns are also high.
If these rows are what is taking the CPUs' time, how can I identify what the server is doing, so that I can make changes?
Any thoughts on what other values I might trace, or what action I can take to find the cause of the slowdown?
In Performance Monitor the processors (dual Xeons) rarely drop below 60%.
Thanks in advance,
fatherjack
Aside from indexes, will it help if I use multiple filegroups to improve the time needed to query millions of records?
Is there any way to improve this statement and make it shorter? Any input will be appreciated.
update TempAddressParsingTable set ad_unit = case
when right(rtrim(ad_str1),3) like 'apt' or
right(rtrim(ad_str1),3) like 'lot'
and substring(reverse(ad_str1), 4,1) in ('', ',', '.')
then right(ad_str1,3)
when right(rtrim(ad_str1),4) like 'unit' or
right(rtrim(ad_str1),4) like 'apt%' or
right(rtrim(ad_str1),4) like 'lot%'
and substring(reverse(ad_str1), 5,1) in ('', ',', '.')
then right(ad_str1,4)
when right(rtrim(ad_str1),5) like 'unit%' or
right(rtrim(ad_str1),5) like 'apt%%' or
right(rtrim(ad_str1),5) like 'lot%%'
and substring(reverse(ad_str1), 6,1) in ('', ',', '.')
then right(ad_str1,5)
when right(rtrim(ad_str1),6) like 'unit%%' or
right(rtrim(ad_str1),6) like 'apt%%%' or
right(rtrim(ad_str1),6) like 'lot%%%'
and substring(reverse(ad_str1), 7,1) in ('', ',', '.')
then right(ad_str1,6)
when right(rtrim(ad_str1),7) like 'unit%%%' or
right(rtrim(ad_str1),7) like 'apt%%%%' or
right(rtrim(ad_str1),7) like 'lot%%%%'
and substring(reverse(ad_str1), 8,1) in ('', ',', '.')
then right(ad_str1,7)
when right(rtrim(ad_str1),8) like 'unit%%%%' or
right(rtrim(ad_str1),8) like 'apt%%%%%'
and substring(reverse(ad_str1), 9,1) in ('', ',', '.')
then right(ad_str1,8)
when right(rtrim(ad_str1),9) like 'unit%%%%%' or
right(rtrim(ad_str1),9) like 'apt%%%%%%'
and substring(reverse(ad_str1), 10,1) in ('', ',', '.')
then right(ad_str1,9)
when right(rtrim(ad_str1), 2) like '#%'
and substring(reverse(ad_str1), 3, 1) in ('', ',', '.')
then right(ad_str1, 2)
when right(rtrim(ad_str1), 3) like '#%%'
and substring(reverse(ad_str1), 4, 1) in ('', ',', '.')
then right(ad_str1, 3)
when right(rtrim(ad_str1), 4) like '#%%%'
and substring(reverse(ad_str1), 5, 1) in ('', ',', '.')
then right(ad_str1, 4)
when right(rtrim(ad_str1), 5) like '#%%%%'
and substring(reverse(ad_str1), 6, 1) in ('', ',', '.')
then right(ad_str1, 5)
else null
end
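An exact one-liner is hard because the original also checks the delimiter in front of the token, but most of the repetition can be removed by computing the trailing fragment once and letting PATINDEX find the unit keyword inside it. The sketch below (SQL Server 2005 or later, because of CROSS APPLY) is an approximation to verify against sample data before replacing anything; it looks for apt/lot/unit/# in the last 10 characters, preceded by a space, comma, or period, and keeps everything from the keyword to the end, which is what the long CASE does:

UPDATE tap
SET ad_unit = CASE
                  WHEN PATINDEX('%[ ,.]unit%', x.tail) > 0
                       THEN LTRIM(SUBSTRING(x.tail, PATINDEX('%[ ,.]unit%', x.tail) + 1, 11))
                  WHEN PATINDEX('%[ ,.]apt%', x.tail) > 0
                       THEN LTRIM(SUBSTRING(x.tail, PATINDEX('%[ ,.]apt%', x.tail) + 1, 11))
                  WHEN PATINDEX('%[ ,.]lot%', x.tail) > 0
                       THEN LTRIM(SUBSTRING(x.tail, PATINDEX('%[ ,.]lot%', x.tail) + 1, 11))
                  WHEN PATINDEX('%[ ,.]#%', x.tail) > 0
                       THEN LTRIM(SUBSTRING(x.tail, PATINDEX('%[ ,.]#%', x.tail) + 1, 11))
                  ELSE NULL
              END
FROM TempAddressParsingTable tap
CROSS APPLY (SELECT ' ' + RIGHT(RTRIM(tap.ad_str1), 10) AS tail) x
-- the leading space makes the delimiter test work when the keyword sits at the very start of the fragment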
Dear Experts,
I'm a DBA working for a product-based company.
We are implementing our product for a client with a very heavy OLTP workload.
Our reports team is facing a problem (error: all the reports are timing out). Though the queries are written properly, each query is taking several minutes.
I ran the command DBCC DROPCLEANBUFFERS
and the time immediately dropped to 10 sec.
Now my question is:
please suggest the DBCC commands, or any other DBA commands, I can use to improve the performance of the application for my reports team.
Thanks in advance.
Vinod
Hi All,
From your experience with SQL 2005, is there any free software that can help improve performance or help identify performance bottlenecks? Two examples of the performance help I usually use are a maintenance plan that does (check DB > reorganize index > rebuild index > update statistics) and the SQL 2005 Dashboard reports.
Do you have any other free tools or tips for performance, or anything that I must have on my SQL 2005 servers?
Thx
I have a table called work_order which has over 1 million records and a contractor table which has over 3000 records. When I run this query it takes a long time, since it is grouping by contractor and doing multiple sub-SELECTs. Is there any way to improve the performance of this query?
-------------------------------------------------
SELECT ckey, cnam, t1.contractor_id, count(*) as tcnt,
(SELECT count(*) FROM work_order t2 WHERE t1.contractor_id=t2.contractor_id and rrstm=1 and rcdt is NULL) as r1,
(SELECT count(*) FROM work_order t3 WHERE t1.contractor_id=t3.contractor_id and rrstm=2 and rcdt is NULL) as r2,
(SELECT count(*) FROM work_order t4 WHERE t1.contractor_id=t4.contractor_id and rrstm=3 and rcdt is NULL) as r3,
(SELECT count(*) FROM work_order t5 WHERE t1.contractor_id=t5.contractor_id and rrstm=4 and rcdt is NULL) as r4,
(SELECT count(*) FROM work_order t6 WHERE t1.contractor_id=t6.contractor_id and rrstm=5 and rcdt is NULL) as r5,
(SELECT count(*) FROM work_order t7 WHERE t1.contractor_id=t7.contractor_id and rrstm=6 and rcdt is NULL) as r6,
(SELECT count(*) FROM work_order t8 WHERE t1.contractor_id=t8.contractor_id and rcdt is NULL) as open_count,
(SELECT count(*) FROM work_order t9 WHERE t1.contractor_id=t9.contractor_id and vendor_rec is not NULL) as Ack_count,
(SELECT count(*) FROM work_order t10 WHERE t1.contractor_id=t10.contractor_id and (rtyp is NULL or rtyp<>'R') and rcdt is NULL) as open_norwo
FROM work_order t1, contractor
WHERE t1.contractor_id=contractor.contractor_id and contractor.tms_user_id is not NULL
GROUP BY ckey, cnam, t1.contractor_id
ORDER BY cnam
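The per-contractor subqueries can usually be collapsed into a single pass over work_order using conditional aggregation, so the big table is read once instead of ten times per group; a sketch with the same tables and columns, worth checking against the original output:

SELECT c.ckey, c.cnam, w.contractor_id,
       COUNT(*) AS tcnt,
       SUM(CASE WHEN w.rrstm = 1 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r1,
       SUM(CASE WHEN w.rrstm = 2 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r2,
       SUM(CASE WHEN w.rrstm = 3 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r3,
       SUM(CASE WHEN w.rrstm = 4 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r4,
       SUM(CASE WHEN w.rrstm = 5 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r5,
       SUM(CASE WHEN w.rrstm = 6 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r6,
       SUM(CASE WHEN w.rcdt IS NULL THEN 1 ELSE 0 END) AS open_count,
       SUM(CASE WHEN w.vendor_rec IS NOT NULL THEN 1 ELSE 0 END) AS Ack_count,
       SUM(CASE WHEN (w.rtyp IS NULL OR w.rtyp <> 'R') AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS open_norwo
FROM work_order w
INNER JOIN contractor c ON w.contractor_id = c.contractor_id
WHERE c.tms_user_id IS NOT NULL
GROUP BY c.ckey, c.cnam, w.contractor_id
ORDER BY c.cnam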
Hey guys, here's my situation: I have a table, let's say it's called 'Tree', as illustrated below:
Tree
====
TreeId (integer)(identity) not null
L1 (integer)
L2 (integer)
L3 (integer)
...
L10 (integer)
The combination of the values of L1 thru L10 is called a "Path", and the L1 thru L10 values are stored in a second table, let's say called 'Leaf':
Leaf
====
LeafId (integer)(identity) not null
LeatText varchar(2000)
Here's my problem: I need to look up a given keyword in each path of the Tree table, and return each individual column for the paths that match the criteria. Here's the main idea of how I have this now:
SELECT TreeId, L1, L2, ..., L10, GetText(L1) + GetText(L2) as L2text + ... + GetText(L10) AS PathText
INTO #tmp FROM Tree //GetText is a lookup function for the Leaf table
SELECT L1, GetText(L1), L2, GetText(L2), ..., L10, GetText(L10) FROM #tmp WHERE
CharIndex(@keyword, a.pathtext) > 0
Does anyone know a better, smarter, more efficient way to accomplish this task? :)
Thanks,
Hello,
I would like to know how I can replace a "not in" clause, keeping the same results, in order to improve the performance of my SQL statement.
I am working with SQL Server 2005.
Thanks,
jmota
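Since the post gives no table names, here is the general shape of the rewrite with hypothetical tables A and B: NOT IN can normally be replaced by NOT EXISTS or by a LEFT JOIN ... IS NULL anti-join, which the optimizer often handles better and which also avoids the surprise of NOT IN returning nothing when the subquery column contains a NULL:

-- Original shape:
-- SELECT a.* FROM A a WHERE a.key_col NOT IN (SELECT b.key_col FROM B b)

-- NOT EXISTS version (hypothetical tables and columns):
SELECT a.*
FROM A a
WHERE NOT EXISTS (SELECT 1 FROM B b WHERE b.key_col = a.key_col)

-- LEFT JOIN version, equivalent when key_col is unique in B:
SELECT a.*
FROM A a
LEFT JOIN B b ON b.key_col = a.key_col
WHERE b.key_col IS NULL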
SQL Experts,
I'm facing a performance issue with the following query.
The output of the following query is 184 records, and it takes 2 to 3 seconds to execute.
SELECT DISTINCT Column1 FROM Table1 (NOLOCK) WHERE Column1 NOT IN
(SELECT T1.Column1 FROM Table1 T1(NOLOCK) JOIN Table2 T2 (NOLOCK)
ON T2.Column2 = T1.Column2 WHERE T2.Column3= <Value>)
Data Info.
No of records in Table1 --> 1377366
No. of distinct records of Column1 in Table1 --> 33240
Is there any way the above query can be rewritten to improve the performance, so that it takes less than 1 second?
(I'm using DISTINCT because there are duplicate records of Column1 in Table1.)
Any help in this regard will be greatly appreciated.
--
ash
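One rewrite worth testing replaces the NOT IN with an anti-join and applies DISTINCT only once at the end; same tables and columns as above, and it assumes Column1 coming out of the subquery is never NULL (with NOT IN a NULL there would empty the result):

SELECT DISTINCT t.Column1
FROM Table1 t WITH (NOLOCK)
LEFT JOIN (
    SELECT T1.Column1
    FROM Table1 T1 WITH (NOLOCK)
    INNER JOIN Table2 T2 WITH (NOLOCK) ON T2.Column2 = T1.Column2
    WHERE T2.Column3 = <Value>          -- placeholder kept from the post
) x ON x.Column1 = t.Column1
WHERE x.Column1 IS NULL

An index on Table1 (Column2) INCLUDE (Column1) and one on Table2 (Column3, Column2) would also be worth checking, since they cover both sides of the inner join.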
I am not an expert in either SSIS or VFP technology, but I know enough to get my way around. One anomaly I discovered I thought was worth sharing for all those concerned with getting large amounts of data out of VFP in as short a time as possible. When you search for performance tips in relation to SSIS, the advice is never to use "Table or view" from the data access mode list in the OLE DB source, as this effectively translates to SELECT * FROM table, and I've never come across anything to contradict this. Well, I am contradicting it, and let me explain why:
When you use SQL command as the data access mode in the OLE DB source (where the OLE DB source is a FoxPro DBC) and you write out select column1, column2, etc. from table a, and then connect that to a destination (in my case an OLE DB destination), the SSIS task spends ages stuck on Pre-execute before anything happens (the bigger the FoxPro table, the longer the wait). What is happening behind the scenes is that the FoxPro engine (assuming it is the FoxPro engine and not the SQL engine; either way I don't think it matters too much) is executing the SQL command and then writing the results to a tmp (temp) file in your local temp folder (in my case: C:\Documents and Settings\autologin\Local Settings\Temp\1). These files take up gigs of space, and it is only when this process is complete that the SSIS task actually finishes the Pre-execute and starts the data transfer process. I couldn't understand a) why my packages were stuck on Pre-execute for such long times, and b) why the tmp files were being created and why they were so big.
If you change the data access mode from SQL command to Table or view and then select your table from the list, the SSIS task, when executed, kicks off immediately and does not get stuck on Pre-execute or create any tmp files, so you save time and disk space. The difference in time is immense, and if, like me, you were really frustrated with poor performance when extracting from VFP, now you know why.
Btw, maybe this does not apply to all versions of VFP, but it certainly does to v7.
Now I want to get results from the server tables, but I found it is very slow. For example:
select Coalesce(T1.Name, T2.Name, T3.Name), T1.M1, T2.M2, T3.M3
from T1
full outer join T2
on Coalesce(T1.Name, NULL) = T2.Name
full outer join T3
on Coalesce(T1.Name, T2.Name) = T3.Name
In the tables I have built an index on Name, but when every table has 20,000 records the SQL above is very slow. Is there another method to improve the query speed?
Thanks
Hi,
I have a database D1 which contains 5 million users and another database D2 with 95k users.
I want to insert the common users into a new database D3, based on a filter which is the phone number, a unique value. Below is the structure of my tables in D1 and D2:
D1(database)
UserProfiles(Table)
UserId (Column Name)
UserProfiledata (Column Name)
D2 (database)
Alerts (Table)
PhoneNumbers (ColumnName - Unique)
Now the UserProfiles table contains data as a string, like this:
User.state AA User.City CC User.Pin 1234 User.phonenumber 987654
So I am parsing each user with a cursor, writing the phone numbers into a temp table, and then I want to query the D2 database to verify whether each phone number exists in the Alerts table of D2.
Can anyone please suggest how I can go about this, and also help me improve the performance?
Thanks,
-Veera
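Once the phone numbers have been parsed out (into the temp table already mentioned), the cross-database check itself can be a single set-based join instead of a per-row lookup inside the cursor. A sketch using the names from the post; the temp-table layout and the destination table in D3 are assumptions:

-- Assumed: #ParsedPhones(UserId, PhoneNumber) holds the numbers parsed from the UserProfiles table in D1,
-- and D3 has a table CommonUsers(UserId, PhoneNumber) to receive the matches.
CREATE INDEX IX_ParsedPhones_PhoneNumber ON #ParsedPhones (PhoneNumber)

INSERT INTO D3.dbo.CommonUsers (UserId, PhoneNumber)
SELECT p.UserId, p.PhoneNumber
FROM #ParsedPhones p
INNER JOIN D2.dbo.Alerts a
        ON a.PhoneNumbers = p.PhoneNumber    -- PhoneNumbers is unique in Alerts, so the join cannot multiply rows

Parsing the phone number out of UserProfiledata with string functions in one set-based INSERT into #ParsedPhones (rather than a cursor) is usually the bigger win, if the format is as regular as the sample line suggests.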
-- Search orders which meet @empDefId
ALTER PROCEDURE SearchOrders
@empDefId nvarchar(50)
AS
declare @ctmId int
select @ctmId=customerid from customers where employeedefineid=@empDefId
select * from producedetails where orderid in
(select orderid from orders where customerid=@ctmId) -- this query executes three times!
order by orderid
select * from producecautions where orderid in
(select orderid from orders where customerid=@ctmId)
order by orderid
select * from orders where customerid=@ctmId order by orderid
RETURN
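The orders lookup for @ctmId is repeated in each statement; capturing the order ids once (a table variable here) and joining to them lets SQL Server do that work a single time. A sketch of how the procedure body could look; it assumes orderid is an int, so adjust the type to match the real schema:

ALTER PROCEDURE SearchOrders
@empDefId nvarchar(50)
AS
declare @ctmId int
select @ctmId=customerid from customers where employeedefineid=@empDefId

declare @orderIds table (orderid int primary key)   -- assumed type; filled once, reused below
insert into @orderIds (orderid)
select orderid from orders where customerid=@ctmId

select pd.* from producedetails pd
inner join @orderIds o on o.orderid = pd.orderid
order by pd.orderid

select pc.* from producecautions pc
inner join @orderIds o on o.orderid = pc.orderid
order by pc.orderid

select * from orders where customerid=@ctmId order by orderid
RETURN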
I submit an SSIS package with a data flow task via the DTEXEC command on a W2K3 SP4 server with SQL 2005 EE SP1 installed, several times. The package sometimes finishes the task successfully but sometimes fails.
I would like to know how to improve the stability of running the package. The server is not busy with other tasks, and the resources should be enough to run my task. It is strange.
Thanks.
Hello,
I have the following view SQL ( LATESTRECALL_v, for communication purposes ):
SELECT REC.RECKEY, REC.RECSTM, REC.RECUID, REC.RECREC, REC.RECLNN, REC.RECCCD, REC.RECSKU, REC.RECSKD, REC.RECPEC,
REC.RECPRL, REC.RECQTY, REC.RECTGQ, REC.RECTAG, REC.RECASN, REC.RECRLC, REC.RECPLC, REC.RECNTS, REC.RECRST,
REC.RECLOT, REC.RECARF, REC.RECVEN, REC.RECIBO, REC.RECSRN, REC.RECPAR, REC.RECINS, REC.RECSDTR, REC.RECDEA,
REC.RECUOI, REC.RECFPL, REC.RECTDA, REC.RECUMN, REC.RECRQT, REC.RECCID, REC.RECCTN, REC.RECFPC, REC.RECEPU,
REC.RECSCQ, REC.RECLOG, REC.RECSTS, REC.RECTDV, REC.RECSHDR, REC.RECRRN, REC.RECWDF, REC.RECWWF, REC.RECSLT,
REC.RECSQT, REC.RECSTG, REC.RECRTG, REC.RECPSN, REC.RECIFL, REC.RECEXPR, REC.RECMFDR, REC.RECMFGR, REC.RECCNO,
REC.RECSC1, REC.RECSC2, REC.RECSC3, REC.RECSC4, REC.RECSC5, REC.RECSC6, REC.RECSC7, REC.RECSC8, REC.RECSC9,
REC.RECSD1, REC.RECST1, REC.RECSD2, REC.RECST2, REC.RECSD3, REC.RECST3, REC.RECSQ1, REC.RECSQ2, REC.RECSQ3,
REC.RECSQ4, REC.RECSQ5, REC.RECSQ6, REC.RECSDU1, REC.RECSDU2, REC.RECRRC, REC.RECICS, REC.RECJOBS
FROM dbo.REC, dbo.LATESTRECKEY_V
WHERE REC.RECKEY = LATESTRECKEY_V.LATEST_RECKEY
Note that it refers to view LATESTRECKEY_V, which is defined below ( the SQL ):
SELECT REC.RECSKU, REC.RECPRL, max(REC.RECKEY) AS latest_reckey
FROM dbo.REC, dbo.LATESTREC_V
WHERE
REC.RECSKU = LATESTREC_V.RECSKU AND
REC.RECPRL = LATESTREC_V.RECPRL AND
sysdb.ssma_oracle.to_char_date(REC.RECSDTR, 'YYYYMMDD') + REC.RECSTM = LATESTREC_V.LATEST_DATE_TIME
GROUP BY REC.RECSKU, REC.RECPRL
Note that it refers to view LATESTREC_V, which is defined below ( the SQL ):
SELECT REC.RECSKU, REC.RECPRL, max(sysdb.ssma_oracle.to_char_date(REC.RECSDTR, 'YYYYMMDD') + REC.RECSTM) AS latest_date_time
FROM dbo.REC
GROUP BY REC.RECSKU, REC.RECPRL
Now, there already is a key on the REC table on columns RECSKU and RECPRL. Adding that index sped things up quite a bit, but this setup is still very inefficient. I cannot apply the indexed view concept to LATESTRECKEY_V or LATESTREC_V because they use "max". I suppose I could apply it to the first view, but since that view depends on the other 2 views, I am not convinced it will improve my performance. Any ideas on improving such a situation? Thanks for your time.
Tony
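On SQL Server 2005 and later, the whole latest-record-per-(RECSKU, RECPRL) chain can often be collapsed into one pass with ROW_NUMBER(), which avoids the nested MAX views entirely. A sketch; it assumes that taking the row with the greatest date+time expression, with RECKEY as the tiebreaker, matches what the three views together are meant to return:

SELECT latest.*
FROM (
    SELECT REC.*,
           ROW_NUMBER() OVER (
               PARTITION BY REC.RECSKU, REC.RECPRL
               ORDER BY sysdb.ssma_oracle.to_char_date(REC.RECSDTR, 'YYYYMMDD') + REC.RECSTM DESC,
                        REC.RECKEY DESC
           ) AS rn
    FROM dbo.REC
) latest
WHERE latest.rn = 1

Storing a real datetime column (or a computed, indexed column) for the RECSDTR + RECSTM combination would also let an index satisfy the ordering instead of evaluating the conversion function per row.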
Hi everyone
I need a solution for this query. It works fine with 2 tables, but when there are thousands of records in each table and the query has more than 2 tables, the process never ends.
Here is the query
(select siqPid= 1007, t1.Gmt909Time as GmtTime,(t1.engValue+t2.engValue+t3.engValue+t4.engValue) as EngValue,
t1.Loc1Time as locTime,t1.msgId
into #temp5
from #temp1 as t1,#temp2 as t2,#temp3 as t3,#temp4 as t4
where t1.Loc1Time = t2.Loc1Time and t2.Loc1Time = t3.Loc1Time and t3.Loc1Time = t4.Loc1Time)
I was trying to do something else with this query,
but the engValues can't be summed up that way, and if I add that to the query, it doesn't compile.
(select siqPid= 1007, t1.Gmt909Time as GmtTime,
t1.Loc1Time as locTime,t1.msgId,(t1.engValue+t2.engValue+t3.engValue+t4.engValue) as engValue
--into #temp5
from #temp1 as t1
where exists
(Select 1
from #temp2 as t2
where t1.Loc1Time = t2.Loc1Time and
exists
(Select 1
from #temp3 as t3
where t2.Loc1Time = t3.Loc1Time and
exists
(Select 1
from #temp4 as t4
where t3.Loc1Time = t4.Loc1Time))))
I need immediate help on this; I would appreciate any input.
Thanks
-Sarah
Should I add an Identity field (Identity=True) and a row version field (timestamp) to my table, and avoid spreading tables across different databases? Is this true in general?
Hi,
I have
a SQL Server with 2 processors, 2 GB memory and RAID 5,
and a 40 GB db (Pricing) working with a 3rd-party application.
In db Pricing:
table A = 180000 rows (20 columns and 5 indexes including clustered primary)
table B = 1789000 rows (25 columns and 6 indexes including clustered primary)
table C = 10005 rows (15 columns and 4 indexes including clustered primary)
Users started complaining about poor performance when selecting data from tables A, B and C.
I used Profiler to capture all queries running against tables A, B and C
where the duration was more than 1 second.
Profiler shows 0 CPU and very high reads for all captured queries.
I ran INDEXDEFRAG on tables A, B and C,
then reran Profiler.
No change at all in the Profiler readings:
the same low CPU and high reads.
If there is no change in performance after INDEXDEFRAG, can I conclude the following:
everything has been done to optimize tables A, B and C, and the query code or table structure should be modified?
Or
could any other improvement be made without modifying code?
Thank you
Alex
Intel P4 3.0Ghz dual core, 512MB DDR memory, and EIDE HDD
15.4GB mdf file with 19 million rows.
How can I lessen the time to get a result from a query against this database without upgrading hardware components?
Indexes, tuning, multiple filegroups, etc.? Btw, this table will no longer get additional rows.
Thanks.