How To Improve The Efficiency Of A SQL Query?
Nov 8, 2006
I want to get results from the server tables, but I found the query is very slow. For example:
select Coalesce(T1.Name, T2.Name, T3.Name), T1.M1, T2.M2, T3.M3
from T1
full outer join T2
on Coalesce(T1.Name, NULL) = T2.Name
full outer join T3
on Coalesce(T1.Name, T2.Name) = T3.Name
I have built an index on Name in each table, but when every table has 20,000 records, the SQL above is very slow. Is there another method to improve the query speed?
Thks
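A possible rewrite, as a sketch only (it assumes Name is the indexed join key in all three tables): COALESCE(T1.Name, NULL) is just T1.Name, and wrapping a join key in COALESCE can prevent index seeks, so joining T2 on the bare column and keeping COALESCE only where it is really needed may help.
Code:
-- Sketch: drop the redundant COALESCE on the first join; the second join
-- keeps COALESCE(T1.Name, T2.Name) so the results stay the same.
SELECT COALESCE(T1.Name, T2.Name, T3.Name) AS Name, T1.M1, T2.M2, T3.M3
FROM T1
FULL OUTER JOIN T2 ON T1.Name = T2.Name
FULL OUTER JOIN T3 ON COALESCE(T1.Name, T2.Name) = T3.Name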
View 3 Replies
Nov 29, 2006
I have a pretty good DB server with four CPUs and no other load on it, but the following query takes 4 ms to return. I query syscolumns this way quite often, and I am not sure why it takes that long to return. Any idea?
select 'master',id,colid,name,xtype,length,xprec,xscale,status from [ablestatic].[dbo].syscolumns where id=(select id from [ablestatic].[dbo].sysobjects where name='link_data_ezregs')
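For comparison, the same lookup written as a join, as a sketch (the results should be identical; on SQL 2005 the INFORMATION_SCHEMA.COLUMNS view is another option for this kind of metadata query):
Code:
-- Sketch: join syscolumns to sysobjects instead of using a subquery.
SELECT 'master', c.id, c.colid, c.name, c.xtype, c.length, c.xprec, c.xscale, c.status
FROM [ablestatic].[dbo].syscolumns AS c
JOIN [ablestatic].[dbo].sysobjects AS o ON o.id = c.id
WHERE o.name = 'link_data_ezregs'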
View 6 Replies
View Related
Aug 21, 2007
Hi,
I have this SQL query that can take too long, up to 1 minute if the table contains over 1 million rows. And if the system is very active while executing this query, it can cause further delays, I guess.
select
distinct 'CONV 1' as Conveyour,
info as Error,
(select top 1 substring(timecreated, 0, 7) from log b where a.info = b.info order by timecreated asc) as Date,
(select count(*) from log b where b.info = a.info) as 'Times occured'
from log a where loggroup = 'CSCNV' and logtype = 4
The table name is LOG, and I retrieve 4 columns: Conveyour, Error, Date and Times occured. The point of the subqueries is to count all distinct posts and to retrieve the date of the first time the post was logged. Also, a first and last date could be specified, but that is left out here.
Does anyone know how I can improve this SQL query?
Best /M
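A grouped rewrite that avoids the two correlated subqueries per row, as a sketch: note that the original subqueries count and date every row with the same info across the whole table, while this version only considers the CSCNV / logtype 4 rows, so if that difference matters the counts would need to come from a separate grouped query joined back on info.
Code:
-- Sketch: one pass over LOG with GROUP BY instead of per-row subqueries.
SELECT 'CONV 1'                          AS Conveyour,
       info                              AS Error,
       SUBSTRING(MIN(timecreated), 0, 7) AS [Date],
       COUNT(*)                          AS [Times occured]
FROM log
WHERE loggroup = 'CSCNV' AND logtype = 4
GROUP BY info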
View 6 Replies
View Related
Jul 7, 2006
Aside from indexes, will it help if I use multiple filegroups to improve the time needed to query millions of records?
View 2 Replies
View Related
Jul 20, 2005
I have a table called work_order which has over 1 million records, and a contractor table which has over 3000 records. When I run this query it takes a long time, since it is grouping by contractor and doing multiple sub-SELECTs. Is there any way to improve the performance of this query?
-------------------------------------------------
SELECT ckey, cnam, t1.contractor_id, count(*) as tcnt,
(SELECT count(*) FROM work_order t2 WHERE t1.contractor_id=t2.contractor_id and rrstm=1 and rcdt is NULL) as r1,
(SELECT count(*) FROM work_order t3 WHERE t1.contractor_id=t3.contractor_id and rrstm=2 and rcdt is NULL) as r2,
(SELECT count(*) FROM work_order t4 WHERE t1.contractor_id=t4.contractor_id and rrstm=3 and rcdt is NULL) as r3,
(SELECT count(*) FROM work_order t5 WHERE t1.contractor_id=t5.contractor_id and rrstm=4 and rcdt is NULL) as r4,
(SELECT count(*) FROM work_order t6 WHERE t1.contractor_id=t6.contractor_id and rrstm=5 and rcdt is NULL) as r5,
(SELECT count(*) FROM work_order t7 WHERE t1.contractor_id=t7.contractor_id and rrstm=6 and rcdt is NULL) as r6,
(SELECT count(*) FROM work_order t8 WHERE t1.contractor_id=t8.contractor_id and rcdt is NULL) as open_count,
(SELECT count(*) FROM work_order t9 WHERE t1.contractor_id=t9.contractor_id and vendor_rec is not NULL) as Ack_count,
(SELECT count(*) FROM work_order t10 WHERE t1.contractor_id=t10.contractor_id and (rtyp is NULL or rtyp<>'R') and rcdt is NULL) as open_norwo
FROM work_order t1, contractor
WHERE t1.contractor_id=contractor.contractor_id and contractor.tms_user_id is not NULL
GROUP BY ckey, cnam, t1.contractor_id
ORDER BY cnam
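One way to avoid scanning work_order once per sub-SELECT is conditional aggregation: join contractor to work_order once and turn each sub-SELECT into a SUM(CASE ...). A sketch (it assumes ckey and cnam come from the contractor table, and that each correlated count is per contractor over all of that contractor's work orders, as in the original):
Code:
-- Sketch: one pass over work_order instead of nine correlated sub-SELECTs.
SELECT c.ckey, c.cnam, w.contractor_id,
       COUNT(*) AS tcnt,
       SUM(CASE WHEN w.rrstm = 1 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r1,
       SUM(CASE WHEN w.rrstm = 2 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r2,
       SUM(CASE WHEN w.rrstm = 3 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r3,
       SUM(CASE WHEN w.rrstm = 4 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r4,
       SUM(CASE WHEN w.rrstm = 5 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r5,
       SUM(CASE WHEN w.rrstm = 6 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r6,
       SUM(CASE WHEN w.rcdt IS NULL THEN 1 ELSE 0 END) AS open_count,
       SUM(CASE WHEN w.vendor_rec IS NOT NULL THEN 1 ELSE 0 END) AS Ack_count,
       SUM(CASE WHEN (w.rtyp IS NULL OR w.rtyp <> 'R') AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS open_norwo
FROM work_order w
JOIN contractor c ON w.contractor_id = c.contractor_id
WHERE c.tms_user_id IS NOT NULL
GROUP BY c.ckey, c.cnam, w.contractor_id
ORDER BY c.cnam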
View 2 Replies
View Related
Jul 20, 2005
Hey guys,
Here's my situation: I have a table, let's say it's called Tree, as illustrated below:
Tree
====
TreeId (integer)(identity) not null
L1 (integer)
L2 (integer)
L3 (integer)
...
L10 (integer)
The combination of the values of L1 thru L10 is called a "Path", and the L1 thru L10 values are stored in a second table, let's say it's called Leaf:
Leaf
====
LeafId (integer)(identity) not null
LeafText varchar(2000)
Here's my problem: I need to look up a given keyword in each path of the Tree table, and return each individual column for the paths that match the criteria. Here's the main idea of how I have this now:
SELECT TreeId, L1, L2, ..., L10,
GetText(L1) + GetText(L2) + ... + GetText(L10) AS PathText
INTO #tmp FROM Tree -- GetText is a lookup function for the Leaf table
SELECT L1, GetText(L1), L2, GetText(L2), ..., L10, GetText(L10) FROM #tmp
WHERE CharIndex(@keyword, PathText) > 0
Does anyone know a better, smarter, more efficient way to accomplish this task? :)
Thks,
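One alternative, as a sketch (it assumes L1 through L10 hold Leaf.LeafId values and that the keyword never spans two leaves, which the concatenated PathText would allow): search the Leaf text once and join back to Tree, instead of calling the lookup function ten times per row.
Code:
-- Sketch: find matching leaves first, then the trees that reference them.
SELECT DISTINCT t.TreeId,
       t.L1, t.L2, t.L3, t.L4, t.L5, t.L6, t.L7, t.L8, t.L9, t.L10
FROM Tree AS t
JOIN Leaf AS lf
  ON lf.LeafId IN (t.L1, t.L2, t.L3, t.L4, t.L5, t.L6, t.L7, t.L8, t.L9, t.L10)
WHERE CHARINDEX(@keyword, lf.LeafText) > 0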
View 1 Replies
View Related
Dec 14, 2007
SQL Experts,
I'm facing a performance issue with the following query...
The output of the following query is 184 records, and it takes 2 to 3 seconds to execute.
SELECT DISTINCT Column1 FROM Table1 (NOLOCK) WHERE Column1 NOT IN
(SELECT T1.Column1 FROM Table1 T1(NOLOCK) JOIN Table2 T2 (NOLOCK)
ON T2.Column2 = T1.Column2 WHERE T2.Column3= <Value>)
Data Info.
No of records in Table1 --> 1377366
No. of distinct records of Column1 in Table1 --> 33240
Is there any way the above query can be rewritten to improve the performance, so that it takes less than 1 second?
(I'm using DISTINCT because there are duplicate records of Column1 in Table1.)
Any help in this regard will be greatly appreciated.
--
ash
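A NOT EXISTS form of the same query, as a sketch (NOT EXISTS avoids the NULL pitfalls of NOT IN and often gets an anti-semi-join plan; composite indexes on Table1 (Column2, Column1) and Table2 (Column3, Column2) would support it):
Code:
-- Sketch: anti-join instead of NOT IN over a subquery.
SELECT DISTINCT a.Column1
FROM Table1 AS a WITH (NOLOCK)
WHERE NOT EXISTS
      (SELECT 1
       FROM Table1 AS t1 WITH (NOLOCK)
       JOIN Table2 AS t2 WITH (NOLOCK) ON t2.Column2 = t1.Column2
       WHERE t2.Column3 = <Value>
         AND t1.Column1 = a.Column1)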
View 7 Replies
View Related
Jan 2, 2008
Hi,
I have a database D1 which contains 5 million users, and another database D2 with 95k users.
I want to insert the common users into a new database D3, filtering on phone number, which is a unique value. Below is the structure of my tables in D1 and D2:
D1(database)
UserProfiles(Table)
UserId (Column Name)
UserProfiledata (Column Name)
D2 (database)
Alerts (Table)
PhoneNumbers (ColumnName - Unique)
Now the UserProfiles table contains data in string format, as below:
User.state AA User.City CC User.Pin 1234 User.phonenumber 987654
So I am parsing each user with a cursor, writing the phone numbers into a temp table, and then querying the D2 database to verify whether each phone number exists in the Alerts table of D2.
Can anyone please suggest how I can go about this, and also help me improve the performance?
Thanks,
-Veera
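A set-based sketch that avoids the row-by-row cursor (it assumes the phone number always follows the token 'User.phonenumber ' and runs to the end of the string, that the three databases are on the same server, and a hypothetical target table D3.dbo.CommonUsers; the parsing expression would need adjusting to the real format):
Code:
-- Sketch: extract phone numbers in one statement, then keep only the matches.
INSERT INTO D3.dbo.CommonUsers (UserId, PhoneNumber)
SELECT p.UserId, p.PhoneNumber
FROM (
    SELECT UserId,
           LTRIM(RTRIM(SUBSTRING(UserProfiledata,
                 CHARINDEX('User.phonenumber', UserProfiledata) + LEN('User.phonenumber') + 1,
                 8000))) AS PhoneNumber
    FROM D1.dbo.UserProfiles
    WHERE CHARINDEX('User.phonenumber', UserProfiledata) > 0
) AS p
JOIN D2.dbo.Alerts AS a ON a.PhoneNumbers = p.PhoneNumber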
View 3 Replies
View Related
Oct 22, 2004
Hi ,
I have
SQL Server with 2 processors, 2 GB memory and RAID 5,
a 40 GB db (Pricing) working with a 3rd-party application.
In db Pricing:
table A = 180,000 rows (20 columns and 5 indexes, including the clustered primary)
table B = 1,789,000 rows (25 columns and 6 indexes, including the clustered primary)
table C = 10,005 rows (15 columns and 4 indexes, including the clustered primary)
Users started complaining about poor performance when selecting data from tables A, B and C.
I used Profiler to capture all queries running against tables A, B and C
where the duration was more than 1 second.
Profiler shows 0 CPU and very high reads for all captured queries.
I ran INDEXDEFRAG on tables A, B and C,
then reran Profiler.
No change at all in the Profiler readings:
same low CPU and high reads.
If there is no change in performance after INDEXDEFRAG, can I conclude the following:
everything has been done to optimize tables A, B and C, and the query code or table structure should be modified?
Or
could any other improvement be made without modifying the code?
Thank you
Alex
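One way to confirm whether the high reads come from table scans rather than fragmentation, as a sketch (the table name 'A' is a placeholder): check page counts and scan density with DBCC SHOWCONTIG, and rerun one of the captured queries with I/O statistics on to see which table drives the reads. If scan density looks fine but reads stay high, the likely fix is a better index or a query change rather than more defragmentation.
Code:
-- Sketch: check fragmentation and page counts for one table.
DBCC SHOWCONTIG ('A') WITH ALL_INDEXES

-- Then rerun one captured query with per-table I/O reporting.
SET STATISTICS IO ON
-- <paste one of the captured slow queries here>
SET STATISTICS IO OFF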
View 10 Replies
View Related
Jul 2, 2006
Intel P4 3.0Ghz dual core, 512MB DDR memory, and EIDE HDD
15.4GB mdf file with 19 million rows.
How can I lessen the time to get a result from a query against this database without upgrading hardware components?
Index, tuning, multiple filegroups, etc.? Btw, this database will no longer get additional rows.
THanks.
View 6 Replies
View Related
Jun 9, 2004
I have a view which uses a UNION of two tables. The first table has 1.5 million records and the second one has 40,000 records. When I query the view with a column (that is indexed in both tables) in the where clause, it takes 3 minutes to give the result. The column is of the DateTime data type. Any ideas on how to improve the query performance?
TIA
-XLDB
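One thing worth checking, as a sketch (it assumes the two tables cannot return identical rows, so duplicate elimination is unnecessary): UNION applies a DISTINCT over the combined result, which can force a large sort, while UNION ALL skips that step. The table and column names below are placeholders for the real view definition.
Code:
-- Sketch: UNION ALL avoids the implicit DISTINCT sort that UNION adds.
CREATE VIEW dbo.CombinedView
AS
SELECT KeyCol, EventDate FROM dbo.BigTable1
UNION ALL
SELECT KeyCol, EventDate FROM dbo.SmallTable2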
View 14 Replies
View Related
Feb 28, 2005
Hello,
I have the following setup and I would appreciate any help in improving
the performance of the query.
BigTable:
Column1 (indexed)
Column2 (indexed)
Column3 (no index)
Column4 (no index)
select
[time] =
CASE
when BT.Column3 = 'value1' then DateAdd(...)
when BT.Column3 in ('value2', 'value3') then DateAdd(...)
END,
Duration =
CASE
when BT.Column3 = 'value1' then DateDiff(...)
when BT.Column3 in ('value2', 'value3') then DateDiff(ss,
BT.OrigTime, (select TOP 1 X.OrigTime from BigTable X where X.Column1 >
BT.Column1 and X.Column3 <> 'value4' order by X.Column1 ))
END,
FROM
BigTable BT where BT.Column3 = 'value1' OR (BT.Column3 in ('value2',
'value3') and BT.Column4 <> (select X.Column4 from BigTable X where
X.Column1 = BT.Column1 and X.Column3 = 'Value1'))
Apart from the above, there are a few more columns which are just part of the select list and are not in any conditions.
BigTable has around 1 million records and the response time is very poor;
it takes around 3 minutes to retrieve the records (which would be around 500K).
With the Statistics ON,
I get the following:
Table 'BigTable'. Scan count 2, logical reads 44184, physical reads 0,
read-ahead reads 0.
Table 'WorkTable'. Scan count 541221, logical reads 4873218, physical
reads 0, read-ahead reads 0.
Is there any way to increase the performance, so that I can get the
result under 1 minute?
Any help would be appreciated.
P.S.: I tried indexing Column3, but saw no improvement.
-SR
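One index worth trying, as a sketch (it assumes Column1 is the key the correlated subqueries seek on, as the query suggests): composite indexes covering the Column3 filter and the Column1/Column4 lookups let the "next OrigTime" and "matching value1 row" subqueries seek instead of driving the worktable that Statistics IO shows being scanned 541,221 times.
Code:
-- Sketch: indexes to support the two correlated subqueries.
CREATE INDEX IX_BigTable_Column1_Column3 ON BigTable (Column1, Column3)
CREATE INDEX IX_BigTable_Column3_Column1 ON BigTable (Column3, Column1, Column4)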
View 8 Replies
View Related
Apr 23, 2015
I have a pretty large DB and a fairly complex query. If I drop buffers and clear the cache, the query runs in 20 seconds and returns 25K rows. Subsequent runs take 2 seconds. Is this the result of the results being cached, the execution plan being cached, or something else? Are there good ways to close the gap between the initial and later runs? Does the cache stay present until the service restarts, or does SQL recycle the memory, and if so, based on what criteria?
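A way to separate the two effects, as a sketch (these commands flush caches server-wide, so they belong on a test system only): clear the buffer pool and the plan cache independently and time the query after each.
Code:
-- Sketch: time the query cold vs. warm (test server only).
CHECKPOINT                  -- flush dirty pages so they can be dropped
DBCC DROPCLEANBUFFERS       -- cold data cache, plan cache still warm
-- run the query and note the duration

DBCC FREEPROCCACHE          -- cold plan cache, data cache still warm
-- run the query again and compare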
View 5 Replies
View Related
Mar 9, 2008
I have the following SQL, which works, but I think it can be made simpler. I seem to have to group it by multiple columns, but I am sure there must be a way of grouping the results by a single column. Any ideas?
Code:
SELECT count(order_items.order_id) as treenum, orders.order_id, orders.order_date,
orders.cust_order, orders.del_date, orders.confirmed, orders.del_addr
FROM orders, order_items
WHERE orders.order_id = order_items.order_id GROUP by orders.order_id, orders.order_date
, orders.cust_order, orders.del_date, orders.confirmed, orders.del_addr
ORDER BY orders.order_id DESC
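A sketch that groups only on order_id by counting the items in a derived table and joining back to orders (the result should match the original as long as order_id is the key of orders):
Code:
-- Sketch: count items per order first, so nothing else needs grouping.
SELECT ic.treenum, o.order_id, o.order_date,
       o.cust_order, o.del_date, o.confirmed, o.del_addr
FROM orders AS o
JOIN (SELECT order_id, COUNT(*) AS treenum
      FROM order_items
      GROUP BY order_id) AS ic
  ON ic.order_id = o.order_id
ORDER BY o.order_id DESC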
View 2 Replies
View Related
Jul 12, 2006
Hello,
I have four different transactions like the one below; I do one insert and one update in each transaction, and it seems to be slow and to create deadlocks with the user interface.
These transactions are performed against tables that users are accessing through another user interface. I have the following two questions:
1. T2.TextField1 and T2.TextField2 are Ok/Nok fields, so I did not index them since they have only two distinct values. Should I put indexes on these fields?
2. Can I make this transaction yield to the user interface when they access the same rows? I can start the transaction again, but I do not want users to be disturbed.
BEGIN TRANSACTION pTrans
BEGIN
INSERT INTO T1
(fields)
SELECT (fields)
FROM T2 INNER JOIN View1 ON T2.TrID = View1.MyTableID
WHERE (T2.TextField1 = @TrType AND T2.TextField2 = @TextField2)
UPDATE T2
SET TextField2 = 'Ok', TextField2Date=@MyRunDateTime
FROM T2
WHERE (TextField1 = @TrType AND TextField2 = @TextField2)
IF @@ERROR <> 0
BEGIN
rollback transaction pTrans
return(-1)
END
ELSE
BEGIN
commit transaction pTrans
END
END
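Two things that may help, as a sketch rather than a definitive fix: even with only two distinct values, an index on (TextField1, TextField2) lets both statements in the transaction touch and lock only the matching rows instead of scanning the table, which is a common source of deadlocks with concurrent readers; and on SQL Server 2005 the READ_COMMITTED_SNAPSHOT option stops readers from blocking behind this transaction. MyDatabase below is a placeholder.
Code:
-- Sketch: narrow the rows the transaction touches and reduce reader blocking.
CREATE INDEX IX_T2_TextField1_TextField2 ON T2 (TextField1, TextField2)

-- Optional, SQL Server 2005 and later only:
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON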
View 3 Replies
View Related
Jul 11, 2006
Hi, have a nice day, all. Can some expert here point out the dos and don'ts when writing queries? i.e. should we create many views as handlers to get data? When should we use stored procedures and triggers? And so on.
I am seeking to improve my SQL skills; can someone suggest a reference site, material or a book as well?
thank you very much for helping
View 2 Replies
View Related
Nov 30, 2005
I wrote the following function a few years ago - before I learned about SQL's PATINDEX function. It might be possible to check for valid email address syntax with a single PATINDEX string, which could replace the entire body of the function below.
Is anyone interested in taking a crack at it?
Signed... lazy Sam
CREATE FUNCTION dbo.EmailIsValid (@Email varchar (100))
/*
RETURN 1 if @Email contains a valid email address syntax, ELSE RETURN 0
*/
RETURNS BIT
AS
BEGIN
DECLARE @atpos int, @dotpos int
SET @Email = LTRIM(RTRIM(IsNull(@Email, ''))) -- remove leading and trailing blanks
IF LEN(@Email) = 0 RETURN(0) -- nothing to validate
SET @atpos = charindex('@',@Email) -- position of first (hopefully only) @
IF @atpos <= 1 OR @atpos = LEN(@Email) RETURN(0) -- @ is neither 1st or last or missing
IF CHARINDEX('@', @email, @atpos+1) > 0 RETURN(0) -- Two @s are illegal
IF CHARINDEX(' ',@Email) > 0 RETURN(0) -- Embedded blanks are illegal
SET @dotpos = CHARINDEX('.',REVERSE(@Email)) -- location (from rear) of last dot
IF (@dotpos < 3) or (@dotpos > 4) or (LEN(@Email) - @dotpos) < @atpos RETURN (0) -- dot / 2 or 3 char, after @
RETURN(1) -- Whew !!
END
Go
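A rough pattern-based version, as a sketch (the name EmailIsValid2 is only for illustration, and it is looser than the original: it allows top-level domains longer than 3 characters, for example). It checks the general shape x@yy...zz, a single '@' and no blanks, not full RFC validation.
Code:
CREATE FUNCTION dbo.EmailIsValid2 (@Email varchar (100))
RETURNS BIT
AS
BEGIN
    SET @Email = LTRIM(RTRIM(ISNULL(@Email, '')))

    IF PATINDEX('%_@__%.__%', @Email) > 0                           -- basic shape: x@yy....zz
       AND CHARINDEX(' ', @Email) = 0                               -- no embedded blanks
       AND CHARINDEX('@', @Email, CHARINDEX('@', @Email) + 1) = 0   -- only one @
        RETURN 1

    RETURN 0
END
Go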
View 8 Replies
View Related
Dec 18, 2007
hi
In ASP.NET, I use SQL Server to store records.
A table has 1 million records, but when I update a record it is very slow.
Is "create index" helpful for an "update" operation?
I need help, thanks a lot.
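In general, as a sketch with hypothetical table and column names: an index on the columns in the UPDATE's WHERE clause helps SQL Server find the rows to change quickly, while indexes on the columns being modified add maintenance cost to every update, so the answer depends on which columns are which.
Code:
-- Sketch: index the lookup column, not (necessarily) the updated column.
-- Hypothetical table: Orders(OrderId, Status, ...), updated by OrderId.
CREATE INDEX IX_Orders_OrderId ON Orders (OrderId)

UPDATE Orders
SET Status = 'Shipped'      -- the updated column itself needs no index for this to be fast
WHERE OrderId = 123456      -- this predicate is what the index speeds up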
View 4 Replies
View Related
Mar 8, 2004
Hi,
We have a poorly performing SQL 2000 db. I have just defragged the data files of our server (the HD, not the indexes; those are done daily via SQL Agent) and have not found any improvement in response.
I have now got into using SQL Profiler to analyse the server performance. In the results that the trace is returning there are some huge (REALLY BIG) values for duration and CPU, but these rows have no TextData value returned (i.e. it is null).
Why is this? For these rows, the reads and writes columns are also high.
If these rows are what is taking the CPUs' time, then how can I identify what the server is doing so that I can make changes?
Any thoughts on what other values I might trace, or what action I can take to find the cause of the slowdown?
In Performance Monitor the processors (dual Xeons) rarely drop below 60%.
thanks in advance
fatherjack
View 2 Replies
View Related
Aug 31, 2006
Is there any way to improve this statement and make it shorter? Any input will be appreciated.
update TempAddressParsingTable set ad_unit = case
when right(rtrim(ad_str1),3) like 'apt' or
right(rtrim(ad_str1),3) like 'lot'
and substring(reverse(ad_str1), 4,1) in ('', ',', '.')
then right(ad_str1,3)
when right(rtrim(ad_str1),4) like 'unit' or
right(rtrim(ad_str1),4) like 'apt%' or
right(rtrim(ad_str1),4) like 'lot%'
and substring(reverse(ad_str1), 5,1) in ('', ',', '.')
then right(ad_str1,4)
when right(rtrim(ad_str1),5) like 'unit%' or
right(rtrim(ad_str1),5) like 'apt%%' or
right(rtrim(ad_str1),5) like 'lot%%'
and substring(reverse(ad_str1), 6,1) in ('', ',', '.')
then right(ad_str1,5)
when right(rtrim(ad_str1),6) like 'unit%%' or
right(rtrim(ad_str1),6) like 'apt%%%' or
right(rtrim(ad_str1),6) like 'lot%%%'
and substring(reverse(ad_str1), 7,1) in ('', ',', '.')
then right(ad_str1,6)
when right(rtrim(ad_str1),7) like 'unit%%%' or
right(rtrim(ad_str1),7) like 'apt%%%%' or
right(rtrim(ad_str1),7) like 'lot%%%%'
and substring(reverse(ad_str1), 8,1) in ('', ',', '.')
then right(ad_str1,7)
when right(rtrim(ad_str1),8) like 'unit%%%%' or
right(rtrim(ad_str1),8) like 'apt%%%%%'
and substring(reverse(ad_str1), 9,1) in ('', ',', '.')
then right(ad_str1,8)
when right(rtrim(ad_str1),9) like 'unit%%%%%' or
right(rtrim(ad_str1),9) like 'apt%%%%%%'
and substring(reverse(ad_str1), 10,1) in ('', ',', '.')
then right(ad_str1,9)
when right(rtrim(ad_str1), 2) like '#%'
and substring(reverse(ad_str1), 3, 1) in ('', ',', '.')
then right(ad_str1, 2)
when right(rtrim(ad_str1), 3) like '#%%'
and substring(reverse(ad_str1), 4, 1) in ('', ',', '.')
then right(ad_str1, 3)
when right(rtrim(ad_str1), 4) like '#%%%'
and substring(reverse(ad_str1), 5, 1) in ('', ',', '.')
then right(ad_str1, 4)
when right(rtrim(ad_str1), 5) like '#%%%%'
and substring(reverse(ad_str1), 6, 1) in ('', ',', '.')
then right(ad_str1, 5)
else null
end
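A shorter sketch based on taking the last space-delimited token once and keeping it only when it looks like a unit designator. It assumes the unit is always the final token and is preceded by a space (the original also treats ',' and '.' as separators, so those cases would need a small extension), and CROSS APPLY requires SQL Server 2005 or later.
Code:
-- Sketch: grab the last space-delimited token once, then test it.
UPDATE t
SET ad_unit = x.last_token
FROM TempAddressParsingTable AS t
CROSS APPLY (SELECT RIGHT(RTRIM(t.ad_str1),
                          CHARINDEX(' ', REVERSE(RTRIM(t.ad_str1)) + ' ') - 1) AS last_token) AS x
WHERE x.last_token LIKE 'apt%'
   OR x.last_token LIKE 'lot%'
   OR x.last_token LIKE 'unit%'
   OR x.last_token LIKE '#%'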
View 13 Replies
View Related
Mar 13, 2007
Dear Experts,
I'm a DBA working for a product-based company.
We are implementing our product for a client with a huge OLTP workload.
Our reports team is facing a problem (error: all the reports are timing out). Though the queries are written properly, each query takes several minutes.
I ran the command DBCC DROPCLEANBUFFERS,
and the time immediately dropped to 10 seconds.
Now my question is:
please suggest the DBCC commands, or any other DBA-level commands, that would improve the performance of the application for my reports team.
Thanks in advance.
Vinod
View 4 Replies
View Related
Jan 15, 2008
Hi All,
From your experience with SQL 2005, is there any free software that can help improve performance or help identify performance bottlenecks? Two examples I usually use are a maintenance plan that does (check DB > reorganize index > rebuild index > update statistics), and the SQL 2005 DASHBOARD for the reporting side.
Do you have any other free tools or advice you can give me for performance, or anything that I must have on my SQL 2005 servers?
Thx
View 3 Replies
View Related
Nov 23, 2007
Hello,
I would like to know how I can replace a NOT IN clause, keeping the same results, in order to improve the performance of my SQL statement.
I am working with SQL Server 2005
Thanks,
jmota
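The usual replacements are NOT EXISTS or a LEFT JOIN that keeps only the non-matches. A sketch with hypothetical table and column names (note that NOT IN returns no rows at all if the subquery produces a NULL, so these rewrites are only exact equivalents when the compared column is not nullable):
Code:
-- Original shape (hypothetical names):
-- SELECT c.CustomerId FROM Customers c
-- WHERE c.CustomerId NOT IN (SELECT o.CustomerId FROM Orders o)

-- Sketch 1: NOT EXISTS
SELECT c.CustomerId
FROM Customers AS c
WHERE NOT EXISTS (SELECT 1 FROM Orders AS o WHERE o.CustomerId = c.CustomerId)

-- Sketch 2: LEFT JOIN and keep the non-matches
SELECT c.CustomerId
FROM Customers AS c
LEFT JOIN Orders AS o ON o.CustomerId = c.CustomerId
WHERE o.CustomerId IS NULL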
View 6 Replies
View Related
Nov 5, 2007
I am not an expert in either SSIS or VFP technology, but I know enough to get my way round. One anomaly I discovered that I thought was worth sharing for all those concerned with getting large amounts of data out of VFP in as short a time as possible: when you search for performance tips in relation to SSIS, the advice is to never use "Table or view" from the data access mode list in the OLE DB source, as this effectively translates to SELECT * FROM table, and I've never come across anything to contradict this - well, I am contradicting it now, and let me explain why.
When you use "SQL command" as the data access mode in the OLE DB source (where the OLE DB source is a FoxPro DBC) and you write out "select column1, column2 etc. from table a" and then connect that to a destination (in my case an OLE DB destination), the SSIS task spends ages stuck on Pre-execute before anything happens (the bigger the Fox table, the longer the wait). What is happening behind the scenes is that the FoxPro engine (assuming it is the FoxPro engine and not the SQL engine - either way I don't think it matters too much) is executing the SQL command and then writing the results to a tmp (temp) file in your local temp folder (in my case: C:\Documents and Settings\autologin\Local Settings\Temp\1). These files take up gigs of space, and it is only when this process is complete that the SSIS task actually finishes the Pre-execute and starts the data transfer process. I couldn't understand a) why my packages were stuck on Pre-execute for such long times, and b) why the tmp files were being created and why they were so big.
If you change from "SQL command" in the source to "Table or view" and then select your table from the list, the SSIS task kicks off immediately when executed, doesn't get stuck on Pre-execute, and doesn't create any tmp files - so you save time and disk space. The difference in time is immense, and if, like me, you were really frustrated with poor performance when extracting from VFP, now you know why.
Btw, maybe this does not apply to all versions of VFP, but it certainly does to v7.
View 6 Replies
View Related
Feb 7, 2007
I have a table which has around 132,000 rows with 24 columns. I use rda.pull to download the data to a PDA. To query this data, I must create an index on 5 character columns.
The data download time is good enough, around 4 minutes, but it takes 12 minutes to create the index.
Please give me any ideas on how to improve the overall synchronization speed. Thanks!
View 6 Replies
View Related
Sep 9, 2007
-- Search orders which meet @empDefId
ALTER PROCEDURE SearchOrders
@empDefId nvarchar(50)
AS
declare @ctmId int
select @ctmId=customerid from customers where employeedefineid=@empDefId
select * from producedetails where orderid in
(select orderid from orders where customerid=@ctmId) -- this query executes three times!
order by orderid
select * from producecautions where orderid in
(select orderid from orders where customerid=@ctmId)
order by orderid
select * from orders where customerid=@ctmId order by orderid
RETURN
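One sketch for avoiding the repeated subquery on orders: capture the customer's order ids once in a table variable and reuse it in all three result sets.
Code:
-- Sketch: resolve the order ids once and reuse them.
DECLARE @orderIds TABLE (orderid int PRIMARY KEY)

INSERT INTO @orderIds (orderid)
SELECT orderid FROM orders WHERE customerid = @ctmId

SELECT pd.* FROM producedetails pd
JOIN @orderIds AS o ON o.orderid = pd.orderid
ORDER BY pd.orderid

SELECT pc.* FROM producecautions pc
JOIN @orderIds AS o ON o.orderid = pc.orderid
ORDER BY pc.orderid

SELECT ord.* FROM orders ord
JOIN @orderIds AS o ON o.orderid = ord.orderid
ORDER BY ord.orderid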
View 3 Replies
View Related
Jan 26, 2007
I submit an SSIS package with a data flow task via the DTEXEC command on a W2k3 SP4 server with SQL 2005 EE SP1 installed, and I have done this several times. The package sometimes finishes the task successfully but sometimes fails.
I would like to know how to improve the stability of running the package. The server is not busy with other tasks, and the resources should be enough to run my task. It is strange.
Thanks.
View 3 Replies
View Related
Sep 13, 2007
Hello,
I have the following view SQL ( LATESTRECALL_v, for communication purposes ):
SELECT REC.RECKEY, REC.RECSTM, REC.RECUID, REC.RECREC, REC.RECLNN, REC.RECCCD, REC.RECSKU, REC.RECSKD, REC.RECPEC,
REC.RECPRL, REC.RECQTY, REC.RECTGQ, REC.RECTAG, REC.RECASN, REC.RECRLC, REC.RECPLC, REC.RECNTS, REC.RECRST,
REC.RECLOT, REC.RECARF, REC.RECVEN, REC.RECIBO, REC.RECSRN, REC.RECPAR, REC.RECINS, REC.RECSDTR, REC.RECDEA,
REC.RECUOI, REC.RECFPL, REC.RECTDA, REC.RECUMN, REC.RECRQT, REC.RECCID, REC.RECCTN, REC.RECFPC, REC.RECEPU,
REC.RECSCQ, REC.RECLOG, REC.RECSTS, REC.RECTDV, REC.RECSHDR, REC.RECRRN, REC.RECWDF, REC.RECWWF, REC.RECSLT,
REC.RECSQT, REC.RECSTG, REC.RECRTG, REC.RECPSN, REC.RECIFL, REC.RECEXPR, REC.RECMFDR, REC.RECMFGR, REC.RECCNO,
REC.RECSC1, REC.RECSC2, REC.RECSC3, REC.RECSC4, REC.RECSC5, REC.RECSC6, REC.RECSC7, REC.RECSC8, REC.RECSC9,
REC.RECSD1, REC.RECST1, REC.RECSD2, REC.RECST2, REC.RECSD3, REC.RECST3, REC.RECSQ1, REC.RECSQ2, REC.RECSQ3,
REC.RECSQ4, REC.RECSQ5, REC.RECSQ6, REC.RECSDU1, REC.RECSDU2, REC.RECRRC, REC.RECICS, REC.RECJOBS
FROM dbo.REC, dbo.LATESTRECKEY_V
WHERE REC.RECKEY = LATESTRECKEY_V.LATEST_RECKEY
Note that it refers to view LATESTRECKEY_V, which is defined below ( the SQL ):
SELECT REC.RECSKU, REC.RECPRL, max(REC.RECKEY) AS latest_reckey
FROM dbo.REC, dbo.LATESTREC_V
WHERE
REC.RECSKU = LATESTREC_V.RECSKU AND
REC.RECPRL = LATESTREC_V.RECPRL AND
sysdb.ssma_oracle.to_char_date(REC.RECSDTR, 'YYYYMMDD') + REC.RECSTM = LATESTREC_V.LATEST_DATE_TIME
GROUP BY REC.RECSKU, REC.RECPRL
Note that it refers to view LATESTREC_V, which is defined below ( the SQL ):
SELECT REC.RECSKU, REC.RECPRL, max(sysdb.ssma_oracle.to_char_date(REC.RECSDTR, 'YYYYMMDD') + REC.RECSTM) AS latest_date_time
FROM dbo.REC
GROUP BY REC.RECSKU, REC.RECPRL
Now, there already is a key on the REC table, on columns RECSKU and RECPRL. Adding that index sped things up quite a bit, but this setup is still very inefficient. I cannot apply the indexed view concept to LATESTRECKEY_V or LATESTREC_V because they use "max". I suppose I could apply it to the first view, but since that view depends on the other 2 views, I am not convinced it will improve my performance. Any ideas on improving such a situation? Thanks for your time.
Tony
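A sketch that collapses the three views into a single query (it assumes SQL Server 2005 or later for ROW_NUMBER, and that "latest" means the row with the greatest RECSDTR/RECSTM per RECSKU/RECPRL combination, with RECKEY breaking ties, which is what the chained views compute):
Code:
-- Sketch: pick the latest REC row per (RECSKU, RECPRL) in one pass.
SELECT r.*    -- or the explicit column list from LATESTRECALL_V
FROM (
    SELECT REC.*,
           ROW_NUMBER() OVER (
               PARTITION BY RECSKU, RECPRL
               ORDER BY RECSDTR DESC, RECSTM DESC, RECKEY DESC) AS rn
    FROM dbo.REC
) AS r
WHERE r.rn = 1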
View 2 Replies
View Related
Nov 15, 2006
I have found that the sort is very slow in my SQL (because the SQL is very complicated, I can't paste it on this page). How can I improve the speed of the sort?
Are there better methods?
thks
View 4 Replies
View Related
Aug 15, 2006
Hi everyone
I need a solution for this query. It works fine for 2 tables, but when there are thousands of records in each table and the query has more than 2 tables, the process never ends.
Here is the query
(select siqPid= 1007, t1.Gmt909Time as GmtTime,(t1.engValue+t2.engValue+t3.engValue+t4.engValue) as EngValue,
t1.Loc1Time as locTime,t1.msgId
into #temp5
from #temp1 as t1,#temp2 as t2,#temp3 as t3,#temp4 as t4
where t1.Loc1Time = t2.Loc1Time and t2.Loc1Time = t3.Loc1Time and t3.Loc1Time = t4.Loc1Time)
I was trying to do something with the query below,
but the engValues can't be summed up, and if I add that to the query, it doesn't compile.
(select siqPid= 1007, t1.Gmt909Time as GmtTime,
t1.Loc1Time as locTime,t1.msgId,(t1.engValue+t2.engValue+t3.engValue+t4.engValue) as engValue
--into #temp5
from #temp1 as t1
where exists
(Select 1
from #temp2 as t2
where t1.Loc1Time = t2.Loc1Time and
exists
(Select 1
from #temp3 as t3
where t2.Loc1Time = t3.Loc1Time and
exists
(Select 1
from #temp4 as t4
where t3.Loc1Time = t4.Loc1Time))))
I need immediate help on this; I would appreciate any input.
Thanks
-Sarah
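The EXISTS version cannot compile because t2, t3 and t4 are out of scope in the outer select list; the first (join) version already has the right shape, so a sketch for making it scale is simply to index the join column on each temp table before running it (this assumes Loc1Time matches one row per table, as the equi-joins imply).
Code:
-- Sketch: index the join key on each temp table, then run the original join.
CREATE INDEX IX_t1_Loc1Time ON #temp1 (Loc1Time)
CREATE INDEX IX_t2_Loc1Time ON #temp2 (Loc1Time)
CREATE INDEX IX_t3_Loc1Time ON #temp3 (Loc1Time)
CREATE INDEX IX_t4_Loc1Time ON #temp4 (Loc1Time)

SELECT siqPid = 1007,
       t1.Gmt909Time AS GmtTime,
       (t1.engValue + t2.engValue + t3.engValue + t4.engValue) AS EngValue,
       t1.Loc1Time AS locTime,
       t1.msgId
INTO #temp5
FROM #temp1 AS t1
JOIN #temp2 AS t2 ON t2.Loc1Time = t1.Loc1Time
JOIN #temp3 AS t3 ON t3.Loc1Time = t1.Loc1Time
JOIN #temp4 AS t4 ON t4.Loc1Time = t1.Loc1Time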
View 15 Replies
View Related
Oct 20, 2002
I should add an identity field (Identity=True) and a row version field (timestamp) to my table, and avoid spreading tables across different databases - is that true in general?
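For reference, adding the two columns is straightforward, as a sketch with a hypothetical table name (whether they are worth adding depends on how the table is keyed and how update conflicts are detected):
Code:
-- Sketch: add a surrogate key and a row-version column to an existing table.
ALTER TABLE MyTable ADD MyTableId int IDENTITY(1, 1) NOT NULL
ALTER TABLE MyTable ADD RowVer timestamp    -- 'rowversion' on newer versions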
View 4 Replies
View Related