I have the following SQL, which works, but I think it can be done more simply. I seem to have to group by multiple columns, yet I am sure there must be a way of grouping the results by a single column. Any ideas?
Code:
SELECT COUNT(order_items.order_id) AS treenum, orders.order_id, orders.order_date,
       orders.cust_order, orders.del_date, orders.confirmed, orders.del_addr
FROM orders, order_items
WHERE orders.order_id = order_items.order_id
GROUP BY orders.order_id, orders.order_date, orders.cust_order,
         orders.del_date, orders.confirmed, orders.del_addr
ORDER BY orders.order_id DESC
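One common way to group by just the single key is to do the counting in a derived table and join it back. A sketch using the table and column names from the post (it assumes order_id uniquely identifies a row in orders, so no outer GROUP BY is needed):

```sql
-- count the items per order once; only the derived table needs a GROUP BY
SELECT o.order_id, o.order_date, o.cust_order, o.del_date,
       o.confirmed, o.del_addr, c.treenum
FROM orders o
INNER JOIN (SELECT order_id, COUNT(*) AS treenum
            FROM order_items
            GROUP BY order_id) c
        ON c.order_id = o.order_id
ORDER BY o.order_id DESC
```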
I have four different transactions like the one below; each does one insert and one update, and it seems slow and creates deadlocks with the user interface.
These transactions run against tables that users are accessing through another user interface. I have the following two questions:
1. T2.TextField1 and T2.TextField2 hold only Ok/Nok values, so I did not index them, since there are only two distinct values. Should I put indexes on these fields?
2. Can I make this transaction yield to the user interface when both access the same rows? I can restart the transaction, but I do not want users to be disturbed.
BEGIN TRANSACTION pTrans

    INSERT INTO T1 (fields)
    SELECT (fields)
    FROM T2 INNER JOIN View1 ON T2.TrID = View1.MyTableID
    WHERE T2.TextField1 = @TrType AND T2.TextField2 = @TextField2

    UPDATE T2
    SET TextField2 = 'Ok', TextField2Date = @MyRunDateTime
    WHERE TextField1 = @TrType AND TextField2 = @TextField2

IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION pTrans
    RETURN (-1)
END
ELSE
BEGIN
    COMMIT TRANSACTION pTrans
END
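One common deadlock pattern here is the INSERT taking shared locks on T2 and the UPDATE then upgrading them while another session holds conflicting locks. A sketch of the same work (names from the post) that takes update locks up front, so lock acquisition order stays consistent; whether it removes your particular deadlock depends on what the UI sessions do:

```sql
SET XACT_ABORT ON   -- any error rolls the whole transaction back automatically

BEGIN TRANSACTION pTrans

    -- UPDLOCK on the read means the rows are already locked for the UPDATE below
    INSERT INTO T1 (fields)
    SELECT (fields)
    FROM T2 WITH (UPDLOCK)
    INNER JOIN View1 ON T2.TrID = View1.MyTableID
    WHERE T2.TextField1 = @TrType AND T2.TextField2 = @TextField2

    UPDATE T2
    SET TextField2 = 'Ok', TextField2Date = @MyRunDateTime
    WHERE TextField1 = @TrType AND TextField2 = @TextField2

COMMIT TRANSACTION pTrans
```

Keeping the transaction as short as possible (no other statements between BEGIN and COMMIT) also reduces the window in which UI sessions can collide with it.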
I have a pretty good DB server with four CPUs and no other load on it, but the following query takes 4 ms to return. I use syscolumns this way quite often, and I am not sure why it takes that long to return. Any idea?

select 'master', id, colid, name, xtype, length, xprec, xscale, status
from [ablestatic].[dbo].syscolumns
where id = (select id from [ablestatic].[dbo].sysobjects where name = 'link_data_ezregs')
Hi, have a nice day, everyone. Can some expert here point out the dos and don'ts of writing queries? For example: should we create many views as handlers to get data? When should we use stored procedures and triggers? And so on.
I am seeking to improve my SQL skills. Can someone suggest a reference site, material, or book as well?
I wrote the following function a few years ago, before I learned about SQL's PATINDEX function. It might be possible to check for valid email address syntax with a single PATINDEX pattern that could replace the entire body of the function below.
Is anyone is interested in taking a crack at it?
Signed... lazy Sam
CREATE FUNCTION dbo.EmailIsValid (@Email varchar(100))
/* RETURN 1 if @Email contains a valid email address syntax, ELSE RETURN 0 */
RETURNS BIT AS
BEGIN
    DECLARE @atpos int, @dotpos int
    SET @Email = LTRIM(RTRIM(ISNULL(@Email, '')))          -- remove leading and trailing blanks
    IF LEN(@Email) = 0 RETURN (0)                          -- nothing to validate
    SET @atpos = CHARINDEX('@', @Email)                    -- position of first (hopefully only) @
    IF @atpos <= 1 OR @atpos = LEN(@Email) RETURN (0)      -- @ is missing, first, or last
    IF CHARINDEX('@', @Email, @atpos + 1) > 0 RETURN (0)   -- two @s are illegal
    IF CHARINDEX(' ', @Email) > 0 RETURN (0)               -- embedded blanks are illegal
    SET @dotpos = CHARINDEX('.', REVERSE(@Email))          -- location (from the rear) of the last dot
    IF (@dotpos < 3) OR (@dotpos > 4) OR (LEN(@Email) - @dotpos) < @atpos RETURN (0)
        -- last dot must follow the @ and leave a 2- or 3-character top-level domain
    RETURN (1)                                             -- all checks passed
END
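Taking a (lazy) crack at it: a single pattern cannot easily express "exactly one @" or the 2-to-3-character top-level-domain rule, so this sketch keeps two quick rejection tests alongside one PATINDEX shape check. It is an approximation, not equivalent to the full function above (the function name is made up to avoid clashing with the original):

```sql
CREATE FUNCTION dbo.EmailIsValidShort (@Email varchar(100))
RETURNS BIT AS
BEGIN
    SET @Email = LTRIM(RTRIM(ISNULL(@Email, '')))
    IF @Email LIKE '% %' OR @Email LIKE '%@%@%' RETURN (0)  -- blanks / two @s
    -- shape: something, @, something, dot, at least two letters
    IF PATINDEX('%_@__%.[a-z][a-z]%', @Email) = 0 RETURN (0)
    RETURN (1)
END
```

Note it accepts addresses the original rejects (e.g. 4-letter TLDs), so test against your own data before swapping it in.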
I have a SQL query that can take too long: up to 1 minute if the table contains over 1 million rows. And if the system is very active while the query executes, it can cause further delays, I guess.
select distinct 'CONV 1' as Conveyour, info as Error,
       (select top 1 substring(timecreated, 0, 7) from log b
        where a.info = b.info order by timecreated asc) as Date,
       (select count(*) from log b where b.info = a.info) as 'Times occured'
from log a
where loggroup = 'CSCNV' and logtype = 4
The table name is LOG, and I retrieve four columns: Conveyour, Error, Date and Times occured. The point of the subqueries is to count each distinct post and to retrieve the date it was first logged. A first and last date could also be specified, but they are left out here.
Does anyone know how I can improve this SQL query?
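A single pass with GROUP BY may replace both correlated subqueries. A sketch using the names from the post; it assumes timecreated is stored so its text sorts chronologically (so MIN picks the earliest entry). One caveat: the original subqueries counted matches across the whole table, while this counts only rows passing the WHERE filter, so check which behavior you actually want:

```sql
SELECT 'CONV 1' AS Conveyour,
       info AS Error,
       MIN(SUBSTRING(timecreated, 0, 7)) AS [Date],  -- earliest entry per distinct info
       COUNT(*) AS [Times occured]
FROM log
WHERE loggroup = 'CSCNV'
  AND logtype = 4
GROUP BY info
```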
Hi. In ASP.NET, I use SQL Server to store records. One table has 1 million records, and updating a record is very slow. Is CREATE INDEX helpful for UPDATE operations? I need help, thanks a lot.
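Generally an index helps an UPDATE find the rows to change (the WHERE-clause lookup), while indexes on the columns being *modified* add write cost. A sketch with hypothetical table and column names, since the post does not show the schema:

```sql
-- lets the UPDATE below seek to the row instead of scanning 1M rows;
-- the updated column (Status) is deliberately not indexed
CREATE INDEX IX_Records_CustomerId ON Records (CustomerId)

UPDATE Records
SET Status = 'done'
WHERE CustomerId = 42
```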
Hi. We have a poorly performing SQL 2000 DB. I have just defragmented the hard disk holding the server's data files (not the indexes; those are defragmented daily via SQL Agent) and have not found any improvement in response. I have now started using SQL Profiler to analyse server performance. In the results the trace returns, there are some huge (REALLY BIG) values in the duration and cpu columns, but these rows have no textdata value (i.e. it is null).
Why is this? For these rows, the reads and writes columns are also high.
If these rows are what is taking the CPUs' time, how can I identify what the server is doing so I can make changes?
Any thoughts on what other values I might trace, or what action I can take to find the cause of the slowdown?
In Performance Monitor the processors (dual Xeons) are rarely dropping below 60%.
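To see what the server is chewing on at a given moment, a quick snapshot of sysprocesses can complement the trace (this is the standard SQL 2000 system view; a sketch, not a full diagnosis):

```sql
-- busiest sessions first; spid values here can be cross-referenced
-- with the SPID column in the Profiler trace
SELECT spid, cpu, physical_io, status, cmd, hostname, program_name
FROM master.dbo.sysprocesses
ORDER BY cpu DESC
```

For the null-textdata rows, also check which event class they belong to; adding the SP:StmtCompleted and RPC:Completed event classes to the trace often reveals statement text that the batch-level events miss.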
Is there any way to improve this statement and make it shorter? Any input will be appreciated.
update TempAddressParsingTable set ad_unit = case
when right(rtrim(ad_str1),3) like 'apt' or right(rtrim(ad_str1),3) like 'lot' and substring(reverse(ad_str1), 4,1) in ('', ',', '.') then right(ad_str1,3)
when right(rtrim(ad_str1),4) like 'unit' or right(rtrim(ad_str1),4) like 'apt%' or right(rtrim(ad_str1),4) like 'lot%' and substring(reverse(ad_str1), 5,1) in ('', ',', '.') then right(ad_str1,4)
when right(rtrim(ad_str1),5) like 'unit%' or right(rtrim(ad_str1),5) like 'apt%%' or right(rtrim(ad_str1),5) like 'lot%%' and substring(reverse(ad_str1), 6,1) in ('', ',', '.') then right(ad_str1,5)
when right(rtrim(ad_str1),6) like 'unit%%' or right(rtrim(ad_str1),6) like 'apt%%%' or right(rtrim(ad_str1),6) like 'lot%%%' and substring(reverse(ad_str1), 7,1) in ('', ',', '.') then right(ad_str1,6)
when right(rtrim(ad_str1),7) like 'unit%%%' or right(rtrim(ad_str1),7) like 'apt%%%%' or right(rtrim(ad_str1),7) like 'lot%%%%' and substring(reverse(ad_str1), 8,1) in ('', ',', '.') then right(ad_str1,7)
when right(rtrim(ad_str1),8) like 'unit%%%%' or right(rtrim(ad_str1),8) like 'apt%%%%%' and substring(reverse(ad_str1), 9,1) in ('', ',', '.') then right(ad_str1,8)
when right(rtrim(ad_str1),9) like 'unit%%%%%' or right(rtrim(ad_str1),9) like 'apt%%%%%%' and substring(reverse(ad_str1), 10,1) in ('', ',', '.') then right(ad_str1,9)
when right(rtrim(ad_str1), 2) like '#%' and substring(reverse(ad_str1), 3, 1) in ('', ',', '.') then right(ad_str1, 2)
when right(rtrim(ad_str1), 3) like '#%%' and substring(reverse(ad_str1), 4, 1) in ('', ',', '.') then right(ad_str1, 3)
when right(rtrim(ad_str1), 4) like '#%%%' and substring(reverse(ad_str1), 5, 1) in ('', ',', '.') then right(ad_str1, 4)
when right(rtrim(ad_str1), 5) like '#%%%%' and substring(reverse(ad_str1), 6, 1) in ('', ',', '.') then right(ad_str1, 5)
else null end
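Two observations before shortening this. First, in LIKE, `%` matches any run of characters (the single-character wildcard is `_`), so patterns like 'apt%' applied to a fixed-length RIGHT() may not test what you intend. Second, the whole ladder repeats one idea: "grab the trailing token and keep it if it looks like a unit." A sketch of that idea, assuming the unit is separated from the rest of the address by a space (the original used '', ',' and '.' as separators, so adapt the test to your data):

```sql
-- RIGHT(..., CHARINDEX(' ', REVERSE(...) + ' ') - 1) isolates the text after the
-- last space; the appended ' ' makes it safe when no space exists
UPDATE TempAddressParsingTable
SET ad_unit = CASE
        WHEN RIGHT(RTRIM(ad_str1),
                   CHARINDEX(' ', REVERSE(RTRIM(ad_str1)) + ' ') - 1)
             LIKE 'apt%'
          OR RIGHT(RTRIM(ad_str1),
                   CHARINDEX(' ', REVERSE(RTRIM(ad_str1)) + ' ') - 1)
             LIKE 'unit%'
          OR RIGHT(RTRIM(ad_str1),
                   CHARINDEX(' ', REVERSE(RTRIM(ad_str1)) + ' ') - 1)
             LIKE 'lot%'
          OR RIGHT(RTRIM(ad_str1),
                   CHARINDEX(' ', REVERSE(RTRIM(ad_str1)) + ' ') - 1)
             LIKE '#%'
        THEN RIGHT(RTRIM(ad_str1),
                   CHARINDEX(' ', REVERSE(RTRIM(ad_str1)) + ' ') - 1)
        ELSE NULL
    END
```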
Dear experts, I'm a DBA working for a product company. We are implementing our product for a client with a huge OLTP workload. Our reports team is facing a problem (error: all the reports time out). Though the queries are written properly, each query takes several minutes. I issued DBCC DROPCLEANBUFFERS, and the time immediately dropped to 10 seconds.
Now my question: please suggest DBCC commands, or any other DBA-level commands, to improve the performance of the application for my reports team.
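For what it's worth, DBCC DROPCLEANBUFFERS empties the buffer cache, which normally makes the *next* run slower, not faster; if it appeared to help, stale statistics or a bad cached plan are more likely culprits. Commonly suggested maintenance commands, as a sketch to run in a maintenance window (the table name here is hypothetical):

```sql
-- refresh the optimizer's statistics for a heavily-reported table
UPDATE STATISTICS dbo.ReportFactTable WITH FULLSCAN

-- rebuild fragmented indexes on that table
DBCC DBREINDEX ('dbo.ReportFactTable')

-- throw away cached plans so queries recompile against fresh statistics
-- (use sparingly on a busy server; every query recompiles afterwards)
DBCC FREEPROCCACHE
```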
From your experience with SQL 2005: is there any free software that can help improve performance or identify performance bottlenecks? Two examples I usually use are a maintenance plan (check DB > reorganize indexes > rebuild indexes > update statistics) and the SQL 2005 Performance Dashboard reports. Do you have any other free tools or tips for performance, or anything I must have on my SQL 2005 servers?
I have a table called work_order, which has over 1 million records, and a contractor table, which has over 3000 records. When I run this query it takes a long time, since it groups by contractor and performs multiple sub-SELECTs. Is there any way to improve the performance of this query?

SELECT ckey, cnam, t1.contractor_id, count(*) as tcnt,
(SELECT count(*) FROM work_order t2 WHERE t1.contractor_id=t2.contractor_id and rrstm=1 and rcdt is NULL) as r1,
(SELECT count(*) FROM work_order t3 WHERE t1.contractor_id=t3.contractor_id and rrstm=2 and rcdt is NULL) as r2,
(SELECT count(*) FROM work_order t4 WHERE t1.contractor_id=t4.contractor_id and rrstm=3 and rcdt is NULL) as r3,
(SELECT count(*) FROM work_order t5 WHERE t1.contractor_id=t5.contractor_id and rrstm=4 and rcdt is NULL) as r4,
(SELECT count(*) FROM work_order t6 WHERE t1.contractor_id=t6.contractor_id and rrstm=5 and rcdt is NULL) as r5,
(SELECT count(*) FROM work_order t7 WHERE t1.contractor_id=t7.contractor_id and rrstm=6 and rcdt is NULL) as r6,
(SELECT count(*) FROM work_order t8 WHERE t1.contractor_id=t8.contractor_id and rcdt is NULL) as open_count,
(SELECT count(*) FROM work_order t9 WHERE t1.contractor_id=t9.contractor_id and vendor_rec is not NULL) as Ack_count,
(SELECT count(*) FROM work_order t10 WHERE t1.contractor_id=t10.contractor_id and (rtyp is NULL or rtyp<>'R') and rcdt is NULL) as open_norwo
FROM work_order t1, contractor
WHERE t1.contractor_id=contractor.contractor_id and contractor.tms_user_id is not NULL
GROUP BY ckey, cnam, t1.contractor_id
ORDER BY cnam
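All nine correlated subqueries scan work_order per contractor. Conditional aggregation does the same counts in one pass over work_order. A sketch using the names from the post (verify the counts match on a sample before switching, since the subquery filters were transcribed by hand):

```sql
SELECT c.ckey, c.cnam, w.contractor_id,
       COUNT(*) AS tcnt,
       SUM(CASE WHEN w.rrstm = 1 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r1,
       SUM(CASE WHEN w.rrstm = 2 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r2,
       SUM(CASE WHEN w.rrstm = 3 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r3,
       SUM(CASE WHEN w.rrstm = 4 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r4,
       SUM(CASE WHEN w.rrstm = 5 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r5,
       SUM(CASE WHEN w.rrstm = 6 AND w.rcdt IS NULL THEN 1 ELSE 0 END) AS r6,
       SUM(CASE WHEN w.rcdt IS NULL THEN 1 ELSE 0 END) AS open_count,
       SUM(CASE WHEN w.vendor_rec IS NOT NULL THEN 1 ELSE 0 END) AS Ack_count,
       SUM(CASE WHEN (w.rtyp IS NULL OR w.rtyp <> 'R') AND w.rcdt IS NULL
                THEN 1 ELSE 0 END) AS open_norwo
FROM work_order w
INNER JOIN contractor c ON c.contractor_id = w.contractor_id
WHERE c.tms_user_id IS NOT NULL
GROUP BY c.ckey, c.cnam, w.contractor_id
ORDER BY c.cnam
```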
Hey guys, here's my situation. I have a table called, let's say, 'Tree', as illustrated below:

Tree
====
TreeId (integer) (identity) not null
L1 (integer)
L2 (integer)
L3 (integer)
...
L10 (integer)

The combination of the values of L1 through L10 is called a "Path", and the L1 through L10 values are stored in a second table, let's say called 'Leaf':

Leaf
====
LeafId (integer) (identity) not null
LeatText varchar(2000)

Here's my problem: I need to look up a given keyword in each path of the Tree table, and return each individual column for the paths that match the criteria. Here's the main idea of how I have this now:

SELECT TreeId, L1, L2, ..., L10,
       GetText(L1) + GetText(L2) + ... + GetText(L10) AS PathText
INTO #tmp FROM Tree   -- GetText is a lookup function for the Leaf table

SELECT L1, GetText(L1), L2, GetText(L2), ..., L10, GetText(L10)
FROM #tmp a
WHERE CHARINDEX(@keyword, a.pathtext) > 0

Does anyone know a better, smarter, more efficient way to accomplish this task? :) Thanks,
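If the keyword only ever occurs inside a single leaf (not spanning the boundary between two concatenated leaves), one sketch using the names from the post avoids building PathText and the temp table entirely:

```sql
-- match the keyword against each leaf directly; one EXISTS probe per tree row,
-- no per-row scalar function calls
SELECT t.TreeId, t.L1, t.L2, t.L3, t.L4, t.L5,
       t.L6, t.L7, t.L8, t.L9, t.L10
FROM Tree t
WHERE EXISTS (SELECT 1
              FROM Leaf lf
              WHERE lf.LeafId IN (t.L1, t.L2, t.L3, t.L4, t.L5,
                                  t.L6, t.L7, t.L8, t.L9, t.L10)
                AND CHARINDEX(@keyword, lf.LeatText) > 0)
```

Note this is not identical to the concatenation approach when the keyword straddles two leaves; if that case matters, keep the original logic.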
I would like to know how to replace a "not in" clause, keeping the same results, in order to improve the performance of my SQL statement.
I'm facing a performance issue with the following query. It returns 184 records and takes 2 to 3 seconds to execute.
SELECT DISTINCT Column1 FROM Table1 (NOLOCK) WHERE Column1 NOT IN
(SELECT T1.Column1 FROM Table1 T1(NOLOCK) JOIN Table2 T2 (NOLOCK)
ON T2.Column2 = T1.Column2 WHERE T2.Column3= <Value>)
Data info:
No. of records in Table1 --> 1377366
No. of distinct records of Column1 in Table1 --> 33240
Is there any way the above query can be rewritten to improve the performance so it takes less than 1 second? (I'm using DISTINCT because there are duplicate records of Column1 in Table1.)
Any help in this regard will be greatly appreciated.
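NOT IN can be rewritten as a correlated NOT EXISTS, which the optimizer can often turn into an anti-join that stops probing at the first match. A sketch of the same query from the post:

```sql
SELECT DISTINCT t.Column1
FROM Table1 t (NOLOCK)
WHERE NOT EXISTS (SELECT 1
                  FROM Table1 t1 (NOLOCK)
                  JOIN Table2 t2 (NOLOCK) ON T2.Column2 = T1.Column2
                  WHERE t1.Column1 = t.Column1
                    AND t2.Column3 = <Value>)
```

One semantic difference worth knowing: NOT IN returns no rows at all if the subquery produces a NULL, while NOT EXISTS is unaffected, so confirm which behavior you rely on.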
I am not an expert in either SSIS or VFP, but I know enough to find my way around. I discovered one anomaly I thought was worth sharing for anyone concerned with getting large amounts of data out of VFP as quickly as possible. When you search for performance tips for SSIS, the advice is to never use "Table or view" from the data access mode list in an OLE DB source, as this effectively translates to SELECT * FROM table, and I've never come across anything to contradict this. Well, I have, and let me explain why.
When you use "SQL command" as the data access mode in an OLE DB source (where the OLE DB source is a FoxPro DBC) and you write out SELECT column1, column2, etc. FROM table, and then connect that to a destination (in my case an OLE DB destination), the SSIS task spends ages stuck on Pre-execute before anything happens (the bigger the FoxPro table, the longer the wait). What is happening behind the scenes is that the FoxPro engine (assuming it's the FoxPro engine and not the SQL engine; either way I don't think it matters much) executes the SQL command and then writes the results to a tmp (temp) file in your local temp folder (in my case: C:\Documents and Settings\autologin\Local Settings\Temp\1). These files take up gigabytes of space, and only when this process is complete does the SSIS task actually finish Pre-execute and start the data transfer. I couldn't understand (a) why my packages were stuck on Pre-execute for such long times, and (b) why the tmp files were being created and why they were so big.
If you change the source from "SQL command" to "Table or view" and then select your table from the list, the SSIS task kicks off immediately when executed; it doesn't get stuck on Pre-execute or create any tmp files, so you save both time and disk space. The difference in time is immense, and if, like me, you were really frustrated with poor performance when extracting from VFP, now you know why.
By the way, maybe this does not apply to all versions of VFP, but it certainly does to v7.
I have built an index on name in the tables, but when each table has 20000 records, the SQL above is very slow. Is there another method to improve the query speed?
I have a table with around 132000 rows and 24 columns. I use rda.Pull to download the data to a PDA. To query the data, I must create an index on 5 character columns.
The data download time is good enough, around 4 minutes, but it takes 12 minutes to create the index.
Please give me any ideas on how to improve the overall synchronization speed. Thanks!
Hi, I have database D1, which contains 5 million users, and another database D2 with 95k users. I want to insert the common users into a new database D3, filtering on phone number, which is a unique value. Below is the structure of my tables in D1 and D2:
Now the userProfiles table stores its data as strings, as below: User.state AA, User.City CC, User.Pin 1234, User.phonenumber 987654
So I am parsing each user with a cursor, writing the phone numbers into a temp table, and then querying the D2 database to verify whether each phone number exists in the Alerts table of D2.
Can anyone please suggest how I can go about this, and also help me improve performance?
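Once the phone numbers are parsed out, the cursor's per-user lookups can be replaced by one joined INSERT. A sketch with assumptions: the parsed numbers sit in a temp table #phones, and the column names (UserId, PhoneNumber) in #phones, Alerts, and the D3 target are hypothetical, since the post does not show them:

```sql
-- index the join column first so the 95k-row probe is cheap
CREATE INDEX IX_phones_PhoneNumber ON #phones (PhoneNumber)

-- one set-based statement replaces per-user cursor round trips
INSERT INTO D3.dbo.CommonUsers (UserId, PhoneNumber)
SELECT p.UserId, p.PhoneNumber
FROM #phones p
INNER JOIN D2.dbo.Alerts a
        ON a.PhoneNumber = p.PhoneNumber
```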
ALTER PROCEDURE SearchOrders @empDefId nvarchar(50) AS
declare @ctmId int
select @ctmId = customerid from customers where employeedefineid = @empDefId

select * from producedetails
where orderid in (select orderid from orders where customerid = @ctmId)  -- this subquery executes three times!
order by orderid

select * from producecautions
where orderid in (select orderid from orders where customerid = @ctmId)
order by orderid

select * from orders where customerid = @ctmId order by orderid
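One way to avoid running the same subquery three times is to materialize the order ids once into a table variable (a sketch of the procedure body; table variables need SQL 2000 or later):

```sql
DECLARE @orderIds TABLE (orderid int PRIMARY KEY)

INSERT INTO @orderIds
SELECT orderid FROM orders WHERE customerid = @ctmId

SELECT * FROM producedetails
WHERE orderid IN (SELECT orderid FROM @orderIds)
ORDER BY orderid

SELECT * FROM producecautions
WHERE orderid IN (SELECT orderid FROM @orderIds)
ORDER BY orderid

SELECT * FROM orders WHERE customerid = @ctmId ORDER BY orderid
```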
I submit an SSIS package with a data flow task via the DTEXEC command, several times, on a W2k3 SP4 server with SQL 2005 EE SP1 installed. The package sometimes finishes the task successfully but sometimes fails.
I would like to know how to improve the stability of running the package. The server is not busy with other tasks and should have enough resources to run mine. It is strange.
SELECT REC.RECSKU, REC.RECPRL, max(REC.RECKEY) AS latest_reckey
FROM dbo.REC, dbo.LATESTREC_V
WHERE REC.RECSKU = LATESTREC_V.RECSKU
  AND REC.RECPRL = LATESTREC_V.RECPRL
  AND sysdb.ssma_oracle.to_char_date(REC.RECSDTR, 'YYYYMMDD') + REC.RECSTM = LATESTREC_V.LATEST_DATE_TIME
GROUP BY REC.RECSKU, REC.RECPRL

Note that it refers to the view LATESTREC_V, which is defined below (the SQL):

SELECT REC.RECSKU, REC.RECPRL, max(sysdb.ssma_oracle.to_char_date(REC.RECSDTR, 'YYYYMMDD') + REC.RECSTM) AS latest_date_time
FROM dbo.REC
GROUP BY REC.RECSKU, REC.RECPRL

Now, there is already a key on the REC table on columns RECSKU and RECPRL, and adding that index sped things up quite a bit. But this setup is still very inefficient. I cannot apply the indexed-view concept to LATESTRECKEY_V or LATESTREC_V because they use MAX. I suppose I could apply it to the first view, but since that view depends on the other two views, I am not convinced it will improve performance. Any ideas on improving this situation? Thanks for your time.
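Since the outer query only needs, per (RECSKU, RECPRL), the max RECKEY among rows matching the latest date-time, one sketch folds the view into a derived table in a single statement (same names and the same ssma_oracle conversion as above; whether it beats the view version depends on the plan the optimizer picks):

```sql
SELECT r.RECSKU, r.RECPRL, MAX(r.RECKEY) AS latest_reckey
FROM dbo.REC r
INNER JOIN (SELECT RECSKU, RECPRL,
                   MAX(sysdb.ssma_oracle.to_char_date(RECSDTR, 'YYYYMMDD') + RECSTM)
                       AS latest_date_time
            FROM dbo.REC
            GROUP BY RECSKU, RECPRL) m
        ON m.RECSKU = r.RECSKU
       AND m.RECPRL = r.RECPRL
       AND sysdb.ssma_oracle.to_char_date(r.RECSDTR, 'YYYYMMDD') + r.RECSTM
           = m.latest_date_time
GROUP BY r.RECSKU, r.RECPRL
```

The to_char_date call on every row defeats any index on RECSDTR; if you can persist that converted value as a computed column and index it, the join should get much cheaper.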
I have found that the sort in my SQL is very slow (the SQL is very complicated, so I can't paste it on this page). How can I improve the speed of the sort?
Hi everyone, I need a solution for this query. It works fine for 2 tables, but when there are thousands of records in each table and the query has more than 2 tables, the process never ends. Here is the query:

select siqPid = 1007, t1.Gmt909Time as GmtTime,
       (t1.engValue + t2.engValue + t3.engValue + t4.engValue) as EngValue,
       t1.Loc1Time as locTime, t1.msgId
into #temp5
from #temp1 as t1, #temp2 as t2, #temp3 as t3, #temp4 as t4
where t1.Loc1Time = t2.Loc1Time
  and t2.Loc1Time = t3.Loc1Time
  and t3.Loc1Time = t4.Loc1Time

I was trying to do something with this query, but the engValues can't be summed up that way, and if I add the sum to the query, the query doesn't compile:

select siqPid = 1007, t1.Gmt909Time as GmtTime, t1.Loc1Time as locTime, t1.msgId,
       (t1.engValue + t2.engValue + t3.engValue + t4.engValue) as engValue
--into #temp5
from #temp1 as t1
where exists (select 1 from #temp2 as t2 where t1.Loc1Time = t2.Loc1Time
  and exists (select 1 from #temp3 as t3 where t2.Loc1Time = t3.Loc1Time
  and exists (select 1 from #temp4 as t4 where t3.Loc1Time = t4.Loc1Time)))

I need help with this urgently; I would appreciate any input.
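The EXISTS version cannot compile because t2, t3, and t4 are only in scope inside their own subqueries, so the SELECT list cannot reference their engValue. Plain inner joins keep all four aliases in scope (a sketch of the same query; an index or primary key on each temp table's Loc1Time should keep the joins from degenerating into nested scans):

```sql
SELECT siqPid = 1007,
       t1.Gmt909Time AS GmtTime,
       t1.engValue + t2.engValue + t3.engValue + t4.engValue AS EngValue,
       t1.Loc1Time AS locTime,
       t1.msgId
INTO #temp5
FROM #temp1 AS t1
INNER JOIN #temp2 AS t2 ON t2.Loc1Time = t1.Loc1Time
INNER JOIN #temp3 AS t3 ON t3.Loc1Time = t1.Loc1Time
INNER JOIN #temp4 AS t4 ON t4.Loc1Time = t1.Loc1Time
```

This is logically the same as the comma-join version that works for 2 tables; if Loc1Time is not unique per table, the row multiplication, not the join syntax, is what makes it explode.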
Should I add an identity field (Identity = True) and a row version field (timestamp) to my table, and avoid spreading tables across different databases? Is that true in general?
Hi, I have a SQL Server with 2 processors, 2 GB memory, and a 40 GB database (Pricing) on RAID 5, working with a 3rd-party application.
In the Pricing DB: table A = 180000 rows (20 columns and 5 indexes, including the clustered primary key); table B = 1789000 rows (25 columns and 6 indexes, including the clustered primary key); table C = 10005 rows (15 columns and 4 indexes, including the clustered primary key).
Users started complaining about poor performance when selecting data from tables A, B, and C.
I used Profiler to capture all queries touching tables A, B, and C with a duration of more than 1 second.
Profiler shows 0 CPU and very high reads for all captured queries.
I ran INDEXDEFRAG on tables A, B, and C,
then reran Profiler.
There were no changes at all: Profiler shows the same low CPU and high reads.
If there is no change in performance after INDEXDEFRAG, can I conclude that everything has been done to optimize tables A, B, and C, and that the query code or table structure should be modified? Or is there any other improvement that could be made without modifying code?
I have a view which uses a UNION of two tables. The first table has 1.5 million records and the second has 40,000 records. When I query the view with a column (indexed in both tables) in the WHERE clause, it takes 3 minutes to return the result. The column is of the DateTime data type. Any ideas on how to improve the query performance?
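One common culprit: plain UNION sorts and de-duplicates the combined 1.54M rows before your filter sees them. If rows cannot be duplicated between the two tables, UNION ALL skips that work. A sketch with hypothetical view/table/column names, since the post does not show the definition:

```sql
-- UNION ALL returns the concatenation without the distinct sort;
-- only safe when the two tables cannot contain the same row
CREATE VIEW dbo.CombinedEvents AS
SELECT EventDate, EventData FROM dbo.BigEvents
UNION ALL
SELECT EventDate, EventData FROM dbo.SmallEvents
```

With UNION ALL, a WHERE on the indexed DateTime column can also be pushed down into each branch, letting both tables use their own index.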
Hello, I have the following setup and I would appreciate any help in improving the performance of the query.
BigTable: Column1 (indexed) Column2 (indexed) Column3 (no index) Column4 (no index)
select [time] = CASE when BT.Column3 = 'value1' then DateAdd(...)
                     when BT.Column3 in ('value2', 'value3') then DateAdd(...)
                END,
       Duration = CASE when BT.Column3 = 'value1' then DateDiff(...)
                       when BT.Column3 in ('value2', 'value3') then
                            DateDiff(ss, BT.OrigTime,
                                     (select TOP 1 X.OrigTime from BigTable X
                                      where X.Column1 > BT.Column1 and X.Column3 <> 'value4'
                                      order by X.Column1))
                  END,
FROM BigTable BT
where BT.Column3 = 'value1'
   OR (BT.Column3 in ('value2', 'value3')
       and BT.Column4 <> (select X.Column4 from BigTable X
                          where X.Column1 = BT.Column1 and X.Column3 = 'Value1'))
Apart from the above, there are a few more columns that are just part of the SELECT statement and are not in any condition statements.
BigTable has around 1 million records, and the response time is very poor: it takes around 3 minutes to retrieve the records (which would be around 500K rows).
I have a question that I cannot get a clear answer on. I have searched the internet to find out whether using foreign keys has any performance benefit, but some articles say yes and some say no. So what should I believe? Do foreign keys have any performance benefit?
One of the departments in my company has a problem with slow data searching and response times. I will elaborate on the current system.
Problem description: ==================== There are 4 lanes. Note: our focus here is only on one lane!
Each lane has 4 machines, and each machine has a local database for its corresponding sub-machines. All the information from each sub-machine's local database server is finally passed to the main database server after 2 weeks. Within two weeks, one table may grow to 600 000 rows, and record search time across the sub-database servers varies from phase to phase.
Phase by phase: =============== Assume the 4 machines are labelled A, B, C, and D respectively. The user at location A checks an item and scans the item's barcode to get its info from the DB, then updates the 'result' field in the stage-A table (pass/fail) and also the current process status (started / halted / interrupted / successfully completed / cancelled).
Assume each table (all tables have an identical schema) in the 4 sub-database servers now holds 600 000 rows. When user A scans the barcode of an item belonging to group 1.1, the system searches database A and displays the info; the response time is quite OK, since there is not much filtering in the search. When another item belonging to group 1.1 is scanned by user B, the system searches database A and displays the item's (group 1.1) status and process status from stage A. The response time here is slower, since it has to check DB A. The same scenario applies when any item from a similar item group is scanned at DB C and DB D: the system checks whether that item passed the previous stage before proceeding to its current phase (A, B, C & D).
Currently, users have to wait a few seconds to get the info as the DB grows, and because of this they have to clear the tables and send the data to the main database server once every 2 weeks. We hope to keep at least 1 month of data, but each scan is delayed by a few seconds, and the response times vary across phases A, B, C, and D. D is the slowest, since it needs to check A, B, and C before accepting an item into its own phase.
I am proposing to use SQL Server 2005 and to redesign the current structure in each lane. In my proposal I am supposed to state, for one server holding roughly 600k rows, the average search time across the 4 machines if searches are handled via stored procedures, ideally broken down per stage in milliseconds or seconds. If anyone has faced a similar case or come across a new method, please suggest a way of solving this problem, and if possible give me a time estimate for each phase, as I need to specify it in my proposal document. Can someone give an accurate time for each stage, a better approach for the redesign, and an estimated new response time?
Really appreciated, my dear friends. I trust you are geniuses chosen from around the world. ;) I'm sure someone has done something miraculous... lol. I'm in deep trouble, as I have to submit the proposal in 2 days' time.