Want To Make "Get Latest Status" Function More Efficient

May 14, 2007

Hey folks, I'm looking at making the following query more efficient,
potentially using the ranking functions, and I'd like some advice from
the gurus. The purpose of the query is to get the current status for a
person while also keeping a historical record of what they've done,
status-wise. This was the best way I could come up with a few years
back, but I'm wondering if there's a better way with SQL Server 2005.

Here's a pared-down structure and my query. Any help or critique would
be greatly appreciated.


CREATE TABLE #Status(
StatusID int NOT NULL,
StatusName VARCHAR(50) NOT NULL,
StatusCategoryID int NOT NULL
) ON [PRIMARY]

CREATE TABLE #RegStatus(
[RegistrationID] [uniqueidentifier] NOT NULL,
[StatusID] [int] NOT NULL,
[StatusTimeStamp] [datetime] NOT NULL,
[UniqueRowID] [int] IDENTITY(1,1) NOT NULL
) ON [PRIMARY]

SET NOCOUNT on
INSERT INTO #Status VALUES(200, 'StatusA', 1)
INSERT INTO #Status VALUES(210, 'StatusB', 1)
INSERT INTO #Status VALUES(115, 'StatusC', 1)
INSERT INTO #Status VALUES(112, 'StatusD', 1)
INSERT INTO #Status VALUES(314, 'StatusE', 1)
INSERT INTO #Status VALUES(15, 'StatusF', 1)
INSERT INTO #Status VALUES(22, 'StatusG', 1)

INSERT INTO #Status VALUES(300, 'StatusX', 2)
INSERT INTO #Status VALUES(310, 'StatusY', 2)
INSERT INTO #Status VALUES(320, 'StatusZ', 2)

INSERT INTO #RegStatus VALUES('7A6058D0-06CB-4E83-A8C4-B1AFC74B11F0',
200, GETDATE())
INSERT INTO #RegStatus VALUES('7A6058D0-06CB-4E83-A8C4-B1AFC74B11F0',
210, GETDATE())
INSERT INTO #RegStatus VALUES('7A6058D0-06CB-4E83-A8C4-B1AFC74B11F0',
115, GETDATE())

INSERT INTO #RegStatus VALUES('8B94A666-A3DD-4CB1-89A0-9910047AE7A0',
112, GETDATE())
INSERT INTO #RegStatus VALUES('8B94A666-A3DD-4CB1-89A0-9910047AE7A0',
314, GETDATE())
INSERT INTO #RegStatus VALUES('8B94A666-A3DD-4CB1-89A0-9910047AE7A0',
200, GETDATE())
INSERT INTO #RegStatus VALUES('8B94A666-A3DD-4CB1-89A0-9910047AE7A0',
22, GETDATE())

INSERT INTO #RegStatus VALUES('58AF0FC6-C900-4BD0-B3F7-F9D62701F021',
15, GETDATE())
INSERT INTO #RegStatus VALUES('58AF0FC6-C900-4BD0-B3F7-F9D62701F021',
115, GETDATE())
INSERT INTO #RegStatus VALUES('58AF0FC6-C900-4BD0-B3F7-F9D62701F021',
200, GETDATE())
INSERT INTO #RegStatus VALUES('58AF0FC6-C900-4BD0-B3F7-F9D62701F021',
115, GETDATE())
SET NOCOUNT Off

/*
This is a query from within a function that I use to not only get the
latest
status for one registrant, but I can use it to get the latest status
for everyone as well.
*/


DECLARE @RegStatusCatID int,
@RegID UNIQUEIDENTIFIER

SET @RegStatusCatID = 1
SET @RegID = null

select LS.*, S.StatusName, S.StatusCategoryID
from #Status S
join(
select RS.RegistrationID, RS.StatusID, RS.StatusTimeStamp
from #RegStatus RS
join
(
SELECT RS.RegistrationID , max(RS.UniqueRowID) UniqueRowID
FROM #RegStatus RS
join #Status S
on RS.StatusID = S.StatusID
and S.StatusCategoryID = @RegStatusCatID
and (
@RegID is null
or (@RegID is not null
and RS.RegistrationID = @RegID)
)
group by RS.RegistrationID
)LS
on RS.UniqueRowID = LS.UniqueRowID
) LS
on S.StatusID = LS.StatusID
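Since SQL Server 2005 is on the table, one possible rewrite uses ROW_NUMBER() to pick the latest row per registrant in a single pass over #RegStatus. This is a sketch against the temp tables above, not a tested drop-in replacement, so compare the plans:

```sql
-- Sketch: latest status per registrant via ROW_NUMBER() (SQL Server 2005+)
SELECT LS.RegistrationID, LS.StatusID, LS.StatusTimeStamp,
       S.StatusName, S.StatusCategoryID
FROM (
    SELECT RS.RegistrationID, RS.StatusID, RS.StatusTimeStamp,
           ROW_NUMBER() OVER (PARTITION BY RS.RegistrationID
                              ORDER BY RS.UniqueRowID DESC) AS rn
    FROM #RegStatus RS
    JOIN #Status S2
      ON S2.StatusID = RS.StatusID
     AND S2.StatusCategoryID = @RegStatusCatID
    WHERE @RegID IS NULL OR RS.RegistrationID = @RegID
) LS
JOIN #Status S
  ON S.StatusID = LS.StatusID
WHERE LS.rn = 1
```

The rn = 1 filter replaces the MAX(UniqueRowID) derived table and the extra self-join on #RegStatus.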

--SELECT * FROM #RegStatus

DROP TABLE #RegStatus
DROP TABLE #Status

View 4 Replies



Unable To Synchronize To The Latest Status

Mar 21, 2007

Greetings guys, has anyone here experienced the problem I'm having? I set up mirroring for my database over a month ago and it worked perfectly. However, the line connection on the principal side is unstable, so the mirror side gets disconnected frequently. I found that once the principal side's connection recovered, the mirroring status showed 'synchronizing', and I assumed it would catch up.

However, after two days (more than 48 hours) I checked back and it was still showing 'synchronizing', and synchronizing slowly. It shouldn't take the mirror side more than two days to synchronize (the principal is around 1.2GB and the mirror around 800MB). Is there any solution, or do I have to set up the mirroring session again? I don't think it's a line connection issue, because I set up another mirroring session between these two servers and it works fine. Hope you guys can help me out ASAP. Thanks.

Best Regards,

Hans

View 3 Replies View Related

Create A View To Get Latest Status For Each Application

Apr 24, 2007

I would like some help creating a view that will display the latest status for each application, where "latest" is based on CreateDt. 
For example:
Table Structure:
============
Application: ApplicationID, Name, Address, City, State, Zip, etc..
ApplicationAction: ApplicationActionID, ApplicationID, Status (Ex:new, reviewed, approved, closed), CreateDt 
  
View should display:
==============
ApplicationID, ApplicationActionID, Status, CreateDt  
 
Example:
==========
ApplicationID=4, Name=Bob Smith, etc....
ApplicationActionID=1,  ApplicationID=4, Status=New,  CreateDt=1/3/2007 1:00
ApplicationActionID=2,  ApplicationID=4, Status=Reviewed,  CreateDt=1/3/2007 2:00
ApplicationActionID=3,  ApplicationID=4, Status=Approved,  CreateDt=1/4/2007 1:00
 .... etc....
View should return:
ApplicationID=4, ApplicationActionID=3, Status=Approved,  CreateDt=1/4/2007 1:00
etc....
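On SQL Server 2005 one way to build such a view is ROW_NUMBER() partitioned by application. A sketch assuming the column names described above (the view name is made up):

```sql
-- Sketch: one row per application - the action with the latest CreateDt
CREATE VIEW vwLatestApplicationAction AS
SELECT ApplicationID, ApplicationActionID, Status, CreateDt
FROM (
    SELECT ApplicationID, ApplicationActionID, Status, CreateDt,
           ROW_NUMBER() OVER (PARTITION BY ApplicationID
                              ORDER BY CreateDt DESC) AS rn
    FROM ApplicationAction
) x
WHERE rn = 1
```

Joining this view back to Application then adds Name and the other applicant columns.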
 
 
 
 

View 4 Replies View Related

T-SQL (SS2K8) :: Return Value In A Status Field Which Has Latest Year And Month

May 11, 2015

I have table in which month & year are stored, Like this

Month Year
10 2014
11 2014
12 2014
1 2015
2 2015
3 2015
4 2015

I need a query that returns the value of the status field from the row with the latest year and month.
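One simple approach, assuming the table is named StatusTable (hypothetical) and also carries the status column:

```sql
-- Pick the row with the latest year, then the latest month within that year
SELECT TOP (1) *
FROM dbo.StatusTable
ORDER BY [Year] DESC, [Month] DESC;
```

Year and Month are bracketed because both are reserved-looking names.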

View 9 Replies View Related

Dynamic Order Status Column Function

Apr 7, 2008

I am looking for assistance coming up with a function (or maybe not a function, if there is a better way) to make the Status column in my order table dynamic. The rules are:

- New records always default to 1 (Quote). At this point the field is not dynamic.
- Once the customer confirms the order, the user executes a command to change the status to 3 (Confirmed). From this point the field becomes dynamic, based on the shipping records.
- There are two order details tables, one for sales items and one for rental items, and each has its own shipping records: the CheckInOut tables are for rentals, while the Ship tables are for sales.
- If some (but not all) of the items in either details table have a shipping record associated with them, the status should change to 5 (Part Shipped).
- If everything has been shipped, the status changes to 4 (Shipped).
- If everything has been shipped but some items have been returned, the status is 6 (Part Returned).
- If everything has been shipped and all of the RentalDetail items have been returned, the status is 7 (Returned).
- Any other combination of ships and returns gives status 8 (Mixed).
- At any time, the user can execute a command to change the value to 2 (Cancelled); once the value is 2, the field stops being dynamic again.

Below are my tables creation commands.


CREATE TABLE OrderHeader
(
OrderID int identity primary key,
Status int,
StartDate datetime,
EndDate datetime
)--Use Type 1 = "Quote" Type 2 = "Cancelled" Type 3 = "Confirmed", Type 4 = "Shipped", Type 5 = "Part Shipped", Type 6 = "Part Returned", Type 7 = "Returned, Type 8 = "Mixed"
CREATE TABLE OrderRentalDetail
(
OrderRentalDetailID int identity primary key,
OrderID int FOREIGN KEY REFERENCES OrderHeader(OrderID),
ItemName varchar(30),
Qty int,
SiteID int
)
CREATE TABLE CheckInOutHeader
(
CheckInOutID int identity primary key,
Type int,
SiteID int,
ActionDate datetime
)--Use Type 1 = "Ship" Type 2 = "Return" Type 3 = "Lost"
CREATE TABLE CheckInOutDetail
(
CheckInOutDetailID int identity primary key,
CheckInOutID int NOT NULL FOREIGN KEY REFERENCES CheckInOutHeader(CheckInOutID),
OrderRentalDetailID int,
Qty int
)
CREATE TABLE OrderSalesDetail
(
OrderSalesDetailID int identity primary key,
OrderID int FOREIGN KEY REFERENCES OrderHeader(OrderID),
ItemName varchar(30),
Qty int,
SiteID int
)
CREATE TABLE ShipHeader
(
ShippingID int identity primary key,
Type int,
SiteID int,
ActionDate datetime
)--Use Type 1 = "Ship" Type 2 = "Return"
CREATE TABLE ShipDetail
(
ShipDetailID int identity primary key,
ShippingID int NOT NULL FOREIGN KEY REFERENCES ShipHeader(ShippingID),
OrderSalesDetailID int,
Qty int
)
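As a starting point, the dynamic portion could be computed on read rather than stored. This sketch covers only the sales side and the 3/4/5 transitions (returns, rentals, and statuses 6-8 omitted); the derivation logic is my assumption about the intent, not something stated in the schema:

```sql
-- Sketch: derive a status from ordered vs. shipped quantities (sales side only)
SELECT o.OrderID,
       CASE
           WHEN o.Status IN (1, 2) THEN o.Status        -- quote/cancelled stay static
           WHEN ISNULL(sd.ShippedQty, 0) = 0 THEN 3     -- confirmed, nothing shipped
           WHEN sd.ShippedQty < od.OrderedQty THEN 5    -- part shipped
           ELSE 4                                       -- fully shipped
       END AS DerivedStatus
FROM OrderHeader o
LEFT JOIN (SELECT OrderID, SUM(Qty) AS OrderedQty
           FROM OrderSalesDetail GROUP BY OrderID) od
       ON od.OrderID = o.OrderID
LEFT JOIN (SELECT osd.OrderID, SUM(s.Qty) AS ShippedQty
           FROM ShipDetail s
           JOIN OrderSalesDetail osd
             ON osd.OrderSalesDetailID = s.OrderSalesDetailID
           GROUP BY osd.OrderID) sd
       ON sd.OrderID = o.OrderID;
```

Wrapping something like this in a view keeps OrderHeader.Status holding only the static values (1, 2, 3) while the shipped states are always current.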

View 3 Replies View Related

Make Date Function (like In VB)

Dec 11, 2002

I hate manipulating dates in SQL. One of the many things Access and VB handles much better!

I wanted a function like DateSerial(Year,Month,Day) so I created one.

One caveat: The Year must be >= 1800. But all other numbers can be most any int value that results in a valid date.

Which means you can do:

MDate(Year(@d),1,1) (first day of date @d)
MDate(Year(@d), Month(@d) + 1, -1) (last day of month of date @d)
MDate(2000,1,1) (create a date quickly and easily w/o convert)
MDate(Year(@d)+1,1,1) (get first day of next year for date @d)

..etc... whatever you can do with VB's DateSerial() function, except for the year must be >=1800.

Or, does this exist already in SQL and I'm missing it somewhere??

Is there an easier/better way to do this?

* * * *

Create function MDate(@Year int, @Month int, @Day int)
returns datetime
AS
BEGIN
declare @d datetime;
set @d = dateadd(year,(@Year - 1800),'1/1/1800');
set @d = dateadd(month,@Month - 1,@d);
return dateadd(day,@Day - 1,@d)
END
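For what it's worth, the rollover behavior of DATEADD makes this quite flexible. A few sample calls against the function above:

```sql
SELECT dbo.MDate(2002, 12, 11)      -- 2002-12-11
SELECT dbo.MDate(2002, 12 + 1, 1)   -- months roll over: 2003-01-01
SELECT dbo.MDate(2002, 3, 0)        -- day 0 = last day of February: 2002-02-28
```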

- Jeff

Edited by - jsmith8858 on 12/12/2002 14:04:03

View 15 Replies View Related

Transact SQL :: Best Way To Make A Function For Summing Arbitrary Number Of Times Values

Jun 22, 2015

What is the best way to write a function that sums an arbitrary number of time values (a table parameter?)  I've read that it's necessary to convert to seconds, sum, then convert back, but I'm wondering if there's an alternative.

Here's the example I want to sum:
00:02:01:30
00:01:28:10
00:01:01:50
00:06:50:30
00:00:01:50
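A sketch of the convert-to-seconds approach, assuming the strings are fixed-width 'dd:hh:mm:ss' (the post doesn't state the format, so adjust the offsets and multipliers if the segments mean something else):

```sql
-- Sum durations by converting each 'dd:hh:mm:ss' string to seconds
DECLARE @t TABLE (Duration char(11) NOT NULL);
INSERT INTO @t VALUES ('00:02:01:30'), ('00:01:28:10'),
                      ('00:01:01:50'), ('00:06:50:30'), ('00:00:01:50');

SELECT SUM(  CAST(SUBSTRING(Duration,  1, 2) AS int) * 86400
           + CAST(SUBSTRING(Duration,  4, 2) AS int) * 3600
           + CAST(SUBSTRING(Duration,  7, 2) AS int) * 60
           + CAST(SUBSTRING(Duration, 10, 2) AS int)) AS TotalSeconds
FROM @t;
```

Formatting the total back into 'dd:hh:mm:ss' is then integer division and modulo on the summed seconds.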

View 8 Replies View Related

Make Sql Function To Modify Data Like Sql Procedure To Improve Code Structure And Readability.

Aug 27, 2007

Right now a SQL function cannot modify data the way a stored procedure can (as functions can in other databases), and in most cases that is a real nuisance.
In most cases a SQL function is used to improve code structure so the code can be maintained easily. But because a function cannot modify data, there are only two painful workarounds:

1. Turn everything in the call path, from the functions up to the procedure, into stored procedures. The coder will cry; the code becomes very
confusing and very difficult to maintain.

2. Split the SQL function into a thin CLR wrapper that calls a stored procedure. This requires a second connection, cannot take part in the same transaction context, and the code is ugly and slow.

You should not impose this limitation on SQL functions; you should only limit the contexts in which a function may be used.

SQL code is more difficult to read and maintain than normal code (C#), so improving code structure and readability should be one of your most important tasks, Microsoft!


View 6 Replies View Related

Most Efficient DataSource?

Aug 3, 2007

Hi there,
 I'm using a Repeater at the moment which is bound to a SqlDataSource. I expect heavy load on the website; should I choose another data source, and which one performs best?
I've read some things about the SqlDataAdapter and DataSet. Is that better for performance, and why?
What about LINQ?
Thanks a lot for any clarification.

View 3 Replies View Related

More Efficient Code

Dec 13, 2007

Hi all, I have the code listed below and feel that it could run much more efficiently.  I run this same code for Attrib2, Attrib3, Description, etc., for a total of 21 fields, so on each postback I am opening 21 separate connections; I have listed only 3 of them here to give the general idea.  I run similar code for update and for insert, so 21 times for each of those as well.  In fact, when someone adds a customer, after they hit the New Customer button it first runs 21 inserts of blanks (one per field), then runs 21 updates on the same records for anything they put in the fields.  This is running too slow.  Any ideas on how I can combine these?  We have 21 entries for EVERY customer.  Pf_property does not change (it is the same set of 21 entries); the only thing that changes is Pf_Value.
Try
    Dim queryString As String = "select Pf_Value from CustomerPOFlexField where [Pf_property] = 'Attrib1' and [Pf_CustomerNo] = @CustomerNo"
    Dim connection As New SqlClient.SqlConnection("connectionstring")
    Dim command As SqlClient.SqlCommand = New SqlClient.SqlCommand(queryString, connection)
    command.Parameters.AddWithValue("@CustomerNo", DropDownlist1.SelectedValue)
    Dim reader As SqlClient.SqlDataReader
    command.Connection.Open()
    reader = command.ExecuteReader
    reader.Read()
    TextBox2.Text = Convert.ToString(reader("Pf_Value"))
    command.Connection.Close()
Catch ex As SystemException
    Response.Write(ex.ToString)
End Try

Try
    Dim queryString As String = "select Pf_Value from CustomerPOFlexField where [Pf_property] = 'Attrib1Regex' and [Pf_CustomerNo] = @CustomerNo"
    Dim connection As New SqlClient.SqlConnection("connectionstring")
    Dim command As SqlClient.SqlCommand = New SqlClient.SqlCommand(queryString, connection)
    command.Parameters.AddWithValue("@CustomerNo", DropDownlist1.SelectedValue)
    Dim reader As SqlClient.SqlDataReader
    command.Connection.Open()
    reader = command.ExecuteReader
    reader.Read()
    TextBox5.Text = Convert.ToString(reader("Pf_Value"))
    command.Connection.Close()
Catch ex As SystemException
    Response.Write(ex.ToString)
End Try

Try
    Dim queryString As String = "select Pf_Value from CustomerPOFlexField where [Pf_property] = 'Attrib1ValMessage' and [Pf_CustomerNo] = @CustomerNo"
    Dim connection As New SqlClient.SqlConnection("connectionstring")
    Dim command As SqlClient.SqlCommand = New SqlClient.SqlCommand(queryString, connection)
    command.Parameters.AddWithValue("@CustomerNo", DropDownlist1.SelectedValue)
    Dim reader As SqlClient.SqlDataReader
    command.Connection.Open()
    reader = command.ExecuteReader
    reader.Read()
    TextBox6.Text = Convert.ToString(reader("Pf_Value"))
    command.Connection.Close()
Catch ex As SystemException
    Response.Write(ex.ToString)
End Try
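One way to collapse the 21 round trips into a single one, sketched in T-SQL: fetch every property for the customer in one query, then map rows to textboxes in code. Table and column names are taken from the post:

```sql
-- One query instead of 21: all flex-field values for one customer
SELECT Pf_property, Pf_Value
FROM CustomerPOFlexField
WHERE Pf_CustomerNo = @CustomerNo;
```

In the code-behind, loop the reader once and assign each row's Pf_Value to the right control based on Pf_property (a Select Case or a dictionary keyed by property name). The same idea works in reverse for the updates: one UPDATE per postback with a CASE, or a single parameterized batch, instead of 21 statements.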
 Thanks,
Randy

View 2 Replies View Related

Which Query Is More Efficient?

Apr 1, 2008

What's the difference, when using a SqlDataReader in code, between a query that retrieves 500 rows and 2 columns and one that retrieves 2 rows and 500 columns? 

View 6 Replies View Related

Most Efficient Way To Do This Select....

Feb 12, 2002

I have a table that has the following...

ID  Status  Type    Check_Num  Issued  IssueTime  Paid   PaidTime
-----------------------------------------------------------------
1   I       <null>  10         10.00   2/1/02
2   E       IDA     10         <null>  <null>     10.01  2/3/02
3   E       CAP     10         <null>  <null>     10.00  2/4/02
4   E       PNI     11         <null>  <null>     15.00  2/6/02


I want to return the Check_Num,Type, Paid, and Max(PaidTime) from this...

Example:
Check_Num Type Paid Time
---------------------------
10 CAP 10.00 2/4/02
11 PNI 15.00 2/6/02
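One way to get this, assuming the table is called Checks (hypothetical name) and PaidTime is NULL until a check is paid:

```sql
-- Latest paid row per check number
SELECT t.Check_Num, t.Type, t.Paid, t.PaidTime
FROM Checks t
JOIN (SELECT Check_Num, MAX(PaidTime) AS MaxPaidTime
      FROM Checks
      WHERE PaidTime IS NOT NULL
      GROUP BY Check_Num) m
  ON m.Check_Num = t.Check_Num
 AND m.MaxPaidTime = t.PaidTime;
```

The derived table finds the latest PaidTime per Check_Num; joining back picks up the Type and Paid amount from that row.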

Any assistance will be greatly appreciated.

Thanks,
Brian

View 1 Replies View Related

In-efficient SQL Code

Sep 7, 2000

Hey people

I'd be really grateful if someone can help me with this. Could someone explain the following:
If the following code is executed, it runs instantly:

declare @SellItemID numeric (8,0)
select @SellItemID = 5296979

SELECT distinct s.sell_itm_id
FROM stor_sell_itm s
WHERE (s.sell_itm_id = @SellItemID )

However, if I use this WHERE clause instead -

WHERE (@SellItemID = 0 OR s.sell_itm_id = @SellItemID)

- it takes 70 micro seconds. When I join a few more tables into the statement, the difference is 4 seconds!

This is an example of a technique I'm using in loads of places - I only want the statement to return all records if the filter is zero, otherwise the matching record only. I think that by using checking the value of the variable in the WHERE clause, a table scan is used instead of an index. This seems nonsensical since the variable is effectively a constant. Wrapping the entire select statement with an IF or CASE works, but when I've got 10 filters I'd have to 100 select statements.
I DON'T GET IT!! There must be a simple answer, HELP!!
Jo

PS this problem seems to occur both in 6.5 and 7.0
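One workaround (on 7.0+, where sp_executesql is available) that keeps index seeks without an explosion of IF branches: build only the predicates that are actually set, as parameterized dynamic SQL. A sketch, assuming each filter uses 0 to mean "no filter":

```sql
-- Build the WHERE clause only from the filters that are actually set
DECLARE @sql nvarchar(2000)
SET @sql = N'SELECT DISTINCT s.sell_itm_id FROM stor_sell_itm s WHERE 1 = 1'
IF @SellItemID <> 0
    SET @sql = @sql + N' AND s.sell_itm_id = @SellItemID'
-- ...append one AND per additional nonzero filter...
EXEC sp_executesql @sql, N'@SellItemID numeric(8,0)', @SellItemID
```

Because the optimizer only ever sees simple equality predicates, it can use the indexes, and with 10 filters you still write 10 IF blocks rather than hundreds of SELECT statements.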

View 4 Replies View Related

Need Efficient Query

Jun 12, 2008

This query is giving me a very slow search. What would be a more efficient way to write it?


SELECT (
SELECT COUNT(applicationID) FROM Vw_rptBranchOffice
WHERE ( statusDate between '2008-03-13 16:12:11.513' AND '2008-05-30 00:00:00.000'
AND SearchString like '%del%')) AS
totalNO,ApplicationID,SearchString,StudentName,IntakeID,CounslrStatusDate
FROM Vw_rptBranchOffice
WHERE statusDate between '2008-03-13 16:12:11.513' AND '2008-05-30 00:00:00.000' AND
SearchString like '%del%'
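Two things stand out: the filtered set is evaluated twice (once for the count, once for the rows), and the leading wildcard in LIKE '%del%' prevents any index seek on SearchString. The double evaluation, at least, can be removed on SQL Server 2005+ with a windowed count; a sketch:

```sql
-- Evaluate the WHERE clause once; COUNT(*) OVER () repeats the total on every row
SELECT COUNT(*) OVER () AS totalNO,
       ApplicationID, SearchString, StudentName, IntakeID, CounslrStatusDate
FROM Vw_rptBranchOffice
WHERE statusDate BETWEEN '2008-03-13 16:12:11.513' AND '2008-05-30 00:00:00.000'
  AND SearchString LIKE '%del%';
```

If the leading wildcard is really required, full-text indexing is the usual way out of the scan.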

View 5 Replies View Related

More Efficient Way Than You Have Published

Dec 8, 2004

Sumanesh writes "Recently I read one article “Converting Multiple Rows into a CSV string”

(http://www.sqlteam.com/item.asp?ItemID=256)

I have found one easy and more efficient way to achieve the same thing.
First I have a table called test with 2 columns (ID and Data)
And some data inserted into it.

CREATE FUNCTION [dbo].CombineData(@ID SMALLINT)
RETURNS VARCHAR(2000)
AS
BEGIN
DECLARE @Data VARCHAR(2000)

SET @Data = ''
SELECT @Data = @Data + Data + ',' FROM Test WHERE ID = @ID
RETURN LEFT(@Data,LEN(@Data)-1)

END
GO

Then used a simple select query to achieve the same

SELECT DISTINCT ID, [dbo].CombineData(ID) FROM Test

Which achieved the same thing without using any temporary tables and Cursors. I am sure that you will accept that this is more efficient than the one that has been published.


Thanks
Sumanesh"

View 1 Replies View Related

Efficient SQL Backup?

Apr 19, 2006

Hi all, I am having efficiency issues backing up data from one SQL database to another. The two servers in question are on different networks, behind different firewalls. We have MS SQL 2000.

On the source server I run a job with the following steps:
1. Take a transaction log backup every 4 hours
2. FTP it to the remote server
3. If the FTP fails, disable the whole job

On the target server I run a job which does the following:
1. Restore the transaction log backup with NORECOVERY.

If the job fails at the target, I have to go through the whole process of doing a complete backup of the source, restoring it at the other end, and then starting transaction log backups again. Also, if we do a failover to the target server, then when we roll back to the source server we have to take a backup of the target and restore it on the source server. Is there a more efficient way of doing this?

View 3 Replies View Related

More Efficient - Exists Or In

Jul 20, 2005

Which is more efficient:

Select * from table1 where id in (select id from table2)

or

Select * from table1 where exists (select * from table2 where table2.id = table1.id)

View 8 Replies View Related

Efficient Select

Jul 20, 2005

It seems I should be able to do one select and both return that recordset and set a variable from that recordset, e.g.:

Declare @refid int
Select t.* from mytable as t  --return the recordset
Set @refid = t.refid

The above doesn't work - how can I do it without making a second trip to the database?

Thanks,
Rick

View 3 Replies View Related

Efficient Way Than IN Statement

Aug 21, 2006

I have two tables JDECurrencyRates and JDE Currency Conversion

I want to insert all the records from JDECurrencyRates into JDECurrencyConversion that do not already exist in the JDECurrencyConversion table. For matching I am to use three keys, i.e. FromCurrency, ToCurrency and EffDate.

To achieve this task i wrote the following query

INSERT INTO PresentationEurope.dbo.JDECurrencyConversion(Date, FromCurrency, FromCurrencyDesc,
ToCurrency, ToCurrencyDesc, EffDate, FromExchRate, ToExchRate, CreationDatetime, ChangeDatetime)
(SELECT effdate as date, FromCurrency, FromCurrencyDesc, ToCurrency, ToCurrencyDesc, EffDate,
FromExchRate, ToExchRate, GETDATE(), GETDATE() FROM MAINTENANCE.DBO.JDECURRENCYRATES
WHERE FROMCURRENCY NOT IN (SELECT FromCurrency FROM PRESENTATIONEUROPE.DBO.JDECURRENCYCONVERSION)
OR TOCURRENCY NOT IN (SELECT TOCURRENCY FROM PRESENTATIONEUROPE.DBO.JDECURRENCYCONVERSION)
OR EFFDATE NOT IN (SELECT EFFDATE FROM PRESENTATIONEUROPE.DBO.JDECURRENCYCONVERSION))



Can anyone suggest a better way to accomplish this task, or is this query OK (efficient enough)?
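One caution about the query as written: three independent NOT IN tests admit rows where, say, the FromCurrency exists in one target row and the EffDate in a different one. If the real key is the (FromCurrency, ToCurrency, EffDate) combination, NOT EXISTS matches it as a unit. A sketch using the names from the post:

```sql
-- Insert only rows whose composite key is absent from the target
INSERT INTO PresentationEurope.dbo.JDECurrencyConversion
    (Date, FromCurrency, FromCurrencyDesc, ToCurrency, ToCurrencyDesc,
     EffDate, FromExchRate, ToExchRate, CreationDatetime, ChangeDatetime)
SELECT r.EffDate, r.FromCurrency, r.FromCurrencyDesc, r.ToCurrency,
       r.ToCurrencyDesc, r.EffDate, r.FromExchRate, r.ToExchRate,
       GETDATE(), GETDATE()
FROM Maintenance.dbo.JDECurrencyRates r
WHERE NOT EXISTS (SELECT 1
                  FROM PresentationEurope.dbo.JDECurrencyConversion c
                  WHERE c.FromCurrency = r.FromCurrency
                    AND c.ToCurrency   = r.ToCurrency
                    AND c.EffDate      = r.EffDate);
```

An index on the target's (FromCurrency, ToCurrency, EffDate) makes the existence check cheap.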

View 5 Replies View Related

Most Efficient Way Of Returning Data

Feb 22, 2007

I have a number of tables, and I need to create a summary of the data.

Now I have two choices. I could create a stored procedure that creates a temp table with the filters applied. This would require 12 selects for each year (we have 3 years of data so far). This means 36 selects so far that fill a temp table.

The second choice is to create a table with the required columns and rows. As I am essentially cross-joining 4 tables, the table would have about 10 x 10 x 12 x A rows, where A is a variable that will grow quickly. I can then keep this table up to date using triggers on each insert. Then all I need to do is use an aggregate function with the relevant filter.

My question is, which is more efficient: creating a stored procedure that builds the table dynamically, or keeping a table with thousands of rows and using an aggregate function on it?

PS I am using SQL Server 2000
Jag

View 1 Replies View Related

Is There A More Efficient Way To Perform This Process?

Dec 11, 2007

My e-commerce site is currently running the following process when items are shipped from the manufacturer:

1. The manufacturer sends a 2-column CSV to the retailer containing an Order Number and its Shipping Tracking Number.
2. Through the web admin panel I've built, a retail staff member uploads the CSV to the server.
3. In code, the CSV is parsed. The tracking number is saved to the database attached to the Order Number.
4. After a tracking # is saved, each item in that order has its status updated to Shipped.
5. The customer is sent an email using ASPEmail from Persits which contains their tracking #.

The process works without a problem so long as the CSV contains roughly 50 tracking #'s or so. The retailer has gotten insanely busy and wants to upload 3 or 4 thousand tracking #'s in a single CSV, but the process times out, even with large server timeout values set in code. Is there a way to streamline the process to make this work more efficiently? I can provide the code if that helps.
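A common pattern for this is to stop doing per-row work from application code: bulk-load the CSV into a staging table, then update orders and items in two set-based statements. All table, column, and file names here are hypothetical:

```sql
-- Stage the whole CSV, then apply it with set-based updates
BULK INSERT dbo.TrackingStage
FROM 'C:\uploads\tracking.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

UPDATE o
SET o.TrackingNumber = s.TrackingNumber
FROM dbo.Orders o
JOIN dbo.TrackingStage s ON s.OrderNumber = o.OrderNumber;

UPDATE i
SET i.Status = 'Shipped'
FROM dbo.OrderItems i
JOIN dbo.TrackingStage s ON s.OrderNumber = i.OrderNumber;
```

The notification emails can then be generated from the staged set in batches rather than one at a time inside the upload request.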

View 5 Replies View Related

Date Efficient Query ??

Dec 11, 2007

I have a table that has a date and time column. I need to search the table by day of week, and will eventually need to search by hour as well.
 My question: will performance improve if I create two additional columns, one stating the "Day of Week" and the other the "Hour of Day"? These would be prepopulated with numerical values, e.g. Sunday 1, Monday 2, ... Saturday 7, and for the time, 1 for 1pm-1:59pm, 2 for 2pm-2:59pm, 3 for 3pm-3:59pm, and so on.
 The total number of rows in the table could reach half a million; filtered by day of week that might be reduced to 80,000 or so.
 Is adding these two numeric columns, with indexes on them, a good and efficient solution? Or is it just as good to use DATEPART against the actual date column with the day-of-week and hour parameters as the case may be?
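A middle ground on SQL Server 2005+ is persisted computed columns, which can be indexed without the application having to populate anything. Note that DATEPART(weekday, ...) depends on the session's DATEFIRST setting and so cannot be persisted, which is why the sketch derives the weekday with DATEDIFF instead; the table name is hypothetical:

```sql
-- Computed once at write time; both columns are indexable
ALTER TABLE dbo.EventLog ADD
    -- 0 = Monday ... 6 = Sunday; deterministic, unlike DATEPART(weekday, ...)
    DayOfWeek AS (DATEDIFF(day, 0, EventDate) % 7) PERSISTED,
    HourOfDay AS DATEPART(hour, EventDate) PERSISTED;

CREATE INDEX IX_EventLog_DayHour ON dbo.EventLog (DayOfWeek, HourOfDay);
```

Queries filtering on DayOfWeek/HourOfDay can then seek the index, while nothing in the insert path has to compute the values.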
Thanks for your help. 
 
Thanks
 

View 3 Replies View Related

More Efficient Updates Of SQL Server

Feb 6, 2001

If I have 5000 rows to update with different values based upon the primary key, is there a better way to do it than 5000 separate update statements?
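Generally yes: load the key/value pairs into a temp or staging table (via bcp, BULK INSERT, or a generated batch of inserts) and apply them with one joined UPDATE. A sketch with hypothetical names:

```sql
-- Stage the 5000 new values, then one set-based UPDATE instead of 5000 statements
CREATE TABLE #NewValues (ID int PRIMARY KEY, NewValue varchar(50) NOT NULL);
-- ...bulk load #NewValues here...

UPDATE t
SET t.SomeColumn = n.NewValue
FROM dbo.TargetTable t
JOIN #NewValues n ON n.ID = t.ID;
```

One statement means one plan compilation and far fewer round trips and log-write boundaries than 5000 singleton updates.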

View 2 Replies View Related

More Efficient Date Conversion

Sep 25, 2000

I have a working function that converts GETDATE() to an 8-character string. I have tried other ways, but this one works OK. However, the more I look at it, the more I think a more efficient way has to exist. Any ideas greatly appreciated. Here is my approach:

declare @order_date char(8), @year char(4), @month char(2), @day char(2)
set @year = cast(datepart(yyyy,getdate()) as char(4))
if datepart(dd,getdate())<10 set @day = '0'+cast(datepart(dd,getdate()) as char(2)) else set @day = cast(datepart(dd,getdate()) as char(2))
if datepart(mm,getdate())<10 set @month = '0'+cast(datepart(mm,getdate()) as char(2)) else set @month = cast(datepart(mm,getdate()) as char(2))
set @order_date = @year + @month + @day
select @order_date
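The built-in CONVERT style 112 produces exactly this yyyymmdd string, so the whole routine collapses to one line:

```sql
-- Style 112 is the ISO yyyymmdd format
SELECT CONVERT(char(8), GETDATE(), 112) AS order_date
```

No DATEPART calls or zero-padding logic needed.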

View 1 Replies View Related

Efficient JOIN Query

Mar 15, 2006

Please help me with the efficient JOIN query to bring the below result :


create table pk1(col1 int)

create table pk2(col1 int)

create table pk3(col1 int)

create table fk(col1 int, col2 int NOT NULL, col3 int, col4 int)


insert into pk1 values(1)
insert into pk1 values(2)
insert into pk1 values(3)

insert into pk2 values(1)
insert into pk2 values(2)
insert into pk2 values(3)

insert into pk3 values(1)
insert into pk3 values(2)
insert into pk3 values(3)

insert into fk values(1, 1, null, 10)
insert into fk values(null, 1, 1, 20)
insert into fk values(1, 1,null, 30)
insert into fk values(1, 1, null, 40)
insert into fk values(1, 1, 1, 70)
insert into fk values(2, 3, 1, 60)
insert into fk values(1, 1, 1, 100)
insert into fk values(2, 2, 3, 80)
insert into fk values(null, 1, 2, 50)
insert into fk values(null, 1, 4, 150)
insert into fk values(5, 1, 2, 250)
insert into fk values(6, 7, 8, 350)
insert into fk values(10, 1, null, 450)

Below query will give the result :

select fk.* from fk inner join pk1 on pk1.col1 = fk.col1 inner join pk2 on pk2.col1 = fk.col2 inner join pk3 on pk3.col1 = fk.col3

Result :
+------+------+------+------+
| col1 | col2 | col3 | col4 |
+------+------+------+------+
| 1 | 1 | 1 | 70 |
| 2 | 3 | 1 | 60 |
| 1 | 1 | 1 | 100 |
| 2 | 2 | 3 | 80 |
+------+------+------+------+

But I require also the NULL values in col1 and col3

Hence doing the below :

select distinct fk.* from fk inner join pk1 on pk1.col1 = fk.col1 or fk.col1 is null inner join pk2 on pk2.col1 = fk.col2 inner join pk3 on pk3.col1 = fk.col3 or fk.col3 is null

+------+------+------+------+
| col1 | col2 | col3 | col4 |
+------+------+------+------+
| null | 1 | 1 | 20 |
| null | 1 | 2 | 50 |
| 1 | 1 | null | 10 |
| 1 | 1 | null | 30 |
| 1 | 1 | null | 40 |
| 1 | 1 | 1 | 70 |
| 1 | 1 | 1 | 100 |
| 2 | 2 | 3 | 80 |
| 2 | 3 | 1 | 60 |
+------+------+------+------+

The above is the required output, but the query will be very slow if there are many NULL-valued rows in col1 and col3, since the 'IS NULL' checks in the JOINs force me to also use DISTINCT.

Please let me know if there is an alternative to this query which can return the same result set in a more efficient manner.
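One alternative that avoids both DISTINCT and the OR'd join conditions: inner-join only on the NOT NULL column (col2), LEFT JOIN the nullable ones, and accept a row when the column is NULL or its lookup matched. Against the sample data above this returns the nine required rows with no duplicates (each pk table holds unique values, so each LEFT JOIN matches at most one row); still worth checking the plan at real volumes:

```sql
-- NULL in col1/col3 passes; otherwise the value must exist in its lookup table
SELECT fk.*
FROM fk
JOIN pk2       ON pk2.col1 = fk.col2
LEFT JOIN pk1  ON pk1.col1 = fk.col1
LEFT JOIN pk3  ON pk3.col1 = fk.col3
WHERE (fk.col1 IS NULL OR pk1.col1 IS NOT NULL)
  AND (fk.col3 IS NULL OR pk3.col1 IS NOT NULL);
```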

View 2 Replies View Related

Most Efficient FIFO Logging.

Jul 14, 2006

My buddy has an application that logs entries, and for space reasons needs to retain at most N records in the log table. He wants to delete old log entries as part of the insert procedure. Here is his stab at it:

CREATE PROCEDURE spInserttblLog
@Message varchar(1024),
@LogDate datetime,
@ElapsedSeconds float,
@LogLevel varchar(50),
@UserName varchar(100),
@ProcessName varchar(100),
@MachineName varchar(100),
@MaxEntries int=0
AS
declare @SQL varchar(300)
if @MaxEntries > 0
Begin
set @MaxEntries = @MaxEntries -1
set @SQL='Delete From tblLog Where LogID Not In (Select Top ' + cast(@MaxEntries as varchar) + ' LogID from tblLog order by LogID desc)'
execute (@SQL)
End

insert into tblLog
(Message,
LogDate,
ElapsedSeconds,
LogLevel,
UserName,
ProcessName,
MachineName)
values(@Message,
@LogDate,
@ElapsedSeconds,
@LogLevel,
@UserName,
@ProcessName,
@MachineName)


I think this would be more efficient:

--SQL Server
CREATE PROCEDURE spInserttblLog
@Message varchar(1024),
@LogDate datetime,
@ElapsedSeconds float,
@LogLevel varchar(50),
@UserName varchar(100),
@ProcessName varchar(100),
@MachineName varchar(100),
@MaxEntries int=0
AS

delete
from tblLog
where LogID < (select max(LogID) from tblLog) - @MaxEntries - 2
and @MaxEntries > 0

insert into tblLog
(Message,
LogDate,
ElapsedSeconds,
LogLevel,
UserName,
ProcessName,
MachineName)
values(@Message,
@LogDate,
@ElapsedSeconds,
@LogLevel,
@UserName,
@ProcessName,
@MachineName)

Comments, or suggestions for an even faster method? This is a relatively high-activity table, so there are a lot of inserts.

View 4 Replies View Related

Efficient Way Of Writing This Query

Mar 31, 2008

I am using the same table many times in LEFT OUTER JOINs and INNER JOINs for various conditions.
Is there any way of writing the query with minimal table usage, instead of repeating the same scans each time?

**********************************
SELECT
blog.blogid,
BM.TITLE,
U.USER_FIRSTNAME+ ' ' + U.USER_LASTNAME AS AUTHORNAME,
Blog_Entries = (cASE WHEN Blog_Entries is NULL or Blog_Entries = ' ' then 0 else Blog_Entries END),
Blog_NewEntries = (cASE WHEN Blog_NewEntries is NULL or Blog_NewEntries = ' ' then 0 else Blog_NewEntries END),
Blog_comments = (cASE WHEN Blog_comments is NULL or Blog_comments = ' ' then 0 else Blog_comments END),
dbo.DateFloor(VCOM.objCreationDate) AS CreationDate,
dbo.DateFloor(BLE.entryDate) AS Date_LastEntry
FROM vportal4VSEARCHCOMM.dbo.blog_metaData BM
INNER JOIN vportal4VSEARCHCOMM.dbo.blog BLOG
ON BM.BLOGID = BLOG.BLOGID
INNER JOIN vportal4VSEARCH.dbo.[USER] U
ON U.USER_ID = BLOG.OWNERID
INNER JOIN vportal4VSEARCHCOMM.dbo.vComm_obj VCOM
ON BLOG.vCommObjID = VCOM.vCommObjId
INNER JOIN vportal4VSEARCHCOMM.dbo.blog_entry BLE
ON BLOG.BLOGID = BLE.BLOGID
LEFT OUTER JOIN
(SELECT BlogID, Blog_Entries = COUNT(*)
FROM vportal4VSEARCHCOMM.dbo.Blog_Entry
GROUP BY BlogID )B on B.BLOGID = BM.BLOGID
LEFT OUTER JOIN
(
SELECT BlogID, Blog_NewEntries = COUNT(*)
FROM vportal4VSEARCHCOMM.dbo.Blog_Entry
WHERE ENTRYDATE > '01/01/2008'
GROUP BY BlogID )C on C.BLOGID = BM.BLOGID
LEFT OUTER JOIN
(
SELECT BEN.BLOGID, Blog_comments = COUNT(*)
FROM vportal4VSEARCHCOMM.dbo.blog_comment BC
INNER JOIN vportal4VSEARCHCOMM.dbo.blog_entry BEN
ON BEN.blog_entryId = BC.blogEntryId
GROUP BY BEN.BLOGID )D on D.BLOGID = BM.BLOGID
WHERE VCOM.objName like '%blog%'
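The three derived tables each scan Blog_Entry (or blog_comment) separately. The two entry counts, at least, can come from a single pass using conditional aggregation, e.g.:

```sql
-- One scan of Blog_Entry yields both the total and the 'new since 2008' count
SELECT BlogID,
       COUNT(*) AS Blog_Entries,
       SUM(CASE WHEN EntryDate > '20080101' THEN 1 ELSE 0 END) AS Blog_NewEntries
FROM vportal4VSEARCHCOMM.dbo.Blog_Entry
GROUP BY BlogID
```

This replaces the B and C derived tables with one LEFT OUTER JOIN, cutting one full scan of Blog_Entry from the plan.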



thanks

View 5 Replies View Related

More Efficient Than LEFT JOIN

Feb 15, 2006

I have a table with data that is refreshed regularly, but I still need to store the old data. I have created a separate table with a foreign key to the main table and the date on which each row was replaced. I'm looking for an efficient way to select only the active data.

Currently I use:

SELECT ...
FROM DataTable AS D
LEFT OUTER JOIN InactiveTable AS I ON I.Key = D.Key
WHERE I.Key IS NULL

However I am not convinced that this is the most efficient, or the most intuitive, method of achieving this. Can anyone suggest a more efficient way of getting this information please?

Many thanks.

View 3 Replies View Related

Most Efficient Way To Run Update Query

Jun 9, 2006

Hi all,

Any thoughts on the best way to run an update query to update a specific list of records where all records get updated to the same thing? I would think a temp table to hold the list would be best, but I am also looking at what is easiest for an end user to run. The list of items is over 7000.

Example:

update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '001-LBK'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '001-LYE'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '001-XLBK'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '001-XLYE'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '002-LGR'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '002-LRE'

All records get set to the same values. I tried using an IN list but this was significantly slower:

update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS'
where item_no in ('001-LBK','001-LYE','001-XLBK','001-XLYE','002-LGR','002-LRE')

Thanks
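Since the 7000-item IN list was slow, a staged join is the usual alternative; a sketch using the table and columns from the post (the temp-table name is made up):

```sql
-- Load the item list once, then a single joined UPDATE
CREATE TABLE #ItemList (item_no varchar(30) NOT NULL PRIMARY KEY);
-- ...BULK INSERT or INSERT the 7000 item numbers here...

UPDATE i
SET activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS'
FROM imitmidx_sql i
JOIN #ItemList l ON l.item_no = i.item_no;
```

The PRIMARY KEY on the temp table gives the optimizer a sorted, unique input to join against, which typically beats both 7000 singleton updates and a giant IN list.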

View 3 Replies View Related

Please Help Obi-Wan: Efficient Join/Cursor/Something/Anything?

Jul 20, 2005

Hi all

I have a bit of a dilemma that I am hoping some of you smart dudes might be able to help me with.

1. I have a table with about 50 million records in it and quite a few columns. [Table A]
2. I have another table with just over 300 records in it and a single column (besides the id). [Table B]
3. I want to: select all of those records from Table A where [Table A].description does NOT contain any of (select color from [Table B]).
4. An example:

Table A
id  description
1   the green hornet
2   a red ball
3   a green dog
4   the yellow submarine
5   the pink panther

Table B
id  color
55  blue
56  gold
57  green
58  purple
59  pink
60  white

So I want to select all those rows in Table A where none of the words from Table B.color appear in the description field in Table A. I.e. the query would return the following from Table A:

2   a red ball
4   the yellow submarine

The real-life problem has more variables and is a little more complicated than this, but this should suffice to give you the right idea. Due to the number of rows involved I need this to be reasonably efficient. Can someone suggest the most efficient way to proceed?

PS. Please excuse my ignorance.
Cheers
Sean
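The requirement above is an anti-join with a substring match; a minimal sketch, assuming the toy TableA/TableB schema from the example:

```sql
-- Keep only TableA rows whose description contains none of the colors.
-- The NOT EXISTS correlates each description against all ~300 colors.
SELECT a.id, a.description
FROM TableA AS a
WHERE NOT EXISTS (
    SELECT 1
    FROM TableB AS b
    WHERE a.description LIKE '%' + b.color + '%'
);
```

One caveat: a leading-wildcard LIKE cannot use a normal index, so this scans all 50 million descriptions (300 comparisons per row at worst). If that proves too slow, a full-text index on the description column, queried with CONTAINS, is the usual escape hatch.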

View 3 Replies View Related

Efficient Way Of Backing Up Data

Jan 24, 2008

We are writing an in-house backup tool. It's written in C#, and the program allows me to select which records we want to archive and remove from the existing database. Functionally, we can achieve what we want, but I feel that we did not do it right. Can someone show me the right way to go? Here is what we are doing in the backup process.

1) Create an archive database (with the file name, say archive_101.mdf)
2) Move the data from the original database to the archive database using the following SQL statement:
INSERT INTO [archive_database].dbo.records SELECT a, b, c, date FROM [original_database] WHERE date < CONVERT(DATETIME, '2007-10-10', 120)
3) Delete the moved data from the original database.
4) Of course, steps 2 and 3 are done in a transaction.
5) Then we detach the archive database and store the file "archive_101.mdf" somewhere.

I feel it's wrong because the performance is very bad. If I run the SELECT statement on its own, it takes about 1 minute to dump all the data to the console. If I run the same SELECT statement together with the INSERT INTO, as in step 2, it takes more than 1 hour to write to the other database. I checked tempdb and its size grows a lot during step 2. I know the data I am selecting is about 500 MB, but still, it should not take that long. Can somebody give me any hints?
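One common cause of this slowdown is that a single 500 MB INSERT...SELECT plus DELETE runs as one huge transaction, inflating the log and tempdb. A sketch of a batched move, assuming the columns from step 2; the #moved column types and the 10000-row batch size are placeholders to adjust:

```sql
-- Staging table for each batch; match the types to the real schema.
CREATE TABLE #moved (a INT, b INT, c INT, [date] DATETIME);

WHILE 1 = 1
BEGIN
    BEGIN TRANSACTION;

    -- DELETE ... OUTPUT removes a batch and captures the removed rows
    -- in one statement, so nothing is deleted without being captured.
    DELETE TOP (10000)
    FROM [original_database].dbo.records
    OUTPUT deleted.a, deleted.b, deleted.c, deleted.[date]
        INTO #moved (a, b, c, [date])
    WHERE [date] < CONVERT(DATETIME, '2007-10-10', 120);

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION;
        BREAK;
    END;

    INSERT INTO [archive_database].dbo.records (a, b, c, [date])
    SELECT a, b, c, [date] FROM #moved;

    TRUNCATE TABLE #moved;

    COMMIT TRANSACTION;
END;

DROP TABLE #moved;
```

Each iteration is a short transaction, so the log can truncate between batches and the move no longer balloons tempdb the way one monolithic statement does.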

Thanks in advance.

View 5 Replies View Related

Which Is More Efficient To Use: Joins Or SubQuery

Aug 28, 2007
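The question body did not survive the archive, but the usual comparison behind this title is an EXISTS/IN subquery versus an equivalent join. A minimal sketch on hypothetical Customers/Orders tables (names are illustrative, not from the post):

```sql
-- Find customers who have placed at least one order, two ways.

-- Subquery form: a semi-join, stops probing after the first match.
SELECT c.CustomerID, c.CustomerName
FROM Customers AS c
WHERE EXISTS (
    SELECT 1 FROM Orders AS o WHERE o.CustomerID = c.CustomerID
);

-- Join form: DISTINCT is needed because one customer may have many orders.
SELECT DISTINCT c.CustomerID, c.CustomerName
FROM Customers AS c
INNER JOIN Orders AS o
    ON o.CustomerID = c.CustomerID;
```

On SQL Server 2005 the optimizer frequently rewrites these into the same plan; when they differ, the EXISTS form tends to win for existence checks because it avoids the duplicate-elimination step.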

View 6 Replies View Related

What Is Most Efficient Way To Use DataAdapters With Large SQL Tables

Jan 9, 2008

I'm using DataAdapters with my SQL database, with the intention that all the SELECT, UPDATE, INSERT and DELETE commands be automatically generated. One table is huge, so I'm wondering: is it more efficient to use "SELECT TOP(1) * FROM hugetable" instead of "SELECT * FROM hugetable" in order to facilitate the generation of commands? I hope this isn't too confusing.

Thanks,
Geoff
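If the SELECT only exists so the command builder can read the table's schema, a query returning zero rows is enough; a sketch using the table name from the post (the WHERE 1 = 0 trick is a common schema-only idiom, not something the post confirms fits this app):

```sql
-- Both return an empty result set but still expose full column
-- metadata, which is all an automatic command generator needs.
SELECT * FROM hugetable WHERE 1 = 0;

-- SQL Server 2005 equivalent using TOP:
SELECT TOP (0) * FROM hugetable;
```

Either beats TOP(1), which still reads and transfers a row; the real data can then be fetched separately with a filtered SELECT rather than pulling the whole huge table through the adapter.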

View 2 Replies View Related







Copyrights 2005-15 www.BigResource.com, All rights reserved