Query Optimization Having 10 Million Rows

Feb 27, 2015

I have the following table:

table name: emp_master

columns: empid, efname, emname, elamane, efathername, emothername, deptno, edob, edoj, createdby, updateby, lastupdatedatetime, lastactionperformed

empid is the primary key.

This table contains 20 million records, and I want to run the following query against it to get all data for employees who have been with the company for more than 10 years:

select empid, efname, emname, elamane, efathername, emothername, deptno, edob, edoj, createdby, updateby, lastupdatedatetime, lastactionperformed
from emp_master
where year(edoj) + 10 > year(getdate())

This will return approximately 10 million rows, and it is taking 18 minutes. What approaches should I take to tune this query and reduce the execution time?
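A hedged tuning sketch: applying YEAR() to the column makes the predicate non-sargable, so no index on edoj can be seeked. Moving the arithmetic to the constant side fixes that; note that the predicate as written matches employees who joined within the last 10 years, so assuming the stated intent (more than 10 years of service) the comparison also flips:

CREATE INDEX IX_emp_master_edoj ON emp_master (edoj)

SELECT empid, efname, emname, elamane, efathername, emothername, deptno, edob, edoj, createdby, updateby, lastupdatedatetime, lastactionperformed
FROM emp_master
WHERE edoj <= DATEADD(YEAR, -10, GETDATE())

Even so, when roughly half of a 20-million-row table qualifies, the optimizer will usually still scan, and much of the 18 minutes is likely just the transfer of 10 million wide rows to the client; trimming the column list and paging the results can matter more than any index here.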

View 6 Replies



Transact SQL :: Query To Update A Table With More Than 150 Million Rows Of Data?

Sep 17, 2015

I have been tasked with writing an update query to update a table with more than 150 million rows of data. Here are the table structures:

Source Tables :

OC
CREATE TABLE [dbo].[OC](
[OC] [nvarchar](255) NULL,
[DATE DEBUT] [date] NULL,
[DATE FIN] [date] NULL,
[Code Article] [nvarchar](255) NULL,
[INSERTION] [nvarchar](255) NULL,

[Code] ....

The update requirement is as follows:

DECLARE @Counter INT=0 --This causes the @@rowcount to be > 0
while @@rowcount>0
BEGIN
    SET rowcount 10000
    update r
    set Comp=t.Comp

[Code] ....

The update ran for more than 48 hours and didn't terminate. How can I accelerate it?
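A hedged batching sketch, with table and column names assumed from the truncated snippet above (dbo.R, dbo.T, Comp, KeyCol): UPDATE TOP is the supported replacement for SET ROWCOUNT (which is deprecated for INSERT/UPDATE/DELETE), and excluding already-correct rows lets the loop terminate instead of reprocessing the same pages:

DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (10000) r
    SET r.Comp = t.Comp
    FROM dbo.R AS r
    JOIN dbo.T AS t ON t.KeyCol = r.KeyCol
    WHERE r.Comp <> t.Comp OR (r.Comp IS NULL AND t.Comp IS NOT NULL);   -- skip rows already done

    SET @rows = @@ROWCOUNT;
END

Each batch commits on its own, so the log can truncate between batches (given log backups in FULL recovery) and a failure only rolls back the current 10,000 rows. Indexes on the join keys of both tables are what ultimately decide whether 48 hours becomes minutes per batch.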

View 6 Replies View Related

SQL Server 2012 :: Update Statement Will Not Update Data Beyond 7 Million Plus Rows Out Of 38 Millions Rows

Dec 12, 2014

I ran the following statement and it will not update beyond 7 million-plus rows, out of the roughly 38 million I have to complete. I keep checking the updated row counts, and after half a day they are still the same, so I know something is wrong, because the update was rolling through with no problem when I initiated it. I need to complete this ASAP, which is adding to my frustration. (FYI: the 'Acct_Num_CH' field is an encrypted field.)

SET rowcount 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
WHILE @@ROWCOUNT > 0
BEGIN
SET rowcount 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
END
SET rowcount 0
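A hedged diagnosis of the code above: rows that have already been updated still satisfy [Acct_Num_CH] IS NOT NULL, so every pass re-updates the same first 10,000 rows the scan finds, @@ROWCOUNT never drops to 0, and the loop spins without advancing. Excluding rows that already hold the target value lets the batches progress and the loop end:

SET rowcount 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
AND [Acct_Num_CH] <> 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHILE @@ROWCOUNT > 0
BEGIN
SET rowcount 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
AND [Acct_Num_CH] <> 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
END
SET rowcount 0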

View 5 Replies View Related

500 Million Rows Of Data?

Apr 9, 2008

I'm new to using a DB and have a few questions about what I'm trying to do. I have some historical options data and want to place it into a SQL Express database. (I understand I might need to use a non-Express version once the DB gets too big.) A month's worth of data is over 5.5 million rows, so six years' worth is roughly 400 million rows. Is it possible to put this into a SQL database and still be able to search it very quickly? I have a month's worth in a database now and it is pretty slow. Should I use a new table for each month, ending up with 6 years x 12 months = 72 tables, to increase the search speed? I search by date and stock_symbol, and the data looks like this:
 Date, Stock_Symbol, Option_Symbol, Strike, BidPrice, AskPrice, Volume, OpenInterest, (and a few others)
The select statement is simple: SELECT * FROM Options WHERE Date = @Date and StockSymbol = @Symbol
Thanks
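A hedged sketch of the usual alternative to 72 monthly tables: keep one table and give it a clustered index that matches the search predicate (names assumed):

CREATE CLUSTERED INDEX IX_Options_Date_Symbol ON dbo.Options ([Date], Stock_Symbol)

With that in place the query above becomes a narrow range seek that touches only the matching rows, so total table size matters far less; splitting into many tables mostly adds maintenance without that benefit. The Express edition's database size cap is the more likely reason to move up an edition.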

View 4 Replies View Related

Copying 4 Million Rows Everyday

Mar 21, 2000

In our database we have a very large table that gets updated every morning. The start-of-day job copies 4 million rows in the fact table from the previous date to today's date, then runs some other processing; it takes 1.5 to 2 hours. A DTS package copies the rows into a temp table and from there into the fact table.

The table has more than 200 million rows.

Any ideas on how to accomplish this without copying the data twice, and without running into locking problems?

Thanks for any suggestions.
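A hedged sketch, assuming a date column on the fact table (names are placeholders): a single INSERT ... SELECT copies yesterday's rows to today's date in one pass, avoiding staging the 4 million rows in a temp table first:

INSERT INTO dbo.FactTable (FactDate, Col1, Col2)
SELECT @Today, Col1, Col2
FROM dbo.FactTable
WHERE FactDate = @Yesterday

Since the table is only read for the old date and written for the new one, running it before users arrive, or in moderate batches, keeps blocking short.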

View 5 Replies View Related

Deleting 8 Million Rows From Database

Jul 16, 2013

I am deleting 8 million rows from my database, and I am wondering how to control the transaction log. I have also heard something about row locks versus table locks.
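A hedged sketch: deleting in small batches keeps each transaction short, lets the log truncate between batches (with log backups in FULL recovery), and stays under the roughly 5,000-lock threshold at which row locks escalate to a table lock. Table and column names are placeholders:

WHILE 1 = 1
BEGIN
    DELETE TOP (4000)
    FROM dbo.BigTable
    WHERE CreatedDate < @PurgeBefore

    IF @@ROWCOUNT = 0 BREAK
END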

View 4 Replies View Related

Which Task Is Best For Loading 1 Million Rows

Mar 24, 2008

Hi,

I have to load 1 million rows from a database (or flat file) into a database (or flat file).
Which task is the best solution for this?

Appreciate any assistance in this regard.

Thanks,
Das
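A hedged note: inside SSIS, a Data Flow task covers either direction, and for a straight flat-file-to-table load the Bulk Insert task (or plain T-SQL BULK INSERT) is usually fastest. Path and names below are placeholders:

BULK INSERT dbo.TargetTable
FROM 'C:\data\input.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK, BATCHSIZE = 100000)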

View 5 Replies View Related

Help On Updating 1.3 Million Rows On The Production Server

May 4, 2000

I need to update about 1.3 million rows in a table of mine. I am getting the data from one of the columns of the same table and updating a new column. I am doing this using a cursor which I have put in a stored procedure. As this is a production table which users might be accessing (it is a web-based application), I can't slow the system down, so I am willing to run the stored procedure during off-peak hours. However, do I need to put this in a transaction? If I did put it in a transaction, what type of isolation level should I opt for? Data integrity is very important to me, and I don't mind compromising on performance. I am doing this because the column holding the "short description" entry has become too small for business purposes, and we want to increase its length from varchar(100) to varchar(150). As this is SQL 6.5, I can't increase the length of the column, so I added a new column and will run the stored proc. What precautions should be taken? This is high priority and very important.

Thanks in advance...

Stored procedure code:

USE DB_Registration_Dev
GO
IF EXISTS (SELECT NAME FROM SYSOBJECTS WHERE NAME='usp_update_product' AND TYPE='P')
DROP PROCEDURE usp_update_product
GO
CREATE PROC usp_update_product
AS
DECLARE @short_desc varchar(100)
DECLARE @prod_id int

DECLARE sdesc_curs CURSOR
FOR
SELECT [Product].[product_id] , [Product].[short_description]
FROM Product

OPEN sdesc_curs

FETCH NEXT FROM sdesc_curs
INTO @prod_id, @short_desc

WHILE @@FETCH_STATUS = 0
BEGIN
UPDATE Product
SET [Product].[sdesc] = @short_desc
WHERE Product_id=@prod_id
FETCH NEXT FROM sdesc_curs
INTO @prod_id, @short_desc
IF @@FETCH_STATUS <> 0
PRINT ' Finished Updating the table...go ahead and have fun ...! '
END
CLOSE sdesc_curs
DEALLOCATE sdesc_curs
GO
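A hedged alternative sketch: since each row's new value comes from another column of the same row, the whole cursor collapses to one set-based statement, which even on SQL 6.5 will be far faster than 1.3 million single-row updates:

UPDATE Product
SET sdesc = short_description

A single UPDATE is atomic on its own, so no explicit transaction or special isolation level is needed for integrity; running it off-peak only limits blocking of concurrent readers.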

View 1 Replies View Related

Missing 1 Million Rows During BCP PROCESS!! Urgent Help

Mar 12, 1999

Hi, I used /e in my bcp command, yet I did not get all the rows from the mainframe into the SQL tables. Here is the case: I have 11 million rows in a file on an FTP server, and I use the code below to bcp it into SQL Server. Can anyone check whether this code is good for the process? I am missing one million rows in the bcp process and do not know why. I added /e to see if there were any errors, but could not find any error file on my hard drive.
Please check it out and let me know

regards
Ali


Exec master..xp_cmdshell "bcp dbname..tablename in c:\ftproot\Nbtorder.txt /f d:\ftproot\formatfile\tablename.fmt /S servername /U sa /P password /b 250000 /a 8000 /e errfileORD"
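One hedged thing to check: bcp gives up after 10 errors by default and only writes the /e file when errors actually occur, so raising the limit with /m and then comparing the error file and the table count against the source row count makes silent losses visible, e.g.:

Exec master..xp_cmdshell "bcp dbname..tablename in c:\ftproot\Nbtorder.txt /f d:\ftproot\formatfile\tablename.fmt /S servername /U sa /P password /b 250000 /a 8000 /m 1000 /e c:\ftproot\errfileORD.txt"

Rows rejected for format-file or datatype mismatches land in the error file rather than the table, which would account for a shortfall with no visible failure.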

View 2 Replies View Related

Updating A Column In A Table That Contains 50 Million Rows

Feb 27, 2008

I'm looking for some performance assistance on updating a column value in a table that contains approximately 50 million rows. I have a permanent table in another database that has the key column and the value to be set. My query is listed below, but I'm afraid it will run for quite a while. Any suggestions would be appreciated.

update a
set column2 = b.column2
from mytable as a
join mytable1 as b
on a.column1 = b.column1



There is a one to one relationship between the two tables.

View 8 Replies View Related

DataWarehouse - 140 Million Rows - Load Performance Issue

Mar 10, 2000

I need help: I am managing a data warehouse (80 GB database size), and on a daily basis I purge data older than 6 months from a table which has more than 140 million rows. The daily data load performance is degrading. The table has no clustered index (only non-clustered indexes).

I tried dropping and rebuilding the non-clustered indexes; it didn't work.

One way to solve the problem is to drop the non-clustered indexes, bcp out the data, truncate the table, bcp the data back in, and rebuild the non-clustered indexes. But this is too risky, and it takes 14 hours just to bcp out the data.

This was not an issue in SQL Server 6.5, because 6.5 always inserted new records at the end of the heap (heap = a table with non-clustered indexes but no clustered index). In contrast, SQL Server 7.0 first checks for available space in existing pages by using the Page Free Space (PFS) pages, and this is where it is killing the performance.

Thanks for your help!
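A hedged suggestion for this pattern: giving the table a clustered index on the load/purge date column makes new rows append at the end and lets the daily purge delete a contiguous range instead of scattering across the heap's pages (column name is a placeholder):

CREATE CLUSTERED INDEX IX_Fact_LoadDate ON dbo.FactTable (LoadDate)

The one-time build is expensive on 140 million rows, but it removes exactly the free-space searching described above from every subsequent load.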

View 4 Replies View Related

Checking To See If A Records Exists Before Inserting - 3 Million + Rows

Aug 21, 2007

I have 1+ CSV files (using a foreach loop) which I'm doing a lot of transform work on and then inserting into a SQL database table.
Each CSV file usually contains about 2 days worth of data (contains date stamps) - somewhere in the region of 60k records per day.
The destination table currently contains 3 million+ rows and will get bigger.
I need to make sure that before inserting into the destination table, the data doesn't already exist.

I've read the following article: http://www.sqlis.com/311.aspx
While the lookup method works, it takes ages and eats up memory, as it caches the 3 million+ records before running, for each CSV. Obviously this will only get worse as the table grows in size.

To make things a little more efficient what I'd like to do, is first derive the dates I'm dealing with in the current file - essentially storing the max(date) and min(date) in variables. Then in the lookup SQL use those vars, to reduce the amount of data that needs to be brought into the transformation to check against before inserting into the destination table.
Lookup SQL eg. SELECT * FROM MyTable WHERE Date BETWEEN varMinDate AND varMaxDate.

Ideally I'd use an aggregate transformation and then use the subsequent output from that either in the lookup query or store the output in vars, but I don't think you can do that and I get the feeling I'm approaching this with the wrong mindset.

Any thoughts would be great!
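An alternative hedged sketch: land each CSV in a staging table with the data flow, then do the existence check set-based on the server, where the date-range filter plus an index on the date and key columns keeps it cheap. Names are placeholders:

INSERT INTO dbo.Destination (DateStamp, KeyCol, OtherCols)
SELECT s.DateStamp, s.KeyCol, s.OtherCols
FROM dbo.Staging AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Destination AS d
                  WHERE d.DateStamp = s.DateStamp
                  AND d.KeyCol = s.KeyCol)

TRUNCATE TABLE dbo.Staging

This sidesteps caching 3 million+ lookup rows in the pipeline entirely, and its cost tracks the size of each file rather than the size of the destination table.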

View 6 Replies View Related

DB Engine :: Deleting 1 Million Records From Transaction Table Of 10 Million Data On 24/7 Environment

Jun 12, 2015

I have a requirement to delete 1 million records from a table holding 10 million, and the table is queried on a 24/7 basis (we don't have a downtime window). How can I achieve that?
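A hedged sketch for a 24/7 table: small batches with a short pause between them keep each transaction brief and give the concurrent workload room to run. Table and column names are placeholders:

DECLARE @batch INT = 2000;
WHILE 1 = 1
BEGIN
    DELETE TOP (@batch) FROM dbo.Transactions
    WHERE TranDate < @PurgeBefore;

    IF @@ROWCOUNT = 0 BREAK;
    WAITFOR DELAY '00:00:01';   -- yield to the live workload
END

Keeping the batch size well under the lock-escalation threshold (roughly 5,000 locks) prevents the deletes from taking a table lock against the 24/7 readers.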

View 13 Replies View Related

Compare Int To Varchar, Import 1.5 Million Rows (was 2 Simple Problem)

Feb 1, 2005

1) I wrote a select statement:

select a.columname1, b.columname1
from table1 a, table2 b
where a.columname2 = b.columname3

How do I compare a.columname2 (an int column) with b.columname3 (a varchar column)? How should I use the convert function?

2) What's the best way to import 1.5 million rows? How about a text file?
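For (1), a hedged sketch: converting the int side to varchar is the safer direction, since converting the varchar side to int fails on the first non-numeric value:

select a.columname1, b.columname1
from table1 a, table2 b
where convert(varchar(12), a.columname2) = b.columname3

For (2), bcp or BULK INSERT from a text file is the usual route for 1.5 million rows. Note that the conversion above also prevents index use on the converted column, so storing the keys with matching types is the real fix.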

View 2 Replies View Related

SQL Server 2014 :: Insert 500 Million Rows Into In-memory Table

Jul 29, 2014

I am doing performance testing of the In-Memory OLTP option in SQL Server 2014. As part of that, I want to insert 500 million rows into an in-memory-enabled test table I have created.

I need a sample script to insert 500 million records into a table.
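A sample-script sketch, assuming a test table dbo.InMemTest (Id INT, Payload CHAR(100)): cross-joining a catalog view with itself generates arbitrary row counts, and looping in 5-million-row chunks keeps each insert manageable. The 500 million rows must also fit within the memory configured for memory-optimized tables:

DECLARE @i INT = 0;
WHILE @i < 100
BEGIN
    INSERT INTO dbo.InMemTest (Id, Payload)
    SELECT TOP (5000000)
           (@i * 5000000) + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
           REPLICATE('x', 100)
    FROM sys.all_columns AS a
    CROSS JOIN sys.all_columns AS b;

    SET @i += 1;
END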

View 9 Replies View Related

AFTER INSERT Trigger Takes Forever On A Large Table (20 Million Rows)

Aug 30, 2007

I have a table that is being used to log track plays on our website.

Here's the table:


CREATE TABLE [dbo].[Music_BandTrackPlays](
[ListenDate] [datetime] NOT NULL DEFAULT (getdate()),
[TrackId] [int] NOT NULL,
[IPAddress] [varchar](20)
) ON [PRIMARY]


There's a CLUSTERED INDEX on ListenDate ASC and a NON CLUSTERED INDEX on the TrackId.

I have a TRIGGER on the Music_BandTrackPlays table that looks like the following:


CREATE TRIGGER [trig_Increment_Music_BandTrackPlays_PlayCount]
ON [dbo].[Music_BandTrackPlays] AFTER INSERT
AS
UPDATE
Music_BandTracks
SET
Music_BandTracks.PlayCount = Music_BandTracks.PlayCount + TP.PlayCount
FROM
(SELECT TrackId, COUNT(*) AS PlayCount
FROM inserted
GROUP BY TrackId) AS TP
WHERE
Music_BandTracks.TrackId = TP.TrackId


When a simple INSERT statement is done on the Music_BandTrackPlays table, it can take quite a long time. When I remove the TRIGGER, the INSERTs are immediate. The execution plan for the TRIGGER shows that an 'Inserted Scan' is taking up most of the resources.

How exactly is the pseudo 'inserted' table formed?

For now, I think the easiest thing to do is update my logging page so it performs 2 queries: one to UPDATE the Music_BandTracks table and increment the counter, and one to perform the INSERT into the Music_BandTrackPlays table separately.

I'm OK with that solution, but I would really like to understand why the TRIGGER is taking so long. The 'inserted' pseudo-table will be 1 row 99% of the time. Does SQL Server perform a table scan on all 20 million rows in order to determine what's new and put it in the inserted pseudo-table?

Thanks!
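Two hedged notes: the inserted pseudo-table is materialized from the transaction log on SQL Server 2000 and from the row version store on 2005+, so forming it should not scan the 20-million-row base table. The likelier cost is the trigger's UPDATE joining Music_BandTracks on TrackId; if TrackId is not indexed there, every single-row INSERT pays for a scan of Music_BandTracks. A sketch, assuming TrackId uniquely identifies a track:

CREATE UNIQUE INDEX IX_Music_BandTracks_TrackId ON dbo.Music_BandTracks (TrackId)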

View 6 Replies View Related

Transact SQL :: Adding A Column To A Large (100 Million Rows) Table With Default Constraint?

Apr 24, 2013

IF NOT EXISTS (SELECT TOP 1 1 FROM dbo.syscolumns WHERE id = OBJECT_ID(N'dbo.Employee') AND name = 'DoNotCall')
BEGIN
    ALTER TABLE [dbo].[Employee] ADD [DoNotCall] bit NOT NULL CONSTRAINT DoNot_Call_Default DEFAULT 0
    IF ( @@ERROR <> 0 )
        GOTO QuitWithRollback
END

It just takes a LOT of time in SQL Server Management Studio. I have to cancel the query, and cancelling takes a whole lot of time as well. I am using SQL Server 2008.
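A hedged workaround sketch for SQL Server 2008, where adding a NOT NULL column with a default rewrites every one of the 100 million rows in a single transaction: add the column nullable (a metadata-only change), backfill in batches, then enforce NOT NULL. Later versions/editions can make the original statement metadata-only, but on 2008 this staged form keeps each transaction small:

ALTER TABLE [dbo].[Employee] ADD [DoNotCall] bit NULL
    CONSTRAINT DoNot_Call_Default DEFAULT 0

WHILE 1 = 1
BEGIN
    UPDATE TOP (50000) [dbo].[Employee]
    SET [DoNotCall] = 0
    WHERE [DoNotCall] IS NULL

    IF @@ROWCOUNT = 0 BREAK
END

ALTER TABLE [dbo].[Employee] ALTER COLUMN [DoNotCall] bit NOT NULL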

View 4 Replies View Related

Deleting Table Rows And Optimization

Dec 10, 2001

I have 6 read-only tables where, every night, all the data gets dumped and an updated copy of the data is copied over. The tables range from 500,000 rows to almost 4 million. I have indexes set up on the fields I query against. My questions are:
1. Since I dump all the data every night and replace it, do I need to rebuild the indexes every night, or is that done automatically as the data is re-entered?
2. I want to use a fill factor on the tables since they are read-only, but will dumping and re-inserting the data every night have adverse effects with a fill factor?
3. Should I be shrinking the database or defragmenting it every night because of my data dumps and reloads?

thanks in advance

spiral
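A hedged note on questions 1 and 2: indexes are maintained row by row as the reload inserts, which both fragments them and ignores the fill factor, so an explicit rebuild after each nightly load re-applies the fill factor in one pass (era-appropriate syntax; table name is a placeholder):

DBCC DBREINDEX ('dbo.MyReadOnlyTable', '', 90)   -- rebuild all indexes at 90% fill

Shrinking every night is counterproductive, since the reload will just regrow the files; the rebuild handles the defragmentation part.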

View 1 Replies View Related

Stored Procedure Query Optimization - Query TimeOut Error

Nov 23, 2004

How can I optimize the following stored procedure, running on MS SQL Server 2000 SP4?

CREATE PROCEDURE proc1
@Franchise ObjectId
, @dtmStart DATETIME
, @dtmEnd DATETIME
AS
BEGIN


SET NOCOUNT ON

SELECT p.Product
, c.Currency
, c.Minor
, a.ACDef
, e.Event
, t.Dec
, count(1) "Count"
, sum(Amount) "Total"
FROM tb_Event t
JOIN tb_Prod p
ON ( t.ProdId = p.ProdId )
JOIN tb_ACDef a
ON ( t.ACDefId = a.ACDefId )
JOIN tb_Curr c
ON ( t.CurrId = c.CurrId )
JOIN tb_Event e
ON ( t.EventId = e.EventId )
JOIN tb_Setl s
ON ( s.BUId = t.BUId
and s.SetlD = t.SetlD )
WHERE Fran = @Franchise
AND t.CDate >= @dtmStart
AND t.CDate <= @dtmEnd
AND s.Status = 1
GROUP BY p.Product
, c.Currency
, c.Minor
, a.ACDef
, e.Event
, t.Dec

RETURN 1
END



GO
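A hedged starting point, assuming Fran is a column of tb_Event: the only filters are the franchise and the CDate range, so a composite index supporting them lets the plan seek instead of scanning the event table:

CREATE INDEX IX_tb_Event_Fran_CDate ON tb_Event (Fran, CDate)

Comparable indexes on each join key (the tb_Setl columns BUId, SetlD, and Status in particular) are the next thing to verify in the plan.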

View 8 Replies View Related

Query Optimization - Please Help

Aug 15, 2007

Hi,
Can anyone help me optimize the SELECT statement in the 3rd step? I am writing a monthly report: for each of 500 employees, a row displays his attendance totals for every day of the month. The problem is that the 3rd step actually contains 31 SELECT statements, each assigned to one of 31 variables. After assigning these variables, I insert them into a table (4th step) and display it. The troublesome part is the 3rd step: with 500 employees, the variables are assigned and inserted into the table 500 x 31 times, which takes more than 4 minutes. Can anyone help me optimize the SELECT statements in the 3rd step, or suggest a better approach?
DECLARE @EmpID ..., @DateFrom ..., @Total1 ...   -- declaring the variables
SELECT @DateFrom = ...   -- set to the start of a month, e.g. 2007-06-01    ...... 1st
Loop (condition -- get all employees, working fine)
BEGIN
    SELECT @EmpID = ...   -- get EmployeeID                                 ...... 2nd
    SELECT @Total1 = SUM(Abences)                                           ...... 3rd
    FROM Attendance
    WHERE employee_id_fk = @EmpID   -- from the 2nd step
    AND Date_Absent = DATEADD("day", 0, CONVERT(varchar, @DateFrom))   -- from the 1st step
    SELECT @Total2 = ...   -- same as above
    SELECT @Total3 = ...   -- same as above
    INSERT INTO @TABLE (@EmpID, @Total1, ...... @Total31)                   ...... 4th
    Iterate (condition) to the next employee                                ...... 5th
END
It's only the loop which consumes the 4 minutes. If I can somehow optimize this part, I will be most satisfied. Thanks to anyone who can help.
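A hedged set-based sketch, using the table and columns from the 3rd step: conditional aggregation computes all 31 daily totals for every employee in one pass over Attendance, replacing the 500 x 31 scalar SELECTs:

SELECT employee_id_fk AS EmpID,
       SUM(CASE WHEN DAY(Date_Absent) = 1  THEN Abences ELSE 0 END) AS Total1,
       SUM(CASE WHEN DAY(Date_Absent) = 2  THEN Abences ELSE 0 END) AS Total2,
       -- ... one line per day of the month, up to ...
       SUM(CASE WHEN DAY(Date_Absent) = 31 THEN Abences ELSE 0 END) AS Total31
FROM Attendance
WHERE Date_Absent >= @DateFrom
  AND Date_Absent < DATEADD(month, 1, @DateFrom)
GROUP BY employee_id_fk

One INSERT ... SELECT of this result into the report table replaces the entire loop.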

View 11 Replies View Related

Query Optimization

Feb 19, 2001

Trying to optimize a query and having problems interpreting the data. We have a query that joins 5 tables with 4 INNER JOINs. When I use INNER HASH JOIN, this is the result:

(Using SQL Programmer)

SQL Server Execution Times:
CPU time = 40 ms, elapsed time = 80 ms.

Table 'Table1'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0.
Table 'Table2'. Scan count 1, logical reads 40, physical reads 0, read-ahead reads 0.
Table 'Table3Category'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0.
Table 'Table4'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0.
Table 'Table5'. Scan count 1, logical reads 17, physical reads 0, read-ahead reads 0.

When I use INNER JOIN, this is the result:

SQL Server Execution Times:
CPU time = 10 ms, elapsed time = 34 ms.

Table 'Table1'. Scan count 4, logical reads 10, physical reads 0, read-ahead reads 0.
Table 'Table2'. Scan count 311, logical reads 670, physical reads 0, read-ahead reads 0.
Table 'Table3'. Scan count 69, logical reads 102, physical reads 0, read-ahead reads 0.
Table 'Table4'. Scan count 69, logical reads 98, physical reads 0, read-ahead reads 0.
Table 'Table5'. Scan count 1, logical reads 17, physical reads 0, read-ahead reads 0.

Now, when timing the code execution on my ASP page, it's "faster" not using the HASH. Using HASH, there are a few Hash Match/Inner Joins reported in the execution plan. Not using HASH, there are Bookmark Lookups/Nested Loops.

My question is: which is better for the CPU/server to see, Bookmark Lookups/Nested Loops or Hash Match/Inner Joins?

Thanks.

View 1 Replies View Related

Query Optimization

Mar 14, 2003

Is there any way to rewrite this query in an optimized way?

SELECT dbo.Table1.EmpId E FROM dbo.Table1
WHERE EmpId IN (
    SELECT dbo.Table1.EmpId
    FROM (SELECT PersonID, MAX(dtmStatusDate) AS dtmStatusDate
          FROM dbo.Table1
          GROUP BY PersonID) derived_table
    INNER JOIN dbo.Table1 ON derived_table.PersonID = dbo.Table1.PersonID
                         AND derived_table.dtmStatusDate = dbo.Table1.dtmStatusDate)

Thanks....j
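The outer IN wrapper adds nothing, since the inner join already yields exactly the qualifying EmpIds, so a hedged rewrite is just the join against the per-person maximum date:

SELECT t.EmpId E
FROM dbo.Table1 t
INNER JOIN (SELECT PersonID, MAX(dtmStatusDate) AS dtmStatusDate
            FROM dbo.Table1
            GROUP BY PersonID) derived_table
        ON derived_table.PersonID = t.PersonID
       AND derived_table.dtmStatusDate = t.dtmStatusDate

An index on (PersonID, dtmStatusDate) supports both the grouped subquery and the join.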

View 1 Replies View Related

Query Optimization

Mar 7, 2006

How can I optimize the following query?

SELECT e.SID
FROM Students s
JOIN Table1 e ON e.SID = s.SID
JOIN Table2 ed ON ed.Enrollment = e.Enrollment
JOIN Table3 t ON t.TNum = e.TNum
JOIN Table4 bt ON bt.TNum = t.TNum
JOIN Table5 b ON b.Batch = bt.Batch
JOIN IPlans i ON i.IPlan = ed.IPlan
JOIN PGroups g ON g.PGroup = i.PGroup
WHERE t.TStatus = 'ACP'
AND ed.EStatus = 'APR'
AND e.SID = (select distinct SID from Table1 where Enrollment = @DpEnrollment)
AND ed.EffectiveDate =
    (SELECT EffectiveDate
     FROM Table2 ed JOIN Table1 e ON e.enrollment = ed.enrollment
     WHERE IPlan = @DpIPlan
     AND TCoord = @DpTCoord
     AND AGCoord = @DpAGCoord
     AND DCoord = @DpDCoord
     AND DSeq = @DpDSeq
     AND e.SID = (select distinct SID from Table1 where Enrollment = @DpEnrollment))
AND ed.TerminationDate =
    (SELECT TerminationDate
     FROM Table2 ed JOIN Table1 e ON e.enrollment = ed.enrollment
     WHERE IPlan = @DpIPlan
     AND TCoord = @DpTCoord
     AND AGCoord = @DpAGCoord
     AND DCoord = @DpDCoord
     AND DSeq = @DpDSeq
     AND e.SID = (select distinct SID from Table1 where Enrollment = @DpEnrollment))
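One hedged simplification: the same scalar subqueries repeat several times, so computing them once into variables up front removes the repeated work (assuming they return single values, as the = comparisons already require):

DECLARE @SID int
SELECT @SID = SID FROM Table1 WHERE Enrollment = @DpEnrollment

The WHERE clause can then use e.SID = @SID in all three places, and the EffectiveDate/TerminationDate subqueries can be captured the same way.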

View 2 Replies View Related

Query Optimization

Mar 20, 2006

DECLARE @PTEffDate_tmp AS SMALLDATETIME
SELECT @PTEffDate_tmp = DateAdd(day, -1, PDate)
FROM PDates pd WHERE iplan = @DIPlan and pd.TCoord = @DTCoord and DType = 'EF'

DECLARE @PTCoord_tmp as char(3)
SELECT @PTCoord_tmp = tc.TCoord
FROM PDates pd JOIN TCoords tc ON (pd.TCoord = tc.TCoord)
WHERE pd.Iplan = @DIPlan and tc.TGroup = @TGroup_tmp
and PDate = @PTEffDate_tmp and DateType = 'TR1'

DECLARE @EStatus_tmp as char(3)
SELECT @EStatus_tmp = EDStatus From EDetails ed
JOIN ENR e ON (ed.enr = e.enr)
JOIN Trans t ON (e.transID = t.TransID)
WHERE iplan = @DIPlan
and ed.TCoord = @PTCoord_tmp
and t.TransS= 'ACP'
and DCoord = @DCoord
and CEnr is null

View 3 Replies View Related

Query Optimization

Aug 17, 2006

How can I optimize my query? Since my DB has more than 1 million rows, it takes a while to do all those joins:
select *
FROM EEMaster eem
JOIN NHistory nh
ON eem.SNumber = nh.SNumber
OR eem.OldNumber = nh.SNumber
OR eem.CID = (Replicate ('0',12-len( nh.SNumber))+ nh.SNumber )
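A hedged rewrite sketch: OR conditions in a join predicate usually force a scan, while splitting each condition into its own join and combining with UNION lets each leg use an index (on NHistory.SNumber, eem.SNumber, and eem.OldNumber):

select eem.*, nh.*
from EEMaster eem join NHistory nh on eem.SNumber = nh.SNumber
union
select eem.*, nh.*
from EEMaster eem join NHistory nh on eem.OldNumber = nh.SNumber
union
select eem.*, nh.*
from EEMaster eem join NHistory nh on eem.CID = (Replicate('0', 12 - len(nh.SNumber)) + nh.SNumber)

UNION (rather than UNION ALL) reproduces the original behavior of emitting a matching pair once even when several conditions hold.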

View 4 Replies View Related

Query Optimization

Apr 23, 2008

I work on tables containing 10 million plus records.
What are the general steps needed to ensure that my queries run fast?
I know a few:
- the join fields should be indexed
- select only the needed fields
- use CTEs or derived tables where they help
- use table aliases, e.g.

select a.x, b.y
from TableA a inner join TableB b
on a.id = b.id


I will be happy if somebody could share or add more to my list.

Regards to all

View 4 Replies View Related

Query Optimization

May 1, 2008

Dear all,
The query below takes 7 minutes to execute, so I want to optimize it. Any suggestions are welcome.


SELECT DISTINCT VC.O_Id C_Id, VC.Name C_Name,VB.Org_Id B_Id,
VB.code S_Code,VB.Name S_Name, mt12.COLUMN003 M_D_Code,
mt12.COLUMN004 M_D_Name,CQ.COLUMN004 R_Code,
CQ.COLUMN005 R_Date, CQ.COLUMN006 Ser,CQ.COLUMN008 R_Nature,
CQ.COLUMN011 E_Date,mt26.COLUMN003 W_Code, mt26.COLUMN004 W_Name,
mt17.COLUMN005 V_Code,mt17.COLUMN006 V_Name, mt19.column002 I_Code,
mt19.column003 I_Name, mt19.COLUMN0001 R_I_No,mt92.COLUMN001 B_Id,
mt92.COLUMN005 B_No, CASE mt92.COLUMN006 WHEN '0' THEN 'Ser'
WHEN '1' THEN 'Un-Ser' WHEN '2' THEN 'Ret' WHEN '3' THEN 'Retd'
WHEN '4' THEN 'Rep' WHEN '5' THEN 'Repd' WHEN '6' THEN 'Con'
WHEN '7' THEN 'Cond' ELSE mt92.COLUMN006 END S_C_Type,
mt20.COLUMN003 T_G_Code,mt20.COLUMN004 T_G_Name, V.U_Code,V.U_Name,
mt19.column005 I_Quantity,mt20.COLUMN003 T_Code, mt20.COLUMN004 T_Name,
mt59.COLUMN005 T_Price,VR.code C_L_Code,
VR.Name C_L_Name
FROM tab90 CQ
INNER JOIN tab91 mt19 ON mt19.COLUMN002 = CQ.COLUMN001
LEFT JOIN tab92 mt92 ON mt92.COLUMN002 = CQ.COLUMN001
LEFT JOIN tab93 mt93 ON mt93.COLUMN004 = CQ.COLUMN001
INNER JOIN tab12 mt12 ON mt12.COLUMN001 = CQ.COLUMN003
LEFT JOIN tab26 mt26 ON mt26.COLUMN001 = CQ.COLUMN009
LEFT JOIN tab20 mt20 ON mt20.COLUMN001 = mt93.COLUMN005
LEFT JOIN tab59 mt59 ON mt59.COLUMN002=mt20.COLUMN001
LEFT JOIN tab17 mt17 ON mt17.COLUMN001 = CQ.COLUMN010
INNER JOIN VM V ON V.UOM_ID = mt19.COLUMN004
INNER JOIN tab19 mt19b ON mt19b.COLUMN001 = mt19.COLUMN003
INNER JOIN vOrg VR ON CQ.COLUMN007 = VR.Org_Id
INNER JOIN vOr VB ON CQ.COLUMN002 = VB.Org_Id
INNER JOIN vOr VC ON VB.Top_Parent = VC.Org_Id
WHERE CQ.COLUMN005 Between '02/01/2007' and '08/25/2008' And VC.O_Id in ('fb243e92-ee74-4278-a2fe-8395214ed54b')




Thanks&Regards,

Msrs
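Two hedged observations: the date filter is the only selective predicate, so an index on it is the first thing to try, and literals like '02/01/2007' are interpreted per session language, so ISO-format literals ('20070201', '20080825') are safer:

CREATE INDEX IX_tab90_COLUMN005 ON tab90 (COLUMN005)

Beyond that, SELECT DISTINCT over this many joins is often a sign of row multiplication; checking which LEFT JOINs fan out the row count may remove the need for DISTINCT and its sort.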

View 4 Replies View Related

SQL Query Optimization

Jun 18, 2008

Hi All,

table with initial data (primary key: COL1 + COL2):

COL1  COL2  NEW  LATEST
124   1     1    1
125   0     1    1

By default, the NEW and LATEST columns have the values 1, 1.

Now one row is inserted with values (124, 2). The table data should become:

COL1  COL2  NEW  LATEST
124   1     1    0
125   0     1    1
124   2     0    1

The LATEST value changes for row 1, since value 124 now repeats, meaning that row is no longer the latest.

The NEW value is 0 for the newly inserted row, since it is not new; we already have an occurrence of 124 in the first row.

I'm not sure I can solve this with anything other than a cursor: taking the first row, comparing it with all the other rows, and then moving on.

Please suggest a better approach if there is one.
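A hedged set-based sketch (MyTable and the variables holding the just-inserted values are placeholders): run right after each insert, two updates maintain both flags without any cursor:

DECLARE @c1 int, @c2 int   -- the COL1, COL2 values just inserted

UPDATE MyTable SET NEW = 0
WHERE COL1 = @c1 AND COL2 = @c2
  AND EXISTS (SELECT 1 FROM MyTable WHERE COL1 = @c1 AND COL2 <> @c2)   -- an earlier occurrence exists

UPDATE MyTable SET LATEST = 0
WHERE COL1 = @c1 AND COL2 <> @c2   -- older rows with the same COL1 are no longer latest

The same pair of statements also works inside an AFTER INSERT trigger driven by the inserted pseudo-table.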

View 11 Replies View Related

Query Optimization

Mar 12, 2007

Okay guys, this will probably be messy. Just throw out some thoughts and I'll deal with it. How do I make this query smaller and more efficient?

Query deleted and link posted: http://theninjalist.com/

View 4 Replies View Related

Query Optimization

Oct 16, 2007

Dear all,

View 15 Replies View Related

Query Optimization

Nov 2, 2007

Dear all,

View 6 Replies View Related

Query Optimization

Apr 9, 2008

Hello!

I have a query:



SELECT *,
.....

(SELECT add_house
FROM hs_address
WHERE add_id = do_address_registration_id) as add_house,
(SELECT add_flat
FROM hs_address
WHERE add_id = do_address_registration_id) as add_flat,

.....
FROM hs_donor
WHERE do_id = 400





The fields add_flat and add_house belong to the same table. How might one optimize this query?



P.S. do_address_registration_id can be NULL


TIA
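A hedged rewrite sketch: one LEFT JOIN fetches both columns in a single lookup and still yields NULLs when do_address_registration_id is NULL:

SELECT d.*, a.add_house, a.add_flat
FROM hs_donor d
LEFT JOIN hs_address a ON a.add_id = d.do_address_registration_id
WHERE d.do_id = 400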

View 1 Replies View Related

Query Optimization

Mar 11, 2008



I am writing a query which will display the details of the employee who is handling the maximum number of projects. Here I am joining 2 tables: LUP_EmpProject, which contains employee id, project id, and project date (I have used a composite primary key of employee id, project id, and project date), and EmployeeDetails, which contains employee names and employee ids.

The code given below works fine, but the query is taking time to execute. Does anybody know how to optimize the code so that I can get the result quickly?




Code Snippet
SELECT EmployeeDetails.FirstName + ' ' + EmployeeDetails.LastName AS EmpName,
       COUNT(LUP_EmpProject.Empid) AS Number_Of_Projects
FROM LUP_EmpProject
INNER JOIN EmployeeDetails
        ON LUP_EmpProject.Empid = EmployeeDetails.Empid
GROUP BY EmployeeDetails.FirstName + ' ' + EmployeeDetails.LastName,
         LUP_EmpProject.Empid
HAVING COUNT(LUP_EmpProject.Empid) > 0
   AND COUNT(LUP_EmpProject.Empid) = (SELECT MAX(Number_Of_Projects)
                                      FROM (SELECT COUNT(LUP_EmpProject.Empid) AS Number_Of_Projects
                                            FROM LUP_EmpProject
                                            GROUP BY LUP_EmpProject.Empid) AS sub)




Please help!!!!!!!!!!
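A hedged simpler form: TOP (1) WITH TIES keeps every employee tied for the maximum project count, without the correlated HAVING subquery (SQL Server 2005+):

SELECT TOP (1) WITH TIES
       EmployeeDetails.FirstName + ' ' + EmployeeDetails.LastName AS EmpName,
       COUNT(*) AS Number_Of_Projects
FROM LUP_EmpProject
INNER JOIN EmployeeDetails
        ON LUP_EmpProject.Empid = EmployeeDetails.Empid
GROUP BY LUP_EmpProject.Empid,
         EmployeeDetails.FirstName + ' ' + EmployeeDetails.LastName
ORDER BY COUNT(*) DESC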

View 6 Replies View Related







Copyrights 2005-15 www.BigResource.com, All rights reserved