Hi group,

I have a select statement that, if run against a 1 million record database directly in Query Analyzer, takes less than 1 second. However, if I execute the select statement in a stored procedure instead, calling the stored proc from Query Analyzer, then it takes 12-17 seconds.

Here is what I execute in Query Analyzer when bypassing the stored procedure:

USE Verizon
GO
DECLARE @phonenumber varchar(15)
SELECT @phonenumber = '6317898493'

SELECT Source_Identifier, BADD_Sequence_Number, Record_Type, BAID,
       Social_Security_Number, Billing_Name, Billing_Address_1, Billing_Address_2,
       Billing_Address_3, Billing_Address_4, Service_Connection_Date, Disconnect_Date,
       Date_Final_Bill, Behavior_Score, Account_Group, Diconnect_Reason,
       Treatment_History, Perm_Temp, Balance_Due, Regulated_Balance_Due,
       Toll_Balance_Due, Deregulated_Balance_Due, Directory_Balance_Due,
       Other_Category_Balance
FROM BadDebt
WHERE (Telephone_Number = @phonenumber) OR (Telephone_Number_Redef = @phonenumber)
ORDER BY Service_Connection_Date DESC
RETURN
GO

Here is what I execute in Query Analyzer when calling the stored procedure:

DECLARE @phonenumber varchar(15)
SELECT @phonenumber = '6317898493'
EXEC Verizon.dbo.baddebt_phonelookup @phonenumber

Here is the script that created the stored procedure itself:

CREATE PROCEDURE dbo.baddebt_phonelookup @phonenumber varchar(15)
AS
SELECT Source_Identifier, BADD_Sequence_Number, Record_Type, BAID,
       Social_Security_Number, Billing_Name, Billing_Address_1, Billing_Address_2,
       Billing_Address_3, Billing_Address_4, Service_Connection_Date, Disconnect_Date,
       Date_Final_Bill, Behavior_Score, Account_Group, Diconnect_Reason,
       Treatment_History, Perm_Temp, Balance_Due, Regulated_Balance_Due,
       Toll_Balance_Due, Deregulated_Balance_Due, Directory_Balance_Due,
       Other_Category_Balance
FROM BadDebt
WHERE (Telephone_Number = @phonenumber) OR (Telephone_Number_Redef = @phonenumber)
ORDER BY Service_Connection_Date DESC
RETURN
GO

Using SQL Profiler, I also have the execution trees for each of these two different ways of running the same query.

Here is the execution tree when running the whole query in the analyzer, bypassing the stored procedure:

--------------------------------------
Sort(ORDER BY:([BadDebt].[Service_Connection_Date] DESC))
  |--Bookmark Lookup(BOOKMARK:([Bmk1000]), OBJECT:([Verizon].[dbo].[BadDebt]))
       |--Sort(DISTINCT ORDER BY:([Bmk1000] ASC))
            |--Concatenation
                 |--Index Seek(OBJECT:([Verizon].[dbo].[BadDebt].[Telephone_Index]), SEEK:([BadDebt].[Telephone_Number]=[@phonenumber]) ORDERED FORWARD)
                 |--Index Seek(OBJECT:([Verizon].[dbo].[BadDebt].[Telephone_Redef_Index]), SEEK:([BadDebt].[Telephone_Number_Redef]=[@phonenumber]) ORDERED FORWARD)
--------------------------------------

Finally, here is the execution tree when calling the stored procedure:

--------------------------------------
Sort(ORDER BY:([BadDebt].[Service_Connection_Date] DESC))
  |--Filter(WHERE:([BadDebt].[Telephone_Number]=[@phonenumber] OR [BadDebt].[Telephone_Number_Redef]=[@phonenumber]))
       |--Compute Scalar(DEFINE:([BadDebt].[Telephone_Number_Redef]=substring(Convert([BadDebt].[Telephone_Number]), 1, 10)))
            |--Table Scan(OBJECT:([Verizon].[dbo].[BadDebt]))
--------------------------------------

Thanks for any help on my path to optimizing this query for our production environment.

Regards,
Warren Wright
Scorex Development Team
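One thing that sometimes helps in this kind of situation (a procedure reusing a stale or parameter-sniffed cached plan while the ad-hoc batch compiles a fresh one) is forcing the procedure to compile a new plan. A minimal sketch follows, using the procedure name from the post; sp_recompile and WITH RECOMPILE are standard SQL Server options, but whether they fix this particular case is an assumption, not a confirmed diagnosis.

-- Option A: mark the procedure so its cached plan is discarded on next execution
EXEC sp_recompile 'dbo.baddebt_phonelookup'

-- Option B: recreate the procedure so a fresh plan is built on every call
-- (column list abbreviated here; the trade-off is a compile cost per call)
ALTER PROCEDURE dbo.baddebt_phonelookup
    @phonenumber varchar(15)
WITH RECOMPILE
AS
SELECT Source_Identifier, Billing_Name, Balance_Due   -- remaining columns as in the original
FROM BadDebt
WHERE (Telephone_Number = @phonenumber) OR (Telephone_Number_Redef = @phonenumber)
ORDER BY Service_Connection_Date DESC
GO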
OK, can someone tell me why I get two different answers for the same query? (I'm looking for the last day of the month for a given date.)
SELECT DATEADD(ms, -3, DATEADD(mm, DATEDIFF(m, 0, CAST('12/20/2006' AS datetime)) + 1, 0)) AS Expr1
FROM testsupplierSCNCR

I am getting the result of 01/01/2007.
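For what it's worth, one quick way to see where rounding can come in is to evaluate the expression as both datetime and smalldatetime. This is a hedged guess at the cause (smalldatetime has one-minute precision, so 23:59:59.997 rounds up to the next day), not a confirmed diagnosis of the table above:

DECLARE @d datetime
SET @d = DATEADD(ms, -3, DATEADD(mm, DATEDIFF(m, 0, CAST('12/20/2006' AS datetime)) + 1, 0))

SELECT @d AS as_datetime,                           -- 2006-12-31 23:59:59.997
       CAST(@d AS smalldatetime) AS as_smalldatetime  -- 2007-01-01 00:00:00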
I need to write a SELECT statement to fetch data from different tables, which are located on different servers. Can anyone help in writing this SELECT statement without moving the tables onto the same server?
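A minimal sketch of the usual approach on SQL Server: register the remote server as a linked server and reference its tables with four-part names. The server, database, and table names here (RemoteSrv, SalesDB, Orders, Customers) are placeholders, not names from the original post:

-- one-time setup (requires appropriate permissions); RemoteSrv is the remote server's network name
EXEC sp_addlinkedserver @server = 'RemoteSrv', @srvproduct = 'SQL Server'

-- then query local and remote tables in one statement
SELECT c.CustomerName, o.OrderDate, o.Amount
FROM dbo.Customers AS c
JOIN RemoteSrv.SalesDB.dbo.Orders AS o
  ON o.CustomerID = c.CustomerID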
My goal is to add a diff into a query that grabs data from 2 different tables.
The code:

SELECT MIN(TableName) as TableName, ID1,
       COL1, COL2, COL3, COL4, COL5, COL6, COL7, COL8, COL9, COL10,
       COL11, COL12, COL13, COL14, COL15, --COL16,
       COL17, COL18, COL19, COL20, COL21
FROM (
    SELECT 'Table A' as TableName, SessionID as ID1,
           StartDateCode as COL1, StartTimeCode as COL2, EndDateCode as COL3, EndTimeCode as COL4,
           HandledByCode as COL5, DispositionCode as COL6, DNISCode as COL7, CallServiceQueueCode as COL8,
           ApplicationCode as COL9, IVREndPointCode as COL10, BankCode as COL11,
           TotalQueueTimeSS as COL12, TotalAgentTalkTimeSS as COL13, TotalAgentHoldTimeSS as COL14,
           TotalAgentHandleTimeSS as COL15, --TotalIVRTimeSS as COL16,
           AfterHoursFlag as COL17, SourceSystemID as COL18, anubisTransferExtNumber as COL19,
           anubisEndPoint as COL20, AccountNumber as COL21
from [pdx0sql45].Rubicon_Marts.dbo.INB_Call_Fact where startdatecode between 2738 and 2769
UNION all
    SELECT 'Table B' as TableName, SessionID as ID1,
           StartDateCode as COL1, StartTimeCode as COL2, EndDateCode as COL3, EndTimeCode as COL4,
           HandledByCode as COL5, DispositionCode as COL6, DNISCode as COL7, CallServiceQueueCode as COL8,
           ApplicationCode as COL9, IVREndPointCode as COL10, BankCode as COL11,
           TotalQueueTimeSS as COL12, TotalAgentTalkTimeSS as COL13, TotalAgentHoldTimeSS as COL14,
           TotalAgentHandleTimeSS as COL15, --TotalIVRTimeSS as COL16,
           AfterHoursFlag as COL17, SourceSystemID as COL18, anubisTransferExtNumber as COL19,
           anubisEndPoint as COL20, AccountNumber as COL21
from pdx0sql04.Rubicon_Marts.dbo.INB_Call_Fact where startdatecode between 2738 and 2769
) tmp
GROUP BY ID1, COL1, COL2, COL3, COL4, COL5, COL6, COL7, COL8, COL9, COL10,
         COL11, COL12, COL13, COL14, COL15, --COL16,
         COL17, COL18, COL19, COL20, COL21
HAVING COUNT(*) = 1
ORDER BY 2, 1
---------
Is it possible to add something to the query to output a diff/compare of the two tables?
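One possible way to express a diff directly, assuming SQL Server 2005 or later (EXCEPT is not available on 2000); the column list is abbreviated to three columns for readability, so treat this as a sketch rather than a drop-in replacement:

-- rows returned by server A but not by server B (swap the two SELECTs for the other direction)
SELECT SessionID, StartDateCode, StartTimeCode
FROM [pdx0sql45].Rubicon_Marts.dbo.INB_Call_Fact
WHERE startdatecode BETWEEN 2738 AND 2769
EXCEPT
SELECT SessionID, StartDateCode, StartTimeCode
FROM pdx0sql04.Rubicon_Marts.dbo.INB_Call_Fact
WHERE startdatecode BETWEEN 2738 AND 2769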
Hi. I'm trying but not getting correct results.

I have two tables. One with app, msg, time (varchar, datetime, varchar):

app1 start 2006-04-03 13:33:36.000
app1 stuff 2006-04-03 13:33:36.000
app1 end   2006-04-03 13:33:36.000
app1 start 2006-04-03 13:33:36.000
app2 start 2006-04-03 13:33:36.000
app2 stuff 2006-04-03 13:33:36.000
app2 end   2006-04-03 13:33:36.000
app2 start 2006-04-03 13:33:36.000
app3 start 2006-04-03 13:33:36.000
app2 end   2006-04-03 13:33:36.000
app2 start 2006-04-03 13:33:36.000
app2 end   2006-04-03 13:33:36.000
app2 start 2006-04-03 13:33:36.000
app2 end   2006-04-03 13:33:36.000
app3 end   2006-04-03 13:33:36.000
app1 end   2006-04-03 13:33:36.000

and another with Dr. Watson crash info (varchar, datetime):

app1 2006-04-03 13:33:36.000
app2 2006-04-03 13:33:36.000
app1 2006-04-03 13:33:36.000
app1 2006-04-03 13:33:36.000
app3 2006-04-03 13:33:36.000

I'm trying to make a query that will allow me to see what entries in the first table occurred within, say, a minute, or maybe 40 seconds, of any of the entries in the second table. I want all the entries in the second table to be present, so I know it has to be some sort of join, probably an outer join.

My syntax is giving me bad results, probably because I'm just out of practice. Can someone tell me how to put a query together so I see the data I'm looking for?

Thanks
Jeff
Jeff Kish
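A sketch of the shape of the join, assuming the first table is called AppLog (app, msg, logtime) and the second CrashLog (app, crashtime), with both time columns as datetime; those names and types are assumptions, not taken from the post:

-- every crash row, plus any log rows for the same app within 60 seconds of the crash
SELECT c.app, c.crashtime, l.msg, l.logtime
FROM CrashLog AS c
LEFT OUTER JOIN AppLog AS l
  ON l.app = c.app
 AND ABS(DATEDIFF(ss, c.crashtime, l.logtime)) <= 60
ORDER BY c.app, c.crashtime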
I have Table A. We already have 80 columns, and we have to add 65 more columns.
We are populating this table from Oracle, and we need to populate those 65 new columns from the same source table.
Is it a better idea to add those 65 columns to the same table or to a new table?
If we go with the same table, the loading time will double. If I go with a new table, and if I can run both packages that load data from the same Oracle server into different SQL tables at the same time, then we should be good. But if we run into temp space issues on the Oracle server, then I have to load the two tables separately, which takes the same time as the first option.
Is there a way in SSIS to pull data from the same Oracle table into two different SQL tables at the same time?
Hello Everyone,

I have a very complex performance issue with our production database. Here's the scenario. We have a production web server and a development web server. Both are running SQL Server 2000.

I encountered various performance issues on the production server with a particular query. It would take approximately 22 seconds to return 100 rows, that's about 0.22 seconds per row. Note: I ran the query in single user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second.

I took a look at the query execution plan and I found that they were the exact same in both cases. Then I took a look at the various indexes, and again I found no differences in the table indices.

If both databases are identical, I'm assuming that the issue is related to some external hardware issue like disk space, memory, etc. Or it could be OS software related issues, like service packs, SQL Server configurations, etc.

Here's what I've done to rule out some obvious hardware issues on the prod server:

1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive. There is 55 GB of free space on the disk.
2. Applied the SQL Server SP4 service pack.
3. Defragmented the primary hard drive.
4. Applied all Windows Server 2003 updates.

Here are the prod server's system specs:
2x Intel Xeon 2.67 GHz
Total Physical Memory 2 GB, Available Physical Memory 815 MB
Windows Server 2003 SE w/ SP1

Here are the dev server's system specs:
2x Intel Xeon 2.80 GHz
2 GB DDR2-SDRAM
Windows Server 2003 SE w/ SP1

I'm not sure what else to do; the query performance is an order of magnitude different and I can't explain it. To me it is a hardware or operating system related issue.

Any ideas would help me greatly!

Thanks,
Brian T
Is there any specific place where I can find out which SQL query is more efficient?
Is an INNER JOIN faster, or is SELECT ... WHERE ID IN (SELECT ...) faster?
I have two tables:

1. FLEET (the number of rows is not that large)
   Attributes: Company_Id (PK), Fleet_Id (PK), Fleet_Name, Fleet_Description

2. USER_PRIVILEGE (the number of rows can reach up to 3 times the number of rows in the fleet table)
   Attributes: Company_Id (PK), Fleet_Id (PK), User_Id (PK), Privilege_Id (PK), Comment, Category

I want to select Fleet_Id and Fleet_Name from the fleet table where the current user has privilege_id = 1.
I have two possible select statement :
1.Option 1
SELECT Fleet_Name, Fleet_Id
FROM FLEET
WHERE (Company_Id = 2)
  AND (Fleet_Id IN (SELECT fleet_id FROM user_privilege
                    WHERE user_id = 11 AND company_id = 2 AND privilege_id = 1))
ORDER BY Fleet_Name
2.Option 2
SELECT F.Fleet_Name, F.Fleet_Id
FROM USER_PRIVILEGE U
INNER JOIN FLEET F ON U.Fleet_Id = F.Fleet_Id
WHERE (F.Company_Id = 2) AND (U.Privilege_Id = 1) AND (U.User_Id = 11)
ORDER BY F.Fleet_Name
Which one is actually faster? Can the SQL statement with the INNER JOIN (Option 2) be executed faster than the one with the nested SELECT (Option 1)?
Hi:

I have the following query, can somebody help me?

SELECT s.Id, s.Name
FROM Switch s
INNER JOIN SwitchTelephoneRange r ON s.Id = r.Id
WHERE '1526858' BETWEEN FromTelephone AND ToTelephone

where '1526858' is a phone number. My problem is, I want to run the above query for each record in:

SELECT Telephone FROM PhoneDirectory

So each telephone number in the second query would match the '1526858' in the first query. How can I do so? Do I need a loop? A cursor? Can you help please?

Thanks
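One set-based way to express this, assuming the table and column names above (PhoneDirectory.Telephone, SwitchTelephoneRange.FromTelephone/ToTelephone) and compatible data types; a sketch rather than a tested rewrite, and no loop or cursor is needed:

-- one row per directory number that falls inside a switch's range
SELECT d.Telephone, s.Id, s.Name
FROM PhoneDirectory AS d
INNER JOIN SwitchTelephoneRange AS r
  ON d.Telephone BETWEEN r.FromTelephone AND r.ToTelephone
INNER JOIN Switch AS s
  ON s.Id = r.Id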
Hi, I want to get the query execution time as output. I want the execution time only; this is for tuning purposes. The time displayed in the status bar is not helpful for me. Thanks.
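Two common ways to get this, sketched below; SET STATISTICS TIME is a standard SQL Server session option, and the GETDATE()-based variant is a hand-rolled alternative when you want the number in a result set. SomeTable is a placeholder for the query being tuned:

-- 1) per-statement parse/compile and execution times in the Messages output
SET STATISTICS TIME ON
SELECT COUNT(*) FROM SomeTable
SET STATISTICS TIME OFF

-- 2) elapsed milliseconds returned as a column
DECLARE @start datetime
SET @start = GETDATE()
SELECT COUNT(*) FROM SomeTable
SELECT DATEDIFF(ms, @start, GETDATE()) AS elapsed_ms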
Hi there,

I'm having a big performance problem with a SQL query. What I have done is change the physical layout (rearranged the columns) in one of the tables in the database. I used bcp to get the data out and then back in. There are about a quarter million rows in this table. I have created the same indexes, but now the same query takes a long time to execute. I have noticed that the showplan is somehow different than it used to be. This query uses the table I have changed and another one that I haven't touched. I have updated the stats, to no help.

Here are the showplans.

This one is slow:

STEP 1
  The type of query is INSERT
  The update mode is direct
  Worktable created for ORDER BY
  FROM TABLE SW_PERSON
  Nested iteration
  Index : swiPERSON10
  FROM TABLE SW_CASE
  Nested iteration
  Table Scan
  TO TABLE Worktable 1
STEP 2
  The type of query is SELECT
  This step involves sorting
  FROM TABLE Worktable 1
  Using GETSORTED
  Table Scan
This one used to be fast:

STEP 1
  The type of query is INSERT
  The update mode is direct
  Worktable created for ORDER BY
  FROM TABLE SW_CASE
  Nested iteration
  Table Scan
  FROM TABLE SW_PERSON
  Nested iteration
  Index : PK_SW_PERSON_1__27
  TO TABLE Worktable 1
STEP 2
  The type of query is SELECT
  This step involves sorting
  FROM TABLE Worktable 1
  Using GETSORTED
  Table Scan
I think the problem is that the first one doesn't use the PK index, which is the one that links both tables. My question is how to force the query to use this index. PS: One thing I haven't done is recreate the indexes on the other table, but I don't think that would have made a difference. Thanks
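As a sketch only: an index hint in the FROM clause would look roughly like the following on SQL Server (Sybase ASE uses a similar (INDEX index_name) hint). The original query text isn't shown, so the column list, join column, and ORDER BY below are placeholders, and forcing an index is usually a last resort after statistics are up to date:

SELECT p.*, c.*                                   -- placeholder column list
FROM SW_PERSON p (INDEX = PK_SW_PERSON_1__27)     -- force the PK index on SW_PERSON
JOIN SW_CASE c ON c.person_id = p.person_id       -- join column is an assumption
ORDER BY p.last_name                              -- placeholder ORDER BY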
CREATE FUNCTION dbo.fnProductsRetrieveBySupplierID
(
    @SupplierID int
)
RETURNS TABLE
AS
RETURN
(
    SELECT * FROM Products WHERE SupplierID = @SupplierID
)
CREATE FUNCTION dbo.fnSuppliersRetrieveBySupplierID
(
    @SupplierID int
)
RETURNS TABLE
AS
RETURN
(
    SELECT * FROM Suppliers WHERE SupplierID = @SupplierID
)
I have been testing the performance of the following SQL statements:
Code:
1. SELECT * FROM Products INNER JOIN Suppliers ON (Products.SupplierID = Suppliers.SupplierID) WHERE Products.SupplierID = 3
2. SELECT * FROM dbo.fnProductsRetrieveBySupplierID (3), dbo.fnSuppliersRetrieveBySupplierID (3)
I have built a loop to execute each statement multiple times and then compare the execution times. Although both queries produce the same result, the second one (which uses the functions) is about twice as slow. Does anyone know why?
I am joining three tables, each of which has about 1.5 million rows, selecting data from these three tables and inserting it into a fourth table. To avoid transaction log issues I am running the query in batches of 50,000 rows, and it is taking about 5 hours to insert all 1.5 million rows.
All the columns in the WHERE clause have proper indexes. I ran showplan for the query, and it is using the indexes properly and not doing any table scans. I also updated the statistics for all the indexes.
The query looks something like this:
INSERT INTO d (col1, col2, col3, ...)
SELECT a.col1, b.col2, c.col3, ...
FROM a, b, c
WHERE a.id = b.id
  AND a.id = c.id
  AND a.id BETWEEN @minid AND @currid
@minid starts at 1 and @currid starts at 50,000. I am running this in a loop; in the next iteration @minid becomes 50,001 and @currid 100,000, and so on.
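For reference, a minimal sketch of the batching loop described above, assuming an integer id column and the abbreviated column lists from the query; the 50,000 step and the loop structure come from the post, everything else is illustrative:

DECLARE @minid int, @currid int, @maxid int
SELECT @minid = 1, @currid = 50000, @maxid = MAX(id) FROM a

WHILE @minid <= @maxid
BEGIN
    INSERT INTO d (col1, col2, col3)
    SELECT a.col1, b.col2, c.col3
    FROM a
    JOIN b ON a.id = b.id
    JOIN c ON a.id = c.id
    WHERE a.id BETWEEN @minid AND @currid

    SET @minid = @currid + 1
    SET @currid = @currid + 50000
END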
I have two tables.

Employee
  EmployeeCode int, primary key

Employee_Stock
  EmployeeCode int
  StockCode varchar(10)
  Primary key on (EmployeeCode, StockCode)
There is no foreign key relationship between these 2 tables. Now my question is: which approach gives better performance, and why?
1. SELECT * FROM Employee INNER JOIN Employee_Stock ON Employee.EmployeeCode = Employee_Stock.EmployeeCode
2. Create a foreign key between Employee and Employee_Stock on EmployeeCode, and run the same query.
Actually, we forgot to put the foreign key relationship between these 2 tables, and we have a lot of queries joining them. Now if we add the foreign key, is it going to improve performance or not?
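For reference, adding the constraint described in option 2 would look roughly like this; WITH CHECK is standard syntax, the constraint name is illustrative, and whether a trusted foreign key actually changes these particular plans depends on the queries and the optimizer version:

ALTER TABLE Employee_Stock WITH CHECK
    ADD CONSTRAINT FK_EmployeeStock_Employee
    FOREIGN KEY (EmployeeCode) REFERENCES Employee (EmployeeCode)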
I wrote a query that uses a cursor. When I run the query on the dev box it takes 3 minutes. I moved the query to the EPM database box and it takes forever to run. Usually the EPM database's query performance is much better. How do I start debugging the poor performance?
How can I check if the query is creating any table locks?
Purpose of the query: I get all the companies (20,000) and loop through each company in the cursor and do calculations.
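On the lock question above: a couple of standard ways to look at what the session is holding while the cursor runs. sp_lock and sp_who2 are built-in procedures, and sys.dm_tran_locks is available on SQL Server 2005 and later:

-- current locks by session (SQL Server 2000 and later)
EXEC sp_lock

-- who is blocking whom
EXEC sp_who2

-- SQL Server 2005+: locks in the current database by object and mode
SELECT resource_type, resource_associated_entity_id, request_mode, request_status, request_session_id
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()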
I have a query like the one below, and it takes a couple of seconds to run:
SELECT a.Registration_Key, ag.Agreement_Type_Name, p.ServiceProvider
FROM dbo.Assessment a
INNER JOIN (
    SELECT DISTINCT Registration_Key, p.ServiceProvider, MAX(CSDS_Object_Key) AS [Sequence]
    FROM dbo.Assessment a
    INNER JOIN dbo.CD_Provider_Xref p ON a.Provider_CD = p.Provider_CD
    WHERE Creation_DT >= '07/01/2007' AND Reason_CD = 1
    GROUP BY Registration_Key, p.ServiceProvider
) AS s1 ON a.CSDS_Object_Key = s1.Sequence
INNER JOIN dbo.CD_Provider_XREF p ON a.Provider_CD = p.Provider_CD
INNER JOIN dbo.CD_Agreement_Type ag ON ag.Agreement_Type_CD = a.Agreement_Type_CD
LEFT OUTER JOIN (
    SELECT DISTINCT Registration_Key, p.ServiceProvider, 1 AS served
    FROM dbo.Encounters e
    INNER JOIN dbo.CD_Provider_Xref p ON e.Provider_CD = p.Provider_CD
    WHERE Encounter_Begin_DT BETWEEN '08/01/2007' AND '08/31/2007'
      AND Procedure_CD IS NOT NULL
      AND Encounter_Units > 0
) AS s2 ON a.Registration_Key = s2.Registration_Key AND p.ServiceProvider = s2.ServiceProvider
GROUP BY a.Registration_Key, ag.Agreement_Type_Name, p.ServiceProvider
However, if I add the served field (stamped with 1), it takes forever to run. All of the join columns have indexes, clustered and non-clustered, and I don't see any index fragmentation.
SELECT a.Registration_Key, ag.Agreement_Type_Name, p.ServiceProvider, served
FROM dbo.Assessment a
INNER JOIN (
    SELECT DISTINCT Registration_Key, p.ServiceProvider, MAX(CSDS_Object_Key) AS [Sequence]
    FROM dbo.Assessment a
    INNER JOIN dbo.CD_Provider_Xref p ON a.Provider_CD = p.Provider_CD
    WHERE Creation_DT >= '07/01/2007' AND Reason_CD = 1
    GROUP BY Registration_Key, p.ServiceProvider
) AS s1 ON a.CSDS_Object_Key = s1.Sequence
INNER JOIN dbo.CD_Provider_XREF p ON a.Provider_CD = p.Provider_CD
INNER JOIN dbo.CD_Agreement_Type ag ON ag.Agreement_Type_CD = a.Agreement_Type_CD
LEFT OUTER JOIN (
    SELECT DISTINCT Registration_Key, p.ServiceProvider, 1 AS served
    FROM dbo.Encounters e
    INNER JOIN dbo.CD_Provider_Xref p ON e.Provider_CD = p.Provider_CD
    WHERE Encounter_Begin_DT BETWEEN '08/01/2007' AND '08/31/2007'
      AND Procedure_CD IS NOT NULL
      AND Encounter_Units > 0
) AS s2 ON a.Registration_Key = s2.Registration_Key AND p.ServiceProvider = s2.ServiceProvider
GROUP BY a.Registration_Key, ag.Agreement_Type_Name, p.ServiceProvider, served
Hello SQL gurus. In the query below, I am using 2 TOP clauses to return the desired row. I am wondering if someone can shed some light on how to avoid using 2 TOP clauses and combine this into just one SELECT query?
select TOP 1 * from (select TOP 2 Num from A order by Num) X order by Num desc
I truly appreciate your help, as this performance issue has been bugging me for quite some time...
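One possible single-pass alternative to the nested TOPs above, assuming SQL Server 2005 or later (ROW_NUMBER is not available on 2000) and the table and column names from the query; this is a sketch, not a measured improvement:

-- second-lowest Num from A without nesting two TOPs
SELECT Num
FROM (
    SELECT Num, ROW_NUMBER() OVER (ORDER BY Num) AS rn
    FROM A
) AS X
WHERE rn = 2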
I usually am all over answering these kinds of questions, but while I continue to work on this issue, maybe someone here can lend me a hand. A vendor application we run stores metadata about backup blobs stored on a NAS device. The app basically backs up select folders on 1400 remote computers in the back office of our stores and stores this on a NAS, while maintaining metadata about the BLOBs in SQL Server so that they can push recovery of the data back to the original store it came from. The database is roughly 80GB in size, has a single filegroup, and is on its own dedicated LUN. It uses TempDB heavily, and this is not something that I can change, but TempDB is on a different disk array.
Today I spent hours on a conference call with them looking at a specific stored procedure that is used to clean up the records in the database after a BLOB file is deleted. A single BLOB file can have millions of related records in the database. There is a LEFT JOIN in the code that is against a table with 150 million + rows of data in it. The table size is fairly small, only 5 GB of data, but the LEFT JOIN spools 2.4GB of data to a Hash Match. It seems to me like the left join can't be removed, but I don't get how all of this works, because I didn't write the application. It is an INDEX SCAN. I can't seem to eliminate it. Is there anything I can do to help this thing out?
I am attempting to get a better understanding of why my SQL 2005 setup, when running a simple SELECT statement on a large table, is displaying very low IO in Performance Monitor. If I run a single SELECT * FROM testtable, I see a 4 MB/sec transfer rate and Disk Reads/sec is around 8-9. This particular table is sitting on a single U320 10k drive, so I was expecting to see far more substantial IO. Does anyone have any information on how IO is consumed by different SQL operations, so I can obtain a better understanding?
So I am experimenting with upgrading a Windows Mobile application from .NETCF 2.0 to .NETCF3.5, along with moving my SQL 2005 Compact to SQL Compact 3.5. I have a database that I upgraded using the recommended methods (creating a datasource in VS2008, opening the SQL 2005 Compact .sdf file and allowing the tool to upgrade to SQL Compact 3.5). On the device (Dell Axim x51), with the .SDF files on an SD Card, the query, when executed against the SQL 2005 Compact database file, takes 1.5 seconds, but takes 1min41sec to execute on the SQL Compact 3.5 database.
This is a fairly simple query, with an inner join (using about 4 inner join constraints), a where clause (over about 3 things), and an order by clause. The execution plan for the SQL Compact 3.5 query shows index seeks (one consuming 2% and the other consuming 0%, with the inner join using 98%). The database files are on the order of 90MB.
Can anyone offer any suggestion why the SQL Compact 3.5 query performance would be so much worse than the SQL 2005 Compact performance?
SELECT DISTINCT a.*
FROM test a
INNER JOIN test1 b ON b.col1 = a.col1
INNER JOIN test2 c ON c.col2 = a.col2
WHERE EXISTS (SELECT NULL FROM test3 d WHERE (d.col3 = a.col3 OR a.col3 IS NULL))
All the columns involved in the WHERE clause and JOIN conditions have indexes. Is there any alternative available for the above that can increase the performance?
Hello,

I have a small doubt about validating a username and password. I validate the username and password with the following query (forget about case-sensitivity of the password):

SELECT password FROM table WHERE username = 'Uname' AND password = 'pwd'

In the second scenario, I use the following and validate the password in .NET code:

SELECT password FROM table WHERE username = 'Uname'

1) If a user named 'Uname' does not exist in the database, then which query is faster (first or second)?
2.1) If the user exists and the password does not match, then which is faster?
2.2) Same as 2.1, plus: if there is a clustered index on the username column, is the first query optimized?

Thanks
How could I tell the performance difference between two queries:
One is:

SELECT * FROM table WHERE LOWER(columnname) = 'value'

The other is:

SELECT * FROM table WHERE columnname = 'value'

Basically the difference is the LOWER() function; how much will this function affect query performance? Is there a formal way to test it out, or some way to reason about it? Thanks, Mike
Hello, I'm new to OLAP systems and MDX, and am doing some testing on Microsoft Analysis Services 2000 SP3; the database is Microsoft SQL Server 2000 SP3. In the cube I designed, the fact table contains purchase information, including the cost and quantity of the parts and the suppliers of the parts. There are 2 measures, qtyAvailable and cost. Two dimensions are involved, which are part and supplier. Here is what I'm going to do: 1. calculate sum(qtyAvailable * cost * 0.0001) for all the items in the fact table; let us call this value sum1. 2. find in the fact table all of those parts whose sum(qtyAvailable * cost) is greater than sum1.
Here is the MDX to do the 2 things above:

WITH
  MEMBER [Measures].[prod1] AS '[Measures].[qtyAvailable] * [Measures].[cost]'
  MEMBER [Measures].[prod2] AS '[Measures].[prod1] * 0.0001'
  MEMBER [Measures].[sum1] AS 'sum(crossjoin([part].members, [supplier].members), [Measures].[prod2])'
  MEMBER [Measures].[sum2] AS 'sum(crossjoin([part].currentmember, [supplier].members), [Measures].[prod1])'
SELECT
  {[Measures].[sum2]} ON COLUMNS,
  Filter({[part].members}, ([Measures].[sum2] > [Measures].[sum1])) ON ROWS
FROM cube1
It takes 9 seconds to calculate only sum1 using a separate MDX query. The value of sum1 is 8256865.23. If I replace sum1 in the MDX provided above with 8256865.23, it takes several minutes to finish. But it keeps running for hours if I run the MDX query above with [sum1] instead of 8256865.23. So the calculation of sum1 seems to be the bottleneck. My MDX query iterates through the members of the dimension [part], and I don't know whether [sum1] is recalculated for each iteration or not. However, sum1 is constant for the duration of the whole MDX query, so it only needs to be calculated once. I tried to use the cache to improve the performance, but it didn't work. Can anyone tell me whether there is any way to optimize this query? Thanks so much. Roy
Please let me know how to increase the performance of the query below:
SELECT DISTINCT a.*
FROM a
INNER JOIN #temp1 b ON (a.col1 = b.col1 OR a.col1 IS NULL)
INNER JOIN #temp2 c ON (a.col2 = c.col1 OR a.col2 IS NULL)
Here, there are no indexes or PKs on the columns in any table. But I am sure that the tables #temp1 and #temp2 have distinct/unique values in the col1 columns used here. The table 'a' has redundant values in the columns used here.
Should I create a PK on the columns of #temp1 and #temp2 used here? Is that enough? Or should I also create an index on the columns of table 'a' used here?
Also, please let me know if there is any other way to increase the performance of the query.
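For what it's worth, creating the indexes mentioned above would look roughly like this; whether they help much here is uncertain, because the OR ... IS NULL conditions in the joins tend to prevent straightforward index seeks:

-- #temp1.col1 and #temp2.col1 are stated to be unique, so a unique clustered index (or PK) fits
CREATE UNIQUE CLUSTERED INDEX IX_temp1_col1 ON #temp1 (col1)
CREATE UNIQUE CLUSTERED INDEX IX_temp2_col1 ON #temp2 (col1)

-- non-unique supporting indexes on the joining columns of table a
CREATE INDEX IX_a_col1 ON a (col1)
CREATE INDEX IX_a_col2 ON a (col2)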
Hi, I want to know if anyone has any clue about why this happens. I have a table on SQL Server 7 with 320 thousand records, and when I execute a SELECT * on it, it takes about 6 seconds to give an answer. But the same table on SQL Server 2005 Enterprise takes about 16 seconds. Is that normal?
Hi all. I'm new to this forum and looking for some assistance.
I've run into a unique (for me) performance problem.
I have a select statement that performs fine (< 1 second) using one set of values in the criteria but very poorly (> 3 minutes) using different values. In both circumstances the query returns zero rows. The query involves a parent-child join with the criteria spread across both tables.
The execution plan looks similar between the two; the difference being a few percentage points difference on some of the operations. The tuning advisor has no recommendation in case 1 but suggests a couple of additional indexes and 4 statistics in case 2.
My gut tells me that the solution is *not* applying the additional indexes/statistics but some other issue. Or it could be the sushi I just ate.
Anyway, I'm hoping someone can point me in the right direction as what to analyze to determine why simply changing a single supplied criteria value would have such a dramatic effect on performance.
I have just created a logging table that I anticipate to have 10's of millions of rows (maybe 100's of millions eventually).
Basically it's a very basic, narrow table; we are using it to log hits on images for a web server.
My question: we want to run queries that show how many rows are logged per day, etc.; however, we want to make sure these queries, which we anticipate to be very heavy, do not slow down the system.
I have been recommended to have a separate database (mirror/replica) for reporting so that the performance of regular activity will not be affected.
I assume this means I would need another server for this other database?
I am thinking there are probably some alternative solutions to this as well. Getting a dedicated server just for these queries really isn't an option.
In order to improve things, it is not a problem to make some sacrifices. For example, having the data update every 15 minutes is more than acceptable.
I see certain websites I use employ this strategy of updating data every 15 minutes, but I am unsure what is likely going on behind the scenes. Also, the queries are lightning fast when run. I am thinking that they have some sort of table that is populated with pre-computed data, so it's quick to query, as sketched below.
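One common pattern that matches what you describe, sketched under assumptions: the table and column names (ImageHits, HitTime, DailyHitCounts) are invented here, not from your schema. Keep a small pre-aggregated table, refresh it on a schedule (for example a SQL Agent job every 15 minutes), and point the reporting queries at the summary table:

-- pre-aggregated summary table (hypothetical names)
CREATE TABLE DailyHitCounts (
    HitDate  datetime NOT NULL PRIMARY KEY,
    HitCount int      NOT NULL
)

-- refresh step run by the scheduled job; simple wipe-and-reload for the sketch
DELETE FROM DailyHitCounts

INSERT INTO DailyHitCounts (HitDate, HitCount)
SELECT DATEADD(dd, DATEDIFF(dd, 0, HitTime), 0) AS HitDate,   -- truncate the timestamp to the day
       COUNT(*) AS HitCount
FROM ImageHits
GROUP BY DATEADD(dd, DATEDIFF(dd, 0, HitTime), 0)

-- the reporting query then stays trivial and fast
SELECT HitDate, HitCount FROM DailyHitCounts ORDER BY HitDate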
Any thoughts or suggestions to give me some direction, are very much appreciated !