I have a query that returns raw tick data from a table. Unfortunately after
even a few days there are hundreds of thousands of rows so the following
query is not efficient.
SELECT *
FROM RAWTICKDATA
WHERE Status = 'I'
  AND ContractCode = ?
  AND RawTickID = (SELECT MAX(RawTickID)
                   FROM RAWTICKDATA
                   WHERE Status = 'I'
                     AND ContractCode = ?
                     AND Price = (SELECT MIN(Price)
                                  FROM RAWTICKDATA
                                  WHERE Status = 'I'
                                    AND ContractCode = ?))
The most obvious solution, then, is to get all tick data with Status = 'I' (Imported) for a contract, process it, and then move it to another table for archiving. I am faced with a problem, however: after selecting all data for a contract with Status = 'I', the application was updating these records to a new status of 'P' (processed). Unfortunately, another application is continuing to feed in live data, so it is possible that we will inadvertently update unprocessed data to 'P'.
Question: Is it possible to select all records with status 'I' (from the above query) and update their status to 'P' in one atomic operation?
I am not a programmer, but if this is possible I should be able to implement
the query.
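If the server is SQL Server 2005 or later, one hedged way to make this atomic is an UPDATE with an OUTPUT clause: flagging the rows and capturing exactly which rows were flagged happen in the same statement, so rows the live feed inserts in between cannot be swept up. A minimal sketch; the column types and the @ContractCode stand-in for the application's parameter are assumptions:

DECLARE @ContractCode varchar(20)
SET @ContractCode = 'ABC'   -- hypothetical value

CREATE TABLE #batch (RawTickID int, Price money)

UPDATE RAWTICKDATA
SET Status = 'P'
OUTPUT deleted.RawTickID, deleted.Price INTO #batch
WHERE Status = 'I' AND ContractCode = @ContractCode

-- #batch now holds exactly the rows this statement flagged;
-- process/archive from #batch without racing the live feed.

On SQL 2000, the same safety can be had by updating to an intermediate status inside a transaction and then selecting WHERE Status = that intermediate value.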
I am installing an FTS (full-text search) system on an existing system (which used LIKE '%...%' queries!! hahaha)
Anyway, it is working pretty well (AND FAST!) but when I type in a common word like "damage" I get like 32,000 records. Now, the server handles those records in about one second but the ASP page that returns the results takes about one MINUTE to download. When I save the source, it is almost 12 MEGS!!
So, basically, I am streaming 12 megs across the pipe and I want to reduce that.
I would like the system to detect when there are more than about 500 matching records and cancel the search.
I have put a TOP 500 into the search, and that actually works pretty well, but is there a better/smarter method?
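One hedged refinement on the TOP 500 approach: ask for one row more than the display limit, so the 501st row acts purely as an "over the limit" signal and the server never counts all 32,000 matches. A sketch, with table and column names assumed:

-- Fetch 501 rows; the extra row only signals that the limit was exceeded.
SELECT TOP 501 DocID, Title
FROM Docs
WHERE CONTAINS(Body, 'damage')

In the ASP page, if the recordset holds 501 rows, show the first 500 with a "more than 500 results, please refine your search" message; the download stays small either way.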
I have a problem with Reporting Services: my report contains an Area Chart plotting about 100,000 records. The problem is that sometimes the chart displays correctly and sometimes it doesn't, and users need to refresh the report for the chart to render properly. This doesn't happen when the data to plot is around 10,000 records.
Does RS have a limitation on how many points it can plot in an Area Chart? Any solutions?
Hi, I have a SQL Server database with 1.7 million records in a table, with about 30 fields. When I run SELECT * FROM tablename it can take over 5 minutes. How can I get this time down, or is that normal? Thanks, Jerry
I'm writing a query for the following: I need to collapse continuity. If the TermDate for a row is one day less than the EffDate of the next row (for the same ID), I need to collapse the records. (My example data is omitted here.) How should I write the query that gives the desired output, i.e., MIN(EffDate) and MAX(TermDate) whenever a TermDate is one day less than the EffDate of the next record?
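For what it's worth, this is the classic "gaps and islands" pattern. A hedged sketch, assuming a table named Coverage(ID, EffDate, TermDate) and SQL Server 2012 or later for LAG (on older versions the same grouping can be built with a self-join):

-- Rows chain together when the previous TermDate is exactly one day
-- before the current EffDate; each break starts a new island.
;WITH flagged AS (
    SELECT ID, EffDate, TermDate,
           CASE WHEN DATEADD(day, 1,
                    LAG(TermDate) OVER (PARTITION BY ID ORDER BY EffDate)) = EffDate
                THEN 0 ELSE 1 END AS new_island
    FROM Coverage
),
grouped AS (
    SELECT ID, EffDate, TermDate,
           SUM(new_island) OVER (PARTITION BY ID ORDER BY EffDate
                                 ROWS UNBOUNDED PRECEDING) AS island
    FROM flagged
)
SELECT ID, MIN(EffDate) AS EffDate, MAX(TermDate) AS TermDate
FROM grouped
GROUP BY ID, island;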
I'm stuck on a problem. I don't have much experience with databases; I'm new to this field, I recently started the job, and this problem is like a test for me. If I can deliver a solution, everything is fine; otherwise I'll be fired, and I'm in no position to lose this job, as it's my first job in software development and I got this chance with a lot of difficulty. So please help me if you can.
The problem: I'm using a procedure to check for duplicate records by string comparison against the addresses of people all over the country.
I'm using SQL Server 7.0. I have a single table (named DATA) that contains 350,000 records (address entries). There are about 35 columns, but I have to check for duplicates only against the address field. For that, I first remove special characters from the address field, then compare the first 20 characters for duplicate entries.
For this I populate another table (named RESULT).
How the logic works: initially the DATA table contains the records and the RESULT table is completely empty. First I pick the first address from the DATA table and check it against the entries in the RESULT table. If a matching address exists, I record a reference to the existing entry; otherwise the address is inserted as a new row. (For the very first record there is no entry in RESULT, so the address is inserted. Then the second record is picked and compared with the one and only entry in RESULT; if it matches, a reference is recorded, otherwise it is inserted as the second row, and so on.)
Here is the problem: initially the procedure is very fast, but it gradually slows down, because when it checks the 10th record for duplication it compares against only 9 entries in RESULT, when it checks the 100th record it compares against 99, and when it checks the 10,000th record it compares against 9,999.
By the 100,000th record it gets extremely slow. What I have measured so far: 5,000 records in 4 minutes, 25,000 records in 22 minutes, and 100,000 records in 20 hours (initially fast, gradually slowing down). Here is the code for the procedure:
CREATE PROCEDURE pro1 AS
SET NOCOUNT ON

DECLARE @IvgId numeric(15)
DECLARE @Address nvarchar(250)
DECLARE @AddressClean nvarchar(250)
DECLARE @MaxLen int
DECLARE @Add nvarchar(250)
DECLARE @Ic int
DECLARE @FoundIvgId numeric(15)
DECLARE @NewIvgId numeric(15)

/* ProcessClean = 'N' marks rows not yet cleaned, for keeping track across system failures etc. */
DECLARE CurData CURSOR FORWARD_ONLY FOR
    SELECT IvgId, Address FROM Data WHERE ProcessClean = 'N'
OPEN CurData
FETCH NEXT FROM CurData INTO @IvgId, @Address
WHILE @@FETCH_STATUS = 0
BEGIN
    /* String cleaning: strip special characters from the address */
    SELECT @MaxLen = LEN(LTRIM(RTRIM(@Address)))
    SELECT @Address = LOWER(@Address)
    SELECT @Ic = 1
    SELECT @AddressClean = ' '
    /* @MaxLen is the full length of the address field, although only
       the first 20 characters are compared for duplicates later */
    WHILE @Ic <= @MaxLen
    BEGIN
        SELECT @Add = SUBSTRING(@Address, @Ic, 1)
        IF ASCII(@Add) > 47 AND ASCII(@Add) <= 64 AND @Add <> ' '
            SELECT @AddressClean = @AddressClean + @Add
        IF ASCII(@Add) > 90 AND @Add <> ' '
            SELECT @AddressClean = @AddressClean + @Add
        SELECT @Ic = @Ic + 1
    END
    /* Special characters removed; ProcessClean = 'Y' marks the row done, for failure checking */
    UPDATE Data SET AddressClean = @AddressClean, ProcessClean = 'Y' WHERE IvgId = @IvgId
    FETCH NEXT FROM CurData INTO @IvgId, @Address
END
CLOSE CurData
DEALLOCATE CurData

/* Up to here the procedure doesn't take too much time: it cleans all
   300,000 (3 lakh) records in about 40 minutes. The next part is the trouble. */
DECLARE CurDup CURSOR FOR
    SELECT IvgId, Address, AddressClean FROM Data WHERE ProcessDup = 'N'
OPEN CurDup
FETCH NEXT FROM CurDup INTO @IvgId, @Address, @AddressClean
SELECT @NewIvgId = 100
WHILE @@FETCH_STATUS = 0
BEGIN
    IF EXISTS (SELECT IvgId FROM Result
               WHERE SUBSTRING(RTRIM(LTRIM(AdressClean)), 1, 20)
                   = SUBSTRING(RTRIM(LTRIM(@AddressClean)), 1, 20))
        UPDATE Result SET DupIvgId = @IvgId
        WHERE SUBSTRING(RTRIM(LTRIM(AdressClean)), 1, 20)
            = SUBSTRING(RTRIM(LTRIM(@AddressClean)), 1, 20)
    ELSE
    BEGIN
        INSERT INTO Result VALUES (@NewIvgId, @Address, @AddressClean, 0)
        SELECT @NewIvgId = @NewIvgId + 1
    END
    UPDATE Data SET ProcessDup = 'Y' WHERE IvgId = @IvgId
    FETCH NEXT FROM CurDup INTO @IvgId, @Address, @AddressClean
END
CLOSE CurDup
DEALLOCATE CurDup

SET NOCOUNT OFF
PRINT 'Done................................'
That's the whole procedure. Next comes the SQL script of the DATA and RESULT tables.
So now I have given the whole description of my problem. I am eagerly waiting for a reply; if anybody can help, I will be very thankful. Bye for now, Bhupinder Singh
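For what it's worth, the row-by-row comparison is what makes this O(n²): each new record scans everything inserted so far. A set-based GROUP BY does the same duplicate grouping in one pass. A hedged sketch against the tables described above (it assumes AddressClean is already populated; Result2 and the kept-row choice via MIN are my substitutions, and the @NewIvgId counter is replaced by whatever numbering is needed):

-- Group all cleaned addresses by their first 20 characters in one statement.
SELECT MIN(IvgId)                                    AS IvgId,      -- one representative row per group
       MIN(Address)                                  AS Address,
       SUBSTRING(LTRIM(RTRIM(AddressClean)), 1, 20)  AS AddressKey,
       COUNT(*) - 1                                  AS DupCount    -- duplicates found for this address
INTO Result2
FROM Data
GROUP BY SUBSTRING(LTRIM(RTRIM(AddressClean)), 1, 20)

An index on AddressClean (or on a computed first-20-characters column) would also help the cursor version, but the set-based form avoids the quadratic comparisons entirely.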
Hi, I just started working on a new project, an RIA with a back end on SQL 2005.
The question I'm asking, about the database architecture, is which of these is possible/better:
a) (this is the preferred solution) having up to 5,000 (five thousand) databases inside a SQL 2005 instance (each client with his own small db);
b) one large database, with tables eventually having near a billion rows;
c) one single database with thousands of small tables.
Thank You All.
Edit: The client databases will be identical in structure. The motivation for giving each client his own database is to ease management (backups, restores, roll-backs); tools in this direction will be created (scripts, automated procedures, scheduled maintenance plans, etc.). Each client's data is proprietary, and keeping things separated seems safer legally. The hardware is not a problem: the db server will be a blade center running clustered MS Windows Server.
(07:51:49) him: I haven't followed SQL recently, but let me see (seems a difficult question)
(07:52:36) him: what is the motivation to have different databases for each client?
(07:52:46) me: (the hardware is not a problem, probably an IBM blade center; about the management, tools will be created to ease it)
(07:52:52) him: is it the size/efficiency, or do you want to separate them?
(07:53:02) me: for the backups and restores and roll-backs
(07:53:13) me: the DBs are identical
(07:53:30) him: identical in design or data inside them?
(07:53:33) me: yes, design
(08:02:12) him: so the limitation is: databases per instance of SQL Server, 32,767
(08:02:21) him: so it seems that in that respect you are going to be fine
(08:02:45) me: yes, I've seen the specs
(08:02:50) him: yep
(08:03:03) him: in terms of design
(08:03:13) him: managing 5000 databases would be a nightmare
(08:03:22) him: unless there is some automatic way to do it
(08:04:18) me: that's for sure, and that's what is intended
(08:04:29) me: automated procedures, scripts etc.
(08:05:27) him: so again, what is the motivation to partition each client into a different database?
(08:06:05) me: there is a need to keep their data separate
(08:06:19) me: each client is proprietary
(08:06:44) me: and to have things separated is more legal
(08:07:25) him: well, you try with several databases and see how it scales
(08:08:37) him: one thing that would also prevent confusing database connections, and may be helpful if you decide to change to a single-database model, is to design the database as if it would be used with many clients but use it just for one
(08:09:09) him: then, if you decide to merge the databases in the future, it would be relatively easy
(08:09:16) him: unless this would create too much overhead
(08:09:41) me: yes, this is already the case
(08:09:58) me: so, I go for a test, you say
(08:10:12) me: and see how it's working
(08:10:16) him: ok. then it seems that the issue is maintenance more than anything else
(08:10:29) him: yeah. I don't think SQL Server would be a problem
(08:10:44) him: there would be some overhead from the different databases, but it should be small
(08:11:12) me: :)
(08:11:24) me: ok, thank you very much
(08:12:53) him: http://www.thescripts.com/forum/thread503521.html
(08:13:09) him: I think either way has pluses and minuses
(08:16:56) him: also read the forum link I sent you
(08:17:00) me: ty, I'm on it
(08:17:16) him: there are some interesting considerations I did not think of
(08:17:58) me: I just saw the bits about caching, maintenance plans, stored procs
How can I quickly delete thousands of rows in a table (SQL 2000) according to a query, without blowing up the log file? For instance, executing the query

DELETE FROM transactions
WHERE transactiondatestamp < DATEADD(m, -4, GETDATE())

increases my log file to almost 6 GB before the job is done and the normal size is re-obtained. In addition, it took a long time to complete. TRUNCATE TABLE would be faster, but unfortunately it cannot take a WHERE clause.
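A hedged pattern for SQL 2000: delete in small batches so the log space from each batch can be reused (via frequent log backups, or automatically under the SIMPLE recovery model), instead of logging the whole delete as one giant transaction. The database name and backup path below are hypothetical:

-- SET ROWCOUNT limits how many rows each DELETE touches on SQL 2000.
SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    DELETE FROM transactions
    WHERE transactiondatestamp < DATEADD(m, -4, GETDATE())

    IF @@ROWCOUNT = 0 BREAK   -- nothing left matching the predicate

    -- Let the log truncate between batches instead of growing to 6 GB.
    BACKUP LOG mydb TO DISK = 'D:\backup\mydb_log.bak'
END
SET ROWCOUNT 0   -- always reset, or later statements stay limited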
I have a custom application that on occasion requires thousands of T-SQL files (on the file system) to be compiled into the database. What is the quickest way to accomplish this? We currently have a small VBS script that gets a list of all the files, then loops around a call to osql; each call to osql opens/closes a connection to the destination database (currently across the network).
I have a query which runs fast in QA but slow in the application: it takes about 16 ms in QA but 1,000 ms from the application. What I wanted to know is: why would the query take a long time in the application when it runs fast on SQL Server? How should we try debugging it? Ajay
I have a report that I'd like to use to deliver tons of PDFs, one for each of our customers, for a billing cycle. There is a table that the report reads from that gives it all the data necessary for the report (which is a customer bill), and the table also has a column holding the file name for the PDF of that particular customer's bill. Basically, each table row represents one output report (bill), and each row has its file name nicely formatted for me. Writing the report is not a problem; figuring out how to run through thousands of rows, generating a PDF for each with the file name from the table, is my challenge.
I would like to create either an SP or a scheduled SSIS package that can run the report for each line of data in my table and spit out a PDF file to a UNC path. It might have to generate thousands of PDFs to a UNC path that has plenty of space. Another SSIS package will later move the PDFs to their proper directory.
So, has someone done this before? Any suggestions? Is there a quick path to doing this, or would it take a ton of time? Any tutorials out there?
Thanks, Keith
p.s. My preference is to do this in SPs or SSIS; if I need a .NET language to do it, then VB.NET, but I'd rather avoid that if I can.
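One hedged approach (not the only one; data-driven subscriptions do this natively in SSRS Enterprise edition): use Reporting Services URL-access rendering, and have a query emit one render URL per bill row for an SSIS Foreach loop (or a small Script Task) to fetch and save to the UNC path. The table, column, server, and report path names below are assumptions based on the description:

-- One row per bill: the URL that renders that customer's bill as PDF,
-- plus the target file name already stored in the table.
SELECT 'http://myserver/ReportServer?/Billing/CustomerBill'
     + '&rs:Command=Render&rs:Format=PDF'
     + '&BillID=' + CAST(BillID AS varchar(20)) AS RenderUrl,
       PdfFileName
FROM BillData

The SSIS side loops over this rowset, downloads each RenderUrl, and writes the bytes to the UNC path under PdfFileName; the later move-package then picks them up.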
I have a query similar to the following. The intent of this query is to retrieve the top 6 records meeting the specified criteria (LogTypeName = 'Process Status Start' OR LogTypeName = 'Process Status End'), based on the most recent dates. Please keep in mind that I expect to return up to 6 records for each unique LogProcessName; this could be thousands of different LogProcessNames, with up to 6 records each.
1) The table I am executing against is currently very large, and the query takes a long time to run. It would seem there must be a more efficient query to get the results I am looking for?
2) CTEs don't work on SQL 2000. I need a query that does.
3) I cannot modify the database itself in the process.
;WITH cte AS (
    SELECT [LogProcessName], [LogBody], [LogDate], [LogGUID],
           ROW_NUMBER() OVER (PARTITION BY [LogProcessName]
                              ORDER BY [LogDate] DESC) AS RN
    FROM [LOGTABLE]
    WHERE [LogTypeGUID] IN (SELECT LogTypeGUID
                            FROM LOGTYPE
                            WHERE LogTypeName = 'Process Status Start'
                               OR LogTypeName = 'Process Status End')
)
SELECT *
FROM cte
WHERE RN BETWEEN 1 AND 6
ORDER BY [LogProcessName] DESC, [LogDate] DESC
Does anybody have an idea that would yield the results I am looking for and take into account items 1-3 above?
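A hedged SQL 2000-compatible equivalent uses a correlated TOP subquery per LogProcessName instead of ROW_NUMBER. It assumes LogGUID uniquely identifies a row; ties on LogDate may come back slightly differently than in the CTE version:

SELECT t.[LogProcessName], t.[LogBody], t.[LogDate], t.[LogGUID]
FROM [LOGTABLE] t
WHERE t.[LogTypeGUID] IN (SELECT LogTypeGUID FROM LOGTYPE
                          WHERE LogTypeName IN ('Process Status Start',
                                                'Process Status End'))
  AND t.[LogGUID] IN (
        -- the 6 most recent qualifying rows for this process name
        SELECT TOP 6 t2.[LogGUID]
        FROM [LOGTABLE] t2
        WHERE t2.[LogProcessName] = t.[LogProcessName]
          AND t2.[LogTypeGUID] IN (SELECT LogTypeGUID FROM LOGTYPE
                                   WHERE LogTypeName IN ('Process Status Start',
                                                         'Process Status End'))
        ORDER BY t2.[LogDate] DESC)
ORDER BY t.[LogProcessName] DESC, t.[LogDate] DESC

An index on (LogProcessName, LogDate) would make the inner TOP 6 a cheap seek per group instead of a scan.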
I have an update query that has now been running for 22 hours. It runs against two tables: a lookup table that was just created within the batch, and a denormalised table used for data analysis.
The query that's causing the problem is:
--//////////////////////////////////// this is the one that's running
PRINT 'Update Provider 04-05 EmAdmsCount12mths : ' + CAST(GETDATE() AS varchar)
GO
UPDATE Provider_APC_2004_05
SET EmAdmsCount12mths =
    (SELECT COUNT(*) - 1
     FROM Combined_Admissions
     WHERE (Combined_Admissions.NHSNumber = Provider_APC_2004_05.NHSNumber
            OR Combined_Admissions.PASNUMBER = Provider_APC_2004_05.PDDISTNO)
       AND Combined_Admissions.AdmDate BETWEEN DATEADD(yyyy, -1, Provider_APC_2004_05.AdmDate)
                                           AND Provider_APC_2004_05.AdmDate
       AND Combined_Admissions.AdmMethod LIKE 'Emergency%')
       -- AND CA.NHSorPrivate = 'NHS'
FROM Provider_APC_2004_05, Combined_Admissions
Any help in improving the speed would be most welcome, as there are 3 more of these updates to run right after this one, and the analysis tables are almost double the size of this one.
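One thing worth flagging, hedged as a reading of the posted text rather than certain knowledge of the schema: the trailing FROM Provider_APC_2004_05, Combined_Admissions cross-joins Combined_Admissions against the table being updated with no join condition, so every target row can be touched once per Combined_Admissions row. The correlated subquery already references Combined_Admissions, so the extra FROM appears redundant. A sketch of the same update without it:

UPDATE Provider_APC_2004_05
SET EmAdmsCount12mths =
    (SELECT COUNT(*) - 1
     FROM Combined_Admissions ca
     WHERE (ca.NHSNumber = Provider_APC_2004_05.NHSNumber
            OR ca.PASNUMBER = Provider_APC_2004_05.PDDISTNO)
       AND ca.AdmDate BETWEEN DATEADD(yyyy, -1, Provider_APC_2004_05.AdmDate)
                          AND Provider_APC_2004_05.AdmDate
       AND ca.AdmMethod LIKE 'Emergency%')

Indexes on Combined_Admissions(NHSNumber, AdmDate) and (PASNUMBER, AdmDate) would support the correlated lookup; the OR across two different key columns is the other likely scan-driver.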
This sounds like a pretty easy one. I have a SQL 2000 server with two 3.4 GHz CPUs and 1 GB of RAM, and one database on it. I go into Query Analyzer on another machine and run a simple query like SELECT * FROM USERS, which should return 15,000 rows.
It takes 30 (thirty) seconds to finish this query. OMG
Where do I start to decipher why on Earth this takes more than .01 seconds?
Hi, I have a query which has suddenly started responding slowly. Can anyone tell me what the possibilities could be? I tried updating statistics (I am on SQL 7.0; it's done automatically, but I did it manually again). I used UNION ALL in place of UNION, but it had no big effect. Any other thoughts? Thanks!
I have a query that takes minutes to execute, even though only about 300,000 records are being processed. I would appreciate any help with optimizing it. I have two tables: User and Usage. Table User has two fields, User_Id and Date_Created, and a non-clustered index on User_Id. Table Usage also has two fields, User_Id and Date_Used, and non-clustered indexes on both fields. The User table is populated when the user registers; the Usage table is populated every time the user opens a document.
Here is what I need to do: for each day in the time frame (which varies), get the number of users from the Usage table who opened a document at least once during the preceding 30 days, counting only activity after they registered. For example, if the time frame is 8/01/00 - 8/31/00, I need to get the following data:
date     returns
-------  -------
8/01/00  10    (10 users returned to the document between 7/2/00 and 8/1/00)
8/02/00  15    (15 users returned between 7/3/00 and 8/02/00)
...
8/31/00  20    (20 users returned between 8/1/00 and 8/31/00)
Here is my query:
SELECT [date],
       (SELECT COUNT(DISTINCT user_id)
        FROM usage u
        JOIN [user] ON u.[user_id] = [user].[user_id]
        WHERE u.[date] BETWEEN usage.[date] - 30 AND usage.[date]
          AND u.[date] > [user].date_created
        GROUP BY usage.[date]) AS returns
FROM usage
WHERE [date] BETWEEN @date1 AND @date2
This query works fine, but it is too slow. We use MS SQL Server 7.0.
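A hedged rewrite that often helps on SQL 7.0: drive the outer query from the distinct dates only, so the 30-day count is computed once per day rather than once per usage row (column names taken from the posted query):

SELECT d.[date],
       COUNT(DISTINCT u.[user_id]) AS returns
FROM (SELECT DISTINCT [date]
      FROM usage
      WHERE [date] BETWEEN @date1 AND @date2) d
JOIN usage u
  ON u.[date] BETWEEN d.[date] - 30 AND d.[date]
JOIN [user] usr
  ON usr.[user_id] = u.[user_id]
 AND u.[date] > usr.date_created
GROUP BY d.[date]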
I have a query which responds immediately when run; however, if I add an ORDER BY clause it takes 40 seconds. Below is the query with the ORDER BY clause:

SELECT DISTINCT LicensePlate, PlateTypeCode.PlateTypeCode, PlateTypeCode.PlateTypeCodeID
FROM Ticket
INNER JOIN PlateTypeCode ON PlateTypeCode.PlateTypeCodeID = Ticket.PlateTypeCodeID
ORDER BY LicensePlate
The Ticket table contains approx. 11,000 records. I have created a nonclustered index on the LicensePlate field, a varchar(7) field. Any suggestions for speeding up the query?
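A hedged thing to try: widen the nonclustered index so it covers the whole query, letting the DISTINCT and ORDER BY be satisfied by an ordered scan of the index instead of a sort (index name hypothetical):

CREATE NONCLUSTERED INDEX IX_Ticket_Plate_Type
ON Ticket (LicensePlate, PlateTypeCodeID)

With both selected Ticket columns in the index key, in ORDER BY order, the optimizer can read the rows already sorted by LicensePlate.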
The following query is causing some problems because it's taking too long to complete. I looked at the estimated execution plan, and I am unsure why it appears to spend over 50% of its time doing a Bookmark Lookup on one particular column (SRA_SR_ID in the S_EVT_ACT table). There is an index on the column, S_EVT_F14, but I'm not sure the query is using the index properly. What can be done to improve this particular problem? In general, does anyone have suggestions for optimizing the query as a whole?
Thanks in advance. Clive
SELECT T1.APPT_REPT_FLG, T18.X_ALIS_ID,
       CONVERT(VARCHAR(10), T1.APPT_START_TM, 101) + ' ' + CONVERT(VARCHAR(10), T1.APPT_START_TM, 8),
       T1.ASGN_USR_EXCLD_FLG, T2.NAME, T19.STAT_CD, T1.APPT_REPT_TYPE, T15.NAME,
       CONVERT(VARCHAR(10), T1.TODO_ACTL_END_DT, 101) + ' ' + CONVERT(VARCHAR(10), T1.TODO_ACTL_END_DT, 8),
       T1.TODO_CD, T1.X_DOC_CAT_ID,
       CONVERT(VARCHAR(10), T1.TODO_PLAN_START_DT, 101) + ' ' + CONVERT(VARCHAR(10), T1.TODO_PLAN_START_DT, 8),
       T1.TARGET_OU_ID, T7.ZIPCODE, T3.ZIPCODE, T9.EXP_RPT_NUM, T1.LAST_UPD_BY,
       T1.OWNER_PER_ID, T1.PART_RPR_ID, T1.RATE_LST_ID,
       CONVERT(VARCHAR(10), T1.APPT_REPT_END_DT, 101) + ' ' + CONVERT(VARCHAR(10), T1.APPT_REPT_END_DT, 8),
       T1.ACTIVITY_UID, T4.NAME, T1.PR_TMSHT_LINE_ID, T18.LAST_NAME, T7.ADDR, T18.SEX_MF, T1.BILLABLE_FLG,
       CONVERT(VARCHAR(10), T1.TODO_PLAN_END_DT, 101) + ' ' + CONVERT(VARCHAR(10), T1.TODO_PLAN_END_DT, 8),
       T1.SRA_SR_ID, T1.TARGET_PER_ADDR_ID, T18.X_FST_NAME, T1.EVT_STAT_CD,
       CONVERT(VARCHAR(10), T1.X_SCAN_DATE, 101) + ' ' + CONVERT(VARCHAR(10), T1.X_SCAN_DATE, 8),
       T1.ROW_STATUS, T1.ACD_CALL_DURATION, T5.NAME, T8.FAX_PH_NUM, T8.X_FST_NAME, T8.LAST_NAME,
       T1.MODIFICATION_NUM, T1.X_CAMP_ID,
       CONVERT(VARCHAR(10), T1.X_SCAN_TIME, 101) + ' ' + CONVERT(VARCHAR(10), T1.X_SCAN_TIME, 8),
       T1.ASSOCIATED_COST, T13.NAME,
       CONVERT(VARCHAR(10), T1.LAST_UPD, 101) + ' ' + CONVERT(VARCHAR(10), T1.LAST_UPD, 8),
       T17.TMSHT_NUM, T1.PR_SYMPTOM_CD, T1.OPTY_ID,
       CONVERT(VARCHAR(10), T18.BIRTH_DT, 101) + ' ' + CONVERT(VARCHAR(10), T18.BIRTH_DT, 8),
       T1.PR_EXP_RPT_ID,
       CONVERT(VARCHAR(10), T1.APPT_START_DT, 101) + ' ' + CONVERT(VARCHAR(10), T1.APPT_START_DT, 8),
       T8.FST_NAME, T16.SR_NUM, T1.SRA_DEFECT_ID, T1.CREATED_BY, T8.WORK_PH_NUM,
       CONVERT(VARCHAR(10), T1.COST_EXCH_DT, 101) + ' ' + CONVERT(VARCHAR(10), T1.COST_EXCH_DT, 8),
       T1.CALL_ID, T1.X_CLIENT_ID, T1.PROJ_ID, T12.DEFECT_NUM, T1.CREATOR_LOGIN, T1.CONFLICT_ID,
       T19.OUTCOME_CD, T1.TEMPLATE_FLG, T2.PR_ADDR_ID, T1.PREV_ACT_ID, T1.X_DOC_NAME, T1.EXP_RLTD_FLG,
       T1.X_BATCH_REF, T1.PRI_LST_ID, T1.SRC_ID, T1.X_POLICY_REF,
       CONVERT(VARCHAR(10), T1.CREATED, 101) + ' ' + CONVERT(VARCHAR(10), T1.CREATED, 8),
       T1.EMAIL_FORWARD_FLG, T11.DMT_NUM, T1.TMSHT_RLTD_FLG, T1.ROW_ID, T10.NAME, T18.CONSUMER_FLG,
       T1.TARGET_PER_ID, T18.FST_NAME, T1.PRIV_FLG, T3.PROVINCE, T8.X_ALIS_ID, T8.JOB_TITLE,
       T14.NAME, T1.NAME, T1.PCT_COMPLETE, T1.SRA_TYPE_CD, T1.ALARM_FLAG, T1.CAL_DISP_FLG,
       T1.EVT_PRIORITY_CD, T1.COST_CURCY_CD, T2.LOC,
       CONVERT(VARCHAR(10), T1.TODO_ACTL_START_DT, 101) + ' ' + CONVERT(VARCHAR(10), T1.TODO_ACTL_START_DT, 8),
       T20.FILE_NAME, T1.SRA_RESOLUTION_CD, T6.PRDINT_ID, T1.OWNER_LOGIN
FROM dbo.S_EVT_ACT T1
LEFT OUTER JOIN dbo.S_ORG_EXT T2 ON T1.TARGET_OU_ID = T2.ROW_ID
LEFT OUTER JOIN dbo.S_ADDR_ORG T3 ON T2.PR_ADDR_ID = T3.ROW_ID
LEFT OUTER JOIN dbo.S_PRI_LST T4 ON T1.PRI_LST_ID = T4.ROW_ID
LEFT OUTER JOIN dbo.S_PRI_LST T5 ON T1.RATE_LST_ID = T5.ROW_ID
LEFT OUTER JOIN dbo.S_ACT_PRDINT T6 ON T1.ROW_ID = T6.ACTIVITY_ID
LEFT OUTER JOIN dbo.S_ADDR_PER T7 ON T1.TARGET_PER_ADDR_ID = T7.ROW_ID
LEFT OUTER JOIN dbo.S_CONTACT T8 ON T1.TARGET_PER_ID = T8.ROW_ID
LEFT OUTER JOIN dbo.S_EXP_RPT T9 ON T1.PR_EXP_RPT_ID = T9.ROW_ID
LEFT OUTER JOIN dbo.S_OPTY T10 ON T1.OPTY_ID = T10.ROW_ID
LEFT OUTER JOIN dbo.S_PART_RPR T11 ON T1.PART_RPR_ID = T11.ROW_ID
LEFT OUTER JOIN dbo.S_PROD_DEFECT T12 ON T1.SRA_DEFECT_ID = T12.ROW_ID
LEFT OUTER JOIN dbo.S_PROD_INT T13 ON T6.PRDINT_ID = T13.ROW_ID
LEFT OUTER JOIN dbo.S_PROJ T14 ON T1.PROJ_ID = T14.ROW_ID
LEFT OUTER JOIN dbo.S_SRC T15 ON T1.SRC_ID = T15.ROW_ID
LEFT OUTER JOIN dbo.S_SRV_REQ T16 ON T1.SRA_SR_ID = T16.ROW_ID
LEFT OUTER JOIN dbo.S_TMSHT_LINE T17 ON T1.PR_TMSHT_LINE_ID = T17.ROW_ID
LEFT OUTER JOIN dbo.S_CONTACT T18 ON T1.X_CLIENT_ID = T18.ROW_ID
LEFT OUTER JOIN dbo.S_CAMP_CON T19 ON T1.X_CAMP_ID = T19.SRC_ID AND T1.TARGET_PER_ID = T19.CON_PER_ID
LEFT OUTER JOIN dbo.S_ACTIVITY_ATT T20 ON T1.ROW_ID = T20.PAR_ROW_ID
WHERE ((T1.APPT_REPT_FLG != 'Y' OR T1.APPT_REPT_FLG IS NULL)
  AND (T1.TEMPLATE_FLG != 'Y' AND T1.TEMPLATE_FLG != 'P' OR T1.TEMPLATE_FLG IS NULL))
  AND (T1.SRA_SR_ID = '1-EQLOO')
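A hedged note on the Bookmark Lookup specifically: a nonclustered index covers only its key columns, so after seeking S_EVT_F14 on SRA_SR_ID the engine must fetch every other selected column row by row, which is the lookup you see. One option (hypothetical name; an assumption about your schema, not Siebel-specific advice) is to widen the key with the WHERE-clause columns so non-qualifying rows are filtered before any lookup; SQL 2000 has no INCLUDE clause, so they go into the key itself:

CREATE NONCLUSTERED INDEX S_EVT_F14_COVER
ON dbo.S_EVT_ACT (SRA_SR_ID, APPT_REPT_FLG, TEMPLATE_FLG)

With ~100 selected columns the query can't be fully covered, but filtering earlier shrinks the number of lookups; a WITH (INDEX(S_EVT_F14)) hint on T1 would at least confirm whether the existing index is being used.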
I have a query that is taking too long to run. It takes 14 seconds to return 6,800 rows. However, if I move the query out of the stored proc, it takes 1 second. I want to understand this issue and ideally fix the stored proc case.
I've simplified my actual queries for readability.
-- @filter is a value to filter against, or NULL to return all records.
CREATE PROCEDURE queryPlayerStations (@filter INTEGER)
AS
SELECT *
FROM MyTable
-- Other joins and query logic omitted for brevity
WHERE (@filter IS NULL) OR (MyTable.Column = @filter)
GO
When I run the query directly in Query Analyzer, it runs very fast.
DECLARE @filter INTEGER
SET @filter = NULL

-- Takes ~1 second to return 6800 rows. That's great performance.
SELECT *
FROM MyTable
-- Other joins and query logic omitted for brevity
WHERE (@filter IS NULL) OR (MyTable.Column = @filter)
When I make @filter a local variable inside the stored proc instead of a parameter, it runs fast.
CREATE PROCEDURE queryPlayerStations
AS
DECLARE @filter INTEGER
SET @filter = NULL

SELECT *
FROM MyTable
-- Other joins and query logic omitted for brevity
WHERE (@filter IS NULL) OR (MyTable.Column = @filter)
GO

-- Takes ~1 second to return 6800 rows. That's great performance.
EXEC dbo.queryPlayerStations
Anyone have any ideas what I can do to improve the stored proc case?
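For what it's worth, this is the classic parameter-sniffing shape: when @filter is a procedure parameter, the plan is compiled for a sniffed value, and the (@filter IS NULL OR ...) predicate forces one plan to serve both the "all rows" and "one value" cases; a local variable (as in the second proc) isn't sniffed, which is why it runs fast. A hedged fix on SQL 2005 and later is a statement-level recompile so each call is planned for its actual value:

CREATE PROCEDURE queryPlayerStations (@filter INTEGER)
AS
SELECT *
FROM MyTable
-- Other joins and query logic omitted for brevity
WHERE (@filter IS NULL) OR (MyTable.Column = @filter)
OPTION (RECOMPILE)   -- SQL 2005+: plan per execution, for the actual @filter
GO

On SQL 2000, CREATE PROCEDURE ... WITH RECOMPILE, or building the WHERE clause dynamically with sp_executesql, achieves a similar effect.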
I want to have a LIKE search in the following query.
SELECT DISTINCT TOP 200
    a.AccountID, a.AccountNumber, c.CLI, con.SurName,
    addr.Address1 AS [Account Address], addr.Postcode AS [Account Postcode],
    atp.Name AS Type, cs.Code AS Status
FROM account_t a
INNER JOIN customer_t cust ON a.customerID = cust.CustomerID
INNER JOIN AccountType_T atp ON cust.AccountTypeID = atp.AccountTypeID
INNER JOIN CustomerStatus_T cs ON a.CustomerStatusID = cs.CustomerStatusID
INNER JOIN Contacts_T con ON cust.MasterContactID = con.ContactID
INNER JOIN Address_T addr ON cust.MasterAddressID = addr.AddressID
LEFT OUTER JOIN CLI_T c ON a.AccountID = c.AccountID
WHERE (c.CLI LIKE @CLI + '%')
  AND (con.SurName LIKE @Surname + '%')
  AND (addr.Address1 LIKE @Address + '%')
  AND (REPLACE(addr.Postcode, ' ', '') LIKE @Postcode + '%')
  AND c.DateArchived IS NULL
Here all the fields (@CLI, @Surname, @Address, @Postcode) are varchar types, which is making this query very slow. Are there any suggestions to improve this query?
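A hedged option when most parameters are usually empty: build the WHERE clause dynamically with sp_executesql, so only the filters the user actually supplied end up in the plan (the REPLACE on Postcode will still defeat any index unless a pre-stripped postcode column is stored). A trimmed sketch, inside the original proc where @CLI and @Surname are parameters; the varchar widths are assumptions:

DECLARE @sql nvarchar(4000)
SET @sql = N'SELECT DISTINCT TOP 200 a.AccountID, a.AccountNumber, c.CLI, con.SurName
             FROM account_t a
             INNER JOIN customer_t cust ON a.customerID = cust.CustomerID
             INNER JOIN Contacts_T con ON cust.MasterContactID = con.ContactID
             LEFT OUTER JOIN CLI_T c ON a.AccountID = c.AccountID
             WHERE c.DateArchived IS NULL'
IF @CLI     <> '' SET @sql = @sql + N' AND c.CLI LIKE @CLI + ''%'''
IF @Surname <> '' SET @sql = @sql + N' AND con.SurName LIKE @Surname + ''%'''

EXEC sp_executesql @sql,
     N'@CLI varchar(30), @Surname varchar(50)',
     @CLI, @Surname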
When I display the total record count (#) on a webpage, it is very slow. When I remove the total count and just show the records 20 per page, it responds very fast.
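If the total being displayed is for the whole table with no WHERE clause, a hedged SQL 2000-era shortcut is reading the approximate row count from sysindexes instead of running COUNT(*):

-- Approximate row count without scanning the table (can lag slightly).
SELECT [rows]
FROM sysindexes
WHERE id = OBJECT_ID('MyTable')   -- hypothetical table name
  AND indid < 2

For a filtered count there is no shortcut: a COUNT(*) with the same WHERE clause, supported by an index on the filtered column, is the usual approach.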
What might be going on here? The query is against a single table with some criteria. The database is active, with up to 200 connected users, and at peak times there are 10 or more active sessions. Most of the time, the query comes back in milliseconds. Occasionally, though, it can take a whole minute. I've been watching CPU, memory, and disk; none of these appears to be the bottleneck (CPU is usually below 10% and always below 50%, pages/sec is 0, and disk % is low and does not spike during the hangs). I also checked whether the hangs were synching with T-log backups or other scheduled jobs, but that is not the case. The box has good hardware: 4 GB RAM and 2 CPUs at 3.4 GHz. What could be holding this query up? Thanks for any ideas. Dave
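With CPU, memory, and disk ruled out, blocking by another session is a hedged first suspect for intermittent minute-long stalls in an otherwise fast query. A quick check to run during a hang (SQL 2000):

-- Sessions that are currently blocked, and which spid is blocking them.
SELECT spid, blocked AS blocking_spid, waittype, waitresource, cmd
FROM master..sysprocesses
WHERE blocked <> 0

If rows show up here while the query hangs, DBCC INPUTBUFFER(<blocking spid>) shows what the blocker is running.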
This UNION query is very slow. With only 3,000 records in the Parent table and 7,000 records in the Child table, it takes about 60 seconds to run and returns about 2,200 records. Any ideas on speeding it up? Thanks.

-- PART 1: HAS NO CHILD RECORDS
SELECT P.PROJECT_ID, 'No Child Data' AS SUB_TYPE
FROM PROJECTS P LEFT JOIN PROJECTS_CHILDREN C
  ON P.PROJECT_ID = C.PROJECT_ID
WHERE C.PROJECT_ID IS NULL
UNION
-- PART 2: HAS CHILD RECORDS, BUT NOT OF TYPE Z
SELECT PROJECT_ID, 'Child Data, Not type Z' AS SUB_TYPE
FROM PROJECTS
WHERE PROJECT_ID NOT IN
      (SELECT PROJECT_ID
       FROM PROJECTS_CHILDREN
       WHERE CHILD_TYPE LIKE "Z*")
  AND PROJECT_ID NOT IN
      (SELECT P.PROJECT_ID
       FROM PROJECTS P LEFT JOIN PROJECTS_CHILDREN C
         ON P.PROJECT_ID = C.PROJECT_ID
       WHERE C.PROJECT_ID IS NULL);
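A hedged rewrite (in SQL Server syntax; the "Z*" wildcard in the posted query looks like Access/Jet, where the same idea applies with its own operators): NOT IN over subqueries is often the slow part, and EXISTS/NOT EXISTS versions of the same logic usually fare better. Since the two branches are mutually exclusive, UNION ALL also avoids a duplicate-removal sort:

-- Part 1: projects with no children at all.
SELECT P.PROJECT_ID, 'No Child Data' AS SUB_TYPE
FROM PROJECTS P
WHERE NOT EXISTS (SELECT 1 FROM PROJECTS_CHILDREN C
                  WHERE C.PROJECT_ID = P.PROJECT_ID)
UNION ALL
-- Part 2: projects with children, but none of type Z.
SELECT P.PROJECT_ID, 'Child Data, Not type Z' AS SUB_TYPE
FROM PROJECTS P
WHERE EXISTS (SELECT 1 FROM PROJECTS_CHILDREN C
              WHERE C.PROJECT_ID = P.PROJECT_ID)
  AND NOT EXISTS (SELECT 1 FROM PROJECTS_CHILDREN C
                  WHERE C.PROJECT_ID = P.PROJECT_ID
                    AND C.CHILD_TYPE LIKE 'Z%')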
Excuse in advance my limited English. I've got this stored procedure:

DECLARE @Azienda varchar(3), @Utente varchar(20),
        @DataDa datetime, @DataA datetime,
        @AreaDa varchar(3), @AreaA varchar(3),
        @LineaDa varchar(3), @LineaA varchar(3),
        @TipoDa varchar(3), @TipoA varchar(3),
        @FamigliaDa varchar(3), @FamigliaA varchar(3),
        @ProdottoDa varchar(20), @ProdottoA varchar(20),
        @AgenteDa varchar(4), @AgenteA varchar(4),
        @NazioneDa varchar(50), @NazioneA varchar(50),
        @ZonaDa varchar(3), @ZonaA varchar(3),
        @ProvinciaDa varchar(2), @ProvinciaA varchar(2),
        @ClienteDa varchar(12), @ClienteA varchar(12),
        @DestinDa varchar(5), @DestinA varchar(5),
        @TipoDestinDa varchar(1), @TipoDestinA varchar(1),
        @FlagProdNoTarget varchar(5),
        @GrAcqDa varchar(10), @GrAcqA varchar(10),
        @TipoCliDa varchar(3), @TipoCliA varchar(3),
        @SettMercDa varchar(3), @SettMercA varchar(3)

SET @Azienda = '900'
SET @Utente = 'Eugenio'
SET @DataDa = '2004-01-01'
SET @DataA = '2004-01-10'
SET @AreaDa = 'UNI'
SET @AreaA = 'UNI'
SET @LineaDa = ''
SET @LineaA = 'ZZZ'
SET @TipoDa = ''
SET @TipoA = 'ZZZ'
SET @FamigliaDa = ''
SET @FamigliaA = 'ZZZ'
SET @ProdottoDa = ''
SET @ProdottoA = 'ZZZZZZZZZZZZZZZZZZZZ'
SET @AgenteDa = ''
SET @AgenteA = 'ZZZZ'
SET @NazioneDa = ''
SET @NazioneA = 'ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ'
SET @ZonaDa = ''
SET @ZonaA = 'ZZZ'
SET @ProvinciaDa = ''
SET @ProvinciaA = 'ZZ'
SET @ClienteDa = ''
SET @ClienteA = 'ZZZZZZZZZZZZ'
SET @DestinDa = ''
SET @DestinA = 'ZZZZZ'
SET @TipoDestinDa = ''
SET @TipoDestinA = 'Z'
SET @FlagProdNoTarget = 'Vero'
SET @GrAcqDa = ''
SET @GrAcqA = 'ZZZZZZZZZZ'
SET @TipoCliDa = ''
SET @TipoCliA = 'ZZZ'
SET @SettMercDa = ''
SET @SettMercA = 'ZZZ'

SELECT WSDFR.AreaCommerciale, WSDFR.Agente, WSDFR.NazDestin, WSDFR.ZonaDestin,
       WSDFR.ProvDestin, WSDFR.Cliente, WSDFR.DescrCliente, WSDFR.GruppoAcq,
       WSDFR.TipoCli, WSDFR.SettMerc, WSDFR.CDestin, WSDFR.DescrDestin,
       WSDFR.TipoDestin, WSDFR.EsclStatis, WSDFR.EsclTarget, WSDFR.ValoreNetto,
       WSDFR.TpDocum, WSDFR.VCambioITL, WSDFR.VCambioEUR, WSDFR.MeseFatt,
       WSDFR.Posizione
FROM W_St_DocFatt_Righe WSDFR
INNER JOIN UniP_Prodotti UPP ON WSDFR.Prodotto = UPP.Cod
WHERE WSDFR.Dtdocum BETWEEN @DataDa AND @DataA
  AND WSDFR.AreaCommerciale BETWEEN @AreaDa AND @AreaA
  AND WSDFR.LineaProdotto BETWEEN @LineaDa AND @LineaA
  AND WSDFR.TipoProdotto BETWEEN @TipoDa AND @TipoA
  AND WSDFR.FamigliaProdotto BETWEEN @FamigliaDa AND @FamigliaA
  AND WSDFR.Prodotto BETWEEN @ProdottoDa AND @ProdottoA
  AND WSDFR.Agente BETWEEN @AgenteDa AND @AgenteA

W_St_DocFatt_Righe is a view. This query runs on my SQL 7 server and takes about 10 seconds. The same query exists on another SQL 7 server, and until last week it took about 10 seconds there too. The configuration of both servers is the same; only the hardware is different. Now, on the second server, this query takes about 30 minutes to extract the same details, yet nobody has changed anything. If I execute the query without the WHERE clause, it shows the details in 7 seconds. The query still takes about the same time if the WHERE clause is

WHERE WSDFR.Dtdocum BETWEEN @DataDa AND @DataA
  AND WSDFR.AreaCommerciale BETWEEN @AreaDa AND @AreaA
  AND WSDFR.LineaProdotto BETWEEN @LineaDa AND @LineaA
  -- AND WSDFR.TipoProdotto BETWEEN @TipoDa AND @TipoA
  -- AND WSDFR.FamigliaProdotto BETWEEN @FamigliaDa AND @FamigliaA
  AND WSDFR.Prodotto BETWEEN @ProdottoDa AND @ProdottoA
  AND WSDFR.Agente BETWEEN @AgenteDa AND @AgenteA

or

WHERE WSDFR.Dtdocum BETWEEN @DataDa AND @DataA
  AND WSDFR.AreaCommerciale BETWEEN @AreaDa AND @AreaA
  -- AND WSDFR.LineaProdotto BETWEEN @LineaDa AND @LineaA
  -- AND WSDFR.TipoProdotto BETWEEN @TipoDa AND @TipoA
  AND WSDFR.FamigliaProdotto BETWEEN @FamigliaDa AND @FamigliaA
  AND WSDFR.Prodotto BETWEEN @ProdottoDa AND @ProdottoA
  AND WSDFR.Agente BETWEEN @AgenteDa AND @AgenteA

or

WHERE WSDFR.Dtdocum BETWEEN @DataDa AND @DataA
  AND WSDFR.AreaCommerciale BETWEEN @AreaDa AND @AreaA
  -- AND WSDFR.LineaProdotto BETWEEN @LineaDa AND @LineaA
  -- AND WSDFR.TipoProdotto BETWEEN @TipoDa AND @TipoA
  -- AND WSDFR.FamigliaProdotto BETWEEN @FamigliaDa AND @FamigliaA
  -- AND WSDFR.Prodotto BETWEEN @ProdottoDa AND @ProdottoA
  AND WSDFR.Agente BETWEEN @AgenteDa AND @AgenteA

It is a real puzzle! What happened? Has anyone had this kind of problem and found the right solution? Thanks in advance. Bye, Eugenio
Hello friends. I have table1, and table1 has 200 columns :) I have 647,600 records; I loaded them into table1 with a loop, which took one day :) I select on the three category columns (kategori, kategori1, kategori2) with a SELECT, but I have just one index: productnumber, which is the primary key. So my SELECT is very slow: 7-9 seconds. How can I select in 0.1 second? Should I create an index on kategori, kategori1, and kategori2? But I don't know how to create an index; could you show me the index for it, or faster SELECT code and an index, etc.? (See the sketch after the code.) Also, my search code has a security danger: it builds the SQL by concatenating user input into the string, which exposes table1 to attackers (SQL injection). Cheers.

I send three values, from treeview1's node, child node, and child-child node, to the page.aspx below:

Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    If Not Me.IsPostBack Then
        If Request("TextBox1") IsNot Nothing Then TextBox1.Text = Request("TextBox1")
        If Request("TextBox2") IsNot Nothing Then TextBox2.Text = Request("TextBox2")
        If Request("TextBox3") IsNot Nothing Then TextBox3.Text = Request("TextBox3")
    End If

    Dim searchword As String
    If Request("TextBox3") = "" And Request("TextBox2") = "" Then
        searchword = "Select * from urunlistesi where kategori= '" & Request("TextBox1") & "'"
    End If
    If Request("TextBox3") = "" Then
        searchword = "Select * from urunlistesi where kategori= '" & Request("TextBox1") & "' and kategori1= '" & Request("TextBox2") & "'"
    End If
    If Request("TextBox3") <> "" And Request("TextBox2") <> "" And Request("TextBox1") <> "" Then
        searchword = "Select * from urunlistesi where kategori= '" & Request("TextBox1") & "' and kategori1= '" & Request("TextBox2") & "' and kategori2= '" & Request("TextBox3") & "'"
    End If
    SqlDataSource1.SelectCommand = searchword
End Sub
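A hedged starting point for the index question: one composite nonclustered index whose key lists the filtered columns in order serves all three query shapes, since each WHERE filters on a leading prefix of (kategori, kategori1, kategori2). The index name is hypothetical:

CREATE NONCLUSTERED INDEX IX_urunlistesi_kategori
ON urunlistesi (kategori, kategori1, kategori2)

On the injection danger: giving SqlDataSource1 a parameterised SelectCommand (WHERE kategori = @kategori AND ...) and binding the three values as parameters removes the string concatenation entirely.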
Hi all. I want to use the following query in an SP to enable paging using ObjectDataSource. The problem (it being EXTREMELY slow) arises when I add these joins and WHERE statements.

SELECT r.RID AS ReqID, r.Name AS ReqName, r.Family AS ReqFamily,
       t3.Name AS DistName, t4.Name AS RurName, t5.Name AS VilName,
       n.Name + ' ' + n.Family AS NazerName,
       ROW_NUMBER() OVER (ORDER BY r.Family) AS RowRank
FROM Requests r
LEFT OUTER JOIN Nazeran n ON r.nazerID = n.ID
LEFT OUTER JOIN t1States t1 ON t1.ID = r.StateID
LEFT OUTER JOIN t2Provinces t2 ON t1.ID = t2.StateID AND r.ProvID = t2.ID
LEFT OUTER JOIN t3Districts t3 ON t2.ID = t3.provID AND t1.ID = t3.stateID AND r.DistID = t3.ID
LEFT OUTER JOIN t4RuralDistricts t4 ON t3.ID = t4.distID AND t2.ID = t4.provID AND t1.ID = t4.stateID AND r.RurID = t4.ID
LEFT OUTER JOIN t5Villages t5 ON t4.ID = t5.rurID AND t3.ID = t5.distID AND t2.ID = t5.provID AND t1.ID = t5.stateID AND r.VilID = t5.ID
WHERE r.stateid = (CASE WHEN @StateID IS NULL OR @StateID = '' THEN r.stateid ELSE @StateID END)
  AND r.provid = (CASE WHEN @provID IS NULL OR @provID = '' THEN r.provid ELSE @provID END)
  AND r.rID = (CASE WHEN @ReqID IS NULL OR @ReqID = '' THEN r.rID ELSE @ReqID END)
  AND ISNULL(r.nazerID, '') = (CASE WHEN @nazerID IS NULL OR @nazerID = '' THEN ISNULL(r.nazerID, '') ELSE @nazerID END)
  AND r.name + ' ' + r.family LIKE (CASE WHEN @ReqName IS NULL OR @ReqName = '' THEN r.name + ' ' + r.family ELSE '%' + @ReqName + '%' END)

There are 1 million rows in the [Requests] table, 200,000 rows in [t5Villages], and about 5,000 rows total in the other tables. As you can see, this is for a GridView showing a list of people requesting a loan, allowing users to make alternative searches based on name, family name, ID, etc. Would you please help me optimize and speed up this query? Many thanks.
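A hedged note on the WHERE clause: the CASE catch-all predicates compare each column to an expression derived from itself, which generally blocks index seeks on every filter. Rewriting each one in OR form, plus a statement-level recompile (SQL 2005+) so the plan matches the parameters actually supplied, is a common fix. The snippet below shows the shape inside the paging SP, where @StateID etc. are parameters:

SELECT r.RID
FROM Requests r
WHERE (@StateID IS NULL OR @StateID = '' OR r.stateid = @StateID)
  AND (@provID  IS NULL OR @provID  = '' OR r.provid  = @provID)
  AND (@ReqID   IS NULL OR @ReqID   = '' OR r.rID     = @ReqID)
OPTION (RECOMPILE)   -- plan per execution, for the actual parameter values

The LIKE '%' + @ReqName + '%' filter is unsearchable by a regular index regardless; if name search is common, a full-text index on Name/Family is worth considering.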