Hi, I created an SP which is supposed to calculate some values for me and return them as a resultset. I have a RequestTime field and a ResponseTime field; this SP should calculate how much time it takes for us to respond to a customer's request. If it is more than a specific limit, the SP should calculate the extra time and its fine, based on a constant fine-per-extra-minute value. It should also calculate the total fine for all records. To do so, I wrote it like this:
Code:
CREATE PROCEDURE up_responsetime
    @sdate smalldatetime,
    @edate smalldatetime,
    @TotalFine int OUTPUT
AS
DECLARE @Fine_Per_Min int
DECLARE @Max_ResponseTime int
SET @Fine_Per_Min = 3
SET @Max_ResponseTime = 120

SELECT ID, RequestDate, RequestTime, ResponseDate, ResponseTime,
    -- Calculate exceeded amount of time for each record
    DATEDIFF(minute, RequestDate + RequestTime, ResponseDate + ResponseTime) - @Max_ResponseTime AS ExtraTime,
    -- Calculate fine for each record
    (DATEDIFF(minute, RequestDate + RequestTime, ResponseDate + ResponseTime) - @Max_ResponseTime) * @Fine_Per_Min AS Fine
FROM CusRequests
WHERE RequestDate BETWEEN @sdate AND @edate

-- Calculate the sum of all fines and return it in the @TotalFine variable.
SELECT @TotalFine = SUM((DATEDIFF(minute, RequestDate + RequestTime, ResponseDate + ResponseTime) - @Max_ResponseTime) * @Fine_Per_Min)
FROM CusRequests
GO
but I have two concerns about this: 1- The calculated Fine field returns a negative figure when the ResponseTime falls within the allowed period. I want it to return zero in such cases, and only a positive figure when extra time was spent responding to the request.
2- As you can see, there are many redundant calculations in this code, and this will affect its performance. I want to know if there is a more optimized way to write it.
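A minimal sketch of one possible approach, using the same table and variables as above: compute the elapsed minutes once in a derived table, then clamp negative values to zero with CASE, so the DATEDIFF only appears once.

SELECT ID, RequestDate, RequestTime, ResponseDate, ResponseTime,
    CASE WHEN x.Elapsed > @Max_ResponseTime THEN x.Elapsed - @Max_ResponseTime ELSE 0 END AS ExtraTime,
    CASE WHEN x.Elapsed > @Max_ResponseTime THEN (x.Elapsed - @Max_ResponseTime) * @Fine_Per_Min ELSE 0 END AS Fine
FROM (
    SELECT ID, RequestDate, RequestTime, ResponseDate, ResponseTime,
        DATEDIFF(minute, RequestDate + RequestTime, ResponseDate + ResponseTime) AS Elapsed
    FROM CusRequests
    WHERE RequestDate BETWEEN @sdate AND @edate
) x

The @TotalFine SUM can reuse the same CASE expression, and presumably it should also be restricted to the @sdate/@edate range, which the final SELECT in the original procedure is not.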
For inserting the current date and time into the database, is it more efficient and faster to call GETDATE() inside SQL Server and insert that value, OR to call System.DateTime.Now in the application and then insert it into the table? I figure even small differences would be magnified under moderate traffic, so every little bit helps. Thanks.
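If the server-side route is chosen, one common pattern (a sketch, using a hypothetical Orders table) is to let the column default to GETDATE(), so the timestamp never has to travel over the wire at all:

-- hypothetical table; the DEFAULT fills in the timestamp on the server
CREATE TABLE Orders (
    OrderID int IDENTITY(1,1) PRIMARY KEY,
    CustomerID int NOT NULL,
    CreatedAt datetime NOT NULL DEFAULT (GETDATE())
)

-- the INSERT then simply omits the column
INSERT INTO Orders (CustomerID) VALUES (42)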
Hi, I have two tables, A and B. In table A I have three columns called empid, empname and empsalary, where empid is an identity column. Table A already has some records in it. Table B has the same schema except that empid is not an identity column, and table B does not contain any rows initially. All other aspects are the same as table A. Now I am going to delete some records from table A based on empid, and the deleted records should be inserted into table B with the same empid. I need to accomplish these two tasks in a single stored procedure. How do I do it? I need the entire code for the stored procedure. Please help me; I have been trying for the past 4 days. Thanks in advance.
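One possible sketch (on SQL Server 2005 or later) uses the OUTPUT clause of DELETE to capture the deleted rows and insert them into B in a single statement; on older versions the same thing can be done with an INSERT...SELECT followed by the DELETE inside a transaction. The procedure name here is made up:

CREATE PROCEDURE usp_ArchiveEmployee   -- hypothetical name
    @empid int
AS
BEGIN
    SET NOCOUNT ON

    -- delete from A and copy the deleted rows into B in one statement
    DELETE FROM A
    OUTPUT DELETED.empid, DELETED.empname, DELETED.empsalary
        INTO B (empid, empname, empsalary)
    WHERE empid = @empid
END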
I need to run a SQL statement in a stored procedure and find out whether it found any records, because the next statements depend on whether records were found or not. How do I do this?
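A minimal sketch of two common ways to do this; check @@ROWCOUNT immediately after the statement, or test with IF EXISTS before branching (the table and column names here are made up):

-- option 1: run the statement, then look at @@ROWCOUNT
SELECT * FROM Orders WHERE CustomerID = @CustomerID
IF @@ROWCOUNT > 0
    PRINT 'records were found'
ELSE
    PRINT 'no records'

-- option 2: test first with EXISTS, then branch
IF EXISTS (SELECT 1 FROM Orders WHERE CustomerID = @CustomerID)
    PRINT 'records were found'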
I have some simple TSQL running on a large block of data (565 million records). This process is estimated to take around 5 days and it is critical to get this running as quickly as possible.
I'm watching Windows Performance Monitor and both disk and CPU use are really low and all data is local so there is no network access involved. This is the only task running on this database server. How could this be? The process is running; I just need it to run faster. Typically a system is CPU-bound, disk-bound, network-bound, or operating under maximum capacity. This seems to be none of the above.
In Enterprise Manager, I see the process is sleeping with a Wait Type of "MISCELLANEOUS". What does that mean? Online help gives a pretty useless explanation of that.
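One way to see what the session is actually waiting on (a sketch, assuming SQL Server 2000 given the Enterprise Manager reference) is to query sysprocesses directly while the batch runs; the waittype/lastwaittype columns are usually more informative than the Enterprise Manager display:

-- substitute the spid of the long-running connection (visible in sp_who2)
SELECT spid, status, waittype, lastwaittype, waittime, physical_io, cpu
FROM master..sysprocesses
WHERE spid = 53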
I have a stored procedure that calls several views that rely on each other. In the past these views used to go parallel and use up all 100% of the CPU (12 cores), and now when the same stored procedure runs it only uses 8% of the CPU (1 core). This extends the time spent on the query from roughly 10-15 sec to 2-3min. I'm not quite sure why this is happening.
Are there some obvious things to look at when optimizing views to utilize all cores/threads? Also, it doesn't matter whether I set Cost Threshold for Parallelism to 1, 5 or 50, the behaviour is always the same, and I have Max Degree of Parallelism set to 0 as well, which should mean all available cores are used.
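For reference, a quick sketch of how to check and set both instance-level settings with sp_configure (the values shown are only examples). If these are already as expected, the next thing to look at is usually whether something in the plan itself, such as a scalar UDF, is forcing a serial plan:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE

EXEC sp_configure 'max degree of parallelism', 0      -- 0 = use all available cores
EXEC sp_configure 'cost threshold for parallelism', 5
RECONFIGURE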
I have data that I want at multiple granularities, 5,15,30 and 60 minutes. To reduce repetition, I have put them all in the same table, so that there is a column for 5,15,30 and 60 minutes, with a filtered index on each of the columns that removes the nulls. This means that each day will have 288 slots, but only 24 of the slots are filled in for 60 min data, and all of them are filled for 5 minute data.
I have another column that specifies the interval granularity, and my first thought was to access my data through a join, where I can use a CASE statement, and depending on the data granularity necessary, it will look at a different column:
INNER JOIN Data d
    ON d.settlement_key = CASE st.interval_granularity
        WHEN 5 THEN [5_min_settlement_key]
        WHEN 15 THEN [15_min_settlement_key]
        WHEN 60 THEN [60_min_settlement_key]
        ELSE NULL
    END
Despite the presence of the indexes on the columns, the process seems to be quite slow. I think this is probably because the query plan cannot know beforehand which of the columns it is going to use for any given dataset until it actually starts to run, so it may not be optimised.
How could I optimise this given the current structure? Maybe there are hints to be added to the join, or maybe I can clear the query plan each time the SQL is run? My other option for dealing with the different granularities was to use a single column and repeat the data multiple times, once at each granularity, but that makes my row counts and table size much higher, whereas the current design only adds a column for each additional granularity. Would this work any better in future versions of SQL Server, maybe with columnstore indexes?
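One possible rewrite (a sketch, assuming the bracketed key columns and interval_granularity live on the table behind the "st" alias, called SettlementTable here) is to split the join into one branch per granularity with UNION ALL, so each branch has a plain equality predicate that can seek on its own filtered index; adding OPTION (RECOMPILE) to the original form is another thing worth trying, since it lets the optimiser see the actual granularity value at execution time:

SELECT st.interval_granularity, d.*
FROM SettlementTable st              -- hypothetical name for the table aliased "st" above
INNER JOIN Data d ON d.settlement_key = st.[5_min_settlement_key]
WHERE st.interval_granularity = 5

UNION ALL

SELECT st.interval_granularity, d.*
FROM SettlementTable st
INNER JOIN Data d ON d.settlement_key = st.[15_min_settlement_key]
WHERE st.interval_granularity = 15

UNION ALL

SELECT st.interval_granularity, d.*
FROM SettlementTable st
INNER JOIN Data d ON d.settlement_key = st.[60_min_settlement_key]
WHERE st.interval_granularity = 60

Each branch can then use the matching filtered index instead of evaluating the CASE row by row.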
Hey :) I'm facing a lot of trouble trying to create a new pause/break system. Right now I'm building the query that counts how many records fall inside two time fields. Let me first show you my table:

ID (int) | stamp_start (Type: DateTime) | stamp_end (Type: DateTime) | Username (varchar)
0 | 17-03-07 12:00:00 | 17-03-07 12:30:00 | Hovgaard

The client will enter a start time and an end time, and this query should then count how many records are inside this period of time. Example: the client enters start time 12:05 and end time 12:35. The query should then return 1 record found. The same thing if the user enters 12:20 and 12:50. My current query looks like this:

SELECT COUNT(ID) AS Expr1 FROM table WHERE (start_stamp <= @pausetime_start) AND (end_stamp >= @pausetime_end)

But this will only count records if I enter exactly the same times as the ones inside the table. Any ideas how I can figure this out? Thanks for your time so far :)
/Jonas Hovgaard - Denmark
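The usual way to count rows whose interval overlaps the requested window is to require each row to start before the window ends and end after the window starts. A sketch, using the column names from the table definition above (the query in the post uses start_stamp/end_stamp, so adjust to whichever names are real):

SELECT COUNT(ID) AS Expr1
FROM [table]                           -- [table] stands in for the real table name
WHERE stamp_start <= @pausetime_end    -- row starts before the requested window ends
  AND stamp_end   >= @pausetime_start  -- row ends after the requested window starts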
I'm trying to execute a stored procedure within the case clause of select statement. The stored procedure returns a table, and is pretty big and complex, and I don't particularly want to copy the whole thing over to work here. I'm looking for something more elegant.
@val1 and @val2 are passed in
CREATE TABLE #TEMP (
    tempid INT IDENTITY(1,1) NOT NULL,
    myint INT NOT NULL,
    mybool BIT NOT NULL
)
INSERT INTO #TEMP (myint, mybool)
SELECT my_int_from_tbl,
       CASE WHEN @val1 IN (SELECT val1 FROM (EXEC dbo.my_stored_procedure my_int_from_tbl, my_param))
            THEN 1 ELSE 0 END
FROM dbo.tbl
WHERE tbl.val2 = @val2
SELECT COUNT(*) FROM #TEMP WHERE mybool = 1
If I have to, I can do a while loop and populate another temp table for every "my_int_from_tbl," but I don't really know the syntax for that.
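You can't EXEC a procedure inside a subquery, but one workable sketch is to capture the procedure's result set into a second temp table with INSERT...EXEC, once per row of dbo.tbl, and then do the IN test against that table. This assumes the procedure returns a column called val1, as the attempt above implies; @my_param is a stand-in for whatever my_param is above:

-- the column list must match the procedure''s full result set
CREATE TABLE #PROC_RESULTS (val1 INT /* plus the procedure''s other output columns */)

DECLARE @my_int INT

DECLARE int_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT my_int_from_tbl FROM dbo.tbl WHERE val2 = @val2

OPEN int_cursor
FETCH NEXT FROM int_cursor INTO @my_int
WHILE @@FETCH_STATUS = 0
BEGIN
    -- capture the procedure''s output for this row
    INSERT INTO #PROC_RESULTS
        EXEC dbo.my_stored_procedure @my_int, @my_param

    INSERT INTO #TEMP (myint, mybool)
    SELECT @my_int,
           CASE WHEN @val1 IN (SELECT val1 FROM #PROC_RESULTS) THEN 1 ELSE 0 END

    TRUNCATE TABLE #PROC_RESULTS   -- start fresh for the next row
    FETCH NEXT FROM int_cursor INTO @my_int
END
CLOSE int_cursor
DEALLOCATE int_cursor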
I just wonder whether there is any indicator or system parameter that can tell whether stored procedure A is being executed from Query Analyzer or from the application itself, so that if it is being run from Query Analyzer I can block it from executing or returning sensitive data.
What I want to do is block someone from executing the stored procedure in Query Analyzer and retrieving its sensitive results. Stored procedure A has been granted EXECUTE to the public user, but inside the application an access-denied message is shown if a particular user has no rights to use the system, even if they know the public user name and password, because there is a second layer of user validation inside the application.
However, from Query Analyzer there is no way to control execution of stored procedure A, since the user knows the public user name and password.
Looking forward to replies from the experts here. Thanks in advance.
Note: I hope my explanation clearly describes my current problem.
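This isn't a real security boundary (anything the application can read with those credentials, a determined user with the same credentials can read too), but as a sketch of the kind of check that's possible: APP_NAME() returns the application name from the client's connection string, so the procedure can refuse to run when it isn't called from the expected application. The procedure and application names below are hypothetical, and the application name is trivial to spoof, so treat this as an obstacle rather than protection:

CREATE PROCEDURE dbo.ProcA_Guarded    -- hypothetical wrapper around the sensitive logic
AS
BEGIN
    -- whatever the real application puts in its connection string''s "Application Name"
    IF APP_NAME() NOT LIKE 'MyBusinessApp%'
    BEGIN
        RAISERROR('Access denied.', 16, 1)
        RETURN
    END

    -- ...sensitive work goes here...
END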
declare @table table (
    ParentID INT,
    ChildID INT,
    Value float
)
INSERT INTO @table SELECT 1, 1, 1.2
[code]....
In this case the ParentID/ChildID pairs 1,1 and 2,2 and 3,3 are parent records, whereas NULL,1 is a child whose parent is 1; similarly the NULL,2 records are children whose parent is 2, and so on.
Now my requirement is to display the parent records sorted ascending by Value, and to display each set of child records immediately after its corresponding parent.
I need a little help here. I want to transfer ONLY new records AND update any modified records from Oracle into SQL Server using DTS. How should I go about it?
a) How do I use a global variable to get the max date? Where and what DTS task should I use to complete the job? Data Driven Query? Transform Data task? How? Can you give me samples? Perhaps you can email me the demo package as well.
b) So far, what I did was: I have a datemodified field in my Oracle table so that I can compare it with the datelastrun of my DTS package to get new records; records in Oracle having datemodified > Max(datelastrun) should transfer to the SQL Server table. Now I am stuck as to how I should proceed - how can I transfer these records?
Hope you can shed some light. Thank you in advance.
I have indexed my SQL Server tables to gain some speed when calling up tables and queries (using VB and ADO). It is still very slow... Is there something I have to do once my tables are indexed, or are there any tricks to improve the speed? I am getting kind of desperate right now :(
I have a question in regards to optimistic locking:
I have a database conversion that will be running on a SQL 7.0 system. The process needs to be completed ASAP, and to this end I have tried to set up all aspects of the server to be geared towards speed rather than redundancy for the duration of the process (i.e. moving heavily used tables to separate filegroups on a RAID 0 set, dedicating a separate disk to the database log). I am now looking at tweaking locking behaviour to enhance performance (for the duration of the conversion no other user will be connecting to the database - the only initiator of data changes will be the conversion application, which feeds statements serially to the server). As far as I know, changing lock settings is something that would be initiated by the application itself, but is there any property I can set on the server to further enhance performance in this area?
We are evaluating a tool by Lechotech that can optimize sql statements. It is a pretty good tool, but we would like to compare it against some others. Has anyone seen any other such tools?
I'm no SQL wiz, I just know the basics to get me by... What I'm trying to do is: every time a record is inserted into an online orders table, that record needs to be inserted into another table in another database, but with added information.
This is the Trigger I came up with:
CREATE TRIGGER OtherDatabaseInsertTrigger
ON dbo.t_order
FOR INSERT
AS
DECLARE @CLIENT VARCHAR(30)
DECLARE @OrderNumberID INT
SET @CLIENT = 'DevShed'
SET @OrderNumberID = (SELECT @@IDENTITY)

UPDATE test2.dbo.t_order
SET client = @CLIENT
WHERE oid = @OrderNumberID;
I don't know if it's possible to do an INSERT INTO ... SELECT with additional fields for the 2nd table; I was trying, but failed. I had to resort to the bottom piece of SQL to get the ID and run a separate query to add the additional items to the new record in table 2.
Any SQL masters out there who can help me make this better, or know of some other way to do this?
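An INSERT...SELECT from the inserted pseudo-table can carry extra constant columns, and it also handles multi-row inserts, which @@IDENTITY-based logic does not. A sketch, assuming the target table in the other database has a client column alongside the copied order columns (order_total is a stand-in for whatever the real columns are):

CREATE TRIGGER OtherDatabaseInsertTrigger
ON dbo.t_order
FOR INSERT
AS
BEGIN
    SET NOCOUNT ON

    -- copy every newly inserted order row into the other database,
    -- adding the constant client value as an extra column
    INSERT INTO test2.dbo.t_order (oid, order_total, client)   -- order_total stands in for the real columns
    SELECT i.oid, i.order_total, 'DevShed'
    FROM inserted i
END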
Hello, I am hoping someone here can help me optimize the following query:

SELECT INCOMING.DATE_TIME, INCOMING.URL, INCOMING.HITS,
       USER_NAMES.USER_LOGIN_NAME, CATEGORY.NAME
FROM (wsHQMay2004.dbo.INCOMING INCOMING
      INNER JOIN wsHQMay2004.dbo.CATEGORY CATEGORY
          ON INCOMING.CATEGORY = CATEGORY.CATEGORY)
     INNER JOIN wsHQMay2004.dbo.USER_NAMES USER_NAMES
          ON INCOMING.USER_ID = USER_NAMES.USER_ID
WHERE INCOMING.DATE_TIME >= '2004-05-01 00:00:00.00'
  AND INCOMING.DATE_TIME < '2004-06-01 00:00:00.00'
ORDER BY INCOMING.URL ASC
I am just hoping to get some tips on perhaps a better way to write this query as right now, due to the size of the incoming table, this query just takes forever.
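Assuming the INCOMING table doesn't already have one, an index that leads on DATE_TIME and carries the join keys is usually the first thing to try for a date-range filter like this (a sketch only; verify against the actual execution plan):

CREATE INDEX IX_INCOMING_DATE_TIME
ON wsHQMay2004.dbo.INCOMING (DATE_TIME, CATEGORY, USER_ID)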
I've tried a bunch of different ways in an effort to stay away from using a cursor, but I haven't been able to accomplish what I need to do without one. So, I coded this process using cursors and performance (as expected) is pretty mediocre. I was wondering if someone could take a quick look and suggest a different approach or maybe suggest ways to optimize the current code.
SELECT T1.F3
FROM T1
INNER JOIN T2 ON T1.F4 = T2.F4
WHERE (T1.F1 > @iNum AND T2.F1 > @iNum)
   OR ( @iNum2 * (T1.F1 - T2.F1) / (T1.F2 - T2.F2) )
      + ( T1.F1 - ((T1.F1 - T2.F1) / (T1.F2 - T2.F2) * T1.F2) ) > @INum
As you can see, the second part of the WHERE (after the OR) is much more complicated than the part before the OR. My query would run a lot faster if it tried the first part of the OR and didn't bother with the second part if the first part was satisfied. Is there any way to do this?
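SQL Server doesn't guarantee any particular evaluation order for predicates in a WHERE clause, but you can express the short-circuit explicitly. One sketch is to split the query into two SELECTs joined with UNION ALL, where the second branch only evaluates the expensive expression for rows that failed the cheap test (table and column names as in the query above):

SELECT T1.F3
FROM T1
INNER JOIN T2 ON T1.F4 = T2.F4
WHERE T1.F1 > @iNum AND T2.F1 > @iNum

UNION ALL

SELECT T1.F3
FROM T1
INNER JOIN T2 ON T1.F4 = T2.F4
WHERE NOT (T1.F1 > @iNum AND T2.F1 > @iNum)   -- only rows the cheap test rejected
  AND ( @iNum2 * (T1.F1 - T2.F1) / (T1.F2 - T2.F2) )
      + ( T1.F1 - ((T1.F1 - T2.F1) / (T1.F2 - T2.F2) * T1.F2) ) > @iNum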
SELECT *
FROM OPENQUERY (liorder, '
    SELECT DISTINCT
        a.AUF_NR AS OrdNo,
        e.KU_NAME AS Customer,
        a.AUF_POS AS Pos,
        f.PC_PANE_NO AS Pane,
        f.PC_SGGL_SEQ AS Component,
        f.PC_SGGL_COD AS GlassCode,
        d.GL_BEZ AS GlassDesc,
        a.ANZ AS Qty,
        ((c.BREITE/1000 * c.HOEHE/1000) * a.ANZ) AS SQM,
        (a.ANZ * c.SUM_BRUTTO) AS Val,
        (CASE WHEN (SELECT SUM(h.KF_FERT_QTY)
                    FROM LIPROD.KAPA_AUS_FERT h
                    WHERE a.AUF_NR = h.KF_ORDER_NO
                      AND a.AUF_POS = h.KF_ORDER_POS
                      AND f.PC_PANE_NO = h.KF_SCHEIB_NR
                      AND f.PC_SGGL_SEQ = CASE WHEN h.KF_SEQ_NR = 0 THEN 1 ELSE h.KF_SEQ_NR END
                      AND h.KF_SCHR_NR IN (2, 402, 502, 602)) IS NULL
              THEN 0
              ELSE (SELECT SUM(h.KF_FERT_QTY)
                    FROM LIPROD.KAPA_AUS_FERT h
                    WHERE a.AUF_NR = h.KF_ORDER_NO
                      AND a.AUF_POS = h.KF_ORDER_POS
                      AND f.PC_PANE_NO = h.KF_SCHEIB_NR
                      AND f.PC_SGGL_SEQ = CASE WHEN h.KF_SEQ_NR = 0 THEN 1 ELSE h.KF_SEQ_NR END
                      AND h.KF_SCHR_NR IN (2, 402, 502, 602))
         END) AS Done
    FROM LIORDER.AUF_STAT a,
         LIORDER.AUF_KOPF b,
         LIORDER.AUF_POS c,
         LIORDER.GLAS_DATEN d,
         LIORDER.KUST_ADR e,
         LIPROD.AUF_POS_COMP f
    WHERE EXISTS (SELECT g.AUF_NR
                  FROM LIORDER.AUF_STAT g
                  WHERE g.AUF_NR = a.AUF_NR AND g.RG_OFFEN != 0)
      AND EXISTS (SELECT i.KF_ORDER_NO
                  FROM LIPROD.KAPA_AUS_FERT i
                  WHERE a.AUF_NR = i.KF_ORDER_NO AND i.KF_SCHR_NR IN (2, 402, 502, 602))
      AND a.AUF_NR = b.AUF_NR
      AND b.AUF_NR = c.AUF_NR
      AND c.AUF_NR = f.PC_ORDER_NO
      AND a.AUF_POS = c.AUF_POS
      AND c.AUF_POS = f.PC_ORDER_POS
      AND b.KUNR = e.KU_NR
      AND f.PC_SGGL_COD = d.IDNR
      AND a.HISTORY = 0
      AND b.AUF_OFF = 0
      AND c.VER_ART != ''V''
      AND e.KU_VK_EK = 0
      AND e.KU_NAME IS NOT NULL
    ORDER BY a.AUF_NR DESC, a.AUF_POS ASC
')
...It is retrieving data from an Oracle linked server, but the execution time is so friggin' long! I tried running it, and after around 30 minutes it still hadn't shown any results, so I can't even tell how long it would take to return them. Do you have any tips regarding query optimization? Thanks in advance.
We have an insurance program up and running in our regions, and we get random reports of slowness. In an effort to track down all facets of the slowness, I am looking at all my SQL code to make sure it is as efficient as possible. I know a little about SQL and writing SQL statements - enough to do my job well - but I do not write optimized code.
if request.form("selPolicyNum") <> "" then
    sqlPolicyInfo = "SELECT PIEffectiveDate, PIExpirationdate from PIMaster where PIPolicyNum='" & request.form("selPolicyNum") & "'"
    Set rsPolicyInfo = Server.CreateObject("ADODB.Recordset")
    Set rsPolicyInfo.ActiveConnection = webLookupConn
    'rsPolicyInfo.CursorType = adOpenDynamic
    'rsPolicyInfo.LockType = adLockOptimistic
    rsPolicyInfo.Source = sqlPolicyInfo
    'rsPolicyInfo.CursorLocation = adUseClient
    rsPolicyInfo.Open
    'response.write sqlPolicyInfo
end if
That is the code used to store a remark into the system. Is this code already optimized, or should some of the DB parameters be changed to make things faster? This is just an example of many of the SQL statements that I may or may not have to fix. Thank you for any and all help.
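On the database side, a single-row lookup like the one shown mostly benefits from an index on the lookup column, if one isn't already there (a sketch; PIMaster and PIPolicyNum are taken from the query above):

CREATE INDEX IX_PIMaster_PIPolicyNum
ON PIMaster (PIPolicyNum)

The bigger win in code like this is usually switching from string-concatenated SQL to a parameterized command, both for execution-plan reuse and to avoid SQL injection.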
I have the following query that works fine, but I'm wondering if there is a way to optimize it further, as when I analyze it through SQL Profiler it is at the top of the list for CPU use.
SELECT DISTINCT site, d,
    (SELECT COUNT(id)
     FROM anP aPV2
     WHERE aPV2.confirmed = 1
       AND aPV2.stage = 2
       AND aPV2.inserted = 0
       AND aPV2.site = aPV1.site
       AND aPV2.d >= aPV1.d
       AND aPV2.d <= aPV1.d) AS mycount
FROM anP aPV1
WHERE confirmed = 1 AND stage = 2 AND inserted = 0
ORDER BY site, d
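Since the correlated subquery's date conditions (d >= aPV1.d AND d <= aPV1.d) reduce to an equality on d, one possible rewrite (a sketch over the same table and filters) is a single GROUP BY pass, which avoids re-running the inner COUNT for every outer row:

SELECT site, d, COUNT(id) AS mycount
FROM anP
WHERE confirmed = 1 AND stage = 2 AND inserted = 0
GROUP BY site, d
ORDER BY site, d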
-- the #minatest table will contain indexes with fragmentation above 10% which need to be defragged
-- this will go through all databases
-- null indexes will not be affected
exec sp_msforeachdb '
    use ?
    INSERT INTO #minatest
    SELECT db_name(database_id),
           phystat.page_count,
           i.fill_factor,
           OBJECT_NAME(i.object_id),
           i.name,
           phystat.avg_fragmentation_in_percent,
           newfragmentvalue = 0,
           index_type_desc,
           index_level
    FROM sys.dm_db_index_physical_stats(NULL, NULL, NULL, NULL, ''DETAILED'') phystat
    JOIN sys.indexes i
        ON i.object_id = phystat.object_id
       AND i.index_id = phystat.index_id
    WHERE phystat.avg_fragmentation_in_percent > 10
      AND phystat.page_count < 10000
'
select @Counts = count(Databasename) from #minatest -- sets the maximum number of rows to loop through

declare targets cursor for -- declare cursor over the rows to work through
    select * from #minatest
open targets -- open cursor
fetch next from targets
    into @DatabaseName, @pagecount, @vfillfactor, @TableName, @IndexName,
         @FragmentPercentage, @vnewfrag, @index_type_desc, @index_level  -- take a row from the table

select @i = 0
while @@fetch_status = 0 and @i <= @Counts  -- set loop condition
begin
select @sql = 'USE ' + @DatabaseName + '; ' +
              ' ALTER INDEX ' + @IndexName + ' ON ' + @TableName +
              ' REBUILD with (ONLINE=ON, SORT_IN_TEMPDB=ON, STATISTICS_NORECOMPUTE=OFF);'
exec sp_executesql @sql

select @nfsql = 'select @cnt = avg_fragmentation_in_percent ' +
                'FROM sys.dm_db_index_physical_stats(NULL, NULL, NULL, NULL, ''DETAILED'') phystat ' +
                'JOIN ' + @DatabaseName + '.sys.indexes i ON i.object_id = phystat.object_id AND i.index_id = phystat.index_id ' +
                'WHERE i.name = ''' + @IndexName + ''' and index_type_desc = ''' + @index_type_desc + ''' and index_level = ''' + CAST(@index_level as varchar(20)) + ''''
exec sp_executesql @nfsql, @params, @cnt = @vnewfrag OUTPUT
update #minatest
set newfragmentvalue = @vnewfrag
where IndexName = @IndexName and TableName = @TableName

select @i = @i + 1

fetch next from targets
    into @DatabaseName, @pagecount, @vfillfactor, @TableName, @IndexName,
         @FragmentPercentage, @vnewfrag, @index_type_desc, @index_level  -- take the next row from the table
end
close targets
DEALLOCATE targets

ALTER TABLE #minatest DROP COLUMN index_type_desc, index_level
select * from #minatest -- displays which indexes were defragged and their new fragmentation value
drop table #minatest
I have the query below written so that I do not insert entries that already exist in the table. I am trying to put in 70,000 entries in a single shot and it breaks down. Can anybody help me optimize the query below so that it doesn't break? Is there any other way I can write this query?
Please do help me with this. Thanks in advance. The table into which I am inserting the entries has a composite key composed of ACCT_NUM_MIN and ACCT_NUM_MAX. I am getting the data from a table which doesn't have a primary key (CORE).
INSERT INTO CRF (CORE_UID, ACCT_NUM_MIN, ACCT_NUM_MAX, BIN, BUS_ID, BUS_NM, ISO_CTRY_CD, REGN_CD, PROD_TYPE_CD, CARD_TYPE)
SELECT UID,
       LEFT(ACCT_NUM_MIN, 16),
       LEFT(ACCT_NUM_MAX, 16),
       BIN, BUS_ID, BUS_NM, ISO_CTRY_CD, REGN_CD, PROD_TYPE_CD, CARD_TYPE
FROM CORE o
WHERE NOT EXISTS (SELECT *
                  FROM CRF i
                  WHERE o.ACCT_NUM_MIN = i.ACCT_NUM_MIN
                    AND o.ACCT_NUM_MAX = i.ACCT_NUM_MAX)
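If the single 70,000-row statement is failing because of lock escalation or log growth, one common sketch is to load in smaller batches inside a loop, so each pass commits separately; SET ROWCOUNT caps how many rows each pass inserts (shown at 5,000 here; on SQL Server 2005+ a TOP clause is the preferred equivalent). The insert itself is the same statement as above:

SET ROWCOUNT 5000   -- insert at most 5,000 rows per pass

WHILE 1 = 1
BEGIN
    INSERT INTO CRF (CORE_UID, ACCT_NUM_MIN, ACCT_NUM_MAX, BIN, BUS_ID, BUS_NM, ISO_CTRY_CD, REGN_CD, PROD_TYPE_CD, CARD_TYPE)
    SELECT UID, LEFT(ACCT_NUM_MIN, 16), LEFT(ACCT_NUM_MAX, 16),
           BIN, BUS_ID, BUS_NM, ISO_CTRY_CD, REGN_CD, PROD_TYPE_CD, CARD_TYPE
    FROM CORE o
    WHERE NOT EXISTS (SELECT *
                      FROM CRF i
                      WHERE o.ACCT_NUM_MIN = i.ACCT_NUM_MIN
                        AND o.ACCT_NUM_MAX = i.ACCT_NUM_MAX)

    IF @@ROWCOUNT = 0 BREAK   -- nothing left to insert
END

SET ROWCOUNT 0   -- back to unlimited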
I have two tables. One has approx 90,000 rows with a field... let's call it BigInt (and it is defined as a bigint data type). I have a reference table with approx 10,000,000 rows. In this reference table I have starting_bigint and ending_bigint fields. I want to pull out all of the reference data from the reference table for all 90,000 rows in the transaction table, where the BigInt from the transaction table is between the starting_bigint and ending_bigint in the reference table. I have the join working now, but it is not as optimized as I would like. It appears that no matter what I do, the query does a full table scan on the 10,000,000 rows in the reference table.

Sample code:

SELECT ref.*, tran.bigint
FROM transactiontable tran
INNER JOIN referencetable ref
    ON tran.bigint BETWEEN ref.starting_bigint AND ref.ending_bigint

Yes, all 3 of the fields are indexed. I even have a composite index on the reference table with the starting_bigint and ending_bigint fields. Any help would be appreciated.

Robert H. Kershberg
IT Director
Tax Credit Company
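If the ranges in the reference table don't overlap (that assumption is mine), one sketch that usually turns the scan into an index seek is to look up, for each transaction value, the single candidate range whose starting_bigint is the greatest value not exceeding it, and then verify the upper bound. Table and column names are taken from the sample above:

SELECT ref.*, tran.bigint
FROM transactiontable tran
INNER JOIN referencetable ref
    ON ref.starting_bigint = (SELECT MAX(r2.starting_bigint)      -- single seek per transaction row
                              FROM referencetable r2
                              WHERE r2.starting_bigint <= tran.bigint)
WHERE tran.bigint <= ref.ending_bigint

Depending on the real names, the bigint column and the tran alias may need to be bracketed, since both are reserved words.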
Hello all, I have a table with thousands of rows in this format:

id   col1   col2   col3   col4
---  -----  -----  -----  -----
1    nm     78     xyz    pir
2    bn     45     abc    dir

I now want to get the data from this table in this format:

field   val
------  -----
col1    nm
col1    bn
col2    78
col2    45
col3    xyz
col3    abc
col4    pir
col4    dir

In order to do this I am doing a union:

select * into #tempUpdate
from (
    select 'col1' as field, col1 as val from table1
    union
    select 'col2' as field, col2 as val from table1
    union
    select 'col3' as field, col3 as val from table1
) x

The example query above is smaller - I have a much bigger table with about 80 columns (imagine the size of my union query :) and this takes a lot of time to execute. Can someone please suggest a better way to do this? The results of this union query are selected into a temp table, which I then use to update another table. I am using SQL Server 2000. My main concern is performance. Any ideas please? Thanks.
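On SQL Server 2000 (no UNPIVOT), one sketch that reads the big table only once is to cross join it with a small derived table of column names and pick the matching value with CASE; switching from UNION to UNION ALL also helps on its own, since plain UNION sorts the whole result to remove duplicates. The cast assumes col2 is numeric, as the sample data suggests:

select x.field,
       case x.field
           when 'col1' then t.col1
           when 'col2' then cast(t.col2 as varchar(20))
           when 'col3' then t.col3
           when 'col4' then t.col4
       end as val
into #tempUpdate
from table1 t
cross join (select 'col1' as field
            union all select 'col2'
            union all select 'col3'
            union all select 'col4') x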
To start with, I'll give a simplified overview of my data.

BaseRecord (4 mil rows, 25k in each Region)
ID | Name | Region | etc

OtherData (7.5 mil rows, 1 or 2 per ID)
ID | Type(1/2) | Data

ProblemTable (4 mil rows)
ID | ConcatenatedHistory

The ConcatenatedHistory field is an nvarchar with up to 20 different pipe-delimited date/code combinations, e.g. '01/01/2007X|11/28/2006Q|11/12/2004Q|'. Using left outer joins (all from base, the rest optional) I've got a view something like:

View (4 mil rows)
ID | Name | Region | etc | Data | Data2 | ConcatenatedHistory

Querying it, it takes about 15-20 seconds to do this:

Select ID, Name, Region, etc, Data, Data2, ConcatenatedHistory
I have an application that allows user input and translates it by stripping out the HTML tags and also doing some code translations. The user is able to edit their input later. However, it's unfeasible to reverse-translate it, as the logic would be too complicated, and there are instances where it won't be possible. So, what I'm thinking of doing to speed up performance is to duplicate the user data: one copy for the native data and the other for the translated data. When a user edits their input, the native data is shown. When the application is showing the data in a page, the translated data is shown. My question is: would it make a performance difference if I store the native data and the translated data in the same table, or would it be better to store the cached data in another table?
selected_item_id as int (PK), cust_id as int (FK), item_id as int (FK), ...
-------
With the following query:
select cust_ID
from selected_items_tbl
WHERE item_id in (1, 2, n)
GROUP BY cust_id, item_id
HAVING cust_id in (select cust_id from selected_items_tbl where item_id = 1)
   AND cust_id in (select cust_id from selected_items_tbl where item_id = 2)
   AND cust_id in (select cust_id from selected_items_tbl where item_id = n)
-------
Each of these tables has other items included. Selected_Items_tbl holds zero to many of the items from the item_tbl for each customer. If I am searching for a customer who has item 1 AND item 2 AND item n, what would be the most efficient query for this? Currently, the above query is what I am working with. However, it seems that we should be able to do this type of search in a single query (without subqueries).
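This is the classic "relational division" pattern, and one common single-query sketch is to group by customer and insist that the number of distinct matching items equals the number of items being searched for (3 is used here as a stand-in for n, matching the item list above):

select cust_id
from selected_items_tbl
where item_id in (1, 2, 3)             -- the items the customer must have (n shown as 3)
group by cust_id
having count(distinct item_id) = 3     -- must equal the number of items listed above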
I have combined three reports into one big report. I would like to somehow cache the big report and then create little reports from the cached report. What would be the best way to go about doing this?