Avoiding Nested Cursors
Dec 4, 2007
I have a Master/Detail table setup - let's call the master "Account" and the detail "Amount". I also have a "black box" stored procedure (BlackBox_sp) which carries out a lot of complex processing.
What I need to do is, for each Account, iterate through its Amount records and call the black box each time. Once I've finished going through all the Amount records, I need to call the black box once more for the Account. This must be done with the Account & Amount rows in a specific order.
So I have something along the lines of
Code Block
-- outline only: table and column names below are placeholders
DECLARE @Total int, @AccountID int, @Amount int

DECLARE Account_cur CURSOR FOR
    SELECT AccountID FROM Account ORDER BY AccountID        -- in the required order
OPEN Account_cur
FETCH NEXT FROM Account_cur INTO @AccountID
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Total = 0
    DECLARE Amount_cur CURSOR FOR
        SELECT Amount FROM Amount
        WHERE AccountID = @AccountID ORDER BY Amount         -- detail rows, in order
    OPEN Amount_cur
    FETCH NEXT FROM Amount_cur INTO @Amount
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @Total = @Total + @Amount
        EXEC BlackBox_sp @Amount
        FETCH NEXT FROM Amount_cur INTO @Amount
    END
    CLOSE Amount_cur
    DEALLOCATE Amount_cur

    EXEC BlackBox_sp @Total
    FETCH NEXT FROM Account_cur INTO @AccountID
END
CLOSE Account_cur
DEALLOCATE Account_cur
Any tips on another approach would be appreciated given the constraints I have.
Greg.
View 1 Replies
Sep 26, 2005
Hi all,
I have a temp guy who is doing some work for us (seriously, I never wrote the query).
Now this is the scenario (hold tight). So we have a stored procedure as follows...
ALTER PROCEDURE createSegmentPoints AS
DECLARE @fLat INTEGER
DECLARE @fLon INTEGER
DECLARE @segId INTEGER
-- declare cursor to return the from lat and lon for all segments that do not have address point 109.
DECLARE c1 CURSOR FOR (SELECT From_Latitude, From_Longitude, id AS segment_id FROM Segments WHERE SegmentType != 109)
OPEN c1
FETCH NEXT FROM c1 INTO @fLat, @fLon, @segId
WHILE @@FETCH_STATUS = 0
BEGIN
-- insert into table the segId, from lat, from lon and returned segment id from function.
INSERT INTO test VALUES (@segId,@fLat,@fLon,dbo.points_test(@fLat,@fLon))
FETCH NEXT FROM c1 INTO @fLat, @fLon, @segId
END
CLOSE c1
DEALLOCATE c1
As you can see here I am using a Cursor, which in turn calls a function
with each row that is processed in the recordset. The function that is
called is as follows...
ALTER FUNCTION points_test(@x INTEGER, @y INTEGER)
RETURNS INTEGER
AS
BEGIN
-- function to find the closest segment point with address point 109 to the segment specified in the procedure.
DECLARE @tempDistance FLOAT(4)
SET @tempDistance = 1000000
DECLARE @id INTEGER, @seg INTEGER, @lat INTEGER, @lon INTEGER
DECLARE @distance FLOAT, @xd FLOAT, @yd FLOAT
DECLARE c1 CURSOR FOR (SELECT from_latitude, from_longitude, id FROM segments WHERE segmenttype = 109)
OPEN c1
FETCH NEXT FROM c1 INTO @lat, @lon, @id
WHILE @@FETCH_STATUS = 0
BEGIN
-- calculations to get distance.
SET @xd = (@lat-@x)
SET @yd = (@lon-@y)
SET @distance = SQRT((@xd*@xd) + (@yd*@yd))
-- test if you have shortest distance.
IF (@distance < @tempDistance)
BEGIN
SET @tempDistance = @distance
SET @seg = @id
END
FETCH NEXT FROM c1 INTO @lat,@lon, @id
END
CLOSE c1;
DEALLOCATE c1;
RETURN @seg
END
(This function computes the distance between the two parameters passed to the function and the data from each row returned within the cursor, and keeps the shortest one.)
As you can see here, this function contains ANOTHER cursor!! Ouch. The fact that there is an SQL query in a function is a killer, but having another embedded cursor there is also a killer - this has virtually killed the application.
So, how best is it for me to correct this? Should I turn the function into a stored procedure? But even if I do this, the nested cursor still remains. I was thinking maybe to have the SQRT equations within the SELECT expression and then wrapped in a MIN() to get the lowest value - something along the lines of the sketch below.
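For what it's worth, here is a minimal sketch of that set-based idea, assuming SQL Server 2005 or later (for CROSS APPLY) and the same Segments and test tables used above; the insert mirrors the cursor's four-value insert.
-- for every non-109 segment, pick the 109 segment with the smallest straight-line distance
INSERT INTO test
SELECT s.id, s.From_Latitude, s.From_Longitude, nearest.id
FROM Segments AS s
CROSS APPLY (
    SELECT TOP (1) p.id
    FROM Segments AS p
    WHERE p.SegmentType = 109
    ORDER BY SQRT(SQUARE(p.From_Latitude - s.From_Latitude)
                + SQUARE(p.From_Longitude - s.From_Longitude))
) AS nearest
WHERE s.SegmentType != 109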
Any ideas would be of great help.
Thanks
Tryst
View 2 Replies
View Related
Jun 3, 2004
Morning everyone,
I have an sp that I've created that is supposed to show me every table name and column name using nested cursors. However, when I execute the procedure it doesn't show me the names; it just tells me the command completed successfully. Here is the code:
CREATE PROCEDURE uspSeeAllViews
AS
SET NOCOUNT ON
DECLARE @strMessage VARCHAR(100)
DECLARE @strColumn VARCHAR(100)
DECLARE @strView VARCHAR(100)
DECLARE @strCommand VARCHAR(250)
DECLARE crsViews CURSOR FOR
SELECT
name AS strView
FROM
sysobjects
WHERE
type = 'U'
OPEN crsViews
FETCH NEXT FROM crsViews INTO @strView
WHILE @@FETCH_STATUS = 0 BEGIN
DECLARE crsColumns CURSOR FOR
SELECT
name AS strColumn
FROM
syscolumns
WHERE
name = @strView
OPEN crsColumns
FETCH NEXT FROM crsColumns INTO @strColumn
WHILE @@FETCH_STATUS = 0 BEGIN
PRINT @strView + ':' + @strColumn
FETCH NEXT FROM crsColumns INTO @strColumn
END
CLOSE crsColumns
DEALLOCATE crsColumns
FETCH NEXT FROM crsViews INTO @strView
END
CLOSE crsViews
DEALLOCATE crsViews
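As a point of comparison, a minimal set-based sketch of the same listing joins sysobjects to syscolumns on the object id (note that the inner cursor above filters syscolumns by name rather than by the table's id, which is worth checking); no cursors needed:
SELECT o.name + ':' + c.name AS TableAndColumn
FROM sysobjects o
JOIN syscolumns c ON c.id = o.id
WHERE o.type = 'U'
ORDER BY o.name, c.name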
Thanks for looking, any ideas??
View 7 Replies
View Related
Apr 4, 2008
I have a table like:
Converted_Mortgage_Number  Activity_Date  Activity_Time  Sequence_Number  History_Text
100  2006-01-20  84936   1  Sent a reminder letter
105  2006-01-23  134502  1  Called, but no reply
100  2006-01-24  104532  1  Asked to call again later
100  2006-01-24  104532  2  when it is more convenient
100  2006-01-24  104532  3  and the person I require is available
I wish to concatenate the History_Text fields together for records with the same Converted_Mortgage_Number, Activity_Date and Activity_Time, so:
100 2006-01-24 104532 Asked to call again later when it is more convenient and the person I require is available.
To do this I am using nested cursors, so:
USE ar
GO
DECLARE @mortgage INT,
@date datetime,
@time INT,
@getMortgage CURSOR,
@notes varchar(MAX),
@notesComplete varchar(MAX),
@getDetail CURSOR
SET @getMortgage = CURSOR FOR
SELECT Converted_Mortgage_Number, Activity_Date, Activity_Time
FROM format_history
GROUP BY Converted_Mortgage_Number, Activity_Date, Activity_Time
ORDER BY Converted_Mortgage_Number ASC
OPEN @getMortgage
FETCH NEXT
FROM @getMortgage INTO @mortgage, @date, @time
WHILE @@FETCH_STATUS = 0
BEGIN
SET @notes = ''
SET @getDetail = CURSOR FOR
SELECT History_Text
FROM format_history
WHERE Converted_Mortgage_Number = @mortgage AND Activity_Date = @date AND Activity_Time = @time
ORDER BY Sequence_Number
OPEN @getDetail
SET @notesComplete = ''
FETCH NEXT FROM @getDetail INTO @notes
WHILE (@@FETCH_STATUS = 0)
BEGIN
@notesComplete = @notesComplete + @notes + ' '
FETCH NEXT FROM @getDetail INTO @notes
END
PRINT @notesComplete
CLOSE @getDetail
DEALLOCATE @getDetail
FETCH NEXT
FROM @getMortgage INTO @mortgage, @date, @time
END
CLOSE @getMortgage
DEALLOCATE @getMortgage
GO
But I am getting the error: 'Incorrect syntax near the keyword CLOSE, incorrect syntax near @notesComplete'.
Can anyone please see where I am going wrong?
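For comparison, a minimal set-based sketch of the same concatenation, assuming SQL Server 2005 or later and the format_history columns shown above; FOR XML PATH('') does the string building that the inner cursor does here (special XML characters would need extra handling):
SELECT h.Converted_Mortgage_Number, h.Activity_Date, h.Activity_Time,
       STUFF((SELECT ' ' + d.History_Text
              FROM format_history d
              WHERE d.Converted_Mortgage_Number = h.Converted_Mortgage_Number
                AND d.Activity_Date = h.Activity_Date
                AND d.Activity_Time = h.Activity_Time
              ORDER BY d.Sequence_Number
              FOR XML PATH('')), 1, 1, '') AS History_Text
FROM format_history h
GROUP BY h.Converted_Mortgage_Number, h.Activity_Date, h.Activity_Time
ORDER BY h.Converted_Mortgage_Number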
View 4 Replies
View Related
Aug 24, 2007
What is the best way to nest cursors?
This code does not seem to be returning me all of the data.
Code Snippet
DECLARE element_Cursor CURSOR FOR
SELECT ElementTypeRecNo
FROM dbo.tblTemplateElementType
where TemplateRecno = @TemplateRecNo
OPEN element_cursor
FETCH NEXT FROM Element_Cursor into @ElementTypeRecno
--delete from tblElementCPO
WHILE @@FETCH_STATUS = 0
BEGIN
select @Count = count (*)
from tblProjTypeSet
where ProjRecno = @ProjRecNo
if @Count > 0
begin
select @ProjTypeRecno = ProjTypeRecno
from tblProjTypeSet
where ProjRecno = @ProjRecNo
select @Count = count (*)
FROM dbo.tblElementTypeDep
where TemplateRecno = @TemplateRecNo
and ProjTypeRecno = @ProjTypeRecNo
if @Count > 0
begin
DECLARE ElementTypeDep_Cursor CURSOR FOR
SELECT ElementTypeDepRecNo, PreElementTypeRecNo,
PostElementTypeRecNo, ElapsedTimeDueDates, ElapsedTimePlanDates,
Description
FROM tblElementTypeDep
WHERE (TemplateRecNo = @TemplateRecNo)
AND (ProjTypeRecNo = @ProjTypeRecno)
AND (PreElementTypeRecNo = @ElementTypeRecno)
OPEN ElementTypeDep_cursor
FETCH NEXT FROM ElementTypeDep_Cursor
into @ElementTypeDepRecno, @PreElementTypeRecNo,
@PostElementTypeRecno, @ElapsedTimeDueDates, @ElapsedTimePlanDates,
@Description
WHILE @@FETCH_STATUS = 0
BEGIN
select @PreElementRecNo = ElementRecno
from tblElementCPO
where ProjRecNo = @ProjRecNo
and IssueRecno = @IssueRecNo
and ElementTypeRecno = @PreElementTypeRecno
if @PreElementRecno is not null
begin
select @PostElementRecNo = ElementRecno
from tblElementCPO
where ProjRecNo = @ProjRecNo
and IssueRecno = @IssueRecNo
and ElementTypeRecno = @PostElementTypeRecno
if @PostElementRecno is not null
begin
select @Count = count (*)
from tblElementDepCPO
where ElementTypeDepRecno = @ElementTypeDepRecno
and PreElementRecNo = @PreElementRecNo
and PostElementRecno = @PostElementRecno
if @Count = 0
begin
INSERT INTO tblElementDepCPO
(ElementTypeDepRecNo, PreElementRecNo,
PostElementRecNo, ElapsedTimeDueDates,
ElapsedTimePlanDates, Description,
ChangeDate, ChangePerson)
VALUES (@ElementTypeDepRecno, @PreElementRecNo,
@PostElementRecno, @ElapsedTimeDueDates,
@ElapsedTimePlanDates, @Description,
GETDATE(), CURRENT_USER)
end
select @Count = count (*)
from tblElementAttemptCPO
where ElementRecNo = @PostElementRecNo
if @Count = 0
begin
select @Count = count (*)
from tblElementAttemptCPO
where ElementRecNo = @PostElementRecNo
if @Count = 0
begin
select @NextPlanDate = ProjectedCompletionDate,
@NextDueDate = RequiredCompletionDate
from tblElementAttemptCPO
where ElementRecno = @PreElementRecNo
end
else
begin
select @NextPlanDate = @StartDate
select @NextDueDate = @StartDate
end
select @NextPlanDate =
dbo.fncAddBusinessDays (@NextPlanDate, @ElapsedTimePlanDates)
select @NextDueDate =
dbo.fncAddBusinessDays (@NextDueDate, @ElapsedTimePlanDates)
insert into tblElementAttemptCPO (ElementRecno,
ProjectedCompletionDate, RequiredCompletionDate,
ProjectedStartDate, RequiredStartDate,
ActualStartDate, ActualCompletionDate, AttemptNum,
IsCompleted, IsStarted, ResponsibleRoleTypeRecno,
ChangeDate, ChangePerson)
values (@PostElementRecno,
@NextPlanDate, @NextDueDate,
'1/11/1900', '1/11/1900',
'1/11/1900', '1/11/1900', 0,
0, 0, 0,
GETDATE(), CURRENT_USER)
end
end
end
FETCH NEXT
FROM ElementTypeDep_Cursor
into @ElementTypeDepRecno, @PreElementTypeRecNo,
@PostElementTypeRecno, @ElapsedTimeDueDates, @ElapsedTimePlanDates,
@Description
END
CLOSE elementTypeDep_Cursor
DEALLOCATE elementTypeDep_Cursor
end
FETCH NEXT FROM element_Cursor into @ElementTypeRecno
END
CLOSE element_Cursor
DEALLOCATE element_Cursor
end
View 2 Replies
View Related
Jul 1, 2004
I am not getting any error message. It just hangs...
Any idea what I am missing....
******************
ALTER PROCEDURE TrigSendPreNewIMAlertP2
@REID int
AS
Declare @RRID int
Declare @ITID int
Declare @FS1 int
Declare @FS2 int
Declare crReqRec cursor local for
select RRID from RequestRecords where REID = @REID and RRSTatus = 'IA' and APID is not null
open crReqRec
fetch next from crReqRec into @RRID
set @FS1 = @@Fetch_Status
while @FS1 = 0
Begin
Declare crImpGrp cursor local for
select ITID from RequestRecords where RRID = @RRID
open crImpGrp
fetch next from crImgGrp into @ITID
set @FS2 = @@Fetch_Status
while @FS2 =0
Begin
EXEC TrigSendNewIMAlertP2 @ITID
FETCH next from crImpGrp into @ITID
end
FETCH next from crReqRec into @RRID
end
close crImpGrp
deallocate crImpGrp
close crReqRec
deallocate crReqRec
View 1 Replies
View Related
Dec 11, 2000
All,
I have a question on a nested cursor statement; I am having a problem getting @@Fetch_Status to work correctly...
I can get the first cursor to fetch, and the second cursor to fetch all of its rows, but when it reaches the end it should fetch the first cursor's next row and @@Fetch_Status should be 0.
Is there a way around this???
Thanks,
~Lee
Alter Procedure Summary As
Declare @DateID nvarchar(50)
Declare @ShortDate nvarchar(30)
Declare @SourceIPID nvarchar(50)
Declare @SourceIP nvarchar(30)
Declare @Packets int
Declare lead_cur CURSOR FOR
SELECT tbl_Date.DateID, tbl_Date.ShortDate FROM Tbl_Date WHERE NOT tbl_date.shortdate = convert(char(11),GetDate())
FOR READ ONLY
Declare lead_cur2 CURSOR FOR
SELECT SourceIPID, SourceIP FROM tbl_SrcIP
FOR READ ONLY
Open lead_cur
Open lead_cur2
FETCH NEXT FROM lead_cur INTO @DateID,@ShortDate
While @@FETCH_STATUS = 0
BEGIN
FETCH NEXT FROM lead_cur2 INTO @SourceIPID, @SourceIP
While @@FETCH_STATUS = 0
BEGIN
Set @Packets = (Select ISNULL(Sum(CAST(strNumPackets As Int)),0)
FROM tbl_All_SysDATA
Where DateID = @DateID and SourceIPID = @SourceIPID)
If @packets > 0
Insert tbl_SysIPSummary (DateID,IPID,Packets) VALUES (@DateID,@SourceIPID,@Packets)
FETCH NEXT FROM lead_cur2 INTO @SourceIPID, @SourceIP
END
FETCH NEXT FROM lead_cur INTO @DateID,@ShortDate
END
Close lead_cur
Close lead_cur2
DEALLOCATE lead_Cur
DEALLOCATE lead_Cur2
GO
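For what it's worth, here is a minimal set-based sketch of the same summary, assuming the table and column names used above; a single INSERT..SELECT with GROUP BY replaces both cursors and sidesteps the @@FETCH_STATUS issue entirely:
INSERT tbl_SysIPSummary (DateID, IPID, Packets)
SELECT a.DateID, a.SourceIPID, SUM(CAST(a.strNumPackets AS int))
FROM tbl_All_SysDATA a
INNER JOIN tbl_Date d ON d.DateID = a.DateID
WHERE NOT d.ShortDate = CONVERT(char(11), GETDATE())
GROUP BY a.DateID, a.SourceIPID
HAVING SUM(CAST(a.strNumPackets AS int)) > 0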
View 1 Replies
View Related
Apr 18, 2008
I need to use a nested cursor in SQL Server 2005. I have never done this before. The problem here is that I need to use @@FETCH_STATUS twice: once for my inner loop and once for my outer loop. If there is a way to use two different @@FETCH_STATUS values at once, I would like to know how.
Here is my code. The inner loop is currently pseudo code and I just need to figure out how to code it.
DECLARE @mycur1 CURSOR
DECLARE @mycur2 CURSOR
DECLARE @InMarketId INT
SET @InMarketId = 575 -- Hard code for now. Just for testing.
DECLARE @test VARCHAR(10)
SET @mycur1 = CURSOR FOR
SELECT SubDivisionId FROM SubDivision WHERE MarketId = @InMarketId
OPEN @mycur1
FETCH NEXT FROM @mycur1 INTO @test
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT @test
-- Begin Inner Nested Loop
-- FOR EACH SubDivisionId in SubDivisionSubMarket SDSM
--     delete from SubMarket SM Where SDSM.SubMarketId = SM.SubMarketId
-- End of Inner nested Loop
FETCH NEXT FROM @mycur1 INTO @test
END
DEALLOCATE @mycur1
------------------------------------------
Here are my three tables I am scripting against and a description of each.
SubDivision: SubDivisionId, MarketId, Name, Description, CreatedBy
SubDivisionSubMarket: SubDivisionId, SubMarketId
SubMarket: SubMarketId, Code, Description, LastUpdateDate
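A minimal sketch of the inner step done set-based, assuming the three tables described above; this deletes every SubMarket row referenced by a SubDivision in the market being processed, so no inner cursor (and no second @@FETCH_STATUS) is needed:
DELETE SM
FROM SubMarket SM
INNER JOIN SubDivisionSubMarket SDSM ON SDSM.SubMarketId = SM.SubMarketId
INNER JOIN SubDivision SD ON SD.SubDivisionId = SDSM.SubDivisionId
WHERE SD.MarketId = @InMarketId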
View 2 Replies
View Related
May 15, 2015
I have been wrestling with the code all day to accomplish the following: I need to update a table based on values from another table. So far, I have been able to do the below:
DECLARE @LookUpTerm VARCHAR(25)
, @SearchCol VARCHAR(255)
, @LogonIDToProcess VARCHAR(50)
, @Matched CHAR
, @Cycle INT = 1
IF OBJECT_ID('tempdb..#Glossary','U') IS NOT NULL DROP TABLE #Glossary
IF OBJECT_ID('tempdb..#Employees','U') IS NOT NULL DROP TABLE #Employees
[code]...
View 7 Replies
View Related
Apr 17, 2007
Hello
I am running a script and the following statement throws an error because the DTC service is not running on the remote server:
insert into MyLocalTable
execute synonym_MyRemoteProcedure @SomeParameter
Since a transaction is not declared within the script, why is the DTC required?
How can I avoid the usage of the DTC? Is there a way to say "this code is not within a distributed transaction"?
Thanks a lot.
View 1 Replies
View Related
Apr 8, 2008
Here is a simplified example of a problem I am facing.
I have 2 tables: Tasks and Employees.
Tasks: (Task_ID, Task_Name, Task_Type, Task_Requirement, Employee_ID)
Employees: (Emp_ID, Emp_Name, Emp_Specialty, Emp_Task_Cnt, Max_Task_Cnt)
Requirements: Write an MS SQL Server 2000 stored procedure to:
1. Update the Tasks table by assigning the task to an Employee.
2. Increment the employee's Emp_Task_Cnt for each Task assigned.
3. Match the Employee to the Task by matching the Task_Requirement to the Emp_Specialty.
4. Do not exceed the employee's Max_Task_Cnt.
I have a working solution to the requirements, but it involves using cursor logic. For all the obvious reasons, I wanted to avoid using a cursor (or cursor-like looping structure) but could not figure out any other way to avoid processing the Tasks table one record at a time because of requirement 4: "Do not allow an Employee's Task_Cnt to exceed the Max_Task_Cnt."
Q: Is there a way to do this without using a cursor and still meet all of the requirements?
View 2 Replies
View Related
May 9, 2007
I'm trying to performance tune a procedure and am sort of being thwarted by caching.
When I first run the procedure, it takes a few seconds which is too long in this case. Subsequent executions in Management Studio are nearly instantaneous, though, which I imagine is due to caching and does not reflect the behavior of the procedure in production.
Is there a way to disable caching so that each execution of the procedure in Management Studio will be consistent and reflect the "first run" performance?
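One common approach, as a minimal sketch and assuming this is a test server rather than production: clear the plan cache and the buffer pool before each timed run so every execution behaves like a "first run" (the procedure name below is hypothetical):
CHECKPOINT                    -- flush dirty pages so clean buffers can be dropped
DBCC DROPCLEANBUFFERS         -- empty the data cache
DBCC FREEPROCCACHE            -- empty the plan cache, forcing a recompile
GO
EXEC dbo.MyProcedureUnderTest -- hypothetical procedure being tuned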
View 3 Replies
View Related
Sep 7, 2007
This query uses a cursor to fetch a parameter and pass it to another Stored proc. Is there a straightforward way to do this without using a cursor?
declare @deleteunassigned int
declare cur_unassigned cursor for select distinct a.cust_cont_pk
from cust_cont a, cont_fold_ass b (NOLOCK)
where a.cust_cont_pk != b.CUST_CONT_PK
open cur_unassigned
fetch next from cur_unassigned into @deleteunassigned
while @@fetch_status = 0
begin
exec spDeleteCustContbypk @deleteunassigned
fetch next from cur_unassigned into @deleteunassigned
end
close cur_unassigned
deallocate cur_unassigned
GO
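If spDeleteCustContbypk does nothing beyond deleting the cust_cont row, and the intent of the cursor's SELECT is "contacts with no row in cont_fold_ass", a minimal set-based sketch could be as simple as the following (both are assumptions worth verifying; if the procedure has side effects, a set-based rewrite of that logic is needed instead):
DELETE a
FROM cust_cont a
WHERE NOT EXISTS (SELECT 1 FROM cont_fold_ass b
                  WHERE b.CUST_CONT_PK = a.cust_cont_pk)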
Future guru in the making.
View 2 Replies
View Related
Jul 20, 2005
Using small stored procs or sp_executesql dramatically reduces the number of recompiles and increases the reuse of execution plans. This is evident from both the usecount in syscacheobjects, perfmon, and profiler. However I'm at a loss to determine what causes a compilation. Under rare circumstances the usecount for Compiled Plan does not increase as statements are run. Seems to correspond to when there is no execution plan. It would seem to me that compilation is a resource intensive task that, if possible (data and schema are not changing), should be held to a minimum.
How does one encourage the reuse of compile plans? Is this the same as minimizing compilation? Looks like some of this behavior is changing in SQL 2005...
Thanks,
Danny
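A minimal sketch of the parameterization being described, with hypothetical table and column names: an ad-hoc statement with literals embedded gets a fresh plan per distinct value, while the sp_executesql form below keeps one plan whose usecount in syscacheobjects climbs on every call:
DECLARE @CustomerId int
SET @CustomerId = 42

EXEC sp_executesql
    N'SELECT OrderId, OrderDate FROM dbo.Orders WHERE CustomerId = @CustomerId',
    N'@CustomerId int',
    @CustomerId = @CustomerId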
View 3 Replies
View Related
May 4, 2006
I have a stored procedure spUpdateClient, which takes as params a number of properties of a client application that wants to register its existence with the database. The sp just needs to add a new row or update an existing row with this data.
I tried to accomplish this with code something like this. (The table I'm updating is called Clients, and its primary key is ClientId, which is a value passed into the sp from the client.)
IF (SELECT COUNT(ClientId) FROM Clients WHERE ClientId=@ClientId) = 0
BEGIN
-- client not found, create it
INSERT INTO Clients (ClientId, Hostname, Etc)
VALUES (@ClientId, @Hostname, @Etc)
END
ELSE
BEGIN
-- client was found, update it
UPDATE Clients
SET Hostname=@Hostname, Etc=@Etc
WHERE ClientId=@ClientId
END
But the client apps call this every second or so, so soon enough I started getting primary key violations. It looks like one client would make two calls nearly at the same time, both would get a 0 value on the SELECT line, so both would try to insert a new row with the same ClientId. No good.
So then I added
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
at the top, and a COMMIT at the bottom. I thought the first one in would get to run the whole sp, and the next one in would have to wait for the first to be done.
Instead I'm now getting deadlock errors.
If I understand the docs right, that's because the exclusive lock is not placed on the Clients table until the INSERT happens, not at the SELECT. So when two calls to the sp happen at nearly the same time (call them A and B), A does the SELECT and that locks Clients so nobody else can update it. Then B does the SELECT, locking Clients so nobody else (including A) can update it. Now A needs to exclusively lock Clients to do its INSERT, but B still has that read lock on it, and they're deadlocked.
I could catch the deadlock in my client app after SQL Server kills one of the transactions, but it seems to me there should be some way to set a lock at the top of the sp that says "nobody else can enter this sp until I exit it". Any such thing?
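One common fix, as a minimal sketch against the same Clients table: take an update lock on the existence check so the second caller blocks at the SELECT instead of deadlocking later at the INSERT:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION

IF NOT EXISTS (SELECT 1 FROM Clients WITH (UPDLOCK, HOLDLOCK)
               WHERE ClientId = @ClientId)
    -- client not found, create it
    INSERT INTO Clients (ClientId, Hostname, Etc)
    VALUES (@ClientId, @Hostname, @Etc)
ELSE
    -- client was found, update it
    UPDATE Clients
    SET Hostname = @Hostname, Etc = @Etc
    WHERE ClientId = @ClientId

COMMIT TRANSACTION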
Thanks.
Nate Hekman
View 9 Replies
View Related
Sep 7, 2007
Hello.
I have been developing a small site that has two backend SQL Server databases. One for my application data and one for the ASPNETDB database that is created by the ASP .NET Configuration utility.
Is it possible to configure the ASP .NET Configuration tool to use my custom database instead of creating a second database called ASPNETDB?
Thanks in advance.
Kev
View 2 Replies
View Related
Aug 5, 2004
I am exclusively using Stored Procedures to access the database, i.e. there are no Ad-Hoc SQL statements anywhere in the C# code. However, one thing I need to be able to do is to allow filtering for data grids on my ASP.NET page. I want to do the filtering in the Stored Procedure using Dynamic SQL to set the WHERE clause. However, one fear of mine is SQL injection from the client. How can I avoid arbitrary SQL injection, yet still allow for a dynamic WHERE clause to be passed into the stored procedure?
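A minimal sketch of one common pattern, with a hypothetical dbo.Orders table: whitelist and QUOTENAME the column name, and pass the value as a parameter to sp_executesql rather than concatenating it into the string:
CREATE PROCEDURE dbo.GetOrdersFiltered
    @FilterColumn sysname,
    @FilterValue nvarchar(100)
AS
BEGIN
    -- whitelist of columns callers are allowed to filter on
    IF @FilterColumn NOT IN (N'CustomerName', N'Status')
        RETURN

    DECLARE @sql nvarchar(4000)
    SET @sql = N'SELECT OrderId, CustomerName, Status FROM dbo.Orders WHERE '
             + QUOTENAME(@FilterColumn) + N' = @FilterValue'

    -- the value travels as a parameter, never as part of the SQL text
    EXEC sp_executesql @sql, N'@FilterValue nvarchar(100)', @FilterValue = @FilterValue
END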
Jason Pacheco
View 2 Replies
View Related
Jun 9, 2008
Hello all,
I currently have an asp script that is generating a 12 month rolling report. From asp I'm running a for loop with 12 iterations, each one sending the following query:
select count(a.aReportDate) as ttl from findings f left outer join audits a on a.aID = f.auditID
where f.findingInvalid <> 1 and month(aReportDate) = " & Mo & " and year(aReportDate) = " & Yr
where the Mo and Yr variables are incremented accordingly.
I actually have 4 sets of data being pulled back to populate a graph, so this results in 48 queries with each page load! Obviously not ideal. So I'm hoping to reduce this to 4 queries. I was playing with the following in enterprise manager:
DECLARE @DT DATETIME
DECLARE @CNT INT
SET @DT = '10/31/07'
SET @CNT = 1
WHILE(@CNT < 12)
BEGIN
select count(a.aReportDate) as ttl from findings f left outer join audits a on a.aID = f.auditID
where f.findingInvalid <> 1 and month(aReportDate) = month(@DT) and year(aReportDate) = year(@DT)
SET @CNT = @CNT + 1
END
I haven't yet added any logic to increment the date, but my concern is that it looks like it is returning 12 separate results. Is there any way to combine this all into one resultset that will be passed back to my asp script? Hopefully this makes sense?
Suggestions on a completely different approach would also be welcome.
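A minimal sketch of a single-resultset version, assuming the same findings/audits tables: restrict to the rolling 12-month window and GROUP BY year and month, so the ASP page runs one query per data set and reads back up to 12 rows (months with no findings simply won't appear):
DECLARE @From datetime, @To datetime
SET @To = '10/31/07'
SET @From = DATEADD(month, -12, DATEADD(day, 1, @To))   -- exactly 12 whole months back

SELECT YEAR(a.aReportDate) AS yr, MONTH(a.aReportDate) AS mo,
       COUNT(a.aReportDate) AS ttl
FROM findings f
LEFT OUTER JOIN audits a ON a.aID = f.auditID
WHERE f.findingInvalid <> 1
  AND a.aReportDate >= @From
  AND a.aReportDate < DATEADD(day, 1, @To)
GROUP BY YEAR(a.aReportDate), MONTH(a.aReportDate)
ORDER BY yr, mo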
Thanks!
View 2 Replies
View Related
Mar 2, 2007
Hi,
Hope someone could help me in revising a long running query. Here is the query
select *
from table1
where classid is null
and productid not in (
select productid
from table1
where classid = 67)
In here, table1 could have several occurrences of productid, in which productid could have different classids. The possible values of classid are: NULL, 1, 2, 3, 67. Basically I am looking for all records whose classid is null but which never have an instance in table1 where the classid is 67.
Do you have something like a "join" statement that will only include the records in the left table that are not in the right table?
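A minimal sketch of that kind of "anti-join" against the same table1; a LEFT JOIN with an IS NULL test (or a NOT EXISTS) keeps only the rows with no classid-67 counterpart, and unlike NOT IN it is not tripped up by NULL productids:
SELECT t.*
FROM table1 t
LEFT JOIN table1 c67 ON c67.productid = t.productid AND c67.classid = 67
WHERE t.classid IS NULL
  AND c67.productid IS NULL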
Hope someone could help me with this. Thanks in advance.
-Ruel
View 9 Replies
View Related
Oct 8, 2007
MS SQL Server 2005
I have a table in our system that holds temporary data for doing calculations. It processes several million records each time they forecast our products...
Is there any way to have the SQL server NOT add these transactions to the transaction log, since I'm going to wipe the data anyway? I'd like to be able to pick and choose the tables that are 'backed up' into the transaction log...
Please advise. Thanks
View 8 Replies
View Related
Mar 18, 2008
I am trying to figure out an efficient way of comparing two tables of identical structure and primary keys, where I want to do a join in which one of the tables reveals the values for records that have been modified and/or updated.
To illustrate, I have two tables in the generic form:
id-dt-val
For which the 'val' in table 2 could be different from the 'val' in table 1 - for a given id-dt coupling that are identical in both tables.
Does anyone know of an efficient way I could return all id-dt couplings in table 2 which have values that are different from those with the same id-dt couplings in table 1?
NOTE: I am asking this because I am trying to avoid explicit comparisons between the 'val' columns. The tables I am working with in actuality have roughly 900 or so columns, so I don't want to write that kind of monster query (otherwise, I would simply do something like where a.id = b.id and a.dt = b.dt and a.val <> b.val) - but this won't do in this case.
As a sample query, I have the following script below. When I attempt the where not exists, as you might expect, I only get the one record in which the id-dt coupling is different from those in table 1, but I'm not sure how to return the other records where the id-dt coupling is the same as in table 1 but where modified values exist:
create table #tab1
(
id varchar(3),
dt datetime,
val float
)
go
create table #tab2
(
id varchar(3),
dt datetime,
val float
)
go
insert into #tab1
values
('ABC','01/31/1990',5.436)
go
insert into #tab1
values
('DEF','01/31/1990',4.427)
go
insert into #tab1
values
('GHI','01/31/1990',7.724)
go
insert into #tab2
values
('XYZ','01/31/1990',3.333)
go
insert into #tab2
values
('DEF','01/31/1990',11.111)
go
insert into #tab2
values
('GHI','01/31/1990',12.112)
go
select a.* from #tab2 a --Trouble is, this only returns the XYZ record
where not exists
(select b.* from #tab1 b where a.id = b.id and a.dt = b.dt)
go
drop table #tab1
drop table #tab2
go
I really don't want to have to code up a loop to do the value-by-value comparison for inequality, so if anyone knows of an efficient set-based way of doing this, I would really appreciate it.
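Against the sample tables above, a minimal sketch using EXCEPT (SQL Server 2005 or later) avoids naming the ~900 columns: EXCEPT compares every column, and the EXISTS keeps only couplings present in both tables, so it returns the modified DEF and GHI rows but not the new XYZ row:
SELECT diff.id, diff.dt
FROM ( SELECT * FROM #tab2
       EXCEPT
       SELECT * FROM #tab1 ) diff
WHERE EXISTS (SELECT 1 FROM #tab1 b
              WHERE b.id = diff.id AND b.dt = diff.dt)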
Any advice appreciated!
-KS
View 7 Replies
View Related
Jul 20, 2005
The C++ application calls the database to look up property data. One troublesome query is a function that returns a table, finding data which is assembled from four or five tables through a view that has a join, and then updating the resulting @table from some other tables. There are several queries inside the function, which are selected according to which parameters are supplied (house #, street, zip, or perhaps parcel number, or house #, street, town, city, ...etc.). If a lot of parameters are provided, and the property is not in the database, then several queries may be attempted -- it keeps going until it runs out of queries or finds something. Usually it takes ~1-2 sec for a hit, but maybe a minute in some failure cases, depending on the distribution of data. (~100 mil properties in the DB) Some queries operate on the assumption the input data is slightly faulty, and take relatively a long time, e.g., if WHERE ZIP=@Zip fails, we try WHERE ZIP LIKE substring(@Zip,1,3)+'%'. While all this is going on the application may decide the DB is never going to return, and time out; it also seems more likely to throw an exception the longer it has to wait.
Is there a way to cause the DB function to fail if it takes more than a certain amount of time? I could also recast it as a procedure, and check the time consumed after every query, and abandon the search if a certain amount of time has elapsed.
Thanks in advance,
Jim Geissman
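A minimal sketch of the "recast it as a procedure and check the time consumed after every query" idea; the Properties table and its columns are placeholders, and each SELECT stands in for one of the real lookup queries:
CREATE PROCEDURE dbo.FindProperty_TimeBoxed
    @Zip char(5),
    @TimeoutSeconds int
AS
BEGIN
    DECLARE @Started datetime
    SET @Started = GETDATE()

    -- attempt 1: exact ZIP match
    SELECT PropertyId, HouseNumber, Street, Zip FROM dbo.Properties WHERE Zip = @Zip
    IF @@ROWCOUNT > 0 RETURN

    -- abandon the search once the time budget is spent
    IF DATEDIFF(second, @Started, GETDATE()) >= @TimeoutSeconds RETURN

    -- attempt 2: looser match on the first three digits
    SELECT PropertyId, HouseNumber, Street, Zip
    FROM dbo.Properties
    WHERE Zip LIKE SUBSTRING(@Zip, 1, 3) + '%'
END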
View 3 Replies
View Related
Jun 8, 2007
What is the expression to evaluate whether the result of a computation would be a divide-by-zero error, for a text box in a report?
IIF(divide by zero, display nothing, else display computed result)...??
View 6 Replies
View Related
Mar 27, 2008
I have a web site where people order through the website; when orders arrive at the same time, a problem can occur when allocating the next primary key value to a new record using the maximum number of records + 1.
How can I avoid this problem when inserting into SQL Server?
Please give me your ideas.
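The usual fix is to let the server generate the key instead of computing MAX()+1 in the application; a minimal sketch with a hypothetical Orders table:
CREATE TABLE dbo.Orders
(
    OrderId int IDENTITY(1,1) PRIMARY KEY,          -- server assigns the next value
    CustomerName varchar(100) NOT NULL,
    OrderDate datetime NOT NULL DEFAULT GETDATE()
)

INSERT INTO dbo.Orders (CustomerName) VALUES ('example customer')
SELECT SCOPE_IDENTITY() AS NewOrderId               -- key just generated, safe under concurrency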
View 16 Replies
View Related
Mar 7, 2001
Hi there,
I'm using a query to fetch data from a table where one of the criteria is an IN(...) clause on the key column of the table. Now the data being retrieved is ordered by the key column of the table even though I haven't specified any ORDER BY clause.
I want to know if there is a way to have the data fetched in the order of my IN(...) clause.
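There is no guaranteed order without an ORDER BY, but one minimal sketch that forces the IN-list order, using a hypothetical Items table, maps each key back to its position with a CASE:
SELECT ItemId, ItemName
FROM Items
WHERE ItemId IN (7, 3, 12)
ORDER BY CASE ItemId WHEN 7 THEN 1 WHEN 3 THEN 2 WHEN 12 THEN 3 END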
Thanx
Aby
View 3 Replies
View Related
Feb 13, 2002
Hi there!
Here is my situation:
table 'ReceiptHeader'
ID   CustomerID   DateCreated
12   251          02/06/2002
13   321          02/09/2002
14   440          02/15/2002
table 'ReceiptDiscount'
ID   DiscountID   ReceiptHeaderID   Amount
1    1            12                10.00
2    4            12                50.00
3    1            13                25.00
**a receipt can have multiple discounts**
Table 'ReceiptDetail'
ReceiptHeaderID   LineItemID   Total
12                1            155.33
13                1            145.33
14                1            241.66
**for this example there is only one line item per receipt**
Without using a cursor, I would like to return a result set like this one below using a set-based solution...
ReceiptID   CustomerID   Discount   Total
12          251          60.00      155.33
13          321          25.00      145.33
14          440          0.00       241.66
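A minimal set-based sketch for that result, assuming the three tables as laid out above; the discounts are summed per receipt in a derived table, and the LEFT JOIN keeps receipt 14, which has no discount rows:
SELECT rh.ID AS ReceiptID,
       rh.CustomerID,
       ISNULL(d.Discount, 0) AS Discount,
       rd.Total
FROM ReceiptHeader rh
INNER JOIN ReceiptDetail rd ON rd.ReceiptHeaderID = rh.ID
LEFT JOIN (SELECT ReceiptHeaderID, SUM(Amount) AS Discount
           FROM ReceiptDiscount
           GROUP BY ReceiptHeaderID) d ON d.ReceiptHeaderID = rh.ID
ORDER BY rh.ID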
Thanks,
SF
View 1 Replies
View Related
Jun 26, 2002
Hi,
Can anybody tell me how to stop these system-created indexes (index names like 'WA%')?
Is there any method available to delete the existing system indexes?
thanks,
Srini
View 1 Replies
View Related
Mar 9, 2007
Guys,
Is there a way to temporarily disable logging to the transaction log?
In our system, we perform purging of our database every night, where the purging consists of 2 steps:
1. For each table, insert the data, to be deleted, into a corresponding "purged" table, to remain there for one day only.
2. For each table, delete the unnecessary data (i.e. same data stored in purged tables in step 1)
During these 2 steps, the transaction log grows, and since we perform transaction log backups, the backup at that time is huge. We are running a bit low on hard disk space and I'd like to disable logging to the transaction log when these operations are performed.
I really don't care about being able to recover this data.
I thought that one option is to set the database to simple recovery, then perform the purging of the database, and then change back to full.
However, I think that trans log can grow even if recovery model is simple [although you won't be able to retrieve any changes].
So, is there a way to delete a portion of a table [or insert into it] so that no data is written to the transaction log (I know that we can use TRUNCATE if we need to remove a whole table without logging)?
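A minimal sketch of the "simple recovery for the purge window" idea mentioned above, assuming a database named MyDb; note that switching out of FULL breaks the log backup chain, so a full or differential backup is needed right after switching back:
ALTER DATABASE MyDb SET RECOVERY SIMPLE
-- ... run the two nightly purge steps here ...
ALTER DATABASE MyDb SET RECOVERY FULL
BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb_after_purge.bak'   -- hypothetical path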
Thanks a lot
View 13 Replies
View Related
Nov 15, 1999
Help,
Have a user who needs many data changes to a huge table with over 2 million rows.
He hopes to do this by running a series of queries.
Last time our log grew gargantuan. I know 7.0 is wonderful & dynamic, but after
an hour of rambling through books online I'm more confused than ever.
Is there a way we can code into his query at intervals any hard coded checkpoints
or log backups? Do we need to dbcc shrinkfile the logfile?
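One way to keep the log in check is to break the changes into batches and back up the log between them; a minimal sketch in 7.0-era syntax, with a hypothetical BigTable update and backup path:
SET ROWCOUNT 50000                 -- limit each UPDATE to 50,000 rows
WHILE 1 = 1
BEGIN
    UPDATE BigTable SET SomeCol = 'new value' WHERE SomeCol = 'old value'
    IF @@ROWCOUNT = 0 BREAK        -- nothing left to change
    BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.bak'   -- keeps the log from growing
END
SET ROWCOUNT 0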
Thanks,
Mark Blackburn
mark@mbari.org
View 1 Replies
View Related
Sep 6, 2006
Hi
I currently have two tables called Book and JournalPaper, both of which have a column called Publisher. Currently the data in the Publisher column is the Publisher name that is entered straight into either table and has been duplicated in many cases. To tidy this up I have created a new table called Publisher where each entry will have a unique ID.
I now want to remove the Publisher columns from Book and JournalPaper, replace it with an ID foreign key column and move the Publisher name data into the Publisher table. Is there a way I can do this without duplicating the data as some publishers appear several times on both tables?
Any help with this will be greatly appreciated as my limited SQL is not up to this particular challenge!!!
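A minimal sketch of the migration, assuming Publisher has an identity PublisherID and a PublisherName column, and that Book and JournalPaper each gain a PublisherID column: load each distinct name once, then map the old name columns across before dropping them:
INSERT INTO Publisher (PublisherName)
SELECT Publisher FROM Book
UNION                                   -- UNION removes duplicates across both tables
SELECT Publisher FROM JournalPaper

UPDATE b SET b.PublisherID = p.PublisherID
FROM Book b INNER JOIN Publisher p ON p.PublisherName = b.Publisher

UPDATE j SET j.PublisherID = p.PublisherID
FROM JournalPaper j INNER JOIN Publisher p ON p.PublisherName = j.Publisher

-- once verified:
-- ALTER TABLE Book DROP COLUMN Publisher
-- ALTER TABLE JournalPaper DROP COLUMN Publisher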
Thanks!
View 7 Replies
View Related
Oct 31, 2007
Hello -- I have a Huge Table Tab1 (160 Mill Rows). I have a script that Inserts and Deletes from this Tab1 (This Script runs for a good 4/5 Hrs).
When the script is being executed, No other session can even do a "SELECT TOP 10 * FROM Tab1"
How do I avoid this? Any "NOLOCK" keywords I should be specifying in my script?
Thanks in advance
View 3 Replies
View Related
Jun 23, 2015
I have a function with multiple IF (condition) blocks, which is CPU intensive. How can I avoid this?
CREATE FUNCTION prici.[fn_pricipalamt]
(
-- Add the parameters for the function here
@Tcode char(10),
@SecTypeCode1 char(10),
@SecTypeCode2 char(10),
@TradeAmount float,
[Code] .....
View 2 Replies
View Related
Oct 26, 2006
Hi!
I have an SQL Server where only a group of sysadmins have access to install DTSX packages. Those DTSX packages are developed by another team that does not have access to the production SQL Server. They use their own SQL Server.
In order to make it as simple as possible for the sysadmins to install these packages, I suggested the use of configuration files. The files are associated with the job that executes the package, and all that has to be done to install the package is copy it to the file system or import it into SQL Server. Developers use their configuration file, sysadmins use theirs. Nothing new here.
The problem is that some of the packages have to access some old systems and we cannot use integrated authentication. We have to use SQL authentication and therefore specify a user account and password in the connection string. If this is stored in the configuration file, it is available in clear text! If I store the configuration in the package itself using ProtectSensitiveWithPassword protection level, the sysadmins will have to edit every DTSX package to reset the connections to the production environment (the developers always send them with their development configurations) and I don't want that. If I store it in a SQL Server database, it seems the sysadmins also have to edit the package to point the package configuration to the correct database and set the configuration filter.
Another solution is to store the credentials in clear text in the configuration file but set the file system permissions on that file so only the account that executes the package can read them (this is what I'm implementing if nothing better comes up...)
Is there any other way to do this? Am I doing something wrong?
Thanks in advance.
View 5 Replies
View Related