SQL Server 2012 :: Using WHILE To Avoid Cursor Under Certain Conditions
Mar 20, 2015
I need to use WHILE to avoid a cursor under certain conditions.
My SELECT statement is:
SELECT ref, ano, numberofyears, nreint, naoreint, degress,
       tabela, tax, taxamaxima, [evactual],
       [evaldepact], [ereintact], nrregbt, [taxAmtAno]
FROM deprec
ORDER BY [ref] ASC
numberofyears = 100 / tax. For example, for a good whose lifecycle is 4 years:
Tax = 25%, so 100 / 25 = 4 years.
I found a WHILE script, but I need it to run:
1. for each REF, and until years < 4 in this example, because the number of years for each good depends on the percentage.
The WHILE script I found is:
DECLARE @table1 TABLE (Id int NOT NULL PRIMARY KEY IDENTITY(1,1), col1 int)
DECLARE @num_rows int, @cnt int
INSERT INTO @table1 (col1) SELECT col1 FROM table2
SET @num_rows = @@ROWCOUNT
SET @cnt = 0
WHILE @cnt < @num_rows
[Code] .....
My question is how to make the loop run for each REF until the year count reaches the lifecycle (4 years in my example).
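Rather than nesting WHILE loops, one set-based sketch (assuming the deprec columns above, and that the goal is one row per REF for each year of its lifecycle, i.e. years 1 through 100/tax) could look roughly like this:

-- Sketch: one row per REF per lifecycle year, derived as 100 / tax
-- (e.g. tax = 25 -> 4 years), without a cursor or WHILE loop.
;WITH years AS (                      -- small tally of year numbers 1..100
    SELECT TOP (100) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS yr
    FROM sys.all_objects
)
SELECT d.ref, y.yr, d.tax
FROM   deprec AS d
JOIN   years AS y
  ON   y.yr <= 100 / d.tax            -- assumes tax > 0
ORDER BY d.ref, y.yr;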
I added a new column to StagePayments table - Activity - which is supposed to end up being the same as the JobActivityID from the JobActivities table. Basically, I need to get JobActivities (JA) info and put it in the StagePayment (SP) column. Problem is there are duplicate JA/Descriptions and SP/Activities, so what I need is to take the first sequence SP/Activity and grab the first JA/JobActivityID that matches for a particular JobID. Then get the next one of each and so on and so on...
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Tables...
CREATE TABLE [dbo].[StagePayment] (
    [PaymentID] [uniqueidentifier] NOT NULL,
    [JobID] [uniqueidentifier] NOT NULL,
    [Paid] [bit] NOT NULL,
    [Sequence] [smallint] NOT NULL,
    [Amount] [decimal](10, 2) NOT NULL,
    [Comment] [varchar] (2000) COLLATE SQL_Latin1_General_CP437_BIN NULL,
    [Activity] [varchar] (30) COLLATE SQL_Latin1_General_CP437_BIN NULL,
    [ActivityID] [uniqueidentifier] NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[StagePayment] WITH NOCHECK ADD CONSTRAINT [PK__StagePayment__457442E6] PRIMARY KEY CLUSTERED ([PaymentID]) ON [PRIMARY]
GO
ALTER TABLE [dbo].[StagePayment] WITH NOCHECK ADD CONSTRAINT [DF__StagePayme__Paid__4668671F] DEFAULT (0) FOR [Paid]
GO
CREATE INDEX [IX_StagePayment] ON [dbo].[StagePayment]([JobID], [Sequence]) ON [PRIMARY]
GO
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
CREATE TABLE [dbo].[JobActivities] (
    [JobActivityID] [uniqueidentifier] NOT NULL,
    [JobID] [uniqueidentifier] NOT NULL,
    [Sequence] [smallint] NOT NULL,
    [Activity_Status] [char] (1) COLLATE SQL_Latin1_General_CP437_BIN NOT NULL,
    [Description] [char] (30) COLLATE SQL_Latin1_General_CP437_BIN NOT NULL,
    [PlanStartDate] [datetime] NULL,
    [PlanEndDate] [datetime] NULL,
    [ActEndDate] [datetime] NULL,
    [AmountDue] [decimal](10, 2) NOT NULL,
    [CanDelete] [bit] NOT NULL,
    [Comments] [varchar] (2000) COLLATE SQL_Latin1_General_CP437_BIN NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[JobActivities] WITH NOCHECK ADD CONSTRAINT [PK_JobActivity] PRIMARY KEY CLUSTERED ([JobActivityID]) WITH FILLFACTOR = 90 ON [PRIMARY]
GO
ALTER TABLE [dbo].[JobActivities] WITH NOCHECK ADD CONSTRAINT [DF_JobActivities_JobActivityID] DEFAULT (newid()) FOR [JobActivityID]
GO
CREATE INDEX [IX_JobActivity] ON [dbo].[JobActivities]([JobID]) WITH FILLFACTOR = 90 ON [PRIMARY]
GO
CREATE INDEX [IX_JobActivity_1] ON [dbo].[JobActivities]([JobID], [Activity_Status]) WITH FILLFACTOR = 90 ON [PRIMARY]
GO
CREATE INDEX [IX_JobActivity_2] ON [dbo].[JobActivities]([JobID], [ActEndDate]) WITH FILLFACTOR = 90 ON [PRIMARY]
GO
CREATE INDEX [IX_JobActivities] ON [dbo].[JobActivities]([JobID], [PlanEndDate]) WITH FILLFACTOR = 90 ON [PRIMARY]
GO
CREATE INDEX [JobActivities53] ON [dbo].[JobActivities]([JobID], [Description], [ActEndDate]) WITH FILLFACTOR = 90 ON [PRIMARY]
GO
CREATE INDEX [JobActivities50] ON [dbo].[JobActivities]([JobID], [Description], [PlanEndDate], [ActEndDate]) WITH FILLFACTOR = 90 ON [PRIMARY]
GO
CREATE INDEX [IX_JobActivities_6] ON [dbo].[JobActivities]([JobActivityID], [Activity_Status], [Description]) WITH FILLFACTOR = 90 ON [PRIMARY]
GO
CREATE INDEX [IX_JobActivities_1] ON [dbo].[JobActivities]([JobID], [Sequence], [Description], [JobActivityID]) ON [PRIMARY]
GO
CREATE INDEX [IX_JobActivities_2] ON [dbo].[JobActivities]([JobID], [Sequence], [ActEndDate]) ON [PRIMARY]
GO
ALTER TABLE [dbo].[JobActivities] ADD CONSTRAINT [FK_JobActivity_Job] FOREIGN KEY ([JobID]) REFERENCES [dbo].[Jobs] ([JobID]) ON DELETE CASCADE
GO
I have tried various versions of this...

begin transaction
update StagePayment
set Activity = (Select J.JobActivityID
                from JobActivities J (nolock)
                inner join StagePayment SP (nolock)
                    on J.JobID = SP.JobID and J.Description = SP.Activity)
where Activity is not NULL

That is way too basic, as I get this error...
Server: Msg 512, Level 16, State 1, Line 2 Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression. The statement has been terminated.
I know somehow I have to get the duplicate Activities in the same sequence as the Descriptions, but that's where I'm stuck?!?
If anyone is able to offer suggestions on how to get this to update correctly, I'd be very happy to hear about it! Thanks in advance! Tiffanie
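One hedged sketch of the pairing idea: number the duplicates on both sides by Sequence within each JobID and join on that row number. It assumes the GUID should land in ActivityID, since Activity is varchar(30) and appears to hold the description text; swap the target column if that assumption is wrong.

-- Sketch: pair the Nth StagePayment row for a given JobID/Activity with the Nth
-- JobActivities row for the same JobID/Description, ordered by Sequence on each side.
;WITH sp AS (
    SELECT PaymentID, JobID, Activity, ActivityID,
           ROW_NUMBER() OVER (PARTITION BY JobID, Activity ORDER BY Sequence) AS rn
    FROM dbo.StagePayment
    WHERE Activity IS NOT NULL
),
ja AS (
    SELECT JobActivityID, JobID, Description,
           ROW_NUMBER() OVER (PARTITION BY JobID, Description ORDER BY Sequence) AS rn
    FROM dbo.JobActivities
)
UPDATE s
SET    s.ActivityID = j.JobActivityID
FROM   sp AS s
JOIN   ja AS j
  ON   j.JobID = s.JobID
 AND   j.Description = s.Activity
 AND   j.rn = s.rn;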
I need to query a database for a recordset and insert this into another database row-by-row.
For each record I want to provide a time of insert, but for each insert this time has to be incremented by 1. The time format is HH:MM:SS.0000, so for each inserted record the last decimal place would increment by 1.
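A hedged sketch of doing this in one set-based INSERT (table and column names here are placeholders, and it assumes SQL Server 2008 or later for datetime2, whose 0.0001-second precision matches HH:MM:SS.0000):

-- Sketch: insert all rows at once, bumping the time by 0.0001 s per row via
-- ROW_NUMBER, instead of looping. Source/target names are hypothetical.
INSERT INTO TargetDb.dbo.TargetTable (SomeKey, SomeValue, InsertTime)
SELECT s.SomeKey,
       s.SomeValue,
       DATEADD(MICROSECOND,
               100 * CAST(ROW_NUMBER() OVER (ORDER BY s.SomeKey) AS int),  -- 100 microseconds = 0.0001 s
               CAST(SYSDATETIME() AS datetime2(4)))
FROM   SourceDb.dbo.SourceTable AS s;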
Application is a Work Tracking/Timesheet database. The increments of work are stored in the TimesheetItem table. This contains, inter alia, the Work Code, the Start and the Duration that the employee spent that day on a particular project. Some employees in the Network Support Department don't complete a standard 7.5-hour day for various reasons, so for every Network Support person I need to update these particular days with an amount to bring the total day's hours to 7.5. This SQL will get me a list of all TimesheetItem records for the personnel concerned.

SELECT
    TimesheetItem.TypeID,
    [Work].WorkCode,
    TimesheetItem.Start AS Start,
    SUM(CAST(TimesheetItem.DurationMins AS float) / 60) AS Hours
FROM
    TimesheetItem LEFT OUTER JOIN
    [Work] ON TimesheetItem.WorkID = [Work].WorkID
WHERE
    (TimesheetItem.EmployeeID IN (SELECT EmployeeID FROM Employee WHERE DepartmentID = 2))
GROUP BY
    TimesheetItem.TypeID, TimesheetItem.Start, [Work].WorkCode
HAVING
    (TimesheetItem.Start >= @FromDate) AND (TimesheetItem.Start <= @ToDate)
ORDER BY
    TimesheetItem.Start

What I need is to group these records by EmployeeID where the accumulated hours per day are < 7.5, so that I can then insert an increment to make up the difference. I'm writing this from home and I don't have access to the tables to provide a script, but there's nothing untoward there. Hope someone can help! Thanks
Edward
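A sketch of that grouping step, using the same tables as the query above (it assumes Start holds just the day, and the 7.5 figure and date filter are taken from the post):

-- Sketch: total hours per employee per day, keeping only the short days and the
-- shortfall needed to top each one up to 7.5 hours.
DECLARE @FromDate datetime = '20240101', @ToDate datetime = '20240131';  -- sample values

SELECT  ti.EmployeeID,
        ti.Start                                         AS WorkDay,
        SUM(CAST(ti.DurationMins AS float) / 60)         AS Hours,
        7.5 - SUM(CAST(ti.DurationMins AS float) / 60)   AS ShortfallHours
FROM    TimesheetItem AS ti
WHERE   ti.EmployeeID IN (SELECT EmployeeID FROM Employee WHERE DepartmentID = 2)
  AND   ti.Start >= @FromDate AND ti.Start <= @ToDate
GROUP BY ti.EmployeeID, ti.Start
HAVING  SUM(CAST(ti.DurationMins AS float) / 60) < 7.5
ORDER BY ti.EmployeeID, ti.Start;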
Hi all, I have a huge database and I am using a cursor to retrieve data, and with each fetched row I am doing some operation. As my database grows, the time taken to execute the cycle is also increasing hugely. How do I solve this?
I defined a stored procedure with a cursor inside for updating data. When I call it from an MS Access client, it fails. When I execute it directly in an ISQL/w window, it doesn't fail but it displays the data (which is the reason for failing from MS Access). Does somebody know how I could do it without displaying data on the screen?
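Without seeing the procedure it's hard to be sure, but the usual culprit is a SELECT inside the cursor loop that returns rows to the client. A generic sketch of fetching into variables (plus SET NOCOUNT ON) so nothing is displayed; all names here are placeholders:

-- Sketch: FETCH the cursor columns INTO variables and update from them, so the
-- procedure emits no result sets. Table/column names are placeholders.
SET NOCOUNT ON;

DECLARE @id int, @val varchar(50);

DECLARE upd_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT id, val FROM dbo.SomeTable;

OPEN upd_cur;
FETCH NEXT FROM upd_cur INTO @id, @val;

WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.SomeTable SET val = UPPER(@val) WHERE id = @id;  -- whatever the real update is
    FETCH NEXT FROM upd_cur INTO @id, @val;
END

CLOSE upd_cur;
DEALLOCATE upd_cur;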
Let's say I have a scalar function whose input I'd like to be the output from the previous row. With a recursive CTE I can do the following:

;with rCTE(iterations, computed) as (
    select 0 [iterations], 1e+0 [computed]
    union all
    select iterations + 1, dbo.f(computed)
    from rCTE
    where iterations < 30
)
select * from rCTE

Thus each iteration is the nth fold of the function f applied to 1 [e.g. iteration 5 is f(f(f(f(f(1)))))].
However, for some illogical reason this relatively simple query does a lot of reads and writes in tempdb. Can I reform this query somehow to just use LAG instead? I know for a fact I only want, say, 30 iterations. It would be very nice to get a window spool, which spawns a worktable with minimal IO.
I know I can put 30 rows into a table variable and do a quirky update across it, but is there a nice way to do this without resorting to some sort of hack?
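For what it's worth, since each value depends on the previous one, the chain is inherently sequential and LAG cannot express it in a single pass. A sketch of the table-variable route (assuming dbo.f is the scalar function from the question):

-- Sketch: the chain x(n) = f(x(n-1)) is inherently sequential, so a simple loop
-- over a table variable does the 30 iterations with minimal tempdb traffic.
DECLARE @t TABLE (iterations int NOT NULL PRIMARY KEY, computed float NOT NULL);
DECLARE @i int = 0, @x float = 1e+0;

INSERT INTO @t (iterations, computed) VALUES (@i, @x);

WHILE @i < 30
BEGIN
    SET @i += 1;
    SET @x = dbo.f(@x);                                   -- scalar function from the post
    INSERT INTO @t (iterations, computed) VALUES (@i, @x);
END

SELECT iterations, computed FROM @t ORDER BY iterations;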
I am trying to think of a way to read a control table, build the SQL statement for each line, and then execute them all, without using a cursor.
To make it simple... control table would look like this:
CREATE TABLE [dbo].[Control_Table](
    [Server_Name] [varchar](50) NOT NULL,
    [Database_Name] [varchar](255) NOT NULL,
    CONSTRAINT [PK_Control_Table] PRIMARY KEY CLUSTERED
[Code] ....
So if we then load:
insert into zt_Planning_Models_Plant_Include_Control_Table
values ('r2d2','planing1'),
       ('r2d2','planing7'),
       ('deathstar','planing3')
Then you would build a SQL script that would end up looking like the following (note all the columns are the same):
insert into master_models Select * from r2d2.planning1.dbo.models
insert into master_models select * from r2d2.planning7.dbo.models
insert into master_models Select * from deathstar.planning3.dbo.models
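One cursor-free way to build and run that batch (a sketch using the Control_Table definition above; swap in the real table name, and note that on SQL Server 2012 string aggregation is done with FOR XML PATH since STRING_AGG is not available):

-- Sketch: concatenate one INSERT...SELECT per control-table row, then execute the
-- whole batch once. Assumes dbo.Control_Table(Server_Name, Database_Name) as above.
DECLARE @sql nvarchar(max);

SELECT @sql =
    (SELECT 'insert into master_models select * from '
            + QUOTENAME(Server_Name) + '.' + QUOTENAME(Database_Name)
            + '.dbo.models' + CHAR(13) + CHAR(10)
     FROM dbo.Control_Table
     FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)');

EXEC sys.sp_executesql @sql;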
And it works perfectly, but ... how do I make sure every item has an element "nodes"? The case here is for the child leaves, obviously. This is because on the client I have to inject this element "nodes" into a JSON version of this XML, and I just wanted to avoid normalizing the structure on the client.
For the root I am using
FOR XML PATH('root'),TYPE; and for the hierarchy that follows FOR XML RAW ('node'), root('nodes'), ELEMENTS
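One pattern that could force the element onto leaf rows (a sketch over a hypothetical TreeNodes table using FOR XML PATH rather than RAW, so not the exact query above): wrap the nested FOR XML subquery in ISNULL, since it returns NULL when there are no children, and substitute an empty <nodes/>.

-- Sketch: leaf rows get an explicit empty <nodes/> because the nested FOR XML
-- subquery returns NULL when no child rows exist. dbo.TreeNodes is hypothetical.
SELECT  p.name AS "@name",
        ISNULL((SELECT c.name AS "@name"
                FROM   dbo.TreeNodes AS c
                WHERE  c.parent_id = p.id
                FOR XML PATH('node'), ROOT('nodes'), TYPE),
               CAST('<nodes/>' AS xml))
FROM    dbo.TreeNodes AS p
WHERE   p.parent_id IS NULL
FOR XML PATH('node'), ROOT('root'), TYPE;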
I have a table called Table1 with five fields: Tableid, Processigndate, Amount, remainingCollectonCount and Frequency. In total I have more than 5 lakh (500,000) records.
Now I need to fill up another table called FutureCashFlow, taking the records from Table1. There will also be five columns, like FutureCashflowid, Table1id, Processigndate, Amount.
Now the condition is that if remainingCollectonCount = 6 and Frequency is 12, then there will be 6 entries in the FutureCashFlow table, where the processing date will be advanced by 1 month for each entry.
For example, Table1:
Tableid | Processigndate | Amount | remainingCollectonCount | Frequency
1 | 2014-02-15 | 48 | 8 | 12
In the FutureCashFlow table the processing date column will be shown in the following way...
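A set-based way to generate those monthly rows (rather than a cursor) might look roughly like this; it assumes FutureCashflowid is an identity column and generalises the row count from remainingCollectonCount:

-- Rough sketch: expand each Table1 row into remainingCollectonCount future rows,
-- one per month, in a single INSERT...SELECT (no cursor).
;WITH n AS (      -- tally of month offsets 1..1000; size it to your largest count
    SELECT TOP (1000)
           CAST(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS int) AS i
    FROM sys.all_objects
)
INSERT INTO FutureCashFlow (Table1id, Processigndate, Amount)
SELECT t.Tableid,
       DATEADD(MONTH, n.i, t.Processigndate),   -- +1 month per generated row
       t.Amount
FROM   Table1 AS t
JOIN   n
  ON   n.i <= t.remainingCollectonCount
WHERE  t.Frequency = 12;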
I am trying to calculate cumulative GPA, which is currently being done via a cursor, and I was looking to change it to set-based if possible.
The cumulative GPA is basically the average marks of all courses in a term and all previous terms. Where I am having an issue is that if the course was taken in a previous term and the mark was lower, then the lower mark is not factored into the GPA. If a course was taken in a previous term and the mark was higher, then the marks for the current term are not factored into the GPA.
In the example below, math is taken in all three terms and the current term's mark is included in the GPA but not the previous terms because current terms mark is higher. Two classes (bio,chem) are taken in third term but not included due to marks being higher in second term.
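In other words, the cumulative GPA at a term looks like the average of each course's best mark over that term and all earlier terms. A rough set-based sketch under that assumption (the StudentMarks table and its columns are guesses, not the actual schema):

-- Sketch: cumulative GPA per term = AVG of each course's best mark up to that term,
-- so a lower repeat never drags the GPA down. Schema below is hypothetical.
SELECT  t.StudentID,
        t.TermID,
        g.CumulativeGPA
FROM    (SELECT DISTINCT StudentID, TermID FROM StudentMarks) AS t
CROSS APPLY (
        SELECT AVG(CAST(best.Mark AS decimal(6,2))) AS CumulativeGPA
        FROM  (SELECT m.Course, MAX(m.Mark) AS Mark
               FROM   StudentMarks AS m
               WHERE  m.StudentID = t.StudentID
                 AND  m.TermID   <= t.TermID
               GROUP BY m.Course) AS best
) AS g
ORDER BY t.StudentID, t.TermID;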
I like to back up the stored procedures' code used in my databases, so I created this script:
/*
IF OBJECT_ID('[Monitor].[dbo].[Procedurecode]') IS NOT NULL
    DROP TABLE [Monitor].[dbo].[Procedurecode];
*/
DECLARE @db nvarchar(50), @strSQL nvarchar (100)
IF CURSOR_STATUS('global','cur1') >= -1
BEGIN
    DEALLOCATE cur1
END
[code]....
The problem is (and I seem not to be the only one with this) described here:
"If the executed string contains a USE statement that changes the database context, the change to the database context only lasts until sp_executesql or the EXECUTE statement has finished running." There is nothing magical in SQL server that knows you intend a series of dynamic sql commands to be part of a single job running in a single context. You have to build the entire string for the job you want to execute."
I am writing a custom query to determine if a legacy table exists or not. From My CMS Server I already have all the instances I have to query and I store the name of the instance in the @Instance variable. I cannot get those stubborn ticks to work right in my query. Below I am using the IF EXISTS statement to search the metadata for the legacy table.
DECLARE @Found tinyint
DECLARE @Instance varchar(100)
set @Instance = 'The Instance'
IF (EXISTS (SELECT a.*
            FROM OPENROWSET('SQLNCLI',
                 'Server=' + @Instance + ';UID=DBAReader;PWD=DBAReader;',
                 'SELECT * FROM [DBA].INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = ''TheTable''') AS a))
    SET @Found = 1
ELSE
    SET @Found = 0
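OPENROWSET only accepts string literals, not variables or expressions, which is why the concatenation with @Instance won't parse. A sketch of pushing the whole IF EXISTS into dynamic SQL and getting @Found back through an OUTPUT parameter:

-- Sketch: bake the instance name into the dynamic string, then return @Found
-- through an sp_executesql OUTPUT parameter.
DECLARE @Found tinyint,
        @Instance varchar(100) = 'The Instance',
        @sql nvarchar(max);

SET @sql = N'
IF EXISTS (SELECT a.*
           FROM OPENROWSET(''SQLNCLI'',
                ''Server=' + @Instance + N';UID=DBAReader;PWD=DBAReader;'',
                ''SELECT * FROM [DBA].INFORMATION_SCHEMA.TABLES
                  WHERE TABLE_NAME = ''''TheTable'''''') AS a)
    SET @Found = 1
ELSE
    SET @Found = 0;';

EXEC sys.sp_executesql @sql,
     N'@Found tinyint OUTPUT',
     @Found = @Found OUTPUT;

SELECT @Found AS Found;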
I have a situation where I need to call a stored procedure once for each row of a table (with some of the columns of each row as its parameters). I was wondering how I can do this without having to use cursors.
Here are my simulated procs...
Main Stored Procedure: This will be called once per each row of some table.
-- All this proc does is, prints out the list of parameters that are passed to it.
CREATE PROCEDURE dbo.MyMainStoredProc (
    @ID INT,
    @Name VARCHAR (200),
    @SessionID INT
)
AS
BEGIN
[Code] ....
Here is a sample call to the outer proc...
EXEC dbo.MyOuterStoredProc @SessionID = 123
In my code above for "MyOuterStoredProc", I managed to avoid using cursors and was able to frame a string that contains multiple EXEC statements. At the end of the proc, I am using sp_executesql to run this string (of multiple proc calls). However, it has a limitation in terms of string length for NVARCHAR. Besides, I am not very sure if this is an efficient way... I just managed to hack something to make it work.
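The length cap usually comes from concatenating plain nvarchar(4000) literals/variables rather than nvarchar(max). A hedged sketch of the same idea with everything kept as nvarchar(max); the driver table and its columns are placeholders, and @SessionID is assumed to be the outer proc's parameter:

-- Sketch: build the batch of EXEC calls into an nvarchar(max) variable so the
-- string can grow past 4,000 characters, then run it once.
-- dbo.SomeDriverTable(ID, Name) is a hypothetical stand-in for the real source.
DECLARE @SessionID int = 123;          -- in the real proc this is the parameter
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'EXEC dbo.MyMainStoredProc @ID = ' + CAST(t.ID AS nvarchar(20))
    + N', @Name = N''' + REPLACE(t.Name, '''', '''''') + N''''
    + N', @SessionID = ' + CAST(@SessionID AS nvarchar(20))
    + N';' + NCHAR(13) + NCHAR(10)
FROM dbo.SomeDriverTable AS t;

EXEC sys.sp_executesql @sql;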
I have a temp table with a list of user IDs that I want to drop, so I created a script with a cursor to run through my drop functions. The drops work by themselves, and the version check works with them, but when I wrap them in the cursor all I get is an output for each user in the results window in SSMS. Why is it not setting the variable and instead outputting to the results?
DECLARE @ver nvarchar(128);
DECLARE @UserName nvarchar(50);
DECLARE @UserD nvarchar(80);
DECLARE @LoginD nvarchar(80);
-- Initialize the variable.
SET @ver = CAST(serverproperty('ProductVersion') AS nvarchar)
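A guess at the cause, without seeing the rest of the script: a SELECT inside EXEC()/dynamic SQL always goes to the client as a result set; to capture it into a variable you would pass it back through an sp_executesql OUTPUT parameter, roughly like this:

-- Sketch: capture a value produced by dynamic SQL into a variable via an OUTPUT
-- parameter instead of letting the SELECT stream to the results window.
DECLARE @UserName2 nvarchar(50) = N'SomeUser';   -- placeholder value
DECLARE @LoginD2 nvarchar(80);

EXEC sys.sp_executesql
     N'SELECT @login = sp.name
       FROM sys.server_principals AS sp
       WHERE sp.name = @user;',
     N'@user nvarchar(50), @login nvarchar(80) OUTPUT',
     @user  = @UserName2,
     @login = @LoginD2 OUTPUT;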
This stored procedure gets some executable queries from the SELECT statement; the cursor fetches each row, executes the query, and inserts the query into table_3 marked as 'E'. Once it reaches 17:00, the stored procedure should stop executing the queries and just insert the queries from the SELECT statement into table_3 marked as 'C'.
I don't know why the outputs in table_3 are quite different from what I expect. The stored procedure comes out with two exactly identical queries, one marked as C and the other marked as E.
CREATE PROCEDURE procedure1
AS
DECLARE cursor_1 CURSOR FOR
    SELECT 'This is a executable query'
    FROM table_1
DECLARE @table_2
An automatic failover set exists. This set consists of a primary replica and a secondary replica (the automatic failover target) that are both configured for synchronous-commit mode and set to AUTOMATIC failover. I configured both availability group databases for automatic failover and synchronous-commit mode, but automatic failover failed and the Cluster service did not start automatically on Node2. It got connected through the AO Listener after starting Node1. Below is the SQL error log during the shutdown of Node1:
Date,Source,Severity,Message
10/27/2015 10:44:20,spid37s,Unknown,AlwaysOn Availability Groups: Waiting for local Windows Server Failover Clustering node to come online. This is an informational message only. No user action is required.
10/27/2015 10:44:20,spid37s,Unknown,AlwaysOn Availability Groups: Local Windows Server Failover Clustering node started.
I have a stored procedure. In the SP I am using a cursor to load data from a parent table into several child tables.
I have attached the script with this message.
My problem is how to use a direct SELECT and INSERT (or another bulk load) to speed up the process instead of the cursor.
USE [IconicMarketing]
GO
/****** Object: StoredProcedure [dbo].[SP_DMS_INVENTORY] Script Date: 3/6/2015 3:34:03 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
Here the SELECT query is fetching the records corresponding to ITEM_DESCRIPTION in 5 separate transactions. How can I change the cursor so that the 5 records are returned at one time, in a single transaction?
CREATE TABLE #ITEMS (ITEM_ID uniqueidentifier NOT NULL, ITEM_DESCRIPTION VARCHAR(250) NOT NULL)

INSERT INTO #ITEMS
VALUES (NEWID(), 'This is a wonderful car'),
       (NEWID(), 'This is a fast bike'),
       (NEWID(), 'This is a expensive aeroplane'),
       (NEWID(), 'This is a cheap bicycle'),
       (NEWID(), 'This is a dream holiday')

DECLARE @ITEM_ID uniqueidentifier
DECLARE ITEM_CURSOR CURSOR
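If the goal is simply to return all five descriptions in one go, no cursor is needed at all; a single SELECT returns them as one result set in one transaction:

-- Sketch: one statement, one result set, one transaction - no cursor required.
SELECT ITEM_ID, ITEM_DESCRIPTION
FROM   #ITEMS
ORDER BY ITEM_DESCRIPTION;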
so async cursor population is supposed to create the cursor and return the cursor id quickly, while the server works on async populating the results. For a keyset-driven cursor, SQL Server stores the key sets in tempdb, which it then uses to fetch data for cursor results. Anyway, this works fine for smaller tables, but I'm finding for large result sets, the async cursor population is very slow and indeed seems to approximate synchronous time. The wait stat I get while it is running (supposedly asynchronously) is TRANSACTION_MUTEX.
Example:

--enable async cursor
exec dbo.sp_configure 'cursor threshold', 0;
reconfigure;

declare @cursor int, @stmt nvarchar(max), @scrollopt int, @ccopt int, @rowcount int;

--example of giant result set
set @stmt = 'select * from sys.all_objects o1, sys.all_objects o2';
[code]...
Note that using the SQL "select * from sys.all_objects o1" is much faster than "select * from sys.all_objects o1, sys.all_objects o2". However, if cursor population is async, I'd expect the time to return a cursor id to be similar between the two.
My question is fairly simple. When I join between two tables, I always use the ON syntax. For example:
SELECT *
FROM Users
JOIN UserRoles
  ON (Users.UserRoleId = UserRoles.UserRoleId)
No problems there. However, if I then decide to further filter the selection based on some trait of the UserRole, I have two options: I can add the condition as a WHERE statement, or I can add the condition within the ON block.
--Version 1:
SELECT *
FROM Users
JOIN UserRoles
  ON (Users.UserRoleId = UserRoles.UserRoleId)
WHERE UserRoles.Active = 'TRUE'

-- Version 2
SELECT *
FROM Users
JOIN UserRoles
  ON (Users.UserRoleId = UserRoles.UserRoleId
  AND UserRoles.Active = 'TRUE')
So, the question is, which is faster/better, if either? The Query Analyzer shows the two queries have the exact same execution plan, which makes sense, since they're both joining the same tables. However, I'm wondering if adding the condition in the ON statement results in fewer rows the JOIN statement initially needs to join up, thus reducing the overall initial size of the results table before the WHERE conditions are applied.
So is there a difference, performance-wise? I imagine that if Users had a thousand records and UserRoles had 10 records, then the JOIN would create a Cartesian product of the two tables, resulting in 10,000 records in the table before the WHERE conditions are applied. However, if only three of the UserRoles are set to Active, would that mean that the resulting table, before applying WHERE conditions, would only contain 3,000 records?
I am exporting the data from database to an excel template that will have 100+ columns and approx 4000 rows of data. Then the business user will make changes to some columns without modifying primary key columns and will send back to us where we will update the same to database.
In order to this am using an excel template by protecting the primary key columns with a password protection.
At the template level I am fine: whenever I try to modify any primary key column it is not allowed, so I am totally good there. But when I use that Excel template as a destination to load data from SSIS, all the protected columns are no longer protected and I am able to make changes.
I need to call a stored proc for each of the IDs above. Our existing code which has a cursor to loop through the table and call the proc for each value is proving to be a performance nightmare. Is there an alternate method that I can use to avoid cursor and make it more efficient?
Select DISTINCT p.dbPatID, p.dbpatfirstname, p.dbPatLastName, s.dbSchTypeCnt as SchDetailType, t.dbSchTypeCnt as SchTypeType, ISNULL(r.dbStatusDesc, 'No Ref') AS dbStatusDesc, ISNULL(t.dbSchTypeCode, 'No Ref') AS dbSchTypeCode, ISNULL(t.dbSchTypeDesc, 'No Ref') AS dbSchTypeDesc,
[code]....
However, I only want the latest a.dbPatApptTime, and only when a.dbPFStatus = 1 and a.ClientRef = 'EPS'.
So the stand alone sql could be....
Select Top(1) dbPatApptTime as LastVisitDate, dbSchTypeDesc as LastVisitDesc from appointments where dbPFStatus = 1 and clientref = 'EPS' order by dbPatApptTime desc
I'm just not sure how to incorporate that into my SQL, or whether there is a better way.
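One way to fold that in is OUTER APPLY with the TOP (1). This is only a sketch: the patients table name and the dbPatID join column are assumptions, and the other joins from the full query are omitted for brevity.

-- Sketch: OUTER APPLY pulls just the latest qualifying appointment per patient,
-- so the stand-alone TOP (1) query folds into the main select.
SELECT DISTINCT
       p.dbPatID, p.dbpatfirstname, p.dbPatLastName,
       la.dbPatApptTime AS LastVisitDate,
       la.dbSchTypeDesc AS LastVisitDesc
FROM   patients AS p                      -- hypothetical table name
OUTER APPLY (
       SELECT TOP (1) a.dbPatApptTime, a.dbSchTypeDesc
       FROM   appointments AS a
       WHERE  a.dbPatID = p.dbPatID       -- assumed join column
         AND  a.dbPFStatus = 1
         AND  a.ClientRef = 'EPS'
       ORDER BY a.dbPatApptTime DESC
) AS la;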
I want to insert the above temp table into another table with the column names defined below, like this:

Empname, Vacation Hours, Sick Hours
EmpA, 12, 10
Basically, if it is X it is vacation hours and if it is Y it is sick hours. I need some simple logic to place the appropriate hours (Balance) in the respective columns. I'm not sure how I can achieve this using PIVOT or conditional logic.
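Conditional aggregation is probably the simplest route; a sketch assuming the temp table looks roughly like #Hours(Empname, HourType, Balance), with HourType holding the X/Y flag:

-- Sketch: conditional aggregation turns the X/Y rows into the two hour columns.
-- #Hours(Empname, HourType, Balance) is a guess at the temp table's shape.
SELECT  Empname,
        SUM(CASE WHEN HourType = 'X' THEN Balance ELSE 0 END) AS [Vacation Hours],
        SUM(CASE WHEN HourType = 'Y' THEN Balance ELSE 0 END) AS [Sick Hours]
FROM    #Hours
GROUP BY Empname;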
I have a database in SQL Server 2000. In this I have used the identity(seed, increment) property so that unique IDs are available. Now, if requests from different users arrive at the same time on the server, will there be a conflict because of the IDs?