I have a table that holds notes for items. I want to do a select statement where one of my columns assigns a sequential value to each row based on the item number. I would like the data to look like this, where doc_no would be produced by my row_number function:
item_no  seq_no  note     doc_no
ABC      1       blah     1
ABC      2       blahh    1
ABC      3       bla3     1
XYZ      1       more n   2
XYZ      2       another  2
EFG      1       blahhh   3
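A minimal sketch of one way to get that doc_no: since every row of the same item_no shares one value, DENSE_RANK() fits better than ROW_NUMBER(). The column names come from the sample above; the table name ItemNotes is an assumption.

select item_no,
       seq_no,
       note,
       dense_rank() over (order by item_no) as doc_no   -- same number for every row of an item
from ItemNotes;                                          -- hypothetical table name
-- ordering by item_no gives ABC = 1, EFG = 2, XYZ = 3 alphabetically; if doc_no must follow
-- first-appearance order instead, you would need to order by something like the item's first date.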
I have a BOM table with all finished item recipes and semi item recipes. I need to create a query where semi item materials are also listed in the finished item recipe.
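A minimal sketch of a recursive BOM explosion, assuming a hypothetical table BOM(ParentItem, ComponentItem, Qty) where semi items appear both as components of finished items and as parents of their own materials:

;with Explode as
(
    -- anchor: recipes of finished items (items that never appear as a component)
    select b.ParentItem as FinishedItem, b.ComponentItem, b.Qty
    from BOM b
    where not exists (select 1 from BOM p where p.ComponentItem = b.ParentItem)
    union all
    -- recursive step: expand each semi item into its own materials
    select e.FinishedItem, b.ComponentItem, e.Qty * b.Qty
    from Explode e
    join BOM b on b.ParentItem = e.ComponentItem
)
select FinishedItem, ComponentItem, sum(Qty) as TotalQty
from Explode
group by FinishedItem, ComponentItem
order by FinishedItem, ComponentItem;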
I am using MS SQL 2012 and have a pretty simple table dbo.MigrationBreakdown with sample data as follows.
DepartDateTime            ZoneMovement
2015-06-26 14:00:00.000   6 to 4
2015-06-26 14:00:00.000   11 to 7
2015-06-26 15:30:00.000   9 to 6
2015-06-26 21:00:00.000   7 to 3
2015-06-27 08:01:00.000   7 to 4
[code]....
What I am trying to do is parse the data set to find out when we have more than three like movements (e.g. 3 to 10) within ANY rolling 72 hour period. I have looked at the SQL window functions OVER with a ROWS | RANGE subclause, but I can't find out how to tackle this rolling 72 hour business.
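A minimal sketch of one approach: since the RANGE frame in SQL Server 2012 can't be bounded by a time interval, the count of matching movements in the preceding 72 hours can be done with CROSS APPLY instead. Table and column names follow the sample above.

select m.DepartDateTime, m.ZoneMovement, w.CntInWindow
from dbo.MigrationBreakdown m
cross apply
(
    select count(*) as CntInWindow
    from dbo.MigrationBreakdown m2
    where m2.ZoneMovement  = m.ZoneMovement
      and m2.DepartDateTime >  dateadd(hour, -72, m.DepartDateTime)
      and m2.DepartDateTime <= m.DepartDateTime          -- rolling 72 hour window ending at this row
) w
where w.CntInWindow > 3;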
Here's my problem. I have 2 tasks defined in my Control Flow tab:
EXECUTE SQL--------->EXECUTE DTS 2000 PACKAGE
When I attempt to run it by right-clicking the EXECUTE SQL task and selecting "Execute Task", it only runs the EXECUTE SQL part (successfully) and does not "kick off" the EXECUTE DTS 2000 PACKAGE after it is done running (even though it completes successfully, as shown by the green box).
Yes, they are connected by a dark green arrow, as indicated in my diagram above.
Why is this?? Am I missing something here? Need help.
Hello. Basically, I have a table with 2 fields:

Id  Item#
1   3333
2   3333
3   2222
4   2222
5   2222
6   3333
7   3333
8   3333

I would like to select only the last run of identical Item# values, which in this case would be Ids 6, 7 and 8. Any idea how I could do that? Thanks
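A minimal sketch, assuming the table is called MyTable with columns Id and [Item#]: take the Item# of the last row, then keep only the rows that come after the last occurrence of any other Item#.

select t.Id, t.[Item#]
from MyTable t                                                          -- hypothetical table name
where t.[Item#] = (select top (1) [Item#] from MyTable order by Id desc)  -- the last Item#
  and t.Id > isnull((select max(Id)
                     from MyTable x
                     where x.[Item#] <> (select top (1) [Item#] from MyTable order by Id desc)), 0);
-- for the sample data this returns Ids 6, 7 and 8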
I am having trouble finishing the last bit of a report. The report shows orders that customers have placed that contain 0 promo items, all promo items (all items in the order are promo items), and a mix of promo and non-promo (at least 1 promo item and 1 non-promo item). I've simplified this a bit for ease of understanding, but let's assume we have 2 tables: a Promo table that contains the items on promotion and the dates that promotion is valid, and a Sales table that contains the order number, order date, and SKU ordered.
I've already written code that finds orders that have at least 1 promo item in them, and using that, I can determine which orders have 0 promo items in them. Where I am stuck is taking the orders that have at least 1 promo item in them and separating them into orders that have only promo items and orders that have both promo and non-promo items. Also, there are several promos throughout the year (called "Offers"), so in my code below you can see 2 different Offers ("JF" and "MA") with the corresponding dates they are valid. They will never overlap. My results also have to be split out by Offer so management can look at the results of each offer separately. Here is some code:
Code:
create table #Promos
(
    Offer varchar(2) null,
    SKU int null,
    StartDt date null,
    EndDt date null
[Code] ....
So my results should show OrderNo A1111 in the Promo and No Promo group, because SKU 5 was not promotional during the time that order was placed. OrderNo A2222 should be in the Promo Only group, because both SKUs on the order were promotional at the time the order was placed.
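A minimal sketch of the split, assuming a hypothetical #Sales(OrderNo, OrderDate, SKU) table alongside the #Promos table above: count all order lines and the lines that hit a promo SKU for each order within each Offer window, then classify from those counts.

select o.Offer,
       s.OrderNo,
       case when count(*) = count(pr.SKU) then 'Promo Only'
            else 'Promo and No Promo' end as PromoGroup
from #Sales s                                                   -- hypothetical: OrderNo, OrderDate, SKU
join (select distinct Offer, StartDt, EndDt from #Promos) o     -- one row per Offer window
     on s.OrderDate between o.StartDt and o.EndDt
left join #Promos pr on pr.Offer = o.Offer
                    and pr.SKU   = s.SKU                        -- did this line hit a promo SKU?
group by o.Offer, s.OrderNo
having count(pr.SKU) > 0;                                       -- only orders with at least 1 promo item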
I have one stored proc that uses ROW_NUMBER over a partition, which looks like this:
Select TargetID, Academic_Year_id, Course_Mode, UK_Enrol, Int_Enrol, Notes, Revision_Number
from
(
    SELECT ROW_NUMBER() OVER (partition by [Academic_Year_id] order by [Revision_Number] DESC) as [RevNum],
           TargetID, Academic_Year_id, Course_Mode, Target_Year, UK_Enrol, Int_Enrol, Notes, Revision_Number
    FROM tbl_targets
    where course_mode = @course_mode
) RV
where (RV.RevNum = 1)
Now the next stored proc needs to use the above, but I need to add the Academic_Year from the tbl_acyear_lookup table and also filter on target_year = 'year 1'.
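A minimal sketch of that next proc, assuming tbl_acyear_lookup exposes Academic_Year_id and an Academic_Year column (the join key and column names are assumptions):

SELECT RV.TargetID, RV.Academic_Year_id, ay.Academic_Year, RV.Course_Mode,
       RV.UK_Enrol, RV.Int_Enrol, RV.Notes, RV.Revision_Number
FROM
(
    SELECT ROW_NUMBER() OVER (PARTITION BY [Academic_Year_id] ORDER BY [Revision_Number] DESC) AS [RevNum],
           TargetID, Academic_Year_id, Course_Mode, Target_Year, UK_Enrol, Int_Enrol, Notes, Revision_Number
    FROM tbl_targets
    WHERE course_mode = @course_mode
      AND target_year = 'year 1'      -- move to the outer WHERE if revisions should be numbered before this filter
) RV
JOIN tbl_acyear_lookup ay             -- assumed lookup: Academic_Year_id -> Academic_Year
  ON ay.Academic_Year_id = RV.Academic_Year_id
WHERE RV.RevNum = 1;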
I have a large SQL 2012 table containing survey details. The number of questions varies in each survey and can range from as few as 10 questions to a maximum of 50. I need to adapt my crosstab code below to include a CASE statement that outputs a column (Q1, Q2, Q3 etc.) representing the questions up to a maximum of 50 questions (Q50), and to place the answer under the corresponding question column within each survey. Ideally I want to avoid having to write 50 CASE statements in my code. I chose the CASE statement method as I understand the PIVOT option isn't as flexible. I have included some test data and the output should look like this:
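As for avoiding 50 hand-written CASE statements, a minimal sketch is to generate them with dynamic SQL and run the result with sp_executesql. A hypothetical SurveyAnswers(SurveyID, QuestionNo, Answer) table is assumed here since the real schema isn't shown.

declare @cols nvarchar(max), @sql nvarchar(max);

-- build "MAX(CASE WHEN QuestionNo = 1 THEN Answer END) AS Q1, ..." up to Q50
;with N as (select top (50) row_number() over (order by (select null)) as n
            from sys.all_objects)
select @cols = stuff((select ', MAX(CASE WHEN QuestionNo = ' + cast(n as varchar(2))
                             + ' THEN Answer END) AS Q' + cast(n as varchar(2))
                      from N
                      for xml path('')), 1, 2, '');

set @sql = N'select SurveyID, ' + @cols + N'
             from SurveyAnswers          -- hypothetical table
             group by SurveyID;';

exec sp_executesql @sql;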
There are three methods to consider when calculating the days to pay logic.
Method 1 - Simple: Look for Document Type 2 (Invoice); if "closed at date" > "posting date" then number of days = ("closed at date" - "posting date").
Method 2 - A Document Type 1 (Payment) closes a Document Type 2 (Invoice). For this method the formula would be: Payment record (1) "posting date" - Invoice record (2) "posting date".
Method 3 - An Invoice closes the payment.
On a payment entry, "closed by entry no." refers to an Invoice entry.
a. In our code we are not on the payment looking for the invoice; we are on the invoice. i. Because of this we need to find the entry that our current invoice has closed.
I am taking this from a page that has the pascal code that I need to translate to SQL.
IF (CustLedgEntry2."Document Type" = CustLedgEntry2."Document Type"::Invoice) AND NOT CustLedgEntry2.Open THEN
  IF CustLedgEntry2."Closed at Date" > CustLedgEntry2."Posting Date" THEN
    UpdateDaysToPay(CustLedgEntry2."Closed at Date" - CustLedgEntry2."Posting Date")
[Code] ....
I am also including create table and insert data scripts ...
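A minimal sketch of Method 1 translated to T-SQL, assuming a hypothetical CustLedgerEntry table with EntryNo, DocumentType, [Open], ClosedAtDate and PostingDate columns (the names are assumptions, since the create table script isn't reproduced here):

select EntryNo,
       datediff(day, PostingDate, ClosedAtDate) as DaysToPay
from CustLedgerEntry                 -- hypothetical table
where DocumentType = 2               -- 2 = Invoice
  and [Open] = 0                     -- closed entries only
  and ClosedAtDate > PostingDate;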
With this query I get only the records I need, but I would like the output formatted this way:
1 - 20
21 - 30
31 - 40
Of course in the real environment the IDs are not consecutive; this is just one example of data.
declare @temp table (ID int)
declare @i int = 1
while (@i < 1000)
begin
    insert into @temp values (@i)
    set @i = @i + 1
end

select ID
from
(
    select ID, row_number() over (order by ID) as rn
    from @temp
) q
where (rn % 20 = 0) OR (rn % 20 = 1)
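A minimal sketch, using LEAD (available in SQL 2012), that pairs each range-opening row selected by the modulo filter with the range-closing row that follows it and outputs them as "start - end" text:

select Rng
from
(
    select rn,
           cast(ID as varchar(10)) + ' - '
           + cast(lead(ID) over (order by rn) as varchar(10)) as Rng
    from
    (
        select ID, row_number() over (order by ID) as rn
        from @temp
    ) q
    where (rn % 20 = 0) OR (rn % 20 = 1)
) p
where rn % 20 = 1;      -- keep only the rows that open a range, e.g. '1 - 20', '21 - 40', ...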
I need a new field 'Field1' added which will assign a SEQUENCE number 1, 2, ... based on a GROUP BY of MasterID, and another field TotalCount which will COUNT the total rows per MasterID (here it will be 2).
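A minimal sketch using window functions, assuming a hypothetical table MyTable keyed by MasterID with a DetailID column to order the sequence by (both names are assumptions):

select MasterID,
       row_number() over (partition by MasterID order by DetailID) as Field1,     -- 1, 2, ... per MasterID
       count(*)     over (partition by MasterID)                   as TotalCount  -- rows per MasterID
from MyTable;                                                                      -- hypothetical table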
How do I create a row number for a consecutive action? Example: I have a listing of people who have either completed a goal or not. I need to count, by person, the number of consecutively missed goals.
My sql table is this: PersonId, GoalDate, GoalStatus (holds completed or missed)
My first thought was to use the ROW_NUMBER function; however, that doesn't work because someone could complete a goal, miss a goal, then complete one, and when they complete a goal after a missed goal the count has to start over.
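A minimal sketch of the usual gaps-and-islands approach: the difference between two ROW_NUMBERs stays constant within each unbroken run of "missed" rows, so grouping by that difference restarts the count after every completed goal. The column names are from above; the table name Goals is an assumption.

select PersonId,
       min(GoalDate) as RunStart,
       count(*)      as ConsecutiveMissed
from
(
    select PersonId, GoalDate, GoalStatus,
           row_number() over (partition by PersonId             order by GoalDate)
         - row_number() over (partition by PersonId, GoalStatus order by GoalDate) as grp
    from Goals                               -- hypothetical table name
) g
where GoalStatus = 'missed'
group by PersonId, grp;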
I have found an execution plan with a significant difference between the actual and estimated number of rows (roughly actual / 2 = estimated) in a non-clustered index seek. Statistics are up to date.
In a t-sql 2012 select statement, I have a query that looks like the following:
SELECT CAST(ROUND(SUM([ABSCNT]), 1) AS NUMERIC(24,1)) from table1. The field called [ABSCNT] is declared as a double. I would like to know how to return a number like 009.99 from the query (see the sketch after the list below). I would basically like to have the following:
1. 2 leading zeroes (basically I want 3 numbers displayed before the decimal point)
2. the number before the decimal point to always display even if the value is 0, and
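A minimal sketch using FORMAT, which is available in SQL Server 2012: the '000.0' picture pads to three digits before the decimal point and keeps them even when the value is 0 (use '000.00' if two decimal places are wanted). Note that FORMAT returns nvarchar, so this is display formatting, not a numeric value.

select format(cast(round(sum([ABSCNT]), 1) as numeric(24,1)), '000.0') as ABSCNT_Formatted
from table1;
-- e.g. FORMAT(CAST(9.9 AS numeric(24,1)), '000.0') returns '009.9'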
I have a table which doesn't have any Identity column. The data in the table has duplicate values; some rows are completely identical. I just want to select the row that has the maximum row ID / row number when my select statement returns multiple rows.
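A minimal sketch: ROW_NUMBER can manufacture the missing row number, and grouping on every column then keeps only the highest-numbered copy of each duplicate set. The table and column names (MyTable, Col1, Col2) are assumptions.

;with Numbered as
(
    select Col1, Col2,                                           -- hypothetical columns
           row_number() over (order by (select null)) as RowNum  -- manufactured row number
    from MyTable                                                 -- hypothetical table, no identity column
)
select Col1, Col2, max(RowNum) as RowNum   -- keep the highest-numbered copy of each duplicate set
from Numbered
group by Col1, Col2;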
I have a server which is using a total of 12 cores running an instance of SQL Server. I was told to dedicate only 8 cores of CPU to be used by the instance for licensing purposes.
What is the best way to go about limiting the CPU cores the instance uses?
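One common approach is processor affinity. A minimal sketch follows; the CPU numbers 0 to 7 are just an assumption for the first 8 schedulers, and the right numbers depend on the server's NUMA layout. Whether affinity alone satisfies core-based licensing is worth confirming against the licensing documentation.

-- restrict the instance to schedulers 0-7 (8 cores)
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 7;

-- revert to using all cores
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = AUTO;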
I am trying to calculate the number of hours a device has been used and I can't find out how. I need a query that calculates and averages the number of hours used per week.
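A minimal sketch, assuming a hypothetical usage log DeviceUsage(DeviceId, StartTime, EndTime): hours per session come from DATEDIFF, the inner query totals them per ISO week, and the outer query averages those weekly totals per device.

select DeviceId, avg(HoursUsed) as AvgHoursPerWeek
from
(
    select DeviceId,
           datepart(year, StartTime)  as UsageYear,
           datepart(isowk, StartTime) as UsageWeek,
           sum(datediff(minute, StartTime, EndTime)) / 60.0 as HoursUsed
    from DeviceUsage                              -- hypothetical log: DeviceId, StartTime, EndTime
    group by DeviceId, datepart(year, StartTime), datepart(isowk, StartTime)
) w
group by DeviceId;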
New monthly data is being loaded, checked, and finally approved after 6 or 7 iterations. Because of this iteration, the monthly data set is added, then deleted, then added, then deleted a few times. Because the table is big this process takes time; any thoughts on how to make the delete/insert process faster? Keep in mind I cannot do much, because it is a production table and is being accessed by other users for other analysis.
Delete is done based on trx_date which is a year/month combo, like 201508.
The table has monthly sales by customer aggregated.
The table structure is:
CREATE TABLE [dbo].[Sales](
    [batch_key] [int] NOT NULL,
    [Company_key] [int] NOT NULL,
    [customer_key] [char](22) NOT NULL,
    [Trx_Date] [int] NOT NULL,
    [account] [nvarchar](35) NOT NULL,
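One hedged option, given that each delete targets a single Trx_Date value: delete the month in small batches so locks stay short-lived and the log doesn't balloon, then load the replacement data. A minimal sketch (the batch size of 50,000 is an arbitrary assumption):

declare @trx_date int = 201508;

while 1 = 1
begin
    delete top (50000)                    -- small batches keep locks and log usage short
    from dbo.Sales
    where Trx_Date = @trx_date;

    if @@rowcount = 0 break;
end;
-- then insert the new month's data; partitioning the table on Trx_Date and using
-- partition switching would avoid the delete entirely, if a table change is ever allowed.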
IF OBJECT_ID('tempdb..#Test') IS NOT NULL DROP TABLE #Test
--===== Create the test table
create table #Test ([Year] float, Age int)
INSERT INTO #Test ([Year], Age)
[Code]...
I queried as below to get an additional column:
Select *, row_number() over (partition by [Year] order by Age) as RN from #Test
Below is the scenario which I currently have in my query. I need to write this query without any hardcoded values, so that it will work for any number of years without modifications.
Startdate = CASE WHEN Trandate between '06-04-2013' and '05-04-2014' then '06-04-2013'
                 WHEN Trandate between '06-04-2012' and '05-04-2013' then '06-04-2012'
                 WHEN Trandate between '06-04-2011' and '05-04-2012' then '06-04-2011'
                 WHEN Trandate between '06-04-2010' and '05-04-2011' then '06-04-2010'
                 WHEN Trandate between '06-04-2009' and '05-04-2010' then '06-04-2009'
                 WHEN Trandate between '06-04-2008' and '05-04-2009' then '06-04-2008'
            END
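A minimal sketch that derives the 6 April fiscal-year start from the transaction date itself instead of hardcoding each year, assuming the dates above are dd-MM-yyyy boundaries and Trandate is a date/datetime column (DATEFROMPARTS is available in SQL 2012; this returns a date rather than a string):

Startdate = CASE WHEN Trandate >= DATEFROMPARTS(YEAR(Trandate), 4, 6)
                 THEN DATEFROMPARTS(YEAR(Trandate), 4, 6)       -- on or after 6 April: this year's start
                 ELSE DATEFROMPARTS(YEAR(Trandate) - 1, 4, 6)   -- before 6 April: previous year's start
            END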
From what I've seen, the CHECKSUM_AGG function appears to return 0 for an even number of repeated values. If so, then what is the practical use of this function for implementing an aggregate checksum across a set of values?
For example, the following works as expected: it returns a non-zero checksum across (1) value or across (2) unequal values.
declare @t table ( ID int );
insert into @t ( ID ) values (-7077);
select checksum_agg( ID ) from @t;
-----------
-7077

declare @t table ( ID int );
insert into @t ( ID ) values (-7077), (-8112);
select checksum_agg( ID ) from @t;
-----------
1035
However, the function appears to return 0 for an even number of repeated values.
declare @t table ( ID int );
insert into @t ( ID ) values (-7077), (-7077);
select checksum_agg( ID ) from @t;
-----------
0
It's not specific to -7077, for example:
declare @t table ( ID int );
insert into @t ( ID ) values (-997777), (-997777);
select checksum_agg( ID ) from @t;
-----------
0
What's curious is that (3) repeated equal values will return a non-zero checksum.
declare @t table ( ID int );
insert into @t ( ID ) values (-997777), (-997777), (-997777);
select checksum_agg( ID ) from @t;
-----------
-997777
But a set of (4) repeated equal values will return 0 again.
declare @t table ( ID int );
insert into @t ( ID ) values (-997777), (-997777), (-997777), (-997777);
select checksum_agg( ID ) from @t;
-----------
0
Finally, a set of (2) unequal values repeated twice will return 0 again.
declare @t table ( ID int );
insert into @t ( ID ) values (-997777), (8112), (-997777), (8112);
select checksum_agg( ID ) from @t;
-----------
0
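These results are consistent with CHECKSUM_AGG behaving like a bitwise-XOR aggregate (an inference from the output above, not a documented guarantee): any value XORed with itself cancels to 0, so pairs of equal values, and pairs of equal pairs, vanish. A quick illustration with T-SQL's ^ operator:

select -7077 ^ -7077;                       -- 0: a pair of equal values cancels
select -997777 ^ -997777 ^ -997777;         -- -997777: the odd one out survives
select -997777 ^ 8112 ^ -997777 ^ 8112;     -- 0: two cancelled pairs
-- so CHECKSUM_AGG is mainly useful for detecting that *something* changed between two sets,
-- with the caveat that duplicate-cancelling collisions like these are possible.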
I am currently trying to write a query that pulls a summation of item specific data from sales orders. For simplicity's sake, the column structure can be something like the following...
Item#, PoundsDuringWeekNumber (this would be the current week number out of the 52 weeks in a year and the pounds of the item during that week)
However, those are not going to be the only 2 columns. The idea of the query is that the user would be able to provide the query a date range (say 2 months) and the columns would then morph into the following...
Item#, PoundsDuringWeekNumber (current), PoundsDuringWeekNumber(current - 1), PoundsDuringWeekNumber (current - 2), etc. etc. PoundsDuringWeekNumber(current - 8)
Initial ideas were to create a function to execute the summation of the pounds during the date range of the week in question and execute it across the columns, but with the indeterminate number of columns, the query would not know how many times to execute the function.
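A minimal sketch of the usual workaround: build the variable number of week columns with dynamic SQL, so the column list is generated from the supplied date range at run time. The table and column names (SalesOrders, Item_No, OrderDate, Pounds) and the sample range are assumptions.

declare @From date = '2015-05-01', @To date = '2015-06-30';   -- user-supplied range
declare @cols nvarchar(max), @sql nvarchar(max);

-- one SUM(CASE ...) column per week in the range, counted back from @To
;with N as
(
    select top (datediff(week, @From, @To) + 1)
           row_number() over (order by (select null)) - 1 as WeeksAgo
    from sys.all_objects
)
select @cols = stuff((select ', SUM(CASE WHEN datediff(week, OrderDate, @To) = '
                             + cast(WeeksAgo as varchar(3))
                             + ' THEN Pounds ELSE 0 END) AS [CurrentMinus' + cast(WeeksAgo as varchar(3)) + ']'
                      from N
                      for xml path('')), 1, 2, '');

set @sql = N'select Item_No, ' + @cols + N'
from SalesOrders                          -- hypothetical table
where OrderDate between @From and @To
group by Item_No;';

exec sp_executesql @sql, N'@From date, @To date', @From = @From, @To = @To;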
I want to add a number to the end of an ID to create a series. For example, I have an EventID that may have many sub events. If the EventID is 31206 and I want to have sub events, I would like to have the following sequence. In this case, let's say I have 4 sub events, so I want to check the EventID and then produce:
312061 312062 312063 312064
How can I check what the EventID is, then concatenate a sequence number by the EventID?
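A minimal sketch, assuming a hypothetical SubEvents table that already holds one row per sub event with its parent EventID and a SubEventDate to order by: ROW_NUMBER supplies the per-event sequence, which is then concatenated onto the EventID.

select EventID,
       cast(EventID as varchar(20))
       + cast(row_number() over (partition by EventID order by SubEventDate) as varchar(10)) as SubEventID
from SubEvents;                       -- hypothetical: EventID, SubEventDate, ...
-- for EventID 31206 with 4 rows this yields 312061, 312062, 312063, 312064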
Is there a performance limit on the number of indexes per table / database? With filtered indexes there appear to be many more opportunities for more finely defined, and therefore smaller, indexes, resulting in many more indexes on a single table.