SQL Server 2012 :: Using Cross Apply To UNPIVOT Data
Jan 15, 2014
I was reading Kenneth Fisher's and Dwain Camps's articles on unpivoting using CROSS APPLY, and I can actually get them to work.
CREATE TABLE #TxCycle(
Cycle INT NOT NULL,
PatientID INT NOT NULL,
ALOPECIA TINYINT,
Causality1 TINYINT,
Relatedness1 TINYINT,
[Code] ....
The one thing I was wondering was this: how do I extract the symptom names from the field list without knowing them all beforehand? Dwain does this
-- DDL and sample data for UNPIVOT Example 2
CREATE TABLE #Suppliers
(ID INT, Product VARCHAR(500)
,Supplier1 VARCHAR(500), Supplier2 VARCHAR(500), Supplier3 VARCHAR(500)
,City1 VARCHAR(500), City2 VARCHAR(500), City3 VARCHAR(500))
Can this be adapted if you don't know all the column names beforehand? (Likely not). Back in the dark ages, when I was working on a database like this, it was in Access, and I could loop over the fields collection and evaluate each field name. (Yes, I know you're not supposed to store information in field names, but I inherited that mess!)
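A metadata query can supply that column list at run time, much like the Access fields collection did. A minimal sketch, assuming everything in #TxCycle other than Cycle and PatientID is a symptom column; the resulting names can then be concatenated into a dynamic CROSS APPLY (VALUES ...) statement:

SELECT c.name AS SymptomColumn
FROM tempdb.sys.columns AS c
WHERE c.[object_id] = OBJECT_ID('tempdb..#TxCycle')
  AND c.name NOT IN ('Cycle', 'PatientID');

For permanent tables the same idea works against sys.columns in the user database.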
create table a (id int, name varchar(10));
create table b (id int, sal int);
insert into a values (1, 'John'), (1, 'ken'), (2, 'paul');
insert into b values (1, 400), (1, 500);

select * from a
cross apply (select max(sal) as sal from b where b.id = a.id) b;
Below is the result for the same:
id    name    sal
1     John    500
1     ken     500
2     paul    NULL
Now I'm not sure why the record with ID 2 is returned when using CROSS APPLY. Shouldn't it be excluded with CROSS APPLY and only shown when using OUTER APPLY?
One thing I noticed is that if you remove the aggregate function MAX, the record with ID 2 is not shown in the output. I'm running this query on SQL Server 2012.
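For what it's worth, the behaviour can be reproduced without APPLY at all: a scalar aggregate (MAX with no GROUP BY) always returns exactly one row, even over an empty input, so the applied derived table is never empty. A quick demonstration against the same table b:

SELECT MAX(sal) AS sal FROM b WHERE b.id = 2;                -- one row, sal = NULL
SELECT MAX(sal) AS sal FROM b WHERE b.id = 2 GROUP BY b.id;  -- zero rows

That is why CROSS APPLY keeps the id 2 row while MAX is present and drops it as soon as the aggregate is removed.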
Hi... I'm reading the MS Press 70-442 Self-Paced Training Kit, and I'm having problems with this example. I'd like help getting it to work correctly, or to understand why it isn't working the way I planned.
On page 67, the lab is about the APPLY operator (CROSS APPLY and OUTER APPLY). I first have to input a sample table-valued function into the AdventureWorks database:
CREATE FUNCTION fnGetAvgCost (@ProdID int)
RETURNS @RetTable TABLE (AvgCost money)
AS
BEGIN
    WITH Product (stdcost) AS
    (
        SELECT avg(standardcost) AS AvgCost
        FROM Production.ProductCostHistory
        WHERE ProductID = @ProdID
    )
    INSERT INTO @RetTable
    SELECT * FROM Product
    RETURN
END
and then run a sample T-SQL statement
SELECT p.Name, p.ProductNumber,
       Convert(varchar, cost.AvgCost, 1) AS 'Average Cost'
FROM Production.Product p
CROSS APPLY fnGetAvgCost(p.ProductID) AS cost
WHERE cost.AvgCost IS NOT NULL
ORDER BY cost.AvgCost desc
My problem is with the WHERE clause... According to page 56, CROSS APPLY returns only those rows from the outer table for which the table-valued function produces a result set, so why do I need to explicitly filter out NULL values?
When I remove the WHERE clause, the query retrieves lots of NULL AvgCost values.
Again, according to page 56, it is the OUTER APPLY that returns all rows that return a result set and will include NULL values in the columns that are returned from the table-valued function.
So, in short, when I remove the WHERE clause I don't see the difference between CROSS APPLY and OUTER APPLY in this example.
(Please refrain from introducing another example into this question.)
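Staying with the example already in the question: the likely explanation is that fnGetAvgCost itself never returns an empty table. The scalar AVG inside the CTE produces exactly one row even when there is no cost history for the product, that single NULL row is inserted into @RetTable, and CROSS APPLY therefore always has a row to pair with. A hedged tweak to the same function that would make the CROSS APPLY / OUTER APPLY difference visible (my reading, not the book's):

ALTER FUNCTION fnGetAvgCost (@ProdID int)
RETURNS @RetTable TABLE (AvgCost money)
AS
BEGIN
    WITH Product (stdcost) AS
    (
        SELECT avg(standardcost)
        FROM Production.ProductCostHistory
        WHERE ProductID = @ProdID
    )
    INSERT INTO @RetTable
    SELECT stdcost
    FROM Product
    WHERE stdcost IS NOT NULL   -- return an empty table instead of a single NULL row
    RETURN
END

With the function returning zero rows for products that have no cost history, CROSS APPLY drops those products without any WHERE clause, and OUTER APPLY keeps them with a NULL Average Cost, which matches the page 56 description.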
I'm unable to reproduce the error. When clients upgrade their OS and SQL Express to a more recent version, the error disappears.
The error is: Incorrect syntax near '.'
The query in question resembles this:
Select column1, column2
from Table1 t
cross apply function(t.column4, t.column5) f
where column3 = 'XXXX'
I made sure that the compatibility level is greater than 90. This error is happening on SQL 2005 SP2 as well as SQL 2008 SP2 (but not all clients are suffering from the same problem).
Can it be the .net framework? Although the machines had .net framework 3.52.
Could the OS be an issue? The OSes seem to be old: Windows Server 2008 SP2.
I've tried to reproduce the error by setting up virtual machines with the same OS and SQL but, again, I can't reproduce it.
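One thing worth double-checking on the clients that do fail: APPLY with correlated arguments (t.column4 and t.column5 fed into the function) is only parsed when the current database is at compatibility level 90 or higher, and level 80 produces exactly this "Incorrect syntax near '.'" message. A quick check to run in the database context the query actually uses:

SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME();

If the application connects with a different default database than the one you inspected, that database can still be sitting at level 80 even though the main one is not.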
Why am I getting a different number of distinct ids in these queries?
USE AdventureWorks
go
Declare @myXml as xml
set @myXml = ' <lol>omg</lol> <lol>rofl</lol> ';

select locations.*, T.c.value('.', 'nvarchar(max)')
from (
    select newid() as Id
    from Production.ProductModel
    where ProductModelID in (7, 8)
) as locations
cross apply @myXml.nodes('(/lol)') T(c);

select mytable.*, T.c.value('.', 'nvarchar(max)')
from (
    select newid() as Id
    union
    select newid()
) as mytable
cross apply @myXml.nodes('(/lol)') T(c);
I've been trying to figure out why these two return a different number of distinct ids... Is that a bug in the optimizer?
Here is the same test rewritten with CTEs:
USE AdventureWorks
go
Declare @myXml as xml
set @myXml = ' <lol>omg</lol> <lol>rofl</lol> ';

WITH locations as (
    select newid() as Id
    from Production.ProductModel
    where ProductModelID in (7, 8)
)
select locations.*, T.c.value('.', 'nvarchar(max)')
from locations
cross apply @myXml.nodes('(/lol)') T(c);

with mytable as (
    select newid() as Id
    union
    select newid()
)
select mytable.*, T.c.value('.', 'nvarchar(max)')
from mytable
cross apply @myXml.nodes('(/lol)') T(c);
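A guess at what is happening: NEWID() is non-deterministic, so the optimizer is free to recompute it after the CROSS APPLY, i.e. once per XML node rather than once per outer row, and the different plans simply make different choices. A sketch of one way to pin the ids down (assuming @myXml is declared as above): materialize them before applying the nodes, so they cannot be re-evaluated.

DECLARE @locations TABLE (Id uniqueidentifier);

INSERT INTO @locations (Id)
SELECT NEWID()
FROM Production.ProductModel
WHERE ProductModelID IN (7, 8);

SELECT l.Id, T.c.value('.', 'nvarchar(max)') AS NodeValue
FROM @locations AS l
CROSS APPLY @myXml.nodes('(/lol)') AS T(c);

With the table variable in between, each outer row keeps a single id across both applied nodes.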
I'm attempting to leverage SQL Server's new APPLY operator, and I guess I don't fully understand its proper usage.
This is a relatively simple request. First, I want to count the models produced within a valid period of time. The first CROSS APPLY gets the valid starting and ending dates and looks up the number of models produced during that period. This section of code works perfectly fine.
The problem appears to be with the second CROSS APPLY. What I'm attempting to accomplish is to count all models produced, regardless of time frame.
When executed, the query appears to go into a loop and I end up cancelling the request.
Any ideas where I went wrong? Any help is greatly appreciated!
select b1.model   as Model,
       b1.MinDate as Mfg_Str_Date,
       b1.MaxDate as Mfg_End_Date,
       Count(b2.Model + b2.Serial) as Mfg_Date_Valid,
       Count(b3.Model + b3.Serial) as All_Units
from (select b.model, min(b.build_date) as MinDate, max(b.build_date) as MaxDate
      from etbl_models_Serial as b
      group by b.model) as b1
-- These are units produced within the valid window
cross apply (select b2.model, b2.Serial
             from etbl_Production as b2
             where b2.Model = b1.Model
               and b2.Mfg_Date between b1.MinDate and b1.MaxDate) as b2
-- These are all units produced
cross apply (select b3.model, b3.Serial
             from etbl_Production as b3
             where b3.Model = b2.Model) as b3
Group by b1.Model, b1.MinDate, b1.MaxDate
Order by b1.Model
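The runaway is probably not a loop but a row explosion: the second CROSS APPLY is correlated to b2, so every in-window row for a model gets paired with every production row for that model, which multiplies quickly on a large table. A hedged rework using the same assumed tables and columns, aggregating inside each APPLY so the two sets never cross-multiply:

select b1.model   as Model,
       b1.MinDate as Mfg_Str_Date,
       b1.MaxDate as Mfg_End_Date,
       v.Mfg_Date_Valid,
       a.All_Units
from (select b.model, min(b.build_date) as MinDate, max(b.build_date) as MaxDate
      from etbl_models_Serial as b
      group by b.model) as b1
cross apply (select count(*) as Mfg_Date_Valid
             from etbl_Production as b2
             where b2.Model = b1.model
               and b2.Mfg_Date between b1.MinDate and b1.MaxDate) as v
cross apply (select count(*) as All_Units
             from etbl_Production as b3
             where b3.Model = b1.model) as a
order by b1.model;

Because each APPLY now returns a single aggregated row per model, the outer GROUP BY is no longer needed.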
Select top 1000 a.EmployeeID, b.*
from #TmpActiveEmployeesWSeverance a
cross apply dbo.fn_Severance_AccountItemsTable(a.EmployeeID, a.BenefitTypeID, null, null, null, null) b
order by a.EmployeeID, a.BenefitTypeID
It runs in 4 seconds.
If I try to insert the results into anything, it runs for more than 5 minutes (I have yet to let it finish).
I have tried the following two pieces of code, both with the same results:
Select top 1000 a.EmployeeID, b.*
into #Tmp
from #TmpActiveEmployeesWSeverance a
cross apply dbo.fn_Severance_AccountItemsTable(a.EmployeeID, a.BenefitTypeID, null, null, null, null) b
order by a.EmployeeID, a.BenefitTypeID
--and
Insert Into TRP_ActiveEmployeesWSeverance
    (EmployeeID, PK, BeginningBalance, BenefitInterestRowID, BenefitInterestID, BenefitTypeID,
     DateReceived, InvoiceDate, Amount, Hours, Fraction1, Fraction2, Interest, InterestAmount,
     StartDate, EndDate, PeriodApplied, Offset, Reserve, Account, BenefitClosedID, PaidOut,
     ClosedAccount, ai, ClosedDate, StartAgain, PartialDividend, PartialFraction, SameDateCount)
Select top 1000 a.EmployeeID, b.*
from #TmpActiveEmployeesWSeverance a
cross apply dbo.fn_Severance_AccountItemsTable(a.EmployeeID, a.BenefitTypeID, null, null, null, null) b
order by a.EmployeeID, a.BenefitTypeID
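One thing that may be worth trying, purely as a sketch against the same objects: capture the 1,000 driving key pairs first and run the APPLY against that small set, so the TOP / ORDER BY cannot interact with the plan the optimizer picks for the insert.

Select top 1000 a.EmployeeID, a.BenefitTypeID
into #Drivers
from #TmpActiveEmployeesWSeverance a
order by a.EmployeeID, a.BenefitTypeID;

Select d.EmployeeID, b.*
into #Tmp
from #Drivers d
cross apply dbo.fn_Severance_AccountItemsTable(d.EmployeeID, d.BenefitTypeID, null, null, null, null) b
order by d.EmployeeID, d.BenefitTypeID;

Comparing the actual plans of the plain SELECT and the slow INSERT should also show whether the multi-statement function's one-row estimate is pushing the insert into a bad join choice.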
SELECT res.res_id,
       sub.value('(./@id)[1]', 'char(2)') AS id
FROM vwResult res
CROSS APPLY report.nodes('/clue_personal_auto/report/search_dataset/subjects/*') report(sub)
It works just fine in a SQL query window. After placing it into a view in SSDT (November 2012 update), I get a compilation error.
I am doing a report that uses paging, and in order to optimize it I used ROW_NUMBER() so it returns x rows per page. To compute the number of pages needed, I have to count the total number of rows, which gets very slow because I'm using a CROSS APPLY with a table-valued function. Is there any way to get the number of rows processed by ROW_NUMBER() so I don't need to do a separate count?
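A windowed aggregate can return the total in the same pass as ROW_NUMBER(), so a second COUNT query over the CROSS APPLY is not needed. A minimal sketch with placeholder names:

SELECT ROW_NUMBER() OVER (ORDER BY t.SortColumn) AS RowNum,
       COUNT(*) OVER ()                          AS TotalRows,
       t.*
FROM dbo.MyReportSource AS t;

The COUNT(*) OVER () still has to see every row, but it reuses the work of the main query instead of running the expensive table-valued function a second time.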
Hey guys. This is one of the queries pasted from BOL, and I'm having problems executing it. The problem lies in the CROSS APPLY part. When I copy this query and run it in SSMS, it gives me an error saying 'Incorrect syntax near .'; it doesn't like the qs.sql_handle part. If I remove that and pass in the actual handle for some query, it works. Can someone please tell me what I'm doing wrong? Also, I have SP1 installed on my SQL Server 2005 Enterprise, in case that matters. Below is the query that is giving me problems. Thank you.
SELECT TOP 5 total_worker_time/execution_count AS [Avg CPU Time],
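The statement is cut off above; it is presumably the "top five queries by average CPU time" example, which in full looks roughly like this (reconstructed from the documentation):

SELECT TOP 5 total_worker_time/execution_count AS [Avg CPU Time],
    SUBSTRING(st.text, (qs.statement_start_offset/2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset)/2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY total_worker_time/execution_count DESC;

The usual cause of "Incorrect syntax near '.'" on the qs.sql_handle argument is that the current database is at compatibility level 80; correlated arguments in APPLY require level 90 or higher, so it is worth checking which database the SSMS window is pointed at.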
The result I want is the unique rows for the columns PupilPersonId, EducationTypeId and VehicleTypeId, together with their MAX(EducationDate).

SELECT er1.*
FROM EducationResult er1
INNER JOIN (
    SELECT er.PupilPersonId, er.EducationTypeId, er.VehicleTypeId,
           MAX(er.EducationDate) as EducationDate
[Code] ....
I'd like to know whether there is another approach with a CTE and/or CROSS APPLY that I can use instead.
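One alternative, sketched with the column names from the query above, is ROW_NUMBER() partitioned by the three key columns (ties on the max date are not handled here):

WITH ranked AS
(
    SELECT er.*,
           ROW_NUMBER() OVER (PARTITION BY er.PupilPersonId, er.EducationTypeId, er.VehicleTypeId
                              ORDER BY er.EducationDate DESC) AS rn
    FROM EducationResult AS er
)
SELECT *
FROM ranked
WHERE rn = 1;

A CROSS APPLY ... SELECT TOP (1) variant works too, but it usually only pays off when there is a supporting index and a separate set of distinct key combinations to drive it.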
I have a set of data in which a product number goes through 6 stages, and each stage has a date. Since the stages are in columns, I have created an unpivot query to transpose the columns into rows.
The unpivot query is working fine; however, I am getting duplicate values in the result. For each product number there should be only 6 rows, but I am getting 24 rows per product number due to duplication.
I have attached the code and the source data for reference.
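The attachment is not reproduced here, but duplication like 6 x 4 = 24 rows per product usually means the unpivoted set is being joined back to a source that has four rows per product. A VALUES-based CROSS APPLY unpivot produces exactly one output row per stage column; a minimal sketch with assumed table and column names:

SELECT s.ProductNumber, v.StageName, v.StageDate
FROM dbo.ProductStages AS s
CROSS APPLY (VALUES
    ('Stage1', s.Stage1Date),
    ('Stage2', s.Stage2Date),
    ('Stage3', s.Stage3Date),
    ('Stage4', s.Stage4Date),
    ('Stage5', s.Stage5Date),
    ('Stage6', s.Stage6Date)
) AS v(StageName, StageDate);

If the real query still multiplies, the extra rows are coming from a join outside the APPLY rather than from the unpivot itself.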
ChangeID  ChangeDate  EquipmentID  ModuleID  EquipStatus
1         12/9/08     230          1789      Normal
2         13/9/08     450          1245      Normal
3         17/9/08     230          1789      Open
4         21/9/08     230          1899      Open
5         21/9/08     450          1674      Normal
6         22/9/08     450          2364      Normal
Given a date, what module was each equipment item in on that date? How do I get the date of the nearest previous event from a list like this? I got a query from one of the posts in this forum that uses CROSS APPLY to find the nearest record in the above table based on the date, i.e.
SELECT outerT.*
FROM your_table AS outerT
CROSS APPLY (
    SELECT TOP 1 equipment_id, change_date
    FROM your_table AS innerT
    WHERE innerT.change_date <= @point_in_time
      AND innerT.equipment_id = outerT.equipment_id
    ORDER BY change_date DESC
) AS applicable_records
WHERE applicable_records.change_date = outerT.change_date
The problem is that I need to get this result without using CROSS APPLY, as I need to use the same query with LINQ, which doesn't support CROSS APPLY.
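A rewrite without APPLY, using a correlated MAX subquery that LINQ can normally express (same assumed table and columns as the query above), returns the same rows:

SELECT outerT.*
FROM your_table AS outerT
WHERE outerT.change_date =
      (SELECT MAX(innerT.change_date)
       FROM your_table AS innerT
       WHERE innerT.equipment_id = outerT.equipment_id
         AND innerT.change_date <= @point_in_time);

Rows whose equipment has no change on or before @point_in_time simply drop out, just as they do with the CROSS APPLY version.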
See the sample data below. I'm trying to count the number of occurrences of the strings stored in table @word without a WHILE loop.
DECLARE @t TABLE (Id INT IDENTITY(1,1), String VARCHAR(MAX))
INSERT INTO @t
SELECT 'There are a lot of Multidimensional Expressions (MDX) resources available' AS String
UNION ALL
SELECT 'but most teaching aids out there are geared towards professionals with cube development experience'
UNION ALL
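The rest of the sample rows and the definition of @word are cut off above, so the sketch below assumes @word has a single varchar column named Word. Counting can be done set-based by measuring how much shorter each string becomes when the word is removed:

SELECT w.Word,
       SUM((DATALENGTH(t.String) - DATALENGTH(REPLACE(t.String, w.Word, ''))) / DATALENGTH(w.Word)) AS Occurrences
FROM @word AS w
CROSS JOIN @t AS t
GROUP BY w.Word;

Note that REPLACE matches substrings (so 'are' also counts inside 'geared') and follows the column collation for case sensitivity; whole-word counting needs a string splitter instead.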
I am using CROSS APPLY instead of UNPIVOT to unpivot more than one column. I am wondering if I can dynamically replace the column names based on different tables. The example code that I have working is based on the "Allergy" table; I have thirty more specialty tables to go. I'll show the working code first, then an example of another table's columns to show the differences:
select [uplift specialty], [member po], [practice unit name], [final nomination status],
       [final uplift status], [final rank], [final uplift percentage],
       practiceID = row_number() over (partition by [practice unit name] order by Metricname),
       metricname, Metricvalue, metricpercentilerank
[code]....
Rheumatology table: the columns that vary start with "GDR" and [GDR Percentile Rank], so I'm just showing those:
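Whatever the exact column list turns out to be, the per-specialty names do not have to be hard-coded: they can be read from sys.columns and concatenated into the CROSS APPLY (VALUES ...) pairs with dynamic SQL. A hedged sketch; the table name, the LIKE rule for picking the varying columns, and the output columns are all assumptions to adapt:

DECLARE @table sysname = N'dbo.Rheumatology',
        @pairs nvarchar(max);

SELECT @pairs = STUFF((
    SELECT N', (' + QUOTENAME(c.name, '''') + N', ' + QUOTENAME(c.name) + N')'
    FROM sys.columns AS c
    WHERE c.[object_id] = OBJECT_ID(@table)
      AND c.name LIKE N'GDR%'
    ORDER BY c.column_id
    FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 2, N'');

DECLARE @sql nvarchar(max) = N'
SELECT t.[practice unit name], v.Metricname, v.Metricvalue
FROM ' + @table + N' AS t
CROSS APPLY (VALUES ' + @pairs + N') AS v(Metricname, Metricvalue);';

EXEC sys.sp_executesql @sql;

Run once per specialty table (or loop over a list of table names) and the thirty variants collapse into one statement.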
The following is SAMPLE data from an Excel spreadsheet. The SAMPLE data has many other date fields; here I have only used two date columns, 28 Dec 2006 and 29 Dec 2006. This data needs to be exported into the SQL Server 2005 table below, where I have placed the data into rows. How can this be done, please?
data:
Ref  Sector     Name       28 Dec 2006  29 Dec 2006
1    Sovereign  RUSSIA 05  null         173.21
2    Sovereign  RUSSIA 07  102.99       102.22
3    Sovereign  RUSSIA 10  114.33       104.63
4    Sovereign  RUSSIA 18  115.50       145.50
...
SQL Server table:
create table tblData
(
    DataID int,
    Ref int,
    Sector varchar(20),
    Name varchar(20),
    Date datetime,
    value decimal(6,2)
)
DataID  Ref  Sector     Name       Date         value
1       1    Sovereign  RUSSIA 05  28 Dec 2006  null
2       1    Sovereign  RUSSIA 05  29 Dec 2006  173.21
3       2    Sovereign  RUSSIA 07  28 Dec 2006  102.99
4       2    Sovereign  RUSSIA 07  29 Dec 2006  102.22
5       3    Sovereign  RUSSIA 10  28 Dec 2006  114.33
6       3    Sovereign  RUSSIA 10  29 Dec 2006  104.63
7       4    Sovereign  RUSSIA 18  28 Dec 2006  115.50
8       4    Sovereign  RUSSIA 18  29 Dec 2006  145.50
...
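Assuming the spreadsheet is first imported as-is into a staging table (dbo.StagingPrices here, with one column per date; the names are placeholders), a CROSS APPLY over a small UNION ALL turns each date column into a row; unlike UNPIVOT it also keeps the NULL prices, which the desired output above includes:

INSERT INTO tblData (DataID, Ref, Sector, Name, [Date], value)
SELECT ROW_NUMBER() OVER (ORDER BY s.Ref, v.[Date]) AS DataID,
       s.Ref, s.Sector, s.Name, v.[Date], v.value
FROM dbo.StagingPrices AS s
CROSS APPLY (SELECT CAST('20061228' AS datetime) AS [Date], s.[28 Dec 2006] AS value
             UNION ALL
             SELECT CAST('20061229' AS datetime),           s.[29 Dec 2006]) AS v;

Each additional date column in the spreadsheet becomes one more SELECT in the UNION ALL list.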
I have a heavy database, more than 100 GB for only six months of data. Every query on it takes a long time, and I don't have enough space to add more indexes, so I decided to try partitioning. I created a partition function on a date field so that the data records for each month are assigned to a separate file. Also, is partitioning only for future data entry?
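For reference, partitioning applies to the rows already in the table as well as to future inserts; once the table (or its clustered index) is rebuilt on a partition scheme, the existing rows are distributed to their partitions by the date column. A minimal sketch with illustrative boundaries and names, not the real objects:

CREATE PARTITION FUNCTION pfMonthly (datetime)
AS RANGE RIGHT FOR VALUES ('20140101', '20140201', '20140301');

CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly ALL TO ([PRIMARY]);   -- or map each month to its own filegroup

-- Rebuilding the clustered index on the scheme moves the existing rows:
-- CREATE CLUSTERED INDEX cix_BigTable ON dbo.BigTable (DateField) ON psMonthly (DateField);

Note that partitioning mainly helps maintenance and partition elimination on date-filtered queries; it is not a substitute for the missing indexes.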
I am trying to compare the data from two different servers. It is the same table. I did a SELECT * and put the data into two different text files. I then used the Windows NT findstr utility to compare them and find the differences, with the command findstr /v /i /x /g:file1 file2 > file3, where file1 is from the larger table and file2 is from the smaller one. I want to find the records in file2 which are different or do not exist in file1, but I received no data back. I have ASCII characters in the text files. My question is: does anyone know of a better way to do this, or how to make this command work?
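If both servers are reachable from one instance, EXCEPT does this comparison inside SQL Server and avoids the text-file round trip entirely. A sketch with placeholder linked-server and table names, assuming the table has no text/image columns (EXCEPT cannot compare those):

-- Rows present on Server2 but missing or different on Server1
SELECT * FROM [Server2].[MyDb].[dbo].[MyTable]
EXCEPT
SELECT * FROM [Server1].[MyDb].[dbo].[MyTable];

Running the statement again with the two sources swapped gives the differences in the other direction.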
Our backup system has worked ok for us to date. We can restore back to either full saves or up to a certain log (we take log backups on the hour). We've never had to, but wanted to test restoring to a point-in-time with the backup data.
What the system does is generates .mdf and .ldf files, which is essentially a full backup say in the middle of the night. It then creates .bak files for the log backups based on the backup set you want to restore.
I can detach the database and apply the .mdf and .ldf and re-attach the database, but to apply the .bak files I need to get the database into a (recovering) state. I can't seem to do that. Otherwise when I try to apply the .bak files the system says: The log or differential backup cannot be restored because no files are ready to rollforward.
How do I apply the .mdf/.ldf files and then apply the .bak files?
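The usual roll-forward sequence starts from a full database BACKUP (.bak) restored WITH NORECOVERY, not from the detached .mdf/.ldf pair; an attached database is already recovered, which is why the log restores refuse to apply. A sketch with placeholder file names:

RESTORE DATABASE MyDb
    FROM DISK = N'C:\Backups\MyDb_full.bak'
    WITH NORECOVERY;                          -- leaves the database in the Restoring state

RESTORE LOG MyDb FROM DISK = N'C:\Backups\MyDb_log_0900.bak' WITH NORECOVERY;

RESTORE LOG MyDb FROM DISK = N'C:\Backups\MyDb_log_1000.bak'
    WITH STOPAT = N'2014-01-15T09:30:00', RECOVERY;   -- stop at the point in time, then bring online

If the nightly job only produces .mdf/.ldf copies, adding a real full (or at least differential) backup to the schedule is what makes point-in-time restores possible.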
I have a table (named table1) with 20 million rows. It takes around 11 minutes to apply the primary key to this table. There are some tables with over 100 million rows, so based on that time, if my calculations are correct, it will take close to an hour to apply the primary key to a table with around 100 million rows.
My current solution is to create another table (named table2) with no indexes or primary keys, pump over only about 5 days' worth of data, and then apply the primary key. A script then gradually populates table2 with the rest of the data; by gradually I mean inserting something like 100k rows per hour. Keep in mind that table2 is heavily updated with new records.
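A sketch of the gradual backfill piece, assuming Id is the eventual primary key column (table and column names are placeholders); each pass moves a fixed batch and stops when nothing is left:

WHILE 1 = 1
BEGIN
    INSERT INTO dbo.table2 (Id, Payload)
    SELECT TOP (100000) s.Id, s.Payload
    FROM dbo.table1 AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.table2 AS t WHERE t.Id = s.Id)
    ORDER BY s.Id;

    IF @@ROWCOUNT = 0 BREAK;

    WAITFOR DELAY '01:00:00';   -- roughly 100k rows per hour, as described
END

Keeping the batches ordered by the column that will become the primary key also keeps the incremental inserts into the already-keyed table from fragmenting badly.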
I have 2 dbs (SQL 2012) - one contains a trigger that is enabled/disabled by a procedure in the other database. This all works fine.
If I create a Database Project solution in Visual Studio 2012 SSDT (or 2013) for both databases, the stored procedure generates a SQL71502 stating that my trigger name can't be resolved.
To recreate the issue:
CREATE DATABASE DbWithTrigger
GO
USE DbWithTrigger
GO
CREATE TABLE dbo.TblWithTrigger(
    Id int NULL,
    SomeValue varchar(30) NULL
)
1. Create a new solution with a project named DbWithTrigger
2. In project settings set the Target platform to SQL 2012
3. Import the DbWithTrigger db into this project
4. Create a new project named DbCallsTrigger
5. In project settings set the Target platform to SQL 2012
6. Import the DbCallsTrigger db into this project
7. Add a Database Reference in DbCallsTrigger for DbWithTrigger
When you build the solution both dbs build successfully; however, there are two warnings. One is easily resolved by replacing DbWithTrigger in the body of the procedure with [$(DbWithTrigger)] (the database variable name for the reference), but I can't find out how to get rid of the other. Is it a bug?
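One workaround that usually silences build-time resolution warnings (a hypothetical sketch; the procedure, trigger and table names are made up around the repro steps above) is to issue the ENABLE/DISABLE TRIGGER through dynamic SQL executed in the referenced database, so the project has nothing to resolve at compile time:

CREATE PROCEDURE dbo.SetRemoteTriggerState
    @Enable bit
AS
BEGIN
    DECLARE @sql nvarchar(200) =
        CASE WHEN @Enable = 1 THEN N'ENABLE' ELSE N'DISABLE' END
        + N' TRIGGER dbo.trgTblWithTrigger ON dbo.TblWithTrigger;';

    -- Calling sp_executesql with a three-part name runs the statement in the other database's context
    EXEC [$(DbWithTrigger)].sys.sp_executesql @sql;
END

The alternative is to leave the procedure as-is and suppress SQL71502 for that file (SSDT exposes a Suppress TSql Warnings property on the file), since the warning is about static resolution rather than run-time behaviour.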
I have an MVC ASP.NET application that stores many records in a table on SQL Server, in its own system. We used the system for 2 months and it worked fine for accessing and changing data.
Now that other users are logging in, there is cross-coupling going on: one user gets the data from another user's SQL search.
In the MVC app the ID record was read from the db with the async GET method; I set that to synchronous, with no effect. Each user makes their own login id, but that doesn't matter either.