SQL Server 2014 :: Multiple Columns To Appear On One Row
Jul 30, 2015
We have a query which pulls revenue by country and client for the last three years. Right now we have each year reported in separate columns, but we would like the revenues for each year for each client to appear on one row. Below is the query we currently have set up.
SELECT
p.country_code,
p.local_client_code,
wwc.local_client_name,
case when pr.fiscal_year = 2015
     then sum(pr.local_consulting_fees * er.rate) + sum(pr.local_product_fees * er.rate)
        + sum(pr.local_admin_fees * er.rate) + sum(pr.local_misc_fees * er.rate)
     else 0
end as '2015 Revenue',
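If the goal is one row per client with each year in its own column, one option is conditional aggregation: put the year test inside SUM and leave fiscal_year out of the GROUP BY. A rough sketch along those lines (the table names and join conditions here are assumptions, since the full FROM clause isn't shown):

-- Sketch only: project, project_revenue, worldwide_client and exchange_rate are assumed names.
SELECT
    p.country_code,
    p.local_client_code,
    wwc.local_client_name,
    SUM(CASE WHEN pr.fiscal_year = 2013 THEN (pr.local_consulting_fees + pr.local_product_fees + pr.local_admin_fees + pr.local_misc_fees) * er.rate ELSE 0 END) AS [2013 Revenue],
    SUM(CASE WHEN pr.fiscal_year = 2014 THEN (pr.local_consulting_fees + pr.local_product_fees + pr.local_admin_fees + pr.local_misc_fees) * er.rate ELSE 0 END) AS [2014 Revenue],
    SUM(CASE WHEN pr.fiscal_year = 2015 THEN (pr.local_consulting_fees + pr.local_product_fees + pr.local_admin_fees + pr.local_misc_fees) * er.rate ELSE 0 END) AS [2015 Revenue]
FROM project AS p
JOIN project_revenue AS pr ON pr.project_id = p.project_id          -- assumed join
JOIN worldwide_client AS wwc ON wwc.local_client_code = p.local_client_code
JOIN exchange_rate AS er ON er.country_code = p.country_code        -- assumed join
GROUP BY p.country_code, p.local_client_code, wwc.local_client_name;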
I have a requirement wherein I have to concatenate fields based on the sequence given in another table, along with their respective lengths. For example:
Input 1:
Table A (below are the fields and their respective values; not all fields will have values):
KSCHL = ZIC0 (KEY), KOTABNR = 521 (KEY), MATNR, KUNNR = 1234567890, LIFNR, VKORG = a234, PRCTR, KUNRE = 4355325363, LIFRE = 88390234, PRODH
Table B: It contains the same fields as Table A and holds the sequence number in which the concatenation should happen. The length field (LEN) holds the corresponding field lengths (pipe delimited) to be considered in the concatenation.
Note: If the field length given in Table B doesn't match the actual size of the field, then the field should be padded with two leading spaces during concatenation. E.g. in the above example, say LIFNR value = 88390234 (len = 8); then after the concatenation the value should look like the line below:
12345678904355325363a234 88390234
Note: The fields are not constant. I have around 40 fields like this, and any combination of fields is possible, e.g.:
I am not sure which field has value 1, 2, etc., or how many fields form the combination. It can sometimes be 3 of the 40 fields or 10 of the 40. I have to get those values dynamically and concatenate them.
I can have any number of fields in the concatenation; the above example is just for four. It should be dynamic enough to handle any number of fields.
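For what it's worth, here is a rough sketch of one way to approach this on SQL Server 2014, assuming hypothetical tables dbo.TableA (one row of field values, stored as varchar) and dbo.TableB (FieldName, Seq, Len). The fields are unpivoted with CROSS APPLY (VALUES ...), padded with two leading spaces when the actual value is shorter than the length in Table B, and concatenated in sequence order; a fully dynamic field list would need the VALUES list built with dynamic SQL:

-- Sketch only; dbo.TableA and dbo.TableB are assumed structures.
;WITH Unpivoted AS (
    SELECT v.FieldName, v.FieldValue
    FROM dbo.TableA AS a
    CROSS APPLY (VALUES
        ('KUNNR', a.KUNNR), ('KUNRE', a.KUNRE),
        ('VKORG', a.VKORG), ('LIFRE', a.LIFRE)
        -- ... add the remaining ~40 fields here, or generate this list with dynamic SQL
    ) AS v (FieldName, FieldValue)
    WHERE v.FieldValue IS NOT NULL
)
SELECT (SELECT CASE WHEN LEN(u.FieldValue) < b.Len
                    THEN '  ' + u.FieldValue        -- pad with two leading spaces per the requirement
                    ELSE u.FieldValue END
        FROM Unpivoted AS u
        JOIN dbo.TableB AS b ON b.FieldName = u.FieldName
        ORDER BY b.Seq
        FOR XML PATH(''), TYPE).value('.', 'varchar(max)') AS ConcatenatedValue;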
Is it possible to assign multiple columns from a SQL query to one variable? In the query below I have different variables (email, fname, month_last_taken) from the same query being assigned to different columns. Can I pass all columns to one variable only and then extract each column out of that variable later? This way I would only need to write the query once in the complete block.
DECLARE @email varchar(500), @intFlag INT, @INTFLAGMAX int, @TABLE_NAME VARCHAR(100)
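A scalar variable can only hold one value per column, but a table variable can hold the whole row, so the query only has to be written once; individual columns can then be pulled out of it wherever they are needed. A minimal sketch (table and column names are illustrative):

-- Sketch only; dbo.SourceTable is an assumed name.
DECLARE @row TABLE (email varchar(500), fname varchar(100), month_last_taken varchar(20));

-- Run the query once and keep every column.
INSERT @row (email, fname, month_last_taken)
SELECT TOP (1) email, fname, month_last_taken
FROM dbo.SourceTable;

-- Pull individual columns out later, where needed.
DECLARE @email varchar(500), @fname varchar(100);
SELECT @email = email, @fname = fname FROM @row;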
I have a table #vert where I have value column. This data needs to be updated into two channel columns in #hori table based on channel number in #vert table.
CREATE TABLE #Vert (FILTER VARCHAR(3), CHANNEL TINYINT, VALUE TINYINT)
INSERT #Vert VALUES ('ABC', 1, 22), ('ABC', 2, 32), ('BBC', 1, 12), ('BBC', 2, 23), ('CAB', 1, 33), ('CAB', 2, 44) -- combination of FILTER and CHANNEL is unique

CREATE TABLE #Hori (FILTER VARCHAR(3), CHANNEL1 TINYINT, CHANNEL2 TINYINT)
INSERT #Hori VALUES ('ABC', NULL, NULL), ('BBC', NULL, NULL), ('CAB', NULL, NULL) -- FILTER is unique in #Hori
One way to achieve this is to write two update statements. After update, the output you see is my desired output
UPDATE H SET CHANNEL1 = VALUE FROM #Hori H JOIN #Vert V ON V.FILTER = H.FILTER WHERE V.CHANNEL = 1 -- updates only CHANNEL1
UPDATE H SET CHANNEL2 = VALUE FROM #Hori H JOIN #Vert V ON V.FILTER = H.FILTER WHERE V.CHANNEL = 2 -- updates only CHANNEL2
SELECT * FROM #Hori -- this is the desired output
The number of channels in the #Vert table keeps growing (1, 2, 3, 4, ...), and so do CHANNEL3, CHANNEL4, ... in the #Hori table, so I cannot keep writing more update statements. One other way is to pivot the #Vert table and do a single update into the #Hori table.
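One way to do it in a single statement is conditional aggregation in a derived table joined to #Hori; the CASE list (and the CHANNELn target columns) can be generated with dynamic SQL as the number of channels grows. A sketch for the two sample channels:

-- Single-statement update using conditional aggregation instead of PIVOT.
UPDATE H
SET CHANNEL1 = P.CH1,
    CHANNEL2 = P.CH2
FROM #Hori AS H
JOIN (
    SELECT FILTER,
           MAX(CASE WHEN CHANNEL = 1 THEN VALUE END) AS CH1,
           MAX(CASE WHEN CHANNEL = 2 THEN VALUE END) AS CH2
    FROM #Vert
    GROUP BY FILTER
) AS P ON P.FILTER = H.FILTER;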
Here is my requirement; I'm not sure if this is possible. Create a table called Master with col1, col2, col3, col4, col5, where col1 and col2 are updatable - this can be done easily.
Col3 and col4 are columns in another table, but these should be read only. Is this possible? It is possible with a view, but that is not friendly with SharePoint CRUD. Col5 is a computed column of col2 and col5? If the above step can be done, then I guess this can be done too.
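A minimal sketch of the view-based approach described above (SharePoint CRUD concerns aside), assuming hypothetical tables dbo.MasterLocal and dbo.Other: col1/col2 live in the updatable base table, col3/col4 come read-only from the other table, and col5 is derived in the view:

-- Sketch only; table names, keys and the Col5 formula are assumptions.
CREATE TABLE dbo.MasterLocal
(
    ID   int IDENTITY(1,1) PRIMARY KEY,
    Col1 varchar(50) NULL,
    Col2 decimal(10,2) NULL
);

CREATE TABLE dbo.Other
(
    ID   int PRIMARY KEY,
    Col3 varchar(50) NULL,
    Col4 decimal(10,2) NULL
);
GO

-- Only Col1/Col2 are updatable through the view, because they come from a single base table.
CREATE VIEW dbo.MasterView
AS
SELECT m.ID, m.Col1, m.Col2, o.Col3, o.Col4,
       m.Col2 + o.Col4 AS Col5
FROM dbo.MasterLocal AS m
JOIN dbo.Other AS o ON o.ID = m.ID;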
Basically I need to get the SUM of the sum of three columns and all three columns have nulls. To make it more complicated, the result set must return the top 20 in order desc as well.
I keep facing different issues whether I try to use COALESCE, ISNULL, SUM, or COUNT. My query never returns anything but 0 or NULL, regardless of whether I build a CTE or just use a plain query.
So I'm using Col A to get the TOP 20 in order (which is fine), but I'm also trying to add together the sums of Col A + Col B + Col C for each of the twenty rows.
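A minimal sketch of handling the NULLs with ISNULL before adding the three columns, assuming a hypothetical table dbo.Scores and using Col A for the TOP 20 ordering as described:

-- Sketch only; dbo.Scores and the column names are assumptions.
SELECT TOP (20)
       ISNULL(ColA, 0) + ISNULL(ColB, 0) + ISNULL(ColC, 0) AS TotalScore,  -- NULLs become 0 before adding
       ColA, ColB, ColC
FROM dbo.Scores
ORDER BY ColA DESC;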
I concatenate multiple rows from one table into multiple columns like this:
--Create Table
CREATE TABLE [Person].[Person_1](
    [BusinessEntityID] [int] NOT NULL,
    [PersonType] [nchar](2) NOT NULL,
    [FirstName] [varchar](100) NOT NULL,
    CONSTRAINT [PK_Person_BusinessEntityID_1] PRIMARY KEY CLUSTERED
[Code] ....
This works very well, but I want to concatenate more rows with different [PersonType] values into different columns, and I don't like the overhead of using the same table in every subquery ([Person_1]). Is there a more elegant way to do this, without using a temp table or something else?
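One option is to do the grouped concatenation once per [PersonType] and then spread the results across columns with conditional aggregation, so the per-column subqueries go away. A sketch (the 'EM'/'SC' PersonType values are just examples):

-- Sketch only; adjust the PersonType codes and output column names as needed.
;WITH Concatenated AS (
    SELECT p1.PersonType,
           STUFF((SELECT ', ' + p2.FirstName
                  FROM [Person].[Person_1] AS p2
                  WHERE p2.PersonType = p1.PersonType
                  ORDER BY p2.FirstName
                  FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 2, '') AS Names
    FROM [Person].[Person_1] AS p1
    GROUP BY p1.PersonType
)
SELECT MAX(CASE WHEN PersonType = 'EM' THEN Names END) AS Employees,
       MAX(CASE WHEN PersonType = 'SC' THEN Names END) AS StoreContacts
FROM Concatenated;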
What I want to achieve is to get the values from period1 through period04 and use the latest value to code the value of accountperiod: if the value comes from period1, code it as 01; period2 as 02; period03 as 03; and period04 as 04. So the output should be like this:
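Assuming period1 through period4 are columns on one row (the table and exact column names here are guesses), COALESCE can pick the latest non-NULL value and a CASE evaluated in the same order can produce the accountperiod code:

-- Sketch only; dbo.Periods and the period column names are assumptions.
SELECT COALESCE(period4, period3, period2, period1) AS latest_value,
       CASE WHEN period4 IS NOT NULL THEN '04'
            WHEN period3 IS NOT NULL THEN '03'
            WHEN period2 IS NOT NULL THEN '02'
            WHEN period1 IS NOT NULL THEN '01'
       END AS accountperiod
FROM dbo.Periods;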
Within the LinkingID, there are duplicates in ID1 and ID2, just in opposite columns. I have been trying to figure out a set-based way to remove these. It doesn't matter which duplicate is removed; essentially these are just endpoints and I don't care which side they are on. The solution must recognize the duplicates and not just remove every second row.
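One set-based approach is to normalize each pair so the smaller ID always comes first, number the rows within each normalized pair, and delete everything after the first. A sketch, assuming a hypothetical table dbo.Links:

-- Sketch only; dbo.Links is an assumed table with LinkingID, ID1, ID2.
;WITH Ordered AS (
    SELECT *,
           ROW_NUMBER() OVER (
               PARTITION BY LinkingID,
                            CASE WHEN ID1 < ID2 THEN ID1 ELSE ID2 END,   -- normalized "low" endpoint
                            CASE WHEN ID1 < ID2 THEN ID2 ELSE ID1 END    -- normalized "high" endpoint
               ORDER BY (SELECT NULL)) AS rn
    FROM dbo.Links
)
DELETE FROM Ordered WHERE rn > 1;   -- keep one row per normalized pair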
I have created row-level security on two views and added these two views to a particular role. Today I got a requirement that middle-level managers shouldn't see all the columns. So I created another role for middle-level managers, granted SELECT on those two views as securables with only selected columns, and mapped all the middle-level managers to this role. I thought my job was done. But these managers use the views in SSAS (tabular model) and Excel, and in those applications they are not able to load the data.
Later I came to know that we can't use -- select * from ViewA (in ViewA I have restricted a few columns at the role level). The workaround is creating another view and assigning it to the role. But how can we achieve column-level security so this works in SSAS/SSRS/Excel?
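A sketch of the workaround mentioned above, with hypothetical names: expose only the permitted columns in a dedicated view and grant SELECT on that view to the managers' role, so SSAS and Excel can safely issue SELECT * against it:

-- Sketch only; dbo.ViewA_MiddleManagers, the column list and MiddleManagerRole are assumptions.
CREATE VIEW dbo.ViewA_MiddleManagers
AS
SELECT Col1, Col2, Col3   -- only the columns middle-level managers may see
FROM dbo.ViewA;
GO
GRANT SELECT ON dbo.ViewA_MiddleManagers TO MiddleManagerRole;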
The first SELECT runs fine, but because extra values keep being added to the table, the list of manually defined columns must be extended by hand each time new values occur.
Is it possible to make the PIVOT's IN clause dynamic as attempted in the second script (it is based on the same table #source)? When running it, it raises the following error:
Msg 156, Level 15, State 1, Line 315
Incorrect syntax near the keyword 'select'.
Msg 102, Level 15, State 1, Line 315
Incorrect syntax near ')'.
Adding or moving ')' or '(' is not working.
select * into #temp
from #source
pivot ( avg(value) for drive in ([C], [D], [E], [F], [G], [H], [T], [U], [V]) ) as value

select * from #temp order by .........
versus
select * into #temp
from #source
pivot ( avg(value) for drive in (select distinct(column) from #source) ) as value
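PIVOT's IN list cannot contain a subquery, so the usual pattern is to build the column list first and run the whole statement with dynamic SQL. A sketch based on the same #source table:

-- Build the IN list from the distinct drive values, then execute the PIVOT dynamically.
DECLARE @cols nvarchar(max), @sql nvarchar(max);

SELECT @cols = STUFF((SELECT ',' + QUOTENAME(drive)
                      FROM #source
                      GROUP BY drive
                      ORDER BY drive
                      FOR XML PATH('')), 1, 1, '');

SET @sql = N'SELECT * INTO #temp FROM #source
             PIVOT (AVG(value) FOR drive IN (' + @cols + N')) AS p;
             SELECT * FROM #temp;';   -- #temp only lives inside this dynamic batch

EXEC sp_executesql @sql;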
I'm trying to capture a Column Statistics Profile as if I were using the SSIS Data Profiling task. I do not have that option and would like to see how I could go about capturing the min, max, and avg of all numeric columns within a database.
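One low-tech way to approximate that part of the profiling task is to generate one aggregate query per numeric column from the metadata views and then run the generated statements. A sketch:

-- Generates a MIN/MAX/AVG statement for every numeric column in every base table.
SELECT 'SELECT ''' + c.TABLE_SCHEMA + '.' + c.TABLE_NAME + ''' AS TableName, '''
       + c.COLUMN_NAME + ''' AS ColumnName, '
       + 'MIN(' + QUOTENAME(c.COLUMN_NAME) + ') AS MinValue, '
       + 'MAX(' + QUOTENAME(c.COLUMN_NAME) + ') AS MaxValue, '
       + 'AVG(CAST(' + QUOTENAME(c.COLUMN_NAME) + ' AS float)) AS AvgValue '
       + 'FROM ' + QUOTENAME(c.TABLE_SCHEMA) + '.' + QUOTENAME(c.TABLE_NAME) + ';' AS ProfileQuery
FROM INFORMATION_SCHEMA.COLUMNS AS c
JOIN INFORMATION_SCHEMA.TABLES AS t
  ON t.TABLE_SCHEMA = c.TABLE_SCHEMA AND t.TABLE_NAME = c.TABLE_NAME
WHERE t.TABLE_TYPE = 'BASE TABLE'
  AND c.DATA_TYPE IN ('int', 'bigint', 'smallint', 'tinyint',
                      'decimal', 'numeric', 'float', 'real', 'money', 'smallmoney');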
As you can see below, there are two customer names on one loan; most of them share the same last name and address. I want to separate this into the fields LoanID, Customer 1 FirstName, Customer 1 LastName, Customer 2 FirstName, Customer 2 LastName, Address, Zip.
LEFT JOIN Status AS S ON S.LoanID = L.LoanID
LEFT JOIN Borrower B ON B.LoanID = L.LoanID
LEFT JOIN MailingAddress MA ON MA.LoanID = L.LoanID
WHERE S.PrimStat = '1' AND B.Deceased = '0'
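A sketch of collapsing the two borrowers onto one row: number the borrowers per loan with ROW_NUMBER and spread them across columns with conditional aggregation. The Loan table name and the Borrower/MailingAddress column names are assumptions:

-- Sketch only; Loan, FirstName, LastName, Address and Zip are assumed names.
;WITH NumberedBorrowers AS (
    SELECT B.LoanID, B.FirstName, B.LastName,
           ROW_NUMBER() OVER (PARTITION BY B.LoanID ORDER BY B.FirstName) AS rn
    FROM Borrower AS B
    WHERE B.Deceased = '0'
)
SELECT L.LoanID,
       MAX(CASE WHEN nb.rn = 1 THEN nb.FirstName END) AS Customer1FirstName,
       MAX(CASE WHEN nb.rn = 1 THEN nb.LastName END)  AS Customer1LastName,
       MAX(CASE WHEN nb.rn = 2 THEN nb.FirstName END) AS Customer2FirstName,
       MAX(CASE WHEN nb.rn = 2 THEN nb.LastName END)  AS Customer2LastName,
       MA.Address, MA.Zip
FROM Loan AS L
JOIN NumberedBorrowers AS nb ON nb.LoanID = L.LoanID
LEFT JOIN MailingAddress AS MA ON MA.LoanID = L.LoanID
GROUP BY L.LoanID, MA.Address, MA.Zip;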
I have a string that contains a series of parameters with separators. I need to split the parameters and their values into rows and columns, e.g. string = "Param1=3;param2=4,param4=testval;param6=11;..." The parameters can be anything and their number is not fixed. Currently I am using the function below and getting each parameter individually in the SELECT statement, as shown:
select
    [dbo].[rvlf_fn_GetParamValueWithIndex]('Param1=3;param2=4,param4=testval;param6=11;', 'param1=', ';') as param1,
    [dbo].[rvlf_fn_GetParamValueWithIndex]('Param1=3;param2=4,param4=testval;param6=11;', 'param2=', ';') as param2

CREATE FUNCTION [dbo].[rvlf_fn_GetParamValueWithIndex]
(
    @CustomProp varchar(max),
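Rather than calling the function once per known parameter, the whole string can be split into name/value rows in one pass. A sketch using an XML-based split (SQL 2014 has no STRING_SPLIT); commas and semicolons are both treated as separators to match the sample string:

-- Sketch only: splits 'name=value' pairs into rows, then into two columns.
DECLARE @params varchar(max) = 'Param1 =3;param2=4,param4=testval;param6=11;';
DECLARE @xml xml = CAST('<p><v>'
                        + REPLACE(REPLACE(@params, ',', ';'), ';', '</v><v>')
                        + '</v></p>' AS xml);

SELECT LTRIM(RTRIM(LEFT(pair, CHARINDEX('=', pair) - 1)))  AS ParamName,
       LTRIM(RTRIM(STUFF(pair, 1, CHARINDEX('=', pair), ''))) AS ParamValue
FROM (SELECT x.v.value('.', 'varchar(500)') AS pair
      FROM @xml.nodes('/p/v') AS x(v)) AS s
WHERE pair LIKE '%=%';   -- ignore empty fragments from trailing separators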
I have an Excel sheet with some data and blank columns. I have an SSIS package that imports the data from Excel into a SQL table. Blank Excel columns are imported as NULL; instead I want to show them as '0'. If data comes in, it should update the data.
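In the SSIS package this is typically handled with a Derived Column that replaces NULL with 0, but if the Excel data is landed in a staging table first, the same idea in T-SQL is just ISNULL during the load (table and column names below are assumptions):

-- Sketch only; dbo.ExcelStaging and dbo.TargetTable are assumed names.
INSERT INTO dbo.TargetTable (KeyCol, Amount1, Amount2)
SELECT KeyCol, ISNULL(Amount1, 0), ISNULL(Amount2, 0)
FROM dbo.ExcelStaging;

For the "update if data comes in" part, a MERGE from the staging table into the target would cover both inserts and updates.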
IF OBJECT_ID('GoldenSecurity') IS NOT NULL DROP TABLE dbo.GoldenSecurity;
IF OBJECT_ID('GoldenSecurityRowVersion') IS NOT NULL DROP TABLE dbo.GoldenSecurityRowVersion;
We are on SQL 2014. We have a bunch of views in a database, and we are trying to find the views which have more than 16 columns, the maximum for a unique index/constraint. This is needed so we can convert them to indexed views.
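A quick inventory query against the catalog views lists every view with more than 16 columns:

-- Counts the columns of every view and keeps the ones over 16.
SELECT s.name AS SchemaName, v.name AS ViewName, COUNT(*) AS ColumnCount
FROM sys.views AS v
JOIN sys.schemas AS s ON s.schema_id = v.schema_id
JOIN sys.columns AS c ON c.object_id = v.object_id
GROUP BY s.name, v.name
HAVING COUNT(*) > 16
ORDER BY ColumnCount DESC;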
I have a script that needs to be run for 50 different @ClientIDs. I don't want to run this script individually for each ClientID. Would SET @ClientID IN (111, 222, 333) work? I've been told that it wouldn't. A short version of the script is:
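SET @ClientID IN (...) isn't valid T-SQL; a variable holds a single value. One common pattern is to put the 50 IDs in a table variable and loop over them, running the existing script body once per ID. A sketch:

-- Sketch only; drop the existing script body into the loop.
DECLARE @ClientIDs TABLE (ClientID int);
INSERT @ClientIDs (ClientID) VALUES (111), (222), (333);   -- list all 50 IDs here

DECLARE @ClientID int;
DECLARE client_cursor CURSOR LOCAL FAST_FORWARD FOR SELECT ClientID FROM @ClientIDs;
OPEN client_cursor;
FETCH NEXT FROM client_cursor INTO @ClientID;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- existing script body goes here, using @ClientID
    FETCH NEXT FROM client_cursor INTO @ClientID;
END
CLOSE client_cursor;
DEALLOCATE client_cursor;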
We are consolidating some old SQL Server environments from 'OLD' to 'NEW', and one of our vendors is protesting about the collation we use on our 'NEW' SQL Server.
Our old server (SQL 2005) contains databases with collation SQL_Latin1_General_CP1_CI_AS
Our new server (2014) has the standard collation Latin1_General_CI_AS
Both collations are CI (case-insensitive) and AS (accent-sensitive).
From experience I know different databases can reside next to each other on the same instance.
The only problem could be ('could be!!') the use of tempdb with a high volume of transactions to be executed in tempdb while choosing Snapshot Isolation Level....
The application the databases belong to is very static, hardly updated, and queried only several times per hour (so no tempdb issue, I guess).
Are there any issues with databases using different collations running on the same instance?
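For reference, a quick way to see which collations are actually in play on an instance (tempdb inherits the server collation):

-- Lists the collation of every database, including tempdb.
SELECT name, collation_name
FROM sys.databases
ORDER BY name;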
We have a table with 500+ columns. The data is non-normalized, i.e. there are four groups of fields for "people", followed by data that applies to all people in the row (a "household"). For ad-hoc queries, and because I wanted to index columns within each person (person 1's age, person 2's age, etc.), I used UNION:
SELECT P1FirstName AS FirstName, P1LastName AS LastName, P1Birthday AS Birthday,
       HouseholdIncome, HouseholdNumberOfChildren, <other "household" columns>
UNION
SELECT P2FirstName AS FirstName, P2LastName AS LastName, P2Birthday AS Birthday,
       HouseholdIncome, HouseholdNumberOfChildren, <other "household" columns>
I could get at least the P1... P2... P3... columns with PIVOT, but then I believe I'd have to JOIN back to the row anyway for the "household" columns. Performance of the UNION is good, but another person here chose to use PIVOT instead. I can't find any articles on PIVOT vs. UNION for "de-flattening".
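A third option besides UNION and PIVOT is CROSS APPLY (VALUES ...), which unpivots the person groups in a single pass over the wide row, so the household columns come along without a join back. A sketch with assumed column names:

-- Sketch only; dbo.Household and the P1.../P2... column names are assumptions.
SELECT ca.FirstName, ca.LastName, ca.Birthday,
       h.HouseholdIncome, h.HouseholdNumberOfChildren
FROM dbo.Household AS h
CROSS APPLY (VALUES
    (h.P1FirstName, h.P1LastName, h.P1Birthday),
    (h.P2FirstName, h.P2LastName, h.P2Birthday),
    (h.P3FirstName, h.P3LastName, h.P3Birthday),
    (h.P4FirstName, h.P4LastName, h.P4Birthday)
) AS ca (FirstName, LastName, Birthday);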
I'm attempting to build a report where you can place a specific code in the parameter field and it will return all row values based on that particular code. I have a similar report that works great, but there the specific code is in just one column; in the one I'm trying to create, that code can appear in up to 20 different spots. I have the report built, but the issue I'm facing is linking the parameter. Is there a way to link one parameter to multiple column options?
Here's an example:
Docflo   Distribution Group   Queue       Status   Pend1   Pend2   Pend3   Pend4   Pend5
ABC      ABC1                 Catch All   NEW      123     126     125     621     129
ABC      ABC1                 Various     PENDED   621     123     872     542     630
Right now, if I link the parameter to the Pend1 field, I get every line I want that has Pend "123", but it does not include any of the lines where Pend "123" is in Pend2, Pend3, Pend4, and so on.
How would I link the parameter to more than 1 column so it would return all rows with a specific code no matter which Pend column it was in?
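In the dataset query, IN accepts a list of columns, so a single report parameter can be compared against all of the Pend columns at once (it is equivalent to a chain of ORs). A sketch with an assumed table name:

-- Sketch only; dbo.DocfloQueue is an assumed name and @PendCode is the report parameter.
SELECT *
FROM dbo.DocfloQueue
WHERE @PendCode IN (Pend1, Pend2, Pend3, Pend4, Pend5);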
I am playing around in a test environment with SQL Server 2014. I have a question about the default location of the report server databases when you have multiple report server instances installed on one server.
I did a very simple install of SQL Server 2014 with the database and Reporting Services in Native Mode (install only) features selected. Accepting the default locations, I ended up with the following locations as you would expect:
Running the Reporting Services Configuration Manager, I created the Report Server database. After creating it, the related files are located in the SQL folder below, as I would expect.
Next I installed another instance of SQL Server 2014, which I called Test, like I did above. I now have the following folder structure for the Test instance, as I expect.
I have SQL Server 2014 running on one of our servers. We're in the process of implementing security steps for our databases. I've encrypted a column in one of the tables in a database on the server. The issue is that when I restore the backup on my local SQL Server and run a query to decrypt the column data, it gives me NULL values. On the other hand, when I decrypt the column data on the main server it works fine. I found a thread on this forum which states to do the following when restoring the encrypted database on a different server.
USE [master];
GO
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'StrongPassword';
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
GO
select File_Name, CONVERT(nvarchar, DECRYPTBYKEY(File_Name))
from [test].[dbo].[Orders_Customer]
I can easily query multiple servers using the multi-server query function in Central Management Server and write some of the results to logging tables. I would like to be able to do this via a scheduled job. So far I am finding that even after setting up Master/Target Servers this may not work, and the only workaround is using SSIS, SQLCMD (basically hard-coding the server name), or possibly PowerShell.
Has anyone been successful just using standard jobs and querying against multiple servers?
If I can't save the results to a 'central' database/table (I can do this when in SSMS) but can still query against multiple servers, I was thinking I could write the results to a CSV file that an SSIS job picks up.
I have attempted using SSIS to iterate through servers and have been plagued with intermittent connection issues when using a For...Loop container.
I have a query with a huge number of CASE statements. Basically I need to shorten this query by getting rid of these hundreds of CASE statements.
Because of the nature of the application I am not allowed to use a function, and just wondering if there is a possible way to rewrite this with COALESCE().
SELECT
    CASE WHEN A.[COL_1] LIKE '%cricket%' THEN 'ck' + ',' ELSE '' END +
    CASE WHEN A.[COL_1] LIKE '%soccer%' THEN 'sc' + ',' ELSE '' END +
    ....
    CASE WHEN A.[RESIUTIL_DESC] LIKE '%base%ball' THEN 'BB' + ',' ELSE '' END
FROM TableName A
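COALESCE alone won't remove the repetition, but moving the pattern/code pairs into a small mapping table and concatenating the matches per row does. A sketch with a hypothetical dbo.KeywordMap table; only [COL_1] is matched here, and other source columns (such as [RESIUTIL_DESC]) could be handled by adding a column-name flag to the mapping table:

-- Sketch only; dbo.KeywordMap is an assumed lookup table of LIKE patterns and codes.
CREATE TABLE dbo.KeywordMap (Pattern varchar(100) NOT NULL, Code varchar(10) NOT NULL);
INSERT dbo.KeywordMap (Pattern, Code)
VALUES ('%cricket%', 'ck'), ('%soccer%', 'sc'), ('%base%ball%', 'BB');

SELECT A.[COL_1],
       STUFF((SELECT ',' + m.Code
              FROM dbo.KeywordMap AS m
              WHERE A.[COL_1] LIKE m.Pattern
              FOR XML PATH('')), 1, 1, '') AS Codes   -- comma-separated codes per row
FROM TableName AS A;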