What I eventually want the solution to produce is one record (row) for each unique/distinct client, reference and sequence. Based on the row_number, I want the first AppearanceDate returned in a column named 'AppearanceDate1' with the first AppearanceDate as the data, and the first Outcome returned in a column named 'Outcome1' with the first Outcome as the data. Any subsequent rows would follow as further columns.
Only four AppearanceDates and Outcomes are required per unique/distinct Client, Reference and SequenceID. How can I loop through the Appearances based on the row_number and show each record in one row?
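A minimal sketch of one way to do this with ROW_NUMBER plus conditional aggregation; the table and column names (Appearances, Client, Reference, SequenceID, AppearanceDate, Outcome) are assumptions based on the description above:

;WITH Numbered AS
(
    SELECT Client, Reference, SequenceID, AppearanceDate, Outcome,
           ROW_NUMBER() OVER (PARTITION BY Client, Reference, SequenceID
                              ORDER BY AppearanceDate) AS rn
    FROM dbo.Appearances
)
SELECT Client, Reference, SequenceID,
       MAX(CASE WHEN rn = 1 THEN AppearanceDate END) AS AppearanceDate1,
       MAX(CASE WHEN rn = 1 THEN Outcome        END) AS Outcome1,
       MAX(CASE WHEN rn = 2 THEN AppearanceDate END) AS AppearanceDate2,
       MAX(CASE WHEN rn = 2 THEN Outcome        END) AS Outcome2,
       MAX(CASE WHEN rn = 3 THEN AppearanceDate END) AS AppearanceDate3,
       MAX(CASE WHEN rn = 3 THEN Outcome        END) AS Outcome3,
       MAX(CASE WHEN rn = 4 THEN AppearanceDate END) AS AppearanceDate4,
       MAX(CASE WHEN rn = 4 THEN Outcome        END) AS Outcome4
FROM Numbered
GROUP BY Client, Reference, SequenceID;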
I have a report which is a list of items, and I display everything about each item. It works great. My report table in the layout tab is simple: Header, Detail, Footer. Each item has 65 columns, and the number of items (rows) varies depending on what you want to see. Example data:

Item#, Description, CaseSalePrice, Cost, BottleSalePrice, Discount
123, Grenadine, 100.00, 75.00, 15.50, 2.00
456, Lime Juice, 120.00, 81.00, 17.25, 2.00

What I am actually doing is running the top example, saving it to Excel, copying the sheet, creating a new sheet and doing a Paste Special > Transpose, and this gives the users what they want to see.
I want to grab that table object in the report layout tab and rotate it 90 degrees so the header is on the left, the detail is in the middle and the footer is on the right. That would be perfect.
The dynamic column need is really the problem here. I never know how many items will be in the report. They all have the same basic information like description and pricing.
I am all out of creative ideas, any help would be appreciated.
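In the report designer itself, a matrix (tablix) with a column group on Item# usually gives this kind of 90-degree layout without knowing the item count in advance. If the transpose has to happen in the data instead, a dynamic PIVOT can flip the rows to columns before they reach the report. A rough sketch follows; dbo.Items and the handful of attribute columns are assumptions drawn from the example data, and a real version would enumerate all 65 columns:

DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Build one pivot column per item number currently in the table.
SELECT @cols = STUFF((SELECT ',' + QUOTENAME(CAST([Item#] AS VARCHAR(20)))
                      FROM dbo.Items
                      FOR XML PATH('')), 1, 1, '');

SET @sql = N'
SELECT Attribute, ' + @cols + N'
FROM
(
    SELECT [Item#], Attribute, Value
    FROM dbo.Items
    CROSS APPLY (VALUES (''Description'',     Description),
                        (''CaseSalePrice'',   CAST(CaseSalePrice   AS VARCHAR(30))),
                        (''Cost'',            CAST(Cost            AS VARCHAR(30))),
                        (''BottleSalePrice'', CAST(BottleSalePrice AS VARCHAR(30))),
                        (''Discount'',        CAST(Discount        AS VARCHAR(30)))
                ) v(Attribute, Value)
) src
PIVOT (MAX(Value) FOR [Item#] IN (' + @cols + N')) p;';

EXEC sys.sp_executesql @sql;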
Is it possible to combine a CASE expression across two different columns to retrieve data into one result column? One column holds multiple JobCodes, but these need to be divided, and the only way I can see to do this is to take data from another column to get the results. For example: JobCode has one code covering several job descriptions (there are about 30), but not everyone within that code can have the same level of access, so I need to divide them out and put the result in one column for AccessLevel.
JobTitle has one code per job (but there are over 100). I want to pull from both columns to get the results I need and assign the appropriate access level in one column.
CASE JobCode (they all have the same job code, but everyone in this job code should not have the same access)
    WHEN '45' THEN '1' (Principal, Asst. Prin., or any Administrator, Counselors)
    WHEN '25' THEN '2' (this could be a teacher, etc.)
CASE JobTitle (this is how access should be)
    WHEN '12345' THEN '1' (this is Administration only)
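A searched CASE expression can test both columns in one pass, checking the more specific JobTitle first and falling back to JobCode. A minimal sketch, assuming an Employees table with JobCode and JobTitle columns; the codes and the default level are placeholders taken from the example above:

SELECT EmployeeID,
       CASE
           WHEN JobTitle = '12345' THEN '1'   -- Administration only, regardless of JobCode
           WHEN JobCode  = '45'    THEN '1'   -- Principal, Asst. Prin., Administrators, Counselors
           WHEN JobCode  = '25'    THEN '2'   -- teachers, etc.
           ELSE '3'                           -- default level for everyone else (assumption)
       END AS AccessLevel
FROM dbo.Employees;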
SELECT h1.ftraccode,
       CASE WHEN FTRAADDRED = 'A' THEN h1.ftrascode END AS 'From Sec',
       CASE WHEN FTRAADDRED = 'r' THEN h1.ftrascode END AS 'To Sec',
       CASE WHEN ftraaddred = 'A' THEN h1.ftradesc END AS 'From Description',
       CASE WHEN ftraaddred = 'r' THEN h1.ftradesc END AS 'to Description'
FROM bHISfile h1
WHERE h1.ftradesc LIKE 'sw%'
ORDER BY 1

----------------------------------------------------------------
clintcode  | from_sec | to_sec | from_desc         | to_desc
----------------------------------------------------------------
ABADJ16421 | NULL     | MMTEI  | NULL              | SWITCH TO OAPIF
ABADJ16421 | OAPIF    | NULL   | SWITCH FROM MMTEI | NULL
(2 rows)
Expected output is like this:
----------------------------------------------------------------
clintcode  | from_sec | to_sec | from_desc         | to_desc
----------------------------------------------------------------
ABADJ16421 | OAPIF    | MMTEI  | SWITCH FROM MMTEI | SWITCH TO OAPIF
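One way to collapse the two rows into one is to group on the client code and take the non-NULL value from each conditional column with MAX. A sketch, assuming the rows to be merged always share the same ftraccode:

SELECT h1.ftraccode,
       MAX(CASE WHEN h1.ftraaddred = 'A' THEN h1.ftrascode END) AS [From Sec],
       MAX(CASE WHEN h1.ftraaddred = 'r' THEN h1.ftrascode END) AS [To Sec],
       MAX(CASE WHEN h1.ftraaddred = 'A' THEN h1.ftradesc  END) AS [From Description],
       MAX(CASE WHEN h1.ftraaddred = 'r' THEN h1.ftradesc  END) AS [To Description]
FROM bHISfile h1
WHERE h1.ftradesc LIKE 'sw%'
GROUP BY h1.ftraccode
ORDER BY 1;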
ClaimNum   TransactionDate   Username    |    ClaimNum   TransactionAmount   UserName
2000074    20150209          jerry.witt
2000074    -10000            DATAFIX INSERTED ON 20150626 AT 162152493 LOCAL
2000074    20150626          DATAFIX INSERTED ON 20150626 AT 162152493 LOCAL
2000074    -10000            DATAFIX INSERTED ON 20150626 AT 162152493 LOCAL
[Code] .....
So, if we look at the result set, we notice two things: the IG_FinancialTransactionSummary.Username is like 'Data', and the transaction date of that row is sometimes the max transaction date, while in other cases there are later transactions whose username is not like '%data%'. So I need to add a new column to my SQL query which basically checks whether the username is like '%data%' and whether that row is the max(transaction date), or whether every transaction after it also has a username like '%data%'; if so, the column should show YES, otherwise NO.
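A rough sketch of one way to derive such a flag. The column names (ClaimNum, TransactionDate, Username) are assumptions based on the sample result above, and since the exact YES/NO rule is slightly ambiguous, this version flags YES when a '%data%' row is not followed by any later non-'%data%' transaction for the same claim:

SELECT t.ClaimNum,
       t.TransactionDate,
       t.Username,
       CASE
           WHEN t.Username LIKE '%data%'
                AND NOT EXISTS (SELECT 1
                                FROM IG_FinancialTransactionSummary x
                                WHERE x.ClaimNum = t.ClaimNum
                                  AND x.TransactionDate > t.TransactionDate
                                  AND x.Username NOT LIKE '%data%')
               THEN 'YES'
           ELSE 'NO'
       END AS DataFixIsLatest
FROM IG_FinancialTransactionSummary t;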
I have a query taking too much time because of the lack of cross-column MAX/MIN functions. Consider a similar example where a view is required to reflect the distribution of water among different towns, each having four different levels of distribution reservoir tanks of different sizes. In this case the basic table has columns like:
Now suppose I need a query to distribute QuantityPurchased across four additional columns, computed from the sizes declared in the last four fields, in the same order of preference. For example, I have to use IIFs to check whether the quantity purchased is less than Tank_A_Size; if yes, then the quantity purchased, otherwise Tank_A_Size itself, for Tank_A_Filled.
Then again an IIF, but this time to check:
whether the quantity purchased less Tank_A_Filled (which again needs to be calculated as above) is less than Tank_B_Size; if yes, then that remainder (the quantity purchased less Tank_A_Filled), otherwise Tank_B_Size itself, for Tank_B_Filled.
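A sketch of how the repeated calculations can be named once with CROSS APPLY, so each tank's fill reuses the previous results instead of re-deriving them. The table name (WaterDistribution) and column names (Town, QuantityPurchased, Tank_A_Size ... Tank_D_Size) are assumptions based on the description:

SELECT w.Town,
       w.QuantityPurchased,
       a.Tank_A_Filled,
       b.Tank_B_Filled,
       c.Tank_C_Filled,
       d.Tank_D_Filled
FROM dbo.WaterDistribution w
CROSS APPLY (SELECT CASE WHEN w.QuantityPurchased < w.Tank_A_Size
                         THEN w.QuantityPurchased ELSE w.Tank_A_Size END) a(Tank_A_Filled)
CROSS APPLY (SELECT CASE WHEN w.QuantityPurchased - a.Tank_A_Filled < w.Tank_B_Size
                         THEN w.QuantityPurchased - a.Tank_A_Filled ELSE w.Tank_B_Size END) b(Tank_B_Filled)
CROSS APPLY (SELECT CASE WHEN w.QuantityPurchased - a.Tank_A_Filled - b.Tank_B_Filled < w.Tank_C_Size
                         THEN w.QuantityPurchased - a.Tank_A_Filled - b.Tank_B_Filled ELSE w.Tank_C_Size END) c(Tank_C_Filled)
CROSS APPLY (SELECT CASE WHEN w.QuantityPurchased - a.Tank_A_Filled - b.Tank_B_Filled - c.Tank_C_Filled < w.Tank_D_Size
                         THEN w.QuantityPurchased - a.Tank_A_Filled - b.Tank_B_Filled - c.Tank_C_Filled ELSE w.Tank_D_Size END) d(Tank_D_Filled);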
I have a table with DOB and test results, and I am trying to pull the data from the table and convert rows to columns. Below is the table I am using; I used PIVOT to do this.
CREATE TABLE #TEST_RESULTS
(
    ID INT,
    NAME VARCHAR(10),
    DOB DATETIME,
    DAYS_SINCE_BIRTH_TO_TEST INT,
    TEST_RESULTS INT
)
INSERT INTO #TEST_RESULTS
VALUES (1, 'A', '2015-01-01', 0, 1)
      ,(1, 'A', '2015-01-01', 0, 1)
      ,(1, 'A', '2015-01-01', 1, 3)
      ,(1, 'A', '2015-01-01', 2, 6)
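For reference, a minimal PIVOT sketch over the sample data above, using DAYS_SINCE_BIRTH_TO_TEST as the spreading column and MAX(TEST_RESULTS) as the value (the duplicate day-0 row means some aggregate has to be chosen; adjust as needed):

SELECT ID, NAME, DOB,
       [0] AS Day0, [1] AS Day1, [2] AS Day2
FROM (SELECT ID, NAME, DOB, DAYS_SINCE_BIRTH_TO_TEST, TEST_RESULTS
      FROM #TEST_RESULTS) src
PIVOT (MAX(TEST_RESULTS) FOR DAYS_SINCE_BIRTH_TO_TEST IN ([0], [1], [2])) p;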
I have persons who speak multiple languages stored in one table, with one row added per language a person speaks. Instead, I want to add additional columns and load the data that way (as shown in the desired output).

name   language
---------------
ron    english
ron    french
ron    spanish
andy   english
andy   hindi
kate   english
Desired output
name   language1  language2  language3  language4  language5  language6
-----------------------------------------------------------------------
ron    english    french     spanish
andy   english    hindi
kate   english
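A minimal sketch using ROW_NUMBER plus conditional aggregation; the source table name (PersonLanguages) is an assumption, and the languages are numbered alphabetically per person to match the desired output:

;WITH Numbered AS
(
    SELECT name, [language],
           ROW_NUMBER() OVER (PARTITION BY name ORDER BY [language]) AS rn
    FROM dbo.PersonLanguages
)
SELECT name,
       MAX(CASE WHEN rn = 1 THEN [language] END) AS language1,
       MAX(CASE WHEN rn = 2 THEN [language] END) AS language2,
       MAX(CASE WHEN rn = 3 THEN [language] END) AS language3,
       MAX(CASE WHEN rn = 4 THEN [language] END) AS language4,
       MAX(CASE WHEN rn = 5 THEN [language] END) AS language5,
       MAX(CASE WHEN rn = 6 THEN [language] END) AS language6
FROM Numbered
GROUP BY name;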
Name        Description         Date        Question                                 Answer
Customer A  Profile Assessment  01/01/2015  How complex is the structure?
Customer A  Profile Assessment  01/01/2015  The total value of assets?               Less than GBP 1 million
Customer A  Profile Assessment  01/01/2015  The volume of transactions undertaken?   Low (-1 pmth)
[Code] ....
However, I would like it to output:
Name | Description | Date | How complex is the structure? | The total value of assets? | The volume of transactions undertaken? | How was the client introduced? | Where does the Customer reside?
[Code] ....
The number of questions is unknown for each RiskReviewID, and more can be added in the future.
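Since the set of questions is open-ended, a dynamic PIVOT is the usual approach: build the column list from the distinct questions at run time, then pivot the answers under them. A rough sketch, where the source name (dbo.RiskReviewAnswers) and its columns (Name, Description, Date, Question, Answer) are assumptions based on the sample above:

DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- One pivot column per distinct question currently present.
SELECT @cols = STUFF((SELECT DISTINCT ',' + QUOTENAME(Question)
                      FROM dbo.RiskReviewAnswers
                      FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '');

SET @sql = N'
SELECT [Name], [Description], [Date], ' + @cols + N'
FROM (SELECT [Name], [Description], [Date], Question, Answer
      FROM dbo.RiskReviewAnswers) src
PIVOT (MAX(Answer) FOR Question IN (' + @cols + N')) p;';

EXEC sys.sp_executesql @sql;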
In a table I have some scodes with rows flagged both A and B, some scodes with only A rows, and some with only B rows.
I would like to fetch all rows with flag A when both flags are present for a scode (no B rows should be fetched in that case), and fetch all rows when only a single flag is present for a scode. How can I achieve this using T-SQL?
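A minimal sketch, assuming a table MyTable with scode and flag columns: A rows are always kept, and B rows are kept only when the scode has no A rows at all.

SELECT t.*
FROM dbo.MyTable t
WHERE t.flag = 'A'
   OR (t.flag = 'B'
       AND NOT EXISTS (SELECT 1
                       FROM dbo.MyTable a
                       WHERE a.scode = t.scode
                         AND a.flag  = 'A'));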
The query is below. I have also pasted the current result of this query and the desired result.
Can the query be updated to get the desired result as given below?
Query:
SELECT c.OTH_PAYER_ID, c.PAID_DATE, f.GROUP_CODE, f.REASON_CODE, f.ADJUSTMENT_AMOUNT
FROM MMIT_CLAIM_ITEM b, mmit_tpl c, mmit_attachment_link d, MMIT_TPL_GROUP_RSN_ADJ f
WHERE b.CLAIM_ICN_NU = d.CLAIM_ICN
  AND b.CLAIM_ITEM_LINE_NU = d.CLAIM_LINE_NUM
  AND c.TPL_TS = d.TPL_TS
  AND f.TPL_TS = c.TPL_TS
  AND b.CLAIM_ICN_NU = '123456788444'
Current Result which I am getting with this query
OTH_PAYER_ID  PAID_DATE   GROUP_CODE  REASON_CODE  ADJUSTMENT_AMOUNT
5501          07/13/2015  CO          11           23.87
5501          07/13/2015  PR          12           3.76
5501          07/13/2015  OT          32           33.45
2032          07/14/2015  CO          12           23.87
2032          07/14/2015  OT          14           43.01
Desired/expected result, for which I need the updated query:
We can easily load a file into db tables. However, my main concern here is the number of columns in the file. A text file, TEXT_1400.txt, has 1400 columns. I am unable to load the data into my db table using BCP or BULK INSERT commands, as a maximum of 1024 columns is allowed per table in SQL Server 2008.
We can still go ahead and create a 'wide table' (a special table that holds up to 30,000 columns; the maximum size of a wide table row is 8,019 bytes). But when operating on a wide table, BCP/BULK INSERT commands still fail. After a few hours of scratching my head over BCP and BULK INSERT, I observed that while inserting, BCP/BULK INSERT is unable to identify SPARSE columns and skip them, which disturbs the column mapping and results in data conversion and truncation errors.
Is there any proper way to load this kind of file into the db table?
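One workaround, offered only as a sketch, is to bulk load each line of the file as a single wide string into a one-column staging table and split it in T-SQL (or SSIS) afterwards, so that BCP/BULK INSERT never has to map 1400 fields onto sparse columns. The path, object names and delimiters below are assumptions:

-- Staging table: one row of the file per row, whole line in one column.
CREATE TABLE dbo.Staging_TEXT_1400 (RawLine NVARCHAR(MAX));

-- FIELDTERMINATOR is set to a character assumed never to occur in the file
-- (here the null character), so each whole line lands in RawLine.
BULK INSERT dbo.Staging_TEXT_1400
FROM 'C:\data\TEXT_1400.txt'
WITH (ROWTERMINATOR = '\n', FIELDTERMINATOR = '\0');

-- From here the 1400 delimited values can be split with a string-splitter
-- function (or parsed in SSIS) and written into the wide table column by column.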
I am relatively new to SQL and have experience with relatively simple queries and data manipulation. I have just finished creating a fully normalized database, which, of course, is presenting me with a few problems retrieving data. I'm not sure if I'm even taking the right approach. Anyway, I am trying to create a web search with a filter. I want my filter to be based upon different columns in a view, rather than rows in one column. I have the following columns:
I want to populate a drop-down list with these column headings. So I thought I would need a way to make a dummy column named Category or something, and then make each of these columns a row in the Category column. Can that be done?
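Turning column headings into rows is what UNPIVOT, a CROSS APPLY over a VALUES list, or a simple metadata query can do. A sketch of two options, where dbo.MyView and the column names ColA/ColB/ColC are placeholders since the real column list isn't shown above:

-- Option 1: list the column names themselves from metadata to fill the drop-down.
SELECT name AS Category
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.MyView');

-- Option 2: turn each row's column values into rows under a Category column.
SELECT x.Category, x.Value
FROM dbo.MyView v
CROSS APPLY (VALUES ('ColA', v.ColA),
                    ('ColB', v.ColB),
                    ('ColC', v.ColC)) x(Category, Value);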
I tend to learn from examples and am used to PowerShell. In PowerShell, if I wanted to get something and store it in a variable, I could, and then reuse it later in the same code. In this example against a table of order items, with order_num, quantity and item_price columns, how could I declare ordertotal as a variable and then, instead of repeating the expression in the HAVING SUM clause, use the variable in its place?
Is there an example of such a use of a variable that still lets me select the order_num and ordertotal and group them, etc.? I had hoped to simply replace the aggregate function in the HAVING clause with "ordertotal", but that bombs out.
SELECT order_num,
       SUM(quantity * item_price) AS ordertotal
FROM orderitems
GROUP BY order_num
HAVING SUM(quantity * item_price) >= 50
ORDER BY ordertotal;
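A T-SQL scalar variable can't hold a per-group value like this, but the expression can be named once in a CTE (or derived table) and then filtered in an outer WHERE, which keeps the calculation in one place. A sketch over the same orderitems table:

;WITH totals AS
(
    SELECT order_num,
           SUM(quantity * item_price) AS ordertotal
    FROM orderitems
    GROUP BY order_num
)
SELECT order_num, ordertotal
FROM totals
WHERE ordertotal >= 50
ORDER BY ordertotal;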
I'm trying to create calculated columns in a SELECT query (inside a sp) that are based on other calculated columns in the same SELECT list. But since I'm not allowed to refer to each calculated column by name, I'm forced to repeat long calculations every time I build one calculation based on another.
I'd like to do something on the order of:

SELECT Price * Quantity AS SalePrice,
       SalePrice * 1.08 AS SalesPriceWithTax
FROM Orders
Access used to let me do that, but SQL Server doesn't. Any suggestions? Can I use variables declared and assigned before I start my SELECT and then put those variables in my SELECT list? Does anyone have a useful shortcut?
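Variables won't help per row, but CROSS APPLY (VALUES ...) can name the intermediate expression once so later columns in the same SELECT can reuse it. A minimal sketch against the Orders example above:

SELECT v.SalePrice,
       v.SalePrice * 1.08 AS SalesPriceWithTax
FROM Orders o
CROSS APPLY (VALUES (o.Price * o.Quantity)) v(SalePrice);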
I have a business need to create a report by querying data from an MS SQL 2008 database and displaying the result to the users on a web page. The report initially has 6 columns of data, and 2 of the 6 contain JSON data, so the users have requested that those 2 JSON columns be parsed into 15 additional columns (the first JSON column has 8 key/value pairs and the second has 7 key/value pairs). Here is what I have done so far:
I found a table-valued function (fnSplitJson2) from this link [URL]. Using this function I can parse a column of JSON data into a table. When I use the function against the first JSON column in my query (with CROSS APPLY), I get the right data back, but I get 8 rows for each row in my table. The reason for this side effect is that the function returns a table of 8 rows (8 key/value pairs) for each JSON string it parses.
1. First question: how do I modify my current query (see below) so that for each row in my table I get back one row with 19 columns?
SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B
If I update my query (see below) and call the function twice within the CROSS APPLY clause, I get this error: "The multi-part identifier "A.ITEM6" could not be bound."
2. My second question: how do I get around this error?
SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B,
     fnSplitJson2(A.ITEM6, NULL) C
I am using Microsoft SQL Server 2008 R2 version. Windows 7 desktop.
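On the second question, the error comes from the comma after B: everything after the comma is an old-style join term, not part of the APPLY, so it cannot see alias A. Repeating the CROSS APPLY keyword for the second call should resolve it; a sketch of the corrected query:

SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B
CROSS APPLY fnSplitJson2(A.ITEM6, NULL) C;

For the first question, getting one row with 19 columns rather than 8 x 7 rows per product would still require pivoting the key/value output of each call (for example MAX(CASE WHEN [key] = '...' THEN value END) grouped by the product columns); the exact shape depends on the column names fnSplitJson2 returns, which aren't shown here.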
Hello, I have a survey (30 questions) application in a SQL Server db. The application uses several relational tables. The results are arranged so that each answer is on a separate row:

user1 answer1
user1 answer2
user1 answer3
user2 answer1
user2 answer2
user2 answer3

For statistical analysis I need to transfer the results to an Excel spreadsheet (for later use in SPSS). In the spreadsheet I need the results arranged so that each user is on a single row with all of that user's answers on that row (a column for each answer):

user1 answer1 answer2 answer3
user2 answer1 answer2 answer3

How can this be done? How can all answers of a user appear on a single row?
Thanks, Danny.
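A minimal sketch of the SQL side, assuming a results table SurveyAnswers(UserID, QuestionID, Answer) with QuestionID running 1 to 30 (the real table names aren't shown above); the pivoted result set can then be exported to Excel:

SELECT UserID,
       MAX(CASE WHEN QuestionID = 1 THEN Answer END) AS Answer1,
       MAX(CASE WHEN QuestionID = 2 THEN Answer END) AS Answer2,
       MAX(CASE WHEN QuestionID = 3 THEN Answer END) AS Answer3
       -- ... continue the pattern up to QuestionID = 30
FROM dbo.SurveyAnswers
GROUP BY UserID;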
I have a SQL script to insert data into a table as below:
INSERT into [SRV1INS2].BB.dbo.Agents2 select * from [SRV2INS14].DD.dbo.Agents
I just want to set a trigger on the Agents2 table which deletes all rows in the table before any insert operation is carried out using the above statement. I had the table trigger below on [SRV1INS2].BB.dbo.Agents2, but it did not do what I intended.
USE [BB]
GO
/****** Object: Trigger    Script Date: 24/07/2015 3:41:38 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
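An AFTER INSERT trigger fires too late to empty the table first; an INSTEAD OF INSERT trigger can delete the existing rows and then perform the requested insert. A sketch, assuming the "empty then reload" behaviour described above is really what's wanted:

CREATE TRIGGER dbo.trg_Agents2_ReplaceOnInsert
ON dbo.Agents2
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Remove whatever is currently in the table...
    DELETE FROM dbo.Agents2;

    -- ...then perform the insert that was originally requested.
    -- (If Agents2 has an identity column, list the columns explicitly instead of SELECT *.)
    INSERT INTO dbo.Agents2
    SELECT *
    FROM inserted;
END
GO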
;WITH ctePreAgg AS
(
    SELECT TOP 500
           act_reference "ActivityRef",
           ROW_NUMBER() OVER (PARTITION BY act_reference ORDER BY act_reference) AS rowno,
           t3.s_initials "Initials"
    FROM mytablestuff
    ORDER BY act_reference
[code]...
But what I would love to do next is take each of the above rows and return the initials either in one column, with all the NULLs and duplicate values removed and the values separated by commas,
OR the above but using a variable number of columns based on the maximum number of different initials for each row. This is not strictly required, but might be neater for further work on the view.
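For the comma-separated option, the usual pre-2017 approach is FOR XML PATH with STUFF. A sketch built directly on mytablestuff, treating act_reference and s_initials as its columns (the other details of the CTE above, such as the t3 alias and the TOP 500, are omitted here as assumptions):

SELECT t.act_reference AS ActivityRef,
       STUFF((SELECT DISTINCT ', ' + t2.s_initials
              FROM mytablestuff t2
              WHERE t2.act_reference = t.act_reference
                AND t2.s_initials IS NOT NULL
              FOR XML PATH(''), TYPE).value('.', 'VARCHAR(MAX)'),
             1, 2, '') AS InitialsList
FROM mytablestuff t
GROUP BY t.act_reference;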