I have a table #vert with a value column. This data needs to be updated into two channel columns in the #hori table, based on the channel number in #vert.
CREATE TABLE #Vert (FILTER VARCHAR(3), CHANNEL TINYINT, VALUE TINYINT)

INSERT #Vert VALUES
('ABC', 1, 22), ('ABC', 2, 32),
('BBC', 1, 12), ('BBC', 2, 23),
('CAB', 1, 33), ('CAB', 2, 44)
-- COMBINATION OF FILTER AND CHANNEL IS UNIQUE

CREATE TABLE #Hori (FILTER VARCHAR(3), CHANNEL1 TINYINT, CHANNEL2 TINYINT)

INSERT #Hori VALUES ('ABC', NULL, NULL), ('BBC', NULL, NULL), ('CAB', NULL, NULL)
-- FILTER IS UNIQUE IN #HORI TABLE
One way to achieve this is to write two UPDATE statements; after they run, #Hori holds my desired output:
UPDATE H SET CHANNEL1 = VALUE
FROM #Hori H JOIN #Vert V ON V.FILTER = H.FILTER
WHERE V.CHANNEL = 1 -- updates only CHANNEL1

UPDATE H SET CHANNEL2 = VALUE
FROM #Hori H JOIN #Vert V ON V.FILTER = H.FILTER
WHERE V.CHANNEL = 2 -- updates only CHANNEL2

SELECT * FROM #Hori -- this is the desired output
The channel numbers in #Vert keep growing (1, 2, 3, 4, ...), and #Hori gains Channel3, Channel4 and so on, so I cannot keep writing more and more UPDATE statements. One alternative is to pivot #Vert and do a single update into #Hori.
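A hedged sketch of that approach, shown statically for two channels; a dynamic version would build the channel list with QUOTENAME and run the statement through sp_executesql:

UPDATE H
SET    CHANNEL1 = P.[1],
       CHANNEL2 = P.[2]
FROM   #Hori AS H
JOIN  (SELECT FILTER, [1], [2]
       FROM   #Vert
       PIVOT (MAX(VALUE) FOR CHANNEL IN ([1], [2])) AS pv) AS P
  ON   P.FILTER = H.FILTER; -- one statement updates every channel column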
We have a query which pulls revenue by country and client for the last three years. Right now each year is reported in a separate column, but we would like the revenues for every year for a given client to appear on one row. Below is the current query we have set up.
SELECT p.country_code, p.local_client_code, wwc.local_client_name,
       CASE WHEN pr.fiscal_year = 2015
            THEN SUM(pr.local_consulting_fees * er.rate) + SUM(pr.local_product_fees * er.rate)
               + SUM(pr.local_admin_fees * er.rate) + SUM(pr.local_misc_fees * er.rate)
            ELSE 0
       END AS '2015 Revenue',
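A hedged rewrite that collapses the years onto one row with conditional aggregation: the SUM wraps the CASE rather than the other way around. The 2014 (and 2013) branches repeat the pattern, and the FROM/JOIN/GROUP BY pieces are assumed from the rest of the existing query:

SELECT p.country_code, p.local_client_code, wwc.local_client_name,
       SUM(CASE WHEN pr.fiscal_year = 2015
                THEN (pr.local_consulting_fees + pr.local_product_fees
                    + pr.local_admin_fees + pr.local_misc_fees) * er.rate
                ELSE 0 END) AS [2015 Revenue],
       SUM(CASE WHEN pr.fiscal_year = 2014
                THEN (pr.local_consulting_fees + pr.local_product_fees
                    + pr.local_admin_fees + pr.local_misc_fees) * er.rate
                ELSE 0 END) AS [2014 Revenue]
-- FROM / JOIN clauses as in the existing query
GROUP BY p.country_code, p.local_client_code, wwc.local_client_name;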
Here is my requirement; I'm not sure if this is possible. I want to create a table called master with col1, col2, col3, col4, col5. Col1 and col2 are updatable - this can be done easily.
Col3 and col4 are columns in another table, but in master they would be read-only. Is this possible? It is possible with a view, but views are not friendly with SharePoint CRUD. Col5 would be a computed column based on col2 and one of the other columns; if the step above can be done, then I guess this can be done too.
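For what it's worth, a minimal sketch of the view approach (all table and column names hypothetical). In a view over a join, only one base table's columns are updatable through the view, which happens to match the col1/col2 requirement:

CREATE VIEW dbo.master AS
SELECT m.col1, m.col2,          -- updatable (come from the base table)
       o.col3, o.col4,          -- effectively read-only (come from the other table)
       m.col2 * o.col4 AS col5  -- computed on the fly
FROM dbo.master_base AS m
JOIN dbo.other_table AS o ON o.master_id = m.id;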
I have a query that has a left join with a large partitioned table. The partitioned table has tens of millions of records, and each partition has about 100,000 records.
The left join is part of an INSERT that pulls a column from the partitioned table when a matching row exists. The query filters on the partition ID, and all the other joined columns are covered by a non-clustered index.
Through the profiler, I found millions of reads, and the execution plan showed a table scan on the partitioned table.
I changed the query to do the insert first, followed by an UPDATE with an inner join. That did the trick, but it worries me that SQL Server 2014 behaves differently from 2012 or 2008 R2, which can make upgrading very time-consuming.
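A hedged sketch of that rewrite (all object names hypothetical): insert without touching the partitioned table, then fill the column in a second pass that can seek on the partition ID:

INSERT INTO dbo.Target (PartitionId, KeyCol, OtherCol)
SELECT s.PartitionId, s.KeyCol, s.OtherCol
FROM   dbo.Source AS s;

UPDATE t
SET    t.LookupCol = p.LookupCol
FROM   dbo.Target AS t
INNER JOIN dbo.BigPartitioned AS p
        ON p.PartitionId = t.PartitionId
       AND p.KeyCol      = t.KeyCol;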
I have a requirement wherein I have to concatenate fields based on their sequence given in another table, with respect to their lengths. For example:
Input 1:
Table A (below are the fields and their respective values; not all fields will have values):

KSCHL --> ZIC0 (KEY)
KOTABNR --> 521 (KEY)
MATNR
KUNNR --> 1234567890
LIFNR
VKORG --> a234
PRCTR
KUNRE --> 4355325363
LIFRE --> 88390234
PRODH
Table B: it contains the same fields as Table A and holds the sequence number in which the concatenation should happen. The length field (LEN) holds the corresponding field lengths (pipe-delimited) that should be considered during concatenation.
Note: if the field length given in Table B doesn't match the actual size of the field, the field should be prefixed with two spaces during concatenation. E.g., in the above example, say the LIFRE value = 88390234 (len = 8); then after concatenation the value should look like this:
12345678904355325363a234 88390234
Note: the fields are not constant. I have around 40 fields like this, and any combination of fields is possible. E.g.:
I am not sure which field has sequence value 1, 2, etc., or how many fields form the combination; it can sometimes be 3 of the 40 fields, or 10 of the 40. I have to fetch those values dynamically and concatenate them.
There can be any number of fields in the concatenation; the example above just uses four. The solution should be dynamic enough to handle any number of fields.
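A hedged sketch, assuming Table A's field/value pairs have already been unpivoted into rows of (FieldName, FieldValue) and Table B holds (FieldName, Seq, Len); STRING_AGG needs SQL Server 2017+, and older versions would use FOR XML PATH instead:

SELECT STRING_AGG(CASE WHEN LEN(a.FieldValue) <> b.Len
                       THEN '  ' + a.FieldValue   -- pad with two left spaces on a length mismatch
                       ELSE a.FieldValue END, '')
       WITHIN GROUP (ORDER BY b.Seq)              -- concatenate in Table B's sequence
FROM #TableA AS a
JOIN #TableB AS b ON b.FieldName = a.FieldName
WHERE a.FieldValue IS NOT NULL;                   -- skip fields with no value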
Is it possible to assign multiple columns from a SQL query to one variable? In the query below I have different variables (email, fname, month_last_taken) assigned from different columns of the same query. Can I pass all columns to one variable only, and then extract each column out of that variable later? That way I would only need to write the query once in the whole block.
DECLARE @email VARCHAR(500)
      , @intFlag INT
      , @INTFLAGMAX INT
      , @TABLE_NAME VARCHAR(100)
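One hedged option is a table variable: run the query once into the variable, then read the individual columns out of it as needed (the column types here are guesses):

DECLARE @rows TABLE (email VARCHAR(500), fname VARCHAR(100), month_last_taken VARCHAR(20));

INSERT INTO @rows (email, fname, month_last_taken)
SELECT email, fname, month_last_taken
FROM dbo.SomeSource;              -- the shared query, written only once

SELECT email, fname, month_last_taken
FROM @rows;                       -- extract the columns later, per row if needed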
I want to search full-text indexes for multiple search terms across multiple columns. The difficulty: I don't want only the records where a single column contains both search terms; I also want the records where one column contains one of the terms and another column contains the other.
For example, I search for NETWORK and PERFORMANCE in two columns:

Jobdescr                    | Jobtext
Bad NETWORK PERFORMANCE     | Slow NETWORK browsing in Windows XP
Bad application PERFORMANCE | Because of slow NETWORK browsing, the application runs slow.
I only get the first record, because its Jobdescr contains both search terms; I don't get the second record, because none of its columns contains both terms.
I managed to find a workaround:
SELECT T3.jobid, T3.jobdescr
FROM (SELECT jobid FROM dba.job
      WHERE CONTAINS(jobdescr, 'network*') OR CONTAINS(jobtext, 'network*')) T1
INNER JOIN
     (SELECT jobid FROM dba.job
      WHERE CONTAINS(jobdescr, 'performance*') OR CONTAINS(jobtext, 'performance*')) T2
      ON T2.jobid = T1.jobid
INNER JOIN
     (SELECT jobid, jobdescr FROM dba.job) T3
      ON T3.jobid = T1.jobid OR T3.jobid = T2.jobid

It works, but I guess this will put a heavy load on the database as the number of search terms and columns increases.
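A hedged alternative that may scale better: CONTAINS accepts a column list, so each term only has to appear in either column, in any combination:

SELECT jobid, jobdescr
FROM   dba.job
WHERE  CONTAINS((jobdescr, jobtext), '"network*"')
  AND  CONTAINS((jobdescr, jobtext), '"performance*"');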
I have some huge tables (think 200+ GB for a single table) which are excellent candidates for sparse columns. The tables have many columns defined as DECIMAL(13,2), and a large percentage of the values (over 50% in most cases, some as much as 99%) are 0.00. Since this is very expensive in terms of storage, my idea is to set all the 0.00 values to NULL and then mark the columns as sparse. Across 100 or so identical databases, I have 5 such tables, with 20-40 columns in each table.
1.) Three steps for each column in each table in each database:
Step 1: alter the table to allow NULLs in the column
Step 2: UPDATE table SET column = NULL WHERE column = 0.00
Step 3: alter the table to mark the column SPARSE
2.)
Step 1: create an entirely new table with sparse column definitions
Step 2: copy the entire table, transforming 0.00 to NULL for the affected columns, via SSIS
Step 3: drop the original table and rename the new table to the original name
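For option 1, a minimal sketch of the three statements for one column (table/column names hypothetical); a real script would generate these per column from sys.columns:

ALTER TABLE dbo.BigTable ALTER COLUMN Amount01 DECIMAL(13,2) NULL;  -- step 1: allow NULLs
UPDATE dbo.BigTable SET Amount01 = NULL WHERE Amount01 = 0.00;      -- step 2: zeros to NULL
ALTER TABLE dbo.BigTable ALTER COLUMN Amount01 ADD SPARSE;          -- step 3: mark sparse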
I have a scenario where standard Full-Text Search identifies keywords but Semantic Search does not recognize them as keywords. I'm hoping to understand why. The context is medical terminology, and the specific keywords I noticed missing right off the bat were medications.
For instance, if I put the following strings into a full-text indexed table:
'J9355 - Trastuzumab (Herceptin)' AND 'J9355 - Trastuzumab emtansine'
Semantic Search recognized 'Herceptin' and 'Emtansine' but not 'Trastuzumab'.
Similarly, in

'J8999 - Everolimus (Afinitor)'

it did not recognize 'Afinitor' as a keyword.
In all cases the base full-text index did find those keywords, and they were identifiable using the DMV sys.dm_fts_index_keywords_by_document. It does show the index as having completed.
Why might certain words not be picked up while others are? Could it be a language/dictionary issue? I am using English and accent-insensitive settings.
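One hedged way to investigate is to compare the semantic key phrases against the full-text keywords for the same document (table/column names hypothetical; SEMANTICKEYPHRASETABLE requires the semantic language statistics database to be installed):

DECLARE @DocKey INT = 1;  -- hypothetical document key
SELECT TOP (50) keyphrase, score
FROM   SEMANTICKEYPHRASETABLE(dbo.MedicalNotes, (NoteText), @DocKey)
ORDER BY score DESC;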
I have a table with 9 fields that I need to check. I want to check whether "text1" and "text2" appear in any of those 9 fields, but I don't want to require the combination of "text1" and "text2" in the same column. For example, I want to check whether "text1" is in columns 1 to 9 and whether "text2" is in columns 1 to 9. Can someone give me a suggestion on how to do this?
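A hedged sketch: IN works against a column list, so each value is checked independently across all nine columns (exact matches; per-column LIKE predicates would be needed for substring matching). Table and column names are hypothetical:

SELECT *
FROM   dbo.MyTable
WHERE  'text1' IN (col1, col2, col3, col4, col5, col6, col7, col8, col9)
  AND  'text2' IN (col1, col2, col3, col4, col5, col6, col7, col8, col9);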
I have an Excel sheet with some data and blank columns, and an SSIS package that imports the data from Excel into a SQL table. The blank Excel columns are imported as NULL, but I want them to show as '0' instead. When data does come in, it should update the data.
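A hedged sketch of the T-SQL side, assuming the package lands the data in a staging table first (names hypothetical); inside the data flow itself, a Derived Column using REPLACENULL would achieve the same thing:

UPDATE dbo.Staging
SET    Amount = 0
WHERE  Amount IS NULL;   -- blank Excel cells arrive as NULL; fold them to 0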
IF OBJECT_ID('GoldenSecurity') IS NOT NULL
    DROP TABLE dbo.GoldenSecurity;
IF OBJECT_ID('GoldenSecurityRowVersion') IS NOT NULL
    DROP TABLE dbo.GoldenSecurityRowVersion;
Why doesn't this work:

SELECT * FROM 'Events' WHERE dayofweek REGEXP 'monday' OR description REGEXP 'monday'

when this does work:

SELECT * FROM `Events` WHERE dayofweek REGEXP 'monday'
We have a table with 500+ columns. The data is non-normalized, i.e. there are four groups of fields for "people", followed by data that applies to all people in the row (a "household"). For ad-hoc queries, and because I wanted to index columns within each person (person 1's age, person 2's age, etc.), I used UNION:
SELECT P1Firstname AS FirstName, P1LastName AS LastName, P1birthday AS birthday,
       HouseholdIncome, HouseholdNumberOfChildren, <other "household" columns>
UNION
SELECT P2Firstname AS FirstName, P2LastName AS LastName, P2birthday AS birthday,
       HouseholdIncome, HouseholdNumberOfChildren, <other "household" columns>
I could get at least the P1/P2/P3 columns with PIVOT, but then I believe I'd have to JOIN back to the row anyway for the "household" columns. Performance of the UNION is good, but another person here chose to use PIVOT instead. I can't find any articles on PIVOT vs. UNION for "de-flattening".
I am trying to find books which have the same title and publisher name as at least two other books, and I need to also show the book ref (ISBN number). I have the below script so far:
SELECT isbn, title, publishername
FROM book
WHERE title IN (SELECT title
                FROM book
                GROUP BY title
                HAVING COUNT(title) > 2 OR COUNT(publishername) > 2)
ORDER BY title;
This is a snapshot of the output:
ISBN            Title                   Publishername
0-1311804-3-6   C                       Prentice Hall
* 0-0788132-1-2 C                       OSBORNE MCGRAW-HILL
* 0-0788153-8-X C                       OSBORNE MCGRAW-HILL
* 0-9435183-3-4 C Database Development  MIS
* 1-5582806-2-6 C Database Development  MIS
[Code] ....
What I should be seeing is only the rows I have put an * next to. What am I missing from the script?
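A hedged rewrite: grouping on the title/publisher pair keeps the two columns tied together, and COUNT(*) >= 3 implements "the same as at least two other books" (the row itself plus two more; use >= 2 if pairs should also qualify):

SELECT b.isbn, b.title, b.publishername
FROM   book AS b
JOIN  (SELECT title, publishername
       FROM   book
       GROUP BY title, publishername
       HAVING COUNT(*) >= 3) AS d
  ON   d.title = b.title AND d.publishername = b.publishername
ORDER BY b.title;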
Hi, I have tried this code from http://jtkane.spaces.live.com/Blog/cns!1pWDBCiDX1uvH5ATJmNCVLPQ!316.entry for full-text search on multiple tables & columns. Here's my code:

SELECT *
FROM [tStaffDir] AS e, [tStaffDir_PrevEmp] t, CONTAINSTABLE([tStaffDir], *, @Name) AS A
WHERE A.[KEY] = e.[ID]
  AND t.[ID] = e.[ID]

I have full-text indexed both tables above, and I get results from the [tStaffDir] table but not from the [tStaffDir_PrevEmp] table. The [tStaffDir_PrevEmp] table does have a column ([ID]) that is indexed, unique and non-nullable. Please advise what I should do and look out for. Many thanks.
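A hedged note: CONTAINSTABLE only ranks the single table named in its first argument, so [tStaffDir_PrevEmp] needs its own call. A sketch combining both, keeping rows where either table matched (join keys follow the post above):

SELECT e.[ID]
FROM   [tStaffDir] AS e
LEFT JOIN CONTAINSTABLE([tStaffDir], *, @Name) AS A ON A.[KEY] = e.[ID]
LEFT JOIN [tStaffDir_PrevEmp] AS t ON t.[ID] = e.[ID]
LEFT JOIN CONTAINSTABLE([tStaffDir_PrevEmp], *, @Name) AS B ON B.[KEY] = t.[ID]
WHERE  A.[KEY] IS NOT NULL OR B.[KEY] IS NOT NULL;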
What is the most efficient way to write a stored procedure that handles all the possible combinations here (where a user could supply any search input)? This situation must be fairly common. I have written an SP which takes in all the parameters and, based on IF statements and LIKE predicates, returns search results. But I wanted to know if there is a more efficient approach, as you can imagine you can end up with a great many combinations of IF conditions.
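A hedged sketch of the usual catch-all pattern, which avoids the IF ladder: each parameter either filters or is skipped, and OPTION (RECOMPILE) lets the optimizer discard the unused predicates on each call (parameter and column names hypothetical):

SELECT *
FROM   dbo.Customers
WHERE  (@Name  IS NULL OR Name  LIKE @Name + '%')
  AND  (@City  IS NULL OR City  = @City)
  AND  (@Phone IS NULL OR Phone = @Phone)
OPTION (RECOMPILE);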
So I have been trying to get my SQL query to work for a large database that I have. I have (let's say) two tables, Table_One and Table_Two. Table_One has three columns: Type, Animal and TestID, and Table_Two has two columns: Test_Name and Test_ID. An example with values is below:
In Table_One all types come under one column, and the values for all types (Mammal, Fish, Bird, Reptile) come under another column (Animal). Table_One and Table_Two can be linked by Test_ID.
I am trying to create a table such as the one shown below:
This should be my final table. The approach I am currently using is to make multiple instances of Table_One and use joins to form the final table, so the columns Bird, Reptile, Mammal and Fish all come from a different copy of Table_One.
For example:
SELECT Test_Name AS 'Test_Name',
       Table_Bird.Animal AS 'Birds',
       Table_Mammal.Animal AS 'Mammal',
       Table_Reptile.Animal AS 'Reptile',
       Table_Fish.Animal AS 'Fish'
FROM Table_One
[Code] .....
The problem with this query is that it only works when the Bird, Mammal, Reptile and Fish entries all have a value. If one field is empty, as for Test_Two or Test_Three, it doesn't return that record. I tried OR instead of AND in the WHERE clause, but that didn't work either.
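A hedged alternative using conditional aggregation instead of one copy of Table_One per type, so a missing animal simply yields NULL instead of dropping the whole record (column names follow the description above):

SELECT t2.Test_Name,
       MAX(CASE WHEN t1.Type = 'Bird'    THEN t1.Animal END) AS Bird,
       MAX(CASE WHEN t1.Type = 'Mammal'  THEN t1.Animal END) AS Mammal,
       MAX(CASE WHEN t1.Type = 'Reptile' THEN t1.Animal END) AS Reptile,
       MAX(CASE WHEN t1.Type = 'Fish'    THEN t1.Animal END) AS Fish
FROM   Table_Two AS t2
LEFT JOIN Table_One AS t1 ON t1.TestID = t2.Test_ID
GROUP BY t2.Test_Name;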
In the ECASE table there is a trigger that gets the max value of the case_id column for the project, increments it by one, and inserts that value into the ECASE table.

When we insert a new record into the ECASE table, this trigger fires and fills in the case_id column value.

When I run with multiple threads, the transaction is rolled back because of the trigger. The reason is that a lock is taken on the project table while getting the max value of case_id for the project.
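A hedged sketch of one common fix: serialize the MAX lookup inside the trigger with UPDLOCK/HOLDLOCK hints so concurrent inserts queue up instead of rolling back (column names follow the description above; on SQL Server 2012+ a SEQUENCE, or an IDENTITY column, avoids the trigger entirely):

DECLARE @next_case_id INT;

SELECT @next_case_id = ISNULL(MAX(case_id), 0) + 1
FROM   dbo.ecase WITH (UPDLOCK, HOLDLOCK)  -- hold the key range until the insert commits
WHERE  project = @project;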
Here is the code written so far. It pivots the deck and jib_in columns, but that's it: only the TWO columns, i.e. the one I put inside the aggregate function under the PIVOT function and the one I put inside QUOTENAME().
DECLARE @columns NVARCHAR(MAX), @sql NVARCHAR(MAX);
SET @columns = N'';

SELECT @columns += N', p.' + QUOTENAME(deck)
FROM (SELECT p.deck FROM dbo.report AS p GROUP BY p.deck) AS x;
[Code] ....
I need all the columns to be pivoted and shown in the pivoted table. I am very new to dynamic pivot and have tried many ways to add the other columns, to no avail!
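For reference, a hedged sketch of the full dynamic-PIVOT pattern for one value column (using jib_in as the value is an assumption; each additional value column needs its own aggregate, or its own PIVOT):

DECLARE @columns NVARCHAR(MAX) = N'', @sql NVARCHAR(MAX);

SELECT @columns += N', ' + QUOTENAME(deck)
FROM  (SELECT DISTINCT deck FROM dbo.report) AS x;

SET @sql = N'SELECT *
FROM (SELECT deck, jib_in FROM dbo.report) AS src
PIVOT (MAX(jib_in) FOR deck IN (' + STUFF(@columns, 1, 2, N'') + N')) AS p;';

EXEC sys.sp_executesql @sql;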
I have a business need to create a report by querying data from an MS SQL 2008 database and displaying the result to users on a web page. The report initially has 6 columns of data, 2 of which contain JSON, so the users have asked to have those 2 JSON columns parsed into 15 additional columns (the first JSON column has 8 key/value pairs and the second has 7). Here is what I have done so far:
I found a table-valued function (fnSplitJson2) from this link [URL]. Using this function I can parse a column of JSON data into a table. So when I use it against the first JSON column in my query (with CROSS APPLY), I get the right data back, but I get 8 rows for each row in my table. The reason for this side effect is that the function returns a table of 8 rows (8 key/value pairs) for each JSON string it parses.
1. First question: how do I modify my current query (see below) so that for each row in my table I get back one row with 19 columns?
SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B
If I update my query (see below) and call the function twice within the CROSS APPLY clause, I get this error: "The multi-part identifier "A.ITEM6" could not be bound."
2. Second question: how do I get around this error?
SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B, fnSplitJson2(A.ITEM6, NULL) C
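A hedged fix for the second question: chain two CROSS APPLY clauses instead of separating the calls with a comma, so A stays in scope for the second call. For the first question, the 8 and 7 rows each APPLY returns would still have to be pivoted (or the function changed to return one wide row) to land at 19 columns per product:

SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM   PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B
CROSS APPLY fnSplitJson2(A.ITEM6, NULL) C;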
I am using Microsoft SQL Server 2008 R2, on a Windows 7 desktop.
We are in the middle of re-designing a few tables (namely transaction tables) that will store very large amounts of data and will be hosted in the cloud (Azure). The old design of this product breaks transaction tables into monthly tables, i.e. an ORDERS table would be physically broken into twelve monthly tables per year, like ORDERS0115 (mmyy), ORDERS0215 and so on.
We are of the opinion that keeping all the transactions in one table is better. What are the best practices for transaction tables like the one mentioned above? Is it better to use one table with partitions? I read somewhere that partitions can slow down SELECT queries if not designed and thought through properly. Since this will be hosted in the cloud (Azure), are there additional things to take care of? How does a site like Amazon keep its transaction tables?
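For the partitioning option, a hedged sketch of a monthly RANGE RIGHT setup for a single ORDERS table (names and boundary dates hypothetical), in place of one physical table per month:

CREATE PARTITION FUNCTION pfOrdersMonthly (DATE)
AS RANGE RIGHT FOR VALUES ('2015-01-01', '2015-02-01', '2015-03-01');

CREATE PARTITION SCHEME psOrdersMonthly
AS PARTITION pfOrdersMonthly ALL TO ([PRIMARY]);

CREATE TABLE dbo.Orders (
    OrderId   BIGINT NOT NULL,
    OrderDate DATE   NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (OrderId, OrderDate)
) ON psOrdersMonthly (OrderDate);  -- rows land in the partition for their month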
I want to analyze the procedure cache to find inefficient plans and parameter issues.
I do it through the DMVs, but my DMV queries are very slow and resource-hungry because the procedure cache is several GB. Actually, I don't need on-line analysis.
Is it possible to take a fast snapshot of the procedure cache?
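A hedged sketch: copy the cache metadata into a real table once, then run the analysis against the snapshot instead of the live DMVs (the snapshot table name is hypothetical):

SELECT cp.plan_handle, cp.usecounts, cp.objtype, cp.size_in_bytes,
       st.text AS sql_text
INTO   dbo.ProcCacheSnapshot          -- persisted copy; analyze this at leisure
FROM   sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st;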