I have a description field in a table that also stores the unit of measure in the same column, separated by some whitespace. I need to split these into two different columns.
I have to split a column into multiple columns using a comma as the delimiter. I am able to do it if I know how many columns will be present in the final output, but in the daily run the number of columns can vary randomly.
How can I split the column without hardcoding how many columns there will be?
This is the code I am using:
Code: WITH Split_Names (Fil_id,Name, xmlname) AS ( SELECT Fil_ID,
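The CTE above is truncated, so here is a hedged sketch of the usual dynamic-SQL approach instead: count how many delimited values the widest row has, build the column list, then pivot. The table name Fil is only a guess from the Fil_ID column, STRING_AGG needs SQL Server 2017+, and the ordinal argument of STRING_SPLIT needs SQL Server 2022, so older versions would need FOR XML PATH and a different splitter.

DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX), @maxcols INT;

-- 1. How many values does the widest row hold?
SELECT @maxcols = MAX(LEN(Name) - LEN(REPLACE(Name, ',', ''))) + 1 FROM Fil;

-- 2. Build "Col1, Col2, ... ColN"
SELECT @cols = STRING_AGG('Col' + CAST(n AS VARCHAR(10)), ', ') WITHIN GROUP (ORDER BY n)
FROM (SELECT TOP (@maxcols) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_objects) AS numbers;

-- 3. Split every row and pivot the pieces into that many columns
SET @sql = N'
SELECT Fil_ID, ' + @cols + N'
FROM (SELECT f.Fil_ID, ''Col'' + CAST(s.ordinal AS VARCHAR(10)) AS ColName, s.value
      FROM Fil AS f
      CROSS APPLY STRING_SPLIT(f.Name, '','', 1) AS s) AS src
PIVOT (MAX(value) FOR ColName IN (' + @cols + N')) AS p;';

EXEC sys.sp_executesql @sql;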
I have a very interesting problem in T-SQL coding for which I can't figure out the solution. There is a Line_1_Address column in our data warehouse address table which is populated from various sources. Some sources have already concatenated the house number and street address into the Line_1_Address column, whereas one source has separate columns for both fields.
Now I'm trying to extract data from this data warehouse table, and I need to split the house number from the street address and load them into separate columns in my destination table. If there is no data for the house number, I should load it as NULL.
The issue is that data in this Line_1_Address column is very inconsistent so I don't know which functions to use. Here is some sample data for your consideration:
Line_1_Address
101 E Commerce ST
120 E Commerce ST
2 Po Box
301 W. Bel Air Ave
West Main Street, PO Box 1388
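For the rows that do start with a house number, one hedged sketch is to test for a leading digit and split on the first space. The table name dbo.AddressSource is an assumption, and PO Box style rows such as '2 Po Box' would still need their own rule.

SELECT
    CASE WHEN Line_1_Address LIKE '[0-9]%'
         THEN LEFT(Line_1_Address, CHARINDEX(' ', Line_1_Address + ' ') - 1)
    END AS House_Number,
    CASE WHEN Line_1_Address LIKE '[0-9]%'
         THEN LTRIM(STUFF(Line_1_Address, 1, CHARINDEX(' ', Line_1_Address + ' '), ''))
         ELSE Line_1_Address
    END AS Street_Address
FROM dbo.AddressSource;   -- hypothetical source table name
-- House_Number comes back NULL when the value does not start with a digit,
-- which matches the "load it as NULL" requirement.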
What I need is to split the data into two columns: if the value in column Main starts with 'PR-' then output it to column P, and if it starts with 'CC-' then output it to column C (the output needs to be in one table).
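A minimal sketch, assuming the source is a table dbo.SourceTable with the Main column:

SELECT
    CASE WHEN Main LIKE 'PR-%' THEN Main END AS P,
    CASE WHEN Main LIKE 'CC-%' THEN Main END AS C
FROM dbo.SourceTable;
-- Each row lands in at most one of the two columns; the other stays NULL.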
I have a table #Vert with a VALUE column. This data needs to be updated into two channel columns in the #Hori table, based on the channel number in #Vert.
CREATE TABLE #Vert (FILTER VARCHAR(3), CHANNEL TINYINT, VALUE TINYINT)

INSERT #Vert VALUES
('ABC', 1, 22), ('ABC', 2, 32),
('BBC', 1, 12), ('BBC', 2, 23),
('CAB', 1, 33), ('CAB', 2, 44)
-- COMBINATION OF FILTER AND CHANNEL IS UNIQUE

CREATE TABLE #Hori (FILTER VARCHAR(3), CHANNEL1 TINYINT, CHANNEL2 TINYINT)

INSERT #Hori VALUES ('ABC', NULL, NULL), ('BBC', NULL, NULL), ('CAB', NULL, NULL)
-- FILTER IS UNIQUE IN #HORI TABLE
One way to achieve this is to write two update statements. After the updates, the result below is my desired output.
UPDATE H SET CHANNEL1 = VALUE
FROM #Hori H JOIN #Vert V ON V.FILTER = H.FILTER
WHERE V.CHANNEL = 1  -- updates only CHANNEL1

UPDATE H SET CHANNEL2 = VALUE
FROM #Hori H JOIN #Vert V ON V.FILTER = H.FILTER
WHERE V.CHANNEL = 2  -- updates only CHANNEL2

SELECT * FROM #Hori  -- this is the desired output
The number of channels in #Vert keeps growing (1, 2, 3, 4, ...), and with it the columns CHANNEL3, CHANNEL4, ... in #Hori, so I cannot keep writing more and more update statements. One other way is to pivot the #Vert table and do a single update into the #Hori table.
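A hedged sketch of that dynamic version, assuming the matching CHANNELn columns already exist in #Hori (STRING_AGG needs SQL Server 2017+; FOR XML PATH can replace it on older versions):

DECLARE @cols NVARCHAR(MAX), @set NVARCHAR(MAX), @sql NVARCHAR(MAX);

SELECT @cols = STRING_AGG(QUOTENAME('CHANNEL' + CAST(CHANNEL AS VARCHAR(3))), ', ')
                   WITHIN GROUP (ORDER BY CHANNEL),
       @set  = STRING_AGG('H.' + QUOTENAME('CHANNEL' + CAST(CHANNEL AS VARCHAR(3)))
                          + ' = P.' + QUOTENAME('CHANNEL' + CAST(CHANNEL AS VARCHAR(3))), ', ')
                   WITHIN GROUP (ORDER BY CHANNEL)
FROM (SELECT DISTINCT CHANNEL FROM #Vert) AS c;

SET @sql = N'
UPDATE H
SET ' + @set + N'
FROM #Hori AS H
JOIN (SELECT FILTER, ' + @cols + N'
      FROM (SELECT FILTER, ''CHANNEL'' + CAST(CHANNEL AS VARCHAR(3)) AS CH, VALUE FROM #Vert) AS v
      PIVOT (MAX(VALUE) FOR CH IN (' + @cols + N')) AS pvt) AS P
  ON P.FILTER = H.FILTER;';

EXEC sys.sp_executesql @sql;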
I'm quite new to SQL. I'm able to extract the info that I need, but only into a result of one row, like:
Order header                          | Order details
ID | Customer name | Customer address | Product number | Product name | Quantity | Price | Product number | Product name | Quantity | Price
2  | Andy          | Andy's way 2     | 24             | Glue         | 3        | 35    | 39             | Oyster       | 2        | 9
I'm fairly new to SQL and am just setting up a Windows 8 app using an Azure SQL server. The issue I have is looking up a part number supersession and getting the latest number. One part number can have multiple supersessions (i.e. RTC5756 > STC8572 > STC3765 > STC9150 > STC9191 > SFP500160). The data I am supplied monthly has both the superseded items and the supersession information in both columns and is not easy to decipher - for example:
The newest part number is kept in a separate table - called "source" - which in this instance is SFP500160. I need access to the latest part number, but also to the part's previous numbers, because some people may still stock the part under an old number and need to search by it. Is there an easy and efficient way of doing both the lookup for the supersessions and a join on the two tables, to minimize the queries on the database?
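Since the supersession data is essentially a linked list, a hedged sketch using a recursive CTE can walk back from each latest number to every older one. The names Source(PartNumber) and Supersessions(OldPart, NewPart) are assumptions, since the real schema isn't shown.

;WITH Chain AS
(
    SELECT s.PartNumber AS LatestPart, s.PartNumber AS AnyPart
    FROM Source AS s                    -- "source" table holding the newest numbers (column name assumed)
    UNION ALL
    SELECT c.LatestPart, su.OldPart
    FROM Chain AS c
    JOIN Supersessions AS su            -- assumed supersession table: OldPart superseded by NewPart
      ON su.NewPart = c.AnyPart
)
SELECT LatestPart, AnyPart
FROM Chain
OPTION (MAXRECURSION 100);
-- Every row maps an old (or current) part number back to the latest one, so a search on any
-- historic number can be resolved with a single join instead of repeated lookups.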
I have created a single full-text index on col2 and col3. Suppose I want to search for col2 = 'engine' and col3 = 'toyota'; I write the query as:
SELECT TBL.col2, TBL.col3
FROM TBL
INNER JOIN CONTAINSTABLE(TBL, col2, 'engine') TBL1 ON TBL.col1 = TBL1.[key]
INNER JOIN CONTAINSTABLE(TBL, col3, 'toyota') TBL2 ON TBL.col1 = TBL2.[key]
Everything works well when the database is small, but now I have 20 million records. As an example, there are 5 million records with col2 = 'engine' and only 1 record with col3 = 'toyota', and it takes substantial time to find that 1 record.
I was thinking I could address this issue by merging both columns into a single column, but I cannot figure out what format to store it in so that a query can still extract the correct information. For example, I was thinking of concatenating both fields like col4 = ABengineBA + ABBToyotaBBA and searching with:
SELECT TBL.col4
FROM TBL
INNER JOIN CONTAINSTABLE(TBL, col4, '"ABengineBA" AND "ABBToyotaBBA"') TBL1 ON TBL.col1 = TBL1.[key]

Result = 1 row
But it doesn't work in the following scenario, where col4 = ABengineBA + ABBCorola ToyotaBBA:
SELECT TBL.col4
FROM TBL
INNER JOIN CONTAINSTABLE(TBL, col4, '"ABengineBA" AND "ABB*ToyotaBBA"') TBL1 ON TBL.col1 = TBL1.[key]
Result = 0 rows. Any idea how I can write the second query to get a result?
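For what it's worth, CONTAINS/CONTAINSTABLE only supports a wildcard as a prefix term at the end of a word ("word*"); a wildcard in the middle of a term, as in "ABB*ToyotaBBA", is not expanded, which is why the second query returns nothing. A hedged sketch of a marker scheme that keeps working is to prefix every word of each column rather than wrapping the whole value, so a plain prefix term still matches. The col4 contents shown below are assumptions about how the ETL would build the column.

-- e.g. col2 = 'engine'        is stored as 'ABengine'
--      col3 = 'Corola Toyota' is stored as 'ABBCorola ABBToyota'
--      col4 = 'ABengine ABBCorola ABBToyota'
SELECT TBL.col4
FROM TBL
INNER JOIN CONTAINSTABLE(TBL, col4, '"ABengine*" AND "ABBToyota*"') TBL1
        ON TBL.col1 = TBL1.[key];
-- Both predicates are resolved inside one full-text lookup, so the 5-million-row
-- intermediate result from the separate 'engine' search never has to be materialised.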
I'm looking to return multiple entries from a time span. I have a date, a start time, an end time and a duration, and I need the individual start times returned as a list. It's fine if temp tables are needed - I have that clearance.
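A hedged sketch with a recursive CTE, assuming the data sits in a table like #Spans(SlotDate date, StartTime time, EndTime time, DurationMinutes int): each recursion step adds one duration to the previous start time until the end time is reached.

;WITH Slots AS
(
    SELECT SlotDate, StartTime AS SlotStart, EndTime, DurationMinutes
    FROM #Spans
    UNION ALL
    SELECT SlotDate, DATEADD(MINUTE, DurationMinutes, SlotStart), EndTime, DurationMinutes
    FROM Slots
    WHERE DATEADD(MINUTE, DurationMinutes, SlotStart) < EndTime
)
SELECT SlotDate, SlotStart
FROM Slots
ORDER BY SlotDate, SlotStart
OPTION (MAXRECURSION 0);   -- lift the default 100-level cap for long spans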
The following works if I specify one student (PlanDetailUID) when running the query. If I try to specify multiple students (PlanDetailUID), I get an error that the variable cannot take multiple entries. I assume I would need to replace the variables in PART 2 with case statements / using SELECT everywhere to get around the issue - or is there a better way?
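One hedged alternative to rewriting PART 2 as case statements is to hold the student keys in a table variable rather than a scalar variable and filter with IN or a join. The table name PlanDetail and the INT key type are assumptions, since the real query isn't shown.

DECLARE @Students TABLE (PlanDetailUID INT PRIMARY KEY);
INSERT INTO @Students (PlanDetailUID) VALUES (1001), (1002), (1003);   -- example IDs

SELECT pd.*
FROM PlanDetail AS pd
WHERE pd.PlanDetailUID IN (SELECT PlanDetailUID FROM @Students);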
Currently I have a column with multiple postcodes in one value which are split with the “/” character along with the corresponding location data. What I need to do is split these postcode values into separate rows while keeping their corresponding location data.
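A hedged sketch, assuming a table like Locations(LocationName, Postcodes) where Postcodes holds the '/'-separated values (STRING_SPLIT needs SQL Server 2016+; an XML or numbers-table splitter does the same job on older versions):

SELECT l.LocationName, LTRIM(RTRIM(s.value)) AS Postcode
FROM Locations AS l
CROSS APPLY STRING_SPLIT(l.Postcodes, '/') AS s;
-- Each postcode comes back on its own row, still carrying the location columns from l.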
Suppose that I have a table with the following values:

Table1
Col1    Col2      Col3
-----   -------   -----
P3456   C935876   T675
P5555   C678909   T8888

And the outcome that I want is:

CombinedValues (ColumnName)
---------------------------
P3456 - C935876 - T675
P5555 - C678909 - T8888

where the CombinedValues column contains the values of columns 1, 2 & 3 separated by '-'. So is there any way to achieve this?
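A minimal sketch against the sample table above:

SELECT Col1 + ' - ' + Col2 + ' - ' + Col3 AS CombinedValues
FROM Table1;
-- CONCAT(Col1, ' - ', Col2, ' - ', Col3) does the same from SQL Server 2012 onwards
-- and additionally treats NULLs as empty strings.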
I'm creating a web-based NT RAS report site and am looking for the most efficient way to import the data from the NT Event Log into SQL2k. I'm using the 'dumpel' utility from the resource kit and all is fine except the 10th column - the message detail:
"The user DOMAIN\userid connected on port Mdm15 on 08/23/2002 at 07:25am and disconnected on 08/23/2002 at 07:27am. The user was active for 2 minutes 23 seconds. 78809 bytes were sent and 50675 bytes were received. The port speed was 49300."
I need to parse this one long text string into 6 distinct columns: userID, port, duration, bytes_xmt, bytes_rcv and portspeed. After a quick review of the rowsets, the strings seem to hold a consistent output ... no real variances I can see.
I've dabbled with views but am facing a small performance issue that could get bigger: the SQL server not only has to run the text file import package, but also the view to format the text dump into a workable dataset, and then my report code bangs over 30 queries against the final dataset. It already takes our SQL2k server over 3 minutes to parse about 20,000 rows, and the server's a beast (dual 1.8 P4 CPUs, 3 GB RAM, RAID, etc.).
What I think would work best is to abandon the view (performance will only get worse as the row count increases) and instead INSERT the rows into one table.
Any ideas, anyone? Are there any good scripts out there that can help me parse the long text string quicker than using substring and replace functions?
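A hedged sketch of the CHARINDEX/SUBSTRING route, assuming the dumped messages are bulk-inserted into a staging table #RawLog(msg VARCHAR(4000)); the same pattern extends to duration, bytes_xmt and bytes_rcv.

SELECT
    SUBSTRING(msg, CHARINDEX('The user ', msg) + 9,
              CHARINDEX(' connected', msg) - CHARINDEX('The user ', msg) - 9)                  AS userID,
    SUBSTRING(msg, CHARINDEX(' port ', msg) + 6,
              CHARINDEX(' ', msg, CHARINDEX(' port ', msg) + 6) - CHARINDEX(' port ', msg) - 6) AS port,
    SUBSTRING(msg, CHARINDEX('port speed was ', msg) + 15,
              CHARINDEX('.', msg, CHARINDEX('port speed was ', msg)) - CHARINDEX('port speed was ', msg) - 15) AS portspeed
FROM #RawLog;
-- Set-based string slicing like this avoids row-by-row processing and can feed an
-- INSERT ... SELECT straight into the final reporting table.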
Hi, I figured out where I was going wrong in my post just prior, but is there ANY way I can assign several variables and then use them in an UPDATE statement? For example (this does not work):

ALTER PROCEDURE dbo.UpdateXmlWF (@varWO varchar(50))
AS
DECLARE @varCust VARCHAR(50)
SELECT @varCust = (SELECT Customer FROM tblWorkOrders WHERE WorkOrder = @varWO)
DECLARE @varAssy VARCHAR(50)
SELECT @varAssy = (SELECT Assy FROM tblWorkOrders WHERE WorkOrder = @varWO)
-- UPDATE statement here using declared variables...

I can set one @variable but not multiple. Any clues? Kinda new to this.
Thanks,
Kathy
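For what it's worth, a single SELECT can assign several variables at once, which sidesteps the repeated subqueries; a minimal sketch using the names from the post, meant to run inside dbo.UpdateXmlWF where @varWO is the parameter:

DECLARE @varCust VARCHAR(50), @varAssy VARCHAR(50);

SELECT @varCust = Customer,
       @varAssy = Assy
FROM tblWorkOrders
WHERE WorkOrder = @varWO;

-- UPDATE statement here using @varCust and @varAssy ...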
I created a query that returns the following result, but I expect the structure to be: care_nbr, cust_nbr, legal_name, then for address_type = physical address its addr_line_1 and addr_line_2, and for address_type = primary address its addr_line_1 and addr_line_2. That means I only need the primary and physical addresses, and I expect them to show in one row per care_nbr. How can I do that?
I have a table that has multiple postal codes in one of the columns. Those have to be split up, one per line, and stored in another table. The zip codes are comma separated. Is there a function that can do this...?
Example data in ZipCodeTable. (Name and ZipCode are 2 columns in a table)
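A hedged sketch against that ZipCodeTable(Name, ZipCode) layout (STRING_SPLIT needs SQL Server 2016+; the destination table name is hypothetical):

INSERT INTO ZipCodeSplit (Name, ZipCode)        -- hypothetical destination table
SELECT t.Name, LTRIM(RTRIM(s.value))
FROM ZipCodeTable AS t
CROSS APPLY STRING_SPLIT(t.ZipCode, ',') AS s;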
So we have new servers that are going to be installed with SQL 2012 and I'm debating the wisdom of splitting tempdb with multiple files.
I know it's a myth that performance automatically improves if you split it into a number of files based on processors, but I'm debating the wisdom of putting a file on each of my data / log file drives.
For instance, I have a server with a C: drive (OS), D: drive (Data for system DBs and install of programs - 458 GB), an F: drive for user DB data files (767 GB), and a J: drive for log files (255 GB).
Obviously no files are going on C:. I'm debating whether we should even leave the system DBs on the D: drive, given that on our current 2k8 servers we end up with Memory.dmp files overflowing the D: drives, as well as .cabs and other install / update files that tend to collect on that drive over the years.
But if we leave the system DBs on D:, I'm wondering if adding a second tempdb file to F: and a third to J: will improve query performance or not.
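For reference, adding the extra files is a one-off command per file; the file names, paths and sizes below are placeholders, and keeping every tempdb data file the same size and growth setting matters more than the exact count.

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'F:\TempDB\tempdev2.ndf', SIZE = 4GB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'J:\TempDB\tempdev3.ndf', SIZE = 4GB, FILEGROWTH = 512MB);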
All the columns are separated with a "|", but the number of columns is not fixed: lines 1 & 2 have 4 columns, while line 3 has 7 columns.
I concatenate multiple rows from one table into multiple columns like this:
--Create Table
CREATE TABLE [Person].[Person_1](
    [BusinessEntityID] [int] NOT NULL,
    [PersonType] [nchar](2) NOT NULL,
    [FirstName] [varchar](100) NOT NULL,
    CONSTRAINT [PK_Person_BusinessEntityID_1] PRIMARY KEY CLUSTERED
[Code] ....
This works very well, but I want to concatenate more rows with different [PersonType] values in different columns, and I don't like the overhead of using the same table in every subquery ([Person_1]). Is there a more elegant way to do this, without using a temp table or something else?
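One hedged alternative is conditional aggregation, which reads Person_1 once and gives each PersonType its own concatenated column. The type codes and output column names below are assumptions, and STRING_AGG needs SQL Server 2017+ (FOR XML PATH on older versions).

SELECT
    STRING_AGG(CASE WHEN PersonType = 'EM' THEN FirstName END, ', ') AS Employees,
    STRING_AGG(CASE WHEN PersonType = 'SC' THEN FirstName END, ', ') AS StoreContacts
FROM [Person].[Person_1];
-- STRING_AGG skips the NULLs produced by the non-matching CASE branches,
-- so each column only contains names of its own PersonType.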
Hi, I want to create a table as follows:

if exists (select * from dbo.sysobjects where id = object_id(N'[Indexes]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [Indexes]
GO
Create table Indexes (indexname Varchar(100), index_Description Varchar(100), index_keys Varchar(100))
GO
INSERT INTO Indexes EXEC sp_helpindex 'SDM_Fact_Order_Detail'
GO

This gives me a table like (Northwind):

IX_Auto_SDM_Fact_FK_Shipped_Date               nonclustered located on SAMIS_SDM_Index   FK_Shipped_Date
IX_Auto_SDM_Fact_Order_Detail_FK_Insert_Date   clustered located on SAMIS_SDM_Data1      FK_Insert_Date, FK_Insert_Time

As you see, sp_helpindex gives me a comma separated field. I want to split the third column (FK_Insert_Date, FK_Insert_Time) into an extra row, like this:

IX_Auto_SDM_Fact_FK_Shipped_Date               FK_Shipped_Date
IX_Auto_SDM_Fact_Order_Detail_FK_Insert_Date   FK_Insert_Date
IX_Auto_SDM_Fact_Order_Detail_FK_Insert_Date   FK_Insert_Time

Can anyone help me with this?
Thanx
Hennie
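A hedged sketch of the split itself, using STRING_SPLIT (SQL Server 2016+; on older versions an XML- or numbers-table splitter can be swapped in):

SELECT i.indexname, LTRIM(RTRIM(s.value)) AS index_key
FROM Indexes AS i
CROSS APPLY STRING_SPLIT(i.index_keys, ',') AS s;
-- One output row per index key, repeating the index name for multi-column indexes.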
I am working on SQL data that has a list of product names, shipment types and the count of shipments. The values are listed as rows in the database, in the format below. I want to transpose only the shipment type and the corresponding count for each product name into the format below. I tried to do this but I am not able to achieve the correct format.
Right now I have to do something like this and it is time consuming every time I have to query a specific table...
SELECT lots_of_columns
FROM table
WHERE (column5 = '1' OR column6 = '1' OR column7 = '1' OR column8 = '1'
    OR column9 = '1' OR column10 = '1' OR column11 = '1' OR column12 = '1')
  AND other_query_criteria_here
Typing out the OR statement gets long, time consuming and prone to errors because that first where line with all the ORs can sometimes have 20+ ORs in it. As some insight, the columns are text columns, sometimes they have data, sometimes they are NULL. Sometimes they have the same data (i.e., column5 and column6 and column12 could both have '1' as values).
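A hedged shortcut: '1' IN (column5, column6, ...) expands to exactly that chain of equality ORs (NULL columns still simply fail the comparison), so the predicate stays one short line however many columns are involved. The placeholder names are the ones from the query above.

SELECT lots_of_columns
FROM [table]   -- placeholder names from the query above
WHERE '1' IN (column5, column6, column7, column8, column9, column10, column11, column12)
  AND other_query_criteria_here;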
Is it possible to assign multiple columns from a SQL query to one variable? In the query below I have different variables (email, fname, month_last_taken) being assigned from different columns of the same query. Can I pass all the columns to one variable only and then extract each column out of that variable later? That way I would only need to write the query once in the complete block.
DECLARE @email varchar(500) ,@intFlag INT ,@INTFLAGMAX int ,@TABLE_NAME VARCHAR(100)
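A hedged sketch: a scalar variable cannot carry several columns, but a table variable can, so the query is written once and the individual values are read back out of it afterwards (the source table name and the column types are assumptions):

DECLARE @Result TABLE (email VARCHAR(500), fname VARCHAR(100), month_last_taken VARCHAR(20));

INSERT INTO @Result (email, fname, month_last_taken)
SELECT email, fname, month_last_taken          -- the original query, written only once
FROM dbo.SomeSourceTable;                      -- hypothetical source

SELECT @email = email FROM @Result;            -- individual columns can still be pulled into scalars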
I am trying to insert values into a single table with four columns from 4 different sources. Is it possible to run these 4 insertions in parallel? All these insertions are independent of each other.