I have a history table with the following values
CREATE TABLE History (SnapShotDate DATETIME, UID VARCHAR(10), DUEDATE DATETIME)
INSERT INTO History VALUES ('03-23-2015','PT-01','2015-04-22')
INSERT INTO History VALUES ('03-30-2015','PT-01','2015-04-20')
INSERT INTO History VALUES ('04-06-2015','PT-01','2015-06-30')
[Code] ....
I need output in the below format: for any given UID, the previous due date and the most recently changed due date. The result should be:
OUTPUT
UID    PreviousDueDate  CurrentDueDate
--------------------------------------
PT-01  2015-04-20       2015-06-30
PT-02  2015-04-22       2015-04-22
PT-03  2015-04-18       2015-04-22
PT-04  2015-04-22       2015-04-18
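A minimal sketch of one way to get this (assuming SQL Server 2005 or later), using the table from the DDL above: rank each UID's snapshots by SnapShotDate descending and pair the latest row with the one before it. This compares the two most recent snapshots; if "previous" should instead mean the value before the last actual change, an extra filter on differing DUEDATE values would be needed.

;WITH Ranked AS (
    SELECT  UID, DUEDATE,
            ROW_NUMBER() OVER (PARTITION BY UID ORDER BY SnapShotDate DESC) AS rn
    FROM    History
)
SELECT  cur.UID,
        ISNULL(prev.DUEDATE, cur.DUEDATE) AS PreviousDueDate,  -- fall back when only one snapshot exists
        cur.DUEDATE AS CurrentDueDate
FROM    Ranked cur
LEFT JOIN Ranked prev
       ON prev.UID = cur.UID
      AND prev.rn  = 2
WHERE   cur.rn = 1;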
I have a requirement where I have to concatenate fields based on the sequence given in another table, respecting the field lengths defined there. E.g.:
Input 1:
Table A: (below are the fields and their respective values, not all fields will have values)
-----------
KSCHL - ZIC0 (KEY)
KOTABNR - 521 (KEY)
MATNR
KUNNR-->1234567890
LIFNR
VKORG-->a234
PRCTR
KUNRE-->4355325363
LIFRE-->88390234
PRODH
Table B: It contains the same fields as Table A and holds the sequence number in which the concatenation should happen. The length field (LEN) contains the corresponding field lengths (pipe-delimited) to be used in the concatenation.
KSCHL - ZIC0 (KEY)
KOTABNR - 521 (KEY)
MATNR
KUNNR--> 1
LIFNR
VKORG-->3
PRCTR
KUNRE-->2
LIFRE -->4
PRODH
LEN --> 10|10|4|10
Expected Result:
---------------------
KSCHL - ZIC0 (KEY)
KOTABNR - 521 (KEY)
MATNR
KUNNR-->1234567890
LIFNR
VKORG-->a234
PRCTR
KUNRE-->4355325363
LIFRE-->0088390234
PRODH
Concat_String --> 12345678904355325363a2340088390234
Note: If the field length given in Table B doesn't match the actual size of the field, then the field should be left-padded with spaces during concatenation. E.g., in the above example, say the LIFRE value = 88390234 (len = 8); then after concatenation the value should look like below:
12345678904355325363a234  88390234
Note: The fields are not constant. I have around 40 fields like that, in which any combination of fields is possible. E.g.:
KSCHL - ZIC0 (KEY)
KOTABNR - 521 (KEY)
MATNR -->2
KUNNR--> 4
LIFNR
VKORG-->1
PRCTR
KUNRE
LIFRE --> 3
PRODH
I am not sure which field has the value 1, 2, etc., or how many fields form the combination. It can sometimes be 3 of the 40 fields, or it can be 10 of the 40. I have to get those values dynamically and concatenate them.
I can have any number of fields in the concatenation; the above example just uses 4. The solution should be dynamic enough to handle any number of fields.
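A rough sketch of one approach, assuming the field/value pairs and the sequence/length configuration have first been unpivoted into two key/value staging tables and the pipe-delimited LEN string split into per-field lengths (the names #FieldValues and #FieldConfig are mine, not from the original tables): left-pad each value with spaces to its configured length and concatenate in sequence order. Swap the pad character for '0' if zero-padding, as in the LIFRE example above, is wanted instead. Needs SQL Server 2008+ for the multi-row VALUES.

CREATE TABLE #FieldValues (FieldName VARCHAR(30), FieldValue VARCHAR(50));
CREATE TABLE #FieldConfig (FieldName VARCHAR(30), Seq INT, Len INT);

INSERT INTO #FieldValues VALUES ('KUNNR','1234567890'), ('VKORG','a234'), ('KUNRE','4355325363'), ('LIFRE','88390234');
INSERT INTO #FieldConfig VALUES ('KUNNR',1,10), ('KUNRE',2,10), ('VKORG',3,4), ('LIFRE',4,10);

-- Left-pad each value with spaces to its configured length, then concatenate in sequence order.
SELECT Concat_String =
    (SELECT RIGHT(REPLICATE(' ', c.Len) + v.FieldValue, c.Len)
     FROM   #FieldConfig c
     JOIN   #FieldValues v ON v.FieldName = c.FieldName
     ORDER BY c.Seq
     FOR XML PATH(''), TYPE).value('.', 'VARCHAR(MAX)');

Because the configuration is data rather than column names, the same query handles 3 fields or 10 of the 40 without change; only the rows in the staging tables differ.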
I have 2 tables in a SQL Server database:
Table A(Value), which contains some strings in the column "Value"
Table B(Key,Text), which also contains strings (in the column "Text")
Now I want to find all rows in B that contain at least one string from A, and create a result table X with all matching rows from B. X should contain the matched keys and all matched substrings for each key (separated by commas).
The solution I am looking for may not use a cursor and may not use the CONTAINS statement (the full-text search feature).
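A possible sketch without a cursor or CONTAINS, using LIKE for the substring test and FOR XML PATH to build the comma-separated list (assuming the columns are A.Value and B.[Key], B.[Text] as described):

SELECT  b.[Key],
        STUFF((SELECT ',' + a.Value
               FROM   A a
               WHERE  b.[Text] LIKE '%' + a.Value + '%'
               FOR XML PATH('')), 1, 1, '') AS MatchedValues
INTO    X
FROM    B b
WHERE   EXISTS (SELECT 1 FROM A a WHERE b.[Text] LIKE '%' + a.Value + '%');

Note that the leading-wildcard LIKE forces a scan of B for every string in A, so this can be slow on large tables, but it meets the no-cursor, no-full-text constraint.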
What issues will I encounter if I change the MSSQLSERVER password?
My issue is that we have two database mirroring servers, and when the first one fails, the mirror needs to take over under the same name.
I'll try to explain: SD01 is the principal and SD02 is the mirror. If SD01 fails, SD02 should take over and be addressed as SD01. The instances are in the same domain, but not on the same computer.
I have a table called Table1 with five fields: Tableid, Processigndate, Amount, remainingCollectonCount and Frequency. In total I have more than 5 lakh (500,000) records.
Now I need to fill another table called FutureCashFlow, taking the records from Table1. It will also have five columns, like FutureCashflowid, Table1id, Processigndate, Amount.
The condition is that if remainingCollectonCount = 6 and Frequency = 12, then there will be 6 entries in the FutureCashFlow table, with the processing date advanced by 1 month for each entry.
For example Table1
Tableid  Processigndate  Amount  remainingCollectonCount  Frequency
1        2014-02-15      48      8                        12
In the FutureCashFlow table, the processing date column will be shown in the following way:
Processigndate
2014-03-15
2014-04-15
2014-05-15
2014-06-15
2014-07-15
2014-09-15
2014-10-15
I do not want to use a cursor.
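A set-based sketch with no cursor, assuming FutureCashflowid is an identity column, remainingCollectonCount drives the number of generated rows, and the Frequency = 12 filter applies as stated (column names are taken from the description above):

;WITH Numbers AS (
    -- ad-hoc numbers table; 1000 comfortably covers any remainingCollectonCount
    SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM   sys.all_objects
)
INSERT INTO FutureCashFlow (Table1id, Processigndate, Amount)
SELECT  t.Tableid,
        DATEADD(MONTH, n.n, t.Processigndate),   -- each generated row is one month later
        t.Amount
FROM    Table1 t
JOIN    Numbers n ON n.n <= t.remainingCollectonCount
WHERE   t.Frequency = 12;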
I have a scenario where I need to get the starting and ending date/time based on a criterion. The criterion is: the start row always begins with NS or GS in the Data column and the end row begins with GX, so I need the NS or GS row to be my start (ordered by ts ascending) and the matching GX row to be my end, displayed in the same result row.
Create Table Test
(Tsq INT IDENTITY (1,1),
Data Varchar (150),
ts datetime,
Tpkt_type int)
insert into test values ('GS,000020,000021,000022,000023','2013-11-13 09:47:35.963','2')
[code]....
Expected Output
Data (start) | ts as starttime | tpkt_type | Data (end) | ts as endtime | tpkt_type
'GS,000020,000021,000022,000023','2013-11-13 09:47:35.963','2' 'GX,1,0000000000000000000000000','2013-11-13 09:47:37.007','4'
'GS,000020,000021,000022,000023','2013-11-13 09:50:25.987','2', 'GX,1,0000000000000000000000000','2013-11-13 09:50:40.920','4'
'GS,000020,000021,000022,000023','2013-11-13 09:51:28.330','2', 'GX,1,0000000000000000000000000','2013-11-13 09:51:43.257','4'
'NS,000020,000021,000022,000023','2013-12-17 16:51:09.063','18', 'GX,1,0000000000000000000000000','2013-12-17 16:51:15.257','4'
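A sketch of one way to pair them, using CROSS APPLY to find, for each NS/GS row, the next GX row by ts (table and columns as in the CREATE TABLE above; this assumes every start row has a later GX row):

SELECT  s.Data      AS StartData,
        s.ts        AS StartTime,
        s.Tpkt_type AS StartTpktType,
        e.Data      AS EndData,
        e.ts        AS EndTime,
        e.Tpkt_type AS EndTpktType
FROM    Test s
CROSS APPLY (SELECT TOP (1) t.Data, t.ts, t.Tpkt_type
             FROM   Test t
             WHERE  t.Data LIKE 'GX%'
               AND  t.ts > s.ts
             ORDER BY t.ts) e
WHERE   s.Data LIKE 'GS%' OR s.Data LIKE 'NS%'
ORDER BY s.ts;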
I have some SQL experience, but nothing past basic commands. I'm trying to take some data held by an application and use it as a CSV import into another application. I have two tables from the application; one holds references to values stored in the other. The first table holds details about a person:
field1=name field2=age field3=country
Joe,50,1
Country is held as a number; there is another table that holds all the countries:
field1=id field2=description
1,USA
2,France
3,Germany
I want to do a lookup where it returns:
Joe,50,USA
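A minimal sketch, using made-up table names Person and Country since the actual table names aren't given:

SELECT  p.name,
        p.age,
        c.description AS country
FROM    Person  p
JOIN    Country c ON c.id = p.country;

Use a LEFT JOIN instead if some people might have no matching country row and you still want them in the CSV.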
I have several tables with a varbinary column in a database. They have names like CSB_BLOB or OBJECT_BLOB. I am having only intermittent success getting the data out.
For example, this query returns readable text from this data:
0x46726F6D3A20226465616E6E6167726.....etc --data as stored in the column
SELECT CAST(CSB_BLOB AS VARCHAR(MAX)) AS 'Message' FROM OBJECT_BLOB
However, this column gives the following query results.
0x0001000000FFFFFFFF01000000000000000C....etc. --data as stored in column
--this query returns an empty result
SELECT CAST(CSB_BLOB AS VARCHAR(MAX)) AS 'Message' FROM CSB_STATUS_LOG
--this query returns no change???
SELECT CONVERT(VARCHAR(MAX), CONVERT(VARBINARY(MAX), CSB_BLOB, 2), 2) FROM CSB_STATUS_LOG
0001000000FFFFFFFF01000000000000000C....etc
Obviously there is a difference between the two but I am not educated enough to interpret this difference. What do I need to learn / read so I can look at the data in one of these BLOB columns and know how to convert it to something meaningful?
Something like:
1. Try to cast as varchar to see if it is text.
2. Turn into a byte array and see if it is a jpg
3. Turn into a byte array and see if it is a pdf
4. Convert it to hex and then cast as varchar
5. etc....
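There is no single conversion that works for every BLOB; the usual trick is to inspect the first few bytes (the "magic number") to guess the format. The 0x0001000000FFFFFFFF prefix in the second column, for instance, looks like a serialized .NET object (BinaryFormatter), which is why casting it to VARCHAR yields nothing readable. A rough sketch of that kind of check, using the column and table from the question (the signatures shown are well-known file headers):

SELECT  CSB_BLOB,
        CASE
            WHEN SUBSTRING(CSB_BLOB, 1, 3) = 0xFFD8FF             THEN 'JPEG image'
            WHEN SUBSTRING(CSB_BLOB, 1, 5) = 0x255044462D         THEN 'PDF document (%PDF-)'
            WHEN SUBSTRING(CSB_BLOB, 1, 9) = 0x0001000000FFFFFFFF THEN 'Serialized .NET object (BinaryFormatter)'
            ELSE 'Unknown - try CAST(CSB_BLOB AS VARCHAR(MAX)) to check for plain text'
        END AS GuessedFormat
FROM    CSB_STATUS_LOG;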
Below is the query for my procedure
ALTER PROC [dbo].[sp_GetInvitationStatusTest]
(
@invited_by NVARCHAR (50)
)
AS
BEGIN
-- SELECT * FROM dbo.Merck_Acronym_Invitations WHERE invited_by=@invited_by
select distinct t1.invited_isid as 'ISID', t1.invited_name as 'NAME',t1.invitation_status as 'STATUS',
[Code] ....
If you look at the WHERE clause, I have invited_by. I get the desired output if I provide just one name in the execution, such as exec [dbo].[sp_GetInvitationStatusTest] 'marfilj'. But my requirement is to make the procedure work with more than one input value, such as exec [dbo].[sp_GetInvitationStatusTest] 'marfilj','sujith'. Now I should get the output for 2 people, but if I run this I receive the following error: Procedure or function sp_GetInvitationStatusTest has too many arguments specified.
How do I make my procedure work with more than one input value?
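SQL Server won't accept more positional arguments than the procedure declares, so one option (a sketch, assuming SQL Server 2008 or later) is to accept a table-valued parameter; another common option is a single comma-delimited string that the procedure splits. The TVP version is shown below, with the body shortened to the columns listed above since the rest of the original query was elided:

CREATE TYPE dbo.NameList AS TABLE (invited_by NVARCHAR(50));
GO
ALTER PROC [dbo].[sp_GetInvitationStatusTest]
(
    @invited_by dbo.NameList READONLY
)
AS
BEGIN
    SELECT DISTINCT t1.invited_isid      AS 'ISID',
                    t1.invited_name      AS 'NAME',
                    t1.invitation_status AS 'STATUS'
    FROM   dbo.Merck_Acronym_Invitations t1
    JOIN   @invited_by n ON n.invited_by = t1.invited_by;
END
GO
-- Usage:
DECLARE @names dbo.NameList;
INSERT INTO @names VALUES ('marfilj'), ('sujith');
EXEC dbo.sp_GetInvitationStatusTest @names;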
We have a pre-existing SQL 2012 cluster that emails us when a new database is added. I am unable to figure out why we get this error any time we add a new database to it, and it also prevents us from adding a new availability group to this cluster.
I also am unable to figure out what profile the error is talking about, as this was set up before me and I am not a DBA.
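If the error refers to a Database Mail profile (an assumption, since the error text isn't quoted), these catalog queries list the profiles that exist and who is allowed to use them, which is usually enough to identify the one the notification job expects:

SELECT  profile_id, name, description
FROM    msdb.dbo.sysmail_profile;

-- Which principals may use which profile, and which profile is their default:
SELECT  pr.name AS profile_name, pp.principal_sid, pp.is_default
FROM    msdb.dbo.sysmail_principalprofile pp
JOIN    msdb.dbo.sysmail_profile pr ON pr.profile_id = pp.profile_id;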
I have the following requirement that I would like some inspiration on. We have a requirement in our database to allow data to be entered that may contain errors; however, the errors are complex enough that a normal FK relationship cannot catch them. How this will work is:
1) The user will be allowed to enter data in the database; the data can span a lot of tables, so we will do it in a transaction.
2) A stored procedure will be executed that, before commit, checks the data (as a dirty, uncommitted read). The stored procedure either passes or fails; if it passes, the data is committed into the database.
3) If the stored procedure reports that the data is not consistent, it is rolled back, and the reason it was rolled back (which will be generated by the stored procedure) should be stored in the database.
What I would like to know is this: is there any way of structuring the code so that it rolls back the dirty changes but commits the "reason why it failed" part? A sketch of one option follows.
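One way to structure it (a sketch, with made-up procedure and log table names): table variables are not undone by ROLLBACK, so the failure reason can be parked in one, the transaction rolled back, and the reason written afterwards.

DECLARE @FailureReason TABLE (Reason NVARCHAR(4000));

BEGIN TRANSACTION;
BEGIN TRY
    -- ... all the data changes across the various tables ...

    -- Validation: the checking procedure is assumed to return one row per problem
    -- and no rows when everything is consistent.
    INSERT INTO @FailureReason (Reason)
    EXEC dbo.usp_ValidateData;

    IF EXISTS (SELECT 1 FROM @FailureReason)
    BEGIN
        ROLLBACK TRANSACTION;                          -- undo the dirty changes
        INSERT INTO dbo.ValidationFailureLog (Reason, LoggedAt)
        SELECT Reason, GETDATE() FROM @FailureReason;  -- the reason survives the rollback
    END
    ELSE
        COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;   -- SQL 2012+; use RAISERROR on older versions
END CATCH;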
How do I read multiple Excel sheets in the same Excel file when each sheet has a different table schema?
Basically I need to load data into tables from these Excel sheets.
I know how to dynamically read multiple Excel sheets in the same Excel file with the same table schema and load them into one table.
But how do I do this dynamically for multiple Excel sheets with different table schemas and load them into different tables?
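One pure T-SQL option (a sketch, not the SSIS Script Task route; the file path and sheet names are placeholders, and it needs the ACE OLE DB provider installed plus 'Ad Hoc Distributed Queries' enabled): give each differently shaped sheet its own SELECT ... INTO, since SELECT * INTO creates the destination table with whatever columns that sheet has.

-- Enable ad hoc distributed queries once (needs sysadmin):
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1; RECONFIGURE;

SELECT *
INTO   dbo.Staging_Sheet1
FROM   OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                  'Excel 12.0;Database=C:\Data\Workbook.xlsx;HDR=YES',
                  'SELECT * FROM [Sheet1$]');

SELECT *
INTO   dbo.Staging_Sheet2
FROM   OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                  'Excel 12.0;Database=C:\Data\Workbook.xlsx;HDR=YES',
                  'SELECT * FROM [Sheet2$]');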
I'm fairly new to SQL. I have a function in Excel that I'm trying to translate to SQL to run my query.
Basically, what I want to do is: if Date1 and Date2 are blank, then display "Date Not Tracked".
Else, if Date1 is blank but Date2 has a value, then pull Date2; otherwise pull Date1.
This is my Excel function:
=IF(AND(A2="",B2=""),"Date Not Tracked",IF(A2="",B2,A2))
A2 = Product.Date1
B2 = Product.Date2
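A sketch of the translated logic as a CASE expression, treating "blank" as NULL (use empty-string checks instead if the columns are varchar and blanks are stored as ''). The CONVERT is needed because all branches must share one data type once the literal string is in the mix:

SELECT  CASE
            WHEN p.Date1 IS NULL AND p.Date2 IS NULL THEN 'Date Not Tracked'
            WHEN p.Date1 IS NULL THEN CONVERT(VARCHAR(10), p.Date2, 120)
            ELSE CONVERT(VARCHAR(10), p.Date1, 120)
        END AS TrackedDate
FROM    Product p;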
We know we can use the lock_deadlock and xml_deadlock_report events to capture deadlock info, but I also want to capture the execution plans for all of the SPIDs in the deadlock graph. How can I output the execution plans to the extended events trace results as well? Is there an action for the execution plan, or a workaround for it? If there is no built-in action for the execution plan, can we add customized info to the extended events results file? For example, when a deadlock-related event happens, can we run a query to get some info and then add that info, along with other info such as sql_text, dbname, etc., to the event trace results file? The reason is that if we also know the execution plans when the deadlock happens, it is useful for tuning the query based on those plans to reduce future deadlocks.
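As far as I know there is no XE action that attaches a showplan to the deadlock event itself; two common workarounds are sketched below. One is to add the query_post_execution_showplan event to the same session (available in SQL 2012+, but it is documented as adding significant overhead, so filter it tightly); the other is to pull cached plans afterwards from the plan cache for the statements named in the deadlock graph. The database name filter is a placeholder.

CREATE EVENT SESSION [DeadlockWithPlans] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
    (ACTION (sqlserver.sql_text, sqlserver.database_name, sqlserver.session_id)),
ADD EVENT sqlserver.query_post_execution_showplan
    (ACTION (sqlserver.sql_text, sqlserver.session_id)
     WHERE (sqlserver.database_name = N'YourDatabase'))   -- placeholder filter to limit overhead
ADD TARGET package0.event_file (SET filename = N'DeadlockWithPlans.xel');

-- Or pull the cached plans afterwards for the statements involved:
SELECT  qs.sql_handle, qp.query_plan
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp;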