Hi there, I've got a Maconomy table that needs to be normalised. I'm fairly new to T-SQL and was hoping I could get advice on this forum. You may want to copy the tables below into Excel to make better sense of them.
I would like to PIVOT/transpose the table below so that the 8 types of HEADER become the columns, but still include the NOTENUMBER (i.e. the table below would give an output of 2 rows).
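For reference, a minimal PIVOT sketch of the usual shape; the table and column names (dbo.MaconomyTable, HEADER, NOTENUMBER, VALUE) and the header values are placeholders, since the actual table isn't shown:

SELECT NOTENUMBER, [Header1], [Header2]              -- list all 8 HEADER values here
FROM (
    SELECT NOTENUMBER, HEADER, [VALUE]
    FROM dbo.MaconomyTable                           -- hypothetical table name
) AS src
PIVOT (
    MAX([VALUE]) FOR HEADER IN ([Header1], [Header2])    -- same 8 values
) AS p;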
I have two queries from two different tables, e.g. ABC and BCD. For table ABC, according to my query, I got 11 records; for table BCD I only got 9 records.
Bottom line: I would like to see only the 11 records from table ABC, including certain data from table BCD, after I join these two tables.
However, no matter what I did, I always got 99 records when I joined.
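For reference: 11 × 9 = 99, which is exactly a cross join, so the join condition is probably missing or always true. If the goal is all 11 ABC rows with BCD data attached where it exists, a LEFT JOIN on the shared key is the usual shape; the key column ID below is an assumption:

SELECT a.*, b.SomeColumn          -- SomeColumn: placeholder for the BCD data you need
FROM ABC AS a
LEFT JOIN BCD AS b
    ON b.ID = a.ID;               -- ID: assumed join key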
The database has approx. 2500 temporary tables. The temp tables match a pattern such as APTMP... I tried deleting the tables in SSMS with the statement: Delete from Information_Schema.tables where substring(table_name,1,5) = 'APTMP'. This returns the error message "Ad hoc updates to system catalogs are not allowed".
What is the correct way to drop a group of tables whose names match a pattern from within SSMS?
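For reference, the catalog views are read-only; the usual approach is to generate DROP TABLE statements from them and execute the script after reviewing it:

DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql += N'DROP TABLE ' + QUOTENAME(TABLE_SCHEMA) + N'.' + QUOTENAME(TABLE_NAME) + N';' + CHAR(13)
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE 'APTMP%'
  AND TABLE_TYPE = 'BASE TABLE';

PRINT @sql;                   -- inspect the generated script first
-- EXEC sp_executesql @sql;   -- then uncomment to run it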
I am trying to insert bulk data into a main table from a staging table in SQL Server 2012. If any error occurs, the whole activity is rolled back. I don't want that to happen. I want to know which records have the problem, and the rest have to be inserted.
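For reference, one cursor-free shape: insert the rows that pass whatever rule is failing, and divert the rest to an error table for review. All names and the NULL check below are assumptions, since the actual constraint isn't shown:

-- good rows go to the main table
INSERT INTO dbo.MainTable (col1, col2)
SELECT s.col1, s.col2
FROM dbo.StagingTable AS s
WHERE s.col1 IS NOT NULL;             -- stand-in for the rule that causes failures

-- bad rows are kept for review instead of aborting the whole batch
INSERT INTO dbo.ErrorRows (col1, col2)
SELECT s.col1, s.col2
FROM dbo.StagingTable AS s
WHERE s.col1 IS NULL;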
Dear All,

I'm attempting to create a query that will transpose repeated fields into a single table structure. Can anyone think of how this can be done, as I'm stumped at the minute? I'd like to do this without having to create a cursor due to the overheads and performance issues associated with cursors. The table may also include additional fields which I'm not interested in.

Serial data is like this:

Ikey            Ival
-----------------------------------------------
RAF_EMAIL       testemail1@hotmail.com
RAF_FIRSTNAME   testFirstName1
RAF_LASTNAME    testLastname1
RAF_EMAIL       testemail2@hotmail.com
RAF_FIRSTNAME   testFirstName2
RAF_LASTNAME    testLastname2
....

Transposed into a table like this:

Email                    Firstname        Lastname
--------------------------------------------------------------------------
testemail1@hotmail.com   testFirstName1   testLastname1
testemail2@hotmail.com   testFirstName2   testLastname2
....

Any help, much appreciated.

Kind Regards,
Tim

-------------------------------------------------------------------------------------
NOTE: these create temporary tables:

DECLARE @XML TABLE
(
    ikey VARCHAR(200),
    ival VARCHAR(1000)
)

INSERT INTO @XML
SELECT 'RAF_EMAIL', 'testemail1@hotmail.com'
UNION ALL SELECT 'RAF_FIRSTNAME', 'testFirstName1'
UNION ALL SELECT 'RAF_LASTNAME', 'testLastname1'
UNION ALL SELECT 'RAF_EMAIL', 'testemail2@hotmail.com'
UNION ALL SELECT 'RAF_FIRSTNAME', 'testFirstName2'
UNION ALL SELECT 'RAF_LASTNAME', 'testLastname2'
UNION ALL SELECT 'FORM_CATEGORY', 'nothing'
UNION ALL SELECT 'NO_DOGS', '1'

DECLARE @RESULTS TABLE
(
    EMAIL VARCHAR(1000),
    FIRSTNAME VARCHAR(1000),
    LASTNAME VARCHAR(1000)
)
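For reference, a cursor-free sketch on SQL Server 2012 or later (the windowed SUM and the VALUES row constructor are newer than this post): add an explicit ordering column, start a new group at each RAF_EMAIL row, and collapse each group with MAX(CASE ...). The seq column is an assumption, since a table variable has no inherent row order:

-- assumes rows arrive in serial order; seq makes that order explicit
DECLARE @t TABLE (seq INT IDENTITY(1,1), ikey VARCHAR(200), ival VARCHAR(1000));

INSERT INTO @t (ikey, ival)
VALUES ('RAF_EMAIL', 'testemail1@hotmail.com'),
       ('RAF_FIRSTNAME', 'testFirstName1'),
       ('RAF_LASTNAME', 'testLastname1'),
       ('RAF_EMAIL', 'testemail2@hotmail.com'),
       ('RAF_FIRSTNAME', 'testFirstName2'),
       ('RAF_LASTNAME', 'testLastname2'),
       ('FORM_CATEGORY', 'nothing'),
       ('NO_DOGS', '1');

SELECT MAX(CASE WHEN ikey = 'RAF_EMAIL'     THEN ival END) AS EMAIL,
       MAX(CASE WHEN ikey = 'RAF_FIRSTNAME' THEN ival END) AS FIRSTNAME,
       MAX(CASE WHEN ikey = 'RAF_LASTNAME'  THEN ival END) AS LASTNAME
FROM (
    -- running count of RAF_EMAIL rows = record number; extra fields filtered out
    SELECT ikey, ival,
           SUM(CASE WHEN ikey = 'RAF_EMAIL' THEN 1 ELSE 0 END)
               OVER (ORDER BY seq ROWS UNBOUNDED PRECEDING) AS grp
    FROM @t
    WHERE ikey IN ('RAF_EMAIL', 'RAF_FIRSTNAME', 'RAF_LASTNAME')
) AS g
GROUP BY grp;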
I have a report that summarizes hospital readmissions. Some months may have only a female or only a male patient that is readmitted, but I want to show both in every month either way.
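For reference, the usual fix is to build the full month-by-gender grid first and LEFT JOIN the readmission counts onto it, so empty combinations still show up with a zero; all names below are assumptions:

SELECT m.MonthStart,
       g.Gender,
       COUNT(r.PatientID) AS Readmissions   -- counts only matching rows, so 0 where none
FROM dbo.ReportMonths AS m                  -- assumed: one row per month
CROSS JOIN (VALUES ('F'), ('M')) AS g (Gender)
LEFT JOIN dbo.Readmissions AS r
    ON r.ReadmitMonth = m.MonthStart
   AND r.Gender = g.Gender
GROUP BY m.MonthStart, g.Gender;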
parent | Name       | Checked | ContactMethod | Check2 | Other
974198 | Employment | true    | Face to Face  | true   | null
974224 | Other      | true    | Face to Face  | true   | skills
974224 | Other      | true    | Contact       | true   | skills
I'd like to pivot on "parent"
In a perfect world I'd like to see output like
974198 | Employment | true | Face to Face | true | null
974224 | Other | true | Face to Face, Collateral Contact | true | skills
If there is more than one Name or ContactMethod for the same parent, they would be strung together with commas.
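For reference, on SQL Server 2017+ STRING_AGG does this directly; on earlier versions the STUFF ... FOR XML PATH trick is the standard workaround, sketched below. The table name dbo.Contacts is a placeholder, and the same subquery pattern works for Name if a parent ever has more than one:

SELECT c.parent,
       MAX(c.Name) AS Name,
       STUFF((SELECT DISTINCT ', ' + c2.ContactMethod
              FROM dbo.Contacts AS c2
              WHERE c2.parent = c.parent
              FOR XML PATH('')), 1, 2, '') AS ContactMethods,   -- strips the leading ', '
       MAX(c.Other) AS Other
FROM dbo.Contacts AS c
GROUP BY c.parent;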
I need to update the Denominator column in one row with the value from the Numerator column in a different row. For example, the last row in the table is:

MeasureID  GroupID  Numerator  Denominator
c010       A        92         NULL
I need to update the Denominator, which is currently NULL, with the value from the Numerator where the MeasureID=c001 and GroupID=A.
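For reference, a self-join update is the usual shape for this; the table name dbo.Measures is a placeholder, since the post doesn't name it:

UPDATE t
SET t.Denominator = s.Numerator
FROM dbo.Measures AS t
JOIN dbo.Measures AS s                      -- the row supplying the value
    ON s.MeasureID = 'c001' AND s.GroupID = 'A'
WHERE t.MeasureID = 'c010'                  -- the row being updated
  AND t.GroupID = 'A'
  AND t.Denominator IS NULL;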
I have XML data passed to the stored proc in the following format, and within the stored proc I am accessing the XML data using the nodes() method.
Here is an example of what I am doing:
DECLARE @Participants XML
SET @Participants = '<ArrayOfEmployees xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <Employees EmpID="1" EmpName="abcd" />
    <Employees EmpID="2" EmpName="efgh" />
</ArrayOfEmployees>'

SELECT Participants.Node.value('@EmpID', 'INT') AS EmployeeID,
       Participants.Node.value('@EmpName', 'VARCHAR(50)') AS EmployeeName
FROM @Participants.nodes('/ArrayOfEmployees/Employees') AS Participants (Node)
I saved the result into a csv file and then truncated the table. Now, I am trying to bulk insert the data into the table. So I used:
BULK INSERT rdb.dbo.scd_event_tab
FROM 'C:\users\sluintel.ctr\desktop\eventtab.csv'
WITH (
    CODEPAGE = 'RAW',
    DATAFILETYPE = 'native',
    FIELDTERMINATOR = ' ',
    KEEPIDENTITY,
    KEEPNULLS
);
GO
However, I get this error:
Msg 4867, Level 16, State 1, Line 1
Bulk load data conversion error (overflow) for row 1, column 1 (JOB_ID).
Msg 4866, Level 16, State 5, Line 1
The bulk load failed. The column is too long in the data file for row 1, column 3. Verify that the field terminator and row terminator are specified correctly.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
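For what it's worth, DATAFILETYPE = 'native' expects a binary file produced by native-mode bcp, while a CSV saved from the results grid is character data, which would produce exactly this kind of conversion/overflow error. A guess at the corrected shape (the terminators are assumptions, since the file isn't shown):

BULK INSERT rdb.dbo.scd_event_tab
FROM 'C:\users\sluintel.ctr\desktop\eventtab.csv'
WITH (
    DATAFILETYPE = 'char',       -- character data, not native binary
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    KEEPIDENTITY,
    KEEPNULLS
);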
I have a query that needs to look up 5 specific records in a table; basically I need to hardcode the key values. Below is my query, which didn't work out.
select BF_ORGN_CD, BF_BDOB_CD, BF_TM_PERD_CD, data
from BF_DATA
WHERE (BF_ORGN_CD, BF_BDOB_CD, BF_TM_PERD_CD) in    ***** i guess this is the wrong query ****
    (('A1', 'B1', 'C1'),
     ('A2', 'B2', 'C2'),
     ('A3', 'B3', 'C3'),
     ('A4', 'B4', 'C4'),
     ('A5', 'B5', 'C5'))
But if I use the query below, it generates more records than just these 5:
select BF_ORGN_CD, BF_BDOB_CD, BF_TM_PERD_CD, data
from BF_DATA
WHERE (BF_ORGN_CD) in ('A1', 'A2', 'A3', 'A4', 'A5')
  and (BF_BDOB_CD) in ('B1', 'B2', 'B3', 'B4', 'B5')
  and (BF_TM_PERD_CD) in ('C1', 'C2', 'C3', 'C4', 'C5')
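For reference, T-SQL doesn't support the multi-column IN syntax, but joining to a VALUES list expresses the same thing and returns only the 5 exact combinations:

SELECT b.BF_ORGN_CD, b.BF_BDOB_CD, b.BF_TM_PERD_CD, b.data
FROM BF_DATA AS b
JOIN (VALUES ('A1', 'B1', 'C1'),
             ('A2', 'B2', 'C2'),
             ('A3', 'B3', 'C3'),
             ('A4', 'B4', 'C4'),
             ('A5', 'B5', 'C5')) AS v (ORGN_CD, BDOB_CD, TM_PERD_CD)
    ON  b.BF_ORGN_CD    = v.ORGN_CD
    AND b.BF_BDOB_CD    = v.BDOB_CD
    AND b.BF_TM_PERD_CD = v.TM_PERD_CD;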
I want to create an XML file with data from my table. I have a question about tags.
SELECT
    -- Root element attributes
    'http://tempuri.org/Form.xsd' AS 'xmlns',
    'http://www.w3.org/2001/XMLSchema-instance' AS 'xmlns:xsd',
    (
        SELECT
            -- Creating a default element
[Code] ....
This is my query. When I use the 'xmlns' namespace, the result is as below:
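For reference, on the tags question: declaring the namespaces with WITH XMLNAMESPACES, rather than selecting them as 'xmlns' pseudo-columns, is the usual approach. A sketch against an assumed dbo.Form table with placeholder columns:

;WITH XMLNAMESPACES (
    DEFAULT 'http://tempuri.org/Form.xsd',
    'http://www.w3.org/2001/XMLSchema-instance' AS xsd
)
SELECT FormId, FormName          -- placeholder columns
FROM dbo.Form
FOR XML PATH('Form'), ROOT('Forms');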
CREATE TABLE [MailBox].[Message](
    [Id] [bigint] IDENTITY(1,1) NOT NULL,
    [SenderId] [bigint] NOT NULL,
    [Message] [nvarchar](max) NOT NULL,
    [SentDate] [datetime] NOT NULL,
    CONSTRAINT [PK_MailBox.Message] PRIMARY KEY CLUSTERED
[Code] ....
I'm building messaging functionality into my application. I'm able to insert a message into the database, and this message then appears inside the other user's inbox. The issue I have is that when I click on this message to view the conversation, I make a call to the following sp, as shown here:
@UserId bigint,
@SenderId bigint
AS
BEGIN
    SET NOCOUNT ON;
[Code] .....
The problem with this is that I'm trying to join to the user photos table to return their profile pictures, but for some reason, even though I have specified IsProfilePic, I get all the photos returned. Instead it should be two photos, one for the @UserId and the other for the @SenderId; it's equivalent to me doing this:
Select * From [User].[User_Photos] where (UserId = 1 or UserId = 2) and IsProfilePic = 1
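Since the sp body is elided, this is only a guess at the usual culprit: when the user filter uses OR, the IsProfilePic condition has to be parenthesized with it (or placed in the JOIN's ON clause), otherwise AND/OR precedence lets every photo through. A sketch of the join with the same parenthesization as the ad-hoc query above:

SELECT m.*, p.*                  -- photo columns would be narrowed in practice
FROM [MailBox].[Message] AS m
JOIN [User].[User_Photos] AS p
    ON (p.UserId = @UserId OR p.UserId = @SenderId)
   AND p.IsProfilePic = 1;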
Production and development servers are on different domains and they do not trust each other. How do I export the data in table t1 from database db1 in production and load it into table t1 in database db1 in development?
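For reference, one common route is bcp with SQL authentication (Windows authentication won't work across untrusted domains): export to a native-format file, copy it across, and import on the dev side. Run from a command prompt; server names and credentials are placeholders:

bcp db1.dbo.t1 out t1.dat -n -S ProdServer -U prod_login -P ********
bcp db1.dbo.t1 in  t1.dat -n -S DevServer  -U dev_login  -P ********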
I want to store images as binary data in a SQL table and compare each one with an image file I am getting. I've tried the approach below but am getting an error:
DROP TABLE #BLOBTest
CREATE TABLE #BLOBTest
(
    TestID int IDENTITY(1,1),
    BLOBName varChar(50),
    BLOBData varBinary(MAX)
);
[Code] ....
Error:
Msg 4861, Level 16, State 1, Line 10
Cannot bulk load because the file "C:\Files\12656.jpg" could not be opened. Operating system error code 3 (failed to retrieve text for this error. Reason: 15105).
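For reference, operating system error code 3 means "path not found", and the path is resolved on the machine running SQL Server, not the client, so the file has to exist there (and be readable by the service account). A sketch of loading and comparing, using the names from the post:

DECLARE @img VARBINARY(MAX);

SELECT @img = BulkColumn
FROM OPENROWSET(BULK 'C:\Files\12656.jpg', SINGLE_BLOB) AS f;

-- varbinary(max) supports direct equality comparison
SELECT TestID, BLOBName
FROM #BLOBTest
WHERE BLOBData = @img;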
When you load data into a new partition of a partitioned table, can it be done online without any downtime? I ask because I have a few tables that are around 250 gigs and more.
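For reference, if the data can be bulk loaded into a staging table with an identical structure (same filegroup, plus a check constraint matching the partition boundary), switching it into the partition is a metadata-only operation, so the big table stays online and the switch is effectively instant. Names below are placeholders:

ALTER TABLE dbo.Stage_2015 SWITCH TO dbo.BigTable PARTITION 5;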
What I am trying to do is count persons in two buckets, "non-recidivists" and "recidivists", based on how many BKG_NBR they have per STATE_NBR. If they have more than one BKG_NBR per STATE_NBR, then put them in the "recidivists" bucket. If they only have a 1 to 1, then put them in the "non-recidivists" bucket.
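For reference, a set-based sketch: classify each person first, then count per bucket. The table name dbo.Bookings is an assumption; the columns are from the post:

WITH per_person AS (
    SELECT STATE_NBR,
           CASE WHEN COUNT(DISTINCT BKG_NBR) > 1
                THEN 'recidivists' ELSE 'non-recidivists' END AS bucket
    FROM dbo.Bookings
    GROUP BY STATE_NBR
)
SELECT bucket, COUNT(*) AS persons
FROM per_person
GROUP BY bucket;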
I am trying to create a trigger on a table. Let's call it table ABC. Table looks like this:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[ABC](
    [id] [uniqueidentifier] NOT NULL,
[Code] ....
When someone updates a row on table ABC, I want to insert the original values into table ABCD, with the current date and time from getdate() going into the updateDate field, as defined below:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[ABCD](
    [id] [uniqueidentifier] NOT NULL,
[Code] .....
The trigger I've currently written looks like this:
/****** Object: Trigger [dbo].[ABC_trigger] Script Date: 4/10/2015 1:32:33 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER [dbo].[ABC_trigger] ON [dbo].[ABC]
[Code] ...
This trigger works, but it inserts all of the rows every time. My question is: how can I get the trigger to insert just the row(s) that were actually updated?
I can't sort by uniqueidentifier in descending order, as those values can be random.
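Since the trigger body is elided, this is a guess at the usual cause of "all rows every time": selecting from the base table instead of the inserted/deleted pseudo-tables. The deleted table holds the pre-update values of only the rows the UPDATE touched, so no sorting is needed; the column list below is abbreviated:

CREATE TRIGGER [dbo].[ABC_trigger] ON [dbo].[ABC]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- "deleted" contains only the rows changed by this UPDATE, with their original values
    INSERT INTO dbo.ABCD (id, updateDate)   -- plus the other audited columns
    SELECT d.id, GETDATE()
    FROM deleted AS d;
END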
Is there a way to see the data of a table variable in the SSMS debugger? For example, if I set a breakpoint in SSMS and look at a populated table variable named @MyTable in the Locals tab at the bottom of the IDE, a value of "(table)" is displayed. There does not appear to be a way to expand or drill into this variable in the debugger to see the data. Do you know if there's a way to do this through the debugger, or do you use an alternate approach when using the SSMS debugger?
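For what it's worth, I'm not aware of a way to expand "(table)" in the Locals window; one workaround is to temporarily materialize the table variable into an XML variable the debugger can display, added to the script while debugging:

-- the Locals window can show XML values, so this exposes the table's contents
DECLARE @debug XML = (SELECT * FROM @MyTable FOR XML PATH('row'), ROOT('MyTable'));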
An extract from a file-based legacy accounting system is performed every night. The system does not have a primary key because transactions are managed through program code (the more things change...). The extract is copied to text in Unix and FTP'd to Windows, where the file is loaded into SQL Server by kill & fill. Because of the expense of modifying the source system, there is enormous inertia/resistance to injecting a primary key at the source, so kill & fill it stays.
In reading about Change Data Capture, it seemed to me that column-level inserts, updates, and deletes are stored in tables that remember the before and after content of each column tracked. In my reading I have seen many references to the LSN being used to decide when and what to record as changed, but I have not seen any reference to a primary key being necessary for Change Data Capture to work. This is in contrast to replication, where the requirement for a primary key is made plain.
Is it possible to use Change Data Capture against a table without a primary key, or can it record before and after column changes based on the LSN only? And how could it be used to change the extract from kill and fill to incremental?
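For reference, CDC does not require a primary key; a primary key or unique index only becomes necessary if you enable net-changes queries (@supports_net_changes = 1). A minimal sketch of enabling it, with an assumed table name:

EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'NightlyExtract',   -- assumed name of the kill & fill target
    @role_name     = NULL;

One caveat, though: CDC records whatever happens to the tracked table, so against a kill & fill load it would faithfully record every nightly delete and re-insert. Making the extract incremental would likely mean changing the load itself (e.g. to a MERGE) rather than bolting CDC onto the kill & fill.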
I have a table of 300+ GB. It holds 10 years of data. I need to delete 5 years of data and put it on another server so I can have more space.
If I delete 5 years of data, the transaction log gets huge, and the database takes up even more space because the .ldf file keeps growing. I think I can shrink the log file and the data file afterwards. Is this the best way to do it?
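For reference, deleting in small batches keeps each transaction, and therefore the log, small (under the SIMPLE recovery model the log space is reused between batches; under FULL you'd take log backups in between). A sketch, assuming a date column named TranDate:

DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000) FROM dbo.BigTable
    WHERE TranDate < DATEADD(YEAR, -5, GETDATE());
    SET @rows = @@ROWCOUNT;   -- loop ends when a batch deletes nothing
END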
Someone created a Word input file (15 pages, including check boxes, text boxes, drop-down lists...). Is it possible to save the data in the Word input file to a SQL table?
I'm trying to load data from an old SQL Server 2000 into a new SQL Server 2014. I need to do a checksum to check whether all the source data is loaded in the target database (SQL Server 2014). I've created the insert statement for this, which works. I need to use a checksum to make sure all the source rows are loaded in the target table. I haven't done checksums before.
Here is my insert statement:
INSERT INTO [Test].[dbo].[Order_tab]
    ([rec_id]
    ,[date_loaded]
    ,[Name1]
    ,[Name2]
    ,[Address1]
    ,[Address2]
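For reference, a common lightweight check is to compare a row count plus an aggregate checksum, run against both the source and the target; both functions exist on SQL Server 2000 and 2014. Note that checksums can collide, so matching values are strong evidence rather than proof:

SELECT COUNT(*) AS row_cnt,
       CHECKSUM_AGG(BINARY_CHECKSUM(rec_id, Name1, Name2, Address1, Address2)) AS tbl_checksum
FROM [Test].[dbo].[Order_tab];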
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[PaymentsLog](
[Code] ....
Is there a way to look at the DatePeriod table and use the StartDate and EndDate as the periods to be used in the select statement, then cursor through each date between these two dates and insert the data into the PaymentsLog table?
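For reference, a cursor isn't needed: a recursive CTE can expand each StartDate..EndDate pair into one row per date and drive the insert directly. The PaymentsLog column list is an assumption:

WITH dates AS (
    SELECT StartDate AS d, EndDate
    FROM dbo.DatePeriod
    UNION ALL
    SELECT DATEADD(DAY, 1, d), EndDate
    FROM dates
    WHERE d < EndDate
)
INSERT INTO dbo.PaymentsLog (PaymentDate)   -- assumed column
SELECT d
FROM dates
OPTION (MAXRECURSION 0);    -- lift the default 100-level recursion cap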
I have been trying to write a cursor to fetch the required data from a table, but somehow it runs forever and inserts duplicate records.
I have a temp table named getInvoice where I have five important columns:

1. invoice number
2. group
3. invoice status
4. invoice expiration date
5. creation date time

and some other columns. One invoice number can belong to one or more groups, and there can be one or more records for a particular invoice number and group.

An example is below:
InvoiceNumber Group InvoiceStatus InvoiceExpirationDate CreationDateTime
My query condition is complex, and that is why I'm facing problems retrieving the output. I need a cursor for getting the distinct invoice numbers from the table; for each invoice number I need to get the latest record for each invoice number and suffix combination, based on the CreationDateTime column, and if that record has an invoice status of 2 and an invoice expiration date that is either null or greater than today's date, then I need to get that record and put it in a temp table.
The query I wrote is below
declare myData cursor for
select distinct invoiceNumber from #getInvoice

declare @invoiceNumber varchar(30)

open myData
fetch next from myData into @invoiceNumber

while @@FETCH_STATUS = 0
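For reference, the cursor may not be needed at all: ROW_NUMBER can pick the latest row per invoice number and group in one pass (I'm treating the [Group] column as the "suffix" mentioned above):

WITH latest AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY InvoiceNumber, [Group]
                              ORDER BY CreationDateTime DESC) AS rn
    FROM #getInvoice
)
SELECT *
INTO #qualified                    -- the temp table of qualifying records
FROM latest
WHERE rn = 1                       -- latest row per invoice/group
  AND InvoiceStatus = 2
  AND (InvoiceExpirationDate IS NULL OR InvoiceExpirationDate > GETDATE());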
I need to recover some data in a table, but I'm not 100% sure of the right way to do this safely.
I'll need to query the two tables to compare the before and after, but how do I go about restoring/attaching the backup database in SQL without causing conflicts?
If I restore, I assume this would just overwrite the live database, which is obviously the worst thing that could happen. If I attach the backup, how does this affect the current live DB? How do I make sure that it's not getting accessed and mistaken for the live DB?
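For reference, restoring the backup under a new database name (with new physical file names) leaves the live database untouched; RESTORE FILELISTONLY shows the logical file names to use with MOVE. All names below are placeholders:

RESTORE FILELISTONLY FROM DISK = 'D:\Backups\MyDb.bak';

RESTORE DATABASE MyDb_Recovered
FROM DISK = 'D:\Backups\MyDb.bak'
WITH MOVE 'MyDb_Data' TO 'D:\Data\MyDb_Recovered.mdf',
     MOVE 'MyDb_Log'  TO 'D:\Data\MyDb_Recovered.ldf',
     RECOVERY;

You can then compare the two with a cross-database query, e.g. SELECT ... FROM MyDb.dbo.t EXCEPT SELECT ... FROM MyDb_Recovered.dbo.t.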
When assigning permissions to an authentication user to connect to a server database, if I want the user to be able to insert / update / delete data on db objects, specifically tables, what permissions should be assigned to that user?
My thoughts were Insert / Update / Delete; however, someone suggested that the Execute permission would do this...
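For reference, EXECUTE applies to stored procedures and functions, not to DML against tables; the direct route is granting the DML permissions (usually together with SELECT), either per table or on a whole schema. The user name below is a placeholder:

GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO [SomeUser];

The Execute suggestion only makes sense if all access goes through stored procedures, in which case EXECUTE on the procedures is enough thanks to ownership chaining.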