SQL Server 2008 :: Getting Duplicate Data When Connecting To Another Table
Jul 25, 2015
I have the following two tables:
CREATE TABLE [MailBox].[Message](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[SenderId] [bigint] NOT NULL,
[Message] [nvarchar](max) NOT NULL,
[SentDate] [datetime] NOT NULL,
CONSTRAINT [PK_MailBox.Message] PRIMARY KEY CLUSTERED
[Code] ....
I'm building messaging functionality into my application. I'm able to insert a message into the database, and this message then appears inside the other user's inbox. The issue I have is that when I click on this message to view the conversation, I make a call to the following stored procedure:
@UserId bigint,
@SenderId bigint
AS
BEGIN
SET NOCOUNT ON;
[Code] .....
The problem is that I'm joining to the user photos table to return each user's profile picture, but even though I have specified IsProfilePic, I get all of the photos returned. Instead, it should be two photos: one for @UserId and one for @SenderId. In other words, the result should be equivalent to doing this:
Select *
From [User].[User_Photos]
where (UserId = 1 or UserId = 2) and IsProfilePic = 1
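Since the procedure body is elided, the following is only a hedged sketch of how the photo join could be constrained; the PhotoPath column and the conversation filter are placeholders for whatever the real procedure uses. Keeping IsProfilePic inside the join condition, and the OR on UserId inside parentheses, stops every photo from coming back:
SELECT m.Id,
       m.SenderId,
       m.[Message],
       m.SentDate,
       up.UserId,
       up.PhotoPath                               -- placeholder column name
FROM [MailBox].[Message] m
LEFT JOIN [User].[User_Photos] up
       ON (up.UserId = @UserId OR up.UserId = @SenderId)
      AND up.IsProfilePic = 1                     -- keep the flag in the ON clause
WHERE m.SenderId IN (@UserId, @SenderId)          -- placeholder for the real conversation filter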
I am trying to bulk insert data into a main table from a staging table in SQL Server 2012. If any error occurs, the whole operation is rolled back. I don't want that to happen. I want to know which records caused a problem, and the rest should still be inserted.
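One common pattern for this, sketched here with made-up table and column names (dbo.Staging, dbo.Main, dbo.ErrorRows), is to log the rows that would fail and insert only the rest, instead of letting one bad row roll back the whole batch:
-- Hypothetical sketch: the duplicate-key check stands in for whatever rule causes the failures
INSERT INTO dbo.ErrorRows (Id, Reason)
SELECT s.Id, 'Key already exists in Main'
FROM dbo.Staging AS s
WHERE EXISTS (SELECT 1 FROM dbo.Main AS m WHERE m.Id = s.Id);

INSERT INTO dbo.Main (Id, Col1, Col2)
SELECT s.Id, s.Col1, s.Col2
FROM dbo.Staging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Main AS m WHERE m.Id = s.Id);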
I have a report that summarizes hospital readmissions. Some months may only have a female or a male patient who is readmitted, but I want to show both genders for every month either way.
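A hedged sketch of one way to force both genders to appear for every month: cross join the list of months with the two gender values and left join the readmissions onto that grid (the table and column names here are assumptions):
-- Hypothetical names: dbo.Readmissions(ReadmitMonth, Gender)
SELECT mo.ReadmitMonth,
       g.Gender,
       COUNT(r.Gender) AS Readmissions            -- counts 0 when there is no match
FROM (SELECT DISTINCT ReadmitMonth FROM dbo.Readmissions) AS mo
CROSS JOIN (SELECT 'F' AS Gender UNION ALL SELECT 'M') AS g
LEFT JOIN dbo.Readmissions AS r
       ON r.ReadmitMonth = mo.ReadmitMonth
      AND r.Gender = g.Gender
GROUP BY mo.ReadmitMonth, g.Gender
ORDER BY mo.ReadmitMonth, g.Gender;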
parent | Name | Checked | contactmethod | Check2 | Other
974198 | Employment | true | Face to Face | true | null
974224 | Other | true | Face to Face | true | skills
974224 | Other | true | Contact | true | skills
I'd like to pivot on "parent"
In a perfect world I'd like to see output like
974198 | Employment | true | Face to Face | true | null
974224 | Other | true | Face to Face, Collateral Contact | true | skills
If there is more than one Name or contactmethod for the same parent, they would be strung together with commas.
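Assuming the rows sit in a table shaped like the sample (called #src here, a made-up name), the comma-separated contactmethod list can be built on SQL Server 2008 with the STUFF ... FOR XML PATH technique:
SELECT p.parent,
       p.Name,
       p.Checked,
       STUFF((SELECT ', ' + s.contactmethod
              FROM #src AS s
              WHERE s.parent = p.parent
              GROUP BY s.contactmethod
              FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 2, '') AS contactmethod,
       p.Check2,
       p.Other
FROM #src AS p
GROUP BY p.parent, p.Name, p.Checked, p.Check2, p.Other;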
I need to update the Denominator column in one row with the value from the Numerator column in a different row. For example the last row in the table is
c010 | A | 92 | NULL   (MeasureID | GroupID | Numerator | Denominator)
I need to update the Denominator, which is currently NULL, with the value from the Numerator where the MeasureID=c001 and GroupID=A.
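Assuming the table is called dbo.Measures and the columns are MeasureID, GroupID, Numerator and Denominator (names inferred from the description), an UPDATE with a scalar subquery against the other row would do it:
-- Hypothetical table name dbo.Measures
UPDATE dbo.Measures
SET Denominator = (SELECT m2.Numerator
                   FROM dbo.Measures AS m2
                   WHERE m2.MeasureID = 'c001'
                     AND m2.GroupID = 'A')
WHERE MeasureID = 'c010'
  AND GroupID = 'A'
  AND Denominator IS NULL;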
In my ASP.NET project there are about 100 drop-down lists. I created a table to store the data for the drop-down lists, with three columns: [DropdownID], [OrderSequence], and [Description]. A sample is below. The data was input manually by a user. How can I write a query to find duplicate [OrderSequence] values?
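A quick way to surface the duplicates, assuming the lookup table is called dbo.DropdownData, is a GROUP BY with a HAVING filter:
-- Hypothetical table name dbo.DropdownData
SELECT [DropdownID], [OrderSequence], COUNT(*) AS Occurrences
FROM dbo.DropdownData
GROUP BY [DropdownID], [OrderSequence]
HAVING COUNT(*) > 1;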
I have XML data passed to a stored procedure in the following format, and within the stored procedure I am accessing the XML data using the nodes() method.
Here is an example of what I am doing:
DECLARE @Participants XML
SET @Participants = '<ArrayOfEmployees xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <Employees EmpID="1" EmpName="abcd" />
    <Employees EmpID="2" EmpName="efgh" />
</ArrayOfEmployees>'

SELECT
    Participants.Node.value('@EmpID', 'INT') AS EmployeeID,
    Participants.Node.value('@EmpName', 'VARCHAR(50)') AS EmployeeName
FROM @Participants.nodes('/ArrayOfEmployees/Employees') Participants (Node)
I saved the result into a csv file and then truncated the table. Now, I am trying to bulk insert the data into the table. So I used:
bulk insert rdb.dbo.scd_event_tab
from 'C:\users\sluintel.ctr\desktop\eventtab.csv'
with
(
    codepage = 'RAW',
    datafiletype = 'native',
    fieldterminator = ' ',
    keepidentity,
    keepnulls
);
go
However, I get this error:
Msg 4867, Level 16, State 1, Line 1
Bulk load data conversion error (overflow) for row 1, column 1 (JOB_ID).
Msg 4866, Level 16, State 5, Line 1
The bulk load failed. The column is too long in the data file for row 1, column 3. Verify that the field terminator and row terminator are specified correctly.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
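One thing worth noting: datafiletype = 'native' tells BULK INSERT to expect SQL Server's binary native format, in which case terminators are ignored, so a plain CSV tends to fail with exactly these overflow messages. A hedged sketch of how the statement might look for character data (the comma field terminator is an assumption; use whatever the file really contains):
bulk insert rdb.dbo.scd_event_tab
from 'C:\users\sluintel.ctr\desktop\eventtab.csv'
with
(
    codepage = 'RAW',
    datafiletype = 'char',       -- character data, not native format
    fieldterminator = ',',       -- assumption: comma-separated file
    rowterminator = '\n',
    keepidentity,
    keepnulls
);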
I have a query that needs to look up 5 specific records in a table. Basically, I need to hardcode the key values. Below is my query, which didn't work out.
select BF_ORGN_CD, BF_BDOB_CD, BF_TM_PERD_CD, data
from BF_DATA
where (BF_ORGN_CD, BF_BDOB_CD, BF_TM_PERD_CD) in   -- i guess this is the wrong query
    ('A1', 'B1', 'C1')
    ('A2', 'B2', 'C2')
    ('A3', 'B3', 'C3')
    ('A4', 'B4', 'C4')
    ('A5', 'B5', 'C5')
But if I use the query below, it generates more records than just these 5, since each column's IN list is matched independently:
select BF_ORGN_CD, BF_BDOB_CD, BF_TM_PERD_CD, data
from BF_DATA
where BF_ORGN_CD in ('A1', 'A2', 'A3', 'A4', 'A5')
  and BF_BDOB_CD in ('B1', 'B2', 'B3', 'B4', 'B5')
  and BF_TM_PERD_CD in ('C1', 'C2', 'C3', 'C4', 'C5')
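SQL Server does not accept row-value constructors in an IN list, but the same five exact combinations can be matched by joining to a derived table of the wanted keys (VALUES requires SQL Server 2008 or later):
SELECT d.BF_ORGN_CD, d.BF_BDOB_CD, d.BF_TM_PERD_CD, d.data
FROM BF_DATA AS d
JOIN (VALUES ('A1', 'B1', 'C1'),
             ('A2', 'B2', 'C2'),
             ('A3', 'B3', 'C3'),
             ('A4', 'B4', 'C4'),
             ('A5', 'B5', 'C5')) AS k (BF_ORGN_CD, BF_BDOB_CD, BF_TM_PERD_CD)
  ON  d.BF_ORGN_CD    = k.BF_ORGN_CD
  AND d.BF_BDOB_CD    = k.BF_BDOB_CD
  AND d.BF_TM_PERD_CD = k.BF_TM_PERD_CD;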
I want to create an XML file with data from my table. I have a question about the tags.
SELECT
    -- Root element attributes
    'http://tempuri.org/Form.xsd' AS 'xmlns',
    'http://www.w3.org/2001/XMLSchema-instance' AS 'xmlns:xsd',
    (
        SELECT
            -- Creating a default element
[Code] ....
This is my query. When I use the 'xmlns' namespace, the result is below:
Production and development servers are on different domains and they do not trust each other. How do I export the data from table t1 in database db1 on production and load it into table t1 in database db1 on development?
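Because the domains don't trust each other, Windows authentication across the servers won't work, but a linked server that maps to a SQL login on the production side can; this is only a sketch with placeholder server, host and login names. Exporting with bcp under SQL authentication and copying the file across is another common route.
-- Run on the development server; all names are placeholders
EXEC sp_addlinkedserver
     @server     = 'PRODSERVER',
     @srvproduct = '',
     @provider   = 'SQLNCLI',
     @datasrc    = 'prodhost.prod.domain.com';

EXEC sp_addlinkedsrvlogin
     @rmtsrvname  = 'PRODSERVER',
     @useself     = 'FALSE',
     @rmtuser     = 'sqllogin',        -- SQL authentication on the production side
     @rmtpassword = '********';

INSERT INTO db1.dbo.t1
SELECT * FROM PRODSERVER.db1.dbo.t1;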
I want to store images as binary data in a SQL table and compare them each time with an image file I receive. I've tried the approach below, but I'm getting an error:
DROP TABLE #BLOBTest
CREATE TABLE #BLOBTest
(
    TestID int IDENTITY(1,1),
    BLOBName varchar(50),
    BLOBData varbinary(MAX)
);
[Code] ....
Error:
Msg 4861, Level 16, State 1, Line 10
Cannot bulk load because the file "C:\Files\12656.jpg" could not be opened. Operating system error code 3 (failed to retrieve text for this error. Reason: 15105).
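Operating system error code 3 means the path was not found from SQL Server's point of view: OPENROWSET(BULK ...) runs on the server, so the file must be reachable from the server under a path the service account can read. A hedged sketch of loading an image and comparing it to a stored one (paths are placeholders):
-- Placeholder paths; the file must be readable by the SQL Server service account
INSERT INTO #BLOBTest (BLOBName, BLOBData)
SELECT '12656.jpg', img.BulkColumn
FROM OPENROWSET(BULK 'C:\Files\12656.jpg', SINGLE_BLOB) AS img;

-- Compare an incoming file with what is already stored (varbinary(max) supports =)
SELECT t.TestID, t.BLOBName
FROM #BLOBTest AS t
WHERE t.BLOBData = (SELECT i.BulkColumn
                    FROM OPENROWSET(BULK 'C:\Files\incoming.jpg', SINGLE_BLOB) AS i);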
When you load data into a new partitioned table, can it be done online without any downtime? I ask because I have a few tables that are around 250 GB and more.
What I am trying to do is count persons in two buckets, "non-recidivists" and "recidivists", based on how many bkg_nbr they have per STATE_NBR. If they have more than one bkg_nbr per STATE_NBR, put them in the "recidivists" bucket. If they have only one, put them in the "non-recidivists" bucket.
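A sketch, assuming the data lives in a table like dbo.Bookings(STATE_NBR, bkg_nbr) (a made-up name): count the distinct bookings per STATE_NBR first, then bucket and count the people.
WITH per_person AS
(
    SELECT STATE_NBR,
           COUNT(DISTINCT bkg_nbr) AS bookings
    FROM dbo.Bookings
    GROUP BY STATE_NBR
)
SELECT CASE WHEN bookings > 1 THEN 'recidivists'
            ELSE 'non-recidivists' END AS bucket,
       COUNT(*) AS persons
FROM per_person
GROUP BY CASE WHEN bookings > 1 THEN 'recidivists'
              ELSE 'non-recidivists' END;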
I am trying to create a trigger on a table. Let's call it table ABC. Table looks like this:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[ABC](
    [id] [uniqueidentifier] NOT NULL,
[Code] ....
When someone updates a row in table ABC, I want to insert the original values into table ABCD, with the current date and time from getdate() going into the updateDate field, as defined below:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[ABCD](
    [id] [uniqueidentifier] NOT NULL,
[Code] .....
The trigger I've currently written looks like this:
/****** Object: Trigger [dbo].[ABC_trigger] Script Date: 4/10/2015 1:32:33 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER [dbo].[ABC_trigger]
ON [dbo].[ABC]
[Code] ...
This trigger works, but it inserts all of the rows every time. My question is: how can I get the trigger to insert only the row(s) that were actually updated?
I can't sort by the uniqueidentifier in descending order, as those values can be random.
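The usual fix is to insert from the deleted pseudo-table inside the trigger instead of selecting from ABC itself; deleted holds the before-image of only the rows touched by the current UPDATE, so no sorting by the uniqueidentifier is needed. A sketch of the shape (the full column lists are elided in the post, so they are left as comments):
CREATE TRIGGER [dbo].[ABC_trigger]
ON [dbo].[ABC]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- deleted contains the original values of only the rows affected by this UPDATE
    INSERT INTO [dbo].[ABCD] ([id], /* other columns */ [updateDate])
    SELECT d.[id], /* other columns */ GETDATE()
    FROM deleted AS d;
END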
Is there a way to see the data in a table variable in the SSMS debugger? For example, if I set a breakpoint in SSMS and look at a populated table variable named @MyTable in the Locals tab at the bottom of the IDE, only the value "(table)" is displayed. There does not appear to be a way to expand or drill into this variable in the debugger to see the data. Is there a way to do this through the debugger, or do you use an alternate approach when using the SSMS debugger?
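The debugger only ever shows "(table)" for a table variable; one workaround (a sketch, not an official feature) is to copy the contents into an XML variable and inspect that value in the Locals window instead:
DECLARE @debug XML;
SET @debug = (SELECT * FROM @MyTable FOR XML PATH('row'), ROOT('MyTable'));
-- Set a breakpoint after this line and expand @debug in the Locals window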
I have a table variable into which I am inserting data from a SQL Server database. I have made one of the columns, called repaidID, a primary key so that a clustered index will be created on the table variable. When I run the stored procedure used to insert the data, I get this error message: Violation of PRIMARY KEY constraint. Cannot insert duplicate primary key in object. The value that is causing this error is (128503).
I have queried repaidID 128503 in the database to see if it is a duplicate but could not find any duplicate. The repaidID is a unique id normally used by my company and does not have duplicates.
Or can it record before and after column changes based on the LSN only?
An extract from a file based legacy accounting system is performed every night. The system does not have a primary key because transactions are managed through program code. (the more things change...). The extract is copied to text in Unix and FTP'd to Windows, where the file is loaded into SQL Server by kill & fill. Because of the expense of modifying the source system, there is enormous inertia/resistance to injecting a primary key at the source, so kill & fill it stays.
In reading about Change Data Capture, it seemed to me that column-level inserts, updates, and deletes are stored in tables that remember the before and after content of each tracked column. In my reading I have seen many references to the LSN being used to decide when and what to record as changed, but I have not seen any reference to a primary key being necessary for Change Data Capture to work. This is in contrast to replication, where the requirement for a primary key is made plain.
Is it possible to use Change Data Capture against a table without a primary key? And how can it be used to change the extract from kill-and-fill to incremental?
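Change Data Capture itself does not require a primary key; a key (or a unique index named via @index_name) only matters if you want net-changes support. A sketch of enabling it, with placeholder database, schema and table names:
-- Placeholder names throughout
USE db1;
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema        = N'dbo',
     @source_name          = N'LegacyExtract',
     @role_name            = NULL,
     @supports_net_changes = 0;   -- net changes would need a PK or unique index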
I have a table of 300+ GB. It holds 10 years of data. I need to delete 5 years of data and move it to another server so I can have more space.
If I delete 5 years of data, the transaction log gets huge, and the overall database size gets even bigger because the .ldf file keeps growing! I think I can shrink the log file and the data file afterward. Is this the best way to do it?
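To keep the log under control, the delete is usually done in small batches rather than one giant transaction, with log backups (or SIMPLE recovery) between batches; shrinking is then a one-off cleanup at the end. A hedged sketch with placeholder table and column names:
-- Placeholder names; archive the old rows to the other server first, then delete in batches
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (50000)
    FROM dbo.BigTable
    WHERE TransactionDate < DATEADD(YEAR, -5, GETDATE());

    SET @rows = @@ROWCOUNT;

    -- In FULL recovery, back up the log here so the space can be reused
END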
Someone created a Word input file (15 pages, including check boxes, text boxes, drop-down lists...). Is it possible to save the data from the Word input file to a SQL table?
I'm trying to load data from an old SQL Server 2000 instance to a new SQL Server 2014 instance. I need to do a checksum to check whether all the source data has been loaded into the target database (SQL Server 2014). I've created the insert statement for this, which works; now I need to use a checksum to make sure all the source rows were loaded into the target table. I haven't used checksums before.
Here is my insert statement:
INSERT INTO [Test].[dbo].[Order_tab] ([rec_id] ,[date_loaded] ,[Name1] ,[Name2] ,[Address1] ,[Address2]
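One simple way to validate the load afterwards is to run the same row-count-plus-checksum aggregate on both servers and compare the results; CHECKSUM_AGG and BINARY_CHECKSUM exist on both SQL Server 2000 and 2014. The sketch below deliberately leaves date_loaded out, on the assumption that it differs between the two loads:
-- Run this on the source and on the target and compare the two result rows
SELECT COUNT(*) AS row_cnt,
       CHECKSUM_AGG(BINARY_CHECKSUM(rec_id, Name1, Name2, Address1, Address2)) AS tbl_checksum
FROM dbo.Order_tab;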
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[PaymentsLog](
[Code] ....
Is there a way to look at the DatePeriod table, use the StartDate and EndDate as the period boundaries, cursor through each date between those two dates, and then insert the data into the PaymentsLog table?
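Rather than cursoring through the dates, a recursive CTE can expand every StartDate/EndDate pair in DatePeriod into one row per day and drive a single insert; the PaymentsLog column below is a placeholder, since its definition is elided:
;WITH dates AS
(
    SELECT dp.StartDate AS TheDate, dp.EndDate
    FROM dbo.DatePeriod AS dp
    UNION ALL
    SELECT DATEADD(DAY, 1, d.TheDate), d.EndDate
    FROM dates AS d
    WHERE d.TheDate < d.EndDate
)
INSERT INTO dbo.PaymentsLog (PaymentDate)      -- placeholder column name
SELECT TheDate
FROM dates
OPTION (MAXRECURSION 0);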
I have been trying to write a cursor to fetch the required data from a table, but somehow it's running forever and inserting duplicate records.
I have a temp table named getInvoice where I have five important columns
1. Invoice number
2. Group
3. Invoice status
4. Invoice expiration date
5. Creation date time
and some other columns. One invoice number can belong to one or more groups, and there can be one or more records for a particular invoice number and group.
An example is below :
InvoiceNumber Group InvoiceStatus InvoiceExpirationDate CreationDateTime
My query condition is complex, and that is why I'm having trouble retrieving the output. I need a cursor that gets the distinct invoice numbers from the table. For each invoice number, I need to get the latest record for each invoice number and suffix combination based on the creation date and time column; if that record has an invoice status of 2, and its invoice expiration date is either NULL or greater than today's date, then I need to take that record and put it in a temp table.
The query I wrote is below
declare myData cursor for
    select distinct invoiceNumber from #getInvoice

declare @invoiceNumber varchar(30)

open myData
fetch next from myData into @invoiceNumber
while @@FETCH_STATUS = 0
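This "latest row per invoice" logic usually doesn't need a cursor at all; ROW_NUMBER() partitioned by the invoice number (and the group/suffix column) picks the newest row, and the status and expiration filters are applied on top. A sketch against #getInvoice, with the column names approximated from the header above (swap [Group] for the suffix column if that is the real key):
;WITH ranked AS
(
    SELECT gi.*,
           ROW_NUMBER() OVER (PARTITION BY gi.InvoiceNumber, gi.[Group]
                              ORDER BY gi.CreationDateTime DESC) AS rn
    FROM #getInvoice AS gi
)
SELECT *
INTO #latestInvoices
FROM ranked
WHERE rn = 1
  AND InvoiceStatus = 2
  AND (InvoiceExpirationDate IS NULL OR InvoiceExpirationDate > GETDATE());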
I have opened up a port on a remote SQL instance and can see that the port is LISTENING when using the PortQry tool. I have also set the TCP port in the TCP/IP properties in the IPAll section for that instance, yet I am unable to connect and get this error:
Connection Timeout Expired. The timeout period elapsed while attempting to consume the pre-login handshake acknowledgement. This could be because the pre-login handshake failed or the server was unable to respond back in time. The duration spent while attempting to connect to this server was - [Pre-Login] initialization=1; handshake=14998; (.Net SqlClient Data Provider)
I have done this on other instances, although they were default instances, and it has always worked fine.
Hi all. After I installed SQL Server 2005 64-bit Standard Edition on Windows Server 2008 Enterprise 64-bit, I cannot connect to the SQL instance using SQL Server Management Studio on the same machine! I verified that:
the service is running,
in Surface Area Configuration, remote connections (local and remote) are enabled for both TCP/IP and named pipes,
I ran the command netstat -avn | findstr 49279 to make sure that the server is listening,
the firewall is off, but this should not matter since I'm connecting locally to a local instance.
I'm using a domain controller account to log in to SQL Server, and I also tried the sa account. What else could be wrong?
I need to recover some data in a table, but I'm not 100% sure of the right way to do this safely.
I'll need to query the two tables to compare the before and after, but how do I go about restoring/attaching the backup database in SQL Server without causing conflicts?
If I restore, I assume it would just overwrite the live database, which is obviously the worst thing that can happen. If I attach the backup instead, how does this affect the current live DB? How do I make sure it's not getting accessed and mistaken for the live DB?
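Restoring the backup under a different database name, with different physical file names, leaves the live database untouched and gives a side-by-side copy to query; a sketch with placeholder paths and logical file names:
RESTORE FILELISTONLY FROM DISK = N'C:\Backups\LiveDB.bak';   -- find the logical file names first

RESTORE DATABASE LiveDB_Recovered
FROM DISK = N'C:\Backups\LiveDB.bak'
WITH MOVE N'LiveDB_Data' TO N'C:\Data\LiveDB_Recovered.mdf',
     MOVE N'LiveDB_Log'  TO N'C:\Data\LiveDB_Recovered_log.ldf',
     RECOVERY;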
When assigning permissions to an authenticated user so they can connect to a server database, if I want the user to be able to insert / update / delete data on database objects, specifically tables, what permissions should be assigned to that user?
My thoughts were Insert / Update / Delete; however, someone suggested that the Execute permission would do this ...
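INSERT, UPDATE and DELETE are the object-level permissions that cover table data (EXECUTE applies to stored procedures and functions, not tables). They can be granted per table or for a whole schema; for example, assuming a user called app_user and tables in the dbo schema:
-- Placeholder user and table names
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO app_user;

-- or per table
GRANT INSERT, UPDATE, DELETE ON dbo.SomeTable TO app_user;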
I am writing a query to return some production data. Basically, I need to insert either 1 or 2 rows into a table variable based on a decision as to whether the production part makes 1 or 2 items (the raw data does not indicate this; it comes from a lookup in my database).
I can retrieve all the source data I need easily, but when I come to insert it into the table variable, I need to insert 1 record if it's a single part or 2 records if it's a twin part. I know I could use a cursor, but I'm sure there has to be an easier way!
Below is the code I have at the moment:
declare @startdate as datetime
declare @enddate as datetime
declare @Line as Integer
DECLARE @count INT

set @startdate = '2015-01-01'
set @enddate = '2015-01-31'
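One cursor-free option, sketched below with invented table and column names, is to join each source row to a tiny numbers list so that a single part matches one number and a twin part matches two:
-- Hypothetical source shape: #source(PartNo, IsTwin bit, ...)
DECLARE @parts TABLE (PartNo varchar(20), ItemNo int);

INSERT INTO @parts (PartNo, ItemNo)
SELECT s.PartNo, n.ItemNo
FROM #source AS s
JOIN (SELECT 1 AS ItemNo UNION ALL SELECT 2) AS n
  ON n.ItemNo = 1                          -- every part gets at least one row
  OR (n.ItemNo = 2 AND s.IsTwin = 1);      -- twin parts get a second row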
I'm working on a web app that needs to be able to take a row in the database and duplicate it, creating a new row in the same table with the same data except for the ID field and the reference field. So basically: table1.row1 references table2.row1. I need to duplicate the data in table1.row1 (creating table1.row2) with the same reference to table2.row1. Is there an easy way to do this in SQL? I'm just looking for some ideas or a framework to accomplish this.
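In T-SQL this is just an INSERT ... SELECT back into the same table, listing every column except the ID (which an identity column regenerates on its own); the column names below are placeholders:
-- Placeholder column names: Table2Id is the reference column, ColA/ColB stand for the rest
INSERT INTO table1 (Table2Id, ColA, ColB)
SELECT Table2Id, ColA, ColB
FROM table1
WHERE Id = @SourceRowId;

SELECT SCOPE_IDENTITY() AS NewId;   -- id of the duplicated row, assuming Id is an identity column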