I'm trying to import a text file, but the primary key column contains duplicates (it turns out to be the nature of the legacy data). How can I kick out all the duplicates so that, say, only a single row per primary key value remains?
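One rough way to do that, assuming the file is first loaded into a staging table that has no primary key plus an identity column as a row marker (all names here are made up), is to delete every duplicate except the first occurrence before moving the data into the real table:

-- StagingTable(RowId INT IDENTITY, KeyCol, Col1, ...) is a hypothetical keyless staging table
DELETE s
FROM StagingTable s
WHERE EXISTS ( SELECT 1
               FROM StagingTable s2
               WHERE s2.KeyCol = s.KeyCol
                 AND s2.RowId < s.RowId )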
Hi all, I'm using BCP to import ASCII text data into a table that already has many records. BCP failed because of a "duplicate primary key" error. Is there any way, using BCP, to know precisely which record's primary key caused that "violation of inserting duplicate key"? I already used the -O option to output errors to an error.log, but it doesn't help much, because that error log contains the same error message mentioned above without telling me exactly which record it was, so I can't pull that duplicate record out of my import data file. TIA and have a great day. David Nguyen.
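If the file can be bcp'd into a keyless staging table first, the offending records can be listed by joining the staged data against the target table on the key column (a sketch; table and column names are illustrative):

SELECT s.*
FROM StagingTable s
INNER JOIN TargetTable t ON t.KeyCol = s.KeyCol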
I have one table that stores log messages generated by a web service. I have a second table where I want to store just the distinct messages from the first table. This second table has two columns: one for the message and one for the checksum of the message. The checksum column is the primary key for the table.
My query for populating the second table looks like:

INSERT INTO TransactionMessages ( message, messageHash )
SELECT DISTINCT message, CHECKSUM( message )
FROM Log
WHERE logDate BETWEEN '2008-03-26 00:00:00' AND '2008-03-26 23:59:59'
  AND NOT EXISTS ( SELECT * FROM TransactionMessages
                   WHERE messageHash = CHECKSUM( Log.message ) )
I run this query once per day to insert the new messages from the day before. It fails when a day has two distinct messages that happen to have the same checksum. In this case I would like to ignore the second message and let the query proceed. I tried creating an INSTEAD OF INSERT trigger that only inserted unique primary keys. The trigger looks like:
IF NOT EXISTS ( SELECT TM.messageHash
                FROM TransactionMessages TM, inserted I
                WHERE TM.messageHash = I.messageHash )
BEGIN
    INSERT INTO TransactionMessages ( messageHash, message )
    SELECT messageHash, message FROM inserted
END
That didn't work. I think the issue is that the whole batch arrives in the inserted table at once, so the trigger can only compare against rows already committed to the table; it cannot catch a duplicate primary key whose first copy is part of the same batch and has not been inserted yet.
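One hedged workaround, assuming message is a plain varchar column, is to keep the original query shape but collapse the day's rows to one message per checksum, so a hash collision within the same day no longer breaks the insert (it keeps one arbitrary message per hash, here the MIN):

INSERT INTO TransactionMessages ( message, messageHash )
SELECT MIN( message ), CHECKSUM( message )
FROM Log
WHERE logDate BETWEEN '2008-03-26 00:00:00' AND '2008-03-26 23:59:59'
  AND NOT EXISTS ( SELECT * FROM TransactionMessages
                   WHERE messageHash = CHECKSUM( Log.message ) )
GROUP BY CHECKSUM( message )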
We have a SQL Server 6.5 table with a composite primary key that contains duplicate entries for the key. I wonder how they got in there? Now, when we try to import this table into SQL 2000, it fails with a duplicate row error. Any help?
I have recently been looking at a database and wondered if anyone can tell me what the advantages are of supporting a unique column, which could essentially serve as the primary key, with an additional identity-seeded integer primary key.
For example:
id [unique integer auto incremented primary key - not null], ClientCode [unique index varchar - not null], name [varchar null], surname [varchar null]
Isn't it just better to use ClientCode as the primary key straight off? When one references the above table, it can be done more easily with the ClientCode, since you don't have to do a lookup on the ClientCode every time.
I'm going through my tables and rewriting them so that I can create relationship-based constraints and create foreign keys among my tables. I didn't have a problem with a few of the tables but I seem to have come across a slightly confusing hiccup.
Here's the query for my Classes table:
Code:
CREATE TABLE Classes ( class_id INT IDENTITY PRIMARY KEY NOT NULL,
This statement runs without problems and I create the relationship with my Users table just fine, having renamed the column to teacher_id. I have a 1:n relationship between users and classes AND an n:m relationship, because a user can be a student or a teacher; the difference is one field, user_type, which denotes what type of user a person is. In any case, the 1:n relationship from users to classes is that of the teacher instructing the class. The problem appears when I run my query for the intermediary table between the class and the gradebook:
Code:
CREATE TABLE Classes_have_Grades ( class_id INT PRIMARY KEY NOT NULL,
Query Analyzer spits out: "There are no primary or candidate keys in the referenced table 'Classes' that match the referencing column list in the foreign key 'Classes_have_gradesFKIndex2'." Now, I know that in SQL Server 2000 you can only have one primary key. Does that mean I can have a multi-column primary key (which is in fact what I would like), or does it mean that just one field can be a primary key and that a table can have only that one primary key?
In addition, what is a "candidate" key? Will making the other fields "Candidate" keys solve my problem?
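For reference, a table can have only one PRIMARY KEY constraint, but that constraint may span several columns, and a foreign key must reference the whole of a primary key or of a UNIQUE constraint (a "candidate key" is any column set that could serve as the key; declaring it UNIQUE makes it usable as a foreign key target). A hedged sketch of the junction table, with a made-up grade_id column:

CREATE TABLE Classes_have_Grades (
    class_id INT NOT NULL,
    grade_id INT NOT NULL,
    CONSTRAINT PK_Classes_have_Grades PRIMARY KEY ( class_id, grade_id ),
    CONSTRAINT FK_Classes_have_Grades_Classes
        FOREIGN KEY ( class_id ) REFERENCES Classes ( class_id )
)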
ALTER TABLE [Students] WITH CHECK ADD CONSTRAINT [FK_Students_Schools] FOREIGN KEY([SchoolId]) REFERENCES [Schools] ([SchoolId])
What kind of index would ensure best performance for INSERTs/UPDATEs, so that SQL Server can most efficiently check the FK constraints? Would it be simply:
CREATE INDEX IX_Students_SchlId ON Students (SchoolId)
or
CREATE INDEX IX_Students_SchlId ON Students (SchoolId, StudentId)
In other words, what's best practice for adding an index which best supports a Foreign Key constraint?
Please let me know how I can generate a script for all the primary keys and foreign keys in a table, so that it can then be used to add the same primary keys and foreign keys in another database with the same structure.
Also, how do I script the defaults and other constraints of a table?
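There is no single command for this, but as a starting point the key metadata can be pulled from the INFORMATION_SCHEMA views and turned into ALTER TABLE statements; the Generate SQL Scripts task in Enterprise Manager (or Generate Scripts in Management Studio) is the usual way to script defaults and other constraints as well. A sketch that just lists the key columns:

SELECT tc.TABLE_NAME, tc.CONSTRAINT_NAME, tc.CONSTRAINT_TYPE,
       kcu.COLUMN_NAME, kcu.ORDINAL_POSITION
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
    ON kcu.CONSTRAINT_NAME = tc.CONSTRAINT_NAME
WHERE tc.CONSTRAINT_TYPE IN ( 'PRIMARY KEY', 'FOREIGN KEY' )
ORDER BY tc.TABLE_NAME, tc.CONSTRAINT_NAME, kcu.ORDINAL_POSITION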
Hi! I am new here and would be very grateful for some advice. How do I eliminate duplicate reference keys? Or should I set the cache mode to PARTIAL?
[Data Conversion [4910]] Error: Data conversion failed while converting column "salesperson_id" (88) to column "Copy of salesperson_id" (4929). The conversion returned status value 6 and status text "Conversion failed because the data value overflowed the specified type.".

[Lookup [3882]] Warning: The Lookup transformation encountered duplicate reference key values when caching reference data. The Lookup transformation found duplicate key values when caching metadata in PreExecute. This error occurs in Full Cache mode only. Either remove the duplicate key values, or change the cache mode to PARTIAL or NO_CACHE.
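One hedged fix for the Lookup warning, assuming the reference data currently comes straight from a table, is to point the Lookup at a query that returns each key exactly once, so Full Cache mode never sees a duplicate (table and column names here are illustrative):

SELECT salesperson_id, MIN( salesperson_name ) AS salesperson_name
FROM dbo.Salespersons
GROUP BY salesperson_id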
Hello. There is a program which performs some scripted actions via ODBC on tables in a database on MSSQL 2000. Sometimes that program tries to insert a record with a key that is already present in the database. The error comes up and the program stops. Is there any way to globally configure the database, or the whole MSSQL server, to ignore such attempts and let the script continue without any error when it tries to insert duplicate-key records? Thank you for any suggestions. Pawel Banys
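As far as I know there is no server-wide switch for this, but a unique index created with the IGNORE_DUP_KEY option turns a duplicate-key INSERT into a warning and lets the statement and the rest of the script continue. A sketch in SQL Server 2000 syntax (table and column names are made up; the existing key would have to be recreated this way):

CREATE UNIQUE INDEX IX_MyTable_KeyCol
ON dbo.MyTable ( KeyCol )
WITH IGNORE_DUP_KEY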
SQL Server allows you to create any number of foreign key constraints on the same table for the same column.
Will this affect the design or performance in any way?
Naming the constraint would be a good way to avoid this. But in case someone has already created them, how do I remove the existing duplicate keys?
For example, I have two tables, Authors and Books. I could execute the query below n times and create as many foreign keys as I want.
ALTER TABLE Books ADD FOREIGN KEY (AuthorID) REFERENCES Authors (AuthorID)
I have a large table with the columns zip, state, city, county. The primary key "zip" has duplicates, but the rows are unique. How do I filter out the duplicate zips, so that in effect I have only one row per unique key? Randy Garland
If you just want a list of all rows with duplicate zip codes, then ...
SELECT * FROM TableName WHERE zip IN ( SELECT zip FROM TableName GROUP BY zip HAVING COUNT(*)>1 )
Duncan
Duncan, I tried this but it does not return one row per key. Randy Garland
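If an arbitrary row per zip is acceptable, one sketch is to group on the key and take MIN() of the remaining columns; note that MIN() picks each column independently, so a returned row can mix values from two different duplicates:

SELECT zip, MIN( state ) AS state, MIN( city ) AS city, MIN( county ) AS county
FROM TableName
GROUP BY zip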
I'm trying to merge two Access databases into one SQL server database. I have 3 tables that are all related with primary and foreign keys.
When I try to import my second set of 3 tables I get errors about the keys already existing in the database. Is there any way to force SQL server to assign new keys while preserving my existing relationships? Thanks!
I tried the following query and was able to get the list of foreign keys with column names as well as the referenced tables and referenced columns:
SELECT parent_column_id AS 'child Column',
       object_name(constraint_object_id) AS 'FK Name',
       object_name(parent_object_id) AS 'parent table',
       name,
       object_name(referenced_object_id) AS 'referenced table',
       referenced_column_id
FROM sys.foreign_key_columns
INNER JOIN sys.columns
    ON ( parent_column_id = column_id AND parent_object_id = object_id )
ORDER BY object_name(parent_object_id) ASC
but I am not able to see the FKs that were created more than once on the same column referring to the same PK.
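One way to surface those, as a sketch, is to group the foreign key columns by parent column and referenced column and keep the combinations that occur more than once; the extra constraints can then be dropped by name with ALTER TABLE ... DROP CONSTRAINT:

SELECT object_name( parent_object_id ) AS 'parent table',
       col_name( parent_object_id, parent_column_id ) AS 'child column',
       object_name( referenced_object_id ) AS 'referenced table',
       COUNT(*) AS fk_count
FROM sys.foreign_key_columns
GROUP BY parent_object_id, parent_column_id, referenced_object_id, referenced_column_id
HAVING COUNT(*) > 1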
Yet another simple query that is eluding me. I need to find records in a table that have the same first name and last name. Because the table has a primary key, these people were either entered twice or they share the same first and last name.
How could you query this:
ID     fname    lname
10001  Bill     Jones
10002  Joe      Smith
10003  Sue      Jenkins
10004  John     Sanders
10005  Joe      Smith
10006  Harrold  Simpson
10007  Sue      Jenkins
10008  Sam      Worden
and get a result set of this:
ID     fname  lname
10002  Joe    Smith
10005  Joe    Smith
10003  Sue    Jenkins
10007  Sue    Jenkins
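A sketch that does this, assuming the table is called People, finds the duplicated name pairs in a derived table and joins back to pick up every matching row with its ID:

SELECT p.ID, p.fname, p.lname
FROM People p
INNER JOIN ( SELECT fname, lname
             FROM People
             GROUP BY fname, lname
             HAVING COUNT(*) > 1 ) d
    ON d.fname = p.fname AND d.lname = p.lname
ORDER BY p.fname, p.lname, p.ID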
"Violation of PRIMARY KEY of restriction 'PK_Approve_Overtime'. The overlapping key cannot be inserted in object 'Dbo.Approve_Overtime'. The statement was ended." can soemone explain to me why i have this kind of error? i have this two tables. approve_overtime table has a primary key id_no and application_input table with a primary key of id_no! all the values from of application_input will be stored also in approve_overtime. sometimes the datas can be stored.sometimes it cannot and produces an error!
Using SQL Server 7.0. It appears that primary keys are created as nonclustered unique indexes. Is there a configuration setting I can use to make them be created as clustered unique indexes?
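As far as I know there is no server-wide setting; a primary key defaults to clustered only when the table has no clustered index yet, and whatever tool creates the key may be specifying NONCLUSTERED explicitly. You can state it yourself when adding the constraint (names are illustrative):

ALTER TABLE dbo.MyTable
ADD CONSTRAINT PK_MyTable PRIMARY KEY CLUSTERED ( id )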
If a table has a column defined as INT IDENTITY(1,1) which is to be used as the primary key, should that index be defined as clustered or nonclustered? In Enterprise Manager, when you create a PK on a table it defaults to being a clustered index. I am sure the answer depends on the other index requirements and columns in the table, but I'd like to see what other people think about this.
I have read that SQL Server tables can't have more than one primary key. I know that in Access two key fields are allowed. Why can't there be two primary keys in a single table in SQL Server 7?
I accidentally removed the primary keys from my table. Is there any way I can get the PKs back to what they used to be? I need help please. When I try "resetting" the PK I keep getting this error:
'table_name' table - Unable to create index 'PK_tablename'.
ODBC error: [Microsoft][ODBC SQL Server Driver][SQL Server]CREATE UNIQUE INDEX terminated because a duplicate key was found for index ID 1. Most significant primary key is '1'.
[Microsoft][ODBC SQL Server Driver][SQL Server]Could not create constraint. See previous errors.
[Microsoft][ODBC SQL Server Driver][SQL Server]Warning: The table 'tDetail' has been created but its maximum row size (12521) exceeds the maximum number of bytes per row (8060). INSERT or UPDATE of a row in this table will fail if the resulting row length exceeds 8060 bytes.
[Microsoft][ODBC SQL Server Driver][SQL Server]The statement has been terminated.
In Access you can have two fields that together form the primary key, with one or the other repeating as long as the combination is not repeated, i.e.:

key1    key2
200410  12345
200410  12346
200410  12588

etc. For all 200410 rows there cannot be a repeat of any value of key2.
Is there a way to have this dual key in MSSQL?
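Yes; in SQL Server that is a composite primary key, declared as a table-level constraint over both columns (the table name here is made up):

CREATE TABLE MyTable (
    key1 INT NOT NULL,
    key2 INT NOT NULL,
    CONSTRAINT PK_MyTable PRIMARY KEY ( key1, key2 )
)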
I have declared a table with a single column as the primary key. If I write two records, the first containing "A" and the second "a", then I get an exception about violating the uniqueness of the key. Is there a way I can define the key to be case sensitive?
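One hedged approach, on SQL Server 2000 or later, is to give the key column a case-sensitive collation so that 'A' and 'a' count as different values (the collation name here is one common choice, not the only one):

CREATE TABLE MyTable (
    KeyCol varchar(10) COLLATE Latin1_General_CS_AS NOT NULL PRIMARY KEY
)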
I have two tables with similar primary keys, table A and table B, and I want to find all the key values that differ between the tables. Can this be done with a SELECT? If so, what would it be?
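It can; one sketch, assuming a single key column with the same name in both tables, is two NOT EXISTS probes glued together with UNION ALL:

SELECT a.KeyCol, 'only in TableA' AS which
FROM TableA a
WHERE NOT EXISTS ( SELECT 1 FROM TableB b WHERE b.KeyCol = a.KeyCol )
UNION ALL
SELECT b.KeyCol, 'only in TableB'
FROM TableB b
WHERE NOT EXISTS ( SELECT 1 FROM TableA a WHERE a.KeyCol = b.KeyCol )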
I am not entirely sure what a foreign key is. Is it a unique ID which is the same as the primary key? If I add a foreign key to a table that already has data, will it update each row with a unique ID, or will it only create a unique ID on newly inserted records (for the foreign key)? If the foreign key is the same as the primary key, then why do we need to add a foreign key at all? For example, tables like:

Table Name: Customers
-- CustomerID uniqueidentity (primary key)
-- FirstName nchar

Table Name: Orders
-- OrderID int
-- CustomerID int

Obviously CustomerID is going to be the same in both tables, so why would you need to add a foreign key on CustomerID to the Orders table? Can't SQL match the CustomerID in each table anyway? Cheers
Hi, I have a database and I have been inserting some dummy data into it, but now I want to upload it to my website, so I want to delete all the dummy data and start the PK back at 1. I truncated all the data but it still keeps counting from the last value. So how do I reset it?
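TRUNCATE TABLE normally resets the identity seed; if the rows were removed with DELETE instead, the counter keeps going, and it can be reseeded explicitly (the table name here is illustrative):

DBCC CHECKIDENT ( 'dbo.MyTable', RESEED, 0 )
-- with a seed/increment of (1,1) the next inserted row then gets identity value 1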
Hi, I have a question on PKs. I have a db with tables that do not have a PK. I want to create a PK on each one of them; in fact, I want to alter each table, add one more field, and make it the PK. My question is: will I have to go to each table, alter it, and then create the PK on it, or does someone have a script or a better way to do it? Any help is appreciated.
Hi, just a continuation of my earlier queries on replication. I have a db which I want to replicate, and it does not have PKs. I do not want to create PKs on the existing columns, so I thought of creating one more column in all the tables and making it the PK. Does anyone know whether this will work fine, or whether it may cause problems I should be prepared for? Any thoughts appreciated. pvd
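For what it's worth, a surrogate key column can be added to an existing table in place, which leaves the existing columns untouched (names are illustrative; this would be repeated per table, by hand or from a generated script):

ALTER TABLE dbo.MyTable ADD row_id INT IDENTITY(1,1) NOT NULL
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY ( row_id )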
I have a 3rd party app with a primary key of about 5 fields, the last of which is a trantype. The app has a posting process which uses this field in its SQL and ran rather slowly. We added an individual index on this field and cut processing time by 90%. It almost seemed like SQL Server was ignoring the primary key index. Is this because trantype was the last field in the primary key index?
I have two tables (categories & listings) which create a many-to-many relationship.
I have created an interim table with the primary keys from each table as a composite primary key...(cat_id & list_id).
How does the interim table get populated with the id's?
When I do an insert statement to insert data into the categories table, the cat_id field is automatically generated; same with the listings table. But when (and how) does the primary key data get into the interim table?
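It doesn't get there automatically; the application has to insert the pair itself, typically by capturing each generated id right after its insert. A sketch with made-up table and column names (SCOPE_IDENTITY() needs SQL Server 2000 or later):

DECLARE @cat_id int, @list_id int

INSERT INTO categories ( cat_name ) VALUES ( 'Widgets' )
SET @cat_id = SCOPE_IDENTITY()

INSERT INTO listings ( list_title ) VALUES ( 'Sample listing' )
SET @list_id = SCOPE_IDENTITY()

INSERT INTO categories_listings ( cat_id, list_id ) VALUES ( @cat_id, @list_id )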