SQL Server 2008 :: Trapping Failing Index Rows Or Source
Oct 14, 2015
Is there a way to trap failing index rows, or the source rows that violate the target's referential integrity? I am getting this error:
Msg 2601, Level 14, State 1, Line 1
Cannot insert duplicate key row in object XXX with unique index YYY.
The statement has been terminated.
I would like to know if there is a way to examine or determine which source rows do not conform to the unique index. I'm fine with dropping and re-establishing the index, and I know the information is catalogued somewhere, because during index creation the error message does tell you the row details that clobber the index creation. Ideally I would like to trap all the failing rows and see what I can do about rehabilitating them, ignoring them, or managing them some other way, but I'd like to know what the server knows when it refuses to create the index.
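A minimal sketch of one way to surface the offending source rows, assuming hypothetical names (dbo.SourceTable, with KeyCol1 and KeyCol2 standing in for the unique index's key columns): group the source by the index key and keep only the groups with more than one row.

-- Hypothetical table and column names; substitute the key columns of the unique index YYY.
SELECT KeyCol1, KeyCol2, COUNT(*) AS DuplicateCount
FROM dbo.SourceTable
GROUP BY KeyCol1, KeyCol2
HAVING COUNT(*) > 1;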
View 2 Replies
Jul 30, 2015
I have a flat file source from which data is imported into a SQL table. The target column is int and the input column is a string. The column has some numeric values and some blank values, and when I try to convert it to int the conversion fails.
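A small sketch of one workaround, assuming the string values land in a varchar staging column first (the column name Amount is hypothetical): turn blanks into NULL before converting, so they no longer break the cast.

-- Blanks become NULL instead of failing the conversion to int; Amount is a placeholder name.
SELECT CONVERT(INT, NULLIF(LTRIM(RTRIM(Amount)), '')) AS AmountInt
FROM dbo.Staging;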
View 7 Replies
View Related
Jan 30, 2008
I am writing a package where, at one step, I need to copy data from a source with text columns of 150 characters to a destination with matching text columns of only 60 characters. The data present in the source is all less than 60 characters in length, but if this changes in the future and data begins to be truncated, I want to be informed of this.
This raises two problems. First, because I'm not pre-emptively truncating the columns, my Data Flow Destination shows a validation warning. Is there any way I can tell SSIS "I know that data might be truncated, but I want to deal with that at run-time, not at design-time"?
Second, I'm not sure how to pass myself a notification using the options provided by Error Output on the Data Flow Destination. Fail Component would allow me to alter the control flow, but I would prefer not to have the process fail utterly because of one truncated record. I'm not sure if Ignore Failure will simply omit the row that would be truncated (not an acceptable solution) or truncate the data (could work temporarily, but I still need to be warned). Redirect Row is appealing but has a different problem - if I redirect the error/truncated rows to a separate table, that Data Flow Task no longer fails, which means I have no way to raise a notification.
Is there a better way to do this? The only option I can think of is to use Redirect Row, and then have the next step in the Control Flow be a script that checks for the presence of records in my error table and sends a notification if there are any, but that seems unnecessarily circuitous. Is there a way in SSIS to arbitrarily send a Failure message if a given step is hit (possibly with Events), or is that reserved strictly for halting failures?
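For what it's worth, the check-and-notify step does not need a script: a plain Execute SQL Task can raise the failure. A sketch, assuming a hypothetical error table dbo.TruncationErrors that the redirected rows are written to:

-- Fails the Execute SQL Task (and so fires whatever OnError notification is wired up)
-- whenever the redirect target contains rows. The table name is a placeholder.
IF EXISTS (SELECT 1 FROM dbo.TruncationErrors)
    RAISERROR ('Rows were redirected due to possible truncation.', 16, 1);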
View 1 Replies
View Related
Feb 20, 2008
Hi,
When exporting data from Excel to a SQL Server table using an SSIS package, how would I check, once the export is done, that the source row count equals the destination row count, and throw an error message if it does not?
Also, how can we handle transactions in SSIS?
1. When some error happens during the export and the rows are not fully exported to the destination, how do we roll back the transaction in SSIS?
Any sort of help would be highly appreciated.
Thanks,
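A minimal sketch of the count comparison, assuming the source count is captured into a package variable (for example via a Row Count transformation) and that dbo.Destination is a stand-in for the real target table:

-- @SourceRowCount would be supplied by the package; dbo.Destination is a placeholder.
DECLARE @SourceRowCount INT;  -- set from the package variable
DECLARE @DestRowCount INT = (SELECT COUNT(*) FROM dbo.Destination);
IF @SourceRowCount <> @DestRowCount
    RAISERROR ('Row count mismatch: source %d, destination %d.', 16, 1, @SourceRowCount, @DestRowCount);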
View 2 Replies
View Related
Mar 6, 2015
One of my programmers changed their database from Full to Simple recovery. I saw that my job that backs up the Full recovery model databases failed, so I moved that database to my Simple backup job plan and removed it from the Full recovery job. I am unable to remove the database from the Transaction Log task on the Full plan, because when I try to edit that job I get "Databases with Simple Recovery will be excluded".
My transaction log backups are still failing with the following error: "The statement BACKUP LOG is not allowed while the recovery model is SIMPLE. Use BACKUP DATABASE or change the recovery model using ALTER DATABASE. BACKUP LOG is terminating abnormally. Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly." I just want to remove that database so my Full recovery backup job does not try to back it up.
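As a quick sanity check, a query like the sketch below lists each database's current recovery model, so the log backup job's database list can be reconciled against what is actually in SIMPLE:

-- Databases showing SIMPLE cannot take BACKUP LOG and should be excluded from the log backup job.
SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name;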
View 2 Replies
View Related
Nov 29, 2013
I am trying to do a bulk import of data from XML into SQL.
My query returns no errors but no data gets imported.
Here is my XML
<?xml version="1.0" encoding="utf-8"?>
<status>
<connection_status>successful</connection_status>
<operation_status>successful</operation_status>
<CustomerDeposits>
[code]....
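A hedged sketch of one common pattern for this: read the whole file with OPENROWSET ... SINGLE_BLOB, convert it to XML, and shred it with nodes(). The file path, the Deposit element, and the two value() column names are assumptions to be replaced with the real schema.

-- File path and element/column names below are placeholders.
DECLARE @xml XML;
SELECT @xml = CONVERT(XML, BulkColumn)
FROM OPENROWSET(BULK 'C:\Import\deposits.xml', SINGLE_BLOB) AS src;

SELECT d.value('(CustomerID)[1]', 'INT') AS CustomerID,
       d.value('(DepositAmount)[1]', 'DECIMAL(18,2)') AS DepositAmount
FROM @xml.nodes('/status/CustomerDeposits/Deposit') AS T(d);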
View 2 Replies
View Related
Jul 5, 2015
This index is not unique
ix_report_history_creative_id
Msg 2601, Level 14, State 1, Procedure DFP_report_load, Line 161
Cannot insert duplicate key row in object 'dbo.DFP_Reports_History' with unique index 'ix_report_history_creative_id'.
The duplicate key value is (40736326382, 1, 2015-07-03, 67618862, 355324).
Msg 3621, Level 0, State 0, Procedure DFP_report_load, Line 161
The statement has been terminated.
Exception in Task: Cannot insert duplicate key row in object 'dbo.DFP_Reports_History' with unique index 'ix_report_history_creative_id'. The duplicate key value is (40736326382, 1, 2015-07-03, 67618862, 355324).
The statement has been terminated.
View 6 Replies
View Related
Sep 24, 2015
I have a table which has a clustered index on the col1 column. If I insert 10 into the table, what would the clustered index key value be? Is it going to be 10 as well? How do I get the clustered index key value?
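A small sketch for inspecting which column(s) make up a table's clustered index key (the table name is a placeholder); the key value for a given row is simply that row's values in those columns.

-- Lists the clustered index and its key columns for the table.
SELECT i.name AS IndexName, c.name AS KeyColumn, ic.key_ordinal
FROM sys.indexes AS i
JOIN sys.index_columns AS ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.MyTable')  -- placeholder table name
  AND i.type = 1                              -- clustered
  AND ic.key_ordinal > 0
ORDER BY ic.key_ordinal;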
View 5 Replies
View Related
Dec 26, 2007
The job is failing. The job contains a reorganize index task for 4 user databases.
There is nothing in the SQL error logs.
The event log and job history just show that the job failed.
One of the databases is huge.
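The step-level history in msdb often carries more detail than the job outcome row; a sketch (the job name below is a placeholder for the real one):

-- The [message] column usually contains the underlying error text for the failed step.
SELECT j.name, h.step_id, h.step_name, h.run_date, h.run_time, h.run_status, h.message
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE j.name = 'ReorganizeIndexJob'  -- replace with the actual job name
ORDER BY h.instance_id DESC;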
View 1 Replies
View Related
Feb 27, 2015
After reading some comments here I decided to look at tables to see if any had a clustered index that was a unique identifier. Yep. So if I have a table with a unique identifier as the primary key/clustered index and an identity column that is indexed, I would like to make the identity a clustered index (maybe even the primary key) and make the unique identifier a unique non-clustered index (not the primary key).
Does this sound reasonable? If I do this, will I need to drop and recreate the other indexes? Or maybe just rebuild the other indexes?
Currently:
CREATE TABLE Payments (
IDX INT IDENTITY(1,1) NOT NULL,
GUID UNIQUEIDENTIFIER NOT NULL DEFAULT(NEWID()),
.....
-- many other columns
);
GO
ALTER TABLE [dbo].[PAYMENTS] ADD CONSTRAINT [PK_PAYMENTS_GID] PRIMARY KEY CLUSTERED ([GUID] ASC);
GO
CREATE NONCLUSTERED INDEX [IX_Payments_ID] ON [dbo].[PAYMENTS] ([IDX] ASC);
GO
Would like:
ALTER TABLE [dbo].[PAYMENTS] ADD CONSTRAINT [PK_PAYMENTS_IDX] PRIMARY KEY CLUSTERED (IDX ASC);
GO
CREATE UNIQUE NONCLUSTERED INDEX [IX_Payments_GUID] ON [dbo].[PAYMENTS] (GUID ASC);
GO
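A hedged sketch of the steps that would have to precede the two statements above, assuming no foreign keys reference [PK_PAYMENTS_GID]. Note that the remaining nonclustered indexes do not need to be dropped by hand: they are rebuilt automatically when the clustered index is dropped, and again when the new clustered key is created.

-- Drop the old nonclustered index on IDX (replaced by the new clustered PK)
-- and the GUID clustered primary key, then run the two statements above.
DROP INDEX [IX_Payments_ID] ON [dbo].[PAYMENTS];
GO
ALTER TABLE [dbo].[PAYMENTS] DROP CONSTRAINT [PK_PAYMENTS_GID];
GO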
View 9 Replies
View Related
Apr 1, 2015
I've yet to use partitioning in a production environment, and I pretty much last ran any partitioning-related code a few years back when looking at certification, so I'm definitely not an expert on the matter and only loosely clued up on the concepts.
I've recently started with a new employer, and they have just implemented a new system for SMS messaging. The database tables tracking the SMS messages being sent are going to get big, so they have decided to implement partitioning on some of the tables using a partition scheme on the CreatedDate column. The DBA involved in designing the partitioning has left and I'm picking this up.
The relevant DDL for the table is below:-
CREATE TABLE [Message].[Sms](
[SmsId] [bigint] IDENTITY(250000001,1) NOT NULL,
[CreatedDate] [datetime] NOT NULL CONSTRAINT [DF_Sms:CreatedDate] DEFAULT (getdate()),
CONSTRAINT [PK_Sms:SmsId] PRIMARY KEY NONCLUSTERED
[code]....
There are some issues with the above that I will be addressing separately (e.g. the clustered index should be unique as it contains the unique key, and the fill factors are daft), but my concerns for this post are below.
1) How to define the primary key and enforce its uniqueness while keeping it aligned with the partition, so that old data can be switched out once an as-yet-undefined retention period has passed. Books Online states: "If it is not possible for the partitioning column to be included in the unique key, you must use a DML trigger instead to enforce uniqueness." (Books Online - Special Guidelines for Partitioned Indexes). However, I'm not sure what this means, nor how I create the primary key to use the partition function, seeing as the unique key doesn't contain CreatedDate.
2) The original partition function was envisaged as the following:-
CREATE PARTITION FUNCTION [DateFunction](datetime) AS RANGE
LEFT FOR VALUES (N'2014-01-01T00:00:00.000'
, N'2014-04-01T00:00:00.000'
, N'2014-07-01T00:00:00.000'
, N'2014-10-01T00:00:00.000'
, N'2015-01-01T00:00:00.000'
, N'2015-04-01T00:00:00.000'
, N'2015-04-02T00:00:00.000'
, N'2015-04-03T00:00:00.000'
, N'2015-04-04T00:00:00.000'
, N'2015-04-05T00:00:00.000')
GO
There is a procedure that has been created and scheduled daily that will create a new partition for each day, and then merge these together at the end of the quarter. My understanding of partitioning is that this is a bad idea, as it will result in merging several populated partitions together. Is my understanding correct? If so, I'm planning on removing the day partitions at the end of the function, and simply adding quarterly partitions, maintaining a spare empty partition at the end of the table. Would this make more sense?
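Two hedged sketches touching both questions, under the assumption that the partition scheme is named [DateScheme] (it is not shown in the DDL above): widening the primary key so it can be aligned, and adding the next quarterly boundary while the rightmost partition is still empty so the split moves no data.

-- 1) Aligned primary key: a unique, aligned index must contain the partitioning column,
--    so the key is widened to (SmsId, CreatedDate). Assumes the existing [PK_Sms:SmsId]
--    constraint has been dropped first and that [DateScheme] is the real scheme name.
ALTER TABLE [Message].[Sms]
ADD CONSTRAINT [PK_Sms] PRIMARY KEY NONCLUSTERED (SmsId, CreatedDate)
ON [DateScheme](CreatedDate);
GO
-- 2) Add the next quarterly boundary to the empty partition at the right-hand end.
ALTER PARTITION SCHEME [DateScheme] NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION [DateFunction]() SPLIT RANGE (N'2015-07-01T00:00:00.000');
GO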
View 9 Replies
View Related
Apr 8, 2015
I just did index defragmentation for some databases, including msdb. I notice there are 3 indexes in the msdb database that fragment quickly (I did a rebuild last night at 10 PM and the fragmentation level dropped to zero, but by 9 AM today it was back to 80%). The indexes are backupsetuuid, backupmediafamilyuuid and backupmediasetuuid. I am thinking of setting the fill factor for those indexes to 80.
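A sketch of one of the rebuilds with the lower fill factor, assuming the index names map to msdb.dbo.backupset and its sibling backup-history tables as the names suggest:

-- Leave 20% free space per page so newly inserted uuid values do not cause immediate page splits.
ALTER INDEX [backupsetuuid] ON msdb.dbo.backupset
REBUILD WITH (FILLFACTOR = 80);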
View 6 Replies
View Related
Jul 1, 2015
We are adding 4-5 indexes to one database and dropping 2 unused indexes. I don't have a proper testing environment. How should I monitor these index changes? Do we need to capture a baseline, given that we don't get the same load all the time, every day?
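Even without a baseline, the index usage DMV gives a rough read-versus-write picture for the new indexes; a sketch:

-- Seeks/scans/lookups vs. updates since the last service restart; low reads with
-- high writes suggests an index that costs more than it earns.
SELECT OBJECT_NAME(s.object_id) AS TableName,
       i.name AS IndexName,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();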
View 8 Replies
View Related
Dec 4, 2007
SQL Server 2005 version: 2153
I created a maintenance plan for the system and user databases that includes rebuild index and maintenance cleanup tasks.
The job is failing for the user databases.
It includes a rebuild index task (online index enabled) and a maintenance cleanup task, scheduled every Sunday at 1 AM.
I receive following errors:
In the Event Viewer log:
SQL Server Scheduled Job 'DBMP_RebuildIndex_User'
Status: Failed - Invoked on 2007-12-02 at 1:00. Message: The job failed. The job was invoked by Schedule 8 ('DBMP_RebuildIndex_User-Schedule'). The last step to run was step 1 ('DBMP_RebuildIndex_User').
In the log report:
Failed: (-1073548784) Executing the query "ALTER INDEX [XPKact_log] ON [dbo].[act_log] REBUILD WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, SORT_IN_TEMPDB = OFF, ONLINE = ON)" failed with the following error: "Online index operation cannot be performed for index 'XPKact_log' because the index contains column 'action_desc' of data type text, ntext, image, varchar(max), nvarchar(max), varbinary(max) or xml. For a nonclustered index the column could be an include column of the index; for a clustered index it could be any column of the table. In case of drop_existing the column could be part of a new or old index. The operation must be performed offline." Possible failure reasons: problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
Can anyone help me with this? I'd really appreciate it.
Thanks
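One hedged workaround is to rebuild that particular index offline in a separate T-SQL step (the table name act_log is inferred from the index name in the error):

-- The LOB column (action_desc) blocks ONLINE = ON on this version, so rebuild offline.
ALTER INDEX [XPKact_log] ON [dbo].[act_log]
REBUILD WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, ONLINE = OFF);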
View 1 Replies
View Related
Apr 15, 2008
Getting this error when running a maintenance plan step. The backup steps work fine.
" Description: Failed to acquire connection "Local server connection". Connection may not be configured correctly or you may not have the right permissions on this connection. End Error Warning: 2008-04-15 09:15:03.02 Code: 0x80019002 Source: OnPreExecute Description: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors. End Warning Error: 2008-04-15 09:15:45.02 Code: 0xC0024104 Source: Reorganize Index Task ... The package execution fa... The step failed."
The step is run under the sql service agent account.
We are at SP2 on a 64 bit machine.
Thanks.
Sam
View 5 Replies
View Related
Jul 9, 2015
Does including non-key (INCLUDE) columns in an index help the performance of queries that use it?
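For reference, a covering-index sketch with hypothetical table and column names: the INCLUDE columns are stored only at the leaf level, so they let the index cover a query's SELECT list without widening the key.

-- The key is what the query filters on; the INCLUDEd columns only need to satisfy
-- the SELECT list, so key lookups are avoided.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, TotalDue);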
View 8 Replies
View Related
Oct 30, 2015
Given a user table 'MyTable', how can I tell whether the table has a non-unique clustered index using a SQL query?
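A minimal sketch:

-- Returns a row only when MyTable has a clustered index that is not unique.
SELECT i.name AS ClusteredIndexName, i.is_unique
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID('dbo.MyTable')
  AND i.type = 1        -- clustered
  AND i.is_unique = 0;  -- non-unique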
View 2 Replies
View Related
Feb 1, 2012
I am tasked with identifying the source database name, ID, and server name for each staging table that I create. I need to add this as a derived column on all staging tables created by merging the same tables from different servers together.
When doing a Merge Join, there is no way to identify the source of the data, so I would like to see whether data came from one database more than the other servers, or whether there are duplicates across servers.
The thing that bugs me about the SSIS Data Flow task is that there is no easy way to do an Execute SQL Task after I select my ADO.NET Source to get this information, because my connection string is dynamic and there is no way of knowing which data source is being picked up at runtime.
For example, I have a Products table on Server 1 and Server 2:
Server 2 has more Products, and I would like to join the two together to create a staging table.
I want to see the following:
Product ID, Product Name, Qty, Src_DB_ID, Src_DB_Name, Src_Server_Name
1 IPAD 1000 2, MyDB1, Server1
100 ASUS Pad 40 1, YourDB, Server2
How can I get the database name and server name in the Data Flow only (without using a Foreach Loop in the Control Flow)?
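One hedged approach that stays inside the Data Flow is to put the identifying values into the source query itself, so whichever connection is resolved at runtime reports its own database and server. The column names below follow the example output above and are assumptions:

-- Used as the ADO.NET source query on each server.
SELECT p.ProductID, p.ProductName, p.Qty,
       DB_ID()      AS Src_DB_ID,
       DB_NAME()    AS Src_DB_Name,
       @@SERVERNAME AS Src_Server_Name
FROM dbo.Products AS p;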
View 5 Replies
View Related
Jul 20, 2005
We are having quite a time since moving a large database to a new server (actually we built a new server and renamed it as the old one to make it seamless for users, etc.).
We imported the 104 million row database (5 columns) into a table (CD_Assets_bad2) from the existing table (CD_Assets):
Account (varchar(8))
TransactionDate (datetime(8))
Flow (varchar(1))
Category (varchar(7))
TotalValue (decimal(8))
Ran DBCC CHECKTABLE - no issues.
Created 4 non-clustered indexes (3 single-column, 1 two-column). All indexes create fine.
Ran DBCC CHECKTABLE again and received the following:
Server: Msg 8951, Level 16, State 1, Line 1
Table error: Table 'CD_Assets_bad2' (ID 244195920). Missing or invalid key in index 'idx_totalvalue' (ID 7) for the row:
Server: Msg 8955, Level 16, State 1, Line 1
Data row (1:11154499:98) identified by (RID = (1:11154499:98)) has index values (TotalValue = -10).
Server: Msg 8952, Level 16, State 1, Line 1
Table error: Database 'CD', index 'CD_Assets_bad2.idx_totalvalue' (ID 244195920) (index ID 7). Extra or invalid key for the keys:
Server: Msg 8956, Level 16, State 1, Line 1
Index row (1:20855652:338) with values (TotalValue = -0¤4?) points to the data row identified by (RID = (1:11154499:98)).
DBCC results for 'CD_Assets_bad2'.
There are 104397173 rows in 677904 pages for object 'CD_Assets_bad2'.
CHECKTABLE found 0 allocation errors and 2 consistency errors in table 'CD_Assets_bad2' (object ID 244195920).
repair_fast is the minimum repair level for the errors found by DBCC CHECKTABLE (CD.dbo.CD_Assets_bad2).
Any ideas? It seems like some sort of corruption, but the index creates fine. If anyone can help please let me know. If I can provide any additional information that might help, please let me know.
Thanks,
David
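Since the reported inconsistency is confined to the nonclustered index idx_totalvalue, one hedged first step (before reaching for any REPAIR option) is simply to drop and recreate that one index from the base data and re-run the check:

-- Rebuild only the affected nonclustered index, then verify the table again.
-- (On SQL Server 2000 the older DROP INDEX CD_Assets_bad2.idx_totalvalue form applies.)
DROP INDEX idx_totalvalue ON dbo.CD_Assets_bad2;
CREATE NONCLUSTERED INDEX idx_totalvalue ON dbo.CD_Assets_bad2 (TotalValue);
GO
DBCC CHECKTABLE ('dbo.CD_Assets_bad2');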
View 2 Replies
View Related
May 31, 2015
I am new to MS SQL Server. There is a table in one of my databases that occupies a lot of space. The space usage is as follows (all values in KB):
reserved: 42329064
data: 16272288
index: 26050032
unused: 6744
This table takes up almost 80% of my database size, and the information in this table just captures the time spent by a user on the website (not very critical data).
I would like to know how to delete the entire index (which is what is occupying most of the space) to free up disk space. The index is a clustered index.
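Before dropping anything, it is worth confirming exactly which index is using the space, bearing in mind that the clustered index is the table's data itself, so dropping it will not release the data pages. A sketch (the table name is a placeholder):

-- Space used per index; index_id 0/1 is the heap/clustered index (the data itself),
-- higher ids are nonclustered indexes.
SELECT i.index_id, i.name AS IndexName, i.type_desc,
       SUM(ps.used_page_count) * 8 AS UsedKB
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.object_id = OBJECT_ID('dbo.YourTable')  -- placeholder table name
GROUP BY i.index_id, i.name, i.type_desc
ORDER BY UsedKB DESC;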
View 3 Replies
View Related
Jun 17, 2015
I run a query
select col1, col2, col3, col4
from Table
where col2=5
order by col1
I have a primary key on the column. The execution plan shows the clustered index scan costing 30% and the sort costing 70%. When I run the query I get a missing index hint on col2 with 95% impact, so I created a nonclustered index on col2. The total execution time decreased by around 80 ms, but I don't see that index name being used in the execution plan; after creating the index I am still seeing the same plan.
The execution plan still shows the clustered index scan at 30% and the sort at 70%, but I can see the total time and the logical reads on that table are reducing. I am sure the index is useful, but why is there no change in the execution plan?
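A hedged sketch of a covering version of that index: with the equality column leading the key, col1 second to satisfy the ORDER BY, and the remaining selected columns in INCLUDE, the optimizer has far less reason to prefer the clustered scan plus sort.

-- Key (col2, col1): seek on col2 = 5, rows already ordered by col1, so the sort disappears;
-- INCLUDE covers the rest of the SELECT list so no key lookup is needed.
CREATE NONCLUSTERED INDEX IX_Table_col2_col1
ON dbo.[Table] (col2, col1)
INCLUDE (col3, col4);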
View 7 Replies
View Related
Aug 2, 2015
I am extremely new to database design, and I ran into a problem that I know comes up often, however has many opinions...
Basically I have a table that is going to have 50+ columns. The natural key on this table is actually 8 columns wide, 4 of them being Varchar columns by default. (varchar(50)'s).
I have added an identity column, (1,1) to the table, however I put the clustered index on the 8 natural keys... My plan is to rebuild the clustered index once nightly when the system isn't in use (after 7 pm).
I know others would say it would be better to have the clustered key on the 1,1 column and then add indexes on the other 8 fields... However I don't quite understand why honestly...
Every single query against this table will use the 8 columns, and will NOT use the Identity column (1,1) because they are calls from other systems that do not know the Identity column....
Therefore if your database is set up for query speed, and every single query has to have a value for 8 columns to get a valid result, does it make sense to put a clustered index over the 8 columns?
If not why? Why is putting a clustered index on an identity column (that will literally never be used in a query) a better solution?
View 9 Replies
View Related
Sep 30, 2015
Table Name: Denominator
Already has the following constraint:
PK_Denominator | clustered, unique, primary key located on PRIMARY | DenominatorID
How can I add a unique key that will cover the 3 fields --> MemberID,MeasureID,TimePeriodID
I also want to know whether we can include the " WITH ( IGNORE_DUP_KEY=ON ) "
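A minimal sketch of a unique index over the three columns, with the IGNORE_DUP_KEY option included:

-- Duplicate inserts raise a warning and are discarded instead of failing the whole statement.
CREATE UNIQUE NONCLUSTERED INDEX UQ_Denominator_Member_Measure_TimePeriod
ON dbo.Denominator (MemberID, MeasureID, TimePeriodID)
WITH (IGNORE_DUP_KEY = ON);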
View 3 Replies
View Related
Oct 13, 2015
I have three Win2k8 R2 servers. Two are running SQL 2008 R2 and are mirrored; the third server is a witness. Every so often we get errors and failovers of the mirror. The communication errors are between the witness server and the SQL servers; there are no errors between the SQL servers. There seem to be no network issues and it happens randomly.
Event IDs 1474 and 1479.
The mirroring connection to "TCP://witnesssrv:5022" has timed out for database "DB" after 30 seconds without a response. Check the service and network connections.
Database mirroring connection error 4 'An error occurred while receiving data: '64(The specified network name is no longer available.)'.' for 'TCP://witnesssrv:5022'.
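If the drops turn out to be transient, one hedged mitigation (alongside chasing the underlying network cause) is to lengthen the mirroring partner timeout so brief interruptions do not trigger a failover; run on the principal:

-- Raise the timeout (in seconds) above the current 30; 'DB' is the database name from the error.
ALTER DATABASE [DB] SET PARTNER TIMEOUT 90;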
View 4 Replies
View Related
Apr 22, 2015
What is the best way to forecast/estimate the space needed for a non-clustered index on a table?
Example :
Table name : Test123
Row : 170000000
Reserved : 18000000 KB
Data : 70000000 KB
Index: 40000000 KB
Note: Test123 already has clustered index and 2 non clustered indexes.
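A rough back-of-the-envelope sketch: a nonclustered index row is approximately the proposed key columns plus the clustered key plus a few bytes of row overhead, multiplied by the row count. The column names below are placeholders.

-- Very rough estimate in KB; replace ProposedKeyCol / ClusteredKeyCol with the real columns.
SELECT COUNT_BIG(*)
       * (AVG(CAST(DATALENGTH(ProposedKeyCol) AS BIGINT))
          + AVG(CAST(DATALENGTH(ClusteredKeyCol) AS BIGINT))
          + 7) / 1024 AS EstimatedKB
FROM dbo.Test123;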
View 7 Replies
View Related
Sep 15, 2015
I have a query with an expensive Key Lookup on a joined table. The predicate is the column that I'm joining on, and the output list contains two columns from the joined table.
I've created a basic non-clustered index on the predicate column, INCLUDE-ing the two output columns. However, the execution plan ignores this and insists on using the primary key of the joined table to do the expensive key lookup. I've tried adding the included columns to the index key directly and there's no change. I've also tried running DBCC FREEPROCCACHE, and no change.
View 3 Replies
View Related
Apr 26, 2015
With MERGE/INSERT statements, is DISTINCT the best way to handle the problem of the source table containing duplicate rows, together with a WHERE NOT IN clause? The source dataset is large, and having to do DISTINCT plus further filtering is taxing on the ETL.
DDL
source table
CREATE TABLE [dbo].[source](
[Product_ID] [INT] NOT NULL,
[ProductCode] [VARCHAR](20) NULL,
[ProductName] [VARCHAR](100) NULL,
[ProductColor] [VARCHAR](20) NULL,
[code]....
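One hedged alternative to DISTINCT plus WHERE NOT IN: number the duplicates once with ROW_NUMBER and let the insert read only the first row per key, doing the existence check as a join. The target table name and its column list are assumptions.

-- Keep one row per Product_ID from the duplicated source, insert only keys missing from the target.
;WITH src AS (
    SELECT s.*,
           ROW_NUMBER() OVER (PARTITION BY s.Product_ID ORDER BY s.Product_ID) AS rn
    FROM dbo.source AS s
)
INSERT INTO dbo.target (Product_ID, ProductCode, ProductName, ProductColor)
SELECT src.Product_ID, src.ProductCode, src.ProductName, src.ProductColor
FROM src
LEFT JOIN dbo.target AS t ON t.Product_ID = src.Product_ID
WHERE src.rn = 1
  AND t.Product_ID IS NULL;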
View 0 Replies
View Related
Aug 10, 2015
I'm trying to load data from an old SQL Server 2000 instance to a new SQL Server 2014 instance. I need to do a checksum to check whether all the source data has been loaded into the target database (SQL Server 2014). I've created the insert statement for this, which works. I now need to use a checksum to make sure all the source rows are loaded into the target table; I haven't done checksums before.
Here is my insert statement:
INSERT INTO [Test].[dbo].[Order_tab]
([rec_id]
,[date_loaded]
,[Name1]
,[Name2]
,[Address1]
,[Address2]
[code]....
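A hedged sketch of an aggregate comparison after the load: CHECKSUM_AGG over BINARY_CHECKSUM of the same column list on both sides. Equal results strongly suggest (though do not absolutely guarantee) that the row sets match. The linked server name and the column list are assumptions.

-- Run the same aggregate on source (via a linked server here) and target, then compare the two values.
SELECT CHECKSUM_AGG(BINARY_CHECKSUM(rec_id, Name1, Name2, Address1, Address2)) AS src_checksum
FROM OLD2000.SourceDb.dbo.Order_tab;

SELECT CHECKSUM_AGG(BINARY_CHECKSUM(rec_id, Name1, Name2, Address1, Address2)) AS tgt_checksum
FROM [Test].[dbo].[Order_tab];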
View 2 Replies
View Related
Sep 22, 2015
I have a field I am trying to bring into a SQL statement
,ISNULL(Convert(nvarchar(50),OPP1.OriginatingLeadId),'') AS 'OriginatingLeadId'
I get this error message
Conversion failed when converting from a character string to uniqueidentifier.
The field spec for OriginatingLeadId is attached ....
View 9 Replies
View Related
Jul 16, 2015
We noticed a deadlock 3-4 weeks ago on a table (table1) and deadlock graph was captured.
When I analyze the page number from the deadlock graph using DBCC PAGE, I get the object ID of a different table (table2), but the deadlock graph shows the object name as table1.
Is it possible that a subsequent defragmentation of the indexes caused that page ID to get re-allocated to a different table? I only looked at the deadlock graph 3-4 weeks later.
View 1 Replies
View Related
Jul 14, 2015
I have a table called Signatures
It contains the fields
Parentobject and [View As]
There can be more than one row per ParentObject, so I want to concatenate the [View As] values so I have them on one line.
for example
901 Joe Dow
901 Jane Dow
I want one line - | 901 | Joe Dow, Jane Dow
I found something similar to the code below, but I'm getting duplicates, like:
901 |Joe Dow , Joe Dow
901 | Jane Dow, Jane Dow
DECLARE @Delimiter VARCHAR(10) = ' '; -- this is the delimiter we will use when we concatenate the values
SELECT DISTINCT
ParentObject,
(SELECT STUFF(
(SELECT @Delimiter + s1.[View As]
FROM Signatures s2
[Code] .....
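A hedged version of the same pattern with the usual causes of the duplication addressed: the inner query is correlated back to the outer ParentObject and made DISTINCT, so each name appears once per group.

DECLARE @Delimiter VARCHAR(10) = ', ';
SELECT s1.ParentObject,
       STUFF((SELECT DISTINCT @Delimiter + s2.[View As]
              FROM Signatures AS s2
              WHERE s2.ParentObject = s1.ParentObject  -- correlation: only this row's signers
              FOR XML PATH(''), TYPE).value('.', 'VARCHAR(MAX)'),
             1, DATALENGTH(@Delimiter), '') AS [View As]
FROM Signatures AS s1
GROUP BY s1.ParentObject;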
View 3 Replies
View Related
Aug 28, 2015
We need to download files from an FTP location, so we are using a Script Component as the source with the following code:
// read a file and write out as columns in rows
// requires: using System.Net; using System.IO; (plus the package variables referenced below)
WebClient myWebClient = new WebClient();
myWebClient.Credentials = new NetworkCredential(Variables.ftpLogin, Variables.ftpPassword);
// Concatenate the domain with the Web resource filename.
string myStringWebResource = Variables.fileURL + Variables.fileName;
// Download the whole resource as a string, then read it line by line.
string s = myWebClient.DownloadString(myStringWebResource);
StringReader reader = new StringReader(s);
We are getting the following error: "The server returned an error: (404) Not Found." Are we missing anything in the code?
View 2 Replies
View Related