Hi,
I am trying to import data from a text file into a database table using the SQL Server BCP utility. I am able to do that when all the records in my text file are new, but I get a primary key violation error when I try to import a record that already exists in the table. That is correct behaviour, but I want my program to ignore these errors and import only the records that are fine.
I tried the [-m maxerrors] option, but it is not working. My bcp run is interrupted at the first error, even when I give the [-m100] option.
My command looks something like this:
bcp pub..employee in C:\data.txt -b1 -m100 -c -t, -Sdatabase -Uuser -Ppassword
Here, -b1 means process 1 row per batch/transaction,
and -m100 means ignore the first 100 errors.
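A common workaround, rather than relying on bcp's error handling, is to bulk load into a keyless staging table and then copy across only the rows that are not already present. A minimal sketch, assuming a staging table named employee_staging and key/name columns emp_id and emp_name (all hypothetical names):

-- 1. Load the text file into an empty staging table with no primary key:
--    bcp pub..employee_staging in C:\data.txt -c -t, -Sdatabase -Uuser -Ppassword

-- 2. Insert only the rows whose key is not already in the target table.
INSERT INTO employee (emp_id, emp_name)
SELECT s.emp_id, s.emp_name
FROM   employee_staging AS s
WHERE  NOT EXISTS (SELECT 1 FROM employee AS e WHERE e.emp_id = s.emp_id)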
In my stored procedure I'm calling a buggy and flaky stored procedure that comes from a third party. When I run my stored procedure from QA, I'm getting a whole bunch of errors raised inside the third-party one. Is there any way I could just ignore them, so that if I run my SP from QA, only errors from my code, if any, show up? TIA
If there is an error in the trigger, the update to the table does not happen. Is there a way to make SQL ignore errors in a trigger and still update the table?
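If you are on SQL Server 2005 or later, one option is to catch the error inside the trigger itself so it never rolls back the outer statement. A rough sketch, with made-up trigger, table, and audit-table names:

CREATE TRIGGER trg_MyTable_Update ON MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        -- the failure-prone work goes here, e.g. logging to another table
        INSERT INTO AuditLog (ChangedAt)
        SELECT GETDATE() FROM inserted;
    END TRY
    BEGIN CATCH
        -- swallow the error so the triggering UPDATE still commits
        DECLARE @msg nvarchar(2048);
        SET @msg = ERROR_MESSAGE();   -- consider at least logging this somewhere
    END CATCH
END

Note that if the failure dooms the transaction (XACT_STATE() = -1), the outer UPDATE will still roll back; this only absorbs recoverable errors.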
I've started using TRY/CATCH in my T-SQL code now and I rather like it, since I'm a C# developer. I read that errors with a severity of 10 or below will not cause the CATCH block to be entered. What kinds of errors does this include?
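For what it's worth, TRY/CATCH only traps errors with a severity higher than 10; severity 10 and below are treated as informational messages and simply print without transferring control. A quick illustration (the message text is made up):

BEGIN TRY
    RAISERROR('just a warning', 10, 1);   -- severity 10: not caught, returned as a message
    RAISERROR('real error', 16, 1);       -- severity 16: control jumps to CATCH
    PRINT 'never reached';
END TRY
BEGIN CATCH
    PRINT 'caught: ' + ERROR_MESSAGE();
END CATCH

Errors that sever the connection (severity 20 and above) and compilation errors in the same scope also bypass the CATCH block.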
Hello - the very nature of this question seems to make no sense I know - but we received a huge volume of data (29 tables) in flat file format. I first imported them into MS Access because of its portability and it seemed to be more forgiving on imports. Now I have a complete MS Access DB with all tables, so I figured importing to SQL server should be a snap. However, on the import, I had 14 tables import successfully, and 15 failed!
Here is an example of one of the error messages I received: Insert Error, Column 3 - status 6; Data Overflow... This was on a date/time field in Access, and here is the data contained in the referenced row/column: "8/19/4999"
The year "4999" is obviously the problem (at least I think so), and I have no idea why this imported successfully into MS Access but not into SQL Server...
What I'd like to be able to do (not the best practice, I know) for now is ignore these types of errors - and just force SQL Server to take the data straight from MS Access and replicate it. We received this data from a 3rd party, and there's no telling how many data entry errors like this could be in each table - many of the tables have over 500,000 rows, and I don't want to have to go through fixing each of these errors by hand... anyone have any ideas?
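One way to sidestep this without hand-fixing rows is to land the suspect columns as text in a staging table and convert only the values the destination type can actually hold. A sketch, assuming the destination column is smalldatetime (which tops out in 2079) and using made-up names ImportStage, RawDate, TargetTable, SomeDate:

INSERT INTO TargetTable (SomeDate)
SELECT CASE WHEN ISDATE(RawDate) = 1
            THEN CASE WHEN CONVERT(datetime, RawDate) < '20790606'
                      THEN CONVERT(smalldatetime, RawDate)
                 END                      -- out-of-range dates become NULL
       END                                -- malformed dates become NULL
FROM   ImportStage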
Is there any way to emulate the try/catch mechanism that SQL Server 2005 provides using SQL Server 2000? Or, more simply, is there any kind of IGNORE_ERROR or CONTINUE_ON_ERROR setting for SQL Server 2000?
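There is no TRY/CATCH and no CONTINUE_ON_ERROR setting in SQL Server 2000; the usual substitute is to test @@ERROR immediately after each statement and branch yourself. A minimal sketch:

DECLARE @err int

UPDATE SomeTable SET SomeCol = 1 WHERE SomeKey = 42   -- example statement
SET @err = @@ERROR                                     -- capture it right away

IF @err <> 0
BEGIN
    -- the "catch" part: log, clean up, or just carry on
    PRINT 'Statement failed with error ' + CAST(@err AS varchar(10))
END

Batch-aborting errors never reach the IF, so this is not a full TRY/CATCH replacement, but it covers the common statement-level failures.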
I have a package that uses a For Loop to iterate through an unknown number of Excel files and pull their data into a table. However, there will be cases when a file is corrupted or has some other problem, so that either the transformation will fail or the Excel source will fail.
I have it so that for each iteration if the transform was successful the file is moved to an archive directory, and if it fails the file is moved to a different directory.
But I don't want the package to be marked as failed. For the control flow tasks I have set the individual components to FailPackageOnFailure = False, and for the Data Flow tasks I have set ValidateExternalMetadata = False.
It's no use setting the MaximumErrorCount higher, because I can't guarantee how many files will be processed and how many might fail.
Could anyone suggest a clean way to trap these errors? Specifically, the "Cannot Acquire Connection from Connection Manager" error, which is the Excel connection.
I have a big table and want to do a plausibility check of its data.
The problem is that my query stops if there is an unexpected datatype in one of the rows. But that is exactly what I want to filter out of my table with this query, saving the result as a new, correct table.
How can I write my SQL so that, if an error occurs, the query resumes and the offending row is simply not included in my final query result?
In my books and on the net, I couldn't find anything on this topic ;-(
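If the "unexpected datatype" means values stored as text that don't convert cleanly, you can usually filter them out up front instead of letting the conversion fail. A sketch, assuming the raw values sit in a varchar column RawValue of BigTable and the good rows should go into a new table (all names made up):

SELECT *
INTO   CorrectTable                 -- new table containing only convertible rows
FROM   BigTable
WHERE  ISNUMERIC(RawValue) = 1      -- or ISDATE(RawValue) = 1 for date columns

-- and the rejects, for inspection:
SELECT * FROM BigTable WHERE ISNUMERIC(RawValue) = 0

Bear in mind ISNUMERIC has quirks (it accepts things like '.', '$', and 'e' notation), so you may need a stricter check depending on the data.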
I have an application that is moving from a home-made full text search engine to the full text indexing engine of SQL 2005. I have a stored procedure that I want to behave as follows: check the Documents table to determine whether a full text index for SQL's full text engine has been created. If it has not, query the DocumentText table (which is the table for my in-house full text search). If it has, use the full text indexing engine.
My problem is that compilation of the T-SQL to create the stored procedure fails when the full text index has not already been created, with the following error:
Msg 7601, Level 16, State 2, Procedure My_FullTextSearch, Line 0
Cannot use a CONTAINS or FREETEXT predicate on table or indexed view 'Documents' because it is not full-text indexed.
In my test lab, I tried: 1. creating the full text index 2. creating the stored procedure 3. deleting the full text index, which gets me to the desired end result of having a stored procedure that can determine whether or not the full text index has been created yet (the procedure works in this state). But creating this index as part of the stored procedure creation in production is not an option.
My question - Can I somehow tell SQL to ignore the compilation errors it encounters while creating this stored procedure? If not, is there some other way to create this "smart" stored procedure?
Here's a code snippet stripped down to the bare minimum to generate the error:
CREATE PROCEDURE [My_FullTextSearch]
@Term VarChar(1000)
AS
BEGIN
SET NOCOUNT ON;
IF NOT OBJECTPROPERTY(OBJECT_ID('Documents'), 'TableHasActiveFulltextIndex')=1
BEGIN
Select [DocumentID]
from [DocumentText]
where [Term] like '%' + LTRIM(@Term) + '%'
END
ELSE
BEGIN
Select [key] from FREETEXTTABLE(Documents, Contents, @Term)
END
END
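One workaround (a sketch, not the only approach) is to move the FREETEXTTABLE query into dynamic SQL. Dynamic SQL is only parsed when it executes, so the procedure can be created before the full-text index exists; the ELSE branch above could be rewritten along these lines:

ELSE
BEGIN
    -- not validated at CREATE PROCEDURE time, only when this branch actually runs
    EXEC sp_executesql
        N'SELECT [key] FROM FREETEXTTABLE(Documents, Contents, @Term)',
        N'@Term varchar(1000)',
        @Term = @Term
END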
I recently updated the datatype of a sproc parameter from bit to tinyint. When I executed the sproc with the updated parameters the sproc appeared to succeed and returned "1 row(s) affected" in the console. However, the update triggered by the sproc did not actually work.
The table column was a bit which only allows 0 or 1 and the sproc was passing a value of 2 so the table was rejecting this value. However, the sproc did not return an error and appeared to return success. So is there a way to configure the database or sproc to return an error message when this type of error occurs?
I have a parent package that calls child packages inside a For Each container. When I debug/run the parent package (from VS), I get the following error message: Warning: The Execution method succeeded, but the number of errors raised (3) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
It appears to be failing while executing the child package. However, the logs (via the "progress" tab) for both the parent package and the child package show no errors other than the one listed above (and that shows in the parent package log). The child package appears to validate completely without error (all components are green and no error messages in the log). I turned on SSIS logging to a text file and see nothing in there either.
If I bump up the MaximumErrorCount in the parent package and in the Execute Package Task that calls the child package to 4 (to go one above the error count indicated in the message above), the whole thing executes successfully. I don't want to leave the maximum error count set like this. Is there something I am missing? For example, are there errors that do not get logged by default? I get some warnings; does a certain number of warnings add up to an error?
Starwin writes "when I execute DBCC CHECKDB and DBCC CHECKCATALOG I received the following errors. How do I solve them?
Server: Msg 8909, Level 16, State 1, Line 1 Table error: Object ID -2093955965, index ID 711, page ID (3:2530). The PageId in the page header = (34443:343146507). . . . . . . . .
CHECKDB found 0 allocation errors and 1 consistency errors in table '(Object ID -1635188736)' (object ID -1635188736). CHECKDB found 0 allocation errors and 1 consistency errors in table '(Object ID -1600811521)' (object ID -1600811521).
. . . . . . . .
Server: Msg 8909, Level 16, State 1, Line 1 Table error: Object ID -8748568, index ID 50307, page ID (3:2497). The PageId in the page header = (26707:762626875). Server: Msg 8909, Level 16, State 1, Line 1 Table error: Object ID -7615284, index ID 35836, page ID (3:2534). The PageId in the page heade"
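The standard response to Msg 8909 consistency errors is to restore from a known-good backup. If that is not possible, CHECKDB can attempt a repair, which may discard data, so treat the following only as a last-resort sketch and take a backup first:

ALTER DATABASE YourDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE   -- repair requires single-user mode
DBCC CHECKDB ('YourDb', REPAIR_ALLOW_DATA_LOSS)                 -- may delete damaged pages/rows
ALTER DATABASE YourDb SET MULTI_USER

It is also worth ruling out hardware/IO problems before repairing, since they are a common cause of this kind of corruption.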
When I checked the machine this morning I saw that the MS DTC service was stopped, and when I started it, it ran for a second or two and then stopped automatically. I was not sure what was going on...
Then I checked the Event Viewer; it said the log file is full. I tried to find out how to reset the MS DTC log or increase its size, but could not. Where can I find this?
One more thing: what is the purpose of DTC? I know in SQL 6.5 we had everything in Enterprise Manager, but I can't see the same in either 7.0 or 2000. Has the name changed, or is it linked to some other component?
Also, do linked servers depend on MS DTC? What is its basic purpose? Kindly tell me what I should do now...
Is there a way I can perform an update on a table, but ignore the trigger (or disable the trigger) each time I run a particular update script from a DTS package? I would like subsequent DTS steps to use the trigger, except for my last update statement. Is this possible? Thanks, Frank
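Yes - one option is to disable the trigger just around that one update and re-enable it afterwards, so the other DTS steps still fire it. A sketch with made-up table, column, and trigger names:

ALTER TABLE dbo.MyTable DISABLE TRIGGER trg_MyTable_Update

UPDATE dbo.MyTable SET SomeCol = 1 WHERE SomeKey = 42   -- the one update that should skip the trigger

ALTER TABLE dbo.MyTable ENABLE TRIGGER trg_MyTable_Update

Be aware that any other session updating the table during that window also skips the trigger, so keep the disabled period as short as possible.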
Hi, I'm trying to upload a large number of log entries currently stored as text files into a database table using bcp. For a few rows I get a "right truncation" error and the offending rows are not uploaded to the table. I don't want to increase the size of the table's varchar fields because only about a dozen out of almost a million rows have this problem... I want to provide an override - i.e. if a row would result in truncated data, truncate it but still bulk copy the offending row. Is that possible? I couldn't find such an option in the documentation. Any help is greatly appreciated. Thanks, Mudassir Latif
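bcp itself has no "truncate and keep going" switch that I know of; the usual dodge is to load into a staging table with generously sized varchar columns (so bcp never errors) and truncate explicitly when copying across. A sketch, with LogEntries, LogEntries_Staging, and LogText all being made-up names:

-- LogEntries_Staging has the same columns as LogEntries but much wider varchars
INSERT INTO LogEntries (LogText)
SELECT LEFT(LogText, 500)          -- 500 = declared width of LogEntries.LogText; any excess is cut off
FROM   LogEntries_Staging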
Is there a way to make an SP ignore an error? E.g. I'm looping through each database on a server, checking whether a table exists, then selecting a value from that table. Now I have a database put onto the server where the table exists but all the column names are different. My SP is not interested in this database, so when it errors with "invalid column name" I want it to move on to the next database and not display any error message.
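Since the SP is already driving each database dynamically, one option is to check that the expected column exists before running the SELECT, so the error never happens. A sketch, assuming the loop supplies the database name in @db and the table/column are SomeTable/SomeCol (made-up names):

DECLARE @db sysname, @sql nvarchar(1000), @found int
SET @db = N'SomeDatabase'                    -- supplied by your loop in practice

SET @sql = N'SELECT @found = COUNT(*)
             FROM ' + @db + N'.dbo.syscolumns c
             JOIN ' + @db + N'.dbo.sysobjects o ON o.id = c.id
             WHERE o.name = ''SomeTable'' AND c.name = ''SomeCol'''
EXEC sp_executesql @sql, N'@found int OUTPUT', @found = @found OUTPUT

IF @found > 0
    EXEC ('SELECT SomeCol FROM ' + @db + '.dbo.SomeTable')
-- else: quietly skip this database and continue the loop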
I'm writing a stored procedure to accept search strings from users on my site. Currently, this is what I have:
Code Snippet
@schoolID int = NULL,
@scholarship varchar(250) = NULL,
@major varchar(250) = NULL,
@requirement varchar(250) = NULL
--@debug bit = 0
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
SELECT *
FROM [scholarship]
WHERE ([sectionID] = @schoolID OR @schoolID IS NULL)
  AND ([schlrPrefix] LIKE '%' + @scholarship + '%'
       OR [schlrName] LIKE '%' + @scholarship + '%'
       OR [schlrSufix] LIKE '%' + @scholarship + '%'
       OR @scholarship IS NULL)
  AND ([Specification] LIKE '%' + @major + '%' OR @major IS NULL)
  AND ([reqr1] LIKE '%' + @requirement + '%'
       OR [reqr2] LIKE '%' + @requirement + '%'
       OR [reqr3] LIKE '%' + @requirement + '%'
       OR [reqr4] LIKE '%' + @requirement + '%'
       OR [reqr5] LIKE '%' + @requirement + '%'
       OR @requirement IS NULL)
END
The problem is, sometimes the search doesn't work if there is a space behind or in front of the search string. I wonder if there is a way to ignore any spaces and go right to whatever character comes next. If so, how do I implement that?
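Trimming the parameters at the top of the procedure is usually enough; LTRIM strips leading spaces and RTRIM strips trailing ones, and NULL inputs stay NULL so the "@x IS NULL" branches still work. Something like:

-- normalize the inputs once, before they are used in the LIKE patterns
SET @scholarship = LTRIM(RTRIM(@scholarship))
SET @major       = LTRIM(RTRIM(@major))
SET @requirement = LTRIM(RTRIM(@requirement))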
I have a stored procedure that does several steps. During the stored proc, error messages are produced when certain conditions warrant, but I want to continue anyway... e.g. in a loop in the proc:
SET @strSQL = 'Update Table1 SET col1 = ''' + @strVariable + ''''
EXEC (@strSQL)
--ERROR created by the exec statement.....
When I schedule this as a job, the first error message makes the job fail. How can I force the proc to run completely, even if an error occurs?
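If you're on SQL Server 2005 or later, wrapping the EXEC in TRY/CATCH keeps the error from bubbling up to the Agent job, so the loop can keep going. A sketch of the loop body (variables assumed to be declared earlier in the proc):

BEGIN TRY
    SET @strSQL = 'Update Table1 SET col1 = ''' + @strVariable + ''''
    EXEC (@strSQL)
END TRY
BEGIN CATCH
    -- note the failure and carry on with the next iteration
    PRINT 'Skipped one update: ' + ERROR_MESSAGE()
END CATCH

On SQL 2000 you would instead check @@ERROR after the EXEC, though errors raised back to the client can still make the job step report failure.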
Hi! I'm wondering if there is a way to have AVG() not include zero amounts as part of the calculation. I'm looking at the fields PurchPrice and RepairCost that are used by the AVG() function...
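Since AVG() already skips NULLs, turning the zeros into NULLs with NULLIF gives you an average over only the non-zero rows. For example (the table name is assumed):

SELECT AVG(NULLIF(PurchPrice, 0)) AS AvgPurchPrice,   -- zeros become NULL and are excluded
       AVG(NULLIF(RepairCost, 0)) AS AvgRepairCost
FROM   Repairs   -- hypothetical table name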
I am importing data into a SQL table and there is a potential for duplicate records to be coming in. How do I simply ignore the duplicates and add only the records that do not violate the keys?
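Two common options: insert only the rows whose key isn't already there, or put the IGNORE_DUP_KEY option on the unique index so duplicate inserts are discarded with a warning instead of an error. A sketch of the first approach, with made-up names:

INSERT INTO TargetTable (KeyCol, OtherCol)
SELECT s.KeyCol, s.OtherCol
FROM   ImportStaging AS s
WHERE  NOT EXISTS (SELECT 1 FROM TargetTable AS t WHERE t.KeyCol = s.KeyCol)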
SELECT expense_id, CAST(expense_id AS char(10)) + ' - ' + CAST(trip_km AS char(5))+ ' - ' + CAST(expense_amount AS char(5)) + ' - ' + charge_centre AS ExpenseDesc
If charge_centre is null, I need to ignore this field. How can I achieve this? The reason is that if any of the fields is null, it will return ExpenseDesc as null.
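Wrapping the nullable part in ISNULL keeps the whole concatenation from going NULL; putting the separator inside the ISNULL means the " - " disappears along with the missing charge centre. A sketch (the FROM table is not shown in the original fragment, so the name here is assumed):

SELECT expense_id,
       CAST(expense_id AS char(10)) + ' - ' +
       CAST(trip_km AS char(5)) + ' - ' +
       CAST(expense_amount AS char(5)) +
       ISNULL(' - ' + charge_centre, '') AS ExpenseDesc   -- NULL charge_centre contributes nothing
FROM   expenses   -- hypothetical table name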
I am looking for the best practice when passing a parameter to a stored procedure that is not always needed. For example, sometimes the users will want to list only a certain state; other times they want all states. How can I make the SP ignore the WHERE clause if users want all states?
CREATE PROCEDURE usp_Example
    @State nvarchar(2)
AS
SELECT FirstName, LastName, State
FROM SomeTable
WHERE State = @State;
GO
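A common pattern is to give the parameter a NULL default and make the predicate a no-op when it is NULL; reworking the example above:

CREATE PROCEDURE usp_Example
    @State nvarchar(2) = NULL          -- NULL (or omitted) means "all states"
AS
SELECT FirstName, LastName, State
FROM   SomeTable
WHERE  @State IS NULL OR State = @State;
GO

-- usage
EXEC usp_Example @State = 'WA'         -- one state
EXEC usp_Example                       -- all states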
Using SQL 2000. According to Books Online, the AVG aggregate function ignores null values. ((3+3+3+3+Null)/5) predictably returns Null. Is there a function to ignore the Null entry, adjust the divisor, and return a value of 3? For example: ((3+3+3+3)/4) after ignoring the Null entry. If there's more than one null value, then adjust the divisor accordingly. For example: ((5+5+5+4+Null+5+5+Null)/8) would be ((5+5+5+4+5+5)/6) after nulls are ignored. Thanks for any help or advice.
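In fact AVG() on a column already does exactly that: NULLs are left out of both the sum and the divisor; it is only the explicit expression (a + b + ... + NULL) that returns NULL. For example, with made-up names:

-- with SomeCol holding 3, 3, 3, 3 and NULL:
SELECT AVG(SomeCol)                         AS AvgIgnoringNulls,     -- 3, i.e. 12 / 4
       SUM(SomeCol) * 1.0 / COUNT(SomeCol)  AS SameThingSpelledOut   -- COUNT(SomeCol) also skips NULLs
FROM   SomeTable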
Hi, I believe my SQL Server was configured as case sensitive. I have a number of stored procedures which were moved from a non-case-sensitive SQL Server. Because of the case sensitivity, I have to do a lot of editing in those stored procedures. Is there a quick way to avoid the editing? Something like ignoring the case in one statement? Thanks in advance, your advice will be greatly appreciated.
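For string comparisons you can override the collation per expression with a COLLATE clause, for example (table and column names are made up):

SELECT *
FROM   Customers
WHERE  LastName = 'smith' COLLATE SQL_Latin1_General_CP1_CI_AS   -- CI = case-insensitive comparison

That only helps with data comparisons, though; if the errors are about mismatched object or column names, identifier case sensitivity comes from the database collation itself and cannot be switched off per statement, so the references have to be corrected or the database collation changed.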
I'm using Microsoft SQL Server Management Studio. I've created a view (a very basic two-field select with an inner join) -- I expect the query to take some time as there are about 1.7 million records in one table and about 3 million in the joined table. The problem is, I always get a timeout error when I try to run the query.
I've checked the server settings and "Execution time-out" = 0 (no time-out). Checked this at the server and in Microsoft SQL Server Management Studio. So my question is why am I getting a time-out when I have it set to not time-out?
I connect, but when I try to access any table from this database I get an error indicating that the object doesn't exist, yet if I use the full name xx.table I get no errors. What may be happening?
I want to bulkcopy a pipe delimited text file that has two header records to SQL Server using ADO / C#. The first record contains the datetime the file was recreated and the second record contains the column definitions, both records start with a # character (see below).
# May 15, 2008 12:12:12345
# col1|col2|col3
col1rec1data|col2rec1data|col3rec1data
col1rec2data|col2rec2data|col3rec2data
Is there a way to ignore the two header records when building the select statement for the text file or in the schema.ini?
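If a T-SQL route is acceptable, BULK INSERT can simply start at the third row, which skips both # header records (assuming the header lines end with the same row terminator as the data). A sketch; the path and table name are placeholders:

BULK INSERT dbo.ImportTarget
FROM 'C:\data\feed.txt'
WITH (
    FIELDTERMINATOR = '|',
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 3        -- rows 1 and 2 are the # header lines
)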
That works until it hits a date formatted incorrectly. The date could have blank spaces, double zeros or an invalid month/day/year (e.g. 033986). The command stops with a conversion error and no values are written to the CorrectedDate fields.
Is there a way to have the command skip the invalid dates and continue to write the datetime conversion for the dates that are formatted correctly?
Edit: Forgot to mention this is on SQL Server 2005 (not Express).
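Guarding the conversion with ISDATE lets the valid values through and leaves the bad ones NULL instead of aborting the statement. Roughly (table and column names assumed from the description):

UPDATE dbo.SomeTable
SET    CorrectedDate = CASE WHEN ISDATE(RawDate) = 1
                            THEN CONVERT(datetime, RawDate)
                            ELSE NULL       -- blanks, double zeros, bad month/day/year stay NULL
                       END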