I'm trying to upload a large number of log entries currently stored as
text files into a database table using bcp. For a few rows I get a
"right truncation" error and the offending rows are not uploaded to the
table.
I don't want to increase the size of the table's varchar fields, because only about a dozen out of almost a million rows have this problem ... I want to provide an override, i.e. if a row would result in truncated data, truncate the value but still bulk copy the offending row. Is that possible?
I couldn't find such an option in the documentation.
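One workaround I'm considering, if bcp itself has no such switch, is to bulk copy into a wide staging table and truncate explicitly on the way into the real table. A minimal sketch, with hypothetical table and column names:

-- staging table with a column wide enough that nothing truncates
CREATE TABLE dbo.LogStaging (entry varchar(8000))

-- bcp the text file into dbo.LogStaging first, then:
INSERT INTO dbo.LogEntries (entry)
SELECT LEFT(entry, 255)   -- 255 = width of the real varchar column
FROM dbo.LogStaging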
I have a data file that has numeric data that looks like:
1.123456
And this column is defined as DT_NUMERIC(18,6) in the flat file connection manager.
As an experiment, I changed the destination column to a NUMERIC(18,0), hoping that this would throw a truncation error at the flat file source level (where I have Truncation on all columns set to "fail component").
Not a peep. It loaded the data into the table, chopping off the 6 digits after the decimal point.
You would THINK that this would cause an error, but no. Why is this? The flat file source complains about all kinds of things, but this is such a gross error that you would think it would be caught!
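For comparison, plain T-SQL does the same thing: a decimal-to-decimal conversion that loses scale just rounds silently rather than erroring, which may be why the pipeline treats it as a legal conversion rather than a truncation. A quick demo:

SELECT CAST(1.123456 AS NUMERIC(18,0))   -- returns 1, no error or warning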
If you are running in Full Recovery Mode and do a full backup every night, but never back up the log during the day, does the log file ever truncate? From what I've read such a setup should really be in Simple Recovery Mode, but I'm wondering what happens in the case I describe in the first sentence. Thanks.
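One way to check this on your own server is to watch log usage over the day, e.g. with:

DBCC SQLPERF(LOGSPACE)   -- reports 'Log Space Used (%)' per database

If the nightly full backup were truncating the log, the used percentage would drop after it runs; if not, it should only ever climb between backups.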
When I checked the machine this morning I saw that the MS DTC service was stopped, and when I started it, it ran for a second or two and then stopped automatically. I wasn't sure what was going on...
Then I checked the Event Viewer; it said the log file is full. I tried to find a way to reset or increase the size of the MS DTC log, but could not find it. Where can I find this?
One more thing: what is the purpose of DTC? I know in SQL 6.5 we had everything in Enterprise Manager, but I can't see the same in either 7.0 or 2000. Has the name changed, or is it linked to some other tool?
Also, do things like linked servers depend on MS DTC? How, and what is its basic purpose? Kindly tell me what I should do now...
Is there a way I can perform an update on a table, but ignore the trigger (or disable the trigger) each time I run a particular update script from a DTS package? I would like subsequent DTS steps to use the trigger, except for my last update statement. Is this possible? Thanks, Frank
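In case it helps frame the question, what I'm picturing is something like the following around that one update step (table, column, and trigger names are made up):

ALTER TABLE dbo.MyTable DISABLE TRIGGER trg_MyTrigger

UPDATE dbo.MyTable SET col1 = 'new value' WHERE col2 = 1

ALTER TABLE dbo.MyTable ENABLE TRIGGER trg_MyTrigger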
In my stored procedure I'm calling a buggy and flaky stored procedure that comes from a third party. When I run my stored procedure from QA, I get a whole bunch of errors raised inside the third-party one. Is there any way I could just ignore them, so that if I run my SP from QA, only errors from my code, if any, show up? TIA
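One possible approach, assuming SQL Server 2005 or later: wrap the call in TRY/CATCH and swallow whatever it raises. Note this only suppresses actual errors, not low-severity informational messages (the procedure name here is hypothetical):

BEGIN TRY
    EXEC dbo.ThirdPartyProc   -- the flaky third-party procedure
END TRY
BEGIN CATCH
    -- deliberately ignore errors raised inside the third-party proc
END CATCH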
Is there a way to make an SP ignore an error? For example, I'm looping through each database on a server, checking if a table exists and then selecting a value from that table. Now a database has been put onto the server where the table exists but all the column names are different. My SP is not interested in this database, so when it errors with "invalid column name" I want it to move on to the next database and not display any error message.
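A sketch of one way to sidestep the error rather than swallow it: test for the column before selecting, since COL_LENGTH returns NULL when the column doesn't exist (names here are hypothetical):

IF COL_LENGTH('dbo.MyTable', 'MyColumn') IS NOT NULL
BEGIN
    -- the column exists in this database, so the real SELECT is safe
    SELECT MyColumn FROM dbo.MyTable
END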
I'm writing a stored procedure to accept search strings from users on my site. Currently, this is what I have.
Code Snippet

    @schoolID int = NULL,
    @scholarship varchar(250) = NULL,
    @major varchar(250) = NULL,
    @requirement varchar(250) = NULL
    --@debug bit = 0
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    SELECT *
    FROM [scholarship]
    WHERE ([sectionID] = @schoolID OR @schoolID IS NULL)
      AND ([schlrPrefix] LIKE '%' + @scholarship + '%'
           OR [schlrName] LIKE '%' + @scholarship + '%'
           OR [schlrSufix] LIKE '%' + @scholarship + '%'
           OR @scholarship IS NULL)
      AND ([Specification] LIKE '%' + @major + '%' OR @major IS NULL)
      AND ([reqr1] LIKE '%' + @requirement + '%'
           OR [reqr2] LIKE '%' + @requirement + '%'
           OR [reqr3] LIKE '%' + @requirement + '%'
           OR [reqr4] LIKE '%' + @requirement + '%'
           OR [reqr5] LIKE '%' + @requirement + '%'
           OR @requirement IS NULL)
The problem is, sometimes the search doesn't work if there is a space before or after the search string. I wonder if there is a way to ignore any spaces and go right to whatever character comes next. If so, how do I implement that?
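If trimming is acceptable, one fix is to strip leading and trailing spaces from each parameter at the top of the procedure, before they are used in the LIKE clauses:

SET @scholarship = LTRIM(RTRIM(@scholarship));
SET @major       = LTRIM(RTRIM(@major));
SET @requirement = LTRIM(RTRIM(@requirement));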
Hello everyone, and thanks for your help in advance. I am working on importing a flat text file into SQL Server 2005 and am having problems. The flat file is a CSV text file with " being used as a text qualifier, and each line is broken by a CrLf combination. When I try importing this file into a SQL Server 2000 table using the same datatypes and sizes for each column, it works perfectly fine, with the data importing as expected. However, in SQL Server 2005, again using identical column datatypes and sizes, the import fails, giving me warnings such as:

* Warning 0x802092a7: Data Flow Task: Truncation may occur due to inserting data from data flow column "Column 0" with a length of 50 to database column "MLS_ID" with a length of 10. (SQL Server Import and Export Wizard)

Virtually every column gives this type of warning, which I don't understand, since the columns are all variable in length (every message says a column length of 50) and all are delimited rather than fixed size. Then later in the import, errors occur, something like:

* Error 0xc02020a1: Data Flow Task: Data conversion failed. The data conversion for column "Column 15" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.". (SQL Server Import and Export Wizard)

* Error 0xc020902a: Data Flow Task: The "output column "Column 15" (70)" failed because truncation occurred, and the truncation row disposition on "output column "Column 15" (70)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component. (SQL Server Import and Export Wizard)

I haven't got a clue as to why this is happening. For the record, on the flat file source screen I have ensured that delimited is selected rather than fixed width. Any help on this issue would be greatly appreciated. Thanks.
Could someone please let me know the exact steps to follow to truncate the transaction log files? These log files grow very fast, and there seems to be no space left on the drive.
Currently I am using the steps below to truncate the log file:
Step 1: Run BACKUP LOG <database name> WITH NO_LOG.
Step 2: Shrink the log file: right-click the database, choose Shrink File, select the log, and click OK.
I would be grateful if someone can give me a proper solution.
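For reference, the same two steps can be scripted. The database and logical file names below are placeholders, and note that WITH NO_LOG is deprecated and breaks the log backup chain, so a full backup should follow it:

BACKUP LOG MyDatabase WITH NO_LOG
DBCC SHRINKFILE (MyDatabase_Log, 100)   -- shrink the log file to roughly 100 MB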
Hello all, I am attempting a bulk load of fixed-position flat file data via bcp, and I have noticed that I get a right truncation error when trying to load a row where the last column value is NULL.

For example:

Flat file row:
0000016M

FMT file:
7.0
3
1   SQLCHAR   0   7   ""    1   RECORD_KEY
2   SQLCHAR   0   1   ""    2   SEX
3   SQLCHAR   0   1   " "   3   HEIGHT

In this row, the height info is null and I get a right truncation error. The row below, with height info, goes in fine:

Flat file row:
0000016M510

Let me know what I am doing wrong! Thanks in advance
How is it possible to avoid truncation errors in MS SQL? For example, if I run the following:

declare @a as decimal(38,8)
declare @b as decimal(38,8)
declare @c as decimal(38,8)
set @a = 30.0
set @b = 350.0
set @c = @a/@b
select @c
set @c = @c*@b
select @c

I get 29.99990000 instead of 30.0. Is there a way around this? Thanks, Bruno
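One trick that avoids the rounding here, hedged because it only helps when the algebra allows it: SQL Server caps intermediate decimal results at precision 38, so the quotient @a/@b is forced down to six decimal places before it is ever multiplied back. Reordering so the multiplication happens before the division keeps every intermediate value exact:

set @c = @a * @b / @b   -- multiply first: 10500.000000 / 350.0 comes out exact
select @c               -- 30.00000000 instead of 29.99990000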
Hello, I am attempting to write a stored procedure that builds and executes a dynamic SQL statement which can be up to 8000 characters long. Therefore, I have declared a variable of type varchar(8000), which, according to the documentation, is the maximum acceptable length of such a variable. Unfortunately, however, SQL Server seems to allow varchars only half this size: the resulting string keeps getting truncated to 4000 characters, as reported by the LEN function. Is there a setting somewhere that would fix this behavior, or some workaround that would allow me to execute a dynamic SQL statement longer than 4000 characters? (Note: I am not using the sp_executesql proc, as it maxes out at 4000; I am simply calling EXEC which, according to the docs, should be fine.) Thank you.
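One workaround I believe works even on SQL 2000, since EXEC accepts a concatenation of variables: build the statement in two (or more) varchar(4000) halves and concatenate them only inside the EXEC itself. A sketch (the statement fragments are placeholders):

DECLARE @sql1 varchar(4000), @sql2 varchar(4000)
SET @sql1 = 'SELECT ...'   -- first half of the statement
SET @sql2 = ' WHERE ...'   -- second half
EXEC (@sql1 + @sql2)       -- executed as one statement up to 8000 characters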
We use SQL Server 2005 x64 Enterprise, and I have created an SSIS routine to replace a legacy DTS routine that reads from a Data Reader Source and writes to a SQL Server 2005 database. The field I am receiving the truncation error on is "Description", and it is set as nvarchar(50), which it always has been; the old DTS routine works fine with it. I checked the contents of Description, and the maximum number of characters in any row is 28. I have tried changing it to nvarchar(max), nvarchar(4000), and ntext, but it still fails with a truncation error. Any leads on how I might solve this issue?
I am having a strange problem that I have been looking at for a day now, and my head is starting to hurt from banging it on the desk so many times. I have written an extensible set of classes that allow me to build SSIS packages dynamically via a web front end. The code is working OK overall, but I have this silly bug.
The code is trying to generate an SSIS package that does something very simple: transfer data from a 10-column table with a mix of data types, move it through another component that adds a couple of extra columns based on some variables, then map it onto an OLE DB destination. This code works fine until I start using strings of various lengths.
When the package runs, it fails validation with errors saying that truncation may occur because I am trying to put a 100-character string into a 50-character string. The error would be logical, as you wouldn't want to do that, but that is not what I am doing. I am actually transferring data from a 50-character string into a 100-character string. When I try it with a table where the strings are the same length at both ends, or where no strings are involved, everything works fine and the data goes from the source to the destination.
I must be setting something slightly wrong that only triggers this problem when the sizes don't match, even though the data flow direction is fine and the data types match. I have included the code from the piece that 'writes' the output part of the package. If anyone has any idea what might be going wrong, I would be forever in their debt!
IDTSInput90 input = _Destination.InputCollection[0];
IDTSVirtualInput90 vInput = input.GetVirtualInput();
int iIndex = 0;

foreach (IDTSVirtualInputColumn90 vColumn in vInput.VirtualInputColumnCollection)
{
    // Call the SetUsageType method of the destination
    // to add each available virtual input column as an input column.
    _InstanceOfDestination.SetUsageType(input.ID, vInput, vColumn.LineageID, DTSUsageType.UT_READWRITE);
}

IDTSExternalMetadataColumn90 exInputColumn;

foreach (IDTSInputColumn90 inColumn in _Destination.InputCollection[0].InputColumnCollection)
{
    // Create the map. What we need to do here is say which source column
    // goes to which destination column. We read by index, and we need to map
    // the specifics; we could direct just 3 of 20 columns to whatever column
    // we wanted in the destination.
    // We know the name of the inColumn (inColumn.Name); we need to find the
    // column we want to map to by its name.
    if (inColumn.Name == "BatchID" || inColumn.Name == "ValidationStatus")
    {
        exInputColumn = _Destination.InputCollection[0].ExternalMetadataColumnCollection[inColumn.Name];
        // Map it.
        _InstanceOfDestination.MapInputColumn(_Destination.InputCollection[0].ID, inColumn.ID, exInputColumn.ID);
    }
    else if (Mapping.Map.ContainsKey(inColumn.Name))
    {
        exInputColumn = _Destination.InputCollection[0].ExternalMetadataColumnCollection[Mapping.Map[inColumn.Name]]; // inColumn.Name
        // Map it.
        _InstanceOfDestination.MapInputColumn(_Destination.InputCollection[0].ID, inColumn.ID, exInputColumn.ID);
    }
}

_InstanceOfDestination.ReleaseConnections();
I have always assumed that when you back up a SQL Server database, the transaction log is automatically truncated, so that there is no need to truncate it explicitly. It makes sense to me: you would not normally need log records from before the most recent backup. BOL, with all its talk about checkpoints etc., seems to hint at this, but I can't find an explicit statement to this effect.
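A quick way to test the assumption (in full recovery) is to compare log usage around a full backup. The database name and backup path below are hypothetical:

DBCC SQLPERF(LOGSPACE)                          -- note 'Log Space Used (%)' for the database
BACKUP DATABASE MyDb TO DISK = 'C:\MyDb.bak'
DBCC SQLPERF(LOGSPACE)                          -- compare: has the used percentage actually dropped?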
Hello guys, I am using XML files and dumping the data into SQL Server 2005. I have a field called RATE, which has money as its datatype, and I am getting the following error:
LoadDataXML to XML Source -- LoadDataXML [907]: The value was too large to fit in the output column "RATE" (95245).
Please help me out with a solution for this. The data coming from the XML file is an unsigned single-byte integer, and my database column is money, so should I use a conversion task in between? If anybody can give me an idea about this, that would be great. If you want more information, tell me... Thanks, Krish
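If the XML source really is handing over a small unsigned integer, my guess is that an explicit cast to the SSIS currency type, in either a Derived Column or a Data Conversion transformation placed between the source and the destination, would line the types up, something like:

(DT_CY)RATE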
I'm having a problem with one of my packages due to a truncation warning that I can't get rid of. It's not the end of the world, because the package still works. It's just extremely frustrating.
The problem arises in a derived column item in a data flow task. There is a postcode field in the data flow which has space for 20 characters. I create a derived column from this which simply removes any spaces:
Derived Column Name: Postcode
Derived Column: Replace 'Postcode'
Expression: REPLACE(Postcode, " ", "")
Data Type: string [DT_STR]
Length: 20
Code Page: 1252 (ANSI - Latin I)
However when I use this expression, or anything else which uses the replace function, I end up with the warning message:
Warning 1 Validation warning. Create Staging Tables: Derived Column [20555]: The result string for expression "REPLACE(Postcode, " ", "")" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR.
I have tried everything I can think of to get rid of the warning. Is there some way I can use the replace function, but not have the system convinced that I'm about to go over the maximum size limit?
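One thing that reportedly silences this class of warning is to cast the expression result back to the intended type explicitly, so the derived column doesn't default to a worst-case DT_WSTR:

(DT_STR, 20, 1252)REPLACE(Postcode, " ", "")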
I am trying to write an SSIS package to move data from an Access database table to a SQL Server table. I have a field whose data is too long for NVARCHAR(255), so I end up with this error: "A truncation error occurred on the specified object of the specified component".
I have a stored procedure that does several steps. During the stored proc, error messages are produced when certain conditions warrant, but I want to continue anyway... e.g., in a loop in the proc:
SET @strSQL = 'UPDATE Table1 SET col1 = ''' + @strVariable + ''''
EXEC (@strSQL)
--ERROR created by the exec statement.....
When I schedule this as a job, the first error message makes the job fail. How can I force the proc to run completely, even if an error occurs?
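If this is SQL Server 2005 or later, one option is to catch and report the error inside the loop so the batch, and therefore the job step, keeps going:

BEGIN TRY
    EXEC (@strSQL)
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE()   -- report the failure but carry on with the loop
END CATCH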
Hi! I'm wondering if there is a way to have AVG() not include the zero amounts as part of the calculation. I'm looking at the fields PurchPrice and RepairCost that are used by the AVG() function...
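NULLIF may do the trick here: since AVG skips NULLs, turning each zero into NULL removes it from both the sum and the divisor (the table name below is hypothetical):

SELECT AVG(NULLIF(PurchPrice, 0)) AS AvgPurchPrice,
       AVG(NULLIF(RepairCost, 0)) AS AvgRepairCost
FROM dbo.Repairs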
I am importing data into a SQL table and there is a potential for duplicate records to be coming in. How do I simply ignore the duplicates and add only the records that do not violate the keys?
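Two common patterns, sketched with hypothetical names: either filter the duplicates out in the INSERT itself, or let a unique index silently discard them:

-- 1) insert only rows whose key is not already present
INSERT INTO dbo.Target (KeyCol, OtherCol)
SELECT s.KeyCol, s.OtherCol
FROM dbo.Staging s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Target t WHERE t.KeyCol = s.KeyCol)

-- 2) or create the unique index with IGNORE_DUP_KEY, so duplicate inserts
--    are dropped with a warning instead of failing the statement
CREATE UNIQUE INDEX UX_Target_KeyCol ON dbo.Target (KeyCol) WITH (IGNORE_DUP_KEY = ON)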
SELECT expense_id,
       CAST(expense_id AS char(10)) + ' - ' +
       CAST(trip_km AS char(5)) + ' - ' +
       CAST(expense_amount AS char(5)) + ' - ' +
       charge_centre AS ExpenseDesc
If charge_centre is null, I need to ignore this field. How can I achieve this? The reason is that if any of the fields is null, ExpenseDesc will come back as null.
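Wrapping the nullable piece in ISNULL should do it; concatenating the separator inside the ISNULL means the trailing ' - ' disappears along with the missing value:

SELECT expense_id,
       CAST(expense_id AS char(10)) + ' - ' +
       CAST(trip_km AS char(5)) + ' - ' +
       CAST(expense_amount AS char(5)) +
       ISNULL(' - ' + charge_centre, '') AS ExpenseDesc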
If there is an error in the trigger, the update to the table does not happen. Is there a way to make SQL Server ignore errors in a trigger and still update the table?
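I'm not certain this is safe in every case, but on SQL Server 2005 or later one idea is to trap the failure inside the trigger itself, so it never bubbles up to abort the outer UPDATE. By default, errors inside a trigger doom the whole transaction, hence the SET XACT_ABORT OFF (all names here are hypothetical):

CREATE TRIGGER trg_Example ON dbo.MyTable AFTER UPDATE
AS
BEGIN
    SET XACT_ABORT OFF
    BEGIN TRY
        -- the trigger's real work goes here
        EXEC dbo.DoSomethingRisky
    END TRY
    BEGIN CATCH
        -- swallow the error so the triggering UPDATE can still commit
    END CATCH
END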
Hi, I am trying to import data from a text file into a database table using the SQL Server bcp utility. I am able to do that when all the records in my text file are new, but I get a primary key violation error when trying to import a record which already exists in the table. That is correct behavior, but I want my program to ignore these errors and import only those records which are fine. I tried the [-m maxerrors] option, but it is not working: my bcp run is interrupted at the first error, even if I give the [-m100] option. My command looks something like this:

bcp pub..employee in C:\data.txt -b1 -m100 -c -t, -Sdatabase -Uuser -Ppassword
Here, -b1 means process 1 row per batch transaction, and -m100 means ignore the first 100 errors.
I am looking for the best practice when passing a parameter to a stored procedure that is not always needed. For example, sometimes users will want the list filtered to a certain state; other times they want all states. How can I make the SP ignore the WHERE clause when users want all states?
CREATE PROCEDURE usp_Example
    @State nvarchar(2)
AS
SELECT FirstName, LastName, State
FROM SomeTable
WHERE State = @State;
GO
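A common pattern for this, sketched here: default @State to NULL, treat NULL as "all states", and short-circuit the filter so it only applies when a state was actually passed:

CREATE PROCEDURE usp_Example
    @State nvarchar(2) = NULL   -- NULL means: don't filter by state
AS
SELECT FirstName, LastName, State
FROM SomeTable
WHERE (@State IS NULL OR State = @State);
GO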
Using SQL 2000. According to Books Online, the AVG aggregate function ignores null values. ((3+3+3+3+NULL)/5) predictably returns NULL. Is there a function to ignore the NULL entry, adjust the divisor, and return a value of 3? For example, ((3+3+3+3)/4) after ignoring the NULL entry. If there's more than one null value, then adjust the divisor accordingly. For example, ((5+5+5+4+NULL+5+5+NULL)/8) would be ((5+5+5+4+5+5)/6) after nulls are ignored. Thanks for any help or advice.
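For what it's worth, AVG over a column already behaves exactly this way: NULLs are left out of both the sum and the divisor. It is only inline arithmetic like 3+NULL that goes NULL. A quick test:

CREATE TABLE #t (val int NULL)
INSERT #t VALUES (3)
INSERT #t VALUES (3)
INSERT #t VALUES (3)
INSERT #t VALUES (3)
INSERT #t VALUES (NULL)
SELECT AVG(val) FROM #t   -- returns 3: the NULL row is ignored, divisor is 4
DROP TABLE #t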