SQL Server 2008 :: Replication Error - Field Size Too Large

Feb 1, 2011

I've got two databases on the same server and replicate some tables from one database to the other. The replication is configured not to drop the table if it exists, but to delete the data based on the filter if one exists. Two tables on the subscriber have some extra columns.

I get a "field size too large" error when trying to replicate them. Is there a workaround without having to make the publisher and subscriber tables identical in schema?
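
One option that may fit here (a sketch, not verified for this setup; the publication and database names are placeholders) is to create the subscription with no snapshot at all, so the bcp step that requires matching schemas never runs. The catch is that the subscriber table, extra columns included, must already contain the correct data:

EXEC sp_addsubscription
    @publication = N'MyPublication',
    @subscriber = N'MyServer',
    @destination_db = N'SubscriberDb',
    @sync_type = N'replication support only'; -- skip snapshot delivery entirely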


Replication Field Size Error

Apr 24, 2006

Hi,

I have been running a merge publication with 4-5 subscribers, all running perfectly, but today I started getting the error below. I generated a new dynamic snapshot and re-initialized the subscriber, but still the same. I synchronized the other subscribers and they are all running with no errors, even after an init. I would therefore think this is data related, specific to this subscriber, but I have no further ideas how to track it down.

any ideas?

Regards

Gert Cloete

To obtain an error file with details on the errors encountered when initializing the subscribing table, execute the bcp command that appears below. Consult the BOL for more information on the bcp utility and its supported options.

bcp "conCORD_ODS"."dbo"."MSmerge_contents" in "\OBSQL2005\ConcordReplData\MERCURYSQLEXPRESSOBSQL2005$DEV_CONCORD_ODS_CONCORD_ODSMSmerge_contents90_forall.bcp" -e "errorfile" -t"<x$3>" -r"<,@g>" -m10000 -SMERCURYSQLEXPRESS -T -w

End of file reached, terminator missing or field data incomplete

Field size too large

The process could not bulk copy into table '"dbo"."MSmerge_contents"'.

The merge process was unable to deliver the snapshot to the Subscriber. If using Web synchronization, the merge process may have been unable to create or write to the message file. When troubleshooting, restart the synchronization with verbose history logging and specify an output file to which to write.


Field Size Too Large

May 9, 2008

I have set up transaction replication between two databases. Data from a table in the first database is replicated to the same table in another database.

The table at the publisher already has some data in it. The table at the subscriber is empty. When the replication is synchronizing, I get the following errors in the replication monitor:
*The process could not bulk copy into table "dbo"."virtualdatalocations_waitingqueues". (Source: MSSQL_REPL, Error number: MSSQL_REPL20037) Get help: http://help/MSSQL_REPL20037
*Field size too large

The table looks like this:
CREATE TABLE virtualdatalocations_waitingqueues (
    dataid int,
    personid int,
    queueid int,
    CONSTRAINT FK_vw_dataid
        FOREIGN KEY (dataid) REFERENCES datalocations(id) ON DELETE CASCADE,
    CONSTRAINT FK_vw_personid
        FOREIGN KEY (personid) REFERENCES persons(id),
    CONSTRAINT FK_vw_queueid
        FOREIGN KEY (queueid) REFERENCES waitingqueues(id)
);

It used to run fine in the past. I couldn't find any help on Google or in forums.
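
One way to hunt for the offending column (a sketch; the database names are placeholders, assuming both databases are visible from the same connection) is to compare the column definitions on both sides, since a type or length mismatch is the usual trigger for "Field size too large" during the snapshot bulk copy:

SELECT p.name AS column_name, p.max_length AS publisher_len, s.max_length AS subscriber_len
FROM PublisherDb.sys.columns p
JOIN SubscriberDb.sys.columns s
    ON s.name = p.name
    AND s.object_id = OBJECT_ID('SubscriberDb.dbo.virtualdatalocations_waitingqueues')
WHERE p.object_id = OBJECT_ID('PublisherDb.dbo.virtualdatalocations_waitingqueues')
    AND (p.system_type_id <> s.system_type_id OR p.max_length <> s.max_length);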

Any help or comments are greatly appreciated.


SQL Server 2008 :: Statement Errors Out Due To Source Field Being Unique Identifier

Sep 22, 2015

I have a field I am trying to bring into a SQL statement

,ISNULL(Convert(nvarchar(50),OPP1.OriginatingLeadId),'') AS 'OriginatingLeadId'

I get this error message

Conversion failed when converting from a character string to uniqueidentifier.

The field spec for OriginatingLeadId is attached ....
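
For what it's worth, if OriginatingLeadId really is a uniqueidentifier, the CONVERT shown above is legal on its own; this error usually means some other part of the statement forces a non-GUID string into a uniqueidentifier (a JOIN or WHERE comparison, for instance). A minimal sketch of both directions:

DECLARE @id uniqueidentifier = NEWID();
SELECT ISNULL(CONVERT(nvarchar(50), @id), '') AS OriginatingLeadId; -- succeeds
-- SELECT CONVERT(uniqueidentifier, N'not-a-guid'); -- fails with the error above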


SQL Server 2008 :: BCP XML File Format Error Invalid Ordinal For Field 2

Oct 1, 2010

When creating the XML file format, it throws the error "invalid ordinal".

When I created a non-XML format file, there was no error, and I was also able to load the data file into the SQL table. I am not sure why bcp (version 10.50.1600.1) is not able to create the XML format file.

C:>BCP "MyGDB.dbo.Items_Import" format nul -f"C:AnkitTempBCPItemsMaster.xml" -x -w -T -S"(Local)"

SQLState = HY000, NativeError = 0
Error = [Microsoft][SQL Server Native Client 10.0][SQL Server]Invalid ordinal for field 2 in xml format file.

Column_name   Type      Computed  Length  Prec  Scale  Nullable  TrimTrailingBlanks  FixedLenNullInSource  Collation
Item Number   varchar   no        18                   no        yes                 no                    SQL_Latin1_General_CP1_CI_AS
Description1  nvarchar  no        80                   yes       (n/a)               (n/a)                 SQL_Latin1_General_CP1_CI_AS
Description2  nvarchar  no        80                   yes       (n/a)               (n/a)                 SQL_Latin1_General_CP1_CI_AS
UM            varchar   no        3                    yes       yes                 yes                   SQL_Latin1_General_CP1_CI_AS


DTS Error: Data For Source Column 2 ('column_name') Is Too Large For The Specified Buffer Size.

Oct 4, 2005

Hi,
 
I’m attempting to use DTS to import data from a Memo field in MS Access (Jet 4.0 OLE DB Provider) into a SQL Server nvarchar(4000) field.  Unfortunately, I’m getting the following error message:
 
Error at Source for Row number 30. Errors encountered so far in this task: 1.
Data for source column 2 ('Html') is too large for the specified buffer size.
 
I also get this error message when attempting to import the same data from Excel.
 
Per the MS Knowledgebase article located at http://support.microsoft.com/?kbid=281517, I changed the registry property indicated to 0.  This modification did not help. 
 
Per suggestions in other SQL Server forums, I moved the offending row from row number 30 to row number 1. This change only resulted in the same error message, but with the row number indicated as "Row number 1". (Incidentally, the data in this field is greater than 255 characters in every row, so the cause described in the Knowledgebase article doesn't seem to be my problem.)
 
You might also like to know that the data in the Access table was exported into this table from a SQL Server nvarchar(4000) field.
 
Does anybody know what might trigger this error message other than the data being less than 255 characters in the first eight rows (as described in the KB article)?
 
I've hit a brick wall, so I'd appreciate any insight. Thanks in advance!


Truncation Error On Large Field

Apr 3, 2007

I have a tab-delimited file that I'm trying to load into a database using SSIS. The database has a column called Comments that can hold up to 1000 Unicode characters (nvarchar(1000)).

I have appropriately defined the flat file connection and set every field to the intended length, but every time I run it, it gives me the following error:


Data conversion failed. The data conversion for column "COMMENTS" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page."

All the columns have a match; this specific column has no field longer than 1000 characters (the largest record has only 528), and the fields are defined as Unicode strings in the file connection.

I have already run out of ideas about what may be causing this error. Does anyone have an idea of what else to try?


Storing Large Size Files In SQL Server Database

Apr 20, 2007

Hi,
I want to store large files like PDF files, HTML pages, and audio files in a SQL Server database. How can I do it?
If somebody knows, please tell me as soon as possible.
Thanks in advance.
Bye
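
A common approach (a sketch; the table and file path are placeholders) is a varbinary(max) column, which holds up to 2 GB per value. OPENROWSET with SINGLE_BLOB loads one file server-side:

CREATE TABLE dbo.FileStore (
    FileId int IDENTITY PRIMARY KEY,
    FileName nvarchar(260) NOT NULL,
    Content varbinary(max) NOT NULL -- the file bytes, up to 2 GB
);

INSERT dbo.FileStore (FileName, Content)
SELECT N'manual.pdf', BulkColumn
FROM OPENROWSET(BULK N'C:\files\manual.pdf', SINGLE_BLOB) AS f;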


SQL Server 2008 :: Add ID And Primary Key To Large Table(s)

Apr 24, 2015

Background:

* SQL Server 2008 R2
* Database was created from a third party product. The product writes to the 3 tables that I need to make changes to 24/7 and downtime is not an option. All changes must be done live.
* Database overall size is ~200 GB
* The 3 tables I must update make up ~190 GB of that space.
* Tables have no primary key or ID columns. Therefore, the data is highly fragmented.
* Of the ~190 GB of space allocated for the tables, there is roughly 70 GB of actual data.
* Rows of the table are not guaranteed to be unique. In fact, on one of the tables, tests were run with a small sample of data and duplicates were very much evident.

What I'm trying to accomplish here is to get an ID column added to the 3 tables and set that ID field as the primary key. Doing so will force the data to become much less fragmented than it is currently and with purging and new inserts, eventually fragmentation will be nearly non-existent.

Problem:
Making table changes on tables this large while data is constantly being added poses many risks and can cause data loss. This was tried on a smaller table than these three and the entire table was lost in the process. Restore from backup was needed to get back to most recent log backup point.

Original Solution:
My original plan was to create a backup of each table and run the script below to migrate the majority of the data temporarily into the new table. I could then update the original table (which now would contain much less data) and then migrate the data back.

CREATE TABLE #temp
(
MsgDate varchar(10)
,MsgTime varchar(8)
,MsgPriority varchar(30)
,MsgHostname varchar(255)

[Code] ....

Original Solution Problem:
The problem with the solution above is that it deletes from the original table using the values from the temporary table. When there are duplicate rows, not all of which have been inserted into the backup table yet, they will all be removed from the original table because there is nothing unique to separate them. In my testing, I had 10,000 rows in the original table and ended up with 9,959 rows in the backup table.

Question 1: Is my approach to making these table changes reasonable?
Question 2a: If so, how can I make sure I don't lose data as part of this temporary migration of the data to my backup tables?
Question 2b: If not, what would be a better approach that isn't going to cause disruption to the application that INSERTs data 24/7 and won't have any risk of data loss?
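
One pattern that sidesteps the duplicate problem entirely (a sketch; table names and the batch size are placeholders) is DELETE ... OUTPUT, which moves exactly the rows each batch removes, so duplicates transfer one-for-one and nothing is ever matched back by value:

WHILE 1 = 1
BEGIN
    -- each batch atomically deletes rows and captures those same rows
    DELETE TOP (10000) FROM dbo.OriginalTable
    OUTPUT DELETED.* INTO dbo.BackupTable;
    IF @@ROWCOUNT = 0 BREAK;
END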


SQL Server 2008 :: Large Tables In OLTP

Jul 14, 2015

How many records does a table need before it is considered a large table?

We are getting more deadlocks. We are using the default isolation level. Read and insert statements are blocking each other and causing deadlocks.

I am thinking that purging might reduce the deadlocks.

The table has 15 million records. Is this considered a large table in OLTP systems?

In general, how many records should we consider a large table?


SQL Server 2008 :: Selecting Field That Contains Data From Another Field

May 28, 2015

We have a stock code table with a description field and a brand field - when the data was entered, some of the records were entered with the brand field in the description field.

i.e.
Code  Description     Brand
ABC1  BLANK DVD       SONY
ABC2  SONY BLANK DVD  SONY

What I need to do is identify where the Brand is in the Description field ...

I have tried:

select * from Table
where Description Like Brand

but that was not very successful.
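
LIKE without wildcards only matches when the two values are exactly equal, which is why that returned so little. Two ways to find the brand anywhere inside the description (a sketch; the table name is a placeholder):

select * from dbo.StockCodes
where Description like '%' + Brand + '%';

-- equivalent, without building a pattern:
select * from dbo.StockCodes
where charindex(Brand, Description) > 0;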


Integration Services :: Error While Changing Field Size

Sep 10, 2015

I am loading from SQL Server 2008 to Access 2010 using SSIS. One of the columns in the table I am loading into is a Number data type and its FieldSize is Long Integer. The values are being truncated, so I want to change the FieldSize to Double. However, when I do that, I receive the error below. What should I do? I would prefer not to change my Windows registry.

This error can be caused by one of the following:

The maximum number of columns allowed in a table or the maximum number of locks for a single file is exceeded.

The indexed property of a field is changed from Yes (Duplicates OK) to Yes (No Duplicates) when duplicate data exists in the table.

An expression is not specified in the Expression property of a calculated field. 

If the maximum number of locks per file was exceeded, you can increase the number by editing a registry entry. However, this is not a recommended option.

If you use Registry Editor incorrectly, you could cause serious problems that require you to reinstall the operating system. Microsoft cannot guarantee that you can solve problems that result from using Registry Editor incorrectly. Use Registry Editor at your own risk.

Make a backup of the registry. Find the MaxLocksPerFile registry value by using the Windows Registry Editor, and then increase the value. The MaxLocksPerFile value is saved as part of the following key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\14.0\Access Connectivity Engine\Engines\ACE

If the Indexed property of a field and duplicate data is located in the table, reset the Indexed property to the previous setting, or remove duplicate records from the table.

While I was loading to the same table a few days ago, I received a warning and the task took approx 9 hours. I am attaching the screen shot.


SQL Server 2008 :: Update Statistics With Large Databases

Feb 5, 2015

Currently our database size is around 350 GB. It will grow up to 1.5 TB.

We have the

Auto create statistics option :True,
auto update statistics option :True,
auto update statistics asynchronously option : False

at database level

We have a weekly UPDATE STATISTICS job that runs for a very long time. It was created through a maintenance plan using the full scan option.

Previously they tested with sampling instead of a full scan, but running with sampling affected the queries.

Is there an option to avoid the long job duration?

If we don't run the statistics update manually, what will happen? How do you maintain statistics with large databases?
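
If a blanket full scan is too slow and blanket sampling hurt the queries, one middle ground (a sketch; the table names and percentage are placeholders) is to pick the sampling rate per table, reserving full scan for the tables whose plans proved sensitive to it:

UPDATE STATISTICS dbo.BigHistoryTable WITH SAMPLE 25 PERCENT; -- cheap resample for a large, insensitive table
UPDATE STATISTICS dbo.SensitiveTable WITH FULLSCAN;           -- full scan only where it matters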


SQL Server 2008 :: Migrate Contents Of Large Table From One DB To Another?

Mar 3, 2015

I have a large table containing about 800 million rows with an average row length of about 1K. The columns in the table are char columns. I need to move the contents of this table into a similar table where the target columns are varchar. The original table column definitions are compatible with the target table but the reverse is not necessarily true. For example, one column is being changed from int to bigint. The table is partitioned.

So, what is the fastest way to migrate the data? I was thinking of unloading each partition into a flat file and loading the target table with multiple load streams. Is this a good way?


SQL Server 2008 :: Retrofitting Partitioning To Existing (large) Tables

Feb 9, 2015

We have an existing BI/DW process that adds large chunks of data daily (~10M rows) to an existing table, as well as using Deletes to remove stale data. This scenario seems to beg for partitioning to support switching in/out data.

After lots of reading on this, I have figured out the mechanics of the switching, but I still have some unknowns about the indexes needed to support this.

The table currently has several non-clustered indexes, including one on the partitioning column - let's call that column snapshotdate. Fortunately there are no FKs involved, and no constraints.

Most of the partitioning material I see focuses on creating a clustered PK to assist with switching. Not sure if this is actually necessary, but assume I create one using an Identity column (currently missing) plus snapshotdate.

For the other non-clustered, non-unique indexes, can I just add the snapshotdate to the end of the index? i.e. will that satisfy the switching requirement?
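
As far as I understand it, the switching requirement is that every index be aligned, i.e. built on the same partition scheme, not that snapshotdate appear in the key. For a non-unique non-clustered index it is enough to create it ON the scheme; if the partitioning column isn't named in the index, SQL Server adds it behind the scenes. A sketch, assuming a scheme ps_snapshot on snapshotdate already exists (names are placeholders):

CREATE NONCLUSTERED INDEX IX_BigTable_SomeCol
ON dbo.BigTable (SomeCol)
ON ps_snapshot (snapshotdate); -- aligned: partitioned the same way as the table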


SQL Server 2008 :: What Could Cause Large Gaps In Servers Default Trace

Feb 12, 2015

I have the default trace on a SQL Server 2008R2 instance enabled and found today that there is a gap of nearly 4 minutes in the trace during a time of the day when there most certainly is not going to be a 4 minute window of nothing.

What, if anything, could cause the default trace to have a gap like this? The SQL Server instance (against my preferences) is hosted on VMware; however, it has its own host, so its resources are not being shared with any other server. The data and log files reside on different parts of the SANs. Our IT and network admins are looking into the issue on their end, but when I looked and found a nearly 4 minute gap in the default trace, it hit me that this could be something above/outside of SQL Server.
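
For reference, the default trace can be queried directly, which makes it easy to see the last event before the gap and the first one after it (a sketch; the time window is a placeholder):

DECLARE @path nvarchar(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT StartTime, EventClass, TextData, HostName, ApplicationName
FROM fn_trace_gettable(@path, DEFAULT) -- DEFAULT reads the rollover files too
WHERE StartTime BETWEEN '2015-02-12 10:00' AND '2015-02-12 10:10'
ORDER BY StartTime;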


SQL Server 2008 :: How To Find Statements That Cause Large Memory Paging

Apr 22, 2015

I am monitoring our production server, and noticed that periodically we have spikes of Memory Paging Rate (pages/sec).

How can I find the particular queries/stored procedures that are causing this?
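
The paging counter itself is OS-wide, but a rough way to shortlist SQL-side suspects (a sketch) is to rank cached plans by physical reads, since those are the statements pulling pages from disk:

SELECT TOP (20)
    qs.total_physical_reads,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
          ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_physical_reads DESC;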


SQL Server 2008 :: Speed Up Text Search In Large Result Set?

Jul 14, 2015

I have a query below which filters the detail field in the #TempLogins table. The detail field is a text field which contains many types of text strings, some containing URLs with parts like "ResultID=5", which is what is contained in the ResultIDSearch and ResultSetIDSearch fields. The records with entries like "ResultID=5" are the ones I'm trying to filter for.

The problem I have is that the query takes far too long to run. The #TempLogins table has around 200K records and the #TempSearch table has around 80K records.

select * from #TempLogins a where exists
(select 1 from #TempSearch t1 where
a.detail like '%' + t1.ResultIDSearch + '%'
or
a.detail like '%' + t1.ResultSetIDSearch + '%')


SQL Server 2008 :: Large Binary Dataset - Database Or File System?

Jun 2, 2015

I have a well-structured but also very large binary data set that is generated by a C++ application every five minutes. The data needs to be accessed by SQL applications. Since data is generated every five minutes, performance is key, both for write and read. The data set is about 500 MB. If data is written to the file system, the write performance doesn't involve SQL Server. For reading it, I have a CLR function to read the portions of the data that I need based on offset and length. That works and is very fast. The problem is that the data is stored in the file system, so it is not self-contained within the database.

A second option that I haven't explored yet is to write the data into a table as VARBINARY(MAX). I would read the data using SUBSTRING with the appropriate offset and length. What is the performance of SQL write/read of binary data at this size, and is there a third option I haven't thought of? I'm using SQL Server 2014.
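
For the second option, the read side might look like this (a sketch; the table, column, and values are placeholders). SUBSTRING on a varbinary(max) column returns only the requested byte range rather than the whole value:

DECLARE @Offset bigint = 1048576, @Length int = 65536, @Id int = 42;

SELECT SUBSTRING(Payload, @Offset, @Length) AS Chunk
FROM dbo.BinarySnapshot
WHERE SnapshotId = @Id;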


SQL Server 2008 :: Making Use Of A Large Transaction File To Delete Records?

Jun 5, 2015

Currently we have a database of about 300 GB. Because our backup system failed some time in the past, we were left with a transaction log file which grew to about 160 GB. However, our backups are working again and everything is working fine. My understanding is that the transaction log file is now practically empty, but its capacity remains at 160 GB.

When you delete records, the delete transactions get logged to the transaction log file. My understanding is that when a log backup is done, these transactions get discarded from the log file.

Could I make use of this relatively large transaction log file and start deleting records without actually adding to the log file's size?

The plan is to delete records from logging tables that are not referenced by any other table, without this increasing the transaction log file. For example, over a period of a few weeks we can delete a chunk of records from a table. Then, after a backup has completed, we can delete another chunk of records out of this table, until we have got the table down to the records that we now need. Will this work?
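
In principle yes, provided the deletes are chunked and log backups keep running between chunks, so the space inside the log file is reused instead of extended. A sketch (the table, filter, batch size, and backup path are placeholders):

WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM dbo.LoggingTable
    WHERE LogDate < '20150101';
    IF @@ROWCOUNT = 0 BREAK;

    BACKUP LOG MyDb TO DISK = N'X:\Backups\MyDb_log.trn'; -- frees log space for the next chunk
END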


SQL Server 2008 :: Changing Large / Existing Table To Sparse Columns?

Sep 21, 2015

I have some huge tables (think 200+GB for a single table) which are excellent candidates for sparse columns. The tables have many columns which are defined with decimal datatypes (13,2) with a large percentage of them (over 50% in most cases- some as much as 99%) being 0.00. Since this is very expensive in terms of storage my idea is to set all the 0.00 values equal to NULL then set them as sparse. Across 100 or so identical databases, I have 5 such tables, with 20-40 columns in each table.

Option 1: three steps for each column in each table in each DB (see the sketch after these steps).

Step 1: update table to allow for nulls

Step 2: update table, set column = NULL where column = 0.00

Step 3: alter the columns to be sparse

Option 2:

Step 1: Create entirely new table with sparse column definitions

Step 2: copy entire table, transforming 0.00 to null for affected columns via SSIS

Step 3: drop original table, rename new table to original name
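
For option 1, the per-column work might look like this (a sketch; the table and column names are placeholders, and on tables this size the UPDATE wants batching to keep the log manageable):

-- Step 1: allow NULLs
ALTER TABLE dbo.BigTable ALTER COLUMN Amount decimal(13,2) NULL;

-- Step 2: convert the zeros in batches
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.BigTable SET Amount = NULL WHERE Amount = 0.00;
    IF @@ROWCOUNT = 0 BREAK;
END

-- Step 3: mark the now-nullable column sparse
ALTER TABLE dbo.BigTable ALTER COLUMN Amount ADD SPARSE;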


SQL Server 2008 :: Deleting Large Number Of Rows With Foreign Key And Mirroring Setup

Feb 19, 2015

We have a database, and it is enabled for mirroring. We need to delete the old records, around 500k records from a table, but it has foreign key relationships. How do you do these kinds of deletes on production servers?


SQL Server 2008 :: Estimate Size Of A Table

Feb 4, 2015

I have a table like below,

CREATE TABLE Student
(
Id BIGINT not null
,Name NCHAR(20) not Null
,Branch NVARCHAR (64) null
)

The table contains 100,000 rows. How do I estimate the following?

1) Number of rows in a data page
2) Total number of pages required for the table
3) Total table size in KB or MB
4) Total file size in KB or MB
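
A rough by-hand estimate, assuming Branch averages around 20 characters (the overhead figures are approximate):

-- per row: 8 (bigint) + 40 (nchar(20)) + ~40 (avg nvarchar(64) data)
--   + ~11 bytes of row overhead (header, null bitmap, variable-length bookkeeping)
--   ≈ 99 bytes, plus a 2-byte slot array entry ≈ 101 bytes per row
-- rows per 8096-byte page ≈ 8096 / 101 ≈ 80
-- pages ≈ 100000 / 80 ≈ 1250, i.e. roughly 10 MB of data pages
-- or just measure it once the table is loaded:
EXEC sp_spaceused 'dbo.Student';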


SQL Server 2008 :: Table Size After Reindex

Feb 22, 2015

I'm trying to understand what is happening to one large table.

In a SQL Server 2008 R2 DB, I'm trying to track a rapidly increasing DB size. It's due to one recently added table.

Despite the table's row count not increasing, its size reduces after a scheduled re-index.

I've recorded the space used by this table by running EXEC sp_spaceused 'tableName':

NoRows      reserved      data          index_size   unused
12886451    2300384 KB    2290928 KB    9432 KB      24 KB     (AFTER reindex)
12886451    5406232 KB    5366184 KB    39280 KB     768 KB    (BEFORE reindex)

N.B. The only thing I'm aware of happening in the time period is a reindex as part of a scheduled task. I could be missing something else happening.

The table has only one index, the PK, which is clustered. No fill factor is specified, and the server default fill factor is 0.

I read that fill factor 0 = 100, which will always try to fill the pages, so the space used will be minimised?

After running the reindex, the index fragmentation is very low. I did not record the fragmentation before the reindex.

I can see the data is not added in clustered index order.


SQL Server 2008 :: Data Type And Size Of Each Row

Feb 27, 2015

In my database there is a big table, and its format is something like this:

CREATE TABLE [dbo].[MyTable](
[aaa] [uniqueidentifier] NULL,
[bbb] [uniqueidentifier] NULL,
[ccc] [nvarchar](max) NULL,
[ddd] [nvarchar](100) NULL,.......etc.........

There are some more columns with more nvarchar(max) and other int data types. Anyway, I know a page is 8K in size. How do I find out how much space a row takes with the above data types? If users add 5,000 rows per day, how do I figure out by how much the table will grow?
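
Rather than summing declared sizes (the nvarchar(max) values vary per row), one option is to let the engine report the average row size (a sketch; 'SAMPLED' or 'DETAILED' mode is needed to populate the size columns):

SELECT index_id, avg_record_size_in_bytes, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'SAMPLED');
-- daily growth ≈ 5000 rows * avg_record_size_in_bytes,
-- plus separate LOB pages for nvarchar(max) values stored off-row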


SQL Server 2008 :: How To Reduce MDF File Size

Apr 30, 2015

I have an issue with my data file (MDF). The usage is 99.87% for database DB1, and the file size is 3 GB. How do I reduce it?

I have tried to shrink it by changing the recovery model from FULL to SIMPLE and setting it back to FULL.

I notice the index fragmentation is high.

Can I change the initial size to 1 GB, for example?
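
For reference, shrinking the data file is done per file with DBCC SHRINKFILE (a sketch; the logical file name is a placeholder). Note that with usage at 99.87% there is almost no free space to release, so a 1 GB target cannot succeed until data is actually removed:

USE DB1;
SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files; -- find the logical file name
DBCC SHRINKFILE (N'DB1_Data', 1024); -- target size in MB; stops early if the data won't fit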


SQL Server 2008 :: TempDB Size Hovering Around 45%

Jul 27, 2015

My prod server (only a default instance) is configured with 1024 MB of tempdb data and a 200 MB log. When I run DBCC SQLPERF(LOGSPACE), it shows around 45% 'log space used' most of the time. There is nothing going on in the instance when I run 'whoisactive' and select * from sys.sysprocesses where dbid = 2!

So my questions are: is it normal to see log space used around 45%? How do I find out what caused the tempdb log space to grow to 45%? Is there something to do about it?


SQL Server 2008 :: View Log Cache Size

Jun 21, 2010

Can I view the log cache size in SQL Server memory? Are there any DMVs which show that?


SQL Server 2008 :: How To Find Which Queries / Processes Causing Large Memory Paging Rate

Mar 30, 2015

Our monitoring tool shows that our production system is periodically experiencing a large paging rate, up to 800 memory pages/sec. How can I find out which particular queries, stored procedures, or processes initiate this?


SQL Server 2008 :: Temp DB Size Values From Different Tables

May 25, 2015

For finding size values of temp database:

select * from sys.master_files
-- size column value here is 1024 for the .mdf; size for the .ldf is 64

select * from tempdb.sys.database_files
-- size column value here is 3576 for the .mdf; size for the .ldf is 224

Why is there a difference and not the same value? Do the size columns in the above two views represent different values for tempdb?


SQL Server 2008 :: Size Limitation For XML Data Type?

Oct 7, 2015

I've never worked with the XML data type in SQL Server, although I know it's been there for a few iterations of SQL Server. Now I've got a situation in which we might store some configuration data as XML, since that's the way it comes. (We had thought about storing the data in a VARCHAR(MAX) field.)

The first question is: does the XML data type have a size limitation? For example, do you do something like:

ConfigFile XML(1000) NULL

Or is it just something like this:

ConfigFile XML NULL

The second question is about persisting the data to a file. As the name I chose for the column suggests, we want to save the data from a configuration file into a SQL Server database. How do we go about doing that? We'll be developing a C# application; it will read and write the data both from the SQL table and from the user's local HD.
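
On the first question: xml takes no length specifier, so the second form is the correct one, and a single xml value can hold up to 2 GB. A sketch:

CREATE TABLE dbo.AppConfig (
    ConfigId int IDENTITY PRIMARY KEY,
    ConfigFile xml NULL -- no (n) size; the limit is 2 GB per value
);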


SQL Server 2008 :: Initial Size For TempDB Data And Log File?

Sep 12, 2011

We have installed a SQL Server 2008 R2 SP1 instance, and it hosts SharePoint 2010 databases.

We have 2 dedicated drives for tempdb on the SAN with 50 GB of space. Both the tempdb data and log files are created with the default size. I would like to presize them.

What are the best values to start with?

U ->Tempdbdata having tempdb.mdf file
V->Tempdblog having templog.ldf file
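
Whatever sizes you settle on, the presizing itself is one ALTER per file (a sketch; tempdev and templog are the default logical names, and the sizes shown are placeholders, not recommendations):

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 10GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 5GB);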


SQL Server 2008 :: Finding Backup File Size In Advance?

Mar 8, 2015

My backups are failing sometimes. My DB size is around 400 GB, and we take the backup to a remote server. The free space available on the disk shows 600 GB, but my database full backup runs more than 10 hours and then fails. The failure reason given is that there is not enough space on the disk.

What could the possible failure reasons be? There is more free space than the database size on the backup server, so why does the job failure show that message? I noticed the same thing occurred a couple of times in the past.

Is there any way to find out how large the backup file will be before we run the backup job?

i.e. if we run the full backup of the test1 database now, how large a .bak file will it generate for that test1 DB?

Currently we are using a maintenance plan with compression (2008 R2), and the database has TDE enabled.
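
An uncompressed full backup is roughly the used pages in the database rather than the file sizes, so it can be estimated up front (a sketch; compression usually shrinks the result further, though TDE-encrypted databases compress poorly on 2008 R2):

-- run inside the database to be backed up
SELECT SUM(used_page_count) * 8 / 1024 AS approx_backup_mb
FROM sys.dm_db_partition_stats;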
