Replication Row Commit Size
Aug 23, 2006
Is there a way to get replication to commit records in batches instead of all at once? I am in a 24/7 shop, and some of my updates end up being thousands of rows, which sometimes locks the subscriber table for a few minutes. If I could get it to commit, say, every 1000 rows, it might give me some relief in this area.
Or am I thinking about this wrong? If this is possible, would it help at all?
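One hedged angle on this: transactional replication preserves the publisher's transaction boundaries, so the Distribution Agent's -CommitBatchSize and -CommitBatchThreshold parameters group whole transactions together rather than splitting a single large one. Breaking the update into smaller transactions at the publisher, so each batch replicates (and locks) separately, might look like the sketch below (table and predicate are placeholders):

DECLARE @rows int
SET @rows = 1
SET ROWCOUNT 1000                -- cap each UPDATE at 1000 rows (works on SQL 2000 as well)
WHILE @rows > 0
BEGIN
    UPDATE dbo.BigTable SET flag = 1 WHERE flag = 0   -- placeholder update
    SET @rows = @@ROWCOUNT       -- 0 once nothing is left to update
END
SET ROWCOUNT 0                   -- reset the session limit

Each loop iteration commits on its own, so the subscriber applies (and locks) at most 1000 rows at a time.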
View 3 Replies
Oct 11, 2006
Hi all,
If I set the "Maximum insert commit size" to 10 (0 is the default) in an OLE DB destination, what does the 10 mean? 10 records, or 10 KB/MB of data?
Thanks
View 4 Replies
Jul 10, 2007
The DBA is not around and I would like a recommendation on what the Maximum insert commit size (MICS) should be for an OLE DB Destination when the default of zero is not used.
I want to use Fast Load and I want to use Redirect Row to catch the errors. I just ran a test where the OLE DB Destination was NOT set to Fast Load - it took forever, and I cannot accept that kind of performance.
I know this may be totally dependent on what is being inserted, but is there any problem with just setting this value to, say, 800,000?
The destination SQL database's recovery model is set to SIMPLE, as it is not a transactional database.
Suggestions? Thx
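For intuition, with Fast Load the Maximum insert commit size behaves much like the BATCHSIZE option of a plain T-SQL BULK INSERT: each batch commits separately, so an error only forces Redirect Row to isolate rows from that batch rather than the whole load. A hedged illustration (file and table are made up):

BULK INSERT dbo.StageTable
FROM 'C:\loads\data.dat'
WITH (BATCHSIZE = 800000, TABLOCK)   -- commits every 800,000 rows

A very large value like 800,000 mostly trades log and lock pressure against the cost of re-processing a big batch when a row fails.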
View 4 Replies
Sep 8, 2006
I'm seeing some strange behavior from the OLE DB Destination when using the "fast load" access mode and setting the "Maximum insert commit size".
When I do not set the "Rows per batch" or the "Maximum insert commit size", the package I'm working with inserts 123,070 rows using a single "insert bulk" statement. The data seems to flow through the pipeline until it gets to the OLE DB Destination and then I see a short pause. I'm assuming the pause is from the "insert bulk" statement handling all of the rows at once.
When I set the "Rows per batch" option but leave the "Maximum insert commit size" alone, I generally see the same behavior -- a single "insert bulk" statement that handles all 123,070 rows. In this case, however, the "insert bulk" statement has a "ROWS_PER_BATCH" option appended that matches the "Rows per batch" setting. This makes sense. I'm assuming the "insert bulk" then "batches" the rows into multiple inserts (although I'm unsure how to confirm this). This version of the "insert bulk" statement appears to run in about the same time as the case above.
When I set the "Maximum insert commit size" option and leave the "Rows per batch" setting alone, I see multiple "insert bulk" statements being executed, each handling the lower of either the value I specify for the "Maximum insert commit size" or the number of rows in a single buffer flowing through the pipeline. In my testing, the number of rows in a buffer was 9,681. So, if I set the "Maximum insert commit size" to 5,000, I see two "insert bulk" statements for each buffer that flows into the OLE DB Destination (one handling 5,000 rows and one handling 4,681 rows). If I set it to 10,000, I see a single "insert bulk" statement for each buffer (handling all 9,681 rows).
Now the problem. When I set the "Maximum insert commit size" as described in the last case above, I see LONG pauses between buffers being handled by the OLE DB Destination. For example, I might see one buffer of data flow through (and be handled by one or more "insert bulk" statements based on the "Maximum insert commit size" setting), then see a 2-3 minute pause before the next buffer of data is handled (with its one or more "insert bulk" statements being executed). Then I might see a 4-5 minute pause before the next buffer of data is handled. The pause between the buffers being passed through the OLE DB Destination (and handled via the "insert bulk" statements) is sometimes shorter, sometimes longer.
Using Profiler, I don't see any other activity going on within the database or within SQL Server itself that would explain the pauses between the buffers being handled by the OLE DB Destination and the resulting "insert bulk" statements...
Can anyone explain what is going on here? Is setting the "Maximum insert commit size" a bad idea? What are the differences between it and the "Rows per batch" setting and what are the recommended uses of these two options to try to improve the performance of the insert (particularly when handling millions of rows)?
TIA for any thoughts or information...
Dave Fackler
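One hedged way to see what those pauses are actually waiting on while the package runs, using the SQL 2005 DMVs:

-- snapshot of what every user session is currently waiting on
SELECT wt.session_id, wt.wait_type, wt.wait_duration_ms, wt.blocking_session_id
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.session_id > 50        -- user sessions only

If the "insert bulk" session shows up here during a pause, the wait_type (e.g. log or lock waits) narrows down the cause.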
View 8 Replies
Apr 20, 2001
Hi All,
Server A has been configured as Distributor and Publisher with a push subscription to Server B. Replication works fine from A to B, but B to A gives the error "OLE DB Provider SQLOLEDB does not support distributed transaction".
I also tried it the other way around:
Server B configured as Distributor and Publisher with a push subscription to Server A. Replication works fine from B to A, but A to B gives the same error.
Regards,
Suresh.
View 1 Replies
Mar 28, 2008
Microsoft SQL Server Management Studio Express
select @@trancount   -- 0: no transaction open
begin tran
select @@trancount   -- 1
use ProdNetPerfMon
select @@trancount   -- still 1
update Nodes set Caption = 'xxxx' where Vendor = 'yyyy'
select @@trancount   -- still 1
commit tran
select @@trancount   -- back to 0
The script executes as expected, including the @@trancount values, without errors. Nodes.Caption is updated, but reverts after a few minutes. I am running with sa privileges.
What am I missing?
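A hypothetical first check, since a monitoring database often has its own processes rewriting rows: look for triggers on the table (the object name below is an assumption), and consider whether the application itself periodically re-syncs Caption.

-- list any triggers on dbo.Nodes that could undo a manual edit (SQL 2005+)
SELECT t.name AS trigger_name, t.is_disabled
FROM sys.triggers AS t
WHERE t.parent_id = OBJECT_ID('dbo.Nodes')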
View 1 Replies
Feb 5, 2004
Hello,
We are setting up merge replication and the size of the database is 85 GB. How much disk space is it feasible to allocate for the distribution database? Is there a rule of thumb (a percentage of the publication database's size) for sizing the distribution DB?
Is it better to keep a separate server as the Distributor, or to keep the Publisher and Distributor on the same server?
Can anyone help me!
Thanks!
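A minimal way to keep an eye on the distribution database while testing, assuming the default database name:

USE distribution
EXEC sp_spaceused   -- reports reserved, data, index and unused space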
View 3 Replies
Apr 24, 2006
Hi,
I have been running a merge publication with 4-5 subscribers, all working perfectly, but today I started getting the error below. I generated a new dynamic snapshot and reinitialized the subscriber, but still the same. I synchronized the other subscribers and they all run with no errors, even after an init. I would therefore think this is data related, specific to this subscriber, but I have no further ideas how to track it down.
any ideas?
Regards
Gert Cloete
To obtain an error file with details on the errors encountered when initializing the subscribing table, execute the bcp command that appears below. Consult the BOL for more information on the bcp utility and its supported options.
bcp "conCORD_ODS"."dbo"."MSmerge_contents" in "\OBSQL2005\ConcordReplData\MERCURYSQLEXPRESSOBSQL2005$DEV_CONCORD_ODS_CONCORD_ODSMSmerge_contents90_forall.bcp" -e "errorfile" -t"<x$3>" -r"<,@g>" -m10000 -SMERCURYSQLEXPRESS -T -w
The errors reported were:
End of file reached, terminator missing or field data incomplete
Field size too large
The process could not bulk copy into table '"dbo"."MSmerge_contents"'.
The merge process was unable to deliver the snapshot to the Subscriber. If using Web synchronization, the merge process may have been unable to create or write to the message file. When troubleshooting, restart the synchronization with verbose history logging and specify an output file to which to write.
View 6 Replies
Oct 19, 2007
Hi,
I'd like to replicate a SQL Server database to an SDF file. For simplicity I want to use the SQL Server 2005 Management Studio wizard. The wizard reports that the maximum buffer size is too small; in the generated comment (C# code) I can see it is set to 512. How can I increase this value in the replication wizard?
Miroslaw
View 3 Replies
Jan 29, 2008
I made a DDL change to a published table. To do this, I had to remove the article from the publication, modify two field sizes, save the table, and then add the table back to the publication. I did have to rerun the snapshot agent after this even though the new article is exactly the same (yes, I understand all rowguids are dropped and recreated).
So, at the subscriber, I begin synchronization and it replicates every article, thus doubling the size of the subscription .sdf file. Yes, it can be compacted, which cuts it back in half.
The question is how to prevent this behavior. Is it possible to create the rowguid before publishing the table/article, so that even if you go back and make a schema change that requires republishing, you avoid the nasty behavior of producing a new snapshot and doubling the subscription database size?
This particular merge publication is read-only. Every article is marked as 'download only.'
Ideally, if a schema change is needed on one table, I only want that table/article to require reinitialization.
Actually, if it's a simple change like expanding a varchar by some characters, I'd like to not have to republish the article at all.
keep dreaming?
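Possibly relevant: SQL Server 2005 merge publications can propagate certain DDL changes without a reinitialization when the publication's replicate_ddl property is on. A hedged sketch, with a made-up publication and table:

-- enable DDL replication on an existing merge publication
EXEC sp_changemergepublication
    @publication = N'MyMergePub',
    @property = N'replicate_ddl',
    @value = N'true'
GO
-- a simple widening change then propagates without a new snapshot
ALTER TABLE dbo.MyTable ALTER COLUMN SomeText varchar(100)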
View 1 Replies
Jul 14, 2015
I have transactional replication set up on two environments. On one server the distribution database is tiny, but on the second server the distribution database is five times bigger and taking up a lot of space, even though both environments hold almost the same amount of data.
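Two hedged things to compare between the environments: retention settings and the backlog of undistributed commands.

USE distribution
-- how many replicated commands are currently stored
SELECT COUNT(*) AS pending_commands FROM dbo.MSrepl_commands
-- retention and other properties of the distribution database
EXEC sp_helpdistributiondb

A longer max retention (or agents falling behind) on the second server would explain a distribution database several times larger for the same data volume.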
View 15 Replies
Sep 9, 2014
Does SQL Server 2012 support the varbinary data type for replication (merge or transactional)?
And if so, is there a limitation on data size?
View 1 Replies
Feb 8, 2006
I am in the process of testing a SQL 2005 Std x64 server with merge replication using Windows Mobile 5.0 clients and SQL 2005 Mobile. The test DB is a copy of the currently active DB, but has been expanded to include some new tables to support planned application functionality extensions.
Once the publication exceeds 97 articles, the error "The buffer pool is too small or there are too many open cursors" is thrown. If I drop one article everything is fine. I ran a test with a dummy DB that had 100 blank tables, and this initialized just fine on the client. The additional article I am publishing (the 98th table) is also empty, but it throws the error anyway.
Is there a limit on the total size/number of changes that can be sent? Since I have run tests sending over 64,000 changes to a client during initialization this does not seem to be the case (I am only attempting a little more than 9,700 changes on this initialization).
Some other ideas that have been tested without success are to stop the user triggers from propagating, and toggling the AWE setting for SQL. The Replication Monitor does say the client completes replication, and it seems to choke at the very end of completing replication when it attempts to write to the tracking tables. The last successful action is sys.sp_MSadd_merge_history90, and it appears to be acting on the last table added to the publication.
There does not appear to be a hard limit on the number of articles, since I can publish more articles in a dummy DB than I am able to here, so it seems to be something to do with size. Any information would be helpful; this is a very frustrating issue. Thanks!
View 1 Replies
Feb 1, 2011
I've got two databases on the same server and replicate some tables from one database to the other. The replication is configured not to drop the table if it exists, but to delete the data based on the filter if one exists. There are two tables on the subscriber that have some extra columns.
I get a "field size too large" error when trying to replicate them. Is there a workaround that avoids making the publisher and subscriber tables identical in schema?
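One hedged option, if this is transactional replication, is vertical filtering so that only the shared columns are published at all; publication, article, and column names below are hypothetical:

EXEC sp_articlecolumn
    @publication = N'MyPub',
    @article = N'MyTable',
    @column = N'SomeColumn',
    @operation = N'drop'   -- remove this column from the published article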
View 5 Replies
Sep 4, 2007
I am trying to resize a database's initial log file from 500 MB to 2 MB. I'm using:
ALTER DATABASE <DBNAME> MODIFY FILE ( NAME = <DBLOGFILENAME>, SIZE = 2 )
And I'm getting "MODIFY FILE failed. Specified size is less than current size." I tried going into the database properties and setting the log file to 2 MB, but it doesn't keep the changes.
Any help with this process?
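ALTER DATABASE ... MODIFY FILE can only grow a file, which is why it reports "Specified size is less than current size"; shrinking goes through DBCC SHRINKFILE instead. A hedged sketch, assuming a made-up logical log file name:

USE MyDB
-- under the FULL recovery model the log may need backing up before it can shrink
BACKUP LOG MyDB TO DISK = 'C:\backup\MyDB_log.bak'
DBCC SHRINKFILE (MyDB_log, 2)   -- target size in MB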
View 1 Replies
Jun 15, 2006
I installed SQL 2005 a while back. Then I recently found out my file system was FAT32 (I don't understand why the hardware people did this...) and I had to convert to NTFS. Naturally the SQL service no longer worked, so I uninstalled in order to reinstall. Now I can't reinstall it; I keep getting this message:
native_error=5039, msg=[Microsoft][SQL Native Client][SQL Server]MODIFY FILE failed. Specified size is less than current size.
I'll try to post the full log in a new post.
View 11 Replies
Sep 14, 2004
Do DELETE/UPDATE statements in MS SQL Server need a COMMIT (or equivalent) run after them, as they do in Oracle?
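A minimal sketch of the difference (table names hypothetical): SQL Server runs in autocommit mode by default, so each statement commits on its own, and COMMIT only matters inside an explicit transaction.

-- autocommit: this DELETE is permanent as soon as it completes
DELETE FROM dbo.Orders WHERE OrderDate < '20000101'

-- explicit transaction: nothing is permanent until COMMIT
BEGIN TRANSACTION
UPDATE dbo.Orders SET Status = 'X' WHERE OrderID = 42
COMMIT TRANSACTION   -- or ROLLBACK TRANSACTION to undo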
View 1 Replies
Mar 16, 2007
Hi folks,
Can anyone enlighten me here? I'm trying to use a SPROC which, when supplied with an int, looks up the table and returns certain columns from it. I'm using a SqlCommand; here's my codebehind:

SqlCommand dataSource = new SqlCommand("retrieveData", new SqlConnection(dbConnString));
dataSource.CommandType = CommandType.StoredProcedure;
dataSource.Parameters.AddWithValue("id", poid);
dataSource.Parameters.AddWithValue("title", title).Direction = ParameterDirection.Output;
dataSource.Parameters.AddWithValue("creator", creator).Direction = ParameterDirection.Output;
dataSource.Parameters.AddWithValue("assignee", assignee).Direction = ParameterDirection.Output;
etc, etc...

And the SPROC:

set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[retrieveData]
    @id int,
    @title varchar(50) OUTPUT,
    @creator varchar(50) OUTPUT,
    @assignee varchar(50) OUTPUT,
    @contact varchar(50) OUTPUT,
    @deliveryCost numeric(18,2) OUTPUT,
    @totalCost numeric(18,2) OUTPUT,
    @status tinyint OUTPUT,
    @project smallint OUTPUT,
    @supplier smallint OUTPUT,
    @creationDateTime datetime OUTPUT,
    @amendedDateTime datetime OUTPUT,
    @locked bit OUTPUT
AS
    /** SET NOCOUNT ON; **/
    SELECT [title] AS [@title], [datetime] AS [@creationDateTime], [creator] AS [@creator],
           [assignee] AS [@assignee], [supplier] AS [@supplier], [contact] AS [@contact],
           [delivery_cost] AS [@deliveryCost], [total_cost] AS [@totalCost],
           [amended_timestamp] AS [@amendedDateTime], [locked] AS [@locked]
    FROM purchase_orders
    WHERE [id] = @id;

The id being passed in is definitely not null, and is set to a value of an item I know exists. The resulting error is:

Exception Details: System.InvalidOperationException: String[1]: the Size property has an invalid size of 0.
Line 63: retrievePODetails.Connection.Open();
Line 64: retrievePODetails.ExecuteNonQuery();
[InvalidOperationException: String[1]: the Size property has an invalid size of 0.]
System.Data.SqlClient.SqlParameter.Validate(Int32 index) +717091
...

Can anyone see anything I'm missing?
Thanks,
Ally
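Two hedged observations on the code above. The procedure never assigns its OUTPUT parameters - aliasing result-set columns as [@title] only names the columns, so the parameters would stay NULL even once the call succeeds. Separately, ADO.NET requires an explicit Size on variable-length (varchar) output parameters, which is what the "Size property has an invalid size of 0" error refers to. A sketch of the T-SQL side of the fix:

-- inside retrieveData: assign the OUTPUT parameters instead of returning a result set
SELECT @title            = [title],
       @creationDateTime = [datetime],
       @creator          = [creator],
       @assignee         = [assignee],
       @supplier         = [supplier],
       @contact          = [contact],
       @deliveryCost     = [delivery_cost],
       @totalCost        = [total_cost],
       @amendedDateTime  = [amended_timestamp],
       @locked           = [locked]
FROM purchase_orders
WHERE [id] = @id

On the C# side, each varchar output parameter also needs its Size set (e.g. to 50) before ExecuteNonQuery is called.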
View 1 Replies
Nov 14, 2007
Using C#, SQL Server 2005, and ASP.NET 2 in a web app, I've tried removing the size from parameters of type NCHAR, NVARCHAR, and VARCHAR. I'd rather just send a string and let the size of the parameter in the SP truncate any extra chars if need be. I began getting the error below, and eventually realized it happened only with output parameters, as in the code snippet following it.

String[3]: the Size property has an invalid size of 0.

par = new SqlParameter("@BusinessEntity", SqlDbType.NVarChar);
par.Direction = ParameterDirection.Output;
cmd.Parameters.Add(par);
cmd.ExecuteNonQuery();

What's the logic behind this? Is there any way around it other than either finding out what the size should be, or assigning a size larger than would ever be needed?
Thanks
Mike Thomas
View 6 Replies
Jun 8, 1999
Hi ,
I've tried to switch MS SQL Server 6.5 to explicit transactions (set implicit_transaction on/off) without success. What is the exact syntax for doing that?
Herve
(PS: Thanks Gregory for your quick answer )
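The option name is plural; a minimal sketch of both directions:

SET IMPLICIT_TRANSACTIONS ON    -- statements now open a transaction you must COMMIT yourself
-- ... work ...
COMMIT TRANSACTION
SET IMPLICIT_TRANSACTIONS OFF   -- back to autocommit, the default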
View 1 Replies
Jun 3, 1999
Hi All,
I don't know the MS SQL Server internals at all. I used Oracle for a couple of years, and there, in some cases (e.g. when using a TP monitor such as MTS or Tuxedo), you can switch off implicit transactions with the option AUTOCOMMIT ON/OFF. How can I switch off the implicit transaction system in MS SQL Server?
Thanks in advance,
Herve
View 1 Replies
Aug 21, 2002
I also use the
begin transaction
select ........etc
commit
structure when I write queries. My problem is that if I close Query Analyzer, it asks me to commit the transaction before I exit. Why?
How do you check for uncommitted transactions and commit them?
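A minimal check to run in the same session before closing:

SELECT @@TRANCOUNT          -- anything above 0 means a transaction is still open
IF @@TRANCOUNT > 0
    COMMIT TRANSACTION      -- or ROLLBACK TRANSACTION to discard the work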
View 1 Replies
Aug 17, 2005
I built this in SQL Query Analyzer to update all records with the 1/1/02 date:
update tbl.EMPPOS set EFFECT_DATE = '2005-01-01'
where EFFECT_DATE = '2002-01-01';
When I run the query it updates and shows records in the lower window, but the table isn't altered. What is wrong with my syntax?
thanks,
View 1 Replies
Apr 14, 2008
hi friends,
I am executing the SP logic below. Suppose a problem occurs mid-execution (no space, communication failure, log full): data gets committed partially instead of the entire transaction rolling back.
CREATE procedure RBI_Control_sp
as
begin
set nocount on
--Checking the count before truncating
exec fin_ods..count_sp
--Truncating the Table
exec fin_ods..trun_sp
--Data Transfer
exec fin_ods..RBI_Data_Transfer_sp
--Checking the count after Data transfer
exec fin_ods..count_sp
--temp table Table population,Fetching data from the fin_ods[erp Table]
exec FIN_wh..RBI_SPExecution_sp
set nocount off
end
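A hedged rework, assuming SQL Server 2005 or later: wrapping the steps in one transaction with TRY/CATCH makes any failure roll everything back (the TRUNCATE inside trun_sp is also rolled back, as long as it runs inside the transaction):

CREATE PROCEDURE RBI_Control_sp
AS
BEGIN
    SET NOCOUNT ON
    BEGIN TRY
        BEGIN TRANSACTION
        EXEC fin_ods..count_sp                -- count before truncating
        EXEC fin_ods..trun_sp                 -- truncate the table
        EXEC fin_ods..RBI_Data_Transfer_sp    -- data transfer
        EXEC fin_ods..count_sp                -- count after transfer
        EXEC FIN_wh..RBI_SPExecution_sp       -- populate from the fin_ods ERP table
        COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION              -- undo every step on any failure
        DECLARE @msg nvarchar(2048)
        SELECT @msg = ERROR_MESSAGE()
        RAISERROR(@msg, 16, 1)                -- re-raise so the caller sees the error
    END CATCH
END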
View 1 Replies
Dec 15, 2007
I have a cursor loop through a set of records that looks something like this.
OPEN database_cursor
FETCH NEXT FROM database_cursor
INTO @iID
WHILE @@FETCH_STATUS = 0
BEGIN
update table 1
update table
......
FETCH NEXT FROM database_cursor
INTO @iID
END
CLOSE database_cursor
DEALLOCATE database_cursor
Is there a way I could put all the UPDATE statements within a transaction, so that either everything goes or nothing does?
Thanks,
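A hedged shape for that, assuming SQL 2005+ for TRY/CATCH; the UPDATE statements are placeholders:

BEGIN TRY
    BEGIN TRANSACTION
    OPEN database_cursor
    FETCH NEXT FROM database_cursor INTO @iID
    WHILE @@FETCH_STATUS = 0
    BEGIN
        UPDATE dbo.Table1 SET col1 = 1 WHERE id = @iID   -- placeholder updates
        UPDATE dbo.Table2 SET col2 = 2 WHERE id = @iID
        FETCH NEXT FROM database_cursor INTO @iID
    END
    CLOSE database_cursor
    DEALLOCATE database_cursor
    COMMIT TRANSACTION          -- all updates become permanent together
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION    -- any failure undoes every update
    IF CURSOR_STATUS('global', 'database_cursor') >= 0
        CLOSE database_cursor
    IF CURSOR_STATUS('global', 'database_cursor') >= -1
        DEALLOCATE database_cursor
END CATCH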
View 3 Replies
Jul 20, 2005
I have a complex stored procedure that performs a lot of inserts, updates, deletes and so on. I would like to issue some commits during this SP, but of course they are not "real" commits, because whoever calls the SP could still decide on a rollback. But I need the commits to be real: the transaction log grows far too much during the execution. Is there a way to force a commit during an SP?
Thank you very much!
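If the caller has not already opened its own transaction, a common hedged pattern is to do the work in chunks, committing each batch so the log can truncate in between (SIMPLE recovery assumed; table and predicate are placeholders, and DELETE TOP needs SQL 2005+):

DECLARE @rows int
WHILE 1 = 1
BEGIN
    BEGIN TRANSACTION
    DELETE TOP (10000) FROM dbo.WorkTable WHERE processed = 1   -- placeholder work
    SET @rows = @@ROWCOUNT      -- capture before COMMIT resets @@ROWCOUNT
    COMMIT TRANSACTION          -- each batch commits for real
    IF @rows = 0 BREAK
END

If the caller does hold an outer transaction, an inner COMMIT only decrements @@TRANCOUNT and frees no log space, which is exactly the limitation being run into here.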
View 3 Replies
Oct 9, 2007
I have an SSIS package that iterates through a thousand or so download text files, parses them and inserts the results into a database via a Stored procedure and an OLE DB Command.
For the most part this process works without any issues, yet I keep obtaining random errors on a DT_STR (500) column. I have reviewed the data extensively and this column - which is the same across all of the rows - does not appear to be any different.
The rest of the rows before and after the error rows all insert properly but these rows consistently fail in the OLE DB task with the following error:
[OLE DB Command [35549]] Error: An OLE DB error has occurred. Error code: 0x80040E14. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "The incoming tabular data stream (TDS) remote procedure call (RPC) protocol stream is incorrect. The RPC name is invalid.".
When I inserted the rows into an errors table, I found that the error code, -1071607698, is not defined and the only thing I could find online was a reference to:
DTS_E_COMMANDDESTINATIONADAPTERSTATIC_UNAVAILABLE
which appears to be a DTS error and not an SSIS error.
I have added tasks to explicitly verify the length of the field and that field actually inserts without any issues into the error table - which has the exact same column definition as the target table.
I am at a complete loss as to how to proceed - does anybody have an idea?
View 4 Replies
Dec 21, 2007
When doing an 'update table set field1 = 'N'' using SQL Query Analyzer, is the update committed immediately? If it isn't committed, can a 'rollback' be executed, and what is the format of the rollback command?
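A minimal sketch (table name hypothetical): outside an explicit transaction the update commits immediately and cannot be rolled back; inside one, it can:

BEGIN TRANSACTION
UPDATE dbo.MyTable SET field1 = 'N'
-- inspect the results, then either:
ROLLBACK TRANSACTION    -- undo the update
-- or:
-- COMMIT TRANSACTION   -- make it permanent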
View 4 Replies
Jun 15, 2006
How do you implement two-phase commit in SQL Server 2000? Is there anything like Oracle's database links available here?
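A hedged sketch: SQL Server's analogue of an Oracle database link is a linked server, and two-phase commit between servers runs through MS DTC as a distributed transaction (server and object names are made up):

-- define a linked server pointing at the remote instance
EXEC sp_addlinkedserver
    @server = N'REMOTESQL',
    @srvproduct = N'',
    @provider = N'SQLOLEDB',
    @datasrc = N'remotehost'
GO
-- MS DTC coordinates the two-phase commit across both servers
BEGIN DISTRIBUTED TRANSACTION
UPDATE REMOTESQL.SomeDb.dbo.SomeTable SET col = 1 WHERE id = 5
UPDATE dbo.LocalTable SET col = 1 WHERE id = 5
COMMIT TRANSACTION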
View 5 Replies
Apr 4, 2007
Hi,
I'm using a SQL Express database over a network, from a C# Express program, so I had to use plain SQL connections and commands instead of Data Sources (I couldn't find a way to make those work). In the program/DB I've got a couple of master-detail situations, something like:
Product:
-----------
productID
(...)
Acessories:
----------------
acessID
(...)
ProductAcess:
--------------------
productID
acessID
So when inserting a new Product, I'll have to first insert the product (with product name, price, and so on), and once I get the product ID from the insert command, I'll insert the ProductAcess rows. I've found a problem in this, though: if for some reason the insert of the Product succeeds but the insert of ProductAcess fails, I've got a big mess on my hands, because I'll have a row in Product with no rows in ProductAcess (which shouldn't happen in my program's scenario). I could solve this by deleting all rows connected in some way to the product that failed to insert, but it would be far better and more correct to use a commit command at the end of the insert commands, to make sure only the right data is inserted (saving time and resources). I use this all the time in Oracle databases, but don't know if it is possible in SQL Express... Is it? How? Thanks
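SQL Express supports transactions just like the full product; a hedged T-SQL sketch of the master-detail insert (column lists are made up):

DECLARE @productID int
BEGIN TRY
    BEGIN TRANSACTION
    INSERT INTO Product (name, price) VALUES ('Widget', 9.99)
    SET @productID = SCOPE_IDENTITY()               -- id of the row just inserted
    INSERT INTO ProductAcess (productID, acessID) VALUES (@productID, 7)
    COMMIT TRANSACTION                              -- both rows or neither
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION                        -- the Product row disappears too
END CATCH

The same thing is reachable from C# by calling BeginTransaction on the SqlConnection and passing the resulting transaction to each SqlCommand.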
View 1 Replies
Sep 3, 2007
Hi,
My data flow has several transformations:
1. Search for an employee; if the employee already exists, update it, otherwise insert it.
2. Once the new employee is created, I have to get its id (with another lookup transformation) to update another table with it. This id is an autonumber; that's the reason I have to get it once the record is inserted.
At the moment this second lookup transformation doesn't find the new employee... I suppose that's because the new data is not yet committed in the database.
The question is: is it possible to force a commit?
Thanks!
View 5 Replies
Sep 9, 2007
This is a really widespread issue, discussed more than once on the SQL CE MSDN forums!
Is there any way I can commit changes that happen at runtime (while I am developing the application), such as inserts/updates/deletes, to the .sdf DB on the machine?
View 34 Replies
Jul 25, 2007
I have one database, test, with one .mdf and one .ldf file. The .mdf file size is 100 MB. For some reason I removed all the tables from that .mdf file and transferred them into a new secondary file, so all the tables are now in the secondary file. Now I want to reduce the first .mdf file from 100 MB to 50 MB; is that possible? It's showing 90 MB free. Please reply.
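Shrinking the now mostly-empty primary file should work with DBCC SHRINKFILE, though it cannot go below the space still used by system objects; a hedged sketch (the logical file name is a guess - check it first):

USE test
SELECT name, size FROM sysfiles   -- find the logical name of the primary data file
DBCC SHRINKFILE (test_data, 50)   -- target size in MB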
View 1 Replies