Conflicts Consistently Occur On Merge
Aug 15, 2007
Hi,
I am hoping a sharp eye can spot what I am doing wrong here; I must be making a common mistake. I am happy to create and post a sample if necessary.
Problem:
The publication's parent table has a row filter (defined below) and a join filter (defined below) to a child table. When changes are made to the parent and child tables and then synchronised, a conflict is raised. For the child table the merge agent reports that an explicit update occurred at the publisher, but the values at the publisher have not changed at all and no SQL has been executed to update the publisher rows. This in effect stops the changes to the child table from being applied until I manually apply them in the conflict resolver. The strange thing is that the conflict should not occur in the first place.
Perhaps it is my SQL Server 2005 version? 9.00.2050.00 SP1 Standard Edition.
Environment:
SQL Server 2005 - 9.00.2050.00 SP1 Standard Edition
SQL Server Mobile
Detail:
FKs are
FK: Job.JobScheduleID --> JobSchedule.ID
FK: JobDetail.JobID --> Job.ID
All three tables have int identity (auto-increment) IDs.
Publication Articles:
JobSchedule is download only
Job is Bidirectional, identity range management is MANUAL (only updates occur on this table)
JobDetail is Bidirectional, identity range management is MANUAL (only updates occur on this table)
Filters are of the following form:
Filter Job:
SELECT <published_columns> FROM [dbo].[Job] WHERE convert(nchar,[companyID])=Host_Name() AND [JobCompletedDate] IS NULL AND
( [JobScheduleID] in (SELECT distinct ID from JobSchedule where GETDATE() BETWEEN [JobSchedule].[start] AND [JobSchedule].[end]) )
Join Filter Job --> JobDetail
SELECT <published_columns> FROM [dbo].[Job] INNER JOIN [dbo].[JobDetail] ON [Job].[ID] = [JobDetail].[JobID]
Now, the first thing to note with respect to using GETDATE(): I have read http://msdn2.microsoft.com/en-us/library/ms365153.aspx, so I thought I would remove that portion of the filter on the Job table just to see what happens:
SELECT <published_columns> FROM [dbo].[Job] WHERE convert(nchar,[companyID])=Host_Name() AND [JobCompletedDate] IS NULL
This still did not resolve the issue. I then removed the AND [JobCompletedDate] IS NULL and it started working fine (cool), but of course the filter no longer satisfies the rule I want to enforce (seriously uncool).
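For reference, one workaround I am considering, just a sketch: the IsCurrent column, its default and the refresh job below are hypothetical additions, not something that exists in the schema today. The idea is to precompute the "current schedule" state into a real column on a schedule, so the row filter itself no longer calls GETDATE():
ALTER TABLE dbo.JobSchedule ADD IsCurrent bit NOT NULL DEFAULT (0)
GO
-- run from a SQL Agent job at whatever granularity the schedules need;
-- only touch rows whose state actually flips, so the refresh itself does
-- not register a merge change for every JobSchedule row
UPDATE js
SET    IsCurrent = CASE WHEN GETDATE() BETWEEN js.[start] AND js.[end] THEN 1 ELSE 0 END
FROM   dbo.JobSchedule AS js
WHERE  js.IsCurrent <> CASE WHEN GETDATE() BETWEEN js.[start] AND js.[end] THEN 1 ELSE 0 END
GO
-- the Job row filter then becomes deterministic:
-- WHERE convert(nchar,[companyID]) = HOST_NAME()
--   AND [JobCompletedDate] IS NULL
--   AND [JobScheduleID] IN (SELECT ID FROM dbo.JobSchedule WHERE IsCurrent = 1)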
Any ideas out there ?
Much appreciated,
pdns
View 6 Replies
Mar 25, 2008
I have set up merge replication between 1 publisher and 9 subscribers (all push subscriptions). Distributor and publisher are located on the same machine.
Although everything seems to work fine from the outside (most of the time), there are a lot of conflicts in the conflict table for the replication, and they appear all the time. There are a lot of "download insert failed" conflicts, and they always look like the ones in the following three screenshots:
http://www.tronk.be/conflicts/conflict1.JPG
http://www.tronk.be/conflicts/conflict2.JPG
http://www.tronk.be/conflicts/conflict3.JPG
In the same way, there are also many "upload insert failed" conflicts.
In addition to this, there are some "update conflict"s (but a lot fewer than the other conflicts). Some of them show the same row on both sides:
http://www.tronk.be/conflicts/conflict4.JPG
Others show a different row on each side:
http://www.tronk.be/conflicts/conflict5.JPG
The only thing that causes a real problem is the last screenshot, although I don't understand why the other conflicts are there (the insert statements actually seem to be applied anyway, even though conflicts are logged). In the case of the last screenshot, I can't find any place where an UPDATE actually happens at APP-STB, while I can clearly pinpoint the UPDATE on the other side (which is what actually comes from our program).
One more thing: the system is running at its limits, but all replication seems to be working fine.
I would appreciate any help or comments very much.
View 1 Replies
View Related
Apr 18, 2007
Hi,
I am using SQL Server 2000 merge replication. Sometimes when the data is replicated there are conflicts which, when examined, turn out to be violations of a foreign key constraint. But the data (the keys) is already present in the master tables. Is there a way to control the order in which the tables are replicated? I ask because I think the data in the details table is replicated before the master table. The conflicts are resolved properly when done using the conflict viewer.
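For what it's worth, SQL Server 2005 lets you set an explicit processing order per merge article, which is sometimes used for exactly this parent-before-child problem; I am not certain the option exists on SQL Server 2000, so treat the following as a hedged sketch with placeholder publication and article names:
EXEC sp_changemergearticle
     @publication = N'MyPublication',
     @article = N'MasterTable',
     @property = N'processing_order',
     @value = N'1',                      -- lower values are processed first
     @force_invalidate_snapshot = 1,
     @force_reinit_subscription = 0

EXEC sp_changemergearticle
     @publication = N'MyPublication',
     @article = N'DetailTable',
     @property = N'processing_order',
     @value = N'2',
     @force_invalidate_snapshot = 1,
     @force_reinit_subscription = 0
On SQL 2000, one alternative often mentioned is marking the foreign keys NOT FOR REPLICATION so the merge agent's own inserts are not checked against them.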
View 6 Replies
View Related
Jun 30, 2007
Hello.
Let me describe first my replication setup:
- SQL Server 2005 SP1 (SP2 coming soon)
- Approximately 35 remote users (Salesrep laptop) using Pull Subscriptions
- Merge (Bi-Directional) (8 articles - tables only)
- Merge (Uni-Directional) (5 articles - tables only)
- Transactional (5 articles - tables only)
Users receive data based on their territory number, so they receive their own customers' data. Customers occasionally change from one territory to another, but not frequently. When that happens, so far so good: the data is redirected to the new salesrep using the model we configured (a Territory table filtered with SUSER_NAME()).
OK, here's my problem. For a while now I have seen in the Replication Monitor that some users seem to log the same conflicts again and again (merge process). I mean, checking the history for many subscribers, there is always the same number in the "Conflict" column.
As an example:
- Merge completed after processing 18 data change(s) (4 insert(s), 14 update(s), 0 delete(s), 31 conflict(s))
- Merge completed after processing 27 data change(s) (10 insert(s), 17 update(s), 0 delete(s), 31 conflict(s))
- Merge completed after processing 20 data change(s) (5 insert(s), 15 update(s), 0 delete(s), 31 conflict(s))
and so on... (Those are only three history entries for a single subscription, but there are many like that, always with the same conflict count, which varies per user.) It appears to me that the same conflicts come up over and over.
The thing is, if I reinitialize a subscription the conflicts disappear, so I know it is not a process on the server that keeps changing the data; anyway, even if it were, the changes would be applied at the subscriber because the server always wins in my setup.
Any idea what I should do with this? Any help would be greatly appreciated.
Thanks.
View 3 Replies
View Related
Jul 31, 2007
Hello!
I have a problem with merge replication. I have a central SQL Server 2005 database with a publication, and two SQL Server CE subscribers to that publication. I need to add records on both mobile devices independently, but I am using the primary key as a user ID in one table.
So when I add a user on one PDA I use the next available number for the ID column. At the same time a user is added on the other PDA with the same ID, because that PDA doesn't know a user with that ID already exists.
Then I synchronize. The first PDA synchronizes with the server, but the second tells me there is already a record with the same PK. So my question is: can this be resolved by writing a custom resolver, or do you know of other solutions? I think this is a typical problem, but I couldn't find any solution other than using e.g. the HOST_NAME() function.
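One direction worth looking at, as a sketch only (the publication and article names below are made up): let merge replication hand each subscriber its own non-overlapping identity range instead of relying on the next available number on each PDA:
EXEC sp_addmergearticle
     @publication = N'MyPublication',
     @article = N'Users',
     @source_object = N'Users',
     @identityrangemanagementoption = N'auto',
     @pub_identity_range = 10000,   -- block reserved for the publisher
     @identity_range = 1000,        -- block handed to each subscriber
     @threshold = 80                -- percent used before a fresh block is assigned at sync
With automatic range management the two PDAs can never generate the same ID, so the PK collision goes away without a custom resolver.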
Thanks for help!
MZ
View 1 Replies
View Related
Aug 14, 2007
We use autogenerated primary keys in most of our tables. Some of these keys are also foreign keys in other tables. Right now there is only one database server at a central location, but now there is a need for multiple database servers at different locations. Data from these remote sites needs to be replicated to the central server, and some data would also be distributed from the central server to selected remote sites.
If I could redesign, I would have chosen something like GUIDs for the primary keys, or a combination of something like server name and autogenerated number as a composite key. But that's not possible right now. How do I handle merge replication conflicts in this case?
I am looking for some pointers on how to handle this. If it were just a simple table with one primary key, that would be easy: I could throw away the primary key on the remote server and let the central server create a new key when the data is inserted. But in my case a single table can be related to five or more other tables through these autogenerated keys. Any help is much appreciated.
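In case it helps, the usual low-tech answer when the key columns cannot change is to give every server its own identity band, so autogenerated values can never collide and the FK chains keep working unchanged. A sketch, with purely illustrative table, constraint and range values:
-- central server
DBCC CHECKIDENT ('dbo.Orders', RESEED, 1)
ALTER TABLE dbo.Orders ADD CONSTRAINT CK_Orders_CentralRange
      CHECK NOT FOR REPLICATION (OrderID BETWEEN 1 AND 9999999)

-- remote site 1
DBCC CHECKIDENT ('dbo.Orders', RESEED, 10000000)
ALTER TABLE dbo.Orders ADD CONSTRAINT CK_Orders_Site1Range
      CHECK NOT FOR REPLICATION (OrderID BETWEEN 10000000 AND 19999999)
Merge replication can also manage ranges like this for you (identity range management set to automatic on each article), which avoids maintaining the reseeds and constraints by hand.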
View 2 Replies
View Related
Sep 20, 2007
I have SQL CE clients replicating against a SQL Server 2005 db using merge replication. The DB has a table A and a table B, which has a foreign key to table A. It is common in my application for records in table A to be deleted on the server. I'm running into issues when a table A record has been deleted, but table B records were created on the clients which point to that record. When I sync I get a conflict because the table B records cannot be applied at the server, and the table A delete cannot be applied at the client.
What I would like to happen is to have the table B records on the client be deleted by the merge process, and to create a log of the event. I've looked into creating a business logic handler to do this, but I'm not sure what type of conflict this is (UpdateDeleteConflict or otherwise), and I'm not sure that deleting the table B records is something I can do in the business logic handler.
This seems like it would be a common problem in merge replication. I'm not locked into using a custom business logic handler at all. Any suggestions are welcome.
Thanks.
View 3 Replies
View Related
Feb 13, 2015
There is an error in one of my merge publications. The error is:
The change for the row with article nickname 2336003 (test), rowguidcol {436456F0-F5AD-E411-80CF-5CF3FC1D2D76} could not be applied at the destination. Further information about the failure reason can be found in the conflict logging tables.
When I checked my tables I got the following values in the rowguid column:
publication: 436456F0-F5AD-E411-80CF-5CF3FC1D2D76
subscription: D824D120-23AD-E411-80E3-00155D0E1001
conflict table: 689C6A61-5359-4BB5-BECD-B03F5F94D79A
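To see what the conflict logging actually recorded for that row, a sketch that assumes conflict logging at the publisher (SQL Server 2005 and later) and a placeholder publication name:
SELECT conflict_type, reason_code, reason_text
FROM   dbo.MSmerge_conflicts_info
WHERE  rowguid = '436456F0-F5AD-E411-80CF-5CF3FC1D2D76'

-- sp_helpmergearticleconflicts lists the articles that have conflict rows,
-- and sp_helpmergeconflictrows returns the conflicting row versions themselves
EXEC sp_helpmergearticleconflicts @publication = N'MyPublication'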
View 0 Replies
View Related
Jul 18, 2007
Hi all,
We are using a mix of SQL 2005 and 2000 servers and our "main" database server is running SQL 2005 x64 (SP2 ver. 3042).
Our system has run perfectly for months; then, following the SP2 update, we are seeing several instances where the record counts differ for several tables between the servers.
We are using Merge Replication, with no filters and published every 2 minutes.
Any ideas?
TIA,
Michael
View 1 Replies
View Related
Aug 3, 2006
Publisher is 2005 x64, subscribers are SS2000 (SP3) and SS2005 x64. Pull agents, no filters on subscriptions. We are seeing many seemingly random conflicts between the SS2000 subscriber and the publisher, on several different tables.
One table is never edited; only inserts happen everywhere and deletes happen on the SS2000 subscriber. Deletes will sometimes generate a conflict. The reason given is "The row was deleted at 'CTS11.CTS' but could not be deleted at 'cts4a.cts'. Unable to synchronize the row because the row was updated by a different process outside of replication." CTS11 is the SS2000 subscriber, CTS4A is the publisher.
Probably an unrelated bug, but when looking at conflicts on this same table in the SS2005 conflict viewer I get the error "ID is neither a DataColumn nor a DataRelation for table summary (System.Data)" and then "Column ID does not belong to table summary (System.Data)". The ID column is the rowguid; the only unusual thing about the table is that it has a varchar(8000) field plus some other fields.
Other tables generate conflicts with the reason "The row was updated at 'CTS11.CTS' but could not be updated at 'cts4a.cts'. The merge process was unable to synchronize the row." I enabled verbose logging in the merge agent, but the log file didn't contain any further explanation.
This same topology and schema worked fine when all publishers and subscribers were SS2000.
Any insight into how to fix this would be appreciated.
View 9 Replies
View Related
Oct 12, 2007
I am using SQL 2005 build 9.0.2227
I have a custom conflict resolver, which fires on update conflicts (using row-level tracking).
I have had a couple of occasions when the resolver has failed with the following error:
"The schema of the custom Dataset object implemented in the business logic handler does not match the schema of the source Dataset object. Verify that the custom Dataset object has been correctly defined"
In both cases I found that the row for which a conflict was being handled was not a conflict at all. One was a straightforward non-conflicting update at the publisher and the other was a similar update at the subscriber.
I got around the problem by temporarily using a fixed version of the conflict resolver DLL that set the custom dataset to either the publisher dataset or the subscriber dataset, depending on where the update had occurred.
When the first error (publisher update) occurred, the resolver code was basing the custom dataset on the publisher dataset, which was presumably empty, so I changed the code to base the custom dataset on the subscriber dataset. The second error therefore occurred when the custom dataset was based on the subscriber dataset, which again was presumably empty.
Note that the tables involved in each occasion were different and neither table is filtered.
Is there a known bug in this area?
I am considering changing the resolver code to identify false conflicts as a workaround, but this would be difficult to test as I can't reproduce the problem.
aero1
View 2 Replies
View Related
Jun 9, 2006
We have SQL Server 2000 with merge replication at a Publisher and subscriber.
We have records being deleted at the Publisher and Subscriber and no conflicts are logged.
We have tried the compensate_for_errors setting and this has had no effect.
This is causing serious data corruption and has now become an URGENT issue. Our tech team is almost out of ideas.
Has anyone experienced this or have any ideas as to what to check next?
View 3 Replies
View Related
Aug 15, 2006
I have a problem with the ASP.NET cache. I found other people with similar problems, but I didn't find a real solution.
The one that bothers me most is that SqlCacheDependency doesn't work reliably. I insert an object into the cache with a SqlCacheDependency attached. After a period of time it stops working: the object is still in the cache, but a change on the DB side doesn't remove the cache entry. I am not sure if it is the ASP.NET side or the SQL side; I suspect it is the ASP.NET side.
I am using ASP.NET 2.0 + SQL 2005.
Once the command notification stops working, you have to restart IIS or clear all items in the cache, since you don't know which one has changed.
The following is the code I use to handle the cache:
string cacheKey = LinkSites.GetMappedKey(virtualPath, fileid.ToString()); // this will return a key from virtualPath
if (!String.IsNullOrEmpty(cacheKey)) frd = (FileRecordData)HttpContext.Current.Cache[cacheKey];
if (frd == null)
{
int siteid = 0;
SqlCacheDependency scd = null;
lock (_connection)
{
try
{
SqlCommand sqlcmd = new SqlCommand("select ownerid,id,uniqueid,parentid,category,name,content,dated=isnull(updated,created),created,updated,isdirectory from dbo.link_sourcestore where id=@id", Connection);
sqlcmd.CommandType = CommandType.Text;
SqlParameter sqlparam;
sqlparam = sqlcmd.Parameters.Add("@id", SqlDbType.Int);
sqlparam.Value = fileid;
scd = new SqlCacheDependency(sqlcmd);
using (SqlDataReader reader = sqlcmd.ExecuteReader())
{
if (!reader.HasRows) return null;
reader.Read();
siteid = LinkRoutine.Convert(reader["ownerid"], 0);
frd = GetRecordData(reader);
}
}
catch (Exception e)
{
ErrorHandler.Report("GetCachedFileRecord 2 [" + realVirtualPath + "," + virtualPath + "]", e);
return null;
}
}
if (scd != null)
{
frd.CacheKey = cacheKey;
frd.CacheDependency = scd;
HttpRuntime.Cache.Insert(cacheKey, frd, scd, Cache.NoAbsoluteExpiration, new TimeSpan(24, 0, 0), CacheItemPriority.NotRemovable, new CacheItemRemovedCallback(LinkCacheHandler.RemovedCallback));
}
}
It just reads the record and adds it into the cache; when the cache item is removed, the static method RemovedCallback in LinkCacheHandler is called (LinkCacheHandler is posted below). After I restart IIS it works for a while, 5, 10 or more minutes, but after that, even with a breakpoint set in RemovedCallback, I don't hit it when I change the record. (When I call my clear-cache method, which removes all records from the cache, it does hit the breakpoint, so the callback itself is fine.)
public class LinkCacheHandler
{
public static void RemovedCallback(string k, object v, CacheItemRemovedReason r)
{
if (!k.Contains("system/cache.ascx"))
{
LinkSites._cacheLog += "RemovedCallback[" + DateTime.Now.ToString() + "]<br/> " + k + ((v is FileRecordData)?(" : " + ((FileRecordData)v).CacheKey) : "") + " " + r.ToString() + "<br/>";
LinkSites.NotifyCacheObject(k);
}
}
}
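One thing worth ruling out on the SQL side (this is an assumption on my part, not something shown above): SqlCacheDependency built on a SqlCommand relies on query notifications, which need Service Broker enabled in the database; if the database was ever restored or attached, the broker can end up disabled and notifications quietly stop arriving. A quick check, with the database name as a placeholder:
SELECT name, is_broker_enabled
FROM   sys.databases
WHERE  name = N'MyAppDb'

-- if is_broker_enabled = 0, re-enable it (this briefly kicks out other connections)
ALTER DATABASE MyAppDb SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE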
View 1 Replies
View Related
Mar 15, 2008
hi,
So first of all, I really hope this is the right place to ask this, as the term data mining sounds like what I am doing.
I am using C# with SQL Server Express edition.
I have a client, item and order database. Each time a new order is sold, the user fills in an order entry form with information about the client (name, address...) and the sold item (name, price...).
The thing is that I already have item and client data. If the user enters a new client or item, I add those entries to the existing database. But if they already exist in my data (a previous client, for instance), then I want to point to its ID instead of creating a duplicate entry.
Now, would that be data mining, and what can I do to accomplish this? Some software autocompletes the entry process, meaning it has already detected the existing data; is there a function like that in SQL Express? Thank you.
View 3 Replies
View Related
Jul 20, 2005
Hi, I am a newbie to SQL. I have a historical list of digital points listed by time, i.e. 3 fields: PointName; Date/Time; State. I need to return a list of when a specific point changes state, for example a list of every time Point A transitions to State 1. Any help is appreciated. Posted via http://dbforums.com
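A starting point, as a sketch (the table and column names below are guesses from the description): for each reading of the point, look up the immediately preceding reading of the same point and keep only the rows where the state changed to the value of interest.
SELECT cur.PointName, cur.ReadingTime, cur.State
FROM   dbo.PointHistory AS cur
WHERE  cur.PointName = 'Point A'
  AND  cur.State = 1
  AND  EXISTS (SELECT 1
               FROM   dbo.PointHistory AS prev
               WHERE  prev.PointName = cur.PointName
                 AND  prev.State <> cur.State
                 AND  prev.ReadingTime = (SELECT MAX(p.ReadingTime)
                                          FROM   dbo.PointHistory AS p
                                          WHERE  p.PointName = cur.PointName
                                            AND  p.ReadingTime < cur.ReadingTime))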
View 1 Replies
View Related
May 10, 2007
Hi all,
With C# or VC++ we can use ADO.NET, which helps the system keep working smoothly when a failover occurs. I would like to handle failover in a T-SQL environment, and it seems hard for me when switching with ":connect <servername>" code.
Do you have any idea how to handle it with T-SQL? I need to make a demo of it. Please help!
View 1 Replies
View Related
Aug 9, 2006
Hello all. I have a table with two columns, CODE and DESCRIPTION. Can anyone suggest how I can go about deleting the entire record where two or more codes are the same?
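If every row whose CODE appears more than once should go, a grouped subquery does it; a sketch, with the table name as a placeholder:
DELETE FROM dbo.MyTable
WHERE CODE IN (SELECT CODE
               FROM   dbo.MyTable
               GROUP  BY CODE
               HAVING COUNT(*) > 1)
If one copy of each duplicated CODE should survive instead, you need a way to tell the copies apart, such as an extra key column or ROW_NUMBER() on SQL Server 2005 and later.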
Thanks.
View 4 Replies
View Related
Jul 17, 2007
Hi,
I'm trying to export a SQL table as a fixed-length text file using a format file, but I get the following error message:
Error = [Microsoft][ODBC SQL Server Driver][SQL Server]
Warning: Server data (61 bytes) exceeds host-file field length (60 bytes) for field (4).
Use prefix length, termination string, or a larger host-file field size.
Truncation cannot occur for BCP output file
All fields in the SQL input table have the char data type, and I use a format file like this:
7.0
29
1 SQLCHAR 0 10 "" 1 SEQ
2 SQLCHAR 0 1 "" 2 NPARSED
3 SQLCHAR 0 115 "" 3 COMPANY
4 SQLCHAR 0 60 "" 4 ADDR1
....
27 SQLCHAR 0 1 "" 27 LACS
28 SQLCHAR 0 2 "" 28 DPV
29 SQLCHAR 0 2 "\r\n" 29 ZIP4CODE
I've been researching this error but I couldn't find a clear answer.
The strange thing is that all the fields are char(), not varchar().
And I checked the maximum length in the 4th column (ADDR1) and it was 60, not 61.
However I'm still getting the error.
The output file is exported, but some of the records in it come out shorter than expected.
Is this some kind of bcp bug?
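One check worth making before concluding that, a sketch using the names from the post: compare the declared column widths against the format file, and compare LEN() (which ignores trailing spaces) with DATALENGTH() (which does not):
-- declared width of every column, to compare against the format file
SELECT name, TYPE_NAME(xtype) AS datatype, length
FROM   syscolumns
WHERE  id = OBJECT_ID('Input_table')
ORDER  BY colid

-- longest ADDR1 value as counted with and without trailing spaces
SELECT MAX(LEN(ADDR1)) AS max_len, MAX(DATALENGTH(ADDR1)) AS max_bytes
FROM   Input_table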
I used SQL Server 2000 Standard w/ SP4
And the following is the command that I used:
declare @cmd varchar(2000)
SET @cmd = 'bcp "Input_table" out "D:AddressUpdateTmpxFixADDR.dat" -fD:AddressUpdateTmpxFixADDR.fmt -Usa -Psapass -SMyMachine'
print(@cmd)
EXEC master..xp_cmdshell @cmd
Please let me know if you have solved a similar problem.
Thanks,
- Hyung -
View 3 Replies
View Related
Mar 25, 2008
Hi,
I am using the following code to fill my DataSet. When multiple users connect to the web site it gives me the following error:
Error Message: Transaction (Process ID 98) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I am using a stored procedure to select records from the database.
DataSet ds = new DataSet();
SqlConnection mc = OpenSqlConnection();
try
{
    SqlCommand cmd = new SqlCommand(strCmd, mc);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.CommandTimeout = 0;
    if (parameters != null)
    {
        foreach (KeyValuePair<string, object> p in parameters)
        {
            cmd.Parameters.AddWithValue(p.Key, p.Value);
        }
    }
    SqlDataAdapter da = new SqlDataAdapter(cmd);
    da.Fill(ds);
}
catch (Exception ex)
{
}
finally
{
    mc.Close();
}
return ds;
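(Also note the empty catch block swallows the exception, so callers never see the real error.) For what it's worth, two server-side steps are commonly suggested for this reader/writer deadlock pattern; this is a sketch, with the database name as a placeholder, not a guaranteed fix:
-- 1. write deadlock graphs to the SQL Server error log so the colliding
--    statements and resources can be identified
DBCC TRACEON (1222, -1)

-- 2. let readers use row versioning so SELECTs stop taking the shared locks
--    that deadlock with writers
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE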
Appreciate your help,
prashant
View 1 Replies
View Related
Jan 6, 2004
I wrote an ActiveX service to delete records from several tables, and those tables have more than 100,000 records. When I tested it by deleting 1,000 records it was OK, but once the volume increased to 100,000 it gave me an error message saying the operation timed out.
How can I overcome this problem? Please help!
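A common way around the timeout, sketched here with a placeholder table name, filter and batch size, is to delete in small batches inside a loop so that no single statement runs long enough to hit the timeout:
SET ROWCOUNT 5000                      -- SQL Server 2000-style batch limit
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.BigTable
    WHERE  SomeDateColumn < '20030101'

    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0
The other half is raising the CommandTimeout on the ADO side, since its default is only 30 seconds.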
View 8 Replies
View Related
Apr 30, 2008
We get this error when we add IP addresses to the Windows system while SQL 2005 database activity is ongoing:
Database error: A transport-level error has occurred when receiving results
from the server. (provider: TCP Provider, error: 0 - The semaphore timeout
period has expired.)
The .NET application can run for weeks without error, but after adding a new IP address the application gets 5-16 'transport-level errors' before correcting itself.
The error occurs on a Windows XP computer in our case. SQL Server, running on Windows Server 2003, doesn't seem to pick up on the newly added IP address.
View 12 Replies
View Related
Jan 19, 2008
Hi, I work in a hosting company and one of our customers has the following error listed in the event viewer. They have asked us to look into the problem, as the web server is showing the error and they suspect it is a connection problem to the database. From a Windows OS point of view I cannot find anything that could be causing this. Could someone confirm that this looks like an app/coding issue rather than an OS issue?
System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
DEBUG INFO
Unable to connect to SQL Server session database. Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
BASE EXCEPTION TOSTRING
System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
at System.Data.SqlClient.SqlDataReader.get_MetaData()
at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
at System.Data.SqlClient.SqlCommand.ExecuteReader()
at System.Web.SessionState.SqlSessionStateStore.DoGet(HttpContext context, String id, Boolean getExclusive, Boolean& locked, TimeSpan& lockAge, Object& lockId, SessionStateActions& actionFlags)
STACKTRACE
(Same frames as above, from SqlConnection.OnError down to SqlSessionStateStore.DoGet.)
View 1 Replies
View Related
Mar 28, 2007
What are conflicts from a MERGE replication point of view?
------------------------
I think, therefore I am - Rene Descartes
View 1 Replies
View Related
Jul 10, 2006
I have two tables that are getting conflicts between the subscriber and the publisher, even though I am pretty sure we only update these tables at the subscriber.
I have column level tracking turned on.
Both tables have "Nonoverlapping, single subscription" (3) set for partition options. I wonder whether this does any maintenance on the tables?
I have put some triggers in place to audit what makes changes to the data, but I won't know the results until tomorrow, so if there is something anyone knows about that might help, please let me know.
Thanks
View 6 Replies
View Related
Oct 1, 2007
I have a database on SQL Server 2005 and am using Access 2003 for my front end. Today I started having problems with ONE of the tables. I can't make changes: I get the "another user has made changes" message and do not have the Save Changes option, only the Copy to Clipboard or Drop Changes options. I can make changes directly to the table using Management Studio.
I don't know if it's relevant, but it happened as I was trying to add an "After Update" event to set a field equal to CurrentUser().
Do you have any idea what may be happening?
Thanks
View 7 Replies
View Related
Jun 14, 2006
Can someone who has had direct experience with this tell me exactly what happens when a conflict (updating the same record on two nodes at the same time) occurs in a P2P replication topology? Does the Distribution Agent throw an error? More importantly, does the replication set continue to replicate the articles after an error occurs?
Thanks,
Derek
View 4 Replies
View Related
Feb 19, 2007
Can someone answer, or point me to an article on, how to handle update conflicts when replicating?
Say, for example, you pull down data into your SQL Everywhere database and start making changes. Before you replicate, changes are made in the master database that conflict with the changes you've made. For example, when you pulled down your data there was a product with 4 units left, so you place an order for 4 units of that product while offline. However, while you were offline someone else has taken all 4 of those units, and when you replicate you need to throw an error. How do you handle this situation?
View 1 Replies
View Related
Jan 31, 2008
Hi friends,
I created a report with 9 subreports, with multiple-column drilldown, based on the client's requirements, and it displays the correct data. In the database the data is available between 11/01/2007 and 11/30/2007. The input parameters are begin date, end date, region (default ALL), department (default ALL) and site (default ALL).
My problem is that when I select the default dates (the minimum and maximum date range) together with ALL regions, ALL departments and ALL sites, I do not get the output; it shows an error like:
"An error occurred during local report processing; an internal error occurred on the report server, see the error log for more details"
But when I select dates such as 11/01/2007 to 11/02/2007 (not a large number of records, just two days) it shows the correct output.
When I apply the default range, 11/01/2007 to 11/30/2007, the report shows the error mentioned above.
Is the problem the 9 subreports, or something else?
Please help me solve this; in the next two days I have to deploy the report to the client, and at that time it must show data for the default date range.
Thanks in advance.
JACKS V
View 1 Replies
View Related
Mar 6, 2004
Hi, I need to set up an alert when a replication conflict occurs, rather than checking for conflicts manually. How can I accomplish this? I couldn't find a particular error message to trap in an alert.
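One way to do it, as a sketch (the error number, conflict table and names are placeholders; the real conflict table name for each article is in the conflict_table column of sysmergearticles): define a user error that is written to the application log, have a scheduled job raise it when conflict rows appear, and hang a SQL Agent alert and notification off that error number.
-- one-time setup: a logged user-defined message plus an alert on it
EXEC sp_addmessage @msgnum = 60001, @severity = 16,
     @msgtext = N'Merge replication conflicts detected.', @with_log = 'true'
EXEC msdb.dbo.sp_add_alert @name = N'Merge conflict detected', @message_id = 60001

-- job step scheduled every few minutes
IF EXISTS (SELECT 1 FROM dbo.conflict_MyPublication_MyTable)
    RAISERROR (60001, 16, 1)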
Regards!!
View 1 Replies
View Related
Sep 11, 2007
I've been experiencing conflicts in my replication system that I can't seem to get my head around. The following is the scenario:
3 SQL Servers, all running SQL Server 2005. Server B is the publisher and Server A and Server C (64-bit) are subscribers. The Queue Reader Agent runs on the publisher. I set up transactional replication with updatable subscriptions and the default conflict resolution policy of 'Publisher wins'.
There are two kinds of processes: 1. nightly batch updates and 2. daytime updates by real clients. The nightly batch update runs on the publisher, which is B. Batch updates are massive, and running them on the publisher makes sense; it works like a charm. Online updates are made on subscriber C. This subscriber is set to queued updating mode, and every day I see a significant number of transactions detected as conflicts, which the publisher wins. As a result the changes made on Server C are getting lost. I have verified that no user/client is logged into Server B to do any updates. Users complain that their updates are lost. This is the most puzzling and frustrating bit: I don't see how a conflict can happen if nobody is updating data on the publisher during the day. UPDATEs on Server C are rolled back on conflict detection because the "Publisher wins", and INSERTs on Server C are deleted because they don't exist on the publisher. How can an insert done on the subscriber be marked as a conflict? There is no row on the publisher to compare the unique GUID with, so how can it be a conflict?
And the Queue Reader Agent crashes every 3-4 days, with no useful information except that it creates a dump file, which we have no tools to read.
Has anyone seen this behavior ? Or is there a known bug in the QueueReader Agent?
My users are losing faith in the replication system and so am I.
Thanks for your time,
-chiraj.
View 3 Replies
View Related
Dec 3, 2006
Hi everyone, I'm creating an ASP.NET 2.0 web application using SQL Server 2000 as the database. My problem revolves around multiuser access during long-running processes. Some of the pages in the application run long processes against large tables in the database, let's say taking 2 minutes to complete. My question is how to use ADO.NET properly in the rest of the application so I can display a message to users who try to access data associated with a table while one of its long-running processes is in progress. For instance, I would like to notify the user that "Table X is currently locked, please try again in a few minutes". Do I catch SqlException and examine the .Number property? Any insight is appreciated.
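Catching SqlException and checking .Number is the usual route; here is a sketch of the server side of it (the timeout value and table name are only illustrative): give the page's connection a short lock timeout so queries fail fast instead of sitting behind the long job, and treat error 1222 as the "table is busy" signal in the catch block.
SET LOCK_TIMEOUT 5000            -- milliseconds, applies per connection

-- any query that has to wait on the locked table longer than 5 seconds now
-- fails with error 1222 ("Lock request time out period exceeded"), which
-- surfaces in ADO.NET as SqlException.Number == 1222
SELECT COUNT(*) FROM dbo.LargeTable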
Thanks.
View 2 Replies
View Related
Dec 30, 1998
I have heard that there are potential problems if you install a service pack for NT on your server, in that it conflicts with SQL 6.5. Does anyone know of any such problems, either with running 6.5 under these conditions or with installing 6.5 if the service pack is already on the server?
Thanks
View 2 Replies
View Related
Apr 2, 2008
Hi:
I have a maintenance plan on DBABC that backs up the log to a .trn file every 90 minutes (daily).
In order to keep the log file small, I also set up a T-SQL job to run at 4:15 am that does backup log ABC with truncate_only and then dbcc shrinkdatabase (DBABC, 10).
It looks like "backup log ABC with truncate_only" conflicts with the every-90-minutes transaction log backup.
Question: can I keep the 90-minute transaction log backups and still shrink the log file? The log file is growing very fast.
Or do I have to use differential backups instead of transaction log backups?
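For what it's worth, a common alternative, sketched here with a placeholder backup path and logical log file name (sp_helpfile shows the real one): keep the 90-minute log backups, drop the TRUNCATE_ONLY step (it breaks the log backup chain until the next full or differential backup), and if the file itself really must be shrunk, shrink just the log file right after a normal log backup:
BACKUP LOG DBABC TO DISK = N'D:\Backup\DBABC_log.trn'
DBCC SHRINKFILE (DBABC_Log, 100)    -- target size in MB
If the log keeps regrowing to the same size, that size is roughly what the workload needs between backups, and repeated shrinking only adds regrowth overhead.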
thanks
David
View 4 Replies
View Related