Loooonnngggg Delay In Loading Due To Pre-validation. Can It Be Turned OFF?
Mar 14, 2006
During dev, I _know_ certain things are invalid. I don't need the validation; it just wastes a lot of time...

Hi Folks,
I've recently implemented a CLR stored procedure on a couple of databases, but I've noticed that the first time I use the stored proc in a given database I get a delay of around 10-15 seconds. Subsequent calls in new connection sessions execute immediately. I'm assuming the delay is due to the loading of the CLR and the assemblies needed. The assembly being loaded by the CLR is not signed. Has anyone else seen this, or does anyone know a way round it? It is impacting the user experience of our application.
Many thanks for any help you can provide in advance
Simon
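For the first-call delay described above, one workaround (an assumption on my part, not something confirmed in the thread) is to warm the assembly up at instance start so no user pays the load cost. A minimal sketch, with hypothetical procedure and database names:

USE master;
GO
-- Hypothetical wrapper: startup procedures must live in master.
CREATE PROCEDURE dbo.WarmUpClrProc
AS
    EXEC MyDatabase.dbo.MyClrProc;  -- first call loads the CLR assembly
GO
-- Run the wrapper automatically every time SQL Server starts.
EXEC sp_procoption N'dbo.WarmUpClrProc', 'startup', 'on';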
I have a main package calling another package through the Execute Package task.
The main package is passing the Job Instance id as a parameter to the other package.
When I execute the Execute Package task, the called package shows no execution progress. However, when I set the DelayValidation property to True, the package executed instantly and the desired result was obtained.
I am not sure why the DelayValidation property made the difference, as my package has no scenario, such as a temp table or other temporary objects, that would normally require delayed validation.
I created a package that loads an Excel file. I developed the package using a test Excel file, C:\Test\Test.xls, and use a config file to assign the actual folder (the file names will be supplied by a Foreach Loop container) when it runs.
However, the package cannot run with a config file that specifies a different source folder; it still asks for Test.xls. It can't be right that it requires resetting the folder and file every time it's used with a new folder.
Information: 0x40016041 at Package1: The package is attempting to configure from the XML file "C:\working\Final3.dtsConfig".
SSIS package "Package1.dtsx" starting.
Information: 0x4004300A at Load Raw1, DTS.Pipeline: Validation phase is beginning.
Error: 0xC0202009 at Package1, Connection manager "Excel Connection Manager": SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
An OLE DB record is available. Source: "Microsoft JET Database Engine" Hresult: 0x80004005 Description: "The Microsoft Jet database engine could not find the object '\SourceFolder\Test.xls'. Make sure the object exists and that you spell its name and the path name correctly.".
Error: 0xC020801C at Load Raw1, Excel Source [1188]: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "Excel Connection Manager" failed with error code 0xC0202009. There may be error messages posted before this with more information on why the AcquireConnection method call failed.
Error: 0xC0047017 at Load Raw1, DTS.Pipeline: component "Excel Source" (1188) failed validation and returned error code 0xC020801C.
Error: 0xC004700C at Load Raw1, DTS.Pipeline: One or more component failed validation.
Error: 0xC0024107 at Load Raw1: There were errors during task validation.
SSIS package "Package1.dtsx" finished: Failure.
I have a couple of databases that see a lot of updates (web user session state) and I don't care at all if I lose any or all of the data. They don't get backed up, and fresh copies can be created in seconds. Is there a way I can turn off the transaction logging to cut down on overhead?
Thanks
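Logging can't be switched off entirely, but for throwaway data like this the SIMPLE recovery model keeps the log truncated at each checkpoint, which removes most of the maintenance overhead. A sketch with a hypothetical database name:

-- Log records are still written, but the log self-truncates at checkpoints.
ALTER DATABASE SessionStateDb SET RECOVERY SIMPLE;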
I'm using SQL Server 6.5 and my application does a weekly batch load of data
into a database. Since it's a re-runnable batch process, there is no need for
logging; the log files just fill up, cause problems and slow the process down.
Is there a way to do an Insert into a database and have it NOT write to the
transaction log?
Alternately, if I just want to dump the transaction log, is there a stored
proc for doing that?
Thanks for your help.
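On 6.5 there is no way to make a plain INSERT skip the log, but two era-appropriate options come close. A hedged sketch with a hypothetical database name:

-- Allow minimally logged bulk operations (SELECT INTO, bcp):
EXEC sp_dboption 'MyDb', 'select into/bulkcopy', 'true'
-- Or let the log self-truncate at every checkpoint:
EXEC sp_dboption 'MyDb', 'trunc. log on chkpt.', 'true'
-- To discard the log on demand without backing it up:
DUMP TRANSACTION MyDb WITH NO_LOG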
This morning one of our client's database appeared as Suspect. I checked the event log and saw the following error:
Could not redo log record (413:835:2), for transaction ID (0:156861), on page (1:1246), database 'CLP' (database ID 26). Page: LSN = (412:2642:4), type = 1. Log: OpCode = 4, context 2, PrevPageLSN: (412:12236:2). Restore from a backup of the database, or repair the database.
I then tried to run a DBCC CHECKDB on the database and got the following output:
Msg 7987, Level 16, State 1, Line 1
System table pre-checks: Object ID 13 has chain linkage mismatch. (1:339)->next = (1:771), but (1:771)->prev = (1:341). Check statement terminated due to unrepairable error.
DBCC results for 'CLP'.
CHECKDB found 0 allocation errors and 0 consistency errors in database 'CLP'.
After a bit of research I began to get very sad, for a number of reasons. First I can't find much information on this error and what there is implies that I'm totally up the creek. Secondly and worst of all, backups have not been working so we don't have a viable backup of the database.
I did go as far as to run repair_allow_data_loss and received the same Msg 7987 error as above.
Is there any hope? Can this be repaired? Any help is greatly appreciated.
-Will
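Without a backup the options are grim, but one avenue worth trying before giving up (my suggestion; whether it helps depends on the damage) is emergency-mode repair, available in SQL Server 2005 and later:

ALTER DATABASE CLP SET EMERGENCY;
ALTER DATABASE CLP SET SINGLE_USER;
-- Last resort: rebuilds what it can and discards what it cannot.
DBCC CHECKDB ('CLP', REPAIR_ALLOW_DATA_LOSS);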
Should the AutoClose option be turned on for MSDE databases when they are used as the data source for a website?
Thanks
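For a website the usual advice is to leave it off: AutoClose shuts the database down whenever the last connection closes, so every burst of web traffic pays the open cost again. A sketch with a hypothetical database name:

ALTER DATABASE MyWebDb SET AUTO_CLOSE OFF;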
I got into confusion while working on a DB growth report for 2012 databases which have compression on. I am analyzing DB growth based on the msdb.dbo.backupset table, which stores the backup information.
But here it gets tricky: in previous versions we used the "backup_size" column to get the actual backup size and estimated DB growth from all the previous backup file info. But now, since compression is on in 2012, does the "backup_size" column give a compressed file size (if I am right)? So how do you know the actual backup size to estimate DB growth over a period of time?
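If memory serves, backup_size continued to record the uncompressed size even after backup compression arrived; the bytes actually written went into a separate compressed_backup_size column (added in SQL Server 2008). A sketch for growth trending:

SELECT database_name,
       backup_finish_date,
       backup_size,             -- logical (uncompressed) size: use this for growth
       compressed_backup_size   -- bytes actually stored on disk
FROM msdb.dbo.backupset
WHERE type = 'D'                -- full database backups only
ORDER BY database_name, backup_finish_date;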
Today we received an issue on an application database: internal free space on the DB is 0%. It was designed as below:
name   fileid   filename   filegroup   size   maxsize   growth   usage
XX   1   I:\Data\MSSQL.1\MSSQL\Data\New XX.mdf   PRIMARY   68140032 KB   Unlimited   0 KB   data only
XX_log   2   I:\Data\MSSQL.1\MSSQL\Data\New XX_log.LDF   NULL   1050112 KB   2147483648 KB   102400 KB   log only
XX_2   3   I:\Data\MSSQL.1\MSSQL\Data\New XX_2.ndf   PRIMARY   15458304 KB   Unlimited   0 KB   data only
XX_3   4   I:\Data\MSSQL.1\MSSQL\Data\New XX_3.ndf   PRIMARY   13186048 KB   Unlimited   0 KB   data only
XX_4   5   I:\Data\MSSQL.1\MSSQL\Data\New XX_4.ndf   PRIMARY   19570688 KB   Unlimited   204800 KB   data only
XX_5   6   I:\Data\MSSQL.1\MSSQL\Data\New XX_5.ndf   PRIMARY   19591168 KB   Unlimited   204800 KB   data only
Two of the secondary data files had autogrowth enabled (unrestricted, 200 MB increments), and three of the data files, including the primary, had autogrowth turned OFF. Application users are complaining that there is no internal free space on the DB.
What we fail to understand is this: when autogrowth was already turned OFF on three of the data files (the primary and two secondaries), why was the application still trying to grow the .mdf and those .ndf files? And with autogrowth turned ON on the other two secondary files, why was the DB not able to expand into those files instead?
What more data do I need to gather to submit a proper analysis of this?
I want to ship 500,000 aged transactions each night to an archive table and delete them from their source table in one or more logical units of work (LUW). Each row is approx 60 bytes, and there is only one non-clustered index on the source table presently.
I'm trying to weigh the pros and cons of three alternatives. One of them would basically insert the non-aged rows into tempdb, ship the aged records, truncate the table and then insert the tempdb records back into their source, all in the same LUW.
For this alternative, I'd at least like to turn off logging when the records get inserted into tempdb, as I don't see any value in logging that part of the activity. Is this possible?
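Per-statement logging can't be disabled, but the tempdb leg is already cheap: tempdb always runs in the SIMPLE recovery model, and SELECT INTO is minimally logged. A hedged sketch of this alternative, with hypothetical table and column names:

DECLARE @Cutoff datetime;
SET @Cutoff = DATEADD(day, -90, GETDATE());

BEGIN TRAN;
-- Park the non-aged rows in tempdb (minimally logged SELECT INTO).
SELECT * INTO #Keep FROM dbo.SourceTable WHERE TranDate >= @Cutoff;
-- Ship the aged rows to the archive.
INSERT INTO dbo.ArchiveTable SELECT * FROM dbo.SourceTable WHERE TranDate < @Cutoff;
-- TRUNCATE is fast, minimally logged, and rolls back cleanly inside a transaction.
TRUNCATE TABLE dbo.SourceTable;
-- Put the keepers back.
INSERT INTO dbo.SourceTable SELECT * FROM #Keep;
COMMIT;
DROP TABLE #Keep;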
Env: SQL Server 2000
It actually waited for 50 seconds instead of 5. Tried WAITFOR DELAY '000:00:005' and WAITFOR DELAY '000:00:5' respectively, and got the same behavior. Bug or ?
TIA
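Whatever the parser is doing with the three-digit hour field, sticking to the documented two-digit hh:mm:ss form sidesteps the misparse:

WAITFOR DELAY '00:00:05'   -- five seconds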
System: Dell PowerEdge server with 3 GB RAM, 2.4 GHz Celeron
Software: Microsoft SQL Server 2000 Enterprise running on Windows 2003 Server
Software: Microsoft SQL Server 2000 Enterprise running on Windows 2000 Server
If you run the code below, you'll notice something odd occurring. The MilliSecond value does not change after a 1 millisecond delay. Is this a bug or am I doing something wrong?
Any assistance will be greatly appreciated
/*
Programmer : Des L. Davis
Date : July 4, 2004
Purpose : Testing Delayed Reaction
*/
CREATE PROCEDURE [sp_TestDelay] AS
DECLARE @DELAYPERIOD NCHAR(12)
SET @DELAYPERIOD = '00:00:00:001'
DECLARE @Retries INT
SET @Retries = 0
DECLARE @MAXTRIES INT
SET @MAXTRIES = 1000 -- Maximum number of tries before timing out
DECLARE @Modified DATETIME
WHILE ( ( @Retries <= @MAXTRIES ) ) BEGIN
SET @Modified = GetDate() -- Set Modification Date
PRINT @Retries
PRINT @Modified
PRINT 'Year = ' + CONVERT( NCHAR(4), DATEPART( YEAR, @Modified ) )
PRINT 'DayOfYear = ' + CONVERT( NCHAR(3), DATEPART( DAYOFYEAR, @Modified ) )
PRINT 'Hour = ' + CONVERT( NCHAR(2), DATEPART( HOUR, @Modified ) )
PRINT 'Minute = ' + CONVERT( NCHAR(2), DATEPART( MINUTE, @Modified ) )
PRINT 'Second = ' + CONVERT( NCHAR(2), DATEPART( SECOND, @Modified ) )
PRINT 'MilliSecond = ' + CONVERT( NCHAR(3), DATEPART( MILLISECOND, @Modified ) )
SET @Retries = @Retries + 1 -- Increment loop counter and retry
WAITFOR DELAY @DELAYPERIOD
END
GO
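A likely explanation, offered as an observation rather than a verdict: the datetime type only stores time to roughly 3.33 ms precision (values are rounded to .000, .003 or .007), so consecutive GETDATE() calls separated by 1 ms frequently return the identical value:

SELECT CAST('2004-07-04 00:00:00.001' AS datetime);  -- rounds to .000
SELECT CAST('2004-07-04 00:00:00.002' AS datetime);  -- rounds to .003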
I am parsing a file where along the flow I use a conditional split. One path of the split is the primary table (with IDENTITY) values. The rest of the paths have a FOREIGN KEY to the primary table.
It seems that SSIS is trying to insert the rows at the same time (which makes sense) but this is causing a problem with the secondary tables and their FK constraint since the primary table is not yet written.
Is there a way to delay the secondary tables until the primary table is done?
(I guess one way is to run through the file twice... once for the primary table and another for the rest but that seems wasteful to me...)
Thanks.
Hi,
I am using SQL 2005 SP1 and merge replication on a database. One of the tables is used for an audit trail and has a dynamic filter applied so that it doesn't replicate every audit trail record to every subscriber.
Our SPs tend to insert records into the audit trail table when someone inserts a new product (for example). The problem is that just recently the insert of new products has been taking >2 seconds, which is slow compared to how it was 2 months ago.
Using Profiler I have found that it is the insert into the audit trail table that is taking all the time, and it is taking a long time because of something replication is doing. From Profiler I have found that the following statement is the culprit. This is something replication is doing, but why it takes so long I don't know:
select count(*) from [dbo].[MSmerge_repl_view_000CC979122E4C88AF27FE08CDCC84EB_B5F96F71937D4D9A949DEECFE540D0C4] [AUDIT_TRAIL_DETAIL] with (rowlock)
where [RowGUID] in
(select [AUDIT_TRAIL_DETAIL].[RowGUID] from inserted [AUDIT_TRAIL_HEADER],
[dbo].[MSmerge_repl_view_000CC979122E4C88AF27FE08CDCC84EB_B5F96F71937D4D9A949DEECFE540D0C4] [AUDIT_TRAIL_DETAIL] with (rowlock)
where (AUDIT_TRAIL_HEADER.ID = AUDIT_TRAIL_DETAIL.FKAuditTrailHeaderID))
The AUDIT_TRAIL_DETAIL table currently has 1.1 million row in it.
Can anyone give me any clues as to what I should do help improve the performance once again? Should I stop filtering on this table?
Thanks for your help
Graham
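One mitigation worth testing for the query above (my suggestion, not something from the thread): make sure the join the merge trigger performs is covered by an index on the detail table, so the count doesn't scan 1.1 million rows. A sketch, assuming the replication view resolves to the audit detail table:

CREATE NONCLUSTERED INDEX IX_AuditDetail_FKHeader
    ON dbo.AUDIT_TRAIL_DETAIL (FKAuditTrailHeaderID)
    INCLUDE (RowGUID);   -- covers the RowGUID lookup in the subquery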
I'm working with Sql 2005 developer edition
It works well, but sometimes I get a long delay in connecting and reading data. Is there any way to solve the problem?
For more information: when it is working well, I can connect to the database and get all the information I need in 0.1 sec. When it goes slow, the same action may take 20 sec.
Hi,
I created a stored procedure that runs as a service in SQL Server, as long as the server is up.
It queries a table, does some work on the rows one by one, and after each row is done, deletes it from the table.
If there is no data, it goes to 'sleep': WAITFOR DELAY '00:00:03'.
Does anyone know if that WAITFOR command releases the CPU, or whether it just counts and uses the CPU?
Eyal.
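WAITFOR DELAY suspends the session rather than spinning, so it should not burn CPU while sleeping. One way to see this for yourself (a sketch; the DMV requires SQL Server 2005+) is to watch the wait from a second connection while the procedure is sleeping:

SELECT session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE wait_type = 'WAITFOR';   -- the sleeping session shows up here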
I've been following the newsgroups, and the consensus had seemed to be
that 7.0 would be released around November. However, I spoke to a Microsoft
partner last week who told me that the release date would be sometime in
the second quarter of 1999. Does anyone know whether this is true or untrue?
http://www.castellcomputers.com/?p=44
This bit of SQL script can be used to insert a pause of one hundredth of a second in a trigger, stored procedure or SQL script. This can be useful for when MSSQL's built-in WAITFOR DELAY function is just too long (its minimum increment is a full second!)
declare @later datetime
declare @now datetime
set @later = current_timestamp + '00:00:00.01' -- target: now plus 10 ms
set @now = current_timestamp
while @now < @later -- busy-wait: spins on the CPU until the target passes
begin
set @now = current_timestamp
end
I'm sure it could be further tweaked to become a function... And heck, maybe one of the gurus will tell me that there's something I don't know about WAITFOR or whatnot. Feedback appreciated!
--
Doug Castell
GoldMine Guru
www.castellcomputers.com
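As it happens, WAITFOR DELAY does accept fractional seconds (the full-second minimum claimed above doesn't match my experience, though the effective floor is bounded by the scheduler's timer resolution), which avoids pegging a CPU in the busy-wait:

WAITFOR DELAY '00:00:00.010'   -- ten milliseconds, without spinning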
Hi,
when I call a sub-package from the parent package with ExecuteOutOfProcess = True, I get about a 5-10 sec delay before the sub-package starts running (this is a big delay for me, as the sub-package is in a Foreach loop).
The sub-package itself is very simple and contains just one Script Task with a small script in it.
Even if this delay is for validation, I still can't understand why it is taking 5-10 sec.
cheers
Hello
I have researched this extensively and have seen a few similar posts on this site, and on the rest of the Internet, but have found nothing which solves my problem. This is absolutely crippling my productivity at work and I would greatly appreciate any advice anyone can suggest.
I have several packages which I need to run at quite regular intervals, and initially they worked absolutely perfectly: they ran quickly and efficiently. As time has passed, though, seemingly at random, the pre-execute phase of some of these has started to take unreasonable amounts of time. There have been times when I have left one running overnight and found the pre-execute phase still at 0% the next morning. This is for queries which shouldn't take more than about 15 minutes to run.
However, seemingly at random, every so often one of the packages works absolutely perfectly. For example, this afternoon one of these packages completed in 7 minutes, whereas this morning I had left it pre-executing for 2.5 hours until I got tired of it and hit the stop button.
What affects the pre-execute phase and what sort of thing would cause this kind of behaviour? I work for a pretty large charity and I have had most of the IT team look into this at some point, and none of them can see any server activity which would explain it.
As an example of a package I am running which displays this behaviour, all that happens is an SQL statement runs, writes the results to a csv file and then emails me to say that it's finished. This is the SQL it uses:
select
REConstituentID as ID,
convert(varchar(10),DeceasedTickedDate,103) as dDate,
coalesce(Title1,'') as Title,
coalesce(FirstName,'') as FirstName,
Surname,
Gender,
AddressLine1,
AddressLine2,
AddressLine3,
AddressLine4,
City,
County,
PostCode,
coalesce(HomePhone,'') as Phone,
coalesce(convert(varchar(10),DeceasedDate,103),'') as DeceasedDate,
coalesce(
convert(varchar(10),(
select min(Date) from dbo.WH_ConstituentAttributes ca where ca.ConstituentID = WH_Constituent.ID and Type = 'Death Notification Date' and (left(Value,8) not in ('Experian', 'Ebiquita') or Value is null)
),103),
convert(varchar(10),(
select top 1 date from dbo.WH_Action where ConstituentID = WH_Constituent.ID and type = 'DM Remark' and ID in (select ActionID from dbo.WH_ActionAttribute where value = 'Deceased notification') order by date asc
),103),
'') as 'Date Notified of Death',
case
when exists (select top 1 null from dbo.WH_ConstituentAttributes ca where ca.ConstituentID = WH_Constituent.ID and Type = 'Death Notification Date' and Value = 'Golden Charter: Funeral Plan Confirmed')
then 1
when exists (select top 1 null from dbo.WH_Action where ConstituentID = WH_Constituent.ID and Category = 'Phone Call' and type = 'DM Remark' and ID in (select ActionID from dbo.WH_ActionAttribute where value = 'Deceased notification'))
then 3
else 5
end as 'Notification Method Code'
from
WH_Constituent
where
Deceased = 'Yes'
and AddressLine1 is not null
and PostCode is not null
and Country = 'United Kingdom'
and DeceasedTickedDate >= {d '2006-09-20'}
and not (
exists (
select top 1 null
from dbo.WH_ConstituentAttributes ca
where
ca.ConstituentID = WH_Constituent.ID
and Type = 'Death Notification Date'
and left(Value,8) in ('Experian', 'Ebiquita')
)
and not exists (
select top 1 null
from dbo.WH_ConstituentAttributes ca
where
ca.ConstituentID = WH_Constituent.ID
and Type = 'Death Notification Date'
and coalesce(left(Value,8),'') not in ('Experian', 'Ebiquita')
)
and not exists (
select top 1 null
from dbo.WH_Action
where
ConstituentID = WH_Constituent.ID
and type = 'DM Remark'
and ID in (
select ActionID
from dbo.WH_ActionAttribute
where value = 'Deceased notification'
)
)
)
There are no lookups, fuzzy groupings or anything else even slightly complicated.
Any help would be great!!!
Thanks
Iain
Hi,
when I try to run my package with dtexec.exe, it starts fine, but in the process the package calls another sub-package, and at that point there is a big delay before the sub-package starts processing.
The sub-package has been set up with the ExecuteOutOfProcess parameter set to True.
Any idea what might be the problem?
I have to mention that even when I run this within Visual Studio I still get a big delay.
cheers
Can someone advise if there is a delay in data being written to the database following a tableadapter.update(datatable) command?
I save transactions which are subjected to the above and then a listview is updated to reflect them.
As I work through all is OK and the transactions appear in view.
I then run a backup through my app using a backup object to do this and this reports all OK
I then close the app and re-open, and as I am in debug, the database is empty.
I perform a restore through my app using a restore object, selecting the backup file I created previously, which reports all OK.
The restore procedure calls Application.Restart to allow the app to initialise to the restored data.
The problem is that quite a bit of my data is missing from the restore, as if the last block I did prior to backup never actually made it to the database.
I also remember noting that at times when the update method is performed, the actual timestamp on the physical database is not updated... until I close the app and return to the designer.
So does this mean that prior to performing a backup I have to somehow force the app to ensure it has written all changes to the database?
Thanks
I am looking for some suggestions on how to implement a delay in processing
I have a problem in running queries.
I developed an application that uses SQL Server 2005 Express Edition.
I create all queries in stored procedures. Everything works perfectly, but sometimes I get a long delay in running queries; after some minutes it becomes regular again. I couldn't find any relation between the delay and the workload; it seems to happen by chance. I also set a timeout of 30 sec for running queries, but sometimes they take more than a minute.
What should I do?
Hi,
I have a package that will be using an FTP task to send a file (file1) to an ftp site.
Once the file (file1) is received at the ftp site, a corresponding file (file2) will be placed in the ftp directory, and then I will need to ftp the corresponding file (file2) back.
However, there will be a time delay between the time that file1 is received and file2 is put in the ftp directory. Maybe 5 to 30 minutes.
What is the best way to handle this scenario? I want to ftp file2 back as soon as it's available. However, I don't know when that is, and so I was wondering if there is a way to dynamically check for the availability of the file, so that I don't have to "hard code" a wait time into my process.
Thanks!
Hi!
I plan to prepare a lot of DMX statements, but can I control
when the data mining model is actually calculated?
Let's say I do the following:
1. create mining structure... - I expect that this execution will take "no" time
2. create mining model... - I expect the same
3. insert into <miningmodel>... select ... - I expect that this will actually take some time, right?
I really only want to prepare the definitions of the mining structures/models and tell each model what data (insert into) it should use.
Is it possible to control the actual processing, e.g. the execution of the insert statement (= execution of the time-consuming select part)?
I have a batch job that only needs to define things; calculation of the mining model is not needed until someone really
wants to see it (and it is OK that this person has to wait for processing).
BOL writes
>>
Processing Options
The following table describes the processing methods that are available in Analysis Services, and identifies the objects for which each method is supported.
Process Default
Detects the process state of an object, and performs processing necessary to deliver unprocessed or partially processed objects to a fully processed state. This processing option is supported for cubes, databases, dimensions, measure groups, mining models, mining structures, and partitions.
...
<<
But how to control this behaviour?
Best regards
Hi, Folks!
I have a multitier app running 24x7... All is running OK until the server takes too long to respond when I'm trying to add records; then I start to
receive errors from MSDTC and the transaction is aborted... How can I resolve this? I'm not sure the cause is the long response time from the server, but I have noticed that at some moments the server is busier than when it's running OK. Just for clarity: my app has a maximum of 5 threads, each of which creates a transactional object to process the new entry.
Any idea?
TIA!
Armando Marrero
CTI. Miami
Hello All SQL Experts.
Need your advice on this. I have a DTS package which checks two dates and executes tasks when the dates do not match. The problem I am facing is that I cannot make the next step start only after the previous step has completed. When the DTS package is executed, all steps complete almost at the same time. See the below / attached DTS package.
In the diagram, I have labelled 5 steps, A ~ E; each step needs the finished product from the previous step to produce a correct result of its own. I couldn't schedule each step to run at a different time because the DTS kicks off based on a file that comes in, and each step doesn't have a fixed processing time.
I have tried using both On Success and On Complete, and both options start the next step immediately rather than waiting for the job to complete or succeed. I guess this is because the work is handed off to an external process by the command. Is there a way to introduce some delay between each task?
Please advise. Thank you.
Each of the steps has something like the below (refreshing an Excel file with a macro built in); I cannot build all the macros into one file and run them from the main Excel file.
declare @MainUpdate datetime
declare @TempUpdate datetime
select @MainUpdate=Main_Update_CET from APMEAPV_Compare
select @TempUpdate=Temp_Update_CET from APMEAPV_Compare
--select @MainUpdate, @TempUpdate
if @MainUpdate<>@TempUpdate
begin
DECLARE @commandK varchar(1000)
SET @commandK='Start Excel.exe "D:\Daily_Status_Report_EDWH\EDWH_Runbook_BTS.xls"'
exec master..xp_cmdshell @commandK, No_Output
END
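A note on the behaviour above, offered as a strong suspicion rather than a confirmed diagnosis: xp_cmdshell normally blocks until its command finishes, but the 'Start' prefix tells cmd.exe to launch Excel and return immediately, which is exactly why every step "completes" at once. Dropping 'Start' makes the shell wait for the spawned process:

DECLARE @commandK varchar(1000)
-- Without 'Start', xp_cmdshell does not return until Excel exits.
-- (The full path to Excel.exe may be needed, since plain cmd.exe does not
-- search the App Paths registry the way START does.)
SET @commandK = 'Excel.exe "D:\Daily_Status_Report_EDWH\EDWH_Runbook_BTS.xls"'
EXEC master..xp_cmdshell @commandK, No_Output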
Hi all,
I have recently changed IP addresses on my MS SQL cluster (the new IP addresses belong to a different network). Both the physical nodes' IP addresses and the resources' IP addresses have been changed. Everything seems to work fine. The cluster is up and responding normally. The resources fail over to another node when needed, correctly. However, I have noticed that the time it takes for a resource to fail over to another node is much longer than in the past (before the IP address change). I have noticed that "Network Name" is the resource type that causes the delay. Could you please advise if there is something that I overlooked and needs to be changed/updated as well, so that groups fail over faster?
Thanks,
Aleu
View 5 Replies View RelatedWhen a workstation losts connection to server,it can leave an uncomplete transaction. ThenSQL Server removes the transaction.Could anyone guide me how to set the delaybefore SQL Server do it ?Thanks in advanceJohn S.*** Sent via Developersdex http://www.developersdex.com ***Don't just participate in USENET...get rewarded for it!
I am currently experiencing a 30 second delay when starting an SSIS package from a query window or stored procedure in SQL 2005 Management Studio, using xp_cmdshell and dtexec.
When I run the package in BI Dev the execution results state an elapsed time of 4.82 sec; at a command prompt using dtexec the elapsed time is 3.48 sec; from Management Studio the elapsed time is 33.86 sec. This test was run using the same configuration and databases. For the Management Studio run, if I look at the DTS log file I'm creating or the PC application log, it states the package doesn't actually start until 31 sec after the execute button is pressed. I've tried executing the package as both a SQL package and a file package without any difference in elapsed times. I have also set DelayValidation = True for every Task, ConnectionManager and the package itself.
When I look at the package log, one difference I see is that Management Studio executes using 'NT AUTHORITY\SYSTEM', while BI Dev and the cmd prompt use the local user '[Server]\Administrator', which in this case is the administrator. From this I have to believe it is some kind of user rights problem. I think SQL or the OS is waiting for something, and after it times out at 30 sec it allows the package to run. If this is the case, I'm not sure what it might be or how to find it.
I also tried making an xp_cmdshell_proxy_account with admin rights, but this didn't seem to work either. I've included the query code below. Any ideas, help or solutions are greatly appreciated.
DECLARE @cmd varchar(250)
DECLARE @Result INT
SET @Result = 0
--SET @cmd = 'dtexec /F "C:\temp\DP2000 DataTransfer.dtsx" /DE ttalg /REP EW'
SET @cmd = 'dtexec /SQL "DP2000 DataTransfer" /DE ttalg /REP EW'
EXEC @Result = xp_cmdshell @cmd
SELECT @result
I have a couple of tables set up with column level tracking and using the "Subscriber Always Wins" conflict resolver.
I am still trying to work out why there is a conflict as the only updates to these rows happen at the subscriber.
What I am finding is that a conflict occurs and there appears to be a delay between the conflict being recognised and then resolved.
The reason I know there is a delay is because this is playing havoc with a data extract that relies on the updates from the subscriber.
For instance, I have a job on which the status is changed. That status change triggers an extract via BizTalk to another system. BizTalk, via a SQL adapter, extracts the data from the server, and I find that some of it is missing in the extract.
I go and look into the database at the raw data and find that the data is in fact there.
I have certain triggers running which set up export flags in the database to say this data should be exported. This works perfectly when there are no conflicts.
In each case where data has not been extracted, a conflict occurs.
Is there any way to guarantee that the conflict will be resolved at the same time as the synchronisation is complete - i.e. within the same transaction?
I thought replication was supposed to be a single transaction - you get it all or you get nothing.
I'm interested to hear from anyone who has experienced this behaviour.
Thanks