I have a table with 110 nvarchar(255) columns in the database. I receive the data from the server as an array and then loop through it to insert the data into the database. The problem is that it takes more than 3 minutes to insert (or update), in memory, the 310 records it got from the array. I am reusing the statement (with parameters). How could I improve the performance?
It's really a small quantity of data, and the CPU in the device is 520 MHz... it should be alright!
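For what it's worth, a rough sketch of the kind of change that tends to matter most for a looped, parameterized insert like this: running all 310 statements inside one explicit transaction instead of one implicit commit per row. This assumes the target database accepts T-SQL-style batches; the table and column names below are invented.

-- invented two-column stand-in for the 110-column nvarchar(255) table
CREATE TABLE dbo.WideTable (Col001 NVARCHAR(255), Col002 NVARCHAR(255));
GO

BEGIN TRANSACTION;   -- one commit for all 310 rows instead of one per row

DECLARE @i INT;
SET @i = 1;
WHILE @i <= 310
BEGIN
    -- in the real code the values come from the array through the reused statement parameters
    INSERT INTO dbo.WideTable (Col001, Col002)
    VALUES (N'value ' + CAST(@i AS NVARCHAR(10)), N'value');
    SET @i = @i + 1;
END

COMMIT TRANSACTION;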
I decided to change over from a Microsoft Access database file to the new SQL Server Compact Edition (SQLServerCe). Although the reading of data from the database is greatly improved, inserting new rows is extremely slow.
I was getting between 60 and 70 rows per second using OLEDB and an Access database, but I am now only getting 14 to 27 rows per second using SQLServerCe.
I have tried the code changes below and nothing seems to increase the speed. Any help would be appreciated, as I would prefer to use SQLServerCe: the database is much smaller and I'm used to SQL commands.
Details: VB2008 Pro, .NET Framework 2.0, SQL Compact Edition V3.5, Encryption = Engine Default, Database Size = 128 MB (but needs to be changed to 999 MB).
Backup_On_Next_Run, OverWriteQuick and CompressAns are Booleans; all other columns are nvarchar, sized 10 to 30, except for Full_Folder_Address, which is size 260.
14 to 20 rows per second (was 60 to 70 when using OLEDB and Access)
TRY 2
Using Record Sets
Private Sub InsertRecordsIntoSQLServerce(ByVal Group_Name1 As String, ByVal Full_Folder_Address1 As String, _
        ByVal File1 As String, ByVal File_Size_KB1 As String, ByVal Schedule_To_Run1 As String, _
        ByVal Backup_Time1 As String, ByVal Last_Run1 As String, ByVal Result1 As String, _
        ByVal Last_Modfied1 As String, ByVal Latest_Modfied1 As String, ByVal Backup_On_Next_Run1 As Boolean, _
        ByVal Total_Backup_Times1 As String, ByVal Server_File_Number1 As String, ByVal Server_Number1 As String, _
        ByVal File_Break_Down1 As String, ByVal No_Of_Servers1 As String, ByVal Full_File_Address1 As String, _
        ByVal OverWriteQuick As Boolean, ByVal CompressAns As Boolean)

    Try
        ' Open an updatable result set over the table and create a new record from it
        cmd.CommandText = "SELECT * FROM BackupDatabase"
        cmd.ExecuteNonQuery()
        Dim rs As SqlCeResultSet = cmd.ExecuteResultSet(ResultSetOptions.Updatable Or ResultSetOptions.Scrollable)
        Dim rec As SqlCeUpdatableRecord = rs.CreateRecord()

        ' Populate columns 1 to 19 of the new record, then insert it
        rec.SetString(1, Group_Name1)
        rec.SetString(2, Full_Folder_Address1)
        rec.SetString(3, File1)
        rec.SetSqlString(4, File_Size_KB1)
        rec.SetSqlString(5, Schedule_To_Run1)
        rec.SetSqlString(6, Backup_Time1)
        rec.SetSqlString(7, Last_Run1)
        rec.SetSqlString(8, Result1)
        rec.SetSqlString(9, Last_Modfied1)
        rec.SetSqlString(10, Latest_Modfied1)
        rec.SetSqlBoolean(11, Backup_On_Next_Run1)
        rec.SetSqlString(12, Total_Backup_Times1)
        rec.SetSqlString(13, Server_File_Number1)
        rec.SetSqlString(14, Server_Number1)
        rec.SetSqlString(15, File_Break_Down1)
        rec.SetSqlString(16, No_Of_Servers1)
        rec.SetSqlString(17, Full_File_Address1)
        rec.SetSqlBoolean(18, OverWriteQuick)
        rec.SetSqlBoolean(19, CompressAns)
        rs.Insert(rec)
    Catch e As Exception
        MessageBox.Show(e.Message)
    Finally
        conn.Close()
    End Try
End Sub
20 to 24 rows per second
TRY 3
Using SQL Commands Direct
Private Sub InsertRecordsIntoSQLServerce(ByVal Group_Name1 As String, ByVal Full_Folder_Address1 As String, _
        ByVal File1 As String, ByVal File_Size_KB1 As String, ByVal Schedule_To_Run1 As String, _
        ByVal Backup_Time1 As String, ByVal Last_Run1 As String, ByVal Result1 As String, _
        ByVal Last_Modfied1 As String, ByVal Latest_Modfied1 As String, ByVal Backup_On_Next_Run1 As Boolean, _
        ByVal Total_Backup_Times1 As String, ByVal Server_File_Number1 As String, ByVal Server_Number1 As String, _
        ByVal File_Break_Down1 As String, ByVal No_Of_Servers1 As String, ByVal Full_File_Address1 As String, _
        ByVal OverWriteQuick As Boolean, ByVal CompressAns As Boolean)
UPDATE TABLE SET FIELD1 = NULL WHERE FIELD2 = something is running very, very slowly. The table has 200,000 records and there is an index on FIELD2. If the number of records to be updated is about 40,000, it takes about 3-4 hours.
I tried this: I created a new MSSQL database and migrated the TABLE from the original database into the new test DB, so the new DB contained only that one table, with 200,000 records and an index only on FIELD2. I ran the update there and it still took about 3-4 hours.
I tried the same thing on InterBase and the update takes under ONE SECOND!
I have a stored procedure that can take over 5 seconds to add simple data. Please give any advice on optimizing. Here are the details:

***** Access Info *****
~10 rows are added to the table every second
The table is read ~20 times a second (with no lock)

***** Stored Procedures *****
-------
CREATE PROCEDURE dbo.getMessagesForSR
@serviceRequestId numeric
AS
select *
from SRMessages with (nolock)
where srid = @serviceRequestId
order by SRMessageID
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
--------
CREATE PROCEDURE dbo.addMessage
@srid numeric,
@timestamp bigint,
@type smallint,
@initiator nvarchar(100),
@data ntext
AS
INSERT INTO SRMessages VALUES (@srid, @timestamp, @type, @initiator, @data)
GO
---------
***** The Table *****
CREATE TABLE [dbo].[SRMessages] (
    [SRMessageId] [bigint] IDENTITY (1, 1) NOT NULL,
    [srid] [bigint] NOT NULL,
    [Timestamp] [bigint] NOT NULL,
    [Type] [smallint] NOT NULL,
    [Initiator] [nvarchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [Data] [ntext] COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[SRMessages] ADD CONSTRAINT [PK_SRMessages] PRIMARY KEY NONCLUSTERED ([SRMessageId]) ON [PRIMARY]
GO
CREATE INDEX [SRMessages2] ON [dbo].[SRMessages] ([srid]) ON [PRIMARY]
GO
I have the following stored procedure that I use for an update. The table now has just over two million rows and the stored procedure takes days to run. I am looking for any help that would produce the same results faster. This runs on SQL Server 6.5, on an NT machine with four Pentium Pro 200 processors and 512 MB of RAM, so I am relatively sure that the performance issue is in the statement below.
UPDATE PAY_CHECK_DETAILS
SET JobID            = j.JobID,
    HROrganizationID = j.HROrganizationID,
    JobDetailID      = j.JobDetailID,
    GLAccountNumber  = j.GLAccountNumber
FROM JOBS j, PAY_CHECK_DETAILS p
WHERE j.EmployeeID = p.EmployeeID
  AND j.HRActionEffectiveDate = (SELECT MAX(HRActionEffectiveDate)
                                 FROM JOBS jj
                                 WHERE p.EmployeeID = jj.EmployeeID
                                   AND jj.HRActionEffectiveDate <= p.PayDate)
  AND j.HRActionSequence = (SELECT MAX(HRActionSequence)
                            FROM JOBS jjj
                            WHERE p.EmployeeID = jjj.EmployeeID
                              AND jjj.HRActionEffectiveDate = (SELECT MAX(HRActionEffectiveDate)
                                                               FROM JOBS jjjj
                                                               WHERE p.EmployeeID = jjjj.EmployeeID
                                                                 AND jjjj.HRActionEffectiveDate <= p.PayDate))
  AND NOT (ISNULL(p.JobID, 1)              = ISNULL(j.JobID, 1)
       AND ISNULL(p.HROrganizationID, 'A') = ISNULL(j.HROrganizationID, 'A')
       AND ISNULL(p.JobDetailID, 1)        = ISNULL(j.JobDetailID, 1)
       AND ISNULL(p.GLAccountNumber, 'A')  = ISNULL(j.GLAccountNumber, 'A'))
Thanks for any help or suggestions you have. Keith
I have designed a 22-table database in SQL Server that is to act as backup/alternate access to the data we have stored in an ADABAS database. I've also written a VB.NET console program that takes data from ADABAS through a broker connection (one row at a time), checks the SQL Server database to see if that information is already stored, and then performs either an insert or an update.
I can write the rows from ADABAS to a text file (not using the broker) at a rate of about 1.3 million rows in 1.3 hours. Data can be imported (I'm not sure how this import is done, possibly via a CSV file; no SELECTs or UPDATEs, just INSERTs) at about 1 million rows or so an hour. But when I do the update, receiving information via the broker from ADABAS into the VB program (with its SELECT, then UPDATE/INSERT), I'm only doing about 20-25 thousand rows an hour.
I ran a trace using the SQL Server analyzer on the database while running the update program, and then ran Profiler against the generated workload. It created a few indices, but after restarting the update program (I'm still developing, so I delete all rows from all tables each time I rerun the update) I haven't seen that it's really any faster.
I have a rather large set of data to transfer over, and this rate of 20-25 thousand rows an hour is not nearly fast enough.
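For context, the per-row pattern being described boils down to something like the following (the table, column and procedure names here are hypothetical; a minimal sketch rather than the actual program):

-- hypothetical target table
CREATE TABLE dbo.Person (
    PersonId  INT         NOT NULL PRIMARY KEY,
    LastName  VARCHAR(50) NULL,
    FirstName VARCHAR(50) NULL
);
GO

-- per-row upsert: SELECT to see if the key exists, then UPDATE or INSERT
CREATE PROCEDURE dbo.UpsertPerson
    @PersonId  INT,
    @LastName  VARCHAR(50),
    @FirstName VARCHAR(50)
AS
BEGIN
    IF EXISTS (SELECT 1 FROM dbo.Person WHERE PersonId = @PersonId)
        UPDATE dbo.Person
        SET LastName = @LastName, FirstName = @FirstName
        WHERE PersonId = @PersonId;
    ELSE
        INSERT INTO dbo.Person (PersonId, LastName, FirstName)
        VALUES (@PersonId, @LastName, @FirstName);
END
GO

In this shape each ADABAS row costs at least two round trips plus its own implicit transaction, which is usually what holds the throughput to tens of thousands of rows per hour rather than the million-plus that the bulk import path achieves.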
I have the following statement that takes quite a long time; it is the longest of any of my SQL statement updates.

UPDATE F_REGISTRATION_STD_SESSION
SET PREVIOUS_YEAR_SESSION_ID =
    (SELECT s.previous_year_session_id
     FROM F_REGISTRATION_STD_SESSION R,
          DS_V4_TARGET.dbo.D_H_Session_By_SessQtr S
     WHERE r.STUDENT_ID = F_REGISTRATION_STD_SESSION.STUDENT_ID
       and s.previous_year_SESSION_ID = r.SESSION_ID
       and s.session_id = F_REGISTRATION_STD_SESSION.SESSION_ID)

STUDENT_ID is varchar(20) and SESSION_ID is char(10), and both are indexed.

What I want to accomplish: I want to know if there was a student registration from the prior year of a registration. For example, if there is a registration for Fall 2004, was there also a registration for the same student in Fall 2003? Maybe there is a better way to approach this?

TIA
Rob
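Without knowing more of the schema, one alternative shape is the same update written as a joined UPDATE ... FROM, which SQL Server can often drive from the indexes on STUDENT_ID and SESSION_ID more directly than a correlated subquery in the SET clause; a sketch, assuming the join conditions above are the intended ones:

UPDATE f
SET PREVIOUS_YEAR_SESSION_ID = s.previous_year_session_id
FROM F_REGISTRATION_STD_SESSION AS f
JOIN DS_V4_TARGET.dbo.D_H_Session_By_SessQtr AS s
    ON s.session_id = f.SESSION_ID
JOIN F_REGISTRATION_STD_SESSION AS r
    ON r.STUDENT_ID = f.STUDENT_ID
   AND r.SESSION_ID = s.previous_year_session_id

Note that this form only touches rows that actually have a prior-year registration, whereas the original also overwrites non-matching rows with NULL, so the two are not strictly equivalent.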
Hi, it takes 4 minutes to insert 40,000 records on one SQL Server and 40 seconds on another SQL Server. The slower one runs on Windows 2000 Terminal Server and the faster one on Windows 2000. The slower server has 2 GB of RAM, the faster one 512 MB! How can I diagnose what's going wrong on the slower server? I can't change the application that does the insertions.
I'm having what you might call an optimisation issue - but I'm also not sure if my approach to this problem is right. I've spent the whole day trying various methods but none seem to be performing as admirably as I'd hoped.
Basically, here's the scenario:
1. Log files are being inserted into a SQL table via Log Parser 2.2.
2. I have a lookup table of IP address ranges to countries and other geographic data.
3. Once the log row has been inserted from Log Parser, I want to update the same log table with data from the lookup table.
The main problem seems to be the updating (the initial insert from Log Parser is lightning quick).
I initially wrote an AFTER INSERT trigger on the log table that converted the user's IP address into IPNumber format using a simple algorithm, then pumped the IPNumber into a quick select query which retrieved the geolocation data. Then, in the same trigger, there was an update statement which basically said:
update dbo.Logs set [Country] = @Country where [IpNumber] = @IpNumber and [Country] is null
Therein lies the problem. The update statement slows everything down to almost a standstill and also seems to obtain a lock on the table.
Critical factors:
1. The log table must remain readable.
2. The query must execute in seconds -- not over half an hour :)
I've experimented with various combinations of indexing, so far to no avail.
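For what it's worth, the per-row trigger work can also be expressed as one set-based pass once the load has finished, instead of a lookup inside the insert path. A minimal sketch, assuming a lookup table keyed by start/end IP number; the table definitions here are invented stand-ins for the real ones:

-- invented stand-ins for the real log and lookup tables
CREATE TABLE dbo.Logs (
    IpNumber BIGINT       NOT NULL,
    Country  NVARCHAR(50) NULL
);
CREATE TABLE dbo.GeoIpRanges (
    StartIpNumber BIGINT       NOT NULL,
    EndIpNumber   BIGINT       NOT NULL,
    Country       NVARCHAR(50) NOT NULL
);
GO

-- one set-based pass: fill in Country for any log row that is still NULL
UPDATE l
SET l.Country = g.Country
FROM dbo.Logs AS l
JOIN dbo.GeoIpRanges AS g
    ON l.IpNumber BETWEEN g.StartIpNumber AND g.EndIpNumber
WHERE l.Country IS NULL;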
I have two tables:

T1: Key as bigint, Data as char(20) - size: 61M records
T2: Key as bigint, Data as char(20) - size: 5M records

T2 is the smaller, with 5 million records. They both have clustered indexes on Key. I want to do:

update T1 set Data = T2.Data
from T2
where T2.Key = T1.Key

The goal is to match Key values, and only update the Data field of T1 if they match. SQL Server seems to optimize this query fairly well, doing an inner merge join on the Key fields; however, it then does a hash match to get the data fields, and this is taking FOREVER. It takes something like 40 minutes to do the above query, where it seems to me the data could be updated much more efficiently. I would expect to see just a merge and update, like I would see in the following query:

update T1 set Data = [someconstantdata]
from T2
where T2.Key = T1.Key and T2.Data = [someconstantdata]

The above works VERY quickly, and if I were to perform the above query 5 million times (assuming that my data is completely unique in T2 and I would need to), it would finish very quickly, much sooner than the previous query. Why won't SQL Server just match these up while it is merging the data and update in one step? Can I make it do this? If I extracted the data in sorted order into a flat file, I could write a program in ten minutes to merge the two tables and update in one step, and it would fly through this, but I imagine that SQL Server is capable of doing it and I am just missing it. Any advice would be GREATLY appreciated!
I have a linked server set up on my local SQL 2000 instance. I try to run an update against a SQL 2005 database and it takes 29 seconds. I checked the execution plan and it says the entire time is spent on the Remote Scan. Is there something I need to do to speed this up? There is an index on the PK that I am searching against.
I've got 2 servers, with SQL Server 2000 SP3 and MS Windows 2003 Server. I've written a very simple stored procedure to insert 20,000 rows into a very simple table TEST (id int, msg varchar(50)).
On the first server (P-IV 2 GHz), it takes 700 ms / 1000 insertions; on the second (2x Xeon 2.6 GHz), it takes 13 s / 1000 insertions...
(The insertion is: INSERT INTO TEST (id, msg) VALUES (@id, 'dummy text'))
...
SQL Server was installed exactly in the same way...
What could I do to see where the problem is? With Profiler, I see no difference while logging all events...
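Since the installation looks identical, one thing that can still be compared between the two servers is what SQL Server reports it has been waiting on while the procedure runs. A sketch for SQL Server 2000 (assumes sysadmin rights; DBCC SQLPERF(WAITSTATS) is only lightly documented but commonly used for this):

-- clear the cumulative wait statistics
DBCC SQLPERF(WAITSTATS, CLEAR)

-- ... run the 20,000-row insert procedure on this server ...

-- report the waits accumulated since the clear; a server stuck on WRITELOG or
-- on lock waits looks very different from one that is simply CPU-bound
DBCC SQLPERF(WAITSTATS)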
We had a table that had logical fragmentation of 50%. After rebuilding (with the default fill factor of 0) I noticed that inserts are much faster. If my page density is 100%, wouldn't I get more page splits? I know I am missing something fundamental here. Could someone get me back on track?

Table size: 1.5 million rows
Insert size: 70K rows
Before: 15 minutes
After: 3 minutes
Index: compound clustered index across varchar columns; there are also a couple of non-clustered indexes
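For reference, the logical fragmentation and page density figures in question are the ones DBCC SHOWCONTIG reports per index on older builds (2000/2005); a quick way to capture them before and after the rebuild (the table name is a placeholder):

-- logical scan fragmentation and avg. page density for every index on the table
DBCC SHOWCONTIG ('dbo.MyLargeTable') WITH TABLERESULTS, ALL_INDEXES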
I support a website (ASP.NET 2.0) where recently users have been unable to insert data due to timeout issues. The functionality executes a query that inserts a single row into a SQL Server 2005 db table. I tried running this query from the backend, and it took 4:48 to insert a single row! Interestingly, after that initial agony any similar inserts I tried took no time whatsoever. I have checked the execution plan, but unfortunately don't really know what I'm looking for with inserts, as most of my experience with execution plans is with select statements. Any resources anyone could point me to for troubleshooting this would be much appreciated.
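For the next time it happens, wrapping the problem insert with timing and I/O statistics in a query window shows where the 4:48 goes (CPU versus elapsed time, logical versus physical reads); a minimal sketch against a hypothetical table, with the real INSERT substituted in:

-- hypothetical stand-in table; run the real INSERT in its place
CREATE TABLE dbo.Orders (OrderId INT IDENTITY(1,1) PRIMARY KEY, CustomerId INT, CreatedOn DATETIME)
GO

SET STATISTICS TIME ON
SET STATISTICS IO ON

INSERT INTO dbo.Orders (CustomerId, CreatedOn)
VALUES (42, GETDATE())

SET STATISTICS TIME OFF
SET STATISTICS IO OFF

A first run that reports a long elapsed time with little CPU and few reads points more toward blocking, autogrow of the data or log file, or a cold cache than toward the insert plan itself, which would fit the pattern of later inserts completing instantly.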
Hi everybody! I have a problem with a SQL Server 2000 database, and I need some help.
I have a PC with the following hardware configuration:
+ RAM: 512 MB
+ CPU: 2.4 GHz (Intel Pentium 4)
+ System: Microsoft Windows Server 2003 Enterprise Edition, Service Pack 1
(call this Machine1)
and another server machine with the following hardware configuration:
+ RAM: 2 GB
+ CPU: 3.6 GHz (Intel Xeon)
+ System: Microsoft Windows Server 2003 Enterprise Edition, Service Pack 2
(uses RAID 5)
(call this Machine2)
SQL Server 2000 is installed on both machines.
I created one table and one script to insert several records into it. The insert script:
declare @count int, @max int
set @count = 1
set @max = 5000
while (@count <= @max)
begin
    insert into TBAT_STOCKEOD (TRDT, SEQNO, COMPID, TRANSTYPE)
    values ('20071219', cast(@count as varchar(5)), '13215', '1')
    set @count = @count + 1
end
I executed this script on Machine1 and Machine2:
+ Machine1: 3 seconds.
+ Machine2: 30 seconds.
I don't understand why Machine2 is slower than Machine1, even though Machine2's configuration is better than Machine1's. I don't know what is happening.
I checked resource usage in Perfmon while SQL Server executed the script, with the following results:
Machine1: processor at 100%
Machine2: physical disk at 100%
Is there a way to configure SQL Server to improve the speed of writing to the hard disk?
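Since each of the 5,000 single-row inserts commits on its own, one variation worth timing on both machines is the same loop wrapped in a single transaction; a sketch of that change to the script above, not a guaranteed fix:

declare @count int, @max int
set @count = 1
set @max = 5000

begin transaction   -- one commit at the end instead of an implicit commit per row

while (@count <= @max)
begin
    insert into TBAT_STOCKEOD (TRDT, SEQNO, COMPID, TRANSTYPE)
    values ('20071219', cast(@count as varchar(5)), '13215', '1')
    set @count = @count + 1
end

commit transaction

If Machine2's disk stops being the bottleneck under this version, the difference between the two machines is probably the cost of the per-commit log flush on the RAID 5 volume rather than SQL Server configuration.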
The following basic UPDATE SQL statement has been running for 16 hours and counting. I need to get this done ASAP.
UPDATE Recipients
SET UndeliverableTime = getdate()
FROM Recipients
INNER JOIN Domains ON (Recipients.DomainID = Domains.ID)
INNER JOIN Undeliverables ON (Recipients.UserName + '@' + Domains.Domain = Undeliverables.EmailAddress)
Is there any way I can see how far this has gone and how long it will take to finish? Will this take another hour to finish or another week?
Both tables (Recipients and Undeliverables) have approximately 80 million records
I did a nearly identical operation with another table that had only 7 million records and it took 10.5 hours. I hope this doesn't scale linearly to 115 hours.
I am tempted to cancel, retune, and rerun, but that may trigger a really expensive rollback operation that could take days. Any ideas?
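If it does come to a rerun, one common shape for an update this large is to work in limited batches so that each transaction (and any rollback) stays small; a rough sketch using SET ROWCOUNT, which assumes UndeliverableTime is NULL until the update has touched a row:

SET ROWCOUNT 100000   -- cap each pass at 100,000 rows

DECLARE @done INT
SET @done = 0

WHILE @done = 0
BEGIN
    UPDATE Recipients
    SET UndeliverableTime = getdate()
    FROM Recipients
    INNER JOIN Domains ON (Recipients.DomainID = Domains.ID)
    INNER JOIN Undeliverables ON (Recipients.UserName + '@' + Domains.Domain = Undeliverables.EmailAddress)
    WHERE Recipients.UndeliverableTime IS NULL

    IF @@ROWCOUNT = 0 SET @done = 1
END

SET ROWCOUNT 0        -- back to unlimited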
I am using FOR UPDATE triggers to audit a table that has 67 fields. My problem is that this slows down the system significantly. I have narrowed the problem down to the amount of code (lines) that has to be compiled after the trigger fires. There are about 67 IF UPDATE(fieldName) blocks inside the trigger, each containing a not very complex select statement followed by an insert into the audit table. When I leave only a few of the IFs in the trigger and comment out the rest of the code, performance increases dramatically. It seems like it is checking every single UPDATE() statement. Assuming it was slowing down because it does a select for every update, I tried doing two separate selects at the beginning, from Deleted and Inserted, assigning the column values to specific variables, and instead of doing if UPDATE(fieldName) I did

if @DelFieldName <> @InsFieldName
begin
    INSERT INTO AUDIT
    SELECT WHAT I NEED
end

This did not improve performance. If you have any ideas on how to get around this issue, please let me know. Below is an example of what my triggers look like.

------------------------------------
Trigger 1 -- this was my original design

CREATE TRIGGER trigger1 ON [Table]
FOR UPDATE
AS
if update(field1)
begin
    insert into Audit
    SELECT What I need
end
if update(field2)
begin
    insert into Audit
    SELECT What I need
end
...... repeated about 65 more times
if update(field67)
begin
    insert into Audit
    SELECT What I need
end
---------------------------------------------------------------------------
Trigger 2 -- this is what I tried, but it did not improve performance

CREATE TRIGGER trigger2 ON [Table]
FOR UPDATE
AS
Declare @DelField1 varchar
Declare @DelField2 varchar
....
Declare @DelField67 varchar

Select
    @DelField1 = Field1,
    @DelField2 = Field2,
    ....
    @DelField67 = Field67
From Deleted

Declare @InsField1 varchar
Declare @InsField2 varchar
....
Declare @InsField67 varchar

Select
    @InsField1 = Field1,
    @InsField2 = Field2,
    ....
    @InsField67 = Field67
From Inserted

-- I do not do if Update() but instead compare variables
if @DelField1 <> @InsField1
begin
    Insert into AUDIT
    SELECT what I need
end
if @DelField2 <> @InsField2
begin
    Insert into AUDIT
    SELECT what I need
end
............
if @DelField67 <> @InsField67
begin
    Insert into AUDIT
    SELECT what I need
end
----------------------------------------------

If you have any idea how to optimize this, please let me know. Any input is greatly appreciated. I do not have a problem with the triggers doing what they are supposed to do; my concern is that they are very slow. The reason I gave you two examples is that I suspect it has something to do with the enormous amount of code inside the trigger; both examples perform about the same whether I use the two huge selects from Inserted and Deleted or not.

Thanks,
Gent
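For comparison, this kind of audit is sometimes written as a single set-based insert that joins Inserted to Deleted, so there is one statement to compile instead of 67 branches. This is only a rough sketch: the join key (PKColumn) and the audit columns are invented, and a real version would have to cover all 67 fields and whatever 'SELECT what I need' actually selects.

CREATE TRIGGER trigger3 ON [Table]
FOR UPDATE
AS
-- one insert: a row is written to the audit table when any watched column changed
INSERT INTO Audit (PKValue, OldField1, NewField1, OldField2, NewField2)
SELECT d.PKColumn, d.Field1, i.Field1, d.Field2, i.Field2
FROM Inserted AS i
JOIN Deleted AS d ON d.PKColumn = i.PKColumn
WHERE ISNULL(d.Field1, '') <> ISNULL(i.Field1, '')
   OR ISNULL(d.Field2, '') <> ISNULL(i.Field2, '')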
We just moved a database from a shared SQL 2000 server to SQL Express on a VPS server. Everything seems to work great, really no performance loss at all, except for one stored proc that is giving us problems. Its function is to update a history table and then update the production table. On the old server it would take less than a second to run; now it takes anywhere from 45 seconds to 1 minute, or it times out. This database is used by a classic ASP web app. ANY help at all on this would be appreciated, as I am pulling my hair out trying to figure out what's wrong here. Here is the proc.
------------------------------------------ code ----------------------------------------------
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
SET NOCOUNT ON
go
On one PC I installed VS2008 alongside VS2005 and SQL 2005 (before I installed VS2008 everything was fine).
The other is a freshly installed XP machine with only VS2008, SQL 2005 and Office 2007. The latest service packs are installed on both.
On both machines, when I open a table in SQL 2005 with the 'Open Table' command, the display of the table is very slow. Every row is drawn field by field, the grid is not showing, and the row and column headers are not grey as normal.
Scrolling is impossible, as the slow screen updates then start all over again.
Even a small table with 10 rows and 15 fields is this slow. It's not workable.
Originally posted by Jeremy at 12/10/2001 11:39:38 AM
Hello all,
I've written a simple DTS job that uses Oracle (8.x) as a source and Oracle (8.x) as a destination. I'm using SQL 2000 and Microsoft's OLE DB provider for Oracle as the two connections. I've chosen "Transform Data Task" with the following SQL: "SELECT * FROM REPORTER_STATUS WHERE LASTOCCURRENCE > TRUNC(SYSDATE)". As you can see, it's very simple; however, it's very, very slow (it averages about 1000 rows per minute). In my column transformations, I've selected many-to-many rather than one-to-one. There are no ActiveX scripts or anything along those lines, just a simple push of the data from one Oracle box to the other. The table schemas are identical, etc. I've had this problem before when writing to Oracle, and I can't imagine that it's really supposed to be this slow. If you need more details, please just let me know.
The official response from Microsoft is that DTS only allows single inserts for Oracle, not bulk or bcp. There must be someone out there who has figured out how to configure / modify / call (something) from a DTS package to insert millions of records into Oracle in a decent time frame...
I am using an aggregate with the OVER clause. Running the script is fast, less than 1 second, but when I insert the result into a temp table the execution plan is very different and it takes 8 seconds. I have attached the execution plans, along with the Statistics IO and Time messages. I am using SQL Server 2014 with backward compatibility to 2008 R2.
if (select OBJECT_ID('tempdb..#MM')) is not null
    drop table #MM

CREATE TABLE #MM (
    [MyTableID] [int],
    [ParticipantID] [int],
    [ConferenceID] [nvarchar](50),
    [Points] [money],
    [DateCreated] [datetime],
    [StartPoints] [money],
    [EndPoints] [money],
    [LowPoints] [money],
    [HighPoints] [money]
)

insert into #MM ([MyTableID], [ParticipantID], [ConferenceID], [Points], [DateCreated], [StartPoints], [EndPoints], [LowPoints], [HighPoints])
select
    mm.MyTableID, mm.ParticipantID, mm.ConferenceID, mm.Points, mm.DateCreated,
I would like to know whether it is possible to add an updated-date column to a slowly changing dimension table using the Slowly Changing Dimension data flow transformation.
I would like to keep track of which records are updated in the dimension table based on the data being processed.
If I have a table with one or more nullable fields, and an INSERT or UPDATE leaves one or more of those fields NULL, either explicitly or implicitly, is there a way I can set them to non-null values without interfering with the INSERT or UPDATE as far as the other fields in the table are concerned?
EXAMPLE:
CREATE TABLE dbo.MYTABLE(
    ID NUMERIC(18,0) IDENTITY(1,1) NOT NULL,
    FirstName VARCHAR(50) NULL,
    LastName VARCHAR(50) NULL,
[Code] ....
If an INSERT looks like any of the following, what can I do to change the NULL being assigned to DateAdded into a real date, preferably the value of GetDate() at the time of the insert? I've heard of INSTEAD OF triggers, but I'm not trying to override the entire INSERT or UPDATE, just the one (maybe two) fields that are being left as null or explicitly set to null. The same would apply to any UPDATE where DateModified is not specified or is explicitly set to NULL; I would want to change it so that DateModified is not null after any UPDATE.
INSERT INTO dbo.MYTABLE( FirstName, LastName, DateAdded) VALUES('John','Smith',NULL)
INSERT INTO dbo.MYTABLE( FirstName, LastName) VALUES('John','Smith')
INSERT INTO dbo.MYTABLE( FirstName, LastName, DateAdded) SELECT FirstName, LastName, NULL FROM MYOTHERTABLE
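One pattern that matches what is described, sketched against the MYTABLE columns above (the constraint and trigger names, and the assumption that ID is the key and that DateAdded is a nullable DATETIME, are mine): a DEFAULT covers the case where the column is not mentioned at all, and an AFTER trigger backfills the explicit-NULL case without replacing the whole INSERT the way an INSTEAD OF trigger would.

-- assumes MYTABLE has DateAdded DATETIME NULL and ID as its key
ALTER TABLE dbo.MYTABLE ADD CONSTRAINT DF_MYTABLE_DateAdded DEFAULT (GETDATE()) FOR DateAdded
GO

CREATE TRIGGER dbo.trg_MYTABLE_DateAdded ON dbo.MYTABLE
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON
    -- backfill only the rows that arrived with an explicit NULL
    UPDATE t
    SET t.DateAdded = GETDATE()
    FROM dbo.MYTABLE AS t
    JOIN inserted AS i ON i.ID = t.ID
    WHERE i.DateAdded IS NULL
END
GO

A similar AFTER UPDATE trigger could do the same for DateModified.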
I have a web form with a text field that needs to take in as much as the user decides to type and insert it into an nvarchar(max) field in the database behind it. I've tried using the new .WRITE clause in my update statement, but it cuts off the text after a while. Is there a way to do this insert/update in SQL 2005 without resorting to Bulk Insert? That bloats the transaction log, and turning the logging off requires a call to sp_dboptions (or a straight-up ALTER DATABASE), which I'd like to avoid if I can.
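For reference, the .WRITE clause takes (expression, @Offset, @Length) and appends when @Offset is NULL, so large uploads are usually written as an append loop over chunks rather than a single call; a minimal sketch (the table, column and chunk variable are hypothetical):

-- hypothetical table with an nvarchar(max) body column
CREATE TABLE dbo.Posts (PostId INT PRIMARY KEY, Body NVARCHAR(MAX) NOT NULL)
INSERT INTO dbo.Posts (PostId, Body) VALUES (1, N'')
GO

DECLARE @chunk NVARCHAR(MAX)
SET @chunk = N'next piece of the user''s text'

-- append the chunk to the end of the existing value (@Offset = NULL means append)
UPDATE dbo.Posts
SET Body.WRITE(@chunk, NULL, NULL)
WHERE PostId = 1

Note that .WRITE cannot be applied to a column value that is currently NULL, which is why the sketch seeds the row with an empty string.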
I am running a view that inserts into a #temp table. Only on certain days is the INSERT into tempdb this slow. The attached snapshot shows how few records have been inserted after one minute. It doesn't happen every day; some days it's very fast.
In an ASP, I have a dynamically created SQL statement that amounts to "SELECT * FROM Server1.myDB.dbo.myTable WHERE Col1 = 1" (Col1 is the table's primary key). It returns the data immediately when executed.
However, when the same record is updated with "UPDATE Server1.myDB.dbo.myTable SET Comments = 'blah blah blah' WHERE Col1 = 1", the page times out before the query can complete.
I watched the program in Profiler, and I saw on the update that sp_cursorfetch was being executed as an RPC once for each row in the table. In a table of 78000 records, the timeout occurs well before the last record is fetched, and the update bombs.
I can run the same statements in Query Analyzer from a linked server and have the same results. The execution plan shows that a Remote Query is occurring on the select that returns 1 row, and a Remote Scan is taking place on the update scanning 78000 rows (I guess this is where all the sp_cursorfetch calls are happening...?).
How can I prevent the Remote Scan? How can I prevent the execution of the RPC sp_cursorfetch for each row in the remote table?
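One formulation that tends to avoid the row-by-row remote cursor is to push the filter to the remote server and update through OPENQUERY, so only the qualifying row comes back to be updated; a sketch using the names from the example above (whether it helps depends on the provider and linked-server settings):

UPDATE OPENQUERY(Server1, 'SELECT Comments FROM myDB.dbo.myTable WHERE Col1 = 1')
SET Comments = 'blah blah blah'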
We have a big problem here with merge replication, specifically whenever a schema change occurs. We are replicating schema changes as well as triggers/stored procedures. In this example we changed about 150 stored procedures and about 30 triggers. This is then replicated to the subscriber database (which is also a merge publisher for further remote systems that were offline at the time) over a 10Mb link, hardly low bandwidth. However, the replication process takes about an hour and a half; considering the SQL on the primary server took less than a minute to run, this is a big surprise.
We've run a trace to see if we can identify what is going on. There seems to be a great number of calls to sp_MSunmarkschemaobject. We are still waiting for a trace to complete so we can fully analyse this, but it looks like it calls this repeatedly for every stored procedure in the database. We are currently re-testing against one of the remote servers with the merge agent set to the slow profile (not much hope that this will alter the poor performance).
This task looks to be excessive, and certainly does not seem to function in a sensible manner. Has anyone else had similar issues, or does anyone have any suggestions? This is very infuriating, as it means the servers are effectively offline for a minimum of an hour and a half (in fact the remote servers take over 4 hours!).
In a Data Flow Task, I have an insert that occurs into a SQL Server 2000 table from a fixed-width flat file. The SQL Server table that the data goes into is accessed through an OLE DB connection manager that uses the Native OLE DB\Microsoft OLE DB Provider for SQL Server.
In the OLE DB Destination, I changed the access mode from 'Table or view - fast load' to 'Table or view' because I needed to implement OLE DB Destination Error Output. The error output goes to a SQL Server 2000 table that uses the same connection manager.
The OLE DB Destination Editor Error Output 'Error' option is configured to 'Redirect' the row. 'Set this value to selected cells' is set to 'Fail component'.
Was changing the access mode the simple reason why the insert from the flat file takes so much longer, or could there be other problems?
I'm working on inserting data into a table in a database. The table has two separate triggers, one for insert and one for update (I don't like it this way, but that's how it's been for years). When there is a normal insert, done via a program, it looks like the triggers work fine. When I run an insert manually via a script, the first insert trigger will run, but the update trigger will fail. I narrowed down the issue to a root cause.
The root issue is that both triggers use the same temporary table name. When the second trigger runs, there's an error stating that a few columns don't exist. On my test server and test DB I changed the update trigger so that its temporary table name is different from the insert trigger's, and the triggers work fine. The weird thing is that if the temporary table already exists, I would expect the second trigger's attempt to create it to fail and say that it already exists. I'm probably just going to update the trigger tonight and change the temporary table name.
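For illustration, the collision being described looks roughly like the sketch below (the table, columns and temp-table name are invented). When the insert trigger's UPDATE fires the update trigger (assuming nested triggers are enabled), both scopes have a #staging in play; SQL Server allows the inner scope to create a temp table with the same name as one from an outer scope, but when two same-named temp tables exist it is not defined which one a reference resolves against, which is one way to end up with 'column does not exist' errors instead of 'table already exists'.

-- invented schema: an insert trigger that also updates the table,
-- which fires the update trigger in a nested scope
CREATE TABLE dbo.Widget (WidgetId INT PRIMARY KEY, Name VARCHAR(50), Touched DATETIME)
GO

CREATE TRIGGER dbo.trg_Widget_Insert ON dbo.Widget AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON
    SELECT WidgetId, Name INTO #staging FROM inserted                       -- outer scope creates #staging
    UPDATE w SET Touched = GETDATE()
    FROM dbo.Widget AS w JOIN #staging AS s ON s.WidgetId = w.WidgetId      -- fires the update trigger
    DROP TABLE #staging
END
GO

CREATE TRIGGER dbo.trg_Widget_Update ON dbo.Widget AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON
    SELECT WidgetId, Touched INTO #staging FROM inserted                    -- same name, different columns, nested scope
    SELECT WidgetId, Touched FROM #staging                                  -- may resolve against either #staging
    DROP TABLE #staging
END
GO

Giving each trigger its own temp table name, as planned, removes the ambiguity.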