I have a table containing 8 million records. I need to replace 2 million of these records with the output of a scaled-down query that goes something like:

SELECT 1, ShareholderID, Assets1 FROM MyTable (yields approx. 200,000 records)
SELECT 2, ShareholderID, Assets2 FROM MyTable (yields approx. 200,000 records)
.
.
.
SELECT 10, ShareholderID, Assets1 + Assets2 + Assets3 + ... + Assets9 FROM MyTable (yields approx. 200,000 records)
Updates and cursors just seem to be too slow.
So far I have done the following, but was wondering if anyone could think of a better way:

1. SELECT the 6 million records that don't need to be deleted into a #TempTable.
2. Use the statements above to SELECT the rebuilt rows into the same #TempTable.
3. DROP and recreate the original table.
4. SELECT the 6 + 2 million records INTO the original table.
This seems rather convoluted. Is there a better approach? Would it be worthwhile to dump the data to a file and use bcp / BULK INSERT?
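For what it's worth, a minimal sketch of the rebuild route described above (the Slice and Assets column names and the KeepRow predicate are hypothetical placeholders for however you tell the 6 million keepers from the 2 million rows being replaced):

SELECT Slice, ShareholderID, Assets
INTO #Rebuild
FROM MyTable
WHERE KeepRow = 1                     -- the ~6 million rows that stay as-is (hypothetical predicate)

INSERT INTO #Rebuild (Slice, ShareholderID, Assets)
SELECT 1, ShareholderID, Assets1 FROM MyTable WHERE KeepRow = 0
UNION ALL
SELECT 2, ShareholderID, Assets2 FROM MyTable WHERE KeepRow = 0

DROP TABLE MyTable

SELECT Slice, ShareholderID, Assets
INTO MyTable                          -- SELECT ... INTO is minimally logged under simple/bulk-logged recovery
FROM #Rebuild

Dumping to a file and using bcp / BULK INSERT mainly pays off when the data has to leave the server anyway; for an in-database reshuffle, SELECT ... INTO plus the swap above is usually as fast as it gets, provided the recovery model allows minimal logging.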
The requirement is: I should allow single-row deletes from a table but not bulk deletes. An audit table should be updated if there is any single delete or single update. So I wrote the following triggers for single and bulk delete:
ALTER TRIGGER [dbo].[TRG_Delete_Bulk_tbl_attendance]
ON [dbo].[tbl_attendance]
AFTER DELETE
AS
[code]...
When I try to run the website, the database error I am getting is: Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 0, current count = 1.
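That particular error usually means some code path issued a BEGIN TRANSACTION without a matching COMMIT or ROLLBACK, so it is worth checking the trigger and any calling procedure for an unbalanced pair. For comparison, a minimal sketch of a delete trigger that blocks multi-row deletes and audits single ones; the audit table and its columns are assumptions, not your schema:

ALTER TRIGGER [dbo].[TRG_Delete_Bulk_tbl_attendance]
ON [dbo].[tbl_attendance]
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    IF (SELECT COUNT(*) FROM deleted) > 1
    BEGIN
        RAISERROR ('Bulk deletes are not allowed on tbl_attendance.', 16, 1);
        ROLLBACK TRANSACTION;  -- undoes the delete; do not pair this with an explicit BEGIN TRANSACTION here
        RETURN;
    END
    -- single-row delete: record it (hypothetical audit table and columns)
    INSERT INTO dbo.tbl_attendance_audit (EmployeeID, AttendanceDate, DeletedAt)
    SELECT EmployeeID, AttendanceDate, GETDATE()
    FROM deleted;
END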
We are trying to delete data from a huge 75-million-record table, and it takes 4 hours to prune the data:

DELETE FROM Company WHERE recordid IN (SELECT TOP 10000 recordid FROM recordid_Fed3 WHERE flag = 0)

We have a loop that prunes 10,000 records at a time in a WHILE loop. Let me know if there is a better way to achieve this.
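One thing worth trying is replacing the IN subquery with a join while keeping the batch loop, with supporting indexes on recordid in both tables and on the flag column. A sketch, assuming SQL Server 2005 or later for DELETE TOP:

DECLARE @rows INT
SET @rows = 1
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) c
    FROM Company AS c
    INNER JOIN recordid_Fed3 AS f ON f.recordid = c.recordid
    WHERE f.flag = 0

    SET @rows = @@ROWCOUNT

    CHECKPOINT  -- under the SIMPLE recovery model, lets the log truncate between batches
END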
Hello, I read several newsgroup articles about bulk deletes, and I found one way is to:

- create a temporary table with all constraints of the original table
- insert the rows to be retained into that temp table
- drop the constraints on the original table
- drop the original table
- rename the temporary table

My purge is a daily job, and my question is how this works on a heavily loaded operational database. I mean thousands of records are written into my tables (the same tables I want to purge rows from) every second. While I am copying to the temp table and dropping the original table, what happens to that operational data?

I also realized another way of doing the bulk delete is using BCP:

1) BCP out the rows to be deleted to an archive file
2) BCP out the rows to be retained
3) Drop indexes and truncate the table
4) BCP in the rows to be retained
5) Create indexes

Again the same question: while I'm doing the BCP, is there any insert blocking on my original table? What happens to rows that arrive in the meantime? Does BCP acquire an exclusive lock on the table which prevents any other insertion? Does anyone have experience with a BCP command querying out 2 million records, and how long would it take? I appreciate your help.
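On the locking questions, as far as I know: bcp out is just a SELECT under the covers, so it takes ordinary shared locks rather than blocking the table outright, and bcp in only takes a table lock if you request it with -h "TABLOCK" (which is also what enables the fast, minimally logged path). Rows inserted between steps 2 and 3 would be lost, though, so that window needs quiescing or a catch-up pass. A hedged sketch of the commands, with server, database, table, and file names as placeholders:

bcp "SELECT * FROM MyDb.dbo.BigTable WHERE Archive = 1" queryout archive.dat -n -T -S MyServer
bcp "SELECT * FROM MyDb.dbo.BigTable WHERE Archive = 0" queryout keep.dat -n -T -S MyServer
(then, in T-SQL: drop the indexes and TRUNCATE TABLE dbo.BigTable)
bcp MyDb.dbo.BigTable in keep.dat -n -T -S MyServer -b 50000 -h "TABLOCK"
(then recreate the indexes)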
I've got a large MS Sql Server 2000 database that has 15 indexes, with roughly 180 million rows representing 240 GB worth of data. Due to the massive size of the database we are trying to purge it down to a smaller dataset, about 40 million rows, in order to speed up the query performance and to be able to defrag the indexes (which are 30-50% fragmented). To complicate the matter, this table is also a publisher in a transactional replication setup, with one subscriber. Also, the system needs to be up constantly so I'm only allowed about a 3-5 hour period to take an outage a week.
So far I've tested several methods of delete following all best practices (batch deletes, using indexes in the delete's WHERE clause), and have come up with deleting/committing 500 rows at a time. The problem is that it still takes 3-4 seconds to delete this many rows, on an 8 GB RAM, 4-processor machine that is not currently used or replicated.
I'm at a loss on a way to pare down the data with a delete, as the current purge script would take 7 hours a day for about 3 months. Another option I'm considering is to do a truncate and copy the data back over from the replicated database, but again this has its own set of problems, i.e. network latency and slow insert times. Yet another option would be to create a replica of the table on the production db, copy the data to it, then rename the table.
Anyone have experience with purging such a massive amount of data? Any help would be greatly appreciated.
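Since this is SQL Server 2000 (no DELETE TOP there), SET ROWCOUNT is the batching tool. A sketch, assuming the purge predicate (PurgeFlag below is hypothetical) is backed by an index so each batch doesn't rescan the table; the WAITFOR gives the log reader agent and log backups room to keep up, which matters with transactional replication:

SET ROWCOUNT 500

WHILE 1 = 1
BEGIN
    DELETE FROM dbo.BigTable WHERE PurgeFlag = 1  -- hypothetical indexed predicate
    IF @@ROWCOUNT = 0 BREAK
    WAITFOR DELAY '00:00:01'  -- pacing so replication and log backups keep pace
END

SET ROWCOUNT 0

If the delete rate still can't finish inside your weekly window, the rename-swap you mention (build a new table, copy the 40 million keepers, sp_rename the old table out of the way) tends to be the faster path, but with a published table you would have to drop and re-create the publication around it.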
I have a fundamental problem with how CDC works for bulk updates. When a CDC-enabled table is updated for a single row, my CDC system tables record it as an update (operations 3 & 4), which is perfect and what it should be. No complaints! But when I do a bulk update on the same CDC-enabled tables for the same columns, my CDC system tables record it as a delete and then an insert (operations 1 & 2). This is not correct, and this is my problem. We used triggers before CDC, and we did not face this problem with triggers; everything was fine with triggers other than performance. The way CDC is handling the bulk update is a big problem for me, because based on the output of the CDC system tables we are doing migration work to a legacy system.
It will be impossible for me to go and change my migration logic scripts because we have hundreds of procedures in them. Is it a known problem with CDC? Is there any solution in CDC so that when a bulk update happens on a table, the CDC system tables record it as updates? I don't think CDC 'net changes' helps in this situation, because the net change would show as a single inserted row. If this can't be done with CDC then I have to completely abandon CDC and go back to triggers.
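For what it's worth, CDC reads the transaction log, so it records what the engine actually wrote: if the bulk statement was internally executed as a delete plus an insert (which can happen, for example, when key columns are updated), that is what lands in the change table. A quick way to see how operations were recorded; the capture instance name dbo_MyTable below is a placeholder for yours:

DECLARE @from_lsn BINARY(10), @to_lsn BINARY(10)
SELECT @from_lsn = sys.fn_cdc_get_min_lsn('dbo_MyTable'),
       @to_lsn   = sys.fn_cdc_get_max_lsn()

SELECT __$operation,        -- 1 = delete, 2 = insert, 3 = update (before), 4 = update (after)
       COUNT(*) AS rows_logged
FROM cdc.fn_cdc_get_all_changes_dbo_MyTable(@from_lsn, @to_lsn, N'all update old')
GROUP BY __$operation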
I have a large table with 100 Million records that has around 1 million duplicate records that need to be deleted.
I am running a script that creates a staging table called DuplicateTable that collects all the duplicates, and then I want to write an efficient delete statement.
Is it possible to write something like:
delete from OrigTable O join DuplicateTable D on O.Key = D.key
Or do I have to run a loop on the DuplicateTable and run a delete statement record by record?
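The joined DELETE is valid T-SQL; the trick is that the target table is named by its alias right after DELETE, so no row-by-row loop is needed (Key is bracketed below since it reads like a reserved word):

DELETE O
FROM OrigTable AS O
INNER JOIN DuplicateTable AS D
        ON O.[Key] = D.[Key]

With around 1 million rows to remove, you may still want to batch it (DELETE TOP (50000) O FROM ... in a loop, checking @@ROWCOUNT) to keep the transaction log in check.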
I have an SSIS package doing a bulk insert from a file. Then later on I'm trying to delete that file (in a file delete task), but I'm getting an error: [File System Task] Error: An error occurred with the following error message: "The process cannot access the file 'xyz' because it is being used by another process.". I'm wondering if there isn't some way to 'tweak' the bulk insert syntax so that it doesn't lock the file?
I have to update a field within a table of 60 records or so. Each record has a different field value, of type varchar. I was given an Excel file with the field values and was thinking of a bulk update, like BULK INSERT, but I don't recall that it's possible that way.
Is the only way to create a table, bulk insert, then merge the two tables together with UPDATE?
Just wanted to see if there was an easier way to do it; otherwise I'll take the latter route. Thanks!
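The staging-table route is the usual answer, and it stays set-based. A sketch, assuming the Excel sheet is saved as a delimited text file first; the path, column names, and delimiters below are placeholders:

CREATE TABLE #Staging (RecordID INT PRIMARY KEY, NewValue VARCHAR(100))

BULK INSERT #Staging
FROM 'C:\import\values.csv'                 -- hypothetical path to the exported sheet
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)

UPDATE t
   SET t.TheField = s.NewValue
FROM dbo.TheTable AS t
INNER JOIN #Staging AS s ON s.RecordID = t.RecordID

That said, for only 60 rows, a generated script of 60 UPDATE statements concatenated in Excel is also perfectly workable.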
I'm trying to use BULK INSERT for the first time and getting the following error. I think it might have something to do with my format file, and from the error message there's a conversion error for the first column. In my database the field is nvarchar(6), so my best guess is to use SQLNCHAR for the first column. I've checked that the end of each line is CR LF, therefore the row terminator is correct for line 7, right?
Msg 4863, Level 16, State 1, Line 1
Bulk load data conversion error (truncation) for row 1, column 1 (ASXCode).
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
BULK INSERT tbl_ASX_Data_temp
FROM 'M:\Data\ASX\ImportTest.txt'
WITH (FORMATFILE = 'M:\Data\ASX\SQLFormatImport.Fmt')
Before implementing memory-based bulk copy insert with the IRowsetFastLoad interface of the SQL Server 2005 OLE DB provider, I want to understand a few considerations:

- performance: how does it compare with T-SQL's "BULK INSERT ..." and the bcp utility?
- SQL Server resource usage: how does a memory-based bulk copy affect server resources while it runs?
- server-side behavior: when the server is busy, does the delayed update mean IRowsetFastLoad::Commit(true) can insert right away?
- row count: is there a limit on how many rows can be inserted with IRowsetFastLoad::InsertRow() before IRowsetFastLoad::Commit?
I'm just learning SSIS and I've hit my first bump. I am doing a bulk import from a tab-delimited text file into an empty SQL table that has an identity column defined. How do I tell the Bulk Insert Task to skip that column when inserting from the text file? If I remove the identity column it imports the data fine, but I want to keep the identity column in the table too.
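If the Bulk Insert Task won't cooperate, one workaround (offered as an assumption to adapt, with paths and names as placeholders) is an Execute SQL Task that uses OPENROWSET(BULK ...), which lets you name only the non-identity columns so the identity value gets generated on insert:

INSERT INTO dbo.TargetTable (Col1, Col2, Col3)   -- identity column deliberately omitted
SELECT Col1, Col2, Col3
FROM OPENROWSET(BULK 'C:\import\data.txt',
                FORMATFILE = 'C:\import\data.fmt') AS src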
I have some simple maintenance plans, but they are failing because the delete history task is looking for files in a non-existent directory.
It is looking for files in C:\Program Files\Microsoft SQL Server\MSSQL10_50.INSTANCE\MSSQL\Log whereas it should be looking in C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log
How can I get this corrected so that the Maintenance Plans run correctly?
I have tried deleting and recreating the plan, but to no avail.
I have been using Master Data Services for a couple of months now. I can load, update, merge and soft-delete data in MDS. Occasionally we even have to hard-delete data from MDS. If we keep on soft-deleting records in an MDS table, eventually there will be a huge number of soft-deleted records. Is there an easy way to hard-delete all the soft-deleted records from all MDS tables in a specific model?
Background: I am working on completing an ORM that not only handles CRUD actions but can also update the structure of a table transparently when the class defs change. The reason for this is that I can't get SQL scripts that would work for updating software on SqlServer to be portable to other DBMS systems. Doing it in code, rather than a SQL batch, has a chance of making cross-platform, updateable software...
Anyway, because it needs to be cross-DBMS capable, the constraint is that the system must work for the lowest common denominator... i.e., a 'recipe' of steps that will work on all DBMSs.
The Problem: There might be simpler ways to do this with SqlServer (all ears :-) just in case I can't make it cross-platform right now), but with simplistic DBMSs (SQLite, etc.) there is no way to ALTER a table once formed: one has to COPY the table to a new TMP name, adding a column in the process, then delete the original, then rename the TMP to the original name.
This appears possible in SqlServer too... as long as there are no CASCADE operations. TRUNCATE TABLE doesn't seem to be the solution, nor DROP, as they both seem to trigger a cascade delete in the foreign table.
So -- please correct me if I am wrong here -- it appears that the operations would be along the lines of:
a) Remove the foreign key references
b) Copy the table structure, and make a new temp table, adding the column
c) Copy the data over
d) Add the FK relations that used to be on the first table to the new table
e) Delete the original
f) Done?
The questions are:
a) How does one alter a table to REMOVE the foreign key references if the constraint has no 'name'?
b) Does anyone know of a good clean way to get and save these constraints so they can be reapplied to the new table, hopefully with some cross-platform ADO.NET solution? GetSchema etc. appears to me to be very DBMS-dependent.
c) ANY and all tips on things I might run into later that I have not mentioned are also greatly appreciated.
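On question (a): in SQL Server every foreign key has a name even if you never gave it one (the engine generates something like FK__Child__ParentID__3B75D760), and you can read it back from the catalog views before dropping it. A sketch, where dbo.MyTable is a placeholder for the referenced table being rebuilt:

-- list the FK constraints that point at the table you want to rebuild
SELECT fk.name                                                       AS constraint_name,
       OBJECT_NAME(fk.parent_object_id)                              AS referencing_table,
       COL_NAME(fkc.parent_object_id, fkc.parent_column_id)          AS referencing_column,
       COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id)  AS referenced_column
FROM sys.foreign_keys AS fk
JOIN sys.foreign_key_columns AS fkc ON fkc.constraint_object_id = fk.object_id
WHERE fk.referenced_object_id = OBJECT_ID('dbo.MyTable')

-- then, per row: ALTER TABLE <referencing_table> DROP CONSTRAINT <constraint_name>

On (b), saving those rows before the rebuild gives you everything needed to re-issue ALTER TABLE ... ADD CONSTRAINT ... FOREIGN KEY afterwards; this query is SQL Server specific, though, so the cross-DBMS layer would need an equivalent per engine (INFORMATION_SCHEMA covers some of it).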
I am having great difficulty with cascading deletes, delete triggers and referential integrity.
The database is in First Normal Form.
I have some tables that are child tables with two foreign keys to two different parent tables, for example:
        Table A
       /       \
  Table B     Table C
       \       /
        Table D
So if I try to turn on cascading deletes for A/B, A/C, B/D and C/D relationships, I get an error that I cannot have cascading delete because it would create multiple cascade paths. I do understand why this is happening. If I delete a row in Table A, I want it to delete child rows in Table B and table C, and then child rows in table D as well. But if I delete a row in Table C, I want it to delete child rows in Table D, and if I delete a row in Table B, I want it to also delete child rows in Table D.
SQL sees this as cyclical, because if I delete a row in table A, both table B and table C would try to delete their child rows in table D.
Ok, so I thought, no biggie, I'll just use delete triggers. So I created delete triggers that will delete child rows in table B and table C when deleting a row in table A. Then I created triggers in both Table B and Table C that would delete child rows in Table D.
When I try to delete a row in table A, B or C, I get the error "DELETE statement conflicted with COLUMN REFERENCE". This does not make sense to me; can anyone explain? I have a trigger in place that should be deleting the child rows before it attempts to delete the parent row... isn't that the whole point of delete triggers?
This is an example of my delete trigger:
CREATE TRIGGER [DeleteA] ON A
FOR DELETE
AS
DELETE FROM B WHERE MeetingID IN (SELECT ID FROM deleted);
DELETE FROM C WHERE MeetingID IN (SELECT ID FROM deleted);
And then Table B and C both have delete triggers to delete child rows in table D. But it never gets to that point, none of the triggers execute because the above error happens first.
So if I then go into the relationships, and deselect the option for "Enforce relationship for INSERTs and UPDATEs" these triggers all work just fine. Only problem is that now I have no referential integrity and I can simply create unrestrained child rows that do not reference actual foreign keys in the parent table.
So the question is, how do I maintain referential integrity and also have the database delete child rows, keeping in mind that the cascading deletes will not work because of the multiple cascade paths (which are certainly required).
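One explanation that fits the symptoms: FOR (AFTER) triggers only fire after the foreign key checks have already run, so the constraint rejects the delete before the trigger ever gets a chance. INSTEAD OF DELETE triggers replace the delete and fire first, so they can clear the children and then delete the parent themselves. A sketch for table A, assuming its key column is ID as in the trigger above (a delete issued against A inside its own INSTEAD OF trigger does not re-fire that trigger):

CREATE TRIGGER [DeleteA] ON A
INSTEAD OF DELETE
AS
BEGIN
    SET NOCOUNT ON;
    DELETE FROM B WHERE MeetingID IN (SELECT ID FROM deleted);  -- children first
    DELETE FROM C WHERE MeetingID IN (SELECT ID FROM deleted);
    DELETE FROM A WHERE ID IN (SELECT ID FROM deleted);         -- then the parent rows themselves
END

B and C would carry the same style of trigger to clear their rows in D first, and the FK enforcement can stay switched on.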
I'm trying to clean up a database design and I'm in a situation where two tables need a FK, but since it didn't exist before there are orphaned records.
Tables are:
Brokers and it's PK is BID
The 2nd table is Broker_Rates, which also has a BID column.
I'm trying to figure out a T-SQL statement that will parse through all the records in the Broker_Rates table and delete a record if there isn't a matching BID record in the Brokers table.
I know this isn't correct syntax, but it should hopefully clear up what I'm asking.
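Something along these lines should do it in one pass; a hedged sketch using the table and column names above (worth previewing the count, or wrapping in a transaction, before pulling the trigger):

-- preview the orphans
SELECT COUNT(*)
FROM Broker_Rates AS br
LEFT JOIN Brokers AS b ON b.BID = br.BID
WHERE b.BID IS NULL

-- then delete them
DELETE br
FROM Broker_Rates AS br
LEFT JOIN Brokers AS b ON b.BID = br.BID
WHERE b.BID IS NULL

Once the orphans are gone, the FK can be added normally.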
This is my Delete Query No. 1:

ALTER TABLE ZT_Master DISABLE TRIGGER ALL

DELETE ZT_Master
WHERE TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE()) - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
  AND TDateTime < DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)

ALTER TABLE ZT_Master ENABLE TRIGGER ALL
I have trouble with Delete Query No. 2. Here is a SELECT statement; I need to delete the rows it returns:

SELECT d.*
FROM ZT_Master m, ZT_Detail d
WHERE m.Prikey = d.MasterKey
  AND m.TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE()) - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
  AND m.TDateTime < DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)

I tried modifying it as below:

DELETE d.*
FROM ZT_Master m, ZT_Detail d
WHERE m.Prikey = d.MasterKey
  AND m.TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE()) - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
  AND m.TDateTime < DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)

but this doesn't work.
Can you please help? And can I combine these two SQL queries into one query? Thank you.
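The second delete works once the target is named by its alias instead of d.* (the d.* form comes from MySQL's multi-table delete; it is not valid T-SQL):

DELETE d
FROM ZT_Master m
INNER JOIN ZT_Detail d ON m.Prikey = d.MasterKey
WHERE m.TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE()) - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
  AND m.TDateTime < DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)

As for combining them: T-SQL cannot delete from two tables in a single statement, but running this detail delete first and Query No. 1 second, inside the same batch or transaction, gets the same effect.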
I'm using SqlDataSource and an Access database. Let's say I have two tables:

user: userID, username
message: userID, messagetext

Let's say a user can register on my website and leave several messages there. I have an admin page where I can select a user and delete all of his messages just by clicking one button. What would be the best (and easiest) way to make this? Here's my suggestion: I have made a "delete query" (with userID as a parameter) in MS Access. It deletes all messages of a user when I type in the userID and click OK. Would it be possible to do this on my ASP.net page? If yes, what would the script look like? (Yes, it is a newbie question.)
I ran the following query in Query Analyzer on a machine running SQL Server 2000. I'm attempting to delete from a linked server running SQL Server 2005:
DELETE FROM sql2005.production.dbo.products WHERE vendor='Foo' AND productId NOT IN ( SELECT productId FROM sql2000.staging.dbo.fooProductList )
The status message (and @@ROWCOUNT) told me 8 rows were affected, but nothing was actually deleted; when I ran a SELECT with the same criteria as the DELETE, all 8 rows are still there. So, once more I tried the DELETE command. This time it told me 7 rows were affected; when I ran the SELECT again, 5 of the rows were still there. Finally, after running this exact same DELETE query 5 times, I was able to remove all 8 rows. Each time it would tell me that a different number of rows had been deleted, and in no case was that number accurate.
I've never seen anything like this before. Neither of the tables involved were undergoing any other changes. There's no replication going on, or anything else that should introduce any delays. And I run queries like this all day, involving every thinkable combination of 2000 and 2005 servers, that don't give me any trouble.
Does anyone have suggestions on what might cause this sort of behavior?
I have a problem with one report on my server. A user has requested that I exclude him from receiving a timed email subscription to several reports. I was able to amend all the subscriptions except one. When I try to remove his email address from the subscription I receive this error:
An internal error occurred on the report server. See the error log for more details. (rsInternalError) Get Online Help
For more information about this error navigate to the report server on the local server machine, or enable remote errors
Online no help couldn't offer any advice at all, so I thought I'd just delete the subscription and recreate it again, but I receive the same message. "Okay, no problem, I'll just delete the report and redeploy it and set up the subscription so all the other users aren't affected", says I. "Oh, no!", says the report server, and then it give me this message:
System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Data.SqlClient.SqlException: Only members of sysadmin role are allowed to update or delete jobs owned by a different login.
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
   at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
   at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
   at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
   at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
   at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
   at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
   at Microsoft.ReportingServices.Library.InstrumentedSqlCommand.ExecuteNonQuery()
   at Microsoft.ReportingServices.Library.DBInterface.DeleteObject(String objectName)
   at Microsoft.ReportingServices.Library.RSService._DeleteItem(String item)
   at Microsoft.ReportingServices.Library.RSService.ExecuteBatch(Guid batchId)
   at Microsoft.ReportingServices.WebServer.ReportingService2005.ExecuteBatch()
   --- End of inner exception stack trace ---
What's even weirder is that I'm the owner and creator of the report and I'm a system admin and content manager on the report server and I set up the subscription when the report was initially deployed. Surely I should have sufficient rights to fart around with this subscription/report as I see fit?
I have rebooted the server, redeployed the report, checked credentials on the data source and tried amending and deleting from both the report manager and management studio but still I am prevented from doing so.
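The inner SqlException is the real clue: each subscription schedule is implemented as a SQL Agent job (named with the schedule's GUID), and the server is refusing to touch a job owned by a different login than the one making the call. One commonly suggested workaround, offered here as an assumption to verify rather than a certainty, is to repoint the job's owner so Report Server can manage it again; the job name below is a hypothetical placeholder for the schedule GUID:

EXEC msdb.dbo.sp_update_job
     @job_name = N'00000000-0000-0000-0000-000000000000',  -- hypothetical: the Agent job created for the subscription schedule
     @owner_login_name = N'sa'                             -- or the login Report Server expects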
Hi~, I have 3 questions about memory-based bulk copy.

1. What is the limit on the number of IRowsetFastLoad::InsertRow() calls before IRowsetFastLoad::Commit(true)? For example, how many rows can be inserted in the sample below (i.e., the maximum value of nCount)?

for (i = 0; i < nCount; i++)
{
    pIFastLoad->InsertRow(hAccessor, (void*)(&BulkData));
}

2. In the above code sample, isn't there a method for inserting a prepared array all at once (a BulkData array, rather than a for loop)?

3. In OLE DB memory-based bulk copy, what is the equivalent of the T-SQL bulk copy options below?

BULK INSERT database_name.schema_name.table_name
FROM 'data_file'
WITH (ROWS_PER_BATCH = rows_per_batch, TABLOCK);
My solution is like this. Is it correct?

// CoCreateInstance(...);
// Data source
// Create session
This one is weird, and I am missing something fundamental. A developer was getting a timeout with this...
CREATE PROCEDURE p_CM_DeleteBatch (@SubmitterTranID VARCHAR(50))
AS
DECLARE @COUNT INT, @COMMIT INT

SET @COUNT = 0
SET @COMMIT = 1 -- DO NOT CHANGE THIS. The operation will be committed only when this value is 1

SELECT @COUNT = COUNT(*)
FROM claimsreceived
WHERE (claimstatus NOT IN ('Keyed', 'Imported'))
  AND SubmitterTranID = @SubmitterTranID

IF (@COUNT = 0) -- Claims under this batch have not been adjudicated, so it is safe to delete
BEGIN
    BEGIN TRANSACTION

    DELETE FROM INVOICECLAIMMAPPING
    WHERE CLMRECDID IN (SELECT DISTINCT CLMRECDID FROM CLAIMSRECEIVED
                        WHERE SUBMITTERTRANID = @SUBMITTERTRANID)
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsPayment WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsPaymentServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsreceivedPayorServices
    WHERE ClmRecdPyID IN (SELECT ClmRecdPyID FROM ClaimsReceivedPayors
                          WHERE SubmitterTranID = @SubmitterTranID)
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceivedPayors
    WHERE ClmRecdid IN (SELECT ClmRecdID FROM ClaimsReceived
                        WHERE SubmitterTranID = @SubmitterTranID)
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceivedServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceived WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM BATCHLOGCLAIMS WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    IF (@COMMIT = 1)
    BEGIN
        --ROLLBACK TRANSACTION -- For testing purposes ONLY
        COMMIT TRANSACTION
        RETURN (0)
    END
    ELSE
    BEGIN
        ROLLBACK TRANSACTION
        RETURN (-1)
    END
END
ELSE
BEGIN
    RAISERROR ('This Batch cannot be deleted. It has claim(s) which has been Adjudicated', 16, 1)
END
GO
I applied a couple of indices and got rid of the uncorrelated subqueries:
CREATE PROCEDURE p_CM_DeleteBatch (@SubmitterTranID VARCHAR(50))
AS
DECLARE @COUNT INT, @COMMIT INT

SET @COUNT = 0
SET @COMMIT = 1 -- DO NOT CHANGE THIS. The operation will be committed only when this value is 1

SELECT @COUNT = COUNT(*)
FROM claimsreceived
WHERE (claimstatus NOT IN ('Keyed', 'Imported'))
  AND SubmitterTranID = @SubmitterTranID

IF (@COUNT = 0) -- Claims under this batch have not been adjudicated, so it is safe to delete
BEGIN
    BEGIN TRANSACTION

    DELETE INVOICECLAIMMAPPING
    FROM INVOICECLAIMMAPPING
    JOIN CLAIMSRECEIVED ON INVOICECLAIMMAPPING.CLMRECDID = CLAIMSRECEIVED.CLMRECDID
    WHERE CLAIMSRECEIVED.SUBMITTERTRANID = @SUBMITTERTRANID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsPayment WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsPaymentServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE ClaimsreceivedPayorServices
    FROM ClaimsreceivedPayorServices
    JOIN ClaimsReceivedPayors ON ClaimsreceivedPayorServices.ClmRecdPyID = ClaimsReceivedPayors.ClmRecdPyID
    WHERE ClaimsReceivedPayors.SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE ClaimsReceivedPayors
    FROM ClaimsReceivedPayors
    JOIN ClaimsReceived ON ClaimsReceivedPayors.ClmRecdid = ClaimsReceived.ClmRecdid
    WHERE ClaimsReceived.SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceivedServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceived WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM BATCHLOGCLAIMS WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    IF (@COMMIT = 1)
    BEGIN
        --ROLLBACK TRANSACTION -- For testing purposes ONLY
        COMMIT TRANSACTION
        RETURN (0)
    END
    ELSE
    BEGIN
        ROLLBACK TRANSACTION
        RETURN (-1)
    END
END
ELSE
BEGIN
    RAISERROR ('This Batch cannot be deleted. It has claim(s) which has been Adjudicated', 16, 1)
END
GO

SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
Suddenly this constraint was being violated with the change
Is the delete on ClaimsReceivedPayors starting before the delete on ClaimsreceivedPayorServices finishes? If so, why would it matter between the join and the subquery versions? This one is making me depressed because I cannot explain it.
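Not an answer to the ordering question, but one structural note: on SQL Server 2005 and later, the per-statement @@ERROR checks can be replaced with TRY/CATCH plus XACT_ABORT, which removes any chance of a missed check and makes the commit/rollback path explicit. A sketch of the skeleton only:

CREATE PROCEDURE p_CM_DeleteBatch (@SubmitterTranID VARCHAR(50))
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;  -- any runtime error dooms and rolls back the transaction
    BEGIN TRY
        BEGIN TRANSACTION;
        -- ... the eight DELETE statements from above, unchanged ...
        COMMIT TRANSACTION;
        RETURN (0);
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        RETURN (-1);
    END CATCH
END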
Need some advice solving a little problem I have with my database!
Current setup:
I have a Person table that is made up of 39 columns. I also allow Person records to be deleted, but I do this by having another table I call LogicallyDeletedRecords. This table is made up of the PersonId, the reason for deletion/suppression, and a datetime stamp. To access live records I created a view based on my Person table which contains a WHERE clause to exclude records that exist in LogicallyDeletedRecords. Similarly, I have another view, DeadPersonData, which contains the Person records that have been removed. Hope it all makes sense so far! Now on to my worries!
The problem:
My Person table contains 9+ million records. The LogicallyDeletedRecords table has 500k+, but I anticipate further growth over the coming weeks/months. My worry is that my LivePersonData view will become too slow to access as the LogicallyDeletedRecords table grows. What's more, as part of my load routine, I have to make sure that Person data loaded onto the system is excluded if that same person exists as a deleted member. Both of these actions could slow down my system as the deleted table grows.
My thoughts:
I've been thinking of physically deleting dead Person records from my Person table (possibly creating an archive table to hold them). But then, if I delete them, how do I cross-check the details when new Person details get loaded? As I said, my current LogicallyDeletedRecords table holds the PersonId, ReasonDeleted and CreationStamp. Is the only way to add further columns which I can use to match Person details?
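Before resorting to physical deletes, it may be enough to make the anti-join cheap: a unique index on the lookup column plus NOT EXISTS in both the view and the load routine usually scales well, since each probe is a single index seek even at 9 million rows. A sketch using the names from the post (StagingPerson is a hypothetical staging table for the load):

CREATE UNIQUE INDEX IX_LogicallyDeletedRecords_PersonId
    ON dbo.LogicallyDeletedRecords (PersonId)
GO

CREATE VIEW dbo.LivePersonData
AS
SELECT p.*
FROM dbo.Person AS p
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.LogicallyDeletedRecords AS d
                  WHERE d.PersonId = p.PersonId)
GO

-- load-time filter, same shape: only people never logically deleted get loaded
SELECT s.*
FROM dbo.StagingPerson AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.LogicallyDeletedRecords AS d
                  WHERE d.PersonId = s.PersonId)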
There are two tables involved in replication, say table1 and its replicated copy rep.table1.
We are not deleting records physically in table1; a bit column in table1 is simply set to true when you want to delete a record. But the strange thing is that the replication agent reports this as a hard delete operation on table1, so it downloads and applies a hard delete and removes the record in the replicated table, which is very crucial.
Please let me know where I am wrong and how to put it right.
There are no triggers on the published tables, and no other trigger has been created on the published table.
I receive the following error message when I try to use the Bulk Insert Task to load BCP data into a table:
Error: 0xC002F304 at Bulk Insert Task, Bulk Insert Task: An error occurred with the following error message: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)". The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error. The bulk load failed. The column is too long in the data file for row 1, column 4. Verify that the field terminator and row terminator are specified correctly. Bulk load data conversion error (overflow) for row 1, column 1 (rowno).".
Task failed: Bulk Insert Task
In SSMS I am able to issue the following command and the data loads into the TableName table with no error messages:

BULK INSERT TableName
FROM 'C:\Data\Db\TableName.bcp'
WITH (DATAFILETYPE = 'widenative');
What configuration is required for the Bulk Insert Task in SSIS to make the data load? BTW, the TableName.bcp file is a bulk copy file in bcp widenative format. The properties of the Bulk Insert Task are the following: DataFileType: DTSBulkInsert_DataFileType_WideNative, RowTerminator: {CR}{LF}.
Any help getting the bcp file to load would be appreciated. Let me know if you require any other information; thanks for all your help. Paul