Here's what I'm trying to do (Maybe there's a better way)
I'm using transactional replication and a pull subscription to get all the transactions from the ORDERS table within the last year from Server_A to Server_B. I filter the rows with "WHERE (ORDR_DATE > GETDATE() - 365)".
That works fine, BUT the old rows in the subscriber's ORDERS table do not get deleted, so I have all the records for the last year plus the older records that I don't want any more. Do I need another step somewhere that does a "DELETE WHERE (ORDR_DATE < GETDATE() - 365)"? But then I have to maintain my date logic in more than one place if I ever want to change it.
Any suggestions, thoughts, improvements will be appreciated .... Thanks a lot
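A transactional filter only controls which changes get published; rows that age out of the filter are never deleted at the subscriber, so a scheduled purge job there is the usual companion. A sketch of keeping the cutoff in one place (the function name is made up; note SQL 2000 does not allow GETDATE() inside a UDF, so on that version the expression simply has to live in both the filter and the job):

-- One place for the cutoff logic (hypothetical name; SQL 2005+):
CREATE FUNCTION dbo.OrderCutoff() RETURNS datetime
AS
BEGIN
    RETURN DATEADD(day, -365, GETDATE())
END
GO

-- The publication row filter then becomes: WHERE ORDR_DATE > dbo.OrderCutoff()

-- Scheduled job on the subscriber purges rows that have aged out:
DELETE FROM ORDERS WHERE ORDR_DATE < dbo.OrderCutoff()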
Hello. When snapshot replication on a SQL Server 7 SP2 (NT4) database is stopped, I lose the keys on the subscriber. How can I avoid losing them? Thank you in advance. Pascal
I am trying to disable transactional replication, but I'm having some problems. I used the wizard; however, it has taken 7 hours so far and is still not done. SQL Server (7.0) shows the connection as runnable, but it seems as if nothing is being removed. Am I missing something? Should I have done something else before running the wizard? I can't even kill the SPID.
I am sure this has been asked multiple times before, but I have a DB (SQL 2K) that is involved in transactional replication. The log periodically grows to a seriously big size. How can I shrink it without damaging replication? I opened SSMS (SQL 2005 is the subscriber) and tried to shrink the file, but to no avail. It reckons I could shrink it to 0MB, but I suspect that would break replication.
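A common sequence here (a sketch; the database and logical file names are placeholders): back up the log, which lets SQL Server truncate the inactive portion, and then shrink the file. Log records the log reader has not yet delivered to the distribution database are retained regardless, so a routine log backup and shrink cannot strand replicated transactions.

BACKUP LOG MyPublishedDB TO DISK = 'D:\Backups\MyPublishedDB_log.bak'
GO
-- Logical file name and target size (MB) are placeholders:
DBCC SHRINKFILE (MyPublishedDB_log, 500)
GO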
I have 2 SQL 2000 servers (both with SP4) running on Win2003 SP1. We will call them SQL1 and SQL2.
SQL1 is the publisher and distributor for transactional replication; SQL2 is the subscriber with immediate updating, and queued updating as failover.
I configured the publisher and subscriber. The snapshot replicates fine to the subscriber and all the agents are working fine. There is only 1 table article configured for replication.
Let's say I am trying to update a single row.
I can make as many updates to this row on the publisher as I want, and they all replicate fine to the subscriber. (Note: an update to a publisher row causes a new GUID to be generated in the "msrepl_tran_version" column.) The updated data and the new GUID are successfully replicated to the subscriber, and I can keep making updates to this row on the publisher without any problem.
Now I make an update to this row at the subscriber (via Enterprise Manager): I update column 1 with a new value, but a new GUID is NOT created on the subscriber. The updated column successfully replicates back to the publisher on the first attempt. This update causes the publisher to create a new GUID for the row, but the new GUID does not replicate back to the subscriber. At this point the publisher and subscriber do not have matching GUID values.
Further updates to this row on the subscriber fail with the error "...rows do not match between publisher and subscriber...", while further updates on the publisher produce no error message but simply never arrive at the subscriber.
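A quick way to confirm the mismatch described above (a sketch; the table and key names are placeholders for the replicated article) is to compare the row version on both servers:

-- Run on both the publisher and the subscriber, then compare the GUIDs:
SELECT msrepl_tran_version
FROM dbo.MyArticle
WHERE KeyCol = 1   -- the row being tested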
This is from one of SQL Server 2005's users. I put a filter on one of my merge replication articles, but it doesn't work correctly. Please help me set up my article correctly.
I'm learning replication (snapshot for now) and was trying filtering & got strange results.
SQL 2K sp2
Publisher: Table1
FieldA FieldB FieldC
Subscriber: Table1
FieldA FieldB FieldC
But now I decide, after the fact, to filter out FieldC. The subscriber table still has 3 fields, but the publisher data from FieldA + FieldB gets shifted across all 3 fields at the subscriber.
If I manually drop the table at the subscriber, then next time the job runs, it recreates the table with just 2 fields and looks good.
Is there some way to set replication so that it will drop & re-create the subscriber table automatically if the filter changes ? Or am I missing something else in my understanding ?
Under "Default Table Article Properties - Snapshot" the option "DROP the existing table & recreate it" is checked. When does that apply ?? Just when first set up ?
I understand that it is possible to set filters dynamically using the functions 1) SUSER_SNAME() and 2) HOST_NAME(). SUSER_SNAME() returns the login credentials used in the subscription. HOST_NAME() returns the host machine name and can be overridden with business information.
My application should work as below: 1) The user enters login credentials. 2) Some information such as the user name is passed to the server, and if the user name is valid, the rows related to this particular user get downloaded to the device.
A new user is added directly in the Users table in master database.
My questions are: 1) If I have 3000 users, should I create 3000 subscriptions with 3000 HOST_NAME or login credential values to differentiate between users? 2) If yes, other than using the wizard, are there any scripts available to create a large number of subscriptions? 3) If I add subscriptions programmatically, should I re-initialise the subscription for each new user that is assigned a different host name value?
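Subscriptions can be scripted rather than created one at a time in the wizard. A sketch for a merge pull scenario (all names are placeholders, and exact parameters vary by version): one publication with a dynamic filter such as WHERE UserName = HOST_NAME() serves all users, and each subscription supplies its own HOST_NAME override, so adding a user means adding and initializing one subscription rather than touching the others.

-- On the publisher, register a pull subscription for one user/device:
EXEC sp_addmergesubscription
    @publication       = 'MyPub',
    @subscriber        = 'Device3000',
    @subscriber_db     = 'MyLocalDB',
    @subscription_type = 'pull'

-- The Merge Agent for that subscription is then run with a HostName
-- override (its -Hostname parameter), e.g. -Hostname 'User123',
-- so HOST_NAME() evaluates to 'User123' inside the filter.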
I have a complex join filter on a replicated SQL Server database which was working fine in previous versions of SQL Compact. The query is something like the following:

SELECT <published columns>
FROM <filtered table>
INNER JOIN <child table>
    ON <child table>.ID = <filtered table>.ID
    AND <child table>.date > GETDATE() - 30

After I upgraded to Compact database 3.5, for some weird reason, whichever tables have both this join filter and an article filter together behave improperly. If I insert a row into any of these tables, the row is replicated properly to the server, but the server does not send the new row to any other users. Again, this works fine in the older version. I switched back to the old version of SQL CE and it started working again.
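For reference, a join filter of that shape is registered roughly like this (a sketch; the publication and article names are placeholders):

EXEC sp_addmergefilter
    @publication       = 'MyPub',
    @article           = 'ChildTable',
    @filtername        = 'ChildToFiltered',
    @join_articlename  = 'FilteredTable',
    @join_filterclause = 'ChildTable.ID = FilteredTable.ID AND ChildTable.date > GETDATE() - 30',
    @join_unique_key   = 0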
I am having problems with my SQL merge replication. Whenever a user syncs to my main database, most of their records are deleted instead of being merged. Or the records on the main database are inserted, and the whole table is replaced with the records from the remote laptops. Is there a way to prevent this from happening? Someone please help me.
I manually deleted the database's .LDF file after stopping SQL Server. When I start SQL Server, the merge replication which was configured for the database does not work. How do I fix this problem?
I am running a simple merge replication in SQL Server 2000. I have one database that is the publisher, and a second database that is the subscriber. When I add a new row to the subscriber it will replicate to the publisher as expected. However, the new row at the subscriber will then be deleted without explanation. The row will remain at the publisher though.
Hi, I have to delete the master table data without deleting the child table records. Is there any solution for this? The parent table has a relationship with the child table. Regards, vinod.t.v
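One approach, sketched with placeholder names and only sensible if orphaned child rows are acceptable: disable the foreign key for the delete, or recreate it with ON DELETE SET NULL (which needs SQL 2005+ and a nullable column).

-- Option 1: temporarily disable the constraint (leaves orphans behind):
ALTER TABLE dbo.ChildTable NOCHECK CONSTRAINT FK_Child_Parent
DELETE FROM dbo.ParentTable WHERE <your condition>
ALTER TABLE dbo.ChildTable CHECK CONSTRAINT FK_Child_Parent

-- Option 2: recreate the FK so deleting a parent nulls the child reference:
ALTER TABLE dbo.ChildTable DROP CONSTRAINT FK_Child_Parent
ALTER TABLE dbo.ChildTable ADD CONSTRAINT FK_Child_Parent
    FOREIGN KEY (ParentID) REFERENCES dbo.ParentTable (ParentID)
    ON DELETE SET NULL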
/****** Object: StoredProcedure [dbo].[ServiceLogPurge] Script Date: 07/18/2014 14:30:59 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROC [dbo].[ServiceLogPurge]
-- Purge records in dbo.ServiceLog older than 3 months.
-- Purge records in small portions to avoid locking production tables
-- for a long time. The process takes longer, but can co-exist with
-- normal usage of the tables.
[Code] ...
*** Getting this error below when executing the code ***
Msg 102, Level 15, State 1, Procedure ServiceLogPurge, Line 45 Incorrect syntax near 'Failed:'.
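Since the procedure body is elided, the exact line can't be checked, but Msg 102 near 'Failed:' usually means an unquoted string literal, e.g. in a RAISERROR or PRINT such as PRINT('Purge Failed: ...'). For reference, a minimal batched-purge shape matching the header comments (a sketch; the date column name is an assumption, and DELETE TOP needs SQL 2005+):

ALTER PROC [dbo].[ServiceLogPurge]
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @BatchSize int
    SET @BatchSize = 5000          -- rows per batch; tune to taste

    WHILE 1 = 1
    BEGIN
        DELETE TOP (@BatchSize)
        FROM dbo.ServiceLog
        WHERE LogDate < DATEADD(month, -3, GETDATE())   -- LogDate is assumed

        IF @@ROWCOUNT = 0 BREAK    -- nothing left older than 3 months
    END
END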
I want to have a log file with all the updates and inserts made to the DB. I understand I have to do a transaction log - how? And in order to make a query a transaction, do I only have to declare BEGIN TRAN ... COMMIT?
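Those are two separate things. An explicit transaction is just BEGIN TRAN ... COMMIT around the statements that must succeed or fail together; capturing every insert and update for your own log is usually done with triggers writing to an audit table. A sketch (all names are made up):

-- Explicit transaction: both updates commit or neither does.
BEGIN TRAN
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AcctID = 1
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AcctID = 2
COMMIT TRAN

-- Audit trigger: records every insert/update made to dbo.Accounts.
CREATE TRIGGER trgAccountsAudit ON dbo.Accounts
AFTER INSERT, UPDATE
AS
    INSERT INTO dbo.AuditLog (AcctID, ChangedAt)
    SELECT AcctID, GETDATE()
    FROM inserted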
I have set up a simple trans rep from a server in the office to the web. Both servers are NT4 SP6a / SQL 7.0. Tables only.
The publication and distribution db are on the server at the office, and there is a full time connection through the firewall.
The initialization and first replication work perfectly, but after that, the snapshot agent reports "no subscriptions needed initialization", the log reader says there are no replicated transactions, and the Distribution agent says there are no replicated transactions. What am I missing?
Does anyone know how to move the transaction log(s) of a LIVE database to a new location? I must move the log of a database to a new mirrored drive without any disruption to users. I cannot take the database offline or use the sp_detach_db stored procedure. Your input is much appreciated!
I have a 300MB db and a transaction log near 1.3GB. Upon notification, I backed up the db log with TRUNCATE_ONLY - no luck getting it smaller. Later I tried a backup with NO_LOG (assuming the OS drive was full - no difference).
I tried SHRINKFILE (logfile, TRUNCATEONLY) and no luck.
I ran DBCC OPENTRAN to check for pending transactions. The db looks fine with DBCC CHECKDB. I managed to free up a mere 50MB. I checked the permissions on the db and added backup database and backup log permissions for the logged-in user (also tried this as sa).
I am unable to free up the space to the OS. Can I somehow get rid of the log file and start off with a fresh one? I need this space. As a stopgap, I moved the log to a larger filesystem as a temporary fix.
Start/stop SQL - nothing. Reboot - nothing. I played the waiting game. This log does not want to release space. The log grew from data loads.
Question 1: Any suggestions on how to truncate this log? The contents are not really important, but the space is.
Question 2: Can I add another log file, use EMPTYFILE to transfer the contents to the newly added log file, then REMOVE the original log file? In theory, does this make sense?
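In principle, yes, that is the sequence; a sketch (untested, names are placeholders, and a file can only be removed once it holds no active log, so a log backup or two may be needed before the last step succeeds):

-- 1. Add a new log file on the bigger drive:
ALTER DATABASE MyDB
    ADD LOG FILE (NAME = MyDB_log2, FILENAME = 'E:\logs\MyDB_log2.ldf', SIZE = 500MB)

-- 2. Empty the original log file so new log records go elsewhere:
DBCC SHRINKFILE (MyDB_log, EMPTYFILE)

-- 3. Remove the original file once it is empty:
ALTER DATABASE MyDB REMOVE FILE MyDB_log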
We have a testing database we're using to convert large amounts of data from one system to another. We might process 5-6 million records, but we don't care about point-in-time recovery.
I set the recovery model to simple and do a full backup every night, but I keep getting large transaction logs. I manually run Shrink Database when I notice the logs are big.
What can I do to prevent the logs from getting big in the first place? Can I prevent logging from happening?
I keep reading various books and BOL, but I guess I don't quite "get it" yet ......
Any plain spoken, detailed suggestions would be very appreciated .... thanks in advance.
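Even in simple recovery, the log must hold everything for the duration of each open transaction, so one giant insert or update forces a giant log; logging itself cannot be turned off, though SELECT INTO and bulk loads are minimally logged in simple recovery. Breaking the conversion into separately committed batches keeps the reusable log small. A sketch (table and key names are assumptions):

-- Convert in committed batches so log space can be reused between them.
DECLARE @Rows int
SET @Rows = 1

WHILE @Rows > 0
BEGIN
    BEGIN TRAN

    -- Hypothetical conversion step, 10000 not-yet-copied rows at a time:
    INSERT INTO dbo.Target (Col1, Col2)
    SELECT TOP 10000 s.Col1, s.Col2
    FROM dbo.Source s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Target t WHERE t.Key1 = s.Key1)

    SET @Rows = @@ROWCOUNT
    COMMIT TRAN
END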
The error log file for the transaction log backups is...
Microsoft (R) SQLMaint Utility (Unicode), Version
Logged on to SQL Server 'ZCHQ_SQLPRODUCTION' as 'ZCISQLSERVICE' (trusted)
Starting maintenance plan 'zeon_live_commerce DB Maintenance Plan1' on 6/12/2002 8:15:03 AM
Backup can not be performed on database 'zeon_live_commerce'. This sub task is ignored.
Deleting old text reports... 1 file(s) deleted.
End of maintenance plan 'zeon_live_commerce DB Maintenance Plan1' on 6/12/2002 8:15:03 AM
SQLMAINT.EXE Process Exit Code: 1 (Failed)
If you are set up for autocommit, why would you or should you start an explicit transaction? I have noticed that in some stored procedures called from a "container" stored procedure (hope I got that right), the called stored procedure uses a BEGIN TRAN. Can anyone help with the whys and wherefores? It seems to me that you would want to let SQL Server handle this because of the danger of leaving out a COMMIT or ROLLBACK. But that's me; I may be very wrong. Thanks.
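Autocommit only makes each single statement atomic; an explicit transaction is how you make several statements stand or fall together. In nested procedures, a common pattern (a sketch) is to check @@TRANCOUNT so the inner procedure joins an existing transaction rather than opening a stray one:

CREATE PROC dbo.InnerProc
AS
BEGIN
    DECLARE @StartedTran bit
    SET @StartedTran = 0

    IF @@TRANCOUNT = 0
    BEGIN
        BEGIN TRAN              -- no outer transaction: start our own
        SET @StartedTran = 1
    END

    -- ... do the work here ...

    IF @StartedTran = 1
        COMMIT TRAN             -- only commit what this proc started
END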
Hi, I was just wondering: if I have a DB set up for log shipping and change the recovery model to bulk-logged before I do a DBCC DBREINDEX, does the log still grow as big as under the full recovery model? That is, with the DB in bulk-logged mode, does DBREINDEX still write fully to the log? What I'm trying to do is keep the log files smaller for log shipping while the reindexing job runs. Thanks.
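Index rebuilds are minimally logged under bulk-logged recovery, so the log file itself grows much less, but two caveats: the next log backup still includes the changed extents (so the backup stays large), and you lose point-in-time restore for that interval. The switch looks like this (a sketch; names are placeholders):

ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED
DBCC DBREINDEX ('dbo.MyBigTable')   -- rebuild while minimally logged
ALTER DATABASE MyDB SET RECOVERY FULL
BACKUP LOG MyDB TO DISK = 'E:\logs\MyDB_postreindex.trn'   -- resume log-shipping coverage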
I'm really scratching my head with this Transact-SQL. Say you wanted to cycle through a set of rows, then perform an operation on each row; in VB/DAO it might look like this:

Dim rsTables As Recordset, rsIndex As Recordset
Set rsTables = dbSource.OpenRecordset("SELECT * FROM INFORMATION_SCHEMA.tables")
Do While Not rsTables.EOF
    Set rsIndex = dbSource.OpenRecordset("Select * From SysIndexes Where Name = '" & rsTables!Table_Name & "'")
    DoSomethingToIndex rsIndex!Name
    rsTables.MoveNext
Loop
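The direct Transact-SQL equivalent is a cursor; a sketch, with a PRINT standing in for DoSomethingToIndex:

DECLARE @TableName sysname, @IndexName sysname

DECLARE table_cur CURSOR FOR
    SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES

OPEN table_cur
FETCH NEXT FROM table_cur INTO @TableName

WHILE @@FETCH_STATUS = 0            -- 0 = the fetch returned a row
BEGIN
    -- Inner loop mirrors the DAO sysindexes lookup:
    DECLARE index_cur CURSOR FOR
        SELECT name FROM sysindexes WHERE name = @TableName
    OPEN index_cur
    FETCH NEXT FROM index_cur INTO @IndexName
    WHILE @@FETCH_STATUS = 0
    BEGIN
        PRINT @IndexName            -- stand-in for DoSomethingToIndex
        FETCH NEXT FROM index_cur INTO @IndexName
    END
    CLOSE index_cur
    DEALLOCATE index_cur

    FETCH NEXT FROM table_cur INTO @TableName
END

CLOSE table_cur
DEALLOCATE table_cur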
Hi all, just wondering if the virtual log file size of the transaction log can be changed to suit us. Running dbcc loginfo(sagent_dev) displays the following:

FileId  FileSize  StartOffset  FSeqNo  Status  Parity
[DBCC LOGINFO output omitted: roughly a dozen virtual log files of differing sizes]

Note that the virtual log files come in several different sizes. In other databases I can set this logical size to whatever I like.
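Virtual log file sizes can't be set directly in SQL Server; they fall out of the size of each file growth. The usual trick (a sketch; the logical file name is an assumption) is to shrink the log and then regrow it in one deliberate step, so the new space is carved into a predictable set of VLFs:

-- Shrink the log as small as it will go:
DBCC SHRINKFILE (sagent_dev_log, 1)

-- Regrow in one chunk; the increment determines the VLF layout:
ALTER DATABASE sagent_dev
    MODIFY FILE (NAME = sagent_dev_log, SIZE = 1000MB)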
We would like to replicate some data from a SQL 7 DB onto a server running SQL 2000. We plan to take some data off the SQL 7 DBs and create a data warehouse on the SQL 2000 box.
Question: Will the data replicate successfully from the SQL 7 box onto the SQL 2000 box?
I have a trans log backup that runs every 15 minutes to SAN. It works fine until I run a backup after a large load, when I get the error message below. Has anyone had this before?
Executed as user: DPSCSDOMsqlexec. The file on device 'n:sqllogsmdentallgdmp' is not a valid Microsoft Tape Format backup set. [SQLSTATE 42000] (Error 3242) BACKUP LOG is terminating abnormally. [SQLSTATE 42000] (Error 3013). The step failed.
We're running Lawson software on our SQL 2000 box and using Veritas Backup Exec to back up the databases to tape. I'm also using a maintenance plan for an extra backup (kind of redundant, but I need the practice at all this). I just added transaction logs to the maintenance plan (so I thought), but I only see a log for 1 of the 3 databases.
Also, my trans log file is 11 gig. I thought backing up the trans logs would cause the old transactions to be cleared out afterwards (the databases are in recovery model = full). Can anyone point a newbie in the right direction? Fortunately it's not in production yet, and we haven't had any disasters (yet).
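Backing up the log does mark committed transactions as reusable, but it never shrinks the physical file; an 11 gig file stays 11 gig until it is explicitly shrunk. A quick check (a sketch; the logical file name is a placeholder):

-- How full is each database's log really?
DBCC SQLPERF (LOGSPACE)   -- shows log size vs. percent actually in use

-- If usage is low after a log backup, the file can then be shrunk:
DBCC SHRINKFILE (MyDB_log, 1000)   -- target size in MB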