Is There A Limit On The Time Replication Can Be Down?
May 31, 2007
I was asked whether there is a limit on the amount of clock time that the replication agents may be down before the subscribers must be re-snapshotted. I think the only factor is whether the MSDB database still contains the data needed to feed a restarted replication agent.
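One way to check how much runway a stopped agent actually has is to look at the distribution database's retention settings rather than MSDB. A minimal sketch, run at the distributor and assuming the default distribution database name:

Code:
-- "min distribretention" / "max distribretention" (in hours) bound how long
-- transactions are kept for agents that are down before a reinitialization
-- (new snapshot) is required.
EXEC sp_helpdistributiondb @database = N'distribution';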
Hello, I ran a query that I thought would take an hour, but it instead took 14 hours to run. The consequence was that it bogged down our data warehouse and the overnight build was adversely impacted. Is there a local setting I can use to limit the execution time my query will take? I don't want to change a server setting and impact other queries, just the one I am running. I know there will be people asking about the 14-hour build and what it is doing and so forth. I will address that, but I also look at these situations as a learning opportunity. Thanks in advance. Rob
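One session-scoped option that may help is SET QUERY_GOVERNOR_COST_LIMIT, which rejects queries whose estimated cost exceeds a threshold without touching the server-wide configuration. Note that it acts on the optimizer's estimate (in cost "seconds" on a reference machine), not on actual elapsed time; a sketch:

Code:
-- Applies only to the current session; other connections are unaffected.
SET QUERY_GOVERNOR_COST_LIMIT 3600;  -- reject plans estimated above ~1 hour

SELECT COUNT(*) FROM sys.objects;    -- the long-running query goes here

SET QUERY_GOVERNOR_COST_LIMIT 0;     -- 0 = remove the session limit

If the goal is to cap actual run time rather than estimated cost, that is normally enforced from the client side via the connection's command timeout.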
I am experiencing a weird problem with merge replication and SQL 2000 SP4.
We had replication working for 4 years without a problem; one of the tables that we replicate has been growing in columns, and now it has 55 columns.
I have noticed a problem: when I update the last column on the table and wait for the change to arrive at the subscriber, the change is not there. But if I update one of the first 40 columns, it works.
Changes made:
1) Upgraded to SP4. 2) Added some extra columns using sp_repladdcolumn.
I am being told that the colid in syscolumns may not exceed 255 if the table is replicated. Is that true? Where in BOL or elsewhere can I read up on this? This is a shocking development!!!
We are using SQL 7 at 2 different locations, on 2 different Domains that are connected via a T1 line.
We need to have our 2 SQL servers be identical at all times, meaning when a change is made on one server, it is immediately replicated to the other. The problem is that both servers are always being used, so there is no primary/backup server scenario.
Does anyone know if there is third-party software available, or if this is even possible to do, given the fact that the databases are always open?
I have two databases, one for production and another for reporting. In this scenario we need to transfer real-time data to the reporting database. Which type of replication should we use, and what are the bottlenecks? Can anyone give some suggestions or some links?
I want to create real-time replication for my database server. I have two database servers: one is the master, which holds the real-time data, and the other is the slave, where I want to replicate the data from the master.
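For a one-way, near-real-time master-to-reporting flow, transactional replication is the usual fit (merge is for bidirectional changes; snapshot is batch-oriented). A minimal sketch of scripting it, assuming the distributor is already configured and using hypothetical names (SalesDB, SalesPub, dbo.Orders, REPORTSRV, ReportDB):

Code:
-- On the publisher: enable the database for transactional publishing.
EXEC sp_replicationdboption @dbname = N'SalesDB',
     @optname = N'publish', @value = N'true';

USE SalesDB;
-- Create a continuously replicating publication and add one article.
EXEC sp_addpublication @publication = N'SalesPub',
     @repl_freq = N'continuous', @status = N'active';
EXEC sp_addarticle @publication = N'SalesPub',
     @article = N'Orders', @source_owner = N'dbo',
     @source_object = N'Orders';

-- Push a subscription to the reporting server.
EXEC sp_addsubscription @publication = N'SalesPub',
     @subscriber = N'REPORTSRV', @destination_db = N'ReportDB',
     @subscription_type = N'push';

A Distribution Agent job (sp_addpushsubscription_agent) still has to be created for delivery, and the main bottlenecks to watch are the Log Reader Agent keeping up with the publisher's log and the distribution database's retention/cleanup.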
There is a database "Foo" sitting on server "A". There is a database "Bar" sitting on server "B". A.Foo publishes a subset of its schema. B.Bar subscribes to A.Foo's publication. The distribution database is on "B" (B.distributor). This a push subscription (transactions are pushed to the subscriber from the distributor). Every day (including the weekend) I get the following alert:
"5/12/2015 3:53:16 AM, Unsubscribed Transactions (Count) on "B" is Warning.
SQL Server instance "B" has 636771 unsubscribed replication transactions received by the Distributor and not received by a Subscriber.
Unsubscribed Transactions (Count): Number of replication transactions received by the Distributor and not received by a Subscriber."
The number of transactions varies. The alerts are sent between 1:20 AM (EST) and 3:30 AM (EST). I'm trying to figure out what is causing the backlog of transactions. I assume the issue precedes the alerts by 30 minutes or so.
What I have checked so far:
- There are no backups occurring.
- Nothing is blocking the Distribution Agent in the subscription database.
- Job activity is at a minimum; the few jobs running run throughout the day.
- The machine has plenty of resources (CPU, RAM, etc.).
- The publisher database shows no signs of stress.
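When the alert fires, it may help to ask the distributor directly how far behind the subscription is. A sketch, run in the distribution database on "B", with the publication name as a placeholder:

Code:
USE distribution;
-- Estimate the number of commands queued for this subscription and
-- roughly how long they will take to deliver.
EXEC sp_replmonitorsubscriptionpendingcmds
     @publisher         = N'A',
     @publisher_db      = N'Foo',
     @publication       = N'FooPub',   -- hypothetical publication name
     @subscriber        = N'B',
     @subscriber_db     = N'Bar',
     @subscription_type = 0;           -- 0 = push

Sampling this during the 1:20-3:30 AM window, alongside the Log Reader and Distribution Agent history, should show whether the backlog is building on the reader side or on the delivery side.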
We have many users with a mobile application running SQL Mobile and using merge replication to get data back to the SQL 2008 R2 database. This has worked very well for many years.
We now have a requirement to have this data reported on using Reporting Services. This is where it gets messy.
Due to a limitation of Report Builder (see this blog) we cannot give users access to create their own reports. The report database is remote from the host and there is no VPN.
We hit upon the idea of creating an almost identical publication but the articles as read-only. It was only after this was done that we started having trouble with our existing mobile users.
It seems that a published article is EITHER bi-directional OR read-only, even if the publications are separate.
I then thought of using a transactional publication, but this too is blocked at creation with "automatic identity range support is useful only for publications that allow updating subscribers" (merge and transactional publications appear to be mutually exclusive here).
So, in the final analysis: is there a way for me to have merge replication AND some other form of SQL replication/data transfer that can deliver the same data, read-only, to a separate full SQL Server database?
We recently implemented merge replication between two SQL Server 2005 instances on the same network, and since we introduced the replication, performance has degraded considerably on the subscriber end.
1) One thing that should be mentioned is that the flow of changes is unidirectional, from the publisher towards the subscriber (there is only one publisher, which also acts as the distributor, and one subscriber).
2) Updates outnumber inserts; just one article, let's say "Article1", has up to 2000 updates per day, and I am seeing that dbo.MSmerge_upd_sp_Article1_GUID takes the most CPU time. What should we do?
On the subscriber database, response time is getting slow and I am experiencing a large number of lock timeouts on the application end.
Can anyone also suggest server-level settings for avoiding the lock timeouts?
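There is no server-wide setting that makes lock timeouts disappear, but two options are commonly tried, one per session and one per database. A sketch, assuming SQL Server 2005 and that readers on the subscriber are blocking behind the merge update activity:

Code:
-- Option 1 (per session): wait longer before a lock request times out.
-- Values are in milliseconds; -1 means wait indefinitely.
SET LOCK_TIMEOUT 10000;

-- Option 2 (per database; needs no other active connections to switch on):
-- row-versioned reads, so readers no longer block behind writers.
ALTER DATABASE SubscriberDB SET READ_COMMITTED_SNAPSHOT ON;

READ_COMMITTED_SNAPSHOT shifts the cost to the tempdb version store, so it is worth testing under your full update volume first.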
USE [Testing]
GO
/****** Object: Table [dbo].[Testing] Script Date: 4/25/2014 11:08:18 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
[Code] ....
It seems to work fine with one million records.
Each primary key is unique, but the begindate is non-unique, and I guess even if I use datetime2 and add nanoseconds, from what I have read there is still a chance that I could get a duplicate datetime, since the dates are imported via XML from multiple sources.
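If the real requirement is a deterministic order rather than a truly unique timestamp, one common approach is to let the unique primary key break ties in a composite index. A sketch, assuming a hypothetical integer key column named id alongside begindate:

Code:
-- Duplicate begindate values remain allowed; the unique id breaks ties,
-- so (begindate, id) is always unique and gives a stable sort order.
CREATE UNIQUE INDEX IX_Testing_begindate_id
    ON dbo.Testing (begindate, id);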
Is there a way to keep track, in real time, of how long a stored procedure has been running? What I want to do is fire off a trace if a stored procedure has been running for over five minutes.
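On SQL Server 2005 and later the DMVs can answer this without a standing trace; an Agent job could poll something like the sketch below and fire whatever action is needed (the five-minute threshold is illustrative):

Code:
-- Active requests that have been running for 5 minutes or more,
-- with the statement text they are executing.
SELECT r.session_id,
       r.start_time,
       DATEDIFF(MINUTE, r.start_time, GETDATE()) AS minutes_running,
       t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
  AND DATEDIFF(MINUTE, r.start_time, GETDATE()) >= 5;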
I am trying to load the previous day's data at 3 am via an SSIS job.
The Date variable is initialized as DATEADD("dd", -1, GETDATE()) in the For Loop.
Now, as this job runs at 3 am and I set the variable as GETDATE() - 1, it excludes the data from 12 am to 3 am from the result set, because Date is set to YYYY-MM-DD 03:00:00.000. I need it to be set to YYYY-MM-DD 00:00:00.000.
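The usual fix is to truncate the time portion before subtracting the day. In an SSIS expression this is typically written as (DT_DATE)(DT_DBDATE)DATEADD("dd", -1, GETDATE()), since the cast through DT_DBDATE drops the time of day; a version-neutral T-SQL equivalent is sketched below:

Code:
-- Midnight of the previous day, regardless of when the job runs.
SELECT DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()) - 1, 0) AS LoadDate;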
I hope to update a DateTime column value with a Time input parameter. My poor attempt is below, but it looks like the @ApptTime param is coming in as 10:45:00.0000000 and I might have an existing @SendOnDate such as 2015-10-05 07:00:00.000. I hope to end up with 2015-10-05 10:45:00.000.
ALTER PROCEDURE [dbo].[SendEditUPDATE]
  @QuePoolID int = null
 ,@ApptTime time(7)
 ,@SendOnDate datetime
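One way to splice the time into the existing datetime is to zero out its time-of-day and add the time parameter back, with both sides cast to datetime. A sketch of the relevant statement under those parameter names:

Code:
-- Keep the date from @SendOnDate, replace its time-of-day with @ApptTime.
-- Casting a time to datetime yields 1900-01-01 (day zero) at that time,
-- so datetime + datetime addition produces the combined value.
SET @SendOnDate = CAST(CAST(@SendOnDate AS date) AS datetime)
                + CAST(@ApptTime AS datetime);
-- e.g. 2015-10-05 07:00:00.000 and 10:45:00 -> 2015-10-05 10:45:00.000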
I am using VS2005 (VB) to develop a PPC WM5.0 program, and I am using SQLCE 3.0. My PPC hardware runs at 400MHz.
The question is: each time after the program starts, the first insert of a record into the sdf database takes a long time. Does anyone know why, and how can I fix it?
I load the whole database into a dataset when the program starts, do all the inserts, updates and deletes in this dataset, and write it back to the database after each action.
cn.Open()
sda = New SqlCeDataAdapter(SQL, cn) ' SQL = "Select * From Table"
scb = New SqlCeCommandBuilder(sda)
sda.Update(dataset)
cn.Close()
I checked sda.Update(); normally it takes about 0.08s to write one record back to the database. But:
1. Start the PPC Program
2. Load DB into dataset
3. Create a ONE new record in dataset
4. Fill back to DB
When I take these four steps, the write-back time is almost 1s or even more, every time!
Actually, 0.08s is just the normal case. Sometimes it still takes over 1s to write back a dataset with only one inserted record while the program is running. (Even when all inserted records are exactly the same in data, just different in the integer key.)
However, when I give up the dataset and use the following code:
cn.Open()
Dim cmd As New SqlCeCommand(SQL, cn) ' the INSERT SQL is built beforehand:
' Insert Into Table Values (XXXXXXXXXXXXXXX all fields)
cmd.ExecuteNonQuery()
I found that the first insert still takes more time, but only about 0.2s, and the normal insert time is around 0.02s. That is 4 times faster!!!
We need to select rows from the database that have been recently inserted/updated. We have a main primary table (COMMIT_TEST) and a second update table (COMMIT_TEST_UPDATE). The update table contains the primary key and a LAST_UPDATE field which is a datetime (to tell us when an update occurred). Triggers on the primary table are used to populate the update table.
If we insert into or update the primary table in a transaction, we would expect the datetime of the insert/update to be that of the commit; however, it seems that the insert/update statement is cached and getdate() is evaluated when the statement runs rather than at the commit. This causes problems: we select rows based on LAST_UPDATE, but a commit may occur later while the earlier timestamp is what gets saved to the database, so we miss that update.
We would like to know if there is any way to tell SQL Server not to evaluate getdate() until the commit, or any other way to get the commit to create the correct timestamp.
We are using default isolation level. We have tried using getdate(), current_timestamp and even {fn Now()} with the same results. SQL Queries that reproduce the problem are provided below:
/* Different functions to get the current timestamp - all have been tested to produce the same results */
/*
SELECT GETDATE()
GO
SELECT CURRENT_TIMESTAMP
GO
SELECT {fn Now()}
GO
*/

/* Use these statements to drop the tables to allow them to be recreated */
/*
DROP TABLE COMMIT_TEST
DROP TABLE COMMIT_TEST_UPDATE
*/

/* Create a primary table and an UPDATE table to store the date/time when the primary table is modified */
CREATE TABLE dbo.COMMIT_TEST (PKEY int PRIMARY KEY, timestamp) /* ROW_VERSION rowversion */
GO
CREATE TABLE dbo.COMMIT_TEST_UPDATE (PKEY int PRIMARY KEY, LAST_UPDATE datetime, timestamp) /* ROW_VERSION rowversion */
GO

/* Use these statements to drop the triggers to allow them to be recreated */
/*
DROP TRIGGER LOG_COMMIT_TEST_INSERT
DROP TRIGGER LOG_COMMIT_TEST_UPDATE
DROP TRIGGER LOG_COMMIT_TEST_DELETE
*/

/* Create insert, update and delete triggers */
CREATE TRIGGER LOG_COMMIT_TEST_INSERT ON COMMIT_TEST FOR INSERT AS
BEGIN
    INSERT INTO COMMIT_TEST_UPDATE (PKEY, LAST_UPDATE)
    SELECT PKEY, GETDATE() FROM inserted
END
GO

CREATE TRIGGER LOG_COMMIT_TEST_UPDATE ON COMMIT_TEST FOR UPDATE AS
BEGIN
    UPDATE COMMIT_TEST_UPDATE
    SET LAST_UPDATE = GETDATE()
    FROM COMMIT_TEST_UPDATE, deleted, inserted
    WHERE COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
END
GO

/* In our application deletes should never occur, so we don't log them; we just remove the row from the UPDATE table */
CREATE TRIGGER LOG_COMMIT_TEST_DELETE ON COMMIT_TEST FOR DELETE AS
BEGIN
    IF (SELECT COUNT(*) FROM deleted) > 0
    BEGIN
        DELETE COMMIT_TEST_UPDATE
        FROM COMMIT_TEST_UPDATE, deleted
        WHERE COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
    END
END
GO

/* Delete any previously inserted record to avoid errors when inserting */
DELETE COMMIT_TEST WHERE PKEY = 1
GO

/* What is the current date/time? */
SELECT GETDATE()
GO
BEGIN TRANSACTION
GO
/* Insert a record into the primary table */
INSERT COMMIT_TEST (PKEY) VALUES (1)
GO
/* Simulate additional processing within this transaction */
WAITFOR DELAY '00:00:10'
GO
/* We expect at this point that the date is written to the database (or at least we need some way for this to happen) */
COMMIT TRANSACTION
GO
/* Get the current date to show what date/time should have been committed to the database */
SELECT GETDATE()
GO
/* Select results from the tables - we see that the timestamp is 10 seconds older than the commit; in other words it was evaluated at */
/* the INSERT statement, even though the row could not be read with a SELECT while it was uncommitted */
SELECT * FROM COMMIT_TEST
GO
SELECT * FROM COMMIT_TEST_UPDATE
Any help would be appreciated. We understand we could make changes to the application/database to approximate what we need, but all the solutions we have identified suffer from possible performance issues or could still lead to missed deals (assuming the commit time is later than some artificial time window).
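The /* ROW_VERSION rowversion */ comments in the script hint at what is likely the safest route: order reads by a rowversion column instead of a trigger-captured datetime, and bound each poll with MIN_ACTIVE_ROWVERSION() so rows belonging to still-open transactions are never skipped past. A sketch, assuming the commented-out ROW_VERSION column is enabled and the reader persists a watermark between polls:

Code:
-- @last is the highest rowversion already processed by the reader.
-- MIN_ACTIVE_ROWVERSION() excludes rows from uncommitted transactions,
-- so a transaction that commits late can never "appear in the past".
DECLARE @last binary(8);
SET @last = 0x0000000000000000;  -- in practice, loaded from a watermark table

SELECT PKEY, ROW_VERSION
FROM dbo.COMMIT_TEST_UPDATE
WHERE ROW_VERSION > @last
  AND ROW_VERSION < MIN_ACTIVE_ROWVERSION();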
When creating a database, SQL Server 6.5 seems to have a 2 GB limit. What I mean is that if the device chosen is over 2 GB, SQL Server displays the size of the device as a negative number. This prevents me from being able to expand the database when I need to. Can anyone tell me why this is so, and if there's any way around it?
We all know the MySQL form: select * from table limit ?1, ?2
equals
sqlserver: SELECT TOP ?2 * FROM table WHERE (IDENTITYCOL NOT IN (SELECT TOP ?1 IDENTITYCOL FROM table order by IDENTITYCOL)) order by IDENTITYCOL
But how should the SQL below be converted from MySQL? I'm stuck... select pageid, pagename, pageaddr, pageauditflag, pageartauditflag, startplaytime from pageinfo where entryid = ?1 and startplaytime = ?2 limit ?3, ?4
Hi, I have a question. Maybe you know the equivalent of MySQL's LIMIT command? I couldn't find anything like it in MS SQL. PS: I am trying to display 10 records beginning from e.g. record 4, sorted by id, something like "SELECT * FROM table WHERE name=... LIMIT 4, 10 ORDER BY id" in MySQL. Thanks, Urban
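The IDENTITYCOL/TOP trick above needs an unbroken identity sequence; on SQL Server 2005 and later the general translation of LIMIT offset, count is ROW_NUMBER(). A sketch against the pageinfo table from the earlier post (any unique ordering column works in place of pageid):

Code:
-- Equivalent of MySQL's LIMIT 4, 10: skip 4 rows, return the next 10.
SELECT *
FROM (
    SELECT ROW_NUMBER() OVER (ORDER BY pageid) AS rn, *
    FROM pageinfo
) AS numbered
WHERE rn BETWEEN 5 AND 14
ORDER BY rn;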
I recently read up and found an MSDN forum post that said the 4GB limit only applied to data files, not log files.
However, recently I had an error on a SQL Express database saying that the log file was full; the database log file was set to auto-grow and there was plenty of space left.
So I am guessing the 4GB limit applies to any database file, mdf or ldf. Is this correct?
I need to take a temporary table that has various times stored in a text field (4:30 pm, 11:00 am, 5:30 pm, etc.), convert them to military time, and then cast them as integers with an update statement, kind of like:
Update myTable set MovieTime = REPLACE(CONVERT(CHAR(5),GETDATE(),108), ':', '')
How can this be done while my temp table is in session?
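Since strings like '4:30 pm' convert cleanly to datetime, the same style-108 trick can be pointed at the column itself, and it works on a temp table for the duration of the session exactly as on a permanent one. A sketch, assuming a hypothetical source column named MovieTimeText:

Code:
-- '4:30 pm' -> datetime -> '16:30' -> '1630' -> 1630
UPDATE #myTable
SET MovieTime = CAST(
      REPLACE(CONVERT(char(5), CAST(MovieTimeText AS datetime), 108), ':', '')
      AS int);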
We are using SQL Server 2008 as our database and Access as a GUI. I am looking to create a form in Access where employees can access their time card and request changes from management. I want to use the format from the attached screen shot for the form. I pretty much know how to do it all; the only point of complication is figuring out the easiest way to get the punch data in employee_punch_record into a format where I can easily populate the form in the horizontal layout you see in the screen shot.
I am not super strong in SQL, but figure I can do it using a formatting table of some sort. Is there a quick and easy way to move the transaction records into a more horizontally oriented record?
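A PIVOT over the punch table is one way to turn the vertical punch rows into one horizontal row per employee per day. A sketch with entirely hypothetical column names (employee_id, work_date, punch_type, punch_time), since the real schema isn't shown:

Code:
SELECT employee_id, work_date, [IN], [LUNCH_OUT], [LUNCH_IN], [OUT]
FROM (
    SELECT employee_id, work_date, punch_type, punch_time
    FROM dbo.employee_punch_record
) AS src
PIVOT (
    MIN(punch_time) FOR punch_type IN ([IN], [LUNCH_OUT], [LUNCH_IN], [OUT])
) AS p;

Access can then bind the form to this query (or to a view wrapping it) rather than to the raw transaction table.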
I have a very simple time series model whose processing works fine without any problem. However, when I run the following query
SELECT
[TimeSeries].[PriceChange],
[TimeSeries].[Symbol],
PredictTimeSeries(PriceChange, -3, 2)
From
[TimeSeries]
WHERE
[TimeSeries].[Symbol] = 'x'
I get the following error:
TITLE: Microsoft SQL Server 2005 Analysis Services
Error (Data mining): A time series prediction was requested with a start time further in the past than the internal models of the mining model, TimeSeries, specified in the HISTORIC_MODEL_GAP and HISTORIC_MODEL_COUNT parameters can process.
The following is the excerpt of the mining model script related to the two parameters:
<AlgorithmParameters>
<AlgorithmParameter>
<Name>MISSING_VALUE_SUBSTITUTION</Name>
<Value xsi:type="xsdtring">Previous</Value>
</AlgorithmParameter>
<AlgorithmParameter>
<Name>HISTORIC_MODEL_GAP</Name>
<Value xsi:type="xsd:int">1</Value>
</AlgorithmParameter>
<AlgorithmParameter>
<Name>HISTORIC_MODEL_COUNT</Name>
<Value xsi:type="xsd:int">10</Value>
</AlgorithmParameter>
</AlgorithmParameters>
These values of HISTORIC_MODEL_GAP (1) and HISTORIC_MODEL_COUNT (10) should accommodate PredictTimeSeries(PriceChange, -3, 2). Could anyone shed some light on this?
We have problems with our SQL Reporting Services 2012 (SSRS) server. We have set up Kerberos delegation between SSRS and the database server (a SQL Server Always On cluster) so users are authenticated down to the database. The issue occurs from time to time: SSRS loses the ability to delegate the user credentials to the database. At that point the Report Server logs contain rejected database connections because of ANONYMOUS LOGON. After restarting SSRS the problem is gone.
I have a table which has a few fields, one being "datetime_traded". I need to write a query which returns the row which has the closest time (down to second) given a date/time. I'm using MS SQL.
Here's what I have so far:
Code:
select * from TICK_D where datetime_traded = (select min( abs(datediff(second,datetime_traded , Convert(datetime,'2005-05-30:09:31:09')) ) ) from TICK_D)
But I get an error - "The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value.".
Does anyone know how I could do this? Thanks a lot for any help!
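Two separate problems are visible: the subquery returns a number of seconds (not a datetime, so comparing it to datetime_traded can never match), and '2005-05-30:09:31:09' has a colon between the date and the time, which is what triggers the out-of-range conversion error. A sketch of a direct fix:

Code:
-- Return the single row whose datetime_traded is closest to the target time.
SELECT TOP 1 *
FROM TICK_D
ORDER BY ABS(DATEDIFF(SECOND, datetime_traded,
                      CONVERT(datetime, '2005-05-30 09:31:09')));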
Ok, so I have some horribly convoluted SQL that I would love to optimize. I'm not happy leaving it in its current state, that's for sure!
I'm currently working on our test bed servers, so obviously my stats are off because of the "crap-ness" (yes, that's the technical term) of the hardware, but still, it should NEVER take this long!!
Basically, the issue arises in the nasty join to the career table (one employee can have multiple career lines). Just to make things complicated, employees can have any number of career records on any given date, and these can even be entered for future career events. The following SQL picks out the latest current career record for each employee, based on the career_date being <= GETDATE() and the date of entry for that career_date being the greatest.
From the above we want to return 2007-01-01 | 2006-05-05 13:54:18.000
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT a.sAMAccountName AS 'sAMAccountName'
     , a.userPrincipalName AS 'userPrincipalName'
     , 'TRUE' AS 'Modify'
     , RTRIM(e.unique_identifier) AS 'employeeID'
     , RTRIM(e.employee_number) AS 'employeeNumber'
     , RTRIM(e.known_as)
       + CASE WHEN RTRIM(e.surname) IS NOT NULL THEN ' ' + RTRIM(e.surname) ELSE NULL END AS 'displayName'
     , RTRIM(e.known_as) AS 'givenName'
     , RTRIM(e.surname) AS 'sn'
     , RTRIM(c.job_title) AS 'title'
     , RTRIM(c.division) AS 'company'
     , RTRIM(c.department) AS 'department'
     , RTRIM(l.description) AS 'physicalDeliveryOfficeName'
     , RTRIM(REPLACE(am.dn, '\', '')) AS 'manager'
     , t.full_mobile
       + CASE WHEN RTRIM(t.mobile_number) IS NOT NULL THEN ' (DD: ' + RTRIM(t.mobile_number) + ')' ELSE NULL END AS 'mobile'
     , t.mobile_number AS 'otherMobile'
     , ad.address_ad_country AS 'c'
     , ad.address_ad_address1
       + CASE WHEN ad.address_ad_address2 IS NOT NULL THEN ', ' + ad.address_ad_address2 ELSE NULL END
       + CASE WHEN ad.address_ad_address3 IS NOT NULL THEN ', ' + ad.address_ad_address3 ELSE NULL END
       + CASE WHEN ad.address_ad_address4 IS NOT NULL THEN ', ' + ad.address_ad_address4 ELSE NULL END
       + CASE WHEN ad.address_ad_address5 IS NOT NULL THEN ', ' + ad.address_ad_address5 ELSE NULL END AS 'streetAddress'
     , ad.address_ad_pobox AS 'postOfficeBox'
     , ad.address_ad_city AS 'l'
     , ad.address_ad_County AS 'st'
     , ad.address_ad_postcode AS 'postalCode'
     , RTRIM(ad.address_ad_telephone)
       + CASE WHEN RTRIM(a.othertelephone) IS NOT NULL AND RTRIM(ad.address_ad_telephone) IS NOT NULL
              THEN ' (Ext: ' + RTRIM(a.othertelephone) + ')'
              ELSE CASE WHEN RTRIM(a.othertelephone) IS NOT NULL AND RTRIM(ad.address_ad_telephone) IS NULL
                        THEN 'Ext: ' + RTRIM(a.othertelephone)
                        ELSE NULL END
         END AS 'telephoneNumber'
FROM employee e
LEFT JOIN career c
       ON c.parent_identifier = e.unique_identifier
      AND c.career_date = (
              SELECT MAX(c2.career_date)
              FROM pwa_master.career c2
              WHERE c2.parent_identifier = c.parent_identifier
                AND c2.career_date <= GETDATE()
          )
      AND c.datetime_created = (
              SELECT MAX(c3.datetime_created)
              FROM pwa_master.career c3
              WHERE c3.parent_identifier = c.parent_identifier
                AND c3.career_date = c.career_date
          )
LEFT OUTER JOIN AD_Import am ON am.employeeNumber = c.manager_number
INNER JOIN AD_Import a ON a.employeeID = e.unique_identifier
LEFT JOIN AD_Telephone t ON t.unique_identifier = e.unique_identifier
LEFT JOIN AD_Address ad ON ad.address_pwa_location = e.location
LEFT JOIN xlocat l ON l.code = c.location
WHERE (a.employeeNumber IS NOT NULL OR a.employeeID IS NOT NULL)
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 15203 ms, elapsed time = 8114 ms.
Any advice on what I can do to optimize?
Oh, just to point out that "employee" is a view on the table 'people'. EDIT: I know it's pointing out the obvious, but I'm pulling the manager's DN from AD_Import based on the manager_number and employeeNumber matching.
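If these servers are on SQL Server 2005 or later, one rewrite worth testing (a sketch of the idea, not a drop-in replacement for the full statement) is collapsing the two correlated subqueries into a single OUTER APPLY that grabs the latest-current career row per employee in one probe:

Code:
-- Latest career row per employee with career_date <= today; ties on the
-- date are broken by the most recent datetime_created, matching the
-- original MAX()/MAX() logic.
SELECT e.unique_identifier, c.job_title, c.career_date, c.datetime_created
FROM employee e
OUTER APPLY (
    SELECT TOP 1 c2.job_title, c2.career_date, c2.datetime_created
    FROM pwa_master.career c2
    WHERE c2.parent_identifier = e.unique_identifier
      AND c2.career_date <= GETDATE()
    ORDER BY c2.career_date DESC, c2.datetime_created DESC
) AS c;

An index on career (parent_identifier, career_date, datetime_created) would let each probe resolve as a single seek.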