Is There A Limit On The Time Replication Can Be Down?
May 31, 2007
I was asked whether there is a limit on the amount of clock time that the replication agents can be down before the subscribers have to be reinitialized with a new snapshot. I think the only factor is whether the MSDB database still contains the data needed to feed a restarted replication agent.
Any comments?
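If this is transactional replication, the window is normally governed by the distribution retention settings; a quick, hedged way to check them (assuming the distribution database has the default name 'distribution'):
-- Run at the distributor; the result includes min_distretention and
-- max_distretention (hours). Subscriptions left down longer than the
-- maximum retention are deactivated and must be reinitialized.
EXEC sp_helpdistributiondb @database = N'distribution';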
View 4 Replies
Aug 15, 2007
Hello, I ran a query that I thought would take an hour, but it instead took 14 hours to run. The consequence was that it bogged down our data warehouse and the overnight build was adversely impacted. Is there a local setting I can use to limit the execution time my query will take? I don't want a server-wide setting that impacts other queries, just the one I am running. I know there will be people asking about the 14-hour build and what it is doing and so forth. I will address that, but I also look at these situations as learning opportunities. Thanks in advance. Rob
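For reference, the closest session-local knob I know of is the query governor cost limit, which can be set per connection. It works on the optimizer's estimated cost (roughly estimated seconds), not actual elapsed time, so treat the sketch below as an approximation rather than a hard runtime cap; a true wall-clock cap is usually enforced by the client's command timeout.
-- Session-level only; does not affect other connections.
SET QUERY_GOVERNOR_COST_LIMIT 3600;   -- refuse queries estimated above ~3600 cost units
-- ... run the long-running query here ...
SET QUERY_GOVERNOR_COST_LIMIT 0;      -- 0 = no limit for the rest of the session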
View 3 Replies
View Related
Oct 26, 2000
Does anyone know if there is a limit to the number of fields you can replicate from one table?
View 1 Replies
View Related
Nov 20, 2006
My snapshot is exceeding the compression limit.
Is there a way to configure the server to use a higher compression limit?
thanks,
joey
View 1 Replies
View Related
Dec 4, 2006
Hi,
I am experiencing a weird problem with merge replication and SQL 2000 SP4.
We have had replication working for 4 years without a problem. One of the tables that we replicate has been growing in columns; it now has 55 columns.
I have noticed a problem: when I update the last column on the table and wait for the change to reach the subscriber, the change is not there. But if I replicate only the first 40 columns, it works.
Changes made:
1) Upgraded to SP4
2) Added some extra columns using sp_repladdcolumn
Is this a known bug? Any ideas?
Regards
View 3 Replies
View Related
Apr 4, 2007
I am being told that the colid in syscolumns may not exceed 255 if the table is replicated. Is that true? Where in BOL or elsewhere can I read up on this? This is a shocking development!!!
View 3 Replies
View Related
May 6, 2001
We are using SQL 7 at 2 different locations, on 2 different Domains that are connected via a T1 line.
We need to have our 2 SQL servers be identical at all times, meaning when a change is made on one server, it is immediately replicated to the other. The problem is that both servers are always being used, so there is no primary/backup server scenario.
Does anyone know if there is third-party software available for this, or if it is even possible to do, given that the databases are always in use?
Thanks.
View 1 Replies
View Related
Apr 24, 2008
I have two databases, one for production and one for reporting. In this scenario we need to transfer real-time data to the reporting database. Which type of replication should we use, and what are the bottlenecks? Can anyone give some suggestions or some links?
Thanks in advance
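For near-real-time reporting, transactional replication is the usual fit; a minimal, hedged sketch of the publisher-side setup (all object and server names below are placeholders, and a distributor must already be configured):
-- Enable the production database for publishing.
EXEC sp_replicationdboption @dbname = N'ProdDB', @optname = N'publish', @value = N'true';
-- Create a continuously running transactional publication.
EXEC sp_addpublication @publication = N'ProdToReporting', @repl_freq = N'continuous', @status = N'active';
EXEC sp_addpublication_snapshot @publication = N'ProdToReporting';
-- Add one article per table to be reported on.
EXEC sp_addarticle @publication = N'ProdToReporting', @article = N'Orders', @source_object = N'Orders';
-- Push the data to the reporting server/database.
EXEC sp_addsubscription @publication = N'ProdToReporting', @subscriber = N'ReportServer',
     @destination_db = N'ReportingDB', @subscription_type = N'Push';
The usual bottlenecks are the log reader keeping up with write volume on the production side and the distribution agent applying commands on the reporting side.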
View 4 Replies
View Related
Jun 28, 2007
hello,
I want to create real-time replication for my database server. I have two database servers: one is the master, which holds the real-time data, and the other is a slave to which I want to replicate data from the master.
Please help me configure a solution.
Thanks,
Chetan S. Raut.
View 1 Replies
View Related
May 6, 2015
How do I recover to a point in time if the database is participating in transactional replication and is a publisher?
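A hedged sketch of what this usually looks like for the publication database (backup file names and the target time below are placeholders; depending on how far back you go, subscriptions may still need to be reinitialized or resynchronized, e.g. with sp_replrestart):
-- Restore the full backup without recovering, keeping replication settings.
RESTORE DATABASE PubDB FROM DISK = N'C:\Backups\PubDB_full.bak'
    WITH NORECOVERY, KEEP_REPLICATION;
-- Roll the log forward to the desired point in time.
RESTORE LOG PubDB FROM DISK = N'C:\Backups\PubDB_log.trn'
    WITH STOPAT = N'2015-05-06 10:30:00', RECOVERY, KEEP_REPLICATION;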
View 0 Replies
View Related
May 12, 2015
There is a database "Foo" sitting on server "A". There is a database "Bar" sitting on server "B". A.Foo publishes a subset of its schema. B.Bar subscribes to A.Foo's publication. The distribution database is on "B" (B.distributor). This a push subscription (transactions are pushed to the subscriber from the distributor). Every day (including the weekend) I get the following alert:
"5/12/2015 3:53:16 AM, Unsubscribed Transactions (Count) on "B" is Warning.
SQL Server instance "B" has 636771 unsubscribed replication transactions received by the Distributor and not received by a Subscriber.
Unsubscribed Transactions (Count): Number of replication transactions received by the Distributor and not received by a Subscriber."
The number of transactions varies. The alerts are sent between 1:20 AM (EST) and 3:30 AM (EST). I'm trying to figure out what is causing the backlog of transactions. I assume the issue precedes the alerts by 30 minutes or so.
- There are no backups occurring
- Nothing is blocking the distributor agent in the subscription database
- Job activity is at a minimum; the few jobs running run throughout the day
- The machine has plenty of resources -- CPU, RAM, etc.
- The publisher database shows no signs of stress.
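One hedged way to see how big the backlog is and how old the pending commands are, run in the distribution database on "B" (the publication name below is a placeholder for your topology):
-- Pending command count and estimated time to apply them for this subscription.
EXEC sp_replmonitorsubscriptionpendingcmds
     @publisher = N'A', @publisher_db = N'Foo', @publication = N'FooPub',
     @subscriber = N'B', @subscriber_db = N'Bar', @subscription_type = 0;  -- 0 = push
-- Age of the oldest and newest transactions still held by the distributor.
SELECT MIN(entry_time) AS oldest, MAX(entry_time) AS newest, COUNT(*) AS tran_count
FROM MSrepl_transactions;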
View 3 Replies
View Related
Apr 21, 2015
We have many users with a mobile application running SQL Mobile and using merge replication to get data back to the SQL 2008 R2 database. This has worked very well for many years.
We now have a requirement to have this data reported on using Reporting Services. This is where it gets messy.
Due to a limitation of Report Builder (see this blog) we cannot give users access to create their own reports. The report database is remote from the host and there is no VPN.
We hit upon the idea of creating an almost identical publication, but with the articles set as read-only. It was only after this was done that we started having trouble with our existing mobile users.
It seems that a published article is EITHER Bi-directional OR Read-only even if they are in separate publications.
I then thought of using a transactional publication, but this too is blocked at creation with "automatic identity range support is useful only for publications that allow updating subscribers" (so merge and transactional publication appear to be mutually exclusive here).
So, in the final analysis, is there a way for me to have merge replication AND some other form of SQL replication/data transfer that transmits the same data read-only to a separate full SQL Server database?
View 9 Replies
View Related
Feb 21, 2007
We recently implemented merge replication between two SQL Servers (2005) on the same network, and since we introduced it, performance has degraded considerably on the subscriber end.
1) One thing that should be mentioned is that the flow of changes is unidirectional, from publisher to subscriber (there is only one publisher, which also acts as the distributor, and one subscriber).
2) Updates outnumber inserts, and one article, say "Article1", gets up to 2000 updates per day; I am seeing that dbo.MSmerge_upd_sp_Article1_GUID is taking more CPU time. What should we do?
On the subscriber database, response time is getting slow and I am seeing a large number of lock timeouts on the application end.
Can anyone also suggest server-level settings for avoiding the lock timeouts?
Looking for any experienced solutions/suggestions.
Thanks in advance.
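There is no single server-wide switch for this, but one commonly suggested, hedged option on the subscriber is row versioning, so readers are not blocked behind the merge agent's updates; the database name and timeout value below are placeholders:
-- Requires briefly kicking out other connections to switch on.
ALTER DATABASE SubscriberDB SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
-- Per session, the application can also bound how long it waits for locks.
SET LOCK_TIMEOUT 5000;  -- milliseconds; -1 (the default) waits indefinitely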
View 3 Replies
View Related
Aug 7, 2007
Hi all,
I have created a report in SSRS 2005 which is being viewed by users from different Time Zones.
I have a dataset which has a field of type datetime (UTC). Now I would like to display this Date according to the User Time Zone.
For example if the date is August 07, 2007 10:00 AM UTC,
then I would like to display it as August 07, 2007 03:30 PM IST if the user Time Zone is IST.
Similarly for other Time Zones it should display the time accordingly.
Is this possible in SSRS 2005?
Any pointers will be useful...
Thanks in advance
sudheer racha.
View 5 Replies
View Related
Apr 25, 2014
Sample Table
USE [Testing]
GO
/****** Object: Table [dbo].[Testing] Script Date: 4/25/2014 11:08:18 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
[Code] ....
It seems to work fine with one million records.
Each primary key is unique, but the begindate is non-unique. I guess that even if I use datetime2 and add nanoseconds, from what I have read there is still a chance I could get a duplicate datetime, since the dates are imported via XML from multiple sources.
View 7 Replies
View Related
Jun 11, 2015
Is there a way to keep track, in real time, of how long a stored procedure has been running? What I want to do is fire off a trace if a stored procedure has been running for more than about 5 minutes.
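A hedged sketch (assumes SQL Server 2005 or later): the DMVs expose the start time of every executing request, so a monitoring job can compute the elapsed time and react, e.g. by starting a trace, once it crosses five minutes:
SELECT r.session_id,
       OBJECT_NAME(t.objectid, t.dbid) AS running_object,   -- NULL for ad hoc batches
       DATEDIFF(MINUTE, r.start_time, GETDATE()) AS running_minutes
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
  AND DATEDIFF(MINUTE, r.start_time, GETDATE()) >= 5;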
View 5 Replies
View Related
Oct 22, 2015
I am trying to load previous days data at 3 am via a SSIS job.
The Date variable is initiated as DATEADD("dd",-1, GETDATE()) in the for loop.
Now, since this job runs at 3 am and I set the variable as GETDATE() - 1, it excludes the data from 12 am to 3 am from the result set, because the variable ends up as YYYY-MM-DD 03:00:00.000. I need it to be YYYY-MM-DD 00:00:00.000.
How can I do this?
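A hedged sketch of the variable expression, extending the DATEADD already used above: casting through DT_DBDATE drops the time portion, so the result lands on the previous day at midnight:
DATEADD("dd", -1, (DT_DATE)(DT_DBDATE)GETDATE())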
View 2 Replies
View Related
Oct 3, 2015
I hope to update a DateTime column value with a Time input parameter. My poor attempt is below. The @ApptTime parameter comes in as 10:45:00.0000000, and I might have an existing @SendOnDate of 2015-10-05 07:00:00.000; I hope to end up with 2015-10-05 10:45:00.000.
ALTER PROCEDURE [dbo].[SendEditUPDATE]
@QuePoolID int=null
,@ApptTime time(7)
,@SendOnDate datetime
[code]...
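A hedged sketch of the expression inside the procedure (assumes SQL Server 2008 or later, since time(7) is in use): strip @SendOnDate to its date, then add the time parameter converted to a datetime offset:
-- 2015-10-05 07:00:00.000 + 10:45:00 -> 2015-10-05 10:45:00.000
SET @SendOnDate = CAST(CAST(@SendOnDate AS date) AS datetime) + CAST(@ApptTime AS datetime);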
View 14 Replies
View Related
Dec 15, 2006
I am using VS2005 (VB) to develop a PPC WM5.0 program, with SQLCE 3.0. My PPC hardware runs at 400 MHz.
The question is that when the program tries to insert the first record into the .sdf database after each start, it takes a long time. Does anyone know why, and how can I fix it?
I load the whole database into a dataset when the program starts, do all the "Insert", "Update", "Delete" operations in this dataset, and write it back to the database after each action.
cn.Open()
sda = New SqlCeDataAdapter(SQL, cn) 'SQL = Select * From Table
scb = New SqlCeCommandBuilder(sda)
sda.Update(dataset)
cn.Close()
I checked sda.Update(); it normally takes about 0.08s to write one record back to the database. But:
1. Start the PPC program
2. Load the DB into a dataset
3. Create ONE new record in the dataset
4. Fill it back to the DB
When I go through these four steps, the write time is almost 1s or even more!
Actually, 0.08s is just the normal case. Sometimes it still takes over 1s to write back a dataset with only one inserted record while the program is running. (Even when all inserted records contain exactly the same data and differ only in the integer key.)
However, when I give up the dataset and use the following code:
cn.Open()
Dim cmd As New SqlCeCommand(SQL, cn) ' I have build the insert SQL before (Insert Into Table values(XXXXXXXXXXXXXXX All field)
cmd.CommandType = CommandType.Text
cmd.ExecuteNonQuery()
cn.Close()
StartTime = Environment.TickCount
I found that the first insert still takes more time, but only about 0.2s, and the normal insert time is around 0.02s. It is 4 times faster!!!
View 1 Replies
View Related
Apr 21, 2015
SELECT
  CONVERT(VARCHAR(10), attnc_chkin_dt, 101) AS INDATE,
  CONVERT(VARCHAR(10), attnc_chkin_dt, 108) AS TimePart
FROM pmt_attendance
Output:
INDATE: 04/18/2015
TimePart: 17:45:00
I need to convert this 17:45:00 to a 12-hour time format...
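Two hedged options, depending on version:
-- SQL Server 2012+: format just the time portion in 12-hour form.
SELECT FORMAT(attnc_chkin_dt, 'hh:mm:ss tt') AS TimePart12 FROM pmt_attendance;
-- Older versions: style 100 ends in 'hh:miAM' / 'hh:miPM' (no seconds).
SELECT LTRIM(RIGHT(CONVERT(VARCHAR(20), attnc_chkin_dt, 100), 7)) AS TimePart12 FROM pmt_attendance;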
View 8 Replies
View Related
Nov 12, 2007
Hi,
We need to select rows from the database that have been recently inserted/updated. We have a main primary table (COMMIT_TEST) and a second update table (COMMIT_TEST_UPDATE). The update table contains the primary key and a LAST_UPDATE field which is a datetime (to tell us when an update occurred). Triggers on the primary table are used to populate the update table.
If we insert or update the primary table in a transaction, we would expect that the datetime of the insert/update would be at the commit, however it seems that the insert/update statement is cached and getdate() is executed at the time of the cache instead of the commit. This causes problems as we select rows based on LAST_UPDATE and a commit may occur later but the earlier insert timestamp is saved to the database and we miss that update.
We would like to know if there is anyway to tell the SQL Server to not execute the function getdate() until the commit, or any other way to get the commit to create the correct timestamp.
We are using default isolation level. We have tried using getdate(), current_timestamp and even {fn Now()} with the same results. SQL Queries that reproduce the problem are provided below:
/* Different functions to get current timestamp - all have been tested to produce the same results */
/*
SELECT GETDATE()
GO
SELECT CURRENT_TIMESTAMP
GO
SELECT {fn Now()}
GO
*/
/* Use these statements to delete the tables to allow recreate of the tables */
/*
DROP TABLE COMMIT_TEST
DROP TABLE COMMIT_TEST_UPDATE
*/
/* Create a primary table and an UPDATE table to store the date/time when the primary table is modified */
CREATE TABLE dbo.COMMIT_TEST (PKEY int PRIMARY KEY, timestamp) /* ROW_VERSION rowversion */
GO
CREATE TABLE dbo.COMMIT_TEST_UPDATE (PKEY int PRIMARY KEY, LAST_UPDATE datetime, timestamp ) /* ROW_VERSION rowversion */
GO
/* Use these statements to delete the triggers to allow reinsert */
/*
drop trigger LOG_COMMIT_TEST_INSERT
drop trigger LOG_COMMIT_TEST_UPDATE
drop trigger LOG_COMMIT_TEST_DELETE
*/
/* Create insert, update and delete triggers */
create trigger LOG_COMMIT_TEST_INSERT on COMMIT_TEST for INSERT as
begin
declare @time datetime
select @time = getdate()
insert into COMMIT_TEST_UPDATE (PKEY,LAST_UPDATE)
select PKEY, getdate()
from inserted
end
GO
create trigger LOG_COMMIT_TEST_UPDATE on COMMIT_TEST for UPDATE as
begin
declare @time datetime
select @time = getdate()
update COMMIT_TEST_UPDATE
set LAST_UPDATE = getdate()
from COMMIT_TEST_UPDATE, deleted, inserted
where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
end
GO
/* In our application deletes should never occur, so we don't log when they get modified; we just delete them from the UPDATE table */
create trigger LOG_COMMIT_TEST_DELETE on COMMIT_TEST for DELETE as
begin
if ( select count(*) from deleted ) > 0
begin
delete COMMIT_TEST_UPDATE
from COMMIT_TEST_UPDATE, deleted
where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
end
end
GO
/* Delete any previous inserted record to avoid errors when inserting */
DELETE COMMIT_TEST WHERE PKEY = 1
GO
/* What is the current date/time */
SELECT GETDATE()
GO
BEGIN TRANSACTION
GO
/* Insert a record into the primary table */
INSERT COMMIT_TEST (PKEY) VALUES (1)
GO
/* Simulate additional processing within this transaction */
WAITFOR DELAY '00:00:10'
GO
/* We expect at this point that the date is written to the database (or at least we need some way for this to happen) */
COMMIT TRANSACTION
GO
/* get the current date to show us what date/time should have been committed to the database */
SELECT GETDATE()
GO
/* Select results from the table - we see that the timestamp is 10 seconds older than the commit, in other words it was evaluated at */
/* the insert statement, even though the row could not be read with a SELECT as it was uncommitted */
SELECT * FROM COMMIT_TEST
GO
SELECT * FROM COMMIT_TEST_UPDATE
Any help would be appreciated. We understand we could make changes to the application/database to approximate what we need, but all the solutions we have identified suffer from possible performance issues, or could still lead to missed deals (assuming the commit time is larger than some artificial time window).
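One hedged workaround (assumes SQL Server 2005 or later): keep stamping rows at statement time, but bound each sweep by the begin time of the oldest still-open transaction, so rows stamped inside uncommitted transactions are simply picked up by a later sweep instead of being missed:
DECLARE @last_sweep datetime, @safe_boundary datetime;
SET @last_sweep = '2007-11-01 00:00:00';  -- placeholder: value persisted from the previous sweep
SELECT @safe_boundary = ISNULL(MIN(transaction_begin_time), GETDATE())
FROM sys.dm_tran_active_transactions;
SELECT u.PKEY, u.LAST_UPDATE
FROM COMMIT_TEST_UPDATE AS u
WHERE u.LAST_UPDATE >= @last_sweep
  AND u.LAST_UPDATE < @safe_boundary;
-- Persist @safe_boundary as @last_sweep for the next run.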
Regards,
Mark
View 8 Replies
View Related
Sep 21, 2015
I need to do a time test for restoring an Azure SQL database from a point in time. Can I automate this through PowerShell?
View 3 Replies
View Related
Jul 6, 2004
Hi there guys!
I was wondering if anyone knows the maximum number of rows you can import from an external source into MS SQL Server 2000 Developer Edition?
I have tried importing a whole table with over a million records from another database, but it failed after 1.7 million rows.
Any ideas?
Thank you!
View 1 Replies
View Related
Aug 6, 1998
When creating a database, SQL Server 6.5 seems to have a 2 GB limit. What I mean is that if the device chosen is over 2 GB, SQL Server displays the size of the device as a negative number. This prevents me from being able to expand the database when I need to. Can anyone tell me why this is so, and if there's any way around it?
View 1 Replies
View Related
Feb 21, 2006
we all know
mysql: select * from table limit ?1, ?2
equals
sqlserver: SELECT TOP ?2 *
FROM table WHERE (IDENTITYCOL NOT IN
(SELECT TOP ?1 IDENTITYCOL
FROM table order by IDENTITYCOL))
order by IDENTITYCOL
But how do I convert the MySQL query below? I'm stuck...
select pageid, pagename, pageaddr, pageauditflag, pageartauditflag, startplaytime
from pageinfo where entryid= ?1 and startplaytime= ?2
limit ?3, ?4
thanks!
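If the target is SQL Server 2005 or later, ROW_NUMBER() is the usual translation; a hedged sketch with the offset (?3) and page size (?4) shown as literals, and the ?1/?2 parameters written as @entryid/@startplaytime placeholders:
SELECT pageid, pagename, pageaddr, pageauditflag, pageartauditflag, startplaytime
FROM (
    SELECT pageid, pagename, pageaddr, pageauditflag, pageartauditflag, startplaytime,
           ROW_NUMBER() OVER (ORDER BY pageid) AS rn
    FROM pageinfo
    WHERE entryid = @entryid AND startplaytime = @startplaytime
) AS numbered
WHERE rn > 20 AND rn <= 20 + 10   -- skip 20 rows (?3), take 10 (?4)
ORDER BY rn;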
View 14 Replies
View Related
Jul 20, 2005
Hi, I have a question. Maybe you know the equivalent of MySQL's LIMIT command; I couldn't find something like it in MS SQL. PS: I am trying to display 10 records beginning from e.g. row 4, sorted by id, something like "SELECT * FROM table WHERE name=... LIMIT 4, 10 ORDER BY id" in MySQL. Thanks, Urban
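On versions without ROW_NUMBER() (pre-2005), the usual workaround is nested TOP queries that reverse and then restore the sort; a hedged sketch for "10 rows starting after row 4, ordered by id" against a placeholder table name:
SELECT *
FROM (
    SELECT TOP 10 *
    FROM (
        SELECT TOP 14 *            -- 4 skipped + 10 wanted
        FROM mytable
        ORDER BY id
    ) AS first14
    ORDER BY id DESC               -- reverse so TOP trims from the far end
) AS page_rows
ORDER BY id;                       -- restore the original order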
View 2 Replies
View Related
Mar 26, 2007
Hi There
I recently read up and found an MSDN forum post that said the 4 GB limit only applied to data files, not log files.
However, I recently had an error on a SQL Express database saying that the log file was full, even though the log file was set to auto-grow and there was plenty of space left.
So I am guessing the 4 GB limit applies to any database file, MDF or LDF. Is this correct?
Thanx
View 5 Replies
View Related
Dec 26, 2014
I need to take a temporary table that has various times stored in a text field (4:30 pm, 11:00 am, 5:30 pm, etc.), convert them to military time, and then cast them as integers with an update statement, something like:
Update myTable set MovieTime = REPLACE(CONVERT(CHAR(5),GETDATE(),108), ':', '')
How can this be done while my temp table is in session?
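A hedged sketch against the same temp table: cast the stored text to datetime first, then format it with style 108 as in the attempt above (assumes SQL Server parses the stored strings such as '4:30 pm' as times of day, and that the text column holds at least four characters):
UPDATE myTable
SET MovieTime = REPLACE(CONVERT(CHAR(5), CAST(MovieTime AS datetime), 108), ':', '');
-- '4:30 pm' -> '1630', '11:00 am' -> '1100'; CAST(MovieTime AS int) afterwards if an integer is required.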
View 2 Replies
View Related
Oct 7, 2015
I have a table called employee_punch_record that we use to store employee time clock punches.
The columns are:
employeeid,
punch_timestamp,
punch_type (In / Out),
closed (bit used as status for open or closed pay periods),
ident
Here are some examples of a record:
(employeeid | ident | punch_timestamp | punch_type | closed)
bkingery | 6 | 2015-10-06 16:59:04.000 | In | 0
bkingery | 7 | 2015-10-06 16:59:09.000 | Out | 0
bkingery | 8 | 2015-10-06 16:59:13.000 | In | 0
bkingery | 9 | 2015-10-06 18:22:44.000 | Out | 0
bkingery | 10 | 2015-10-06 18:22:46.000 | In | 0
bkingery | 11 | 2015-10-06 18:22:48.000 | Out | 0
bkingery | 12 | 2015-10-06 18:22:51.000 | In | 0
tfeller | 5 | 2015-10-05 17:00:05.000 | In | 0
We are using SQL Server 2008 as our database and use Access as a GUI. I am looking to create a form in Access where employees can access their time card and request changes from management. I want to use the format from the attached screen shot for the form. I pretty much know how to do it all, the only point of complication is trying to figure out the easiest way to get the transaction punch record data on employee_punch_record into a format where I can easily populate the form in the horizontal format you see in the screen shot.
I am not super strong in SQL, but I figure I can do it using a formatting table of some sort. Is there a quick and easy way to move the transaction records into a more horizontally oriented format?
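A hedged sketch using conditional pairing rather than a formatting table: number the In punches and Out punches separately per employee per day, then join pair n of each (this assumes punches alternate In/Out within a day; column names are as listed above):
WITH numbered AS (
    SELECT employeeid,
           CAST(punch_timestamp AS date) AS punch_date,
           punch_type,
           punch_timestamp,
           ROW_NUMBER() OVER (PARTITION BY employeeid, CAST(punch_timestamp AS date), punch_type
                              ORDER BY punch_timestamp) AS pair_no
    FROM employee_punch_record
    WHERE closed = 0
)
SELECT i.employeeid, i.punch_date, i.pair_no,
       i.punch_timestamp AS time_in,
       o.punch_timestamp AS time_out
FROM numbered AS i
LEFT JOIN numbered AS o
  ON  o.employeeid = i.employeeid
  AND o.punch_date = i.punch_date
  AND o.pair_no    = i.pair_no
  AND o.punch_type = 'Out'
WHERE i.punch_type = 'In'
ORDER BY i.employeeid, i.punch_date, i.pair_no;
The result has one row per In/Out pair, which the Access form can then lay out horizontally.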
View 0 Replies
View Related
Feb 19, 2008
Hi all,
I have a very simple time series model whose processing works fine without any problems. However, when I run the following query
SELECT
[TimeSeries].[PriceChange],
[TimeSeries].[Symbol],
PredictTimeSeries(PriceChange, -3, 2)
From
[TimeSeries]
WHERE
[TimeSeries].[Symbol] = 'x'
I get the following error:
TITLE: Microsoft SQL Server 2005 Analysis Services
------------------------------
Error (Data mining): A time series prediction was requested with a start time further in the past than the internal models of the mining model, TimeSeries, specified in the HISTORIC_MODEL_GAP and HISTORIC_MODEL_COUNT parameters can process.
The following is the excerpt of the mining model script related to the two parameters:
<AlgorithmParameters>
<AlgorithmParameter>
<Name>MISSING_VALUE_SUBSTITUTION</Name>
<Value xsi:type="xsdtring">Previous</Value>
</AlgorithmParameter>
<AlgorithmParameter>
<Name>HISTORIC_MODEL_GAP</Name>
<Value xsi:type="xsd:int">1</Value>
</AlgorithmParameter>
<AlgorithmParameter>
<Name>HISTORIC_MODEL_COUNT</Name>
<Value xsi:type="xsd:int">10</Value>
</AlgorithmParameter>
</AlgorithmParameters>
These HISTORIC_MODEL_GAP (1) and HISTORIC_MODEL_COUNT (10) should accommodate PredictTimeSeries(PriceChange, -3, 2). Could anyone shed some light on this?
View 3 Replies
View Related
Apr 30, 2015
We have problems with our SQL Server Reporting Services 2012 (SSRS) server. We have set up Kerberos delegation between SSRS and the database server (a SQL Server AlwaysOn cluster) so users are authenticated down to the database. From time to time, SSRS loses the ability to delegate the user credentials to the database; when this happens, the Report Server logs contain rejected database connections due to ANONYMOUS LOGON. After restarting SSRS, the problem is gone.
View 2 Replies
View Related
May 26, 2005
Hi,
I have a table which has a few fields, one being "datetime_traded". I need to write a query which returns the row which has the closest time (down to second) given a date/time. I'm using MS SQL.
Here's what I have so far:
Code:
select * from TICK_D
where datetime_traded = (select min( abs(datediff(second,datetime_traded , Convert(datetime,'2005-05-30:09:31:09')) ) ) from TICK_D)
But I get an error - "The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value.".
Does anyone know how i could do this? Thanks a lot for any help!
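A hedged rewrite: the out-of-range error comes from the ':' between the date and the time in the literal, and the MIN() of a DATEDIFF cannot be compared to datetime_traded directly anyway; ordering by the absolute difference and taking the top row returns the closest tick (note DATEDIFF in seconds can overflow if rows are decades apart):
SELECT TOP 1 *
FROM TICK_D
ORDER BY ABS(DATEDIFF(second, datetime_traded, CONVERT(datetime, '2005-05-30 09:31:09')));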
View 2 Replies
View Related
Aug 17, 2007
OK, so I have some horribly convoluted SQL that I would love to optimize. I'm not happy leaving it in its current state, that's for sure!
I'm currently working on our test bed servers, so obviously my stats are out because of the "crap-ness" (yes, that's the technical term) of the hardware, but still, it should NEVER need to take this long!!
Basically, the issue arises in the nasty join to the career table (one employee can have multiple career lines). Just to make things complicated, employees can have any number of career records on any given date; these can even be entered for future career events. The following SQL picks out the latest current career record for each employee, based on the career_date being <= GetDate() and the date of entry for that date being the greatest.
E.g.
career_date | datetime_created
2009-01-01 | 2006-05-05 13:55:21.000
2007-01-01 | 2006-05-05 13:54:18.000
2007-01-01 | 2006-05-05 13:52:55.000
From the above we want to return
2007-01-01 | 2006-05-05 13:54:18.000
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT a.sAMAccountName AS 'sAMAccountName'
     , a.userPrincipalName AS 'userPrincipalName'
     , 'TRUE' AS 'Modify'
     , RTRIM(e.unique_identifier) AS 'employeeID'
     , RTRIM(e.employee_number) AS 'employeeNumber'
     , RTRIM(e.known_as)
       + CASE WHEN RTRIM(e.surname) IS NOT NULL
              THEN ' ' + RTRIM(e.surname) ELSE NULL END AS 'displayName'
     , RTRIM(e.known_as) AS 'givenName'
     , RTRIM(e.surname) AS 'sn'
     , RTRIM(c.job_title) AS 'title'
     , RTRIM(c.division) AS 'company'
     , RTRIM(c.department) AS 'department'
     , RTRIM(l.description) AS 'physicalDeliveryOfficeName'
     , RTRIM(REPLACE(am.dn, '\', '')) AS 'manager'
     , t.full_mobile
       + CASE WHEN RTRIM(t.mobile_number) IS NOT NULL
              THEN ' (DD: ' + RTRIM(t.mobile_number) + ')' ELSE NULL END AS 'mobile'
     , t.mobile_number AS 'otherMobile'
     , ad.address_ad_country AS 'c'
     , ad.address_ad_address1
       + CASE WHEN ad.address_ad_address2 IS NOT NULL THEN ', ' + ad.address_ad_address2 ELSE NULL END
       + CASE WHEN ad.address_ad_address3 IS NOT NULL THEN ', ' + ad.address_ad_address3 ELSE NULL END
       + CASE WHEN ad.address_ad_address4 IS NOT NULL THEN ', ' + ad.address_ad_address4 ELSE NULL END
       + CASE WHEN ad.address_ad_address5 IS NOT NULL THEN ', ' + ad.address_ad_address5 ELSE NULL END AS 'streetAddress'
     , ad.address_ad_pobox AS 'postOfficeBox'
     , ad.address_ad_city AS 'l'
     , ad.address_ad_County AS 'st'
     , ad.address_ad_postcode AS 'postalCode'
     , RTRIM(ad.address_ad_telephone)
       + CASE WHEN RTRIM(a.othertelephone) IS NOT NULL
                   AND RTRIM(ad.address_ad_telephone) IS NOT NULL
              THEN ' (Ext: ' + RTRIM(a.othertelephone) + ')'
              ELSE CASE WHEN RTRIM(a.othertelephone) IS NOT NULL
                             AND RTRIM(ad.address_ad_telephone) IS NULL
                        THEN 'Ext: ' + RTRIM(a.othertelephone)
                        ELSE NULL
                   END
         END AS 'telephoneNumber'
FROM employee e
LEFT JOIN career c
       ON c.parent_identifier = e.unique_identifier
      AND c.career_date = (
              SELECT MAX(c2.career_date)
              FROM pwa_master.career c2
              WHERE c2.parent_identifier = c.parent_identifier
                AND c2.career_date <= GETDATE()
          )
      AND c.datetime_created = (
              SELECT MAX(c3.datetime_created)
              FROM pwa_master.career c3
              WHERE c3.parent_identifier = c.parent_identifier
                AND c3.career_date = c.career_date
          )
LEFT OUTER JOIN AD_Import am
       ON am.employeeNumber = c.manager_number
INNER JOIN AD_Import a
       ON a.employeeID = e.unique_identifier
LEFT JOIN AD_Telephone t
       ON t.unique_identifier = e.unique_identifier
LEFT JOIN AD_Address ad
       ON ad.address_pwa_location = e.location
LEFT JOIN xlocat l
       ON l.code = c.location
WHERE (a.employeeNumber IS NOT NULL
   OR a.employeeID IS NOT NULL)
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
(1706 row(s) affected)
Table 'AD_Import'. Scan count 4, logical reads 106, physical reads 0, read-ahead reads 0.
Table 'AD_Address'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0.
Table 'AD_Telephone'. Scan count 2, logical reads 10, physical reads 0, read-ahead reads 0.
Table 'Worktable'. Scan count 868, logical reads 956, physical reads 0, read-ahead reads 0.
Table 'xlocat'. Scan count 2, logical reads 8, physical reads 0, read-ahead reads 0.
Table 'career'. Scan count 5088, logical reads 2564843, physical reads 0, read-ahead reads 0.
Table 'people'. Scan count 1697, logical reads 5253, physical reads 0, read-ahead reads 0.
Table 'Worktable'. Scan count 826, logical reads 914, physical reads 0, read-ahead reads 0.
SQL Server Execution Times:
CPU time = 15203 ms, elapsed time = 8114 ms.
Any advice on what I can do to optimize?
Oh, just to point out that "employee" is a view over the 'people' table.
EDIT: I know it's pointing out the obvious, but I'm pulling the manager's "DN" out of AD_Import by matching manager_number to employeeNumber.
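Almost all of the reads are against career (2.5 million logical reads), driven by the two correlated subqueries. If this is SQL Server 2005 or later, one hedged alternative is to pick each employee's latest-current career row once with ROW_NUMBER() and join to that instead; shown here in isolation so it can be compared against the original join:
SELECT e.unique_identifier, c.job_title, c.division, c.department, c.location,
       c.career_date, c.datetime_created
FROM employee AS e
LEFT JOIN (
    SELECT ca.*,
           ROW_NUMBER() OVER (PARTITION BY ca.parent_identifier
                              ORDER BY ca.career_date DESC, ca.datetime_created DESC) AS rn
    FROM pwa_master.career AS ca
    WHERE ca.career_date <= GETDATE()
) AS c
  ON  c.parent_identifier = e.unique_identifier
  AND c.rn = 1;
An index on career (parent_identifier, career_date, datetime_created) should support either form; whether this beats the correlated subqueries is something to verify against the real data.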
View 14 Replies
View Related