I recently installed SharePoint 3.0 on our file server, which hosts our main database on MSDE. I didn't know at the time that it would also install SQL Server 2005 Embedded Edition, but even if I had, I don't think it would have changed my decision.
Anyway, soon after, performance on MSDE completely tanked. Queries would execute extremely slowly, or not at all. I checked the CPU and memory usage, and both were fine. No blocked SQL commands either. We ended up just killing the SQL 2005 Embedded Edition service. We're moving off MSDE eventually, but I would still like to find out why it happened, and whether there's a fix or workaround.
Hi, we have a SQL 7 / Win2K cluster, and yesterday afternoon the users were complaining about poor performance. Their queries were timing out (not all of them, just some queries on some large tables).
I ran an ad-hoc query against the table from my machine and also timed out. Then I went right to the box that had control of the cluster, did the same thing there, and also timed out. Because of time constraints (and because we are in testing mode) we tested a failover, and everything was back to normal after that.
So now we want to figure out what the problem could have been. At the time, I checked the memory and CPU usage: CPU was very low (0-5%) and only 1/3 of the memory was in use. It couldn't be a bad query or a bad index, because after the failover everything worked fine.
Could there be something wrong with the specific box that had control at the time? I don't know where to look.
We have several SQL 2000 databases on one server. One of the applications I'm responsible for has batch jobs that run for an hour; all of that activity hits a single database. During this hour, other applications that use other databases on the same server experience time-outs. One of my coworkers did a count(*) on an empty table and it took 11 seconds.
We pay people to keep our servers up and running. Is this something they might solve by reconfiguring the server? It seems strange to me that a single database is allowed to hog all server resources. We are meeting with them later this week, and I'd like to have some knowledge about this; we don't want to be BS'ed into buying a new server.
Hi all. I have some rather large SQL Server 2000 databases (around 60GB). I have set up jobs to re-index the tables and update statistics every Sunday. This worked well for a few months. Now, after a day or two of use, connections keep timing out. If I start the jobs manually, all is well for two days or so. Surely there is a better solution to this? TIA, Ryan.
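Not a full answer, but one direction commonly suggested for SQL 2000 (a sketch; the database, table, and index names are illustrative): supplement the Sunday rebuild with the lighter-weight, online DBCC INDEXDEFRAG plus a statistics refresh on the busiest tables mid-week, so fragmentation and stale statistics don't accumulate for a full week:

-- online defragmentation; far less intrusive than a full rebuild
DBCC INDEXDEFRAG (MyDatabase, Orders, PK_Orders)

-- refresh statistics on a heavily modified table
UPDATE STATISTICS Orders WITH SAMPLE 20 PERCENT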
The C++ application calls the database to look up property data. One troublesome query is a function that returns a table, finding data which is assembled from four or five tables through a view that has a join, and then updating the resulting @table from some other tables. There are several queries inside the function, which are selected according to which parameters are supplied (house #, street, zip, or perhaps parcel number, or house #, street, town, city, etc.). If a lot of parameters are provided and the property is not in the database, then several queries may be attempted -- it keeps going until it runs out of queries or finds something. Usually it takes ~1-2 sec for a hit, but maybe a minute in some failure cases, depending on the distribution of data (~100 million properties in the DB). Some queries operate on the assumption that the input data is slightly faulty, and take a relatively long time; e.g., if WHERE ZIP=@Zip fails, we try WHERE ZIP LIKE substring(@Zip,1,3)+'%'. While all this is going on, the application may decide the DB is never going to return and time out; it also seems more likely to throw an exception the longer it has to wait. Is there a way to cause the DB function to fail if it takes more than a certain amount of time? I could also recast it as a procedure, check the time consumed after every query, and abandon the search if a certain amount of time has elapsed. Thanks in advance, Jim Geissman
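On the "recast it as a procedure" idea: there is no built-in per-statement timeout inside a function, but a procedure can check the elapsed time between attempts and give up. A minimal sketch, with the property table, columns, and search logic reduced to placeholders:

CREATE PROCEDURE dbo.FindProperty
    @Zip varchar(10) = NULL,
    @HouseNo varchar(10) = NULL,
    @MaxSeconds int = 10
AS
BEGIN
    DECLARE @start datetime
    SET @start = GETDATE()

    -- attempt 1: exact match
    SELECT PropertyId FROM dbo.Property
    WHERE ZIP = @Zip AND HouseNo = @HouseNo
    IF @@ROWCOUNT > 0 RETURN

    -- bail out if the time budget is already spent
    IF DATEDIFF(second, @start, GETDATE()) > @MaxSeconds RETURN

    -- attempt 2: assume a slightly faulty ZIP
    SELECT PropertyId FROM dbo.Property
    WHERE ZIP LIKE SUBSTRING(@Zip, 1, 3) + '%' AND HouseNo = @HouseNo
    IF @@ROWCOUNT > 0 RETURN

    IF DATEDIFF(second, @start, GETDATE()) > @MaxSeconds RETURN
    -- further, ever-fuzzier attempts follow the same pattern ...
END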
Okay, we are running our Master Package (and therefore all related child packages) through a .bat file. The .bat file is scripted using the following logic for an entire month of daily runs:
Code Snippet
DTExec /FILE E:\ETL\FinancialDataMart\Master.dtsx /DECRYPT masterpwd /SET \Package.Variables[ReportingDate].Value;"2/01/2007" > E:\ETL\ErrorLogs\Processing\etl_20070201log.txt
IF NOT %ERRORLEVEL%==0 GOTO ERROR%ERRORLEVEL%
mkdir E:\ETL\ErrorLogs\Archive\20070201
move E:\ETL\ErrorLogs\Processing\*.txt E:\ETL\ErrorLogs\Archive\20070201
DTExec /FILE E:\ETL\FinancialDataMart\Master.dtsx /DECRYPT masterpwd /SU /SET \Package.Variables[ReportingDate].Value;"2/02/2007" > E:\ETL\ErrorLogs\Processing\etl_20070202log.txt
IF NOT %ERRORLEVEL%==0 GOTO ERROR%ERRORLEVEL%
mkdir E:\ETL\ErrorLogs\Archive\20070202
move E:\ETL\ErrorLogs\Processing\*.txt E:\ETL\ErrorLogs\Archive\20070202
etc...
Generally it takes about 40-45 minutes to run one day's worth of data. However, we have found unpredictable instances where the job will take 3 hours or even 6 hours and appear to hang....
The weirdness sets in when we kill the job and rerun it. In every instance of a rerun, the job executes in the normal 40-45 minute time frame. Obviously, we would like to institute some sort of logging, monitoring, and error handling, including, if need be, a method to time out a process and restart it.
I am reviewing the WMI (Windows Management Instrumentation) Task but I'm not entirely convinced that it's the right tool for the job.
Questions:
Has anyone else experienced the type of processing behavior I described? Has anyone been successful at using WMI or another process to monitor and time out packages? If so, are there sample packages or a good tutorial that maps it out? Unrelated to this issue, we also have instances of incomplete processing logs: the logs don't finish writing, and the weird part is that they all end at the same point. Does anyone have experience with incomplete job logs?
Hi everyone! I'm new to this forum and I suspect I'll be using this forum frequently. Good stuff.
Although this question may appear to be Web-related, I think the problem is with what I'm doing with the database. Please read.
I'm trying to implement a page-tracking solution using ASP and SQL 2000. It basically writes a new record to a table every time a user visits a page on the site. It appeared to work fine at first, but then I increasingly got time-out errors on my pages -- all pointing to the include file that fires the database write.
Here's the code that's referenced on every page:
Set Conn = Server.CreateObject("ADODB.Connection")
Conn.Open "dsn=x;uid=y;pwd=z;"
Set objRecordset1 = Server.CreateObject("ADODB.Recordset")
objRecordset1.Open "SELECT * FROM table", Conn, 1, 2
objRecordset1.AddNew
objRecordset1.Fields("PAGE") = Left(Request.ServerVariables("SCRIPT_NAME"), 100)
objRecordset1.Fields("QUERY_STRING") = Left(Request.ServerVariables("QUERY_STRING"), 100)
objRecordset1.Fields("DATE") = Date()
objRecordset1.Fields("TIME") = Time()
objRecordset1.Fields("PLATFORM") = Left(Request.ServerVariables("HTTP_USER_AGENT"), 100)
objRecordset1.Fields("REFERRER") = Left(Request.ServerVariables("HTTP_REFERER"), 100)
objRecordset1.Fields("USER_IP") = Left(Request.ServerVariables("REMOTE_ADDR"), 20)
If Request.Cookies("TEST")("ID") <> "" Then
    objRecordset1.Fields("VISITOR_ID") = Request.Cookies("TEST")("ID")
End If
objRecordset1.Update
Conn.Close
Set Conn = Nothing
%>
After taking out the reference to the above code, everything speeds back up. So I know the performance hit and the time-out issues have to do with the code above.
Is it the simultaneous writes to the table, the constant opening and closing of the recordset, the cursor type, the lock type -- or a combination of things?
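For comparison, one commonly suggested alternative is to replace the keyset recordset with a single INSERT through a stored procedure, so each page view costs one round trip and no cursor at all. A minimal sketch -- the table name dbo.PageLog is hypothetical, and the column list is assumed from the ASP code above:

CREATE PROCEDURE dbo.LogPageView
    @Page varchar(100),
    @QueryString varchar(100),
    @Platform varchar(100),
    @Referrer varchar(100),
    @UserIP varchar(20),
    @VisitorID int = NULL
AS
BEGIN
    -- one set-based insert; no recordset, no cursor, no row re-fetch
    INSERT INTO dbo.PageLog ([PAGE], QUERY_STRING, [DATE], [TIME], PLATFORM, REFERRER, USER_IP, VISITOR_ID)
    VALUES (@Page, @QueryString,
            CONVERT(char(10), GETDATE(), 101),  -- date portion
            CONVERT(char(8), GETDATE(), 108),   -- time portion
            @Platform, @Referrer, @UserIP, @VisitorID)
END

The ASP side then shrinks to a single Conn.Execute call running EXEC dbo.LogPageView, with the recordset object gone entirely.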
We have 4 clustered SQL 2000 servers, each containing information specific to its application, related to customer information in a casino player-tracking database. My problem is as follows: on the player-tracking database I can join and return information from the tables there with no problems, and performance across the decently sizable transaction table is pretty decent. The problem is that I need to filter this query down by the type of machine the customer plays. The child key exists in the player transaction table; the parent key is on another server. Here is the layout of the tables (unnecessary columns have been truncated for brevity).
CREATE TABLE [dbo].[PlayerSession] (
    [PlayerId] [int] NOT NULL ,
    [Mnum] [int] NOT NULL ,
    [CoinIn] [money] NOT NULL ,
    [CoinOut] [money] NOT NULL ,
    [Games] [int] NOT NULL ,
    [Jackpot] [money] NULL ,
    [Win] [money] NULL ,
    [TheoWin] [money] NOT NULL ,
    [PlayerMod] [tinyint] NOT NULL
) ON [PRIMARY]
-- Player demographics information
CREATE TABLE [dbo].[Player] (
    [PlayerId] [int] NOT NULL ,
    [Status] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [Title] [varchar] (5) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [FirstName] [varchar] (40) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [MiddleName] [varchar] (40) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [LastName] [varchar] (40) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [SSN] [varchar] (16) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
--Machine Information that links to Machine Type table
CREATE TABLE [dbo].[Machine] (
    [MNum] [int] NOT NULL ,
    [MachineTypeId] [smallint] NOT NULL
) ON [PRIMARY]
-- Machine type code table
CREATE TABLE [dbo].[MachineType] (
    [MachineTypeId] [smallint] NOT NULL ,
    [Denom] [int] NOT NULL ,
    [Par] [decimal](6, 2) NOT NULL ,
    [GameType] [varchar] (40) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]
From the server where all the player information resides, I can query the linked database for the machine and machine-type information like this:
SELECT m.MNum, mt.MachineTypeId, mt.GameType, mt.DisplayType
FROM ACCTV.Accounting.dbo.Machine m
INNER JOIN ACCTV.Accounting.dbo.MachineType mt ON m.MachineTypeId = mt.MachineTypeId
This is the information I am trying to get out, but the query times out on me:
SELECT Player.PlayerId, Player.FirstName, Player.LastName,
       SUM(PlayerSession.CoinIn) AS SumCI,
       SUM(PlayerSession.CoinOut) AS SumCO,
       SUM(PlayerSession.TheoWin) AS SumTheo,
       AVG(PlayerSession.TheoWin) AS AvgTheo,
       SUM(PlayerSession.Win) AS SumWin,
       AVG(PlayerSession.Win) AS AvgWin,
       mt.GameType
FROM Player
INNER JOIN PlayerSession ON Player.PlayerId = PlayerSession.PlayerId
INNER JOIN ACCT.Accounting.dbo.Machine M ON PlayerSession.Mnum = M.MNum
INNER JOIN ACCT.Accounting.dbo.MachineType mt ON M.MachineTypeId = mt.MachineTypeId
GROUP BY Player.PlayerId, Player.FirstName, Player.LastName, mt.GameType
The other option would be some sort of subquery, but I don't know how to return results from the subqueries to the root query so they end up in the resulting recordset. I am not necessarily looking for an answer so much as a direction to go to find my solution.
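One direction, sketched under the assumption that the machine data is much smaller than the session data: copy the remote Machine/MachineType rows into a local temporary table first, then run the aggregation entirely locally, so the linked server is hit once instead of inside the big join:

-- pull the small remote lookup data across once
SELECT m.MNum, mt.MachineTypeId, mt.GameType
INTO #MachineInfo
FROM ACCTV.Accounting.dbo.Machine m
INNER JOIN ACCTV.Accounting.dbo.MachineType mt ON m.MachineTypeId = mt.MachineTypeId

-- then aggregate locally against the temp table
SELECT p.PlayerId, p.FirstName, p.LastName,
       SUM(ps.CoinIn) AS SumCI, SUM(ps.CoinOut) AS SumCO,
       SUM(ps.TheoWin) AS SumTheo, AVG(ps.TheoWin) AS AvgTheo,
       SUM(ps.Win) AS SumWin, AVG(ps.Win) AS AvgWin,
       mi.GameType
FROM Player p
INNER JOIN PlayerSession ps ON p.PlayerId = ps.PlayerId
INNER JOIN #MachineInfo mi ON ps.Mnum = mi.MNum
GROUP BY p.PlayerId, p.FirstName, p.LastName, mi.GameType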
Hi - I am a bit of an amateur, but I am trying to use the EMS Data Pump to get all of the data out of an Interbase 6 DB into an MSDE DB. I have managed to install an Interbase ODBC driver, and I can connect to it and then automatically create the tables in my MSDE DB, but I get an error stating that there cannot be multiple timestamp columns in the MSDE database. Is this true?
I am joining the thread that Mike initially started. We have an x64 SQL Server with 32GB RAM. On startup, sqlserver.exe starts with 15 to 20MB of RAM. After that, the memory gradually increases in steps of 100MB and reaches 31.8GB in the end. We don't see an out-of-memory situation so far and the memory remains at that level, but once sqlserver.exe reaches that level we face application instability issues. There is no other application running on this server. At present the database size is 28GB, and we have the following in place:
There is database replication running.
A weekly maintenance plan handles reindexing, backups, and other maintenance.
There is a log backup job which runs once every 2 hours.
The tempdb size grows to 3GB max. We didn't see any orphaned temp tables left behind in tempdb.
We have the required indexes in place on the tables to reduce scan time. Also, the server is configured to use dynamic memory allocation; all memory settings are at their factory defaults.
The database encounters on average 200 to 500 connections at a time. One observation is that the memory goes up as soon as replication starts; this happens on one of the 2 servers.
Please advise what is causing this issue and how to go about it.
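Not a definitive diagnosis, but some context: by design, SQL Server grows its buffer cache to use nearly all the memory it is allowed, and with the dynamic (factory) settings that is almost the whole box, so high memory use by sqlserver.exe is normal rather than a leak. If other processes need headroom, the usual step is to cap max server memory. A sketch -- the 28672MB figure is only an illustration, leaving roughly 4GB for the OS:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28672;  -- illustrative cap, tune to your workload
RECONFIGURE;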
I'm not sure if it's the setup I did wrong, but I can't seem to get the text datatype in my database to store more than 900 characters. I'm trying to set up a news database for my website, which will populate the information into a datagrid. To test, I manually added a news item to the database through the Visual Studio 2003 GUI. I immediately noticed a problem: after entering a long news item, I was getting an error saying:
"The value you entered is not consistent with the data type or length of the column, or over grid buffer limit."
I couldn't find anything to set the buffer limit, and the datatype is "text", filled with simple text in the column. As a further test, I simply entered 1234567890123... up to 900 characters and still received the error.
I would appreciate someone leading me in the right direction on this one.
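One way to narrow this down (a sketch; dbo.News and the Body column are hypothetical names standing in for the real table): insert a long value with plain T-SQL and read back its length. If this succeeds, the 900-character ceiling is in the Visual Studio 2003 grid editor, not in the text column itself, and inserts from the website will be unaffected:

-- insert 5,000 characters into the text column, bypassing the grid editor
INSERT INTO dbo.News (Body) VALUES (REPLICATE('1234567890', 500));

-- confirm what was actually stored
SELECT DATALENGTH(Body) AS StoredBytes FROM dbo.News;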
I am trying to change the computer name of a machine running MSDE, but I get an error when SQL Server starts. With regular SQL Server, when I change the name of a computer I re-run setup and setup fixes the problem. MSDE can only be installed in unattended mode, so I can't rerun setup to fix the problem.
My question is: how do I change the name of a computer running MSDE without reinstalling MSDE?
We currently have the problem that all our machines are produced with the same name, and afterwards the name is changed. So we have the problem that the checksum key for the MSDE isn't valid anymore. As MSDE can only be installed in unattended mode, I can't rerun setup to fix the problem. Does anyone know a solution for this problem? A program for recalculating the checksum?
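I can't vouch for the checksum issue itself, but the documented way to re-register a renamed machine inside SQL Server/MSDE (run after the Windows rename, then restart the service) is the sp_dropserver/sp_addserver pair -- a sketch with placeholder names:

EXEC sp_dropserver 'OLD_MACHINE_NAME'
GO
EXEC sp_addserver 'NEW_MACHINE_NAME', 'local'
GO
-- restart the MSDE service, then verify the registered name:
SELECT @@SERVERNAME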
I'm not sure if this is the correct forum for this question, but I'll give it a shot.
The only DB development I have ever done is in MS Access. I have a project at work that is being built in VB, and I need a database engine to use as the back end. Visual Studio came with a copy of MSDE. Is this tool worth using, or should I invest in MySQL? Are there any advantages to using MSDE over MySQL?
USE [Testing]
GO

/****** Object: Table [dbo].[Testing] Script Date: 4/25/2014 11:08:18 AM ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
[Code] ....
It seems to work fine with one million records.
Each primary key is unique, but the begindate is non-unique, and I guess even if I use datetime2 and add nanoseconds, from what I have read there is still a chance I could get a duplicate datetime, since the date is imported via XML from multiple sources.
Is there a way to keep track in real time of how long a stored procedure has been running? What I want to do is fire off a trace if a stored procedure has been running for over 5 minutes.
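On SQL Server 2005 and later, currently executing requests and their start times are exposed in sys.dm_exec_requests, so a polling job (for example a SQL Agent job on a one-minute schedule) can spot long runners and kick off the trace. A minimal sketch:

-- requests that have been running for more than 5 minutes
SELECT r.session_id,
       r.start_time,
       DATEDIFF(minute, r.start_time, GETDATE()) AS running_minutes,
       t.text AS sql_text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.session_id <> @@SPID
  AND DATEDIFF(minute, r.start_time, GETDATE()) > 5;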
I am trying to load the previous day's data at 3 AM via a SSIS job.
The Date variable is initialized as DATEADD("dd", -1, GETDATE()) in the For Loop.
Now, as this job runs at 3 AM and I set the variable as GETDATE() - 1, it excludes the data from 12 AM to 3 AM from the result set, because Date is set to YYYY-MM-DD 03:00:00.000. I need it to be set to YYYY-MM-DD 00:00:00.000.
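A sketch of the usual fix: strip the time portion when building the variable. In an SSIS expression, casting through DT_DBDATE drops the time of day; the T-SQL line below is the equivalent idiom if the truncation is done in a query instead:

(DT_DATE)(DT_DBDATE)DATEADD("dd", -1, GETDATE())

-- T-SQL equivalent: midnight of the previous day
SELECT DATEADD(dd, DATEDIFF(dd, 0, GETDATE()) - 1, 0)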
I am hoping to update a DateTime column value with a Time input parameter. My poor attempt is below, but it looks like the @ApptTime param is coming in as 10:45:00.0000000, and I might have an existing @SendOnDate of 2015-10-05 07:00:00.000. I hope to end up with 2015-10-05 10:45:00.000.
ALTER PROCEDURE [dbo].[SendEditUPDATE]
    @QuePoolID int = null,
    @ApptTime time(7),
    @SendOnDate datetime
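One way to combine the two values (a sketch of the UPDATE logic only, not the full procedure): reduce @SendOnDate to its date part, then add @ApptTime back on. Adding two datetime values sums their internal date and time offsets, so the 1900-01-01 base of the cast time drops out:

-- keep the date from @SendOnDate, take the time of day from @ApptTime
SET @SendOnDate = CAST(CAST(@SendOnDate AS date) AS datetime)
                + CAST(@ApptTime AS datetime);
-- 2015-10-05 07:00:00.000 with 10:45:00 -> 2015-10-05 10:45:00.000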
I am using VS2005 (VB) to develop a PPC WM5.0 program, and I am using SQL CE 3.0. My PPC hardware runs at 400MHz.
The question is: when the program tries to insert the first record into the .sdf database after each time the program is started, it takes a long time. Does anyone know why, and how I can fix it?
I load the whole database into a dataset when the program starts, do all the Insert/Update/Delete work in this dataset, and write the changes back to the database after each action.
cn.Open()
sda = New SqlCeDataAdapter(SQL, cn)  'SQL = "Select * From Table"
scb = New SqlCeCommandBuilder(sda)
sda.Update(dataset)
cn.Close()
I checked sda.Update(); normally it takes about 0.08s to write one record back to the database. But:
1. Start the PPC Program
2. Load DB into dataset
3. Create ONE new record in the dataset
4. Fill back to DB
When I go through these four steps each time, the write takes almost 1s or even more!
Actually, 0.08s is just the normal case. Sometimes it still takes over 1s to write back a dataset with only one inserted record while the program is running. (Even when all inserted records are exactly the same in data, just different in the integer key.)
However, when I give up the dataset and using the following code:
cn.Open()
Dim cmd As New SqlCeCommand(SQL, cn)  ' the insert SQL was built beforehand: Insert Into Table Values(... all fields ...)
cmd.ExecuteNonQuery()  ' added for completeness; the original snippet omitted the execute call
cn.Close()
I found that the first insert still takes more time, but only about 0.2s, and the normal insert time is around 0.02s. That is 4 times faster!!!
We need to select rows from the database that have been recently inserted/updated. We have a main primary table (COMMIT_TEST) and a second update table (COMMIT_TEST_UPDATE). The update table contains the primary key and a LAST_UPDATE field which is a datetime (to tell us when an update occurred). Triggers on the primary table are used to populate the update table.
If we insert or update the primary table in a transaction, we would expect the datetime of the insert/update to reflect the commit; however, getdate() is evaluated when the statement executes, not when the transaction commits. This causes problems: we select rows based on LAST_UPDATE, and when a commit occurs later, the earlier statement-time timestamp is what gets saved to the database, so we miss that update.
We would like to know if there is any way to tell SQL Server not to evaluate getdate() until the commit, or any other way to get the commit to create the correct timestamp.
We are using the default isolation level. We have tried getdate(), current_timestamp and even {fn Now()}, all with the same results. SQL queries that reproduce the problem are provided below:
/* Different functions to get the current timestamp - all have been tested to produce the same results */
/*
SELECT GETDATE()
GO
SELECT CURRENT_TIMESTAMP
GO
SELECT {fn Now()}
GO
*/

/* Use these statements to delete the tables to allow recreate of the tables */
/*
DROP TABLE COMMIT_TEST
DROP TABLE COMMIT_TEST_UPDATE
*/

/* Create a primary table and an UPDATE table to store the date/time when the primary table is modified */
CREATE TABLE dbo.COMMIT_TEST (PKEY int PRIMARY KEY, timestamp) /* ROW_VERSION rowversion */
GO
CREATE TABLE dbo.COMMIT_TEST_UPDATE (PKEY int PRIMARY KEY, LAST_UPDATE datetime, timestamp) /* ROW_VERSION rowversion */
GO

/* Use these statements to delete the triggers to allow reinsert */
/*
drop trigger LOG_COMMIT_TEST_INSERT
drop trigger LOG_COMMIT_TEST_UPDATE
drop trigger LOG_COMMIT_TEST_DELETE
*/

/* Create insert, update and delete triggers */
create trigger LOG_COMMIT_TEST_INSERT on COMMIT_TEST for INSERT as
begin
    declare @time datetime
    select @time = getdate()

    insert into COMMIT_TEST_UPDATE (PKEY, LAST_UPDATE)
    select PKEY, getdate() from inserted
end
GO

create trigger LOG_COMMIT_TEST_UPDATE on COMMIT_TEST for UPDATE as
begin
    declare @time datetime
    select @time = getdate()

    update COMMIT_TEST_UPDATE
    set LAST_UPDATE = getdate()
    from COMMIT_TEST_UPDATE, deleted, inserted
    where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
end
GO

/* In our application deletes should never occur, so we don't log when they get modified; we just delete them from the UPDATE table */
create trigger LOG_COMMIT_TEST_DELETE on COMMIT_TEST for DELETE as
begin
    if (select count(*) from deleted) > 0
    begin
        delete COMMIT_TEST_UPDATE
        from COMMIT_TEST_UPDATE, deleted
        where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
    end
end
GO

/* Delete any previously inserted record to avoid errors when inserting */
DELETE COMMIT_TEST WHERE PKEY = 1
GO

/* What is the current date/time */
SELECT GETDATE()
GO

BEGIN TRANSACTION
GO

/* Insert a record into the primary table */
INSERT COMMIT_TEST (PKEY) VALUES (1)
GO

/* Simulate additional processing within this transaction */
WAITFOR DELAY '00:00:10'
GO

/* We expect at this point that the date is written to the database (or at least we need some way for this to happen) */
COMMIT TRANSACTION
GO

/* Get the current date to show us what date/time should have been committed to the database */
SELECT GETDATE()
GO

/* Select results from the table - we see that the timestamp is 10 seconds older than the commit; in other words it was evaluated at */
/* the insert statement, even though the row could not be read with a SELECT as it was uncommitted */
SELECT * FROM COMMIT_TEST
GO
SELECT * FROM COMMIT_TEST_UPDATE
Any help would be appreciated. We understand we could make changes to the application/database to approximate what we need, but all the solutions we have identified suffer from possible performance issues, or could still lead to missed deals (assuming the commit time is larger than some artificial time window).
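For what it's worth, one pattern that sidesteps the statement-time-versus-commit-time gap entirely (a sketch; it assumes the commented-out ROW_VERSION rowversion column is enabled, SQL Server 2005 or later, and a hypothetical SyncWatermark bookkeeping table): rowversion values are also assigned at statement time, but MIN_ACTIVE_ROWVERSION() lets the reader exclude rows belonging to transactions that are still open, so no commit is ever missed:

DECLARE @from binary(8), @to binary(8);

SELECT @from = LastWatermark FROM dbo.SyncWatermark;  -- hypothetical bookkeeping table
SET @to = MIN_ACTIVE_ROWVERSION();                    -- lowest rowversion still uncommitted

-- rows committed since the last sweep; open transactions are excluded
SELECT * FROM dbo.COMMIT_TEST
WHERE ROW_VERSION >= @from AND ROW_VERSION < @to;

UPDATE dbo.SyncWatermark SET LastWatermark = @to;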
I need to take a temporary table that has various times stored in a text field (4:30 pm, 11:00 am, 5:30 pm, etc.), convert them to military time, and then cast them as integers with an update statement, kind of like:
Update myTable set MovieTime = REPLACE(CONVERT(CHAR(5),GETDATE(),108), ':', '')
How can this be done while my temp table is in session?
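Since T-SQL will cast strings like '4:30 pm' directly to datetime, your own REPLACE/CONVERT shape works almost unchanged against the stored values (a sketch; the temp table and column names are illustrative). Temp tables live for the whole session, so running this in the connection that created #MovieTimes is fine:

-- convert '4:30 pm' -> 1630, '11:00 am' -> 1100, etc.
UPDATE #MovieTimes
SET MovieTime = REPLACE(CONVERT(char(5),
        CAST(CAST(MovieTime AS varchar(20)) AS datetime), 108), ':', '');

-- the column now holds military-time strings castable to int
SELECT CAST(MovieTime AS int) AS MilitaryTime FROM #MovieTimes;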
We are using SQL Server 2008 as our database and use Access as a GUI. I am looking to create a form in Access where employees can access their time card and request changes from management. I want to use the format from the attached screen shot for the form. I pretty much know how to do it all; the only point of complication is figuring out the easiest way to get the transaction punch record data in employee_punch_record into a format where I can easily populate the form in the horizontal layout you see in the screen shot.
I am not super strong in SQL, but I figure I can do it using a formatting table of some sort. Is there a quick and easy way to move transaction records into a more horizontally oriented record?
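The usual quick answer is conditional aggregation (or PIVOT) rather than a formatting table. A sketch, with hypothetical column names since the real schema isn't shown:

-- one row per employee per day, punches spread across columns
SELECT employee_id,
       CAST(punch_datetime AS date) AS punch_date,
       MIN(CASE WHEN punch_type = 'IN'  THEN CAST(punch_datetime AS time) END) AS time_in,
       MIN(CASE WHEN punch_type = 'OUT' THEN CAST(punch_datetime AS time) END) AS time_out
FROM dbo.employee_punch_record
GROUP BY employee_id, CAST(punch_datetime AS date);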
I have a very simple time series model, and processing works fine without any problem. However, when I run the following query
SELECT
[TimeSeries].[PriceChange],
[TimeSeries].[Symbol],
PredictTimeSeries(PriceChange, -3, 2)
From
[TimeSeries]
WHERE
[TimeSeries].[Symbol] = 'x'
I get the following error:
TITLE: Microsoft SQL Server 2005 Analysis Services

Error (Data mining): A time series prediction was requested with a start time further in the past than the internal models of the mining model, TimeSeries, specified in the HISTORIC_MODEL_GAP and HISTORIC_MODEL_COUNT parameters can process.
The following is the excerpt of the mining model script related to the two parameters:
<AlgorithmParameters>
<AlgorithmParameter>
<Name>MISSING_VALUE_SUBSTITUTION</Name>
<Value xsi:type="xsdtring">Previous</Value>
</AlgorithmParameter>
<AlgorithmParameter>
<Name>HISTORIC_MODEL_GAP</Name>
<Value xsi:type="xsd:int">1</Value>
</AlgorithmParameter>
<AlgorithmParameter>
<Name>HISTORIC_MODEL_COUNT</Name>
<Value xsi:type="xsd:int">10</Value>
</AlgorithmParameter>
</AlgorithmParameters>
These settings (HISTORIC_MODEL_GAP = 1 and HISTORIC_MODEL_COUNT = 10) should accommodate PredictTimeSeries(PriceChange, -3, 2). Could anyone shed some light on this?
We have problems with our SQL Reporting Services 2012 (SSRS) server. We have set up Kerberos delegation between SSRS and the database server (a SQL Server Always On cluster) so users are authenticated down to the database. The issue occurs from time to time: SSRS loses the ability to delegate the user credentials to the database, and at that point the Report Server logs contain database connections rejected because of ANONYMOUS LOGON. After restarting SSRS the problem is gone.
I have a table which has a few fields, one being "datetime_traded". I need to write a query which returns the row that has the closest time (down to the second) to a given date/time. I'm using MS SQL.
Here's what I have so far:
Code:
select * from TICK_D
where datetime_traded = (
    select min(abs(datediff(second, datetime_traded, Convert(datetime, '2005-05-30:09:31:09'))))
    from TICK_D
)
But I get an error - "The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value.".
Does anyone know how i could do this? Thanks a lot for any help!
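Two things are going on (a sketch of one way out): the literal '2005-05-30:09:31:09' has a colon between the date and time, which is what triggers the out-of-range conversion error, and min(abs(datediff(...))) returns a number of seconds, which can never equal a datetime column. Ordering by the difference gives the closest row directly:

-- closest trade to the target time; note the space (not colon) before the time part
SELECT TOP 1 *
FROM TICK_D
ORDER BY ABS(DATEDIFF(second, datetime_traded, CONVERT(datetime, '2005-05-30 09:31:09')));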
OK, so I have some horribly convoluted SQL that I would love to optimize. I'm not happy leaving it in its current state, that's for sure!
I'm currently working on our test bed servers, so obviously my stats are out because of the "crap-ness" (yes, that's the technical term) of the hardware, but still, it should NEVER need to take this long!!
Basically, the issue arises in the nasty join to the career table (one employee can have multiple career lines). Just to make things complicated, employees can have any number of career records on any given date, and these can even be input for future career events. The following SQL picks out the latest current career record for each employee, based on the career_date being <= GetDate() and the date of entry for that date being the greatest.
From the above we want to return 2007-01-01 | 2006-05-05 13:54:18.000
SET STATISTICS IO ON SET STATISTICS TIME ON
SELECT a.sAMAccountName As 'sAMAccountName'
     , a.userPrincipalName As 'userPrincipalName'
     , 'TRUE' As 'Modify'
     , RTRIM(e.unique_identifier) As 'employeeID'
     , RTRIM(e.employee_number) As 'employeeNumber'
     , RTRIM(e.known_as) + CASE WHEN RTRIM(e.surname) IS NOT NULL THEN ' ' + RTRIM(e.surname) ELSE NULL END As 'displayName'
     , RTRIM(e.known_as) As 'givenName'
     , RTRIM(e.surname) As 'sn'
     , RTRIM(c.job_title) As 'title'
     , RTRIM(c.division) As 'company'
     , RTRIM(c.department) As 'department'
     , RTRIM(l.description) As 'physicalDeliveryOfficeName'
     , RTRIM(REPLACE(am.dn, '\', '')) As 'manager'
     , t.full_mobile + CASE WHEN RTRIM(t.mobile_number) IS NOT NULL THEN ' (DD: ' + RTRIM(t.mobile_number) + ')' ELSE NULL END As 'mobile'
     , t.mobile_number As 'otherMobile'
     , ad.address_ad_country As 'c'
     , ad.address_ad_address1
       + CASE WHEN ad.address_ad_address2 IS NOT NULL THEN ', ' + ad.address_ad_address2 ELSE NULL END
       + CASE WHEN ad.address_ad_address3 IS NOT NULL THEN ', ' + ad.address_ad_address3 ELSE NULL END
       + CASE WHEN ad.address_ad_address4 IS NOT NULL THEN ', ' + ad.address_ad_address4 ELSE NULL END
       + CASE WHEN ad.address_ad_address5 IS NOT NULL THEN ', ' + ad.address_ad_address5 ELSE NULL END As 'streetAddress'
     , ad.address_ad_pobox As 'postOfficeBox'
     , ad.address_ad_city As 'l'
     , ad.address_ad_County As 'st'
     , ad.address_ad_postcode As 'postalCode'
     , RTRIM(ad.address_ad_telephone)
       + CASE WHEN RTRIM(a.othertelephone) IS NOT NULL AND RTRIM(ad.address_ad_telephone) IS NOT NULL
              THEN ' (Ext: ' + RTRIM(a.othertelephone) + ')'
              ELSE CASE WHEN RTRIM(a.othertelephone) IS NOT NULL AND RTRIM(ad.address_ad_telephone) IS NULL
                        THEN 'Ext: ' + RTRIM(a.othertelephone)
                        ELSE NULL END
         END As 'telephoneNumber'
FROM employee e
LEFT JOIN career c
       ON c.parent_identifier = e.unique_identifier
      AND c.career_date = (
            SELECT max(c2.career_date)
            FROM pwa_master.career c2
            WHERE c2.parent_identifier = c.parent_identifier
              AND c2.career_date <= GetDate()
          )
      AND c.datetime_created = (
            SELECT max(c3.datetime_created)
            FROM pwa_master.career c3
            WHERE c3.parent_identifier = c.parent_identifier
              AND c3.career_date = c.career_date
          )
LEFT OUTER JOIN AD_Import am ON am.employeeNumber = c.manager_number
INNER JOIN AD_Import a ON a.employeeID = e.unique_identifier
LEFT JOIN AD_Telephone t ON t.unique_identifier = e.unique_identifier
LEFT JOIN AD_Address ad ON ad.address_pwa_location = e.location
LEFT JOIN xlocat l ON l.code = c.location
WHERE (a.employeeNumber IS NOT NULL OR a.employeeID IS NOT NULL)
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 15203 ms, elapsed time = 8114 ms.
Any advice on what I can do to optimize?
Oh, just to point out that "employee" is a view on the table 'people'. EDIT: I know it's pointing out the obvious, but I'm pulling out the manager's "DN" from AD_Import based on the manager_number and employeeNumber matching.
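On the optimization itself, one direction worth testing (a sketch, not a drop-in replacement): collapse the two correlated subqueries into a single derived table that computes both maxima per employee in one pass over pwa_master.career, then join back to career on the three columns:

LEFT JOIN (
    SELECT c1.parent_identifier,
           c1.career_date,
           MAX(c1.datetime_created) AS datetime_created
    FROM pwa_master.career c1
    INNER JOIN (
        -- latest non-future career date per employee
        SELECT parent_identifier, MAX(career_date) AS career_date
        FROM pwa_master.career
        WHERE career_date <= GetDate()
        GROUP BY parent_identifier
    ) latest ON latest.parent_identifier = c1.parent_identifier
            AND latest.career_date = c1.career_date
    GROUP BY c1.parent_identifier, c1.career_date
) k ON k.parent_identifier = e.unique_identifier
LEFT JOIN career c ON c.parent_identifier = k.parent_identifier
                  AND c.career_date = k.career_date
                  AND c.datetime_created = k.datetime_created

An index on career (parent_identifier, career_date, datetime_created) supports both levels of the aggregation.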
I need a formula to calculate the time (let's say in minutes) between two dates/times. The problem is that I have to exclude the time between 6 PM and 6 AM and also exclude weekends (Saturday and Sunday). I will use this in a couple of reports made in Reporting Services. If anyone has an algorithm that could be modified for this and is willing to share it, I would be very grateful. Many thanks! /Per Lissel
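One algorithm that is easy to adapt (a sketch; it counts whole minutes with a brute-force loop, which is fine for report-sized ranges but slow over months): walk the interval a minute at a time and count only minutes that fall on a weekday between 06:00 and 18:00. Note that DATENAME(weekday, ...) assumes English language settings.

CREATE FUNCTION dbo.WorkingMinutes (@start datetime, @end datetime)
RETURNS int
AS
BEGIN
    DECLARE @minutes int
    SET @minutes = 0
    WHILE @start < @end
    BEGIN
        -- count the minute only on Mon-Fri between 06:00 and 18:00
        IF DATENAME(weekday, @start) NOT IN ('Saturday', 'Sunday')
           AND DATEPART(hour, @start) >= 6
           AND DATEPART(hour, @start) < 18
            SET @minutes = @minutes + 1
        SET @start = DATEADD(minute, 1, @start)
    END
    RETURN @minutes
END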
I have created several global temp tables to cache some intermediate results ... However, it seems that after a while those tables are dropped by SQL Server 2005 automatically (I have not restarted the server, and no DROP TABLE statement was ever executed against those tables). Is this a feature by design? How can I make those global temp tables persist until the next service restart?
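It is by design: a global temp table is dropped as soon as the session that created it ends and no other session is still referencing it. One common workaround (a sketch; the procedure name and columns are illustrative): create the tables in a startup procedure, since global temp tables created at instance startup persist until the next restart:

USE master;
GO
CREATE PROCEDURE dbo.CreateCacheTables
AS
    -- recreated automatically each time the service starts
    CREATE TABLE ##IntermediateResults (
        id      int PRIMARY KEY,
        payload varchar(100)
    );
GO
-- register the procedure to run at every service startup
EXEC sp_procoption 'dbo.CreateCacheTables', 'startup', 'on';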