A particular report sometimes fails to execute and sometimes succeeds. Querying the execution log makes it apparent that the processing time is exceeding the timeout settings on the report server. What is not clear, however, is why processing is taking all the time while none of it is spent on data retrieval. The report just displays data from a stored procedure. Can someone interpret the following data from the execution log table:
I'm populating a new table based on information in an existing table. The new table is a list of all "items" and contains a primary key. The old table is a database of receipts where items can appear many times in any order.
I have put together the off-the-shelf components to do this, using a Lookup transformation to see if the item is already in the new table. The problem is that, because there's so much repetition in the old table, I need to process the old table one row at a time. Batch processing generates errors because the lookup doesn't detect duplicates within the buffer.
I tried setting the "DefaultBufferMaxRows" property of the task to 1, but that doesn't seem to have any effect.
To get the data from the old table, I'm using an OLE DB source. To get the data into the new table, I'm using the OLE DB Command transformation with parameters to execute an INSERT statement.
This is a job I have to do exactly once, so I don't care if I have to run it overnight. I'd rather have a simple, easy to understand but inefficient script so I understand what it's doing completely.
Any help on forcing SSIS to process one row at a time?
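For a one-off load like this, a plain set-based T-SQL statement (run from a query window or an Execute SQL task) may be simpler than forcing SSIS into row-by-row mode. A minimal sketch, with OldReceipts, NewItems and Item as hypothetical names for the tables and column:

-- DISTINCT collapses the repetition within the receipts table, and
-- NOT EXISTS skips any items that are already in the new table.
INSERT INTO NewItems (Item)
SELECT DISTINCT r.Item
FROM OldReceipts AS r
WHERE NOT EXISTS (SELECT 1 FROM NewItems AS n WHERE n.Item = r.Item)

Because the whole statement runs as one set operation, the duplicates-within-a-buffer problem never arises.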
I am verifying my reports' processing time. I get the information from the Reporting Services DB, the [ExecutionLogs] table. I have the following information:
[TimeEnd] - time that report generation ends.
[TimeStart] - time that report generation starts.
[TimeDataRetrieval] - amount of time spent running the data sources.
[TimeProcessing] - time spent processing the report.
[TimeRendering] - time spent generating the output format.
If this information is correct, the following statement should be true: TimeEnd - TimeStart = TimeDataRetrieval + TimeProcessing + TimeRendering.
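A sketch of a query to check that relationship against the log, assuming the standard meaning of these columns (the three duration columns are stored in milliseconds); any large gap between the two computed values would itself be worth investigating:

-- Compare wall-clock duration with the sum of the three recorded phases.
SELECT TimeStart,
       TimeEnd,
       DATEDIFF(ms, TimeStart, TimeEnd)                   AS TotalMs,
       TimeDataRetrieval + TimeProcessing + TimeRendering AS SumOfPhasesMs
FROM dbo.ExecutionLogs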
Is there anybody out there who can help me find the processing time taken for one transaction using SQL Analyzer?
1) For example, I want to run an update in Analyzer and would like to know the time taken to do the update.
2) How can I reduce the processing time of stored procedures that use cursors? I have added some COMMIT statements to my update statement. Are there any other ways?
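One low-tech way to time a single statement in the analyzer is to switch on the built-in timing messages. A sketch, where the table, column and filter are hypothetical stand-ins for your own:

SET STATISTICS TIME ON    -- prints parse/compile and execution times per statement

UPDATE dbo.MyTable        -- hypothetical update to be measured
SET SomeColumn = SomeColumn + 1
WHERE SomeKey < 1000

SET STATISTICS TIME OFF

Each statement then reports CPU and elapsed time in the Messages pane; capturing GETDATE() before and after works too if you want the duration in a variable.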
I made a website in ASP.NET using SQL Server 2005 as the database. Some operations process big data and need a long processing time (about 20 minutes). It works fine on the dev box, but when I place it on shared hosting and several people access it, it crashes and the website can no longer be accessed. Hosting support told me I may need to reprogram my code. Does anybody have a solution for this problem? Should I create a new thread?
Hi, cube processing is taking more time on a new server while the same cubes take less time on another server. The cubes are processed through a DTS package. Can anybody help find the possible reasons for this? Regards, Naseem
I am running into a barrier and need to understand the average length of time that a fully optimized data cube should take to process.
We are currently running an average of 15 to 20 minutes per cube, with an average of 2000 aggregations, a 25% performance increase, and approximately 2 million rows, with around 40 dimensions and 30 measures.
I personally think this is a pretty good processing time. However, I am being challenged to reduce this time frame, and in theory I can't possibly see it getting below where we currently are. So I am reaching out to the group of gurus...
What is your average length of time to process your data cubes? Please respond to me at ken.kolk@medcor.com. I would greatly appreciate it and need the averages from the field.
I was wondering if there was any way to add a value field to a report with the time it took for the report to process.
It would probably be a text field with an Expression, but I don't know how that would go.
I know that in Expression there is a value for ExecutionTime (when the report began to run), but nothing about when it ends. Can this be done? And if yes, how?
With an SSAS database I have created a data mining structure using the Time Series algorithm. While processing the SSAS DB, data mining is taking a long time to process, so how can we reduce the processing time?
USE [Testing]
GO
/****** Object: Table [dbo].[Testing] Script Date: 4/25/2014 11:08:18 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
[Code] ....
It seems to work fine with one million records.
Each primary key is unique, but the begindate is non-unique, and I guess even if I use datetime2 and add nanoseconds, from what I have read there is a chance that I could have a duplicate datetime, since the date is imported via XML from multiple sources.
Is there a way to keep track in real time of how long a stored procedure has been running? What I want to do is fire off a trace if a stored procedure has been running for over five minutes.
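On SQL Server 2005 or later, one way to watch this from the outside (rather than from inside the procedure itself) is to poll sys.dm_exec_requests. A sketch, with the five-minute threshold as an assumption:

-- List requests that have been executing for five minutes or more.
SELECT r.session_id,
       r.start_time,
       DATEDIFF(MINUTE, r.start_time, GETDATE()) AS minutes_running,
       t.text                                    AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE DATEDIFF(MINUTE, r.start_time, GETDATE()) >= 5

An Agent job running this every minute could then start the trace when the query returns rows.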
I am trying to load the previous day's data at 3 AM via an SSIS job.
The Date variable is initialized as DATEADD("dd", -1, GETDATE()) in the For Loop.
Now, as this job runs at 3 AM and I set the variable as GETDATE() - 1, it excludes the data from 12 AM to 3 AM from the result set, because the Date is set to YYYY-MM-DD 03:00:00.000. I need it to be set to YYYY-MM-DD 00:00:00.000.
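The usual fix is to strip the time portion before stepping back a day, rather than subtracting from the raw GETDATE(). In T-SQL terms (the SSIS expression equivalent would cast through DT_DBDATE), a sketch:

-- DATEDIFF(DAY, 0, GETDATE()) counts whole days since day zero (1900-01-01),
-- so adding that count to day -1 yields yesterday at exactly 00:00:00.000.
SELECT DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), -1) AS PreviousDayMidnight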
I hope to update a DateTime column value with a Time input parameter. My poor attempt is below, but it looks like the @ApptTime param is coming in as 10:45:00.0000000 and I might have an existing @SendOnDate such as 2015-10-05 07:00:00.000. I hope to end up with 2015-10-05 10:45:00.000.
ALTER PROCEDURE [dbo].[SendEditUPDATE]
    @QuePoolID int = null,
    @ApptTime time(7),
    @SendOnDate datetime
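A minimal sketch of the combining step, assuming the goal is simply to overwrite the time portion of @SendOnDate with @ApptTime. It relies on time casting to datetime as 1900-01-01 plus the time of day, which datetime addition then folds into the date part:

-- Keep the date part of @SendOnDate, replace its time part with @ApptTime.
SET @SendOnDate = CAST(CAST(@SendOnDate AS date) AS datetime)
                + CAST(@ApptTime AS datetime)
-- 2015-10-05 07:00:00.000 combined with 10:45:00 => 2015-10-05 10:45:00.000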
I am using VS2005 (VB) to develop a PPC WM5.0 program, and I am using SQLCE 3.0. My PPC hardware runs at 400MHz.
The question is: when the program tries to insert the first record into the .sdf database after each start, it takes a long time. Does anyone know why, and how can I fix it?
I load the whole database into a dataset when the program starts, do all the Insert, Update and Delete operations in this dataset, and fill it back into the database after each action.
cn.Open()
sda = New SqlCeDataAdapter(SQL, cn)    'SQL = Select * From Table
scb = New SqlCeCommandBuilder(sda)
sda.Update(dataset)
cn.Close()
I checked sda.Update(); it normally takes about 0.08s to fill one record into the database. But:
1. Start the PPC Program
2. Load DB into dataset
3. Create ONE new record in the dataset
4. Fill back to DB
When I take these four steps each time, the filling time is almost 1s or even more!
Actually, 0.08s is just the normal case. Sometimes it still takes over 1s to fill back a dataset which had only one record inserted while the program is running. (Even when all inserted records are exactly the same in data, just different in the integer key.)
However, when I give up the dataset and use the following code:
cn.Open()
Dim cmd As New SqlCeCommand(SQL, cn)   ' I have built the INSERT SQL beforehand: Insert Into Table values(XXXXXXXXXXXXXXX all fields)
cmd.ExecuteNonQuery()
cn.Close()
I found that the first inserted record still takes more time, but only about 0.2s, and the normal insert time is around 0.02s. It is 4 times faster!!!
We need to select rows from the database that have been recently inserted/updated. We have a main primary table (COMMIT_TEST) and a second update table (COMMIT_TEST_UPDATE). The update table contains the primary key and a LAST_UPDATE field which is a datetime (to tell us when an update occurred). Triggers on the primary table are used to populate the update table.
If we insert or update the primary table in a transaction, we would expect the datetime of the insert/update to be that of the commit; however, it seems that the insert/update statement is cached and getdate() is evaluated at the time of the cache instead of at the commit. This causes problems: we select rows based on LAST_UPDATE, and a commit may occur later while the earlier insert timestamp is saved to the database, so we miss that update.
We would like to know if there is any way to tell SQL Server not to evaluate getdate() until the commit, or any other way to get the commit to create the correct timestamp.
We are using the default isolation level. We have tried getdate(), CURRENT_TIMESTAMP and even {fn Now()}, with the same results. SQL queries that reproduce the problem are provided below:
/* Different functions to get the current timestamp - all have been tested to produce the same results */
/*
SELECT GETDATE()
GO
SELECT CURRENT_TIMESTAMP
GO
SELECT {fn Now()}
GO
*/

/* Use these statements to delete the tables to allow recreate of the tables */
/*
DROP TABLE COMMIT_TEST
DROP TABLE COMMIT_TEST_UPDATE
*/

/* Create a primary table and an UPDATE table to store the date/time when the primary table is modified */
CREATE TABLE dbo.COMMIT_TEST (PKEY int PRIMARY KEY, timestamp) /* ROW_VERSION rowversion */
GO
CREATE TABLE dbo.COMMIT_TEST_UPDATE (PKEY int PRIMARY KEY, LAST_UPDATE datetime, timestamp) /* ROW_VERSION rowversion */
GO

/* Use these statements to delete the triggers to allow reinsert */
/*
drop trigger LOG_COMMIT_TEST_INSERT
drop trigger LOG_COMMIT_TEST_UPDATE
drop trigger LOG_COMMIT_TEST_DELETE
*/

/* Create insert, update and delete triggers */
create trigger LOG_COMMIT_TEST_INSERT on COMMIT_TEST for INSERT as
begin
    declare @time datetime
    select @time = getdate()

    insert into COMMIT_TEST_UPDATE (PKEY, LAST_UPDATE)
    select PKEY, getdate()
    from inserted
end
GO

create trigger LOG_COMMIT_TEST_UPDATE on COMMIT_TEST for UPDATE as
begin
    declare @time datetime
    select @time = getdate()

    update COMMIT_TEST_UPDATE
    set LAST_UPDATE = getdate()
    from COMMIT_TEST_UPDATE, deleted, inserted
    where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
end
GO

/* In our application deletes should never occur, so we don't log when they get modified; we just delete them from the UPDATE table */
create trigger LOG_COMMIT_TEST_DELETE on COMMIT_TEST for DELETE as
begin
    if (select count(*) from deleted) > 0
    begin
        delete COMMIT_TEST_UPDATE
        from COMMIT_TEST_UPDATE, deleted
        where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
    end
end
GO

/* Delete any previously inserted record to avoid errors when inserting */
DELETE COMMIT_TEST WHERE PKEY = 1
GO

/* What is the current date/time */
SELECT GETDATE()
GO

BEGIN TRANSACTION
GO
/* Insert a record into the primary table */
INSERT COMMIT_TEST (PKEY) VALUES (1)
GO
/* Simulate additional processing within this transaction */
WAITFOR DELAY '00:00:10'
GO
/* We expect at this point that the date is written to the database (or at least we need some way for this to happen) */
COMMIT TRANSACTION
GO

/* Get the current date to show us what date/time should have been committed to the database */
SELECT GETDATE()
GO

/* Select results from the table - we see that the timestamp is 10 seconds older than the commit; in other words it was evaluated at */
/* the insert statement, even though the row could not be read with a SELECT as it was uncommitted */
SELECT * FROM COMMIT_TEST
GO
SELECT * FROM COMMIT_TEST_UPDATE
Any help would be appreciated. We understand we could make changes to the application/database to approximate what we need, but all the solutions we have identified suffer from possible performance issues or could still lead to missing deals (assuming the commit time is larger than some artificial time window).
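One direction that sidesteps the getdate() evaluation problem entirely, if the commented-out ROW_VERSION rowversion column in the script above is enabled: rowversion values can be fenced with MIN_ACTIVE_ROWVERSION(), which excludes rows belonging to transactions that are still open. A sketch, assuming a watermark value is persisted between polling runs:

DECLARE @LastWatermark binary(8)   -- assumed loaded from wherever the previous poll saved it
DECLARE @Current binary(8)
SET @Current = MIN_ACTIVE_ROWVERSION()

SELECT *
FROM COMMIT_TEST_UPDATE
WHERE ROW_VERSION >= @LastWatermark   -- changed since the last poll
  AND ROW_VERSION <  @Current         -- but not part of an in-flight transaction

-- Persist @Current as the next run's @LastWatermark.

Because a row's rowversion is only visible once its transaction commits, and the upper fence excludes in-flight work, this pattern cannot skip a late-committing insert the way the LAST_UPDATE timestamp can.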
I need to take a temporary table that has various times stored in a text field (4:30 pm, 11:00 am, 5:30 pm, etc.), convert them to military time, and then cast them as integers with an update statement, kind of like:
Update myTable set MovieTime = REPLACE(CONVERT(CHAR(5),GETDATE(),108), ':', '')
How can this be done while my temp table is in session?
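A sketch of one way to do it, assuming every value in the text field parses as a time (style 108 formats a datetime as hh:mm on a 24-hour clock) and with #myTable/MovieTime as hypothetical names:

-- '4:30 pm' -> datetime -> '16:30' -> '1630' (implicitly convertible to int).
UPDATE #myTable
SET MovieTime = REPLACE(CONVERT(char(5), CAST(MovieTime AS datetime), 108), ':', '')

Since a temp table created in the session is visible to any statement run in that same session, the UPDATE can be executed as-is before the table goes out of scope.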
We are using SQL Server 2008 as our database and Access as a GUI. I am looking to create a form in Access where employees can access their time card and request changes from management. I want to use the format from the attached screenshot for the form. I pretty much know how to do it all; the only point of complication is figuring out the easiest way to get the transaction punch record data in employee_punch_record into a format where I can easily populate the form in the horizontal layout you see in the screenshot.
I am not super strong in SQL, but I figure I can do it using a formatting table of some sort. Is there a quick and easy way to move transaction records into a more horizontally oriented record?
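Since the screenshot's exact columns aren't shown, here is only a shape sketch with hypothetical column names: PIVOT turns one row per punch into one row per employee per day, which Access can then bind to the horizontal form.

-- Hypothetical layout: employee_punch_record(employee_id, punch_date,
-- punch_type, punch_time), where punch_type labels each punch.
SELECT employee_id, punch_date, [TimeIn], [LunchOut], [LunchIn], [TimeOut]
FROM (
    SELECT employee_id, punch_date, punch_type, punch_time
    FROM employee_punch_record
) AS src
PIVOT (
    MIN(punch_time)
    FOR punch_type IN ([TimeIn], [LunchOut], [LunchIn], [TimeOut])
) AS pvt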
I have a DTS package that imports data from an Oracle database into SQL Server. Doesn't the processing mostly occur on the SQL Server, not on the Oracle database from which the data is being imported? The Oracle database is vendor provided, and they are saying our SQL Server DTS package is killing their server. Any insight is appreciated. Thanks
I've got a process that creates records in my database based on XML input I've received. What I am doing is giving this XML to a stored procedure to handle a specific task, then modifying the XML and sending it to the next stored procedure.
For instance, the XML could hold header records with detail records. I would first send the XML to a stored procedure that creates the header records, then update the XML so it now knows the identity values of the header records I have just created, and then send the XML to the next stored procedure to create the details for those headers.
All works great and fine, but I have a problem with writing the identity values back into the XML. It seems I can only change one item in the XML at a time and thus need to loop. For many records this really takes a long time.
Here is some sample code of what I'm doing (please excuse any typos; this is a simplified version of the code):
declare @lvSeq numeric(15)
declare @lvRowNo int
declare @lvNumRows int

insert into myHeaderTable (recid, recdesc)
select ref.value('@recid', 'nvarchar(25)') recid,
       ref.value('@recdesc', 'nvarchar(250)') recdesc
from @pXML.nodes('//headers/header') R(ref)

select @lvRowNo = 1, @lvNumRows = @pXML.value('count(//headers/header)', 'int')

while (@lvRowNo <= @lvNumRows)
begin
    select @lvSeq = recseq
    from myHeaderTable
    where recid = @pXML.value('(//headers/header[position()=sql:variable("@lvRowNo")]/@recid)[1]', 'nvarchar(25)')

    set @pXML.modify('replace value of (//headers/header[position()=sql:variable("@lvRowNo")]/@recseq)[1] with sql:variable("@lvSeq")')

    select @lvRowNo = @lvRowNo + 1
end
Obviously I am looking for a better way to update the XML with the sequences. The insert takes a second, the loop takes minutes with large XML sets. I guess MSSQL is searching the whole XML to find the item to update.
It would be nice if I didn't have to loop through the XML. One solution I was thinking of is to store the XML in a temporary table with a single record per header item. Then I could do the modify in one go and recreate the XML by simply selecting the contents of the temporary table. I have no idea if this is possible.
So something like this:
select ref.value('@recid', 'nvarchar(25)') recid,
       ref.query('.') XMLData   -- value('.', 'XML') gives an error; query('.') returns the node as xml
into #TMP_XML
from @pXML.nodes('//headers/header') R(ref)
insert into myHeaderTable (recid, recdesc)
select recid, ref.value('@recdesc', 'nvarchar(250)') recdesc
from #TMP_XML
cross apply XMLData.nodes('/header') R(ref)
update #TMP_XML
set XMLData.modify('replace ....')
from myheadertable
where #TMP_XML.recid = myheadertable.recid
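An alternative sketch that avoids writing the sequences back into the XML at all: capture the identity values with an OUTPUT clause into a mapping table as the headers are inserted, then join that mapping when creating the details. This assumes recseq is the identity column and recid identifies a header within the batch:

declare @map table (recid nvarchar(25) primary key, recseq numeric(15))

insert into myHeaderTable (recid, recdesc)
output inserted.recid, inserted.recseq into @map (recid, recseq)
select ref.value('@recid',   'nvarchar(25)'),
       ref.value('@recdesc', 'nvarchar(250)')
from @pXML.nodes('//headers/header') R(ref)

-- The detail pass can now join @map on recid instead of reading
-- the sequences out of a modified copy of the XML.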
I have a very simple time series model which processes fine without any problem. However, when I run the following query
SELECT
[TimeSeries].[PriceChange],
[TimeSeries].[Symbol],
PredictTimeSeries(PriceChange, -3, 2)
From
[TimeSeries]
WHERE
[TimeSeries].[Symbol] = 'x'
I get the following error:
TITLE: Microsoft SQL Server 2005 Analysis Services
------------------------------
Error (Data mining): A time series prediction was requested with a start time further in the past than the internal models of the mining model, TimeSeries, specified in the HISTORIC_MODEL_GAP and HISTORIC_MODEL_COUNT parameters can process.
The following is the excerpt of the mining model script related to the two parameters:
<AlgorithmParameters>
<AlgorithmParameter>
<Name>MISSING_VALUE_SUBSTITUTION</Name>
<Value xsi:type="xsdtring">Previous</Value>
</AlgorithmParameter>
<AlgorithmParameter>
<Name>HISTORIC_MODEL_GAP</Name>
<Value xsi:type="xsd:int">1</Value>
</AlgorithmParameter>
<AlgorithmParameter>
<Name>HISTORIC_MODEL_COUNT</Name>
<Value xsi:type="xsd:int">10</Value>
</AlgorithmParameter>
</AlgorithmParameters>
With HISTORIC_MODEL_GAP = 1 and HISTORIC_MODEL_COUNT = 10, the historical models should reach back 10 × 1 = 10 time slices, which should accommodate PredictTimeSeries(PriceChange, -3, 2). Could anyone shed some light on this?
We have problems with our SQL Server Reporting Services 2012 (SSRS) server. We have set up Kerberos delegation between SSRS and the database server (a SQL Server AlwaysOn cluster) so users are authenticated down to the database. From time to time SSRS loses the ability to delegate the user credentials to the database; at that point the Report Server logs contain rejected database connections because of ANONYMOUS LOGON. After restarting SSRS the problem is gone.
I have a table which has a few fields, one being "datetime_traded". I need to write a query which returns the row with the closest time (down to the second) to a given date/time. I'm using MS SQL.
Here's what I have so far:
Code:
select * from TICK_D
where datetime_traded = (
    select min(abs(datediff(second, datetime_traded, convert(datetime, '2005-05-30:09:31:09'))))
    from TICK_D
)
But I get an error - "The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value.".
Does anyone know how i could do this? Thanks a lot for any help!
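For what it's worth, a sketch of a working version: order by the absolute distance in seconds and take the first row. Using the ISO 8601 literal (with a 'T' separator, style 126) also avoids the out-of-range error that the 'yyyy-mm-dd:hh:mm:ss' literal causes:

declare @target datetime
set @target = convert(datetime, '2005-05-30T09:31:09', 126)

select top 1 *
from TICK_D
order by abs(datediff(second, datetime_traded, @target))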
Ok, so I have some horribly convoluted SQL that I would love to optimize. I'm not happy leaving it in its current state, that's for sure!
I'm currently working on our test bed servers, so obviously my stats are off because of the "crap-ness" (yes, that's the technical term) of the hardware, but still, it should NEVER need to take this long!!
Basically, the issue arises in the nasty join to the career table (one employee can have multiple career lines). Just to make things complicated, employees can have any number of career records on any given date, and these can even be input for future career events. The following SQL picks out the latest current career record for each employee, based on the career_date being <= GETDATE() and the date of entry for that date being the greatest.
From the above we want to return 2007-01-01 | 2006-05-05 13:54:18.000
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT    a.sAMAccountName                    As 'sAMAccountName'
        , a.userPrincipalName                 As 'userPrincipalName'
        , 'TRUE'                              As 'Modify'
        , RTRIM(e.unique_identifier)          As 'employeeID'
        , RTRIM(e.employee_number)            As 'employeeNumber'
        , RTRIM(e.known_as)
          + CASE WHEN RTRIM(e.surname) IS NOT NULL
                 THEN ' ' + RTRIM(e.surname) ELSE NULL END As 'displayName'
        , RTRIM(e.known_as)                   As 'givenName'
        , RTRIM(e.surname)                    As 'sn'
        , RTRIM(c.job_title)                  As 'title'
        , RTRIM(c.division)                   As 'company'
        , RTRIM(c.department)                 As 'department'
        , RTRIM(l.description)                As 'physicalDeliveryOfficeName'
        , RTRIM(REPLACE(am.dn, '\', ''))      As 'manager'
        , t.full_mobile
          + CASE WHEN RTRIM(t.mobile_number) IS NOT NULL
                 THEN ' (DD: ' + RTRIM(t.mobile_number) + ')' ELSE NULL END As 'mobile'
        , t.mobile_number                     As 'otherMobile'
        , ad.address_ad_country               As 'c'
        , ad.address_ad_address1
          + CASE WHEN ad.address_ad_address2 IS NOT NULL THEN ', ' + ad.address_ad_address2 ELSE NULL END
          + CASE WHEN ad.address_ad_address3 IS NOT NULL THEN ', ' + ad.address_ad_address3 ELSE NULL END
          + CASE WHEN ad.address_ad_address4 IS NOT NULL THEN ', ' + ad.address_ad_address4 ELSE NULL END
          + CASE WHEN ad.address_ad_address5 IS NOT NULL THEN ', ' + ad.address_ad_address5 ELSE NULL END As 'streetAddress'
        , ad.address_ad_pobox                 As 'postOfficeBox'
        , ad.address_ad_city                  As 'l'
        , ad.address_ad_County                As 'st'
        , ad.address_ad_postcode              As 'postalCode'
        , RTRIM(ad.address_ad_telephone)
          + CASE WHEN RTRIM(a.othertelephone) IS NOT NULL AND RTRIM(ad.address_ad_telephone) IS NOT NULL
                 THEN ' (Ext: ' + RTRIM(a.othertelephone) + ')'
                 ELSE CASE WHEN RTRIM(a.othertelephone) IS NOT NULL AND RTRIM(ad.address_ad_telephone) IS NULL
                           THEN 'Ext: ' + RTRIM(a.othertelephone)
                           ELSE NULL END
            END As 'telephoneNumber'
FROM employee e
LEFT JOIN career c
       ON c.parent_identifier = e.unique_identifier
      AND c.career_date = (
              SELECT max(c2.career_date)
              FROM pwa_master.career c2
              WHERE c2.parent_identifier = c.parent_identifier
                AND c2.career_date <= GetDate()
          )
      AND c.datetime_created = (
              SELECT max(c3.datetime_created)
              FROM pwa_master.career c3
              WHERE c3.parent_identifier = c.parent_identifier
                AND c3.career_date = c.career_date
          )
LEFT OUTER JOIN AD_Import am ON am.employeeNumber = c.manager_number
INNER JOIN AD_Import a ON a.employeeID = e.unique_identifier
LEFT JOIN AD_Telephone t ON t.unique_identifier = e.unique_identifier
LEFT JOIN AD_Address ad ON ad.address_pwa_location = e.location
LEFT JOIN xlocat l ON l.code = c.location
WHERE (a.employeeNumber IS NOT NULL OR a.employeeID IS NOT NULL)
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 15203 ms, elapsed time = 8114 ms.
Any advice on what I can do to optimize?
Oh, just to point out that "employee" is a view on the 'people' table. EDIT: I know it's pointing out the obvious, but I'm pulling the manager's "DN" from AD_Import based on the manager_number and employeeNumber matching.
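One technique worth trying (a sketch, assuming SQL Server 2005 or later): replace the two correlated MAX() subqueries with a single ROW_NUMBER() pass over the career table, so career is scanned once instead of being re-probed per employee. The rn = 1 rows are exactly the latest-current career records:

SELECT *
FROM (
    SELECT c.*,
           ROW_NUMBER() OVER (
               PARTITION BY c.parent_identifier
               ORDER BY c.career_date DESC, c.datetime_created DESC
           ) AS rn
    FROM pwa_master.career c
    WHERE c.career_date <= GETDATE()
) AS latest
WHERE latest.rn = 1
-- LEFT JOIN this derived table on parent_identifier in place of the
-- original career join.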
I need a formula to calculate the time (let's say in minutes) between two dates/times. The problem is that I have to exclude the time between 6 PM and 6 AM and also exclude weekends (Saturday and Sunday). I will use this in a couple of reports made in Reporting Services. If anyone has an algorithm that could be modified for this and is willing to share it, I would be very grateful. Many thanks! /Per Lissel
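A minimal sketch of such an algorithm as a T-SQL scalar function. The function name and the fixed 06:00-18:00 window are assumptions, the DATENAME comparison assumes an English language setting, and the inline variable initialisers and the date type need SQL Server 2008 or later. It walks day by day, clips each day's working window to the overall interval, and skips weekends:

CREATE FUNCTION dbo.WorkMinutes (@from datetime, @to datetime)
RETURNS int
AS
BEGIN
    DECLARE @mins int = 0;
    DECLARE @day date = CAST(@from AS date);

    WHILE @day <= CAST(@to AS date)
    BEGIN
        IF DATENAME(weekday, @day) NOT IN ('Saturday', 'Sunday')
        BEGIN
            -- Clip this day's 06:00-18:00 window to [@from, @to].
            DECLARE @winStart datetime = DATEADD(HOUR,  6, CAST(@day AS datetime));
            DECLARE @winEnd   datetime = DATEADD(HOUR, 18, CAST(@day AS datetime));
            DECLARE @s datetime = CASE WHEN @from > @winStart THEN @from ELSE @winStart END;
            DECLARE @e datetime = CASE WHEN @to   < @winEnd   THEN @to   ELSE @winEnd   END;
            IF @s < @e SET @mins = @mins + DATEDIFF(MINUTE, @s, @e);
        END;
        SET @day = DATEADD(DAY, 1, @day);
    END;

    RETURN @mins;
END

For a report, the same value could be fetched per row with a dataset query such as SELECT dbo.WorkMinutes(StartCol, EndCol) FROM ... (column names hypothetical).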
I have created several global temp tables to cache some intermediate results... However, it seems that after a while those tables get dropped by SQL Server 2005 automatically (I have not restarted the server, and no DROP TABLE statement was ever executed against those tables). Is this by design? How can I make those global temp tables persist until the next service restart?
Hello friends, I need a suggestion. I am currently working on a reporting website that generates reports, and I need to store all the reports in the database.
I usually go with row-wise processing as it can be easily controlled, but the problem is that there will be a lot of reports, an estimated 30,000 rows a month, and I'm not sure whether SQL Server can hold more than 2 billion rows per table.
I will just explain the whole scenario and the tricky problem I am facing.
We have XML files coming in at regular intervals from another source into SQL Server 2000, daily carrying roughly 10,000 to 70,000 records, and we have a job scheduled to process them at regular intervals using some filter criteria. The flow is: staging table into a secondary table, and then finally into the primary table. We designed the DTS package accordingly, i.e. take the records from the staging table, put them into the secondary table, and then into the primary table (there are about 8 tasks involved). Suppose an XML file arrives at 8:30 AM and our DTS package runs at 9:00 AM, then 11:00 AM, then 1:00 PM, and so on. What I have been observing for many days is that after the job runs successfully at 9:00 AM, some good data is still pending in the secondary table, not processed into the primary table. But when the job runs again at 11:00 AM, it processes those pending good records into the primary table. Sometimes, when I run the job manually at the DTS design level, the good data pending in the secondary table gets processed!!!
My question is: why does this job not process all the good records in a single run?