I am running into a barrier and need to understand the average length of time that a fully optimized data cube should take to process.
We are currently averaging 15 to 20 minutes per cube, with roughly 2000 aggregations (designed to a 25% performance gain), approximately 2 million rows, around 40 dimensions, and 30 measures.
I personally think this is a pretty good processing time. However, I am being challenged to reduce it, and in theory I can't see it getting below where we currently are. So I am reaching out to the group of gurus...
What is your average time to process your data cubes? Please respond to me at ken.kolk@medcor.com. I would greatly appreciate it; I need averages from the field.
I created a DTS package which does some data transformations before processing some cubes. It finishes in about 10 minutes when I run it manually. However, when I schedule the DTS package to run, it takes over 3 hours. Does anybody know where the problem lies? I have been looking for a solution for a long time and I'm hoping that somebody can help me...
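For reference, this is roughly how I have been comparing the two runs, assuming the package is executed by a SQL Server Agent job (the job name below is a placeholder):

-- Per-run, per-step durations of the scheduled job from the Agent history tables
SELECT j.name AS job_name,
       h.step_id,
       h.step_name,
       h.run_date,
       h.run_duration        -- integer HHMMSS, e.g. 30000 = 3:00:00
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j
  ON j.job_id = h.job_id
WHERE j.name = 'MyDTSPackageJob'       -- placeholder name
ORDER BY h.run_date DESC, h.step_id;

This at least shows whether the whole package is uniformly slower when scheduled or whether a single step accounts for the 3 hours.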
I just finished migrating from SQL 2005 to 2012 and I am having issues processing my cubes. I seem to be having an issue with one of the logins, but which one? The error message points to "domain_name\server_name$", with a $ at the end. I am assuming this is a SQL login? I have included the error message below, with generic names, but I think it should be transparent:
Hi everyone, I need some help with creating a report that calculates the average turnaround time in days that it takes for units to return from trips destined to a location.
The database that I am working with lists a trip each time a unit is dispatched to a destination, and then another trip is created for the unit's return. In the example below I am trying to calculate the number of days it takes for a unit to return to Vancouver, by taking the difference between the departure date from Vancouver and the arrival date back into Vancouver. I then need to calculate the average number of days it takes for a unit to return from a trip. See the sample data below.
UNIT -- TRIP -- START LOCATION -- START DATE -- FIN LOCATION -- FIN DATE
========================================================================
U12 --- 001 --- VANCOUVER ------- FEB 10 ------ ONTARIO ------ FEB 15
U10 --- 002 --- VANCOUVER ------- FEB 13 ------ ONTARIO ------ FEB 18
U12 --- 003 --- ONTARIO --------- MARCH 13 ---- VANCOUVER ---- MARCH 18
U10 --- 004 --- ONTARIO --------- MARCH 1 ----- VANCOUVER ---- MARCH 6
Unit U12 took 36 days to return back to Vancouver.
Unit U10 took 21 days to return back to Vancouver.
Therefore, based on the two trips, it takes an average of approximately 28.5 days for a unit to return from trips destined to Ontario.
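This is a sketch of the calculation I have in mind, assuming a table named trips with the columns above (the table and column names are mine); each outbound trip from Vancouver is paired with the unit's next arrival back into Vancouver:

-- Days from departure out of Vancouver to arrival back in Vancouver,
-- averaged over all round trips (* 1.0 forces a decimal average)
SELECT AVG(turnaround_days * 1.0) AS avg_turnaround_days
FROM (
    SELECT DATEDIFF(day,
                    o.start_date,
                    (SELECT MIN(r.fin_date)
                     FROM trips AS r
                     WHERE r.unit = o.unit
                       AND r.fin_location = 'VANCOUVER'
                       AND r.fin_date > o.start_date)) AS turnaround_days
    FROM trips AS o
    WHERE o.start_location = 'VANCOUVER'
) AS t;

On the sample rows this yields (36 + 21) / 2 = 28.5.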
Hello~, The table has columns like this:

time     smalldatetime
value1   int
value2   int
For example:

'2006-11-16 12:00:00', 100, 200
'2006-11-16 13:00:00', 110, 210
'2006-11-16 14:00:00', 120, 220
...
A record is inserted every hour.
I want to get daily, monthly, and yearly averages and display the results ordered by time.
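A minimal sketch of the daily case, assuming the table is named hourly_values (the name is mine):

-- Daily averages, ordered by day (* 1.0 avoids integer averaging)
SELECT CONVERT(varchar(10), [time], 120) AS [day],    -- 'YYYY-MM-DD'
       AVG(value1 * 1.0) AS avg_value1,
       AVG(value2 * 1.0) AS avg_value2
FROM hourly_values
GROUP BY CONVERT(varchar(10), [time], 120)
ORDER BY [day];

Swapping varchar(10) for varchar(7) groups by month ('YYYY-MM'), and varchar(4) by year.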
I am trying to figure out how to calculate the average time between phone calls for a user. The initial requirement is to calculate this on all calls for an entire month, but I would guess that would lead to other periods as well, such as daily, weekly, etc. One hurdle is what to do when going from one day to the next; I could possibly just weed out any times between calls that are greater than a certain amount, to address that. Anyway, here is a small sample of what I'll be dealing with. Any ideas on how to approach this or get it to work would be greatly appreciated.
So some have just 1 test and some have multiple, and I have a count for each. What I need to do is use that count and get the average time between tests per student.
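The general pattern for both cases (gaps between calls per user, gaps between tests per student) might look like the sketch below, with invented table and column names, on SQL Server 2012 or later for LAG:

-- Average minutes between consecutive calls, per user
SELECT user_id,
       AVG(gap_minutes * 1.0) AS avg_gap_minutes
FROM (
    SELECT user_id,
           DATEDIFF(minute,
                    LAG(call_time) OVER (PARTITION BY user_id
                                         ORDER BY call_time),
                    call_time) AS gap_minutes
    FROM dbo.calls
) AS g
WHERE gap_minutes IS NOT NULL            -- a user's first call has no prior call
  -- AND gap_minutes <= 480              -- optionally weed out cross-day gaps
GROUP BY user_id;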
I'm populating a new table based on information in an existing table. The new table is a list of all "items" and contains a primary key. The old table is a database of receipts where items can appear many times in any order.
I have put together the off-the-shelf components to do this, using a lookup transformation to see if the item is already in the new table. Problem is, because there's so much repetition in the old table I need to process the old table one row at a time. Batch processing is generating errors because the lookup doesn't detect duplicates within the buffer.
I tried setting the "DefaultBufferMaxRows" property of the task to 1, but that doesn't seem to have any effect.
To get the data from the old table, I'm using an OLE DB source. To get the data into the new table, I'm using the OLE DB Command transformation with parameters to execute an INSERT statement.
This is a job I have to do exactly once, so I don't care if I have to run it overnight. I'd rather have a simple, easy to understand but inefficient script so I understand what it's doing completely.
Any help on forcing SSIS to process one row at a time?
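Given that this is a one-off job where efficiency doesn't matter, one alternative is to skip the SSIS lookup entirely and load the distinct items with plain T-SQL; a sketch, with invented table and column names, assuming the new table's primary key is an IDENTITY column:

-- Insert each distinct item from the receipts table exactly once;
-- NOT EXISTS guards against items already present in the new table
INSERT INTO dbo.items (item_name)
SELECT DISTINCT r.item_name
FROM dbo.receipts AS r
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.items AS i
                  WHERE i.item_name = r.item_name);

Because the whole statement is a single set operation, duplicates within a buffer can't occur.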
I am verifying my reports' processing time. I get the information from the Reporting Services DB, from the [ExecutionLogs] table. I have the following information:
[TimeEnd] - time that report generation ends.
[TimeStart] - time that report generation starts.
[TimeDataRetrieval] - amount of time spent running the data sources.
[TimeProcessing] - time spent processing the report.
[TimeRendering] - time spent generating the output format.
If this information is correct, the following statement should be true:
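(The statement itself didn't survive, but presumably it is TimeEnd - TimeStart = TimeDataRetrieval + TimeProcessing + TimeRendering.) A sketch of checking it directly against the log, using the table name from above and assuming the three component columns are in milliseconds:

-- Compare wall-clock duration with the sum of the component times
SELECT TimeStart,
       TimeEnd,
       DATEDIFF(ms, TimeStart, TimeEnd) AS total_ms,
       TimeDataRetrieval + TimeProcessing + TimeRendering AS component_ms
FROM dbo.ExecutionLogs;

Note that total_ms may come out slightly larger than component_ms if there is overhead the three columns don't cover.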
Is there anybody out there who can help me with how to find the processing time taken for one transaction using SQL Query Analyzer?
1) For example, I want to run an update from Query Analyzer and would like to know the time taken to do this update.
2) How can I reduce the processing time of a stored procedure that uses a cursor? I have added some COMMIT statements in my update statement. Are there any other ways?
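For question 1, a minimal sketch of timing a statement in Query Analyzer (the UPDATE itself is a placeholder):

-- Prints parse/compile and execution times in the Messages pane
SET STATISTICS TIME ON;

UPDATE dbo.my_table              -- placeholder for the statement being timed
SET some_column = some_column + 1
WHERE some_key = 42;

SET STATISTICS TIME OFF;

Capturing GETDATE() into variables before and after the statement and taking DATEDIFF of the two works as well.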
For a particular report, execution sometimes fails and sometimes succeeds. When querying the execution log it is apparent that the processing time is exceeding the timeout period settings on the report server. However, what is not clear is that the data retrieval is taking all the time and none is left for processing; the report just displays data from a stored procedure. Can someone interpret the following data from the execution log table:
I made a website in ASP.NET using SQL Server 2005 as the database. Some operations process big data sets and need a long time, about 20 minutes. It works fine on the dev box, but when I place it on shared hosting and several people access it, it crashes and the website cannot be accessed. Hosting support told me I may need to reprogram my code. Does anybody have a solution for this problem? Should I create a new thread?
Hi, cube processing is taking more time on a new server, while the same cubes take less time on another server. The cubes are processed through a DTS package. Can anybody help with finding the possible reasons for this? Regards, Naseem
Calculation of an average using DAX's AVERAGE and AVERAGEX. This is the manual calculation in the DW, using SQL. In the tabular project (where I've noticed that these 4 percentages are in themselves strange), I first noticed that I would have to divide by 100 to get the same values as in the DW, so I used AVERAGEX:
The results were, respectively: 701,68; 2120,60...; -669,441; and finally -694,74 for Avg_FMPdollar. I can't understand the difference from the SQL calculation, since these calculations are similar to the other ones. After that I tried:
test:=SUM([_FMPdollar])/countrows('Fct Sales')

and the value was EQUAL to SQL: -672,17. Then:

test2:=AVERAGE('Fct Sales'[_Frontend Margin Percent ACY])

and here, without dividing by 100 at the end: -696,74...
So AVERAGE and AVERAGEX behave differently from SUM divided by COUNTROWS, and, even more strangely, test2 doesn't need the division by 100 to be similar to the AVERAGEX result.
I even calculated the number of blanks and the number of zeros in each column, in case the difference was in the denominator (that is, a division by a different number of rows), but the counts are equal.
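One note that may be relevant (a SQL analogy, with an invented inline table): DAX AVERAGE and AVERAGEX skip blank values, just as SQL AVG skips NULLs, while SUM(...)/COUNTROWS(...) divides by every row, so the two approaches agree only when the column has no blanks:

-- AVG ignores NULLs; SUM/COUNT(*) divides by all rows
SELECT AVG(margin_pct)                   AS avg_skips_nulls,    -- (10+20)/2 = 15
       SUM(margin_pct) * 1.0 / COUNT(*)  AS sum_over_all_rows   -- 30/3 = 10
FROM (VALUES (10.0), (20.0), (NULL)) AS t(margin_pct);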
I have a temp_max column and a temp_min column with data for every day for 60 years. I want the average temperature for January of year 1 through year 60, averaged; i.e., if the average temp for Jan of year 1 is 20 and the average temp for Jan of year 2 is 30, then the overall average is 25. The complexity lies in calculating a daily average by month, THEN a yearly average by month, in one statement. ?confused?
Here's the original query:

accept platformId CHAR format a6 prompt 'Enter Platform Id (capital letters in ''): '
SELECT name, country_cd from weather_station where platformId=&&platformId;
SELECT to_char(datetime, 'MM') AS mo,
       max(temp_max) AS max_t,
       round(avg((temp_max + temp_min) / 2), 2) AS avg_t,
       min(temp_min) AS min_temp,
       count(DISTINCT to_char(datetime, 'yyyy')) AS total_years
FROM daily
WHERE platformId = &&platformId
GROUP BY to_char(datetime, 'MM')
ORDER BY to_char(datetime, 'MM');
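A sketch of the two-level average in one statement, keeping the same table and columns: the inner query computes each year's average per month, and the outer query averages those yearly averages (so Jan averages of 20 and 30 become 25):

SELECT mo,
       round(avg(yr_avg), 2) AS avg_of_yearly_avgs
FROM (
    SELECT to_char(datetime, 'MM')   AS mo,
           to_char(datetime, 'yyyy') AS yr,
           avg((temp_max + temp_min) / 2) AS yr_avg
    FROM daily
    WHERE platformId = &&platformId
    GROUP BY to_char(datetime, 'MM'), to_char(datetime, 'yyyy')
)
GROUP BY mo
ORDER BY mo;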
I was wondering if there was any way to add a value field to a report with the time it took for the report to process.
It would probably be a text field with an expression, but I don't know how that would go.
I know that in an expression there is a value for ExecutionTime (when the report began to run), but nothing about when it ends. Can this be done? And if yes, how?
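A possible sketch, untested: expressions in the page footer are evaluated late in the run, so a textbox there could compare the current time against the built-in Globals!ExecutionTime start value:

=Now().Subtract(Globals!ExecutionTime).TotalSeconds & " seconds"

This approximates processing time rather than measuring the true end-of-render instant.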
With an SSAS database I have created a data mining structure using the time series algorithm. While processing the SSAS DB, data mining takes a long time to process, so how can we reduce the processing time?
I have Virtual PC loaded on my machine. Every time I try to view data in the Data Pane in Analysis Manager, it comes up with an error: "Unable to browse the data (Unspecified error)". I have tried restarting the DHCP services, but that has had no effect. Could someone please advise me?
I have 2 virtual PCs running: one is the domain controller, named London, and the other is Brisbane, which is a member of the domain. Neither London nor Brisbane is able to display the data in the Data Pane in Analysis Manager. Please advise.
I used SQL Analysis Manager to create a cube, and it processes successfully, but when I want to browse the cube it shows the error message: "Unable to browse the cube 'my cube name'. Unspecified error."
Can anybody tell me what's wrong with that? Thanks.
I have created a cube in SQL Server Analysis Manager and now I want to give it to the end user, like an .exe program. What are all the possible ways we can deliver it to the end user?
How do I use these cubes on the web, in browsers, as I do in the Business Intelligence IDE in Visual Studio? Are there any free components available for the web through which cube data can be browsed, or are there any methods available in SQL Server to publish cubes to the web?
Any help in this regard will be greatly appreciated. Thanks.
I'm looking into adding OLAP cubes as part of our software, to be distributed with our OLTP and eventually OLAP databases. Are there any books that deal with distributing OLAP cubes and/or security? Our clients will have SQL Server with our databases. Thanks.
I am working on a data warehouse using SQL Server Analysis Manager. I created a cube that is working fine, but now I have to distribute it to end users. How do I do that, and in how many ways can it be done?
1) Can we make that .cub file?
2) How can we give access to end users without giving access to the database?
3) How can we host a cube and access it from Excel or any other software?
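For point 3, a sketch of the usual route, with placeholder server and catalog names: Excel connects directly to the cube (so end users never need rights on the relational source database) through an OLE DB for OLAP connection string along the lines of:

Provider=MSOLAP;Data Source=MyOlapServer;Initial Catalog=MyCubeCatalog;Integrated Security=SSPI

For point 1, Analysis Services can also generate standalone local cube (.cub) files via the MDX CREATE GLOBAL CUBE statement, which users can open in Excel without any server connection.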