Hi all, any advice or help is greatly appreciated. I need to process dimensions and cubes overnight; what is the best and most reliable way of achieving this?
Error 1: OLE DB error: OLE DB or ODBC error: The query has been canceled because the estimated cost of this query (628) exceeds the configured threshold of 300. Contact the system administrator.; 42000.
Error 2: Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Revenue Labeled Prod ~MC-LPROD', Name of 'Revenue Labeled Prod ~MC-LPROD' was being processed.
I don't understand why this happens. Previously I created one cube and 4 dimensions and got the same error. I thought the cube was the culprit, so I removed the cube and dimensions. After removing them, I built the project successfully, but processing failed again on the mining model. The mining model is fairly simple: only 3 columns (one key time, one key, and one column that is both input and predicted), using the Microsoft Time Series algorithm.
Why is the estimated cost even higher when I create another project using only one table (Revenue, the same fact table)?
Error 1: OLE DB error: OLE DB or ODBC error: The query has been canceled because the estimated cost of this query (1493) exceeds the configured threshold of 300. Contact the system administrator.; 42000.
I am working on a local machine, so there should be no network-related issues when querying. The machine is a 2-processor Xeon 2.4 GHz with 3 GB of memory.
How can I solve this problem? I have checked the properties of Analysis Services and have set a higher timeout value in ODBC Administrator.
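That error text matches the SQL Server query governor cost limit, which is an instance-wide setting on the relational server the cube and mining model read from, not an Analysis Services property or an ODBC timeout. A minimal sketch, assuming you have permission to change server configuration:

-- Show and, if appropriate, raise or disable the query governor cost limit (0 = no limit).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'query governor cost limit';      -- current value (here, 300)
EXEC sp_configure 'query governor cost limit', 0;   -- or any value above the estimated cost
RECONFIGURE;

-- Session-level override (only affects queries run in that session,
-- so it will not help queries issued by SSAS processing):
SET QUERY_GOVERNOR_COST_LIMIT 0;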
I need to design an SSIS package that processes three cubes within an SSAS database, along with all of their related dimensions.
I also need to ensure this SSIS task is transactional (i.e. users are able to browse and query the cubes continuously throughout the day, even while the cubes are being processed).
I have noticed that simply adding an Analysis Services Processing Task, and having that task process the 3 cubes with the process options set to "Process Update" and the "Process affected Objects" (visible after clicking change settings) selected DOES NOT force the processing of the dependent dimensions of the cubes.
I have also tried changing the process option of the 3 cubes to "Process Full".
I also get errors implying that attribute keys in the fact table are not found in the dimension, which indicates that the cube is being reprocessed before the dimensions have been, or that the dimensions were not reprocessed at all. I came to that conclusion because if, after this error, I manually process the dimensions and then the cubes (via Management Studio), there are no errors.
I have worked around this by creating two tasks: the first processes the dimensions and the second processes the cubes. This, however, will not work for jobs run during the day, since it means the cubes may be in an invalid state while the dimensions are being processed.
Has anyone else experienced this? Any advice or standards that might help?
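One approach worth trying (a sketch, not a definitive fix, and the option names may differ slightly by version): put the dimensions and the cubes into a single Analysis Services Processing Task, with the dimensions listed first (Process Update) and the cubes after them (Process Full or Process Default), then under Change Settings set the batch options roughly as follows:

Processing order:  Sequential
Transaction mode:  All in one transaction

Processing inside one transaction is what keeps the cubes browsable during the run; users keep seeing the old data, and the swap to the newly processed data happens only at commit.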
Hi guys, I am developing the site http://www.onlineacademicadvisor.com and am having DB problems for the 3rd time in a row. Whenever traffic on the site gets bigger, the transaction log becomes full and no user can log in. This problem is described at http://support.microsoft.com/kb/317375. From there, I got the feeling that the problem occurs if transactions are not committed and last too long. However, I do not have any explicit transactions, just the usual SELECT, INSERT and UPDATE statements in stored procedures. I do not call COMMIT (or RETURN) explicitly at the end of my stored procedures, though. My stored procedures are short. Have you got any ideas about what could cause the problem? I really have no idea what it could be. Your help is much appreciated.
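A couple of checks that may narrow it down; a minimal sketch (the database and log file names are placeholders). If the database is in FULL recovery but transaction log backups are never taken, the log will simply keep growing under load regardless of how short the transactions are:

-- How full is each log, and is anything holding an open transaction?
DBCC SQLPERF(LOGSPACE);
DBCC OPENTRAN('YourDatabase');   -- 'YourDatabase' is a placeholder

-- If you do not need point-in-time recovery, SIMPLE recovery lets the log truncate itself;
-- otherwise keep FULL recovery and schedule regular BACKUP LOG jobs instead.
ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;
DBCC SHRINKFILE('YourDatabase_log', 100);   -- placeholder logical log file name; target size in MB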
Over the past few days we noticed severe performance issues on some of our more complicated queries. I ran a DBCC ShowContig on the problematic tables, and noted that the Logical Scan Fragmentation was very high, like over 90%. I ran a DBCC DBREINDEX on the tables, the Logical Scan Fragmentation reduced down to between 0% and 10%, and the queries ran instantly.
However...the next day, the queries were causing problems again. Running ShowContig showed the fragmentation was up to over 90% again. Now, these are very static tables I'm dealing with...absolutely no UPDATE, INSERT or DELETE commands have been run against them (we import the data once a month). I set up a job to monitor the state of the index fragmentation overnight. All is well until 0100, when the LSF hits 90% again. I can't figure out what could be causing this, we have no jobs that run on, or affect, this database overnight, except the backup, which runs at 2100. Has anyone experienced anything like this before, or does SQL Server do something on the fly that could cause it to happen?
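For the overnight monitoring job, a sketch of a fragmentation check, assuming the instance is SQL Server 2005 or later (on 2000, DBCC SHOWCONTIG ... WITH TABLERESULTS gives similar output). It may also help to log which sessions are active around 01:00, since something is clearly touching those tables:

-- Indexes in the current database with noteworthy logical fragmentation
SELECT  GETDATE()                        AS sampled_at,
        OBJECT_NAME(ps.object_id)        AS table_name,
        i.name                           AS index_name,
        ps.avg_fragmentation_in_percent
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN    sys.indexes AS i
        ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE   ps.avg_fragmentation_in_percent > 30
ORDER BY ps.avg_fragmentation_in_percent DESC;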
During the day I frequently watch the Current Activity window under Enterprise Manager to see who is doing what and when.
However, overnight there are various users running various jobs that I am not always informed about. Watching the current activity isn't an option here.
Does anyone have a job that I could periodically run overnight to perform the same function as the Current Activity box? Which system tables does the current activity functionality use?
Thanks. Any information that can be provided here will be appreciated.
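Current Activity in Enterprise Manager is driven largely by master.dbo.sysprocesses (with syslockinfo supplying the lock detail), so a scheduled job can snapshot that view into a table overnight. A minimal sketch; the table name and column choices are illustrative:

-- One-off setup
CREATE TABLE dbo.ActivitySnapshot (
    sampled_at   DATETIME NOT NULL DEFAULT GETDATE(),
    spid         SMALLINT,
    blocked      SMALLINT,
    status       NVARCHAR(30),
    loginame     NVARCHAR(128),
    hostname     NVARCHAR(128),
    program_name NVARCHAR(128),
    dbid         SMALLINT,
    cmd          NVARCHAR(32)
);

-- Scheduled job step, e.g. every 10 minutes overnight
INSERT INTO dbo.ActivitySnapshot (spid, blocked, status, loginame, hostname, program_name, dbid, cmd)
SELECT spid, blocked, status, loginame, hostname, program_name, dbid, cmd
FROM   master.dbo.sysprocesses
WHERE  spid > 50;   -- skip system processes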
Hi, I've just been given the task of finding out how to implement a backup procedure for our SQL Server databases. Most are running 2000, some 2005. I'm a programmer, and I'm used to having a DBA to help me! I've seen a few methods on the web involving a stored procedure and running the task on a schedule. I need to back up and restore all the databases in SQL Server 2000 and work out a way of displaying whether or not each backup was successful. Can anybody please point me in the right direction, as I've no idea how to do any of this really? I guess if I could set up a sproc to loop through the databases that would help, but I'm not sure where to start. Thanks in advance.
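A minimal sketch of the looping idea, runnable on both 2000 and 2005; the backup folder is a placeholder. Scheduling it as a SQL Server Agent job gives you job history (and msdb.dbo.backupset) for reporting whether each backup succeeded:

DECLARE @name SYSNAME, @path NVARCHAR(500), @sql NVARCHAR(1000);

DECLARE db_cursor CURSOR FOR
    SELECT name
    FROM   master.dbo.sysdatabases     -- present on 2000 and 2005
    WHERE  name <> 'tempdb';

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @path = N'D:\Backups\' + @name + N'_' + CONVERT(NVARCHAR(8), GETDATE(), 112) + N'.bak';
    SET @sql  = N'BACKUP DATABASE [' + @name + N'] TO DISK = ''' + @path + N'''';
    EXEC (@sql);    -- a failed backup raises an error that the Agent job history records
    FETCH NEXT FROM db_cursor INTO @name;
END;
CLOSE db_cursor;
DEALLOCATE db_cursor;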
We have an MIS system which has approx 100 reports. Each of these reports can take up to several minutes to run due to the complexity of the queries (hundreds of lines each in most cases). Each report can be run by many users, so in effect we have a slow system.

I want to separate the complex part of the queries into a process that is generated each night. Then the reports will only have to query pre-formatted data with minimal parameters, as the hard part will have been completed for the users while they are not in. Ideally we will generate (via a stored procedure, possibly) a set of data for each report and hold this on the server. We can then query with simpler parameters, such as by date, and get the data back quite quickly.

The whole process of how we obtain the data is very complex. There are various views which gather data from the back office system. These are very complex, and when queries are run against them, including other tables to bring in more data, it gets nicely complicated.

The only problem is that the users want to have access to LIVE data from the back office system, specifically the Sales team who want to access this remotely. My method only allows for data from the night before, so is there an option available to me which will allow me to do this? The queries can't be improved on an awful lot, so they will take as long as they take. The idea of running them once is the only way I can see to improve the performance in any significant way.

True, I could just let them carry on as they are and let them suffer with the performance on live data, but I'd like to do something to improve the situation for them. Any advice would be appreciated.

Thanks
Ryan
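A sketch of the nightly pre-aggregation idea, with hypothetical table and view names. The reports then read the pre-built table; for the sales team's live requirement, the same query can be run against the complex view for today's data only and unioned with the pre-built history, so the expensive part stays small:

-- Nightly refresh of a pre-aggregated reporting table (names are illustrative)
CREATE PROCEDURE dbo.RefreshSalesReportData
AS
BEGIN
    SET NOCOUNT ON;

    TRUNCATE TABLE dbo.SalesReportData;

    INSERT INTO dbo.SalesReportData (ReportDate, Region, Product, Amount)
    SELECT ReportDate, Region, Product, SUM(Amount)
    FROM   dbo.vwComplexSalesView          -- the existing slow view(s)
    WHERE  ReportDate < CONVERT(CHAR(8), GETDATE(), 112)   -- everything up to yesterday
    GROUP BY ReportDate, Region, Product;
END;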
I was trying to extract data from the source server using an OLE DB Source and SQL Server Destination when I encountered this error:
"Transaction (Process ID 135) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.".
What must be done so that even if the table being queried is locked, I wouldn't experience any deadlock?
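One option, sketched below with placeholder names and assuming SQL Server 2005 or later on the source: run the extraction under snapshot isolation, so the read uses row versions instead of shared locks, which removes the reader/writer lock contention that typically produces these deadlocks. (A NOLOCK hint on the source query is the simpler alternative, at the cost of possible dirty reads.)

-- One-off, on the source database
ALTER DATABASE SourceDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Then use this as the OLE DB Source SQL command
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT Col1, Col2, Col3
FROM   dbo.SourceTable;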
Hello all, I am running into an interesting scenario on my desktop. I'm running developer edition on Windows XP Professional (9.00.3042.00 SP2 Developer Edition). OS is autopatched via corporate policy and I saw some patches go in last week. This machine is also a hand-me-down so I don't have a clean install of the databases on the machine but I am local admin.
So, starting last week after a forced remote reboot (also a policy), I noticed a few of the databases didn't start back up. I chalked it up to the hard shutdown and went along my merry way. Friday, however, I know I shut my machine down nicely, and this morning when I booted up I was in the same state I was last Wednesday: 7 of the 18 databases on my machine came up with
FCB::Open: Operating system error 32 (The process cannot access the file because it is being used by another process.) occurred while creating or opening file 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Test.mdf'. Diagnose and correct the operating system error, and retry the operation. It also logs: FCB::Open failed: Could not open file C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Test.mdf for file number 1. OS error: 32 (The process cannot access the file because it is being used by another process.).
I've caught references to the auto close feature being a possible culprit, no dice as the databases in question are set to False. Recovery mode varies on the databases from Simple to Full. If I cycle the SQL Server service, whatever transient issue it was having with those files is gone. As much as I'd love to disable the virus scanner, network security would not be amused. The data and log files appear to have the same permissions as unaffected database files. Nothing's set to read only or archive as I've caught on other forums as possible gremlins. I have sufficient disk space and the databases are set for unrestricted growth.
Any thoughts on what I could look at? If everything came up in RECOVERY_PENDING it would make more sense to me than the hit-or-miss behavior I'm experiencing now.
Dear list, I'm designing a package that uses Microsoft's preplog.exe to prepare web log files to be imported into SQL Server.
What I'm trying to do is convert this command, which works, into an Execute Process Task: D:\SSIS Process\Prepweblog\ProcessLoad>preplog ex.log > out.log. The above DOS command works 100%.
However, when I use the Execute Process Task I get this error: [Execute Process Task] Error: In Executing "D:\SSIS Process\Prepweblog\ProcessLoad\preplog.exe" "" at "D:\SSIS Process\Prepweblog\ProcessLoad", The process exit code was "-1" while the expected was "0".
There are two package variables: User::gsPreplogInput = ex.log and User::gsPreplogOutput = out.log.
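The ">" redirection is a cmd.exe feature, not something preplog.exe understands, so running preplog.exe directly makes the redirection (and the output file) disappear. A sketch of one way to set the task up, with the paths assumed from the post:

Executable:       C:\Windows\System32\cmd.exe
Arguments:        /C "preplog.exe ex.log > out.log"
WorkingDirectory: D:\SSIS Process\Prepweblog\ProcessLoad

To use the two package variables, an expression on the Arguments property along these lines should work: "/C \"preplog.exe " + @[User::gsPreplogInput] + " > " + @[User::gsPreplogOutput] + "\"". Alternatively, run preplog.exe directly and capture its output by pointing the task's StandardOutputVariable at a string variable, then write that variable to out.log in a later step.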
How do I use the Execute Process Task? I am trying to unzip a file using the freeware PZUnzip.exe. I tried to place the entire command in a batch file and specified the working directory as the location of the batch file, but the task fails with the error:
SSIS package "IngramWeeklyPOS.dtsx" starting.
Error: 0xC0029151 at Unzip download file, Execute Process Task: In Executing "C:\ETL\POSData\IngramWeekly\Unzip.bat" "" at "C:\ETL\POSData\IngramWeekly", The process exit code was "1" while the expected was "0".
Then I tried to specify the exe directly in the Executable property and the arguments as the location of the zip file and the directory to unzip the files into, but this time it fails with the following message:
SSIS package "IngramWeeklyPOS.dtsx" starting.
Error: 0xC002F304 at Unzip download file, Execute Process Task: An error occurred with the following error message: "%1 is not a valid Win32 application".
The command in the batch file, when run from the command line, works perfectly and unzips the file, so there is absolutely no problem with the command itself. I believe it is just the setup of the variables in the Execute Process Task editor under Process. Any input on resolving this will be much appreciated.
I am designing a utility which will keep two similar databases in sync. In other words, copying the new data from db1 to db2 and updating the old data from db1 to db2.
For this I am making use of the 'tablediff' utility, which, when provided with the server name, database, and table info, will generate a .sql file that can be used to keep the target table in sync with the source table.
I am using the Execute Process Task and the process parameters I am providing are:
The customer.bat file will have the following code: tablediff -sourceserver "LV-SQL5" -sourcedatabase "TC_CTI" -sourcetable "CUSTOMER_1" -destinationserver "LV-SQL2" -destinationdatabase "TC_CTI" -destinationtable "CUSTOMER" -f "c:\SQL_bat_Files\sql5\TC_CTI\sql_files\customer1"
The .sql file will be generated at: C:\SQL_bat_Files\sql5\TC_CTI\sql_files\customer1.
The problem: the Execute Process Task is working fine, i.e., the tables are being compared correctly and the .sql file is being generated as desired. But the task itself is reporting failure with the following error:
[Execute Process Task] Error: In Executing "C:\SQL_bat_Files\SQL5\TC_CTI\package_occurrence.bat" "" at "C:\Program Files (x86)\Microsoft SQL Server\90\COM", The process exit code was "2" while the expected was "0".
Some of you may suggest just setting ForceExecutionResult = Success (in fact, this is what I am doing now just to get the package working), but this is not what I want.
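For what it's worth, tablediff appears to use its exit code to report the outcome of the comparison (differences found) rather than only success or failure, so the task sees a non-zero code and reports failure. If that is the case here, setting the Execute Process Task's SuccessValue property to 2 (the default is 0), or ending the batch file with a line that resets the exit code, avoids having to force the result:

exit /b 0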
I'm pulling data from an Oracle DB and loading it into MS SQL 2008. For my data type checks during the load process, what are the options to ensure that the data being processed won't fail? I would like to verify each row against the target data types first and, if the format is valid, load it into the destination table; otherwise mark it with an error flag and push it into an errors table. All this at the row level. One way I can think of is to load into a staging table, then get the source and destination column data types, compare them, and proceed.
Or should I just try loading the data directly and, if it fails, try troubleshooting (which could be a difficult task, as I wouldn't know what caused the error)?
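A sketch of the staging-table route with hypothetical names, assuming the staging columns are loaded as plain character data (SQL 2008 has no TRY_CONVERT, so ISDATE/ISNUMERIC-style checks stand in for it). Inside the data flow, the equivalent is a Data Conversion or Derived Column transform whose error output redirects failing rows to an error destination.

-- Flag rows that will not convert cleanly to the destination types
UPDATE s
SET    ErrorFlag   = 1,
       ErrorReason = CASE
                        WHEN ISDATE(s.OrderDate)  = 0 THEN 'Bad date'
                        WHEN ISNUMERIC(s.Amount)  = 0 THEN 'Bad amount'
                        WHEN LEN(s.CustomerCode) > 20 THEN 'CustomerCode too long'
                     END
FROM   staging.Orders AS s
WHERE  ISDATE(s.OrderDate) = 0
   OR  ISNUMERIC(s.Amount) = 0
   OR  LEN(s.CustomerCode) > 20;

-- Failing rows go to the errors table; the rest are safe to convert and load
INSERT INTO dbo.OrderErrors (OrderDate, Amount, CustomerCode, ErrorReason)
SELECT OrderDate, Amount, CustomerCode, ErrorReason
FROM   staging.Orders
WHERE  ErrorFlag = 1;

INSERT INTO dbo.Orders (OrderDate, Amount, CustomerCode)
SELECT CONVERT(DATETIME, OrderDate), CONVERT(DECIMAL(18, 2), Amount), CustomerCode
FROM   staging.Orders
WHERE  ISNULL(ErrorFlag, 0) = 0;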
I am having a table locking issue that I need to start paying attention to, as it's getting more frequent.
The problem is that the data in the tables is live finance data that needs to be changed and viewed almost in real time, so from what I have picked up so far, using table hints may not be a good idea.
I have a guy at work telling me that introducing a data access layer is the only way to solve this. I am not convinced, but I haven't enough knowledge to back my own feeling up (it's a classic ASP system, not .NET).
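If the contention is readers blocking on writers (or vice versa), one option that avoids both table hints and an application rewrite is read committed snapshot isolation, assuming the instance is SQL Server 2005 or later. Readers then see the last committed version of each row instead of waiting on writers' locks, which tends to suit "accurate and near real time" better than NOLOCK; the trade-off is row-version overhead in tempdb. A sketch with a placeholder database name:

-- Needs a moment of exclusive access, so run it in a quiet window
ALTER DATABASE FinanceDb
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;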
Hi, I'm trying to upload the ASPNETDB.MDF file to a hosting server via FTP, and every time, when it is about half way uploaded (40% or 50%), I get an error message saying: "550 ASPNETDB.MDF: The process cannot access the file because it is being used by another process", and then the upload fails. I'm using SQL Express. Does anybody know the cause? Thanks a lot.
Hi. When I try to start a package manually by clicking the Start Debugging button, I get this after a little while:
Cannot process request because the process (3880) has exited. (Microsoft.DataTransformationServices.VsIntegration)
How can I prevent this from happening? This happens every time I want to start the package and every time the process id is different. Here it is 3880.
Hi, I have noticed that the cubes we have here use shared dimensions. For almost all cubes (5-6) there are at least 4-5 common dimensions. According to what I have been preached so far, shared dimensions exist so that you can reuse them. That is not what is practised here. For example, cube1 has somedim1, dim2_c1, dim3_c1... and cube2 has xyzdim1_c2, xyzdim2_c2, dim3_c2...
dim3_c1 and dim3_c2 are the same dimension, one copy for each cube. I don't know if I am missing something. Shouldn't they use the same dimension? Could there be any reason for this? Please advise.
I am new to SSIS and I am investigating using the Slowly Changing Dimension transform.
The data source that I receive is a daily snapshot of the external source system table. I need to store the history of the entity attributes (Type 2 SCD) and I am using the Start / End Date mechanism.
When an entity (identified by the business key) is no longer received in the source snapshot, I would like the data flow to update the End Date of the current row to show that the entity has now expired.
Does anyone have any suggestions for a good way to achieve this?
NB: Changing the source system extract to include and flag expired entities is not an option for me.
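The SCD transform only reacts to rows that arrive in the data flow, so it won't expire keys that are simply absent from the snapshot. One common workaround is an Execute SQL Task after the data flow that closes off current rows whose business key didn't arrive; a minimal sketch with hypothetical table and column names:

-- Expire current dimension rows whose business key is missing from today's staged snapshot
UPDATE d
SET    d.EndDate = GETDATE()
FROM   dbo.DimEntity AS d
WHERE  d.EndDate IS NULL                      -- current rows only
  AND  NOT EXISTS (SELECT 1
                   FROM   staging.EntitySnapshot AS s
                   WHERE  s.BusinessKey = d.BusinessKey);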
This is my first attempt at populating fact and dimension tables from SSIS. I have a fact table Sales and dimension tables Customer and Location. The data I am given to fill this structure is in one file, where each record contains the sales information as well as the customer information and location details on the same row.
I am using the SSIS to fill this structure by using the slowly changing dimension for the Customer dimension.
I am filling the Customer dimension by using the Slowly Changing Dimension transform. If I have 2 records with the same business key but each with a different first name, where first name is set as a changing attribute, it creates the customer twice in the table. Shouldn't it create one record with the most recent first name, or am I misusing the SCD?
I have another conceptual question: I am not sure of the best way to fill my fact and dimension tables. Should I fill the Customer and Location dimensions first, through 2 different passes over the data, and then fill the fact table, mapping to the corresponding dimensions? Or should I loop over each record and, for each record, fill the dimensions and the fact simultaneously?
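The usual pattern is dimensions first (so every surrogate key exists), then the fact load, which resolves the business keys against the dimensions. A hedged T-SQL sketch of the fact load with hypothetical names; inside a data flow, the same step is done with Lookup transforms against the dimension tables:

-- Load the fact table by resolving business keys to surrogate keys
INSERT INTO dbo.FactSales (CustomerKey, LocationKey, SaleDate, Amount)
SELECT c.CustomerKey,
       l.LocationKey,
       s.SaleDate,
       s.Amount
FROM   staging.SalesFile AS s
JOIN   dbo.DimCustomer   AS c
       ON  c.CustomerBusinessKey = s.CustomerId
       AND c.EndDate IS NULL           -- current version of the customer
JOIN   dbo.DimLocation   AS l
       ON  l.LocationBusinessKey = s.LocationId;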
I have a question regarding a problem with two dimensions I built.
The first is named Account and contains approx 40k records. The second dimension is named Contact; it holds the employees of the Accounts and contains approx 58k records. In the cube I also have two measures: one is a count of courses a Contact has taken, the other is a count of certifications a Contact may have earned.
The Account dim table has an AccountKey primary key, and the Contact dim table has a ContactKey primary key and an AccountKey foreign key to the Account table. The key fields are not operational keys; they are surrogates. The first 10 or so records in both the Contact and Account dim tables hold values that are used as parent groupings in the cube dimension. For instance:
key = 1, name value = 'A-C'
Each subsequent value has the parent grouping's key value set as its parent key.
The fact table contains both the AccountKeys and ContactKeys, plus an ItemId that corresponds to a specific course or certification. This ItemId is used for the measures in the cube.
That's the background... here's my problem.
Using BIDS or Management Studio, whenever I add the root dimAccount level (actual account names) as a row, then add the root Contact level (contact names) as another row, and drill down to a specific Account's contacts, everything locks up. I have only one measure in the data pane. The fact table only has about 20k records in it, so I would think this should return data instantly. If I browse the cube with any other combination of dimensions besides Contact and Account, the cube runs fine; it is just the combination of Account and Contact. I am getting really frustrated as I cannot figure this out.
I am rusty at SSAS so forgive me if I left out any pertinent info.
I am relatively new to SSIS/SSAS. I have searched the forums but cannot find an answer to my question.
I created a cube in SSAS and have deployed it. Now I am trying to use SSIS to populate the cube. I have set up a DS that points to the SSAS instance; it uses the OLE DB Provider for Analysis Services 9.0.
When I try to use a Data Flow Task OLE DB Source to truncate the dimensions/cubes, I do not see the DS in the list to select. Why is that?
I am finding it hard to get into the SSIS way of organizing the processing.
I have a problem with three measures in a virtual cube: "Actual", "Budget" and "Full Year Budget".
The dimensions I have are: - Account No_ / Name - Cost Code - Sub Cost Code - Time/Dates - Budget Name
Both "Actual" & "Budget" measures need to be filtered/dimensioned by: - Account No_ / Name - Cost Code - Sub Cost Code - Time/Dates (exclusive to "Actual", "Budget")
Thus I have put these in one cube.
AND "Full Year Budget" needs to be filtered/dimensioned by: - Account No_ / Name - Cost Code - Sub Cost Code - Budget Name (exclusive to "Full Year Budget")
Thus I have put this as one cube...
I then created a virtual cube from the 2 cubes, thinking that the dimensions I created in each original cube would only filter the measures that came from that cube. ...BUT all dimension filters in the virtual cube filter all measures in the virtual cube, irrespective of which original cube the dimensions were created with.
I am building a health care application that marries transaction-level data (health care services provided) with person-level characteristics that have a time-dimension. The person-level characteristics are diseases that the person has (these disease all have a start and some have an end date). The diseases are stored in a table in which the foreign keys are a person-identifier, a time identifier (month/year) and a surrogate for the disease. Persons can have more than one disease at a time (the diseases are NOT mutually-exclusive). There are no measures in this table. The transaction table has a foreign key for person and time (month/day/year), a procedure code (the type of service rendered) and money (the cost of the services).
How do I answer the following questions:
What is the total cost of care (the sum of all service costs) last year for persons with "disease A"?
What is the total cost of care last year for persons with "disease A" AND "disease B"?
What is the total cost of care last year for persons with "disease A" OR "disease B"?
I've tried a factless fact table but can't get it to work. If anyone has the right solution and can communicate it to me before I slit my wrists, I would be greatly appreciative!!!
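In relational terms the disease table works as a bridge (factless) fact between person and month, and the three questions differ only in how the existence tests are combined; in the cube itself the usual equivalent is a many-to-many relationship between the cost measure group and a Disease dimension through that bridge table. A hedged T-SQL sketch with hypothetical names (the year is a placeholder):

-- Total cost of care "last year" for persons with disease A AND disease B
SELECT SUM(f.ServiceCost) AS TotalCost
FROM   dbo.FactService AS f
JOIN   dbo.DimDate     AS d ON d.DateKey = f.DateKey
WHERE  d.CalendarYear = 2007                         -- placeholder for "last year"
  AND  EXISTS (SELECT 1
               FROM   dbo.FactPersonDisease AS pd
               JOIN   dbo.DimDisease        AS dis ON dis.DiseaseKey = pd.DiseaseKey
               WHERE  pd.PersonKey = f.PersonKey AND dis.DiseaseName = 'Disease A')
  AND  EXISTS (SELECT 1
               FROM   dbo.FactPersonDisease AS pd
               JOIN   dbo.DimDisease        AS dis ON dis.DiseaseKey = pd.DiseaseKey
               WHERE  pd.PersonKey = f.PersonKey AND dis.DiseaseName = 'Disease B');
-- For "A OR B", replace the two ANDed EXISTS tests with a single EXISTS
-- whose predicate is dis.DiseaseName IN ('Disease A', 'Disease B').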
When I open my cube, the default dimension on rows is the dimension whose name is first alphabetically, and on columns it defaults to the time dimension.
I need to specify a specific dimension to be shown on rows and columns when the cube is viewed.