SQL 2012 :: Composite Key Partitioning In Tabular Model
Jun 17, 2014
I have a question about partitions in both SQL Server table and Tabular Model. I started to use Tabular Model recently.
I need to partition a table that collects daily rows for different clients.
The natural partition key is a combination of clientID+dateID (something like CL-YYYYMMDD)
I created a configuration table with an IDENTITY(1,1) primary key, PartitionID, which also contains the clientID and dateID fields.
Every day I add a new row to it and get a PartitionID for the new client and date.
Then I created a partitioned fact table using PartitionID as the partitioning column, with the corresponding partition function and partition scheme.
The daily client data is inserted into the partitioned table using the PartitionID.
Everything works fine, and the data are loaded correctly into the partitioned fact table.
Then I created a Tabular Model where the fact table is the partitioned table, and I created tabular model partitions using something like "select <field list> from PartitionedTable where partitionID = <partitionID>"
In this way, every day I load partitioned data into both SQL Server and the Tabular Model. I have two dimensions, client and calendar.
Now my question is: when I browse the Tabular Model and select a specific date and client from the dimensions, is the partitionID index being used correctly?
Or should I write the tabular model partition query as "select <field list> from PartitionedTable where clientID = <clientID> and dateID = <dateID>"? In that case, does the partitionID index still get used? How can I check it?
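A hedged illustration of the difference, with hypothetical object names (PartitionConfig standing in for the configuration table, PartitionedFact for the fact table): filtering on the partitioning column PartitionID lets the relational engine eliminate partitions, while filtering only on clientID and dateID cannot be mapped to a partition and would touch all of them unless those columns carry their own index.

-- Resolve the PartitionID first, then filter the partitioned column directly.
DECLARE @ClientID int = 42,
        @DateID int = 20140617,
        @PartitionID int;

SELECT @PartitionID = PartitionID
FROM dbo.PartitionConfig           -- hypothetical configuration table
WHERE ClientID = @ClientID AND DateID = @DateID;

SELECT ClientID, DateID, Amount    -- Amount is a placeholder measure column
FROM dbo.PartitionedFact
WHERE PartitionID = @PartitionID;

To check it, look at the actual execution plan for the partition query: the seek or scan on the fact table reports "Actual Partition Count", which should be 1 when elimination works. Note also that once processed, an in-memory (non-DirectQuery) tabular model answers browse queries from its own storage engine, so the SQL-side index and partitioning matter only while the partitions are being processed.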
View 0 Replies
Mar 29, 2015
I am trying to learn building an SSAS tabular model. While following a tutorial I needed to add a column to an existing table, but for some reason the Add Column option is greyed out (the Insert Column option is also not appearing in the menu).
View 4 Replies
View Related
Jul 2, 2013
I have a Tabular model with 8 tables; 7 of them process in 3 minutes, but the main table is 22.3 GB in size. I am processing the table on a 64-bit PC with 8 GB of RAM, to the local instance, from a production server. At this rate it will take days to process the 22.3 GB main table. The main table was running off a view, but I have since inserted the data into TableA and updated the model. The only other optimization I can think of is copying TableA to my local machine, but network performance to the server is not an issue. I have been processing the main table for the last 4 hours and it is only at 11.7 million records out of 124 million total; at this rate it will take 44 hours.
Size: 22342752 KB
CREATE TABLE [dbo].[StagingTablevwKeyEventsfact](
    [PKID] [bigint] NULL,
    [ManufactureGeoLocationCode] [nvarchar](14) NULL,
[code]....
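One mitigation worth trying, since each SSAS 2012 tabular partition is defined by its own SQL query: split the 124-million-row table into several partitions so a daily load only processes the newest slice instead of re-reading everything. A sketch of such partition queries, assuming a hypothetical EventDate column on the staging table:

-- Partition "2013" (EventDate is an assumed column; adjust ranges to the data)
SELECT *
FROM dbo.StagingTablevwKeyEventsfact
WHERE EventDate >= '20130101' AND EventDate < '20140101';

-- Partition "2014"
SELECT *
FROM dbo.StagingTablevwKeyEventsfact
WHERE EventDate >= '20140101' AND EventDate < '20150101';

After the initial load, only the current partition needs processing each run.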
View 6 Replies
View Related
Sep 25, 2015
We have a requirement to convert a Tabular model to Multidimensional, because a few features (such as actions) are not supported in Tabular. I am looking for a list of things to take care of when converting a Tabular solution to Multidimensional.
View 2 Replies
View Related
Nov 4, 2015
I have been looking at implementing a tabular model based on an OLTP database that's not dimensional. I know that this is possible but during my proof of concept I have encountered numerous problems ...
The things that I have run into are: after setting up the relationships, I found that measure filter context doesn't propagate along the relationships as I would expect. If the measure comes from a target table and not a source, an ALL member is returned (as in Multidimensional when a dimension isn't related to a measure group). Given the layout of an OLTP database, this will be hard to avoid.
One thing I have done to try to mitigate the above problem is to combine the tables used for measures in a view and use that as the source connected to the rest of the tables. However, because the tables are of different grains, this created duplication in some of the keys and measures, so the keys can't be used in relationships and the measures aren't accurate. One way around that is sketched below.
Have other people come across these things? Or should I give up the ghost and just recommend using dimensional models for the source? Is Tabular geared towards a DW in the same way Multidimensional is?
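One hedged way around the duplicated keys and measures is to aggregate the finer-grained table up to the coarser grain before combining, so the view exposes one row per key; a sketch with hypothetical table and column names:

-- Aggregate order lines up to the order grain, so the view has one
-- row per OrderID and its key stays usable in model relationships.
CREATE VIEW dbo.vOrderMeasures
AS
SELECT o.OrderID,
       o.CustomerID,
       o.OrderDate,
       SUM(ol.Quantity * ol.UnitPrice) AS OrderAmount,
       COUNT(*) AS LineCount
FROM dbo.Orders AS o
JOIN dbo.OrderLines AS ol
    ON ol.OrderID = o.OrderID
GROUP BY o.OrderID, o.CustomerID, o.OrderDate;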
View 2 Replies
View Related
Oct 22, 2015
I'm using SQL Server 2012 Bi edition.
I created a model in Visual Studio 2012 and when I process the model, most tables (the ones that should) process 77,546 records and that is reflected in VS.
However, when I deploy the model to the server it is only deploying 76,500 records for those tables.
Where did those 1,046 records go? Is there a setting that limits the records to be deployed? My base tables in the Data Mart have 77,546 records, just as in my Visual Studio model.
View 3 Replies
View Related
Oct 18, 2012
How can I automatically update my tabular model in the future when there's an update in my database?
View 4 Replies
View Related
Jul 11, 2013
How do I set the 'Ignore Unrelated Dimensions' property in a Tabular Model?
View 3 Replies
View Related
Nov 4, 2015
I get the following error while processing an SSAS tabular model (2014) on a new server. The SSAS service on this server runs under a login which has access to the SQL Server data sources. I tried changing the provider in the connection string from SQLNCLI11 to OLEDB, but that doesn't work either. The error message isn't useful for debugging further.
The cube processing succeeds on a different server. I scripted out the cube DB and ran it on the new server and am trying to process full but it fails with the following error.
Error Message:
The operation failed because the source database does not exist, the source table does not exist, or because you do not have access to the data source.
More Details:
OLE DB or ODBC error: A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections.
For more information see SQL Server Books Online.; 08001; SSL Provider: No credentials are available in the security package
; 08001; Client unable to establish connection; 08001; Encryption not supported on the client.; 08001.
A connection could not be made to the data source with the DataSourceID of 'd7a37dae-be87-44e0-a8b2-498069af82c9', Name of 'connection name'.
An error occurred while processing the partition 'XXXX_460f3467-1a99-4dc9-aaf2-bcf3d54a5c4c' in table 'XXXX_460f3467-1a99-4dc9-aaf2-bcf3d54a5c4c'.
The current operation was cancelled because another operation in the transaction failed. The operation failed because the source database does not exist, the source table does not exist, or because you do not have access to the data source.
View 3 Replies
View Related
Aug 10, 2015
I have a model containing an account revenue fact table, a time dimension, and a type 2 slowly changing dimension (SCD) for account.
The SCD changes whenever an account gets a new color.
When I query the model from PowerPivot I sometimes get the wrong color for an account in a date period. For example, I would expect to see color 3 for account 1 at 04-06-2010, but instead I see color 1. (The original post included screenshots of the FACT and SCD tables and the result; they are not reproduced here.)
I process all, deploy, and mark dim_time as the date dimension, and I still can't find the error.
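A common cause of this symptom is joining the fact to the SCD on the account business key rather than on the version-specific surrogate key, so each fact row picks up an arbitrary version's color. A hedged way to check, with hypothetical table and column names, is to resolve the SCD version whose validity window covers the transaction date:

-- Each fact row should resolve to the one SCD row whose validity
-- window covers the transaction date (all names hypothetical).
SELECT f.AccountBK,
       f.TransactionDate,
       d.Color
FROM dbo.FactRevenue AS f
JOIN dbo.DimAccount AS d
    ON  d.AccountBK = f.AccountBK
    AND f.TransactionDate >= d.ValidFrom
    AND f.TransactionDate <  d.ValidTo;

If this query returns the expected colors, the fix is usually to store the matching surrogate key on each fact row and relate the tables on that key in the model.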
View 3 Replies
View Related
Jun 2, 2015
We are working on implementing DAX formulas in our tabular model (SSAS 2012). When we try to deploy the solution, the deployment wizard gives us the error below: "The given key was not present in the dictionary"... I checked all the formulas that we recently implemented and they all look fine.
View 4 Replies
View Related
Jun 27, 2015
I'm working on a Tabular model at my job, since the users have a requirement to query against a live data source. I've been looking online for clients that issue DAX (as opposed to MDX) against the model, but besides SSRS and Power View I could not find anything else. Are there any other client-side interfaces to a DirectQuery Tabular model?
View 2 Replies
View Related
Jul 6, 2015
I built my first tabular model and see that my fact tables are also appearing as dimensions. In Multidimensional mode I could choose which tables are dimensions. How do I do that in a tabular model?
View 2 Replies
View Related
Apr 17, 2015
I have a separate date dimension marked as a Date table in the Tabular Model, with a proper relationship to another table (e.g. SalesTable) on a date-type column. But the time intelligence functions are still not working. I am using the date column from the other table (e.g. SalesTable) in the formula. What exactly is going wrong in our tabular model?
View 3 Replies
View Related
Jun 16, 2015
I'd like to know how to combine measures and fields coming from an analytical model (Tabular) with some Excel calculations. Basically I want to provide users with a simple report (to be displayed in SharePoint Excel Services) containing charts and slicers. The data comes from a tabular model, and most of the calculations are in the model as well. However, there is some little tweaking that must be done. For example, I might need additional calculated columns, but I don't feel the need to modify the tabular model for that. I was wondering if I could do this within Excel as well, but without having to bring all the data through a pivot table, then manipulate it and then show it on the report. So to be clear, I do not want any pivot tables lying around, even on a hidden sheet.
I noticed that when selecting a pivot chart in Excel, the ribbon menu under "PIVOTCHART TOOLS"/"ANALYSE" has a group of buttons named "Calculations", one of which is named OLAP Tools. Is it fair to assume that these options will allow me to create new measures on the Excel side, without affecting my tabular model?
View 2 Replies
View Related
May 9, 2014
The SQL CAT team's recommendation is to avoid partitioning dimension tables: URL..... I have inherited a dimension table that has almost 3 billion rows and is 1 TB, and I have been asked to look at partitioning it, putting maintenance in place, etc. I'm not a DW expert, so I was wondering: what are the reasons not to partition dimensions?
View 7 Replies
View Related
Feb 20, 2007
Hello,
I have a table which has a composite primary key consisting of four columns, one of them being a datetime called Day.
The nice thing afaik with this composite key is that it prevents duplicate entries in the table for any given day. But the problem is probably two-fold:
1. multiple columns need to be used for joins, and I think this might degrade performance?
2. in client applications such as ASP.NET these primary keys must be sent in the query string, and the query string becomes long and a little unmanageable.
A possible solution I'm thinking of is to drop the existing primary key, create a new identity column, and add a composite unique index on the columns from the existing composite key.
I would like to have some tips, recommendations and alternatives for what I should do in this case.
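For reference, a minimal sketch of that approach, with a hypothetical table name and placeholder key columns (the unique index keeps the duplicate prevention the composite key provided):

-- Replace the wide composite PK with a narrow surrogate key;
-- the table and the non-Day column names are hypothetical.
ALTER TABLE dbo.DailyStats DROP CONSTRAINT PK_DailyStats;

ALTER TABLE dbo.DailyStats
    ADD RowID int IDENTITY(1,1) NOT NULL;

ALTER TABLE dbo.DailyStats
    ADD CONSTRAINT PK_DailyStats PRIMARY KEY CLUSTERED (RowID);

-- Preserve uniqueness of the natural key, including the Day column.
CREATE UNIQUE NONCLUSTERED INDEX UX_DailyStats_NaturalKey
    ON dbo.DailyStats ([Day], ColA, ColB, ColC);

Joins and query strings can then carry the single RowID, while the unique index still blocks duplicate rows per day.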
View 1 Replies
View Related
Dec 3, 2014
I need dynamic partitioning of a SQL table.
1. What is the best practice for partitioning (on a date column)?
2. The project I am working on has a case where the status flag is updated a few days later (say 15-30 days). If the data has already gone into the partitioned table, how do I update it, and how do I find which partition holds it? (See the $PARTITION sketch after this list.)
3. Does creating partitions occupy more disk space?
4. Does every partition need an index?
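For item 2, the built-in $PARTITION function maps a value of the partitioning column to its partition number, which locates rows without guessing; a sketch assuming a hypothetical partition function pf_StatusDate and fact table:

-- Which partition would hold a given date? (names hypothetical)
SELECT $PARTITION.pf_StatusDate('2014-11-20') AS PartitionNumber;

-- Row counts per partition, to see where data actually landed.
SELECT $PARTITION.pf_StatusDate(StatusDate) AS PartitionNumber,
       COUNT(*) AS RowCnt
FROM dbo.MyFact
GROUP BY $PARTITION.pf_StatusDate(StatusDate)
ORDER BY PartitionNumber;

An ordinary UPDATE works even when the new value belongs to another partition; the engine relocates the row automatically.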
View 7 Replies
View Related
Nov 9, 2014
We are storing changed data of tables into XML format for auditing purpose. The functionality is already achieved. We are using FOR XML Path clause to convert relational data of tables into XML format.
Now, one table has a column whose name contains '('; for example, the column is named ColumnName(). In this case we cannot convert to XML using the FOR XML clause; it shows the error:
Column name 'columnName()' contains an invalid XML identifier as required by FOR XML; '(' (0x0028) is the first character at fault.
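Since FOR XML PATH uses column names as XML element names, and '(' is not a valid character in an XML name, one workaround is to alias the offending column to a valid name; a minimal sketch with hypothetical table and column names:

-- Alias the column so the generated XML element name is valid.
SELECT t.[ColumnName()] AS ColumnNameValue,
       t.OtherColumn
FROM dbo.AuditTable AS t
FOR XML PATH('row'), ROOT('rows');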
View 1 Replies
View Related
Feb 25, 2014
Is there a script to do table partitioning for a 500 GB table using the sliding window technique?
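Not a full script, but the sliding-window cycle itself is short: SWITCH the oldest populated partition out to a staging table, MERGE its now-empty boundary, then nominate the next filegroup and SPLIT a new boundary at the leading edge. A hedged outline with hypothetical object names:

-- 1. Switch the oldest populated partition out to a staging table
--    with identical structure on the same filegroup (metadata-only).
ALTER TABLE dbo.BigFact SWITCH PARTITION 2 TO dbo.BigFact_Staging;

-- 2. Remove the now-empty boundary at the trailing edge.
ALTER PARTITION FUNCTION pf_BigFact() MERGE RANGE ('2013-01-01');

-- 3. Nominate the filegroup for the next partition, then create
--    the new boundary at the leading edge.
ALTER PARTITION SCHEME ps_BigFact NEXT USED [FG_2015];
ALTER PARTITION FUNCTION pf_BigFact() SPLIT RANGE ('2015-01-01');

Run against empty partitions at both edges, MERGE and SPLIT stay metadata operations with no data movement.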
View 9 Replies
View Related
Mar 6, 2014
If the partitioning MERGE command attempts to drop historic data at the wrong boundary point, then data movement between file groups may be necessary before or during the next index rebuild. The script below creates 2 test tables, one using a range right function and the other using range left. The partitioning key is a number between 0 and 59; an empty partition is maintained at the start and end of the ranges, and 4 partitions contain data in the ranges 0-14, 15-29, 30-44, and 45-59. Data in the lowest range (0-14) is switched out and a merge command is run. Edit the script to try the different merge boundaries, and edit the variables at the start ('Data Drive' and 'Log Drive' paths) to suit the runtime environment. Variables are redeclared, but commented out, at the start of code blocks to allow stepping through if desired.
--=================================================================================
-- PartitionLabSetup_20140330.sql - TAKES ABOUT 1 MINUTE TO EXECUTE
-- Creates a test database (workspace)
-- Adds file groups and files
-- Creates partition functions and schemes
-- Creates and populates 2 partitioned tables (PartitionedRight & PartitionedLeft)
[Code] ....
The T-SQL code below illustrates one of the problems caused by MERGE at the wrong boundary point. File Group 3 of the Range Right table is empty according to the data space views, yet it cannot be dropped. File Group 2 contains data according to the views, but you are allowed to drop its file.
USE workspace;
DROP TABLE dbo.PartitionedRightOut;
USE master;
ALTER DATABASE workspace
REMOVE FILE PartitionedRight_f3 ;
--Msg 5042, Level 16, State 1, Line 2
--The file 'PartitionedRight_f3 ' cannot be removed because it is not empty.
ALTER DATABASE workspace
REMOVE FILE PartitionedRight_f2 ;
-- Surprisingly, this works even though the file group contains data according to the system views.
If the wrong boundary point is used, the system data space views show where the data should be (FG2), not where it actually still is (FG3). You can't tell whether data movement between file groups is pending, and the file group's files are not protected from deletion by the OS.
I'm not sure this is worth raising a Connect item for, but it would be useful to know where data physically resides after a MERGE RANGE and before an INDEX REBUILD; the data space views reflect the logical rather than the physical location when data movement is pending.
View 0 Replies
View Related
May 15, 2014
I have a very large table that I need to partition. Ideally the table will write to three filegroups. I have defined the Partition function and scheme as follows.
CREATE PARTITION FUNCTION vm_Visits_PM(datetime)
AS RANGE RIGHT FOR VALUES ('2012-07-01', '2013-06-01');

CREATE PARTITION SCHEME vm_Visits_PS
AS PARTITION vm_Visits_PM TO (vm_Visits_Data_Archive2, vm_Visits_Data_Archive, vm_Visits_Data);
This should create three partitions of the vm_Visits table. I am having a few issues; the first has to do with adding a new clustered primary key to the existing table. The main problem is that the Closed column (a datetime, by the way) is nullable, so running the following makes SQL Server upset:
ALTER TABLE dbo.vm_Visits
ADD CONSTRAINT [PK_vm_Visits] PRIMARY KEY CLUSTERED
(
VisitID ASC,
Closed
)
ON [vm_Visits_PS](Closed)
I need to define a primary key on the VisitID column, but I need to include the Closed column in order to partition on it. Also, how would I move data between partitions on a monthly basis? Would I simply update the partition function, or have to do some sort of MERGE, SPLIT, or SWITCH operation?
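A primary key cannot contain a nullable column, so the usual choices are either to make Closed NOT NULL with a sentinel date, or to keep the key out of the partitioning entirely: partition the data with a non-unique clustered index on Closed (non-unique indexes may contain NULLs) and enforce the key with a nonclustered primary key. A hedged sketch of the second option:

-- Partition the data via a non-unique clustered index on Closed;
-- NULLs are allowed because the index is not unique.
CREATE CLUSTERED INDEX CIX_vm_Visits_Closed
    ON dbo.vm_Visits (Closed)
    ON vm_Visits_PS (Closed);

-- Enforce uniqueness with a nonclustered primary key on VisitID.
-- Placing it on [PRIMARY] makes it non-aligned, which blocks
-- partition SWITCH on this table; weigh that trade-off.
ALTER TABLE dbo.vm_Visits
    ADD CONSTRAINT PK_vm_Visits PRIMARY KEY NONCLUSTERED (VisitID)
    ON [PRIMARY];

As for monthly movement: rows only move when boundaries change. SPLIT adds a boundary at the leading edge, MERGE removes one at the trailing edge, and SWITCH transfers whole partitions as a metadata operation.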
View 2 Replies
View Related
Aug 25, 2014
We have a database with 6-7 growing tables. All the tables have primary and foreign key relationships. I want to partition them based on the date column.
I need 3 partitions
First partition has to hold present data
second partition need to hold the previous year data (SAS storage)
Third partition need to hold all the old data and need to be in the archive database
I understand that first we need to disable the constraints (indexes, PK & FK),
then create the partition function and partition scheme,
and then create the constraints again.
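A minimal sketch of the function and scheme, assuming hypothetical names and yearly boundaries (the archive filegroup could sit on slower storage):

-- Two boundaries give three partitions: all old data, the
-- previous year, and the present data (names hypothetical).
CREATE PARTITION FUNCTION pf_DataAge (date)
AS RANGE RIGHT FOR VALUES ('2013-01-01', '2014-01-01');

CREATE PARTITION SCHEME ps_DataAge
AS PARTITION pf_DataAge
TO ([FG_Archive], [FG_PreviousYear], [FG_Current]);

One caveat for the third requirement: a partition cannot live in a different database, so moving the oldest data into an archive database means switching it out to a staging table and transferring it, not partitioning alone.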
View 9 Replies
View Related
Dec 5, 2005
Running 2005 Beta 3 Refresh. When I first deploy, it works fine. Subsequent deployments yield the following error:
View 9 Replies
View Related
May 14, 2015
I am using SQL 2012 SE, and we want to use Couchbase to store some data as documents, so the database will be hybrid (partly SQL Server and partly Couchbase). The database is still in the design phase. What should we keep in mind when designing this database? Our previous, completely SQL Server-based (RDBMS) database used GUIDs everywhere, generated with NEWID(), and the prime goal is to get rid of GUIDs for the most part.
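One hedged pattern for weaning a schema off NEWID() keys: use a narrow bigint identity for all internal joins, and keep a GUID only as an alternate key where the document store needs a globally unique reference (all names hypothetical):

-- Internal joins use CustomerID; the GUID survives only as an
-- alternate key for documents held in Couchbase. NEWSEQUENTIALID()
-- avoids the page splits that random NEWID() values cause.
CREATE TABLE dbo.Customer (
    CustomerID   bigint IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_Customer PRIMARY KEY,
    DocumentGuid uniqueidentifier NOT NULL
        CONSTRAINT DF_Customer_DocumentGuid DEFAULT NEWSEQUENTIALID()
        CONSTRAINT UQ_Customer_DocumentGuid UNIQUE,
    CustomerName nvarchar(100) NOT NULL
);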
View 6 Replies
View Related
Jul 27, 2015
I have never created a table on the basis of a data model diagram. I have to create 3 tables based on the given data model diagram. There are 3 tables:
1.md_geographyleveltype
2.md_geographylevel
3.md_geographylevelxref
I have tried to create 2 of the tables but have been unable to create the 3rd. I need the scripts for the first 2 tables reviewed, and a create-table script written for the 3rd.
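Without the diagram the exact columns are unknown, but an xref table typically resolves a many-to-many relationship between the other two tables; a purely hypothetical sketch of the third table, assuming md_geographylevel has a geographylevel_id key:

-- Hypothetical structure: the xref table links geography levels,
-- referencing md_geographylevel on both sides of the relationship.
CREATE TABLE dbo.md_geographylevelxref (
    geographylevelxref_id int IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_md_geographylevelxref PRIMARY KEY,
    parent_geographylevel_id int NOT NULL
        CONSTRAINT FK_xref_parent REFERENCES dbo.md_geographylevel (geographylevel_id),
    child_geographylevel_id int NOT NULL
        CONSTRAINT FK_xref_child REFERENCES dbo.md_geographylevel (geographylevel_id)
);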
View 5 Replies
View Related
Aug 12, 2015
Been practicing DR strategies with a test SQL instance by following the scenarios listed here: [URL] ....
> Took a backup of the Model database
> Stopped SQL Server
> Deleted model database data & log file
> Started SQL Server and it obviously wouldn't start because TempDB needs a model database present.
> Started SQL instance with trace flags 3608 & 3609
> Connected to SQL instance using command prompt.
> Issued restore command but was met with this error:
Shared Memory Provider: The pipe has been ended.
Communication link failure
And I found this in the SQL log:
2015-08-12 16:21:32.83 spid51 Starting up database 'tempdb'.
2015-08-12 16:21:36.88 spid51 Error: 3456, Severity: 21, State: 1.
2015-08-12 16:21:36.88 spid51 Could not redo log record (59:136:21), for transaction ID (0:0), on page (1:20), allocation unit 458752, database 'tempdb' (database ID 2). Page: LSN = (30:165:3), allocation unit = 458752, type = 1.
[Code] .....
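For reference, a commonly cited recovery sequence for a lost model database is to start the instance with trace flag 3608, which recovers only master (so nothing tries to create tempdb from the missing model), then restore model; a minimal sketch with a hypothetical backup path:

-- From an elevated command prompt (shown as comments):
--   NET STOP MSSQLSERVER
--   NET START MSSQLSERVER /T3608
--   SQLCMD -S . -E
-- Then, in the SQLCMD session:
RESTORE DATABASE model
FROM DISK = N'C:\Backups\model.bak'   -- hypothetical path
WITH REPLACE;
-- Restart the instance normally; tempdb is rebuilt from the restored model.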
View 9 Replies
View Related
Jul 9, 2015
I have an existing project to which I have added a simple text file. I am using the project deployment model for this project. I save the project, close it, reopen it, and the file is there under the Miscellaneous folder. I successfully deployed the project to the server. However, when I retrieve the project using the Integration Services Import Project Wizard, all of my package modifications are there and the packages are up to date, but the txt file I added to the Miscellaneous folder is not.
View 1 Replies
View Related
Nov 24, 2014
How do I write stored procedures to load the data model from OLTP to the DWH?
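As a starting point, a hedged skeleton of one common pattern: a per-dimension procedure that upserts changed rows from the OLTP source using MERGE (all object and column names hypothetical):

CREATE PROCEDURE dwh.usp_LoadDimCustomer
AS
BEGIN
    SET NOCOUNT ON;

    -- Upsert: update changed attributes, insert brand-new customers.
    MERGE dwh.DimCustomer AS tgt
    USING (SELECT CustomerID, CustomerName, City
           FROM oltp.Customer) AS src
        ON tgt.CustomerBK = src.CustomerID
    WHEN MATCHED AND (tgt.CustomerName <> src.CustomerName
                   OR tgt.City <> src.City) THEN
        UPDATE SET tgt.CustomerName = src.CustomerName,
                   tgt.City = src.City
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerBK, CustomerName, City)
        VALUES (src.CustomerID, src.CustomerName, src.City);
END;

Fact-load procedures then look up the dimension surrogate keys and insert only rows newer than the last load's watermark.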
View 9 Replies
View Related
Oct 19, 2015
I need to develop a language-specific DWH, meaning that descriptions of products are available from a SAP system in multiple languages. English is the most important language and is the standard, but some countries also require product descriptions in their own language.
Productnr Productdesc Language
1 product EN
1 produkt DE
One option is to add a column per language, but that is not very elegant. I was thinking of using bridge tables to model this, but then you always have to select a language in a filter (I think).
I'm thinking of a technical solution such that when a user logs on, the language is determined and a view picks the product data specific to that language. But then I don't have the opportunity to interchange the different language-specific fields in a report (or in my case PowerPivot).
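One hedged alternative is a translation table keyed on product and language, plus a view that falls back to English when no localized description exists (names hypothetical, and the 'DE' literal would be resolved per user/session):

CREATE TABLE dbo.ProductDescription (
    Productnr    int           NOT NULL,
    LanguageCode char(2)       NOT NULL,  -- 'EN', 'DE', ...
    Productdesc  nvarchar(200) NOT NULL,
    CONSTRAINT PK_ProductDescription PRIMARY KEY (Productnr, LanguageCode)
);
GO

-- Resolve each product's description in the requested language,
-- falling back to the English standard when none exists.
CREATE VIEW dbo.vProductDesc
AS
SELECT p.Productnr,
       COALESCE(loc.Productdesc, en.Productdesc) AS Productdesc
FROM (SELECT DISTINCT Productnr FROM dbo.ProductDescription) AS p
LEFT JOIN dbo.ProductDescription AS loc
    ON loc.Productnr = p.Productnr AND loc.LanguageCode = 'DE'
LEFT JOIN dbo.ProductDescription AS en
    ON en.Productnr = p.Productnr AND en.LanguageCode = 'EN';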
View 2 Replies
View Related
Apr 25, 2008
We have the following scenario,
We have our Production server having database on which Few DTS packages execute every night. Most of them have Bulk Insert stored procedures running.
So we have to set the recovery model of the database to SIMPLE for that period of time; otherwise it will blow up our logs.
Is there any way we can set up log shipping between our production and standby servers, but pause it for some time: set the recovery model of the primary DB to SIMPLE, execute the DTS bulk insert jobs, bring it back to the FULL recovery model, and finally resume log shipping?
Is it possible? If yes, how can we achieve it?
If not, what could be another DR solution in this scenario?
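Worth noting: switching to SIMPLE breaks the log backup chain, so log shipping would have to be re-initialized from a full backup every time. Switching from FULL to BULK_LOGGED instead keeps the chain intact while the bulk inserts are minimally logged; a hedged sketch with a hypothetical database name:

-- Before the nightly bulk loads: minimal logging, log chain preserved.
ALTER DATABASE ProdDB SET RECOVERY BULK_LOGGED;

-- ... run the DTS bulk insert jobs here ...

-- After the loads: back to FULL, then back up the log so log
-- shipping continues from a consistent point.
ALTER DATABASE ProdDB SET RECOVERY FULL;
BACKUP LOG ProdDB TO DISK = N'C:\Backups\ProdDB_log.trn';  -- hypothetical path

The trade-off is that a log backup containing bulk-logged operations cannot be restored to an arbitrary point in time, only to its end.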
Thanks Much
Tejinder
View 6 Replies
View Related
Oct 27, 2007
Hi all,
I have an MS Time Series model using a database of over a thousand products, each of which has hundreds of cases. It amazingly takes only a few minutes to finish processing the model, but when I click Mining Model Viewer to view the models, it takes many hours for them to show up. Once the window is open, I can choose the model for different products almost instantly. Is this normal?
View 1 Replies
View Related
Aug 10, 2005
Hi!! I have a question about the connected and disconnected models for accessing a SQL Server DB. I know it is better to choose one over the other in some situations, and that no model is better in all cases, so I hope you can help me decide which to choose. I will use the DB to connect to web services, read data from it, and write some data back. I don't know which to use; I hope you can advise me on the rules that determine which model to choose. I appreciate your help! Thanks!!
View 2 Replies
View Related