Hello all,
I was wondering if anyone else has run into this and, if so, how you got around it.
In a nutshell, the SQL optimizer is NOT pruning the additional partitions from the execution plan, as would be expected, when applying a constraint directly against the partitioned table's partition key. Instead it's scanning every partition that you have set up in your partition function range. Yet when you apply the actual value against the table, the plan returns as expected.
Hmm.... strange......ghost...ooooooo?
I have created a simple test to reproduce:
Code Snippet
CREATE PARTITION FUNCTION [PTFunction](int) AS RANGE LEFT FOR VALUES (1,2,3)
GO
CREATE PARTITION SCHEME [PTDataScheme] AS PARTITION [PTFunction] TO ([PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY])
GO
CREATE TABLE tblPartitionTest(
ID int identity(1,1) ,
PartitionKey int,
Sales money)
ON PTDataScheme(PartitionKey)
GO
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,10.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,20.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,30.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,40.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,50.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,10.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,20.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,30.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,40.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,50.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,10.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,20.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,30.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,40.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,50.00);
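The post doesn't include the final SELECTs, but the comparison was presumably something like the following pair (my reconstruction, assuming the "constraint" was a local variable). On SQL Server 2005 at least, the literal shows static partition elimination in the plan, while the variable produces a compiled plan that enumerates every partition in the function's range, even though rows still come only from the matching one:
Code Snippet
-- Literal against the partition key: the plan touches one partition
SELECT SUM(Sales) FROM tblPartitionTest WHERE PartitionKey = 2;

-- Variable against the partition key: the compiled plan lists all partitions
DECLARE @pk int;
SET @pk = 2;
SELECT SUM(Sales) FROM tblPartitionTest WHERE PartitionKey = @pk;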
I want to find a way to get partition info for all the tables in all the databases on a server, showing: database name, table name, schema name, what the table is partitioned by (maybe year, month, day, number, alpha), the column used for partitioning, the current active partition, and the last partition (for date partitions I want to know if the partitions go until 2007, so I can add 2008).
all I've come up with so far is:
Code Block
SELECT DISTINCT o.name
FROM sys.partitions p
INNER JOIN sys.objects o ON (o.object_id = p.object_id)
WHERE o.type_desc = 'USER_TABLE'
AND p.partition_number > 1
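As a sketch of how the rest of that report could be pulled from the catalog views in one database (SQL 2005+; to cover every database you could wrap it in something like sp_MSforeachdb), assuming the tables have aligned clustered indexes:
Code Snippet
-- One row per table per partition, with the partitioning column,
-- row counts, and the range boundary value where one exists
SELECT DB_NAME()   AS database_name,
       s.name      AS schema_name,
       t.name      AS table_name,
       c.name      AS partitioning_column,
       p.partition_number,
       p.rows,
       prv.value   AS boundary_value   -- lines up for RANGE LEFT; off by one for RANGE RIGHT
FROM sys.tables t
JOIN sys.schemas s            ON s.schema_id = t.schema_id
JOIN sys.indexes i            ON i.object_id = t.object_id AND i.index_id IN (0, 1)
JOIN sys.partition_schemes ps ON ps.data_space_id = i.data_space_id
JOIN sys.index_columns ic     ON ic.object_id = i.object_id
                             AND ic.index_id = i.index_id
                             AND ic.partition_ordinal > 0
JOIN sys.columns c            ON c.object_id = t.object_id AND c.column_id = ic.column_id
JOIN sys.partitions p         ON p.object_id = t.object_id AND p.index_id = i.index_id
LEFT JOIN sys.partition_range_values prv
                              ON prv.function_id = ps.function_id
                             AND prv.boundary_id = p.partition_number
ORDER BY t.name, p.partition_number;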
I am implementing table partitioning on our database with T-SQL. At the moment (it is under development) the data are correctly located in the relevant filegroups. Our target is that the oldest partitions/filegroups can be backed up and removed from the database, to reduce the size of the DB (a time period is used for partitioning). Then, if the need arises, we restore the filegroup for reporting or analysis. Take into account that data are continuously added, and thus new filegroups are added to represent the new time periods (e.g. a new filegroup for the new month). Based on your experience, is a solution like that possible?
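Broadly yes; the usual pattern is to mark old filegroups read-only, back each up once, and rely on partial (piecemeal) restore to bring one back when needed. A rough sketch with assumed names (the full sequence may also involve log backups to bring the restored filegroup consistent, so test the whole cycle before relying on it):
Code Snippet
-- Freeze and back up a historical filegroup (names assumed)
ALTER DATABASE SalesDB MODIFY FILEGROUP FG_2006_01 READ_ONLY;
BACKUP DATABASE SalesDB
    FILEGROUP = 'FG_2006_01'
    TO DISK = 'E:\Backups\SalesDB_FG_2006_01.bak';

-- Later, restore just that filegroup for reporting or analysis
RESTORE DATABASE SalesDB
    FILEGROUP = 'FG_2006_01'
    FROM DISK = 'E:\Backups\SalesDB_FG_2006_01.bak'
    WITH RECOVERY;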
When do partitioned tables/indexes become beneficial? When a table has several million rows? Hundreds of millions of rows?
My tables all have clustered indexes based on the bigint identity PK. I am considering partitioning some of the larger tables by year. If the field I want to use is not part of the current clustered index, does that mean I can't use CREATE INDEX to create my partitions? Do I need to create an empty table for each year and then use ALTER TABLE ... SWITCH? I have header/detail/sub-detail tables; as long as I create the partition function on a similar date field in each, will the partitions be able to be joined? How do I ensure my indexes will be aligned? Once I set up the partitions, I assume new rows will be stored in the proper partitions based on the value of the date field. (See the sketch after this post.)
I've read BOL, etc.; they are good sources for theory, but I need a "Building Partitions for Dummies" type paper with step-by-step explanations. Is there anything out there like that?
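On the mechanics, a minimal sketch (all names assumed): the common way to partition an existing table without ALTER TABLE ... SWITCH is to rebuild its clustered index onto the partition scheme. Indexes built on the same scheme (or an equivalent function) using the same column are aligned by definition. Note that a unique partitioned index must include the partitioning column in its key, and if the clustered index backs a PRIMARY KEY constraint you would drop and recreate the constraint rather than use DROP_EXISTING:
Code Snippet
-- Assumed yearly function/scheme on the OrderDate column
CREATE PARTITION FUNCTION pfYear (datetime)
AS RANGE RIGHT FOR VALUES ('20060101', '20070101', '20080101');
CREATE PARTITION SCHEME psYear AS PARTITION pfYear ALL TO ([PRIMARY]);
GO
-- Rebuild the existing clustered index onto the scheme; adding OrderDate
-- to the key keeps the index unique and partition-aligned
CREATE UNIQUE CLUSTERED INDEX CIX_OrderHeader
ON dbo.OrderHeader (OrderHeaderID, OrderDate)
WITH (DROP_EXISTING = ON)
ON psYear (OrderDate);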
We are using partitions and all the tables are properly aligned on the partition keys. A deadlock appears when this particular stored procedure, which inserts data into a table from a different table based on the partition keys, is called from a Web UI where threading has been applied.
Let me make it more clear.
ThreadOne: Insert into table A(partitionKey,BatchId,...) select * from table B where partitionkey = 1
ThreadTwo: Insert into table A(partitionKey,BatchId,...) select * from table B where partitionkey = 2
Sometimes this procedure hits a deadlock. I'm not sure of the reason; as far as I can guess, since the tables are partitioned and lock escalation is set to Auto, the deadlock should not occur.
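For what it's worth, partition-level escalation is opt-in per table in SQL Server 2008+, and it reduces, but does not eliminate, the chance that two threads touching different partitions block each other at the table level. A sketch, using the table name from the pseudo-code above:
Code Snippet
-- Allow escalation to the partition (HoBT) level instead of the whole table
ALTER TABLE dbo.A SET (LOCK_ESCALATION = AUTO);

-- Verify the current setting
SELECT name, lock_escalation_desc
FROM sys.tables
WHERE name = 'A';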
I apologize in advance if this is something obvious I've missed ... fresh eyes/brain and all that.
If I have a table that is using a particular partition scheme/function, is there a quick and easy way to determine which column of that table is being used for partitioning? We're examining a number of legacy structures and we're hoping to reduce the time it's going to take us to get the report management wants.
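Yes; in SQL 2005+ the partitioning column is the index column flagged with partition_ordinal > 0. A quick sketch:
Code Snippet
-- The partitioning column of every partitioned table in the database
SELECT t.name AS table_name,
       i.name AS index_name,
       c.name AS partitioning_column
FROM sys.tables t
JOIN sys.indexes i        ON i.object_id = t.object_id AND i.index_id IN (0, 1)
JOIN sys.index_columns ic ON ic.object_id = i.object_id
                         AND ic.index_id = i.index_id
                         AND ic.partition_ordinal > 0
JOIN sys.columns c        ON c.object_id = t.object_id AND c.column_id = ic.column_id;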
We have a partitioned view with 4 underlying tables. The view and each of the underlying tables are in separate databases on the same server. Inserts and deletes on the view work fine. We then add insert and delete triggers to each of the underlying tables. The triggers modify a different set of tables in the same database as the view (different than the underlying table). The problem is those triggers aren't fired when inserting or deleting via the view. Inserting or deleting on the underlying table directly causes the triggers to fire, but not when the tables are accessed as a result of using the view. Am I missing something? The triggers are 'for insert' and 'for delete'. No 'instead of' or 'after' triggers.
My database's design is set out here. In summary, I'm trying to model a stock exchange for a technical analysis application written using Visual C++. In order to create the hierarchy I'm using a Nested Set Model. I'm now trying to write code to add and delete equities (or, more generically, nodes) in the database using a form presented to the user in my application. I have example SQL code to create the necessary add and delete procedures that calculate the changes to the values in the lft and rgt columns, but these examples focus on a single table, whereas my design aggregates rows from multiple tables using UNION ALL:
Code Snippet
CREATE VIEW vw_NSM_DBHierarchy -- Nested Set Model Database Hierarchy
AS
SELECT clmStockExchange, clmLeft, clmRight FROM tblStockExchange_
UNION ALL
SELECT clmMarkets, clmLeft, clmRight FROM tblMarkets_
UNION ALL
SELECT clmSectors, clmLeft, clmRight FROM tblSectors_
UNION ALL
SELECT clmEPIC, clmLeft, clmRight FROM tblEquities_
Essentially, I'm trying to create an updatable view, but I receive the error "UNION ALL view is not updatable because a partitioning column was not found". I suspect that my design is wrong or lacking and this problem is highlighting the design flaws, so any suggestions would be greatly appreciated.
I have a query that joins two large partitioned tables and depending on the values in the where clause, I can get dramatically different performance results.
The first query completed in around 7s and has 47,000 logical reads.
select mo.monitor_id,
mo.site_id,
mo.testtime,
sum(mo.NumBytes),
sum(mo.DNSTime),
sum(mo.ConnectTime),
sum(mo.FirstByteTime),
sum(mo.ContentTime),
sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (5339, 5341, 5342, 943842, 943866)
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id group by mo.monitor_id, mo.site_id, mo.testtime
The second query takes 188s to complete and has 1.8m logical reads. The only difference between the two is the value of the monitor_ids in the where clause.
select mo.monitor_id,
mo.site_id,
mo.testtime,
sum(mo.NumBytes),
sum(mo.DNSTime),
sum(mo.ConnectTime),
sum(mo.FirstByteTime),
sum(mo.ContentTime),
sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (152682, 5339, 5341, 5342, 268080)
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id group by mo.monitor_id, mo.site_id, mo.testtime
The two tables have clustered indexes on monitor_id, testtime and site_id. Comparing the execution plan, I can see why there is such a difference in performance. The second query performs a clustered index seek on the monitor_object table starting at the lowest monitor_id, testtime & site_id through the highest monitor_id, testtime & site_id. The first query performs a clustered index seek where the monitor_id, testtime and site_id equals the same values from the monitor_raw table.
My question is, how can I force the second query to use the same execution plan as the first so that I can get better performance?
One possible workaround that I could use is to execute five individual queries, one for each monitor_id and then union the results together but this would require significant code changes to my stored procs.
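One middle ground that avoids five separate queries (a sketch, not tested against your schema): materialize the IN list into a temp table and hint a loop join, so the engine does one clustered-index seek per monitor_id instead of a single seek spanning the whole monitor_id range. Join hints also freeze the join order, so compare plans carefully:
Code Snippet
CREATE TABLE #ids (monitor_id int PRIMARY KEY);
INSERT INTO #ids VALUES (152682);
INSERT INTO #ids VALUES (5339);
INSERT INTO #ids VALUES (5341);
INSERT INTO #ids VALUES (5342);
INSERT INTO #ids VALUES (268080);

-- One seek per monitor_id, driven by the small #ids table
SELECT mo.monitor_id, mo.site_id, mo.testtime,
       SUM(mo.NumBytes) AS NumBytes   -- remaining SUMs as in the original
FROM #ids i
INNER LOOP JOIN monitor_raw mr
    ON mr.monitor_id = i.monitor_id
INNER LOOP JOIN monitor_object mo
    ON mo.monitor_id = mr.monitor_id
   AND mo.testtime   = mr.testtime
   AND mo.site_id    = mr.site_id
WHERE mr.testtime BETWEEN 'Oct 31 2007 3:00:00:000PM' AND 'Nov 30 2007 3:00:00:000PM'
  AND mo.returncode = 200
  AND mr.escalationlevel = 0
GROUP BY mo.monitor_id, mo.site_id, mo.testtime;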
I was just wondering if something could be explained to me.
I have the following:
1. A table which has fields with data types and lengths/sizes.
2. A stored procedure for said table which also declares variables with data types and lengths/sizes.
3. A function written in VB.NET that uses said stored procedure. The code used to add the parameters to the SQL command also requires that I give a data type and size.
How come I need to specify the data type and length in three different places? Am I doing it wrong?
Any information is greatly appreciated.
Thanks
I'm using SQL Server 2000 with Visual Studio .NET, using Visual Basic.
Hi, I have 2 tables with more than a million records in each, and I have to perform a full outer join. The problem is that the join clause contains 2 different parameters (an int and a string), like this:
Select * From a full outer join b On a.cli = b.cli OR a.reference = b.reference
Because of the OR in the clause and the million records, the query runs forever. If I change it to one rule only, then it works fine.
How can I join these 2 big tables with 2 rules? Thanks, Itay
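A common workaround (sketched below against hypothetical minimal tables, since the real column lists aren't shown) is to decompose the OR: take the matched pairs as a UNION of two index-friendly equi-joins, then add the unmatched rows from each side; together these reproduce the FULL OUTER JOIN semantics. Note the UNION also collapses exact duplicate rows, so include key columns from both sides:
Code Snippet
-- Hypothetical minimal schema for illustration
CREATE TABLE #a (cli int, reference varchar(20), payload_a varchar(10));
CREATE TABLE #b (cli int, reference varchar(20), payload_b varchar(10));

-- Matched pairs: two seekable equi-joins instead of one OR join
SELECT a.cli AS a_cli, a.reference AS a_ref, a.payload_a,
       b.cli AS b_cli, b.reference AS b_ref, b.payload_b
FROM #a a JOIN #b b ON a.cli = b.cli
UNION
SELECT a.cli, a.reference, a.payload_a, b.cli, b.reference, b.payload_b
FROM #a a JOIN #b b ON a.reference = b.reference

-- Rows only in #a (De Morgan: no cli match AND no reference match)
UNION ALL
SELECT a.cli, a.reference, a.payload_a, NULL, NULL, NULL
FROM #a a
WHERE NOT EXISTS (SELECT * FROM #b b WHERE b.cli = a.cli)
  AND NOT EXISTS (SELECT * FROM #b b WHERE b.reference = a.reference)

-- Rows only in #b
UNION ALL
SELECT NULL, NULL, NULL, b.cli, b.reference, b.payload_b
FROM #b b
WHERE NOT EXISTS (SELECT * FROM #a a WHERE a.cli = b.cli)
  AND NOT EXISTS (SELECT * FROM #a a WHERE a.reference = b.reference);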
I'm trying to list out a multi-value parameter in a table or list. Is this possible?
So far I've tried the simple do-it-the-same-way-as-a-dataset approach, and that doesn't work because it comes back with an error saying I need a dataset in my table.
I then tried creating a dataset that's identical to my multi-value parameter, but I was stumped when it came to creating the correct SQL. I am able to pull the last value of my parameter into a dataset with a simple SELECT @MyParam query, but that's not going to cut it...
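If this is Reporting Services, the usual trick is to pass =Join(Parameters!MyParam.Value, ",") into a string parameter and split it server-side. A sketch of a splitting helper (hypothetical name; SQL 2000-compatible loop):
Code Snippet
-- Hypothetical helper: split a comma-delimited string into rows
CREATE FUNCTION dbo.fn_SplitList (@list varchar(8000))
RETURNS @items TABLE (Item varchar(255))
AS
BEGIN
    DECLARE @pos int;
    WHILE LEN(@list) > 0
    BEGIN
        SET @pos = CHARINDEX(',', @list);
        IF @pos = 0
        BEGIN
            INSERT INTO @items (Item) VALUES (LTRIM(RTRIM(@list)));
            SET @list = '';
        END
        ELSE
        BEGIN
            INSERT INTO @items (Item) VALUES (LTRIM(RTRIM(LEFT(@list, @pos - 1))));
            SET @list = SUBSTRING(@list, @pos + 1, LEN(@list));
        END
    END
    RETURN;
END
GO
-- Dataset query; pass =Join(Parameters!MyParam.Value, ",") as @MyParam
SELECT Item FROM dbo.fn_SplitList(@MyParam);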
I am learning T-SQL and SQL queries, have limited VB knowledge, and have some simple queries to run on a table with parameters; I would like verification of the proposed methodology and suggestions. Simply put, I have a [Transactions] table with columns [Price], [Ticker], [TransDate], [TransType] and calculated columns for [Days] and [Profit]. There are two parameters: [@Dys] (to query the table for transactions within a certain period, [Days = 30]) and [@TT] (to query only the closed transactions, i.e. [TransType='C']). I have been studying stored procedures and will be writing one, but need verification that the following will work. Getting the SUM and AVG calculations for the fields above is not a problem, but I also need to display SUM and AVG information for those transactions where [Profit > 0] and [Profit < 0], which is easy enough by creating a subquery. But the problem is:
1. If I use a subquery for [Profit < 0] and for [Profit > 0], can I create an alias for [Count(*)] (to get a row or transaction count for each), and then divide that into the total [Count(*)] alias for the Transactions table to get a value for % profitable, or probability (% total profitable trades versus % total unprofitable trades)?
2. Or do I need to create either temporary tables or views to have 3 distinct tables (1 table for Transactions and 2 temp tables or views for [Profit > 0] and [Profit < 0])?
Any suggestions and advice or examples on how to do this would be appreciated. Craig
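On question 1: you don't necessarily need subqueries, temp tables, or views at all; CASE expressions inside the aggregates can compute all three populations in one pass. A sketch against the columns described (untested; names assumed to match):
Code Snippet
DECLARE @Dys int, @TT char(1);
SET @Dys = 30;
SET @TT = 'C';

SELECT COUNT(*)                                        AS TotalTrades,
       SUM([Profit])                                   AS TotalProfit,
       AVG([Profit])                                   AS AvgProfit,
       SUM(CASE WHEN [Profit] > 0 THEN 1 ELSE 0 END)   AS WinCount,
       AVG(CASE WHEN [Profit] > 0 THEN [Profit] END)   AS AvgWin,   -- AVG ignores NULLs
       SUM(CASE WHEN [Profit] < 0 THEN 1 ELSE 0 END)   AS LossCount,
       AVG(CASE WHEN [Profit] < 0 THEN [Profit] END)   AS AvgLoss,
       -- NULLIF guards against division by zero when no rows qualify
       100.0 * SUM(CASE WHEN [Profit] > 0 THEN 1 ELSE 0 END)
             / NULLIF(COUNT(*), 0)                     AS PctProfitable
FROM [Transactions]
WHERE [Days] <= @Dys
  AND [TransType] = @TT;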
I've started to explore Distributed Partitioned Views in order to use them in the next project, and I've found the article "MS SQL Server Distributed Partitioned Views" by Don Schlichting.
I came across the following problem while running this sample:
Code Snippet
USE test
GO
CREATE VIEW AllAuthors AS
SELECT * FROM AuthorsAM, TEST1.test.dbo.AuthorsNZ
GO
I got the error message:
Server: Msg 4506, Level 16, State 1, Procedure AllAuthors, Line 5
Column names in each view or function must be unique. Column name 'au_lname' in view or function 'AllAuthors' is specified more than once.
Could anyone please explain? Can't I use the same column names in both tables?
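For what it's worth, the error arises because the comma in the FROM clause cross-joins the two tables in a single SELECT, which puts both tables' au_lname into one column list. A distributed partitioned view combines its members with UNION ALL, one SELECT per table; presumably the article intended something like:
Code Snippet
CREATE VIEW AllAuthors AS
SELECT * FROM AuthorsAM
UNION ALL
SELECT * FROM TEST1.test.dbo.AuthorsNZ
GO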
I would like to break up a very large table into about ten smaller ones. For partitioning to be efficient, the columns in the check constraint need to be used when accessing the view. The problem is the table has a composite primary key made up of LocationID/ProductID, with another composite index on ProductID/LocationID. This is accessed both ways by our applications.
I would like to partition the table by LocationID. But then when queried by ProductID, a scan of all tables in the view would have to be done.
In Oracle there is something called a global index that would solve this. Is there anything similar in SQL Server or does anybody have a work around?
Hi! This is my first post and I really need help with partitioned views. I'm using SQL Server 2000 and I created a partitioned view using 6 tables; now I need to create table '7' and alter the view. But when I try to insert new data I receive the message:
Server: Msg 4416, Level 16, State 5, Line 1
UNION ALL view 'tb_sld_cob_pap' is not updatable because the definition contains a disallowed construct.
My code is:
drop VIEW tb_sld_cob_pap
GO
CREATE TABLE dbo.tb_sld_cob_pap_7 (
cod_operacao int NOT NULL ,
cod_contrato int NOT NULL ,
sequencial_duplicata int NOT NULL ,
data_sld_pap smalldatetime NOT NULL CHECK ([data_sld_pap] >= '20060201'),
liqex_dia_nom_outros float NULL ,
liqex_dia_moe_outros float NULL ,
constraint pk_pap7 primary key (cod_operacao, cod_contrato, sequencial_duplicata, data_sld_pap)
)
GO
CREATE INDEX IdxSldCobPap7_1 ON dbo.tb_sld_cob_pap_7 (cod_titulo, seq_titulo, data_sld_pap)
GO
CREATE INDEX IdxSldCobPap7_2 ON dbo.tb_sld_cob_pap_7 (cod_operacao, seq_ctr_sacado, sequencial_duplicata, data_sld_pap)
GO
ALTER TABLE dbo.tb_sld_cob_pap_6 DROP CONSTRAINT CK__tb_sld_co__data___6C190EBB
GO
ALTER TABLE dbo.tb_sld_cob_pap_6 ADD CONSTRAINT CK__tb_sld_co__data___6C190EBB CHECK ([data_sld_pap] >= '20051201' AND [data_sld_pap] < '20060201')
GO
create VIEW tb_sld_cob_pap as
select * from tb_sld_cob_pap_1 union all
select * from tb_sld_cob_pap_2 union all
select * from tb_sld_cob_pap_3 union all
select * from tb_sld_cob_pap_4 union all
select * from tb_sld_cob_pap_5 union all
select * from tb_sld_cob_pap_6 union all
select * from tb_sld_cob_pap_7
My table tb_sld_cob_pap_6 does NOT have data with [data_sld_pap] >= '20060201'. I'm using this script in another database and I don't have this problem there.
CREATE INDEX myTable99_1_IX ON MyTable99_1 (Account, Ledger)
CREATE INDEX myTable99_2_IX ON MyTable99_2 (Account, Ledger)
CREATE INDEX myTable99_3_IX ON MyTable99_3 (Account, Ledger)
GO
CREATE VIEW myView99
AS
SELECT Account, Ledger, PostDate FROM myTable99_1
UNION ALL
SELECT Account, Ledger, PostDate FROM myTable99_2
UNION ALL
SELECT Account, Ledger, PostDate FROM myTable99_3
GO
SELECT * FROM myView99 WHERE Account = 1 AND Ledger = 1
GO
DROP VIEW myView99
DROP TABLE myTable99_1, myTable99_2, myTable99_3
GO
OK, so I thought I knew this, but I'm looking for parallelism... not only am I not getting it, I'm getting an index scan... is it because I didn't put any data in the tables? I thought it would still show my index seek with parallelism.
I'm running SQL Server 2000 Enterprise Edition on Windows 2000 and I need to know how to create a partitioned table. Please give me a small example.
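SQL Server 2000 doesn't have native partitioned tables (those arrived in 2005); the closest equivalent is a partitioned view over member tables whose CHECK constraints carve up the key range. A small sketch:
Code Snippet
CREATE TABLE Sales2006 (
    SaleID   int NOT NULL,
    SaleYear int NOT NULL CHECK (SaleYear = 2006),
    Amount   money NOT NULL,
    CONSTRAINT PK_Sales2006 PRIMARY KEY (SaleID, SaleYear)
)
CREATE TABLE Sales2007 (
    SaleID   int NOT NULL,
    SaleYear int NOT NULL CHECK (SaleYear = 2007),
    Amount   money NOT NULL,
    CONSTRAINT PK_Sales2007 PRIMARY KEY (SaleID, SaleYear)
)
GO
CREATE VIEW Sales AS
SELECT SaleID, SaleYear, Amount FROM Sales2006
UNION ALL
SELECT SaleID, SaleYear, Amount FROM Sales2007
GO
-- With the CHECK constraints in place, this touches only Sales2007
SELECT * FROM Sales WHERE SaleYear = 2007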
This post concerns updating across a partitioned view, and not unlike others about this subject I am getting this error:
Msg 4436, Level 16, State 12, Line 1 UNION ALL view 'dbII.dbo.MyTable' is not updatable because a partitioning column was not found.
I am aware of the rules for defining a partitioning column, but interpreting them may have beaten me, so perhaps I haven't abided by all of them. How can I tell which one(s) I've broken from the view and table definitions below? I suspect the CHECK constraint does not allow the ASCII function, but I can't see how to avoid using it, given that SYSCODE entries in one table are like "[A-Z]%" and in the other are like "[0-9]%".
Otherwise, I suspect it is because one of the tables has, by legacy, a text column and the view is casting it to varchar(MAX). I also suspect it is because there's a second column with a unique index. These aren't mentioned in the rules (are they?).
Here's the view definition:
SELECT SYSCODE, COL2, CAST(COMMENTS AS varchar(MAX)) AS COMMENTS
FROM dbo.MYTABLE
UNION ALL
SELECT SYSCODE, COL2, COMMENTS
FROM OTHERDATABASE.dbo.MYTABLE AS MYTABLE_1
And here are the table definitions:
-- Table in the database where view is defined
CREATE TABLE [dbo].[MYTABLE](
[SYSCODE] [char](12) NOT NULL,
Hello, I want to ask whether inserting a record into a partitioned table is slower than inserting it into a non-partitioned table. Since SQL Server has to decide which partition the record goes to according to the partitioning key, does this decision process make insertion slower?
Hello, I have a large set of data that I have set up as a partitioned view. The view is partitioned by a datetime column and the individual tables each represent one month's worth of data. I need to keep at least two years' worth of data at all times, but after two years I can archive the data. A sample of the code used is below. It is simplified for space reasons.
My question is, how do other people maintain the database in this type of scenario? I could create all of the tables necessary for the next year and then go through that at the end of each year (archive tables over two years, add new tables, and change the view), but I was also thinking that I might be able to write a stored procedure that runs once a month and does all three of those tasks automatically. It seems like a lot of dynamic SQL code for something like that, though. Alternatively, I could write VB code to handle it in a DTS package.
So, my question again is, how are others doing it? Any suggestions? Thanks! -Tom.
CREATE TABLE [dbo].[Station_Events_200401] (
[event_time] [datetime] NOT NULL ,
[another_column] [char] (8) NOT NULL
)
GO
CREATE TABLE [dbo].[Station_Events_200402] (
[event_time] [datetime] NOT NULL ,
[another_column] [char] (8) NOT NULL
)
GO
CREATE VIEW Station_Events
AS
SELECT event_time, another_column
FROM Station_Events_200401
UNION ALL
SELECT event_time, another_column
FROM Station_Events_200402
GO
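One possible shape for the stored-procedure route, sketched with the sample names above (untested; the archive/drop step would follow the same pattern): create next month's member table, then rebuild the view from whatever member tables currently exist:
Code Snippet
DECLARE @suffix char(6), @sql nvarchar(4000), @union nvarchar(4000)

-- 1) Create next month's member table (style 112 = yyyymmdd; char(6) keeps yyyymm)
SET @suffix = CONVERT(char(6), DATEADD(month, 1, GETDATE()), 112)
SET @sql = N'CREATE TABLE dbo.Station_Events_' + @suffix + N' (
    [event_time] [datetime] NOT NULL,
    [another_column] [char](8) NOT NULL)'
EXEC sp_executesql @sql

-- 2) Rebuild the view from every Station_Events_yyyymm table found
SET @union = N''
SELECT @union = @union
    + CASE WHEN @union = N'' THEN N'' ELSE N' UNION ALL ' END
    + N'SELECT event_time, another_column FROM dbo.' + name
FROM sysobjects
WHERE type = 'U'
  AND name LIKE 'Station_Events_[0-9][0-9][0-9][0-9][0-9][0-9]'

SET @sql = N'CREATE VIEW Station_Events AS ' + @union
IF OBJECT_ID('dbo.Station_Events') IS NOT NULL
    EXEC ('DROP VIEW dbo.Station_Events')
EXEC sp_executesql @sql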
I have inserted 200m rows into a partitioned table using SSIS. The table has a [RecID] column which is an identity(1,1) primary key. When I open the table, I see that RecID doesn't start from 1 (it's not ordered); it starts from 889823. But when I query the table for RecID = 1, I can see that row.
Is it a typical behavior of a partitioned table? Or am I doing something wrong?
This is the query I used to create the partitioned table.
I am designing 3 partitioned views for 3 tables. Those tables grow by 1.5 million rows per month (each one), so I decided to partition those tables monthly. The issue is that if I want to create the views with more than 256 months (256 tables), SQL Server says: 'Server: Msg 106, Level 15, State 1, Procedure Jugadas, Line 258 Too many table names in the query. The maximum allowable is 256.'
Is there any workaround for this?
Another solution maybe?
PD1: I've tested with fewer than 256 tables and it works fine; I can update and query the tables (except for a couple of queries where I have to join 2 or more of the involved views, in which case I get a similar error about a 260-table limit).
I have a table that I'm trying to scale out into a partitioned view. It's about 30 million rows. It's a workflow table and I have a taskID in the table. Originally the table was partitioned on this column, but performance still wasn't what I wanted it to be, so we figured out how we could partition on a bit flag of IsOpen.
Question #1) Does anyone know a best practice for creating partitioned views on multiple columns?
What I'd like to try, to lower the complexity of the original partitioned view, is to create a view of partitioned views. Is this even possible? (This is Q#2, BTW.)
Hi everyone, I have some doubts about distributed partitioned views. When we create a distributed partitioned view which includes three servers, do we have to create this same distributed partitioned view on all three servers in order for each server to see and, especially, modify it?
Hi, I am using SQL 2000 Enterprise Edition. I have a partitioned view based on 8 tables. My selects and inserts are fine. But when I run a delete on the view based on a query on the partitioned column, I get a "Transaction (Process ID 149) was deadlocked and has been chosen as a victim". I looked at the query plan and it was showing a parallel query on all the underlying tables. So I put OPTION (MAXDOP 1), using only one processor, and the delete worked fine.
Does anybody know why? Can parallel queries create deadlocks? Are there any known problems with deletes on partitioned views? Same question for updates; I think I have the same problem with updates.
Has anyone had any problems with one-row updates on a table where you have defined horizontal and vertical partitioning of the data to be replicated? When I execute an UPDATE clause that modifies just one row, the log reader misses the modification and it does not get replicated to the other databases.
If I run the same UPDATE clause on several rows, then all the modifications are read by the log reader and the replication task goes OK.
CHECK constraints are not needed for the partitioned view to return the correct results. However, if the CHECK constraints have not been defined, the query optimizer must search all the tables instead of only those that cover the search condition on the partitioning column. Without the CHECK constraints, the view operates like any other view with UNION ALL. The query optimizer cannot make any assumptions about the values stored in different tables and it cannot skip searching the tables that participate in the view definition.
Then why am I getting index scans on the partitioning column of tables whose check constraints exclude the search value?
Not looking for an answer because I know the query optimizer is a fickle b*tch and I did not post any code, but I needed to rant.
I have a partitioned view defined by a UNION ALL of member tables. I can update the member tables through the view without any problem. However, when I declare a cursor on this partitioned view and try to update the view using WHERE CURRENT OF, I get an error saying 'The target object type is not updatable through a cursor'. Does anyone know whether updating a partitioned view through a cursor is simply not supported in SQL Server 2000?
SQL 2008 R2 Standard on Windows 2008 R2 Enterprise. I have implemented a set of local partitioned views to facilitate multi-threading in one of our applications. Testing with SQL, the insert, update, and delete all work fine. However, the inserts from SSIS are bulk inserts, and they fail. According to all I have read, I should be able to add an "instead of" trigger for the insert and it should work fine. URL....
I have created instead of triggers for insert and delete and again they work fine when I test them with T-SQL but fail in the SSIS package. The error message is that the view is not updateable.
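A likely culprit worth checking before blaming the triggers themselves: bulk/fast-load inserts skip triggers unless the load explicitly requests them. In an OLE DB Destination's fast-load options you can add FIRE_TRIGGERS (the BULK INSERT statement has the same keyword). For reference, a minimal sketch of the routing-trigger shape, with all names assumed:
Code Snippet
-- INSTEAD OF trigger routing rows of a local partitioned view
-- to its member tables by the partitioning column (names assumed)
CREATE TRIGGER dbo.trg_MyView_Insert
ON dbo.MyView
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.Member1 (PartKey, Payload)
    SELECT PartKey, Payload FROM inserted WHERE PartKey = 1;

    INSERT INTO dbo.Member2 (PartKey, Payload)
    SELECT PartKey, Payload FROM inserted WHERE PartKey = 2;
END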