Does anyone know how much free space should be left available on a storage partition to allow for optimum performance? In fact, is there any performance benefit to leaving a certain amount of free space on a partition that is occupied by SQL data files?
I created two tables: one partitioned and one non-partitioned.
File groups: Jan, Feb ... Dec. Partition function boundary values: '20060101', '20060201' ... '20061201', using RANGE RIGHT. I then defined a partition scheme on the partition function.
I have more than 700,000 rows in my database. I checked the filegroups and row counts, and it works fine.
But when I check the estimated execution plan time for a query, it is the same for both the partitioned table and the non-partitioned table.
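If it helps to confirm that the rows really are being routed into the monthly partitions, one quick check (a sketch; pf_MonthlyRange, dbo.PartitionedTable, and OrderDate are placeholder names for whatever you actually created) is:

-- Count rows per partition using the partition function directly
SELECT $PARTITION.pf_MonthlyRange(OrderDate) AS PartitionNumber,
       COUNT(*)                              AS RowsInPartition
FROM dbo.PartitionedTable
GROUP BY $PARTITION.pf_MonthlyRange(OrderDate)
ORDER BY PartitionNumber;

-- Or read the row counts from the catalog views
SELECT p.partition_number, p.rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.PartitionedTable')
  AND p.index_id IN (0, 1)   -- heap or clustered index
ORDER BY p.partition_number;

Note also that with only 700,000 rows the estimated plans for the partitioned and non-partitioned tables can easily look the same; partition elimination mainly shows up when the query filters on the partitioning column.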
Does anyone know of any documentation on the performance of partition merge/split? Does the merge or split of a partition cause any locking on the partitioned table? If you were merging or splitting a large volume of data to rebalance your partitioned table, would you potentially lock users out?
Can someone take a look at my code and tell me what I'm doing wrong? The script runs fine, but when I go to the table properties it says the table is not partitioned. Thanks for your help.
create database [mypartition]
go
--CREATE FILEGROUPS
USE [mypartition]
GO
ALTER DATABASE mypartition ADD FILEGROUP Y2000_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2001_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2002_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2003_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2004_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2005_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2006_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2007_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2008_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2009_filegroup
--CREATE FILES
USE mypartition
GO
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2000, FILENAME = 'F:\ss_data\data\detail_2000.ndf', SIZE = 2MB) TO FILEGROUP Y2000_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2001, FILENAME = 'F:\ss_data\data\detail_2001.ndf', SIZE = 2MB) TO FILEGROUP Y2001_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2002, FILENAME = 'F:\ss_data\data\detail_2002.ndf', SIZE = 2MB) TO FILEGROUP Y2002_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2003, FILENAME = 'F:\ss_data\data\detail_2003.ndf', SIZE = 2MB) TO FILEGROUP Y2003_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2004, FILENAME = 'F:\ss_data\data\detail_2004.ndf', SIZE = 2MB) TO FILEGROUP Y2004_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2005, FILENAME = 'F:\ss_data\data\detail_2005.ndf', SIZE = 2MB) TO FILEGROUP Y2005_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2006, FILENAME = 'F:\ss_data\data\detail_2006.ndf', SIZE = 2MB) TO FILEGROUP Y2006_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2007, FILENAME = 'F:\ss_data\data\detail_2007.ndf', SIZE = 2MB) TO FILEGROUP Y2007_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2008, FILENAME = 'F:\ss_data\data\detail_2008.ndf', SIZE = 2MB) TO FILEGROUP Y2008_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2009, FILENAME = 'F:\ss_data\data\detail_2009.ndf', SIZE = 2MB) TO FILEGROUP Y2009_filegroup;
--CREATE PARTITION FUNCTION
USE [mypartition]
GO
CREATE PARTITION FUNCTION detail_part_function (varchar(10))
AS RANGE LEFT FOR VALUES ('2001','2002','2003','2004','2005','2006','2007','2008')
GO
--CREATE PARTITION SCHEME
USE [mypartition]
GO
CREATE PARTITION SCHEME detail_part_scheme
AS PARTITION detail_part_function
TO (Y2000_filegroup, Y2001_filegroup, Y2002_filegroup, Y2003_filegroup, Y2004_filegroup, Y2005_filegroup, Y2006_filegroup, Y2007_filegroup, Y2008_filegroup, Y2009_filegroup)
GO
-- Now just create a table that uses the partition scheme
USE [mypartition]
GO
/****** Object: Table [dbo].[partitioned_table] Script Date: 05/14/2008 09:44:21 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[partitioned_table](
    [id] [int] NOT NULL,
    [fiscal_year] [varchar](10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    CONSTRAINT [PK_partitioned_table] PRIMARY KEY CLUSTERED
    (
        [id] ASC
    ) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON detail_part_scheme(fiscal_year)
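One thing that may explain the behavior (offered as a likely cause rather than a certain diagnosis): the clustered primary key is created ON [PRIMARY], and since the clustered index determines where the table data actually lives, that placement overrides the trailing ON detail_part_scheme(fiscal_year). A sketch of a variant that keeps the data on the partition scheme; it makes fiscal_year NOT NULL and adds it to the clustered key, because a unique clustered index created on a partition scheme must include the partitioning column:

CREATE TABLE [dbo].[partitioned_table](
    [id] [int] NOT NULL,
    [fiscal_year] [varchar](10) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    CONSTRAINT [PK_partitioned_table] PRIMARY KEY CLUSTERED
    (
        [id] ASC,
        [fiscal_year] ASC
    ) WITH (IGNORE_DUP_KEY = OFF)
) ON detail_part_scheme(fiscal_year)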
I can see a lot of documentation on range partitioning. Is there any other type of partitioning supported in SQL Server 2005?
For example, I have a fact table with a billion rows. It has a column called BATCH_ID. A BATCH_ID corresponds to 10-20 million rows, and it is a running sequence number like 1, 2, 3, etc. (not an identity column). Is there any way I can specify a partition function with the BATCH_ID column as an int value? Will SQL Server automatically partition on each int value in that case? If not, what is the best way to do it?
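As far as I know, SQL Server 2005 only offers range partitioning, so there is no automatic one-partition-per-value option; you would choose boundary values on BATCH_ID yourself. A rough sketch, with invented boundaries that group a handful of batches per partition:

-- Each boundary starts a new partition (RANGE RIGHT), so this gives
-- partitions for BATCH_ID 1-10, 11-20, 21-30, 31-40, and 41 and above.
CREATE PARTITION FUNCTION pf_BatchRange (int)
AS RANGE RIGHT FOR VALUES (11, 21, 31, 41);
GO
CREATE PARTITION SCHEME ps_BatchRange
AS PARTITION pf_BatchRange
ALL TO ([PRIMARY]);   -- or map each range to its own filegroup
GO
-- The fact table (or its clustered index) would then be created ON ps_BatchRange(BATCH_ID).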
Hi, I recently installed MSSQL 2000 and SP3a onto a Windows 2003 server in a test lab. I configured one big C: partition on this OS and installed the db in the default location.
I need to detach the db's on this server and re-attach them onto another MSSQL 2000/SP3a server running the 2003 OS with a partition scheme like this:
C: = 20 GB for the OS
E: = 600 GB for the data
I could not re-attach the db's onto the default database path on E:.
Is there a work-around for this? It makes sense to me why it is not working, and it was an install oversight on my part, but there has to be a way to overcome this dilemma.
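If the issue is just that the files now live under a different path than the one recorded when the databases were detached, the attach can point at the new locations explicitly. A sketch for SQL Server 2000, with placeholder database and file names:

-- Copy the .mdf/.ldf files to the new location on E:, then attach them from there
EXEC sp_attach_db
    @dbname    = N'MyDatabase',
    @filename1 = N'E:\SQLData\MyDatabase.mdf',
    @filename2 = N'E:\SQLData\MyDatabase_log.ldf';

-- (On SQL Server 2005 and later the equivalent would be
--  CREATE DATABASE MyDatabase ON (FILENAME = '...'), (FILENAME = '...') FOR ATTACH;)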
Please help me with how to do horizontal table partitioning. I have to split a table into multiple sub-tables with the same columns and fewer rows, and then I have to use each sub-table.
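One common way to do this in SQL Server (a sketch with invented table and column names, assuming the split is by date range) is to create member tables whose CHECK constraints don't overlap and then combine them with a UNION ALL view, so each sub-table can also be queried on its own:

CREATE TABLE dbo.Orders_2007 (
    OrderID   int           NOT NULL,
    OrderDate datetime      NOT NULL CHECK (OrderDate >= '20070101' AND OrderDate < '20080101'),
    Amount    decimal(10,2) NULL,
    CONSTRAINT PK_Orders_2007 PRIMARY KEY (OrderID, OrderDate)
);
CREATE TABLE dbo.Orders_2008 (
    OrderID   int           NOT NULL,
    OrderDate datetime      NOT NULL CHECK (OrderDate >= '20080101' AND OrderDate < '20090101'),
    Amount    decimal(10,2) NULL,
    CONSTRAINT PK_Orders_2008 PRIMARY KEY (OrderID, OrderDate)
);
GO
-- The view presents all sub-tables as one logical table
CREATE VIEW dbo.Orders
AS
SELECT OrderID, OrderDate, Amount FROM dbo.Orders_2007
UNION ALL
SELECT OrderID, OrderDate, Amount FROM dbo.Orders_2008;
GO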
Msg 156, Level 15, State 1, Line 12
Incorrect syntax near the keyword 'over'.
What am I missing? The max() over statement looks just like the statement in the documentation.
select RegistrationId, OrderId, Sequence, Title, InformalFirstName, FirstName, MiddleName, Lastname, EntryDate,
       max(Sequence) over (partition by RegistrationId) as 'maxsequence'
from registration
where OrderId = '68379449583'
  and Year = '2008'
  and Active = 'Yes'
I have a situation where my SQL works everywhere else but my COBOL compiler complains wherever I use PARTITION BY. I can't find a workaround for that problem so I would like to remove all the PARTITION BYs. I'm not confident that I can do this accurately and would like some help getting started.
Here is my simplest example:
SELECT FESOR.REGION, FESOR.TYPE, COUNT(*) OVER (PARTITION BY FESOR.REGION, FESOR.TYPE)
FROM FESOR, FR
where FESOR.phase = 'Ref'
  and FESOR.assign is null
  and FESOR.comp_date is null
  and FESOR.region = FR.REGION
  and FESOR.type = FR.TYPE
  and FR.REP_ROW = 'A'
GROUP BY FESOR.REGION, FESOR.TYPE
What I'm looking for is a modified version of the SQL above which returns the same result set without using PARTITION BY.
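Since the window partitions by the same columns the query already groups by, one candidate rewrite (a sketch only; worth checking against your data, because mixing GROUP BY with a windowed COUNT can behave differently than a plain aggregate) is an ordinary grouped count:

SELECT FESOR.REGION, FESOR.TYPE, COUNT(*) AS row_count
FROM FESOR
JOIN FR
  ON FESOR.region = FR.REGION
 AND FESOR.type   = FR.TYPE
WHERE FESOR.phase = 'Ref'
  AND FESOR.assign IS NULL
  AND FESOR.comp_date IS NULL
  AND FR.REP_ROW = 'A'
GROUP BY FESOR.REGION, FESOR.TYPE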
I have names in the database which I want to partition by last name: for example, last names starting with A, B, C, D should go to file group 1, and last names starting with E, F, G, H should go to file group 2.
I am trying to use the following function, but how do I specify in the function that last names starting with A, B, C, D should go to file group 1?
CREATE PARTITION FUNCTION myRangePF3 (char(20)) AS RANGE RIGHT FOR VALUES ('EX', 'RXE', 'XR');
Is there any way to modify partition function to accomplish this?
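Range boundaries compare on the leading characters, so one way (a sketch; with RANGE RIGHT each boundary value begins a new partition) is to put a boundary at the first letter of each new group:

CREATE PARTITION FUNCTION pf_LastNameRange (char(20))
AS RANGE RIGHT FOR VALUES ('E', 'I', 'M', 'Q');
-- Partition 1: names < 'E'            (A-D)  -> map to file group 1 in the scheme
-- Partition 2: names >= 'E' and < 'I' (E-H)  -> map to file group 2
-- Partition 3: names >= 'I' and < 'M' (I-L)
-- Partition 4: names >= 'M' and < 'Q'
-- Partition 5: names >= 'Q'
-- The partition scheme then maps each of these partitions to the desired filegroup.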
I am in the process of migrating my 2000 install over to SQL05 and onto a couple of new boxes. I have two Dell 1850s to set up mirroring on and wanted to know your opinion on the best partition setup. The 1850s are 2-disk machines, so it has to be a RAID 1 setup. I am just unsure of the benefit of partitioning the logical drive to separate the log files from the data files.
Should I partition the drive, a 300 GB SCSI, into 2 partitions and keep the logs on one partition and the data on another? Can anyone tell me if there is a benefit to doing this?
If there is a more practical solution, can you explain?
Most examples for SQL Server 2005 involve a sales table that you split based on date, i.e. sales records prior to 2000 go to this partition, and the ones after that go to another one. Nice and simple.
Say I have a sales table:
id   Amount   Date
1    10       1/1/1999
2    9.99     1/1/2007
Now then, I put all the records prior to 2000 in their own partition.
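That setup might look roughly like this (a sketch; the function, scheme, and filegroup names are invented):

CREATE PARTITION FUNCTION pf_SalesByDate (datetime)
AS RANGE RIGHT FOR VALUES ('20000101');   -- partition 1: dates before 2000, partition 2: 2000 onward
GO
CREATE PARTITION SCHEME ps_SalesByDate
AS PARTITION pf_SalesByDate
TO (fg_Old, fg_Current);
GO
CREATE TABLE dbo.Sales (
    id     int      NOT NULL,
    Amount money    NULL,
    [Date] datetime NOT NULL
) ON ps_SalesByDate([Date]);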
So when I do something like this: SELECT * FROM Sales WHERE [Date] = '1/1/1999', SQL Server will know which partition to look at. Very nice.
Now then, if I do this: SELECT * FROM Sales WHERE id = 1, how will SQL Server know which partition to look at?
I have a table named stgBudgetFact that is partitioned on DivisionID.
Each DivisionID goes into its own partition, which is on its own filegroup.
The ETL guru on the project wants to be able to truncate the partition, not do a DELETE from the table based on DivisionID.
Is it possible to truncate the partition somehow (remove the rows where DivisionID = 3, for instance, without ALTER DATABASE, where the medicine is worse than the disease) and then re-establish the partition so we can restart a failed load by division?
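The usual 2005-era approach (a sketch; pf_Division and the staging table name are placeholders, and the staging table must exactly match the partitioned table's columns, indexes, and the filegroup of the partition being switched) is to switch the partition out to an empty staging table and truncate that:

-- 1. Create an empty dbo.stgBudgetFact_staging with the same structure and indexes,
--    on the filegroup that holds the partition for DivisionID = 3.
-- 2. Find the partition number for DivisionID = 3, then switch the data out;
--    this is a metadata operation, not a row-by-row delete.
SELECT $PARTITION.pf_Division(3) AS PartitionNumber;

ALTER TABLE dbo.stgBudgetFact
SWITCH PARTITION 3 TO dbo.stgBudgetFact_staging;   -- assumes the query above returned 3

-- 3. The rows now live in the staging table and can be discarded instantly.
TRUNCATE TABLE dbo.stgBudgetFact_staging;

-- The partition in stgBudgetFact is now empty, so the failed division can be reloaded.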
Sorry if my post is misplaced. I have a table that contains a huge amount of data, so I made a partition function and partition scheme. Unfortunately only one column is allowed as the partition column, whereas my queries use a few columns. Can we make many partition functions that apply to one partition scheme? I found nothing in SQL BOL. Thanks in advance.
How complex can the over (partition by ...) window functions be? In all the examples I see in BOL, the partition clause is the same for each window function. Can it be different? How different?
Here are some snippets of where I'd like to take this. Right now I'm using successive views to bring the results to a single row.
SUM(building_function_table.e_and_g_square_foot + building_function_table.auxillary_square_foot)
    OVER (PARTITION BY building_function_table.fice_code, building_function_table.building_number) AS sqft

SUM(building_function_table.e_and_g_square_foot * function_table.square_foot_value)
    OVER (PARTITION BY building_function_table.fice_code, building_function_table.building_number, building_function_table.function_code) AS repl_value_e_g

SUM(building_needs_table.percentage * (building_needs_table.age / system_component_table.useful_life) * component_multiplier_table.multiplier)
    OVER (PARTITION BY building_needs_table.fice_code, building_needs_table.building_number, building_needs_table.system_code, building_needs_table.system_component_code, building_needs_table.function_code) AS maint_needs
And it gets even more complex, but these are all I've written because I don't know whether it will work.
These would all be part of a giant overhead view of a building maintenance database. It's normalized, and the respective tables above are all simple inner joins on the primary key of their parent.
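For what it's worth, different window functions in a single SELECT can each use a different PARTITION BY list; a simplified sketch based on the columns above (the expressions are trimmed down):

SELECT fice_code,
       building_number,
       function_code,
       SUM(e_and_g_square_foot) OVER (PARTITION BY fice_code, building_number)                AS sqft_per_building,
       SUM(e_and_g_square_foot) OVER (PARTITION BY fice_code, building_number, function_code) AS sqft_per_function
FROM building_function_table;

Each OVER clause defines its own window, so they are evaluated independently against the same result set.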
Hello everyone,

I have a very complex performance issue with our production database. Here's the scenario. We have a production web server and a development web server. Both are running SQL Server 2000.

I encountered various performance issues on the production server with a particular query. It would take approximately 22 seconds to return 100 rows, that's about 0.22 seconds per row. Note: I ran the query in single-user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second.

I took a look at the query execution plan and found that they were exactly the same in both cases. Then I took a look at the various indexes, and again I found no differences in the table indices.

If both databases are identical, I'm assuming that the issue is related to some external hardware factor like disk space, memory, etc. Or could it be OS software related issues, like service packs, SQL Server configurations, etc.?

Here's what I've done to rule out some obvious hardware issues on the prod server:
1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive. There is 55 GB of free space on the disk.
2. Applied the SQL Server SP4 service pack.
3. Defragmented the primary hard drive.
4. Applied all Windows Server 2003 updates.

Here are the prod server's system specs:
2x Intel Xeon 2.67 GHz
Total Physical Memory 2 GB, Available Physical Memory 815 MB
Windows Server 2003 SE w/ SP1

Here are the dev server's system specs:
2x Intel Xeon 2.80 GHz
2 GB DDR2-SDRAM
Windows Server 2003 SE w/ SP1

I'm not sure what else to do; the query performance is an order of magnitude different and I can't explain it. To me it is a hardware or operating system related issue. Any ideas would help me greatly!

Thanks,
Brian T
1. HIS_HTTP_LOG is a partitioned table.
2. REL_HTTP_LOG is not a partitioned table; it has the same structure as HIS_HTTP_LOG.
3. When HIS_HTTP_LOG doesn't have any index, the following executes successfully:

ALTER PARTITION SCHEME PS_HIS_HTTP_LOG NEXT USED [FG_03]
ALTER PARTITION FUNCTION PF_HIS_HTTP_LOG() SPLIT RANGE ('20070331 23:59:59.997')
ALTER TABLE TMP_HTTP_LOG SWITCH TO HIS_HTTP_LOG PARTITION 3

4. However, when I added an index to HIS_HTTP_LOG and executed step 3, it raised an error:

a) CREATE INDEX IDX_HIS_HTTP_LOG_001 ON HIS_HTTP_LOG(USERID) ON PS_HIS_HTTP_LOG (STARTIME)
b) ALTER PARTITION SCHEME PS_HIS_HTTP_LOG NEXT USED [FG_03]
   ALTER PARTITION FUNCTION PF_HIS_HTTP_LOG() SPLIT RANGE ('20070331 23:59:59.997')
   ALTER TABLE TMP_HTTP_LOG SWITCH TO HIS_HTTP_LOG PARTITION 3

Error message:
"ALTER TABLE SWITCH statement failed. There is no identical index in source table 'TMP_HTTP_LOG' for the index 'IDX_HIS_HTTP_LOG_001' in target table 'HIS_HTTP_LOG'."

When I added the index to REL_HTTP_LOG, it gave me the same error message. Could you tell me how I can solve the problem?
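One way to read that error (a suggestion, not a verified fix for your exact schema): ALTER TABLE ... SWITCH requires the source table to have indexes matching every index on the target, so after adding IDX_HIS_HTTP_LOG_001 to HIS_HTTP_LOG the staging table needs an equivalent index before the switch. A sketch:

-- Create a matching index on the source table (same key column as the target's index;
-- for a switch into a partition it generally also needs to reside on the filegroup
-- that backs the target partition).
CREATE INDEX IDX_TMP_HTTP_LOG_001 ON TMP_HTTP_LOG (USERID);

-- Then retry the switch
ALTER TABLE TMP_HTTP_LOG SWITCH TO HIS_HTTP_LOG PARTITION 3;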
I have the pagefile.sys on the same partition (C:) as the database files. I've been advised this is not a good idea, and I'm seeing paging. I'd like to move the swap file to another partition on the same drive. Is that a good idea, or should I move it to another physical disk? And is it OK to leave the OS partition (C:) without any swap file? Thanks!!
Hi! I'm installing a new SQL Server machine. During the NT Server installation our NT support guy converted the only 2 GB FAT C: partition to NTFS. So as of right now all my four 8 GB drives are NTFS. I think it would be better to keep the C: partition as FAT because, to my knowledge, having a FAT boot partition can help to boot the machine in case of an NT crash.
Is there anything that I'm really losing by this conversion to NTFS, or should I not be so worried about it? Does it put my SQL Server databases, database .dat files, or NT Server in a more dangerous situation in case of a crash? Or does it give me some advantages? Thanks, Ninel
Recently I've been working on a new project that would partition a large table into 2 smaller tables. I then create a view to union the 2 smaller tables (tables A and B). I've been getting a strange error when I try to update, insert, or delete a record through the view: "View needs partitioning column". I find this strange. Both of my tables have a clustered primary key consisting of 3 columns, and one of the 3 columns (a date field) has a check constraint. The constraint is used to determine which record goes into which table. Am I missing anything else? The really strange part is sometimes it works, and sometimes I get the error message.
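One possibility worth checking (offered as a guess, not a confirmed diagnosis): for an updatable partitioned view, an INSERT through the view has to supply an explicit value for the partitioning column so SQL Server can route the row to the correct member table, which could explain why it only fails some of the time. A sketch with invented names:

-- Typically works: the partitioning column (RecordDate) is listed and given a literal
INSERT INTO dbo.CombinedView (KeyCol1, KeyCol2, RecordDate, Amount)
VALUES (1, 42, '20080315', 10.00);

-- Typically raises "view needs partitioning column": the partitioning column is
-- omitted (relying on a DEFAULT), so the view can't decide which member table to use
INSERT INTO dbo.CombinedView (KeyCol1, KeyCol2, Amount)
VALUES (1, 42, 10.00);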
I need to create a new partition on a cube using T-SQL, and I am not very familiar with either cubes or ActiveX scripts. I am writing a T-SQL script for creating this partition on a cube.

Select Case iMonth
    Case 1, 2, 3
        sQuarter = "1"
    Case 4, 5, 6
        sQuarter = "2"
    Case 7, 8, 9
        sQuarter = "3"
    Case 10, 11, 12
        sQuarter = "4"
End Select
Please see the sample below from BOL, plus a sample of the execution plan.
I would like to ask how to prevent the optimizer from scanning tables outside the scope of the query (I would expect the only table touched for this query to be SUPPLY1).
Thanks, Eyal
--This example uses tables named SUPPLY1, SUPPLY2, SUPPLY3, and SUPPLY4, which correspond to the supplier tables from four offices, located in different countries/regions.
USE tempdb
GO
--create the tables and insert the values
CREATE TABLE SUPPLY1 (
    supplyID INT PRIMARY KEY CHECK (supplyID BETWEEN 1 and 150),
    supplier CHAR(50)
)
CREATE TABLE SUPPLY2 (
    supplyID INT PRIMARY KEY CHECK (supplyID BETWEEN 151 and 300),
    supplier CHAR(50)
)
CREATE TABLE SUPPLY3 (
    supplyID INT PRIMARY KEY CHECK (supplyID BETWEEN 301 and 450),
    supplier CHAR(50)
)
CREATE TABLE SUPPLY4 (
    supplyID INT PRIMARY KEY CHECK (supplyID BETWEEN 451 and 600),
    supplier CHAR(50)
)
GO
--create the view that combines all supplier tables
CREATE VIEW all_supplier_view
AS
SELECT * FROM SUPPLY1
UNION ALL
SELECT * FROM SUPPLY2
UNION ALL
SELECT * FROM SUPPLY3
UNION ALL
SELECT * FROM SUPPLY4
GO
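For reference, a query of the kind the question is about (a sketch) would be:

-- Only SUPPLY1's CHECK constraint (supplyID BETWEEN 1 AND 150) can contain this value,
-- so the optimizer should exclude SUPPLY2-SUPPLY4 from the plan, provided the CHECK
-- constraints are trusted and the predicate is on the partitioning column. With a
-- parameter instead of a literal, the other branches usually remain in the plan but
-- are skipped at run time via startup filters.
SELECT supplyID, supplier
FROM all_supplier_view
WHERE supplyID = 100;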