We are merge replicating data from a server to client servers. The publication has a static row filter. If a change is made on the client to the field that defines the partition, we see a problem: after a sync, the record is deleted from the client. Here are more specifics:
Work orders are created on the branch office server with a status of 'Scheduled'. These are replicated to the main office server upon a sync; only records with a status of 'Scheduled' are replicated to the central server. If the client server changes the status from 'Scheduled' to unscheduled, the record then falls outside of the partition defined by the row filter.
Upon sync, the change in status is sent to the server, and the server shows the updated record status. The second part of the merge process notes that the record has fallen outside of the row filter and instructs the client to delete the record.
It appeared that setting @allow_partition_realignment to false would fix this, but that setting requires the publication to be download only, which doesn't work for the application.
We are unable to make the client servers the publishers, as many of them are running SQL Express.
The answer possibly lies in the many options used to set up merge replication; possibly some pre- or post-replication routine is needed.
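For reference, the realignment behaviour is set when the publication is created; here is a minimal sketch (the publication name is made up, and this does not remove the download-only restriction mentioned above):

EXEC sp_addmergepublication
    @publication = N'WorkOrders_Pub',
    @allow_partition_realignment = N'false';  -- default is N'true'; 'false' keeps rows that leave the filter on the subscriber, but requires download-only articles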
Does anyone know of any documentation on the performance of partition merge/split? Does the merge or split of a partition cause any locking on the partitioned table? If you were merging or splitting a large volume of data rebalancing your partitioned table, would you potentially lock users out?
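For illustration, these are the operations in question; as far as I know, SPLIT and MERGE take schema-modification locks on every table that uses the function, and any rows that end up in a different partition are physically moved (fully logged), so splitting or merging a populated boundary can indeed block users for the duration. Object names below are hypothetical:

ALTER PARTITION SCHEME ps_sales NEXT USED [FG_2009];
ALTER PARTITION FUNCTION pf_sales() SPLIT RANGE ('20090101');   -- add a boundary
ALTER PARTITION FUNCTION pf_sales() MERGE RANGE ('20080101');   -- remove a boundary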
Hi, I have 2 reports - a summary and a detail. When I select one of the values in the summary report I want it to jump to the detail report showing me what the 'summary' values consist of. I have set the parameters up on both reports and am able to pass them between the two.
I do not have any issues with any of the above, but how do I stop the jump from the summary report to the detail report when there is no value in the summary report? Any ideas?
Hello, I am trying to make a link from a report textbox to a file on my computer or on the local server, but it doesn't seem to work. The cursor changes into a hand, but when I click it nothing happens. I tried typing " file:///f:/Sertifikatai/sertifikatas.txt " and " file://localhost/f:/Sertifikatai/sertifikatas.txt ". Could you please help me?
We have an ASP.NET application using a ReportViewer to show reports. I have two reports (both with a logo image in their headers):
- Report A: with action to jump to Report B.
- Report B: with action to jump to Report A.
This works fine in BIDS and in Report Manager, but in our application, when I click the link in Report A, it shows Report B but the logo isn't visible. If I then try to jump back by clicking the action in Report B, I get this error:
The path of the item "(null)" is not valid. The path must be less than 260 characters long and must start with slash. Other restrictions apply. (rsInvalidItemPath)
Do I need to configure something in my web.config for this to work as well?
I have a database on SQL Server 2000 set up with a merge publication. This publication is configured with a number of dynamic filters to reduce the amount of data sent to each client. Each client has an anonymous pull subscription. The merge process can be triggered by the Windows sync manager and by my application.
To improve performance I have created some helper tables to hold the mapping between user login and the primary keys of selected entities. For the replicated data to be correct, the contents of the helper tables need to be up to date. I need to fire off a stored procedure on the publisher before replication starts to verify that this data is up to date. I cannot see any documented way of doing this; however, I have been experimenting with some unorthodox systems.
Firstly, has anyone any ideas?
I have been considering adding a trigger to some of the tables used by the Microsoft replication code - yes, I know this is very nasty. My problems arise because executing this stored procedure will cause some data to be updated. In updating data we could create a new generation in the database. I must therefore run my stored procedure before the Microsoft code makes any generation checks / updates.
Anyone done anything similar? Anyone have any better ideas? Any comments would be gratefully received.
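One possible sketch, assuming the synchronisation is started by your own application rather than by Sync Manager: call the publisher-side refresh procedure over a linked server (or a direct connection from the application) immediately before launching the merge agent, so the helper tables are updated before any generation processing begins. The server, database, and procedure names here are hypothetical:

-- run from the subscriber just before the application starts the merge agent
EXEC PUBSRV.PubDB.dbo.usp_RefreshLoginFilterMap @login = N'branch17';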
I'm using merge replication to maintain a backup copy of my main (publisher) MSDE database. A push subscription periodically (once per minute) updates the backup DB. The intent is that if the main DB goes down, the backup (subscription) DB can be configured as a publisher. This must all be performed via scripting. The initial configuration of the main publisher and subscription is controlled via scripting, which works fine. The problems occur when I try to configure the subscriber to become a publisher. A script is executed on the subscriber but fails at the point where it configures the publisher detail. The error is something like "unable to configure a publication for a database set up as an anonymous subscription". I'm guessing that there are subscription artifacts added to the database which need to be removed before it can be configured as a new publisher.
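A hedged sketch of the kind of cleanup that usually has to happen before the promotion script runs; the database and publication names below are placeholders:

-- executed on the old subscriber, against the subscription database
EXEC sp_subscription_cleanup
     @publisher    = N'MAINSRV',
     @publisher_db = N'MainDB',
     @publication  = N'MainPub';

-- or, more drastically, remove every replication object from the database
EXEC sp_removedbreplication N'MainDB';

-- ...then run the normal publisher configuration (sp_replicationdboption, sp_addmergepublication, etc.)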
Can someone take a look at my code and tell me what I'm doing wrong? The script runs fine, but when I go to the table properties it says the table is not partitioned. Thanks for your help.
CREATE DATABASE [mypartition]
GO
--CREATE FILEGROUPS
USE [mypartition]
GO
ALTER DATABASE mypartition ADD FILEGROUP Y2000_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2001_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2002_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2003_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2004_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2005_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2006_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2007_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2008_filegroup
ALTER DATABASE mypartition ADD FILEGROUP Y2009_filegroup
--CREATE FILES
USE mypartition
GO
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2000, FILENAME = 'F:\ss_data\data\detail_2000.ndf', SIZE = 2MB) TO FILEGROUP Y2000_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2001, FILENAME = 'F:\ss_data\data\detail_2001.ndf', SIZE = 2MB) TO FILEGROUP Y2001_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2002, FILENAME = 'F:\ss_data\data\mdetail_2002.ndf', SIZE = 2MB) TO FILEGROUP Y2002_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2003, FILENAME = 'F:\ss_data\data\detail_2003.ndf', SIZE = 2MB) TO FILEGROUP Y2003_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2004, FILENAME = 'F:\ss_data\data\detail_2004.ndf', SIZE = 2MB) TO FILEGROUP Y2004_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2005, FILENAME = 'F:\ss_data\data\detail_2005.ndf', SIZE = 2MB) TO FILEGROUP Y2005_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2006, FILENAME = 'F:\ss_data\data\detail_2006.ndf', SIZE = 2MB) TO FILEGROUP Y2006_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2007, FILENAME = 'F:\ss_data\data\detail_2007.ndf', SIZE = 2MB) TO FILEGROUP Y2007_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2008, FILENAME = 'F:\ss_data\data\detail_2008.ndf', SIZE = 2MB) TO FILEGROUP Y2008_filegroup;
ALTER DATABASE mypartition ADD FILE (NAME = mypartition_detail_2009, FILENAME = 'F:\ss_data\data\detail_2009.ndf', SIZE = 2MB) TO FILEGROUP Y2009_filegroup;
--CREATE PARTITION FUNCTION
USE [mypartition]
GO
CREATE PARTITION FUNCTION detail_part_function (varchar(10))
AS RANGE LEFT FOR VALUES ('2001','2002','2003','2004','2005','2006','2007','2008')
GO
--CREATE PARTITION SCHEME
USE [mypartition]
GO
CREATE PARTITION SCHEME detail_part_scheme
AS PARTITION detail_part_function
TO (Y2000_filegroup, Y2001_filegroup, Y2002_filegroup, Y2003_filegroup, Y2004_filegroup, Y2005_filegroup, Y2006_filegroup, Y2007_filegroup, Y2008_filegroup, Y2009_filegroup)
GO
-- Now just create a table that uses the partition scheme
USE [mypartition]
GO
/****** Object: Table [dbo].[partitioned_table] Script Date: 05/14/2008 09:44:21 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[partitioned_table](
    [id] [int] NOT NULL,
    [fiscal_year] [varchar](10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    CONSTRAINT [PK_partitioned_table] PRIMARY KEY CLUSTERED
    (
        [id] ASC
    ) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON detail_part_scheme(fiscal_year)
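For comparison, here is a variation of the CREATE TABLE that keeps the data on the partition scheme. The point of interest is that the clustered primary key above is created ON [PRIMARY], and the clustered index determines where the table's rows are stored; building the key on the scheme instead (which means adding fiscal_year to the key, and therefore making it NOT NULL, because the partitioning column must be part of a unique clustered key) is one way the table stays partitioned. Treat this as a sketch to compare against, not a definitive fix:

CREATE TABLE [dbo].[partitioned_table](
    [id] [int] NOT NULL,
    [fiscal_year] [varchar](10) NOT NULL,
    CONSTRAINT [PK_partitioned_table] PRIMARY KEY CLUSTERED
    (
        [id] ASC,
        [fiscal_year] ASC
    ) ON detail_part_scheme([fiscal_year])
) ON detail_part_scheme([fiscal_year])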
I can see a lot of documentation on range partitioning. Is there any other type of partitioning supported in SQL Server 2005?
For example, I have a fact table with a billion rows. It has a column called BATCH_ID. A BATCH_ID corresponds to 10-20 million rows, and it is a running sequence number like 1, 2, 3, etc. (not an identity column). Is there any way I can specify a partition function with the BATCH_ID column as an int value? Will SQL Server automatically partition on each int value in that case? If not, what is the best way to do it?
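As far as I know, range partitioning is the only kind SQL Server 2005 offers, so it will not create one partition per distinct BATCH_ID on its own; you choose the boundary values yourself. A sketch with made-up boundaries, one partition per batch (you could equally group several batches per boundary):

CREATE PARTITION FUNCTION pf_batch (int)
AS RANGE LEFT FOR VALUES (1, 2, 3, 4, 5);      -- partitions for batches 1..5 plus one for everything above

CREATE PARTITION SCHEME ps_batch
AS PARTITION pf_batch ALL TO ([PRIMARY]);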
Hi, I recently installed MSSQL 2000 and SP3a onto a Windows 2003 server in a test lab. I configured one big C: partition on this OS and installed the DB in the default location.
I need to detach the DBs on this server and re-attach them onto another MSSQL 2000/SP3a server running the 2003 OS with a partition scheme like this:
C: = 20 GB for the OS
E: = 600 GB for the data
I could not re-attach the DBs onto the E: drive's default database path.
Is there a work-around for this? It makes sense to me why it is not working (it was an install oversight on my part), but there has to be a way to overcome this dilemma?
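One approach that may help on SQL Server 2000: attach the database and point each file at its new location explicitly. The database name and paths below are placeholders for wherever the files were copied on the new server:

EXEC sp_attach_db
     @dbname    = N'MyDB',
     @filename1 = N'E:\Data\MyDB.mdf',
     @filename2 = N'E:\Data\MyDB_log.ldf';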
Please help me with how to do horizontal table partitioning. I have to split the table into multiple sub-tables with the same columns and fewer rows, and then I have to use each sub-table.
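A minimal sketch of one common approach: identical member tables with a CHECK constraint on the partitioning column, exposed through a UNION ALL view. Table, column, and range values are examples only:

CREATE TABLE dbo.Orders_2007 (
    OrderID   int      NOT NULL PRIMARY KEY,
    OrderYear smallint NOT NULL CHECK (OrderYear = 2007)
);
CREATE TABLE dbo.Orders_2008 (
    OrderID   int      NOT NULL PRIMARY KEY,
    OrderYear smallint NOT NULL CHECK (OrderYear = 2008)
);
GO
CREATE VIEW dbo.Orders
AS
SELECT OrderID, OrderYear FROM dbo.Orders_2007
UNION ALL
SELECT OrderID, OrderYear FROM dbo.Orders_2008;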
Msg 156, Level 15, State 1, Line 12 Incorrect syntax near the keyword 'over'.
What am I missing? The max() over statement looks just like the statement in the documentation.
SELECT RegistrationId, OrderId, Sequence, Title, InformalFirstName, FirstName,
       MiddleName, Lastname, EntryDate,
       MAX(Sequence) OVER (PARTITION BY RegistrationId) AS 'maxsequence'
FROM registration
WHERE OrderId = '68379449583' AND Year = '2008' AND Active = 'Yes'
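If the statement is actually being run against SQL Server 2000, aggregates with an OVER clause are not supported there, which is one common cause of this exact error. In that case a derived-table join produces the same maxsequence column; a sketch against the same query:

SELECT r.RegistrationId, r.OrderId, r.Sequence, r.Title, r.InformalFirstName,
       r.FirstName, r.MiddleName, r.Lastname, r.EntryDate,
       m.maxsequence
FROM registration AS r
JOIN (SELECT RegistrationId, MAX(Sequence) AS maxsequence
      FROM registration
      WHERE OrderId = '68379449583' AND Year = '2008' AND Active = 'Yes'
      GROUP BY RegistrationId) AS m
  ON m.RegistrationId = r.RegistrationId
WHERE r.OrderId = '68379449583' AND r.Year = '2008' AND r.Active = 'Yes'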
I have a situation where my SQL works everywhere else but my COBOL compiler complains wherever I use PARTITION BY. I can't find a workaround for that problem so I would like to remove all the PARTITION BYs. I'm not confident that I can do this accurately and would like some help getting started.
Here is my simplest example:
SELECT FESOR.REGION, FESOR.TYPE,
       COUNT(*) OVER (PARTITION BY FESOR.REGION, FESOR.TYPE)
FROM FESOR, FR
WHERE FESOR.phase = 'Ref'
  AND FESOR.assign IS NULL
  AND FESOR.comp_date IS NULL
  AND FESOR.region = FR.REGION
  AND FESOR.type = FR.TYPE
  AND FR.REP_ROW = 'A'
GROUP BY FESOR.REGION, FESOR.TYPE
What I'm looking for is a modified version of the SQL above which returns the same result set without using PARTITION BY.
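A sketch of one possible starting point: because the query already groups by FESOR.REGION and FESOR.TYPE, a plain COUNT(*) per group is the usual replacement, assuming the intent of the windowed count is "number of matching rows for each region/type":

SELECT FESOR.REGION,
       FESOR.TYPE,
       COUNT(*) AS row_count
FROM FESOR, FR
WHERE FESOR.phase = 'Ref'
  AND FESOR.assign IS NULL
  AND FESOR.comp_date IS NULL
  AND FESOR.region = FR.REGION
  AND FESOR.type = FR.TYPE
  AND FR.REP_ROW = 'A'
GROUP BY FESOR.REGION, FESOR.TYPE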
I have names in the database which I want to partition by last name - for example, last names starting with A, B, C, D should go to file group 1, and last names starting with E, F, G, H should go to file group 2.
I am trying to use the following function - but how do I specify in the function that last names starting with A, B, C, D should go to file group 1?
CREATE PARTITION FUNCTION myRangePF3 (char(20)) AS RANGE RIGHT FOR VALUES ('EX', 'RXE', 'XR');
Is there any way to modify partition function to accomplish this?
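One way to express "A-D in file group 1, E-H in file group 2, ...": with RANGE RIGHT, each boundary value is the first letter of the next group, so everything that sorts below 'E' (A-D) lands in partition 1, 'E' up to but not including 'I' in partition 2, and so on. The filegroup names are examples, and this assumes a case-insensitive collation:

CREATE PARTITION FUNCTION pf_lastname (char(20))
AS RANGE RIGHT FOR VALUES ('E', 'I', 'M', 'Q', 'U');

CREATE PARTITION SCHEME ps_lastname
AS PARTITION pf_lastname
TO (FileGroup1, FileGroup2, FileGroup3, FileGroup4, FileGroup5, FileGroup6);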
I am in the process of migrating my 2000 install over to SQL05 and onto a couple of new boxes. I have 2 Dell 1850's to set up mirroring on and wanted to know your opinion on the best partition setup. The 1850's are 2-disk machines, so it has to be a RAID 1 setup. I am just unsure of the benefit of partitioning the logical drive to separate the log files from the data files.
Should I partition the drive, a 300 GB SCSI, into 2 partitions and keep the logs on one partition and the data on another? Can anyone tell me if there is a benefit to doing this?
If there is a more practical solution, can you explain?
Most examples for SQL Server 2005 involve a sales table that you split based on date, i.e. sales records prior to 2000 go to this partition, and the ones after that go to another one. Nice and simple.
Say I have a sales table:
id  Amount  Date
1   10      1/1/1999
2   9.99    1/1/2007
Now then, I put all the records prior to 2000 in their own partition.
So when I do something like this: SELECT * FROM Sales WHERE [Date] = '1/1/1999', SQL Server will know which partition to look at. Very nice.
Now then, if I do this: SELECT * FROM Sales WHERE id = 1, how will SQL Server know which partition to look at?
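For what it's worth, a quick way to see which partition each row lives in is the $PARTITION function (the function name below is illustrative). A filter on [Date] lets the optimizer touch only the matching partition; a filter on id alone gives it no way to pick a partition, so the seek on id has to be checked in every partition:

SELECT id, Amount, [Date],
       $PARTITION.pf_SalesByDate([Date]) AS partition_number
FROM Sales;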
I have a table named stgBudgetFact that is partitioned on DivisionID.
Each DivisionID goes into its own partition, which is on its own file group.
The ETL guru on the project wants to be able to truncate the partition rather than delete from the table based on DivisionID.
Is it possible to truncate the partition somehow (remove rows where DivisionID = 3, for instance, without ALTER DATABASE, where the medicine is worse than the disease) and then re-establish the partition so we can restart a failed load by division?
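The usual pattern, sketched with hypothetical object names: switch the division's partition out to an empty staging table that has the same structure and indexes on the same filegroup, then truncate or drop the staging table. The switch itself is a metadata-only operation, so it behaves like an instant truncate of that one partition; $PARTITION on the partition function can be used to find the partition number for a given DivisionID:

ALTER TABLE dbo.stgBudgetFact
    SWITCH PARTITION 3 TO dbo.stgBudgetFact_staging;   -- 3 = partition holding DivisionID 3

TRUNCATE TABLE dbo.stgBudgetFact_staging;

-- reload division 3 and switch it back in (or insert directly) to restart the failed load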
Sorry if my post is misplaced. I have a table that contains a huge amount of data, so I made a partition function and partition scheme. Unfortunately only one column is allowed as the partition column, whereas my queries use a few columns. Can we make several partition functions that apply to one partition scheme? I found nothing in SQL BOL. Thanks in advance.
Does anyone know how much free space should be left available on a storage partition to allow for optimum performance? In fact, is there any performance benefit to allowing a certain amount of free space on a partition that is occupied by SQL data files?
How complex can the OVER (PARTITION BY ...) window functions be? In all the examples I see in BOL, the partition clause is the same for each window function. Can it be different? How different?
Here are some snippets of where I'd like to take this. Right now I'm using successive views to bring the results to a single row.
SUM(building_function_table.e_and_g_square_foot + building_function_table.auxillary_square_foot)
    OVER (PARTITION BY building_function_table.fice_code,
                       building_function_table.building_number) AS sqft

SUM(building_function_table.e_and_g_square_foot * function_table.square_foot_value)
    OVER (PARTITION BY building_function_table.fice_code,
                       building_function_table.building_number,
                       building_function_table.function_code) AS repl_value_e_g

SUM(building_needs_table.percentage * (building_needs_table.age / system_component_table.useful_life) * component_multiplier_table.multiplier)
    OVER (PARTITION BY building_needs_table.fice_code,
                       building_needs_table.building_number,
                       building_needs_table.system_code,
                       building_needs_table.system_component_code,
                       building_needs_table.function_code) AS maint_needs
And it gets even more complex, but these are all I've written because I don't know if it will work.
These would be all part of a giant overhead view of a building maintenance database. It's normalized and the respective tables above are all simple inner joins on the primary key of their parent.
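For what it's worth, a small sketch confirming the pattern: the PARTITION BY list can differ from one window function to the next within the same SELECT (table and column names here are placeholders):

SELECT fice_code,
       building_number,
       function_code,
       SUM(square_foot) OVER (PARTITION BY fice_code, building_number) AS sqft_per_building,
       SUM(square_foot) OVER (PARTITION BY fice_code, building_number, function_code) AS sqft_per_function
FROM dbo.building_function_table;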
1. HIS_HTTP_LOG is a partitioned table.
2. REL_HTTP_LOG is not a partitioned table; it has the same structure as HIS_HTTP_LOG.
3. When HIS_HTTP_LOG doesn't have any index, the following executes successfully:

ALTER PARTITION SCHEME PS_HIS_HTTP_LOG NEXT USED [FG_03]
ALTER PARTITION FUNCTION PF_HIS_HTTP_LOG() SPLIT RANGE ('20070331 23:59:59.997')
ALTER TABLE TMP_HTTP_LOG SWITCH TO HIS_HTTP_LOG PARTITION 3

4. However, when I added an index to HIS_HTTP_LOG and executed step 3, it raised an error:

a) CREATE INDEX IDX_HIS_HTTP_LOG_001 ON HIS_HTTP_LOG(USERID) ON PS_HIS_HTTP_LOG (STARTIME)
b) ALTER PARTITION SCHEME PS_HIS_HTTP_LOG NEXT USED [FG_03]
   ALTER PARTITION FUNCTION PF_HIS_HTTP_LOG() SPLIT RANGE ('20070331 23:59:59.997')
   ALTER TABLE TMP_HTTP_LOG SWITCH TO HIS_HTTP_LOG PARTITION 3

Error message: "ALTER TABLE SWITCH statement failed. There is no identical index in source table 'TMP_HTTP_LOG' for the index 'IDX_HIS_HTTP_LOG_001' in target table 'HIS_HTTP_LOG'."

When I added the index to REL_HTTP_LOG, it gave me the same error message. Could you tell me how I can solve the problem?
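For illustration, SWITCH requires the source and target tables to have matching indexes, so once IDX_HIS_HTTP_LOG_001 exists on HIS_HTTP_LOG the staging table needs an equivalent index before the switch can run. A sketch (the staging table's index simply lives on the staging table's own filegroup rather than the partition scheme):

CREATE INDEX IDX_TMP_HTTP_LOG_001 ON TMP_HTTP_LOG (USERID);

ALTER TABLE TMP_HTTP_LOG SWITCH TO HIS_HTTP_LOG PARTITION 3;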
I have the pagefile.sys on the same partition (C:) as the database files. I've been advised this is not a good idea, and I'm seeing paging. I'd like to move the swap file to another partition on the same drive. Is that a good idea, or should I move it to another physical disk? And is it OK to leave the OS partition (C:) without any swap file? Thanks!!
Hi! I'm installing a new SQL Server machine. During the NT Server installation, our NT support guy converted the only 2 GB FAT C: partition to NTFS. So as of right now all four of my 8 GB drives are NTFS. I think it would be better to keep this C: partition as FAT because, to my knowledge, having a FAT boot partition can help to boot the machine in case of an NT crash.
Is there anything that I'm really losing by this conversion to NTFS, or should I not be so worried about it? Does it put my SQL Server databases, database .dat files, or NT Server in a more dangerous situation in case of a crash? Or does it give me some advantages? Thanks, Ninel
Recently I've been working on a new project that would partition a large table into 2 smaller tables. I then create a view to union the 2 smaller tables (tables A and B). I've been getting a strange error when I try to update, insert, or delete a record through the view: "View needs partitioning column". I find this strange. Both of my tables have a clustered primary key consisting of 3 columns, and one of the 3 columns (a date field) has a check constraint. The constraint is used to determine which record goes into which table. Am I missing anything else? The really strange part is sometimes it works, and sometimes I get the error message.
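A sketch of the shape an updatable partitioned view generally needs (names are examples): the partitioning column is part of each member table's primary key, the CHECK constraints are trusted (created or re-enabled WITH CHECK), and inserts through the view supply the partitioning column explicitly:

CREATE TABLE dbo.TableA (
    KeyCol1 int      NOT NULL,
    KeyCol2 int      NOT NULL,
    DateCol datetime NOT NULL CHECK (DateCol <  '20080101'),
    CONSTRAINT PK_TableA PRIMARY KEY CLUSTERED (KeyCol1, KeyCol2, DateCol)
);
CREATE TABLE dbo.TableB (
    KeyCol1 int      NOT NULL,
    KeyCol2 int      NOT NULL,
    DateCol datetime NOT NULL CHECK (DateCol >= '20080101'),
    CONSTRAINT PK_TableB PRIMARY KEY CLUSTERED (KeyCol1, KeyCol2, DateCol)
);
GO
CREATE VIEW dbo.CombinedView
AS
SELECT KeyCol1, KeyCol2, DateCol FROM dbo.TableA
UNION ALL
SELECT KeyCol1, KeyCol2, DateCol FROM dbo.TableB;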