I'm trying to do some research on the use of SQL Server's distributed partitioned views (DPVs). I'm looking for feedback from people who've actually run them in production, to learn more about the design challenges and the level of added administration required. Any information will be much appreciated. Thanks.
Hi everyone, I have some doubts about distributed partitioned views. When we create a distributed partitioned view which includes three servers, do we have to create the same distributed partitioned view on each of the three servers so that every server can see, and especially modify, it?
Hi, I read the following article about "Federated Database Servers", but I am not getting what these "Federated Database Servers" actually are; maybe the language of the article is a little difficult for me. Can anyone please explain this to me? What are these servers, and when, why and where do we use them?
I am trying to understand federated servers. So far I have been able to link 2 servers, create a distributed view, and run a select and insert query against the view.
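For context, the shape of what I have working so far looks roughly like this. This is a stripped-down sketch, not my real schema: the server, database, table and column names are placeholders, and the remote member table is assumed to carry the complementary CHECK range.

Code Block
-- On each server: a member table whose CHECK constraint defines its slice of the key range
CREATE TABLE dbo.Customers_East (
    CustomerID   int NOT NULL CONSTRAINT CK_Customers_East CHECK (CustomerID BETWEEN 1 AND 50000),
    CustomerName varchar(100) NOT NULL,
    CONSTRAINT PK_Customers_East PRIMARY KEY (CustomerID)
)
GO
-- Linked server pointing at the second machine (login mapping omitted)
EXEC sp_addlinkedserver @server = 'SERVER2', @srvproduct = '', @provider = 'SQLNCLI', @datasrc = 'SERVER2'
GO
-- Distributed partitioned view over the local member table and the remote one
CREATE VIEW dbo.Customers_All AS
SELECT CustomerID, CustomerName FROM dbo.Customers_East
UNION ALL
SELECT CustomerID, CustomerName FROM SERVER2.SalesDB.dbo.Customers_West
GO
-- Reads prune on the CHECK ranges; writes route to the owning member (MSDTC gets involved for remote rows)
SELECT * FROM dbo.Customers_All WHERE CustomerID = 42
INSERT INTO dbo.Customers_All (CustomerID, CustomerName) VALUES (50001, 'Test')  -- assumed to land in Customers_West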
The next thing that I need to understand is how to deal with triggers and with identity columns (i.e. columns that auto-increment).
Our application uses triggers on tables in several cases to keep track of create and update dates. So there is a trigger that automatically updates the datetime field on a table whenever a row gets created or changed. This way the programmer doesn't need to remember to update those datetime values. Now I understand that in the distributed view architecture the triggers on the underlying tables may not work. Can anyone comment on this?
As well, I have a table which has identity enabled, so it is incrementing a counter and creating a unique integer key value for each new record. How does this work when we move to a distributed architecture? How do I ensure a unique key is generated in the distributed view architecture?
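To make the identity question concrete, the pattern that is often suggested (just a sketch with made-up names and ranges, not the only option) is to drop IDENTITY from the member tables altogether, since an updatable partitioned view does not allow identity columns, and instead give each server its own non-overlapping key range enforced by the CHECK constraint, so the members can never collide:

Code Block
-- On SERVER_A: this member may only ever hold keys 1 to 1,000,000
CREATE TABLE dbo.Orders_A (
    OrderID   int NOT NULL CONSTRAINT CK_Orders_A CHECK (OrderID BETWEEN 1 AND 1000000),
    OrderDate datetime NOT NULL,
    CONSTRAINT PK_Orders_A PRIMARY KEY (OrderID)
)
-- On SERVER_B: this member may only ever hold keys 1,000,001 to 2,000,000
CREATE TABLE dbo.Orders_B (
    OrderID   int NOT NULL CONSTRAINT CK_Orders_B CHECK (OrderID BETWEEN 1000001 AND 2000000),
    OrderDate datetime NOT NULL,
    CONSTRAINT PK_Orders_B PRIMARY KEY (OrderID)
)
-- Key values are handed out by the application (or a small key-allocation table per server)
-- within the owning server's range, so global uniqueness falls out of the CHECK constraints.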
I am setting up 3 linked servers (SERVER_A, SERVER_B and SERVER_C) in an isolated local network. They are all running SQL Server 2005 Developer Edition, all on XP SP2. On each server, I have a distributed partitioned view named WAREHOUSE_ALL that is basically the UNION ALL of the WAREHOUSE tables on all three servers.
I am having trouble running write (INSERT, UPDATE or DELETE) queries against the distributed partitioned view. The error returned (when run from SERVER_B) was:
OLE DB provider "SQLNCLI" for linked server "SERVER_A" returned message "No transaction is active.". Msg 7391, Level 16, State 2, Line 7The operation could not be performed because OLE DB provider "SQLNCLI" for linked server "SERVER_A" was unable to begin a distributed transaction.
However, executing a read (SELECT) query ran smoothly without error.
I have done all the steps required as described in the article at http://support.microsoft.com/?kbid=873160 . Note that the only difference between the situation in the article and ours is the provider (SQLOLEDB vs. SQLNCLI), which I guess should not matter. Unfortunately, the error still comes up.
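One thing that is easy to miss alongside the MSDTC settings in that article: data modifications through a linked server require XACT_ABORT to be ON for the session, otherwise the distributed transaction cannot be started. Worth ruling out before digging deeper into DTC. A quick test, run from SERVER_B (the column list for WAREHOUSE_ALL is just a placeholder):

Code Block
SET XACT_ABORT ON   -- required for data modification against a linked server inside a transaction
GO
BEGIN DISTRIBUTED TRANSACTION
    INSERT INTO dbo.WAREHOUSE_ALL (WarehouseID, ItemCode, Quantity)   -- placeholder columns
    VALUES (2, 'ABC-123', 10)
COMMIT TRANSACTION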
After reading heaps of other articles, I suspected that there is something wrong with MSDTC. As far as I know, all the settings for MSDTC were set accordingly. Then I ran DTCPing - http://www.microsoft.com/downloads/details.aspx?FamilyID=5e325025-4dcd-4658-a549-1d549ac17644&DisplayLang=en and the error returned was:

DTCping log file: C:\Documents and Settings\Administrator\Desktop\SERVER_B2496
RPC server is ready
Please Start Partner DTCping before pinging
++++++++++++Validating Remote Computer Name++++++++++++
Please refer to following log file for details:
C:\Documents and Settings\Administrator\Desktop\SERVER_B2496.log
Invoking RPC method on SERVER_C
Problem:fail to invoke remote RPC method
Error(0x5) at dtcping.cpp @303
-->RPC pinging exception
-->5(Access is denied.)
RPC test failed
And here is the log file:
Platform: Windows XP
IP Configure Information
Host Name . . . . . . . . . : SERVER_B
DNS Servers . . . . . . . . : 129.78.99.2
Node Type . . . . . . . . . :
NetBIOS Scope ID. . . . . . :
IP Routing Enabled. . . . . : no
WINS Proxy Enabled. . . . . : no
NetBIOS Resolution Uses DNS : no
++++++++++++++++++++++++++++++++++++++++++++++
DTCping 1.9 Report for SERVER_B
++++++++++++++++++++++++++++++++++++++++++++++
RPC server is ready
++++++++++++Validating Remote Computer Name++++++++++++
10-05, 17:10:54.769-->Start DTC connection test
Name Resolution: SERVER_C-->172.19.102.36-->SERVER_C
10-05, 17:11:09.781-->Start RPC test (SERVER_B-->SERVER_C)
Problem:fail to invoke remote RPC method
Error(0x5) at dtcping.cpp @303
-->RPC pinging exception
-->5(Access is denied.)
RPC test failed
I guess it could be due to a port problem, although I have already opened the relevant ports in the Windows Firewall. There is one article which is confusing me: "Update to automatically open port 135 in Windows Firewall when a TCP or a UDP RPC server registers with the endpoint mapper" at http://support.microsoft.com/kb/838191 (this article describes automatic opening of port 135!).
I've started to explore Distributed Partitioned Views in order to use them in my next project, and I've found the article "MS SQL Server Distributed Partitioned Views" by Don Schlichting.
I came across the following problem while running this sample:

USE test
GO
CREATE VIEW AllAuthors AS
SELECT * FROM AuthorsAM, TEST1.test.dbo.AuthorsNZ
GO
I got the error message:

Server: Msg 4506, Level 16, State 1, Procedure AllAuthors, Line 5
Column names in each view or function must be unique. Column name 'au_lname' in view or function 'AllAuthors' is specified more than once.
Could anyone please explain? Can't I use the same column names in both tables?
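For what it's worth, Msg 4506 here is not really about the two base tables sharing column names; it comes from a single SELECT * over two comma-joined tables, which puts both copies of au_lname into one result set with duplicate names. A partitioned view combines the member tables with UNION ALL, ideally with explicit column lists, roughly like the sketch below (AuthorsAM/AuthorsNZ are the names from the article; the column list is assumed from a pubs-style authors table):

Code Block
CREATE VIEW AllAuthors
AS
SELECT au_id, au_lname, au_fname   -- list the columns explicitly in each branch
FROM AuthorsAM
UNION ALL
SELECT au_id, au_lname, au_fname
FROM TEST1.test.dbo.AuthorsNZ
GO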
I would like to break up a very large table into about ten smaller ones. For the partitioning to be efficient, the columns in the check constraint need to be used when accessing the view. The problem is that the table has a composite primary key made up of LocationID/ProductID, with another composite index on ProductID/LocationID; it is accessed both ways from our applications.
I would like to partition the table by LocationID. But then, when it is queried by ProductID, all the tables in the view would have to be scanned.
In Oracle there is something called a global index that would solve this. Is there anything similar in SQL Server, or does anybody have a workaround?
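To make the trade-off concrete, here is a minimal sketch (table, column and range names are invented) of a view partitioned by LocationID: a query that filters on LocationID can be pruned to one member table, while a query that filters only on ProductID has to touch every member, which is exactly the scan problem described above.

Code Block
-- Member tables partitioned by LocationID ranges (assumed ranges)
CREATE TABLE dbo.Inventory_Loc1 (
    LocationID int NOT NULL CONSTRAINT CK_Inventory_Loc1 CHECK (LocationID BETWEEN 1 AND 100),
    ProductID  int NOT NULL,
    Quantity   int NOT NULL,
    CONSTRAINT PK_Inventory_Loc1 PRIMARY KEY (LocationID, ProductID)
)
CREATE TABLE dbo.Inventory_Loc2 (
    LocationID int NOT NULL CONSTRAINT CK_Inventory_Loc2 CHECK (LocationID BETWEEN 101 AND 200),
    ProductID  int NOT NULL,
    Quantity   int NOT NULL,
    CONSTRAINT PK_Inventory_Loc2 PRIMARY KEY (LocationID, ProductID)
)
GO
CREATE VIEW dbo.Inventory_All AS
SELECT LocationID, ProductID, Quantity FROM dbo.Inventory_Loc1
UNION ALL
SELECT LocationID, ProductID, Quantity FROM dbo.Inventory_Loc2
GO
-- Pruned to Inventory_Loc1 only:
SELECT * FROM dbo.Inventory_All WHERE LocationID = 42 AND ProductID = 7
-- No predicate on the partitioning column, so both member tables are accessed:
SELECT * FROM dbo.Inventory_All WHERE ProductID = 7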
Hello, I have a large set of data that I have set up as a partitioned view. The view is partitioned by a datetime column and the individual tables each represent one month's worth of data. I need to keep at least two years' worth of data at all times, but after two years I can archive the data. A sample of the code used is below. It is simplified for space reasons.

My question is, how do other people maintain the database in this type of scenario? I could create all of the tables necessary for the next year and then go through that at the end of each year (archive tables over two years, add new tables, and change the view), but I was also thinking that I might be able to write a stored procedure that runs once a month and does all three of those tasks automatically. It seems like a lot of dynamic SQL code though for something like that. Alternatively, I could write VB code to handle it in a DTS package.

So, my question again is, how are others doing it? Any suggestions? Thanks!
-Tom.

CREATE TABLE [dbo].[Station_Events_200401] (
[event_time] [datetime] NOT NULL ,
[another_column] [char] (8) NOT NULL )
GO
CREATE TABLE [dbo].[Station_Events_200402] (
[event_time] [datetime] NOT NULL ,
[another_column] [char] (8) NOT NULL )
GO
CREATE VIEW Station_Events
AS
SELECT event_time, another_column
FROM Station_Events_200401
UNION ALL
SELECT event_time, another_column
FROM Station_Events_200402
GO
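In case it helps anyone weighing the dynamic-SQL option, below is a rough sketch (table names follow the sample above, but the suffix format, retention rule and concatenation approach are all assumptions, not tested production code) of a monthly procedure that creates next month's member table and rebuilds the view over whatever Station_Events_YYYYMM tables currently exist; archiving and dropping tables older than 24 months would follow the same dynamic-SQL pattern.

Code Block
CREATE PROCEDURE dbo.Rotate_Station_Events
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @next char(6), @sql nvarchar(4000)

    -- Suffix for next month's table, e.g. '200403' (style 112 = yyyymmdd, truncated to yyyymm)
    SET @next = CONVERT(char(6), DATEADD(month, 1, GETDATE()), 112)

    -- 1. Create next month's member table if it does not exist yet
    IF OBJECT_ID('dbo.Station_Events_' + @next) IS NULL
    BEGIN
        SET @sql = 'CREATE TABLE dbo.Station_Events_' + @next +
                   ' (event_time datetime NOT NULL, another_column char(8) NOT NULL)'
        EXEC (@sql)
    END

    -- 2. Rebuild the view over every Station_Events_YYYYMM table that exists
    --    (variable concatenation like this is a common SQL 2000 idiom, though its
    --     ordering is not formally guaranteed)
    SET @sql = NULL
    SELECT @sql = ISNULL(@sql + ' UNION ALL ', '') +
                  'SELECT event_time, another_column FROM dbo.' + name
    FROM sysobjects
    WHERE type = 'U'
      AND name LIKE 'Station_Events_[0-9][0-9][0-9][0-9][0-9][0-9]'
    ORDER BY name

    SET @sql = 'ALTER VIEW dbo.Station_Events AS ' + @sql
    EXEC (@sql)

    -- 3. Archiving/dropping members older than 24 months would go here
END
GO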
Hi, I am using SQL 2000 Enterprise Edition. I have a partitioned view based on 8 tables. My selects and inserts are fine. But when I run a delete on the view based on a query on the partitioning column, I get a "Transaction (Process ID 149) was deadlocked and has been chosen as a victim" error. I looked at the query plan and it was showing a parallel query on all the underlying tables. So I put in OPTION (MAXDOP 1), using only one processor, and the delete worked fine.
Does anybody know why? Can parallel queries create deadlocks? Are there any known problems with deletes on partitioned views? Same question for updates; I think I have the same problem with updates.
CHECK constraints are not needed for the partitioned view to return the correct results. However, if the CHECK constraints have not been defined, the query optimizer must search all the tables instead of only those that cover the search condition on the partitioning column. Without the CHECK constraints, the view operates like any other view with UNION ALL. The query optimizer cannot make any assumptions about the values stored in different tables and it cannot skip searching the tables that participate in the view definition.
Then why am I getting index scans on my partitioning column on tables that fail the search value based on their check constraint?
Not looking for an answer because I know the query optimizer is a fickle b*tch and I did not post any code, but I needed to rant.
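One thing that commonly causes exactly this, so purely as a sanity check rather than an answer: if the CHECK constraints were created or re-enabled WITH NOCHECK, they are marked untrusted and the optimizer will not use them for partition elimination, even though they are enforced for new rows. On SQL 2005 or later this is quick to verify (object names are whatever your member tables are):

Code Block
-- Constraints the optimizer will not trust for partition elimination
SELECT OBJECT_NAME(parent_object_id) AS table_name,
       name                          AS constraint_name,
       is_not_trusted,
       is_disabled
FROM sys.check_constraints
WHERE is_not_trusted = 1 OR is_disabled = 1
-- If a member table shows up here, re-validate its constraint, e.g.:
-- ALTER TABLE dbo.MemberTable WITH CHECK CHECK CONSTRAINT CK_MemberTable_Range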
SQL 2008 R2 Standard on Windows 2008 R2 Enterprise. I have implemented a set of local partitioned views to facilitate multi-threading in one of our applications. Testing with T-SQL, the insert, update and delete all work fine. However, the inserts from SSIS are bulk inserts, and they fail. According to all I have read, I should be able to add an INSTEAD OF trigger for the insert and it should work fine. URL....
I have created INSTEAD OF triggers for insert and delete, and again they work fine when I test them with T-SQL but fail in the SSIS package. The error message is that the view is not updatable.
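For comparison, a minimal INSTEAD OF INSERT trigger on a two-member local partitioned view looks roughly like the sketch below (view, table and column names are made up). On the SSIS side, the usual caveat is that the bulk/fast-load path does not go through the view's normal updatability or trigger mechanism, so switching the OLE DB destination from fast load to regular inserts, or pointing it at the member tables directly, is often part of the fix as well.

Code Block
CREATE TRIGGER trg_Fact_All_Insert
ON dbo.Fact_All            -- the local partitioned view
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON

    -- Route each inserted row to the member table whose CHECK range it satisfies
    INSERT INTO dbo.Fact_Part1 (PartKey, Amount)
    SELECT PartKey, Amount FROM inserted WHERE PartKey BETWEEN 1 AND 1000

    INSERT INTO dbo.Fact_Part2 (PartKey, Amount)
    SELECT PartKey, Amount FROM inserted WHERE PartKey BETWEEN 1001 AND 2000
END
GO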
I have a partitioned view sitting over several tables and I'm slowly approaching the 256 number. Can anybody confirm whether there is such a limit on the maximum number of tables that a partitioned view can hold? If this is true, does anybody have any suggestions or ideas to work around this max limit? TIA!
/*
problem: Trying to get partitioned views to "prune" unneeded partitions from
select statements against the partitioned view. There are 5 partitioned
tables, each with a check constraint based on a range of the formula_id column.

Test: Run this script to create the 5 partitioned tables and the partitioned
view. Then run the explain plans on the select statements at the end of the
script and see that we can only prune if we give a seemingly superfluous
"is not null" criteria in addition to the formula_id.

Ideal: We want to only have to use the formula_id in the select statement to prune.
*/
/* note: you may get errors on the drops first time run */
drop table dbo.cs_working_e2
go
CREATE TABLE dbo.cs_working_e2 (
    formula_id int NOT NULL
        CONSTRAINT formula_id_e14 CHECK (formula_id between 1 and 1000),
    submission_id int NOT NULL,
    node_id int NOT NULL,
    reference_year smallint NOT NULL,
    observation_period datetime NOT NULL,
    authority_flag tinyint NOT NULL
        CONSTRAINT ONE_DEFAULT3436 DEFAULT 1
        CONSTRAINT Binary_flag_rule667 CHECK (authority_flag IN (0, 1)),
    interpolated_flag tinyint NOT NULL
        CONSTRAINT ZERO_DEFAULT6926 DEFAULT 0
        CONSTRAINT Binary_flag_rule668 CHECK (interpolated_flag IN (0, 1)),
    observation_value decimal_datatype NOT NULL,
    time_created smalldatetime NOT NULL
        CONSTRAINT CURRENT_DATE_DEFAULT1807 DEFAULT getdate(),
    CONSTRAINT XPKcs_working_e2 PRIMARY KEY NONCLUSTERED
        (formula_id, submission_id, node_id, reference_year, observation_period)
        --ON "INDEXES"
)
--ON "WORKING"
go
CREATE UNIQUE CLUSTERED INDEX XAK1cs_working_e2 ON dbo.cs_working_e2
    (submission_id ASC, formula_id ASC, node_id ASC, authority_flag ASC,
     observation_period ASC, reference_year ASC, observation_value ASC)
go
CREATE INDEX XIE1cs_working_e2 ON dbo.cs_working_e2
    (node_id ASC, authority_flag ASC, formula_id ASC,
     observation_period ASC, reference_year ASC, observation_value ASC)
    --ON "INDEXES"
go
drop table dbo.cs_working_indexes2
go
CREATE TABLE dbo.cs_working_indexes2 (
    formula_id int NOT NULL
        CONSTRAINT formula_id_indexes14 CHECK (formula_id between 7001 and 9000),
    submission_id int NOT NULL,
    node_id int NOT NULL,
    reference_year smallint NOT NULL,
    observation_period datetime NOT NULL,
    authority_flag tinyint NOT NULL
        CONSTRAINT ONE_DEFAULT3437 DEFAULT 1
        CONSTRAINT Binary_flag_rule669 CHECK (authority_flag IN (0, 1)),
    observation_value decimal_datatype NOT NULL,
    interpolated_flag tinyint NOT NULL
        CONSTRAINT ZERO_DEFAULT6927 DEFAULT 0
        CONSTRAINT Binary_flag_rule670 CHECK (interpolated_flag IN (0, 1)),
    time_created smalldatetime NOT NULL
        CONSTRAINT CURRENT_DATE_DEFAULT1808 DEFAULT getdate(),
    CONSTRAINT XPKcs_working_indexes2 PRIMARY KEY NONCLUSTERED
        (formula_id, submission_id, node_id, reference_year, observation_period)
        --ON "INDEXES"
)
--ON "WORKING"
go
CREATE UNIQUE CLUSTERED INDEX XAK1cs_working_indexes2 ON dbo.cs_working_indexes2
    (submission_id ASC, formula_id ASC, node_id ASC, authority_flag ASC,
     observation_period ASC, reference_year ASC, observation_value ASC)
go
CREATE INDEX XIE1cs_working_indexes2 ON dbo.cs_working_indexes2
    (node_id ASC, authority_flag ASC, formula_id ASC,
     observation_period ASC, reference_year ASC, observation_value ASC)
    --ON "INDEXES"
go
drop table dbo.cs_working_other2
go
CREATE TABLE dbo.cs_working_other2 (
    formula_id int NOT NULL
        CONSTRAINT formula_id_other14 CHECK (formula_id >= 9001),
    submission_id int NOT NULL,
    node_id int NOT NULL,
    reference_year smallint NOT NULL,
    observation_period datetime NOT NULL,
    authority_flag tinyint NOT NULL
        CONSTRAINT ONE_DEFAULT3438 DEFAULT 1
        CONSTRAINT Binary_flag_rule671 CHECK (authority_flag IN (0, 1)),
    observation_value decimal_datatype NOT NULL,
    interpolated_flag tinyint NOT NULL
        CONSTRAINT ZERO_DEFAULT6928 DEFAULT 0
        CONSTRAINT Binary_flag_rule672 CHECK (interpolated_flag IN (0, 1)),
    time_created smalldatetime NOT NULL
        CONSTRAINT CURRENT_DATE_DEFAULT1809 DEFAULT getdate(),
    CONSTRAINT XPKcs_working_other2 PRIMARY KEY NONCLUSTERED
        (formula_id, submission_id, node_id, reference_year, observation_period)
        --ON "INDEXES"
)
--ON "WORKING"
go
CREATE UNIQUE CLUSTERED INDEX XAK1cs_working_other2 ON dbo.cs_working_other2
    (submission_id ASC, formula_id ASC, node_id ASC, authority_flag ASC,
     observation_period ASC, reference_year ASC, observation_value ASC)
go
CREATE INDEX XIE1cs_working_other2 ON dbo.cs_working_other2
    (node_id ASC, authority_flag ASC, formula_id ASC,
     observation_period ASC, reference_year ASC, observation_value ASC)
    --ON "INDEXES"
go
drop table dbo.cs_working_p1q12
go
CREATE TABLE dbo.cs_working_p1q12 (
    formula_id int NOT NULL
        CONSTRAINT formula_id_p1q114 CHECK (formula_id between 3001 and 7000),
    submission_id int NOT NULL,
    node_id int NOT NULL,
    reference_year smallint NOT NULL,
    observation_period datetime NOT NULL,
    authority_flag tinyint NOT NULL
        CONSTRAINT ONE_DEFAULT3439 DEFAULT 1
        CONSTRAINT Binary_flag_rule673 CHECK (authority_flag IN (0, 1)),
    interpolated_flag tinyint NOT NULL
        CONSTRAINT ZERO_DEFAULT6929 DEFAULT 0
        CONSTRAINT Binary_flag_rule674 CHECK (interpolated_flag IN (0, 1)),
    observation_value decimal_datatype NOT NULL,
    time_created smalldatetime NOT NULL
        CONSTRAINT CURRENT_DATE_DEFAULT1810 DEFAULT getdate(),
    CONSTRAINT XPKcs_working_p1q12 PRIMARY KEY NONCLUSTERED
        (formula_id, submission_id, node_id, reference_year, observation_period)
        --ON "INDEXES"
)
--ON "WORKING"
go
CREATE UNIQUE CLUSTERED INDEX XAK1cs_working_p1q12 ON dbo.cs_working_p1q12
    (submission_id ASC, formula_id ASC, node_id ASC, authority_flag ASC,
     observation_period ASC, reference_year ASC, observation_value ASC)
go
CREATE INDEX XIE1cs_working_p1q12 ON dbo.cs_working_p1q12
    (node_id ASC, authority_flag ASC, formula_id ASC,
     observation_period ASC, reference_year ASC, observation_value ASC)
    --ON "INDEXES"
go
drop table dbo.cs_working_pq2
go
CREATE TABLE dbo.cs_working_pq2 (
    formula_id int NOT NULL
        CONSTRAINT formula_id_pq14 CHECK (formula_id between 1001 and 3000),
    submission_id int NOT NULL,
    node_id int NOT NULL,
    reference_year smallint NOT NULL,
    observation_period datetime NOT NULL,
    authority_flag tinyint NOT NULL
        CONSTRAINT ONE_DEFAULT3440 DEFAULT 1
        CONSTRAINT Binary_flag_rule675 CHECK (authority_flag IN (0, 1)),
    interpolated_flag tinyint NOT NULL
        CONSTRAINT ZERO_DEFAULT6930 DEFAULT 0
        CONSTRAINT Binary_flag_rule676 CHECK (interpolated_flag IN (0, 1)),
    observation_value decimal_datatype NOT NULL,
    time_created smalldatetime NOT NULL
        CONSTRAINT CURRENT_DATE_DEFAULT1811 DEFAULT getdate(),
    CONSTRAINT XPKcs_working_pq2 PRIMARY KEY NONCLUSTERED
        (formula_id, submission_id, node_id, reference_year, observation_period)
        --ON "INDEXES"
)
--ON "WORKING"
go
CREATE UNIQUE CLUSTERED INDEX XAK1cs_working_pq2 ON dbo.cs_working_pq2
    (submission_id ASC, formula_id ASC, node_id ASC, authority_flag ASC,
     observation_period ASC, reference_year ASC, observation_value ASC)
go
CREATE INDEX XIE1cs_working_pq2 ON dbo.cs_working_pq2
    (node_id ASC, authority_flag ASC, formula_id ASC,
     observation_period ASC, reference_year ASC, observation_value ASC)
    --ON "INDEXES"
go
----- create view ---------
drop view cs_working2
go
CREATE VIEW cs_working2 (submission_id, node_id, reference_year, observation_period,
    formula_id, observation_value, interpolated_flag, time_created, authority_flag) AS
SELECT we.submission_id, we.node_id, we.reference_year, we.observation_period,
       we.formula_id, we.observation_value, we.interpolated_flag, we.time_created, we.authority_flag
FROM cs_working_e2 we
union all
SELECT wo.submission_id, wo.node_id, wo.reference_year, wo.observation_period,
       wo.formula_id, wo.observation_value, wo.interpolated_flag, wo.time_created, wo.authority_flag
FROM cs_working_other2 wo
union all
SELECT wpq.submission_id, wpq.node_id, wpq.reference_year, wpq.observation_period,
       wpq.formula_id, wpq.observation_value, wpq.interpolated_flag, wpq.time_created, wpq.authority_flag
FROM cs_working_pq2 wpq
union all
SELECT wp1q1.submission_id, wp1q1.node_id, wp1q1.reference_year, wp1q1.observation_period,
       wp1q1.formula_id, wp1q1.observation_value, wp1q1.interpolated_flag, wp1q1.time_created, wp1q1.authority_flag
FROM cs_working_p1q12 wp1q1
union all
SELECT wi.submission_id, wi.node_id, wi.reference_year, wi.observation_period,
       wi.formula_id, wi.observation_value, wi.interpolated_flag, wi.time_created, wi.authority_flag
FROM cs_working_indexes2 wi
go
--- sample selects against partitioned view -----
/*
--run explain plan here and see all 5 partitions being pulled
select * from cs_working

--run explain plan here and see just the 1 partition
select * from cs_working_e2

--run explain plan and see this is not pruning to the needed partition
select * from cs_working
where formula_id = 1

--run explain plan and see it is now pruning to the needed partition
select * from cs_working
where formula_id = 1
and submission_id is not null

--run explain plan and see it is now pruning to the needed partition, too
select * from cs_working
where formula_id = 1
and observation_value is not null
*/
We are on SQL 2014 Standard edition. I have a situation wherein I plan to partition a table; since table partitioning is not supported in the Standard edition, I thought about partitioned views. However, now I am stuck because I can't make the view writable due to the identity column in the base table.
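One workaround that is sometimes used on SQL 2012 and later (a sketch with assumed names, not a drop-in fix): replace the IDENTITY property on the member tables with a shared SEQUENCE. The partitioned-view updatability rules disallow IDENTITY columns, but a plain int column fed from NEXT VALUE FOR keeps the keys unique across members; note that inserts through the view still have to supply every column explicitly, so the insert passes the sequence value itself.

Code Block
CREATE SEQUENCE dbo.OrderID_Seq AS int START WITH 1 INCREMENT BY 1
GO
-- Member tables partitioned on ranges of the (former identity) key
CREATE TABLE dbo.Orders_1 (
    OrderID   int  NOT NULL CONSTRAINT CK_Orders_1 CHECK (OrderID BETWEEN 1 AND 999999),
    OrderDate date NOT NULL,
    CONSTRAINT PK_Orders_1 PRIMARY KEY (OrderID)
)
CREATE TABLE dbo.Orders_2 (
    OrderID   int  NOT NULL CONSTRAINT CK_Orders_2 CHECK (OrderID BETWEEN 1000000 AND 1999999),
    OrderDate date NOT NULL,
    CONSTRAINT PK_Orders_2 PRIMARY KEY (OrderID)
)
GO
CREATE VIEW dbo.Orders_All AS
SELECT OrderID, OrderDate FROM dbo.Orders_1
UNION ALL
SELECT OrderID, OrderDate FROM dbo.Orders_2
GO
-- The key comes from the sequence rather than IDENTITY, so the view stays writable
INSERT INTO dbo.Orders_All (OrderID, OrderDate)
VALUES (NEXT VALUE FOR dbo.OrderID_Seq, '20240101')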
My database's design is set out here. In summary, I'm trying to model a Stock Exchange for a Technical Analysis application written using Visual C++. In order to create the hierarchy I'm using a Nested Set Model. I'm now trying to write code to add and delete equities (or, more generically, nodes) to the database using a form presented to the user in my application. I have example SQL code to create the necessary add and delete procedures that calculate the changes to the values in the lft and rgt columns, but these examples focus around a single table, whereas my design aggregates rows from multiple tables using UNION ALL:
Code Snippet
CREATE VIEW vw_NSM_DBHierarchy -- Nested Set Model Database Hierarchy
AS
SELECT clmStockExchange, clmLeft, clmRight FROM tblStockExchange_
UNION ALL
SELECT clmMarkets, clmLeft, clmRight FROM tblMarkets_
UNION ALL
SELECT clmSectors, clmLeft, clmRight FROM tblSectors_
UNION ALL
SELECT clmEPIC, clmLeft, clmRight FROM tblEquities_
Essentially, I'm trying to create an updatable view, but I receive the error "UNION ALL View is not updatable because a partitioning column was not found". I suspect that my design is wrong or lacking, and this problem is highlighting the design flaws, so any suggestions would be greatly appreciated.
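In case it helps to see what the error is asking for: a UNION ALL view is only updatable as a partitioned view when every branch exposes a partitioning column covered by a CHECK constraint that makes the member tables mutually exclusive (and that column also has to be part of each table's primary key if you want to avoid INSTEAD OF triggers). One way to graft that onto the design above is a node-type discriminator column, sketched here with assumed constraint names and with the per-table name columns aliased to a common clmNode:

Code Block
-- 1. Give each member table a discriminator fixed to a single value
ALTER TABLE tblStockExchange_ ADD clmNodeType int NOT NULL
    CONSTRAINT DF_StockExchange_NodeType DEFAULT 1
    CONSTRAINT CK_StockExchange_NodeType CHECK (clmNodeType = 1)
GO
ALTER TABLE tblMarkets_ ADD clmNodeType int NOT NULL
    CONSTRAINT DF_Markets_NodeType DEFAULT 2
    CONSTRAINT CK_Markets_NodeType CHECK (clmNodeType = 2)
GO
ALTER TABLE tblSectors_ ADD clmNodeType int NOT NULL
    CONSTRAINT DF_Sectors_NodeType DEFAULT 3
    CONSTRAINT CK_Sectors_NodeType CHECK (clmNodeType = 3)
GO
ALTER TABLE tblEquities_ ADD clmNodeType int NOT NULL
    CONSTRAINT DF_Equities_NodeType DEFAULT 4
    CONSTRAINT CK_Equities_NodeType CHECK (clmNodeType = 4)
GO
-- 2. Expose the discriminator in every branch of the view
ALTER VIEW vw_NSM_DBHierarchy -- Nested Set Model Database Hierarchy
AS
SELECT clmNodeType, clmStockExchange AS clmNode, clmLeft, clmRight FROM tblStockExchange_
UNION ALL
SELECT clmNodeType, clmMarkets AS clmNode, clmLeft, clmRight FROM tblMarkets_
UNION ALL
SELECT clmNodeType, clmSectors AS clmNode, clmLeft, clmRight FROM tblSectors_
UNION ALL
SELECT clmNodeType, clmEPIC AS clmNode, clmLeft, clmRight FROM tblEquities_
GO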
I've been using partitioned views in the past and used the check constraint on the source tables to make sure that only the table matching the condition in the WHERE clause on the view was used. In SQL Server 2012 this was working just fine (I had to do some tricks to suppress parameter sniffing, but it was working correctly after doing that). Now I've installed SQL Server 2014 Developer and used exactly the same logic, and in the actual query plan it is still using the other tables. I've tried the following things to avoid this:
- OPTION (RECOMPILE)
- Using dynamic SQL to pass the parameter value as a static string to avoid sniffing.
To explain what I'm doing:
1. I have 3 servers with the same source tables; the only difference in the tables is one column with the server name.
2. I've created a CHECK CONSTRAINT on the server name column on each server.
3. On one of the three servers (in my case server 3) I've set up linked server connections to Server 1 and 2.
4. On Server 3 I've created a partitioned view that is built up like this:
SELECT * FROM [server1].[database].[dbo].[table]
UNION ALL
SELECT * FROM [server2].[database].[dbo].[table]
UNION ALL
SELECT * FROM [server3].[database].[dbo].[table]

5. To query the partitioned view I use a query like this:
SELECT * FROM [database].[dbo].[partioned_view_name] WHERE [server_name] = 'Server2'
Now when I look at the execution plan on the 2014 environment it is still using all the servers instead of just Server2 like it should be. The strange thing is that SQL 2008 and 2012 are working just fine but 2014 seems not to use the correct plan.
I have some partitioned views, and on all queries that use a table for the IN clause, table elimination isn't happening.
The CHECK constraint is on the oid column.
This works as expected, only goes to 2 tables:
SELECT * FROM view_oap_all WHERE oid IN ( '05231416529481', '06201479586431' )
This works as expected, only goes to 2 tables:
SELECT * FROM view_oap_all WHERE oid IN ( SELECT oid FROM owners WHERE oid IN ( '05231416529481', '06201479586431' ) )
The following is checking all tables (heading names are unique); I've tried this for the last 3 hours on many different tables containing the oid column.
Unless I write the oid values literally, as in the above queries, it just doesn't work:
SELECT * FROM view_oap_all WHERE oid IN ( SELECT oid FROM owners WHERE headingname = 'TestSystem' )
I have a few databases that are using partitioned views in order to manage the table sizes, and they all work well for our purposes. Recently I noticed a table that had grown to 400+ million rows and wanted to partition it as well, so I went about creating new base tables based on the initial table's structure, just adding a column to both the table and the primary key to be able to build a partitioned view on them. The first time around, on a test system, everything worked flawlessly, but when I put the same structure in place on the production system I get the dreaded "UNION ALL view 'DBName.dbo.RptReportData' is not updatable because the primary key of table '[DBName].[dbo].[RptReportData_201405]' is not included in the union result. [SQLSTATE 42000] (Error 4444)" error.
I have searched high and low and everything I see points to a few directives in order for a UNION ALL view to be updatable:
- Need a partitioning column that is part of the primary key
- Need a CHECK constraint that makes the base tables exclusive, i.e. data cannot belong to more than one table
- Cannot have IDENTITY or computed columns in the base tables
- The INSERT statement needs to specify all columns with actual values, i.e. not DEFAULT
As far as I can tell, my structure fulfills these conditions, but the INSERT fails anyway. The CREATE scripts below were scripted from SQL Server; I only modified them to be on a single row - it is easier to verify that they are identical in a text editor that way.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
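Since the rest of the scripts did not survive the post, here is just a point of comparison: error 4444 specifically means the view's SELECT list does not carry every primary key column of the named member table. A minimal pattern that satisfies it (names are invented; the added partitioning column sits in both the CHECK constraint and the primary key, and every branch of the view selects the full column list) looks like this:

Code Block
CREATE TABLE dbo.RptData_201405 (
    PeriodID int   NOT NULL CONSTRAINT CK_RptData_201405 CHECK (PeriodID = 201405),
    RowID    int   NOT NULL,
    Amount   money NOT NULL,
    CONSTRAINT PK_RptData_201405 PRIMARY KEY (PeriodID, RowID)
)
CREATE TABLE dbo.RptData_201406 (
    PeriodID int   NOT NULL CONSTRAINT CK_RptData_201406 CHECK (PeriodID = 201406),
    RowID    int   NOT NULL,
    Amount   money NOT NULL,
    CONSTRAINT PK_RptData_201406 PRIMARY KEY (PeriodID, RowID)
)
GO
CREATE VIEW dbo.RptData_All AS
SELECT PeriodID, RowID, Amount FROM dbo.RptData_201405   -- both PK columns appear in every branch
UNION ALL
SELECT PeriodID, RowID, Amount FROM dbo.RptData_201406
GO
-- Inserts must name all columns and supply real values (no DEFAULT keyword)
INSERT INTO dbo.RptData_All (PeriodID, RowID, Amount) VALUES (201405, 1, 10.00)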
I have two Access databases connected as linked servers through an ODBC driver.
I want to run a distributed query.
The SQL is as follows:
SELECT Alias1.FieldNames, Alias2.FieldNames
FROM LinkedServer1.DatabaseName1.dbo.Tablename1 AS Alias1,
     LinkedServer2.DatabaseName2.dbo.Tablename2 AS Alias2
It gives me this error message.
ODBC: Msg 0, Level 19, State 1
SqlDumpExceptionHandler: Process 12 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.
I have two SQL servers, SERVERA and SERVERB. They are on two different VLANs within our corporate network; they are physically only about 5 feet apart. SERVERA is SQL 2005 while SERVERB is SQL 2000. I am running a distributed transaction as a result of an insert on SERVERA, which then causes a stored procedure on SERVERA to insert a record into a table on SERVERB. I know my syntax is correct, because I can get the process to work between two servers (one SQL 2000 and the other SQL 2005) on the same VLAN. But when I run it in the problem environment, I get the following error message:
SQL Execution Error
Executed SQL Statement: INSERT INTO Tablea(recordid, recordtext) values(3, 'Testagain')
Error source: .Net SQLClient Data Provider
Error Message: The operation could not be performed because OLE DB provider "SQLNCLI" for linked server "SERVERB" was unable to begin a distributed transaction.
I have checked to make sure that MSDTC is running and it is on both servers.
Any help would be greatly appreciated. Thanks! - Eric-
And it involved my favourite Design Technique - Simplicity
So to achieve Simple Replication across geographically disparate Servers we could use:-
- 300 plus SQL2000 Servers enabling 1000 Concurrent Active Clients (10% actual light Activity)
- WAN = National Private Secured (ping 300ms max) (128kbps leased line Min)
- One Interface Server constantly Running A Simple Dynamically Built Partitioned View (removing down servers)
- One Stored Proc (or more) that Synchronizes the DPV Updateable Partitioned View with an Interface Mirror Table, DeNormalized, Holding a Physical Copy of each of the Subscribers/Publishers/Client SQL Servers' Data
The Question is:-
Was my Dream a Nightmare OR A Dream Come True ?
I know it's down to the Network Quality to a great degree (that's the suck-it-and-see part of the question), but as a form of replication it seems a very simple platform that could possibly tackle our friend the DCP (Data Consistency Problem) with a Client Update DateTime Column & frequent activation (30 secs).
Has anyone had much experience with this type of Scaling out over a WAN ?
I have 4 servers, 2 for each application environment (Dev & Prod).
DEV 1 & DEV 2 are standalone serversĀ
Prod 1 & Prod 2 are Windows Clustered Servers.
From one application to the other we do distributed transactions. Dev 1 - Dev 2 or Dev 2 - Dev 1 can start DTC and work fine, but the issue comes with Prod 1 - Prod 2 or Prod 2 - Prod 1. I get the error message: OLE DB provider "SQLNCLI" for linked server "xyz" returned message "No transaction is active.".
Msg 7391, Level 16, State 2, Line 3
The operation could not be performed because OLE DB provider "SQLNCLI" for linked server "xyz" was unable to begin a distributed transaction.
I have tested Dev 1 - Prod 1, Dev 1 - Prod 2, Dev 2 - Prod 1 and Dev 2 - Prod 2; everything is working fine. Only the production servers are causing the issue.
I enabled all the settings needed for DTC on the clustered MSDTC service, but no luck.
I am trying to write some admin-only procedures which will collect information onto one of my development servers from other production and development servers.
I have created linked servers on the development server to access these other servers. This development server is SQL Server 2000 EE. The other servers which I want to access are 2000 and 2005 (various editions).
E.g. I have another development server called PRODTEST, which is SQL Server 2005, and on the development server I have created a linked server pointing to PRODTEST called TESTLINKSRV. I want to access the new object catalog view (as I do not want to use sysobjects).
When I run the following query
SELECT * FROM [TESTLINKSRV].[DBNAME].[sys].[objects]
I get the following error:
OLE DB error trace [Non-interface error: OLE DB provider does not contain the table: ProviderName=' TESTLINKSRV ', TableName='" DBNAME "."sys"."objects"'].
Msg 7314, Level 16, State 1, Line 1
OLE DB provider ' TESTLINKSRV ' does not contain table '"DBNAME"."sys"."objects"'. The table either does not exist or the current user does not have permissions on that table.
So I try this query
SELECT * FROM [TESTLINKSRV].[DBNAME].[sys.objects]
and I get the following error:
Msg 208, Level 16, State 1, Line 1
Invalid object name 'TESTLINKSRV.DBNAME.sys.objects'.
So bottom line is how do I access catalog views on a 2005 server from a 2000 server using linked server?
I hope someone understands what I am trying to achieve. Please let me know what is it that I am doing wrong. Thank you
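One workaround that usually gets around the four-part-name problem when the linked server is a newer version: let the remote 2005 instance parse the query itself with OPENQUERY, so the 2000 box never has to resolve sys.objects locally. A sketch using the names from the post (swap DBNAME for the real database):

Code Block
-- The text inside OPENQUERY is executed on TESTLINKSRV (the SQL 2005 side)
SELECT *
FROM OPENQUERY(TESTLINKSRV, 'SELECT name, object_id, type_desc FROM DBNAME.sys.objects')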
I want to find a way to get partition info for all the tables in all the databases on a server, showing: database name, table name, schema name, partition style (maybe year, month, day, number, alpha), the column used for partitioning, the current active partition, and the last partition (for date partitions I want to know if the partitioning goes until 2007, so I can add 2008).
All I've come up with so far is:
Code Block
SELECT DISTINCT o.name
FROM sys.partitions p
INNER JOIN sys.objects o ON (o.object_id = p.object_id)
WHERE o.type_desc = 'USER_TABLE'
  AND p.partition_number > 1
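As a broader starting point, something along these lines pulls the partition scheme, function, partitioning column and the last boundary value together per database, which covers the "does it go until 2007" part; this is only a sketch (run it in each database, and the notions of "partition style" and "current active partition" would still need your own rules layered on top):

Code Block
-- One row per partitioned table, with its scheme, function, partitioning column
-- and the highest boundary value currently defined
SELECT DB_NAME()  AS database_name,
       s.name     AS schema_name,
       t.name     AS table_name,
       ps.name    AS partition_scheme,
       pf.name    AS partition_function,
       c.name     AS partitioning_column,
       pf.fanout  AS partition_count,
       (SELECT TOP (1) CONVERT(nvarchar(40), prv.value)
        FROM sys.partition_range_values prv
        WHERE prv.function_id = pf.function_id
        ORDER BY prv.boundary_id DESC) AS last_boundary_value
FROM sys.tables t
JOIN sys.schemas s              ON s.schema_id = t.schema_id
JOIN sys.indexes i              ON i.object_id = t.object_id AND i.index_id IN (0, 1)
JOIN sys.partition_schemes ps   ON ps.data_space_id = i.data_space_id
JOIN sys.partition_functions pf ON pf.function_id = ps.function_id
JOIN sys.index_columns ic       ON ic.object_id = i.object_id
                               AND ic.index_id = i.index_id
                               AND ic.partition_ordinal = 1
JOIN sys.columns c              ON c.object_id = ic.object_id AND c.column_id = ic.column_id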
Hi, I have a problem, maybe someone can help me. I'm trying to create a view over a linked server.

This query works great:

select id, descr
from SERVER.DB.dbo.TABLE

When I try to create the view:

CREATE VIEW dbo.View1 WITH SCHEMABINDING
AS
select id, descr
from SERVER.DB.dbo.TABLE
GO

I get this error:

Server: Msg 4512, Level 16, State 3, Procedure Pais2, Line 3
Cannot schema bind view 'dbo.View1' because name 'SERVER.DB.dbo.TABLE' is invalid for schema binding. Names must be in two-part format and an object cannot reference itself.

So I try this:

CREATE VIEW dbo.View1 WITH SCHEMABINDING
AS
select SERVER.DB.dbo.TABLE.id, SERVER.DB.dbo.TABLE.descr
from SERVER.DB.dbo.TABLE
GO

I get this error:

Server: Msg 117, Level 15, State 2, Procedure Pais2, Line 3
The number name 'SERVER.DB.dbo.TABLE' contains more than the maximum number of prefixes. The maximum is 3.

Then I try this:

CREATE VIEW dbo.View1 WITH SCHEMABINDING
AS
select a.id, a.descr
from SERVER.DB.dbo.TABLE as A
GO

I get this error:

Server: Msg 4512, Level 16, State 3, Procedure View1, Line 3
Cannot schema bind view 'dbo.View1' because name 'iservsql1.osderrhh.dbo.pais' is invalid for schema binding. Names must be in two-part format and an object cannot reference itself.

This query alone works great:

select a.id, a.descr
from SERVER.DB.dbo.TABLE as A

The names aren't what I describe here (id is not valid without []). Any ideas? I don't know what else I can do. I need help! Thanks a lot!
Hello, I have a production database that I need to refresh to our test environment daily. The database size is 700 MB. I do not need to transfer the stored procedures, triggers, users and logins. Would a DTS package that runs every night be the best and easiest solution to implement, or should I look into log shipping or snapshot replication?
Hello, I have an assignment: write a stored procedure with a parameter @location; this variable will in turn choose a stored procedure from a different server based on its value. I never heard of federated DBs before, or federated stored procedures. Can anyone help with this problem or give me clues or details? I am using SQL 2000, and I have access to multiple DBs.
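If "federated" here just means "spread across linked servers", the usual shape of such a dispatcher is an IF/ELSE over @location that calls the remote procedure by its four-part name. A sketch (linked server, database and procedure names are invented, and RPC Out has to be enabled on each linked server for remote EXEC to work):

Code Block
CREATE PROCEDURE dbo.usp_RunByLocation
    @location varchar(20)
AS
BEGIN
    -- Each branch calls a stored procedure on a different linked server
    IF @location = 'NORTH'
        EXEC NORTH_SRV.SalesDB.dbo.usp_GetSales
    ELSE IF @location = 'SOUTH'
        EXEC SOUTH_SRV.SalesDB.dbo.usp_GetSales
    ELSE
        RAISERROR ('Unknown location: %s', 16, 1, @location)
END
GO

-- Example call:
EXEC dbo.usp_RunByLocation @location = 'NORTH'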
I am trying to implement linked servers and I am having some trouble. The setup is as follows: I have two machines with the same SA login/pwd. BOTH machines have a database called: Federated_Bridge.
On machine 1: Federated_Bridge contains a table Fed_Entity_1
On machine 2: Federated_Bridge contains a table Fed_Entity_2
Machine 1, when I look it up in SQL Enterprise Manager it says: (local)
Machine 2, when I look it up in SQL Enterprise Manager it says: REALIBM2
When I expand those, I can see the databases folder and I can then see Federated_Bridge respectively.
On machine 1 (local) I am trying to create a linked server. I use the following SP call:
---------
use master
go
EXEC sp_addlinkedserver
    @server = 'TEST3',
    @srvproduct = 'SQLServer OLEDB Provider',
    @provider = 'SQLOLEDB',
    @datasrc = 'REALIBM2Federated_Bridge'
The above statements seem to execute fine. Afterwards, when I check in Enterprise Manager, I can see TEST3 under Linked Servers, but no tables or views are showing.
When I run the following select query from query analyzer, I get an error message:
SELECT * FROM Test3.Federated_Bridge.dbo.Fed_Entity_2
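For comparison against a known-good pattern: with the SQLOLEDB provider, @datasrc is normally just the remote server (or server\instance) name and the database goes in @catalog, with the login mapping added separately. A sketch using the names from the post (credentials are placeholders):

Code Block
USE master
GO
EXEC sp_addlinkedserver
    @server     = 'TEST3',
    @srvproduct = '',
    @provider   = 'SQLOLEDB',
    @datasrc    = 'REALIBM2',          -- remote server name only
    @catalog    = 'Federated_Bridge'   -- default database on the remote side
GO
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = 'TEST3',
    @useself     = 'false',
    @locallogin  = NULL,
    @rmtuser     = 'sa',
    @rmtpassword = '<password>'
GO
-- Four-part name: linkedserver.database.owner.table
SELECT * FROM TEST3.Federated_Bridge.dbo.Fed_Entity_2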