We have a nested set L and R design in our database.
The design allows multiple instances of nodes in the hierarchy. Each node has the combination of node name and instance id as its primary key. We also maintain a unique_qty column that holds the number of unique nodes below a particular node. This count ignores the multiple instances of nodes below a node and counts only the distinct node names (ignoring their instance ids).
The problem I'm facing is: how do I update unique_qty when I perform a move in the tree?
UPDATE Hierarchy
SET unique_qty = ( SELECT COUNT(DISTINCT node_name)
                   FROM Hierarchy AS H2
                   WHERE H2.L > Hierarchy.L
                     AND H2.R < Hierarchy.R )
I am using the above query to populate unique_qty when I initially load the table.
My question: when I move a subtree within the hierarchy, I need to update unique_qty for the parents on both the source side and the destination side of the move (the unique_qty for each node inside the moved subtree stays the same).
I have two ideas on how to update the parent nodes of the subtree:
1) For each node on the path to the root, starting from the parent nodes (both the source side and the destination side), recalculate unique_qty from scratch (see the sketch after this list).
2) For each node in the moved subtree, work out up to what level on the path to the root the counts can actually change, and then update only those unique_qty values.
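Here is a rough sketch of option 1 against the same Hierarchy table, assuming the L/R renumbering for the move has already been done. The variable names and the instance_id column name are placeholders; the old and new parent of the moved subtree, and all of their ancestors, get unique_qty recalculated from scratch.

DECLARE @old_parent_name VARCHAR(50), @old_parent_instance INT,
        @new_parent_name VARCHAR(50), @new_parent_instance INT;
-- (set these to identify the old and new parent of the moved subtree)

DECLARE @oldL INT, @oldR INT, @newL INT, @newR INT;

SELECT @oldL = L, @oldR = R
FROM Hierarchy
WHERE node_name = @old_parent_name AND instance_id = @old_parent_instance;

SELECT @newL = L, @newR = R
FROM Hierarchy
WHERE node_name = @new_parent_name AND instance_id = @new_parent_instance;

UPDATE Hierarchy
SET unique_qty = ( SELECT COUNT(DISTINCT H2.node_name)
                   FROM Hierarchy AS H2
                   WHERE H2.L > Hierarchy.L
                     AND H2.R < Hierarchy.R )
WHERE ( Hierarchy.L <= @oldL AND Hierarchy.R >= @oldR )   -- old parent and its ancestors
   OR ( Hierarchy.L <= @newL AND Hierarchy.R >= @newR );  -- new parent and its ancestors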
Any suggestions on these methods? Which one is better? Any other ideas on how to do this?
Dear all, I have a table called CATEGORY, which is defined as follows:

CREATE TABLE CATEGORY (
    CATEGORY_ID INTEGER IDENTITY(1,1) NOT NULL,
    CATEGORY_NAME VARCHAR(40) NOT NULL CONSTRAINT UC__CATEGORY__CATEGORY_NAME UNIQUE,
    PARENT_CATEGORY_ID INTEGER,
    CATEGORY_ICON IMAGE,
    DEPTH INTEGER,
    CONSTRAINT PK__CATEGORY PRIMARY KEY (CATEGORY_ID)
)

Suppose the following snapshot was taken later:

| CATEGORY_ID | CATEGORY_NAME  | PARENT_CATEGORY_ID | DEPTH |
| 1           | PC             | NULL               | 1     |
| 2           | Networks       | 1                  | 2     |
| 3           | Audio          | 1                  | 2     |
| 4           | Video          | 1                  | 2     |
| 5           | TV Cards       | 4                  | 3     |
| 6           | Graphics Cards | 4                  | 3     |
| 7           | AGP            | 6                  | 4     |
| 8           | PCI            | 6                  | 4     |
| 9           | Input Devices  | 1                  | 2     |

From this hierarchy I would like to create the following XML file:

<?xml version="1.0" encoding="utf-8" ?>
<Hardware>
  <Category name="PC" id="1">
    <Category name="Networks" id="2" />
    <Category name="Audio" id="3" />
    <Category name="Video" id="4">
      <Category name="TV Cards" id="5" />
      <Category name="Graphics Cards" id="6">
        <Category name="AGP" id="7" />
        <Category name="PCI" id="8" />
      </Category>
    </Category>
    <Category name="Input Devices" id="9" />
  </Category>
</Hardware>

The reason for this file is that it will be the data source for the TreeView control that is new in ASP.NET 2.0.

Programmatically, using C#, I started with the XmlDocument, XmlTextWriter and XmlTextReader classes and used recursion to generate this XML file out of the records in the snapshot, but... is there an easier way of doing this using SQL Server 2005 with the new XML data type?

Any hint would be appreciated. Best regards
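One way to do this entirely in T-SQL, as a sketch rather than a definitive answer: a recursive scalar function that returns xml, built with FOR XML PATH ... TYPE. The function name GetCategoryXml is just an example, and function-call nesting is limited to 32 levels, so this only works for hierarchies up to 32 deep.

CREATE FUNCTION dbo.GetCategoryXml (@parent_id INT)
RETURNS XML
AS
BEGIN
    DECLARE @result XML;
    SET @result =
    (
        SELECT c.CATEGORY_NAME AS [@name],
               c.CATEGORY_ID   AS [@id],
               dbo.GetCategoryXml(c.CATEGORY_ID)   -- child categories nest here
        FROM CATEGORY AS c
        WHERE c.PARENT_CATEGORY_ID = @parent_id
           OR (c.PARENT_CATEGORY_ID IS NULL AND @parent_id IS NULL)
        ORDER BY c.CATEGORY_ID
        FOR XML PATH('Category'), TYPE
    );
    RETURN @result;
END;
GO

-- Wrap the nested categories in the <Hardware> root element:
SELECT dbo.GetCategoryXml(NULL) AS [*]
FOR XML PATH('Hardware');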
With reference to http://www.intelligententerprise.com/001020/celko.jhtml?_requestid=235427 I want the SQL statement which would give the lft and rgt column values.
I am reading his book but can't understand where he explains what the lft and rgt columns are:
"The root is always (lft, rgt) = (1, 2 * (SELECT COUNT(*) FROM table)) and leaf nodes are those with lft + 1 = rgt."
I can't figure out how to put nested tables into the Data Mining Model Training transform (SSIS). I can do a simple case table, but how do you get nested tables into the DM Training transformation? Any ideas? Samples?
I don't think we should sample nested tables separately for data mining model training, since nested tables are bound to the case table. So whenever we sample the case table, shouldn't the nested tables be retrieved as inputs along with the other input attributes of the case table?
Thank you very much for any guidance to clear my confusion.
I need to develop a language-specific data warehouse: product descriptions are available from an SAP system in multiple languages. English is the most important language and is the standard, but some countries also require product descriptions in their own language.

Productnr  Productdesc  Language
1          product      EN
1          produkt      DE

One option is to add a description column per language, but that is not very elegant. I was thinking of using bridge tables to model this, but then you always have to select a language in a filter (I think).
I'm thinking of a technical solution such that, when a user logs on, the language is determined and a view decides which language-specific product description to pick. But then I don't have the opportunity to interchange the different language-specific fields in a report (or, in my case, PowerPivot).
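As a sketch of that last idea (all object names here are assumptions): a description table keyed by product and language, a small table mapping logins to a language, and a view that falls back to English when no translation exists.

CREATE TABLE dbo.ProductDescription (
    Productnr    INT           NOT NULL,
    LanguageCode CHAR(2)       NOT NULL,   -- 'EN', 'DE', ...
    Productdesc  NVARCHAR(100) NOT NULL,
    CONSTRAINT PK_ProductDescription PRIMARY KEY (Productnr, LanguageCode)
);

CREATE TABLE dbo.UserLanguage (
    LoginName    SYSNAME NOT NULL PRIMARY KEY,
    LanguageCode CHAR(2) NOT NULL
);
GO

-- The view resolves the language from the caller's login and falls back
-- to the English description when no translation is available.
CREATE VIEW dbo.vwProductDescription
AS
SELECT en.Productnr,
       COALESCE(loc.Productdesc, en.Productdesc) AS Productdesc
FROM dbo.ProductDescription AS en
LEFT JOIN dbo.UserLanguage AS ul
       ON ul.LoginName = SUSER_SNAME()
LEFT JOIN dbo.ProductDescription AS loc
       ON loc.Productnr    = en.Productnr
      AND loc.LanguageCode = ul.LanguageCode
WHERE en.LanguageCode = 'EN';

The drawback mentioned above still applies: a report sees only the resolved description, not all languages side by side.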
We have a production server with a database on which a few DTS packages execute every night. Most of them run bulk-insert stored procedures.
So we have to set the recovery model of the database to SIMPLE for that period of time; otherwise it blows up our logs.
Is there any way we can set up log shipping between our production and standby servers, but pause it for some time, set the recovery model of the primary database to SIMPLE, execute the DTS bulk-insert jobs, bring it back to the FULL recovery model, and finally resume log shipping?
If it is possible, how can we achieve this?
If not, what could be another DR solution in this scenario?
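Not an answer to the pausing question, but a sketch of the recovery-model switch being described (the database name is a placeholder). Note that switching to SIMPLE breaks the log backup chain, so log shipping would have to be re-initialized afterwards; BULK_LOGGED is the usual alternative for this kind of window because it keeps the chain while still minimally logging bulk inserts.

-- BULK_LOGGED keeps the log chain intact while minimally logging the bulk
-- inserts, unlike SIMPLE, which breaks the chain.
ALTER DATABASE MyProdDB SET RECOVERY BULK_LOGGED;

-- ... run the nightly DTS bulk-insert jobs here ...

ALTER DATABASE MyProdDB SET RECOVERY FULL;

-- Take a log backup promptly so the bulk-logged window is captured.
BACKUP LOG MyProdDB TO DISK = N'X:\Backups\MyProdDB_log.trn';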
I have an MS Time Series model using a database of over a thousand products, each of which has hundreds of cases. Amazingly, it takes only a few minutes to finish processing the model, but when I click Mining Model Viewer to view the models, it takes many hours for them to show up. Once the window is open, I can choose the model for different products almost instantly. Is this normal?
Hi! I have a question about the connected and disconnected models for accessing a SQL Server database. I know that one is better than the other in some situations and that there is no model that is better in all cases, so I hope you can help me decide which to choose. I will use the database from web services that read data from the DB and write some data back. I don't know which to use; I would appreciate any advice, and the rules that would let me choose between the two models. Thanks!
I am very new to SQL Server 2005. I have created a package to load data from a delimited flat file into a database table. The initial load worked. However, in the future, flat files will be used to update the table: some of the records will need to be inserted and some will need to update existing rows. I am trying to do this from SSIS, but I am very lost as to how to do it.
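One common pattern, shown here only as a sketch with assumed table and column names: let the data flow load the file into a staging table, then run an Execute SQL Task with an update-then-insert (SQL Server 2005 has no MERGE). Alternatively, a Lookup transformation can split the flow into insert and update paths.

-- Update rows that already exist in the target...
UPDATE t
SET    t.SomeColumn = s.SomeColumn
FROM   dbo.Target  AS t
JOIN   dbo.Staging AS s ON s.BusinessKey = t.BusinessKey;

-- ...then insert the rows that do not exist yet.
INSERT INTO dbo.Target (BusinessKey, SomeColumn)
SELECT s.BusinessKey, s.SomeColumn
FROM   dbo.Staging AS s
WHERE  NOT EXISTS (SELECT 1
                   FROM dbo.Target AS t
                   WHERE t.BusinessKey = s.BusinessKey);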
Hi, I have 3 master tables:
1) RoleDetails(Roleid (PK), name, masterroleid (FK) references RoleDetails.Roleid)
2) PositionDetails(positionid (PK), name, MasterPositionid (FK) references PositionDetails.positionid, Roleid (FK) references RoleDetails.Roleid)
3) Userdetails(userid (PK), loginid, pwd, roleid (FK) references RoleDetails.Roleid, positionid (FK) references PositionDetails.positionid, fname, address)
How do I create two functions, one returning child nodes as per Case 1 and the other returning parent nodes as per Case 2? The role hierarchy looks like this:

(Manager) a                -- r1
         /    \
       a1      a2          -- r2
      / | \   / | \
    b1 b2 b3 b4 b5 b6      -- r3

Case 1: On passing the user id of (a), the output should be the user ids of (b1, b2, b3, b4, b5, b6) along with their role ids.
On passing the user id of (a1), the output should be the user ids of (b1, b2, b3) along with their role ids.
Case 2: On passing the role id of (r3), the output should be the user ids of all the parent roles (a1 and a2 (r2), a (r1)) along with their role ids.
Case 3: On passing the role id of a child node, the output should be only the user ids and role ids of that node's parents.
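A sketch of Case 1 with a recursive CTE (SQL Server 2005 and later), using the table and column names above; @userid is a placeholder and the query only illustrates the shape of the solution:

DECLARE @userid INT;
SET @userid = 1;   -- e.g. the user id of (a)

WITH RoleTree AS
(
    -- anchor: the role of the user that was passed in
    SELECT r.Roleid
    FROM   Userdetails AS u
    JOIN   RoleDetails AS r ON r.Roleid = u.roleid
    WHERE  u.userid = @userid

    UNION ALL

    -- recurse down: roles whose masterroleid points at a role already found
    SELECT c.Roleid
    FROM   RoleDetails AS c
    JOIN   RoleTree    AS p ON c.masterroleid = p.Roleid
)
SELECT u.userid, u.roleid
FROM   Userdetails AS u
JOIN   RoleTree    AS t ON t.Roleid = u.roleid
WHERE  u.userid <> @userid;

Case 2 is the same pattern with the recursion reversed: anchor on the role id that was passed in, carry masterroleid through the CTE, join the next level on RoleDetails.Roleid = RoleTree.masterroleid, and then join back to Userdetails.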
SELECT E.value('@D', 'varchar(MAX)') AS Code,
       E.value('@A', 'varchar(MAX)') AS Rate
FROM   @XML.nodes('./X/E') AS T(E)
The order of appearance is critically important, because each rate applies on top of the previous ones. Is there a way to generate a sequential ID based on the order of appearance in the XML string? In this case, I want:
1  CODE1  0
2  CODE2  0.03
3  CODE2  0.04
I thought of using a temp table with an IDENTITY column, but that is not the best fit for my need: I have multiple rows, each with its own XML string. The ROW_NUMBER() windowed function needs an ORDER BY clause that I can't provide.
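One approach worth trying (a sketch, not a guaranteed answer): number the <E> elements by document order inside the XQuery itself, using the node-order comparison operator <<, so neither a temp table nor ROW_NUMBER() is needed:

SELECT E.value('for $i in . return count(../E[. << $i]) + 1', 'int') AS Seq,
       E.value('@D', 'varchar(MAX)') AS Code,
       E.value('@A', 'varchar(MAX)') AS Rate
FROM   @XML.nodes('./X/E') AS T(E);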
I am looking for information on building a 4-node, multi-instance SQL cluster on Windows 2003 Datacenter Server. I've consulted BOL, Google, and Microsoft's "techinfo" pages for Datacenter Server clustering, but every time I think I'm going to get some good information, I end up with info on a default 2-node cluster setup. Any guidance, links, etc. would be highly prized.
To all the SQL H/A experts: we were wondering if we could have a 3-physical-node, 2 active/passive-cluster architecture set up on a SAN as seen in the image below? http://www.geocities.com/juanlieu/CP_Arch.JPG
In case you cannot see the diagram, it looks something like this:

active/passive Cluster A ---> physical server A (Win2003/SQL2005) ---> HP EVA SAN
                         ---> physical server B (Win2003/SQL2005) ---> HP EVA SAN
active/passive Cluster B ---> physical server B (Win2003/SQL2005) ---> HP EVA SAN
                         ---> physical server C (Win2003/SQL2005) ---> HP EVA SAN

In this setup, I understand that server B cannot be called upon as the active server at the SAME time by both clusters. Question: what would happen if it were? Would server B reject the last cluster that calls it?
Appreciated in advance.
Is there a way to expand the RDL to add custom nodes?
I read here that there is a <Custom> element that can be put inside the <Report> node, but it does not work; I get all kinds of schema errors. http://msdn2.microsoft.com/en-us/library/ms153972.aspx
I get the following error when I add a <Custom> element inside <Report>, which is acceptable according to the MSDN link above.
Deserialization failed: The element 'Report' in namespace 'http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition' has invalid child element 'Custom' in namespace 'http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition'. List of possible elements expected: 'http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition:Description http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition:Author http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition:AutoRefresh http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition:DataSources http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition:DataSets http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition:Body http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition:ReportParameters http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition:Code http://schemas.microsoft.com/sqlserver/reporting/2005/... Line 3, position 4.
Does anyone know how to get this to work, or know where there might be some information about this online? I am coming up dry.
Dear people, I want to shred the following XML so that I can get all the Warranty elements with their id and text.
Required Output
id  WarrantyText
1   1 year parts and labor
2   2 year parts and labor
XML
declare @myDoc xml
set @myDoc = '<ProductDescription ProductID="1" ProductName="Road Bike">
  <Features>
    <Warranty id="1">1 year parts and labor</Warranty>
    <Warranty id="2">2 year parts and labor</Warranty>
    <Maintenance>3 year parts and labor extended maintenance is available</Maintenance>
  </Features>
</ProductDescription>'
I am using the following query, but it doesn't seem to work:
select C.value('@id', 'nvarchar(max)') as ID,
       C.value('Warranty', 'nvarchar(max)') as WarrantyText
from @myDoc.nodes('//Warranty') T(C)
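For comparison, a version that should work: the context node returned by .nodes('//Warranty') is already the Warranty element, so its text is read with '.' (or 'text()[1]') rather than with the element name again:

select C.value('@id', 'nvarchar(max)') as ID,
       C.value('.',   'nvarchar(max)') as WarrantyText
from @myDoc.nodes('//Warranty') T(C);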
We have a number of databases in our company and we recently had to buy a second SQL Server. What I would like to do is have both SQL Servers act as failovers for each other. They both have the exact same hardware configuration (Dell PowerEdge 1855 blades) and we have a RAID array with the Melio clustered file system for our web servers. I haven't found any documentation on whether this is even possible. Am I completely wrong in thinking this can be done, or is there a better solution to spread the load but provide failover support without buying two new machines?
I've got quite a simple case study: I've got a database which contains N nodes and a table containing N*(N-1) rows with the distances between each pair of nodes (it's a symmetric matrix). I need to do several iterations and say which clusters and sub-clusters each of the nodes belongs to. I have not been able to work out whether it is within SQL Server's capabilities to handle already-known distances in order to perform the clustering algorithm (the last part of it: combining the groups according to the principle of minimum distances within each cluster and maximum distances between the different clusters).
In addition, I would also need to know the following:
Is it possible to create an iterative process which will create several levels of clustering (numbers of groups found), and if so, how do I tell the mining tools how many groups to create?
Is there an overlapping-clusters process in SSAS (i.e. where one node belongs to several clusters that are not complete sub-groups of each other)?
I was wondering if anyone can help me out with this. Is there any way to have a document map in SQL Reporting Services expand on load, or is there a way to programmatically set the document map nodes to be expanded when a report is generated?
Hi, I have a SQL statement that returns a hierarchy.
It returns:
-NEWSPAPER
  -QUALS
    - TIMES
    - TELEGRAPH
  -MIDS
    - DAILY MAIL
    - EXPRESS
    - EVENING STANDARD
  -POPS
    - THE SUN
I want the statement to include a childnumber column that, for each node, counts the total number of children below it. In the above example NEWSPAPER would be 9, MIDS would equal 3, and POPS would be 1. Does anyone know how I can do this in SQL?
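A sketch, assuming the data behind that statement is stored as an adjacency list (the Media table and its columns are placeholders): a recursive CTE pairs every node with each of its descendants and then counts them.

-- Assumed storage for the hierarchy:
CREATE TABLE Media (
    name        VARCHAR(50) NOT NULL PRIMARY KEY,
    parent_name VARCHAR(50) NULL REFERENCES Media(name)   -- NULL for NEWSPAPER
);

WITH Descendants AS
(
    -- anchor: every node paired with itself
    SELECT m.name AS ancestor, m.name AS descendant
    FROM   Media AS m

    UNION ALL

    -- recurse: add the children of every descendant found so far
    SELECT d.ancestor, c.name
    FROM   Media AS c
    JOIN   Descendants AS d ON c.parent_name = d.descendant
)
SELECT ancestor AS name,
       COUNT(*) - 1 AS childnumber   -- subtract the node itself
FROM   Descendants
GROUP BY ancestor;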
Hi, I have the following code:

DECLARE @idoc int
DECLARE @x xml
SET @x = '<Root>
  <row id="1"><name>Larry</name><oflw>some text</oflw></row>
  <row id="2"><name>Joe</name></row>
  <row id="3" />
</Root>'

exec sp_xml_preparedocument @idoc OUTPUT, @x

SELECT * FROM OPENXML(@idoc, '/Root')

This gives the OPENXML edge-table details: id, parentid, nodetype, localname, prefix, namespaceuri, datatype, prev, text.
I want to get the same details using XQuery; please let me know how to go about it.
Regards, Shilpa
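There is no exact XQuery equivalent of OPENXML's edge table, but a partial sketch along these lines pulls the element name, its parent's name, the id attribute, and the text for every element (treat it as a starting point rather than a drop-in replacement):

SELECT x.value('local-name(.)',  'nvarchar(100)') AS localname,
       x.value('local-name(..)', 'nvarchar(100)') AS parentname,
       x.value('@id',            'nvarchar(10)')  AS id_attribute,
       x.value('text()[1]',      'nvarchar(max)') AS [text]
FROM   @x.nodes('//*') AS T(x);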
We had an existing 2 node active / active cluster, 1 running a default instance of Sql Server 2005 Enterprise Edition 9.0.3152 (SP2 + Hotfixes) and the other running a named instance of the same version.
We recently added 2 new nodes to the cluster; they were added successfully and we tested cluster group failover to the new nodes successfully.
Last night we tried to install SQL Server 2005 Enterprise Edition on the new nodes.
I followed the proper procedure of modifying the installation for both instances and selecting the 2 new nodes to apply them to. This went 100%: SQL Server 2005 installed successfully for both instances on the 2 new nodes, and all the log files were successful.
We then tried to apply SP2, we tried the following:
1. We ran SP2 from the active node, but when we got to the screen where you select what to apply SP2 to, we could not select anything; if you clicked on the database engine, the message said that these instances were already at a later version and we could not proceed. This is how I successfully applied SP2 to the original 2-node cluster, but it does not work for nodes added to an existing cluster.
This is also what all the documentation we could find said; refer to the SP2 release notes under the topic "Failover Cluster Installation". It is also the method we found when googling.
2. We then tried what is described in the SP2 release notes under "Rebuild a SQL Server 2005 SP2 Failover Cluster Node". We ran SP2 from the new nodes while they were passive, but when we got to the screen where you select what to apply SP2 to, we could not select the database engine; the message at the bottom said that SP2 must be run from the active node and that we were attempting it from a passive node, which is what we tried in step 1 described above.
3. This was a last resort. We were advised to try failing the instance over to a new node and then running SP2. Personally I thought this was a bad idea: one should never fail an instance of SQL Server over to a node with incompatible binary versions, and secondly, when we installed SQL Server on the new nodes a warning popped up beforehand stating that the instances were at a later version and that the new nodes must be brought to that version before attempting failover. I thought SQL would not even start, but to my surprise we successfully failed the SQL group over to the new node. When we ran SP2 it looked good, we could select the database engine on the new node to apply SP2 to, BUT a few seconds after clicking Next the SP2 installation just closed. NO INFORMATIONAL MESSAGES, NO ERRORS, NO WARNINGS, it just closed and never came back.
I had never seen this happen on a cluster before. Needless to say this made me very nervous, so we failed the SQL group back to the original nodes and gave up.
PLEASE can someone tell me how to apply SP2 to 2 new nodes in a 4-node cluster? All the methods described in the SP2 release notes and other documentation, as described above in steps 1 and 2, do not work!
SELECT res.res_id, sub.value('(./@id)[1]', 'char(2)') id FROM vwResult res CROSS APPLY report.nodes('/clue_personal_auto/report/search_dataset/subjects/*') report(sub)
It works just fine in a SQL query window. After placing this into a view in SSDT (November 2012 update), I get a compilation error.
I'm setting up a cluster to test new deployments of SQL Server 2014 on a WSFC cluster (Windows Server 2012 R2). Should I use StarWind or the built-in iSCSI service? Will either one allow me to build a cluster consisting of two nodes? Do I need to dedicate a third node to run the iSCSI storage?
I can't seem to install SQL Server 2005 (x64 version) on an x64 Win2k3 two node cluster.
I get all the way through the configuration, and setup fails because it cannot start a task on the remote node. The error message says to check the task scheduler log file, which I have done, and I cannot find any 'errors' in the file.
Google/MSDN/Technet turn up nothing on multiple searches. Has anyone else run into this problem?
My installation account is a local admin on both machines, and so is the SQL cluster account. For the life of me I cannot figure out what's different for an x64 install vs. 32-bit...
We currently have a 2-node active/passive SQL 2000 cluster with 2 named instances. We will be changing the configuration to active/active, basically moving 1 instance to the passive node (so we can take advantage of the resources on the passive node).
We would also like to add 3 nodes to the cluster, making it a 5-node SQL cluster. What we are thinking of doing is basically making it active/active/active/active with the 5th server being passive. The question I have is: will I be able to add and install the 3 new nodes without having to redo the SQL cluster? Should I install the two new active nodes with the default SQL instance, or do I have to install SQL with named instances? (Actually I think named instances are the way to go, but then I wouldn't be posting here if I was sure about the answer.)