I have two tables, Customers and Organizations. One customer can be under many organizations. What would be the best way to design the DB, given the two choices below, for performance (around 50,000 customers)?
1)
-Customer table
-Organization table
-Link Organization & Customer by creating a new table with the following structure: CustomerID, OrgID
2)
-A Customer table that has a field called OrgID where the OrgID is stored. If the customer is related to more than one organization, we add another customer record and put the new OrgID in the OrgID column. Example:
Customer Table
-----------------------------------------------
CustomerID | Fname | Lname  | Address   | OrgID
-----------------------------------------------
1          | Bob   | Marley | 33 Africa | org1
1          | Bob   | Marley | 33 Africa | org2
1          | Bob   | Marley | 33 Africa | org3
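For reference, the link table in option 1 would look roughly like this. This is just a sketch; I am assuming integer keys, and the table and column names are only what I have in mind:

CREATE TABLE CustomerOrganization (
    CustomerID INT NOT NULL REFERENCES Customer (CustomerID),
    OrgID      INT NOT NULL REFERENCES Organization (OrgID),
    -- one row per customer/organization pair
    CONSTRAINT PK_CustomerOrganization PRIMARY KEY (CustomerID, OrgID)
);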
Hi all, I am having a problem with my SQL connection at GoDaddy where the connection pool gets maxed out. When that happens, I cannot access the database. This is the thread about the problem: http://forums.asp.net/thread/1665023.aspx. I just wrote the code below using a Thread. I hope someone who has experience with threading can give me some advice about my design. This is my first time.

static object Locked = new object();

public object ExecuteCommand(string queryStr, string type)
{
    // ExecuteCommand: returns an object
    Thread t = null;
    lock (Locked)
    {
        SQLString = queryStr;
        switch (type)
        {
            case "ExecuteNonQuery":
                t = new Thread(ExecuteNonQuery);
                break;
            case "ExecuteScalar":
                t = new Thread(ExecuteScalar);
                break;
            case "GetDataReader":
                t = new Thread(GetDataReader);
                break;
        }
        t.Start();
        t.Join();
    }
    return null;
}

First off, does this work at all? It runs, but is it a good design in terms of threading? Since I use lock, do I still need the t.Join() call? Is the switch with the three cases okay? Basically, I'm clueless. If you read my other post, you will get an idea of what I'm trying to do. Any feedback would be very much appreciated. Thank you.
I need some other opinions on whether or not this is considered a proper database design structure.
Here is the relationship: we have PEOPLE, each of whom can belong to a COMPANY.
PERSON_TABLE Person_ID Company_ID
COMPANY_TABLE Company_ID
Then each person can trust other people of other companies, but can only trust 1 person per company.
My question is this: in order to enforce the constraint of one trusted person per company, is it considered OK to add the redundant column Company_ID to the PERSON_TRUSTED_TABLE (and then create a composite primary key on Person_ID/Company_ID), instead of just adding a trigger to the PERSON_TRUSTED_TABLE to uphold the constraint?
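To be concrete, the redundant-column version I am asking about would be something like the sketch below. This is only a sketch: I am guessing the column name for the trusted person, and it assumes Person_ID is the primary key of PERSON_TABLE, so the composite foreign key can reference a unique key on (Person_ID, Company_ID):

-- needed so the composite foreign key below has something to reference
ALTER TABLE PERSON_TABLE
    ADD CONSTRAINT UQ_Person_Company UNIQUE (Person_ID, Company_ID);

CREATE TABLE PERSON_TRUSTED_TABLE (
    Person_ID         INT NOT NULL REFERENCES PERSON_TABLE (Person_ID),
    Trusted_Person_ID INT NOT NULL,
    Company_ID        INT NOT NULL,
    -- one trusted person per company for each person
    CONSTRAINT PK_Person_Trusted PRIMARY KEY (Person_ID, Company_ID),
    -- keeps the redundant Company_ID in sync with the trusted person's actual company
    CONSTRAINT FK_Trusted_Person_Company
        FOREIGN KEY (Trusted_Person_ID, Company_ID)
        REFERENCES PERSON_TABLE (Person_ID, Company_ID)
);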
I have a table of about 35,000 records. About 14 fields in this table relate to "sections". Each of these sections can have up to 20 values. This lends itself to a design like:
BuildingConstructionType relates to ConstructionHasTypes relates to Types
where BuildingConstructionType is one of the 14 fields in the 35,000-record "big" table, and ConstructionHasTypes is the intermediary relation that relates the big table many-to-many with Types (the sections).
Unfortunately, with 35,000 records, this big_table_has_sections table seems like it might bloat. Is this a good solution?
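A rough sketch of the layout I am describing, with made-up key names (BigTable stands in for my 35,000-record table):

CREATE TABLE Types (
    TypeID   INT PRIMARY KEY,
    Section  VARCHAR(50) NOT NULL,   -- e.g. 'BuildingConstructionType'
    TypeName VARCHAR(50) NOT NULL    -- one of the up-to-20 values for that section
);

CREATE TABLE ConstructionHasTypes (
    BigTableID INT NOT NULL REFERENCES BigTable (BigTableID),
    TypeID     INT NOT NULL REFERENCES Types (TypeID),
    PRIMARY KEY (BigTableID, TypeID)
);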
Hi everyone. My problem is that I'm not quite sure which way I should go. The user inputs, through a second application, a long string (let's say 128 characters) of values separated by semicolons. Example:

A20;BU;AC40;MA50;E;E;IC;GREEN

Now, each of these positions is already defined in another table as a separate record. These are the keys, let's say; it means I have some properties defined for A20, BU, and so on. Because this long input string is a property of a device (which also has a lot of other properties), I could store the data in two different ways:

1. In the stored procedure, split out each position separated by a semicolon and write it into a different table with the device ID and its position in the long string, roughly like this:

Major device data table
ID   AnyData1  AnyData2  ...  AnyData3
123  MZD12     XX77      ...  any comment text
124  MZD13     XY55      ...  any other comment

String data table
fk_deviceId  position  value
123          1         A20
123          2         BU
123          3         AC40
...
123          8         GREEN

The device table also contains a pointer (position), which might change, to "highlight" a specified position. Then I can very easily find all the necessary data. The problem is that I need to move the device record data (from the other table) into a history table very often (on each update). That means I also need to move all these records (1-8 in this example) to a separate history table, holding the index for the historical device dataset. This is a little inconvenient, and in my opinion it will use too much storage, and in the code I always need to shift these properties into a history table with indexes to the history table of the other properties.

2. The table would be built roughly this way:

Major device data table
ID   AnyData1  AnyData2  ...  AnyData3           stringProperty                 pointer
123  MZD12     XX77      ...  any comment text   A20;BU;AC40;MA50;E;E;IC;GREEN  3
124  MZD13     XY55      ...  any other comment  A20;BU;AC40;MA50;E;E;IC;GREEN  2

When writing to the device table there would just be an additional field for this string, and I would have a function which, according to the specified pointer, gets me the relevant part of the string on the fly whenever I need it. This does not require the other table and reduces the amount of data - not by a lot, but somewhat.

The inconvenience of this solution is that searching over parts of these strings will not be fast, since there is no real index on them. If I want to find all devices whose current pointer value equals GREEN, I need to use the function to extract the value, and that is not indexed, which means that with a large amount of data it might be slow.

I would like to know your opinion about both solutions. Also, please point out any other problems with either solution that I might not have noticed.

With best regards,
Matik
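For what it is worth, the search I care about would, with solution 1, be a plain join (I have shortened the table names from my sketch above; 'GREEN' is just the example value):

-- all devices whose currently pointed-at position has the value 'GREEN'
SELECT d.ID, d.AnyData1, d.AnyData2
FROM   MajorDeviceData AS d
JOIN   StringData      AS s
       ON  s.fk_deviceId = d.ID
       AND s.[position]  = d.pointer
WHERE  s.[value] = 'GREEN';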
I have two tables. Table1 has ID, FName, LName. Table2 has ID, PID, PFName, PLName, Flag. PID is a unique number derived from the time (hh:mm:ss). I need to run an update on Table2, setting Flag to 1 and also creating the PID. I could use a join query to do this, but I am concerned about the PID (hh:mm:ss), because the update may take less than one second and I will end up with duplicate PIDs. Any idea what the best way of doing this is?
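The join update I have in mind is roughly the following. It is only a sketch: I am assuming PID is a character column, and the time expression is just a placeholder for however the hh:mm:ss value gets built, which is exactly the part I am worried about:

UPDATE t2
SET    t2.Flag = 1,
       t2.PID  = CONVERT(varchar(8), GETDATE(), 108)   -- hh:mm:ss
FROM   Table2 AS t2
JOIN   Table1 AS t1 ON t1.ID = t2.ID;
-- GETDATE() is evaluated once for the statement, so every row updated here
-- would get the same hh:mm:ss value, hence the duplicate-PID concern.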
Well, it's kind of tricky, and I've got into this. For example, I have a column of ZIP codes, every US city and state name and abbreviation, the preferred name and an alternate name of each city in the US, and also the non-acceptable names for each preferred name.
Can you give me a hint on how many databases I should use? For example, I search for a ZIP code, and the result should be the name of the city or an acceptable name of the city, and there are non-acceptable names too.
Well, I just need your opinion. Hope you can get into this one. Thanks.
This is more a theoretical question, so I do not have any (working) DDL to post.

Let's say that I have a query which needs to be filtered for specific accounts while also needing several joins to retrieve additional data. Is it better to do one big SELECT / JOIN / WHERE statement? As in:

SELECT *
FROM T1
JOIN T2 ON T2.[Col1] = T1.[Col1]
JOIN T3 ON T3.[Col2] = T1.[Col2] AND T3.[Col3] = T1.[Col3]
-- and so on...
WHERE T1.[Account] IN ('123', '456', '789')

Or is it better to do an inner SELECT / WHERE and pass that to a SELECT / JOIN? As in:

SELECT *
FROM (
    SELECT * FROM T1
    WHERE T1.[Account] IN ('123', '456', '789')
) IT
JOIN T2 ON T2.[Col1] = IT.[Col1]
JOIN T3 ON T3.[Col2] = IT.[Col2] AND T3.[Col3] = IT.[Col3]
-- and so on...

First-glance logic says that the inner select is the way to go, since the joins would have fewer rows to work with, as opposed to joining everything and THEN pulling out what is not needed. But the query planner sometimes seems to have a mind of its own... Does it know which rows will be pulled, so it does that first? If I follow the same "structure" with many different queries, does it use the same logic all the time, or do I need to try the same thing for each and check it?

How does this apply to situations where there is a UNION involved? Do I do the union and then apply WHERE and JOIN to filter out rows and get additional data, respectively, or do I filter out rows inside the union and then take the combined set and do the JOINs?

SELECT *
FROM (
    SELECT T1.[Col1], T1.[Col2] FROM T1
    UNION ALL
    SELECT T2.[Col1], T2.[Col2] FROM T2
) CT
JOIN T2 ON T2.[Col1] = CT.[Col1]
JOIN T3 ON T3.[Col2] = CT.[Col2] AND T3.[Col3] = CT.[Col3]
-- and so on...
WHERE CT.[Account] IN ('123', '456', '789')

versus

SELECT *
FROM (
    SELECT T1.[Col1], T1.[Col2] FROM T1 WHERE T1.[Account] IN ('123', '456', '789')
    UNION ALL
    SELECT T2.[Col1], T2.[Col2] FROM T2 WHERE T2.[Account] IN ('123', '456', '789')
) CT
JOIN T2 ON T2.[Col1] = CT.[Col1]
JOIN T3 ON T3.[Col2] = CT.[Col2] AND T3.[Col3] = CT.[Col3]
-- and so on...
What is the better practice when joining tables in a query, example 1 or 2? I've always joined tables together like example 2, but it seems that many people are advocates of example 1. Are there any performance differences between the two?
Example 1: SELECT * FROM authors AS a INNER JOIN publishers AS p ON a.city = p.city ORDER BY a.au_lname DESC
Example 2: SELECT * FROM authors AS a, publishers AS p WHERE a.city = p.city ORDER BY a.au_lname DESC
I need to purchase a new computer for a small medical clinic which will basically have only one purpose: to answer read and write queries against a SQL Server 2005 instance resident on that computer. Queries come from the current 8 stations (up to 14 stations in the future). Most of the time, only 3 stations will be active at once. Queries mostly access patient file information, are not complex, and are short-lived.
A friend of mine who owns a computer store just quoted me for a dual quad-core Xeon 5405 2 GHz system with Windows Server 2003 and 10 CALs. I'm concerned about the following:
- What's the use in having 8 cores, each of them running at only 2 GHz, when there's really only one service running (SQL Server 2005, likely Express Edition) on the computer? Does SQL Server have the capability to make use of all the cores? Otherwise, why spend more for a Xeon and so many cores instead of a single C2D running at a faster speed of, say, 3 GHz?
- What would be the advantage of using Windows Server over Windows XP in a peer-to-peer configuration? I don't buy into the 10-connection limit, because the TCPIP.sys file can be altered to raise that limit, so 14 stations does not trigger the need for Windows Server in and of itself.
I was talking to my boss today, and our report requests are not very consistent. We always have someone coming back to change something in a report. We were thinking of using something called Cube Analysis. That would give our employees the raw data so they can run any standard query for themselves. We have folks who want a report one way, but then they change their minds and we end up creating yet another report four or five times. What are your thoughts about this type of database?
What's the current opinion on UDTs? Are they valuable? Do the benefits outweigh the costs? Are they an absolute no-no? Has there been anything authoritative or groundbreaking on the topic since Alex P's blog back in October 2005?
I am not that experienced in SQL 2005, and I'm trying to debug a stored proc (written by a far more experienced programmer, which is why I'm posting here).
At the top of the proc is the statement 'DECLARE @BillActualRoom bit'. If I understand correctly this is a local variable.
Later in the code we have the statement
'SELECT @BillActualRoom = 0 FROM BillingOptions WHERE CenterID = @CenterID'
where @CenterID is an input parameter.
The table BillingOptions does have a field called 'BillActualRoom' and it is a bit.
My question is this: Does this SQL statement make any sense at all? (My gut reaction is no but I'd like a second opinion. Can't get any consensus in the office.)
I would think that putting a local variable in like this would mean that @BillActualRoom is always equal to 0.
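If it helps, this is the kind of test I could run to check it, with @CenterID hard-coded just for the test (the behaviour I am unsure about is what happens when the WHERE clause matches no rows):

DECLARE @BillActualRoom bit;
DECLARE @CenterID int;
SET @CenterID = 1;   -- sample value just for this test

-- If no row matches, the assignment never runs and @BillActualRoom stays NULL;
-- if one or more rows match, it is set to 0 once per matching row.
SELECT @BillActualRoom = 0
FROM   BillingOptions
WHERE  CenterID = @CenterID;

SELECT @BillActualRoom AS Result;   -- NULL = no matching row, 0 = at least one row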
I'm trying to deal with a user complaint of slowness. Many of the counters I've looked at appear normal (buffer cache, queue length, memory), so it probably looks like a network issue. My question is what people consider acceptable when it comes to % Disk Time. My % Disk Time has increased from an average of 20% to 33% in recent months. My average disk read and average disk write have both been less than one. My research has shown that more than 55% % Disk Time for ten minutes is considered a problem. I'm not there yet, but I seem to be slowly getting there. The app running against my server is vendor-written so I can't change it; I'm also running log shipping, which is probably inflating the numbers a little. Any opinions appreciated.
I have a server where I have little control over most of the codeset and DB design. Recently I have seen both Processor - % Processor Time and Processor - % User Time go from about 6.3 to about 24.3. The system queue length has also gone from about 0.2 to 1.1. In my humble opinion, both of these are signs of a problem coming (luckily the cache hit ratio is still sitting at about 99%). I have been running Profiler to catch the things that take more than 4500 ms, and I can probably tie the two together. Any opinions or real-world comparisons appreciated.
We are trying to restrict developer permissions in our development environment. One thought is to add developers to db_datareader, db_datawriter, db_ddladmin, db_securityadmin and then revoke various permissions from ddladmin and securityadmin. The goal is to allow developers to create stored procedures and assign permissions to those stored procedures.
Another option is to place all developers in the same role and ask them to create all procedures using that role name (e.g. dev_role.sp_procedurename). By doing this, each developer will be able to run stored procedures created by another developer. The downside is that the permissions do not match Model Office/User Test and Production.
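For what it's worth, my rough notes on how the role option might be set up are below. I have not fully verified this, so treat it as a sketch; the user name is a placeholder and must already exist in the database:

CREATE ROLE dev_role;
GO
-- a schema owned by the role, so procedures can be created as dev_role.procname
CREATE SCHEMA dev_role AUTHORIZATION dev_role;
GO
GRANT CREATE PROCEDURE TO dev_role;

-- repeat for each developer's database user
EXEC sp_addrolemember 'dev_role', 'some_developer_user';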
I was having a conversation with another developer about the order that the join conditions are placed. More specifically, about the order the tables are referenced in the join condition. There are two ways to do it, for example:

-- Here the table that is referenced first was "declared" first.
SELECT *
FROM Logon
LEFT OUTER JOIN Thread  ON Logon.LogonID   = Thread.LogonID
LEFT OUTER JOIN Message ON Thread.ThreadID = Message.ThreadID

-- Here the table that is referenced first is the table being joined directly above it.
SELECT *
FROM Logon
LEFT OUTER JOIN Thread  ON Thread.LogonID   = Logon.LogonID
LEFT OUTER JOIN Message ON Message.ThreadID = Thread.ThreadID

I realize this is not that big of a deal, but I was wondering if anyone had a good/valid reason for doing one versus the other.
I've got a database where relations are held in a special way which the project leaders think of as "performant and uncomplicated" but which is very questionable to me:

Table [Attributes]
Fields: [AttributeID] and [AttributeText]

Table [Objects]
Fields: object stuff... and [AttributeIDs] (varchar with 0-20 IDs usually)

In AttributeIDs there is a backslash-separated list of attribute IDs, like '3412278'. So to get 20 objects with a specific attribute (which we need often) we do:

SELECT TOP 20 *
FROM Objects
INNER JOIN Attributes
    ON (Objects.AttributeIDs LIKE ('%' + CAST(AttributeID AS varchar) + '\%'))
ORDER BY ObjectText

PS: to store data we need for communication, we include a dozen fields in *every* table, and their content comes to about 100 bytes/record.

I would do this stuff with a table that stores the object/attribute correlations. Could someone tell me if this scheme makes any sense to an expert, and how to evaluate it in regard to performance (we have big customers where that *is* an issue), design, scalability, pragmatism and sense ;)

Thanks in advance,
ViperDK
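The alternative I have in mind would be roughly the following. It is only a sketch; I am assuming Objects has an ObjectID key:

CREATE TABLE ObjectAttributes (
    ObjectID    INT NOT NULL REFERENCES Objects (ObjectID),
    AttributeID INT NOT NULL REFERENCES Attributes (AttributeID),
    PRIMARY KEY (ObjectID, AttributeID)
);

-- the frequent query then becomes an indexable join instead of a LIKE scan
SELECT TOP 20 o.*
FROM   Objects          AS o
JOIN   ObjectAttributes AS oa ON oa.ObjectID   = o.ObjectID
JOIN   Attributes       AS a  ON a.AttributeID = oa.AttributeID
WHERE  a.AttributeID = 42            -- the attribute being searched for
ORDER BY o.ObjectText;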
I inherited this website, which uses an Access DB as a backend, from a previous developer. Ugh. Apparently, he was extremely limited in which databases he could use. The website is filled with problems related to connecting to this database, which is no surprise. Most of them are related to permissions on the database file, multiple-user access, etc. I am thinking about moving this to a SQL Server Express DB and upgrading the site to the 2.0 framework. Most of my experience is in SQL Server, but since they can't, or won't, pay for the full SQL Server database, is this a good solution? I've considered using MySQL as well, but their host won't provide support for it. Are there any other recommended alternatives?
Hi, I need an opinion on this. To prevent duplicate records in the DB, I am using unique constraints on a column or a combination of columns, as the case may be. Reading this article, http://aspalliance.com/687_Preventing_Duplicate_Record_Insertion_on_Page_Refresh.5, I get the feeling that it's not such a good idea, and I am wondering what that implies. Does it mean that unique constraints are not reliable enough? Does it mean one may break and let a duplicate record be inserted, even though it's not supposed to? I am using SQL Server 2005. I have read Dino's article on duplicate records and I still have not understood it completely. I am looking for a not-too-complex, foolproof method to prevent duplicate record insertion caused by clicking refresh or by multiple (careless) clicks on submit. Thanks.
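For reference, the kind of constraint I am talking about looks like this (the table and columns here are made up, just to show the shape):

ALTER TABLE Orders
    ADD CONSTRAINT UQ_Orders_Customer_OrderDate UNIQUE (CustomerID, OrderDate);

-- a second INSERT with the same (CustomerID, OrderDate) pair then fails with error 2627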
Ok, I'm doing a football database for fixtures and stuff. The problem I am having is that in a fixture there is both a home and an away team. The tables as a result are something like this:
It's not exactly like that, but you get the point. The question is: can I do a fixture query which results in one record per fixture, showing both teams' details, the first in a home-team field and the second in an away-team field? (There's a sketch of what I mean at the end of this post.)
Fixture contains the details about the fixture, like the date, the fixture ID, and whether it has been played.
Team contains team info like team id, name, associated graphic
TeamFixture is the table which links the fixture to its home and away teams.
TeamFixture exists to avoid a direct many-to-many relationship.
Make sense? Sorry if this turns out to be really easy, just can't get my head around it at the mo!
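Something like this is the result I am after. It is only a guess at the syntax, and I have made up the column names (including a HomeOrAway flag on TeamFixture):

-- one row per fixture, with home and away team details side by side
SELECT f.FixtureID,
       f.FixtureDate,
       h.TeamName AS HomeTeam,
       a.TeamName AS AwayTeam
FROM   Fixture     AS f
JOIN   TeamFixture AS tfh ON tfh.FixtureID = f.FixtureID AND tfh.HomeOrAway = 'H'
JOIN   Team        AS h   ON h.TeamID      = tfh.TeamID
JOIN   TeamFixture AS tfa ON tfa.FixtureID = f.FixtureID AND tfa.HomeOrAway = 'A'
JOIN   Team        AS a   ON a.TeamID      = tfa.TeamID;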
I would like to create a table called Product. My objective is to get the list of packages available for each product in a DataGridView column when selecting each product. Each product may have different package types (e.g. Nos, CTN, OTR, etc.). Some products may have two packages and some three, etc. The quantity in each package may also differ (e.g. for some products a CTN may contain 12 nos, in other cases 8 nos, etc.). The price of each package will also be different, and that also needs to be shown. How should I design the tables?
Product name:  Nestle milk                | Rainbow milk
Packages:      CTN, OTR, Nos              | CTN, Nos
Price:         50, 20, 5                  | 40, 6
(Remarks for your reference): CTN = 10 nos, OTR = 4 nos | CTN = 8 nos
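A sketch of what I am considering, with made-up column names:

CREATE TABLE Product (
    ProductID   INT IDENTITY(1,1) PRIMARY KEY,
    ProductName VARCHAR(100) NOT NULL            -- e.g. 'Nestle milk'
);

CREATE TABLE ProductPackage (
    PackageID   INT IDENTITY(1,1) PRIMARY KEY,
    ProductID   INT NOT NULL REFERENCES Product (ProductID),
    PackageType VARCHAR(10)   NOT NULL,          -- e.g. 'CTN', 'OTR', 'Nos'
    Quantity    INT           NOT NULL,          -- e.g. 10 nos per CTN for one product, 8 for another
    Price       DECIMAL(10,2) NOT NULL
);

-- packages for the selected product, for binding to the grid
SELECT PackageType, Quantity, Price
FROM   ProductPackage
WHERE  ProductID = 1;    -- the product selected in the grid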
I'm going to create a big DB that will hold important info (the usual stuff - Clients, Products, Orders...)
I wonder where I should use IDENTITY fields; for example, on Orders I will have Order_ID... (and where does SQL Server keep the numbers of deleted records?)
My fear is that IDENTITY fields will go wrong somehow, so I could lose the connections between the tables (maybe when restoring my DB to some other location... with DTS... or other issues).
1. When should I use an IDENTITY field?
2. If I do NOT, how can I lock a record when I add a new one?
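For question 1, this is the kind of use I have in mind; a minimal sketch assuming an integer surrogate key on Orders:

CREATE TABLE Orders (
    Order_ID  INT IDENTITY(1,1) PRIMARY KEY,   -- generated by SQL Server
    Client_ID INT NOT NULL,
    OrderDate DATETIME NOT NULL DEFAULT (GETDATE())
);

INSERT INTO Orders (Client_ID) VALUES (42);

-- the value just generated in this scope
SELECT SCOPE_IDENTITY() AS NewOrderID;

-- note: identity values of deleted rows are not reused, so gaps in Order_ID are normal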
I'm going to create a hardware/software inventory program to keep track of all the computers at the office. This program should be able to search the database using queries like "all computers with a CPU faster than 500 MHz".
I want some tips on my database design.
One solution is to create columns for each piece of hardware, i.e. CPU, RAM, HDD, etc., and then just run simple SELECT queries against them. The problem is that a computer may have many HDDs/CDs etc., and other types of equipment may also be entered in the database, like switches and routers.
Another solution is to store the value type + value in one table that has a relation to another table containing the actual machines/routers/switches. The problem here is that I can't do numeric comparisons this way, since "11" is less than "2" when compared as strings (which breaks queries like "all computers with a CPU faster than 500 MHz").
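The second solution, roughly the way I picture it (all names are mine; the point is a separate numeric column so the string-comparison problem goes away):

CREATE TABLE Equipment (
    EquipmentID   INT IDENTITY(1,1) PRIMARY KEY,
    EquipmentType VARCHAR(20) NOT NULL           -- 'computer', 'switch', 'router', ...
);

CREATE TABLE EquipmentProperty (
    EquipmentID  INT NOT NULL REFERENCES Equipment (EquipmentID),
    PropertyName VARCHAR(50) NOT NULL,           -- 'cpu_mhz', 'ram_mb', 'hdd_gb', ...
    TextValue    VARCHAR(255) NULL,              -- for non-numeric properties
    NumericValue DECIMAL(18,2) NULL              -- for numeric properties, so comparisons work
);

-- all computers with a CPU faster than 500 MHz
SELECT e.EquipmentID
FROM   Equipment         AS e
JOIN   EquipmentProperty AS p ON p.EquipmentID = e.EquipmentID
WHERE  e.EquipmentType = 'computer'
  AND  p.PropertyName  = 'cpu_mhz'
  AND  p.NumericValue  > 500;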
I am working on a project with another guy. We have two tables, let's call them Student and Student_Unit, where Student_Unit holds the names of the units the student is doing. I was planning to use the same Student_Unit table to hold the result of that unit for that student, since it is just a case of adding one field, but this guy, who happens to be my boss, thinks the result should be separated out. Is it really a better idea to make a separate table just for the result, which is totally dependent on unit and student? That doesn't make any sense to me.
When asked why, he says it's for performance, modularity, an object-oriented approach, and so on, and honestly I think he knows nothing. So I would like you guys to tell him: is there really a need to create a separate table for attributes which are totally dependent on the same PK? Does having five more attributes on the same table hurt more than creating a new table?
I think I might have dug myself in a hole here, so could use some advice on database design.
Currently, I have a table that records meeting minutes. The design is:
Code:
minutesid    | int     | 4   | no nulls
minutescat   | varchar | 255 | allow nulls
minutesnotes | text    | 16  | allow nulls
What this does is set up a minutes category, such as "Members Present", and the notes for that category, such as "Mrs. Peacock, Mr. Green, Professor Plum, Colonel Mustard, Miss Scarlett". Each individual category has its own ID.
I need to develop an archive of each of the minutes. So if there are 15 categories and associated notes for the 15 OCT 2007 meeting, how can I design the archive table to include all of those categories and notes into one record for 15 OCT 2007, then for 15 NOV 2007 and so on ...?
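One shape I have been considering for the archive is sketched below. This is just an idea: the Meeting table and the meetingid foreign key are my additions, not part of the current design:

CREATE TABLE Meeting (
    meetingid   INT IDENTITY(1,1) PRIMARY KEY,
    meetingdate DATETIME NOT NULL                -- e.g. 15 OCT 2007
);

CREATE TABLE MinutesArchive (
    minutesid    INT IDENTITY(1,1) PRIMARY KEY,
    meetingid    INT NOT NULL REFERENCES Meeting (meetingid),
    minutescat   VARCHAR(255) NULL,              -- e.g. 'Members Present'
    minutesnotes TEXT NULL
);

-- all categories and notes for the 15 OCT 2007 meeting
SELECT a.minutescat, a.minutesnotes
FROM   MinutesArchive AS a
JOIN   Meeting        AS m ON m.meetingid = a.meetingid
WHERE  m.meetingdate = '2007-10-15';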
I have a table to store members. For each member, I have flags like manager, verified, etc. How should I be storing the flags? As individual columns in the same table using the bit datatype, or by creating a new table called MemberStatus with one row per flag?
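To make the two options concrete (I am assuming the members table is called Member with a MemberID key; everything here is just an example):

-- Option A: bit columns on the Member table
ALTER TABLE Member ADD IsManager  bit NOT NULL DEFAULT (0),
                       IsVerified bit NOT NULL DEFAULT (0);

-- Option B: one row per flag in a separate table
CREATE TABLE MemberStatus (
    MemberID INT NOT NULL REFERENCES Member (MemberID),
    Flag     VARCHAR(50) NOT NULL,     -- 'manager', 'verified', ...
    PRIMARY KEY (MemberID, Flag)
);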
I need to design a database, but I just don't know if I am doing it correctly.
Basically, I have a piece of equipment which is continuously sending me a lot of gauge readings (high limit, low limit, current level), electrical readings, hydraulic readings, and alarm levels such as fuel level, temperature level, etc., about 200 different types of readings. I may have different types of equipment, and although most of the data is the same, some data types and their availability may differ too, depending on the type of equipment.
I want to keep these readings in a database, and this database will be updated constantly.
Should I put all of these different gauge readings in one big single table? If I do so, I only have one long row of data for a particular piece of equipment. If I put them in different tables, I can't see how I can create relationships among my tables with this kind of data.
Can you help with possible ways of putting these real-time equipment readings into the DB?
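One layout I keep coming back to is a narrow readings table instead of 200 columns. All the names below are placeholders, just to show the idea:

CREATE TABLE Equipment (
    EquipmentID   INT IDENTITY(1,1) PRIMARY KEY,
    EquipmentType VARCHAR(50) NOT NULL
);

CREATE TABLE ReadingType (
    ReadingTypeID INT IDENTITY(1,1) PRIMARY KEY,
    Name          VARCHAR(100) NOT NULL,         -- 'Fuel level', 'Temperature', ...
    Unit          VARCHAR(20)  NULL
);

CREATE TABLE Reading (
    EquipmentID   INT      NOT NULL REFERENCES Equipment (EquipmentID),
    ReadingTypeID INT      NOT NULL REFERENCES ReadingType (ReadingTypeID),
    ReadAt        DATETIME NOT NULL,
    Value         DECIMAL(18,4) NOT NULL,
    PRIMARY KEY (EquipmentID, ReadingTypeID, ReadAt)
);

-- latest fuel-level reading for one piece of equipment
SELECT TOP 1 r.Value, r.ReadAt
FROM   Reading     AS r
JOIN   ReadingType AS t ON t.ReadingTypeID = r.ReadingTypeID
WHERE  r.EquipmentID = 123 AND t.Name = 'Fuel level'
ORDER BY r.ReadAt DESC;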
I am trying to create a database that is used to create/store estimates for a manufacturing company. There are many things to consider in this estimate but I will isolate this question to the takeoff itself:
There are several categories in which costs are estimated and they are as follows : Product(s), Site Work, Transportation (Shipping), Field Materials, etc...
Should each of the above have its own "Takeoff" table, or would using one table plus a category table be a better way to go?
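The single-table option I am weighing would look roughly like this (names invented; EstimateID is assumed to point at an estimate header table):

CREATE TABLE TakeoffCategory (
    CategoryID   INT IDENTITY(1,1) PRIMARY KEY,
    CategoryName VARCHAR(50) NOT NULL   -- 'Products', 'Site Work', 'Transportation', 'Field Materials', ...
);

CREATE TABLE TakeoffItem (
    TakeoffItemID INT IDENTITY(1,1) PRIMARY KEY,
    EstimateID    INT NOT NULL,                  -- FK to the estimate header table
    CategoryID    INT NOT NULL REFERENCES TakeoffCategory (CategoryID),
    Description   VARCHAR(255)  NOT NULL,
    Quantity      DECIMAL(18,2) NOT NULL,
    UnitCost      DECIMAL(18,2) NOT NULL
);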