Design Of Tables With Large Optional Fields?
Jan 4, 2006
I have a general SQL design-type question.
I want to log errors to a table. If the error is with a URL, I want to store the URL. These URLs can be very large, hundreds of characters, but I only need to store it if it causes the error, which should be very infrequent. Which is the better design:
1. Create a large varchar field in the log table to hold the URL, or NULL if the error wasn't with a URL.
2. Create a foreign key field in the log table pointing to a second URL table, which has a unique ID and a large varchar; create a record in that table only when the error is with a URL.
One concern I have with design 2 is that there could be many other fields that are infrequent. Do I create a separate table for every one?
Richard
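A minimal sketch of the two designs (table names, types, and lengths are mine, not the poster's):

-- Design 1: nullable wide column directly on the log table.
CREATE TABLE dbo.ErrorLog (
    ErrorID   INT IDENTITY(1,1) PRIMARY KEY,
    ErrorText VARCHAR(255) NOT NULL,
    Url       VARCHAR(2000) NULL   -- NULL unless a URL caused the error
);

-- Design 2: URLs live in a second table; the log row carries a nullable FK.
CREATE TABLE dbo.ErrorUrl (
    UrlID INT IDENTITY(1,1) PRIMARY KEY,
    Url   VARCHAR(2000) NOT NULL
);
CREATE TABLE dbo.ErrorLog2 (
    ErrorID   INT IDENTITY(1,1) PRIMARY KEY,
    ErrorText VARCHAR(255) NOT NULL,
    UrlID     INT NULL REFERENCES dbo.ErrorUrl (UrlID)
);

Worth noting: a varchar column only consumes storage for the bytes actually present, so design 1's wide column costs essentially nothing on the many rows where it is NULL.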
Jul 20, 2005
Hi All!
General statement: an FK should not be nullable, to avoid orphans in the DB.
Real life: a business rule says that not every record will have a parent. This is implemented as a child record whose FK is NULL. It works, and it is simpler. A design that satisfies the business rule with a NOT NULL FK can be implemented, but it will be more complicated.
Example: there are clients, and a client might belong to at most one group.
Case A:
Group (GroupID PK, Name, Code, ...)
Client (ClientID PK, Name, GroupID FK NULL)
Case B (cleaner):
Group (GroupID PK, Name, GroupCode, ...)
Client (ClientID PK, Name, ...)
Subtype: GroupedClient (PersonID PK/FK, GroupID FK NOT NULL)
There is one more entity in Case B, and it will require an additional join compared with Case A. Example: select all clients that belong to any group.
Summary question: is it worth going with Case B?
Thank you in advance
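A hedged sketch of the trade-off (DDL is mine; "Group" is renamed Grp because GROUP is a reserved word):

-- Case A: nullable FK on Client.
CREATE TABLE Grp    (GroupID INT PRIMARY KEY, Name VARCHAR(100));
CREATE TABLE Client (
    ClientID INT PRIMARY KEY,
    Name     VARCHAR(100),
    GroupID  INT NULL REFERENCES Grp (GroupID)
);

-- Case B: subtype table; the FK is never NULL.
CREATE TABLE Client2 (ClientID INT PRIMARY KEY, Name VARCHAR(100));
CREATE TABLE GroupedClient (
    ClientID INT PRIMARY KEY REFERENCES Client2 (ClientID),
    GroupID  INT NOT NULL REFERENCES Grp (GroupID)
);

-- "Select all clients that belong to any group":
SELECT ClientID FROM Client WHERE GroupID IS NOT NULL;  -- Case A
SELECT ClientID FROM GroupedClient;                     -- Case B

The extra join in Case B only appears when you also need Client columns; for the membership test alone, Case B is arguably the simpler query.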
Mar 7, 2007
I am having trouble getting my head round participation for some scenarios. Consider the following:
River MUST flow through 1 or more cities.
City MUST have at least 1 river.
This is represented by the PK of River being placed as an FK in City, AND (to represent that a city must have a river) the foreign key does NOT allow NULLs.
So how about...
River MAY flow through 0,1 or more cities. City MAY have ONLY 1 river.
This time we allow the foreign key in City to have NULLs, but what about the other side, the PK of River? How do I represent the MAY here, when a PK value cannot be NULL?
Thanks
Drew
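A sketch of the second scenario (names are mine). The MAY on the river side needs no column at all: "a river MAY flow through zero or more cities" is expressed by the mere absence of City rows referencing it, while "a city MAY have only one river" is the nullable FK:

CREATE TABLE River (RiverID INT PRIMARY KEY, Name VARCHAR(100));
CREATE TABLE City (
    CityID  INT PRIMARY KEY,
    Name    VARCHAR(100),
    RiverID INT NULL REFERENCES River (RiverID)  -- NULL = city has no river
);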
Feb 20, 2008
How can I create a table whose first field is 'tableid INT IDENTITY(1,1)' and whose other fields are the fields from the table "ashu"?
Can this be done in SQL Server without explicitly writing out the "ashu" table's field names?
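One way that should work, sketched below: SELECT ... INTO accepts the IDENTITY() function, so the new table gets the identity column plus every column of "ashu" without naming them (assuming "ashu" has no identity column of its own; the target name is mine):

SELECT IDENTITY(INT, 1, 1) AS tableid,
       a.*
INTO   dbo.ashu_copy
FROM   dbo.ashu AS a;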
Nov 4, 2015
I like writing concise and compact SQL code without cursors if possible. My current dilemma has me stuck though. I have 3 tables, but one of them is optionally used and contains a key element, TimeOut, that determines which Anesthesia CrnaID to use. It is the optionally-used part that has me stumped.
Surgery table
CaseID
Patient
(Sample data: 101,SallyDoe 102,JohnDoe)
Anesthesia table
CaseID
CrnaID
(Sample data:
101,Melvin
102,Bart
102,Jack)
AnesthesiaTime table (this table is optionally used, only when the CRNAs take a break on long cases)
CaseID
CrnaID
TimeIn
TimeOut
(Sample data:
102,Jack,0800,1030
102,Bart,1030,1130
102,Jack,1130,1215)
Selecting Patient with an INNER JOIN to Anesthesia produced too many case results. So I figured out that the AnesthesiaTime table only gets used if the anesthesia guys take time-outs, which doesn't happen all the time. I could use TOP 1 on the Anesthesia table, but technically I need to read the AnesthesiaTime table, locate the last TimeOut, and pull that CRNA, Jack. I'm not sure how to deal with an optional table. I believe IF EXISTS will be pertinent, but I'm not sure how to build this query. I've tried a subquery without success.
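One hedged way to fold in the optional table, using OUTER APPLY (SQL Server 2005 and later): take the CRNA with the latest TimeOut when AnesthesiaTime rows exist, and fall back to the Anesthesia table otherwise.

SELECT s.CaseID,
       s.Patient,
       COALESCE(brk.CrnaID, an.CrnaID) AS CrnaID
FROM Surgery AS s
OUTER APPLY (SELECT TOP 1 t.CrnaID        -- last time-out row, if any
             FROM AnesthesiaTime AS t
             WHERE t.CaseID = s.CaseID
             ORDER BY t.TimeOut DESC) AS brk
OUTER APPLY (SELECT TOP 1 a.CrnaID        -- fallback when no time-outs exist
             FROM Anesthesia AS a
             WHERE a.CaseID = s.CaseID) AS an;

With the sample data this returns 101/SallyDoe/Melvin and 102/JohnDoe/Jack.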
Oct 11, 2000
Hi:
I'm trying to transfer a table from a SQL Server 7 database to another SQL Server 7 database on another server. The table has a text field with lots of data (~0.5-1 GB). I'm using the export wizard and the transfer appears to complete successfully, but when I view it, the text field data has been truncated.
Any ideas?
Thanks, Nicole Lane
Sep 25, 2007
Hi,
What's the most efficient way to store the following information:
* Table contains 1 million listings
* Each listing can be geo-targeted to any of the 200+ countries
* Searches return listings based on location
Storage options:
Option #1 (normalized)
* Listings (PK listingID int) [1 million rows]
* ListingLocations (listingID, locationID) [could be up to 200 million rows]
Option #2 (denormalized)
* Listings (PK listingID int, plus a binary(32) bit-mask with one bit for each of the 200 locations)
Usage: usually the query will simply look up listings based on some keywords and get back 50-200 listings. Then the application (C#) will filter the listings based on location.
Does anyone have experience with similar structures? Which option is more efficient?
I know that using the intersection table in Option #1 is the "proper" relational-DB way of doing things. However, I do not like the idea of storing the listingID so many times (once for each locationID).
Thanks,
Av
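For what it's worth, a sketch of Option #1 with the intersection table clustered location-first, so that a lookup by location is a single range seek (the index choice is mine):

CREATE TABLE dbo.ListingLocations (
    listingID  INT      NOT NULL,
    locationID SMALLINT NOT NULL,
    CONSTRAINT PK_ListingLocations
        PRIMARY KEY CLUSTERED (locationID, listingID)
);

-- All listings targeted at one location:
SELECT ll.listingID
FROM   dbo.ListingLocations AS ll
WHERE  ll.locationID = 44;   -- sample location id

At roughly six bytes of data per row, the 200-million-row worst case is on the order of a few GB, and the repeated listingID is just a 4-byte integer, not a duplicated listing.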
Apr 5, 2001
I have an idea to use the LIKE operator (with wildcards) to match large (>10 KB) fields of 'text' and 'ntext' types. Are there known problems here?
Thanks
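LIKE does work against text and ntext columns; the known problem is that a pattern with a leading wildcard can never use an index, so every large value gets scanned. A tiny sketch (names assumed):

SELECT doc_id
FROM   dbo.Documents
WHERE  body LIKE '%keyword%';  -- forces a scan of every 10 KB+ value

For repeated keyword searches over big text columns, full-text indexing (CONTAINS) is usually the faster route.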
Jun 9, 2015
I have a table with raw scientific test results in a single field, some of which exceed 25 MB. I need to parse the field to find and aggregate selected values.
Table structure is
CREATE TABLE [dbo].[Gxxx_Data](
[id] [uniqueidentifier] NOT NULL,
[Status] [nvarchar](50) NULL,
[GxxxItem_ID] [int] NULL,
[Stats_Data] [varbinary](max) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
[code]...
From this I need to parse and summarize the (assembler) opcodes (MOV, CMPi, SHR, etc.). I need to parse the large [Stats_Data] field to locate the target data. The internal result strings are delimited with CHAR(10); conservative counts are from 64k to over 100k lines in each record. Is there a way to parse the individual lines into another (temp) table that could then be queried/regexed?
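A hedged sketch of one way to do it (variable and temp-table names are mine): cast the varbinary to varchar(max) - which assumes the bytes are single-byte character data - then walk the CHAR(10) delimiters into a temp table. A set-based tally-table splitter would scale better than this loop, but the loop is the easiest to verify:

DECLARE @blob VARCHAR(MAX), @pos BIGINT, @next BIGINT;
DECLARE @id UNIQUEIDENTIFIER;  -- set to the row you want to parse

SELECT @blob = CAST(Stats_Data AS VARCHAR(MAX))
FROM   dbo.Gxxx_Data
WHERE  id = @id;

CREATE TABLE #lines (line_no INT IDENTITY(1,1) PRIMARY KEY,
                     line    VARCHAR(8000));  -- opcode lines are short

SET @pos = 1;
WHILE @pos <= LEN(@blob)
BEGIN
    SET @next = CHARINDEX(CHAR(10), @blob, @pos);
    IF @next = 0 SET @next = LEN(@blob) + 1;   -- last line has no delimiter
    INSERT INTO #lines (line)
    VALUES (SUBSTRING(@blob, @pos, @next - @pos));
    SET @pos = @next + 1;
END;

-- Then aggregate opcodes from the lines, e.g.:
SELECT COUNT(*) AS mov_lines FROM #lines WHERE line LIKE 'MOV%';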
Jul 6, 2006
Hello all,
I have recently been tasked with rewriting a database that holds large volumes of data, whilst ensuring that queries run in optimal time. Having never really delved into this sort of thing before, I hoped you guys might be able to offer some advice and guidance.
The design I have inherited is based around 2 main tables:
[captured_traps]
[id] [int] IDENTITY (1, 1) NOT NULL
[snmp_version] [int] NULL
[community_name] [varchar] (255)
[packet_type] [varchar] (50)
[oid] [varchar] (500)
[source_ip] [varchar] (15)
[generic] [int] NULL
[specific] [int] NULL
[time_stamp] [varchar] (15)
[trap_entered] [datetime] NULL
[status] [int] NULL
[captured_varbinds]
[id] [int] IDENTITY (1, 1) NOT NULL
[captured_trap_id] [int] NOT NULL
[varbind_oid] [varchar] (500)
[varbind_text] [varchar] (500)
The relationship between the two tables is on the "captured_traps (id)" to "captured_varbinds (captured_trap_id)". Currently the "captured_traps" table contains around 350 million rows, the "captured_varbinds" table contains around 900 million rows.
Now as you can probably gather this model runs like a....well it sort of hobbles more than runs hence the need to redesign.
My current thoughts on this are:
- Normalising all varchars - there are a lot of duplicate values in most of the varchar fields.
- Full Text Indexing
However beyond that I am not sure which route to go down. After googling for most of today I have come across a number of "solutions" however I do not want to go steaming down the track of one of these to discover that it is fatally flawed somewhere.
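On the normalisation point, a hedged sketch of the usual pattern, shown for the oid column (the same applies to community_name, source_ip, and so on): hoist the duplicated varchars into a lookup table so the 350M/900M-row tables store a 4-byte key instead of up to 500 bytes per row.

CREATE TABLE dbo.oids (
    oid_id INT IDENTITY(1,1) PRIMARY KEY,
    oid    VARCHAR(500) NOT NULL UNIQUE
);

-- On load: resolve (or add) the key, then store oid_id in captured_traps
-- in place of the varchar column.
DECLARE @oid VARCHAR(500)
SET @oid = '1.3.6.1.2.1.1.3.0'
IF NOT EXISTS (SELECT 1 FROM dbo.oids WHERE oid = @oid)
    INSERT INTO dbo.oids (oid) VALUES (@oid)
-- ...then: SELECT oid_id FROM dbo.oids WHERE oid = @oid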
Aug 29, 2013
For large databases, is it a good idea to create indexes for fields that are used in WHERE clauses? Does that improve performance and reduce overhead?
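Generally yes, for columns that are selective: an index lets the engine seek instead of scanning the whole table, at the cost of extra writes and disk space on every insert/update. A minimal sketch (names assumed):

CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID);

-- This predicate can now be answered with an index seek:
SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = 1234;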
Feb 28, 2006
The data in the field is OK and can be displayed, but in the Management Studio Express tool the field shows a bunch of pound signs.
May 16, 2007
Hello:
I have designed a CV database with the complete CV stored in a TEXT field. There is a keyword search which also queries the TEXT field. The query conditions are defined in T-SQL submitted through an ASP page. There are about 20,000 records now. When querying the database for a keyword search I am receiving time-out errors. Is there any solution other than Index Server to rectify this situation? How can I speed up the query execution time? Please advise.
Rgds
pooja
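One hedged possibility, if this is SQL Server 2005 or later with the full-text components installed (all names below are assumed): a full-text index lets CONTAINS use an inverted word index instead of scanning every TEXT value the way LIKE does.

CREATE FULLTEXT CATALOG CvCatalog;
CREATE FULLTEXT INDEX ON dbo.Resumes (CvText)
    KEY INDEX PK_Resumes    -- the table's unique key index
    ON CvCatalog;

SELECT ResumeID
FROM   dbo.Resumes
WHERE  CONTAINS(CvText, '"sql server" AND "asp"');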
Nov 12, 2007
I am designing a table where the objects that the table represents could have hundreds of boolean attributes. Which method of design would you choose for this scenario?
1. Keep the booleans in the original object's table, with the potential for hundreds of NULLs in a row.
2. Create two more tables: one that holds the boolean attribute names and IDs, and another that relates an object (in the original table) to an attribute name/ID. No NULLs, but lots of joining.
The second method would normalize it, but I would suffer a performance cost, whereas the first method is the easiest/quickest (no joins) but leaves tons of NULL values. Thanks
Ben
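A hedged sketch of both designs (names are mine):

-- Design 1: wide table, one nullable BIT per attribute.
CREATE TABLE dbo.Widget (
    WidgetID INT PRIMARY KEY,
    IsRed    BIT NULL,
    IsHeavy  BIT NULL
    -- ...hundreds more BIT columns
);

-- Design 2: attribute catalog plus a relation table; only TRUE facts stored.
CREATE TABLE dbo.Attribute (
    AttributeID INT PRIMARY KEY,
    Name        VARCHAR(100) NOT NULL UNIQUE
);
CREATE TABLE dbo.WidgetAttribute (
    WidgetID    INT NOT NULL REFERENCES dbo.Widget (WidgetID),
    AttributeID INT NOT NULL REFERENCES dbo.Attribute (AttributeID),
    PRIMARY KEY (WidgetID, AttributeID)
);

One detail worth weighing: SQL Server packs up to 8 BIT columns per byte, so the "tons of NULLs" in design 1 cost far less storage than they appear to.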
Jul 20, 2005
hi all (happy Raksha Bandhan day). We have a piece of sales-automation software running for a customer. He was happy for the first month with the product, but then popped up asking for some extra fields. No problem: I added them to the database and put separate code in my application for each field. But later, every two days, he was adding new fields... so I thought of putting in some built-in logic for user-defined fields. Second, his user-defined fields need types (numeric, string) and length validation, and I do not know the best way to achieve this. Should I make a separate table where I define the field name, data type, and validation, and then write generic logic for it in my application code? Does anyone have a proven design for user-defined fields? Just hoping I can get an idea...
When I die, I die programming...
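A hedged sketch of that metadata-table idea (all names are mine): one table defines each user-defined field and its validation rules, another stores the values; the application reads udf_definition to validate before inserting into udf_value.

CREATE TABLE dbo.udf_definition (
    field_id   INT IDENTITY(1,1) PRIMARY KEY,
    field_name VARCHAR(100) NOT NULL,
    data_type  VARCHAR(20)  NOT NULL,  -- e.g. 'string' or 'numeric'
    max_length INT NULL                -- length validation for strings
);

CREATE TABLE dbo.udf_value (
    record_id INT NOT NULL,            -- the business row being extended
    field_id  INT NOT NULL REFERENCES dbo.udf_definition (field_id),
    value     VARCHAR(1000) NULL,      -- stored as text; app enforces type
    PRIMARY KEY (record_id, field_id)
);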
Nov 29, 2006
I am using BI Dev Studio for SS2005 in a research (as opposed to a production) environment. Often I want to compare the results of multiple models using the same attributes. If I switch to a different model, the Design view completely resets. Is there any way to retain the same field names with different models in the Design view?
My current workaround is to give my models similar names with AR, DT, CL, LOG, NN suffixes and make global changes in the DMX.
I have consulted the following without finding an answer:
http://msdn2.microsoft.com/en-us/library/ms178445.aspx
http://msdn2.microsoft.com/en-us/library/ms175642.aspx
http://msdn2.microsoft.com/en-us/library/ms175678.aspx
http://msdn2.microsoft.com/en-us/library/ms175637.aspx
Thanks for your help,
Sam
Jul 12, 2001
We currently have a data warehouse running on SQL 7.0, SP2. One of our primary fact tables now has well over 155 million rows in it. The table is not very wide, as it only contains 17 columns, most of which are defined as integers. The entire database is only 20 GB.
The issue is that the loads from the staging table to this fact table have significantly deteriorated over the last month or so, dropping from over 400 transactions per second to around 85. We drop all the indexes on the fact table before we load the data into it.
Are there issues with a manageable table size in SQL 7.0 that we need to be concerned about? And should we consider partitioning the table into several smaller tables and join them with a "union all" view?
I really need to get this performance issue resolved, as our IT support vendor is pushing us to port the data warehouse to UDB because they tell us that SQL server is not scalable enough to handle this volume of data.
Thanks for any help you can provide.
George M. Parker
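A hedged sketch of the "union all" view idea (names and ranges are mine; SQL 7.0 supports this for SELECTs, and SQL 2000 extends it to updatable partitioned views): each member table carries a CHECK constraint on the partitioning column so the optimizer can skip members that cannot match the predicate.

CREATE TABLE dbo.Fact_200105 (
    date_key INT NOT NULL CHECK (date_key BETWEEN 20010501 AND 20010531),
    measure  INT NOT NULL
    -- ...remaining columns
);
CREATE TABLE dbo.Fact_200106 (
    date_key INT NOT NULL CHECK (date_key BETWEEN 20010601 AND 20010630),
    measure  INT NOT NULL
);
GO
CREATE VIEW dbo.Fact AS
SELECT * FROM dbo.Fact_200105
UNION ALL
SELECT * FROM dbo.Fact_200106;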
Aug 10, 2000
Hi,
How can I partition large tables so that the inserts and updates I am doing on them take less time?
I would like to know how to partition large tables, and how doing so improves performance.
Thanks.
Mar 13, 2001
How can I find the largest 5 or 10 tables in a database?
Thanks in advance
Chan
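On SQL 7.0/2000 this can be read straight out of sysindexes (indid 0 = heap, 1 = clustered index; dpages counts 8 KB data pages). A sketch - the counts can drift over time, and DBCC UPDATEUSAGE refreshes them:

SELECT TOP 10
       o.name,
       i.rows,
       i.dpages * 8 AS data_kb
FROM   sysobjects o
JOIN   sysindexes i ON i.id = o.id
WHERE  o.type = 'U'
  AND  i.indid IN (0, 1)
ORDER BY i.dpages DESC;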
Mar 7, 2007
Hi there, I am having a problem with SQL Server. I have a table called ZipCodes with around 80,000 rows; the table is around 100 MB, and it now sits on a web server. My problem is that when I fire a query that needs to go through the whole table, the estimated time to execute the query comes to 13 seconds, while the cursor threshold is set to 7 seconds (and I can't change that), so SQL Server cancels the query. I need some methodology/technique for searching large tables with minimum calculation in minimum time. Any ideas?
Mar 19, 2001
Is it possible to compress the large tables in the database, like the COMPRESS or ARCHIVE options we use to reduce the size of files stored on an operating system?
I know there is a difference between a file stored on disk and a table created in the database, but I am currently facing space problems wherein I have to manage my database within the space available, so please advise me if the option is available in SQL Server 6.5 or 7.
I will be happy to get a solution quickly, as I am facing this problem right now and waiting for your reply.
Thank you
Amol
Feb 12, 2008
I am fairly new to SQL, so please forgive me if my question is a bit elementary. I need to pull two individual tables out of a massive DB into a new DB for testing.
Thanks for the help.
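If both databases live on the same server, SELECT ... INTO is the quick way (names assumed); note that it copies data only, so indexes and constraints must be recreated, and cross-server copies want the Import/Export Wizard or a linked server instead:

SELECT *
INTO   TestDB.dbo.Orders
FROM   ProdDB.dbo.Orders;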
Feb 15, 2008
I'm in the midst of a long file-conversion job. Today I found that one of the tables (converted from CSV) has 6.7 million records. My SQL script, which I use to convert the weird original date format into something the rest of the planet uses, times out due to the size.
Does anyone know of a file utility to automagically split SQL Server 2005 tables for later re-combining, once my scripts have successfully completed their task on the smaller tables?
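Rather than physically splitting the table, a hedged alternative is to run the conversion in batches so no single statement runs long enough to time out (UPDATE TOP is SQL Server 2005 syntax; the column names are mine):

DECLARE @batch INT;
SET @batch = 50000;

WHILE 1 = 1
BEGIN
    UPDATE TOP (@batch) dbo.BigImport
    SET    fixed_date = CONVERT(DATETIME, raw_date, 103)  -- 103 = dd/mm/yyyy
    WHERE  fixed_date IS NULL;      -- NULL marks not-yet-converted rows

    IF @@ROWCOUNT < @batch BREAK;
END;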
Nov 14, 2007
I am making a warehouse management system. The system will contain a lot of data, but only a small portion of it will be accessed frequently. Most of the data will be accessed only seldomly, but the customer wants to keep all historical data (just in case they should need it sometime). I have figured I need to partition the tables somehow, to keep what is fresh in one place and historical data in another.

What is the best way to do this? I am thinking about making historical tables. For example, I could have a table named PickList and another named PickListHistorical. When a picklist is processed/complete I can move it over to the PickListHistorical table, but when users need to search for a specific picklist I have to look in both tables. I can of course create a view for this to make it transparent.

SQL Server 2005 introduced some automatic partitioning. Would it be better to use that than to create my own historical tables? If so, can you please tell me how to do it?
Thank you!
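A hedged sketch of SQL Server 2005 table partitioning (names are mine, and the filegroups must already exist): rows route to storage by date, so fresh and historical picklists live apart but are queried as one table.

CREATE PARTITION FUNCTION pf_picklist (DATETIME)
AS RANGE RIGHT FOR VALUES ('2006-01-01', '2007-01-01');

CREATE PARTITION SCHEME ps_picklist
AS PARTITION pf_picklist
TO (fg_history, fg_history, fg_current);

CREATE TABLE dbo.PickList (
    PickListID INT      NOT NULL,
    CreatedAt  DATETIME NOT NULL,
    -- ...other columns
    CONSTRAINT PK_PickList PRIMARY KEY (PickListID, CreatedAt)
) ON ps_picklist (CreatedAt);

The partitioning column has to be part of every unique index on the table, which is why CreatedAt appears in the primary key here.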
Oct 19, 2007
I've successfully created SSIS packages where I compare two tables in different databases on different servers, and this works well enough to compare hundreds of thousands of records quickly. The process becomes a huge performance problem, though, when comparing tables that each contain tens of millions of records.
One database is on a SQL 2005 box and the other DB is SQL 7.0 so the lookup component fails for this type of SQL Server. I've been implementing merge joins and conditional components to do my standard table comparisons.
Is there another way to implement this process or maybe partition it somehow to take pieces of the table at a time and compare them? I'm open to ideas.
Jan 9, 2008
I'm using DataAdapters with my SQL database, with the intention that all the SELECT, UPDATE, INSERT, and DELETE commands be automatically generated. One table is huge, so I'm wondering: is it more efficient to use "SELECT TOP 1 * FROM hugetable" instead of "SELECT * FROM hugetable" in order to facilitate the generation of commands? I hope this isn't too confusing.
Thanks,
Geoff
May 10, 2008
I have 4 tables with the following record counts:
1) 6755
2) 2021
3) 2021
4) 355
They all have the same columns. However, they need to be separate, or at least queried separately. I'll be accessing this database via the web. I was at first afraid that a large table would cause a major slowdown when accessing the db, so I broke it up into 4 tables. If I combined all 4 tables into one large table and just had a column that differentiated the 4, how significant would the change in speed be when accessing the table? It's not a big deal to keep them separate; it's just that when I have to add or remove a column from one table I have to do it in all of them. Furthermore, I'm using a module from DevExpress (don't know if anyone has heard of it), and when you use its gridview it loads up the entire table even though you're paging (which I think is retarded), so for that reason I was afraid it would slow down my access to the db. Any thoughts?
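If the tables do get merged, a hedged sketch of the single-table shape (names are mine): a discriminator column plus an index on it makes each old "table" a cheap filter, and at roughly 11,000 total rows either layout will be fast.

CREATE TABLE dbo.Items (
    ItemID   INT IDENTITY(1,1) PRIMARY KEY,
    Category TINYINT NOT NULL,   -- 1..4, replaces the four tables
    Name     VARCHAR(100) NOT NULL
    -- ...the shared columns
);
CREATE NONCLUSTERED INDEX IX_Items_Category ON dbo.Items (Category);

-- Each old table becomes a filter:
SELECT ItemID, Name FROM dbo.Items WHERE Category = 2;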
Nov 29, 2000
We are inserting into a table which includes an identity primary key column. When the table gets really large (i.e. 1.5 million records), the performance of the inserts degrades.
I noticed that when we insert into the table an exclusive lock on the table is obtained. Do inserts into tables with identities always lock the table?
Given the table size is unavoidable, does anyone have a suggestion to improve the performance?
Thanks,
Matt
Aug 4, 2005
I have a few hundred users, maybe a dozen or two active at any given time, accessing the same database via ASP. The database has many tables, one being a very large orders table with a few million records, against which I have created a view. A view only because I need to allow the user to filter quite extensively against the results. The users typically only need to view records for the last 30 days, and results for each user might be five thousand records or less.
My question is this: would I be better off writing each user's resultset to a temp table for that user's session, letting the user's filtering and sorting go against that temp table, and increasing my hardware requirements to accommodate that, possibly to the point of creating a database cluster? Or would I be better off leaving it as is, where every user uses the same view?
FYI: each user may need visibility to only a handful of fields, but overall the view must expose many fields.
Any thoughts on this would be greatly appreciated. Thanks in advance.
Dave
Nov 1, 2007
I have the following table structure:
tableA (~85,000 rows) primary key = [colA,colB]
tableB (~850,000 rows) primary key = [colA,colC]
tableC (~120,000,000 rows) primary key = [colA,colB,colC]
IMPORTANT: colC is DATETIME
For a SET of rows in tableA (about 50,000) I need to pull the MOST RECENT (given a date) corresponding values from tables B and C. The only way I can think of doing this is the following:
SELECT tableA.colA
,(SELECT TOP 1 colX FROM tableB WHERE colA = tableA.colA AND colC <= @INPUTDATE ORDER BY colC desc)
,(SELECT TOP 1 colY FROM tableB WHERE colA = tableA.colA AND colC <= @INPUTDATE ORDER BY colC desc)
,... --some more columns from tableB
,(SELECT TOP 1 colX FROM tableC WHERE colA = tableA.colA AND colB = tableA.colB AND colC <= @INPUTDATE ORDER BY colC desc)
,(SELECT TOP 1 colY FROM tableC WHERE colA = tableA.colA AND colB = tableA.colB AND colC <= @INPUTDATE ORDER BY colC desc)
,... --some more columns from tableC
FROM tableA
WHERE tableA.colX = 'some criteria'
Is there any other way anyone can suggest? Unfortunately, because tableC is so large, the disk IO (I think) causes this query to take over an hour. (If I had monster RAM and super fast disk this wouldn't be as big an issue, but that's not an option right now )
Thanks in advance!
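One alternative to consider on SQL Server 2005 or later (the column aliases are mine): OUTER APPLY makes one probe per table instead of one correlated subquery per column, and an index on tableC (colA, colB, colC) turns each probe into a single seek.

SELECT a.colA,
       b.colX AS b_colX, b.colY AS b_colY,
       c.colX AS c_colX, c.colY AS c_colY
FROM tableA AS a
OUTER APPLY (SELECT TOP 1 colX, colY
             FROM tableB
             WHERE colA = a.colA AND colC <= @INPUTDATE
             ORDER BY colC DESC) AS b
OUTER APPLY (SELECT TOP 1 colX, colY
             FROM tableC
             WHERE colA = a.colA AND colB = a.colB AND colC <= @INPUTDATE
             ORDER BY colC DESC) AS c
WHERE a.colX = 'some criteria';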
Jan 25, 2008
Hi gurus, I'm creating a web application where I will have a large number of tables (between 10k and 20k). This is done for the sake of scalability, as tables will be moved to different database servers as the application grows, and also for performance (smaller indexes). I'm worried, though, about how having a large number of tables could affect the performance of SQL Server, since the application will start on a single database server. I tried to find some resources on this on the internet but couldn't find any.
I would really appreciate it if you could give me some advice, and if you have any good links that would be great...
Waleed Eissa
http://www.waleedeissa.com