Design -- Should This Be Split Up Into A Few Tables?
Feb 5, 2005
I'm grappling with this design problem right now:
I have a table of users. Every user has an e-mail address and (hashed) password. Some of those users work for a company, and some of them do not. Of those who do not work for a company, some are salespeople who sell to one or more companies. Some users are simply administrators who don't work for a specific company. So here's what my users table looks like right now: "UserID, Email, Password, CompanyID (Nullable), IsAdmin"
And here's my companies table: "CompanyID, CompanyName, SalespersonID"
Of course, I could separate it out and make a Users table, an Employees table, and a Salespeople table. The way the relationship works out, though, I could use the same ID number for all three tables, and that indicates to me that perhaps they all belong in the same table. It seems silly, after all, to have a Salespeople table whose only field is "UserID."
Two factors of the first design concern me: First is the fact that a salesperson could also have a company. I guess I could write a check constraint to prevent this, but doesn't having the companyID in the Users table violate a normalization rule? Maybe? The second is the fact that the Companies table relies upon Users, which in turn relies upon Companies. In OOP, this usually isn't a good thing, but I'm not sure whether it's cause for concern in a relational database.
Anyway, I really don't know what I should be doing with this design. Any suggestions?
Thanks in advance,
-Starwiz
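For what it's worth, here is a minimal sketch of the subtype layout described above, with the role-specific tables reusing the Users key; all names and types are illustrative, and the salesperson-to-company link is shown as a junction table only so that one salesperson can sell to several companies.
CREATE TABLE Users (
    UserID   int PRIMARY KEY,
    Email    varchar(256) NOT NULL,
    Password varchar(64) NOT NULL,   -- hashed
    IsAdmin  bit NOT NULL DEFAULT 0
);
CREATE TABLE Companies (
    CompanyID   int PRIMARY KEY,
    CompanyName varchar(100) NOT NULL
);
-- Employees and Salespeople are subtypes of Users and share its key
CREATE TABLE Employees (
    UserID    int PRIMARY KEY REFERENCES Users(UserID),
    CompanyID int NOT NULL REFERENCES Companies(CompanyID)
);
CREATE TABLE Salespeople (
    UserID int PRIMARY KEY REFERENCES Users(UserID)
);
-- which companies a salesperson sells to
CREATE TABLE CompanySalespeople (
    CompanyID int NOT NULL REFERENCES Companies(CompanyID),
    UserID    int NOT NULL REFERENCES Salespeople(UserID),
    PRIMARY KEY (CompanyID, UserID)
);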
View 1 Replies
Mar 6, 2015
I have an ipaddress column that I need to split into two columns, because it contains values like the ones below:
172.26.248.8,Fe80::7033:acba:a4bd:f874
172.26.248.8,Fe80::7033:acba:a4bd:f874
172.26.248.8,Fe80::7033:acba:a4bd:f874
I have written the query below, but it throws an error.
select SUBSTRING(IPAddress0, 1, CHARINDEX(',', IPAddress0) - 1) as IPAddress0
from IPADDRESS
error:
Msg 537, Level 16, State 2, Line 1
Invalid length parameter passed to the LEFT or SUBSTRING function.
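That error typically means some rows have no comma, so CHARINDEX returns 0 and SUBSTRING is asked for a length of -1. A minimal sketch of a guarded version, reusing the column and table names from the post (the output aliases are illustrative):
SELECT
    CASE WHEN CHARINDEX(',', IPAddress0) > 0
         THEN SUBSTRING(IPAddress0, 1, CHARINDEX(',', IPAddress0) - 1)
         ELSE IPAddress0
    END AS IPv4Address,
    CASE WHEN CHARINDEX(',', IPAddress0) > 0
         THEN SUBSTRING(IPAddress0, CHARINDEX(',', IPAddress0) + 1, LEN(IPAddress0))
         ELSE NULL
    END AS IPv6Address
FROM IPADDRESS;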
View 16 Replies
View Related
Sep 27, 2005
I have a large table that I'm planning on splitting out into 5 smaller ones. What I need to do is maintain some central repository for auto-numbering new records to make sure that no 2 records in different tables have the same unique ID. Thanks in advance!
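One classic approach is a single key table that every insert draws from, so IDs stay unique across all five tables; a minimal sketch with illustrative names:
CREATE TABLE dbo.KeyCounter (NextId int NOT NULL);
INSERT INTO dbo.KeyCounter (NextId) VALUES (0);

-- grab the next value atomically; the UPDATE both increments and returns it
DECLARE @NewId int;
UPDATE dbo.KeyCounter
SET @NewId = NextId = NextId + 1;
SELECT @NewId AS NewId;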
View 14 Replies
View Related
Nov 20, 2007
Hi Guys
I have an application that runs at several sites and has a table with 36 columns, mostly ints and small varchars.
I currently have only one table that stores the data, plus five indexes. Since the table at one location (and others soon) has about 18 million rows, I have been trying to come up with a better solution (but only if needed; I don't think I have to tell you that I am a programmer and not a DBA).
The db file size with all the indexes is more than 10 GB, which in itself is not a problem, but is it a bad solution to have it that way?
The questions are:
Are there any big benefits if I split it into several smaller tables or even smaller databases and make the SPs that get the data aware that, say, 2006's data is in table A and so on?
It's quite important that SELECTs are fast, and that need is far more important than decreasing the size of the database file and so on.
How many rows are okay to have in one table (with 25 columns) before it's too big?
Thanks in advance.
Best regards
Johan, Sweden.
CREATE TABLE [dbo].[Cdr](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Abandon] [varchar](7) NULL,
[Bcap] [varchar](2) NULL,
[BlId] [varchar](16) NULL,
[CallChg] [varchar](6) NULL,
[CallIdentifier] [uniqueidentifier] NULL,
[ChgInfo] [varchar](5) NULL,
[ClId] [varchar](16) NULL,
[CustNo] [smallint] NULL,
[Digits] [varchar](32) NULL,
[DigitType] [varchar](1) NULL,
[Dnis1] [varchar](6) NULL,
[Dnis2] [varchar](6) NULL,
[Duration] [int] NULL,
[FgDani] [varchar](13) NULL,
[HoundredHourDuration] [varchar](3) NULL,
[Name] [varchar](40) NULL,
[NameId] [int] NOT NULL,
[Npi] [varchar](2) NULL,
[OrigAuxId] [varchar](11) NULL,
[OrigId] [varchar](7) NULL,
[OrigMin] [varchar](16) NULL,
[Origten0] [varchar](3) NULL,
[RecNo] [int] NULL,
[RecType] [varchar](1) NOT NULL,
[Redir] [varchar](1) NULL,
[TerId] [varchar](7) NOT NULL,
[TermAuxId] [varchar](11) NULL,
[TermMin] [varchar](16) NULL,
[Termten0] [varchar](3) NULL,
[Timestamp] [datetime] NOT NULL,
[Ton] [varchar](1) NULL,
[Tta] [int] NULL,
[Twt] [int] NULL,
[DateValue] [int] NULL,
[TimeValue] [int] NULL,
[Level] [varchar](50) NOT NULL CONSTRAINT [DF_Cdr_Level] DEFAULT ('x:'),
CONSTRAINT [PK_Cdr] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 10) ON [PRIMARY]
) ON [PRIMARY]
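One option worth weighing before hand-splitting into per-year tables is SQL Server 2005 table partitioning, which keeps a single logical table (and single set of SPs) while physically separating the years. A rough sketch; the boundary dates, the partitioned copy of the table, and the reduced column list are all illustrative:
CREATE PARTITION FUNCTION pfCdrByYear (datetime)
AS RANGE RIGHT FOR VALUES ('2006-01-01', '2007-01-01', '2008-01-01');

CREATE PARTITION SCHEME psCdrByYear
AS PARTITION pfCdrByYear ALL TO ([PRIMARY]);

-- a new table created on the scheme; existing rows would have to be moved in
CREATE TABLE dbo.CdrPartitioned (
    [Id] int IDENTITY(1,1) NOT NULL,
    [Timestamp] datetime NOT NULL,
    [Duration] int NULL
) ON psCdrByYear ([Timestamp]);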
View 14 Replies
View Related
Feb 21, 2005
What's the best way to convert a large set of records from a simple schema where all fields are in one table to a schema where fields are split across two tables? The two table setup is necessary for reasons not worth getting into here.
Doing this via cursor is pretty straightforward, but is there a comparable set-based solution?
Here are sample create table commands. Obviously, the example below is simplified for discussion purposes.
-- One record from here will produce a record in TargetParentRecords and a record in TargetChildRecords for a total of two records.
CREATE TABLE OriginalSingleTableRecords (
ID INT IDENTITY (1, 1) NOT NULL,
ColumnA VARCHAR(100) NOT NULL,
ColumnB VARCHAR(100) NOT NULL,
CONSTRAINT PK_OriginalSingleTableRecords PRIMARY KEY CLUSTERED (ID)
)
CREATE TABLE TargetParentRecords (
ParentID INT IDENTITY (1, 1) NOT NULL,
ColumnA VARCHAR(100) NOT NULL,
CONSTRAINT PK_TargetParentRecords PRIMARY KEY CLUSTERED (ParentID)
)
-- Each row in this table must link to a TargetParentRecords row
CREATE TABLE TargetChildRecords (
ID INT IDENTITY (1, 1) NOT NULL,
ParentID INT NOT NULL, -- References TargetParentRecords.ParentID
ColumnB VARCHAR(100) NOT NULL,
CONSTRAINT PK_TargetChildRecords PRIMARY KEY CLUSTERED (ID)
)
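A set-based version is possible if the parent table can temporarily carry the original key so the child rows can be matched back; a sketch under that assumption (the SourceID column exists only for the migration):
ALTER TABLE TargetParentRecords ADD SourceID INT NULL;
GO
INSERT INTO TargetParentRecords (ColumnA, SourceID)
SELECT ColumnA, ID
FROM OriginalSingleTableRecords;

INSERT INTO TargetChildRecords (ParentID, ColumnB)
SELECT p.ParentID, o.ColumnB
FROM OriginalSingleTableRecords o
JOIN TargetParentRecords p ON p.SourceID = o.ID;
GO
ALTER TABLE TargetParentRecords DROP COLUMN SourceID;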
View 5 Replies
View Related
Jul 18, 2006
I have an input file with fixed-width columns that I want to import into two tables. Five of the input columns go to one table and the remaining 15 go to another table. What's a good way to do this in SSIS?
TIA,
Barkingdog
View 3 Replies
View Related
Feb 7, 2006
Hello,
Hoping someone here can help. Perhaps I'm missing something obvious, but I'm surprised not to see a data flow task in SSIS for splitting *columns* to different destinations. I see the Conditional Split task can be used to route a *row* one way or another, but what about columns of a single row?
As a simple and somewhat contrived example, let's say I have a row with twelve fields and I'm importing the row into a normalized data structure. There are three target tables with a 1-to-1 relationship (that is, logically they are one table, but physically they are three tables, with one of them considered the "primary" table), and the twelve input fields can be mapped to four columns in each of the three tables.
How do I "split" the columns? The best way I can see is to Multicast the row to three different OLE-DB Destinations, each of which inserts to one of the three target tables, only grabbing the four fields needed from the input row.
Or should I feed the row through three successive OLE-DB Command tasks, each one inserting into the appropriate table? This would offer the advantage, theoretically, of allowing me to grab the identity-based surrogate primary key from the first of the three inserts in order to enable the two subsequent inserts.
Thoughts?
Thanks in advance,
Dan
View 5 Replies
View Related
Mar 14, 2006
I imported all rows of my txt file using SSIS 2005 into a table. I am now trying to figure out how to split out the header, payment rows, and maintenance rows. First, some information.
An example of table results is here:
http://www.webfound.net/split.txt
The table has just one field of type varchar(100), because the incoming file is a fixed-length file at 100 bytes per row.
The header rows are the rows with HD in them, followed by detail rows for that header (see here: http://www.webfound.net/rows.jpg).
I need to
1) Split out the header into a header table
2) Split out the maintenance rows (related to the header) into a maint table
3) Split out the payment rows (related to the header) into a payment table
I'll need to maintain a PK/FK relationship between each Header and it's corresponding maint and payment rows in the other 2 tables.
To determine if it's a payment vs. maintenance row, I need to compare chars 30 - 31. If it contains 'MT' then you know it's a maintenance row, else it's a payment row.
How in the hell do I do this???
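If the rows have already landed in the single-column staging table, one T-SQL sketch of the split looks like this. The staging table name (dbo.Staging), its column name (RawRow), and the three target tables are placeholders, and it assumes header rows can be recognized by the HD marker:
-- header rows
INSERT INTO dbo.HeaderRows (RawRow)
SELECT RawRow FROM dbo.Staging WHERE RawRow LIKE '%HD%';

-- maintenance rows: chars 30-31 = 'MT'
INSERT INTO dbo.MaintRows (RawRow)
SELECT RawRow FROM dbo.Staging
WHERE RawRow NOT LIKE '%HD%' AND SUBSTRING(RawRow, 30, 2) = 'MT';

-- payment rows: everything else
INSERT INTO dbo.PaymentRows (RawRow)
SELECT RawRow FROM dbo.Staging
WHERE RawRow NOT LIKE '%HD%' AND SUBSTRING(RawRow, 30, 2) <> 'MT';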
View 4 Replies
View Related
Mar 14, 2006
SSIS 2005
Ok, I have a task in SSIS that does the following and works:
1) Brings in a txt file
2) Using a conditional component, checks for a value in the row.
3) Based on the value, splits the row into one of 3 tables (Header, Maintenance, or Payment)
Here is a print screen of what I have so far, which splits Header rows into its own table, Maintenance rows into its own table, and Payment rows into its own table:
http://www.webfound.net/qst_how_to_add_header_PK_and_FKs.JPG
Here is a print screen of the conditional split:
http://www.webfound.net/conditional_split.jpg
Please take a look at the txt file here before it's processed:
http://www.webfound.net/split.txt
http://www.webfound.net/rows.jpg
Notice that the pattern is a header row, followed by its corresponding detail rows. The detail rows are either Maintenance or Payment rows.
I need to somehow, in the Script component or some other way, assign a unique HeaderID (PK) to each of the header rows and add that ID to its corresponding Maintenance and Payment detail rows in their tables as a foreign key. The problem is:
1) I don't know how to do this in the flow of the components as I have it now
2) How do I tell it to create a new HeaderID and Header FKs for the detail rows based off of each new Header row?
In the end (much later on in my entire package), the goal is to be able to run a stored proc to join and select the Header and Detail rows back into a final table so I can then do more processing, such as splitting each header and its detail rows into their own txt files, etc. I don't need to go into details why, but know that this is the goal; therefore I need to relate each header row with its corresponding detail rows that are split off into the MaintenanceRow and PaymentRow tables.
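An alternative to generating the key inside the Script component is to land everything in one staging table with an IDENTITY column that preserves load order, and then derive each detail row's HeaderID as the nearest header row above it. A sketch with illustrative names (dbo.Staging(RowNo int IDENTITY, RawRow varchar(100))):
SELECT s.RowNo,
       s.RawRow,
       (SELECT MAX(h.RowNo)                -- most recent header at or above this row
        FROM dbo.Staging h
        WHERE h.RowNo <= s.RowNo
          AND h.RawRow LIKE '%HD%') AS HeaderID
FROM dbo.Staging s;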
View 2 Replies
View Related
Oct 19, 2012
I have an empty employee table and an empty employee_details table. The temp table which I created has, say, 10 columns, of which 6 are from employee and 4 from employee_details. I have loaded some data into the temp table, say 10 rows.
Now a stored procedure using a cursor should be created such that it fetches the rows one by one from the temp table and inserts the values into the employee table (6 columns) and the rest into the employee_details table (4 columns).
This is the scenario.
Here are the column names of my temp table:
CREATE TABLE [dbo].[temp](
[employee_id] [char](7) NOT NULL,
[first_name] [char](50) NOT NULL,
[middle_name] [char](50) NOT NULL,
[last_name] [char](50) NOT NULL,
[title] [char](5) NOT NULL,
[Code] ....
Here the last 4 columns belong to the employee_details table. The stored procedure should fetch the records one by one from temp, split them, and insert into the employee and employee_details tables.
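A rough cursor-based sketch, using only the temp columns shown above and assuming the employee table has columns of the same names; the remaining columns, elided in the post, would be added to the fetch and to the employee_details insert:
CREATE PROCEDURE dbo.SplitTempIntoEmployeeTables
AS
BEGIN
    DECLARE @employee_id char(7), @first_name char(50), @middle_name char(50),
            @last_name char(50), @title char(5);

    DECLARE temp_cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT employee_id, first_name, middle_name, last_name, title
        FROM dbo.temp;

    OPEN temp_cur;
    FETCH NEXT FROM temp_cur INTO @employee_id, @first_name, @middle_name, @last_name, @title;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        INSERT INTO dbo.employee (employee_id, first_name, middle_name, last_name, title)
        VALUES (@employee_id, @first_name, @middle_name, @last_name, @title);

        -- the INSERT INTO dbo.employee_details (...) with the other four columns would go here

        FETCH NEXT FROM temp_cur INTO @employee_id, @first_name, @middle_name, @last_name, @title;
    END

    CLOSE temp_cur;
    DEALLOCATE temp_cur;
END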
View 1 Replies
View Related
Sep 29, 2015
I am trying to join two tables, and it looks like the data is messed up. I want to split the rows because there is more than one value in a row, but somehow I don't see a pattern here to split on.
This how the data is
Create Table #Sample (Numbers Varchar(MAX))
Insert INTO #Sample Values('1000')
Insert INTO #Sample Values ('1024 AND 1025')
Insert INTO #Sample Values ('109 ,110,111')
Insert INTO #Sample Values ('Old # 1033 replaced with new Invoice # 1544')
Insert INTO #Sample Values ('1355 Cancelled and Invoice 1922 added')
Select * from #Sample
This is what is expected...
Create Table #Result (Numbers Varchar(MAX))
Insert INTO #Result Values('1000')
Insert INTO #Result Values ('1024')
Insert INTO #Result Values ('1025')
Insert INTO #Result Values ('109')
Insert INTO #Result Values ('110')
[Code] ....
How can I implement this? I believe wherever there are numbers I need to split them out into separate rows.
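One possible set-based sketch that pulls every run of digits out of each row, using a tally of character positions; it assumes the values fit in 8000 characters and that any run of digits is a number you want:
;WITH Tally AS (
    SELECT TOP (8000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
)
SELECT SUBSTRING(s.Numbers, t.n,
           PATINDEX('%[^0-9]%', SUBSTRING(s.Numbers, t.n, 8000) + 'x') - 1) AS Number
FROM #Sample s
JOIN Tally t
    ON t.n <= LEN(s.Numbers)
WHERE SUBSTRING(s.Numbers, t.n, 1) LIKE '[0-9]'                       -- position is a digit
  AND (t.n = 1 OR SUBSTRING(s.Numbers, t.n - 1, 1) NOT LIKE '[0-9]'); -- and it starts a new run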
View 2 Replies
View Related
Sep 5, 2007
Hi ,
I have two tables within a SQL database. The first table has an identity column and a column which lists one or more email identifiers for a second table,
e.g.
ID Email
-- ----------
1 AS1 AS11
2 AS2 AS3 AS4 AS5
3 AS6 AS7
The second table has a column which has an email identifier and another column which lists one email address for that particular identifier, e.g.
ID EmailAddress
--- ------------------
AS1 abcstu@emc.com
AS2 abcstu2@emc.com
AS3 abcstu3@emc.com
AS4 abcstu4@em.com
AS5 abcstu5@emc.com
AS6 abcstu6@emc.com
AS7 abcstu7@emc.com
AS11 abcstu8@emc.com
I need to create a stored procedure or function that:
1. Selects an Email from the first table, based on a valid ID,
2. Splits the Email field of the first table (using the space separator) so that there is an array of Emails and then,
3. Selects the relevant EmailAddress value from the second table, based on a valid Email stored in the array
Is there any way that this can be done directly within SQL Server using a stored procedure/function without having to use cursors?
Many Thanks,
probetatester@yahoo.com
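A cursor-free sketch is possible by turning the space-separated Email field into rows (the XML trick below is one way to do that on SQL Server 2005) and joining the result to the address table. The table and column names dbo.EmailGroups(ID, Email) and dbo.EmailAddresses(ID, EmailAddress) are placeholders for the two tables described above:
CREATE PROCEDURE dbo.GetEmailAddressesForId
    @SearchID int
AS
BEGIN
    SELECT g.ID, a.EmailAddress
    FROM dbo.EmailGroups g
    CROSS APPLY (SELECT CAST('<w>' + REPLACE(g.Email, ' ', '</w><w>') + '</w>' AS xml) AS x) AS prep
    CROSS APPLY prep.x.nodes('/w') AS split(node)
    JOIN dbo.EmailAddresses a
      ON a.ID = split.node.value('.', 'varchar(20)')
    WHERE g.ID = @SearchID;
END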
View 7 Replies
View Related
Sep 30, 2015
I have a delimited text file with 650+ columns. The sum of the column lengths of a single row, if fully populated, exceeds 30K bytes. The "killer" fields lengthwise are the "Description" fields. If they were removed from the input file, the remaining columns would occupy about 5000 bytes, which is within the SQL max row length.
Can SSIS be used to create these two tables (one without the description fields, the other with those fields but arranged vertically in the table rows)?
The fundamental issue is I can not import a single file row into a sql table because that row length could exceed the max byte count for a row.
View 8 Replies
View Related
May 13, 2007
I receive flat files which I need to import into tables.
The flat file data is shown below.
How should I design the tables so that I can get the output in a single row?
Table1ForRow1
--------------
col1,100
col2,1
col3,xx
col4,yy
col5,,,
col6,20030101
Table1ForRow2
-------------
col1,100
col2,2
col3,20030101
Table1ForRow3
------------
col1,100
col2,3
col3,01
col4,20030101
FlatFiledata:
------------
100,1,xx,yy,,20030101
100,2,20030101
100,3,01,20030101
I want the output:
100,1,xx,yy,20030101,2,3,01
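Assuming the three per-record-type tables sketched above, keyed on the shared col1 value (100), one way to flatten them back into a single output row is a plain join; the column aliases here are illustrative:
SELECT r1.col1, r1.col2, r1.col3, r1.col4, r1.col6,
       r2.col2 AS row2_col2,
       r3.col2 AS row3_col2, r3.col3 AS row3_col3
FROM Table1ForRow1 r1
JOIN Table1ForRow2 r2 ON r2.col1 = r1.col1
JOIN Table1ForRow3 r3 ON r3.col1 = r1.col1;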
View 1 Replies
View Related
Jul 23, 2005
I have an application in which I collect data for different parameters for a set of devices. The data are entered into a single table, each set of name, value pairs time-stamped and associated with a device. The definition of the table is as follows:
CREATE TABLE devicedata (
device_id int NOT NULL REFERENCES devices(id), -- id in the devices table
datetime datetime PRIMARY KEY CLUSTERED, -- date created
name nvarchar(256) NOT NULL, -- name of the attribute
value sql_variant NOT NULL -- value
)
For example, I have 3 devices, and each is monitored for two attributes -- temperature and pressure. Data for these are gathered at, say, every 20 minute and every 15 minute intervals.
The table is filled with records over a period of time, and I can perform a variety of SQL queries.
I have another requirement, which is to retrieve the *latest* values of temperature and pressure for each device. Ideally, I'd like to use the data I have collected to get this information, and I suppose I can. What I need is the SELECT statement to do this. I'd appreciate it very much if someone can help provide that. Conceivably, I could use a SQL Server view for making this easier for some of my users.
One alternate technique I thought of was to create another table which I *update* with the latest value each time I *insert* into the above table. But it seems like a waste to do so, and it introduces needless referential integrity issues (minor). Maybe for fast access, that is the best thing to do.
I have requirements to maintain this data for several months or a year or two, so I am dealing with a large number of samples.
Any help would be appreciated.
(I apologize if this post appears twice)
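A minimal sketch of the requested SELECT, using a correlated subquery to pick the newest timestamp per device and attribute (no schema changes assumed):
SELECT d.device_id, d.name, d.value, d.datetime
FROM devicedata d
WHERE d.datetime = (SELECT MAX(d2.datetime)
                    FROM devicedata d2
                    WHERE d2.device_id = d.device_id
                      AND d2.name = d.name);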
View 9 Replies
View Related
Jul 20, 2005
Dear sir,
I have one SQL Server database. In this database there are more than 20 tables whose structures are the same. I have two ways to design these tables:
1. Create all tables separately (create more than 20 tables).
2. Create one table and separate each group by type (add a type column to the table and assign the same value for each group).
What is the better solution? Please help me.
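For reference, option 2 usually looks something like the sketch below - one table plus a discriminator column identifying the group; all names are illustrative:
CREATE TABLE dbo.Items (
    ItemId    int IDENTITY(1,1) PRIMARY KEY,
    GroupType int NOT NULL,              -- which logical "table" the row belongs to
    -- ... the columns shared by all 20+ groups go here ...
    CONSTRAINT CK_Items_GroupType CHECK (GroupType BETWEEN 1 AND 25)
);
CREATE INDEX IX_Items_GroupType ON dbo.Items (GroupType);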
View 3 Replies
View Related
Dec 28, 2005
I am developing a Content management system of sorts and I want it
to be pretty flexible. I have noticed that with web content (News,
Articles, Events, FAQs) that there is a lot of similarity between these
items. They all have the same basic fields:
Title, Synopsis, Body, StartDate, EndDate
(for News, Articles, FAQs start and end date can be used for content
expiration. For events, they are used for the actual event dates.)
But for some types of content, I will need fields specific to that
type. For instance, I want to have a few custom settings for a "Photo
Gallery" type, like, how many rows/columns of thumbnails to display per
page.
I feel like I have 3 options, but would really appreciate your advice. I created diagrams for the 3 options, located here: http://nontalk.com/dbdesign/
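For comparison with the diagrams, one commonly used layout is a shared base table for the common fields plus a small extension table per content type that needs extra settings; a sketch with illustrative names:
CREATE TABLE dbo.ContentItem (
    ContentId   int IDENTITY(1,1) PRIMARY KEY,
    ContentType varchar(20) NOT NULL,      -- 'News', 'Article', 'Event', 'FAQ', 'PhotoGallery'
    Title       nvarchar(200) NOT NULL,
    Synopsis    nvarchar(1000) NULL,
    Body        nvarchar(max) NULL,
    StartDate   datetime NULL,
    EndDate     datetime NULL
);

CREATE TABLE dbo.PhotoGallerySettings (
    ContentId     int PRIMARY KEY REFERENCES dbo.ContentItem(ContentId),
    ThumbnailRows int NOT NULL,
    ThumbnailCols int NOT NULL
);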
View 2 Replies
View Related
Jan 18, 2008
I'm designing a database with 3 tables called Function, Test and Scene. A Function has multiple Tests, but a Test has only one Function. A many-to-many relationship exists between Test and Scene, therefore I need a junction table between these two tables - giving 4 tables in total. The Test table would store a foreign key, the primary key of the Function table.
There is a problem with this design though, and that is that Functions and Scenes are actually defined before the Test is defined. Therefore it should be possible to create a Function and add its Scenes to it before Tests have been defined. In other words, Scenes are as much a part of a Function as they are of Tests. Tests are in fact only of relevance to testers. Anyway, to satisfy this scenario, a junction table is also needed between Function and Scene. This creates a loop between all tables.
Is this a good approach? Any other suggestions or advice on the matter? Any advice regarding data integrity?
Thanks,
Barry
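For what it's worth, the two junction tables would look roughly like this, assuming base tables Function, Test and Scene already exist with the keys named below (all names illustrative):
CREATE TABLE dbo.FunctionScene (
    FunctionId int NOT NULL REFERENCES dbo.[Function](FunctionId),
    SceneId    int NOT NULL REFERENCES dbo.Scene(SceneId),
    PRIMARY KEY (FunctionId, SceneId)
);

CREATE TABLE dbo.TestScene (
    TestId  int NOT NULL REFERENCES dbo.Test(TestId),
    SceneId int NOT NULL REFERENCES dbo.Scene(SceneId),
    PRIMARY KEY (TestId, SceneId)
);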
View 1 Replies
View Related
Jul 20, 2005
i am a beginner of database design, could anyone please help me to figure out how to make these two tables work.
1) a "players" table, with columns "name", "age"
2) a "teams" table, which can have one OR two player(s); a team also has a column "level", which may have values "A", "B", or "C"
how do you build the "teams" table (the critical question is "do i need to create two fields" for the maximum two possible players?)
how do you use one query to display the information with the following columns: "name", "age", "levelA", "levelB", "levelC" (the latter three columns are integer type, showing how many teams with the corresponding level this player is in).
now suppose i don't have any access to sql server, i save the data into xml, and load it into a dataset. how could you do the selection within the dataset? or ahead of that, how do you specify the relations between "players" and "teams".
the following is the schema file i am trying to make (I still don't know if i need to specify the primary key... and how to build the relation between them):
<code>
<?xml version="1.0" ?>
<xs:schema id="AllTables" xmlns="" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
  <xs:element name="AllTables" msdata:IsDataSet="true">
    <xs:complexType>
      <xs:choice maxOccurs="unbounded">
        <xs:element name="Players">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="PlayerID" msdata:AutoIncrement="true" type="xs:int" minOccurs="0" />
              <xs:element name="Name" type="xs:string" minOccurs="0" />
              <xs:element name="Age" type="xs:int" minOccurs="0" />
            </xs:sequence>
          </xs:complexType>
        </xs:element>
        <xs:element name="Teams">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="TeamID" msdata:AutoIncrement="true" type="xs:int" minOccurs="0" />
              <xs:element name="Level" type="xs:string" minOccurs="0" />
              <xs:element name="Player1" type="xs:int" minOccurs="0" />
              <xs:element name="Player2" type="xs:int" minOccurs="0" />
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:choice>
    </xs:complexType>
  </xs:element>
</xs:schema>
</code>
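On the SQL side, the per-level counts can be produced with conditional aggregation; a sketch assuming the Players/Teams layout above, with Player1/Player2 as two nullable references to PlayerID:
SELECT p.Name, p.Age,
       SUM(CASE WHEN t.Level = 'A' THEN 1 ELSE 0 END) AS levelA,
       SUM(CASE WHEN t.Level = 'B' THEN 1 ELSE 0 END) AS levelB,
       SUM(CASE WHEN t.Level = 'C' THEN 1 ELSE 0 END) AS levelC
FROM Players p
LEFT JOIN Teams t
       ON t.Player1 = p.PlayerID OR t.Player2 = p.PlayerID
GROUP BY p.PlayerID, p.Name, p.Age;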
View 2 Replies
View Related
Feb 28, 2008
Hi,
Currently I am developing a job portal in ASP 2.0 and SQL Server 2005 which involves job seeker registration, searching of resumes, applying for job postings, employer registration, creating job postings, searching for job seekers, etc.
The job seeker is allowed to upload a Word document of up to 500 KB, which is stored in a table as varbinary.
Right now I have Membership/Roles in a separate database. The job portal tables are in a separate database. I was told to split the tables so that the JobSeeker tables go in one database and the Employer tables in another database, to speed up performance.
I have several tables that bridge (that is, they store IDs of either the Job Seeker or the Employer), like job postings applied for, postings saved by the job seeker, job postings of the employer, job posting (applied) alerts, etc.
Can anyone show me how to create a good database design (one or more databases) with excellent performance? Right now I have one database for the job portal tables, excluding membership. The mapping of key fields, including the fields that are enabled for text indexing, is given below.
(JobSeekerTable - Stores Personal Details)
JobSeekerId (PK)
...............
(JobSeekerResumeTable - Stores Resume Details)
JobSeekerResumeId (PK)
JobSeekerId (FK)
Job Title (enabled Text Indexing)
........
(JobSeekerDocTable - Stores Resume Details)
JobSeekerDocId (PK)
JobSeekerId (FK)
Resume (as varbinary) (enabled Text Indexing)
Covering Letter (Text)
........
(JobSeekerPostingTable - Stores Job Postings Saved by the Job Seeker)
JobSeekerPostingId(PK)
JobSeekerId (FK)
JobPostingId (FK)
......
(JobSeekerAppliedTable - Stores Job Postings Applied by the Job Seeker)
JobSeekerAppliedId(PK)
JobSeekerId (FK)
JobPostingId (FK)
.....
(CompanyTable - Employer Details)
CompanyId(PK)
.....
(JobPostingTable - Stores the information of the Job Posting created by Employer)
JobPostingId(PK)
CompanyId(FK)
Job Title (enabled Text Indexing)
Job Desc(enabled Text Indexing)
.....
(JobPostingConTable - Stores the information of the Job Posting Location Details )
JobPostingConId(PK)
JobPostingId(FK)
.....
(CompResumeSaved - Job Seeker details saved by Employer)
CompResumeSaved(PK)
CompanyId(FK)
JobSeekerId(FK)
.....
Eventually more tables will be added. Can anyone tell me how to speed up performance (particularly the search engine for the Employer searching resumes and the Job Seeker searching job postings)? I hope I have mentioned everything clearly.
Thanks,
Uma Ramiya
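Since several of those columns are flagged for text indexing, one piece of the performance picture is a full-text index on the searchable columns; a hedged sketch in which the catalog name and the name of the table's primary key index are assumptions:
CREATE FULLTEXT CATALOG JobPortalCatalog;

CREATE FULLTEXT INDEX ON dbo.JobPostingTable ([Job Title], [Job Desc])
    KEY INDEX PK_JobPostingTable
    ON JobPortalCatalog;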
View 4 Replies
View Related
Jun 10, 2015
I have 3 tables (accnt, jobcost, and servic15), all with the same fields (code, jno, ven, date). I need to insert the data from these tables into another table called dummy with the same fields, in one statement or query.
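One statement can do it by feeding the INSERT with a UNION ALL across the three sources; a minimal sketch using the field names from the post:
INSERT INTO dummy (code, jno, ven, [date])
SELECT code, jno, ven, [date] FROM accnt
UNION ALL
SELECT code, jno, ven, [date] FROM jobcost
UNION ALL
SELECT code, jno, ven, [date] FROM servic15;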
View 3 Replies
View Related
Jan 4, 2006
I have a general SQL design-type question.
I want to log errors to a table. If the error is with a URL, I want to store the URL. These URLs can be very large, hundreds of characters, but I only need to store it if it causes the error, which should be very infrequent. Which is the better design:
1. Create a large varchar field in the log table to hold the URL, or null if the error wasn't with the URL.
2. Create a foreign key field in the log table to a second URL table, which has a unique ID and a large varchar, and only create a record in this table if the error is with the URL.
One concern I have with design 2 is that there could be many other fields that are infrequent. Do I create a separate table for every one?
Richard
View 3 Replies
View Related
Sep 19, 2000
I want to have a linking table - say, for example, we call this a claim. Based on the claim number, you need to relate to one of, say, 6 different types of claims. The types of claims relate to their own individual parent tables (individual because each type of claim tracks completely different information). Does anyone have an idea on how to set this up?
Sample Structure
table = Claim
Field 1 = ClaimTypeA_ID
Field 2 = ClaimTypeB_ID
Field 3 = ClaimTypeC_ID
Field 4 = ClaimTypeD_ID
Field 5 = ClaimTypeE_ID
Field 6 = ClaimTypeF_ID
The six fields relate to the six different tables' IDs.
If I do this, how do I store the data? Put 0's in each of the claim types that are not used?
Any suggestions would be appreciated.
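Rather than storing 0's, one common variation keeps the type-specific IDs nullable and adds a CHECK constraint so exactly one of them is populated per claim; a sketch showing three of the six for brevity (names illustrative):
CREATE TABLE dbo.Claim (
    ClaimId       int IDENTITY(1,1) PRIMARY KEY,
    ClaimTypeA_ID int NULL,
    ClaimTypeB_ID int NULL,
    ClaimTypeC_ID int NULL,
    CONSTRAINT CK_Claim_ExactlyOneType CHECK (
          (CASE WHEN ClaimTypeA_ID IS NOT NULL THEN 1 ELSE 0 END)
        + (CASE WHEN ClaimTypeB_ID IS NOT NULL THEN 1 ELSE 0 END)
        + (CASE WHEN ClaimTypeC_ID IS NOT NULL THEN 1 ELSE 0 END) = 1
    )
);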
View 2 Replies
View Related
Jun 25, 2007
Hi, I have a question regarding best practices in database design. In a relational database, is it wise/necessary to sometimes create tables that are not related to other tables through a foreign key relationship, or does this always indicate some sort of underlying design flaw - something that requires a re-evaluation of the problem domain?
The reason I ask is because in our application, the user can perform x number of high level operations (creating/updating projects, creating/answering surveys etc.). Different users can perform different operations and each operation can manipulate one or more tables. This part of the system is done and working. Now there is a requirement to have some sort of audit logging inside the database (separate from the text based log file that the application generates anyway). This "audit logging" table will contain high level events that occur inside the application (which may or may not relate to a particular operation). This table is in some sense related to every other table in the database, as well as to data that is not in the database itself (exceptions, external events etc.). For example: it might have entries that specify that at time x user created project y, at time A user filled out survey B, at time C the LDAP server was down, at time D an unauthorized login attempt occurred, etc.
As I said, this seems to suggest a stand-alone, floating table with a few fields that store entries regarding what's going on in the system, without any direct relationship to other tables in the database. But I just feel uneasy about creating such an isolated table. Another option is to store the "logging" information in another schema/database, but that doubles the maintenance workload. Not really looking forward to maintaining/designing two different schemas.
I had a look at the Microsoft AdventureWorks database schema diagram and they seem to have 3 standalone tables - AWBuildVersion, ErrorLog and DatabaseLog (unless I am reading it wrong!)
Any advice, information or resources are much appreciated.
View 12 Replies
View Related
May 28, 2015
I have the DB structure below in MSSQL for a small application which follows a relational approach. Data retrieval (for Hostels) will need several joins; maybe a key-value approach would make data retrieval faster.
Hostels
------------
HostelId,
Name,
Address,
CategotyId,
SubCategoryId,
FoodCategoryId,
LandLordId
Data:
1 H1 Address1 1 1 2 20
2 H2 Address2 1 2 2 21
3 H3 Address3 2 2 1 17
Category
----------
CategoryId,
CategoryName
[code]...
View 10 Replies
View Related
Oct 20, 2006
Hi,
I have a problem in designing the tables. My main task is to learn how to give the Match Score.
I have hundreds of dataset and one of them is like this:
Test Record Number: 19
Prospect ID = 254040088233400441105260031881009
Match Score = 95
Input Record Fielding ( eg wordnumber[Field] ) : 1[1] 2[1] 3[11] 4[11] 5[11]
Prospect Word = 1 type = 1 match level = 4 input word = 1 input type = 1
Prospect Word = 2 type = 2 match level = 0 input word = NA input type = NA
Prospect Word = 3 type = 3 match level = 4 input word = 2 input type = 1
Prospect Word = 4 type = 11 match level = 4 input word = 3 input type = 11
Prospect Word = 5 type = 13 match level = 4 input word = 4 input type = 11
Prospect Word = 6 type = 14 match level = 4 input word = 5 input type = 11
Now I have all my data stored in the DB and I separated it into 3 tables; their structures are:
1) prospect (id, testrecordnumber, prospectID, matchscore)
2) inputfieldind (id, prspid, inputword, inputfield)
3) prospectinfo (id, prspid, prospectword, prospecttype, matchlevel, inputword, inputtype)
and the prspid in tables 2 & 3 refers to the prospectID in table 1. What I did was setting:
a) prospect table as case table with id AS key, prospectID AS input & predictable;
b) and the other two as nested tables with inputword/inputfield AS key & input, prospectword/prospecttype/matchlevel/inputword/inputtype AS key & input .
But it shows an error for having multiple key columns...
And also I am thinking about using the Naive Bayes algorithm. Can I also have some suggestions on this?
Thanks
View 3 Replies
View Related
Sep 9, 2015
In our operating environment, tables are sometimes created in the databases on a temporary basis, but they are never removed. How can I identify the tables that were created on a temporary basis and are not used (don't have any read and write activity)?
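One way to approximate this on SQL Server 2005 and later is sys.dm_db_index_usage_stats, which records reads and writes per object since the last service restart; a sketch (note the counters reset when SQL Server restarts, so "not used" really means "not used since the last restart"):
SELECT t.name AS TableName
FROM sys.tables t
LEFT JOIN sys.dm_db_index_usage_stats s
       ON s.object_id = t.object_id
      AND s.database_id = DB_ID()
GROUP BY t.name
HAVING SUM(ISNULL(s.user_seeks + s.user_scans + s.user_lookups + s.user_updates, 0)) = 0;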
View 2 Replies
View Related
Jul 22, 2015
How do I change the order of columns in database tables?
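There is no ALTER TABLE option for reordering columns; the usual route is to rebuild the table in the desired order and copy the data across (the SSMS table designer does the same thing behind the scenes). A rough sketch with illustrative names:
CREATE TABLE dbo.MyTable_New (
    Id   int NOT NULL PRIMARY KEY,
    ColB int NULL,      -- moved ahead of ColA
    ColA int NULL
);

INSERT INTO dbo.MyTable_New (Id, ColB, ColA)
SELECT Id, ColB, ColA FROM dbo.MyTable;

-- then drop dbo.MyTable and rename with: EXEC sp_rename 'dbo.MyTable_New', 'MyTable';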
View 10 Replies
View Related
Jun 17, 2015
I am using SQL Server and I have a table called accnt with the fields ven1 and amnt1, and a table called acc1167 with fields ven, job#, and amnt. For this example these tables look like this:
accnt:
ven1   amnt1
1167   100
1152   50
1167   110
1167   300
1252   1050
1167   210
1167   1150
1167   130
2113   800
1167   550
1167   1200

acc1167:
ven    job#   amnt
1167   1      200
1167   2      300
1167   3      100
1167   4      200
1167   5      200
1167   6      150
I need to sum amnt1 for all the records in accnt with a ven1 of 1167; we will call this sumA. Then sum amnt in acc1167 for all records; we will call this sumB. Next I need to divide sumB by sumA to get a ratio. Finally I need to multiply each amnt value from acc1167 by the ratio and get a number that will then replace the acc1167 amnt value.
For example, sumA = 3750 and sumB = 1150. Taking these values, sumB/sumA = 0.307. I then replace every value in acc1167 amnt with 0.307*itself, so the final table should look like this:
acc1167
ven job# amnt
1167 1 61.4
1167 2 92.1
1167 3 30.7
1167 4 61.4
1167 5 61.4
1167 6 46.05
I have tried to use the SUM function and some inserts, but I am very new to SQL and have never used SUM before, and I don't know how to pull from multiple tables or how to store a ratio. I've tried this:
UPDATE acc1167
sum1 = sum amnt1 where ven1 = '1167'
from accnt
sum2 = sum amnt
from accnt
SET amnt = sum2/sum1*amnt
FROM acc1167
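A sketch of one way to express that calculation: capture both sums in variables, then apply the ratio in a single UPDATE (the decimal types are an assumption, chosen to avoid integer division):
DECLARE @sumA decimal(18, 4), @sumB decimal(18, 4);

SELECT @sumA = SUM(amnt1) FROM accnt WHERE ven1 = '1167';
SELECT @sumB = SUM(amnt)  FROM acc1167;

UPDATE acc1167
SET amnt = amnt * (@sumB / @sumA);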
View 2 Replies
View Related
May 20, 2015
I normalized the tables below, but I am finding it difficult to copy data to the new tables. How do I copy data from the existing table to the normalized tables? See the table structure below and other supporting information:
SKU_DATA(SKU,SKU_Description,Department,Buyer) Note: this table already has data in it.
CREATE TABLE SKU_DATA (
SKU Integer NOT NULL,
[code].....
The table structure above has three determinants (SKU, SKU_Description and Buyer). SKU and SKU_Description are candidate keys. The primary key is SKU.
Normalization : SKU_DATA(SKU,SKU_Description, Buyer)
BUYER(Buyer,Department)
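A sketch of the copy step, assuming the new normalized tables already exist; the name SKU_DATA_NEW stands in for whatever the reduced SKU table is actually called:
-- one row per distinct buyer, carrying the department
INSERT INTO BUYER (Buyer, Department)
SELECT DISTINCT Buyer, Department
FROM SKU_DATA;

-- the SKU rows, now without Department
INSERT INTO SKU_DATA_NEW (SKU, SKU_Description, Buyer)
SELECT SKU, SKU_Description, Buyer
FROM SKU_DATA;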
View 2 Replies
View Related
Apr 17, 2007
I don't know if this is the right forum but...
In a parent/child table structure (order/orderdetail) I have used identity columns for the orderdetail, or compound primary keys. I find a single identity column on the detail table easier to manage (with an FK to the parent), but what ends up being easiest for the user is to have an order (say #3456) and detail items listed sequentially from 1 to n. This reflects a compound key structure, but generating the 2nd field is a pain. Is there any way to tie an identity field to the parent key so that it will generate this number for me automatically?
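An identity column can't restart per parent, so the per-order line number is usually computed at insert time instead; a sketch with illustrative table and column names, using locking hints so two concurrent inserts for the same order don't take the same number:
DECLARE @OrderID int, @ProductID int;
SET @OrderID = 3456;
SET @ProductID = 1;

INSERT INTO OrderDetail (OrderID, LineNumber, ProductID)
SELECT @OrderID,
       ISNULL(MAX(LineNumber), 0) + 1,   -- next sequential number within this order
       @ProductID
FROM OrderDetail WITH (UPDLOCK, HOLDLOCK)
WHERE OrderID = @OrderID;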
View 3 Replies
View Related
Jun 15, 2015
But it doesn't explicitly say whether interpreted (disk-based) tables can be accessed by natively compiled stored procedures. And SQL Server Express edition doesn't allow creating memory-optimized objects to verify this.
View 2 Replies
View Related
Dec 13, 2007
I need to copy all the data from all the tables in a database to a copy of this database on another server.
What feature of SSIS should I take advantage of to accomplish this?
We have an SLA for 8am; most times the data warehousing jobs complete at 8:05am. Adding an additional process/set of tasks to this package would obviously make matters worse, so I'm trying to update/copy/replicate the data in the fastest manner. Typically we're talking 2 marts (10-20GB) with 2 large tables (5-10 million records) and 20 marts (0.5 - 5 GB) with many more smaller tables (~40 tables with record counts ranging from 1 to a million).
Additionally, please indicate if the design/feature you suggest can handle schema changes (pushing schema changes and additions to the target server) or new tables/views added to the source database.
My only idea so far... is using the Import Wizard (in Management Studio) to create an SSIS package (to copy all the tables from one server to another) and saving it to the server, then executing this package after the job is complete. However this would not work if the schema of a table changed, or if a table is added. Moreover I don't think I can edit this package in Visual Studio.
View 3 Replies
View Related