I am facing a design problem and am unable to decide which design to choose for my data model.
Usually what we have is data tables, plus reference tables that hold the allowed values for columns in those data tables.
My database has tables with 20-30 columns in them, and most of those columns (though not all) take their values from reference tables. That is, each such column has an associated reference table holding the list of possible values for that column, a sort of data domain for it. FYI, my database is in the medical field.
For example, a table has a varchar(40) column called "Differentiation". It can only store values from the following list:
- Undifferentiated
- Moderate
- Poor
- Poor - Moderate
- Moderate - well
Now, the simple way to implement this would be to have a reference table that stores all these possible values, and then have the "Differentiation" column in the data table hold just a reference to the appropriate row.
This is the simplest and probably the best solution for such a thing, and I can also have referential integrity enforced with it.
But looking at the bigger picture, my database is growing: I have about 80 tables to create, and most of their columns will need different reference tables as described above. The approximate number of reference tables is 300. All the reference tables will have the same structure, just different values for different columns.
So it seems to me that, because the structure is the same for every reference table, rather than having 300 different tables I could have just 2 tables and put all the reference values into them.
Like this:
Table 1:
Holds the names of the reference lists, e.g. "differentiationlist".
Table 2:
Holds every value belonging to each list, with a reference back to the list's row in Table 1.
But the problem with this is that, because all the reference tables are folded into these two tables, I don't know how to implement referential integrity in this design.
Does anyone have any idea or solution for a situation like this?
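For what it's worth, one known way to keep referential integrity with the two-table design is a composite foreign key: give every reference value a key that includes its list, repeat the list id in the data table (pinned by a DEFAULT and a CHECK), and reference both columns together. A minimal sketch, with hypothetical table names:

CREATE TABLE dbo.RefList (
    RefListID   int         NOT NULL PRIMARY KEY,
    RefListName varchar(50) NOT NULL UNIQUE      -- e.g. 'differentiationlist'
);

CREATE TABLE dbo.RefValue (
    RefListID int         NOT NULL REFERENCES dbo.RefList (RefListID),
    RefValue  varchar(40) NOT NULL,
    PRIMARY KEY (RefListID, RefValue)
);

CREATE TABLE dbo.Specimen (      -- hypothetical data table
    SpecimenID int NOT NULL PRIMARY KEY,
    -- pinned to the id of the 'differentiationlist' row (assumed to be 1 here)
    DifferentiationListID int NOT NULL
        CONSTRAINT DF_Specimen_DiffList DEFAULT (1)
        CONSTRAINT CK_Specimen_DiffList CHECK (DifferentiationListID = 1),
    Differentiation varchar(40) NULL,
    CONSTRAINT FK_Specimen_Differentiation
        FOREIGN KEY (DifferentiationListID, Differentiation)
        REFERENCES dbo.RefValue (RefListID, RefValue)
);

The FOREIGN KEY guarantees that Differentiation can only hold values that belong to the differentiation list, even though all lists share the two reference tables.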
OK, say I would like to build a table for the following questions (say 6 questions for the sake of argument). Do I just store the index of the RadioButtonList? Should I make a lookup table? What are some resources I could look at?
5) If money were no object, I would live . . .
- Prefer not to say
- On a tropical island
- In a New York penthouse
- In an English castle
- On a Texas ranch
- In a Malibu beach house
- In a mountain retreat (Selected)
- On the moon
- None of the above
(The survey has questions 1 through 6; question 5 above is the one we are looking at.)
How should I create the database table for the above example?
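One possible shape, purely as a sketch with hypothetical names: store the answer choices in a lookup table keyed per question (rather than storing the radio-button index), and record each respondent's pick in a response table that references it:

CREATE TABLE dbo.Question (
    QuestionID   int NOT NULL PRIMARY KEY,
    QuestionText varchar(200) NOT NULL
);

CREATE TABLE dbo.AnswerOption (
    QuestionID int NOT NULL REFERENCES dbo.Question (QuestionID),
    OptionID   int NOT NULL,
    OptionText varchar(100) NOT NULL,     -- e.g. 'On a tropical island'
    PRIMARY KEY (QuestionID, OptionID)
);

CREATE TABLE dbo.Response (
    RespondentID int NOT NULL,
    QuestionID   int NOT NULL,
    OptionID     int NOT NULL,
    PRIMARY KEY (RespondentID, QuestionID),
    FOREIGN KEY (QuestionID, OptionID)
        REFERENCES dbo.AnswerOption (QuestionID, OptionID)
);

The composite foreign key keeps a response from pointing at an option that belongs to a different question, which storing the raw RadioButtonList index would not.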
Actually this is about an SCD Type 2 dimension. The scenario is that I am moving a fact table from an old source. The fact currently carries the DimensionA description value, which I want to replace with the appropriate id from the dimension table. That dimension table is SCD Type 2, based on StartDate and EndDate, and the fact table does not contain a direct date value, only a TimeId. So to update the fact table I have to join the Time dimension and the other dimension table to replace the fact's description with the proper id.
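A sketch of that update, with all table and column names hypothetical (fact, time dimension and SCD Type 2 dimension stand-ins):

UPDATE f
SET    f.DimAId = d.DimAId
FROM   dbo.FactSales AS f
JOIN   dbo.DimTime   AS t ON t.TimeId = f.TimeId
JOIN   dbo.DimA      AS d ON d.Description = f.DimADescription
                         AND t.[Date] >= d.StartDate
                         AND t.[Date] <  ISNULL(d.EndDate, '99991231');
                         -- assumes the current dimension row has a NULL EndDate

The range predicate on the time dimension's date is what picks the dimension version that was current when the fact occurred.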
I have two tables that reference each other by foreign key, and now I would like to alter them so they no longer reference each other. What is the syntax to alter a table so that the FK field doesn't reference a table anymore? Thanks in advance for the help.
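For reference, dropping a foreign key in SQL Server is done by dropping the constraint by name; a minimal sketch with hypothetical table and constraint names (the catalog will list the real ones):

-- find the existing foreign key names
SELECT name, OBJECT_NAME(parent_object_id) AS table_name
FROM sys.foreign_keys;

-- drop the one you no longer want
ALTER TABLE dbo.ChildTable DROP CONSTRAINT FK_ChildTable_ParentTable;

The column itself stays in place; only the reference to the other table is removed.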
Now I have a requirement to build a stored proc that returns all the transactions in the chain starting from one transaction. For example, if I want to know the chain for item no. 2 it shall give the following result:
24 43 37
If I want to know the chain for item no. 3 then it shall give the following: 37
If I want to know the chain for item no. 1 then it shall give the following: 12 24 43 37
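The table layout isn't shown, but if each transaction points at the next one in the chain, a recursive CTE is the usual cursor-free way to walk it. A sketch in which every table and column name is hypothetical:

-- assumed: ItemTransaction(ItemNo, TransNo) gives the first transaction for an item,
--          TransactionLink(TransNo, NextTransNo) links each transaction to the next one
WITH Chain AS (
    SELECT it.TransNo
    FROM   dbo.ItemTransaction AS it
    WHERE  it.ItemNo = 2                       -- the item whose chain we want
    UNION ALL
    SELECT tl.NextTransNo
    FROM   Chain AS c
    JOIN   dbo.TransactionLink AS tl ON tl.TransNo = c.TransNo
    WHERE  tl.NextTransNo IS NOT NULL          -- stop at the end of the chain
)
SELECT TransNo FROM Chain;

Wrapped in a stored procedure with the item number as a parameter, this returns 24, 43, 37 for item 2 under the assumed layout.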
I'm trying to create two tables, Admins and SecurityGroups. I want to be able to assign a security group to an admin so that I don't have to specify all the rights individually for every admin. So I have SecurityGroupID in the Admins table and reference the SecurityGroups table for data integrity. On the other hand, for audit purposes, I also want to record which admin created the security group, so I have AdminID in the SecurityGroups table and reference the Admins table for data integrity. Of course, when I try to create either of those tables I get an error message that the other table doesn't exist. I wonder if this can be done without having to remove the REFERENCES constraint from one of the tables and then using ALTER TABLE. Here's an example below to make it clearer:
CREATE TABLE Admins ( AdminID INT NOT NULL PRIMARY KEY, SecurityGroup INT NULL REFERENCES SecurityGroups (SecurityGroupID) )
CREATE TABLE SecurityGroups ( SecurityGroupID INT NOT NULL PRIMARY KEY, CreatedBy INT NOT NULL REFERENCES Admins (AdminID) )
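For reference, the usual workaround is the one the post alludes to: create one table without its foreign key, create the other, then add the remaining constraint with ALTER TABLE. A sketch using the definitions above:

CREATE TABLE Admins (
    AdminID       INT NOT NULL PRIMARY KEY,
    SecurityGroup INT NULL
);

CREATE TABLE SecurityGroups (
    SecurityGroupID INT NOT NULL PRIMARY KEY,
    CreatedBy       INT NOT NULL REFERENCES Admins (AdminID)
);

ALTER TABLE Admins
    ADD CONSTRAINT FK_Admins_SecurityGroups
    FOREIGN KEY (SecurityGroup) REFERENCES SecurityGroups (SecurityGroupID);

Both constraints end up in place; the only difference is that one of them is added after both tables exist.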
So I am designing a new database that currently has several 'lookup tables' that are used just to limit the values of columns in other tables.
My question is: what is the best way to do this?
1. Multiple tables - one for each set of values (e.g. JobType, Position, PayGrade).
2. One large table that holds all the lookup values, with a 'Category' field to group them.
3. Put constraints on the columns of the tables that are 'looking up' and get rid of the lookup tables (sketched below).
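For reference, a rough sketch of what option 3 could look like; the table name and value list here are made up:

ALTER TABLE dbo.Employee
    ADD CONSTRAINT CK_Employee_Position
    CHECK (Position IN ('Manager', 'Developer', 'Analyst'));

This enforces the domain without a lookup table, but changing the allowed values then means altering the constraint rather than inserting a row.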
Hi, I don't know if this is possible (I believe not), so I'm here to ask the experts: is it possible to have a foreign key constraint that references the key of one of two tables? Like this: I have 3 tables, TABLE X, TABLE A and TABLE B. Is it possible for the FK on TABLE X to reference the PK of TABLE A OR TABLE B? If yes, how can I do this? If not, I need to have a fourth table, so TABLE X references TABLE A and TABLE Y references TABLE B. Thanks!
Not sure if the title describes my situation or not.
Simplified example is: I have an [Employee] table with EmpCode, EmpName
I have a second table [NewHires] that has: HireDate, EmpCode, Addedby
Both EmpCode and Addedby contain EmpCode referring to the Employee table.
I would like output similar to:
New Employee (from EmpCode in NewHire), Hired on (From HireDate), Hired By (from Addedby)
My problem: with Employee.EmpCode = NewHires.EmpCode or Employee.EmpCode = NewHires.Addedby in the WHERE clause or JOIN part of the SQL, I don't know how to get EmpName from the Employee table twice, using two different EmpCode values as the reference.
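A sketch of the usual approach: join the Employee table twice under two different aliases, once for each EmpCode column:

SELECT emp.EmpName   AS [New Employee],
       nh.HireDate   AS [Hired on],
       adder.EmpName AS [Hired By]
FROM   NewHires AS nh
JOIN   Employee AS emp   ON emp.EmpCode   = nh.EmpCode    -- the new hire
JOIN   Employee AS adder ON adder.EmpCode = nh.Addedby;   -- who added the row

Each alias behaves like a separate copy of Employee, so EmpName can be pulled once per role.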
I'm creating an application that will allow users to contribute "content". The content can be tagged, saved as "my content", etc. - very web 2.0'ish. The site will rely very heavily on SQL Server 2005. By default, there is a Content table that simply stores the content with an identity column PK. There is also a Tag table and a User table. What is the most effective schema for speed and reliable scalability? Eventually there could be hundreds to thousands of people contributing and tagging content.
Idea 1: Lookup tables. Simply make a new table that holds the TagID and UserID. The table will be very important as it will be queried very regularly to show users which tags they have stored. Pros: this effectively lets me store and query which tags the user has selected; it's simple to set up, and the application won't have to do much work with the data returned from the query. Cons: what happens when there are thousands of users tagging content? At what point does it become very inefficient to query a table that has a huge number of rows?
Idea 2: Comma-delimited list. Simply have a field in the User table with a comma-delimited list of the TagIDs the user has selected. Pros: keeps table size low, fairly easy to implement. Cons: the application has to perform more work - it has to split the TagIDs on commas and re-query the database to get each tag's data by TagID.
Those are basically the only two methods I've really got experience using. The reason for this post is to see if there are methods I'm not aware of that are better suited for what I'm trying to do. Any assistance is greatly appreciated, thank you in advance!
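For what Idea 1 might look like concretely, here is a rough sketch; the key column names on the existing Content, Tag and User tables are assumptions:

CREATE TABLE dbo.UserContentTag (
    UserID    int NOT NULL REFERENCES dbo.Users (UserID),
    ContentID int NOT NULL REFERENCES dbo.Content (ContentID),
    TagID     int NOT NULL REFERENCES dbo.Tag (TagID),
    PRIMARY KEY (UserID, ContentID, TagID)
);

-- supports the frequent "show me this user's tags" query
CREATE NONCLUSTERED INDEX IX_UserContentTag_User
    ON dbo.UserContentTag (UserID) INCLUDE (TagID, ContentID);

A narrow junction table like this stays efficient well past thousands of users as long as the queries are supported by an index such as the one above, which is the usual argument against the comma-delimited list.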
Hi, if you have lookup tables which are used by multiple tables (e.g. a 'City' lookup table might be used by the 'Employee' and the 'Company' tables), do we need to link it to both tables? Or can it sit by itself and just be referenced? Cheers, Jack
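For what it's worth, a lookup table can sit on its own and simply be referenced by as many tables as need it; a sketch, assuming both tables carry a CityID column:

CREATE TABLE City (
    CityID   int          NOT NULL PRIMARY KEY,
    CityName varchar(100) NOT NULL
);

ALTER TABLE Employee ADD CONSTRAINT FK_Employee_City FOREIGN KEY (CityID) REFERENCES City (CityID);
ALTER TABLE Company  ADD CONSTRAINT FK_Company_City  FOREIGN KEY (CityID) REFERENCES City (CityID);

Each referencing table gets its own foreign key; the lookup table itself needs no knowledge of who references it.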
It's always said that it is best to put indexes on commonly joined fields in a table, but putting too many indexes on a table makes it slow on insert and update.
My question is: how do you deal with fields that use lookup tables? Like, for example, these fields:
Those fields aren't a big part of the table, though when I query the table I always join them to their respective primary tables to get their text values. Do I still need to put indexes and FK relationships on these fields?
What fields are normally good candidates for indexes or FK relationships?
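As a sketch of what that usually looks like (table and column names hypothetical): the lookup column gets a foreign key for integrity and, if it is joined or filtered on often, a nonclustered index:

ALTER TABLE dbo.Employee
    ADD CONSTRAINT FK_Employee_JobType
    FOREIGN KEY (JobTypeID) REFERENCES dbo.JobType (JobTypeID);

CREATE NONCLUSTERED INDEX IX_Employee_JobTypeID ON dbo.Employee (JobTypeID);

The foreign key costs little on its own; the index is the part to weigh against insert/update overhead, and it generally earns its keep on columns used in joins or WHERE clauses.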
Two tables:
T1 (c1 int, TestVal numeric(18,2), ResultFactor numeric(18,2)) - c1 is the primary key.
T2 (x1 int, FromVal numeric(18,2), ToVal numeric(18,2), Factor numeric(18,2)) - x1 is the primary key.
T2 contains non-overlapping ranges, so a few rows in T2 may look like:
1, 51, 51.999, 51
2, 52, 52.999, 52
...
32, 82, 82.999, 82
...
T2 is basically a lookup table. There is no relationship between the two tables T1 and T2. However, if the TestVal from T1 falls in the range between FromVal and ToVal in T2, then I want to update ResultFactor in T1 with the corresponding value of Factor from the T2 table.
Example for illustration only: even though the tables cannot be joined using keys, the above problem is a very common one in everyday life. For example, T1 could be an employee PayRaise table (c1 = EmployeeID), with TestVal representing test scores (from 1 to 100), and T2 representing a lookup of the ranges, with Factor representing the percent raise to be given to the employee. If TestVal is 65 (the employee scored 65% in a test), and a row in T2 is (FromVal=60, ToVal=70, Factor=12), then I would like to update 12 into table T1 from T2 using SQL. Basically T2 (like a global table) applies to all the employees, so EmpID cannot serve as a key in T2.
Could anyone suggest how I would solve this using SQL? I would like to avoid cursors and loops. Replies appreciated. Thanks.
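One set-based way to do this in T-SQL, without cursors or loops, is an UPDATE ... FROM with the range test as the join condition; a sketch against the tables defined above:

UPDATE t1
SET    t1.ResultFactor = t2.Factor
FROM   T1 AS t1
JOIN   T2 AS t2
  ON   t1.TestVal >= t2.FromVal
 AND   t1.TestVal <= t2.ToVal;

Because the ranges in T2 do not overlap, each T1 row matches at most one T2 row, so the update is unambiguous.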
I need to match each car to its respective class. I've already searched for posts on this subject, but unfortunately my goal is yet to be reached. Can someone help?
Consider a situation. There is a table of submitted 'documents', which have some attributes. There are assignments to process them, each with the date it was created. Finally, there is a price list that specifies the price according to document features and date, so that assignments to process a document created at different times will have different costs. In other words, there is a relation (assignment->document.attribute(s) + assignment.date) -> pricelist.price.
Creating the relations has integrity advantages: it becomes impossible to create an assignment whose price is not defined in the price list, and it precludes removing a price-list entry that is referred to by any assignments.
Should a view, which combines all the foreign fields into one virtual table, be created to make establishing the reference possible?
I'm trying to convert a stored procedure that joins 8 tables with INNER and OUTER JOINs. My understanding is that the Lookup task is the one to use and that I should break these joins into smaller blocks, but it takes a long time to load when I do this. Each of these tables has 10-40 million rows, and I have 8 tables to go through. Currently the stored procedure takes 3-4 minutes to run; after converting it to 8 Lookup tasks, it ran for 20 minutes. Has anyone run into this issue before and know a workaround?
I'm new to my company, although not new to SQL 2005, and I found something interesting. I don't have an ERD yet, so I asked a co-worker what table some data was in, and they told me a table that is NOT in SQL Server 2005's list of tables, views or synonyms.
I thought that was strange, so I searched over and over again and still couldn't find it. Then I ran a SELECT statement against the table that Access thinks exists and SQL Server does not show, and to my shock, the SELECT statement pulled in data!
So how did this happen? How can I find the object in SSMS's folder listing of tables/views (or wherever), and what am I overlooking?
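One way to hunt the object down is to query the catalog views directly, since the Access link may be pointing at a different schema, a synonym, or another database; a sketch (the object name is a placeholder):

SELECT s.name AS schema_name, o.name AS object_name, o.type_desc
FROM   sys.objects AS o
JOIN   sys.schemas AS s ON s.schema_id = o.schema_id
WHERE  o.name = N'TheMysteryTable';    -- the name Access shows

-- synonyms show up separately, with the object they point at
SELECT name, base_object_name
FROM   sys.synonyms;

If neither query finds it in the current database, the linked table is likely defined against another database or server entirely.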
I am trying to create views that would join 2 tables:
Table 1: Has all the columns needed by the view.
Name: Product
Structure: ID, Attribute 1, Attribute 2, Attribute 3, Attribute 4, Attribute 5, etc.
Table 2: Is a lookup table that provides the names of the columns.
Name: lookupTable
Structure: tableName, ColumnName, columnValue
Values:
Product, Attribute1, Color
Product, Attribute2, Size
Product, Attribute3, Flavor
Product, Attribute4, Shape
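A view's column names cannot be read from table data at CREATE VIEW time, so the view either has to be generated with dynamic SQL built from lookupTable, or written out by hand to match it. A hand-written sketch matching the lookup rows above (adjust the attribute column names to the real ones):

CREATE VIEW dbo.ProductView
AS
SELECT ID,
       Attribute1 AS Color,
       Attribute2 AS Size,
       Attribute3 AS Flavor,
       Attribute4 AS Shape
FROM   dbo.Product;

If the lookup table changes often, the dynamic-SQL route (building and executing the CREATE VIEW text from lookupTable) is the way to keep the view in step with it.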
I have a common requirement (when I'm processing data rows from an input file) to perform some data manipulation in script then look up a value from a reference table and perform some further data manipulation depending on whether a matching value was found in the lookup table or not.
For example, say I'm processing Customer data rows and the first "word" (/token) of the FullName column might be a title or the title could be missing and it might be a forename or initial instead. I want to check the first word of this FullName column to see if it matches any valid title values in a ReferenceTitles lookup table. If I find a match I want to set my Title column to the value from the ReferenceTitles lookup table, otherwise I want to set it to, say, an empty string. Then I want to process the rest of the FullName column tokens differently depending on whether or not a match was found.
It seems very messy to start coding a script transformation and then have to use a lookup transformation combined with a script transformation on the error output, followed by a union, a sort, and finally a further script transformation (especially as I would like to be able to use variables from the first script in the later processing)...
So what I'm wondering is: Is there an easy/clean way to perform a database lookup (using cached values) from a script so that I can achieve all the above from within a single script component?
We have 20-30 normalized tables in our database. We also have 4 tables where we store calculated data from those normalized tables. The reason we have these 4 denormalized tables is that when we try to do the calculation on the fly, our site becomes very slow, so we have precalculated the results and stored them in the 4 tables.
The process we use for the precalculation does the calculation and stores the result in a temp table. It then compares the temp table with the denormalized tables, inserts new rows, deletes the old ones and updates any changes. This process takes about 20-60 minutes. It takes a long time because we first do the calculation regardless of what changed, and then do a compare to see what is different, remove any deleted rows, insert new rows and update the changes.
Now we would like to capture the rows/columns changed in the normalized tables and apply only those changes to the denormalized tables, which we hope will reduce the processing time by at least 50%. We have upgraded to SQL Server 2005, so we would like to use the new technology for this process. I have to design a model to capture the changes and apply only those changes. I have the list of normalized tables and the columns which affect the end results. I thought of using triggers or the OUTPUT clause to capture the changes. Please help me with any ideas on how to design the new process.
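As one possible starting point, a trigger on each normalized table can record which keys changed into a narrow change table, and the denormalized refresh can then be limited to those keys. A minimal sketch, with every table and column name hypothetical:

CREATE TABLE dbo.Orders_Changes (
    OrderID    int      NOT NULL,
    ChangeType char(1)  NOT NULL,                 -- 'I', 'U' or 'D'
    ChangedAt  datetime NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_Orders_Capture ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- inserted rows: 'U' if deleted also has rows (an update), else 'I'
    INSERT INTO dbo.Orders_Changes (OrderID, ChangeType)
    SELECT i.OrderID,
           CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END
    FROM inserted AS i;
    -- pure deletes: deleted has rows but inserted does not
    INSERT INTO dbo.Orders_Changes (OrderID, ChangeType)
    SELECT d.OrderID, 'D'
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM inserted);
END;

The recalculation job then reads the keys logged since its last run, recomputes only those rows, and clears the log, instead of rebuilding everything and diffing.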
In BOL it says: "The Lookup transformation performs an equi-join between values in the transformation input and values in the reference dataset. Using an equi-join means that each row in the transformation input must match at least one row from the reference dataset. If there is no matching entry in the reference dataset, no join occurs and no values are returned from the reference dataset. This is an error, and the transformation fails, unless it is configured to ignore errors or redirect error rows to the error output. "
I have a Lookup transformation which is supposed to find matches on two fields in the reference dataset (a table in my case), but strangely, when I execute my package and the reference table is empty, the lookup still finds a match for each row of my input dataset.
Does anyone have an idea why? I couldn't find anything about this in BOL.
Hello, I have created an SSIS package programmatically and I want to add a Lookup transformation. How can I add a column from the reference dataset to the transformation? I have tried to add a new output column, but it gives me a validation error. I wrote the following code to add a new output column to the lookup:

IDTSOutputColumn90 outputColumn = this.lookup.OutputCollection[0].OutputColumnCollection.New();
outputColumn.Name = col.Name;
outputColumn.Description = "Staging table output";
outputColumn.TruncationRowDisposition = DTSRowDisposition.RD_FailComponent;
outputColumn.ErrorOrTruncationOperation = "Copy Column";
outputColumn.SetDataTypeProperties(col.DataType, col.Length, col.Precision, col.Scale, col.CodePage);
Please suggest another way to add a column from the reference dataset to the transformation output.
The logic I am trying to recreate via SSIS is the following SQL statement:
insert into db3.dbo.targettable1 (SiteC, Objecte, Attrib1)   -- Target database table
select distinct ?, ?,
from ?                                                       -- Source database table
join dbo.targettable2 c1                                     -- Target database table
  on c1.Alias = ? and c1.CSetID = ?
 and c1.FacID = (select f.PFacID from dbo.Fac f where f.FacID = ?)
where not exists (select *
                  from dbo.targettable2 c                    -- Target database table
                  where c.Alias = ? and c.FacID = ? and c.CSetID = ?)
I have an OLE DB Source that consists of an expression to approximate the following portion of the Above Select statement:
Select ?, from ? -- Source database table and
The package has 2 global variables, User::CSetID and User::FacID, whose scope is global to the package and whose values are set within a Foreach Loop Container outside of the Data Flow Task.
I was trying to reference the 2 global variables within the Lookup transformation to recreate the following portion of the SQL statement, but encountered errors:
join dbo.targettable2 c1 -- Target database table on c1.Alias = ? and c1.CSetID = ? and c1.FacID = (select f.PFacID from dbo.Fac f where f.FacID = ?)
In the Advanced Editor window of the Lookup transformation:
select * from (select * from [dbo].[targettable2 ]) as refTable where [refTable].[Alias] = ? and [refTable].[FacID] = ? and [refTable].[CSetID] = ?
Is there a way to reference global variables in a Lookup transformation that are set outside the Data Flow Task?
My source has 2.2 million records and I'm performing an incremental load. In the Lookup transformation I used the destination table as the reference, using full cache mode. The first time, the package executed successfully, but when I executed it a second time the package suddenly hung while running. Then I truncated the data from the destination table and restarted the SQL Server services. After doing all this I executed the package again and it worked, but when I executed the package a second time, it hung again. I have 8 GB RAM and an i5 2.5 GHz processor laptop.
I am using the following SELECT statement to get the row count from a SQL linked server table.
SELECT Count(*) FROM OPENQUERY (CMSPROD, 'Select * From MHDLIB.MHSERV0P')
MHDLIB is the library name in the IBM DB2 database. The above query gives me only the row count of table MHSERV0P. However, I need to get the names, row counts, and sizes of all tables that exist in the MHDLIB library. Is it possible at all?
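Possibly, if the catalog of the remote DB2 system is queryable. On DB2 for i, for example, something like the following might work through the same linked server; the QSYS2.SYSTABLESTAT view and its column names are an assumption here and may differ by DB2 platform and version:

SELECT *
FROM OPENQUERY(CMSPROD,
    'SELECT TABLE_NAME, NUMBER_ROWS, DATA_SIZE
     FROM QSYS2.SYSTABLESTAT
     WHERE TABLE_SCHEMA = ''MHDLIB''');

The idea is the same as the row-count query above: push the whole catalog query to DB2 and let OPENQUERY return the result set.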
'You have to have a BackUps folder included into your release!
Private Sub BackUpDB_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles BackUpDB.Click
    Dim addtimestamp As String
    Dim f As String
    Dim z As String
    Dim g As String
    Dim Dialogbox1 As New Backupinfo

    addtimestamp = Format(Now(), "_MMddyy_HHmm")
    z = "C:Program FilesVSoftAppMissNewAppDB.mdb"
    g = addtimestamp + ".mdb"

    'Add timestamp and .mdb ending to NewAppDB
    f = "C:Program FilesVSoftAppMissBackUpsNewAppDB" & g & ""
Try
File.Copy(z, f)
Catch ex As System.Exception
System.Windows.Forms.MessageBox.Show(ex.Message)
End Try
MsgBox("Backup completed succesfully.") If Dialogbox1.ShowDialog = Windows.Forms.DialogResult.OK Then End If End Sub
Code Snippet
'RESTORE DATABASE
Private Sub RestoreDB_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles RestoreDB.Click
    Dim Filename As String
    Dim Restart1 As New RestoreRestart
    Dim overwrite As Boolean
    overwrite = True
    Dim xi As String

    With OpenFileDialog1
        .Filter = "Database files (*.mdb)|*.mdb|" & "All files|*.*"
        If .ShowDialog() = Windows.Forms.DialogResult.OK Then
            Filename = .FileName

            'Strips restored database from the timestamp
            xi = "C:Program FilesVSoftAppMissNewAppDB.mdb"
            File.Copy(Filename, xi, overwrite)
        End If
    End With

    'Notify user
    MsgBox("Data restored successfully")

    Restart()
    If Restart1.ShowDialog = Windows.Forms.DialogResult.OK Then
        Application.Restart()
    End If
End Sub
Code Snippet
'CREATE NEW DATABASE
Private Sub CreateNewDB_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles CreateNewDB.Click
    Dim L As New DatabaseEraseWarning
    Dim Cat As ADOX.Catalog
    Cat = New ADOX.Catalog
    Dim Restart2 As New NewDBRestart

    If File.Exists("C:Program FilesVSoftAppMissNewAppDB.mdb") Then
        If L.ShowDialog() = Windows.Forms.DialogResult.Cancel Then
            Exit Sub
        Else
            File.Delete("C:Program FilesVSoftAppMissNewAppDB.mdb")
        End If
    End If

    Cat.Create("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:Program FilesVSoftAppMissNewAppDB.mdb;Jet OLEDB:Engine Type=5")
    Dim Cn As ADODB.Connection
    'Dim Cat As ADOX.Catalog
    Dim Tablename As ADOX.Table
    'Tailor these according to your needs - add as many columns as you need.
    Dim col As ADOX.Column = New ADOX.Column
    Dim col1 As ADOX.Column = New ADOX.Column
    Dim col2 As ADOX.Column = New ADOX.Column
    Dim col3 As ADOX.Column = New ADOX.Column
    Dim col4 As ADOX.Column = New ADOX.Column
    Dim col5 As ADOX.Column = New ADOX.Column
    Dim col6 As ADOX.Column = New ADOX.Column
    Dim col7 As ADOX.Column = New ADOX.Column
    Dim col8 As ADOX.Column = New ADOX.Column
    Cn = New ADODB.Connection
    Cat = New ADOX.Catalog
    Tablename = New ADOX.Table

    'Open the connection
    Cn.Open("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:Program FilesVSoftAppMissNewAppDB.mdb;Jet OLEDB:Engine Type=5")

    'Open the Catalog
    Cat.ActiveConnection = Cn

    'Create the table (you can name it anyway you want)
    Tablename.Name = "Table1"
    'Tailor according to your needs - add as many columns as you need. Watch the DataType!
    col.Name = "ID"
    col.Type = ADOX.DataTypeEnum.adInteger
    col1.Name = "MA"
    col1.Type = ADOX.DataTypeEnum.adInteger
    col1.Attributes = ADOX.ColumnAttributesEnum.adColNullable
    col2.Name = "FName"
    col2.Type = ADOX.DataTypeEnum.adVarWChar
    col2.Attributes = ADOX.ColumnAttributesEnum.adColNullable
    col3.Name = "LName"
    col3.Type = ADOX.DataTypeEnum.adVarWChar
    col3.Attributes = ADOX.ColumnAttributesEnum.adColNullable
    col4.Name = "DOB"
    col4.Type = ADOX.DataTypeEnum.adDate
    col4.Attributes = ADOX.ColumnAttributesEnum.adColNullable
    col5.Name = "Gender"
    col5.Type = ADOX.DataTypeEnum.adVarWChar
    col5.Attributes = ADOX.ColumnAttributesEnum.adColNullable
    col6.Name = "Phone1"
    col6.Type = ADOX.DataTypeEnum.adVarWChar
    col6.Attributes = ADOX.ColumnAttributesEnum.adColNullable
    col7.Name = "Phone2"
    col7.Type = ADOX.DataTypeEnum.adVarWChar
    col7.Attributes = ADOX.ColumnAttributesEnum.adColNullable
    col8.Name = "Notes"
    col8.Type = ADOX.DataTypeEnum.adVarWChar
    col8.Attributes = ADOX.ColumnAttributesEnum.adColNullable
    'You have to append all the columns you have created above
    Tablename.Columns.Append(col)
    Tablename.Columns.Append(col1)
    Tablename.Columns.Append(col2)
    Tablename.Columns.Append(col3)
    Tablename.Columns.Append(col4)
    Tablename.Columns.Append(col5)
    Tablename.Columns.Append(col6)
    Tablename.Columns.Append(col7)
    Tablename.Columns.Append(col8)
    'Append the newly created table to the Tables Collection
    Cat.Tables.Append(Tablename)

    'User notification
    MsgBox("A new empty database was created successfully")

    'Restart application
    If Restart2.ShowDialog() = Windows.Forms.DialogResult.OK Then
        Application.Restart()
    End If
End Sub
Code Snippet
'COMPACT DATABASE
Private Sub CompactDB_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles CompactDB.Click
    Dim JRO As JRO.JetEngine
    JRO = New JRO.JetEngine

    'The first source is the original, the second is the compacted database under another name.
    JRO.CompactDatabase("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:Program FilesVSoftAppMissNewAppDB.mdb;Jet OLEDB:Engine Type=5", _
        "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:Program FilesVSoftAppMissNewAppDBComp.mdb;Jet OLEDB:Engine Type=5")

    'The original (not compacted) database is deleted
    File.Delete("C:Program FilesVSoftAppMissNewAppDB.mdb")

    'The compacted database is renamed to the original database's name.
    Rename("C:Program FilesVSoftAppMissNewAppDBComp.mdb", "C:Program FilesVSoftAppMissNewAppDB.mdb")

    'User notification
    MsgBox("The database was compacted successfully")
End Sub
I'm working on an ASP.Net project where I want to test code on a local machine using a local database as a back-end, and then export it to the production machine where it uses the hosting provider's SQL Server database on the back-end. Is there a way to export tables from one SQL Server database to another in such a way that if a table already exists in the destination database, it will be updated to reflect the changes to the local table, without existing data in the destination table being lost? E.g. suppose I change some tables in my local database by adding new fields. Can I "export" these changes to the destination database so that the new fields will be added to the destination tables (and filled in with default values), without losing data in the destination tables?
If I run the DTS Import/Export Wizard that comes with SQL Server and choose "Copy table(s) and view(s) from the source database" and choose the tables I want to copy, there is apparently no option *not* to copy the data, and since I don't want to copy the data, that choice doesn't work. If instead of "Copy table(s) and view(s) from the source database" I choose "Copy objects and data between SQL Server databases", then on the following options I can uncheck the "Copy Data" box to prevent data being copied. But for the "Create Destination Objects" choices, I have to uncheck "Drop destination objects first" since I don't want to lose the existing data. But when I uncheck that and try to do the copy, I get collisions between the properties of the local table and the existing destination table, e.g.:
"Table 'wbuser' already has a primary key defined on it."
Is there no way to do what I want using the DTS Import/Export Wizard? Can it be done some other way?
-Bennett
I have trade data tables (about 10) and I need to retrieve information based on input parameters. Each table has about 3-4 million rows.
The table has columns like Commodity, Unit, Quantity, Value, Month, Country
A typical query I use to select data is "Select top 10 commodity , sum(value), sum(quantity) , column4, column5, column6 from table where month=xx and country=xxxx"
Column4 = column2 / (total sum of value), column5 = column3 / (total sum of quantity), and column6 = column5 / column4.
It takes about 3-4 minutes for the query to complete, which is a lot of time, especially since I need to pull this information from a webpage.
I wanted to know if there is an alternative way to pull the data from the server.
I mean, can I write a script that creates tables for all the input combinations, i.e. month x country (12 x 228), and saves them as sub-tables with a naming convention, so that from the web I can just pull the table whose name maps to the input parameters and not run any queries against the database at run time?
OR
Can I write a script that creates an HTML file for each table, for all input combinations, and save them?
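Rather than one table (or HTML file) per month x country combination, another option to consider is a single pre-aggregated summary table, refreshed on a schedule and indexed on the lookup columns. A rough sketch, with the source table name hypothetical and the columns taken from the description above:

SELECT Month, Country, Commodity,
       SUM(Value)    AS TotalValue,
       SUM(Quantity) AS TotalQuantity
INTO   dbo.TradeSummary
FROM   dbo.TradeData           -- hypothetical source table
GROUP BY Month, Country, Commodity;

CREATE CLUSTERED INDEX IX_TradeSummary ON dbo.TradeSummary (Month, Country);

The webpage then selects from TradeSummary with WHERE Month = @month AND Country = @country, which is a small indexed seek instead of a 3-4 minute aggregation, and there is only one table to maintain.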
Hi all, I have a large Excel file with one large table of data. I've built a SQL Server database and I want to fill it with the data from the Excel file.
Currently, I'm using the following steps to migrate millions of records from Foxpro tables to SQL Server tables:
1. Transfer the Foxpro records to .dat files and then bcp them into SQL Server tables in a dummy database. All the SQL tables have the same columns as the Foxpro tables.
2. Manipulate the data in the SQL tables of the dummy database and save the manipulated data into the SQL tables of the real database, where the tables may have a different structure from the corresponding Foxpro tables.
I only know the following ways to import Foxpro data into SQL Server:
#1. Transfer Foxpro records to .dat files and then bcp to SQL Server tables
#2. Transfer Foxpro records to .dat files and then BULK INSERT to SQL Server tables
#3. DTS Foxpro records directly to SQL Server tables
I'm wondering whether the following choices would be better than the current way:
1st choice: Change step 1 to use #2 instead of #1
2nd choice: Change step 1 to use #3 instead of #1
3rd choice: Use #3 plus manipulation in DTS to replace steps 1 and 2
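For reference, the BULK INSERT variant of step 1 (option #2) looks roughly like this; the staging table name, file path and terminators are placeholders that would need to match the .dat files:

BULK INSERT dbo.StagingCustomers
FROM 'D:\export\customers.dat'
WITH (
    FIELDTERMINATOR = '|',
    ROWTERMINATOR   = '\n',
    TABLOCK,
    BATCHSIZE = 100000
);

The main practical difference from bcp is that BULK INSERT runs inside the server from T-SQL, so it can be wrapped in the same script that does the step 2 manipulation, whereas bcp is an external command-line pass.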