I need to store a list of parameters in a database. Each parameter has a name, description, comment and a value. Easy so far.
However the values are of different types. Each individual parameter has a value which may be of type int, decimal, string, boolean, custom type etc.
Which table design pattern is most appropriate? We have a heated in-house discussion and I need supporting arguments.
Options explored so far:
1) (De-)serializing the value to a string type.
2) Adding a column for each type, using only one column at a time.
3) Adding extra value tables, one table for each type.
The disadvantages of each option are obvious and form the basis of our discussion.
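For concreteness, here is a minimal sketch of option 2, with the "only one column at a time" rule enforced by a CHECK constraint (all names and types are assumptions, not a recommendation):

CREATE TABLE dbo.Parameter (
    ParameterID  int IDENTITY(1,1) PRIMARY KEY,
    Name         nvarchar(100) NOT NULL,
    Description  nvarchar(400) NULL,
    Comment      nvarchar(400) NULL,
    IntValue     int NULL,
    DecimalValue decimal(18,4) NULL,
    StringValue  nvarchar(400) NULL,
    BoolValue    bit NULL,
    -- enforce "exactly one value column populated at a time"
    CHECK (
        (CASE WHEN IntValue     IS NOT NULL THEN 1 ELSE 0 END) +
        (CASE WHEN DecimalValue IS NOT NULL THEN 1 ELSE 0 END) +
        (CASE WHEN StringValue  IS NOT NULL THEN 1 ELSE 0 END) +
        (CASE WHEN BoolValue    IS NOT NULL THEN 1 ELSE 0 END) = 1
    )
);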
Your help in this matter will be appreciated. Regards, Tonn
I created a calculated measure in the cube, something like this: ([TransType].[TransTypeHierarchy].[TransTypeCategoryParent].&[SPEND],[Measures].[Transaction Amount]), to get only spend transactions. Now I want to slice this measure by the same hierarchy to find the amount distribution across the different transaction types under the SPEND category, but the query behaves as if the measure has no relationship to the hierarchy.
You can think of it as the query below:

WITH MEMBER SPEND AS
    ([TransType].[TransTypeHierarchy].[TransTypeCategoryParent].&[SPEND],
     [Measures].[Transaction Amount])
SELECT
    NON EMPTY { SPEND } ON 0,
    NON EMPTY ([TransType].[TransTypeHierarchy].[TransTypeCategoryParent]) ON 1
FROM [CUBE]
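If I'm reading this right, the fixed &[SPEND] coordinate inside the calculated member's tuple overrides whatever member is on the rows axis, so every row returns the SPEND total. One idea: skip the calculated member and put the SPEND member's children on rows instead; a sketch, assuming the hierarchy has a level below TransTypeCategoryParent:

SELECT
    NON EMPTY { [Measures].[Transaction Amount] } ON 0,
    NON EMPTY { [TransType].[TransTypeHierarchy].[TransTypeCategoryParent].&[SPEND].Children } ON 1
FROM [CUBE]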
This question is around how we can get the data types and lengths populated into the flat file source columns.
In the Connection Manager you have your flat file defined. You can choose "Suggest Types...", and the minimum lengths and correct data types will be inferred from the data in the flat file.
Is there some way to automate this data type definition, but from the other direction (from the destination table that we are loading)?
For example, you have mapped the columns that will be loaded. Can you then reverse engineer the data types and lengths for the columns in the flat file from the destination table?
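As far as I know there is no built-in reverse of "Suggest Types...", but the destination metadata is easy to pull with a query like the one below, and the flat file column definitions could be keyed off its output (dbo.MyDestination is a placeholder for the mapped destination table):

SELECT COLUMN_NAME,
       DATA_TYPE,
       CHARACTER_MAXIMUM_LENGTH,
       NUMERIC_PRECISION,
       NUMERIC_SCALE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo'
  AND TABLE_NAME = 'MyDestination'   -- hypothetical destination table
ORDER BY ORDINAL_POSITION;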
I am preparing to design an application that will archive files created by another application. In my SQL database I want to store details about the file and then the file itself. Each file is about 500KB in size and there will be about 20,000 files generated per year.
My preference is to store these files in a BLOB field. It makes storage, linking to file metadata, backup, etc. easy for me. I have already solved the technical issues surrounding pulling the file in and out of a BLOB field.
By my calculations I will need a server with 10GB of disk space for each year of files archived which doesn't seem outlandish for table size.
However, I do not want to design my application only to find out a year from now that I should have been storing these files in a traditional file system because of (... whatever ...) and just linking to them by path in the SQL database.
I'm curious what users of this forum believe to be the best practices surrounding this type of database?
The master database in SQL Server 7 has a transaction log. Using Enterprise Manager, the option to back up the transaction log is greyed out. Is this because there is no need to back it up? I don't know if there is any value in doing so, or whether it is even possible. I have a number of books, none of which cover this specific question. Can anyone help? pargat.bhatti@uk.neceur.com
Okay. I changed the times that the transaction logs are backed up, via the built-in maintenance schedule. It was then that I started to get failures, but only on the transaction log backup for master. The error I get from the history log is "Backup can not be performed on this database. This sub task is ignored." If I look in the file that is saved to disk:
Starting maintenance plan 'DB Maintenance Plan1' on 09/11/2004 02:30:00
Backup can not be performed on database 'master'. This sub task is ignored.
End of maintenance plan 'DB Maintenance Plan1' on 09/11/2004 02:30:31
SQLMAINT.EXE Process Exit Code: 1 (Failed)
I changed the backup back to its original time, as this was the only change made, but this has not resolved the problem. I am new to SQL and still finding my feet; all the other SQL maintenance plans that I changed are working fine.
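From what I've since read, SQL Server simply does not allow transaction log backups of master (its log is truncated automatically), which would explain both the greyed-out option and the failing sub task; only full backups of master are supported. A minimal sketch, with a placeholder path:

-- Full backup is the only supported backup type for the master database
BACKUP DATABASE master
TO DISK = N'C:\Backups\master.bak'   -- hypothetical path
WITH INIT;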
In my Data Flow I have an OLE DB Source and an OLE DB Command. Using these I am inserting data into master and child tables.
With the OLE DB Source I insert into the master table and return the ID of each newly inserted row. Then, in the OLE DB Command, I insert into the child table using the ID returned from the OLE DB Source.
It is working fine. Now I want to put this in a transaction, so that if the insert into the child table fails, the changes made to the master table are rolled back. I tried setting TransactionOption to Supported on the Data Flow, but it does not seem to work for me. Please suggest the best approach for this.
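One thing I've noticed while digging: Supported only joins an existing transaction; for SSIS to start one, the enclosing container needs TransactionOption = Required (and the MSDTC service must be running). The alternative I'm considering is doing both inserts inside an explicit T-SQL transaction; a sketch with placeholder table and column names:

BEGIN TRY
    BEGIN TRANSACTION;

    -- insert into the master table and capture the new ID (hypothetical tables)
    DECLARE @NewID int;
    INSERT INTO dbo.MasterTable (Name) VALUES (N'example');
    SET @NewID = SCOPE_IDENTITY();

    -- insert into the child table using the returned ID
    INSERT INTO dbo.ChildTable (MasterID, Detail) VALUES (@NewID, N'example detail');

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    DECLARE @msg nvarchar(2048);
    SELECT @msg = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);
END CATCH;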
I have a full database backup up to the previous day and the transaction log file containing today's transactions. My database has crashed. I have restored the previous day's full backup, but I am having difficulty restoring today's transactions from today's transaction log. What are the steps to restore the full database backup plus one day's transaction log? Note: there is no differential database backup and no transaction log backup.
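From what I've read, the sequence should be something like the sketch below (names and paths are placeholders). The key points are that the tail of the log must be backed up first if the log file survived, and the full backup must be restored WITH NORECOVERY or the log cannot be applied afterwards:

-- 1. Back up the tail of the log (possible if the log file survived the crash)
BACKUP LOG MyDatabase
TO DISK = N'C:\Backups\MyDatabase_tail.trn'
WITH NO_TRUNCATE;

-- 2. Restore the full backup, leaving the database ready for log restores
RESTORE DATABASE MyDatabase
FROM DISK = N'C:\Backups\MyDatabase_full.bak'
WITH NORECOVERY;

-- 3. Apply the log backup and bring the database online
RESTORE LOG MyDatabase
FROM DISK = N'C:\Backups\MyDatabase_tail.trn'
WITH RECOVERY;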
I have a text file to import that contains three record types: a header record, which has info about who sent the file and begins with 'H'; detail records, which begin with 'D'; and a trailer record, which begins with 'T' followed by just the record count. The fields are delimited by '*'. The H, D and T records each contain a different number of fields. I suspect that what I should do is split this file into three separate files. I tried to do this with SSIS but ran into problems: if I make the output a flat file destination, it won't let me use that output as input for the next process. There are no arrows I can grab onto to link to the next transform.
This is my first SSIS package although I made hundreds of DTS packages a few years ago. I can't figure this out in DTS either.
This sounds like it should be an EASY thing to do.
Each 01 record type has the records after it associated with it until the next 01 appears, so TestStuff would have TestStuff 2 and 3 related to it, while TestStuff 4 and 5 belong together. In the example, the 888 in the 01 record is the key to the group, but it does not appear in the following lines.
The problem is that each record type has different line formats, columns, etc., so they must be parsed differently. I have created a conditional branch on the first two characters and written each record type out to a separate flat file for that type, so that they can be imported again and parsed with the Flat File Source, but I am unsure how to relate them again. I tried appending the 888 to the other lines before they were written out, but I can't find a way to share the variable across the conditional split branches using a script component.
Does anyone have an idea how I could parse these files and keep the relationship intact?
Is there a way to tell the flat file wizard to use a different map based on certain characters?
Is there a way to share a variable across the different branches of a conditional split?
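One alternative I've thought of that avoids sharing a variable across the split branches: load the raw lines into a staging table with an identity column that preserves file order, then propagate each 01 record's key (the 888 in the example) to the rows that follow it with a set-based query. A sketch, with placeholder names and an assumed key position:

-- Staging table: one row per raw line, in file order (hypothetical names)
CREATE TABLE dbo.RawLines (
    LineID  int IDENTITY(1,1) PRIMARY KEY,
    RawLine varchar(8000) NOT NULL
);

-- After loading the file, tag each line with the key of the nearest 01 record above it
SELECT r.LineID,
       r.RawLine,
       (SELECT TOP 1 SUBSTRING(h.RawLine, 3, 3)   -- assumed position/length of the group key (888)
        FROM dbo.RawLines h
        WHERE h.LineID <= r.LineID
          AND LEFT(h.RawLine, 2) = '01'
        ORDER BY h.LineID DESC) AS GroupKey
FROM dbo.RawLines r;

Each record type can then be parsed from RawLine with its own format while keeping GroupKey to relate them.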
I would like to create a table called Product. My objective is to show the list of packages available for each product in a DataGridView column when each product is selected. Each product may have different package types (e.g. Nos, CTN, OTR, etc.). Some products may have two packages and some three, etc. The quantity in each package may also differ (e.g. for some products a CTN may contain 12 nos; in other cases 8 nos). Prices for each package will also be different and need to be shown. How should I design the table?
Product name: Nestle milk           | Rainbow milk
Packages:     CTN, OTR, Nos         | CTN, Nos
Price:        50, 20, 5             | 40, 6
Remarks:      CTN=10 nos, OTR=4 nos | CTN=8 nos
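A normalized sketch of one possible design, with hypothetical names and types, where quantity and price live on the product/package combination:

CREATE TABLE dbo.Product (
    ProductID   int IDENTITY(1,1) PRIMARY KEY,
    ProductName nvarchar(100) NOT NULL
);

CREATE TABLE dbo.PackageType (
    PackageTypeID int IDENTITY(1,1) PRIMARY KEY,
    PackageName   nvarchar(20) NOT NULL    -- e.g. 'CTN', 'OTR', 'Nos'
);

-- One row per product/package combination, with its own quantity and price
CREATE TABLE dbo.ProductPackage (
    ProductID         int NOT NULL REFERENCES dbo.Product(ProductID),
    PackageTypeID     int NOT NULL REFERENCES dbo.PackageType(PackageTypeID),
    QuantityInPackage int NOT NULL,        -- e.g. CTN = 10 nos for Nestle, 8 for Rainbow
    Price             decimal(10,2) NOT NULL,
    PRIMARY KEY (ProductID, PackageTypeID)
);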
Hello, I am building a web page in ASP.NET 2.0 and I'm looking for a way to save various files uploaded by users (such as doc, pdf, cs, txt ... whatever). Obviously the normal way would be to store them on the file system of the web server (or any other middleware server). But I am asking if there is a way to store these files in the database without affecting the performance of the web page too much. I know a picture can be stored easily, but what about all the other file types? I would appreciate it if someone could shed light on the subject. Thanks, Tranquil.
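Any file type can be stored the same way as a picture; to the database it's just bytes. A minimal sketch of such a table, with placeholder names (on SQL Server 2005, varbinary(max) is the usual choice over image):

CREATE TABLE dbo.UploadedFile (
    FileID      int IDENTITY(1,1) PRIMARY KEY,
    FileName    nvarchar(260) NOT NULL,
    ContentType nvarchar(100) NOT NULL,   -- e.g. 'application/pdf'
    Content     varbinary(max) NOT NULL,  -- the raw bytes of the uploaded file
    UploadedAt  datetime NOT NULL DEFAULT GETDATE()
);

Performance then mostly depends on file sizes and on streaming the bytes rather than buffering whole files in memory.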
Hi, I have a flat file that contains 2 types of records: dev and production. The dev records are marked with a 'D' and the production records with a 'P'. These records are different: the dev records are in a different order and contain different info than the production records. I need to use SSIS to import the data into 2 different SQL tables. How do I do this? Can anyone help me? Thanks in advance.
I'm getting a bit lost in SSIS. I've got an Excel source file that I'm trying to load into a table. I keep getting validation errors that warn about not being able to convert between unicode and non-unicode string data types.
I'm trying to figure out where I have to change this and am frankly confused. It seems SSIS is selecting various columns as unicode/WSTR data types, but I want them to import as regular string (DT_STR) types.
On the Data Flow tab in SSIS, I right-click on the source Data Flow component (the Excel file) and select Show Advanced Editor. Then on the last tab, Input and Output Properties, there's a tree view for the Excel output. There are "External Columns" and "Output Columns" containers in the tree view.
I tried setting some of these but they don't seem to "take". Do I need to change the data type for each column under both the External and Output columns?
That seems like a lot of work! And, as I say, I tried setting some, but I still got the same validation errors. So then I went back to this spot (Advanced Editor -> Input and Output Properties tab) and my changes seem to have been lost.
I have a problem with a flat file connection which I cannot understand at present. Here is the issue: I've got an ASCII file containing 233898 lines. I try to read this file in two different packages using two different connections. In the first connection I used the default data type, DT_STR of length 50; in the second one I used the "Suggest Types..." feature (with 2000 samples) to detect types that better match the reality. When I try to load my data, the first connection reads exactly 233898 lines from the file, but the second one reads only 203898; somehow it skips 30000 lines.
I examined the error output for the second connection: everything goes smoothly, no problems reported. But somehow those 30000 lines are missed.
Has anybody experienced such a situation? Is the issue known?
I've just started looking at SSIS and have encountered what should hopefully be a simple problem to solve. I have a pipe-separated source file that looks like this (I've added Line numbers for simplicity):
In addition to header and footer records, this file contains three record types for each person.
Record types are identified by the second column.
Each record type has a different number of columns:
Type 100 has 5 columns
Type 200 has 4 columns
Type 305 has 12 columns
The Row delimiter for all records is the {CR}{LF} character
I've set up a flat file input source and specified {CR}{LF} as the row delimiter for both header and data rows and the "|" character as the field delimiter.
It appears that SSIS assumes that because the first data row has 5 columns, every row must fit that format. So the {CR}{LF} that separates lines 02 and 03 is interpreted as text rather than as a row separator, and all remaining '|' field separators after the 305 are treated as text contained in the fifth column. SSIS also complains that the last row is incomplete.
A bit like this (I've used tildes to indicate column separation):
I've seen one other reference to this behaviour, but the response seemed to be that SSIS doesn't know which columns are missing. In this scenario we don't have missing columns; rather, we have different types of record in a single file. In DTS I would effectively parse the file once for each record type, thus:
I'm using SSIS to import seven flat files (each containing a different record type) into a staging database. This part was easy.
Now I need to export the records from all seven tables into a single flat file structured in a nested hierarchy using common keys. (This format is required by the vendor for loading data into a new system).
I could use some ideas on the data transformations needed to combine all seven record types into a hierarchical record set which can then be written to my Flat File Destination. I'm currently looking at an article on SQLIS.com ("Handling Different Row Types In The Same File") which seems close to what I need, but they are importing (ref: www.sqlis.com/54.aspx ). I'm not sure if I should just reverse this for export or use something different. Any comments are appreciated.
The record types B1 through E2 form a complete set. Each set has its own unique child-set key. There may be one or more sets for each typeA record (although it's possible that typeE records don't exist in the most recent set).
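One set-based approach I'm considering: UNION ALL the seven staging tables into a single query that emits one pre-formatted output line per record plus sort keys that reproduce the nesting, and feed that query to the Flat File Destination. A rough sketch, with hypothetical table, key and column names:

SELECT RecordLine, TypeAKey AS ParentKey, 0 AS ChildSetKey, 0 AS TypeSort
FROM dbo.StageTypeA
UNION ALL
SELECT RecordLine, ParentKey, ChildSetKey, 1
FROM dbo.StageTypeB1
-- ... one branch per remaining record type (B2 through E1) ...
UNION ALL
SELECT RecordLine, ParentKey, ChildSetKey, 6
FROM dbo.StageTypeE2
ORDER BY ParentKey, ChildSetKey, TypeSort;

RecordLine here stands for the record already formatted to the vendor layout; the ORDER BY keeps each child set under its typeA record.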
I've seen several posts about reading and writing files that have different record types with varying column metadata. My particular file has 11 record types plus several header types and looks something like:
<Header1>
<Header2>
<Detail01-#1>
<Subdetail02>
<Subdetail03>
...
<Detail01-#2>
<Subdetail02>
<Subdetail03>
...
...
Since I need to get different detail and subdetail records, I can't really use the technique of three destination file connection managers found in http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=87269&SiteID=1
I've tried using an Execute SQL task to get the main detail records and then a Foreach ADO enumerator to get the subdetails, but it all seems so kludgy. I'm starting to think that I should just write the bulk of the file-creation code in a C# app instead of trying to smush this into SSIS. Opinions? Am I missing some trick in SSIS?
I've been writing this stuff for a while and can't seem to come to a conclusion about how I should be retrieving data and assigning it to variables.
Since I'm using SQL Server, I'm inclined to use the data reader's GetSqlDouble (or whatever) methods, but this would mean my local variables need to be SQL types. The problem with that is that I would have to do a lot of conversions to use a SQL type in my application.
For instance, I have a class where I'm retrieving dates. To retrieve them correctly (null values included), I need to retrieve them with GetSqlDateTime(); then, when it comes time to display the date in a table, I must first check for nulls and then convert to a string. This seems very cumbersome. Would I be better off just using GetDateTime() and the ToString() method, and ignoring SQL types altogether?
So, basically, how are you using your SQL Server data? With the supplied SQL types, doing all of the post-processing work manually? I feel like I'm having trouble conveying my issue... hopefully someone knows what I mean. I'd just like some direction to save trouble in the long run, since I feel there's got to be a better way.
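To make the comparison concrete, here is a small sketch of the two styles side by side (column 0 is assumed to be a nullable datetime; names are illustrative only):

using System;
using System.Data.SqlClient;
using System.Data.SqlTypes;

class DateReadingSketch
{
    // Style 1: SQL types carry their own null flag
    static string DateViaSqlTypes(SqlDataReader reader)
    {
        SqlDateTime d = reader.GetSqlDateTime(0);
        return d.IsNull ? "" : d.Value.ToString("yyyy-MM-dd");
    }

    // Style 2: plain CLR types, testing for DBNull before reading
    static string DateViaClrTypes(SqlDataReader reader)
    {
        return reader.IsDBNull(0) ? "" : reader.GetDateTime(0).ToString("yyyy-MM-dd");
    }
}

Either way there's one null check per column; the SQL types mainly buy exact SQL Server semantics (precision, ranges), so unless that matters, IsDBNull + GetDateTime seems like the simpler route.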
So when I try to load from the master table to the parent and child tables, I am using an expression like:
SELECT B.ID, A.*
FROM FLATFILE_INVENTORY AS A
JOIN DMS_INVENTORY AS B
  ON A.ACDealerID = B.DMSDEALERID
 AND A.StockNumber = B.STOCKNUMBER
 AND A.InventoryDate = B.INVENTORYDATE
 AND A.VehicleVIN = B.VEHICLEVIN
WHERE CONVERT(date, A.[FtpDate]) = CONVERT(date, GETDATE())
  AND CONVERT(date, B.FtpDate) = CONVERT(date, GETDATE());
If I use this expression, I get only the current system date's data loaded from the master table to the parent and child tables.
My problem: on my local server, if I have loaded today's data and then need to load yesterday's data, I can change the system date to yesterday and re-run the expression, so that yesterday's data alone gets loaded from master to the parent and child tables.
But on a remote server I cannot change the system date.
With this expression, loading the current date works perfectly, but when I try to load yesterday's data it picks up only the current date's data, not yesterday's.
What expression will load whichever date I choose from the master table into the parent and child tables, without changing the system date?
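The usual fix is to stop deriving the date from the system clock and pass the target date in as a parameter instead (for example, an SSIS variable mapped into the query); a sketch with the parameter hard-wired to yesterday for illustration:

DECLARE @LoadDate date;
SET @LoadDate = DATEADD(day, -1, CONVERT(date, GETDATE()));  -- e.g. yesterday

SELECT B.ID, A.*
FROM FLATFILE_INVENTORY AS A
JOIN DMS_INVENTORY AS B
  ON A.ACDealerID = B.DMSDEALERID
 AND A.StockNumber = B.STOCKNUMBER
 AND A.InventoryDate = B.INVENTORYDATE
 AND A.VehicleVIN = B.VEHICLEVIN
WHERE CONVERT(date, A.[FtpDate]) = @LoadDate
  AND CONVERT(date, B.FtpDate) = @LoadDate;

Setting @LoadDate (or the mapped variable) to any date loads that date's rows without touching the server clock.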
The length of column ID is not enough, so I want to alter its length. The ALTER statement is:
ALTER TABLE student ALTER COLUMN ID CHAR(20)
Because the table student is referenced by the table score, the ALTER statement cannot alter the column, and SQL Server raises errors.
But I can manually alter the length of column ID in SQL Server Management Studio. How can I alter the column length in the master table (student) along with the referencing table (score)?
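Management Studio does something similar behind the scenes (often by rebuilding the table). A sketch of the script equivalent, with the constraint and child column names assumed for illustration:

-- Drop the FK, widen both sides, then recreate the FK
-- (FK_score_student and StudentID are hypothetical names)
ALTER TABLE score DROP CONSTRAINT FK_score_student;

ALTER TABLE student ALTER COLUMN ID CHAR(20) NOT NULL;        -- keep the column's original nullability
ALTER TABLE score   ALTER COLUMN StudentID CHAR(20) NOT NULL;

ALTER TABLE score ADD CONSTRAINT FK_score_student
    FOREIGN KEY (StudentID) REFERENCES student (ID);

If ID is also student's primary key, the PK constraint may need the same drop/recreate treatment around the ALTER.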
Hi, is there any way of finding updated/deleted records using other data flow transformation tasks, without using an Execute SQL task? I can find the new records using a Merge Join task.
Is there a better way to merge the master table from a staging table?
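If a T-SQL option is acceptable after all, SQL Server 2008's MERGE statement covers inserts, updates and deletes against the master table in one pass; a sketch with hypothetical table and column names:

MERGE dbo.MasterTable AS target
USING dbo.StagingTable AS source
    ON target.BusinessKey = source.BusinessKey
WHEN MATCHED AND target.SomeValue <> source.SomeValue THEN
    UPDATE SET target.SomeValue = source.SomeValue
WHEN NOT MATCHED BY TARGET THEN
    INSERT (BusinessKey, SomeValue)
    VALUES (source.BusinessKey, source.SomeValue)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;

Within the data flow itself, the usual pattern is a Merge Join (full outer) followed by a Conditional Split on null keys to separate new, updated and deleted rows.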
Now I have a master table for a device utility. There is an attribute called "Device Type" in the table. Every device type has specific device attributes associated with it, and the attributes of the different device types are stored in different tables. When I select a particular value of Device Type (let's say Type 1 or Type 2 ...), only the table that has the attributes associated with that particular device type should be selected. How can I do this? How do I form a relation between the tables?
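For reference, a common supertype/subtype sketch of this (all names hypothetical): each attribute table shares the master table's key, so the selected Device Type tells you which attribute table to join:

CREATE TABLE dbo.Device (
    DeviceID   int PRIMARY KEY,
    DeviceType int NOT NULL          -- 1, 2, ...
);

CREATE TABLE dbo.DeviceType1Attributes (
    DeviceID int PRIMARY KEY REFERENCES dbo.Device(DeviceID),
    Attr1    nvarchar(50)
);

CREATE TABLE dbo.DeviceType2Attributes (
    DeviceID int PRIMARY KEY REFERENCES dbo.Device(DeviceID),
    AttrA    int
);

-- e.g. for type 1 devices:
SELECT d.DeviceID, a.Attr1
FROM dbo.Device d
JOIN dbo.DeviceType1Attributes a ON a.DeviceID = d.DeviceID
WHERE d.DeviceType = 1;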
We need to load a "master" flat file into SQL Server tables. The file is a dump from a mainframe. Based on a field called "record_type", each record in the file has different columns. I'll use the following as an example (the real file is much more complicated than this, but you get the idea):
If "M", the fields are "age", "gender", "birthdate", "state", "salary"
If "F", the fields are "age", "gender", "birthdate", "state", "company", "salary"
We need to load the file (only one file) into two different tables, M_table and F_table. But I have researched and discovered that in DTS the source (text file) cannot be queried against to filter on the gender field.
Since each record may have a different number of fields, I cannot really load the flat file into a "staging" table.
Does anyone have any idea how to achieve this? Thanks in advance!!!
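One workaround: a staging table does work if it has just a single wide column, so each raw line loads intact and the split happens in T-SQL afterwards. A sketch, where the path is a placeholder and record_type is assumed to be the first character of each line:

-- Stage every raw line in one column
CREATE TABLE dbo.RawStage (RawLine varchar(8000));

BULK INSERT dbo.RawStage
FROM 'C:\data\master_dump.txt'       -- hypothetical path
WITH (ROWTERMINATOR = '\n');         -- ensure the default field terminator (tab) never occurs in the data

-- Route the rows by record type; parsing into the real M_table/F_table
-- columns (SUBSTRING/CHARINDEX or a second pass) happens from here
SELECT RawLine INTO dbo.M_raw FROM dbo.RawStage WHERE LEFT(RawLine, 1) = 'M';
SELECT RawLine INTO dbo.F_raw FROM dbo.RawStage WHERE LEFT(RawLine, 1) = 'F';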