One of our production accounting databases started out at 2 GB and has grown to 20 GB over a period of a few years. I have been getting timeouts when transactions try to update various tables in the database. Most of the errors I get involve I/O requests to the data file (the production database's data file, Accounting_Data.MDF).
I would like to implement the following for this accounting database. I need to split the data file into multiple files by placing some of the tables in different filegroups. The server has been upgraded so that it can have different drives on different channels, and I can place these data and log files on different drives so there will be less I/O contention.
I would like to have the following filegroups:
FileGroup 1 - all database definitions (DDL).
FileGroup 2 - the AR module tables.
FileGroup 3 - the GL module tables.
FileGroup 4 - the rest of the tables.
FileGroup 5 - the indexes.
Also, where will the associated transaction log files go?
I would like to get some help doing this. Are there any articles or other resources I can refer to? Any suggestions, corrections, or criticisms of what I have described above would be much appreciated!
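For example, I assume the mechanics per filegroup would look something like this (database, filegroup, file, path, table, and index names are all placeholders); corrections welcome. As far as I know the transaction log does not belong to any filegroup, so the log file is simply placed on its own drive when it is created or added.

-- Add a filegroup for the AR module and give it a file on its own drive
ALTER DATABASE Accounting ADD FILEGROUP AR_FG;
ALTER DATABASE Accounting
    ADD FILE (NAME = AR_Data, FILENAME = 'E:\SQLData\Accounting_AR.ndf',
              SIZE = 2GB, FILEGROWTH = 512MB)
    TO FILEGROUP AR_FG;

-- Moving an existing table means rebuilding its clustered index on the new filegroup
CREATE UNIQUE CLUSTERED INDEX PK_ARInvoice
    ON dbo.ARInvoice (InvoiceID)
    WITH DROP_EXISTING
    ON AR_FG;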
- 500 GB DW
- 5 GB in smaller DBs
- 220 GB TempDB
- 350 GB in log files
My machine is Fujitsu Primergy 64 cores (with HT) and 192 GB RAM.
I have several IO locations:
- 540 GB in-server HDD, 15k, RAID 10
- 1 TB HDD, 15k, RAID 10 on SAN (separate controller)
- 2 TB HDD, 15k, RAID 10 on SAN (same controller as below)
- 800 GB SSD, RAID 10 on SAN (same controller as above)
The data warehouse has 2 fact tables that are absolutely crucial and quite large.
Now I want to organize the DB into several filegroups and put them on different drives. The filegroups I'm thinking of:
- FILEGROUP1: the 1st crucial fact table
- FILEGROUP2: the 2nd crucial fact table
- FILEGROUP3: tempDB
- FILEGROUP4: dimension data
- FILEGROUP5: the rest of the fact data
- FILEGROUP6: dimension indexes
- FILEGROUP7: the rest of the fact indexes
- FILEGROUP8: the log file of one smaller DB (it's in full recovery and it's quite large)
- FILEGROUP9: the rest of the log files
- FILEGROUP10: everything else
How should I organize them across the available drives? I was thinking of something like:
I know that having multiple filegroups on the same drive is pointless performance-wise, but in the future I could actually add some more drives, so I want to separate them now.
Also - how many files per filegroup should I create? I'm considering 1 or 2, except for TempDB, where I am going for 4.
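One note on my own list: tempDB is a separate database, so it gets its own files rather than a filegroup inside the DW. My plan for its four files, with an invented drive letter and sizes, is roughly:

-- Equally sized tempdb files so the round-robin allocation stays balanced
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 8GB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdb2.ndf', SIZE = 8GB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdb3.ndf', SIZE = 8GB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdb4.ndf', SIZE = 8GB);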
I have a delimited text file with 650+ columns. The sum of the column lengths of a single row, if fully populated, exceeds 30K bytes. The "killer" fields, lengthwise, are the "Description" fields; if they were removed from the input file, the remaining columns would occupy about 5000 bytes, which is within the SQL maximum row length.
Can SSIS be used to create these two tables (one without the description fields, the other with those fields, but arranged vertically in the table rows)?
The fundamental issue is that I cannot import a single file row into a SQL table, because that row's length could exceed the maximum byte count for a row.
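To illustrate what I mean, the two target tables might look something like this (all names invented; the description column is sized to keep each vertical row comfortably inside the row-length limit):

-- Base table: the ~5000-byte portion that fits in a normal row
CREATE TABLE dbo.BaseRecord (
    RecordID INT NOT NULL PRIMARY KEY
    -- plus the remaining non-description columns (about 5000 bytes in total)
);

-- Vertical table: one row per populated description field
CREATE TABLE dbo.RecordDescription (
    RecordID    INT NOT NULL REFERENCES dbo.BaseRecord (RecordID),
    FieldName   VARCHAR(128) NOT NULL,
    Description VARCHAR(7000) NULL,
    PRIMARY KEY (RecordID, FieldName)
);

In the data flow I imagine a Multicast feeding both destinations, with an Unpivot transformation turning the description columns into rows for the second table.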
How can I split this incoming file into separate txt files? I want to cut each header/detail row section out into a new txt file. What I mean by a header/detail row section:
incoming txt file:
http://www.webfound.net/split.txt
Basically I want to cut out each section like this:
http://www.webfound.net/what_to_cut.txt
http://www.webfound.net/rows.jpg
And a kicker... each new txt file's name must use a certain field (based on x numbers in the header row) followed by another field, which is the date from the header row. Something like this:
I need some hand-holding here; it's my first time trying to do something so complicated in SSIS 2005. If I can first just get the txt split into multiple files, that would be a big help.
I have one really long .sql file I'm working on. It's actually a data conversion type script. It's gotten really cumbersome to work on at this length, so I would like to split the various logical parts of the script out into their own .sql files. How can I have one file (.bat, .sql, or whatever) call each .sql file in the order I specify? Hoping this is easy. Thanks.
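The closest thing I've found so far is a master script run through sqlcmd (or SSMS's SQLCMD mode), pulling the parts in with the :r directive; the file names here are just examples. Is there anything better?

-- master.sql -- run with:  sqlcmd -S MyServer -d MyDatabase -i master.sql
-- :r includes each file inline, in exactly the order listed
:r .\01_staging_tables.sql
:r .\02_convert_customers.sql
:r .\03_convert_orders.sql
:r .\99_cleanup.sql

A plain .bat that calls sqlcmd -i once per file, in order, would presumably do the same job.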
I am looking for the easiest way of rebalancing data across multiple files.
Instead of creating a secondary filegroup and then dropping and recreating all indexes in the database, which is going to take ages (we have a lot of tables and indexes), I am trying to just add more files to the primary filegroup and then rebalance the data evenly between them.
I guessed that adding the new files to the primary filegroup and then rebuilding all indexes on a table would redistribute the table evenly over these multiple files. This is not the case, though. It does rebalance the data a bit, but I still end up with the majority on the first file that existed.
I have attached the script I am running, maybe it is something in the create database/file statements that is the issue.
Basically, what I am seeing is: to start off with, the table is 160MB. I then add the files and rebuild all indexes on the table. The first file is then about 100MB and each of the three other files is about 20MB. I would expect them all to be the same size.
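In outline, what the script does is this (simplified here with hypothetical names and paths; the attached script has the real ones):

-- With no TO FILEGROUP clause, new files land in the default (PRIMARY) filegroup
ALTER DATABASE MyDb ADD FILE (NAME = Data2, FILENAME = 'D:\SQLData\MyDb_2.ndf', SIZE = 200MB);
ALTER DATABASE MyDb ADD FILE (NAME = Data3, FILENAME = 'D:\SQLData\MyDb_3.ndf', SIZE = 200MB);
ALTER DATABASE MyDb ADD FILE (NAME = Data4, FILENAME = 'D:\SQLData\MyDb_4.ndf', SIZE = 200MB);

-- Rebuild every index on the table, expecting proportional fill to spread the pages
ALTER INDEX ALL ON dbo.MyTable REBUILD;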
Why does SHRINKFILE with EMPTYFILE not redistribute data evenly across multiple files in the primary filegroup?
Please run the script attached to see what the end result is.
This is what I set up last night on my test machine.
1) Create database [FGTest], size 200MB.
2) Create a table called TEST on PRIMARY.
3) Insert 40MB of data into TEST.
4) Create another file called Temp in PRIMARY, size 200MB.
5) Shrinkfile('FGTest', emptyfile) so that all data is transferred from the FGTest file into the Temp file.
6) Add another 2 files called DATA2 and DATA3, both 200MB.
7) We now have 3 empty files that I want the data distributed evenly across: FGTest, DATA2 & DATA3.
8) Shrinkfile('Temp', emptyfile) to move all the data from Temp over the 3 files evenly.
I would expect at this stage to have the following:
FGTest = 13MB, DATA2 = 13MB, DATA3 = 13MB
(40MB of data over 3 files should be about 13 MBish in each file)
What I actually end up with is this:
FGTest = 20MB, DATA2 = 10MB, DATA3 = 10MB
It looks as though SQL Server is allocating 50% of all data to the original file and then 50% evenly over the remaining files in PRIMARY.
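For reference, steps 4-8 in T-SQL form are approximately this (paths invented, logical file names as in the steps above):

ALTER DATABASE FGTest ADD FILE (NAME = Temp, FILENAME = 'D:\FGTest_Temp.ndf', SIZE = 200MB);
DBCC SHRINKFILE (FGTest, EMPTYFILE);   -- empties the original file into Temp

ALTER DATABASE FGTest ADD FILE (NAME = DATA2, FILENAME = 'D:\FGTest_Data2.ndf', SIZE = 200MB);
ALTER DATABASE FGTest ADD FILE (NAME = DATA3, FILENAME = 'D:\FGTest_Data3.ndf', SIZE = 200MB);
DBCC SHRINKFILE (Temp, EMPTYFILE);     -- expected to spread pages over FGTest, DATA2 and DATA3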
I have a new SQL 2005 (SP2) Reporting Services server to which I've just upgraded and deployed some SSRS 2000 reports.
I have a subreport that contains a matrix with two groups. The report seems to be inexplicably repeating the data from the first row in the group for all rows in the group. Example:
ID1 ID2 DisplayData
1 1 A
1 2 B
1 3 C
2 1 A
2 2 B
2 3 C
Parent group is on ID1, child group is on ID2, report would show:
1 1 A
2 A
3 A
2 1 A
2 A
3 A
Is this a matrix bug in 2005 SP2, or do I need to do something differently? I can no longer pull a comparison version from an SSRS 2000 server to verify, but I believe it was working as expected before...
I am looking to find out when to use filegroups when backing up. When should you use this, and what's the benefit over just doing a full DB backup? Is it better when you are dealing with large DBs?
Also, this question has been on my mind for a while: why shouldn't you shrink the DB after every full backup? What is the negative in doing so?
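For reference, I mean the difference between these two commands (database, filegroup, and path names are placeholders); my understanding is the filegroup form is mainly interesting on very large databases where a full backup won't fit in the maintenance window:

-- Full backup: every file in every filegroup, in one pass
BACKUP DATABASE Sales TO DISK = 'G:\Backups\Sales_full.bak';

-- Filegroup backup: only the named filegroup's files
BACKUP DATABASE Sales FILEGROUP = 'Archive2007' TO DISK = 'G:\Backups\Sales_fg.bak';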
Hi everyone. Suppose that, while creating our database on only one disk (C or D), we create more than one filegroup in order to group our data files. I wonder whether, in this situation, it brings us any benefit or advantage.
Also, I wonder why we always have to put our data files into separate filegroups if we use separate disks for the data files. Isn't it allowed to use only one filegroup even if we use separate disks?
I need to move specific files from a server to another server on a monthly basis. There are hundreds of files in the source directory, and I need to move approximately 40 of those to the destination server. I would like to be able to easily add to or remove from the file list as needed. I have seen approaches where a variable is created for each file name (and one for the path) and a ForEach Loop goes through them. With 40 or more files, I was thinking instead that I could make a connection to an Excel spreadsheet or text file with a record for each file name, read each record in turn, and make that value the content of a "FileName" variable. Then, if I wanted to add or remove a file name, I could just add or remove a record in the spreadsheet/text file and the package would handle it automatically...
Hello all. Before my arrival at my current employer, our consultants physically set up our MSSQL 7 server as follows:
drive c: contains the MSSQL engine
drive d: contains the transaction log
drive e: contains the data files
No filegroups were set up, and the data files consist of only 1 large physical file. Currently, our data file is >10GB. When I was trained on the physical aspects of SQL Server, I was told to never create physical files > 2048MB each. If I did, I could expect inefficient physical storage of data and slower performance (due to the OS). Our server has 2 RAID-5 arrays. Drives c: and e: are located on the first array and drive d: on the second. We're running Windows NT Server 4.0 SP6 with NTFS. Can someone comment on the use of 1 single large data file vs. more, smaller data files?
I had a database comprised of different filegroups and log files spread out among different hard drives. I have recently upgraded the database to SQL 7.0 on a RAID 10 volume, and I would like to consolidate all the filegroups and files, as well as the various log files, into one primary data file and one log file. How do I do that? Thanks in advance.
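From what I've read, the pattern would be something like the following, repeated per table and per file (all names invented); is this the right approach?

-- 1) Move each table back to PRIMARY by rebuilding its clustered index there
CREATE UNIQUE CLUSTERED INDEX PK_Orders ON dbo.Orders (OrderID)
    WITH DROP_EXISTING
    ON [PRIMARY];

-- 2) Once a secondary data file is empty, drop it, then drop its filegroup
ALTER DATABASE MyDb REMOVE FILE Orders_Data2;
ALTER DATABASE MyDb REMOVE FILEGROUP Orders_FG;

-- 3) Extra log files can be removed once they contain no active log
ALTER DATABASE MyDb REMOVE FILE MyDb_Log2;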
I am currently converting some Oracle scripts to SQL Server and encountered the following code segment in a CREATE TABLE query:
CONSTRAINT ck_PK PRIMARY KEY (O_ID)
    USING INDEX TABLESPACE DIRECT PCTFREE 10 STORAGE (INITIAL 65536))
TABLESPACE DIRECT PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE (INITIAL 65536 MINEXTENTS 1 MAXEXTENTS 2147483645 FREELISTS 1 FREELIST GROUPS 1)
What is the equivalent conversion in SQL Server? Is it just ON PRIMARY in the PRIMARY KEY clause? Are segments and extents in Oracle equivalent to filegroups in SQL Server?
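My best guess at the SQL Server rendering (assuming a filegroup named DIRECT is created first) is below; is this right? PCTFREE 10 (10% free per page) seems to map loosely to FILLFACTOR = 90 (90% full), while INITRANS/MAXTRANS and the STORAGE/extent settings have no SQL Server counterpart and would simply be dropped, since extents are managed automatically within filegroups.

CONSTRAINT ck_PK PRIMARY KEY CLUSTERED (O_ID)
    WITH FILLFACTOR = 90
    ON [DIRECT]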
Something strange. I have a database (SQL 2000) with two filegroups (on separate physical drives): one meant for table data [PRIMARY] and one for indexes [INDEX]. So I create a table on the [PRIMARY] filegroup and fill in data. Next I build a clustered index on the table, on the [INDEX] filegroup. Once the index is built, the database now indicates that the filegroup for the table is [INDEX], and not [PRIMARY] as I originally set it up! My question is then: has the table been moved, or is this somehow an error in SQL Server? I would really appreciate any thoughts anyone might have on this.
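To be concrete, the repro is essentially this (invented names); my working theory is that, since the leaf level of a clustered index is the table data itself, building it on [INDEX] relocates the whole table, and only nonclustered indexes can live apart from their table:

CREATE TABLE dbo.Test (Col1 INT NOT NULL) ON [PRIMARY];
-- ...insert data...
CREATE CLUSTERED INDEX ix_Test ON dbo.Test (Col1) ON [INDEX];
-- sp_help 'dbo.Test' now reports the table as located on [INDEX]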
I will be doing some performance testing on a financial application next month. Without going into a lot of detail, I suspect I will have a potential bottleneck when writing to the log file. My hardware setup is a quad 2.8 Xeon Dell server direct-attached to a Dell/EMC CX200 (Fibre Channel array with 10 x 30-something GB 15,000 rpm drives, with about 1GB of memory on the array for caching). This is a benchmark environment, so I am not concerned about losing data. I am looking for a little guidance on using RAID (0 or 10) and/or filegroups to spread IO to db objects (log file(s), data, indexes, tempdb, etc.). I have read about and played with filegroups enough to know that SQL Server does some level of load balancing across files, but am unclear whether it is parallel or serialized. Common wisdom seems to be to separate data, non-clustered indexes, logs, and tempdb onto separate files, but I am unclear on how to make the best use of the high-speed disk array. I'd greatly appreciate opinions on which would perform better: one file on a stripe set of N drives (RAID 0 or 10), N files in a filegroup placed on N (non-striped) drives, or a combination of the two? Is the answer the same for both log and data (or index) files? Thanks, -Bernie
I have a text file which contains data that has to be inserted into multiple tables. The column names of table 1 form the H1 record, followed by detail records D1, D1, D1...; the column names of table 2 form the H2 record, followed by details D2, D2, D2; and similarly for table 3. I am using a linked server to the file directory, with a schema.ini that defines the column names for the text file.
Is there any way of defining column names for more than one table through the schema.ini? Or is there any other way I can parse the text file's contents into multiple tables?
Sample text file:
H1,JobDate,JobNumber,FileName,
D1,13/02/2008,asdf123,text1.txt
D1,13/02/2008,asdf123,text2.txt
D1,13/02/2008,asdf123,text3.txt
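If schema.ini turns out to be a dead end, one fallback I've been considering (sketched below with invented names and path) is to bulk-load every line into a single-column staging table and route the rows by their record-type prefix in T-SQL, splitting the columns out afterwards with SUBSTRING/CHARINDEX or a split function:

CREATE TABLE dbo.RawLines (Line VARCHAR(8000) NOT NULL);

-- Pull every line of the file in as-is
BULK INSERT dbo.RawLines FROM 'C:\Data\sample.txt' WITH (ROWTERMINATOR = '\n');

SELECT Line FROM dbo.RawLines WHERE Line LIKE 'D1,%';  -- details for table 1
SELECT Line FROM dbo.RawLines WHERE Line LIKE 'D2,%';  -- details for table 2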
We have a large data warehouse, 50TB in size. The tables are placed in filegroups based on schema: fact, dimension, and raw data tables each sit in separate filegroups. I am wondering whether it would make sense to separate out the large fact tables that have billions of rows, so that they reside in filegroups of their own.
I am facing a peculiar problem. The problem definition goes like this:
I have a staging DB in which all the tables reside in the primary file, and a production DB in which the tables reside in 2 secondary files.
Now, when I try to load the data from table A in staging, which is in the primary file, into table A1 in the production DB, which is in a secondary file, all the rows go to the error log instead of table A1.
We have recently added a new filegroup and file on a new drive. We have tested it by moving a small table to the new filegroup. We would now like to relocate another table to this filegroup, but the table is (we estimate) about 75GB. My question is this: how long can we expect the transfer of data from the current filegroup to the new one to take for this table? I understand that the answer may vary depending on our hardware, but does anyone have a rough estimate?
The current (primary) file is located on a Dell SAN and the new secondary filegroup is on an EMC 4700; both are connected via Fibre Channel.
Also, a bonus question: does a "normal" database backup, created as a maintenance plan, back up the secondary data into the BAK file as well?
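For the bonus question, I gather one way to check for myself is to list what a given backup actually contains (path is a placeholder):

-- Lists every data and log file captured in the backup set
RESTORE FILELISTONLY FROM DISK = 'G:\Backups\MyDb.bak';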
I am building partitioned tables, partitioning across different filegroups;
the question is:
Can the partitions holding old data that is not frequently accessed for reporting be stored in a separate location (external storage, tape, and so on), or must all filegroups be present for partitioning to function?
If they cannot, how can I separate old data from current data (while still using partitioning) to reduce the size of the DB?
What is best for storing the data while keeping it easy to access when the need arises (e.g. reporting): tape, external storage, or something else?
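For reference, this is the kind of setup I mean (invented names and boundaries). As I understand it, every filegroup named in the scheme has to exist in the database, though the older ones could sit on cheaper disks and be marked read-only; taking old data out entirely would mean switching the oldest partition out to a staging table, archiving it, and merging the boundary away.

CREATE PARTITION FUNCTION pf_OrderYear (DATETIME)
    AS RANGE RIGHT FOR VALUES ('20070101', '20080101');

-- Three partitions, so three filegroups, all of which must exist
CREATE PARTITION SCHEME ps_OrderYear
    AS PARTITION pf_OrderYear TO (FG_Archive, FG_2007, FG_Current);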
Hi all, I have multiple text files, let us say:
a1.txt
b1.txt
c1.txt
I have to port this text file data into SQL Server tables which have the same structure, i.e.:
x1 (SQL Server table)
y2 (SQL Server table)
z3 (SQL Server table)
Now, using SSIS, I have to transfer:
a1.txt file data ----to--- x1
b1.txt file data ----to--- y2
c1.txt file data ----to--- z3
Like that, I have to transfer more than 250 files at a time. Manually binding 250 files into the package is a very cumbersome and time-consuming process, so can anyone give a suggestion to solve this issue?
I have a package that contains three database tables (header, detail, and trailer record), each connected via an OLE DB source in SSIS. Each table varies in the number of columns it holds, and none of the tables have the same field names. I need to transfer all data, from each table, in order, to a flat file destination.
I am trying to load a file using SSIS that contains records with two different layouts in one data file, but in the flat file connection I can only specify one layout, and this is causing the records with the second layout to be loaded incorrectly.
The different record layouts can be identified by the first character of the record. For example: if the field begins with "A", assign one layout; if "B", assign the second layout.
Has anybody come across this issue? If so, some guidance would be appreciated.
When a database is configured for mirroring and you want to do partitioning on that database, how do you go about it? Is the process the same, or is there some variation when adding filegroups and files? Will the partitioning be reflected in the mirror database as well?
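For example, would the following replay on the mirror (names and path invented)? I've read that file and filegroup changes are logged operations, so they are redone on the mirror, with the caveat that the same drive/path must exist on the mirror server or the session gets suspended.

ALTER DATABASE Sales ADD FILEGROUP FG_2008;
ALTER DATABASE Sales
    ADD FILE (NAME = Sales_2008, FILENAME = 'E:\SQLData\Sales_2008.ndf')
    TO FILEGROUP FG_2008;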