How To Separate Data and Log File Locations
Sep 26, 2000
Hi everybody,
At installation time SQL Server asks me where I want to locate the DATA files and the PROGRAM files. It gives me the choice to put the database AND log files on one disk and the program files on another. But what about separating the LOG and DATA files? I have RAID 1 created especially on the F: drive for LOG files and RAID 5 on E: for DATABASE files. When do I separate them, if not at installation time? How can I do that?
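In case it's useful, here is a minimal sketch of separating the files after installation using detach/attach; the database name and file paths are hypothetical, and the physical move of the .ldf to the F: drive happens in the file system between the two steps.

-- detach, move the .ldf in Windows Explorer, then re-attach pointing at the new location
EXEC sp_detach_db 'MyDb'

-- (copy or move MyDb_log.ldf to F:\Logs, then:)
EXEC sp_attach_db 'MyDb',
     'E:\Data\MyDb.mdf',
     'F:\Logs\MyDb_log.ldf'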
I proposed on a new server that we separate Data Files, Log Files, tempdb, Backups, etc. onto separate LUNs on a SAN with high-speed solid state drives. I was told that with the new solid-state SAN technology this would decrease performance, and that it does not work the same way as it did when you had RAID 5 arrays, etc. I thought that if things were carried out correctly by a SAN administrator, they would know how to configure for optimal performance.
I've created, with the help of some great people, an SSIS 2005 package which does the following so far:
1) Takes an incoming txt file. Example txt file: http://www.webfound.net/split.txt
The txt file, going from top to bottom, is grouped roughly like this: a Header Row (designated by 'HD'), the corresponding Detail Rows for that Header Row, then the next Header Row and its corresponding Detail Rows,
...and so on
http://www.webfound.net/rows.jpg
2) Header Rows are split into one table, Maintenance Detail Rows into another, and Payment Detail Rows into a third table. A UniqueID has been created for each header and its related detail rows to form a PK/FK relationship, as there was none prior to the import; when we first started, the only relation was the order of the header and the related rows below it. The reason I split this out is so I can massage it later with stored proc filters, whatever...
Now I'm trying to somehow bring the data in those tables back together, like it was initially, using a query, so that I can cut each of the Header/Detail Row sections out into its own txt file. So, if you look at the original txt file, each new header and its related detail rows (an example of a cut piece would be http://www.webfound.net/rows.jpg) need to be cut out and put into their own separate txt file.
This is where I'm stuck: how to create a query to combine it all back in an OLE DB Source component, then somehow read that source and split the sections out into their own individual txt files.
The filenames of the txt files will vary and be based on one of the column values already in the header table.
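A minimal sketch of the "bring it back together" query, assuming (the table and column names here are assumptions) that all three tables kept the UniqueID from the import plus the original line text in a column called RawLine; the union puts each header back on top of its own detail rows so a downstream Foreach loop over the distinct UniqueIDs can write each group to its own txt file:

SELECT UniqueID, 0 AS SortOrder, RawLine FROM dbo.HeaderRows
UNION ALL
SELECT UniqueID, 1, RawLine FROM dbo.MaintDetailRows
UNION ALL
SELECT UniqueID, 2, RawLine FROM dbo.PaymentDetailRows
ORDER BY UniqueID, SortOrder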
Does anyone know of a good article on where you should locate your Data and Log files (in SQL Server 2005)? I read an article several years ago stating that Log files should be on a separate RAID 1 and Data on a separate RAID 5. Anyway, any help is appreciated.
I receive data via FTP to our webserver nightly as .txt and .dic files (if anybody is familiar with IDX realtor websites, that's what this data is). I've learned recently that I'm not going to be able to use Access to import or link to this data, so I'm trying to get my feet wet with SQL. I have been practicing importing text files into a SQL db, but I notice that the DTS import makes everything varchar 8000, and that you can edit that. I've got a .dic file that accompanies every .txt file and contains definitions of each field name, field type & length, and I was wondering how to import that data as well, without having to manually retype everything. I would be happy to email these text files to anybody willing to take a look.
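Not having seen the .dic layout, here is only a rough sketch of the idea: bulk-load the .dic into a staging table, then generate the CREATE TABLE from it instead of retyping. The file path, delimiter, and the mapping of the .dic type codes to SQL types are all assumptions to adjust.

-- stage the field definitions from the .dic file (path and delimiter are assumptions)
CREATE TABLE #dic (FieldName sysname, FieldType varchar(20), FieldLength int)

BULK INSERT #dic
FROM 'C:\idx\listings.dic'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

-- build the real table definition from the staged rows
DECLARE @sql varchar(8000)
SELECT @sql = COALESCE(@sql + ', ', 'CREATE TABLE dbo.Listings (')
            + QUOTENAME(FieldName) + ' '
            + CASE FieldType
                  WHEN 'Character' THEN 'varchar(' + CAST(FieldLength AS varchar(10)) + ')'
                  WHEN 'Numeric'   THEN 'decimal(18,2)'
                  ELSE 'varchar(' + CAST(FieldLength AS varchar(10)) + ')'
              END
FROM #dic

SET @sql = @sql + ')'
EXEC (@sql)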
I'm making backups of the database by first making a full backup and then differential backups. The differentials are backed up to separate files.
Restore of the full backup works fine, but I can't restore a differential backup. In Management Studio Express, I first do a full backup restore with the NO RECOVERY option and then try to restore a differential backup. But this fails with the message:
"This differential backup cannot be restored because the database has not been restored to the correct earlier state."
Is it possible to restore a differential backup that is backed up to a separate file?
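For reference, the usual restore sequence looks like this (file names are hypothetical); the differential must be based on the same full backup that was just restored, and nothing may be restored WITH RECOVERY in between:

RESTORE DATABASE MyDb
FROM DISK = N'D:\Backups\MyDb_full.bak'
WITH NORECOVERY

RESTORE DATABASE MyDb
FROM DISK = N'D:\Backups\MyDb_diff.bak'
WITH RECOVERY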
In SQL2000, there's an option to change the location of the template folder. This allows me to create a customized set of templates on a network folder and have all the developers reference the centralized location. Can the same be done in SQL2005 and how would I go about doing so?
I have a situation where I need to loop through different folders and the files in those folders. After processing the files, I need to archive the folders to a different location.
e.g., C:\MainFolder\Mar01 ==> Multiple files in Mar01 folder
C:\MainFolder\Mar02 ==> Multiple files in Mar02 folder
Does any one know the best way to do this in SSIS?
We just upgraded from SQL 2000 to 2005. Under 2000, I could export multiple stored procs to separate files. Is there a way to do this under 2005 without exporting one proc at a time?
I'm trying to do something which I hope can be accomplished relatively simply.
I have a report similar to bank statements, let's say. When run, it currently prints out each person's statement into one file, with page breaks separating each person's statement. What I need to do, when the report is run, is save each person's report into a separate file for the purpose of emailing it to them later.
I could easily modify my report to just output for one particular person, but I'm not sure if there's a way to "bulk render" all the reports and have them saved to separate files.
I should also add that I'm using an MS Access Data Project (ADP) as the front end to my app - connected to a SQL Server 2005 DB. I currently display the reports by embedding a web browser object into an Access form and rendering the report via HTML.
Hello there, I've been told that it is good practice to keep the mdf and ldf files in different locations... We have this in place for all our user databases; however, the mdf and ldf files for our system dbs are still in the same location. I was wondering what the right way of splitting those would be?
I have C, D, and E drives on the server. Data files will be on D and logs on E. My question is: what is best practice for the data and log files of the system databases during SQL Server installation? Should they be on the C drive along with the SQL Server installation, or on D and E? If they should not be on C, what is the reason and what is the benefit of moving them to other drives?
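For reference, relocating a system database after the fact generally means pointing SQL Server at the new path and restarting; a minimal sketch for tempdb (the logical names tempdev/templog are the defaults, the target paths are hypothetical):

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\SQLData\tempdb.mdf')
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLLogs\templog.ldf')
-- the new locations are used after the next restart of the SQL Server service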
I have a job which copies .txt files 24 hours a day, 7 days a week, to c:TempSource
What I am planning to do is copy the files from one location to another, say c:TempTarget
So I have written the PowerShell script, and when I put it in the SQL Agent job I get the error below:
A job step received an error at line 5 in a PowerShell script. The corresponding line is '$filesToMove = $files | Where -Property "Name" -NotLike -Value $newestFile.Name'. Correct the script and reschedule the job. The error information returned by PowerShell is: 'A parameter cannot be found that matches parameter name 'Property'. '. Process Exit Code -1. The step failed.
How can I load files with a similar format, from two different locations, into the same database with the same SSIS package?
Let's say Location 1: C:\LoadFiles\Cust1\APP_123445.txt, Location 2: D:\LoadFiles\cust2\VDD_543121.txt
Currently we have one SSIS package which loads and processes files from C:\LoadFiles\Cust1 only. We have to modify the existing package to load files from Location 2 (D:\LoadFiles\cust2) as well. Also, while loading, the SSIS package should assign a value to the existing column CustID depending upon the file name. File names always start with APP_ in the first location and VDD_ in the second location.
Assign CustID as 100 if the file name starts with APP_; assign CustID as 200 if the file name starts with VDD_.
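A minimal sketch of that rule as T-SQL (in the package itself the same logic would usually sit in a Derived Column expression fed by the file name variable); the table and column names are assumptions:

UPDATE dbo.LoadTable
SET    CustID = CASE
                    WHEN SourceFileName LIKE 'APP[_]%' THEN 100
                    WHEN SourceFileName LIKE 'VDD[_]%' THEN 200
                END
WHERE  CustID IS NULL
-- [_] matches a literal underscore; a bare _ is a single-character wildcard in LIKE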
I have just installed SQL Server 2005. I was not given a choice during installation (that I remember seeing anyway) about which drive I wanted to place the default SQL instance on.
It ended up on C: and I needed it on D:.
So my questions are:
Is there a way to move it to D: that is easier than the way you had to go about it in SQL 2000? Is there a way to do it during setup so I can avoid this in the future?
I have a master table containing details of over 800000 surveys made up of approximately 400 distinct document names and versions. Each document can have as few as 10 questions but as many as 150. Each question represents one row.
My challenge is to create a separate spreadsheet for each of the 400 distinct document names and versions containing all the rows and columns present in the master table. The largest number of rows would be around 150 and therefore each spreadsheet will not be very big.
e.g. in my sample data below, I will need to create individual Excel files named as follows:
"Document1Version1.xlsx" containing all the column names and 6 rows for the 6 questions relating to Document 1 version 1
"Document1Version2.xlsx" containing all the column names and 8 rows for the 8 questions relating to Document 1 version 2
"Document2Version1.xlsx" containing all the column names and 4 rows for the 4 questions relating to Document 2 version 1
I assume that one of the first things is to create a lookup of the distinct document names and versions assign some variables and then use this lookup to loop through and sequentially filter the master table data ready for creating the individual Excel files.
--CREATE TEMP TABLE FOR EXAMPLE
IF OBJECT_ID('tempdb..#excelTest') IS NOT NULL
    DROP TABLE #excelTest

CREATE TABLE #excelTest (
    [rowID]      [nvarchar](10) NULL,
    [docName]    [nvarchar](50) NULL,
    [docVersion] [nvarchar](10) NULL,   -- assumed; the remaining column definitions were cut off in the original post
    [question]   [nvarchar](max) NULL   -- assumed
)
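A minimal sketch of the looping idea in T-SQL: build the distinct list of document name/version pairs, then walk it and produce one filtered result set per pair. Writing each result set out to its own .xlsx still has to be done by something outside the query (an SSIS Foreach loop, bcp, or OPENROWSET), so only the file name and the per-document filter are shown; the docVersion column is an assumption.

DECLARE @docName nvarchar(50), @docVersion nvarchar(10), @fileName nvarchar(200)

DECLARE docs CURSOR FAST_FORWARD FOR
    SELECT DISTINCT docName, docVersion FROM #excelTest

OPEN docs
FETCH NEXT FROM docs INTO @docName, @docVersion
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @fileName = @docName + 'Version' + @docVersion + '.xlsx'

    -- the rows destined for this one spreadsheet
    SELECT * FROM #excelTest
    WHERE docName = @docName AND docVersion = @docVersion

    FETCH NEXT FROM docs INTO @docName, @docVersion
END
CLOSE docs
DEALLOCATE docs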
We have multiple databases on a single instance in an OLTP environment. I have my data files on a separate SAN LUN from my transaction log files (and a few NDFs split out onto additional LUNs). I was wondering if there is a performance benefit to putting each LDF file on its own LUN? Or at least my few busiest LDFs?
We are currently on 2012, but I'm having to put together specs for a 2014 installation and need to answer this question without having an environment in which I can benchmark different setups. I just want to hear whether or not others have done this (why or why not?).
We have a scenario to process last created/modified files from a location using SSIS package , eventhough the folder contains multiple files with same name and extension.
Kindly give respond to this if any one has worked on this.
I am testing out a blank database created over two physical files on two separate disks with one table called data which has one column called values nvarchar(max).
I filled the table up with a whole load of data and ran a select * against it. If I run Perfmon at the same time I can see that the read load has been spread over multiple disks, as each of these disks is being read from in parallel. If I create the same database on a single file and run the same select * again, it takes much longer, which confirms that in the two-file case the read load really was being distributed across multiple disks.
Now, moving onto writes, this is where the confusion lies. I understand that SQL Server fills files evenly until they need growing, after which it will then fill the files individually, in a round-robin fashion, until they are full, unless you have trace flag 1117 turned on. What I don't understand is why the writes aren't distributed while it is filling these files.
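For reference, since trace flag 1117 came up: it only changes growth behaviour (all files in a filegroup grow together) rather than how writes are routed, and it can be checked and enabled like this:

DBCC TRACESTATUS (1117)
DBCC TRACEON (1117, -1)   -- -1 = server-wide; add -T1117 as a startup parameter to keep it after restarts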
I ran a continual insert into my table with GO 1000000 to monitor how the files are being filled up. I monitored where SQL Server was physically placing the rows as they were being inserted by running the following query:
;WITH CTE AS (
    SELECT sys.fn_PhysLocFormatter(%%physloc%%) AS col1,
           RIGHT(LEFT(sys.fn_PhysLocFormatter(%%physloc%%), 2), 1) AS [Physical RID],
           DataId
    FROM [data]
)
SELECT [Physical RID], COUNT(*) AS [RowsInFile]   -- the rest of the query was truncated in the post; this grouping is an assumed completion matching the per-file counts described below
FROM CTE
GROUP BY [Physical RID]
I could see that it would write a thousand or so records into file 1, then a thousand or so into file 2, then a thousand or so into file 1, and so on. In other words it would hit one disk, then the other disk, then go back to disk one, filling the files evenly. Is there any way to make SQL Server distribute the writes in parallel so that both disks are writing in tandem?
By the looks of it, multiple disks only scale reads, as with writes only one disk is ever written to at once which is annoying. Any way to harness the write power of multiple disks?
I have developed an ETL package which supplies a CSV file. If I run the package again and a file with the same name is already there, I need to rename that file with the current datetime and move it into an Archive folder. If the file does not exist in that location, there is no need to move anything into the Archive folder.
Can multiple instances of SQL 2005 Express attach to the same database files on a network share? I have seen this done before with MSDE where the database files are stored on the server, but instead of having a SQL server running on the network and then connecting to it, only the database files exist on the network share and the users connect through MSDE running on the local machine. Is this possible with SQL2005Express? I do not have the ability to share an SQL instance from one workstation to another nor do I have the ability to install an instance on the corporate server. Is it as simple as creating the database and storing the files on the share then attaching the database to the SQL Instance on each workstation?
Hi! Need help with this one: I have a column with a string composed of several pieces of data. After using REPLACE several times, I get something like the data below, which (in most cases) has a value and a date.
378 9/05
388 9/05
4/05
1/06 606
1/06 646
76 5/05
100 1/05
118 8/05
129 8/05
9/05 342
05/3 123
1/07
4/06 164

The problem is that I need to get each value alone (in separate columns), for example:

Value   Date
378     09/2005
388     09/2005
0       04/2005
...
606     01/2006

and so on... In addition you can see that sometimes the Value comes first or alone, and sometimes the Date comes first or alone.
I will appreciate any good ideas, Thanks in advance, Aldo.
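One possible approach, sketched with assumed names (a table dbo.SourceData with the cleaned string in a column RawText): treat whichever token contains '/' as the date and the other token as the value; reformatting '9/05' into '09/2005' would be a further step on top of this.

SELECT RawText,
       CASE WHEN CHARINDEX('/', FirstToken) = 0 THEN FirstToken
            WHEN CHARINDEX('/', SecondToken) = 0 THEN SecondToken
       END AS [Value],
       CASE WHEN CHARINDEX('/', FirstToken) > 0 THEN FirstToken
            WHEN CHARINDEX('/', SecondToken) > 0 THEN SecondToken
       END AS [DateToken]
FROM dbo.SourceData
CROSS APPLY (SELECT LTRIM(RTRIM(RawText)) AS Cleaned) AS c
CROSS APPLY (SELECT CASE WHEN CHARINDEX(' ', Cleaned) > 0
                         THEN LEFT(Cleaned, CHARINDEX(' ', Cleaned) - 1)
                         ELSE Cleaned END AS FirstToken,
                    CASE WHEN CHARINDEX(' ', Cleaned) > 0
                         THEN LTRIM(SUBSTRING(Cleaned, CHARINDEX(' ', Cleaned) + 1, 8000))
                         ELSE NULL END AS SecondToken) AS t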
I've written a custom script to delete backup files from a location, but I'm unable to modify it to count the number of files that were deleted. How do I modify the script?
/* Script to delete older than N days backup from a specific directory */
USE [db_admin]
GO
IF OBJECT_ID('usp_DeleteBackup', 'P') IS NOT NULL
    DROP PROC usp_DeleteBackup
GO
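Without seeing the original procedure, here is only a sketch of the counting idea, assuming xp_cmdshell is enabled and the backups sit in one folder passed in as a parameter; it lists the qualifying files first, counts them, then deletes the same set:

CREATE PROC usp_DeleteBackup
    @folder   nvarchar(260),      -- e.g. N'D:\Backups' (hypothetical)
    @daysOld  int = 7
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @files TABLE (name nvarchar(260))
    DECLARE @cmd nvarchar(1000), @deleted int

    -- list .bak files older than @daysOld (forfiles prints one quoted name per line)
    SET @cmd = N'forfiles /p "' + @folder + N'" /m *.bak /d -'
             + CAST(@daysOld AS nvarchar(10)) + N' /c "cmd /c echo @file"'
    INSERT INTO @files (name) EXEC master..xp_cmdshell @cmd

    DELETE FROM @files WHERE name IS NULL OR name LIKE 'ERROR%'
    SELECT @deleted = COUNT(*) FROM @files

    -- delete that same set of files
    SET @cmd = N'forfiles /p "' + @folder + N'" /m *.bak /d -'
             + CAST(@daysOld AS nvarchar(10)) + N' /c "cmd /c del @path"'
    EXEC master..xp_cmdshell @cmd, no_output

    PRINT CAST(@deleted AS varchar(10)) + ' backup file(s) deleted.'
END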
I am trying to add two separate columns from separate tables, i.e. column1 should be added to column2 when a row is inserted, and I want to use a trigger, but I don't know the syntax to use...
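A minimal sketch with assumed names: when a row lands in dbo.TableA, its Column1 value is added onto the matching row's Column2 in dbo.TableB, joined on a shared key called ID (every name here is an assumption about the schema):

CREATE TRIGGER trg_TableA_AddToTableB
ON dbo.TableA
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON

    UPDATE b
    SET    b.Column2 = b.Column2 + i.Column1
    FROM   dbo.TableB AS b
    JOIN   inserted   AS i ON i.ID = b.ID
END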
I have a data grid with a dropdownlist. The dropdownlist is populated with data from a SQL statement that combines two columns. My SQL: SELECT NAME + CAST(ID as CHAR(10)) FROM TABLE1. When I select a value from the dropdownlist, I need to separate the data, name and ID, into different columns. How do I do it? Is there a way to manipulate the SQL to do such a thing?
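Two possible sketches (TABLE1, NAME and ID come from the question; everything else is assumed):

-- Option 1: return the pieces as separate columns and bind ID to the dropdownlist's
-- value field, so nothing has to be parsed back out later
SELECT NAME + CAST(ID AS CHAR(10)) AS DisplayText,
       NAME,
       ID
FROM   TABLE1

-- Option 2: if only the combined string is available, split it again; CAST(ID AS CHAR(10))
-- always occupies exactly 10 characters, so the ID is the last 10 characters
DECLARE @combined varchar(100)
SET @combined = 'David Smith' + CAST(123 AS CHAR(10))   -- hypothetical example value

SELECT RTRIM(LEFT(@combined, LEN(@combined + 'x') - 1 - 10)) AS [Name],
       CAST(RIGHT(@combined, 10) AS int)                     AS [ID]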
I received a SQL Server table that was supposed to have just the first name in a field, but actually has first name and middle name. Example: David Michael, Carol Anne. Is there a way in a query to look for the blank space and separate the names?
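A minimal sketch, assuming the column is FirstName in a table dbo.People (both names are assumptions); it splits on the first space and leaves names without a space untouched:

SELECT FirstName,
       CASE WHEN CHARINDEX(' ', FirstName) > 0
            THEN LEFT(FirstName, CHARINDEX(' ', FirstName) - 1)
            ELSE FirstName END AS FirstOnly,
       CASE WHEN CHARINDEX(' ', FirstName) > 0
            THEN LTRIM(SUBSTRING(FirstName, CHARINDEX(' ', FirstName) + 1, 8000))
            ELSE NULL END AS MiddleName
FROM dbo.People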
Hello there,
How can I take data out of my database, put it into a textbox and then separate the values with a comma?
An example:

column Email
mail1@email.com
mail2@email.com
mail3@email.com

Put them into a textbox and separate with a , (comma):

mail1@email.com, mail2@email.com, mail3@email.com

Anybody who knows how I can do that? :S
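One way to build the comma-separated string on the SQL side (SQL Server 2005 and later), assuming a table dbo.Contacts with a column Email (both names are assumptions); the result can then be bound to the textbox:

SELECT STUFF(
           (SELECT ', ' + Email
            FROM   dbo.Contacts
            FOR XML PATH('')),
           1, 2, '') AS EmailList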