Dynamic vs. Basic Disks
Oct 26, 2007
Can both be used with SQL Server? Dynamic disks provide more flexibility, but is there a performance impact?
Thank you.
Are there any issues with installing SQL 2000 on a server with dynamic disks configured with RAID 1 mirroring? I know that with dynamic disks there are no partitions but rather volumes. Is the drive configuration set up the same way? Would I set up a volume for the O/S and a volume for SQL? The reason I need to use dynamic disks is that we will be integrating it later with an EMC SAN solution.
Hello,
I have a two-node SQL 2000 cluster running on Windows 2003 Enterprise Server. We need to replace the SAN disks. Can we simply disable the SQL service and Cluster service, copy the contents from the existing disks to the target disks, swap the drive letters, and restart the services?
What is the best practice to do this? Appreciate your help.
Thanks
Rajan
I have a new server.
It was shipped with a 76 GB drive set up in RAID 1 (2 disks) and a 400 GB drive set up in RAID 5 (4 disks).
I would like to determine the best way to set up the partitions: what size, and what should be placed on each.
Like the C: drive: should I just put Windows on there and nothing else? Do I stand to gain something from not using part of that 76 GB as a D: drive for my apps?
Any help appreciated!
Are there any performance enhancements to be gained by storing frequently trigger-written-to databases on a separate disk from the source database? In particular, we keep a 'history' database of all inserts/updates/deletes against records, populated by triggers, and I was wondering if I would gain a performance enhancement by locating the two databases on different disks?
Thanks in advance
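A hedged sketch of how the separation could be configured at creation time (database name, file names, sizes, and drive letters are placeholders; F: is assumed to be a separate physical disk from the one holding the source database):
CREATE DATABASE History
ON PRIMARY (NAME = History_dat, FILENAME = 'F:\SQLData\History.mdf', SIZE = 200MB)
LOG ON (NAME = History_log, FILENAME = 'F:\SQLLogs\History.ldf', SIZE = 50MB)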
I am trying to build two 2-node clusters with AlwaysOn.
Here is the landscape:
2 nodes PROD failover cluster (running once instance)
2 nodes DR failover cluster (running 2 instances - DR and PRE-PROD)
Both clusters are in different geographies.
PRE-PROD must remain editable, so it is out of scope for AlwaysOn.
One instance on PROD -> DR on the other box. [I want to achieve this through AlwaysOn.]
Now my questions:
1) Do I need to have all 4 nodes in the same failover cluster group? If yes, this would become a multi-subnet cluster. Or is there any way those two different failover clusters (one DR and one PROD) can be part of AlwaysOn?
2) Can I use the clustered disks as in the above landscape for AlwaysOn?
Hi All
I am in the process of moving a SQL 2005 solution from a development box that used local storage to a UAT environment with SAN-attached storage. The solution uses database snapshots.
The database files are on the SAN storage but during testing I was unable to create a Database snapshot on the SAN disk. Creating snapshots on the local disk worked fine.
Is there some restriction or problem in using database snapshot technology with SAN storage?
I've read that if particular tables are frequently queried together through a join then these tables should be placed on different devices on different physical disks.
What does this mean exactly and how would you configure this?
Is this a common practice in high-performance real-world environments (or should it be)?
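For illustration, a hedged sketch of the usual configuration (database, table, file, and path names are placeholders; E: and F: are assumed to be separate physical disks): each table goes in its own filegroup whose file lives on a different disk, so a join between the two tables can read both disks in parallel.
ALTER DATABASE Sales ADD FILEGROUP FG_Orders
ALTER DATABASE Sales ADD FILEGROUP FG_OrderDetails
ALTER DATABASE Sales ADD FILE
(NAME = Orders_dat, FILENAME = 'E:\SQLData\Orders.ndf', SIZE = 500MB)
TO FILEGROUP FG_Orders
ALTER DATABASE Sales ADD FILE
(NAME = OrderDetails_dat, FILENAME = 'F:\SQLData\OrderDetails.ndf', SIZE = 500MB)
TO FILEGROUP FG_OrderDetails
-- Place each table on its own filegroup
CREATE TABLE dbo.Orders (OrderID int PRIMARY KEY, CustomerID int) ON FG_Orders
CREATE TABLE dbo.OrderDetails (OrderID int, ProductID int) ON FG_OrderDetails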
Hello,
I am managing a SQL Server 6.5 database at my company. I get a message that the data files should be expanded, but whenever I try to expand them the following message appears:
Could not find enough space on disks to extend the database. Meanwhile, I have about 6 GB of free space on my disks. Please help me out.
Thanks,
Albert
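A hedged guess at the cause, with a sketch of the fix (device name, path, device number, database name, and sizes are all placeholders): in SQL Server 6.5, databases live on fixed-size devices, so this error usually means the existing devices are full even though the disk itself has free space. Create or enlarge a device first, then expand the database onto it.
DISK INIT
NAME = 'data_dev2',
PHYSNAME = 'C:\MSSQL\DATA\data_dev2.dat',
VDEVNO = 9,
SIZE = 512000 -- size in 2 KB pages (about 1000 MB)
GO
ALTER DATABASE mydb ON data_dev2 = 1000 -- expand by 1000 MB onto the new device
GO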
OS Windows Server 2012
SQL 2012
I have software that presents a drive to a 2-node cluster.
I am not sure how I would go about presenting this newly presented drive as a cluster resource and making SQL depend on that resource.
Why do I see absolutely no performance improvement when I spread my primary filegroup over 8 separate files on 8 separate disks, as opposed to having the primary filegroup all in one file on one disk?
I have set up 2 identical databases, one spread over 8 disks and one on one disk. Each database has a table called DATA and a column called VALUE. Value is NVARCHAR(200). I have filled each table up in both databases with 20,000 rows.
I then perform a select on each table in each database using CHECKPOINT and DBCC DROPCLEANBUFFERS to ensure I am reading from disk before each query and the execution times are identical in both databases.
I then ran the same queries against each database using a load testing tool and the batch requests per second on each DB is identical under load.
Surely the database with data spread over 8 disks should be FAR faster than the single-file database, as you have the combined reading power of 8 disks as opposed to 2?
Also, the same is happening for write speeds. When I create the data on both databases, the time it takes is identical on both.
BOL says it should be faster with multiple disks.
Just FYI this is on an Azure virtual machine and each disk is a locally redundant data disk that I have attached to the virtual machine.
Should write speeds increase with multiple disks, or just read speeds?
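For reference, a minimal sketch of the cold-read test described above (the LIKE predicate is an assumption, added only to force a full scan of the DATA table):
-- Flush dirty pages, then empty the buffer pool so the next read comes from disk
CHECKPOINT
DBCC DROPCLEANBUFFERS
SET STATISTICS TIME ON
SELECT COUNT(*) FROM dbo.DATA WHERE VALUE LIKE N'%x%'
SET STATISTICS TIME OFF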
I am getting the following errors in my Cluster Validation report when trying to create a Windows cluster.
I have 2 nodes, DB01 and DB02. Each has 1 public IP, 1 private IP (for heartbeat), and 2 private IPs for SAN1 and SAN2. The private IPs to the SANs are directly connected via network adapters in DB01 and DB02.
Validate Microsoft MPIO-based disks
Description: Validate that disks that use Microsoft Multipath I/O (MPIO) have been configured correctly.
Start: 9/9/2014 1:57:52 PM.
No disks were found on which to perform cluster validation tests.
Stop: 9/9/2014 1:57:53 PM.
[Code] ...
I am wondering what would be the best disk/RAID setup for a Windows Server 2008 R2 OS and a SQL Server 2012 database that has heavy read/write activity. I have the following disks I can use:
4x 15k 146GB
2x 10k 600GB
According to the server build requirements for the application, I need 100 GB for the OS and 290 GB for the drive containing the SQL mdf. There are no stated requirements for the ldf, but I would like to know if it should be allocated elsewhere. My plan is RAID 10 across the 15k drives for SQL and RAID 1 on the 10k drives for the OS.
I am testing out a blank database created over two physical files on two separate disks with one table called data which has one column called values nvarchar(max).
I filled the table up with a whole load of data and ran a select * against it. If I run Perfmon at the same time, I can see that the read load is spread over multiple disks, as each of the disks is read from in parallel. If I create the same database on a single file and run the same select * again, it takes much longer, confirming that in the two-file case the read load really was distributed across multiple disks.
Now moving on to writes; this is where the confusion lies. I understand that SQL Server fills files evenly until they need growing, after which it fills them individually until they are full, in a round-robin fashion, unless you have trace flag 1117 turned on. What I don't understand is why the writes aren't distributed while it is filling these filegroups.
I ran a continual insert into my table with GO 1000000 to monitor how the files fill up, and watched where SQL Server was physically placing the rows as they were inserted by running the following query:
;WITH CTE AS
(SELECT
sys.fn_PhysLocFormatter (%%physloc%%) col1,
RIGHT(LEFT(sys.fn_PhysLocFormatter (%%physloc%%),2),1) AS [Physical RID],
DATAID
[Code] ....
I could see that it would write a thousand or so records into file 1, then a thousand or so into file 2, then a thousand or so into file 1, etc. In other words, it would hit one disk, then the other disk, then back to disk one, to fill the files evenly. Is there any way to make SQL Server distribute the writes in parallel so that both disks are writing in tandem?
By the looks of it, multiple disks only scale reads: with writes, only one disk is ever written to at a time, which is annoying. Is there any way to harness the write power of multiple disks?
Hi Craig/Kamal,
I got your email address from your web cast. I really enjoyed the web cast and found it to be very informative.
Our company is planning to use SSIS (VS 2005 / SQL Server 2005). I have a quick question regarding the product. I have looked for the information on the web, but was not able to find anything relevant.
We are getting source data from two of our clients in the form of Excel sheets. These Excel sheets are generated using Reporting Services. On examining the Excel sheets, I found that the names of the columns contain data themselves, so the names are not static, such as Jan 2007 Sales, Feb 2007 Sales, etc. Even the number of columns is not static; it depends upon the range of dates selected by the user.
I wanted to know if there is a way to import an Excel sheet using Integration Services by defining the position of a column instead of its name, and I am not sure if there is a way to import Excel with a dynamic number of columns.
Your help in this respect is highly appreciated!
Thanks,
Hi Anthony, I am glad the Web cast was helpful.
Kamal and I have both moved on to other teams in MSFT, and I am a little rusty in that area, though in general a dynamic number of columns in any format is always tricky. I am assuming it's not feasible for you to get the source for SSIS a little closer to home, e.g. rather than using Excel output from Reporting Services, use the same or some form of the query/data source that RS is using.
I suggest you post a question on the SSIS forum on MSDN and you should get some good answers.
http://forums.microsoft.com/msdn/showforum.aspx?forumid=80&siteid=1
Thanks
Craig Guyer
SQL Server Reporting Services
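One hedged workaround for the positional-column question, outside SSIS (file path and sheet name are placeholders, and 'Ad Hoc Distributed Queries' must be enabled on the server): querying the sheet through the Jet provider with HDR=NO makes every column arrive positionally as F1, F2, F3, ... regardless of what the header row contains.
SELECT F1, F2, F3
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;Database=C:\Imports\sales.xls;HDR=NO',
'SELECT * FROM [Sheet1$]')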
Hi,
I have a need to display on screen AND email a PDF report to email addresses specified at run time, executing the report with a parameter specified by the user. I have looked into data-driven subscriptions, but it seems these are based on scheduling. Unfortunately, for the majority of the project I will only have access to SQL 2005 Standard Edition (the production system is Enterprise), so I cannot investigate thoroughly.
So, is this possible using data driven subscriptions? Scenario is:
1. User enters parameter used for query, as well as email addresses.
2. Report is generated and displayed on screen.
3. Report is emailed to addresses specified by user.
Any tips on how to get this working?
Thanks
Mark Smith
If anyone could confirm...
SQL Server 2000 SP4 replicating to multiple SQL Server 2005 Mobile Edition subscribers on PDAs. My DB on SQL2k is published with a single dynamic row filter using HOST_NAME() on my 'parent' table, plus join filters from parent to child tables. The row filter joins to other tables elsewhere, which are not published, to evaluate what data is allowed through the filter.
E.g. the published parent table contains supplier names, etc., while the child table holds the suppliers' products. The filter queries host_names linked to suppliers in an unpublished table elsewhere.
The first initial sync with the snapshot is correct and as I expected: the PDA receives only the data from the parent (and thus child) tables that matches the row filter for the host_name provided.
However, in my scenario the host_name <--> supplier mapping may later be updated, e.g. more suppliers assigned to a PDA, or vice versa. But when I merge the mobile DB, the new data is not downloaded. I tried re-running the snapshot, etc.; no change.
Question: I thought the filters would remain dynamic and be applied on each sync?
If I run a 'harmless' update on the parent table using T-SQL, e.g. "update table set 'X' = 'X'", and re-sync, the new parent records are downloaded, but the child records are not!
Question: if the parent records are supplied, why not the child records?
If I delete the existing DB and sync fresh, I get the updated snapshot and all is well, until more data is added back at the server...
Any help would be greatly appreciated. Is it possible (or not) to have dynamic filters run during second or subsequent merge?
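For reference, a hedged sketch of how such a dynamic row filter is typically declared (publication, article, table, and column names are placeholders). The filter expression itself is re-evaluated at each merge, but changes made only to unpublished lookup tables that the filter joins to may not be detected until a published row is actually updated, which would match the behavior described above.
EXEC sp_addmergearticle
@publication = N'SupplierPub',
@article = N'Suppliers',
@source_object = N'Suppliers',
@subset_filterclause = N'SupplierID IN (SELECT SupplierID FROM dbo.PdaSupplierMap WHERE PdaName = HOST_NAME())'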
I have tried building an inline TVF, as I assume this is how it would be used in the DB; however, I am receiving the following error, so I must be missing a step somewhere, as I've never done this before. I'm lost on how to implement this CLR function in my DB.
Error:
Msg 156, Level 15, State 1, Procedure clrDynamicPivot, Line 18
Incorrect syntax near the keyword 'external'.
CREATE FUNCTION clrDynamicPivot
(
-- Add the parameters for the function here
@query nvarchar(4000),
@pivotColumn nvarchar(4000),
[code]....
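For comparison, a hedged sketch of the full registration syntax (the assembly name, class name, and return-column list are placeholders and must match the compiled CLR method). Note that AS EXTERNAL NAME is only parsed when the database compatibility level is 90 or higher and CLR integration is enabled; a level-80 database is one common cause of the "Incorrect syntax near the keyword 'external'" error.
EXEC sp_configure 'clr enabled', 1
RECONFIGURE
GO
CREATE FUNCTION dbo.clrDynamicPivot
(
@query nvarchar(4000),
@pivotColumn nvarchar(4000)
)
RETURNS TABLE (RowKey nvarchar(128), PivotValue sql_variant) -- must match the CLR method's table definition
AS EXTERNAL NAME PivotAsm.[Pivot.Functions].clrDynamicPivot
GO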
I have a Stored Procedure for processing a Bill of Material.
One column on the Assembly table is a function name that contains some business rules.
OK, now I'm doing a Proof of Concept and I'm stumped.
Huuuuh!
I will ultimately have about 100 of these things. My plan was to use dynamic SQL to execute the function.
Note: The function just returns a bit.
So; here's what I had in mind ...
if isnull(@FnNameYN,'') <> ''
exec spinb_CheckYN @FnNameYN, @InvLineID, @FnBit = @FnBit output
CREATE PROCEDURE dbo.spinb_CheckYN
@FnNameYN varchar(50),
@InvLineID int,
@FnBit bit output
AS
declare @SQL varchar(8000)
set @SQL = '
if dbo.' + @FnNameYN + ' (' + convert(varchar(31),@InvLineID) + ') = 1
set @FnBit = 1
else
set @FnBit = 0'
exec (@SQL)
GO
Obviously, @FnBit is not defined inside @SQL, so that execution will not work.
Server: Msg 137, Level 15, State 1, Line 4
Must declare the variable '@FnBit'.
Server: Msg 137, Level 15, State 1, Line 5
Must declare the variable '@FnBit'.
So; is there a way to get a value out of a Dynamic SQL piece of code and get that value INTO my OUTPUT variable?
My many thanks to anyone who can solve this riddle for me.
Thank You!
Sigh: for now, it looks like I'll have a huge string of "IF" statements, one for each business rule function, as follows:
Hopefully a better solution comes to light.
------ Vertical Build1 - Std Vanes -----------
if @FnNameYN = 'fnb_YN_B1_14'
BEGIN
if dbo.fnb_YN_B1_14 (convert(varchar(31),@InvLineID) ) = 1
set @FnBit = 1
else
set @FnBit = 0
END
------ Vertical Build1 - Scissor Vanes -----------
if @FnNameYN = 'fnb_YN_B1_15'
BEGIN
if dbo.fnb_YN_B1_15 (convert(varchar(31),@InvLineID) ) = 1
set @FnBit = 1
else
set @FnBit = 0
END
.
.
.
etc.
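A hedged alternative that avoids the IF chain: sp_executesql (unlike EXEC) accepts typed parameters, including OUTPUT parameters, so the dynamic batch can assign straight into the caller's variable. A sketch using the names from the post:
DECLARE @SQL nvarchar(4000)
SET @SQL = N'SET @FnBitOut = dbo.' + @FnNameYN + N'(@InvLineIDIn)'
EXEC sp_executesql @SQL,
N'@InvLineIDIn int, @FnBitOut bit OUTPUT',
@InvLineIDIn = @InvLineID,
@FnBitOut = @FnBit OUTPUT
Wrapping @FnNameYN with QUOTENAME would additionally guard the concatenation against injection through the function-name column.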
I've looked up dynamic cursors and dynamic SQL statements in Books Online.
Using the examples given in Books Online returns compilation errors; see below.
Does anyone know how to use a dynamic cursor with a dynamic SQL statement?
James
-- SQL ---------------
EXEC SQL BEGIN DECLARE SECTION;
char szCommand[] = "SELECT au_fname FROM authors WHERE au_lname = ?";
char szLastName[] = "White";
char szFirstName[30];
EXEC SQL END DECLARE SECTION;
EXEC SQL
DECLARE author_cursor CURSOR FOR select_statement;
EXEC SQL
PREPARE select_statement FROM :szCommand;
EXEC SQL OPEN author_cursor USING :szLastName;
EXEC SQL FETCH author_cursor INTO :szFirstName;
--Error--------------------
Server: Msg 170, Level 15, State 1, Line 23
Line 23: Incorrect syntax near ';'.
Server: Msg 1038, Level 15, State 1, Line 24
Cannot use empty object or column names. Use a single space if necessary.
Server: Msg 1038, Level 15, State 1, Line 25
Cannot use empty object or column names. Use a single space if necessary.
Server: Msg 170, Level 15, State 1, Line 27
Line 27: Incorrect syntax near ';'.
Server: Msg 170, Level 15, State 1, Line 30
Line 30: Incorrect syntax near 'select_statement'.
Server: Msg 170, Level 15, State 1, Line 33
Line 33: Incorrect syntax near 'select_statement'.
Server: Msg 102, Level 15, State 1, Line 35
Incorrect syntax near 'author_cursor'.
Server: Msg 170, Level 15, State 1, Line 36
Line 36: Incorrect syntax near ':'.
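Those examples are Embedded SQL for C; they are meant to be precompiled into a C program, not run in Query Analyzer, which is why the T-SQL parser rejects them. A hedged T-SQL equivalent uses a cursor variable with sp_executesql (table and column names from the pubs sample):
DECLARE @cur CURSOR
EXEC sp_executesql
N'SET @c = CURSOR FOR SELECT au_fname FROM authors WHERE au_lname = @ln; OPEN @c;',
N'@c CURSOR OUTPUT, @ln varchar(40)',
@c = @cur OUTPUT,
@ln = 'White'
FETCH NEXT FROM @cur -- returns the first matching au_fname
CLOSE @cur
DEALLOCATE @cur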
I have a requirement which I have partly accomplished, but I could not get all the way through.
I have a file which comes in a standard format ending with a date and a sequence number.
Suppose the file name is abc_yyyymmdd_01 for the first copy; if it is copied more than once, the sequence number changes to 02, 03, and so on.
I then need to transform it into a new comma-delimited destination file named abc_yyyymmdd.txt, plus a record-count file abc_count_yyyymmdd.txt, and move them to a designated folder; the source file is then moved to an archive folder.
The approach I have taken is:
script task select source file --------------------> data flow task------------------------------------------> script task to destination file
dataflow task -------------------------> does count and copy in delimited format
What is happening is that I can convert a regular source file to a delimited destination file and move it to the destination folder with a script task,
but I cannot get the dynamic pick of a source file to work.
Please advise with your comments or any solution you have.
I am trying to create an SSIS package with a dynamic CSV file as output, where the output contains query results.
sample file name:
Unique identifier + query output + systemdate();
The expression is looking like this.
@[User::FilePath] + @[User::FileName] + ".CSV"
-- User::FilePath is a variable from the SSIS package; the file name is the output of a SQL query, and using a script task I have assigned that value to @[User::FileName].
When I debug the script task, the variable gets the value properly, but when I use the same variable for the Flat File destination it does not work.
This is probably a very silly question. I started learning ASP.NET by following ASP.NET Unleashed. I am stuck where he wants me to open a connection to a SQL Server database. I have just downloaded MSDE, but I don't know where to type this code and how to run it so as to connect to the database.
<%@ Import Namespace="System.Data.SqlClient" %>
<Script Runat="Server">
Sub Page_Load
Dim conPubs As SqlConnection
conPubs = New SqlConnection( "server=localhost;uid=webuser;pwd=secret;database=pubs" )
conPubs.Open()
End Sub
</Script>
Connection Opened!
Now do I have to change the uid to sa? (I had to assign one when I downloaded and installed MSDE.)
Thanks for the help.
Hi all,
I am not very experienced in using DTS and really need your help. I have a DTS package that I have scheduled to run every day. Here's what I want the package to do:
1. Check whether a value for a certain column in a certain row of a table in my database is 0 or 1. If it is 1, then
2. Run the DTS task (which I have created and is working)
In other words, when the package starts, I want to execute a stored procedure or SQL task or whatever, and if that returns 1 I want to continue; if it returns 0, I want to finish the package without running the DTS task. I'm sure there's a simple way to do this, but I could use your help!
Thanks,
Elisabet
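One hedged way to wire this up in DTS (table and column names below are placeholders): put the check in an Execute SQL task that deliberately fails when the flag is 0, and link it to the main task with an "On Success" precedence constraint, so the main task only runs when the check succeeds.
-- Fails the Execute SQL task, and so the On Success branch, when the flag is not 1
IF (SELECT RunFlag FROM dbo.PackageControl WHERE PackageName = 'MyDailyLoad') = 1
SELECT 'OK' AS Result
ELSE
RAISERROR ('Run flag is 0 - skipping main task.', 16, 1)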
Hi All,
Can this be done, and if so, can you give a bullet list of the steps needed to accomplish it?
I need to load a bunch of files into a staging table, looping through the files and loading each one.
Thanks,
Michael
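A hedged T-SQL-only sketch of such a loop (directory, file pattern, and table name are placeholders; xp_cmdshell must be permitted on the server):
CREATE TABLE #files (fname varchar(260))
INSERT #files EXEC master..xp_cmdshell 'dir /b C:\loads\*.txt'
DELETE #files WHERE fname IS NULL -- dir output ends with a blank line
DECLARE @f varchar(260), @sql nvarchar(1000)
DECLARE c CURSOR FOR SELECT fname FROM #files
OPEN c
FETCH NEXT FROM c INTO @f
WHILE @@FETCH_STATUS = 0
BEGIN
SET @sql = N'BULK INSERT dbo.Staging FROM ''C:\loads\' + @f + N''' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'')'
EXEC (@sql)
FETCH NEXT FROM c INTO @f
END
CLOSE c
DEALLOCATE c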
Hi,
What does this statement do?
Does it add all the values or concatenate them?
REPLACE combine WITH lc_tran + lc_exp + lc_war + ll_boc
Regards
kk
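REPLACE ... WITH is not T-SQL; it looks like xBase/Visual FoxPro syntax, where it assigns the expression on the right to the field combine in the current record. Whether + adds or concatenates there depends on the operand types: numeric values are added, character values are concatenated. A hedged T-SQL analogue, assuming these are columns of some table t:
UPDATE t
SET combine = lc_tran + lc_exp + lc_war + ll_boc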
I downloaded SQLExpress and Visual Studio Express to my home computer.
I built a simple database, adding data through the SQL Express admin tool.
I built a web page using MS Studio. I connected to the database and used the webpage for a few days. Then I restarted the computer. Now the web page won't open, and MS Studio won't open the MDF file in the App_Data folder.
I can still see and work in the database through SQL server Express.
The web page and the MS Studio attempt to connect to the mdf file both fail with this message:
Cannot open user default database. Login failed. Login failed for user 'KAAAK/Administrator'.
So it seems to be trying to connect as the Windows user.
When I try to modify the connection to connect through a user/password I created in SQL manager, I get a message that the user is not a trusted SQL user.
from web.config:
<connectionStrings>
<add name="ConnectionString" connectionString="Data Source=.SQLEXPRESS;AttachDbFilename=|DataDirectory|info.mdf;Integrated Security=True;User Instance=True;User ID=Admin;Password=12345" providerName="System.Data.SqlClient"/>
</connectionStrings>
That was changed from the original string created automatically by MS Studio
<connectionStrings>
<add name="stocksConnectionString" connectionString="Data Source=.SQLEXPRESS;AttachDbFilename=|DataDirectory|stocks.mdf;Integrated Security=True;User Instance=True;" providerName="System.Data.SqlClient"/>
</connectionStrings>
I am sure this is some simple problem, but why would the system refuse to access an mdf file it had already been accessing?
Thanks, Michael
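If the goal is SQL (user/password) authentication, a hedged sketch of creating the login and mapping it into the database (login name, password, and database name are placeholders; the instance must allow mixed-mode authentication, and the connection string should then use User ID/Password without Integrated Security=True):
CREATE LOGIN webuser WITH PASSWORD = 'Str0ng!Passw0rd'
GO
USE info
CREATE USER webuser FOR LOGIN webuser
EXEC sp_addrolemember 'db_datareader', 'webuser'
EXEC sp_addrolemember 'db_datawriter', 'webuser'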
Hi all, I am having trouble with my first SQL communication. I've got a hosted service with a SQL database that I've populated with a row.
When it gets to the third line, the page crashes with an error.
SqlConnection connection = new SqlConnection("Server=mydbserver.com;Database=db198704784;"); // + "Integrated Security=True"
SqlCommand cmd = new SqlCommand("SELECT UserName FROM Users", connection);
SqlDataReader reader = cmd.ExecuteReader();
Is there somewhere I need to put in my username or password, or is this code just wrong?
Many thanks burnside.
I am not sure why I am having trouble here, but I am using the following WHERE clause, expecting to find all rows where any one of the three keywords is present:
....WHERE Company.L_Keywords LIKE '%metal%' AND Company.L_Keywords LIKE '%tile%' AND Company.L_Keywords LIKE '%ceramic%'
However, it appears to be finding only the rows where all three words are present in the L_Keywords field.
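That is the expected behavior: AND requires every pattern to match, so only rows containing all three words qualify. To match rows containing any one of the keywords, a hedged version of the query uses OR (the SELECT list is a placeholder):
SELECT Company.*
FROM Company
WHERE Company.L_Keywords LIKE '%metal%'
OR Company.L_Keywords LIKE '%tile%'
OR Company.L_Keywords LIKE '%ceramic%'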
This is a very simple question: how would a SELECT statement be formatted in the following example?
SELECT Grade, Student_ID, First_Name, Last_Name FROM Scores WHERE (This is where I'm stuck and I know this is not the right formatting although I wish it were because it would make my life a little bit easier.) Student_ID = 115485, 115856, 568547, 965864, etc...
I may have up to 100 specific student IDs to put in this one statement. I know I can use "WHERE Student_ID = 115485 OR Student_ID = 115856 OR Student_ID = 568547", but that would be a lot of waste. It seems like there should be an easier way than repeating "OR Student_ID =" for every entry.
Can someone explain another way I can do this? Thanks in advance.
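The IN operator does exactly this; one list replaces the whole chain of ORs (IDs taken from the post):
SELECT Grade, Student_ID, First_Name, Last_Name
FROM Scores
WHERE Student_ID IN (115485, 115856, 568547, 965864)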
Hey, I have a pretty simple question. My query is throwing an error saying "Invalid column name 'subject'." The problem is that subject is a custom column I've made; just look at the SQL:
SELECT a.ArticleID, subject=ISNULL((select subject from subjects),'') where subject='some subject'
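The error arises because a column alias defined in the SELECT list is not visible to the WHERE clause of the same query. A hedged rework that filters after the alias exists (the FROM clause is an assumption, since the original omitted it):
SELECT ArticleID, subject
FROM (
SELECT a.ArticleID,
ISNULL((SELECT subject FROM subjects), '') AS subject
FROM dbo.Articles AS a -- hypothetical table; not named in the post
) AS t
WHERE subject = 'some subject'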
I have already created a package which loads a text file into a database using the DTS wizard in Enterprise Manager. How do I execute that package from Visual Basic? Please provide the code! Thanks
Hello,
I've just migrated my Access database from Access 2000 to SQL 7.0. The wizard told me there was no problem. But a simple question:
How do I open my database? Where can I see tables, fields, and so on?
Is there no interface like the one in Access 2000?
Thanks in advance!