Are there any issues with installing SQL 2000 on a server with dynamic disks configured for RAID 1 mirroring? I know that with dynamic disks there are no partitions, only volumes. Is the drive configuration set up the same way? Would I set up one volume for the OS and another for SQL? The reason I need to use dynamic disks is that we will be integrating the server with an EMC SAN solution later.
I have a two-node SQL 2000 cluster running on Windows 2003 Enterprise Server. We need to replace the SAN disks. Can we simply stop the SQL Server and Cluster services, copy the contents from the existing disks to the target disks, swap the drive letters, and restart the services?
What is the best practice for doing this? I appreciate your help.
It was shipped with a 76 GB drive set up in RAID 1 (2 disks) and a 400 GB drive set up in RAID 5 (4 disks).
I would like to determine the best way to set up the partitions: what size each should be and what should be placed on each.
Take the C: drive, for example: should I just put Windows on it and nothing else? Do I stand to gain something by not using part of that 76 GB as a D: drive for my apps?
Any help appreciated! Are there any performance enhancements to be gained by storing a frequently trigger-written-to database on a separate disk from the source database? In particular, we keep a 'history' database of all inserts/updates/deletes against records, populated by triggers, and I was wondering whether I would gain performance by locating the two databases on different disks. Thanks in advance.
I am trying to build two 2-node clusters with AlwaysOn.
Here is the landscape:
2-node PROD failover cluster (running one instance)
2-node DR failover cluster (running two instances: DR and PRE-PROD)
Both clusters are in different geographies.
PRE-PROD must stay writable, so it is out of scope for AlwaysOn.
The one instance on PROD -> DR on the other cluster. [I want to achieve this through AlwaysOn.]
Now my questions:
1) Do I need to have all four nodes in the same failover cluster? If yes, this would become a multi-subnet cluster. Or is there any way those two different failover clusters (one DR and one PROD) can take part in AlwaysOn?
2) Can I use the clustered disks in the above landscape for AlwaysOn?
I am in the process of moving a SQL 2005 solution from a development box that used local storage to a UAT environment with SAN-attached storage. The solution uses database snapshots.
The database files are on the SAN storage, but during testing I was unable to create a database snapshot on the SAN disk. Creating snapshots on a local disk worked fine.
Is there some restriction or problem with using database snapshot technology on SAN storage?
I've read that if particular tables are frequently queried together through a join, then these tables should be placed on different devices on different physical disks. What does this mean exactly, and how would you configure it? Is this a common practice in high-performance real-world environments (or should it be)?
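In current SQL Server versions, the "devices" of old map to files and filegroups. A minimal sketch of placing two frequently joined tables on different physical disks via filegroups; all database, path, table, and filegroup names here are hypothetical, and E: and F: are assumed to be separate physical disks:

CREATE DATABASE SalesDemo
ON PRIMARY (NAME = SalesDemo_data, FILENAME = 'E:\Data\SalesDemo.mdf'),
FILEGROUP FG_Orders (NAME = SalesDemo_orders, FILENAME = 'E:\Data\SalesDemo_orders.ndf'),
FILEGROUP FG_OrderLines (NAME = SalesDemo_lines, FILENAME = 'F:\Data\SalesDemo_lines.ndf')
LOG ON (NAME = SalesDemo_log, FILENAME = 'F:\Log\SalesDemo_log.ldf');
GO
-- each side of the frequent join lives on its own filegroup, hence its own disk
CREATE TABLE dbo.Orders (OrderID int PRIMARY KEY, CustomerID int) ON FG_Orders;
CREATE TABLE dbo.OrderLines (OrderLineID int PRIMARY KEY, OrderID int NOT NULL) ON FG_OrderLines;

The idea is that a join touching both tables can read from both spindles in parallel instead of queuing against one.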
I am managing a SQL Server 6.5 database at my company. I get the message that the data files should be expanded, but whenever I try to expand them, the following message appears:
Could not find enough space on disks to extend the database. Meanwhile, I have about 6 GB of free space on my disks. Please help me out. Thanks,
Why do I see absolutely no performance improvement when I spread my primary filegroup over 8 separate files on 8 separate disks, as opposed to having the primary filegroup in one file on one disk?
I have set up two identical databases, one spread over 8 disks and one on a single disk. Each database has a table called DATA with a column called VALUE, of type NVARCHAR(200). I have filled the table in each database with 20,000 rows.
I then perform a select on the table in each database, using CHECKPOINT and DBCC DROPCLEANBUFFERS beforehand to ensure I am reading from disk, and the execution times are identical in both databases.
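For reference, a minimal sketch of the cold-read test I am running (table and column names as described above; the LIKE predicate is only there to force a full scan of the column):

CHECKPOINT;                -- flush dirty pages so the buffer drop below is safe
DBCC DROPCLEANBUFFERS;     -- empty the buffer pool: the next read comes from disk
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM dbo.DATA WHERE VALUE LIKE N'%x%';
SET STATISTICS TIME OFF;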
I then ran the same queries against each database using a load-testing tool, and the batch requests per second on each DB are identical under load.
Surely the database with data spread over 8 disks should be FAR faster than the single-file database, since you have the combined read throughput of 8 disks as opposed to one?
Also, the same is happening with write speeds: when I create the data in both databases, the time it takes is identical for both.
BOL says it should be faster with multiple disks.
Just FYI, this is on an Azure virtual machine, and each disk is a locally redundant data disk that I have attached to the VM.
Should write speeds also increase with multiple disks, or just read speeds?
I am getting the following errors in my Cluster Validation report when trying to create a Windows cluster.
I have 2 nodes, DB01 and DB02. Each has one public IP, one private IP (for the heartbeat), and two private IPs for SAN1 and SAN2. The private IPs to the SANs are directly connected via network adapters in DB01 and DB02.
Validate Microsoft MPIO-based disks
Description: Validate that disks that use Microsoft Multipath I/O (MPIO) have been configured correctly.
Start: 9/9/2014 1:57:52 PM.
No disks were found on which to perform cluster validation tests.
Stop: 9/9/2014 1:57:53 PM.
I am wondering what the best disk/RAID setup would be for a Windows Server 2008 R2 OS and a SQL Server 2012 database with heavy read/write activity. I have the following disks I can use:
4x 15k 146GB
2x 10k 600GB
According to the server build requirements for the application, I need 100 GB for the OS and 290 GB for the drive containing the SQL mdf; there are no stated requirements for the ldf, but I would like to know whether it should be allocated elsewhere. My plan is RAID 10 across the 15k drives for SQL and RAID 1 across the 10k drives for the OS.
I am testing a blank database created over two physical files on two separate disks, with one table called data that has one column, values nvarchar(max).
I filled the table with a whole load of data and ran a SELECT * against it. If I run Perfmon at the same time, I can see that the read load is spread over multiple disks, as each disk is read from in parallel. If I create the same database on a single file and run the same SELECT * again, it takes much longer, proving that the read load had been distributed across multiple disks.
Now moving on to writes; this is where the confusion lies. I understand that SQL Server fills files evenly until they need to grow, after which it grows and fills them individually, round-robin, until they are full, unless you have trace flag 1117 turned on. What I don't understand is why the writes aren't distributed across disks while it is filling these files.
I ran a continual insert into my table with GO 1000000 to monitor how the files fill up. I watched where SQL Server was physically placing the rows as they were inserted by running the following query:
;WITH CTE AS (
    SELECT sys.fn_PhysLocFormatter(%%physloc%%) AS col1,
           RIGHT(LEFT(sys.fn_PhysLocFormatter(%%physloc%%), 2), 1) AS [Physical RID],
           DATAID
[Code] ....
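For anyone reproducing this, a completed version of that query might look like the sketch below. It assumes the table is dbo.data with a key column DATAID, and it relies on the undocumented sys.fn_PhysLocFormatter helper, whose output is '(file:page:slot)', so the RIGHT(LEFT(...)) expression pulls out the file ID (single-digit file IDs only):

;WITH CTE AS (
    SELECT sys.fn_PhysLocFormatter(%%physloc%%) AS col1,
           RIGHT(LEFT(sys.fn_PhysLocFormatter(%%physloc%%), 2), 1) AS [Physical RID],
           DATAID
    FROM dbo.data
)
SELECT [Physical RID] AS FileId, COUNT(*) AS RowsInFile
FROM CTE
GROUP BY [Physical RID];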
I could see that it would write a thousand or so records into file 1, then a thousand or so into file 2, then a thousand or so into file 1, and so on. In other words, it hits one disk, then the other, then goes back to the first, filling the files evenly. Is there any way to make SQL Server distribute the writes in parallel so that both disks are writing in tandem?
By the looks of it, multiple disks only scale reads; with writes, only one disk is ever written to at a time, which is annoying. Is there any way to harness the write power of multiple disks?
Hi, DTS in SQL 2000 now has a Dynamic Properties task, which at first glance is pretty cool. However, I do not know whether it would solve a particular issue I have.
I need a generic DTS package that pulls data from different locations based on the project selected by a user. The source is a Btrieve database. The table names change with the project; for example, the F24 project has its source table named F24TACT, while the IFO source has its source table named IFOTACT.
I need to be able to dynamically retrieve the table name from a local SQL table and assign this name to the data pump for the data extraction.
I don't know if this is achievable in DTS. I can retrieve the value into a lookup variable, but how can I set it at runtime?
My company stores thousands of images in a SQL 2000 database (image data type). We need to manipulate these images as they are presented, according to different situations. For example, different logos need to be inserted at the bottom of the image depending on the requesting party. Any thoughts on the subject are much appreciated. Dante
set @x1 = (select DRfieldname from tblJournalType where journaltypeid = 2)
set @SQLstring = 'select ' + @x1 + ' from tblAssettype where assettypeid = 10'
set @x2 = EXEC (@SQLstring)

The last line above is where I am getting the error. Is it possible to do this?
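For what it's worth, a minimal sketch of the sp_executesql pattern that can do this, passing the value back out through an OUTPUT parameter (assuming the looked-up column value fits in nvarchar(4000)):

DECLARE @x1 sysname, @x2 nvarchar(4000), @SQLstring nvarchar(4000);
SELECT @x1 = DRfieldname FROM tblJournalType WHERE journaltypeid = 2;
SET @SQLstring = N'SELECT @result = ' + QUOTENAME(@x1)
               + N' FROM tblAssettype WHERE assettypeid = 10';
EXEC sp_executesql @SQLstring,
                   N'@result nvarchar(4000) OUTPUT',
                   @result = @x2 OUTPUT;
SELECT @x2;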
I'm attempting to write a script that I can execute across 30 servers that will create a domain login and subsequently grant access to that account on all databases on each server. The only problem I'm running into is trying to dynamically create the login. Example source is below.
declare @sql varchar(1000)
declare @loginname varchar(50)
select @loginname = 'DOMAIN\accountname'
set @sql = 'if not exists (select * from master.dbo.syslogins where name = N' + char(39) + 'DOMAIN\accountname' + char(39) + ')' + char(10) + char(13)
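A sketch of how the rest of that batch might be completed, using @loginname instead of the repeated literal and sp_grantlogin (the SQL 2000-era call for creating a Windows login); the domain account name is a placeholder:

set @sql = 'if not exists (select * from master.dbo.syslogins where name = N'
         + char(39) + @loginname + char(39) + ') '
         + 'exec sp_grantlogin N' + char(39) + @loginname + char(39)
exec (@sql)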
I need to pull data, using input from one table, into SQL Server 2005. I have to query a SQL Server 2000 database and pull the data into SQL Server 2005. I have a list of IDs that I have to pass to a query to get the desired data. What is the best practice for this? Can I use SSIS, or do I need to build an app in C#? Can somebody please reply?
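One option, sketched below, is a linked server on the 2005 instance pointing at the 2000 instance, with the ID list driving a join. LEGACY2000, SourceDb, dbo.SourceData, dbo.IdList, and dbo.TargetData are all hypothetical names; with @srvproduct = N'SQL Server', the linked server name must be the network name of the SQL 2000 instance:

-- one-time setup on the SQL 2005 side
EXEC sp_addlinkedserver @server = N'LEGACY2000', @srvproduct = N'SQL Server';
GO
-- pull only the rows whose IDs appear in the local driver table
INSERT INTO dbo.TargetData (Id, Payload)
SELECT s.Id, s.Payload
FROM LEGACY2000.SourceDb.dbo.SourceData AS s
JOIN dbo.IdList AS i ON i.Id = s.Id;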
I got your email address from your web cast. I really enjoyed the web cast and found it to be very informative.
Our company is planning to use SSIS (VS 2005 / SQL Server 2005). I have a quick question regarding the product. I have looked for the information on the web, but was not able to find relevant information.
We are getting source data from two of our clients in the form of Excel sheets. These Excel sheets are generated using Reporting Services. On examining them, I found that the column names themselves contain data, so the names are not static (e.g., Jan 2007 Sales, Feb 2007 Sales, and so on), and even the number of columns is not static; it depends on the date range selected by the user.
I wanted to know whether there is a way to import an Excel sheet using Integration Services by defining the position of each column instead of its name, and whether there is a way to import an Excel sheet with a dynamic number of columns.
Your help in this respect is highly appreciated!
Thanks,
Hi Anthony, I am glad the Web cast was helpful.
Kamal and I have both moved on to other teams in MSFT, and I am a little rusty in that area, though in general a dynamic number of columns in any format is always tricky. I am assuming it's not feasible for you to get the SSIS source a little closer to home, e.g., rather than using Excel output from Reporting Services, use the same (or some form of the) query/data source that RS is using.
I suggest you post a question on the SSIS forum on MSDN, where you should get some good answers: http://forums.microsoft.com/msdn/showforum.aspx?forumid=80&siteid=1
Hi, I need to display a report on screen AND email it as a PDF to addresses specified at run time, executing the report with a parameter supplied by the user. I have looked into data-driven subscriptions, but it seems these are based on scheduling. Unfortunately, for the majority of the project I will only have access to SQL 2005 Standard Edition (the production system is Enterprise), so I cannot investigate thoroughly.
So, is this possible using data-driven subscriptions? The scenario is:
1. User enters the parameter used for the query, as well as the email addresses.
2. Report is generated and displayed on screen.
3. Report is emailed to the addresses specified by the user.
SQL Server 2000 SP4 publishing to multiple SQL Server 2005 Mobile Edition subscribers on PDAs. My database on SQL 2000 is published with a single dynamic row filter using HOST_NAME() on my 'parent' table, plus join filters from the parent to the child tables. The row filter joins to other, unpublished tables to evaluate what data is allowed through.
E.g., the published parent table contains supplier names, etc., while the child table holds the suppliers' products. The filter looks up the host names linked to suppliers in an unpublished table elsewhere.
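For context, a sketch of the shape of that row filter; the mapping table and column names here are hypothetical:

-- subset filter clause on the published Suppliers article;
-- HOST_NAME() is evaluated per subscriber at merge time
SupplierID IN (SELECT SupplierID
               FROM dbo.HostSupplierMap      -- unpublished mapping table
               WHERE HostName = HOST_NAME())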
The first initial sync with the snapshot is correct and as I expected: the PDA receives only the data from the parent (and thus the child tables) that matches the row filter for the host name provided.
However, in my scenario the host_name <--> supplier mapping may later be updated, e.g., more suppliers assigned to a PDA, or vice versa. But when I merge the mobile DB, the new data is not downloaded. I tried re-running the snapshot, etc.; no change.
Question: I thought the filters would remain dynamic and be applied on each sync?
If I run a 'harmless' update on the parent table using T-SQL, e.g. "update table set X = X", and re-sync, the new parent records are now downloaded, but the child records are not!
Question: if the parent records are supplied, why not the child records?
If I delete the existing DB and sync fresh, I get the updated snapshot and all is well, until more data is added back at the server...
Any help would be greatly appreciated. Is it possible (or not) to have dynamic filters evaluated during a second or subsequent merge?
I have tried building an inline TVF, as I assume this is how it would be used in the DB; however, I am receiving the following error in my code. I must be missing a step somewhere, as I've never done this before. I'm lost on how to implement this CLR function in my DB.
Error: Msg 156, Level 15, State 1, Procedure clrDynamicPivot, Line 18
Incorrect syntax near the keyword 'external'.

CREATE FUNCTION clrDynamicPivot
(
    -- Add the parameters for the function here
    @query nvarchar(4000),
    @pivotColumn nvarchar(4000),
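For comparison, the general shape of a CLR table-valued function declaration; unlike an inline T-SQL TVF, it has no AS RETURN (SELECT ...) body, and the return table schema must be fixed up front. The assembly, namespace, class, and return columns below are hypothetical:

CREATE FUNCTION dbo.clrDynamicPivot
(
    @query nvarchar(4000),
    @pivotColumn nvarchar(4000)
)
RETURNS TABLE (PivotKey nvarchar(4000), PivotValue nvarchar(4000))
AS EXTERNAL NAME MyAssembly.[MyNamespace.UserDefinedFunctions].clrDynamicPivot;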
CREATE PROCEDURE dbo.spinb_CheckYN
    @FnNameYN varchar(50),
    @InvLineID int,
    @FnBit bit output
AS
declare @SQL varchar(8000)

set @SQL = '
if dbo.' + @FnNameYN + ' (' + convert(varchar(31), @InvLineID) + ') = 1
    set @FnBit = 1
else
    set @FnBit = 0'

exec (@SQL)
GO
Obviously, @FnBit is not declared inside @SQL, so that execution will not work:

Server: Msg 137, Level 15, State 1, Line 4
Must declare the variable '@FnBit'.
Server: Msg 137, Level 15, State 1, Line 5
Must declare the variable '@FnBit'.
So, is there a way to get a value out of a piece of dynamic SQL and into my OUTPUT variable?
My many thanks to anyone who can solve this riddle for me. Thank you!
Sigh: for now, it looks like I'll have a huge string of IF statements, one per business-rule function, as follows. Hopefully a better solution comes to light.
------ Vertical Build1 - Std Vanes -----------
if @FnNameYN = 'fnb_YN_B1_14'
BEGIN
    if dbo.fnb_YN_B1_14(convert(varchar(31), @InvLineID)) = 1
        set @FnBit = 1
    else
        set @FnBit = 0
END

------ Vertical Build1 - Scissor Vanes -----------
if @FnNameYN = 'fnb_YN_B1_15'
BEGIN
    if dbo.fnb_YN_B1_15(convert(varchar(31), @InvLineID)) = 1
        set @FnBit = 1
    else
        set @FnBit = 0
END
. . . etc.
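For the record, a minimal sketch of the sp_executesql pattern that can pass a value back out of dynamic SQL through an OUTPUT parameter (assuming the business-rule functions accept an int, as above):

declare @SQL nvarchar(4000)
set @SQL = N'set @FnBitOut = case when dbo.' + @FnNameYN
         + N'(@InvLineID) = 1 then 1 else 0 end'
exec sp_executesql @SQL,
     N'@InvLineID int, @FnBitOut bit OUTPUT',
     @InvLineID = @InvLineID,
     @FnBitOut  = @FnBit OUTPUT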
I've looked up Dynamic Cursors and Dynamic SQL Statements in Books Online.
Using the examples given in Books Online returns compilation errors; see below.
Does anyone know how to use Dynamic Cursor/ Dynamic SQL Statement?
James
-- SQL ---------------
EXEC SQL BEGIN DECLARE SECTION;
char szCommand[] = "SELECT au_fname FROM authors WHERE au_lname = ?";
char szLastName[] = "White";
char szFirstName[30];
EXEC SQL END DECLARE SECTION;

EXEC SQL DECLARE author_cursor CURSOR FOR select_statement;

EXEC SQL PREPARE select_statement FROM :szCommand;

EXEC SQL OPEN author_cursor USING :szLastName;
EXEC SQL FETCH author_cursor INTO :szFirstName;
--Error--------------------
Server: Msg 170, Level 15, State 1, Line 23
Line 23: Incorrect syntax near ';'.
Server: Msg 1038, Level 15, State 1, Line 24
Cannot use empty object or column names. Use a single space if necessary.
Server: Msg 1038, Level 15, State 1, Line 25
Cannot use empty object or column names. Use a single space if necessary.
Server: Msg 170, Level 15, State 1, Line 27
Line 27: Incorrect syntax near ';'.
Server: Msg 170, Level 15, State 1, Line 30
Line 30: Incorrect syntax near 'select_statement'.
Server: Msg 170, Level 15, State 1, Line 33
Line 33: Incorrect syntax near 'select_statement'.
Server: Msg 102, Level 15, State 1, Line 35
Incorrect syntax near 'author_cursor'.
Server: Msg 170, Level 15, State 1, Line 36
Line 36: Incorrect syntax near ':'.
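Those examples are Embedded SQL for C (ESQL), which Query Analyzer cannot execute. A rough T-SQL equivalent of the same idea, sketched with sp_executesql declaring the cursor dynamically; it relies on the cursor being global in scope, which is the default unless the database has CURSOR_DEFAULT LOCAL set:

DECLARE @szCommand nvarchar(400), @szLastName varchar(40), @szFirstName varchar(30);
SET @szLastName = 'White';
SET @szCommand = N'DECLARE author_cursor CURSOR FOR
                   SELECT au_fname FROM authors WHERE au_lname = @lname';
EXEC sp_executesql @szCommand, N'@lname varchar(40)', @lname = @szLastName;
OPEN author_cursor;
FETCH NEXT FROM author_cursor INTO :szFirstName;  -- in T-SQL: INTO @szFirstName
FETCH NEXT FROM author_cursor INTO @szFirstName;
SELECT @szFirstName;
CLOSE author_cursor;
DEALLOCATE author_cursor;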
I have a requirement that I have partly accomplished, but I could not get all the way through.
I have a file that comes in a standard format ending with a date and a sequence number.
Suppose the file name is abc_yyyymmdd_01 for the first copy; if it is copied more than once, the sequence number changes to 02, 03, and so on.
I then need to transform it into a new comma-delimited destination file named abc_yyyymmdd.txt, plus a record-count file named abc_count_yyyymmdd.txt, and move them to a designated folder; the source file is then moved to an archive folder.
The approach I have taken is:
script task (select source file) --------------------> data flow task ------------------------------------------> script task (to destination file)
data flow task -------------------------> does the count and the copy in delimited format
What is happening is that I can take a regular source file, convert it to a delimited destination file, and move it to the destination folder with the script task,
but I cannot make the dynamic pick of the source file work.
Please advise with your comments or any solution you have.
I am trying to create an SSIS package with a dynamic CSV file as output, where the output contains query results.
Sample file name:
unique identifier + query output + system date
The expression looks like this:
@[User::FilePath] + @[User::FileName] + ".CSV"
@[User::FilePath] is a variable in the SSIS package; the file name is the output from a SQL query, and using a script task I have assigned that value to @[User::FileName].
When I debug the script task, the value is set properly, but when I use the same variable for the flat file destination, it does not work.
I have created a dynamic SQL program that returns a range of columns (1 to 12) based on the date range the user selects. Each dynamic column is month-based; however, the date range may overlap from one year to the next, so the beginning month for one selection may be October 2005, while another may begin with January 2007.
Basically, the dynamic SQL is a derived pivot table. The problem I need to resolve is how to use this dynamic result set in a report. Please keep in mind that the column names change based on the date range selected.
I have come to understand that a dynamic anything is a moving target!
Thanks in advance.

What is the maximum SQL Server database (*.mdf) file size with SQL Server 2000 as part of Microsoft Small Business Server 2000? (Database files were limited to 10 GB in SBS 4.5 with SQL Server 7.0; has this changed?)
I am very new to MS SQL administration. Can anybody help me with how to work with Microsoft SQL Server 2000 Desktop Engine (MSDE 2000) Release A, just for practice?
The activities I am going to work through on MSDE are below:
How to install SQL (on XP)
What the layout will be (i.e., if I install MSDE, which applications will be present and how they depend on each other)
How to create/delete tables, and how to do it via GUI or CUI
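MSDE has no graphical tools of its own, so the usual route is the osql command-line utility that installs with it. A minimal sketch, assuming a local named instance called MSDE2000 (adjust -S to your install):

osql -S .\MSDE2000 -E
1> CREATE TABLE dbo.Practice (Id int PRIMARY KEY, Name varchar(50));
2> GO
1> DROP TABLE dbo.Practice;
2> GO
1> EXIT

For a GUI, many people pair MSDE with Microsoft's free Web Data Administrator, or point a full SQL Server's Enterprise Manager at the MSDE instance.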