SQL 2012 :: Use One Of Shared Drives For Configuring MSDTC?
Jun 7, 2015
We are going to install a SQL Server 2012 Enterprise Edition two-node (Active/Passive) cluster with only one instance. The issue is that separate shared storage has not been provisioned for MSDTC.
1. Is it mandatory to configure MSDTC for a single SQL 2012 instance?
2. Can we use one of the shared drives (Data/log/bkp/temp) for configuring MSDTC?
I have tried doing SQL Server backups to a file share, and that has been taking too long, so I've decided to back up locally and then move those backups off the server afterwards. For those of you doing this: what do you use to get your backups off the server?
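One common pattern is to let BACKUP write to a local disk and then copy the finished file to the share from a second job step. A minimal sketch, with hypothetical database name and paths (COMPRESSION needs SQL 2008 or later):

BACKUP DATABASE [MyDatabase]
TO DISK = N'D:\Backups\MyDatabase.bak'
WITH COMPRESSION, CHECKSUM, INIT;

-- Then move the finished file off the server, e.g. from a CmdExec job step:
-- ROBOCOPY D:\Backups \\fileserver\sqlbackups MyDatabase.bak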
Is it a best practice to disable "Allow files on this drive to have contents indexed" on NTFS drives used by SQL for its data, log, tempdb, etc?
From what I've read, it seems to be a best practice for FILESTREAM objects and flash storage drives. We don't currently use FILESTREAM objects or have flash drives.
Are there any benefits or drawbacks to disabling this feature on an NTFS drive connected to SAN LUNs under mount points?
Does MSDTC auto-install with the plain vanilla version of SQL Enterprise, or do I have to install it later? Do you know of any links that specifically reference SQL 2012 stand-alone server installs of MSDTC?
We're trying to get SQL to import files via OPENDATASOURCE (ORS). We got it working on local drives, but not over networked drives. This is the code and the error:
select * into sample.dbo.[eriktest] from OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0', 'Data Source=\\server\Sample\test.xlsx; Extended Properties="Excel 12.0 XML;HDR=YES;IMEX=1"')...[Sheet1$]

Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7303, Level 16, State 1, Line 1
Cannot initialize the data source object of OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)".
But going from the local C: or D: drives is fine. I also tried using xp_cmdshell (OPC) to map a drive, and then tried using ORS with the drive letter, and got the same error.
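For what it's worth, the usual suspect here is permissions rather than syntax: the share is opened by the SQL Server service account, not by your login, and drive letters mapped in your session are invisible to the service. A sketch under that assumption, with hypothetical server and share names, using a UNC path directly:

SELECT *
INTO sample.dbo.[eriktest]
FROM OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0',
    'Data Source=\\server\Sample\test.xlsx;Extended Properties="Excel 12.0 XML;HDR=YES;IMEX=1"')...[Sheet1$];
-- Fails with Msg 7303 if the SQL Server service account cannot read \\server\Sample.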
I have configured the memory for SQL Server as below:
min: 512 MB, max: 13500 MB
Total RAM on the server is 16 GB.
I want to receive an email from SQL Server informing me that it is running low on memory (less than 200 MB free), so that I do not hit out-of-memory issues and SQL Server continues to run without failing.
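A minimal sketch of such an alert, assuming Database Mail is configured and the check is scheduled from a SQL Agent job; the profile name, recipient, and 200 MB threshold are placeholders:

IF EXISTS (SELECT 1
           FROM sys.dm_os_sys_memory
           WHERE available_physical_memory_kb < 200 * 1024)  -- under 200 MB free
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA',
        @recipients   = 'dba@example.com',
        @subject      = 'Low memory warning',
        @body         = 'The server has less than 200 MB of free physical memory.';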
Apparently this error was fixed in CU12 for SQL 2008, but it seems to have reared its head again in SQL 2012: [SSIS.Pipeline] Warning: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console.
I've got a client who is seeing it, but I've not seen a fix in CU1 or CU2 for 2012.
I need to add an existing shared folder to a SQL FileTable. So this is the path, I've created the SQL FileTable, and now I need to add the folder to it.
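Note that a FileTable cannot simply point at an arbitrary existing folder; its directory lives under the database's FILESTREAM share, so the usual route is to create the FileTable and then copy the existing files into its UNC path. A minimal sketch, assuming a FILESTREAM filegroup already exists; the table and directory names are hypothetical:

CREATE TABLE dbo.Documents AS FileTable
WITH (
    FileTable_Directory = 'Documents',
    FileTable_Collate_Filename = database_default
);
-- Then copy the shared folder's contents into the FileTable share, e.g.
-- \\<server>\<instance-share>\<database-directory>\Documents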
Any fix for the seemingly random sort order of the variables in the dropdown list when configuring parameters and connection managers in the SSISDB catalog?
I imported all of our connection strings into an environment (about 200 of them). They were inserted in alphabetical order, and the ID values within the internal.environment_variables table show them in order as well, by ID and by name. When I run Profiler and capture the command that retrieves them and run it in SSMS, they are in order, but in the dropdown they seem random.
There are no values within any of the tables that account for the order they are in.
If a package has 5 connections you need to go through the unsorted list 5 times to find them.
Sometimes you get lucky and they are in the first 20 or so.
I know I can write a script, just wondering if there is a fix for the sorting.
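In the meantime, a sketch of that script: pull the variables straight from the catalog in name order (the join just labels each variable with its environment):

SELECT e.name AS environment_name, ev.name, ev.type, ev.value
FROM SSISDB.catalog.environment_variables AS ev
JOIN SSISDB.catalog.environments AS e
    ON ev.environment_id = e.environment_id
ORDER BY e.name, ev.name;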
More often than not, I don't touch DTC on clusters anymore; however, I'm now on a project where the vendor states that it's required. So, a couple of things here.
1) Do you really need DTC per instance, or one for all? 2) Should DTC be in its own resource group or within the instance's group? 2a) If in its own resource group, how do you tie an instance to an outside resource group? tmMappingSet, right?
I was wondering if there is a best-practice set of minimum permissions for a SQL login used when setting up a new shared data source for SSRS Report Manager.
Something along the lines of making it a data reader for the DB, plus permissions to update tempdb?
I would have thought it not advisable for the login to be able to update the main DB...
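A minimal sketch of such a least-privilege login; the names are hypothetical. Read-only membership is usually enough for reporting, and no explicit tempdb grant is needed because every login can already create temp objects there:

USE [master];
CREATE LOGIN [ssrs_reader] WITH PASSWORD = 'use-a-strong-password-here';

USE [ReportingDB];
CREATE USER [ssrs_reader] FOR LOGIN [ssrs_reader];
ALTER ROLE db_datareader ADD MEMBER [ssrs_reader];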
So I started a new job recently and have noticed a few strange configurations. Typically I would never mess with the min memory per query or index create memory options, because I just haven't seen any need to; my typical thought is that if it isn't broke... Yet they have been modified on every single server in my environment.
From Books Online:
• This option is an advanced option and should be changed only by an experienced database administrator or certified SQL Server technician.
• The index create memory option is self-configuring and usually works without requiring adjustment. However, if you experience difficulties creating indexes, consider increasing the value of this option from its run value.
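A quick way to audit what was changed is to compare the configured and run values against the defaults (index create memory = 0, i.e. self-configuring; min memory per query = 1024 KB):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'index create memory (KB)';
EXEC sp_configure 'min memory per query (KB)';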
I am trying to run a program called NVivo7, which - when I install - also requires installation of SQL Server. Because my C drive is small (and nearly full), I am trying to run NVivo7 off my D drive, though SQL seems to install on C. Is it possible, do you think, to use 2 different drives in this way, or do both the program and the Server need to be on the same one? If so, is there any way to get them both on D?
So I'm not really new, but I've got a question. I've recently been looking into the Western Digital Raptor drives.
As far as performance goes, it's always been my understanding that the speed of the hard drive is just about always the bottleneck of a computer. I'm currently running 2 striped WD 500 GB SATA drives for my SQL Server (dual Xeon 2.8 with 2 GB memory).
I'm thinking of upgrading to 4 WD Raptor (10k RPM) drives, the new 150 GB models. Anyone have an opinion? Do you think I'll get a large performance increase?
The database that I run queries off of now is about 125 million names, about 80 fields wide. So it's rather large, and it usually takes a fair amount of time to get my results back (we're talking anywhere from a minute to half the day).
Do you think the Raptors will slim that down significantly?
We recently installed SQL Server 2005. The server has 3 drives. When I try to restore a database, I can only access the C: drive. How do I make the D: and E: drives visible in the "locate folder" window?
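The dialog only lists drives as the SQL Server service sees them, so one workaround is to check what the instance can see and restore by script with an explicit path; the file and logical names below are hypothetical:

EXEC master..xp_fixeddrives;  -- drives visible to the instance, with free MB

RESTORE DATABASE [MyDatabase]
FROM DISK = N'D:\Backups\MyDatabase.bak'
WITH MOVE 'MyDatabase_Data' TO N'D:\Data\MyDatabase.mdf',
     MOVE 'MyDatabase_Log'  TO N'E:\Logs\MyDatabase_log.ldf';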
Can someone help me? I installed SQL 2005 Enterprise Edition on Windows clustered servers. After the installation I wanted to change the path of my DB logs, but the problem is that I cannot see the other drives; I can see only the drive where the DB is located. Are there any special configurations that need to be done?
Ideally we'd like to configure our SQL cluster with the databases on one drive and the logs on another. Is this feasible in a cluster solution? Will it basically just be 2 drives that are failed over vs. 1?
Hi, we have 4 SQL 7 servers, all on the same network:
serverA: drives C, D
serverB: drives C, E
serverC: drives C, D
serverD: drives C, D, F
Now my question: I want to map all the drives to each other, so that I can use the space wherever it is available, because serverA has no free space but serverC has a lot. Can anyone please tell me in detail how we have to map the drives? Thanks
Hi, I'm looking for a way to check the free space left on the hard drives and then, if needed, send an alert to notify us when we need to free up some space. I played around with Performance Monitor and realized I could do it that way, but I think you would have to leave Performance Monitor running all the time, and I'm not sure I want to do that. I also read about the xp_fixeddrives proc that displays how much free space is available, but I don't know what to do from there. Does anyone have any recommendations for the best way to do this?
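A sketch of one way to build on xp_fixeddrives, run from a scheduled job instead of leaving Performance Monitor open. It assumes Database Mail (SQL 2005 and later; older versions would use xp_sendmail), and the threshold and profile name are placeholders:

CREATE TABLE #drives (drive CHAR(1), free_mb INT);
INSERT INTO #drives EXEC master..xp_fixeddrives;

IF EXISTS (SELECT 1 FROM #drives WHERE free_mb < 1024)  -- under 1 GB free
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA',
        @recipients   = 'dba@example.com',
        @subject      = 'Low disk space',
        @body         = 'At least one drive has less than 1 GB free.';

DROP TABLE #drives;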
I mapped a drive on my SQL Server box. It points to another server in the same domain. When I try to back up or restore a database, I can't see this mapped drive through SQL Server; even if I type the entire path, SQL Server won't take it. I don't have a clue why it is not working. Can anyone throw some light on this? Your help is greatly appreciated.
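Mapped letters belong to the interactive logon session, so the SQL Server service never sees them. The usual workaround is to skip the letter and use the UNC path directly, after granting the service account rights on the share; the names here are hypothetical:

BACKUP DATABASE [MyDatabase]
TO DISK = N'\\otherserver\sqlbackups\MyDatabase.bak'
WITH INIT;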
Can someone tell me if it is possible to see the drives on the server using Performance Monitor? If so, where are they hiding? I've struggled the whole day!
I have a SQL 2000 server installed on a dual Xeon server running Win2K. The server has two RAID 5 arrays, a C drive and an E drive. The C drive is currently where the operating system files are stored, as well as the SQL program files. As things stand, there are SQL DBs and transaction logs strewn between these two drives with no particular logic. My question is: with the two drives as they stand, how should I move things around to gain the best performance? For example, should I keep all my data on the E drive and all my transaction logs on the C drive with the OS and the program files?
There are about 10 databases in use. One database runs the configuration for proprietary predictive-dialing software; the other databases hold calling information for each campaign we run within the dialing software.
I have enough space on both drives to accommodate the data; it's performance I would like to see a difference in.
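If you do decide to separate them, a sketch of the move on SQL Server 2000, where detach/attach is the usual route for relocating files (the paths and names are hypothetical):

EXEC sp_detach_db 'MyDatabase';
-- ...move the .ldf to the other drive at the OS level...
EXEC sp_attach_db 'MyDatabase',
    'E:\Data\MyDatabase.mdf',
    'C:\Logs\MyDatabase_log.ldf';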
Problem: I have the following drives on this server. Is there a query that I can run daily which will show us the % full and remaining storage capacity on these drives?
Logs (L:), Data (M:), TempDB (T:)
I would like to run that query on all of my servers to get a daily report.
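xp_fixeddrives only returns free megabytes, so it cannot compute "% full" on its own. On SQL Server 2008 R2 and later, sys.dm_os_volume_stats reports totals as well, for every volume hosting a database file:

SELECT DISTINCT
    vs.volume_mount_point,
    vs.total_bytes / 1048576 AS total_mb,
    vs.available_bytes / 1048576 AS free_mb,
    CAST(100.0 * (vs.total_bytes - vs.available_bytes)
         / vs.total_bytes AS DECIMAL(5, 2)) AS pct_full
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;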
I have a SAN and am configuring a cluster on SQL 2005. I initially created a Quorum drive when setting up the cluster, and have now added 4 more drives to the physical node, but when I try to install SQL the new drives cannot be located.
Do we need to create all the drives when installing the cluster, or is there a way to add drives later on?
I have a 75 GB hard drive and a 300 GB one. I want to mirror the 75 to the 300 and use the extra space as data storage. Is this possible if I partition the 300 and then mirror the hard drives?
Trying to locate information on MSDTC. Is this "needed" to run SQL Server? That is, if this part of the installation is deleted, will SQL Server still function? Also, does anyone know if this is a crucial tool needed by Veritas Volume Manager or Windows Disk Manager?
If anyone knows of a link to this information, I'd appreciate it. My searches come up with lots of information on MSDTC, but nothing that answers my specific questions.
When I do a BEGIN TRAN to a SQL Server sitting on Windows 2003 Server from a SQL Server sitting on Windows 2000 Server, the transaction hangs, and if I try to kill it, the transaction is in ROLLBACK state.
I tried setting the properties for MSDTC and restarted the Windows 2003 Server, but in vain.
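For context, the pattern that hangs is any cross-server change inside a transaction, because it gets promoted to a distributed (MSDTC) transaction, and both machines must allow network DTC access (inbound and outbound). A sketch with a hypothetical linked server name:

BEGIN DISTRIBUTED TRANSACTION;
    UPDATE Win2003Srv.MyDatabase.dbo.MyTable
    SET some_col = 1
    WHERE id = 42;
COMMIT TRANSACTION;
-- Hangs at the UPDATE or the COMMIT if either side's MSDTC security
-- settings block network transactions.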