SQL Server Admin 2014 :: Running Multiple Instances On 2 Nodes
Apr 12, 2015
I'm looking at installing 2008R2 and 2014 side by side, then using Mirroring to provide HA for the 2008R2 instance and AoHA for the 2014 instance. I'd be using the same two physical servers for both the Mirroring pair and the AoHA pair.
I am playing around in a test environment with SQL Server 2014. I have a question about the default location of the report server databases when you have multiple report server instances installed on one server.
I did a very simple install of SQL Server 2014 with the database and Reporting Services in Native Mode (install only) features selected. Accepting the default locations, I ended up with the following locations as you would expect:
Running the Reporting Services Configuration Manager, I created the Report Server database. After creating it, the related files were located in the SQL folder, as I would expect.
Next, I installed another instance of SQL Server 2014, which I called Test, like I did above. I now have the following folder structure for the Test instance, as I expect.
We are consolidating some old SQL Server environments from 'OLD' to 'NEW', and one of our vendors is objecting to the collation we use on our 'NEW' SQL server.
Our old server (SQL 2005) contains databases with collation SQL_Latin1_General_CP1_CI_AS
Our new server (2014) has the standard collation Latin1_General_CI_AS
Both collations are CI (case-insensitive) and AS (accent-sensitive).
From experience I know that databases with different collations can reside next to each other on the same instance.
The only problem could be ('could be'!!) the use of TempDB with a high volume of transactions to be executed in TempDB while choosing Snapshot Isolation Level...
The application the databases belong to is very static, hardly updated, and queried only several times per hour (so no TempDB issue, I guess).
Are there any issues with running databases that use different collations on the same instance?
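For reference, a minimal sketch for checking the server collation against a database's collation; the database name OldAppDB is a placeholder:

SELECT SERVERPROPERTY('Collation') AS ServerCollation;

SELECT name, collation_name
FROM sys.databases
WHERE name = N'OldAppDB';  -- placeholder database name

-- where cross-collation comparisons do bite (e.g. joins against TempDB objects),
-- they can be forced explicitly:
-- WHERE t.col = #tmp.col COLLATE SQL_Latin1_General_CP1_CI_AS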
- MS Server 2012 R2.
- SQL 2014 EE.
- All Windows updates.
- Clean install of both OS and SQL; all 3 nodes are identical.
- SQL Server is running on an alternate port, which I've opened in the firewall. Connections from all network locations are working swimmingly, including connections between all 3 nodes.
- I've got the groups up and running; the listener is set up correctly. Connections work great.
- One node is synchronous, one is asynchronous. They show synchronized and synchronizing respectively.
- Data added at the primary node is moved across to all 3 with lightning speed.
When I attempt a manual failover it hangs and hangs, then pops up error 41131 and rolls back the failover, leaving the cluster perfectly intact and working just as it did prior to the failover attempt. What I've checked so far:
- There is absolutely NOTHING in the cluster events log.
- The Windows event log shows no errors, just the standard stuff of the primary node's state changing from primary to resolving and then back again.
- The SQL event log has a few things in there, but nothing that's leading me to a solution. I've attached the log from start to finish of an attempted manual failover:
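One commonly documented cause of error 41131 on a manual failover is the [NT AUTHORITY\SYSTEM] login on the failover target lacking the permissions the AlwaysOn/WSFC interaction needs. A hedged sketch worth checking; the AG name MyAG is a placeholder:

-- on the target replica
GRANT CONNECT SQL TO [NT AUTHORITY\SYSTEM];
GRANT VIEW SERVER STATE TO [NT AUTHORITY\SYSTEM];
GRANT ALTER ANY AVAILABILITY GROUP TO [NT AUTHORITY\SYSTEM];

-- then retry the manual failover from the secondary
ALTER AVAILABILITY GROUP [MyAG] FAILOVER;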
I am tasked with correcting the default settings of 600 SQL Server instances. I would like to use PowerShell, if possible, to loop through each named instance (I have a list) and apply the following settings. I also need to query each server for the installed memory and apply 80% of it as max memory, and look at the vCPU count and set MAXDOP to that number.
USE master
GO
EXEC sp_configure 'recovery interval (min)', '0';
RECONFIGURE WITH OVERRIDE;
EXEC sp_configure 'allow updates', '0';
RECONFIGURE WITH OVERRIDE;
EXEC sp_configure 'user connections', '0';
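The memory and MAXDOP values can be derived on each instance in T-SQL before being applied. A minimal sketch, assuming SQL 2012 or later (where sys.dm_os_sys_info exposes physical_memory_kb; older builds use physical_memory_in_bytes):

DECLARE @maxmem int, @dop int;

-- 80% of installed memory in MB; cpu_count = visible logical CPUs
SELECT @maxmem = (physical_memory_kb / 1024) * 80 / 100,
       @dop    = cpu_count
FROM sys.dm_os_sys_info;

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;
EXEC sp_configure 'max server memory (MB)', @maxmem;
EXEC sp_configure 'max degree of parallelism', @dop;
RECONFIGURE WITH OVERRIDE;

From PowerShell, the same script could then be pushed to each instance in the list with Invoke-Sqlcmd.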
We have a SQL 2014 active/passive cluster with 5 instances. When the cluster was installed, one of the nodes was a virtual machine. As we started to have problems, and this is not a Microsoft-supported configuration, we decided to replace the virtual node with a physical one. On 3 of these 5 instances we configured static ports (1433 TCP); this was required for applications and the firewall implementation.
Now, with the physical node joined to the cluster, we have issues with these three instances. Whenever one of these instances is moved to the passive node, the server authentication is changed from mixed to Windows only. I'm no SQL expert at all, but to me it looks like the configuration of the instances is not replicated to the passive node? I found some similar problems on the net, but these are mostly under SQL 2008.
I have found no real documentation on this, so any links would be greatly appreciated.
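Server authentication mode is a per-node registry setting, not something the cluster replicates, which matches what you are seeing. A hedged sketch for checking and fixing it on the passive node using the instance-aware registry procedures (a service restart is needed for a change to take effect):

-- read the current mode: 1 = Windows only, 2 = mixed
DECLARE @mode int;
EXEC master..xp_instance_regread
    @rootkey    = N'HKEY_LOCAL_MACHINE',
    @key        = N'Software\Microsoft\MSSQLServer\MSSQLServer',
    @value_name = N'LoginMode',
    @value      = @mode OUTPUT;
SELECT @mode AS LoginMode;

-- set mixed mode
EXEC master..xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'LoginMode', N'REG_DWORD', 2;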
We have an existing 2 node active active cluster.
A default and a named instance of SS2005 Enterprise Edition are installed on the cluster.
Recently we acquired new hardware, 2 servers, which we added to the cluster so we now have a 4 node cluster.
The 2 new nodes have OS etc installed.
Now, I have installed SQL on a new multi-node cluster, and I have upgraded SQL Server in an existing cluster, BUT I have never installed SQL on an existing cluster for new nodes for multiple instances.
I presume you simply run the install on one of the new nodes for the default instance, selecting both new nodes to install on, and then do this again for the named instance.
But in the past I have never been prompted for which nodes to install SQL on; it simply does all the nodes, BUT I do not want this.
And how will the cluster know where the binary files are for each instance on the new nodes?
In a nutshell, how do I install SQL Server 2005 for 2 new nodes in an existing 4-node cluster, for multiple instances?
We carried out an in-place upgrade on our production server on Saturday - going from 2008 R2 to 2014.
We had tested this method out in dev/test and pre-production, with only minor post-upgrade issues to fix.
However, on production we had an issue whereby checkdb was hitting 100% CPU and caused overnight processes to hang. The checkdb statement was terminated and disabled by a colleague at 1 am.
Since then we have restored this database to a dev server and run checkdb against it with no_infomsgs and all_errormsgs, but it has been running since Monday morning and still hasn't finished!
The database is just over 800 GB, and whilst checkdb is crippling the CPU, logical reads are less than one. However, sp_whoisactive shows that it has done 56 million reads so far, and this number increases periodically, so it looks like it keeps going back to re-check the database with a deep dive.
Also, on a different environment, we ran check table statements and one of them took over 9 hours for a single table but came back clean (see attachment).
We need to wait for the output but the database is still in use in production and the mess will just get worse if it is indeed corrupted.
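If the immediate question is whether there is physical corruption, a hedged interim option while the full check grinds on is a reduced check; the database name ProdDB is a placeholder, and this skips the expensive logical checks rather than replacing a full CHECKDB:

-- page and checksum-level checks only
DBCC CHECKDB (N'ProdDB') WITH PHYSICAL_ONLY, NO_INFOMSGS, ALL_ERRORMSGS;

-- on SQL 2014 SP2 and later, the parallelism can also be capped:
-- DBCC CHECKDB (N'ProdDB') WITH MAXDOP = 4;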
I recently installed standalone version of SQL 2014 Standard on my work computer. I used Access before but I want to use a SQL server instead.
We have a shared drive where a file gets deposited every day at midnight. I want to be able to get this file and import it to the server (it's basically a list of names).
Here's what I have done so far:
I created the database
Created the table and successfully imported data into it using the Import Data feature.
I saved the SSIS package
Scheduled an Agent job for this package to run at a certain time, daily.
At first the jobs would fail with an "Access is denied" error. I added a user under Credentials with my network account (I have admin rights on the work computer), and also added a proxy for the credential user I made.
Jobs fail with a “Cannot open data file” error. I tried changing things here and there, but I can’t get it to work.
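For reference, this is the usual shape of the credential/proxy wiring in T-SQL, as a sketch; the names, account, and job are placeholders, and the key details are that the proxy must be granted to the SSIS subsystem and attached to the job step:

USE msdb;
GO
-- credential holding the Windows account that can read the share
CREATE CREDENTIAL ImportCred
    WITH IDENTITY = N'DOMAIN\MyAccount', SECRET = N'password-here';  -- placeholders
GO
EXEC dbo.sp_add_proxy
    @proxy_name = N'ImportProxy', @credential_name = N'ImportCred';
-- subsystem 11 = SSIS package execution
EXEC dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'ImportProxy', @subsystem_id = 11;
-- run the job step under the proxy
EXEC dbo.sp_update_jobstep
    @job_name = N'DailyImport', @step_id = 1,   -- placeholders
    @proxy_name = N'ImportProxy';

"Cannot open data file" often means the step still runs under an account without rights to the UNC path, or that the package references a mapped drive letter that does not exist for the service account; using the full \\server\share\file path is the safer bet.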
I have a SQL Server 2008 instance that is running on "LiveServer" our production database (ProdDB) - and we need to upgrade to 2014. In order to do some upgrade testing, I spun up a VM with the same version of SQL server on the test VM (TestServer), did a backup of the production DB from the live server, and restored it to TestServer under a different name (ProdDBUA).
I then installed SQL2014 Upgrade advisor onto TestServer, and ran it, checking all the boxes (reporting services etc..) and it all came back clean - no issues whatsoever - not a single warning even. I'm under the impression that stored procs/functions etc... all reside within the DB, so a backup will include those. Is that correct?
The problem is, I know I have stored Procs, functions and views that use deprecated joins in that LiveServer.ProdDB. What do I need to do/configure/check in order to make sure that the Upgrade Advisor is actually checking through all that T-SQL that has deprecated code? I want to have a list to give to my report writers of procs/functions/views that need to be rewritten prior to the upgrade going live.
If there is a modification that needs to be run on TestServer.ProdDBUA (a script to change paths, etc.), I can do that. The DB is running in compatibility mode 90.
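One pragmatic cross-check, as a sketch: legacy outer joins (*= and =*) can be hunted down directly in the module definitions, independent of what Upgrade Advisor reports. Run against ProdDBUA:

-- modules whose definitions contain old-style outer-join syntax
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS module_name,
       o.type_desc
FROM sys.sql_modules AS m
JOIN sys.objects     AS o ON o.object_id = m.object_id
WHERE m.definition LIKE '%*=%'
   OR m.definition LIKE '%=*%';

Expect a few false positives (the pattern also matches comments), but it gives your report writers a concrete worklist. Note that *= joins do not even parse at compatibility level 90; they were only legal at 80 or lower, which may be part of why the advisor run came back clean.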
I have a website running under Windows Server 2003 to which I am trying to deploy Reporting Services, using a custom security extension with Forms authentication. There is also another website, which is just a Report Server, running on this IIS 6.
On my development machine (XP with IIS 5.1), the website with Reporting Services using Forms authentication works fine, shares cookies for authentication and all that jazz. But trying to configure a second instance of Reporting Services on Server 2003 is not behaving.
Is there some trickery or details involved with running two instances of Reporting Services?
Also, the Configure Report Server tool is failing on the last step of the Database Setup, so the server is not initializing:
Setting Connection Info for Reporting Server
ReportServicesConfigUI.WMIProvider.WMIProviderException: An error occurred when attempting to connect to the report server remote procedure call (RPC) end point. Verify that the Report Server Windows service is running, and then retry the operation.
---> System.Runtime.InteropServices.COMException (0x800706B3): The RPC server is not listening. (Exception from HRESULT: 0x800706B3)
   --- End of inner exception stack trace ---
   at ReportServicesConfigUI.WMIProvider.RSReportServerAdmin.ThrowOnError(ManagementBaseObject mo)
   at ReportServicesConfigUI.WMIProvider.RSReportServerAdmin.ListReportServersInDatabase(RSReportServerInfo[]& serverInfos)
My client has a server running Windows Server 2003 for Small Business Server, and it has SQL Server Express 2005 installed (I believe it is installed 3 times, as part of the SharePoint, Veritas Backup Exec and ACT! applications).
I am trying to implement an application that uses SQL Server 2000, but the Server 2003 SBS does not support that version. So I need to launch an additional instance of SQL Server Exp 2005.
To do so, do I need to download and install the SQL Server Express 2005 software again? Or can I just run sqlservr.exe from an existing instance, but with a different instance name (sqlservr.exe -s<InstanceName>)?
The reason I need another instance is that these existing instances limit DDL statements.
I am new to this and compare it to launching MS Word multiple times - you don't need to reinstall Word to run it multiple times.
Is launching from the command line a bad idea? And if it is, what is the better way to ensure that it launches under the local system context as a service?
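As far as I know, a new instance has to be created by running Setup again with a new instance name; sqlservr.exe -s only starts instances that Setup has already registered (service entries, registry keys, system databases). As a sketch, the SQL Server 2005 Express installer can do this unattended; the instance name and password below are placeholders:

SQLEXPR.EXE /qb ADDLOCAL=SQL_Engine INSTANCENAME=NEWINSTANCE SECURITYMODE=SQL SAPWD=StrongPassword1

Omitting SECURITYMODE and SAPWD gives a Windows-authentication-only instance, and installing as a proper service keeps the Local System context you asked about.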
Thanks in advance for any help and insights. It is much appreciated.
I can easily query multiple servers using the multi-server query function in Central Management Server and write some of the results to logging tables. I would like to be able to do this via a scheduled job. So far I am finding that even with Master/Target Servers set up this may not work, and the only workarounds are SSIS, SQLCMD (basically hard-coding the server name), or possibly PowerShell.
Can anyone tell me if they have been successful just using standard jobs and querying against multiple servers?
If I can't save the results to a 'central' database/table (I can do this when in SSMS) but can still query against multiple servers, I was thinking I could write the results to a CSV file that an SSIS job picks up.
I have attempted using SSIS to iterate through servers and have been plagued with intermittent connection issues when using a For...Loop container.
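One pattern that does work from a standard Agent job, as a sketch: keep your own server list, register each entry as a linked server of the same name, and loop with dynamic SQL into a central table. The dbo.ServerList and dbo.CentralLog tables here are hypothetical:

DECLARE @server sysname, @sql nvarchar(max);

DECLARE srv CURSOR LOCAL FAST_FORWARD FOR
    SELECT ServerName FROM dbo.ServerList;   -- hypothetical list table
OPEN srv;
FETCH NEXT FROM srv INTO @server;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- the four-part name requires a linked server with the same name
    SET @sql = N'INSERT dbo.CentralLog (ServerName, CollectedAt, DbCount)
                 SELECT N''' + @server + N''', GETDATE(), COUNT(*)
                 FROM ' + QUOTENAME(@server) + N'.master.sys.databases;';
    EXEC (@sql);
    FETCH NEXT FROM srv INTO @server;
END
CLOSE srv;
DEALLOCATE srv;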
We have multiple databases on a single instance in an OLTP environment. I have my data files on a separate SAN LUN from my transaction log files (and a few NDFs split out onto additional LUNs). I was wondering if there is a performance benefit to putting each LDF file on its own LUN? Or at least my few busiest LDFs?
We are currently on 2012, but I'm having to put together specs for a 2014 installation and need to answer this question without having an environment in which I can benchmark different setups. I just want to hear whether or not others have done this (why or why not?).
I have a requirement to delete all the orphaned users in our databases. The issue I am having is that when a database principal owns a schema in the DB, the user cannot be dropped.
How do I transfer ownership to dbo when I am looping through multiple databases? This is what I've got so far:
declare @is_read_only nvarchar (200)
Select @is_read_only = is_read_only from master.sys.databases where name='test' /* This should be a parameter value */
IF @IS_READ_ONLY = 0
BEGIN
    Declare @SQL as varchar (200)
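For the schema-ownership part, the transfer itself is ALTER AUTHORIZATION. A sketch that reassigns every schema owned by a given user to dbo and then drops the user; @user is a placeholder, and this runs in the context of a single database (wrap it in your database loop):

DECLARE @user sysname = N'orphaned_user';   -- placeholder
DECLARE @schema sysname, @sql nvarchar(max);

DECLARE sc CURSOR LOCAL FAST_FORWARD FOR
    SELECT s.name
    FROM sys.schemas AS s
    JOIN sys.database_principals AS p ON p.principal_id = s.principal_id
    WHERE p.name = @user;
OPEN sc;
FETCH NEXT FROM sc INTO @schema;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'ALTER AUTHORIZATION ON SCHEMA::' + QUOTENAME(@schema) + N' TO dbo;';
    EXEC (@sql);
    FETCH NEXT FROM sc INTO @schema;
END
CLOSE sc;
DEALLOCATE sc;

EXEC (N'DROP USER ' + QUOTENAME(@user));   -- succeeds once nothing is owned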
I read that when a SQL Server database has multiple data files within a single filegroup, SQL Server writes data using a proportional fill algorithm, where the amount of data written to a file is proportional to the amount of free space in that file compared to the other files in the filegroup.
So if no extra filegroups are created and multiple secondary files are attached to the database, is data stored and written across the files by that same algorithm, or in a different way?
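Every data file belongs to a filegroup; secondary files created without naming one simply land in PRIMARY, so the same proportional fill algorithm applies to them. A minimal sketch to see this (names and paths are placeholders):

CREATE DATABASE FillDemo
ON PRIMARY
    (NAME = FillDemo1, FILENAME = N'C:\Data\FillDemo1.mdf', SIZE = 100MB),
    (NAME = FillDemo2, FILENAME = N'C:\Data\FillDemo2.ndf', SIZE = 100MB)
LOG ON
    (NAME = FillDemo_log, FILENAME = N'C:\Data\FillDemo_log.ldf', SIZE = 50MB);
GO
-- both data files report the PRIMARY filegroup; inserts are spread across
-- them in proportion to each file's free space
SELECT name, type_desc, size * 8 / 1024 AS size_mb,
       FILEGROUP_NAME(data_space_id) AS filegroup_name
FROM FillDemo.sys.database_files;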
I am trying to create a logon trigger. As I am testing this, I discovered that each time I connect, I get 19 rows inserted into my audit table. I ran Profiler, and I see it is going through the logon trigger multiple times for a single connection. So, what am I doing wrong? The code is fairly simplistic, and Profiler doesn't give a clue as to what is going on. When I look at the output, I see the SPIDs for the first couple of connections are different; then a SPID that is different from those 2 appears in the next 17 rows. But when I do an sp_who2, that SPID does not exist.
This issue was first noticed on a 2012 version I was testing on, then the same issue appeared on a 2008 R2. I am currently testing on a 2014 version, which is doing the same thing. Is the logon trigger itself firing and causing this?
I also tried using the After Logon option, and got the same issue.
Here is the code:
CREATE TRIGGER LogonAuditTrigger
ON ALL SERVER WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN
    DECLARE @Body NVARCHAR(2000),
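The posted code is truncated, so for comparison, here is a minimal self-contained sketch of the same pattern; the audit database, table, and columns are hypothetical. Worth knowing: many clients (SSMS included, via Object Explorer and IntelliSense) open several physical connections per perceived "connection", and a logon trigger fires once per physical connection, so multiple audit rows can be expected behavior rather than a trigger bug:

CREATE TRIGGER LogonAuditTrigger
ON ALL SERVER WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN
    DECLARE @data XML = EVENTDATA();
    INSERT INTO AuditDB.dbo.LogonAudit (LoginName, PostTime, ClientHost, AppName)
    VALUES (@data.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(128)'),
            @data.value('(/EVENT_INSTANCE/PostTime)[1]',  'datetime'),
            @data.value('(/EVENT_INSTANCE/ClientHost)[1]', 'nvarchar(128)'),
            APP_NAME());   -- shows which client program opened each connection
END;

Logging APP_NAME() per row should make it obvious which connections are producing the 19 rows.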
A little background on what I am trying to achieve first. We are moving to Azure virtual machines and we will have 8 disks on the SQL Server box. I am adding more files to the primary file group and each file will go on its own drive. I am then rebalancing data across these files by rebuilding all of the indexes on the tables which is working fine. No problems so far all is good.
I now have an additional problem. If there is a LOB or blob column on the table, rebuilding the clustered index and all the non-clustered indexes doesn't rebalance the blob or LOB data across the disks as it does with in-row data.
I cannot find any articles on rebalancing LOB or blob data, because all the articles say to move to a new filegroup. I do not want a new filegroup; I just want to use the primary filegroup where the data already resides, and just redistribute it evenly in the same way as the in-row data, which is working fine.
One solution I thought about was to BCP the data out of the table, truncate the table, and then BCP back into the table, which I imagine would have the desired effect of distributing the data evenly over the files.
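The BCP round trip should indeed push the LOB pages back through proportional fill, since every row is re-inserted from scratch. A hedged sketch of the sequence; server, database, table, and path are placeholders (-n keeps native format, -E preserves identity values):

1. Export in native format:
   bcp MyDb.dbo.BigTable out C:\Temp\BigTable.dat -n -S MyServer -T
2. Remove the existing rows (from SSMS or sqlcmd):
   TRUNCATE TABLE dbo.BigTable;
3. Re-import:
   bcp MyDb.dbo.BigTable in C:\Temp\BigTable.dat -n -S MyServer -T -E

Foreign keys and triggers would need dropping or disabling around the truncate, and on a table of this size the -b batch-size switch is worth considering so the import commits in chunks.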
We have a DTS package that is submitted every minute by a SQL Agent job (2000). We see instances where before the first job is completed, the second one starts running.
Has anyone encountered this type of behavior before?
In my application code I am trying to invoke multiple threads in which each thread is loading an instance of the same SSIS package and would initialize the package variables with different values and execute the different instances in parallel. In each thread - after the package execution has completed successfully - I read that instance's SSIS package variables to get result information from that Instance run.
When I load the same package in different threads using the LoadFromSqlServer() method:
- Does the code create multiple instances of the SSIS package and load a distinct instance in each thread?
- Will the Package Execution ID be different for the different instances?
- Are the package-level variables instance-safe?
I am running multiple instances of the same application. This application connects to the database and runs stored procedures using ADODB (all code examples are taken from msado15).
The problem is that somehow these two applications are sharing something either with the connection or commands.
For instance, if the two instances are in the following function at the same time, then they both throw an SEH exception:
The exception that occurs is: "First-chance exception ...: 0xC0000005: Access violation reading location 0x00000068."
And afterwards, when they go to release the command, an error occurs: First-chance exception at 0x4de4120c in IpsEngine.exe: 0xC0000005: Access violation reading location 0xcccccccc.
Code Snippet
inline void _bstr_t::Data_t::_Free() throw()
{
    if (m_wstr != NULL) {
        ::SysFreeString(m_wstr);
    }
    if (m_str != NULL) {
The command and connection objects are not static, and there should be completely separate instances of these for each instance of the application. Does anybody know why this may be happening? Any help would be appreciated. Thanks in advance.
When viewing an estimated query plan for a stored procedure with multiple query statements, two things stand out to me and I wanted to get confirmation if I'm correct.
1. Under <ParameterList><ColumnReference... does the xml attribute "ParameterCompiledValue" represent the value used when the query plan was generated?
2. Does each query statement that makes up the execution plan for the stored procedure have its own execution plan? Meaning, is the stored procedure made up of multiple query plans that could have been generated at a different time from one another?
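Both readings are correct as far as I know: ParameterCompiledValue records the parameter value sniffed at compile time, and since SQL Server 2005, recompilation happens at statement level, so individual statements in one procedure can carry plans compiled at different times. A sketch for seeing the per-statement granularity in the plan cache; dbo.MyProc is a placeholder:

-- per-statement entries for one procedure's cached plan
SELECT qs.statement_start_offset,
       qs.creation_time,   -- can differ between statements after recompiles
       SUBSTRING(st.text,
                 qs.statement_start_offset / 2 + 1,
                 (CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset END
                  - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.objectid = OBJECT_ID(N'dbo.MyProc');   -- placeholder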
I want to set up a database role so that users can use sp_readerrorlog through SSMS. It does a check on membership in the securityadmin role.
I have tested it and can see you can grant execute on xp_readerrorlog but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add the signature to sp_readerrorlog but it's not permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role, but then explicitly deny ALTER ANY LOGIN (best done with a custom server role in 2012+, otherwise just manually in 2008). I tested this out and it works: I'm not able to alter any logins or increase my own permissions. I also did a check of what's reported from fn_my_permissions(null, null), and it shows minimal permissions, as I'd expect.
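For reference, a sketch of the 2012+ variant described above; the role and login names are placeholders:

-- membership in securityadmin satisfies sp_readerrorlog's check
ALTER SERVER ROLE securityadmin ADD MEMBER [DOMAIN\ErrorLogReader];

-- custom server role that walks the dangerous part back
CREATE SERVER ROLE ErrorLogOnly;
DENY ALTER ANY LOGIN TO ErrorLogOnly;
ALTER SERVER ROLE ErrorLogOnly ADD MEMBER [DOMAIN\ErrorLogReader];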
I have a 2012 AlwaysOn environment set up with two nodes. Both have 5 named SQL instances installed.
The issue we are having is that when we patch one server and fail everything over, some of the applications error. Some of the applications had to have their web.config files updated with the host\instance name, because it seems not to work with DNS.
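If the availability groups have listeners, the usual fix is to point the connection string at the listener name with MultiSubnetFailover enabled rather than at a host\instance name. A sketch with placeholder names (MultiSubnetFailover needs .NET 4.5, or 4.0 with the relevant update):

<connectionStrings>
  <!-- AGListener is the availability group listener DNS name (placeholder) -->
  <add name="AppDb"
       connectionString="Server=tcp:AGListener,1433;Database=AppDb;Integrated Security=SSPI;MultiSubnetFailover=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>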
How do I SELECT multiple nodes with the same name but different attributes? For example, I would need to SELECT NM1.03.1 from the <NM1> node. How do I specify which one I want to SELECT? What if I want to SELECT them all? What if I only want two of them?
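A sketch using T-SQL's XML methods; the document shape below is invented for illustration, since the post doesn't include the actual XML:

DECLARE @x xml = N'<Root>
  <NM1 qualifier="IL"><NM103>SMITH</NM103></NM1>
  <NM1 qualifier="PR"><NM103>JONES</NM103></NM1>
  <NM1 qualifier="QC"><NM103>DOE</NM103></NM1>
</Root>';

-- all of them: .nodes() shreds every <NM1> into a row
SELECT n.value(N'(NM103/text())[1]', N'varchar(60)') AS nm103
FROM @x.nodes(N'/Root/NM1') AS t(n);

-- just one, selected by attribute value
SELECT @x.value(N'(/Root/NM1[@qualifier="PR"]/NM103/text())[1]', N'varchar(60)');

-- only the first two, by position
SELECT n.value(N'(NM103/text())[1]', N'varchar(60)') AS nm103
FROM @x.nodes(N'/Root/NM1[position() <= 2]') AS t(n);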
I think this is a question for the specialists among us.
Can I use one general reporting server (installed on instance MAINREPORTING) for multiple customers who all have their own SQL instance (CUST1, CUST2, CUST3, ...)? I would use user authentication on the report server URL to display the specific reports each customer can use.
Is this possible, and what do I have to take care of concerning installation and/or configuration (especially on the reporting side)?
How do I get the list of instances of SQL Server 7.0/2000 running on the local machines inside my domain? I need to prepare a list of all SQL instances. Is it possible to find these details using a SQL query?
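Two hedged options. For the local machine, the installed instance names can be read from the registry with the undocumented (but long-standing) xp_regread; a temp table is used because SQL 2000 cannot INSERT...EXEC into a table variable:

-- instances installed on the local machine
CREATE TABLE #instances
    (Value nvarchar(100), InstanceNames nvarchar(100), Data nvarchar(100));

INSERT INTO #instances
EXEC master..xp_regread
    @rootkey    = N'HKEY_LOCAL_MACHINE',
    @key        = N'SOFTWARE\Microsoft\Microsoft SQL Server',
    @value_name = N'InstalledInstances';

SELECT InstanceNames FROM #instances;
DROP TABLE #instances;

For browsing the domain, osql -L (or sqlcmd -L with newer tooling) lists the servers answering on the network, though the results depend on the browser service and UDP 1434 being reachable.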