SQL Server Admin 2014 :: Database Mirroring For Large Number Of Databases
Oct 27, 2015
I have a 2-node cluster with 4 cores each, running 3 instances of SQL 2008 R2 Enterprise comprising 60 databases, 20 on each instance. I need to set up mirroring for each of the databases to a secondary server that also has 4 cores and 3 instances. What I understand is that in this case the mirror server will provide a maximum of 512 worker threads and the 60 mirror databases would consume 240 of them. What needs to be checked to establish the feasibility of going ahead with an asynchronous mirror setup as described above?
I am trying to enable database mirroring for 100 databases. It goes error-free up to 59 databases (sometimes 60) with the status (Principal, Synchronized) on the principal. On the 60th or 61st database it gives the status (Principal, Disconnected). The mirror also starts acting abnormally: connections to the mirror start timing out and it will not enable database mirroring on any more databases. I have SQL Server 2005 Enterprise with SP1 on the servers. A witness is not included yet.
These are my test servers... I have more than 500 databases on my production servers.
The principal and the mirror are both using port 5022 for endpoint communication.
All of the databases are critical and all must be included in database mirroring. So after that I tried to implement database mirroring again. The system has 3 GB of RAM and SQL Server (the mirror) is using 85 MB of RAM, but it is still giving this error while trying to enable database mirroring for the 37th database:
"There is insufficient system memory to run this query."
I've installed the MDW (Management Data Warehouse) database on our central monitoring SQL Server. I've then added a number of servers to be monitored. The data is collected on the servers that are being monitored and uploaded to the central MDW monitoring server.
On the servers that are being monitored, I'm seeing a large number (over 1000) of SPIDs being generated by 'SQL Server Data Collector'.
Is this normal behaviour? I've seen more blocking as a result of this.
Is there any way to reduce the number of SPIDs generated?
I have multiple SQL 2008 servers with databases, plus 1 mirroring server in place.
Since my database count is increasing, can I keep using only 1 mirroring server? Is there any limit on the number of databases a mirroring server can hold? I would have approx. 150 databases.
I configured database mirroring on SQL Server 2008 R2 in a test environment. However, it stopped working after some time, so I deleted the mirroring configuration and also deleted the database on both the primary and the mirror instance.
A few days later I noticed that on the primary, error events are continuously logged with the message: "Database mirroring attempt by user 'domain\service account' failed with error 'Connection handshake failed. 'domain\service account' does not have permission on the endpoint.' State 84."
Database File Placement Layout? We are planning to implement a new SQL Server 2014 OLTP database with a 1 TB data file and a 1 TB log file. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that the items which need separate storage due to constant simultaneous access are:
- Data files – should go on the fastest reading storage.
- Log files – should go on the fastest writing storage.
- TempDB – involves a lot of writing at the same time the data files are being read.
- Indexes (including full-text indexes) – involve a lot of writing at the same time the data files are being read.
Also, is there any benefit to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantage to having multiple database log files. In a SQL Server 2012 class I took last summer, under “Determining File Placement and Number of Files”, it states “Use a single log file in most situations as log files are written sequentially.”
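For illustration, a minimal sketch of a layout along these lines, with hypothetical database, file, and drive names (D/E/F/T standing in for the separate LUNs):

CREATE DATABASE SalesOLTP
ON PRIMARY
    (NAME = SalesOLTP_data, FILENAME = 'D:\Data\SalesOLTP_data.mdf', SIZE = 500GB),   -- fastest read storage
FILEGROUP Indexes
    (NAME = SalesOLTP_ix,   FILENAME = 'E:\Index\SalesOLTP_ix.ndf',  SIZE = 200GB)    -- index writes while data is read
LOG ON
    (NAME = SalesOLTP_log,  FILENAME = 'F:\Log\SalesOLTP_log.ldf',   SIZE = 500GB);   -- fastest write storage, single file

-- TempDB files are relocated separately (takes effect after a service restart):
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdev.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');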
I have system database and user database files on the G, H, and W drives. The process is going to be: copy data from G to S, H to T, and W to U; rename G to X, H to Y, and W to Z; rename S to G, T to H, and U to W; then reboot the servers. The original G, H, and W will then be X, Y, and Z; the old S will be the new G, the old T will be H, and the old U will be W. My question is whether SQL Server will start after doing this.
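Whether the instance comes back up should hinge on every file path recorded in master resolving after the final renames; since the end state restores the original G, H, and W letters (now pointing at the copied drives), the recorded paths should still resolve. A query along these lines, run before the change, lists exactly which paths master expects:

SELECT DB_NAME(database_id) AS database_name, name AS logical_name, physical_name
FROM sys.master_files
ORDER BY database_id, file_id;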
We are in the middle of redesigning a few tables (namely transaction tables) that will store very large amounts of data and will be hosted in the cloud (Azure). The old design of this product breaks transaction tables into monthly tables, i.e. an ORDERS table would be physically broken into twelve monthly tables over a year, like ORDERS0115 (mmyy), ORDERS0215, and so on.
We are of the opinion that keeping all the transactions in one table is better. I would like to know the best practices for transaction tables like the one mentioned above. Is it better to use one table with partitions? I read somewhere that partitions can slow down SELECT queries if not designed and thought through properly. Since this would be hosted in the cloud (Azure), do you think any additional things need to be taken care of? How does a site like Amazon keep its transaction tables?
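If partitioning is the route taken, a minimal monthly-partitioning sketch (all names hypothetical) would look something like this; each month lands in its own partition of a single ORDERS table instead of a separate physical table:

CREATE PARTITION FUNCTION pf_OrdersMonthly (datetime2(0))
AS RANGE RIGHT FOR VALUES ('20150101', '20150201', '20150301');   -- one boundary per month

CREATE PARTITION SCHEME ps_OrdersMonthly
AS PARTITION pf_OrdersMonthly ALL TO ([PRIMARY]);                 -- or map months to separate filegroups

CREATE TABLE dbo.Orders
(
    OrderID   bigint IDENTITY(1,1) NOT NULL,
    OrderDate datetime2(0)         NOT NULL,
    Amount    decimal(18,2)        NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderDate, OrderID)
) ON ps_OrdersMonthly (OrderDate);

Partition elimination then depends on queries filtering on OrderDate, which is the usual caveat behind "partitions can slow down SELECTs if not designed properly".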
I have a Windows NT group that is used to delegate certain database responsibilities to other members of staff, and I am trying to grant permissions for the members of the group to be able to establish database mirroring sessions, i.e. to run the following:
ALTER DATABASE <database> SET PARTNER = 'tcp://principal_server.domain.com:port';
Although the group has db_owner role membership in the user database, which grants the ALTER permission on the database, the following is generated in the error log when they get to this step on the intended mirror instance, after restoring the database correctly in preparation:
SqlDumpExceptionHandler: Process 59 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.
BEGIN STACK DUMP: 10/29/15 11:16:15 spid 59
Exception Address = 00007FF9A6AF838C Module(sqlmin+000000000003838C)
Exception Code = c0000005 EXCEPTION_ACCESS_VIOLATION
Access Violation occurred reading address 00000000000000D8
Input Buffer 210 bytes -
alter database <redacted> set partner = '<redacted>';
As you can see, the statement fails for the user. There are no issues with the database itself, as I am able to run the same query successfully using my own sysadmin account after the failed attempt. What other minimum permissions might the group need to successfully set up a mirroring session?
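One thing worth checking (hedged, since the access violation itself looks like a server bug worth patching): on the intended mirror the database sits in the RESTORING state, so db_owner membership inside it cannot be evaluated, and the group also needs to be able to use the endpoint. Server-level grants along these lines, with a hypothetical group and endpoint name, run in master, are commonly what is missing:

GRANT ALTER ANY DATABASE TO [DOMAIN\DBAGroup];               -- covers ALTER DATABASE ... SET PARTNER on a restoring database
GRANT CONNECT ON ENDPOINT::Mirroring TO [DOMAIN\DBAGroup];   -- assumes the mirroring endpoint is named Mirroring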
I want to analyze the procedure cache to find inefficient plans and parameter issues.
I do it through the DMVs, but my queries against the DMVs are very slow and demand resources because the procedure cache is several GB. Actually, I don't need online analysis.
Is it possible to take a fast snapshot of the procedure cache?
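One low-tech option is to copy the relevant DMV rows into a static table once and analyze the copy offline; a sketch, with a hypothetical target table and a TOP filter to keep the capture cheap:

SELECT TOP (500)
       qs.*,
       st.text AS sql_text,
       GETDATE() AS captured_at
INTO   dbo.PlanCacheSnapshot                         -- hypothetical; use INSERT on later runs
FROM   sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;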
We have a database that is enabled for mirroring. We need to delete old records, around 500k rows from one table, but it has foreign key relationships. How should this kind of delete be done on production servers?
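The usual pattern is to delete in small batches, child table first (or with ON DELETE CASCADE already in place), so each transaction stays short and the log and mirroring send queue stay manageable. A sketch with hypothetical table and column names:

WHILE 1 = 1
BEGIN
    DELETE TOP (5000) c
    FROM dbo.OrderDetail AS c
    INNER JOIN dbo.[Order] AS o ON o.OrderID = c.OrderID
    WHERE o.CreatedDate < '20100101';

    IF @@ROWCOUNT = 0 BREAK;   -- no child rows left; repeat the same loop for the parent table
END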
I've got an old installation of SQL Server 2008 R2 Developer Edition on an old PC which is failing. I've got a new PC and have put SQL Server 2014 Developer Edition onto it. Before the old machine completely dies, I've gotten into SSMS on the old machine and done a backup of the databases I want to save. I've moved the .BAK files to where I can get to them from SSMS on the new machine. I've gotten into SSMS and tried to restore the database to my new machine. However, I'm getting an error that does not make any sense to me.
The database I backed up is named JobSearch. When I backed it up, that was the only database I had selected. Like I said, I copied the .BAK to the new machine, got into SSMS, told it that I wanted to restore the JobSearch database, telling it where I wanted to put it, and it immediately fails with:
"Restore of database 'JobSearch' failed. System.Data.SqlClient.SqlError: Logical file 'VideoLibrary_Data' is not part of the database 'JobSearch'. Use RESTORE FILEISTONLY to list the logical file names."
Well, of course VideoLibrary isn't "the logical file". But neither did I select VideoLibrary (which is a database I also want to move, but I'm doing one at a time). So what in heck is going on here? Why is it complaining about a database I haven't even selected to back up? Why, when I check everything on the old machine, is it backing up JobSearch, but on the new machine it sees VideoLibrary?
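A likely explanation is that the .bak file on disk contains more than one backup set (backups append to an existing file by default), so the restore is reading a VideoLibrary set rather than the JobSearch one. Inspecting the file and restoring the right set explicitly, with hypothetical paths and logical names, looks like this:

RESTORE HEADERONLY  FROM DISK = N'D:\Backups\JobSearch.bak';                 -- lists every backup set in the file
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\JobSearch.bak' WITH FILE = 2;  -- logical names of set 2

RESTORE DATABASE JobSearch
FROM DISK = N'D:\Backups\JobSearch.bak'
WITH FILE = 2,                                               -- whichever set HEADERONLY shows as JobSearch
     MOVE N'JobSearch_Data' TO N'D:\Data\JobSearch.mdf',
     MOVE N'JobSearch_Log'  TO N'D:\Data\JobSearch_log.ldf';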
I need a backup script to take all the database backups. We have a maintenance plan, but our database names are 98 characters long; when the maintenance plan takes the backups, it appends the date and time zone information while storing the backup history, exceeding the 128-character limit, so it fails to write the history information to msdb.
So we want to take the backups using a script, and it has to create a subfolder for each database. Also, if any of the databases fails, it should continue with the others.
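A sketch of such a script, assuming xp_create_subdir is available (it ships with the maintenance plan components) and a hypothetical backup root; the TRY/CATCH keeps the loop going when one database fails:

DECLARE @name sysname, @dir nvarchar(400), @file nvarchar(512);
DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases
    WHERE database_id > 4 AND state_desc = 'ONLINE';    -- skips system DBs; include them if required

OPEN dbs;
FETCH NEXT FROM dbs INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        SET @dir = N'X:\Backups\' + @name;
        EXEC master.dbo.xp_create_subdir @dir;          -- one subfolder per database
        SET @file = @dir + N'\' + @name + N'_' + CONVERT(nchar(8), GETDATE(), 112) + N'.bak';
        BACKUP DATABASE @name TO DISK = @file WITH INIT;
    END TRY
    BEGIN CATCH
        PRINT N'Backup failed for ' + @name + N': ' + ERROR_MESSAGE();  -- log and carry on
    END CATCH;
    FETCH NEXT FROM dbs INTO @name;
END
CLOSE dbs; DEALLOCATE dbs;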
I have heard that high numbers of VLFs aren't good: they can impact performance and delay recovery time, so I wanted to test that.
I created 2 DBs, each with a 100MB data file and a 50MB log file.
The TestDB log file had 100MB autogrowth; the TestDB2 log file had 1% growth.
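(Presumably set with something like the following; the logical log file names are assumptions:)

ALTER DATABASE TestDB  MODIFY FILE (NAME = TestDB_log,  FILEGROWTH = 100MB);
ALTER DATABASE TestDB2 MODIFY FILE (NAME = TestDB2_log, FILEGROWTH = 1%);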
I inserted 1,048,576 records and took a backup.
I ran DBCC LOGINFO: TestDB had 40 VLFs and TestDB2 had 165 VLFs.
But when I restored both DBs, this is what I got.
TestDB: RESTORE DATABASE successfully processed 42258 pages in 4.420 seconds (74.691 MB/sec). SQL Server Execution times: CPU Time = 125ms, elapsed time = 8323 ms.
TestDB2: RESTORE DATABASE successfully processed 42257 pages in 3.943 seconds (83.724 MB/sec). SQL Server Execution Times: CPU time = 109 ms, elapsed time = 8314 ms.
The question is: where is the difference? How is TestDB, which has 40 VLFs, better than TestDB2, which has 165 VLFs?
SELECT TOP 100 LTRIM([text]), objectid, total_rows, total_logical_reads, execution_count
FROM sys.dm_exec_query_stats AS a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) AS b
WHERE last_execution_time >= '2015-04-07 10:01:01.01'
ORDER BY execution_count DESC
But the execution count in the result is cumulative from when the plan was first cached. I want to see it for a single day only.
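Since the DMV counters are cumulative from the moment a plan enters the cache, one workaround is to snapshot them once a day and diff against the snapshot; a sketch with a hypothetical baseline table (plans recompiled or evicted in between simply have no match):

-- run once at the start of the day
SELECT sql_handle, plan_handle, statement_start_offset, execution_count
INTO dbo.QueryStatsBaseline
FROM sys.dm_exec_query_stats;

-- later in the day: executions since the baseline
SELECT LTRIM(b.[text]) AS query_text,
       a.execution_count - ISNULL(base.execution_count, 0) AS executions_today
FROM sys.dm_exec_query_stats AS a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) AS b
LEFT JOIN dbo.QueryStatsBaseline AS base
       ON base.plan_handle = a.plan_handle
      AND base.statement_start_offset = a.statement_start_offset
ORDER BY executions_today DESC;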
I have just upgraded a test server from SQL Server 2008 SP3 to SQL Server 2014 as an in-place upgrade. The compatibility level of the master database was not upgraded: it still shows 90, while the rest of the system databases were updated to 120. Is it fine to update the compatibility level of the master database? Do any precautions need to be taken?
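Current levels can be checked instance-wide with the query below; whether ALTER DATABASE is accepted against master varies by build, so treat the second statement as an assumption to verify on a disposable instance first:

SELECT name, compatibility_level FROM sys.databases;

ALTER DATABASE [master] SET COMPATIBILITY_LEVEL = 120;   -- assumption: not every build permits altering master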
We have multiple databases on a single instance in an OLTP environment. I have my data files on a separate SAN LUN from my transaction log files (and a few NDFs split out onto additional LUNs). I was wondering if there is a performance benefit to putting each LDF file on its own LUN? Or at least my few busiest LDFs?
We are currently on 2012, but I'm having to put together specs for a 2014 installation and need to answer this question without having an environment in which I can benchmark different setups. I just want to hear whether or not others have done this (why or why not?).
I have a requirement to delete all the orphaned users in our databases. The issue I am having is that when a database principal owns a schema in the DB, the user cannot be dropped.
How do I transfer the schema to dbo while looping through multiple databases? This is what I've got so far:
DECLARE @is_read_only bit;
SELECT @is_read_only = is_read_only FROM master.sys.databases WHERE name = 'test'; /* This should be a parameter value */
IF @is_read_only = 0
BEGIN
    DECLARE @SQL varchar(200);
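A sketch of the missing piece: hand every schema owned by the orphaned principal back to dbo, then drop the user. @user is a hypothetical variable, and the batch runs inside each target database:

DECLARE @user sysname = N'orphan_user', @fix nvarchar(max) = N'';

SELECT @fix = @fix + N'ALTER AUTHORIZATION ON SCHEMA::' + QUOTENAME(s.name) + N' TO dbo; '
FROM sys.schemas AS s
INNER JOIN sys.database_principals AS p ON p.principal_id = s.principal_id
WHERE p.name = @user;

SET @fix = @fix + N'DROP USER ' + QUOTENAME(@user) + N';';
EXEC (@fix);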
I have been looking into mirroring a large number of small databases, approximately 150.
As I understand it, this won't be feasible because of the way mirroring threading works: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=441900&SiteID=1
As I understand it, for every database being mirrored, SQL will ping the mirror every second, causing a network bottleneck?
Also, will the number of threads generated for each mirrored database cause a bottleneck as well?
At the moment our database servers are under very little pressure; as an estimate they use about 10% of the resources allocated to them, such as CPU utilization, memory, disk I/O, and network. Our server hardware is dual quad-core Xeons with 4-8 GB of memory and a variety of 10k SCSI RAID configurations, RAID 5 or RAID 10, running SQL 2005 32-bit.
I've done some calculations on the log file generation rate compared to network bandwidth, and there is more than enough network bandwidth.
Has anybody had any luck in mirroring many small databases?
My concern is how much traffic is caused by the pinging of the mirror for each database.
How many threads will the mirroring create, and what is the maximum number of threads SQL can handle?
How much memory will be consumed by each one of these mirroring threads?
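The thread budget itself can be read straight from the instance, as below. For context, SQL 2005 Books Online suggested roughly 10 mirrored databases per server as the practical ceiling on 32-bit systems because of the worker threads each session consumes; on a 32-bit instance with 8 logical CPUs the default budget works out to 256 + (8 - 4) * 8 = 288 worker threads, shared with everything else the server does:

SELECT max_workers_count FROM sys.dm_os_sys_info;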
The first test writes 5,000,000 rows into one table. I realise this is not representative OLTP behaviour, but it helped me start interpreting performance counters and testing several setups to be discussed with our server, storage, and network administrators. This way we have been able to compare the results of different hard disks, LUN vs. VMDK, 1GB vs. 10GB network, AMD vs. Intel, etc. This way I can also compare several SQL setups (recovery model, max memory config, ...).
The screenshot shows the results of 2 runs on the same server: Win2012R2, SQL2014, 16GB RAM.
In test 1, min/max server memory was set to 9215MB/10751MB. In test 2, min/max server memory was set to 13311MB/14847MB.
The script assures the number of bytes inserted in the nvarchar columns is always the same.
This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (columns 5 and 6).
Since ca. 13GB has to be written, the results of test 1 show the lead time increasing once more than 10GB has been inserted (columns 8 and 9). In addition you can see that at that moment:
- buffer cache hit ratio is decreasing
- page life expectancy becomes "terrible"
- free list stalls/sec increases
- lazy writes/sec increases
- read latency increases (write latency does not)
In test 2 (id 3 in column 1 of the screenshot) those counters are not really affected, since the 5,000,000 rows can all be stored in memory.
Now what I do not understand is :
Why are the number of pages read (instance level), as well as the number of bytes read and the number of reads (database level), increasing so dramatically during run 1?
I expected to see a serious impact on write behaviour, since SQL Server is forced to start flushing dirty pages once memory is filled. Actually, you can see the number of writes (not the number of bytes written) start to increase faster in test 1 after 4,000,000 rows, but there's no real impact on write latency.
Finally, I want to note:
- I'm the only user on this machine
- the table has a clustered index on an identity column
- there are no foreign key constraints
- inserts are executed in a loop, not as one big transaction (see the sketch after this list)
- to monitor progress and behaviour/impact, the counters are stored using DMV queries every 10,000 loops
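A simplified sketch of the kind of loop described above (dbo.TestTable and dbo.CounterLog are hypothetical stand-ins):

DECLARE @i int = 1;
WHILE @i <= 5000000
BEGIN
    INSERT dbo.TestTable (payload) VALUES (REPLICATE(N'x', 100));   -- constant byte count per row

    IF @i % 10000 = 0        -- snapshot the counters every 10,000 inserts
        INSERT dbo.CounterLog (captured_at, counter_name, cntr_value)
        SELECT SYSDATETIME(), RTRIM(counter_name), cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN (N'Page life expectancy', N'Lazy writes/sec', N'Free list stalls/sec');

    SET @i += 1;
END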
So I wonder why SQL Server starts to execute so many reads in test 1.
Our development team wanted to create a database user for each application user and use these for granular data access control, which at first sounded like a good idea, but our initial testing ran into some interesting results.
Our target user base was about 15 million users with an estimated 1% concurrency rate, and, finding no MS documentation on an upper limit to the number of users a database can have, we began some load testing to see how the database performed. In the hundreds-of-thousands-of-users range, our test database had a hard time performing well under light loads (even without any concurrent connections).
When we purged the users and reverted back to just a handful of service accounts, performance went back to "normal" under the same loads. I began to wonder if this is a situation where throwing more hardware at the problem would overcome the issue or if there is a practical upper limit to the number of users a single database can handle well.
(There were of course other cons to this arrangement and I certainly was never going to expand the users tree in the object explorer for a database like this, but we thought it a solution worth investigating.)
What is the largest number of users any of you have had in a single database?
For anyone with a larger number of databases (500+): how many do you have in a single instance? If you are using multiple instances on a single server, how many DBs per instance? This is why I'm asking:
We are experiencing 701 "out of system memory" errors and temporary (usually) system freezes when the error occurs. We have the 32-bit 2005 version 9.00.2153.00, 32GB of memory, AWE enabled, on a quad dual-core 3GHz hyperthreaded server. Neither the bPool nor VAS shows any pressure when the "out of system memory" error occurs. Since this error usually indicates a VAS problem, we tried increasing VAS to 1GB with the -g flag. It made no difference. PSS has been working on the case for 3 weeks. They don't seem to be finding any evidence of memory pressure either. When I last spoke to the escalation engineer yesterday, it seemed that they are going to recommend reducing the number of databases on the server. I asked for clarification as to whether we are hitting a 32-bit barrier, an instance limitation, or both. I am awaiting the answer. How many databases do you have on your server? We had between 1700 and 1900 (the number varies) at times when the error occurred. We are now at 1500 and have not had the error in the 2 days since reducing the number of databases.
I tried encryption on server "A" for the database "AdventureWorks2012". Then I tried to restore it to server "B". There was the certificate issue, and I thought "of course: it's encrypted! Let's deactivate it". So here I go: ALTER DATABASE AdventureWorks2012 SET ENCRYPTION OFF. I look at sys.databases: not encrypted. I back up using no encryption and verify using msdb.dbo.backupset: not encrypted.
I move my backup to my other server, where encryption was never configured (so no certificate, nothing...), and I get the error:
Msg 33111, Level 16, State 3, Line 1
Cannot find server certificate with thumbprint '0xFA130E58C999C4919B8975999C83A75A403B11D8'.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.
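What typically bites here is that turning TDE off is not instant: the database keeps its database encryption key, and therefore its tie to the certificate, until the background decryption scan finishes and the key is dropped, and any backup taken before that still records the certificate thumbprint. Two ways out, with hypothetical names and paths: wait for sys.dm_database_encryption_keys to report encryption_state = 1, run DROP DATABASE ENCRYPTION KEY, and take a fresh backup; or move the certificate to server B:

-- on server A
USE master;
BACKUP CERTIFICATE TDECert
TO FILE = N'C:\Keys\TDECert.cer'
WITH PRIVATE KEY (FILE = N'C:\Keys\TDECert.pvk', ENCRYPTION BY PASSWORD = N'StrongP@ssw0rd!');

-- on server B (create a database master key first if none exists)
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'AnotherStrongP@ssw0rd!';
CREATE CERTIFICATE TDECert
FROM FILE = N'C:\Keys\TDECert.cer'
WITH PRIVATE KEY (FILE = N'C:\Keys\TDECert.pvk', DECRYPTION BY PASSWORD = N'StrongP@ssw0rd!');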
And I have chosen the destination: an unstructured (flat) file. But the wizard proposes to export only one table (dbo.Acocount), and all the others from the list are not exported. How can I export ALL the data into one file? I need to do this to edit the syntax in the editor and then import this data and database structure into PostgreSQL.
I have two databases that are copies of each other: one is the backup of the other. Each DB has 2 filegroups. I want to replace one filegroup in one DB with the corresponding filegroup from the other. How do I do this? Or how do I back up and then restore?
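A single filegroup can be backed up on its own, as sketched below with hypothetical names, but note that a filegroup backup can only be restored into the same database (or a restored copy of it), not into a different database; moving data between two databases generally means restoring the source backup under a new name or copying the objects across:

BACKUP DATABASE SalesDB
FILEGROUP = N'FG2'
TO DISK = N'X:\Backups\SalesDB_FG2.bak';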
Is there a query to show logins that don't have any permissions within the SQL instance? I'm tasked with doing some cleanup and have found some cases where the database was deleted or moved to another server, but the logins that used it were not deleted. I'd like to identify them for research.
For instance, a query to show logins that have no permissions in any of the existing databases would be handy. I'm thinking it would be complicated by the need to loop through all of the existing databases and then outer-join the results to the list of instance-level logins. I'm going to try to write something like that, but was hoping that a script already exists.
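A sketch of that approach, using the undocumented but widely available sp_MSforeachdb to walk the databases (server-level permissions and role memberships are not considered here):

CREATE TABLE #mapped_sids (sid varbinary(85) NOT NULL);

EXEC sp_MSforeachdb N'USE [?];
INSERT #mapped_sids (sid)
SELECT sid FROM sys.database_principals
WHERE sid IS NOT NULL AND type IN (''S'', ''U'', ''G'');';

SELECT sp.name, sp.type_desc
FROM sys.server_principals AS sp
LEFT JOIN (SELECT DISTINCT sid FROM #mapped_sids) AS m ON m.sid = sp.sid
WHERE sp.type IN ('S', 'U', 'G')        -- SQL, Windows, and Windows-group logins
  AND m.sid IS NULL                     -- no user mapped in any database
  AND sp.name NOT LIKE '##%';           -- skip internal logins

DROP TABLE #mapped_sids;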