I set up the collector and specified my AD account as the Run As account on the Collector Set - Properties - General screen. My AD account is a local admin on the remote server.
However, the collector does not seem to work. Although the collector set shows as running, the .blg file stays at 64 KB. If I open it, there is nothing inside (no counters listed at the bottom). What did I miss?
I have chosen the destination: an unstructured (flat) file. But the wizard offers to export only one table (dbo.Acocount), and none of the others in the list are exported. How can I export ALL the data into one file? I need to do this so I can edit the syntax in an editor and then import the data and database structure into PostgreSQL.
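For context, this is roughly the per-table export I am after; a minimal sketch, assuming a default local instance, a database named SourceDb and an output folder C:\export (all placeholder names), that generates one bcp export command per user table (one file per table rather than one combined file, but it covers every table):

-- Generate one "bcp ... out" command per user table; run the generated commands from a command prompt.
SELECT 'bcp ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
     + ' out "C:\export\' + s.name + '.' + t.name + '.csv" -c -t, -T -S . -d SourceDb'
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
ORDER BY s.name, t.name;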
I have been facing the following error in a failover cluster setup. I have built a 2-node, 2-instance SQL Server failover cluster on top of a Windows failover cluster. I deleted MTCBJINS07 in AD and recreated it, but even after that the problem is not solved. MTCBJINS07 is the SQL Server network name of my second SQL instance.
Cluster network name resource 'SQL Network Name (MTCBJINS07)' failed registration of one or more associated DNS name(s) for the following reason:
DNS bad key. Ensure that the network adapters associated with dependent IP address resources are configured with at least one accessible DNS server.
I am trying to set up querying Active Directory from SQL Server for the first time.
We are running on Windows Server 2012 and using SQL Server 11.0.2100.60. I have tried the following.
SQL Server is on server dev; AD is on server DO.
EXEC sp_addlinkedserver 'ADSI', 'Active Directory Services 2.5', 'ADSDSOObject', 'adsdatasource'
GO
[Code] ....
I get the following error when I try to query:
Msg 7321, Level 16, State 2, Line 2
An error occurred while preparing the query "SELECT name FROM 'LDAP://xxxx.internal' WHERE objectCategory='Person' AND objectClass = 'contact'" for execution against OLE DB provider "ADSDSOObject" for linked server "ADSI".
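For reference, I am querying through OPENQUERY against the linked server; a minimal sketch of the pattern I am using (the LDAP path and filter are the ones from the error message above, with single quotes doubled inside the pass-through string):

SELECT name
FROM OPENQUERY(ADSI,
    'SELECT name
     FROM ''LDAP://xxxx.internal''
     WHERE objectCategory = ''Person'' AND objectClass = ''contact''');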
I have a Windows Server 2012 R2 two-node cluster with a SQL Server 2014 FCI installed. Data files are on a separate Windows Server 2012 R2 file server. The data file share grants Full Control to the SQL Server service and SQL Server Agent service accounts, and the NTFS permissions are Full Control as well.
When I try to attach a database:

CREATE DATABASE AdventureWorksDW2012
ON (FILENAME = '\\apricot\mssql_VIOLET\MSSQL12.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2012_Data.mdf')
FOR ATTACH

I get this error:

Msg 5120, Level 16, State 101, Line 4
Unable to open the physical file "\\apricot\mssql_VIOLET\MSSQL12.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2012_Data.mdf". Operating system error 5: "5(Access is denied.)".
If I log into the file server (called APRICOT) and look at the NTFS permissions they all look good. I have also reapplied the NTFS permissions from the root folder down.
EDIT: If I log on to one of the cluster nodes as the SQL Server service account, navigate to \\apricot\mssql_VIOLET\MSSQL12.MSSQLSERVER\MSSQL\DATA, and copy and paste the data file, it works fine.
EDIT2: If I log on to the file server and Enable Inheritance at the root level, then Replace all child objects with inheritable permission entries from this object, I get this error:
User Account Control settings on all nodes and the file server are set to Never notify
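One thing I can verify from T-SQL is which accounts the engine and Agent are actually running as, since those are the accounts that need rights on the share; a small sketch using the sys.dm_server_services DMV:

SELECT servicename, service_account, status_desc
FROM sys.dm_server_services;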
I tried out encryption on server "A" for database "AdventureWorks2012", then tried to restore it to server "B". There was a certificate issue, and I thought "of course: it's encrypted! Let's deactivate it." So I ran "ALTER DATABASE AdventureWorks2012 SET ENCRYPTION OFF". I looked at sys.databases: not encrypted. I backed up using no encryption and verified with msdb.dbo.backupset: not encrypted.
I moved my backup to my other server, where encryption was never configured (so no certificate, nothing...), and I get this error:

Msg 33111, Level 16, State 3, Line 1
Cannot find server certificate with thumbprint '0xFA130E58C999C4919B8975999C83A75A403B11D8'.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.
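From what I have read, SET ENCRYPTION OFF alone may not be enough: the database encryption key stays in the database, and a backup taken before decryption finishes and the key is dropped still references the protecting certificate. A hedged sketch of the two things I could try next, with placeholder certificate name and file paths (TDECert, C:\certs\...):

-- Option 1: on server A, finish removing TDE before taking the backup.
USE AdventureWorks2012;
-- wait until sys.dm_database_encryption_keys shows encryption_state = 1 (unencrypted), then:
DROP DATABASE ENCRYPTION KEY;

-- Option 2: copy the certificate to server B so the existing backup can be restored there
-- (assumes the certificate and its private key were backed up on server A first).
USE master;
CREATE CERTIFICATE TDECert
    FROM FILE = 'C:\certs\TDECert.cer'
    WITH PRIVATE KEY (FILE = 'C:\certs\TDECert.pvk',
                      DECRYPTION BY PASSWORD = 'password-used-when-backing-up-the-key');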
I have a requirement to implement CDC on 50+ tables so that the warehouse/reporting load picks up incremental data changes rather than exporting whole tables. The largest table has more than half a billion records.
The warehouse uses a daily copy of the OLTP database (a daily DB refresh). How can I accomplish this? Is there a downside to implementing CDC just for the sake of capturing incremental changes on these tables?
Is there any performance impact if we enable CDC on the OLTP database?
Can we make use of the CDC tables in the environment where we do the daily DB refresh, so that these queries don't hit the OLTP database?
What is the best way to implement CDC to capture incremental changes for reporting?
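For reference, enabling CDC itself looks straightforward; a minimal sketch with placeholder names (OltpDb, dbo.BigTable):

USE OltpDb;
EXEC sys.sp_cdc_enable_db;                     -- enable CDC at the database level

EXEC sys.sp_cdc_enable_table
     @source_schema        = N'dbo',
     @source_name          = N'BigTable',
     @role_name            = NULL,             -- no gating role
     @supports_net_changes = 1;                -- needs a PK or unique index; exposes the net-changes function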
I have two databases that are alike; one is a backup of the other. Each database has 2 filegroups. I want to replace one filegroup in one database with the corresponding filegroup from the other. How do I do this? Or how do I back up and then restore just that filegroup?
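To make the question concrete, this is the kind of filegroup-level backup/restore I mean; a hedged sketch with placeholder names (SourceDb, FG_Data, full recovery model), noting that as far as I know a filegroup backup can only be restored into the same database it was taken from, not into a different database:

BACKUP DATABASE SourceDb
    FILEGROUP = 'FG_Data'
    TO DISK = 'C:\backup\SourceDb_FG_Data.bak';

RESTORE DATABASE SourceDb
    FILEGROUP = 'FG_Data'
    FROM DISK = 'C:\backup\SourceDb_FG_Data.bak'
    WITH NORECOVERY;
-- then restore the subsequent log backups to bring the filegroup back online.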
Is there a query to show logins that don't have any permissions within the SQL instance? I'm tasked with doing some cleanup and have found cases where a database was deleted or moved to another server but the logins that used it were not removed. I'd like to identify them so I can research them.
For instance, a query to show logins that have no permissions in any of the existing databases would be handy. I'm thinking it would be complicated by the need to loop through all of the existing databases and then outer-join the result to the list of instance-level logins. I'm going to try to write something like that, but I was hoping a script already exists.
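The rough shape of what I have in mind; a sketch only, assuming "no permissions" simply means the login has no mapped user in any online database (server-level permissions and roles are ignored, and sp_MSforeachdb is undocumented):

-- Collect the SIDs of all database users across every database.
IF OBJECT_ID('tempdb..#mapped') IS NOT NULL DROP TABLE #mapped;
CREATE TABLE #mapped (sid varbinary(85));

EXEC sp_MSforeachdb
    'USE [?]; INSERT INTO #mapped SELECT sid FROM sys.database_principals WHERE sid IS NOT NULL;';

-- Logins with no mapped database user anywhere.
SELECT sp.name, sp.type_desc, sp.create_date
FROM sys.server_principals AS sp
WHERE sp.type IN ('S', 'U', 'G')                      -- SQL logins, Windows logins, Windows groups
  AND sp.name NOT LIKE '##%'                          -- skip internal certificate-based logins
  AND NOT EXISTS (SELECT 1 FROM #mapped AS m WHERE m.sid = sp.sid);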
I have multiple SQL Server 2008 servers with databases, and one mirroring server in place.
Since my database count is increasing, can I keep using only one mirroring server? Is there any limit on the number of databases a mirroring server can host? I would have approximately 150 databases.
I want to replace a big log database with a new one (a database with the same structure), but the current database has many connections.
This is my plan:
1- Create a new database with the same structure.
2- Rename the current database to OldDataBase with this code:

USE master
GO
EXEC sp_dboption CurDataBase, 'Single User', True
EXEC sp_renamedb 'CurDataBase', 'OldDataBase'
GO

3- Rename NewDataBase to CurDataBase:

USE master
GO
EXEC sp_renamedb 'NewDataBase', 'CurDataBase'
Is this approach correct, and is the T-SQL code OK? (Don't forget there are many connections to CurDataBase, which is a log database; losing a few seconds of data is not a problem.)
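I have also seen an alternative that handles the open connections explicitly with ALTER DATABASE (sp_dboption is deprecated and removed in newer versions); a sketch using the same names as above:

USE master;
ALTER DATABASE CurDataBase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- kicks out existing connections
ALTER DATABASE CurDataBase MODIFY NAME = OldDataBase;
ALTER DATABASE OldDataBase SET MULTI_USER;
ALTER DATABASE NewDataBase MODIFY NAME = CurDataBase;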
My database went into suspect mode. After we ran some scripts, it came out of suspect mode, but we encountered this error while opening a table in the database:
2009-11-02 15:46:42.90 spid51 Error: 824, Severity: 24, State: 2.
2009-11-02 15:46:42.90 spid51 SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:43686; actual 0:0). It occurred during a read of page (1:43686) in database ID 23 at offset 0x0000001554c000 in file 'H:\MSSQL.SQL2008\MSSQL\DATA\my_db.mdf'.
Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
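For completeness, this is the consistency check the message asks for (a minimal sketch; the database name is taken from the file name in the error and may not match the real one):

DBCC CHECKDB (N'my_db') WITH NO_INFOMSGS, ALL_ERRORMSGS;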
I tried to copy a database from one server to another using the sa user (a SQL login), but this error was raised and the copy failed:
Executed as user: NT Service\SQLSERVERAGENT. Microsoft (R) SQL Server Execute Package Utility Version 11.0.2100.60 for 64-bit Copyright (C) Microsoft Corporation. All rights reserved.
Started: 9:55:24 AM
Progress: 2015-05-11 09:55:24.45 Source: 10_32_0_201_10_32_0_202_Transfer Objects Task Task just started the execution.: 0% complete End Progress
Error: 2015-05-11 09:56:31.87 Code: 0x00000000 Source: 10_32_0_201_10_32_0_202_Transfer Objects Task
An error occurred while transferring data. See the inner exception for details. StackTrace: at Microsoft.SqlServer.Management.Smo.Transfer.TransferData() The Execution method succeeded, but the
[code]....
number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors. End Warning
DTExec: The package execution returned DTSER_FAILURE (1).
Started: 9:55:24 AM Finished: 9:56:32 AM Elapsed: 67.892 seconds.
The package execution failed. The step failed.
Is there a better way to deal with virtual log files? I see several approaches to reducing the number of virtual log files for a database, and I want to know the best and safest approach from the masters here.
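The approach I keep seeing is roughly the following; a hedged sketch with placeholder names and sizes (MyDb, MyDb_log, 8 GB target), which assumes the log can actually be truncated (a log backup may be needed first):

DBCC LOGINFO ('MyDb');                         -- one row per VLF; sys.dm_db_log_info() on SQL 2016+

USE MyDb;
DBCC SHRINKFILE (MyDb_log, 1024);              -- shrink the log file down (size in MB)

ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyDb_log, SIZE = 8192MB, FILEGROWTH = 1024MB);  -- regrow in large steps so fewer, bigger VLFs are created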
I have an environment with MS SQL Server 2014 and an AlwaysOn availability group configured on two nodes.
I'm writing a PowerShell script that removes the database from the availability group (on the primary server) and then SHOULD drop the database on the secondary server.
That works most of the time, but not always...
When it fails I get the error message:
Cannot drop database "Customer_2" because it is currently in use.
When I check the secondary DB server (sp_who2) while the script is running, I see that there is a process for the database "Customer_2" with Status = "background", Command = "DB STARTUP", and LastWaitType = "REDO_THREAD_PENDING_WORK".
As soon as the script fails, this process for "Customer_2" disappears.
It always happens only on the second database in the availability group.
Why is the process still there, even after I have removed the database from the availability group on the primary node?
If I remove the database from the availability group manually, the "background" process on the secondary node for that database disappears.
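For reference, the T-SQL equivalent of what the script does, plus the wait I am considering adding; a sketch only, with the availability group name (AG1) as a placeholder, and assuming the secondary copy drops out of sys.dm_hadr_database_replica_states once the removal has been processed locally:

-- On the primary replica:
ALTER AVAILABILITY GROUP [AG1] REMOVE DATABASE [Customer_2];

-- On the secondary replica: wait until the database is no longer joined locally, then drop it.
WHILE EXISTS (SELECT 1
              FROM sys.dm_hadr_database_replica_states AS drs
              WHERE drs.is_local = 1
                AND drs.database_id = DB_ID(N'Customer_2'))
    WAITFOR DELAY '00:00:01';

DROP DATABASE [Customer_2];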
1) We are providing an e-governance solution for an organization with a centralized database. The client has provided 5 database servers for this. How should we position the database servers? There are 5,000 concurrent users out of 25,000 total users, SAN storage of approximately 60 TB, a database size of 2 TB, and growth of 1 TB every year.
2) How many instances can we have for the above case?
We are a web site development company. Previously we did not have a proxy configured; after implementing a proxy, we have an issue connecting to a remote database.
The following error pops up: "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. Error 53".
I've got Reporting Services on a different box from the database and I can see all the reports, but when I try to set up a subscription, I get this weird error:
The SQL Agent service is not running. This operation requires the SQL Agent service. (rsSchedulerNotResponding)
The same error happens when I connect to the database server via management studio and try to run a job.
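A quick check I know of from the database server itself, to confirm whether the Agent service really is stopped; a small sketch using the sys.dm_server_services DMV:

SELECT servicename, status_desc, startup_type_desc
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server Agent%';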
We have a 2-node clustered instance (SQL 2014) with 26 databases, and we would like to enable AlwaysOn for one of the databases for reporting (only one secondary; we do not need a high availability setup). I'm wondering whether the reporting application/queries can explicitly connect to the secondary database (instance name\database name) without using a listener, with the secondary set up in asynchronous-commit mode. I have read about REDO thread blocking due to reporting workload. How does this affect things if I implement the secondary in this way?
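For what it is worth, the configuration I have in mind would look roughly like this; a sketch with placeholder availability group and replica names (AG1, NODE2\INST1):

ALTER AVAILABILITY GROUP [AG1]
    MODIFY REPLICA ON N'NODE2\INST1'
    WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);

ALTER AVAILABILITY GROUP [AG1]
    MODIFY REPLICA ON N'NODE2\INST1'
    WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL));   -- allows direct read connections without a listener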
Does copying mssqlsystemresource.mdf from a recently upgraded server and pasting it onto an old server have the same effect as upgrading via the .exe installation?
My idea is to save time and administrative effort on upgrades (Service Packs and/or Cumulative Updates) using this method.
According to BOL:
The Resource database makes upgrading to a new version of SQL Server an easier and faster procedure. In earlier versions of SQL Server, upgrading required dropping and creating system objects. Because the Resource database file contains all system objects, an upgrade is now accomplished simply by copying the single Resource database file to the local server.
I have SQL 2014. When I try to restore a user database using the SSMS GUI, the Restore Database pop-up box frequently never appears. This happens for any database on this server, at any time: sometimes I get the pop-up, sometimes I don't.
So I tried clicking on Databases at the top, then Restore Database, and selecting the database I need to restore from the drop-down; it then shows "creating restore plan / selecting backups", but it takes forever.
We have a full backup and transaction log backups every 30 minutes. So is it trying to read all of these backup files in the background, causing this issue? If yes, how do I overcome this?
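As a workaround I am considering scripting the restore instead of letting the GUI build the restore plan from the msdb backup history; a minimal sketch with placeholder names and paths:

RESTORE DATABASE [UserDb]
    FROM DISK = N'D:\backups\UserDb_full.bak'
    WITH REPLACE, NORECOVERY;

RESTORE LOG [UserDb]
    FROM DISK = N'D:\backups\UserDb_log_2230.trn'
    WITH NORECOVERY;                              -- repeat for each log backup in sequence

RESTORE DATABASE [UserDb] WITH RECOVERY;          -- bring it online after the last log restore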
I would like to know what happens if I shrink the database with the TRUNCATEONLY option and then do a full backup or transaction log backup. Are the full backup and transaction log backups valid? I know that the performance of the database suffers if I shrink it. What happens to the full backup or transaction log backups?
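To make the scenario concrete, this is the sequence I mean (a sketch with a placeholder database name and backup paths; as I understand it, TRUNCATEONLY only releases unused space at the end of the files without moving any pages):

DBCC SHRINKDATABASE (N'MyDb', TRUNCATEONLY);

BACKUP DATABASE MyDb TO DISK = N'D:\backups\MyDb_full.bak';
BACKUP LOG MyDb TO DISK = N'D:\backups\MyDb_log.trn';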