SQL Server Admin 2014 :: Recover Value Of Column In Database Hosted Online
Jun 3, 2015
In SQL Server, I updated the values of a column by mistake in a database hosted online. Is there any way to undo the transaction? I didn't create any backup of the database. I read that the data can still be recovered through the .ldf (log file), but I am unable to access it. Is there any way to get access to the log file, or any other way to recover the data?
I am asking about a virtual IP for SQL Server: is there a way to assign SQL Server an IP address different from the server's (host) IP address, the same way we do in a clustered environment?
I have configured AutoRecover (save every 3 minutes, keep information for 7 days) under the SSMS 2014 menu: Tools -> Options -> Environment -> AutoRecover.
I've rebooted the box and restarted the SQL Server service and nothing seems to create the files.
I have a 50 GB database with hundreds of columns. I would like to pick a certain column and export its data to a .csv or Excel file. How can I do that? I am very new to MSSQL...
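For what it's worth, a minimal sketch using the bcp utility from a command prompt; the server, database, table, and column names below are placeholders:

bcp "SELECT SomeColumn FROM MyDatabase.dbo.MyTable" queryout "C:\export\SomeColumn.csv" -c -t, -S MyServer -T

Here -c writes character data, -t, sets a comma field terminator, and -T uses Windows authentication; the resulting .csv opens directly in Excel. The SSMS Import/Export wizard (right-click the database -> Tasks -> Export Data) does the same thing interactively.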
I would like to put a clustered index on a date column in a current heap, but I have one question/concern. Every month this heap has thousands of rows deleted and even more added later. How much of a page-split problem will this cause for the clustered index? I was thinking of a fill factor of 70%. I would normally just test, and still will on the dev box, but my dev box is much less powerful than production.
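For reference, a minimal sketch of the statement in question, assuming the heap is dbo.MyHeap and the date column is LoadDate (both names are placeholders):

CREATE CLUSTERED INDEX CIX_MyHeap_LoadDate
ON dbo.MyHeap (LoadDate)
WITH (FILLFACTOR = 70, SORT_IN_TEMPDB = ON);
-- If the date values only ever increase, new rows land at the tail of the index,
-- so page splits come mostly from the deletes; a periodic rebuild with the same
-- fill factor reclaims the gaps they leave behind.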
I am trying to implement column encryption on one of my tables. I used the link below as a reference and got stuck at the last step.
[URL] ....
I have completed the following steps so far.
- CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'myStrongPassword'
- CREATE CERTIFICATE MyCertificateName WITH SUBJECT = 'A label for this certificate'
- CREATE SYMMETRIC KEY MySymmetricKeyName WITH IDENTITY_VALUE = 'a fairly secure name', ALGORITHM = AES_256,
[Code] .....
Example of using the functions:
EXEC OpenKeys
-- Encrypting
SELECT Encrypt(myColumn) FROM myTable
-- Decrypting
SELECT Decrypt(myColumn) FROM myTable
When I ran the last (decrypting) command:
SELECT Decrypt(myColumn) FROM myTable
I get the following error:
Msg 257, Level 16, State 3, Line 2
Implicit conversion from data type nvarchar to varbinary is not allowed. Use the CONVERT function to run this query.
Where should I use the CONVERT function: in the Decrypt function or in the SELECT statement?
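For what it's worth, a minimal sketch assuming the tutorial's Decrypt() wraps the built-in DecryptByKey(), which returns varbinary: the CONVERT belongs in the SELECT statement, wrapped around the decrypted value.

OPEN SYMMETRIC KEY MySymmetricKeyName DECRYPTION BY CERTIFICATE MyCertificateName;
SELECT CONVERT(nvarchar(256), DecryptByKey(myColumn)) AS myColumnPlain
FROM myTable;
CLOSE SYMMETRIC KEY MySymmetricKeyName;
-- nvarchar(256) is an assumption; match it to the original column's declared type.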
I tried encryption on server "A" for the database "AdventureWorks2012", then tried to restore it to server "B". There was a certificate issue, and I thought: of course, it's encrypted! Let's deactivate it. So I ran "ALTER DATABASE AdventureWorks2012 SET ENCRYPTION OFF". I look at sys.databases: not encrypted. I back up using no encryption and verify via msdb.dbo.backupset: not encrypted.
I move my backup to my other server, where encryption was never configured (no certificate, nothing...), and I get this error:
Msg 33111, Level 16, State 3, Line 1
Cannot find server certificate with thumbprint '0xFA130E58C999C4919B8975999C83A75A403B11D8'.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.
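What usually resolves Msg 33111 is recreating the TDE certificate on the destination server before restoring. Note that even after SET ENCRYPTION OFF, the database encryption key still exists, and the backup can reference the certificate until that key is dropped, which would explain these symptoms. A hedged sketch, with all names, paths, and passwords as placeholders:

-- On server "A": back up the certificate with its private key.
USE master;
BACKUP CERTIFICATE MyTDECert
TO FILE = 'C:\temp\MyTDECert.cer'
WITH PRIVATE KEY (FILE = 'C:\temp\MyTDECert.pvk', ENCRYPTION BY PASSWORD = 'StrongPassword!1');
-- On server "B": create a master key if none exists, then import the certificate.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'AnotherStrongPassword!1';
CREATE CERTIFICATE MyTDECert
FROM FILE = 'C:\temp\MyTDECert.cer'
WITH PRIVATE KEY (FILE = 'C:\temp\MyTDECert.pvk', DECRYPTION BY PASSWORD = 'StrongPassword!1');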
I have chosen the destination - unstructured (flat) file. But the wizard proposes to export only one table (dbo.Acocount), and all the others on the list are not exported. How can I export ALL the data into one file? I need to do this so I can edit the syntax in an editor and then import the data and database structure into PostgreSQL.
I have two identical databases, one of which is a backup of the other. Each DB has 2 filegroups. I want to replace one filegroup in one DB with the same filegroup from the other. How do I do this? Or how do I back up and then restore a filegroup?
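One caveat: a filegroup backup can only be restored into the database it was taken from, so a filegroup cannot be swapped directly between two databases. Within a single database, a hedged sketch of the syntax (names are placeholders, full recovery model assumed):

BACKUP DATABASE MyDB FILEGROUP = 'FG2' TO DISK = 'C:\backup\MyDB_FG2.bak';
-- Later, restore just that filegroup:
RESTORE DATABASE MyDB FILEGROUP = 'FG2'
FROM DISK = 'C:\backup\MyDB_FG2.bak' WITH NORECOVERY;
-- ...apply the subsequent log backups, then finish with:
RESTORE DATABASE MyDB WITH RECOVERY;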
Is there a query to show logins that don't have any permissions within the SQL instance? I'm tasked with doing some cleanup and have found cases where a database was deleted or moved to another server, but the logins that used it were not deleted. I'd like to identify them for research.
For instance, a query to show logins that have no permissions in any of the existing databases would be handy. I'm thinking it would be complicated by the need to loop through all of the existing databases and then outer-join the results to the list of instance-level logins. I am going to try to write something like that, but was hoping a script already exists.
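A rough sketch of such a script, approximating "no permissions" as "not mapped to a user in any database"; sp_MSforeachdb is undocumented but widely used for this kind of sweep:

CREATE TABLE #mapped (sid varbinary(85));
EXEC sp_MSforeachdb 'USE [?]; INSERT INTO #mapped SELECT sid FROM sys.database_principals WHERE sid IS NOT NULL;';
SELECT sp.name, sp.type_desc
FROM sys.server_principals AS sp
LEFT JOIN (SELECT DISTINCT sid FROM #mapped) AS m ON m.sid = sp.sid
WHERE sp.type IN ('S', 'U', 'G')   -- SQL logins, Windows logins, Windows groups
AND m.sid IS NULL
AND sp.name NOT LIKE '##%';        -- skip internal certificate-based logins
DROP TABLE #mapped;

This misses logins that hold only server-level permissions, so cross-check sys.server_permissions before dropping anything.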
I have multiple SQL 2008 servers with databases, plus 1 mirroring server in place.
Since my database count is increasing, can I still have only 1 mirroring server? Is there any limit on the number of databases at the mirroring server? I would have approx. 150 databases.
I want to replace a big log database with a new one (a database with the same structure), but the current DB has many connections.
This is my plan:
1- Create a new database with the same structure.
2- Rename the current database to OldDataBase with this code:

USE master
GO
EXEC sp_dboption 'CurDataBase', 'Single User', True
EXEC sp_renamedb 'CurDataBase', 'OldDataBase'
GO

3- Rename NewDataBase to the current DB:

USE master
GO
EXEC sp_renamedb 'NewDataBase', 'CurDataBase'
Is this correct, and is the T-SQL code OK? (Don't forget the many connections to CurDataBase (it is a log DB); losing a few seconds of data is not a problem.)
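One caution on the code: sp_dboption was removed in SQL Server 2012, so on current versions the same swap is written with ALTER DATABASE. A hedged sketch:

USE master;
ALTER DATABASE CurDataBase SET SINGLE_USER WITH ROLLBACK IMMEDIATE; -- kick out the open connections
ALTER DATABASE CurDataBase MODIFY NAME = OldDataBase;
ALTER DATABASE NewDataBase MODIFY NAME = CurDataBase;
ALTER DATABASE OldDataBase SET MULTI_USER; -- the single-user flag travels with the renamed DB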
My database went into suspect mode. After we ran some scripts it came out of suspect mode, but we encountered this error while opening a table in the database:
2009-11-02 15:46:42.90 spid51 Error: 824, Severity: 24, State: 2.
2009-11-02 15:46:42.90 spid51 SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:43686; actual 0:0). It occurred during a read of page (1:43686) in database ID 23 at offset 0x0000001554c000 in file 'H:\MSSQL.SQL2008\MSSQL\DATA\my_db.mdf'.
Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
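As the message itself instructs, the first step is a full consistency check; a minimal example:

DBCC CHECKDB ('my_db') WITH NO_INFOMSGS, ALL_ERRORMSGS;
-- If corruption is confirmed, restoring from a known-good backup is the safe fix;
-- REPAIR_ALLOW_DATA_LOSS is a last resort and can silently discard pages.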
I tried to copy a DB from server to server as the sa user (SQL login), but this error was raised and the copy failed:
Executed as user: NT Service\SQLSERVERAGENT. Microsoft (R) SQL Server Execute Package Utility Version 11.0.2100.60 for 64-bit Copyright (C) Microsoft Corporation. All rights reserved. Started: 9:55:24 AM Progress: 2015-05-11 09:55:24.45 Source: 10_32_0_201_10_32_0_202_Transfer Objects Task Task just started the execution.: 0% complete End Progress Error: 2015-05-11 09:56:31.87 Code: 0x00000000 Source: 10_32_0_201_10_32_0_202_Transfer Objects Task
An error occurred while transferring data. See the inner exception for details. StackTrace: at Microsoft.SqlServer.Management.Smo.Transfer.TransferData() The Execution method succeeded, but the
[code]....
number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors. End Warning DTExec: The package execution returned DTSER_FAILURE (1). Started: 9:55:24 AM Finished: 9:56:32 AM Elapsed: 67.892 seconds. The package execution failed. The step failed.
Is there a better way to deal with virtual log files? I see several approaches to reducing the number of virtual log files for a database, and I want to know the best and safest approach from the masters here.
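The approach I've usually seen (a sketch with placeholder names and sizes): check the VLF count, shrink the log, then regrow it in one step so SQL Server lays down fewer, larger VLFs:

DBCC LOGINFO ('MyDatabase');   -- one result row per VLF; thousands of rows is the problem
USE MyDatabase;
DBCC SHRINKFILE (MyDatabase_log, 1024);   -- shrink the log file, here to ~1 GB
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, SIZE = 8192MB, FILEGROWTH = 1024MB);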
I have an environment with MS SQL Server 2014 and an AlwaysOn availability group configured (on 2 nodes).
I'm writing a PowerShell script that removes the database from the availability group (on the primary server) and then SHOULD drop the database on the secondary server.
That works most of the time, but not always...
When it fails I get the error message:
Cannot drop database "Customer_2" because it is currently in use.
When I check the secondary DB server (sp_who2) while the script is running, I see that there is a process for the DB "Customer_2" with Status = "background", Command = "DB STARTUP", and LastWaitType = "REDO_THREAD_PENDING_WORK".
As soon as the script fails, this process for "Customer_2" disappears.
This always happens, and only with the second database in the availability group.
Why is the process still there, even after I removed the database from the availability group on the primary node?
If I remove the database from the availability group manually, the "background" process on the secondary node for that database disappears.
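A hedged workaround sketch for the secondary: detach the local copy from the availability group first, so the background REDO thread releases it, then drop (DROP DATABASE works on a database left in RESTORING state):

ALTER DATABASE [Customer_2] SET HADR OFF;  -- remove this replica's copy from the AG locally
DROP DATABASE [Customer_2];

Alternatively, a short retry loop around DROP DATABASE in the PowerShell script rides out the brief window in which the REDO thread still holds the database.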
1) We are providing an e-governance solution for an organization with a centralized database. The client has provided 5 database servers for it. How should we position the database servers? There are 5,000 concurrent users out of 25,000 total users, SAN storage of approx. 60 TB, a database size of 2 TB, and growth of 1 TB every year.
2) How many instances can we have for the above case?
We are a website development company. Previously we had no proxy configuration; after implementing a proxy, we have an issue connecting to a remote database.
The error that pops up is: "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. Error 53".
I've got Reporting Services on a different box from the database, and I can see all the reports, but when I try to set up a subscription, I get this weird error:
The SQL Agent service is not running. This operation requires the SQL Agent service. (rsSchedulerNotResponding)
The same error happens when I connect to the database server via management studio and try to run a job.
We have a 2-node clustered instance (SQL 2014) with 26 databases, and we would like to enable AlwaysOn for one of the databases for reporting (only one secondary; we do not need a high-availability setup). I'm wondering if the reporting application/queries can explicitly connect to the secondary database (instance name\database name) without using a listener, with the secondary set up in asynchronous-commit mode. I have read about the REDO thread being blocked by reporting workload. How does this affect things if I implement the secondary this way?
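That direct connection works if the secondary replica is set to allow all read connections; a hedged sketch, with the AG, replica, and connection-string names as placeholders:

ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON 'NODE2\INST1'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL));
-- Reporting then connects straight to the secondary instance, no listener needed:
-- Server=NODE2\INST1;Database=MyReportDB;ApplicationIntent=ReadOnly;

On the REDO question: long reporting queries can block the REDO thread on the secondary, which delays how current the secondary's data is but does not slow down the primary.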