SQL Server Admin 2014 :: Error While Updating Data Using Oracle Linked Server
Sep 11, 2015
We have an Oracle linked server created on one of our SQL Server 2008 Standard instances; we fetch data from Oracle and update some records in SQL Server. This previously worked fine, but we are suddenly facing the issue below.
The following error occurred during the process:
OLE DB provider "OraOLEDB.Oracle" for linked server "<linkedservername>" returned message "".
Msg 7346, Level 16, State 2, Line 1
Cannot get the data of the row from the OLE DB provider "OraOLEDB.Oracle" for linked server "<linked server name>".
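For context, Msg 7346 often appears when SQL Server asks the provider for rows one at a time; a common workaround is to push the whole statement to Oracle as a pass-through query so it executes remotely. A minimal sketch, with placeholder linked server and table names:

    -- Pass-through read instead of four-part naming
    SELECT *
    FROM OPENQUERY(ORACLE_LNK, 'SELECT id, status FROM remote_schema.remote_table');

    -- Run the update remotely on Oracle (requires the linked server option "rpc out")
    EXEC ('UPDATE remote_schema.remote_table SET status = 1 WHERE id = 42') AT ORACLE_LNK;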
We are upgrading from SQL Server 2008 R2 to SQL Server 2014, but we have discovered that a couple of our applications are not supported on 2014. We'd like to keep one 2008 R2 server and one 2014 server until we have time to upgrade the applications and move everything to the new server. The problem is we have custom code in some of the 2014 databases that accesses tables in the 2008 databases.
I know we can easily do cross-server joins by using a linked server, but it would be a huge undertaking to find all that code and add a linked server name in front of every table, stored procedure, etc. So my question is: is there any way to move a database to a different server and still be able to access it without having to qualify the object names with a linked server name? Is there some kind of server/database synonym that can be set up that would be recognized by all databases?
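For what it's worth, there is no server- or database-level synonym, but object-level synonyms can point at four-part linked-server names and can be generated in bulk from the catalog views. A sketch, assuming a hypothetical linked server OLD2008 and database LegacyDB:

    -- One synonym per remote object; existing code keeps using dbo.Customers unchanged
    CREATE SYNONYM dbo.Customers FOR [OLD2008].[LegacyDB].[dbo].[Customers];

    SELECT TOP (10) * FROM dbo.Customers;

Synonyms also resolve for stored procedures (EXEC dbo.MyProc works through a synonym), so a script generated over sys.tables and sys.procedures on the old server could cover most of the custom code.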
I'm looking for a quick script that someone has already written to update statistics (not to rebuild or reorganise) on specific indexes in specific databases; I guess it would loop through a table containing a list of databases and the indexes.
I know Ola Hallengren has one, but I'm not looking for something that complicated. If I cannot find one I'm going to have to write one myself. I want to avoid reinventing the wheel, as tomorrow I have to do this work across roughly 7,000 indexes in 10+ databases.
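A minimal sketch of that kind of loop, assuming a hypothetical work table dbo.StatsTargets listing database, schema, table, and index names:

    DECLARE @db sysname, @sch sysname, @tbl sysname, @ix sysname, @sql nvarchar(max);

    DECLARE c CURSOR LOCAL FAST_FORWARD FOR
        SELECT DatabaseName, SchemaName, TableName, IndexName FROM dbo.StatsTargets;
    OPEN c;
    FETCH NEXT FROM c INTO @db, @sch, @tbl, @ix;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- UPDATE STATISTICS accepts the index name as the statistics target;
        -- default sampling is used here, add WITH FULLSCAN etc. as needed
        SET @sql = N'UPDATE STATISTICS ' + QUOTENAME(@db) + N'.' + QUOTENAME(@sch)
                 + N'.' + QUOTENAME(@tbl) + N' ' + QUOTENAME(@ix) + N';';
        EXEC sys.sp_executesql @sql;
        FETCH NEXT FROM c INTO @db, @sch, @tbl, @ix;
    END
    CLOSE c; DEALLOCATE c;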
I am planning to delete a login from the SQL logins because the user has moved off the project. When I try to delete the login, it throws an error saying "The server principal owns an endpoint and cannot be dropped" (error 15141).
I am facing the same problem on different servers.
Note: the environment is SQL Server 2012 and SQL Server 2008, including clustered servers.
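For reference, error 15141 usually means endpoint ownership has to be transferred before the login can be dropped. A sketch; the login and endpoint names are placeholders, found via sys.endpoints:

    -- Find endpoints owned by the login
    SELECT e.name
    FROM sys.endpoints e
    JOIN sys.server_principals p ON p.principal_id = e.principal_id
    WHERE p.name = N'DOMAIN\DepartedUser';

    -- Transfer ownership, then drop the login
    ALTER AUTHORIZATION ON ENDPOINT::Hadr_endpoint TO sa;
    DROP LOGIN [DOMAIN\DepartedUser];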
I set up the collector, and specified my AD account as Run As in the Collector Set - Properties - General screen. My AD account is a local admin of the remote server.
However, the collector does not seem to work. Although the collector set is shown as running, the .blg file stays at 64K. If I open it, there is nothing inside (no counters at the bottom). What did I miss?
I have a SQL Server 2014 instance running on a Windows Server 2008 R2 server. The server is not clustered; it is just a stand-alone SQL Server. The syspolicy_purge_history job fails every now and then with the error message: "A job step received an error at line 1 in a PowerShell script. The corresponding line is 'import-module SQLPS -DisableNameChecking'. Correct the script and reschedule the job. The error information returned by PowerShell is: 'Access to the path 'PowerShell_CommandAnalysis_Lock' is denied.'. Process Exit Code -1. The step failed."
Google isn't bringing up much besides the whole "If this is a clustered server, make sure you have the right server name in the command" answer, which isn't the case here. Some days this job fails and some days it succeeds. I have checked Task Scheduler to see if there were any conflicts there and found nothing. Nothing in the Event Viewer either.
I am trying to create a job that runs against my High Availability listener server.
It is a fairly simple SQL statement in the job - an Execute T-SQL step.
When I try and run the job I get the error:
Executed as user: NT SERVICE\SQLAgent$SQL2014A. The target database ('BB_Prod') is in an availability group and is currently accessible for connections when the application intent is set to read only. For more information about application intent, see SQL Server Books Online. [SQLSTATE 42000] (Error 978). The step failed.
I thought there was a way to run a SELECT statement as a job against the listener? The T-SQL step is only a SELECT.
Is there a way to pass in application intent = readonly as part of my SQL statement?
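Application intent is a connection-string property, not something a T-SQL batch can switch mid-connection, so the usual options are a CmdExec job step running sqlcmd with -K ReadOnly, or a linked server to the listener that carries the read-only intent. A sketch of the latter, with hypothetical linked server and listener names:

    EXEC master.dbo.sp_addlinkedserver
        @server     = N'AG_READONLY',                -- hypothetical name
        @srvproduct = N'',
        @provider   = N'SQLNCLI11',
        @datasrc    = N'MyListener',                 -- the AG listener
        @provstr    = N'ApplicationIntent=ReadOnly',
        @catalog    = N'BB_Prod';

    -- The job step can then select through the read-intent connection:
    SELECT COUNT(*) FROM AG_READONLY.BB_Prod.dbo.SomeTable;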
I have an environment with MS SQL Server 2014 and an AlwaysOn availability group configured (on 2 nodes).
I'm writing a PowerShell script which removes the database from the availability group (on the primary server) and then SHOULD drop the database on the secondary server.
That works most of the time, but not always...
When it fails I get the error message:
Cannot drop database "Customer_2" because it is currently in use.
When I check the secondary DB server (sp_who2) while the script is running, I see that there is a process for the DB "Customer_2" with Status="background", Command="DB STARTUP" and LastWaitType="REDO_THREAD_PENDING_WORK".
As soon as the script fails, this process for "Customer_2" disappears.
This always happens only on the second database in the availability group.
Why is the process still there, even after I removed the database from the availability group on the primary node?
If I remove the database from the availability group manually, the "background" process on the secondary node for that database disappears.
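One hedged workaround is to take the local copy out of the AG on the secondary itself before dropping it, rather than relying only on the primary-side removal; the HADR OFF step stops the redo (DB STARTUP) worker. A sketch, run on the secondary:

    -- Remove the local copy from the availability group
    ALTER DATABASE [Customer_2] SET HADR OFF;

    -- The database is left in RESTORING state; it can then be dropped
    DROP DATABASE [Customer_2];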
I need to be able to run UPDATE queries against DB2 on an IBM mainframe from a linked server. SELECT and INSERT queries work, but UPDATE and DELETE queries don't. DB2 Connect is installed, and ODBC system DSNs are created for the DEV and production DB2 environments.
The ODBC drivers can be selected when running Imports/Exports but can't be specified through a linked server.
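When a provider won't accept updates through four-part names, pass-through execution sometimes still works, provided "rpc out" is enabled on the linked server. A sketch with placeholder linked server and table names:

    -- Allow pass-through execution on the linked server
    EXEC master.dbo.sp_serveroption
        @server = N'DB2PROD', @optname = N'rpc out', @optvalue = N'true';

    -- Send the UPDATE to DB2 as-is instead of letting SQL Server drive it row by row
    EXEC ('UPDATE MYLIB.CUSTOMER SET STATUS = ''A'' WHERE ID = 42') AT DB2PROD;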
I have chosen the destination - an unstructured (flat) file. But the wizard proposes to export only one table (dbo.Acocount), and all the others from the list are not exported. How can I export ALL the data into one file? I need to do this to edit the syntax in the editor and then import this data and database structure into PostgreSQL.
I have a requirement to implement CDC for 50+ tables, to feed incremental data changes into a warehouse/reporting system rather than exporting the whole table data. The largest table has more than half a billion records.
The warehouse uses a daily copy of the OLTP db (daily DB refresh). How can I accomplish this? Is there a downside to implementing CDC just for the sake of capturing incremental changes on the tables?
Is there any performance impact if we enable CDC on the OLTP db?
Can we make use of the CDC tables on the environment where we do the daily db refresh, so that the queries don't hit the OLTP database?
What is the best way to implement CDC to capture incremental changes for reporting?
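For context, CDC is enabled once per database and then per table; a minimal sketch, with a placeholder table name (net changes require a primary key or unique index on the source table):

    -- Run in the OLTP database
    EXEC sys.sp_cdc_enable_db;

    -- Repeat per table (50+ in this case)
    EXEC sys.sp_cdc_enable_table
        @source_schema        = N'dbo',
        @source_name          = N'Orders',   -- placeholder table
        @role_name            = NULL,        -- no gating role
        @supports_net_changes = 1;

    -- Consumers then read changes through the generated functions, e.g.
    -- cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, N'all')

The capture job does add log-reader work on the OLTP side, which is the main performance cost to weigh.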
We are a web site development company. Previously we didn't have a proxy configuration; after implementing a proxy, we have an issue connecting to a remote database.
The error pops "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. Error 53".
I have been facing the following error in a failover cluster setup. I have prepared a 2-node, 2-instance SQL Server failover cluster on top of Windows failover clustering. I have deleted MTCBJINS07 in AD and recreated it, but even after that the problem is not solved. MTCBJINS07 is the SQL Server network name of my second SQL instance.
Cluster network name resource 'SQL Network Name (MTCBJINS07)' failed registration of one or more associated DNS name(s) for the following reason:
DNS bad key. Ensure that the network adapters associated with dependent IP address resources are configured with at least one accessible DNS server.
We have deleted 120 GB of data but the space was not released even after 2 days. Is there any reason for this? How exactly does it release the space after truncating a 120 GB table?
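For reference, deleting or truncating frees pages inside the data files, but the files themselves keep their size until an explicit shrink; that is usually why the space never shows up at the OS level. A sketch to check and reclaim, with a placeholder file name:

    -- See how much of each file is actually used
    SELECT name,
           size / 128 AS size_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
    FROM sys.database_files;

    -- Return free space to the OS (target size in MB; run in a quiet window)
    DBCC SHRINKFILE (N'MyDataFile', 10240);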
I am trying to replicate data from a view in the publisher to a table in the subscriber (transactional replication). I do not need the view's base table, or the view itself, replicated to the subscriber. I only want the data from the view to feed a table in the subscriber.
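If the view can be made an indexed view on the publisher, transactional replication can publish it as a table on the subscriber; a sketch with placeholder publication and view names:

    EXEC sp_addarticle
        @publication       = N'MyPub',                  -- placeholder
        @article           = N'vw_CustomerSummary',     -- an indexed view
        @source_owner      = N'dbo',
        @source_object     = N'vw_CustomerSummary',
        @type              = N'indexed view logbased',  -- delivered as a table
        @destination_table = N'CustomerSummary';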
If data is modified (by an insert, update, or delete) while the backup is running, will the backup contain those changes, or will they be added to the database afterwards?
I was running an operation to shrink file/emptyfile a data file, and then remove it.
It blocked and caused a huge mess; I suspect the removal part was to blame. But I want to confirm that the emptyfile completed (and that the engine isn't going to try to put more data in there before I schedule the removal part again, a week or more from now).
How does the engine know not to put any more data in there, and how long does that situation last?
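For reference, a completed EMPTYFILE leaves the file flagged so the engine stops allocating new extents in it, and that state persists until the file is removed; whether the emptying finished can be read from the file's used pages. A sketch, with a placeholder file name:

    -- If used_mb is at or near zero, the emptyfile pass completed
    SELECT name,
           FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
    FROM sys.database_files
    WHERE name = N'OldDataFile';

    -- The removal itself, when rescheduled:
    -- ALTER DATABASE CURRENT REMOVE FILE [OldDataFile];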
We have an AlwaysOn setup in our environment with a read-only replica. The primary database has 2 schemas: one is dbo and the other is xyz. We have some stored procs created in the dbo schema and the xyz schema. These stored procs are used by SSRS reports to retrieve data (select only); no data changes will be made.
When we run the stored procs from the read-only server, the ones in the dbo schema run fine, but the xyz schema ones fail with a message saying it failed to update the database as this is a read-only...
I am working on adding DBs to the AG but for some reason I am getting this error.
"Joining DB on secondary replica resulted in error"
Msg: "The remote copy of the database is not recovered far enough to enable DB mirroring or to join AG. Missing log records have to be applied to the remote DB by restoring the current log backups", which I did. I took the log backup of DB1 and restored it on DB2 with NORECOVERY, but still I am getting the same error.
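For context, the usual sequence is to restore the latest full backup plus every subsequent log backup WITH NORECOVERY on the secondary, then join; if even one recent log backup is skipped, this error persists. A sketch with the post's database name and a placeholder AG name and paths:

    -- On the secondary replica
    RESTORE DATABASE [DB1] FROM DISK = N'\\share\DB1_full.bak' WITH NORECOVERY;
    RESTORE LOG      [DB1] FROM DISK = N'\\share\DB1_log1.trn' WITH NORECOVERY;
    -- ...restore every later log backup the same way, then:
    ALTER DATABASE [DB1] SET HADR AVAILABILITY GROUP = [MyAG];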
I am using Windows Server 2003 and SQL Server 2005. Using a linked server, I made a connection to Oracle 10g, after which I import records from Oracle into SQL Server 2005. When I created tnsnames.ora on the SQL machine it worked fine, but when I use the tnsnames file from the Oracle server and fire the import procedure, it returns the error below:
OLE DB provider "MSDAORA" for linked server "BI_ORACLE_LS" returned message "Unspecified error".
OLE DB provider "MSDAORA" for linked server "BI_ORACLE_LS" returned message "Oracle error occurred, but error message could not be retrieved from Oracle.".
Msg 7311, Level 16, State 2, Line 1
Cannot obtain the schema rowset "DBSCHEMA_TABLES" for OLE DB provider "MSDAORA" for linked server "BI_ORACLE_LS". The provider supports the interface, but returns a failure code when it is used.
I have a heavy database, more than 100 GB for only six months of data. Every query on it takes a long time, and I don't have enough space to add more indexes, so I decided to do partitioning. I created a partition function on a date field, and each month's data records are assigned to a separate file. Also, is partitioning only for future data entry?
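A minimal sketch of monthly partitioning, with hypothetical boundary dates and filegroup names; rebuilding the clustered index on the scheme moves the existing rows into their partitions, so it is not only for future data:

    -- One boundary per month; RANGE RIGHT puts each boundary date in the partition to its right
    CREATE PARTITION FUNCTION pf_Monthly (date)
        AS RANGE RIGHT FOR VALUES ('2015-01-01', '2015-02-01', '2015-03-01');

    -- One more filegroup than boundaries
    CREATE PARTITION SCHEME ps_Monthly
        AS PARTITION pf_Monthly TO (FG_OLD, FG_201501, FG_201502, FG_201503);

    -- Assuming an existing clustered index named cix_Orders on a hypothetical dbo.Orders
    CREATE CLUSTERED INDEX cix_Orders ON dbo.Orders (OrderDate)
        WITH (DROP_EXISTING = ON) ON ps_Monthly (OrderDate);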
I recently installed a standalone version of SQL Server 2014 Standard on my work computer. I used Access before, but I want to use a SQL server instead.
We have a shared drive where a file gets deposited every day at midnight. I want to be able to get this file and import it into the server (it's basically a list of names).
Here is what I have done so far:
I created the database
Created the file and successfully imported data into it using the Import Data feature.
I saved the SSIS package
Scheduled an Agent job for this package to run at a certain time, daily
At first the jobs would fail with an "Access is Denied" error. I added a user under Credentials with my network account (which has admin rights on the work computer), and also added a proxy for the credential user I made.
Jobs now fail with a "Cannot open data file" error. I tried changing things here and there, but I can't get it to work.
I need to modify a table to reside on a new filegroup and also point TEXTIMAGE_ON to that filegroup instead of PRIMARY. Apparently in the past, the only way to achieve this via SQL was to create a new table, copy over the data, drop the old table, and rename the new table to the original name. I found this solution in the SQL Server 2005 forum.
Is there any other way to alter this table in order to point TEXTIMAGE_ON to the new filegroup using SQL Server 2014? We are on Standard edition. The technique I am using is the drop constraint (with MOVE option) and add constraint (to new filegroup) commands. The data and indexes move, but not the text data (it stays in the PRIMARY filegroup).
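As far as I know that limitation still holds in SQL Server 2014: TEXTIMAGE_ON is fixed when the table is created, so the rebuild-and-rename approach remains the way to move the LOB data. A sketch with placeholder names:

    -- New table with LOB data placed on the target filegroup
    CREATE TABLE dbo.MyTable_New
    (
        Id   int IDENTITY(1,1) NOT NULL CONSTRAINT PK_MyTable_New PRIMARY KEY,
        Blob varbinary(max) NULL
    )
    ON FG_NEW TEXTIMAGE_ON FG_NEW;

    -- Copy the data, drop the old table, and take over its name
    SET IDENTITY_INSERT dbo.MyTable_New ON;
    INSERT dbo.MyTable_New (Id, Blob)
        SELECT Id, Blob FROM dbo.MyTable;
    SET IDENTITY_INSERT dbo.MyTable_New OFF;

    DROP TABLE dbo.MyTable;
    EXEC sp_rename 'dbo.MyTable_New', 'MyTable';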