The secondary server for my availability group was recycled. When SQL Server came back online the data movement for a database was suspended. The error log shows:
"AlwaysOn Availability Groups data movement for database 'XXXXXXXXX' has been suspended for the following reason: "system" (Source ID 5; Source string: 'SUSPEND_FROM_RESTART'). To resume data movement on the database, you will need to resume the database manually. For information about how to resume an availability database, see SQL Server Books Online."
I was able to resume data movement with no issue. I would like to understand the technical reason WHY the data movement was put in the suspended state and left there after the recycle. I searched for an article that would list possible reasons (BOL, Google, Bing, etc.) but just could not find much information out there on 'SUSPEND_FROM_RESTART'.
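For reference, resuming is a one-liner (with the database name redacted as in the log):

ALTER DATABASE [XXXXXXXXX] SET HADR RESUME;

The GUI equivalent is right-clicking the database under the availability group in SSMS and choosing Resume Data Movement.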
I am aware that TDE protects data at rest, not data in motion or during communication (unless you use encrypted communication channels, e.g. SSL certificates). Hence I am thinking of exporting data from a TDE-encrypted database to a database on an instance where TDE is not enabled or supported. I believe this works; I just need to take care of the relationships between tables. The target database is hosted on SQL Server 2012 Standard Edition, on which TDE is not supported.
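A minimal sketch of what I am picturing, assuming a linked server named [StdServer] pointing at the Standard Edition instance and a hypothetical Customer table (once the rows land in the unencrypted target they are of course no longer protected at rest):

INSERT INTO [StdServer].TargetDb.dbo.Customer (CustomerID, CustomerName)
SELECT CustomerID, CustomerName
FROM SourceDb.dbo.Customer;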
I have transactional replication set up from Server A to Server B. I want to move the subscriber from B to C. What would be the best approach?
1. Back up the DB from Server B and restore it on Server C, then set up the replication between A & C.
2. Set up transactional replication between A & C alongside A & B, test that A & C is working fine, and then remove B.
If I go with approach 2, I have to replicate approx. 70 GB, so if I run both replications on Server A that will stress it with 140 GB of data moving out. How do I control this large movement? Can the replication be synchronized manually?
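One way I am considering to avoid pushing a snapshot at all: initialize the new subscription on C from a backup. A sketch with hypothetical names (the publication needs @allow_initialize_from_backup enabled):

EXEC sp_addsubscription
    @publication = N'MyPublication',
    @subscriber = N'ServerC',
    @destination_db = N'SubscriberDB',
    @sync_type = N'initialize with backup',
    @backupdevicetype = N'disk',
    @backupdevicename = N'\\backupshare\PublisherDb.bak';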
I want to continuously monitor a source table throughout the day and, as data becomes available, process it and insert it into one of a number of tables.
I have tried achieving this using a FOR LOOP with the halt condition set so that it is never satisfiable. However, this has a couple of problems:
1) It runs in a tight loop and consequently degrades system performance enormously.
2) I can't get transactions to work. I would like each iteration of the loop to spawn a new transaction under which the tasks in the loop can run. Therefore, if one of the tasks fails during such an iteration, only the updates affected by that iteration are lost.
Ideally, I would like to be able to put a wait statement within the loop container so that it runs every couple of seconds. And would also like to implement transactions as described above.
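For the wait, it seems a simple Execute SQL Task inside the loop container running the statement below would pace each iteration at roughly two-second intervals instead of a tight loop:

WAITFOR DELAY '00:00:02';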
I'm getting this error on expanding the Databases node in SQL Server Management Studio Express Object Explorer:
Failed to retrieve data for this request. (Microsoft.SqlServer.Express.SmoEnum)
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.Express.ConnectionInfo)
Could not continue scan with NOLOCK due to data movement. (Microsoft SQL Server, Error: 601)
A similar error message also appeared when I tried opening an existing data connection within Database Explorer in Visual Web Developer Express. (However, the web application ran fine and it managed to access the database normally.)
These errors only appeared recently. Any ideas how to go about solving this issue?
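Since error 601 outside of deliberate NOLOCK queries can point at corruption, a consistency check seems like a reasonable first step (the database name is a placeholder):

DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;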
In my organization, we have a database which has grown beyond 2 GB. The application we are working on has been in development since the days of ASP / VB6 and, in short, is the baby of many developers. It is a kind of ERP which has no documentation.
Problem 1: ========= Now, to reduce the size of the database, we have identified some orphan procedures, tables, and columns. Is it possible for me to create a DTS package which backs up all the orphan stuff to some other database and deletes it from the actual one? Moreover, we still have doubts that some of the fields we think are orphans might not be, so we need another DTS package which rolls back all the changes. (Any ideas for that?)
Problem 2: ========= We want another DTS package which would move the data from one database to another, and a matching rollback DTS package for this one. (Any ideas for that?)
Problem 3: ========= How can we use DTS programming to do these tasks? What benefits do I get from DTS programming over the DTS Wizard?
We have a stored procedure that failed with: Could not continue scan with NOLOCK due to data movement
We are running with ISOLATION LEVEL READ UNCOMMITTED. There are other jobs running, some of which might be hitting these tables (all using the ROWLOCK hint - though I know that's not guaranteed); however, this stored proc would not be going near the same rows. But even if it were, we'd be happy with either the before-look or the after-look. This needs to be a low-impact job with minimal impact on the other jobs, so we can't take out locks. Is there any hint we can use to do this? E.g., can we tell the query to just wait until the data has stopped moving and then try again?
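The closest thing I can picture is catching the error and retrying after a short back-off; a sketch assuming SQL Server 2005+ for TRY/CATCH (the proc name is hypothetical):

DECLARE @tries int, @msg nvarchar(2048);
SET @tries = 0;
WHILE @tries < 3
BEGIN
    BEGIN TRY
        EXEC dbo.usp_LowImpactJob;  -- the statement/proc that hits error 601
        BREAK;                      -- success: stop retrying
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() <> 601    -- only retry the data-movement error
        BEGIN
            SET @msg = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);
            BREAK;
        END
        SET @tries = @tries + 1;
        WAITFOR DELAY '00:00:05';   -- give the data a moment to stop moving
    END CATCH
END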
I have a table named sales with 20 rows; the total number of rows for one stor_id is 5. I assume that when I create a cursor like the one below it will have 5 rows, and that if I print a statement I will get it 5 times (once for each row). Am I correct? Well, if that is the concept, I am still getting only one printed statement. I do not know why; am I doing something wrong? Thanks for the help.
CREATE PROCEDURE p_cursor @ord_nbr char(4)
AS
DECLARE rst CURSOR FOR
    SELECT * FROM sales WHERE stor_id = 123
OPEN rst
FETCH rst
IF (@@FETCH_STATUS = 0)
    PRINT ' I may have 5 rows '
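For comparison, a sketch of the looping version: FETCH pulls one row at a time, so without a WHILE loop the PRINT runs at most once no matter how many rows the cursor holds.

CREATE PROCEDURE p_cursor @ord_nbr char(4)
AS
BEGIN
    DECLARE rst CURSOR FOR
        SELECT * FROM sales WHERE stor_id = 123;
    OPEN rst;
    FETCH NEXT FROM rst;              -- fetch the first row
    WHILE @@FETCH_STATUS = 0          -- keep going while the last fetch succeeded
    BEGIN
        PRINT ' I may have 5 rows ';  -- now runs once per fetched row
        FETCH NEXT FROM rst;
    END
    CLOSE rst;
    DEALLOCATE rst;
END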
We're very interested in having our application use SQL/e but we can't have the 4 gig limit. It makes sense to me that SQL/e simply should not be able to access a file over the network, and then you wouldn't have any reason to put a 4 gig limit on it.
At that point it becomes a very flexible alternative for remote users that need to have a large amount of data (i.e. documents, images etc.) with them.
We'd love to start building an abstraction layer supporting both SQL Server and SQL/e, so that we can cover both network and remote users without the nightmare that is SQL Server Express installation (courtesy of the Windows Installer group's bugs...).
I stored resumes in the database with the Image datatype. But now I want to retrieve a resume, because if a user wants to edit their resume I should then store the updated resume back into my database. Give me ideas... Thanks in advance!!
Often when I write a stored procedure, I encounter a situation where it would be really convenient if I could ignore an error and continue execution with the next SQL statement, especially when I know what kind of error it will generate. It's just like the effect of "On Error Resume Next" in VB.
Does anyone have any ideas or knowledge to share? I would really appreciate it.
I am using SQL Server 2005 and SQL Server 2000. Thanks.
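On the SQL Server 2005 side, TRY/CATCH seems to be the nearest equivalent (SQL Server 2000 has no TRY/CATCH; the closest there is checking @@ERROR after each statement). A sketch with a hypothetical insert:

BEGIN TRY
    INSERT INTO dbo.Target (Id) VALUES (1);  -- may fail on a duplicate key, say
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();  -- log or ignore the error
END CATCH
-- execution continues here regardless of the error above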
I have a stored procedure containing an iterating cursor in which I am inserting records into a table. My problem is that whenever any data mismatch occurs, the whole process stops. I want it to skip the error record and continue with the next iteration, like On Error Resume Next in VB. Please suggest.
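The shape I am after, sketched with hypothetical names: the TRY/CATCH wraps just the insert inside the cursor loop, so a bad row is skipped rather than fatal.

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        INSERT INTO dbo.Target (Col1) VALUES (@col1);
    END TRY
    BEGIN CATCH
        PRINT 'Skipping bad record: ' + ERROR_MESSAGE();
    END CATCH
    FETCH NEXT FROM my_cursor INTO @col1;  -- move on to the next record
END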
I have set up log shipping in SQL 2005, with a primary and a secondary server only. Suppose I bring the primary down for maintenance; I then have to bring the secondary server online manually, using RESTORE DATABASE WITH RECOVERY.
1) So far so good. But what now? How can I re-sync the old primary with the new primary?
2) Can I return the original secondary back to standby mode so that the primary can resume its role and the log shipping process proceeds?
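For question 1, the role-reversal trick as I understand it: back up the tail of the log on the old primary WITH NORECOVERY, which leaves that database in a restoring state ready to accept log backups from the new primary (path hypothetical):

BACKUP LOG MyDatabase
    TO DISK = N'\\backupshare\MyDatabase_tail.trn'
    WITH NORECOVERY;
-- restore this tail backup on the new primary, then configure log shipping
-- in the reverse direction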
If we have a situation where the mirror server is unavailable for a day... very unlikely, but it happens when servers are moved from one data center to another... let's assume we are running with safety off (high-performance mode).
Should we pause mirroring on the principal server?
When we pause, does the transaction log keep growing? Can we back up the logs?
If we don't pause mirroring:
Does the transaction log keep growing? Can we back up the logs?
What can we expect when we bring the mirror server online the next day? Will there be any performance problems while mirroring tries to catch up...?
We are considering moving to mirroring from log shipping for our disaster recovery.
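For reference, pausing and resuming is a pair of commands on the principal (database name hypothetical). Either way - paused, or left running with the mirror disconnected - the principal cannot truncate log it has not yet sent to the mirror, so the log will keep growing until the session catches up; regular log backups still work and should continue throughout.

ALTER DATABASE MyDatabase SET PARTNER SUSPEND;
-- ... after the mirror server is back online ...
ALTER DATABASE MyDatabase SET PARTNER RESUME;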
Hello everyone,

There's an interesting SQL problem I've come across that I'm currently banging my head against. Given the following table that contains item location information populated every minute:

location_id  date_created
===========  ================
5            2000-01-01 01:00  <-- Don't need
5            2000-01-01 01:01  <-- Don't need
5            2000-01-01 01:02  <-- Need
7            2000-01-01 01:03  <-- Don't need
7            2000-01-01 01:04  <-- Need
5            2000-01-01 01:05  <-- Need
2            2000-01-01 01:06  <-- Don't need
2            2000-01-01 01:07  <-- Need
7            2000-01-01 01:08  <-- Need

how would you generate a result set that returns the item's location history *without* duplicating the same location if the item has been sitting in the same room for a while? For example, the result set should look like the following:

location_id  date_created
===========  ================
5            2000-01-01 01:02
7            2000-01-01 01:04
5            2000-01-01 01:05
2            2000-01-01 01:07
7            2000-01-01 01:08

This is turning out to be a finger twister and I'm not sure if it could be done in SQL; I may have to resort to writing a stored proc.

Regards,
Anthony
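A sketch that appears to work on SQL Server 2012 or later, using LEAD to peek at the next row's location and keeping only the rows that close out a run (table name hypothetical; on older versions a correlated subquery against the next reading would be needed instead):

SELECT location_id, date_created
FROM (
    SELECT location_id, date_created,
           LEAD(location_id) OVER (ORDER BY date_created) AS next_location_id
    FROM item_location_log
) AS t
WHERE next_location_id IS NULL           -- the latest row overall
   OR next_location_id <> location_id    -- the last row of its run
ORDER BY date_created;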
Hi, I am trying to create an ASP.NET web recruiting application for HR, which will give users the ability to either copy/paste the resume and cover letter into a textbox or upload the resume and cover letter, then submit (to be saved into a SQL Server 2000 table). I am thinking of saving the resume and cover_letter as Image datatype columns in SQL Server.
- Can someone give me a direction on how to save the uploaded resume and cover letter to the table, and whether that is the easiest way to do it?
- What to do with the different formats of uploaded resumes? I hope to limit them to Word or HTML only.
- Since I also give the user another option - copy/paste the resume and cover letter into textboxes - can I simply save the pasted resume and cover letter into a text field column? Later, say, if an HR recruiter retrieves the text from the database, will it concatenate everything together without line breaks?
Any ideas are appreciated.
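One possible table shape (column names hypothetical): image columns for the uploaded files plus text columns for the pasted versions. A text column stores the line breaks exactly as submitted; whether they display as breaks later depends on the rendering (e.g. converting CRLF to <br> for HTML).

CREATE TABLE dbo.Applicant (
    applicant_id    int IDENTITY PRIMARY KEY,
    resume_file     image NULL,        -- uploaded Word/HTML document
    resume_file_ext varchar(10) NULL,  -- e.g. '.doc' or '.html'
    resume_text     text NULL,         -- pasted resume
    cover_text      text NULL          -- pasted cover letter
);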
A customer has reported getting the following exception in our logs. I have never seen it before and wanted to see if there is any insight you could provide as to when this exception is thrown.
The process is a "purging" process- it just executes a sequence of DELETE statements that should be fairly simple (delete a number of records from a table and CHECK some constraints, no cascading), after the sequence it commits. All of this occurs on the same connection. We use c3p0 for connection pooling.
Here is the exception:
2008-03-22 08:30:08,699 WARN impl.NewPooledConnection : [c3p0] A PooledConnection that has already signalled a Connection error is still in use!
2008-03-22 08:30:08,699 WARN impl.NewPooledConnection : [c3p0] Another error has occurred [ com.microsoft.sqlserver.jdbc.SQLServerException: The server failed to resume the transaction. Desc:9f00000002. ] which will not be reported to listeners!
com.microsoft.sqlserver.jdbc.SQLServerException: The server failed to resume the transaction. Desc:9f00000002.
    at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(Unknown Source)
    at com.microsoft.sqlserver.jdbc.IOBuffer.processPackets(Unknown Source)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.doConnectionCommand(Unknown Source)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection$ConnectionCommandRequest.execute(Unknown Source)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeRequest(Unknown Source)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectionCommand(Unknown Source)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.rollback(Unknown Source)
    at com.mchange.v2.c3p0.impl.NewProxyConnection.rollback(NewProxyConnection.java:855)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.hibernate.jdbc.BorrowedConnectionProxy.invoke(BorrowedConnectionProxy.java:50)
    at $Proxy15.rollback(Unknown Source)
    at
In a high-load ASP.NET environment, I am getting an error: "The transaction active in this session has been committed or aborted by another session." Here is the code where I get the error:

object returnObject = null;
SqlTransaction sqlTrans = null;
try
{
    // Getting a new SqlConnection
    comm.Connection = GetConnection();
    // Opening the connection
    OpenConn(comm.Connection);
    // Beginning the transaction
    sqlTrans = comm.Connection.BeginTransaction(IsolationLevel.ReadCommitted);
    // Setting the transaction on the SqlCommand object
    comm.Transaction = sqlTrans;
    // Executing the operation
    returnObject = comm.ExecuteScalar();
    // Trying to commit
    sqlTrans.Commit();
}
catch (SqlException sex)
{
    if (sqlTrans != null)
    {
        sqlTrans.Rollback();
    }
}
finally
{
    comm.Connection.Close();
}

Could you please explain, am I doing something wrong?
I am developing a package using SSIS which needs to do the following.
1. Read all flat files from a folder. I am doing this using a For Loop task; I know the total number of files in that folder, hence I am setting the loop counter = file count.
2. The next step is to import the data from the flat file to the SQL Server destination table using a data flow task.
3. Upon successful completion of the data flow task there are some other tasks, such as SQL tasks, to do some checks/validation on the data and export it to other tables.
Upon successful completion of step 3 the iteration moves on to the next file.
I want to achieve the following
If step 2 errors (for example a corrupt file or incomplete data), I want to fail the data transfer completely, skip step 3, and go back to step 1 for the next available file and do the rest.
I have a table with 220 lakh (22 million) rows, and one of the columns is full-text enabled. We have used ContainsTable() to search for data but were unable to get the results we expected, so we did a rebuild. During the index rebuild, the population failed. I found this error in the error log, and it says to resume the population. So I want to know how long the resume-population process takes to complete.
Below are more details about the FT index table.
Row count - 22155112
Index space - 1,903.250 MB (1.9 GB)
Data space - 87,552.258 MB (87 GB)
SQL Server 2008 R2
And here is the query we have used:
SELECT DISTINCT TOP 50 cal.case_id, cal.cas_details
FROM g_case_action_log cal WITH (READUNCOMMITTED)
INNER JOIN CONTAINSTABLE(es.g_case_action_log, cas_details, ' ("235355" OR "<br>235355" OR "235355<br> ") ') AS key_tbl
    ON cal.log_id = key_tbl.[key]
WHERE cal.product_id = 38810
ORDER BY cal.case_id DESC
This query does not find recently inserted/updated rows; that is the actual issue we are facing.
How do we fix this error? And if the population needs to be resumed, how long does the resume population take?
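For the resume itself the command is short, using the table named in the query; how long it takes depends on how far the failed population got and on the crawl resources, and progress can be watched in sys.dm_fts_index_population:

ALTER FULLTEXT INDEX ON es.g_case_action_log RESUME POPULATION;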
SQL Server 2012 Data Tools was working fine for me but something must've changed; now every time I try to create a new SSIS project I get:
The server threw an exception. (Exception from HRESULT: 0x80010105 (RPC_E_SERVERFAULT)).
When I try to open an existing project I get:
exception has been thrown by the target of an invocation
external component has thrown an exception (SSISUpgrade)
The issue seems to only arise with SSIS projects. I have already uninstalled SQL Server 2012 and reinstalled it, and that didn't work. I tried to install Visual Studio 2012 Data Tools with BI and that also crashes when I try to create an SSIS project. Output of SQL Server SELECT @@VERSION is:
Microsoft SQL Server 2012 - 11.0.2100.60 (X64) Feb 10 2012 19:39:15 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)
SQL Data Tools page info:
Microsoft SQL Server Integration Services Designer Version 11.0.2100.60
Microsoft Visual Studio 2010 Version 10.0.40219.1 SP1Rel
Microsoft .NET Framework Version 4.5.51641 SP1Rel
I have an SSIS package with an "Execute Package Task" to call a child package. I am trying to have the master/parent package complete its execution regardless of the outcome (failure or success) of the child package. The overall structure of the master package is:
1. Perform Pre-load tasks (stored procedure).
2. Execute Package Task (call child package)
3. Perform Post-load Tasks (stored procedure)
I have tried everything and cannot get the results that I want... I have tried combinations of FailParentOnFailure, ForceExecutionValue, etc. The master package stops at the child's failure; I would like it to resume to completion.
I think I am definitely thrashing and am not getting anywhere on something I think should be pretty simple to accomplish: I need to pull the total amounts for compartments with different products which are under the same manifest and the same document number, conditionally based on whether the document types are "Starting" or "Ending", though the values come from the "Adjust" records.
So here is the DDL, sample data, and the ideal return rows
CREATE TABLE #InvLogData (
    Id BIGINT,          -- is actually an identity column
    Manifest_Id BIGINT,
    Doc_Num BIGINT,
    Doc_Type CHAR(1),   -- S = Starting, E = Ending, A = Adjust
    Compart_Id TINYINT,
[Code] ....
I have tried a combination of the below statements but I keep coming back to not being able to actually grab the correct rows.
SELECT DISTINCT(column X) FROM #InvLogData GROUP BY X HAVING COUNT(DISTINCT X) > 1
One further minor problem: I need to make this a set-based solution. This table grows by a couple hundred thousand rows a week, a co-worker suggested using a <shudder/> cursor to do the work but it would never be performant.
I was wondering if there is a best-practice minimum set of permissions for a SQL login to use when setting up a new shared data source for SSRS Report Manager?
Something along the lines of it being a data reader for the DB, plus permissions to update tempdb?
I would have thought it not advisable for the login to be able to update the main db...
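Something like the following is what I have in mind (names hypothetical): a login that is only a data reader in the source database. As far as I know, explicit tempdb permissions are not needed - any session can create #temp tables there.

CREATE LOGIN ssrs_datasource WITH PASSWORD = N'<strong password here>';
USE ReportSourceDB;
CREATE USER ssrs_datasource FOR LOGIN ssrs_datasource;
EXEC sp_addrolemember N'db_datareader', N'ssrs_datasource';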
I am trying to delete rows where the ModifiedDate is older than 9 years from tables in the AdventureWorks2012 database. The console notifies me that the foreign keys are dropped, but the delete statement is throwing errors. I am sure that somewhere the key constraints are not getting altered, but I'm not able to figure it out as I'm a relative beginner in T-SQL. The error and code:
The DELETE statement conflicted with the REFERENCE constraint "FK_SalesOrderHeaderSalesReason_SalesReason_SalesReasonID". The conflict occurred in database "AdventureWorks2012", table "Sales.SalesOrderHeader

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
$option_drop = new-object Microsoft.SqlServer.Management.Smo.ScriptingOptions;
$option_drop.ScriptDrops = $true;
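My reading of the constraint in the error: the referencing rows have to go before the rows they point at. A hedged T-SQL sketch of that ordering for this one FK (the full AdventureWorks2012 dependency chain is longer, and the second delete can still conflict if newer rows reference an old reason):

-- children first: rows that reference Sales.SalesReason
DELETE sr
FROM Sales.SalesOrderHeaderSalesReason AS sr
WHERE sr.ModifiedDate < DATEADD(YEAR, -9, GETDATE());

-- then the referenced rows
DELETE r
FROM Sales.SalesReason AS r
WHERE r.ModifiedDate < DATEADD(YEAR, -9, GETDATE());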
I need to use a newly installed SSIS component inside an SSIS 2012 project, but in SSDT 2010 I cannot see the SSIS Data Flow Items tab for adding data sources/destinations in the Choose Toolbox Items pane.
Background: in my current company the business users maintain a huge quantity of master data using Excel. Then a series of SSIS jobs are edited and manually executed.
Goal: the challenge is to replace this process using MDS. One of the requested features is the ability for users to edit or insert new master data using the Web UI or the Excel add-in and, when they are done, perform a merge of the master data into the target, in this case the reporting DB.
The perfect solution for me would be something like triggering the execution of an SSIS package to export the data from the subscription views to the reporting DB after the business rules are applied to a specific entity.
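A sketch of the triggering side, assuming the package is deployed to the SSIS catalog (folder, project, and package names hypothetical); this could be kicked off from a job or a procedure once the business rules have been applied:

DECLARE @exec_id bigint;
EXEC SSISDB.catalog.create_execution
    @folder_name  = N'MDS',
    @project_name = N'MasterDataExport',
    @package_name = N'ExportSubscriptionViews.dtsx',
    @execution_id = @exec_id OUTPUT;
EXEC SSISDB.catalog.start_execution @execution_id = @exec_id;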
I am using SQL Server 2012, and part of the data captured by CDC is not making sense to me.
I have a table called 'Schema.Table1', and I enabled CDC on it by running 'sys.sp_cdc_enable_table'. I see that a table called 'cdc.Schema_Table1_CT' got created, which now gets an entry whenever I insert, update, or delete a record in the original table.
Up to this point everything works fine.
My original table has a NOT NULL INT column called 'AuditTrackerUserID' with a default value of 1996. My application does not provide a value for this column, but because the column itself has a default value, records get inserted without error.
When I try to execute the following Query I see multiple records with __$operation of 3 and 1.
SELECT * from cdc.Schema_Table1_CT where AuditTrackerUserID IS NULL
My expectation is that I should never see any record returned by this query, because AuditTrackerUserID is a NOT NULL column, but I do.
I am currently in the process of migrating data from Sybase to SQL Server and would like to know how to test the migrated data.
As of now, we took one table's data from both source and destination and compared them in Excel to check whether the migrated data looks good (note: we used SSIS to migrate the data). However, I would like to know if there are any other good and easy ways to approach data validation post-migration.
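One set-based alternative to the Excel comparison (names hypothetical): stage the Sybase table on the SQL Server side and diff it with EXCEPT in both directions; two empty result sets mean the table matches row for row.

-- rows present in the staged source copy but missing or different in the target
SELECT * FROM Staging.dbo.Customer_FromSybase
EXCEPT
SELECT * FROM Target.dbo.Customer;

-- rows present in the target but not in the staged source copy
SELECT * FROM Target.dbo.Customer
EXCEPT
SELECT * FROM Staging.dbo.Customer_FromSybase;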