SQL Server Admin 2014 :: In-Memory Previous Transaction Aborted Exception
Sep 21, 2015
I'm working on a large-scale project that is currently in production. We have a big process that was recently changed to use SQL Server 2014 In-Memory tables for better performance.
The Process uses:
51 In-Memory SQL Tables.
50 stored procedures (not natively compiled) that load data (INSERT) from about 150 regular tables and in-memory tables.
300 validations (short, non-natively-compiled stored procedures) that select from those 50 in-memory tables (and insert any validation errors they find into a dedicated in-memory error table).
At the end of this process we clean the tables of the data relevant to each run (DELETE FROM ... WHERE).
B.T.W.
No UPDATE STATISTICS is run on the in-memory tables; when we tested it as part of the process it slowed us down and caused some locks.
We call this process from ADO.NET: the load stored procedures run first and then the validations, and each SP uses a different SQL connection. In normal use everything works fine and takes about 1.5 seconds.
Under a stress test (6 clients x 100 tasks) for 30 minutes, after several minutes we start getting this SQL exception (about 1 exception for every 20 tasks):
41301. A previous transaction that the current transaction took a dependency on has aborted, and the current transaction can no longer commit.
Transactions in Memory-Optimized Tables
The exception is not clear to us: we are not using BEGIN TRANSACTION anywhere in the process, and the exception occurs in a different stored procedure each time.
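Since In-Memory OLTP uses optimistic concurrency, conflict errors like 41301 can occur under load even without an explicit BEGIN TRANSACTION, and the documentation linked above recommends retrying. A minimal retry sketch (dbo.usp_RunValidation is a hypothetical stand-in for one of the validation procedures):

DECLARE @retry INT = 3;
WHILE @retry > 0
BEGIN
    BEGIN TRY
        EXEC dbo.usp_RunValidation;  -- hypothetical validation SP
        SET @retry = 0;              -- success: leave the loop
    END TRY
    BEGIN CATCH
        -- retry only on In-Memory OLTP conflict/dependency errors
        IF ERROR_NUMBER() IN (41301, 41302, 41305, 41325) AND @retry > 1
        BEGIN
            SET @retry -= 1;
            WAITFOR DELAY '00:00:00.050';  -- short back-off before retrying
        END
        ELSE
            THROW;  -- unrelated error, or retries exhausted
    END CATCH;
END;

The same loop can live in the ADO.NET caller instead, catching SqlException number 41301 and re-issuing the command.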
I have a full database backup up to the previous day and the transaction log file containing today's transactions. My database has crashed. I have restored the previous day's full backup, but I am having difficulty restoring today's transactions from the log. What are the steps to restore the full database backup plus one day's worth of transaction log? Note: there is no differential database backup and no transaction log backup.
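For reference, the usual sequence is: back up the tail of the log first (possible while the damaged database's .ldf is still readable), restore the full backup WITH NORECOVERY, then restore the tail log WITH RECOVERY. A hedged sketch with hypothetical names and paths:

-- 1. Capture today's transactions from the surviving log file
BACKUP LOG MyDB TO DISK = N'D:\Backup\MyDB_tail.trn' WITH NO_TRUNCATE;
-- 2. Restore yesterday's full backup, leaving the DB restoring
RESTORE DATABASE MyDB FROM DISK = N'D:\Backup\MyDB_full.bak'
    WITH NORECOVERY, REPLACE;
-- 3. Roll forward through today's transactions and recover
RESTORE LOG MyDB FROM DISK = N'D:\Backup\MyDB_tail.trn' WITH RECOVERY;

Note that step 1 only works while the log file is intact; restoring the full backup first (as described above) overwrites the log, which is why the tail backup has to come before the restore.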
My database server's memory utilisation has been growing quickly over the past week. It held steady at around 55% for a week, and is now at 70% and climbing.
Total OS memory is 32 GB, and I have capped SQL Server's max server memory at 29 GB. I don't know what to do.
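Growth toward the 29 GB cap is normal (the buffer pool fills as data is read), so the first step is usually to see which memory clerks are consuming the space. A hedged sketch, assuming SQL 2012+ column names:

-- Top memory consumers inside SQL Server
SELECT TOP (10) [type], SUM(pages_kb) / 1024 AS size_mb
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY size_mb DESC;

If MEMORYCLERK_SQLBUFFERPOOL dominates the list, that simply means the data cache is warming up, which is expected behaviour rather than a leak.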
We have run into an issue on a dedicated SSAS 2012 SP1 server where the allocated memory is not being utilized, causing some slowness in usage, connections, and queries.
Total memory on the server is 512 GB, and after startup the utilized memory climbs to a maximum of 60 GB and stops there. Checking Resource Monitor, msmdsrv.exe is only taking around 39 GB overall. With the current properties, it should be able to reach 330 GB. Am I missing something in the settings or configuration that should be changed?
Version: SQL Server 2012 SP1 Enterprise (11.0.3000)
OS: Windows Server 2012 Datacenter, fully patched and up to date
Databases: 2 tabular models
Server: 512 GB RAM
How do I find the total allocated space and used space of a memory-optimized filegroup?
USE memory_optimized_db;
GO
SELECT (SUM(size) * 8.0) / 1024.0 AS SpaceMB,
       FILEGROUP_NAME(data_space_id) AS FilegroupName,
       type_desc
FROM sys.database_files
GROUP BY data_space_id, type_desc;
The query above gives the current used size of the memory-optimized filegroup's containers, but it doesn't give the total allocated space.
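One possible complement, assuming SQL Server 2014 column names, is the In-Memory OLTP checkpoint-file DMV, which exposes both allocated and used bytes per file:

-- Allocated vs. used space across the checkpoint files backing the
-- memory-optimized filegroup (run in the database in question)
SELECT SUM(file_size_in_bytes) / 1048576.0      AS allocated_mb,
       SUM(file_size_used_in_bytes) / 1048576.0 AS used_mb
FROM sys.dm_db_xtp_checkpoint_files;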
- An MSSQL 2014 Standard server that houses multiple small databases (in excess of a hundred).
- These databases are frequently dropped and restored by an application that uses this SQL Server.
- There is a business need for this setup at this time, so I can't get away from it. Answers along the lines of "don't have so many small databases that are frequently dropped and restored" are therefore not helpful.
This is the problem I have:
- When I connect SSMS 2014 to the server and expand the "Databases" node, it takes forever to display. By comparison, SSMS 2008 connected to a SQL 2008 R2 server with the same number of databases displays the Databases tree very quickly.
I ran a trace to see what exactly SSMS 2014 is doing. When the "Databases" node is expanded, it runs a query that checks each database for memory-optimized tables (a new and wonderful feature of SQL 2014, to be sure, but I'm not using it, at least not yet). Naturally, when it has to loop through over a hundred DBs, that takes time. Worse yet, if one of these DBs is in the process of being restored, the query sits and waits to time out before proceeding to the next DB. Sometimes this causes outright timeouts. Here is the query:
USE [MyDatabase];
SELECT ISNULL((SELECT TOP 1 1
               FROM sys.filegroups FG
               WHERE FG.[type] = 'FX'), 0) AS [HasMemoryOptimizedObjects];
To be sure, this is NOT a SQL Server performance issue. This server processes a rather heavy workload and has been doing so for over a month, and the workload completes within expected time limits or better. Even so I've done some basic performance measuring, and the server itself is quite all right.
Moreover, if I connect SSMS 2008 to it, I get an error message ("Index out of bounds" or some such), but SSMS 2008 does connect and displays the Databases tree much faster than SSMS 2014.
I'd like to turn off the option to check for Memory Optimized Objects altogether, as I'm not using the feature.
Is there a formula for calculating how expensive a transaction will be in terms of disk space before it's run? I don't want it accurate to the MB, just rough enough that I can determine how much additional space to assign to a transaction log or SAN volume.
Currently we're reindexing ~25 billion rows, nothing too wide: say 12 columns consisting of 1 varchar(50) and the rest ints, bits, and money. Roughly speaking, if I rebuild the clustered index on an int identity (with SORT_IN_TEMPDB), how would I calculate the disk space used?
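As a rough rule of thumb, a clustered index rebuild needs about the index's current size in free data-file space, about the same again in tempdb when SORT_IN_TEMPDB is on, and (in the FULL recovery model) close to the index size in log space. A hedged sketch for getting that baseline size (dbo.BigTable is a hypothetical stand-in for the 25-billion-row table):

SELECT SUM(used_page_count) * 8 / 1048576.0 AS used_gb
FROM sys.dm_db_partition_stats
WHERE [object_id] = OBJECT_ID(N'dbo.BigTable')
  AND index_id IN (0, 1);  -- heap or clustered index only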
I have a database that is part of an AlwaysOn group that is filling up the transaction log drive, even though I have a daily full backup and transaction log backups set for every 2 hours. The backups run from both the primary and secondary replicas to a shared disk, and I have the backup preference set to the primary.
When I try to shrink the log I get 'The transaction log for database 'DB' is full due to 'LOG_BACKUP''. I have to manually back up the transaction log and then shrink. Why aren't the maintenance plan backups doing this, even though they are "working"?
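The first thing to confirm is what the log is actually waiting on, since LOG_BACKUP in that message means the log genuinely has not been backed up since it filled. A hedged sketch:

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'DB';  -- the database from the error message

If this still shows LOG_BACKUP while the maintenance plan reports success, it's worth checking whether the plan is silently skipping this database (for example, because of the AlwaysOn backup-preference check) even though the job as a whole succeeds.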
I am trying to load data into a memory-optimized table (INSERT INTO ... SELECT ... FROM ...). The source table is about 1 GB and 13 million rows. During this load the LDF file grows to 350 GB (until the disk runs out of space). The server has about 110 GB of memory reserved for SQL Server. tempdb doesn't grow. The bucket count in the CREATE statement is 262144. The hash key has 4 fields (2 of type int, 1 smallint, 1 varchar(200)). The disk for the data files still has space (including the Hekaton files).
How can I reduce the growth of the LDF file during the data load?
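Since a single INSERT ... SELECT is one transaction, none of the log it generates can be truncated until it commits. One common workaround is to load in batches, taking log backups (or relying on CHECKPOINT under SIMPLE recovery) between batches. A hedged sketch with hypothetical table and column names:

DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.MemOptTarget (Id, Col1, Col2, Col3)
    SELECT TOP (1000000) s.Id, s.Col1, s.Col2, s.Col3
    FROM dbo.SourceTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.MemOptTarget AS t
                      WHERE t.Id = s.Id);   -- skip rows already copied
    SET @rows = @@ROWCOUNT;                 -- 0 when the source is drained
END;

Driving the batches off a key range instead of NOT EXISTS would be cheaper for 13 million rows; the point is just that smaller transactions let the log be reused between batches.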
In Windows Server 2012, how do I do a System Restore to a previous restore point? I need to install the 64-bit and 32-bit Oracle clients for connections in SSIS and to create Oracle linked servers.
If you make a mistake, it is not fun removing the install. Sometimes it corrupts the machine, and it is difficult to uninstall since there is no Oracle Universal Installer for Oracle 11g. If you install the 32-bit client before the 64-bit one, you mess up the machine. How do I create a restore point?
NODE1 - 256GB
INST1 - 64GB min / 64GB max
INST2 - 64GB min / 64GB max
NODE2 - 256GB
INST3 - 64GB min / 64GB max
INST4 - 64GB min / 64GB max
With this configuration, if all instances end up running on the same node, there will be enough memory for them all to run. But knowing that normally I'll have only 2 instances on each node, wouldn't the following configuration be better?
NODE1 - 256GB
INST1 - 64GB min / 128GB max
INST2 - 64GB min / 128GB max
NODE2 - 256GB
INST3 - 64GB min / 128GB max
INST4 - 64GB min / 128GB max
With this configuration, in case all the instances (due to a failure) end up running on a single node, will SQL Server adjust all the instances down to just their specified min memory?
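For reference, min/max server memory is set per instance with sp_configure; SQL Server only releases memory back down toward the min value under OS memory pressure, so after a failover the four instances would squeeze into the node gradually rather than instantly. A hedged sketch (values in MB):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 65536;   -- 64 GB floor
EXEC sp_configure 'max server memory (MB)', 131072;  -- 128 GB ceiling
RECONFIGURE;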
We have multiple databases on a single instance in an OLTP environment. My data files are on a separate SAN LUN from my transaction log files (with a few NDFs split out onto additional LUNs). I was wondering whether there is a performance benefit to putting each LDF file on its own LUN, or at least my few busiest LDFs?
We are currently on 2012, but I'm putting together specs for a 2014 installation and need to answer this question without an environment in which I can benchmark different setups. I just want to hear whether or not others have done this (and why or why not).
I've recently started working with a public sector organisation who have 4 clustered SQL instances with 80% of their databases mirrored.
Looking at the transaction log, it seems that a transaction log backup is a good idea, as the log is 4x larger than the data file. But I'm not allowed access to the physical server to check which drive I could write the .trn files to. No RDP, no VMware; let's be honest, I'm not even allowed to launch a command line. Also, the server manager informs me: "We will need to carefully look at database backups if you guys want to start doing these backups on box, as that will break our off-box backup routine (it will screw the transaction chain)."
I don't understand how backing up the transaction log could break the "transaction chain"?
AppDomain 446 (A.dbo[runtime].477) unloaded.
AppDomain 446 (A.dbo[runtime].477) is marked for unload due to memory pressure.
Error: 6532, Severity: 16, State: 48.
.NET Framework execution was aborted by escalation policy because of out of memory.
I'm getting this problem on server A when I restore a backup that is located on a drive on server A to server B. The drive isn't actually on server A; it is on an EMC but is mapped to server A. So I guess server A is acting as a file server for the file. Server A is an x64 server with 32 GB of RAM. It has SQL 2005 Enterprise Edition build 9.00.3042.00. Min memory is set to 2 GB and max is set to 28 GB. The SQL service runs under a Windows account with "Lock pages in memory" set. From what I've read this was a problem, but I thought SP2 fixed it. Any ideas? Thanks.
I have a SQL CLR function used to read an XML file. It works fine when the file is smaller, but gives me this error if the file is bigger, i.e. more than 10 MB.
Msg 6532, Level 16, State 49, Line 2
.NET Framework execution was aborted by escalation policy because of out of memory.
System.InvalidOperationException: Data access is not allowed in this context. Either the context is a function or method not marked with DataAccessKind.Read or SystemDataAccessKind.Read, is a callback to obtain data from FillRow method of a Table Valued Function, or is a UDT validation method.
System.InvalidOperationException:
at System.Data.SqlServer.Internal.ClrLevelContext.CheckSqlAccessReturnCode(SqlAccessApiReturnCode eRc)
at System.Data.SqlServer.Internal.ClrLevelContext.GetDatabase(SmiEventSink sink, Int32* pcbLen, IntPtr* ppwsName)
at Microsoft.SqlServer.Server.InProcConnection.GetCurrentDatabase(SmiEventSink eventSink)
at System.Data.SqlClient.SqlInternalConnectionSmi.Activate()
at System.Data.SqlClient.SqlConnectionFactory.GetContextConnection(SqlConnectionString options, Object providerInfo, DbConnection owningConnection)
at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection)
at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
at System.Data.SqlClient.SqlConnection.Open()
at BuildDirect.SQLCLR.CLRErrorHandler.RaiseNewCLRError(Int32 errorid, String errormsg)
at BuildDirect.SQLCLR.CLRErrorHandler.ReportCLRError(String location, String errormsg)
at BuildDirect.SQLCLR.CLRErrorHandler.ReportCLRError(String location, Exception ex)
at BuildDirect.SQLCLR.HTTPRequest.uCLRGetHTTPResponse(String url)
I want to set up a database role so that users can use sp_readerrorlog through SSMS. It does a check on membership in the securityadmin role.
I have tested it and can see you can grant execute on xp_readerrorlog but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add a signature to sp_readerrorlog, but that's not permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role but then explicitly deny ALTER ANY LOGIN (best done with a custom server role in 2012+, otherwise just manually in 2008). I tested this out and it works: I'm not able to alter any logins or increase my own permissions. I also checked what's reported by fn_my_permissions(null, null), and it shows minimal permissions, as I'd expect.
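For reference, a minimal sketch of that approach on 2012+, with hypothetical role and login names:

CREATE SERVER ROLE ErrorLogReaders;
ALTER SERVER ROLE securityadmin   ADD MEMBER [DOMAIN\HelpdeskUser];
ALTER SERVER ROLE ErrorLogReaders ADD MEMBER [DOMAIN\HelpdeskUser];
DENY ALTER ANY LOGIN TO ErrorLogReaders;  -- strip the risky right back out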
I have an XML file of about 25 MB. When I read it with OPENROWSET it gives me 'System.OutOfMemoryException'. The machine I'm running on has 16 GB of RAM, and memory is definitely not exhausted. When I run smaller XML files it works fine.
I've read that older versions of SQL Server had this problem, caused by the parser having a limited amount of memory. Is this still the case? Is there a way to change this?
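For context, this is the kind of load being described, with a hypothetical path; SINGLE_BLOB materializes the whole file as one value before the XML cast, which is where large documents get expensive:

SELECT CAST(BulkColumn AS XML) AS xml_doc
FROM OPENROWSET(BULK N'C:\Data\big.xml', SINGLE_BLOB) AS f;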
If I install an instance with Windows-only authentication and then change it to Mixed Mode, when I enable the sa login the password has already been set. What is the default? Is the password generated, and if so, how secure is it? What algorithm is used for that?
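Whatever the answer, the safer habit when switching to Mixed Mode is to set the password explicitly rather than rely on whatever was there before. A minimal sketch, with a placeholder password:

ALTER LOGIN sa WITH PASSWORD = N'<a strong password here>';
ALTER LOGIN sa ENABLE;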
My SQL databases in SQL Server 2014 show the status "Suspect" in SQL Server Management Studio. I can't bring the databases back to a serviceable condition through the standard procedures. I need to recover the data from the .mdf files.
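When there is no usable backup, the classic last-resort sequence for a suspect database is EMERGENCY mode plus a repair that may discard data. A hedged sketch, with MyDB as a placeholder name:

ALTER DATABASE MyDB SET EMERGENCY;
ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'MyDB', REPAIR_ALLOW_DATA_LOSS);  -- may lose data: salvage, not fix
ALTER DATABASE MyDB SET MULTI_USER;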
I am using a monitoring system that can monitor a numeric SQL result, provided the result is one field in one row. I would like to use this to monitor, say, the free space (or free percentage) of the master database. DBCC SQLPERF gives me several columns and results for all databases on the server.
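A hedged sketch of a query shaped the way such tools usually want it, one row and one numeric column, returning the free megabytes in master's data file(s):

USE master;
SELECT SUM(size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024.0 AS free_mb
FROM sys.database_files
WHERE type_desc = N'ROWS';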
In our environment, applications use a DNS name that points to the physical server's IP address. Now we are planning to move to 2014. We plan to have servers in different subnets, so the listener will have two IP addresses. How can we point the DNS name at the listener IPs? If a failover happens, can DNS point to the exact listener IP of the primary node?
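For reference: a multi-subnet availability group listener registers a single DNS name with one IP per subnet, and clients resolve that one name; with MultiSubnetFailover=True in the connection string the client tries all registered IPs in parallel and connects to whichever answers, i.e. the current primary. A hedged sketch with hypothetical names and addresses:

ALTER AVAILABILITY GROUP MyAG
ADD LISTENER N'MyAGListener' (
    WITH IP ( (N'10.1.1.50', N'255.255.255.0'),
              (N'10.2.1.50', N'255.255.255.0') ),
    PORT = 1433 );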
"Process 0:0:0 (0x1e10) Worker 0x00000006B6D341A0 appears to be non-yielding on Scheduler 13. Thread creation time: 12906028806348. Approx Thread CPU Used: kernel 0 ms, user 0 ms. Process Utilization 13%. System Idle 84%. Interval: 70189 ms."
Is it better to run Profiler or performance counters?
And what filters should we select in Profiler to monitor SQL Server?