How do I find the total allocated space and used space of a memory-optimized filegroup?
USE memory_optimized_db
GO
SELECT (SUM(size) * 8.0) / 1024.0 AS SpaceMB,
       FILEGROUP_NAME(data_space_id) AS FileGroupName,
       type_desc
FROM sys.database_files
GROUP BY data_space_id, type_desc;
The query above gives the current used size of the containers in the memory-optimized filegroup, but it doesn't give the total allocated space.
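One option for the allocated-versus-used breakdown is the checkpoint-file DMV; a minimal sketch, assuming SQL Server 2014 (this DMV's columns changed in later versions):

USE memory_optimized_db
GO
SELECT SUM(file_size_in_bytes) / 1048576.0      AS AllocatedMB,
       SUM(file_size_used_in_bytes) / 1048576.0 AS UsedMB
FROM sys.dm_db_xtp_checkpoint_files;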
I have a database with a memory-optimized filegroup on it. How can I remove it? I have removed the memory-optimized table I had on it, but when I try to remove the filegroup I receive an error.
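For reference, a sketch of the commands typically attempted (the database, container, and filegroup names here are illustrative, not from the question); the container (file) has to be removed before the filegroup:

ALTER DATABASE MyDb REMOVE FILE imoltp_container;   -- the container is a "file"
ALTER DATABASE MyDb REMOVE FILEGROUP imoltp_fg;     -- then the filegroup itself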
I'm just beginning to experiment with memory optimised tables.
I have two sets of near-identical tables - one set normal, the other set memory optimised with DURABILITY=SCHEMA_ONLY - and am running test queries against these. When I say that the two sets are "near identical", I mean that they are the same except for the primary keys: for the normal tables these are defined as PRIMARY KEY CLUSTERED, whereas for the memory-optimised ones they are defined as PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=nnnn), as per the requirements for such tables.
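For illustration, a minimal sketch of the two key styles (the table names, columns, and bucket count here are mine, not the actual tables):

CREATE TABLE dbo.Orders_disk
(
    OrderID int NOT NULL PRIMARY KEY CLUSTERED,
    CustomerID int NOT NULL
);

CREATE TABLE dbo.Orders_inmem
(
    OrderID int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 4194304),
    CustomerID int NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);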
I then run a pair of test queries, again identical but one referencing the normal tables and the other referencing the memory optimised ones.
(The query uses an inner join on three tables with row counts of approximately 3 million, 100,000 and 5,000 rows.)
The query against the normal tables runs noticeably faster than the one against the memory-optimised tables. To try to find out why, I examined the execution plans. The plan for the memory-optimised query suggests that I have a missing index, but of course I can't create this against a memory-optimised table. Is this a bug, or am I missing something? Why should the performance of the two be so different?
CREATE TABLE [Sales].[Test_inmem]
(
    [c1] [int] NOT NULL,
    [c2] [nvarchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [ModifiedDate] [datetime2](7) NOT NULL CONSTRAINT [IMDF_Test_ModifiedDate] DEFAULT (sysdatetime()),
[Code] ....
I have to generate 1,000,000 random records in it. I tried various ways to insert records but, not being a developer, could not do it. I want c1 to be a serial number, c2 can be anything, and the third column (ModifiedDate) should be the timestamp.
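A minimal sketch of one way to do this, assuming the three columns shown above are the only ones that need values (the cross join of sys.all_objects is just a convenient row generator):

;WITH numbers AS
(
    SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects AS a
    CROSS JOIN sys.all_objects AS b
)
INSERT INTO [Sales].[Test_inmem] (c1, c2, ModifiedDate)
SELECT n,                                        -- serial number
       LEFT(CONVERT(nvarchar(36), NEWID()), 20), -- arbitrary text
       SYSDATETIME()                             -- timestamp
FROM numbers;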
I've been having some trouble getting a single-column varchar(5) field to reliably use an index seek instead of a table scan. The production table in this case contains 25 million rows. As impressive as it is to scan 25 million rows in 35 seconds, the query should run much faster.
Typically, this table is accessed with a query that includes:
SELECT ... FROM SummaryTable WHERE ixZIP IN (SELECT ZipCode FROM @ZipCodesForMO)
This query insists on using a table scan. I've tried WITH (FORCESEEK) for example, but that just makes the query fail.
As I've investigated this issue I also tried:
SELECT * FROM Summaries WHERE ZipCode IN ('xxxxx', 'xxxxx', 'xxxxx')
When I run this query with 64 or fewer (actual, valid) ZIP codes, the query uses an index seek. But when I give it 65 or more ZIP codes, it uses a table scan.
To summarize, the production query always uses a table scan, and when I specify 65 or more ZIP codes the test query also uses a table scan. I'm wondering if the collation of the indexed column (Latin1_General_100_BIN2) is somehow the problem. I'll likely try converting the ZIP codes to integers to see what happens.
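One workaround sometimes suggested is to materialize the ZIP list into an indexed temp table and join to it instead of using IN. A sketch using the names from the question (the BIN2 collation on the temp column is an assumption, to match the indexed column and avoid a collation mismatch):

CREATE TABLE #Zips
(
    ZipCode varchar(5) COLLATE Latin1_General_100_BIN2 NOT NULL PRIMARY KEY
);

INSERT INTO #Zips (ZipCode)
SELECT DISTINCT ZipCode FROM @ZipCodesForMO;

SELECT s.*
FROM Summaries AS s
INNER JOIN #Zips AS z ON s.ZipCode = z.ZipCode;

With a unique, typed, correctly collated join column, the optimizer gets accurate cardinality estimates and often picks a seek plan rather than tipping over to a scan as the IN-list grows.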
- An MSSQL 2014 Standard server that houses multiple small databases (in excess of a hundred).
- These databases are frequently dropped and restored by an application that uses this SQL Server.
- There is a business need for this setup at this time, so I can't get away from it. Therefore answers like "don't have so many small databases that are frequently dropped and restored" would not be useful.
This is the problem I have:
- When I connect SSMS 2014 to the server and expand the "Databases" node, it takes forever to display. In comparison, SSMS 2008 connected to a SQL 2008R2 server with the same number of databases displays the Databases tree very quickly.
I ran a trace to see what exactly SSMS 2014 is doing. When the "Databases" node is expanded, it runs a query that checks each database for memory-optimized tables (a new and wonderful feature of SQL 2014, for sure, but I'm not using it, at least not yet). Naturally, when it has to loop through over a hundred DBs, that takes time. Worse yet, if one of these DBs is in the process of being restored, the query sits and waits to time out before proceeding to the next DB. Sometimes this causes outright timeouts. Here is the query:
use [MyDatabase]
SELECT ISNULL((SELECT TOP 1 1
               FROM sys.filegroups FG
               WHERE FG.[type] = 'FX'), 0) AS [HasMemoryOptimizedObjects]
To be sure, this is NOT a SQL Server performance issue. This server processes a rather heavy workload and has been doing so for over a month, and the workload completes within expected time limits or better. Even so I've done some basic performance measuring, and the server itself is quite all right.
Moreover, if I connect SSMS 2008 to it, I get an error message ("Index out of bounds" or some such), but SSMS 2008 does connect and displays the Databases tree much faster than SSMS 2014.
I'd like to turn off the option to check for Memory Optimized Objects altogether, as I'm not using the feature.
I am trying to load data into a memory-optimized table (INSERT INTO ... SELECT ... FROM ...). The source table is about 1 GB in size and has 13 million rows. During this load the LDF file grows to 350 GB (until the disk runs out of space). The server has about 110 GB of memory reserved for SQL Server. tempdb doesn't grow. The bucket count in the CREATE statement is 262144. The hash key has 4 fields (2 fields have the datatype int, 1 smallint, 1 varchar(200)). The disk for the data files (incl. the Hekaton files) still has space.
How can I reduce the growth of the LDF file during the data load?
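One hedged mitigation sketch: commit the load in smaller batches so the log can clear between them (assumes an integer key to range over; table and column names are illustrative):

DECLARE @min int, @max int, @batch int = 500000;
SELECT @min = MIN(id), @max = MAX(id) FROM dbo.SourceTable;
WHILE @min <= @max
BEGIN
    INSERT INTO dbo.InMemTarget (id, col1)
    SELECT id, col1
    FROM dbo.SourceTable
    WHERE id >= @min AND id < @min + @batch;   -- one committed slice per pass
    SET @min += @batch;
END

Each batch commits on its own, so a log backup (or simple recovery) lets the log truncate as the load proceeds, instead of one giant transaction pinning hundreds of GB of log.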
I have two databases that are copies of each other - one is the backup of the other. Each DB has 2 filegroups. I want to replace one filegroup in one DB with the corresponding filegroup from the other. How do I do this? Or how do I back up and then restore a single filegroup?
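For reference, a hedged sketch of the single-filegroup backup/restore syntax within one database (names and paths are illustrative); note that a filegroup backup generally cannot be restored into a different database, which is the hard part of the cross-database "replace" in the question:

BACKUP DATABASE MyDb
    FILEGROUP = N'FG2'
    TO DISK = N'D:\Backups\MyDb_FG2.bak';

RESTORE DATABASE MyDb
    FILEGROUP = N'FG2'
    FROM DISK = N'D:\Backups\MyDb_FG2.bak'
    WITH NORECOVERY;
-- ...followed by log restores to bring the filegroup back online.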
I work for a 24/7 shop. We currently have a table that is partitioned monthly. I have to create a script that will add a new filegroup, add the new file to the group, and alter the partition scheme and function. However, I need this process to not cause a lock on the table. Typically I get the locking issues when I run the SPLIT command. Is there a way to prevent this from happening?
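For context, a sketch of the usual sequence (the database, filegroup, scheme, function names and the boundary value are illustrative); the SPLIT itself still takes a schema-modification lock on the table, which is exactly the locking the question is about:

ALTER DATABASE MyDb ADD FILEGROUP FG_2015_01;

ALTER DATABASE MyDb
ADD FILE (NAME = N'FG_2015_01_data',
          FILENAME = N'D:\Data\FG_2015_01.ndf',
          SIZE = 512MB)
TO FILEGROUP FG_2015_01;

ALTER PARTITION SCHEME ps_Monthly NEXT USED FG_2015_01;

ALTER PARTITION FUNCTION pf_Monthly() SPLIT RANGE ('2015-01-01');

Splitting only an empty partition at the leading edge keeps the Sch-M lock brief, since no data has to move.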
I'm being asked to create multiple filegroups for a new database based on table type - transaction, lookup, misc... From what I'm reading, this doesn't make sense. What I'm reading suggests filegroups for large tables, for nonclustered indexes when they are about the same size as the data, or for a few other reasons...
First of all, we are talking about the same disk (please don't ask me how it is configured), and I'm not sure yet if restoring separate filegroups is even going to be necessary.
So here are my questions (beyond "test and see what happens"), because in the end I'm probably going to have to do what I'm told; this is for my professional knowledge.
1. Do filegroups separated by table type make sense?
2. Should you put tables that are queried often together in the same or in different filegroups?
3. I'm pretty sure you can't restore a single filegroup for write access; am I correct?
I need to modify a table to reside on a new filegroup and also point TEXTIMAGE_ON to that filegroup instead of PRIMARY. Apparently, in the past the only way to achieve this via SQL was to create a new table, copy over the data, drop the old table and rename the new table to the original name. I found this solution in the SQL Server 2005 forum.
Is there any other way to alter this table to point TEXTIMAGE_ON to the new filegroup using SQL Server 2014? We are on Standard edition. The technique I am using is the drop constraint (with the MOVE option) and add constraint (to the new filegroup) commands. The data and indexes move, but not the text data (it stays in the PRIMARY filegroup).
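For reference, a sketch of the rebuild approach described above, assuming a simple two-column table (all names here are illustrative):

CREATE TABLE dbo.MyTable_new
(
    ID int NOT NULL CONSTRAINT PK_MyTable_new PRIMARY KEY,
    Notes varchar(max) NULL
) ON NewFG TEXTIMAGE_ON NewFG;   -- LOB data placed at create time

INSERT INTO dbo.MyTable_new (ID, Notes)
SELECT ID, Notes FROM dbo.MyTable;

DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.MyTable_new', 'MyTable';

TEXTIMAGE_ON can only be set when the table is created, which is why the constraint-move technique relocates in-row data and indexes but leaves the LOB allocation unit behind.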
We are planning to upgrade. We are using SQL 2008R2 now. Which is the better option: migrating to SQL 2012 or to SQL 2014? I am thinking 2014, as it has memory-optimized tables and updatable columnstore indexes, so it is the better option.
I have been creating databases in SQL 2008 with a primary filegroup for the system objects and a secondary filegroup, marked Default, for the data.
We are preparing a migration to SQL 2014, and the administrator is complaining he won't adopt this structure on the new servers because 'there is no benefit' and 'a backup cannot be restored (!?)'.
I read that when a SQL Server database has multiple data files within a single filegroup, SQL Server writes data using a proportional-fill algorithm, where the amount of data written to a file is proportional to the amount of free space in that file compared to the other files in the filegroup.
So if no filegroups are created and multiple secondary files are attached to the database, is data stored and written across the files by the same algorithm, or in a different way?
I have a non-recoverable filegroup used by a partition that I would like to remove from the database. Every command I know of requires the filegroup to be online before I can remove it.
Is it possible to remove an off-line filegroup from a database?
Within the LinkingID table, there are duplicate pairs in ID1 and ID2, just in opposite columns. I have been trying to figure out a way to remove these set-based. It doesn't matter which duplicate is removed; essentially these are just endpoints and I don't care which side they are on. The solution must recognize the duplicates and not just remove every 2nd row.
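A set-based sketch, assuming a table dbo.Linking with columns ID1 and ID2 (hypothetical names): normalize each pair so the smaller value comes first, then keep one row per normalized pair.

;WITH pairs AS
(
    SELECT ID1, ID2,
           ROW_NUMBER() OVER (
               PARTITION BY
                   CASE WHEN ID1 <= ID2 THEN ID1 ELSE ID2 END,  -- canonical low end
                   CASE WHEN ID1 <= ID2 THEN ID2 ELSE ID1 END   -- canonical high end
               ORDER BY (SELECT NULL)) AS rn
    FROM dbo.Linking
)
DELETE FROM pairs
WHERE rn > 1;   -- keep exactly one row per unordered pair

Because the PARTITION BY treats (A, B) and (B, A) as the same pair, this recognizes the reversed duplicates directly rather than relying on row position.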
My database server's memory utilisation has been growing quickly over the past week. It stayed around 55% for a week, and now it has reached 70% and is still increasing.
Total OS memory is 32 GB and I have capped SQL Server's memory at 29 GB. I don't know what to do.
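If the cap itself needs to come down, max server memory can be changed on the fly; a sketch with an illustrative value of 26 GB:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 26624;   -- 26 GB, illustrative
RECONFIGURE;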
I am doing performance testing of the In-Memory option in SQL Server 2014. As part of this, I want to insert 500 million rows into an in-memory-enabled test table I have created.
I need a sample script to insert 500 million records into a table ....
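A hedged sample script: a batched loop driven by a row-number generator (the table dbo.InMemTest and its columns are hypothetical; tune the batch size to your memory and log budget):

DECLARE @i bigint = 0, @batch bigint = 1000000;
WHILE @i < 500000000
BEGIN
    INSERT INTO dbo.InMemTest (c1, c2)
    SELECT @i + n,                              -- sequential key
           CONVERT(nvarchar(20), @i + n)        -- filler value
    FROM (SELECT TOP (@batch) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
          FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b) AS t;
    SET @i += @batch;
END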
We have run into an issue on a dedicated SSAS 2012 SP1 server where the allocated memory is not being utilized, causing some slowness in use, connections, and queries.
Total memory on the server is 512 GB; after startup, the utilized memory gets up to a max of 60 GB and stops there. Checking Resource Monitor, msmdsrv.exe is only taking around 39 GB overall. With the current properties, that should be 330 GB. Am I missing something in the settings or configuration that should be changed?
Version: SQL Server 2012 SP1 Enterprise (11.0.3000)
OS: Windows Server 2012 Datacenter, fully patched and up to date
Databases: 2 tabular models
Server: 512GB RAM
I'm working on a large scale project that is currently in production. We have a big process that recently changed to use In-Memory Tables with SQL 2014 for performance efficiency.
The Process uses:
- 51 in-memory SQL tables.
- 50 stored procedures (not natively compiled) that load data (INSERT) from about 150 regular tables and in-memory tables.
- 300 validations (short stored procedures, not natively compiled) that select from those 50 in-memory tables (and insert into an in-memory table that stores the validation errors, if any).
At the end of this process we clean the tables of the data relevant to each process (DELETE FROM ... WHERE).
By the way, no UPDATE STATISTICS is used on the in-memory tables - when we tested it, it slowed the process down and caused some locks.
We are calling this process from ADO.Net: the load stored procedures run first and then the validations; each SP uses a different SQL connection. In normal use, everything works fine and takes about 1.5 seconds.
Under a stress test (6 clients X 100 tasks) for 30 minutes, after several minutes we start to get this SQL exception (1 SQL exception for every 20 tasks):
41301. A previous transaction that the current transaction took a dependency on has aborted, and the current transaction can no longer commit.
Transactions in Memory-Optimized Tables
The exception is not clear. We are not using BEGIN TRANSACTION in the process. The SQL exception occurs in a different stored procedure each time.
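One common mitigation is to retry on the documented retryable In-Memory OLTP conflict errors; a T-SQL sketch (the procedure name is hypothetical - the same pattern can equally live in the ADO.Net caller):

DECLARE @retries int = 5;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        EXEC dbo.usp_RunValidation;   -- hypothetical procedure
        SET @retries = 0;             -- success: leave the loop
    END TRY
    BEGIN CATCH
        -- 41301/41302/41305/41325 are the In-Memory OLTP conflict errors
        IF ERROR_NUMBER() IN (41301, 41302, 41305, 41325) AND @retries > 1
            SET @retries -= 1;        -- conflict: retry
        ELSE
            THROW;                    -- anything else: surface the error
    END CATCH
END

Even without BEGIN TRANSACTION, each statement touching a memory-optimized table runs in its own optimistic transaction, so under concurrency these dependency failures can surface and are expected to be retried.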
NODE1 - 256GB
INST1 - 64GB min / 64GB max
INST2 - 64GB min / 64GB max
NODE2 - 256GB
INST3 - 64GB min / 64GB max
INST4 - 64GB min / 64GB max
With this configuration, even if all instances end up running on the same node, there will be enough memory for them to run. Knowing that normally I'll have only 2 instances on each node, wouldn't the following config be better?
NODE1 - 256GB
INST1 - 64GB min / 128GB max
INST2 - 64GB min / 128GB max
NODE2 - 256GB
INST3 - 64GB min / 128GB max
INST4 - 64GB min / 128GB max
With this configuration, in case all the instances (due to a failure) start running on only one node, will SQL adjust all instances down to just the min memory specified?
Hi, I am using an MS SQL 2000 DB. The following prepared-statement query works fine, BUT when I remove "top 300", Tomcat gives an OUT OF MEMORY ERROR!!
I think it's because of the cross join. So I want to optimize this query so that it will not give an "Out of memory" error. How can I rewrite this query?
Query:
"select top 300 haw.DeviceID as deviceid,haw.Description as Description,haw.SerialNo as SerialNo,haw.Suffix as Suffix,haw.HWRev as HWRev,haw.Type as Type,dev.Status as Status from Hardware_PlaceHolder haw,Device_PlaceHolder dev ,Accounts ac where dev.siteId = ac.siteId and ac.CustID = ? ";
I have a question concerning where to put certain database files for the following RAID configurations. The server has 2 RAID configs: 2 HDs in a RAID 1 and 4 HDs in a RAID 10. The server will host 4 databases: a replicated db, a Reporting Services db (which technically constitutes 2 databases) and an application db. To get the best performance, should I put the OS, SQL binaries and log files on the RAID 1 config, with the data and tempdb on the RAID 10? If not, please explain the best solution. Thank you!
I tried to remove AdventureWorksDB in the "Add or Remove Programs" section of Control Panel and I got the following errors: (1) AdventureWorksDB Error 1326: Error getting file security: C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\ GetLastError: 5. |OK| and (2) Add or Remove Programs: Fatal error during installation (after I clicked the |OK| button). Please help and tell me how I can solve this problem.