SQL Server 2014 :: Automating Random Inserts Into A Memory Optimized Table
Jan 28, 2015
I have this table:
CREATE TABLE [Sales].[Test_inmem]
(
[c1] [int] NOT NULL,
[c2] [nvarchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[ModifiedDate] [datetime2](7) NOT NULL CONSTRAINT [IMDF_Test_ModifiedDate] DEFAULT (sysdatetime()),
[Code] ....
I have to generate 1,000,000 random records into it. I tried various ways to insert records but, not being a developer, could not do it. I want C1 to be a serial number, C2 can be anything, and C3 (ModifiedDate) should be the timestamp.
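A minimal set-based sketch, assuming the elided columns have defaults or allow NULL: a tally CTE generates the serial numbers, NEWID() supplies arbitrary text for c2, and ModifiedDate is filled by its SYSDATETIME() default.
;WITH N AS (
    SELECT TOP (1000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b
)
INSERT INTO Sales.Test_inmem (c1, c2)
SELECT n,                                        -- c1: sequential serial number
       LEFT(CONVERT(nvarchar(36), NEWID()), 20)  -- c2: arbitrary random text
FROM N;
-- ModifiedDate is filled in by its SYSDATETIME() default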
Sep 19, 2015
I've been having some trouble getting a single-column "char(5)" field to reliably use an index seek instead of a table scan. The production table in this case contains 25 million rows. As impressive as it is to scan 25 million rows in 35 seconds, the query should run much faster.
Here's a partial table description:
CREATE TABLE [dbo].[Summaries_MO]
(
[SummaryId] [int] IDENTITY(1,1) NOT NULL,
[zipcode] [char](5) COLLATE Latin1_General_100_BIN2 NOT NULL,
[Golf] [bit] NULL,
[Homeowner] [bit] NULL,
[Code] .....
Typically, this table is accessed with a query that includes:
SELECT ...
FROM SummaryTable
WHERE ixZIP IN (SELECT ZipCode FROM @ZipCodesForMO)
This query insists on using a table scan. I've tried WITH (FORCESEEK) for example, but that just makes the query fail.
As I've investigated this issue I also tried:
SELECT * FROM Summaries WHERE ZipCode IN ('xxxxx', 'xxxxx', 'xxxxx')
When I run this query with 64 or fewer (actual, valid) ZIP codes, the query uses an index seek. But when I give it 65 or more ZIP codes, it uses a table scan.
To summarize, the production query always uses a table scan, and when I specify 65 or more ZIP codes the ad-hoc query also uses a table scan. I'm wondering if the binary collation of the indexed column (Latin1_General_100_BIN2) is somehow the problem. I'll likely try converting the ZIP codes to an integer to see what happens.
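A hedged rewrite worth trying before changing the data type: join to the table variable instead of using IN, with an explicit COLLATE on the variable's side so the index on zipcode stays sargable, and OPTION (RECOMPILE) so the optimizer sees the table variable's actual row count. The names @ZipCodesForMO and ZipCode follow the post; the column list is an assumption.
SELECT s.*
FROM dbo.Summaries_MO AS s
INNER JOIN @ZipCodesForMO AS z
    ON s.zipcode = z.ZipCode COLLATE Latin1_General_100_BIN2
OPTION (RECOMPILE);  -- compile with the table variable's real cardinality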
Aug 26, 2013
I am trying to load data into a memory-optimised table (INSERT INTO ... SELECT ... FROM ...). The source table is about 1 GB and 13 million rows. During this load the LDF file grows to 350 GB, until the disk runs out of space. The server has about 110 GB of memory reserved for SQL Server. Tempdb doesn't grow. The bucket count in the CREATE statement is 262144. The hash key has 4 fields (two int, one smallint, one varchar(200)). The disk for the data files (including the Hekaton files) still has free space.
How can I reduce the size of the LDF file during the data load?
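Since a single INSERT ... SELECT into a durable memory-optimised table logs every row in one transaction, a hedged workaround is to load in batches so the log can be truncated between them (SIMPLE recovery assumed; all object and column names below are placeholders):
DECLARE @batch int = 1000000, @rows int = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.Target_InMem (k1, k2, k3, k4, payload)
    SELECT TOP (@batch) s.k1, s.k2, s.k3, s.k4, s.payload
    FROM dbo.SourceTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Target_InMem AS t
                      WHERE t.k1 = s.k1 AND t.k2 = s.k2 AND t.k3 = s.k3 AND t.k4 = s.k4);
    SET @rows = @@ROWCOUNT;
    CHECKPOINT;  -- under SIMPLE recovery, lets the log truncate between batches
END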
Feb 18, 2015
I'm just beginning to experiment with memory optimised tables.
I have two sets of near identical tables - one set normal, the other set memory optimised with DURABILITY=SCHEMA_ONLY - and am running test queries against these. When I say that the two sets are "near identical", I mean that they are the same except for the primary keys: for the normal tables these are defined as PRIMARY KEY CLUSTERED, whereas for the memory-optimised ones they are defined as PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=nnnn) as per the requirements for such tables.
I then run a pair of test queries, again identical but one referencing the normal tables and the other referencing the memory optimised ones.
(The query uses an inner join on three tables with row counts of approx 3m rows, 100000 rows and 5000 rows.)
The query against the normal tables runs noticeably faster than the one against the memory-optimised tables. To try to find out why, I examined the execution plans. The plan for the memory-optimised query suggests that I have a missing index: but of course I can't create this against a memory-optimised table. Is this a bug, or am I missing something? Why should the performance between the two be so different?
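Not a bug as such: in SQL Server 2014, indexes on a memory-optimised table can only be declared inside CREATE TABLE (there is no ALTER TABLE ... ADD INDEX until 2016), so a "missing index" suggestion has to be satisfied by recreating the table with the extra index built in. A hedged sketch with illustrative names:
CREATE TABLE dbo.Fact_Demo
(
    Id      int   NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 4194304),
    JoinKey int   NOT NULL INDEX ix_JoinKey NONCLUSTERED,  -- extra range index for the join
    Amount  money NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);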
Apr 28, 2015
I read this: [URL] ....
Which says you must drop the database to remove the filegroup.
I deleted all the objects and then backed up the DB and restored it, but the filegroup is still there.
I was skeptical but some of the comments made me think this might work.
Do I really have to restore from a pre-memory optimized state?
Jun 11, 2015
How do I find the total allocated space and the used space of a memory-optimized filegroup?
USE memory_optimized_db;
GO
SELECT (SUM(size) * 8.0) / 1024.0 AS SpaceMB,
       FILEGROUP_NAME(data_space_id) AS FileGroupName,
       type_desc
FROM sys.database_files
GROUP BY data_space_id, type_desc;
The query above gives the current used size of the memory-optimized filegroup's container, but it doesn't give the total allocated space.
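A hedged alternative via the In-Memory OLTP checkpoint-file DMV, where file_size_in_bytes approximates allocated space and file_size_used_in_bytes the used portion (column availability should be verified on your build):
SELECT state_desc,
       file_type_desc,
       SUM(file_size_in_bytes) / 1048576.0      AS AllocatedMB,
       SUM(file_size_used_in_bytes) / 1048576.0 AS UsedMB
FROM sys.dm_db_xtp_checkpoint_files
GROUP BY state_desc, file_type_desc;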
Oct 2, 2014
I have the following setup:
- An MSSQL 2014 Standard server that houses multiple small databases (in excess of a hundred).
- These databases are frequently dropped and restored by an application that uses this SQL Server.
- There is a business need for this setup at this time, so I can't get away from it. Therefore answers like "don't have so many small databases that are frequently dropped and restored" would be unhelpful.
This is the problem I have:
- When I connect SSMS 2014 to the server and expand the "Databases" node, it takes forever to display. In comparison, SSMS 2008 connected to SQL 2008R2 server with the same number of databases displays the Databases tree very quickly.
I ran a trace to see what exactly SSMS 2014 is doing. When the "Databases" node is expanded, it runs a query that checks each database for memory-optimized tables (a new and wonderful feature of SQL 2014, for sure, but I'm not using it, at least not yet). Naturally, when you have to loop through over a hundred DBs, it takes time. Worse yet, if one of these DBs is in the process of being restored, the query sits and waits to time out before proceeding to the next DB. Sometimes this causes outright timeouts. Here is the query:
USE [MyDatabase];
SELECT ISNULL((SELECT TOP 1 1
               FROM sys.filegroups AS FG
               WHERE FG.[type] = 'FX'), 0) AS [HasMemoryOptimizedObjects];
To be sure, this is NOT a SQL Server performance issue. This server processes a rather heavy workload and has been doing so for over a month, and the workload completes within expected time limits or better. Even so I've done some basic performance measuring, and the server itself is quite all right.
Moreover, if I connect SSMS 2008 to it, I get an error message (Index out of bounds or somesuch), but SSMS 2008 does connect, and displays the Databases tree much faster than SSMS 2014.
I'd like to turn off the option to check for Memory Optimized Objects altogether, as I'm not using the feature.
May 27, 2014
I need a script that inserts the data of an Excel sheet into a table. If a row already exists it should be left alone, unless it has been edited in the Excel sheet, and so on. This process has to go through a stored procedure... ...but how?
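A hedged sketch of one common pattern: stage the sheet (for example via the Import Wizard or OPENROWSET into dbo.StagingSheet), then upsert with MERGE inside the procedure. All object and column names here are placeholder assumptions.
CREATE PROCEDURE dbo.UpsertFromSheet
AS
BEGIN
    MERGE dbo.TargetTable AS t
    USING dbo.StagingSheet AS s
        ON t.Id = s.Id
    WHEN MATCHED AND (t.Name <> s.Name OR t.Amount <> s.Amount) THEN
        UPDATE SET t.Name = s.Name, t.Amount = s.Amount   -- row was edited in the sheet
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (Id, Name, Amount) VALUES (s.Id, s.Name, s.Amount);
END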
Jun 15, 2015
I have a database with a memory optimized filegroup on it. How can I remove it? I have removed the memory optimized table I had on it, but when I try to remove the filegroup I receive an error.
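For reference, the usual removal order is below (names are assumptions). Note that on SQL Server 2014 the second statement commonly still fails: once a database has contained a memory-optimised filegroup, it generally cannot be removed without recreating the database.
ALTER DATABASE MyDb REMOVE FILE imoltp_container;   -- drop the container first
ALTER DATABASE MyDb REMOVE FILEGROUP imoltp_fg;     -- then attempt the filegroup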
Jul 29, 2014
I am doing performance testing of the In-Memory option in SQL Server 2014. As part of this I want to insert 500 million rows into an in-memory-enabled test table I have created.
I need a sample script to insert 500 million records into a table.
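A hedged sketch that generates the rows in 10-million-row batches, so no single transaction has to log 500 million inserts; the table and column names are placeholder assumptions:
DECLARE @i int = 0, @batch int = 10000000, @total bigint = 500000000;
WHILE @i * @batch < @total
BEGIN
    ;WITH N AS (
        SELECT TOP (@batch)
               ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
        FROM sys.all_columns a CROSS JOIN sys.all_columns b CROSS JOIN sys.all_columns c
    )
    INSERT INTO dbo.InMemTest (Id, Payload)
    SELECT @i * @batch + n,                -- unique, ascending key
           CONVERT(nvarchar(36), NEWID())  -- arbitrary random payload
    FROM N;
    SET @i += 1;
END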
Oct 3, 2015
I have a function which generates a random number in a range:
CREATE FUNCTION Func_Rand
(
@MAX BIGINT ,
@UNIQID UNIQUEIDENTIFIER
)
RETURNS BIGINT
AS
BEGIN
RETURN (ABS(CHECKSUM(@UNIQID)) % @MAX)+1
END
GO
If you run this script you always get a result between 1 and 4:
SELECT dbo.Func_Rand(4, NEWID())
But this script sometimes returns no result. Why?
SELECT 'aa'
WHERE dbo.Func_Rand(4, NEWID()) IN ( 1, 2, 3, 4 )
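A hedged guess at the cause: the IN list expands to dbo.Func_Rand(4, NEWID()) = 1 OR ... = 2 OR ..., and NEWID() may be re-evaluated for each comparison, so every branch can test a different random number and all four can miss. Capturing the value once makes the behaviour deterministic:
DECLARE @r bigint = dbo.Func_Rand(4, NEWID());  -- evaluated exactly once
SELECT 'aa'
WHERE @r IN ( 1, 2, 3, 4 );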
Apr 2, 2015
I have a production DB where, all of a sudden, it seems that any and every insert causes massive locks/blocks.
If I kill the offending spid, another spid pops up with the same block/lock!
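A hedged diagnostic starting point: identify the head blocker and the statement it is running, then walk the blocking chain, rather than killing spids one by one.
SELECT r.session_id,
       r.blocking_session_id,   -- 0 means this session is a head blocker
       r.wait_type,
       r.wait_resource,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;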
Apr 26, 2015
We have a database with a table that contains around 180m records. Each day a further 70k are inserted. No records are ever deleted, as this table is used for archiving only. Users are required to perform SELECTs on this table constantly, but due to the high number of INSERTs the indexes become very fragmented very quickly. My aim is to avoid the daily rebuilds of the indexes which our software house is telling us we have to do.
This is the DDL for the table:
CREATE TABLE [dbo].[Inventory](
[EAN] [bigint] NOT NULL,
[Day] [smalldatetime] NOT NULL,
[State] [int] NOT NULL,
[Quantity] [int] NULL,
[StockValue] [float] NULL,
CONSTRAINT [PK_Inventory] PRIMARY KEY CLUSTERED
[code]...
There are also three non-clustered indexes on this table, each referencing a single column. The problem from my side is that I cannot understand why the three columns in the primary key would also be configured as non-clustered indexes. My solution would be one of the following:
1. Accept the tables are going to be fragmented and require a daily rebuild (don't like this one!)
2. Partition the table (see the sketch after this list)
3. Remove the non-clustered Indexes and let the clustered index for the primary key do the work.
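For option 2, a hedged sketch of monthly partitioning on [Day], so only the current partition fragments and index maintenance can target it; boundary values and names are illustrative assumptions:
CREATE PARTITION FUNCTION pf_InventoryDay (smalldatetime)
AS RANGE RIGHT FOR VALUES ('2015-01-01', '2015-02-01', '2015-03-01');

CREATE PARTITION SCHEME ps_InventoryDay
AS PARTITION pf_InventoryDay ALL TO ([PRIMARY]);

-- The clustered primary key would then be recreated ON ps_InventoryDay([Day]),
-- which requires [Day] to be part of the primary key.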
Aug 26, 2014
We are planning to upgrade. We are using SQL 2008R2 now. Which is the better option: migrating to SQL 2012 or migrating to 2014? I am thinking 2014 is the better option, since it has memory-optimized tables and updatable columnstore indexes.
Aug 31, 2015
I am checking some ratio numbers for our system engineers; those are:
Read/write ratio?
Random/sequential ratio?
Read/write block size?
For the read/write ratio, I am using the query below:
SELECT m.type_desc,
       CEILING(SUM(num_of_bytes_read * 1.0)
               / (SUM(num_of_bytes_read * 1.0) + SUM(num_of_bytes_written * 1.0)) * 100) AS [Read %],
       CAST((SUM(v.size_on_disk_bytes) / 1024.0 / 1024 / 1024) AS MONEY) AS [FileSizeGB]
[Code] ....
For the random/sequential ratio, I googled but cannot find a similar query to get the result.
Oct 20, 2015
I've used some info on here to generate random dates within a given range and also random times - independently they work fine, but I can't seem to join them into a single field of datetime. I'm not sure why. The following snippet works fine as two independent fields:
select CAST(CAST(ABS(CHECKSUM(NEWID())) % 780 + 33968 AS DATETIME) as DATE) as theDate,
       CAST(CAST(DATEADD(millisecond, ABS(CHECKSUM(NEWID())) % 86400000, '00:00') AS TIME) as varchar(50)) as theTime
But when I try to make it a single datetime field:
select CAST(
         cast(cast(CAST(ABS(CHECKSUM(NEWID())) % 780 + 33968 AS DATETIME) as date) as varchar(50))
         + ' '
         + cast(CAST(CAST(DATEADD(millisecond, ABS(CHECKSUM(NEWID())) % 86400000, '00:00') AS TIME) as varchar(50)) as varchar(50))
       as datetime)
Which returns with: Conversion failed when converting date and/or time from character string.
So what I am really looking for is a way to join those two values into a single datetime field... or, failing that, how to generate random dates within a range including random times.
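A hedged guess at the failure: CAST(... AS TIME) defaults to time(7), whose string form carries seven fractional-second digits, which is more than datetime accepts when converting the concatenated string. Adding random milliseconds directly to the random date avoids strings entirely:
SELECT DATEADD(millisecond,
               ABS(CHECKSUM(NEWID())) % 86400000,                      -- random time of day
               CAST(ABS(CHECKSUM(NEWID())) % 780 + 33968 AS datetime)  -- random date in range
       ) AS theDateTime;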
May 7, 2015
In SQL Server 2014, which block size is better for performance: 64 KB or 4 KB?
For normal database files, best practice is a 64 KB disk block size. Not sure if it is the same for a memory-optimized filegroup.
Aug 25, 2004
Hi guys,
I figure this should not be a complex one. I know how to manually pull data from Oracle 9i into SQL Server 2000 using DTS. However, this is my issue...
I simply want to automate pulling data from one table in my Oracle 9i database into a table in my SQL Server 2000. I was hoping I could simply write a stored procedure that would utilize something like Oracle's dblink, and then schedule that procedure. Is this feasible in SQL Server, and how would one go about setting this automated import up?
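It is feasible: a linked server plays the role of Oracle's dblink, and the copy can be wrapped in a stored procedure scheduled by SQL Server Agent. A hedged sketch; the linked-server name, TNS alias, and table/column names are placeholder assumptions.
EXEC sp_addlinkedserver
     @server     = N'ORA9',
     @srvproduct = N'Oracle',
     @provider   = N'MSDAORA',        -- the Oracle OLE DB provider of that era
     @datasrc    = N'OracleTnsAlias';

INSERT INTO dbo.DestTable (Col1, Col2)
SELECT Col1, Col2
FROM OPENQUERY(ORA9, 'SELECT Col1, Col2 FROM OWNER.SRC_TABLE');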
Thanks in Advance all............
'Wale
Mar 5, 2014
My database server's memory utilisation has been growing faster over the past week. It stayed around 55% for a week, and now it is at 70% and increasing.
Total OS memory is 32GB and I capped SQL Server memory at 29GB. Don't know what to do...
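A hedged first check: see which memory clerks are holding the most memory (pages_kb exists on SQL 2012 and later; older builds split it into single_pages_kb and multi_pages_kb).
SELECT TOP (10)
       type,
       SUM(pages_kb) / 1024 AS MemoryMB
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY MemoryMB DESC;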
Nov 11, 2014
Is there a method of forcing existing tables into the in-memory filegroup so the table data can benefit from in-memory processing?
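Not directly in SQL Server 2014: a disk-based table cannot be ALTERed to memory-optimised, so the usual route is to create a new in-memory table and copy the rows across. A hedged sketch with illustrative names:
CREATE TABLE dbo.Orders_IM
(
    OrderId int   NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    Amount  money NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

INSERT INTO dbo.Orders_IM (OrderId, Amount)
SELECT OrderId, Amount
FROM dbo.Orders;
-- then swap names: EXEC sp_rename 'dbo.Orders', 'Orders_old'; EXEC sp_rename 'dbo.Orders_IM', 'Orders';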
Oct 4, 2015
I want to create a lot of indexes in my database for performance.
But I need to find the memory usage by those indexes.
How do I find memory usage by index in SQL Server?
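For disk-based indexes, memory usage is essentially the buffer-pool pages each index currently occupies. A hedged sketch for the current database:
SELECT OBJECT_NAME(p.object_id) AS TableName,
       i.name                   AS IndexName,
       COUNT(*) * 8 / 1024.0    AS BufferMB   -- 8 KB pages cached in the buffer pool
FROM sys.dm_os_buffer_descriptors AS b
JOIN sys.allocation_units AS au ON au.allocation_unit_id = b.allocation_unit_id
JOIN sys.partitions       AS p  ON p.hobt_id = au.container_id
JOIN sys.indexes          AS i  ON i.object_id = p.object_id AND i.index_id = p.index_id
WHERE b.database_id = DB_ID()
GROUP BY p.object_id, i.name
ORDER BY BufferMB DESC;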
Mar 14, 2014
We have run into an issue on a dedicated SSAS 2012 SP1 server where the allocated memory is not being utilized, causing some slowness in use, connections, and queries.
Total memory on the server is 512GB, and after startup the utilized memory gets up to a max of 60GB and stops there. Checking Resource Monitor, msmdsrv.exe is only taking around 39GB overall. With the current properties, that should be at 330GB. Am I missing something in the settings or in the configuration that should be changed?
Version: SQL Server 2012 SP1 Enterprise (11.0.3000)
OS: Windows Server 2012 Datacenter - Fully patched and up to date
Databases: 2 Tabular models
Server: 512GB RAM
Current memory configuration:
Hard Memory Limit - 0 (Default)
LowMemoryLimit - 65% (Default)
TotalMemoryLimit - 95% (Default is 80)
VertiPaqMemoryLimit - 60% (Default)
VertiPaqPagingPolicy - 1 (Default)
MemoryHeapType - 2 (Default)
Nov 26, 2014
We are planning a 2014 migration in a few days.
ServerA----- ServerA1
ServerB---- ServerB1
In ServerA we have 5 databases, and we are making those 5 databases an availability group. The replica is ServerA1.
In ServerB we have 3 databases, and we are making those 3 databases an availability group. The secondary replica is ServerB1.
What is the best option for configuring the quorum drive in this situation?
Also, ServerA1 and ServerB1 are used for reporting purposes.
We have some sensitive data. Is it possible to delete the data while it is being read?
How does the memory optimization feature work with AlwaysOn?
Sep 21, 2015
I'm working on a large scale project that is currently in production. We have a big process that recently changed to use In-Memory Tables with SQL 2014 for performance efficiency.
The Process uses:
51 In-Memory SQL Tables.
50 stored procedures (not native) that load data (INSERT) from about 150 regular tables and IM tables.
300 validations (short non-native stored procedures) selecting from those 50 In-Memory tables (and inserting into an In-Memory table that saves the validation errors, if any).
At the end of this process we delete the data relevant to each process (DELETE FROM ... WHERE).
By the way, no UPDATE STATISTICS on the In-Memory tables is used; when we tested the process with it, it slowed down and caused some locks.
We call this process from ADO.NET: the load stored procedures run first and then the validations, and each SP uses a different SQL connection. In normal use, everything works fine and takes about 1.5 seconds.
Under a stress test (6 clients x 100 tasks) for 30 minutes, after several minutes we start to get this SQL exception (1 SQL exception for every 20 tasks):
41301. A previous transaction that the current transaction took a dependency on has aborted, and the current transaction can no longer commit.
Transactions in Memory-Optimized Tables
The exception is not clear. We are not using BEGIN TRANSACTION in the process, and the SQL exception occurs in different stored procedures each time.
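Error 41301 is a validation failure of In-Memory OLTP's optimistic concurrency control: even auto-commit statements form implicit transactions that can be aborted, and the caller is expected to retry them. A hedged retry-wrapper sketch (the procedure name is an assumption):
DECLARE @retry int = 3;
WHILE @retry > 0
BEGIN
    BEGIN TRY
        EXEC dbo.LoadStep1;   -- the proc that hits 41301 under stress
        SET @retry = 0;       -- success: leave the loop
    END TRY
    BEGIN CATCH
        SET @retry -= 1;
        -- retry only the documented In-Memory OLTP conflict/validation errors
        IF ERROR_NUMBER() NOT IN (41301, 41302, 41305, 41325) OR @retry = 0
            THROW;
    END CATCH
END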
Apr 24, 2014
SQL Server In Memory OLTP 2014?
Oct 9, 2015
I have the following SQL FCI configuration:
NODE1 -256GB
INST1 - 64GB min/64GB max
INST2 - 64GB min/64GB max
NODE2 - 256GB
INST3- 64GB min/64GB max
INST4- 64GB min/64GB max
With this configuration, if all instances are running on the same node, there will be enough memory for them to run. Knowing that normally I'll have only 2 instances on each node, wouldn't the following config be better?
NODE1 -256GB
INST1 - 64GB min/128GB max
INST2 - 64GB min/128GB max
NODE2 - 256GB
INST3- 64GB min/128GB max
INST4- 64GB min/128GB max
With this configuration, in case all the instances (due to a failure) start running on only 1 node, will SQL adjust all instances down to just the min memory specified?
Aug 7, 2007
Hi, I would have used the aspnet membership tool to auto-create all the ASP.NET membership tables. However, the hosting company doesn't allow remote connections, which meant I had to create the tables by hand, scripting them to CREATE using Management Studio.
However, I noticed one of the tables, aspnet_SchemaVersions, contains data even without any users, and the missing rows cause an error when trying to log onto my site.
The fix is to make sure the table has the 4-5 rows of data in it (which are missing on the live server). It's just a few rows of data, but I want to script the inserts for each row so I don't have to type them in using myLittleAdmin (the host's web version of Management Studio). Can anyone point me in the right direction?
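A hedged sketch of the insert script; these are the rows a default aspnet_regsql install creates as far as I recall, so compare them against a working development copy of aspnet_SchemaVersions before running:
INSERT INTO dbo.aspnet_SchemaVersions (Feature, CompatibleSchemaVersion, IsCurrentVersion)
SELECT 'common', '1', 1            UNION ALL
SELECT 'health monitoring', '1', 1 UNION ALL
SELECT 'membership', '1', 1        UNION ALL
SELECT 'personalization', '1', 1   UNION ALL
SELECT 'profile', '1', 1           UNION ALL
SELECT 'role manager', '1', 1;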
Jul 20, 2005
I have two tables in a SQL 6.5 database with identical fields and indexes. One contains the data of August 2003 and the other July 2003. Now the August table is larger (about 40000 more rows) than the July table, but I've noticed that the same queries perform much faster on the August table than the July table. I've tried this with many different queries, so I'm wondering what's the reason behind this. Is there a way to optimize a table? Remember, I'm using SQL 6.5. Thx
Jun 12, 2007
I got the SQL query "SELECT TOP 1 * FROM Books ORDER BY NEWID()" for selecting a random row in a table. Can anyone explain the above query?
Regards,
S.sajan
Jul 20, 2005
Hello, can someone point me to getting the total number of inserts and updates on a table over a period of time? I just want to measure the insert and update activity on the tables. Thanks. - Vish
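On SQL Server 2005 and later, a hedged starting point is the index-usage DMV, whose counters are cumulative since the last service restart (user_updates counts inserts, updates, and deletes together):
SELECT OBJECT_NAME(s.object_id) AS TableName,
       SUM(s.user_updates)      AS WritesSinceRestart
FROM sys.dm_db_index_usage_stats AS s
WHERE s.database_id = DB_ID()
GROUP BY s.object_id
ORDER BY WritesSinceRestart DESC;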
May 1, 2015
We have a SQL Server 2008R2 system with heavy usage of one specific table. I have tuned basically all I can as far as making sure SQL statements are using good indexes. From time to time a group of folks will log into Management Studio, run SQL statements like the one below, and leave the query window open; once in a while this causes blocking of other SQL running on our online system.
The query is like this: select ID,* from tablename with (nolock) where ID like 'MSPRYy%'
The results come back in less than 1 second. However, they leave this window open, which is what causes the session to become a HEAD BLOCKER and block other SQL statements from running.
Apr 10, 2015
I am trying to create a trigger on a table. Let's call it table ABC. Table looks like this:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[ABC](
[id] [uniqueidentifier] NOT NULL,
[Code] ....
When someone updates a row on table ABC, I want to insert the original values along with the current date and time getdate() into table ABCD with the current date and time into the updateDate field as defined below:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[ABCD](
[id] [uniqueidentifier] NOT NULL,
[Code] .....
The trigger I've currently written looks like this:
/****** Object: Trigger [dbo].[ABC_trigger] Script Date: 4/10/2015 1:32:33 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER [dbo].[ABC_trigger] ON [dbo].[ABC]
[Code] ...
This trigger works, but it inserts all of the rows every time. My question is: how can I get the trigger to insert only the row(s) just updated?
I can't sort by uniqueidentifier descending, as those values can be random.
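Since the trigger body is elided above, here is a hedged sketch of the usual fix: read from the deleted pseudo-table, which holds the original values of exactly the rows touched by the UPDATE, instead of selecting from ABC itself. Column names are assumptions.
CREATE TRIGGER [dbo].[ABC_trigger] ON [dbo].[ABC]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.ABCD (id, updateDate /* , other audited columns */)
    SELECT d.id, GETDATE() /* , d.<other columns> */
    FROM deleted AS d;   -- only the rows affected by this UPDATE
END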
Jan 30, 2015
I'm trying to write a function that will retrieve the last backup date/time of a particular database on a remote server (i.e. by querying msdb.dbo.backupset). Unfortunately, you can't use sp_executesql in a function, so I can't figure out a way to pass the server name to the query and still be able to return the datetime value back to the calling TSQL code (and that rules out using EXEC() as well).
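A hedged workaround: move the logic into a stored procedure with an OUTPUT parameter, where sp_executesql is allowed and the server name can be spliced into a four-part name. The object names are assumptions.
CREATE PROCEDURE dbo.GetLastBackup
    @server     sysname,
    @db         sysname,
    @lastBackup datetime OUTPUT
AS
BEGIN
    DECLARE @sql nvarchar(400) =
        N'SELECT @d = MAX(backup_finish_date) FROM '
        + QUOTENAME(@server) + N'.msdb.dbo.backupset WHERE database_name = @db;';
    EXEC sp_executesql @sql,
         N'@d datetime OUTPUT, @db sysname',
         @d = @lastBackup OUTPUT, @db = @db;
END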