SQL Server 2008 :: How To Find Statements That Cause Large Memory Paging

Apr 22, 2015

I am monitoring our production server, and noticed that periodically we have spikes of Memory Paging Rate (pages/sec).

How do I find the particular queries / stored procedures that are causing this?
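One possible starting point (a sketch, not from the thread) is to rank cached statements by physical reads, which usually correlate with the statements driving paging. The counters are cumulative since each plan was cached, so comparing snapshots taken before and after a spike is more telling than a single run:

SELECT TOP (20)
    qs.total_physical_reads,
    qs.execution_count,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE qs.statement_end_offset
             WHEN -1 THEN DATALENGTH(st.text)
             ELSE qs.statement_end_offset END
         - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_physical_reads DESC;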

View 5 Replies



SQL Server 2008 :: How To Find Which Queries / Processes Causing Large Memory Paging Rate

Mar 30, 2015

Our monitoring tool shows that our production system periodically experiences a large paging rate - up to 800 memory pages/sec. How do I find out which particular queries, stored procedures, or processes initiate this?
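For a spike that is happening right now, a sketch like the one below (illustrative only) lists the currently executing requests with their read counts, which is often enough to spot the offender:

SELECT r.session_id, r.reads, r.logical_reads, r.granted_query_memory, t.text AS statement_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50          -- skip system sessions
ORDER BY r.reads DESC;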

View 3 Replies View Related

DB Engine :: How Server Memory Paging Out

Nov 23, 2015

Is there any easy way to detect that SQL Server memory is getting paged out? Any DMV query or tips to confirm it is being paged out?

My SQL Server Version is: Microsoft SQL Server 2008 (SP3) - 10.0.5500.0 (X64)
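On 2008 and later, sys.dm_os_process_memory gives a quick indication (a sketch; it is also worth checking the error log for the message "A significant part of sql server process memory has been paged out"):

SELECT physical_memory_in_use_kb,
       locked_page_allocations_kb,        -- > 0 means Lock Pages in Memory is in effect
       page_fault_count,
       memory_utilization_percentage,     -- low values suggest the working set has been trimmed
       process_physical_memory_low
FROM sys.dm_os_process_memory;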

View 5 Replies View Related

Paging File Size For Server With 128GB Memory

Sep 7, 2006

Hi,

Does anyone know what size the paging file should be for an 8-processor, 16-core Opteron server with 128GB of memory installed? The server will be running SQL 05 Ent and IIS, on top of Win2k3 R2 x64 Ent.

Please also give the reason for that size.

Regards,

dong

View 1 Replies View Related

Extreme Paging Rate Reduced By Setting Maximum Server Memory To 6 Gig?

Dec 3, 2007

We have several 2005 servers with "Maximum server memory" set to 214 gig, which I believe is the default at installation time. I am told that this means "use all the memory there is including paging." Well, this is nuts but the servers seem to work fine with this setting no matter how much physical memory they have.

One of our 2005 servers recently started paging like crazy, so I reduced "Maximum server memory" to 6000 and the paging disappeared (server has 8 gig of physical memory) and the server appears happy.

I cannot explain why only this one server has this paging issue and the others do not. Should I be setting "Maximum server memory" on all my servers? Are there other considerations which might cause the server to eat up all the memory? As far as I know, no other applications run on this box.
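For reference, capping the instance is a two-line change; a sketch using the 6000 MB value mentioned above:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 6000;   -- leave the rest for the OS and other processes
RECONFIGURE;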

Thanks,

Michael

View 6 Replies View Related

Paging Large Recordsets

Oct 21, 2007

Here's a question you'll never quit hearing: is there a convenient way to page through large recordsets in SQL Server 2000?

I'm writing some software which, for all intents and purposes, works like a messageboard: users can create threads, leave replies, and so on.

I have about a half million records in a few of my tables, and some of my queries return 1000s of results. I'd prefer not to return 1000s of records all at once, so I don't want to page my records in code; I'd rather page them in SQL Server. Naturally, I want to page replies. However, I don't know of a convenient way to page records in SQL Server 2000.
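For what it's worth, a common workaround in SQL Server 2000 (before ROW_NUMBER existed) is the nested-TOP pattern; a sketch with hypothetical table and key names:

-- Page 3 with a page size of 20: take the first (page * size) rows, reverse-sort,
-- take the last page's worth, then restore the original order.
SELECT *
FROM (
    SELECT TOP 20 *
    FROM (
        SELECT TOP 60 *                -- page number * page size
        FROM dbo.Replies
        ORDER BY ReplyID ASC
    ) AS firstRows
    ORDER BY ReplyID DESC
) AS pageRows
ORDER BY ReplyID ASC;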

View 1 Replies View Related

Paging Large Results In SQL 2005

May 29, 2006

Let's say we have more than 100,000 rows in Table1, and we want to view 10 rows at a time... by pressing a NEXT button we will see the next page of 10 rows...

There are two buttons: NEXT and PREVIOUS.

Can anyone tell me how to do that in SQL 2005, and what the technique is correctly called?

I have found code that uses ROW_NUMBER to view results between two row numbers,

for example rows between 10 and 50...
but that is not quite what I want, so I need some help. Thank you.

By Uncle Sam
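This is usually called server-side paging; a minimal sketch with hypothetical names, driven by a page number and page size instead of fixed row numbers:

DECLARE @PageNumber int, @PageSize int;
SET @PageNumber = 3;     -- the NEXT/PREVIOUS buttons just increment/decrement this
SET @PageSize = 10;

WITH numbered AS (
    SELECT *, ROW_NUMBER() OVER (ORDER BY Id) AS rn
    FROM dbo.Table1
)
SELECT *
FROM numbered
WHERE rn BETWEEN (@PageNumber - 1) * @PageSize + 1 AND @PageNumber * @PageSize
ORDER BY rn;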

View 10 Replies View Related

Handle Paging With Large Datasets

Apr 10, 2008

I'm looking at this article http://www.dotnetjunkies.com/Article/EA868776-D71E-448A-BC23-B64B871F967F.dcik
and it seems like they are selecting the entire Customers table into the temp table, correct?

View 2 Replies View Related

SQL Server Is Occupying Large Memory Space

Oct 3, 2001

In an Intranet Application using Win NT, Apache, Tomcat and SQL Server, the memory space used by SQL Server is drastically increasing and finally the system crashes. Nearly 40 people are accessing the system. The hardware configuration is P2 processor with 393 MB RAM and 2GB Virtual Memory. SQL Server,Web server and Servlet Engine are running on same machine.
Within three hours, SQL Server occupies 200MB of memory, system performance comes down, and finally the system stops the Tomcat servlet engine.
Anybody have any idea on this? We have nearly 1500 JSP pages, 200 Bean files and 300 tables in SQL Server.

View 2 Replies View Related

Sorting + Paging A Large Table In Stored Procedure

May 6, 2007

As I said above, how do I put sorting + paging in a stored procedure? My database has approximately 50,000 records, and obviously I can't SELECT all of them and let GridView / DataView do the work, right? Or else it would use too many resources per request. So I intend to do sorting + paging at the database level. It's going to be either hardcoded SQL or stored procedures.

If it's hardcoded SQL, I can just change the SQL statement each time the parameters (startRecord, maxRecords, sortColumns) change. But I don't know what to do in a stored procedure to get the same result. I know how to implement paging in a stored procedure (ROW_NUMBER), but I don't know how to change the ORDER BY clause at runtime inside the stored procedure. Thanks in advance.

PS. In case "ask_Scotty", who replied in my previous post, http://forums.asp.net/thread/1696818.aspx, is reading this, please look at my reply to your answer in the last post. Thank you.
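One pattern that avoids dynamic SQL (a sketch only; the column names are hypothetical) is to put CASE expressions inside the ROW_NUMBER() ORDER BY, one branch per sortable column and direction:

CREATE PROCEDURE dbo.GetPage
    @startRecord int, @maxRecords int, @sortColumn sysname, @sortDir varchar(4)
AS
BEGIN
    WITH numbered AS (
        SELECT *,
               ROW_NUMBER() OVER (ORDER BY
                   CASE WHEN @sortColumn = 'Name'        AND @sortDir = 'ASC'  THEN Name        END ASC,
                   CASE WHEN @sortColumn = 'Name'        AND @sortDir = 'DESC' THEN Name        END DESC,
                   CASE WHEN @sortColumn = 'CreatedDate' AND @sortDir = 'ASC'  THEN CreatedDate END ASC,
                   CASE WHEN @sortColumn = 'CreatedDate' AND @sortDir = 'DESC' THEN CreatedDate END DESC
               ) AS rn
        FROM dbo.MyTable
    )
    SELECT *
    FROM numbered
    WHERE rn BETWEEN @startRecord AND @startRecord + @maxRecords - 1;
END;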

View 3 Replies View Related

Any Way To Prevent SSAS Memory From Paging?

May 21, 2007

This morning msmdsrv.exe looked like the following in Task Manager:

Memory Usage: 600,000 K

VM Size: 2,500,000 K



If I understand correctly, that means that a good deal of SSAS memory has been swapped to disk by the OS. Is there any way to prevent this? (I have read about using the Lock Pages In Memory privilege at http://msdn2.microsoft.com/en-us/library/ms179301.aspx which lets SQL prevent the OS from paging sqlservr.exe memory. I don't suppose there's an equivalent SSAS setting.)


In particular, the symptom I'm seeing is that after we finish processing, all that paged memory has to be loaded back into physical memory before the transaction can be committed. Committing the transaction is usually very quick, but when most of the SSAS process memory has been paged out, it takes quite a while, during which I can see the Memory Usage number in Task Manager growing. (I don't think it's a blocked transaction commit.)

View 2 Replies View Related

Memory Usage/allocation/paging Question

Jun 30, 2006

Are there any *negative* consequences to SQL Server 2000 Standard paging to disk more often if one reduces the amount of available RAM from 2.0 GB to 1.5 GB to give the OS and other apps enough RAM, other than things possibly being "slower" and whatever wear and tear could happen to the disk drives?

Or is it better to add in more RAM per se to bring the server up to 4 GB (Win2k3 Svr SP1)? (It has a 4 GB pagefile.)

Is any information available about judging how much RAM the OS (Windows 2003 Server) should have available? Does IIS require a lot of RAM?

Thank you, Tom

View 1 Replies View Related

SQL2005 Memory And Paging File Issue

Mar 28, 2007

Currently, we have the following setup...

SQL 2005 Std, 8gb ram, boot.ini switches for /PAE and /3GB, AWE enabled, and the min/max memory settings for a single instance set to 1024 and 7168 respectively. Performance-wise the system is a bit slower than before (with SQL 2000) and there seems to be a lot of paging. The SQL service is also running under "LocalSystem". In this case, do I need to grant the local "SYSTEM" account the "Lock pages in memory" right? I have a feeling there is a misconfiguration somewhere. Can someone point me in the right direction? Thanks.

View 4 Replies View Related

SQL Server 2008 :: Merge 2 Select Statements

Aug 25, 2015

SELECT part.num, woitem.qtytarget AS woitemqty,
    (SELECT LIST(wo.num, ',')
     FROM wo INNER JOIN moitem ON wo.moitemid = moitem.id
     WHERE moitem.moid = mo.id) AS wonums,
    mo."USERID" AS mo_USERID

[Code] ...
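Note that LIST() is not a SQL Server aggregate; in SQL Server 2008 a comma-separated list is usually built with FOR XML PATH(''), so the subquery above could be replaced with something like this sketch (assumes wo.num is character data; CAST it first if it is not):

STUFF((SELECT ',' + wo.num
       FROM wo
       INNER JOIN moitem ON wo.moitemid = moitem.id
       WHERE moitem.moid = mo.id
       FOR XML PATH('')), 1, 1, '') AS wonums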

View 5 Replies View Related

W3wp Causing Memory Paging, Crash On Reportserver?

Jan 29, 2007

Summary: We host reports off of one server that are accessed and refreshed 24/7/365. On a daily basis, I receive calls about reports taking several minutes to run IF they run at all (normally these reports take seconds). There has yet to be an actual "error message" associated with this behavior. When I check the server at these times, I find that the w3wp.exe process is taking up about 4 GB and is causing paging. It seems that once it gets to that point, it kills itself and all goes back to normal. However, it's a slow death and is impacting business.

We're using MSRS 2005. Hosting the reports is the job of this server--no other applications are hitting it. There is a development SQL db on the box, but not in use at this time. The datasources in most cases are stored procedures housed on another box. The web interface (for end users running the report) is hosted on a third server.

Any thoughts?

View 5 Replies View Related

SQL Server 2008 :: Date Manipulation And CASE Statements

Mar 1, 2015

I have a question on date manipulation functions and CASE statements

My SQL is passed the following parameters and performs a select, using a manipulation of these date parameters to get a start and end date range depending on the conditions:

monthColHeader = eg 'Feb 2015'
defaultStartDate and defaultEndDate
filterStartDate and filterEndDate.

These are my conditions;-

If defaultStart and End = filterStart and End, use monthColHeader for the date range.
If defaultStart and End != filterStart and End AND the month/year of filterStart and filterEnd match, then use the filterStart & End month/year with the monthColHeader to get the date range.
If defaultStart and End != filterStart and End AND the month/year of filterStart and filterEnd DON'T match, use the filterStart day and monthColHeader for our start date, and monthColHeader for our end date.

When I say use monthColHeader I mean like this;-

(r.dbAddDate >= (CAST('@Request.monthColHeader ~' AS DATETIME)) AND r.dbAddDate < DATEADD(mm,1,'@Request.monthColHeader ~'))

This sql works for converting say 'Feb 2015' to '2015-02-01' & '2015-02-28'....
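For illustration only (parameter names taken from the question, logic simplified), the start date could be derived with a single CASE over the three conditions:

DECLARE @startDate datetime, @monthStart datetime;
SET @monthStart = CAST(@monthColHeader AS datetime);     -- 'Feb 2015' -> 2015-02-01

SET @startDate = CASE
    WHEN @filterStartDate = @defaultStartDate AND @filterEndDate = @defaultEndDate
        THEN @monthStart                                              -- condition 1: use the header month
    WHEN YEAR(@filterStartDate) = YEAR(@monthStart) AND MONTH(@filterStartDate) = MONTH(@monthStart)
        THEN @filterStartDate                                         -- condition 2: filter month/year matches
    ELSE DATEADD(dd, DAY(@filterStartDate) - 1, @monthStart)          -- condition 3: filter day + header month
END;
-- the end date follows the same pattern, with DATEADD(mm, 1, @monthStart) as the default upper bound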

View 1 Replies View Related

SQL Server 2008 :: Add ID And Primary Key To Large Table(s)

Apr 24, 2015

Background:

* SQL Server 2008 R2
* Database was created from a third party product. The product writes to the 3 tables that I need to make changes to 24/7 and downtime is not an option. All changes must be done live.
* Database overall size is ~200 GB
* The 3 tables I must update make up ~190 GB of that space.
* Tables have no primary key or ID columns. Therefore, the data is highly fragmented.
* Of the ~190 GB of space allocated for the tables, there is roughly 70 GB of actual data.
* Rows of the table are not guaranteed to be unique. In fact, on one of the tables, tests were run with a small sample of data and duplicates were very much evident.

What I'm trying to accomplish here is to get an ID column added to the 3 tables and set that ID field as the primary key. Doing so will force the data to become much less fragmented than it is currently and with purging and new inserts, eventually fragmentation will be nearly non-existent.

Problem:
Making table changes on tables this large while data is constantly being added poses many risks and can cause data loss. This was tried on a smaller table than these three and the entire table was lost in the process. Restore from backup was needed to get back to most recent log backup point.

Original Solution:
My original plan was to create a backup of each table and run the script below to migrate the majority of the data temporarily into the new table. I could then update the original table (which now would contain much less data) and then migrate the data back.

CREATE TABLE #temp
(
MsgDate varchar(10)
,MsgTime varchar(8)
,MsgPriority varchar(30)
,MsgHostname varchar(255)

[Code] ....

Original Solution Problem:
The problem with the solution above is that it calls the DELETE function on the original table using the values from the temporary table. When there are duplicate rows, which have not all been inserted into the backup table yet, they will all be removed from the original table because there is nothing unique to separate them out. In my testing, I had 10,000 rows in the original table and ended up with 9,959 rows in the backup table.

Question 1: Is my approach to making these table changes reasonable?
Question 2a: If so, how can I make sure I don't lose data as part of this temporary migration of the data to my backup tables?
Question 2b: If not, what would be a better approach that isn't going to cause disruption to the application that INSERTs data 24/7 and won't have any risk of data loss?
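One alternative worth considering (a sketch; only the columns shown above are listed, the rest would follow the same pattern) is to move rows out with DELETE ... OUTPUT, which removes each physical row exactly once and writes exactly that row to the backup table, so duplicate rows cannot be over-deleted:

DELETE TOP (100000) FROM dbo.OriginalTable
OUTPUT DELETED.MsgDate, DELETED.MsgTime, DELETED.MsgPriority, DELETED.MsgHostname
INTO #temp (MsgDate, MsgTime, MsgPriority, MsgHostname);
-- run in a loop until @@ROWCOUNT = 0, alter the (now small) table, then insert the rows back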

View 9 Replies View Related

SQL Server 2008 :: Large Tables In OLTP

Jul 14, 2015

At what number of records is a table considered a "large" table?

We are getting more deadlocks. We are using the default isolation level. Read and insert statements are blocking each other and causing deadlocks.

I am thinking that purging might reduce the deadlocks.

The table has 15 million records. Is this considered a large table in OLTP systems?

In general, how many records does it take before a table should be considered large?
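If the deadlocks are mainly readers and writers blocking each other under the default isolation level, row versioning is often a bigger lever than table size; a sketch (hypothetical database name, needs testing since it changes READ COMMITTED behaviour database-wide and uses the tempdb version store):

ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;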

View 1 Replies View Related

SQL Server 2008 :: Track All DML Statements Executed From SSMS Into A Table

Apr 1, 2015

I have a specific requirement. I need to insert the DML statements executed from Management Studio into a SQL table. We have SQL Server 2008 R2 and 2012 instances.
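One option (a sketch with hypothetical names; database audit specifications require Enterprise Edition on 2008 R2) is SQL Server Audit writing to a file, which can then be loaded into a table:

USE master;
CREATE SERVER AUDIT DmlAudit TO FILE (FILEPATH = N'C:\AuditLogs\');
ALTER SERVER AUDIT DmlAudit WITH (STATE = ON);

USE MyDatabase;
CREATE DATABASE AUDIT SPECIFICATION DmlAuditSpec
FOR SERVER AUDIT DmlAudit
ADD (INSERT, UPDATE, DELETE ON DATABASE::MyDatabase BY public)
WITH (STATE = ON);

-- later, load the captured statements into a table
SELECT event_time, session_server_principal_name, statement
INTO dbo.CapturedDml
FROM sys.fn_get_audit_file(N'C:\AuditLogs\*.sqlaudit', DEFAULT, DEFAULT);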

View 8 Replies View Related

SQL Server 2012 :: Find Subset Of Records From A Table - Multiple Except Statements

May 13, 2015

I created a CTE which finds a subset of records from a table

I then ran a SELECT statement against the same table as

SELECT * FROM TABLE
EXCEPT (SELECT * FROM CTE)

Is it possible to add another EXCEPT statement after the CTE EXCEPT statement to cover a condition not incorporated in the CTE definition?
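Yes - EXCEPT can be chained, so the extra condition can simply be another EXCEPT block; a sketch with hypothetical predicates:

WITH CTE AS (
    SELECT * FROM dbo.MyTable WHERE Status = 'Processed'          -- stand-in for the real CTE definition
)
SELECT * FROM dbo.MyTable
EXCEPT
SELECT * FROM CTE
EXCEPT
SELECT * FROM dbo.MyTable WHERE CreatedDate < '20150101';         -- the condition not covered by the CTE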

View 9 Replies View Related

SQL Server 2008 :: Update Statistics With Large Databases

Feb 5, 2015

Currently our database size is around 350 GB. It will grow to up to 1.5 TB.

We have the

Auto create statistics option :True,
auto update statistics option :True,
auto update statistics asynchronously option : False

at database level

We have a weekly update-statistics job that runs for a very long time. It was created through a maintenance plan using the full scan option.

Previously they tested with sampling instead of a full scan, but running with sampling affected the queries.

Is there an option to avoid the long job duration?

If we didn't update the statistics manually, what would happen? How do you maintain statistics with large databases?
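Two common ways to shorten the job (a sketch only; the table name is hypothetical): sp_updatestats skips statistics whose underlying rows have not changed, and individual hot tables can be updated with a sample rather than a full scan:

EXEC sp_updatestats;                                      -- only touches stats with row modifications
UPDATE STATISTICS dbo.BigTable WITH SAMPLE 10 PERCENT;    -- targeted, cheaper than FULLSCAN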

View 9 Replies View Related

SQL Server 2008 :: Migrate Contents Of Large Table From One DB To Another?

Mar 3, 2015

I have a large table containing about 800 million rows with an average row length of about 1K. The columns in the table are char columns. I need to move the contents of this table into a similar table where the target columns are varchar. The original table column definitions are compatible with the target table but the reverse is not necessarily true. For example, one column is being changed from int to bigint. The table is partitioned.

So, what is the fastest way to migrate the data? I was thinking of unloading each partition into a flat file and loading the target table with multiple load streams. Is this a good way?
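If the source table's partitioning column is usable on the target as well, one option (a sketch; the function and column names are hypothetical) is to copy a partition at a time with INSERT ... SELECT, which can be minimally logged under the simple or bulk-logged recovery model:

-- repeat per partition number, ideally running several partitions in parallel sessions
INSERT INTO dbo.TargetTable WITH (TABLOCK)
SELECT *
FROM dbo.SourceTable
WHERE $PARTITION.pf_SourceDate(PartitionCol) = 1;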

View 0 Replies View Related

SQL Server 2008 :: Select Statements To Output Files With Proper Datestamp

Jun 11, 2015

I have a few complex queries and I want to extract their results into separate date-stamped output files.

How should I write the queries?

I know BCP is one solution, but is there any other effective way to implement it?
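If BCP is acceptable, a sketch like this (hypothetical names; assumes xp_cmdshell is enabled) builds a date-stamped file name and shells out for each query:

DECLARE @file varchar(260), @cmd varchar(1000);
SET @file = 'C:\Exports\report_' + CONVERT(varchar(8), GETDATE(), 112) + '.txt';   -- e.g. report_20150611.txt
SET @cmd  = 'bcp "SELECT * FROM MyDb.dbo.MyComplexView" queryout "' + @file + '" -c -T -S MyServer';
EXEC master..xp_cmdshell @cmd;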

View 2 Replies View Related

SQL Server 2008 :: Min And Max Memory Setting

Feb 22, 2015

I have an issue with my production server regarding memory usage (memory utilization is above 95%). Memory is 12 GB, and the service consuming the majority of it (88%, 10.5 GB) is sqlserver.exe. So it would appear that MSSQL is not set to restrict the amount of memory it uses? How much should I set for min and max memory? The default is min memory 0 and max 2 TB.

View 5 Replies View Related

SQL In-Memory :: How To Find Memory Usage By Index

Oct 4, 2015

I want to create a lot of indexes in my database for performance, but I need to find the memory usage by index.

How do I find memory usage by index in SQL Server?
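Buffer pool usage per index can be read from sys.dm_os_buffer_descriptors; a sketch for the current database (in-row data only, for simplicity):

SELECT OBJECT_NAME(p.object_id) AS table_name,
       i.name                   AS index_name,       -- NULL for heaps
       COUNT(*) * 8 / 1024      AS buffer_mb         -- pages are 8 KB
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au ON bd.allocation_unit_id = au.allocation_unit_id AND au.type = 1
JOIN sys.partitions AS p ON au.container_id = p.hobt_id
JOIN sys.indexes AS i ON p.object_id = i.object_id AND p.index_id = i.index_id
WHERE bd.database_id = DB_ID()
GROUP BY p.object_id, i.name
ORDER BY buffer_mb DESC;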

View 9 Replies View Related

SQL Server 2008 :: Replication Error - Field Size Too Large

Feb 1, 2011

I've got two databases on the same server and replicate some tables from one database to another. The replication is configured not to drop the table if it exists, but to delete the data based on the filter if one exists. There are two tables on the subscriber that have some extra columns.

I get "field size too large" error when trying to replicate them. Is there a workaround without having to make the publisher and the subscriber tables identical by schema?

View 5 Replies View Related

SQL Server 2008 :: Retrofitting Partitioning To Existing (large) Tables

Feb 9, 2015

We have an existing BI/DW process that adds large chunks of data daily (~10M rows) to an existing table, as well as using Deletes to remove stale data. This scenario seems to beg for partitioning to support switching in/out data.

After lots of reading on this, I have figured out the mechanics of the switching, but I still have some unknowns about the indexes needed to support this.

The table currently has several non-clustered indexes, including one on the partitioning column - let's call that column snapshotdate. Fortunately there are no FKs involved, and no constraints.

Most of the partitioning material I see focuses on creating a clustered PK to assist with switching. Not sure if this is actually necessary, but assume I create one using an Identity column (currently missing) plus snapshotdate.

For the other non-clustered, non-unique indexes, can I just add the snapshotdate to the end of the index? i.e. will that satisfy the switching requirement?
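For reference, a minimal sketch of the pieces involved (names and boundary values are made up). The key point for SWITCH is that every index must be aligned, i.e. partitioned on the same function over snapshotdate, which is achieved by building the index on the partition scheme; the partitioning column does not have to be appended to the index key:

CREATE PARTITION FUNCTION pf_snapshotdate (date)
AS RANGE RIGHT FOR VALUES ('2015-01-01', '2015-02-01', '2015-03-01');

CREATE PARTITION SCHEME ps_snapshotdate
AS PARTITION pf_snapshotdate ALL TO ([PRIMARY]);

-- rebuilding an existing non-clustered index on the scheme makes it aligned
CREATE NONCLUSTERED INDEX ix_BigTable_SomeCol
ON dbo.BigTable (SomeCol)
ON ps_snapshotdate (snapshotdate)
WITH (DROP_EXISTING = ON);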

View 1 Replies View Related

SQL Server 2008 :: What Could Cause Large Gaps In Servers Default Trace

Feb 12, 2015

I have the default trace on a SQL Server 2008R2 instance enabled and found today that there is a gap of nearly 4 minutes in the trace during a time of the day when there most certainly is not going to be a 4 minute window of nothing.

What, if anything, could cause the default trace to have a gap like this? The SQL Server instance (against my preferences) is hosted on VMware; however, it has its own host and so its resources are not being shared with any other server. The data & log files reside on different parts of the SAN. Our IT & network admins are looking into the issue on their end, but when I looked and found a near 4-minute gap in the default trace it hit me that this could be something above/outside of SQL Server.

View 1 Replies View Related

SQL Server 2008 :: Speed Up Text Search In Large Result Set?

Jul 14, 2015

I have a query below which filters the detail field in the #TempLogins table. The detail field is a text field which contains many types of text strings, some containing URLs that have parts like "ResultID=5", which is what is contained in the ResultIDSearch and ResultSetIDSearch fields. The records with entries like "ResultID=5" are the ones I'm trying to filter for.

The problem I have is that the query takes way too long to run. The TempLogin table has around 200 K records and the TempSearch table has around 80 K records.

select * from #TempLogins a where exists
(select 1 from #TempSearch t1 where
a.detail like '%' + t1.ResultIDSearch + '%'
or
a.detail like '%' + t1.ResultSetIDSearch + '%')

View 1 Replies View Related

SQL Server 2008 :: Set Maximum Memory Setting For Each One?

May 21, 2015

I am curious about the maximum memory setting.

Should we set the maximum memory setting on each SQL Server? For example, if a server has 6 GB of memory, should we set the maximum memory setting to 3.5 GB?

View 9 Replies View Related

Executing Large SQL Statements

Apr 15, 2004

Hello,

I am trying to import a very large amount of data into an SQL database. The data is in a format of SQL statements already - it is all in one large text file consisting of a number of CREATE TABLE X and INSERT INTO X VALUES().

The problem is that the amount of values being inserted into some of the tables is so large, that I am unable to open the .SQL file using query analyzer to run it, because the line-size limit for query analyzer is 64kb, whereas actual line-size in the file is in some cases in excess of 15MB.

I would appreciate any advice on how to get all this data into a manageable format. I keep thinking that there simply has to be a way to execute these over-size SQL statements.

Thank you in advance!

-Sergey
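One thing worth trying (a sketch with hypothetical server and database names) is running the file from the command line with osql, which reads the script directly from disk instead of through the Query Analyzer editor:

osql -S MyServer -d MyDb -E -i bigscript.sql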

View 1 Replies View Related

SQL Server 2008 :: Large Binary Dataset - Database Or File System?

Jun 2, 2015

I have a well-structured but also very large binary data-set that is generated by a C++ application every five minutes. The data needs to be accessed by SQL applications. Since data is generated every five minutes, performance is key, both for write and read. The data set is about 500 MB. If data is written to the file system, the write performance doesn't involve SQL Server. For reading it, I have a CLR to read the portions of the data that I need based on offset and length. That works and is very fast. The problem is that data is stored in the file system, so it is not self-contained within the database.

A second option that I haven't explored yet is to write the data into a table as VARBINARY(MAX). I would read the data using SUBSTRING with the appropriate offset and length. I'm wondering about the performance of SQL write/read of binary data of this size, and whether there is a third option I haven't thought of. I'm using SQL Server 2014.
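A sketch of the second option (hypothetical table and column names): SUBSTRING over a VARBINARY(MAX) column reads only the requested byte range, so the whole 500 MB value does not have to be materialized per call. FILESTREAM is a possible third option - the data stays in the NTFS file system but is managed and backed up with the database.

DECLARE @SnapshotId int = 42, @Offset bigint = 1048576, @Length int = 65536;

SELECT SUBSTRING(Payload, @Offset + 1, @Length) AS Chunk      -- SUBSTRING is 1-based
FROM dbo.SnapshotBlobs
WHERE SnapshotId = @SnapshotId;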

View 5 Replies View Related






