SQL Server 2008 :: View Log Cache Size
Jun 21, 2010: Can I view the log cache size in SQL Server memory? Is there a DMV that shows this?
How do I check the size of the data cache allocated from the buffer pool by SQL Server?
Is there a DMV or anything else that shows the allocation sizes for the various memory pools in SQL Server? I think I can work from there.
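A minimal sketch of both angles, assuming the SQL Server 2005/2008 column names (2012 and later rename single_pages_kb/multi_pages_kb to pages_kb). The first query shows the data cache per database from the buffer pool; the second breaks allocations down by memory clerk (pool):

SELECT DB_NAME(database_id) AS DatabaseName,
       COUNT(*) AS BufferPages,
       COUNT(*) / 128 AS BufferMB      -- 128 pages of 8 KB = 1 MB
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY BufferPages DESC;

SELECT type,
       SUM(single_pages_kb + multi_pages_kb) AS TotalKB
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY TotalKB DESC;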
This directory is over 2 GB:
Program Files\Microsoft SQL Server\100\Setup Bootstrap\Update Cache
Can you delete its contents safely?
Is there a way to increase the size of the procedure cache, or is it only an auto-configuring option?
I have 2 GB of memory, and when I check the size of the procedure cache it is just 10 MB. I would like to increase this to around 50 MB. I'm not sure if there is a setting to do this; I had a look in BOL but could not find anything.
Thanks,
Jane
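A sketch of one way to measure the current procedure cache, on SQL Server 2005 or later (the cache is sized automatically by the engine, so this only observes it rather than configuring it):

SELECT COUNT(*) AS CachedPlans,
       SUM(CAST(size_in_bytes AS BIGINT)) / 1024 / 1024 AS ProcCacheMB
FROM sys.dm_exec_cached_plans;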
Hi,
I have to perform a lookup in a table based on a query like:
"... where ? = [RefTable].fieldID and ? between [RefTable].AnotherFieldValue and [RefTable].AThirdFieldValue"
So SSIS has set the CacheType to none. As I really need to speed up the job, I want to set the CacheType to partial (full isn't an option due to the custom query I use here).
But here it comes: when using the partial CacheType, one has to set the cache size manually, and I really don't know what value I should assign to it. Is there a guideline on this topic?
I work on a Win2003 server platform with SQL Server 2005, 2 processors, 2 GB RAM, and enough disk space.
Thanks in advance,
Tom
Hi,
I have 2 questions:
Is there any way of getting around the 128 MB file size limit when creating and adding SSEv databases to VS2005? Currently I get the following error when trying to connect to a database: "The database file is larger than the configured maximum database size. This setting takes effect on the first concurrent database connection only...". This is after I altered the app.config file to "...Max Database Size=600;...".
Has anyone tried to use SSEv to cache data with the Smart Application Offline Building Block? Is there a provider I can use for doing this?
Thanks in advance!
Hello,
I searched around and not been able to find any guidance on this question: if I am designing a lookup transformation, how do I decide what I should set the cache size to?
For the transformation on which I am currently working, the size of the lookup table will be small (like a dozen rows), so should I just reduce it to 1MB? Or should I not even bother with caching for a lookup table that small?
Hypothetically speaking, if I am working with much larger lookups in the future (let's say 30,000 rows in the lookup table), is there some method or formula that I should use to figure out the best cache size?
If the cache size is set larger than the actual data being cached, is the entire cache size allocated in memory, or is the cache size managed to only be as large as it needs to be? In other words, is the cache size a maximum to which the cache will grow, or is it a preset that sets the cache for that lookup to exactly the specified size?
Thanks,
Dan Read
SSIS Blogging
Is only one plan kept for one query in the plan cache?
I heard that generally a hash is created for a query and the plan is searched for with this hash.
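On SQL Server 2008 and later this can be checked directly: sys.dm_exec_query_stats exposes query_hash (the fingerprint of the query text) and query_plan_hash, so a sketch like the following shows when one query hash has produced more than one cached plan (different SET options, for example, can produce separate entries):

SELECT query_hash,
       COUNT(*) AS CacheEntries,
       COUNT(DISTINCT query_plan_hash) AS DistinctPlans
FROM sys.dm_exec_query_stats
GROUP BY query_hash
HAVING COUNT(*) > 1;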
I have a virtual server (VMware ESX) with 64GB RAM running a single instance of SQL 2012 SP1. The max memory config is set to 59392 (58GB).
The Page Life Expectancy for this server has been averaging well under 10 mins for the last few days, according to our monitoring.
I have been checking the amount of data in the buffer cache periodically during the day with the below query, which seems to show that there is never more than about 10GB of data at any one time, frequently dropping below 5GB:
SELECT COUNT(*) AS BufferPages,
CONVERT(decimal(10, 2), COUNT(*) / 128.0) AS BufferMB
FROM sys.dm_os_buffer_descriptors
Why would the amount of cached data be so low (and cause so much churn)?
I am aware that other things will require some of that memory (plan cache etc.) but with Max Mem of 58GB, I would expect there to be a much higher amount of actual cached data at any one time. I did the same checks on another VM with the same amount of RAM/Max Mem setting, and there was 50GB of data in the cache, with PLE measured in hours.
I have a table like below,
CREATE TABLE Student
(
Id BIGINT not null
,Name NCHAR(20) not Null
,Branch NVARCHAR (64) null
)
The table contains 100,000 rows. How do I work out the following? (A worked estimate follows below.)
1) Number of rows in a data page
2) Total number of pages required for the table
3) Total table size in KB or MB
4) Total file size in KB or MB
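A rough worked estimate, assuming Branch averages about 30 characters (60 bytes stored as NVARCHAR); the real numbers should be checked with sp_spaceused:

-- Per-row estimate:
--   4 bytes row header + 8 (Id BIGINT) + 40 (Name NCHAR(20)) = 52
--   + 3 bytes null bitmap + 4 bytes variable-column block
--   + ~60 bytes Branch data = ~119 bytes, plus a 2-byte slot entry = ~121
-- Rows per 8 KB page: 8096 / 121 = ~66
-- Pages for 100,000 rows: 100000 / 66 = ~1516 pages = ~11.8 MB
EXEC sp_spaceused 'Student';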
I'm trying to understand what is happening to one large table.
In a SQL 2008 R2 DB, I'm trying to track a rapidly increasing DB size. It's due to one recently added table.
Although the table's number of rows is not increasing, its size reduces after a scheduled re-index.
I've recorded the space used by this table by running EXEC sp_spaceused 'tableName':

NoRows      reserved      data        index_size   unused
12886451    2300384 KB    2290928 KB  9432 KB      24 KB     AFTER reindex
12886451    5406232 KB    5366184 KB  39280 KB     768 KB    BEFORE reindex
N.B. The only thing I'm aware of happening in that time period is a reindex as part of a scheduled task. I could be missing something else happening.
The table has only one index, the PK, which is clustered. No fill factor is specified, and the server default fill factor is 0.
I read that fillfactor = 0 is the same as 100, which will always try to fill the pages so space used is minimised. Is that right?
After running the reindex the index fragmentation is very low. I've not recorded the fragmentation before the reindex.
I can see the data is not added in clustered index order.
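A sketch for checking both fragmentation and page fullness before the next reindex ('tableName' is the same placeholder as above); note that avg_page_space_used_in_percent is only populated in SAMPLED or DETAILED mode:

SELECT index_id,
       avg_fragmentation_in_percent,
       avg_page_space_used_in_percent,
       page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.tableName'), NULL, NULL, 'SAMPLED');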
In my database there is a big table and format is something like this:
CREATE TABLE [dbo].[MyTable](
[aaa] [uniqueidentifier] NULL,
[bbb] [uniqueidentifier] NULL,
[ccc] [nvarchar](max) NULL,
[ddd] [nvarchar](100) NULL,.......etc.........
There are some more columns with more nvarchar(max) and other INT data types. Anyway, I know a page is 8 KB in size. How do I find out how much space a row takes with the above data types? If users add 5,000 rows per day, how do I figure out how much the table will grow?
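With nvarchar(max) columns the row size depends entirely on the data, so measuring beats calculating from the declared types. A sketch that derives the average bytes per row from the table's current contents and projects the daily growth:

SELECT SUM(reserved_page_count) * 8192.0
       / NULLIF(SUM(row_count), 0) AS AvgBytesPerRow
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('dbo.MyTable');
-- Projected daily growth in MB = AvgBytesPerRow * 5000 / 1024 / 1024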
I have an issue with my data file (MDF): the usage is 99.87% for database DB1 and the file size is 3 GB. How do I reduce it?
I have tried to shrink it by changing the recovery model from FULL to SIMPLE and setting it back to FULL.
I notice the index fragmentation is high.
Can I change the initial size to 1 GB, for example?
My prod server (default instance only) is configured with TempDB at 1024 MB data and a 200 MB log. When I run DBCC SQLPERF(LOGSPACE) it shows around 45% 'log space used' most of the time. There is nothing going on in the instance when I run sp_WhoIsActive and select * from sys.sysprocesses where dbid = 2!
So my questions are: is it normal to see log space usage around 45%, how do I find out what caused the tempdb log to grow to 45%, and is there something to do about it?
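A sketch for the investigation, using only the built-in DMVs: DBCC SQLPERF(LOGSPACE) gives the current usage, and the open-transaction DMVs show whether anything is still holding tempdb log space:

DBCC SQLPERF(LOGSPACE);

SELECT st.session_id,
       dt.database_transaction_begin_time,
       dt.database_transaction_log_bytes_used
FROM sys.dm_tran_database_transactions AS dt
JOIN sys.dm_tran_session_transactions AS st
     ON dt.transaction_id = st.transaction_id
WHERE dt.database_id = 2;   -- tempdb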
Hi guys,
I am looking at the plan cache / cached pages from the perspective of
sys.dm_os_memory_cache_counters and the SQL Server:Plan Cache - Cache Pages perfmon counter.
For the first one I am using
select (sum(single_pages_kb) + sum(multi_pages_kb) )
from sys.dm_os_memory_cache_counters
where type = 'CACHESTORE_SQLCP' or type = 'CACHESTORE_OBJCP'
a slight change from a query in
http://blogs.msdn.com/sqlprogrammability/
For the second just perfmon.
The first one gives me a count of about 670,000 pages just for the object and query caches, and the second gives me a total of about 100,000 pages for five types of caches, including object and query.
If I use the query from http://blogs.msdn.com/sqlprogrammability/ to determine the plan cache size
select (sum(single_pages_kb) + sum(multi_pages_kb) ) * 8 / (1024.0 * 1024.0) as plan_cache_in_GB
from sys.dm_os_memory_cache_counters
where type = 'CACHESTORE_SQLCP' or type = 'CACHESTORE_OBJCP'
it gives me about 5 GB, when in fact my SQL Server can access only a max of 2 GB, with Total and Target Server Memory at about 1.5 GB.
Does anyone have any idea what is going on?
After converting from SSIS 2008 to SSIS 2012, I am facing a major performance slowdown while loading fact data. When we used 2008, one file used to take around 2 hours on average; after converting to 2012, it took 17 hours to load one file. This is the current scenario: we load data into staging, select everything from staging (28 million rows), and use a lookup for each dimension. I believe it is taking a very long time due to one dimension table which has 89 million rows.
With the lookup, we are currently using partial cache because full cache caused the system to run out of memory. In the Lookup Transformation Editor, how do I increase the partial cache size on 64-bit? I am stuck at 4096 MB and cannot increase it. In 2008, I had a 200,000 MB partial cache size.
For finding the size values of the temp database:
select * from sys.master_files
(the size column value here is 1024 for the .mdf and 64 for the .ldf)
select * from tempdb.sys.database_files
(the size column value here is 3576 for the .mdf and 224 for the .ldf)
Why is there a difference and not the same value? Do the size columns in these two views represent different things for tempdb?
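If I read the catalog views correctly, size is counted in 8 KB pages in both places, and for tempdb sys.master_files holds the configured startup size while tempdb.sys.database_files shows the current size (tempdb may have grown since the last restart). A sketch comparing them in MB:

SELECT name, size / 128 AS StartupSizeMB
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

SELECT name, size / 128 AS CurrentSizeMB
FROM tempdb.sys.database_files;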
I've never worked with the XML data type in SQL Server, although I know it's been there for a few iterations of SQL Server. Now I've got a situation in which we might store some configuration data as XML, since that's the way it comes. (We had thought about storing the data in a VARCHAR(MAX) field.)
The first question is: does the XML data type have a size limitation? For example, do you do something like:
ConfigFile XML(1000) NULL
Or is it just something like this:
ConfigFile XML NULL
The second question is about persisting the data to a file. As the name I chose for the variable suggests, we want to save the data from a configuration file into a SQL Server database. How do we go about doing that? We'll be developing a C# application; it will read and write the data both from the SQL table and the user's local HD.
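On the first question: the xml type takes no length argument and is capped at 2 GB per value, so the second form is the right one. A minimal sketch (the table and column names here are made up for illustration):

CREATE TABLE dbo.AppConfig
(
    ConfigId   INT IDENTITY(1, 1) PRIMARY KEY,
    ConfigFile XML NULL    -- no size specifier, up to 2 GB per value
);

INSERT INTO dbo.AppConfig (ConfigFile)
VALUES (N'<config><setting name="timeout" value="30" /></config>');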
I've got two databases on the same server and replicate some tables from one database to another. The replication is configured not to drop the table if it exists, but to delete the data based on the filter if one exists. There are two tables on the subscriber that have some extra columns.
I get "field size too large" error when trying to replicate them. Is there a workaround without having to make the publisher and the subscriber tables identical by schema?
We have installed a SQL Server 2008 R2 SP1 instance and it hosts SharePoint 2010 databases.
We have 2 dedicated drives for TempDB on SAN with 50 GB of space. Both tempdb data and log files are created with the default size. I would like to presize them.
What are the best values to start with? (A sketch follows after the drive list.)
U -> Tempdbdata, holding the tempdb.mdf file
V -> Tempdblog, holding the templog.ldf file
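A sketch of the presizing itself, with the sizes purely illustrative rather than a recommendation (tempdev and templog are the default logical file names):

USE master;
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 20480MB, FILEGROWTH = 1024MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 4096MB, FILEGROWTH = 512MB);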
My backups are failing sometimes. My DB size is around 400 GB and we take the backup to a remote server. The free space available on the disk shows 600 GB, but my database full backup runs for more than 10 hours and fails. The failure reason is that there is not enough space on the disk.
What could be the possible failure reasons? There is more than the database's size free on the backup server, so why does the job fail with that message? I've noticed the same thing occurred a couple of times in the past.
Is there any way to find how big the backup file will be before we run the backup job?
I.e., if we run a full backup of the test1 database now, it will generate a ....bak file for that test1 db.
Currently we are using a maintenance plan with compression (2008 R2), and the database has TDE enabled.
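A rough pre-check, a sketch to run inside the database being backed up: an uncompressed full backup is approximately the size of the used pages, though compression and TDE will change the final figure:

SELECT SUM(used_page_count) * 8 / 1024 AS UsedMB
FROM sys.dm_db_partition_stats;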
Is there any good starting point for understanding, for a specific db, what maximum number of VLFs is good to have so that it does not cause long startup or backup times?
Also, I need some calculation so that I can identify the best growth parameter to set up for each database.
I'm seeing the below message in the errorlog and am curious to know what changes (right-sizing/growth) should be made. As of now a log file growth value of 100 MB is set (refer: [URL] ....)
Database BizTalkMsgBoxDb has more than 1000 virtual log files which is excessive. Too many virtual log files can cause long startup and backup times. Consider shrinking the log and using a different growth increment to reduce the number of virtual log files.
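A sketch for counting the current VLFs in the database named by the message (DBCC LOGINFO is undocumented but widely used on 2008 R2; each row it returns is one virtual log file):

USE BizTalkMsgBoxDb;
GO
DBCC LOGINFO;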
When sending an email in HTML format, shouldn't this allow for 2 GB of data? Mine is getting truncated after 4,000 characters.
DECLARE @body NVARCHAR(MAX) = NULL;
SET @body = 'lots of data';
EXEC msdb.dbo.sp_send_dbmail
    @recipients = 'someone@some.com',
    @reply_to = 'someone',
    @from_address = 'someone@here.com',
    @profile_name = 'profilename',
    @body_format = 'HTML',
    @body = @body;
We have set up a report server where a DB's tables are refreshed from a backup daily.
All SPs and views are created in a different DB but look at the daily-refreshed tables.
Can I set up the DB that holds the views to point at the DB with the tables, so that when I create a new view it looks at the tables in that DB? (A sketch follows below.)
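A minimal sketch of the usual pattern, with all object names assumed: the view lives in the reporting DB and references the refreshed DB by three-part name, optionally through a synonym so only one object changes if the source DB is renamed:

-- In the DB that holds the views (names are placeholders):
CREATE SYNONYM dbo.DailySales FOR RefreshedDB.dbo.DailySales;
GO
CREATE VIEW dbo.vwDailySales
AS
SELECT * FROM dbo.DailySales;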
I have a view made up of a few base tables and another view. I have created an INSTEAD OF trigger on this view, but it doesn't seem to fire whenever a new record appears in the view. The purpose of the trigger is to insert a sister record in a table whenever a new record appears in the view. Here's the catch: the table that the trigger is supposed to insert into is not a base table within the view, and the view is not an updatable view. My question is: do INSTEAD OF triggers only affect the base table(s) within the view, and does the view itself have to be an updatable view?
How can we monitor all tables in all databases and send notifications to the team? Is there a way to find the number of rows and size of a table last month and work out the growth % now?
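A sketch of one approach, with the history table name and columns my own invention: snapshot row counts and sizes on a schedule (per database), then compare snapshots a month apart to compute the growth %:

CREATE TABLE dbo.TableSizeHistory
(
    CapturedAt DATETIME NOT NULL DEFAULT GETDATE(),
    TableName  SYSNAME  NOT NULL,
    RowCnt     BIGINT   NOT NULL,
    ReservedKB BIGINT   NOT NULL
);

INSERT INTO dbo.TableSizeHistory (TableName, RowCnt, ReservedKB)
SELECT OBJECT_NAME(object_id),
       SUM(row_count),
       SUM(reserved_page_count) * 8
FROM sys.dm_db_partition_stats
GROUP BY object_id;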
A recent SharePoint upgrade has rendered several views obsolete. I am redefining them so that our upper-level executive reports show valid data. (Yes, I know that doing anything to SharePoint could cause MS to deny support; having said that, this is something I've inherited and need to fix, pronto.) The old view was created like so:
USE [AHMC]
GO
/****** Object: View [dbo].[vwSurgicalVolumes] Script Date: 09/04/2015 09:28:03 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE VIEW [dbo].[vwSurgicalVolumes] AS
SELECT
[code]....
As I said, this view is used in a report showing surgical minutes. SharePoint is now on a new server, which is linked differently (distributed?). I've used OPENQUERY to get my 'new' query to work:
SELECT *
FROM OPENQUERY ([PORTALWEBDB], 'SELECT
--AllLists
AL.tp_ID AS ALtpID
,AL.tp_WebID as altpwebid
,AL.tp_Title AS ALTitle
[code]....
My data (i.e. surgical minutes, etc.) seems to be in the XML column, AUD.tp_ColumnSet. So I need to parse it out and convert it to INT to maintain consistency with the previous view. How do I do this within the context of the view definition? Here is a representation of the new and old view data copied to Excel:
<datetime1>2014-08-14T04:00:00</datetime1><float1>2.000000000000000e+000</float1><float2>4.190000000000000e+002</float2><float3>1.600000000000000e+001</float3><float4>8.110000000000000e+002</float4><sql_variant1 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:sqltypes="http://schemas.microsoft.com/sqlserver/2004/sqltypes"
[Code] ....
I can't format it to make it look decent here. InHouseCases = 2, InHouseMinutes = 419, OutPatientCases = 16, OutPatientMinutes = 1230. This corresponds to the new data I can see in the XML column: 2.000000000000000e+000 is indeed 2 and 4.190000000000000e+002 is indeed 419.
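A sketch of the parsing step, using the element names from the sample above (in the view the same .value() calls would apply to AUD.tp_ColumnSet, cast to XML first if it arrives as text through OPENQUERY):

DECLARE @x XML = N'<datetime1>2014-08-14T04:00:00</datetime1>
<float1>2.000000000000000e+000</float1><float2>4.190000000000000e+002</float2>
<float3>1.600000000000000e+001</float3><float4>8.110000000000000e+002</float4>';

SELECT CAST(@x.value('(/float1)[1]', 'float') AS INT) AS InHouseCases,   -- 2
       CAST(@x.value('(/float2)[1]', 'float') AS INT) AS InHouseMinutes; -- 419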
Here's the scenario. We tried to create a full-text index on a view; unfortunately the view was created based on a LEFT OUTER JOIN, hence we were pushed to do the following.
We planned to create a table based on the data that is rendered in this view (the view is formed by combining 14 tables). Finally we were able to create a full-text index on this new table. For simplicity, let us call this new table tableA.
Now the problem is how we can update (if any data is changed in one of the concrete tables of the view) or insert (when new data is inserted in one of the underlying tables of the view) the respective data into this new table, tableA, which was created based on this view.
In other words, there should be a sync between the view and tableA. Is it possible to achieve this? (The view is formed from different tables, not from tableA.)
We have two databases with the same schema and tables (same table names; basically a main DB and a copy of the main DB). The following is an example of table names from the 2 DBs.
CREATE TABLE #SourceDatabase (SourceColumn1 VARCHAR(50))
INSERT INTO #SourceDatabase VALUES('TABLE1') , ('TABLE2'),('TABLE3') , ('TABLE4'),('TABLE5') , ('TABLE6')
SELECT * FROM #SourceDatabase
DROP TABLE #SourceDatabase
CREATE TABLE #ArchiveDatabase (SourceColumn2 VARCHAR(50))
INSERT INTO #ArchiveDatabase VALUES('TABLE1') , ('TABLE2'),('TABLE3') , ('TABLE4'),('TABLE5') , ('TABLE6')
SELECT * FROM #ArchiveDatabase
DROP TABLE #ArchiveDatabase
We need a T-SQL statement that can create one view for each table from both databases (assuming both databases have the same number of tables and the same table names), so that we can run the T-SQL on a third database and the third DB gets all the views (one view for each table from the 2 DBs). The name of each view should be the same as the table's name, and all 3 DBs are on the same server.
The 2 temp tables are just examples; the DBs have around 1,700 tables each, so we need something like the following for each table (a generator sketch follows after the examples):
CREATE VIEW DBO.TABLE1 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE1] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE1]
CREATE VIEW DBO.TABLE2 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE2] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE2]
CREATE VIEW DBO.TABLE3 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE3] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE3]
CREATE VIEW DBO.TABLE4 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE4] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE4]
CREATE VIEW DBO.TABLE5 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE5] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE5]
CREATE VIEW DBO.TABLE6 AS SELECT * FROM [SourceDatabase].[dbo].[TABLE6] UNION ALL SELECT * FROM [ArchiveDatabase].[dbo].[TABLE6]
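A sketch of a generator, assuming the third DB can read SourceDatabase's catalog: it prints one CREATE VIEW statement per table, to be run one batch at a time (CREATE VIEW must be the first statement in its batch):

SELECT 'CREATE VIEW dbo.' + QUOTENAME(name)
     + ' AS SELECT * FROM [SourceDatabase].dbo.' + QUOTENAME(name)
     + ' UNION ALL SELECT * FROM [ArchiveDatabase].dbo.' + QUOTENAME(name) + ';'
FROM SourceDatabase.sys.tables
ORDER BY name;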
I gave a user all the required permissions to view the SSRS report. The user is able to select from the dropdown list but unable to view the data; it shows a blank screen.
I have one view which is based on a couple of tables. Here is the definition of the view. What options can I use to optimize the view for better performance? This is one of the views causing issues on the database. (One idea is sketched after the definition.)
CREATE VIEW [dbo].[V_Reqs]
WITH SCHEMABINDING
AS
SELECT purchase.Req.RequisitionID, purchase.Req.StatusCode AS Expr2, purchase.Req.CollectionDateTime,
purchase.Req.ReportDateTime, purchase.Req.ReceivedDateTime, purchase.Req.PatientName, purchase.Req.AddressOne,
purchase.Req.AddressTwo, purchase.Req.City, purchase.Req.PostalCode, purchase.Req.PhoneNumber,
[code]....
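Since the view is already created WITH SCHEMABINDING, one option worth testing is materializing it as an indexed view. A sketch, assuming RequisitionID is unique in the result and the view meets the indexed-view restrictions (no outer joins, deterministic expressions, and so on):

CREATE UNIQUE CLUSTERED INDEX IX_V_Reqs
ON dbo.V_Reqs (RequisitionID);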
I was able to create a view and export it to Excel. Now I want to schedule it for every day and then email the Excel file as an attachment to a couple of people.
Would SSRS be an option, where I can create a report from the view and schedule it?
Does anyone know the process I need to follow?
Do I have to use SSIS and then set it up as a SQL Server job?
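One simpler route to sketch, assuming Database Mail is configured (the profile name, addresses, and view name are placeholders): a SQL Agent job step that emails the view's output as an attached CSV, which Excel opens directly:

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'MailProfile',
    @recipients = 'person1@company.com;person2@company.com',
    @subject = 'Daily extract',
    @query = 'SELECT * FROM MyDB.dbo.MyView',
    @attach_query_result_as_file = 1,
    @query_attachment_filename = 'extract.csv',
    @query_result_separator = ',';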
The below would work if all of the values in A.Value were numbers, but they are not. So I need to restrict the view to only look at the following measures but still show all the other rows.
WHERE [Measure] IN ('RTT-01','RTT-04','RTT-07')
SELECT
M.[Description]
,A.*
,M.Threshold
,M.[Threshold Direction]
[Code] ....
Is there any way that I can write a CASE WHEN in the select statement to only convert those measures that I know contain numbers?
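A sketch of the CASE approach, with the table names made up to stand in for the fragment above: only the three known-numeric measures get converted, and every other row still passes through, with a NULL numeric value:

SELECT M.[Description],
       A.[Measure],
       CASE WHEN A.[Measure] IN ('RTT-01', 'RTT-04', 'RTT-07')
            THEN CONVERT(decimal(18, 2), A.Value)
       END AS NumericValue,
       M.Threshold,
       M.[Threshold Direction]
FROM dbo.ActualValues AS A
JOIN dbo.Measures AS M
     ON M.[Measure] = A.[Measure];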