SQL Server 2008 :: What Could Cause Large Gaps In Server's Default Trace
Feb 12, 2015
I have the default trace enabled on a SQL Server 2008 R2 instance and found today that there is a gap of nearly 4 minutes in the trace, during a time of day when there is most certainly not going to be a 4-minute window of nothing.
What, if anything, could cause the default trace to have a gap like this? The SQL Server instance (against my preferences) is hosted on VMware, but it has its own host, so its resources are not shared with any other server. The data and log files reside on different parts of the SAN. Our IT and network admins are looking into the issue on their end, but when I looked and found a nearly 4-minute gap in the default trace, it hit me that this could be something above/outside of SQL Server.
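To pin down the window, a hedged sketch that buckets default trace events per minute (sys.traces and sys.fn_trace_gettable are standard; an empty or missing minute bucket marks the gap):

DECLARE @path nvarchar(260) = (SELECT [path] FROM sys.traces WHERE is_default = 1);

SELECT DATEADD(MINUTE, DATEDIFF(MINUTE, 0, StartTime), 0) AS minute_bucket,
       COUNT(*) AS event_count
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE StartTime IS NOT NULL
GROUP BY DATEADD(MINUTE, DATEDIFF(MINUTE, 0, StartTime), 0)
ORDER BY minute_bucket;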
View 1 Replies
May 30, 2015
We are using sql server 2008r2 standard edition.
We have replication set up. Publisher and distributor are on the same server, subscriber is in different server.
Suddenly the replication started failing and kept failing for more than 12 hours. When I tried to connect to the subscriber server, I could connect, but I could not run any queries or open the error log from SSMS.
Is there any way to find out from the default trace why the replication failed and the server stopped responding?
What columns do I need to query from the trace?
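As a hedged starting point: sys.fn_trace_gettable exposes the trace columns, and joining to sys.trace_events turns EventClass into a readable name; the date window below is a placeholder:

DECLARE @path nvarchar(260) = (SELECT [path] FROM sys.traces WHERE is_default = 1);

SELECT t.StartTime, te.name AS event_name, t.DatabaseName, t.HostName,
       t.ApplicationName, t.LoginName, t.TextData
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
WHERE t.StartTime >= '20150529' AND t.StartTime < '20150531'   -- placeholder window
ORDER BY t.StartTime;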
View 5 Replies
Mar 28, 2015
I need to populate a table with numbers, with some gaps in between, following the logic below.
first row - 1110
last row - 9550
1110
1120
1130
1140
1150
[Code] .....
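A minimal sketch, assuming the series steps by 10 and the gaps come from an exclusion list (both are assumptions, since the full logic is elided above; dbo.Numbers and the gap values are placeholders):

WITH seq AS (
    SELECT 1110 AS num
    UNION ALL
    SELECT num + 10 FROM seq WHERE num + 10 <= 9550
)
INSERT INTO dbo.Numbers (num)
SELECT num
FROM seq
WHERE num NOT IN (1160, 1170)       -- hypothetical gap values
OPTION (MAXRECURSION 0);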
View 8 Replies
Aug 12, 2014
I restarted SQL Server after C2 audit was enabled, and now I cannot start the instance; I get the error below. How do I bring the SQL Server back up?
Cannot start C2 audit trace. SQL Server is shutting down. Error = 0x80070003(The system cannot find the path specified.)
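A commonly cited recovery path, offered as a hedged sketch: start the instance with minimal configuration (which skips the C2 audit trace), then turn the option off and restart normally:

-- From a command prompt (default instance): net start MSSQLSERVER /f
-- Then connect with sqlcmd -A (or SSMS) and run:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'c2 audit mode', 0;
RECONFIGURE;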
View 2 Replies
Apr 24, 2015
Background:
* SQL Server 2008 R2
* Database was created by a third-party product. The product writes to the 3 tables I need to change 24/7, so downtime is not an option. All changes must be done live.
* Database overall size is ~200 GB
* The 3 tables I must update make up ~190 GB of that space.
* Tables have no primary key or ID columns. Therefore, the data is highly fragmented.
* Of the ~190 GB of space allocated for the tables, there is roughly 70 GB of actual data.
* Rows of the tables are not guaranteed to be unique. In fact, when tests were run with a small sample of data on one of the tables, duplicates were very much evident.
What I'm trying to accomplish here is to get an ID column added to the 3 tables and set that ID field as the primary key. Doing so will force the data to become much less fragmented than it is currently and with purging and new inserts, eventually fragmentation will be nearly non-existent.
Problem:
Making table changes on tables this large while data is constantly being added poses many risks and can cause data loss. This was tried on a smaller table than these three, and the entire table was lost in the process; a restore from backup was needed to get back to the most recent log backup point.
Original Solution:
My original plan was to create a backup of each table and run the script below to migrate the majority of the data temporarily into the new table. I could then update the original table (which now would contain much less data) and then migrate the data back.
CREATE TABLE #temp
(
MsgDate varchar(10)
,MsgTime varchar(8)
,MsgPriority varchar(30)
,MsgHostname varchar(255)
[Code] ....
Original Solution Problem:
The problem with the solution above is that it issues a DELETE against the original table using the values from the temporary table. When there are duplicate rows that have not all been inserted into the backup table yet, they are all removed from the original table, because there is nothing unique to separate them. In my testing, I had 10,000 rows in the original table and ended up with 9,959 rows in the backup table.
Question 1: Is my approach to making these table changes reasonable?
Question 2a: If so, how can I make sure I don't lose data as part of this temporary migration of the data to my backup tables?
Question 2b: If not, what would be a better approach that isn't going to cause disruption to the application that INSERTs data 24/7 and won't have any risk of data loss?
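On question 2a, one hedged option: number the duplicate copies so each row becomes individually addressable. The column list here is taken from the temp-table definition above and would need to cover every column; the table name is a placeholder.

WITH numbered AS (
    SELECT *,
           ROW_NUMBER() OVER (
               PARTITION BY MsgDate, MsgTime, MsgPriority, MsgHostname   -- extend to all columns
               ORDER BY (SELECT NULL)) AS copy_no
    FROM dbo.OriginalTable                                               -- placeholder name
)
DELETE FROM numbered
WHERE copy_no > 1;   -- touches only surplus copies, never the last remaining row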
View 9 Replies
Jul 14, 2015
How many rows does a table need before it counts as a large table?
We are getting more deadlocks. We are using the default isolation level. Read and insert statements are blocking each other and causing deadlocks.
I am thinking that purging might reduce the deadlocks.
The table has 15 million records. Is that considered a large table in an OLTP system? In general, how many records should we consider large?
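Separate from the size question: for reader/writer deadlocks under the default isolation level, row versioning is one commonly weighed mitigation; a hedged sketch (the database name is a placeholder, and the option kills in-flight transactions):

ALTER DATABASE YourDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;   -- readers then use row versions instead of shared locks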
View 1 Replies
Feb 5, 2015
Currently our database size is around 350 GB. It will grow to about 1.5 TB.
At the database level we have:
Auto create statistics: True
Auto update statistics: True
Auto update statistics asynchronously: False
We have a weekly update statistics job that runs a very long time. It was created through a maintenance plan using the full scan option.
They previously tested with sampling, but running with sampling instead of a full scan hurt the queries.
Is there an option to avoid the long job duration?
If we don't run the statistics update manually, what will happen? How do you maintain statistics on large databases?
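A hedged middle ground between full scans and nothing: update only statistics with meaningful churn. sys.dm_db_stats_properties needs 2008 R2 SP2 or later, and the threshold is an example:

SELECT OBJECT_NAME(s.[object_id]) AS table_name,
       s.name AS stats_name,
       sp.last_updated,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.[object_id], s.stats_id) AS sp
WHERE sp.modification_counter > 1000    -- example threshold; feed these into UPDATE STATISTICS
ORDER BY sp.modification_counter DESC;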
View 9 Replies
Mar 3, 2015
I have a large table containing about 800 million rows with an average row length of about 1K. The columns in the table are char columns. I need to move the contents of this table into a similar table where the target columns are varchar. The original table column definitions are compatible with the target table but the reverse is not necessarily true. For example, one column is being changed from int to bigint. The table is partitioned.
So, what is the fastest way to migrate the data? I was thinking of unloading each partition into a flat file and loading the target table with multiple load streams. Is this a good approach?
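A hedged sketch of one load stream (all names and the partition number are hypothetical; minimal logging additionally assumes bulk-logged or simple recovery and a suitable target):

INSERT INTO dbo.TargetTable WITH (TABLOCK)
SELECT CAST(msg_text AS varchar(255)),       -- char -> varchar
       CAST(msg_id   AS bigint)              -- int  -> bigint
FROM dbo.SourceTable
WHERE $PARTITION.pf_Source(partition_col) = 3;   -- one partition per stream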
View 0 Replies
Sep 3, 2014
I am trying to change the default trace options value from 2 to 6, to incorporate rollover and server shutdown. I am using this code to create a new trace (with id 2) with an options value of 6, to use in place of the default trace (with id 1):
DECLARE @new_trace_id INT;
EXECUTE master.dbo.sp_trace_create
    @traceid = @new_trace_id OUTPUT,
    @options = 6,
    @tracefile = N'C:\traceTestTrace';
Then I disable the default trace. I then use this code to stop and delete the old default trace (id 1), so the new trace (id 2) can take its place:
-- get trace status
SELECT * FROM ::fn_trace_getinfo(NULL)

-- stop trace
EXEC sp_trace_setstatus @traceid = 1, @status = 0
GO

-- close and delete trace
EXEC sp_trace_setstatus @traceid = 1, @status = 2
Then I enable the default trace. It works perfectly (the default trace with id 1 shows an options value of 6) until I restart. After a restart, no default trace is enabled; once I run the script to enable the default trace, it comes back as id 1, and the rollover/shutdown options value is back to 2.
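To see exactly what survives the restart, a quick check against sys.traces:

SELECT id, [status], [path], max_size, max_files,
       is_rollover, is_shutdown, is_default
FROM sys.traces;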
View 12 Replies
Feb 1, 2011
I've got two databases on the same server and replicate some tables from one database to the other. The replication is configured not to drop the table if it exists, but to delete the data based on the filter if one exists. Two of the tables on the subscriber have some extra columns.
I get a "field size too large" error when trying to replicate them. Is there a workaround that doesn't require making the publisher and subscriber tables identical in schema?
View 5 Replies
Feb 9, 2015
We have an existing BI/DW process that adds large chunks of data daily (~10M rows) to an existing table, as well as using Deletes to remove stale data. This scenario seems to beg for partitioning to support switching in/out data.
After lots of reading on this, I have figured out the mechanics of the switching, but I still have some unknowns about the indexes needed to support it.
The table currently has several non-clustered indexes, including one on the partitioning column - let's call that column snapshotdate. Fortunately there are no FKs involved, and no constraints.
Most of the partitioning material I see focuses on creating a clustered PK to assist with switching. Not sure if this is actually necessary, but assume I create one using an Identity column (currently missing) plus snapshotdate.
For the other non-clustered, non-unique indexes, can I just add snapshotdate to the end of the index key? That is, will that satisfy the switching requirement?
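For what it's worth, switching requires aligned indexes, and alignment comes from building the index on the partition scheme; for a non-unique index SQL Server adds the partitioning column itself. A hedged sketch with hypothetical names:

CREATE NONCLUSTERED INDEX IX_Fact_Customer
ON dbo.FactTable (CustomerID)
ON ps_SnapshotDate (snapshotdate);   -- the partition scheme, not the key, makes it aligned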
View 1 Replies
Apr 22, 2015
I am monitoring our production server and noticed that we periodically have spikes in the memory paging rate (pages/sec).
How can I find the particular queries or stored procedures causing this?
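A hedged first pass: rank cached statements by physical reads, which tend to track paging pressure:

SELECT TOP (20)
       qs.total_physical_reads,
       qs.execution_count,
       SUBSTRING(st.[text], (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.[text])
                 ELSE qs.statement_end_offset END
             - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_physical_reads DESC;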
View 5 Replies
Jul 14, 2015
I have a query below which filters on the detail field in the #TempLogins table. The detail field is a text column that contains many kinds of strings, some containing URLs with parts like "ResultID=5", which is what the ResultIDSearch and ResultSetIDSearch fields hold. The records with entries like "ResultID=5" are the ones I'm trying to filter for.
The problem is that the query takes far too long to run. The #TempLogins table has around 200K records and the #TempSearch table has around 80K records.
select * from #TempLogins a where exists
(select 1 from #TempSearch t1 where
a.detail like '%' + t1.ResultIDSearch + '%'
or
a.detail like '%' + t1.ResultSetIDSearch + '%')
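A hedged variation to test: CHARINDEX with an explicit cast (CHARINDEX does not accept the text datatype directly) sometimes beats a double-wildcard LIKE, though neither can use an index seek:

SELECT a.*
FROM #TempLogins AS a
WHERE EXISTS (SELECT 1
              FROM #TempSearch AS t1
              WHERE CHARINDEX(t1.ResultIDSearch,
                              CAST(a.detail AS varchar(max))) > 0
                 OR CHARINDEX(t1.ResultSetIDSearch,
                              CAST(a.detail AS varchar(max))) > 0);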
View 1 Replies
Mar 13, 2015
I was just going through the default trace files and they are full of sort warnings, missing join predicates, and hash warnings. The server behaved weirdly last night, with queries running longer than usual, and then the server started choking. I didn't find any info in the error logs or Event Viewer.
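A hedged query to pull just those warning events back out of the default trace, with timestamps to line up against last night:

DECLARE @path nvarchar(260) = (SELECT [path] FROM sys.traces WHERE is_default = 1);

SELECT t.StartTime, te.name AS warning, t.DatabaseName, t.ApplicationName, t.LoginName
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
WHERE te.name IN ('Sort Warnings', 'Hash Warning', 'Missing Join Predicate')
ORDER BY t.StartTime DESC;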
View 3 Replies
Apr 9, 2015
I've been put in a situation where I have a number of SQL databases running on 2005/2008 for which I am responsible. I've been given limited information, so I am looking for a starting point to determine where to go from here.
I have of course ensured there is a backup strategy in place to secure the data.
View 1 Replies
Sep 15, 2015
I have to set up log shipping from prod server "A" to 2 different DR servers ("B" and "C"). What do I have to do differently (or additionally) using the GUI (i.e., not using T-SQL scripts) to accomplish this, beyond the steps for log shipping to just one DR server?
View 0 Replies
Jun 2, 2015
I have a well-structured but very large binary data set that is generated by a C++ application every five minutes. The data needs to be accessed by SQL applications. Since data is generated every five minutes, performance is key, for both write and read. The data set is about 500 MB. If the data is written to the file system, the write performance doesn't involve SQL Server. For reading, I have a CLR function that reads the portions of the data I need based on offset and length. That works and is very fast. The problem is that the data is stored in the file system, so it is not self-contained within the database.
A second option that I haven't explored yet is to write the data into a table as VARBINARY(MAX) and read it using SUBSTRING with the appropriate offset and length. I'm wondering about SQL Server write/read performance for binary data of this size, and whether there is a third option I haven't thought of. I'm using SQL Server 2014.
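For the second option, a read would look roughly like this; the table, column, and key are placeholders, and SUBSTRING's start position is 1-based:

DECLARE @offset bigint = 1048576,   -- example: start 1 MB in
        @length int    = 65536;     -- example: read 64 KB

SELECT SUBSTRING(Payload, @offset + 1, @length) AS chunk
FROM dbo.BinaryStore                -- placeholder table, Payload varbinary(max)
WHERE SnapshotId = 42;              -- placeholder key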
View 5 Replies
Jun 5, 2015
We currently have a database of about 300 GB. Because our backup system failed some time back, we were left with a transaction log file that grew to about 160 GB. Our backups are working again and everything is working fine. My understanding is that the transaction log file is now practically empty, but its capacity remains at 160 GB.
When you delete records, the deleted transactions get written to the transaction log. My understanding is that when a log backup is done, those transactions are cleared out of the log file.
Could I make use of this relatively large transaction log file and start deleting records without actually adding to the log file's size?
The plan is to delete records from logging tables that are not referenced by any other table, without increasing the transaction log file. For example, over a period of a few weeks we could delete a chunk of records from a table; then, after a backup has completed, delete another chunk, until we have the table down to the records we now need. Will this work?
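The approach can work as long as log backups keep running between chunks, since log backups are what free the space for reuse; a hedged sketch of one chunked delete (names and cutoff are examples):

WHILE 1 = 1
BEGIN
    DELETE TOP (10000)
    FROM dbo.LoggingTable           -- placeholder table
    WHERE LogDate < '20150101';     -- example cutoff

    IF @@ROWCOUNT = 0 BREAK;        -- nothing left in range
END;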
View 2 Replies
Sep 21, 2015
I have some huge tables (think 200+ GB for a single table) which are excellent candidates for sparse columns. The tables have many columns defined as decimal(13,2), with a large percentage of the values (over 50% in most cases, as much as 99% in some) being 0.00. Since this is very expensive in terms of storage, my idea is to set all the 0.00 values to NULL and then mark those columns as sparse. Across 100 or so identical databases, I have 5 such tables, with 20-40 columns in each table.
1.) Three steps for each column in each table in each db (see the sketch after these lists):
Step 1: Alter the column to allow NULLs
Step 2: UPDATE table SET column = NULL WHERE column = 0.00
Step 3: Alter the column to be sparse
2.)
Step 1: Create an entirely new table with sparse column definitions
Step 2: Copy the entire table, transforming 0.00 to NULL for the affected columns, via SSIS
Step 3: Drop the original table and rename the new table to the original name
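A hedged sketch of option 1 for one hypothetical column; ALTER COLUMN ... ADD SPARSE is the documented syntax, and the batched UPDATE keeps individual transactions small:

ALTER TABLE dbo.BigTable ALTER COLUMN Amount01 decimal(13,2) NULL;

WHILE 1 = 1
BEGIN
    UPDATE TOP (50000) dbo.BigTable
    SET Amount01 = NULL
    WHERE Amount01 = 0.00;

    IF @@ROWCOUNT = 0 BREAK;
END;

ALTER TABLE dbo.BigTable ALTER COLUMN Amount01 ADD SPARSE;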
View 0 Replies
Jan 30, 2015
I have an SSIS job that dynamically loops through each server, grabbing data for typical DBA reporting, like disk space and error logs. If a server is down for whatever reason, the SSIS package fails. Is there any way I can prevent the SSIS package from failing when one of the servers is down?
View 1 Replies
Apr 23, 2015
I have 70 SQL database servers and I set up Database Mail on all 70. I want to know whether there is a way to find the status of all the jobs to which I assigned Database Mail, and whether they are working or failing. Is there a script I can run in PowerShell or SQL to find that information?
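Per instance, recent Database Mail outcomes are queryable from msdb; a hedged sketch (sent_status reads 'sent', 'unsent', 'retrying', or 'failed'):

SELECT TOP (50)
       mailitem_id, recipients, [subject],
       sent_status, send_request_date, sent_date
FROM msdb.dbo.sysmail_allitems
ORDER BY send_request_date DESC;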
View 1 Replies
Sep 15, 2015
Below is the syntax I am using to create a linked server from SQL Server on Windows 2008 R2 Standard to a PostgreSQL database running on 32-bit Debian Linux (Linux turtle 3.2.0-4-686-pae #1 SMP Debian 3.2.46-1+deb7u1 i686 GNU/Linux). The PostgreSQL version is 8.3.
/****** Object: LinkedServer [HGCDEV] Script Date: 09/15/2015 17:03:37 ******/
EXEC master.dbo.sp_addlinkedserver @server = N'HGCDEV', @srvproduct=N'', @provider=N'MSDASQL', @datasrc=N'172.16.20.159',@provstr=N'UID=web;PWD=dev123'
/* For security reasons the linked server remote logins password is changed with ######## */
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'HGCDEV',@useself=N'False',@locallogin=NULL,@rmtuser='web',@rmtpassword='dev123'
This is the error I am getting: "Cannot initialize the data source object of OLE DB provider "MSDASQL" for linked server "HGCDEV"."
How do I set up the linked server? Below are the drivers installed on the SQL Server:
PostgreSQL35W
PostgreSQL30
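One hedged thing to try: with MSDASQL, @datasrc is normally an ODBC System DSN name rather than a raw IP, so pointing it at a DSN built on one of those drivers may clear the initialization error (the DSN name here is an assumption):

EXEC master.dbo.sp_addlinkedserver
     @server = N'HGCDEV_DSN',       -- test name
     @srvproduct = N'',
     @provider = N'MSDASQL',
     @datasrc = N'PG_HGCDEV';       -- hypothetical System DSN using the PostgreSQL35W driver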
View 2 Replies
Feb 19, 2015
We have a database that is enabled for mirroring. We need to delete old records, around 500K rows from one table, but it has foreign key relationships. How do you do these kinds of deletes on production servers?
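A hedged outline: delete the referencing (child) rows first, then the parents, both in batches so the mirror and the log can keep up (all names and the cutoff are placeholders):

-- repeat each statement until it affects 0 rows
DELETE TOP (5000) c
FROM dbo.ChildTable AS c
JOIN dbo.ParentTable AS p ON p.Id = c.ParentId
WHERE p.CreatedDate < '20140101';

DELETE TOP (5000) p
FROM dbo.ParentTable AS p
WHERE p.CreatedDate < '20140101'
  AND NOT EXISTS (SELECT 1 FROM dbo.ChildTable AS c WHERE c.ParentId = p.Id);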
View 2 Replies
Mar 30, 2015
Our monitoring tool shows that our production system periodically experiences a large paging rate, up to 800 memory pages/sec. How can we find out which particular queries, stored procedures, or processes initiate this?
View 3 Replies
Mar 3, 2015
Is there any way we can find the list of servers by querying Active Directory?
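One hedged route from T-SQL is an ADSI linked server plus OPENQUERY; the LDAP path is an assumption for illustration:

EXEC master.dbo.sp_addlinkedserver
     @server = N'ADSI',
     @srvproduct = N'Active Directory Service Interfaces',
     @provider = N'ADSDSOObject',
     @datasrc = N'adsdatasource';

SELECT [name]
FROM OPENQUERY(ADSI,
     'SELECT name FROM ''LDAP://DC=yourdomain,DC=local'' WHERE objectClass = ''computer''');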
View 3 Replies
Mar 6, 2015
How best to approach a problem involving two tables across two different servers?
Table 1: Contains IP addresses along with assessment findings. Let's say the fields are IPADDRESSSTR and FINDING.
Table 2: Contains subnet information stored in integer format. The fields are SITE_ID, LOW, and HIGH.
What I'd like to do is load the IP range information into memory and then return the findings from table 1 where IPADDRESSSTR falls between the LOW and HIGH integer values.
1) Is there a way to load all of the ranges from table 2 into an array and then compare all the IP addresses (IPADDRESSSTR) from table 1?
2) How do I convert IPADDRESSSTR (a string) to an integer to perform the comparison?
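For question 2, PARSENAME splits a dotted IPv4 string cleanly; a hedged sketch of the comparison (dbo.Findings and dbo.Subnets stand in for table 1 and table 2):

SELECT f.IPADDRESSSTR, f.FINDING, s.SITE_ID
FROM dbo.Findings AS f
CROSS APPLY (SELECT
        CAST(PARSENAME(f.IPADDRESSSTR, 4) AS bigint) * 16777216
      + CAST(PARSENAME(f.IPADDRESSSTR, 3) AS bigint) * 65536
      + CAST(PARSENAME(f.IPADDRESSSTR, 2) AS bigint) * 256
      + CAST(PARSENAME(f.IPADDRESSSTR, 1) AS bigint)) AS ip(val)
JOIN dbo.Subnets AS s
  ON ip.val BETWEEN s.LOW AND s.HIGH;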
View 0 Replies
Feb 12, 2015
I have an SSRS report using 2008 R2. It prompts the user for the start and end dates, and this all works. But now I want the start date parameter to default to the first day of the current month and the end date parameter to default to the last day of the current month. In a new query window in SQL Server Management Studio, I can run this chunk of code to get the first day of the current month:
SELECT DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0)
And this code to get the last day of current month:
SELECT DATEADD(DAY, -(DAY(DATEADD(MONTH, 1, GETDATE()))),
DATEADD(MONTH, 1, GETDATE()))
But I don't know how to do this in SSRS 2008. How can I make my start/end parameters default to these values?
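One hedged way to stay in T-SQL: put both values in a small dataset and point each report parameter's default value at the matching field. The arithmetic: one month past the month start, minus one day, is the month's last day.

SELECT DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0) AS FirstOfMonth,
       DATEADD(DAY, -1,
           DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()) + 1, 0)) AS LastOfMonth;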
View 5 Replies
Jun 25, 2015
Is there a way to leave the graphical 'Include Execution Plan' option on by default in SSMS? I don't know how many times I run a long-running query, say to myself, "wow, that took a while; I wonder what the execution plan looks like?", only to realize that I left it turned off. Then I have to turn it on and wait for the query to run again. I'm guessing there's a setting in the options somewhere to always leave it on, but I'm not sure where.
View 2 Replies
May 28, 2008
I would occasionally get the error below when trying to access a database in my project/App_Data folder using Visual Web Developer Express 2008.
I would re-boot and the problem would go away.
I now have the problem all the time and am unable to access my database file in the App_Data folder or ASPNETDB.mdf.
I am not trying to access a remote database. I have not knowingly changed any settings.
Has anyone seen this problem?
Can anyone help?
Thanks,
Charles Smith
ERROR MESSAGE:
… under the default settings SQL Server doesn't allow remote connections. (provider: SQL Network Interfaces, error 26 – Error Locating Server/Instance Specified)
View 1 Replies
Sep 19, 2007
I know the standard Microsoft recommendation is to make the pagefile at least 1.5 to 3 times larger than the amount of physical memory. However, if you're talking about a server with lots of memory, such as 16 GB or 32 GB, is following this rule unnecessary? With SQL 2000 running on Windows 2000 Server or Windows Server 2003, I typically see pagefile usage of no more than 12% for a 2 GB pagefile. Anything over 15% means I need to look at other indicators to see if a memory bottleneck has developed. If I have 32 GB of physical memory and make the pagefile only 1.5 x 32 GB, I have a 48 GB pagefile. 10% of this is 4.8 GB, which I would hope I never see consumed.
Any thoughts?
Thanks, Dave
View 11 Replies
Apr 24, 2013
IF NOT EXISTS (SELECT TOP 1 1 FROM dbo.syscolumns WHERE id = OBJECT_ID(N'dbo.Employee') AND name = 'DoNotCall')
BEGIN
ALTER TABLE [dbo].[Employee] ADD [DoNotCall] bit not null Constraint DoNot_Call_Default DEFAULT 0
IF ( @@ERROR <> 0 )
GOTO QuitWithRollback
END
It just takes a LOT of time in SQL Server Management Studio, and I have to cancel the query, which itself takes a long time. I am using SQL Server 2008.
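When an ALTER hangs like that, it is usually waiting on a schema-modification lock behind other sessions; a hedged check to run while it sits:

SELECT r.session_id, r.blocking_session_id,
       r.wait_type, r.wait_time, r.command
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0
   OR r.wait_type LIKE N'LCK%';    -- LCK_M_SCH_M here means the ALTER is lock-waiting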
View 4 Replies
Mar 13, 2015
I've written a custom script to delete backup files from a location, but I am unable to modify it to count the number of files deleted. How do I modify the script?
/* Script to delete older than N days backup from a specific directory */
USE [db_admin]
GO
IF OBJECT_ID('usp_DeleteBackup', 'P') IS NOT NULL
DROP PROC usp_DeleteBackup
GO
[Code] .....
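Since the script body is truncated above, a hedged fragment for just the counting: capture the directory listing before deleting (assumes xp_cmdshell is enabled; the path and mask are examples):

DECLARE @files TABLE (fname nvarchar(400));

INSERT INTO @files (fname)
EXEC master.sys.xp_cmdshell 'dir /b "D:\Backups\*.bak"';

SELECT COUNT(*) AS files_to_delete
FROM @files
WHERE fname IS NOT NULL AND fname <> 'File Not Found';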
View 2 Replies
Mar 11, 2015
When creating a new table, how can I set the default value of a column to equal the value of another column in the same table?
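For reference, a DEFAULT constraint cannot reference another column in the same row; a computed column is the usual alternative (names are hypothetical), with an AFTER INSERT trigger being the option if the column must also be independently updatable:

CREATE TABLE dbo.Example
(
    ColA int NOT NULL,
    ColB AS ColA        -- computed: always mirrors ColA
);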
View 5 Replies