SQL Server Admin 2014 :: How To Read And Load Extended Events Into A Table For Querying
Jun 19, 2015
I am setting up extended events more or less fine; however, I am a bit confused as to how to read the files and load them into a table for querying. In particular the offset part - is there a way to load in just a given date's worth?
I've got the files configured to be 20MB before rolling over, the XE is running all the time.
So if I load in the full file now, say that covers 2.5 days' worth, then when I load it again tomorrow to pick up the new data, I'm also reloading today's events, which is a waste?
I presume I am going about this the wrong way, but I lack an example that really goes into the practicalities of loading this data.
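On the offset part, here is a minimal sketch of an incremental load, assuming a hypothetical staging table and an example file path. sys.fn_xe_file_target_read_file returns a file_name and file_offset for every event, so if you persist the last pair you loaded you can pass them back as the initial_file_name and initial_offset arguments on the next run and pick up only the new events; as I understand it, events up to and including the supplied offset are skipped. A single day's worth would then just be a filter on the event timestamp after shredding.

-- dbo.XeEvents and the .xel path are assumptions for illustration.
CREATE TABLE dbo.XeEvents
(
    event_data  xml           NOT NULL,
    file_name   nvarchar(260) NOT NULL,
    file_offset bigint        NOT NULL
);

DECLARE @last_file nvarchar(260), @last_offset bigint;

-- Where did the previous load stop? NULLs on the first run make the
-- function read from the beginning of the file set.
SELECT TOP (1) @last_file = file_name, @last_offset = file_offset
FROM dbo.XeEvents
ORDER BY file_name DESC, file_offset DESC;

-- Only events after the stored offset come back.
INSERT INTO dbo.XeEvents (event_data, file_name, file_offset)
SELECT CAST(event_data AS xml), file_name, file_offset
FROM sys.fn_xe_file_target_read_file(N'C:\XE\MySession*.xel', NULL, @last_file, @last_offset);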
I want to get/filter on all queries and procs that take longer than 2 seconds to run (I'll settle on real values later), but I'm not sure which action out of the XE output I need.
I am using SQL Server 2014 and thought I had used sqlserver.sql_statement_completed.duration > 2000 in a previous version.
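For what it's worth, duration is a field on the completed events rather than an action, and in SQL Server 2014 it is reported in microseconds, so a 2-second filter needs duration > 2000000. A sketch of a session definition along those lines (the session name and target path are assumptions):

CREATE EVENT SESSION [LongRunningQueries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.sql_text, sqlserver.session_id)
    WHERE (duration > 2000000)  -- microseconds, i.e. 2 seconds
),
ADD EVENT sqlserver.rpc_completed
(
    ACTION (sqlserver.sql_text, sqlserver.session_id)
    WHERE (duration > 2000000)
)
ADD TARGET package0.event_file (SET filename = N'C:\XE\LongRunning.xel');
GO
ALTER EVENT SESSION [LongRunningQueries] ON SERVER STATE = START;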
I need to find any stored procedures that have not been used over a certain time period.
I have set up an extended events session to gather sp_statement_starting and _completed.
The trace returns the object_ids of many stored procedures. I then query sys.procedures, plugging in the object_id to return the stored proc name.
Do I have to repeat this process for every different object_id, or is there a way I can query the trace results using the object_id as my search criteria in one query?
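You can do it in one query by shredding the object_id field out of the captured events and anti-joining to sys.procedures; procedures that never appear in the trace are the "not used in this window" candidates. A sketch, with the file path as an assumption:

WITH ev AS
(
    SELECT CAST(event_data AS xml) AS x
    FROM sys.fn_xe_file_target_read_file(N'C:\XE\SpUsage*.xel', NULL, NULL, NULL)
),
used AS
(
    -- sp_statement_* events carry object_id as a data field
    SELECT DISTINCT x.value('(/event/data[@name="object_id"]/value)[1]', 'int') AS object_id
    FROM ev
)
SELECT p.object_id, p.name
FROM sys.procedures AS p
LEFT JOIN used AS u ON u.object_id = p.object_id
WHERE u.object_id IS NULL;  -- never seen in the trace window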
I have been tasked with auditing all DDL and selected DML events on a production server and logging them to a table. My solution is to use CDC for the DML and a server-level trigger for the DDL. Because there should never be much DDL activity on the server (except when performing update tasks), I don't need to worry about the trigger consuming too many resources.
My question is this: Is there any single specification such as DDL_LEVEL_EVENTS that can capture all DDL activity or do I need to specify each and every DDL action in the trigger?
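DDL_EVENTS sits at the top of the event group hierarchy and includes both the server-level and database-level DDL groups, so a single specification should cover all DDL activity. A minimal sketch (the audit database and table names are assumptions):

-- AuditDB.dbo.AuditDDL is a hypothetical logging table.
CREATE TRIGGER trg_AuditAllDDL
ON ALL SERVER
FOR DDL_EVENTS
AS
BEGIN
    INSERT INTO AuditDB.dbo.AuditDDL (EventTime, LoginName, EventData)
    VALUES (SYSDATETIME(), ORIGINAL_LOGIN(), EVENTDATA());
END;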
I am trying to set up querying Active Directory from SQL for the first time.
We are running on Windows Server 2012 and using SQL 11.0.2100.60. We have tried the following.
SQL is on server dev; AD is on server DO.
EXEC sp_addlinkedserver 'ADSI', 'Active Directory Services 2.5', 'ADSDSOObject', 'adsdatasource'
GO
[Code] ....
I get the following error when I try to query:
Msg 7321, Level 16, State 2, Line 2
An error occurred while preparing the query "SELECT name FROM 'LDAP://xxxx.internal' WHERE objectCategory='Person' AND objectClass = 'contact'" for execution against OLE DB provider "ADSDSOObject" for linked server "ADSI".
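Msg 7321 from ADSDSOObject is often down to the LDAP path or filter syntax rather than the linked server definition. A sketch of the OPENQUERY form that commonly works (the DC= components are an assumption derived from the xxxx.internal name in the error):

SELECT name
FROM OPENQUERY(ADSI,
    'SELECT name
     FROM ''LDAP://DC=xxxx,DC=internal''
     WHERE objectCategory = ''Person'' AND objectClass = ''contact''');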
What is the best approach for a read-only copy of a database that is ~1TB? The primary database is fed nightly by an ETL process. We are currently trying to duplicate the ETL to the read-only server, but that process is not going well. So we are looking at other options to let SQL make the copy.
The primary database is on a Win12R2 with SQL 12 or 14, a 2 node A/P failover cluster.
The read only copy will be on a Win12R2 with SQL 12 or 14. It is not a requirement to fail over to the read only copy if the primary should go down.
What would be the best approach to accomplish the end result?
Assume I have a connection (ApplicationIntent=ReadOnly) that starts with reading, then writes, then reads data again for a report. How will that work in SQL 2014 AlwaysOn availability groups?
We are having a conversation at work and the subject of load balancing with SQL came up. Right now we are running SQL Server 2014 on four (4) machines, using AlwaysOn with Availability Groups (AG). Now I know that we can scale out reads in an AG by allowing the secondary servers to receive reads.
Is there a way to do this with writes? Can I have, in essence, 2 masters that somehow reconcile with each other? We are expecting a huge amount of writes in the near future and we need a way for SQL to handle that traffic without any issues.
I explored the possibility of peer-to-peer replication; however, it seems that it would be more work if we are constantly making updates to the database schema.
I am curious what other people have done to implement read-only routing for a large number of procedures.
Basically figuring out when to call procedures that are read-only with read-only intent.
We have a user application that passes an encrypted string to a web service that directs it to our SQL Servers.
I've been tasked with finding a way to make this happen without changing the application.
The only thing I have been able to come up with is writing something (which I did) that identifies whether a given procedure is read-only or not and stores a big list.
Then the web service looks up the given procedure and adds the intent where needed.
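For reference, a rough sketch of the kind of scan that can seed such a list. It is only a first pass: dynamic SQL, triggers, and cross-database calls can all defeat a plain text match.

SELECT p.name,
       CASE WHEN m.definition LIKE '%INSERT%'
              OR m.definition LIKE '%UPDATE%'
              OR m.definition LIKE '%DELETE%'
              OR m.definition LIKE '%MERGE%'
            THEN 'read-write'
            ELSE 'read-only'
       END AS intent_guess
FROM sys.procedures AS p
JOIN sys.sql_modules AS m ON m.object_id = p.object_id;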
Having an annoying AG/AO problem with the read only routing side of it.
Let me give some specifics first:
2 SQL Server Instances, Not Clustered. Availability Group is named 'Ireland'
There is a primary Replica and a Secondary Replica, named:
'IrelandPrimary' and 'IrelandSecondary'
There is a listener configured with the name 'ListenIreland' on Port 14330 (the two 3's are correct)
Read Only Routing URLs are configured as follows:
IrelandPrimary: tcp://Ireland.dom.local:49891 ALL
IrelandSecondary: tcp://Ireland.dom.local:49841 ALL
So now my problem:
When I try to connect using ApplicationIntent=ReadOnly; or even using -K ReadOnly in sqlcmd, I get an error telling me that my connection was actively refused.
This is connecting to the Listener, not the instance itself - that works fine. I'm at a bit of a loss now.
What I am trying to achieve is for a connection to be redirected to the secondary replica when it is set for read-intent.
I've just noticed that it only fails when I specify ApplicationIntent=ReadOnly; if I omit the intent, it connects to the read-write database instead.
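Two things commonly cause exactly this behavior: read-only routing only engages when the connection string also specifies a database name (Initial Catalog), and the primary must have a READ_ONLY_ROUTING_LIST defined. A sketch using the names from the post, worth comparing against the actual configuration:

-- The secondary must allow read-intent connections and advertise its URL.
ALTER AVAILABILITY GROUP [Ireland]
MODIFY REPLICA ON N'IrelandSecondary'
WITH (SECONDARY_ROLE (
    ALLOW_CONNECTIONS = READ_ONLY,
    READ_ONLY_ROUTING_URL = N'tcp://Ireland.dom.local:49841'));

-- The primary decides where read-intent connections get routed.
ALTER AVAILABILITY GROUP [Ireland]
MODIFY REPLICA ON N'IrelandPrimary'
WITH (PRIMARY_ROLE (
    READ_ONLY_ROUTING_LIST = (N'IrelandSecondary', N'IrelandPrimary')));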
We have a reporting database which is refreshed daily from a prod backup, with new tables/views/indexes created afterwards as part of the refresh job. Is there a better approach we can implement in SQL 2012/2014 for this scenario, since we are planning to migrate to SQL 2014?
We installed SP1 for SQL Server 2014 this past weekend and got this error message in the logs. I found that if you set the db to read-write, it updates the system objects, even after SP1 has completed. Then you can set it back to read-only. I'm just posting this so other people can find it on the internet, as I wasn't able to find it specifically.
Error Log Entry: System objects could not be updated in database 'x' because it is read-only.
Problem: After installing SP1 for SQL Server 2014 you will find this message in the error logs saying read-only databases could not be updated.
Solution: Simply set the db to read-write and the system objects will get updated, even long after SP1 was installed.
ALTER DATABASE [x] SET READ_WRITE WITH NO_WAIT
Then set it back to read-only:
ALTER DATABASE [x] SET READ_ONLY WITH NO_WAIT
You should then see these log entries:
System objects could not be updated in database 'x' because it is read-only.
Setting database option READ_WRITE to ON for database 'x'.
Starting up database 'x'.
CHECKDB for database 'x' finished without errors on 2015-07-25 01:02:28.143 (local time). This is an informational message only; no user action is required.
Synchronize Database 'x' (129) with Resource Database.
Setting database option READ_ONLY to ON for database 'x'.
Starting up database 'x'.
CHECKDB for database 'x' finished without errors on 2015-07-25 01:02:29.888 (local time). This is an informational message only; no user action is required.
We have an AlwaysOn setup in our environment with a read-only replica. The primary database has 2 schemas, dbo and xyz. We have some stored procs created in the dbo schema and the xyz schema. These stored procs are being used by SSRS reports to retrieve the data (select only); no data changes will be made.
When we run the stored procs from the read-only server, the ones in the dbo schema run fine, but the xyz schema ones fail with a message saying it failed to update the database as this is a read only...
I have a 2-node cluster with 2 standalone 2K14 instances in an AlwaysOn setup. Per client requirement we have created a client access point with a CNAME alias in DNS to connect to the secondary replica. Now, every time the roles switch over, this resource has to be moved manually from the previous secondary node to the new secondary node. This is tedious and shouldn't be done manually anyway, so I am looking for a way to automate it so that as soon as the roles switch over, the resource group also switches over (after some time) to the current secondary.
How would you calculate the average read/write latency experienced by a SQL Server instance during a specific time window, in order to monitor this for multiple instances? From this MSDN blog, I know that you have to take multiple samples and do some calculations to get the correct latency.
[URL] ...
However, the SQLServer:Resource Pool Stats object tracks these numbers per resource pool, and we want to get one number for the whole server. Since there can be a different base value for each resource pool, you can't simply sum the numerator values together. Here's some sample data from a server that illustrates the problem:
object_name                    counter_name                  instance_name  cntr_value  cntr_type
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms)         default        307318919   1073874176
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms) Base    default        25546724    1073939712
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms)         internal       2045730     1073874176
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms) Base    internal       208270      1073939712
I'm thinking I would need to do some sort of weighted average, but I'm not sure if that will result in the correct value. Here's the formula I am thinking about using currently, before doing the calculation over time.
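A weighted average does work out cleanly here. For these PERF_AVERAGE_BULK counters, each pool's latency over an interval is (delta of the value counter) / (delta of its Base counter), so summing the value deltas and the base deltas across all pools before dividing gives the instance-wide average weighted by the number of IOs in each pool. A sketch of that calculation (the 30-second window is an arbitrary choice):

-- Two raw samples of the per-pool counters, taken some time apart.
SELECT RTRIM(counter_name) AS counter_name,
       RTRIM(instance_name) AS instance_name,
       cntr_value
INTO #t1
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Resource Pool Stats%'
  AND counter_name LIKE 'Avg Disk Read IO (ms)%';

WAITFOR DELAY '00:00:30';

SELECT RTRIM(counter_name) AS counter_name,
       RTRIM(instance_name) AS instance_name,
       cntr_value
INTO #t2
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Resource Pool Stats%'
  AND counter_name LIKE 'Avg Disk Read IO (ms)%';

-- Sum numerator deltas and base deltas across all pools first, then
-- divide: the per-IO weighted average for the whole instance.
SELECT
    SUM(CASE WHEN t2.counter_name NOT LIKE '%Base' THEN t2.cntr_value - t1.cntr_value END) * 1.0
  / NULLIF(SUM(CASE WHEN t2.counter_name LIKE '%Base' THEN t2.cntr_value - t1.cntr_value END), 0)
    AS avg_read_latency_ms
FROM #t2 AS t2
JOIN #t1 AS t1
  ON  t1.counter_name  = t2.counter_name
  AND t1.instance_name = t2.instance_name;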
I use this code in a utility procedure (for performance testing) but it is really slow.
For example, a session with three events is taking 5 seconds to complete this query:
DECLARE @xml xml =
(
    SELECT CAST(xet.target_data AS xml)
    FROM sys.dm_xe_session_targets AS xet
    JOIN sys.dm_xe_sessions AS xe
        ON xe.address = xet.event_session_address
    WHERE xe.name = @name
);
I am trying to use XQuery to get event data back from Extended Events. I am using some sample data from Grant Fritchey, but I am getting NULL records back. Below is my query - I just want to retrieve a distinct list of the client_hostname and client_app_name.
WITH xEvents AS
(
    SELECT object_name AS xEventName,
           CAST(event_data AS xml) AS xEventData
    FROM sys.fn_xe_file_target_read_file('C:\LoginTrace\Shared_0*.xel', NULL, NULL, NULL)
)
SELECT DISTINCT TOP 1000
    xEventName,
    xEventData.value('(/event/data[@action_name=''Client_APP_Name'']/value)[1]', 'varchar') AS Client_APP_Name,
    xEventData.value('(/event/data[@action_name=''Client_Host_Name'']/value)[1]', 'varchar') AS Client_Host_Name
FROM xEvents
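The NULLs are most likely the XPath: actions are serialized under /event/action with an @name attribute, the action names are lower-case (client_app_name, client_hostname), and value() should be given an explicit varchar length. A corrected version of the same query, assuming the path above:

WITH xEvents AS
(
    SELECT object_name AS xEventName,
           CAST(event_data AS xml) AS xEventData
    FROM sys.fn_xe_file_target_read_file('C:\LoginTrace\Shared_0*.xel', NULL, NULL, NULL)
)
SELECT DISTINCT TOP 1000
    xEventName,
    xEventData.value('(/event/action[@name="client_app_name"]/value)[1]', 'varchar(256)') AS client_app_name,
    xEventData.value('(/event/action[@name="client_hostname"]/value)[1]', 'varchar(256)') AS client_hostname
FROM xEvents;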
I know very little about Extended Events, but I know it is supposed to be more powerful than Profiler. The SQL instance involved is 2008 R2. I was asked whether we could capture when a stored proc is called with a certain parameter. So, for example, if dbo.usp_mystoredproc is called and passed a value of '12345a' for @customer, can that be captured? I would want to know the time it was called, the parameter, and the parameter value, and would probably be interested in the SPID or LoginName as well.
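One approach that might work on 2008 R2, hedged: for RPC calls the parameter values appear in the statement text, so a text-match predicate on sqlserver.sql_text can catch a specific value. Everything here (session name, proc name, paths) mirrors the example in the question or is an assumption to verify:

CREATE EVENT SESSION [CaptureProcParam] ON SERVER
ADD EVENT sqlserver.rpc_completed
(
    ACTION (sqlserver.session_id, sqlserver.server_principal_name)
    -- match both the proc name and the parameter value in the text
    WHERE sqlserver.like_i_sql_unicode_string(sqlserver.sql_text, N'%usp_mystoredproc%12345a%')
)
-- On 2008 R2 the file target is package0.asynchronous_file_target
-- (it became package0.event_file in 2012).
ADD TARGET package0.asynchronous_file_target
    (SET filename = N'C:\XE\ProcParam.xel', metadatafile = N'C:\XE\ProcParam.xem');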
My question is: how can I insert a row for each unique TemplateId? So let's say I have TemplateIds like 2, 5, 6, 7... For each unique TemplateId, how can I insert one more row?
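Hard to answer without the table definition, but as a sketch against a hypothetical dbo.Templates table, an INSERT ... SELECT DISTINCT adds one row per unique TemplateId in a single statement:

-- dbo.Templates and its columns are hypothetical.
INSERT INTO dbo.Templates (TemplateId, CreatedDate)
SELECT DISTINCT TemplateId, GETDATE()
FROM dbo.Templates;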
We are planning to convert all existing traces to Extended Events in SQL Server 2012. What is the procedure for converting custom traces? We have already created some custom traces like the one below, and we are planning to do this conversion for all servers.
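There is no fully automatic converter, but the catalog ships a mapping view that does most of the legwork. A sketch that lists the Extended Events equivalent of every event in an existing trace (trace id 2 is just an example; the default trace is usually id 1):

SELECT DISTINCT
    te.trace_event_id,
    te.name AS trace_event,
    m.package_name,
    m.xe_event_name
FROM sys.fn_trace_geteventinfo(2) AS i
JOIN sys.trace_events AS te ON te.trace_event_id = i.eventid
JOIN sys.trace_xe_event_map AS m ON m.trace_event_id = i.eventid;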
Previously the same records existed both in the table with the primary key and in the table with the foreign key. We have found that 7 records were lost from the primary key table while the same records still exist in the foreign key table.
I have a stored procedure that attempts to perform a WHERE NOT EXISTS check to insert new records. If the table is empty, the procedure will load the table. However, an insert does not occur when a change to one or more source fields occurs against an existing record. The following is my code:
I expected that when one of the source values of any field in the second WHERE clause changes, that the procedure would insert a new record. Why is this not happening? One other note: I am not 'allowed' to use MERGE.
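Without seeing the code, the usual culprit is that the NOT EXISTS correlation covers only the key columns, so a row whose non-key fields changed still "exists" and is skipped. Since MERGE is off the table, one pattern is to compare every column with EXCEPT (table and column names below are hypothetical):

-- EXCEPT compares all listed columns and treats NULLs as equal,
-- so a change in any field produces a row to insert.
INSERT INTO dbo.Target (KeyCol, Field1, Field2)
SELECT KeyCol, Field1, Field2 FROM dbo.Source
EXCEPT
SELECT KeyCol, Field1, Field2 FROM dbo.Target;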
I'm being asked to create multiple filegroups for a new database based on the table type - transaction, lookup, misc... From what I'm reading, this doesn't make sense. What I'm reading says that large tables get their own filegroups, or nonclustered indexes do when they are about the same size as the data, or a few other reasons...
First of all, we are talking about the same disk (please don't ask me about how it is configured), and I'm not sure yet if restoring separate filegroups is even going to be necessary.
So here are my questions (beyond "test and see what happens"), because in the end I'm probably going to have to do what I'm told. So this is for my professional knowledge.
1. Do filegroups separated by table type make sense?
2. Should you put tables that are often queried together in the same or in different filegroups?
3. I'm pretty sure you can't restore a single filegroup for write access, am I correct?