I want to use the Showplan XML event in my traces. The information this event gives is quite interesting, but I don't know how to link it to other events. For example, I have a SQL:BatchCompleted event and want to get the XML plan for it, but I have no idea how to do that.
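For reference, here is a minimal sketch of the kind of correlation I'm hoping for, assuming SQL Server 2005 or later and a trace saved to a file (the path below is just a placeholder). Since the Showplan XML event and the completed event for the same batch share a SPID, sorting by EventSequence within each SPID should line them up:

SELECT t.SPID, t.EventSequence, e.name AS event_name, t.Duration, t.TextData
FROM fn_trace_gettable('C:\traces\mytrace.trc', DEFAULT) AS t
JOIN sys.trace_events AS e ON e.trace_event_id = t.EventClass
WHERE e.name IN ('Showplan XML', 'SQL:BatchCompleted')
ORDER BY t.SPID, t.EventSequence

(The XML plan should come back in TextData for the Showplan XML rows, but I'm not certain about that part.)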
Any suggestions on how I can monitor the following without using traces? I am a DBA/developer working as a developer on a contract, and I'm supposed to be tuning. However, I can't run traces. I've got my own procs that monitor locking, etc., but I would like to get at least I/O and CPU throughout the day. It would also be nice to get the query executed. Basically, the type of stuff you'd normally use traces for.
I know about @@CPU_BUSY, @@IO_BUSY, etc., but these are basically useless (no?) since they only accumulate since the server was started. There is a stored proc, but it only reports these things since the last time it was run.
Does anyone know how I could utilize the above? I tried to write a script but I couldn't get it to work. :(
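The sort of script I was attempting was a snapshot of sysprocesses taken on a schedule and diffed afterwards. A rough sketch only: the table name and the spid > 50 filter are my own assumptions, and it assumes I at least have SELECT rights on master.dbo.sysprocesses:

-- run once
CREATE TABLE dbo.SpidUsageSnapshot (
    capture_time DATETIME NOT NULL,
    spid SMALLINT NOT NULL,
    loginame NVARCHAR(128) NULL,
    cpu INT NOT NULL,            -- cumulative CPU for the connection
    physical_io BIGINT NOT NULL  -- cumulative reads + writes for the connection
)

-- run every few minutes, e.g. from a scheduled job
INSERT dbo.SpidUsageSnapshot (capture_time, spid, loginame, cpu, physical_io)
SELECT GETDATE(), spid, loginame, cpu, physical_io
FROM master.dbo.sysprocesses
WHERE spid > 50   -- rough filter to skip system spids

Subtracting consecutive snapshots per spid should give CPU and I/O per interval, but I couldn't get the delta logic working.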
I realize that in general this is a ridiculous request, but I thought I would ask anyway.
This one has stumped me. Hopefully somebody can help. A while ago, I set up a trace that posted the log to the desktop. I needed to stop the trace this morning, so I went into Profiler and deleted the traces. There was a private and a shared trace. Now every time I start up something that has to do with SQL Server, the log pops up on the desktop. I'm not sure why the trace wasn't deleted or stopped. The trace includes what program accessed SQL, whether it is Enterprise Manager, Query Analyzer, or ISQL; it gets posted in the log. Any suggestions? I need to remove this because the log fills up the drive and causes the server to crash.
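If the trace is still defined server-side, I'm guessing something like this would find and stop it (this assumes SQL Server 2000 or later, and the trace id below is a placeholder), but I'm not sure it applies to my situation:

-- list any server-side traces and where they are writing
SELECT * FROM ::fn_trace_getinfo(DEFAULT)

-- stop and remove a trace definition (replace 2 with the id reported above)
EXEC sp_trace_setstatus @traceid = 2, @status = 0   -- stop it
EXEC sp_trace_setstatus @traceid = 2, @status = 2   -- close it and delete the definition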
I need some help knowing what to look for in Profiler to troubleshoot an issue.
I've got an application that accesses a SQL Server database that has suddenly started timing out when users launch and attempt to log in, and I'm trying to find out where and why the application might be timing out (whether it's a server issue, a stored procedure or SQL query from the application that could be optimized, a table that could be truncated or archived, etc.). All I have to work with from troubleshooting the database side are a series of trace files from Profiler that were run for a total of about 5 minutes while the application was launched and then timed out. Of course, there are a whole lot of statements being issued, hundreds of tables being accessed, lots of stored procedures and even more ad-hoc queries coming straight from application source code.
So my question is, what do I need to look for in these trace files that might be a red flag to an issue? I'm no DBA, but I know that really long durations might be a tip-off. I'm only seeing these on the occasional Event:Audit Logout (which I read in another thread could potentially be very normal). Anything else that I might want to filter for?
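In case it helps to know what I'm working with, this is roughly how I've been loading the files to look at durations (it assumes SQL Server 2005, where fn_trace_gettable reports Duration in microseconds when reading from a file, and the path is a placeholder):

SELECT TOP 50 e.name AS event_name, t.Duration, t.CPU, t.Reads, t.Writes, t.TextData
FROM fn_trace_gettable('C:\traces\apptimeout.trc', DEFAULT) AS t
JOIN sys.trace_events AS e ON e.trace_event_id = t.EventClass
WHERE e.name IN ('RPC:Completed', 'SQL:BatchCompleted')
ORDER BY t.Duration DESC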
I have a procedure in a history database that inserts into 3 tables inside a transaction. Users complain that the proc sometimes takes too long during heavy usage. I did some traces to see what is taking up the time and found that the RPC duration was averaging > 500 ms (it should only take about 50 ms). I checked to see if one of the statements was taking too much time, but only the COMMIT TRANSACTION statement takes around 500 ms. I also checked the average disk queue length, which is around 30 (this is on a single local disk).
So is this definitely a disk issue, or is there something else I need to check?
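One thing I was planning to check next, if it's relevant (this assumes SQL 2005 or later; I'm not sure it applies to our version): whether the commit time shows up as WRITELOG waits, which I understand would point at log-write latency on that single disk:

SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('WRITELOG', 'PAGEIOLATCH_SH', 'PAGEIOLATCH_EX')
ORDER BY wait_time_ms DESC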
I am debating whether to go to all the trouble of setting up on-demand Profiler traces on some test servers for the developers here, really just tracing RPC:Completed and SQL:BatchCompleted, so the developers can at least try to catch a performance problem before going to production. The question I have, though, is just how useful this sort of information is to mid- to low-experience developers. One of the bigger concerns is over Java applications, which like to hide their queries behind a lot of "sp_cursorfetch" calls.
My question to the forum is if you are a developer, have you ever dreamed of having this sort of information available? How useful is it?
I am going to try to post a poll along with this, but I am not sure it will work..
When I restart the server that hosts the database engine (SQL Server 2005 Standard Edition SP4), a trace with ID 2 starts up writing mytrace-5.trc at 100 MB, using up the remaining hard disk space, and then SQL Server stops the trace because of the lack of space. I do not know how to remove the trace, because I do not know where it is defined.
The archived error log shows: SQL Trace ID 2 was started by login "sa".
Trace ID '2' was stopped because of an error. Cause: 0x80070070(There is not enough space on the disk.). Restart the trace after correcting the problem.
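In case it helps anyone answering, this is what I was planning to try (assuming the trace ID really is 2, as the log says, and that sys.traces is available on 2005):

-- find where trace ID 2 is writing and what state it is in
SELECT id, status, path, max_size FROM sys.traces

-- stop it, then close it and delete its definition
EXEC sp_trace_setstatus @traceid = 2, @status = 0
EXEC sp_trace_setstatus @traceid = 2, @status = 2

-- since it comes back after every restart, check for a startup procedure that might be recreating it
SELECT name FROM sys.procedures WHERE OBJECTPROPERTY(object_id, 'ExecIsStartup') = 1

But I don't know whether that is the right way to stop it for good.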
My SQL Server 2005 SP4 on Windows 2008 R2 is flooded with the below errors:-
Date 10/25/2011 10:55:46 AM Log SQL Server (Current - 10/25/2011 10:55:00 AM) Source spid Message Event Tracing for Windows failed to send an event. Send failures with the same error code may not be reported in the future. Error ID: 0, Event class ID: 54, Cause: (null).
Is there a way I can trace where these are coming from? When I check the input buffer for these SPIDs, it looks like they are tracing everything; all the general application DML is coming in on these SPIDs.
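The only thing I've thought of so far is listing what traces are running and what events they collect, in case one of them was registered with an ETW target (trace ID 1 below is just a placeholder; I would repeat this for each ID returned):

SELECT id, status, path, is_default, start_time FROM sys.traces

SELECT DISTINCT e.name
FROM sys.fn_trace_geteventinfo(1) AS i
JOIN sys.trace_events AS e ON e.trace_event_id = i.eventid

I'm not sure that actually shows the ETW part, though.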
I have been testing with the WMI Event Watcher Task, so that I can identify a change to a file. The WQL is thus:
SELECT * FROM __InstanceModificationEvent within 30 WHERE targetinstance isa 'CIM_DataFile' AND targetinstance.name = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\AdventureWorks.bak'
This polls every 30 secs, and in the SSIS event (ActionAtEvent in the WMI task is set to fire the SSIS event) I have a simple script task that shows a message box.
My understanding is that the event polls every 30 s and if there is a change on the AdventureWorks.bak file then the event is triggered and the script task will run producing the message. However, when I run the package the message is occurring every 30s, meaning the event is continually firing even though there has been NO change to the AdventureWorks.bak file.
Am I correct in my understanding of how this should work, and if so, why is the event firing when it should not?
Windows Server 2003 SE SP1 (5.2.3790), SQL Server 2000 SP4 (8.00.2187, the latest hotfix rollup). We fixed one issue, but it brought up another: the fix we applied stopped the ServicesActive access failure, but now we have a failure on MSSEARCH. The users this is affecting do NOT have admin rights on the machine; they are SQL developers. We were getting:
Event Type: Failure Audit
Event Source: Security
Event Category: Object Access
Event ID: 560
Date: 5/23/2007
Time: 6:27:15 AM
User: domain\user
Computer: MACHINENAME
Description: Object Open:
Object Server: SC Manager
Object Type: SC_MANAGER OBJECT
Object Name: ServicesActive
Handle ID: -
Operation ID: {0,1623975729}
Process ID: 840
Image File Name: C:\WINDOWS\system32\services.exe
Primary User Name: MACHINE$
Primary Domain: Domain
Primary Logon ID: (0x0,0x3E7)
Client User Name: User
Client Domain: Domain
Client Logon ID: (0x0,0x6097C608)
Accesses: READ_CONTROL, Connect to service controller, Enumerate services, Query service database lock state
We are planning to convert all existing traces to Extended Events in SQL Server 2012. What is the procedure for converting custom traces? We have already created some custom traces like the ones below, and we plan to convert them on all servers.
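As a starting point I was thinking of something like the following (trace ID 2 is a placeholder for each of our custom trace IDs, and the file path is made up), but I'd like to know if there is a more standard procedure:

-- map the events in an existing trace to their Extended Events equivalents
SELECT DISTINCT te.name AS trace_event, xem.package_name, xem.xe_event_name
FROM sys.fn_trace_geteventinfo(2) AS ti
JOIN sys.trace_events AS te ON te.trace_event_id = ti.eventid
LEFT JOIN sys.trace_xe_event_map AS xem ON xem.trace_event_id = ti.eventid

-- then build a session from the mapped events, for example:
CREATE EVENT SESSION [ConvertedTrace] ON SERVER
ADD EVENT sqlserver.rpc_completed,
ADD EVENT sqlserver.sql_batch_completed
ADD TARGET package0.event_file (SET filename = N'C:\Traces\ConvertedTrace.xel')
WITH (STARTUP_STATE = ON)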
We recently upgraded to SQL 2005 from SQL 2000. We have most of our issues ironed out however about every 1 minute there is a message in the Application Event log and the SQL log that states:
EVENT ID 18456 Login failed for user 'DOMAIN\ACCOUNT' [CLIENT: <local machine>]
This is a state 16 message which I thought meant that the account does not have access to the default database. The account is actually the account that the SQL services run under.
Any ideas? We can't seem to figure this one out. We actually upgraded to 2005 from 2000 and had an error appear after every reboot that prevented the SQL Agent from running ("This application has failed to start because GAPI32.dll was not found. Re-installing the application may fix this problem."). We did a full uninstall of SQL and reinstalled fresh, restored the databases from .bak files, and that is when the EVENT ID 18456 messages started occurring every minute.
We don't have any SQL heavy hitters here, so please be detailed with any possible solutions. Thank you very much for any help you can provide!
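The only thing I know to check on the database side is the default database for that account (the login name below is a placeholder for the actual service account from the error). Is that the right direction?

SELECT name, default_database_name, is_disabled
FROM sys.server_principals
WHERE name = N'DOMAIN\ACCOUNT'

ALTER LOGIN [DOMAIN\ACCOUNT] WITH DEFAULT_DATABASE = master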
The following two events are frequently observed in the system event error log. Has anybody come across these errors?
1.The description for Event ID ( 318 ) in Source ( SQLServerAgent$XYZ ) could not be found. It contains the following insertion string(s):
2. The Open Procedure for service "MSSQLServer" in DLL "SQLCTR70.DLL" failed. Performance data for this service will not be available. Status code returned is DWORD 0.
I get this error in my event log on my WSUS server. I am running Windows 2000 with SQL 2000 SP4.
Connection to database failed. Reason=Cannot open database requested in login 'SUSDB'. Login fails. Login failed for user 'WSUSASPNET'.. Connection string: Data Source=WSUS;Initial Catalog=SUSDB;Connection Timeout=60;Application Name=WSUS SQL Connection; Trustedd_Connection=Yes;Pooling='true'; Max Pool Size = 100
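The only fix I could think of was re-adding the WSUS account as a user in SUSDB, along these lines (the login name is a placeholder since I'm not sure of the exact account, and this assumes SQL 2000 syntax), but I haven't tried it yet:

USE SUSDB
GO
-- the login may also need to be re-added at the server level first with sp_grantlogin
EXEC sp_grantdbaccess 'MACHINE\ASPNET'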
I heard somewhere that there is a "time event wizard" or something like that in SQL Server, and I would like to know where I can find it. Any link or answer will be really appreciated. Thanks.
Hello all, I have a nested GridView. How do I add a Selecting event for this data source for my inner GridView? I need it because I have to increase the CommandTimeout. Thanks.

private SqlDataSource ChildDataSource(string strCustno)
{
    string strQRY = "Select ...";
    SqlDataSource dsTemp = new SqlDataSource();
    dsTemp.ConnectionString = ConfigurationManager.ConnectionStrings["MyConnectionString"].ConnectionString;
    dsTemp.SelectCommand = strQRY;
    return dsTemp;
}
I need some clarification here. I changed the CommandField for "Delete" to a template field like this, and it works well with the extender:

<asp:TemplateField>
    <ItemTemplate>
        <asp:LinkButton ID="lnkDelete" runat="server" Text="Delete" OnCommand="onCommand_deleter" CommandArgument="Deleting" CommandName="Delete"></asp:LinkButton>
        <ajax:ConfirmButtonExtender ID="cnfDelete" TargetControlID="lnkDelete" runat="server" ConfirmText="Are you sure you want to delete this ?"></ajax:ConfirmButtonExtender>
    </ItemTemplate>
</asp:TemplateField>

I am confused because earlier I was handling the deletion of a GridView row in the OnRowDeleting event. Now it is being handled by "onCommand_deletePlayer", so why do I still need to handle the OnRowDeleting event? It gave me an error when I tried to leave it out. What exactly should I code in OnRowDeleting? I am checking something like this right now in OnRowDeleting, but it does not cancel the deletion despite raising the message:

if (gridView.Rows.Count <= 1)
{
    e.Cancel = true;
    lblMessage.Text = "You must keep at least one record";
    // e.Cancel does NOT work; the row gets deleted despite the message if the Delete link is clicked
}

Is there a way to stop the user from deleting the LAST and ONLY row from the db for a given condition? Thanks.
We are trying to schedule a DTS package. If we run the package while logged in to the server, the job works. If we log off the server, the job fails. We looked in the NT event log and get a message from SQLServerAgent that the job failed, with Event ID 208. If we check the job's run history, the Next Run shows the message 'Date and Time is not Available'. I should mention that the DTS package contains some mappings to drives on the server. Any ideas? We have been struggling with this for a long time. Thanks.
Why would I get an application event viewer error when I truncate the transaction logs on my databases? All the error tells me is that I truncated the database log.
I need to run a SQL job if a specific file exists. That specific file gets created at a different time on different days, and I need to run the job when the file arrives. Is there any way to do this in SQL Server? I have done this with the Seagate scheduler, but I need to do it in SQL. Please help!
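What I had in mind was a job step along these lines that checks for the file and kicks off the import, scheduled to run every few minutes (xp_fileexist is undocumented, and the path and job name here are placeholders), but I don't know if this is the right approach:

CREATE TABLE #f (file_exists INT, is_directory INT, parent_dir_exists INT)
INSERT #f EXEC master.dbo.xp_fileexist 'C:\incoming\datafile.txt'
IF EXISTS (SELECT 1 FROM #f WHERE file_exists = 1)
    EXEC msdb.dbo.sp_start_job @job_name = 'Daily file import'
DROP TABLE #f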
Hi, we would like to capture events in our system. There seem to be three obvious capture points for us: the application, triggers, and the transaction log. The latter seems the most attractive, since we're looking for a solution with minimal performance impact. In general, our problem is similar to populating data warehouses from online databases. Can anyone offer some advice? In particular, being quite new to SQL Server, I am not sure how difficult or even possible it is to read the transaction log in order to cull events. Some direction here would be greatly appreciated. Thanks, Karl
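To give an idea of what we mean by the trigger option, something like this per table is what we would be comparing against (table and column names are made up), which is why we are hoping the transaction log can do it more cheaply:

-- audit table for one source table
CREATE TABLE dbo.OrdersAudit (
    audit_time DATETIME NOT NULL DEFAULT GETDATE(),
    action CHAR(1) NOT NULL,   -- 'I' = inserted row image, 'D' = deleted row image (updates produce both)
    order_id INT NOT NULL,
    status VARCHAR(20) NULL
)
GO
CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    INSERT dbo.OrdersAudit (action, order_id, status)
    SELECT 'I', order_id, status FROM inserted
    INSERT dbo.OrdersAudit (action, order_id, status)
    SELECT 'D', order_id, status FROM deleted
END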
I want to experiment with setting the Truncate Log on Checkpoint option to True to see if this will lessen the chance of my transaction log running out of space. Before I do this I want to be sure that the Transaction Log is not tied to the NT Event Log, SQL Error Log, etc. For example, does the NT Event Log (or any other log) use the Transaction Log? Thanks, Kevin.
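For context, the option I'm planning to flip is just the database option, something like this (with 'MyDB' as a placeholder for my database name):

EXEC sp_dboption 'MyDB', 'trunc. log on chkpt.', 'true'

I just want to confirm before running it that the transaction log is purely a per-database construct and that nothing like the NT Event Log or SQL error log reads from it.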
I see these 4 events posted in my event log every second. I am using SQL 6.5 The first event says that "Login succeeded- User: probe Connection: Non-Trusted" The second event in the log has this description "DB-LIBRARY - SQL Server message: EXECUTE permission denied on object sp_replcounters, database master, owner dbo" The third event has "DB-LIBRARY error - General SQL Server error: Check messages from the SQL Server." and the fourth event contains "CollectSQLPerformanceData : dbsqlsend failed "
These messages are being posted repeatedly. Could someone shed some light on this, please? Thanks, Sudarshan
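The only idea I've had so far is that the second message looks like a plain permissions problem, so maybe granting the probe login execute rights in master would quiet it (just a guess on my part):

USE master
GO
GRANT EXECUTE ON sp_replcounters TO probe

But I don't know whether that is safe, or whether it addresses the other messages.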
Does anyone know if there is any software available that notifies specified people when an error above a certain severity level occurs? I'm thinking along the lines of email and text message.
I'm running SQL Server 2000 at SP3a level, and the software will have to be compatible with it in order to pick up the event notifications and pass the information on.
If there isn't any software, would anyone know of any scripts that will do this?
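In case someone knows whether SQL Agent alone would cover it: the closest I've found is the built-in alert/operator procedures, roughly like this (the operator name, e-mail address, and severity threshold are all placeholders, and it assumes SQL Mail is configured):

EXEC msdb.dbo.sp_add_operator @name = 'DBA Team', @email_address = 'dba@example.com'
EXEC msdb.dbo.sp_add_alert @name = 'Severity 17 errors', @severity = 17
EXEC msdb.dbo.sp_add_notification @alert_name = 'Severity 17 errors',
     @operator_name = 'DBA Team', @notification_method = 1   -- 1 = e-mail

-- repeat the alert/notification pair for severities 18 through 25

I'm not sure this can do text messages though, which is why I'm asking about third-party software.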
We have a data import T-SQL job which runs every morning to extract data from a large Unix database. This worked fine on an old server which ran the same SQL Server version and service pack.
The T-SQL uses a linked server and simply truncates the local tables and re-imports the whole lot. I appreciate there are other ways to do this, but it works fine for a medium-sized business and is not mission critical.
However, since we moved this onto the new server, it throws an error message in the event log when the job finishes. The job actually completes and is reported as successful, but at the exact same time the following error message appears in the event log:
Error: 0, Severity: 19, State: 0 SqlDumpExceptionHandler: Process 14 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
INSERT INTO arista_caclient SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM caclient WHERE cl_datopn>=''01/01/1900'' OR cl_datopn is null')
INSERT INTO arista_camatgrp SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM camatgrp WHERE (mg_datcls>=''01/01/1900'' OR mg_datcls is null) AND (mg_datopn>=''01/01/1900'' OR mg_datopn is null)')
INSERT INTO arista_camatter SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM camatter WHERE (mt_estcmp>=''01/01/1900'' OR mt_estcmp is null)')
INSERT INTO arista_cabilhis SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM cabilhis WHERE (bh_bildat>=''01/01/1900'' OR bh_bildat is null) AND (bh_laspay>=''01/01/1900'' OR bh_laspay is null) AND (bh_rundat>=''01/01/1900'' OR bh_rundat is null) AND (bh_remdat>=''01/01/1900'' OR bh_remdat is null)')
INSERT INTO arista_cablaloc SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM cablaloc')
INSERT INTO arista_cafintrn SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM cafintrn WHERE (tr_trdate>=''01/01/1900'' OR tr_trdate is null)')
INSERT INTO arista_cadescrp SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM cadescrp')
INSERT INTO arista_catimtrn SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM catimtrn WHERE (tt_trndat>=''01/01/1900'' OR tt_trndat is null)')
INSERT INTO arista_cafeextn SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM cafeextn')
INSERT INTO arista_fmsaddr SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM fmsaddr')
INSERT INTO arista_cabilloi SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM cabilloi')
INSERT INTO arista_caferate SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM caferate')