SQL Server 2005 makes it possible to create managed (CLR) triggers. Inside a managed trigger I can create a new thread and process the trigger event in various ways. My question is: are there any reasons why I should NOT start new threads in database triggers? The following code shows how I could create the threads. Do you see any way this could cause errors or problems in SQL Server? My goal is to minimize the trigger's effect on overall database performance.
'This handles database updates of the AdventureWorks Person.Contact table.
<Microsoft.SqlServer.Server.SqlTrigger(Name:="UpdateContact_Trigger", Target:="Person.Contact", Event:="FOR UPDATE")> _
Public Shared Sub UpdateContact_Trigger()
    'Notify:
    SqlContext.Pipe.Send("Trigger FIRED")
    Dim triggContext As SqlTriggerContext = SqlContext.TriggerContext
    'Ensure that it was an update:
    If triggContext.TriggerAction = TriggerAction.Update Then
        'Open a connection:
        Using conn As New SqlConnection("context connection=true")
            conn.Open()
            'Fetch some information about the update:
            Using sqlComm As New SqlCommand
                sqlComm.Connection = conn
                sqlComm.CommandText = "SELECT ContactID, FirstName, LastName FROM INSERTED"
                Using rdr As SqlDataReader = sqlComm.ExecuteReader
                    If rdr.Read Then
                        'Process the data in a separate thread. The thread could send a message to MSMQ about the change
                        'or process it differently. The purpose of using a separate thread is to avoid a performance hit
                        'in the application / database operation.
                        Dim oData As New TriggerData("update", rdr.GetInt32(0).ToString, "contact")
                        Dim trd As New Threading.Thread(New Threading.ThreadStart(AddressOf oData.ProcessEvent))
                        trd.IsBackground = True
                        trd.Start()
                    End If
                End Using
            End Using
        End Using
    End If
End Sub
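For comparison, here is a minimal T-SQL sketch of one way to keep the work out of the trigger without spawning a CLR thread: hand the change off to a Service Broker queue and let an activated procedure process it asynchronously. The message type, contract, and service names below are assumptions and would have to exist already.

-- Sketch only: the message type, contract, and service names are assumed.
CREATE TRIGGER Person.UpdateContact_Queue
ON Person.Contact
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @dialog UNIQUEIDENTIFIER;
    DECLARE @body XML;

    -- Capture the updated rows as one XML message.
    SET @body = (SELECT ContactID, FirstName, LastName
                 FROM   inserted
                 FOR XML PATH('Contact'), ROOT('Contacts'), TYPE);

    BEGIN DIALOG CONVERSATION @dialog
        FROM SERVICE [//AdventureWorks/ContactChangeSender]      -- assumed name
        TO SERVICE   '//AdventureWorks/ContactChangeProcessor'   -- assumed name
        ON CONTRACT  [//AdventureWorks/ContactChangeContract]    -- assumed name
        WITH ENCRYPTION = OFF;

    -- The activated procedure on the target queue does the real work later,
    -- so the user's UPDATE is not held up by it.
    SEND ON CONVERSATION @dialog
        MESSAGE TYPE [//AdventureWorks/ContactChanged]           -- assumed name
        (@body);
END;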
Hi there, Values in my database need to be updated periodically. The code, on application startup, queries the database and stores the values in the Application collection. This avoids making a database call every time the values are needed (which improves performance). The drawback is that changes to the database values are not picked up by the code. How can I create a database trigger that will update the C# Application collection whenever a table value is updated?
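A trigger cannot call into the Application collection directly, but here is a minimal sketch of a common workaround (all table and key names below are made up): keep a single version row that a trigger bumps on every change, and have the application poll that one row cheaply (or watch it with SqlDependency / query notifications), refreshing its cached values only when the version has moved.

-- Sketch only: dbo.LookupValues and the 'LookupValues' key are assumed names.
CREATE TABLE dbo.CacheVersion
(
    CacheKey     varchar(50) NOT NULL PRIMARY KEY,
    VersionStamp rowversion  NOT NULL        -- advances automatically on update
);

INSERT INTO dbo.CacheVersion (CacheKey) VALUES ('LookupValues');
GO

CREATE TRIGGER dbo.trg_LookupValues_Touch
ON dbo.LookupValues
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Touching the row is enough; the rowversion column changes on any update.
    UPDATE dbo.CacheVersion
    SET    CacheKey = CacheKey
    WHERE  CacheKey = 'LookupValues';
END;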
On the ECASE table there is a trigger that gets the max value of the case_id column in ECASE for the given project, adds one to that value, and uses the result as the case_id for the inserted row.
When we insert a new record into the ECASE table, this trigger fires and fills in the case_id column value.
When I run with multiple threads, the transaction is rolled back because of the trigger. The reason is that, while getting the max case_id value for the project, a lock is taken on the project table.
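For illustration, here is a minimal sketch (column names other than project and case_id are made up) of one common pattern for making a MAX-plus-one trigger survive concurrent inserts: take the max with UPDLOCK and HOLDLOCK so that two sessions inserting for the same project serialize on that range instead of deadlocking or duplicating values, and number multiple inserted rows with ROW_NUMBER.

-- Sketch only: dbo.ECASE(project, case_id, descr) is an assumed table shape.
CREATE TRIGGER dbo.trg_ecase_assign_case_id
ON dbo.ECASE
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.ECASE (project, case_id, descr)
    SELECT i.project,
           ISNULL(x.max_case_id, 0)
             + ROW_NUMBER() OVER (PARTITION BY i.project ORDER BY (SELECT 1)),
           i.descr
    FROM   inserted AS i
    OUTER APPLY (SELECT MAX(e.case_id) AS max_case_id
                 FROM   dbo.ECASE AS e WITH (UPDLOCK, HOLDLOCK)
                 WHERE  e.project = i.project) AS x;
END;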
I'm trying to troubleshoot a SQL problem we are having, and I'm having difficulty identifying the guilty process.
Using NT Performance Monitor I am monitoring all active threads on the system, and I have noticed that one particular SQLSERVR thread (the number obviously changes with each server restart) is hogging 100% CPU.
Is it possible to find out what process a particular thread number relates to?
As far as I can tell, the SQL SPID (from Enterprise Manager) does not correlate to a SQL thread.
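As a starting point, here is a minimal sketch: master..sysprocesses exposes the operating-system thread ID in the kpid column, so the thread number from Performance Monitor can be mapped back to a SPID (substitute the thread ID you observed).

-- Sketch only: replace 1234 with the OS thread ID seen in Performance Monitor.
SELECT spid, kpid, status, loginame, hostname, dbid, cmd, cpu
FROM   master..sysprocesses
WHERE  kpid = 1234;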
I have an app that is critical to our business. It handles and synchronises several SQL Servers, checks integrity, etc. I need to make the app able to run a few things at once. Does anyone have any experience with this? Currently we use Delphi and ADO. I have been fiddling with DMO to get more performance, as I am not sure ADO is very quick for some of the tasks I need to do.
I suppose my main question *really* is: does ADO/DMO multi-thread, and has anyone tried it? If not, how do people do it?
http://technet.microsoft.com/en-us/library/ms187024.aspx http://sqlblogcasts.com/blogs/thepremiers/archive/2007/05/17/max-worker-threads-configuration-in-sql-server-2005.aspx and this is kind-of related: http://arstechnica.com/news.ars/post/20070529-microsoft-exec-next-version-of-windows-to-be-fundamentally-redesigned.html
How do you know how many threads are being used and how many of those are being shared? Or whether they are all even being used? Are there PerfMon stats for this?
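Besides PerfMon, on SQL Server 2005 the worker and scheduler DMVs give a rough picture; as a minimal sketch:

-- Sketch only: worker counts per scheduler (scheduler_id < 255 filters out
-- the hidden/internal schedulers).
SELECT scheduler_id,
       current_workers_count,   -- workers currently associated with the scheduler
       active_workers_count,    -- workers actively processing a task
       work_queue_count         -- tasks waiting for a worker
FROM   sys.dm_os_schedulers
WHERE  scheduler_id < 255;

-- Total workers (threads) currently alive in the instance.
SELECT COUNT(*) AS workers FROM sys.dm_os_workers;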
Everyone, I have a data warehouse that at the moment includes around 2,500 jobs. I am planning for a worst-case scenario and would like to increase the maximum number of SQL Server threads so that more jobs can execute simultaneously. Could this pose a problem, and if so, at what number of maximum threads? Thanks!
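For reference, a minimal sketch of how to inspect and change the setting ('max worker threads' is an advanced sp_configure option; the 1024 below is only an example value, not a recommendation):

-- Sketch only: inspect the current value first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max worker threads';      -- shows config_value and run_value

-- Example change (0 lets SQL Server 2005 size the pool automatically).
EXEC sp_configure 'max worker threads', 1024;
RECONFIGURE;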
I have been told that if you reduce your max worker threads it will enhance the ability to kill processes (spid###) which cannot otherwise be killed.
Recently we have been experiencing runaway processes. If you run these processes repeatedly, eventually they run without failure. We are investigating the reasons why this is happening; at this very moment I am sifting through the code to try to find an answer. Evidently the process hangs and cannot be killed, and we are then forced to bring down the server, which I wish not to do so often. If I reduce max worker threads from its current value (512) to a lower number, will this help? And if so, by how much should this number be decreased?
Hi, I have an application where I need to find out the following information regarding SQL Server:
Processors enabled:
  i. Threads allocated
  ii. Priority
Can somebody throw some light on this? How are the processors related to the running threads, and the priority is with respect to what? Thanks, Verve.
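As a minimal sketch, the instance-level pieces of that information are visible through sys.configurations (CPU affinity, worker threads, and the priority boost setting, which raises SQL Server's Windows scheduling priority relative to other processes on the box):

-- Sketch only: affinity, worker thread, and priority-related instance settings.
SELECT name, value, value_in_use
FROM   sys.configurations
WHERE  name IN ('affinity mask', 'max worker threads', 'priority boost', 'lightweight pooling');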
We've invested ourselves heavily in subscription-based reporting where the SSRS service is responsible for rendering and delivering reports (to email, file shares, printers, document repositories, etc). We figured that this would be a model that would allow for easier scaling. Users submit their reports and allow SSRS to deliver them in due time. The biggest part of our reporting is now done via subscriptions.
However, adding long-running reports and short-running (but very critical) reports to the same SSRS database has proven problematic. The long-running reports eventually make the short reports starve for CPU time.
Does anyone know if there is a way to implement a CPU resource allocation strategy so that short-running (but critical) reports will always have the ability to run? For example, it would be nice if certain user accounts or even report paths (RDLs) could have dedicated CPU resources (e.g. SSRS threads on which to run).
In other words, without creating additional ReportServer databases, I'd like a "pool" of threads for "severity 1" report subscriptions, a separate "pool" of threads for "severity 2", and so on. Then we'd be able to make sure that our critical subscriptions would get a chance to run. Sounds pretty straightforward, right? I can't figure out how to go about this...
Sometimes (on average once every two weeks) I am getting the following error message:
Error: 2008-01-15 06:51:02.91 Code: 0xC0047024 Source: DF FACT SC 1 DTS.Pipeline Description: The number of threads required for this pipeline is 98, which is more than the system limit of 64. The pipeline requires too many threads as configured. There are either too many asynchronous outputs, or EngineThreads property is set too high. Split the pipeline into multiple packages, or reduce the value of the EngineThreads property. End Error
This is one of the final stages of the ETL, so several other packages finish correctly before this package is run. When we restart the complete ETL, all other packages are automatically skipped, and when the ETL arrives at this package it runs without any problems.
So my questions are: What does this error message mean? Is this "64" an SSIS setting, a SQL Server setting, or a server setting? Can we increase this setting? The number of engine threads is set to 5 (the default); what is the relation between the EngineThreads setting and the system limit of 64? Can we safely reduce the EngineThreads property? What is causing the SSIS package to need 98 engine threads?
I am working on a project that will require the use of SQL Server 2005 Workgroup Edition. We were planning to use the version that comes with 5 CALs instead of the version licensed per processor, due to the enormous cost difference. Our customer, a government agency, is charged about $800 by Microsoft for the 5-user version, but the version licensed by processor is about $20,000. This software will potentially be installed in lots of locations, so cost is a big factor.
The application I am working on uses C#. I am being told that if I use threads, each thread will require a CAL. Is that correct? I cannot find anything in the licensing information that explicitly states this. It was my understanding that a CAL, based on user or device, could make as many connections to the DB as needed.
I want to select all threads even when COUNT(Messages.TitleID) is zero, i.e. when you start a new thread with no messages posted yet. How can I modify the query to do that?
SELECT Thread.ThreadTitle, COUNT(Messages.TitleID) AS NoOfMessage, Thread.ThreadID
FROM (Thread INNER JOIN Messages ON Thread.ThreadID = Messages.ThreadID)
GROUP BY Thread.ThreadTitle, Thread.ThreadID
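For reference, a minimal sketch of the usual fix, assuming the same table and column names: switch to a LEFT JOIN so threads without messages survive the join, and keep counting the column from the Messages side so empty threads count as zero.

-- Sketch only: COUNT(Messages.TitleID) ignores NULLs, so empty threads return 0.
SELECT Thread.ThreadTitle,
       COUNT(Messages.TitleID) AS NoOfMessage,
       Thread.ThreadID
FROM   Thread
       LEFT JOIN Messages ON Thread.ThreadID = Messages.ThreadID
GROUP BY Thread.ThreadTitle, Thread.ThreadID;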
Using osql to apply SPs in multiple threads. Hello, I got a weird problem when I was using osql to apply scripts to an MSDE database in multiple-thread mode. Sometimes 2 SPs were missing after the whole apply process, sometimes not, and it seems like only those two SPs hit the problem. No error appeared. Did anyone meet the same problem before? Or any possible solutions? Thank you very much!
Do I need to use specially synchronized code if I have multiple threads inserting, updating and reading rows to and from the same database? In this case, I know that no two threads will try to insert or update the exact same row in the DB; however, multiple threads might try to read the same row from the database.
Hi, I'm trying to stress test my web application, but under high load the queries that used to take 10-20 ms start taking 500-2000+ ms. To put it another way, when I run them single-threaded I can do about 43,000 a minute; when they run in parallel it drops to about 2,500 a minute.
What can I do about this?
There are several queries affected, but here is one example: update [user] with (ROWLOCK, XLOCK) set timestamp = getdate() where userid = 1
By the way, I'm running SQL Server 2005 SP1. The stress test is run on 3 machines in total (web, SQL and client); the client simulates 400 users, each clicking a page as soon as the last one is loaded, i.e. there will always be 400 page requests.
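As a hedged illustration of one thing worth testing: the XLOCK hint takes an exclusive lock that other readers under the default READ COMMITTED level must wait behind, so under 400 concurrent users those waits can stack up. If the hint is not strictly required, dropping it, or letting readers use row versioning instead of shared locks, may relieve the queueing (the database name below is a placeholder).

-- Sketch only: readers no longer block behind writers under READ COMMITTED.
-- The switch needs no other active connections in the database; test first.
ALTER DATABASE MyWebAppDb SET READ_COMMITTED_SNAPSHOT ON;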
I am experiencing excessive SSB thread blocking... the SQL error log is reporting LOTS of Resource Monitor messages about non-yielding threads (nothing meaningful can be surmised from them).
I am running on a 4-way 64-bit Windows 2003 box with 6 GB RAM!!!
The SSB architecture is a simple implementation... leveraging async triggers in 42 DBs (all on the same instance) that post (via a service) into a master DB queue, where a listener is pulling them off and applying them to a table (trying to avoid the excessive 1205s I was experiencing with the earlier sync trigger approach). Messages sit in the respective DBs' transmission queues, and draining of the queues is extremely SLOW!!!! I mean SLOW!!!
Eventually the SqlServer.exe process pegs ALL processors!!! I can only reboot the box to get connectivity back...
Has anyone had this experience!? (really hope not... but I need help)
I have completely cycled the SSB machinery (via disable/enable)... and have even stepped through enabling one DB at a time... but still very poor performance!!!
Anyone?
-mt
sp_who output here...
status      loginame             hostname   blk  dbname              cmd
BACKGROUND  sa                   .          16   NULL                RESOURCE MONITOR
BACKGROUND  sa                   .          .    NULL                LAZY WRITER
SUSPENDED   sa                   .          .    NULL                LOG WRITER
BACKGROUND  sa                   .          .    master              SIGNAL HANDLER
BACKGROUND  sa                   .          .    NULL                LOCK MONITOR
sleeping    sa                   .          .    master              TASK MANAGER
BACKGROUND  sa                   .          .    master              TRACE QUEUE TASK
sleeping    sa                   .          .    NULL                UNKNOWN TOKEN
BACKGROUND  sa                   .          .    master              BRKR TASK
BACKGROUND  sa                   .          .    master              TASK MANAGER
SUSPENDED   sa                   .          .    master              CHECKPOINT
sleeping    sa                   .          .    master              TASK MANAGER
sleeping    sa                   .          .    master              TASK MANAGER
BACKGROUND  sa                   .          16   ThompsonTractorD43  KILLED/ROLLBACK
sleeping    sa                   .          .    master              TASK MANAGER
BACKGROUND  sa                   .          .    master              KILLED/ROLLBACK
BACKGROUND  sa                   .          16   master              KILLED/ROLLBACK
sleeping    sa                   .          .    master              TASK MANAGER
BACKGROUND  sa                   .          .    master              BRKR TASK
BACKGROUND  sa                   .          16   master              BRKR TASK
sleeping    sa                   .          .    master              TASK MANAGER
sleeping    sa                   .          .    master              TASK MANAGER
sleeping    sa                   .          .    master              TASK MANAGER
sleeping    sa                   .          .    master              TASK MANAGER
BACKGROUND  sa                   .          16   YancyMachineryCat   KILLED/ROLLBACK
BACKGROUND  sa                   .          .    master              BRKR EVENT HNDLR
BACKGROUND  sa                   .          .    master              BRKR TASK
sleeping    NT AUTHORITY\SYSTEM  REFINERY1  .    msdb                AWAITING COMMAND
sleeping    NT AUTHORITY\SYSTEM  REFINERY1  .    msdb                AWAITING COMMAND
sleeping    NT AUTHORITY\SYSTEM  REFINERY1  .    msdb                AWAITING COMMAND
sleeping    NT AUTHORITY\SYSTEM  REFINERY1  .    msdb                AWAITING COMMAND
SUSPENDED   NT AUTHORITY\SYSTEM  REFINERY1  .    msdb                DELETE
sleeping    fastironweb          DETROIT    .    Cat_Lvl3            AWAITING COMMAND
sleeping    mike                 REFINERY1  .    master              AWAITING COMMAND
SUSPENDED   NT AUTHORITY\SYSTEM  REFINERY1  .    distribution        WAITFOR
sleeping    mike                 REFINERY1  .    Cat_Cfsc            AWAITING COMMAND
sleeping    mike                 REFINERY1  .    Cat_Cfsc            AWAITING COMMAND
sleeping    mike                 REFINERY1  .    Cat_Cfsc            AWAITING COMMAND
RUNNABLE    mike                 REFINERY1  .    Cat_Cfsc            SELECT INTO
sleeping    NT AUTHORITY\SYSTEM  REFINERY1  .    msdb                AWAITING COMMAND
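A minimal diagnostic sketch that may help narrow this down: in each sender database, sys.transmission_queue shows whether messages are stuck on the way to the master queue and records the last delivery error for each one.

-- Sketch only: run in one of the 42 sender databases.
SELECT TOP (20)
       to_service_name,
       enqueue_time,
       transmission_status      -- empty if no delivery error has been recorded
FROM   sys.transmission_queue
ORDER BY enqueue_time;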
That is a SqlException I got at ... at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(). Does anyone have an idea what that means? How do I cause it? How can I work around it?
Our company is in the retail business, thus, the window for processing cubes is very small during Christmas season (only 4 hours each day).
To speed things up, we have partitioned our cube at the monthly level so that, potentially, 12 threads can run simultaneously. However, when I looked at DTS, I am not so sure whether, or how, it can accomplish that task. Has anyone tried this before, or is anyone aware of a third-party tool that can do the trick?
I have been asked by developers whether there is any advantage in processing multiple clustering models simultaneously using AMO and multiple threads, as opposed to processing them one after another.
I have limited experience with Analysis Services, but based on my reading I don't see this method providing any advantage.
Does anyone have any recommendations or advice? The system is Enterprise Edition running on an x86 server with 2 dual-core processors and 4 GB of RAM. Would the answer change if the server were running the x64 versions of SQL Server and Windows?
I found a peculiar thing today while working with SQL Mobile in a multithreaded application (VS2005, application for Pocket PC 2003).
I created a class which has one SqlCeConnection object. Every time I call a function to insert/select/delete something from the local DB, I open the connection, execute the query and then close the connection again.
But when I'm calling a function from the DB class in thread 1 and in the meantime call a different function (from the same DB class, of course) in thread 2, things go wrong: when function 1 wants to close the connection, function 2 is still using it, and my application crashes with a native exception (0xC0000005: access violation).
I can see why the error is happening, but shouldn't there be a nice .NET handled exception instead of a native exception which grinds my app to a halt?
(A workaround I use now is to use multiple connection objects instead of one, but I thought I'd give this feedback anyway)
Recently I have been working on a multi-threaded solution based on SQL Server, and now I'm facing the following problem:
Suppose I have process No.1 (multi-threaded) inserting data into Table A, which has an auto-generated identity column, and process No.2 (also multi-threaded) retrieving data from Table A, generating some records and inserting the result into Table B. Both processes do batch processing (batch retrieving and batch writing), and they run in parallel.
Now, since process No.2 retrieves data sequentially by the identity of Table A, it finds that results are missing. This is because records with bigger identities are not necessarily committed earlier than those with smaller identities.
One direct solution is to add a flag field to Table A indicating whether a record has been processed by process No.2, and set the field each time a record is processed. But unfortunately the table structure is not supposed to be modified.
So is there any other good solutions for this problem? Thanks.
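Since Table A itself cannot change, one minimal sketch (table and column names are made up) is to add a small work-queue table beside it: process No.1 also enqueues each new identity value in the same transaction as the insert, and process No.2 dequeues with READPAST so it only ever sees committed keys, and concurrent readers skip each other's locked rows instead of missing them.

-- Sketch only: dbo.TableA_WorkQueue is an assumed helper table.
CREATE TABLE dbo.TableA_WorkQueue
(
    a_id     int      NOT NULL PRIMARY KEY,   -- identity value from Table A
    enqueued datetime NOT NULL DEFAULT GETDATE()
);
GO

-- Process No.2: atomically take a batch of pending keys.
DECLARE @batch TABLE (a_id int NOT NULL);

DELETE TOP (100)
FROM   dbo.TableA_WorkQueue WITH (ROWLOCK, READPAST)
OUTPUT deleted.a_id INTO @batch;

SELECT a_id FROM @batch;   -- these keys are committed and safe to process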
I am searching for information on achieving a performance improvement by spawning multiple threads in a single stored procedure, or rather within a single database connection. We have a batch process that updates around 200 tables, and each table update takes around 2 minutes. I am trying to optimize this by running the updates in parallel rather than sequentially; the tables are all mutually exclusive. I have written a stored procedure which updates these tables in a loop, but every update statement waits for the previous one to finish. I am calling this SP from a Java application. One crude way would be opening multiple connections to the database, each running a separate T-SQL statement, but that comes with a lot of overhead in opening connections. Is there any way I can explicitly force a T-SQL stored procedure to spawn a new thread for every UPDATE statement? And if I try to do the same from Java, is there a way I can run multiple threads, one per statement, over the same database connection?
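T-SQL itself has no way to spawn threads inside a procedure, but as a minimal sketch (the job names below are made up) the procedure can kick off pre-created SQL Server Agent jobs, each of which updates its own group of tables on its own connection; sp_start_job returns immediately, so the groups run in parallel.

-- Sketch only: each Agent job wraps the updates for one group of tables.
EXEC msdb.dbo.sp_start_job @job_name = N'BatchUpdate - group 01';
EXEC msdb.dbo.sp_start_job @job_name = N'BatchUpdate - group 02';
EXEC msdb.dbo.sp_start_job @job_name = N'BatchUpdate - group 03';
-- The caller can then poll msdb.dbo.sysjobactivity (or have each job set a
-- flag) to know when all groups have finished.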
We are executing a stored procedure multiple times at the same time from our application. In this case, a few processes execute successfully and a few fail with the error "50000: error executing the stored procedure", and if we run the same process again it executes successfully. Can MySQL not handle multiple threads at the same time?
Hello people :-) I'm doing some development work with Visual Studio 2005 and SQL Server 2000. My SQL DB is running on a Windows 2000 Server box in the office, and I'm doing the development on my XP Pro workstation. I've been trying to connect to the Win2000 box through VS, and although I can see the server and the DB, when I hit OK I get this error: "The SQL server specified by these connection properties does not support managed objects". What the heck does that mean? Any help would be great :-)
I am using distributed transactions, wherein I start a TransactionScope in the BLL, receive data from a Service Broker queue in the DAL, perform various actions in the BLL and DAL, and if everything is OK call TransactionScope.Commit().
I have a problem where, if I run multiple instances of the same app (each app creates one thread), the threads pop the same message and I get a deadlock upon commit.
My dequeue SP is as follows:
CREATE PROC [dbo].[queue_dequeue]
    @entryId int OUTPUT
AS
BEGIN
    DECLARE @conversationHandle UNIQUEIDENTIFIER;
    DECLARE @messageTypeName SYSNAME;
    DECLARE @conversationGroupId UNIQUEIDENTIFIER;

    GET CONVERSATION GROUP @conversationGroupId FROM ProcessingQueue;

    IF (@conversationGroupId IS NOT NULL)
    BEGIN
        RECEIVE TOP(1)
            @entryId = CONVERT(INT, [message_body]),
            @conversationHandle = [conversation_handle],
            @messageTypeName = [message_type_name]
        FROM ProcessingQueue
        WHERE conversation_group_id = @conversationGroupId
    END

    IF @messageTypeName IN ( 'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog',
                             'http://schemas.microsoft.com/SQL/ServiceBroker/Error' )
    BEGIN
        END CONVERSATION @conversationHandle;
    END
END
Can anyone explain to me why the threads are able to pop the same message? I thought Service Broker made sure this could not happen?
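For comparison, here is a minimal sketch of a dequeue that skips the separate GET CONVERSATION GROUP step and relies on a single RECEIVE. The locks RECEIVE takes on the conversation group only protect you while the surrounding transaction is still open, so whether the original procedure is safe depends on the TransactionScope actually being active on the connection when it runs; this is only an illustration, not a statement of where the deadlock comes from.

-- Sketch only: must be called inside the same transaction that processes the
-- message; RECEIVE locks the conversation group until that transaction ends,
-- so a competing reader gets a different message (or none), not this one.
CREATE PROC [dbo].[queue_dequeue_waitfor]
    @entryId int OUTPUT
AS
BEGIN
    DECLARE @conversationHandle UNIQUEIDENTIFIER;
    DECLARE @messageTypeName SYSNAME;

    WAITFOR
    (
        RECEIVE TOP(1)
            @entryId = CONVERT(INT, [message_body]),
            @conversationHandle = [conversation_handle],
            @messageTypeName = [message_type_name]
        FROM ProcessingQueue
    ), TIMEOUT 5000;

    IF @messageTypeName IN ( N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog',
                             N'http://schemas.microsoft.com/SQL/ServiceBroker/Error' )
    BEGIN
        END CONVERSATION @conversationHandle;
    END
END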
I have a SQL Server project in Visual Studio 2005 which deploys an assembly to SQL Server 2005 containing various stored procedures and user-defined functions. Is there any way to tell Visual Studio to drop/create the stored procedures in a schema other than dbo?
i.e. User.ChangePassword instead of dbo.ChangePassword.
In VS 2003 I used SQL-DMO (a COM object) to list all available SQL Servers. Is there a managed .NET component in SQL Server 2005 that can do that task? Thanks, Rainer.
Can anyone explain why, when I look at a table in Enterprise Manager (highlight the table, All Tasks, Manage Indexes), only 1 index appears, yet when I look at the same table in sysindexes it says there are 8 indexes? This is the SQL I executed: select object_name(id), indid from sysindexes where object_name(id) = 'tbh_matter_summ'
Is it possible that there is a problem with the database?
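As a minimal sketch of what usually explains the difference: sysindexes contains rows not only for true indexes but also for the heap (indid 0), text/image allocation (indid 255), and auto-created statistics, which Manage Indexes hides. Filtering them out usually reconciles the two counts.

-- Sketch only: keep only real (non-statistics, non-hypothetical) indexes.
SELECT object_name(id) AS table_name,
       name,
       indid
FROM   sysindexes
WHERE  object_name(id) = 'tbh_matter_summ'
  AND  indid BETWEEN 1 AND 254                              -- skip heap and text/image rows
  AND  INDEXPROPERTY(id, name, 'IsStatistics')   = 0
  AND  INDEXPROPERTY(id, name, 'IsHypothetical') = 0;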