I have some VB.NET code that starts a transaction and then executes a lot of queries one by one. Somehow, when I take out the transaction part, my queries get executed in around 10 minutes. With the transaction in place, it takes more than 30 minutes on one query and then I get a timeout.
I have checked sp_lock myprocessid and I've noticed there are a lot of exclusive locks on different objects. Using sp_who I could not see any deadlocks.
I even tried to set the isolation level to READ UNCOMMITTED and still have the same problem.
As I said, once I execute my queries without being in a transaction everything works great.
Can you help me figure out the problem?
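For anyone digging into the same symptoms: on SQL Server 2005 and later, sys.dm_tran_locks shows the locks a session holds (sp_lock gives similar output on SQL Server 2000). A minimal check against your own session:

SELECT resource_type, resource_database_id, resource_associated_entity_id,
       request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;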
How will the procedures be called inside the trigger: will the first procedure execute and then the second, or will they execute in parallel?
CREATE TRIGGER [dbo].[InsertDatFXActualStaging]
ON [dbo].[DatFXActualStaging] -- change this table to DatFXActualStaging
FOR INSERT
AS
BEGIN
    SET NOCOUNT ON;
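As far as the sequencing question goes: T-SQL statements in a trigger body always run sequentially, top to bottom, never in parallel. A minimal sketch of what that means inside a trigger body (the procedure names are hypothetical):

EXEC dbo.FirstProcedure;   -- runs to completion first
EXEC dbo.SecondProcedure;  -- only starts after the first one returns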
Hi, some of my queries are running too slow. It's taking as long as 30 seconds; earlier the same query was taking less than 5 seconds. I understand the DB has grown, but I do not know where to start with this query or what I should look into. It is on a production server. The DB size is 15 GB and unallocated is 9 GB; log space used is 4%. TIA.
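A reasonable starting point, as a sketch: turn on the per-query I/O and timing statistics, run the slow query, and compare logical reads and CPU time before and after any change:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- run the slow query here, then read the figures on the Messages tab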
Howdy. I have a table in my DB that has about 2 million records. The search times are taking 15-30 seconds depending on the number of records I am returning. Is this normal? The machine is NT 4 SP6a, dual PIII 866s with 1 GB of RAM on RAID 5 SCSI disk. This seems like a long time to me. What kind of performance should I expect? Any kind of tuning steps I can take?
I am very new to SQL Server, so I don't really understand what affects the speed of queries. I have the two queries below, which are nearly the same apart from one having a right join and the other not. They both return about 5000 records, and I am running this query from an Access database with an ODBC link to SQL Server. What I don't understand is that it takes about 8 seconds for the query with the right join to return the records and only about 4 seconds for the one without. What I'm after really is just some general advice on how to build fast queries, and any advice on the two queries below would be nice. Thanks
SELECT Employees.Name, Calls.CallDate, Calls.CallTime, Calls.Callername,
       Contacts.CompanyID, Contacts.ContactID, Calls.CallerNumber, Calls.CallerCompany,
       Calls.ActionTakenID, Calls.OperatorID, Calls.Confirmed, Calls.Charged, Calls.Notes,
       Company.CompanyName, Operators.Operatorname, Calls.CallID, Calls.ShortMessage
FROM (Contacts
      INNER JOIN Company ON Contacts.CompanyID = Company.CompanyID)
     INNER JOIN
     (Operators
      INNER JOIN (Employees RIGHT JOIN Calls ON Employees.EmployeesID = Calls.EmployeesID)
      ON Operators.ID = Calls.OperatorID)
     ON Contacts.ContactID = Calls.ContactID
WHERE Contacts.ContactID = 1442
ORDER BY Calls.CallDate DESC, Calls.CallTime DESC;
SELECT Employees.Name, Calls.CallDate, Calls.CallTime, Calls.Callername,
       Contacts.CompanyID, Contacts.ContactID, Calls.CallerNumber, Calls.CallerCompany,
       Calls.ActionTakenID, Calls.OperatorID, Calls.Confirmed, Calls.Charged, Calls.Notes,
       Company.CompanyName, Operators.Operatorname, Calls.CallID, Calls.ShortMessage
FROM (Contacts
      INNER JOIN Company ON Contacts.CompanyID = Company.CompanyID)
     INNER JOIN
     (Operators
      RIGHT JOIN (Employees RIGHT JOIN Calls ON Employees.EmployeesID = Calls.EmployeesID)
      ON Operators.ID = Calls.OperatorID)
     ON Contacts.ContactID = Calls.ContactID
WHERE Contacts.ContactID = 1442
ORDER BY Calls.CallDate DESC, Calls.CallTime DESC;
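One hedged, general suggestion for both queries above: make sure the join and filter columns are indexed. Since the filter is on ContactID and the sort is on CallDate/CallTime, an index along these lines (the name is made up) often helps:

CREATE INDEX IX_Calls_ContactID
ON Calls (ContactID, CallDate DESC, CallTime DESC);

Beyond that, a right join forces SQL Server to keep every Calls row even when no matching Employees row exists, which limits the join orders the optimizer can choose; that alone can plausibly account for the 4-second difference.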
I have a test page where I'm using SqlConnection and SqlCommand to update a simple database table (decrease a number). I'm trying to figure out how to make a number in the database table decrease by 1 each time a button is pressed. I know how to update the number to whatever I want, but I have no idea what the correct syntax is for putting variables inside the query, like "count - 1" for instance. The database table is called "friday" and the one and only column is called "Tickets". Here's the code behind the button:

protected void Button1_Click(object sender, EventArgs e)
{
    SqlConnection conn = new SqlConnection("Data Source=***;Initial Catalog=***;Persist Security Info=True;User ID=***;Password=***");
    conn.Open();
    int count = -1;
    SqlCommand cmd = new SqlCommand("select Tickets from friday", conn);
    count = (int)cmd.ExecuteScalar();
    if (count > 0)
    {
        // <------ Here I want to set Tickets like "current count - 1"
        string updateString = @"update friday set Tickets = 500";
        SqlCommand cmd2 = new SqlCommand(updateString);
        cmd2.Connection = conn;
        cmd2.ExecuteNonQuery();
    }
    else { }
    conn.Close();
}
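A minimal fix for the line marked above, assuming the single-column table described: let SQL Server do the decrement atomically instead of plugging a C# value back in, which also avoids a race between the read and the write:

update friday
set Tickets = Tickets - 1
where Tickets > 0

Use that as updateString; the WHERE clause stops the counter going negative, and ExecuteNonQuery returns the number of rows affected, so you can tell whether a ticket was actually taken.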
I have sql query to search for fields in a rather big view. If I execute the query in sql server enterprise manager, the results will be displayed in less than 6 seconds. However, if I execute it using asp.net, it will take very long (more than 2 minutes).
The query is a simple one like "SELECT * FROM myview WHERE name LIKE '%Microsoft%'". And the code I use to execute it in asp.net is
Dim dsRtn As DataSet
Dim objConnection As OleDbConnection
Try
    objConnection = GetOleDbConnection()
    objConnection.Open()
    Dim objDataAdapter As New OleDbDataAdapter(strSearch, objConnection)
    Dim objDataSet As New DataSet()
    objDataAdapter.Fill(objDataSet, strTableName)
    dsRtn = objDataSet
Catch ex As Exception
    dsRtn = Nothing
Finally
    If objConnection.State = ConnectionState.Open Then
        objConnection.Close()
    End If
End Try
Where strSearch is the sql search string.
I don't have any problem using such code for other queries.
Could somebody suggest the cause of the problem and how to solve it? Thanks!
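One guess worth ruling out: different session SET options between Enterprise Manager and the ADO.NET connection can produce different query plans for the same statement. You can compare what each client has in effect with:

DBCC USEROPTIONS; -- lists the SET options active on the current connection

Also note that a leading wildcard (LIKE '%Microsoft%') prevents any index seek on name, so the view is scanned end to end either way; the 6-second figure is probably already the better-case scan.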
I have a query where I am connecting to eight different tables using joins. Each table I join slows the execution down further. Even on my local server it takes nearly 2 to 3 minutes to execute the query. How can I increase the execution speed of my query?
Scenario 1: Sproc executed on the local server against local tables that took 40 seconds to run now takes 30 minutes to run.
- No blocking locks.
- Sometimes "NOP" in the command column when sp_who2 is run.
- Perfmon shows nothing out of the ordinary when looking at server resources (memory, processors, etc.); there have been NO configuration changes.
- Occasional lost packets (every 10th) with ping -t.
- I flushed the procedure cache and rebooted the server.

Scenario 2: Sproc executed on another server, accessing the tables on the Scenario 1 server via a server link, runs with no problems in 30 seconds.

SQL Server 2000 SP3a.
We have a quick query regarding SQL performance. We have SQL Server 2000 (32-bit) and SQL Server 2005 (64-bit) as two separate instances on a DB server. We were analysing the execution times for the same stored procedure on both instances: 1. through Remote Desktop on the actual DB server, and 2. through Query Analyser on my local machine. The results were as follows:

1. Through Remote Desktop on the actual DB server:

Iteration   SQL 2000 (secs)   SQL 2005 (secs)
1           28                5
2           27                3
3           27                3
4           40                3
5           38                3
Average     32                3

2. Through Query Analyser on my local machine:

Iteration   SQL 2000 (secs)   SQL 2005 (secs)
1           37                96
2           32                77
3           35                84
4           27                79
5           43                91
Average     35                85

Could you please shed some light on why case 2 is slow, and any suggestions to improve it? Thanks in advance!
I have a SQL Server 2005 Std. Ed. 64-bit installation. There is one instance supporting a single production database. I have a CLR udf. This udf uses the XMLDocument object to retrieve XML from a URL. When the CLR udf is executed, there seems to be an initial slow response time. Subsequent response times are very fast. If the CLR udf is not called for a few minutes and then called, the slowdown appears again.
Is there something happening behind the scenes with compilation or something like that which could cause this slowdown?
Hi, I have absolutely no knowledge of PHP or SQL. I moderate a phpBB forum at www.savingshelterpets.com. Our web host (SiteGround) has taken our site down temporarily because we are overloading the server. I have no idea how to fix the problem, so hopefully someone here can help me out!
PHP version 4.4.4 MySQL version 5.0.27-standard-log
Here's the info sent to me by SiteGround (I don't understand a word of it!):
quote:Upon further investigation, it turned out that the following queries in your account are slow and heavily consume server resources:
- Connecting to (local) server with SQL Authentication
- only 1 Instance of MSSQLSERVER
Simple queries (SELECT * FROM TableName) where the table has only a few records. Such a query may take up to 30 seconds or more to execute. This slowness is consistent for certain tables; other, much larger tables run queries fine.
If a different computer logs in to the same server, queries provide instantaneous results.
I was wondering if anyone can explain the positives and negatives of using a single stored procedure that contains one or more distinct queries. I know there are problems with dynamic SQL but I am not proficient enough to know whether this falls under that umbrella.
For clarification, what I am referring to is this: In a single stored procedure, I have a parameter called Query_ID that is used to identify which query in the sproc that I want to execute. Then from my ASP page, I simply pass the appropriate value for Query_ID. So:
IF @QUERY_ID = 1
BEGIN
    SELECT [whatever] FROM [tbl1] WHERE [conditions] GROUP BY [something] ORDER BY [somethingelse]
END
ELSE IF @QUERY_ID = 2
BEGIN
    SELECT [whatever] FROM [tbl2] WHERE [conditions] GROUP BY [something] ORDER BY [somethingelse]
END
END
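One hedged observation on this pattern: because all the branches live in one procedure, the plan cached on first execution is compiled for whichever Query_ID arrived first, and the other branches may end up with poor plans. A common workaround keeps the dispatcher but moves each query into its own procedure, so each gets its own cached plan (procedure names here are made up):

IF @QUERY_ID = 1
    EXEC dbo.Report_Query1;
ELSE IF @QUERY_ID = 2
    EXEC dbo.Report_Query2;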
We have been working with SSIS for a while and we have not found a solution or a reason for this. We have a master package that calls 10 packages in sequential order (as shown below). If we execute each one of the packages separately, they run in less than 2 minutes, but when we call them through the master package the execution time starts increasing as follows: Child 1 (2 min), Child 2 (3 min), Child 3 (4 min), Child 4 (6 min), Child 5 (7 min), and so on. The Execute Package task has ExecuteOutOfProcess = false (when we set it to True it takes even longer to execute; it was creating a dtsHost.exe process for each child that always remained in memory after the package finished executing). Can someone please provide a solution or a workaround for this? Any help would be appreciated.
I have a big problem with slow execution of stored procedure in SQL Server 2005 but I really don't understand the reason. I have a database with large table (about 400 million rows) and simple stored procedure to get data from that table (one select statement to select time and value columns).
Strange thing is that if I call that stored procedure from .net application (native SqlDataProvider) it takes about 6 seconds to execute but if I call the same procedure with the same parameters from within SQL Server Management Studio it takes only 25 milliseconds to execute!
I've noticed that from .net, procedure is called with binary data and in Management Studio sql script is executed so I've copied/pasted the script from Management Studio to .net code and again the same thing happens (6 seconds from .net and 25ms from Management Studio). I traced executions with SQL Profiler and everything seems to be identical for both applications except it takes much longer time for .net application.
Both SQL Server Management Studio and .net application are on the same machine and SQL Server is on another.
This is the query that when executed in Management Studio takes 25ms:
At first I thought that Management Studio somehow caches results but if I change parameters of stored procedure it always takes less than 30ms to execute. I really don't understand this. Please, help!
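A frequent culprit in exactly this pattern (seconds from .NET, milliseconds from Management Studio for the identical statement) is parameter sniffing combined with differing session options: Management Studio issues SET ARITHABORT ON by default while SqlClient does not, so the two clients get separate plan-cache entries and can sniff different parameter values. A quick, hedged test is to mimic the Management Studio setting on the application's connection (the procedure and parameter names below are placeholders):

SET ARITHABORT ON
EXEC dbo.GetTimeSeries @from = '20080101', @to = '20080201'

If the .NET path suddenly runs in 25 ms too, the fix is usually to address the sniffing itself, e.g. by copying parameters to local variables inside the procedure or adding OPTION (RECOMPILE) on SQL 2005.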
I'm working with a table with about 60 million records. This monster is growing every minute of the day as well, by 200,000 - 300,000 records/day. It's 11 columns wide, and has one index on a datetime column. My task is to create some custom reports based on three of these columns, including the datetime one.
The problem is response time. Any query executed on this table takes forever--anywhere between 30 seconds and 4 minutes. Queries such as this one below, as simple as it is, can take a minute or more:
select count(dt_date) as Searches
from SearchRecords
where datediff(day, getdate(), dt_date) = 0
As the table gets larger and larger, the response time is going to get worse and worse. Long story short, what are my options for getting query times down to just a few seconds with a table this big? So far the best I can come up with is to index any other appropriate columns (of which there is one for sure, maybe two).
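For the sample query above, a sketch of the usual rewrite: wrapping dt_date in datediff makes the predicate non-sargable, so the index on the datetime column is ignored and the whole table is scanned. Expressing the same filter as a range lets the index do the work:

declare @today datetime
set @today = dateadd(day, datediff(day, 0, getdate()), 0)  -- midnight today

select count(dt_date) as Searches
from SearchRecords
where dt_date >= @today
  and dt_date < dateadd(day, 1, @today)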
Hi All, I have a table that currently contains approx. 8 million records. I'm running a SELECT query against this table that in some circumstances is either very quick (i.e. results returned in Query Analyzer almost instantaneously) or very slow (i.e. 30 to 40 seconds to return results), and I'm trying to work out how I improve performance. Essentially the query I'm running is nothing more complex than:

SELECT TOP 1 * FROM Table1 WHERE tier = n ORDER BY member_id

[tier] is a smallint column with a non-clustered, non-unique index on it. [member_id] is a numeric column with a clustered, unique index on it.

When I supply a [tier] value of 1, it returns results instantaneously. I have no idea if this is meaningful, but the tier = 1 records were loaded first into the table, and comprise approximately 5 million records. When I supply a [tier] value of 2, the results take 30 to 40 seconds. Tier = 2 records were loaded second, and comprise approximately 3 million records.

I've tried running an execution plan, and while I'm no expert, it appears to me that the index on tier isn't being used, even if I use: tier = CAST(2 as SMALLINT)

I'm wondering if anyone can give me ANY advice on how to get any better performance out of this SELECT statement? Also, out of curiosity, can a disk defragment have a positive impact on SELECT query performance?

Any help very much appreciated!

Much warmth,
Murray
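A hedged suggestion for the query above: with the clustered index on member_id, SQL Server can satisfy the ORDER BY by scanning the clustered index in order and stopping at the first row matching the tier predicate. If the tier 1 rows sort first, that scan reads past roughly 5 million rows before finding a tier = 2 hit, which would match the symptoms. A composite index stores the rows in exactly the order the query wants (the index name is made up):

CREATE NONCLUSTERED INDEX IX_Table1_tier_member
ON Table1 (tier, member_id);

With this index, a seek lands directly on the first tier = 2 row already sorted by member_id, so TOP 1 should return immediately for any tier value.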
I have a parent package which executes 14 child packages in parallel, which on average take ~10 seconds each to complete when I execute the parent packege using BIDS or DTEXEC.
However, if I run the parent package using SQL Management Studio (Integration Services > Stored Packages > MSDB > Right Click > Run Package) each package takes in excess of 10 minutes to run, getting progressively slower as each package starts.
Surely the package is executing in exactly the same way as BIDS/DTEXEC, just with a different UI?
Hey. I have a problem and I think I know the answer, but I still want to confirm. We are using SQL 2000 and SSRS 2000. The problem is that we have custom reports which a customer can build and run, and I wonder how one can write sp's for that. The way it's written right now is a dynamic select clause, then a dynamic from, a dynamic where, and a dynamic group by, all appended together and run via an EXECUTE command. I know it's dynamic SQL, and execution plans and such will hurt me, but some of these reports take forever. Is there anything that can be done to speed these reports up? And if the select will be dynamic and the where will be dynamic, does it make sense to even use an sp? Is it ever going to use the same execution plan? When I run DBCC MEMORYSTATUS, the procedure cache takes up most of the memory. Does the use of dynamic SQL explain that?
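One thing that usually helps here, as a sketch: execute the dynamic statement through sp_executesql with real parameters rather than EXECUTE of a fully concatenated string. When only the literal values vary between runs, the server can cache and reuse one plan per statement shape instead of compiling a fresh plan every time (the table and column names below are made up):

DECLARE @sql nvarchar(4000)
SET @sql = N'SELECT SaleDate, Amount FROM dbo.Sales WHERE Region = @region'
EXEC sp_executesql @sql, N'@region varchar(16)', @region = 'West'

This would also explain the DBCC MEMORYSTATUS observation: every distinct concatenated string is cached as its own ad hoc plan, which bloats the procedure cache.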
Hi to all, probably I'm just doing something stupid, but I would like you to tell me so (if it is so) and point out the solution. Here's the thing: I have an sp in which I call another sp. The only problem is that the name of this inner sp is built dynamically and executed via sp_executesql:

create procedure major_sp
    @prm_outer_1 varchar(1),
    @prm_outer_2 varchar(2)
as
-- some coding
set @nvar_stmtStr = N'exec @int_exRetCode = test_sp_' + @prm_outer_1 + @prm_outer_2
set @nvar_stmtStr = @nvar_stmtStr + ' @prm_1, @prm_2, @prm_3, @prm_4 output'
set @nvar_prmStr = N'@prm_1 nvarchar(128), ' +
                   N'@prm_2 nvarchar(128), ' +
                   N'@prm_3 nvarchar(4000), ' +
                   N'@int_exRetCode int output, ' +
                   N'@prm_4 varchar(64) output'
exec sp_executesql @nvar_stmtStr,
                   @nvar_prmStr,
                   @prm_1,
                   @prm_2,
                   @prm_3,
                   @int_exRetCode = @int_exRetCode output,
                   @prm_4 = @prm_4 output

Now the issue: I have transactions inside test_sp_11, let's say, where the "11" is @prm_outer_1 + @prm_outer_2. These procedures exist in the database but are called dynamically depending on the parameters. When I call the specified sp directly, the rollback transaction works without any problem. Inside these test_sp_xx procedures there is a call to another sp (let's say inside_sp), and there is a transaction included. When it is called via major_sp, the rollback is not performed because of this error:

Server: Msg 6401, Level 16, State 1, Procedure inside_sp, Line 54
Cannot roll back transactio_bubu. No transaction or savepoint of that name was found.

The funniest part is that if there is no error inside, the commit works without any problem! The main question (because I'm almost sure that this is the issue): is it possible to have a transaction inside an sp called dynamically via sp_executesql? Is it OK to do that?

Thanks in advance, Matik
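For reference, the error message is about a named savepoint. A savepoint created with SAVE TRANSACTION is only visible inside the transaction that created it, so if @@TRANCOUNT is back to zero by the time the rollback runs, the name no longer exists. A minimal sketch of the pattern inside_sp appears to be using (this is a guess from the message text):

BEGIN TRANSACTION
SAVE TRANSACTION transactio_bubu          -- named savepoint
-- ... work ...
IF @@ERROR <> 0
    ROLLBACK TRANSACTION transactio_bubu  -- undoes work back to the savepoint only
COMMIT TRANSACTION

Printing @@TRANCOUNT at the top of inside_sp in both call paths (direct vs. via major_sp) would confirm where the transaction context differs.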
I have a trigger which requires dropping a member from a role and includes user transactions. However you cannot call sp_droprolemember from within a user transaction. Looking at the code for the procedure gives me a line
I can find no documentation on SetRoleMember or %%Owner, and attempts to run this line as an EXEC statement fail with "syntax error near '%'". Nor can I find any way of dropping a member from a role using T-SQL or DDL constructs.
How do I drop a member from a role using T-SQL code and avoiding sp_droprolemember? And where do I find information about the %%Owner construct?
I currently have a large table (35 million rows, over 80GB). I have one varchar(max) column on the table that is used in the fulltext index.
To query the complete index is fast, for example:
SELECT 'ipod', COUNT(*)
FROM CONTAINSTABLE(MyDB.dbo.Contents, [Body], 'ipod') CT
This took 70 seconds (which I can live with). However, I seldom run queries like this, most are more like:
SELECT 'ipod', COUNT(*)
FROM CONTAINSTABLE(MyDB.dbo.Contents, [Body], 'ipod') CT
JOIN Pages ITP ON ITP.PageID = CT.[Key]
JOIN Feeds ITF ON ITP.IPID = ITF.IPID
JOIN Buyers ITB ON ITB.IBID = ITF.IBID
WHERE ITB.ID IN (1342,246)
These queries are much slower (this example took 17 minutes). I understand that FT searches the index and returns all rows that match the query to SQL. SQL then performs the joins and counts only the correct results. (Correct me if I'm wrong here).
One solution I've seen for this is to put data or "tags" into the FT column, so my Body column would become something like:
'{ID:1342}' + [Body]
That sounds like a very good idea. I could then change the 2nd query above to be:
SELECT 'ipod', COUNT(*)
FROM CONTAINSTABLE(MyDB.dbo.Contents, [Body], '("ID:1342" OR "ID:246") AND "ipod"') CT
That all works well until I want to select 1000 different IDs, because the FT query will become very long and complex. Also, I'm only including one column (ID) in this example, but I have about 7 or 8 columns that I would need to include in these "tags". Querying multiple columns becomes very complex quickly, and no doubt I will hit a query limit at some point.
If anyone has any other suggestions on the above I'd love to hear them. Another thought I'm having is to partition the table. I can find very little online about how FT behaves on partitioned tables; I fear it behaves exactly the same. What I'd like to think is that I could partition the table on an ID, say 100 per partition or something, and then fulltext would only search the relevant partitions. If it behaves like this it may work. If no one knows then I'll give it a go, but this will take me a while due to the table size, so I'm hoping one of you clever lot knows!
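One more pattern that may be worth benchmarking before partitioning, as a sketch using the table names from the queries above: materialise the qualifying keys into a temp table first, so the three-table join runs once against ordinary indexes and the fulltext result is joined against a narrow key set:

-- build the narrow key set once
SELECT ITP.PageID
INTO #QualifyingPages
FROM Pages ITP
JOIN Feeds ITF ON ITP.IPID = ITF.IPID
JOIN Buyers ITB ON ITB.IBID = ITF.IBID
WHERE ITB.ID IN (1342, 246)

-- then count only the fulltext hits that fall in that set
SELECT 'ipod', COUNT(*)
FROM CONTAINSTABLE(MyDB.dbo.Contents, [Body], 'ipod') CT
JOIN #QualifyingPages QP ON QP.PageID = CT.[Key]

Whether this beats the tag approach depends on how selective each side is, but it scales to 1000 IDs without the query text growing.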
Please tell me where I can find good material on interpreting execution plans. How do I compare two plans for queries written in two different ways that give the same output?
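As a starting point for comparing two equivalent queries side by side, you can have the server print each plan as text instead of running the query:

SET SHOWPLAN_TEXT ON
GO
SELECT name FROM sysobjects WHERE type = 'U'  -- put either query here
GO
SET SHOWPLAN_TEXT OFF
GO

Running both queries with SET STATISTICS IO ON is the other common check: whichever form does fewer logical reads is usually the one to keep.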
Dear friends, I have a problem with a simple select statement and I don't know why it is happening. I have 2 tables, Fees and FeesDataRoles. Fees holds all the fees, and FeesDataRoles is a middle table between Fees and the Roles table, so each Fee can have multiple Roles and a Role can have many Fees. Now I have a select statement:

Select *
From Fees
Inner Join FeesDataRoles ON Fees.FeeID = FeesDataRoles.FeeID
Where (FeesDataRoles.DataRoleID = @DataRoleID)
  AND (FeesDataRoles.RecordStatus = 1)
  AND (FeesDataRoles.ValidFrom >= getdate())
  AND (FeesDataRoles.ValidTo <= getdate() OR FeesDataRoles.ValidTo is null)

Now, it shouldn't take that long to execute this procedure, but surprisingly, sometimes when I insert a value into the table and then execute this stored procedure, it does not show the data just added. Very strange! I ran the procedure 5 times after inserting an item, and nearly 1 run out of 5 does not return the right result (it does not include the recently inserted rows). Anyone have any idea? I used the Tuning Advisor; no suggestion. I changed the clustered index in FeesDataRoles from FeesDataRoleID (the primary key of the table) to DataRoleID to increase performance, and it still happens sometimes. Is my Where clause so costly that it causes this problem? Please help. I really appreciate your help. Regards, Mehdi
2 Execute SQL tasks, one For Loop container, 2 Data Flow tasks, 1 Foreach Loop container, and 1 FTP task. The Data Flow tasks have 1 OLE DB source, 1 flat file source, 1 Row Count transformation, 1 Recordset destination, and 1 OLE DB destination.
When I load the package into BIDS it takes 125 MB of memory, and then everything is slow: the Properties panel slides in and out slowly, the objects in the package are not painted properly, and making changes and running takes a lot of time.
Am I doing anything wrong here? Why is it consuming so much of memory?
When I execute the following set of statements, only 8 gets inserted into the table instead of 6 and 8.
Create Table BPTest (id int)
Declare @Id Int
Set @Id = 0
While (@Id < 10)
Begin
    begin tran
    Insert into BPTest values (@Id)
    if (@Id > 5)
    begin
        if (@Id % 2 = 0)
        begin
            print 'true'
            print @Id
            commit tran
        end
        else
        begin
            print 'false'
            print @Id
            rollback tran
        end
    end
    Set @Id = @Id + 1
End
Select * from BPTest
drop table BPTest
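The result comes from how nested transactions behave: COMMIT TRAN on an inner (nested) transaction only decrements @@TRANCOUNT and makes nothing durable, while ROLLBACK TRAN always rolls back to the outermost BEGIN TRAN. In the loop above, iterations 0-5 each open a transaction with no matching commit, so the nesting level climbs; the COMMIT at @Id = 6 merely pops one level, and the ROLLBACK at @Id = 7 then throws away everything so far, including row 6. Only at @Id = 8, with @@TRANCOUNT back at zero, does BEGIN TRAN / COMMIT form a real, durable transaction; row 9 is rolled back again. A minimal illustration:

begin tran       -- @@TRANCOUNT = 1
begin tran       -- @@TRANCOUNT = 2
commit tran      -- @@TRANCOUNT = 1; nothing is durable yet
rollback tran    -- @@TRANCOUNT = 0; undoes ALL work, even the 'committed' inner tran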
I'm using transactions with my SqlConnection ( sqlConnection.BeginTransaction() ).
I do an insert into my User table, and then subsequently use the generated identity (via @@identity) to make an insert into another table. The foreign key relationship is dependent on the generated identity. For some reason, with transactions, I can't retrieve the insert identity (it always returns 1). However, I need the inserts to be under a transaction so that one cannot succeed while the other fails. Does anyone know a way around this problem?
So here's a simplified explanation of what I'm trying to do:
Begin Transaction
Insert into the User table
Retrieve the inserted identity (userID) via @@Identity
Insert into the UserContact table, using userID as the foreign key
Commit Transaction
Because of the transaction, userID is 1, therefore an insert cannot be made into the UserContact table because of the foreign key constraint. I need this to be a transaction in case one or the other fails. Any ideas??
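A hedged sketch of the usual fix: read the identity with SCOPE_IDENTITY() in the same batch as the INSERT, rather than @@IDENTITY, which reports the last identity generated on the connection in any scope (a trigger on the User table, for example, can overwrite it). The column names below are made up:

BEGIN TRANSACTION
INSERT INTO [User] (UserName) VALUES ('jdoe')
DECLARE @userID int
SET @userID = SCOPE_IDENTITY()            -- identity from this scope only
INSERT INTO UserContact (UserID, Phone) VALUES (@userID, '555-0100')
COMMIT TRANSACTION

If the value still comes back as 1, it is also worth checking whether the client code is reading the return value of ExecuteNonQuery (which is the rows-affected count) instead of selecting the identity.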
Is it normal practice to check @@ERROR after a SELECT statement that retrieves data from a table, or should we only check @@ERROR after a DELETE/INSERT/UPDATE type of statement? The SQL statement is inside a transaction.
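For context, the common SQL 2000-era convention is to check @@ERROR after every statement whose failure should abort the transaction, remembering that @@ERROR is reset by each successive statement. A minimal sketch (the table is hypothetical):

BEGIN TRANSACTION
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION
    RETURN
END
COMMIT TRANSACTION

In practice many errors on a plain SELECT abort the batch anyway, so many shops only check after INSERT/UPDATE/DELETE; on SQL 2005 and later, TRY...CATCH replaces this pattern entirely.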