I have a SQL Server 2005 Std. Ed. 64-bit installation. There is one instance supporting a single production database. I have a CLR UDF. This UDF uses the XmlDocument object to retrieve XML from a URL. When the CLR UDF is executed, there seems to be an initial slow response time; subsequent response times are very fast. If the CLR UDF is not called for a few minutes and then called again, the slowdown appears again.
Is there something happening behind the scenes with compilation or something like that which could cause this slowdown?
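A diagnostic sketch that may help here (SQL Server 2005, assuming permission to read the DMV): besides JIT compilation on first use, the AppDomain that hosts the assembly can be unloaded after a period of inactivity or under memory pressure, and the next call pays the cost of reloading it.

-- shows the CLR AppDomains currently hosted by the instance; a creation_time that is
-- always "just now" after a slow call suggests the domain is being unloaded while idle
SELECT appdomain_name, creation_time
FROM sys.dm_clr_appdomains;

-- the SQL Server error log also records messages when an AppDomain is created or unloaded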
Hi, in a .NET application there is a link that brings up an SSRS report. I have noticed that the first time this report is requested (i.e. the application has just been opened and the report button is clicked), it takes a while for the report to appear on the screen. But if the report is requested again (a second time or more), it only takes a few moments to appear. So it seems that only the first request for the report takes a long time. Is there a way to reduce this initial load time for the report? Thanks
Each day, the first user who launches our RS reports always gets a long wait time. Subsequent report launches are normal. Does anyone know what is going on? If yes, what is the remedy?
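To narrow down where that first-run time goes, the ReportServer catalog database keeps an execution log with a per-run breakdown. A hedged sketch, assuming the default catalog database name ReportServer and the RS 2000/2005 ExecutionLog layout (column names can differ between versions):

-- most recent report executions, with time (in milliseconds) split into
-- data retrieval, processing and rendering
SELECT TOP 20
       c.Name,
       e.TimeStart,
       e.TimeDataRetrieval,
       e.TimeProcessing,
       e.TimeRendering,
       e.Status
FROM ReportServer.dbo.ExecutionLog AS e
JOIN ReportServer.dbo.[Catalog] AS c ON c.ItemID = e.ReportID
ORDER BY e.TimeStart DESC;

If all three time columns are small even for the slow first run, the wait is likely the report server application itself spinning up (for example after an overnight recycle), which this log does not capture.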
Hi, I use a remote SQL Server Express instance and I see some strange behavior: the first connection is really slow and I don't know how to fix that. I read some posts about this topic but I didn't find the right solution. Is there a way to "keep alive" the connection between my IIS server and the SQL Server one? I checked the auto-close property and it is set to false. Any help? Stan
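For what it's worth, databases on SQL Server Express default to AUTO_CLOSE ON, which produces exactly this first-connection delay, so it may be worth confirming the setting on the server itself rather than in a UI. A small sketch, with YourDb standing in for the real database name:

-- 1 = auto-close is on, 0 = off
SELECT DATABASEPROPERTYEX('YourDb', 'IsAutoClose') AS IsAutoClose;

-- keep the database open between connections
ALTER DATABASE [YourDb] SET AUTO_CLOSE OFF;

Beyond that, keeping at least one pooled connection open on the IIS side (connection pooling with a non-zero minimum pool size) is the usual way to avoid the cold start.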
I have a SQL query that searches for fields in a rather big view. If I execute the query in SQL Server Enterprise Manager, the results are displayed in less than 6 seconds. However, if I execute it from ASP.NET, it takes very long (more than 2 minutes).
The query is a simple one like "SELECT * FROM myview WHERE name LIKE '%Microsoft%'". And the code I use to execute it in ASP.NET is:
Dim dsRtn As DataSet
Dim objConnection As OleDbConnection
Try
    objConnection = GetOleDbConnection()
    objConnection.Open()
    Dim objDataAdapter As New OleDbDataAdapter(strSearch, objConnection)
    Dim objDataSet As New DataSet()
    objDataAdapter.Fill(objDataSet, strTableName)
    dsRtn = objDataSet
Catch ex As Exception
    dsRtn = Nothing
Finally
    If objConnection.State = ConnectionState.Open Then
        objConnection.Close()
    End If
End Try
Where strSearch is the sql search string.
I don't have any problem using such code for other queries.
Could somebody suggest the cause of the problem and how to solve it? Thanks!
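One thing worth ruling out is that the two clients run with different session SET options (ADO/OLE DB and the graphical tools have different defaults, ARITHABORT in particular), which can lead to different plans for the same statement. A hedged way to compare them:

-- run this from Query Analyzer / Enterprise Manager, and also execute it over the
-- ASP.NET connection (e.g. fill a DataSet with its result), then compare the two lists
DBCC USEROPTIONS;

Also note that LIKE '%Microsoft%' has a leading wildcard, so the view has to be scanned either way; the difference between 6 seconds and 2 minutes is therefore more likely to be plan- or option-related than the query text itself.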
I have a query where I am connecting to eight different tables using joins. Each time I join another table, execution gets slower. Even on my local server it is taking nearly 2 to 3 minutes to execute the query. How can I increase the speed of execution of my query?
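It is hard to be specific without the schema, but the usual first step is to make sure every join key is indexed and then look at the execution plan to see which table contributes the bulk of the work. A generic sketch with made-up table and column names:

-- joins run far faster when the joining columns are indexed
CREATE NONCLUSTERED INDEX IX_OrderLines_OrderID
    ON dbo.OrderLines (OrderID);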
Scenario 1: A sproc executed on the local server against local tables that took 40 seconds to run now takes 30 minutes to run.
- No blocking locks.
- Sometimes "NOP" appears in the command column when sp_who2 is run.
- Perfmon shows nothing out of the ordinary when looking at server resources (memory, processors, etc.), and there have been NO configuration changes.
- Occasional lost packets (every 10th) with ping -t.
- I flushed the procedure cache and rebooted the server.

Scenario 2: A sproc executed on another server that accesses the tables on the Scenario 1 server via a linked server runs with no problems in 30 seconds.

SQL Server 2000 SP3a.
We have a quick query regarding SQL performance. We have SQL Server 2000 (32-bit) and SQL Server 2005 (64-bit) as two separate instances on a DB server. We were analysing the execution times for the same stored procedure on both instances:

1. Through Remote Desktop on the actual DB server
2. Through Query Analyser on my local machine

The results were as follows:

1. Through Remote Desktop on the actual DB server

Iteration   SP Execution Time (in secs)
            SQL 2000    SQL 2005
1           28          5
2           27          3
3           27          3
4           40          3
5           38          3
Average     32          3

2. Through Query Analyser on my local machine

Iteration   SP Execution Time (in secs)
            SQL 2000    SQL 2005
1           37          96
2           32          77
3           35          84
4           27          79
5           43          91
Average     35          85

Could you please shed some light on why case 2 is slow, and any suggestions to improve it? Thanks in advance!
We have been working with SSIS for a while and we have not found a solution or a reason for this. We have a master package that calls 10 packages in sequential order (as shown below). If we execute each one of the packages separately, they run in less than 2 minutes, but when we call them through the master package the execution time starts increasing as follows: Child 1 (2 min), Child 2 (3 min), Child 3 (4 min), Child 4 (6 min), Child 5 (7 min), and so on. The Execute Package task has ExecuteOutOfProcess = false (when we set it to true it takes even longer to execute; it was creating a dtsHost.exe process for each child that always remained in memory after the package finished executing). Can someone please provide a solution or a workaround for this? Any help would be appreciated.
I have a big problem with slow execution of a stored procedure in SQL Server 2005, and I really don't understand the reason. I have a database with a large table (about 400 million rows) and a simple stored procedure to get data from that table (one SELECT statement returning the time and value columns).
The strange thing is that if I call that stored procedure from the .NET application (using the native SQL Server data provider) it takes about 6 seconds to execute, but if I call the same procedure with the same parameters from within SQL Server Management Studio it takes only 25 milliseconds!
I've noticed that from .NET the procedure is called with binary data, while in Management Studio a SQL script is executed, so I copied/pasted the script from Management Studio into the .NET code and the same thing happens again (6 seconds from .NET and 25 ms from Management Studio). I traced the executions with SQL Profiler and everything seems to be identical for both applications, except that it takes much longer for the .NET application.
Both SQL Server Management Studio and the .NET application are on the same machine, and SQL Server is on another.
This is the query that takes 25 ms when executed in Management Studio:
At first I thought that Management Studio somehow caches results, but even if I change the parameters of the stored procedure it always takes less than 30 ms to execute. I really don't understand this. Please help!
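This pattern (fast in Management Studio, slow from SqlClient with the same text and parameters) very often comes down to the two sessions using different SET options, ARITHABORT in particular, so they do not share a cached plan, and the .NET session ends up with a plan compiled for unrepresentative parameter values (parameter sniffing). A hedged way to compare the two sessions on SQL Server 2005:

-- run while both the SSMS session and the .NET application are connected,
-- then compare the option columns of the two sessions
SELECT session_id, program_name, arithabort, ansi_nulls, quoted_identifier
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;

If the options match and the gap remains, adding OPTION (RECOMPILE) to the SELECT inside the procedure is a quick way to test whether a stale, badly parameterized plan is to blame.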
Hi All,

I have a table that currently contains approx. 8 million records. I'm running a SELECT query against this table that in some circumstances is either very quick (i.e. results returned in Query Analyzer almost instantaneously) or very slow (i.e. 30 to 40 seconds to return results), and I'm trying to work out how I improve performance. Essentially the query I'm running is nothing more complex than:

SELECT TOP 1 * FROM Table1 WHERE tier=n ORDER BY member_id

[tier] is a smallint column with a non-clustered, non-unique index on it. [member_id] is a numeric column with a clustered, unique index on it.

When I supply a [tier] value of 1, it returns results instantaneously. I have no idea if this is meaningful, but the tier = 1 records were loaded first into the table and comprise approximately 5 million records. When I supply a [tier] value of 2, the results take 30 to 40 seconds. The tier = 2 records were loaded second and comprise approximately 3 million records.

I've tried running an execution plan, and while I'm no expert, it appears to me that the index on tier isn't being used, even if I use tier = CAST(2 AS SMALLINT). I'm wondering if anyone can give me ANY advice on how to get any better performance out of this SELECT statement? Also, out of curiosity, can a disk defragment have a positive impact on SELECT query performance?

Any help very much appreciated!

Much warmth,
Murray
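For what it's worth, with only a single-column index on [tier], satisfying ORDER BY member_id typically makes the optimizer scan the clustered index in member_id order until it meets the first tier = 2 row, which means wading through the ~5 million tier = 1 rows first. A hedged sketch of a composite index that lets it seek straight to the answer (the index name is made up):

-- tier first (the equality predicate), then member_id (the ORDER BY),
-- so TOP 1 can be satisfied with a single seek
CREATE NONCLUSTERED INDEX IX_Table1_tier_member_id
    ON Table1 (tier, member_id);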
I have some VB.NET code that starts a transaction and after that executes a lot of queries one by one. Somehow, when I take out the transaction part, my queries get executed in around 10 minutes. With the transaction in place it takes more than 30 minutes on one query and then I get a timeout. I have checked sp_lock for my process ID and noticed there are a lot of exclusive locks on different objects. Using sp_who I could not see any deadlocks. I even tried to set the isolation level to READ UNCOMMITTED and still have the same problem. As I said, once I execute my queries without being in a transaction everything works great. Can you help me find the problem?
I have a parent package which executes 14 child packages in parallel; on average they take ~10 seconds each to complete when I execute the parent package using BIDS or DTEXEC.
However, if I run the parent package using SQL Management Studio (Integration Services > Stored Packages > MSDB > Right Click > Run Package) each package takes in excess of 10 minutes to run, getting progressively slower as each package starts.
Surely the package is executing in exactly the same way as under BIDS/DTEXEC, just with a different UI?
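Not necessarily: as far as I know, the right-click Run Package option launches the package in a local Execute Package Utility process on the machine where Management Studio is running, so the environment can differ from the BIDS/DTEXEC runs. If the intent is to run the stored package on the server itself, one hedged workaround (xp_cmdshell is disabled by default on SQL Server 2005, and the package and server names here are hypothetical) is:

-- runs dtexec on the database server rather than on the SSMS client machine
EXEC master.dbo.xp_cmdshell 'dtexec /SQL "ParentPackage" /SERVER "YourServer"';

Scheduling the package as a SQL Server Agent job step of type "SQL Server Integration Services Package" achieves the same thing without enabling xp_cmdshell.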
Hey. I've a problem and I think I know the answer, but I still want to confirm. We are using SQL 2000 and SSRS 2000. The problem is, we have custom reports which a customer can build and run, and I wonder how one can write sp's for that. The way it's written right now is a dynamic SELECT clause, then a dynamic FROM, a dynamic WHERE, and a dynamic GROUP BY, all appended together and run with an EXECUTE command. I know it's dynamic SQL, and execution plans and such will hurt me, but some of these reports take forever. Is there anything that can be done to speed these reports up? And if the SELECT will be dynamic and the WHERE will be dynamic, does it even make sense to use an sp? Is it ever going to reuse the same execution plan? When I run DBCC MEMORYSTATUS, the procedure cache takes up most of the memory. Does the use of dynamic SQL explain that?
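Dynamic SQL built by concatenation and run with EXECUTE generally compiles (and caches) a new plan for every distinct string, which would be consistent with the procedure cache dominating DBCC MEMORYSTATUS. Where the report structure allows it, parameterising the dynamic statement with sp_executesql at least lets identical query shapes reuse a plan. A small sketch with made-up table, column and parameter names:

DECLARE @sql nvarchar(4000);

-- the SELECT/FROM/WHERE pieces are still assembled dynamically,
-- but the filter values are passed as parameters
SET @sql = N'SELECT CustomerName, OrderTotal
             FROM dbo.Orders
             WHERE Region = @Region AND OrderDate >= @FromDate';

EXEC sp_executesql
     @sql,
     N'@Region varchar(20), @FromDate datetime',
     @Region = 'West',
     @FromDate = '20000101';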
Dear friends,

I have a problem with a simple select statement and I don't know why it is happening. I have 2 tables, Fees and FeesDataRoles. Fees holds all the fees, and FeesDataRoles is a middle table between Fees and Roles, so each fee can have multiple roles and a role can have many fees. Now I have a select statement:

Select *
From Fees
Inner Join FeesDataRoles ON Fees.FeeID = FeesDataRoles.FeeID
Where (FeesDataRoles.DataRoleID = @DataRoleID)
  AND (FeesDataRoles.RecordStatus = 1)
  AND (FeesDataRoles.ValidFrom >= getdate())
  AND (FeesDataRoles.ValidTo <= getdate() OR FeesDataRoles.ValidTo is null)

It shouldn't take that long to execute this procedure, but surprisingly, sometimes when I insert a value into the table and then execute this stored procedure it does not show the data just added. Very strange! I ran the procedure 5 times after inserting an item, and roughly 1 out of 5 runs does not return the right result (it does not include the recently inserted rows). Anyone have any idea?

I used the Tuning Advisor, no suggestion. I changed the clustered index in FeesDataRoles from FeesDataRoleID (the primary key of the table) to DataRoleID to increase the performance, and it still happens sometimes. Is my Where clause so costly that it causes this problem?

Please help. I really appreciate your help.

Regards,
Mehdi
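As an aside, and purely as an assumption about the intended logic: when the goal is "rows whose validity window contains the current date", the two date comparisons are usually written the other way around from the statement above. A sketch under that assumption (not necessarily what this schema requires):

Select *
From Fees
Inner Join FeesDataRoles ON Fees.FeeID = FeesDataRoles.FeeID
Where (FeesDataRoles.DataRoleID = @DataRoleID)
  AND (FeesDataRoles.RecordStatus = 1)
  AND (FeesDataRoles.ValidFrom <= getdate())                                   -- already in effect
  AND (FeesDataRoles.ValidTo >= getdate() OR FeesDataRoles.ValidTo is null)    -- not yet expired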
2 Execute SQL tasks, one loop container, 2 Data Flow tasks, 1 Foreach Loop container, 1 FTP task. The data flow tasks have 1 OLE DB source, 1 flat file source, 1 Row Count transformation, 1 recordset destination and 1 OLE DB destination.
When I load the package into BIDS it takes 125 MB of memory and then everything is slow: the Properties panel slides in and out slowly, and the objects in the package are not painted properly. Making changes and running the package takes a lot of time.
Am I doing anything wrong here? Why is it consuming so much memory?
After moving off the VS debugger and into Management Studio to exercise our SQLCLR sp, we notice that the 2nd execution gets an error suggesting that our static SqlCommand object is being reused from the 1st execution (of the sp under Management Studio). If this is expected behavior, we have no problem limiting our statics to only completely reusable objects, but we would first like to know if this is expected. Is the fact that the debugger doesn't show this behavior also expected?
Hi, I am slowly getting to grips with SQL Server. As a part of this, I have been attempting to work on producing more efficient queries. This post is regarding what appears to be a discrepancy between the SQL Server execution plan and the actual time taken by a query to run. My brief is to produce an attendance system for an education establishment (I presume you know I'm not an A-Level student completing a project :p ). Circa 1.5m rows per annum, testing with ~3m rows currently. College_Year could strictly be inferred from the AttDateTime, however it is included as a field because it is a part of just about every PK this table is ever likely to be linked to. Indexes are not fully optimised yet.

Table:

CREATE TABLE [dbo].[AttendanceDets] (
    [College_Year] [smallint] NOT NULL,
    [Group_Code] [char] (12) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [Student_ID] [char] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [Session_Date] [datetime] NOT NULL,
    [Start_Time] [datetime] NOT NULL,
    [Att_Code] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]
GO

CREATE CLUSTERED INDEX [IX_AltPK_Clust_AttendanceDets] ON [dbo].[AttendanceDets]
    ([College_Year], [Group_Code], [Student_ID], [Session_Date], [Att_Code]) ON [PRIMARY]
GO

CREATE INDEX [All] ON [dbo].[AttendanceDets]
    ([College_Year], [Group_Code], [Student_ID], [Session_Date], [Start_Time], [Att_Code]) ON [PRIMARY]
GO

CREATE INDEX [IX_AttendanceDets] ON [dbo].[AttendanceDets]([Att_Code]) ON [PRIMARY]
GO

ALL inserts are via an overnight sproc - data comes from a third-party system. Group_Code is 12 chars (no more, no less), Student_ID 8 chars (no more, no less). I have created a simple sproc. I am using this as a benchmark against which I am testing my options. I appreciate that this sproc is an inefficient jack of all trades - it has been designed as such so I can compare its performance to more specific sprocs and possibly some dynamic SQL.
Sproc:

CREATE PROCEDURE [dbo].[CAMsp_Att]
    @College_Year AS SmallInt,
    @Student_ID AS VarChar(8) = '________',
    @Group_Code AS VarChar(12) = '____________',
    @Start_Date AS DateTime = '1950/01/01',
    @End_Date AS DateTime = '2020/01/01',
    @Att_Code AS VarChar(1) = '_'
AS

IF @Start_Date = '1950/01/01'
    SET @Start_Date = CAST(CAST(@College_Year AS Char(4)) + '/08/31' AS DateTime)

IF @End_Date = '2020/01/01'
    SET @End_Date = CAST(CAST(@College_Year + 1 AS Char(4)) + '/07/31' AS DateTime)

SELECT College_Year, Group_Code, Student_ID, Session_Date, Start_Time, Att_Code
FROM dbo.AttendanceDets
WHERE College_Year = @College_Year
  AND Group_Code LIKE @Group_Code
  AND Student_ID LIKE @Student_ID
  AND Session_Date <= @End_Date
  AND Session_Date >= @Start_Date
  AND Att_Code LIKE @Att_Code
GO

My confusion lies with running the below script with Show Execution Plan:

--SET SHOWPLAN_TEXT ON
--GO

DECLARE @Time AS DateTime
SET @Time = GETDATE()

SELECT College_Year, Group_Code, Student_ID, Session_Date, Start_Time, Att_Code
FROM AttendanceDets
WHERE College_Year = 2005
  AND Group_Code LIKE '____________'
  AND Student_ID LIKE '________'
  AND Session_Date <= '2005-11-16'
  AND Session_Date >= '2005-11-16'
  AND Att_Code LIKE '_'

PRINT 'First query took: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS VarChar(5)) + ' milli-seconds'

SET @Time = GETDATE()

EXEC CAMsp_Att @College_Year = 2005, @Start_Date = '2005-11-16', @End_Date = '2005-11-16'

PRINT 'Second query took: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS VarChar(5)) + ' milli-seconds'
GO

--SET SHOWPLAN_TEXT OFF
--GO

The execution plan for the first query appears miles more costly than the sproc, yet it is effectively the same query with no parameters. However, my understanding is the cached plan substitutes literals for parameters anyway. In any case, the first query's cost is listed as 99.52% of the batch and the sproc's as 0.48% (comparing the IO, CPU costs etc. supports this). BUT the text output is:

(10639 row(s) affected)
First query took: 596 milli-seconds

(10639 row(s) affected)
Second query took: 2856 milli-seconds

I appreciate that logical and physical performance are not one and the same, but why is there such a huge discrepancy between the two? They are tested on a dedicated test server, and repeated running and switching the order of the queries elicits the same results. Sample data can be provided if requested, but I assumed it would not shed much light. BTW - I know that additional indexes can bring the plans and execution times closer together - my question is more about the concept. If you've made it this far - many thanks. If you can enlighten me - infinite thanks.
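One point worth adding here: the percentages in the graphical plan are the optimizer's estimated costs, not measured time, so a 99.52% / 0.48% split can happily coexist with the "cheap" branch actually running slower (for example when the procedure's cached plan was compiled for different parameter values). Comparing the actual work done is usually more informative; a small sketch:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- run the ad hoc SELECT and the EXEC CAMsp_Att call here, then compare the
-- logical reads and the CPU/elapsed times reported on the Messages tab

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;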
I have been using a licensed copy of Visual Studio 2005 and MS SQL Server 2005 for some time, but am only now trying out the Reporting Services functionality.
I have attempted to follow the instructions from url:
However, despite confirming that the report services are running, and also checking the configuration by following the information in the above web page, I still get the following problems.
1. When attempting to create a project via the wizard I get the following error: "Exception has been thrown by the target of an invocation." If an attempt is made to add a new data source, I am unable to choose the data source type (e.g. Microsoft SQL Server); I am not given a choice for the type, and in fact the relevant drop-down is blank.
2. A project can be created without the wizard, but if I right-click the Shared Data Source folder I do not see the pop-up to set the data source parameters.
I assume I am missing something quite fundamental - however so far I cannot see what.
Hi all. The company I work for is looking for a new SQL server. Where can I find information and or a tool for sizing information? By sizing information I mean how big a pile of hardware am I going to need to run MS SQL for x number of connected users with x size database, etc. I've been tooling around the internet and MS' site but can't find any info on this.
I have created a publication and provided a network path (e.g. \\server\folder) for its snapshot folder at the time of creation. When I try to create a pull subscription and follow all the steps of the wizard, it gives me the following error: "The initial snapshot for publication '---' is not yet available". Can you tell me what the causes of this problem are and how I can solve it?
I want to get results in SQL that are all written in UPPERCASE, but I want to receive them in Initial Caps format. I know UPPERCASE is UPPER and lowercase is LOWER, but what is Initial Caps (first letter of a word capitalised)?
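There is no built-in INITCAP-style function in T-SQL; for a single word it can be composed from UPPER, LOWER, LEFT and SUBSTRING (a multi-word string needs a loop or a user-defined function). A sketch:

DECLARE @word varchar(50);
SET @word = 'MICROSOFT';

-- first letter upper-case, the rest lower-case
SELECT UPPER(LEFT(@word, 1)) + LOWER(SUBSTRING(@word, 2, LEN(@word))) AS InitialCaps;
-- returns: Microsoft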
We had a runaway query which grew tempdb to 24000 MB. Then someone changed the unrestricted file growth property to restricted growth while the size was 24000 MB. Now I cannot reduce the initial size. I have set the property back to unrestricted file growth. I have shrunk tempdb and the available space is almost 24000 MB. I have stopped SQL Server. I even deleted the existing tempdb.mdf and tempdb.ldf files. But when SQL Server is restarted, the initial size is set to 24000 MB again and it will not let me reduce it. Is there anything, short of manipulating the system tables, that will reduce the size back to 500 MB?
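For reference, the two documented routes for this are shrinking the file online with DBCC SHRINKFILE, or, if it will not go low enough, restarting the instance with the -c -f (minimal configuration) startup parameters so tempdb is created at its smallest size, setting the size you actually want with ALTER DATABASE tempdb MODIFY FILE, and then restarting normally. A sketch of the online attempt, assuming the default logical data file name tempdev (sp_helpfile lists the real names):

USE tempdb;
GO
-- target size is in MB
DBCC SHRINKFILE ('tempdev', 500);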
I would like to increase the initial size of a SQL 2005 DB from 150 to 250 GB to prevent automatic autogrowth; would this have any impact in production if you do it on the fly?
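Growing a data file is an online operation; the main production consideration on SQL Server 2005 is that, unless instant file initialization is enabled for the service account, the additional 100 GB has to be zero-filled, which can generate heavy I/O while the ALTER runs. A sketch with a hypothetical database and logical file name:

ALTER DATABASE [YourDb]
MODIFY FILE (NAME = 'YourDb_Data', SIZE = 250GB);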
I need to display the middle initial from a name field that contains the last name, comma, and the middle name or initial.
Example data:
Jane,Smith Ron
John,Dow L
Mary Jane,Dow Welsh
The result I am looking for is to capture only the first letter of the middle name. In this data example, I would need to display the following in a separate column:
-- I have a first name field that sometimes contains only the first name,
-- sometimes contains the first name and the middle initial, and
-- sometimes contains the first name, a space, followed by NMI (for no middle initial).
-- How do I correctly grab the middle initial in all cases?
-- I have been playing with PATINDEX but it's harder than I thought. I guess I need a CASE
-- statement in addition to this. Any idea how I can do this?
-- Thanks!
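A CASE expression around CHARINDEX handles the three shapes described (first name only, first name plus a middle part after a space, or the literal NMI). A sketch against a made-up table and column:

SELECT FirstName,
       CASE
           WHEN CHARINDEX(' ', FirstName) = 0
               THEN ''                               -- first name only, no middle part
           WHEN LTRIM(SUBSTRING(FirstName, CHARINDEX(' ', FirstName) + 1, LEN(FirstName))) = 'NMI'
               THEN ''                               -- explicit "no middle initial" marker
           ELSE SUBSTRING(FirstName, CHARINDEX(' ', FirstName) + 1, 1)   -- first letter after the space
       END AS MiddleInitial
FROM dbo.People;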
I'm not a sql server savvy, so I need assistance on the following two scenarios:
A customer runs a script like this (slightly larger, but I ripped away the meat):
---------------------------------------
create database test_database
go
USE [test_database]
exec sp_changedbowner 'sa'
use master;
go
sp_grantlogin 'server01\CUSTOM_ADMIN';
go
use test_database;
go
-- lots of table creations, where one of the tables is TEST_TABLE
sp_addrole 'ADMIN_ROLE';
go
sp_grantdbaccess @loginame = 'server01\CUSTOM_ADMIN', @name_in_db = 'USER_ADMIN';
go
sp_addrolemember @rolename = 'ADMIN_ROLE', @membername = 'USER_ADMIN';
go
GRANT SELECT, UPDATE, INSERT, DELETE ON [dbo].[TEST_TABLE] TO [ADMIN_ROLE]
GO
---------------------------
Now, if a person is added to the server01\CUSTOM_ADMIN group, he/she should be able to do the following (let's say it's a he):
- Create a test.udl file (Win XP). Set the provider to SQL Server.
- On the Connection tab, enter the hostname of the database server in the Data Source field and use Windows NT Integrated security.
- When he now tests the connection, it should work, since he has access to the database.
- Also, using the dropdown "3. Enter the initial catalog to use:", he should see SOME databases. Not ALL, not none. The ones that he has access to.
So, if TEST_DATABASE is the only database that server01\CUSTOM_ADMIN has access to, that database and only that one should show, right?
In my customer's scenario, some (irrelevant) databases show, but not TEST_DATABASE. In my own test, I still get ALL databases, even after I remove the user from the Administrators group and the Users group, so that he is only a member of the CUSTOM_ADMIN group on server01. "Test connection" succeeds, and all the databases on the server show.
What I hope for is follow-up questions like "He's probably sysadmin, check it", etc., so that I can systematically (using your brains) filter out the reasons for these scenarios.
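Two quick checks that might help narrow it down, run while connected as (or impersonating) the affected Windows account, and hedged in that the second one applies to SQL Server 2005 only:

-- who am I, and do I arrive with sysadmin rights (e.g. via BUILTIN\Administrators)?
SELECT SUSER_SNAME() AS login_name,
       IS_SRVROLEMEMBER('sysadmin') AS is_sysadmin;

-- On SQL Server 2005 the list of databases shown by client tools is governed by the
-- server-level VIEW ANY DATABASE permission, which is granted to public by default,
-- so seeing ALL databases does not by itself prove sysadmin membership.
-- DENY VIEW ANY DATABASE TO public;   -- would restrict logins to databases they own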
Is there a way to decrease the initial size of a database/log file? I've noticed you can increase it, but if you decrease it, after you confirm the change and go check again, you will see that nothing happened.
Currently my DB size is only 6 GB, but the transaction log file's initial size was set to 20 GB and it has grown way beyond the DB size with the autogrowth feature turned on. The database was originally a test/development DB and was migrated to a production server including the log file. This probably caused the accumulation of transactions in the log.
We run backups every day and have tried a shrinkfile, but the file size did not change.
Can I change the "initial size" setting of the transaction log without causing any problems? Do I need to stop the service before I make the change, assuming I make the change after the backup run? Or can I change it on the fly?
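The initial size of the log can be reduced online; no service stop is needed. What usually blocks the shrink is that the active portion of the log has not been freed, which in the FULL recovery model means a log backup has to happen first. A sketch, with the database and logical log file names made up:

-- how full is each log really?
DBCC SQLPERF (LOGSPACE);

-- in FULL recovery, the inactive portion of the log is only freed by a log backup
BACKUP LOG [YourDb] TO DISK = 'D:\Backup\YourDb_log.trn';

-- then shrink the log file down to the new target size (in MB)
DBCC SHRINKFILE ('YourDb_log', 2048);

For a user database, the "initial size" shown in the UI reflects the current file size, so it drops once the shrink succeeds; after that, the autogrowth settings can be tuned separately.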