Turn Logging On To Get Statistics On Execution Times For Stored Procedures
Mar 14, 2000. Can anyone tell me how to turn logging on to get statistics on execution times for stored procedures?
Thanks
Dear group, is it possible in SQL Server to see when a stored procedure was executed? I would say it is only possible with some traces, but not with the standard settings. For a short answer on that matter I'd be thankful. Regards, Uli
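A minimal sketch of one way to get timing figures interactively, assuming you can run the procedure yourself (dbo.MyProc is a placeholder name):

SET STATISTICS TIME ON
EXEC dbo.MyProc   -- hypothetical procedure; prints parse/compile and execution times per statement
SET STATISTICS TIME OFF

-- or time a single call manually:
DECLARE @start DATETIME
SET @start = GETDATE()
EXEC dbo.MyProc
PRINT 'Elapsed ms: ' + CAST(DATEDIFF(ms, @start, GETDATE()) AS VARCHAR(10))

For unattended logging of when procedures ran, SQL Profiler or a server-side trace on the SP:Starting/SP:Completed events is the usual route; a trace sketch appears a little further down the page.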
Hi,
How can I turn off the STATISTICS XML option while executing a query in a new query window of SQL Server Management Studio?
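If the option was enabled with a SET statement in the session, a minimal sketch of switching it back off (the PROFILE line is included on the assumption that the related text-based option might be the one actually in effect, since the two are easy to confuse):

SET STATISTICS XML OFF
-- in case the related text-based option is the one enabled:
SET STATISTICS PROFILE OFF

If it comes back in every new window, it is worth checking Management Studio's per-connection Query Options dialog, which can apply SET options to each new connection.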
Hi
We have a database that has over a 1000 stored procedures. Most of them are used, but not all of them. Because it would take too much time to investigate which are used or not, I want to log each execution of a stored procedure for a month. This way I can see which procedures are used and which are not.
Is there a way to do this?
I thought of using a trigger that fires after an 'exec' statement, but it seems that triggers can only be fired by INSERT/UPDATE/DELETE (DML) or by CREATE, ALTER and DROP statements (DDL), not by EXEC.
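One approach that doesn't require touching a thousand procedures is a server-side trace capturing SP:Completed events. A rough sketch; the path is a placeholder, and the event/column IDs (43 = SP:Completed; 1 = TextData, 3 = DatabaseID, 14 = StartTime, 22 = ObjectID) should be double-checked against the sp_trace_setevent documentation for your version:

DECLARE @TraceID INT
DECLARE @maxfilesize BIGINT
SET @maxfilesize = 100   -- MB per rollover file

-- create the trace (option 2 = file rollover); SQL appends .trc to the name
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\ProcUsage', @maxfilesize, NULL

DECLARE @on BIT
SET @on = 1
EXEC sp_trace_setevent @TraceID, 43, 1, @on    -- TextData
EXEC sp_trace_setevent @TraceID, 43, 3, @on    -- DatabaseID
EXEC sp_trace_setevent @TraceID, 43, 14, @on   -- StartTime
EXEC sp_trace_setevent @TraceID, 43, 22, @on   -- ObjectID

EXEC sp_trace_setstatus @TraceID, 1            -- start the trace

-- a month later: count executions per procedure from the trace file
-- SELECT ObjectID, COUNT(*) AS execs
-- FROM ::fn_trace_gettable('C:\Traces\ProcUsage.trc', DEFAULT)
-- GROUP BY ObjectID

A filter on DatabaseID (sp_trace_setfilter) would keep the trace files down to the one database of interest.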
Thanks!
Hi All
Not sure if this has been asked but quite honestly, I have tried several searches and too many hits are returned.
I use 3 separate stored procs for a process in a .net program where sp1 inserts the header. Then, looping through the details, I use sp2 to insert the detail records and then for each detail, I loop through other sub-records and insert with sp3.
I would like to place the whole update within a transaction, so that an error in sp3 will roll back the entire thing.
My problem: each of sp1, sp2 and sp3 calls another stored proc, say spLog, which logs the process and the error messages. When I roll back, how can I prevent my logs from rolling back?
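One pattern that may help: table variables are not affected by ROLLBACK, so log rows can be buffered in a table variable and written to the permanent log table only after the transaction has finished. A minimal sketch in a single T-SQL batch (SQL Server 2005 TRY/CATCH syntax; dbo.ProcessLog and the messages are placeholders):

DECLARE @log TABLE (msg VARCHAR(400), logged_at DATETIME DEFAULT GETDATE())

BEGIN TRANSACTION
BEGIN TRY
    -- ... the work sp1/sp2/sp3 would do goes here ...
    INSERT INTO @log (msg) VALUES ('detail batch attempted')
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    INSERT INTO @log (msg) VALUES ('failed: ' + ERROR_MESSAGE())
    ROLLBACK TRANSACTION   -- @log keeps its rows despite the rollback
END CATCH

-- persist the buffered entries now that the transaction is over
INSERT INTO dbo.ProcessLog (msg, logged_at)
SELECT msg, logged_at FROM @log

Since the transaction here is controlled from .NET across three procs, spLog would need to accumulate into something rollback-proof for the duration; the same table-variable trick works if the logging is moved into the outermost batch, and otherwise the fallback is logging from the client after the rollback.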
Any help will be appreciated
Hello, I was wondering if there are any built-in objects to handle error handling in any of the SQL Server objects, or whether to just use the event log, trace, debug output, or something like that. Or is it not recommended at all to use TRY...CATCH, for whatever reason? Thanks a lot.
I presently have Error Logging turned on for the SQL Executive, but I no longer want it. When I
open up the "Configure SQL Executive" window under the Server menu, the
"Error Log File" field is dimmed and there is no option to turn it off.
How do I turn off logging for SQL Executive?
I am running a SQL maintenance job on a 40 GB database which performs optimizations by reorganizing data and index pages. After the job is finished, a separate job performing a SQL transaction log backup is run on the same database, which produces a 30 GB transaction log backup file. Is there any way to turn off logging during the maintenance plan, so that when the transaction log backup occurs it will not produce a large backup file?
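Logging can't be turned off outright, but if a broken point-in-time chain is acceptable during the maintenance window, switching the recovery model may be one option, since index rebuilds are minimally logged under BULK_LOGGED. A sketch (MyDB and the backup path are placeholders):

ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED
-- ... run the reorganization/maintenance here ...
ALTER DATABASE MyDB SET RECOVERY FULL
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_post_maint.bak' WITH DIFFERENTIAL

One caveat worth knowing: a log backup taken after bulk-logged operations still contains the extents those operations touched, so the log backup file may stay large; the more drastic route is SIMPLE recovery plus a full or differential backup instead of the log backup.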
I am doing some tempdb work for reporting only, and the transaction log is getting to 5 GB, and I don't really see why I need rollback.
If it fails I can just tell the user and the #temp tables will be deleted when the SP returns anyway.
So, can I tell SQL Server not to bother with transaction logging on these tempdb inserts/updates?
Have to admit something feels a bit iffy about doing this, but I can't see the problem logically.
Someone please tell me I'm being stupid!!
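For what it's worth, logging in tempdb cannot be disabled, but it is already lighter than in a user database (tempdb is never recovered, so less is logged and its log is truncated automatically). One thing that may reduce log volume is populating temp tables with SELECT INTO, which is minimally logged, instead of INSERT into a pre-created table. A sketch with a hypothetical source table:

-- minimally logged population
SELECT col1, col2
INTO #work
FROM dbo.SourceTable
WHERE col1 IS NOT NULL

-- versus the more heavily logged pattern:
-- CREATE TABLE #work (col1 INT, col2 VARCHAR(50))
-- INSERT INTO #work SELECT col1, col2 FROM dbo.SourceTable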
Hi,
I need advice on how to automate stored procedures within my SQL2003 DB.
I have a stored procedure I want to automate on a 120-second cycle, to link to some external software.
I have little experience of this and need some pointers on where to research or how to achieve my goal; see the sketch below.
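Assuming SQL Server Agent is available, a job scheduled to recur every 2 minutes is probably the simplest route. A sketch using msdb's documented procedures (job, proc and database names are placeholders):

USE msdb
GO
EXEC sp_add_job @job_name = N'Run MyProc every 2 minutes'
EXEC sp_add_jobstep @job_name = N'Run MyProc every 2 minutes',
    @step_name = N'exec proc',
    @subsystem = N'TSQL',
    @database_name = N'MyDB',
    @command = N'EXEC dbo.MyProc'
EXEC sp_add_jobschedule @job_name = N'Run MyProc every 2 minutes',
    @name = N'Every 2 minutes',
    @freq_type = 4,             -- daily
    @freq_interval = 1,         -- every day
    @freq_subday_type = 0x4,    -- repeat by minutes
    @freq_subday_interval = 2   -- every 2 minutes = 120 seconds
EXEC sp_add_jobserver @job_name = N'Run MyProc every 2 minutes'
GO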
Many thanks,
J
Hi, I am slowly getting to grips with SQL Server. As a part of this, I have been attempting to work on producing more efficient queries. This post is regarding what appears to be a discrepancy between the SQL Server execution plan and the actual time taken by a query to run.

My brief is to produce an attendance system for an education establishment (I presume you know I'm not an A-Level student completing a project :p ). Circa 1.5m rows per annum; testing with ~3m rows currently. College_Year could strictly be inferred from the AttDateTime, however it is included as a field because it is a part of just about every PK this table is ever likely to be linked to. Indexes are not fully optimised yet.

Table:

CREATE TABLE [dbo].[AttendanceDets] (
    [College_Year] [smallint] NOT NULL,
    [Group_Code] [char] (12) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [Student_ID] [char] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [Session_Date] [datetime] NOT NULL,
    [Start_Time] [datetime] NOT NULL,
    [Att_Code] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]
GO

CREATE CLUSTERED INDEX [IX_AltPK_Clust_AttendanceDets] ON [dbo].[AttendanceDets]
    ([College_Year], [Group_Code], [Student_ID], [Session_Date], [Att_Code]) ON [PRIMARY]
GO

CREATE INDEX [All] ON [dbo].[AttendanceDets]
    ([College_Year], [Group_Code], [Student_ID], [Session_Date], [Start_Time], [Att_Code]) ON [PRIMARY]
GO

CREATE INDEX [IX_AttendanceDets] ON [dbo].[AttendanceDets] ([Att_Code]) ON [PRIMARY]
GO

ALL inserts are via an overnight sproc - data comes from a third party system. Group_Code is 12 chars (no more, no less), Student_ID 8 chars (no more, no less). I have created a simple sproc. I am using this as a benchmark against which I am testing my options. I appreciate that this sproc is an inefficient jack of all trades - it has been designed as such so I can compare its performance to more specific sprocs and possibly some dynamic SQL.
Sproc:

CREATE PROCEDURE [dbo].[CAMsp_Att]
    @College_Year AS SmallInt,
    @Student_ID AS VarChar(8) = '________',
    @Group_Code AS VarChar(12) = '____________',
    @Start_Date AS DateTime = '1950/01/01',
    @End_Date AS DateTime = '2020/01/01',
    @Att_Code AS VarChar(1) = '_'
AS
IF @Start_Date = '1950/01/01'
    SET @Start_Date = CAST(CAST(@College_Year AS Char(4)) + '/08/31' AS DateTime)
IF @End_Date = '2020/01/01'
    SET @End_Date = CAST(CAST(@College_Year + 1 AS Char(4)) + '/07/31' AS DateTime)
SELECT College_Year, Group_Code, Student_ID, Session_Date, Start_Time, Att_Code
FROM dbo.AttendanceDets
WHERE College_Year = @College_Year
    AND Group_Code LIKE @Group_Code
    AND Student_ID LIKE @Student_ID
    AND Session_Date <= @End_Date
    AND Session_Date >= @Start_Date
    AND Att_Code LIKE @Att_Code
GO

My confusion lies with running the below script with Show Execution Plan:

--SET SHOWPLAN_TEXT ON
--GO
DECLARE @Time AS DateTime
SET @Time = GETDATE()

SELECT College_Year, Group_Code, Student_ID, Session_Date, Start_Time, Att_Code
FROM AttendanceDets
WHERE College_Year = 2005
    AND Group_Code LIKE '____________'
    AND Student_ID LIKE '________'
    AND Session_Date <= '2005-11-16'
    AND Session_Date >= '2005-11-16'
    AND Att_Code LIKE '_'

PRINT 'First query took: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS VarChar(5)) + ' milli-Seconds'

SET @Time = GETDATE()

EXEC CAMsp_Att @College_Year = 2005, @Start_Date = '2005-11-16', @End_Date = '2005-11-16'

PRINT 'Second query took: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS VarChar(5)) + ' milli-Seconds'
GO
--SET SHOWPLAN_TEXT OFF
--GO

The execution plan for the first query appears miles more costly than the sproc, yet it is effectively the same query with no parameters. However, my understanding is the cached plan substitutes literals for parameters anyway. In any case, the first query's cost is listed as 99.52% of the batch, the sproc 0.48% (comparing the IO, CPU costs etc. supports this). BUT the text output is:

(10639 row(s) affected)
First query took: 596 milli-Seconds
(10639 row(s) affected)
Second query took: 2856 milli-Seconds

I appreciate that logical and physical performance are not one and the same, but why is there such a huge discrepancy between the two? They are tested on a dedicated test server, and repeated running and switching the order of the queries elicits the same results. Sample data can be provided if requested, but I assumed it would not shed much light. BTW - I know that additional indexes can bring the plans and execution times closer together - my question is more about the concept. If you've made it this far - many thanks. If you can enlighten me - infinite thanks.
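One point that may explain part of it: the cost percentages in the plan are the optimizer's estimates, not measurements, and the sproc's plan is costed without knowing what the LIKE parameters will contain, so the two percentages aren't comparable with the clock. Comparing actual I/O and CPU for both runs tends to be more informative, e.g.:

SET STATISTICS IO ON
SET STATISTICS TIME ON

SELECT College_Year, Group_Code, Student_ID, Session_Date, Start_Time, Att_Code
FROM dbo.AttendanceDets
WHERE College_Year = 2005 AND Session_Date = '2005-11-16'

EXEC CAMsp_Att @College_Year = 2005, @Start_Date = '2005-11-16', @End_Date = '2005-11-16'

SET STATISTICS TIME OFF
SET STATISTICS IO OFF

If the sproc shows far more logical reads than the literal query, the parameter-insensitive cached plan, not the measurement, is the likely culprit.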
Hello
I have a .Net application which invokes a stored procedure (SQL Server 2005 Express installed on the same machine). When the stored procedure is called the first time, the application hangs because the sp never ends execution, and the application's process has to be killed. But when the application is executed again, the sp runs as expected. What could be happening?
The stored procedure references remote tables by means of synonyms. If the Management Studio is used instead, the sp never ends execution when invoked the first time, but the query can be cancelled.
Now, if the sp is invoked first in Management Studio and then by the application, the application does not hang (the sp executes as expected).
Thanks a lot.
I am running a query in SQL 2000 SP4, Windows 2000 Server, that is not being shared with any other users or any other SQL connections. The db involves a lot of tables, JOINs, LEFT JOINs, UNIONs etc. OK, it's not pretty code and my job is to make it better.

But for now, one thing I would like to understand with your help is why the same SP on the same server, with everything the same and without me changing anything at all in terms of SQL Server (configuration, code, ...), runs in Query Analyzer in 1:05 minutes, and I see one table get a hit of 15 million logical reads:

Table 'TABLE1'. Scan count 2070, logical reads 15516368, physical reads 147, read-ahead reads 0.

This 'TABLE1' has about 400,000 records.

The second time I ran it, right after, in Query Analyzer again:

Table 'TABLE1'. Scan count 2070, logical reads 15516368, physical reads 0, read-ahead reads 0.

I can see the physical reads now being 0, as it is understandable that SQL is now fetching the data from memory.

But the third time I ran it:

Table 'TABLE1'. Scan count 28, logical reads 87784, physical reads 0, read-ahead reads 0.

The Scan count went down from 2070 to 28. I don't know what the Scan count actually is. It scanned the table 28 times? The logical reads went down from 15 million to 87,784 reads, and 2 seconds execution time!

Anybody has any ideas why these numbers change? The problem is I tried various repeats of my test: I rebooted the SQL Server, dropped the database, restored it, ran the same exact query, and it took 3-4-5 seconds with 87,784 reads vs 15 million. Why don't I see 15 million now?

Well, I kept working during the day and I happened to run into another set of 15 million reads. A few runs would keep running at the pace of 15 million over 1 minute, and eventually the numbers went back down to 87,784 and 2 seconds.

Is it my way of using the computer? Maybe I was opening too many applications and SQL was fighting for memory? Would that explain the 15 million reads? I went and changed my SQL Server to use a fixed memory of 100 megs, restarted it, and tested the same query again, but it continued to show 87,784 reads with 2 seconds execution time. I opened all kinds of applications, redid the same test, and I was never able to see 15 million reads again.

Can someone help me with suggestions on what this problem could be, and how I could find a way to see 15 million reads again? By the way, with the limited info you have here about the database I am using, is 87,784 reads a terrible number of reads, average, or normal when the max records in the many tables involved in this SP is 400,000 records? I am guessing it is a terrible number, am I correct?

I would appreciate your help. Thank you
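The swings described are consistent with different plans and cache states between runs. For repeatable comparisons on a test box, both caches can be reset before each run; a sketch:

CHECKPOINT                 -- flush dirty pages so they can be dropped
DBCC DROPCLEANBUFFERS      -- cold buffer cache: physical reads reappear
DBCC FREEPROCCACHE         -- drop cached plans: forces a fresh compile

As for Scan count: it is the number of times the access path was scanned or seeked, so 2070 vs 28 means a different plan was chosen (for example far fewer loop-join iterations), which also explains the drop from 15 million to 87,784 logical reads.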
Background: We have SQL Server 2005 x64 running on a quad-core (dual dual-core) machine with 16GB of RAM. The database is about 10GB in size and we execute around a million stored procedures a day on it. Our application uses about 1000 different stored procedures on this machine. The application is a transactional B2B web-app with about 2000 users.
The problem we have is a really odd one that I can't seem to find much information on. We have a small number (3-4) of stored procedures that are exhibiting this problem.
The stored proc in question takes on average 100 ms CPU time to execute. It's a fairly complex stored proc, about 300 lines long, with 6-7 select statements, and it uses temp tables. No updates/inserts except on the temp tables. It's executed about 5000 times per day. About once a week, though, execution times will suddenly jump up to a 3000 ms average. This happens randomly during the day, although it seems to happen more often on Monday mornings (the DB is mostly unutilized over the weekend).
To fix this, I force the DB to recalculate the execution plan by adding or removing (depending on what I did last time around) the line 'set arithabort on' at the top of the stored procedure. I have no idea why this works, but it does. Within seconds of changing it, the stored proc execution time will go back to its normal range of 60-150 ms.
I've tried setting the execution plan of the stored procedure but I can't get it to work - the execution plan is very long and I don't know how to debug the error I get.
What is happening? This happens with a couple of stored procedures - usually the more complex ones. Has anyone seen anything like this?
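This pattern (fine for days, suddenly 30x slower, cured by any edit to the proc) is the classic signature of parameter sniffing: a plan compiled for an atypical parameter value gets cached and reused for everyone. The arithabort edit works only because ALTER PROCEDURE invalidates the cached plan. A lighter-weight way to force the same recompile, with dbo.MyProc as a placeholder:

EXEC sp_recompile 'dbo.MyProc'   -- plan is rebuilt on the next execution

-- or, as a standing fix for a known-problematic proc:
-- ALTER PROCEDURE dbo.MyProc ... WITH RECOMPILE

On SQL 2005 there is also OPTION (OPTIMIZE FOR (@param = value)) on the offending statement, which pins the plan to a representative value.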
In using ADO to connect to SQL Server, I'm trying to retrieve multiple datasets AND statistics that are usually returned via the OnInfoMessage event. For those that are familiar with SQL Server, I need the results returned by the SET STATISTICS IO ON and SET STATISTICS PROFILE ON options. Anyone had any luck doing this before?
Thanks in advance.
We recently upgraded from SQL 6.5 to SQL 7. I have a few .sql files that were each running around 5 - 8 minutes under 6.5. These same files now each take over 30 minutes to run. Has anybody had problems with their queries taking longer to run under 7.0? These files are quite large and are comprised of 3 - 4 batches with several queries in each batch. If anybody has any thoughts on the cause please let me know.
Thanks in advance.
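One hedged suggestion for the 6.5 to 7.0 slowdown: the optimizer changed substantially between those versions, and statistics and usage metadata carried over from 6.5 are often stale, which can produce exactly this kind of regression. Refreshing them is cheap to try:

-- run in each affected database
EXEC sp_updatestats          -- rebuild out-of-date optimizer statistics
DBCC UPDATEUSAGE (0)         -- correct row/page counts the upgrade may have left wrong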
Hi guys,
I identified a problem today with one of our development DBs where temporary tables were not being cleaned up by some stored procedures, and this led to a large tempdb size (about 2 gigs of data and a translog of 25 gigs!).
I'm currently running a dbcc shrinkdatabase on it, but it's been running for over an hour now. In my experience with smaller DBs, this process normally takes a few minutes. Can anyone give me a ballpark figure as to how long I can expect this to run for? Are we talking an hour, a few hours, a day, a few days?
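Hard to give a reliable number; shrink time scales with how much data has to be shuffled, and against a 25-gig log it can easily be hours. If the bloat is mostly the log file, shrinking just the log with DBCC SHRINKFILE is usually much faster than a whole-database shrink. A sketch (templog is the default logical name of tempdb's log; confirm with sp_helpfile):

USE tempdb
GO
EXEC sp_helpfile                 -- find the logical file names
DBCC SHRINKFILE (templog, 500)   -- shrink the log file to ~500 MB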
Thanks in advance
Matt
Hello all,
I am struggling with defining a logging mechanism for my packages. I have 2 questions concerning that matter:
I have used event handlers for my logging (as defined here: http://blogs.conchango.com/jamiethomson/archive/2005/06/11/1593.aspx ), but the problem is with packages that fail validation. I cannot find log entries for these cases, since no OnError event fires (for instance, when the table I'm loading to doesn't exist).
And the second question: many of my packages are executed using execute process task (using dtexec command line). I am trying to capture the result of the execution as a log file by using the ">" in the command line in order to output the execution to a log file in the following format:
dtexec /FILE "MyPackage.dtsx" > " MyPackageLog.log"
This works fine when executed by myself, but when using the Execute Process task (defined: Executable: DTExec.exe, Arguments: /FILE "MyPackage.dtsx" > " MyPackageLog.log") I get an execution error.
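The likely cause: the > redirection is a feature of the command shell, not of DTExec, so it works from a console but not when the Execute Process Task launches DTExec.exe directly. Routing the call through cmd.exe may fix it; a sketch of the task settings (exact quoting may need tweaking):

Executable: cmd.exe
Arguments: /C dtexec /FILE MyPackage.dtsx > MyPackageLog.log

Alternatively, the Execute Process Task's StandardOutputVariable property can capture DTExec's console output into a package variable without any shell redirection.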
Thanks,
Liran
I have a tricky question about Microsoft SQL Server 2000/2005. I have so far failed to find a solution to the problem, and others have failed too. So I thought I'd throw it around here, given that a lot of knowledgeable DBAs hang around in this forum.
I am looking for a script/stored procedure that is able to show me upcoming job executions for a selected date/time range based on the current settings for the jobs configured on the Database server.
Example
Date/Time From: 2/16/2007 00:00 a.m. (Friday)
Date/Time To: 2/17/2007 12:00 p.m. (Saturday)
A. Job 1 / Schedule 1 = Week Days, 1:00 a.m.
B. Job 1 / Schedule 2 = Saturdays, 10:00 a.m.
C. Job 2 / Schedule 1 = hourly, between 10:00 a.m. and 1:00 p.m.
D. Job 3 / Schedule 1 = every 3rd Saturday of the month, 5:00 a.m.
E. Job 4 / Schedule 1 = every 1st Friday of the month, 8:00 a.m.
The routine I have in mind would not return Job E, because 2/16/2007 is the 3rd Friday of the month and not the first.
Job A: 2/16/2007 01:00 a.m.
Job C: 2/16/2007 10:00 a.m.
Job C: 2/16/2007 11:00 a.m.
Job C: 2/16/2007 12:00 p.m.
Job C: 2/16/2007 01:00 p.m.
Job D: 2/17/2007 05:00 a.m.
Job B: 2/17/2007 10:00 a.m.
Job C: 2/17/2007 10:00 a.m.
Job C: 2/17/2007 11:00 a.m.
Job C: 2/17/2007 12:00 p.m.
Output
It needs to return the Job ID, the Job Name, the Schedule ID, the Date and the Time.
Disabled Jobs and Schedules are by default excluded from the selection, but an option to include or exclude those would be a bonus.
Information such as the min, max and average execution time would be great too.
Notes
The schedules of Job B and Job C overlap, as you can see in my example above.
This happens only once per month though. I have over 20 jobs with sometimes very frequent execution times, like every 5 minutes or every 20 minutes and jobs that run hourly, daily, weekdays only, weekends only, monthly once etc.
Purpose
I want to do two things.
I want to determine where jobs overlap, not just by start date/time, but also by average run time and maximum run time.
I also want to be able to generate a report that shows me what should have been done and what was actually done by the jobs. (As a side note: 6 of the jobs create new jobs on the fly for other database servers, and this sometimes does not happen properly, without any error message. The volume of jobs makes the manual search like looking for a needle in a haystack.)
Findings so far:
I did some digging myself and found the following stored procedures that do some of the steps that I need, and the tables involved in the calculation.
Stored procedures:
- sp_get_schedule_description (in db: msdb) (undocumented stored procedure)
- sp_add_schedule (in db: msdb) http://msdn2.microsoft.com/en-us/library/ms187320.aspx
Tables:
- msdb.sysjobs
- msdb.sysjobschedules
- msdb.sysjobhistory
It is not a problem to determine the next execution of a job, but that is not what I need, anyway.
The problem is that it does not help you to determine all upcoming execution times, if the selected timeframe is long enough that SQL Server executes the job more than once.
There is no way around using the sysjobschedules table and calculating the execution dates and times based on the configured settings. See the stored procedure sp_get_schedule_description.
That one breaks down nicely the settings as documented for sp_add_schedule at
http://msdn2.microsoft.com/en-us/library/ms187320.aspx, but it does not allow the determination of the exact upcoming dates and times when the job is supposed to be executed.
Another Example
If there is only one job scheduled to run
1) every 5 minutes,
2) on every weekday
3) between 1:30pm and 2:00pm
You would get the following results
1) start date/time 7/7/2007 12:00pm, end date/time 7/8/2007 2:00pm
nothing, because the 7/7/2007 and 7/8/2007 are on the weekend
2) start date/time 7/5/2007 12:00pm, end date/time 7/5/2007 1:45pm
7/5/2007 1:30pm
7/5/2007 1:35pm
7/5/2007 1:40pm
7/5/2007 1:45pm
3) start date/time 7/5/2007 1:45pm, end date/time 7/6/2007 3:00pm
7/5/2007 1:45pm
7/5/2007 1:50pm
7/5/2007 1:55pm
7/5/2007 2:00pm
7/6/2007 1:30pm
7/6/2007 1:35pm
7/6/2007 1:40pm
7/6/2007 1:45pm
7/6/2007 1:50pm
7/6/2007 1:55pm
7/6/2007 2:00pm
Now SQL has a lot more configuration options for the scheduler.
And don't forget that you can have more than one schedule record for any single job, or none at all (which would not interest me).
Autom. when SQL starts: freq_type=64
Starts when CPU idle: freq_type=128
One Time, on date mm/dd/yyyy at time hh:mm:ss am/pm: freq_type=1
or
Recurring, occurs:
Daily: freq_type=4
Every x day(s): freq_interval=x
Weekly: freq_type=8
Every x week(s): freq_recurrence_factor=x
On Mo, Tu, We, Th, Fr, Sa, Su: freq_interval is a bitmask with 1 = Sunday, 2 = Monday, 4 = Tuesday, 8 = Wednesday, 16 = Thursday, 32 = Friday, 64 = Saturday. Examples: Su and Mo enabled = 3 (1 (Su) + 2 (Mo)); Mo, We and Fr enabled = 42 (2 (Mo) + 8 (We) + 32 (Fr))
Monthly: freq_type=16
Day X of every Y month(s): freq_interval=X, freq_recurrence_factor=Y
or
The 1st, 2nd, 3rd, 4th or LAST weekday of every Y month(s): freq_type=32
freq_relative_interval=1 (1st), 2 (2nd), 4 (3rd), 8 (4th), 16 (last)
freq_interval: 1=Su, 2=Mo, 3=Tu, 4=We, 5=Th, 6=Fr, 7=Sa, 8=Day, 9=Weekday, 10=Weekend day
freq_recurrence_factor=Y
Occurs once at hh:mm:ss AM/PM: freq_subday_type=0x1
or
Occurs every X hours/minutes, starting hh:mm:ss AM/PM, ending hh:mm:ss AM/PM: freq_subday_type=0x4 (minutes) or 0x8 (hours), freq_subday_interval=X, active_start_time, active_end_time
Start Date mm/dd/yyyy, End Date mm/dd/yyyy: active_start_date, active_end_date
No End Date: active_end_date=99991231
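As a starting point for the expansion logic, a hedged sketch that pulls each enabled job's schedule settings out of msdb, ready to be decoded with the rules above (SQL 2000-era schema; on SQL 2005 SP2 and later the schedule columns live in msdb.dbo.sysschedules instead):

SELECT j.job_id, j.name AS job_name, s.schedule_id,
       s.freq_type, s.freq_interval,
       s.freq_subday_type, s.freq_subday_interval,
       s.freq_relative_interval, s.freq_recurrence_factor,
       s.active_start_date, s.active_end_date,
       s.active_start_time, s.active_end_time
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobschedules s ON s.job_id = j.job_id
WHERE j.enabled = 1 AND s.enabled = 1
ORDER BY j.name

Expanding these rows into concrete fire times for an arbitrary window then becomes a loop over each day of the range, applying the freq_type rules; the min/max/average durations can come from msdb.dbo.sysjobhistory, where run_duration is encoded as HHMMSS.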
Does anybody have a script that does that, or several individual scripts that could be combined to do what I want to do?
Thanks. I appreciate it.
Cheers!
Carsten Cumbrowski
http://www.sqlhunt.com/
Hi
I've got some trouble with the built-in logging feature and a self-defined logging table.
Scenario :
I have an SSIS package with built-in logging enabled, using the SQL Server provider.
At the start of the package, I have an SQL Task which logs
the system variable System::ExecutionInstanceGUID into the self-defined logging table.
But the execution_id which the built-in logging stores to sysdtslog90 isn't the same
value as the value of System::ExecutionInstanceGUID,
so I can't link my own table with the sysdtslog90 table.
Any ideas what is going wrong?
Thanks, and sorry about my English
Ivo Becker (Switzerland)
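One hedged thing to check on the question above: with event-handler based logging, the rows in sysdtslog90 may carry the executionid of the container or event handler rather than of the package itself. Filtering on the PackageStart/PackageEnd events usually shows the GUID the package run was logged under, which can be compared with what the SQL Task wrote:

SELECT event, source, executionid, starttime
FROM dbo.sysdtslog90
WHERE event IN ('PackageStart', 'PackageEnd')
ORDER BY starttime DESC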
Has anyone come up/determined a generic way to capture and log indicative information within a data flow in SSIS - e.g., a number of rows selected from the source, transformed, rejected, loaded, various timestamps around these events, etc.? I am trying to avoid having to build a custom solution for each of the packages that I will have (of which there will be dozens). Ideally, I'd like to have some sort of a generic component (such as a custom transformation) that will hide the implementation details and provide a generic interface to the package.
It is not too difficult to achieve something similar on the control flow level, but once you get into data flows things get complicated.
Any ideas will be greatly appreciated.
After moving off the VS debugger and into Management Studio to exercise our SQLCLR sp, we notice that the 2nd execution gets an error suggesting that our static SqlCommand object is getting reused from the 1st execution (of the sp under Management Studio). If this is expected behavior, we have no problem limiting our statics to only completely reusable objects, but would first like to know if this is expected. Is the fact that the debugger doesn't show this behavior also expected?
Hi ya,
Here is the query:
if (criteria == "All")
{
    SqlDataSource1.SelectCommand = "select * from [tblproperty]";
}
else
{
    SqlDataSource1.SelectCommand = "select * from [tblproperty] where " + item + " " + criteria.Trim() + " '" + txtSearch.Text.Trim() + "'";
}
Now, item contains a value from a combo box, criteria holds the operator (All, =, <, >, like, <>), and txtSearch holds the exact search field value. How can I transform this into a stored procedure, so that I do not have to write 2 procedures, and still get all the values back when the procedure executes?
Any ideas?
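A hedged sketch of one way to fold both cases into a single procedure: because the column and operator vary, it uses parameterized dynamic SQL via sp_executesql, with the operator validated against a whitelist and the column name wrapped in QUOTENAME (the original string concatenation is open to SQL injection). The table name is taken from the snippet; everything else is a placeholder:

CREATE PROCEDURE dbo.SearchProperty
    @column SYSNAME = NULL,      -- NULL means 'All'
    @operator VARCHAR(4) = '=',
    @value VARCHAR(100) = NULL
AS
BEGIN
    IF @column IS NULL
    BEGIN
        SELECT * FROM dbo.tblproperty
        RETURN
    END

    IF @operator NOT IN ('=', '<', '>', '<>', 'like')
        RETURN   -- reject anything not whitelisted

    DECLARE @sql NVARCHAR(500)
    SET @sql = N'SELECT * FROM dbo.tblproperty WHERE '
             + QUOTENAME(@column) + N' ' + @operator + N' @val'

    EXEC sp_executesql @sql, N'@val VARCHAR(100)', @val = @value
END

The page then always calls this one procedure; passing NULL for @column covers the 'All' case, and wildcards in @value cover LIKE searches.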
Hello :
How do I execute a stored procedure during execution of the report, that is, before displaying the data?
Thank you.
I want to know the differences between SQL Server 2000 stored procedures and Oracle stored procedures. Do they have different syntax? Is the concept the same, that the stored procedures execute in the database server with better performance? Please advise good references for Oracle stored procedures also. Thanks!!
I have a database; in the database there are a few users that are no longer used. When I try to drop those users I get the following error message:
"The database principal is set as the execution context of one or more procedures, functions, or event notifications and cannot be dropped." (Msg 15136)
Indeed, I think that those users are set as the execution context of some stored procedures.
How do I find for which procedures or other database objects those users are the execution context, or have grants?
How do I delete them from the database (and maybe from the logins of the server)?
How can I see what grants a user has?
How can I see what grants an SP has?
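On SQL Server 2005, a few hedged queries that may answer these (SomeUser is a placeholder):

-- 1. which modules use the user as their execution context (the cause of Msg 15136)
SELECT OBJECT_NAME(object_id) AS module_name
FROM sys.sql_modules
WHERE execute_as_principal_id = USER_ID('SomeUser')

-- 2. what permissions the user holds in this database
SELECT pe.class_desc, pe.permission_name, pe.state_desc,
       OBJECT_NAME(pe.major_id) AS object_name
FROM sys.database_permissions pe
JOIN sys.database_principals pr ON pr.principal_id = pe.grantee_principal_id
WHERE pr.name = 'SomeUser'

-- 3. after re-pointing those modules (redefine them WITH EXECUTE AS OWNER or
--    another principal), the user, and optionally the login, can go:
-- DROP USER SomeUser
-- DROP LOGIN SomeUser

For what a procedure itself has been granted, query sys.database_permissions with major_id = OBJECT_ID('dbo.TheProc').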
Hi,
This might be a really simple thing; however, we have just installed SQL Server 2005 on a new server and are having difficulties with the setup of the stored procedures. Every time we try to modify an existing stored procedure, it attempts to save it as an .sql file, unlike in 2000, where it saved it as part of the database itself.
Thank you in advance for any help on this matter
Hello group. I have an issue which has bothered me for a while now: I'm wondering why the column statistics, which SQL Server wants me to create if I turn off auto-created statistics, are so important to the optimizer?

Example: from Northwind (with auto create stats off), I do the following:

SELECT * FROM Customers WHERE Country = 'Sweden'

My query plan shows a clustered index scan, which is expected - no index exists for Country. BUT the query plan also shows that the optimizer is missing a statistic on Country, which tells me that the optimizer would benefit from knowing this. I cannot see why? (And I've been trying for a while now.)

If I create the missing statistics, nothing happens in the query plan (and why should it?). I could understand it if the optimizer suggested an index on Country - this would make sense - but when creating the missing index, Query Analyzer creates the statistics with an empty index, which seems to me to be less than usable.

I've been thinking long and hard about this, but haven't been able to reach a conclusion :) It has some relevance to my work, because allowing the optimizer to create missing statistics limits my options for designing indexes (e.g. covering) for some rather wide tables, so I'm thinking why not turn it off altogether. But I would like to know the consequences - hope somebody has already delved into this and knows a good explanation.

Rgds
Jesper
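For experimenting with the question, a sketch of creating the suggested statistic by hand and inspecting it:

CREATE STATISTICS st_Customers_Country ON dbo.Customers (Country)
DBCC SHOW_STATISTICS ('dbo.Customers', st_Customers_Country)

The histogram matters even without an index because it drives the row-count estimate for the scan's output, and that estimate feeds everything upstream: join order, join type, memory grants. The plan shape for this single-table query can't change, which is why nothing visible happens here, but in bigger queries a bad guess at the number of Swedish customers can cascade.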
What is the unit of the numbers you get in the Time Statistics part when running a query in Microsoft SQL Server Management Studio with Client Statistics turned on?
Currently I get mostly 0's, but if I deliberately write a bad query I can get it up to around 30... Is it milliseconds, or some made-up number based on clock cycles, or...?
I would also like to know if it's possible to change the precision.
- Nikolaj
Hello,
I have been working on this stored procedure; it pulls the results that I want, but it takes a very long time to execute, and about half the time it times out. I have tried changing the timeout settings on the SQL server to 0 (no timeout), but it seems to have made no difference.
Could anyone help with making the query a bit more efficient?

SELECT DISTINCT
webStats.id, webStats.ip, webStats.useragent, browsers.browsername, os.osname, webStats.datestamp, webStats.hourstamp, webStats.hostname,
webStats.refurl, webStats.refhost, webStats.entrypage, webStats.visitcount, webStats.country, searchengines.enginename,
IpToCountry.COUNTRY AS CountryName
FROM webStats LEFT OUTER JOIN
IpToCountry ON webStats.country = IpToCountry.CTRY LEFT OUTER JOIN
searchengines ON webStats.useragent LIKE '%' + searchengines.agentstring + '%' LEFT OUTER JOIN
browsers AS browsers ON webStats.useragent LIKE '%' + browsers.agentstring + '%' LEFT OUTER JOIN
os ON webStats.useragent LIKE '%' + os.agentstring + '%'
WHERE (webStats.datestamp BETWEEN DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0) AND GETDATE())
ORDER BY webStats.datestamp DESC

Thanks for your help.
Bart
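The expensive part of the query above is the set of LIKE '%' + column + '%' join predicates: they cannot use indexes, so every webStats row is compared against every row of each lookup table. If near-real-time is acceptable, one hedged option is to resolve each user agent once into ID columns added to webStats (the column and key names here are hypothetical), then report off cheap key joins:

-- one-off / periodic resolution, instead of per-query LIKE joins
UPDATE w
SET browser_id = b.id
FROM webStats w
JOIN browsers b ON w.useragent LIKE '%' + b.agentstring + '%'
WHERE w.browser_id IS NULL

-- the report query then joins on keys:
-- LEFT OUTER JOIN browsers b ON b.id = w.browser_id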
Using SQL 2005, SP2. All of a sudden, whenever I create any stored procedure in the master database, it gets created as a system stored procedure. It doesn't matter what I name it or what it does.
For example, even this simple little guy:
CREATE PROCEDURE BOB
AS
PRINT 'BOB'
GO
Gets created as a system stored procedure.
Any ideas what would cause that and/or how to fix it?
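A hedged first check: in SSMS a procedure lands under System Stored Procedures when its is_ms_shipped flag is set, which normally only happens via the undocumented sp_MS_marksystemobject. Confirming the flag is a start (BOB from the example above):

USE master
GO
SELECT name, create_date, is_ms_shipped
FROM sys.procedures
WHERE name = 'BOB'

If is_ms_shipped = 1 for freshly created procs, something on the server (a tool, a script, a DDL trigger) may be marking them; it is not a documented server setting.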
Thanks,
Jason
I'm binding the distinct values from each of 9 columns to 9 drop-down lists using a stored procedure. The SP accepts two parameters, one of which is the column name. I'm using the code below, which opens and closes the database connection 9 times. Is there a more efficient way of doing this?
newSqlCommand = New SqlCommand("getDistinctValues", newConn)
newSqlCommand.CommandType = CommandType.StoredProcedure

Dim ownrParam As New SqlParameter("@owner_id", SqlDbType.Int)
Dim colParam As New SqlParameter("@column_name", SqlDbType.VarChar)
newSqlCommand.Parameters.Add(ownrParam)
newSqlCommand.Parameters.Add(colParam)

ownrParam.Value = OwnerID

colParam.Value = "Make"
newConn.Open()
ddlMake.DataSource = newSqlCommand.ExecuteReader()
ddlMake.DataTextField = "distinct_result"
ddlMake.DataBind()
newConn.Close()

colParam.Value = "Model"
newConn.Open()
ddlModel.DataSource = newSqlCommand.ExecuteReader()
ddlModel.DataTextField = "distinct_result"
ddlModel.DataBind()
newConn.Close()

and so on for 9 columns...
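One hedged alternative on the database side: a single procedure that returns all nine DISTINCT lists as consecutive result sets, so the page can do one open and one call, walking the sets with a data reader's NextResult (or filling a DataSet). A trimmed sketch; the table and column names are placeholders shaped after the example:

CREATE PROCEDURE dbo.getAllDistinctValues
    @owner_id INT
AS
BEGIN
    SELECT DISTINCT Make AS distinct_result FROM dbo.Vehicles WHERE owner_id = @owner_id   -- set 1
    SELECT DISTINCT Model AS distinct_result FROM dbo.Vehicles WHERE owner_id = @owner_id  -- set 2
    -- ... one SELECT per remaining column, nine in total ...
END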
I have basic SQL query that returns a one column result set. For each row returned in this result set, I need to pass the value in the column to a stored procedure and get back a result set.
I have 2 solutions, neither of which are very elegant. I'm hoping someone can point me in a better direction.
Solution 1:
Use a cursor. The cons here are that the SP returns a result set on each pass, which generates multiple result sets overall. If there is a way to combine these result sets, I think this solution might work.
Solution 2:
Use a temp table or table variable.
The cons here are, if the schema of the result set returned from the stored procedure changes, the table variable will have to change to accommodate it. This is a dependency I'd rather not create.
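For solution 1, the per-call result sets can in fact be combined: INSERT ... EXEC inside the loop funnels every call's output into one temp table, which is returned once at the end. A sketch, assuming the SP's result-set shape is known (all names hypothetical):

CREATE TABLE #combined (col_a INT, col_b VARCHAR(50))   -- shaped like the SP's output

DECLARE @id INT
DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT id_column FROM dbo.SourceTable   -- the one-column driving query

OPEN c
FETCH NEXT FROM c INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO #combined (col_a, col_b)
    EXEC dbo.TheStoredProc @id              -- each call's rows land in #combined
    FETCH NEXT FROM c INTO @id
END

CLOSE c
DEALLOCATE c

SELECT col_a, col_b FROM #combined          -- one combined result set
DROP TABLE #combined

The schema dependency worried about in solution 2 does not go away, it just moves to #combined; one limitation to note is that INSERT ... EXEC cannot be nested, so this fails if the called SP itself uses INSERT ... EXEC.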
Any help is very much appreciated.