Just a general question here. I'm designing a web application that might have 50 to 100 million rows or more. Basically it's a simple logging table, each row probably only 24 bytes wide; however, I can see queries taking quite a while to execute.
The query is basically a group by, showing the number of "hits" per day.
Are there any special strategies I should implement? Or is a properly designed structure with indexes likely sufficient (on the right hardware, of course)?
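For scale, a minimal sketch of the shape involved, with hypothetical table and column names: a narrow table plus an index leading on the date column keeps the daily GROUP BY to an index scan rather than a full table scan.

-- Hypothetical narrow logging table (~24 bytes per row)
CREATE TABLE HitLog (
    HitId    int IDENTITY(1,1) NOT NULL,
    HitDate  datetime NOT NULL,
    ObjectId int NOT NULL
)

-- Index leading on the date column supports the daily GROUP BY
CREATE INDEX IX_HitLog_HitDate ON HitLog (HitDate)

-- Hits per day (style 120 truncated to char(10) yields yyyy-mm-dd)
SELECT CONVERT(char(10), HitDate, 120) AS HitDay, COUNT(*) AS Hits
FROM HitLog
GROUP BY CONVERT(char(10), HitDate, 120)
ORDER BY HitDay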
I'm using what looks to be a popular script to grant execute privileges on stored procedures, and it works great as long as the user account that you want to grant to is not a domain account. For example, I need to grant execute to myDomain\dbUsers, but I get a syntax error when the script tries to execute this statement:
SET @SQL = 'GRANT EXECUTE ON [' + @Owner
+ '].[' + @StoredProcedure
+ '] TO myDomain\dbUsers'
Incorrect syntax near '\'.
The script works fine if a plain (non-domain) user account is given. We use Active Directory to manage our access, hence the domain group. Has anyone found a way around this? Thanks in advance.
Tess
Here's the entire script for anyone who's interested:
USE whateverDatabase
GO
DECLARE @SQL nvarchar(4000),
@Owner sysname,
@StoredProcedure sysname,
@RETURN int
-- Cursor of all the stored procedures in the current database
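A possible workaround sketch, assuming the grantee is an AD group of the form DOMAIN\group: bracket-quote the name (e.g. via QUOTENAME) so the backslash no longer trips the parser:

-- Hedged sketch: QUOTENAME brackets each identifier, including DOMAIN\group
SET @SQL = 'GRANT EXECUTE ON ' + QUOTENAME(@Owner)
         + '.' + QUOTENAME(@StoredProcedure)
         + ' TO ' + QUOTENAME('myDomain\dbUsers')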
I am wondering what the advantages of CLR sprocs over T-SQL sprocs are, and so on.
Searching for such comparisons and articles on websites turned up only "how to create CLR sprocs" pieces; none of them discussed what they are used for and in which situations.
I would really appreciate it if you guys could post comments, links, and external articles.
I have a production SQL 2000 box with 24 GB of memory running a 3rd party app. There is a posting process (it takes all the orders and does the math). Before we did the upgrade it was a 15-20 minute process. Nothing has changed, but in the last month it has grown to 60-75 minutes. There were also some Windows updates applied (Win 2003 Adv Server). I realize this is looking for a needle in the haystack, but has anyone seen this issue with either SQL SP4 or the Windows SP for 2003?
The strangest thing happens after upgrading to SQL Server 2005 64-bit (4 cores, 16 GB RAM).
A table-valued function call that usually returns instantly slows down to 25 seconds! The underlying data is unchanged.
Now, if I reload the function (run ALTER FUNCTION, but without any changes to the function), performance is restored for a while, but in about a week it slows down again.
Two questions: 1. Most important: is there anything I could do to preemptively restore the performance, like running a DBCC statement every night? 2. What is the cause of this slowdown?
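For question 1, a minimal sketch of the kind of nightly job often suggested for this pattern, on the theory that stale statistics or a stale cached plan are to blame (object names are hypothetical):

-- Refresh statistics on the table(s) the function reads
UPDATE STATISTICS dbo.MyUnderlyingTable WITH FULLSCAN

-- Flag every cached plan that touches the table for recompile on next use
EXEC sp_recompile N'dbo.MyUnderlyingTable'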
If I remove the TOP 200, this query returns about 2.5 million rows. It combines a lot of records and turns them into much more programmer-friendly results. The query has slowed from 2 seconds to about 13 seconds as the data has grown from about 10k rows to the current couple of million.
Code Block
SELECT TOP 200 *
FROM (
    SELECT [UserProfile].[UserId]
          ,[aspnet_Users].[UserName]
          ,[City]
          ,[State]
          ,[RoleName]
          ,[ProfileItemType].[Name] AS pt_name
          ,[ProfileItem].[Value]
    FROM [UserCriteria]
        ,[aspnet_Users]
        ,[aspnet_Roles]
        ,[aspnet_UsersInRoles]
        ,[Location]
        ,[ProfileType]
        ,[ProfileTypeItem]
        ,[ProfileItem]
        INNER JOIN [UserProfile]
            ON [ProfileItem].[ProfileId] = [UserProfile].[ProfileId]
        INNER JOIN [ProfileItemType]
            ON [ProfileItem].[ProfileItemTypeId] = [ProfileItemType].[ProfileItemTypeId]
    WHERE [UserProfile].[UserId] IN (
            SELECT [UserCriteria].[UserId]
            FROM [UserCriteria]
            WHERE Zipcode IN (SELECT [Zipcode] FROM [ZipcodeProximitySQR]('89108', 150)))
        AND [UserProfile].[UserId] = [aspnet_Users].[UserId]
        AND [UserCriteria].[UserId] = [UserProfile].[UserId]
        AND [Location].[Zipcode] = [UserCriteria].[Zipcode]
        AND [aspnet_UsersInRoles].[UserId] = [aspnet_Users].[UserId]
        AND [aspnet_UsersInRoles].[RoleId] = [aspnet_Roles].[RoleId]
) AS t
PIVOT (
    MIN([Value])
    FOR pt_name IN ([field1], [field2], [field3], [field4])
) AS pvt
ORDER BY RoleName DESC, NEWID()
In the line FOR pt_name IN ([field1], [field2], [field3], [field4]) I changed the values from the long names to field1, field2, etc., because the real names were irrelevant here and just confusing.
I am currently working on a report with 3 cascading parameters. These three parameters depend on datasets whose data is retrieved from a large table with SELECT DISTINCT. As the table grows larger, selecting values for these parameters causes postbacks and feels slow from the user's perspective. I am looking for a way to reduce postbacks. Can anyone suggest a way to retrieve the dataset all at once and filter it without causing a postback? Any other way to improve the performance would also be greatly appreciated.
We have an application in which one particular stored procedure goes from 0.3-second response times to 20-40 second response times, suddenly, at seemingly random times of the day. Recompiling that particular stored proc fixes the problem temporarily.
The 3 main tables have between 500K and 1.5M records, are defragged daily, and contain the past 42 days of work. Old records are deleted once daily. The slowdowns do not coincide with the deletions, occurring without a clear pattern. Nothing else is going on on that server; it is dedicated to this one app. Nothing shows in the event log. It is clustered Windows 2003, with SQL Server 2005 RTM; SP2 of SQL 2005 is due for an install next Monday. This behavior was also seen when the app was on a different server running SQL2K SP3, so I think the underlying problem is some sort of design issue with the app, not a SQL Server bug. Has anyone seen something like this, and what suggestions do you have for a permanent fix? I think that a recompile of the sp causes a new execution plan, but why would that be necessary daily, or even several times a day? 30-50 users are banging away at it. The app is an order entry system. The tables contain what are basically order histories and label data. I am running out of good ideas. Thanks for any help.
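This pattern (fast until a sudden slowdown, cured by a recompile) is commonly attributed to parameter sniffing: a plan compiled for an atypical parameter value stays in cache. Two frequently suggested mitigations, sketched with hypothetical proc and table names since the actual code isn't shown:

-- Option 1 (SQL 2005): recompile the hot statement on every call,
-- trading a little CPU for plan stability
ALTER PROCEDURE dbo.GetOrderHistory
    @CustomerId int
AS
SELECT OrderId, LabelData
FROM dbo.OrderHistory
WHERE CustomerId = @CustomerId
OPTION (RECOMPILE)
GO

-- Option 2: copy the parameter into a local variable so the optimizer
-- plans from average statistics instead of the sniffed value
ALTER PROCEDURE dbo.GetOrderHistory
    @CustomerId int
AS
DECLARE @LocalId int
SET @LocalId = @CustomerId
SELECT OrderId, LabelData
FROM dbo.OrderHistory
WHERE CustomerId = @LocalId
GO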
We seem to be having a rather strange issue. We just finished migrating from 2000 to 2005. I've been working on converting packages from DTS to SSIS. Everything has been going well in testing, but a problem has shown up going into production. The first package I deployed ran significantly slower on the production server, going from about 7 seconds to 90 seconds. The production server is much more powerful, so this was rather confusing. The major difference is that the server is running 64 bit.
After searching around, I haven't found much that is helpful. To test things out, I created a blank package that does absolutely nothing. All it contains is a single connection manager (OLE DB). Executing this package takes 45 seconds. If I remove the connection manager, it executes in 0.1 seconds. Setting DelayValidation to TRUE has no effect. If the connection is to a nonexistent server, or if an incorrect login is used, the execution time is still 45 seconds. Adding a second connection manager increases execution time to 75 seconds. The package runs on my local computer in 0.1 seconds.
Any ideas what could be going on here? I can't believe that no one else has seen an issue like this. Is there some sort of strange configuration issue going on here?
When running a test against SQL Server 2005 Express, most of the connections work fine, but some don't. I get this exception:
When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
It seems to me I am reaching some kind of limitation. What can I do to increase the number of connections?
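A minimal diagnostic sketch, assuming the cap is the 'user connections' server option rather than something in the app:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- 0 = dynamic, up to the 32,767 maximum
EXEC sp_configure 'user connections'

-- How many connections are currently open, per database
SELECT DB_NAME(dbid) AS dbname, COUNT(*) AS connections
FROM master..sysprocesses
GROUP BY dbid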
Hi, our app uses an NT4 SQL 2000 box and a Win2k IIS box. To fulfil the text-searching requirements we've had to use dynamic SQL (created within a large SP). This query searches about 8000 rows in a flat table (which we create and keep up to date using a scheduled SP that trawls the underlying tables). It's a complicated SP, and it can return all 8000 rows (I think we're winning the battle to have this capped, though). When we get about 50 users hitting the app and doing searches simultaneously, performance drops severely (20+ secs for a query rather than ~2 secs), and we get a lot of errors which report:

Source: .Net SqlClient Data Provider
Message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

We also get a lot of Invalid Viewstate errors, which I'm totally baffled by, and the odd IIS Access Denied error, as if authentication (NTLM) is timing out. Neither the web nor the SQL machine is being hit very hard; the web server in particular is not under any strain whatsoever. It doesn't appear to be related to connection pooling, judging from the logged error messages we're getting. Does anyone have ANY suggestions as to where I should be looking?!
I'm trying to take data and push it into a new database.
This data was taken from a database where there were many constraints, FKs, etc., quite a complicated structure, and it was sent to us with just the data in the tables and none of those constraints or FKs!
This data now needs pushing into the original database structure, so all the data will live within the constraints, but using an insert SPROC throws some FK errors. What is the best way to push this data into its structure?
Somebody suggested DTS, but this doesn't take the constraints into account; also, we need to be able to run this at any time for a number of different datasets. Can anybody help? (If you can decipher my explanation.)
Many thanks for reading, please indicate if you need more info.
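One commonly suggested pattern, sketched here with a hypothetical table name: disable the FK and check constraints on each target table, load the data, then re-enable WITH CHECK so everything is validated at the end.

-- Disable all FK/check constraints on the target table
ALTER TABLE dbo.OrderDetail NOCHECK CONSTRAINT ALL

-- ... bulk load the data here ...

-- Re-enable and re-validate; this fails loudly if any rows violate the FKs
ALTER TABLE dbo.OrderDetail WITH CHECK CHECK CONSTRAINT ALL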
I am doing a heavy bulk insert task into one database. When I checked Activity Monitor, I saw it was suspended with wait type PAGEIOLATCH_EX, so it seems to have stopped itself. Do any experts have a good idea of what is going on?
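For context, PAGEIOLATCH_EX usually means the session is waiting for a data page to come in from disk so it can be modified; the insert is I/O-bound rather than truly stopped. A quick sketch for watching the wait (the spid is hypothetical):

-- Watch what the bulk-insert session is waiting on
SELECT spid, status, waittime, lastwaittype, physical_io
FROM master..sysprocesses
WHERE spid = 55   -- hypothetical session id of the bulk insert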
What would be the best procedure for the following situation?
Heavily trafficked database, used daily, with gigabytes of growth every day, so full backups every night are needed. The vendor recommends not taking log backups but just copying the log files to another location. Will this help avoid degrading performance during business hours?
If I don't take log backups, I can't recover to a point in time if needed; also, the log files can then grow faster, and I will have to shrink them more often.
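For reference, a sketch of the standard alternative: under the FULL recovery model, frequent log backups both preserve point-in-time restore and truncate the inactive log so the file stops growing (database name and paths hypothetical):

-- Nightly full backup
BACKUP DATABASE MyDb TO DISK = 'E:\Backup\MyDb_full.bak'

-- Every 15-30 minutes during business hours; each log backup
-- truncates the inactive portion of the log
BACKUP LOG MyDb TO DISK = 'E:\Backup\MyDb_log.trn'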
I have a specific job that should run with decreased resource usage when the workload on the SQL Server is heavy. This is a heavy job with no particular response-time requirement. It is important that the rest of the application isn't affected with longer response times while this job is running. How can this job be handled from the application, without having to create a separate batch job?
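There is no per-query resource cap in this era of SQL Server (Resource Governor only arrives in SQL 2008), but one commonly suggested partial measure is to cap the heavy query's parallelism, which tends to shrink its CPU and memory-grant footprint. A sketch with a hypothetical query:

-- Run the heavy aggregation single-threaded to reduce its footprint
SELECT OrderId, SUM(Amount) AS Total
FROM dbo.BigWorkTable       -- hypothetical table
GROUP BY OrderId
OPTION (MAXDOP 1)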
I find that replication puts many sprocs with the sp_ prefix in our database. Do you think that should be changed? I have been told not to use sp_. See http://www.sqlmag.com/Article/ArticleID/23011/sql_server_23011.html.
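For what it's worth, the sp_ advice applies to user-created procs (sp_ names are resolved against master first, costing an extra lookup); replication's own procs are system-generated and are generally best left alone. A quick sketch to list them:

-- List procedures whose names start with sp_
SELECT name
FROM sysobjects
WHERE type = 'P'
  AND name LIKE 'sp[_]%'
ORDER BY name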
SELECT COUNT(id) AS viewcount
FROM location_views
WHERE createdate > DATEADD(dd, -30, GETDATE()) AND objectid = 357

SELECT COUNT(id) * 2 AS clickcount
FROM extlinks
WHERE createdate > DATEADD(dd, -30, GETDATE()) AND objectid = 357
But I want to add the COUNT statements, so this is what I did:
SELECT COUNT(vws.id) + COUNT(lnks.id) * 2 AS totalcount
FROM location_views vws, extlinks lnks
WHERE (vws.createdate > DATEADD(dd, -30, GETDATE()) AND vws.objectid = 357)
   OR (lnks.createdate > DATEADD(dd, -30, GETDATE()) AND lnks.objectid = 357)
Turns out the query becomes immensely slow. There must be something I'm doing wrong here that results in such bad performance, but what is it?
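For comparison, a sketch of the usual rewrite: the comma join above forms the cross product of the two tables before filtering (every qualifying view row paired with every link row), which is what makes it crawl; two independent scalar subqueries avoid the join entirely:

SELECT (SELECT COUNT(id)
        FROM location_views
        WHERE createdate > DATEADD(dd, -30, GETDATE()) AND objectid = 357)
     + (SELECT COUNT(id)
        FROM extlinks
        WHERE createdate > DATEADD(dd, -30, GETDATE()) AND objectid = 357) * 2
       AS totalcount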
As part of my SSIS package, a list of sites is created that need to be created on a remote machine, let's say 1000 sites. I need to pass this list to a web service so the web service sitting on that machine creates these sites for me. My SSIS package does not run frequently, so I can sacrifice a little time to get better functionality.
I need to move the sites that are not created (for any reason) by the web service into one table and the successfully created sites into another table, so I need to get confirmation for each site from the web service.
Which option is better?
1) Calling web service for every single record (site) and get the confirmation and then based on the confirmation I move the records accordingly. I know this might be very time consuming, but as I said my SSIS package might only run every six months
OR
2) Sending the records to the web service in a batch and getting the results back. I don't know how to do this, though.
I have a question: how do I protect my website from SQL injection? Right now most of my queries are in this form:

Public Sub updateCredits(ByVal deduct As Int16, ByVal userid As Guid)
    Dim cmd As New SqlCommand
    Dim con As New SqlConnection(ConfigurationManager.ConnectionStrings("LocalSqlServer2").ConnectionString)
    cmd.Connection = con
    cmd.CommandType = Data.CommandType.Text
    cmd.CommandText = "Update [userprofile] SET credits = credits - @c WHERE userID = @id"
    cmd.Parameters.Add("@id", SqlDbType.UniqueIdentifier).Value = userid
    cmd.Parameters.Add("@c", SqlDbType.Int).Value = deduct
    Using con
        con.Open()
        cmd.ExecuteNonQuery()
        con.Close()
    End Using
    cmd.Dispose()
End Sub

Is that a safe way to do it, using parameters and such? Or should I completely switch over to stored procedures, as I hear they are safer?
I know you can use SQL Profiler to see what SQL code actually executed when you run a sproc, but is there any way to get this information in ASP.NET? After executing a sproc, I'd like to send the SQL that was sent to my Audit class. Is there any way to retrieve this in ASP.NET itself? Cheers!
While trying to assign a table name to a variable and then use the variable in a SELECT statement (i.e. SELECT sys_id FROM @table_name), it fails with an incorrect syntax error.
How can I use a variable as the table name later within the sproc?
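A table name can't come from a variable directly in T-SQL; the usual workaround is dynamic SQL. A minimal sketch (the table name is hypothetical), with QUOTENAME guarding the identifier:

DECLARE @table_name sysname,
        @sql nvarchar(4000)

SET @table_name = N'MyLoggingTable'   -- hypothetical table name

-- Build and run the statement; QUOTENAME brackets the identifier
SET @sql = N'SELECT sys_id FROM ' + QUOTENAME(@table_name)
EXEC sp_executesql @sql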
Is it possible to have a sproc with an input parameter holding a column name, and have that column name inserted into an EXEC statement that runs and provides its output as an OUTPUT parameter instead of a result set?
I can get the sproc to take the column name as a parameter and run the EXEC, but I cannot figure out how to assign the "dynamic SQL" output to an OUTPUT variable instead of returning the result set.
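A sketch of the usual trick: sp_executesql (unlike plain EXEC) accepts a parameter list, including OUTPUT parameters, so the dynamic statement can assign into a variable (proc and table names are hypothetical):

CREATE PROCEDURE dbo.GetMaxOfColumn
    @ColumnName sysname,
    @Result int OUTPUT
AS
DECLARE @sql nvarchar(4000)
SET @sql = N'SELECT @out = MAX(' + QUOTENAME(@ColumnName) + N') FROM dbo.MyTable'
-- The inner @out parameter is bound to the outer @Result variable
EXEC sp_executesql @sql, N'@out int OUTPUT', @out = @Result OUTPUT
GO

-- Usage
DECLARE @v int
EXEC dbo.GetMaxOfColumn @ColumnName = 'sys_id', @Result = @v OUTPUT
SELECT @v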
I know that SQL Server caches stored procedures in memory, where it keeps the compiled execution plan. How does it work with views? Does SQL Server store/cache the views? Just wondering. Thanks.
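As far as I know, a view has no cached plan of its own; its definition is expanded into whatever query references it, and that query's plan is cached like any other. A quick sketch for peeking at the plan cache on SQL 2000/2005:

-- Inspect cached plans; queries that reference a view show up
-- as ordinary ad hoc or procedure plans
SELECT cacheobjtype, objtype, usecounts, sql
FROM master..syscacheobjects
ORDER BY usecounts DESC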
Can someone explain the generated sprocs of VS2005 if a column can be nullable?

DependentOfSeqID = @Original_DependentOfSeqID OR ((@IsNull_DependentOfSeqID = 1) AND (DependentOfSeqID IS NULL))

In VS2003 the generated sprocs would be:

DependentOfSeqID = @Original_DependentOfSeqID OR ((@Original_DependentOfSeqID IS NULL) AND (DependentOfSeqID IS NULL))

Which is the best?
Due to a business rule change, I had to take what was one column in a table and split it off into a new table. Now I need to find every sproc that uses that column and change those sprocs. Is there a way to sift through the sprocs to search for a "phrase" (the column name), other than reading through every one manually?
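A sketch of the usual approach on SQL 2000/2005 (the column name is hypothetical); note that syscomments stores long definitions in 4000-character chunks, so a name spanning a chunk boundary can be missed:

SELECT DISTINCT OBJECT_NAME(c.id) AS proc_name
FROM syscomments c
    JOIN sysobjects o ON o.id = c.id
WHERE o.type = 'P'
  AND c.text LIKE '%MyOldColumn%'   -- hypothetical column name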
There's a concept in software dev named cyclomatic complexity, which measures the complexity of code by its number of decision points. This would be measured by the # of if statements, nested if statements, etc. in a method.
Do SQL queries have any equivalent? For example, # of joins, # of conditions, etc., factoring into a complexity metric that indicates how complex, risky, or error-prone a sproc might be?
Anyone have the code that would allow me to see if any of my sprocs contain references to a function? I imagine it would be something like:

SELECT OBJECT_NAME(id) FROM syscomments WHERE CHARINDEX('ufnName', text) > 0
Is it possible to roll back changes made to the DB when debugging a T-SQL sproc in VS2005? I.e., step through the sproc, then hit rollback, and be able to step through it again in the same state.
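The debugger has no rollback button itself, but one workaround sketch: debug a small wrapper script that opens a transaction, calls the sproc, and rolls back, so every run starts from the same state (the proc name is hypothetical):

BEGIN TRANSACTION

EXEC dbo.MySprocUnderTest   -- hypothetical proc being debugged

-- Inspect intermediate state here if needed, then undo everything
ROLLBACK TRANSACTION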