Which Needs Better Performance: Logs Or Database?
Feb 6, 2008
I have a set of disks allocated for a high performance SQL implementation that will entail lots of large queries. My question is do I allocate more IOPS to the logs or to the database?
For example, if I have a 10 disk RAID 1/0 and a 4 disk RAID 1/0 available, which do I allocate to the logs and which do I allocate to the database? Which will require the most IOPS?
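One way to ground this decision, assuming SQL Server 2005 or later, is to measure how the I/O actually splits between data and log files; sys.dm_io_virtual_file_stats accumulates reads and writes per file since the last restart. A rough sketch:

-- Sum I/O per database and file type (ROWS = data, LOG = transaction log).
-- Counters are cumulative since the instance last started.
SELECT DB_NAME(vfs.database_id)  AS database_name,
       mf.type_desc              AS file_type,
       SUM(vfs.num_of_reads)     AS reads,
       SUM(vfs.num_of_writes)    AS writes,
       SUM(vfs.io_stall) / 1000  AS stall_seconds
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
GROUP BY vfs.database_id, mf.type_desc;

Log writes are sequential, so a dedicated mirrored pair often suffices; large scans against the data files tend to benefit more from extra spindles, but the measured split is the safer guide.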
Hi, I'm using SQL Server Management Studio Express and I have made a website that uses a database with stored procedures. When running pages on my website, it takes around 2 seconds to load each page. That's far too long! But where is the problem? I'd like to see a list of all executed stored procedures with their execution time (for each page). This way I can check whether the problem is here. How can I get a log file like this? Thanks!
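If the instance is SQL Server 2005 Express or later, the plan-cache DMVs give per-statement timings without needing a trace; a minimal sketch, assuming the login has VIEW SERVER STATE permission:

-- Slowest cached statements by average elapsed time (microseconds -> ms).
SELECT TOP 20
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms,
       st.text AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_ms DESC;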
I want to automate the offloading of SQL Server 6.5 logs before they become full. I've set this up 'manually' by triggering a job to run when the LogSpaceUsed threshold of the SQLServerLog counter exceeds a certain value, but what I want to do now is to automate the starting of this performance counter. Ideally what I'd like is to (1) start the counter once SQL is up, having (2) first ensured a particular instance of the counter isn't already executing, and (3) have some means of detecting whether this counter ever stops running, in which case we'd have to restart it. This is probably more of an NT question than anything, but if anyone has experience of this please let me know. I have tried to get SQL Server to start the performance counter as it starts up: I used sp_makestartup to define a stored procedure which runs on startup; this uses xp_cmdshell to call a CMD file which loads the saved counter definition (a .pmw file, a Performance Monitor workspace). I thought this should work, but the counter wasn't loaded. This doesn't check whether an existing instance of the counter is already running anyway.
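For the startup piece, here is a rough sketch of the approach the post describes, for SQL Server 6.5; the workspace path is hypothetical, and whether perfmon loads a .pmw passed on its command line depends on the NT version in use:

-- Define a procedure that launches the saved Performance Monitor workspace,
-- then register it to run whenever SQL Server starts.
CREATE PROCEDURE start_log_counter AS
    EXEC master..xp_cmdshell 'start perfmon.exe c:\counters\logspace.pmw'
GO
EXEC sp_makestartup 'start_log_counter'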
There are so many counters to choose from; which ones are the most important to monitor?
Also, if you have your data files and transaction logs set to grow automatically and would like to change this to a fixed size, is there a way to determine how large you should set them? Thanks in advance. :confused:
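For the sizing question, a starting point is to watch how much of the current allocation is actually used over a full business cycle; a sketch using two standard commands:

-- Percentage of each transaction log currently in use, per database.
DBCC SQLPERF(LOGSPACE);

-- Reserved vs. used space for the data side of the current database.
EXEC sp_spaceused;

Sampling these at peak times for a few weeks gives a defensible fixed size: the observed high-water mark plus headroom.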
Hi gurus, I am having problems with restoring an MS SQL database. I have restored the database using Veritas to a different location ('g:\datafiles') in no-recovery mode. When I view the database through Enterprise Manager, it shows the database with a silver icon (loading). I go to Query Analyzer and put in the restore command:

restore log myDatabase
from 'mylog'
with recovery

(I also tried RESTORE FILELISTONLY FROM jobs.) This produces an error:

Server: Msg 3206, Level 16, State 1, Line 1
No entry in sysdevices for backup device 'mylog'. Update sysdevices and rerun statement.

I look in master..sysdevices: no entry for mylog, but then no entries for the log files of any of the other perfectly working databases either. I do have a copy of the log files ('c:\logfiles') in another location. I would like the following help if possible:

* a way to update sysdevices with the log file I wish to apply to my restored database, so it will let my restore go through properly
* a way to apply the logs in 'c:\logfiles' without having to give the restore statement a logical name for the log files (which naturally won't be in sysdevices!)

Please supply Transact-SQL. My thanks, Edwina63 (if you wish to email, please remove the h from edwinah@). P.S. sp_add_log_file_recover_suspect_db will not work on a partially restored database.
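On the second point, RESTORE accepts a physical file directly via FROM DISK, which bypasses sysdevices entirely; a sketch, with the backup file name assumed:

-- Apply a log backup from a file path rather than a logical dump device.
RESTORE LOG myDatabase
FROM DISK = 'c:\logfiles\mylog.trn'
WITH RECOVERY

If there are several log backups to apply, use WITH NORECOVERY on all but the final one.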
In my case I have to log the errors raised by any task in a package to either the Windows event log, a text file or SQL Server. I also need to send an email notification to a group of people telling them about the error.
Now, can I use SSIS package logging for logging the errors to the required destinations? I mean right-clicking on the package, selecting Logging, then adding the required log providers and enabling the events to be logged to them. I think I have to select the log providers up front, so I will not have the liberty of logging the error to a destination whose name is passed as a variable to the package. This is okay with me, though.
Now, what will a custom log provider help me do in this case? Also, can I somehow configure my package to call the Send Mail task every time an error is raised?
Also, one more option could be to develop a package that only does the error handling. It would take in as parameters the error codes and descriptions, the destination to write to, and a flag indicating whether or not to send mail for that particular type of error.
I've got a situation where one of our SQL databases appears to be frequently "starting". The log entry looks like:
Starting up database 'Database'.
It seems to occur at irregular intervals and does not seem to be in line with any other DB activity, i.e. transaction log backups, insertions or reads.
This DB is fairly busy, receiving inserts from our PBX CTI software almost constantly.
Note this is the only DB on the server displaying this behaviour (we've got two named instances running with several databases in each).
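One common cause of "Starting up database" messages at irregular intervals is the AUTO_CLOSE option, which shuts the database down whenever the last connection closes and reopens it on the next one; a sketch to check and, if set, disable it (the database name here is the placeholder from the log entry):

-- Is auto-close on? (1 = yes)
SELECT DATABASEPROPERTYEX('Database', 'IsAutoClose');

-- Turn it off so the database stays open between connections.
ALTER DATABASE [Database] SET AUTO_CLOSE OFF;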
I have a DB that is 1.7 GB. The table data takes approximately 200 MB. The transaction logs were truncated. Where else can this large size be coming from, and how can I confirm?
DB is generally small. ~25 tables, 100 SPs, 10 views, etc.
Note:
I have 4 queues using SQL Notifications, but selecting from them returns no data.
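To see where the 1.7 GB actually sits, it helps to rank objects by allocated pages, since query-notification queues and other internal tables can hold far more space than the visible user tables; a sketch, assuming SQL Server 2005 or later:

-- Top space consumers, including internal and Service Broker tables.
SELECT TOP 10
       OBJECT_NAME(p.object_id)      AS object_name,
       SUM(a.total_pages) * 8 / 1024 AS total_mb
FROM sys.partitions AS p
JOIN sys.allocation_units AS a
  ON a.container_id = p.partition_id
GROUP BY p.object_id
ORDER BY total_mb DESC;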
I'm not an SQL admin. We have a SQL 2005 server that has about 5 DBs on it. One database is maintained primarily by a third party. Often when they need to do upgrades they log in remotely to the desktop of our SQL server. Is there a way to apply permissions to specific databases, like you would with NTFS, so that they can only back up their database and not do anything to any other databases? Thanks.
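SQL Server permissions are indeed scoped per database; a sketch of the usual pattern, with all names and the password hypothetical:

-- A login for the vendor, mapped only into their database.
USE master;
CREATE LOGIN vendor_login WITH PASSWORD = 'Str0ng!Passw0rd';

USE VendorDb;
CREATE USER vendor_user FOR LOGIN vendor_login;

-- db_backupoperator allows BACKUP but nothing else;
-- use db_owner instead if they must run upgrades in this database.
EXEC sp_addrolemember 'db_backupoperator', 'vendor_user';

Because the login is not mapped into the other databases, it cannot touch them. Note this governs SQL access only; remote desktop access to the server itself is a separate Windows permission.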
Hi guys, I just want to ask about the backup and restore database method. What's the best way to back up and restore a database so that all the transaction logs can still be viewed after the database is restored? Currently I back up my database daily for recovery purposes. However, if I restore the backup file on another server and use a SQL log application to view that database's transaction log, it shows that all the previous log entries have been truncated.
Therefore, I want to know: is there any way to get the transaction logs after restoring from a database backup file? I hope to get assistance here as soon as possible. Thank you.
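A full backup only carries enough log to make itself consistent, which is why a log viewer on the restored copy shows nothing older. Keeping the history requires transaction log backups taken alongside the fulls; a sketch with hypothetical names, assuming the FULL recovery model:

-- Daily full backup plus periodic log backups.
BACKUP DATABASE MyDb TO DISK = 'D:\backup\MyDb_full.bak';
BACKUP LOG MyDb TO DISK = 'D:\backup\MyDb_log.trn';

The .trn files can then be inspected, or applied on the other server after a restore WITH NORECOVERY.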
I am using SQL Server 2012. I want to maintain all types of logs in a particular database or server. I want to track every query executed in a particular database, and all other activity.
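In SQL Server 2012 the built-in answer is SQL Server Audit; database-level audit specifications require Enterprise edition there, so treat this as a sketch with hypothetical names and paths:

-- Server-level audit target (a file), then a database-level specification
-- that records every SELECT/INSERT/UPDATE/DELETE/EXECUTE in MyDb.
USE master;
CREATE SERVER AUDIT QueryAudit TO FILE (FILEPATH = 'D:\audits\');
ALTER SERVER AUDIT QueryAudit WITH (STATE = ON);

USE MyDb;
CREATE DATABASE AUDIT SPECIFICATION MyDbActivity
FOR SERVER AUDIT QueryAudit
ADD (SELECT, INSERT, UPDATE, DELETE, EXECUTE ON DATABASE::MyDb BY public)
WITH (STATE = ON);

The captured events can be read back with sys.fn_get_audit_file.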
What is the performance comparison between XML and a database? Will using System.IO to read an XML file be slower than reading data from a database, if I am only reading data and not sorting? Will the RAM/CPU usage be higher when getting data from XML compared to getting it from a database?
Same database server, two databases, one a copy of the other; the original is giving bad performance. The new copy will return 300,000 rows in a second; the original will take thirty seconds to return the same data set. There are 7 users on the bad one and 3 on the good one. The bad one has been reindexed, CHECKDBed and NEWALLOCed with no errors, yet it is still giving very bad performance. Has anyone got any further ideas on what to do? Please help.
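Since reindexing and the consistency checks came back clean, stale optimizer statistics and cached plans are the next usual suspects on the original copy; a sketch, with the table name hypothetical:

-- Refresh distribution statistics, then force plans to recompile.
UPDATE STATISTICS big_table
EXEC sp_recompile big_table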
I'm an operator for backup (and many other things), but I have a problem with SQL 6.5 and ARCserveIT. SQL is the database for ARCserve. Many of the file servers (30 servers) store over 700,000 file names per full backup session (every Friday) in the database (the table is called astpdat).
When I search for one file in the database (via ARCserve, which is like clicking through the Explorer tree), the performance is very, very slow. How can I speed this up?
The database is 5 GB. The server it runs on (an HP LXr 8000) has 4 CPUs and 1 GB of RAM; SQL Server has 100 MB of RAM and a 100 MB tempdb. Thanks for an answer, Joe from (Bratwurst) Germany.
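If the lookups filter on the file-name column of astpdat, an index there is the first thing to check for; a sketch in SQL 6.5 syntax, with the column name assumed since the ARCserve schema isn't documented here:

-- Support name searches against the backup catalog table.
CREATE NONCLUSTERED INDEX ix_astpdat_filename
ON astpdat (filename)

Giving SQL Server more of the machine's 1 GB of RAM would also help a 5 GB database considerably.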
Hi, I wonder if someone can answer a question for me. I'm modifying a database with the purpose of adding a new feature: address change history. My model would consist of one table for keeping client name/logon info (for a public site), and address info in another table, because the login info would likely be more frequently accessed/changed than address updates. Now, a group that does data entry internally through a web interface always needs to see the address.
For the first stage I don't want to change the old table, just have a new one for now. But moving forward, I thought it would be neat to have all address update records in one table and have a profile-type value to distinguish whether data entry or a public website user created the update record.
However, a thought occurred to me: if one table is responsible for showing the current address as well as adding records whenever there is an address change, would it hurt performance? Would I get better performance splitting the record types into two tables, or does it matter, since the table I'm thinking of creating would have no deletions: only insertions, plus modifying an expiry date field so we know which record to use. I'm not a specialist on database performance, so if any of you database gurus out there can advise me on that, that would be GREAT. Thanks a million, guys. Jonah A. Libster
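For what it's worth, the single-table design the post describes usually performs fine when the "current address" lookup is supported by an index; a sketch of one possible shape, all names hypothetical:

-- One row per address version; a NULL expiry_date marks the current one.
CREATE TABLE address_history (
    address_id     INT IDENTITY PRIMARY KEY,
    client_id      INT NOT NULL,
    profile_type   CHAR(1) NOT NULL,   -- 'D' = data entry, 'W' = web user
    street         VARCHAR(100) NOT NULL,
    city           VARCHAR(50) NOT NULL,
    effective_date DATETIME NOT NULL,
    expiry_date    DATETIME NULL
);

-- Keeps "current address for client X" a cheap seek even as history grows.
CREATE INDEX ix_address_current ON address_history (client_id, expiry_date);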
In Oracle I can get performance variables like library cache hits, dictionary cache hits, database buffer reads, redo log buffer reads, etc. from the dynamic system tables.
I want to know how to get the same or related performance details in SQL Server 2000 and 2005 (which parameters, their optimal values, and which table or dynamic view to query).
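The closest equivalents live in master..sysperfinfo on SQL Server 2000 and sys.dm_os_performance_counters on 2005; a sketch for the 2005 form:

-- Buffer-manager counters roughly analogous to Oracle's cache-hit figures.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Buffer cache hit ratio', 'Page life expectancy');

Note that ratio counters report a raw value that must be divided by their matching 'base' counter row to get a percentage.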
I have an ASP.NET application on SQL Server 2005. I have completed indexing all the physical primary and foreign keys, virtual primary and foreign keys, sort-order fields, WHERE-clause fields and so on. On the first day, I indexed only the physical and virtual primary and foreign keys, and I noticed the loading performance improved. So I continued with the remaining index process on the second day. This time, I noticed the loading performance was slower by 0.5 to 1 second. Is there any possibility that loading performance will be slower after indexing? Please advise. Thanks.
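Yes: every additional index is extra work on inserts and updates, so over-indexing can slow a write-heavy page. One way to spot removal candidates, using SQL Server 2005's DMVs, is to compare reads against writes per index:

-- Indexes that are maintained (writes) but rarely used (reads) since restart.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS reads,
       s.user_updates           AS writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY reads ASC;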
Does anyone know any good resources for ideas or scripts for measuring capacity and statistics on a SQL Server 2000 database (or otherwise)? I want to set up SQL script jobs that will email me various statistics every day, and am trying to find a starting point.
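As a starting point on SQL Server 2000, a scheduled job step can mail itself a query's output via SQL Mail; a sketch, assuming SQL Mail is configured and with the address hypothetical:

-- E-mail current file sizes (sysaltfiles reports size in 8 KB pages).
EXEC master..xp_sendmail
    @recipients = 'dba@example.com',
    @subject    = 'Daily database file sizes',
    @query      = 'SELECT name, size FROM master..sysaltfiles';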
First of all, sorry if this is in the wrong section, didn't know where to put it.
I'm doing a university paper comparing Microsoft Access versus SQL Server 2000, and I want to run a benchmark on them to see which is faster. Does anyone know of any applications that will let me do this?
You may think this is silly, since SQL Server is quite obviously faster; the thing is, I can't just say that in my paper. I have to be more specific, so I was hoping I could run some benchmarks and show the scores in the paper.
I know I can run an ASP script that times how long the query takes to run; however, this can't test multiple concurrent users accessing the system (unless I get all my friends' computers around my house, bring up the page and get them to click refresh all at the same time :) ).
I basically want to run a simple SQL SELECT statement on an identical database in both database systems (Northwind), but for multiple users. Does anyone know of such an application?
Or does anyone know of any performance tests that have been done comparing Access with SQL Server? All I can find is material comparing high-end databases against each other (Oracle vs SQL Server vs IBM DB2, etc.). If I can't do my own, I can always use other people's. Cheers!
Hi, I am using a SQL Server database with ASP as the front end. I use the ASP command object to call a SQL stored procedure. The procedure runs a WHILE loop over, say, 100,000 records and, based on IF conditions, calls particular stored procs which process each record. I am running this app on a P4 IBM PC with Windows 2000 Server and IIS on the same machine. The CPU utilisation goes up to 98-99% and the process has started running very slowly of late. Is this slow processing speed a hardware/OS problem, or is it due to the calls to stored procs within a stored proc? How can I optimise the process? Each stored proc called also has IF conditions, table scans, etc.
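Per-row procedure calls are usually the cost here: 100,000 iterations means 100,000 separate executions, each with its own IF logic and table scans. The standard fix is to fold each IF branch into one set-based statement; an illustrative sketch, names hypothetical:

-- Instead of: WHILE loop -> IF type = 1 -> EXEC process_type1 @id ...
-- one statement handles every type-1 record at once.
UPDATE r
SET    r.status = 'processed'
FROM   records AS r
WHERE  r.record_type = 1
  AND  r.status = 'pending';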
Hi there. This is the scenario: I have a heavy-duty database that is being accessed very, very frequently (i.e. 100 times a minute). Now, I would like to make a backup of the database, just in case something goes wrong (for recovery reasons, etc.). My question is: how will making a backup impact the performance of the database, and how can I be sure that the backup is in a consistent state? Thank you.
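A native BACKUP DATABASE is an online operation: it includes enough transaction log to make the copy transactionally consistent as of the backup's end, at the cost of extra I/O while it runs. For the consistency question, a sketch (the CHECKSUM option assumes SQL Server 2005 or later):

-- Checksum pages as they are written, then verify the file is restorable.
BACKUP DATABASE MyDb TO DISK = 'D:\backup\MyDb.bak' WITH CHECKSUM;
RESTORE VERIFYONLY FROM DISK = 'D:\backup\MyDb.bak' WITH CHECKSUM;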
I have the client tools installed on a server and I have registered our 30+ instances, hosted on various servers, in this one MS SQL 2005 Management Studio.
Question:
How can I use this setup to send an e-mail distribution list a nice monthly chart showing the database sizes, memory and CPU utilization of all the registered databases?
From your experience with SQL 2005: is there any free software that can help improve performance or help identify performance bottlenecks? Two examples of performance helpers that I usually use are a maintenance plan that does (check DB > reorganize index > rebuild index > update statistics), and the SQL 2005 Dashboard reports for the reporting side. Do you have any other free tools or tips for performance, or anything that I must have on my SQL 2005 servers?
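For reference, the maintenance-plan steps described translate to plain T-SQL along these lines (the table name is hypothetical); note that a full rebuild makes the preceding reorganize redundant, so most plans pick one or the other:

DBCC CHECKDB (MyDb);                              -- check DB
ALTER INDEX ALL ON dbo.big_table REORGANIZE;      -- reorganize index
ALTER INDEX ALL ON dbo.big_table REBUILD;         -- rebuild index
UPDATE STATISTICS dbo.big_table;                  -- update statistics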
Hi, I am facing a peculiar problem while looking into a live database currently in operation in one of my client's projects. An application that updates 3 tables in the database is failing to update a certain number of fields in one of the tables. This does not happen frequently, and I have checked through the server Performance Monitor that there is no performance lag on the server at any point in time. The tables are indexed on common key fields. Can anybody help me in this regard? Thanks & regards.
We have an application with a SQL Server 2000 back end that is fairly database intensive: lots of fairly frequent queries, inserts, updates, the gamut. The application does not make use of performance hogs like cursors, but I know there are lots of ways the application could be made more efficient database-wise. The server code is running VB6 of all things, using COM+ database interfaces. There are some clustered and non-clustered indexes defined, but I'm pretty sure there's room for improvement there as well.
Some tables have grown into the millions of records in recent months, and performance of the application slowed to a crawl. Optimizing the database helped a little, but not much. We know that several million records in a table is a lot, but one would think that SQL Server should still be able to handle that pretty well. We do have plans to archive a lot of old data, but in the meantime we were hurting for a quick fix.
So we threw hardware at the problem and transferred the database to a new, more powerful server. The performance improvement was dramatic. Transactions were many, many times faster than before. Without implementing any of the other, more difficult performance improvements we have planned, we suddenly became minor heroes. :-)
Well, the honeymoon seems to be somewhat over. While performance is still much better than when the database resided on our old server, performance appears to have degraded rather significantly again. Performance is also not significantly better with fewer users on our system. What the heck?
Yes, the database continues to grow unchecked as we haven't quite got an archive utility in place yet, but the growth is relatively gradual, so you wouldn't think that would be the issue. The database is optimized on a weekly basis, and our web and database servers are both rebooted monthly. Our database administrators don't seem to have answers, so I appeal to the experts reading this forum to maybe offer some clues.
Prior to posting I did a fair amount of research to see what people have suggested in similar situations, and ran this by our database admin. Here's what I can tell you from this research:
- Statistics are updated weekly along with whatever else the database optimization does
- We do not use the "autoshrink" option for automatically shrinking log files
- Regarding preallocating space and setting growth factors for log and data files to minimize time spent allocating disk space, our admin says, "We do allow database files to grow unchecked, but we do monitor growth and manually expand as needed. Autogrow is typically set in 50MB increments or less, as the amount of time it takes to expand this amount is negligible."
- Transaction logging is turned on, and data and log devices are on separate physical disks
- The database server is monitored to ensure no process is hogging all of the CPU, I/O or memory
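Given weekly optimization but million-row growth, one thing worth measuring between the weekly runs is how quickly the hot tables fragment; a sketch for SQL Server 2000, with the table name hypothetical:

-- Scan density near 100% and low logical fragmentation are healthy;
-- rapid decay between weekly runs points at the clustered key choice.
DBCC SHOWCONTIG ('big_table') WITH FAST;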
I have been running a reporting app on a SQL 2000 server, which reads from one large table (roughly 80 million records, growing at around 2 million records a week).
However, it seems that when multiple users access the app, and the one large table is read using a single database user name by up to approximately 25 connections, the system slows to a halt. Essentially, each request in the app opens a new connection to the SQL database using the same database user name. Recently we have been running into performance issues since we increased the number of users of the app.
What would be causing this slowdown, and what could solve this problem?
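With 25 concurrent readers on one large, constantly growing table, reader-versus-writer blocking is a likely culprit on SQL Server 2000, which has no row versioning. If the reports can tolerate dirty reads, one common mitigation is a NOLOCK hint; a sketch with hypothetical names:

-- Read without taking shared locks; results may include uncommitted rows.
SELECT report_date, metric_value
FROM   big_report_table WITH (NOLOCK)
WHERE  report_date >= '20080101';

Connection pooling on the app side and an index matching the reports' WHERE clauses are the other usual checks.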