I heard that SQL Server 7.0 has problems when the database reaches
50-100 GB, in areas such as backup, transaction logging, and database
admin, and that by 100 GB parallel queries are also affected.
Is this true? Where can I get information on this?
I have a database approximately 30 GB in size which needs to be moved from one SQL Server to another. Does anyone know the most efficient way of doing this, other than backing up to tape?
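A minimal sketch of the backup-to-disk route (detach/copy/attach is the other common option); the database name, paths, and logical file names below are placeholders:

BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase.bak'
WITH INIT;

-- Copy the .bak file across the network, then on the destination server
-- (check the logical file names first with RESTORE FILELISTONLY):
RESTORE DATABASE MyDatabase
FROM DISK = 'E:\Backups\MyDatabase.bak'
WITH MOVE 'MyDatabase_Data' TO 'E:\Data\MyDatabase.mdf',
     MOVE 'MyDatabase_Log'  TO 'F:\Logs\MyDatabase_log.ldf';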
I have been experimenting with SQL Server 2005 partitions. I loaded a terabyte of information into 2 tables. The first holds the document information and the second holds the actual binary document (in this case pdf). Most of the documents are about 1 megabyte in size, but the largest is 212 megabytes.
SQL Server has no problem storing the blobs. The problem occurs when I attempt to get the data.
I ran some quick tests to see how fast I could pull the documents out. The largest took about 24 seconds; the 1 MB documents are sub-second.
Here is how the 212 meg doc breaks down:
Time to load datatable: 18.79 seconds
Time to load byte array: 3.84 seconds
Time to write and open document: 0.01 seconds
If I access the file from a file server, the time is 0.04 seconds to begin showing the document.
As you can see, the longest time period is related to retrieving the data from SQL, and it is much slower than launching it from disk across the network. (Note: the SQL server and file server used to test are next to each other.)
My question is, how can I speed up the access from SQL Server? I believe the keys are "partition aligned". Any suggestions would be appreciated.
I will add the table definitions and partition information as a reply since only 5000 chars are allowed in the post.
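One approach that is sometimes suggested for very large values is to stream the blob back in fixed-size chunks instead of materializing the whole row into a DataTable, with the client consuming the chunks through a sequential-access reader. A hedged T-SQL sketch, with hypothetical table, column, and key names:

DECLARE @DocumentId int, @offset bigint, @chunk int, @len bigint;
SET @DocumentId = 12345;      -- hypothetical key value
SET @offset = 1;
SET @chunk = 1048576;         -- read 1 MB per round trip

SELECT @len = DATALENGTH(DocumentBody)
FROM dbo.DocumentBlob
WHERE DocumentId = @DocumentId;

WHILE @offset <= @len
BEGIN
    -- each SELECT returns the next slice of the varbinary(max) column
    SELECT SUBSTRING(DocumentBody, @offset, @chunk) AS Chunk
    FROM dbo.DocumentBlob
    WHERE DocumentId = @DocumentId;

    SET @offset = @offset + @chunk;
END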
I have a 50 GB OLTP production database that currently takes roughly 50 minutes to back up (a normal SQL flat-file backup to disk).
This database will grow to roughly a terabyte by next year.
My major concern is how I will be able to back up this DB in 2 hours or less when it is that big.
I have been checking out my options, in terms of SAN snapshots/clones, as well as multiple backup devices and a differential/filegroup/full backup strategy.
What I want to know is: if anyone out there is backing up VLDBs, what strategies/methods/tools are you using, including 3rd-party tools, for faster, smaller backups?
Any pointers/best practices for VLDB backups would be greatly appreciated.
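For the multiple-backup-devices option mentioned above, striping the full backup across several files on separate spindles/LUNs is worth timing; a hedged sketch with a placeholder database name and paths (on SQL 2008+, WITH COMPRESSION can shrink the files further):

BACKUP DATABASE BigOltpDb
TO DISK = 'G:\Backup\BigOltpDb_1.bak',
   DISK = 'H:\Backup\BigOltpDb_2.bak',
   DISK = 'I:\Backup\BigOltpDb_3.bak',
   DISK = 'J:\Backup\BigOltpDb_4.bak'
WITH INIT, STATS = 5;

-- Differentials during the week keep the long full backup to the weekend
BACKUP DATABASE BigOltpDb
TO DISK = 'G:\Backup\BigOltpDb_diff.bak'
WITH DIFFERENTIAL, INIT, STATS = 5;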
Does anyone have experience/advice with large databases (5-10 GB)? If so, I was wondering about the performance and other benefits of spanning a large database across multiple devices (different disks). Would anyone vote for or against doing this?
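On releases that use files and filegroups rather than devices, spreading a database across disks looks roughly like the sketch below; database, filegroup, file names, and paths are all placeholders:

ALTER DATABASE MyDb ADD FILEGROUP SecondaryFG;

ALTER DATABASE MyDb
ADD FILE ( NAME = MyDb_Data2,
           FILENAME = 'E:\Data\MyDb_Data2.ndf',
           SIZE = 2GB,
           FILEGROWTH = 512MB )
TO FILEGROUP SecondaryFG;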
I have a production database that is in the low-gigabyte size and growing steadily. No issue there. I wish to completely refresh the development database daily on a second server. What is going to be the fastest, easiest way to do this without hindering performance on the production system?
Thanks,
Craig
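A hedged sketch of the common approach: a nightly backup of production restored over the dev copy (names and paths below are placeholders). The backup runs online, so the cost to production is mostly I/O during the backup itself:

-- On the production server (scheduled as a nightly job):
BACKUP DATABASE ProdDb
TO DISK = '\\devserver\refresh\ProdDb.bak'
WITH INIT;

-- On the development server:
RESTORE DATABASE ProdDb_Dev
FROM DISK = 'D:\refresh\ProdDb.bak'
WITH REPLACE,
     MOVE 'ProdDb_Data' TO 'D:\Data\ProdDb_Dev.mdf',
     MOVE 'ProdDb_Log'  TO 'D:\Logs\ProdDb_Dev.ldf';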
I need to manage the negative performance implications of defragmenting a 1 TB+ DB. I want to perform an index reorganization if fragmentation is no higher than 30%, and an index rebuild if the fragmentation exceeds 30%.
Firstly, can anyone recommend a script which uses the sys.dm_db_index_physical_stats DMV to ascertain the fragmentation level? Secondly, is there a technique I can employ to prevent the ONLINE operation from completely killing performance on a 24/7 production system?
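A hedged sketch of the fragmentation check with the 30% rule applied; it only generates the ALTER INDEX commands for review rather than running them, and ONLINE = ON assumes Enterprise Edition:

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE WHEN ips.avg_fragmentation_in_percent > 30
            THEN 'ALTER INDEX [' + i.name + '] ON ['
                 + OBJECT_NAME(ips.object_id) + '] REBUILD WITH (ONLINE = ON)'
            ELSE 'ALTER INDEX [' + i.name + '] ON ['
                 + OBJECT_NAME(ips.object_id) + '] REORGANIZE'
       END AS suggested_command
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0
  AND ips.avg_fragmentation_in_percent > 5;

Running the generated rebuilds with a reduced MAXDOP, e.g. WITH (ONLINE = ON, MAXDOP = 2), is one way to keep them from monopolizing the box.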
I've got a few VLDBs that we want to make smaller. Since the tables are running on legacy stuff, basically everything is made with ints and chars, and it's horribly inefficient.
The problem I came across is that when I made a new table with better data types and copied the data from the old table, the new table was exactly the same size (excluding the index size). It was estimated that a total of ~20 GB would be saved with this change. As it turned out, 0 bytes were saved by the data type changes.
Why are the two tables the same size, even though one has much more efficient data types?
If you want more information about the table I'm using:
391 columns, 50,147,035 rows, 65,295.625 MB in size.
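One hedged sanity check before digging further: compare the two tables with sp_spaceused using @updateusage, so stale allocation counts don't skew the comparison (the table names below are placeholders):

EXEC sp_spaceused @objname = 'dbo.OldLegacyTable', @updateusage = 'true';
EXEC sp_spaceused @objname = 'dbo.NewTypedTable',  @updateusage = 'true';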
I have been using AlwaysOn AGs for a long time now and currently have about 10 TB of data across 120 databases and 3 AG groups for any application that is on SQL 2012, with great success. Each AG group is running on patch level 11.0.5058.0 with 2 synchronous replicas (on different SANs) in the primary data center and 1 async replica in DR. Migration has been a non-issue because every database so far was small enough to fit into my maintenance window, which is 12-4 AM on Saturday morning.
My issue is that my last application to migrate to 2012 includes a 4 TB TDE-encrypted database, which is about 10x larger than any of the previous ones I have migrated. The database takes 4 hours to back up after extensive tuning (I hate TDE!!).
The restore to the primary replica is quick because of incremental seeding, but the issue comes from having to back up the database before adding it to the availability group. 4 hours is my exact outage window and I can't get any more. My plan to migrate the application is:
First outage window:
1) Restore the database from 2008 to the 2012 primary replica
2) Change the application A record (or CNAME, not sure which) to the primary replica
3) Run the database on a single node until the next outage window

A week later:
1) Add the database to the availability group
2) Change the A record/CNAME to the listener
What I don't like about this is that I am going an entire week with 1 node instead of 3, which is worrisome. I would love to hear how you would accomplish this, or any comments from people who have worked with VLDBs in availability groups and what you liked/hated/loved about doing it. I am trying to go all in on this software and have loved it so far, but I am getting worried when it comes to the VLDB migration.
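For reference, the "week later" step on SQL 2012 is roughly the following sketch; the AG, database, and path names are placeholders, and the TDE certificate must already exist on every replica before the restores:

-- On each secondary replica:
RESTORE DATABASE [BigTdeDb]
FROM DISK = 'Z:\Staging\BigTdeDb.bak'
WITH NORECOVERY;

RESTORE LOG [BigTdeDb]
FROM DISK = 'Z:\Staging\BigTdeDb_log.trn'
WITH NORECOVERY;

-- On the primary replica:
ALTER AVAILABILITY GROUP [MyAg] ADD DATABASE [BigTdeDb];

-- Back on each secondary replica, join the database to the group:
ALTER DATABASE [BigTdeDb] SET HADR AVAILABILITY GROUP = [MyAg];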
The column I'm adding needs to be part of the clustered PK (it will be the last of three columns) so I need to recreate all the indexes.
My DB is set to the FULL recovery model with ALLOW_SNAPSHOT_ISOLATION ON. I've tried two methods so far.
Method 1:
BEGIN TRANSACTION

CREATE TABLE dbo.Tmp_copyoftablewithnewfield ( ) ON PRIMARY

IF EXISTS (SELECT * FROM dbo.originaltable)
    EXEC('INSERT INTO dbo.Tmp_copyoftablewithnewfield (<original fields>)
          SELECT <original fields> FROM dbo.originaltable WITH (HOLDLOCK TABLOCKX)')
GO

DROP TABLE dbo.originaltable
GO

EXECUTE sp_rename N'dbo.Tmp_copyoftablewithnewfield', N'originaltable', 'OBJECT'
GO

<recreate PK constraint>
<rebuild indexes>

COMMIT
Pros: Lets me add the new field in the spot I'd like it (not a big deal).
Cons: Tons of wasted space and time. It took about 15 hours.
Method 2:

SET XACT_ABORT ON
GO
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
<drop PK constraint>
<drop indexes>

ALTER TABLE [dbo].[originaltable]
    ADD [newfield] [tinyint] NOT NULL
    CONSTRAINT [DF_originaltable_newfield] DEFAULT ((1))
Pros: No making a copy of the entire table, which takes up 200 GB more space in the db data file.
Cons: My tempdb grew to accommodate the row-versioning info for every row in the 200 GB table. It took over 30 hours.
A lot of time and disk space is wasted with both.
Since the db is going to be unavailable to users, I have some flexibility here. I was considering turning ALLOW_SNAPSHOT_ISOLATION OFF, trying method 2 again (which should stop the versioning in tempdb), and then turning it back on.
I was also curious whether setting the database recovery model to SIMPLE would cut down on db log usage; I could then set it back to FULL when done.
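A minimal sketch of those two toggles, assuming a placeholder database name; note that dropping to SIMPLE breaks the log backup chain, so a full or differential backup is needed after switching back to FULL:

ALTER DATABASE [MyDb] SET ALLOW_SNAPSHOT_ISOLATION OFF;
ALTER DATABASE [MyDb] SET RECOVERY SIMPLE;

-- ... run the schema change here ...

ALTER DATABASE [MyDb] SET RECOVERY FULL;
ALTER DATABASE [MyDb] SET ALLOW_SNAPSHOT_ISOLATION ON;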
Do these really need to be in a transaction? If there's some hardware failure or something unexpected I can just restore from backup and do the conversion again. If the presence of the transaction itself is causing more disk usage for logging or any other slowdown, I think I'd rather do without.
Given the amount of time this conversion takes, I wanted to get some feedback other than "just try it" before doing any new tests.
Hi, I'm a Jr. DBA and have been given an assignment by my lead to find information on the following. We are to migrate an existing db of size 4 TB to a Dell PowerEdge 2950 (memory: up to 32 GB). OS: Windows Server 2003 Std Edition x64 SP2. DB: SQL Server Enterprise Edition x64.
I am to find out how to design the db to provide optimum performance and failover, and to allow for the growth of the db.
1) What would be the recommended RAID settings?
2) Where should tempdb be placed?
3) Should we do clustering, and why?
4) How would data partitioning help?
5) Any other aspects to be considered for sizing the db?
6) Should data files and the log file be placed on separate physical disks?
7) Indexing?
I have read many sites. I would appreciate it if someone could write suggestions and opinions based on their current db design spec or previous experience, picking out the best db design points. Thank you.
Hi, I would like to delete data from a 750-million-row table in chunks of 10,000, without blocking the users. As ours is a 24/7 shop, I do not want to block the users for a long time. An answer for this is highly appreciated. Thanks, Samna
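A hedged sketch of the usual chunked-delete loop; the table name and purge filter are placeholders, and on SQL 2000, SET ROWCOUNT 10000 plays the role of DELETE TOP:

DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000)
    FROM dbo.BigTable
    WHERE CreatedDate < '20240101';   -- whatever identifies the rows to purge

    SET @rows = @@ROWCOUNT;

    -- short pause so other sessions and log backups can get through
    WAITFOR DELAY '00:00:01';
END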
Hello everyone,

I have a very complex performance issue with our production database. Here's the scenario. We have a production web server and a development web server. Both are running SQL Server 2000.

I encountered various performance issues on the production server with a particular query. It would take approximately 22 seconds to return 100 rows, that's about 0.22 seconds per row. Note: I ran the query in single-user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second.

I took a look at the query execution plans and found that they were exactly the same in both cases. Then I took a look at the various indexes, and again I found no differences in the table indices.

If both databases are identical, I'm assuming that the issue is related to some external hardware issue like disk space, memory, etc., or it could be an OS software related issue, like service packs, SQL Server configurations, etc.

Here's what I've done to rule out some obvious hardware issues on the prod server:
1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive. There is 55 GB of free space on the disk.
2. Applied the SQL Server SP4 service pack.
3. Defragmented the primary hard drive.
4. Applied all Windows Server 2003 updates.

Here are the prod server's system specs:
2x Intel Xeon 2.67 GHz
Total physical memory 2 GB, available physical memory 815 MB
Windows Server 2003 SE w/ SP1

Here are the dev server's system specs:
2x Intel Xeon 2.80 GHz
2 GB DDR2-SDRAM
Windows Server 2003 SE w/ SP1

I'm not sure what else to do; the query performance is an order of magnitude different and I can't explain it. To me it is a hardware or operating system related issue. Any ideas would help me greatly!

Thanks,
Brian T
We are trying to create some alerts in our SQL Server 2014 BI edition. The issue is that after I choose the "Type" of "SQL Server performance condition alert", nothing is listed in the "Object" list box. SQL Server event alerts are working; the issue is only with "SQL Server performance condition alert".
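Two hedged things to try while the GUI list is empty: check whether the instance can see its own performance counters at all, and script the alert with sp_add_alert instead of the dialog. The counter, threshold, and alert name below are only examples:

-- If this returns 0 rows, the SQL Server performance counters are not loaded,
-- which would also leave the GUI's Object list empty
SELECT COUNT(*) AS counter_rows
FROM sys.dm_os_performance_counters;

-- Scripted performance condition alert (format: object|counter|instance|comparator|value)
EXEC msdb.dbo.sp_add_alert
    @name = N'User connections over 500',
    @performance_condition = N'SQLServer:General Statistics|User Connections||>|500',
    @enabled = 1;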
I can't find 'SQL Server: SSIS Pipeline' performance object in performance monitor on a 64-bit SQL Server. I see it on a 32-bit. Does anybody know why?
I set up the collector and specify my AD account as the Run As account in the Collector Set - Properties - General screen. My AD account is a local admin of the remote server.
However, the collector does not seem to work. Although the collector set is shown as running, the .blg file stays at 64 KB. If I open it, there is nothing inside (no counters at the bottom). What did I miss?
Hi, I am interested in whether anyone else has come across performance problems with SQL Server linked servers to SQL Server. I suspect that the OLE DB provider I am using has some performance issues when passed parameters.
I have set the dynamic parameters option on, and use collation compatible.
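One hedged comparison worth running is a four-part-name query against an OPENQUERY equivalent, which forces the filtering to happen on the remote server instead of pulling rows across first; the server, database, table, and column names below are placeholders:

-- Four-part name: the local optimizer decides how much work gets shipped remotely
SELECT col1, col2
FROM REMOTESRV.RemoteDb.dbo.BigRemoteTable
WHERE keycol = 12345;

-- OPENQUERY: the whole statement runs on the remote server
SELECT col1, col2
FROM OPENQUERY(REMOTESRV,
    'SELECT col1, col2 FROM RemoteDb.dbo.BigRemoteTable WHERE keycol = 12345');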
Does anyone know of a quick checklist (config setup) I can use for the server running my SQL Server to get the best performance? The box is dedicated to SQL Server. The problem is that when running a fairly heavy stored procedure, the box locks up, the CPU hits 100%, and I end up staring at a screen trying to load itself for ages.
We are currently in the process of loading large amounts of data into our database. We are running into a situation where we are getting a message of "Waiting for WRITELOG", and this is slowing our process to a crawl. I do not have a lot of experience on the performance side. I would truly appreciate any help on this matter.
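A hedged first step is to confirm that WRITELOG really dominates the waits, and then commit in larger batches rather than per row; on SQL Server 2005 and later the query below works, while on 2000 DBCC SQLPERF(WAITSTATS) gives similar numbers:

SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG';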
Hi guys, I have Windows NT Workstation 4.0 and the SQL Server Client Configuration Utility installed on my computer, so I have remote administration of SQL Server. But because Performance Monitor is not part of NT Workstation, I can't check the behavior of the cache hit ratio and so on. Could you give me some ideas on how I can do this remotely without Performance Monitor?
My company is contemplating which platform (Oracle or SQL Server) to develop a new imaging application in (large databases, many users). All of our current SQL Server applications are relatively small now (100 MB databases). Can anyone give me some idea of what size databases they are running in SQL Server and number of users concurrently accessing them. What can SQL Server realistically handle?
I have to administer a database which makes almost no use of stored procedures. The C++ frontend makes ad hoc calls to the database (no, this is not my idea of how to do things, but I have no say). Any ideas on the best way to tweak SQL Server to handle this? Thanks
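If the instance is new enough, two hedged settings aimed specifically at ad hoc workloads are forced parameterization (SQL 2005+) and the "optimize for ad hoc workloads" option (SQL 2008+); the database name below is a placeholder:

-- Reuse plans for ad hoc statements that differ only in literal values (2005+)
ALTER DATABASE [MyAppDb] SET PARAMETERIZATION FORCED;

-- Store only a plan stub the first time an ad hoc statement is seen (2008+)
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;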
We have an application that uses SQL Server 2000. I am almost certain that all the performance issues we are having are due to the SQL Server. I really need to confirm this.
Hello everyone,
Regarding stored procedures and views: I know that stored procedures cause SQL Server to create a cached execution plan. Is the same thing done for views? Also, how bad is the performance hit for a stored procedure that uses one or a few views, as opposed to re-creating the same select statement with the proper joins to the required tables? I know that there are a bunch of variables that affect this stuff; I'm just trying to get a ballpark idea of how it works.
Thanks,
Frank
I am investigating a SQL Server performance issue where the system operates well at times and poorly at others. This SQL Server is connected to a SAN, which is where I believe the issue lies. I have started some testing using the SQLIOSim x86 utility from Microsoft with the application stopped. My initial results show quite a few errors that indicate "IO requests are outstanding for more than 15 sec."
I am next going to look at the IO system as a whole (BIOS, driver versions). Any thoughts? I can't seem to find any documentation regarding this and whether it is acceptable.
Thanks,
Scott
We recently created a library module (ASP.NET and SQL Server 2005) where we store files inside the SQL DB. We now have several thousand files and the DB is around 25 GB. I think we are starting to see performance problems when trying to select files for download. Files under 10 MB seem to download fine, but over 10 MB we are having problems. I was wondering if someone could point me to a good article that talks about this kind of performance issue and what I might do to overcome it.
We have recently updated an application from SQL Server CE 2.0 to SQL Server Mobile 2005, and we are seeing a huge decrease in performance. Is this normal? Database queries that used to take 8 or 9 seconds now take around 20 seconds; the database is only about 5 MB, and the two tables in this particular query have 20 rows and 14K rows respectively. The query is basically:
select * from table1 join table2 on table1.myint = table2.myint
myint is the primary key of table2, and I have even created an index on myint for table1. Any ideas?