I posted a link to a prior article here, the one about high-performance hierarchies, and now have the first two parts of a new series.
Hopefully this is of value to someone.
I realise this is a stupid question but I cannot really find any confirmation of this in BOL.
If you are running High Safety with automatic failover, does the mode automatically change to High Performance when a failover occurs? Since for a failover to occur something has to have happened to the principal, it will be impossible to commit transactions on the new principal and the mirror synchronously, because one of them is no longer available.
So am I correct in assuming that automatic failover also automatically changes the mode to High Performance for that session?
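For what it's worth, this is how I've been checking which mode a database is actually in after a failover (assuming SQL Server 2005 or later; safety level FULL corresponds to high safety and OFF to high performance):

SELECT DB_NAME(database_id) AS database_name,
       mirroring_role_desc,
       mirroring_safety_level_desc,   -- FULL = high safety, OFF = high performance
       mirroring_state_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;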
I am looking to improve the performance of my SQL Server databases.
I currently have a dual-location system. The database server setup is basically a quad-Xeon with 4 GB of RAM at my office and a dual-Xeon with 4 GB at a remote web hosting location. There are separate application/web/intranet servers at each site. The two database servers are replicated, with the local server publishing to the remote server.
The relational database holds circa 26 million records and grows by around 10,000 per day; approximately 50,000 queries are performed per day.
My theory is that the replication between the two databases is causing a slowdown; despite fast network connections (averaging 200 ms between servers), the replication seems to place a large load on the local server. Would it be sensible to replicate to a second local server and have that server replicate to the remote one, placing the burden on the second server?
I am planning to upgrade the local server to a high-capacity 4+ CPU 64-bit server. My problem is that although I have noticed a slowdown in performance over time, I am unsure how to go about measuring and quantifying it in order to diagnose the bottlenecks and ensure that investing in a new server would be worthwhile. Where would one be best advised to start this project?
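As a starting point on the measurement side, I was planning to capture baseline wait statistics before and after any change, roughly like this (assuming SQL Server 2005 or later; the counters are cumulative since the last service restart):

SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;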
I've got an ETL process I have written which takes about 10 million rows from a staging database and loads them into the production database with an INSERT statement. The INSERT statement makes a function call to retrieve the surrogate key for each row. The function looks in a replicated copy of our production database, so no load is placed on our production environment during this time.
So: INSERT INTO foo(...) SELECT name, address, zip, dbo.fnGetSurrKey(name, address)
It took about 12 hours to insert 6 million rows last night and I'm wondering if there is a better way of doing this. Maybe a multithreaded approach like SSIS might offer.
Assuming my function is optimized as much as possible, does anyone have any tips for speeding this up?
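One idea I'm toying with is replacing the per-row function call with a plain join against whatever lookup table dbo.fnGetSurrKey reads, along these lines (the lookup table, its columns, and the staging table name are made up for illustration):

INSERT INTO foo (name, address, zip, surr_key)
SELECT s.name, s.address, s.zip, k.surr_key
FROM StagingDB.dbo.StageRows AS s
JOIN dbo.KeyLookup AS k          -- stands in for the table the function queries per row
    ON k.name = s.name
   AND k.address = s.address;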
Also, the machine this ran on has 16 GB of RAM but was set up to use only 2 GB during this process. I already changed it to 12 GB and restarted the process a week ago, but the change doesn't take effect until you reboot. Would I see a significant performance increase from that?
Hello, I am a master's student and I am preparing a seminar about high-volume DB performance problems. For example: if I have a table with 1,000,000 records and that count is growing exponentially over time, what problems might I face with insertion, deletion, and search in such a table? And what are the problems in processing such a DB in general?
I recently converted a column that was once an int to a bigint on one of my tables. The modified column holds generic row ID information and there are duplicates within this column. I am trying to perform a self join via the following:
SELECT a.row_id FROM test_db a INNER JOIN test_db b ON b.row_id < a.row_id.
This code used to work when the column was an int, but now I am getting high CPU issues since I converted it to bigint. I am unsure why the change to bigint would cause such an issue. Both the OS and SQL Server are 64-bit.
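For what it's worth, this is how I've been measuring the before/after cost of the query (same join as above, just with I/O and timing output switched on):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT a.row_id
FROM test_db AS a
INNER JOIN test_db AS b
    ON b.row_id < a.row_id;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;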
We recently implemented merge replication and have been experiencing problems since. The replication is between two SQL Server 2005 instances on the same network, and since we introduced it, performance has degraded considerably on the subscriber end.
1) One thing that should be mentioned is that the flow of changes is unidirectional, from publisher to subscriber (there is only one publisher, which is also the distributor, and one subscriber).
2) Updates are higher than inserts; one article, let's say "Article1", has up to 2,000 updates per day, and I am seeing that dbo.MSmerge_upd_sp_Article1_GUID is taking more CPU time. What should be done?
On the subscriber database, response time is slowing down and I am experiencing a large number of lock timeouts on the application end.
Can anyone also suggest server-level settings for avoiding lock timeouts?
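To see what the subscriber is actually waiting on when the timeouts happen, I've been sampling currently blocked sessions with something like this (SQL 2005 DMV):

SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.command
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;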
Is there a way to configure mirroring to go from High Availability to High Protection without having to reconfigure Database Mirroring? Using the interface in Management Studio, I can change the configuration option to High Performance, but not to High Protection, despite both High Availability and High Protection being synchronous.
If not, what are the recommended steps to reconfigure the mirror once it has already been configured? Is it just like initially setting up the mirror, or are there any shortcuts I could take? If I stop the mirroring and remove the witness, will the High Protection option be available?
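Concretely, the step I had in mind is just dropping the witness from the principal, something like this (the database name is a placeholder), and hoping that alone leaves the session in High Protection:

-- Run on the principal; 'MyMirroredDB' stands in for the actual mirrored database
ALTER DATABASE MyMirroredDB SET WITNESS OFF;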
There is a SQL Server 2008 R2 SP3 Clustered Instance that has Transactional Replication. It is by no means a large replication setup in terms of data/article count. SQL Server was recently patched to SP3 and is current on Windows 2008 R2 Patches.
When I added a new article to replication (via the SSMS 2014 GUI), it seemed to add everything correctly (the replication tables/procs show the new article as part of the publication). The publication is set to allow the snapshot to be generated for just new articles (setting immediate_sync & allow_anonymous to false).
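For reference, that was done roughly like this (the publication name is a placeholder):

-- Both properties set to false so the next snapshot only covers newly added articles
EXEC sp_changepublication
     @publication = N'MyPublication',
     @property    = N'allow_anonymous',
     @value       = N'false';
EXEC sp_changepublication
     @publication = N'MyPublication',
     @property    = N'immediate_sync',
     @value       = N'false';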
When the snapshot agent is run, it runs without error and claims to have generated a snapshot of 1 article. However the snapshot folder only contains a folder for the instance (that does have the modified time of the snapshot agent execution) and none of the regular bcp/schema files.
The tables never make it to the subscribers and replication continues on without error for the existing articles. No agents produce any errors and running the snapshot agent w/ verbose output provides no errors or insight into any possible issues.
I have tried:
- Dropping and re-adding the article in question
- Setting up a new snapshot folder
- Validating all the settings and configurations
I'm hesitant to reinitialize a subscriber since I am not confident a snapshot can be generated. I'm also wondering if this is related to the SP3 upgrade; new articles are added to the publication every few months, and this is the first time it has been done since the upgrade to SP3.
First of all: my Oracle publication works fine when I don't explicitly specify the schema_option parameter for the articles I'm adding to the publication. The reason I want to specify the parameter explicitly is as follows.
I'm developing a replication solution to get data from our production server (Oracle) to our Data Warehouse (SQL Server). The SQL Server (and the Data Warehouse code) uses the SQL_Latin1_General_CP1_CI_AS collation. When I don't explicitly specify the schema_option, the nvarchar columns of the replicated tables are created using the SQL_Latin1_General_CP1_CS_AS collation, and this results in comparison errors when, for instance, a select statement tries to compare two nvarchar strings using different collations.
I've tried specifying the schema_option parameter as "@schema_option = 0x80" (replicates primary key constraints) to avoid the use of the SQL_Latin1_General_CP1_CS_AS collation when creating the destination tables - I'm not sure that's enough. No matter what, I'm getting an error when I do it (see below).
Message
2006-07-13 12:00:15.529 Applied script 'ITEMTRANSLATION_2.sch'
2006-07-13 12:00:15.544 Bulk copying data into table 'ITEMTRANSLATION'
2006-07-13 12:00:15.544 Agent message code 20037. The process could not bulk copy into table '"ITEMTRANSLATION"'.
2006-07-13 12:00:15.591 Category:NULL Source: Microsoft SQL Native Client Number: 208 Message: Invalid object name 'ITEMTRANSLATION'.
2006-07-13 12:00:15.591 Category:NULL Source: Number: 20253
The questions are now: do I actually have a schema_option alternative for Oracle publishing? If so, what is the solution, and how can I avoid the error stated above?
If I'm not able to avoid the article columns getting created with the "wrong" collation, is there any other obvious solution to the problem?
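For reference, the article is currently being added roughly like this (heavily simplified; the publication name is a placeholder and most other parameters are omitted):

EXEC sp_addarticle
     @publication   = N'OraclePublication',   -- placeholder name
     @article       = N'ITEMTRANSLATION',
     @source_object = N'ITEMTRANSLATION',
     @schema_option = 0x80;                   -- replicate primary key constraints only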
Can someone at Microsoft comment on this article, specifically, how it relates to SQL Replication? Will SQL Replication fall into the category of what this article describes, or only the mirroring feature?
For the last week, our production SQL Server has been running very slowly, with CPU usage at 80-100% almost all the time. This causes certain queries to time out. Our application has never timed out before. Also, we did not do any updates on our production machine or install anything recently.
Has anyone of you experienced this issue? If so, what did you do to resolve it? Any help would be greatly appreciated.
I have a Windows Server 2003 machine with SQL Server 2005 on it, and sqlservr.exe is using 880 MB of memory and will climb to 1.4 GB. If I reboot the server it goes back to about 100 MB and slowly climbs back up. Any ideas? I am not a SQL guy.
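From what I've been able to find, SQL Server keeps growing its memory use until it hits the 'max server memory' cap, and the cap can apparently be set with something like this (I have not run it; the 1024 MB value is just an example, not a recommendation):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 1024;
RECONFIGURE;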
We have a live server that has had very high CPU usage in the last few days, so the site is extremely slow. There are about 60,000 users per day on average. It has always had high CPU usage, but not as bad as in the last few days: in the 90s, sometimes even reaching 100%.
Any solutions? I've run SQL Profiler to check on queries that have high CPU usage. When I run the same queries on our staging server, they are very quick. For example, the same query can take 1 minute on staging but 12 minutes on the live site. We do know that this is also related to traffic; during lower-traffic times, the same query takes less time, but it is still never as fast as on staging.
Oh yeah, it's SQL Server 2005 with ASP code running on IIS 6 on Windows Server 2003.
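Besides Profiler, I've also been pulling the heaviest statements out of the plan cache with something along these lines (SQL 2005 DMVs; totals are since the plans were cached):

SELECT TOP (10)
       qs.total_worker_time / 1000 AS total_cpu_ms,   -- total_worker_time is in microseconds
       qs.execution_count,
       st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;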
Please help. I really appreciate any advice. Thank you.
I've been asked to look into the possibility of using SQL Server in a high availability environment. We have a few web-based applications that use SQL Server back-end DBs. What we are looking into is whether we can use multiple instances (on multiple physical servers) of SQL Server with some type of clustering/load balancing. I haven't worked with SQL replication before, so I'm not even sure where to start in exploring the possible avenues.
Can anyone point me in the right direction? Any info would be greatly appreciated!
To all: I have a SQL Server 2000 SP3a database (<1 GB) running on a server with 4 x 3 GHz physical CPUs and 4 GB of RAM. It is Windows Server 2003 with hyper-threading turned on. There are ~420 .NET users/connections (fat client, no web/app servers) with connection pooling and ~1 transaction/sec. The database growth is negligible and actually is not even relevant, which I will explain in a minute. 99% of the transactions come from one stored procedure that does a select. The result sets are relatively small as well, 1-100 rows. Yes, I have tuned it with the Index Tuning Wizard, changed the SQL memory configurations, etc.
My problem is this: the first day after a reboot, the server runs at 6% CPU during peak hours. During the non-peak hours until the next day, something apparently happens. The next day (the second day after a reboot), it jumps to 40% CPU during peak hours, and the server continues to run at 40% CPU during peak hours until the next reboot. This phenomenon has been occurring for 6 months or more, and the traffic on the server is the same on day 1 as it is on days 2, 3, 4, ... This database was on another server with 100+ databases and exhibited the same behavior, bringing that server to its knees, so we had to move it to the server in question with no other databases.
I have googled my eyes out: the Microsoft site, white papers, Perfmon, SQLDiag, PSSDiag, execution plans, the Index Tuning Wizard, and the list goes on. I currently have a case open with Microsoft that has been open for months now. I have been passed around to the third "MS Tech Specialist". I have run PSSDiag a total of 6 times for them, for hours on end. I have changed MAXDOP. I could give more information, but I would be here for days. I am running out of patience/ideas and Microsoft is apparently blowing smoke. Any ideas are greatly appreciated! Thanks in advance! JL
Hi, we have a prod server running SQL Server 2000 64-bit. It is a 4-CPU server with 16 GB of RAM, and we have a max memory setting of 15.5 GB for SQL Server. In spite of 15 GB being available to SQL Server, it still uses paging file space, a lot. Looking through Task Manager we can see SQL Server with 15.5 GB of memory usage and 22 GB of virtual memory usage. I don't understand why it should be using close to 7 GB of paging space when it has so much memory. How does SQL Server use virtual memory vs. physical memory? Has anyone seen this before? Thanks, GG
If you delete rows in a table and then do a full table scan, is that supposed to read up to the highest block/extent the table ever attained (like in some databases I use)? If so, what is the best way to take care of such tables in SQL Server? I appreciate your responses. Vince
Our company recently combined our DBs into one SQL 2005 Server.
Dell Power Edge 1800 with 3.00 GHz Xeon processor, 800 MHz FSB, 1 GB of RAM
Dell Power Edge 1600 with 2.80 GHz Xeon processor, 533 MHz FSB, 1 GB of RAM
Combined into one: Dell Power Edge 2950 Dual Core 1.6 GHz Xeon Woodcrest Processor, 4 GB of RAM
However, the CPU utilization on this new server is sitting at about 90%, with 3.82 GB of RAM in use as well. It's a Windows Server 2003 R2 x64 edition running SQL Server 2005 SP2 x64. I have searched around Microsoft's website for any information that could be of help to me, but I was unable to locate anything. I was hoping that someone could provide some insight as to why this might be occurring, or whether this is a known issue.
We are using an IBM Xeon server with 4 GB of RAM running Windows 2000 Server and MS SQL Server 2005.
Increasingly often our server response time is very slow, although the CPU utilization is between 7-10%. We are not even able to run Notepad on the server. We have observed that the memory occupied by the SQL Server process is high. If we restart the server it returns to a normal level, but we do not want to restart the server frequently.
We are using an IBM Xeon server with Windows 2000 Server and SQL Server 2005 with 4 GB of RAM.
We have noticed that the response time of the server is very slow. During this time the memory occupied by SQL Server is very high. Although the CPU utilization is very low (7-10%), we are unable to run even Notepad on the server. In other situations the CPU utilization will be at 100% (for more than an hour), and during this time SQL Server also occupies more memory. If we restart the server the problem is fixed, but we do not want to restart the server very frequently.
We are having a big performance issue at our site. Here is the configuration of the box running SQL Server 2005:
64 bit Windows Enterprise Edition + SP1
Dual CPU, 16GB RAM
RAID 1 and RAID 5 - internal
SQL Server 2005 64-bit Enterprise Edition
With SP2 (CTP from December)
The "Lock Pages in Memory" is set and is being run under the same account that is being used to run SQL Server Services.
We are noticing that under load, the CPU utilization becomes nearly 100%. I have researched this and have come across a couple of posts that indicate that this issue was fixed in SP2 - example: One post talked about the hotfix #716 which is also a part of SP2 but even after the application of that service pack, we are still having this issue. I haven't tried setting the parameterization option to forced for the database yet.
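If we do try it, I believe it is a one-liner along these lines (the database name is a placeholder):

-- Forced parameterization for the whole database; 'MyDatabase' stands in for ours
ALTER DATABASE MyDatabase SET PARAMETERIZATION FORCED;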
Is this a known issue with SP2? If not, what can we look for and fix in our environment? Please let me know if I can provide more information.
I made a backup of the database on the QA box, and restored it on the staging box. Yet when I run something as simple as a select query (select * from <table>), the less powerful QA box is faster.
I figured maybe the statistics were different on the staging box. I ran DBCC SHOWCONTIG to make sure they were identical. I also ran Red Gate's SQL Compare and Data Compare to make sure everything was identical.
I figured maybe the query optimizer needs to be tweaked. I recreated the indexes and updated statistics on the staging box. The queries actually got slower as a result.
I thought maybe the SCSI drives were slower. I tried breaking the mirror on the staging box. No luck. I put the mirror back in place and ran a test where I copied a large folder from one directory to another on the staging box, then repeated the same test with the same data on the QA box. The staging box was more than twice as fast as the QA box.
It doesn't appear to be a problem with the query, and adjusting memory in SQL Server has no effect. Both boxes are using SQL Server 2000 SP3, so why is the bigger machine running queries hundreds of milliseconds slower than the smaller machine? Any help will be appreciated!
Why is the SQL Server 2000 Enterprise Edition price so much higher than Standard Edition? I looked at the SQL Server 2000 editions comparison and I didn't find any good reasons.
We are in the process of moving existing clustered SQL Server databases to AWS. There is one major database that has intensive read and write transactions. I'm wondering what the best design is to optimize performance for both reads and writes, since we have historically had constant issues in the current environment when massive updates are happening. Reads should have higher priority than writes.
I am trying to create a job that runs against my High Availability listener server.
It is a fairly simple SQL statement in the job, in an 'execute T-SQL' step.
When I try and run the job I get the error:
Executed as user: NT SERVICE\SQLAgent$SQL2014A. The target database ('BB_Prod') is in an availability group and is currently accessible for connections when the application intent is set to read only. For more information about application intent, see SQL Server Books Online. [SQLSTATE 42000] (Error 978). The step failed.
I thought there was a way to run a select statement as a job against the listener? The T-SQL step is only a select.
Is there a way to pass in ApplicationIntent=ReadOnly as part of my SQL statement?
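From what I can tell, application intent is a connection property rather than something in the T-SQL itself, so one workaround I'm considering is switching the job step to a CmdExec step that connects through sqlcmd with its -K option, roughly like this (the listener name and the query are placeholders):

sqlcmd -S MyListener -d BB_Prod -K ReadOnly -Q "<my select statement here>"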
I have heard that a high number of VLFs isn't good; it can impact performance and delay recovery time, so I wanted to test that.
I created two DBs, each with a 100 MB data file and a 50 MB log file.
TestDB's log file had 100 MB autogrowth; TestDB2's log file had 1% growth.
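Roughly, the setup looked like this (file paths are illustrative):

CREATE DATABASE TestDB
ON (NAME = TestDB_data, FILENAME = 'C:\SQLData\TestDB.mdf', SIZE = 100MB)
LOG ON (NAME = TestDB_log, FILENAME = 'C:\SQLData\TestDB_log.ldf', SIZE = 50MB, FILEGROWTH = 100MB);

CREATE DATABASE TestDB2
ON (NAME = TestDB2_data, FILENAME = 'C:\SQLData\TestDB2.mdf', SIZE = 100MB)
LOG ON (NAME = TestDB2_log, FILENAME = 'C:\SQLData\TestDB2_log.ldf', SIZE = 50MB, FILEGROWTH = 1%);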
I inserted 1,048,576 records and took a backup.
I ran DBCC LOGINFO: TestDB had 40 VLFs and TestDB2 had 165 VLFs.
But when I restored both DBs, this is what I got.
TestDB: RESTORE DATABASE successfully processed 42258 pages in 4.420 seconds (74.691 MB/sec). SQL Server Execution times: CPU Time = 125ms, elapsed time = 8323 ms.
TestDB2: RESTORE DATABASE successfully processed 42257 pages in 3.943 seconds (83.724 MB/sec). SQL Server Execution Times: CPU time = 109 ms, elapsed time = 8314 ms.
The question is: where is the difference? How is TestDB, which has 40 VLFs, better than TestDB2, which has 165 VLFs?
The SQL Server database hangs overnight and also consumes a lot of disk space on one of our servers. This has been recurring daily for quite a few weeks.
Can somebody assist me in troubleshooting this?
I have done some benchmark tests across the principal, mirror, and witness. Things seem normal: the system usage (CPU, disk read/write) on the witness is very small, and the CPU usage on the principal and mirror is similar. However...
For "Disk Write Bytes/sec", the value on the principal is around 200,000 while the mirror is around 1,500,000; the mirror server is doing roughly 7x the Disk Write Bytes/sec of the principal.
I cannot think of any reason why the Disk Write Bytes/sec on the mirror is so much higher than on the principal. Can somebody help me?
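To narrow down whether the extra writes on the mirror are going to the data files or the log, I was planning to compare per-file totals on both servers with something like this (assumes SQL 2005 or later; the numbers are cumulative since the instance started):

SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.num_of_writes,
       vfs.num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY vfs.num_of_bytes_written DESC;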