Hi, we have a development server running SQL Server 2000 SP3. The hardware is a dual-CPU machine with Windows 2000 Server installed, plus 1 GB of RAM.
With only 60 open connections, SQL Server is using 700 MB of RAM and has become very slow, and I can't find the reason for it. Any ideas? Given the number of open tables (objects), it doesn't look normal.
Local SQL Server 6.5 SP4 on a Windows NT SP4 workstation (Pentium 350 / 128 MB). SQL Server is configured with 32 MB of RAM. It starts fine but then continuously eats virtual memory in 16 KB blocks every second, until it has consumed all virtual memory.
Shutting the server down generates the following error in the System section of the Event Log:
Event ID: 7011, Source: Service Control Manager, Description: Timeout (120000 milliseconds) waiting for transaction response
When I start the server from a command prompt with the -f switch, it does not eat virtual memory and does not generate any error in the Event Log at shutdown.
Previously, if there was a view that joined two tables and I accessed the view, the two ID fields in the view would still have the AutoIncrement attribute set to true, so I knew those were identity fields.
In SQL Server 2005, I don't know why, but if you reference a view that has identity (auto-increment) fields through ADO, it doesn't keep those properties.
Also, for whatever reason, we set the ID field to 0 to let ourselves know it's a new record. SQL 2000 let this happen and treated it as null, whereas setting the ID to 0 in SQL 2005 causes it to blow up on me.
Is there some sort of setting that can make SQL 2005 work like SQL 2000 in these two instances?
I did some load testing and made the following observations:
1. Memory: Pages/sec was consistently above the limit of 20.
2. Target Server Memory was always greater than Total Server Memory.
Seeing the above data, it looks like memory pressure. But I found that Available Memory was always above 200 MB, and Buffer Cache Hit Ratio was close to 99.99%. What could be the reason for this behavior?
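One way to watch the Target vs. Total comparison from inside the server, rather than Performance Monitor, is the performance-counter DMV. This is a sketch that assumes SQL Server 2005 or later; on SQL 2000 the same counters are only visible through the SQLServer:Memory Manager object in Performance Monitor.

```sql
-- Target Server Memory = how much SQL Server wants; Total = how much
-- it currently has. Target persistently above Total suggests the
-- instance cannot get the memory it is asking for.
SELECT counter_name,
       cntr_value / 1024 AS value_mb
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Target Server Memory%'
   OR counter_name LIKE 'Total Server Memory%';
```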
My server is a dual AMD x64 2.19 GHz with 8 GB RAM running Windows Server 2003 Enterprise Edition with Service Pack 1 installed. We have SQL Server 2000 Enterprise (32-bit) installed as the default instance. AWE is enabled using dynamically configured SQL Server memory, with a 6215 MB minimum and a 6656 MB maximum.
I have now installed SQL Server 2005 Enterprise Edition side by side in a separate named instance. Everything is running fine, but I believe SQL Server 2005 could run faster, and I need to ensure I am giving it plenty of resources. I realize AWE is not needed with SQL Server 2005, and I have seen suggestions to grant the SQL Server account the 'lock pages in memory' right. This box only runs the SQL 2000 and SQL 2005 database engines, and I would like to ensure, if possible, that each gets an equal split of the available memory, at least until we can retire SQL Server 2000 next year. Any suggestions?
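A common way to split memory between side-by-side instances is to cap each one explicitly with max server memory. The 3072 MB figure below is only an illustrative value (roughly an even split on an 8 GB box after leaving headroom for the OS), not a recommendation; run it against each instance separately with its own appropriate number.

```sql
-- Sketch: cap this instance's buffer pool so the other instance and
-- the OS keep a predictable share. Values are assumptions.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 3072;
RECONFIGURE;
```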
I need to store long pieces of text in the database, with lengths between 2000 and 4000 characters (the nvarchar limit).
If I have a column in a table:
my_text nvarchar(2000)
does this mean the size used in the database will always be at least 2000 × 2 = 4000 bytes (plus rounding), regardless of the real length of the stored text?
The other alternative, storing the texts in files, is expected to become unmanageable very soon, so I would like to stick with the database.
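A quick way to see the actual storage used is DATALENGTH, which reports bytes stored rather than the declared maximum. A minimal check:

```sql
-- nvarchar is variable-length: it stores 2 bytes per character actually
-- present (plus a small per-row overhead), not the declared maximum.
DECLARE @t TABLE (my_text nvarchar(2000));
INSERT INTO @t VALUES (N'short');
SELECT DATALENGTH(my_text) AS bytes_used FROM @t;  -- 10 bytes, not 4000
```

A fixed-width nchar(2000) column, by contrast, would always consume the full 4000 bytes.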
SQL Server 2000 is running on Windows Server 2003 with 4 GB of memory on the server. According to the Idera monitoring software, the OS was allocated about 2.3 GB and SQL Server was allocated (and using all of) 1.6 GB, for a total of roughly 4 GB; all memory was accounted for between the OS and SQL Server. Then 4 more GB of memory were added, for a total of 8 GB. Now the Idera monitor shows 1.7 GB for the OS and 1.0 GB for SQL Server. The System info shows 8 GB of memory with PAE, so I assume the full 8 GB can now be addressed. Why are fewer resources being used now that there is more total memory, especially by SQL Server? I thought about specifying a minimum memory for SQL Server, but I am not convinced that would even work, since this 1 GB limit seems artificial. If it used 1.6 GB before, why would it not use at least that much now?
Does anyone have any basic, simple scripts or stored procedures that I can give my computer operators to monitor for serious conditions on our SQL Servers? We are new to the MS SQL arena, a small shop, and we can't really purchase any tools to monitor these servers, but we need some basic checks we can use to make sure a server is performing in a reasonable fashion: no blocks, CPU below some threshold, memory not pegged, I/O working, etc. And we need to be able to do this without bugging our programmers every time we are wondering why things are slowing down.

We are a split shop between AS/400s and SQL Servers, and our operations staff has no problem performing this type of duty on the 400s. I think that since SQL Servers are so easy to deploy into production, the question of how to monitor their health got lost in the shuffle, and allowing the programmers to do it just serves to destabilize our environment. Please help! Our shop has turned to chaos since we went live with MSSQL-centric applications, and every time one programmer "resolves an issue," some other programmer's application starts to act up.
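As a starting point for an operator check, a blocking query is the single most useful one. This sketch uses master..sysprocesses, which fits the SQL 2000-era servers described here; on SQL 2005 and later, sys.dm_exec_requests is the preferred source.

```sql
-- List sessions that are currently blocked, and who is blocking them.
-- An operator can run this when users report slowness: no rows = no blocking.
SELECT spid,
       blocked        AS blocking_spid,
       waittime       AS wait_ms,
       lastwaittype,
       hostname,
       program_name,
       loginame
FROM master..sysprocesses
WHERE blocked <> 0;
```

Wrapping queries like this in a stored procedure and giving the operators execute rights on it keeps them out of the rest of the server.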
I am receiving the following error when starting a program called Shelby Systems that is supposed to connect to a local database. I don't think this is a security issue, but I don't know much about SQL Server either, so...
DIAG [08001] [Microsoft][ODBC SQL Server Driver][Shared Memory]SQL Server does not exist or access denied. (17)
DIAG [01000] [Microsoft][ODBC SQL Server Driver][Shared Memory]ConnectionOpen (Connect()). (2)
System info: Windows 10 Home (64-bit, upgraded from 8), SQL Server 2012 Express, SQL Backwards Compatibility 2005 (64-bit), Shelby Systems software v5.4
I am including the trace log in case it is useful.
NODE1 - 256 GB: INST1 - 64 GB min / 64 GB max; INST2 - 64 GB min / 64 GB max
NODE2 - 256 GB: INST3 - 64 GB min / 64 GB max; INST4 - 64 GB min / 64 GB max

With this configuration, even if all instances end up running on the same node, there will be enough memory for them. But knowing that normally I will have only two instances on each node, wouldn't the following configuration be better?

NODE1 - 256 GB: INST1 - 64 GB min / 128 GB max; INST2 - 64 GB min / 128 GB max
NODE2 - 256 GB: INST3 - 64 GB min / 128 GB max; INST4 - 64 GB min / 128 GB max

With this configuration, if all the instances end up running on a single node (due to a failure), will SQL Server adjust all instances down to just the min memory specified?
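For reference, the per-instance settings in the second layout would look like the sketch below (values taken from the layout above; run against each instance). Note that under memory pressure SQL Server will respond by shrinking, but it is only guaranteed to shrink down toward its configured minimum, which is why the minimums, summed across all instances that could land on one node, must fit in that node's RAM.

```sql
-- Sketch for one instance in the 64 GB min / 128 GB max layout.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 65536;   -- 64 GB
EXEC sp_configure 'max server memory (MB)', 131072;  -- 128 GB
RECONFIGURE;
```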
When I close a web form that has a connection to my SQL Server, I am not seeing the memory for that process released in Task Manager on the SQL Server machine. I am using the "open late, close early" approach to database connections, and I call the Close method on my connections. Is there any automated utility that will shut these processes down? I thought that when the user disconnected from the database, the memory would be released automatically.
I accidentally set max server memory to 0. Now I cannot rectify it, because there are insufficient resources in the internal memory pool to do so. How can I recover? I've been unsuccessful so far in running sqlcmd against the server in single-user mode.
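A commonly suggested recovery path is to start the instance with -f (minimal configuration), connect with sqlcmd, and put the setting back. The steps below are a sketch; the service name and the 2048 MB value are placeholders, not a prescription.

```sql
-- From an elevated command prompt (default instance assumed):
--   net stop MSSQLSERVER
--   sqlservr.exe -f          -- start in minimal configuration mode
--   sqlcmd -S . -E           -- connect from a second prompt
-- Then, in that sqlcmd session, restore a sane value:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;  -- any sensible value
RECONFIGURE;
-- Finally, stop the minimal-configuration server and restart the service normally.
```

The dedicated administrator connection (sqlcmd -A) is another route on editions that support it, since the DAC reserves its own small memory node.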
Dear All, I have an ASP.NET site hosted with SQL Server 2005. I have also configured a State Server for the site and started the ASP.NET State Server. I have run into one problem: SQL Server memory varies from 800 MB to 1 GB. I have been closing all the DB objects in my code and have also installed SQL Server SP2, but the problem still exists. Can anyone help?
Hi all, I am using a StringBuilder to build a huge insert string that combines around 12,000 INSERT statements into a single batch. But while executing this string, SQL Server does no inserts, and Profiler shows that SQL Server was out of memory. Any ideas? Is there a limit to the maximum number of inserts I can do in one batch?
We have an issue where, every month or so, one of the servers we upgraded from SQL Server 7 to SQL Server 2000 reports that its memory is low. We get this error through our SQLProbe application. If we reboot, it goes away. Any suggestions?
Question: Exchange and SQL Server 7.0 are running on the same server, and you notice that Exchange performance is degraded. The min server memory, max server memory, and set working set size options are set as they were automatically at installation. What do you do to free memory for Exchange? The given answer is: increase min server memory. Why?
I just got a new server that I need to install SQL Server 6.5 on, and I am wondering if someone could give me a little advice on how much memory to configure for it. It will be dedicated to SQL Server. It has 1 GB of memory and two 450 MHz Xeon processors. I don't know if you need any other information to make this decision. All the SQL Server memory charts I have seen only go up to 512 MB.
Does anyone have any suggestions? I do realize this will have to be fine-tuned for the specifics of our server, but any starting range would be beneficial.
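For context, SQL Server 6.5 does not manage memory dynamically: you assign it a fixed amount through sp_configure 'memory', specified in 2 KB units. The figure below (512 MB, leaving the rest to NT) is only an illustrative starting point for a dedicated 1 GB box, not a tested recommendation.

```sql
-- SQL 6.5 'memory' is in 2 KB units: 262144 * 2 KB = 512 MB.
-- Adjust and watch the cache hit ratio before committing to a value.
EXEC sp_configure 'memory', 262144
RECONFIGURE WITH OVERRIDE
GO
```

A restart of the service is needed for the new memory setting to take effect.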
I have two NT 4.0 boxes, both running SQL Server 6.5. Server A transfers data to Server B once a week. Server B is connected to the Internet via a cable modem. At some point during the day, Server B's CPU becomes 100% utilized, and SQL Server memory goes from roughly 7 MB to 31 MB. When I run sp_who at this time, there appear to be 4 spids running in the master database that shouldn't be. Somehow, Server B also connects to Server A with some spids that do not appear to do anything. I am stumped on how to narrow down what is causing this behavior. It should also be noted that currently no one is connecting to Server B, and there are no scheduled tasks on either machine.
I have a two-processor server with 4 GB of memory, dedicated to SQL Server. Currently I run Windows 2003 Standard Edition and SQL Server 2005 Standard Edition. SQL Server used only about 1.7 GB, and other applications about 300 MB; Task Manager showed 2.0 GB in use. Then I added the /3GB switch to boot.ini. After that, the system showed about 3 GB in use, but SQL Server stayed at a 1.7 GB maximum (even though I set more in its properties); the only advantage was that I could use another 1.3 GB for other applications. But I would like to give more memory to SQL Server. I searched the web, but I am still not sure whether it is possible on my current software, and if not, what the cheapest way to do it is (upgrading to Windows Enterprise is of course much cheaper than upgrading to SQL 2005 Enterprise; will AWE memory help me?)
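For 32-bit SQL Server on x86 Windows, AWE is the mechanism that lets the buffer pool reach beyond the 2-3 GB user-mode address space. A sketch of enabling it, with the usual caveats: the edition must support AWE, the service account needs the 'lock pages in memory' right, and on boxes with more than 4 GB of RAM Windows needs /PAE in boot.ini.

```sql
-- Enable AWE on a 32-bit instance; takes effect after a service restart.
-- Pair this with an explicit max server memory cap, since AWE memory
-- is locked and not released back to the OS dynamically.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
```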
I have a SQL Server 2000 (Enterprise Edition) machine dedicated to running only SQL Server, and it is not using all of the memory on the machine. I have read about and tried setting all of the memory settings available in the BOL, with no luck. The database is about 10 GB in size. I have four 2.7 GHz Xeon MPs and 16 GB of RAM, but SQL Server is only grabbing about 2 GB of memory. Since I am running Enterprise Edition, I am not sure why it will not load more of the database into memory. I have run all of the profiles, and the results indicate that it wants more memory, yet the box should have plenty. Any suggestions would be greatly appreciated. Thanks in advance, rhorner
I have performance and memory management issues with SQL Server. I feel SQL Server does not release unused memory back to the OS. I have been observing that by the end of the day, SQL Server performance gets really bad, and at that point it has acquired all the cache it could. I have about 4 GB of RAM on my server, and SQL Server uses almost 2.7 GB of it whether there is load on the machine or not. Is there any way or tool to monitor this? Also, is there any way to free memory from SQL Server back to the OS?
From everything I read, setting max server memory to a reasonable capped limit is a best practice for effective performance and reliability of your SQL Server. However, I have seen a few discussions saying that although SQL Server will use lots of memory, it can easily release it to the OS if the OS requires it for its own operations. If that is genuinely the case, why do you need to cap the amount of memory SQL Server can use, if it will release memory whenever the OS needs it?
In my production environment I have one quad-processor server with 260 GB of HDD, a 21 GB database, and 3 GB of RAM.
After tuning all server-level objects (indexing and statistics, procedure tuning, etc.), the server is still responding a bit slowly. It looks like RAM (server memory) could be the bottleneck.
Can anyone help me identify the proper amount of RAM for this configuration?
I often read that it is best if the whole database can fit in RAM.
Let's say I have a database that is 1000 GB and I have 20 GB of RAM.
It obviously won't fit in memory, but is this always a problem?
Say, for example, that the database contains one table that is 990 GB and another table that is 10 GB.
Nothing is written to or read from the 990 GB table; all the activity is on the 10 GB table.
If I delete the 990 GB table from the database, will SQL Server need to perform less I/O, or doesn't it matter?
Another scenario: we have two databases on the system, one 990 GB and one 10 GB, and the system still has 20 GB of RAM. There is no activity in the 990 GB database, but there is a lot of activity in the 10 GB database.
If I delete the 990 GB database, will SQL Server perform better, or doesn't it matter?
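One way to check this empirically, assuming SQL Server 2005 or later, is to look at what is actually sitting in the buffer pool per database. Only pages that have been read end up cached, so a never-touched table or database consumes essentially no buffer cache regardless of its size on disk.

```sql
-- Buffer pool usage per database: each cached page is 8 KB.
-- A cold 990 GB database should show close to zero here.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;
```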
Is there any way to recycle SQL Server memory other than restarting the service? Should we restart the service on a schedule? The memory goes high, and once high it stays there; SQL Server is not releasing it...
I am running a 9 GB SQL Server database on a Compaq ProLiant ML 530 with 3 GB of onboard memory. Max server memory has been set to 2.7 GB, but SQL Server is only able to use 1.72 GB; beyond that, even after adding more load, memory usage does not increase, and in fact performance starts to degrade.
Is it possible to leave the max server memory (MB) parameter at its default (2147483647) if my cluster also runs Oracle, Lotus Notes, and some other things, or do I have to calculate the amount of memory SQL Server needs? Thanks for any suggestions!