SQL Server Disk Sub-system (overhaul) Performance Questions
Jul 20, 2005
I 'inherited' a group of SQL Server server-class machines. They are true server technology, but the disk sub-systems are lacking. There is one hot-swap backplane that all the drives share (with one SCSI channel), so even though there are three logical drives (composed of 6 to 8 hard drives), they all go through one channel. This is creating a performance issue that is noticeable and shows up in the disk I/O performance counters Microsoft recommends monitoring. For a cheaper 'fix', I can add a separate two-drive bay (with its own SCSI channel) holding mirrored drives. I would then most likely place the transaction log files on this new channel. Alternatively, I could place the index filegroup files on the new channel for databases that are mostly searched (not much updating). If I went this route I would lean towards the transaction log move, since the second method would require moving databases around quite a bit. Any input on this solution (besides spending more money)?
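For reference, the index-filegroup option comes down to adding a filegroup on the new channel and rebuilding the relevant indexes onto it. A minimal sketch, assuming SQL Server 2000 syntax; the database, index, table, and path names here are made up, and DROP_EXISTING assumes the index already exists:

-- Add a filegroup on the new channel and rebuild an existing index onto it
ALTER DATABASE MyDB ADD FILEGROUP FG_INDEXES

ALTER DATABASE MyDB ADD FILE
    (NAME = MyDB_Indexes1,
     FILENAME = 'F:\SQLData\MyDB_Indexes1.ndf',
     SIZE = 2GB)
TO FILEGROUP FG_INDEXES

CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH DROP_EXISTING
    ON FG_INDEXES

The log-file move is administratively simpler (one file per database, no index rebuilds), which is consistent with leaning that way.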
What I would prefer to do is get a better server-class machine or add an external drive bay solution (not a SAN). I would try to get three or four SCSI channels in the new hardware to split the different files/filegroups out (e.g. transaction log files, data filegroup, index filegroup, etc.). My only concern here is whether this more expensive solution would be worth the money. As far as replacing servers goes, I have only two kinds of experience: replacing somewhat underpowered servers with slightly less underpowered servers, and replacing overkill servers with even more overkill servers. In both cases, the disk sub-systems were fairly equivalent from the old system to the new one. Will going the three/four-channel route really get data moving along?
We have one server in particular that hosts a database (one of many on it) for a web application that gets decent traffic (it is a private, login-based system for internal use and external use by our clients' agents). Periodically throughout the day there are 2-5 minute bursts where performance slows to a crawl. I want to spend more time profiling queries and such before recommending we spend more money, but the folks I am working for want quick results, and there is quite a bit of stored procedure logic to profile and investigate. I know the disk sub-system is definitely in need of an overhaul, but I would like to get an idea of the performance gains from adding one additional channel over the existing single channel, as well as from going the three/four-channel route over the existing single-channel setup.
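One cheap way to see what those 2-5 minute bursts are actually waiting on is to snapshot the active sessions while the slowdown is happening. A rough sketch against SQL Server 2000's sysprocesses; the spid > 50 filter for user sessions is just a convention:

SELECT spid, status, blocked, lastwaittype, waittime,
       cpu, physical_io, DB_NAME(dbid) AS dbname,
       hostname, program_name
FROM master.dbo.sysprocesses
WHERE spid > 50                -- skip system sessions
ORDER BY waittime DESC

If the top waits during a burst are PAGEIOLATCH_* or WRITELOG, that points back at the single-channel disk sub-system; if the blocked column is non-zero for the slow sessions, it is lock contention and extra channels will not fix it.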
Summary: On April 1 we started replicating a 4M-transaction/day publishing system to a subscribing system.
Performance was good. Latency was ~ 5-7 seconds.
May 10 we noticed that the DB was behind (latency was 12 hours).
All performance counters seem good with the exception of the disk.
- Performance spikes are 8 minutes apart and last from 30-60 seconds.
- During this period, Disk % Busy (1 - Disk % Idle) is 100%.
The publisher DB publishes about 50-52 xacts/sec.
Rate of distribution (distribution DB to Subscriber DB) is ~ 47 xacts / second, so latency is increasing (currently at 33 hours). Previously my Subscriber system's "capacity" was 150 xacts / sec.
I know this because several weeks ago the network went down and we fell 24 hours behind.
When the network came back up, the replication subscriber system was able to catch up at around 150 xacts/sec, or 3X the production system rate.
What has changed between then and now? Not much. We did install Tivoli Storage Manager (IBM's backup system) a couple of weeks ago. It seems to run fine on a nightly basis, and I don't see any periodic heavy disk I/O from it, but just to be sure I've had them shut the TSM services down.
We've also eliminated all extraneous processes other than those I need for performance monitoring (there was an RTVScan virus-scan process).
I've eliminated autogrowth as an issue by bumping the growth increments so that growths are very infrequent (several days apart at this point). When we resolve the problem, I'll dial this down to something more reasonable.
My disk configuration is not ideal, I realize (a single RAID-5 disk with 3 partitions), but it has not changed in the last 6 weeks.
Thanks for any help on this!
Jack Griffith
Configuration:
Subscribing System:
SQL Server: 2000, SP4 - 8.0.2039
CPU - 2.8GHz Xeon, quad dual-core
Memory - 3.5GB RAM
Disk: 3 partitions on a single RAID-5 disk with 1118 GB of space:
C: 39GB System and Programs
D: 97GB Log space
E: 982 GB Data space
Replication configuration:
- nosynch, continuous transactional replication
- distribution DB ('distribution') is on the subscription system
- publication of approx. 50 transactions/second
Subscriber DB configuration: DB size: 64458 MB; recovery model: Simple (at this point)
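To confirm the disk is where the time is going, SQL Server 2000 can report per-file I/O stall time directly. A small sketch using fn_virtualfilestats; the database name and file id are examples, and the real file ids come from sysfiles in each database:

DECLARE @db INT, @file INT
SELECT @db = DB_ID('distribution')   -- repeat for the subscriber database
SELECT @file = 1                     -- repeat for each fileid listed in sysfiles

SELECT DbId, FileId, NumberReads, NumberWrites, IoStallMS,
       IoStallMS / NULLIF(NumberReads + NumberWrites, 0) AS AvgStallPerIO_ms
FROM ::fn_virtualfilestats(@db, @file)

The numbers are cumulative since startup, so compare two snapshots taken a few minutes apart; if the average stall per I/O climbs sharply during the 8-minute spikes, the RAID-5 set is saturating on writes.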
Need some help with configuring the disk subsystem using RAID 10.
1. What should the disk configuration be to sustain 4000 inserts per second on a RAID 1+0?
2. What is the best way to maintain a hot standby server for a very high-volume OLTP system (say, for a day-trading database)?
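A very rough way to size question 1 is to work from IOPS: RAID 1+0 turns each logical write into two physical writes, and a 15K RPM spindle sustains somewhere around 180 random IOPS. Those figures are assumptions (a write-back controller cache and the size of each insert change the picture a lot), but the arithmetic looks like this:

-- Back-of-the-envelope spindle count, under the assumptions above
SELECT CEILING((4000.0 * 2) / 180) AS ApproxSpindlesNeeded   -- roughly 45 drives

In practice the log's sequential writes and the controller cache usually bring that number down, but it shows why a handful of disks won't absorb 4000 inserts/sec.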
I have some performance questions about my new SQL Server 2005 installation. I have SQL installed on one partition (the system DBs are also on that partition by default), and my regular (non-system) databases are on a different partition.
The question is: if my system DBs are on a different partition, could I experience performance issues?
One scenario I can think of is when SQL looks for stored procedures starting with sp_ in the master DB, the disk will have to check a different partition. Perhaps such a scenario is handled by some kind of caching inside SQL Server itself.
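The sp_ lookup is a name-resolution step rather than a partition problem: master is tiny and stays in the buffer cache, so no extra disk seek is normally involved. The more common cost of the sp_ prefix on user procedures is the extra lookup (and the cache-miss event it can generate) on every execution, which is why the usual advice is simply not to use the prefix. A hypothetical example of renaming such a procedure (the name is made up):

-- sp_GetOrders is first resolved against master on every call;
-- renaming it avoids that extra step (note sp_rename does not update
-- the text of the procedure definition itself).
EXEC sp_rename 'dbo.sp_GetOrders', 'usp_GetOrders', 'OBJECT'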
Hi All. Hopefully I posted this to the right section.
I would like help identifying whether I have a disk performance issue or not. First the background: we have a J2EE application using the MS SQL 2005 JDBC driver and Hibernate on 4 application servers, and an active-passive SQL Server 2005 cluster. All of the servers reside in the same physical rack and switch.
Our application is typically bounded by CPU on the app servers or by throughput from the database. Several months ago we were using SQL 2000 and would often max out the CPU on the database server before anything else, but often the database could keep up and we would max out the app servers' CPU. Now we have 2005 on a much more powerful machine and more app servers, but we seem to be running up against a throughput problem from the database.
The issue is not CPU. The total CPU average on the database server, as monitored in perfmon on 30-second intervals, stays consistently below 40%. The app servers stay well below 30%. What concerns me is the Average Disk Read Queue Length on the database server, particularly for our E: drive. On this DB server, the transaction log, the system and temp DBs, and our application's database are all on separate EMC SAN shares, connected via Fibre Channel. The E: drive houses the app data and is a 15-way meta device (fifteen 10GB logical devices striped at 960k for a 150GB device) in a RAID-S configuration, on an EMC Symmetrix array located in the same rack. The database is roughly 30GB.
I have read various articles online describing how to interpret the Average Disk Read Queue Length performance counter with regard to SQL Server. Some say it should not exceed the number of physical spindles * 2. We are seeing values of 32 consistently, averaging over 60 during peak processing hours, and spiking to well over 100 (on a 1.0 scale, with a 3-second sample interval).
Our application servers seem to be waiting on their database calls (a lot of inserts with frequent but small-result-set selects) and do not show I/O issues with their local storage, memory, or network interfaces. The database server, again, has no CPU, network, or memory issues. I should add that the Average Disk Write Queue Length counter does not show any issues; it's always below 1 (on a 1.0 scale). The EMC array has both read and write caching. The indexes of the application database are rebuilt weekly and defragmented every day, with stats rebuilt after the defrag.
So how can I further determine where my performance problem lies? All thoughts appreciated! Thanks!
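One way to cross-check the perfmon queue numbers is to ask SQL Server 2005 itself how long its reads and writes are taking per file; the DMV below accumulates stall time since startup. A sketch, assuming nothing beyond a 2005 instance:

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS logical_file,
       vfs.num_of_reads,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0)   AS avg_read_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_read_ms DESC

Queue length against a SAN meta device is hard to interpret because the spindle count behind the LUN isn't what Windows sees; sustained average read latencies well above 10-20 ms on the data file are a clearer sign the E: device is the bottleneck.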
I am running SQL Server 2000 SP4 on a server with 2 dual-core 4G processors, with data attached via a SAN.
I have a 70GB database with 10 users that is giving atrocious performance. I have just tried to run a count(*) across a couple of tables and am still waiting for the results 15 minutes later. When I look at the disk queue it is around 50/60; I thought the target for this was around 2. I am sure that the hardware we have in place is capable of running this DB, but I'm not sure how to fully analyse what is going wrong here.
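As a side note, if the goal of the count(*) was just a row count, SQL 2000 keeps an approximate figure in metadata, which avoids scanning 70GB at all. A sketch; the table name is hypothetical and the value can drift slightly from the true count:

SELECT o.name AS table_name, i.rows
FROM sysobjects o
JOIN sysindexes i ON i.id = o.id AND i.indid < 2
WHERE o.type = 'U'
  AND o.name = 'MyLargeTable'

For the broader problem, a sustained disk queue of 50-60 with only 10 users usually means either the queries are scanning far more than they need to (missing indexes) or the SAN path is misconfigured; capturing the slow statements with Profiler and checking their plans would separate the two.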
I have a SQL 2000 DB which I have no say in the design of; I just make it run. My typical SQL counters, such as the system queue, buffer cache, and cache hit ratio, are all good. If I need to monitor disk activity (mainly how fast my data is being read, and how long the user is waiting for that data, for both reads and inserts), what are the best counters for this, and what values should throw up a red flag?
We have an application that is experiencing I/O contention, particularly in tempdb but also in two other databases. The data is stored on mirrored PowerVault 220s, each with 10 of 14 possible disks. The PowerVaults are JBOD devices, not true SANs. The current config has four separate groups of physical drives assigned to distinct logical drives for log files, tempdb, and the two app DBs. This means, for example, that tempdb resides on one mirrored drive.

The standard advice when faced with disk contention is to add spindles if possible. With 4 empty slots, we would presumably assign the new physical disks to the most stressed DB, e.g. tempdb.

An alternative arrangement would be to combine all the physical drives into one logical drive, and put all the files, log and data, onto the single logical drive. The hope for this configuration is that the PowerVault would automagically distribute the data among the drives such that all drives were in use, all spindles reading and writing at maximum capacity when necessary. It is my understanding that full-featured SANs, like NetApp and EMC models, do this. My question is whether this configuration is best for the PowerVault as well, or is this the essential difference between JBOD and a true SAN?

Has anyone tried both arrangements? Advice is much appreciated.
We have a production database that sits on a 4-processor server with 4GB of memory and SAN disk storage via fibre. There are some stored procedures that take approximately 10 minutes to run. A developer has SQL Server installed on his local PC, which has one 2.5GHz processor and 2GB of memory, and the stored procedures run there in approximately 2 minutes. I have updated statistics and rebuilt indexes to no avail. He is questioning why it runs so much faster on his smaller PC compared to the production environment. I have monitored CPU, memory, and disk queue length, and none of these performance counters look concerning to me while the stored procedures are running.
Can anyone out there give me some input on what I could check to figure out why we are experiencing this performance difference?
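A useful first step is to compare the actual work each server does for the same call, since identical code can get very different plans (different data volume, statistics, or parallelism settings). A sketch, assuming you can run the procedure ad hoc; the procedure name and parameter are placeholders:

SET STATISTICS IO ON
SET STATISTICS TIME ON
EXEC dbo.usp_SlowProc @SomeParam = 42   -- hypothetical procedure and parameter
SET STATISTICS TIME OFF
SET STATISTICS IO OFF

If the production run shows far more logical reads, the plans differ (parameter sniffing or stale statistics are the usual suspects); if the reads match but the elapsed time doesn't, the time is going to waits on the production box rather than to the query itself.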
Fellas!! This is a very complicated one and it took me a few days to figure out exactly what's going on, but here's the final story:

I have a production environment running on .NET with a SQL Server (2000, SP3). The SQL Server is on a dedicated ProLiant computer with 2GB RAM (the actual SQLServer.exe process has dynamic memory assignment and can reach up to 1.6GB RAM). Nothing else is running on that specific computer. Once the SQL Server is started, it hits 300MB RAM (the minimum that was set in the configuration of the server - remember, it is dynamically acquired). Then there is a .NET program that requests just about all the data the SQL Server contains (apart from a single table that contains roughly 1.6 million rows and another table that contains about 10000 rows which are all of type IMAGE). Once all the data is retrieved, the RAM is at about 400MB.

From there on, every update I make to the data on the server causes the RAM to go up by a bit (the updates are done in a transaction which of course is committed at the end). It seems that BLOB updates are the major problem in all of this. For some reason, uploading a blob of size 9MB causes the RAM to go up by roughly 20MB, and after commit it goes down 10MB (a total gain of roughly 10MB RAM). Eventually the SQLServer process hits its upper limit (1.6GB) and at this point it starts slowing down. Some performance checks showed me the SQL Server has a lot of disk activity; it seems it is reading and writing pages of data from/to the HD all the time (which causes the queries to be much, much slower).

We have a development environment running the exact same code (it is the exact same in everything, except for the amount of data stored in the DB). This does not happen there at all.

I have a few questions:
1. Why is the RAM going up after BLOB updates?
2. Why is the RAM going up at all?
3. How can I tell the DB which tables should remain in the RAM at all times (never swapped back to the HD)? DBCC PINTABLE does not seem to do the job.

It does not seem to have anything to do with the .NET code.

Thank you very much,
M Yamo.
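What is described above is largely the buffer pool doing its job: with no 'max server memory' cap, SQL Server 2000 keeps growing toward its configured ceiling and keeps data pages (including the BLOB pages just written) cached. It can be worth confirming what the ceiling actually is and what is holding the memory; a sketch of the checks (the 1.6GB figure quoted is roughly what a default, effectively unlimited, cap works out to inside a 2GB user address space):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'min server memory (MB)'   -- the 300MB floor mentioned above
EXEC sp_configure 'max server memory (MB)'   -- default 2147483647 = take whatever is available
DBCC MEMORYSTATUS                            -- breakdown of buffer pool / cache usage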
Depending on the way I write a query, I come up with these 2 sets of stats. Is there a sure winner in this race, keeping in mind the overall health of the server? (I'm not sure of the specs of the server, as I can't log on to it :/ but are there any SQL variables that would show the CPU speed and number of CPUs?)
I am almost leaning towards the single-CPU query because of the lower resources used - or are most of the "reads" in the parallelized query not read directly from the HD, but served from the Table Spool created internally (the query plan shows it)?
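For an apples-to-apples comparison you can force the serial plan on the parallel version with a MAXDOP hint, and the processor count is available from xp_msver. A sketch, with a made-up query standing in for the real one:

-- Number of processors visible to SQL Server
EXEC master..xp_msver 'ProcessorCount'

-- Force the serial plan for comparison (hypothetical query)
SELECT c.CustomerID, SUM(o.Amount) AS Total
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerID
OPTION (MAXDOP 1)

Reads against a spool are served from the buffer cache or tempdb rather than the base table, so a higher read count in the parallel plan does not automatically mean more physical I/O; the physical reads figure from SET STATISTICS IO is the number to compare.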
I have a stored procedure that takes less than 1 second in SQL Query Analyzer to return my results. I run this same SP in ASP.NET using a calendar control, and using Performance Monitor I notice that, from my dev machine, my CPU utilization is sometimes over 40%. Are there any tweaks I can make to help decrease CPU utilization?
We are new to SQL 2000, and would like to bounce a couple questions off some of you gurus out there. We are using SQL Server 2000 to build a data repository to assist us in transitioning from our old flatfile legacy system to SAP. We are also looking to use the SQL Server 2000 repository to build a smallish Enterprise Data Warehouse on the same SQL 2000 platform.
Here is our problem: we have SQL Server 2000 loaded on a little scrapper PC with a 1.4GHz single processor, 1GB of memory, and a single 40GB IDE drive.
When we are initially loading any of our repository tables, the process cruises along pretty well, even allowing for the lookup that locates each record for update before doing an insert. But if we do something as simple as selecting count(*) against the table that's loading, performance on the load goes to its knees. We understand we're pretty much at the mercy of the hardware we have (that's the budget), but we'd like to get as much bang as we can out of what's there.
Our questions are:
1. Is there anything we can do with our server configuration (short of new hardware) that will help us?
2. Are there any recommendations as far as monitors to help us better tune this specific configuration?
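One small, immediate help: a plain count(*) takes shared locks and competes with the load for the same table. If the progress check only needs a rough number, reading dirty avoids the locking part of the cost (it still costs I/O on the single IDE drive, so run it sparingly). A sketch with a hypothetical table name:

SELECT COUNT(*) FROM dbo.StagingTable WITH (NOLOCK)

Beyond that, on a single-drive box the usual configuration wins are a fixed, generous autogrow increment (so the load isn't interrupted by frequent file growth) and the simple recovery model during bulk loads.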
I have downloaded an evaluation copy of a SQL Server performance tool (a fancy version of the Profiler) called Speed Coefficient from Imceda. Pretty interesting so far, but some questions are forthcoming (and probably will continue to come as I drill down and learn more about performance things).
It tells me I have a recompilation that I did not expect; it says the reason is "object not found at compile time, deferred to run time", but it doesn't do too well at specifically telling me which object it is complaining about (yeah, not REALLY a complaint, perhaps more a "mention", but I digress...).
I thought originally it was, perhaps, an object that I had not referenced correctly, but as it turns out, it is, I believe, referring to a global temporary table one of my procs creates. Upon further reflection/introspection, it makes sense to me that this is the case, since it won't HAVE the temp table object to kick around until it is created at run time.
Does this make sense? If so, I guess this is one of those times where the tool just makes reference to a possible issue, but it's up to the user to understand what the underlying cause of the "mention" is, and to determine if it is "OK" to have the recompilation occur.
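That reading is right: statements that reference a temp table the procedure itself creates cannot be bound when the procedure is first compiled, so they compile (and show up as a deferred-compile recompilation) at run time. A minimal illustration of the pattern, not the actual proc in question:

CREATE PROCEDURE dbo.usp_DemoDeferred
AS
BEGIN
    -- The table does not exist when the procedure is compiled...
    CREATE TABLE ##Work (ID INT NOT NULL)
    INSERT ##Work (ID) VALUES (1)

    -- ...so this statement is resolved and compiled only at execution time,
    -- which is the "deferred to run time" recompilation the tool reports.
    SELECT ID FROM ##Work

    DROP TABLE ##Work
END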
Hello all. I am new to SQL 2000. I installed a SQL 2000 database on the C: disk, but now I've found that my C: disk space is smaller than before, so I want to move my database (including data and structure) from the C: disk to the D: disk (which has a lot of space). Is it possible to do this? If it can be done, do I need to change my ASP.NET program source code (e.g. change my Crystal Reports connection string)? Thanks in advance!
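For a single database on SQL 2000, the usual way is detach, copy the files, and re-attach. A sketch with made-up names and paths; because the database keeps its name and stays on the same server, the ASP.NET / Crystal Reports connection strings do not need to change:

EXEC sp_detach_db 'MyDatabase'

-- copy MyDatabase.mdf and MyDatabase_log.ldf from the C: data folder to D:\SQLData\
-- (outside SQL Server, e.g. with Windows Explorer), then:

EXEC sp_attach_db 'MyDatabase',
     'D:\SQLData\MyDatabase.mdf',
     'D:\SQLData\MyDatabase_log.ldf'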
I have a three-tier system using SQL Server 2000. We are currently experiencing I/O bottlenecks on our SCSI RAID 10 array, which holds the data and the logs in separate partitions.
So my options as I understand it are:
Get Enterprise edition
or
Get another physical raid 10 array and separate the logs and data i.e. data on one array and logs on the other array.
I would like to try the latter but I am totally unsure how much difference this will make or whether it will make any difference at all.
Does anyone know how much performance increase I will get from using two arrays as opposed to one?
Any other advice on this scenario would be greatly appreciated.
Hi, I am developing an editorial system (MS SQL). There is a public part (frontend for readers) and an editorial part (backend for editors, admins, ...). The public part of the system will see approximately 1000 accesses/s.
A possible solution I am weighing is whether I have to use two databases: 1. a DB with all data, 2. a DB with only the data for the frontend.
I am developing a new application and am trying to find the best way to deliver XML documents to the app. Here's the scenario: each set of XML docs will have a unique identifier. The app needs to retrieve a known XML doc by using this identifier. So the dilemma I am having is as follows: I could create a folder for each identifier and load the XML docs relevant to each ID into the folder. For the app to retrieve a document, it would simply construct a file path, and the Windows B-tree directory search would find it and return it very quickly. Alternatively, I could create a table with a primary key that stores the unique identifier, cluster it, and store the XML in a BLOB field.
Does anyone have any experience, references, or thoughts about which would be the best way to do this? I am an avid database developer and the app will be using SQL 2000 anyway; however, it seems in this situation the database might not be the best option in terms of speed, performance, and resource usage. Thoughts?
Hi, I am starting a new job next week. Part of what I am required to do is a "system diagnostic" on a SQL Server 2000 box to determine what areas can use improvement - this would include configuration settings, backup/recovery, sql tuning, etc... and anything else I may not have mentioned here! What I need is a thorough and systematic approach to doing this. Can anyone please give me advice, or point to a FAQ or other links that discuss this? I am running out of time. THANKS MUCH
Hello, I just wanted to know how we can find the system performance without using tools like Performance Monitor or Profiler. I just need a query equivalent to Performance Monitor to see the system status of CPU, I/O, memory, etc. Thanks, Ravi
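SQL Server exposes its own Performance Monitor objects through a queryable system table (sysperfinfo in SQL 2000, also visible as sys.dm_os_performance_counters in 2005). It only covers the SQL Server counters, not the operating system's CPU or physical-disk counters, but it answers many of the usual questions. A sketch with a few commonly used counters:

SELECT object_name, counter_name, instance_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name IN ('Buffer cache hit ratio',
                       'Page life expectancy',
                       'Batch Requests/sec',
                       'User Connections')

The '/sec' counters are cumulative totals, so take two samples a known number of seconds apart and divide the difference by the elapsed time.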
Hi, I am trying to write a table-valued function in SQL Server 2005 (SP1) to return all the Active Directory groups a user belongs to, using managed code (VB.NET).
Testing the code with a simple winform I get the list of groups in about 0.4 seconds. However the table-valued function takes upwards of 17 seconds to run! Is this normal for managed code in SQL Server?
Imports System
Imports System.Text
Imports System.Data
Imports System.Data.SqlClient
Imports System.Data.SqlTypes
Imports System.Collections
Imports System.DirectoryServices
Imports Microsoft.SqlServer.Server

Partial Public Class UserDefinedFunctions

#Region "Constants"

    ''' <summary>
    ''' The connection string for Active Directory.
    ''' </summary>
    Private Const LDAP_CONNECTION_STRING As String = "LDAP://<My LDAP connection string>"

    ''' <summary>
    ''' The LDAP search filter needed to find a user in Active Directory.
    ''' </summary>
    Private Const LDAP_SEARCH_FILTER_USER As String = "(&(objectclass=user)(objectcategory=person)(sAMAccountName={0}))"

#End Region

    ''' <summary>
    ''' Gets all active directory groups for the user.
    ''' </summary>
    ''' <returns>All dataset permissions for the user.</returns>
    <Microsoft.SqlServer.Server.SqlFunction(DataAccess:=DataAccessKind.None, FillRowMethodName:="udfUserActiveDirectoryGroupsFill", TableDefinition:="GroupID NVARCHAR(100)")> _
    Public Shared Function udfUserActiveDirectoryGroups(ByVal userName As String) As IEnumerable
        ' Setup the active directory search.
        Dim searcher As New DirectorySearcher(LDAP_CONNECTION_STRING)
        searcher.Filter = String.Format(LDAP_SEARCH_FILTER_USER, userName)
        searcher.SearchScope = SearchScope.Subtree
        searcher.PropertiesToLoad.Add("distinguishedname")

        ' Run the active directory search.
        Dim result As SearchResult = searcher.FindOne()
        Dim userEntry As DirectoryEntry = result.GetDirectoryEntry()

        ' Walk the group memberships recursively.
        Dim userGroups As New ArrayList
        GetActiveDirectoryGroupsForEntry(userEntry, userGroups)
        Return userGroups
    End Function

    Public Shared Sub udfUserActiveDirectoryGroupsFill(ByVal source As Object, ByRef GroupID As SqlChars)
        GroupID = New SqlChars(CType(source, String))
    End Sub

    ''' <summary>
    ''' Recursively gets the active directory groups for the directory entry.
    ''' </summary>
    ''' <param name="entry">The active directory entry.</param>
    ''' <param name="groups">The list of groups.</param>
    Private Shared Sub GetActiveDirectoryGroupsForEntry(ByVal entry As DirectoryEntry, ByVal groups As ArrayList)
        For i As Integer = 0 To entry.Properties("memberOf").Count - 1
            Dim memberEntry As New DirectoryEntry("LDAP://" + entry.Properties("memberOf")(i).ToString())
            groups.Add(memberEntry.Properties("sAMAccountName")(0).ToString())
            GetActiveDirectoryGroupsForEntry(memberEntry, groups)
        Next
    End Sub

End Class
If I return the Average, Minimum, and Maximum values for the counter Physical Disk: Avg. Disk Queue Length, and those values are 10, 0, 87 respectively, which value do I use to compute the Avg. Disk Queue Length for a 4 disk array(RAID 10): Average, Minimum, or Maximum? The disk(lun) is on a SAN.
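The rule of thumb is usually applied to the sustained average, divided by the number of spindles actually behind the volume; the minimum mostly reflects idle periods and the maximum a momentary burst. With the LUN on a SAN, the real spindle count may not be 4, so treat the result as a rough indicator and lean on the latency counters as well. The arithmetic for the numbers quoted, assuming 4 spindles:

-- Per-spindle queue depth from the sampled values (assuming 4 spindles)
SELECT 10.0 / 4 AS AvgQueuePerSpindle,   -- 2.5, just over the "2 per spindle" guideline
       87.0 / 4 AS PeakQueuePerSpindle   -- ~21.8, only a concern if sustained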
-- Initialize control mechanism
DECLARE @Drive TINYINT,
        @SQL VARCHAR(100)

SET @Drive = 97    -- ASCII 'a'

-- Setup staging area
DECLARE @Drives TABLE
        (
            Drive CHAR(1),
            Info  VARCHAR(80)
        )

-- Loop over drive letters a-z and capture "fsutil volume diskfree" output
WHILE @Drive <= 122    -- ASCII 'z'
BEGIN
    SET @SQL = 'EXEC XP_CMDSHELL ''fsutil volume diskfree ' + CHAR(@Drive) + ':'''

    INSERT @Drives (Info)
    EXEC (@SQL)

    UPDATE @Drives
    SET Drive = CHAR(@Drive)
    WHERE Drive IS NULL

    SET @Drive = @Drive + 1
END

-- Show the expected output
SELECT Drive,
       SUM(CASE WHEN Info LIKE 'Total # of bytes : %'
                THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT)
                ELSE CAST(0 AS BIGINT) END) AS TotalBytes,
       SUM(CASE WHEN Info LIKE 'Total # of free bytes : %'
                THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT)
                ELSE CAST(0 AS BIGINT) END) AS FreeBytes,
       SUM(CASE WHEN Info LIKE 'Total # of avail free bytes : %'
                THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT)
                ELSE CAST(0 AS BIGINT) END) AS AvailFreeBytes
FROM (
       SELECT Drive, Info
       FROM @Drives
       WHERE Info LIKE 'Total # of %'
     ) AS d
GROUP BY Drive
ORDER BY Drive
I am trying to set up a test cluster and am having an issue. When I try to create the resource of a physical disk, it takes both drive E: and drive Q: and doesn't separate them into two physical disk resources. This means that when I try to associate the quorum disk, it links to the physical disk resource containing both E: and Q:. Then when I try to install SQL2k5 I get the warning about installing SQL on the quorum disk. Am I missing something? Is there a way to separate E: and Q: onto two physical disk resources so I can specifically associate the quorum with Q: and SQL with E:, or should I be setting the quorum disk to a majority node set? Thanks in advance.
Ok guys, I am trying to install Sql server 2005 on Vista but I am still stuck with this warning message in the System Configuration Check during Sql server 2K5 installation :
SQL Server Edition Operating System Compatibility (Warning) Messages * SQL Server Edition Operating System Compatibility
* Some components of this edition of SQL Server are not supported on this operating system. For details, see 'Hardware and Software Requirements for Installing SQL Server 2005' in Microsoft SQL Server Books Online.
Now, I know I need to install SP2, but how am I going to install SP2 if SQL Server 2005 won't install any of the components, including SQL Server Database Services, Analysis Services, and Reporting/Integration Services (only the workstation components are available for installation)?
Any work around for this issue?
P.S.: I didn't install VS.NET 2005 yet; could installing it solve the problem?
I have created a windows library control that accesses a local sql database
I tried the following strings for connecting
Dim connectionString As String = "Data Source=localhost\SQLEXPRESS;Initial Catalog=TimeSheet;Trusted_Connection = true"
Dim connectionString As String = "Data Source=localhost\SQLEXPRESS;Initial Catalog=TimeSheet;Integrated Security=SSPI"
I am not running the webpage in a virtual directory but in
C:\Inetpub\wwwroot\usercontrol
and I have a simple index.html that tries to read from an sql db but throws
the error
System.Security.SecurityException: Request for the permission of type 'System.Data.SqlClient.SqlClientPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) at System.Security.PermissionSet.Demand() at System.Data.Common.DbConnectionOptions.DemandPermission() at System.Data.SqlClient.SqlConnection.PermissionDemand() at System.Data.SqlClient.SqlConnectionFactory.PermissionDemand(DbConnection outerConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection,
etc etc
The action that failed was: Demand The type of the first permission that failed was: System.Data.SqlClient.SqlClientPermission The Zone of the assembly that failed was: Trusted
I looked into the .NET configuration utility, but it says unrestricted, and I tried adding the site to the trusted Internet zones in IE's security options.
I think that a windows form connecting to a sql database running in a webpage should be simple
I have been tasked with moving our SQL Server estate onto new 64-bit SQL 2008 virtual servers on a VM base. Each virtual server will be attached to our SAN, which I will have no control over. Do I ask for multiple LUNs, presenting a C: (OS), E: (temp), F: (data) and G: (log) disk structure, or do I just present one very big space as a single C: drive and let it go? We are consolidating lots of old physical servers onto fewer (more powerful) virtual servers (according to the VM and SAN administrators).
Hi, I have a 250GB database and not much space left on the disk drive. I want to run SQLMAINT to do optimization and integrity checks on this DB. My question is: how much work space does SQLMAINT need to perform these tasks? Thanks in advance for your help. F.
I am looking for an API to flush all data in memory held by SQL Server to disk. Also, is there a tool for SQL Server like eseutil for Exchange that lets you correct a SQL database?
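For what it's worth, the closest built-in equivalents are CHECKPOINT (writes the current database's dirty pages to disk) and DBCC CHECKDB (verifies a database and, with its repair options, can correct it, which is roughly the role eseutil plays for Exchange). A sketch with a placeholder database name:

USE MyDatabase
CHECKPOINT                     -- flush dirty pages for the current database

DBCC CHECKDB ('MyDatabase')    -- consistency check; repair options exist but require
                               -- single-user mode and should be a last resort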