I need to know if there are any other failover options available for SQL Server/NT beyond Microsoft Cluster Services. If there are any, which ones are advised?
MS recommends using a remote Distributor for mirroring, but that looks like it will be a single point of failure. What are my options? We can't mirror it, so is failover clustering the only way to go (with the SAN being a single point of failure)? How would log shipping work if I didn't get the very last completed transaction log to the other machine fast enough, so that the recovered distribution database ended up behind? Also, if the Distributor fails, would my transaction logs on the Publisher fill up the entire disk space? Can that be stopped?
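As a side note on that last question, one way to watch for runaway log growth on the Publisher is to poll log usage on a schedule. This is just a monitoring sketch; the database name and the 80% threshold are my assumptions, not values from the post:

-- Report log file usage for every database on the instance
DBCC SQLPERF(LOGSPACE)

-- Or capture the output and alert on one (assumed) publication database
DECLARE @pct decimal(5,2)
CREATE TABLE #logspace (dbname sysname, logsizemb float, pctused float, status int)
INSERT INTO #logspace EXEC ('DBCC SQLPERF(LOGSPACE)')
SELECT @pct = pctused FROM #logspace WHERE dbname = 'PublisherDB'  -- assumed name
IF @pct > 80.0
    RAISERROR ('Publisher log is over 80 percent full', 16, 1)
DROP TABLE #logspace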
Well, today's the day. I finally got approval to upgrade our 24x7 cluster to SQL Server Service Pack 2. I'm so looking forward to running Profiler without causing the server to fail over, not to mention ArcserveIT and Spotlight an... Anyway, any feedback about the upgrade process is greatly appreciated. I've installed SP2 elsewhere without a hitch, but I realize that I have to uncluster my production servers first and that there is no easy way back from SP2. I have full backups scheduled to run prior to the upgrade; are there any other ways to protect myself? Thanks in advance!!
I am new to MS Cluster Server. Could anyone tell me the correct procedure to shut down and restart MS SQL Server 2000 in a clustered environment? Thanks.
We have set up Mirroring with a witness server and everything works fine when we failover from the SQL Management console.
However, if we failover when our Maccola client is connected, the client blows up - clearly because it can no longer connect to the database.
The ODBC DSN used by the Maccola client shows a 'select a failover server' checkbox, but the checkbox is grayed out.
Also, the summary of settings for the DSN at the end of the wizard reveals that the 'failover to server' (Y/N) option is set to N.
The default setting for this DSN is 'populate the remaining values by querying the server', but it doesn't appear to be getting the failover settings from the server, or any of the other interactive DSN settings either. The server is clearly set up for mirroring.
Another suspicious item is that the DSN cannot connect to the server with SA permissions, even though the server is set to mixed security and we use the correct authentication.
Is it possible that the client MACHINE is not authenticating with the domain or SQL Server properly? We are logged into the client with the domain account that is the SQL admin account on the SQL Server box.
We should be able to interact with the SQL Server settings through the ODBC DSN on the client, shouldn't we?
1. In an AlwaysOn failover cluster, once failover to a secondary replica has occurred, what happens to the sessions connected to the primary node? Can a session fail over to the secondary seamlessly, or does it need to re-login? And what happens to committed transactions that have not yet been written to disk?
2. Assume I have an AlwaysOn cluster with three nodes: if the primary fails, how does the second node become read/write?
3. After failover to the second (secondary) node is done, what mode is production in (read-only or read-write)?
4. How do I roll back to the production primary? Will data changed on the secondary get updated in the primary?
Hi, I am not having luck creating a linked server on SQL 7 running HA on MSCS. We keep getting "Error 6: Specified SQL Server not found".
1. The linked server has an advanced entry in the Client Network Utility.
2. We are setting up the link to a SQL 7 server.
3. Security is mapped to a remote user which does exist on the remote server.
4. The two machines can ping each other.
5. On the linking server, the linked server can be registered as a registered server.
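For comparison, here is a minimal sketch of the kind of linked-server setup being described; the server name and credentials are placeholders, not values from the post:

-- Register the remote SQL Server as a linked server (name is a placeholder)
EXEC sp_addlinkedserver
    @server = N'REMOTESQL7',
    @srvproduct = N'SQL Server'
GO
-- Map local logins to an existing remote login (credentials are placeholders)
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'REMOTESQL7',
    @useself = 'false',
    @locallogin = NULL,
    @rmtuser = N'remoteuser',
    @rmtpassword = N'password'
GO
-- Quick smoke test
SELECT TOP 1 * FROM REMOTESQL7.master.dbo.sysdatabases

Note that when @srvproduct is 'SQL Server', the @server value must resolve to the remote machine's network name, which is often where "Specified SQL server not found" comes from.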
I would like to compare MSCS with PolyServe's solution, and am wondering if anyone has experience using both or knows:
What are the major differences between the two? What are the limitations? What are the advantages and disadvantages? Which is advisable to use as an HA solution?
Any other information which may help me to make this decision would be appreciated.
Very often, when I generate SQL scripts for a table, I forget to go to the Options tab to tick the primary key, default, and index boxes. Is there a way to permanently set SQL Server's generate-SQL-scripts options ONCE?
Hi, I have created a SQL script through Enterprise Manager to drop a column. By default it is creating a lot of SET commands, and I doubt whether all these SET options are required. Please comment on this issue:

BEGIN TRANSACTION
SET QUOTED_IDENTIFIER ON
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
SET ARITHABORT ON
SET NUMERIC_ROUNDABORT OFF
SET CONCAT_NULL_YIELDS_NULL ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
COMMIT
BEGIN TRANSACTION
ALTER TABLE Employee
DROP COLUMN OrderDetails_ID
GO
COMMIT

Dil
I could do with a couple of pointers to the best options to achieve my goal. I'm pretty close with the way I've done it, but I feel there is a more elegant solution out there, so your help would be most appreciated.
The problem is finding the best way of moving some changed SQL Server 2000 data into SQL Server 2005. We are only interested in some tables in 2000 (and sometimes just subsets of those). Because there are quite a few tables and we want to set up a schedule to run periodically, we chose SSIS. The main reason for this is to utilise a For Each loop that pulls each table name from a one-column staging table of table names (that way we can do more or fewer comparisons by simply adding to and removing from the staging table). Also in this loop, using the table name as a variable, we run an Execute SQL task along the lines of 'SELECT * FROM varTable EXCEPT SELECT * FROM varTable_tracker', which gives us the difference between the two tables (where the tracker table is a copy of the data table which is synchronised at the end of the job run). So far so good.

Now the tricky bit: EXCEPT only works under 2005, and the tables are in 2000, so we ended up having a linked server in 2005 back to the 2000 tables. Is there a way of achieving the same result without involving the linked server? Or is there a task (script?) we can run to verify the linked server is up before we execute the job? We already run checks on Connection Managers to see if they are up, but we have never tried linked servers. Lastly, will performance be an issue?
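On the "verify the linked server is up" part, a minimal pre-flight check on the 2005 side could look like the sketch below; the linked server name is a placeholder:

-- sp_testlinkedserver raises an error if the connection test fails,
-- so the CATCH block is the "linked server is down" path
BEGIN TRY
    EXEC sp_testlinkedserver N'SQL2000LINK'   -- placeholder name
    PRINT 'Linked server is up - safe to run the comparison job'
END TRY
BEGIN CATCH
    RAISERROR ('Linked server is unreachable - aborting job', 16, 1)
END CATCH

An Execute SQL task at the top of the package could run this and fail the job cleanly before any table comparisons start.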
Does anyone know about the undocumented options of DBCC? Which options are undocumented, e.g. LOG? There are only a few options documented in SQL Server Books Online. Thanks in advance.
What are the most critical DBCC options that should be run to ensure the sanity of the database and DBA alike? Do these have to be run in single-user mode, or can they be run while users are on the system?
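For context, the consistency checks people usually mean here are the documented ones below; this is an illustrative sketch with a placeholder database name, not a claim about what this particular system needs:

-- Core integrity check: allocation, structural, and logical consistency
DBCC CHECKDB ('MyDatabase')          -- placeholder database name

-- Narrower checks that CHECKDB already includes, but can be run separately
DBCC CHECKALLOC ('MyDatabase')       -- allocation consistency only
DBCC CHECKCATALOG ('MyDatabase')     -- system catalog consistency
DBCC CHECKTABLE ('dbo.MyTable')      -- a single table and its indexes

None of these require single-user mode (the repair options do), though they add noticeable I/O load while running.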
An automatic monthly delete has recently grown from 15 to 20 million rows. It is now filling my 70GB T-log completely, and I don't have any space to expand the T-log. Do I have any options other than reducing the number of rows in the delete?
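One commonly suggested approach for this situation (my illustration, not from the original post) is to break the delete into batches and back up the log between them so the space can be reused; table, column, and path names are placeholders:

-- SQL 2000 style: cap rows per statement (on 2005+, DELETE TOP (100000) works too)
SET ROWCOUNT 100000
DECLARE @rows int
SET @rows = 1
WHILE @rows > 0
BEGIN
    DELETE FROM dbo.BigTable
    WHERE CreatedDate < DATEADD(month, -1, GETDATE())
    SET @rows = @@ROWCOUNT

    -- In FULL recovery, back up the log here so the space becomes reusable
    BACKUP LOG MyDatabase TO DISK = 'E:\backups\MyDatabase_log.trn'
END
SET ROWCOUNT 0   -- reset the session

In SIMPLE recovery, checkpoints between batches free the log instead of the log backups.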
Hey guys, I need your help please. Here is the scenario:
1. I need to return data back to the client (result set varies from 20 to 10,000 rows).
2. I only want to show 20 records at a time.
3. To get the info I need to display, I need to join 10 tables.
When there is a small number of records it works, but when I get over 8,000 it becomes a problem:
1. The first version was: get all the data using one big query, return everything back to the client, and display only 20 at a time (not very efficient). Takes around 15 seconds to view 20 records.
2. Inspired by 4GuysFromRolla (http://www.4guysfromrolla.com/webtech/062899-1.shtml): use a stored procedure with server-side paging logic to get 20 records at a time. I had to pass in every filter parameter; the SP had to sort the result set and return only the 20 records I need to display. Takes around 5 seconds to view 20 records.
I still think it's slow. I know this is a very broad question, but is there any other way to do it, logically?
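For what it's worth, here is a minimal sketch of the nested-TOP paging pattern that works on SQL Server 2000; the table name, key column, and page numbers are assumptions for illustration:

-- Page 3 of 20 rows per page: take the first 60 by key, reverse to keep the
-- last 20, then reverse again to restore the original order
SELECT * FROM (
    SELECT TOP 20 * FROM (
        SELECT TOP 60 *              -- page_number * page_size rows (assumed 3 * 20)
        FROM dbo.Orders              -- placeholder table
        ORDER BY OrderID ASC
    ) AS firstRows
    ORDER BY OrderID DESC
) AS pageRows
ORDER BY OrderID ASC

The 10-table join can live in the innermost query; the point is that only page_number * page_size rows are ever shipped back out.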
Hi folks. I've recently done a fresh installation of the operating system and SQL: Windows 2000 Server SP4 and SQL 2000 Enterprise SP3. I am using a domain user as the SQL service startup account, and it is a member of Domain Admins. I've manually added the user to the Local Administrators group on the system, and it obviously also has the sysadmin server role.

The problem is: when I use Enterprise Manager and change any of the server settings, such as Priority Boost or fixed memory allocation for SQL Server, nothing happens. The options dialog box closes without asking to restart the server, and the settings don't take effect when I restart SQL. However, if I change the settings using sp_configure under the same user, it works. I've assigned fixed memory to SQL, but the option "Reserve Physical Memory For SQL" won't work, and I couldn't find this option in sp_configure. Any ideas what has gone wrong?
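For reference, a minimal sp_configure version of those Enterprise Manager settings might look like this; as far as I can tell, 'set working set size' is the sp_configure name behind "Reserve Physical Memory For SQL" on SQL 2000, so treat that mapping as an assumption to verify:

-- Advanced options must be visible before these can be changed
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'priority boost', 1              -- Priority Boost checkbox
EXEC sp_configure 'min server memory (MB)', 1024   -- fixed memory: set min...
EXEC sp_configure 'max server memory (MB)', 1024   -- ...and max to the same value
EXEC sp_configure 'set working set size', 1        -- "Reserve Physical Memory" (assumed mapping)
RECONFIGURE
GO
-- 'priority boost' and 'set working set size' only take effect after a restart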
This follows on from a query I had a few days back (and for which I was promptly flamed! However, I've got skin like a rhinoceros, so here goes...)
I have a table - ProjectSite - that is pulling information from two tables (Project and Site). This table contains data regarding which sites are part of which project.
I now want a means of reporting dates against this. The problem is that each project has bespoke milestone dates, so I can't just create columns in ProjectSite. The only solution I can see is to pull each project (and there are quite a few) and its corresponding sites into a new table, and then I can create my bespoke columns.
Does this sound like the best viable option, or can anyone suggest another means of doing this?
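One alternative worth weighing (a sketch under assumed names, not from the original post) is to store milestones as rows rather than bespoke columns, so each project can define its own set without new tables:

-- Each project defines its own milestones; dates attach to a project/site pair
-- (table and column names are placeholders)
CREATE TABLE ProjectMilestone (
    ProjectMilestoneID int IDENTITY(1,1) PRIMARY KEY,
    ProjectID          int          NOT NULL,   -- FK to Project
    SiteID             int          NOT NULL,   -- FK to Site
    MilestoneName      varchar(100) NOT NULL,   -- bespoke per project
    MilestoneDate      datetime     NULL
)

Reporting then pivots milestone rows into columns per project, instead of creating a new table per project.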
I am considering the different options for package deployment on the server. Until now, I have found several different ways to deploy packages to the server (File System):
1. Using the Import option from Management Studio (only one by one).
2. Using the Deployment Utility (needs building the whole project; opens all the packages in debugging mode; cannot deploy to different folders).
3. Using dtutil by constructing a command line for each package deployment (complicated).
4. Simply copying the files from the local project folder to the "Program Files\Microsoft SQL Server\90\DTS\Packages" folder on the server.
Does anyone have any other suggestions for deployment? The 4th seems to be the easiest one, but I haven't seen anybody suggesting such an action. What's the downside of it?
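On option 3, a single dtutil invocation is less involved than it sounds. Here is one hedged example; server, path, and package names are placeholders, and dtutil /? lists the exact switches on your build:

rem Copy a package file into the msdb store on a target server (names are placeholders)
dtutil /FILE "C:\Packages\MyPackage.dtsx" /DestServer MYSERVER /COPY SQL;"MyPackage"

rem Or copy it into the SSIS Package Store (the File System folder)
dtutil /FILE "C:\Packages\MyPackage.dtsx" /COPY DTS;"MyPackage"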
I hope someone can point me in the right direction here.
I have an application with the following requirements (using SQL CE 2, alas):
A set of tables on the server that need to be imported to the handheld. Using RDA, I need to get the modifications to these tables from the server (add/edit/delete), but the handheld will never update these tables.
A set of tables on the server that need to be imported to the handheld. The handheld needs to add/edit existing records, and it needs to get any changes from the server.
A set of tables on the server where the handheld needs to import a subset of the records. It needs to add (but not edit) new records, upload the new records to the server, and download any changed (add/edit/delete) records to the handheld. What tracking options should I use in these three cases?
The problem comes in that I need to have some foreign key relationships in the database on the handheld. Since rda munges the names of primary keys (and indexes), I do not know of a good way to add these foreign key constraints. Any suggestions?
Hello all. I am currently doing some research into options for setting up reporting. Right now we have a server on EE that's getting hit a bit too hard by the reports. The budget is currently a bit low, but we already have a second server purchased.
For our reporting, we need data that is up to date within the last 15 minutes (less if possible). Because of the potential size of some of our transactions, I've ruled out log shipping, as it means too much downtime for the reporting data while the second server catches up. So, I'm trying to figure out what reporting options I have left open to me.
1) I understand that for reporting purposes, a snapshot must be taken of the mirrored server. Why can't reports run directly off the live mirror (or am I mistaken)? (See the snapshot sketch after this list.)
2) Is it possible to mirror from EE to SE (remember, low budget for the second server)?
3) How high is the overhead when taking a snapshot every 5-15 minutes? (I would think it's machine-specific, but overall is it pretty quick, or prohibitive based on how often the snapshot is needed?)
4) Is replication perhaps a better option based on how up to date the data has to be? Are there any other options that may be available for near-realtime reporting?
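For reference on question 1, creating a reporting snapshot on the mirror is a one-statement operation; the database and file names below are placeholders:

-- A database snapshot taken on the mirror server; reports query the snapshot,
-- since the mirror database itself stays in a restoring state
CREATE DATABASE Sales_Reporting_Snap ON
    ( NAME = Sales_Data,                               -- logical file name of the source DB (placeholder)
      FILENAME = 'E:\Snapshots\Sales_Reporting_Snap.ss' )
AS SNAPSHOT OF Sales                                   -- placeholder database name
GO
-- Refreshing means dropping and recreating it on a schedule
DROP DATABASE Sales_Reporting_Snap

Note that database snapshots are an Enterprise Edition feature in SQL Server 2005, and as far as I know the two mirroring partners must run the same edition, both of which bear on question 2.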
I have been attempting to locate a hosting company that offers SQL Server 2005 in addition to Analysis & Reporting Services, but I have been unable to find one that does so without purchasing an entire server. Does anyone know of a company?
I have got a small project that requires feeding in a .CSV flat file and loading the data into SQL Server 2000.
I developed an SSIS package for this and got it working on my computer, but I need to deploy it to customers that don't have VS 2005 or SSIS installed. Can any of you give me some clues on that?
I played around with the flat file deployment, and again it seems to work only on my computer, as I have everything installed.
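If installing SSIS on the customer machines turns out to be a non-starter, one fallback (my suggestion, not from the post) is plain T-SQL bulk loading, which needs nothing beyond SQL Server 2000 itself; the path and table name are placeholders:

-- Load a comma-delimited file straight into an existing staging table
BULK INSERT dbo.CsvStaging         -- placeholder table
FROM 'C:\Import\data.csv'          -- placeholder path
WITH (
    FIELDTERMINATOR = ',',   -- column delimiter in the .CSV
    ROWTERMINATOR   = '\n',  -- one record per line
    FIRSTROW        = 2      -- skip the header row
)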
Just wanted to get some feedback on this scenario. I will be developing an ASP.NET application for our local intranet that employees will have access to. It may also be implemented to allow employees to access this ASP site from home as well. I would be using SQL Server 2000 as the backend DB, and wondered what type of licensing would best fit this scenario?
Sorry if this is double posted....seemed to be having some issues with posting.
Hi, I was recently experiencing slowness when executing stored procedures from a .NET application, though they ran fast from Query Analyzer. Research led me to find that turning ARITHABORT ON makes the session's SET options match Query Analyzer's, so SQL Server reuses the same execution plan whether the request comes from Query Analyzer or the application. My concern now is the side effects of ARITHABORT. I understand what the option does, but I am trying to think of a scenario where turning it on could be bad. Does anyone have suggestions on what I should be aware of when disabling/enabling ARITHABORT or ARITHIGNORE? Thanks. -Brian
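To make the behavioral difference concrete, here is a small demo of what these options change at the query level (note that ANSI_WARNINGS must be OFF for ARITHABORT/ARITHIGNORE to govern divide-by-zero handling):

SET ANSI_WARNINGS OFF

-- With ARITHABORT OFF and ARITHIGNORE ON, the error is swallowed: NULL comes back
SET ARITHABORT OFF
SET ARITHIGNORE ON
SELECT 1/0 AS Result      -- returns NULL, no message

-- With ARITHABORT ON, the same statement raises error 8134 and aborts the batch
SET ARITHABORT ON
SELECT 1/0 AS Result      -- Divide by zero error encountered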
Hello, I am trying to bind a SqlDataSource control to a GridView. I have selected the SqlDataSource control and specified the connection string, but on the Configure Select Statement page, under advanced options, both checkboxes - 'Generate INSERT, UPDATE and DELETE statements' and 'Use optimistic concurrency' - are disabled for me. I have a proper SQL Server database, not an Express database. How do I get it to generate the InsertCommand, EditCommand, etc.? Any help would be great... relatively new here. Thanks.
I'm writing an insert form which will write records to a few tables. What I want to know is: how do I write multiple answers to one question into different rows in the table while keeping the same ID?
For example.
The form has the following fields:
HotelID
HotelFacilities (CheckBoxList)
Now each hotel (in this case) will only have one ID but more than one HotelFacility .
How do I get my table to read...
HotelID    HotelFacility
1          Bar
1          Restaurant
1          Cafe
1          Wi-Fi Access
I presume INSERT INTO tblHotelFacilities(HotelID, HotelFacility) VALUES(@HotelID, @HotelFacility) won't write more than one selected facility?

Thanks,
Brett
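For what it's worth, one classic pattern is to call that INSERT once per checked item from the code-behind, or to pass all the selections in one delimited string and split them server-side. Here is a hedged sketch of the latter; the table and parameter names follow the post, but the splitting procedure itself is my assumption:

CREATE PROCEDURE dbo.InsertHotelFacilities
    @HotelID int,
    @Facilities varchar(1000)   -- e.g. 'Bar,Restaurant,Cafe,Wi-Fi Access'
AS
BEGIN
    DECLARE @pos int, @item varchar(100)
    WHILE LEN(@Facilities) > 0
    BEGIN
        SET @pos = CHARINDEX(',', @Facilities)
        IF @pos = 0
        BEGIN
            SET @item = @Facilities          -- last (or only) item
            SET @Facilities = ''
        END
        ELSE
        BEGIN
            SET @item = LEFT(@Facilities, @pos - 1)
            SET @Facilities = SUBSTRING(@Facilities, @pos + 1, LEN(@Facilities))
        END
        -- one row per facility, same HotelID each time
        INSERT INTO tblHotelFacilities (HotelID, HotelFacility)
        VALUES (@HotelID, @item)
    END
END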
The advanced SQL generation options - 'Generate INSERT, UPDATE, and DELETE statements' - are all greyed out in my SqlDataSource control. I have made a brand new instance with SQL Server Management Studio Express... have I missed something?