Recovery :: Highly Critical Application With No Downtime?
Oct 29, 2015
1. I am looking for a database solution for a highly critical application. What type of HA/DR solution will suit this (AlwaysOn, transactional replication, log shipping, or mirroring)? All secondary nodes should be identical to production.
2. For any upgrade, patching, or maintenance work, production should never go down. 100% uptime, no downtime.
Dear all, I am pretty new in the development world, fresh from uni. I am doing development on a system that has a security database. Access to the data in that database is pretty important, so in case the main server where the database is stored fails or gets compromised for some reason, I need to have a second copy with the most recent data in that database and keep the application up and running. The data I have is stored in a SQL 2005 database. What are the recommended approaches for achieving this needed reliability? Would running the SQL Agent every 2 minutes do the trick, replicating the database to another server, and then having a secondary deployment on that server running as a backup? Or are there any other means? Any advice is appreciated. Sincerely
My application supports a single database connection, and from the app console I can produce reports. If I include the app database in an AlwaysOn availability group with a read-intent replica, will SQL automatically route the SELECTs to that second instance, thus offloading my application's reporting activity, or do I need a separate DB connection (maybe from a reporting app or CLI) that specifies read-only intent?
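For what it's worth, read-only routing is not automatic for an existing connection: the connection string has to specify ApplicationIntent=ReadOnly and connect through the AG listener, and the replicas have to be configured to route. A minimal sketch of that configuration, with invented AG and server names:

ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SECONDARY01'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY,
                      READ_ONLY_ROUTING_URL = N'TCP://secondary01.contoso.com:1433'));

ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'PRIMARY01'
WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'SECONDARY01')));

-- The client then needs something like:
-- Server=tcp:MyAgListener,1433;Database=AppDb;ApplicationIntent=ReadOnly;...

So with a single shared connection string, the app's reports would only be offloaded if that one connection declares read-only intent; otherwise a second, read-intent connection is needed.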
Hi, first of all my apologies if you have seen this mail already, but I am re-sending as there were some initial problems. This query is about defining indexes as unique or not, and the consequences thereof. Some documented facts that I am aware of include:
1. Defining uniqueness allows the optimiser to create optimal plans, e.g. a select based on the keys of such an index lets the optimiser determine that at most one row will be returned.
2. Defining uniqueness ensures that the rule (business/primary key) is enforced, regardless of how the data is entered.
We have many cases where non-unique indexes are defined. The approach to date has been that, even though we are aware of some of the benefits offered by defining uniqueness, we have chosen not to add keys to non-unique indexes to make them unique. The primary reason for this was that we did not want to make the keys comprising the indexes unnecessarily large, with the ensuing consequences when DML statements are performed. However, I have concerns that highly duplicate indexes can have performance impacts, including deadlocking. I am also aware that Sybase used to store duplicate values in overflow pages, with performance consequences as a result. Could SQL 2000 have the same behaviour? Thanking you in advance, Puvendran
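To make point 1 concrete, here is a hedged illustration (table and column names invented): with the unique variant the optimizer knows an equality seek returns at most one row; with the non-unique one it cannot assume that.

-- Unique: enforces the rule and tells the optimizer "at most one row":
CREATE UNIQUE NONCLUSTERED INDEX ux_Orders_OrderNo ON dbo.Orders (OrderNo);

-- Non-unique: same key column, but no such guarantee for the optimizer:
CREATE NONCLUSTERED INDEX ix_Orders_OrderNo ON dbo.Orders (OrderNo);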
I have a situation where I'll need to get results from tables based on totally arbitrary filters. The user can select the field to compare against, the value, the comparison operator, and the boolean operator, so each bit in brackets would be configurable: [field] [>] [value] [and]. The user can specify an arbitrary number of these, including zero of them. I like the COALESCE trick for situations that are a little more structured, but I think I'm stuck generating a dynamic query for this -- please correct me if I'm wrong!
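If dynamic SQL it is, a hedged sketch of the usual pattern (the table, columns, and types here are invented): whitelist the field and operator, and pass the value as a real parameter through sp_executesql so it is never concatenated into the string.

DECLARE @field sysname       = N'Price';   -- user-chosen column
DECLARE @op    nvarchar(2)   = N'>';       -- user-chosen operator
DECLARE @value decimal(10,2) = 100;        -- user-supplied value, always parameterized
DECLARE @sql   nvarchar(max);

IF @field NOT IN (N'Price', N'Quantity')                     -- whitelist of columns
   OR @op NOT IN (N'=', N'<>', N'<', N'<=', N'>', N'>=')     -- whitelist of operators
    RAISERROR(N'Invalid filter.', 16, 1);
ELSE
BEGIN
    SET @sql = N'SELECT * FROM dbo.Products WHERE '
             + QUOTENAME(@field) + N' ' + @op + N' @value;';
    EXEC sp_executesql @sql, N'@value decimal(10,2)', @value = @value;
END;

Multiple clauses just repeat the pattern, joining the validated fragments with the user's chosen AND/OR; zero clauses means omitting the WHERE entirely.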
I'm working on the viability of using SSIS to process data files that are (currently) sent by email or FTP and processed by hand. They are .csv or Excel and have some columns that are required, some that are included but whose data is discarded; they usually have some anomaly (like columns out of place, or columns missing) from source to source, and if they have headers, the actual headers are inconsistent from source to source (and sometimes from submission to submission). Some sources send only one file, others might send a dozen. Due to the nature of the sources - many are not very technically inclined, and sometimes there are other factors involved in how the data is exported - there's not a lot I can do to leverage them into all using the exact same format. Creating an SSIS package for each source (over 100 in all) is not a viable solution.
The compromise I've arrived at is a universal SSIS package that can:
- Take a .csv file with headers (I think we can force the sources to include the right headers if nothing else) as input, with or without quote delimiters.
- Not be sensitive to the order of the columns.
- Not be sensitive to the number of columns.
- Loop through multiple files from one source.
- Place the output in a staging table for further processing.

Reading a .csv file, looping, and the staging table are not a problem. What I'm struggling with is the middle portion. I've tried to make the column headers dynamic, but then I lose the ability to have quote delimiters. My next instinct was to preprocess the source file(s) and create one interim file with a different delimiter (such as pipe or ## or something), but that doesn't work quite the way I want: since the source has comma delimiters, the output has both the pipe and the comma. Another way I could do it is to replace the commas within quote delimiters with a blank space in the data flow, which would eliminate the quote delimiters in the output. That I haven't tried yet.
Any suggestions? If I could bring down the hammer and tell every source "You must send it this way" I would, but my hands are tied.
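One hedged idea for the "not sensitive to column order/count" requirement (everything here is invented, not a tested design): have a single Script Component read the header row at runtime and land every cell as a source/row/column/value tuple, then pivot per source in T-SQL afterwards. The staging table for that could look like:

CREATE TABLE dbo.StagingCell (
    SourceName varchar(100)  NOT NULL,  -- which submitter the file came from
    FileName   varchar(260)  NOT NULL,
    RowNumber  int           NOT NULL,
    ColumnName varchar(128)  NOT NULL,  -- taken from the file's header row
    CellValue  nvarchar(max) NULL
);

This trades column-mapping work in the package for mapping work in T-SQL, which is easier to drive from a per-source configuration table.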
My SQL Server 2005 database was down last night. From the logs I can find out only the details below.
Date: 11/30/2007 1:01:34 AM
Log: SQL Server (Archive #1 - 11/30/2007 1:01:00 AM)
Source: spid4s
Message: SQL Server is terminating in response to a 'stop' request from Service Control Manager. This is an informational message only. No user action is required.
Now I want to find out the root cause for this. Who stopped the service? And if the service was stopped automatically, which program/service is responsible?
So please suggest how to do a root cause analysis for this downtime.
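A hedged starting point: the SQL error log around the shutdown shows whether the stop came from the Service Control Manager (as it does here), and the Windows System event log records which account sent the stop control. xp_readerrorlog is undocumented but commonly used; the parameters below are the log number (1 = Archive #1), the log type (1 = SQL error log), and a search string:

EXEC master.dbo.xp_readerrorlog 1, 1, N'stop';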
I am trying to create an alert on my new SQL 2005 box to send an email notification when/if the service ever stops, etc. Is there an existing alert I can use, or do I have to script one, and if so, how would I do it?
Hello, I'm upgrading from SQL 7 to SQL 2000 on another box. To minimize the downtime I would like to:
1) Back up my SQL 7 database,
2) Copy it to the new box with SQL 2000 already installed,
3) Restore the database on the SQL 2000 box,
4) Shut down my SQL 7 database,
5) Copy the transaction logs to the SQL 2000 database,
6) Restore the transaction logs to the SQL 2000 database,
7) Bring up SQL 2000.
My only concern with this is restoring the transaction logs that were created on SQL 7 to SQL 2000. Do you know if I can do this? Do you see any (other) problem(s) with my plan? Thanks, Scott
I'm preparing a checklist for myself before getting ready to migrate from 2005 to 2012. Our largest database is a nice one at over 250GB. I'm thinking my best bet to minimize downtime would be to restore the DB (WITH NORECOVERY) on the new server and keep rolling it forward with transaction log backups. Eventually I'll need to take the old DB offline, do one last log backup, and apply it to the new server, but that should be a small window, given that the whole process could take several hours.
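A minimal sketch of that roll-forward, with invented names and paths:

RESTORE DATABASE BigDb
    FROM DISK = N'\\newserver\stage\BigDb_full.bak'
    WITH NORECOVERY,
         MOVE N'BigDb_Data' TO N'E:\Data\BigDb.mdf',
         MOVE N'BigDb_Log'  TO N'F:\Log\BigDb.ldf';

-- Repeat for each log backup as it is taken on the old server:
RESTORE LOG BigDb FROM DISK = N'\\newserver\stage\BigDb_log_001.trn' WITH NORECOVERY;

-- Final cutover: take the tail-log backup on the old server, then:
RESTORE LOG BigDb FROM DISK = N'\\newserver\stage\BigDb_tail.trn' WITH RECOVERY;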
There is a great book on database refactoring that contains a comprehensive set of recipes on how to revise databases that are supposed to be always online and may have various clients that can't be upgraded at the same time. I guess this is a typical case with large databases, and I would be surprised if Amazon stopped their servers just to move a column from one table to another. The book describes the necessary steps for such changes. Basically it's all about creating intermediate database schemas that would be used during the transition period.
For example, if we need to move a column from one table to another:
Version 1.
Table A columns: Name, Price
Table B columns: Quantity, Date
Let's say we move Price to table B:
Version 2.
Table A columns: Name
Table B columns: Quantity, Date, Price
The book suggests an intermediate version:
Version 1_2.
Table A columns: Name, Price
Table B columns: Quantity, Date, Price
Additional trigger that will synchronize the "Price" columns between A and B.
Version 1_2 can be used by clients written for both version 1 and version 2. Software developers don't need to rush their upgrades; the transition can last months and include several changes.
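A minimal sketch of the Version 1_2 synchronization trigger, assuming the two tables share a key column (ItemId here is invented, and NULL handling is omitted for brevity); a mirror trigger would go on Table B:

CREATE TRIGGER trg_TableA_SyncPrice ON dbo.TableA
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF NOT EXISTS (SELECT 1 FROM inserted) RETURN;  -- empty update: stops trigger ping-pong
    IF UPDATE(Price)
        UPDATE b
           SET b.Price = i.Price
          FROM dbo.TableB AS b
          JOIN inserted  AS i ON i.ItemId = b.ItemId
         WHERE b.Price <> i.Price;                  -- only touch rows that actually differ
END;

The two guards matter: the mirror trigger fires when this one writes to B, finds nothing left to change, and the chain dies out instead of recursing to the nesting limit.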
This technique requires accuracy in version control management, but it looks like a very good way to implement non-interruptible database schema upgrades. I wonder if this is the only option available for schema upgrades with no downtime. I can't think of anything else -- is this how large data warehouses update their databases?
I have a situation where deleting old records is blocking updates to the latest records on a highly transactional table, and the application is getting timeout errors.
In detail: I have one table called Tran_table1 in an OLTP database. Tran_table1 is highly transactional; it receives inserts/updates continuously.
While archiving records more than 2 years old from Tran_table1 into Tran_table1_archive in batches (using DELETE with the OUTPUT INTO clause), any UPDATEs on Tran_table1 get blocked, and the result is timeout errors in the application.
Are there any SQL Server hints to avoid the blocking?
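Rather than a hint, the usual mitigation is smaller batches, so each DELETE holds its locks only briefly (separately, READ_COMMITTED_SNAPSHOT helps reader/writer blocking but not writer/writer). A hedged sketch; the date column and the archive table's matching schema are assumptions:

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (1000) FROM dbo.Tran_table1
    OUTPUT deleted.* INTO dbo.Tran_table1_archive
    WHERE CreatedDate < DATEADD(YEAR, -2, GETDATE());

    SET @rows = @@ROWCOUNT;
    WAITFOR DELAY '00:00:01';  -- breathing room for the pending UPDATEs to take their locks
END;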
I have a process that restores a production DB, overwriting the existing copy each night. I'd like to keep the solution "up" for as long as possible, and this will be more important if I want to update it during the day (when there are more queries) too. The query load on the system is about 20 per hour; it's underpinning a reporting system, not an OLTP system.
It seems to me I could restore the fresh DB copy into a holding DB, then rename it to the production DB name at the end of the process. The rename process should be pretty much instant.
But I need to think about detecting and waiting for queries to complete on the prod DB before removing/demoting it (actually, I thought to rename it, then reuse it as the next copy to update).
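A hedged sketch of that swap, with invented database names; SINGLE_USER WITH ROLLBACK AFTER gives running queries a grace period before stragglers are rolled back:

USE master;
-- Wait up to 60 seconds for in-flight queries, then roll back and disconnect the rest:
ALTER DATABASE ReportProd SET SINGLE_USER WITH ROLLBACK AFTER 60 SECONDS;
ALTER DATABASE ReportProd MODIFY NAME = ReportProd_Previous;  -- demote the old copy
ALTER DATABASE ReportProd_Holding MODIFY NAME = ReportProd;   -- promote the fresh restore
ALTER DATABASE ReportProd_Previous SET MULTI_USER;            -- release it for reuse as the next holding DB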
I need a query to find the uptime and downtime of the server from the MOM database; I don't know in which tables MOM actually stores this information.
I need this very urgently.
Thanks in advance
You can use this code to find where the information is stored in the MOM tables:

create PROC [dbo].[SearchMyTables]
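The procedure body above was truncated; as a hedged substitute, the metadata views can at least locate candidate tables and columns by name, since the MOM schema isn't documented here:

SELECT t.TABLE_SCHEMA, t.TABLE_NAME, c.COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLES AS t
JOIN INFORMATION_SCHEMA.COLUMNS AS c
  ON c.TABLE_SCHEMA = t.TABLE_SCHEMA AND c.TABLE_NAME = t.TABLE_NAME
WHERE t.TABLE_TYPE = 'BASE TABLE'
  AND (c.COLUMN_NAME LIKE '%avail%' OR c.COLUMN_NAME LIKE '%uptime%' OR c.COLUMN_NAME LIKE '%state%');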
I'm trying to run a query to check the downtime on production lines, but if a line has more than one cause assigned for the downtime, it repeats the info for each cause.
This is the code.
SELECT D.Line AS Line, D.ProductionLine AS ProductionLine, D.Shift AS Shift, D.DownTime,
       CONVERT(VARCHAR(10), D.DatePacked, 101) AS DatePacked, AssignedDowntime,
       (D.DownTime - AssignedDowntime) AS NOASSIGNED,
       R.Enviromental, R.Equipment, R.IT_Systems, R.Material_External, R.Quality,
       R.Material_Internal, R.Method, R.PreProduction, R.People
FROM (
    SELECT Line, Shift, DatePacked, SUM(CAST(Downtime AS INT)) AS AssignedDowntime,
[Code] ....
I'm expecting that if there is more than one "Down Reason" it will be included in the same line. At the moment, if I have more than one reason it creates a line for each one. For example:
If I have a total downtime of 50 minutes, and 10 minutes are assigned to itequipment, 30 to testequipment, and 10 to quality issues, I will have an output like this:
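Without the full query it's hard to be precise, but collapsing one-row-per-reason into one row per line/shift/date is usually done with conditional aggregation; a hedged sketch with invented source names:

SELECT Line, Shift, CONVERT(VARCHAR(10), DatePacked, 101) AS DatePacked,
       SUM(CASE WHEN Reason = 'itequipment'   THEN Downtime ELSE 0 END) AS ITEquipment,
       SUM(CASE WHEN Reason = 'testequipment' THEN Downtime ELSE 0 END) AS TestEquipment,
       SUM(CASE WHEN Reason = 'quality'       THEN Downtime ELSE 0 END) AS Quality
FROM dbo.DownReasons
GROUP BY Line, Shift, DatePacked;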
I have a table (named table1) with 20 million rows. It takes around 11 minutes to apply the primary key to this table. Some tables have over 100 million rows, so based on that timing, if my calculations are correct, it will take close to an hour to apply the primary key to tables with around 100 million rows.
My current solution is to create another table (named table2) with no indexes or primary keys, pump over only about 5 days' worth of data, then apply the primary key. Then a script will gradually populate table2 with the rest of the data. When I say gradually, I mean inserting about 100k rows per hour or so. Keep in mind that table2 is heavily updated with new records.
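A hedged sketch of that gradual copy, assuming an ascending key column Id and a single Payload column standing in for the real column list:

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DECLARE @maxId bigint = (SELECT ISNULL(MAX(Id), 0) FROM dbo.table2);

    INSERT INTO dbo.table2 (Id, Payload)
    SELECT TOP (100000) s.Id, s.Payload
    FROM dbo.table1 AS s
    WHERE s.Id > @maxId          -- watermark: resume where the last batch stopped
    ORDER BY s.Id;

    SET @rows = @@ROWCOUNT;
    IF @rows > 0 WAITFOR DELAY '01:00:00';  -- throttle: roughly 100k rows per hour, per the plan above
END;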
Two servers are configured with Windows 2008 / SQL Server 2012, utilizing AlwaysOn for HA. We need to upgrade both servers to Windows 2012 / SQL Server 2014 with minimum downtime (the time for an AlwaysOn failover). The upgrade to SQL 2014 is straightforward with minimum downtime. The Windows upgrade (2008 -> 2012) is the problem. From what I have observed and read in blogs, the Windows node to be upgraded must be removed from the Windows cluster before it can be upgraded to Win 2012, and a Win 2008 node and a Win 2012 node cannot reside in the same cluster. If this is true, then the only option I can think of is to dump the DB on the Win 2008 server and restore it on Win 2012. This is an outage (the time it takes to dump and restore). Is there any other method to upgrade these two nodes, utilizing AlwaysOn or some other method, without downtime?
I have 32-bit MS SQL 2005 running on the Windows 2003 R2 platform. The code is 100% bug-free and works fine on staging and production. Production has an issue AT RANDOM TIMES. Most of the time it works fine: connections are pooled and reused. Out of nowhere (very randomly), it will start opening new connections for each request and keep doing that until the DB server crashes ("could not open connection" exception). If I restart IIS, it works fine again; all connections are being reused (no more than 6 connections). Just for fun, I restarted IIS again... it starts opening new connections for each request! I restarted again, and now it reuses existing connections. What's going on? This has occurred twice on our production box. .NET ALSO has a SERIOUS bug where, if you nest master pages or user controls, it will sometimes throw a compilation error on a LIVE site (Microsoft admits that it's a bug in the engine and currently there is NO FIX for it; there are patches, but none of them work).
I get this error when I try to access one of my tables.
Msg 605, Level 21, State 1 Attempt to fetch logical page 4377 in database 'maillist' belongs to object '1340531809', not to object 'client'.
I know that when my database is restarted it will be marked suspect because of this error. Does anybody know: What causes this error? Why do I keep getting it? How do I fix it?
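Error 605 generally means a page is allocated to a different object than the one being read, i.e. corruption; a hedged first diagnostic step (run it when load allows) is a full consistency check:

DBCC CHECKDB (N'maillist') WITH NO_INFOMSGS, ALL_ERRORMSGS;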
I am having a big issue now. I made what I thought would be a simple change to our Reporting Services application, which has been running smoothly for about 2 years. The change has caused my forms authentication to start throwing an error that I remember from a while back while testing. Our reporting server is down, sad to say. Has anyone had this problem?
The error: Client found response content type of 'text/html; charset=utf-8', but expected 'text/xml'. The request failed with the error message:...
Changes made:
1. Stopped the RS web app in IIS.
2. Changed the ASP.NET config settings (Report Server app):
   a. Authentication tab - cookie timeout (changed to 60)
   b. State Management tab - session timeout (changed to 60)
3. Changed the ASP.NET config settings (our ASP.NET app that displays the reports):
   a. Authentication tab - cookie timeout (changed to 60)
   b. State Management tab - session timeout (changed to 60)
4. Changed the sessionState timeout property in the web.config file of our web app that displays the reports.
5. Restarted the Report Server app.
6. Recycled the app pools.
Could not log in! I am catching the above error in my web app's ReportExecution.LogonUser(). I rebooted the server and continue to get SQL dump logs and this exception. This error occurred when I was testing the redirect a while back, but that issue was resolved. The only changes I have made are the ones above.
My source flat file contains rows of length 803, from which I parse fields of different lengths; a record-type field is one of them.
After the flat file source I used a derived column to parse the record types (i.e. type1, type2, ...), then a conditional split to route those record types into different outputs.
The problem starts now:
I get 500 records before the conditional split, but after the conditional split there are only 499.
Has anyone ever faced this sort of error? We were launching an SSIS package from our .NET console:
0x80070002 while loading package file "C:\Documents and Settings\adminsql2k5\Local Settings\Application Data\Microsoft\SQL Server\Smo\InnerPackage.dtsx". The system cannot find the file specified.
Is InnerPackage.dtsx some sort of template for SSIS?
We have a requirement to build a SQL environment which will give us local high availability and disaster recovery to a second site. We have two sites: Site A and Site B. We are planning to have two nodes at Site A and two nodes at Site B; all four nodes will be part of the same Windows failover cluster. We will build two SQL clusters: InstanceA will be clustered between the nodes at Site A, and InstanceB will be clustered between the nodes at Site B. We will enable AlwaysOn between InstanceA and InstanceB, with InstanceA as the primary owner, where data will be written and then replicated to InstanceB. URL.... Now we also want an InstanceC on Site B; data written by the application available on Site B will be replicated to the instance on Site A as a replica.
Pages in a full recovery model database are corrupted, and I need to ensure data loss is minimal for the restore operation. I am thinking about restoring the latest full backup.
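If only a few pages are damaged, a page-level restore loses less than restoring the whole database from the full backup, since everything else stays current; a hedged sketch with invented file and page names (full recovery model, which this database is, keeps the required log chain):

RESTORE DATABASE MyDb PAGE = '1:12345'
    FROM DISK = N'D:\Backups\MyDb_full.bak' WITH NORECOVERY;

-- Apply every log backup taken since that full backup:
RESTORE LOG MyDb FROM DISK = N'D:\Backups\MyDb_log_01.trn' WITH NORECOVERY;

-- Take a fresh tail-log backup and finish with it:
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_tail.trn';
RESTORE LOG MyDb FROM DISK = N'D:\Backups\MyDb_tail.trn' WITH RECOVERY;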
The public role has been granted the SELECT privilege on the syslogins and sysusers tables in the master and GTSS databases.
The syslogins table contains all the logins that were created on the server. The sysusers table contains the users that are mapped to the database. Unauthorised access to these tables would reveal critical authentication info about other users.
Restrictive permissions should be configured on critical database tables such as sysusers and syslogins.
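A hedged remediation sketch (SQL 2000-era system tables; verify in a test environment first, since some tooling reads these):

USE master;
REVOKE SELECT ON dbo.syslogins FROM public;
GO
USE GTSS;
REVOKE SELECT ON dbo.sysusers FROM public;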
Our SQL Server keeps crashing with the following error; when it crashed, it completely shut down the server. Could you please give me advice on how to stop this from happening again? I would like to thank you in advance for your help.
A MS DTC component has encountered an internal error. The process is being terminated. Error Specifics: A non-MS DTC XA Library threw an exception in function olog ntdll!KiFastSystemCallRet + 0x0 + 0xd58c3c0
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
Here is the information about our server:
OS Name: Microsoft(R) Windows(R) Server 2003, Standard Edition
Version: 5.2.3790 Service Pack 1 Build 3790
Other OS Description: Not Available
OS Manufacturer: Microsoft Corporation
System Name: SQL2387
System Manufacturer: Dell Computer Corporation
System Model: PowerEdge 2850
System Type: X86-based PC
Processor: x86 Family 15 Model 4 Stepping 3 GenuineIntel ~3790 Mhz
Processor: x86 Family 15 Model 4 Stepping 3 GenuineIntel ~3790 Mhz
Processor: x86 Family 15 Model 4 Stepping 3 GenuineIntel ~3790 Mhz
Processor: x86 Family 15 Model 4 Stepping 3 GenuineIntel ~3790 Mhz
BIOS Version/Date: Dell Computer Corporation A04, 9/22/2005
SMBIOS Version: 2.3
Windows Directory: C:\WINDOWS
System Directory: C:\WINDOWS\system32
Boot Device: \Device\HarddiskDmVolumes\SQL2387Dg0\Volume1
Locale: United States
Hardware Abstraction Layer: Version = "5.2.3790.1830 (srv03_sp1_rtm.050324-1447)"
User Name: Not Available
Time Zone: Eastern Standard Time
Total Physical Memory: 4,095.08 MB
Available Physical Memory: 1.75 GB
Total Virtual Memory: 1.83 GB
Available Virtual Memory: 3.81 GB
Page File Space: 2.00 GB
Page File: C:\pagefile.sys
In the process of migrating a big DB from server 1 to server 2, we had to roll back the change. I started by taking a full DB backup and restoring it on server 2 with NORECOVERY, then a couple of logs with NORECOVERY, and then the last log with RECOVERY.
Is there some way to continue this chain now? I mean, to change the DB back to a restoring state, or some other way to restore logs.
I don't want to do a new full backup.
If I try to do a log restore now, I get the message:
Msg 3117, Level 16, State 4, Line 1
The log or differential backup cannot be restored because no files are ready to rollforward.
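As far as I know, once a database has been recovered (WITH RECOVERY) it cannot be switched back to the restoring state; but a new full backup isn't needed either, because re-restoring from the same full backup file WITH NORECOVERY re-establishes the roll-forward chain, after which the later logs apply. A sketch with invented names:

RESTORE DATABASE BigDb
    FROM DISK = N'D:\Backups\BigDb_full.bak'
    WITH NORECOVERY, REPLACE;   -- the same full backup as before, no new one needed

RESTORE LOG BigDb FROM DISK = N'D:\Backups\BigDb_log_03.trn' WITH NORECOVERY;
-- ...continue with the remaining logs, finishing the last one WITH RECOVERY.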