Synchronous Application Scenario
Oct 1, 2007
I'm new to SSB, so please bear with me. Our application requirements are:
1) Web app gathers user input from a web UI.
2) Web app calls a stored procedure, passing in the user input gathered in step (1).
3) Procedure issues queries to multiple data sources (SQL Server 2005 db's) derived from the user input.
4) Procedure waits for replies from these multiple data sources, governed by a timeout in case a data source is unavailable or slow to respond.
5) Procedure aggregates the data returned in step (4) from multiple data sources into an XML document.
6) Procedure returns the XML document as an output parameter.
This is different from the usual SSB asynchronous application paradigm. In particular, I'm wondering:
How can I set up a synchronous dialog, where the procedure that issues the SEND waits for a reply from the target service? I don't want the initiator procedure to end after SENDing, but rather to wait for the replies to those messages so it can aggregate the data from the reply message bodies.
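To make the question concrete, here is a rough sketch of the pattern I'm imagining (service, contract, queue, and message type names are all made up):

DECLARE @h uniqueidentifier, @reply xml;

BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [AggregatorService]
    TO SERVICE 'DataSourceService'
    ON CONTRACT [QueryContract]
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @h
    MESSAGE TYPE [QueryRequest] (N'<query>...</query>');

-- block right here in the initiator procedure until the reply
-- arrives, or give up after 10 seconds
WAITFOR (
    RECEIVE TOP (1) @reply = CAST(message_body AS xml)
    FROM AggregatorQueue
    WHERE conversation_handle = @h
), TIMEOUT 10000;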
Thanks - Dana Reed
View 4 Replies
Mar 8, 2015
I am trying to implement a read-only replica to move much of an application's read activity to the secondary replica. Initially I had the primary and secondary set to asynchronous commit. QA raised an issue with creating entities from the application: after creating an entity, the application immediately turns around and repopulates the entire aggregate object, and it was reading the secondary replica before the data had been committed there. Although I understand the issues that synchronous commits can cause, I went ahead and made the change, expecting it to fix the issue. After changing the primary replica to synchronous we still had the error, so I also changed the secondary, although that makes no sense, and the issue remains.
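For reference, this is roughly the change I made (availability group and replica names are placeholders). From what I have read since, synchronous commit only guarantees that the log is hardened on the secondary at commit time, not that the redo thread has already applied it to the data pages there, which may be why the stale reads persist:

ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON 'PRIMARYNODE'
WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);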
View 5 Replies
Jan 23, 2007
Hi,
Please excuse me if I get some terms wrong - I am not a developer!!
I am working for a company which has a requirement to interface with a customer to query for work. The customer has an established set of Web Services that it uses for external interfacing. These services are passive at the customer's end, in that my client initiates all the communications using SOAP, receiving responses in XML.
There are, as I see it, seven conversations.
Receiving Datasets
1. Request all new jobs - this will be an HTTP POST, with a list of job numbers sent back. These jobs will then need to be polled from the same web service.
2. Job Changes - existing jobs that have changed definitions (costs, etc.) - similar to above
3. Job Changes - Appointment Times
Sending Datasets
1. Completed Jobs - one at a time
2. Cancelled Jobs
3. Appointments (I believe this is responses to 1 and 3 above)
4. Subsequent Jobs - new work derived from initial Job.
I believe the sending datasets will be one way only (ie POST).
I have suggested using Service Broker for this application, with a custom app sitting in between Service Broker and the Web Service to deal with the HTTP POSTs and GETs, communicating with Service Broker by posting to queues and reading from them.
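Roughly, I'm picturing a queue and service per conversation, with the custom app doing the RECEIVEs and SENDs (names are made up):

CREATE QUEUE dbo.NewJobsQueue;

CREATE SERVICE [NewJobsService]
ON QUEUE dbo.NewJobsQueue ([DEFAULT]);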
Is this viable?
Many Thanks
Lawrenso
View 1 Replies
Jan 5, 2008
The following statement is from Microsoft documentation:
If you use the ExclusionGroup property to specify that rows should only go to one or another of a group of outputs, as in the Conditional Split transformation, you must call the DirectRow method to select the appropriate destination for each row. When you have an error output, you must call DirectErrorRow to send rows with problems to the error output instead of the default output.
I have a question about this because I have never used the "ExclusionGroup" property. For example, I have a script component where I specify 4 separate outputs, because I am sending different groups of rows to each output. I accomplish this programmatically using a lot of conditionals and it works fine.
I did not have to use the "ExclusionGroup" property to do this, so I'm not sure why I would ever need it, or DirectRow. I'm trying to understand this better, because I feel like I'm not understanding DirectRow, or how and when to use it.
Thanks
View 1 Replies
Jan 3, 2008
Can someone please clarify:
If you have a data file, and you only want CERTAIN rows to pass to the destination (i.e., a table), and you are using a script component to accomplish this, is this a synchronous or asynchronous transformation?
Q. And how do you assign the values to the output? Do you have to create output columns, or not?
I am very, very confused right now. I can't seem to find a decent answer to what is a very basic question, either in my SSIS book or in the documentation. Perhaps it is so basic that the question doesn't seem valid? I don't know. But I just don't understand this at all.
Thank you
View 9 Replies
Sep 25, 2006
Hi, I need to know whether I am on the last row or not in my script component. If I am, I would alter a column in that row to indicate that I am processing the last row. Is there a way to do it? I tried ProcessInput, but by the time EndOfRowset() indicates that the last row has been processed, I can no longer alter the row in the buffer.
Thank you,
Ccote
View 4 Replies
May 24, 2007
Hi guys, in my db I have these three tables: 1. Stores 2. Products 3. Parts. Their structure is something like Stores -> Products -> Parts:
Stores: StoreId, StoreName
Products: ProductId, StoreId, ProductName
Parts: PartId, ProductId, PartName
Now, in my application I want to implement a bulk-copy operation, so a user can copy products from one store to another, and when a product is copied to the new store, all of its parts should be copied too. In fact I need a method to insert a Product item into the Products table and synchronously copy its parts into the Parts table, repeating this step until all of the products are copied. How can I do that without cursors or loops? Thanks
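For what it's worth, this is the kind of set-based approach I'm hoping is possible, assuming ProductName is unique within a store and the IDs are IDENTITY columns (the store IDs are example values):

DECLARE @SourceStoreId int, @TargetStoreId int;
SELECT @SourceStoreId = 1, @TargetStoreId = 2;

-- copy the products from the source store to the target store
INSERT INTO Products (StoreId, ProductName)
SELECT @TargetStoreId, ProductName
FROM Products
WHERE StoreId = @SourceStoreId;

-- copy each product's parts, matching old products to new ones by name
INSERT INTO Parts (ProductId, PartName)
SELECT np.ProductId, pt.PartName
FROM Products op
JOIN Parts pt ON pt.ProductId = op.ProductId
JOIN Products np ON np.ProductName = op.ProductName
                AND np.StoreId = @TargetStoreId
WHERE op.StoreId = @SourceStoreId;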
View 19 Replies
May 13, 2006
I have synchronous mirroring. Sometimes I lose the connection to the witness and mirror servers, and at those times the principal server goes down. Is there any way I can change mirroring to asynchronous when the principal is down due to a communication breakdown between the witness and mirror servers? I can break mirroring, but to re-establish it I have to backup and restore on the other side. If I could switch to asynchronous while the connection is broken, then when the witness and mirror servers come back I wouldn't need to restore the entire database. Of course I could always use asynchronous, but that does not fail over automatically. I am thankful for all answers and suggestions. Thanks.
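In other words, something like this is what I'm hoping I could run (the database name is a placeholder):

-- switch to asynchronous (high-performance) mode while connectivity is broken
ALTER DATABASE [ProdDB] SET PARTNER SAFETY OFF;

-- and back to synchronous mode once the witness and mirror are reachable again
ALTER DATABASE [ProdDB] SET PARTNER SAFETY FULL;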
View 3 Replies
Aug 30, 2006
hi there
I am using SQL SERVER 2005.
I am creating a merge replication (between two databases). It's working well, but I need to go to Local Publications, right-click there, select "View Synchronization Status", and then start the agent manually to synchronize the databases.
Now I want the two databases to replicate (merge) automatically, without going into any menu.
I mean that synchronization should take place automatically whenever either database changes, without touching anything.
For Example:
If one record is inserted into a table of database ONE (on commit), it should be reflected in the corresponding table of database TWO.
Any idea, link, or solution?
Gurpreet S. Gill
View 4 Replies
Mar 19, 2008
Porting an existing SQL 2k DTS job over to a SQL 2k5 SQL Server running SSIS.
Background:
The job loads data into an empty work table and performs some work before clearing out the work table.
This job runs every minute.
Question:
If the job happens to take longer than a minute, does SSIS create a second instance of the job?
Or perhaps it does what DTS did and reschedules the job for the next iteration?
Concern:
I need to know because there would be key constraint violations if another instance of the job started before the working table was cleared out.
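If it helps, this is a guard I'm considering adding at the top of the job step so a second run would exit immediately (the lock name is arbitrary; I also believe SQL Server Agent will not start a new instance of a job that is still running):

DECLARE @rc int;
EXEC @rc = sp_getapplock
    @Resource = 'WorkTableLoad',
    @LockMode = 'Exclusive',
    @LockOwner = 'Session',
    @LockTimeout = 0;

IF @rc < 0
BEGIN
    -- a previous iteration is still running; skip this one
    RETURN;
END

-- ... load the work table, do the work, clear the work table ...

EXEC sp_releaseapplock @Resource = 'WorkTableLoad', @LockOwner = 'Session';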
Thanks in advance
View 1 Replies
Jan 15, 2008
I'm creating a script component that reads from an OLEDB source and writes to an OLEDB destination.
For every input row, I need to output several rows. I tried using the Row.DirectRowToOutput0() method inside a loop in the Input0_ProcessInputRow routine, but that's not working. Should I be using AddRow() instead? If I use AddRow(), does this mean it needs to be an asynchronous transformation?
I remember seeing a blog entry (Jamie's?) that did almost exactly what I wanted, but I can't find it now.
Any pointers appreciated
View 13 Replies
Jun 8, 2006
Hi
I'd like to know if there's a way to control the execution of ETL packages, such that:
Different packages, or at least packages that don't access the same table or database run asynchronously with respect to each other; e.g., two different packages run at the same time
and
If a package is called for execution more than once by different requests, force them to run synchronously, one after the other. If this is possible, what resources would it require? Is this possible on, say, a dual or quad processor machine?
Thanks.
View 6 Replies
Mar 13, 2006
Hi
I am currently trying to write a custom transform component in C# that will take a row of data and perform a lookup via an external system. If there is a match, it should send the data from the external system down the matched output (which will have different columns to the input) and drop the data that was read; otherwise, it should send the data down the unmatched output, which will have the same columns as the input.
So I would like to write a synchronous transform, because I don't need to read all the rows from the input buffer before I start processing, and I don't want millions of rows loaded in memory.
Can this be done? Also, does anyone have example code of how to do this? I can't see how to send data down the match output buffer, as it will hold the lookup results, which have different columns to the input data, nor how to discard the input data.
Thanks Steve
View 7 Replies
Apr 2, 2007
Hi,
Can anyone please point me in the right direction?
What I am trying to do should be very straightforward:
Take a flat file, perform various transformation on various columns using the SCRIPT COMPONENT task, then send the transformed (and un-transformed) rows to a table in the database.
My question is, how do I do this using scripting? (I have Kirk Haselden's book, Donald Farmer's SSIS scripting book, and the MSDN website, but I have yet to see an example of what I'm trying to do!)
FILE SOURCE --> SCRIPT COMPONENT (synchronous transform) --> OLE DB DESTINATION
How do I account for all the columns that will be both transformed and un-transformed, and get them into the table? That is the missing piece of information I can't find anywhere.
The closest thing I found was this code snippet. Do I need to use this syntax, e.g. Me.Output0Buffer.FirstName = ... (where FirstName is the actual column name)?
etc.
Then, once I hook up the SCRIPT COMPONENT to the OLEDB Destination, which uses a connection manager to the table, it will insert FirstName with what I specify?
Help. Thanks.
Me.Output0Buffer.AddRow()
Me.Output0Buffer.FirstName = columnValue (or whatever)
View 8 Replies
Nov 10, 2014
We are seeing a high number of hadr_sync_wait types on our server during peak times after setting up AOAG. We have set the sync type to synchronous commit with automatic failover. Can we change these settings to asynchronous commit and manual failover whenever we need to, and change them back to synchronous commit during off-peak times? Are there any drawbacks to these changes?
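Something like the following is what we have in mind for the switch and back (availability group and replica names are placeholders; as I understand it, a replica must be on manual failover before it can be made asynchronous):

-- drop to asynchronous commit for peak hours (run on the primary, per replica)
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON 'SQLNODE2' WITH (FAILOVER_MODE = MANUAL);

ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON 'SQLNODE2' WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);

-- revert during off-peak hours
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON 'SQLNODE2' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);

ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON 'SQLNODE2' WITH (FAILOVER_MODE = AUTOMATIC);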
View 4 Replies
Nov 21, 2006
I wanted to enquire how this is done. I tried doing it in the SetUsageType method I have overridden, but that only allows the description to be changed. Basically I need to change "Name".
The best option would be to change it instantly when a user selects a column from the inputs in the custom component, i.e. it changes the Output Alias to a desired value (Input tab in the advanced editor).
All this is being done in a custom component which I would like to be synchronous, though I can achieve a similar result asynchronously.
Thanks
View 2 Replies
Jan 8, 2008
Hi,
Can someone explain how the PR test harness is supposed to function? When I run it with and without debugging, and by using dtexec, other processes can start and stop before the console prompts for the date and number of days to process have been answered. If you just look at the test harness and the loadgroupfulldaily, the increment date and SQL audits will finish prematurely, before the console data is even entered. Is there a property that I am missing?
Thanks,
Larry
View 1 Replies
Apr 23, 2008
I have a package with 10 synchronous dataflows, which, combined, load about 300MB of flat file data to a database. This package would run successfully on 2 of our database servers, but would regularly fail on a third. The server on which it was failing is a 4 processor box with 16GB Ram with Windows Server 2003, SQL 2005, SSIS and SSRS installed - much more robust than one of the others that the package worked on. The SSIS error messages returned alternated between the following (with no apparent reason why one would show up rather than another, though the first was the most common):
"The file name "\Server1Folder1File1.txt" specified in the connection was not valid."
"The file name property is not valid. The file name is a device or contains invalid characters."
"An error occurred while initializing the flat file parser."
For the first error message, the error would report different connection managers and their associated file as invalid from run to run. All of the files across the 10 dataflows resided in the same network folder, and the package would read in and process a few of them before failing, so the problem was definitely not the connection string.
Searching the forums, etc. for these errors provided no useful information - given the real cause of the problem, these error messages are worse than unhelpful, they send you looking in the wrong direction. It was only when trying to track down another problem on the same server that I discovered the issue. When trying to copy database backups greater than 12GB over the network to this server, the operation would fail with an "Insufficient System Resources" message.
Some research led to the discovery that problem was caused by the /3GB switch in the boot.ini file of the server (don't let your Server team use that switch if you have 16GB of memory or more). Removing the switch and setting SQL to utilize AWE, fixed both the file copy problem AND the SSIS package failure problem. The SSIS package failed, not due to a bad connection string, but rather to insufficient server resources (read memory) to handle the simultaneous connections.
I hope this may help any others trying to track down this kind of SSIS package failure.
I will also provide here what I have gleaned about setting up memory usage for SQL Server 2005 running on 32-bit Windows Server 2003 (with the caveat that I am no expert - corrections and additional information are welcome).
The following links got me started in my research (thanks to the folks who provided such useful information):
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=55191
http://articles.techrepublic.com.com/5100-10878_11-6091280.html
http://www.simple-talk.com/community/blogs/brian_donahue/archive/2007/09/30/37747.aspx
http://blogs.technet.com/askperf/archive/2007/03/23/memory-management-demystifying-3gb.aspx
http://www.modhul.com/2007/11/10/optimising-system-memory-for-sql-server-part-i/
Also, search BOL for:
Server Memory Options
Enabling Memory Support for Over 4 GB of Physical Memory
Enabling AWE Memory for SQL Server
Windows Server 2003 provides access to 4GB of virtual address space. By default, 2GB is assigned to the OS and 2GB to applications. This default can be changed to 1GB for the OS and 3GB for applications by use of the /3GB switch in the boot.ini file.
Physical memory over 4GB can be addressed by enabling Physical Addressing Extensions (PAE), which is done by setting the /PAE switch in the boot.ini file. This does not increase the system's virtual address space; rather, it increases the size of the page table (which is maintained within the virtual address space), adding entries to reference the physical memory above 4GB.
It is important to note that these two switches are not interdependent (they do different things and you can turn each on or off regardless of the other's status), though the combination of them has an impact on server performance and the maximum amount of physical memory which can be addressed.
The /3GB switch only impacts the allocation of the first 4GB of memory (virtual address space) between the OS and applications (default 50/50 % split, with switch on - 25% OS and 75% applications). The /PAE switch enables the system to reference/manage physical memory above 4GB, but does not alter the allocation percentages of the first 4GB of memory between the OS and applications. However, when PAE is enabled, the OS requires more memory within the first 4GB to manage the physical memory above 4GB (due to increased page table entries). With the /3GB switch, the OS has only 1GB of virtual address space, and only enough space to manage a total of 16GB of physical memory. If 32GB of physical memory is installed, 16GB of it will go to waste.
Address Windowing Extensions (AWE) is an API that allows an application to address more than the 2-3GB of memory that is available to applications within the virtual address space (first 4GB of memory). SQL Server can utilize AWE to take advantage of memory above the first 4GB that is made available via PAE, and can even reserve portions for its own use. I believe (though I can't remember where I got this bit) that SQL utilizes AWE memory only for the page cache (buffer pool - which seems to be a misnomer), and not for other operations.
To enable AWE, see the BOL references above.
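A sketch of the relevant configuration calls, per the BOL entries above (the memory value is an example only - derive it from the formula discussed below):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'awe enabled', 1;   -- takes effect only after a SQL Server restart
RECONFIGURE;

EXEC sp_configure 'max server memory', 12288;  -- in MB
RECONFIGURE;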
The big question: what are the recommended settings for all of these? That all depends on what you have running on the server. You need to leave space for the OS, SQL Server and any other applications you have.
The hard and fast rules:
If you have more than 4GB of RAM, you must use the /PAE switch in order to take advantage of it.
If you have more than 16GB of RAM, you must NOT use the /3GB switch in order to take advantage of it.
Based on anecdotal evidence, I've noticed the following generally recommended guidelines - assuming the server is dedicated to SQL.
Use of the /3GB switch seems to be a generally accepted practice if you have 8GB of RAM or less. For between 8 and 16GB, some say never use the /3GB switch, others say you can use it up to 12GB and still others up to 16GB. I interpret this to mean that it all depends on what types of loads are being placed on the server and that testing on individual servers will be required to determine whether or not to use the switch. Certainly that was my experience - the /3GB switch worked fine with 16GB RAM, until the server encountered a certain workload. For me, no more /3GB switch.
For setting SQL to use AWE, most seem to agree that it should be enabled if you have more than 4GB RAM. The setting of max server memory is more complicated. BOL seems to suggest (the 'Server Memory Options' entry) a formula of Total Physical Memory minus 1-2GB for the operating system. Based on a desire to be a bit more conservative, I am now using the following formula:
max server memory = total physical memory
minus
4GB for the OS and application processes (since the AWE memory is utilized for page cache, not SQL processes)
minus
AWE memory required by other applications, including other instances of SQL Server
If anyone has additional insight, or a more refined equation, I could certainly benefit from it.
View 1 Replies
Apr 23, 2004
Consider the following scenario:
I got 2 webservers (2003) using NLB.
I want some kind of common data store (SQL Server).
I need this data store to be fault tolerant.
I don't have any more machines.
I don't have an enterprise licence for windows.
How do I solve this???
View 3 Replies
May 4, 2007
I have a company with 5 offices connected with a dedicated line (500+ employees).
They have five databases, one in each office.
Now they want a new application which will replace the old (VB6 on MySQL will now be VB.NET 2.0 with MS SQL Server 2005).
They want all the data available in all the offices. We will still have 5 databases in 5 offices, but all with the same data.
Sometimes the dedicated line is lost.
Now I have a few questions:
Will the database still be running while the line is down?
Will the database still be running if the replication is of type merge (instead of continuous)?
Is it good practice to use unique identifiers versus Numeric keys?
Is it good practice when the application uses the "master" database to save global things like "Suppliers" and the local database to add and change Orders?
Do you have some recommended reads?
Thx for your time!
Henri
~~~~
There's no place like 127.0.0.1
View 3 Replies
Aug 13, 2007
For High Protection mode (2 servers, no witness, FULL safety): in the event of a catastrophic failure of the principal, can a remote (but well-connected) mirror be forced to assume the principal role? The MS manual "Database Mirroring in SQL Server 2005" on p2 says the failover type for HP mode is manual only (as opposed to forced). Elsewhere it seems to suggest otherwise, but there is no clear and unambiguous discussion describing the exact procedure.
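For reference, the command I believe is in question, run on the mirror once the principal is confirmed lost (the database name is a placeholder; my understanding is that forced service is available in high-safety mode when no witness is configured, at the risk of data loss):

ALTER DATABASE [ProdDB] SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;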
Can anyone clarify this matter? Thanks,
Bud
View 3 Replies
Oct 29, 2006
I currently have a workflow-like application which uses MSMQ to pass messages between each step transactionally. I'd like to take advantage of something like SSB to make the app scale. Here's how it works today:
Start Workflow WebService puts message on queue.
Windows Service has threaded listeners on each queue for the workflow.
The listeners do a transactional Receive off the queue. By doing this in a transaction, we can enforce that only one listener will pick up and process the message. Once they have a message, they call a .Net assembly to perform some work. Once the work is performed successfully, the listeners put the work context on the next queue using the same transaction as the receive and commit the transaction, effectively removing the message from the receive queue permanently. If a fatal error occurred during processing, the work context is moved to an error queue transactionally. If the process blew up for other reasons, the transaction is rolled back and the message reappears on the original queue so that another instance of the service can pick it up. Currently all this runs in one instance of the Windows service and cannot span more than one host machine, since MSMQ did not support transactional reads off public queues without using DTC.
I saw the external activator sample and it looks great, but I have a few questions. First, I'd have to modify it to call assemblies instead of external processes. No biggie there. Another requirement is that one and only one receiver should process the message, and that when the next message is sent to the next queue, it's done within the same transaction as the receive. I hope this can be done without any deadlocks. I also saw mentioned somewhere that this messaging could take advantage of local SQL Express instances, so that even if the main SQL Server instance is down, the messages can still go out. The last requirement is that our message is a blob (a serialized .Net object), so how do we define that kind of message?
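To make the requirements concrete, I believe the SSB equivalent of our receive-work-forward step would look something like this (queue, service, contract, and message type names are made up; message_body is varbinary(max), which I assume covers our serialized .Net object):

BEGIN TRANSACTION;

DECLARE @h uniqueidentifier, @body varbinary(max);

-- only one listener gets the message; RECEIVE locks it within the transaction
WAITFOR (
    RECEIVE TOP (1)
        @h = conversation_handle,
        @body = message_body
    FROM Step1Queue
), TIMEOUT 5000;

IF @h IS NOT NULL
BEGIN
    -- call the worker assembly here; on success, forward the context
    DECLARE @next uniqueidentifier;
    BEGIN DIALOG CONVERSATION @next
        FROM SERVICE [Step1Service]
        TO SERVICE 'Step2Service'
        ON CONTRACT [WorkContract]
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @next MESSAGE TYPE [WorkContext] (@body);
END

COMMIT;  -- roll back instead and the message reappears on Step1Queue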
Thanks
Costas
View 6 Replies
Dec 9, 2005
Table1: WriterID, WriterName, WriterSurname
Table2: BookID, WriterID, BookName, Category
Table3: CategoryID, CategoryName
There will be one form. Which way should I use to enter data from the one form into multiple tables with the same WriterID?
1.) Should i use SqlCommand class and a Stored Procedure?
( Like that: http://www.aspnet101.com/aspnet101/tutorials.aspx?id=13 )
2.) Is there another way without a stored procedure? I can't imagine how to insert the same form into multiple tables. In the real scenario there are lots of tables, and all contain a column that holds the same value, such as WriterID.
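For approach 1, a sketch of what I think the stored procedure would look like, assuming WriterID is an IDENTITY column (table names follow the structure above):

CREATE PROCEDURE InsertWriterWithBook
    @WriterName nvarchar(100),
    @WriterSurname nvarchar(100),
    @BookName nvarchar(200),
    @Category nvarchar(100)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    INSERT INTO Table1 (WriterName, WriterSurname)
    VALUES (@WriterName, @WriterSurname);

    -- capture the new WriterID once, then reuse it for every related table
    DECLARE @WriterID int;
    SET @WriterID = SCOPE_IDENTITY();

    INSERT INTO Table2 (WriterID, BookName, Category)
    VALUES (@WriterID, @BookName, @Category);

    COMMIT TRANSACTION;
END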
View 1 Replies
Sep 22, 2005
Hi, I want to know a solution for my synchronization scenario.
I have several client databases, which are SQL Server 2005 Express, and a master database, which is SQL Server 2000, containing all the individual client databases. All the individual client databases are kept separately at the master location. I need to synchronize each client database with its copy at the master (something like merge replication). Both the client copy and the master copy could be publishers and subscribers.
Now the problem: because of security and firewall issues, only the client should have the ability to schedule and initiate the synchronization process with the master copy. Unfortunately, SQL Server 2005 Express has only a subscriber agent and not a publisher agent.
Any help on how to achieve this would be appreciated . Thank You
View 1 Replies
Oct 10, 2005
Hello All ..
This is the scenario I'm having :
-- I'm a beginner, so bear with the way I'm putting it ... sorry!
* I have a database with tables
- company: CompanyID, CompanyName
- Person: PersonID, PersonName, CompanyID (fk)
- Supplier: SupplierID, SupplierCode, SupplierName, CompanyID (fk)
In the associated stored procedures (insertCompany, insertPerson, insertSupplier), I want to check the existence of SupplierID, which should be the 'Output'...
There could be different ways to do it like:
1) - In the supplier stored procedure I can read the ID (SELECT) and:
if it exists, I save the existing SupplierID - to 'return' it at the end.
if it doesn't, I insert the Company and the Person, and save the new SupplierID - to 'return' it at the end.
------------------------------------
2) - Another way is to use multiple stored procedures:
. one SP that checks,
. another SP that does the inserts
. and a main SP that calls the check SP, gets the values, and bases the result on conditions (if - else)
3) it could be done (maybe) using Functions in SQL SERVER...
There should be some reasons to go with one method or another!
I want to know the best practice for this scenario in terms of performance and other issues - consider a similar but much bigger scenario!
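For example, method 1 might look like this (a sketch only, assuming IDENTITY keys and that SupplierCode identifies an existing supplier):

CREATE PROCEDURE insertSupplier
    @SupplierCode nvarchar(50),
    @SupplierName nvarchar(100),
    @CompanyName  nvarchar(100),
    @PersonName   nvarchar(100),
    @SupplierID   int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    -- check first: reuse the existing SupplierID if there is one
    SELECT @SupplierID = SupplierID
    FROM Supplier
    WHERE SupplierCode = @SupplierCode;

    IF @SupplierID IS NULL
    BEGIN
        BEGIN TRANSACTION;

        INSERT INTO company (CompanyName) VALUES (@CompanyName);
        DECLARE @CompanyID int;
        SET @CompanyID = SCOPE_IDENTITY();

        INSERT INTO Person (PersonName, CompanyID)
        VALUES (@PersonName, @CompanyID);

        INSERT INTO Supplier (SupplierCode, SupplierName, CompanyID)
        VALUES (@SupplierCode, @SupplierName, @CompanyID);
        SET @SupplierID = SCOPE_IDENTITY();

        COMMIT TRANSACTION;
    END
END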
I'll appreciate your help ...
Thanks in Advance . ! .
View 1 Replies
May 26, 2008
How can I use "NewReceivedTime" as a variable?
Creating a new column with a CASE statement in SQL means the user cannot reference the new column name elsewhere in the same query, and instead receives an error such as "Invalid column name: NewReceivedTime".
(case
when <value>
else <value>
end) as NewReceivedTime
I'm asking this because I want to use the "NewReceivedTime" I've created to compare against another time column, something like
NewReceivedTime = LogDateTime
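One pattern I've been looking at is wrapping the CASE in a derived table, so the alias becomes a real column name to the outer query (table and column names here are hypothetical):

SELECT q.LogDateTime, q.NewReceivedTime
FROM (
    SELECT LogDateTime,
           CASE WHEN ReceivedTime IS NULL
                THEN LogDateTime
                ELSE ReceivedTime
           END AS NewReceivedTime
    FROM dbo.MyLog
) AS q
WHERE q.NewReceivedTime = q.LogDateTime;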
Thanks.
View 1 Replies
May 11, 2006
Hi,
I have a text file in this format:
currency,exchangerate(INR),date
dollar,45,20/04/2006
dollar,46,22/04/2006
britishpound,65,20/04/06
dirham,12,20/04/06..etc..
Now, using this as the source, I want 2 tables to be created and filled with the appropriate data:
CurrencyMaster..Currencyid(PK),Currencyname
CurrencyDailyRate..ID,Currencyid(FK),rate,date
How can I do it using SSIS?
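One route I've been considering: bulk-load the file into a staging table with a flat file source, then run something like this in an Execute SQL task (the staging table name is made up):

INSERT INTO CurrencyMaster (Currencyname)
SELECT DISTINCT s.currency
FROM CurrencyStaging s
WHERE NOT EXISTS (SELECT 1 FROM CurrencyMaster m
                  WHERE m.Currencyname = s.currency);

INSERT INTO CurrencyDailyRate (Currencyid, rate, date)
SELECT m.Currencyid, s.exchangerate, s.date
FROM CurrencyStaging s
JOIN CurrencyMaster m ON m.Currencyname = s.currency;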
thks
View 1 Replies
May 16, 2007
What I'd like to do is use SQL Express as a back-end for an existing Access 2003 application. The application is currently a single-file Access solution. I'm just investigating options to separate the front-end from the back-end so that the users can access a single data store from multiple PCs. We're only talking about maybe 5 users total, and really no more than 2 accessing it simultaneously at any given time (although the same setup exists in several offices). Right now they are just opening the Access db from a fileshare. This has resulted in corruption on a few occasions, which is part of the reason for wanting to replace the current solution with something that will be a little more robust. I'm wondering if there is a way to deploy a SQL Express db on a fileshare so that it can be connected to by the Access front-end. While we can install things on their desktops, we can't install anything on the file server, we can just put files there. Is there any way that I could make that solution work, or should I just stick with separating the Access front-end and back-end?
Thanks,
Kris
View 1 Replies
Dec 17, 2006
Hi all,
I have a huge replication task I need to perform. The source table has over 250,000,000 records, and every day approximately 400,000 records are added to the system.
Currently, I am running snapshot replication and it's taking 10 to 11 hours to complete (The internet connection between the production and the report server is slow). The reason I am using this method is because the source table does not have a timestamp column (so I can run an incremental replication) and I cannot modify that table because it belongs to a third party software. It does have a field which is linked to another table holding the timestamp.
Here is the source table and the other table's structure for reference:
DataLog
Name Datatype Size
DataLogStampID int 4
Value float 8
QuantityID smallint 2
DataLogStamp
ID (which is a foreign key to the DataLogStampID field in the above table) int 4
SourceID smallint 2
TimestampSoruceLT datetime 8
What I would like to know is, what is the best practice to handle such replication with MSSQL Server 2000.
I am thinking of developing a procedure which follows the following step using a temp table:
This procedure will run (say) every night.
The temp table will contain a timestamp field.
For one time only, I will run a snapshot replication, so that I can have the existing data in the destination db.
I will record this timestamp into a repl_status table which holds the timestamp of the last replication.
The procedure will read the record from repl_status and select all records from the source table which were added since this date, put them into a new temp table (this table will have the same structure as the source and destination tables), and then a snapshot replication will "add" only these new records to the destination table every night. By using this method, I will only transfer the records which have been added since the last replication (last night - fewer records).
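In rough T-SQL, the nightly step I have in mind would be something like this (the repl_status column name and temp table name are made up):

DECLARE @last datetime, @now datetime;
SELECT @last = LastReplicated FROM repl_status;
SET @now = GETDATE();

-- stage only the rows added since the last run
INSERT INTO DataLog_Temp (DataLogStampID, Value, QuantityID)
SELECT d.DataLogStampID, d.Value, d.QuantityID
FROM DataLog d
JOIN DataLogStamp s ON s.ID = d.DataLogStampID
WHERE s.TimestampSoruceLT > @last
  AND s.TimestampSoruceLT <= @now;

UPDATE repl_status SET LastReplicated = @now;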
Any comments would be greatly appreciated.
Thanks for your time in advance,
Sinan Topuz
View 8 Replies
Jul 20, 2006
Hi,
I am faced with a scenario where I have to predict the time period that will be required to solve a particular problem, depending on various factors like
Problem Type, Severity Level, Resource Availability etc.
I would like to know which algorithm is best suited for the
above scenario and the reasons for the same.
View 3 Replies
Dec 24, 2007
Hi,
I'm about to implement DB mirroring on my production server, and I'm wondering what will happen in the following scenarios:
1. If my primary server suddenly dies, how can I operate the mirror DB without losing data?
2. When the primary server comes back to life, how do I make the primary lead again in the DB mirroring (do I need to recreate all the DB mirroring procedures again)?
3. How can I be sure that the DB is indeed mirrored on the DRP server?
4. What about alerting - can I be notified via email if the DB mirroring isn't in sync, or if the DRP server is down, and about other issues that can occur?
Thx
View 9 Replies
Aug 10, 2006
Hi,
I have a product basket scenario in which I have to recommend contracts to a customer based on the product and the quantity he/she buys. Product quantity is an important factor that guides the user in the purchase of a particular contract.
I have the following tables with me.
I have a customer-product transaction table and a customer-contract transaction table, but there is no direct relationship between contract and product in the database. The only way the two can be linked is through the customer.
If I create a mining structure with customer-product information as one nested table and customer-contract information as another nested table, with customer being the link between the two, the model shows some irrelevant contract recommendations.
What is the solution to the above problem? Is it because there is no direct relationship between the product and the contract?
How can I overcome this problem?
View 6 Replies
Apr 25, 2007
Hello All
We have Asynchronous DB Mirroring established for our Production database
which runs on SQL Server 2005.
For the DR Test we plan to do the following during the period of low activity :
1. Pause the mirroring on the Principal Server.
2. Break the mirror on the Mirror Server
3. Take an offline backup on the Mirror.
4. Bring the Mirror Database up.
5. Run the DR Tests on the Mirrored Database
6. Restore the Offline Backup taken in step 3.
7. Reestablish the mirror from the Principal to Mirror.
8. Resume Mirroring on the Principal Server.
9. Verify Mirroring is happening on the Mirror Server
Can you please let me know if this plan is feasible, and whether any modifications to it are required? Any other suggestions/input is appreciated.
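For steps 1 and 8, I assume the commands are simply these (the database name is a placeholder):

-- step 1, on the principal
ALTER DATABASE [ProdDB] SET PARTNER SUSPEND;

-- step 8, on the principal, after the mirror is re-established
ALTER DATABASE [ProdDB] SET PARTNER RESUME;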
TIA
Best,
Jay
View 1 Replies