Please Help With OLTP Solutions

Aug 28, 2007

Hi. I don't understand what they mean when they say "developing OLTP solutions" - can anybody please explain it to me? Also, does anyone know what ways there are to develop SQL OLTP solutions using SQL 2005 Reporting Services, OLTP, and Excel Services, as well as any good tutorials for it?

thanks for the help.

View 6 Replies



Replication Solutions?

Jul 20, 2005

Hello everybody,

I work at a company in Iceland and we have developed a 3-tier solution written in ASP - Visual Basic - MSSQL2000. Four companies are using the solution almost constantly, accessing it through a browser. The connection has never (yet) gone down in a way that affected our clients, but we are thinking about how to run the solution locally at every site and then replicate to a main server that is hosted at our place.

My question is: does this affect speed for the clients that are using the solution, or is there a better way of doing this?

The solution is a ticket sale system. Our clients use it every day, and people sitting at home should be able to order tickets online. Because of that we can't update the database every 5 minutes or 15 minutes - we don't want a double booking of the same seat.

Any help appreciated!

- Sindri

View 1 Replies View Related

Solutions And Projects: What Are They And How Do They Help The DBA

Mar 10, 2006

I guess the subject line sums it all up, but I need some experienced explanation of what a solution and a project represent, and how I can use them to my advantage.

Is a solution an entire database? If so, how can I create a solution from my existing databases?

Are either of them a way to collect together scripts etc which will be run against a production database when the solution is rolled out?

What is a project? Is it a single set of scripts related to an upgrade to a database? If so, can it be executed as a single entity? How is the sequence of execution controlled?

And so on and so on....

2005 is such a step backwards for DBAs, with all the features we used to have and now don't. If it weren't for the fact that MS will eventually stop supporting 2000, I frankly see no incentive to upgrade.

Ok, OK, flame off.

Can anybody suggest some resources which might give me some insight into these questions?

Thanks,

-Rob

View 3 Replies View Related


Replication--Too Many Questions---Too Little Solutions

Dec 15, 1998

1) Do I have to install publishing on both servers (A and B) even though one will be the publisher and the other will be the subscriber?


2) a. Can named pipes be used for communication between these two servers, which are on the same domain but not on the same network? Why or why not, whatever the answer may be?
b. If I use TCP/IP, is the connection set up using the client configuration utility? How is the connection string set up in this case?
c. Suppose the publishing server was not using NetBEUI. Could this pose any problems for communication? (Is using lmhosts sufficient in this scenario?)


3) I have set up a (remote) SQL Server to be a Publisher/Distributor. Both SQL Servers have been configured to be remote servers relative to each other. Following are the steps I have carried out to set up replication:

On the Publication Server (a remote server), I went to Server --> Replication Configuration --> Install Publishing.

Next, I chose a local distribution server. I think that the instdis.sql script ran fine, because the distribution database was installed successfully.

Next, I went into Manage Publications from the Server menu to set up the publications.

When I went to the subscription server to subscribe to the published articles, I got the following error message:

Error 14093: [SQL Server] You must be System Administrator (SA)
or Database Owner (dbo) or Replication Subscriber (repl_subscriber)
to execute the stored procedure.




PS
Please Help

View 1 Replies View Related

Bidirectional Replication Solutions

Apr 9, 2008

According to this article from last year: http://searchsqlserver.techtarget.com/tip/0,289483,sid87_gci1251149,00.html

These are the main options:

* Merge Replication
* Bi-directional Transactional replication
* Immediate Updating
* Queued Updating
* Peer to Peer
* RDA

Are there any new alternatives that have popped up over the last year? Are all six of the above still good options based on needs?

We currently have a three-server topology using merge replication:

ServerA (App1 DB) <--> ServerB (App2 DB)
ServerA (App1 DB) <--> ServerB (App3 DB)
ServerA (App1 DB) <--> ServerC

ServerA supports 1 intranet application using 1 DB. ServerB supports 2 extranet applications using 2 DBs (1 per application). ServerC is our DW server, on which we have installed a Search DB that is used by all applications.

Prior to our "upgrade" to merge replication we were using 1-way transactional replication, so our topology looked like:

ServerA --> ServerB (App2 DB)
ServerA --> ServerB (App3 DB)

We also had linked servers between ServerB and ServerA, as well as between ServerC and ServerA, to update data on ServerA. We would simultaneously update/insert the tables on ServerB/C and create custom stored procedures to handle the data already processed from the subscribers.

With our new implementation we are seeing more latency as well as locking, since merge replication is not running off of transaction logs anymore.

My main question is: would we see an increase in performance and less locking as a result of a topology like this?

Master <--> ServerA (App1 DB)
Master <--> ServerB (App2 DB)
Master <--> ServerB (App3 DB)
Master <--> ServerC

Here Master is a server and DB supporting no applications (hence no OLTP). Would latency be the same/better/worse? Should we stick with our current implementation and just performance-tune it?

A secondary question I have is: given the bidirectional replication options above, did we choose the best one for us? These servers are all on the same network hosted by the same provider over Gigabit Ethernet (I assume). I think we have the polling interval set at 5 seconds, and we are thinking of moving it to 10 seconds at most. Real-time latency is not critical to our business, but it would be a "nice to have". For conflict resolution we are keeping it simple: whichever was inserted/updated last "wins". It looks like bi-directional transactional replication might be a better option for us. Would it give us the autonomy we are looking for? Any major "cons" to using bi-directional transactional replication over merge replication (besides scalability)? Scalability may come into play a few years down the road, but for now it is not a high priority. Also, would the Master model described above using bi-directional transactional replication be a successful implementation?

ETA - One thing merge replication gives us is autonomy between our application servers, particularly when ServerA needs to come down for upgrades: the applications on ServerB can still function without any dependencies, unlike before with 1-way transactional replication and linked server calls.

View 2 Replies View Related

Meta Integration Solutions

Apr 27, 2007

Hi,



Does anyone have experience with Meta Integration Solutions and converting to SSIS?

And converting to SQL?

And converting to Analysis Server?

And converting to Reporting Server?



You can find their website http://www.metaintegration.net/Products/Overview/Solutions.html



Constantijn Enders



View 1 Replies View Related

SQL Northwind End To End Database Solutions (examples)

Apr 30, 2004

Does anyone know where I can find a Northwind end-to-end database solution (example) written in ASP.NET (VB)? I would like to reverse engineer this project to learn more about ASP.NET.

Thanks.

View 1 Replies View Related

Issues For Embedded SSIS Solutions

Mar 16, 2006

My warehouse app employs a distributed architecture. Extractions from disparate (wildly so) systems, and transformations and loads into a standardized schema, are performed at various locations close to the source systems (both physically and "logically" speaking). There are security and other reasons for this. However, this causes some related design and implementation challenges for the ETL processing.

For one, the ETL processes must be successfully operated by non-technical medical administrators who actually have little interest in the application, and sometimes even in the analytics produced by the system; who have other, more pressing day-to-day work they want to be doing; and who work in organizations where turnover is high, training is spotty, and LANs are fragile and often congested.

So, real-time feedback to the operator during processing is pretty darn important. I have built a fairly sophisticated GUI (using .NET forms inside a script component) for the operational interface (input boxes just wouldn't cut it).

But that interface lacks real-time feedback on processing progress at runtime.

Anyone got that T-shirt yet? I'm thinking I need progress bars and real-time task and component progress reports. Also, is there a way to capture the built-in logging output in real time?

View 5 Replies View Related

Solutions To Large Access In SQL 2005

Dec 17, 2007

I am on a project to develop a route-finding system that searches for the optimal route for users of the system. The current version that I've built and successfully run uses normal database access in MS SQL 2005. I store node information in the database, and the application queries it using normal "select" clauses, returning a DataTable object to the application. The result is rather slow, caused by the multiple accesses to the database server. The application takes 8 seconds to find a short route, without considering the large amount of traffic-information calculation that I will add later. Any comments on the architecture, or on an approach to move my algorithm to T-SQL? (A sketch follows.)
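Moving the whole graph walk into one set-based statement avoids the per-node round trips that likely dominate those 8 seconds. A minimal recursive-CTE sketch, assuming a hypothetical dbo.Edges(FromNode, ToNode, Cost) table; this enumerates bounded paths rather than implementing a true Dijkstra search, so it is a starting point, not a finished router:

-- hypothetical schema: dbo.Edges(FromNode int, ToNode int, Cost float)
DECLARE @start int, @goal int;
SET @start = 1;
SET @goal  = 99;

WITH Paths (ToNode, Route, TotalCost, Hops) AS (
    SELECT e.ToNode,
           CAST('/' + CAST(e.FromNode AS varchar(20)) + '/'
                    + CAST(e.ToNode   AS varchar(20)) + '/' AS varchar(max)),
           e.Cost,
           1
    FROM dbo.Edges e
    WHERE e.FromNode = @start

    UNION ALL

    SELECT e.ToNode,
           p.Route + CAST(e.ToNode AS varchar(20)) + '/',
           p.TotalCost + e.Cost,
           p.Hops + 1
    FROM Paths p
    JOIN dbo.Edges e ON e.FromNode = p.ToNode
    WHERE p.Hops < 10   -- bound the search depth
      AND p.Route NOT LIKE '%/' + CAST(e.ToNode AS varchar(20)) + '/%'  -- skip cycles
)
SELECT TOP 1 Route, TotalCost
FROM Paths
WHERE ToNode = @goal
ORDER BY TotalCost;

The Hops bound and the cycle check keep the recursion finite; for large graphs, a loop-based Dijkstra over a working table will scale better than path enumeration.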

View 5 Replies View Related

Need Links For Multiple Keyword Search Solutions Please

Jun 6, 2008

I have been informed that all my keyword search solutions are susceptible to SQL injection attacks.  Does anyone have links discussing basic ' multiple ' keyword search solutions?  I would think this is a very common routine (perhaps so much so than only newbies like myself do not know it).  I have read the posts about escaping ', doing replace " ' ", " '' ", using parameters and yet every multiple keyword solution I come up with is said to be injection prone.
Example: a visitor enters "Tom's antiques" into a TextBox control, and the C# code-behind securely generates the call to the database below.

SELECT L_Name, L_City, L_State, L_Display FROM tblCompanies
WHERE L_Kwords LIKE '%' + 'Tom's' + '%'
  AND L_Kwords LIKE '%' + 'antiques' + '%'
  AND L_Display = 1
RETURN

I understand that concatenating string parts using an array and then passing the sewn-together string to a stored procedure exposes it to injection. I hope that my single-keyword routine below is secure; if it is not, then I am not understanding how parameterized SPs are supposed to be constructed to protect against injection.

string CompanyName;
CompanyName = TextBox1.Text;

PROCEDURE CoNameSearch @CompanyName varchar(100)
AS
SELECT DISTINCT L_Name, L_Phone, L_City, L_State, L_Zip, L_Enabled, L_Display
FROM tblLinks
WHERE (L_Name LIKE @CompanyName + '%') AND L_Enabled = 1 AND L_Display = 1
ORDER BY L_Name
RETURN
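For the multiple-keyword case, one injection-safe pattern is a procedure with a fixed set of optional parameters, so user text is only ever passed as data and never concatenated into the SQL statement. A sketch reusing the tblCompanies names from the example above (the parameter count and sizes are assumptions):

CREATE PROCEDURE KeywordSearch
    @Kw1 varchar(100),
    @Kw2 varchar(100) = NULL,
    @Kw3 varchar(100) = NULL
AS
-- an apostrophe in @Kw1 (e.g. Tom's) is just data here; it is never parsed as SQL
SELECT L_Name, L_City, L_State, L_Display
FROM tblCompanies
WHERE L_Kwords LIKE '%' + @Kw1 + '%'
  AND (@Kw2 IS NULL OR L_Kwords LIKE '%' + @Kw2 + '%')
  AND (@Kw3 IS NULL OR L_Kwords LIKE '%' + @Kw3 + '%')
  AND L_Display = 1
RETURN

The C# side then supplies each keyword through SqlCommand.Parameters (splitting the TextBox text on spaces), which is the same mechanism that makes the CoNameSearch routine above safe.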
 

View 5 Replies View Related

What Solutions Make Returning From A Disaster Easier?

Apr 18, 2007

I'm trying to figure out what solution (replication, mirroring, clustering) would work best for me.

I have been reading many articles in BOL and in this forum. Most talk about getting data TO a backup/standby/subscriber, but I can't find a lot of info regarding getting the data BACK after a disaster is over.

We have a main office and a disaster recovery facility. Most of the time there are no data updates at the disaster location. So, I need to get data to the disaster facility via WAN (latency is not a huge issue - end of day syncing is fine) for backup purposes. In the event of a disaster, the main office will be offline and data changes will happen at the disaster site. When the disaster is "over" and we return to the main office, what's the best scheme to reverse the data back to the main office to start business again? We are a financial company, and have gigabytes of relatively static data. Most changes are current day. So, to snapshot a 100GB database when I know only a few hundred MB changes a day doesn't seem feasible to me.

Most replication scenarios (at least from what I see) can't easily "reverse" the replication after a disaster situation. I'm looking at merge replication on a schedule which seems to look good, but was wondering if anyone else has any ideas or suggestions?

View 5 Replies View Related

Best Practice For Report Projects Related To Application Solutions

Feb 20, 2007

Hello Reporting Services Gurus!

I'm about to start on my first reporting services project, but before I mess it up, I'm looking for some guidance on how best to achieve my mission. Here's what I'm looking to achieve:

I have a datacentric application (SQL Server 2005 Express w/ Advanced Services backend) in which I want to build about 50 "canned" reports for the end users. I want to build the reports utilizing server mode so I can take advantage of some of Reporting Services' advanced features. I'm not sure what the best practice would be for structuring the Reporting Services project. Is it better to include the report project as another project within the application solution? Or should I build the report project independent of the application solution? What are the pros and cons of doing it either way? How does the report project build if it's included in the application solution? How would a ClickOnce deployment deploy the report project to the report server?

My ultimate goal would be to have an "off-the-shelf" software solution that includes an installation package consisting of the application project and the report project. Is it even possible, given the Reporting Services architecture, to achieve an install in this manner with ClickOnce, Windows Installer, or InstallShield? Or is building the report project independent of the application project and deploying the reports to the report server "manually" (i.e. deploying within the report server project) the only solution?

Any help would be greatly appreciated!

Tony

View 1 Replies View Related

SQL Server 2005 Non-ansi Joins: Any Easy Solutions?

Jul 20, 2006

My company wants me to research any flags or registry tricks that would allow non-ANSI joins ('=*' and '*=') in SQL Server 2005 with a compatibility mode of 90.

The way I understand the situation, in SQL Server 2005 with the database compatibility level set to 90, non-ANSI join SQL such as the following does not work.

Select * from
Customer, Sales
Where Customer.CustomerID *= Sales.CustomerID

To work, the SQL above would have to be converted to ansi join SQL such as the following:

Select * from
Customer LEFT OUTER JOIN Sales
On Customer.CustomerID = Sales.CustomerID

Many hours would be spent browsing through millions of lines of code to find the non-ansi SQL and have changes made.

Does anyone know of any trace flags or registry entries that would let SQL Server 2005 run at compatibility level 90 and still allow non-ANSI =* and *= joins in SQL? (A search sketch for finding the affected code follows.)
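I don't know of a supported flag, but the code-hunting part can at least be mechanized for modules stored in the database. A sketch against SQL Server 2005's sys.sql_modules (this only covers procedures, views, and functions, not SQL embedded in application code, and '*=' inside comments or string literals will produce false positives):

SELECT s.name AS schema_name, o.name AS module_name
FROM sys.sql_modules m
JOIN sys.objects o ON o.object_id = m.object_id
JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE m.definition LIKE '%*=%'   -- old-style left outer join
   OR m.definition LIKE '%=*%';  -- old-style right outer join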

Thanks,
AIMDBA

View 3 Replies View Related

Limits In OLTP

Aug 1, 1998

Is it possible / advisable to use SQL Server as the backend database for handling 40 million transactions, taking around 8 GB of space?

View 2 Replies View Related

OLTP && OLAP

Dec 19, 2007

hello everyone,

Does it make sense to create 2 databases - OLTP for inserts, updates, and deletes, and OLAP for selects - and then sync between the 2 databases?

Thanks

View 2 Replies View Related

Any Good Whitepapers On Security/deployment For Entire SQL Server BI Solutions?

Aug 1, 2007

At my current employer we are struggling with the best way to manage security and deployment of a project that contains databases, SSIS, SSAS and SSRS components, using configurations.

Environment (Dev):
3 SQL Server databases, all using mixed-mode security, using SQL Server security credentials.
12 SSIS packages; one master package, eleven child packages, 3 shared data sources
1 SSAS database; one cube, 15 dimensions, three referenced data sources from the SSIS project (in same solution)
6 SSRS reports, one data source to the cube (not shared; it doesn't appear SSRS can share data sources with other projects in the solution? Why?)

Everything runs fine in development. Now comes the tricky part.

Deploying SSIS and SSAS into production environments:

-Packages use XML config files for connection strings to three relational data sources.
-Deploy to SQL Server storage. The deploy wizard copies package dependencies (including XML config files) to the default location set in the INI file. When I do this, no config file shows up on the remote server (the remote server is not set up identical to the local one, so the directory does not exist. Do I need a UNC path?). So, being a developer with no "special" permissions on the PROD server, what security permission is allowing the deployment wizard to copy files to this location on a production server?
-Using a deploy script with dtutil doesn't copy the SSIS dependencies. Is this a matter of using COPY or XCOPY to copy the configuration files to the dependency location? Again, in real-world practice, do developers typically change this location in the INI file to another location, or stick with the default? In either case, how does security work that allows files to get copied to the remote folder? (i.e. manually, or SQL Server manages this file folder permission through some other magic)
-When using SSMS and running the package after it has been deployed on the remote server, if the config path is the default (e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Packages...), it appears to be read from the local machine's directory rather than the remote machine's directory path (do I need to use UNC paths? The wizard doesn't seem to give this option)
-When scheduling the job from SQL Agent, does the proxy account need permissions to the folder the config files sit in?
-What about the role security on the packages themselves? Where do the server roles come into play (dtsltuser, dtsadmin)?
-Because the SSAS project uses connection references to the SSIS project in BIDS, and SSIS project uses configurations, will SSAS pick up on these connections?
-What about impersonation levels for SSAS? Leave all data sources set to default, and set the database impersonation level to "UseServiceAccount"? What if the developer is not the same as the OLAP administrator on the production server? In this case, Use Service Account isn't an option, and neither is the current users credentials.
-SSAS database also has security for Full Control, but still doesn't prevent security at the data source level within the database (talking about impersonation level, not source db credentials)
-How can SSRS connections leverage other shared connections?


As you can see, there are a ton of security considerations, none of which are intuitive and can be configured multiple ways and actually work (and a ton of ways that won't work).

I need a simple cheat-sheet about each step to take to configure this so multiple developers can work without interruption, hot-deploying SSIS, SSAS, and SSRS changes into different environments (QA, PROD).

-Kory

View 2 Replies View Related

OLTP Vs Decision Support

Mar 2, 2004

Whilst on the Nth hour (N = many) of my magical journey through MS SQL BOL, I've come across OLTP vs. Decision Support. After a couple of searches here, could someone shore up the following for me please...

A decision support database is the same as warehouse database.

This is for static data commonly used for reporting and analysis.

OLTP is a live database (accommodates inserts, deletes, updates, etc.).

Is that right?

Also, would it be fair to assume that a decision support database is generally going to be spawned from the historical data of an OLTP database? Any real-world examples of these two terms would be greatly appreciated too.

Cheers

Dan

View 2 Replies View Related

ETL: OLTP -> Data Store

Jul 23, 2005

Greetings All,

I was wondering if any of you would share some of your experiences regarding the task of loading a data store from an OLTP source. We are using Analysis Services in a BI product that requires data to be pulled from one of our products, an OLTP database. The design is to first run an ETL process from the OLTP source into an operational data store; from there Analysis Services will pull its data to do its thing.

Now, for small OLTP databases (< 1 GB) the stored procs I have written to do the extraction work well; they are relatively fast and efficient. However, we have a few databases that are 10 GB, and the load could end up taking several hours. During this long load the OLTP source may be in use, and I want to avoid write blocks; but if I were to use "select ... NOLOCK" I could get dirty data brought over. I could use BCP for some of the big tables, or Bulk Copy, but I wanted to see if anyone has dealt with this issue and what their specific resolution was for their specific problem. It is my hope that by seeing how others have dealt with this, I will be able to architect a solution for my specific problem.

Regards, TFD.
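One middle road available in SQL Server 2005 is snapshot isolation: the extract sees a consistent point-in-time image without taking the shared locks that block writers, and without NOLOCK's dirty reads. A minimal sketch (the database and column names are assumptions; note that row versioning adds tempdb load):

-- enable row versioning once on the OLTP source
ALTER DATABASE SourceOltp SET ALLOW_SNAPSHOT_ISOLATION ON;

-- in the extraction batch:
USE SourceOltp;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
    -- reads a transactionally consistent version of the rows and takes no
    -- shared locks, so concurrent OLTP writers are never blocked
    SELECT OrderID, CustomerID, OrderDate, Amount
    FROM dbo.Orders;
COMMIT;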

View 1 Replies View Related

What Are The Differences Between OLTP And OLAP ?

Mar 7, 2008

I want to know the basic differences between these two (OLTP and OLAP).

View 3 Replies View Related

OLTP And Reporting Databases Separated?

Jan 12, 2006

We are using an object database for our OLTP, but for reporting we have some issues with performance, as the CPU becomes a bottleneck. And we want to be able to run on low-end computers...

One of our team members suggested replicating the object database to a SQL table. But just a single one - the most denormalized thing ever (358 columns).

Is this the fastest way we can get reporting?

    *We don't want the hard disk, RAM, or CPU to become a bottleneck (it must run on cheap stuff).

View 2 Replies View Related

ETL Architecture - Streaming Data Into An OLTP

Oct 15, 2007

Interested in feedback from the SQL grand wizards (and would-be wizards) that haunt these forums.

Let's say you need to constantly stream data into an OLTP system. We are talking multiple level hierarchies totaling upwards of 300 MB a day spread out not unlike a typical human sleep cycle (lower data during off-peak, still 24/7 requirements). All data originates from virtual machines running proprietary algorithms. The VM/data capture infrastructure needs to be massively scalable, meaning that incoming data is going to become more and more frequent and involve many different flat record formats.

The data has tremendous value when viewed both historically and in real time (95% of real-time access will be read-only). The database infrastructure is in its infancy now, and I'm trying to develop a growth plan that can meet the needs of the business as the data requirements grow. I have no doubt that the system will need to work with multiple terabytes of data within a year.

Current database environment is a single server composed of a Dell PowerEdge 2950 (Intel Quad Core 5355, 16 GB RAM, 2 x 73 GB 15K RPM SAS ) with an attached Dell PowerVault MD1000 (15 x 300 GB 10K RPM SAS in RAID 5+0 [2x7] w/hot spare) running Win 2k3 64-bit and SQL Server 2005 x64 Standard, 1-CPU.

I am interested in answering the following questions:

Based on the scaling requirements of the data capture and subsequent ETL, what transmission method would you find most favorable? For instance, we are weighing direct database writes via stored procedures for all VM systems versus establishing processes to collect, aggregate, and stream CSVs into a specialized ETL environment running SSIS packages that load the data and then call SQL stored procedures to scrub and prepare it for production import. The data will require scrub routines that need access to current production data, so distributing the core data structures to multiple ETL processing systems would be expensive and undesirable.
Cost is very important to the overall solution design. In terms of database infrastructure, how would you maximize business value while keeping cost as low as possible? For instance, do you think there is more value in an ACTIVE/ACTIVE cluster (2 x CPU licenses) where one system acts as ETL and the other as OLTP, or would you favor replication of production data from ETL to OLTP (or vice versa)? With the second scenario, am I mistaken in thinking we could get away with a Server/CAL licensing model for the ETL server?
Are there any third-party tools that I should research that would greatly aid me here?


I appreciate all feedback, criticism, and thoughts.

Best Regards,

Shane

View 5 Replies View Related

Need Info Pls: How To Convert OLTP Db To OLAP

May 21, 2008



Hi all,

I need some steps to create an OLAP DB.

Actually, I have an OLTP DB. I created an SSAS solution and created a cube with the necessary dimensions. I deployed it to the SSAS instance in Management Studio.

My question: is the database created under the SSAS instance OLAP?

Please provide me the steps to get an OLAP database...

Thanks,
Nav

View 2 Replies View Related

DB Design :: Extracting History From OLTP?

Nov 3, 2015

We have a simple and conventional OLTP database, and we need to capture all changes for insertion into a DW via staging / ODS etc.

Is there a recommended approach for this? Obviously it has to be close to real time, as there might be multiple updates within a time period. I'm thinking of triggers on the OLTP tables (bad for performance, as they are synchronous), or asynchronous methods such as change data capture or Service Broker. (A CDC sketch follows.)
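Change data capture fits this well on SQL Server 2008 and later: it reads the transaction log asynchronously, so the OLTP transactions pay none of the cost a trigger would impose. A minimal sketch with assumed database and table names:

USE MyOltpDb;
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Orders',
     @role_name     = NULL;   -- no gating role in this sketch

-- the staging/ODS load then polls the change table on its own schedule:
DECLARE @from_lsn binary(10), @to_lsn binary(10);
SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_Orders');
SET @to_lsn   = sys.fn_cdc_get_max_lsn();

SELECT *   -- __$operation: 1 = delete, 2 = insert, 3/4 = update before/after
FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, N'all');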

View 8 Replies View Related

Writeback To The Source OLTP Database.

Apr 22, 2008

I was wondering if it is possible to use SQL Server Reporting Services 2005 with the 'writeback' feature to the source OLTP database?

I have seen articles that refer to using SQL Server Analysis Services (SSAS) and writing back to the ROLAP/MOLAP database; however, this is not desirable in our case.

I have almost come to the conclusion that it is not possible without SSAS.

Cheers,

View 3 Replies View Related

SQL Server 2008 :: Large Tables In OLTP

Jul 14, 2015

How many records does a table need to have before it is considered a large table?

We are getting more deadlocks. We are using the default isolation level. Read and insert statements are blocking each other and causing deadlocks.

I am thinking that purging might reduce the deadlocks.

The table has 15 million records. Is this considered a large table or not in OLTP systems?

In general, how many records do we need before we consider a table large?
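Before changing anything, it is worth seeing exactly which statements are deadlocking. On SQL Server 2008, the built-in system_health Extended Events session already records deadlock graphs; a sketch that pulls them from the ring buffer (on early 2008 builds the captured XML can be slightly malformed and may need cleanup):

SELECT xed.event_data.value('@timestamp', 'datetime2') AS event_time,
       xed.event_data.query('data/value/deadlock')     AS deadlock_graph
FROM (
    SELECT CAST(st.target_data AS xml) AS target_data
    FROM sys.dm_xe_session_targets st
    JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS t
CROSS APPLY t.target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS xed(event_data);

If the graphs show readers and writers colliding on the same rows, enabling READ_COMMITTED_SNAPSHOT is often a bigger win than purging, since readers stop taking shared locks entirely.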

View 1 Replies View Related

Analysis :: Creating Tabular Model On OLTP

Nov 4, 2015

I have been looking at implementing a tabular model based on an OLTP database that's not dimensional. I know that this is possible, but during my proof of concept I have encountered numerous problems...

The things that I have run into are: after setting up the relationships, I have found that measures' filter context doesn't propagate along the relationships as I would expect. If the measure is coming from a target table and not a source, then an ALL member is returned (as in multidimensional when a dimension isn't related to a measure group). Given the layout of an OLTP database, this will be hard to avoid.

One thing I have done to try to mitigate the above problem is to combine the tables used for measures in a view and use that as the source to connect to the rest of the tables. However, due to the tables being of different grains, this has created duplication in some of the keys and measures, so the keys can't be used in relationships and the measures aren't accurate.

Have other people come across these things? Or should I give up the ghost and just recommend using dimensional models for the source? Is tabular just geared towards a DW, the same as multidimensional?

View 2 Replies View Related

Mirroring OLTP DB With Transactional Replication To Staging DB

Mar 12, 2007

I want to create a mirrored DB set for data entry in an extremely busy OLTP DB. I want to add transactional replication between the production server and a staging server outside my quorum that I will use to index the data and prepare it for reporting and warehousing purposes.

If/when failover takes place, what happens to my transactional replication between the former production server (now presumably offline) and my staging DB? Does it switch to the new production server automatically, or do I have to manually set up the replication between the new production server and the staging DB?

Thanks in advance.

View 2 Replies View Related

Best Backup/restore (OLTP/OLAP) Practices In 2005

Aug 14, 2007

We have a live OLTP database for which we create full backups every week and differential backups every day. Recently we added an OLAP database, which we need to update daily with changes from the live database.

This is the process we are planning to use.
1. Restore last full OLTP backup.
2. Apply the last differential OLTP backup.
At this point we should have a replica of the live OLTP database.
3. Update OLAP database based on the OLTP replica database.
4. Delete the OLTP replica database.

Two questions.
1. If different from the process above, how is this OLTP-to-OLAP transformation typically done in the industry?
2. What is the best way to implement this process with SQL Server 2005?
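For question 2, the restore half of steps 1-2 scripts cleanly in SQL Server 2005. A minimal sketch (the backup paths and logical file names are assumptions; WITH REPLACE overwrites yesterday's replica, which also takes care of step 4):

-- step 1: restore the last full backup, leaving the DB ready for more restores
RESTORE DATABASE OltpReplica
FROM DISK = N'D:\Backups\Oltp_Full.bak'
WITH MOVE N'Oltp_Data' TO N'D:\Data\OltpReplica.mdf',
     MOVE N'Oltp_Log'  TO N'D:\Data\OltpReplica_log.ldf',
     NORECOVERY, REPLACE;

-- step 2: apply the latest differential and bring the replica online
RESTORE DATABASE OltpReplica
FROM DISK = N'D:\Backups\Oltp_Diff.bak'
WITH RECOVERY;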

Thanks.

View 3 Replies View Related

Data Warehousing :: Can Use Dimensional Model For OLTP System?

Jul 9, 2015

Question: Is it feasible to use a star schema dimensional model for an OLTP system that incurs few (750 per day) sales order transactions?

Background: My customer wants to replace an existing OLTP system database because it runs on Oracle and their in-house expertise is in SQL Server.  The original database developers that designed the Oracle DB have apparently retired.  The Oracle database has been over-normalized, to say the least.  The number of sales orders being entered daily is small: about 500-750 per day.  These entries are done at the five clerks' convenience, from a paper form, and are very unlikely to ever be entered in quick succession.  Nothing else gets regularly entered into this database except for the occasional change to a customer, but new customers are very few and far between.  

I've designed a star schema for the replacement database with the Sales Order Header and Sales Order detail table combined into a single 'fact' table, and I've introduced some duplication into dimension tables (like customer) in order to eliminate some of the joins (and confusion) that were built into the original database.

I've never tried this before.  Is there any reason this would not or should not work?

View 5 Replies View Related

OLTP Database Design Help For Bank's Customer Table

Aug 9, 2007


Hello, friends

1) CustomerID
2) FirstName
3) MiddleName
4) SurName
5) Title
6) Marital Status
7) Education
8) Occupation
9) Annual Income
10) Line of Business
11) DOB
12) Father Name
13) Mother Name
14) SpouseName
15) Gender
16) Email
17) MainTel
18) Home Tel
19) Passport Number
20)----------------------
21)- - - - - - - - - - -


100)-------------------
The list above is a snapshot of our customer master table, which contains approximately 100 attributes related to a customer.

We are designing an application for the banking sector (but NOT a core banking solution), for which we may need to capture a variable number of addresses for a bank's customers, i.e. more than the three usual address types: Fixed, Temporary, and Communication (which is generally the case with all banks).
A single address includes address1/address2/city/country/state/pincode fields.
In the context of an OLTP database, we have the option to put multiple addresses in a child table, but that involves joins at the time of data retrieval and slows down the query.

As another option, we can create redundant address columns (address1/address2/city/country/state/pincode) in the master table that will accumulate addresses if demand for more than three address types arises (a reasonable number of extra addresses is expected, i.e. 10).

The database is expected to serve the records of approximately 25 million bank customers, so can someone suggest how to maintain the balance between the two approaches? (A child-table sketch follows.)
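For what it's worth, at 25 million customers the child table usually wins: keyed on CustomerID, fetching all of a customer's addresses is a single seek plus a short range scan, so the join cost stays flat no matter how many address types are added, while the wide-master design burns space on mostly-NULL columns and still caps the count. A sketch with assumed names and sizes:

CREATE TABLE dbo.CustomerAddress (
    CustomerID  int          NOT NULL
        REFERENCES dbo.Customer (CustomerID),
    AddressType varchar(20)  NOT NULL,  -- 'Fixed', 'Temporary', 'Communication', ...
    Address1    varchar(100) NULL,
    Address2    varchar(100) NULL,
    City        varchar(50)  NULL,
    State       varchar(50)  NULL,
    Country     varchar(50)  NULL,
    Pincode     varchar(10)  NULL,
    CONSTRAINT PK_CustomerAddress
        PRIMARY KEY CLUSTERED (CustomerID, AddressType)
);

Because the clustered key leads with CustomerID, all of a customer's addresses are stored together, so the lookup touches a handful of pages regardless of table size.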

View 2 Replies View Related

DB Engine :: Unexpected Update / Delete On OLTP Database In 2005

Jun 17, 2015

We had a major issue where one of the tables in a heavily used OLTP database seems to have had records updated that were not expected to change.

Scenario:

We found around 12K contracts updated to status 'expired' even though their expiry dates are not set for that to happen:

For example, the table below has a contract status column which overnight seems to have had its values updated to 'expired', even though the start and expiry dates do not follow the logic for that.

The above had been working for the past 3 years via an SP scheduled through a SQL Agent job, which expires active contracts whose expiration date is less than today's 12:00 AM. There has been no change in the SP.

How can I track how it happened and what caused it? (An audit-trigger sketch follows.)
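After the fact, the transaction log is about the only place the answer still lives (a log-reader tool can sometimes recover who ran the update before the log space is reused). Going forward, a lightweight audit trigger records who and what changed each status; a sketch with assumed table and column names:

CREATE TABLE dbo.ContractStatusAudit (
    AuditID     int IDENTITY(1,1) PRIMARY KEY,
    ContractID  int,
    OldStatus   varchar(20),
    NewStatus   varchar(20),
    ChangedAt   datetime DEFAULT GETDATE(),
    LoginName   sysname       NULL,
    HostName    nvarchar(128) NULL,
    ProgramName nvarchar(128) NULL
);
GO
CREATE TRIGGER trg_Contracts_AuditStatus ON dbo.Contracts
AFTER UPDATE
AS
IF UPDATE(ContractStatus)
    INSERT dbo.ContractStatusAudit
        (ContractID, OldStatus, NewStatus, LoginName, HostName, ProgramName)
    SELECT d.ContractID, d.ContractStatus, i.ContractStatus,
           SUSER_SNAME(), HOST_NAME(), APP_NAME()   -- who, from where, via what
    FROM deleted d
    JOIN inserted i ON i.ContractID = d.ContractID
    WHERE ISNULL(d.ContractStatus, '') <> ISNULL(i.ContractStatus, '');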

View 28 Replies View Related

DB Engine :: In-Memory OLTP Use With Existing Tables / Index / Procedures

Nov 10, 2015

1. I need to make use of the In-Memory OLTP engine for my pre-existing procedures, tables, and indexes. Do I need any code changes in the application, and how do I store tables/indexes in In-Memory OLTP memory?

Assume the table may have a primary key index as well.

2. If a table has one primary key index, 2 foreign key constraints, and 3 nonclustered indexes, which of these can be loaded into the memory area, and how do I do that?

3. In-Memory OLTP is a lock-free zone, whereas locks usually happen in an RDBMS context. How does this work without locks?
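A few points I'm fairly sure of, by way of a sketch: existing disk-based tables can't be converted in place - you create a new memory-optimized table and copy the data across - and in SQL Server 2014 memory-optimized tables do not support FOREIGN KEY constraints (those arrived in 2016), so the two constraints in question 2 would have to be dropped or enforced in code. On question 3: locks are replaced by optimistic multi-version concurrency control, where writers create new row versions and conflicts are detected at commit time rather than prevented by blocking. A minimal setup sketch (names and sizes are assumptions):

-- the database first needs a memory-optimized filegroup
ALTER DATABASE MyDb ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE MyDb ADD FILE
    (NAME = 'imoltp_dir', FILENAME = 'C:\Data\imoltp_dir')
    TO FILEGROUP imoltp_fg;

-- a memory-optimized table; all indexes must be declared inline with the table
CREATE TABLE dbo.SessionState (
    SessionID   int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    Payload     varbinary(8000) NULL,
    LastTouched datetime2 NOT NULL,
    INDEX ix_LastTouched NONCLUSTERED (LastTouched)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

Ordinary T-SQL reaches the table through the interop layer with few or no application changes; natively compiled stored procedures are an optional further step for hot code paths.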

View 3 Replies View Related






