I've implemented a solution with the application, database, and report server on separate machines. The application is a web app and is Internet facing. What is the best method for executing reports on the RS server that are initiated from the web server? Using URL access requires a login or anonymous access, neither of which is desired. Web services work, but I lose access to the toolbar. Is there some other way to pull this off where I can let the public access reports and still give them access to the toolbar?
I have stored procedures which have an output parameter with a row count. I used to just access the stored procedure directly. However, I'm separating my data access into a method in my data access class, and the method returns a DataTable. I'm not sure how I'm supposed to access my output parameter anymore.
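A minimal sketch of the output-parameter pattern, using hypothetical names (dbo.GetOrders, dbo.Orders, @RowCount). The parameter is still reachable from the data access method: add a SqlParameter whose Direction is ParameterDirection.Output before the data adapter's Fill call, and read its Value after Fill returns the DataTable.

-- Hypothetical procedure: returns a result set and the row count via an OUTPUT parameter
CREATE PROCEDURE dbo.GetOrders
    @CustomerID int,
    @RowCount   int OUTPUT
AS
BEGIN
    SELECT OrderID, OrderDate, Total
    FROM   dbo.Orders
    WHERE  CustomerID = @CustomerID;

    SET @RowCount = @@ROWCOUNT;   -- captured immediately after the SELECT
END
GO

-- Calling it from T-SQL; in ADO.NET the output SqlParameter is populated
-- once the DataAdapter's Fill completes
DECLARE @Rows int;
EXEC dbo.GetOrders @CustomerID = 42, @RowCount = @Rows OUTPUT;
SELECT @Rows AS RowsReturned;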
We have a database that is transaction intensive, so we are trying to separate the .ldf file from the .mdf file onto a different disk array. What RAID level should I use for the transaction log file (.ldf)?
I would like to know where I can find a senior database architect, someone who can develop and implement the database and its stored procedures. I am looking for an experienced person.
It is a contract position in San Francisco. The pay is good. Could anyone help me? I tried Dice and Monster, and it seems all of you are working...
I am replacing the corporate SQL Server at work. The new server will have 6 striped disks of 160 GB each and about 4 GB of RAM. The current SQL Server has two instances which run web applications and a small data warehouse of about 6 GB. Analysis Services is also installed.
Due to a couple of new apps being added to the server and the SQL Server 2000 Enterprise license we acquired, I was thinking of adding 2 more instances so that the applications can be independently managed in terms of restarting SQL Server. I would also like to permanently fix the memory settings on each instance to give more resources to the more important applications. The log and data files would also be split onto 2 separate hard disks. I understand there are implications on performance such as CPU, etc. Is it normally advisable to have more than 1 or 2 instances? Most of the applications are not very CPU intensive. What other implications or performance issues would I have?
We will be creating a moderately high-volume OLTP database application that needs 24/7 availability. We are planning to offload OLAP processing to a second copy of the system. We will be using SQL Server 2005.
I originally planned to set the second server up with SQL Server 2005 mirroring to cover the 24/7 availability requirement, with the idea that we could also do OLAP reporting off of the mirrored copy of the database. But I've gotten some indications that a mirror database is offline and not available for querying. So I figured I would use transactional replication to keep the OLAP database current. Now I am wondering if I need to use mirroring at all, or if I should just use transactional replication on the entire database and swap to the replicated database if the production server crashes.
What is everyone's opinion?
Replication only, for both OLAP reporting and failover? Or mirroring to one database for failover, with replication to another database for OLAP reporting?
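One option worth noting, assuming SQL Server 2005 Enterprise Edition on the mirror server: the mirror itself cannot be queried, but a database snapshot created on top of it can serve point-in-time reporting. A minimal sketch with placeholder database names and file path:

-- Run on the mirror server (SQL Server 2005 Enterprise only):
-- creates a static, read-only snapshot of the mirrored database that reports can query.
CREATE DATABASE SalesDB_Reporting_Snapshot
ON ( NAME = SalesDB_Data,                        -- logical data file name of the mirrored database
     FILENAME = 'D:\Snapshots\SalesDB_Reporting.ss' )
AS SNAPSHOT OF SalesDB;

A snapshot is frozen at the moment it is created, so it would have to be dropped and recreated on a schedule; transactional replication keeps the reporting copy fresher. The two approaches are usually weighed against latency and licensing rather than treated as interchangeable.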
In a situation where one may have a single master SQL Server that ultimately needs to communicate information back down to thousands of downstream servers, what is the recommended architectural approach?
It doesn't feel right to have to add 1,000-5,000 routes to the master SQL Server. Is there a way to have the downstream servers "broadcast" their existence to the master, so that new servers can be added and updates can happen seamlessly? Does this fall into a pub-sub scenario, or is there a better way? And, if so, how do I ensure an open conversation (so that one server doesn't miss information that all the other servers received)? Should the master dynamically create routes, or is it better to rely on an open conversation initiated by the downstream server?
Hi everybody, could anybody please explain in detail the architecture of SQL Server Reporting Services? 1) What is the Report Processor, in detail, and what does it do? 2) What is the exact role of extensions? 3) What are SOAP and WMI?
Hi, I have 1.5 years of experience in ASP.NET (C#), and I want to improve my knowledge of database architecture (data modeling, UML, normalization, etc.). Could anyone suggest a course or any books?
I have the following stored procedure: INSERT INTO MyTable (Value1, Value2) VALUES (@Value1, @Value2) SELECT SCOPE_IDENTITY() How do I put this SP in the DAL typed dataset, so I can get the identity value in the Business Layer?
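One common pattern, sketched here with a hypothetical procedure name (dbo.MyTable_Insert) and an assumed @NewID parameter: return the identity through an OUTPUT parameter so the generated DAL/TableAdapter method exposes it directly to the business layer.

CREATE PROCEDURE dbo.MyTable_Insert
    @Value1 varchar(50),   -- parameter types are assumptions
    @Value2 varchar(50),
    @NewID  int OUTPUT
AS
BEGIN
    INSERT INTO MyTable (Value1, Value2)
    VALUES (@Value1, @Value2);

    SET @NewID = SCOPE_IDENTITY();   -- identity generated by this INSERT, in this scope
END

If you keep the SELECT SCOPE_IDENTITY() version instead, configuring the query in the typed dataset to execute as a scalar query and converting the returned object to an integer in the business layer works as well.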
Don't know if this is the appropriate forum. I am looking for an experienced SS consultant to review our setup, hardware architecture, recovery plan, and to provide high-level advice moving forward. My company is a CRM hosted software provider with a dynamic, metadata-based product built in Visual Studio 2005. Currently we run on SS 2000, but plan to migrate to SS 2005 or 2008. We anticipate quite a bit of growth and want to make sure that we are on the right path. Let me know if you are interested or know someone who is.
Hi, is 'sqlservr.exe' the only Windows process that does everything for that instance of the database? Please explain the SQL Server process architecture in detail. Thanks
To all the SQL H/A experts: we were wondering if we could have 3 physical nodes and 2 active/passive clusters set up on a SAN, as seen in the image below: http://www.geocities.com/juanlieu/CP_Arch.JPG
In case you cannot see the diagram, it looks something like this:
active/passive Cluster A ---> physical server A (Win2003/SQL2005) ---> HP EVA SAN ---> physical server B (Win2003/SQL2005) ---> HP EVA SAN
active/passive Cluster B ---> physical server B (Win2003/SQL2005) ---> HP EVA SAN ---> physical server C (Win2003/SQL2005) ---> HP EVA SAN
In this setup, I understand that Server B cannot be called upon as the active server at the SAME time by both clusters. Question: what would happen if it were? Would Server B reject the last cluster that calls it? Appreciated in advance.
Hi all... We're developing an ASP.NET-based project. We plan to deploy it across multiple servers running in an NLB environment, that is, NLB via hardware or software; we generally leave that decision up to our customer. Different from prior projects we've done, this application will rely on a SQL Server 2005 database. With NLB, we essentially install our application many times across multiple servers. As hits come in from clients, they'll get directed to one server or another by whatever NLB technique is being used. The application generally doesn't care which server is hit.
But what about the database? What's a typical or best architecture to employ? Should each server have an instance of SQL Server and then somehow through replication they all keep each other updated? Or should each of the servers hit one lonely instance of SQL Server and it somehow keeps some other backup instance updated? I think the first approach seems to make more sense from a load balancing point of view, but depending on how many servers there are, it could get quite complicated.
Again, we're in the early stages, so we really haven't done much research on this yet. Any tips would be appreciated. Any good white papers out there?
I need to load a lot of Excel, CSV, etc. files. These files have hundreds of columns, and I need to validate the data. Some checks are simple range-type checks; some are more complex checks involving multiple columns.
There may be several hundred such rules, and I may need to let the program automatically correct some invalid data in the future.
Where should I implement this in SSIS? Or should I just load the files without any checking (all columns as text), and do the checking using T-SQL?
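For the second option, a hedged sketch of the load-then-validate approach (the staging and error table names, columns, and rules are all hypothetical): bulk load each file into an all-varchar staging table, then run set-based rules that record failures instead of rejecting the whole file.

-- Simple range rule: Amount must be numeric and between 0 and 10000
-- (ISNUMERIC is permissive, e.g. it accepts currency symbols, so treat this as illustrative)
INSERT INTO dbo.StagingErrors (StagingID, RuleName, ErrorMessage)
SELECT StagingID, 'AmountRange', 'Amount missing, non-numeric, or outside 0-10000'
FROM   dbo.StagingRows
WHERE  ISNUMERIC(Amount) = 0
   OR  CONVERT(decimal(18,2),
               CASE WHEN ISNUMERIC(Amount) = 1 THEN Amount END)
       NOT BETWEEN 0 AND 10000;

-- Multi-column rule: EndDate must not precede StartDate
INSERT INTO dbo.StagingErrors (StagingID, RuleName, ErrorMessage)
SELECT StagingID, 'DateOrder', 'EndDate earlier than StartDate'
FROM   dbo.StagingRows
WHERE  ISDATE(StartDate) = 1 AND ISDATE(EndDate) = 1
   AND CONVERT(datetime, EndDate) < CONVERT(datetime, StartDate);

The same rules can also be expressed in an SSIS data flow with Conditional Split and Derived Column transformations; keeping them in T-SQL tends to be easier to maintain when there are hundreds of rules, while SSIS is stronger at per-row redirection and automatic correction.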
Hi, my application is about to scale up significantly, and it seems that SSB could be very useful to help me scale it right; I especially like the multiple readers feature.
So, here's the deal: my application is about to get lots of records (peaks could get up to 300k records per hour). Each record must be processed separately, but they could be processed in parallel; there is no dependency between rows (gut feeling tells me not to overuse parallelism, but I'll do my tests). It takes about 10-15 ms to process a single row; I don't mind sliding to non-peak hours if needed. Each record consists of about 10 columns.
My questions:
1. Is SSB the right solution for a buffer queue? Are there any well-known use cases similar to my problem?
2. What is the fastest easiest way to serialize / deserialize each record?
I would prefer not to use CLR integration, due to performance concerns, and stick with T-SQL for now. I don't necessarily prefer XML serialization; if binary serialization works faster, that's fine with me.
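A minimal T-SQL-only sketch (no CLR) of serializing one record as an XML message body and reading it back on the receiving side; the services, contract, message type, queue, and column names are all hypothetical.

-- Sender: serialize one row and put it on the queue
DECLARE @RecordID int;
SET @RecordID = 1;   -- example key

DECLARE @body xml;
SET @body =
   (SELECT RecordID, Col1, Col2, CreatedAt
    FROM   dbo.IncomingRecords
    WHERE  RecordID = @RecordID
    FOR XML RAW('rec'), TYPE);

DECLARE @h uniqueidentifier;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//app/IngestService]
    TO SERVICE   '//app/WorkerService'
    ON CONTRACT  [//app/WorkContract]
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @h
    MESSAGE TYPE [//app/WorkItem] (@body);

-- Receiver: deserialize the attributes back into columns
DECLARE @msg xml;
RECEIVE TOP (1) @msg = CAST(message_body AS xml)
FROM dbo.WorkerQueue;

SELECT @msg.value('(/rec/@RecordID)[1]', 'int')          AS RecordID,
       @msg.value('(/rec/@Col1)[1]',     'varchar(50)')  AS Col1;

If XML turns out to be the bottleneck, a message type with VALIDATION = NONE can carry a hand-packed varbinary(max) body instead, at the cost of writing the packing and unpacking yourself; measuring both against the 300k-per-hour peak is the safer path.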
Can anybody give me design ideas on the following?
I have numerous tasks, any one of which can fail. I want to point them all (via a 'Failure' constraint) to a Send Mail task for a "failure notification" email. This I have set up, and it is working fine. Now I want a changing message for the email's body (MessageSource) that says "Process A has failed", "Process B has failed", etc.
My initial thought was to add a variable, add a Script Task between each task and the single Send Mail Task, have the script update the variable, and have the MessageSource (the body of the email) mapped to the variable. Is this the proper way to go about this? It seems like if I have 20 processes that could fail, then I'll have to add 20 Script Tasks; this becomes a bit unwieldy the more processes I want to monitor.
This is out of my league. I'm hoping to get some good advice from someone experienced in the area. My question is how to best handle large numbers of records, say 500,000 records or so. I am doing web programming and can't send all this info from server to client. Part of the problem is the manner in which the data gets stored. I cannot calculate which records I need to get for a distant page (i.e., if there are 10 items per page, then where do I get the data for page #512?). Below are the very first five (5) records; the first column is the primary key.
14 451 0 V5 2 vials 1 V5 8/10/2007 3
20 451 0 V10 2 vials 2 V5 8/10/2007 3
25 451 0 V5 2 vials 1 V5 8/15/2007 3
26 451 0 V10 2 vials 2 V5 8/15/2007 3
27 451 0 V40 2 vials 8 V5 8/15/2007 3
Because records 1 through 13 had been deleted, the primary keys for the first five are no longer 1, 2, 3, 4, 5. Had that been the case, a person could easily retrieve page 512 by mathematical calculation.
Page 1 would have been records 1-10, page 2 => records 11-20, and page 512 => records 5111-5120. I already have a program that loads the entire result set into an ArrayList and then picks the page of data out of the ArrayList by position. I could rewrite things so that a temporary SQL table is created, but I don't know whether that is a good idea. All advice welcome - TIA
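A minimal sketch of server-side paging, assuming SQL Server 2005 or later and a hypothetical table and key (dbo.Vials, VialID): ROW_NUMBER() renumbers the rows contiguously regardless of gaps left by deleted keys, so page 512 can be fetched directly without loading everything into an ArrayList.

DECLARE @PageNum  int;  SET @PageNum  = 512;
DECLARE @PageSize int;  SET @PageSize = 10;

SELECT *
FROM (
    SELECT ROW_NUMBER() OVER (ORDER BY VialID) AS RowNum, *
    FROM   dbo.Vials
) AS numbered
WHERE RowNum BETWEEN (@PageNum - 1) * @PageSize + 1
                 AND  @PageNum * @PageSize
ORDER BY RowNum;

On SQL Server 2000 the same effect needs a nested TOP query or a temporary table with its own IDENTITY column, which is roughly the temporary-table idea raised above.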
I have created an application that I intended to be 3-tier, but I am not sure if I did it properly. I constructed it like this: I created a DLL that contains methods that validate the passed parameters, check the data against business rules, and issue ADO.NET methods to access the data. The ASP.NET presentation layer uses Object Data Sources to link to these methods. With this architecture I consider the ASP.NET pages to be the presentation layer, the DLL to be the business layer, and the database itself to be the data layer. Now I am wondering whether standard practice is to have a further division. In that case, there would be a business layer DLL whose only purpose is to validate the parameters passed to it by the presentation layer and to do business rules checking, and a data layer DLL whose purpose is to accept parameters from the business layer and issue ADO.NET methods to access the database. In this scenario the database itself would either be considered part of the data layer or not considered a layer at all. Either approach will work, but I would like to implement the architecture that is most accepted and allows the easiest maintenance. What is the best practice for designing a 3-tier architecture?
Thank you in advance for any advice provided by the Dev Shed users.
I'm on a development team that has been having an ongoing discussion/argument about the best way to handle our users' needs while they are in Europe.
We are developing a Purchase Order application in VB.NET using MS SQL Server 2000. About 5 of our 10 users take a six-week trip to Europe. While in Europe, the users will need to use the application. However, there are some cities they visit where the network connection will be slim to none.
The ongoing argument is as follows:
-Should we create a server running SQL Server 2000 for them to take to Europe, and sync up the two databases when they return?
-Should we create a version of the application running MS Access?
-Should we create a version of the application running MySQL?
-Should we do something completely different that we haven't thought of?
-Also, I'm not sure if the following is even a possible architecture; from what I've found online, I haven't seen an architecture that runs both SQL Server and Access (hope this makes sense):
Build the VB.NET application running a shell of the database in Access (locally) that temporarily houses the data and tracks the database transactions made while using the system. Then, upon closing the application, a "module" would execute that would perform the user's transactions against SQL Server. Then we could dump the SQL Server database to an Access database before they go to Europe, and run their changes against the SQL Server database when they return. Users at home will not be making changes to the same data as those in Europe.
Interested in feedback from the SQL grand wizards (and would-be wizards) that haunt these forums.
Let's say you need to constantly stream data into an OLTP system. We are talking multiple level hierarchies totaling upwards of 300 MB a day spread out not unlike a typical human sleep cycle (lower data during off-peak, still 24/7 requirements). All data originates from virtual machines running proprietary algorithms. The VM/data capture infrastructure needs to be massively scalable, meaning that incoming data is going to become more and more frequent and involve many different flat record formats.
The data has tremendous value when viewed both historically and in real time (95% of real-time access will be read-only). The database infrastructure is in its infancy now, and I'm trying to develop a growth plan that can meet the needs of the business as the data requirements grow. I have no doubt that the system will need to work with multiple terabytes of data within a year.
Current database environment is a single server composed of a Dell PowerEdge 2950 (Intel Quad Core 5355, 16 GB RAM, 2 x 73 GB 15K RPM SAS ) with an attached Dell PowerVault MD1000 (15 x 300 GB 10K RPM SAS in RAID 5+0 [2x7] w/hot spare) running Win 2k3 64-bit and SQL Server 2005 x64 Standard, 1-CPU.
I am interested in answering the following questions:
Based on the scaling requirements of the data capture and subsequent ETL, what transmission method would you find most favorable? For instance, we are weighing direct database writes via stored procedures from all VM systems versus establishing processes to collect, aggregate, and stream CSVs into a specialized ETL environment running SSIS packages that load the data and then call SQL stored procedures to scrub it and prepare it for production import. The data will require scrub routines that need access to current production data, so distributing the core data structures to multiple ETL processing systems would be expensive and undesirable. Cost is very important to the overall solution design.
In terms of database infrastructure, how would you maximize business value while keeping cost as low as possible? For instance, do you think there is more value in an active/active cluster (2 x CPU licenses) where one system acts as ETL and the other as OLTP, or would you favor replication of production data from ETL to OLTP (or vice versa)? With the second scenario, am I mistaken in thinking we could get away with a Server/CAL licensing model for the ETL server?
Are there any third-party tools that I should research that would greatly aid me here?
I appreciate all feedback, criticism, and thoughts.
Hi, I joined a project where 100,000 rows were added every day. Now, due to additional customers, the expectation is 2 million records/day, i.e. about 10 GB worth of text files. We have to estimate the hard disk, memory, number of CPUs, etc. We will have one year's worth of data in the database; the rest will be on tape, etc. We will be using Windows 2000 and SQL Server 2000. Any comparable server sizing would be appreciated.
1. To handle the daily load, I thought that we would have a table for each day (pre-created in the database) and a view with UNION ALL selecting from all 365 tables. (This is the only way to partition in MS SQL Server, right?)
2. The requirement is to populate data warehouse tables with all the data. There will mostly be inserts, but there can also be updates to rows from the past 12 days. Hence we have to take the data from the last 12 days, massage it, and populate the data warehouse tables. How can I do this so that the data warehouse tables hold n-12 days of data and I always add the last 12 days of data to them? Do you have any suggestions? Ragu
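For point 1, a minimal sketch of a local partitioned view on SQL Server 2000, with illustrative table and column names: each member table needs a CHECK constraint on the partitioning column (and that column in the primary key) so the optimizer can skip the other 364 tables and so inserts through the view land in the right member table.

CREATE TABLE dbo.Fact_20070101 (
    LoadDate   datetime NOT NULL
        CONSTRAINT CK_Fact_20070101
        CHECK (LoadDate >= '20070101' AND LoadDate < '20070102'),
    CustomerID int      NOT NULL,
    Amount     money    NOT NULL,
    CONSTRAINT PK_Fact_20070101 PRIMARY KEY (LoadDate, CustomerID)
);
-- ...one table per day, each with its own CHECK range, then:
CREATE VIEW dbo.FactAll
AS
SELECT LoadDate, CustomerID, Amount FROM dbo.Fact_20070101
UNION ALL
SELECT LoadDate, CustomerID, Amount FROM dbo.Fact_20070102;
-- ...UNION ALL the remaining daily tables
GO

For point 2, the 12-day update window can then be handled by targeting only the last 12 member tables (or filtering the view on LoadDate), so the data warehouse load never has to rescan the full year.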