I am hoping someone can give some advice on the following things:
I have read a few times about a data access layer in an n-tier application. I am assuming that this should be done using sprocs. Is there an advantage to using sprocs instead of views (in situations where the same thing could be accomplished using either)? Will a sproc run faster than a view? Can anyone share any info?
Are sprocs best suited for data access and for enforcing business rules?
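In case a concrete comparison helps, here is the same lookup written both ways; the table and object names are hypothetical. For a simple select like this, my understanding is that the optimizer expands the view inline, so performance is typically equivalent; sprocs earn their keep once you need parameters, multiple statements, or business rules:

-- The same lookup as a view and as a sproc (names are hypothetical).
CREATE VIEW dbo.vw_ActiveCustomers AS
    SELECT CustomerId, Name FROM dbo.Customer WHERE Status = 1;
GO
CREATE PROCEDURE dbo.usp_GetActiveCustomers AS
    SELECT CustomerId, Name FROM dbo.Customer WHERE Status = 1;
GO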
I know SQL Server has reserved words that shouldn't be used. I am wondering what the best thing to do is in the following situation: what is the best way to handle storing a customer's or client's address? I am working from a book that shows the name of a column as "Address". I have found that with SQL Server 2005 Express this is treated as a reserved word (it is shown in blue in the query window). I want to keep my names short, so I am trying to avoid a name like "StreetAddress". Is my book teaching bad habits?
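For what it's worth, a bracketed identifier keeps the short name compiling cleanly even when the editor colors it as a keyword; a minimal sketch with a hypothetical table:

-- Brackets let you keep "Address" even though the editor highlights it.
CREATE TABLE dbo.Customer (
    CustomerId int IDENTITY(1,1) PRIMARY KEY,
    [Address]  varchar(100) NULL
);

SELECT CustomerId, [Address] FROM dbo.Customer;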
Thanks.
Hey gang, I've got a question about authentication. I have just loaded SQL Server on my virtual machine, along with Microsoft's BPA and the Microsoft security analyzer. I get a funny result: the security scan works, but the BPA scan does not. I also looked at the databases I have access to and noticed a new database schema. If I remove this database, will it have any effect on the present database?
I'm developing a trading application in C# that processes streaming data that can be very heavy at times. Transactions are occurring, logging information is being stored, etc., often at a very rapid pace. Until recently, I had been storing all of this information in memory in DataSets; upon a graceful exit, the application would call DataSet.WriteXml() to a file, and the next time the application was opened, it would call ReadXml() to restore its last state. This is all great in theory because it is super fast: there is negligible lag when I add a row to a DataTable that already has 12,000 rows (at a rate of 300 per second). However, if the program were to crash without a chance to write the data to file, then I'm screwed.
My solution is to have the various DataSets bound to a SQL Server database; I've created strongly typed DataSets and TableAdapters to aid this process. Because I'm often adding rows VERY quickly and in large numbers to these tables, having an INSERT command execute on the database for EVERY transaction is prohibitively slow.
What I would like is some mechanism where I only touch the local DataTables on the fly, and then occasionally call TableAdapter.Update (on the respective TableAdapters) during slow periods (or lulls in the message traffic) so that any changes to the in-memory data are persisted to the database. I'm looking for general "best practices" in this regard; nothing specific, just advice from people who have dealt with this type of application/environment before and might have some tips.
The first thing I thought about doing would be a relatively simple algorithm that, upon receiving a new transaction, sets a timer (for, say, 500 ms). When this timer fires, it calls the Update command on the DataSet that was changed. If another update comes in before the 500 ms elapse, it first checks whether there's an active timer for this DataSet; if so, it cancels it and sets a new timer for 500 ms. This way, if I have a very rapid set of transactions that all occur within a few ms of each other, it will not make any calls to the database during the "peak" of data; only when there's a 500 ms gap will it make a call.
Re: Best Practices (security): Should SQL Server (2005) *not* be installed on the same physical HD as the Windows OS (Server 2003 R2)?
Hi,
We're setting up some new servers, and today I'm looking into best practices for the SQL Server Setup portion of it.
The servers include 2 x 250 GB HDs, and from what I've read, where IIS is concerned it should not be installed on the drive that has the OS on it, for security reasons. I was wondering if the installation of SQL Server should go on the non-OS drive as well?
Does anybody have a link to either of these two documents? My company is getting ready to go through an audit, and we need some firepower and to know what is expected. Any help with obtaining the Microsoft SQL Server 2005 best practices documents is appreciated.
Dear All,
Please suggest some best practices for writing SQL Server stored procedures. I'm writing a business function (a stored procedure) which calls many stored procedures one after another. I want this to be well optimized so that it runs as fast as possible. Suggestions in this regard will be appreciated.
Thanks in advance,
T.S.Negi
I'm an experienced SQL Server and .NET developer, but I wanted to expand the way I look at things and see how other developers approach the situation I'm going to outline in this post. I'm going to be engineering a large, new project soon and I want to examine how I approach this and see if there is a better way.
I work in a small development group with two developers (myself and another). We pretty much wear all the design, testing, and development hats during the course of a system's development. I had a discussion today with the other developer about the creation of stored procedures.
I like to create small, specific stored procedures for whatever I'm doing. I will usually have at least 4 stored procedures for each table: Insert, Delete, Update, and Select. Frequently I'll have more Select procedures for special cases. I do this for several reasons. First, I can get Visual Studio to generate the basic procedures for me and utilize them in a typed dataset. Second, I can keep all my SQL code server side, in small maintainable chunks. It is also fairly obvious from the name what each stored procedure does. The main drawback is that the list of stored procedures gets huge.
The developer I work with likes to create a single stored procedure for Insert, Update, and Deletes. Based on the passed primary key, the procedure determines what it should do. For example:
Code Snippet
CREATE PROCEDURE udp_users_processing
    @key int OUTPUT,
    @name varchar(200),
    @status int
AS
IF ISNULL(@key, 0) = 0
BEGIN
    -- No key supplied: insert and hand back the generated identity key.
    INSERT INTO ut_users ([name], [status]) VALUES (@name, @status)
    SET @key = SCOPE_IDENTITY()
END
ELSE IF @key > 0
    -- Positive key: update the existing row.
    UPDATE ut_users SET [name] = @name, [status] = @status WHERE [key] = @key
ELSE
    -- Negative key: delete (presumably keyed on the absolute value).
    DELETE FROM ut_users WHERE [key] = ABS(@key)

This has the advantage of being compact, but it has issues with VS.NET and designer support. Loss of designer support isn't a huge problem, but it can be handy to have. I'm also not certain how this approach would work when using a typed dataset and the table adapter to do updates.
What is YOUR opinion? How would YOU approach this in your situations? Are there other alternatives that might work just as well?
I am very well versed in the proper way to set up a SQL Server server prior to installation.
By this I mean the proper process of placing your MDF, LDF and NDF file(s) on separate spindles/disks, placing TempDB on its own spindle/disk, and so on.
There are numerous other points to cover in setting up the server based on memory, security, processor and such but I am sure you understand.
What I am looking for is the link(s) to the whitepapers discussing these Best Practices methodologies for pre-installation setup.
I looked on the Best Practices page but did not seem to find a doc that contains all the Best Practices that should be followed, if possible of course, in setting up a server prior to the SQL Server 2005 installation process.
Can anyone please point me to the link(s)/doc(s) that describe what I am looking for?
I need to pass this information down to other members of my team.
What pros and cons would there be to having a linked server run a local stored proc against another SQL Server, versus creating that stored proc on the other SQL Server and calling it from there in the C# code? I would think that calling the stored proc directly would be more efficient than going through a linked server, but please let me know your thoughts. I'm not sure I can get permission to add a stored proc on that server, so possibly the linked server is the only solution; but if I can put a stored proc on that server, should I? Thanks. Jeff
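For reference, the two shapes being compared look something like this; the server, database, and procedure names are hypothetical:

-- Option 1: a local proc that reaches across a linked server (four-part name).
CREATE PROCEDURE dbo.usp_GetRemoteOrders AS
    SELECT OrderId, Total FROM RemoteSrv.SalesDb.dbo.Orders;
GO
-- Option 2: the proc lives on the remote server and is called directly,
-- either from T-SQL as below or straight from the C# code.
EXEC RemoteSrv.SalesDb.dbo.usp_GetOrders;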
I'm building a hosted website and I am using SQL 2005. The DBA for the host has told me that I cannot encrypt a symmetric key with a certificate when using that symmetric key for encryption, even though I had read that this method provides the optimum performance/security trade-off for encrypting columns of data.
The DBA told me I can use a cert or a symmetric key for encryption. I have searched for comparisons and found a blog entry by Laurentiu Cristofor comparing certs with asymmetric keys, which leads me to believe that certs and asymmetric keys are very different from symmetric keys.
My question is: which is the better choice for column encryption in a hosted environment, a cert or a symmetric key? Which is more secure? Does one offer a significant performance (dis)advantage?
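For context, the pattern the DBA is ruling out is the standard one from Books Online: a symmetric key used for the data, protected by a certificate. A minimal sketch, with hypothetical key, certificate, table, and column names:

CREATE CERTIFICATE ColCert WITH SUBJECT = 'Column encryption certificate';
CREATE SYMMETRIC KEY ColKey
    WITH ALGORITHM = AES_128
    ENCRYPTION BY CERTIFICATE ColCert;

OPEN SYMMETRIC KEY ColKey DECRYPTION BY CERTIFICATE ColCert;
UPDATE dbo.Customer SET CardNoEnc = EncryptByKey(Key_GUID('ColKey'), CardNo);
CLOSE SYMMETRIC KEY ColKey;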
I would like to know best practices for setting up my environment. To date, I've had everything running on a single server: the database engine, SSIS, SSAS and SSRS. The box configuration is a dual hyperthreaded 3.6 GHz Xeon with 4 GB of RAM on Windows Server 2003. I just received a much larger server and want to configure it to get the most from our environment. The new box contains four 2.6 GHz quad-core processors with 16 GB of RAM. I would like to know if I should split the ETL and database engine from SSAS and SSRS, or whether this box has enough horsepower to house it all, using my other box as a dev environment. Also, we are planning to purchase PerformancePoint 2007, primarily for PAS and Scorecard Manager, so please take that into consideration as well. Any comments are greatly appreciated.
Microsoft recommends using Windows authentication instead of SQL Server authentication in SQL Server 2005 for improved security. What are the Microsoft best practices for implementing this? It would be helpful if someone could also provide some links that talk about this.
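As a starting point, the usual pattern is to grant access to a Windows group rather than to individual accounts; a minimal sketch with a hypothetical domain group and database:

-- Grant a whole Windows group access, then scope it inside one database.
CREATE LOGIN [CONTOSO\AppUsers] FROM WINDOWS;
USE MyAppDb;
CREATE USER AppUsers FOR LOGIN [CONTOSO\AppUsers];
EXEC sp_addrolemember 'db_datareader', 'AppUsers';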
This (demo) statement is fine in Access and, so far as I can see, should be OK in SQL Server, but Enterprise Manager barfs at the final bracket. Can anyone help please?

select sum(field1) as sum1, sum(field2) as sum2 from
(SELECT * from test where id < 3
union
SELECT * from test where id > 2)

In fact, I can reduce it to:

select * from (SELECT * from test)

with the same effect. Clearly I just need telling :-)
cheers,
Jim
-- Jim, a Yorkshire polymoth
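For what it's worth, SQL Server requires an alias on a derived table, which appears to be what Enterprise Manager is objecting to; the same statement with an alias added:

select sum(field1) as sum1, sum(field2) as sum2
from (SELECT * from test where id < 3
      union
      SELECT * from test where id > 2) as t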
I've been trying to find some type of blog post, tutorial, etc. on how to set up SQL Server 2005 to serve data to a website.

1. How do I set up users? Can I have 3 roles?
   a) Owner of the DB: can read/write
   b) Reader: can only read from the database
   c) Writer: can only write to the database
How would I set this up (see the sketch below)? How can I use each of these from ASP.NET depending on what the user is currently doing on the website? E.g. just serving pages with content (reader), forms (writer), admin (owner).

I also need SQL Server to keep sessions. I've already run aspnet_regsql.exe and created all that; I'm just unsure what type of user can access it all. Any tutorials on how to set up a whole web application project from the DB to VS 2005? Thanks
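A minimal sketch of the read/write role split (database, role, login, and user names are hypothetical); each ASP.NET connection string then uses the login mapped to the appropriate role:

USE MyWebDb;
CREATE ROLE SiteReader;
CREATE ROLE SiteWriter;
GRANT SELECT ON SCHEMA::dbo TO SiteReader;
GRANT INSERT, UPDATE ON SCHEMA::dbo TO SiteWriter;

-- Map a SQL login (or Windows login) into each role.
CREATE USER WebReader FOR LOGIN WebReaderLogin;
EXEC sp_addrolemember 'SiteReader', 'WebReader';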
I have some C# code that is pulling data from a database where the majority of the values being retrieved are NULL, yet the underlying column data types are string and int, which means that I have to temporarily store these NULLs in int and string variables in C#. Later on in my code I have to test against these values, and I was wondering if I am doing it correctly with the following code.

In the following statement, the variable or_team_home_id is of a string data type, but may have had a NULL value assigned to it from the database:

if (!or_team_home_id.Equals(DBNull.Value)) {}

In the following statement, the variable or_manager_id is of an int data type, but it also may have had a NULL value assigned to it from the database:

if (!Convert.IsDBNull(or_manager_id)) {}

Is this the correct way to test against NULL values retrieved from the database and stored in their respective data types?
SELECT TOP 1 GeoIPLocation.Country, GeoIPLocation.City
FROM GeoIPLocation
INNER JOIN GeoIPBlock ON GeoIPBlock.GeoIpLocation_Id = GeoIPLocation.Id
WHERE @nIpNumber BETWEEN GeoIPBlock.IpFrom AND GeoIPBlock.IpTo

Result: disaster.
The BETWEEN operator uses the index on the PK of GeoIPBlock, which results in scanning half the table for the first part of the BETWEEN and half the table for the second part. A somewhat better query uses FORCESCAN on GeoIPBlock, but it still runs slow, as it scans a lot of records.
Question: Is there a better way to index this kind of data?
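One approach, assuming the IP blocks do not overlap: index the range start and let a backward TOP 1 seek find the candidate block, then check the range end. A sketch with a hypothetical index name:

CREATE INDEX IX_GeoIPBlock_IpFrom
    ON GeoIPBlock (IpFrom) INCLUDE (IpTo, GeoIpLocation_Id);

-- Seeks to the greatest IpFrom <= @nIpNumber instead of scanning the range.
SELECT TOP 1 l.Country, l.City
FROM GeoIPBlock b
JOIN GeoIPLocation l ON l.Id = b.GeoIpLocation_Id
WHERE b.IpFrom <= @nIpNumber
  AND b.IpTo >= @nIpNumber
ORDER BY b.IpFrom DESC;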
I am looking for help (at a beginner's level) with the use of the newly released/promoted SQL Server Compact Edition (what used to be SSEv).
I have a single-user desktop application (not PDA/mobile) which uses a JET database today, and I need to convert it to a current database, as JET is no longer supported under Vista; I understand the replacement is SQL Server Compact Edition. My application is native C++ with no .NET, and I would like to keep it that way. Can someone point me to examples, documentation, or any help that will get me going?
Currently I am connecting to the JET database using ADO and a connection string (again, ADO, not ADO.NET). Is it just a matter of changing the connection string and making sure that the right DLLs are included and registered?
Partial Class Default3
    Inherits System.Web.UI.Page

    Protected Sub Login1_Authenticate(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.AuthenticateEventArgs) Handles Login1.Authenticate

        Dim cmd As New SqlCommand
        ' The original connection string mixed SQL logins (UID/PWD) with
        ' Integrated Security; pick one authentication mode.
        con6 = New SqlConnection("Server=localhost;Database=knowledge;Integrated Security=SSPI")
        con6.Open()
        ' Parameterize the query instead of concatenating user input,
        ' which is open to SQL injection.
        cmd.CommandText = "select Status from usr where Uid = @Uid and Pass = @Pass"
        cmd.Parameters.AddWithValue("@Uid", txtUsername.Text)
        cmd.Parameters.AddWithValue("@Pass", txtPassword.Text)
        cmd.Connection = con6
I am using the OPENQUERY method to connect to an Oracle server; below is my code. When I execute the same query in Oracle it fetches 199 rows, whereas from SQL Server it returns only 66 rows. I have also tried a single record based on its ID: the SQL Server query returns 0 rows, whereas Oracle returns 4. Can someone tell me what the problem might be?
SET QUOTED_IDENTIFIER OFF
declare @sql varchar(750)

select @sql = "SELECT * from openquery(PTTSTATUS, " + '"' +
    "SELECT A.PROJECT_ID, C.STATUS_NAME, A.CNUMBER
     FROM PTT.PTT_PROJECT A, PTT.PTT_STATUS C
     WHERE (C.STATUS_NAME IN ('Closed', 'Cancelled'))
       AND A.PROJECT_STATUS_ID = C.STATUS_ID
       AND A.CNUMBER is not null
     ORDER BY A.CNUMBER " + '")'

EXEC (@SQL)
After running geography::Point(Latitude, Longitude, 4326) on the latitude and longitude provided for each location, my Geography column for each row is populated with the following:
I have a rather complicated report with lots and lots of textbox and line controls. When I preview the report on the Report Server, the layout is all kinds of skewed and all kinds of stuff is out of place. But when I export the report to PDF or TIFF, the output reverts to its proper form. Why is it doing this? Is there anything I can do so it isn't so ugly when previewing?
I wonder if somebody here could recommend a good article about MS Service Broker. I'm looking for some advice and tips in designing applications using SQL Service Broker, mainly QN. For instance, maintenance routines and common faulty scenarios I might find later when my solution is implemented. I have googled for a while but all I can find are recopied examples of QN.
Recently, several of our users have started to experience issues with hyperlinks in a report. I created a textbox, added navigation, and selected the report to 'jump to', and the users are getting this error:
The specified server url does not link to the report server for this report manager or is not in the correct format
I have changed this to a URL but the error persists. There is nothing I can find on Google or MSDN which relates to this error, so I'm kinda stuck with this one! The error is not affecting all users, just a small subset of them. Each of the users has different AD rights, with one of them being a Domain Admin, so I don't think it is a security issue.
*Obviously the <SERVERNAME> and <REPORTNAME> are not real server or report names; I'm just being cautious by not including that detail in a forum post.
Report Manager is installed on several machines as part of a web farm, but this has not caused any problems in the past (I have over 200 reports under my belt thus far).
Hi. I use data presentation controls like GridView and FormView in my application. In many of the webforms I also use multiple datasource controls, mainly for two-way data binding to controls within the data presentation controls. I am concerned about the performance issues this might cause as the number of users on these pages increases. What is the likely performance impact? Once the databind is done and values are populated in the respective controls, does the database connection of the datasource control get closed, or does it stay open? What are the best practices when implementing datasource controls?
I'm looking for some documentation on SQL 2K installation tips on a Windows 2000 Member Server platform, as well as best practices for ongoing maintenance.
Real world experience as well as Microsoft propaganda are all welcome.
I am looking for some examples of how to manage DDL scripts among various versions of a production db, development, and testing. I have tried a few things in the past, and it always gets very muddled and cumbersome. I need to be able to build any version of the database from scratch, BUT I also need to maintain an upgrade path from any version to any later version. So it is not enough to just maintain a master build script, but I don't want to maintain 2 different things (modify the master build scripts AND create a new "ALTER" script for each version change). I thought I had seen an article somewhere that laid out a process for managing this, but I can't find it now (I thought it was in SQL Server Mag). Does anybody know of this article, or have a resource they could point me to that outlines best practices in this area?
Thanks,
Jason Wood, DBA in training.
Hi All, my question is: what are the best practices for administering large DBs? (My coworker is the DB administrator; I'm more of the developer, but I'm slowly being sucked in.) My main concern is that we have some DBs that take approx 3 hrs a night just to rebuild the indexes. I know that with MSSQL 2000 I can use partitioned views to break out the table(s) into smaller databases and tables, but we also have an older server that runs MSSQL 7. Lastly, how do you handle drive space issues? Do you spread out the DB across multiple MDF files on different drives? Thanks in advance.
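One thing that can cut the nightly window on SQL Server 2000: check fragmentation first and defragment online only where needed, instead of rebuilding every index every night. A sketch with hypothetical object names (note that DBCC INDEXDEFRAG does not exist on SQL Server 7):

-- Report fragmentation, then defragment a single index online.
DBCC SHOWCONTIG ('dbo.BigTable') WITH FAST;
DBCC INDEXDEFRAG (MyDb, 'dbo.BigTable', 'IX_BigTable_Date');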