Database Development/Deployment Best Practices For Large Teams
Jan 2, 2008
Hi,
I work with a large team developing an ASP.NET application that has a large database with over 50 complex stored procedures. It is proving more and more difficult and time consuming to centralise the development and deployment of database changes, and I was wondering if there were any best practices/tools that could be recommended. I have looked on the web for good articles and haven't found anything definitive (except that Team Foundation Server is the way forward).
A brief background to the current process: everyone develops on the same database and then updates the stored procedure scripts in SourceSafe (manually). Then, when we do a new release, someone builds a script of all the database updates and runs it. There are issues with developers overwriting each other's stored procedures, and other concurrency problems.
I am looking to move all the developers to local databases so that their work only affects them, but this brings up the problem of keeping all the local databases up to date whenever they get the latest source code. The only way I currently see is to build a database update program that will run and update each database to the latest version.
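For what it's worth, a minimal sketch of that kind of update program in T-SQL, assuming a schema-version table and numbered, re-runnable change scripts (the table, version number and ALTER statement below are all illustrative, not from the original post):

    -- One-time setup: record which schema version a database is at
    CREATE TABLE dbo.SchemaVersion (
        VersionNumber INT      NOT NULL,
        AppliedAt     DATETIME NOT NULL DEFAULT GETDATE()
    );
    GO
    -- Change script 0002: apply the change only if this database has not seen it yet
    IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE VersionNumber = 2)
    BEGIN
        ALTER TABLE dbo.Orders ADD ShippedDate DATETIME NULL;   -- hypothetical change
        INSERT INTO dbo.SchemaVersion (VersionNumber) VALUES (2);
    END
    GO

Each developer (and the release build) then just runs every script in order against their copy; scripts that have already been applied are skipped.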
Surely this must be a common issue? Anyone have any good ideas/concepts?
Also our setup is Visual Studio 2005, SQL Server 2005 and Source Safe 2005.
We are setting up a new Reporting Services 2005 enterprise reporting tier that will support multiple developers, applications, and end users. We will have mirrored environments including development, test, and production, each with its own database cluster and reporting server.
We have multiple report developers who share a single Visual Studio solution, which is saved in SourceSafe and is set up to have separate report projects for each business unit in the organization. Each report project is mapped to a specific deployment folder matching the business unit. Using the Visual Studio Configuration Manager, we can simply flip to the environment we want to deploy to and the reports are published to the correct environment and folder structure.
My problem lies with the common data sources. We are using a single master Common Data Sources folder to hold all of the data sources. The trick is that each and every reporting folder seems to have to have its own copy of the data source in Visual Studio. There does not seem to be an easy way to change the data sources for the reports when you publish to the various environments, i.e. development, test, production, etc.
Ideally, we would have a single project for the common data sources that all reporting projects and associated folders would map to, and we would have a way to associate the appropriate data source for each environment when we deploy.
I'm looking for best practices on how to set up data sources for development and deployment in an enterprise environment that uses Visual Studio to develop and publish reports. We have 3 environments, 6 data sources per environment, and about 20 reporting folders/projects in Visual Studio. That's 360 changes that have to be managed when deploying reports. Is there a best-practices way to do this?
There has got to be a better way. Can anyone give me some insight into how to set this up?
The web hotel I am using has MS SQL Server installed. I do not have it on my local development PC.
How do I develop and test a website locally, using an MS SQL DB, without having MS SQL Server? Is there any lightweight MS SQL Server I can use with a good user interface, and then just upload the DB to the MS SQL Server?
Hi! I have 6-7 tables in total containing subobjects for different objects, like phone numbers and emails for contacts. This means I have to run several queries on each detail page. I will use stored procedures for fetching the subobjects. My question: if I merged the subobjects into the same tables, letting me use perhaps 2 queries instead of 4, at the cost of perhaps doubling the size of the tables, would this have a possibility of giving me any performance difference whatsoever? As I see it, the pros are fewer queries, and the cons are larger tables plus the need for another field separating the types of objects in the table. Anyone have insight into this?
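To make the trade-off concrete, here is a rough sketch of the merged design being described, with a type column separating the subobject kinds (table and column names are made up):

    -- One table for both phone numbers and e-mails
    CREATE TABLE ContactDetails (
        detail_id    INT IDENTITY(1,1) PRIMARY KEY,
        contact_id   INT NOT NULL,
        detail_type  TINYINT NOT NULL,        -- 1 = phone number, 2 = e-mail
        detail_value VARCHAR(255) NOT NULL
    );

    -- A single query (or stored procedure) now fetches every subobject for a detail page
    SELECT detail_type, detail_value
    FROM   ContactDetails
    WHERE  contact_id = 42;                   -- the contact being displayed

With an index on contact_id, the larger table should not hurt lookups much; the real cost is the extra type column and a slightly less self-documenting schema.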
I want to log all changes made to a table (only updates, since there will be no deletes or inserts).
I would like to see the user who changed it, the date and time, the field name, the old value, and the new value. If more fields are changed during the update, then add more records to the logging table.
I have some data that is updated every day, but I don't know when. I'm trying to build a solution that runs a SQL query to check whether this data has been updated. If it has, I'll send the updated data via FTP as a text file.
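A sketch of one way to do this with an AFTER UPDATE trigger, logging one row per changed field; the dbo.Customer table, its columns and the log layout are assumptions for illustration:

    CREATE TABLE dbo.ChangeLog (
        LogID     INT IDENTITY(1,1) PRIMARY KEY,
        TableName SYSNAME        NOT NULL,
        KeyValue  INT            NOT NULL,
        FieldName SYSNAME        NOT NULL,
        OldValue  NVARCHAR(4000) NULL,
        NewValue  NVARCHAR(4000) NULL,
        ChangedBy SYSNAME        NOT NULL DEFAULT SUSER_SNAME(),
        ChangedAt DATETIME       NOT NULL DEFAULT GETDATE()
    );
    GO
    CREATE TRIGGER trg_Customer_Audit ON dbo.Customer
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Log changes to the Phone column; repeat this block for each audited column
        INSERT INTO dbo.ChangeLog (TableName, KeyValue, FieldName, OldValue, NewValue)
        SELECT 'Customer', i.CustomerID, 'Phone', d.Phone, i.Phone
        FROM   inserted AS i
        JOIN   deleted  AS d ON d.CustomerID = i.CustomerID
        WHERE  ISNULL(i.Phone, '') <> ISNULL(d.Phone, '');
    END;
    GO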
How would you solve this?
My idea is to have 2 SSIS packages:
- Package1: runs at the same time every day (inserts any missing updates into a table)
- Package2: runs every hour to check the missing updates table, and runs Package1 if any update for missing data is found.
My only worry is that if Package1 is running and Package2 decides to run Package1 at the same time, I could get into trouble if I'm using temp tables with the same name for the text file updates, etc. Thank you.
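One hedge against the two packages colliding: have Package1 take an application lock at the start of its SQL work, so a second simultaneous run simply backs off. A sketch (the lock name is arbitrary):

    -- Run inside Package1 before touching the shared temp tables / export tables
    DECLARE @rc INT;
    EXEC @rc = sp_getapplock @Resource    = 'Package1_TextFileExport',
                             @LockMode    = 'Exclusive',
                             @LockOwner   = 'Session',
                             @LockTimeout = 0;        -- give up immediately if already held
    IF @rc < 0
    BEGIN
        RETURN;    -- another instance of Package1 is running; skip this execution
    END

    -- ... build the update text file here ...

    EXEC sp_releaseapplock @Resource = 'Package1_TextFileExport', @LockOwner = 'Session';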
Hi, I am trying to create a development database server (making use of an old machine) with which to learn about SQL 2005, Oracle, etc. I'm using VS 2005 Beta 2 on my development/workstation machine. My workstation and the prospective server are connected via a router and can 'see' each other. I have installed Win 2003 Server on two separate partitions (multiple boot) and installed SQL Server 2005 on one partition; I will install Oracle 10g on the other. (I understand these two databases can run on the same machine/OS, but I just wanted to keep things tidy and I won't be using them at the same time, so...). My question is: how do I/should I configure Win 2003 Server / SQL Server 2005 on my server machine in order to be able to connect from my workstation via VS 2005 Beta 2? Any suggestions or resources on configuration appreciated.
Hi all, I have the following SqlDataSource statement which connects to my GridView:

    <asp:SqlDataSource ID="SqlDataSourceStandings" runat="server"
        ConnectionString="<%$ ConnectionStrings:ConnectionString %>"
        SelectCommand="SELECT P.firstName, P.lastName, T.teamName,
            IsNull(P.gamesPlayed, 0) as gamesPlayed,
            IsNull(P.plateAppearances, 0) as plateAppearances,
            IsNull((P.plateAppearances - (P.sacrifices + P.walks)), 0) as atbats,
            IsNull(P.hits, 0) as hits,
            P.hits / (CONVERT(Decimal(5,2), IsNull(NullIF(P.atbats, 0), 1))) AS [average],
            (P.hits + P.walks) / (CONVERT(Decimal(5,2), IsNull(NullIF((P.atbats + P.sacrifices + P.walks), 0), 1))) AS [OBP],
            (P.hits - (P.doubles + P.triples + P.homeRuns) + (2 * P.doubles) + (3 * P.triples) + (4 * P.homeRuns)) / (CONVERT(Decimal(5,2), IsNull(NullIF(P.atbats, 0), 1))) AS [SLG],
            P.singles, P.doubles, P.triples, P.homeRuns, P.walks, P.sacrifices, P.runs, P.rbis
            FROM Players P INNER JOIN Teams T ON P.team = T.teamID
            ORDER BY P.firstName, P.lastName">
    </asp:SqlDataSource>

There are 8 teams in the database, and somehow the average and OBP results are as expected for all teams except where T.teamID = 1. This doesn't make sense to me at all! For example, I get the following results with this same query:

    First Name | Last Name | Team     | GP | PA | AB | H  | AVG      | OBP      | SLG      | 1B | 2B | 3B | HR | BB | SAC | R  | RBI
    Erik       | Galvez    | Melville | 8  | 26 | 25 | 18 | 0.720000 | 0.730769 | 1.240000 | 11 | 3  | 2  | 2  | 1  | 0   | 10 | 15

As you can see, all teams except for Safe Haven's have the correct AVG and OBP. Since AVG is simply H/AB, it doesn't make sense for Gabriel Helbig's results to be 1.00000. Can anyone shed ANY light on this please? Thank you in advance, Markuu
***As a side note, could anyone also let me know how I could format the output so that AVG and OBP are only 3 decimal places? (ex: 0.719 for the 1st result)***
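On the formatting side question, one option is to round in the SELECT itself; a sketch using the AVG expression from the query above (the other computed columns would be wrapped the same way):

    SELECT CAST(P.hits / CONVERT(DECIMAL(5,2), ISNULL(NULLIF(P.atbats, 0), 1))
                AS DECIMAL(6,3)) AS [average]
    FROM   Players P;

Alternatively, the GridView's BoundField for that column can format at display time with DataFormatString="{0:0.000}".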
I need to load a lot of Excel, CSV, etc. files. These files have hundreds of columns and I need to validate the data. Some rules are simple range-type checks; some are more complex checks involving multiple columns.
There may be several hundred such rules, and I may need to let the program automatically correct some invalid data in the future.
Where should I implement this: in SSIS? Or should I just load the files without any checking (everything typed as text) and do the checking with T-SQL?
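If the T-SQL route is chosen, one common pattern is to land everything as text in a staging table, log rule violations to an error table, and only then convert the clean rows. A rough sketch with made-up table, column, and rule names:

    -- Pass 1: format checks against the all-varchar staging table
    INSERT INTO dbo.LoadErrors (RowID, ColumnName, ErrorMessage)
    SELECT RowID, 'Quantity',  'Not numeric'      FROM dbo.StagingRaw WHERE ISNUMERIC(Quantity) = 0
    UNION ALL
    SELECT RowID, 'OrderDate', 'Not a valid date' FROM dbo.StagingRaw WHERE ISDATE(OrderDate) = 0
    UNION ALL
    SELECT RowID, 'ShipDate',  'Not a valid date' FROM dbo.StagingRaw WHERE ISDATE(ShipDate) = 0;

    -- Pass 2: rows that passed the format checks move to a typed staging table
    INSERT INTO dbo.StagingTyped (RowID, Quantity, OrderDate, ShipDate)
    SELECT r.RowID, CAST(r.Quantity AS INT), CAST(r.OrderDate AS DATETIME), CAST(r.ShipDate AS DATETIME)
    FROM   dbo.StagingRaw AS r
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.LoadErrors AS e WHERE e.RowID = r.RowID);

    -- Pass 3: range and cross-column rules run against properly typed columns
    INSERT INTO dbo.LoadErrors (RowID, ColumnName, ErrorMessage)
    SELECT RowID, 'Quantity', 'Out of range (0-1000)'
    FROM   dbo.StagingTyped WHERE Quantity NOT BETWEEN 0 AND 1000;

    INSERT INTO dbo.LoadErrors (RowID, ColumnName, ErrorMessage)
    SELECT RowID, 'ShipDate', 'ShipDate earlier than OrderDate'
    FROM   dbo.StagingTyped WHERE ShipDate < OrderDate;

Each of the several hundred rules then becomes one set-based INSERT into the error table, and the same queries can later be turned into UPDATEs if automatic correction is needed.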
I am running both SQL Server 2000 and SQL Server 2005.
Let's say I have a live database that is version 1 for an application. The database is copied onto a development machine to develop version 2 of the application. I want to be able to take a snapshot of the database and then have a program write all the changes made after the snapshot out as SQL code. This is so I can run this SQL code on the live database and transform version 1 into version 2.
Is there any program/tool that will allow me to do this?
I have a production database with a backup job that creates files with the naming convention dbname_db_200503291800.bak. I want to schedule a restore job that will restore yesterday's backup. How can I write my restore statement so that it will specify the backup file with yesterday's date? Thanks
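One way is to build the file name dynamically and execute the RESTORE with sp_executesql. A sketch, where the backup path, database name, and the fixed 1800 time stamp are assumptions based on the naming convention above:

    DECLARE @FileName VARCHAR(260), @Sql NVARCHAR(1000);

    -- yyyymmdd for yesterday, plus the fixed 1800 time from the backup job
    SET @FileName = 'D:\Backups\dbname_db_'
                  + CONVERT(CHAR(8), DATEADD(DAY, -1, GETDATE()), 112)
                  + '1800.bak';

    SET @Sql = N'RESTORE DATABASE dbname_restored FROM DISK = N''' + @FileName
             + N''' WITH REPLACE';   -- add MOVE clauses if the file paths differ

    EXEC sp_executesql @Sql;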
I am about to upgrade my main database server (5 db's - largest 16Gb) from NT 4 SP 6a / SQL Server 7 SP3 to Windows 2000 SP 2 / SQL Server 2000 SP 2
I am planning to detach the db's, back up to tape a few times and then totally trash the server, rebuilding it with the new software, restoring the db's from tape and then reattaching them.
Any reason I should not use this method, and can folks advise on the best practice way of achieving this?
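For the record, the detach/reattach part of that plan is just a couple of system procedure calls; a sketch with placeholder database name and file paths:

    -- Before rebuilding the server
    EXEC sp_detach_db 'MyDatabase';

    -- After the rebuild, with the .mdf/.ldf files restored from tape
    EXEC sp_attach_db @dbname    = N'MyDatabase',
                      @filename1 = N'D:\Data\MyDatabase.mdf',
                      @filename2 = N'E:\Logs\MyDatabase_log.ldf';

Attaching the SQL Server 7.0 files to the new SQL Server 2000 instance should upgrade them in place.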
Hi, hoping I can get a few views on a question I have relating to the above.
I am new to stored procedures and triggers and I am trying to understand 'best practice' a little better. Here is my question: if I have a table that stores information, and any field in that table is updated (and changes), I would like to inactivate the row as it was prior to the change and then add the change as a new, active row. This way I can see what the value was before (and that it's inactive) and what the active value is.
Hope this makes sense; if this is the wrong way to manage change history, any suggestions would be appreciated.
A second question: if I have a table that stores a number, what would be the best way to create new records in a different table based on that number, where the number stored in table 1 represents how many times the record is to be created in the second table?
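For the first question, one way to sketch it is an INSTEAD OF UPDATE trigger that retires the old row and inserts the new values as a fresh, active row. The table needs a surrogate key so the business key can repeat across versions; every name below is hypothetical:

    -- dbo.Customer: CustomerRowID is the surrogate PK, CustomerID is the business key
    CREATE TRIGGER trg_Customer_History ON dbo.Customer
    INSTEAD OF UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Retire the rows that were targeted by the UPDATE
        UPDATE c
        SET    c.IsActive = 0
        FROM   dbo.Customer AS c
        JOIN   deleted      AS d ON d.CustomerRowID = c.CustomerRowID;

        -- Re-insert the changed values as new, active rows
        INSERT INTO dbo.Customer (CustomerID, Name, Phone, IsActive, ChangedAt)
        SELECT i.CustomerID, i.Name, i.Phone, 1, GETDATE()
        FROM   inserted AS i;
    END;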
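For the second question, a numbers (tally) table makes this a single set-based INSERT. A sketch where dbo.Orders holds the count and dbo.OrderItems is the table to be populated (both names invented for the example):

    -- One-time setup: a numbers table with enough rows for the largest count
    -- (SQL Server 2005 syntax; on 2000, populate it with a WHILE loop instead)
    CREATE TABLE dbo.Numbers (Number INT NOT NULL PRIMARY KEY);
    INSERT INTO dbo.Numbers (Number)
    SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM   sys.all_objects AS a CROSS JOIN sys.all_objects AS b;

    -- Create as many rows in the second table as the number stored in the first
    INSERT INTO dbo.OrderItems (OrderID, ItemNumber)
    SELECT o.OrderID, n.Number
    FROM   dbo.Orders  AS o
    JOIN   dbo.Numbers AS n ON n.Number <= o.ItemCount;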
Thanks. If anyone needs more data, please feel free to ask, I will help as best as I can and appreciate any advice & comments that you can give.
Hi. I have a question that I have been thinking about for some time. Have in mind that I'm somewhat of a SQL newbie. I work on a few web applications that are database-driven. When we decide to change something in the database, we normally make the changes on our replicated development database, and after some testing the changes are supposed to be implemented in the live, public database. Now, what is the best way, or recommended way, to do this? Today we do it manually, but at one time I experimented with the export/import wizard to see if it was possible. What I want to do is change the design of a table without losing the original data, and without getting the data from the development database into the live database. To deploy a stored procedure, I usually just create a SQL script for the procedures in question and then run the script against the public DB using Query Analyzer. But if I do the same for a table, I lose all the data. Do I make any sense at all? Any tips/ideas/best practices on this subject? /Johan Christensson
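On the table-change part, the data loss usually comes from scripting the table as DROP/CREATE. Scripting the change itself as ALTER TABLE statements and running those against the live database keeps the rows; a tiny sketch with invented names:

    -- Add a new column and widen an existing one without touching the data
    ALTER TABLE dbo.Customers ADD MiddleName VARCHAR(50) NULL;
    ALTER TABLE dbo.Customers ALTER COLUMN Phone VARCHAR(30) NULL;

For changes ALTER TABLE cannot express (reordering columns, for instance), the usual pattern is create a new table, INSERT...SELECT the data across, drop the old table, and rename, which is also what Enterprise Manager generates behind the scenes.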
We're using SQL Server 2000 as the back end in our web project. The problem is we have 3 different copies of the same database - one each for Development, Test and Production - sitting on 2 different machines.
My question is: is there any tool for comparing the objects (tables, stored procedures, etc.)?
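Dedicated schema-comparison tools exist for exactly this, but for a quick name-level check something along these lines works on SQL Server 2000; it assumes the other copy is reachable through a linked server called DEVSRV (all names are placeholders) and only finds missing objects, not definition differences:

    SELECT p.name, p.xtype                 -- U = user table, P = stored procedure
    FROM   ProductionDB.dbo.sysobjects AS p
    WHERE  p.xtype IN ('U', 'P')
      AND  NOT EXISTS (SELECT 1
                       FROM  DEVSRV.DevelopmentDB.dbo.sysobjects AS d
                       WHERE d.name = p.name AND d.xtype = p.xtype);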
I'm neck-deep in trying to normalize all the info I need into separate tables, but I seem to have hit a wall.
I was curious as to what the "best" way to develop a database of concert events would be. What I think I've figured out so far: I would create a table called events, with 1 row of data per concert event. Each row could then have an 'event_id' as PK, a 'date', and a column with the associated 'band_id' from the bands table. Then any time a date was accessed (assuming a populated row for that date), a list of concerts for that date would show.
I would also like to use the 'event_id' as an access point for my reviews table, where, quite obviously, I would like to be able to have reviews associated with each event from the calendar as well. Does all this seem right? Or is there a better way (i.e. more efficient, smaller database, and/or using less memory)? How, for example (in short, obviously), does a website like Pollstar.com store its events?
I guess my question with regard to this database is also: within my bands table, should I include their tour dates? Or should I put the band_id in my events table?
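A minimal sketch of the layout described above, with tour dates living in the events table rather than on the band row (all column names are illustrative):

    CREATE TABLE Bands (
        band_id   INT IDENTITY(1,1) PRIMARY KEY,
        band_name VARCHAR(100) NOT NULL
    );

    CREATE TABLE Events (
        event_id   INT IDENTITY(1,1) PRIMARY KEY,
        band_id    INT NOT NULL REFERENCES Bands(band_id),
        event_date DATETIME NOT NULL,
        venue      VARCHAR(100) NULL
    );

    CREATE TABLE Reviews (
        review_id   INT IDENTITY(1,1) PRIMARY KEY,
        event_id    INT NOT NULL REFERENCES Events(event_id),
        review_text VARCHAR(8000) NULL,
        created_at  DATETIME NOT NULL DEFAULT GETDATE()
    );

    -- All concerts on a given date
    SELECT e.event_id, b.band_name, e.venue
    FROM   Events AS e JOIN Bands AS b ON b.band_id = e.band_id
    WHERE  e.event_date >= '20080601' AND e.event_date < '20080602';

Keeping band_id on the event row means a multi-band bill can simply be several event rows for the same date, or a separate link table if one event must cover the whole bill.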
I was hoping to get a just a little bit of direction so I don't turn some baby missteps into a long-term project down the wrong road. Any help or response is greatly appreciated. Thanks again!
I have been looking for some documentation that would support or reject my opinion on Production -vs- Development naming conventions. I believe that each environment should be housed on separate servers with identical names, access, users, stored procs, etc. If you either agree or disagree with this methodology, I would appreciate your input. TIA, Bill
I am trying to refresh a test database with data from a production database. Both database structures are identical, e.g. constraints, stored procs, PKs, etc. I am trying to create a package in SSIS that accomplishes this task and I am having extensive problems. The Import/Export Wizard is out of the question because the constraints are not carried over; plus, when I try to refresh the data using the Import/Export Wizard, it fails on 1 specific table because of a column in that table named "Error code". I think "Error code" is a Microsoft keyword, so it fails on this column. Does anyone know a workaround for this simple task, which could be completed in minutes using DTS? I understand that SSIS is not as straightforward as DTS, but this task is something that DBAs do on a regular basis and therefore should not be this difficult.
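If the refresh ends up being scripted by hand (or in an Execute SQL task) rather than through the wizard, a column name like "Error code" only needs bracket quoting; a sketch, with the table name and linked server invented:

    -- Assumes a linked server named PRODSRV pointing at the production box
    INSERT INTO TestDB.dbo.SomeTable (ID, [Error code])
    SELECT ID, [Error code]
    FROM   PRODSRV.ProdDB.dbo.SomeTable;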
So I've got 2 classes, one I wrote to interrogate databases using normal ADO.NET:

Mine:

    SqlConnection myConnection = new SqlConnection(m_sConnectionString);
    SqlCommand myCommand = new SqlCommand(sQuery, myConnection);
    myCommand.CommandTimeout = 120; // 120 seconds timeout
    myConnection.Open();
    SqlDataReader result = myCommand.ExecuteReader(CommandBehavior.CloseConnection);
    return result;

Microsoft way:

    SqlDatabase dbSvc = new SqlDatabase(m_sConnectionString);
    DbCommand dbCommand = dbSvc.GetSqlStringCommand(sQuery);
    return ((SqlDataReader)dbSvc.ExecuteReader(dbCommand));

What's faster?

My way:

    SqlConnection myConnection = new SqlConnection(m_sConnectionString);
    SqlCommand myCommand = new SqlCommand(sQuery, myConnection);
    myCommand.CommandTimeout = 120; // 120 seconds timeout
    // Use a DataTable - required for default paging
    SqlDataAdapter myAdapter = new SqlDataAdapter(myCommand);
    DataTable myTable = new DataTable();
    myAdapter.Fill(myTable);
    myConnection.Close();
    myConnection.Dispose();
    myConnection = null;
    return (myTable);

Microsoft way:

    SqlDatabase dbSvc = new SqlDatabase(m_sConnectionString);
    DbCommand dbCommand = dbSvc.GetSqlStringCommand(sQuery);
    DataTable dtData = null;
    DataSet dsData = dbSvc.ExecuteDataSet(dbCommand);
    dtData = dsData.Tables[0];
    return (dtData);

Comments? Ideas? Al
I have master tables that I will be updating from our ERP system. Some examples I have seen take the approach of dropping a table in SQL Server and then creating it again before importing; some, and probably my choice, append and update; I have not seen an example where all records are deleted and the data appended afterwards. Of the three approaches, which is generally regarded as best practice / most efficient?
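For the append-and-update approach, a sketch of the usual two-step pattern against a staging copy of the ERP extract (all object names invented):

    BEGIN TRANSACTION;

    -- Step 1: update rows that already exist in the master table
    UPDATE i
    SET    i.Description = s.Description,
           i.Price       = s.Price
    FROM   dbo.Item        AS i
    JOIN   dbo.ItemStaging AS s ON s.ItemCode = i.ItemCode;

    -- Step 2: append rows that are new
    INSERT INTO dbo.Item (ItemCode, Description, Price)
    SELECT s.ItemCode, s.Description, s.Price
    FROM   dbo.ItemStaging AS s
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.Item AS i WHERE i.ItemCode = s.ItemCode);

    COMMIT TRANSACTION;

On SQL Server 2008 and later the two steps can be folded into a single MERGE statement, but the effect is the same.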
We are using SQL Server 2012 for both the production and development servers. Currently, I plan to transfer a full backup of the SQL Server database to the development server and then restore it there for testing purposes, so we don't disturb production.
I have successfully backed up the database from the production server and transferred it to the development server. However, I encountered some difficulties when trying to do the restore. My questions are:
1. Do I need to first create an empty database on the development server and then restore into this empty database? Say I create a database called "Test2", then I perform a restore into this database.
2. Currently, there is already an existing database on the development server that was restored previously by another engineer from an older version of the backup. Should I remove this database first and restore a new one, or can both databases coexist as long as we use different names for them?
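For what it's worth, RESTORE creates the database itself, so an empty database usually isn't needed first, and the older copy can stay as long as the new restore uses a different database name and different physical file names. A sketch (paths and logical file names are placeholders; RESTORE FILELISTONLY shows the real logical names in the backup):

    RESTORE FILELISTONLY
    FROM DISK = N'D:\Backups\ProdDB_Full.bak';

    RESTORE DATABASE Test2
    FROM DISK = N'D:\Backups\ProdDB_Full.bak'
    WITH MOVE N'ProdDB_Data' TO N'D:\SQLData\Test2.mdf',
         MOVE N'ProdDB_Log'  TO N'D:\SQLData\Test2_log.ldf',
         STATS = 10;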
Hi all! I know I might sound stupid, but I have this problem. I have developed my database on my laptop, which does not use Active Directory. Let's say I access the database with SPIDERMANDB1 on SQL Server, where the name of my laptop is SPIDERMAN and the database is DB1. Now, I create a SQL script from SQL Server and make the necessary changes, like changing the computer name to, say, SUPERMAN, where my deployment server's name is SUPERMAN. But when I run the SQL script on the SUPERMAN server, it gives me an error like: "User or role SUPERMANDB1 does not exist in this database". I have deployed databases in this way on servers without Active Directory, so I'm sure it's not a mistake on my side. So how am I supposed to get around this Active Directory thing? Please help me out! I'm counting on you guys!
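A sketch of the kind of statements usually involved, assuming the missing user is really a Windows login of the form SUPERMAN\SomeAccount (the backslash appears to have been lost in the post, so the account name below is a placeholder), with SQL Server 2005 syntax:

    -- Create the Windows login on the SUPERMAN server, then map it to a database user
    CREATE LOGIN [SUPERMAN\AppUser] FROM WINDOWS;     -- 'AppUser' is a placeholder account name
    USE DB1;
    CREATE USER [SUPERMAN\AppUser] FOR LOGIN [SUPERMAN\AppUser];

    -- If the script instead references a SQL-authenticated user orphaned by the move:
    EXEC sp_change_users_login 'Auto_Fix', 'AppUser';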
How do you deploy changes to a database? Is there something you can run to compare the two versions and create a script to do this? I've heard about the DTS package (now DTSX I believe) but am not sure how to set one up. If anyone could point me to an article or something that might clear things up, it would be greatly appreciated. Thank you!
I am currently developing a big winforms project that connects to a SQL Server 2005 database.
The idea is that the application will be deployed on random clients whenever needed. These clients have neither SQL Server installed, nor is there a server to provide a central database. The only option is to include the database file with the deployment.
Now, I've searched all around the internet on how to do this. I just can't seem to connect to the database-file from my deployed version. Is there something special that needs to be done?
I have an application being developed in VS 2005 using SQL Express. I've built up some new roles and have several users pre-populated that I'd like to deploy to a shared hosting environment. The problem is that I seem to keep getting: "Database 'd:hostingmemberasangaApp_DataASPNETDB.mdf' already exists. Choose a different database name. Cannot attach the file 'd:hostingmember 2agilitysite1App_DataASPNETDB.MDF' as database 'ASPNETDB'." I saw another post with a similar issue, but that resolution has not seemed to help. Can anybody give me insight as to where the "asanga" is coming from? That might help me diagnose the issue. Thanks in advance.
Greetings, I've been looking around all morning and can't seem to find a good answer on this, so I thought I'd post a message here. I'm using the recommended practice of keeping the ASPNETDB.MDF data separate from my site data. (I have my site data located in separate databases but still keep them in the App_Data directory.) I'm using SQL Server 2005 Express on a Server 2003 box and I've implemented scheduled backups on the databases in SQL Server. The questions I'm having are:
1. Why isn't the ASPNETDB.MDF database getting attached to the server? Is it using User Instances?
2. If it is using a User Instance, then isn't this degrading performance? If so, how do I force it to not use a User Instance?
3. If a User Instance is how it is supposed to work, then how are we supposed to do a full backup of the ASPNETDB.MDF database if it isn't attached in the server? (xcopy doesn't seem to work on User Instances because of the dreaded "is being used by another process" error.)
Any recommendations, advice, links, or answers to my questions would be greatly appreciated.
This question has been asked before, with vague responses. I'm using Visual Web Developer 2005 Express, SQL Server 2005 Express and SQL Server Management Studio Express. When I create an application with Login controls, VWD automatically creates an ASPNETDB.MDF security database. I also create my own database for my application, e.g. abc.mdf. In other words, I have two databases. However, my hosting company supports only one database. What do I (we) do? I have seen articles on aspnet_regsql. Does this create a new database with all the security features of ASPNETDB.MDF built into my new database, i.e. abc.mdf? (A) If yes, how do you run it with SQL Server Management Studio Express? It sounds silly, but I need instructions here please... Also, do you run this against abc.mdf, or do you use it to create abc.mdf? (B) If not, how do you achieve a single-database scenario? I'm sure this is a very common deployment question, which is very confusing for most of us hobbyists who want to deploy their web applications.