I have a small web application on SQL Server 2000. Users can only update the data in the system, but my client needs a report that displays changes. The only changes that need to be monitored are a change of order status, a change of delivery date, and a user splitting an order.
What is the best practice to keep track of changes? A mirror table for each table with changes?
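A common pattern is a narrow audit table written by an update trigger, rather than a full mirror table per source table. Below is a minimal sketch that works on SQL 2000; the Orders table and its OrderStatus/DeliveryDate columns are illustrative names, not taken from the post.

CREATE TABLE OrderAudit (
    AuditID int IDENTITY PRIMARY KEY,
    OrderID int NOT NULL,
    ChangeType varchar(20) NOT NULL,   -- 'Status', 'DeliveryDate', 'Split'
    OldValue varchar(50) NULL,
    NewValue varchar(50) NULL,
    ChangedBy sysname NOT NULL DEFAULT SUSER_SNAME(),
    ChangedAt datetime NOT NULL DEFAULT GETDATE()
)
GO
CREATE TRIGGER trg_Orders_Audit ON Orders
AFTER UPDATE
AS
BEGIN
    -- one audit row per order whose status actually changed
    INSERT INTO OrderAudit (OrderID, ChangeType, OldValue, NewValue)
    SELECT d.OrderID, 'Status', d.OrderStatus, i.OrderStatus
    FROM deleted d JOIN inserted i ON d.OrderID = i.OrderID
    WHERE d.OrderStatus <> i.OrderStatus

    -- and one per changed delivery date
    INSERT INTO OrderAudit (OrderID, ChangeType, OldValue, NewValue)
    SELECT d.OrderID, 'DeliveryDate',
           CONVERT(varchar(50), d.DeliveryDate, 120),
           CONVERT(varchar(50), i.DeliveryDate, 120)
    FROM deleted d JOIN inserted i ON d.OrderID = i.OrderID
    WHERE d.DeliveryDate <> i.DeliveryDate
END
GO

A split is easiest to log from the application code that performs it, since a trigger cannot tell a split apart from an ordinary insert.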
Here is the situation. Tables:

Account
AccountDocument
AccountTest

Then we have the following "Activity" table referencing the "Account" table above:

Table: Activity
Columns: ActivityID, AccountNo, ...

When an "Activity" with an AccountNo is created, we'd like to be able to take a snapshot (copy) of all Account-related data records from the tables Account, AccountDocument and AccountTest and store it somewhere. This way, even if the Account, AccountDocument and AccountTest tables change later, the "Activity" AccountNo integrity is maintained. Do we need to create or replicate similar tables to store data for audit? Or in which other ways can this be achieved?
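One straightforward approach is a set of snapshot tables, one per source table, each keyed by ActivityID and filled at the moment the Activity row is created. A minimal sketch for the Account table follows; the non-key columns shown are illustrative, and the same pattern would be repeated for AccountDocument and AccountTest.

CREATE TABLE AccountSnapshot (
    ActivityID int NOT NULL,
    AccountNo int NOT NULL,
    AccountName varchar(100) NULL   -- plus the remaining Account columns
)
GO
CREATE PROCEDURE dbo.SnapshotAccount
    @ActivityID int,
    @AccountNo int
AS
-- called at the moment the Activity row is created
INSERT INTO AccountSnapshot (ActivityID, AccountNo, AccountName)
SELECT @ActivityID, a.AccountNo, a.AccountName
FROM Account a
WHERE a.AccountNo = @AccountNo
GO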
I have created a server security audit and a database audit specification to audit SELECT by a certain user on a certain table. I logged in as this user and ran the SELECT statement. Then I ran this query:
"select * from sys.fn_get_audit_file('d:Auditaudit1*',null,null)"
It returned a row showing the time at which the query was run.
After 15 minutes I repeated the same action, ran the audit query again, and the same result was shown in the results pane. Isn't it supposed to return a list of entries showing each time this user ran the SELECT statement on that table, for example at 5:00 pm, then 6:00 pm, and so on?
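For reference, each audited statement should surface as its own row in the audit file, one event_time per occurrence. A query along these lines (the path mirrors the one in the post; the columns are standard sys.fn_get_audit_file output) would list every captured SELECT in order:

SELECT event_time, server_principal_name, statement
FROM sys.fn_get_audit_file('d:\Audit\audit1*', NULL, NULL)
WHERE action_id = 'SL'   -- SELECT actions only
ORDER BY event_time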
I need help... here is the problem. Last weekend, the servers in our datacenter were moved around. After this move, and maybe coincidentally, one server is performing very poorly. After running a trace with SQL Profiler, I saw the problem, which was later confirmed with another tool for SQL Server performance monitoring. It seems that all connections to the SQL Server (between 200 and 400) are doing a login/logout for each command that they process. For example, a user's connection will log in, perform a SELECT, and then log out. This is not a .NET application. The client software was not changed; it is still the same. The vendor has said that it is not supposed to do that; it is supposed to use one connection that logs on in the morning and logs off at the end of the day or whenever the user exits. One user may have several connections to the database.

At times, the server is processing over 250 login/logouts (averaged over a 30-second period). Has anyone seen this problem? I have the server set to audit failures only. The server has become very unresponsive; things that took 3 seconds now take over 15 seconds. Any ideas???
Dear friends, in the area of GIS (Geographic Information Systems) there is a feature known as versioning (long transactions). This feature allows databases to maintain different versions of data in a hierarchical structure, in order to do simulation (what if ...), historical snapshots, concurrent editing, etc. Each version can be reconciled with its parent version at any moment (merge-post changes). I have recently seen that Oracle supports this feature from version 9i. I am very interested in knowing whether SQL Server will support this feature in future versions. Looking at the SQL Server 2005 documentation, I haven't seen any related info. Thanks, Jerry
I'm having problems keeping proper version control in place between the development and production environments. I'm running SQL 2000 and we have hundreds of packages that run daily, some on schedules and some not.
Every time a package is saved, it creates a version in SQL Server. After development I want to be able to use something like Visual SourceSafe and check in the final version that was moved into production, with a version number etc.
This is especially a problem if I want to roll back to a prior version of a package: I do not know which of the 1000 versions created while developing the package to choose.
Another problem is that I do not know whether someone else is already working on a package when I want to work on it.
I cannot run a search across all the packages to get a list of which tables/fields are used where, to determine the impact of a program/database/design change that needs to be implemented.
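On the version-clutter point, it may help that SQL 2000 keeps one row per saved DTS package version in msdb, so the versions and their creation dates can at least be enumerated (and pruned). A minimal sketch:

-- list every saved version of every DTS package, newest first
SELECT name, id, versionid, createdate, owner
FROM msdb.dbo.sysdtspackages
ORDER BY name, createdate DESC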
I have a big dilemma. We are developing an application that requires parallel work to be done on different copies of databases at the same time. Then, when one group is done and ready to ship their bug fixes/features, the changes they made to the database and data have to be merged back into the baseline database.
Here are the specifics: 4 databases (that make up the product), and 4 copies of the 4 databases (one for each team).
I was thinking about using SQL-DMO to attach to each database and compare each table's schema and data against the baseline (the current release), then scripting out the changes that were made.
Can someone give me some tips on how to maintain parallel database development and the merge process that can make this happen?
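Alongside SQL-DMO scripting, a cheap first pass at the schema diff can be done straight from INFORMATION_SCHEMA, which works across databases on the same server. A hedged sketch, where TeamDB and BaselineDB are illustrative names:

-- columns present in the team copy but missing from the baseline
SELECT c.TABLE_NAME, c.COLUMN_NAME, c.DATA_TYPE
FROM TeamDB.INFORMATION_SCHEMA.COLUMNS c
WHERE NOT EXISTS (
    SELECT 1
    FROM BaselineDB.INFORMATION_SCHEMA.COLUMNS b
    WHERE b.TABLE_NAME = c.TABLE_NAME
      AND b.COLUMN_NAME = c.COLUMN_NAME
)

Running the same query with the databases swapped catches dropped columns; data diffs still need a row-by-row comparison or a scripted export.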
I replicate (transactional replication) my data entry database to a read-only database. Both are SQL 2000 + SP4. The web server reads the read-only database. At times there will be lots of changes in the data entry database, and thus lots of replication to the read-only database. I am concerned that the replication may lock the data in the read-only database, causing slow response for the web server. I would like to use row versioning so that the read-only database can supply old data when the same row is being written by replication. I read that row versioning is a feature in SQL 2005. Is there any versioning capability in SQL 2000? Thanks
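For context, SQL 2000 has no row versioning; the closest workaround on a read-only copy is usually a NOLOCK (READ UNCOMMITTED) hint, which is a different technique: it reads through the replication writers' locks at the cost of possibly seeing uncommitted data. A minimal sketch with illustrative names:

-- the web server's reads don't wait on replication's write locks,
-- but may see rows that are mid-change
SELECT OrderID, OrderStatus
FROM Orders WITH (NOLOCK)
WHERE CustomerID = 42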
I have a question about how I should go about handling different database versions (schema changes) with my application. I am using an sdf database as a local data store (either on the .NET framework or Compact Framework).
I set it up so that the database file has the database schema but no actual data, and it is copied to the AppData folder if it isn't already there. Then I load the database into the dataset, and can store data in the database with no problems.
What I want to figure out is what happens when I later decide to change the database schema. For example, say I add a column to a table. When I load the existing database into the dataset, I get an exception because the existing database doesn't have that column.
It seems that there should be some way to update the existing database so that it adds the column into the table, and sets the rows to just have NULL for that new column.
I am not sure if the TableAdapter or some other object should handle updating the existing database so it matches the latest dataset schema, or if I need to manually write SQL statements to modify the existing database.
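As far as I know, nothing in the TableAdapter stack upgrades an existing database file automatically, so the usual pattern is a small hand-rolled upgrade step: probe the schema (or keep a version number in a table) and issue the ALTER statements yourself. A hedged sketch with illustrative table/column names; note that SQL Server Compact has no IF statement, so the branch lives in application code, and the new column is added as NULL so existing rows simply get NULL:

-- 1) probe for the column (returns zero rows if it is missing)
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Customer' AND COLUMN_NAME = 'Nickname'

-- 2) if missing, run the ALTER from application code
ALTER TABLE Customer ADD Nickname nvarchar(50) NULL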
I was wondering if anyone knows whether there is a way to version a report after changes have been made, kind of like there is for an application. When changes are made to an app with a version # of say 4.0.1, you can change it to the next version number. Without using a program like SourceSafe. Thanks in advance for the speedy responses.
Could someone point me in the right direction? I have an internal development database and a production database. Is there an easy way to replicate the changes that have been made to the development version on the production server without modifying the actual data in the tables? So if I add a new user in my development version, I of course don't want to see it pop up in the live version, but adding/deleting/updating a table or column should come across.
And if possible I'd also like to know how you could do the following. Let's say we have an OrderDetail table containing information about the purchased product, and I'd like to add a new column 'total' to skip the calculation on the database every time I want to know the totals. It should be possible to initialise the value to 'times ordered * price' for every existing row. Is that possible as well?
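The second part is a two-step change: add the column, then backfill it once. A minimal sketch, where the Quantity and Price column names are assumptions:

ALTER TABLE OrderDetail ADD Total money NULL
GO
-- one-off backfill for every existing row
UPDATE OrderDetail SET Total = Quantity * Price

A computed column (Total AS Quantity * Price) is worth considering instead, since it can never drift out of sync with the underlying values.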
At ScottGu's blog about the "Database Publishing Wizard", AlexD from CodePlex said: "Regarding the multiple requests for versioning, backup/restore of remote database, and selection of individual objects - these are all things we are actively looking at for our next release in the first half of 2007." After many searches, I still don't know if this tool performs versioning, i.e., when deploying the database, only updating the differences between the local and server databases. Does Visual Studio Express 2008 have something like that? (I know that VS Team Edition 2005 did.) If this tool can't do versioning, which tool/method do you recommend? Thanks in advance. Alberto
I did a search here and found some posts, but none that answered my specific problem. I am a programmer tasked with building an application for generating quote proposals. The database is for the most part fairly simple, except when it comes to versioning and history. Basically, every quote can be revised and modified several times prior to making a final decision (final approved quote), so I need to keep track of the changes that occur during the revision process. I am not a DBA but I have had some database experience. From what I can tell I have two choices:

a.) Duplicate all the data every time the quote is revised. While this method does cause a lot of duplicate data, it is very straightforward and easy to explain (or turn over to someone else), and reporting becomes very easy as well. Reporting... this is my biggest area of concern, as the users of the app should have the ability to print out the original quote proposals as well as the revised quote proposals. Duplicating all the data makes reporting very easy.

b.) Create a history table and record the original data (along with who and when) before recording the new value in the main table. While this method does conserve disk space, it makes reporting a bit difficult, as you would have to pull the specific values for a specific quote revision and display the original values on the report instead of the current ones.

Table info: I'm looking at 10 to 12 tables to record and store the data. The largest table will have about 40 fields. Current estimates are producing about 5 to 8 quotes per week, and each quote is revised an average of 2 to 3 times.

Are the pros and cons I listed the main ones to be concerned with, and are there any other options? Thanks
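At those volumes (a few quotes a week, a handful of revisions each), option a.) usually wins on simplicity. One way to structure it is a revision header keyed by (QuoteID, RevisionNo), with the detail tables carrying both keys, so any revision prints exactly as stored. A hedged sketch with illustrative names:

CREATE TABLE QuoteRevision (
    QuoteID int NOT NULL,
    RevisionNo int NOT NULL,          -- 1 = original, 2, 3, ... = revisions
    RevisedBy sysname NOT NULL,
    RevisedAt datetime NOT NULL DEFAULT GETDATE(),
    IsFinal bit NOT NULL DEFAULT 0,
    CONSTRAINT PK_QuoteRevision PRIMARY KEY (QuoteID, RevisionNo)
)

-- each of the 10-12 detail tables then carries (QuoteID, RevisionNo) instead
-- of just QuoteID; a report for any revision is a plain filtered SELECT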
Hi all, I'm new to SSIS, but I have worked for some time with DTS and for a long time with other ETL tools like Informatica or OWB.
I would like to know how I can easily control my project/package versions. At the same time I need to implement a concurrency management system, which will control which developer is using which package and, when they are finished, update the central repository (as Informatica or even OWB do).
I have heard that I could implement versioning with SourceSafe, but can I implement it in the way that I've described above? Can I use CVS?
For the past few months I've been developing a DW and ETL with SQL 2005 / SSIS. My packages are being deployed to a SQL Server. Although in the end game we will have Dev/Staging/Production environments, I would still like to archive production packages when we push staging to production. Essentially I would like to archive the last X packages that were deployed to production, where X is a reasonable number (3 - 5). I don't necessarily need to have them accessible to run. One of the purposes is to have another safeguard should we miss anything in user testing and need to roll back a deployment.
I am utilizing VSS and we will have backups running on the production server, but I would prefer to have an archive that is a little more accessible.
I was just wondering if anyone has any thoughts on how to extract/archive production packages when the push is made. I could easily develop an app that queries the MSDB and exports the packages to the file system.
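For the query-MSDB route, packages deployed to SQL Server 2005 land in msdb.dbo.sysdtspackages90, with the package XML in the packagedata column, so the export can be a simple SELECT whose results an app writes out as .dtsx files. A minimal sketch:

-- one row per deployed package; packagedata holds the .dtsx XML
SELECT name, createdate, vermajor, verminor, verbuild,
       CAST(CAST(packagedata AS varbinary(max)) AS xml) AS packagexml
FROM msdb.dbo.sysdtspackages90
ORDER BY name, createdate DESC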
Edited: Maybe I should have posted this to the "managed" newsgroup. If any admins think that would be better, then let me know. I don't want to duplicate unnecessarily.
Hi,
We developed a custom Control Flow task for SSIS (2005; we have not yet had a lot of time to look at 2008) and found that it does not handle versioning, or an uninstall and the resulting lack of an addressable component, very gracefully.
Here is a typical scenario:
Baseline
Install component MyCustomTask 1.0
Create project
Save project
Action 1
Uninstall MyCustomTask 1.0 and don't install the new version (a typical user scenario!)
Open project
SSIS acts like the world has ended, especially if the user forgot to manually remove the item from the toolbox
Fix: None, obviously, but it would be nice to be a bit more graceful and informative.
Backdoor toolbox fix: "Cleanse" the toolbox when it goes haywire by deleting toolbox.tbd in Documents and Settings\<UserName>\Local Settings\Application Data\Microsoft\VisualStudio\8.0
Action 2
Uninstall MyCustomTask 1.0 cleanly, plus removing the toolbox item by hand.
Install MyCustomTask 1.1, with identical interfaces etc., and add the toolbox item by hand.
SSIS acts like the world has ended, and fails to ask you a sensible question like "do you want to upgrade the project to use the new version of the component?"
Fix: Identify major and minor version component changes and throw the user a rope.
Backdoor fix: Go into the DTSX manually - attack the DTSExecutable ExecutableType and DTS Name, for a Task in our case, and replace them with the new version info. Even if the interface for the component has changed slightly, it seems to deal with that OK.
Given that it seems very likely that there will need to be SSIS-version-specific builds of components (I am assuming that a task created in 2005 will not work with 2008), what is the best way to deal with the current lack of SSIS smarts?
Would this be the best approach:
Version the interfaces, but never the builds within a version i.e. My.CustomTask90 v1.0, My.CustomTask100 v1.0 etc.
This is a bit of a pain, rather than the simpler My.CustomTask v9.0 / v10.0 etc.
Or, are there some nice improvements in the pipe to alleviate this, plus perhaps even a way to programmatically add components to the toolbox, rather than the low-rent method of getting the user to do it by hand?
In simple words, it's about versioning at record level. Example:

Table Employee - EmployeeId, EmployeeName, EmployeeAddress, DepartmentId
Table DesignationMap - EmployeeId, DesignationId, EffectiveDate, Validity
Table Department - DepartmentId, Department
Table Designation - DesignationId, Designation

Via the Modify-Employee-Details screen, the following are editable: EmployeeName, EmployeeAddress, Department, Designation. This screen should allow the user to navigate through the change history. Example:

Version 1
EmployeeName John Smith
EmployeeAddress 60 NewYork
Department Accounts
Designation Accountant

Version 2
EmployeeName John Smith
EmployeeAddress 60 NewYork
Department Accounts
Designation Chief Accountant - changed

Version 3
EmployeeName John Smith
EmployeeAddress 60 NewYork
Department Sales - changed
Designation Marketing Manager - changed

Question: what is the best proposed database design for maintaining history records bound to a version, and the retrieval technique?

Best regards,
Sasanka
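One common design leaves the Employee table holding the current row and adds a history table keyed by (EmployeeId, Version), written whenever the screen saves a change. A hedged sketch reusing the post's table and column names (the exact history columns are a judgment call):

CREATE TABLE EmployeeHistory (
    EmployeeId int NOT NULL,
    Version int NOT NULL,              -- 1, 2, 3, ... per employee
    EmployeeName varchar(100) NOT NULL,
    EmployeeAddress varchar(200) NULL,
    DepartmentId int NOT NULL,
    DesignationId int NOT NULL,
    ChangedAt datetime NOT NULL DEFAULT GETDATE(),
    CONSTRAINT PK_EmployeeHistory PRIMARY KEY (EmployeeId, Version)
)
GO
-- retrieval of one version for the history navigation screen
CREATE PROCEDURE dbo.GetEmployeeVersion
    @EmployeeId int,
    @Version int
AS
SELECT h.Version, h.EmployeeName, h.EmployeeAddress, d.Department, g.Designation
FROM EmployeeHistory h
JOIN Department d ON d.DepartmentId = h.DepartmentId
JOIN Designation g ON g.DesignationId = h.DesignationId
WHERE h.EmployeeId = @EmployeeId AND h.Version = @Version
GO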
I have a question on the locking behaviour of the read committed with snapshot isolation level: when two transactions update two different records, why do they block each other even though a previously committed value (an old version of the record) is available?
I executed the below batch from a query window in SSMS
--Session 1:
use adventureworks
create table marbles (id int primary key, color char(5))
insert marbles values(1, 'Black')
insert marbles values(2, 'White')
alter database adventureworks set read_committed_snapshot on
set transaction isolation level read committed
begin tran
update marbles set color = 'Black' where color = 'White'
--commit tran
Before committing the first transaction, I executed the query below from a second query window in SSMS.
--Session 2:
use adventureworks
set transaction isolation level read committed
begin tran
update marbles set color = 'White' where color = 'Black'
commit tran
Here the first session blocks the second session. The same transactions execute simultaneously under snapshot isolation. So my question is: why is this blocking required under read committed with snapshot isolation?
When setting up databases for end users, what's the best practice regarding who's the dbo for each individual database - the user itself or a sysadmin?
Does it really have any importance at all who the owner (as defined by 'dbo') is ?
1.- a list of all the terms that start with A%
2.- a list of all the related terms … that belong to terms that start with A%
For number 1, I am doing a select on the Terms table with WHERE Term LIKE 'A%'.
For number 2, I am joining both tables and then once again doing a WHERE Term LIKE 'A%'.
Would it be more efficient to take the first results and put them in a table variable, and then just do a join with the second table on RelatedTerms.TermID = Terms.TermID?
The number of records that generally comes back is between 500 and 1000.
What would you consider the better approach? Or maybe there is an even better way?
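For comparison, here is a minimal sketch of the table-variable variant; the Terms(TermID, Term) shape and a RelatedTerms table keyed by TermID are inferred from the post. At 500-1000 rows the optimizer will often do just as well with the plain join, so it is worth comparing both execution plans.

declare @t table (TermID int primary key)

insert @t (TermID)
select TermID
from Terms
where Term like 'A%'

select rt.*
from RelatedTerms rt
join @t t on t.TermID = rt.TermID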
I'm a newbie to SQL, but I want to learn it on my own. Is there any way I can learn SQL? Do I have to download a sample database from the internet? Do I need to have my own server to play with? Hopefully someone can shed some light on this.
Please point me to a web resource from where I can study:
1) writing complex queries such as those involving HAVING, multi-level nested queries, GROUP BY, and T-SQL functions
2) joins - a lot of practice
3) stored procedures, transactions, cursors and triggers - I need some heavy-duty practice
Where can I get some good practice of the above? Also, please recommend a good SQL Server/T-SQL book in the light of the above requirements.
Folks - I had a look around Google and, no surprises, never found what I was looking for.
I want to see a real-world, best-practice C# stored procedure for SQL 2005 (Express is what I am using, but I don't mind the SQL edition).
Almost everything I see is a "select * from table", which to be honest was my first stored proc many years ago - everything since has been fairly detailed.
I ask because I am sceptical, after years of trying to STOP building SQL queries in code (as it's hellish!), that the CLR technique really makes any kind of difference.
If someone has found that it HAS, I'd love to hear about it. The thought of:
SqlCommand cmd = new SqlCommand ( "My Whole Stored Proc as Text" );
... doesn't appeal, never mind the potential for debugging syntactical issues and so on.
I was excited by this, until it became something I had to do in a real situation, and then I got a little worried. Should I be?
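For what it's worth, the CLR route does not mean embedding the whole procedure body as a string: the C# method is compiled into an assembly that SQL Server loads, and only the registration is T-SQL. A minimal sketch of that registration on SQL 2005, where the assembly path, class, and method names are all hypothetical:

-- enable CLR once per server
EXEC sp_configure 'clr enabled', 1
RECONFIGURE
GO
-- load the compiled C# assembly
CREATE ASSEMBLY MyClrProcs
FROM 'C:\Builds\MyClrProcs.dll'
WITH PERMISSION_SET = SAFE
GO
-- expose one static method as a stored procedure
CREATE PROCEDURE dbo.GetOrderSummary
AS EXTERNAL NAME MyClrProcs.[MyClrProcs.StoredProcedures].GetOrderSummary
GO

The usual guidance is that set-based data access stays faster in plain T-SQL, and CLR pays off mainly for procedural logic, string handling, and similar work.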
Create a table and name it Salary Information. Add an Employee Name and Salary column to the table. Create a column in the Employee table and name it Salary. Create a trigger that updates the Salary table with the employee's name and salary each time you insert data into the Salary column of the Employee table.
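A minimal sketch of that exercise follows. The Employee table and its EmployeeName column are assumed to exist already; the bracketed table name keeps the space the exercise asks for.

CREATE TABLE [Salary Information] (
    EmployeeName varchar(100) NOT NULL,
    Salary money NOT NULL
)
GO
ALTER TABLE Employee ADD Salary money NULL
GO
CREATE TRIGGER trg_Employee_Salary ON Employee
AFTER INSERT, UPDATE
AS
BEGIN
    -- record name and salary whenever a salary value arrives
    IF UPDATE(Salary)
        INSERT INTO [Salary Information] (EmployeeName, Salary)
        SELECT EmployeeName, Salary
        FROM inserted
        WHERE Salary IS NOT NULL
END
GO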
Hi, my database has grown past 1 GB, and I only have one .mdf holding everything. Should I use a secondary data file for my data? Can I do that now? Thanks.
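A secondary file can be added at any time with ALTER DATABASE; whether it helps mostly depends on whether it lands on a separate physical disk. A hedged sketch, with the database, logical, and physical names all illustrative:

ALTER DATABASE MyDb
ADD FILE (
    NAME = MyDb_Data2,                     -- logical name
    FILENAME = 'D:\Data\MyDb_Data2.ndf',   -- ideally a different physical disk
    SIZE = 512MB,
    FILEGROWTH = 128MB
)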
Good morning. I work for a company that sees a lot of people come and go. The one thing I have noticed is that people use their admin accounts to log into SQL and create stored procedures, views and databases. When a user leaves, I am stuck with all these objects owned by someone no longer working for the company. So my question to you guys is: what is the best practice to use when creating new objects? Thanks for your guru-ness!
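A common convention is to create everything qualified as dbo (dbo.objectname), so ownership never walks out the door with an employee. For orphaned objects already in place, sp_changeobjectowner can reassign them; the object name below is hypothetical:

-- reassign an object left behind by a departed user
EXEC sp_changeobjectowner 'jsmith.uspGetOrders', 'dbo'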
Say I have a customer.aspx that allows a user to enter customer data. On customer.aspx I have dropdownSalesRep, which allows the user to associate a sales rep with the customer. But some customers come to us directly and not through a sales rep, so I want the user to be able to specify "none". Is it best to have a dummy record in my SalesReps table called "none" with an ID of say "999", or is there some other better way to deal with this?
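The usual alternative to a dummy row is a nullable foreign key, so NULL simply means "no sales rep", and the dropdown's "none" entry maps to NULL rather than a magic ID. A hedged sketch with assumed table/column names:

CREATE TABLE Customer (
    CustomerID int IDENTITY PRIMARY KEY,
    CustomerName varchar(100) NOT NULL,
    SalesRepID int NULL REFERENCES SalesReps (SalesRepID)  -- NULL = no rep
)

-- a customer who came to us directly
INSERT INTO Customer (CustomerName, SalesRepID)
VALUES ('Acme Ltd', NULL)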
Hi. We have developed a quite simple ASP.NET web page that fetches a number of pieces of information from a SQL 2005 database. We are having some problems though, because of a firewall that is between the web server and the SQL server, and I think this is because of bad code on my part. I'm not that experienced yet, so I'm sure there is much to learn. Usually when I do a query against a SQL database, I do something like this:

Function GO_FormatRecordBy(ByVal intRecordBy As Integer) As String
    Dim dbQueryString As String
    Dim dbCommand As OleDbCommand
    Dim dbQueryResult As OleDbDataReader
    Dim strName As String

    ' parameterised instead of concatenated, since intRecordBy comes from outside
    dbQueryString = "SELECT Name FROM tblRegistrators WHERE tblRegistratorsID = ?"
    dbCommand = New OleDbCommand(dbQueryString, dbConn)
    dbCommand.Parameters.AddWithValue("@p1", intRecordBy)

    dbConn.Open()
    dbQueryResult = dbCommand.ExecuteReader(CommandBehavior.CloseConnection)
    dbQueryResult.Read()

    ' capture the value BEFORE closing; closing the reader also closes the
    ' connection because of CommandBehavior.CloseConnection
    strName = CStr(dbQueryResult("Name"))
    dbQueryResult.Close()

    Return strName
End Function

Now, let's say that I have a DataList that I populate with integer values, and I want to "resolve" them from another table; then I write a function like the one above. I guess this means that I open and close quite a lot of connections against the database server when I have a large table. Is there any better way of doing this? Should one open a database connection globally in, let's say, the ASA file? Would that be a better approach? When I added CommandBehavior.CloseConnection to the ExecuteReader statement, I noticed that it was a bit faster, and I think there were fewer connections in the database, so maybe there is more to "closing connections" than I usually do. Any tips on this?

Best regards,
Johan Christensson