There is a production database whose data is ever increasing. For testing purposes, though, I would like to build a test database with exactly the same schema but only a subset of the data copied from the production database. I'll specify the criteria (something like a WHERE clause in a SELECT query) for copying the data from the production database.
Is there a tool that anyone has come across to do this job?
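In case it helps, here is a minimal hand-rolled sketch of the idea (not a tool recommendation), assuming a linked server named PRODSRV and that the test database was already created from the same schema script; Orders, its columns, and the date criteria are all placeholders:

-- Copy only the rows matching the user-specified criteria (the WHERE clause).
INSERT INTO TestDB.dbo.Orders (OrderID, CustomerID, OrderDate)
SELECT OrderID, CustomerID, OrderDate
FROM PRODSRV.ProdDB.dbo.Orders
WHERE OrderDate >= '20070101';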
I'd like to move data from my production environment to my development environment. The data in dev would be replaced by the prod data. I cannot do a detach/attach or a backup and restore because of some already existing dev objects in the dev environment.
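For what it's worth, a per-table refresh can be scripted without detach/attach, leaving all other dev objects alone. A rough sketch, assuming a linked server PRODSRV, identical table structures, and a placeholder table Customer with an identity column:

DELETE FROM DevDB.dbo.Customer;                 -- order deletes child-before-parent if FKs exist

SET IDENTITY_INSERT DevDB.dbo.Customer ON;      -- only needed when the table has an identity column
INSERT INTO DevDB.dbo.Customer (CustomerID, Name, City)
SELECT CustomerID, Name, City
FROM PRODSRV.ProdDB.dbo.Customer;
SET IDENTITY_INSERT DevDB.dbo.Customer OFF;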
I am debugging one of our programs and ran the fix in Test. I would like to compare table 1 between Production and Test. I want the query to output column 1 where the Production output <> the Test output. What is the best way to achieve this?
jeff
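A simple sketch of one approach, assuming SQL Server 2005 or later and that both databases live on the same server; table1 and column1 stand in for the real names:

-- Rows whose column1 value appears in Production but not in Test.
SELECT column1
FROM Production.dbo.table1
EXCEPT
SELECT column1
FROM Test.dbo.table1;
-- Run it the other way round as well to catch values that exist only in Test.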
We are setting up a new Reporting Services 2005 enterprise reporting tier that will support multiple developers, applications, and end users. We will have mirrored environments including development, test, and production, each with its own database cluster and reporting server.
We have multiple report developers who share a single Visual Studio solution, which is saved in SourceSafe and is set up to have separate report projects for each business unit in the organization. Each report project is mapped to a specific deployment folder matching the business unit. Using the Visual Studio Configuration Manager, we can simply flip to the environment we want to deploy to, and the reports are published to the correct environment and folder structure.
My problem lies with the common data sources. We are using a single master Common Data Sources folder to hold all of the data sources. The trick is that each and every reporting folder seems to have to have its own copy of the data source in Visual Studio. There does not seem to be an easy way to change the data sources for the reports when you publish to the various environments, i.e. development, test, production, etc.
Ideally, we would have a single project for the common data sources that all reporting projects and associated folders would map to, and we would have a way to associate the appropriate data source for each environment when we deploy.
I'm looking for best practices on how to set up data sources for development and deployment in an enterprise environment that uses Visual Studio to develop and publish reports. We have 3 environments, 6 data sources per environment, and about 20 reporting folders/projects in Visual Studio. That's 360 changes that have to be managed when deploying reports. Is there a best-practices way to do this?
There has got to be a better way. Can anyone give me some insight into how to set this up?
CREATE TABLE #tblTemplateBlocks ( TemplateID int, BlockID int, OrderID int
[Code] ....
I have a table called TemplateBlocks which records which Blocks are on a Template. In this example there is just one template, with three Blocks.
Table tblFields contains a list of Fields that are on each TemplateID/BlockID. In this example there are 3 fields on each TemplateID/BlockID pair.
Before I can use a template, I have to check that, in tblFields, for each Template/BlockID pairing, exactly one of the fields is set as the StageBase (I cannot have 2 fields as StageBase or no fields as StageBase). In the example data above, the data would be okay, as each Template/BlockID pairing has one row where StageBase is true.
Having checked that each Template/BlockID pairing has a StageBase, I need to check that each row where StageBase is true has a value for the WeekStart column and that, taking into account the order of the Blocks in tblTemplateBlocks, the values in WeekStart for each TemplateID/BlockID pairing are getting progressively bigger.
So, for example, the example data above would fail because the third TemplateID/BlockID pairing has no value for the WeekStart column in the row where StageBase is true.
If I added a value of 2 for WeekStart in the row for the third TemplateID/Block that has a StageBase of true - again the data would fail because, taking into account the order of the Blocks - the values for WeekStart would be 0,3,2 and these numbers need to increase.
0,3,4 would be fine. 0,3,10 would be fine. 0,3,3 would fail.
I can do this easily using a cursor or two - but how can I do this without cursors?
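Here is one set-based sketch using window functions (LAG needs SQL Server 2012 or later); it assumes tblFields has TemplateID, BlockID, StageBase, and WeekStart columns as described:

-- One row per TemplateID/BlockID pairing: StageBase count, the StageBase row's
-- WeekStart, and the previous block's WeekStart in OrderID order.
WITH PairCheck AS (
    SELECT tb.TemplateID, tb.BlockID, tb.OrderID,
           SUM(CASE WHEN f.StageBase = 1 THEN 1 ELSE 0 END)    AS StageBaseCount,
           MAX(CASE WHEN f.StageBase = 1 THEN f.WeekStart END) AS WeekStart
    FROM #tblTemplateBlocks tb
    LEFT JOIN tblFields f
           ON f.TemplateID = tb.TemplateID AND f.BlockID = tb.BlockID
    GROUP BY tb.TemplateID, tb.BlockID, tb.OrderID
),
Ordered AS (
    SELECT *,
           LAG(WeekStart) OVER (PARTITION BY TemplateID
                                ORDER BY OrderID) AS PrevWeekStart
    FROM PairCheck
)
SELECT TemplateID
FROM Ordered
GROUP BY TemplateID
HAVING MIN(StageBaseCount) = 1 AND MAX(StageBaseCount) = 1       -- exactly one StageBase per pairing
   AND COUNT(CASE WHEN WeekStart IS NULL THEN 1 END) = 0         -- every StageBase row has a WeekStart
   AND COUNT(CASE WHEN PrevWeekStart IS NOT NULL
                   AND WeekStart <= PrevWeekStart THEN 1 END) = 0; -- WeekStart strictly increasing
-- Templates returned pass all three checks; the rest fail.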
We have a production SQL 7 server, QA, and Development. From time to time, I want to move just the data from the production server to the other 2 servers without modifying the objects that may have been changed, such as stored procedures and rights. Is there a way, using the SQL tools provided, that we can move just the data? Because what also happens is that the rights to the objects change, which means my developers no longer have access to the tables for selects in QA, since the changes were overwritten by production, where they do not have the rights.
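One low-tech option that leaves objects and permissions untouched is a data-only refresh over a linked server; a rough sketch per table, with placeholder names:

-- Existing GRANTs on the QA table survive because the table itself is never dropped.
TRUNCATE TABLE QADB.dbo.Orders;        -- use DELETE instead if foreign keys block TRUNCATE
INSERT INTO QADB.dbo.Orders
SELECT * FROM PRODSRV.ProdDB.dbo.Orders;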
Hi all, I have an ASP.NET 1.1 application running on the intranet which uses SQL Server 2000. The application is in production, and every time I want to make some changes, I make the changes on my development machine and then copy the application DLL to the server. The problem is that I'm using stored procedures for all my SELECT, INSERT and DELETE statements. These stored procedures are live on the server, so I can't make the modifications locally and test them and then copy them to the server.
How can I make modifications without affecting the production server and the users? Thanks.
Should one have a separate environment for production and test systems? How do you do it on the same server? Install two instances? How do you separate the test DBs from the production DBs? Please advise... Thank you.
I have just finished upsizing an Access database to SQL Server 2000. Now SQL Server needs to be run on a test basis to determine if I need to make more changes to the front end (Access). The problem I am facing is how to keep the two databases in sync while I am testing. Any suggestions?
Also, any suggestions or comments on how to run a test setup like this (in parallel) are welcome, since this is my first time attempting a project like this.
We're using SQL Server 2000 as the back end in our web project. The problem is we have 3 different copies of the same database - one each for Development, Test and Production - sitting on 2 different machines.
My question is: is there any tool for comparing the objects (tables, stored procedures, etc.)?
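Lacking a third-party tool, a rough first pass can be scripted, e.g. listing objects that exist in one copy but not another; this sketch assumes a linked server named TESTSRV and the SQL 2000 system tables, with WebDB as a placeholder database name:

-- Objects (tables U, procedures P, views V) present in the dev copy but missing from test.
SELECT d.name, d.type
FROM dbo.sysobjects d
LEFT JOIN TESTSRV.WebDB.dbo.sysobjects t
       ON t.name = d.name AND t.type = d.type
WHERE t.name IS NULL
  AND d.type IN ('U', 'P', 'V');
-- Comparing definitions (syscomments.text) is a further step; dedicated diff tools automate both.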
Is there any tool available to migrate data from a SQL Server test database to a SQL Server production database? The data migration should be based on a condition which can be given as an input for a table by the user. The dependent tables should also be migrated based on the condition, i.e. data subsetting based on the matching conditions.
Ex: Salary > 2000
Only the rows of the table which match the condition need to be migrated for the corresponding table. Its dependent tables' rows should also be migrated based on the given condition. Please help me with a tool which can automate this.
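Absent a dedicated tool, the condition-driven copy can be hand-scripted; a hedged sketch with hypothetical Employee (parent) and EmployeeBenefits (child) tables and a linked server TESTSRV as the source:

-- Parent rows that match the user-supplied condition.
INSERT INTO ProdDB.dbo.Employee
SELECT *
FROM TESTSRV.TestDB.dbo.Employee
WHERE Salary > 2000;

-- Dependent rows, restricted to the parents that were migrated.
INSERT INTO ProdDB.dbo.EmployeeBenefits
SELECT b.*
FROM TESTSRV.TestDB.dbo.EmployeeBenefits b
WHERE EXISTS (SELECT 1
              FROM TESTSRV.TestDB.dbo.Employee e
              WHERE e.EmployeeID = b.EmployeeID
                AND e.Salary > 2000);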
We will be implementing our first SQL cluster in December. Our current plan calls for a shared development/test database server with one physical server but two SQL Server instances. Our production environment will be a SQL cluster. Is it necessary to create a clustered test environment for testing patches, hot fixes, etc.?
How do I change application code to easily switch between the application working against a test database versus working with a production database?
My thought is to change the connection string to work with the test DB and, when I'm ready to publish, change the connection string back to the production DB. After the publish is successful, change the connection string back to the test DB.
At first glance, it appears this will work. Will it? Either way, why?
I'm totally new to SQL. I have a SQL 2005 server with 3 mirrored sets - 1 for the OS, 1 for the logs and 1 for the DB. SQL had already been installed and DBs put into production before I knew the logs and DBs were all on 1 mirrored set. I need to move the logs to their own drive. How do I accomplish this?
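For SQL Server 2005, one documented route is ALTER DATABASE ... MODIFY FILE with the database offline; a sketch, assuming a database MyDB whose log's logical name is MyDB_log (check with sys.master_files) and a new log drive L::

-- Confirm the logical name and current path first.
SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('MyDB');

ALTER DATABASE MyDB SET OFFLINE;
-- ...copy the .ldf file to L:\Logs\ at the operating-system level, then:
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, FILENAME = 'L:\Logs\MyDB_log.ldf');
ALTER DATABASE MyDB SET ONLINE;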
Hello, I have a production database that I need to refresh to our test environment daily. The database size is 700 MB. I do not need to transfer the stored procedures, triggers, users and logins. Would a DTS package that runs every night be the best and easiest solution to implement, or should I look into log shipping or snapshot replication?
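For a 700 MB database, a nightly backup/restore job is also worth weighing; note it carries over everything (procedures, triggers, users), so it only fits if those extras are harmless or re-applied afterwards. A sketch with placeholder paths and logical file names, scheduled as a SQL Agent job on the test server after the backup file is copied across:

RESTORE DATABASE TestDB
FROM DISK = 'D:\Refresh\ProdNightly.bak'
WITH REPLACE,
     MOVE 'Prod_Data' TO 'D:\Data\TestDB.mdf',
     MOVE 'Prod_Log'  TO 'E:\Logs\TestDB_log.ldf';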
I am attempting to create a test DB from a full backup of the production DB. With 2012, I cannot do it the way I had done it in previous versions (and now I understand why, because of logical names).
The test DB runs in the same instance as the prod DB.
I attempted to run this but got errors. This is what I executed:
RESTORE DATABASE TEST FROM DISK = 'E:\<path>\FULL.BAK' WITH REPLACE, RECOVERY, MOVE 'PROD' TO 'E:\<path>\TEST.MDF';
The errors all say it cannot execute because PROD is in use.
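A hedged sketch of the usual sequence: list the logical files in the backup, then restore with every file (data and log) moved to TEST-specific paths so nothing collides with the files PROD has open; PROD_log below is a guess at the log's logical name:

-- Step 1: discover the logical file names inside the backup.
RESTORE FILELISTONLY FROM DISK = 'E:\<path>\FULL.BAK';

-- Step 2: restore, moving every logical file to a TEST-specific physical path.
RESTORE DATABASE TEST
FROM DISK = 'E:\<path>\FULL.BAK'
WITH REPLACE, RECOVERY,
     MOVE 'PROD'     TO 'E:\<path>\TEST.mdf',
     MOVE 'PROD_log' TO 'E:\<path>\TEST_log.ldf';

-- If the error names TEST itself (not its files) as "in use", kick out connections first:
-- ALTER DATABASE TEST SET SINGLE_USER WITH ROLLBACK IMMEDIATE;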
Setting up transactional replication in a test environment: I am willing to bet that most of you take a production backup (if so, how, and using what?), restore the database to your test environment, then run a snapshot to your subscriber, and away you go.
But perhaps you take a backup of your publisher and subscriber; if so, how do you know there are no inconsistencies because there were transactions sitting on the distributor?
What do you do if you have additional indexes on the subscriber for reporting, that are not on the publisher?
Here at work we are having issues getting consistent databases set up with transactional replication - missing rows, duplicate keys at the subscriber, etc. How can we avoid these issues?
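One concrete check after setup is replication's built-in validation; a minimal sketch, run in the published database at the publisher, with MyPub as a placeholder publication name:

-- Flags out-of-sync articles in the replication agent/monitor output.
EXEC sp_publication_validation
     @publication   = N'MyPub',
     @rowcount_only = 1;   -- 1 = fast rowcount check; see Books Online for checksum modes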
I am trying to copy a production DB (26.5 GB) with a 3 GB log from production to a test server. The prod DB name is EDD_Cat, which resides on one logical drive for the data (.mdf) and another logical drive for the log (.ldf). The test server does not have the same physical RAID allocation. The only way that I can get that much space is to spread the data across 3 logical drives. I have preallocated a database called EDD_CatT with the same total physical DB size. I have not been successful in restoring from a SQL backup device (copied from production) to the new test DB. Here are my T-SQL statements and the error:
Restore Database EDD_Catt from Iloc01bkp with File=2, Move 'EDD_Cat_dat' to 'D:\Mssql7\Data\EDD_Cat.mdf', Replace, Move 'EDD_Cat_dat' to 'E:\Mssql7\Data\EDD_Cat2.ndf', Replace, Move 'EDD_Cat_dat' to 'F:\Mssql7\Data\EDD_Cat3.ndf', Replace, Move 'EDD_Cat_log' to 'G:\Mssql7\Data\EDD_Log1.ldf', Replace
start db restore --------------------------- 2001-01-02 12:23:31.610
(1 row(s) affected)
Server: Msg 3257, Level 16, State 1, Line 0 There is insufficient free space on disk volume 'E:' to create the database. The database requires 20447232000 additional free bytes, while only 1732972544 bytes are available. Server: Msg 3013, Level 16, State 1, Line 0 Backup or restore operation terminating abnormally.
I also tried using EM but basically got the same type of error.
I could do this with SQL 6.5 as long as the db size was the same or larger.
Any advice/suggestions will be greatly appreciated. BOL and the manuals I have seem to only give examples with one file for the data and another for the log; I could not find an example of what I am trying to do.
Thanks much for your time Calvin Matsumoto - State of California
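For reference, each MOVE clause maps one logical file (as reported by RESTORE FILELISTONLY) to exactly one physical path, so a restore cannot split a single data file across drives. A sketch of the documented shape, assuming the backup really does contain just EDD_Cat_dat and EDD_Cat_log:

-- Each MOVE names a *distinct* logical file exactly once; REPLACE appears once in the WITH list.
RESTORE DATABASE EDD_CatT
FROM Iloc01bkp
WITH FILE = 2,
     REPLACE,
     MOVE 'EDD_Cat_dat' TO 'D:\Mssql7\Data\EDD_Cat.mdf',
     MOVE 'EDD_Cat_log' TO 'G:\Mssql7\Data\EDD_Log1.ldf';
-- Splitting the data across D:, E: and F: would require the source database itself
-- to have had three data files before the backup was taken.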
Howdy; I've tried this in the 'tools' area, but that didn't work too well. I suspect I will have to generate T-SQL code and then schedule it as a job. Why I can't just drag and drop with basic desires is beyond me, but THAT probably does exist.
Anyway, here is the problem (this server has many databases, on SQL 2000 SP2): 1. The user only wants me to use Monday morning's full backup, which is good in that it doesn't include transaction logs. 2. Restore that data over top of / into the Development DB - good, no data to worry about damaging. 3. The user does NOT want me to do this by hand, but wants it scheduled.
OK: a. I must do a RESTORE WITH FILELISTONLY from [?] - what, master? And if I use the *.bak of production, it has a coded date field in the name entry. SO, I would, I guess, have to generate all sorts of wonderful code to find the date and build a file name - because FROM DISK = 'F:\MSSQL\BACKUP\DBPRODUCTION_yyyyddmm.BAK' is not going to work with a wildcard. Can I do a file lookup using a 'PRODUCTION' prefix into a variable and then use that, or should I look for the latest file date (remember, there are several database backups here), or...?
Then: how does one schedule such T-SQL? Do I save it to some text file and invoke it using a job scheduler?
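A hedged SQL 2000-era sketch of the whole idea: shell out for the newest matching .BAK, build the RESTORE string dynamically, and paste the script straight into a SQL Agent job step (job steps take T-SQL directly, so no text file is needed). The path, file prefix, and logical names Prod_Data/Prod_Log are placeholders for the real layout:

-- Grab the directory listing, newest file first (/o-d = order by date, descending).
CREATE TABLE #files (id int IDENTITY(1,1), fname varchar(260) NULL);
INSERT INTO #files (fname)
EXEC master..xp_cmdshell 'dir /b /o-d F:\MSSQL\BACKUP\PRODUCTION_*.BAK';

DECLARE @file varchar(260), @sql varchar(1000);
SELECT TOP 1 @file = fname
FROM #files
WHERE fname LIKE 'PRODUCTION_%'
ORDER BY id;                        -- first row returned = newest backup

SET @sql = 'RESTORE DATABASE Development '
         + 'FROM DISK = ''F:\MSSQL\BACKUP\' + @file + ''' '
         + 'WITH REPLACE, '
         + 'MOVE ''Prod_Data'' TO ''D:\MSSQL\Data\Development.mdf'', '
         + 'MOVE ''Prod_Log''  TO ''D:\MSSQL\Data\Development_log.ldf''';
EXEC (@sql);

DROP TABLE #files;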
I am able to run the package successfully in the test database, but not in the production database. It throws up an error saying:
Description: Unable to load the package as XML because of package does not have a valid XML format. A specific XML parser error will be posted. Description: Failed to open package file "D:\TAHOE\APPS\SSISPackages\Integration Services Packages\ArchiveMain.dtsx" due to error 0x80070015 "The device is not ready.". This happens when loading a package and the file cannot be opened or loaded correctly into the XML document. This can be the result of either providing an incorrect file name was specified when calling LoadPackage or the XML file was specified and has an incorrect format. End Error Could not load package "D:\TAHOE\APPS\SSISPackages\Integration Services Packages\ArchiveMain.dtsx" because of error 0xC0011002. Description: Failed to open package file "D:\TAHOE\APPS\SSISPackages\Integration Services Packages\ArchiveMain.dtsx" due to error 0x80070015 "The device is not ready.". This happens when loading a package and the file cannot be opened or loaded correctly into the XML document. This can be the result of either providing an incorrect file name was specified when calling LoadPackage or the XML file was specified and has an incorrect format.
How would I write a SELECT statement that returns multiple fields in a record based on a DISTINCT of one of those fields?
Example
Table Name: Sales
Fields: Name, Address, Phone, Zip, Sale

Rec1: Peter Smith, 12 Market St, 999-999-9999, 12345, 99.99
Rec2: John Jones, 73 Broadway, 999-999-8888, 12345, 12.34
Rec3: Charle Brown, 42 Peanuts Ave, 999-999-7777, 12345, 34.56
Rec4: Peter Smith, 12 Market St, 999-999-6666, 12345, 67.89
Rec5: John Jone, 73 Broadway, 999-999-5555, 12345, 36.52
How would I be able to return the columns Name, Address and Phone based on the DISTINCT of Name?
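One common shape for this, sketched under the assumption of SQL Server 2005+ (ROW_NUMBER); on SQL 2000, a self-join on MIN(Phone) per Name achieves the same:

-- One row per distinct Name; the ORDER BY inside the window picks which duplicate wins.
SELECT Name, Address, Phone
FROM (
    SELECT Name, Address, Phone,
           ROW_NUMBER() OVER (PARTITION BY Name ORDER BY Phone) AS rn
    FROM Sales
) AS s
WHERE rn = 1;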
I have a need to dump a subset of a database from the server (SQL Server) to a notebook via the network, for data entry to be done on the notebook when it is in the field & not connected to the network & then the changes made to this data on the notebook to be applied to the database on the server.
The front-end application for this is in Access. Would MSDE be the way to go for the database on the notebook?
It's a small application with not many users, likelihood of conflicting edits is small.
Would the data transfer best be done with replication or with DTS? Presumably replication would allow options for control over conflicts, such as the same bit of data being changed on the server and in the notebook’s copy of the data?
I need guidance re direction to head in with this.
Hi, I was wondering if you guys have any nth script that reads from a table and outputs a subset of records into a temp table. There was an nth tool I used to use, GROUP1, which was written in C and used to be very fast at nth-ing a flat file. In this program we used to pass a few parameters. For example, if I want 30,000 records from a file of 500,000, you divide the 30,000 records by the 500,000, which results in 0.090909090909. We would then pass only the first 7 digits (0909090) as the parameter, and that would nth the file down to 30,000 records. This function always worked, whichever number you used, as long as the read file was larger than the output file. I'd like to use a similar concept in SQL Server, and I was wondering if anyone has any script to do this or knows how to go about it? Thank you. I appreciate your feedback. agron
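A sketch of the same nth idea in T-SQL (ROW_NUMBER needs SQL 2005+; SourceTable and the ordering column SomeKey are placeholders). For ~30,000 out of 500,000 rows, keep every 17th row (500,000 / 30,000 ≈ 16.7):

-- Number the rows in a stable order, then keep every 17th one.
SELECT *
INTO #sample
FROM (
    SELECT t.*,
           ROW_NUMBER() OVER (ORDER BY t.SomeKey) AS rn
    FROM dbo.SourceTable t
) AS numbered
WHERE rn % 17 = 1;          -- yields roughly 500,000 / 17, about 29,400 rows
-- The rn column tags along into #sample; drop it afterwards if it bothers you.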
I have multiple tables with information about a user. The tables are Roles, Users, Groups and Profiles.
For a user session I need information from all those tables. Would it be better to make a table called UserSession and collect the necessary data from the above-mentioned tables and stick it in the one UserSession table, or should I just write a query that goes out and gets the data from the different tables?
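For scale, here is what the query-at-login alternative might look like; every key, link table, and column name below is a guess at the schema, so adjust to taste:

-- Pull one user's session data in a single round trip instead of maintaining
-- a denormalized UserSession copy that can drift out of date.
SELECT u.UserID, u.UserName,
       r.RoleName, g.GroupName, p.DisplayName
FROM Users u
LEFT JOIN UserRoles  ur ON ur.UserID = u.UserID
LEFT JOIN Roles      r  ON r.RoleID  = ur.RoleID
LEFT JOIN UserGroups ug ON ug.UserID = u.UserID
LEFT JOIN Groups     g  ON g.GroupID = ug.GroupID
LEFT JOIN Profiles   p  ON p.UserID  = u.UserID
WHERE u.UserID = @UserID;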
I am wondering if it is possible to use SSIS to sample a data set into a training set and a test set and feed them directly into my data mining models, without saving them somewhere, as they occupy too much space. I really need guidance on that.
We are setting up a test lab environment with 100 machines. We want one master testing DB that gets replicated to each machine to run scripted application tests nightly.
My goal is to minimize the amount of work to move this thing to each of the 100 test machines. I am wondering if we even need to have SQL local, or whether we should invest in a monster DB server with 100 copies of the DB that we restore, with each test machine pointing to its own DB on that server, or if I should use DB mirroring or something to get the master test DB to each of those machines instead.
This is probably an easy solution for some of you seasoned DBA vets, but here is my problem.
I have to take production data and scramble certain sensitive columns, such as SSN, DOB, Address, and First Name, so that our management team can use it as demo material. Is there a quick solution to this issue?
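A quick-and-dirty sketch of in-place scrambling, run on the demo copy (never on production); it de-identifies but is not cryptographically strong, and the table and column names are all assumptions:

-- Randomize SSN and DOB, and overwrite identity-revealing text columns.
UPDATE dbo.Customer
SET SSN       = RIGHT('000000000'
                + CAST(ABS(CHECKSUM(NEWID()) % 1000000000) AS varchar(9)), 9),
    FirstName = 'Demo' + CAST(CustomerID AS varchar(10)),
    Address   = CAST(ABS(CHECKSUM(NEWID()) % 9899) + 100 AS varchar(4)) + ' Sample St',
    DOB       = DATEADD(day, ABS(CHECKSUM(NEWID()) % 3650), '19600101');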
Now that we have a good programming model in SSIS, the question is whether to write automated unit tests for your packages - would it generally be a good idea for packages?
Also - if yes to writing tests - where can I find more information regarding how to accomplish that?