I have a production database that is in the low gigabytes in size and growing steadily. No issue there.
I wish to completely refresh the development database daily on a second server. What is going to be the fastest, easiest way to do this without hindering performance on the production system?
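If plain backup/restore fits, here is a minimal sketch of a nightly refresh job; the database names and paths are hypothetical, and COPY_ONLY needs SQL Server 2005 or later:

-- On the production server (COPY_ONLY leaves the backup chain untouched,
-- and backup is an online operation, so production stays available):
BACKUP DATABASE ProdDB
TO DISK = N'\\devserver\refresh\ProdDB.bak'
WITH COPY_ONLY, INIT;

-- On the development server (REPLACE overwrites yesterday's copy):
RESTORE DATABASE DevDB
FROM DISK = N'\\devserver\refresh\ProdDB.bak'
WITH MOVE N'ProdDB_Data' TO N'D:\Data\DevDB.mdf',
     MOVE N'ProdDB_Log'  TO N'E:\Logs\DevDB.ldf',
     REPLACE;

Both steps can be scheduled as SQL Agent jobs so the refresh runs unattended overnight.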
I have a database approximately 30 GB in size which needs to be moved from one SQL server to another. Does anyone know the most efficient way of doing this, other than backing up to tape?
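Besides backup-to-disk/restore (as sketched above), the other common route is detach/copy/attach, which skips the backup step entirely but takes the database offline during the copy. A minimal sketch, with hypothetical names and paths; CREATE DATABASE ... FOR ATTACH is the 2005+ form, and on SQL 2000 sp_attach_db does the same job:

-- On the source server (the database goes offline here):
EXEC sp_detach_db @dbname = N'MyDB';

-- Copy MyDB.mdf and MyDB_log.ldf to the target server, then attach:
CREATE DATABASE MyDB
ON (FILENAME = N'D:\Data\MyDB.mdf'),
   (FILENAME = N'E:\Logs\MyDB_log.ldf')
FOR ATTACH;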
I have been experimenting with SQL Server 2005 partitions. I loaded a terabyte of information into 2 tables. The first holds the document information and the second holds the actual binary document (in this case pdf). Most of the documents are about 1 megabyte in size, but the largest is 212 megabytes.
SQL Server has no problem storing the blobs. The problem occurs when I attempt to get the data.
I ran some quick tests to see how fast I could pull the documents out. The largest took about 24 seconds. The 1 MB documents are sub-second.
Here is how the 212 meg doc breaks down:
Time to load DataTable: 18.79 seconds
Time to load byte array: 3.84 seconds
Time to write and open document: 0.01 seconds
If I access the file from a file server, the time is 0.04 seconds to begin showing the document.
As you can see, the longest time period is related to retrieving the data from SQL, and it is much slower than launching it from disk across the network. (Note: the SQL server and file server used for the test are next to each other.)
My question is, how can I speed up the access from SQL Server? I believe the keys are "partition aligned". Any suggestions would be appreciated.
I will add the table definitions and partition information as a reply since only 5000 chars are allowed in the post.
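Not an answer on partition alignment, but on the 18.79-second DataTable load: most of that time is spent materializing all 212 MB client-side before the first byte is usable. Streaming the blob in chunks gets bytes flowing immediately instead. A minimal T-SQL sketch of chunked reads, assuming a hypothetical table dbo.Documents(DocId int, DocBlob varbinary(max)):

DECLARE @id int, @offset bigint, @chunk int, @len bigint;
SET @id = 1;            -- hypothetical document id
SET @offset = 1;        -- SUBSTRING on varbinary is 1-based
SET @chunk = 1048576;   -- 1 MB per read

SELECT @len = DATALENGTH(DocBlob) FROM dbo.Documents WHERE DocId = @id;

WHILE @offset <= @len
BEGIN
    -- Each result set is one 1 MB slice; the client stitches them together
    SELECT SUBSTRING(DocBlob, @offset, @chunk)
    FROM dbo.Documents
    WHERE DocId = @id;

    SET @offset = @offset + @chunk;
END

On the client side, reading with a forward-only, sequential-access data reader instead of filling a DataTable avoids the up-front buffering either way.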
I have a 50 GB OLTP production database that currently takes about 50 minutes to back up (a normal SQL flat-file backup to disk).
This database will grow to roughly a terabyte by next year.
My major concern is how I will be able to back up this DB in 2 hours or less when it is that big.
I have been checking out my options in terms of SAN snapshots/clones, as well as multiple backup devices and a differential/filegroup/full backup strategy.
What I want to know is: if anyone out there is backing up VLDBs, what strategies/methods/tools are you using? Even third-party tools for faster, smaller backups?
Any pointers/best practices for VLDB backups would be greatly appreciated.
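One concrete native-backup lever worth testing before buying anything: striping the backup across several files on separate spindles, with differentials between fulls. A minimal sketch, with hypothetical names and paths:

-- Weekly full, striped across four devices (throughput scales with spindles):
BACKUP DATABASE MyVLDB
TO DISK = N'G:\Backup\MyVLDB_1.bak',
   DISK = N'H:\Backup\MyVLDB_2.bak',
   DISK = N'I:\Backup\MyVLDB_3.bak',
   DISK = N'J:\Backup\MyVLDB_4.bak'
WITH INIT, STATS = 10;

-- Nightly differential (only extents changed since the last full):
BACKUP DATABASE MyVLDB
TO DISK = N'G:\Backup\MyVLDB_diff.bak'
WITH DIFFERENTIAL, INIT;

Filegroup backups let you go further and rotate through pieces of the database across nights; third-party compression tools mainly shrink the write volume, which helps once I/O is the bottleneck.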
We have the bulk copy option enabled for our DB and we really use it. Would it be possible to set up snapshot replication over the Internet of particular tables to a remote server, from which the data will only be retrieved and never changed? Also, is it necessary to have PKs in all tables for this one-way snapshot replication? (It is required for transactional replication, as far as I know.)
I am trying to test a replication or at least get the feel for it on my local copy before I set it up on the real server.
Can I do that by setting my local database as the publisher as well as the subscriber?
I am getting an error message but I am wondering if there are any settings I can change to make this work. Or any other ideas of how to see the replication tool before actually doing it live?
Error message: "Server 'MyComputerName' is neither a Publisher nor a Distributor, or you do not have permission to access replication functionality on this server."
I might add that I am not only new to the project but also new to MS SQL Server.
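For what it's worth, a single instance can act as its own Distributor, Publisher, and Subscriber; that error usually just means the Distributor hasn't been configured yet. A minimal sketch of configuring it in T-SQL (the Replication > Configure Distribution wizard does the same thing); the password is hypothetical:

DECLARE @srv sysname;
SET @srv = @@SERVERNAME;

-- Make this instance its own Distributor, create the distribution
-- database, and register the instance as a Publisher that uses it:
EXEC sp_adddistributor @distributor = @srv, @password = N'StrongP@ssw0rd1';
EXEC sp_adddistributiondb @database = N'distribution';
EXEC sp_adddistpublisher @publisher = @srv, @distribution_db = N'distribution';

After that, creating a publication and subscribing your own database to it should work for a local end-to-end test.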
I have an application that I wrote that is running in the local office and a remote office. The two offices are connected via a hardware VPN. The connection in the remote office is wireless and can give speeds down to 40kbps.
Each office is running MSDE 2000 and runs off of a separate database with a different name. I would like to have the database from the remote office available in the local office. It doesn't have to be completely current. A 24-hour delay would be fine. Since Transaction replication is not available in MSDE, I can use either Merge or Snapshot. Since the local office wants to allow folks to access the remote office's database without allowing them to affect the remote database (query purposes only), it seems that Snapshot is the way to go.
The database in the remote office is as follows: Data File - 50MB; Log File - 5MB. I don't expect this to grow very fast.
The question I have relates to performance over this slow link. Would I be better off using Snapshot replication or just creating a DTS package and having that run on a nightly basis to copy the database?
Also, with a DTS Package, if the job fails due to the link resetting (remember it is wireless), I would have to configure retries, etc. Would Snapshot replication automatically recognize this failure and try to run again?
I have a problem regarding transactional replication.
Let me explain my scenario.
I'm doing transactional replication between two databases.
When the publisher and subscriber are created, the data gets bulk copied from the publisher table to the subscriber table.
My main intention was to create replication between different tables with different fields, in which I succeeded.
But the main problem is that I want to stop this bulk copy from publisher to subscriber.
Scenario 1: my subscriber table may contain some previous data, which would be replaced with publisher data due to the bulk copy. I don't want this. I want to avoid the bulk copy and instead have procedures (for insert, update, and delete transactions) created in the subscriber which will take care of replication.
I have achieved almost everything, but I am not able to avoid this bulk copy during the creation of the subscriber.
As far as I know, the only way I can stop the bulk copy is by creating the subscription without the initial snapshot. But then the procedures (for insert, update, and delete transactions) won't get created in the subscriber.
Please help me with the above scenario; I need it urgently.
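If I've read the scenario right, this sounds like the documented @sync_type = 'replication support only' option: the subscription is created without the initial snapshot/bulk copy, but the insert/update/delete procedures are still generated at the subscriber. A minimal sketch with hypothetical names:

EXEC sp_addsubscription
    @publication       = N'MyPublication',
    @subscriber        = N'MySubscriberServer',
    @destination_db    = N'SubscriberDB',
    @subscription_type = N'push',
    @sync_type         = N'replication support only';

This assumes the subscriber tables already contain whatever data you want to keep; replication then delivers only changes from that point on.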
Is there an easy way to copy a replication publication definition from one server to another? In our environment we have a development environment, a testing environment, and soon a production environment. Right now keeping the replication definition in sync between the development and testing environments is a manual process. Are there tools that can diff this, or a way to keep it in sync?
For example, when we change what information gets replicated in the development environment, I need a way to propagate that change to the testing environment.
I know I can generate scripts from within SQL Server Management Studio, but the scripts it creates are very machine-specific. So, short of going through them and modifying the server names to match the different environments, is there a better way?
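One approach that may help: keep the generated script, replace the machine-specific names with SQLCMD variables, and run it in SQLCMD mode (Query > SQLCMD Mode in SSMS) once per environment. A hedged sketch with hypothetical names:

:setvar Subscriber   "TESTSQL01"
:setvar SubscriberDb "MyDb_Test"

EXEC sp_addsubscription
    @publication    = N'MyPublication',
    @subscriber     = N'$(Subscriber)',
    @destination_db = N'$(SubscriberDb)';

The same script then becomes the single source of truth for all environments, which also gives you something diffable under source control.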
I heard that SQL Server 7.0 has problems when the database reaches 50-100 GB, in areas such as backup, transaction logging, and database administration, and that by 100 GB parallel queries are also affected.
Is this true? Where can I get information on this?
Does anyone have experience/advice with large databases (5-10 GB)? If so, I was wondering about the performance and other benefits of spanning a large database across multiple devices (different disks). Would anyone vote for or against doing this?
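For what it's worth, in 7.0-and-later syntax the old "devices" become files and filegroups, and spreading them over separate physical disks mainly buys parallel I/O and easier growth management. A minimal sketch with hypothetical names and paths:

CREATE DATABASE SalesDb
ON PRIMARY
    (NAME = Sales_sys,   FILENAME = N'D:\Data\Sales_sys.mdf',   SIZE = 200MB),
FILEGROUP DataFG
    (NAME = Sales_data1, FILENAME = N'E:\Data\Sales_data1.ndf', SIZE = 2GB),
    (NAME = Sales_data2, FILENAME = N'F:\Data\Sales_data2.ndf', SIZE = 2GB)
LOG ON
    (NAME = Sales_log,   FILENAME = N'G:\Logs\Sales_log.ldf',   SIZE = 500MB);

Putting the log on its own spindle is usually the first win, since log writes are sequential and data access is random.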
I'm running SQL 7.0 SP3 on two different machines (one with additional hotfixes). I'm taking a nightly snapshot of imported data on Server1 and pushing it out to another SQL 7.0 server on our network, Server2. All but one table is copied successfully. On the final table, I receive the message, "The process could not bulk copy into table '"%"'." Error Information Category: Data Source, Source: Server2, Number 4813.
Full error message: "Expected the text length in data stream for bulk copy of text, ntext, or image data."
I've looked up 4813, but it's pretty ambiguous/generic. Also, when I SELECT from Server1 and INSERT INTO Server2 in the QA, I receive no errors. Does anyone have any insight?
I want to replicate the foreign keys to the secondary, so I changed the "Copy foreign key constraints" value to True.
I changed this value at publication properties - Articles.
It then asked to mark the subscription for reinitialization with a new snapshot, and I clicked OK.
When I checked the sync status, it gave a message like "initial snapshot is not yet available." I started the Snapshot Agent and the subscription started replicating records. But when I check the publication properties, the "Copy foreign key constraints" value is False again.
After changing the value to True, it still shows as False.
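If the GUI keeps reverting the setting, it may be worth setting the article's schema_option bitmask directly; from memory the foreign-key bit is 0x200, but please verify against the sp_addarticle documentation for your version. A hedged sketch with hypothetical names (the @value shown assumes the article currently uses the default 0x03):

EXEC sp_changearticle
    @publication = N'MyPublication',
    @article     = N'MyTable',
    @property    = N'schema_option',
    @value       = N'0x0000000000000203',  -- existing options OR'd with 0x200 (FK constraints)
    @force_invalidate_snapshot = 1,
    @force_reinit_subscription = 1;

A new snapshot is required afterwards, which matches the reinitialization prompt you already saw.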
I need to manage the negative performance implications of defragmenting a 1 TB+ DB. I want to perform an index reorganization if fragmentation is no higher than 30%, and an index rebuild if the fragmentation exceeds 30%.
Firstly, can anyone recommend a script which uses the sys.dm_db_index_physical_stats DMV to ascertain the fragmentation level? Secondly, is there a technique I can employ to prevent the ONLINE operation from completely killing performance on a 24/7 production system?
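Here is a minimal sketch of such a script, assuming SQL Server 2005+ and that it runs inside the database in question; the 30% threshold matches your rule. Note that ONLINE = ON requires Enterprise Edition and, on 2005, is not allowed for indexes containing LOB columns:

DECLARE @schema sysname, @table sysname, @index sysname,
        @frag float, @sql nvarchar(max);

DECLARE frag_cur CURSOR FAST_FORWARD FOR
    SELECT s.name, t.name, i.name, ps.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
    JOIN sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
    JOIN sys.tables  t ON t.object_id = ps.object_id
    JOIN sys.schemas s ON s.schema_id = t.schema_id
    WHERE ps.avg_fragmentation_in_percent > 5   -- ignore nearly-clean indexes
      AND ps.page_count > 1000                  -- ignore tiny indexes
      AND i.name IS NOT NULL;                   -- skip heaps

OPEN frag_cur;
FETCH NEXT FROM frag_cur INTO @schema, @table, @index, @frag;
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @frag > 30.0
        SET @sql = N'ALTER INDEX ' + QUOTENAME(@index) + N' ON '
                 + QUOTENAME(@schema) + N'.' + QUOTENAME(@table)
                 + N' REBUILD WITH (ONLINE = ON);';
    ELSE
        SET @sql = N'ALTER INDEX ' + QUOTENAME(@index) + N' ON '
                 + QUOTENAME(@schema) + N'.' + QUOTENAME(@table)
                 + N' REORGANIZE;';              -- always online, interruptible
    EXEC sp_executesql @sql;
    FETCH NEXT FROM frag_cur INTO @schema, @table, @index, @frag;
END
CLOSE frag_cur;
DEALLOCATE frag_cur;

On the second question: REORGANIZE can be stopped and resumed at any time without losing work, so one common tactic on 24/7 systems is to reorganize during quiet hours and spread the heavier online rebuilds across windows, one index at a time.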
I've got a few VLDBs that we want to make smaller. Since the tables are running on legacy stuff, all of it's basically made with ints and chars, and it's horribly inefficient.
The problem I came across is that when I made a new table with the best data types and copied the data from the old table, the new table was exactly the same size (excluding the index size). It was estimated that a total of ~20 GB would be saved with this change. As it turned out, 0 bytes were saved by the data type changes.
Why are the two tables the same size, even though one has much more efficient data types?
If you want more information about the table I'm using:
391 columns. 50,147,035 rows. 65,295.625 MB in size.
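Two hedged guesses worth checking. First, if the old columns are fixed-length char and you copied them straight into varchar, the trailing blanks came along, so each value still occupies the same bytes; trim during the copy. Second, page-level free space isn't reclaimed until the table and its indexes are rebuilt. Table and column names here are hypothetical:

-- Trim fixed-length padding while copying into the new, narrower types:
INSERT INTO dbo.NewTable (Col1, Col2)
SELECT RTRIM(Col1), Col2
FROM dbo.OldTable;

-- Then rebuild so pages are compacted and sp_spaceused reports fresh numbers:
ALTER INDEX ALL ON dbo.NewTable REBUILD;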
I'm trying to setup transaction replication between 2 servers. This is a one-way replication: Server A to Server B, not Server B to Server A.
I am able to replicate all the tables except one. I added commands to the agent so that it would create an output file, possibly with more or better information.
Here is a portion of the error output causing the failure:
Agent message code 20037. The process could not bulk copy into table '"tblSuppContractFee"'. [5/5/2006 8:02:10 PM]01sqlft003.distribution: {call sp_MSadd_distribution_history(4, 6, ?, ?, 0, 0, 0.00, 0x01, 1, ?, 6, 0x01, 0x01)} Adding alert to msdb..sysreplicationalerts: ErrorId = 65, Transaction Seqno = 000075400000ff9b000b00000002, Command ID = 6 Message: Replication-Replication Distribution Subsystem: agent 01sqlft003-EDGE-01SQLFT004-4 failed. The process could not bulk copy into table
[5/5/2006 8:02:10 PM]01SQLFT004.EDGE_REPLICATION: exec dbo.sp_MSupdatelastsyncinfo N'01sqlft003',N'EDGE', N'', 0, 6, N'The process could not bulk copy into table ''"tblSuppContractFee"''.'
Can somebody help me find a solution for this error? I don't see any error text, and there are no resources available for the error code appearing in the log file.
I have been using AlwaysOn AGs for a long time now and currently have about 10 TB of data across 120 databases and 3 AG groups for any application that is on SQL 2012, with great success. Each AG group is running on patch level 11.0.5058.0 with 2 synchronous replicas (on different SANs) in the primary data center and 1 asynchronous replica in DR. Migration has been a non-issue because none of the databases were substantial enough that they could not fit into my maintenance window, which is 12-4 AM on Saturday morning.
My issue is that my last application to migrate to 2012 includes a 4 TB TDE-encrypted database, which is about 10x larger than any of the previous ones I have migrated. The database takes 4 hours to back up after extensive tuning (I hate TDE!!).
The restore to the primary replica is quick because of incremental seeding, but the issue comes from having to back up the database before adding it to the availability group. 4 hours is my exact outage window and I can't get any more. My plan to migrate the application is:
First outage window:
1) Restore the database from 2008 to the 2012 primary replica
2) Change the application's A record (or CNAME, not sure which) to the primary replica
3) Run the database on a single node until the next outage window

A week later:
1) Add the database to the availability group
2) Change the A record/CNAME to the listener
What I don't like about this is that I am going an entire week with 1 node instead of 3, which is worrisome. I would love to hear how you would accomplish this, or any comments from people who have worked with VLDBs in availability groups and what you liked/hated/loved about doing it. I am trying to go all in on this software and have loved it so far, but I'm getting worried when it comes to the VLDB migration.
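For the second window, the join itself may not need another 4-hour outage backup if a COPY_ONLY full plus a log backup, both taken while the database stays online, are used to seed the secondaries ahead of time. A minimal sketch of the usual sequence (AG name, database name, and paths hypothetical; the TDE certificate must already be restored on each secondary or the restores will fail):

-- On the primary (both are online operations):
BACKUP DATABASE BigTdeDb TO DISK = N'\\share\BigTdeDb.bak' WITH COPY_ONLY;
BACKUP LOG BigTdeDb TO DISK = N'\\share\BigTdeDb.trn';

-- On each secondary:
RESTORE DATABASE BigTdeDb FROM DISK = N'\\share\BigTdeDb.bak' WITH NORECOVERY;
RESTORE LOG BigTdeDb FROM DISK = N'\\share\BigTdeDb.trn' WITH NORECOVERY;

-- On the primary:
ALTER AVAILABILITY GROUP [AG1] ADD DATABASE BigTdeDb;

-- On each secondary:
ALTER DATABASE BigTdeDb SET HADR AVAILABILITY GROUP = [AG1];

Note that on 2012 a TDE database can't be added through the AG wizard, but the T-SQL route above works; only the tail of the log needs to be caught up inside the window.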
The column I'm adding needs to be part of the clustered PK (it will be the last of three columns) so I need to recreate all the indexes.
My DB is set to the FULL recovery model with ALLOW_SNAPSHOT_ISOLATION ON. I've tried two methods so far.
Method 1:
BEGIN TRANSACTION
CREATE TABLE dbo.Tmp_copyoftablewithnewfield ( ) ON PRIMARY
IF EXISTS (SELECT * FROM dbo.originaltable)
    EXEC('INSERT INTO dbo.Tmp_copyoftablewithnewfield (<original fields>)
          SELECT <original fields> FROM dbo.originaltable WITH (HOLDLOCK TABLOCKX)')
GO
DROP TABLE dbo.originaltable
GO
EXECUTE sp_rename N'dbo.Tmp_copyoftablewithnewfield', N'originaltable', 'OBJECT'
GO
<recreate PK constraint>
<rebuild indexes>
COMMIT
Pros: lets me add the new field in the spot I'd like it (not a big deal). Cons: tons of wasted space and time. It took about 15 hours.
Method 2:

SET XACT_ABORT ON
GO
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
<drop PK constraint>
<drop indexes>

ALTER TABLE [dbo].[originaltable]
ADD [newfield] [tinyint] NOT NULL
CONSTRAINT [DF_originaltable_newfield] DEFAULT ((1))
Pros: no making a copy of the entire table, which takes up 200 GB more space in the DB data file. Cons: my tempdb grew to accommodate the row-versioning info for every row in the 200 GB table. It took over 30 hours.
A lot of time and disk space is wasted with both.
Since the DB is going to be unavailable to users I have some flexibility here. I was considering turning ALLOW_SNAPSHOT_ISOLATION OFF, trying Method 2 again (which should stop the versioning in tempdb), and then turning it back on.
I was also curious whether setting the database recovery model to SIMPLE would cut down on log usage; I could then set it back to FULL when done.
Do these really need to be in a transaction? If there's some hardware failure or something unexpected I can just restore from backup and do the conversion again. If the presence of the transaction itself is causing more disk usage for logging or any other slowdown, I think I'd rather do without.
Given the amount of time this conversion takes, I wanted to get some feedback other than "just try it" before doing any new tests.
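For what it's worth, a hedged sketch of the settings dance around Method 2. Turning ALLOW_SNAPSHOT_ISOLATION off should stop the tempdb version-store growth, and SIMPLE (or BULK_LOGGED) lets the log truncate at checkpoints, though the ALTER and rebuild work itself is still logged. The SIMPLE/FULL switch breaks the log backup chain, so a fresh full backup afterwards is required. Database name hypothetical:

ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION OFF;
ALTER DATABASE MyDb SET RECOVERY SIMPLE;

-- ... run the column add / index rebuilds here ...

ALTER DATABASE MyDb SET RECOVERY FULL;
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Restart the log chain that the recovery-model switch broke:
BACKUP DATABASE MyDb TO DISK = N'D:\Backup\MyDb_after_change.bak' WITH INIT;

On the transaction question: if restoring from backup and redoing the conversion is an acceptable failure plan, dropping the single wrapping transaction avoids holding one enormous open transaction, which is what pins the log and the version store for the whole run.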
Hi, I'm a Jr. DBA and have been given an assignment by my lead to find information on the following. We are to migrate an existing DB of 4 TB to a Dell PowerEdge 2950 (memory: up to 32 GB); OS: Windows Server 2003 Std Edition x64 SP2; DB: SQL Server Enterprise Edition x64.
I am to find out how to design the DB to provide optimum performance and failover, and to account for the growth of the DB.
1) What would be the recommended RAID settings?
2) Placement of tempdb?
3) Should we do clustering, and why?
4) What would data partitioning do to help?
5) Any other aspects to be considered for sizing the DB?
6) Placement of data files and log file on separate physical disks?
7) Indexing?
I have read many sites. I would appreciate it if someone could write suggestions and opinions based on their current DB design spec or previous experience, selecting the best DB design points. Thank you.
Hi, I would like to delete data from a 750-million-row table in chunks of 10,000, without blocking the users. As ours is a 24/7 shop, I do not want to block the users for a long time. An answer to this would be highly appreciated. Thanks, Samna
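A minimal sketch of the usual batched-delete loop, with hypothetical table and filter names. DELETE TOP (n) needs SQL Server 2005+; on 2000, use SET ROWCOUNT 10000 before the DELETE instead. Each small batch commits on its own, so locks are held only briefly:

DECLARE @cutoff datetime, @rows int;
SET @cutoff = '20060101';   -- hypothetical purge criterion
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000)
    FROM dbo.BigTable
    WHERE CreatedDate < @cutoff;

    SET @rows = @@ROWCOUNT;

    -- Optional breather so concurrent work can grab locks between batches:
    WAITFOR DELAY '00:00:01';
END

In FULL recovery, frequent log backups during the purge keep the transaction log from ballooning.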
I set up DB mirroring between a primary (SQL1) and a mirror (SQL2); no witness. I have a problem when I issue this command:
alter database DBmirrorTest
Set Partner = N'TCP://SQL2.mycom.com:5022';
go
The error message is:
The remote copy of database "DBmirrorTest" has not been rolled forward to a point in time that is encompassed in the local copy of the database log.
I have the steps below prior to the command. (Note that both servers' service accounts use the same domain account. The domain account I login to do db mirror setup is a member of the local admin group.)
1. backup database DBmirrorTest on SQL1
2. backup database log
3. copy db and log backup files to SQL2
4. restore db with norecovery
5. restore log with norecovery
6. create endpoints on both SQL1 and SQL2
CREATE ENDPOINT [Mirroring]
STATE=STARTED
AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
FOR DATABASE_MIRRORING (ROLE = PARTNER)
7. enable mirror on mirror server SQL2
:connect SQL2
alter database DBmirrorTest
Set Partner = N'TCP://SQL1.mycom.com:5022';
go
8. Enable mirror on primary server SQL1
:connect SQL1
alter database DBmirrorTest
Set Partner = N'TCP://SQL2.mycom.com:5022';
go
This is where I got the error.
The remote copy of database "DBmirrorTest" has not been rolled forward to a point in time that is encompassed in the local copy of the database log.
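That error usually means the mirror is missing a log restore covering up to the current point: a log backup taken on SQL1 after the full backup must be restored on SQL2 WITH NORECOVERY before SET PARTNER will succeed, with no further log backups taken in between. A minimal sketch of the prepare sequence (paths hypothetical):

-- On SQL1:
BACKUP DATABASE DBmirrorTest TO DISK = N'\\share\DBmirrorTest.bak' WITH INIT;
BACKUP LOG DBmirrorTest TO DISK = N'\\share\DBmirrorTest.trn' WITH INIT;

-- On SQL2:
RESTORE DATABASE DBmirrorTest FROM DISK = N'\\share\DBmirrorTest.bak' WITH NORECOVERY;
RESTORE LOG DBmirrorTest FROM DISK = N'\\share\DBmirrorTest.trn' WITH NORECOVERY;

-- Then step 7 (SET PARTNER on SQL2) followed by step 8 (SET PARTNER on SQL1).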
Hi! I did:

alter database mydb set single_user with rollback immediate;
exec sp_detach_db @dbname = 'mydb', @keepfulltextindexfile = 'true';
Then I tried to copy the files to a new location on other drives on the same server, but got: "Cannot copy <myfile>: Access is denied. Make sure the disk is not full or write-protected and that the file is not currently in use."
I also tried renaming the file, without success. I also tried with the DB service stopped (not preferred), without success.
How can I find out which process is locking the files? Best regards
If I have a given database (a model) and I want to copy this database within the same database instance, is it OK to copy the .mdf and .ldf files and attach them under a new database name in the same instance?
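Broadly yes, with the catch that the files can't be copied while the database is attached; detach first, copy, then attach the copies under the new name. A minimal sketch with hypothetical names and paths:

EXEC sp_detach_db @dbname = N'ModelDb';

-- Copy ModelDb.mdf / ModelDb_log.ldf to new file names, re-attach the
-- original, then attach the copies as a new database:
CREATE DATABASE ModelDbCopy
ON (FILENAME = N'D:\Data\ModelDbCopy.mdf'),
   (FILENAME = N'E:\Logs\ModelDbCopy_log.ldf')
FOR ATTACH;

The logical file names inside the copy keep their old values; ALTER DATABASE ... MODIFY FILE can rename them if that matters.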
I am attempting to use the copy wizard to copy databases from SQL Server 2005 to SQL Server 2008 R2 w/ FP1.
The copy fails with a login failure to SQL Server 2005. I have a user ID and password under Windows for both servers. I have a user ID and password under SQL security with the required admin rights.
The 2005 server has two instances, 20 databases, two dozen maintenance plans, and over a hundred users. I really would like to use the utility so I don't have to recreate everything manually.
Before implementing a memory-based bulk copy insert with the IRowsetFastLoad interface of the SQL Server 2005 OLE DB provider, I want to know some considerations:
- Performance: compared with T-SQL's "BULK INSERT ..." and the bcp utility.
- SQL Server resource usage: the influence on server resources when running a memory-based bulk copy.
- Server-side behavior: when the server is busy, does delayed update mean IRowsetFastLoad::Commit(true) can still insert right away?
- Row count: is there a limit on the number of rows that can be inserted with IRowsetFastLoad::InsertRow() before IRowsetFastLoad::Commit?
Hi~, I have 3 questions about memory-based bulk copy.
1. What is the limit on the number of IRowsetFastLoad::InsertRow() calls before IRowsetFastLoad::Commit(true)? For example, how many rows can be inserted in the sample below (i.e., the maximum value of nCount)?

for (i = 0; i < nCount; i++)
{
    pIFastLoad->InsertRow(hAccessor, (void*)(&BulkData));
}

2. In the above code sample, isn't there a method for inserting a prepared array all at once (the BulkData array directly, not a for loop)?
3. In OLE DB memory-based bulk copy, what is the equivalent of the T-SQL bulk copy option below?

BULK INSERT database_name.schema_name.table_name
FROM 'data_file'
WITH (ROWS_PER_BATCH = rows_per_batch, TABLOCK);
-------------------------------------------------------
My solution is like this. Is it correct?

// CoCreateInstance(...);
// Data source
// Create session
I'm getting this after upgrading from 2000 to 2005: "Replication-Replication Distribution Subsystem: agent (null) failed. The subscription to publication '(null)' has expired or does not exist." The only suggestion I've seen is to drop all subscriptions. Since we have several dozen publications to several servers, is there a decent way to script it all out, if that's the only option? Thanks in advance.
Hi, I have transactional replication set up on one of our MS SQL 2000 (SP4) Standard Edition database servers. Because of an unfortunate scenario, I had to restore one of the publication databases. I scripted the replication module and dropped the publication first, then did a full restore. When I try to set up the replication through the script, it creates the publication with the following error message:

Server: Msg 2714, Level 16, State 5, Procedure SYNC_FCR To GPRPTS_GL00100, Line 1
There is already an object named 'SYNC_FCR To GPRPTS_GL00100' in the database.

It seems the previous replication set up these system views (SYNC_FCR To GPRPTS_GL00100). I have tried dropping the replication module again to see if it drops the views, but it didn't. The replication fails with some weird error and complains about these views when I try to run the sync. I even tried running sp_removedbreplication to drop the replication module, but the views do not seem to disappear.

My question is: how do I remove these system views, or how do I make the replication work without using these views, or create new ones? Why is it creating those system views in the first place?

I would appreciate it if anyone can help me fix this issue. Please feel free to let me know if any additional information or scripts are needed. Thanks in advance. Regards, Aravin Rajendra.
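A hedged cleanup sketch: the leftover objects can be found and dropped by name (the embedded spaces require brackets); the view name below is taken from your error message:

-- List the leftover replication sync views:
SELECT name FROM sysobjects WHERE type = 'V' AND name LIKE 'SYNC_FCR%';

-- Drop each one found, e.g.:
DROP VIEW [SYNC_FCR To GPRPTS_GL00100];

With those gone, re-running your publication script should no longer hit Msg 2714.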
My production box is running SQL 7.0 with merge replication, and I want to add one more table, and also add one more column to an existing replicated table. Can anybody guide me on how to do this? It is very urgent. Regards, Don
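A hedged sketch for the first part, adding a table as a new article to an existing merge publication (names hypothetical); afterwards rerun the Snapshot Agent so subscribers pick up the new article:

EXEC sp_addmergearticle
    @publication   = N'MyMergePub',
    @article       = N'NewTable',
    @source_object = N'NewTable';

For the second part, sp_repladdcolumn exists from SQL 2000 onward; on 7.0 the usual route is to change the schema and reinitialize the publication.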
DBCC OPENTRAN shows "REPLICATION" on a server that is not configured for replication. The transaction log is almost as large as the database (40GB) with a Simple recovery model. I would like to find out how the log can be truncated in such a situation.
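One commonly cited last-resort sequence for a stale replication mark pinning the log (verify against the documentation for your version before running it in production): sp_repldone with the reset flag, executed in the affected database, followed by removal of any leftover replication metadata. Database name hypothetical:

-- Mark all pending replicated transactions as already distributed:
EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL,
                 @numtrans = 0, @time = 0, @reset = 1;

-- Clear out any leftover replication metadata in the database:
EXEC sp_removedbreplication N'MyDb';

-- Under the SIMPLE recovery model, a checkpoint then lets the log truncate:
CHECKPOINT;

If sp_repldone complains that the database is not published, the metadata removal alone sometimes clears the mark; either way, check DBCC OPENTRAN again afterwards.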