I have noticed a question on the Admin exam which involves declaring and using a time variable for scheduling a backup or dump of a database. Is this possible, and has anyone else seen it on the exam or used it in practice?
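For what it's worth, here is a minimal sketch of what such a question is probably getting at. As far as I know, DUMP DATABASE won't take a variable directly, so you declare the time variable, test it, and EXEC the statement as a string. Everything here (the device name nightly_dump, the 6 PM cutoff) is made up for illustration:

    -- Sketch only: gate a dump on a declared time variable (names are hypothetical)
    DECLARE @now datetime, @cmd varchar(255)
    SELECT @now = GETDATE()
    IF DATEPART(hh, @now) >= 18  -- only dump after 6 PM, say
    BEGIN
        SELECT @cmd = 'DUMP DATABASE pubs TO nightly_dump WITH INIT'
        EXEC (@cmd)
    END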
Can anyone tell me where I can find information about why SQL Server might be executing a stack and/or symptom dump? That is, what are some of the conditions that might cause this? Every time I try to run DBCC DBREINDEX on a certain table, stack and symptom dumps occur and SQL Server remains incommunicado (I can't connect, even locally) until I restart it. I'm going to try to fix the table by renaming it, creating a new table (and dependent objects), transferring the data, and toasting the original. No problem, I've done that before with smashing success. But if this solves my problem, I'll still be no wiser about what caused it in the first place. Any suggestions, questions, comments, or even jeers would be appreciated. Thanks, Chris
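For anyone wanting the recipe, the rebuild I have in mind goes roughly like this (a sketch; bad_table and its columns are placeholders for the real thing, and you'd also recreate indexes, triggers, and constraints before dropping the old copy):

    -- Move the problem table aside
    EXEC sp_rename 'bad_table', 'bad_table_old'
    -- Recreate it fresh (plus its dependent objects)
    CREATE TABLE bad_table (id int NOT NULL, descr varchar(50) NULL)
    -- Carry the data across
    INSERT INTO bad_table (id, descr)
        SELECT id, descr FROM bad_table_old
    -- When everything checks out, toast the original
    DROP TABLE bad_table_old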
I need to set up an hourly dump of my main database's transaction log that doesn't overwrite until the 8th hour (keeping 7 log dumps). Can anyone tell me how to set up the scheduled tasks for the transaction log dump? Is there anything special I need to do for the main database dump itself?
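In case it clarifies what I'm after, this is the sort of hourly task body I'm imagining (a sketch only; it assumes seven dump devices logdump0 through logdump6 created beforehand with sp_addumpdevice, and maindb stands in for the real database name):

    -- Rotate hourly log dumps across 7 devices, overwriting the oldest
    DECLARE @dev varchar(30), @cmd varchar(255)
    SELECT @dev = 'logdump' + CONVERT(varchar(1), DATEPART(hh, GETDATE()) % 7)
    SELECT @cmd = 'DUMP TRANSACTION maindb TO ' + @dev + ' WITH INIT'
    EXEC (@cmd)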
Hi: Actually, we are having problems doing database dumps, so we're copying the .DAT files directly to a backup machine. I mean the .DAT files that SQL Server uses as its data devices, not dump files; we cannot execute the dump process at all.
What problems can we expect from doing this? And what could be the reasons we cannot dump our DBs?
Our SQL Server is giving us a headache: after a certain period of time, the SQL service either shuts down by itself or hangs. I've opened the logs and found hex dumps. Can you help me out with these?
I've read that a SQL dump can only be written directly on the local server. Is there a way to put the dump directly on another server via the network (not copying it afterwards, but writing it there during the backup process)? Any help much appreciated.
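One thing I'm considering (just a sketch; the server and share names are made up, and I gather the MSSQLServer service must run under a domain account with write access to the share, not LocalSystem) is pointing a dump device at a UNC path, so the dump is written across the network as the backup runs:

    -- Dump device on a remote share; written over the network during the backup
    EXEC sp_addumpdevice 'disk', 'netdump', '\\backupsrv\sqldumps\maindb.dmp'
    DUMP DATABASE maindb TO netdump WITH INIT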
We are experiencing high CPU utilization across all 4 CPUs at the top of the hour, when our transaction log dump job runs. Has anyone observed this behavior before? Is there anything we can do to mitigate it? Thank you.
I am testing a procedure to automate the transaction log dump. I am following the steps in Chapter 22 of the Microsoft SQL Server Administrator's Companion ("Automatic Transaction Log Dumps Using Performance Monitor"). The alert in Performance Monitor appears to fire when the log is 75% full, but it is not running the file that contains the DUMP TRANSACTION command. For the 'Run Program on Alert' box, this is what I have: isql -Ssvrname -Usa -P -i d:\apps\mssql\inn\dump.sql. The dump.sql file contains: 'dump transaction pubs with no_log'
I have also tried the following 4 steps: 1) created a SQL alert message, 2) created an NT Performance Monitor threshold alert to run sqlalrtr and raise a certain error when the pubs log is 75% full, 3) created a TSQL task, and 4) created a SQL Server alert to run the task created in step 3. This appears to do the same thing: the alert is fired off, but the task is never executed. Note: I am able to execute the task from within the Scheduled Tasks window.
I am using Standard Security with SQL Server 6.5 (SP5a) running on NT4. Thanks for your help in advance.
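In case it helps anyone reproduce this, here is the setup spelled out, with the -i flag separated from the file name and an -o output file added so I can tell whether the command ever runs at all (the paths are placeholders for my real ones):

    REM 'Run Program on Alert' command line
    c:\mssql\binn\isql -Ssvrname -Usa -P -i d:\apps\mssql\inn\dump.sql -o d:\apps\mssql\inn\dump.out

    -- Contents of dump.sql
    DUMP TRANSACTION pubs WITH NO_LOG

If dump.out never appears, Performance Monitor is not launching the program at all, which would point at the alert setup rather than the SQL.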
I am new to Microsoft SQL Server, as I come from an Oracle background. I am preparing for the MCP certification Exam 70-229, Designing and Implementing Databases with Microsoft SQL Server 2000 Enterprise Edition. The exam covers SQL Server 2000; is there an exam for SQL Server 2005?
Is it the right exam to take when starting as a fresher on the Microsoft platform? I am into development, not administration.
If anybody has any exam preparation questions, please forward them to krishna.kanigelpula@gmail.com.
How can I check to make sure that my dumps are not corrupted? I have been using a utility from Microsoft called DSCAN5, but I have found that it has some limitations.
Does anybody know how to automate the loading of incremental transaction dumps? The manual way is to use the "load tran DB with FILE = x" statement.
Since this has to be done in the right sequence, and I need to automate the task to keep a second server up to date, I'd like to know if there is a stored procedure or any other tool that could do it.
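To make it concrete, this is the kind of loop I'd hope to replace with something ready-made (a sketch under my own assumptions: the incremental dumps all sit on one device, logdumpdev, as files 1 through 7, and maindb is the database being kept up to date):

    -- Replay incremental transaction dumps in sequence
    DECLARE @f int, @cmd varchar(255)
    SELECT @f = 1
    WHILE @f <= 7
    BEGIN
        SELECT @cmd = 'LOAD TRAN maindb FROM logdumpdev WITH FILE = ' + CONVERT(varchar(3), @f)
        EXEC (@cmd)
        SELECT @f = @f + 1
    END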
I have a package that accesses a DB2 database and pulls data from a single table. I can't tie it to a specific event, but the package has been causing a dump on a fairly regular basis. The really odd part is that sometimes, when I add a data viewer on the output link of the OLE DB Source, it works... then it starts to dump again a couple of executions later. There are no date/time values in the result set, just character strings. The default code page is set to 1252 and 'use default code page' is set to False... any ideas appreciated; this is really starting to drive me nuts!
Our users believe that we lost some valid data, but no one knows who did it. I thought I could find out from the transaction dumps I take every hour. Can I read transaction dump (*.TRN) files in SQL Server 7.0, or can I get this information through some other means?
I'm going back and forth on an issue and was looking for some outside observations. I have over a dozen SQL NT servers, both 6.5 and 7.0. On some servers I'm dumping the databases and backing up the dumps with Veritas BackupExec; on 4 (two SQL 6.5, two 7.0) I'm backing up with the Veritas SQL Backup Agent.
Obviously, if you don't have room to dump a database, you must use the backup agent. That is the case on one of my servers and becoming so on a second. But my personal preference is to do the dump/backup.
(As a side note, one server is not backing up correctly with the backup agent, but ultimately that box will require the agent due to DB growth, so it is something I have to resolve.)
I like the dump system for a few reasons. I find it easier to load from a dump, particularly a single table. Likewise, I find it easier to copy a database by loading from a dump than by restoring from a backup, but that's mainly because BackupExec is a little strange about redirected restores.
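(To illustrate the single-table case: with a dump it's one statement, something like the sketch below, where dumpdev and customers are placeholders for my real device and table. I have no equivalent with the agent.)

    -- Run in the target database: pull one table out of a database dump
    LOAD TABLE customers FROM dumpdev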
Here's my clincher: I find restores via the backup agent ridiculously slow. Let's say I have a 5 gig DB with 1.5 gig of data in it; the dump size will probably come in somewhere below 2 gig. A restore via the backup agent carefully writes out the entire 5 gig, even though 3.5 gig of that is empty. This takes a lot of time. Add to that the "post-restore DBCC". I had such a restore take something on the order of 13 HOURS, which, needless to say, conflicted with my nightly DBCCs and backups.
OK. End of rant. Any suggestions or thoughts on the subject?