What is the best way to delete ALL data in a table without the transaction log filling up? I do not need to log the deletions.
Truncate Table ReportSearchRecordSets with NO_LOG?
Delete * from table with NO_LOG?
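A minimal sketch of the usual approach, using the table name from the question: TRUNCATE TABLE is minimally logged (it records page deallocations rather than individual row deletions), so it will not fill the log the way a row-by-row DELETE does. Note that neither statement accepts a WITH NO_LOG clause.

TRUNCATE TABLE ReportSearchRecordSets   -- minimally logged; removes all rows and resets any identity seed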
I have a table with no primary key and I just want to see all the duplicate entries on the basis of two columns. Can anyone suggest how I should go about it?
Can anyone provide me the syntax for this? I have only one table, say ISSR_TBL, and two columns, namely MIN and MAX, on which I want to identify and delete the duplicates.
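A minimal sketch of the usual pattern, assuming the two columns really are named MIN and MAX (bracketed below because they are reserved words): GROUP BY with HAVING lists each duplicated pair, and on SQL Server 2005 or later a ROW_NUMBER() CTE can delete all but one row per pair even without a key.

-- List the duplicate pairs and how often each occurs
SELECT [MIN], [MAX], COUNT(*) AS dup_count
FROM ISSR_TBL
GROUP BY [MIN], [MAX]
HAVING COUNT(*) > 1

-- Delete all but one row per pair (SQL Server 2005+; no key required)
;WITH d AS (
    SELECT ROW_NUMBER() OVER (PARTITION BY [MIN], [MAX] ORDER BY (SELECT NULL)) AS rn
    FROM ISSR_TBL
)
DELETE FROM d WHERE rn > 1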
Hi, I have an SSIS package that I run from the dtexec command prompt in parallel; the instances run completely isolated. Sometimes when I launch three instances of the package at the same time, one of the instances will fail. I have implemented detailed logging on the package to see exactly where it goes wrong.
To brief you on what the package does: in a nutshell, it copies tables between servers.
Due to the nature of the problem, the point of failure is completely random.
If I run just one instance it always works fine. If there is more than one, there is a chance that one might fail (but there is a good chance they will all run successfully).
My guess is that, since these packages share resources (CPU, memory and disk I/O), a shortage can sometimes cause one of the packages to fail. Is there any way to specify how long, for example, a SQL object transfer task will wait before raising an error?
Also, is there a way to flush the memory buffer after each table copy? It seems that when the package loops over different tables, the copied data piles up in memory and is only released when the whole instance finishes, not after each table copy.
Hi. I have a multiplier that multiplies two floating-point numbers with a 7-bit exponent and a 10-bit mantissa, so its output has a 7-bit exponent and a 20-bit mantissa. The output must be fed back to the input to compute another multiplication, so it must be truncated back to a 10-bit mantissa. How can I do this? Please help me. Thanks.
Does anyone have experience with truncating an expression like the Excel TRUNC function?
For example, in Excel you might have something like =TRUNC(IF($AE11=0,1,X11/$AE11),5), which drops everything beyond a certain number of places after the decimal point, so 87.5659321 becomes 87.565 rather than a rounded result.
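A minimal T-SQL sketch of the equivalent, assuming SQL Server: ROUND with a non-zero third argument truncates instead of rounding.

SELECT ROUND(87.5659321, 5, 1)   -- 87.5659300: truncated, not rounded
SELECT ROUND(87.5659321, 3, 1)   -- 87.5650000, i.e. 87.565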
Hi all. This is a database on which there are continuous inserts and updates, and data is loaded into it in bulk at regular intervals. I do not back up the transaction log. For these reasons, I have set the options 'select into/bulk copy' and 'truncate log on checkpoint'. When the 'truncate log on checkpoint' option was not set, the log file would grow very large and fill up the entire disk (file growth is unlimited). But even after the 'truncate log on checkpoint' option was set, the log file still grows at times to fill up the entire disk. I assumed that since 'truncate log on checkpoint' was set, the inactive portion of the log would be truncated every minute (i.e. at every checkpoint). Could someone please explain the reason for this behaviour?
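One quick check worth running here, on the assumption that a long-running open transaction is what keeps the active portion of the log from being truncated at each checkpoint (a checkpoint can only discard log records older than the oldest open transaction); the database name is a placeholder.

DBCC OPENTRAN ('YourDatabase')   -- reports the oldest active transaction, if any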
Hello. Every day, after a database backup done in the evening, around 3 AM we have a lot of flat files to integrate into tables and then process.
The question is: we want to free the space used by the transaction log, so we run "backup log with no_log" and/or "backup log with truncate_only". We then look at the size of the transaction log with "dbcc sqlperf( logspace )", but the 'Log space used %' stays the same.
Could you give us some information or tips on this?
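A minimal sketch of the sequence usually involved, with placeholder names (YourDb and the logical log file name YourDb_Log are assumptions): truncation only frees space for records older than the oldest open transaction, and it marks space inside the file as reusable rather than shrinking the physical file, so the shrink is a separate step.

BACKUP LOG YourDb WITH TRUNCATE_ONLY   -- mark the inactive part of the log as reusable
DBCC SQLPERF (LOGSPACE)                -- re-check 'Log space used %'
DBCC SHRINKFILE (YourDb_Log, 100)      -- shrink the physical file (target of 100 MB is just an example)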
I am trying to truncate one field within a table and replace it with data from another database. Creating the data and inserting it is no problem. Does anyone have a way of completing the truncate, or could I use an UPDATE or a DELETE as an alternative method?
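A minimal sketch of the UPDATE route, with hypothetical names throughout (dbo.TargetTable, SomeField, OtherDb.dbo.SourceTable and the join key ID are all assumptions): there is no column-level TRUNCATE, so clearing the column and repopulating it with UPDATE ... FROM is one common approach.

-- Clear the field
UPDATE dbo.TargetTable SET SomeField = NULL

-- Repopulate it from the other database
UPDATE t
SET    t.SomeField = s.SomeField
FROM   dbo.TargetTable AS t
JOIN   OtherDb.dbo.SourceTable AS s ON s.ID = t.ID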
I am having a problem with a growing transaction log: it has grown to 10 GB and I need to truncate it. How can I do it without interfering with the users, since this is our production database with a 24/7 operational service?
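A minimal sketch of the non-disruptive route, assuming the full recovery model and placeholder names (YourDb, its logical log file name YourDb_Log and the backup path are assumptions): a normal log backup truncates the inactive portion while the database stays online, and the file can then be shrunk without kicking users off.

BACKUP LOG YourDb TO DISK = 'D:\Backups\YourDb_log.trn'   -- truncates the inactive log, online
DBCC SHRINKFILE (YourDb_Log, 1024)                        -- shrink the file afterwards (1 GB target is just an example)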
Recently I started working with SQL Server 6.5 on NT4 (SP6).
Before me, someone performed an abnormal reinstallation of this SQL Server; since then the server needs to be restarted every day.
In the database edit window I saw that Available Log Space is 0, and while working with the device I found that the device definition is erroneous: the path in the definition differs from the actual file name, i.e. the device is defined as D:aaa03_log while the file actually present is D:aaalog_03.
As a result, every operation on the device produced an error. I changed the file name to match the SQL device definition, but at first this did not help. I added a new device and expanded the database log onto it, but Available Log Space remained 0. After a number of restarts I noticed that the old erroneous device had become visible and I could work with it, for instance expand it. But Available Log Space remains 0, and in the SQL monitor I also see that the log occupies 100% of the available space.
Maybe the transaction log is located with the data and I just can't see that?
There is an option in the database edit window, Truncate Transaction Log, and I want to run it. The organisation is not interested in the log; the main thing is that the data stays safe.
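For reference, a minimal sketch of roughly the equivalent command in SQL Server 6.5 (the GUI option issues much the same thing); YourDb is a placeholder, and note that this throws the log away instead of backing it up, which matches the stated requirement that only the data matters.

DUMP TRANSACTION YourDb WITH NO_LOG   -- 6.5 syntax: discard the inactive log without saving it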
I have field1 decimal(11,0) containing the number 1234567, and I need to get the last six digits into an int field, e.g. I want field2 int to contain 234567 as the result. How should I do that with functions? It should work regardless of the length of field1, e.g. field1 = 123456789 -> field2 = 456789.
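A minimal sketch, assuming SQL Server: the last six digits are simply the remainder after dividing by 1,000,000, and casting through bigint keeps the modulo on integer types; YourTable is a placeholder.

SELECT CAST(CAST(field1 AS bigint) % 1000000 AS int) AS field2
FROM   YourTable
-- 1234567   -> 234567
-- 123456789 -> 456789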
I've set up a number of jobs (not a maintenance plan) via a script in SQL 2005. These jobs do the following:
1) Full backup every Sunday night
2) Differential backup every weeknight
3) Log backup every hour
The database is obviously in the full recovery model.
The backups all seem to be running, with one issue: the log file is still growing and is not being truncated. I was under the impression that a log backup should result in the log being truncated after each full backup. However, this does not seem to be the case.
Is there anything obvious I've missed that needs to be set up, or is there a way I can check that the full backup is actually setting the appropriate checkpoint and that the log backups are 'seeing' these checkpoints?
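A minimal sketch of two checks that usually answer this on SQL Server 2005 (the database name is a placeholder): log_reuse_wait_desc reports what is currently preventing truncation, and DBCC SQLPERF shows how full the log really is.

-- What is preventing log truncation (e.g. LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION)?
SELECT name, log_reuse_wait_desc
FROM   sys.databases
WHERE  name = 'YourDb'

-- How much of the log file is actually in use?
DBCC SQLPERF (LOGSPACE)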
I am having trouble truncating a transaction log. I've tried everything in Books Online: I've backed up the database, and I've tried DBCC SHRINKFILE, DBCC SHRINKDATABASE, BACKUP LOG ... WITH TRUNCATE_ONLY, etc., but it will not shrink. Any suggestions? Thanks.
The log on one of my databases keeps filling up, even though I have it set to truncate on checkpoint. The only real difference between this database and the others on my server is that it was built from the dump of another database (on another server) where the tables are marked for replication.
I'm wondering if the fact that it was built from a replicated database could be causing this. I've noticed I can't drop any of the tables, even though my database isn't set to replicate (or publish).
Two questions: 1) Any ideas? 2) Is there any way I can make my server realize I'm not replicating, so it will let me drop those tables? (Nothing in Enterprise Manager indicates that my database is replicating or publishing.)
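A minimal sketch of the cleanup usually suggested when a restored copy of a published database still carries replication markers (the database name is a placeholder); it removes the leftover replication metadata so the tables can be dropped and the log can truncate normally.

EXEC sp_removedbreplication 'YourDb'   -- clears leftover replication settings from the restored copy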
I've got a SQL Server 2000 box that currently backs up to tape in full every night. I also want to back the box up off site, but not in full (as 30 GB is a little too much to transfer every evening).
So my plan was to do a full backup to tape at 7pm and then a differential at 8pm (to transfer off site).
The problem I am having is that after my differential has been done the logs get truncated, so if I want to replay them for any reason I need to get that differential back on site.
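For reference, a minimal sketch of the schedule described, with placeholder names and shown to disk rather than tape; note that a differential backup does not itself truncate the transaction log, so it may be worth checking whether another job (for example a log backup with a truncate option) runs in between.

-- 7pm: full backup
BACKUP DATABASE YourDb TO DISK = 'E:\Backups\YourDb_full.bak' WITH INIT

-- 8pm: differential, small enough to copy off site
BACKUP DATABASE YourDb TO DISK = 'E:\Backups\YourDb_diff.bak' WITH DIFFERENTIAL, INIT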
I am having a problem debugging an XML error we are getting in our production environment because I can't view the entire call to the stored procedure in Profiler. I have successfully traced the error, but when I go to the line with the call to the SP that caused the error, it doesn't show me the entire call. It only shows me 'exec sproc_name' and then the first 16 characters of the XML string parameter being passed to the proc. For some reason it does this ONLY to the stored procs that have XML parameters; on procs that use standard parameters, it displays the entire call correctly.
I have looked for some type of setting that controls this, but haven't been able to find it. I also have looked through many forums for this issue but to no avail. Does anyone know why this is happening? And, is there a workaround/fix?
I've noticed a problem when a data flow task returns an Oracle LONG field: if the query involves one table, the LONG is returned normally, but if the query involves any simple joins, the LONG is truncated at the first 100 characters.
The Microsoft driver does work correctly.
Has anyone else experienced this with the Oracle driver, and are there any known workarounds?
I'm trying to find an efficient and elegant way to truncate datetime columns to a whole date. Currently, when I perform this operation I use something like the following.
TSQL: Select Convert(datetime, Convert(varchar, column_name, 112)) from table_name
PSQL: Select trunc(column_name, 'DDD') from table_name
I've been using this TSQL code ever since version 7, but I know there has to be a more efficient way than converting to a varchar and back to a datetime. Over the years, I've looked for a more suitable method to perform the conversion in TSQL, but this has always been the most practical and efficient one I've found. The PSQL select statement is fast and graceful.
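A minimal sketch of the purely arithmetic alternative commonly used in T-SQL, which avoids the round trip through varchar by counting whole days from the zero date and adding them back:

TSQL: Select DateAdd(day, DateDiff(day, 0, column_name), 0) from table_name

DateDiff counts the whole days since 1900-01-01 and DateAdd restores them, which drops the time portion without any string conversion.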
How and when should transaction logs be truncated without breaking log shipping? I'm trying to deal with the scenario of my tlogs increasing in size as I configure log shipping with 5-minute backup frequency.