Why Does Database Need Minimal Space In Transaction Log?
Jul 27, 2000
The transaction log takes up a lot of space on my database, and even after I try truncating the log, doing a transaction log backup, and then shrinking it, I am not allowed to reduce the size of the transaction log to less than 250MB. Is there some reason why this space is required?
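For reference, the usual sequence is a log backup followed by a shrink of the log file by its logical name; a minimal sketch, assuming a database called mydb with a log file logically named mydb_log (both placeholders):

BACKUP LOG mydb TO DISK = 'C:\backups\mydb_log.trn'   -- path is an example
GO
DBCC SHRINKFILE (mydb_log, 100)   -- target size in MB
GO

If the file refuses to shrink below a certain size, the active portion of the log is usually sitting at the end of the file; taking another log backup or two and repeating the shrink often lets it go smaller.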
I am having a problem backing up my database and TLog files due to a lack of local disk space. The db file is about 30GB and the TLog is about 20GB, each on a different hard disk. Neither disk has enough available space to accommodate a backup. I also can't shrink the files, because part of that procedure would require a backup. Question: Can I use a redirected drive for the backup media? Is there a way to trick SQL into allowing this? If the answer is no, does anyone have a suggestion as to what I should do? I am in the process of requesting more disk space, but that could take a while. Thanks,
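SQL Server can back up straight to a UNC path, provided the SQL Server service account has write permission on the share; note that drive letters mapped in your own login session are not visible to the service, which is usually why "redirected drives" appear not to work. A sketch, with server, share and database names as placeholders:

BACKUP DATABASE mydb TO DISK = '\\backupserver\sqlbackups\mydb.bak' WITH INIT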
I have a process that restores a production DB, overwriting the existing copy each night. I'd like to keep the solution "up" for as long as possible, and this will be more important if I want to update it during the day (when there are more queries) too. The nature of the queries thrown at the system is that there are about 20 per hour; it's underpinning a reporting system, not an OLTP system.
It seems to me I could restore the fresh DB copy into a holding DB, then rename it to the production DB name at the end of the process. The rename process should be pretty much instant.
But I need to think about detecting and waiting for queries to complete on the prod DB before removing/demoting it (actually, I thought to rename it, then reuse it as the next copy to update).
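A sketch of the swap, assuming databases named prod and prod_staging (placeholder names); SINGLE_USER WITH ROLLBACK handles the waiting-for-queries part by giving running statements a grace period and then rolling them back:

ALTER DATABASE prod SET SINGLE_USER WITH ROLLBACK AFTER 60 SECONDS
GO
ALTER DATABASE prod MODIFY NAME = prod_old
GO
ALTER DATABASE prod_staging MODIFY NAME = prod
GO
ALTER DATABASE prod_old SET MULTI_USER   -- keep it around as the next staging copy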
The task:
- restore a backup of a 3rd party database onto one of our servers (this has no users that I can use)
- there is some ETL processing, so we're using Control-M to manage the process
- create a database user and grant it db_datareader.
I'd like to do this without granting any users elevated privileges if possible.
What I've done so far is grant the Control-M user (this is a domain user) dbcreator rights and made it owner of our copy of the database that is being refreshed.
The refresh is completing, but Control-M is not able to log onto the database to create the user.
What is the best way to accomplish this task without granting the Control-M user sysadmin rights?
Would I be able to do it if I used a SQL Agent job for the restore and user creation?
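One hedged possibility, given that the login performing a RESTORE ends up as dbo of the restored database: the Control-M login should then be able to create users there without any server-level grants beyond dbcreator, so the failure may just be the job connecting in the wrong database context. A sketch (all names are placeholders):

-- run as the Control-M login; dbcreator covers the restore itself
RESTORE DATABASE thirdparty FROM DISK = '\\path\thirdparty.bak' WITH REPLACE
GO
USE thirdparty
GO
CREATE USER etl_reader FOR LOGIN [DOMAIN\etl_reader]
EXEC sp_addrolemember 'db_datareader', 'etl_reader'

Running the same statements from a SQL Agent job step would also work, but the job owner's rights apply there, so it does not remove the permissions question by itself.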
It has come to my attention recently that my transaction log (200MB, alongside a 1GB data device) has gotten much fuller than it used to be after I deleted several thousand rows of data. It used to be under 5% space used and has now settled at around 70% space used. I have dumped the transaction log with no_log and done backups immediately afterwards on several occasions, and this has not solved the problem. I have also increased the log size from 150MB to 175MB, but this does not seem to have solved my problem.
Anyone got any ideas on how to decrease the % used log space in the transaction log?
Is it bad or unwise to have so much of the transaction log space used up?
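One way to watch this instead of eyeballing it is DBCC SQLPERF, which has reported log usage since the 6.x days:

DBCC SQLPERF(LOGSPACE)
-- one row per database: Log Size (MB), Log Space Used (%), Status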
I am new to SQL and might be missing something very easy. I have a situation where the space allocated to the transaction log of a database is extremely large (5GB). I cannot manually reduce it; I get "Error 21335: [SQL-DMO] The new DBfile size must be larger than its current size." This is a problem because the increase in size has taken all available space on the server.
I am having a strange problem I've never encountered before. I have a database that has been allocated 3000MB of data space and 1200MB of log space. When I go and check the properties of the database, it reports 662.23MB of data space available and 47.99MB of log space available. That's weird. I tried truncating the log, but the log space available is still the same, and neither is there a change in the data space. It shouldn't eat that much data space according to my analysis.
Is there a way to know how much space is free in each transaction log file of the same database?
Example:
A database with three 1GB transaction log files: A, B and C, created in this sequence.
From what I have read, since SQL Server 2005 writes to a single transaction log only, I guess that if the transaction log is using 2.5GB then A and B are full and C is only half full. Is this correct?
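DBCC LOGINFO shows this directly: one row per virtual log file, with the FileId it belongs to and a Status column (2 = in use, 0 = reusable). A sketch of summarizing it per file; the temp-table columns below match the SQL 2005 output and vary slightly by version:

CREATE TABLE #loginfo (FileId int, FileSize bigint, StartOffset bigint,
                       FSeqNo int, Status int, Parity tinyint, CreateLSN numeric(25,0))
INSERT INTO #loginfo EXEC ('DBCC LOGINFO')
SELECT FileId,
       SUM(CASE WHEN Status = 2 THEN FileSize ELSE 0 END) AS used_bytes,
       SUM(FileSize) AS total_bytes
FROM #loginfo
GROUP BY FileId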
I have a database with around 2GB of space for the data and 5.8GB for the transaction log. Now the problem is I do not have any more space on the system, and the data files require more than 2GB to execute some stored procedures.
Is it possible to decrease some space of the transaction log, say from 5.8GB to 2GB, and allocate it to the data files? My data and log files are on different drives. I did not find anything related to this topic in the BOL.
Can somebody help me with this problem? Anything related to this issue will be of great help to me, since I have no expertise in this field.
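Space cannot be moved between drives as such, but you can shrink the log file to free space on its drive and independently grow the data file on whichever drive has room. A hedged sketch, with logical file names as placeholders:

BACKUP LOG mydb TO DISK = 'D:\backups\mydb_log.trn'   -- usually needed before the log will shrink
DBCC SHRINKFILE (mydb_log, 2048)                      -- target size in MB
GO
ALTER DATABASE mydb MODIFY FILE (NAME = mydb_data, SIZE = 4096MB)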
I started working at a company as the DBA; the previous DBA had left long ago, so all the databases have huge transaction logs (50-80GB in size). The problems:
1. The machine has only 45GB of free HDD space. When I ran a TLog backup command on that machine, the HDD space started dropping by around 1GB/min, and within a few minutes only 15GB was left, so I stopped the command (it had completed only 37% of the backup). The second time I tried to take the TLog backup to a disk on a network path, but again it was eating local disk space very fast.
So my question is: when you take a TLog backup of, say, 80GB, does it use 80GB of local HDD as temporary storage?
2. I found one workaround: I took a full backup, restored it on another machine, shrank the log files to a few MB, then took a full backup of this shrunk db and restored it on production. Is this the only solution to my problem (very little space on production's local HDD)? If anyone knows a better solution, please suggest it.
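If the goal is only to get the log down and you can accept restarting the backup chain, older versions let you truncate the log without writing a backup file at all, which sidesteps the scratch-space problem entirely; a hedged sketch (pre-2008 syntax, and it sacrifices point-in-time recovery until the next full backup):

BACKUP LOG mydb WITH TRUNCATE_ONLY   -- removed in SQL Server 2008
DBCC SHRINKFILE (mydb_log, 1024)     -- logical name and MB target are placeholders
BACKUP DATABASE mydb TO DISK = '\\path\mydb.bak'   -- re-establish the chain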
Today I've put into production a big database accessed by 200 concurrent users; this database has READ_COMMITTED_SNAPSHOT set to ON. I know that RCSI set to ON is very aggressive on tempdb, so I'm monitoring it. I've noticed that the transaction log space usage (%) on tempdb is slowly but ever increasing: in the last 24 hours it has gone from 99% space free to 37% space free. Is this normal? The tempdb log is 35GB in size.
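It is worth checking what SQL Server itself says is pinning the tempdb log; on 2005 and later:

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'tempdb'
-- ACTIVE_TRANSACTION here often points to a long-running transaction (or a
-- long-lived version-store consumer under RCSI) preventing log truncation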
The smallest downloadable version seems to be 53MB. Is there any way to create a smaller version to use as an embedded DB for an application? I only need it to be single-user, for what that's worth. I'm thinking more in the 10MB to 20MB range.
I have bought SQL Server 2005 Standard Ed. which will be used only to host databases for different applications like WSUS, McAfee Protection Pilot and CA Brightstor ArcServer. I do not have intention to create corporate applications using SQL Server.
In fact, I am using the SQL Server as a "multiple MSDE database server"...
My question is: what are the minimal components to install from the following features (as I do not really know what they do)?
-SQL Server
-Analysis Services
-Reporting Services
-Notification Services
-Data Transformation Services
-Workstation Components
Hello, I'm upgrading from SQL 7 to SQL 2000 on another box. To minimize the downtime I would like to:
1) backup my SQL 7 database,
2) copy it to the new box with SQL 2000 already installed,
3) restore the database on the SQL 2000 box,
4) shut down my SQL 7 database,
5) copy the transaction logs to the SQL 2000 database,
6) restore the transaction logs to the SQL 2000 database,
7) bring up SQL 2000.
My only concern with this is restoring the transaction logs that were created on SQL 7 to SQL 2000. Do you know if I can do this? Do you see any (other) problem(s) with my plan? Thanks, Scott
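Whether SQL 7 log backups apply cleanly on SQL 2000 is exactly the thing to verify on the new box before the cutover, but mechanically steps 3 and 6 are the standard NORECOVERY sequence; a sketch with placeholder paths:

RESTORE DATABASE mydb FROM DISK = 'C:\move\mydb.bak' WITH NORECOVERY
-- ...later, after shutting down the SQL 7 side and copying the final logs:
RESTORE LOG mydb FROM DISK = 'C:\move\mydb_log1.trn' WITH NORECOVERY
RESTORE LOG mydb FROM DISK = 'C:\move\mydb_log2.trn' WITH RECOVERY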
I have finally gotten a setup to work, but I suspect there are improvements I should make to the permissions. Here's the setup: I am accessing two databases (aspnetdb and my own database called custom) on a local SQL 2005 server. In SQL Server Management Studio (SSMS) I created logins at the level of the SQL instance for two users ("NT AUTHORITY\NETWORK SERVICE" and "viewer"). Then I selected each user and opened its properties to access the Server Roles section. There I found each was given the public server role, and here's where I set the permission to sysadmin. This setting allows my application to work, but I'm sure there are less permissive approaches I should take. However, I just can't seem to find a simple and direct explanation of the procedure for setting more appropriate permissions.
The "NT AUTHORITY\NETWORK SERVICE" login accesses aspnetdb in order to create users and membership settings. The "viewer" login accesses the database called custom, and it reads and alters this database. The application (which is named viewer) performs these operations using a connection string like this: Data Source=STORE;Initial Catalog=custom;Persist Security Info=True;User ID=viewer;Password=xxxxx;
What is the recommended way to set the minimal permissions for these logins on these databases?
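A hedged sketch of replacing the sysadmin grants with per-database permissions; db_datareader/db_datawriter are the standard fixed roles, and aspnetdb also ships its own aspnet_* roles (e.g. aspnet_Membership_FullAccess) that additionally cover EXECUTE on the provider's stored procedures, which the membership calls need:

USE aspnetdb
CREATE USER [NT AUTHORITY\NETWORK SERVICE] FOR LOGIN [NT AUTHORITY\NETWORK SERVICE]
EXEC sp_addrolemember 'aspnet_Membership_FullAccess', 'NT AUTHORITY\NETWORK SERVICE'
GO
USE custom
CREATE USER viewer FOR LOGIN viewer
EXEC sp_addrolemember 'db_datareader', 'viewer'
EXEC sp_addrolemember 'db_datawriter', 'viewer'

With those in place, both logins can be taken out of the sysadmin server role.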
We are using SQL Server 2005 on our live server. Due to some third-party attacks which caused loss of data, we changed the SQL user's permissions to only read, write and execute. Now some of the sps in the db contain code to insert into an identity column, along the lines of:
SET IDENTITY_INSERT [tblName] ON
insert stmts...
SET IDENTITY_INSERT [tblName] OFF
This throws the error:
Cannot find the object "tblName" because it does not exist or you do not have permissions.
Which minimal permission can be given to get the above code working with identity insert on/off? We have removed the dbo permission due to the external attacks.
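Per Books Online, SET IDENTITY_INSERT requires that the user own the table or have ALTER permission on it, so the minimal grant should be ALTER on just the affected tables; a sketch with the user name as a placeholder:

GRANT ALTER ON [tblName] TO [app_user]
-- note: ALTER on a table also allows schema changes to it, so grant it per
-- table rather than at the schema or database level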
Here is code for finding all minimal loops (cyclic paths) in a graph whose vertexes have degree >= 3. Almost obviously, before seeking loops we should eliminate from the graph all vertexes of degree < 3 (the degree of a vertex is the number of edges outgoing from it). Note: there are no 'parent'/'child' nodes here; all vertexes are absolutely equal.
if object_id('g3')>0 drop table g3
if object_id('g3x')>0 drop table g3x
if object_id('g3y')>0 drop table g3y
if object_id('g3l')>0 drop table g3l
GO
create table g3y(v1 int, v2 int) -- ancillary table
GO
create table g3x(n int, v1 int, v2 int) -- ancillary table
GO
create table g3l(nl int, v1 int, v2 int) -- table for storing 'detected' loops
GO
create table g3(v1 int, v2 int) -- test data: pairs of adjoining vertexes;
                                -- each vertex is named by an arbitrary number
GO
insert into g3
select 2, 3 union all
select 2, 4 union all
select 1, 4 union all
select 3, 5 union all
select 5, 6 union all
select 1, 6 union all
select 4, 7 union all
select 6, 8 union all
select 3, 9 union all
select 1, 7 union all
select 2, 7 union all
select 1, 8 union all
select 5, 8 union all
select 2, 9 union all
select 5, 9
----union all
/*
select 2, 13 union all
select 3, 13 union all
select 13, 14 union all
select 12, 14 union all
select 12, 15 union all
select 11, 15 union all
select 11, 13 union all
select 10, 11 union all
select 10, 12 union all
select 10, 14 union all
select 10, 15
*/
GO
-- make the edge list symmetric
insert into g3 select v2, v1 from g3
declare @i int, @n int, @v1 int, @v2 int
set @i=1

while 0=0
begin
    set @n=1
    truncate table g3x
    truncate table g3y

    -- pick a starting edge that is not already part of a detected loop
    select top 1 @v1=g3.v1, @v2=g3.v2
    from g3
    left join g3l on (g3.v1=g3l.v1 and g3.v2=g3l.v2) or (g3.v1=g3l.v2 and g3.v2=g3l.v1)
    where g3l.nl is null
    if @@rowcount=0 break

    insert into g3x select @n, @v1, @v2

    while @v1<>(select top 1 v2 from g3x order by n desc)
    begin
        set @n=@n+1

        -- try to close the loop: an edge from the current path head back to @v1
        insert into g3x
        select top 1 @n, v1, v2
        from g3
        where v2=@v1 and v1<>@v2
          and v1=(select top 1 v2 from g3x order by n desc)

        if @@rowcount=0
        begin
            -- otherwise extend the path to a vertex not visited yet
            insert into g3x
            select top 1 @n, v1, v2
            from g3
            where v2 not in (select v1 from g3x union all select v2 from g3x)
              and v1=(select top 1 v2 from g3x order by n desc)
              and not exists (select 0 from g3y where g3y.v1=g3.v1 and g3y.v2=g3.v2)

            if @@rowcount=0
                if @n>2
                begin
                    -- dead end: remember the last edge and backtrack one step
                    insert into g3y select v1, v2 from g3x where n=@n-1
                    delete from g3x where n=@n-1
                    set @n=@n-2
                end
                else
                begin
                    insert into g3l select 0, v1, v2 from g3x
                    break
                end
        end
        else
        begin
            -- loop closed: store it under the next loop number
            insert into g3l select @i, v1, v2 from g3x
            set @i=@i+1
        end
    end
end

select * from g3l order by nl
Below is what we get:
nl          v1          v2
----------- ----------- -----------
7           5           9
7           9           3
7           3           5
Of course, in the general case not all loops found by the code are minimal. But this is exactly my approach: first find any possible loops (avoiding excessiveness!!), then, in a WHILE loop, try to mark out minimal loop(s) from the intersection of two non-minimal loops... it seems it will be an interesting T-SQL job.
We use NetBackup for our SQL servers to back up and restore databases. I would like the service account used by NetBackup to have as limited permissions as possible. The account should be able to back up and restore a db without being able to read any of the content. Right now the jobs fail if the service account is not in the sysadmin role.
I removed the account from sysadmin and limited it to dbcreator and public, but the jobs fail.
How do I set up an account so that people who know the service account password can't log in with that account and read db information?
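A hedged starting point: backup rights without data-reading rights are what db_backupoperator provides inside each database, and dbcreator covers creating databases during restores; a sketch for one database (account name is a placeholder):

USE mydb   -- repeat per database
CREATE USER [DOMAIN\netbackup_svc] FOR LOGIN [DOMAIN\netbackup_svc]
EXEC sp_addrolemember 'db_backupoperator', 'DOMAIN\netbackup_svc'

Be aware the guarantee is only best-effort: anyone holding the account's password could restore the backup elsewhere and read it there, so protecting the password (and the backup files) matters more than the role choice.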
I'm currently working on a project at work to test the effects of database compression, trying to obtain measurable data on the impact of the compression on other server resources, and therefore whether the reduction in space used is worth the extra overhead. This has involved taking a trace of a production customer's workload for a period of time and replaying it against a backup using Distributed Replay in synchronised mode.
I'm then taking a trace of that replay, as well as using perfmon to record useful data about the server, before and after compression is enabled. Finally, I'm loading the traces into a tool called Qure to analyse the impact of the compression on reads, writes, CPU, overall duration etc.
What I'm finding is that even across 2 different 'baseline' runs, which are replaying the exact same workload against the exact same database, performance etc. differs to a significant enough degree that it calls into question the validity of the test. I can only put this down to the fact that this server is on a VM, which is affecting available resources, which in turn affects the execution plans the workload generates and causes different replays of the same workload. I'm therefore looking at doing this on a standalone server, but I still can't be sure the differences will go away.
How can I make tests such as this as consistent as possible across multiple runs, when elements outside of SQL Server are in effect out of my control?
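Whatever the host platform does, two variables inside SQL Server are controllable: cache state at the start of each replay, and the plans themselves (capture them each run so a plan change can be proven rather than suspected). A sketch of a pre-run reset, for test instances only:

CHECKPOINT              -- flush dirty pages so the buffer drop is complete
DBCC DROPCLEANBUFFERS   -- every run starts with a cold buffer cache
DBCC FREEPROCCACHE      -- every run compiles fresh plans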
I have a table (named table1) with 20 million rows. It takes around 11 minutes to apply the primary key to this table. There are some tables with over 100 million rows, so based on the previous time, if my calculations are correct, it will take close to an hour to apply the primary key to tables with around 100 million rows.
My current solution is to create another table (named table2) with no indexes or primary keys, pump over only about 5 days' worth of data, then apply the primary key. Then have a script that will gradually populate table2 with the rest of the data; by gradually I mean inserting around 100k rows per hour or so. Keep in mind this table2 is heavily updated with new records.
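A sketch of that staged approach with placeholder column names; the idea is the PK is built while the table is small, and the backfill then pays the index-maintenance cost in small, quick batches:

CREATE TABLE table2 (id int NOT NULL, payload varchar(100))
-- load ~5 days of data first, then:
ALTER TABLE table2 ADD CONSTRAINT PK_table2 PRIMARY KEY CLUSTERED (id)
GO
-- run this on a schedule; each batch commits quickly and keeps the log small
INSERT INTO table2 (id, payload)
SELECT TOP (100000) t1.id, t1.payload
FROM table1 t1
WHERE NOT EXISTS (SELECT 1 FROM table2 t2 WHERE t2.id = t1.id)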
I am trying to migrate our processing from command-line scripts and FoxPro to SQL, so I need to run the SSIS packages using dtexec. I copied the dtexec file and a few missing DLLs to our production servers, but I can't execute the packages. I don't want to install the full client tools (particularly Management Studio / Business Intelligence Development Studio) on our production servers due to the overhead and limited system disk space.
Can somebody tell me what the minimum install would be so I can run SSIS packages using the dtexec or dtexecui tools? I would also like to install some of the other command-line client tools like osql etc.
I understand that minimal logging can occur on a nonclustered indexed heap as long as [URL] ...
*not replicated
*tablock is used
*table is empty
The following test seems to contradict this.
In the test I create a non-indexed heap, insert some records and check the log, then repeat the test on an indexed heap.
The results suggest that even though the conditions for minimal logging into an indexed heap are met, minimal logging is not happening, although it does happen on the non-indexed heap. What am I doing wrong?
CREATE DATABASE logtest
GO
USE logtest
GO
CREATE TABLE test (field varchar(100))
GO
CHECKPOINT
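The posted script breaks off after the CHECKPOINT; a hedged sketch of how the rest of the comparison might look (table names are mine, it assumes the SIMPLE recovery model so checkpoints clear the log between measurements, and fn_dblog is undocumented):

ALTER DATABASE logtest SET RECOVERY SIMPLE
GO
CREATE TABLE test_ix (field varchar(100))
CREATE NONCLUSTERED INDEX ix_test ON test_ix (field)
GO
CHECKPOINT
INSERT INTO test WITH (TABLOCK)
SELECT TOP (10000) REPLICATE('a', 100)
FROM sys.all_objects a CROSS JOIN sys.all_objects b
SELECT COUNT(*) AS heap_log_records FROM fn_dblog(NULL, NULL)
GO
CHECKPOINT
INSERT INTO test_ix WITH (TABLOCK)
SELECT TOP (10000) REPLICATE('a', 100)
FROM sys.all_objects a CROSS JOIN sys.all_objects b
SELECT COUNT(*) AS indexed_heap_log_records FROM fn_dblog(NULL, NULL)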
Hi, I work on SQL Server 6.5, and I want to produce documentation about the available space in my databases, so I need to know how much space my databases use every day.
If somebody knows how to get this available space, or how to write such a script, I hope you will tell me about it. Thank you so much for your cooperation.
Hi, I would appreciate it if somebody could help me figure out how I can monitor my database space (used and available) by using Transact-SQL through Query Analyzer...
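A simple hedged starting point: sp_spaceused with no arguments reports size and unallocated space for the current database, and the undocumented sp_MSforeachdb can run it across all of them; scheduling this daily and saving the output gives the history being asked for:

EXEC sp_spaceused                                   -- current database
EXEC sp_MSforeachdb 'USE [?] EXEC sp_spaceused'     -- every database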
When I try to check the space used in one of the databases on my server, I don't get correct statistics on the bar graph. The statistics show that the database size and the space used are the same; it is not the same for all the databases. The database size is 17500MB. I increased the database size to 18000MB, but the situation has not improved. I tried running sp_spaceused and I get the following output:
database_name    database_size    unallocated space
---------------  ---------------  -----------------
VSI              18000.00 MB      -3553.26 MB
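A negative unallocated-space figure almost always means the usage metadata is stale rather than the database being broken; DBCC UPDATEUSAGE corrects the counts, and sp_spaceused can be asked to do so itself:

DBCC UPDATEUSAGE ('VSI')
EXEC sp_spaceused @updateusage = 'TRUE'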
Does anyone know how to get the space used for both the data and the log of a database using the system tables? I was able to get the database size but don't know how to compute the space used.
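A sketch against the per-database system catalog (sysfiles on SQL 2000-era servers, sys.database_files on 2005+); sizes are in 8KB pages, so dividing by 128 yields MB:

SELECT name,
       size / 128.0 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb
FROM sysfiles   -- one row per data and log file of the current database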
My SQL database is showing the space available for one of the databases as 0. I have SQL Server 2005 Ent. Edition (32-bit). File growth for the .mdf file is set to Autogrowth, 10% growth, Unrestricted.
I have lots of physical space on the hard drive, so why is it showing 0.00 space available for the database? What do I need to do to free up space?
When I ran sp_spaceused it gave me the following results:
database_name    database_size    unallocated space
---------------  ---------------  -----------------
XXXX             1817.31 MB       0.00 MB
What is the best suggestion for what to do if a database is using more than 95% of its space? Should I leave it as is, and it will grow when it needs to? Or should I take action, and what action?
One of my user databases has grown to almost 50GB, and most of the space is used by 2 user tables. The number of rows in each table is around 5500, but the reserved space is very high. What can be done to reduce the space so that my database's growth can be brought down?
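With ~5,500 rows but huge reserved space, the first suspects are stale usage counts and dead space left behind by deletes; a hedged sketch (table name is a placeholder):

EXEC sp_spaceused 'bigtable1'                       -- compare rows vs. reserved/unused
DBCC UPDATEUSAGE (0, 'bigtable1') WITH COUNT_ROWS   -- correct stale counts (0 = current db)
DBCC DBREINDEX ('bigtable1')                        -- rebuild to reclaim genuinely unused pages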