DB Engine :: Clarification On MDF / LDF Sizes And Backups
Jun 7, 2015
My environment looks like:
- Windows 2008R2 SP2 VM (VMware 6)
- SQL 2012 SP2 Std.
- NetApp iSCSI LUNs, Snapmanager for SQL
So I have created a test database and configured the log to grow to a maximum of 120MB.
To check the allocated and free space I right-clicked the database -> tasks -> shrink file
I see currently allocated: 5,00MB and 2,50MB free space for the MDF Database
I see currently allocated: 1,00MB and 0,63MB free space for the LDF Logfile
Next I use a data generator to fill the database with random data.
After adding around 500k rows I check the size again:
I see currently allocated: 17,00MB and 0,44MB free space for the MDF Database
I see currently allocated: 61,94MB and 0,27MB free space for the LDF Logfile
Next I take a full backup, including truncating the log files. After that I check the size again:
I see currently allocated: 17,00MB and 0,44MB free space for the MDF Database
I see currently allocated: 61,94MB and 57,24MB free space for the LDF Logfile
So now my question is: where did those 56,97MB go? I imagined they would now be added to the MDF file, but they seem to be just gone. I did this procedure two more times and the MDF stays the same size while the LDF is almost empty after the backups. Then I thought maybe it's held in the server's memory, so I rebooted it. But the MDF still has the same size... Is this normal? How should it work?
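For what it's worth, this looks like normal behavior: a log backup only marks space inside the LDF as reusable, and freed log space never migrates into the MDF, so the data file keeping its size is expected. A small sketch to watch the same numbers from T-SQL instead of the Shrink dialog (run it inside the database in question):

SELECT name,
       size / 128.0 AS size_mb,                                    -- size is stored in 8 KB pages
       size / 128.0 - FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS free_mb
FROM sys.database_files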
The space allocated to the log in question is 180 GB. During this time period I was running TLog backups every 5 minutes, yet the log continued to chew through to 80 GB used, even after the process was complete and a final TLog backup had been taken. It stayed very large until the full backup was complete (or something else that I'm unaware of completed). Like every other DBA I typically take a TLog backup to shrink the log, but what appeared to happen here is that the full backup completed and that released the used log space. All said, will transaction log backups not free up the log during full backups?
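As far as I know this matches the documented behavior: while a full backup is running, log truncation is deferred, so log backups taken during the full cannot mark the space reusable until the full finishes. A quick way to confirm while it is happening (MyDatabase is a hypothetical name):

SELECT name, log_reuse_wait_desc    -- shows ACTIVE_BACKUP_OR_RESTORE while the full backup holds the log
FROM sys.databases
WHERE name = N'MyDatabase'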
I'm requesting that our DBA create a database with simple recovery for my peer and me to start using. I'm asking him to give us db_owner on this DB so we can create schemas, tables, views, and procs, do table inserts, deletes, etc. What SQL permission (if any) would allow my peer and me to take a backup once in a while to the default SQL backup directory? And, for that matter, to do a restore from there.
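A minimal sketch, assuming hypothetical database and user names: membership in the fixed database role db_backupoperator (or an explicit GRANT) covers backups, while restoring over an existing database is a server-level matter limited to sysadmin/dbcreator members and the database owner.

USE MyDb
EXEC sp_addrolemember N'db_backupoperator', N'MyPeerUser'   -- allows BACKUP DATABASE and BACKUP LOG
-- or grant the permissions directly:
GRANT BACKUP DATABASE, BACKUP LOG TO MyPeerUser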
How do I restore database backups with different recovery forks? I have 1 full backup, 2 diff backups, and 10 tran backups. My prod database was mirrored, so after an error it was switched to the mirror with the ALLOW_DATA_LOSS option. Now I have a full and a diff backup with one recovery fork GUID and the other backups with another GUID. So the question is: how do I restore all these backups if, in the middle of the restoration, there will be a different recovery fork? I tried to restore the log backups with the new fork GUID and got this error: This backup set cannot be applied because it is on a recovery path that is inconsistent with the database. The recovery path is the sequence of data and log backups that have brought the database to a particular recovery point. Find a compatible backup to restore, or restore the rest of the database to match a recovery point within this backup set, which will restore the database to a different point in time.
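One way to see which fork each backup belongs to before planning the restore sequence is to read the backup headers (hypothetical path below); log backups can only be applied along the fork that the restored full/diff pair belongs to:

RESTORE HEADERONLY FROM DISK = N'C:\Backups\MyDb_Log_01.trn'
-- compare the FamilyGUID, RecoveryForkID and FirstRecoveryForkID columns across all of the backups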
Hi friends, how are you? I have one clarification. Currently I am doing some search engine work. For that, I have to populate data from a SQL Server database into a flat file. I got JavaScript search engine code from a website; in that code, the data is fetched from a flat file only. So I have to populate the data from SQL Server into the flat file, and then get the keyword value from that flat file according to the user input. Is this possible? Please send mail.
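If exporting to a flat file is the goal, one option is the bcp command-line utility that ships with SQL Server; a sketch with hypothetical database, table, file, and server names:

bcp MyDb.dbo.Products out C:\data\products.txt -c -t"," -S MyServer -T

Here -c writes plain character data, -t sets the field terminator, and -T uses Windows authentication.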
I just finished getting a local database mirrored to an offsite "DR" server. Interesting experience getting that working, but that's another story! It's set up as "High Safety without automatic failover (synchronous)", without a witness. I was hoping I could get a sanity check on this flow....
The local database gets updated each morning with a few MB of data, so it's pretty low use. It's mirrored through a T1 (soon to be dual). My remote copy was restored from yesterday's backup, and the mirror was synced in less than 10 minutes, so I don't expect bandwidth to be an issue...
In the event that my local server or network is down, I'd be able to log into my remote server, and run my application remotely. From what I've seen, the "Failover" option is only available on the "Primary" server? Will this become an option if the remote no longer sees the primary, or do I need to do this through the command: ALTER DATABASE <dbname> SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS?
Since the updates are so limited, I'm not too concerned with data loss, but will need to make sure I can get this running quickly if needed...
As I understand it, this will make the mirrored database the primary, and when the old primary comes back online, it'll become the new mirror? I can then let them sync, and force a failover from the remote system to put the primary back on the local network.
I just want to make sure I understand this correctly before I start testing. Thanks for any assistance!
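For what it's worth, the GUI failover option does require a connection to the principal; when the principal is unreachable, the mirror is brought online by forcing service instead. A sketch with a hypothetical database name:

-- on the mirror, while the principal is down (accepts possible data loss):
ALTER DATABASE MyDb SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS
-- later, once the old principal is back and the session shows SYNCHRONIZED, fail back gracefully:
ALTER DATABASE MyDb SET PARTNER FAILOVER    -- run on the current principal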
I want to create a DataList that shows products, which will span multiple pages. I have my stored proc to show paged results, which contains a return value indicating more records. I have found examples of coding the DataReader, defining all the parameters, etc., but what about the drag-and-drop SqlDataSource? You can select the DataSource Mode to be "DataReader". I can put select parameters in, with input and my return value, but I don't know how to then access the return value, or an output value if needed, from this. My DataList references the SqlDataSource, but I don't know how to get the return/output value out. This is very frustrating, because I can't find any info about it anywhere: always input parameters, but no output. This is my current SqlDataSource...
If I take out the RETURN_VALUE parameter, my results display in my DataList, but that's useless if I can't access the return value to determine the remaining number of pages, etc. Is my RETURN_VALUE parameter wrong? How do I access it? My stored proc is shown below...
CREATE PROCEDURE sp_PagedItems ( @Page int, @RecsPerPage int, @CategoryID int ) AS

-- We don't want to return the # of rows inserted
-- into our temporary table, so turn NOCOUNT ON
SET NOCOUNT ON

-- Create a temporary table
CREATE TABLE #TempItems (
    ID int IDENTITY,
    No varchar(100),
    Name varchar(100),
    SDescription varchar(500),
    Size varchar(10),
    ImageURL varchar(100)
)

-- Insert the rows from Products into the temp. table
INSERT INTO #TempItems (No, Name, SDescription, Size, ImageURL)
SELECT No, Name, SDescription, Size, ImageURL
FROM Products
WHERE CategoryID = @CategoryID

-- Find out the first and last record we want
DECLARE @FirstRec int, @LastRec int
SELECT @FirstRec = (@Page - 1) * @RecsPerPage
SELECT @LastRec = (@Page * @RecsPerPage + 1)

-- Now, return the set of paged records, plus an indication of
-- whether we have more records or not!
SELECT *,
       MoreRecords = ( SELECT COUNT(*) FROM #TempItems TI WHERE TI.ID >= @LastRec )
FROM #TempItems
WHERE ID > @FirstRec AND ID < @LastRec
Mine is a small organisation and we have deployed an ERP on a Windows 2003 server. There are 60 ERP users in total and we have 25 concurrent licences for the ERP.
At the back end we are running MS SQL 2000 Standard Edition. We have created one user in the database and the ERP accesses data through this user only.
Now I need to understand how many CALs I need to purchase for the database: 60, 25, or 1.
I hope someone can help here. I found that replication was failing due to the fact that there is a foreign key being referenced; here is the error: Could not drop object 'dbo.tblName' because it is referenced by a FOREIGN KEY constraint. (Source: Database Name (Data source); Error number: 3726)
In searching for some sort of answer - I came across this: Join Filters
Foreign key relationships are central to most database designs. For example, in a database that tracks product orders, you often see a CUSTOMER table with a CustID primary key and an ORDER table with a CustID foreign key that references the CUSTOMER table. In merge replication, if you filter the CUSTOMER table with a subset filter clause, you must also filter the ORDER table, so that only the rows that reference the filtered subset of rows from the CUSTOMER table are replicated. Filtering requirements that are based on a foreign key relationship must be explicitly represented in the merge replication configuration through a join filter (which lists the two related table articles and a logical expression that identifies the relationship), as shown in the following example:

CUSTOMER.CustID = ORDER.CustID

The effect of this join filter between the CUSTOMER table and the ORDER table, together with the subset filter clause on the CUSTOMER table, is that Subscribers to the publication receive only those rows from the CUSTOMER table where State = 'WA' and only those rows from the ORDER table that correspond to the CustID values in the filtered CUSTOMER table.

Am I reading this right? If I have only one table that I need to replicate, I would need to include the other tables connected by the foreign key for the replication to work, and in doing so, the table I designate for updating in this replication will be the only one affected, and the others that go with it won't update their corresponding dupe tables? Sorry, I hope this makes sense.
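For reference, a join filter of the kind the quoted passage describes is declared with sp_addmergefilter; a sketch with hypothetical publication and article names:

EXEC sp_addmergefilter
    @publication       = N'MyPublication',
    @article           = N'ORDER',
    @filtername        = N'CUSTOMER_ORDER',
    @join_articlename  = N'CUSTOMER',
    @join_filterclause = N'CUSTOMER.CustID = ORDER.CustID',
    @join_unique_key   = 1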
I have a question, because I'm not very experienced with VB.
The function declares:
Public myString
Public myKey
When something is declared this way, what happens if two or more users happen to execute the report at the same time? Are myString and myKey common to all of those reports at the same time, or does the Report Server manage those variables separately for each of the reports running concurrently?
My goal is to populate a dropdownlist with only the users that are "Techs". I am using the membership database that you can set up through VWD. I added this column to the aspnet_Users table: IsTech, as a bit datatype. I thought I had the right SQL statement, but apparently not, because I get an "Invalid column name 'True'" error. Here is my statement: <asp:SqlDataSource ID="SqlDataSource3" runat="server" ConnectionString="<%$ ConnectionStrings:HRIServiceConnectionString1 %>" SelectCommand="SELECT [UserID], [FirstName]+ ' ' + [LastName] AS techid FROM [aspnet_Users] WHERE [isTech] = True ORDER BY [LastName], [FirstName]"> </asp:SqlDataSource>
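T-SQL has no True/False keywords, which is why the unquoted True is parsed as a column name; bit columns compare against 1 and 0 instead. A corrected SelectCommand would look something like:

SELECT [UserID], [FirstName] + ' ' + [LastName] AS techid
FROM [aspnet_Users]
WHERE [IsTech] = 1
ORDER BY [LastName], [FirstName]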
http://www.microsoft.com/sql/editions/compact/sscecomparison.mspx The above link describes the pros and cons of two SQL Server 2005 editions. The document says Compact Edition is not good 'when you need a multi-user database server'. What is the meaning of a multi-user database? Does this mean the database strictly will not support two database connections at a time? -Thank you, Gish
I am working on a SQL Server database that will be accessed (mainly) via a web interface that will have users from various countries. As I understand it, the best way to store data is to use 'n' (nvarchar etc.) datatypes for columns, which allow users to enter data and have it stored in raw Unicode format. For other reasons the collation of the db has been changed (to Slovenian), but I understand that this only affects how data is sorted. However, whenever I try to put certain data into the nvarchar field (such as the £ symbol), the data gets converted to the letter L. Is that how it would be expected to work, and if so, am I missing something here? Note I know Enterprise Manager sometimes shows data incorrectly, but this happens when I add/view data from anywhere (i.e. Enterprise Manager / the website / Query Analyzer).
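This usually means the value is travelling as varchar somewhere along the way: a literal or parameter without the N prefix is converted through the database's code page, and the Central European code page behind a Slovenian collation has no £, so the best-fit conversion turns it into L. A small check against a hypothetical table:

INSERT INTO dbo.MyTable (MyCol) VALUES ('£')    -- varchar literal, goes through the code page, can arrive as 'L'
INSERT INTO dbo.MyTable (MyCol) VALUES (N'£')   -- nvarchar literal, stays Unicode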
Hi, I'm seeing confusing results coming back from a query and I want to make sure my joins are working as expected.
I have 3 tables, tbl_family, tbl_familyPhone, and tbl_phone. tbl_FamilyPhone is a linking table between families and phones that specifies if it's the primary number.
So a family has many familyPhones and a phone has many family phones. I'm trying to get all the families and their home phone only, if they have one. I don't want families to duplicate and I don't want any left out. Here's what I've got
Code Snippet

select [whatever]
from tbl_family
LEFT OUTER JOIN tbl_familyphone on tbl_family.pk_familyid = tbl_Familyphone.fk_familyid
inner JOIN tbl_phone on tbl_familyphone.fk_phoneID = tbl_phone.pk_phoneid
    and tbl_phone.fk_phonetypeid = 'E6F1688E-015B-481D-8C41-DCC1FEA5D5AB'
My thinking is the inner join between tbl_Phone and tbl_FamilyPhone will cause any FamilyPhone record without a phone record to be left out (fk_phonetypeid is the id of a home phone). Though some FamilyPhone records may be left out, I will not lose any families because it is left outer joined to familyphone. Is this right? Because if I just do select count(*) from families I get 4517 records, but when running the query above with the joins, I get 4383 records.
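The row counts give it away: the LEFT JOIN preserves families with no tbl_familyphone rows as NULL rows, but the INNER JOIN to tbl_phone then throws those NULL rows out, which is where the missing 134 families go. One way to keep every family is to left-join to the familyphone/phone pair as a unit, for example:

select f.*
from tbl_family f
LEFT OUTER JOIN (tbl_familyphone fp
     inner JOIN tbl_phone p
        on fp.fk_phoneID = p.pk_phoneid
       and p.fk_phonetypeid = 'E6F1688E-015B-481D-8C41-DCC1FEA5D5AB')
  on f.pk_familyid = fp.fk_familyid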
Does the Table Lock option in the OLE DB Destination task in a data flow refer to a lock on the destination table, or on the source table from which the data is being loaded?
The reason I ask is because I have a package run twice simultaneously pulling from the same server and table onto two different tables on two different servers. When I kick off my job to run the two data pulls, one of the jobs terminates with a
"TCP Provider: An existing connection was forcibly closed by the remote host." message. I believe this is due to a table lock on the server, but can't figure this out.
From BOL, I see these remarks with respect to the MODIFY FILE subcommand (my underline added):
Initializing Files

By default, data and log files are initialized by filling the files with zeros when you perform one of the following operations:
Create a database
Add files to an existing database
Increase the size of an existing file
Restore a database or filegroup
Which leads me to believe that expanding the size of a datafile will also wipe out (my definition of 'initialize') any existing data within that file.
I may be misunderstanding 'initialize', because when I tested it out, I found this wasn't the case - my table data written to the file was still there after a resize.
I need to clarify to what degree I'd be taking a risk by increasing the file size on a datafile which already has data in it.
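As far as I can tell, the test shows the intended behavior: 'initialize' applies only to the space being added, never to pages that already hold data, so growing a file in place is safe. A sketch with hypothetical database and logical file names:

ALTER DATABASE MyDb
MODIFY FILE (NAME = N'MyDb_Data', SIZE = 500MB)   -- only the newly added region is zero-filled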
We're upgrading our SQL Server database from 2005 to 2012. I ran the Upgrade Advisor report and got the issue "Non-integer constants are not allowed in the ORDER BY clause in 90" because of the script below:
SELECT gp.BRAND+' <> '+gp.CATEGORY AS 'full name', gp.PRODCODE, gp.CATEGORY FROM dbo.GFK_PRODUCT gp ORDER BY 'full name'
I tried running the same query on our test SQL Server 2012 instance and it ran successfully. Now I'm confused about whether I still need to change it. I googled the issue a bit, came across this link, and it mentioned the following.
1) Non-integer constants are ... constants that are not integer numbers.
Example:
'string1' represents a string constant
0x01 represents a varbinary constant
{ts '2015-02-26 06:00:00'} represents a datetime constant
1.23 represents a numeric constant
2) So single quotes are used to define string constants / character string constants, but SQL Server also allows single quotation marks to be used as a column identifier delimiter: SELECT ... expression AS 'Column1' FROM ...
"In this context it is clear that 'Column1' is a column identifier, but when it is used in ORDER BY (ORDER BY 'Column1') it generates confusion, because SQL Server doesn't know whether it represents a string literal (character string constant) or a column identifier / column name." Do I still need to change the existing code even though it's working fine in 2012? If yes, is it for best-practice reasons, or will it get deprecated / stop working in a future version?
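Since the current query only happens to work, a safer fix is to delimit the alias with brackets (or double quotes), which is unambiguous in every version:

SELECT gp.BRAND + ' <> ' + gp.CATEGORY AS [full name],
       gp.PRODCODE, gp.CATEGORY
FROM dbo.GFK_PRODUCT gp
ORDER BY [full name]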
Does anyone here have a ready-to-go SQL script that lists all DBs, files, sizes, owner, etc.? I guess it's a combination of sp_databases, sp_helpdb, and sp_helpdb [db].
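A minimal sketch, assuming SQL Server 2005 or later, that pulls database, owner, and per-file sizes from the catalog views:

SELECT d.name AS database_name,
       SUSER_SNAME(d.owner_sid) AS owner_name,
       mf.name AS logical_name,
       mf.physical_name,
       mf.size * 8 / 1024.0 AS size_mb       -- size is stored in 8 KB pages
FROM sys.databases AS d
JOIN sys.master_files AS mf ON mf.database_id = d.database_id
ORDER BY d.name, mf.file_id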
Does anyone know of a quick way to find out what the largest indexes on a database are? I have a number of tables and was wondering if there's a stored proc or query that I can execute that will list the indexes and their size in order by size? Thanks
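One possible query, assuming SQL Server 2005 or later; run it in the database in question:

SELECT OBJECT_NAME(ps.object_id) AS table_name,
       i.name AS index_name,
       SUM(ps.used_page_count) * 8 / 1024.0 AS size_mb   -- used_page_count is in 8 KB pages
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
GROUP BY ps.object_id, i.name
ORDER BY size_mb DESC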
When you have autogrowth turned on for log files, what happens when you put a max file size on it? Will it just overwrite the old log records to keep the file at the max size, or will it just create a new file every time it hits the max size?
I'm putting together a manual system that tracks data growth in a certain database. I was going to use sp_spaceused as a part of it, but then realized the datatypes for size are CHAR, not INT or BIGINT. I was going to do counts, averages, etc. on those columns but that wouldn't work against a CHAR field obviously. I could easily write a little something to strip out the KB, but was hoping there was another way to get those figures.
Secondly... has anyone seen a stored procedure/code/etc. that just calculates the largest/smallest/average row size for a table? I haven't been able to find anything anywhere...
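On 2005 or later, sys.dm_db_index_physical_stats reports row-size statistics directly, and the DMVs return proper integers, so there are no KB strings to strip; a sketch with a hypothetical table name:

SELECT OBJECT_NAME(object_id) AS table_name,
       index_id,
       avg_record_size_in_bytes,
       min_record_size_in_bytes,
       max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.MyTable'), NULL, NULL, 'DETAILED')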
I am currently cleaning up my database to get its total size down and am not sure how nvarchar and varchar work exactly.
When defining the length of a varchar or nvarchar in Enterprise Manager, will that affect the size of the entry (as far as data size) no matter what the length of the entry is? In other words, will there be a difference in data size for an entry with a length of 4 characters with a definition of varchar(4) versus an entry with a length of 4 characters with a definition of varchar(50)?
****If there is no difference, is there any reason to try to best-guess the size to give nvarchar or varchar columns? It would seem easier to just define the lengths of columns which need variable lengths as 200 or 400, just to save time in not trying to best-guess what the size might be...*****
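There is no difference in data size: varchar and nvarchar store the actual data plus 2 bytes of length overhead, regardless of the declared maximum, which is easy to verify:

DECLARE @a varchar(4), @b varchar(50)
SET @a = 'abcd'
SET @b = 'abcd'
SELECT DATALENGTH(@a) AS a_bytes, DATALENGTH(@b) AS b_bytes   -- both return 4

That said, the declared length still matters elsewhere (index keys are capped at 900 bytes, for example), so a realistic maximum is still worth choosing.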
Hi, I am looking to run a query to get the sizes of the tables in my SQL 7 DB. I know I can access the info in Enterprise Manager, under "Tables & Indexes", but I need to get this info via a query. I need rows and size. I figured out how to get rows through the sys tables:

select sysobjects.name, sysindexes.rows
from sysobjects, sysindexes
where sysobjects.name = sysindexes.name and xtype = 'U'
Is the size of each table stored in a sys table as well? I can't find it.
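It is: for indid 0 or 1, sysindexes also carries reserved and used page counts for the whole table. A sketch that joins on id, which is a bit more reliable than joining on name:

select o.name, i.rows,
       i.reserved * 8 as reserved_kb,   -- pages are 8 KB
       i.used * 8 as used_kb
from sysobjects o
join sysindexes i on i.id = o.id
where o.xtype = 'U' and i.indid in (0, 1)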
Hey all, got a little problem. I have 2 matching tables on different servers with the EXACT same column layout and data (the tables are being replicated with MSSQL7), and one table is 200MB while the other is 2000MB. I'm running MSSQL7 SP2. Any ideas?
Hi, my log files are growing like anything; one of my log files is 20GB in size. How do I reduce the log file size, and if I run a DBCC command, will it come back? Please tell me how to find the free space and reduce the log sizes. Even after taking backups, my log file sizes are not reducing.
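The usual sequence (with hypothetical names) is to back up the log so the inactive portion becomes reusable, and only then shrink the physical file; DBCC SQLPERF reports the free space per log:

DBCC SQLPERF (LOGSPACE)                               -- log size and percent used, per database
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn'
DBCC SHRINKFILE (MyDb_Log, 1024)                      -- target size in MB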
I have inherited a number of databases which were substantially oversized when they were set up. I'd like to reduce both the log and database files to be smaller than their original sizes. What's the easiest way to do this? If anyone has any experience of doing this, please reply.
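The usual tool is DBCC SHRINKFILE, run per file with hypothetical logical names below; be aware that shrinking data files fragments indexes, so a rebuild afterwards is worth considering:

DBCC SHRINKFILE (MyDb_Data, 2048)            -- shrink toward a 2 GB target (size in MB)
DBCC SHRINKFILE (MyDb_Log, 512)
-- or release only the unused space at the end of a file:
DBCC SHRINKFILE (MyDb_Data, TRUNCATEONLY)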